Motivated by the recent experimental progress on the $R_{D^{(\ast)}}$ and $R_{K^{(\ast)}}$ anomalies in $B$-decays, we consider an extension of the Standard Model by a single vector leptoquark field. We study how one can achieve the required lepton flavour non-universality, starting from a priori universal gauge couplings. While the unitary coupling flavour structure, induced by the mass misalignment of quarks and leptons after $SU(2)_L$ breaking, does not allow compliance with stringent bounds from lepton flavour violating processes, we find that effectively non-unitary couplings, due to mixings with heavy vector-like fermions, hold the key to simultaneously addressing the $R_{K^{(\ast)}}$ and $R_{D^{(\ast)}}$ anomalies. Furthermore, the expected near-future progress in sensitivity to charged lepton flavour violating observables should allow probing a large region of the preferred parameter space in this class of vector leptoquark models.
arxiv:2012.06315
This paper presents a comprehensive study of linear-quadratic (LQ) mean field games (MFGs) in Hilbert spaces, generalizing the classic LQ MFG theory to scenarios involving $N$ agents with dynamics governed by infinite-dimensional stochastic equations. In this framework, both state and control processes of each agent take values in separable Hilbert spaces. All agents are coupled through the average state of the population, which appears in their linear dynamics and quadratic cost functional. Specifically, the dynamics of each agent incorporates an infinite-dimensional noise, namely a $Q$-Wiener process, and an unbounded operator. The diffusion coefficient of each agent is stochastic, involving the state, control, and average state processes. We first study the well-posedness of a system of $N$ coupled semilinear infinite-dimensional stochastic evolution equations, establishing the foundation of MFGs in Hilbert spaces. We then specialize to the $N$-player LQ games described above and study the asymptotic behaviour as the number of agents, $N$, approaches infinity. We develop an infinite-dimensional variant of the Nash certainty equivalence principle and characterize a unique Nash equilibrium for the limiting MFG. Finally, we study the connections between the $N$-player game and the limiting MFG, demonstrating that the empirical average state converges to the mean field and that the resulting limiting best-response strategies form an $\epsilon$-Nash equilibrium for the $N$-player game in Hilbert spaces.
arxiv:2403.01012
We study when an additive mapping preserving orthogonality between two complex inner product spaces is automatically complex-linear or conjugate-linear. Concretely, let $H$ and $K$ be complex inner product spaces with $\dim(H) \geq 2$, and let $A : H \to K$ be an additive map preserving orthogonality. We obtain that $A$ is zero or a positive scalar multiple of a real-linear isometry from $H$ into $K$. We further prove that the following statements are equivalent: $(a)$ $A$ is complex-linear or conjugate-linear. $(b)$ For every $z \in H$ we have $A(iz) \in \{\pm i A(z)\}$. $(c)$ There exists a non-zero point $z \in H$ such that $A(iz) \in \{\pm i A(z)\}$. $(d)$ There exists a non-zero point $z \in H$ such that $i A(z) \in A(H)$. The mapping $A$ is neither complex-linear nor conjugate-linear if, and only if, there exists a non-zero $x \in H$ such that $i A(x) \notin A(H)$ (equivalently, for every non-zero $x \in H$, $i A(x) \notin A(H)$). Among the consequences we show that, under the hypotheses above, the mapping $A$ is automatically complex-linear or conjugate-linear if $A$ has dense range, or if $H$ and $K$ are finite dimensional with $\dim(K) < 2 \dim(H)$.
arxiv:2503.16341
In this article, we consider the heterodinuclear complex [Ni(dpt)(H$_2$O)Cu(pba)]$\cdot$2H$_2$O [pba = 1,3-propylenebis(oxamato) and dpt = bis-(3-aminopropyl)amine], realized through the theoretical model of mixed spin-(1/2, 1) coupled via Heisenberg interaction. We study the behaviours of thermal quantum correlations of the above material via measurement-induced nonlocality (MIN) based on the Hilbert-Schmidt norm and fidelity. We observe that the quantum correlation measures increase with the magnetic field in an unconventional way. The role of system parameters is also brought out at thermal equilibrium. The highlight of the results is that we are able to show the existence of room-temperature quantum correlation using fidelity-based MIN, whereas the entanglement ceases to exist at 141 K.
arxiv:2211.16230
We demonstrate the existence of a two-dimensional anomalous Floquet insulator (AFI) phase: an interacting (periodically-driven) non-equilibrium topological phase of matter with no counterpart in equilibrium. The AFI is characterized by a many-body localized bulk, exhibiting nontrivial micromotion within a driving period, and delocalized (thermalizing) chiral states at its boundaries. For a geometry without edges, we argue analytically that the bulk may be many-body localized in the presence of interactions, deriving conditions where stability is expected. We investigate the interplay between the thermalizing edge and the localized bulk via numerical simulations of an AFI in a geometry with edges. We find that non-uniform particle density profiles remain stable in the bulk up to the longest timescales that we can access, while the propagating edge states persist and thermalize, despite being coupled to the bulk. These findings open the possibility of observing quantized edge transport in interacting systems at high temperature. The analytical approach introduced in this paper can be used to study the stability of other anomalous Floquet phases.
arxiv:1712.02789
Mitsch's natural partial order on the semigroup of binary relations is here characterised by equations in the theory of relation algebras. The natural partial order has a complex relationship with the compatible partial order of inclusion, which is explored by means of a sublattice of the lattice of preorders on the semigroup. The corresponding sublattice for the partition monoid is also described.
arxiv:1208.6366
As the most energetic explosions in the universe, gamma-ray bursts (GRBs) are commonly believed to be generated by relativistic jets. Recent observational evidence suggests that the jets producing GRBs are likely to have a structured nature. Some studies have suggested that non-axisymmetric structured jets may be formed through internal non-uniform magnetic dissipation processes or the precession of the central engine. In this study, we analyze the potential characteristics of GRB afterglows within the framework of non-axisymmetric structured jets. We simplify the profile of the asymmetric jet as a step function of the azimuth angle, dividing the entire jet into individual elements. By considering specific cases, we demonstrate that the velocity, energy, and line-of-sight direction of each jet element can greatly affect the behaviour of the overall light curve. The radiative contributions from multiple elements may lead to the appearance of multiple distinct peaks or plateaus in the light curve. Furthermore, fluctuations in the rising and declining segments of each peak can be observed. These findings establish a theoretical foundation for future investigations into the structural characteristics of GRBs by leveraging GRB afterglow data.
arxiv:2310.20442
Two retractions $M$ and $N$ on convex cones $\bf M$ and $\bf N$, respectively, of a real vector space $X$ are called mutually polar if $M + N = I$ and $MN = NM = 0$. In this note it is shown that if the cones $\bf M$ and $\bf N$ are generating and $\sigma$-monotone complete, and $M$ and $N$ are $\sigma$-monotone continuous, then the subadditivity of $M$ and $N$ (with respect to the order relations induced by $\bf M$ and $\bf N$, respectively) implies that $\bf M$ and $\bf N$ are lattice cones. $(X, \bf M)$ and $(X, \bf N)$ become $\sigma$-Dedekind complete Riesz spaces; $M$ and $-N$ are the positive part and negative part mappings, respectively, in $(X, \bf M)$; $N$ and $-M$ are the positive part and negative part mappings, respectively, in $(X, \bf N)$.
arxiv:2304.04594
We pose a fundamental question in computational learning theory: can we efficiently test whether a training set satisfies the assumptions of a given noise model? This question has remained unaddressed despite decades of research on learning in the presence of noise. In this work, we show that this task is tractable and present the first efficient algorithm to test various noise assumptions on the training data. To model this question, we extend the recently proposed testable learning framework of Rubinfeld and Vasilyan (2023) and require a learner to run an associated test that satisfies the following two conditions: (1) whenever the test accepts, the learner outputs a classifier along with a certificate of optimality, and (2) the test must pass for any dataset drawn according to a specified modeling assumption on both the marginal distribution and the noise model. We then consider the problem of learning halfspaces over Gaussian marginals with Massart noise (where each label can be flipped with probability less than $1/2$ depending on the input features), and give a fully-polynomial time testable learning algorithm. We also show a separation between the classical setting of learning in the presence of structured noise and testable learning. In fact, for the simple case of random classification noise (where each label is flipped with fixed probability $\eta = 1/2$), we show that testable learning requires super-polynomial time while classical learning is trivial.
arxiv:2501.09189
We derive a weighted $L^2$-estimate of the Witten spinor in a complete Riemannian spin manifold $(M^n, g)$ of non-negative scalar curvature which is asymptotically Schwarzschild. The interior geometry of $M$ enters this estimate only via the lowest eigenvalue of the square of the Dirac operator on a conformal compactification of $M$.
arxiv:math/0501195
Galactic collisions are normally modeled in a CDM model by assuming the DM consists of a small number of very massive objects. This note shows that the behaviour of a CDM halo during collisions depends critically on the mass of the particles that make it up, and in particular, all halo particles below a certain characteristic mass are likely to be lost.
arxiv:astro-ph/9607090
In mathematics, a degenerate case is a limiting case of a class of objects which appears to be qualitatively different from (and usually simpler than) the rest of the class; "degeneracy" is the condition of being a degenerate case. The definitions of many classes of composite or structured objects often implicitly include inequalities. For example, the angles and the side lengths of a triangle are supposed to be positive. The limiting cases, where one or several of these inequalities become equalities, are degeneracies. In the case of triangles, one has a degenerate triangle if at least one side length or angle is zero. Equivalently, it becomes a "line segment". Often, the degenerate cases are the exceptional cases where changes to the usual dimension or the cardinality of the object (or of some part of it) occur. For example, a triangle is an object of dimension two, and a degenerate triangle is contained in a line, which makes its dimension one. This is similar to the case of a circle, whose dimension shrinks from two to zero as it degenerates into a point. As another example, the solution set of a system of equations that depends on parameters generally has a fixed cardinality and dimension, but cardinality and/or dimension may be different for some exceptional values, called degenerate cases. In such a degenerate case, the solution set is said to be degenerate. For some classes of composite objects, the degenerate cases depend on the properties that are specifically studied. In particular, the class of objects may often be defined or characterized by systems of equations. In most scenarios, a given class of objects may be defined by several different systems of equations, and these different systems of equations may lead to different degenerate cases, while characterizing the same non-degenerate cases.
This may be the reason for which there is no general definition of degeneracy, despite the fact that the concept is widely used and defined (if needed) in each specific situation. A degenerate case thus has special features which make it non-generic, or a special case. However, not all non-generic or special cases are degenerate. For example, right triangles, isosceles triangles and equilateral triangles are non-generic and non-degenerate. In fact, degenerate cases often correspond to singularities, either in the object
https://en.wikipedia.org/wiki/Degeneracy_(mathematics)
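The triangle example from the excerpt above lends itself to a concrete check. The following sketch (my own illustration, not from the article) tests degeneracy of a planar triangle via the signed area: the triangle collapses to a line segment (or point) exactly when its three vertices are collinear, i.e. when the area vanishes.

```python
def signed_area(p, q, r):
    """Twice the signed area of triangle pqr, via the 2D cross product."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def is_degenerate(p, q, r, eps=1e-12):
    """A planar triangle is degenerate when its area is (numerically) zero,
    i.e. its dimension drops from two to at most one."""
    return abs(signed_area(p, q, r)) <= eps

# A proper triangle versus one collapsed onto a line:
assert not is_degenerate((0, 0), (1, 0), (0, 1))
assert is_degenerate((0, 0), (1, 1), (2, 2))  # three collinear points
```

The `eps` tolerance is a practical concession to floating point; in exact arithmetic the condition is simply `signed_area == 0`.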
In this study we investigate the structural and chemical changes of monatomic CoO$_2$ chains grown self-organized on the Ir(100) surface [P. Ferstl et al., PRL 117, 046101 (2016)] and on Pt(100) under reducing and oxidizing conditions. By a combination of quantitative low-energy electron diffraction, scanning tunnelling microscopy, and density functional theory we show that the cobalt oxide wires are completely reduced by H$_2$ at temperatures above 320 K and a 3x1-ordered Ir$_2$Co or Pt$_2$Co surface alloy is formed. Depending on temperature, the surface alloy on Ir(100) is either hydrogen covered (T < 400 K) or clean, and eventually undergoes an irreversible order-disorder transition at about 570 K. The Pt$_2$Co surface alloy disorders with the desorption of hydrogen, whereby Co submerges into subsurface sites. Vice versa, applying stronger oxidants than O$_2$, such as NO$_2$, leads to the formation of CoO$_3$ chains on Ir(100) in a 3x1 superstructure. On Pt(100) such a CoO$_3$ phase could not be prepared so far, which, however, is due to the UHV conditions of our experiments. As revealed by theory, this phase will become stable in a regime of higher pressure. In general, the structures can be reversibly switched on both surfaces using the respective agents O$_2$, NO$_2$ and H$_2$.
arxiv:1810.08574
In this manuscript, we investigate the oscillatory behaviour of the anisotropy in the diagonal Bianchi-I spacetimes. Our starting point is a simplification of Einstein's equations using only observable or physical variables. As a consequence, we are able to: (a) prove general results concerning the existence of oscillations of the anisotropy in the primordial and the late-time universe. For instance, in the expanding scenario, we show that a past weakly Mixmaster behaviour (oscillations as we approach the Kasner solutions) might appear even with no violation of the usual energy conditions, while in the future, the pulsation (oscillations around isotropic solutions) seems to be most favored; (b) determine a large scheme for deriving classes of physically motivated exact solutions, and we give some (including the general barotropic perfect fluid and the magnetic one); (c) understand the physical conditions for the occurrence of isotropization or anisotropization during the cosmological evolution; (d) understand how anisotropy and energy density are converted one into another. In particular, we call attention to the presence of a residue in the energy density in a late-time isotropic universe coming from its past anisotropic behaviour.
arxiv:2104.00470
The main contribution of this thesis is a Tannaka duality theorem for proper Lie groupoids. This result is obtained by replacing the category of smooth vector bundles over the base manifold of a Lie groupoid with a larger category, the category of smooth Euclidean fields, and by considering smooth actions of Lie groupoids on smooth Euclidean fields. The notion of smooth Euclidean field introduced here is the smooth, finite-dimensional analogue of the familiar notion of continuous Hilbert field. In the second part of the thesis, ordinary smooth representations of Lie groupoids on smooth vector bundles are systematically studied from the point of view of Tannaka duality, and various results are obtained in this direction.
arxiv:0809.3394
We prove the existence of entire solutions of the Monge-Amp\`ere equations with prescribed asymptotic behavior at infinity of the plane, a problem left open by Caffarelli-Li in 2003. The special difficulty of the problem in dimension two is due to the global logarithmic term in the asymptotic expansion of solutions at infinity. Furthermore, we give a PDE proof of the characterization of the space of solutions of the Monge-Amp\`ere equation $\det \nabla^2 u = 1$ with $k \ge 2$ singular points, which was established by G\'alvez-Mart\'inez-Mira in 2005. We also obtain the existence in higher-dimensional cases with general right-hand sides.
arxiv:1809.05421
Within a chiral model which provides a good description of the properties of the rho and a_1 mesons in vacuum, it is shown that, to order T^2, the rho- and a_1-meson masses remain constant in the chiral limit, even if at tree level they are proportional to the chiral condensate, sigma_0. Numerically, the temperature dependence of the masses turns out to be small also for realistic parameter sets and high temperatures. The weak temperature dependence of the masses is consistent with the Eletsky-Ioffe mixing theorem, and traces of mixing effects can be seen in the spectral function of the vector correlator at finite temperature.
arxiv:nucl-th/0110005
Within $Z^\prime$ models, neutral meson mixing severely constrains beyond the Standard Model (SM) effects in flavour changing neutral current (FCNC) processes. However, in certain regions of the $Z^\prime$ parameter space, the contributions to meson mixing observables become negligibly small even for large $Z^\prime$ couplings. While this a priori allows for significant new physics (NP) effects in FCNC decays, we discuss how large $Z^\prime$ couplings in one neutral meson sector can generate effects in meson mixing observables of other neutral mesons, through correlations stemming from $\text{SU(2)}_L$ gauge invariance and through renormalization group (RG) effects in the SM effective field theory (SMEFT). This is illustrated with the example of $B_s^0 - \bar B_s^0$ mixing, which in the presence of both left- and right-handed $Z^\prime bs$ couplings $\Delta_L^{bs}$ and $\Delta_R^{bs}$ remains SM-like for $\Delta_R^{bs} \approx 0.1\, \Delta_L^{bs}$. We show that in this case, large $Z^\prime bs$ couplings generate effects in $D$ and $K$ meson mixing observables, but that the $D$ and $K$ mixing constraints and the relation between $\Delta_R^{bs}$ and $\Delta_L^{bs}$ are fully compatible with a lepton flavour universality (LFU) conserving explanation of the most recent $b \to s \ell^+ \ell^-$ experimental data without violating other constraints like $e^+ e^- \to \ell^+ \ell^-$ scattering. Assuming LFU, invariance under the $\text{SU(2)}_L$ gauge symmetry then leads to correlated effects in $b \to s \nu \bar\nu$ observables presently studied intensively by the Belle II experiment, which allow probing the $Z^\prime$ parameter space that is opened up by the vanishing NP contributions to $B_s^0 - \bar B_s^0$ mixing. In this scenario
arxiv:2412.14254
The highly sensitive, phase- and frequency-resolved detection of microwave electric fields is of central importance for diverse fields ranging from astronomy and remote sensing to communication and microwave quantum technology. However, present quantum sensing of microwave electric fields primarily relies on atom-based electrometers enabling only amplitude measurement. Moreover, the best sensitivity of atom-based electrometers is limited by photon shot noise to a few $\mu$Vcm$^{-1}$Hz$^{-1/2}$: while going beyond is in principle possible by using squeezed light or a Schr\"odinger-cat state, the former is very challenging for atomic experiments while the latter is infeasible in all but very small atomic systems. Here we report a novel microwave electric field quantum sensor, termed the quantum superhet, which, for the first time, enables experimental measurement of phase and frequency, and makes a sensitivity of a few tens of nVcm$^{-1}$Hz$^{-1/2}$ readily accessible for current experiments. This sensor is based on microwave-dressed Rydberg atoms and a tailored optical spectrum, with very favorable scalings on sensitivity gains. We experimentally achieve a sensitivity of $55$ nVcm$^{-1}$Hz$^{-1/2}$, with the minimum detectable field being three orders of magnitude smaller than that of existing quantum electrometers. We also measure phase and frequency, reaching a frequency accuracy of a few tens of $\mu$Hz for microwave fields of just a few tens of nVcm$^{-1}$. Our technique can also be applied to sense electric fields at terahertz or radio frequency. This work is a first step towards realizing the long sought-after electromagnetic-wave quantum sensors with quantum projection noise limited sensitivity, promising broad applications such as in radio telescopes, terahertz communication and quantum control.
arxiv:1902.11063
With today's abundant streams of data, the only constant we can rely on is change. For stream classification algorithms, it is necessary to adapt to concept drift. This can be achieved by monitoring the model error, and triggering countermeasures as changes occur. In this paper, we propose a drift detection mechanism that fits a beta distribution to the model error, and treats abnormal behavior as drift. It works with any given model, leverages prior knowledge about this model, and allows setting application-specific confidence thresholds. Experiments confirm that it performs well, in particular when drift occurs abruptly.
arxiv:1811.10900
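The mechanism this abstract describes, fitting a beta distribution to recent error rates and flagging improbably large errors as drift, can be sketched in a few lines. This is a minimal illustration under my own assumptions (method-of-moments fit, a z-score-style threshold on the fitted Beta), not the paper's exact procedure or thresholds.

```python
import math

def fit_beta(rates):
    """Method-of-moments estimates (alpha, beta) for error rates in (0, 1)."""
    m = sum(rates) / len(rates)
    v = sum((r - m) ** 2 for r in rates) / len(rates)
    common = m * (1 - m) / v - 1  # assumes v > 0, i.e. rates are not constant
    return m * common, (1 - m) * common

def is_drift(rates, new_rate, z=3.0):
    """Flag drift if new_rate lies more than z std devs above the Beta mean."""
    a, b = fit_beta(rates)
    mean = a / (a + b)
    std = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return new_rate > mean + z * std

# Error rates hovering around 10%: a small wiggle is fine, a jump is drift.
rates = [0.08, 0.10, 0.12, 0.09, 0.11, 0.10, 0.095, 0.105]
assert not is_drift(rates, 0.12)  # ordinary fluctuation
assert is_drift(rates, 0.30)      # abnormally high error: flag drift
```

The confidence threshold `z` plays the role of the application-specific threshold mentioned in the abstract; a percentile cutoff on the fitted Beta would be the more faithful variant.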
Autonomous robots, autonomous vehicles, and humans wearing mixed-reality headsets require accurate and reliable tracking services for safety-critical applications in dynamically changing real-world environments. However, existing tracking approaches, such as simultaneous localization and mapping (SLAM), do not adapt well to environmental changes and boundary conditions despite extensive manual tuning. On the other hand, while deep learning-based approaches can better adapt to environmental changes, they typically demand substantial data for training and often lack flexibility in adapting to new domains. To solve this problem, we propose leveraging the neurosymbolic program synthesis approach to construct adaptable SLAM pipelines that integrate the domain knowledge from traditional SLAM approaches while leveraging data to learn complex relationships. While the approach can synthesize end-to-end SLAM pipelines, we focus on synthesizing the feature extraction module. We first devise a domain-specific language (DSL) that can encapsulate domain knowledge on the important attributes for feature extraction and the real-world performance of various feature extractors. Our neurosymbolic architecture then undertakes adaptive feature extraction, optimizing parameters via learning while employing symbolic reasoning to select the most suitable feature extractor. Our evaluations demonstrate that our approach, neurosymbolic feature extraction (nFEX), yields higher-quality features. It also reduces the pose error observed for the state-of-the-art baseline feature extractors ORB and SIFT by up to 90% and up to 66%, respectively, thereby enhancing the system's efficiency and adaptability to novel environments.
arxiv:2407.06889
Using the recursive Green's function technique, we study the coherent electron conductance of a quantum point contact in the presence of a scanning probe microscope tip. Images of the coherent fringe inside a quantum point contact for different widths are obtained. It is found that the conductance of a specific channel is reduced while other channels are not affected as long as the tip is located at the positions corresponding to that channel. Moreover, the coherent fringe is smoothed out by increasing the temperature or the voltage across the device. Our results are consistent with the experiments reported by Topinka et al. [Science 289, 2323 (2000)].
arxiv:cond-mat/0208508
Supervised approaches for neural abstractive summarization require large annotated corpora that are costly to build. We present a French meeting summarization task where reports are predicted based on the automatic transcription of the meeting audio recordings. In order to build a corpus for this task, it is necessary to obtain the (automatic or manual) transcription of each meeting, and then to segment and align it with the corresponding manual report to produce examples suitable for training. On the other hand, we have access to a very large amount of unaligned data, in particular reports without corresponding transcription. Reports are professionally written and well formatted, making pre-processing straightforward. In this context, we study how to take advantage of this massive amount of unaligned data using two approaches: (i) self-supervised pre-training using a target-side denoising encoder-decoder model; (ii) back-summarization, i.e. reversing the summarization process by learning to predict the transcription given the report, in order to align single reports with generated transcriptions, and using this synthetic dataset for further training. We report large improvements compared to the previous baseline (trained on aligned data only) for both approaches on two evaluation sets. Moreover, combining the two gives even better results, outperforming the baseline by a large margin of +6 ROUGE-1 and ROUGE-L and +5 ROUGE-2 on two evaluation sets.
arxiv:2007.15296
The paper describes gamification, virality and retention in a freemium educational online platform with 40,000 users as an example. Relationships between virality and retention parameters as measurable metrics are calculated and discussed using real examples. Virality and monetization can be both competing and complementary mechanisms for system growth. The K-growth factor, which combines both virality and retention, is proposed as the metric of overall freemium system performance in terms of user base growth. This approach can be tested using a small number of users to assess the system's potential performance. If the K-growth factor is less than one, the product needs further development. If the K-growth factor is greater than one, the system retains existing and attracts new users, and thus a large-scale market launch can be successful.
arxiv:1412.5401
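The "greater than one means growth" rule above can be made concrete. The sketch below is one simple way to combine retention and virality into a single per-period multiplier; the paper's exact definition of the K-growth factor may differ, so treat the formula and parameter names as my assumptions.

```python
def k_growth(retention, invites_per_user, invite_conversion):
    """Per-period user-base multiplier: retained users plus virally
    acquired ones. A value > 1 means the user base grows each period."""
    return retention + invites_per_user * invite_conversion

def simulate(users, k, periods):
    """User count after `periods` applications of the growth factor."""
    for _ in range(periods):
        users *= k
    return users

# 70% of users return each period; each user sends 2 invites, 20% convert:
k = k_growth(retention=0.7, invites_per_user=2.0, invite_conversion=0.2)
# k = 0.7 + 0.4 = 1.1 > 1, so the base compounds: 100 -> 110 -> 121 -> ...
```

With `k < 1` the same loop decays toward zero, matching the abstract's criterion that the product then needs further development before a large-scale launch.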
Theory-based AI research has had a hard time recently, and the aim here is to propose a model of what LLMs are actually doing when they impress us with their language skills. The model integrates three established theories of human decision-making from philosophy, sociology, and computer science. The paper starts with the collective understanding of reasoning from the early days of AI research, primarily because that model is how we humans think we think, and is the most accessible. It then describes what is commonly thought of as "reactive systems", which is the position taken by many philosophers and indeed many contemporary AI researchers. The third component of the proposed model is from sociology and, although not flattering to our modern ego, provides an explanation for a puzzle that for many years has occupied those of us working on conversational user interfaces.
arxiv:2402.08403
Multi-view stereo is an important research task in computer vision, yet it remains challenging. In recent years, deep learning-based methods have shown superior performance on this task. Cost volume pyramid network-based methods, which progressively refine the depth map in a coarse-to-fine manner, have yielded promising results while consuming less memory. However, these methods fail to take full consideration of the characteristics of the cost volumes in each stage, leading them to adopt similar range search strategies for each cost volume stage. In this work, we present a novel cost volume pyramid based network with different searching strategies for multi-view stereo. By choosing different depth range sampling strategies and applying adaptive unimodal filtering, we are able to obtain more accurate depth estimation in low resolution stages and iteratively upsample the depth map to arbitrary resolution. We conducted extensive experiments on both the DTU and BlendedMVS datasets, and the results show that our method outperforms most state-of-the-art methods.
arxiv:2207.12032
We devise an ab initio formalism for the quantum dynamics of Auger decay by laser-dressed atoms which are inner-shell ionized by extreme ultraviolet (XUV) light. The optical dressing laser is assumed to be sufficiently weak such that ground-state electrons are neither excited nor ionized by it. However, the laser has a strong effect on continuum electrons, which we describe in strong-field approximation with Volkov waves. The XUV light pulse has a low peak intensity and its interaction is treated as a one-photon process. The quantum dynamics of the inner-shell hole creation with subsequent Auger decay is given by equations of motion (EOMs). For this paper, the EOMs are simplified in terms of an essential-states model which is solved analytically and averaged over magnetic subshells. We apply our theory to the M_{4,5}N_1N_{2,3} Auger decay of a 3d hole in a krypton atom. The orbitals are approximated by scaled hydrogenic wave functions. A single attosecond pulse produces 3d vacancies which Auger decay in the presence of an 800 nm laser with an intensity of 10^13 W/cm^2. We compute the Auger electron spectrum and assess the convergence of the various quantities involved.
arxiv:0905.3756
of the subject. The rules (which included $a + 0 = a$ and $a \times 0 = 0$) were all correct, with one exception: $\frac{0}{0} = 0$. Later in the chapter, he gave the first explicit (although still not completely general) solution of the quadratic equation $ax^2 + bx = c$: "To the absolute number multiplied by four times the [coefficient of the] square, add the square of the [coefficient of the] middle term; the square root of the same, less the [coefficient of the] middle term, being divided by twice the [coefficient of the] square is the value." This is equivalent to: $x = \frac{\sqrt{4ac + b^2} - b}{2a}$. Also in chapter 18, Brahmagupta was able to make progress in finding (integral) solutions of Pell's equation, $x^2 - ny^2 = 1$, where $n$ is a nonsquare integer. He did this by discovering the following identity. Brahmagupta's identity: $(x^2 - ny^2)(x'^2 - ny'^2) = (xx' + nyy')^2 - n(xy' + x'y)^2$, which was a generalisation of an earlier identity of Diophantus. Brahmagupta used his identity to prove the following lemma. Lemma (Brahmagupta): if $x = x_1$, $y = y_1$ is a solution of $x^2 - ny^2 = k_1$
https://en.wikipedia.org/wiki/Indian_mathematics
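brahmagupta's identity above can be checked directly with integer arithmetic. the sketch below composes two solutions of $ x ^ 2 - ny ^ 2 = k $ and reproduces the classical step from the pell solution ( 3, 1 ) for n = 8 to the next solution; the helper name is ours, chosen for illustration:

```python
def brahmagupta_compose(x, y, u, v, n):
    """compose (x, y) and (u, v) via brahmagupta's identity:
    (x^2 - n y^2)(u^2 - n v^2) = (xu + n yv)^2 - n (xv + uy)^2."""
    return x * u + n * y * v, x * v + u * y

# the identity holds for arbitrary integers
for n in range(2, 10):
    for x, y, u, v in [(3, 1, 7, 2), (5, 2, 9, 4), (1, 1, 2, 3)]:
        X, Y = brahmagupta_compose(x, y, u, v, n)
        assert (x * x - n * y * y) * (u * u - n * v * v) == X * X - n * Y * Y

# composing the pell solution (3, 1) of x^2 - 8 y^2 = 1 with itself
# yields the next solution:
print(brahmagupta_compose(3, 1, 3, 1, 8))   # (17, 6), and 17^2 - 8*6^2 = 1
```

this composition is exactly how brahmagupta generated new pell solutions from known ones.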
monitored many - body systems can exhibit a phase transition between entangling and disentangling dynamical phases by tuning the strength of measurements made on the system as it evolves. this phenomenon is called the measurement - induced phase transition ( mipt ). understanding the properties of the mipt is a prominent challenge for both theory and experiment at the intersection of many - body physics and quantum information. realizing the mipt experimentally is particularly challenging due to the postselection problem, which demands a number of experimental realizations that grows exponentially with the number of measurements made during the dynamics. proposed approaches that circumvent the postselection problem typically rely on a classical decoding process that infers the final state based on the measurement record. but the complexity of this classical process generally also grows exponentially with the system size unless the dynamics is restricted to a fine - tuned set of unitary operators. in this work we overcome these difficulties. we construct a tree - shaped quantum circuit whose nodes are haar - random unitary operators followed by weak measurements of tunable strength. for these circuits, we show that the mipt can be detected without postselection using only a simple classical decoding process whose complexity grows linearly with the number of qubits. our protocol exploits the recursive structure of tree circuits, which also enables a complete theoretical description of the mipt, including an exact solution for its critical point and scaling behavior. we experimentally realize the mipt on quantinuum ' s h1 - 1 trapped - ion quantum computer and show that the experimental results are precisely described by theory. our results close the gap between analytical theory and postselection - free experimental observation of the mipt.
arxiv:2502.01735
a field of ~ 38 ' x38 ' around the supernova remnant ( snr ) g349. 7 + 0. 2 has been surveyed in the co j = 1 - 0 transition with the 12 meter telescope of the nrao, using the on - the - fly technique. the resolution of the observations is 54 ". we have found that this remnant is interacting with a small co cloud which, in turn, is part of a much larger molecular complex, which we call the " large co shell ". the large co shell has a diameter of about 100 pc, an h _ 2 mass of 930, 000 solar masses, and a density of 35 cm - 3. we investigate the origin of this structure and suggest, as a suitable hypothesis, that an old supernova explosion occurred about 4 million years ago. analyzing the interaction between g349. 7 + 0. 2 and the large co shell, it is possible to determine that the shock front currently driven into the molecular gas is a non - dissociative ( c - type ) shock, in agreement with the presence of oh 1720 mhz masers. the positional and kinematical coincidence among one of the co clouds that constitute the large co shell, an iras point - like source and an ultracompact h ii region indicates the presence of a recently formed star. we suggest that the formation of this star was triggered during the expansion of the large co shell, and that the same expansion may also have created the progenitor star of g349. 7 + 0. 2. the large co shell would then be one of the few observational examples of supernova - induced star formation.
arxiv:astro-ph/0010041
the viscous slowing down of supercooled liquids that leads to glass formation can be considered a classic, and certainly a thoroughly studied, example of a " jamming process ". in this review, we stress the distinctive features characterizing the phenomenon. we also discuss the main theoretical approaches, with an emphasis on the concepts ( free volume, dynamic freezing and mode - coupling approximations, configurational entropy and energy landscape, frustration ) that could be useful in other areas of physics where jamming processes are encountered.
arxiv:cond-mat/0003368
the interactive segmentation task consists in the creation of object segmentation masks based on user interactions. the most common way to guide a model towards producing a correct segmentation consists in clicks on the object and background. the recently published segment anything model ( sam ) supports a generalized version of the interactive segmentation problem and has been trained on an object segmentation dataset which contains 1. 1b masks. despite being trained extensively and with the explicit purpose of serving as a foundation model, we show significant limitations of sam when it is applied for interactive segmentation on novel domains or object types. on the datasets used, sam displays a failure rate $ \ text { fr } _ { 30 } @ 90 $ of up to $ 72. 6 \ % $. since such foundation models should remain immediately applicable, we present a framework that adapts sam on the fly during usage. for this we leverage the user interactions and masks which are produced during the interactive segmentation process. we use this information to generate pseudo - labels, which we use to compute a loss function and optimize a part of the sam model. the presented method causes a relative reduction of up to $ 48. 1 \ % $ in the $ \ text { fr } _ { 20 } @ 85 $ and $ 46. 6 \ % $ in the $ \ text { fr } _ { 30 } @ 90 $ metrics.
arxiv:2404.08421
we show that the duality between channel capacity and data compression is retained when state information is available to the sender, to the receiver, to both, or to neither. we present a unified theory for eight special cases of channel capacity and rate distortion with state information, which also extends existing results to arbitrary pairs of independent and identically distributed ( i. i. d. ) correlated state information available at the sender and at the receiver, respectively. in particular, the resulting general formula for channel capacity assumes the same form as the generalized wyner - ziv rate distortion function.
arxiv:cs/0508050
in this work we propose a method for probing the chirality of nanoscale electromagnetic near fields utilizing the properties of a coherent superposition of free - electron vortex states in electron microscopes. electron beams optically modulated into vortices carry orbital angular momentum, thanks to which they are sensitive to the spatial phase distribution and topology of the investigated field. the sense of chirality of the studied specimen can be extracted from the spectra of the electron beam with nanoscale precision, owing to the short, picometer - scale de broglie wavelength of the electron beam. we present a detailed case study of the interaction of a coherent superposition of electron vortex states with the optical near field of a gold nanosphere illuminated by circularly polarized light as an example, and we examine the chirality sensitivity of electron vortex beams on intrinsically chiral plasmonic nanoantennae.
arxiv:2411.05579
pull - back transformations between heun and gauss hypergeometric equations give useful expressions of heun functions in terms of better understood hypergeometric functions. this article classifies, up to mobius automorphisms, the coverings p1 - to - p1 that yield pull - back transformations from hypergeometric to heun equations with at least one free parameter ( excluding the cases when the involved hypergeometric equation has cyclic or dihedral monodromy ). in all, 61 parametric hypergeometric - to - heun transformations are found, of maximal degree 12. among them, 28 pull - backs are compositions of smaller degree transformations between hypergeometric and heun functions. the 61 transformations are realized by 48 different belyi coverings ( though 2 coverings should be counted twice as their moduli field is quadratic ). the same belyi coverings appear in several other contexts. for example, 38 of the coverings appear in herfurtner ' s list of elliptic surfaces over p1 with four singular fibers, as their j - invariants. in passing, we demonstrate an elegant way to show that there are no coverings p1 - to - p1 with certain branching patterns.
arxiv:1204.2730
time reversal of acoustic waves can be achieved efficiently by the persistent control of excitations in a finite region of the system. the procedure, called time reversal mirror, is stable against inhomogeneities of the medium and has numerous applications in medical physics, oceanography and communications. as a first step in the study of this robustness, we apply the perfect inverse filter procedure, which accounts for the memory effects of the system. for the numerical evaluation of such procedures we developed the pair partitioning method for a system of coupled oscillators. the algorithm, inspired by the trotter strategy for quantum dynamics, obtains the dynamics of a chain of coupled harmonic oscillators by separating the system into pairs and applying a stroboscopic sequence that alternates the evolution of each pair. we analyze here the formal basis of the method and discuss its extension to include energy dissipation inside the medium.
arxiv:0802.2110
laryngeal cancer is a malignant disease with a high mortality rate in otorhinolaryngology, posing a significant threat to human health. traditionally, laryngologists visually inspect laryngeal cancer in laryngoscopic videos, which is quite time - consuming and subjective. in this study, we propose a novel automatic framework via 3d - large - scale pretrained models, termed 3d - lsptm, for laryngeal cancer detection. firstly, we collect 1, 109 laryngoscopic videos from the first affiliated hospital of sun yat - sen university with the approval of the ethics committee. then we utilize the 3d - large - scale pretrained models c3d, timesformer, and video - swin - transformer, with their merit of advanced video featurization, for laryngeal cancer detection with fine - tuning techniques. extensive experiments show that our proposed 3d - lsptm can achieve promising performance on the task of laryngeal cancer detection. particularly, 3d - lsptm with the video - swin - transformer backbone can achieve 92. 4 % accuracy, 95. 6 % sensitivity, 94. 1 % precision, and a 94. 8 % f _ 1 score.
arxiv:2409.01459
we study m - theory fivebranes wrapped on special lagrangian submanifolds ( $ \ s _ n $ ) in calabi - yau three - and fourfolds. when the m5 wraps a four - cycle, the resulting theory is a two - dimensional domain wall embedded in a three - dimensional bulk with four supercharges. the theory on the wall is specified in terms of the geometry of the cy manifold and the cycle $ \ s _ 4 $. it is chiral and anomalous ; however, the presence of a three - dimensional gravitational chern - simons term, with a coefficient that jumps when crossing the wall, allows the anomaly to be cancelled by inflow. kahler manifolds of special type, where the potential depends only on the real part of the complex coordinate, are shown to emerge as the target spaces of two - dimensional sigma - models when the m5 is wrapped on $ \ s _ 3 \ times s ^ 1 $, thus providing a physical realization of a recent symplectic construction by hitchin.
arxiv:hep-th/9906190
a set a of positive integers is called a perfect difference set if every nonzero integer has a unique representation as the difference of two elements of a. we construct dense perfect difference sets from dense sidon sets. as a consequence of this new approach, we prove that there exists a perfect difference set a such that a ( x ) > > x ^ { \ sqrt { 2 } - 1 - o ( 1 ) }. we also prove that there exists a perfect difference set a such that limsup _ { x \ to \ infty } a ( x ) / \ sqrt x \ geq 1 / \ sqrt 2.
arxiv:math/0609244
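the two notions above can be made concrete with small examples; the helpers below are ours for illustration. a sidon set has all pairwise differences distinct, and a finite analogue of a perfect difference set ( the planar difference set { 1, 2, 4 } modulo 7 ) represents every nonzero residue exactly once:

```python
def is_sidon(a):
    """a set is sidon iff all differences x - y (x != y) are pairwise distinct,
    equivalently iff all pairwise sums are distinct."""
    diffs = [x - y for x in a for y in a if x != y]
    return len(diffs) == len(set(diffs))

def difference_multiset_mod(a, m):
    """sorted multiset of nonzero differences of elements of a, modulo m."""
    return sorted((x - y) % m for x in a for y in a if x != y)

assert is_sidon([1, 2, 5, 11, 22])       # no repeated difference
assert not is_sidon([1, 2, 3])           # 2 - 1 == 3 - 2

# {1, 2, 4} mod 7: every nonzero residue is a difference exactly once
print(difference_multiset_mod([1, 2, 4], 7))   # [1, 2, 3, 4, 5, 6]
```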
precise energies of rovibrational states of the exotic hydrogen - like molecule $ ( dt \ mu ) xee $ are of importance for $ dt \ mu $ resonant formation, which is a key process in the muon - catalyzed fusion cycle. the effect of the internal structure and motion of the $ dt \ mu $ quasi - nucleus on energy levels is studied using the three - body description of the $ ( dt \ mu ) xee $ molecule based on the hierarchy of scales and corresponding energies of its constituent subsystems. for a number of rovibrational states of $ ( dt \ mu ) dee $ and $ ( dt \ mu ) tee $, the shifts and splittings of energy levels are calculated in the second order of the perturbation theory.
arxiv:physics/0403116
we formulated and implemented a procedure to generate aliasing - free excitation source signals. it uses a new antialiasing filter in the continuous time domain followed by an iir digital filter for response equalization. we introduced a cosine - series - based general design procedure for the new antialiasing function. we applied this new procedure to implement the antialiased fujisaki - ljungqvist model. we also applied it to revise our previous implementation of the antialiased fant - liljencrants model. a combination of these signals and a lattice implementation of the time varying vocal tract model provides a reliable and flexible basis to test f0 extractors and source aperiodicity analysis methods. matlab implementations of these antialiased excitation source models are available as part of our open source tools for speech science.
arxiv:1702.06724
let \ zeta be the intersection exponent of random walks in z ^ 3 and \ alpha be a positive real number. we construct a stochastic process from a simple random walk by erasing loops of length at most n ^ \ alpha. we will prove that for \ alpha < \ frac { 1 } { 1 + 2 \ zeta }, the limiting distribution is gaussian. for \ alpha > 2 the limiting distribution will be shown to be equal to the limiting distribution of the loop erased walk.
arxiv:math/0411551
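the partial loop erasure studied above can be illustrated with a hypothetical helper ( ours, not the paper's construction ) that scans a path and removes any loop whose length does not exceed a fixed cutoff, the cutoff standing in for the n ^ \ alpha threshold:

```python
def erase_short_loops(path, max_len):
    """scan a walk; whenever a site is revisited and the resulting loop has
    length at most max_len, erase that loop. with max_len large enough this
    reduces to the usual loop-erased walk."""
    out = []
    for site in path:
        if site in out:
            i = len(out) - 1 - out[::-1].index(site)   # last visit to this site
            if len(out) - i <= max_len:                # loop short enough: erase
                out = out[:i]
        out.append(site)
    return out

# erasing loops of length <= 2 from a short walk on the integers
print(erase_short_loops([0, 1, 2, 1, 0, 1, 2, 3], 2))   # [0, 1, 2, 3]
```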
in recent years, large strides have been taken in developing machine learning methods for dermatological applications, supported in part by the success of deep learning ( dl ). to date, diagnosing diseases from images is one of the most explored applications of dl within dermatology. convolutional neural networks ( convnets ) are the most common dl method in medical imaging due to their training efficiency and accuracy, although they are often described as black boxes because of their limited explainability. one popular way to obtain insight into a convnet ' s decision mechanism is gradient class activation maps ( grad - cam ). a quantitative evaluation of grad - cam explainability has recently been made possible by the release of dermxdb, a skin disease diagnosis explainability dataset which enables explainability benchmarking of convnet architectures. in this paper, we perform a literature review to identify the most common convnet architectures used for this task, and compare their grad - cam explanations with the explanation maps provided by dermxdb. we identified 11 architectures : densenet121, efficientnet - b0, inceptionv3, inceptionresnetv2, mobilenet, mobilenetv2, nasnetmobile, resnet50, resnet50v2, vgg16, and xception. we pre - trained all architectures on a clinical skin disease dataset, and fine - tuned them on a dermxdb subset. validation results on the dermxdb holdout subset show an explainability f1 score of between 0. 35 - 0. 46, with xception displaying the highest explainability performance. nasnetmobile reports the highest characteristic - level explainability sensitivity, despite its mediocre diagnosis performance. these results highlight the importance of choosing the right architecture for the desired application and target market, underline the need for additional explainability datasets, and further confirm the need for explainability benchmarking that relies on quantitative analyses.
arxiv:2302.12084
the apparent lack of cold molecular gas in blue compact dwarf ( bcd ) galaxies is at variance with their intense star - formation episode. the co molecule, often used as a tracer of h2 through a conversion function, is selectively photodissociated in dust - poor environments and, as a result, a potentially large fraction of h2 is expected to reside in the so - called co - dark gas, where it could be traced instead by the infrared cooling lines [ ci ], [ cii ], and [ oi ]. although the fraction of co - dark gas to total molecular gas is expected to be relatively large in metal - poor galaxies, many uncertainties remain due to the difficulty in identifying the main heating mechanism associated with the cooling lines observed in such galaxies. investigations of the herschel dwarf galaxy survey show that the heating mechanism in the neutral gas of bcds cannot be dominated by the photoelectric effect on dust grains below some threshold metallicity, implying that other heating mechanisms need to be invoked, along with a new interpretation of the corresponding infrared line diagnostics. in the study presented here and in lebouteiller et al. 2017, we use optical and infrared lines to constrain the physical conditions in the hii region + hi region of the bcd izw18 ( 18 mpc ; 2 % solar metallicity ) within a consistent photoionization and photodissociation model. we show that the hi region is entirely heated by a single ultraluminous x - ray source, with important consequences for the applicability of [ cii ] to trace the star - formation rate and the co - dark gas. we derive stringent upper limits on the size of h2 clumps that may be detected in the future with jwst and iram / noema. we also show that the nature of the x - ray source can be inferred from the corresponding signatures in the ism. finally, we speculate that star formation may be quenched in extremely metal - poor dwarf galaxies due to x - ray photoionization.
arxiv:1809.08077
rap generation, which aims to produce lyrics and corresponding singing beats, needs to model both rhymes and rhythms. previous works for rap generation focused on rhyming lyrics but ignored rhythmic beats, which are important for rap performance. in this paper, we develop deeprapper, a transformer - based rap generation system that can model both rhymes and rhythms. since there is no available rap dataset with rhythmic beats, we develop a data mining pipeline to collect a large - scale rap dataset, which includes a large number of rap songs with aligned lyrics and rhythmic beats. second, we design a transformer - based autoregressive language model which carefully models rhymes and rhythms. specifically, we generate lyrics in the reverse order with rhyme representation and constraint for rhyme enhancement and insert a beat symbol into lyrics for rhythm / beat modeling. to our knowledge, deeprapper is the first system to generate rap with both rhymes and rhythms. both objective and subjective evaluations demonstrate that deeprapper generates creative and high - quality raps with rhymes and rhythms. code will be released on github.
arxiv:2107.01875
multiparticle entanglement is of great significance for quantum metrology and quantum information processing. we here present an efficient scheme to generate stable multiparticle entanglement in a solid state setup, where an array of silicon - vacancy centers are embedded in a quasi - one - dimensional acoustic diamond waveguide. in this scheme, the continuum of phonon modes induces a controllable dissipative coupling among the siv centers. we show that, by an appropriate choice of the distance between the siv centers, the dipole - dipole interactions can be switched off due to destructive interferences, thus realizing a dicke superradiance model. this gives rise to an entangled steady state of siv centers with high fidelities. the protocol provides a feasible setup for the generation of multiparticle entanglement in a solid state system.
arxiv:2002.10760
the magnetic behavior of $ fe _ { 3 - x } o _ 4 $ nanoparticles synthesized either by high - temperature decomposition of an organic iron precursor or low - temperature co - precipitation in aqueous conditions, is compared. transmission electron microscopy, x - ray absorption spectroscopy, x - ray magnetic circular dichroism and magnetization measurements show that nanoparticles synthesized by thermal decomposition display high crystal quality and bulk - like magnetic and electronic properties, while nanoparticles synthesized by co - precipitation show much poorer crystallinity and particle - like phenomenology, including reduced magnetization, high closure fields and shifted hysteresis loops. the key role of the crystal quality is thus suggested since particle - like behavior for particles larger than about 5 nm is only observed when they are structurally defective. these conclusions are supported by monte carlo simulations. it is also shown that thermal decomposition is capable of producing nanoparticles that, after further stabilization in physiological conditions, are suitable for biomedical applications such as magnetic resonance imaging or bio - distribution studies.
arxiv:1011.2573
self / other distinction and self - recognition are important skills for interacting with the world, as they allow humans to differentiate their own actions from those of others and to be self - aware. however, only a select group of animals, mainly higher - order mammals such as humans, has passed the mirror test, a behavioural experiment proposed to assess self - recognition abilities. in this paper, we describe self - recognition as a process that is built on top of unconscious body perception mechanisms. we present an algorithm that enables a robot to perform non - appearance self - recognition in a mirror and distinguish its simple actions from those of other entities, by answering the following question : am i generating these sensations? the algorithm combines active inference, a theoretical model of perception and action in the brain, with neural network learning. the robot learns the relation between its actions and its body, with the effect produced in the visual field and its body sensors. the prediction error generated between the models and the real observations during the interaction is used to infer the body configuration through free energy minimization and to accumulate evidence for recognizing its body. experimental results on a humanoid robot show the reliability of the algorithm for different initial conditions, such as mirror recognition from any perspective, robot - robot distinction and human - robot differentiation.
arxiv:2004.05473
the rsa cryptosystem, now widely used for the security of computer networks. in the 19th century, mathematicians such as karl weierstrass and richard dedekind increasingly focused their research on internal problems, that is, pure mathematics. this led to split mathematics into pure mathematics and applied mathematics, the latter being often considered as having a lower value among mathematical purists. however, the lines between the two are frequently blurred. the aftermath of world war ii led to a surge in the development of applied mathematics in the us and elsewhere. many of the theories developed for applications were found interesting from the point of view of pure mathematics, and many results of pure mathematics were shown to have applications outside mathematics ; in turn, the study of these applications may give new insights on the " pure theory ". an example of the first case is the theory of distributions, introduced by laurent schwartz for validating computations done in quantum mechanics, which became immediately an important tool of ( pure ) mathematical analysis. an example of the second case is the decidability of the first - order theory of the real numbers, a problem of pure mathematics that was proved true by alfred tarski, with an algorithm that is impossible to implement because of a computational complexity that is much too high. for getting an algorithm that can be implemented and can solve systems of polynomial equations and inequalities, george collins introduced the cylindrical algebraic decomposition that became a fundamental tool in real algebraic geometry. in the present day, the distinction between pure and applied mathematics is more a question of personal research aim of mathematicians than a division of mathematics into broad areas. the mathematics subject classification has a section for " general applied mathematics " but does not mention " pure mathematics ". 
however, these terms are still used in names of some university departments, such as at the faculty of mathematics at the university of cambridge. = = = unreasonable effectiveness = = = the unreasonable effectiveness of mathematics is a phenomenon that was named and first made explicit by physicist eugene wigner. it is the fact that many mathematical theories ( even the " purest " ) have applications outside their initial object. these applications may be completely outside their initial area of mathematics, and may concern physical phenomena that were completely unknown when the mathematical theory was introduced. examples of unexpected applications of mathematical theories can be found in many areas of mathematics. a notable example is the prime factorization of natural numbers that was discovered more than 2, 000 years before its common use for secure internet communications through the rsa cryptosystem. a second historical example
https://en.wikipedia.org/wiki/Mathematics
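the rsa example mentioned in the passage can be sketched with deliberately tiny primes ( for exposition only; real deployments use moduli of thousands of bits and padded messages ), making the link to prime factorization explicit:

```python
# toy rsa keypair built from tiny primes
p, q = 61, 53
n = p * q                      # public modulus, 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent via modular inverse (python >= 3.8)

msg = 1234
cipher = pow(msg, e, n)        # encrypt
plain = pow(cipher, d, n)      # decrypt
assert plain == msg

# the security claim reduces to factoring n: for this toy modulus,
# trial division recovers a prime factor (and hence the private key) instantly
recovered = next(f for f in range(2, n) if n % f == 0)
print(recovered)               # 53, the smaller prime factor
```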
packet loss is a major cause of voice quality degradation in voip transmissions with serious impact on intelligibility and user experience. this paper describes a system based on a generative adversarial approach, which aims to repair the lost fragments during the transmission of audio streams. inspired by the powerful image - to - image translation capability of generative adversarial networks ( gans ), we propose bin2bin, an improved pix2pix framework to achieve the translation task from magnitude spectrograms of audio frames with lost packets, to noncorrupted speech spectrograms. in order to better maintain the structural information after spectrogram translation, this paper introduces the combination of two stft - based loss functions, mixed with the traditional gan objective. furthermore, we employ a modified patchgan structure as discriminator and we lower the concealment time by a proper initialization of the phase reconstruction algorithm. experimental results show that the proposed method has obvious advantages when compared with the current state - of - the - art methods, as it can better handle both high packet loss rates and large gaps.
arxiv:2307.15611
we discuss a novel mechanism of axion production during the scattering of alfven waves by a fast - moving schwarzschild black hole. the process couples classical macroscopic objects, and effectively large - amplitude electromagnetic ( em ) waves, to microscopic axions. the key ingredient is that the motion of a black hole ( bh ) across a magnetic field creates a classical non - zero second poincare invariant, the electromagnetic anomaly ( lyutikov 2011 ). in the case of magnetized plasma supporting an alfven wave, it is the fluctuating component of the magnetic field that contributes to the anomaly : for a sufficiently small bh moving with super - alfvenic velocity, the plasma does not have enough time to screen the parallel electric field. this creates a time - dependent $ { \ bf e } \ cdot { \ bf b } \ neq 0 $, and production of axions via the axion - em coupling.
arxiv:2108.06364
we show that an isometric action of a compact quantum group on the underlying geodesic metric space of a compact connected riemannian manifold $ ( m, g ) $ with strictly negative curvature is automatically classical, in the sense that it factors through the action of the isometry group of $ ( m, g ) $. this partially answers a question by d. goswami.
arxiv:1503.07984
we develop an approach through geometric functional analysis to error correcting codes and to the reconstruction of signals from few linear measurements. an error correcting code encodes an n - letter word x into an m - letter word y in such a way that x can be decoded correctly when any r letters of y are corrupted. we prove that most linear orthogonal transformations q from r ^ n into r ^ m form efficient and robust error correcting codes over the reals. the decoder ( which corrects the corrupted components of y ) is the metric projection onto the range of q in the l _ 1 norm. an equivalent problem arises in signal processing : how can one reconstruct a signal that belongs to a small class from few linear measurements? we prove that for most sets of gaussian measurements, all signals of small support can be exactly reconstructed by l _ 1 norm minimization. this is a substantial improvement of recent results of donoho and of candes and tao. an equivalent problem in combinatorial geometry is the existence of a polytope with a fixed number of facets and a maximal number of lower - dimensional faces. we prove that most sections of the cube form such polytopes.
arxiv:math/0502299
we analyze the finite size corrections to entanglement in quantum critical systems. by using conformal symmetry and density functional theory, we discuss the structure of the finite size contributions to a general measure of ground state entanglement, which are ruled by the central charge of the underlying conformal field theory. more generally, we show that all conformal towers formed by an infinite number of excited states ( as the size of the system $ l \ to \ infty $ ) exhibit a unique pattern of entanglement, which differ only at leading order $ ( 1 / l ) ^ 2 $. in this case, entanglement is also shown to obey a universal structure, given by the anomalous dimensions of the primary operators of the theory. as an illustration, we discuss the behavior of pairwise entanglement for the eigenspectrum of the spin - 1 / 2 xxz chain with an arbitrary length $ l $ for both periodic and twisted boundary conditions.
arxiv:0808.0020
nonlocal two - qubit quantum gates are represented by the canonical decomposition or, equivalently, by the operator - schmidt decomposition. the former decomposition results in a geometrical representation such that all two - qubit gates form a tetrahedron within which the perfect entanglers form a polyhedron. on the other hand, it is known from the latter decomposition that the schmidt number of nonlocal gates can be either 2 or 4. in this work, some aspects of the latter decomposition are investigated. it is shown that two gates differing by local operations possess the same set of schmidt coefficients. employing a geometrical method, it is established that schmidt number 2 corresponds to controlled unitary gates. further, all the edges of the tetrahedron and polyhedron are characterized using the schmidt strength, a measure of operator entanglement. it is found that one edge of the tetrahedron possesses the maximum schmidt strength, implying that all the gates on that edge are maximally entangled.
arxiv:1006.3412
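the operator - schmidt number mentioned above equals the rank of the " realigned " gate matrix, which can be checked in exact arithmetic for two standard gates ( a sketch assuming the usual computational - basis conventions; helper names are ours ) :

```python
from fractions import Fraction

def realign(u):
    """reshuffle u[(i1 i2), (j1 j2)] into r[(i1 j1), (i2 j2)] for a 4x4
    two-qubit gate; the operator-schmidt number of u is the rank of r."""
    r = [[0] * 4 for _ in range(4)]
    for i1 in range(2):
        for i2 in range(2):
            for j1 in range(2):
                for j2 in range(2):
                    r[2 * i1 + j1][2 * i2 + j2] = u[2 * i1 + i2][2 * j1 + j2]
    return r

def rank(m):
    """matrix rank over the rationals via gauss-jordan elimination."""
    m = [[Fraction(v) for v in row] for row in m]
    rk = 0
    for c in range(len(m[0])):
        piv = next((r for r in range(rk, len(m)) if m[r][c] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for r in range(len(m)):
            if r != rk and m[r][c] != 0:
                f = m[r][c] / m[rk][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

cnot = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
swap = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]

print(rank(realign(cnot)))   # 2: consistent with controlled unitaries
print(rank(realign(swap)))   # 4: swap is not a controlled unitary
```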
in general the kernel of qcd ' s gap equation possesses a domain of analyticity upon which the equation ' s solution at nonzero chemical potential is simply obtained from the in - vacuum result through analytic continuation. on this domain the single - quark number - and scalar - density distribution functions are mu - independent. this is illustrated via two models for the gap equation ' s kernel. the models are alike in concentrating support in the infrared. they differ in the form of the vertex but qualitatively the results are largely insensitive to the ansatz. in vacuum both models realise chiral symmetry in the nambu - goldstone mode and in the chiral limit, with increasing chemical potential, exhibit a first - order chiral symmetry restoring transition at mu ~ m ( 0 ), where m ( p ^ 2 ) is the dressed - quark mass function. there is evidence to suggest that any associated deconfinement transition is coincident and also of first - order.
arxiv:0807.2755
the homotopy coherent nerve from simplicial categories to simplicial sets and its left adjoint c are important to the study of ( infinity, 1 ) - categories because they provide a means for comparing two models of their respective homotopy theories, giving a quillen equivalence between the model structures for quasi - categories and simplicial categories. the functor c also gives a cofibrant replacement for ordinary categories, regarded as trivial simplicial categories. however, the hom - spaces of the simplicial category cx arising from a quasi - category x are not well understood. we show that when x is a quasi - category, all ( 2, 1 ) - horns in the hom - spaces of its simplicial category can be filled. we prove, unexpectedly, that for any simplicial set x, the hom - spaces of cx are 3 - coskeletal. we characterize the quasi - categories whose simplicial categories are locally quasi, finding explicit examples of 3 - dimensional horns that cannot be filled in all other cases. finally, we show that when x is the nerve of an ordinary category, cx is isomorphic to the simplicial category obtained from the standard free simplicial resolution, showing that the two known cofibrant " simplicial thickenings " of ordinary categories coincide, and furthermore that its hom - spaces are 2 - coskeletal.
arxiv:0912.4809
this paper maps out the relation between different approaches for handling preferences in argumentation with strict rules and defeasible assumptions by offering translations between them. the systems we compare are : non - prioritized defeats, i. e., attacks ; preference - based defeats ; and preference - based defeats extended with reverse defeat.
arxiv:1709.07255
the automatic synthesis of policies for robotic - control tasks through reinforcement learning relies on a reward signal that simultaneously captures many possibly conflicting requirements. in this paper, we introduce a novel, hierarchical, potential - based reward - shaping approach ( hprs ) for defining effective, multivariate rewards for a large family of such control tasks. we formalize a task as a partially - ordered set of safety, target, and comfort requirements, and define an automated methodology to enforce a natural order among requirements and shape the associated reward. building upon potential - based reward shaping, we show that hprs preserves policy optimality. our experimental evaluation demonstrates hprs ' s superior ability in capturing the intended behavior, resulting in task - satisfying policies with improved comfort, and converging to optimal behavior faster than other state - of - the - art approaches. we demonstrate the practical usability of hprs on several robotics applications and the smooth sim2real transition on two autonomous - driving scenarios for f1tenth race cars.
arxiv:2110.02792
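The abstract above builds on potential-based reward shaping, whose classical invariance result (Ng, Harada, and Russell, 1999) states that adding `gamma * Phi(s') - Phi(s)` to the reward leaves optimal policies unchanged. A minimal sketch of that mechanism follows; the grid task, goal position, and distance-based potential are illustrative assumptions, not the paper's HPRS benchmark or code.

```python
# Minimal sketch of potential-based reward shaping, the mechanism HPRS
# builds on. The grid world and potential function are illustrative only.

GAMMA = 0.9

def potential(state, goal=(3, 3)):
    # Negative Manhattan distance to the goal: potential rises near the goal.
    return -(abs(state[0] - goal[0]) + abs(state[1] - goal[1]))

def shaped_reward(r, s, s_next, gamma=GAMMA):
    # r' = r + gamma * Phi(s') - Phi(s); this additive form provably
    # preserves the optimal policy of the underlying MDP.
    return r + gamma * potential(s_next) - potential(s)

# A step toward the goal earns a positive shaping bonus, a step away a penalty.
bonus_toward = shaped_reward(0.0, (0, 0), (0, 1))
bonus_away = shaped_reward(0.0, (0, 1), (0, 0))
```

The same telescoping structure is what lets the paper stack several potentials (safety, target, comfort) without breaking optimality.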
we present a study of the effects of collisional dynamics on the formation and detectability of cold tidal streams. a semi - analytical model for the evolution of the stellar mass function was implemented and coupled to a fast stellar stream simulation code, as well as the synthetic cluster evolution code emacss for the mass evolution as a function of a globular cluster orbit. we find that the increase in the average mass of the escaping stars for clusters close to dissolution has a major effect on the observable stream surface density. as an example, we show that palomar 5 would have undetectable streams ( in an sdss - like survey ) if it were currently three times more massive, despite the fact that a more massive cluster loses stars at a higher rate. this bias due to the preferential escape of low - mass stars offers an alternative explanation for the absence of tails near massive clusters that does not invoke a dark matter halo associated with the cluster. we explore the orbits of a large sample of milky way globular clusters and derive their initial masses and remaining mass fractions. using properties of known tidal tails we explore regions of parameter space that favour the detectability of a stream. a list of high - probability candidates is discussed.
arxiv:1702.02543
well - designed current control is a key factor in ensuring the efficient and safe operation of modular multilevel converters ( mmcs ). even though this control problem involves multiple control objectives, conventional current control schemes are comprised of independently designed decoupled controllers, e. g., proportional - integral ( pi ) or proportional - resonant ( pr ). due to the bilinearity of the mmc dynamics, tuning pi and pr controllers so that good performance and constraint satisfaction are guaranteed is quite challenging. this challenge becomes more relevant in an ac / ac mmc configuration due to the complexity of tracking the single - phase sinusoidal components of the mmc output. in this paper, we propose a method to design a multivariable controller, i. e., a static feedback gain, to regulate the mmc currents. we use a physics - informed transformation to model the mmc dynamics linearly and synthesise the proposed controller. we use this linear model to formulate a linear matrix inequality that computes a feedback gain that guarantees safe and effective operation, including ( i ) limited tracking error, ( ii ) stability, and ( iii ) meeting all constraints. to test the efficacy of our method, we examine its performance in a direct ac / ac mmc simulated in simulink / plecs and in a scaled - down ac / ac mmc prototype to investigate the ultra - fast charging of electric vehicles.
arxiv:2403.18371
radar and lidar, provided by two different range sensors, each has pros and cons of various perception tasks on mobile robots or autonomous driving. in this paper, a monte carlo system is used to localize the robot with a rotating radar sensor on 2d lidar maps. we first train a conditional generative adversarial network to transfer raw radar data to lidar data, and achieve reliable radar points from generator. then an efficient radar odometry is included in the monte carlo system. combining the initial guess from odometry, a measurement model is proposed to match the radar data and prior lidar maps for final 2d positioning. we demonstrate the effectiveness of the proposed localization framework on the public multi - session dataset. the experimental results show that our system can achieve high accuracy for long - term localization in outdoor scenes.
arxiv:2005.04644
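The localization framework in the abstract above rests on a Monte Carlo (particle-filter) loop: predict particles with odometry, weight them by measurement likelihood, and resample. A toy one-dimensional sketch of that loop follows; the single landmark, noise level, and true position are invented for illustration, and the paper's radar-to-lidar GAN and 2D map matching are not modelled.

```python
import math
import random

# Toy 1-D Monte Carlo localization: particles track a robot position and are
# weighted by how well a simulated range measurement matches the map.
random.seed(0)
TRUE_POS, LANDMARK, NOISE = 5.0, 10.0, 0.5

def measure(pos):
    return LANDMARK - pos  # noiseless range to the single landmark

def mcl_step(particles, control, z):
    # 1) predict: apply odometry control with process noise
    moved = [p + control + random.gauss(0, NOISE) for p in particles]
    # 2) weight: Gaussian measurement likelihood
    w = [math.exp(-((measure(p) - z) ** 2) / (2 * NOISE ** 2)) for p in moved]
    # 3) resample particles proportionally to their weights
    return random.choices(moved, weights=w, k=len(moved))

particles = [random.uniform(0, 10) for _ in range(500)]
for _ in range(20):
    particles = mcl_step(particles, 0.0, measure(TRUE_POS))
estimate = sum(particles) / len(particles)
```

After a few iterations the particle cloud collapses around the true position; the real system replaces `measure` with radar-to-lidar matching against prior maps.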
egocentric augmented reality devices such as wearable glasses passively capture visual data as a human wearer tours a home environment. we envision a scenario wherein the human communicates with an ai agent powering such a device by asking questions ( e. g., where did you last see my keys? ). in order to succeed at this task, the egocentric ai assistant must ( 1 ) construct semantically rich and efficient scene memories that encode spatio - temporal information about objects seen during the tour and ( 2 ) possess the ability to understand the question and ground its answer into the semantic memory representation. towards that end, we introduce ( 1 ) a new task, episodic memory question answering ( emqa ), wherein an egocentric ai assistant is provided with a video sequence ( the tour ) and a question as an input and is asked to localize its answer to the question within the tour, ( 2 ) a dataset of grounded questions designed to probe the agent ' s spatio - temporal understanding of the tour, and ( 3 ) a model for the task that encodes the scene as an allocentric, top - down semantic feature map and grounds the question into the map to localize the answer. we show that our choice of episodic scene memory outperforms naive, off - the - shelf solutions for the task as well as a host of very competitive baselines and is robust to noise in depth and pose, as well as camera jitter. the project page can be found at : https://samyak-268.github.io/emqa.
arxiv:2205.01652
it is proved that dicritical singularities of real analytic levi - flat sets coincide with the set of segre degenerate points.
arxiv:1606.09294
link prediction is an important task in social network analysis. there are different characteristics ( features ) in a social network that can be used for link prediction. in this paper, we evaluate the effectiveness of aggregated features and topological features in link prediction using supervised learning. the aggregated features, in a social network, are some aggregation functions of the attributes of the nodes. topological features describe the topology or structure of a social network and its underlying graph. we evaluated the effectiveness of these features by measuring the performance of different supervised machine learning methods. specifically, we selected five well - known supervised methods including j48 decision tree, multi - layer perceptron ( mlp ), support vector machine ( svm ), logistic regression and naive bayes ( nb ). we measured the performance of these five methods with different sets of features of the dblp dataset. our results indicate that the combination of aggregated and topological features generates the best performance. for evaluation purposes, we used accuracy, area under the roc curve ( auc ) and f - measure. our selected features can be used for the analysis of almost any social network. this is because these features provide the important characteristics of the underlying graph of the social networks. the significance of our work is that the selected features can be very effective in the analysis of big social networks. in such networks we usually deal with big data sets, with millions or billions of instances. using fewer, but more effective, features can help us in the analysis of big social networks.
arxiv:2006.16327
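Topological link-prediction features of the kind evaluated in the abstract above can be computed directly from the graph's adjacency structure. A self-contained sketch follows on a small invented graph; the DBLP data, the aggregated attribute features, and the five classifiers from the paper are not reproduced here.

```python
# Sketch of two classic topological link-prediction features, common
# neighbours and the Jaccard coefficient, on a toy undirected graph.
# Higher scores for a non-edge suggest a more likely future link.

graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c", "e"},
    "e": {"d"},
}

def common_neighbours(g, u, v):
    return len(g[u] & g[v])

def jaccard(g, u, v):
    union = g[u] | g[v]
    return len(g[u] & g[v]) / len(union) if union else 0.0

# Candidate non-edge (b, d) shares neighbours a and c; (b, e) shares none.
score_bd = jaccard(graph, "b", "d")
score_be = jaccard(graph, "b", "e")
```

In a supervised setup such scores become feature columns, with existing edges as positive examples and sampled non-edges as negatives.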
in this paper we present a new steepest - descent type algorithm for convex optimization problems. our algorithm pieces the unknown into sub - blocs of unknowns and considers a partial optimization over each sub - bloc. in quadratic optimization, our method involves a newton technique to compute the step - lengths for the descent directions resulting from the sub - blocs. our optimization method is fully parallel and easily implementable ; we first present it in a general linear algebra setting, then we highlight its applicability to a parabolic optimal control problem, where we consider the blocs of unknowns with respect to the time dependency of the control variable. the parallel tasks, in the last problem, turn " on " the control during a specific time - window and turn it " off " elsewhere. we show that our algorithm significantly improves the computational time compared with recognized methods. convergence analysis of the new optimal control algorithm is provided for an arbitrary choice of partition. numerical experiments are presented to illustrate the efficiency and the rapid convergence of the method.
arxiv:1403.7254
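For a quadratic objective f(x) = Β½xα΅€Ax βˆ’ bα΅€x, partial optimization over a sub-block has a closed-form Newton step. A sketch with block size one follows (the step then reduces to the Gauss–Seidel update); the test matrix is an invented symmetric positive-definite example, and the paper's parallel time-window partition for optimal control is not reproduced.

```python
# Blockwise descent for f(x) = 0.5 x^T A x - b^T x with singleton blocks:
# the exact (Newton) minimizer over coordinate i, holding the rest fixed, is
# x_i = (b_i - sum_{j != i} A_ij x_j) / A_ii.

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]   # symmetric positive definite, so the sweeps converge
b = [1.0, 2.0, 3.0]

def block_descent(A, b, sweeps=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]  # exact minimizer over block {i}
    return x

x = block_descent(A, b)
residual = max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i])
               for i in range(3))
```

With larger blocks each update becomes a small linear solve, and independent blocks can be dispatched to parallel workers, which is the paper's setting.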
this review presents and evaluates various formalisms for the purpose of modelling the semantics of financial derivatives contracts. the formalism proposed by lee is selected as the best candidate among those initially reviewed. further examination and evaluation of this formalism are then carried out.
arxiv:1901.01815
in this paper we present a new point of view on the mathematical foundations of statistical physics of infinite volume systems. this viewpoint is based on the newly introduced notions of transition energy function, transition energy field and one - point transition energy field. the former of them, namely the transition energy function, is a generalization of the notion of relative hamiltonian introduced by pirogov and sinai. however, unlike the ( relative ) hamiltonian, our objects are defined axiomatically by their natural and physically well - founded intrinsic properties. the developed approach allowed us to give a proper mathematical definition of the hamiltonian without involving the notion of potential, to propose a justification of the gibbs formula for infinite systems and to answer the problem stated by d. ruelle of how wide the class of specifications, which can be represented in gibbsian form, is. furthermore, this approach establishes a straightforward relationship between the probabilistic notion of ( gibbs ) random field and the physical notion of ( transition ) energy, and so opens the possibility to directly apply probabilistic methods to the mathematical problems of statistical physics.
arxiv:1810.05388
a graph $ g = ( v, e ) $ is total weight $ ( k, k ' ) $ - choosable if the following holds : for any list assignment $ l $ which assigns to each vertex $ v $ a set $ l ( v ) $ of $ k $ real numbers, and assigns to each edge $ e $ a set $ l ( e ) $ of $ k ' $ real numbers, there is a proper $ l $ - total weighting, i. e., a map $ \ phi : v \ cup e \ to \ mathbb { r } $ such that $ \ phi ( z ) \ in l ( z ) $ for $ z \ in v \ cup e $, and $ \ sum _ { e \ in e ( u ) } \ phi ( e ) + \ phi ( u ) \ ne \ sum _ { e \ in e ( v ) } \ phi ( e ) + \ phi ( v ) $ for every edge $ \ { u, v \ } $. a graph is called nice if it contains no isolated edges. as a strengthening of the famous 1 - 2 - 3 conjecture, it was conjectured in [ t. wong and x. zhu, total weight choosability of graphs, j. graph th. 66 ( 2011 ), 198 - 212 ] that every nice graph is total weight $ ( 1, 3 ) $ - choosable. the problem of whether there is a constant $ k $ such that every nice graph is total weight $ ( 1, k ) $ - choosable remained open for a decade and was recently solved by cao [ l. cao, total weight choosability of graphs : towards the 1 - 2 - 3 conjecture, j. combin. th. b, 149 ( 2021 ), 109 - 146 ], who proved that every nice graph is total weight $ ( 1, 17 ) $ - choosable. this paper improves this result and proves that every nice graph is total weight $ ( 1, 5 ) $ - choosable.
arxiv:2104.05410
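The definition above can be made concrete by brute force on one small graph: fix every vertex list to the singleton {0} and search edge weights from a 5-element list, so that a successful assignment witnesses a proper total weighting for these particular lists. The example graph is invented, and this is only a finite check, nothing like the paper's proof for all nice graphs.

```python
from itertools import product

# Brute-force search for a proper total weighting on a small nice graph:
# vertex weights fixed to 0, edge weights drawn from {1, ..., 5}, requiring
# distinct weighted degrees across every edge.

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # a triangle with a pendant edge
n = 4

def proper_weighting(edges, n, weights=(1, 2, 3, 4, 5)):
    for assign in product(weights, repeat=len(edges)):
        s = [0] * n
        for (u, v), w in zip(edges, assign):
            s[u] += w
            s[v] += w
        # proper: adjacent vertices must receive different sums
        if all(s[u] != s[v] for u, v in edges):
            return dict(zip(edges, assign))
    return None

witness = proper_weighting(edges, n)
```

The paper's theorem guarantees such a witness exists for every nice graph and every choice of 5-element edge lists; the search here merely illustrates the object being weighted.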
we investigate minimum weak $ \ alpha $ - riesz energy problems with external fields in both the unconstrained and constrained settings for generalized condensers $ ( a _ 1, a _ 2 ) $ such that the closures of $ a _ 1 $ and $ a _ 2 $ in $ \ mathbb r ^ n $ are allowed to intersect one another. ( such problems with the standard $ \ alpha $ - riesz energy in place of the weak one would be unsolvable, which justifies the need for the concept of weak energy when dealing with condenser problems. ) we obtain sufficient and / or necessary conditions for the existence of minimizers, provide descriptions of their supports and potentials, and single out their characteristic properties. to this end we have discovered an intimate relation between minimum weak $ \ alpha $ - riesz energy problems over signed measures associated with $ ( a _ 1, a _ 2 ) $ and minimum $ \ alpha $ - green energy problems over positive measures carried by $ a _ 1 $. crucial for our analysis of the latter problems is the perfectness of the $ \ alpha $ - green kernel, established in our recent paper. as an application of the results obtained, we describe the support of the $ \ alpha $ - green equilibrium measure.
arxiv:1810.00791
small angular scale structure of the soft x - ray background correlated with the galaxy distribution is investigated. an extensive data sample from the rosat and xmm - newton archives is used. excess emission below 1 kev extending up to at least 1. 5 mpc around galaxies is detected. the relative amplitude of the excess emission in the 0. 3 - 0. 5 kev band amounts to 1. 3 + / - 0. 2 % of the total background flux. a steep spectrum of the emission at higher energies is indicated by a conspicuous decline of the signal above 1 kev. the xmm - newton epic / mos data, covering a wider energy range than the rosat pspc, are consistent with a thermal bremsstrahlung spectrum with kt < 0. 5 kev. this value is consistent with temperatures of the warm - hot intergalactic medium ( whim ) derived by several groups from hydrodynamic simulations. correlation analysis allows for an estimate of the average excess emission associated with galaxies, but the data are insufficient to constrain physical parameters of the whim and to determine the contribution of whim to the total baryonic mass density.
arxiv:astro-ph/0501275
density functional calculations of the electronic structure are used to elucidate the bonding of li $ _ 3 $ alh $ _ 6 $. it is found that this material is best described as ionic, and in particular that the [ alh $ _ 6 $ ] $ ^ { 3 - } $ units are not reasonably viewed as substantially covalent.
arxiv:cond-mat/0407232
when each edge device of a network only perceives a local part of the environment, collaborative inference across multiple devices is often needed to predict global properties of the environment. in safety - critical applications, collaborative inference must be robust to significant network failures caused by environmental disruptions or extreme weather. existing collaborative learning approaches, such as privacy - focused vertical federated learning ( vfl ), typically assume a centralized setup or that one device never fails. however, these assumptions make prior approaches susceptible to significant network failures. to address this problem, we first formalize the problem of robust collaborative inference over a dynamic network of devices that could experience significant network faults. then, we develop a minimalistic yet impactful method called multiple aggregation with gossip rounds and simulated faults ( mags ) that synthesizes simulated faults via dropout, replication, and gossiping to significantly improve robustness over baselines. we also theoretically analyze our proposed approach to explain why each component enhances robustness. extensive empirical results validate that mags is robust across a range of fault rates - including extreme fault rates.
arxiv:2312.16638
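The gossip-with-simulated-faults idea in the abstract above can be illustrated with a scalar averaging sketch: nodes repeatedly pair up and average their values, and each pairwise exchange may randomly fail. The node values, fault rate, and pairing scheme are invented for illustration; this is not the authors' MAGS implementation.

```python
import random

# Gossip-style aggregation under simulated link faults: despite dropped
# exchanges, surviving rounds drive all nodes toward the global average.
random.seed(1)

def gossip_round(values, fail_prob=0.3):
    out = list(values)
    idx = list(range(len(values)))
    random.shuffle(idx)
    for i, j in zip(idx[::2], idx[1::2]):
        if random.random() > fail_prob:       # this link survived the round
            avg = (out[i] + out[j]) / 2
            out[i] = out[j] = avg             # both endpoints adopt the mean
    return out

vals = [0.0, 4.0, 8.0, 12.0]
for _ in range(100):
    vals = gossip_round(vals)
spread = max(vals) - min(vals)                # shrinks toward zero
```

Pairwise averaging conserves the sum, so the common limit is the true network mean even when a constant fraction of exchanges fail; faults only slow convergence.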
graphene ranks highly as a possible material for future high - speed and flexible electronics. current fabrication routes either rely on metal substrates, requiring post - synthesis transfer of the graphene onto a si wafer, or, in the case of epitaxial growth on sic, require temperatures above 1000 °c. both the handling difficulty and the high temperatures are ill - suited to present - day silicon technology. we report a facile chemical vapor deposition approach in which nano - graphene and few - layer graphene is directly formed over magnesium oxide and can be achieved at temperatures as low as 325 °c.
arxiv:1103.0497
mathematics is a field of study that discovers and organizes methods, theories and theorems that are developed and proved for the needs of empirical sciences and mathematics itself. there are many areas of mathematics, which include number theory ( the study of numbers ), algebra ( the study of formulas and related structures ), geometry ( the study of shapes and spaces that contain them ), analysis ( the study of continuous changes ), and set theory ( presently used as a foundation for all mathematics ). mathematics involves the description and manipulation of abstract objects that consist of either abstractions from nature or β€” in modern mathematics β€” purely abstract entities that are stipulated to have certain properties, called axioms. mathematics uses pure reason to prove properties of objects, a proof consisting of a succession of applications of deductive rules to already established results. these results include previously proved theorems, axioms, and β€” in case of abstraction from nature β€” some basic properties that are considered true starting points of the theory under consideration. mathematics is essential in the natural sciences, engineering, medicine, finance, computer science, and the social sciences. although mathematics is extensively used for modeling phenomena, the fundamental truths of mathematics are independent of any scientific experimentation. some areas of mathematics, such as statistics and game theory, are developed in close correlation with their applications and are often grouped under applied mathematics. other areas are developed independently from any application ( and are therefore called pure mathematics ) but often later find practical applications. historically, the concept of a proof and its associated mathematical rigour first appeared in greek mathematics, most notably in euclid ' s elements. 
since its beginning, mathematics was primarily divided into geometry and arithmetic ( the manipulation of natural numbers and fractions ), until the 16th and 17th centuries, when algebra and infinitesimal calculus were introduced as new fields. since then, the interaction between mathematical innovations and scientific discoveries has led to a correlated increase in the development of both. at the end of the 19th century, the foundational crisis of mathematics led to the systematization of the axiomatic method, which heralded a dramatic increase in the number of mathematical areas and their fields of application. the contemporary mathematics subject classification lists more than sixty first - level areas of mathematics. = = areas of mathematics = = before the renaissance, mathematics was divided into two main areas : arithmetic, regarding the manipulation of numbers, and geometry, regarding the study of shapes. some types of pseudoscience, such as numerology and astrology, were not then clearly distinguished from mathematics. during the renaissance, two more
https://en.wikipedia.org/wiki/Mathematics
we explore a nonlinear field model to describe the interplay between the ability of excitons to be bose condensed and their interaction with other modes of a crystal. we apply our consideration to the long - living paraexcitons in cu2o. taking into account the exciton - phonon interaction and introducing a coherent phonon part of the moving condensate, we solve quasi - stationary equations for the exciton - phonon condensate. these equations support localized solutions, and we discuss the conditions for the inhomogeneous condensate to appear in the crystal. allowable values of the characteristic width of ballistic condensates are estimated. the stability conditions of the moving condensate are analyzed by use of landau arguments, and landau critical parameters appear in the theory. it follows that, under certain conditions, the condensate can move through the crystal as a stable droplet. to separate the coherent and non - coherent parts of the exciton - phonon packet, we suggest turning off the phonon wind through changes in the design of the 3d crystal and the boundary conditions for the moving droplet.
arxiv:cond-mat/9912339
we propose and demonstrate the functioning of a special rapid single flux quantum ( rsfq ) circuit with frequency - dependent damping. this damping is achieved by shunting individual josephson junctions by pieces of open - ended rc transmission lines. our circuit includes a toggle flip - flop cell, josephson transmission lines transferring single flux quantum pulses to and from this cell, as well as dc / sfq and sfq / dc converters. due to the desired frequency - dispersion in the rc line shunts which ensures sufficiently low noise at low frequencies, such circuits are well - suited for integrating with the flux / phase josephson qubit and enable its efficient control.
arxiv:0804.0442
the isolation intervals of the real roots of the symbolic monic cubic polynomial $ x ^ 3 + a x ^ 2 + b x + c $ are determined, in terms of the coefficients of the polynomial, by solving the siebeck - marden - northshield triangle : the equilateral triangle that projects onto the three real roots of the cubic polynomial and whose inscribed circle projects onto an interval whose endpoints are the stationary points of the polynomial.
arxiv:2107.01847
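The connection between stationary points and isolation intervals can be checked numerically: the stationary points of p(x) = xΒ³ + axΒ² + bx + c are the roots of p'(x) = 3xΒ² + 2ax + b, and sign changes of p across them bracket the real roots. The sketch below uses a Cauchy-type outer bound and ignores degenerate multiple roots (where p vanishes at a stationary point); it does not construct the Siebeck–Marden–Northshield triangle itself.

```python
import math

# Isolate the real roots of a monic cubic x^3 + a x^2 + b x + c using the
# stationary points of p and sign changes of p at the resulting breakpoints.

def isolation_intervals(a, b, c):
    p = lambda x: x**3 + a * x**2 + b * x + c
    disc = 4 * a * a - 12 * b                # discriminant of p'(x)
    bound = 1 + max(abs(a), abs(b), abs(c))  # Cauchy bound on all real roots
    if disc <= 0:
        return [(-bound, bound)]             # p is monotone: one real root
    s1 = (-2 * a - math.sqrt(disc)) / 6      # left stationary point
    s2 = (-2 * a + math.sqrt(disc)) / 6      # right stationary point
    pts = [-bound, s1, s2, bound]
    # a simple root lies in each consecutive pair where p changes sign
    return [(lo, hi) for lo, hi in zip(pts, pts[1:]) if p(lo) * p(hi) < 0]

# (x - 1)(x - 2)(x - 4) = x^3 - 7x^2 + 14x - 8 has three real roots
intervals = isolation_intervals(-7.0, 14.0, -8.0)
```

Each returned interval contains exactly one real root and can be handed to any bracketing method, such as bisection.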
the increasing volumes of data produced by high - throughput instruments coupled with advanced computational infrastructures for scientific computing have enabled what is often called a { \ em fourth paradigm } for scientific research based on the exploration of large datasets. current scientific research is often interdisciplinary, making data integration a critical technique for combining data from different scientific domains. research data management is a critical part of this paradigm, through the proposition and development of methods, techniques, and practices for managing scientific data through their life cycle. research on microbial communities follows the same pattern of production of large amounts of data obtained, for instance, from sequencing organisms present in environmental samples. data on microbial communities can come from a multitude of sources and can be stored in different formats. for example, data from metagenomics, metatranscriptomics, metabolomics, and biological imaging are often combined in studies. in this article, we describe the design and current state of implementation of an integrative research data management framework for the cluster of excellence balance of the microverse aiming to allow for data on microbial communities to be more easily discovered, accessed, combined, and reused. this framework is based on research data repositories and best practices for managing workflows used in the analysis of microbial communities, which includes recording provenance information for tracking data derivation.
arxiv:2207.06890
a lot of technological advances depend on next - generation materials, such as graphene, which enables a raft of new applications, for example better electronics. manufacturing such materials is often difficult ; in particular, producing graphene at scale is an open problem. we provide a series of datasets that describe the optimization of the production of laser - induced graphene, an established manufacturing method that has shown great promise. we pose three challenges based on the datasets we provide - - modeling the behavior of laser - induced graphene production with respect to parameters of the production process, transferring models and knowledge between different precursor materials, and optimizing the outcome of the transformation over the space of possible production parameters. we present illustrative results, along with the code used to generate them, as a starting point for interested users. the data we provide represents an important real - world application of machine learning ; to the best of our knowledge, no similar datasets are available.
arxiv:2107.14257
an index - guiding photonic crystal fiber ( pcf ) with an array of air holes surrounding the silica core region has special characteristics compared to conventional single - mode fibers ( smfs ). using the effective index method and the gaussian beam propagation theory, the macro - bending and splice losses for pcfs are investigated. the wavelength dependence of the cladding index of the pcf has been taken properly into account. we obtain the effective spot size for different configurations of pcfs, which is used for computing the splice losses. the gaussian approximation for the fundamental modal field leads to simple closed - form expressions for the splice losses produced by transverse, longitudinal and angular offsets. calculations of macro - bending losses are based on antenna theory for bend standard fibers.
arxiv:0705.2875
the characteristics of the star with the " prescribed " density distribution $ \ rho = \ rho _ c [ 1 - ( r / r ) ^ \ alpha ] $ are analytically studied.
arxiv:astro-ph/0003430
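One characteristic of the prescribed profile ρ = ρ_c[1 βˆ’ (r/R)^Ξ±] is immediate: integrating 4πρ(r)rΒ² from 0 to R gives the total mass M = 4πρ_c RΒ³ Β· Ξ±/(3(Ξ± + 3)). The sketch below checks this closed form against a simple midpoint quadrature; the numerical values of ρ_c, R, and Ξ± are illustrative, not taken from the paper.

```python
import math

# Total mass of a star with density rho = rho_c * (1 - (r/R)**alpha):
# M = 4*pi * integral_0^R rho(r) r^2 dr = 4*pi*rho_c*R^3 * alpha / (3*(alpha+3))

def mass_closed_form(rho_c, R, alpha):
    return 4 * math.pi * rho_c * R**3 * alpha / (3 * (alpha + 3))

def mass_numeric(rho_c, R, alpha, n=100_000):
    h = R / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h                      # midpoint rule
        total += rho_c * (1 - (r / R)**alpha) * r * r * h
    return 4 * math.pi * total

m_exact = mass_closed_form(1.0, 1.0, 2.0)      # alpha = 2: M = 8*pi/15
m_num = mass_numeric(1.0, 1.0, 2.0)
```

The same integral with an upper limit r < R gives the mass profile m(r) used for the star's other mechanical characteristics.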
the cuprates exhibit a prominent charge - density - wave ( cdw ) instability with wavevector along [ 100 ], i. e., the cu - o bond direction. whereas cdw order is most prominent at moderate doping and low temperature, there exists increasing evidence for dynamic charge correlations throughout a large portion of the temperature - doping phase diagram. in particular, signatures of incipient charge order have been observed as phonon softening and / or broadening near the cdw wavevector approximately half - way through the brillouin zone. most of this work has focused on moderately - doped cuprates, for which the cdw order is robust, or on optimally - doped samples, for which the superconducting transition temperature ( $ t _ c $ ) attains its maximum. here we present a time - of - flight neutron scattering study of phonons in simple - tetragonal $ \ text { hgba } _ 2 \ text { cuo } _ { 4 + \ delta } $ ( $ t _ c = 55 $ k ) at a low doping level where prior work showed the cdw order to be weak. we employ and showcase a new software - based technique that mines the large number of measured brillouin zones for useful data in order to improve accuracy and counting statistics. density - functional theory has not provided an accurate description of phonons in $ \ text { hgba } _ 2 \ text { cuo } _ { 4 + \ delta } $, yet we find the right set of parameters to qualitatively reproduce the data. the notable exception is a dispersion minimum in the longitudinal cu - o bond - stretching branch along [ 100 ]. this discrepancy suggests that, while cdw order is weak, there exist significant dynamic charge correlations in the optic phonon range at low doping, near the edge of the superconducting dome.
arxiv:2002.02593
trapped ions for quantum information processing has been an area of intense study due to the extraordinarily high fidelity operations that have been reported experimentally. specifically, barium trapped ions have been shown to have exceptional state - preparation and measurement ( spam ) fidelities. the $ ^ { 133 } \ mathrm { ba } ^ + $ ( $ i = 1 / 2 $ ) isotope in particular is a promising candidate for large - scale quantum computing experiments. however, a major pitfall with this isotope is that it is radioactive and is thus generally used in microgram quantities to satisfy safety regulations. we describe a new method for creating microgram barium chloride ( $ \ mathrm { bacl } _ 2 $ ) ablation targets for use in trapped ion experiments and compare our procedure to previous methods. we outline two recipes for fabrication of ablation targets that increase the production of neutral atoms for isotope - selective loading of barium ions. we show that heat - treatment of the ablation targets greatly increases the consistency at which neutral atoms can be produced and we characterize the uniformity of these targets using trap - independent techniques such as energy dispersive x - ray spectroscopy ( eds ) and neutral fluorescence collection. our comparison between fabrication techniques and demonstration of consistent neutral fluorescence paves a path towards reliable loading of $ ^ { 133 } \ mathrm { ba } ^ + $ in surface traps and opens opportunities for scalable quantum computing with this isotope.
arxiv:2402.06632
over the past few years the displacement statistics of self - propelled particles has been intensely studied, revealing their long - time diffusive behavior. here, we demonstrate that a concerted combination of boundary conditions and switching on and off the self - propelling drive can generate and afterwards arbitrarily often restore a non - stationary centered peak in their spatial distribution. this corresponds to a partial reversibility of their statistical behavior, in opposition to the above - mentioned long - time diffusive nature. interestingly, it is a diffusive process that mediates and makes possible this procedure. it should be straightforward to verify our predictions in a real experimental system.
arxiv:1505.01706
we propose a novel way to communicate signals in the form of waves across a d - dimensional lattice. the mechanism is based on quantum search algorithms and makes it possible to both search for marked positions in a regular grid and to communicate between two ( or more ) points on the lattice. remarkably, neither the sender nor the receiver needs to know the position of each other despite the fact that the signal is only exchanged between the contributing parties. this is an example of using wave interference as a resource by controlling localisation phenomena effectively. possible experimental realisations will be discussed.
arxiv:1001.0335
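The search primitive underlying the proposed lattice signalling scheme is Grover-style amplitude amplification, which can be simulated classically for small sizes: an oracle phase-flips the marked amplitude and a diffusion step inverts all amplitudes about their mean. The sketch below runs on a flat list of N items; the d-dimensional lattice walk and the sender/receiver protocol of the paper are not modelled.

```python
import math

# Grover-style amplitude amplification on N items: after ~ (pi/4)*sqrt(N)
# iterations, nearly all probability concentrates on the marked item.

def grover(n_items, marked, iterations):
    amp = [1 / math.sqrt(n_items)] * n_items   # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]             # oracle: phase-flip the mark
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]      # inversion about the mean
    return amp

N = 16
amp = grover(N, marked=7, iterations=3)        # ~ (pi/4) * sqrt(16) rounds
p_marked = amp[7] ** 2                         # success probability ~ 0.96
```

Both steps are unitary, so the squared amplitudes remain a probability distribution throughout; running too many iterations overshoots and the success probability falls again.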
we reconsider randomized algorithms for the low - rank approximation of symmetric positive semi - definite ( spsd ) matrices such as laplacian and kernel matrices that arise in data analysis and machine learning applications. our main results consist of an empirical evaluation of the performance quality and running time of sampling and projection methods on a diverse suite of spsd matrices. our results highlight complementary aspects of sampling versus projection methods ; they characterize the effects of common data preprocessing steps on the performance of these algorithms ; and they point to important differences between uniform sampling and nonuniform sampling methods based on leverage scores. in addition, our empirical results illustrate that existing theory is so weak that it does not provide even a qualitative guide to practice. thus, we complement our empirical results with a suite of worst - case theoretical bounds for both random sampling and random projection methods. these bounds are qualitatively superior to existing bounds, e. g., improved additive - error bounds for spectral and frobenius norm error and relative - error bounds for trace norm error, and they point to future directions to make these algorithms useful in even larger - scale machine learning applications.
arxiv:1303.1849
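The column-sampling methods evaluated in the abstract above are of NystrΓΆm type: sample a column subset S and approximate A by C W⁺ Cα΅€ with C = A[:, S] and W = A[S, S]. A deliberately tiny sketch with a single sampled column follows, where the core block reduces to a scalar; the rank-one test matrix is invented, and real use samples many columns (uniformly or by leverage scores) with a pseudoinverse of W.

```python
# Minimal Nystrom sketch for an SPSD matrix with one sampled column s:
# A_ij is approximated by A_is * A_js / A_ss. This is exact whenever A has
# rank one, as in the toy example below.

def nystrom_rank1(A, s):
    n = len(A)
    return [[A[i][s] * A[j][s] / A[s][s] for j in range(n)] for i in range(n)]

v = [1.0, 2.0, 3.0]
A = [[x * y for y in v] for x in v]        # rank-one SPSD test matrix v v^T
A_hat = nystrom_rank1(A, s=0)
err = max(abs(A[i][j] - A_hat[i][j]) for i in range(3) for j in range(3))
```

The paper's contrast between uniform and leverage-score sampling amounts to how the index set S is drawn before forming this same C W⁺ Cᡀ approximation.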
modern programming follows the continuous integration (CI) and continuous deployment (CD) approach rather than the traditional waterfall model. even the development of modern programming languages uses the CI/CD approach to swiftly provide new language features and to adapt to new development environments. unlike in the conventional approach, in the modern CI/CD approach a language specification is no longer the oracle of the language semantics, because both the specification and its implementations can co-evolve. in this setting, both the specification and implementations may have bugs, and guaranteeing their correctness is non-trivial. in this paper, we propose a novel N+1-version differential testing to resolve the problem. unlike traditional differential testing, our approach consists of four steps: 1) automatically synthesize programs guided by the syntax and semantics from a given language specification, 2) generate conformance tests by injecting assertions into the synthesized programs to check their final program states, 3) detect bugs in the specification and implementations by executing the conformance tests on multiple implementations, and 4) localize bugs in the specification using statistical information. we actualize our approach for the JavaScript programming language via JEST, which performs N+1-version differential testing for modern JavaScript engines and ECMAScript, the language specification describing the syntax and semantics of JavaScript in natural language. we evaluated JEST with four JavaScript engines that support all modern JavaScript language features and the latest version of ECMAScript (ES11, 2020). JEST automatically synthesized 1,700 programs that covered 97.78% of the syntax and 87.70% of the semantics of ES11. using assertion injection, it detected 44 engine bugs in the four engines and 27 specification bugs in ES11.
arxiv:2102.07498
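The core of N+1-version differential testing can be sketched in miniature: run the same synthesized input through several implementations and compare each result against a specification oracle, flagging any disagreement as a bug in either an implementation or the specification. The engines and oracle below are hypothetical Python stand-ins, not JEST code:

```python
# Stand-in "engines": two implementations of the same (hypothetical) spec.
def engine_a(x):          # behaves as specified
    return x * 2

def engine_b(x):          # buggy implementation: wrong for negative inputs
    return x * 2 if x >= 0 else x

def spec_oracle(x):       # final state dictated by the hypothetical spec
    return x * 2

def differential_test(engines, oracle, inputs):
    """Map each failing input to the names of engines disagreeing with the spec."""
    report = {}
    for x in inputs:
        expected = oracle(x)
        bad = [name for name, run in engines.items() if run(x) != expected]
        if bad:
            report[x] = bad
    return report

bugs = differential_test({"A": engine_a, "B": engine_b}, spec_oracle, [-3, 0, 7])
# bugs == {-3: ["B"]}: engine B disagrees with the spec on input -3
```

In the paper's setting the "inputs" are whole synthesized programs, the oracle is the final program state predicted by the mechanized specification, and an input on which *all* engines agree against the specification points to a specification bug rather than an engine bug.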
we investigate quantum symmetries in terms of their large-time stability with respect to perturbations of the Hamiltonian. we find a complete algebraic characterization of the set of symmetries robust against a single perturbation, and we use this result to characterize their stability with respect to arbitrary sets of perturbations.
arxiv:2411.18529
we analyze the transverse Kähler-Ricci flow equation on the Sasaki-Einstein space $Y^{p,q}$. explicit solutions are produced representing new five-dimensional Sasaki structures. solutions which do not modify the transverse metric preserve the Sasaki-Einstein feature of the contact structure. if the transverse metric is altered, the deformed metrics remain Sasaki, but not Einstein.
arxiv:1910.12495
we experimentally investigate the phase winding dynamics of a harmonically trapped two-component BEC subject to microwave-induced Rabi oscillations between two pseudospin components. while the single-particle dynamics can be explained by mapping the system to a two-component Bose-Hubbard model, nonlinearities due to the interatomic repulsion lead to new effects observed in the experiments: in the presence of a linear magnetic field gradient, a qualitatively stable moving magnetic order that is similar to antiferromagnetic order is observed after critical winding is achieved. we also demonstrate how the phase winding can be used as a new tool to generate copious dark-bright solitons in a two-component BEC, opening the door to new experimental studies of these nonlinear features.
arxiv:1306.6102
we analyze the behavior of a suspension of rigid rod-like particles in shear flow using a modified version of the Doi model, and construct diagrams for phase coexistence under conditions of constant imposed stress and constant imposed strain rate, among paranematic, flow-aligning nematic, and log-rolling nematic states. we calculate the effective constitutive relations that would be measured through the regime of phase separation into shear bands. we calculate phase coexistence by examining the stability of interfacial steady states and find a wide range of possible "phase" behaviors.
arxiv:cond-mat/9904236
we study the mobility of an overdamped particle in a periodic potential tilted by a constant force. the mobility exhibits stochastic resonance in inhomogeneous systems with a space-dependent friction coefficient. the result indicates that the presence of an oscillating external field is not essential for the observability of stochastic resonance, at least in an inhomogeneous medium.
arxiv:cond-mat/9902216
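The basic quantity studied above, the mobility of an overdamped particle in a tilted periodic potential, can be estimated with a short Euler-Maruyama simulation. For simplicity this sketch uses a constant (homogeneous) friction coefficient, so it illustrates only the mobility measurement, not the space-dependent friction that is central to the paper; all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdamped Langevin dynamics in a tilted washboard potential:
#   gamma * dx/dt = F - U'(x) + sqrt(2 * gamma * kT) * xi(t),
# with U(x) = U0 * cos(x). The mobility is estimated as <v> / F.
gamma, kT, U0, F = 1.0, 0.5, 1.0, 0.8
dt, n_steps, n_particles = 1e-3, 20_000, 100

x = np.zeros(n_particles)
for _ in range(n_steps):
    force = F + U0 * np.sin(x)          # F - U'(x), since U'(x) = -U0*sin(x)
    noise = rng.standard_normal(n_particles)
    x += (force / gamma) * dt + np.sqrt(2 * kT * dt / gamma) * noise

t_total = n_steps * dt
mobility = np.mean(x) / (t_total * F)   # ensemble estimate of <v>/F
free_mobility = 1.0 / gamma             # mobility without the potential
```

The periodic barriers suppress the mobility below its free value 1/gamma; reproducing the paper's stochastic resonance would additionally require replacing the constant `gamma` with a space-dependent friction profile, which demands a careful (Itô versus Stratonovich) treatment of the multiplicative noise.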
this is a book about Lieb's simplified approach to the Bose gas, which is a family of effective single-particle equations to study the ground state of many-body systems of interacting bosons. it was introduced by Lieb in 1963, and recently found to have some rather intriguing properties. one of the equations of the approach, called the simple equation, has been proved to make a prediction for the ground state energy that is asymptotically accurate in both the low- and high-density regimes. its predictions for the condensate fraction, two-point correlation function, and momentum distribution also agree with those of Bogolyubov theory at low density, despite the fact that it is based on ideas that are very different from those of Bogolyubov theory. in addition, another equation of the approach, called the big equation, has been found to yield numerically accurate results for these observables over the entire range of densities for certain interaction potentials. this book is an introduction to Lieb's simplified approach, and little background knowledge is assumed. we begin with a discussion of Bose gases and quantum statistical mechanics, and the notion of Bose-Einstein condensation, which is one of the main motivations for the approach. we then move on to an abridged bibliographical overview of known theorems and conjectures about Bose gases in the thermodynamic limit. next, we introduce Lieb's simplified approach and its derivation from the many-body problem. we then give an overview of results, both analytical and numerical, on the predictions of the approach. we conclude with a list of open problems.
arxiv:2308.00290
we investigate the geometric properties of lightlike surfaces in the Minkowski space $\mathbb{R}^{2,1}$, using Cartan's method of moving frames to compute a complete set of local invariants for such surfaces. using these invariants, we give a complete local classification of lightlike surfaces of constant type in $\mathbb{R}^{2,1}$ and construct new examples of such surfaces.
arxiv:1302.7015
long-wavelength dynamics of 1D Bose gases with repulsive contact interactions can be captured by generalized hydrodynamics (GHD), which predicts the evolution of the local rapidity distribution. the latter corresponds to the momentum distribution of quasiparticles, which have infinite lifetime owing to the integrability of the system. here we experimentally investigate the dynamics for an initial situation that is the junction of two semi-infinite systems in different stationary states, a protocol referred to as the "bipartite quench" protocol. more precisely, we realise the particular case where one half of the system is the vacuum state. we show that the evolution of the boundary density profile exhibits ballistic dynamics obeying the Euler hydrodynamic scaling. the boundary profiles are similar to the ones predicted by zero-temperature GHD in the quasi-BEC regime, with deviations due to non-zero entropy effects. we show that this protocol, provided the boundary profile is measured with infinite precision, permits reconstruction of the rapidity distribution of the initial state. for our data, we extract the initial rapidity distribution by fitting the boundary profile, using a 3-parameter ansatz that goes beyond the thermal assumption. finally, we investigate the local rapidity distribution inside the boundary profile, which, according to GHD, presents, on one side, features of zero-entropy states. the measured distribution shows the asymmetry predicted by GHD, although unelucidated deviations remain.
arxiv:2505.05839
He et al. (2018) have called into question the utility of pre-training by showing that training from scratch can often yield similar performance to pre-training. we show that although pre-training may not improve performance on traditional classification metrics, it improves model robustness and uncertainty estimates. through extensive experiments on adversarial examples, label corruption, class imbalance, out-of-distribution detection, and confidence calibration, we demonstrate large gains from pre-training and complementary effects with task-specific methods. we introduce adversarial pre-training and show approximately a 10% absolute improvement over the previous state-of-the-art in adversarial robustness. in some cases, using pre-training without task-specific methods also surpasses the state-of-the-art, highlighting the need for pre-training when evaluating future methods on robustness and uncertainty tasks.
arxiv:1901.09960
shot boundary detection in video is one of the key stages of video data processing. a new method for shot boundary detection based on several video features, such as color histograms and object boundaries, has been proposed. the developed algorithm was tested on the open BBC Planet Earth [1] and RAI [2] datasets; the MSU CC datasets, based on videos used in the video codec comparison conducted at MSU, as well as videos from the IBM set, were also compiled. the total dataset for algorithm development and testing exceeded the known TRECVID datasets. based on the test results, the proposed algorithm for scene change detection outperformed its counterparts with a final F-score of 0.9794.
arxiv:2109.01057
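A histogram-based cut detector of the kind mentioned above can be sketched as follows; the bin count, L1 distance, and threshold are illustrative choices, not the paper's algorithm, which additionally uses object-boundary features:

```python
import numpy as np

rng = np.random.default_rng(2)

def color_histogram(frame, bins=8):
    """Normalized per-channel histogram, concatenated across R, G, B."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def detect_cuts(frames, threshold=0.3):
    """Flag frame indices where consecutive histograms differ strongly (L1)."""
    cuts = []
    prev = color_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = color_histogram(frames[i])
        if np.abs(cur - prev).sum() > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Synthetic "video": a shot of dark frames, then an abrupt cut to bright frames.
dark = [rng.integers(0, 60, (32, 32, 3)) for _ in range(5)]
bright = [rng.integers(180, 256, (32, 32, 3)) for _ in range(5)]
cuts = detect_cuts(dark + bright)
# cuts == [5]: the only histogram jump is at the dark-to-bright transition
```

Histogram differences of this kind catch hard cuts well but are blind to gradual transitions (fades, dissolves), which is one reason production detectors combine several features.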
several finite-dimensional quasi-probability representations of quantum states have been proposed to study various problems in quantum information theory and quantum foundations. these representations are often defined only in restricted dimensions, and their physical significance in contexts such as drawing quantum-classical comparisons is limited by the non-uniqueness of the particular representation. here we show how the mathematical theory of frames provides a unified formalism which accommodates all known quasi-probability representations of finite-dimensional quantum systems. moreover, we show that any quasi-probability representation satisfying two reasonable properties is equivalent to a frame representation, and then prove that any such representation of quantum mechanics must exhibit either negativity or a deformed probability calculus.
arxiv:0711.2658
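A concrete instance of the negativity discussed above is easy to exhibit numerically: expanding a qubit state over the tetrahedral SIC projectors yields coefficients that reconstruct the state exactly yet can be negative. This is a standard qubit example, not code from the paper:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# SIC-POVM in dimension 2: projectors onto the tetrahedral Bloch vectors.
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
projectors = [(I2 + a[0]*sx + a[1]*sy + a[2]*sz) / 2 for a in bloch]

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # the state |0><0|

# POVM probabilities p_k = Tr(rho Pi_k)/2: always non-negative...
p = np.array([np.trace(rho @ P).real / 2 for P in projectors])

# ...versus the frame-expansion coefficients q_k = 3 p_k - 1/2, which
# reconstruct rho = sum_k q_k Pi_k exactly but can go negative.
q = 3 * p - 0.5
rho_rec = sum(qk * P for qk, P in zip(q, projectors))
```

Here the SIC projectors form an operator frame: the measurement outcomes `p` form a genuine probability distribution, while the dual-side expansion coefficients `q` are a quasi-probability distribution whose negativity for |0><0| illustrates the paper's dichotomy between negativity and a deformed probability calculus.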
the path integral molecular dynamics (PIMD) method provides a convenient way to compute the quantum mechanical structural and thermodynamic properties of condensed phase systems at the expense of introducing an additional set of high-frequency normal modes on top of the physical vibrations of the system. efficiently sampling such a wide range of frequencies provides a considerable thermostatting challenge. here we introduce a simple stochastic path integral Langevin equation (PILE) thermostat which exploits an analytic knowledge of the free path integral normal mode frequencies. we also apply a recently developed colored-noise thermostat based on a generalized Langevin equation (GLE), which automatically achieves a similar, frequency-optimized sampling. the sampling efficiencies of these thermostats are compared with that of the more conventional Nosé-Hoover chain (NHC) thermostat for a number of physically relevant properties of the liquid water and hydrogen-in-palladium systems. in nearly every case, the new PILE thermostat is found to perform just as well as the NHC thermostat while allowing for a computationally more efficient implementation. the GLE thermostat also proves to be very robust, delivering near-optimum sampling efficiency in all of the cases considered. we suspect that these simple stochastic thermostats will therefore find useful application in many future PIMD simulations.
arxiv:1009.1045
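The analytic free ring-polymer normal-mode frequencies that such a thermostat exploits, $\omega_k = 2\omega_n \sin(k\pi/n)$ with $\omega_n = n/(\beta\hbar)$, can be checked against a direct diagonalization of the ring-polymer spring matrix. A small sketch in illustrative units with $\hbar = 1$:

```python
import numpy as np

# Free ring-polymer normal-mode frequencies:
#   omega_k = 2 * omega_n * |sin(k * pi / n)|,  omega_n = n / (beta * hbar),
# for k = 0 .. n-1, checked against the eigenvalues of the ring-polymer
# coupling matrix (illustrative units with hbar = 1).
n, beta, hbar = 8, 1.0, 1.0
omega_n = n / (beta * hbar)

k = np.arange(n)
omega_analytic = 2.0 * omega_n * np.abs(np.sin(k * np.pi / n))

# Ring-polymer spring matrix: omega_n^2 * (2I - S - S^T), S a cyclic shift.
S = np.roll(np.eye(n), 1, axis=0)
K = omega_n**2 * (2 * np.eye(n) - S - S.T)
eigvals = np.linalg.eigvalsh(K)                 # eigenvalues are omega_k^2
omega_numeric = np.sqrt(np.clip(eigvals, 0.0, None))
```

The agreement rests on the circulant structure of the spring matrix, whose eigenvalues are $2\omega_n^2(1 - \cos(2\pi k/n)) = 4\omega_n^2 \sin^2(k\pi/n)$; the $k = 0$ centroid mode has zero frequency, which is why a centroid-friction parameter must be chosen separately in such schemes.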