text | source
---|---|
spatial solitons can exist in various kinds of nonlinear optical resonators with and without amplification. in the past years different types of these localized structures such as vortices, bright solitons, dark solitons and phase solitons have been experimentally shown to exist. many links appear to exist to fields different from optics, such as fluids, phase transitions or particle physics. these spatial resonator solitons are bistable and, due to their mobility, suggest schemes of information processing not possible with the fixed bistable elements forming the basic ingredient of traditional electronic processing. the recent demonstration of existence and manipulation of spatial solitons in semiconductor microresonators represents a step in the direction of such optical parallel processing applications. we review pattern formation and solitons in a general context, show some proof-of-principle soliton experiments on slow systems, and describe in more detail the experiments on semiconductor resonator solitons which are aimed at applications.
|
arxiv:nlin/0210073
|
we present a first calculation of the rate for plasmon production in semiconductors from nuclei recoiling against dark matter. the process is analogous to bremsstrahlung of transverse photon modes, but with a longitudinal plasmon mode emitted instead. for dark matter in the 10 mev - 1 gev mass range, we find that the plasmon bremsstrahlung rate is 4 - 5 orders of magnitude smaller than that for elastic scattering, but 4 - 5 orders of magnitude larger than the transverse bremsstrahlung rate. because the plasmon can decay into electronic excitations and has characteristic energy given by the plasma frequency $\omega_p$, with $\omega_p \approx 16$ ev in si crystals, plasmon production provides a distinctive signature and new method to detect nuclear recoils from sub - gev dark matter.
|
arxiv:2003.12077
|
radiative corrections due to initial state radiation in electron - positron annihilation are calculated within the qed structure function approach. results are shown in the next - to - leading logarithmic approximation up to $O(\alpha^4 L^3)$ order, where $L = \ln(s/m_e^2)$ is the large logarithm. several mistakes in previous calculations are corrected. the results are relevant for future high - precision experiments at $e^+e^-$ colliders.
|
arxiv:2405.03443
|
the dsm - 1 was published in 1952 and contains 128 diagnostic categories, described in 132 pages. the dsm - 5 appeared in 2013 and contains 541 diagnostic categories, described in 947 pages. the field of psychology is characterised by a steady proliferation of diagnostic models and subcategories, which seems to be inspired by the principle of "divide and inflate". this approach is in contrast with experimental evidence, which suggests on one hand that traumas of various kinds are often present in the anamnesis of patients and, on the other, that the gene variants implicated are shared across a wide range of diagnoses. in this work i propose a holistic approach, built with tools borrowed from the field of artificial intelligence. my model is based on two pillars. the first one is trauma, which represents the attack on the mind, is psychological in nature and has its origin in the environment. the second pillar is dissociation, which represents the mind's defence in both physiological and pathological conditions, and incorporates all other defence mechanisms. damage to dissociation can be considered as another category of attacks, which are neurobiological in nature and can be of genetic or environmental origin. they include, among other factors, synaptic over - pruning, abuse of drugs and inflammation. these factors concur to weaken the defence, represented by the neural networks that implement the dissociation mechanism in the brain. the model is subsequently used to interpret five mental conditions : ptsd, complex ptsd, dissociative identity disorder, schizophrenia and bipolar disorder. ideally, this is a first step towards building a model that aims to explain a wider range of psychopathological affections with a single theoretical framework. the last part is dedicated to sketching a new psychotherapy for psychological trauma.
|
arxiv:1909.02199
|
lattice dynamics for five ordered pmn supercells were calculated from first principles by the frozen phonon method. maximal symmetries of all supercells are reduced by structural instabilities. lattice modes corresponding to these instabilities, equilibrium ionic positions, and infrared reflectivity spectra were computed for all supercells. results are compared with our experimental data for a chemically disordered pmn single crystal.
|
arxiv:cond-mat/0404349
|
a multivariate polynomial $p(x) = p(x_1, \ldots, x_n)$ is sos-convex if its hessian $H(x)$ can be factored as $H(x) = M^T(x) M(x)$ with a possibly nonsquare polynomial matrix $M(x)$. it is easy to see that sos-convexity is a sufficient condition for convexity of $p(x)$. moreover, the problem of deciding sos-convexity of a polynomial can be cast as the feasibility of a semidefinite program, which can be solved efficiently. motivated by this computational tractability, it has been recently speculated whether sos-convexity is also a necessary condition for convexity of polynomials. in this paper, we give a negative answer to this question by presenting an explicit example of a trivariate homogeneous polynomial of degree eight that is convex but not sos-convex. interestingly, our example is found with software using sum of squares programming techniques and the duality theory of semidefinite optimization. as a byproduct of our numerical procedure, we obtain a simple method for searching over a restricted family of nonnegative polynomials that are not sums of squares.
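as a companion to the sos-convexity discussion above, the sketch below is a purely numerical spot check, not the paper's sum-of-squares program: it samples the hessian of a hypothetical trivariate polynomial at random points and looks for a negative eigenvalue, which would refute convexity; passing the check is consistent with, but not a proof of, convexity.

```python
# numerical spot check of convexity (a necessary condition only); this is not
# the sdp-based sos-convexity test of the paper, and the polynomial is illustrative.
import numpy as np
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")
p = x1**8 + x2**8 + x3**8 + x1**4 * x2**2 * x3**2     # hypothetical trivariate octic

H = sp.hessian(p, (x1, x2, x3))                        # symbolic hessian H(x)
H_fn = sp.lambdify((x1, x2, x3), H, "numpy")           # numeric evaluator

rng = np.random.default_rng(0)
for pt in rng.normal(size=(1000, 3)):
    eigvals = np.linalg.eigvalsh(np.array(H_fn(*pt), dtype=float))
    if eigvals.min() < -1e-9:                          # a negative eigenvalue refutes convexity
        print("not convex at", pt)
        break
else:
    print("hessian psd at all sampled points (consistent with convexity)")
```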
|
arxiv:0903.1287
|
action recognition in still images has seen major improvement in recent years due to advances in human pose estimation, object recognition and stronger feature representations. however, there are still many cases in which performance remains far from that of humans. in this paper, we approach the problem by learning explicitly, and then integrating, three components of transitive actions : ( 1 ) the human body part relevant to the action, ( 2 ) the object being acted upon, and ( 3 ) the specific form of interaction between the person and the object. the process uses class - specific features and relations not used in the past for action recognition and inherently involves two cycles, unlike most standard approaches. we focus on face - related actions ( fra ), a subset of actions that includes several currently challenging categories. we present an average relative improvement of 52 % over the state of the art. we also make a new benchmark publicly available.
|
arxiv:1601.04293
|
we study the geometry of the morphism between moduli spaces of hypersurfaces in $\mathbb{P}^{n-1}$ that sends a smooth hypersurface of degree $d+1$ to its associated hypersurface of degree $n(d-1)$. as a result, we obtain a compactification of the moduli space of smooth hypersurfaces such that the induced rational map from the standard git compactification often contracts the discriminant divisor.
|
arxiv:1807.02082
|
type ia supernovae ( sne ia ) are useful distance indicators in cosmology, provided their luminosity is standardized by applying empirical corrections based on light - curve properties. one factor behind these corrections is dust extinction, accounted for in the color - luminosity relation of the standardization. this relation is usually assumed to be universal, which could potentially introduce systematics into the standardization. the "mass - step" observed for sne ia hubble residuals has been suggested as one such systematic. we seek to obtain a more complete view of dust attenuation properties for a sample of 162 sn ia host galaxies and to probe their link to the "mass - step". we infer attenuation laws towards hosts from both global and local ( 4 kpc ) dark energy survey photometry and composite stellar population model fits. we recover an optical depth / attenuation slope relation, best explained by differing star / dust geometry for different galaxy orientations, which is significantly different from the optical depth / extinction slope relation observed directly for sne. we obtain a large variation of attenuation slopes and confirm that these change with host properties, like stellar mass and age, meaning a universal sn ia correction should ideally not be assumed. analyzing the cosmological standardization, we find evidence for a "mass - step" and a two - dimensional "dust - step", both more pronounced for red sne. although comparable, the two steps are found not to be completely analogous. we conclude that host galaxy dust data cannot fully account for the "mass - step", using either an alternative sn standardization with extinction proxied by host attenuation or a "dust - step" approach.
|
arxiv:2211.14291
|
in this paper, we study asymmetric ramsey properties of the random graph $G_{n,p}$. let $r \in \mathbb{N}$ and $H_1, \ldots, H_r$ be graphs. we write $G_{n,p} \to (H_1, \ldots, H_r)$ to denote the property that whenever we colour the edges of $G_{n,p}$ with colours from the set $[r] := \{1, \ldots, r\}$ there exists $i \in [r]$ and a copy of $H_i$ in $G_{n,p}$ monochromatic in colour $i$. there has been much interest in determining the asymptotic threshold function for this property. rödl and ruciński determined the threshold function for the general symmetric case ; that is, when $H_1 = \cdots = H_r$. a conjecture of kohayakawa and kreuter, if true, would fully resolve the asymmetric problem. recently, the 1 - statement of this conjecture was confirmed by mousset, nenadov and samotij. building on work of marciniszyn, skokan, spöhel and steger, we reduce the 0 - statement of kohayakawa and kreuter's conjecture to a certain deterministic subproblem. to demonstrate the potential of this approach, we show this subproblem can be resolved for almost all pairs of regular graphs. this therefore resolves the 0 - statement for all such pairs of graphs.
|
arxiv:2105.15151
|
despite significant research efforts and advancements, cancer remains a leading cause of mortality. early cancer prediction has become a crucial focus in cancer research to streamline patient care and improve treatment outcomes. manual tumor detection by histopathologists can be time consuming, prompting the need for computerized methods to expedite treatment planning. traditional approaches to tumor detection rely on supervised learning, which necessitates a large amount of annotated data for model training. however, acquiring such extensive labeled data can be laborious and time-intensive. this research examines three learning environments, supervised learning ( sl ), semi-supervised learning ( semi-sl ), and self-supervised learning ( self-sl ), to predict kidney, lung, and breast cancer. three pre-trained deep learning models ( residual network-50, visual geometry group-16, and efficientnetb0 ) are evaluated based on these learning settings using seven carefully curated training sets. to create the first training set ( ts1 ), sl is applied to all annotated image samples. five training sets ( ts2-ts6 ) with different ratios of labeled and unlabeled cancer images are used to evaluate semi-sl. unlabeled cancer images from the final training set ( ts7 ) are utilized for self-sl assessment. among the different learning environments, outcomes from the semi-sl setting show a strong degree of agreement with the outcomes achieved in the sl setting. the uniform pattern of observations from the pre-trained models across all three datasets validates the methodology and techniques of the research. based on a modest number of labeled samples and minimal computing cost, our study suggests that the semi-sl option can be a highly viable replacement for the sl option under label annotation constraint scenarios.
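the abstract does not spell out the exact semi-supervised recipe, so the snippet below is only an illustrative sketch of one common semi-sl pattern ( confidence-thresholded pseudo-labeling with a resnet-50 backbone ); the random stand-in tensors, threshold, optimizer and dimensions are assumptions, not the paper's settings.

```python
# minimal pseudo-labeling sketch for a semi-supervised setting; random tensors
# stand in for histopathology images, and all hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None, num_classes=2)          # binary tumor / normal head
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

labeled_x, labeled_y = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
unlabeled_x = torch.randn(16, 3, 224, 224)

for epoch in range(2):
    loss = ce(model(labeled_x), labeled_y)             # supervised term on the labeled pool
    with torch.no_grad():                              # pseudo-label confident unlabeled samples
        probs = model(unlabeled_x).softmax(dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > 0.9
    if keep.any():
        loss = loss + ce(model(unlabeled_x[keep]), pseudo[keep])
    opt.zero_grad(); loss.backward(); opt.step()
```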
|
arxiv:2408.07988
|
we study the lyapunov stability of a family of nongeneric equilibria with spin for underwater vehicles with noncoincident centers. the nongeneric equilibria belong to singular symplectic leaves that are not characterized as a preimage of a regular value of the casimir functions. we find an invariant submanifold such that the nongeneric equilibria belong to a preimage of a regular value that involves sub - casimir functions. we obtain results for nonlinear stability on this invariant submanifold.
|
arxiv:1411.4388
|
consider $L$ groups of point sources or spike trains, with the $l^{\text{th}}$ group represented by $x_l(t)$. for a function $g : \mathbb{R} \rightarrow \mathbb{R}$, let $g_l(t) = g(t/\mu_l)$ denote a point spread function with scale $\mu_l > 0$, and with $\mu_1 < \cdots < \mu_L$. with $y(t) = \sum_{l=1}^{L} (g_l \star x_l)(t)$, our goal is to recover the source parameters given samples of $y$, or given the fourier samples of $y$. this problem is a generalization of the usual super - resolution setup wherein $L = 1$ ; we call this the multi - kernel unmixing super - resolution problem. assuming access to fourier samples of $y$, we derive an algorithm for this problem for estimating the source parameters of each group, along with precise non - asymptotic guarantees. our approach involves estimating the group parameters sequentially in the order of increasing scale parameters, i. e., from group $1$ to $L$. in particular, the estimation process at stage $1 \leq l \leq L$ involves ( i ) carefully sampling the tail of the fourier transform of $y$, ( ii ) a \emph{deflation} step wherein we subtract the contribution of the groups processed thus far from the obtained fourier samples, and ( iii ) applying moitra's modified matrix pencil method on a deconvolved version of the samples in ( ii ).
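a toy numpy illustration of the deflation idea in step ( ii ): the fourier-domain contribution of the groups estimated so far is subtracted before the next group is processed. the gaussian point spread function, the spike parameters and the sampling band are hypothetical choices, not the paper's algorithm or guarantees.

```python
# sketch of the deflation step: subtract the estimated contribution of earlier
# groups from the observed fourier samples; all parameters are illustrative.
import numpy as np

def kernel_ft(w, mu):
    """fourier transform of a gaussian psf with scale mu (illustrative kernel)."""
    return np.exp(-0.5 * (mu * w) ** 2)

def group_ft(w, amplitudes, locations, mu):
    """fourier samples of (g_mu * x)(t) for a spike train x."""
    spikes = sum(a * np.exp(-1j * w * t0) for a, t0 in zip(amplitudes, locations))
    return kernel_ft(w, mu) * spikes

w = np.linspace(50, 80, 256)                    # "tail" frequencies used at this stage
observed = group_ft(w, [1.0, 0.7], [0.2, 0.5], mu=0.05) \
         + group_ft(w, [0.9], [0.8], mu=0.50)

# suppose group 1 (smallest scale, mu = 0.05) has already been estimated:
estimated_group1 = group_ft(w, [1.0, 0.7], [0.2, 0.5], mu=0.05)
deflated = observed - estimated_group1          # input to the matrix pencil step
print(np.abs(deflated).max())
```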
|
arxiv:1807.02862
|
we calculate electron - phonon scattering and intrinsic transport properties of black phosphorus monolayer using tight - binding and boltzmann treatments as a function of temperature, carrier density, and electric field. the low - field mobility shows weak dependence on density and, at room temperature, falls in the range of 300 - 1000 cm^2 / vs in the armchair direction and 50 - 120 cm^2 / vs in the zig - zag direction, with the anisotropy due to an effective mass difference. at high fields, drift velocity is linear with electric field up to 1 - 2 v / micron, reaching values of 10^7 cm / s in the armchair direction, unless self - heating effects are included.
|
arxiv:1704.01086
|
recent years have seen an increasing use of signal temporal logic ( stl ) as a formal specification language for symbolic control, due to its expressiveness and closeness to natural language. furthermore, stl specifications can be encoded as cost functions using stl ' s robust semantics, transforming the synthesis problem into an optimization problem. unfortunately, these cost functions are non - smooth and non - convex, and exact solutions using mixed - integer programming do not scale well. recent work has focused on using smooth approximations of robustness, which enable faster gradient - based methods to find local maxima, at the expense of soundness and / or completeness. we propose a novel robustness approximation that is smooth everywhere, sound, and asymptotically complete. our approach combines the benefits of existing approximations, while enabling an explicit tradeoff between conservativeness and completeness.
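for context, the most common smooth stand-in for the max / min in stl robust semantics is the log-sum-exp soft maximum sketched below; it is smooth but over-approximates the true maximum ( so it can be optimistic ), which is precisely the kind of trade-off the approximation proposed above is designed to avoid. the scale parameter k and the example trace are illustrative.

```python
# log-sum-exp smoothing of max/min, a common baseline surrogate for stl
# robustness; not the sound-and-complete approximation proposed in the paper.
import numpy as np

def softmax_lse(r, k=10.0):
    """smooth over-approximation of max(r); tightens as k grows."""
    return np.log(np.sum(np.exp(k * np.asarray(r)))) / k

def softmin_lse(r, k=10.0):
    """smooth under-approximation of min(r)."""
    return -softmax_lse(-np.asarray(r), k)

# robustness of "eventually (x > 0)" over a trace is the max over time of x_t
trace = np.array([-0.5, -0.1, 0.3, 0.2])
print(max(trace), softmax_lse(trace, k=5.0), softmax_lse(trace, k=50.0))
```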
|
arxiv:2006.05239
|
we present 2"-10" imaging of eleven transitions from nine molecular species across the nuclear bar in maffei 2. the data were obtained with the bima and ovro interferometers. the ten detected transitions are compared with existing co isotopologues, hcn, cs and millimeter continuum data. dramatic spatial variations among the mapped species are observed across the nuclear bar. a principal component analysis is performed to characterize correlations between the transitions, star formation and molecular column density. the analysis reveals that hcn, hnc, hco + and 3 mm continuum are tightly correlated, indicating a direct connection to massive star formation. we find two main morphologically distinct chemical groups : ch3oh, sio and hnco comprising the grain chemistry molecules, versus hcn, hnc, hco + and c2h, molecules strong in the presence of star formation. the grain chemistry molecules, hnco, ch3oh and sio, trace hydrodynamical bar shocks. the near constancy of the hnco / ch3oh, sio / ch3oh and sio / hnco ratios argues that shock properties are uniform across the nucleus. hcn / hco +, hcn / hnc, hcn / cs and hcn / co ratios are explained primarily by variations in density. high hco + / n2h + ratios are correlated with the c2h line, suggesting that this ratio may be a powerful new dense photon - dominated region ( pdr ) probe in external galaxies. c2h reveals a molecular outflow along the minor axis. the morphology and kinematics of the outflow are consistent with an outflow age of 6 - 7 myrs.
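a minimal sketch of the principal component analysis step described above, with synthetic arrays standing in for the bima / ovro line maps: treating each pixel as a sample and each transition as a variable, the leading components expose which lines vary together across the bar. map sizes and the number of components are arbitrary choices.

```python
# pca over line maps: rows = pixels, columns = transitions (synthetic data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_lines, n_pixels = 10, 4096                 # e.g. ten transitions on a 64x64 grid
maps = rng.lognormal(size=(n_lines, n_pixels))

# standardize each line map so the analysis compares morphology, not brightness
z = (maps - maps.mean(axis=1, keepdims=True)) / maps.std(axis=1, keepdims=True)

pca = PCA(n_components=3).fit(z.T)           # samples = pixels, variables = lines
print(pca.explained_variance_ratio_)         # variance captured by each component
print(pca.components_)                       # loadings: how each line enters a component
```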
|
arxiv:1206.4098
|
we compare two different extensions of the ancient game of nim : moore's nim $(n, \leq k)$ and exact nim $(n, = k)$. given integers $n$ and $k$ such that $0 < k \leq n$, we consider $n$ piles of stones. two players alternate turns. by one move it is allowed to choose and reduce any ( i ) at most $k$ or ( ii ) exactly $k$ piles of stones in games nim $(n, \leq k)$ and nim $(n, = k)$, respectively. the player who has to move but cannot is the loser. both games coincide with nim when $k = 1$. game nim $(n, \leq k)$ was introduced by moore ( 1910 ), who characterized its sprague - grundy ( sg ) values 0 ( that is, p - positions ) and 1. the first open case is sg value 2 for nim $(4, \leq 2)$. game nim $(n, = k)$ was introduced in 2018. an explicit formula for its sg function was computed for $2k \geq n$. in contrast, the case $2k < n$ seems difficult : even the p - positions are not known already for nim $(5, = 2)$. yet, it seems that the p - positions of games nim $(n+1, = 2)$ and nim $(n+1, \leq 2)$ are closely related. ( note that p - positions of the latter are known. ) here we provide some theoretical and computational evidence of such a relation for $n = 5$.
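as a computational companion to the games described above, the brute-force search below evaluates sprague - grundy values of small positions in moore's nim $(n, \leq k)$ directly from the definition ( mex over options ); it is an exploratory tool for small piles, not the theoretical characterization discussed in the abstract.

```python
# brute-force sprague-grundy values for moore's nim(n, <= k) on small piles.
from functools import lru_cache
from itertools import combinations, product

def sg_moore(position, k):
    """sg value of a position (tuple of pile sizes) in nim(n, <= k)."""
    @lru_cache(maxsize=None)
    def sg(pos):
        moves = set()
        piles = [i for i, p in enumerate(pos) if p > 0]
        # choose between 1 and k nonempty piles and strictly reduce each of them
        for r in range(1, min(k, len(piles)) + 1):
            for chosen in combinations(piles, r):
                for new_sizes in product(*[range(pos[i]) for i in chosen]):
                    nxt = list(pos)
                    for i, s in zip(chosen, new_sizes):
                        nxt[i] = s
                    moves.add(sg(tuple(nxt)))
        g = 0
        while g in moves:        # minimum excludant (mex)
            g += 1
        return g
    return sg(tuple(position))

print(sg_moore((1, 2, 3), k=1))  # ordinary nim: 1 xor 2 xor 3 = 0
print(sg_moore((1, 2, 3), k=2))  # moore's nim(3, <= 2)
```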
|
arxiv:2311.18772
|
generating logical form equivalents of human language is a fresh way to employ neural architectures where long short - term memory effectively captures dependencies in both encoder and decoder units. the logical form of the sequence usually preserves information from the natural language side in the form of similar tokens, and recently a copying mechanism has been proposed which increases the probability of outputting tokens from the source input through decoding. in this paper we propose a caching mechanism as a more general form of the copying mechanism which also weighs all the words from the source vocabulary according to their relation to the current decoding context. our results confirm that the proposed method achieves improvements in sequence / token - level accuracy on sequence to logical form tasks. further experiments on cross - domain adversarial attacks show substantial improvements when using the most influential examples of other domains for training.
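for orientation, the snippet below sketches the standard copying mechanism that the proposed caching mechanism generalizes: at each decoding step the vocabulary distribution is mixed with an attention-derived distribution over source tokens. the shapes, the random stand-in states and the gate are placeholders, and this is not the caching formulation of the paper.

```python
# one decoding step of a generic copy mechanism (illustrative shapes and states).
import torch
import torch.nn.functional as F

vocab_size, src_len, hidden = 100, 7, 32
dec_state = torch.randn(1, hidden)                        # current decoder state
enc_states = torch.randn(src_len, hidden)                 # encoder states of the source
src_token_ids = torch.randint(0, vocab_size, (src_len,))  # source tokens as vocab ids

gen_logits = torch.randn(1, vocab_size)                   # decoder's generation head
attn = F.softmax(dec_state @ enc_states.T, dim=-1)        # attention over source positions
p_gen = torch.sigmoid(torch.randn(1))                     # gate: generate vs copy

p_vocab = p_gen * F.softmax(gen_logits, dim=-1)
p_copy = torch.zeros(1, vocab_size)
p_copy.index_add_(1, src_token_ids, (1 - p_gen) * attn)   # scatter copy mass onto vocab
p_final = p_vocab + p_copy                                # mixture sums to 1
print(p_final.sum())
```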
|
arxiv:1807.07333
|
we review a nonparametric version of amari's information geometry in which the set of positive probability densities on a given sample space is endowed with an atlas of charts to form a differentiable manifold modeled on orlicz banach spaces. this nonparametric setting is used to discuss typical problems in machine learning and statistical physics, such as relaxed optimization, the kullback - leibler divergence, boltzmann entropy, and the boltzmann equation.
|
arxiv:1308.5312
|
the aim of this work is the study of differential equations governing non - dissipative nonlinear oscillators ; these arise in different physical models, such as the treatment of relativistic oscillators, up to generalizations to duffing's relativistic oscillators, and in non - relativistic models dealing with cables with an attached midpoint mass, or some harmonic duffing oscillators.
|
arxiv:2211.01035
|
r - parity can be extended to a continuous global $U(1)_R$ symmetry. we investigate whether an anomalous $U(1)_R$ can be identified as the pq symmetry suitable for solving the strong cp problem within supersymmetric extensions of the standard model. in this case, $U(1)_R$ is broken at some intermediate scale and the qcd axion is the r - axion. moreover, the r - symmetry can be naturally gauged via the green - schwarz mechanism within completions to supergravity, thus evading the axion quality problem. obstacles to realizing this scenario are highlighted and phenomenologically viable approaches are identified.
|
arxiv:2407.17557
|
the reflectance of graphene is investigated in the framework of the dirac model with account of its realistic properties, such as nonzero chemical potential and band gap, at any temperature. for this purpose, the exact reflection coefficients of the electromagnetic waves on a graphene sheet expressed via the polarization tensor and ultimately via the electrical conductivity of graphene have been used. the reflectance of graphene is computed as a function of frequency and chemical potential at different temperatures and values of the band - gap parameter. the minimum values of the reflectance are found which are reached in the infrared domain at the points of vanishing imaginary part of the conductivity of graphene. for a gapped graphene, the maximum reflectance equal to unity is reached at the points where the imaginary part of conductivity diverges. the computational results demonstrate an interesting interplay between the band gap and chemical potential in their combined effect on the reflectance. specifically, there are wide frequency intervals where the reflectance of graphene increases with increasing chemical potential and decreasing band gap. the numerical computations are found to be in good agreement with the analytic asymptotic expressions in the regions of their applicability. several technological areas, where the obtained results could be used, are listed.
|
arxiv:1807.09130
|
recent experimental studies on near - field thermophotovoltaic ( tpv ) energy conversion have mainly focused on enhancing performance via photon tunneling of evanescent waves. in the sub - micron gap, however, there exist peculiar phenomena caused by the interference of propagating waves, which is seldom observed due to the dramatic increase of the radiation by evanescent waves in the full spectrum range. here, we experimentally demonstrate the oscillatory nature of near - field tpv energy conversion in the far - to - near - field transition regime ( 250 - 2600 nm ), where evanescent and propagating modes are comparable due to the selective spectral response of the pv cell. noticeably, it was possible to produce the same amount of photocurrent at different vacuum gaps of 870 and 322 nm, which is 10% larger than the far - field value. considering the great challenges in maintaining a nanoscale vacuum gap in practical devices, this study suggests an alternative approach to the design of a tpv system that will outperform conventional far - field counterparts.
|
arxiv:2108.12117
|
the generalized fault diagram, a data structure for failure analysis based on the influence diagram, is defined. unlike the fault tree, this structure allows for dependence among the basic events and replicated logical elements. a heuristic procedure is developed for efficient processing of these structures.
|
arxiv:1304.2758
|
an interesting theme in complex differential geometry is to find a correspondence between algebraic objects and differential geometric objects. one of the most attractive is the non - abelian hodge theory of simpson. in this paper, pursuing an analogue of the non - abelian hodge theory in the context of $q$-difference modules, we study kobayashi - hitchin correspondences between doubly periodic monopoles and parabolic $q$-difference modules, depending on twistor parameters.
|
arxiv:1902.03551
|
the mid - infrared ratio [ neiii ] 15.6 μm / [ neii ] 12.8 μm is a strong diagnostic of the ionization state of emission line objects, due to its use of only strong neon emission lines that are only weakly affected by extinction. however, this ratio is not available to ground - based telescopes, as only a few spectroscopic windows are available in the mir. to deal with this problem, we aimed to verify whether there exists a conversion law between the ground - accessible, strong mir line ratio [ siv ] / [ neii ] and the diagnostic [ neiii ] / [ neii ] ratio that can serve as a reference for future ground - based observations. we collated the [ siv ] 10.5 μm, [ neii ] 12.8 μm, [ neiii ] 15.6 μm and [ siii ] 18.7 μm emission line fluxes from a wide range of sources in the rich spitzer and iso archives, and compared the [ neiii ] / [ neii ], [ siv ] / [ siii ], and [ siv ] / [ neii ] ratios. we find a strong correlation between the [ siv ] / [ neii ] and [ neiii ] / [ neii ] ratios, with a linear fit of log ( [ neiii ] / [ neii ] ) = 0.81 log ( [ siv ] / [ neii ] ) + 0.36, accurate to a factor of ~ 2 over four orders of magnitude in the line ratios. this demonstrates clearly the ability of ground - based infrared spectrographs to do ionization studies of nebulae.
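the quoted fit can be applied directly; the small helper below ( assuming base-10 logarithms, as is standard for such fits ) converts a measured [ siv ] / [ neii ] ratio into an estimated [ neiii ] / [ neii ] ratio, with the quoted factor-of-~2 scatter kept in mind.

```python
# apply log([neiii]/[neii]) = 0.81 * log([siv]/[neii]) + 0.36 (base-10 assumed).
import numpy as np

def neiii_neii_from_siv_neii(siv_over_neii):
    """estimate [neiii]15.6um / [neii]12.8um from [siv]10.5um / [neii]12.8um."""
    return 10 ** (0.81 * np.log10(siv_over_neii) + 0.36)

print(neiii_neii_from_siv_neii(0.1))   # ~0.35
print(neiii_neii_from_siv_neii(1.0))   # ~2.3
```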
|
arxiv:0810.0010
|
primary power standards in the microwave domain are realized using a calorimetric technique, usually identified with the measurement system used, i. e., the microcalorimeter. it is adjusted for measurement of power ratios with a relative accuracy that, after an appropriate system calibration, is of the order of $10^{-3}$, at least in the microwave domain ( 1 ghz - 18 ghz ). here we describe the calibration process implemented at the istituto nazionale di ricerca metrologica ( italy ) for realizing a coaxial power standard based on indirect heating thermocouples. particular attention is devoted to describing the nearly ideal thermal load used for determining the microcalorimeter losses and their influence on the measurand accuracy.
|
arxiv:2103.04857
|
bathymetry, the study of underwater topography, relies on sonar mapping of submerged structures. these measurements, critical for infrastructure health monitoring, often require expensive instrumentation. the high financial risk associated with sensor damage or vessel loss creates a reluctance to deploy uncrewed surface vessels ( usvs ) for bathymetry. however, crewed - boat bathymetry operations are costly, pose hazards to personnel, and frequently fail to achieve the stable conditions necessary for bathymetry data collection, especially under high currents. further research is essential to advance autonomous control, navigation, and data processing technologies, with a particular focus on bathymetry. there is a notable lack of accessible hardware platforms that allow for integrated research in both bathymetry - focused autonomous control and navigation, as well as data evaluation and processing. this paper addresses this gap through the design and implementation of two complementary usv systems tailored for uncrewed bathymetry research. this includes a low - cost usv for navigation and control research ( nac - usv ) and a second, high - end usv equipped with a high - resolution multi - beam sonar and the associated hardware for bathymetry data quality evaluation and post - processing research ( bep - usv ). the nac - usv facilitates the investigation of autonomous, fail - safe navigation and control, emphasizing the stability requirements for high - quality bathymetry data collection while minimizing the risk to equipment. the bep - usv, which mirrors the nac - usv hardware, is then used for additional control validation and in - depth exploration of bathymetry data evaluation and post - processing methodologies. we detail the design and implementation of both systems, and open source the design. furthermore, we demonstrate the system's effectiveness in a range of operational scenarios.
|
arxiv:2502.12539
|
we study the effect of fractal initial conditions in closed reactive systems in the cases of both mobile and immobile reactants. for the reaction $A + A \to A$, in the absence of diffusion, the mean number of particles $A$ is shown to decay exponentially to a steady state which depends on the details of the initial conditions. the nature of this dependence is demonstrated both analytically and numerically. in contrast, when diffusion is incorporated, it is shown that the mean number of particles $\langle N(t) \rangle$ decays asymptotically as $t^{-d_f/2}$, the memory of the initial conditions being now carried by the dynamical power law exponent. the latter is fully determined by the fractal dimension $d_f$ of the initial conditions.
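to make the fractal-initial-condition ingredient concrete, the sketch below builds a cantor-set occupation pattern on a one-dimensional lattice and estimates its fractal dimension $d_f$ by box counting ( expected value $\ln 2 / \ln 3 \approx 0.63$ ); the reaction dynamics themselves are not simulated here.

```python
# box-counting estimate of d_f for a cantor-set initial particle configuration.
import numpy as np

def cantor_occupation(levels):
    """occupied-site indicator for a cantor set on a lattice of 3**levels sites."""
    occ = np.array([1])
    for _ in range(levels):
        occ = np.concatenate([occ, np.zeros_like(occ), occ])
    return occ

occ = cantor_occupation(8)                    # 3**8 = 6561 sites
sizes, counts = [], []
for b in [3**k for k in range(1, 7)]:         # box sizes 3, 9, ..., 729
    boxes = occ.reshape(-1, b).max(axis=1)    # a box counts if any of its sites is occupied
    sizes.append(b)
    counts.append(boxes.sum())

slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
print("estimated d_f =", -slope)              # ~ln(2)/ln(3) = 0.63
```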
|
arxiv:cond-mat/0305167
|
unitary sampling expectation - value reconstruction ( user ) is used to determine expectation values that are not directly accessible on an analog quantum simulator.
|
arxiv:2203.17249
|
we study the evolution of a metric of a two dimensional black hole under the second loop renormalization group flow, the rg - 2 flow. since the black hole metric is noncompact ( we consider it asymptotically flat ) we adapt some proofs for the compact case to the asymptotically flat case. we find that the appearance of horizons during the evolution is related to the condition of parabolicity of the flow. we also show that the entanglement entropy of a two dimensional black hole is monotonic under the rg - 2 flow. we generalize the results obtained for the first loop approximation and discuss the implications for higher order loops.
|
arxiv:1905.00102
|
let $M$ and $\bar M$ be $n$-dimensional manifolds equipped with suitable borel probability measures $\rho$ and $\bar\rho$. ma, trudinger & wang gave sufficient conditions on a transportation cost $c \in C^4(M \times \bar M)$ to guarantee smoothness of the optimal map pushing $\rho$ forward to $\bar\rho$ ; the necessity of these conditions was deduced by loeper. the present manuscript shows the form of these conditions to be largely dictated by the covariance of the question ; it expresses them via non - negativity of the sectional curvature of certain null - planes in a novel but natural pseudo - riemannian geometry which the cost $c$ induces on the product space $M \times \bar M$. hölder continuity of optimal maps was established for rougher mass distributions by loeper, still relying on a key result of trudinger & wang which required certain structure on the domains and the cost. we go on to develop this theory for mass distributions on differentiable manifolds -- recovering loeper's riemannian examples such as the round sphere as particular cases -- give a direct proof of the key result mentioned above, and revise loeper's hölder continuity argument to make it logically independent of all earlier works, while extending it to less restricted geometries and cost functions even for subdomains $M$ and $\bar M$ of $\mathbb{R}^n$. we also give new examples of geometries satisfying the hypotheses -- obtained using submersions and tensor products -- and some connections to spacelike lagrangian submanifolds in symplectic geometry.
|
arxiv:0712.3077
|
recently, deep reinforcement learning has shown promising results for learning fast heuristics to solve routing problems. meanwhile, most of the solvers suffer from generalizing to an unseen distribution or distributions with different scales. to address this issue, we propose a novel architecture, called invariant nested view transformer ( invit ), which is designed to enforce a nested design together with invariant views inside the encoders to promote the generalizability of the learned solver. it applies a modified policy gradient algorithm enhanced with data augmentations. we demonstrate that the proposed invit achieves a dominant generalization performance on both tsp and cvrp problems with various distributions and different problem scales.
|
arxiv:2402.02317
|
we develop a method of constructing excited states in one dimensional spin chains which are derived from the $SU(2)_1$ wess - zumino - witten conformal field theory ( cft ) using a parent hamiltonian approach. the resulting systems are equivalent to the haldane - shastry model. in our ansatz, correlation functions between primary fields correspond to the ground state of the spin system, whereas excited states are obtained by insertion of descendant fields. our construction is based on the current algebra of the cft and emphasizes the close relation between the spectrum of the spin system and the underlying cft. this general structure might imply that the method could be applied to a wider range of model systems.
|
arxiv:1501.07557
|
the vacuum expectation value ( vev ) of the fermionic current density is investigated in the geometry of two parallel branes in locally ads spacetime with a part of the spatial dimensions compactified to a torus. along the toral dimensions quasiperiodicity conditions are imposed with general phases and the presence of a constant gauge field is assumed. different types of boundary conditions are discussed on the branes, including the bag boundary condition and the conditions arising in $Z_2$ - symmetric braneworld models. nonzero vacuum currents appear along the compact dimensions only. in the region between the branes they are decomposed into brane - free and brane - induced contributions. both these contributions are periodic functions of the magnetic flux enclosed by compact dimensions, with the period equal to the flux quantum. depending on the boundary conditions, the presence of the branes can either increase or decrease the vacuum current density. for a part of the boundary conditions, a memory effect is present in the limit when one of the branes tends to the ads boundary. unlike the fermion condensate and the vev of the energy - momentum tensor, the vev of the current density is finite on the branes. applications are given to higher - dimensional generalizations of the randall - sundrum models with two branes and with a toroidally compact subspace. the features of the fermionic current are discussed in odd - dimensional parity and time - reversal symmetric models. the corresponding results for three - dimensional spacetime are applied to finite length curved graphene tubes threaded by a magnetic flux. it is shown that a nonzero current density can also appear in the absence of the magnetic flux if the fields corresponding to two different points of the brillouin zone obey different boundary conditions on the tube edges.
|
arxiv:1907.13379
|
we report on a measurement of the $D^+$ - meson production cross section as a function of transverse momentum ( $p_T$ ) in proton - antiproton ( $p\bar{p}$ ) collisions at 1.96 tev center - of - mass energy, using the full data set collected by the collider detector at fermilab in tevatron run ii and corresponding to 10 fb$^{-1}$ of integrated luminosity. we use $D^+ \to K^- \pi^+ \pi^+$ decays fully reconstructed in the central rapidity region $|y| < 1$ with transverse momentum down to 1.5 gev/$c$, a range previously unexplored in $p\bar{p}$ collisions. inelastic $p\bar{p}$ - scattering events are selected online using minimally - biasing requirements followed by an optimized offline selection. the $K^- \pi^+ \pi^+$ mass distribution is used to identify the $D^+$ signal, and the $D^+$ transverse impact - parameter distribution is used to separate prompt production, occurring directly in the hard scattering process, from secondary production from $b$ - hadron decays. we obtain a prompt $D^+$ signal of 2950 candidates corresponding to a total cross section $\sigma(D^+, 1.5 < p_T < 14.5~\mbox{gev/}c, |y| < 1) = 71.9 \pm 6.8 (\mbox{stat}) \pm 9.3 (\mbox{syst})~\mu$b. while the measured cross sections are consistent with theoretical estimates in each $p_T$ bin, the shape of the observed $p_T$ spectrum is softer than the expectation from quantum chromodynamics. the results are unique in $p\bar{p}$ collisions and can improve the shape and uncertainties of future predictions.
|
arxiv:1610.08989
|
recent advancements in large - scale pre - training have shown the potential to learn generalizable representations for downstream tasks. in the graph domain, however, capturing and transferring structural information across different graph domains remains challenging, primarily due to the inherent differences in topological patterns across various contexts. additionally, most existing models struggle to capture the complexity of rich graph structures, leading to inadequate exploration of the embedding space. to address these challenges, we propose gfse, a universal graph structural encoder designed to capture transferable structural patterns across diverse domains such as molecular graphs, social networks, and citation networks. gfse is the first cross - domain graph structural encoder pre - trained with multiple self - supervised learning objectives. built on a graph transformer, gfse incorporates attention mechanisms informed by graph inductive bias, enabling it to encode intricate multi - level and fine - grained topological features. the pre - trained gfse produces generic and theoretically expressive positional and structural encoding for graphs, which can be seamlessly integrated with various downstream graph feature encoders, including graph neural networks for vectorized features and large language models for text - attributed graphs. comprehensive experiments on synthetic and real - world datasets demonstrate gfse's capability to significantly enhance the model's performance while requiring substantially less task - specific fine - tuning. notably, gfse achieves state - of - the - art performance in 81.6% of evaluated cases, spanning diverse graph models and datasets, highlighting its potential as a powerful and versatile encoder for graph - structured data.
|
arxiv:2504.10917
|
antisymmetric tensor or alternating form. symmetric tensors occur widely in engineering, physics and mathematics. === galois theory === given a polynomial, it may be that some of the roots are connected by various algebraic equations. for example, it may be that for two of the roots, say a and b, that a^2 + 5b^3 = 7. the central idea of galois theory is to consider those permutations ( or rearrangements ) of the roots having the property that any algebraic equation satisfied by the roots is still satisfied after the roots have been permuted. an important proviso is that we restrict ourselves to algebraic equations whose coefficients are rational numbers. thus, galois theory studies the symmetries inherent in algebraic equations. === automorphisms of algebraic objects === in abstract algebra, an automorphism is an isomorphism from a mathematical object to itself. it is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. the set of all automorphisms of an object forms a group, called the automorphism group. it is, loosely speaking, the symmetry group of the object. ==== examples ==== in set theory, an arbitrary permutation of the elements of a set x is an automorphism. the automorphism group of x is also called the symmetric group on x. in elementary arithmetic, the set of integers, z, considered as a group under addition, has a unique nontrivial automorphism : negation. considered as a ring, however, it has only the trivial automorphism. generally speaking, negation is an automorphism of any abelian group, but not of a ring or field. a group automorphism is a group isomorphism from a group to itself. informally, it is a permutation of the group elements such that the structure remains unchanged. for every group g there is a natural group homomorphism g → aut ( g ) whose image is the group inn ( g ) of inner automorphisms and whose kernel is the center of g. thus, if g has trivial center it can be embedded into its own automorphism group. in linear algebra, an endomorphism of a vector space v is a linear operator v → v. an automorphism is an invertible linear operator on v. when the vector space is finite - dimensional, the automorphism group of v is the same as the general linear group, g
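the claim above that negation is an automorphism of the additive group of integers but not of the ring can be sanity-checked on a finite sample, as in the small illustrative snippet below.

```python
# negation preserves addition (group automorphism) but not multiplication.
f = lambda n: -n
sample = range(-20, 21)
print(all(f(a + b) == f(a) + f(b) for a in sample for b in sample))  # True
print(all(f(a * b) == f(a) * f(b) for a in sample for b in sample))  # False
```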
|
https://en.wikipedia.org/wiki/Symmetry_in_mathematics
|
background : at intermediate energies, transfer reactions hardly occur because the momentum - matching condition is difficult to satisfy. in the standard distorted - wave born approximation, a particle to be transferred must have a momentum similar to the momentum transfer. purpose : we propose a new reaction framework based on the distorted - wave impulse approximation for transfer reactions at intermediate energies, aiming to ease the momentum - matching condition. methods : the $(p, d)$ reaction is described as $p$ - $d$ elastic scattering at backward angles in the target nucleus, with the proton that formed a $pn$ pair left bound in the residual nucleus in the final channel. the momentum transfer is shared by the deuteron in the target nucleus and the proton in the residual nucleus. results : the new framework is applied to the $^{16}$o($p,d$)$^{15}$o reaction at 200 mev and compared with experimental data. the angular distribution is satisfactorily well described, whereas an anomalously large scaling factor is needed to reproduce the absolute value. the transition matrix is analyzed in detail and the mechanism of the momentum sharing is clarified. conclusions : the new reaction framework for transfer reactions at intermediate energies seems promising for describing the reaction mechanism but fails to explain the absolute value of the cross section. the use of the $p$ - $d$ transition amplitude instead of its cross section will be necessary to draw a conclusion on the applicability of the present framework.
|
arxiv:2503.01259
|
supervisory signals can help topic models discover low - dimensional data representations that are more interpretable for clinical tasks. we propose a framework for training supervised latent dirichlet allocation that balances two goals : faithful generative explanations of high - dimensional data and accurate prediction of associated class labels. existing approaches fail to balance these goals by not properly handling a fundamental asymmetry : the intended task is always predicting labels from data, not data from labels. our new prediction - constrained objective trains models that predict labels from heldout data well while also producing good generative likelihoods and interpretable topic - word parameters. in a case study on predicting depression medications from electronic health records, we demonstrate improved recommendations compared to previous supervised topic models and high - dimensional logistic regression from words alone.
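a schematic of the prediction-constrained idea described above: topic proportions are inferred from the words alone ( labels never feed back into inference ), and the training loss adds a weighted label-prediction term to the generative term. the amortized softmax inference, fixed topics, dimensions and weight are illustrative assumptions rather than the paper's variational treatment.

```python
# toy prediction-constrained objective: generative term + weighted label term.
import torch
import torch.nn.functional as F

n_topics, vocab, n_docs = 5, 50, 8
topic_word = torch.softmax(torch.randn(n_topics, vocab), dim=1)  # topics, held fixed here
doc_word_counts = torch.randint(0, 3, (n_docs, vocab)).float()
labels = torch.randint(0, 2, (n_docs,))

infer = torch.nn.Linear(vocab, n_topics)     # inference network: words -> topic logits
clf = torch.nn.Linear(n_topics, 2)           # label predictor on topic proportions
opt = torch.optim.Adam(list(infer.parameters()) + list(clf.parameters()), lr=1e-2)

lam = 10.0                                   # weight on the prediction constraint
for step in range(200):
    theta = torch.softmax(infer(doc_word_counts), dim=1)        # doc-topic proportions
    word_probs = theta @ topic_word                             # per-doc word distribution
    gen_nll = -(doc_word_counts * torch.log(word_probs + 1e-9)).sum(dim=1).mean()
    pred_loss = F.cross_entropy(clf(theta), labels)
    loss = gen_nll + lam * pred_loss
    opt.zero_grad(); loss.backward(); opt.step()
print(float(gen_nll), float(pred_loss))
```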
|
arxiv:1712.00499
|
conformance checking is a crucial aspect of process mining, where the main objective is to compare the actual execution of a process, as recorded in an event log, with a reference process model, e. g., in the form of a petri net or a bpmn. conformance checking enables identifying deviations, anomalies, or non - compliance instances. it offers different perspectives on problems in processes, bottlenecks, or process instances that are not compliant with the model. performing conformance checking in federated ( inter - organizational ) settings allows organizations to gain insights into the overall process execution and to identify compliance issues across organizational boundaries, which facilitates process improvement efforts among collaborating entities. in this paper, we propose a privacy - aware federated conformance - checking approach that allows for evaluating the correctness of overall cross - organizational process models, identifying miscommunications, and quantifying their costs. for evaluation, we design and simulate a supply chain process with three organizations engaged in purchase - to - pay, order - to - cash, and shipment processes. we generate synthetic event logs for each organization as well as the complete process, and we apply our approach to identify and evaluate the cost of pre - injected miscommunications.
|
arxiv:2501.13576
|
general considerations on the equivalence conjectures and a review of a few mathematical results.
|
arxiv:2211.02961
|
this study evaluates the potential of generative models, trained on historical era5 reanalysis data, for simulating windstorms over the uk. four generative models, including a standard gan, a wgan - gp, a u - net diffusion model, and a diffusion - gan were assessed based on their ability to replicate spatial and statistical characteristics of windstorms. different models have distinct strengths and limitations. the standard gan displayed broader variability and limited alignment on the pca dimensions. the wgan - gp had a more balanced performance but occasionally misrepresented extreme events. the u - net diffusion model produced high - quality spatial patterns but consistently underestimated windstorm intensities. the diffusion - gan performed better than the other models in general but overestimated extremes. an ensemble approach combining the strengths of these models could potentially improve their overall reliability. this study provides a foundation for such generative models in meteorological research and could potentially be applied in windstorm analysis and risk assessment.
|
arxiv:2501.16110
|
in this paper, we investigate the thermalization of hawking radiation from primordial black holes ( pbhs ) in the early universe, taking into account the interference effect on the thermalization of high energy particles, known as the landau - pomeranchuk - migdal ( lpm ) effect. small pbhs with masses $\lesssim 10^9\,\mathrm{g}$ completely evaporate before big bang nucleosynthesis ( bbn ). the hawking radiation emitted from these pbhs heats up the ambient plasma, whose temperature is lower than the hawking temperature, which results in a non - trivial temperature profile around the pbhs, namely a hot spot surrounding a pbh with a broken power - law tail. we find that the hot spot has a core with a radius much larger than the black hole horizon and that its highest temperature is independent of the initial mass of the pbh, being of order $2 \times 10^{9}\,\mathrm{GeV} \times (\alpha/0.1)^{19/3}$, where $\alpha$ generically represents the fine - structure constants. we also briefly discuss the implications of the existence of the hot spot for phenomenology.
|
arxiv:2210.06238
|
. the first major technologies were tied to survival, hunting, and food preparation. stone tools and weapons, fire, and clothing were technological developments of major importance during this period. human ancestors have been using stone and other tools since long before the emergence of homo sapiens approximately 300,000 years ago. the earliest direct evidence of tool usage was found in ethiopia within the great rift valley, dating back to 2.5 million years ago. the earliest methods of stone tool making, known as the oldowan "industry", date back to at least 2.3 million years ago. this era of stone tool use is called the paleolithic, or "old stone age", and spans all of human history up to the development of agriculture approximately 12,000 years ago. to make a stone tool, a "core" of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were crude, being little more than a fractured rock. in the acheulian era, beginning approximately 1.65 million years ago, methods of working these stones into specific shapes, such as hand axes, emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300,000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40,000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10,000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period,
|
https://en.wikipedia.org/wiki/History_of_technology
|
this paper considers joint uplink / downlink of an orthogonal frequency division multiple access ( ofdma ) - based heterogeneous network ( hetnet ) consisting of a single macro base station ( mbs ), multiple femto base stations ( fbss ) and access points ( aps ) where base stations ( bss ) can offload data to aps and each mobile user ( mu ) is able to harvest the received energy using the simultaneous wireless information and power transfer ( swipt ) technique. we also suppose that the harvested energy of mus are used for their uplink information transmission. we devise a radio resource allocation ( rra ) algorithm to maximize the uplink sum data rate of mus subject to a minimum required downlink data rate of each mu and maximum allowable transmit power of each bs, ap, and mu. more specifically, both the frequency division duplex ( fdd ) and time division duplex ( tdd ) schemes are investigated. the proposed non - convex optimization problems are solved using an iterative algorithm. it is also proved that the proposed algorithm converges to a near - optimal solution. simulation results illustrate that the tdd scheme improves the performance compared to the fdd scheme. in addition, it is shown that utilizing the data offloading technique improves the uplink sum data rate of mus compared to the scenario without any ap.
|
arxiv:1705.07940
|
while coupled cluster theory accurately models weakly correlated quantum systems, it often fails in the presence of strong correlations where the standard mean - field picture is qualitatively incorrect. in many cases, these failures can be largely ameliorated by permitting the mean - field reference to break physical symmetries. symmetry - broken coupled cluster, e. g. bogoliubov coupled cluster, theory can indeed provide reasonably accurate energetic predictions, but the broken symmetry can compromise the quality of the resulting wave function and predictions of observables other than the energy. merging symmetry projection and coupled cluster theory is therefore an appealing way to describe strongly correlated systems. independently, two different but related formalisms have been recently proposed to achieve this goal. the two formalisms are contrasted in this manuscript, with results tested on the richardson pairing hamiltonian. both formalisms are based on the disentangled cluster representation of the symmetry - rotated coupled cluster wavefunction. however, they differ in the way that the disentangled clusters are solved. one approach sets up angle - dependent coupled cluster equations, while the other involves first - order ordinary differential equations. the latter approach yields energies and occupation probabilities significantly better than those of number - projected bcs and bcs coupled cluster and, when the disentangled clusters are truncated at low excitation levels, has a computational cost not too much larger than that of bcs coupled cluster. the high quality of results presented in this manuscript indicates that symmetry - projected coupled cluster is a promising method that can accurately describe both weakly and strongly correlated finite many - fermion systems.
|
arxiv:1810.11245
|
this paper describes the design of a 1024 - core processor chip in 16nm finfet technology. the chip ( " epiphany - v " ) contains an array of 1024 64 - bit risc processors, 64mb of on - chip sram, three 136 - bit wide mesh networks - on - chip, and 1024 programmable io pins. the chip has taped out and is being manufactured by tsmc. this research was developed with funding from the defense advanced research projects agency ( darpa ). the views, opinions and / or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the department of defense or the u. s. government.
|
arxiv:1610.01832
|
the multi - messenger combination of gravitational waves ( gws ) from merging massive black hole binaries ( mbhbs ) and the electromagnetic ( em ) counterpart from the surrounding circumbinary disk ( cbd ) will open avenues to new scientific pursuits. in order to realize this science, we need to correctly localize the host galaxy of the merging mbhb. multi - wavelength, time - dependent electromagnetic ( em ) signatures can greatly facilitate the identification of the unique em counterpart among many sources in lisa's localization volume. to this end, we studied merging unequal - mass mbhbs embedded in a cbd using high - resolution 2d simulations, with a $\gamma$-law equation of state, incorporating viscous heating, shock heating and radiative cooling. we simulate each binary starting from before it decouples from the cbd until just after the merger. we compute em signatures and identify distinct features before, during, and after the merger. we corroborate previous findings of a several order of magnitude drop in the thermal x - ray luminosity near the time of merger, but with delayed timing compared to an equal - mass system. the source remains x - ray dark for hours post - merger. our main result is a potential new signature of a sharp spike in the thermal x - ray emission just before the tell - tale steep drop occurs. this feature may further help to identify em counterparts of lisa's unequal mbhbs before merger without the need for extensive pre - merger monitoring. additionally, we find a role - reversal, in which the primary out - accretes the secondary during late inspiral, which may diminish signatures originating from doppler modulation.
|
arxiv:2503.01494
|
we adapt the notion of an algebraic theory to work in the setting of quasicategories developed recently by joyal and lurie. we develop the general theory at some length. we study one extended example in detail : the theory of commutative monoids ( which turns out to be essentially just a 2 - category ). this gives a straightforward, combinatorially explicit, and instructive notion of a commutative monoid. we prove that this definition is equivalent ( in appropriate senses ) both to the classical concept of an e - infinity monoid and to lurie ' s concept of a commutative algebra object.
|
arxiv:1109.1598
|
we give natural examples of factors of the muchnik lattice which capture intuitionistic propositional logic ( ipc ), arising from the concepts of lowness, 1 - genericity, hyperimmune - freeness and computable traceability. this provides a purely computational semantics for ipc.
|
arxiv:1210.6538
|
a family of deformed ads4 - reissner - nordström black branes, governed by a free parameter, is derived using the adm formalism, in the context of the membrane paradigm. their new event horizons, the hawking temperature and other aspects are scrutinized. ads / cft near - horizon methods are then implemented to compute the shear viscosity - to - entropy ratio for the deformed ads4 - reissner - nordström metric. the killing equation is shown to yield new values for the free parameter and the shear viscosity - to - entropy ratio is used to derive a reliable range for tidal charge.
|
arxiv:1904.01093
|
binding, lateral diffusion and exchange are fundamental dynamic processes involved in protein association with cellular membranes. in this study, we developed numerical simulations of lateral diffusion and exchange of fluorophores in membranes with arbitrary bleach geometry and exchange of the membrane localized fluorophore with the cytosol during fluorescence recovery after photobleaching ( frap ) experiments. the model simulations were used to design frap experiments with varying bleach region sizes on plasma - membrane localized wild type gfp - ras2 with a dual lipid anchor and mutant gfp - ras2c318s with a single lipid anchor in live yeast cells to investigate diffusional mobility and the presence of any exchange processes operating in the time scale of our experiments. model parameters estimated using data from frap experiments with a 1 micron x 1 micron bleach region - of - interest ( roi ) and a 0. 5 micron x 0. 5 micron bleach roi showed that gfp - ras2, single or dual lipid modified, diffuses as single species with no evidence of exchange with a cytoplasmic pool. this is the first report of ras2 mobility in yeast plasma membrane. the methods developed in this study are generally applicable for studying diffusion and exchange of membrane associated fluorophores using frap on commercial confocal laser scanning microscopes.
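as a rough illustration of the kind of reaction - diffusion model described above, the following numpy sketch evolves membrane fluorescence by lateral diffusion plus first - order exchange with a cytosolic pool that is assumed well - mixed and unbleached ; the grid, the bleach roi and all rate constants are illustrative assumptions, not the parameters estimated in the paper.

```python
import numpy as np

# Toy FRAP recovery model: lateral diffusion on a membrane patch plus
# first-order exchange with a well-mixed, unbleached cytosolic pool.
# All parameter values are illustrative, not taken from the paper.
D     = 0.25       # lateral diffusion coefficient (um^2/s)
k_off = 0.05       # membrane -> cytosol exchange rate (1/s)
k_on  = 0.05       # cytosol -> membrane exchange rate (1/s), pool assumed unbleached
L, N  = 10.0, 128  # patch size (um) and grid points per side
dx    = L / N
dt    = 0.2 * dx**2 / D      # step size within the explicit-scheme stability limit
steps = int(30.0 / dt)       # simulate 30 s of recovery

f = np.ones((N, N))          # normalized fluorescence, 1 = pre-bleach level

# Bleach a 1 um x 1 um square ROI at the centre of the patch.
half = int(0.5 / dx)
c = N // 2
f[c - half:c + half, c - half:c + half] = 0.0
roi = np.zeros_like(f, dtype=bool)
roi[c - half:c + half, c - half:c + half] = True

recovery = []
for _ in range(steps):
    # Five-point Laplacian with periodic boundaries (np.roll) for simplicity.
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2
    f = f + dt * (D * lap - k_off * f + k_on)   # diffusion + exchange with the pool
    recovery.append(f[roi].mean())

print("normalized ROI fluorescence after 30 s:", round(recovery[-1], 3))
```

fitting curves like `recovery` for different roi sizes against measured frap traces is, in spirit, how diffusion and exchange parameters are separated.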
|
arxiv:1005.1589
|
recently, onari and kontani submitted a paper [ arxiv : 1105. 6233 ] which criticizes our recent theoretical study [ arxiv : 1103. 0586 ] on the neutron scattering experiment as a probe for determining the superconducting gap in the iron pnictides. in their paper, onari and kontani have developed a formalism in which the imaginary part of the dynamical spin susceptibility ( im $ \ chi _ s ( \ vec { q }, \ omega ) $ ) in the superconducting state can be more accurately calculated especially in the $ \ omega < 2 \ delta $ regime, where $ \ delta $ is the superconducting gap. in section iiic of their paper, they mention that the conclusions of our paper are " incorrect based on inaccurate numerical calculation ". in the present comment, we show that this in fact is not correct.
|
arxiv:1106.2376
|
spin ice is a frustrated magnetic system that at low temperatures exhibits a coulomb phase, a classical spin liquid with topological order and deconfined excitations. this work establishes the presence of a coulomb phase with coexisting ferromagnetic order in a microscopic model of classical spin ice subject to uniaxial lattice distortion. general theoretical arguments are presented for the presence of such a phase, and its existence is confirmed using monte carlo results. this example is used to illustrate generic properties of spin liquids with magnetic order, including deconfinement of monopoles, signatures in the neutron - scattering structure factor, and critical behavior at phase transitions. an analogous phase, a superfluid with spontaneously broken particle - hole symmetry, is demonstrated in a model of hard - core lattice bosons, related to spin ice through the quantum - classical correspondence.
|
arxiv:1412.1095
|
we show that generative models can be used to capture visual geometry constraints statistically. we use this fact to infer the 3d shape of object categories from raw single - view images. differently from prior work, we use no external supervision, nor do we use multiple views or videos of the objects. we achieve this by a simple reconstruction task, exploiting the symmetry of the objects ' shape and albedo. specifically, given a single image of the object seen from an arbitrary viewpoint, our model predicts a symmetric canonical view, the corresponding 3d shape and a viewpoint transformation, and trains with the goal of reconstructing the input view, resembling an auto - encoder. our experiments show that this method can recover the 3d shape of human faces, cat faces, and cars from single view images, without supervision. on benchmarks, we demonstrate superior accuracy compared to other methods that use supervision at the level of 2d image correspondences.
|
arxiv:1906.01568
|
this is a challenging paper including some review and new results. since the non - commutative version of the classical system based on the compact group su ( 2 ) has been constructed in ( quant - ph / 0502174 ) by making use of the jaynes - cummings model and the so - called quantum diagonalization method in ( quant - ph / 0502147 ), we construct a non - commutative version of the classical system based on the non - compact group su ( 1, 1 ) by modifying the compact case. in this model the hamiltonian is not hermitian but pseudo - hermitian, which causes a big difference between the two models. for example, in the classical representation theory of su ( 1, 1 ), unitary representations are infinite dimensional from the starting point. therefore, to develop a unitary theory of the non - commutative system of su ( 1, 1 ) we need an infinite number of non - commutative systems, which means a kind of { \ bf second non - commutativization }. this is a very hard and interesting problem. we develop a corresponding theory though it is not always enough, and present some challenging problems concerning how classical properties can be extended to the non - commutative case. this paper is arranged for the convenience of readers : the first subsection is based on the standard model ( su ( 2 ) system ) and the next one is based on the non - standard model ( su ( 1, 1 ) system ). this contrast may make the similarity and difference between the standard and non - standard models clear.
|
arxiv:quant-ph/0506026
|
the adiabatic connection formula of ground - state density functional theory relates the correlation energy to a coupling - constant integral over a purely potential contribution, and is widely used to understand and improve approximations. the corresponding formula for thermal density functional theory is cast as an integral over temperatures instead, ranging upward from the system ' s physical temperature. we also show how to relate different correlation components to each other, either in terms of temperature - or coupling - constant integrations. we illustrate our results on the uniform electron gas.
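for reference, the ground - state adiabatic connection formula alluded to here has, in common textbook notation, the following form ( the thermal version discussed in the abstract replaces the coupling - constant integral by an integral over temperatures above the physical one ) :

```latex
E_{c}[n] = \int_{0}^{1} \mathrm{d}\lambda \; W_{c,\lambda}[n],
\qquad
W_{c,\lambda}[n] = \langle \Psi_{\lambda}[n] \,|\, \hat{V}_{ee} \,|\, \Psi_{\lambda}[n] \rangle
                 - \langle \Phi[n] \,|\, \hat{V}_{ee} \,|\, \Phi[n] \rangle ,
```

where $ \Psi_{\lambda}[n] $ minimizes $ \hat{T} + \lambda \hat{V}_{ee} $ at fixed density $ n $ and $ \Phi[n] $ is the kohn - sham determinant ; this is the standard textbook form, not a formula taken from the paper itself.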
|
arxiv:1509.03060
|
in this note we prove two main results. 1. in a rigid braided finite tensor category over c ( not necessarily semisimple ), some power of the casimir element and some even power of the braiding is unipotent. 2. in a ( semisimple ) modular category, the twists are roots of unity dividing the algebraic integer d ^ { 5 / 2 }, where d is the global dimension of the category ( the sum of squares of dimensions of simple objects ). both results generalize vafa ' s theorem, saying that in a modular category twists are roots of unity, and square of the braiding has finite order. we also discuss the notion of the quasi - exponent of a finite rigid tensor category, which is motivated by results 1 and 2 and the paper math / 0109196 of s. gelaki and the author.
|
arxiv:math/0207007
|
the standard approach for photoacoustic imaging with variable speed of sound is time reversal, which consists in solving a well - posed final - boundary value problem for the wave equation backwards in time. this paper investigates the iterative landweber regularization algorithm, where convergence is guaranteed by standard regularization theory, notably also in cases of trapping sound speed or for short measurement times. we formulate and solve the direct and inverse problem on the whole euclidean space, as is common in standard photoacoustic imaging, but not for time - reversal algorithms, where the problems are considered on a domain enclosed by the measurement devices. we formulate both the direct and adjoint photoacoustic operator as the solution of an interior and an exterior differential equation which are coupled by transmission conditions. the former is solved numerically using a galerkin scheme in space and finite difference discretization in time, while the latter consists in solving a boundary integral equation. we therefore use a bem - fem approach for numerical solution of the forward operators. we analyze this method, prove convergence, and provide numerical tests. moreover, we compare the approach to time reversal.
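the landweber iteration itself is simple ; the sketch below applies it to a generic discretized linear forward operator ( a dense matrix standing in for the bem - fem photoacoustic operator, which would be applied matrix - free in practice ). the synthetic test problem and the step - size rule are assumptions for illustration.

```python
import numpy as np

def landweber(A, y, n_iter=500, tau=None, x0=None):
    """Landweber iteration x_{k+1} = x_k + tau * A^T (y - A x_k).

    A is any discretized linear forward operator (dense here for illustration).
    Convergence requires 0 < tau < 2 / ||A||^2.
    """
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size from the spectral norm
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - A @ x)
    return x

# Small synthetic test: a mildly ill-conditioned system with noisy data.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 60)) @ np.diag(np.linspace(1.0, 0.01, 60))
x_true = np.sin(np.linspace(0, 3 * np.pi, 60))
y = A @ x_true + 1e-3 * rng.standard_normal(80)

x_rec = landweber(A, y, n_iter=2000)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

in the regularization setting, stopping the iteration early ( discrepancy principle ) plays the role of the regularization parameter.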
|
arxiv:1507.01741
|
in this paper, we propose a novel lagrange multiplier approach, named the zero - factor ( zf ) approach, to solve a series of gradient flow problems. the numerical schemes based on the new algorithm are unconditionally energy stable with respect to the original energy and do not require any extra assumptions. we also prove that the zf schemes with specific zero factors lead to the popular sav - type method. to reduce the computation cost and improve the accuracy and consistency, we propose a zero - factor approach with relaxation, which we name the relaxed zero - factor ( rzf ) method, to design unconditionally energy stable schemes for gradient flows. the rzf schemes can be proved to be unconditionally energy stable with respect to a modified energy that is closer to the original energy, and provide a very simple calculation process. the variation of the introduced zero factor is highly consistent with the nonlinear free energy, which implies that the introduced zf method is a very efficient way to capture the sharp dissipation of nonlinear free energy. several numerical examples are provided to demonstrate the improved efficiency and accuracy of the proposed method.
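since the abstract notes that particular zero factors recover the sav - type method, a minimal sketch of a first - order sav step for the 1d allen - cahn gradient flow with periodic boundary conditions is given below ; it illustrates the family of auxiliary - variable schemes being generalized, not the zf / rzf scheme itself, and all parameter values are illustrative.

```python
import numpy as np

# First-order SAV scheme for the 1D Allen-Cahn gradient flow
#   phi_t = eps^2 phi_xx - F'(phi),  F(phi) = (phi^2 - 1)^2 / 4,
# with periodic boundary conditions and Fourier solves of the implicit part.
N, Lx, eps, dt, C0 = 256, 2 * np.pi, 0.1, 1e-2, 1.0
x  = np.linspace(0, Lx, N, endpoint=False)
dx = Lx / N
k  = np.fft.fftfreq(N, d=dx) * 2 * np.pi            # angular wave numbers

F       = lambda p: 0.25 * (p**2 - 1) ** 2
dF      = lambda p: p**3 - p
inner   = lambda u, v: dx * np.sum(u * v)
# Solve (I + dt * eps^2 * (-d_xx)) u = rhs via FFT.
solve_A = lambda rhs: np.real(np.fft.ifft(np.fft.fft(rhs) / (1 + dt * eps**2 * k**2)))

phi = 0.05 * np.cos(3 * x)                           # initial data
r   = np.sqrt(dx * np.sum(F(phi)) + C0)              # scalar auxiliary variable

for _ in range(2000):
    b    = dF(phi) / np.sqrt(dx * np.sum(F(phi)) + C0)
    phi1 = solve_A(phi)                              # contribution of phi^n
    phi2 = solve_A(-dt * b)                          # contribution multiplying r^{n+1}
    r    = (r + 0.5 * inner(b, phi1 - phi)) / (1 - 0.5 * inner(b, phi2))
    phi  = phi1 + r * phi2                           # phi^{n+1}

E = 0.5 * eps**2 * dx * np.sum(np.gradient(phi, dx) ** 2) + dx * np.sum(F(phi))
print("final (original) energy:", E)
```

the scheme is unconditionally stable with respect to the modified energy containing $ r ^ 2 $ ; the zf / rzf construction is aimed precisely at keeping that modified energy close to the original one.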
|
arxiv:2210.02723
|
first, we construct a bijection between the set of $ h $ - vectors and the set of socle - vectors of artinian algebras. as a corollary, we find the minimum codimension that an artinian algebra with a given socle - vector can have. then, we study the main problem in the paper : determining when there is a unique socle - vector for a given $ h $ - vector. we solve the problem completely if the codimension is at most 3.
|
arxiv:math/0411229
|
many efforts have been devoted to designing sampling, mining, and weighting strategies in high - level deep metric learning ( dml ) loss objectives. however, little attention has been paid to low - level but essential data transformation. in this paper, we develop a novel mechanism, the independent domain embedding augmentation learning ( { ideal } ) method. it can simultaneously learn multiple independent embedding spaces for multiple domains generated by predefined data transformations. our ideal is orthogonal to existing dml techniques and can be seamlessly combined with prior dml approaches for enhanced performance. empirical results on visual retrieval tasks demonstrate the superiority of the proposed method. for example, the ideal improves the performance of ms loss by a large margin, 84. 5 \ % $ \ rightarrow $ 87. 1 \ % on cars - 196, and 65. 8 \ % $ \ rightarrow $ 69. 5 \ % on cub - 200 at recall $ @ 1 $. our ideal with ms loss also achieves the new state - of - the - art performance on three image retrieval benchmarks, \ ie, \ emph { cars - 196 }, \ emph { cub - 200 }, and \ emph { sop }. it outperforms the most recent dml approaches, such as circle loss and xbm, significantly. the source code and pre - trained models of our method will be available at \ emph { \ url { https : / / github. com / emdata - ailab / ideal } }.
|
arxiv:2105.10112
|
after being collected for patient care, observational health data ( ohd ) can further benefit patient well - being by sustaining the development of health informatics and medical research. vast potential is unexploited because of the fiercely private nature of patient - related data and regulations to protect it. generative adversarial networks ( gans ) have recently emerged as a groundbreaking way to learn generative models that produce realistic synthetic data. they have revolutionized practices in multiple domains such as self - driving cars, fraud detection, digital twin simulations in industrial sectors, and medical imaging. the digital twin concept could readily apply to modelling and quantifying disease progression. in addition, gans possess many capabilities relevant to common problems in healthcare : lack of data, class imbalance, rare diseases, and preserving privacy. unlocking open access to privacy - preserving ohd could be transformative for scientific research. in the midst of covid - 19, the healthcare system is facing unprecedented challenges, many of which are data related for the reasons stated above. considering these facts, publications concerning gans applied to ohd seemed to be severely lacking. to uncover the reasons for this slow adoption, we broadly reviewed the published literature on the subject. our findings show that the properties of ohd were initially challenging for the existing gan algorithms ( unlike medical imaging, for which state - of - the - art models were directly transferable ) and the evaluation of synthetic data lacked clear metrics. we find more publications on the subject than expected, starting slowly in 2017, and since then at an increasing rate. the difficulties of ohd remain, and we discuss issues relating to evaluation, consistency, benchmarking, data modelling, and reproducibility.
|
arxiv:2005.13510
|
we consider n identical two - level atoms coupled to an optical cavity, which is coherently driven by an external field. in the limit of small atomic excitation, the reflection and transmission coefficients for both fields and intensities are calculated analytically. in addition, the frequency content of the cavity field and hence also the emission spectrum of the cavity is determined. it is discussed in particular how individual collisional dephasing and common atomic energy - level fluctuations prevent the cavity field from being in a coherent state, which in turn affects the outgoing fields.
|
arxiv:1110.3610
|
the dwarf nova ss cygni is a close binary star consisting of a k star transferring mass to a white dwarf by way of an accretion disk. we have obtained new spectroscopic observations of ss cyg with the hobby - eberly telescope ( het ). fits of synthetic spectra for roche - lobe - filling stars to the absorption - line spectrum of the k star yield the amplitude of the k star ' s radial velocity curve and the mass ratio : k _ { k } = 162. 5 + / - 1. 0 km / s and q = m _ { k } / m _ { wd } = 0. 685 + / - 0. 015. the fits also show that the accretion disk and white dwarf contribute a fraction f = 0. 535 + / - 0. 075 of the total flux at 5500 angstroms. taking the weighted average of our results with previously published results obtained using similar techniques, we find < k _ { k } > = 163. 7 + / - 0. 7 km / s and < q > = 0. 683 + / - 0. 012. the orbital light curve of ss cyg shows an ellipsoidal variation diluted by light from the disk and white dwarf. from an analysis of the ellipsoidal variations we limit the orbital inclination to the range 45 deg. < = i < = 56 deg. the derived masses of the k star and white dwarf are m _ { k } = 0. 55 + / - 0. 13 m _ sun and m _ { wd } = 0. 81 + / - 0. 19 m _ sun, where the uncertainties are dominated by systematic errors in the orbital inclination. the k star in ss cyg is 10 % to 50 % larger than an unevolved star with the same mass and thus does not follow the mass - radius relation for zero - age main - sequence stars ; nor does it follow the zams mass / spectral - type relation. its mass and spectral type are, however, consistent with models in which the core hydrogen has been significantly depleted.
|
arxiv:astro-ph/0703087
|
long - term time - series forecasting ( ltsf ) is fundamental to various real - world applications, where transformer - based models have become the dominant framework due to their ability to capture long - range dependencies. however, these models often experience overfitting due to data redundancy in rolling forecasting settings, which limits their generalization ability and is particularly evident in longer sequences with highly similar adjacent data. in this work, we introduce clmformer, a novel framework that mitigates redundancy through curriculum learning and a memory - driven decoder. specifically, we progressively introduce bernoulli noise to the training samples, which effectively breaks the high similarity between adjacent data points. this curriculum - driven noise introduction aids the memory - driven decoder by supplying more diverse and representative training data, enhancing the decoder ' s ability to model seasonal tendencies and dependencies in the time - series data. to further enhance forecasting accuracy, we introduce a memory - driven decoder. this component enables the model to capture seasonal tendencies and dependencies in the time - series data and leverages temporal relationships to facilitate the forecasting process. extensive experiments on six real - world ltsf benchmarks show that clmformer consistently improves transformer - based models by up to 30 %, demonstrating its effectiveness in long - horizon forecasting.
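a minimal sketch of the curriculum - driven bernoulli noise idea is given below : the corruption probability is ramped up over epochs and each time step of a training window is independently masked. the linear schedule, the masking form and the tensor layout are assumptions, not the paper ' s exact recipe.

```python
import torch

def bernoulli_noise_schedule(epoch, max_epoch, p_max=0.2):
    """Curriculum schedule: start from clean data and ramp the corruption
    probability linearly up to p_max (the schedule shape is an assumption)."""
    return p_max * min(1.0, epoch / max(1, max_epoch))

def corrupt(batch, p):
    """Apply Bernoulli noise to a batch of time-series windows of shape [B, T, C]:
    each time step is independently zero-masked with probability p, which breaks
    the high similarity between adjacent rolling windows."""
    if p <= 0:
        return batch
    keep = torch.bernoulli(torch.full(batch.shape[:2], 1.0 - p,
                                      device=batch.device)).unsqueeze(-1)
    return batch * keep

# Usage inside a standard training loop (model, optimizer and loader assumed given):
# for epoch in range(max_epoch):
#     p = bernoulli_noise_schedule(epoch, max_epoch)
#     for x, y in loader:
#         loss = criterion(model(corrupt(x, p)), y)
#         ...
```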
|
arxiv:2207.07827
|
knowledge graph completion aims to predict missing relations between entities in a knowledge graph. in this work, we propose a relational message passing method for knowledge graph completion. different from existing embedding - based methods, relational message passing only considers edge features ( i. e., relation types ) without entity ids in the knowledge graph, and passes relational messages among edges iteratively to aggregate neighborhood information. specifically, two kinds of neighborhood topology are modeled for a given entity pair under the relational message passing framework : ( 1 ) relational context, which captures the relation types of edges adjacent to the given entity pair ; ( 2 ) relational paths, which characterize the relative position between the given two entities in the knowledge graph. the two message passing modules are combined together for relation prediction. experimental results on knowledge graph benchmarks as well as our newly proposed dataset show that, our method pathcon outperforms state - of - the - art knowledge graph completion methods by a large margin. pathcon is also shown applicable to inductive settings where entities are not seen in training stage, and it is able to provide interpretable explanations for the predicted results. the code and all datasets are available at https : / / github. com / hwwang55 / pathcon.
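the relational - context half of the scheme can be sketched with plain numpy : each edge starts from a one - hot encoding of its relation type and repeatedly aggregates the features of edges sharing an endpoint. the toy graph, the number of rounds and the final scoring step ( omitted ) are illustrative assumptions ; the relational - path module is not shown.

```python
import numpy as np
from collections import defaultdict

# Toy knowledge graph: (head, relation, tail) triples; edges carry only relation types.
triples = [(0, 'works_at', 1), (1, 'located_in', 2),
           (0, 'lives_in', 2), (3, 'works_at', 1)]
relations = sorted({r for _, r, _ in triples})
r2i = {r: i for i, r in enumerate(relations)}

# Initial edge features: one-hot relation types (no entity ids are used).
feat = np.eye(len(relations))[[r2i[r] for _, r, _ in triples]]

# Incidence structure: which edges touch each entity.
edges_at = defaultdict(list)
for e, (h, _, t) in enumerate(triples):
    edges_at[h].append(e)
    edges_at[t].append(e)

# Two rounds of edge-to-edge message passing through shared endpoints:
# an edge's new feature is the sum of its neighbouring edges' features.
for _ in range(2):
    node_msg = {v: feat[es].sum(axis=0) for v, es in edges_at.items()}
    feat = np.stack([node_msg[h] + node_msg[t] - 2 * feat[e]   # exclude the edge itself
                     for e, (h, _, t) in enumerate(triples)])

# Relational context of a query pair: aggregate the contexts around both entities;
# a small classifier over this vector would score candidate relations.
h_q, t_q = 0, 2
context = feat[edges_at[h_q]].sum(axis=0) + feat[edges_at[t_q]].sum(axis=0)
print("relational context for the pair (0, 2):", context)
```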
|
arxiv:2002.06757
|
the validity of our already proposed conjecture - - { \ it horizon creates a local instability which acts as the source of the quantum temperature of black hole } - - is being tested here for the kerr black hole. earlier this has been explicitly shown for a spherically symmetric static black hole ( sss bh ). the more realistic situation like kerr spacetime, being stationary and axisymmetric, is a non - trivial example to analyze. we show that for a chargeless massless particle, the near horizon radial motion in kerr spacetime, like sss bh, can be locally unstable. the radial contribution in the corresponding hamiltonian is of the $ \ sim xp $ kind, where $ p $ is the canonical momentum and $ x $ is the conjugate position of the particle. finally we show that the horizon thermalization can be explained through this hamiltonian when one does a semi - classical analysis. it again confirms that the near horizon instability is responsible for its own temperature and moreover generalizes the validity of our conjectured mechanism for the black hole horizon thermalization.
|
arxiv:2103.11613
|
the study by oberlack et al. ( 2006 ) consists of two main parts : a direct numerical simulation ( dns ) of a turbulent plane channel flow with streamwise rotation and a preceding lie - group symmetry analysis on the two - point correlation equation ( tpc ) to analytically predict the scaling of the mean velocity profiles for different rotation rates. we will only comment on the latter part, since the dns result obtained in the former part has already been commented on by recktenwald et al. ( 2009 ), stating that the observed mismatch between dns and their performed experiment is possibly due to the prescription of periodic boundary conditions on a too small computational domain in the spanwise direction. by revisiting the group analysis part in oberlack et al. ( 2006 ), we will generate more natural scaling laws describing better the mean velocity profiles than the ones proposed. however, due to the statistical closure problem of turbulence, this improvement is illusive. as we will demonstrate, any arbitrary invariant scaling law for the mean velocity profiles can be generated consistent to any higher order in the velocity correlations. this problem of arbitrariness in invariant scaling persists even if we would formally consider the infinite statistical hierarchy of all multi - point correlation equations. the closure problem of turbulence simply cannot be circumvented by just employing the method of lie - group symmetry analysis alone : as the statistical equations are unclosed, so are their symmetries! hence, an a priori prediction as how turbulence scales is thus not possible. only a posteriori by anticipating what to expect from numerical or experimental data the adequate invariant scaling law can be generated through an iterative trial - and - error process. finally, apart from this issue, also several inconsistencies and incorrect statements to be found in oberlack et al. ( 2006 ) will be pointed out.
|
arxiv:1609.08155
|
we present a relationship between the calogero - moser particles confined in harmonic oscillator potentials and a representation theory of the infinite dimensional lie algebra which is a semi - direct sum of virasoro algebra and its module. more precisely, it is a correspondence of excited states of the model and singular vectors in verma modules over the algebra. this is found by a free field realization of the time evolution operator of the model. we investigate the verma modules and some explicit example of singular vectors are given.
|
arxiv:1812.10662
|
the recent successes of deep learning have led to a wave of interest from non - experts. gaining an understanding of this technology, however, is difficult. while the theory is important, it is also helpful for novices to develop an intuitive feel for the effect of different hyperparameters and structural variations. we describe tensorflow playground, an interactive, open sourced visualization that allows users to experiment via direct manipulation rather than coding, enabling them to quickly build an intuition about neural nets.
|
arxiv:1708.03788
|
every ( full ) finite gabor system generated by a unit - norm vector $ g \ in \ mathbb { c } ^ d $ is a finite unit - norm tight frame ( funtf ), and can thus be associated with a ( gabor ) positive operator valued measure ( povm ). such a povm is informationally complete if the $ d ^ 2 $ corresponding rank one matrices form a basis for the space of $ d \ times d $ matrices. a sufficient condition for this to happen is that the povm is symmetric, which is equivalent to the fact that the associated gabor frame is an equiangular tight frame ( etf ). the existence of gabor etf is an important special case of the zauner conjecture. it is known that generically all gabor funtfs lead to informationally complete povms. in this paper, we initiate a classification of non - complete gabor povms. in the process we establish some seemingly simple facts about the eigenvalues of the gram matrix of the rank one matrices generated by a finite gabor frame. we also use these results to construct some sets of $ d ^ 2 $ unit vectors in $ \ mathbb { c } ^ d $ with a relatively smaller number of distinct inner products.
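the objects in this abstract are easy to reproduce numerically : the sketch below builds the full gabor system of a random unit - norm generator, checks the tight - frame property ( frame operator equal to $ d $ times the identity ), and tests informational completeness via the rank of the $ d ^ 2 $ rank - one matrices. the choice of generator is arbitrary.

```python
import numpy as np

d = 5
rng = np.random.default_rng(1)
g = rng.standard_normal(d) + 1j * rng.standard_normal(d)
g /= np.linalg.norm(g)                            # unit-norm generator

def gabor_vector(g, k, l):
    """Translation T_k followed by modulation M_l, indices taken mod d."""
    d = len(g)
    t = np.roll(g, k)                             # (T_k g)[n] = g[n - k mod d]
    return np.exp(2j * np.pi * l * np.arange(d) / d) * t

system = np.array([gabor_vector(g, k, l) for k in range(d) for l in range(d)])

# Full Gabor systems are finite unit-norm tight frames: S = sum_i v_i v_i^* = d * I.
S = sum(np.outer(v, v.conj()) for v in system)
print("tight frame:", np.allclose(S, d * np.eye(d)))

# Informational completeness of the associated POVM: the d^2 rank-one matrices
# v_i v_i^* must span the d x d matrices, i.e. have full rank d^2.
M = np.array([np.outer(v, v.conj()).ravel() for v in system])
print("informationally complete:", np.linalg.matrix_rank(M) == d * d)
```

for a generic random generator the second check prints true, in line with the genericity statement quoted in the abstract ; the paper is concerned with classifying the exceptional, non - complete cases.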
|
arxiv:2106.01509
|
we demonstrate that pure space - like axial gauge quantizations of gauge fields can be constructed in ways which are free from infrared divergences. we begin by constructing an axial gauge formulation in auxiliary coordinates : $ x ^ + = x ^ 0 \ sin { \ theta } + x ^ 1 \ cos { \ theta }, x ^ - = x ^ 0 \ cos { \ theta } - x ^ 1 \ sin { \ theta } $. for \ theta less than \ pi \ over 4 we can take $ x ^ - $ as the evolution parameter and construct a traditional canonical formulation of the temporal gauge schwinger model in which residual gauge fields dependent only on $ x ^ + $ are static canonical variables. then we extrapolate the temporal gauge operator solution into the axial region, \ theta > \ pi \ over 4, where $ x ^ + $ is taken as the evolution parameter. in the axial region we find that we have to change representations of the residual gauge fields from one realizing the pv prescription to one realizing the ml prescription in order for the infrared divergences resulting from $ ( { \ partial } _ - ) ^ { - 1 } $ to be canceled by corresponding ones resulting from the inverse of the hyperbolic laplace operator. finally, by taking the limit $ { \ theta } \ to \ frac { \ pi } { 2 } - 0 $ we obtain an operator solution and the hamiltonian of the axial gauge ( coulomb gauge ) schwinger model in ordinary coordinates. that solution includes auxiliary fields and the representation space is of indefinite metric, providing further evidence that ` ` physical ' ' gauges are no more physical than ` ` unphysical ' ' gauges.
|
arxiv:hep-th/0012095
|
near ubiquitous mobile computing has led to intense interest in dynamic graph theory. this provides a new and challenging setting for algorithmics and complexity theory. for any graph - based problem, the rapid evolution of a ( possibly disconnected ) graph over time naturally leads to the important complexity question : is it better to calculate a new solution from scratch or to adapt the known solution on the prior graph to quickly provide a solution of guaranteed quality for the changed graph? in this paper, we demonstrate that the former is the best approach in some cases, but that there are cases where the latter is feasible. we prove that, under certain conditions, hard problems cannot even be approximated in any reasonable complexity bound - - - i. e., even with a large amount of time, having a solution to a very similar graph does not help in computing a solution to the current graph. to achieve this, we formalize the idea as a maintenance algorithm. using r - regular subgraph as the primary example we show that w [ 1 ] - hardness for the parameterized approximation problem implies the non - existence of a maintenance algorithm for the given approximation ratio. conversely we show that vertex cover, which is fixed - parameter tractable, has a 2 - approximate maintenance algorithm. the implications of np - hardness and npo - hardness are also explored.
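the 2 - approximation for vertex cover mentioned at the end is the classical maximal - matching argument ; a from - scratch version is sketched below, with the maintenance aspect only hinted at in a comment since the paper ' s maintenance routine is not reproduced here.

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching: take both endpoints of every matched edge.
    The returned cover is at most twice the size of an optimal cover."""
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            cover.update((u, v))
    return cover

# Example graph and a small edge update; a maintenance algorithm would patch the
# previous cover instead of recomputing, while keeping the 2-approximation guarantee.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
print(vertex_cover_2approx(edges))
edges.append((4, 5))                       # the graph evolves
print(vertex_cover_2approx(edges))         # from-scratch recomputation for comparison
```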
|
arxiv:1107.2722
|
a collision model ( cm ) is a framework to describe open quantum dynamics. in its { \ it memoryless } version, it models the reservoir $ \ mathcal r $ as consisting of a large collection of elementary ancillas : the dynamics of the open system $ \ mathcal { s } $ results from successive " collisions " of $ \ mathcal { s } $ with the ancillas of $ \ mathcal r $. here, we present a general formulation of memoryless { \ it composite } cms, where $ \ mathcal s $ is partitioned into the very open system under study $ s $ coupled to one or more auxiliary systems $ \ { s _ i \ } $. their composite dynamics occurs through internal $ s $ - $ \ { s _ i \ } $ collisions interspersed with external ones involving $ \ { s _ i \ } $ and the reservoir $ \ mathcal r $. we show that important known instances of quantum { \ it non - markovian } dynamics of $ s $ - - such as the emission of an atom into a reservoir featuring a lorentzian, or multi - lorentzian, spectral density or a qubit subject to random telegraph noise - - can be mapped on to such { \ it memoryless } composite cms.
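a plain memoryless collision model ( without the auxiliary systems of the composite construction ) can be simulated in a few lines : a qubit repeatedly collides with fresh ground - state ancillas through a partial - swap unitary and its excited - state population decays monotonically, markovian - style. the collision strength and the number of collisions are illustrative choices.

```python
import numpy as np

# Partial-swap collision between the system qubit and one fresh ancilla qubit.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
I4 = np.eye(4, dtype=complex)

theta = 0.2                                            # collision strength
U = np.cos(theta) * I4 + 1j * np.sin(theta) * SWAP     # partial-swap unitary

def ptrace_ancilla(rho4):
    """Trace out the second qubit of a two-qubit density matrix."""
    return rho4.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

rho_s = np.array([[0, 0], [0, 1]], dtype=complex)      # system starts excited
rho_a = np.array([[1, 0], [0, 0]], dtype=complex)      # fresh ancilla: ground state

populations = []
for _ in range(60):                                    # successive collisions
    rho = np.kron(rho_s, rho_a)                        # system meets a *fresh* ancilla
    rho = U @ rho @ U.conj().T
    rho_s = ptrace_ancilla(rho)
    populations.append(rho_s[1, 1].real)

print("excited-state population after 60 collisions:", round(populations[-1], 4))
```

in the composite version of the paper, the role of the ancillas is unchanged, but the system first collides internally with auxiliary systems, which is what produces non - markovian reduced dynamics for the subsystem of interest.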
|
arxiv:1705.03215
|
we give a centralized deterministic algorithm for constructing linear network error - correcting codes that attain the singleton bound of network error - correcting codes. the proposed algorithm is based on the algorithm by jaggi et al. we give estimates on the time complexity and the required symbol size of the proposed algorithm. we also estimate the probability that a random choice of local encoding vectors by all intermediate nodes gives a network error - correcting code attaining the singleton bound. we also clarify the relationship between robust network coding and network error - correcting codes with known locations of errors.
|
arxiv:cs/0610121
|
given a riemannian surface, we consider a naturally embedded graph which captures part of the topology and geometry of the surface. by studying this graph, we obtain results in three different directions. first, we find bounds on the lengths of homologically independent curves on closed riemannian surfaces. as a consequence, we show that for any $ \ lambda \ in ( 0, 1 ) $ there exists a constant $ c _ \ lambda $ such that every closed riemannian surface of genus $ g $ whose area is normalized at $ 4 \ pi ( g - 1 ) $ has at least $ [ \ lambda g ] $ homologically independent loops of length at most $ c _ \ lambda \ log ( g ) $. this result extends gromov ' s asymptotic $ \ log ( g ) $ bound on the homological systole of genus $ g $ surfaces. we construct hyperbolic surfaces showing that our general result is sharp. we also extend the upper bound obtained by p. buser and p. sarnak on the minimal norm of nonzero period lattice vectors of riemann surfaces in their geometric approach to the schottky problem to almost $ g $ homologically independent vectors. then, we consider the lengths of pants decompositions on complete riemannian surfaces in connexion with bers ' constant and its generalizations. in particular, we show that a complete noncompact riemannian surface of genus $ g $ with $ n $ ends and area normalized to $ 4 \ pi ( g + \ frac { n } { 2 } - 1 ) $ admits a pants decomposition whose total length ( sum of the lengths ) does not exceed $ c _ g \, n \ log ( n + 1 ) $ for some constant $ c _ g $ depending only on the genus. finally, we obtain a lower bound on the systolic area of finitely presentable nontrivial groups with no free factor isomorphic to $ \ z $ in terms of their first betti number. the asymptotic behavior of this lower bound is optimal.
|
arxiv:1011.2962
|
we formulate a very general conjecture relating the analytical invariants of a normal surface singularity to the seiberg - witten invariants of its link provided that the link is a rational homology sphere. as supporting evidence, we establish its validity for a large class of singularities : some rational and minimally elliptic ( including the cyclic quotient and ` polygonal ' ) singularities, and brieskorn - hamm complete intersections. some of the verifications are based on a result which describes ( in terms of the plumbing graph ) the reidemeister - turaev sign refined torsion ( or, equivalently, the seiberg - witten invariant ) of a rational homology 3 - manifold m, provided that m is given by a negative definite plumbing. these results extend previous work of artin, laufer and s s - t yau, respectively of fintushel - stern and neumann - wahl.
|
arxiv:math/0111298
|
among the different computational approaches modelling the dynamics of isogenic cell populations, discrete stochastic models can describe with sufficient accuracy the evolution of small size populations. however, for a systematic and efficient study of their long - time behaviour over a wide range of parameter values, the performance of solely direct temporal simulations requires significantly high computational time. in addition, when the dynamics of the cell populations exhibit non - trivial bistable behaviour, such an analysis becomes a prohibitive task, since a large ensemble of initial states need to be tested for the quest of possibly co - existing steady state solutions. in this work, we study cell populations which carry the { \ it lac } operon network exhibiting solution multiplicity over a wide range of extracellular conditions ( inducer concentration ). by adopting ideas from the so - called ` ` equation - free ' ' methodology, we perform systems - level analysis, which includes numerical tasks such as the computation of { \ it coarse } steady state solutions, { \ it coarse } bifurcation analysis, as well as { \ it coarse } stability analysis. dynamically stable and unstable macroscopic ( population level ) steady state solutions are computed by means of bifurcation analysis utilising short bursts of fine - scale simulations, and the range of bistability is determined for different sizes of cell populations. the results are compared with the deterministic cell population balance ( cpb ) model, which is valid for large populations, and we demonstrate the increased effect of stochasticity in small size populations with asymmetric partitioning mechanisms.
|
arxiv:1312.3647
|
we address the existence of steady state green - keldysh correlation functions of interacting fermions in mesoscopic systems for both the partitioning and partition - free scenarios. under some spectral assumptions on the non - interacting model and for sufficiently small interaction strength, we show that the system evolves to a ness which does not depend on the profile of the time - dependent coupling strength / bias. for the partitioned setting we also show that the steady state is independent of the initial state of the inner sample. closed formulae for the ness two - point correlation functions ( green - keldysh functions ), in the form of a convergent expansion, are derived. in the partitioning approach, we show that the 0th order term in the interaction strength of the charge current leads to the landauer - buettiker formula, while the 1st order correction contains the mean - field ( hartree - fock ) results.
|
arxiv:1305.4410
|
in this paper, our prime objective is to apply the techniques of parameter estimation theory and the concept of quantum metrology in the form of fisher information to investigate the role of certain physical quantities in the open quantum dynamics of an entangled two - qubit system under the markovian approximation. there exist various physical parameters which characterize such a system but cannot be treated as quantum mechanical observables. it becomes imperative to do a detailed parameter estimation analysis to determine the physically consistent parameter space of such quantities. we apply both classical fisher information ( cfi ) and quantum fisher information ( qfi ) to correctly estimate these parameters, which play a significant role in describing the out - of - equilibrium and long range quantum entanglement phenomena of the open quantum system. quantum metrology, compared to classical parameter estimation theory, plays a two - fold superior role, improving the precision and accuracy of parameter estimation. additionally, in this paper we present a new avenue in terms of quantum metrology, which beats classical parameter estimation. we also present an interesting result : the revival of the out - of - equilibrium feature at late time scales, arising due to the long range quantum entanglement at early time scales, and provide a physical interpretation for the same in terms of bell ' s inequality violation at early time scales giving rise to non - locality.
|
arxiv:2005.13555
|
in this paper, we explore the quasinormal modes ( qnms ) of a black hole surrounded by a fluid of strings within the framework of rastall gravity. we analyze the behavior of scalar, electromagnetic, and gravitational perturbations, focusing on the influence of the black hole charge $ q $ and angular momentum $ l $ on the quasinormal frequencies. our numerical results reveal a significant dependence on the parameter $ \ varepsilon $. these trends are consistent across different types of perturbations, emphasizing the relationship between black hole parameters and qnms behavior.
|
arxiv:2409.16891
|
evert and helton proved that real free spectrahedra are the matrix convex hulls of their absolute extreme points. however, this result does not extend to complex free spectrahedra, and we examine multiple ways in which the analogous result can fail. we also develop some local techniques to determine when matrix convex sets are not ( duals of ) free spectrahedra, as part of a continued study of minimal and maximal matrix convex sets and operator systems. these results apply to both the real and complex cases.
|
arxiv:2108.09185
|
we introduce and study a simple model capturing the main features of unbalanced optimal transport. it is based on equipping the conical extension of the group of all diffeomorphisms with a natural metric, which allows a riemannian submersion to the space of volume forms of arbitrary total mass. we describe its finite - dimensional version and present a concise comparison study of the geometry, hamiltonian features, and geodesics for this and other extensions. one of the corollaries of this approach is that along any geodesic the total mass evolves with constant acceleration, as an object ' s height in a constant buoyancy field.
|
arxiv:2307.05703
|
recently, multi - channel speech enhancement has drawn much interest due to the use of spatial information to distinguish target speech from interfering signal. to make full use of spatial information and neural network based masking estimation, we propose a multi - channel denoising neural network - - spatial dccrn. firstly, we extend s - dccrn to multi - channel scenario, aiming at performing cascaded sub - channel and full - channel processing strategy, which can model different channels separately. moreover, instead of only adopting multi - channel spectrum or concatenating first - channel ' s magnitude and ipd as the model ' s inputs, we apply an angle feature extraction module ( afe ) to extract frame - level angle feature embeddings, which can help the model to apparently perceive spatial information. finally, since the phenomenon of residual noise will be more serious when the noise and speech exist in the same time frequency ( tf ) bin, we particularly design a masking and mapping filtering method to substitute the traditional filter - and - sum operation, with the purpose of cascading coarsely denoising, dereverberation and residual noise suppression. the proposed model, spatial - dccrn, has surpassed eabnet, fasnet as well as several competitive models on the l3das22 challenge dataset. not only the 3d scenario, spatial - dccrn outperforms state - of - the - art ( sota ) model mimo - unet by a large margin in multiple evaluation metrics on the multi - channel conferencingspeech2021 challenge dataset. ablation studies also demonstrate the effectiveness of different contributions.
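the frame - level angle features mentioned here are typically inter - channel phase differences ( ipd ) computed from the multi - channel stft ; a small sketch is given below. the choice of reference channel, fft size and array layout are assumptions, and the learned embedding layer of the afe module is not reproduced.

```python
import numpy as np
from scipy.signal import stft

def ipd_features(wavs, fs=16000, n_fft=512, ref=0):
    """Inter-channel phase differences between each microphone and a reference
    channel. `wavs` has shape [channels, samples]; the output has shape
    [channels - 1, freq_bins, frames] with values in (-pi, pi]."""
    _, _, X = stft(wavs, fs=fs, nperseg=n_fft)      # complex STFT, shape [C, F, T]
    ref_spec = X[ref]
    others = np.delete(X, ref, axis=0)
    return np.angle(others * np.conj(ref_spec))     # phase of the cross-spectrum

# Tiny usage example with synthetic 4-channel noise.
rng = np.random.default_rng(0)
wavs = rng.standard_normal((4, 16000))
print(ipd_features(wavs).shape)
```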
|
arxiv:2210.08802
|
we give a complete description of the closure of the space of one - generator closed subgroups of psl2 ( r ) for the chabauty topology, by computing explicitly the matrices associated with elements of aut ( d ) = psl2 ( r ), and finding quantities parametrizing the limit cases. along the way, we investigate under what conditions sequences of maps transform convergent sequences of closed subsets of the domain into convergent sequences of closed subsets of the range. in particular, this allows us to compute certain geometric limits of psl2 ( r ) only by looking at the hausdorff limit of some closed subsets of c.
|
arxiv:1202.1365
|
we present a new divergence - free and well - balanced hybrid fv / fe scheme for the incompressible viscous and resistive mhd equations on unstructured mixed - element meshes in 2 and 3 space dimensions. the equations are split into subsystems. the pressure is defined on the vertices of the primary mesh, while the velocity field and the normal components of the magnetic field are defined on an edge - based / face - based dual mesh in two and three space dimensions, respectively. this allows to account for the divergence - free conditions of the velocity field and of the magnetic field in a rather natural manner. the non - linear convective and the viscous terms are solved at the aid of an explicit fv scheme, while the magnetic field is evolved in a divergence - free manner via an explicit fv method based on a discrete form of the stokes law in the edges / faces of each primary element. to achieve higher order of accuracy, a pw - linear polynomial is reconstructed for the magnetic field, which is guaranteed to be divergence - free via a constrained l2 projection. the pressure subsystem is solved implicitly at the aid of a classical continuous fe method in the vertices of the primary mesh. in order to maintain non - trivial stationary equilibrium solutions of the governing pde system exactly, which are assumed to be known a priori, each step of the new algorithm takes the known equilibrium solution explicitly into account so that the method becomes exactly well - balanced. this paper includes a very thorough study of the lid - driven mhd cavity problem in the presence of different magnetic fields. we finally present long - time simulations of soloviev equilibrium solutions in several simplified 3d tokamak configurations even on very coarse unstructured meshes that, in general, do not need to be aligned with the magnetic field lines.
|
arxiv:2305.06497
|
current methods for 3d generation still fall short in physically based rendering ( pbr ) texturing, primarily due to limited data and challenges in modeling multi - channel materials. in this work, we propose muma, a method for 3d pbr texturing through multi - channel multi - view generation and agentic post - processing. our approach features two key innovations : 1 ) we opt to model shaded and albedo appearance channels, where the shaded channels enable the integration of intrinsic decomposition modules for material properties. 2 ) leveraging multimodal large language models, we emulate artists ' techniques for material assessment and selection. experiments demonstrate that muma achieves superior results in visual quality and material fidelity compared to existing methods.
|
arxiv:2503.18461
|
several thoughts are presented on the long ongoing difficulties both students and academics face related to calculus 101. some of these thoughts may have a more general interest.
|
arxiv:math/0609343
|
within the nonrelativistic quantum chromodynamics ( nrqcd ) factorization formalism, we compute the relativistic corrections to the exclusive decays of bottomonia with even charge conjugation parity into $ s $ - wave charmonium pairs at leading order in the strong coupling constant. relativistic corrections are resummed for a class of color - singlet contributions to all orders in the charm - quark velocity $ v _ c $ in the charmonium rest frame. almost every process that we consider in this work has negative relativistic corrections, ranging from - 20 % to - 35 %. among the various processes, the relativistic corrections of the next - to - leading order in $ v _ c $ to the decay rate for $ \ chi _ { b2 } \ to \ eta _ c ( ms ) + \ eta _ c ( ns ) $ with $ m, $ $ n = 1 $ or 2 are very large. in every case, the resummation of the relativistic corrections enhances the rate in comparison with the next - to - leading - order results. we compare our results with available predictions based on the nrqcd factorization formalism. the nrqcd predictions are significantly smaller than those based on the light - cone formalism by one or two orders of magnitude.
|
arxiv:1108.4104
|
we investigate the evolution of the surface density of a circumbinary accretion disc after the mass loss induced by the merger of two supermassive black holes. we first introduce an analytical model, under the assumption of a disc composed of test particles, to derive the surface density evolution of the disc following the mass loss. the model predicts the formation of sharp density peaks in the disc ; the model also allows us to compute the typical timescale for the formation of these peaks. to test and validate the model, we run numerical simulations of the process using the smoothed particle hydrodynamics ( sph ) code phantom, taking fluid effects into account. we find good agreement in the shape and position of the peaks between the model and the simulations. in a fluid disc, however, the epicyclic oscillations induced by the mass loss can dissipate, and only some of the predicted peaks form in the simulation. to quantify how fast this dissipation proceeds, we introduce an appropriate parameter, and we show that it is effective in explaining the differences between the analytical, collisionless model and a real fluid disc.
|
arxiv:1206.2647
|
the 30 doradus region in the large magellanic cloud ( lmc ) is the most energetic star - forming region in the local group. it is powered by the feedback from the massive stars in r136, the 1 - 2 myr old central massive cluster. 30 doradus has therefore long been regarded as a laboratory for studying star and star cluster formation under conditions reminiscent of the early universe. we use jwst nircam observations to analyse how star formation proceeds in the region. using selections based on theoretical isochrones on colour - magnitude diagrams, we identify populations of different ages. we select pre - main - sequence ( pms ) stars and young stellar objects that show excess emission from warm dust or emission lines. studying the spatial distribution of the different populations, we find that the youngest pms stars with ages < 0. 5 myr are located in an elongated structure that stretches towards the north - east from the central cluster. the same structure is found in the sources that show an infrared excess, appears to be overlapping with cold molecular gas, and covers previously investigated sites of ongoing star formation. pre - main - sequence stars with ages between 1 and 4 myr and upper main - sequence stars are concentrated in the centre of r136, while older stars are more uniformly distributed across the field and likely belong to the lmc field population. nonetheless, we find stars with excess emission from warm dust or emission lines as far as 100 pc from the centre, indicating extended recent star formation. we interpret the elongated structure formed by the youngest pms stars to be an indication of the still - ongoing hierarchical assembly of the r136 cluster. additionally, the lower density of old pms stars with emission due to ongoing accretion in the central region suggests that feedback from the r136 stars is effective in disrupting the disks of pms stars.
|
arxiv:2311.06336
|
spectral observations of molecular line profiles reveal the so - called ` blue profiles ' for double - peaked molecular lines with stronger blue and weaker red peaks as notable features for star - forming cloud core collapses under the self - gravity. in contrast, 25 - 30 per cent of observed molecular spectral line profiles in star - forming clouds or cores also show the so - called double - peaked ` red profiles ' with red peaks stronger than blue peaks. gao & lou ( 2010 ) show that these unexplained ` red profiles ' can be signatures of global dynamics for envelope expansion with core collapse ( eecc ) within star - forming molecular clouds or cores. we demonstrate here that spatially - resolved ` red profiles ' of hco + ( j = 1 - 0 ) and cs ( j = 2 - 1 ) molecular transitions from the low - mass star - forming cloud core fest 1 - 457 together with its radial profile of column density inferred from dust extinction observations appear to reveal a self - similar hydrodynamic shock phase for global eecc. observed spectral profiles of c18o ( j = 1 - 0 ) are also fitted by the same eecc model. for further observational tests, the spatially - resolved profiles of molecular transitions hco + ( j = 3 - 2 ) and cs ( j = 3 - 2 ) as well as the radial profiles of ( sub ) millimetre continuum emissions at three wavelengths of 1. 2mm, 0. 85mm and 0. 45mm from fest 1 - 457 are also predicted.
|
arxiv:1011.2248
|
with the vigorous development of mobile photography technology, major mobile phone manufacturers are scrambling to improve the shooting ability of equipments and the photo beautification algorithm of software. however, the improvement of intelligent equipments and algorithms cannot replace human subjective photography technology. in this paper, we propose the aesthetic language guidance of image ( alg ). we divide alg into alg - t and alg - i according to whether the guiding rules are based on photography templates or guidance images. whether it is alg - t or alg - i, we guide photography from three attributes of color, lighting and composition of the images. the differences of the three attributes between the input images and the photography templates or the guidance images are described in natural language, which is aesthetic natural language guidance ( alg ). also, because of the differences in lighting and composition between landscape images and portrait images, we divide the input images into landscape images and portrait images. both alg - t and alg - i conduct aesthetic language guidance respectively for the two types of input images ( landscape images and portrait images ).
|
arxiv:2208.04740
|
four subriemannian ( sr ) structures over the euclidean sphere $ \ mathbb { s } ^ 7 $ are considered in accordance to the previous literature. the defining bracket generating distribution is chosen as the horizontal space in the hopf fibration, the quaternionic hopf fibration or spanned by a suitable number of canonical vector fields. in all cases the induced sr geodesic flow on $ t ^ * \ mathbb { s } ^ 7 $ is studied. adapting a method by a. thimm, a maximal set of functionally independent and poisson commuting first integrals are constructed, including the corresponding sr hamiltonian. as a result, the complete integrability in the sense of liouville is proved for the sr geodesic flow. it is observed that these first integrals arise as the symbols of commuting second order differential operators one of them being a ( not necessarily intrinsic ) sublaplacian. on the way one explicitly derives the lie algebras of all sr isometry groups intersected with $ o ( 8 ) $.
|
arxiv:2403.10157
|
we investigate the impact of statistical and systematic errors on measurements of linear redshift - space distortions ( rsd ) in future cosmological surveys, analyzing large catalogues of dark - matter halos from the basicc simulation. these allow us to estimate the dependence of errors on typical survey properties, as volume, galaxy density and mass ( i. e. bias factor ) of the adopted tracer. we find that measures of the specific growth rate \ beta = f / b using the hamilton / kaiser harmonic expansion of the redshift - space correlation function \ xi ( r _ p, \ pi ) on scales larger than 3 / h mpc are typically under - estimated by up to 10 % for galaxy sized halos. this is significantly larger than the corresponding statistical errors, which amount to a few percent, indicating the importance of non - linear improvements to the kaiser model to obtain accurate measurements of the growth rate. we compare the statistical errors to predictions obtained with the fisher information matrix, based on the usual fkp prescription for the errors on the power spectrum. we show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, but only if applied to strictly linear scales in fourier space ( k < 0. 2 h / mpc ). finally, we present an accurate scaling formula describing the relative error on { \ beta } as a function of the survey parameters, which closely matches the simulation results in all explored regimes. this provides a handy and plausibly more realistic alternative to the fisher matrix approach, to quickly and accurately predict rsd statistical errors expected from future surveys.
|
arxiv:1203.1545
|
a method is presented to construct initial data for einstein ' s equations as a superposition of a gravitational wave perturbation on an arbitrary stationary background spacetime. the method combines the conformal thin sandwich formalism with linear gravitational waves, and allows detailed control over characteristics of the superposed gravitational wave like shape, location and propagation direction. it is furthermore fully covariant with respect to spatial coordinate changes and allows for very large amplitude of the gravitational wave.
|
arxiv:gr-qc/0410016
|
we explore the modifications of hadron structure in a nuclear medium, focusing on the spacelike electromagnetic form factors ( emffs ) of light and heavy - light pseudoscalar mesons. by combining the light - front quark model ( lfqm ) with the quark - meson coupling ( qmc ) model, which reasonably reproduces emffs in free space and the saturation properties of nuclear matter, respectively, we systematically analyze the in - medium emffs and charge radii of mesons with various quark flavors. our findings show that the emffs of charged ( neutral ) mesons exhibit a faster fall - off ( increase ) with increasing four - momentum transfer squared and nuclear density. consequently, the absolute value of the charge radii of mesons increases with nuclear density, where the rate of increase depends on their quark flavor contents. we observe that the emffs of pions and kaons undergo significant modifications in the nuclear medium, while heavy - light mesons are only slightly modified. by decomposing the quark flavor contributions to emffs, we show that the medium effects primarily impact the light - quark sector, leaving the heavy - quark sector nearly unaffected. the results of this study further suggest the importance of the medium effects at the quark level.
|
arxiv:2412.09883
|
we study an inverse first - passage - time problem for a wiener process $ x ( t ) $ subject to hold and jump from a boundary $ c. $ given a threshold $ s > x ( 0 ) \ ge c $ and a distribution function $ f $ on $ [ 0, + \ infty ), $ the problem consists in finding the distribution of the holding time at $ c $ and the distribution of jumps from $ c $ so that the first - passage time of $ x ( t ) $ through $ s $ has distribution $ f. $
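a monte carlo sketch of the forward process is given below : a wiener path that, on reaching $ c $, holds there for a random time and then jumps upwards, with the first - passage time through $ s $ recorded. the exponential holding time and uniform jump size used here are placeholders, since finding these two distributions is exactly the inverse problem studied.

```python
import numpy as np

def first_passage_time(x0=0.0, c=-1.0, s=1.0, dt=1e-2, t_max=50.0,
                       hold_rate=1.0, jump_max=0.5, rng=None):
    """Simulate a Wiener path with hold-and-jump at the lower boundary c and
    return the first-passage time through s (np.inf if not reached by t_max).
    The exponential holding time and uniform jump size are placeholder choices."""
    rng = rng or np.random.default_rng()
    x, t = x0, 0.0
    while t < t_max:
        if x <= c:
            t += rng.exponential(1.0 / hold_rate)      # hold at the boundary
            x = c + rng.uniform(0.0, jump_max)          # then jump upwards
        else:
            x += np.sqrt(dt) * rng.standard_normal()    # Wiener increment
            t += dt
        if x >= s:
            return t
    return np.inf

rng = np.random.default_rng(42)
samples = np.array([first_passage_time(rng=rng) for _ in range(1000)])
print("median first-passage time through s:", np.median(samples[np.isfinite(samples)]))
```

matching the empirical distribution of `samples` to a prescribed $ f $ by adjusting the holding - time and jump laws is the discrete analogue of the inverse problem posed in the abstract.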
|
arxiv:1509.03448
|