Explaining the predictions of opaque machine learning algorithms is an important and challenging task, especially as complex models are increasingly used to assist in high-stakes decisions such as those arising in healthcare and finance. Most popular tools for post-hoc explainable artificial intelligence (XAI) are either insensitive to context (e.g., feature attributions) or difficult to summarize (e.g., counterfactuals). In this paper, I introduce $\textit{rational Shapley values}$, a novel XAI method that synthesizes and extends these seemingly incompatible approaches in a rigorous, flexible manner. I leverage tools from decision theory and causal modeling to formalize and implement a pragmatic approach that resolves a number of known challenges in XAI. By pairing the distribution of random variables with the appropriate reference class for a given explanation task, I illustrate through theory and experiments how user goals and knowledge can inform and constrain the solution set in an iterative fashion. The method compares favorably to state-of-the-art XAI tools in a range of quantitative and qualitative comparisons.
arxiv:2106.10191
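As background for the Shapley-value machinery this abstract builds on, the following Python sketch computes exact Shapley attributions for a small cooperative game via the classical weighted-marginal-contribution formula. It is a generic illustration, not the paper's rational Shapley method, and the toy additive payoff `value_fn` is an assumption chosen only because its Shapley values are known in closed form.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley attributions for a set-valued payoff function.

    value_fn maps a frozenset of feature indices to a real payoff.
    Exponential in n_features, so only suitable for small games.
    """
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                s = frozenset(subset)
                # Classical Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = (factorial(len(s)) * factorial(n_features - len(s) - 1)
                          / factorial(n_features))
                phi[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return phi

# Toy additive game: the payoff of a coalition is the sum of its members'
# weights, so the Shapley value of each player recovers its own weight.
weights = [1.0, 2.0, 3.0]
print(shapley_values(lambda s: sum(weights[j] for j in s), 3))
```

For an additive game the attributions equal the per-feature weights up to floating-point rounding, which makes this a convenient sanity check before applying the formula to a real model's payoff.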
The electronic structure of La0.7Ce0.3MnO3 (LCeMO) thin film has been investigated using photoemission spectroscopy (PES) and X-ray absorption spectroscopy (XAS). The Ce 3d core-level PES and XAS spectra of LCeMO are very similar to those of CeO2, indicating that Ce ions are far from being trivalent. A very weak 4f resonance is observed around the Ce 4d $\to$ 4f absorption edge, suggesting that the localized Ce 4f states are almost empty in the ground state. The Mn 2p XAS spectrum reveals the existence of the Mn(2+) multiplet feature, confirming the Mn(2+)-Mn(3+) mixed-valent states of Mn ions in LCeMO. The measured Mn 3d PES/XAS spectra for LCeMO agree reasonably well with the Mn 3d PDOS calculated using the LSDA+U method. The LSDA+U calculation predicts a half-metallic ground state for LCeMO.
arxiv:cond-mat/0309182
We prove that any (real or complex) analytic horizontally conformal submersion from a three-dimensional conformal manifold $M$ to a two-dimensional conformal manifold $N$ can be, locally, `extended' to a unique harmonic morphism from the heaven space of $M$ to $N$.
arxiv:0709.0672
We study how active-region-scale flux tubes rise buoyantly from the base of the convection zone to near the solar surface by embedding a thin flux tube model in a rotating spherical shell of solar-like turbulent convection. The toroidal flux tubes that we simulate range in magnetic field strength from 15 kG to 100 kG at initial latitudes of 1 degree to 40 degrees in both hemispheres. This article expands upon Weber, Fan, and Miesch (Astrophys. J., 741, 11, 2011) (Article 1) with the inclusion of tubes with magnetic flux of 10^20 Mx and 10^21 Mx, and more simulations of the previously investigated case of 10^22 Mx, sampling more convective flows than the previous article and greatly improving statistics. Observed properties of active regions are compared to properties of the simulated emerging flux tubes, including: the tilt of active regions in accordance with Joy's law as in Article 1, and in addition the scatter of tilt angles about the Joy's law trend, the most commonly occurring tilt angle, the rotation rate of the emerging loops with respect to the surrounding plasma, and the nature of the magnetic field at the flux tube apex. We discuss how these diagnostic properties constrain the initial field strength of the active region flux tubes at the bottom of the solar convection zone, and suggest that flux tubes with initial magnetic field strengths of $\geq 40$ kG are good candidates for the progenitors of large (10^21 Mx to 10^22 Mx) solar active regions, which agrees with the results from Article 1 for flux tubes of 10^22 Mx. With the addition of more magnetic flux values and more simulations, we find that for all magnetic field strengths the emerging tubes show a positive Joy's law trend, and that this trend does not show a statistically significant dependence on the magnetic flux.
arxiv:1208.1292
Primordial black holes are considered to be pair-created quantum-mechanically during inflation. In the context of general relativity (GR), it has been shown that the pair creation rate is exponentially decreasing during inflation. Specifically, tiny black holes are favored in the early universe, but they can grow with the horizon scale as inflation approaches its end. At the same time, cosmological (and not only) shortcomings of GR have triggered the pursuit of a new, alternative theory of gravity. In this paper, by using probability amplitudes from the no-boundary proposal (NBP), we argue that any alternative gravity should have a black hole creation rate similar to that of GR; that is, in the early universe the creation of small black holes is favored, while in the late universe larger black holes are exponentially suppressed. As an example, we apply this argument to $f(R)$-theories of gravity and derive a general formula for the rate in any $f(R)$-theory with constant curvature. Finally, we consider well-known $f(R)$-models and, using this formula, we put constraints on their free parameters.
arxiv:1712.10177
A practical and scalable multicast beamformer design in multi-input multi-output (MIMO) coded caching (CC) systems is introduced in this paper. The proposed approach allows multicast transmission to multiple groups with partially overlapping user sets, using receiver dimensions to distinguish between different group-specific streams. Additionally, it provides flexibility in accommodating various parameter configurations of the MIMO-CC setup and overcomes practical limitations, such as the requirement to use successive interference cancellation (SIC) at the receiver, while achieving the same degrees of freedom (DoF). To evaluate the proposed scheme, we define the symmetric rate as the sum rate of the partially overlapping streams received per user, comprising a linear multistream multicast transmission vector and the linear minimum mean square error (LMMSE) receiver. The resulting non-convex symmetric rate maximization problem is solved using alternating optimization and successive convex approximation (SCA). Moreover, a fast iterative Lagrangian-based algorithm is developed, significantly reducing the computational overhead compared to previous designs. The effectiveness of our proposed method is demonstrated by extensive simulations.
arxiv:2312.02839
The ANAIS (Annual modulation with NaI(Tl) Scintillators) experiment aims at the confirmation of the DAMA/LIBRA signal using the same target and technique at the Canfranc Underground Laboratory. 250 kg of ultrapure NaI(Tl) crystals will be used as a target, divided into 20 modules, each coupled to two photomultipliers. Two NaI(Tl) crystals of 12.5 kg each, grown by Alpha Spectra from a powder having a potassium level under the limit of our analytical techniques, form the ANAIS-25 set-up. The background contributions are being carefully studied and preliminary results are presented: the natural potassium content in the bulk has been quantified, as has the presence of the uranium and thorium radioactive chains in the bulk, through the discrimination of the corresponding alpha events by PSA; moreover, thanks to the fast commissioning, the contribution from cosmogenically activated isotopes is clearly identified and their decay observed along the first months of data taking. Following the procedures established with ANAIS-0 and previous prototypes, bulk NaI(Tl) scintillation event selection and light collection efficiency have also been studied in ANAIS-25.
arxiv:1308.3478
We experimentally investigate the dielectric response of the low-dimensional gapped quantum magnet Cu$_2$Cl$_{4}\cdot$H$_8$C$_4$SO$_2$ near a magnetic-field-induced quantum critical point, which separates the quantum-disordered and helimagnetic ground states. The observed magnetocapacitive effect originates from an improper ferroelectric nature of the transition, which itself is perhaps one of the best known realizations of Bose--Einstein condensation of magnons. Despite that, we find that the magnetocapacitive effect associated with the transition exhibits huge and very unusual anharmonicities.
arxiv:1503.08173
The Fermilab top quark analysis is heavily dependent on the assumption of Standard Model backgrounds only. In the light gluino scenario, the stop quarks lie near the top in mass and their decays can influence the resulting top quark mass by an amount that is not small relative to the currently quoted errors. Several slight anomalies in the top quark analysis find a natural explanation in the light gluino case.
arxiv:hep-ph/9708405
Quantum simulation provides a computationally feasible approach to model and study many problems in chemistry, condensed-matter physics, or high-energy physics where quantum phenomena define the system's behaviour. In high-energy physics, quite a few possible applications are investigated in the context of gauge theories and their application to dynamical problems, topological problems, high-baryon-density configurations, or collective neutrino oscillations. In particular, schemes for simulating neutrino oscillations have been proposed using a quantum walk framework. In this study, we approach the problem of simulating neutrino oscillations from the perspective of open quantum systems by treating the position space of the quantum walk as the environment. We obtain the recurrence relation for the Kraus operators used to represent the dynamics of neutrino flavor change in the form of reduced coin states. We establish a connection between the dynamics of the reduced coin state and neutrino phenomenology, enabling one to fix the simulation parameters for a given neutrino experiment and reducing the need for an extended position space to simulate neutrino oscillations. We also study the behavior of linear entropy as a measure of entanglement between different flavors in the same framework.
arxiv:2305.13923
Aims. We use the Kepler data accumulated on the pulsating DB white dwarf KIC 08626021 to explore in detail the stability of its oscillation modes, searching in particular for evidence of nonlinear behaviors. Methods. We analyse nearly two years of uninterrupted short-cadence data, concentrating in particular on identified triplets due to stellar rotation that show intriguing behaviors during the course of the observations. Results. We find clear signatures of nonlinear effects attributed to resonant mode coupling mechanisms. We find that a triplet at 4310 $\mu$Hz and a doublet at 3681 $\mu$Hz (most likely the two visible components of an incomplete triplet) have clear periodic frequency and amplitude modulations typical of the so-called intermediate regime of the resonance, with time scales consistent with theoretical expectations. Another triplet at 5073 $\mu$Hz is likely in a narrow transitory regime in which the amplitudes are modulated while the frequencies are locked. Using nonadiabatic pulsation calculations based on a model representative of KIC 08626021 to evaluate the linear growth rates of the modes in the triplets, we also provide quantitative information that could be useful for future comparisons with numerical solutions of the amplitude equations. Conclusions. The identified modulations are the first clear-cut signatures of nonlinear resonant couplings occurring in white dwarf stars. These should serve as a warning to projects aiming at measuring the evolutionary cooling rate of KIC 08626021, and of white dwarf stars in general: nonlinear modulations of the frequencies can potentially jeopardize any attempt to measure such rates reliably, unless they can be corrected beforehand. These results should motivate further theoretical work to develop nonlinear stellar pulsation theory.
arxiv:1510.06884
In unconstrained thermal equilibrium, a local potential for total or fermionic hypercharge does not bias electroweak anomalous processes. We consider two proposed mechanisms for electroweak baryogenesis in this light. In `spontaneous' baryogenesis, which was argued to apply in the `adiabatic' limit of thick, slow walls, a non-zero result was obtained by setting globally conserved charges to be zero {\it locally}. We show that this is a poor approximation unless the walls are very thick. For more realistic wall thicknesses, the local equilibrium approached as the wall velocity $v_w \rightarrow 0$ has zero baryon number violation and nonzero global charges on the wall. In the `charge transport' mechanism, argued to apply to the case of thin, fast walls, calculations of the magnitude of the asymmetry also involve the same error. In corrected calculations, the local values of global charges should be determined dynamically rather than fixed locally to zero.
arxiv:hep-ph/9401351
Given an anticanonical divisor in a projective variety, one naturally obtains a monotone K\"ahler manifold. In this paper, for divisors in a certain class (larger than normal crossings), we construct smoothing families of contact hypersurfaces with controlled Reeb dynamics. We use these to obtain subsets of the divisor complement which are superheavy. In particular, we show that several examples of Lagrangian skeleta of such divisor complements are superheavy, in cases where applying Lagrangian Floer theory may be intractable.
arxiv:2408.13187
Matrix models have been shown to be equivalent to noncommutative field theories. In this work we study the noncommutative X-Y model and try to understand the Kosterlitz-Thouless transition in it by analysing the equivalent matrix model. We consider the cases of a finite lattice and an infinite lattice separately. We show that the critical value of the matrix model coupling is identical for the finite and infinite lattice cases. However, the critical value of the coupling of the continuum field theory, in the large $N$ limit, is finite in the infinite lattice case and zero in the case of a finite lattice.
arxiv:hep-th/0105051
In this paper, we address a simplified version of a problem arising from volcanology. Specifically, as a reduced form of the boundary value problem for the Lam\'e system, we consider a Neumann problem for harmonic functions in the half-space with a cavity $C$. A zero normal derivative is assumed at the boundary of the half-space; differently, at $\partial C$, the normal derivative of the function is required to be given by an external datum $g$, corresponding to a pressure term exerted on the medium at $\partial C$. Under the assumption that the (pressurized) cavity is small with respect to the distance from the boundary of the half-space, we establish an asymptotic formula for the solution of the problem. The main ingredients are integral equation formulations of the harmonic solution of the Neumann problem and a spectral analysis of the integral operators involved in the problem. In the special case of a datum $g$ which describes a constant pressure at $\partial C$, we recover a simplified representation based on a polarization tensor.
arxiv:1508.02051
GRACE/SUSY-loop is a program package for the automatic calculation of MSSM amplitudes at one-loop order. We present features of GRACE/SUSY-loop, processes calculated using GRACE/SUSY-loop, and an extension of the non-linear gauge formalism applied to GRACE/SUSY-loop.
arxiv:1006.3491
Probabilistic topic models are popular unsupervised learning methods, including probabilistic latent semantic indexing (PLSI) and latent Dirichlet allocation (LDA). By now, their training is implemented on general-purpose computers (GPCs), which are flexible in programming but energy-consuming. Towards low-energy implementations, this paper investigates their training on an emerging hardware technology called neuromorphic multi-chip systems (NMSs). NMSs are very effective for a family of algorithms called spiking neural networks (SNNs). We present three SNNs to train topic models. The first SNN is a batch algorithm combining the conventional collapsed Gibbs sampling (CGS) algorithm and an inference SNN to train LDA. The other two SNNs are online algorithms targeting both energy- and storage-limited environments. The two online algorithms are equivalent to training LDA by using maximum-a-posteriori estimation and by maximizing the semi-collapsed likelihood, respectively. They use novel, tailored ordinary differential equations for stochastic optimization. We simulate the new algorithms and show that they are comparable with the GPC algorithms, while being suitable for NMS implementation. We also propose an extension to train PLSI and a method to prune the network to obey the limited fan-in of some NMSs.
arxiv:1804.03578
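For reference, the conventional collapsed Gibbs sampling (CGS) baseline that the first SNN combines with an inference network can be sketched as follows. This is a textbook CGS sampler for LDA, not the paper's spiking-network version; the corpus, hyperparameters, and iteration count are illustrative assumptions.

```python
import random

def lda_cgs(docs, n_topics, vocab_size, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA.

    docs: list of documents, each a list of word ids in [0, vocab_size).
    Returns the final topic assignment for every token.
    """
    rng = random.Random(seed)
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]
    ndk = [[0] * n_topics for _ in docs]                # doc-topic counts
    nkw = [[0] * vocab_size for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                                 # topic totals
    for d, doc in enumerate(docs):
        for n, w in enumerate(doc):
            k = z[d][n]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                # Remove the current token's assignment from the counts...
                k = z[d][n]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # ...compute the full conditional p(z = k | rest)...
                p = [(ndk[d][j] + alpha) * (nkw[j][w] + beta)
                     / (nk[j] + vocab_size * beta) for j in range(n_topics)]
                # ...and resample the assignment from it.
                r = rng.random() * sum(p)
                k = n_topics - 1
                acc = 0.0
                for j in range(n_topics):
                    acc += p[j]
                    if r < acc:
                        k = j
                        break
                z[d][n] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return z

print(lda_cgs([[0, 0, 1], [2, 3, 3]], n_topics=2, vocab_size=4, iters=50))
```

The per-token remove/resample/restore loop is exactly the sequential dependency that makes CGS awkward to parallelize, which is part of what motivates reformulating training as SNN dynamics.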
I derive unidirectional wave equations for fields propagating in materials with both electric and magnetic dispersion and nonlinearity. The derivation imposes no conditions on the pulse profile except that the material modulates the propagation slowly; that is, that loss, dispersion, and nonlinearity have only a small effect over the scale of a wavelength. It also allows a direct term-to-term comparison of the exact bidirectional theory with its approximate unidirectional counterpart.
arxiv:0909.3407
Two recent developments have accelerated progress in image reconstruction from human brain activity: large datasets that offer samples of brain activity in response to many thousands of natural scenes, and the open-sourcing of powerful stochastic image-generators that accept both low- and high-level guidance. Most work in this space has focused on obtaining point estimates of the target image, with the ultimate goal of approximating literal pixel-wise reconstructions of target images from the brain activity patterns they evoke. This emphasis belies the fact that there is always a family of images that are equally compatible with any evoked brain activity pattern, and the fact that many image-generators are inherently stochastic and do not by themselves offer a method for selecting the single best reconstruction from among the samples they generate. We introduce a novel reconstruction procedure (Second Sight) that iteratively refines an image distribution to explicitly maximize the alignment between the predictions of a voxel-wise encoding model and the brain activity patterns evoked by any target image. We show that our process converges on a distribution of high-quality reconstructions by refining both semantic content and low-level image details across iterations. Images sampled from these converged image distributions are competitive with state-of-the-art reconstruction algorithms. Interestingly, the time-to-convergence varies systematically across visual cortex, with earlier visual areas generally taking longer and converging on narrower image distributions, relative to higher-level brain areas. Second Sight thus offers a succinct and novel method for exploring the diversity of representations across visual brain areas.
arxiv:2306.00927
We study the behavior of bipartite entanglement at fixed von Neumann entropy. We look at the distribution of the entanglement spectrum, that is, the eigenvalues of the reduced density matrix of a quantum system in a pure state. We report the presence of two continuous phase transitions, characterized by different entanglement spectra, which are deformations of classical eigenvalue distributions.
arxiv:1302.3383
Little research has explored information engagement (IE), the degree to which individuals interact with and use information in a manner that manifests cognitively, behaviorally, and affectively. This study explored the impact of phrasing, specifically word choice, on IE and decision making. Synthesizing two theoretical models, user engagement theory (UET) and information behavior theory (IBT), a theoretical framework illustrating the impact of and relationships among the three IE dimensions of perception, participation, and perseverance was developed and hypotheses generated. The framework was empirically validated in a large-scale user study measuring how word choice impacts the dimensions of IE. The findings provide evidence that IE differs from other forms of engagement in that it is driven and fostered by the expression of the information itself, regardless of the information system used to view, interact with, and use the information. The findings suggest that phrasing can have a significant effect on the interpretation of and interaction with digital information, indicating the importance of the expression of information, in particular word choice, on decision making and IE. The research contributes to the literature by identifying methods for assessment and improvement of IE and decision making with digital text.
arxiv:2305.09798
What would happen if temperatures were subdued, resulting in a cool summer? One can easily imagine that air conditioner, ice cream, or beer sales would be suppressed as a result. Less obvious is that agricultural shipments might be delayed, or that soundproofing material sales might decrease. The ability to extract such causal knowledge is important, but it is also important to distinguish between cause-effect pairs that are known and those that are likely to be unknown, or rare. Therefore, in this paper, we propose a method for extracting rare causal knowledge from Japanese financial statement summaries produced by companies. Our method consists of three steps. First, it extracts sentences that include causal knowledge from the summaries using a machine learning method based on an extended language ontology. Second, it obtains causal knowledge from the extracted sentences using syntactic patterns. Finally, it extracts the rarest causal knowledge from the knowledge it has obtained.
arxiv:2408.01748
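The second step of the pipeline, pattern-based extraction of (cause, effect) pairs, can be illustrated with a deliberately simplified sketch. The real method operates on Japanese text with richer syntactic patterns and an ontology; the single English `because of` / `due to` surface template below is an assumption made purely for demonstration.

```python
import re

def extract_causal_pairs(sentences):
    """Toy stand-in for pattern-based causal extraction: pull
    (cause, effect) pairs out of sentences via one surface template.
    """
    # "<effect> because of <cause>" or "<effect> due to <cause>"
    pattern = re.compile(r"(?P<effect>.+?) (?:because of|due to) (?P<cause>.+)")
    pairs = []
    for s in sentences:
        m = pattern.match(s.rstrip("."))
        if m:
            pairs.append((m.group("cause").strip(), m.group("effect").strip()))
    return pairs

print(extract_causal_pairs([
    "Beer sales fell because of a cool summer.",
    "Shipments were delayed due to low temperatures.",
    "Revenue grew steadily.",
]))
```

Sentences that match no template are simply skipped, mirroring how a precision-oriented pattern stage passes only confident pairs on to the rarity-ranking step.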
of soil will slip relative to the base of soil and lead to slope failure. If the interface between the mass and the base of a slope has a complex geometry, slope stability analysis is difficult and numerical solution methods are required. Typically, the interface's exact geometry is unknown, and a simplified interface geometry is assumed. Finite slopes require three-dimensional models to be analyzed, so most slopes are analyzed assuming that they are infinitely wide and can be represented by two-dimensional models.

== Sub-disciplines ==

=== Geosynthetics ===

Geosynthetics are a type of plastic polymer products used in geotechnical engineering that improve engineering performance while reducing costs. This includes geotextiles, geogrids, geomembranes, geocells, and geocomposites. The synthetic nature of the products makes them suitable for use in the ground where high levels of durability are required. Their main functions include drainage, filtration, reinforcement, separation, and containment. Geosynthetics are available in a wide range of forms and materials, each to suit a slightly different end-use, although they are frequently used together. Some reinforcement geosynthetics, such as geogrids and, more recently, cellular confinement systems, have been shown to improve bearing capacity, modulus factors, and soil stiffness and strength. These products have a wide range of applications and are currently used in many civil and geotechnical engineering applications including roads, airfields, railroads, embankments, piled embankments, retaining structures, reservoirs, canals, dams, landfills, bank protection, and coastal engineering.

=== Offshore ===

Offshore (or marine) geotechnical engineering is concerned with foundation design for human-made structures in the sea, away from the coastline (in opposition to onshore or nearshore engineering). Oil platforms, artificial islands, and submarine pipelines are examples of such structures.

There are a number of significant differences between onshore and offshore geotechnical engineering. Notably, site investigation and ground improvement on the seabed are more expensive; offshore structures are exposed to a wider range of geohazards; and the environmental and financial consequences are higher in case of failure. Offshore structures are exposed to various environmental loads, notably wind, waves, and currents. These phenomena may affect the integrity or the serviceability of the structure and its foundation during its operational lifespan and need to be taken into account in offshore design. In subsea geotechnical engineering, seabed materials
https://en.wikipedia.org/wiki/Geotechnical_engineering
Motivated by the very recent work of Gao, Y., Chen, J., Wang, J., Zou, H. [Comm. Algebra, 49(8) (2021) 3241-3254; MR4283143], we introduce two new generalized inverses, named the weak Drazin (WD) and weak Drazin Moore-Penrose (WDMP) inverses, for elements in rings. A few of their properties are then provided, and the fact that the proposed generalized inverses coincide with different well-known generalized inverses under certain assumptions is established. Further, we discuss additive properties, the reverse-order law, and the forward-order law for WD and WDMP generalized inverses. Some examples are also provided in support of the theoretical results.
arxiv:2303.06651
We define here an analogue, for the N\'eron model of a semi-stable abelian variety defined over a number field, of M. J. Taylor's class-invariant homomorphism (defined for abelian schemes). We then extend a vanishing result (in the case of an elliptic curve), and an injectivity result regarding an Arakelovian version of this homomorphism. This is the sequel to the paper "Invariants de classes: le cas semi-stable".
arxiv:math/0401445
Hierarchical clustering is a critical task in numerous domains. Many approaches are based on heuristics, and the properties of the resulting clusterings are studied post hoc. However, in several applications, there is a natural cost function that can be used to characterize the quality of the clustering. In those cases, hierarchical clustering can be seen as a combinatorial optimization problem. To that end, we introduce a new approach based on A* search. We overcome the prohibitively large search space by combining A* with a novel \emph{trellis} data structure. This combination results in an exact algorithm that scales beyond the previous state of the art, from a search space with $10^{12}$ trees to $10^{15}$ trees, and an approximate algorithm that improves over baselines, even in enormous search spaces that contain more than $10^{1000}$ trees. We empirically demonstrate that our method achieves substantially higher quality results than baselines for a particle physics use case and other clustering benchmarks. We describe how our method provides significantly improved theoretical bounds on the time and space complexity of A* for clustering.
arxiv:2104.07061
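To make the "natural cost function" viewpoint concrete, a standard choice in this line of work is Dasgupta's cost, which scores a candidate hierarchy by charging each pair of leaves the size of the subtree at their lowest common ancestor, weighted by their similarity. The sketch below evaluates that cost for a binary tree given as nested tuples; it is a generic illustration of the objective, not the paper's A*-plus-trellis search, and the tiny similarity dictionary is an assumption.

```python
def dasgupta_cost(tree, sim):
    """Dasgupta's cost of a binary hierarchy over integer leaf ids.

    tree: nested 2-tuples with int leaves, e.g. ((0, 1), 2).
    sim:  dict mapping frozenset({i, j}) -> pairwise similarity.
    Cost = sum over leaf pairs of sim(i, j) * |leaves(LCA(i, j))|.
    """
    def walk(node):
        # Returns (leaves, accumulated cost) for this subtree.
        if isinstance(node, int):
            return [node], 0.0
        left_leaves, left_cost = walk(node[0])
        right_leaves, right_cost = walk(node[1])
        leaves = left_leaves + right_leaves
        # Pairs split at this node have their LCA here, so each is
        # charged the size of this subtree.
        cross = sum(sim.get(frozenset((i, j)), 0.0)
                    for i in left_leaves for j in right_leaves)
        return leaves, left_cost + right_cost + cross * len(leaves)
    return walk(tree)[1]

sim = {frozenset((0, 1)): 1.0}
print(dasgupta_cost(((0, 1), 2), sim))  # merging the similar pair low: 2.0
print(dasgupta_cost(((0, 2), 1), sim))  # splitting it at the root: 3.0
```

A lower cost rewards trees that merge similar points deep in the hierarchy, which is exactly the quantity an exact search such as A* over tree structures would minimize.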
In multimedia applications, the text and image components in a web document form a pairwise constraint that potentially indicates the same semantic concept. This paper studies cross-modal learning via the pairwise constraint and aims to find the common structure hidden in different modalities. We first propose a compound regularization framework to deal with the pairwise constraint, which can be used as a general platform for developing cross-modal algorithms. For unsupervised learning, we propose a cross-modal subspace clustering method to learn a common structure for different modalities. For supervised learning, to reduce the semantic gap and the outliers in pairwise constraints, we propose a cross-modal matching method based on compound $\ell_{21}$ regularization along with an iteratively reweighted algorithm to find the global optimum. Extensive experiments demonstrate the benefits of joint text and image modeling with semantically induced pairwise constraints, and show that the proposed cross-modal methods can further reduce the semantic gap between different modalities and improve the clustering/retrieval accuracy.
arxiv:1411.7798
We study radiation pressure due to Lyman alpha line photons, obtaining and exploring analytical expressions for the force multiplier, $M_F(N_H, Z) = F_\alpha / (L_\alpha / c)$, as a function of gas column density, $N_H$, and metallicity, $Z$, for both dust-free and dusty media, employing a WKB approach for the latter case. Solutions for frequency-offset emission, to emulate non-static media moving with a bulk velocity $v$, have also been obtained. We find that, in static media, Ly$\alpha$ pressure dominates over both photoionization and dust-mediated UV radiation pressure in a very wide parameter range ($16 < \log N_H < 23$; $-4 < \log[Z/Z_\odot] < 0$). For example, it overwhelms the other two forces by 10 (300) times in standard (low-$Z$) star-forming clouds. Thus, in agreement with previous studies, we conclude that Ly$\alpha$ pressure plays a dominant role in the initial acceleration of the gas around luminous sources, and must be implemented in galaxy formation, evolution, and outflow models and simulations.
arxiv:2103.14655
Motivated by recent results of Kapron and Steinberg (LICS 2018), we introduce new forms of iteration on length in the setting of applied lambda-calculi for higher-type poly-time computability. In particular, in a type-two setting, we consider functionals which capture iteration on input length and which bound interaction with the type-one input parameter by restricting to a constant either the number of times the function parameter may return a value of increasing size, or the number of times the function parameter may be applied to an argument of increasing size. We prove that, for any constant bound, the iterators obtained are equivalent, with respect to lambda-definability over type-one poly-time functions, to the recursor of Cook and Urquhart, which captures Cobham's notion of limited recursion on notation in this setting.
arxiv:1908.04923
3D scene graphs (3DSGs) are an emerging description unifying symbolic, topological, and metric scene representations. However, typical 3DSGs contain hundreds of objects and symbols even for small environments, rendering task planning on the full graph impractical. We construct Taskography, the first large-scale robotic task planning benchmark over 3DSGs. While most benchmarking efforts in this area focus on vision-based planning, we systematically study symbolic planning, to decouple planning performance from visual representation learning. We observe that, among existing methods, neither classical nor learning-based planners are capable of real-time planning over full 3DSGs. Enabling real-time planning demands progress on both (a) sparsifying 3DSGs for tractable planning and (b) designing planners that better exploit 3DSG hierarchies. Towards the former goal, we propose Scrub, a task-conditioned 3DSG sparsification method, enabling classical planners to match and in some cases surpass state-of-the-art learning-based planners. Towards the latter goal, we propose Seek, a procedure enabling learning-based planners to exploit 3DSG structure, reducing the number of replanning queries required by current best approaches by an order of magnitude. We will open-source all code and baselines to spur further research at the intersections of robot task planning, learning, and 3DSGs.
arxiv:2207.05006
The shallow water equations (SWE) are a commonly used model to study tsunamis, tides, and coastal ocean circulation. However, there exist various approaches to discretize and solve them efficiently. Which of them is best for a certain scenario is often not known and, in addition, depends heavily on the HPC platform used. From a simulation software perspective, this places a premium on the ability to adapt easily to different numerical methods and hardware architectures. One solution to this problem is to apply code generation techniques and to express methods and specific hardware-dependent implementations on different levels of abstraction. This allows for a separation of concerns and makes it possible, e.g., to exchange the discretization scheme without having to rewrite all low-level optimized routines manually. In this paper, we show how code for an advanced quadrature-free discontinuous Galerkin (DG) discretized shallow water equation solver can be generated. Here, we follow the multi-layered approach from the ExaStencils project that starts from the continuous problem formulation, moves to the discrete scheme, spells out the numerical algorithms, and, finally, maps to a representation that can be transformed to a distributed-memory parallel implementation by our in-house Scala-based source-to-source compiler. Our contributions include: a new quadrature-free discontinuous Galerkin formulation, an extension of the class of supported computational grids, and an extension of our toolchain allowing us to evaluate discrete integrals stemming from the DG discretization in Python. As first results, we present the whole toolchain and also demonstrate the convergence of our method for higher-order DG discretizations.
arxiv:1904.08684
as algorithms increasingly inform and influence decisions made about individuals, it becomes increasingly important to address concerns that these algorithms might be discriminatory. the output of an algorithm can be discriminatory for many reasons, most notably : ( 1 ) the data used to train the algorithm might be biased ( in various ways ) to favor certain populations over others ; ( 2 ) the analysis of this training data might inadvertently or maliciously introduce biases that are not borne out in the data. this work focuses on the latter concern. we develop and study multicalibration, a new measure of algorithmic fairness that aims to mitigate concerns about discrimination that is introduced in the process of learning a predictor from data. multicalibration guarantees accurate ( calibrated ) predictions for every subpopulation that can be identified within a specified class of computations. we think of the class as being quite rich ; in particular, it can contain many overlapping subgroups of a protected group. we show that in many settings this strong notion of protection from discrimination is both attainable and aligned with the goal of obtaining accurate predictions. along the way, we present new algorithms for learning a multicalibrated predictor, study the computational complexity of this task, and draw new connections to computational learning models such as agnostic learning.
arxiv:1711.08513
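the calibration guarantee above can be made concrete with a small check: for each subgroup in a given collection, bin the predictions and compare the average prediction with the average outcome within each bin. this is only an illustrative sketch of the multicalibration condition, not the paper's learning algorithm; all function and variable names are hypothetical.

```python
from collections import defaultdict

def calibration_gaps(predictions, outcomes, subgroups, n_bins=10):
    """for each subgroup, compute the sample-weighted mean |avg prediction -
    avg outcome| gap over prediction bins. a multicalibrated predictor keeps
    every gap small, even for overlapping subgroups."""
    gaps = {}
    for name, members in subgroups.items():
        bins = defaultdict(list)
        for i in members:
            b = min(int(predictions[i] * n_bins), n_bins - 1)
            bins[b].append(i)
        total, weight = 0.0, 0
        for idx in bins.values():
            avg_pred = sum(predictions[i] for i in idx) / len(idx)
            avg_out = sum(outcomes[i] for i in idx) / len(idx)
            total += abs(avg_pred - avg_out) * len(idx)
            weight += len(idx)
        gaps[name] = total / weight if weight else 0.0
    return gaps
```

note that the subgroups may overlap freely, matching the "rich class of computations" in the abstract: the check is run per subgroup, so a predictor calibrated on the whole population can still show a large gap on one of its subgroups.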
polymer - air multilayer ( pam ) was developed to decrease the heat loss through window glass panes. a pam consists of a few polymer films separated from each other by air gaps. thanks to the excellent optical properties of the polymer films, the visual transmittance of pam is higher than 70 %, and the haze is less than 2 %. pam not only has mechanisms to reduce conductive and convective heat transfer, but can also obstruct radiative heat transfer. with a 4 ~ 6 mm thick pam coating, the u - factor of a glass pane can be lowered from above 1 btu / ( h · ft² · °f ) to 0. 5 ~ 0. 6 btu / ( h · ft² · °f ). pam is resilient and robust, making it relevant to window retrofitting applications.
arxiv:2005.14395
most recommender systems optimize the model on observed interaction data, which is affected by the previous exposure mechanism and exhibits many biases like popularity bias. the loss functions, such as the widely used pointwise binary cross - entropy and pairwise bayesian personalized ranking, are not designed to account for the biases in observed data. as a result, a model optimized on such a loss inherits the data biases, or even worse, amplifies them. for example, a few popular items take up more and more exposure opportunities, severely hurting the recommendation quality on niche items - - known as the notorious matthew effect. in this work, we develop a new learning paradigm named cross pairwise ranking ( cpr ) that achieves unbiased recommendation without knowing the exposure mechanism. distinct from inverse propensity scoring ( ips ), we change the loss term of a sample - - we innovatively sample multiple observed interactions at once and form the loss as a combination of their predictions. we prove in theory that this offsets the influence of user / item propensity on the learning, removing the influence of data biases caused by the exposure mechanism. compared with ips, our proposed cpr ensures unbiased learning for each training instance without the need to set propensity scores. experimental results demonstrate the superiority of cpr over state - of - the - art debiasing solutions in both model generalization and training efficiency. the code is available at https : / / github. com / qcactus / cpr.
arxiv:2204.12176
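the cross pairwise idea can be sketched for a single pair of observed interactions: score both observed ( user, item ) pairs and both crossed combinations, and penalize the model when the observed scores do not exceed the crossed ones, so that per - user and per - item propensity terms cancel in the difference. this is a hedged illustration of the idea, not the paper's exact loss ( which also samples larger tuples ); names are hypothetical.

```python
import math

def cpr_loss(s, interactions_pair):
    """cross-pairwise-style loss for one pair of observed interactions.
    s(u, i) is the model score; (u1, i1) and (u2, i2) are both observed.
    the crossed terms s(u1, i2) and s(u2, i1) act as implicit negatives,
    and additive user/item propensity offsets cancel in the margin."""
    (u1, i1), (u2, i2) = interactions_pair
    margin = s(u1, i1) + s(u2, i2) - s(u1, i2) - s(u2, i1)
    # softplus(-margin) == -log sigmoid(margin): small when margin is large
    return math.log(1.0 + math.exp(-margin))
```

the cancellation is the point: if the score decomposes as s(u, i) = r(u, i) + p(u) + q(i) for propensity offsets p and q, those offsets drop out of the margin, which is the kind of exposure - bias removal the abstract describes.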
environmental disturbances, such as sensor noise, varying lighting conditions, challenging weather and external adversarial perturbations, are inevitable in real self - driving applications. existing research and testing have shown that they can severely degrade a vehicle's perception ability and performance. one of the main issues is false positive detection, i. e., a ghost object that does not really exist or appears in the wrong position ( such as a non - existent vehicle ). traditional navigation methods tend to avoid every detected object for safety ; however, avoiding a ghost object may lead the vehicle into an even more dangerous situation, such as a sudden brake on the highway. considering the variety of disturbance types, it is difficult to address this issue at the perceptual level. a potential solution is to detect ghosts through relation learning over the whole scenario and to develop an integrated end - to - end navigation system. our underlying logic is that the behavior of all vehicles in the scene is influenced by their neighbors, and normal vehicles behave in a logical way, while ghost vehicles do not. by learning the spatio - temporal relations among surrounding vehicles, an information reliability representation is learned for each detected vehicle, and a robot navigation network is then developed. in contrast to existing works, we encourage the network to learn by itself how to represent reliability and how to aggregate all the information with its uncertainties, thus increasing efficiency and generalizability. to the best of the authors ' knowledge, this paper provides the first work on using graph relation learning to achieve end - to - end robust navigation in the presence of ghost vehicles. simulation results on the carla platform demonstrate the feasibility and effectiveness of the proposed method in various scenarios.
arxiv:2203.09952
this is a short ( and personal ) introduction in german to the connections between artificial intelligence, philosophy, and logic, and to the author ' s work. dies ist eine kurze ( und persoenliche ) einfuehrung in die zusammenhaenge zwischen kuenstlicher intelligenz, philosophie, und logik, und in die arbeiten des autors.
arxiv:1901.00365
we measure the full distribution of current fluctuations in a single - electron transistor with a controllable bistability. the conductance switches randomly between two levels due to the tunneling of single electrons in a separate single - electron box. the electrical fluctuations are detected over a wide range of time scales and excellent agreement with theoretical predictions is found. for long integration times, the distribution of the time - averaged current obeys the large - deviation principle. we formulate and verify a fluctuation relation for the bistable region of the current distribution.
arxiv:1606.06839
as new experimental data arrive from the lhc, the prospect of indirectly detecting new physics through precision tests of the standard model grows more exciting. precise experimental and theoretical inputs are required to test the unitarity of the ckm matrix and to search for new physics effects in rare decays. lattice qcd calculations of nonperturbative inputs have reached a precision at the level of a few percent, in many cases aided by the use of lattice perturbation theory. this review examines the role of lattice perturbation theory in b physics calculations on the lattice in the context of two questions : how is lattice perturbation theory used in the different heavy quark formalisms implemented by the major lattice collaborations? and what role does lattice perturbation theory play in determinations of nonperturbative contributions to the physical processes at the heart of the search for new physics? framing and addressing these questions reveals that lattice perturbation theory is a tool with a spectrum of applications in lattice b physics.
arxiv:1210.7266
the phase - space description of bosonic quantum systems has numerous applications in such fields as quantum optics, trapped ultracold atoms, and transport phenomena. extension of this description to the case of fermionic systems leads to formal grassmann phase - space quasiprobability distributions and master equations. the latter are usually considered as not possessing a probabilistic interpretation and as not directly computationally accessible. here, we describe how to construct $ c $ - number interpretations of grassmann phase space representations and their master equations. as a specific example, the grassmann $ b $ representation is considered. we discuss how to introduce $ c $ - number probability distributions on a grassmann algebra and how to integrate them. a measure of size and proximity is defined for grassmann numbers, and grassmann derivatives are introduced which are based on infinitesimal variations of function arguments. an example of a $ c $ - number interpretation of formal grassmann equations is presented.
arxiv:1609.06360
for exchangeable data, mixture models are an extremely useful tool for density estimation due to their attractive balance between smoothness and flexibility. when additional covariate information is present, mixture models can be extended for flexible regression by modeling the mixture parameters, namely the weights and atoms, as functions of the covariates. these types of models are interpretable and highly flexible, allowing not only the mean but the whole density of the response to change with the covariates, which is also known as density regression. this article reviews bayesian covariate - dependent mixture models and highlights which data types can be accommodated by the different models along with the methodological and applied areas where they have been used. in addition to being highly flexible, these models are also numerous ; we focus on nonparametric constructions and broadly organize them into three categories : 1 ) joint models of the responses and covariates, 2 ) conditional models with single - weights and covariate - dependent atoms, and 3 ) conditional models with covariate - dependent weights. the diversity and variety of the available models in the literature raises the question of how to choose among them for the application at hand. we attempt to shed light on this question through a careful analysis of the predictive equations for the conditional mean and density function as well as predictive comparisons in three simulated data examples.
arxiv:2307.16298
hijing + + ( heavy ion jet interaction generator ) is the successor of the widely used original hijing, developed almost three decades ago. while the old versions ( 1. x and 2. x ) were written in fortran, hijing + + was completely rewritten in c + +. during development we kept in mind the requirements of the high - energy heavy - ion community : the new monte carlo software has a well designed modular framework, so any future modifications are much easier to implement. it contains all the physical models that were present in its predecessor, but, utilizing modern c + + features, it also includes native thread - based parallelism, an easy - to - use analysis interface and a modular plugin system, which makes room for possible future improvements. in this paper we summarize the results of our performance tests measured on two widely used architectures.
arxiv:1811.02131
in this paper, we study homothetic tube model predictive control ( mpc ) of discrete - time linear systems subject to bounded additive disturbance and mixed constraints on the state and input. different from most existing work on robust mpc, we assume that the true disturbance set is unknown but a conservative surrogate is available a priori. leveraging the real - time data, we develop an online learning algorithm to approximate the true disturbance set. this approximation and the corresponding constraints in the mpc optimisation are updated online using computationally convenient linear programs. we provide statistical gaps between the true and learned disturbance sets, based on which, probabilistic recursive feasibility of homothetic tube mpc problems is discussed. numerical simulations are provided to demonstrate the efficacy of our proposed algorithm and compare with state - of - the - art mpc algorithms.
arxiv:2505.03482
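one convenient primitive behind such online disturbance - set updates: given a conservative polytopic surrogate w = { w : a w <= b } containing the origin ( b > 0 ), the smallest homothetic scaling of w that covers all disturbances observed so far has a closed form. the sketch below illustrates only this primitive; it is not the paper's algorithm, which updates the set and the mpc constraints through linear programs.

```python
def minimal_scaling(A, b, samples):
    """smallest alpha >= 0 such that every observed disturbance w lies in
    alpha * W, where W = {w : A w <= b} contains the origin (b > 0).
    for a single sample w, w is in alpha*W iff A w <= alpha * b, so
    alpha must be at least max_j (A w)_j / b_j."""
    alpha = 0.0
    for w in samples:
        for row, bi in zip(A, b):
            Aw = sum(a * x for a, x in zip(row, w))
            alpha = max(alpha, Aw / bi)
    return alpha
```

because the maximum only grows as samples accumulate, the scaling can be maintained online with one pass per new data point, which is in the spirit of the computationally convenient updates the abstract mentions.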
for efficient and high - fidelity local facial attribute editing, most existing editing methods either require additional fine - tuning for different editing effects or tend to affect regions beyond the edit. alternatively, inpainting methods can edit the target image region while preserving external areas. however, current inpainting methods still suffer from misalignment between the generated content and the facial attribute description, and from the loss of facial skin details. to address these challenges, ( i ) a novel data utilization strategy is introduced to construct datasets consisting of attribute - text - image triples from a data - driven perspective, ( ii ) a causality - aware condition adapter is proposed to enhance the contextual causality modeling of specific details, which encodes the skin details from the original image while preventing conflicts between these cues and textual conditions. in addition, a skin transition frequency guidance technique is introduced for the local modeling of contextual causality via sampling guidance driven by low - frequency alignment. extensive quantitative and qualitative experiments demonstrate the effectiveness of our method in boosting both fidelity and editability for localized attribute editing. the code is available at https : / / github. com / connorxian / ca - edit.
arxiv:2412.13565
fine granularity is an essential requirement for controllable text generation, which has seen rapid growth with the ability of llms. however, existing methods focus mainly on a small set of attributes ( around 3 to 5 ), and their performance degrades significantly when the number of attributes increases to the next order of magnitude. to address this challenge, we propose a novel zero - shot approach for extremely fine - grained controllable generation ( efcg ), comprising auto - reconstruction ( ar ) and global preference optimization ( gpo ). in the ar phase, we leverage llms to extract soft attributes ( e. g., emphasis on simplicity and minimalism in design ) from raw texts, and combine them with programmatically derived hard attributes ( e. g., the text should be between 300 and 400 words ) to construct massive ( around 45 ) multi - attribute requirements, which guide the fine - grained text reconstruction process under weak supervision. in the gpo phase, we apply direct preference optimization ( dpo ) to refine text generation under diverse attribute combinations, enabling efficient exploration of the global combination space. additionally, we introduce an efficient attribute sampling strategy to identify and correct potentially erroneous attributes, further improving global optimization. our framework significantly improves the constraint satisfaction rate ( csr ) and text quality for efcg by mitigating position bias and alleviating attention dilution.
arxiv:2502.12375
consider the problem of a government that wants to reduce the debt - to - gdp ( gross domestic product ) ratio of a country. the government aims at choosing a debt reduction policy which minimises the total expected cost of having debt, plus the total expected cost of interventions on the debt ratio. we model this problem as a singular stochastic control problem over an infinite time - horizon. in a general not necessarily markovian framework, we first show by probabilistic arguments that the optimal debt reduction policy can be expressed in terms of the optimal stopping rule of an auxiliary optimal stopping problem. we then exploit such link to characterise the optimal control in a two - dimensional markovian setting in which the state variables are the level of the debt - to - gdp ratio and the current inflation rate of the country. the latter follows uncontrolled ornstein - uhlenbeck dynamics and affects the growth rate of the debt ratio. we show that it is optimal for the government to adopt a policy that keeps the debt - to - gdp ratio under an inflation - dependent ceiling. this curve is given in terms of the solution of a nonlinear integral equation arising in the study of a fully two - dimensional optimal stopping problem.
arxiv:1607.04153
we compare the first results on searches for supersymmetry with the large hadron collider ( lhc ) to the current and near - term performance of experiments sensitive to neutralino dark matter. we limit our study to the particular slices of parameter space of the constrained minimal supersymmetric extension to the standard model where cms and atlas exclusion limits have been presented so far. we show where, on that parameter space, the lightest neutralino possesses a thermal relic abundance matching the value inferred by cosmological observations. we then calculate rates for, and estimate the performance of, experiments sensitive to direct and indirect signals from neutralino dark matter. we argue that this is a unique point in time, where the quest for supersymmetry - - at least in one of its practical and simple incarnations - - is undergoing a close scrutiny from the lhc and from dark matter searches that is both synergistic and complementary. should the time of discovery finally unravel, the current performances of the collider program and of direct and indirect dark matter searches are at a conjuncture offering unique opportunities for a breakthrough on the nature of physics beyond the standard model.
arxiv:1105.5162
i share some memories and offer a personal perspective on jacob bekenstein ' s legacy, focussing on black hole entropy and the bekenstein bound. i summarize a number of fascinating recent developments that grew out of bekenstein ' s pioneering contributions, from the ryu - takayanagi proposal to the quantum null energy condition.
arxiv:1810.01880
a grand challenge of 21st - century cosmology is to accurately estimate the cosmological parameters of our universe. a major approach to estimating the cosmological parameters is to use the large - scale matter distribution of the universe. galaxy surveys provide the means to map out cosmic large - scale structure in three dimensions. information about galaxy locations is typically summarized in a "single" function of scale, such as the galaxy correlation function or power spectrum. we show that it is possible to estimate these cosmological parameters directly from the distribution of matter. this paper presents the application of deep 3d convolutional networks to a volumetric representation of dark - matter simulations as well as the results obtained using a recently proposed distribution regression framework, showing that machine learning techniques are comparable to, and can sometimes outperform, maximum - likelihood point estimates using "cosmological models". this opens the way to estimating the parameters of our universe with higher accuracy.
arxiv:1711.02033
designers of millimeter wave ( mmwave ) cellular systems need to evaluate line - of - sight ( los ) maps to provide good service to users in urban scenarios. in this letter, we derive estimators to obtain los maps in scenarios with potential blocking elements. applying previous stochastic geometry results, we formulate the optimal bayesian estimator of the los map using a limited number of actual measurements at different locations. the computational cost of the optimal estimator is derived and is proven to be exponential in the number of available data points. an approximation is discussed, which brings the computational complexity from exponential to quasi - linear and allows the implementation of a practical estimator. finally, we compare numerically the optimal estimator and the approximation with other estimators from the literature and also with an original heuristic estimator with good performance and low computational cost. for the comparison, both synthetic layouts and a real layout of chicago have been used.
arxiv:2411.07193
the gas giant planets in the solar system have a retinue of icy moons, and we expect giant exoplanets to have similar satellite systems. if a jupiter - like planet were to migrate toward its parent star the icy moons orbiting it would evaporate, creating atmospheres and possible habitable surface oceans. here, we examine how long the surface ice and possible oceans would last before being hydrodynamically lost to space. the hydrodynamic loss rate from the moons is determined, in large part, by the stellar flux available for absorption, which increases as the giant planet and icy moons migrate closer to the star. at some planet - star distance the stellar flux incident on the icy moons becomes so great that they enter a runaway greenhouse state. this runaway greenhouse state rapidly transfers all available surface water to the atmosphere as vapor, where it is easily lost from the small moons. however, for icy moons of ganymede ' s size around a sun - like star we found that surface water ( either ice or liquid ) can persist indefinitely outside the runaway greenhouse orbital distance. in contrast, the surface water on smaller moons of europa ' s size will only persist on timescales greater than 1 gyr at distances ranging 1. 49 to 0. 74 au around a sun - like star for bond albedos of 0. 2 and 0. 8, where the lower albedo becomes relevant if ice melts. consequently, small moons can lose their icy shells, which would create a torus of h atoms around their host planet that might be detectable in future observations.
arxiv:1703.06936
we study the generalized hankel transform of the family of sequences satisfying the recurrence relation $a_{n+1} = \left( \alpha + \frac{\beta}{n+\gamma} \right) a_n$. we apply the obtained formula to several particular important sequences. incidentally, we find a connection between some well known formulas that had previously arisen in the literature in dissimilar settings. additionally, given a non - zero sequence satisfying the above recurrence, we evaluate the hankel transform of the sequence of its reciprocals.
arxiv:0912.0684
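the recurrence and the hankel transform are easy to experiment with numerically. the sketch below builds a sequence from $a_{n+1} = (\alpha + \beta/(n+\gamma)) a_n$ with exact rational arithmetic and takes hankel determinants $h_n = \det [a_{i+j}]_{0 \le i, j \le n}$. choosing $\alpha = 4$, $\beta = -6$, $\gamma = 2$, $a_0 = 1$ reproduces the catalan numbers, whose hankel transform is classically all ones; this is an illustration of the setup, not a case taken from the paper.

```python
from fractions import Fraction

def det(m):
    # cofactor expansion along the first row; exact with Fractions,
    # fine for the small matrices used here
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def sequence(alpha, beta, gamma, a0, length):
    # a_{n+1} = (alpha + beta / (n + gamma)) * a_n, integer parameters
    # so that Fraction arithmetic stays exact
    a = [Fraction(a0)]
    for n in range(length - 1):
        a.append((Fraction(alpha) + Fraction(beta, n + gamma)) * a[-1])
    return a

def hankel_transform(a, k):
    # h_n = det [a_{i+j}], 0 <= i, j <= n, for n = 0 .. k-1
    return [det([[a[i + j] for j in range(n + 1)] for i in range(n + 1)])
            for n in range(k)]
```

for the catalan choice, $\alpha + \beta/(n+\gamma) = 4 - 6/(n+2) = (4n+2)/(n+2)$, which is exactly the standard catalan ratio $c_{n+1}/c_n$.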
a quantum computer is proposed in which information is stored in the two lowest electronic states of doped quantum dots ( qds ). many qds are located in a microcavity. a pair of gates controls the energy levels in each qd. a controlled not ( cnot ) operation involving any pair of qds can be effected by a sequence of gate - voltage pulses which tune the qd energy levels into resonance with frequencies of the cavity or a laser. the duration of a cnot operation is estimated to be much shorter than the time for an electron to decohere by emitting an acoustic phonon.
arxiv:quant-ph/9903065
we investigate the vortex lattice and vortex bound states in csfe$_2$as$_2$ single crystals by scanning tunneling microscopy / spectroscopy ( stm / sts ) under various magnetic fields. a possible structural transition or crossover of the vortex lattice is observed with increasing magnetic field, i. e., the vortex lattice changes from a distorted hexagonal lattice to a distorted tetragonal one at a magnetic field near 0. 5 t. it is found that a mixture of stripelike hexagonal and square vortex lattices emerges in the crossover region. the vortex bound state is also observed in the vortex center. the tunneling spectra crossing a vortex show that the bound - state peak position holds near zero bias as the stm tip moves away from the vortex core center. the fermi energy estimated from the vortex bound state energy is very small. our investigations provide experimental information on both the vortex lattice and the vortex bound states in this iron - based superconductor.
arxiv:1801.02348
the fubini - study metric of quantum state manifold generated by the operators which satisfy the heisenberg lie algebra is calculated. the similar problem is studied for the manifold generated by the so ( 3 ) lie algebra operators. using these results we calculate the fubini - study metrics of state manifolds generated by the position and momentum operators. also the metrics of quantum state manifolds generated by some spin systems are obtained. finally, we generalize this problem for operators of an arbitrary lie algebra.
arxiv:1706.00250
the spin - statistics connection is obtained for a simple formulation of a classical field theory containing even and odd grassmann variables. to that end, the construction of irreducible canonical realizations of the rotation group corresponding to general causal fields is reviewed. the connection is obtained by imposing local commutativity on the fields and exploiting the parity operation to exchange spatial coordinates in the scalar product of a classical field evaluated at one spatial location with the same field evaluated at a distinct location. the spin - statistics connection for irreducible canonical realizations of the poincaré group of spin $j$ is obtained in the form : classical fields and their conjugate momenta satisfy fundamental field - theoretic poisson bracket relations for $2j$ even, and fundamental poisson antibracket relations for $2j$ odd.
arxiv:physics/0601014
drumming belongs to a family of musical instruments whose practice, whether as an amateur or at a high level, is associated with an increased risk of musculoskeletal disorders ( msd ), particularly of the upper limbs and lumbar spine. the vast majority of drummers learn to play on acoustic instruments, the sound intensity of which is proportional to the striking force developed. this correlation is disrupted when playing the electronic version of the instrument, which is often purchased by musicians seeking to reduce the sound produced ( e. g. playing in apartments ). the aim of this study was therefore to analyze whether drumming on electronic equipment would lead to a change in the kinematics and feel of drummers. to this end, several drummers were recruited to perform repeated rhythms at different pitches on acoustic and electric drums under two sound conditions ( sound on and sound off with noise - canceling headphones ). the sound produced and the kinematics of the upper limbs were measured by video motion capture during the beats. in addition, self - confrontation interviews were conducted after each condition. the drummers, confronted with video recordings of their actions, were asked to describe, explain and comment step by step on their performance. these interviews were also used to assess their ability to maintain a constant strike force. a questionnaire was used to obtain subjective information on how they felt. the results showed a lower sound power of electronic drums, despite a similar striking speed. this gesture - sound decorrelation could explain the increase in msd among drummers when switching from an acoustic to an electronic instrument.
arxiv:2505.08571
a centrosymmetric permutation is one which is invariant under the reverse - complement operation, or equivalently one whose associated standard young tableaux under the robinson - schensted algorithm are both invariant under the schützenberger involution. in this paper, we characterize the set of permutations avoiding 1243 and 2143 whose images under the reverse - complement mapping also avoid these patterns. we also characterize in a simple manner the corresponding schröder paths under a bijection of egge and mansour. we then use these results to enumerate centrosymmetric permutations avoiding the patterns 1243 and 2143. in a similar manner, centrosymmetric involutions avoiding these same patterns are shown to be enumerated by the pell numbers.
arxiv:1002.1229
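the definitions are straightforward to verify by brute force for small $n$: the reverse - complement $rc(\pi)_i = n + 1 - \pi_{n+1-i}$ is an involution, and classical pattern containment can be checked over all subsequences. this is a hedged sketch of the definitions only ( function names are ours ), with no attempt to reproduce the paper's enumeration.

```python
from itertools import combinations, permutations

def reverse_complement(p):
    """rc(p)_i = n + 1 - p_{n + 1 - i} (1-indexed); an involution on S_n."""
    n = len(p)
    return tuple(n + 1 - p[n - 1 - i] for i in range(n))

def contains(p, pattern):
    """true if some subsequence of p is order-isomorphic to pattern."""
    k = len(pattern)
    rank = {v: r for r, v in enumerate(sorted(pattern))}
    target = tuple(rank[v] for v in pattern)
    for idx in combinations(range(len(p)), k):
        vals = [p[i] for i in idx]
        r = {v: j for j, v in enumerate(sorted(vals))}
        if tuple(r[v] for v in vals) == target:
            return True
    return False

def centrosymmetric_avoiders(n, patterns):
    # centrosymmetric = fixed by reverse-complement
    return [p for p in permutations(range(1, n + 1))
            if p == reverse_complement(p)
            and not any(contains(p, q) for q in patterns)]
```

brute force like this is only practical up to roughly $n = 10$, but it is enough to sanity - check the first terms of an enumeration like the one in the paper.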
we introduce the notion of t - restricted doubling dimension of a point set in euclidean space as the local intrinsic dimension up to scale t. in many applications information is only relevant for a fixed range of scales. we present an algorithm to construct a hierarchical net - tree up to scale t, which we denote as the net - forest. we present a method based on locality sensitive hashing to compute all near neighbours of points within a certain distance. our construction of the net - forest is probabilistic, and we guarantee that with high probability, the net - forest is supplemented with the correct neighbouring information. we apply our net - forest construction scheme to create an approximate čech complex up to a fixed scale, and its complexity depends on the local intrinsic dimension up to that scale.
arxiv:1406.4822
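the near - neighbour step can be illustrated with a deterministic grid analogue of the hashing idea: bucket points into cells of side t, then compare only points in the same or adjacent cells, since any pair within distance t differs by at most one cell index per coordinate. this is a stand - in for the paper's locality sensitive hashing ( which is probabilistic and better behaved in high dimension ); names are hypothetical.

```python
from collections import defaultdict
from itertools import product
from math import dist, floor

def near_pairs(points, t):
    """all unordered index pairs of points at euclidean distance <= t,
    found by hashing points to grid cells of side t and checking only
    the 3^d neighbouring cells of each occupied cell."""
    cells = defaultdict(list)
    for idx, p in enumerate(points):
        cells[tuple(floor(c / t) for c in p)].append(idx)
    d = len(points[0])
    pairs = set()
    for cell, members in cells.items():
        for offset in product((-1, 0, 1), repeat=d):
            other = tuple(c + o for c, o in zip(cell, offset))
            for i in members:
                for j in cells.get(other, []):
                    if i < j and dist(points[i], points[j]) <= t:
                        pairs.add((i, j))
    return pairs
```

the 3^d factor in the cell scan is why a fixed - scale structure like this degrades in high ambient dimension, and why a dimension - sensitive scheme such as lsh, as used for the net - forest, is preferable there.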
we discuss a new scenario for the formation of intermediate mass black holes in dense star clusters. in this scenario, intermediate mass black holes are formed as a result of dynamical interactions of hard binaries containing a stellar mass black hole with other stars and binaries. we discuss the necessary conditions to initiate the process of intermediate mass black hole formation and the influence of an intermediate mass black hole on the global properties of the host globular cluster. we discuss two scenarios for intermediate mass black hole formation : the slow and the fast scenario, which occur later or earlier in the cluster evolution and require smaller or extremely large central densities, respectively. in our simulations, the formation of intermediate mass black holes is highly stochastic. in general, higher formation probabilities follow from larger cluster concentrations ( i. e. central densities ). we further discuss possible observational signatures of the presence of intermediate mass black holes in globular clusters that follow from our simulations. these include the spatial and kinematic structure of the host cluster, possible radio, x - ray and gravitational wave emissions due to dynamical collisions or mass transfer, and the creation of hypervelocity main sequence escapers during strong dynamical interactions between binaries and an intermediate mass black hole. all simulations discussed in this paper were performed with the mocca monte carlo code. mocca accurately follows most of the important physical processes that occur during the dynamical evolution of star clusters but, as with other dynamical codes, it approximates the dissipative processes connected with stellar collisions and binary mergers.
arxiv:1506.05234
non - maxwellian electron velocity space distribution functions ( evdf ) are useful signatures of plasma conditions and non - local consequences of collisionless magnetic reconnection. in the past, evdfs were obtained mainly for antiparallel reconnection and under the influence of weak guide - fields in the direction perpendicular to the reconnection plane. evdfs are, however, not well known, yet, for oblique ( or component - ) reconnection in dependence on stronger guide - magnetic fields and for the exhaust ( outflow ) region of reconnection away from the diffusion region. in view of the multi - spacecraft magnetospheric multiscale mission ( mms ), we derived the non - maxwellian evdfs of collisionless magnetic reconnection in dependence on the guide - field strength $ b _ g $ from small ( $ b _ g \ approx0 $ ) to very strong ( $ b _ g = 8 $ ) guide - fields, taking into account the feedback of the self - generated turbulence. for this sake, we carried out 2. 5d fully - kinetic particle - in - cell simulations using the acronym code. we obtained anisotropic evdfs and electron beams propagating along the separatrices as well as in the exhaust region of reconnection. the beams are anisotropic with a higher temperature in the direction perpendicular rather than parallel to the local magnetic field. the beams propagate in the direction opposite to the background electrons and cause instabilities. we also obtained the guide - field dependence of the relative electron - beam drift speed, threshold and properties of the resulting streaming instabilities including the strongly non - linear saturation of the self - generated plasma turbulence. this turbulence and its non - linear feedback cause non - adiabatic parallel electron acceleration and evdfs well beyond the limits of the quasi - linear approximation, producing phase space holes and an isotropizing pitch - angle scattering.
arxiv:1608.03110
theoretically studying the very inner structure of faint satellite galaxies requires very high - resolution hydrodynamical simulations with realistic models for star formation, which have only recently begun to emerge. in this work we present an analytical description to model the inner kinematics of satellites in the milky way ( mw ). we use a monte - carlo method to produce merger trees for a mw - mass halo and analytical models to produce the stellar mass in the satellite galaxies. we consider two important processes which can significantly modify the inner mass distribution in a satellite galaxy. the first is baryonic feedback, which can induce a flat inner profile depending on the star formation efficiency in the galaxy. the second is tidal stripping, which reduces and re - distributes the mass inside the satellite. we apply this model to mw satellite galaxies in both cdm and thermal relic wdm models. it is found that tidal heating must be effective to produce a relatively flat distribution of the satellite circular velocities, in agreement with the data. the constraint on the wdm mass depends on the host halo mass. for a mw halo with dark matter mass lower than $ 2 \ times 10 ^ { 12 } m _ { \ odot } $, a 2 kev wdm model can be safely excluded, as the predicted satellite circular velocities are systematically lower than the data. for wdm with a mass of 3. 5 kev, the mw halo mass is required to be larger than $ 1. 5 \ times 10 ^ { 12 } m _ { \ odot } $ ; otherwise the 3. 5 kev model can also be excluded. our current model cannot exclude wdm models with mass larger than 10 kev.
arxiv:1911.05257
we predict superconductivity for the carbon - boron clathrate srb3c3 at 27 - 43 k for coulomb pseudopotential ( mu * ) values between 0. 17 and 0. 10 using first - principles calculations with conventional electron - phonon coupling. electrical transport measurements, facilitated by a novel in situ experimental design compatible with extreme synthesis conditions ( > 3000 k at 50 gpa ), show non - hysteretic resistivity drops that track the calculated magnitude and pressure dependence of superconductivity for mu * = 0. 15, and transport measurements collected under applied magnetic fields confirm superconductivity with an onset tc of approximately 20 k at 40 gpa. carbon - based clathrates thus represent a new class of superconductors similar to other covalent metals like mgb2 and doped fullerenes. carbon clathrates share structures similar to superconducting superhydrides, but covalent c - b bonds allow metastable persistence at ambient conditions.
arxiv:1708.03483
while language models store a massive amount of world knowledge implicitly in their parameters, even very large models often fail to encode information about rare entities and events, while incurring huge computational costs. recently, retrieval - augmented models, such as realm, rag, and retro, have incorporated world knowledge into language generation by leveraging an external non - parametric index and have demonstrated impressive performance with constrained model sizes. however, these methods are restricted to retrieving only textual knowledge, neglecting the ubiquitous amount of knowledge in other modalities like images - - much of which contains information not covered by any text. to address this limitation, we propose the first multimodal retrieval - augmented transformer ( murag ), which accesses an external non - parametric multimodal memory to augment language generation. murag is pre - trained with a mixture of large - scale image - text and text - only corpora using a joint contrastive and generative loss. we perform experiments on two different datasets that require retrieving and reasoning over both images and text to answer a given query : webqa, and multimodalqa. our results show that murag achieves state - of - the - art accuracy, outperforming existing models by 10 - 20 \ % absolute on both datasets and under both distractor and full - wiki settings.
arxiv:2210.02928
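the retrieval step that such models rely on can be sketched independently of the encoders; in the snippet below, the memory entries ( stand - ins for text or image encodings ) and the query are random unit vectors in a shared embedding space, and the top - k entries by inner product are fetched. the sizes and the random "encoders" are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# non-parametric multimodal memory: each row stands in for the encoding
# of one text passage or image; real systems would use learned encoders.
memory = rng.normal(size=(1000, 64))
memory /= np.linalg.norm(memory, axis=1, keepdims=True)

query = rng.normal(size=64)
query /= np.linalg.norm(query)

# maximum-inner-product retrieval: score every memory entry, keep top-k.
scores = memory @ query
top_k = np.argsort(scores)[::-1][:4]   # indices of the retrieved entries
print(top_k)
```

in a full retrieval - augmented model the entries at `top_k` would then be concatenated with the query as additional context for the generator.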
having precise perception of the environment is crucial for ensuring the secure and reliable functioning of autonomous driving systems. radar object detection networks are one fundamental part of such systems. cnn - based object detectors showed good performance in this context, but they require large compute resources. this paper investigates sparse convolutional object detection networks, which combine powerful grid - based detection with low compute resources. we investigate radar specific challenges and propose sparse kernel point pillars ( skpp ) and dual voxel point convolutions ( dvpc ) as remedies for the grid rendering and sparse backbone architectures. we evaluate our skpp - dpvcn architecture on nuscenes, which outperforms the baseline by 5. 89 % and the previous state of the art by 4. 19 % in car ap4. 0. moreover, skpp - dpvcn reduces the average scale error ( ase ) by 21. 41 % over the baseline.
arxiv:2308.07748
we provide a complete picture to the self - gravitating non - relativistic gas at thermal equilibrium using monte carlo simulations, analytic mean field methods ( mf ) and low density expansions. the system is shown to possess an infinite volume limit in the grand canonical ( gce ), canonical ( ce ) and microcanonical ( mce ) ensembles when ( n, v ) - > infty, keeping n / v ^ { 1 / 3 } fixed. we compute the equation of state ( we do not assume it as is customary in hydrodynamics ), as well as the energy, free energy, entropy, chemical potential, specific heats, compressibilities and speed of sound ; we analyze their properties, signs and singularities. all physical quantities turn out to depend on a single variable $ \eta = g m ^ 2 n / [ v ^ { 1 / 3 } t ] $ that is kept fixed in the n - > infty and v - > infty limit. the system is in a gaseous phase for eta < eta _ t and collapses into a dense object for eta > eta _ t in the ce with the pressure becoming large and negative. at eta \ simeq eta _ t the isothermal compressibility diverges and the gas collapses. our monte carlo simulations yield eta _ t \ simeq 1. 515. we find that pv / [ nt ] = f ( eta ). the function f ( eta ) has a second riemann sheet which is only physically realized in the mce. in the mce, the collapse phase transition takes place in this second sheet near eta _ { mc } = 1. 26 and the pressure and temperature are larger in the collapsed phase than in the gaseous phase. both collapse phase transitions ( in the ce and in the mce ) are of zeroth order since the gibbs free energy has a jump at the transitions.
arxiv:astro-ph/0505561
we establish a mod 2 index theorem for real vector bundles over 8k + 2 dimensional compact pin $ ^ - $ manifolds. the analytic index is the reduced $ \ eta $ invariant of ( twisted ) dirac operators and the topological index is defined through $ ko $ - theory. our main result extends the mod 2 index theorem of atiyah and singer to non - orientable manifolds.
arxiv:1508.02619
recently, it was demonstrated that field - free switching could be achieved by combining spin - orbit torque ( sot ) and the dzyaloshinskii - moriya interaction ( dmi ). however, this mechanism only occurs under certain conditions which have not been well discussed. in this letter, it is found that the ratio of the domain wall width to the diameter of the nanodots can be utilized as a criterion for judging this mechanism. influences of different magnetic parameters are studied, including the exchange constant, dmi magnitude and field - like torque, etc. besides, we reveal the importance of the shrinkage of the magnetic domain wall surface energy for metastable states. our work provides guidelines for experiments on dmi - induced field - free magnetization switching, and also offers a new way for the design of sot - based memory or logic circuits.
arxiv:2009.09446
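as a rough illustration of the criterion above, the domain wall width can be estimated with the standard bloch - wall formula $ \ pi \ sqrt { a / k _ { eff } } $ and compared with the dot diameter; the material parameters below are typical thin - film values chosen for illustration and are not taken from the letter.

```python
import numpy as np

# illustrative material parameters (assumed, not from the letter):
A = 1.0e-11          # exchange constant, J/m
K_eff = 3.0e5        # effective anisotropy, J/m^3
diameter = 100e-9    # nanodot diameter, m

# standard bloch-wall width estimate, pi * sqrt(A / K_eff)
wall_width = np.pi * np.sqrt(A / K_eff)
ratio = wall_width / diameter    # the dimensionless criterion discussed in the text
print(f"wall width = {wall_width * 1e9:.1f} nm, ratio = {ratio:.2f}")
```

varying the exchange constant or the anisotropy moves this ratio, which is the kind of parameter dependence the letter studies.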
collective phenomena with universal properties have been observed in many complex systems with a large number of components. here we present a microscopic model of the emergence of scaling behavior in such systems, where the interaction dynamics between individual components is mediated by a global variable making the mean - field description exact. using the example of financial markets, we show that asset price can be such a global variable with the critical role of coordinating the actions of agents who are otherwise independent. the resulting model accurately reproduces empirical properties such as the universal scaling of the price fluctuation and volume distributions, long - range correlations in volatility and multiscaling.
arxiv:1006.0628
understanding the emergence of biological structures and their changes is a complex problem. on a biochemical level, it is based on gene regulatory networks ( grns ) consisting of interactions between the genes responsible for cell differentiation, coupled on a larger scale with external factors. in this work we provide a systematic methodological framework to construct waddington's epigenetic landscape of the grn involved in cellular determination during the early stages of development of angiosperms. as a specific example we consider the flower of the plant \ textit { arabidopsis thaliana }. our model, which is based on experimental data, accurately recovers the spatial configuration of the flower during cell fate determination, not only for the wild type, but for its homeotic mutants as well. the method developed in this project is general enough to be used in the study of the relationship between genotype and phenotype in other living organisms.
arxiv:1802.04347
recovered from the disease. those in this category are not able to be infected again or to transmit the infection to others. the flow of this model may be considered as follows: $\mathcal{S} \rightarrow \mathcal{I} \rightarrow \mathcal{R}$. using a fixed population, $N = S(t) + I(t) + R(t)$, kermack and mckendrick derived the following equations: $$\frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I.$$ several assumptions were made in the formulation of these equations: first, an individual in the population must be considered as having an equal probability as every other individual of contracting the disease with a rate of $\beta$, which is considered the contact or infection rate of the disease. therefore, an infected individual makes contact and is able to transmit the disease with $\beta N$ others per unit time, and the fraction of contacts by an infected with a susceptible is $S/N$. the number of new infections in unit time per infective then is $\beta N (S/N)$, giving the rate of new infections ( or those leaving the susceptible category ) as $\beta N (S/N) I = \beta S I$ ( brauer & castillo - chavez, 2001 ). for the second and third equations, consider the population leaving the susceptible class as equal to the number entering the infected class. however, infectives leave this class to enter the recovered / removed class at a rate $\gamma$ per unit time ( where $\gamma$ represents the mean recovery rate ).
https://en.wikipedia.org/wiki/Network_science
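the sir equations above can be integrated directly; a minimal forward - euler sketch in python follows. the parameter values for beta and gamma and the initial conditions are illustrative choices, not taken from the text.

```python
def sir_step(s, i, r, beta, gamma, dt):
    """one forward-euler step of the kermack-mckendrick sir equations."""
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

# illustrative parameters: population fractions with N normalized to 1
s, i, r = 0.99, 0.01, 0.0
beta, gamma = 0.3, 0.1      # contact rate and recovery rate (assumed values)
dt, steps = 0.1, 2000

for _ in range(steps):
    s, i, r = sir_step(s, i, r, beta, gamma, dt)

# the fixed-population assumption means s + i + r stays equal to 1
assert abs(s + i + r - 1.0) < 1e-6
print(s, i, r)
```

with beta / gamma = 3 most of the population ends up in the removed class, in line with the final - size behaviour of the model; a production integration would use an adaptive ode solver such as scipy's `solve_ivp` rather than fixed - step euler.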
although word - level prosody modeling in neural text - to - speech ( tts ) has been investigated in recent research for diverse speech synthesis, it is still challenging to control speech synthesis manually without a specific reference. this is largely due to the lack of word - level prosody tags. in this work, we propose a novel approach for unsupervised word - level prosody tagging with two stages, where we first group the words into different types with a decision tree according to their phonetic content and then cluster the prosodies using gmm within each type of word separately. this design is based on the assumption that the prosodies of different types of words, such as long or short words, should be tagged with different label sets. furthermore, a tts system with the derived word - level prosody tags is trained for controllable speech synthesis. experiments on ljspeech show that the tts model trained with word - level prosody tags not only achieves better naturalness than a typical fastspeech2 model, but also gains the ability to manipulate word - level prosody.
arxiv:2202.07200
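the two - stage idea ( group words by type, then cluster prosodies within each type with a separate label set ) can be sketched as follows. the random features, the length - based type split and the plain k - means clustering are all simplified stand - ins for the paper's extracted prosody features, phonetic decision tree and gmm.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy corpus and random stand-in "prosody features" (e.g. pitch, energy,
# duration would be used in a real system)
words = ["a", "cat", "elephant", "to", "running", "extraordinary"] * 20
feats = rng.normal(size=(len(words), 3))

def word_type(w):
    # placeholder for the paper's phonetic-content decision tree
    return "long" if len(w) > 4 else "short"

def kmeans(x, k, iters=50):
    """plain k-means as a stand-in for per-type gmm clustering."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = x[labels == c].mean(axis=0)
    return labels

tags = {}
for t in ("long", "short"):
    idx = np.array([j for j, w in enumerate(words) if word_type(w) == t])
    labels = kmeans(feats[idx], k=3)
    for j, c in zip(idx, labels):
        tags[int(j)] = (t, int(c))   # separate label set per word type
print(len(tags))
```

each word ends up with a ( type, cluster ) tag; in the paper these tags then condition the tts model for manual prosody control.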
several models of dark matter motivate the concept of hidden sectors consisting of su ( 3 ) _ c x su ( 2 ) _ l x u ( 1 ) _ y singlet fields. the interaction between our and hidden matter could be transmitted by new abelian u ' ( 1 ) gauge bosons a ' mixing with ordinary photons. if such a ' s with the mass in the sub - gev range exist, they would be produced through mixing with photons emitted in two photon decays of \ eta, \ eta ' neutral mesons generated by the high energy proton beam in a neutrino target. the a ' s would then penetrate the downstream shielding and be observed in a neutrino detector via their a ' - > e + e - decays. using bounds from the charm neutrino experiment at cern that searched for an excess of e + e - pairs from heavy neutrino decays, the area excluding the \ gamma - a ' mixing range 10 ^ { - 7 } < \ epsilon < 10 ^ { - 4 } for the a ' mass region 1 < m _ a ' < 500 mev is derived. the obtained results are also used to constrain models, where a new gauge boson x interacts with quarks and leptons. new upper limits on the branching ratio as small as br ( \ eta - > \ gamma x ) < 10 ^ { - 14 } and br ( \ eta ' - > \ gamma x ) < 10 ^ { - 12 } are obtained, which are several orders of magnitude more restrictive than the previous bounds from the crystal barrel experiment.
arxiv:1204.3583
in this paper we discuss the possible usage of compressive sampling based wavelet analysis for the efficient measurement and early detection of one dimensional ( 1d ) vibrational rogue waves. we study the construction of the triangular ( v - shaped ) wavelet spectra using compressive samples of rogue waves that can be modeled as peregrine and akhmediev - peregrine solitons. we show that triangular wavelet spectra can be sensed by compressive measurements at the early stages of the development of vibrational rogue waves. our results may lead to the development of efficient vibrational rogue wave measurement and early sensing systems with reduced memory requirements which use compressive sampling algorithms. in typical solid mechanics applications, compressed measurements can be acquired by randomly positioning a single sensor or multiple sensors.
arxiv:1706.01972
by all measures, wireless networking has seen explosive growth over the past decade. fourth generation long term evolution ( 4g lte ) cellular technology has increased the bandwidth available for smartphones, in essence delivering broadband speeds to mobile devices. the most recent 5g technology is further enhancing transmission speeds and cell capacity, as well as reducing latency, through the use of different radio technologies, and is expected to provide internet connections that are an order of magnitude faster than 4g lte. technology continues to advance rapidly, however, and the next generation, 6g, is already being envisioned. 6g will make possible a wide range of powerful new applications including holographic telepresence, telehealth, remote education, ubiquitous robotics and autonomous vehicles, smart cities and communities ( iot ), and advanced manufacturing ( industry 4. 0, sometimes referred to as the fourth industrial revolution ), to name but a few. the advances we will see begin at the hardware level and extend all the way to the top of the software " stack ". artificial intelligence ( ai ) will also start playing a greater role in the development and management of wireless networking infrastructure by becoming embedded in applications throughout all levels of the network. the resulting benefits to society will be enormous. at the same time these exciting new wireless capabilities are appearing rapidly on the horizon, a broad range of research challenges looms ahead. these stem from the ever - increasing complexity of the hardware and software systems, along with the need to provide infrastructure that is robust and secure while simultaneously protecting the privacy of users. here we outline some of those challenges and provide recommendations for the research that needs to be done to address them.
arxiv:2101.01279
a generalized formalism of the so - called non - adiabatic quantum molecular dynamics is presented, which applies for atomic many - body systems in external laser fields. the theory treats the nuclear dynamics and electronic transitions simultaneously in a mixed classical - quantum approach. exact, self - consistent equations of motion are derived from the action principle by combining time - dependent density functional theory in basis expansion with classical molecular dynamics. structure and properties of the resulting equations of motion as well as the energy and momentum balance equations are discussed in detail. future applications of the formalism are briefly outlined.
arxiv:physics/0112064
efficiently controlling the trapping process, especially the trapping efficiency, is central in the study of the trapping problem in complex systems, since trapping is a fundamental mechanism for diverse other dynamic processes. thus, it is of theoretical and practical significance to study control techniques for the trapping problem. in this paper, we study the trapping problem in a family of proposed directed fractals with a deep trap at a central node. the directed fractals are a generalization of previous undirected fractals, obtained by introducing directed edge weights dominated by a parameter. we characterize all the eigenvalues and their degeneracies for an associated matrix governing the trapping process. the eigenvalues are provided through an exact recursive relation deduced from the self - similar structure of the fractals. we also obtain the expressions for the smallest eigenvalue and the mean first - passage time ( mfpt ) as a measure of trapping efficiency, which is the expected time for the walker to first visit the trap. the mfpt is evaluated according to the proved fact that it is approximately equal to the reciprocal of the smallest eigenvalue. we show that the mfpt is controlled by the weight parameter : by modifying it, the mfpt can scale superlinearly, linearly, or sublinearly with the system size. thus, this work paves the way to delicately controlling the trapping process in such fractals.
arxiv:1307.0901
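the relation between the trapping matrix and the mfpt can be checked on a toy example; the sketch below uses a small path graph with a trap at one end ( not the directed fractals of the paper ) and compares the exact mfpt with the reciprocal of the smallest eigenvalue of the matrix governing the trapping process.

```python
import numpy as np

# toy example: path graph 0-1-2-3 with a perfect trap at node 0.
# a walker hops to a uniformly chosen neighbour each step.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # random-walk transition matrix
Q = P[1:, 1:]                          # restriction to the non-trap nodes
M = np.eye(3) - Q                      # matrix governing the trapping process

# exact mean first-passage times solve (I - Q) t = 1
t = np.linalg.solve(M, np.ones(3))
mfpt_exact = t.mean()                  # average over non-trap starting nodes

# approximation used in the text: mfpt ~ 1 / smallest eigenvalue
lam_min = np.linalg.eigvals(M).real.min()
mfpt_approx = 1.0 / lam_min
print(mfpt_exact, mfpt_approx)
```

for this path graph the exact values are t = (5, 8, 9), and the reciprocal of the smallest eigenvalue agrees with the exact average to within a few percent, illustrating why the approximation is usable for the much larger fractal systems.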
we consider quantum systems with causal dynamics in discrete spacetimes, also known as quantum cellular automata ( qca ). due to time - discreteness this type of dynamics is not characterized by a hamiltonian but by a one - time - step unitary. this can be written as the exponential of a hamiltonian but in a highly non - unique way. we ask if any of the hamiltonians generating a qca unitary is local in some sense, and we obtain two very different answers. on one hand, we present an example of qca for which all generating hamiltonians are fully non - local, in the sense that interactions do not decay with the distance. we expect this result to have relevant consequences for the classification of topological phases in floquet systems, given that this relies on the effective hamiltonian. on the other hand, we show that all one - dimensional quasi - free fermionic qcas have quasi - local generating hamiltonians, with interactions decaying exponentially in the massive case and algebraically in the critical case. we also prove that some integrable systems do not have local, quasi - local nor low - weight constants of motion ; a result that challenges the standard definition of integrability.
arxiv:2006.10707
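the non - uniqueness of the generating hamiltonian mentioned above is easy to illustrate: the one - step unitary fixes $ e ^ { - ih } $ but leaves each eigenvalue of $ h $ free up to multiples of $ 2 \ pi $. a minimal 2x2 numpy check:

```python
import numpy as np

# one-time-step unitary with eigenphases +theta and -theta
theta = 0.7
U = np.diag(np.exp(-1j * np.array([theta, -theta])))

H1 = np.diag([theta, -theta])                # one generating hamiltonian
H2 = np.diag([theta + 2 * np.pi, -theta])    # another branch of the logarithm

def exp_minus_iH(H):
    # exact matrix exponential for a diagonal hamiltonian
    return np.diag(np.exp(-1j * np.diag(H)))

# both hamiltonians generate the same one-step unitary
assert np.allclose(exp_minus_iH(H1), U)
assert np.allclose(exp_minus_iH(H2), U)
print("both generators reproduce U")
```

for a many - body qca the same branch freedom applies to every quasi - energy, which is why asking whether *any* branch choice yields a local hamiltonian is a non - trivial question.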
turán, mitrinović - adamović and wilker type inequalities are deduced for regular coulomb wave functions. the proofs are based on a mittag - leffler expansion for the regular coulomb wave function, which may be of independent interest. moreover, some complete monotonicity results concerning the coulomb zeta functions and some interlacing properties of the zeros of coulomb wave functions are given.
arxiv:1504.06448
the prompt gamma ray emission was investigated in the 16a mev energy region by means of the 36, 40ar + 96, 92zr fusion reactions leading to a compound nucleus in the vicinity of 132ce. we show that the prompt radiation, which appears to be still effective at such a high beam energy, has an angular distribution pattern consistent with a dipole oscillation along the symmetry axis of the dinuclear system. the data are compared with calculations based on a collective bremsstrahlung analysis of the reaction dynamics.
arxiv:0710.1512
the kinetic inductance ( ki ) of superconducting devices can be exploited for reducing the footprint of linear elements as well as for introducing nonlinearity to the circuit. we characterize the linear and nonlinear properties of a multimode resonator fabricated from amorphous tungsten silicide ( wsi ) with a fundamental frequency of \ ( f _ 1 = 172 \ ) mhz. we show how the multimode structure of the device can be used to extract the different quality factors and to aid the nonlinear characterization. in the linear regime the footprint is reduced by a factor of \ ( \ sim 2. 9 \ ) with standard lateral dimensions with no significant degradation of the internal quality factor compared to a similar al device. in the nonlinear regime we observe self positive frequency shifts at low powers which can be attributed to saturation of tunneling two - level systems. the cross mode nonlinearities are described well by a kerr model with a self - kerr coefficient in the order of \ ( | k _ { 11 } | / 2 \ pi \ approx 1. 5 \ times10 ^ { - 7 } \ ) hz / photon. these properties together with a reproducible fabrication process make wsi a promising candidate for creating linear and nonlinear circuit qed elements.
arxiv:2107.13264
in recent years, continual learning with pre - training ( clpt ) has received widespread interest, shifting the field away from its traditional focus on training from scratch. the use of strong pre - trained models ( ptms ) can greatly facilitate knowledge transfer and alleviate catastrophic forgetting, but also suffers from progressive overfitting of pre - trained knowledge to specific downstream tasks. a majority of current efforts keep the ptms frozen and incorporate task - specific prompts to instruct representation learning, coupled with a prompt selection process for inference. however, due to the limited capacity of prompt parameters, this strategy demonstrates only sub - optimal performance in continual learning. in comparison, tuning all parameters of ptms often provides the greatest potential for representation learning, making sequential fine - tuning ( seq ft ) a fundamental baseline that has been overlooked in clpt. to this end, we present an in - depth analysis of the progressive overfitting problem through the lens of seq ft. considering that overly fast representation learning and a biased classification layer constitute this particular problem, we introduce the advanced slow learner with classifier alignment ( slca + + ) framework to unleash the power of seq ft, serving as a strong baseline approach for clpt. our approach involves a slow learner to selectively reduce the learning rate of backbone parameters, and a classifier alignment to align the disjoint classification layers in a post - hoc fashion. we further enhance the efficacy of sl with a symmetric cross - entropy loss, as well as employ a parameter - efficient strategy to implement seq ft with slca + +. across a variety of continual learning scenarios on image classification benchmarks, our approach provides substantial improvements and outperforms state - of - the - art methods by a large margin. code: https://github.com/gengdavid/slca.
arxiv:2408.08295
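the slow - learner idea ( a selectively reduced learning rate for the backbone, the full rate for the classifier ) can be caricatured with a two - layer linear model in numpy; the toy data, the model and the 10x rate factor are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy binary task: label depends on the first input feature
X = rng.normal(size=(128, 8))
y = (X[:, 0] > 0).astype(float)

W_backbone = 0.1 * rng.normal(size=(8, 4))   # "backbone" weights
w_head = 0.1 * rng.normal(size=4)            # "classifier" weights
lr_head, lr_backbone = 0.5, 0.05             # slow learner: backbone 10x slower

def loss():
    p = 1 / (1 + np.exp(-(X @ W_backbone @ w_head)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

loss_before = loss()
for _ in range(500):
    h = X @ W_backbone
    p = 1 / (1 + np.exp(-(h @ w_head)))
    g = (p - y) / len(y)                     # d(loss)/d(logits)
    grad_W = X.T @ (g[:, None] * w_head[None, :])
    w_head -= lr_head * (h.T @ g)            # fast update for the classifier
    W_backbone -= lr_backbone * grad_W       # slow update for the backbone
loss_after = loss()
print(loss_before, loss_after)
```

the same two - group learning - rate split is what frameworks like pytorch express through optimizer parameter groups; the point here is only that the backbone representation drifts more slowly than the classifier.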
the functional renormalization group ( frg ) approach for spin models relying on a pseudo - fermionic description has proven to be a powerful technique in simulating ground state properties of strongly frustrated magnetic lattices. a drawback of the frg framework is that it is formulated in the imaginary - time matsubara formalism and thus only able to access static correlations, a limitation shared with most other many - body approaches. a description of the dynamical properties of magnetic systems is the key to bridging the gap between theory and neutron scattering spectra. we take the decisive step of expanding the scope of pseudo - fermion frg to the keldysh formalism, which, while originally developed to address non - equilibrium phenomena, enables a direct calculation of the equilibrium dynamical spin structure factors on generic lattices in arbitrary dimension. we identify the principal features characterizing the low - energy spectra of exemplary zero -, one - and two - dimensional spin - $ 1 / 2 $ heisenberg models as well as the kitaev honeycomb model.
arxiv:2503.11596
it was recently shown that the homogeneous and isotropic cosmology of a massless scalar field coupled to general relativity exhibits a new hidden conformal invariance under möbius transformations of the proper time, in addition to the invariance under time - reparametrization. the resulting noether charges form a $ sl ( 2, \ mathbb { r } ) $ lie algebra, which encapsulates the whole kinematics and dynamics of the geometry. this allows one to map flrw cosmology onto conformal mechanics and to formulate quantum cosmology in $ \ text { cft } _ 1 $ terms. here, we show that this conformal structure is embedded in a larger $ so ( 3, 2 ) $ algebra of observables, which allows one to present all the dirac observables for the whole gravity plus matter sector in a unified picture. not only does this allow one to quantize the system and its whole algebra of observables as a single irreducible representation of $ so ( 3, 2 ) $, but it also gives access to a scalar field operator $ \ hat { \ phi } $, opening the door to the inclusion of non - trivial potentials for the scalar field. as such, this extended conformal structure might allow one to perform a group quantization of inflationary cosmological backgrounds.
arxiv:2001.11807
dense high - energy monoenergetic proton beams are vital for a wide range of applications, thus modern laser - plasma - based ion acceleration methods are aiming to obtain high - energy proton beams with an energy spread as low as possible. in this work, we put forward a quantum radiative compression method to post - compress a highly accelerated proton beam and convert it to a dense quasi - monoenergetic one. we find that when the relativistic plasma produced by radiation pressure acceleration collides head - on with an ultraintense laser beam, large - amplitude plasma oscillations are excited due to quantum radiation - reaction and the ponderomotive force, which induce compression of the phase space of protons located in its acceleration phase with negative gradient. our three - dimensional spin - resolved qed particle - in - cell simulations show that hollow - structure proton beams with a peak energy $ \ sim $ gev, a relative energy spread of a few percent and number $ n _ p \ sim10 ^ { 10 } $ ( or $ n _ p \ sim 10 ^ 9 $ with a $ 1 \ % $ energy spread ) can be produced in near future laser facilities, which may fulfill the requirements of important applications, such as radiography of ultra - thick dense materials, or as injectors for hadron colliders.
arxiv:2104.14239
let $ \ mathcal { h } $ denote an ariki - koike algebra over a field of characteristic $ p \ geq 0 $. for each $ r $ - multipartition $ { \ bf \ lambda } $ of $ n $, we define a $ \ mathcal { h } $ - module $ s ^ { { \ bf \ lambda } } $ and for each kleshchev $ r $ - multipartition $ { \ bf \ mu } $ of $ n $, we define an irreducible $ \ mathcal { h } $ - module $ d ^ { { \ bf \ mu } } $. given a multipartition $ { \ bf \ lambda } $ and a kleshchev multipartition $ { \ bf \ mu } $ both lying in a rouquier block and which have a common multicore, we give a closed formula for the graded decomposition number $ [ s ^ { { \ bf \ lambda } } : d ^ { { \ bf \ mu } } ] _ v $ when $ p = 0 $.
arxiv:2303.04668
we provide a simple and short proof of a multidimensional borg - levinson type theorem. precisely, we prove that the spectral boundary data uniquely determine the corresponding potential appearing in the schrödinger operator on an admissible riemannian manifold. we also sketch the proof of the case of incomplete spectral boundary data.
arxiv:1912.03055
emission from blazar jets in the ultraviolet, optical, and infrared is polarized. if these low - energy photons were inverse - compton scattered, the upscattered high - energy photons retain a fraction of the polarization. current and future x - ray and gamma - ray polarimeters such as integral - spi, pogolite, x - calibur, gamma - ray burst polarimeter, gems - like missions, astro - h, and polarix have the potential to discover polarized x - rays and gamma - rays from blazar jets for the first time. detection of such polarization will open a qualitatively new window into high - energy blazar emission ; actual measurements of polarization degree and angle will quantitatively test theories of jet emission mechanisms. we examine the detection prospects of blazars by these polarimetry missions using examples of 3c 279, pks 1510 - 089, and 3c 454. 3, bright sources with relatively high degrees of low - energy polarization. we conclude that while balloon polarimeters will be challenged to detect blazars within reasonable observational times ( with x - calibur offering the most promising prospects ), space - based missions should detect the brightest blazars for polarization fractions down to a few percent. typical flaring activity of blazars could boost the overall number of polarimetric detections by nearly a factor of five to six purely accounting for flux increase of the brightest of the comprehensive, all - sky, fermi - lat blazar distribution. the instantaneous increase in the number of detections is approximately a factor of two, assuming a duty cycle of 20 % for every source. the detectability of particular blazars may be reduced if variations in the flux and polarization fraction are anticorrelated. simultaneous use of variability and polarization trends could guide the selection of blazars for high - energy polarimetric observations.
arxiv:1502.00453
gravitational wave observations can provide unprecedented insight into the fundamental nature of gravity and allow for novel tests of modifications to general relativity. one proposed modification suggests that gravity may undergo a phase transition in the strong - field regime ; the detection of such a new phase would constitute a smoking - gun for corrections to general relativity at the classical level. several classes of modified gravity predict the existence of such a transition - known as spontaneous scalarization - associated with the spontaneous symmetry breaking of a scalar field near a compact object. using a strong - field - agnostic effective - field - theory approach, we show that all theories that exhibit spontaneous scalarization can also manifest dynamical scalarization, a phase transition associated with symmetry breaking in a binary system. we derive an effective point - particle action that provides a simple parametrization describing both phenomena, which establishes a foundation for theory - agnostic searches for scalarization in gravitational - wave observations. this parametrization can be mapped onto any theory in which scalarization occurs ; we demonstrate this point explicitly for binary black holes with a toy model of modified electrodynamics.
arxiv:1906.08161
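the scalarization mechanism this abstract refers to can be illustrated with a standard toy effective potential. this is a generic sketch of the symmetry-breaking idea, not the effective point-particle action derived in the paper; the coupling $\alpha$ and self-interaction $\lambda$ are illustrative assumptions:

```latex
% toy effective potential for a scalar field coupled to matter,
% with T the trace of the matter stress-energy tensor
V_{\mathrm{eff}}(\varphi) = \tfrac{1}{2}\left(\mu^{2} + \alpha\, T\right)\varphi^{2}
                          + \tfrac{\lambda}{4}\,\varphi^{4}
```

for ordinary matter $T \approx -\rho < 0$, so a sufficiently compact object with $\alpha\rho > \mu^{2}$ makes the effective mass squared negative: $\varphi = 0$ becomes unstable and the field settles into a scalarized minimum (spontaneous scalarization). in a binary, the companion's contribution can push the combined effective mass squared negative even when neither isolated body scalarizes, which is the hallmark of dynamical scalarization.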
the spiral galaxy ngc 6503 exhibits a regular kinematical structure except for a remarkable drop of the stellar velocity dispersion values in the central region. to investigate the dynamics of the disc, a theoretical framework is described. this includes a mass decomposition of the galaxy into a family of disc/halo realizations compatible with the observed photometry and rotation curve. for this family, stellar velocity dispersion values and stability parameters were calculated, showing that the more massive discs, although having larger dispersions, are less stable. however, a reliable theoretical description of the inner regions where the drop occurs cannot be given, so we have resorted to numerical calculations. pure stellar 3d simulations have been performed for the family of decompositions. a clear result is that disc/dark-halo mass ratios approaching those of the maximum-disc limit generate a large bar structure, which is incompatible with the observed morphology of ngc 6503. at larger radii the stellar kinematics resulting from the simulations essentially agrees with that predicted by the theory, but the central velocity dispersion drop could not be reproduced. a close inspection reveals that the central nuclear region is very small and bright. tentatively, therefore, this nucleus was treated as an isothermal sphere and a core-fitting procedure was applied. adopting equal mass-to-light ratios for the disc and nucleus, a velocity dispersion of 21.5 km/s is predicted, in excellent agreement with the observed central value. the observed dispersion drop can thus be explained by a separate, kinematically distinct galactic component.
arxiv:astro-ph/9704190
a schematic two-level model consisting of a "collective" bosonic state and an "elementary" meson is constructed that provides an interpolation from a hadronic description (a la rapp/wambach) to b/r scaling for the description of vector-meson properties in dense medium. the development is based on a close analogy to the degenerate schematic model of brown for giant resonances in nuclei.
arxiv:nucl-th/9902009
extracting information from raw data is probably one of the central activities of experimental scientific enterprises. this work concerns a pipeline in which a specific model is trained to provide a compact, essential representation of the training data, useful as a starting point for visualization and for analyses aimed at detecting patterns and regularities among the data. to enable researchers to exploit this approach, a cloud-based system is being developed and tested in the neanias project as one of the ml tools of a thematic service to be offered to the eosc. here, we describe the architecture of the system and introduce two example use cases in the astronomical context.
arxiv:2204.13933
the kibble-zurek mechanism in a relativistic $\phi^4$ scalar field theory in $d = (1+1)$ is studied using uniform matrix product states. the equal-time two-point function in momentum space $g_2(k)$ is approximated as the system is driven through a quantum phase transition at a variety of different quench rates $\tau_q$. we focus on looking for signatures of topological defect formation in the system and demonstrate the consistency of the picture that the two-point function $g_2(k)$ displays two characteristic scales, the defect density $n$ and the kink width $d_k$. consequently, $g_2(k)$ provides a clear signature for the formation of defects and a well-defined measure of the defect density in the system. these results provide a benchmark for the use of tensor networks as powerful non-perturbative non-equilibrium methods for relativistic quantum field theory, providing a promising technique for the future study of high energy physics and cosmology.
arxiv:1711.10452
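the idea that the two-point function encodes the defect density can be checked in a purely classical toy model (this stands in for the paper's matrix-product-state calculation; the lattice size and kink probability below are illustrative assumptions). for a dilute gas of kinks in a z2-symmetric field, $g_2(x)$ decays exponentially at a rate set by the kink density, so the density can be read off the correlator:

```python
import math
import random

# toy model: +/-1 field on a 1d lattice that flips sign at randomly
# placed kinks; the correlator g2(d) = (1 - 2p)**d then encodes the
# kink density p directly.
random.seed(0)

L = 200_000          # lattice sites
p = 0.01             # probability of a kink at each site (true defect density)

field, s = [], 1
for _ in range(L):
    if random.random() < p:
        s = -s       # each kink flips the field
    field.append(s)

def g2(d):
    """equal-time two-point correlator at separation d."""
    return sum(field[i] * field[i + d] for i in range(L - d)) / (L - d)

# read the density off the exponential decay of the correlator
d = 20
p_est = -math.log(g2(d)) / (2 * d)
print(p_est)   # should land close to p = 0.01
```

the estimate works because $-\log(1-2p)/2 \approx p$ for small $p$; in the quantum quench setting the analogous scale extracted from $g_2(k)$ is the kibble-zurek defect density.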
we compute all planar two-loop six-point feynman integrals entering scattering observables in massless gauge theories such as qcd. a central result of this paper is the formulation of the differential-equations method under the algebraic constraints stemming from four-dimensional kinematics, which in this case leave only 8 independent scales. we show that these constraints imply that one must compute topologies with only up to 8 propagators, instead of the expected 9. this leads to the decoupling of entire classes of integrals that do not contribute to scattering amplitudes in four-dimensional gauge theories. we construct a pure basis and derive the canonical differential equations it satisfies, and we discuss their numerical solution. this work marks an important step towards the calculation of massless $2 \to 4$ scattering processes at two loops.
arxiv:2412.19884
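the scale counting quoted in the abstract follows from standard kinematic bookkeeping (this is a back-of-the-envelope check, not taken from the paper): $n$ massless momenta admit $n(n-3)/2$ independent mandelstam invariants in general dimension, while strictly four-dimensional kinematics leaves $3n-10$ independent scales once gram-determinant constraints are imposed.

```python
# counting of independent kinematic scales for n-point massless scattering
def mandelstam_count(n):
    """independent invariants s_ij after momentum conservation and on-shell
    conditions, valid in sufficiently high spacetime dimension."""
    return n * (n - 3) // 2

def four_dim_scales(n):
    """4n momentum components - n on-shell conditions - 10 poincare
    generators = 3n - 10 scales in exactly four dimensions."""
    return 3 * n - 10

n = 6
print(mandelstam_count(n), four_dim_scales(n))  # 9 8
```

for $n = 6$ the two countings differ (9 versus 8), which is precisely the extra algebraic constraint the paper builds into the differential-equations method; for $n = 5$ they coincide at 5, so five-point kinematics is 4d-generic.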
we propose a single-chunk model of long-term memory that combines the basic features of the act-r theory and the multiple-trace memory architecture. the pivot point of the developed theory is a mathematical description of the creation of new memory traces caused by learning a certain fragment of an information pattern, affected by the fragments of this pattern already retained up to the current moment of time. these constructions are justified using the available psychological and physiological data. the final equation governing the learning and forgetting processes takes the form of a differential equation with a caputo-type fractional time derivative. several characteristic situations of the learning (continuous and discontinuous) and forgetting processes are studied numerically. in particular, it is demonstrated that, first, the "learning" and "forgetting" exponents of the corresponding power laws of the memory's fractional dynamics should be regarded as independent system parameters. second, as far as spacing effects are concerned, the longer the discontinuous learning process, the longer the time interval within which a subject remembers the information without considerable loss; moreover, this relationship is a linear proportionality.
arxiv:1402.4058
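the qualitative behaviour of caputo-fractional forgetting can be sketched numerically. the equation below, $d^\alpha m(t) = -\lambda m(t)$, is an assumed minimal relaxation form (the paper's full learning/forgetting equation is richer), solved with the standard l1 discretization of the caputo derivative; for $0 < \alpha < 1$ the trace $m(t)$ decays as a power law at late times, far more slowly than $e^{-\lambda t}$:

```python
import math

def caputo_relax(alpha, lam, t_end, steps):
    """solve d^alpha m = -lam * m with the l1 scheme; m(0) = 1."""
    dt = t_end / steps
    c = dt ** (-alpha) / math.gamma(2.0 - alpha)
    # l1 weights b_k = (k+1)^(1-alpha) - k^(1-alpha)
    b = [(k + 1) ** (1.0 - alpha) - k ** (1.0 - alpha) for k in range(steps)]
    m = [1.0]                                   # freshly learned trace
    for n in range(1, steps + 1):
        # memory (history) part of the discretized caputo derivative
        hist = sum(b[k] * (m[n - k] - m[n - k - 1]) for k in range(1, n))
        # implicit update: c*(m_n - m_{n-1}) + c*hist = -lam * m_n
        m.append((c * m[-1] - c * hist) / (c + lam))
    return m

m = caputo_relax(alpha=0.5, lam=1.0, t_end=10.0, steps=1000)
print(m[-1])   # ~0.17: power-law tail, versus exp(-10) ~ 4.5e-5
```

the slow tail is the mechanism behind the power-law forgetting curves the abstract describes; varying $\alpha$ for the learning and forgetting phases separately corresponds to treating the two exponents as independent parameters.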
generative modeling is a flavor of machine learning with applications ranging from computer vision to chemical design. it is expected to be one of the techniques best suited to take advantage of the additional resources provided by near-term quantum computers. we implement a data-driven quantum circuit training algorithm on the canonical bars-and-stripes data set using a quantum-classical hybrid machine. the training proceeds by running parameterized circuits on a trapped-ion quantum computer and feeding the results to a classical optimizer. we apply two separate strategies, particle swarm and bayesian optimization, to this task. we show that the convergence of the quantum circuit to the target distribution depends critically on both the quantum hardware and the classical optimization strategy. our study represents the first successful training of a high-dimensional universal quantum circuit and highlights the promise and challenges associated with hybrid learning schemes.
arxiv:1812.08862
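the bars-and-stripes target distribution mentioned in the abstract is simple to enumerate classically, which is what makes it a canonical benchmark. this sketch (no quantum hardware involved) builds the set of valid n x m patterns: either every row is constant ("bars") or every column is constant ("stripes"):

```python
from itertools import product

def bars_and_stripes(n, m):
    """all valid n-by-m bars-and-stripes patterns, flattened row-major."""
    patterns = set()
    for rows in product([0, 1], repeat=n):
        # bars: each row is all 0s or all 1s
        patterns.add(tuple(r for r in rows for _ in range(m)))
    for cols in product([0, 1], repeat=m):
        # stripes: each column is all 0s or all 1s
        patterns.add(tuple(cols) * n)
    return patterns

bas22 = bars_and_stripes(2, 2)
print(len(bas22))   # 2^2 + 2^2 - 2 = 6 distinct patterns
```

the count is $2^n + 2^m - 2$ because the all-zeros and all-ones grids are both bars and stripes; the hybrid training loop in the paper tunes circuit parameters so that measurement outcomes reproduce the uniform distribution over this set.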
we study the eigenvector mass distribution of an $n \times n$ wigner matrix on a set of coordinates $i$ satisfying $|i| \ge cn$ for some constant $c > 0$. for eigenvectors corresponding to eigenvalues at the spectral edge, we show that the sum of the mass on these coordinates converges to a gaussian in the $n \rightarrow \infty$ limit, after a suitable rescaling and centering. the proof proceeds by a two-moment matching argument. we directly compare edge-eigenvector observables of an arbitrary wigner matrix to those of a gaussian matrix, which may be computed explicitly.
arxiv:2303.11142
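the leading-order behaviour behind this result is easy to see numerically (a monte-carlo sketch, not the paper's moment-matching proof; the matrix size, shift, and iteration counts below are illustrative assumptions): edge eigenvectors of a wigner matrix are delocalized, so the mass they place on the first $cn$ coordinates concentrates around $c$, and the paper's theorem describes the gaussian fluctuations around that value after rescaling.

```python
import math
import random

random.seed(1)

def goe(n):
    """real symmetric matrix with gaussian entries, normalized so the
    spectrum lies in roughly [-2, 2]."""
    a = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    return [[(a[i][j] + a[j][i]) / math.sqrt(2 * n) for j in range(n)]
            for i in range(n)]

def top_eigvec(h, iters=120):
    """power iteration on h + 3*id, which targets the top edge eigenvector."""
    n = len(h)
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(iters):
        w = [sum(h[i][j] * v[j] for j in range(n)) + 3.0 * v[i]
             for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

n, c = 100, 0.3
masses = []
for _ in range(6):
    v = top_eigvec(goe(n))
    masses.append(sum(x * x for x in v[: int(c * n)]))  # mass on first c*n coords
mean_mass = sum(masses) / len(masses)
print(mean_mass)   # should land near c = 0.3
```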
the hyperfine transition of $^3$he$^+$ at 3.5 cm has been proposed as a probe of the high-z igm, since it offers a unique insight into the evolution of the helium component of the gas and could potentially give an independent constraint on the 21 cm signal from neutral hydrogen. in this paper, we use radiative transfer simulations of reionization driven by sources such as stars, x-ray binaries, accreting black holes and the shock-heated interstellar medium, together with simulations of a high-z quasar, to characterize the signal and analyze its prospects of detection. we find that the peak of the signal lies in the range 1-50 $\mu$k for both environments, but while around the quasar it is always in emission, in the case of cosmic reionization a brief period of absorption is expected. as the evolution of heii is determined by stars, it is not possible to distinguish reionization histories driven by more energetic sources. on the other hand, while a bright qso produces a signal in 21 cm that is very similar to the one from a large collection of galaxies, its signature in 3.5 cm is very peculiar and could be a powerful probe to identify the presence of the qso. we analyze the prospects of the signal's detectability using ska1-mid as our reference telescope. we find that the noise power spectrum dominates over the power spectrum of the signal, although a modest s/n ratio can be obtained when the wavenumber bin width and the survey volume are sufficiently large.
arxiv:2007.00934
rx j1301.9+2747 is a unique active galaxy with a supersoft x-ray spectrum that lacks significant emission at energies above 2 kev. in addition, it is one of the few galaxies displaying quasi-periodic x-ray eruptions that recur on a timescale of 13-20 ks. we present multi-epoch radio observations of rx j1301.9+2747 using gmrt, vla and vlba. the vlba imaging at 1.6 ghz reveals compact radio emission unresolved on a scale of < 0.7 pc, with a brightness temperature of t_b > 5x10^7 k. the radio emission is variable by more than a factor of 2.5 over a few days, based on data taken from vla monitoring campaigns. the short-term radio variability suggests that the radio-emitting region has a size as small as 8x10^{-4} pc, resulting in an even higher brightness temperature of t_b ~ 10^{12} k. a similar limit on the source size can be obtained if the observed flux variability is not intrinsic but caused by the interstellar scintillation effect. the overall radio spectrum is steep, with a time-averaged spectral index alpha = -0.78 +/- 0.03 between 0.89 ghz and 14 ghz. these observational properties rule out a thermal or star-formation origin of the radio emission and appear to be consistent with the scenario of episodic jet ejections driven by a magnetohydrodynamic process. simultaneous radio and x-ray monitoring observations down to a cadence of hours are required to test whether the compact and variable radio emission is correlated with the quasi-periodic x-ray eruptions.
arxiv:2207.06585
we examine the fraction of massive ($m_{*} > 10^{10} m_{\odot}$), compact star-forming galaxies (csfgs) that host an active galactic nucleus (agn) at $z \sim 2$. these csfgs are likely the direct progenitors of the compact quiescent galaxies observed at this epoch, which are the first population of passive galaxies to appear in large numbers in the early universe. we identify csfgs that host an agn using a combination of hubble wfc3 imaging and chandra x-ray observations in four fields: the chandra deep fields, the extended groth strip, and the ukidss ultra deep survey field. we find that $39.2^{+3.9}_{-3.6}\%$ (65/166) of csfgs at $1.4 < z < 3.0$ host an x-ray detected agn. this fraction is 3.2 times higher than the incidence of agn in extended star-forming galaxies with similar masses at these redshifts, a difference that is significant at the $6.2\sigma$ level. our results are consistent with models in which csfgs are formed through a dissipative contraction that triggers a compact starburst and concurrent growth of the central black hole. we also discuss our findings in the context of cosmological galaxy evolution simulations that require feedback energy to rapidly quench csfgs. we show that the agn fraction peaks precisely where energy injection is needed to reproduce the decline in the number density of csfgs with redshift. our results suggest that the first abundant population of massive, quenched galaxies emerged directly following a phase of elevated supermassive black hole growth, and they further hint at a possible connection between agn and the rapid quenching of star formation in these galaxies.
arxiv:1710.05921
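the quoted uncertainties on the agn fraction can be reproduced to good accuracy with a simple binomial treatment (the paper's exact interval construction is not stated in the abstract; a 1-sigma wilson score interval is assumed here as a stand-in):

```python
import math

def wilson(successes, trials, z=1.0):
    """wilson score confidence interval for a binomial proportion;
    z = 1.0 corresponds to a ~68% (1-sigma) interval."""
    p = successes / trials
    denom = 1.0 + z * z / trials
    center = (p + z * z / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials
                                   + z * z / (4 * trials * trials))
    return center - half, center + half

# 65 x-ray agn hosts out of 166 csfgs
lo, hi = wilson(65, 166)
p = 65 / 166
print(f"{100*p:.1f}% (+{100*(hi-p):.1f}/-{100*(p-lo):.1f})")
# prints "39.2% (+3.8/-3.7)", close to the quoted +3.9/-3.6
```

the small residual difference from the quoted $+3.9/-3.6$ suggests the authors used a slightly different interval (e.g. a beta posterior), but the binomial scale of the error bars checks out.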
we investigate the quantum geometry of a 2d surface $s$ bounding the cauchy slices of a 4d gravitational system. we investigate in detail, and for the first time, the symplectic current that naturally arises as a boundary term in the first-order formulation of general relativity in terms of the ashtekar-barbero connection. this current is proportional to the simplest quadratic form constructed out of the triad field, pulled back on $s$. we show that the would-be gauge degrees of freedom, arising from $su(2)$ gauge transformations plus diffeomorphisms tangent to the boundary, are entirely described by the boundary 2-dimensional symplectic form and give rise to a representation at each point of $s$ of $sl(2,\mathbb{r}) \times su(2)$. independently of the connection with gravity, this system is very simple and rich at the quantum level, with possible connections with conformal field theory in 2d. a direct application of the quantum theory is the modelling of black hole horizons in quantum gravity.
arxiv:1507.02573