State space subspace algorithms for input-output systems have been widely applied and also have a reasonably well-developed asymptotic theory dealing with consistency. However, guaranteeing the stability of the estimated system matrix is a major issue. Existing stability-guaranteed algorithms are computationally expensive, require several tuning parameters, and scale badly to high state dimensions. Here, we develop a new algorithm that is closed-form and requires no tuning parameters. It is thus computationally cheap and scales easily to high state dimensions. We also prove its consistency under reasonable conditions.
arxiv:2408.07918
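The flavor of a closed-form, stability-guaranteed estimator can be illustrated with a minimal sketch (not the paper's algorithm): estimate the state-transition matrix by least squares, then, if necessary, shrink it so its spectral radius stays below one. The system matrices, noise level, and the eigenvalue-shrinking step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stable linear system x_{t+1} = A x_t + w_t (A_true is the ground truth).
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.7]])
T, n = 500, 2
X = np.zeros((T, n))
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(n)

# Closed-form least-squares estimate of the state-transition matrix.
A_hat, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = A_hat.T

# Hypothetical stability projection: if the spectral radius reaches 1,
# shrink the estimate so all eigenvalues lie strictly inside the unit disk.
rho = max(abs(np.linalg.eigvals(A_hat)))
if rho >= 1.0:
    A_hat *= 0.99 / rho
```

The projection step is a common generic safeguard, not the consistency-preserving construction of the paper.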
$SRA$-free spaces form a wide class of metric spaces including finite-dimensional Alexandrov spaces of non-negative curvature, complete Berwald spaces of non-negative flag curvature, Cayley graphs of virtually abelian groups, and doubling metric spaces of non-positive Busemann curvature with extendable geodesics. This class also includes arbitrarily big balls in complete, locally compact $CAT(k)$-spaces ($k \in \mathbb{R}$) with locally extendable geodesics, finite-dimensional Alexandrov spaces of curvature $\ge k$ with $k \in \mathbb{R}$, and complete Finsler manifolds satisfying the doubling condition. We show that $SRA$-free spaces admit bi-Lipschitz embeddings in Euclidean spaces. As a corollary we obtain a quantitative bi-Lipschitz embedding theorem for balls in finite-dimensional Alexandrov spaces of curvature bounded from below, conjectured by S. Eriksson-Bique. The main tool of the proof is an extension theorem for bi-Lipschitz maps into Euclidean spaces. This extension theorem is close in nature to the embedding theorem of J. Seo and may be of independent interest.
arxiv:1906.02477
We investigate the orbital motion of cold clouds in the broad-line region of active galactic nuclei subject to the gravity of a black hole, a force due to a non-isotropic central source, and a drag force proportional to the velocity squared. The intercloud medium is described using the standard solutions for advection-dominated accretion flows. The orbit of a cloud decays because of the drag force, but the typical time scale for clouds to fall onto the central black hole is shorter than in the linear drag case. This time scale is calculated for a cloud moving through a static or a rotating intercloud medium. We show that when the drag force is a quadratic function of the velocity, irrespective of the initial conditions and other input parameters, clouds generally fall onto the central region much faster than the age of the whole system. Since cold clouds are present in most broad-line regions, we suggest that mechanisms for the continuous creation of clouds must operate in these systems.
arxiv:1605.09144
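The fast orbital decay under quadratic drag can be illustrated with a toy integration (illustrative units and drag coefficient, not the paper's ADAF intercloud model): a point mass on an initially circular orbit feels Newtonian gravity plus a drag force proportional to the velocity squared, opposing the motion.

```python
import numpy as np

GM = 1.0        # gravitational parameter (illustrative units)
k_drag = 0.005  # hypothetical quadratic drag coefficient
dt = 1e-3

pos = np.array([1.0, 0.0])
vel = np.array([0.0, 1.0])      # circular-orbit speed for r = 1

radii = []
for _ in range(50_000):
    r = np.linalg.norm(pos)
    v = np.linalg.norm(vel)
    # Gravity plus quadratic drag (-k * |v| * v opposes the motion).
    acc = -GM * pos / r**3 - k_drag * v * vel
    vel += acc * dt             # semi-implicit Euler step
    pos += vel * dt
    radii.append(r)
```

Because the drag continuously removes orbital energy, the cloud spirals inward; for a quasi-circular orbit the decay rate even admits a simple closed form, which is the qualitative behavior the abstract describes.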
A recent trend in clustering is the development of algorithms that not only identify clusters within data but also express and capture the uncertainty of cluster membership. Evidential clustering addresses this by using the Dempster-Shafer theory of belief functions, a framework designed to manage and represent uncertainty. This approach results in a credal partition, a structured set of mass functions that quantify the uncertain assignment of each object to potential groups. The Python framework evclust, presented in this paper, offers a suite of efficient evidential clustering algorithms as well as tools for visualizing, evaluating, and analyzing credal partitions.
arxiv:2502.06587
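The credal-partition bookkeeping can be sketched in a few lines of plain Python. This illustrates the Dempster-Shafer mass, belief, and plausibility definitions only; it does not use the actual evclust API, and the focal sets and mass values are made up.

```python
# Illustrative mass function for one object and two clusters {w1, w2}.
# Focal sets: empty set (outlier), {w1}, {w2}, and {w1, w2} (ignorance).
mass = {
    frozenset(): 0.05,
    frozenset({"w1"}): 0.60,
    frozenset({"w2"}): 0.15,
    frozenset({"w1", "w2"}): 0.20,
}
assert abs(sum(mass.values()) - 1.0) < 1e-12  # masses sum to one

def belief(A):
    """Bel(A): total mass committed to non-empty subsets of A."""
    return sum(m for B, m in mass.items() if B and B <= A)

def plausibility(A):
    """Pl(A): total mass of focal sets that do not contradict A."""
    return sum(m for B, m in mass.items() if B & A)
```

For the singleton `{w1}`, belief collects only the 0.60 committed directly to it, while plausibility also counts the 0.20 of ignorance mass, bracketing the uncertain membership from below and above.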
Modern work on the cross-linguistic computational modeling of morphological inflection has typically employed language-independent data splitting algorithms. In this paper, we supplement that approach with language-specific probes designed to test aspects of morphological generalization. Testing these probes on three morphologically distinct languages (English, Spanish, and Swahili), we find evidence that three leading morphological inflection systems employ distinct generalization strategies over conjugational classes and feature sets on both orthographic and phonologically transcribed inputs.
arxiv:2310.13686
The production and decay of possible new heavy Majorana neutrinos are analyzed in hadronic collisions. New bounds on the mixing of these particles with standard neutrinos are estimated according to a fundamental representation suggested by grand unified models. A clear signature for these Majorana neutrinos is given by same-sign dileptons plus a charged weak vector boson in the final state. We discuss the experimental possibilities for the future Large Hadron Collider (LHC) at CERN.
arxiv:hep-ph/0002024
We propose that macroscopic objects built from negative-permeability metamaterials may experience resonantly enhanced magnetic force in low-frequency magnetic fields. Resonant enhancement of the time-averaged force originates from magnetostatic surface resonances (MSRs), which are analogous to the electrostatic resonances of negative-permittivity particles, well known as surface plasmon resonances in optics. We generalize the classical problem of the MSR of a homogeneous object to include anisotropic metamaterials, and consider the most extreme case of anisotropy, in which the permeability is negative in one direction but positive in the others. It is shown that deeply subwavelength objects made of such indefinite (hyperbolic) media exhibit a pronounced magnetic dipole resonance that couples strongly to uniform or weakly inhomogeneous magnetic fields and provides strong enhancement of the magnetic force, enabling applications such as enhanced magnetic levitation.
arxiv:1111.1695
Transitions between different conformational states are ubiquitous in proteins, being involved in signaling, catalysis, and other fundamental activities in cells. However, modeling those processes is extremely difficult, due to the need to efficiently explore a vast conformational space in order to find the actual transition path for systems whose complexity is already high in the steady states. Here we report a strategy that simplifies this task by attacking the complexity on several sides. We first apply a minimalist coarse-grained model to calmodulin, based on an empirical force field with a partial structural bias, to explore the transition paths between the apo-closed state and the Ca-bound open state of the protein. We then select representative structures along the trajectory based on a structural clustering algorithm and build a cleaned-up trajectory with them. We finally compare this trajectory with that produced by the online tool MinActionPath, which minimizes the action integral using a harmonic network model, and with that obtained by the PROMPT morphing method, based on an optimal mass transportation-type approach including physical constraints. The comparison is performed on both the structural and the energetic level, using the coarse-grained and the atomistic force fields upon reconstruction. Our analysis indicates that this method returns trajectories capable of exploring intermediate states with physical meaning while retaining a very low computational cost, which can allow systematic and extensive exploration of the transition pathways of multi-stable proteins.
arxiv:1905.05631
Inter-neuron communication happens through the exchange of neurotransmitters at the synapse by a process known as exocytosis, which makes exocytosis a fundamental process of information exchange in the body. The exocytosis process has a distinct geometry, as it involves a vesicle that attaches to the cell membrane and then releases neurotransmitters through a pore. Significant recent research, both experimental and numerical, attempts to understand the time dynamics of exocytosis. In this manuscript, we present an analytical model that predicts the key output parameters of exocytosis based on the geometry of the vesicle and pore. Our analytical predictions are well supported by detailed numerical simulations. This model could help extract geometrical parameters from experimental data and hence could be of broad interest.
arxiv:2301.13825
Magnetic reconnection in the relativistic and trans-relativistic regimes is able to accelerate particles to hard power-law energy spectra $f \propto \gamma^{-p}$ (approaching $p = 1$). The underlying acceleration mechanism that determines the spectral shape is currently a topic of intense investigation. By means of fully kinetic plasma simulations, we carry out a study of particle acceleration during magnetic reconnection in the trans-relativistic regime of a proton-electron plasma. While earlier work in this parameter regime has focused on the effects of the electric field parallel to the local magnetic field on particle injection (from thermal energy to the lower energy bound of the power-law spectrum), here we examine the roles of both parallel and perpendicular electric fields to gain a more complete understanding of the injection process and the further development of a power-law spectrum. We show that the parallel electric field does contribute significantly to particle injection, and is more important in the initial phase of magnetic reconnection. However, as the simulation proceeds, acceleration by the perpendicular electric field becomes more important for particle injection and completely dominates the acceleration responsible for the high-energy power-law spectrum. This holds robustly, in particular for longer reconnection times and larger systems, i.e. in simulations that are more indicative of the processes in astrophysical sources.
arxiv:2001.02732
Let $G$ be a compact group of linear transformations of a Euclidean space $V$. The $G$-invariant $C^\infty$ functions can be expressed as $C^\infty$ functions of a finite basic set of $G$-invariant homogeneous polynomials, called an integrity basis. The mathematical description of the orbit space $V/G$ depends on the integrity basis too: it is realized through polynomial equations and inequalities expressing rank and positive semi-definiteness conditions of the $P$-matrix, a real symmetric matrix determined by the integrity basis. The choice of the basic set of $G$-invariant homogeneous polynomials forming an integrity basis is not unique, so the mathematical description of the orbit space is not unique either. If $G$ is an irreducible finite reflection group, Saito et al. in 1980 characterized some special basic sets of $G$-invariant homogeneous polynomials that they called {\em flat}. They also found explicitly the flat basic sets of invariant homogeneous polynomials of all the irreducible finite reflection groups except for the two largest groups, $E_7$ and $E_8$. In this paper the flat basic sets of invariant homogeneous polynomials of $E_7$ and $E_8$ and the corresponding $P$-matrices are determined explicitly. Using the results reported here, one is able to determine easily the $P$-matrices corresponding to any other integrity basis of $E_7$ or $E_8$. From the $P$-matrices one may then write down the equations and inequalities defining the orbit spaces of $E_7$ and $E_8$ relative to a flat basis or to any other integrity basis. The results obtained here may be employed concretely to study analytically the symmetry breaking in all theories where the symmetry group is one of the finite reflection groups $E_7$ and $E_8$, or one of the Lie groups $E_7$ and $E_8$ in their adjoint representations.
arxiv:1003.1095
Considering four-neutrino schemes of type 3+1, we identify four small regions of the neutrino mixing parameter space compatible with all data. Assuming a small mixing between the sterile neutrino and the isolated mass eigenstate, we show that large $\nu_\mu \to \nu_\tau$ and $\nu_e \to \nu_\tau$ transitions are predicted in short-baseline experiments and could be observed in the near future in dedicated experiments. We also discuss implications for solar, atmospheric, and long-baseline neutrino experiments, and we present a formalism that allows one to describe, in 3+1 schemes, atmospheric neutrino oscillations, long-baseline $\nu_\mu$ disappearance, and $\nu_\mu \to \nu_\tau$ transitions in matter.
arxiv:hep-ph/0010009
Weak gravitational lensing is one of the most promising cosmological probes of the late universe. Several large ongoing (DES, KiDS, HSC) and planned (LSST, Euclid, WFIRST) astronomical surveys attempt to collect ever deeper and larger-scale data on weak lensing. Due to gravitational collapse, the distribution of dark matter is non-Gaussian on small scales. However, observations are typically evaluated through the two-point correlation function of galaxy shear, which does not capture non-Gaussian features of the lensing maps. Previous studies attempted to extract non-Gaussian information from weak lensing observations through several higher-order statistics such as the three-point correlation function, peak counts, or Minkowski functionals. Deep convolutional neural networks (CNNs) emerged in the field of computer vision with tremendous success, and they offer a new and very promising framework to extract information from 2- or 3-dimensional astronomical data sets, as confirmed by recent studies on weak lensing. We show that a CNN is able to yield significantly stricter constraints on the cosmological parameters ($\sigma_8$, $\Omega_m$) than the power spectrum, using convergence maps generated by full N-body simulations and ray-tracing, at angular scales and shape noise levels relevant for future observations. In a scenario mimicking LSST or Euclid, the CNN yields 2.4-2.8 times smaller credible contours than the power spectrum, and 3.5-4.2 times smaller at noise levels corresponding to a deep space survey such as WFIRST. We also show that at shape noise levels achievable in future space surveys the CNN yields 1.4-2.1 times smaller contours than peak counts, a higher-order statistic capable of extracting non-Gaussian information from weak lensing maps.
arxiv:1902.03663
We present the results of the searches for a low-mass standard model Higgs boson performed at the Tevatron $p\bar{p}$ collider ($\sqrt{s} = 1.96$ TeV) by the CDF and D0 experiments with an integrated luminosity of up to 8.5 fb$^{-1}$. Individual searches are discussed and classified according to their sensitivity. Primary channels rely on associated production with a vector boson ($WH$ or $ZH$) and the $H \to b\bar{b}$ decay channel (favored for $m_H < 135$ GeV/c$^2$). Event selection is based on the leptonic decay of the vector boson and the identification of b-hadron-enriched jets. Each individual channel is sensitive, for $m_H = 115$ GeV/c$^2$, to less than 5 times the SM expected cross section, and the most sensitive channels can exclude a production cross section of $2.3 \times \sigma_H^{SM}$. Secondary channels rely on a variety of final states. Although they are 2 to 5 times less sensitive than any primary channel, they contribute to the Tevatron combination and, in some cases, they pose strong constraints on exotic Higgs boson models.
arxiv:1201.6001
We consider an agent interacting with an environment in a single stream of actions, observations, and rewards, with no reset. This process is not assumed to be a Markov decision process (MDP). Rather, the agent has several representations (mapping histories of past interactions to a discrete state space) of the environment with unknown dynamics, only some of which result in an MDP. The goal is to minimize the average regret criterion against an agent who knows an MDP representation giving the highest optimal reward, and acts optimally in it. Recent regret bounds for this setting are of order $O(T^{2/3})$ with an additive term that is constant yet exponential in some characteristics of the optimal MDP. We propose an algorithm whose regret after $T$ time steps is $O(\sqrt{T})$, with all constants reasonably small. This is optimal in $T$, since $O(\sqrt{T})$ is the optimal regret in the setting of learning in a (single discrete) MDP.
arxiv:1302.2553
The main geometric ingredients of closed string field theory are the string vertices, the collections of string diagrams describing the elementary closed string interactions, satisfying the quantum Batalin-Vilkovisky master equation. They can be characterized using Riemann surfaces endowed with the metric solving the generalized minimal area problem. However, an adequately developed theory of such Riemann surfaces is not yet available, and consequently the description of the string vertices via Riemann surfaces with the minimal area metric fails to provide practical tools for performing calculations. We describe an alternative construction of the string vertices satisfying the Batalin-Vilkovisky master equation using Riemann surfaces endowed with a metric of constant curvature $-1$ over the whole surface. We argue that this construction provides an approximately gauge invariant closed string field theory.
arxiv:1706.07366
Graham et al. (2015a) reported a periodically varying quasar and supermassive black hole binary candidate, PG1302-102 (hereafter PG1302), which was discovered in the Catalina Real-time Transient Survey (CRTS). Its combined Lincoln Near-Earth Asteroid Research (LINEAR) and CRTS optical light curve is well fitted by a sinusoid with an observed period of $\approx 1884$ days and well modeled by the relativistic Doppler boosting of the secondary mini-disk (D'Orazio et al. 2015). However, the LINEAR+CRTS light curve from MJD $\approx 52700$ to MJD $\approx 56400$ covers only $\sim 2$ cycles of periodic variation, which is a short baseline that can be highly susceptible to normal, stochastic quasar variability (Vaughan et al. 2016). In this Letter, we present a re-analysis of PG1302, using the latest light curve from the All-Sky Automated Survey for Supernovae (ASAS-SN), which extends the observational baseline to the present day (MJD $\approx 58200$), and adopting a maximum likelihood method which searches for a periodic component in addition to stochastic quasar variability. When the ASAS-SN data are combined with the previous LINEAR+CRTS data, the evidence for periodicity decreases. For genuine periodicity one would expect that additional data would strengthen the evidence, so the decrease in significance may be an indication that the binary model is disfavored.
arxiv:1803.05448
We show that the minimal discrepancy of a point set in the $d$-dimensional unit cube with respect to Orlicz norms can exhibit both polynomial and weak tractability. In particular, we show that the $\psi_\alpha$-norms of exponential Orlicz spaces are polynomially tractable.
arxiv:1910.12571
Leveraging an established exercise in negotiation education, we build a novel dataset for studying how the use of language shapes bilateral bargaining. Our dataset extends existing work in two ways: 1) we recruit participants via behavioral labs instead of crowdsourcing platforms and allow participants to negotiate through audio, enabling more naturalistic interactions; 2) we add a control setting where participants negotiate only through alternating, written numeric offers. Despite the two contrasting forms of communication, we find that the average agreed prices of the two treatments are identical. But when subjects can talk, fewer offers are exchanged, negotiations finish faster, the likelihood of reaching agreement rises, and the variance of prices at which subjects agree drops substantially. We further propose a taxonomy of speech acts in negotiation and enrich the dataset with annotated speech acts. Our work also reveals linguistic signals that are predictive of negotiation outcomes.
arxiv:2306.07117
In this paper, we introduce two deterministic models aimed at capturing the dynamics of congested internet connections. The first is a continuous-time model that combines a system of differential equations with a sudden change in one of the state variables. The second is a discrete-time model with a time step that arises naturally from the system. Results from these models show good agreement with the well-known ns network simulator, better than the results of a previous, similar model. This is due in large part to the use of the sudden change to reflect the impact of lost data packets. We also discuss the potential use of this model in network traffic state estimation.
arxiv:cs/0409022
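The role of a sudden state change for lost packets can be illustrated with a generic additive-increase/multiplicative-decrease map (a textbook TCP caricature, not the paper's model; the capacity value is hypothetical):

```python
# Discrete-time congestion-window sketch: the window grows additively each
# round trip and is halved abruptly when it exceeds the buffer capacity,
# mimicking the sudden state change caused by a lost packet.
capacity = 40.0  # hypothetical link buffer limit (packets)

w = 1.0
trace = []
for _ in range(200):
    w += 1.0          # additive increase per round trip
    if w > capacity:
        w /= 2.0      # sudden multiplicative decrease on packet loss
    trace.append(w)
```

After the initial ramp-up, the window settles into the familiar sawtooth oscillation between roughly half the capacity and the capacity, which is the kind of behavior a deterministic model with a sudden state change can reproduce.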
Graph convolutional networks (GCNs) have been very successful in modeling non-Euclidean data structures, such as sequences of body skeletons forming actions modeled as spatio-temporal graphs. Most GCN-based action recognition methods use deep feed-forward networks with high computational complexity to process all skeletons in an action. This leads to a high number of floating point operations (ranging from 16G to 100G FLOPs) to process a single sample, making their adoption in restricted computation application scenarios infeasible. In this paper, we propose a temporal attention module (TAM) for increasing the efficiency of skeleton-based action recognition by selecting the most informative skeletons of an action at the early layers of the network. We incorporate the TAM into a light-weight GCN topology to further reduce the overall number of computations. Experimental results on two benchmark datasets show that the proposed method outperforms the baseline GCN-based method by a large margin while requiring 2.9 times fewer computations. Moreover, it performs on par with the state of the art with up to 9.6 times fewer computations.
arxiv:2010.12221
Elastic scattering in a quantum wire has several novel features not seen in 1D, 2D, or 3D. In this work we consider a single-channel quantum wire, as its application is inevitable in making devices based on quantum interference effects. We consider a point defect, i.e. a single delta function impurity, in such a wire and show how some of these novel features affect the Friedel sum rule (FSR) in a way that is quite unlike in 1D, 2D, and 3D.
arxiv:cond-mat/0210635
It is shown that a cluster expansion technique, which is usually applied in the high-temperature regime to calculate virial coefficients, can be applied to evaluate the superfluid transition temperature of the BCS-BEC crossover à la Lee and Yang. The transition temperature is identified with the emergence of the singularity in the sum of a certain infinite series of cluster functions. In the weak-coupling limit, we reproduce the Thouless criterion and the number equation of Nozières and Schmitt-Rink, and hence the transition temperature of the BCS theory. In the strong-coupling limit, we reproduce the transition temperature of Bose-Einstein condensation of non-interacting tightly bound dimers.
arxiv:1310.4665
We have investigated the structural dynamics in photoexcited 1,2-diiodotetrafluoroethane molecules (C2F4I2) in the gas phase, experimentally using ultrafast electron diffraction and theoretically using FOMO-CASCI excited state dynamics simulations. The molecules are excited by an ultraviolet femtosecond laser pulse to a state characterized by a transition from the iodine 5p orbital to a mixed 5p∥ hole and CF2 antibonding orbital, which results in the cleavage of one of the carbon-iodine bonds. We have observed, with sub-angstrom resolution, the motion of the nuclear wavepacket of the dissociating iodine atom, followed by coherent vibrations in the electronic ground state of the C2F4I radical. The radical reaches a stable classical (non-bridged) structure in less than 200 fs.
arxiv:1904.01515
In this paper, we solve the eigenvalue and eigenvector problem of the Bohr collective Hamiltonian for triaxial nuclei. The β part of the collective potential is taken to be equal to the Hulthén potential, while the γ part is defined by a new generalized potential obtained from a ring-shaped one. Analytical expressions for spectra and wave functions are derived by means of a recent version of the asymptotic iteration method and the usual approximations. The calculated energies and B(E2) transition rates are compared with experimental data and the available theoretical results in the literature.
arxiv:1510.04525
in 1981, the same year an academic society solely devoted to the topic was formed, the International Association for the Study of Popular Music. The association's founding was partly motivated by the interdisciplinary agenda of popular musicology, though the group has been characterized by a polarized 'musicological' and 'sociological' approach also typical of popular musicology.

=== Music theory, analysis and composition ===

Music theory is a field of study that describes the elements of music and includes the development and application of methods for composing and for analyzing music through both notation and, on occasion, musical sound itself. Broadly, theory may include any statement, belief or conception of or about music (Boretz, 1995). A person who studies or practices music theory is a music theorist. Some music theorists attempt to explain the techniques composers use by establishing rules and patterns. Others model the experience of listening to or performing music. Though extremely diverse in their interests and commitments, many Western music theorists are united in their belief that the acts of composing, performing and listening to music may be explicated to a high degree of detail (this, as opposed to a conception of musical expression as fundamentally ineffable except in musical sounds). Generally, works of music theory are both descriptive and prescriptive, attempting both to define practice and to influence later practice. Musicians study music theory to understand the structural relationships in the (nearly always notated) music. Composers study music theory to understand how to produce effects and structure their own works. Composers may study music theory to guide their precompositional and compositional decisions. Broadly speaking, music theory in the Western tradition focuses on harmony and counterpoint, and then uses these to explain large scale structure and the creation of melody.
=== Music psychology ===

Music psychology applies the content and methods of psychology to understand how music is created, perceived, responded to, and incorporated into individuals' and societies' daily lives. Its primary branches include cognitive musicology, which emphasizes the use of computational models for human musical abilities and cognition, and the cognitive neuroscience of music, which studies the way that music perception and production manifests in the brain using the methodologies of cognitive neuroscience. While aspects of the field can be highly theoretical, much of modern music psychology seeks to optimize the practices and professions of music performance, composition, education and therapy.

=== Performance practice and research ===

Performance practice draws on many of the tools of historical musicology to answer the specific question of how music was performed in various
https://en.wikipedia.org/wiki/Musicology
We present a simple but effective attention mechanism named unary-pairwise attention (UPA) for modeling the relationships between 3D point clouds. Our idea is motivated by the observation that standard self-attention (SA), which operates globally, tends to produce almost the same attention maps for different query positions, revealing difficulties in jointly learning query-independent and query-dependent information. Therefore, we reformulate SA and propose query-independent (unary) and query-dependent (pairwise) components to facilitate the learning of both terms. In contrast to SA, the UPA ensures query dependence by operating locally. Extensive experiments show that the UPA consistently outperforms SA on various point cloud understanding tasks, including shape classification, part segmentation, and scene segmentation. Moreover, simply equipping the popular PointNet++ method with the UPA even outperforms or is on par with state-of-the-art attention-based approaches. In addition, the UPA systematically boosts the performance of both standard and modern networks when it is integrated into them as a compositional module.
arxiv:2203.00172
For the large sparse systems of weakly nonlinear equations arising from the discretization of many classical differential and integral equations, this paper presents a class of synchronous parallel multi-splitting two-stage two-parameter over-relaxation (TOR) methods for obtaining their solutions on high-speed multiprocessor systems. Under suitable assumptions, we study the global convergence properties of these synchronous multi-splitting two-stage TOR methods.
arxiv:1212.4746
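The basic relaxation iteration that such multi-splitting two-stage TOR methods build on can be sketched serially: a plain nonlinear successive over-relaxation sweep on an assumed diagonally dominant test system $Ax = G(x)$, not the parallel two-stage scheme itself. The matrix, nonlinearity, and relaxation parameter below are all illustrative.

```python
import numpy as np

# Weakly nonlinear test system A x = G(x): A is tridiagonal and strictly
# diagonally dominant, G has a small Lipschitz constant (0.1).
n = 20
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
G = lambda x: 0.1 * np.sin(x) + 1.0

omega = 1.2            # illustrative over-relaxation parameter
x = np.zeros(n)
for _ in range(200):   # outer sweeps
    g = G(x)
    for i in range(n):
        # Gauss-Seidel-style update of component i, over-relaxed by omega.
        sigma = A[i] @ x - A[i, i] * x[i]
        x[i] = (1 - omega) * x[i] + omega * (g[i] - sigma) / A[i, i]
    # refresh the nonlinear right-hand side after each sweep
residual = np.linalg.norm(A @ x - G(x))
```

A multi-splitting method distributes such sweeps over several splittings of $A$ across processors and combines the results; the two-stage variant additionally solves each inner system only approximately.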
GK Per, a classical nova of 1901, is thought to undergo variable mass accretion onto a magnetized white dwarf (WD) in an intermediate polar (IP) system. We organized a multi-mission observational campaign in the X-ray and ultraviolet (UV) energy ranges during its dwarf nova (DN) outburst in 2015 March-April. Comparing data from quiescence and near outburst, we have found that the maximum plasma temperature decreased from about 26 keV to 16.2 +/- 0.4 keV. This is consistent with the previously proposed scenario of an increase in mass accretion rate while the inner radius of the magnetically disrupted accretion disc shrinks, thereby lowering the shock temperature. A NuSTAR observation also revealed a high-amplitude WD spin modulation of the very hard X-rays with a single-peaked profile, suggesting an obscuration of the lower accretion pole and an extended shock region on the WD surface. The X-ray spectrum of GK Per measured with the Swift X-ray Telescope varied on time-scales of days and also showed a gradual increase of the soft X-ray flux below 2 keV, accompanied by a decrease of the hard flux above 2 keV. In the Chandra observation with the High Energy Transmission Gratings, we detected prominent emission lines, especially of Ne, Mg and Si, where the ratios of H-like to He-like transitions for each element indicate a much lower temperature than the underlying continuum. We suggest that the X-ray emission in the 0.8-2 keV range originates from the magnetospheric boundary.
arxiv:1705.07707
Unlike the standard model, where neutrino masses can be made arbitrarily small, we find in the minimal left-right symmetric model that a Dirac-type Yukawa coupling $h_D \sim 10^{-4.2}$ for $\nu_\tau$ is generated from charged fermion Yukawa couplings through one-loop renormalization group (RG) running. Using the seesaw relation, this implies that the natural scale for the right-handed neutrino's Majorana mass (the seesaw scale) is $m_N = f v_R \gtrsim 2000$ TeV. If we take the $SU(2)_R \times U(1)_{B-L}$ breaking scale $v_R$ to be below 50 TeV, so that it can be probed by the LHC and a future collider, then the large $h_D$ generated by RG running increases the mass $m_\nu$ of the third-generation light neutrino and severely suppresses the PMNS mixing angle $\theta_{23}$. We discuss the tuning needed for seesaw scales that can be probed by colliders and find the parameter space that can be tested by neutrino experiments for higher scales up to $10^{15}$ GeV.
arxiv:1704.07249
Topological phase transitions go beyond Ginzburg and Landau's paradigm of spontaneous symmetry breaking and occur without an associated local order parameter. Instead, such transitions can be characterized by the emergence of non-local order parameters, which require measurements on extensively many particles simultaneously, an impossible venture in real materials. On the other hand, quantum simulators have demonstrated such measurements, making them prime candidates for an experimental confirmation of non-local topological order. Here, building upon recent advances in preparing few-particle fractional Chern insulators using ultracold atoms and photons, we propose a realistic scheme for detecting the hidden off-diagonal long-range order (HODLRO) characterizing Laughlin states. Furthermore, we demonstrate the existence of this hidden order in fractional Chern insulators, specifically for the $\nu = \frac{1}{2}$ Laughlin state in the isotropic Hofstadter-Bose-Hubbard model. This is achieved by large-scale numerical density matrix renormalization group (DMRG) simulations based on matrix product states, for which we formulate an efficient sampling procedure providing direct access to HODLRO, in close analogy to the proposed experimental scheme. We confirm the characteristic power-law scaling of HODLRO, with an exponent $\frac{1}{\nu} = 2$, and show that its detection requires only a few thousand snapshots. This makes our scheme realistically achievable with current technology and paves the way for further analysis of non-local topological orders, e.g. in topological states with non-Abelian anyonic excitations.
arxiv:2309.03666
We develop a uniform test for detecting and dating explosive behavior of a strictly stationary GARCH$(r,s)$ (generalized autoregressive conditional heteroskedasticity) process. Namely, we test the null hypothesis of a globally stable GARCH process with constant parameters against an alternative where there is an 'abnormal' period with changed parameter values. During this period, the change may lead to explosive behavior of the volatility process. It is assumed that both the magnitude and the timing of the breaks are unknown. We develop a double-supremum test for the existence of a break, and then provide an algorithm to identify the period of change. Our theoretical results hold under mild moment assumptions on the innovations of the GARCH process. Technically, the existing properties of the QMLE in the GARCH model need to be reinvestigated to hold uniformly over all possible periods of change. The key results involve a uniform weak Bahadur representation for the estimated parameters, which leads to weak convergence of the test statistic to the supremum of a Gaussian process. In simulations we show that the test has good size and power for reasonably large time series lengths. We apply the test to Apple asset returns and Bitcoin returns.
arxiv:1812.03475
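As a concrete picture of the alternative hypothesis being tested above, the following sketch simulates a GARCH(1,1) whose parameters switch to an explosive regime ($\alpha + \beta > 1$) during an 'abnormal' window. All parameter values and the window location are illustrative choices, not taken from the paper, which concerns testing for such breaks rather than simulating them.

```python
import numpy as np

def simulate_garch_with_break(n=2000, break_start=1000, break_len=200, seed=0):
    """Simulate a GARCH(1,1) whose parameters change during an 'abnormal' window.

    Baseline (omega, alpha, beta) = (0.1, 0.05, 0.90) is stationary;
    during the break, (alpha, beta) = (0.30, 0.75) gives alpha + beta > 1,
    an explosive volatility regime.  Illustrative values only.
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    x = np.empty(n)
    sigma2 = np.empty(n)
    sigma2[0] = 0.1 / (1 - 0.05 - 0.90)   # unconditional baseline variance
    x[0] = np.sqrt(sigma2[0]) * eps[0]
    for t in range(1, n):
        if break_start <= t < break_start + break_len:
            omega, alpha, beta = 0.1, 0.30, 0.75
        else:
            omega, alpha, beta = 0.1, 0.05, 0.90
        # GARCH(1,1) recursion for the conditional variance
        sigma2[t] = omega + alpha * x[t - 1] ** 2 + beta * sigma2[t - 1]
        x[t] = np.sqrt(sigma2[t]) * eps[t]
    return x, sigma2
```

Plotting `sigma2` makes the abnormal period visually obvious; a dating procedure like the paper's aims to recover the window boundaries from `x` alone.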
Orthoferrites are a class of magnetic materials with a magnetic ordering temperature above 600 K, predominant G-type antiferromagnetic ordering of the Fe-spin system and, depending on the rare-earth ion, a spin reorientation of the Fe spins taking place at lower temperatures. DyFeO3 is of particular interest since the spin reorientation is classified as a Morin transition, with the transition temperature depending strongly on the Dy-Fe interaction. Here, we report a detailed study of the magnetic and structural properties of microcrystalline DyFeO3 powder and a bulk single crystal using neutron diffraction and magnetometry between 1.5 and 450 K. We find that, while the magnetic properties of the single crystal are largely as expected, the powder shows strongly modified magnetic properties, including a modified spin reorientation and a smaller Dy-Fe interaction energy of the order of 10 $\mu$eV. Subtle structural differences between powder and single crystal show that they belong to distinct magnetic space groups. In addition, the Dy ordering at 2 K in the powder is incommensurate, with a modulation vector of 0.0173(5) c*, corresponding to a periodicity of ~58 unit cells.
arxiv:2207.06013
Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of rules for operation. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper we first review the developmental processes of the underlying mechanisms of these abilities: the sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and robotics models of the self, and we compare these models with their human counterparts. Finally, we analyse what is missing from these robotics models and propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents by developing sensory representations through self-exploration.
arxiv:2011.12860
In this article, we study the possibility of sustaining static and spherically symmetric traversable wormhole geometries admitting conformal motion in Einstein gravity, which presents a more systematic approach to searching for a relation between matter and geometry. In wormhole physics, the presence of exotic matter is a fundamental ingredient, and we show that this exotic source can be of dark energy type, which supports the existence of wormhole spacetimes. In this work we model a wormhole supported by dark energy which admits conformal motion. We also discuss the possibility of detecting wormholes in the outer regions of galactic halos by means of gravitational lensing. Studies of the total gravitational energy of the exotic matter inside a static wormhole configuration are also carried out.
arxiv:1612.04669
The $^{12}$C(p,2p)$^{11}$B reaction at $E_p = 98.7$ MeV proton beam energy is analyzed using a rigorous three-particle scattering formalism extended to include the internal excitation of the nuclear core or residual nucleus. The excitation proceeds via the core's interaction with any of the external nucleons. We assume the $^{11}$B ground and low-lying excited states [$\frac32^-$ (0.0 MeV), $\frac52^-$ (4.45 MeV), $\frac72^-$ (6.74 MeV)] and the excited states [$\frac12^-$ (2.12 MeV), $\frac32^-$ (5.02 MeV)] to be members of $K = \frac32^-$ and $K = \frac12^-$ rotational bands, respectively. The dynamical core excitation results in a significant cross section for the reaction leading to the $\frac52^-$ (4.45 MeV) excited state of $^{11}$B that cannot be populated through the single-particle excitation mechanism. The detailed agreement between the theoretical calculations and the data depends on the optical model parametrizations used and the kinematical configuration of the detected nucleons.
arxiv:2407.03800
We give a direct, non-abstract proof of the spectral mapping theorem for the Helffer-Sjöstrand functional calculus for linear operators on Banach spaces with real spectra, and consequently give a new non-abstract direct proof of the spectral mapping theorem for self-adjoint operators on Hilbert spaces. Our exposition is closer in spirit to the proof by explicit construction of the existence of the functional calculus given by Davies. We apply an extension theorem of Seeley to derive a functional calculus for semi-bounded operators.
arxiv:1001.0232
Transformers have revolutionized medical image restoration, but their quadratic complexity still poses limitations for application to high-resolution medical images. The recent advent of the receptance weighted key value (RWKV) model in natural language processing has attracted much attention due to its ability to process long sequences efficiently. To leverage its advanced design, we propose Restore-RWKV, the first RWKV-based model for medical image restoration. Since the original RWKV model is designed for 1D sequences, we make two necessary modifications for modeling spatial relations in 2D medical images. First, we present a recurrent WKV (Re-WKV) attention mechanism that captures global dependencies with linear computational complexity. Re-WKV incorporates bidirectional attention as the basis for a global receptive field and recurrent attention to effectively model 2D dependencies from various scan directions. Second, we develop an omnidirectional token shift (Omni-Shift) layer that enhances local dependencies by shifting tokens from all directions and across a wide context range. These adaptations make the proposed Restore-RWKV an efficient and effective model for medical image restoration. Even a lightweight variant of Restore-RWKV, with only 1.16 million parameters, achieves comparable or even superior results compared to existing state-of-the-art (SOTA) methods. Extensive experiments demonstrate that the resulting Restore-RWKV achieves SOTA performance across a range of medical image restoration tasks, including PET image synthesis, CT image denoising, MRI image super-resolution, and all-in-one medical image restoration. Code is available at: https://github.com/yaziwel/restore-rwkv.
arxiv:2407.11087
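The token-shift idea above can be illustrated with a few lines of NumPy. The sketch below is a simplified rendition of an omnidirectional shift, not the paper's exact Omni-Shift layer: a quarter of the channels is pulled from each of the four neighbouring directions by one pixel, so every spatial position mixes in local context from all sides. The function name and the single-pixel shift range are my own illustrative choices.

```python
import numpy as np

def omni_shift(x, frac=0.25):
    """Toy omnidirectional token shift on an (H, W, C) feature map.

    One channel group per direction (up, down, left, right) is shifted
    by one pixel, zero-padded at the border.  Simplified sketch of the
    idea; the real layer uses learnable shifts over a wider range.
    """
    H, W, C = x.shape
    out = x.copy()
    c = int(C * frac)
    # group 0: shift down, so each token sees its upper neighbour
    out[1:, :, 0:c] = x[:-1, :, 0:c];        out[0, :, 0:c] = 0
    # group 1: shift up
    out[:-1, :, c:2 * c] = x[1:, :, c:2 * c]; out[-1, :, c:2 * c] = 0
    # group 2: shift right
    out[:, 1:, 2 * c:3 * c] = x[:, :-1, 2 * c:3 * c]; out[:, 0, 2 * c:3 * c] = 0
    # group 3: shift left
    out[:, :-1, 3 * c:4 * c] = x[:, 1:, 3 * c:4 * c]; out[:, -1, 3 * c:4 * c] = 0
    return out
```

Applied before a 1D sequence model such as RWKV, this kind of shift injects 2D locality that the sequence scan alone would miss.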
We express the Einstein-Vlasov system in spherical symmetry in terms of a dimensionless momentum variable $z$ (radial over angular momentum). This regularises the limit of massless particles, and in that limit allows us to obtain a reduced system in the independent variables $(t,r,z)$ only. Similarly, in this limit the Vlasov density function $f$ for static solutions depends on a single variable $q$ (energy over angular momentum). This reduction allows us to show that any given static metric which has vanishing Ricci scalar, is vacuum at the centre and for $r > 3M$, and obeys certain energy conditions uniquely determines a consistent $f = \bar k(q)$ (in closed form). Vice versa, any $\bar k(q)$ within a certain class uniquely determines a static metric (as the solution of a system of two first-order quasilinear ODEs). Hence the space of static spherically symmetric solutions of Einstein-Vlasov is locally a space of functions of one variable. For a simple 2-parameter family of functions $\bar k(q)$, we construct the corresponding static spherically symmetric solutions, finding that their compactness is in the interval $0.7 \lesssim \max_r(2m/r) \le 8/9$. This class of static solutions includes one that agrees with the approximately universal type-I critical solution recently found by Akbarian and Choptuik (AC) in numerical time evolutions. We speculate on what singles it out as the critical solution found by fine-tuning generic data to the collapse threshold, given that AC also found that {\em all} static solutions are one-parameter unstable and sit on the threshold of collapse.
arxiv:1610.08908
of Karl Taylor Compton (1930–1948), James Rhyne Killian (1948–1957), and chancellor Julius Adams Stratton (1952–1957), whose institution-building strategies shaped the expanding university. By the 1950s, MIT no longer simply benefited the industries with which it had worked for three decades, and it had developed closer working relationships with new patrons, philanthropic foundations and the federal government. In the late 1960s and early 1970s, student and faculty activists protested against the Vietnam War and MIT's defense research. In this period MIT's various departments were researching helicopters, smart bombs and counterinsurgency techniques for the war in Vietnam as well as guidance systems for nuclear missiles. The Union of Concerned Scientists was founded on March 4, 1969 during a meeting of faculty members and students seeking to shift the emphasis on military research toward environmental and social problems. MIT ultimately divested itself from the Instrumentation Laboratory and moved all classified research off-campus to the MIT Lincoln Laboratory facility in 1973 in response to the protests. The student body, faculty, and administration remained comparatively unpolarized during what was a tumultuous time for many other universities. Johnson was seen to be highly successful in leading his institution to "greater strength and unity" after these times of turmoil. However, six MIT students were sentenced to prison terms at this time and some former student leaders, such as Michael Albert and George Katsiaficas, are still indignant about MIT's role in military research and its suppression of these protests. (Richard Leacock's film, November Actions, records some of these tumultuous events.) In the 1980s, there was more controversy at MIT over its involvement in SDI (space weaponry) and CBW (chemical and biological warfare) research.
More recently, MIT's research for the military has included work on robots, drones and 'battle suits'. === Recent history === MIT has kept pace with and helped to advance the digital age. In addition to developing the predecessors to modern computing and networking technologies, students, staff, and faculty members at Project MAC, the Artificial Intelligence Laboratory, and the Tech Model Railroad Club wrote some of the earliest interactive computer video games like Spacewar! and created much of modern hacker slang and culture. Several major computer-related organizations have originated at MIT since the 1980s: Richard Stallman's GNU Project and the subsequent Free Software Foundation were founded in the mid-1980s at the AI Lab; the MIT Media Lab was founded in 1985 by Nicholas Negroponte
https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology
We graft synchronization onto Girard's geometry of interaction in its most concrete form, namely token machines. This is realized by introducing proof-nets for SMLL, an extension of multiplicative linear logic with a specific construct modeling synchronization points, and a multi-token abstract machine model for it. Interestingly, the correctness criterion ensures the absence of deadlocks along reduction and in the underlying machine, in this way linking logical and operational properties.
arxiv:1405.3427
We present a description of maximal partial ovoids of size $q^2 - 1$ of the parabolic quadric $Q(4,q)$ as sharply transitive subsets of $SL(2,q)$ and show their connection with spread sets. This representation leads to an elegant explicit description of all known examples. We also give an alternative representation of these examples which is related to root systems.
arxiv:1201.5967
We use a computer algebra system to compute, in an efficient way, optimal control variational symmetries up to a gauge term. The symmetries are then used to obtain families of Noether's first integrals, possibly in the presence of nonconservative external forces. As an application, we obtain eight independent first integrals for the sub-Riemannian nilpotent problem (2, 3, 5, 8).
arxiv:math/0604072
We report a quantum teleportation experiment in which nonlinear interactions are used for the Bell state measurements. The experimental results demonstrate the working principle of irreversibly teleporting an unknown arbitrary quantum state from one system to another distant system by disassembling it into, and then later reconstructing it from, purely classical information and nonclassical EPR correlations. The distinct feature of this experiment is that \emph{all} four Bell states can be distinguished in the Bell state measurement. Teleportation of a quantum state can thus occur with certainty in principle.
arxiv:quant-ph/0010046
Let $a$ and $b$ be relatively prime positive integers. In this paper the weighted sum $\sum_{n \in {\rm NR}(a,b)} \lambda^{n-1} n^m$ is given explicitly or in terms of the Apostol-Bernoulli numbers, where $m$ is a nonnegative integer and ${\rm NR}(a,b)$ denotes the set of positive integers nonrepresentable in terms of $a$ and $b$.
arxiv:2105.08274
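The quantity studied above is easy to check numerically for small parameters: since for coprime $a, b$ every integer above the Frobenius number $ab - a - b$ is representable, a finite brute-force search finds ${\rm NR}(a,b)$ exactly. The sketch below computes the weighted sum directly (the function name is my own; the paper gives closed forms, which this could be used to verify).

```python
from math import gcd

def weighted_nr_sum(a, b, lam=1.0, m=0):
    """Brute-force the weighted sum  sum_{n in NR(a,b)} lam**(n-1) * n**m.

    NR(a,b) is the set of positive integers not representable as
    a*x + b*y with x, y >= 0.  For coprime a, b, all integers above
    the Frobenius number a*b - a - b are representable, so it suffices
    to search up to that bound.
    """
    assert gcd(a, b) == 1, "a and b must be coprime"
    frob = a * b - a - b
    representable = set()
    for x in range(b + 1):
        for y in range(a + 1):
            n = a * x + b * y
            if n <= frob:
                representable.add(n)
    nr = [n for n in range(1, frob + 1) if n not in representable]
    return nr, sum(lam ** (n - 1) * n ** m for n in nr)
```

For example, ${\rm NR}(3,5) = \{1, 2, 4, 7\}$, and with $\lambda = 1$, $m = 0$ the sum recovers Sylvester's count $(a-1)(b-1)/2$ of nonrepresentable integers.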
Zeckendorf's theorem states that every positive integer has a unique decomposition as a sum of non-adjacent Fibonacci numbers. This result has been generalized to many sequences $\{a_n\}$ arising from an integer positive linear recurrence, each of which has a corresponding notion of a legal decomposition. Previous work proved that the number of summands in decompositions of $m \in [a_n, a_{n+1})$ becomes normally distributed as $n \to \infty$, and the individual gap measures associated to each $m$ converge to geometric random variables, when the leading coefficient in the recurrence is positive. We explore what happens when this assumption is removed in two special sequences. In one we regain all previous results, including unique decomposition; in the other the number of legal decompositions grows exponentially and the natural choice for the legal decomposition (the greedy algorithm) only works approximately 92.6% of the time (though a slight modification always works). We find a connection between the two sequences, which explains why the distribution of the number of summands and the gaps between summands behave the same in the two examples. In the course of our investigations we found a new perspective on dealing with roots of the polynomials associated to the characteristic polynomials. This allows us to remove the need for the detailed technical analysis of their properties, which greatly complicated the proofs of many earlier results in the subject, as well as handle new cases beyond the reach of existing techniques.
arxiv:1606.09309
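The greedy algorithm mentioned above is the classical route to the Zeckendorf decomposition itself, where it always succeeds; the abstract's point is that for some other recurrences it can fail. A minimal sketch of the greedy step for the Fibonacci case (using the convention $F_1 = 1$, $F_2 = 2$, so each value appears once):

```python
def zeckendorf(n):
    """Greedy Zeckendorf decomposition of a positive integer n:
    repeatedly subtract the largest Fibonacci number that fits.
    For the Fibonacci sequence this always yields the unique legal
    (non-adjacent) decomposition."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts
```

For instance, `zeckendorf(100)` gives `[89, 8, 3]`, whose summands are pairwise non-adjacent in the Fibonacci sequence.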
We introduce transverse ferromagnetic interactions, in addition to a simple transverse field, into quantum annealing of the random-field Ising model to accelerate convergence toward the target ground state. The conventional approach using only the transverse-field term is known to be plagued by slow convergence when the true ground state has strong ferromagnetic characteristics for the random-field Ising model. The transverse ferromagnetic interactions are shown to improve the performance significantly in such cases. This conclusion is drawn from analyses of the energy eigenvalues of instantaneous stationary states as well as from the very fast algorithm of Bethe-type mean-field annealing adapted to quantum systems. The present study highlights the importance of a flexible choice of the type of quantum fluctuations to achieve the best possible performance in quantum annealing. The existence of such flexibility is an outstanding advantage of quantum annealing over simulated annealing.
arxiv:quant-ph/0702214
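The "energy eigenvalues of instantaneous stationary states" analysis mentioned above can be illustrated by exact diagonalization of a tiny system. The sketch below builds the standard transverse-field-only annealing Hamiltonian $H(s) = (1-s)H_{\rm driver} + sH_{\rm problem}$ for a four-spin random-field Ising chain and reports the minimum spectral gap along the schedule; it is a generic toy, without the paper's extra transverse ferromagnetic term, and all parameter values are illustrative.

```python
import numpy as np

def annealing_gap(h_fields, J=1.0, steps=21):
    """Minimum gap between ground and first excited state of
    H(s) = (1-s) * H_driver + s * H_problem, with
    H_driver  = -sum_i sigma^x_i  and
    H_problem = -sum_i h_i sigma^z_i - J sum_i sigma^z_i sigma^z_{i+1}."""
    n = len(h_fields)
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])
    I2 = np.eye(2)

    def op(single, site):
        # embed a single-site operator into the n-spin Hilbert space
        m = np.array([[1.]])
        for k in range(n):
            m = np.kron(m, single if k == site else I2)
        return m

    H_driver = -sum(op(sx, i) for i in range(n))
    H_problem = -sum(h * op(sz, i) for i, h in enumerate(h_fields))
    H_problem = H_problem - J * sum(op(sz, i) @ op(sz, i + 1)
                                    for i in range(n - 1))
    gaps = []
    for s in np.linspace(0., 1., steps):
        w = np.linalg.eigvalsh((1 - s) * H_driver + s * H_problem)
        gaps.append(w[1] - w[0])
    return min(gaps)
```

A small minimum gap signals slow convergence; comparing this quantity with and without additional fluctuation terms is the kind of diagnostic the abstract describes.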
We construct a new order parameter for finite-temperature QCD by considering the quark condensate for U(1)-valued temporal boundary conditions for the fermions. Fourier transformation with respect to the boundary condition defines the dual condensate. This quantity corresponds to an equivalence class of Polyakov loops, thereby being an order parameter for the center symmetry. We explore the duality relation between the quark condensate and these dressed Polyakov loops numerically, using quenched lattice QCD configurations below and above the QCD phase transition. It is demonstrated that the Dirac spectrum responds differently to changing the boundary condition, in a manner that reproduces the expected Polyakov loop pattern. We find the dressed Polyakov loops to be dominated by the lowest Dirac modes, in contrast to the thin Polyakov loops investigated earlier.
arxiv:0801.4051
Let $P(S)$ be the space of projective structures on a closed surface $S$ of genus $g > 1$ and let $Q(S)$ be the subset of $P(S)$ of projective structures with quasifuchsian holonomy. It is known that $Q(S)$ consists of infinitely many connected components. In this paper, we will show that the closure of any exotic component of $Q(S)$ is not a topological manifold with boundary, and that any two components of $Q(S)$ have intersecting closures.
arxiv:math/0603074
We study graph-based Laplacian semi-supervised learning at low labeling rates. Laplacian learning uses harmonic extension on a graph to propagate labels. At very low label rates, Laplacian learning becomes degenerate and the solution is roughly constant with spikes at each labeled data point. Previous work has shown that this degeneracy occurs when the number of labeled data points is finite while the number of unlabeled data points tends to infinity. In this work we allow the number of labeled data points to grow to infinity with the number of unlabeled data points. Our results show that for a random geometric graph with length scale $\varepsilon > 0$ and labeling rate $\beta > 0$, if $\beta \ll \varepsilon^2$ then the solution becomes degenerate and spikes form, and if $\beta \gg \varepsilon^2$ then Laplacian learning is well-posed and consistent with a continuum Laplace equation. Furthermore, in the well-posed setting we prove quantitative error estimates of $O(\varepsilon\beta^{-1/2})$ for the difference between the solutions of the discrete problem and the continuum PDE, up to logarithmic factors. We also study $p$-Laplacian regularization and show the same degeneracy result when $\beta \ll \varepsilon^p$. The proofs of our well-posedness results use the random walk interpretation of Laplacian learning and PDE arguments, while the proofs of the ill-posedness results use $\Gamma$-convergence tools from the calculus of variations. We also present numerical results on synthetic and real data to illustrate our results.
arxiv:2006.02765
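The harmonic extension at the heart of Laplacian learning is a small linear solve: fix the labels on labeled nodes and require the solution to be Laplacian-harmonic on the rest. A minimal dense-linear-algebra sketch (the function name and dense formulation are illustrative; practical implementations use sparse solvers):

```python
import numpy as np

def harmonic_extension(W, labels):
    """Propagate labels by harmonic extension on a weighted graph.

    W      : (n, n) symmetric weight matrix.
    labels : dict {node index: label value} for the labeled nodes.
    Solves L u = 0 on unlabeled nodes with u fixed on labeled nodes,
    where L = D - W is the (unnormalized) graph Laplacian.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    lab = sorted(labels)
    unl = [i for i in range(n) if i not in labels]
    y = np.array([labels[i] for i in lab], dtype=float)
    u = np.empty(n)
    u[lab] = y
    # block system: L_uu u_u = -L_ul y
    u[unl] = np.linalg.solve(L[np.ix_(unl, unl)], -L[np.ix_(unl, lab)] @ y)
    return u
```

On a path graph with the two endpoints labeled 0 and 1, the harmonic extension interpolates linearly; the degeneracy the abstract analyzes is what happens when very few labels are placed in a large random geometric graph.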
Estimating the 3D structure of the drivable surface and surrounding environment is a crucial task for assisted and autonomous driving. It is commonly solved either by using 3D sensors such as LiDAR or by directly predicting the depth of points via deep learning. However, the former is expensive, and the latter lacks the use of geometry information for the scene. In this paper, instead of following existing methodologies, we propose the Road Planar Parallax Attention Network (RPANet), a new deep neural network for 3D sensing from monocular image sequences based on planar parallax, which takes full advantage of the omnipresent road plane geometry in driving scenes. RPANet takes a pair of images aligned by the homography of the road plane as input and outputs a $\gamma$ map (the ratio of height to depth) for 3D reconstruction. The $\gamma$ map has the potential to construct a two-dimensional transformation between two consecutive frames. It implies planar parallax and can be combined with the road plane serving as a reference to estimate the 3D structure by warping the consecutive frames. Furthermore, we introduce a novel cross-attention module to make the network better perceive the displacements caused by planar parallax. To verify the effectiveness of our method, we sample data from the Waymo Open Dataset and construct annotations related to planar parallax. Comprehensive experiments are conducted on the sampled dataset to demonstrate the 3D reconstruction accuracy of our approach in challenging scenarios.
arxiv:2111.11089
Universal adversarial perturbations are image-agnostic and model-independent noise that, when added to any image, can mislead trained deep convolutional neural networks into the wrong prediction. Since these universal adversarial perturbations can seriously jeopardize the security and integrity of practical deep learning applications, existing techniques use additional neural networks to detect the existence of these noises at the input image source. In this paper, we demonstrate an attack strategy that, when activated by rogue means (e.g., malware, a trojan), can bypass these existing countermeasures by injecting the adversarial noise at the AI hardware accelerator stage. We demonstrate the accelerator-level universal adversarial noise attack on several deep learning models using co-simulation of the software kernel of the Conv2D function and the Verilog RTL model of the hardware under the FuseSoC environment.
arxiv:2111.09488
In multi-task learning (MTL), gradient balancing has recently attracted more research interest than loss balancing since it often leads to better performance. However, loss balancing is much more efficient than gradient balancing, and thus it is still worth further exploration in MTL. Note that prior studies typically ignore that there exist varying improvable gaps across multiple tasks, where the improvable gap per task is defined as the distance between the current training progress and the desired final training progress. Therefore, after loss balancing, performance imbalance still arises in many cases. In this paper, following the loss balancing framework, we propose two novel improvable gap balancing (IGB) algorithms for MTL: one takes a simple heuristic, and the other (for the first time) deploys deep reinforcement learning for MTL. In particular, instead of directly balancing the losses in MTL, both algorithms choose to dynamically assign task weights for improvable gap balancing. Moreover, we combine IGB and gradient balancing to show the complementarity between the two types of algorithms. Extensive experiments on two benchmark datasets demonstrate that our IGB algorithms lead to the best results in MTL via loss balancing and achieve further improvements when combined with gradient balancing. Code is available at https://github.com/yanqidai/igb4mtl.
arxiv:2307.15429
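The gap-based weighting idea above can be sketched in a few lines. This is a toy rendition of the heuristic variant under my own assumptions (softmax over clipped gaps with a temperature knob), not the paper's exact IGB algorithm or its reinforcement-learning version: tasks whose current loss is furthest from its desired final value receive larger weights.

```python
import numpy as np

def gap_balanced_weights(current_losses, target_losses, temperature=1.0):
    """Assign task weights proportional to each task's improvable gap,
    i.e. how far its current loss is from the desired final loss.
    Illustrative heuristic, not the paper's exact scheme."""
    gaps = np.asarray(current_losses, float) - np.asarray(target_losses, float)
    gaps = np.maximum(gaps, 0.0)       # tasks at or below target need no push
    z = gaps / temperature
    w = np.exp(z - z.max())            # numerically stable softmax
    return w / w.sum()
```

The weighted training loss would then be `sum(w[i] * loss[i])`, recomputing `w` each step as the gaps shrink.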
After the COVID-19 pandemic, we saw an increase in demand for epidemiological mathematical models. The goal of this work is to study optimal control for an age-structured model as a strategy of quarantining infected people, which is done via Pontryagin's maximum principle. Since quarantine campaigns are not just a matter of public health, also posing economic challenges, the optimal control problem does not simply minimize the number of infected individuals. Instead, it jointly minimizes this number and the economic costs associated with the campaigns, providing data that can help authorities make decisions when dealing with epidemics. The controls are the quarantine entrance parameters, which are numerically calculated for different lengths of isolation. The best strategies give a calendar that indicates when the isolation measures can be relaxed, and the consequences of a delay in the start of the quarantine are analyzed by presenting the reduction in the number of deaths for the strategy with optimal control compared to a no-quarantine landscape.
arxiv:2411.00312
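The role of the quarantine-entrance control can be seen even in a bare SIR model without age structure. The sketch below, a simplified stand-in for the abstract's age-structured optimal-control setup, adds a constant quarantine rate `q` to the removal term and integrates with forward Euler; all parameter values are illustrative.

```python
def sir_with_quarantine(q, beta=0.3, gamma=0.1, days=160, dt=0.5, i0=1e-3):
    """Peak infected fraction of a forward-Euler SIR model in which
    infected individuals additionally enter quarantine at rate q
    (held constant here; in the paper q is the optimized control)."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i
        ds = -new_inf
        di = new_inf - (gamma + q) * i   # recovery plus removal by quarantine
        dr = (gamma + q) * i
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        peak = max(peak, i)
    return peak
```

Raising `q` lowers the effective reproduction number from $\beta/\gamma$ to $\beta/(\gamma+q)$ and flattens the epidemic peak; the optimal-control problem balances this benefit against the economic cost of keeping `q` high.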
Let $X_1, X_2, \dots$ be a short-memory linear process of random variables. For $1 \leq q < 2$, let $\mathcal{F}$ be a bounded set of real-valued functions on $[0,1]$ with finite $q$-variation. It is proved that $\{n^{-1/2}\sum_{i=1}^n X_i f(i/n)\colon\, f \in \mathcal{F}\}$ converges in outer distribution in the Banach space of bounded functions on $\mathcal{F}$ as $n \to \infty$. Several applications to a regression model and a multiple change point model are given.
arxiv:1909.11434
Amyotrophic lateral sclerosis (ALS), a progressive neuromuscular degenerative disease, severely restricts patient communication capacity within a few years of onset, resulting in a significant deterioration of quality of life. The P300 speller brain-computer interface (BCI) offers an alternative communication medium by leveraging a subject's EEG response to characters traditionally highlighted on a character grid on a graphical user interface (GUI). A recurring theme in P300-based research is enhancing performance to enable faster subject interaction. This study builds on that theme by addressing key limitations, particularly in the training of multi-subject classifiers, and by integrating advanced language models to optimize stimulus presentation and word prediction, thereby improving communication efficiency. Furthermore, various advanced large language models, such as the generative pre-trained transformer (GPT2), BERT, and BART, alongside Dijkstra's algorithm, are utilized to optimize stimuli and provide word completion choices based on the spelling history. In addition, a multi-layered smoothing approach is applied to allow for out-of-vocabulary (OOV) words. By conducting extensive simulations based on randomly sampled EEG data from subjects, we show substantial speed improvements in typing passages that include rare and out-of-vocabulary (OOV) words, with the extent of improvement varying depending on the language model utilized. The gains through such character-level interface optimizations are approximately 10%, and GPT2 for multi-word prediction provides gains of around 40%. In particular, some large language models achieve performance levels within 10% of the theoretical performance limits established in this study. In addition, training techniques both within and across subjects are explored, and the speed improvements are shown to hold in both cases.
arxiv:2410.15161
The space $\mathcal{D}'_\Lambda$ of distributions having their $C^\infty$ wavefront set in a cone $\Lambda$ has become important in physics because of its role in the formulation of quantum field theory in curved spacetime. It is also a basic object in microlocal analysis, but not well studied from a functional analytic viewpoint. In order to compute its completion in the open cone case, we introduce generalized spaces $\mathcal{D}'_{\Gamma,\Lambda}$ where we also control the union of $H^s$-wavefront sets in a second cone $\Gamma$ contained in $\Lambda$. We can compute bornological and topological duals, completions and bornologifications of natural topologies for spaces in this class. All our topologies are nuclear, ultrabornological when bornological, and we can describe when they are quasi-LB. We also give concrete microlocal representations of bounded and equicontinuous sets in those spaces and work with general support conditions, including future-compact or space-compact support conditions on globally hyperbolic manifolds, as motivated by physics applications to be developed in a second paper.
arxiv:1411.3012
Digital systems find it challenging to keep up with cybersecurity threats. The daily emergence of more than 560,000 new malware strains poses significant hazards to the digital ecosystem. Traditional malware detection methods fail to operate properly and yield high false positive rates with low accuracy of the protection system. This study explores the ways in which malware can be detected using machine learning (ML) and deep learning (DL) approaches to address those shortcomings. It also includes a systematic comparison of the performance of some widely used ML models, such as random forest, multi-layer perceptron (MLP), and deep neural network (DNN), for determining their effectiveness against modern malware threats. We use a considerable-sized database from Kaggle, which has undergone optimized feature selection and preprocessing to improve model performance. Our findings suggest that the DNN model outperformed the other traditional models, with the highest training accuracy of 99.92% and an almost perfect AUC score. Furthermore, feature selection and preprocessing can help improve detection capabilities. This research makes an important contribution by analyzing the models on the chosen performance metrics and providing insight into the effectiveness of advanced detection techniques for building more robust and more reliable cybersecurity solutions against growing malware threats.
arxiv:2504.17930
Adding an extra singlet scalar $S$ to the Higgs sector can provide a barrier at tree level between a false vacuum with restored electroweak symmetry and the true one. This has been demonstrated to readily give a strong phase transition, as required for electroweak baryogenesis. We show that with the addition of a fermionic dark matter particle $\chi$ coupling to $S$, a simple UV-complete model can realize successful electroweak baryogenesis. The dark matter gets a CP asymmetry that is transferred to the standard model through a {\em CP portal interaction}, which we take to be a coupling of $\chi$ to $\tau$ leptons and an inert Higgs doublet. The CP asymmetry induced in left-handed $\tau$ leptons biases sphalerons to produce the baryon asymmetry. The model has promising discovery potential at the LHC, while robustly providing a large enough baryon asymmetry and the correct dark matter relic density with reasonable values of the couplings.
arxiv:1702.08909
0| $= 7.0 \times 10^{-4}$) at 298 K. Within this $T$-interval, two ranges of $J^\prime_{\mathrm{eff}}$ with linear temperature variation but different slopes, with a kink at $\sim$80 K, are observed and discussed. This $T$-dependence arises from the growing population of the triplet state, and its relevance to the properties of various arrays of DUs is discussed. Our experimental procedures and results are compared with those of previous works.
arxiv:1704.06642
a currently discussed interpretation of quantum theory, time - symmetrized quantum theory, makes certain claims about the properties of systems between pre - and post - selection measurements. these claims are based on a counterfactual usage of the aharonov - bergmann - lebowitz ( abl ) rule for calculating the probabilities of measurement outcomes between such measurements. it has been argued by several authors that the counterfactual usage of the abl rule is, in general, incorrect. this paper examines what might appear to be a loophole in those arguments and shows that this apparent loophole cannot be used to support a counterfactual interpretation of the abl rule. it is noted that the invalidity of the counterfactual usage of the abl rule implies that the characterization of those outcomes receiving probability 1 in a counterfactual application of the rule as ` elements of reality ' is, in general, unfounded.
arxiv:quant-ph/9807015
temporal point processes ( tpps ) are often used to represent the sequence of events ordered as per the time of occurrence. owing to their flexible nature, tpps have been used to model different scenarios and have shown applicability in various real - world applications. while tpps focus on modeling the event occurrence, marked temporal point process ( mtpp ) focuses on modeling the category / class of the event as well ( termed as the marker ). research in mtpp has garnered substantial attention over the past few years, with an extensive focus on supervised algorithms. despite the research focus, limited attention has been given to the challenging problem of developing solutions in semi - supervised settings, where algorithms have access to a mix of labeled and unlabeled data. this research proposes a novel algorithm for semi - supervised learning for marked temporal point processes ( ssl - mtpp ) applicable in such scenarios. the proposed ssl - mtpp algorithm utilizes a combination of labeled and unlabeled data for learning a robust marker prediction model. the proposed algorithm utilizes an rnn - based encoder - decoder module for learning effective representations of the time sequence. the efficacy of the proposed algorithm has been demonstrated via multiple protocols on the retweet dataset, where the proposed ssl - mtpp demonstrates improved performance in comparison to the traditional supervised learning approach.
arxiv:2107.07729
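as a point of reference for what an mtpp models, here is a minimal simulation of the simplest marked process, a homogeneous poisson process with i.i.d. categorical marks; real mtpp models ( and ssl - mtpp itself ) make the intensity and marks history - dependent, which this sketch deliberately omits:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_mtpp(rate, mark_probs, t_max):
    # homogeneous poisson process with i.i.d. categorical marks --
    # the simplest marked temporal point process (no history dependence)
    times, marks = [], []
    t = 0.0
    while True:
        t += rng.exponential(1.0 / rate)   # exponential inter-event times
        if t > t_max:
            break
        times.append(t)
        marks.append(rng.choice(len(mark_probs), p=mark_probs))
    return np.array(times), np.array(marks)

times, marks = simulate_mtpp(rate=2.0, mark_probs=[0.6, 0.3, 0.1], t_max=50.0)
print(len(times), times[:3], marks[:3])
```

an mtpp learner is then handed exactly such (time, mark) sequences and asked to predict the next event's time and category.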
the development of 3 - dimensional environments to be used within a biomechanical physics simulation framework, such as articulated total body, can be laborious and time intensive. this brief article demonstrates how the aras 360 software package can aid the user by speeding up development time.
arxiv:1405.2063
that for any positive integer $ m \ geq 4 $ there exists an integer $ n ( m ) $ such that for $ n \ geq n ( m ) $, the polynomials $ \ sum _ { k = 0 } ^ m { m \ choose k } p ( n + k ) x ^ k $ have only real zeros. this conjecture was independently posed by ono.
arxiv:1706.10245
spatio - temporal network dynamics is an emergent property of many complex systems which remains poorly understood. we suggest a new approach to its study based on the analysis of dynamical motifs, small subnetworks with periodic and chaotic dynamics. we simulate randomly connected neural networks and, with increasing density of connections, observe the transition from quiescence to periodic and chaotic dynamics. we explain this transition by the appearance of dynamical motifs in the structure of these networks. we also observe domination of periodic dynamics in simulations of spatially distributed networks with local connectivity and explain it by the absence of chaotic and presence of periodic motifs in their structure.
arxiv:cond-mat/0311330
recent progress has been made in establishing normal approximation bounds in terms of the wasserstein - $ p $ distance for i. i. d. and locally dependent random variables. however, for $ p > 1 $, no such results have been demonstrated for dependent variables under $ \ alpha $ - mixing conditions. in this paper, we extend the wasserstein - $ p $ bounds to $ \ alpha $ - mixing random fields. we show that, under appropriate conditions, the rescaled average of random fields converges to the standard normal distribution in the wasserstein - $ p $ distance at a rate of $ o ( | t | ^ { - \ beta } ) $, where $ | t | $ is the size of the index set, and $ \ beta \ in ( 0, 1 / 2 ] $ depends on $ p $, the dimension $ d $ of the random fields, and the decay rate of the $ \ alpha $ - mixing coefficients. notably, $ \ beta = 1 / 2 $ is achievable if the mixing coefficients decay at a sufficiently fast polynomial rate. our results are derived through a carefully constructed cumulant - based edgeworth expansion and an adaptation of recent developments in stein ' s method. additionally, we introduce a novel constructive graph approach that leverages combinatorial techniques to establish the desired expansion for general dependent variables.
arxiv:2309.07031
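the flavor of the result can be checked numerically: below, a minimal sketch compares the rescaled average of an ar(1) sequence ( a simple alpha - mixing process ) with the standard normal in empirical wasserstein - 1 distance; the process and sample sizes are illustrative choices, not those of the paper:

```python
import numpy as np

def empirical_w1(x, y):
    # wasserstein-1 distance between two equal-size empirical measures:
    # mean absolute difference of sorted samples (order-statistics coupling)
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

rng = np.random.default_rng(0)

def ar1_mean(n, phi=0.5):
    # rescaled average sqrt(n) * mean of an ar(1) sequence,
    # a textbook example of an alpha-mixing process
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x.mean() * np.sqrt(n)

means = np.array([ar1_mean(500) for _ in range(2000)])
z = (means - means.mean()) / means.std()          # standardize empirically
w1 = empirical_w1(z, rng.standard_normal(2000))   # compare with n(0,1) draws
print(w1)  # small: the clt holds despite the dependence
```

the paper's contribution is the rate at which such a distance shrinks with the index-set size, not the mere convergence illustrated here.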
we present the first estimate of age, stellar metallicity and chemical abundance ratios, for an individual early - type galaxy at high - redshift ( z = 1. 426 ) in the cosmos field. our analysis is based on observations obtained with the x - shooter instrument at the vlt, which cover the visual and near infrared spectrum at high ( r > 5000 ) spectral resolution. we measure the values of several spectral absorptions tracing chemical species, in particular magnesium and iron, besides determining the age - sensitive d4000 break. we compare the measured indices to stellar population models, finding good agreement. we find that our target is an old ( t > 3 gyr ), high - metallicity ( [ z / h ] > 0. 5 ) galaxy which formed its stars at z _ { form } > 5 within a short time scale ~ 0. 1 gyr, as testified by the strong [ \ alpha / fe ] ratio ( > 0. 4 ), and has passively evolved in the first > 3 - 4 gyr of its life. we have verified that this result is robust against the choice and number of fitted spectral features, and stellar population model. the result of an old age and high - metallicity has important implications for galaxy formation and evolution confirming an early and rapid formation of the most massive galaxies in the universe.
arxiv:1509.04000
we investigate the effects of strong color fields and of the associated enhanced intrinsic transverse momenta on the phi - meson production in ultrarelativistic heavy ion collisions at rhic. the observed consequences include a change of the spectral slopes, varying particle ratios, and also modified mean transverse momenta. in particular, the composition of the production processes of phi mesons, that is, direct production vs. coalescence - like production, depends strongly on the strength of the color fields and intrinsic transverse momenta and thus represents a sensitive probe for their measurement.
arxiv:nucl-th/0404005
in our previous publications we discussed various manifestations of a new decay channel of the low excited heavy nuclei called collinear cluster tri - partition ( cct ). the most populated cct mode was revealed in the mass correlation distribution of fission fragments ( ffs ) as a local region ( " bump " ) of increased yields below the loci linked to the conventional binary fission. the bump was dubbed " ni - bump " because it is centered at the masses associated with the magic isotopes of ni. intriguing features of the cct, especially high collinearity of the cct partners and relatively high probability comparable with that typical for conventional ternary fission, have caused rather wide discussion. in the majority of dedicated publications, the ffs partitions from the ni - bump have been analyzed from the different points of view. in our publications, we have underlined that ni - bump manifests itself at the detectable level only in the spectrometer arm that faces the source backing. so far, this fact has been left beyond the scope of all known theoretical considerations, while the backing likely plays a crucial role in the observation of the cct experimental pattern.
arxiv:2003.08591
we investigate a promising conformal field theory realization scheme for topological quantum computation based on the fibonacci anyons, which are believed to be realized as quasiparticle excitations in the $ \ mathbb { z } _ 3 $ parafermion fractional quantum hall state in the second landau level with filling factor $ \ nu = 12 / 5 $. these anyons are non - abelian and are known to be capable of universal topological quantum computation. the quantum information is encoded in the fusion channels of pairs of such non - abelian anyons and is protected from noise and decoherence by the topological properties of these systems. the quantum gates are realized by braiding of these anyons. we propose here an implementation of the $ n $ - qubit topological quantum register in terms of $ 2n + 2 $ fibonacci anyons. the matrices emerging from the anyon exchanges, i. e. the generators of the braid group for one qubit are derived from the coordinate wave functions of a large number of electron holes and 4 fibonacci anyons which can furthermore be represented as correlation functions in $ \ mathbb { z } _ 3 $ parafermionic two - dimensional conformal field theory. the representations of the braid groups for more than 4 anyons are obtained by fusing pairs of anyons before braiding, thus reducing eventually the system to 4 anyons.
arxiv:2404.01779
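for the one - qubit case, the braid generators mentioned above can be written down explicitly; a small numerical check of the standard fibonacci f - and r - matrices ( phase conventions as in the common literature choice, which may differ from the paper's ):

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2   # golden ratio
tau = 1 / phi

# fusion (f) and braiding (r) matrices acting on the two-dimensional
# fusion space of three fibonacci anyons
F = np.array([[tau, np.sqrt(tau)],
              [np.sqrt(tau), -tau]])        # real, symmetric, f @ f = identity
R = np.diag([np.exp(-4j * np.pi / 5),      # phase for fusion channel 1
             np.exp(3j * np.pi / 5)])      # phase for fusion channel tau

sigma1 = R           # exchange the first pair of anyons
sigma2 = F @ R @ F   # exchange the second pair (basis change via f)

# the generators are unitary and satisfy the braid relation of b_3
braid_defect = np.max(np.abs(sigma1 @ sigma2 @ sigma1 - sigma2 @ sigma1 @ sigma2))
print(braid_defect < 1e-12)  # True
```

dense subsets of su(2) are generated by products of these two matrices, which is what makes the fibonacci model universal for one-qubit gates.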
aims. the aim of this study is a detailed abundance analysis of the newly discovered r - rich star he 1405 - 0822, which has [ fe / h ] = - 2. 40. this star shows enhancements of both r - and s - elements, [ ba / fe ] = + 1. 95 and [ eu / fe ] = 1. 54, for which reason it is called an r + s star. methods. stellar parameters and element abundances were determined by analyzing high - quality vlt / uves spectra. we used fe i line excitation equilibria to derive the effective temperature. the surface gravity was calculated from the fe i / fe ii and ti i / ti ii equilibria. results. we determined accurate abundances for 39 elements, including 19 neutron - capture elements. he 1405 - 0822 is a red giant. its strong enhancements of c, n, and s - elements are the consequence of enrichment by a former agb companion with an initial mass of less than 3 m _ sun. the heavy n - capture element abundances ( including eu, yb, and hf ) seen in he 1405 - 0822 do not agree with the r - process pattern seen in strongly r - process - enhanced stars. we discuss possible enrichment scenarios for this star. the enhanced alpha elements can be explained as the result of enrichment by supernovae of type ii. na and mg may have partly been synthesized in a former agb companion, when the primary $ ^ { 22 } $ ne acted as a neutron poison in the $ ^ { 13 } $ c - pocket.
arxiv:1308.4492
group is a non - empty set g together with a binary operation on g, here denoted " ⋅ ", that combines any two elements a and b of g to form an element of g, denoted a ⋅ b, such that the following three requirements, known as group axioms, are satisfied : associativity : for all a, b, c in g, one has ( a ⋅ b ) ⋅ c = a ⋅ ( b ⋅ c ). identity element : there exists an element e in g such that, for every a in g, one has e ⋅ a = a and a ⋅ e = a. such an element is unique ( see below ). it is called the identity element ( or sometimes neutral element ) of the group. inverse element : for each a in g, there exists an element b in g such that a ⋅ b = e and b ⋅ a = e, where e is the identity element. for each a, the element b is unique ( see below ) ; it is called the inverse of a and is commonly denoted a⁻¹. = = = notation and terminology = = = formally, a group is an ordered pair of a set and a binary operation on this set that satisfies the group axioms. the set is called the underlying set of the group, and the operation is called the group operation or the group law
https://en.wikipedia.org/wiki/Group_(mathematics)
color - singlet gauge bosons with renormalizable couplings to quarks but not to leptons must interact with additional fermions ( " anomalons " ) required to cancel the gauge anomalies. analyzing the decays of such leptophobic bosons into anomalons, i show that they produce final states involving leptons at the lhc. resonant production of a flavor - universal leptophobic $ z ' $ boson leads to cascade decays via anomalons, whose signatures include a leptonically decaying $ z $, missing energy and several jets. a $ z ' $ boson that couples to the right - handed quarks of the first and second generations undergoes cascade decays that violate lepton universality and include signals with two leptons and jets, or with a higgs boson, a lepton, a $ w $ and missing energy.
arxiv:1506.04435
, and prevent or investigate criminal activity. with the advent of programs such as the total information awareness program, technologies such as high - speed surveillance computers and biometrics software, and laws such as the communications assistance for law enforcement act, governments now possess an unprecedented ability to monitor the activities of citizens. however, many civil rights and privacy groups β€” such as reporters without borders, the electronic frontier foundation, and the american civil liberties union β€” have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. fears such as this have led to lawsuits such as hepting v. at & t. the hacktivist group anonymous has hacked into government websites in protest of what it considers " draconian surveillance ". = = = end to end encryption = = = end - to - end encryption ( e2ee ) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. it involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. end - to - end encryption prevents intermediaries, such as internet service providers or application service providers, from reading or tampering with communications. end - to - end encryption generally protects both confidentiality and integrity. examples of end - to - end encryption include https for web traffic, pgp for email, otr for instant messaging, zrtp for telephony, and tetra for radio. typical server - based communications systems do not include end - to - end encryption. these systems can only guarantee the protection of communications between clients and servers, not between the communicating parties themselves. examples of non - e2ee systems are google talk, yahoo messenger, facebook, and dropbox. 
the end - to - end encryption paradigm does not directly address risks at the endpoints of the communication themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. e2ee also does not address traffic analysis, which relates to things such as the identities of the endpoints and the times and quantities of messages that are sent. = = = ssl / tls = = = the introduction and rapid growth of e - commerce on the world wide web in the mid - 1990s made it obvious that some form of authentication and encryption was needed. netscape took the first shot at a new standard. at the time, the dominant web browser was netscape navigator. netscape created a standard called secure socket
https://en.wikipedia.org/wiki/Computer_network
objective : to develop an automatic image normalization algorithm for intensity correction of images from breast dynamic contrast - enhanced magnetic resonance imaging ( dce - mri ) acquired by different mri scanners with various imaging parameters, using only image information. methods : dce - mr images of 460 subjects with breast cancer acquired by different scanners were used in this study. each subject had one t1 - weighted pre - contrast image and three t1 - weighted post - contrast images available. our normalization algorithm operated under the assumption that the same type of tissue in different patients should be represented by the same voxel value. we used four tissue / material types as the anchors for the normalization : 1 ) air, 2 ) fat tissue, 3 ) dense tissue, and 4 ) heart. the algorithm proceeded in the following two steps : first, a state - of - the - art deep learning - based algorithm was applied to perform tissue segmentation accurately and efficiently. then, based on the segmentation results, a subject - specific piecewise linear mapping function was applied between the anchor points to normalize the same type of tissue in different patients into the same intensity ranges. we evaluated the algorithm with 300 subjects used for training and the rest used for testing. results : the application of our algorithm to images with different scanning parameters resulted in highly improved consistency in pixel values and extracted radiomics features. conclusion : the proposed image normalization strategy based on tissue segmentation can perform intensity correction fully automatically, without the knowledge of the scanner parameters. significance : we have thoroughly tested our algorithm and showed that it successfully normalizes the intensity of dce - mr images. we made our software publicly available for others to apply in their analyses.
arxiv:1807.02152
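the second step, the subject - specific piecewise linear mapping between anchor points, is essentially one call to np.interp; the anchor intensities below are made - up illustrative numbers, not values from the study:

```python
import numpy as np

# hypothetical anchor intensities: median raw value of each segmented tissue
# class in one subject (air, fat, dense tissue, heart) -- illustrative only
subject_anchors = [0.0, 180.0, 420.0, 900.0]
# target values the same tissue classes should map to after normalization
reference_anchors = [0.0, 0.25, 0.60, 1.0]

def normalize(image, src, dst):
    # subject-specific piecewise linear mapping between anchor points;
    # np.interp clamps values outside the anchor range to the end points
    return np.interp(image, src, dst)

raw = np.array([0.0, 90.0, 180.0, 300.0, 900.0])
print(normalize(raw, subject_anchors, reference_anchors))
# [0.    0.125 0.25  0.425 1.   ]
```

because the anchors are recomputed per subject from the segmentation, the same tissue class ends up in the same intensity range across scanners.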
in view of tucker ' s lemma ( an equivalent combinatorial version of the borsuk - ulam theorem ), the present authors ( 2013 ) introduced the kth altermatic number of a graph g as a tight lower bound for the chromatic number of g. in this note, we present a purely combinatorial proof for this result.
arxiv:1510.06932
the generalized uncertainty principle discloses a self - complete characteristic of gravity, namely the possibility of masking any curvature singularity behind an event horizon as a result of matter compression at the planck scale. in this paper we extend the above reasoning in order to overcome some current limitations to the framework, including the absence of a consistent metric describing such planck - scale black holes. we implement a minimum - size black hole in terms of the extremal configuration of a neutral non - rotating metric, which we derived by mimicking the effects of the generalized uncertainty principle via a short scale modified version of einstein gravity. in such a way, we find a self - consistent scenario that reconciles the self - complete character of gravity and the generalized uncertainty principle.
arxiv:1310.8153
the purpose of this paper is to explore nevanlinna theory of the entire curve $ \ exh _ a f : = ( \ exp _ a f, f ) : \ c \ to a \ times \ lie ( a ) $ associated with an entire curve $ f : \ c \ to \ lie ( a ) $, where $ \ exp _ a : \ lie ( a ) \ to a $ is an exponential map of a semi - abelian variety $ a $. firstly we give a nevanlinna theoretic proof to the { \ em analytic ax - schanuel theorem } for semi - abelian varieties, which was proved by j. ax 1972 in the case of formal power series ( ax - schanuel theorem ). we assume some non - degeneracy condition for $ f $ such that the elements of the vector - valued function $ f ( z ) - f ( 0 ) \ in \ lie ( a ) \ cong \ c ^ n $ are $ \ q $ - linearly independent in the case of $ a = ( \ c ^ * ) ^ n $. then by making use of the log bloch - ochiai theorem and a key estimate which we show, we prove that $ \ td _ \ c \, \ exh _ a f \ geq n + 1 $. our next aim is to establish a { \ em 2nd main theorem } for $ \ exh _ a f $ and its $ k $ - jet lifts with truncated counting functions at level one.
arxiv:2203.00470
gravitational wave detectors in the ligo / virgo frequency band are able to measure the individual masses and the composite tidal deformabilities of neutron - star binary systems. this paper demonstrates that high accuracy measurements of these quantities from an ensemble of binary systems can in principle be used to determine the high density neutron - star equation of state exactly. this analysis assumes that all neutron stars have the same thermodynamically stable equation of state, but does not use simplifying approximations for the composite tidal deformability or make additional assumptions about the high density equation of state.
arxiv:1807.02538
as gaussian processes are used to answer increasingly complex questions, analytic solutions become scarcer and scarcer. monte carlo methods act as a convenient bridge for connecting intractable mathematical expressions with actionable estimates via sampling. conventional approaches for simulating gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations. this distribution - centric characterization leads to generative strategies that scale cubically in the size of the desired random vector. these methods are prohibitively expensive in cases where we would, ideally, like to draw high - dimensional vectors or even continuous sample paths. in this work, we investigate a different line of reasoning : rather than focusing on distributions, we articulate gaussian conditionals at the level of random variables. we show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling gaussian process posteriors. starting from first principles, we derive these methods and analyze the approximation errors they introduce. we, then, ground these results by exploring the practical implications of pathwise conditioning in various applied settings, such as global optimization and reinforcement learning.
arxiv:2011.04026
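the pathwise view has a concrete classical form, matheron's update rule: a joint prior draw is corrected by the kernel - weighted residual at the training points. a minimal noise - free numpy sketch ( kernel and data are illustrative ):

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ls=0.5):
    # squared-exponential kernel on 1-d inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

# noise-free training data; training points appended to the test grid
# so the interpolation property is easy to inspect
X = np.array([-1.0, 0.0, 1.0])
y = np.sin(3 * X)
Xs = np.concatenate([np.linspace(-2.0, 2.0, 47), X])

# one joint draw from the prior over training and test locations
Xa = np.concatenate([X, Xs])
L = np.linalg.cholesky(rbf(Xa, Xa) + 1e-8 * np.eye(len(Xa)))
f = L @ rng.standard_normal(len(Xa))
fX, fXs = f[:len(X)], f[len(X):]

# matheron's rule: posterior path = prior path + kernel-weighted residual
K = rbf(X, X) + 1e-8 * np.eye(len(X))
posterior_path = fXs + rbf(Xs, X) @ np.linalg.solve(K, y - fX)

# at the training points the path reproduces the observations
print(np.max(np.abs(posterior_path[-len(X):] - y)))  # small
```

the point of the paper is that the update term costs only a solve against the training-size kernel matrix, so drawing long paths avoids the cubic cost in the number of test locations.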
we use fermionic path integral quantum monte carlo to study the effects of fermion flavor on the physical properties of dipolar exciton condensates in double layer systems. we find that including spin in the system weakens the effective interlayer interaction strength, yet this has very little effect on the kosterlitz - thouless transition temperature. we further find that, to obtain the correct description of screening, it is necessary to account for correlation in both the interlayer and intralayer interactions. we show that while the excitonic binding cannot completely suppress screening by additional fermion flavors, their screening effectiveness is reduced, leading to much higher transition temperatures than predicted by large - n analysis.
arxiv:1108.6107
nadkarni ' s theorem asserts that for a countable borel equivalence relation ( cber ) exactly one of the following holds : ( 1 ) it has an invariant borel probability measure or ( 2 ) it admits a borel compression, i. e., a borel injection that maps each equivalence class to a proper subset of it. we prove in this paper an effective version of nadkarni ' s theorem, which shows that if a cber is effectively borel, then either alternative ( 1 ) above holds or else it admits an effectively borel compression. as a consequence if a cber is effectively borel and admits a borel compression, then it actually admits an effectively borel compression. we also prove an effective version of the ergodic decomposition theorem. finally a counterexample is given to show that alternative ( 1 ) above does not admit an effective version.
arxiv:2305.14518
we extend the earlier suggested qcd - motivated model for the $ q ^ 2 $ - dependence of the generalized gerasimov - drell - hearn ( gdh ) sum rule which assumes the smooth dependence of the structure function $ g _ t $, while the sharp dependence is due to the $ g _ 2 $ contribution and is described by the elastic part of the burkhardt - cottingham sum rule. the model successfully predicts the low crossing point for the proton gdh integral, but is at variance with the recent very accurate jlab data. we show that, at this level of accuracy, one should include the previously neglected radiative and power qcd corrections, as boundary values for the model. we stress that the gdh integral, when measured with such a high accuracy achieved by the recent jlab data, is very sensitive to qcd power corrections. we estimate the value of these power corrections from the jlab data at $ q ^ 2 \ sim 1 { gev } ^ 2 $. the inclusion of all qcd corrections leads to a good description of proton, neutron and deuteron data at all $ q ^ 2 $.
arxiv:hep-ph/0410228
deep neural networks ( dnns ) have shown great potential in non - reference image quality assessment ( nr - iqa ). however, the annotation of nr - iqa is labor - intensive and time - consuming, which severely limits their application especially for authentic images. to relieve the dependence on quality annotation, some works have applied unsupervised domain adaptation ( uda ) to nr - iqa. however, the above methods ignore that the alignment space used in classification is sub - optimal, since the space is not elaborately designed for perception. to solve this challenge, we propose an effective perception - oriented unsupervised domain adaptation method styleam for nr - iqa, which transfers sufficient knowledge from label - rich source domain data to label - free target domain images via style alignment and mixup. specifically, we find a more compact and reliable space i. e., feature style space for perception - oriented uda based on an interesting / amazing observation, that the feature style ( i. e., the mean and variance ) of the deep layer in dnns is exactly associated with the quality score in nr - iqa. therefore, we propose to align the source and target domains in a more perceptual - oriented space i. e., the feature style space, to reduce the intervention from other quality - irrelevant feature factors. furthermore, to increase the consistency between quality score and its feature style, we also propose a novel feature augmentation strategy style mixup, which mixes the feature styles ( i. e., the mean and variance ) before the last layer of dnns together with mixing their labels. extensive experimental results on two typical cross - domain settings ( i. e., synthetic to authentic, and multiple distortions to one distortion ) have demonstrated the effectiveness of our proposed styleam on nr - iqa.
arxiv:2207.14489
various options are discussed to de - fossilize heavy - duty vehicles ( hdv ), including battery - electric vehicles ( bev ), electric road systems ( ers ), and indirect electrification via hydrogen fuel cells or e - fuels. we investigate their power sector implications in future scenarios of germany with high renewable energy shares, using an open - source capacity expansion model and route - based truck traffic data. power sector costs are lowest for flexibly charged bev that also carry out vehicle - to - grid operations, and highest for e - fuels. if bev and ers - bev are not optimally charged, power sector costs increase, but are still substantially lower than in scenarios with hydrogen or e - fuels. this is because indirect electrification is less energy efficient, which outweighs potential flexibility benefits. bev and ers - bev favor solar photovoltaic energy, while hydrogen and e - fuels favor wind power and increase fossil electricity generation. results remain qualitatively robust in sensitivity analyses.
arxiv:2303.16629
we study the unzipping of double stranded dna ( dsdna ) by applying a pulling force at a fraction $ s $ ( $ 0 \ le s \ le 1 $ ) from the anchored end. from exact analytical and numerical results, the complete phase diagram is presented. the phase diagram shows a strong ensemble dependence for various values of $ s $. in addition, we show the existence of an " eye " phase and a triple point.
arxiv:cond-mat/0403752
we show that three dimensional " sliding " analogs of the kosterlitz - thouless phase, in stacked classical two - dimensional xy models and quantum systems of coupled luttinger liquids, can be enlarged by the application of a parallel magnetic field, which has the effect of increasing the scaling dimensions of the most relevant operators that can perturb the critical sliding phases. within our renormalization group analysis, we also find that for the case of coupled luttinger liquids, this effect is interleaved with the onset of the integer quantum hall effect for weak interactions and fields. we comment on experimental implications for a conjectured smectic metal phase in the cuprates.
arxiv:cond-mat/0007349
in this paper we address the question : " do the limits on technirho production at the tevatron mean what we think they do? " these limits are based on calculations that rely on vector meson dominance ( vmd ). vmd was invented in order to describe the interaction of electrons with hadrons ( the rho meson and pions ). the method has also been used as a tool in the study of technicolor phenomenology. nevertheless, there is evidence that, even in its original context, vmd is not completely realized. in this work we investigate the consequences of a deviation from complete vmd for the phenomenology of colored technihadrons. we focus especially on the production of the color octet technirho and color triplet technipions. we find that a relatively small direct coupling of the proto - technirho to quarks is enough to suppress or even eliminate the interaction among quarks and the physical technirho. on the other hand, it is possible to suppress the coupling of the physical technirho to technipions, but in this case a large interaction among the technipions and the proto - gluon must be introduced. the consequences for the limits on the mass of the color octet technirho are also investigated.
arxiv:hep-ph/0603094
this paper addresses the challenging problem of category - level pose estimation. current state - of - the - art methods for this task face challenges when dealing with symmetric objects and when attempting to generalize to new environments solely through synthetic data training. in this work, we address these challenges by proposing a probabilistic model that relies on diffusion to estimate dense canonical maps crucial for recovering partial object shapes as well as establishing correspondences essential for pose estimation. furthermore, we introduce critical components to enhance performance by leveraging the strength of the diffusion models with multi - modal input representations. we demonstrate the effectiveness of our method by testing it on a range of real datasets. despite being trained solely on our generated synthetic data, our approach achieves state - of - the - art performance and unprecedented generalization qualities, outperforming baselines, even those specifically trained on the target domain.
arxiv:2402.12647
two new methods for the investigation of two - dimensional quantum systems, whose hamiltonians are not amenable to separation of variables, are proposed. 1 ) the first one, susy separation of variables, is based on the intertwining relations of higher order susy quantum mechanics ( hsusy qm ) with supercharges allowing for separation of variables. 2 ) the second one is a generalization of shape invariance. while in one dimension shape invariance allows one to solve algebraically a class of ( exactly solvable ) quantum problems, its generalization to higher dimensions has not yet been explored. here we provide a formal framework in hsusy qm for two - dimensional quantum mechanical systems for which shape invariance holds. given the knowledge of one eigenvalue and eigenfunction, shape invariance allows one to construct a chain of new eigenfunctions and eigenvalues. these methods are applied to a two - dimensional quantum system, and partial explicit solvability is achieved in the sense that only part of the spectrum is found analytically and a limited set of eigenfunctions is constructed explicitly.
arxiv:hep-th/0201080
We investigate an additive perturbation of a complex Wishart random matrix and prove that a large deviation principle holds for the spectral measures. The rate function is associated with a vector equilibrium problem coming from logarithmic potential theory, which in our case is a quadratic map involving the logarithmic energies, or Voiculescu's entropies, of two measures in the presence of an external field and an upper constraint. The proof is based on a two-type-particle Coulomb gas representation for the eigenvalue distribution, which gives new insight into why such variational problems should describe the limiting spectral distribution. This representation is available because of a Nikishin structure satisfied by the weights of the multiple orthogonal polynomials hidden in the background.
arxiv:1204.6261
New experimental research programs in the field of neutrino physics are calling for new detectors with large masses, high energy resolution, and good background-rejection capabilities. This paper presents a novel hybrid organic/inorganic scintillator that improves on all three aspects simultaneously. The scintillator consists of microscopic grains of inorganic crystals suspended in an organic scintillating carrier medium. Due to multiple scattering off the crystals, it appears opaque over longer distances and is intended for use in specialized detectors. Thanks to the crystal phase it can natively incorporate a large variety of elements in large quantities, so that a sufficiently large detector can reach elemental loadings on the ton or multi-ton scale. At the same time, this composition can produce very high light outputs and provides additional particle-identification capabilities. This scintillator concept is expected to provide significant advantages for future neutrino experiments, such as searches for neutrinoless double beta decay and reactor antineutrino physics.
arxiv:1807.00628
We study a non-local version of the Cahn-Hilliard dynamics for phase separation in a two-component incompressible and immiscible mixture with linear mobilities. In contrast to the celebrated local model with nonlinear mobility, it is only assumed that the divergences of the two fluxes (but not necessarily the fluxes themselves) annihilate each other. Our main result is a rigorous proof of the existence of weak solutions. The starting point is the formal representation of the dynamics as a constrained gradient flow in the Wasserstein metric. We then show that time-discrete approximations by means of the incremental minimizing movement scheme converge to a weak solution in the limit. Further, we compare the non-local model to the classical Cahn-Hilliard model in numerical experiments. Our results illustrate the significant speed-up in the decay of the free energy due to the higher degree of freedom for the velocity fields.
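The classical local model that such numerical experiments compare against is easy to sketch. The following is a minimal explicit finite-difference discretization of the standard 1D Cahn-Hilliard equation on a periodic grid, included only as an illustrative baseline; it is not the paper's non-local scheme, and the grid size, interface width `eps`, and time step are arbitrary choices:

```python
import numpy as np

def laplacian(u, dx):
    """Second-difference Laplacian on a periodic 1D grid."""
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

def cahn_hilliard_step(u, dt, dx, eps):
    """One explicit Euler step of u_t = Laplace(u^3 - u - eps^2 * Laplace(u)).

    mu is the chemical potential of the double-well free energy; the time
    step must resolve the stiff fourth-order term (dt ~ dx^4 / eps^2).
    """
    mu = u**3 - u - eps**2 * laplacian(u, dx)
    return u + dt * laplacian(mu, dx)

def free_energy(u, dx, eps):
    """Discrete Ginzburg-Landau energy; it decays along the flow."""
    grad = (np.roll(u, -1) - u) / dx
    return np.sum(0.25 * (u**2 - 1) ** 2 + 0.5 * eps**2 * grad**2) * dx
```

Because the scheme conserves mass exactly (the periodic Laplacian sums to zero) and dissipates the free energy for sufficiently small `dt`, monitoring `free_energy` over time is exactly the kind of diagnostic the abstract uses to compare the decay rates of the two models.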
arxiv:1712.06446
We have mapped the warm molecular gas traced by the H_2 S(0)-S(5) pure-rotational mid-infrared emission lines over a radial strip across the nucleus and disk of M51 (NGC 5194) using the Infrared Spectrograph (IRS) on the Spitzer Space Telescope. The six H_2 lines have markedly different emission distributions. We obtained the H_2 temperature and surface density distributions by assuming a two-temperature model: a warm (T = 100-300 K) phase traced by the low-J (S(0)-S(2)) lines and a hot (T = 400-1000 K) phase traced by the high-J (S(2)-S(5)) lines. The lowest molecular gas temperatures are found within the spiral arms (T ~ 155 K), while the highest temperatures are found in the inter-arm regions (T > 700 K). The warm gas surface density reaches a maximum of 11 M_sun/pc^2 in the northwestern spiral arm, whereas the hot gas surface density peaks at 0.24 M_sun/pc^2 at the nucleus. The spatial offset between the peaks of the warm and hot phases and the differences in the distributions of the H_2 line emission suggest that the warm phase is mostly produced by UV photons in star-forming regions, while the hot phase is mostly produced by shocks or X-rays associated with nuclear activity. The warm H_2 is found in the dust lanes of M51, spatially offset from the brightest HII regions. The warm H_2 is generally spatially coincident with the cold molecular gas traced by CO (J = 1-0) emission, consistent with excitation of the warm phase in dense photodissociation regions (PDRs). In contrast, the hot H_2 is most prominent in the nuclear region. Here, within a 0.5 kpc radius around the nucleus of M51, the hot H_2 coincides with [O IV] (25.89 micron) and X-ray emission, indicating that shocks and/or X-rays are responsible for exciting this phase.
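As a minimal illustration of the line-ratio analysis behind such temperature maps, the sketch below recovers a single-phase excitation temperature from two H_2 rotational level columns under LTE. This is a textbook Boltzmann-ratio exercise, not the paper's actual two-phase fitting pipeline; the rigid-rotor constant B ~ 85.3 K and the 3:1 ortho/para weighting are standard approximations for H_2:

```python
import math

B = 85.3  # H2 rotational constant in kelvin (rigid-rotor approximation)

def energy(J):
    """Rotational level energy E_J / k in kelvin."""
    return B * J * (J + 1)

def weight(J):
    """Statistical weight: (2J + 1) times the 3:1 ortho/para spin factor."""
    return (3 if J % 2 else 1) * (2 * J + 1)

def excitation_temperature(J1, J2, N1, N2):
    """Temperature from two level column densities, assuming a
    single-temperature Boltzmann population (LTE)."""
    ratio = (N1 / N2) * (weight(J2) / weight(J1))
    return (energy(J2) - energy(J1)) / math.log(ratio)

# Round trip: generate level columns at T = 155 K (the abstract's
# spiral-arm value) and recover T from the J = 3 / J = 5 ratio,
# i.e. the upper levels of the S(1) and S(3) lines.
T_true = 155.0
N3 = weight(3) * math.exp(-energy(3) / T_true)
N5 = weight(5) * math.exp(-energy(5) / T_true)
T_rec = excitation_temperature(3, 5, N3, N5)
```

In practice a two-temperature decomposition like the paper's fits warm and hot components simultaneously to all six line columns rather than inverting a single ratio.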
arxiv:0712.0022
The existing field theories are based on the properties of closed exterior forms, which are invariants and correspond to conservation laws for physical fields. Hence, to understand the foundations of field theories and their unity, one has to know how such closed exterior forms are obtained. In the present paper it is shown that the closed exterior forms corresponding to field theories are obtained from the equations modelling the conservation (balance) laws for material media. An evolutionary method has been developed that enables one to describe the process of obtaining closed exterior forms. This process discloses the mechanism of evolutionary processes in material media and shows that material media generate, discretely, the physical structures from which the physical fields are formed. This justifies the quantum character of field theories. On the other hand, this process demonstrates the connection between field theories and the equations for material media, and points to the fact that the foundations of field theories must be conditioned by the properties of material media. It is shown that the external and internal symmetries of field theories are conditioned by the degrees of freedom of material media. The classification parameter of physical fields and interactions, that is, the parameter of the unified field theory, is connected with the number of noncommutative balance conservation laws for material media.
arxiv:physics/0603118
In this paper, following the recent work of Fathi (2018) in the classical case, we provide, by two different methods, a sharp symmetrized free Talagrand inequality for the semicircular law, which improves the free TCI of Biane and Voiculescu (2000). The first proof holds only in the one-dimensional case and has the advantage of providing a connection with the machinery of free moment maps introduced by Bahr and Boschert (2023) and a free reverse log-Sobolev inequality. This case also sheds light on a dual formulation via the free version of the functional Blaschke-Santalo inequality. The second proof gives the result in a multidimensional setting and relies on a random matrix approximation approach developed by Biane (2003) and by Hiai, Petz and Ueda (2004), combined with Fathi's inequality on Euclidean spaces.
arxiv:2410.02715
The one-loop corrections to the weak mixing angle $\sin^2\theta_{eff}^b$, derived from the $Z\bar{b}b$ vertex, have been known since 1985. It took another 30 years to calculate the complete electroweak two-loop corrections to $\sin^2\theta_{eff}^b$. The main obstacle was the calculation of the O(700) bosonic two-loop vertex integrals with up to three mass scales, at $s = M_Z^2$. We did not perform the usual integral reduction and master evaluation, but chose a completely numerical approach, using two different calculational chains. One method relies on publicly available sector-decomposition implementations. Further, we derived Mellin-Barnes (MB) representations, exploiting the publicly available MB suite. We had to supplement the MB suite with two new packages: AMBRE 3, a Mathematica program for the efficient treatment of non-planar integrals, and MBnumerics for advanced numerics in Minkowskian space-time. Our preliminary result for LL2016, the "dessert", for the electroweak bosonic two-loop contributions to $\sin^2\theta_{eff}^b$ is: $\Delta\sin^2\theta_{eff}^{b(\alpha^2,\mathrm{bos})} = \sin^2\theta_W \, \Delta\kappa_b^{(\alpha^2,\mathrm{bos})}$, with $\Delta\kappa_b^{(\alpha^2,\mathrm{bos})} = -1.0276 \times 10^{-4}$. This contribution is about a quarter of the corresponding fermionic corrections and of about the same magnitude as several of the known higher-order QCD corrections. The quantity $\sin^2\theta_{eff}^b$ is now predicted in the Standard Model with a relative error of $10^{-4}$ [1].
arxiv:1610.07059
The reionization of cosmic hydrogen, left over from the big bang, increased its temperature to >~ 1e4 K. This photo-heating resulted in an increase of the minimum mass of galaxies and hence a suppression of the cosmic star formation rate. The affected population of dwarf galaxies included the progenitors of massive galaxies that formed later. We show that a massive galaxy at a redshift z >~ 6 should show a double-peaked star formation history marked by a clear break. This break reflects the suppression signature from reionization of the region in which the galaxy was assembled. Since massive galaxies originate in overdense regions where cosmic evolution is accelerated, their environment reionizes earlier than the rest of the universe. For a galaxy of ~1e12 M_solar in stars at a redshift of z ~ 6.5, the star formation rate should typically be suppressed at z >~ 10, since the rest of the universe is known to have reionized by z >~ 6.5. Indeed, this is inferred to be the case for HUDF-JD2, a massive galaxy which is potentially at z ~ 6.5 but is inferred to have formed the bulk of its 3e11 M_solar in stars at z >~ 9.
arxiv:astro-ph/0510421
We suggest a scheme to reconstruct the covariance matrix of a two-mode state using a single homodyne detector plus a polarizing beam splitter and a polarization rotator. It can be used to fully characterize bipartite Gaussian states and to extract relevant information about generic states.
arxiv:quant-ph/0509180
Let $N(x, y)$ denote the number of integers $n \le x$ which are divisible by a shifted prime $p - 1$ with $p > y$, $p$ prime. Improving upon recent bounds of McNew, Pollack and Pomerance, we establish the exact order of growth of $N(x, y)$ for all $x \ge 2y \ge 4$.
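The counting function can be checked directly by brute force for small arguments. The following sketch (illustrative only, with no attempt at efficiency) enumerates the shifted primes $p - 1$ with $p > y$ and counts the $n \le x$ that at least one of them divides:

```python
def primes_up_to(m):
    """Simple sieve of Eratosthenes returning all primes <= m."""
    sieve = [True] * (m + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(m ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def N(x, y):
    """Count integers n <= x divisible by some p - 1 with p prime, p > y."""
    shifts = [p - 1 for p in primes_up_to(x + 1) if p > y]
    return sum(1 for n in range(1, x + 1)
               if any(n % s == 0 for s in shifts))
```

For example, N(20, 4) counts the n <= 20 divisible by one of 4, 6, 10, 12, 16, 18 (the shifts of the primes 5 through 19), giving N(20, 4) = 8.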
arxiv:1604.00281