We discuss recent developments in neutrino physics and focus, in particular, on neutrino oscillations and matter effects of three light active neutrinos. Moreover, we discuss the difference between Dirac and Majorana neutrinos, neutrinoless $\beta\beta$-decay, absolute neutrino masses and electromagnetic moments. Basic mechanisms and a few models for neutrino masses and mixing are also presented.
arxiv:hep-ph/0307149
In this paper, we consider the following question: how many degree $d$ curves are there in $\mathbb{P}^3$ (passing through the right number of generic lines and points) whose image lies inside a $\mathbb{P}^2$, having $\delta$ nodes and one singularity of codimension $k$? We obtain an explicit formula for this number when $\delta + k \leq 4$ (i.e. the total codimension of the singularities is not more than four). We use a topological method to compute the degenerate contribution to the Euler class; it is an extension of the method that originates in a paper by A. Zinger and is further pursued by S. Basu and the second author. Using this method, we have obtained formulas when the singularities present are more degenerate than nodes (such as cusps, tacnodes and triple points). When the singularities are only nodes, we have verified that our answers are consistent with those obtained by S. Kleiman and R. Piene and by T. Laarakker. We also verify that our answer for the characteristic number of planar cubics with a cusp and the number of planar quartics with two nodes and one cusp is consistent with the answer obtained by R. Singh and the second author, where they compute the characteristic number of rational planar curves in $\mathbb{P}^3$ with a cusp. We also verify some of the numbers predicted by the conjecture made by Pandharipande regarding the enumerativity of BPS numbers for $\mathbb{P}^3$.
arxiv:2007.11933
Automated cell detection and localization from microscopy images are significant tasks in biomedical research and clinical practice. In this paper, we design a new cell detection and localization algorithm that combines a deep convolutional neural network (CNN) and compressed sensing (CS) or sparse coding (SC) for end-to-end training. We also derive, for the first time, a backpropagation rule that is applicable to training any algorithm that implements a sparse code recovery layer. The key observation behind our algorithm is that cell detection is a point object detection task in computer vision, where the cell centers (i.e., point objects) occupy only a tiny fraction of the total number of pixels in an image. Thus, we can apply compressed sensing (or, equivalently, sparse coding) to compactly represent a variable number of cells in a projected space; the CNN then regresses this compressed vector from the input microscopy image, and the SC/CS recovery algorithm ($\ell_1$ optimization) recovers the sparse cell locations from the CNN output. We train this entire processing pipeline end-to-end and demonstrate that end-to-end training provides accuracy improvements over a training paradigm that treats the CNN and CS-recovery layers separately. Our algorithm design also naturally incorporates a form of ensemble averaging of trained models to further boost cell detection accuracy. We have validated our algorithm on benchmark datasets and achieved excellent performance.
arxiv:1810.03075
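The $\ell_1$ recovery step this abstract mentions can be sketched with plain ISTA (iterative shrinkage-thresholding). This is a generic illustration of sparse recovery from a compressed measurement, not the paper's trained pipeline; the matrix sizes, sparsity level, and regularization weight are made up for the demo.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding (gradient step + soft threshold)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the smooth term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# A random sensing matrix compresses a sparse "cell location" vector,
# mimicking point objects occupying a tiny fraction of the pixels.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[7, 42, 91]] = [1.5, -2.0, 1.0]     # three "cells"
y = A @ x_true                             # compressed measurement
x_hat = ista(A, y, lam=0.01, n_iter=500)
print(sorted(np.argsort(-np.abs(x_hat))[:3]))
```

With 40 measurements of a 3-sparse 100-dimensional vector, the three largest recovered coefficients land on the true support.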
Starting from a characterization of admissible Chetaev and vakonomic variations in a field theory with constraints, we show how the so-called parametrized variational calculus can help to derive the vakonomic and the non-holonomic field equations. We present an example in field theory where the non-holonomic method proves to be unphysical.
arxiv:math-ph/0608063
We study the effects of time-dependent mass injection and heating on the evolution of the interstellar medium (ISM) in elliptical galaxies. As the large and luminous ellipticals have supermassive black holes at their cores, which were probably much less massive in the young universe, feeding these black holes is essential. We examine steady-state solutions and describe the impact of the initial starburst on the evolution of the ISM and the consequences for galactic activity, based on results from Starburst99.
arxiv:astro-ph/0502403
In this note we classify compact 4-manifolds with harmonic Weyl tensor and nonnegative biorthogonal curvature.
arxiv:1505.05430
We present a procedure to construct (n+1)-Hom-Nambu-Lie algebras from n-Hom-Nambu-Lie algebras equipped with a generalized trace function. It turns out that the implications of the compatibility conditions that are necessary for this construction can be understood in terms of the kernel of the trace function and the range of the twisting maps. Furthermore, we investigate the possibility of defining (n+k)-Lie algebras from n-Lie algebras and a k-form satisfying certain conditions.
arxiv:1103.0093
We study the boundary behaviour of a meromorphic map $f: \mathbb{C} \to \widehat{\mathbb{C}}$ on its invariant simply connected Fatou component $U$. To this aim, we develop the theory of accesses to boundary points of $U$ and their relation to the dynamics of $f$. In particular, we establish a correspondence between invariant accesses from $U$ to infinity or weakly repelling points of $f$ and boundary fixed points of the associated inner function on the unit disc. We apply our results to describe the accesses to infinity from invariant Fatou components of the Newton maps.
arxiv:1411.5473
We develop a method for showing that various modal logics that are valid in their countably generated canonical Kripke frames must also be valid in their uncountably generated ones. This is applied to many systems, including the logics of finite width, and a broader class of multimodal logics of `finite achronal width' that are introduced here.
arxiv:2207.12596
A nonlinear flag is a finite sequence of nested closed submanifolds. We study the geometry of Fréchet manifolds of nonlinear flags, in this way generalizing the nonlinear Grassmannians. As an application we describe a class of coadjoint orbits of the group of Hamiltonian diffeomorphisms that consist of nested symplectic submanifolds, i.e., symplectic nonlinear flags.
arxiv:2002.04364
The only instance when the general relativistic (GTR) collapse equations have been solved (almost) exactly to explicitly find the metric coefficients is the case of a homogeneous spherical dust (Oppenheimer and Snyder 1939, Phys. Rev. 56, 455). Equation (37) of their paper showed the formation of an event horizon for a collapsing homogeneous dust ball of mass $m$, in that the circumference radius of the outermost surface $r_b = r_0 = 2m$ in a proper time proportional to $r_0^{-1/2}$ in the limit of large Schwarzschild time $t = \infty$. But Eq. (37) was approximated from Eq. (36), whose essential character is $t \sim \log\left(\frac{y+1}{y-1}\right)$, where, at the boundary of the star, $y = r_b/r_0 = r_b/2m$. Since the argument of a logarithmic function cannot be negative, one must have $y \geq 1$, or $2m/r_b \leq 1$. This shows that, at least in this case, (i) trapped surfaces are not formed; (ii) if the collapse indeed proceeds up to $r = 0$, we must have $m = 0$; and (iii) the proper time taken for collapse is infinite. Thus, the gravitational mass of OS black holes is unique and equal to zero. In the preceding paper (astro-ph/9904162), we assumed the existence of a finite-mass BH and studied its properties in terms of Kruskal-Szekeres coordinates, and we showed that the radial geodesic of a material particle, which must be timelike at $r = 2m$ if $m > 0$, actually becomes null. This independently showed that BHs have a unique mass, $m = 0$.
arxiv:astro-ph/9904163
We briefly review some examples of confinement which arise in condensed matter physics. We focus on two instructive cases: the off-critical Ising model in a magnetic field, and an array of weakly coupled (extended) Hubbard chains in the Wigner crystal phase. In the appropriate regime, the elementary excitations in these 1+1 and quasi-one-dimensional systems are confined into `mesons'. Although the models are generically non-integrable, quantum mechanics and form factor techniques yield valuable information.
arxiv:cond-mat/0409602
A highly coherent wave is favorable for applications in which phase retrieval is necessary, yet a highly coherent wave is prone to the Rayleigh fading phenomenon as it passes through a medium of random scatterers. As an exemplary case, phase-sensitive optical time-domain reflectometry ($\phi$-OTDR) utilizes coherent interference of backscattered light along a fiber to achieve ultra-sensitive acoustic sensing, but sensing locations with fading will not be functional. Beyond the sensing domain, fading is also ubiquitous in optical imaging and wireless telecommunication, and is therefore of great interest. In this paper, we theoretically describe and experimentally verify how the fading phenomenon in one-dimensional optical scatterers is suppressed with an arbitrary number of independent probing channels. We first explain theoretically why fading causes severe noise in the demodulated phase of $\phi$-OTDR; then the $M$-degree summation of incoherent scattered light waves is studied for the purpose of eliminating fading. Finally, the gain of the retrieved phase signal-to-noise ratio and its fluctuations are analytically derived and experimentally verified. This work provides a guideline for fading elimination in one-dimensional optical scatterers, and it also provides insight for optical imaging and wireless telecommunication.
arxiv:1812.03985
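The benefit of summing independent probing channels can be illustrated with a toy speckle-statistics simulation: the intensity of a single Rayleigh-fading channel is exponentially distributed (relative standard deviation 1), and summing M independent channels reduces the relative fluctuation roughly as 1/sqrt(M). This is a statistical sketch only, not the paper's $\phi$-OTDR demodulation; the sample size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def fading_depth(m, n=200_000):
    """Relative standard deviation of total intensity when m independent
    speckle (Rayleigh-fading) channels are summed incoherently."""
    # Each channel: |circular complex Gaussian field|^2 -> exponential intensity.
    fields = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    intensity = (np.abs(fields) ** 2).sum(axis=0)
    return intensity.std() / intensity.mean()

for m in (1, 4, 16):
    print(m, fading_depth(m))   # falls off roughly as 1/sqrt(m)
```

The summed intensity follows a Gamma(M) distribution, whose relative spread is exactly $1/\sqrt{M}$, which is the sense in which extra independent channels suppress fading.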
The two most intense wildfires of the last decade, which took place in Canada in 2017 and Australia in 2019-2020, were followed by large injections of smoke into the stratosphere due to pyroconvection. It was discovered by Khaykin et al. (2020, doi:10.1038/s43247-020-00022-5) and Kablick et al. (2020, doi:10.1029/2020GL088101) that, after the Australian event, part of this smoke self-organized as anticyclonic confined vortices that rose in the mid-latitude stratosphere up to 35 km. Based on CALIOP observations and the ERA5 reanalysis, this new study analyzes the Canadian case and finds, similarly, that a large plume penetrated the stratosphere by 12 August 2017 and got trapped within a meso-scale anticyclonic structure which travelled across the Atlantic. It then broke into three offspring that could be followed until mid-October, performing three round-the-world journeys and rising up to 23 km. We analyze the dynamical structure of the vortices produced by these two wildfires and demonstrate how they are maintained by the assimilation of data from instruments measuring the signature of the vortices in the temperature and ozone fields. We propose that these vortices can be seen as bubbles of low absolute potential vorticity and smoke, carried vertically across the stratification from the troposphere into the middle stratosphere by their internal heating, against the descending flux of the Brewer-Dobson circulation.
arxiv:2011.13239
We investigate how observations of synchrotron intensity fluctuations can be used to probe the sonic and Alfvénic Mach numbers of interstellar turbulence, based on mock observations performed on simulations of magnetohydrodynamic turbulence. We find that the structure function slope, and a diagnostic of anisotropy that we call the integrated quadrupole ratio modulus, both depend on the Alfvénic Mach number. However, these statistics also depend on the orientation of the mean magnetic field in the synchrotron emitting region relative to our line of sight, and this creates a degeneracy that cannot be broken by observations of synchrotron intensity alone. We conclude that the polarization of synchrotron emission could be analyzed to break this degeneracy, and suggest that this will be possible with the Square Kilometre Array.
arxiv:1603.02751
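A minimal illustration of the structure-function diagnostic: for a 1D synthetic signal with power spectrum $P(k) \propto k^{-\beta}$, the second-order structure function scales as $r^{\beta-1}$ for $1 < \beta < 3$, so its log-log slope probes the underlying spectrum. The synthetic field below is a stand-in under that assumption, not a synchrotron mock observation from the paper.

```python
import numpy as np

def structure_function(signal, lags):
    """Second-order structure function SF(r) = <(I(x + r) - I(x))^2>."""
    return np.array([np.mean((signal[r:] - signal[:-r]) ** 2) for r in lags])

# Synthetic 1D profile with power-law spectrum P(k) ~ k^(-8/3).
rng = np.random.default_rng(0)
n = 2 ** 14
k = np.fft.rfftfreq(n)[1:]                 # skip the k = 0 mode
amp = k ** (-8 / 6)                        # sqrt of the power spectrum
phases = np.exp(2j * np.pi * rng.random(k.size))
spec = np.concatenate(([0.0], amp * phases))
signal = np.fft.irfft(spec, n)

lags = np.arange(4, 200, 4)
sf = structure_function(signal, lags)
slope = np.polyfit(np.log(lags), np.log(sf), 1)[0]
print(slope)                               # expected near beta - 1 = 5/3
```

Finite resolution and the limited fitting range bias the measured slope somewhat, which mirrors the practical caveats of applying this statistic to real maps.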
In this paper we present the Feynman-de Broglie-Bohm propagator for a semiclassical formulation of the Gross-Pitaevskii equation.
arxiv:1001.3384
Relative cross sections for the valence-shell photoionisation (PI) of $^2\mathrm{S}$ ground-level and $^2\mathrm{D}$ metastable Ca$^+$ ions were measured with high energy resolution by using the ion-photon merged-beams technique at the Advanced Light Source. Overview measurements were performed with a full width at half maximum bandpass of $\Delta E = 17$ meV, covering the energy range 20 eV - 56 eV. Details of the PI spectrum were investigated at energy resolutions reaching the level of $\Delta E = 3.3$ meV. The photon energy scale was calibrated with an uncertainty of $\pm 5$ meV. By comparison with previous absolute measurements by Kjeldsen et al. in the energy range 28 eV - 30.5 eV and by Lyon et al. in the energy range 28 eV - 43 eV, the present experimental high-resolution data were normalised to an absolute cross-section scale, and the fraction of metastable Ca$^+$ ions present in the parent ion beam was determined to be 18 $\pm$ 4%. Large-scale R-matrix calculations using the Dirac Coulomb approximation and employing 594 levels in the close-coupling expansion were performed for the Ca$^+(3s^2 3p^6 4s~^2\mathrm{S}_{1/2})$ and Ca$^+(3s^2 3p^6 3d~^2\mathrm{D}_{3/2,5/2})$ levels. The experimental data are compared with the results of these calculations and previous theoretical and experimental studies.
arxiv:1710.06475
As distributed systems grow in scale and complexity, the need for flexible automation of systems management functions also grows. We outline a framework for building tools that provide distributed, scalable, declarative, modular, and continuous automation for distributed systems. We focus on four points of design: 1) a state-management approach that prescribes the source of truth for configured and discovered system states; 2) a technique to solve the declarative unification problem for a class of automation problems, providing state convergence and modularity; 3) an eventual-consistency approach to state synchronization which provides automation at scale; 4) an event-driven architecture that provides always-on state enforcement. We describe the methodology, the software architecture for the framework, and the constraints required for these techniques to apply to an automation problem. We overview a reference application built on this framework that provides state-aware system provisioning and node lifecycle management, highlighting key advantages. We conclude with a discussion of current and future applications.
arxiv:2104.13263
Recently it has been argued that some of the fine-tuning problems of MSSM inflation, associated with the existence of a saddle point along a flat direction, may be solved naturally in a class of supergravity models. Here we extend the analysis and show that the constraints on the Kähler potentials in these models are considerably relaxed when the location of the saddle point is treated as a free variable. We also examine the effect of supergravity corrections on inflationary predictions and find that they can slightly alter the value of the spectral index. As an example, for flat direction field values $|\bar{\phi}_0| = 1\times 10^{-4}\,M_P$ we find $n \sim 0.92\ldots 0.94$, while the prediction of MSSM inflation without any corrections is $n \sim 0.92$.
arxiv:0710.1613
Spatial associations have been found between interstellar neutral hydrogen (HI) emission morphology and small-scale structure observed by the Wilkinson Microwave Anisotropy Probe (WMAP) in an area bounded by l = 60 & 180 deg, b = 30 & 70 deg, which was the primary target for this study. This area is marked by the presence of highly disturbed local HI and a preponderance of intermediate- and high-velocity gas. The HI distribution toward the brightest peaks in the WMAP internal linear combination (ILC) map for this area is examined, and by comparing with a second area on the sky it is demonstrated that the associations do not appear to be the result of chance coincidence. Close examination of several of the associations reveals important new properties of diffuse interstellar neutral hydrogen structure. In the case of high-velocity cloud MI, the HI and WMAP ILC morphologies are similar, and an excess of soft X-ray emission and H-alpha emission has been reported for this feature. It is suggested that the small angular-scale, high-frequency continuum emission observed by WMAP may be produced at the surfaces of HI features interacting with one another, or at the interface between moving HI structures and regions of enhanced plasma density in the surrounding interstellar medium. It is possible that dust grains play a role in producing the emission. However, the primary purpose of this report is to draw attention to these apparent associations without offering an unambiguous explanation as to the relevant emission mechanism(s).
arxiv:0704.1125
The extraction of the nuclear incompressibility from isoscalar giant monopole resonance (GMR) measurements is analysed. Both pairing and mutually enhanced magicity (MEM) effects play a role in the shift of the GMR energy between the doubly closed shell $^{208}$Pb nucleus and other Pb isotopes. Pairing effects are microscopically predicted, whereas the MEM effect is phenomenologically evaluated. Accurate measurements of the GMR in open-shell Pb isotopes are called for.
arxiv:0907.3423
Low-rank modeling generally refers to a class of methods that solve problems by representing variables of interest as low-rank matrices. It has achieved great success in various fields including computer vision, data mining, signal processing and bioinformatics. Recently, much progress has been made in the theories, algorithms and applications of low-rank modeling, such as exact low-rank matrix recovery via convex programming and matrix completion applied to collaborative filtering. These advances have brought more and more attention to this topic. In this paper, we review the recent advances in low-rank modeling, the state-of-the-art algorithms, and related applications in image analysis. We first give an overview of the concept of low-rank modeling and the challenging problems in this area. Then, we summarize the models and algorithms for low-rank matrix recovery and illustrate their advantages and limitations with numerical experiments. Next, we introduce a few applications of low-rank modeling in the context of image analysis. Finally, we conclude this paper with some discussions.
arxiv:1401.3409
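One of the canonical algorithms a review like this covers is nuclear-norm-based matrix completion. A minimal soft-impute / singular-value-thresholding sketch is below; the matrix sizes, rank, observation fraction, and threshold are illustrative choices, not parameters from the paper.

```python
import numpy as np

def svt_complete(M, mask, tau=0.5, n_iter=500):
    """Soft-impute sketch for matrix completion: repeatedly fill in the
    observed entries, then soft-threshold the singular values by tau."""
    X = np.zeros_like(M)
    for _ in range(n_iter):
        X = mask * M + (~mask) * X                 # keep observed entries fixed
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt    # shrink singular values
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 30))  # rank-5 truth
mask = rng.random(A.shape) < 0.6                                 # 60% observed
X = svt_complete(A, mask)
rel_err = np.linalg.norm(X - A) / np.linalg.norm(A)
print(rel_err)
```

The soft threshold `tau` plays the role of the nuclear-norm regularization weight: larger values give lower-rank (but more biased) completions.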
Instrumental variable methods provide a powerful approach to estimating causal effects in the presence of unobserved confounding. But a key challenge when applying them is the reliance on untestable "exclusion" assumptions that rule out any relationship between the instrumental variable and the response that is not mediated by the treatment. In this paper, we show how to perform consistent IV estimation despite violations of the exclusion assumption. In particular, we show that when one has multiple candidate instruments, only a majority of these candidates (or, more generally, the modal candidate-response relationship) needs to be valid to estimate the causal effect. Our approach uses an estimate of the modal prediction from an ensemble of instrumental variable estimators. The technique is simple to apply and is "black-box" in the sense that it may be used with any instrumental variable estimator as long as the treatment effect is identified for each valid instrument independently. As such, it is compatible with recent machine-learning-based estimators that allow for the estimation of conditional average treatment effects (CATE) on complex, high-dimensional data. Experimentally, we achieve accurate estimates of conditional average treatment effects using an ensemble of deep network-based estimators, including on a challenging simulated Mendelian randomization problem.
arxiv:2006.11386
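A toy version of the modal-estimator idea can be sketched as follows: simulate candidate instruments of which a minority violate exclusion, form a Wald-ratio IV estimate from each, and report the estimate with the densest neighbourhood as a crude mode. The data-generating numbers and the neighbourhood-count mode are illustrative only, not the paper's ensemble of deep CATE estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
beta = 2.0                                  # true causal effect of x on y

# Seven candidate instruments; the last three violate exclusion
# via a direct effect on the outcome.
Z = rng.standard_normal((n, 7))
u = rng.standard_normal(n)                  # unobserved confounder
x = Z.sum(axis=1) + u + rng.standard_normal(n)
direct = np.array([0, 0, 0, 0, 1.5, -2.0, 3.0])
y = beta * x + Z @ direct + 2 * u + rng.standard_normal(n)

# One Wald-ratio IV estimate per candidate instrument.
est = np.array([np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1] for z in Z.T])

def mode_estimate(v, bw=0.3):
    """Crude mode: the estimate with the most neighbours within bw."""
    counts = [(np.abs(v - c) < bw).sum() for c in v]
    return v[int(np.argmax(counts))]

print(est.round(2), mode_estimate(est))
```

The four valid instruments cluster around the true effect of 2, while each invalid instrument is biased by its direct effect (here landing near 3.5, 0 and 5), so the modal estimate recovers the causal effect even though valid instruments are not a 4-of-7 "supermajority" on any single value.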
In their 2007 paper, Jarvis, Kaufmann, and Kimura defined the full orbifold $K$-theory of an orbifold ${\mathfrak X}$, analogous to the Chen-Ruan orbifold cohomology of ${\mathfrak X}$ in that it uses the obstruction bundle as a quantum correction to the multiplicative structure. We give an explicit algorithm for the computation of this orbifold invariant in the case when ${\mathfrak X}$ arises as an abelian symplectic quotient. Our methods are integral $K$-theoretic analogues of those used in the orbifold cohomology case by Goldin, Holm, and Knutson in 2005. We rely on the $K$-theoretic Kirwan surjectivity methods developed by Harada and Landweber. As a worked class of examples, we compute the full orbifold $K$-theory of weighted projective spaces that occur as a symplectic quotient of a complex affine space by a circle. Our computations hold over the integers, and in the particular case of weighted projective spaces, we show that the associated invariant is torsion-free.
arxiv:0812.4964
A strongly non-degenerate mixed function admits a Milnor open book structure on a sufficiently small sphere. We introduce the notion of a {\em holomorphic-like} mixed function and show that a link defined by such a mixed function has a canonical contact structure. Then we show that this contact structure for a certain holomorphic-like mixed function is carried by the Milnor open book.
arxiv:1204.5528
Modern machine learning approaches excel in static settings where a large amount of i.i.d. training data are available for a given task. In a dynamic environment, though, an intelligent agent needs to be able to transfer knowledge and re-use learned components across domains. It has been argued that this may be possible through causal models, aiming to mirror the modularity of the real world in terms of independent causal mechanisms. However, the true causal structure underlying a given set of data is generally not identifiable, so it is desirable to have means to quantify differences between models (e.g., between the ground truth and an estimate), on both the observational and interventional level. In the present work, we introduce the interventional Kullback-Leibler (IKL) divergence to quantify both structural and distributional differences between models, based on a finite set of multi-environment distributions generated by interventions from the ground truth. Since we generally cannot quantify all differences between causal models for every finite set of interventional distributions, we propose a sufficient condition on the intervention targets to identify subsets of observed variables on which the models provably agree or disagree.
arxiv:2302.05380
Since the initial development of one-dimensional electron gases (1DEGs) two decades ago, there has been intense interest in both the fundamental physics and the potential applications, including quantum computation, of these quantum transport systems. While experimental measurements of 1DEGs reveal the conductance through a system, they do not probe other critical aspects of the underlying physics, including the energy eigenstate distribution, magnetic field effects, and band structure. These are better accessed by theoretical modeling, especially modeling of the energy and wavefunction distribution across a system: the local density of states (DOS). In this thesis, a numerical Green's function model of the local DOS in a 1DEG has been developed and implemented. The model uses an iterative method on a discrete lattice to calculate Green's functions by vertical slice across a 1DEG. The numerical model is adaptable to arbitrary surface gate geometry and arbitrary finite magnetic field conditions. When compared with exact analytical results for the local DOS, waveband structure, and real band structure, the model returned very accurate results. A second numerical model was also developed that computes the transmission and reflection coefficients through the quantum system based on the Landauer-Büttiker formalism. The combination of the local DOS model with the transmission coefficients model was applied to two current research topics: antidot behavior and zero-dimensional to one-dimensional tunneling. These models can be further applied to investigate a wide range of quantum transport phenomena.
arxiv:0910.0186
The field of deep learning is rich with empirical evidence of human-like performance on a variety of regression, classification, and control tasks. However, despite these successes, the field lacks strong theoretical error bounds and consistent measures of network generalization and learned invariances. In this work, we introduce two new measures, the GI-score and Pal-score, that capture a deep neural network's generalization capabilities. Inspired by the Gini coefficient and Palma ratio, measures of income inequality, our statistics are robust measures of a network's invariance to perturbations that accurately predict generalization gaps, i.e., the difference between accuracy on training and test sets.
arxiv:2104.03469
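For reference, the Gini coefficient that inspires the GI-score can be computed as below. This is the standard income-inequality statistic applied to an arbitrary non-negative vector, not the paper's exact perturbation-based definition.

```python
import numpy as np

def gini(v):
    """Gini coefficient of a non-negative vector: 0 means perfectly even,
    values approaching 1 mean the mass is concentrated in a few entries."""
    v = np.sort(np.asarray(v, dtype=float))
    n = v.size
    # Standard closed form: G = 2 * sum_i(i * v_i) / (n * sum(v)) - (n + 1)/n
    return 2 * np.sum(np.arange(1, n + 1) * v) / (n * v.sum()) - (n + 1) / n

print(gini([1, 1, 1, 1]))     # perfectly even -> 0.0
print(gini([0, 0, 0, 10]))    # fully concentrated -> (n - 1)/n = 0.75
```

Applied to per-perturbation accuracy drops, a statistic of this shape distinguishes networks whose sensitivity is spread evenly across perturbations from networks dominated by a few brittle directions.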
We investigate semi-classical properties of the Maupertuis-Jacobi correspondence in 2D for families of Hamiltonians $(H_\lambda(x,\xi), {\cal H}_\lambda(x,\xi))$, when ${\cal H}_\lambda(x,\xi)$ is a perturbation of a completely integrable Hamiltonian $\widetilde{\cal H}$ verifying some isoenergetic non-degeneracy conditions. Assuming the Weyl $h$-PDO $H^w_\lambda$ has only discrete spectrum near $E$, and the energy surface $\{\widetilde{\cal H} = {\cal E}\}$ is separated by some pairwise disjoint Lagrangian tori, we show that most of the eigenvalues of $\hat H_\lambda$ near $E$ are asymptotically degenerate as $h \to 0$. This applies in particular to the determination of trapped modes by an island in the linear theory of water waves. We also consider quasi-modes localized near rational tori. Finally, we discuss the breaking of the Maupertuis-Jacobi correspondence on the equator of the Katok sphere.
arxiv:1305.3785
Spectrally efficient multi-antenna wireless communication systems are a key challenge as service demands continue to increase. At the same time, powering up radio access networks faces environmental and regulatory limitations. In order to achieve more power efficiency, we design a directional modulation precoder by considering an $M$-QAM constellation, particularly with $M = 4, 8, 16, 32$. First, extended detection regions are defined for the desired constellations using analytical geometry. Then, constellation points are placed at the optimal positions of these regions while the minimum Euclidean distance to adjacent constellation points and detection region boundaries is kept as in conventional $M$-QAM modulation. For further power efficiency, and a symbol error rate similar to that of the fixed design at high SNR, relaxed detection regions are modeled for the inner points of the $M = 16, 32$ constellations. The modeled extended and relaxed detection regions, as well as the modulation characteristics, are utilized to formulate symbol-level precoder design problems for directional modulation that minimize the transmission power while preserving the minimum required SNR at the destination. In addition, the extended and relaxed detection regions are used for precoder design to minimize the output of each power amplifier. We transform the design problems into convex ones and devise an interior-point path-following iterative algorithm to solve them, providing details on finding the initial values of the parameters and the starting point. Results show that, compared to the benchmark schemes, the proposed method performs better in terms of power and peak power reduction as well as symbol error rate reduction for a wide range of SNRs.
arxiv:1702.06878
The advancement of science relies on the exchange of ideas across disciplines and the integration of diverse knowledge domains. However, tracking knowledge flows and interdisciplinary integration in rapidly evolving, multidisciplinary fields remains a significant challenge. This work introduces a novel network analysis framework to study the dynamics of knowledge transfer directly from citation data. By applying dynamic community detection to cumulative, time-evolving citation networks, we can identify research areas as groups of papers sharing knowledge sources and outputs. Our analysis characterises the life-cycles and knowledge transfer patterns of these dynamic communities over time. We demonstrate our approach through a case study of explainable artificial intelligence (XAI) research, an emerging interdisciplinary field at the intersection of machine learning, statistics, and psychology. Key findings include: (i) knowledge transfer between these important foundational topics and the contemporary topics in XAI research is limited, and the extent of knowledge transfer varies across different contemporary research topics; (ii) certain application domains exist as isolated "knowledge silos"; (iii) significant "knowledge gaps" are identified between related XAI research areas, suggesting opportunities for cross-pollination and improved knowledge integration. By mapping interdisciplinary integration and bridging knowledge gaps, this work can inform strategies to synthesise ideas from disparate sources and drive innovation. More broadly, our proposed framework enables new insights into the evolution of knowledge ecosystems directly from citation data, with applications spanning literature review, research planning, and science policy.
arxiv:2406.03921
In this paper we present an overview of recent progress in studies of QCD at finite temperature and density within the functional renormalization group (FRG) approach. The FRG is a nonperturbative continuum field approach, in which quantum, thermal and density fluctuations are integrated in successively with the evolution of the renormalization group (RG) scale. The FRG results for the QCD phase structure and the location of the critical end point (CEP), the QCD equation of state (EoS), the magnetic EoS, baryon number fluctuations confronted with recent experimental measurements, various critical exponents, spectral functions in the critical region, the dynamical critical exponent, etc., are presented. Recent estimates of the location of the CEP from first-principles QCD calculations within the FRG and Dyson-Schwinger equations, which pass lattice benchmark tests at small baryon chemical potentials, converge in a rather small region at baryon chemical potentials of about 600 MeV. A region of inhomogeneous instability, indicated by a negative wave function renormalization, is found for $\mu_B \gtrsim 420$ MeV. It is found that the non-monotonic dependence of the kurtosis of the net-proton number distributions on the beam collision energy observed in experiments could arise from the increasingly sharp crossover in the regime of low collision energy.
arxiv:2205.00468
We construct a heavy fermion representation for twisted bilayer graphene (TBG) systems. Two local orbitals (per spin/valley) are found analytically; they are exactly the maximally localized zero modes of the continuum Hamiltonian near the AA-stacking center. They have similar properties to the Wannier functions in [arXiv:2111.05865v2], but also have a clear interpretation as the zeroth pseudo-Landau levels (ZLL) of Dirac fermions under the uniform strain field created by twisting [arXiv:1810.03103v3]. The electronic states of TBG can be viewed as the hybridization between these ZLL orbitals and other itinerant states, which can be obtained following the standard procedure of the orthogonalized plane wave method. The "heavy fermion" model for TBG separates the strongly correlated components from the itinerant components and provides a solid base for a comprehensive understanding of the exotic physics in TBG.
arxiv:2209.09515
adding activity or driving to a thermal system may modify its phase diagram and response functions. we study that effect for a curie - weiss model where the thermal bath switches rapidly between two temperatures. the critical temperature moves with the nonequilibrium driving, opening up a new region of stability for the paramagnetic phase ( zero magnetization ) at low temperatures. furthermore, phase coexistence between the paramagnetic and ferromagnetic phases becomes possible at low temperatures. following the excess heat formalism, we calculate the nonequilibrium thermal response and study its behaviour near phase transitions. where the specific heat at the critical point makes a finite jump in equilibrium ( discontinuity ), it diverges once we add the second thermal bath. finally, the nonequilibrium specific heat also goes to zero exponentially fast with vanishing temperature, realizing an extended third law.
arxiv:2307.01795
we have analyzed archival asca data on the soft x - ray transient source v404 cyg in quiescence. we find that in the energy range 0.7 to 8.5 kev the spectrum is a hard power - law with a photon spectral index between 1.8 and 2.6 ( 90 % confidence limits ). we present a model of v404 cyg in which the accretion flow has two components : ( 1 ) an outer thin disk with a small annular extent, and ( 2 ) a large interior region where the flow is advection - dominated. nearly all the radiation in the infrared, optical, uv and x - ray bands is from the advection - dominated zone ; the thin disk radiates primarily in the infrared where it contributes about ten percent of the observed flux. the spectrum we calculate with this model is in excellent agreement with the asca x - ray data presented here, as well as with previous optical data. moreover, the fit is very insensitive to the choice of parameters such as black hole mass, orbital inclination, viscosity coefficient $\alpha$, and magnetic field strength. we consider the success of the model to be strong support for the advection - dominated accretion paradigm, and further evidence of the black hole nature of v404 cyg. we discuss strategies whereby systems with advection - dominated accretion could be used to prove the reality of event horizons in black holes.
arxiv:astro-ph/9610014
small angle neutron scattering ( sans ) is used to measure the density of heavy water contained in 1 - d cylindrical pores of the mesoporous silica material mcm - 41 - s - 15, with pores of diameter 15 ± 1 a. in these pores the homogeneous nucleation process of bulk water at 235 k does not occur and the liquid can be supercooled down to at least 160 k. the analysis of sans data allows us to determine the absolute value of the density of d2o as a function of temperature. we observe a density minimum at 210 ± 5 k with a value of 1.041 ± 0.003 g / cm3. we show that the results are consistent with the predictions of molecular dynamics simulations of supercooled bulk water. this is the first experimental report of the existence of the density minimum in supercooled water.
arxiv:0704.2221
we consider the effects of an outflow on radiation escaping from the infalling envelope around a massive protostar. using numerical radiative transfer calculations, we show that outflows with properties comparable to those observed around massive stars lead to significant anisotropy in the stellar radiation field, which greatly reduces the radiation pressure experienced by gas in the infalling envelope. this means that radiation pressure is a much less significant barrier to massive star formation than has previously been thought.
arxiv:astro-ph/0411526
a necessary condition is given for a sequence of identically distributed and pairwise positively quadrant dependent random variables obeying the strong laws of large numbers with respect to the normalising constants $n^{1/p}$ $(1 \leqslant p < 2)$.
arxiv:2004.02949
all the new layer perovskite superconductors seem to show a phenomenon of symmetry mixing with respect to the order parameter. an analysis of the different alternatives of mixing and of the extent to which they could be present is carried out. for the particular case of s + id symmetry of the gap, the temperature dependence of the specific heat ( $c_{es}$ ) and the thermodynamic critical magnetic field ( $h_c$ ) are calculated. a double peak transition is observed in $c_{es}(t)$ in the mixed regime while the single peak behavior is recovered for a purely symmetric state ( s or d ). $c_{es}$ presents a quadratic law at low temperatures for a d - wave gap and, for the s - wave one, the typical exponential attenuation. the temperature dependence of $h_c$ shows a clear phase transition of second order at temperatures where the d - wave component becomes negligible. a comparison with other results and experiments is done.
arxiv:cond-mat/9905124
in this paper, we outline a method to compute supersymmetric one - loop integrands in ten - dimensional sym theory. it relies on the constructive interplay between their cubic - graph organization and brst invariance of the underlying pure spinor superstring description. the five - and six - point amplitudes are presented in a manifestly local form where the kinematic dependence is furnished by brst - covariant expressions in pure spinor superspace. at five points, the local kinematic numerators are shown to satisfy the bcj duality between color and kinematics leading to supergravity amplitudes as a byproduct. at six points, the sources of the hexagon anomaly are identified in superspace as systematic obstructions to brst invariance. our results are expected to reproduce any integrated sym amplitude in dimensions $d < 8$.
arxiv:1410.0668
we calculate the typical bipartite entanglement entropy $\langle s_a \rangle_n$ in systems containing indistinguishable particles of any kind as a function of the total particle number $n$, the volume $v$, and the subsystem fraction $f = v_a / v$, where $v_a$ is the volume of the subsystem. we expand our result as a power series $\langle s_a \rangle_n = a f v + b \sqrt{v} + c + o(1)$, and find that $c$ is universal ( i. e., independent of the system type ), while $a$ and $b$ can be obtained from a generating function characterizing the local hilbert space dimension. we illustrate the generality of our findings by studying a wide range of different systems, e. g., bosons, fermions, spins, and mixtures thereof. we provide evidence that our analytical results describe the entanglement entropy of highly excited eigenstates of quantum - chaotic spin and boson systems, which is distinct from that of integrable counterparts.
arxiv:2310.19862
fine - grained visual classification ( fgvc ) involves classifying closely related sub - classes. this task is difficult due to the subtle differences between classes and the high intra - class variance. moreover, fgvc datasets are typically small and challenging to gather, thus highlighting a significant need for effective data augmentation. recent advancements in text - to - image diffusion models offer new possibilities for augmenting classification datasets. while these models have been used to generate training data for classification tasks, their effectiveness in full - dataset training of fgvc models remains under - explored. recent techniques that rely on text2image generation or img2img methods often struggle to generate images that accurately represent the class while modifying them to a degree that significantly increases the dataset's diversity. to address these challenges, we present saspa : structure and subject preserving augmentation. contrary to recent methods, our method does not use real images as guidance, thereby increasing generation flexibility and promoting greater diversity. to ensure accurate class representation, we employ conditioning mechanisms, specifically by conditioning on image edges and subject representation. we conduct extensive experiments and benchmark saspa against both traditional and recent generative data augmentation methods. saspa consistently outperforms all established baselines across multiple settings, including full dataset training, contextual bias, and few - shot classification. additionally, our results reveal interesting patterns in using synthetic data for fgvc models ; for instance, we find a relationship between the amount of real data used and the optimal proportion of synthetic data. code is available at https://github.com/eyalmichaeli/saspa-aug.
arxiv:2406.14551
we use a monte carlo implementation of recently developed models of double diffraction to assess the sensitivity of the lhc experiments to standard model higgs bosons produced in exclusive double diffraction. the signal is difficult to extract, due to experimental limitations related to the first level trigger, and to contamination by inclusive double diffractive background. assuming the above difficulties can be overcome, the expected signal - to - background ratio is presented as a function of the experimental resolution on the missing mass. injecting a missing mass resolution of 2 gev, a signal - to - background ratio of about 0.5 is obtained ; a resolution of 1 gev brings a signal - to - background ratio of 1. this result is lower than previous estimates, and the discrepancy is explained.
arxiv:hep-ph/0406061
we analyze the following group learning problem in the context of opinion diffusion : consider a network with $m$ users, each facing $n$ options. in a discrete time setting, at each time step, each user chooses $k$ out of the $n$ options, and receives randomly generated rewards, whose statistics depend on the options chosen as well as the user itself, and are unknown to the users. each user aims to maximize their expected total rewards over a certain time horizon through an online learning process, i. e., a sequence of exploration ( sampling the return of each option ) and exploitation ( selecting empirically good options ) steps. within this context we consider two group learning scenarios, ( 1 ) users with uniform preferences and ( 2 ) users with diverse preferences, and examine how a user should construct its learning process to best extract information from others' decisions and experiences so as to maximize its own reward. performance is measured in {\em weak regret}, the difference between the user's total reward and the reward from a user - specific best single - action policy ( i. e., always selecting the set of options generating the highest mean rewards for this user ). within each scenario we also consider two cases : ( i ) when users exchange full information, meaning they share the actual rewards they obtained from their choices, and ( ii ) when users exchange limited information, e. g., only their choices but not rewards obtained from these choices.
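the weak - regret objective above can be made concrete with a toy simulation. the sketch below is illustrative only : the epsilon - greedy learner, the gaussian reward noise, and all parameters are our own assumptions, not the paper's algorithm. a single user picks $k$ of $n$ options each step, and weak regret is measured against the fixed policy that always plays the $k$ options with the highest true means.

```python
import random

def weak_regret_sketch(means, k, horizon, eps=0.1, seed=0):
    """Run a simple epsilon-greedy learner that picks k of n options
    per step, and return its weak regret: the gap to the policy that
    always plays the k options with the highest true mean rewards."""
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n
    estimates = [0.0] * n
    total_reward = 0.0
    for _ in range(horizon):
        if rng.random() < eps:
            choice = rng.sample(range(n), k)  # exploration step
        else:                                 # exploitation step
            choice = sorted(range(n), key=lambda i: -estimates[i])[:k]
        for i in choice:
            r = means[i] + rng.gauss(0.0, 0.1)  # noisy reward draw
            counts[i] += 1
            estimates[i] += (r - estimates[i]) / counts[i]
            total_reward += r
    best = sum(sorted(means, reverse=True)[:k]) * horizon
    return best - total_reward
```

the returned quantity is exactly the (realized) weak regret : the benchmark is a single fixed action set, not the best possible adaptive policy.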
arxiv:1309.3697
by applying the ward identity found by weinberg, two new relations between the amplitude of $a_1 \rightarrow \rho \pi$ and other physical quantities have been found.
arxiv:hep-ph/9504305
it is shown that the well known racah sum rule and biedenharn - elliott identity satisfied by the recoupling coefficients or by the $6-j$ symbols of the usual rotation $so(3)$ algebra can be extended to the corresponding features of the super - rotation $osp(1|2)$ superalgebra. the structure of the sum rules is completely similar in both cases ; the only difference concerns the signs, which are more involved in the super - rotation case.
arxiv:hep-th/9402040
we evaluate the predictive power of the neutrino mass matrices arising from the seesaw mechanism subject to texture zeros and satisfying a cyclic permutation invariance. we find that only two of the eight possible patterns of the neutrino mass matrices are invariant under a cyclic permutation. the two resulting neutrino mass matrices which are invariant under a cyclic permutation can be used qualitatively to explain the neutrino mixing phenomena for solar neutrinos and to derive the mixing angle that agrees with the experimental data.
arxiv:0705.3290
geodesics are studied in one of the weyl metrics, referred to as the m - q solution. first, arguments are provided supporting our belief that this space - time is the most suitable ( among the known solutions of the weyl family ) for discussing the properties of strong quasi - spherical gravitational fields. then, the behaviour of geodesics is compared with the spherically symmetric situation, bringing out the sensitivity of the trajectories to deviations from spherical symmetry. the change of sign in the proper radial acceleration of test particles moving radially along the symmetry axis, close to the $r = 2m$ surface and related to the quadrupole moment of the source, deserves particular attention.
arxiv:gr-qc/0402052
in this article we study several classes of 'small' 2 - groups : we complete the classification, started in [ stancu, 2006 ], of all saturated fusion systems on metacyclic p - groups for all primes p. we consider suzuki 2 - groups, and classify all center - free saturated fusion systems on 2 - groups of 2 - rank 2. we end by classifying all possible f - centric, f - radical subgroups in saturated fusion systems on 2 - groups of 2 - rank 2.
arxiv:1007.1639
let $g$ be a simple graph and let $p$ and $q$ be two integer - valued functions on $v(g)$ with $p < q$ in which for each $v \in v(g)$, $q(v) \ge \frac{1}{2} d_g(v)$ and $p(v) \ge \frac{1}{2} q(v) - 2$. in this note, we show that $g$ has an orientation such that for each vertex $v$, $d^+_g(v) \in \{p(v), p(v)+1, q(v)-1, q(v)\}$ if and only if it has an orientation such that for each vertex $v$, $p(v) \le d^+_g(v) \le q(v)$, where $d^+_g(v)$ denotes the out - degree of $v$ in $g$. from this result, we refine a result due to addario - berry, dalal, and reed ( 2008 ) on the existence of degree constrained factors in bipartite simple graphs.
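the stated equivalence can be sanity - checked by brute force on a tiny instance. the sketch below is our own illustration ( the 4 - cycle and the bound functions are hypothetical, chosen to satisfy the note's hypotheses ) : it enumerates all $2^{|e|}$ orientations and compares the two existence conditions.

```python
from itertools import product

def orientations(edges, n):
    """Yield the out-degree sequence of every orientation of the edge list."""
    for bits in product((0, 1), repeat=len(edges)):
        out = [0] * n
        for (u, v), b in zip(edges, bits):
            out[u if b == 0 else v] += 1  # b selects the edge direction
        yield out

def exists_bounded(edges, n, p, q):
    """Is there an orientation with p(v) <= d+(v) <= q(v) for all v?"""
    return any(all(p[v] <= d[v] <= q[v] for v in range(n))
               for d in orientations(edges, n))

def exists_extreme(edges, n, p, q):
    """Is there an orientation with d+(v) in {p(v), p(v)+1, q(v)-1, q(v)}?"""
    allowed = [{p[v], p[v] + 1, q[v] - 1, q[v]} for v in range(n)]
    return any(all(d[v] in allowed[v] for v in range(n))
               for d in orientations(edges, n))
```

on small graphs satisfying the degree hypotheses, the two predicates agree, as the note asserts ; this is of course a spot check, not a proof.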
arxiv:2205.10883
the equation of state ( eos ) of dense nuclear matter is a key factor to determine the internal structure and properties of neutron stars. however, the eos of high - density nuclear matter has great uncertainty, mainly because terrestrial nuclear experiments cannot reproduce matter as dense as that in the inner core of a neutron star. fortunately, continuous improvements in astronomical observations of neutron stars provide the opportunity to inversely constrain the eos of high - density nuclear matter. a number of methods have been proposed to implement this inverse constraint, such as the bayesian analysis algorithm, lindblom's approach, and so on. the neural network algorithm is an effective new method developed in recent years. by employing a set of isospin - dependent parametric eoss as the training sample of the neural network algorithm, we set up an effective way to reconstruct the eos with relative accuracy from a few mass - radius data. based on the obtained neural network algorithms and according to the nicer observations on masses and radii of neutron stars with assumed precision, we get the inversely constrained eos and further calculate the corresponding macroscopic properties of the neutron star. the results are basically consistent with the constraint on the eos from huth et al. based on bayesian analysis. moreover, the results show that even though the neural network algorithm was obtained by using the finite parameterized eos as the training set, it is valid for any rational parameter combination of the parameterized eos model.
arxiv:2312.15629
we study heavy quark energy loss and thermalization in hot and dense nuclear medium. the diffusion of heavy quarks is calculated via a langevin equation, both for a static medium as well as for a quark - gluon plasma ( qgp ) medium generated by a ( 3 + 1 ) - dimensional hydrodynamic model. we investigate how the initial configuration of the qgp and its properties affect the final state spectra and elliptic flow of heavy flavor mesons and non - photonic electrons. it is observed that both the geometric anisotropy of the initial profile and the flow profile of the hydrodynamic medium play important roles in the heavy quark energy loss process and the development of elliptic flow. within our definition of the thermalization criterion and for reasonable values of the diffusion constant, we observe thermalization times that are longer than the lifetime of the qgp phase.
arxiv:1209.5405
sample efficiency and performance in the offline setting have emerged as significant challenges of deep reinforcement learning. we introduce q - value weighted regression ( qwr ), a simple rl algorithm that excels in these aspects. qwr is an extension of advantage weighted regression ( awr ), an off - policy actor - critic algorithm that performs very well on continuous control tasks, also in the offline setting, but has low sample efficiency and struggles with high - dimensional observation spaces. we perform an analysis of awr that explains its shortcomings and use these insights to motivate qwr. we show experimentally that qwr matches the state - of - the - art algorithms both on tasks with continuous and discrete actions. in particular, qwr yields results on par with sac on the mujoco suite and - with the same set of hyperparameters - yields results on par with a highly tuned rainbow implementation on a set of atari games. we also verify that qwr performs well in the offline rl setting.
arxiv:2102.06782
robotic arms are key components in fruit - harvesting robots. in agricultural settings, conventional serial or parallel robotic arms often fall short in meeting the demands for a large workspace, rapid movement, enhanced capability of obstacle avoidance and affordability. this study proposes a novel hybrid six - degree - of - freedom ( dof ) robotic arm that combines the advantages of parallel and serial mechanisms. inspired by yoga, we designed two sliders capable of moving independently along a single rail, acting as two feet. these sliders are interconnected with linkages and a meshed - gear set, allowing the parallel mechanism to lower itself and perform a split to pass under obstacles. this unique feature allows the arm to avoid obstacles such as pipes, tables and beams typically found in greenhouses. integrated with serially mounted joints, the patented hybrid arm is able to maintain the end's pose even when it moves with a mobile platform, facilitating fruit picking with the optimal pose in dynamic conditions. moreover, the hybrid arm's workspace is substantially larger, being almost three times the volume of ur3 serial arms and fourteen times that of the abb irb parallel arms. experiments show that the repeatability errors are 0.017 mm, 0.03 mm and 0.109 mm for the two sliders and the arm's end, respectively, providing sufficient precision for agricultural robots.
arxiv:2407.19826
the ability to predict and therefore to anticipate the future is an important attribute of intelligence. it is also of utmost importance in real - time systems, e. g. in robotics or autonomous driving, which depend on visual scene understanding for decision making. while prediction of the raw rgb pixel values in future video frames has been studied in previous work, here we introduce the novel task of predicting semantic segmentations of future frames. given a sequence of video frames, our goal is to predict segmentation maps of not yet observed video frames that lie up to a second or further in the future. we develop an autoregressive convolutional neural network that learns to iteratively generate multiple frames. our results on the cityscapes dataset show that directly predicting future segmentations is substantially better than predicting and then segmenting future rgb frames. prediction results up to half a second in the future are visually convincing and are much more accurate than those of a baseline based on warping semantic segmentations using optical flow.
arxiv:1703.07684
data is compressed by reducing its redundancy, but this also makes the data less reliable and more prone to errors. in this paper a novel approach to image compression is presented, based on a new method called the five modulus method ( fmm ). the new method consists of converting each pixel value in an 8 - by - 8 block into a multiple of 5 for each of the r, g and b arrays. after that, the new values can be divided by 5 to obtain values that are 6 bits in length for each pixel, which require less storage space than the original 8 - bit values. also, a new protocol for compressing the new values as a stream of bits is presented, which makes it easy to store and transfer the compressed image.
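the per - pixel arithmetic described above can be sketched as follows. this is an illustration under one assumption the abstract leaves open : we round to the *nearest* multiple of 5 ( flooring would work similarly, with a one - sided error ).

```python
def fmm_compress_pixel(value):
    """Map an 8-bit pixel value (0..255) to a 6-bit code (0..51):
    round to the nearest multiple of 5, then divide by 5."""
    nearest = min(5 * round(value / 5), 255)  # clamp 255 -> 255
    return nearest // 5

def fmm_decompress_pixel(code):
    """Invert the mapping up to the rounding error (at most 2 levels)."""
    return code * 5
```

every code is at most 255 / 5 = 51 < 2^6, which is exactly where the 8 - bit to 6 - bit saving per channel comes from ; the price is a lossy reconstruction error of at most 2 intensity levels per pixel.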
arxiv:1211.4591
the concept of universal shower profile is used to characterize the average behavior of high energy cosmic rays. the shape variables contain important information about composition. they are independent of the primary cross - section by construction, but affected by other hadronic parameters, like multiplicity. the two variables give access to the average nuclear mass of the sample and their compatibility serves as a test of hadronic models.
arxiv:1209.6011
while several benefits have been realized for multilingual vision - language pretrained models, recent benchmarks across various tasks and languages show poor cross - lingual generalisation when multilingually pre - trained vision - language models are applied to non - english data, with a large gap between ( supervised ) english performance and ( zero - shot ) cross - lingual transfer. in this work, we explore the poor performance of these models on a zero - shot cross - lingual visual question answering ( vqa ) task, where models are fine - tuned on english visual - question data and evaluated on 7 typologically diverse languages. we improve cross - lingual transfer with three strategies : ( 1 ) we introduce a linguistic prior objective to augment the cross - entropy loss with a similarity - based loss to guide the model during training, ( 2 ) we learn a task - specific subnetwork that improves cross - lingual generalisation and reduces variance without model modification, ( 3 ) we augment training examples using synthetic code - mixing to promote alignment of embeddings between source and target languages. our experiments on xgqa using the pretrained multilingual multimodal transformers uc2 and m3p demonstrate the consistent effectiveness of the proposed fine - tuning strategy for 7 languages, outperforming existing transfer methods with sparse models. code and data to reproduce our findings are publicly available.
arxiv:2209.02982
much effort is spent everyday by programmers in trying to reduce long, failing execution traces to the cause of the error. we present a new algorithm for error cause localization based on a reduction to the maximal satisfiability problem ( max - sat ), which asks what is the maximum number of clauses of a boolean formula that can be simultaneously satisfied by an assignment. at an intuitive level, our algorithm takes as input a program and a failing test, and comprises the following three steps. first, using symbolic execution, we encode a trace of a program as a boolean trace formula which is satisfiable iff the trace is feasible. second, for a failing program execution ( e. g., one that violates an assertion or a post - condition ), we construct an unsatisfiable formula by taking the trace formula and additionally asserting that the input is the failing test and that the assertion condition does hold at the end. third, using max - sat, we find a maximal set of clauses in this formula that can be satisfied together, and output the complement set as a potential cause of the error. we have implemented our algorithm in a tool called bug - assist for c programs. we demonstrate the surprising effectiveness of the tool on a set of benchmark examples with injected faults, and show that in most cases, bug - assist can quickly and precisely isolate the exact few lines of code whose change eliminates the error. we also demonstrate how our algorithm can be modified to automatically suggest fixes for common classes of errors such as off - by - one.
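the third step above can be illustrated with a toy brute - force max - sat solver. this is a sketch of the general technique only, not the bug - assist implementation ( which encodes symbolic traces and uses an efficient partial max - sat solver ) : clauses are lists of signed 1 - based literals, and the complement of a maximal satisfiable set is returned as the suspect set.

```python
from itertools import product

def max_sat(clauses, n_vars):
    """Brute-force MAX-SAT: try every assignment, return the best score,
    the assignment, and the indices of clauses it leaves unsatisfied
    (the 'complement set' reported as a potential error cause)."""
    best = (-1, None, None)
    for bits in product((False, True), repeat=n_vars):
        # a clause is satisfied if any literal matches the assignment
        unsat = [i for i, clause in enumerate(clauses)
                 if not any(bits[abs(l) - 1] == (l > 0) for l in clause)]
        score = len(clauses) - len(unsat)
        if score > best[0]:
            best = (score, bits, unsat)
    return best
```

for an unsatisfiable formula such as [[1], [-1], [2]] ( x1 and not - x1 forced together ), at most two of the three clauses can hold, and the single leftover clause plays the role of the suspect program constraint.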
arxiv:1011.1589
social choice functions ( scfs ) map the preferences of a group of agents over some set of alternatives to a non - empty subset of alternatives. the gibbard - satterthwaite theorem has shown that only extremely restrictive scfs are strategyproof when there are more than two alternatives. for set - valued scfs, or so - called social choice correspondences, the situation is less clear. there are miscellaneous - mostly negative - results using a variety of strategyproofness notions and additional requirements. the simple and intuitive notion of kelly - strategyproofness has turned out to be particularly compelling because it is weak enough to still allow for positive results. for example, the pareto rule is strategyproof even when preferences are weak, and a number of attractive scfs ( such as the top cycle, the uncovered set, and the essential set ) are strategyproof for strict preferences. in this paper, we show that, for weak preferences, only indecisive scfs can satisfy strategyproofness. in particular, ( i ) every strategyproof rank - based scf violates pareto - optimality, ( ii ) every strategyproof support - based scf ( which generalize fishburn ' s c2 scfs ) that satisfies pareto - optimality returns at least one most preferred alternative of every voter, and ( iii ) every strategyproof non - imposing scf returns the condorcet loser in at least one profile. we also discuss the consequences of these results for randomized social choice.
arxiv:2102.00499
photovoltaic power forecasting ( pvpf ) is a critical area in time series forecasting ( tsf ), enabling the efficient utilization of solar energy. with advancements in machine learning and deep learning, various models have been applied to pvpf tasks. however, constructing an optimal predictive architecture for specific pvpf tasks remains challenging, as it requires cross - domain knowledge and significant labor costs. to address this challenge, we introduce autopv, a novel framework for the automated search and construction of pvpf models based on neural architecture search ( nas ) technology. we develop a brand new nas search space that incorporates various data processing techniques from state - of - the - art ( sota ) tsf models and typical pvpf deep learning models. the effectiveness of autopv is evaluated on diverse pvpf tasks using a dataset from the daqing photovoltaic station in china. experimental results demonstrate that autopv can complete the predictive architecture construction process in a relatively short time, and the newly constructed architecture is superior to sota predefined models. this work bridges the gap in applying nas to tsf problems, assisting non - experts and industries in automatically designing effective pvpf models.
arxiv:2408.00601
epitaxial mn - doped bifeo3 ( mbfo ) thin films were grown on gaas ( 001 ) substrate with srtio3 ( sto ) buffer layer by pulsed laser deposition. x - ray diffraction results demonstrate that the films show pure ( 00l ) orientation, and mbfo ( 100 ) // sto ( 100 ), whereas sto ( 100 ) // gaas ( 110 ). piezoresponse force microscopy images and polarization versus electric field loops indicate that the mbfo films grown on gaas have an effective ferroelectric switching. the mbfo films exhibit good ferroelectric behavior ( 2pr ~ 92 $\mu$c / cm2 and 2ec ~ 372 kv / cm ). ferromagnetic property with saturated magnetization of 6.5 emu / cm3 and coercive field of about 123 oe is also found in the heterostructure at room temperature.
arxiv:1309.4222
this paper presents a high - level circuit obfuscation technique to prevent the theft of intellectual property ( ip ) of integrated circuits. in particular, our technique protects a class of circuits that relies on constant multiplications, such as filters and neural networks, where the constants themselves are the ip to be protected. by making use of decoy constants and a key - based scheme, a reverse engineer adversary at an untrusted foundry is rendered incapable of discerning true constants from decoy constants. the time - multiplexed constant multiplication ( tmcm ) block of such circuits, which realizes the multiplication of an input variable by a constant at a time, is considered as our case study for obfuscation. furthermore, two tmcm design architectures are taken into account ; an implementation using a multiplier and a multiplierless shift - adds implementation. optimization methods are also applied to reduce the hardware complexity of these architectures. the well - known satisfiability ( sat ) and automatic test pattern generation ( atpg ) attacks are used to determine the vulnerability of the obfuscated designs. it is observed that the proposed technique incurs small overheads in area, power, and delay that are comparable to the hardware complexity of prominent logic locking methods. yet, the advantage of our approach is in the insight that constants - - instead of arbitrary circuit nodes - - become key - protected.
arxiv:2105.06122
markov state models ( msms ) are a widely used method for approximating the eigenspectrum of the molecular dynamics propagator, yielding insight into the long - timescale statistical kinetics and slow dynamical modes of biomolecular systems. however, the lack of a unified theoretical framework for choosing between alternative models has hampered progress, especially for non - experts applying these methods to novel biological systems. here, we consider cross - validation with a new objective function for estimators of these slow dynamical modes, a generalized matrix rayleigh quotient ( gmrq ), which measures the ability of a rank - $m$ projection operator to capture the slow subspace of the system. it is shown that a variational theorem bounds the gmrq from above by the sum of the first $m$ eigenvalues of the system's propagator, but that this bound can be violated when the requisite matrix elements are estimated subject to statistical uncertainty. this overfitting can be detected and avoided through cross - validation. these results make it possible to construct markov state models for protein dynamics in a way that appropriately captures the tradeoff between systematic and statistical errors.
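the variational bound invoked here is the ky fan inequality for symmetric matrices, and it can be checked numerically. the snippet below is a simplified stand - in : it uses an exactly known random symmetric matrix in place of the statistically estimated msm matrix elements, so the bound holds exactly ( the paper's point is that it can fail once those elements carry sampling noise ).

```python
import numpy as np

def gmrq(T, V):
    """Matrix Rayleigh quotient: trace of symmetric T projected onto an
    orthonormalized basis for span(V) -- a rank-m 'slow subspace' score."""
    Q, _ = np.linalg.qr(V)          # orthonormalize the trial basis
    return np.trace(Q.T @ T @ Q)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
T = (A + A.T) / 2                   # symmetric stand-in for the propagator
eigs = np.sort(np.linalg.eigvalsh(T))[::-1]
m = 2
w, U = np.linalg.eigh(T)
top = U[:, np.argsort(w)[::-1][:m]]  # top-m eigenvectors attain the bound
```

any rank - $m$ trial subspace scores at most the sum of the top $m$ eigenvalues, with equality for the leading eigenvectors ; cross - validation then guards against apparent violations caused by estimation error.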
arxiv:1407.8083
in this paper, we study the linear inviscid damping for the linearized $\beta$ - plane equation around shear flows. we develop a new method to give the explicit decay rate of the velocity for a class of monotone shear flows. this method is based on the space - time estimate and the vector field method in the spirit of the wave equation. for general shear flows including the sinus flow, we also prove the linear damping by establishing the limiting absorption principle, which is based on the compactness method introduced by wei - zhang - zhao in \cite{wzz2}. the main difficulty is that the rayleigh - kuo equation has more singular points due to the coriolis effects, so that the compactness argument becomes more involved and delicate.
arxiv:1809.03065
discovering human cognitive and emotional states using multi - modal physiological signals draws attention across various research applications. physiological responses of the human body are influenced by human cognition and are commonly used to analyze cognitive states. from a network science perspective, the interactions of these heterogeneous physiological modalities in a graph structure may provide insightful information to support the prediction of cognitive states. however, there is no obvious way to derive the exact connectivity between heterogeneous modalities, and there exists a hierarchical structure of sub - modalities. existing graph neural networks are designed to learn on non - hierarchical homogeneous graphs with pre - defined graph structures ; they fail to learn from hierarchical, multi - modal physiological data without a pre - defined graph structure. to this end, we propose a hierarchical heterogeneous graph generative network ( h2g2 - net ) that automatically learns a graph structure without domain knowledge, as well as a powerful representation on the hierarchical heterogeneous graph in an end - to - end fashion. we validate the proposed method on the cogpilot dataset, which consists of multi - modal physiological signals. extensive experiments demonstrate that our proposed method outperforms the state - of - the - art gnns by 5 - 20 % in prediction accuracy.
arxiv:2401.02905
we show that quantum walks interpolate between a coherent 'wave walk' and a random walk depending on how strongly the walker's coin state is measured; i.e., the quantum walk exhibits the quintessentially quantum property of complementarity, which is manifested as a trade-off between knowledge of which path the walker takes vs the sharpness of the interference pattern. a physical implementation of a quantum walk (the quantum quincunx) should thus have an identifiable walker and the capacity to demonstrate the interpolation between wave walk and random walk depending on the strength of measurement.
arxiv:quant-ph/0404043
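the interpolation described above can be illustrated numerically. the sketch below is not the authors' quincunx proposal; it is a minimal numpy simulation in which, as an assumption, the measurement strength is modeled by a per-step probability `p_measure` of projectively reading out the coin. with `p_measure = 0` the walk is fully coherent and spreads ballistically (position variance of order t^2), while with `p_measure = 1` it reduces to a diffusive classical random walk (variance of order t).

```python
import numpy as np

rng = np.random.default_rng(0)

def position_variance(steps, p_measure, trials):
    """variance of the walker's position distribution when the coin is
    projectively measured with probability p_measure after every step."""
    n = 2 * steps + 1                          # positions -steps..steps
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    avg = np.zeros(n)
    for _ in range(trials):
        psi = np.zeros((n, 2), dtype=complex)
        psi[steps, 0] = 1.0                    # walker at origin, coin |0>
        for _ in range(steps):
            psi = psi @ h.T                    # hadamard coin toss
            shifted = np.zeros_like(psi)
            shifted[1:, 0] = psi[:-1, 0]       # coin-0 component steps right
            shifted[:-1, 1] = psi[1:, 1]       # coin-1 component steps left
            psi = shifted
            if rng.random() < p_measure:       # read out the coin (which-path)
                pc = (np.abs(psi) ** 2).sum(axis=0)
                c = rng.choice(2, p=pc / pc.sum())
                psi[:, 1 - c] = 0.0            # collapse and renormalize
                psi /= np.linalg.norm(psi)
        avg += (np.abs(psi) ** 2).sum(axis=1)
    avg /= trials
    x = np.arange(-steps, steps + 1)
    return float(avg @ x**2 - (avg @ x) ** 2)

v_wave = position_variance(30, 0.0, trials=1)      # coherent 'wave walk'
v_random = position_variance(30, 1.0, trials=400)  # coin read out every step
```

the single knob `p_measure` is the hypothetical stand-in for the continuously variable measurement strength discussed in the abstract.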
the icecube neutrino observatory instruments about 1 km^3 of deep, glacial ice at the geographic south pole with 5160 photomultipliers to detect cherenkov light from charged relativistic particles. the experiment pursues a wide range of scientific questions ranging from particle physics such as neutrino oscillations to high-energy neutrino astronomy. most of these efforts rely heavily on an ever more precise understanding of the optical properties of the instrumented ice. an unexpected light propagation effect, observed by the experiment, is an anisotropic attenuation, which is aligned with the local flow of the ice. the exact cause is still under investigation. in this contribution, the micro-structure of ice as a birefringent polycrystal is explored as the cause for this anisotropy.
arxiv:1908.07608
precise localization and tracking of moving non-collaborative persons and objects using a network of ultra-wideband (uwb) radar nodes has been shown to represent a practical and effective approach. in uwb radar sensor networks (rsns), the existence of strong clutter, weak target echoes, and closely spaced targets are obstacles to achieving a satisfactory tracking performance. using a track-before-detect (tbd) approach, the waveforms obtained by each node over a time period are jointly processed. both spatial information and the temporal relationship between measurements are exploited in generating all possible candidate trajectories, and only the best trajectories are selected as the outcome. the effectiveness of the developed tbd technique for uwb rsns is confirmed by numerical simulations and by two experimental results, both carried out with actual uwb signals. in the first experiment, a human target is tracked by a monostatic radar network with an average localization error of 41.9 cm and no false alarm trajectories in a cluttered outdoor environment. in the second experiment, two targets are detected by a multistatic radar network with localization errors of 25.4 cm and 19.7 cm, a detection rate of 88.75%, and no false alarm trajectories.
arxiv:2108.00501
let $p$ be an operator of dirac type and let $d = p^2$ be the associated operator of laplace type. we impose spectral boundary conditions and study the leading heat content coefficients for $d$.
arxiv:math-ph/0308021
l shell line and total x-ray production cross sections in 78pt, 79au, 82pb, 83bi, 90th, and 92u targets ionized by 4-6 mev/u fluorine ions were measured. these cross sections are compared with available theories for l shell ionization using single- and multiple-hole fluorescence and coster-kronig yields. the ecpssr and ecusar theories exhibit good agreement with the measured data, whereas the fba theory overestimates them by a factor of two. although for the f ion charge states q = 6-8 the multiple-hole atomic parameters do not differ significantly from the single-hole values, after accounting for the multiple holes, our data are in better agreement with the ecusar theory than with the ecpssr theory.
arxiv:1610.00100
the remarkable flexibility and adaptability of both deep learning models and ensemble methods have led to the proliferation of their application in understanding many physical phenomena. traditionally, these two techniques have largely been treated as independent methodologies in practical applications. this study develops an optimized ensemble deep learning (oedl) framework wherein these two machine learning techniques are jointly used to achieve synergistic improvements in model accuracy, stability, scalability, and reproducibility, prompting a new wave of applications in the forecasting of dynamics. unpredictability is considered one of the key features of chaotic dynamics, so forecasting such dynamics of nonlinear systems is a relevant issue in the scientific community. it becomes more challenging when the focus is the prediction of extreme events. in this circumstance, the proposed oedl model, based on a best convex combination of feed-forward neural networks, reservoir computing, and long short-term memory, can play a key role in advancing predictions of dynamics consisting of extreme events. the combined framework generates better out-of-sample performance than the individual deep learners and the standard ensemble framework for both numerically simulated and real-world data sets. we exhibit the outstanding performance of the oedl framework in forecasting extreme events generated from a lienard-type system, the prediction of covid-19 cases in brazil, dengue cases in san juan, and sea surface temperature in the nino 3.4 region.
arxiv:2106.08968
it is known that the noncommutativity of d-brane coordinates is responsible for describing higher-dimensional d-branes in terms of more fundamental ones such as d-particles or d-instantons, while considering a noncommutative torus as a target space is conjectured to be equivalent to introducing the background antisymmetric tensor field in matrix models. in the present paper we clarify the dual nature of both descriptions. namely, the noncommutativity of the conjugate momenta of the d-brane coordinates realizes the target space structure, whereas the noncommutativity of the coordinates themselves realizes the world volume structure. we explicitly construct a boundary state for the dirichlet boundary condition where the string boundary is adhered to the d-brane on the noncommutative torus. there are non-trivial relations between the parameters appearing in the algebra of the coordinates and those of the momenta.
arxiv:hep-th/9902004
we develop the formalism of supersymmetric localization in supergravity using the deformed brst algebra defined in the presence of a supersymmetric background, as recently formulated in arxiv:1806.03690. the gravitational functional integral localizes onto the cohomology of a global supercharge $q_\text{eq}$, obeying $q_\text{eq}^2 = h$, where $h$ is a global symmetry of the background. our construction naturally produces a twisted version of supergravity whenever supersymmetry can be realized off-shell. we present the details of the twisted graviton multiplet and ghost fields for the superconformal formulation of four-dimensional n = 2 supergravity. as an application of our formalism, we systematize the computation of the exact quantum entropy of supersymmetric black holes. in particular, we compute the one-loop determinant of the $q_\text{eq} \mathcal{v}$ deformation operator for the off-shell fluctuations of the weyl multiplet around the $ads_2 \times s^2$ saddle. this result, which is consistent with the corresponding large-charge on-shell analysis, is needed to complete the first-principles computation of the quantum entropy.
arxiv:1806.04479
correlations among stock returns during volatile markets differ substantially from those in quieter markets. during times of financial crisis, it has been observed that traditional dependency in global markets breaks down. however, such an upheaval in dependency structure happens over a span of several months, with the breakdown coinciding with a major bankruptcy or sovereign default. even though risk managers generally agree that identifying these periods of breakdown is important, there are few statistical methods to test for significant breakdowns. the purpose of this paper is to propose a simple test to detect such structural changes in global markets. this test relies on the assumption that asset prices follow a geometric brownian motion. we test for a breakdown in correlation structure using eigenvalue decomposition. we derive the asymptotic distribution under the null hypothesis and apply the test to stock returns. we compute the power of our test and compare it with the power of other known tests. our test is able to accurately identify the times of structural breakdown in real-world stock returns. overall, we argue that despite the parsimony and simplicity of the geometric brownian motion assumption, our test performs well in identifying the breakdown in dependency of global markets.
arxiv:1809.07114
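the idea of detecting a dependency breakdown through eigenvalues can be sketched as follows. this is not the paper's test statistic or its asymptotic distribution; it is a hedged illustration in which, as an assumption, a 'crisis' regime is simulated by strengthening a common factor, which inflates the largest eigenvalue of the sample correlation matrix (for an equicorrelated market with correlation rho, the top eigenvalue is approximately 1 + (n - 1) * rho).

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_obs = 8, 600

def simulate(rho, n_obs):
    # one-factor returns with pairwise correlation rho across all assets
    common = rng.standard_normal((n_obs, 1))
    idio = rng.standard_normal((n_obs, n_assets))
    return np.sqrt(rho) * common + np.sqrt(1.0 - rho) * idio

def top_eig(returns):
    # largest eigenvalue of the sample correlation matrix
    c = np.corrcoef(returns, rowvar=False)
    return float(np.linalg.eigvalsh(c)[-1])

lam_calm = top_eig(simulate(0.1, n_obs))    # quiet market: roughly 1 + 7 * 0.1
lam_crisis = top_eig(simulate(0.7, n_obs))  # crisis: roughly 1 + 7 * 0.7
```

a rolling-window version of `top_eig` over real returns would flag a structural change as a sudden jump in the leading eigenvalue; calibrating the jump threshold is exactly what the paper's asymptotic null distribution provides.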
heavy quarks serve as crucial probes for exploring the properties of the hot and dense medium formed in heavy-ion collision experiments. understanding the modification of their energy as they traverse the medium is a focal point of research, with various authors extensively studying this phenomenon. this study specifically concentrates on the equilibrium phase, the quark-gluon plasma, and offers a comparative analysis of heavy quark energy loss through medium polarization, elastic collisions, and radiation. notably, while previous studies have compared polarization loss and radiation, our work extends this by incorporating elastic collisions for a more comprehensive examination. the significance of medium polarization, particularly at low momentum, is underscored, as it has been found to contribute substantially. the formalism for energy loss and drag coefficient in each case is presented, followed by the calculation of the nuclear modification factor ($r_{aa}$) for a holistic comparative study.
arxiv:2304.04003
we consider zeta functions: $z(f; p; s) = \sum_{\mathbf{m} \in \mathbb{N}^{n}} f(m_1, \dots, m_n)\, p(m_1, \dots, m_n)^{-s/d}$, where $p \in \mathbb{R}[x_1, \dots, x_n]$ has degree $d$ and $f$ is a function arithmetic in origin, e.g. a multiplicative function. in this paper, i study the meromorphic continuation of such series beyond an a priori domain of absolute convergence when $f$ and $p$ satisfy properties one typically meets in applications. as a result, i prove an explicit asymptotic for a general class of lattice point problems subject to arithmetic constraints.
arxiv:math/0505558
differential privacy (dp) is the prevailing technique for protecting user data in machine learning models. however, deficits of this framework include a lack of clarity in selecting the privacy budget $\epsilon$ and a lack of quantification of the privacy leakage for a particular data row by a particular trained model. we make progress toward these limitations, and toward a new perspective by which to visualize dp results, by studying a privacy metric that quantifies the extent to which a model trained on a dataset using a dp mechanism is "covered" by each of the distributions resulting from training on neighboring datasets. we connect this coverage metric to what has been established in the literature and use it to rank the privacy of individual samples from the training set in what we call a privacy profile. we additionally show that the privacy profile can be used to probe an observed transition to indistinguishability that takes place in the neighboring distributions as $\epsilon$ decreases, which we suggest is a tool that can enable the selection of $\epsilon$ by the ml practitioner wishing to make use of dp.
arxiv:2306.15790
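the transition to indistinguishability as $\epsilon$ decreases can be illustrated with a toy coverage proxy. the sketch below is not the paper's coverage metric: as an assumption, it takes the classic gaussian mechanism on two neighboring datasets (sensitivity 1) and uses the overlap coefficient between the two output distributions, $\mathrm{erfc}(\delta/(2\sigma\sqrt{2}))$ for means $\delta$ apart, as the "coverage"; this quantity approaches 1 as $\epsilon \to 0$.

```python
import math

def gaussian_sigma(eps, delta=1e-5, sensitivity=1.0):
    # noise scale of the classic gaussian mechanism (the standard formula
    # is calibrated for eps <= 1; used here for all eps purely to illustrate)
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

def coverage(eps):
    # overlap coefficient between n(0, sigma^2) and n(1, sigma^2):
    # a toy proxy for how well one neighbor's output distribution
    # "covers" the other's
    sigma = gaussian_sigma(eps)
    return math.erfc(1.0 / (2.0 * sigma * math.sqrt(2.0)))

# coverage grows toward 1 (indistinguishability) as the budget shrinks
vals = [coverage(e) for e in (8.0, 1.0, 0.1)]
```

the monotone rise of `vals` is the qualitative transition the abstract describes; the paper's metric additionally resolves this per training sample.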
functional simulation is an essential step in digital hardware design. recently, there has been a growing interest in leveraging large language models (llms) for hardware testbench generation tasks. however, the inherent instability associated with llms often leads to functional errors in the generated testbenches. previous methods do not incorporate automatic functional correction mechanisms without human intervention and still suffer from low success rates, especially for sequential tasks. to address this issue, we propose correctbench, an automatic testbench generation framework with functional self-validation and self-correction. utilizing only the rtl specification in natural language, the proposed approach can validate the correctness of the generated testbenches with a success rate of 88.85%. furthermore, the proposed llm-based corrector employs bug information obtained during the self-validation process to perform functional self-correction on the generated testbenches. the comparative analysis demonstrates that our method achieves a pass ratio of 70.13% across all evaluated tasks, compared with the previous llm-based testbench generation framework's 52.18% and a direct llm-based generation method's 33.33%. specifically in sequential circuits, our work's performance is 62.18% higher than previous work in sequential tasks and almost 5 times the pass ratio of the direct method. the codes and experimental results are open-sourced at the link: https://github.com/autobench/correctbench
arxiv:2411.08510
for coupled oscillator networks with laplacian coupling, the master stability function (msf) has proven a particularly powerful tool for assessing the stability of the synchronous state. using tools from group theory, this approach has recently been extended to treat more general cluster states. however, the msf and its generalisations require the determination of a set of floquet multipliers from variational equations obtained by linearisation around a periodic orbit. since closed form solutions for periodic orbits are invariably hard to come by, the framework is often explored using numerical techniques. here we show that further insight into network dynamics can be obtained by focusing on piecewise linear (pwl) oscillator models. not only do these allow for the explicit construction of periodic orbits, their variational analysis can also be explicitly performed. the price for adopting such nonsmooth systems is that many of the notions from smooth dynamical systems, and in particular linear stability, need to be modified to take into account possible jumps in the components of jacobians. this is naturally accommodated with the use of \textit{saltation} matrices. by augmenting the variational approach for studying smooth dynamical systems with such matrices, we show that, for a wide variety of networks that have been used as models of biological systems, cluster states can be explicitly investigated. by way of illustration, we analyse an integrate-and-fire network model with event-driven synaptic coupling as well as a diffusively coupled network built from planar pwl nodes, including a reduction of the popular morris--lecar neuron model. we use these examples to emphasise that the stability of network cluster states can depend as much on the choice of single node dynamics as it does on the form of network structural connectivity.
arxiv:1803.09977
most deep learning models are based on deep neural networks with multiple layers between input and output. the parameters defining these layers are initialized using random values and are "learned" from data, typically using stochastic gradient descent based algorithms. these algorithms rely on data being randomly shuffled before optimization. the randomization of the data prior to processing in batches, formally required for the stochastic gradient descent algorithm to effectively derive a useful deep learning model, is expected to be prohibitively expensive for in situ model training because of the resulting data communication across the processor nodes. we show that the stochastic gradient descent (sgd) algorithm can still make useful progress if the batches are defined on a per-processor basis and processed in random order, even though (i) the batches are constructed from data samples from a single class or specific flow region, and (ii) the overall data samples are heterogeneous. we present block-random gradient descent, a new algorithm that works on distributed, heterogeneous data without having to pre-shuffle. this algorithm enables in situ learning for exascale simulations. the performance of this algorithm is demonstrated on a set of benchmark classification models and on the construction of a subgrid scale large eddy simulation (les) model for turbulent channel flow, using a data model similar to that which will be encountered in exascale simulation.
arxiv:1903.00091
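the batching scheme described above can be sketched in a few lines. this is an illustration of the idea, not the authors' exascale implementation: two hypothetical "processors" each hold data from a single class (fully heterogeneous shards), the data are never globally shuffled, and only the order in which the per-shard batches are visited is randomized; plain sgd on a logistic model still converges.

```python
import numpy as np

rng = np.random.default_rng(0)

# two "processors", each holding samples of a single class (heterogeneous shards)
n_per = 500
x0 = rng.normal(-1.0, 1.0, size=(n_per, 2)); y0 = np.zeros(n_per)
x1 = rng.normal(+1.0, 1.0, size=(n_per, 2)); y1 = np.ones(n_per)
shards = [(x0, y0), (x1, y1)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(2), 0.0
lr, batch = 0.1, 50
for epoch in range(40):
    # batches are defined per shard -- no global shuffle of the data --
    # and only the visiting order of the batches is randomized
    batches = [(x[i:i + batch], y[i:i + batch])
               for x, y in shards
               for i in range(0, n_per, batch)]
    for j in rng.permutation(len(batches)):
        xb, yb = batches[j]
        p = sigmoid(xb @ w + b)        # logistic model
        g = p - yb                     # gradient of the cross-entropy loss
        w -= lr * (xb.T @ g) / len(yb)
        b -= lr * g.mean()

xs = np.vstack([x0, x1])
ys = np.concatenate([y0, y1])
acc = float(np.mean((sigmoid(xs @ w + b) > 0.5) == (ys == 1)))
```

note that every individual batch here is single-class, so each update is biased; it is the random visiting order across shards that lets the averaged updates make useful progress, which is the abstract's point.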
the peer review process in major artificial intelligence (ai) conferences faces unprecedented challenges with the surge of paper submissions (exceeding 10,000 submissions per venue), accompanied by growing concerns over review quality and reviewer responsibility. this position paper argues for the need to transform the traditional one-way review system into a bi-directional feedback loop where authors evaluate review quality and reviewers earn formal accreditation, creating an accountability framework that promotes a sustainable, high-quality peer review system. the current review system can be viewed as an interaction between three parties: the authors, reviewers, and system (i.e., conference), where we posit that all three parties share responsibility for the current problems. however, issues with authors can only be addressed through policy enforcement and detection tools, and ethical concerns can only be corrected through self-reflection. as such, this paper focuses on reforming reviewer accountability with systematic rewards through two key mechanisms: (1) a two-stage bi-directional review system that allows authors to evaluate reviews while minimizing retaliatory behavior, (2) a systematic reviewer reward system that incentivizes quality reviewing. we ask for the community's strong interest in these problems and the reforms that are needed to enhance the peer review process.
arxiv:2505.04966
we consider the problem of learning models for risk - sensitive reinforcement learning. we theoretically demonstrate that proper value equivalence, a method of learning models which can be used to plan optimally in the risk - neutral setting, is not sufficient to plan optimally in the risk - sensitive setting. we leverage distributional reinforcement learning to introduce two new notions of model equivalence, one which is general and can be used to plan for any risk measure, but is intractable ; and a practical variation which allows one to choose which risk measures they may plan optimally for. we demonstrate how our framework can be used to augment any model - free risk - sensitive algorithm, and provide both tabular and large - scale experiments to demonstrate its ability.
arxiv:2307.01708
we consider the large time behaviour of solutions to the porous medium equation with a fisher-kpp type reaction term and nonnegative, compactly supported initial function in $l^\infty(\mathbb{R}^n) \setminus \{0\}$: \begin{equation} \label{eq:abstract} \tag{$\star$} u_t = \Delta u^m + u - u^2 \quad \text{in } q := \mathbb{R}^n \times \mathbb{R}_+, \qquad u(\cdot, 0) = u_0 \quad \text{in } \mathbb{R}^n, \end{equation} with $m > 1$. it is well known that the spatial support of the solution $u(\cdot, t)$ to this problem remains bounded for all time $t > 0$. in spatial dimension one it is known that there is a minimal speed $c_* > 0$ for which the equation admits a wavefront solution $\phi_{c_*}$ with a finite front, and it attracts solutions with initial functions behaving like a heaviside function. in dimension one we can obtain an analogous stability result for the case of compactly supported initial data. in higher dimensions we show that $\phi_{c_*}$ is still attractive, albeit that a logarithmic shifting occurs. more precisely, if the initial function in \eqref{eq:abstract} is additionally assumed to be radially symmetric, then there exists a second constant $c^* > 0$ independent of the dimension $n$ and the initial function $u_0$, such that \[ \lim_{t \to \infty} \left\{ \sup_{x \in \mathbb{R}^n} \big| u(x, t) - \phi_{c_*}(|x| - c_* t + (n - 1) c^* \log t - r_0) \big| \right\} = 0 \] for some $r_0 \in \mathbb{R}$ (depending on $u_0$). if the initial function is not radially symmetric, then there exist $r_1, r_2$
arxiv:1806.02022
a new numerical code for computing stationary axisymmetric rapidly rotating stars in general relativity is presented. the formulation is based on a fully constrained - evolution scheme for 3 + 1 numerical relativity using the dirac gauge and maximal slicing. we use both the polytropic and mit bag model equations of state to demonstrate that the code can construct rapidly rotating neutron star and strange star models. we compare numerical models obtained by our code and a well - established code, which uses a different gauge condition, and show that the two codes agree to high accuracy.
arxiv:gr-qc/0603048
in schema-guided dialogue state tracking, models estimate the current state of a conversation using natural language descriptions of the service schema for generalization to unseen services. prior generative approaches, which decode slot values sequentially, do not generalize well to variations in schema, while discriminative approaches separately encode history and schema and fail to account for inter-slot and intent-slot dependencies. we introduce splat, a novel architecture which achieves better generalization and efficiency than prior approaches by constraining outputs to a limited prediction space. at the same time, our model allows for rich attention among descriptions and history while keeping computation costs constrained by incorporating linear-time attention. we demonstrate the effectiveness of our model on the schema-guided dialogue (sgd) and multiwoz datasets. our approach significantly improves upon existing models, achieving 85.3 jga on the sgd dataset. further, we show increased robustness on the sgd-x benchmark: our model outperforms the more than 30$\times$ larger d3st-xxl model by 5.0 points.
arxiv:2306.09340
in this paper, we characterize compatibility of distributions and probability measures on a measurable space. for a set of indices $\mathcal{j}$, we say that the tuples of probability measures $(q_i)_{i \in \mathcal{j}}$ and distributions $(f_i)_{i \in \mathcal{j}}$ are compatible if there exists a random variable having distribution $f_i$ under $q_i$ for each $i \in \mathcal{j}$. we first establish an equivalent condition using conditional expectations for general (possibly uncountable) $\mathcal{j}$. for a finite $n$, it turns out that compatibility of $(q_1, \dots, q_n)$ and $(f_1, \dots, f_n)$ depends on the heterogeneity among $q_1, \dots, q_n$ compared with that among $f_1, \dots, f_n$. we show that, under an assumption that the measurable space is rich enough, $(q_1, \dots, q_n)$ and $(f_1, \dots, f_n)$ are compatible if and only if $(q_1, \dots, q_n)$ dominates $(f_1, \dots, f_n)$ in a notion of heterogeneity order, defined via multivariate convex order between the radon-nikodym derivatives of $(q_1, \dots, q_n)$ and $(f_1, \dots, f_n)$ with respect to some reference measures. we then proceed to generalize our results to stochastic processes, and conclude the paper with an application to portfolio selection problems under multiple constraints.
arxiv:1706.01168
we calculate analytically the asymptotic form of quasi - normal modes of perturbations of arbitrary spin of a schwarzschild black hole including first - order corrections. we use the teukolsky equation which applies to both bosonic and fermionic modes. remarkably, we arrive at explicit expressions which coincide with those derived using the regge - wheeler equation for integer spin. our zeroth - order expressions agree with the results of wkb analysis. in the case of dirac fermions, our results are in good agreement with numerical data.
arxiv:hep-th/0610170
neuromorphic computing implementing spiking neural networks (snn) is a promising technology for reducing the footprint of optical transceivers, as required by the fast-paced growth of data center traffic. in this work, an snn nonlinear demapper is designed and evaluated on a simulated intensity-modulation direct-detection link with chromatic dispersion. the snn demapper is implemented in software and on the analog neuromorphic hardware system brainscales-2 (bss-2). for comparison, linear equalization (le), volterra nonlinear equalization (vnle), and nonlinear demapping by an artificial neural network (ann) implemented in software are considered. at a pre-forward error correction bit error rate of 2e-3, the software snn outperforms le by 1.5 db, vnle by 0.3 db and the ann by 0.5 db. the hardware penalty of the snn on bss-2 is only 0.2 db, i.e., also on hardware, the snn performs better than all software implementations of the reference approaches. hence, this work demonstrates that snn demappers implemented on electrical analog hardware can realize powerful and accurate signal processing, fulfilling the strict requirements of optical communications.
arxiv:2302.14726
let $g$ be a camina $p$-group of nilpotence class $3$. we prove that if $g' < c_g(g')$, then $|z(g)| \le |g' : g_3|^{1/2}$. we also prove that if $g/g_3$ has only one or two abelian subgroups of order $|g : g'|$, then $g' < c_g(g')$. if $g/g_3$ has $p^a + 1$ abelian subgroups of order $|g : g'|$, then either $g' < c_g(g')$ or $|z(g)| \le p^{2a}$.
arxiv:1510.06293
information theoretical measures, such as entropy, mutual information, and various divergences, exhibit robust characteristics in image registration applications. however, the estimation of these quantities is computationally intensive in high dimensions. on the other hand, consistent estimation from pairwise distances of the sample points is possible, which suits random projection (rp) based low-dimensional embeddings. we adapt the rp technique to this task by means of a simple ensemble method. to the best of our knowledge, this is the first distributed, rp based information theoretical image registration approach. the efficiency of the method is demonstrated through numerical examples.
arxiv:1210.0824
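the rp-ensemble idea rests on the johnson-lindenstrauss property: random projections approximately preserve pairwise distances, and averaging the estimates over an ensemble of independent projections tightens them. the sketch below illustrates only that ingredient, not the paper's registration pipeline; all dimensions and ensemble sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def rp_pairwise_sq(x, k, n_proj=20):
    # ensemble estimate of pairwise squared distances: average the
    # distances measured in n_proj independent k-dimensional projections
    n, d = x.shape
    acc = np.zeros((n, n))
    for _ in range(n_proj):
        r = rng.standard_normal((d, k)) / np.sqrt(k)  # e[|r.T u|^2] = |u|^2
        z = x @ r
        diff = z[:, None, :] - z[None, :, :]
        acc += np.sum(diff ** 2, axis=-1)
    return acc / n_proj

x = rng.standard_normal((30, 1000))          # points in a high-dimensional space
true = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
est = rp_pairwise_sq(x, k=50, n_proj=20)
off = ~np.eye(30, dtype=bool)                # ignore zero self-distances
max_rel_err = float(np.max(np.abs(est[off] - true[off]) / true[off]))
```

because each projection's distance estimate has relative variance about 2/k, averaging 20 projections with k = 50 keeps even the worst pairwise relative error small; a consistent entropy or divergence estimator built on these distances then inherits the accuracy.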
blocking transformation is performed in quantum field theory at finite temperature. it is found that the manner temperature deforms the renormalized trajectories can be used to understand better the role played by the quantum fluctuations. in particular, it is conjectured that domain formation and mass parameter generation can be observed in theories without spontaneous symmetry breaking.
arxiv:hep-th/9403171
saturn's diffuse e ring consists of many tiny (micron and sub-micron) grains of water ice distributed between the orbits of mimas and titan. various gravitational and non-gravitational forces perturb these particles' orbits, causing the ring's local particle density to vary noticeably with distance from the planet, height above the ring-plane, hour angle and time. using remote-sensing data obtained by the cassini spacecraft in 2005 and 2006, we investigate the e-ring's three-dimensional structure during a time when the sun illuminated the rings from the south at high elevation angles (> 15 degrees). these observations show that the ring's vertical thickness grows with distance from enceladus' orbit and its peak brightness density shifts from south to north of saturn's equator plane with increasing distance from the planet. these data also reveal a localized depletion in particle density near saturn's equatorial plane around enceladus' semi-major axis. finally, variations are detected in the radial brightness profile and the vertical thickness of the ring as a function of longitude relative to the sun. possible physical mechanisms and processes that may be responsible for some of these structures include solar radiation pressure, variations in the ambient plasma, and electromagnetic perturbations associated with saturn's shadow.
arxiv:1111.2568
we build an evolution model of the central black hole that depends on the processes of gas accretion, the capture of stars, mergers, as well as electromagnetic torque. in the case of gas accretion in the presence of cooling sources, the flow is momentum-driven, after which the black hole reaches a saturated mass; subsequently, it grows only by stellar capture and mergers. we model the evolution of the mass and spin with the initial seed mass and spin in $\lambda$cdm cosmology. for stellar capture, we have assumed a power-law density profile for the stellar cusp in a framework of relativistic loss cone theory that includes the effects of black hole spin, carter's constant, loss cone angular momentum, and capture radius. based on this, the predicted capture rates of $10^{-5}$--$10^{-6}$ yr$^{-1}$ are closer to the observed range. we have considered the merger activity to be effective for $z \lesssim 4$, and we self-consistently include the blandford-znajek torque. we calculate these effects on the black hole growth individually and in combination to derive the evolution. before saturation, accretion dominates the black hole growth ($\sim 95\%$ of the final mass), and subsequently, stellar capture and mergers take over with roughly equal contributions. the simulations of the evolution of the $m_\bullet-\sigma$ relation using these effects are consistent with available observations. we run our model backward in time and retrodict the parameters at formation. our model will provide useful inputs for building demographics of the black holes and for formation scenarios involving stellar capture.
arxiv:2004.05000
rapid increases in food supplies have reduced global hunger, while rising burdens of diet-related disease have made poor diet quality the leading cause of death and disability around the world. today's "double burden" of undernourishment in utero and early childhood, then undesired weight gain and obesity later in life, is accompanied by a third, less visible burden of micronutrient imbalances. the triple burden of undernutrition, obesity, and unbalanced micronutrients that underlies many diet-related diseases such as diabetes, hypertension and other cardiometabolic disorders often coexists in the same person, household and community. all kinds of deprivation are closely linked to food insecurity and poverty, but income growth does not always improve diet quality, in part because consumers cannot directly or immediately observe the health consequences of their food options, especially for newly introduced or reformulated items. even after direct experience and epidemiological evidence reveal the relative risks of dietary patterns and nutritional exposures, many consumers may not consume a healthy diet because food choice is driven by other factors. this chapter reviews the evidence on dietary transition and food system transformation during economic development, drawing implications for how research and practice in agricultural economics can improve nutritional outcomes.
arxiv:2202.02579
in this paper, symmetry analysis is extended to study nonlocal differential equations, in particular two integrable nonlocal equations, the nonlocal nonlinear schrödinger equation and the nonlocal modified korteweg--de vries equation. lie point symmetries are obtained based on a general theory and used to reduce these equations to nonlocal and local ordinary differential equations separately; namely, one symmetry may allow reductions to both nonlocal and local equations depending on how the invariant variables are chosen. for the nonlocal modified korteweg--de vries equation, analogously to the local situation, all reduced local equations are integrable. at the end, we also define complex transformations to connect nonlocal differential equations and differential-difference equations.
arxiv:1904.01854
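For concreteness, the two nonlocal integrable equations named in the abstract above are usually written in the PT-symmetric form introduced by Ablowitz and Musslimani; the signs and nonlocal arguments below follow one common convention and may differ from the exact variant studied in the paper:

```latex
% nonlocal nonlinear Schr\"odinger equation (reverse-space coupling)
i\, q_t(x,t) = q_{xx}(x,t) + 2\sigma\, q^2(x,t)\, q^*(-x,t), \qquad \sigma = \pm 1,

% nonlocal modified Korteweg--de Vries equation (reverse-space-time coupling)
q_t(x,t) + 6\sigma\, q(x,t)\, q(-x,-t)\, q_x(x,t) + q_{xxx}(x,t) = 0 .
```

The nonlocality is visible in the nonlinear terms, which couple the field at $(x,t)$ to its value at the reflected point $(-x,t)$ or $(-x,-t)$; this is what makes symmetry reductions to either nonlocal or local ordinary differential equations possible, depending on the choice of invariant variables.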
this is the second of a series of popular lectures on quantum chromodynamics. the first (introductory) lecture can be found here: https://scfh.ru/blogs/o_fizike_i_fizikah/polet-nad-kvantovoy-khromodinamikoy/ the lecture deals with one of the main pillars of quantum chromodynamics: quantum mechanics. non-physicists usually consider quantum mechanics an extremely weird subject, far removed from everyday common sense. partly this is true. however, we will try to argue in this lecture that in its essence quantum mechanics is, in some sense, even more natural than classical mechanics, and not as far from common sense as a layman usually assumes.
arxiv:1811.03428
the validity of the work by lamata et al. [phys. rev. lett. 98, 253005 (2007)] can be further demonstrated by quantum field theory considerations.
arxiv:0712.0491
a new class of functions is presented. the structure of the algorithm, particularly its selection criteria (branching), is used to define the fundamental property of the new class. the most interesting property of the new functions is that individual instances are easy to compute, but if the input to a function is vague, the description of the function is exponentially complex. this property casts new light on randomness, especially on the random oracle model, with a couple of practical examples of random oracle implementation. consequently, it offers a new and interesting viewpoint on computational complexity in general.
arxiv:1309.0296
we consider a land mobile satellite communication system using spread spectrum techniques, where the uplink is exposed to mt jamming attacks and the downlink is corrupted by multi-path fading channels. we propose an anti-jamming receiver that exploits the inherent low-dimensionality of the received signal model by formulating a robust principal component analysis (robust pca)-based recovery problem. simulation results verify that the proposed receiver outperforms the conventional receiver for a reasonable rank of the jamming signal.
arxiv:1803.02549
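The abstract above does not specify the recovery algorithm. As a sketch of the general robust PCA idea it invokes, the following minimal principal component pursuit solver (ADMM with singular value thresholding) separates a low-rank component from a sparse one; in the jamming setting, the low-rank part would model the structured jamming signal and the sparse part the residual corruption. All function names, parameter choices, and the toy data here are illustrative assumptions, not the paper's actual receiver.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Elementwise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def robust_pca(M, lam=None, mu=None, tol=1e-7, max_iter=1000):
    """Decompose M ~ L + S (L low-rank, S sparse) by solving
    min ||L||_* + lam * ||S||_1  s.t.  L + S = M  via ADMM."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))      # standard PCP weight
    if mu is None:
        mu = 0.25 * M.size / np.abs(M).sum()   # common step-size heuristic
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                       # scaled dual variable
    norm_M = np.linalg.norm(M)
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)      # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)   # sparse update
        R = M - L - S                          # constraint residual
        Y += mu * R                            # dual ascent
        if np.linalg.norm(R) / norm_M < tol:
            break
    return L, S

# Toy example: rank-1 "signal" corrupted by a few large spikes.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((50, 1)) @ rng.standard_normal((1, 40))
spikes = np.zeros_like(low_rank)
idx = rng.choice(low_rank.size, 30, replace=False)
spikes.flat[idx] = 10.0 * rng.standard_normal(30)
L, S = robust_pca(low_rank + spikes)
```

Because the spikes are few and large while the signal matrix is rank one, the nuclear-norm and l1 penalties pull the two components apart almost exactly, which is the low-dimensionality structure the abstract refers to.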