We consider the $1d$ one-component plasma (OCP) in thermal equilibrium, consisting of $N$ equally charged particles on a line, with pairwise Coulomb repulsion and confined by an external harmonic potential. We study two observables: (i) the distribution of the gap between two consecutive particles in the bulk and (ii) the distribution of the number of particles $N_I$ in a fixed interval $I=[-L,+L]$ inside the bulk, the so-called full counting statistics (FCS). For both observables, we compute, for large $N$, the distribution of the typical as well as atypical large fluctuations. We show that the distribution of the typical fluctuations of the gap is described by the scaling form ${\cal P}_{\rm gap,bulk}(g,N) \sim N\, H_\alpha(g\,N)$, where $\alpha$ is the interaction coupling and the scaling function $H_\alpha(z)$ is computed explicitly. It has a faster-than-Gaussian tail for large $z$: $H_\alpha(z) \sim e^{-z^3/(96\alpha)}$ as $z \to \infty$. Similarly, for the FCS, we show that the distribution of the typical fluctuations of $N_I$ is described by the scaling form ${\cal P}_{\rm FCS}(N_I,N) \sim 2\alpha\, U_\alpha[2\alpha(N_I - \bar{N}_I)]$, where $\bar{N}_I = L\,N/(2\alpha)$ is the average value of $N_I$ and the scaling function $U_\alpha(z)$ is obtained explicitly. For both observables, we show that the probability of large fluctuations is described by large-deviation forms with respective rate functions that we compute explicitly. Our numerical Monte Carlo simulations are in good agreement with our analytical predictions.
arxiv:2202.12118
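The equilibrium configurations behind these gap statistics can be sampled with a simple Metropolis Monte Carlo scheme. The energy convention below (harmonic confinement plus linear $1d$ Coulomb repulsion, with an illustrative $1/N$ scaling of the coupling) is a common choice for the 1d OCP, but it is an assumption here, not the paper's exact normalization:

```python
import math
import random

def energy(x, alpha=1.0):
    # Assumed 1d OCP energy: harmonic confinement plus linear (1d Coulomb)
    # pairwise repulsion; the 1/n scaling of the coupling is illustrative.
    n = len(x)
    conf = 0.5 * sum(xi * xi for xi in x)
    rep = -(alpha / n) * sum(abs(x[i] - x[j])
                             for i in range(n) for j in range(i + 1, n))
    return conf + rep

def metropolis_gaps(n=20, steps=20000, beta=5.0, delta=0.3, seed=0):
    # Metropolis sampling of particle positions, returning the sorted
    # consecutive gaps of the final configuration.
    rng = random.Random(seed)
    x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    e = energy(x)
    for _ in range(steps):
        i = rng.randrange(n)
        old = x[i]
        x[i] = old + rng.uniform(-delta, delta)
        e_new = energy(x)
        if e_new > e and rng.random() >= math.exp(-beta * (e_new - e)):
            x[i] = old  # reject the move
        else:
            e = e_new   # accept the move
    xs = sorted(x)
    return [b - a for a, b in zip(xs, xs[1:])]
```

In a full experiment one would histogram these gaps over many sweeps and configurations before comparing against the scaling function $H_\alpha$.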
We consider superfluid hydrodynamics of two-dimensional Bose-Einstein condensates. Interpreting the curvature of the macroscopic condensate wavefunction as an effective gravity in such a superfluid universe, we argue for a superfluid equivalence principle: that the gravitational mass of a quantised vortex should be equal to the inertial vortex mass. In this model, gravity and electromagnetism have the same origin and are emergent properties of the superfluid universe, which itself emerges from the underlying collective structure of more elementary particles, such as atoms. The Bose-Einstein condensate is identified as the elusive dark matter of the superfluid universe, with vortices and phonons, respectively, corresponding to massive charged particles and massless photons. Implications of this cosmological picture of superfluids for the physics of dense vortex matter are considered.
arxiv:2001.03302
We consider the maximum a posteriori inference problem in discrete graphical models and study solvers based on the dual block-coordinate ascent rule. We map all existing solvers into a single framework, allowing for a better understanding of their design principles. We theoretically show that some block-optimizing updates are sub-optimal and show how to strictly improve them. On a wide range of problem instances of varying graph connectivity, we study the performance of existing solvers as well as of new variants that can be obtained within the framework. As a result of this exploration we build a new state-of-the-art solver, performing uniformly better on the whole range of test instances.
arxiv:2004.07715
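Block-coordinate ascent, the update rule studied above, maximizes a concave objective by exactly optimizing one block of variables at a time while holding the others fixed. The toy objective below is an illustrative stand-in, not a graphical-model dual:

```python
def block_coordinate_ascent(iters=100):
    # Maximize the concave f(x, y) = -(x-1)^2 - (y+2)^2 - 0.5*(x-y)^2
    # by alternating exact block updates (each sets one partial
    # derivative to zero while the other block is held fixed).
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = (2.0 + y) / 3.0   # argmax over x with y fixed
        y = (x - 4.0) / 3.0   # argmax over y with x fixed
    return x, y
```

Because each block update is an exact maximization, the objective is monotonically non-decreasing, and for this strictly concave function the iterates converge to the unique maximizer $(x, y) = (1/4, -5/4)$.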
The proliferation of radical online communities and their violent offshoots has sparked great societal concern. However, the current practice of banning such communities from mainstream platforms has unintended consequences: (i) the further radicalization of their members on the fringe platforms to which they migrate; and (ii) the spillover of harmful content from fringe back onto mainstream platforms. Here, in a large observational study on two banned subreddits, r/The_Donald and r/FatPeopleHate, we examine how factors associated with the RECRO radicalization framework relate to users' migration decisions. Specifically, we quantify how these factors affect users' decisions to post on fringe platforms and, for those who do, whether they continue posting on the mainstream platform. Our results show that individual-level factors, those relating to the behavior of users, are associated with the decision to post on the fringe platform, whereas social-level factors, users' connection with the radical community, only affect the propensity to be coactive on both platforms. Overall, our findings pave the way for evidence-based moderation policies, as the decisions to migrate and remain coactive amplify unintended consequences of community bans.
arxiv:2212.04765
A number of 'modified' Newtonian potentials of various forms are available in the literature which accurately approximate some general relativistic effects important for studying accretion discs around a Schwarzschild black hole. Such potentials may be called 'pseudo-Schwarzschild' potentials because they nicely mimic the space-time around a non-rotating/slowly rotating compact object. In this paper, we examine the validity of applying some of these potentials to the study of spherically symmetric, transonic, hydrodynamic accretion onto a Schwarzschild black hole. By comparing the values of various dynamical and thermodynamic accretion parameters obtained for flows using these potentials with full general relativistic calculations, we show that, although the potentials discussed in this paper were originally proposed to mimic the relativistic effects manifested in disc accretion, it is quite reasonable to use most of them to study various dynamical as well as thermodynamic quantities for spherical accretion, as a compromise between the ease of handling of a Newtonian description of gravity and the realistic situations described by complicated general relativistic calculations. We also show that, depending on the chosen region of the parameter space spanned by the specific energy ${\cal E}$ and adiabatic index $\gamma$ of the flow, one potential may be more suitable than another, and we identify which potential is the best approximation to the full general relativistic flow in Schwarzschild space-time for particular values of ${\cal E}$ and $\gamma$.
arxiv:astro-ph/0212007
If $(M,g)$ and $(\hat M,\hat g)$ are two smooth connected complete oriented Riemannian manifolds of dimensions $n$ and $\hat n$ respectively, we model the rolling of $(M,g)$ onto $(\hat M,\hat g)$ as a driftless control affine system describing two possible constraints of motion: the first rolling motion $\Sigma_{NS}$ captures the no-spinning condition only, and the second rolling motion $\Sigma_{R}$ corresponds to rolling without spinning or slipping. Two distributions of dimensions $(n+\hat n)$ and $n$, respectively, are then associated to the rolling motions $\Sigma_{NS}$ and $\Sigma_{R}$. This generalizes the rolling problems considered in \cite{chitourkokkonen1}, where both manifolds had the same dimension. The controllability issue is then addressed for both $\Sigma_{NS}$ and $\Sigma_{R}$ and completely solved for $\Sigma_{NS}$. As regards $\Sigma_{R}$, basic properties of the reachable sets are provided, as well as a complete study of the case $(n,\hat n)=(3,2)$ and some sufficient conditions for non-controllability.
arxiv:1312.4885
In this paper we show that a split central simple algebra with quadratic pair which decomposes into a tensor product of quaternion algebras with involution and a quaternion algebra with quadratic pair is adjoint to a quadratic Pfister form. This result is new in characteristic two; otherwise it is equivalent to the Pfister factor conjecture proven in [3].
arxiv:1512.01349
Multi-robot collaboration has become a necessary component of unknown-environment exploration due to its ability to handle various challenging situations. Potential-field-based methods are widely used for autonomous exploration because of their high efficiency and low travel cost. However, exploration speed and collaboration ability remain challenging topics. Therefore, we propose a distributed multi-robot potential-field-based exploration method (DMPF-Explore). In particular, we first present a distributed submap-based multi-robot collaborative mapping method (DSMC-Map), which can efficiently estimate the robot trajectories and construct the global map by merging the local maps from each robot. Second, we introduce a potential-field-based exploration strategy augmented with modified wave-front distance and colored noises (MWF-CN), in which the accessible frontier neighborhood is extended and the colored noise enhances exploration performance. The proposed exploration method is deployed in simulation and real-world scenarios. The results show that our approach outperforms existing ones regarding exploration speed and collaboration ability.
arxiv:2407.07409
The ability to automatically detect fraudulent escrow websites is important in order to alleviate online auction fraud. Despite research on related topics, fake escrow website categorization has received little attention. In this study we evaluated the effectiveness of various features and techniques for detecting fake escrow websites. Our analysis included a rich set of features extracted from web page text, image, and link information. We also proposed a composite kernel tailored to represent the properties of fake websites, including content duplication and structural attributes. Experiments were conducted to assess the proposed features, techniques, and kernels on a test bed encompassing nearly 90,000 web pages derived from 410 legitimate and fake escrow sites. The combination of an extended feature set and the composite kernel attained over 98% accuracy when differentiating fake sites from real ones, using the support vector machines algorithm. The results suggest that automated web-based information systems for detecting fake escrow sites could be feasible and may be utilized as authentication mechanisms.
arxiv:1309.7261
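The composite-kernel idea above, combining kernels computed on different feature views (text, image, link) into a single Gram matrix for an SVM, can be sketched as follows. The two-view structure, RBF base kernel, and weights are illustrative assumptions, not the paper's exact construction:

```python
import math

def rbf(u, v, gamma=1.0):
    # RBF kernel on a single feature view (vectors of equal length).
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * d2)

def composite_kernel(x, y, weights=(0.6, 0.4)):
    # x, y: tuples of feature views, e.g. (text_features, link_features).
    # A convex combination of valid kernels is itself a valid kernel.
    return sum(w * rbf(u, v) for w, (u, v) in zip(weights, zip(x, y)))

def gram(samples):
    # Gram matrix over a list of multi-view samples.
    return [[composite_kernel(a, b) for b in samples] for a in samples]
```

A kernel SVM implementation would consume this Gram matrix directly as a precomputed kernel.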
We theoretically investigate the cooling of a propagating phonon through Brillouin scattering. To that end, we propose to introduce an external viscous force using Brillouin scattering and electro-optic feedback. Short-delay feedback shows efficient control of the Brillouin linewidth, whereas long delays can induce Fano-like resonances.
arxiv:1708.09220
NSURL-2019 Task 7 focuses on named entity recognition (NER) in Farsi. This task was chosen to compare different approaches to finding phrases that specify named entities in Farsi texts, and to establish a standard testbed for future research on this task in Farsi. This paper describes the process of making the training and test data, the list of participating teams (6 teams), and the evaluation results of their systems. The best system obtained an F1 score of 85.4% based on phrase-level evaluation on seven classes of NEs, including person, organization, location, date, time, money and percent.
arxiv:2003.09029
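Phrase-level evaluation, as used to score the systems above, counts a predicted entity as correct only when both its span boundaries and its label match a gold entity exactly. A minimal sketch (the span representation is an assumption, not the task's official scorer):

```python
def phrase_f1(gold, pred):
    # gold, pred: collections of (start, end, label) entity spans.
    # A prediction is a true positive only on an exact span+label match.
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, with two gold entities and two predictions of which one matches exactly, precision and recall are both 0.5, giving F1 = 0.5.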
We use contact geometry to describe the monoid of projectively equivariant meromorphic differential operators on a complex curve, quantization of which generalizes known constructions of classical equivariants to non-commutative function algebras in several variables.
arxiv:1909.02137
In the Kerr-Newman spacetime the Teukolsky master equation, governing the fundamental test fields, is of great importance. We derive an analogous master equation for the non-rotating C-metric which encompasses the massless Klein-Gordon field, neutrino field, Maxwell field, Rarita-Schwinger field and gravitational perturbations. This equation is shown to be separable in terms of "accelerated spin-weighted spherical harmonics". It is shown that, contrary to ordinary spin-weighted spherical harmonics, the "accelerated" ones are different for different spins. In some cases, the equations for the eigenfunctions and eigenvalues are explicitly solved.
arxiv:1603.01451
We constrain the growth index $\gamma$ by performing a full-shape analysis of the power spectrum multipoles measured from the BOSS DR12 data. We adopt a theoretical model based on the effective field theory of the large scale structure (EFTofLSS) and focus on two different cosmologies: $\gamma$CDM and $\gamma\nu$CDM, where we also vary the total neutrino mass. We explore different choices for the priors on the primordial amplitude $A_s$ and spectral index $n_s$, finding that informative priors are necessary to alleviate degeneracies between the parameters and avoid strong projection effects in the posterior distributions. Our tightest constraints are obtained with $3\sigma$ Planck priors on $A_s$ and $n_s$: we obtain $\gamma = 0.647 \pm 0.085$ for $\gamma$CDM and $\gamma = 0.612^{+0.075}_{-0.090}$, $M_\nu < 0.30$ for $\gamma\nu$CDM at 68% c.l., in both cases $\sim 1\sigma$ consistent with the $\Lambda$CDM prediction $\gamma \simeq 0.55$. Additionally, we produce forecasts for a Stage-IV spectroscopic galaxy survey, focusing on a DESI-like sample. We fit synthetic data-vectors for three different galaxy samples generated at three different redshift bins, both individually and jointly. Focusing on the constraining power of the large scale structure alone, we find that forthcoming data can give an improvement of up to $\sim 85\%$ in the measurement of $\gamma$ with respect to the BOSS dataset when no CMB priors are imposed. On the other hand, we find the neutrino mass constraints to be only marginally better than the current ones, with future data able to put an upper limit of $M_\nu < 0.27~{\rm eV}$. This result can be improved with the inclusion of Planck priors on the primordial parameters, which yield $M_\nu < 0.18~{\rm eV}$.
arxiv:2306.09275
The classical Hodgkin-Huxley (HH) point-neuron model of action potential generation is four-dimensional. It consists of four ordinary differential equations describing the dynamics of the membrane potential and three gating variables associated with a transient sodium and a delayed-rectifier potassium ionic current. Conductance-based models of HH type are higher-dimensional extensions of the classical HH model. They include a number of supplementary state variables associated with other ionic current types, and are able to describe additional phenomena such as subthreshold oscillations, mixed-mode oscillations (subthreshold oscillations interspersed with spikes), clustering and bursting. In this manuscript we discuss biophysically plausible and phenomenological reduced models that preserve the biophysical and/or dynamic description of models of HH type and the ability to produce complex phenomena, but with a lower number of effective dimensions (state variables). We describe several representative models, and we also describe systematic and heuristic methods for deriving reduced models from models of HH type.
arxiv:2209.14751
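A classic example of such a dimensional reduction is the two-dimensional FitzHugh-Nagumo model, which collapses the HH gating variables into a single slow recovery variable. The sketch below uses the conventional textbook parameter values (an assumption, not values from this manuscript) and integrates the system with forward Euler; for a suitable input current it produces relaxation oscillations:

```python
def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=50000):
    # v: fast voltage-like variable; w: slow recovery variable.
    # dv/dt = v - v^3/3 - w + I,  dw/dt = eps * (v + a - b * w)
    v, w = -1.0, 1.0
    vs = []
    for _ in range(steps):
        dv = v - v ** 3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        vs.append(v)
    return vs
```

With $I = 0.5$ the unique fixed point is unstable and the trajectory settles onto a limit cycle, i.e. repetitive spiking, mimicking the HH model's oscillatory regime with only two state variables.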
Time-integrated spectroscopic measurements are carried out to characterize the transient plasma stream produced in a coaxial pulsed plasma accelerator. This method allows the estimation of different plasma parameters and their evolution with time. It also provides information on the existence of different excited states from the spectral emissions of the plasma. Using argon as the discharge medium, the electron density estimated from Stark-broadened line profiles gives a peak value $\sim 5 \times 10^{21}\,{\rm m}^{-3}$ at a discharge voltage of 15 kV, and the flow velocity of the plasma stream is measured to be $\sim (22 \pm 5)$ km/s using the Doppler shift method. Assuming a p-LTE model, the electron excitation temperature is found to be $\sim 0.88$ eV using the Boltzmann plot method. The temporal evolution of the plasma stream and its characteristic variation is studied from $50\,\mu s$ to $300\,\mu s$ in steps of $50\,\mu s$ by adjusting the delay time in the triggering. Analysis of different spectral lines shows the existence of some metastable states of Ar II having a long lifetime. The evolution of different Ar II transitions to metastable and non-metastable lower levels is observed for different time frames. The temporal evolution study shows a decrease in electron density from $1.96 \times 10^{21}\,{\rm m}^{-3}$ to $1.23 \times 10^{20}\,{\rm m}^{-3}$ at $300\,\mu s$ after the initiation of plasma formation. A decrease in excitation temperature from 0.86 eV to 0.72 eV is observed until $250\,\mu s$, after which it rises again to 0.77 eV at $300\,\mu s$.
arxiv:2008.00473
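The Boltzmann plot method used above extracts the excitation temperature from relative line intensities: for lines of one species, $\ln(I\lambda/gA)$ is linear in the upper-level energy with slope $-1/(k_B T)$. A minimal least-squares sketch (the line data passed in the test are synthetic illustrative values, not measurements from this work):

```python
import math

def boltzmann_temperature(lines):
    # lines: list of (intensity, wavelength, g, A, E_upper) for one species.
    # ln(I * lambda / (g * A)) = const - E_upper / (k_B * T), so a linear
    # fit against E_upper has slope -1/T (T in eV when E_upper is in eV).
    xs = [E for (_, _, _, _, E) in lines]
    ys = [math.log(I * lam / (g * A)) for (I, lam, g, A, _) in lines]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return -1.0 / slope
```

Feeding synthetic intensities generated at a known temperature recovers that temperature exactly, which is a useful sanity check before applying the fit to measured lines.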
I argue that string theory compactified on a Riemann surface crosses over at small volume to a higher-dimensional background of supercritical string theory. Several concrete measures of the count of degrees of freedom of the theory yield the consistent result that at finite volume, the effective dimensionality is increased by an amount of order $2h/V$ for a surface of genus $h$ and volume $V$ in string units. This arises in part from an exponentially growing density of states of winding modes supported by the fundamental group, and passes an interesting test of modular invariance. Further evidence for a plethora of examples with the spacelike singularity replaced by a higher-dimensional phase arises from the fact that the sigma model on a Riemann surface can be naturally completed by many gauged linear sigma models, whose RG flows approximate time evolution in the full string backgrounds arising from this in the limit of large dimensionality. In recent examples of spacelike singularity resolution by tachyon condensation, the singularity is ultimately replaced by a phase in which all modes become heavy and decouple. In the present case, the opposite behavior ensues: more light degrees of freedom arise in the small-radius regime. I comment on the emerging zoology of cosmological singularities that results.
arxiv:hep-th/0510044
White blood cells (WBCs) are the most diverse cell types observed in the healing process of injured skeletal muscles. In the course of healing, WBCs exhibit a dynamic cellular response and undergo multiple protein expression changes. The progress of healing can be analyzed by quantifying the number of WBCs or the amount of specific proteins in light microscopic images obtained at different time points after injury. In this paper, we propose an automated quantification and analysis framework to analyze WBCs using light microscopic images of uninjured and injured muscles. The proposed framework is based on the localized iterative Otsu's threshold method with muscle edge detection and region-of-interest extraction. Compared with the threshold methods used in ImageJ, the localized iterative Otsu's threshold method has high resistance to background areas and achieves better accuracy. Results on CD68-positive cells are presented to demonstrate the effectiveness of the proposed work.
arxiv:2409.06722
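Otsu's method, the core of the thresholding pipeline above, picks the gray level that maximizes the between-class variance of the image histogram. A minimal global version is sketched below for illustration; the paper's localized iterative variant applies such a threshold per region rather than once per image:

```python
def otsu_threshold(pixels, levels=256):
    # Global Otsu threshold over integer gray levels in [0, levels).
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b, w_b = 0, 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]          # background weight (levels <= t)
        if w_b == 0:
            continue
        w_f = total - w_b       # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b       # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a clearly bimodal histogram, the returned level separates the two modes, which is exactly the behavior a cell/background segmentation relies on.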
We investigate the problem of scanning and prediction ("scandiction", for short) of multidimensional data arrays. This problem arises in several aspects of image and video processing, such as predictive coding, for example, where an image is compressed by coding the error sequence resulting from scandicting it. Thus, it is natural to ask what is the optimal method to scan and predict a given image, what is the resulting minimum prediction loss, and whether there exist specific scandiction schemes which are universal in some sense. Specifically, we investigate the following problems: first, modeling the data array as a random field, we wish to examine whether there exists a scandiction scheme which is independent of the field's distribution, yet asymptotically achieves the same performance as if this distribution were known. This question is answered in the affirmative for the set of all spatially stationary random fields and under mild conditions on the loss function. We then discuss the scenario where a non-optimal scanning order is used, yet accompanied by an optimal predictor, and derive bounds on the excess loss compared to optimal scanning and prediction. This paper is the first part of a two-part paper on sequential decision making for multidimensional data. It deals with clean, noiseless data arrays. The second part deals with noisy data arrays, namely, with the case where the decision maker observes only a noisy version of the data, yet is judged with respect to the original, clean data.
arxiv:cs/0609049
A method for detecting dust in the stratosphere and mesosphere by twilight sky background observations is considered. Polarization measurements are effective for detecting meteoric dust scattering against a background consisting basically of tropospheric multiple scattering. The method is based on the observed and explained polarization properties of the sky background during different stages of twilight. It is used to detect mesospheric dust after the Leonid maximum in 2002 and to investigate its evolution. The polarization method takes into account multiple scattering and the significant contribution of the moonlight-scattering background, and turns out to be more sensitive than existing analogs in use at present.
arxiv:astro-ph/0612586
DarkSUSY is a versatile tool for precision calculations of a large variety of dark matter-related signals, ranging from predictions for the dark matter relic density to dark matter self-interactions and rates relevant for direct and indirect detection experiments. In all of these areas significant new code additions have been made in recent years, since the release of DarkSUSY 6 in 2018, which we summarize in this overview. In particular, DarkSUSY now allows users to compute the relic density for feebly interacting massive particles via the freeze-in mechanism, but also offers new routines for freeze-out calculations in the presence of secluded dark sectors as well as for models where kinetic equilibrium is not fully established during the freeze-out process. On the direct detection side, the effect of cosmic-ray upscattering of dark matter has been fully implemented, leading to a subdominant relativistic component in the expected dark matter flux at Earth. Finally, updated yields relevant for indirect searches with gamma rays, neutrinos or charged cosmic rays have been added; the new default spectra are based on a large number of Pythia 8 runs, but users can also easily switch between various alternative spectra. Further code details, including a manual and various concrete example applications, are provided at www.darksusy.org.
arxiv:2203.07439
Providing a human-understandable explanation of classifiers' decisions has become imperative to generate trust in their use for day-to-day tasks. Although many works have addressed this problem by generating visual explanation maps, they often provide noisy and inaccurate results, forcing the use of heuristic regularization unrelated to the classifier in question. In this paper, we propose a new general perspective on the visual explanation problem that overcomes these limitations. We show that a visual explanation can be produced as the difference between two generated images obtained via two specific conditional generative models. Both generative models are trained using the classifier to be explained and a database to enforce the following properties: (i) all images generated by the first generator are classified similarly to the input image, whereas the second generator's outputs are classified oppositely; (ii) generated images belong to the distribution of real images; (iii) the distances between the input image and the corresponding generated images are minimal, so that the difference between the generated elements only reveals relevant information for the studied classifier. Using symmetrical and cyclic constraints, we present two different approximations and implementations of the general formulation. Experimentally, we demonstrate significant improvements w.r.t. the state of the art on three different public data sets. In particular, the localization of regions influencing the classifier is consistent with human annotations.
arxiv:2106.10947
This article concerns the existence and uniqueness of quantisations of cluster algebras. We prove that cluster algebras with an initial exchange matrix of full rank admit a quantisation in the sense of Berenstein-Zelevinsky, and we give an explicit generating set to construct all quantisations.
arxiv:1402.1094
In nuclear engineering, modeling and simulation (M&S) is widely applied to support risk-informed safety analysis. Since nuclear safety analysis has important implications, a convincing validation process is needed to assess simulation adequacy, i.e., the degree to which M&S tools can adequately represent the system quantities of interest. However, due to data gaps, validation becomes a decision-making process under uncertainties. Expert knowledge and judgments are required to collect, choose, characterize, and integrate evidence toward the final adequacy decision. However, in validation frameworks such as CSAU (Code Scaling, Applicability, and Uncertainty; NUREG/CR-5249) and EMDAP (Evaluation Model Development and Assessment Process; RG 1.203), this decision-making process is largely implicit and obscure. When scenarios are complex, knowledge biases and unreliable judgments can be overlooked, which could increase uncertainty in the simulation adequacy result and the corresponding risks. Therefore, a framework is required to formalize the decision-making process for simulation adequacy in a practical, transparent, and consistent manner. This paper suggests a framework, Predictive Capability Maturity Quantification using Bayesian Network (PCMQBN), for quantifying simulation adequacy based on information collected from validation activities. A case study is prepared to evaluate the adequacy of a smoothed-particle hydrodynamics simulation in predicting the hydrodynamic forces on static structures during an external flooding scenario. Compared with qualitative and implicit adequacy assessments, PCMQBN is able to improve confidence in the simulation adequacy result and to reduce expected loss in risk-informed safety analysis.
arxiv:2010.03373
are used with a completely different meaning. This may lead to sentences that are correct and true mathematical assertions, but appear to be nonsense to people who do not have the required background. For example, "every free module is flat" and "a field is always a ring".

== Relationship with sciences ==

Mathematics is used in most sciences for modeling phenomena, which then allows predictions to be made from experimental laws. The independence of mathematical truth from any experimentation implies that the accuracy of such predictions depends only on the adequacy of the model. Inaccurate predictions, rather than being caused by invalid mathematical concepts, imply the need to change the mathematical model used. For example, the perihelion precession of Mercury could only be explained after the emergence of Einstein's general relativity, which replaced Newton's law of gravitation as a better mathematical model. There is still a philosophical debate whether mathematics is a science. However, in practice, mathematicians are typically grouped with scientists, and mathematics shares much in common with the physical sciences. Like them, it is falsifiable, which means in mathematics that, if a result or a theory is wrong, this can be proved by providing a counterexample. Similarly as in science, theories and results (theorems) are often obtained from experimentation. In mathematics, the experimentation may consist of computation on selected examples or of the study of figures or other representations of mathematical objects (often mind representations without physical support). For example, when asked how he came about his theorems, Gauss once replied "durch planmäßiges Tattonieren" (through systematic experimentation). However, some authors emphasize that mathematics differs from the modern notion of science by not relying on empirical evidence.
=== Pure and applied mathematics ===

Until the 19th century, the development of mathematics in the West was mainly motivated by the needs of technology and science, and there was no clear distinction between pure and applied mathematics. For example, the natural numbers and arithmetic were introduced for the need of counting, and geometry was motivated by surveying, architecture and astronomy. Later, Isaac Newton introduced infinitesimal calculus for explaining the movement of the planets with his law of gravitation. Moreover, most mathematicians were also scientists, and many scientists were also mathematicians. However, a notable exception occurred with the tradition of pure mathematics in ancient Greece. The problem of integer factorization, for example, which goes back to Euclid in 300 BC, had no practical application before its use in
https://en.wikipedia.org/wiki/Mathematics
This paper addresses a major flaw of the cycle consistency loss when used to preserve the input appearance in the face-to-face synthesis domain. In particular, we show that the images generated by a network trained using this loss conceal a noise that hinders their use for further tasks. To overcome this limitation, we propose a "recurrent cycle consistency loss" which, for different sequences of target attributes, minimises the distance between the output images, independent of any intermediate step. We empirically validate not only that our loss enables the re-use of generated images, but also that it improves their quality. In addition, we propose the very first network that covers the task of unconstrained landmark-guided face-to-face synthesis. Contrary to previous works, our proposed approach enables the transfer of a particular set of input features to a large span of poses and expressions, whereby the target landmarks become the ground-truth points. We then evaluate the consistency of our proposed approach to synthesise faces at the target landmarks. To the best of our knowledge, we are the first to propose a loss to overcome the limitation of the cycle consistency loss, and the first to propose an "in-the-wild" landmark-guided synthesis approach. Code and models for this paper can be found at https://github.com/esanchezlozano/gannotation
arxiv:2004.07165
We give the Lagrangian formulation of a generic non-minimally extended Einstein-Maxwell theory with an action that is linear in the curvature and quadratic in the electromagnetic field. We derive the coupled field equations by a first-order variational principle using the method of Lagrange multipliers. We look for solutions describing plane-fronted Einstein-Maxwell waves with parallel rays. We give a family of exact solutions associated with a partially massless spin-2 photon and a partially massive spin-2 graviton.
arxiv:1101.1177
Since their formulation, Turán's hypergraph problems have been among the most challenging open problems in extremal combinatorics. One of them is the following: given a $3$-uniform hypergraph $\mathcal{F}$ on $n$ vertices in which any five vertices span at least one edge, prove that $|\mathcal{F}| \ge (1/4 - o(1)) \binom{n}{3}$. The construction showing that this bound would be best possible is simply $\binom{X}{3} \cup \binom{Y}{3}$, where $X$ and $Y$ evenly partition the vertex set. This construction has the following more general $(2p+1, p+1)$-property: any set of $2p+1$ vertices spans a complete sub-hypergraph on $p+1$ vertices. One of our main results says that, quite surprisingly, for all $p > 2$ the $(2p+1, p+1)$-property implies the conjectured lower bound.
arxiv:2004.08734
lutwak, yang and zhang \cite{lyz2018} introduced the $l_p$ dual curvature measure, which unifies several other geometric measures in the dual brunn-minkowski theory and the brunn-minkowski theory. motivated by the works in \cite{lyz2018}, we consider the uniqueness and continuity of the solution to the $l_p$ dual minkowski problem. to extend the important work (theorem \ref{uniquepolytope}) of lyz to the case of general convex bodies, we establish some new minkowski-type inequalities which are closely related to the optimization problem associated with the $l_p$ dual minkowski problem. when $q < p$, the uniqueness of the solution to the $l_p$ dual minkowski problem for general convex bodies is obtained. moreover, we obtain the continuity of the solution to the $l_p$ dual minkowski problem for convex bodies.
arxiv:2103.13075
controllable music generation promotes the interaction between humans and composition systems by projecting the users' intent onto their desired music. the challenge of introducing controllability is an increasingly important issue in the symbolic music generation field. when building controllable generative systems for popular multi-instrument music, two main challenges typically present themselves, namely weak controllability and poor music quality. to address these issues, we first propose spatiotemporal features as powerful and fine-grained controls to enhance the controllability of the generative model. in addition, an efficient music representation called remi_track is designed to convert multitrack music into multiple parallel music sequences and to shorten the sequence length of each track with byte pair encoding (bpe) techniques. subsequently, we release bandcontrolnet, a conditional model based on parallel transformers, to tackle the multiple music sequences and generate high-quality music samples conditioned on the given spatiotemporal control features. more concretely, the two specially designed modules of bandcontrolnet, namely structure-enhanced self-attention (se-sa) and the cross-track transformer (ctt), are utilized to strengthen the resulting musical structure and the inter-track harmony modeling, respectively. experimental results on two popular music datasets of different lengths demonstrate that the proposed bandcontrolnet outperforms other conditional music generation models on most objective metrics in terms of fidelity and inference speed, and shows great robustness in generating long music samples. the subjective evaluations show that bandcontrolnet trained on short datasets can generate music of comparable quality to state-of-the-art models, while outperforming them significantly when using longer datasets.
arxiv:2407.10462
understanding the crystal field splitting and orbital polarization in non-centrosymmetric systems such as ferroelectric materials is fundamentally important. in this study, taking batio$_3$ (bto) as a representative material, we investigate titanium crystal field splitting and orbital polarization in non-centrosymmetric tio$_6$ octahedra with resonant x-ray linear dichroism at the ti $l_{2,3}$-edge. the high-quality batio$_3$ thin films were deposited on dysco$_3$ (110) single crystal substrates in a layer-by-layer way by pulsed laser deposition. reflection high-energy electron diffraction (rheed) and element-specific x-ray absorption spectroscopy (xas) were performed to characterize the structural and electronic properties of the films. in sharp contrast to the conventional crystal field splitting and orbital configuration ($d_{xz}$/$d_{yz}$ $<$ $d_{xy}$ $<$ $d_{3z^2-r^2}$ $<$ $d_{x^2-y^2}$ or $d_{xy}$ $<$ $d_{xz}$/$d_{yz}$ $<$ $d_{x^2-y^2}$ $<$ $d_{3z^2-r^2}$) expected from the jahn-teller effect, it is revealed that the $d_{xz}$, $d_{yz}$, and $d_{xy}$ orbitals are nearly degenerate, whereas the $d_{3z^2-r^2}$ and $d_{x^2-y^2}$ orbitals are split with an energy gap of $\sim$100 mev in the epitaxial bto films. the unexpected degenerate states $d_{xz}$/$d_{yz}$/$d_{xy}$ are coupled to ti-o displacements resulting from the competition between polar and jahn-teller distortions in the non-centrosymmetric tio$_6$ octahedra of bto films. our results provide a route to manipulate the orbital degree of freedom by switching electric polarization in ferroelectric materials.
arxiv:1908.07194
three-body $b^+ \to p \overline{p} k^+$ and $b^+ \to p \overline{p} \pi^+$ decays are studied using a data sample corresponding to an integrated luminosity of 3.0 fb$^{-1}$ collected by the lhcb experiment in proton-proton collisions at center-of-mass energies of 7 and 8 tev. evidence of cp violation in the $b^+ \to p \overline{p} k^+$ decay is found in regions of the phase space, representing the first measurement of this kind for a final state containing baryons. measurements of the forward-backward asymmetry of the light meson in the $p \overline{p}$ rest frame yield $a_{\mathrm{fb}}(p \overline{p} k^+,~m_{p \overline{p}} < 2.85~\mathrm{gev}/c^2) = 0.495 \pm 0.012~(\mathrm{stat}) \pm 0.007~(\mathrm{syst})$ and $a_{\mathrm{fb}}(p \overline{p} \pi^+,~m_{p \overline{p}} < 2.85~\mathrm{gev}/c^2) = -0.409 \pm 0.033~(\mathrm{stat}) \pm 0.006~(\mathrm{syst})$. in addition, the branching fraction of the decay $b^+ \to \overline{\lambda}(1520) p$ is measured to be $\mathcal{b}(b^+ \to \overline{\lambda}(1520) p) = (3.15 \pm 0.48~(\mathrm{stat}) \pm 0.07~(\mathrm{syst}) \pm 0.26~(\mathrm{bf})) \times 10^{-7}$, where bf denotes the uncertainty on secondary branching fractions.
arxiv:1407.5907
algorithmic fairness, the research field of making machine learning (ml) algorithms fair, is an established area in ml. as ml technologies expand their application domains, including ones with high societal impact, it becomes essential to take fairness into consideration when building ml systems. yet, despite its wide range of socially sensitive applications, most work treats the issue of algorithmic bias as an intrinsic property of supervised learning, i.e., the class label is given as a precondition. unlike prior studies of fairness, we propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels, while enforcing that similar individuals be treated similarly from a ranking perspective, free of the lipschitz condition in the conventional individual fairness definition. we argue that this perspective represents a more realistic model of fairness research for real-world application deployment, and we show how learning with such a relaxed precondition draws new insights that better explain algorithmic fairness. we conducted experiments on four real-world datasets to evaluate our proposed method against other fairness models, demonstrating its superiority in minimizing discrimination while maintaining predictive performance in the presence of uncertainty.
arxiv:2302.08015
we revisit the classic equity fund selection and portfolio construction problems from a new perspective and propose an easy-to-implement framework to tackle the problem in practical investment. rather than the conventional approach of constructing a long-only portfolio from a big universe of stocks or macro factors, we show how to produce a long-short portfolio from a smaller pool of stocks drawn from mutual fund top holdings and generate impressive results. as these methods are based on statistical evidence, we need to closely monitor the model validity and prepare repair strategies.
arxiv:2004.10631
the latest experimental data for the $\gamma + p \to k^+ + \lambda$ reaction, namely the $c_{x'}$ and $c_{z'}$ double polarizations, have been analyzed. in theoretical calculations, all of these observables can be classified into four legendre classes and represented by the associated legendre polynomials themselves \cite{fasano92}. in this analysis we attempt to determine the best data model for both observables. we use the bayesian technique to select the best model by calculating the posterior probabilities and comparing the posteriors among the models. the posterior probabilities for each data model are computed using nested sampling integration. from this analysis we conclude that the $c_{x'}$ and $c_{z'}$ double polarizations require two and three orders of associated legendre polynomials, respectively, to describe the data well. the extracted coefficients of each observable will also be presented; they qualitatively show the structure of baryon resonances.
arxiv:1011.4845
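the model-selection step described in the abstract above — ranking legendre expansions of different orders by their bayesian evidence — can be illustrated with a much cheaper stand-in. the sketch below fits legendre series of increasing order to mock angular data and compares them by the bayesian information criterion (bic) rather than a full nested-sampling evidence; the data, noise level, and candidate orders are all illustrative assumptions, not values from the paper.

```python
# Sketch: comparing Legendre-expansion models of mock polarization data by
# BIC, a cheap stand-in for nested-sampling evidence. All numbers here are
# synthetic assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 40)                      # cos(theta) sampling points
true = 0.3 + 0.5 * x - 0.4 * (1.5 * x**2 - 0.5)     # an order-2 Legendre series
sigma = 0.05
y = true + rng.normal(0.0, sigma, x.size)           # noisy mock observable

def bic(order):
    # least-squares fit of a Legendre series of the given order
    coef = np.polynomial.legendre.legfit(x, y, order)
    resid = y - np.polynomial.legendre.legval(x, coef)
    chi2 = np.sum((resid / sigma) ** 2)
    k = order + 1                                   # number of fitted coefficients
    return chi2 + k * np.log(x.size)

scores = {m: bic(m) for m in range(5)}
# lower BIC is better; the order-2 model is typically preferred here,
# since higher orders only fit noise and pay the complexity penalty
print(sorted(scores.items()))
```

the same pattern (fit each candidate order, score it, keep the lowest score) carries over directly when bic is replaced by a proper evidence integral.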
numerical weather prediction (nwp) has proven to be computationally challenging due to its inherent multiscale nature. currently, the highest resolution nwp models use a horizontal resolution of about 10 km. in order to increase the resolution of nwp models, highly scalable atmospheric models are needed. the non-hydrostatic unified model of the atmosphere (numa), developed by the authors at the naval postgraduate school, was designed to achieve this purpose. numa is used by the naval research laboratory, monterey as the engine inside its next generation weather prediction system neptune. numa solves the fully compressible navier-stokes equations by means of high-order galerkin methods (both spectral element and discontinuous galerkin methods can be used). mesh generation is done using the p4est library. numa is capable of running middle and upper atmosphere simulations since it does not make use of the shallow-atmosphere approximation. this paper presents the performance analysis and optimization of the spectral element version of numa. the performance at different optimization stages is analyzed using a theoretical performance model as well as measurements via hardware counters. machine-independent optimization is compared to machine-specific optimization using bg/q vector intrinsics. by using vector intrinsics the main computations reach 1.2 pflops on the entire machine mira (12% of the theoretical peak performance). the paper also presents scalability studies for two idealized test cases that are relevant for nwp applications. the atmospheric model numa delivers an excellent strong scaling efficiency of 99% on the entire supercomputer mira using a mesh with 1.8 billion grid points. this allows running a global forecast of a baroclinic wave test case at 3 km uniform horizontal resolution and double precision within the time frame required for operational weather prediction.
arxiv:1511.01561
the forces exerted by stars and planets at the moment of birth can in any way shape our futures. astronomer carl sagan declined to sign the statement. sagan said he took this stance not because he thought astrology had any validity, but because he thought that the tone of the statement was authoritarian, and that dismissing astrology because there was no mechanism (while "certainly a relevant point") was not in itself convincing. in a letter published in a follow-up edition of the humanist, sagan confirmed that he would have been willing to sign such a statement had it described and refuted the principal tenets of astrological belief. this, he argued, would have been more persuasive and would have produced less controversy. the use of poetic imagery based on the concepts of the macrocosm and microcosm, "as above, so below", to decide meaning, such as edward w. james' example of "mars above is red, so mars below means blood and war", is a false cause fallacy. : 26 many astrologers claim that astrology is scientific. if one were to attempt to explain it scientifically, there are only four fundamental forces (conventionally), limiting the choice of possible natural mechanisms. : 65 some astrologers have proposed conventional causal agents such as electromagnetism and gravity. the strength of these forces drops off with distance. : 65 scientists reject these proposed mechanisms as implausible since, for example, the magnetic field, when measured from earth, of a large but distant planet such as jupiter is far smaller than that produced by ordinary household appliances. astronomer phil plait noted that in terms of magnitude, the sun is the only object with an electromagnetic field of note, but astrology is not based on the sun alone. : 65 while astrologers could try to suggest a fifth force, this is inconsistent with the trends in physics, with the unification of electromagnetism and the weak force into the electroweak force.
if the astrologer insisted on being inconsistent with the current understanding and evidential basis of physics, that would be an extraordinary claim. : 65 it would also be inconsistent with the other forces, which drop off with distance. : 65 if distance is irrelevant, then, logically, all objects in space should be taken into account. : 66 carl jung sought to invoke synchronicity, the claim that two events have some sort of acausal connection, to explain the lack of statistically significant results on astrology.
https://en.wikipedia.org/wiki/Astrology_and_science
we report on experimental studies of entanglement quantification and verification based on uncertainty relations for systems consisting of two qubits. the newly proposed measure is shown to be invariant under local unitary transformations, by which entanglement quantification is implemented for two-qubit pure states. the nonlocal uncertainty relations for two-qubit pure states are also used for entanglement verification, which serves as a basic proposition and promises to be a good choice for the verification of multipartite entanglement.
arxiv:quant-ph/0611196
let $ x $ be a smooth projective horospherical variety of picard number one. we show that a uniruled projective manifold of picard number one is biholomorphic to $ x $ if its variety of minimal rational tangents at a general point is projectively equivalent to that of $ x $. to get a local flatness of the geometric structure arising from the variety of minimal rational tangents, we apply the methods of $ w $ - normal complete step prolongations. we compute the associated lie algebra cohomology space of degree two and show the vanishing of holomorphic sections of the vector bundle having this cohomology space as a fiber.
arxiv:2203.10313
we consider the knizhnik-zamolodchikov system of linear differential equations. the coefficients of this system are rational functions. we prove that under some conditions the solution of the kz system is rational too. we give a method for constructing the corresponding rational solution. we deduce asymptotic formulas for the solution of the kz system when $\rho$ is an integer.
arxiv:0805.2662
we show that the heat flow provides good approximation properties for the area functional on proper $ \ rcd ( k, \ infty ) $ spaces, implying that in this setting the area formula for functions of bounded variation holds and that the area functional coincides with its relaxation. we then obtain partial regularity and uniqueness results for functions whose hypographs are perimeter minimizing. finally, we consider sequences of $ \ rcd ( k, n ) $ spaces and we show that, thanks to the previously obtained properties, sobolev minimizers of the area functional in a limit space can be approximated with minimizers along the converging sequence of spaces. using this last result, we obtain applications on ricci - limit spaces.
arxiv:2405.11938
regarding the civil engineering of shorelines, soft engineering is a shoreline management practice that uses sustainable ecological principles to restore shoreline stabilization and protect riparian habitats. soft shoreline engineering (sse) uses the strategic placement of organic materials such as vegetation, stones, sand, debris, and other structural materials to reduce erosion, enhance shoreline aesthetics, soften the land-water interface, and lower the costs of ecological restoration. to differentiate soft shoreline engineering from hard shoreline engineering: hard shoreline engineering tends to use steel sheet piling or concrete breakwalls to prevent danger and fortify shorelines. generally, hard shoreline engineering is used for navigational or industrial purposes. by contrast, soft shoreline engineering emphasizes the application of ecological principles rather than compromising the engineered integrity of the shoreline. the opposite alternative is hard engineering.

== background ==

hard shoreline engineering is the use of non-organic reinforcing materials, such as concrete, steel, and plastic, to fortify shorelines, stop erosion, and protect urban development from flooding. however, as shoreline development among coastal cities increased dramatically, the detrimental ecological factors became apparent. hard shoreline engineering was designed to accommodate human development along the coast, focusing on increasing efficiency in the commercial, navigational, and industrial sectors of the economy. in 2003, the global population living within 120 miles (190 km) of an ocean was 3 billion and is expected to double by the year 2025. these developments came at a high cost, destroying biological communities, isolating riparian habitats, and altering the natural transport of sediment by disrupting wave action and long-shore currents.
many coastal regions began to see significant coastal degradation due to human development, with the detroit river losing as much as 97% of its coastal wetland habitats. singapore, as well, documented the disappearance of the majority of its mangrove forests, coastal reefs, and mudflat regions between 1920 and 1990 due to shoreline development. towards the end of the 20th century, coastal engineering practices underwent a gradual transition towards incorporating the natural environment into planning considerations. in stark contrast to hard engineering, employed with the sole purpose of improving navigation and the industrial and commercial uses of the river, soft engineering takes a multi-faceted approach, developing shorelines for a multitude of benefits and incorporating consideration of fish and wildlife habitat. tasked with the responsibility to construct and maintain united states federally authorized coastal civil works projects, the u.s. army corps of engineers plays a major part in the development of the principles of coastal engineering as practiced within the u.s., in part due to the degradation of coastline across the united states.
https://en.wikipedia.org/wiki/Soft_engineering
low - energy properties of the homogeneous electron gas in one dimension are completely described by the group velocities of its charge ( plasmon ) and spin collective excitations. because of the long range of the electron - electron interaction, the plasmon velocity is dominated by an electrostatic contribution and can be estimated accurately. in this letter we report on quantum monte carlo simulations which demonstrate that the spin velocity is substantially decreased by interactions in semiconductor quantum wire realizations of the one - dimensional electron liquid.
arxiv:cond-mat/0005354
we report on x-ray studies of freely suspended hexatic films of three different liquid crystal compounds. by applying angular x-ray cross-correlation analysis (xcca) to the measured diffraction patterns, the parameters of the bond-orientational (bo) order in the hexatic phase were directly determined. the temperature evolution of the bo order parameters was analyzed on the basis of the multicritical scaling theory (mcst). our results confirmed the validity of the mcst in the whole temperature range of existence of the hexatic phase for all three compounds. the temperature dependence of the bo order parameters in the vicinity of the hexatic-smectic transition was fitted by a conventional power law with a critical exponent $\beta \approx 0.1$ of extremely small value. we found that the temperature dependence of the higher-order harmonics of the bo order scales as powers of the first harmonic, with exponent equal to the harmonic number. this indicates a nonlinear coupling of the bo order parameters of different orders. it is shown that compounds of various compositions, possessing different phase sequences, display the same thermodynamic behavior in the hexatic phase and in the vicinity of the smectic-hexatic phase transition.
arxiv:1703.01198
we prove that octants are cover - decomposable, i. e., any 12 - fold covering of any subset of the space with a finite number of translates of a given octant can be decomposed into two coverings. as a corollary, we obtain that any 12 - fold covering of any subset of the plane with a finite number of homothetic copies of a given triangle can be decomposed into two coverings. we also show that any 12 - fold covering of the whole plane with open triangles can be decomposed into two coverings. however, we exhibit an indecomposable 3 - fold covering.
arxiv:1101.3773
the effects of shear on energy release rate and mode mixity in a symmetric sandwich beam with isotropic layers and a debond crack at the face sheet / core interface are investigated through a semi - analytic approach based on two - dimensional elasticity and linear elastic fracture mechanics. semi - analytic expressions are derived for the shear components of energy release rate and mode mixity phase angle which depend on four numerical coefficients derived through accurate finite element analyses. the expressions are combined with earlier results for three - layer configurations subjected to bending - moments and axial forces to obtain solutions for sandwich beams under general loading conditions and for an extensive range of geometrical and material properties. the results are applicable to laboratory specimens used for the characterization of the fracture properties of sandwich composites for civil, marine, energy and aeronautical applications, provided the lengths of the crack and the ligament ahead of the crack tip are above minimum lengths. the physical and mechanical significance of the terms of the energy release rate which depend on the shear forces are explained using structural mechanics concepts and introducing crack tip root rotations to account for the main effects of the near tip deformations.
arxiv:1802.10181
the statistical mechanics of quantum - classical systems with holonomic constraints is formulated rigorously by unifying the classical dirac bracket and the quantum - classical bracket in matrix form. the resulting dirac quantum - classical theory, which conserves the holonomic constraints exactly, is then used to formulate time evolution and statistical mechanics. the correct momentum - jump approximation for constrained system arises naturally from this formalism. finally, in analogy with what was found in the classical case, it is shown that the rigorous linear response function of constrained quantum - classical systems contains non - trivial additional terms which are absent in the response of unconstrained systems.
arxiv:quant-ph/0511142
mid-infrared spectral observations of uranus acquired with the infrared spectrometer (irs) on the spitzer space telescope are used to determine the abundances of c2h2, c2h6, ch3c2h, c4h2, co2, and tentatively ch3 on uranus at the time of the 2007 equinox. for vertically uniform eddy diffusion coefficients in the range 2200-2600 cm2 s-1, photochemical models that reproduce the observed methane emission also predict c2h6 profiles that compare well with emission in the 11.6-12.5 micron wavelength region, where the nu9 band of c2h6 is prominent. our nominal model with a uniform eddy diffusion coefficient kzz = 2430 cm2 s-1 and a ch4 tropopause mole fraction of 1.6x10-5 provides a good fit to other hydrocarbon emission features, such as those of c2h2 and c4h2, but the model profile for ch3c2h must be scaled by a factor of 0.43, suggesting that improvements are needed in the chemical reaction mechanism for c3hx species. the nominal model is consistent with a ch3d/ch4 ratio of 3.0+-0.2x10-4. from the best-fit scaling of these photochemical-model profiles, we derive column abundances above the 10-mbar level of 4.5+1.1/-0.8 x 10+19 molecule cm-2 for ch4, 6.2+-1.0 x 10+16 molecule cm-2 for c2h2 (with a value 24% higher from a different longitudinal sampling), 3.1+-0.3 x 10+16 molecule cm-2 for c2h6, 8.6+-2.6 x 10+13 molecule cm-2 for ch3c2h, 1.8+-0.3 x 10+13 molecule cm-2 for c4h2, and 1.7+-0.4 x 10+13 molecule cm-2 for co2 on uranus. our results have implications for the influx rate of exogenic oxygen species and the production rate of stratospheric hazes on uranus, as well as the c4h2 vapor pressure over c4h2 ice at low temperatures.
arxiv:1407.2118
we present a bayesian analysis of the combination of current neutrino oscillation, neutrinoless double beta decay and cmb observations. our major goal is to carefully investigate the possibility to single out one neutrino mass ordering, normal ordering or inverted ordering, with current data. two possible parametrizations ( three neutrino masses versus the lightest neutrino mass plus the two oscillation mass splittings ) and priors ( linear versus logarithmic ) are examined. we find that the preference for no is only driven by neutrino oscillation data. moreover, the values of the bayes factor indicate that the evidence for no is strong only when the scan is performed over the three neutrino masses with logarithmic priors ; for every other combination of parameterization and prior, the preference for no is only weak. as a by - product of our bayesian analyses, we are able to a ) compare the bayesian bounds on the neutrino mixing parameters to those obtained by means of frequentist approaches, finding a very good agreement ; b ) determine that the lightest neutrino mass plus the two mass splittings parametrization, motivated by the physical observables, is strongly preferred over the three neutrino mass eigenstates scan and c ) find that there is a weak - to - moderate preference for logarithmic priors. these results establish the optimal strategy to successfully explore the neutrino parameter space, based on the use of the oscillation mass splittings and a logarithmic prior on the lightest neutrino mass. we also show that the limits on the total neutrino mass $ \ sum m _ \ nu $ can change dramatically when moving from one prior to the other. these results have profound implications for future studies on the neutrino mass ordering, as they crucially state the need for self - consistent analyses which explore the best parametrization and priors, without combining results that involve different assumptions.
arxiv:1801.04946
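the prior dependence highlighted in the abstract above is easy to see in a toy prior-predictive check: drawing the lightest neutrino mass from a linear versus a logarithmic prior and propagating it through the normal-ordering mass splittings changes the implied distribution of the total mass $\sum m_\nu$ drastically. the splitting values and prior ranges below are indicative assumptions, not the paper's fitted values.

```python
# Sketch of how the prior on the lightest neutrino mass shifts the implied
# total mass sum(m_nu) under normal ordering; splittings are indicative.
import numpy as np

DM21 = 7.4e-5   # eV^2, solar splitting (indicative value)
DM31 = 2.5e-3   # eV^2, atmospheric splitting (indicative value)

def total_mass(m1):
    # normal ordering: m1 < m2 < m3, fixed by the two splittings
    m2 = np.sqrt(m1**2 + DM21)
    m3 = np.sqrt(m1**2 + DM31)
    return m1 + m2 + m3

rng = np.random.default_rng(1)
n = 100_000
m1_lin = rng.uniform(1e-5, 0.5, n)                            # linear prior [eV]
m1_log = 10 ** rng.uniform(np.log10(1e-5), np.log10(0.5), n)  # log prior [eV]

sum_lin = total_mass(m1_lin)
sum_log = total_mass(m1_log)
# a log prior piles weight near the minimal sum (~0.06 eV for these splittings),
# while a linear prior spreads it toward large, quasi-degenerate masses
print(np.median(sum_lin), np.median(sum_log))
```

this is only the prior-predictive side of the analysis, but it shows why cosmological upper limits on $\sum m_\nu$ quoted under different priors are not directly comparable.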
a non - perturbative calculation of the gyrotropic pressures associated with large - scale mirror modes is performed, taking into account a finite, possibly anisotropic electron temperature. in the small - amplitude limit, this leads to an extension of an asymptotic model previously derived for cold electrons. a model equation for the profile of subcritical finite - amplitude large - scale structures is also presented.
arxiv:1210.4291
for point sets $p_1, \ldots, p_k$, a set of lines $l$ is halving if any face of the arrangement $\mathcal{a}(l)$ contains at most $|p_i|/2$ points of $p_i$, for all $i$. we study the problem of computing a halving set of lines of minimal size. surprisingly, we show a polynomial time algorithm that outputs a halving set of size $o(\mathrm{opt}^{3/2})$, where $\mathrm{opt}$ is the size of the optimal solution. our solution relies on solving a new variant of the weak $\varepsilon$-net problem for corridors, which we believe to be of independent interest. we also study other variants of this problem, including an alternative setting, where one needs to introduce a set of guards (i.e., points), such that no convex set avoiding the guards contains more than half the points of each point set.
arxiv:2208.11275
this work introduces a neural architecture for learning forward models of stochastic environments. the task is achieved solely through learning from temporally unstructured observations in the form of images. once trained, the model allows for tracking of the environment state in the presence of noise or with new percepts arriving intermittently. additionally, the state estimate can be propagated in observation-blind mode, thus allowing for long-term predictions. the network can output both the expectation over future observations and samples from the belief distribution. the resulting functionalities are similar to those of a particle filter (pf). the architecture is evaluated in an environment where we simulate moving objects. as the forward and sensor models are available, we implement a pf to gauge the quality of the models learnt from the data.
arxiv:2112.07745
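since the abstract above benchmarks the learned model against a particle filter, a minimal bootstrap particle filter for a toy 1d random-walk state may help fix ideas. all parameters here are illustrative assumptions, not taken from the paper.

```python
# Minimal bootstrap particle filter for a 1D random-walk state with Gaussian
# observations: the kind of hand-built baseline a learned forward model is
# compared against. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
T, N = 50, 500            # time steps, number of particles
q, r = 0.1, 0.5           # process and observation noise std

# simulate ground truth trajectory and noisy observations
x = np.cumsum(rng.normal(0.0, q, T))
y = x + rng.normal(0.0, r, T)

particles = np.zeros(N)
estimates = []
for t in range(T):
    particles = particles + rng.normal(0.0, q, N)       # propagate (forward model)
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)    # weight by likelihood
    w /= w.sum()
    particles = particles[rng.choice(N, N, p=w)]        # resample (bootstrap step)
    estimates.append(particles.mean())                  # posterior mean estimate

rmse = np.sqrt(np.mean((np.array(estimates) - x) ** 2))
print(rmse)  # typically well below the observation noise r
```

the learned model in the paper replaces the hand-specified propagate and weight steps with components trained from image observations, while keeping the same track-and-predict functionality.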
a description is given of how to construct $(0,2)$ supersymmetric conformal field theories as coset models. these models may be used as non-trivial backgrounds for heterotic string theory. they are realised as a combination of an anomalously gauged wess-zumino-witten model, right-moving supersymmetric fermions, and left-moving current algebra fermions. requiring the sum of the gauge anomalies from the bosonic and fermionic sectors to cancel yields the final model. applications discussed include exact models of extremal four-dimensional charged black holes and taub-nut solutions of string theory. these coset models may also be used to construct important families of $(0,2)$ supersymmetric heterotic string compactifications. the kazama-suzuki models are the left-right symmetric special case of these models.
arxiv:hep-th/9409061
bistable mechanical metamaterials have shown promise for mitigating the harmful consequences of impact by converting kinetic energy into stored strain energy, offering an alternative and potentially synergistic approach to conventional methods of attenuating energy transmission. in this work, we numerically study the dynamic response of a one - dimensional bistable metamaterial struck by a high speed impactor ( where the impactor velocity is commensurate with the sound speed ), using the peak kinetic energy experienced at midpoint of the metamaterial compared to that in an otherwise identical linear system as our performance metric. we make five key findings : 1 ) the bistable material can counter - intuitively perform better ( to nearly 1000x better than the linear system ) as the viscosity decreases ( but remains finite ), but only when sufficiently fine discretization has been reached ( i. e. the system approaches sufficiently close to the continuum limit ) ; 2 ) this discretization threshold is sharp, and depends on the viscosity present ; 3 ) the bistable materials can also perform significantly worse than linear systems ( for low discretization and viscosity or zero viscosity ) ; 4 ) the dependence on discretization stems from the partition of energy into trains of solitary waves that have pulse lengths proportional to the unit cell size, where, with intersite viscosity, the solitary wave trains induce high velocity gradients and thus enhanced damping compared to linear, and low - unit - cell - number bistable, materials ; and 5 ) when sufficiently fine discretization has been reached at low viscosities, the bistable system outperforms the linear one for a wide range of impactor conditions. the first point is particularly important, as it shows the existence of a nonlinear dynamical size effect, where, given a protective layer of some thickness...
arxiv:2410.02090
deep - tissue optical imaging is a longstanding challenge limited by scattering. both optical imaging and treatment can benefit from focusing light in deep tissue beyond one transport mean free path. wavefront shaping based on time - reversed ultrasonically encoded ( true ) optical focusing utilizes an ultrasound focus, which is much less scattered than light in biological tissues, as the ' guide star '. however, traditional true is limited by the ultrasound focusing area and pressure tagging efficiency, especially in acoustically heterogeneous media. even the improved iterative version of true incurs a large time cost, which limits its application. to address this problem, we propose a method called time - reversed photoacoustic wave guided time - reversed ultrasonically encoded ( trpa - true ) optical focusing, which integrates ultrasonic focusing guided by time - reversed pa signals with ultrasound modulation of diffused coherent light and optical phase conjugation ( opc ), achieving dynamic focusing of light into a scattering medium. simulation results show that the focusing accuracy of the proposed method is significantly improved compared with conventional true, making it more suitable for practical applications that suffer severe acoustic distortion, e. g. transcranial optical focusing.
arxiv:2012.04427
we consider two supersymmetric m5 brane probe solutions in $ \ textrm { ads } _ 7 \ times s ^ 4 $ and one in $ \ textrm { ads } _ 4 \ times s ^ 7 $ that all have the $ \ textrm { ads } _ 3 \ times s ^ 3 $ world - volume geometry. the values of the classical action of the first two m5 probes ( with $ s ^ 3 $ in $ \ textrm { ads } _ 7 $ or in $ s ^ 4 $ ) are related to the leading $ n ^ 2 $ parts in the anomaly b - coefficient in the ( 2, 0 ) theory corresponding to a spherical surface defect in symmetric or antisymmetric $ su ( n ) $ representations. we present a detailed computation of the corresponding one - loop m5 brane partition functions finding that they vanish ( in a particular regularization ). this implies the vanishing of the order $ n ^ 0 $ part in the b - anomaly coefficients, in agreement with earlier predictions for their exact values. it remains, however, a puzzle of how to reproduce the non - vanishing order $ n $ terms in these coefficients within the semiclassical m5 - brane probe setup.
arxiv:2411.11626
this paper addresses the issues of conservativeness and computational complexity of probabilistic robustness analysis. we solve both issues by defining a new sampling strategy and robustness measure. the new measure is shown to be much less conservative than the existing one. the new sampling strategy enables the definition of efficient hierarchical sample reuse algorithms that reduce significantly the computational complexity and make it independent of the dimension of the uncertainty space. moreover, we show that there exists a one to one correspondence between the new and the existing robustness measures and provide a computationally simple algorithm to derive one from the other.
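to make the dimension - independence claim concrete, the sketch below uses the classical additive chernoff / hoeffding bound for monte carlo probability estimation, whose required sample size depends only on accuracy and confidence, not on the dimension of the uncertainty space. the bound, the toy system, and all function names here are illustrative textbook material, not the specific sampling strategy or robustness measure proposed in the paper.

```python
import math
import random

def chernoff_sample_size(eps, delta):
    # smallest n such that a monte carlo estimate of a probability lies
    # within eps of the true value with confidence 1 - delta (additive
    # chernoff / hoeffding bound); note n is independent of the dimension
    # of the uncertainty space
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

def estimate_robustness(radius, n, rng):
    # probability that a toy first-order system x' = -(1 + d) x stays
    # stable (i.e. 1 + d > 0) when the uncertainty d is sampled
    # uniformly from [-radius, radius]
    hits = sum(1 + rng.uniform(-radius, radius) > 0 for _ in range(n))
    return hits / n

n = chernoff_sample_size(eps=0.01, delta=0.001)  # 38005 samples
p_hat = estimate_robustness(radius=2.0, n=n, rng=random.Random(0))
print(n, round(p_hat, 3))
```

for radius 2 the true robustness probability is 3 / 4, so the estimate lands within 0.01 of 0.75 with probability at least 0.999, regardless of how many uncertain parameters the real system would have.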
arxiv:0707.0823
markov random fields ( mrfs ) are invaluable tools across diverse fields, and spatiotemporal mrfs ( stmrfs ) amplify their effectiveness by integrating spatial and temporal dimensions. however, modeling spatiotemporal data introduces additional hurdles, including dynamic spatial dimensions and partial observations, prevalent in scenarios like disease spread analysis and environmental monitoring. tracking high - dimensional targets with complex spatiotemporal interactions over extended periods poses significant challenges in accuracy, efficiency, and computational feasibility. to tackle these obstacles, we introduce the variable target mrf scalable particle filter ( vt - mrf - spf ), a fully online learning algorithm designed for high - dimensional target tracking over stmrfs with varying dimensions under partial observation. we rigorously guarantee algorithm performance, explicitly showing that it overcomes the curse of dimensionality. additionally, we provide practical guidelines for tuning graphical parameters, leading to superior performance in extensive examinations.
arxiv:2404.18857
we study a gas network flow regulation control problem showing the closed - loop consequences of using interconnected component models, which have been designed to preserve a variant of mass flow conservation without the inclusion of algebraic constraints into the dynamics. these are candidate \ textit { control - oriented } models because they are linear state - space systems. this leads to a study of mass conservation in flow models and the inheritance of conservation at the network level when present at each component. conservation is expressed as a transfer function property at dc. this property then is shown to imply the existence of integrators and other dc structure of the network model, which has important consequences for the subsequent control design. an example based on an industrial system is used to explore the facility of moving from modeling to automated interconnection of components to model reduction to digital controller design and performance evaluation. throughout, the focus is on the teasing out of control orientation in modeling. the example shows a strong connection between the modeling and the controller design.
arxiv:2211.06826
we study possibility of efficient reflection of very cold neutrons ( vcn ) from powders of nanoparticles. in particular, we measured the scattering of vcn at a powder of diamond nanoparticles as a function of powder sample thickness, neutron velocity and scattering angle. we observed extremely intense scattering of vcn even off thin powder samples. this agrees qualitatively with the model of independent nanoparticles at rest. we show that this intense scattering would allow us to use nanoparticle powders very efficiently as the very first reflectors for neutrons with energies within a complete vcn range up to $ 10 ^ { - 4 } $ ev.
arxiv:0805.2634
the average ground state energies for spin glasses on bethe lattices of connectivities r = 3, ..., 15 are studied numerically for a gaussian bond distribution. the extremal optimization heuristic is employed, which provides high - quality approximations to ground states. the energies obtained from extrapolation to the thermodynamic limit smoothly approach the ground - state energy of the sherrington - kirkpatrick model for $ r \ to \ infty $. consistently for all values of r in this study, finite - size corrections are found to decay approximately as $ \ sim n ^ { - 4 / 5 } $. the possibility of $ \ sim n ^ { - 2 / 3 } $ corrections, found previously for bethe lattices with a bimodal $ \ pm j $ bond distribution and also for the sherrington - kirkpatrick model, is constrained to the additional assumption of very specific higher - order terms. instance - to - instance fluctuations in the ground state energy appear to be asymmetric up to the limit of the accuracy of our heuristic. the data analysis provides insights into the origin of trivial fluctuations when using continuous bonds and / or sparse networks.
arxiv:0911.2583
we construct link invariants using the $ d _ { 2n } $ subfactor planar algebras, and use these to prove new identities relating certain specializations of colored jones polynomials to specializations of other quantum knot polynomials. these identities can also be explained by coincidences between small modular categories involving the even parts of the $ d _ { 2n } $ planar algebras. we discuss the origins of these coincidences, explaining the role of $ so $ level - rank duality, kirby - melvin symmetry, and properties of small dynkin diagrams. one of these coincidences involves $ g _ 2 $ and does not appear to be related to level - rank duality.
arxiv:1003.0022
logic axiom : $ x = y \ implies \ forall z, ( z \ in x \ iff z \ in y ) $. logic axiom : $ x = y \ implies \ forall z, ( x \ in z \ iff y \ in z ) $. set theory axiom : $ ( \ forall z, ( z \ in x \ iff z \ in y ) ) \ implies x = y $. the first two are given by the substitution property of equality from first - order logic ; the last is a new axiom of the theory. incorporating half of the work into the first - order logic may be regarded as a mere matter of convenience, as noted by azriel levy. " the reason why we take up first - order predicate calculus with equality is a matter of convenience ; by this, we save the labor of defining equality and proving all its properties ; this burden is now assumed by the logic. " = = = set equality based on first - order logic without equality = = = in first - order logic without equality, two sets are defined to be equal if they contain the same elements. then the axiom of extensionality states that two equal sets are contained in the same sets. set theory definition : $ ( x = y ) : = \ forall z, ( z \ in x \ iff z \ in y ) $. set theory axiom : $ x = y \ implies \ forall z, ( x \ in z \ iff y \ in z ) $. or, equivalently, one may choose to define equality in a way that mimics the substitution property explicitly, as the conjunction of all atomic formulas : set theory definition : $ ( x = y ) : = \ forall z ( z \ in x \ implies z \ in y ) \ land \ forall w ( x \ in w \ implies y \ in w ) $
https://en.wikipedia.org/wiki/Equality_(mathematics)
optically detected magnetic resonance ( odmr ) provides ultrasensitive means to detect and image a small number of electron and nuclear spins, down to the single spin level with nanoscale resolution. despite the significant recent progress in this field, it has never been combined with the power of pulsed magnetic resonance imaging ( mri ) techniques. here, we demonstrate for the first time how these two methodologies can be integrated using short pulsed magnetic field gradients to spatially - encode the sample. this results in what we denote as an " optically detected magnetic resonance imaging " ( odmri ) technique. it offers the advantage that the image is acquired in parallel from all parts of the sample, with well - defined three - dimensional point - spread function, and without any loss of spectroscopic information. in addition, this approach may be used in the future for parallel but yet spatially - selective efficient addressing and manipulation of the spins in the sample. such capabilities are of fundamental importance in the field of quantum spin - based devices and sensors.
arxiv:1412.8650
we study the casimir effect with different temperatures between the plates ( $ t $ ) and outside of them ( $ t ' $ ), respectively. if we consider the inner system as the black body radiation for a special geometry, then contrary to common belief the temperature approaches a constant value for vanishing volume during isentropic processes. this means : the reduction of the degrees of freedom cannot be compensated by a concentration of the energy during an adiabatic contraction of the two - plate system. looking at the casimir pressure, we find one unstable equilibrium point for isothermal processes with $ t > t ' $. for isentropic processes there is additionally one stable equilibrium point at larger distances between the two plates.
arxiv:quant-ph/9811034
we present a study of the evolution of several classes of mgii absorbers, and their corresponding feii absorption, over a large fraction of cosmic history : 2. 3 to 8. 7 gyrs from the big bang. our sample consists of 87 strong ( wr ( mgii ) > 0. 3 a ) mgii absorbers, with redshifts 0. 2 < z < 2. 5, measured in 81 quasar spectra obtained from the very large telescope ( vlt ) / ultraviolet and visual echelle spectrograph ( uves ) archives of high - resolution spectra ( r \ sim 45, 000 ). no evolutionary trend in wr ( feii ) / wr ( mgii ) is found for moderately strong mgii absorbers ( 0. 3 < wr ( mgii ) < 1. 0 a ). however, at lower z we find an absence of very strong mgii absorbers ( those with wr ( mgii ) > 1 a ) with small ratios of equivalent widths of feii to mgii. at high z, very strong mgii absorbers with both small and large wr ( feii ) / wr ( mgii ) values are present. we compare our findings to a sample of 100 weak mgii absorbers ( wr ( mgii ) < 0. 3 a ) found in the same quasar spectra by narayanan et al. ( 2007 ). the main effect driving the evolution of very strong mgii systems is the difference between the kinematic profiles at low and high redshifts. at high z, we observe that, among the very strong mgii absorbers, all of the systems with small ratios of wr ( feii ) / wr ( mgii ) have relatively large velocity spreads, resulting in less saturated profiles. at low z, such kinematically spread systems are absent, and both feii and mgii are saturated, leading to wr ( feii ) / wr ( mgii ) values that are all close to 1. the high redshift, small wr ( feii ) / wr ( mgii ) systems could correspond to sub - dla systems, many of which have large velocity spreads and are possibly linked to superwinds in star forming galaxies. 
in addition to the change in saturation due to kinematic evolution, the smaller wr ( feii ) / wr ( mgii ) values could be due to a lower abundance of fe at high z, which would indicate relatively early stages of star formation in those environments.
arxiv:1208.1739
we present an alternative to the kohn - sham formulation of density functional theory for the ground - state properties of strongly interacting electronic systems. the idea is to start from the limit of zero kinetic energy and systematically expand the universal energy functional of the density in powers of a " coupling constant " that controls the magnitude of the kinetic energy. the problem of minimizing the energy is reduced to the solution of a strictly correlated electron problem in the presence of an effective potential, which plays in our theory the same role that the kohn - sham potential plays in the traditional formulation. we discuss several schemes for approximating the energy functional, and report preliminary results for low - density quantum dots.
arxiv:0908.0669
we investigate the effect of inertial particles on rayleigh - bénard convection using weakly nonlinear stability analysis. in the presence of nonlinear effects, we study the limiting value of growth of instabilities by deriving a cubic landau equation. an euler - euler / two - fluid formulation is used to describe the flow instabilities in particle - laden rayleigh - bénard convection. the nonlinear results are presented near the critical point ( bifurcation point ) for water droplets in the dry air system. it is found that supercritical bifurcation is the only type of bifurcation beyond the critical point. interaction of settling particles with the flow and the reynolds stress or distortion terms emerge due to the nonlinear self - interaction of fundamental modes, breaking down the top - bottom symmetry of the secondary flow structures. in addition to the distortion functions, the nonlinear interaction of fundamental modes generates higher harmonics, leading to a tendency toward preferential concentration of uniformly distributed particles, which is completely absent in the linear stability analysis. it is shown that in the presence of thermal energy coupling between the fluid and particles, the difference between the horizontally averaged heat flux at the hot and cold surfaces is equal to the net sensible heat flux advected by the particles. the difference between the heat fluxes at the hot and cold surfaces increases with an increase in particle concentration.
arxiv:2503.15411
the evolution equations of the yukawa couplings and quark mixings are derived for the one - loop renormalization group equations in two universal extra dimension ( ued ) models, that is, six - dimensional models, compactified in different possible ways to yield the standard four space - time dimensions. different possibilities for the matter fields are discussed, such as the case of bulk - propagating or brane - localised fields. we discuss in both cases the evolution of the yukawa couplings, the jarlskog parameter and the ckm matrix elements, and we find that, for both scenarios, as we run up to the unification scale, significant renormalization group corrections are present. we also discuss the results for different observables of the five - dimensional ued model in comparison with these six - dimensional models, and the model dependence of the results.
arxiv:1306.4852
in this paper we propose a technique for distributing entanglement in architectures in which interactions between pairs of qubits are constrained to a fixed network $ g $. this allows for two - qubit operations to be performed between qubits which are remote from each other in $ g $, through gate teleportation. we demonstrate how adapting \ emph { quantum linear network coding } to this problem of entanglement distribution in a network of qubits can be used to solve the problem of distributing bell states and ghz states in parallel, when bottlenecks in $ g $ would otherwise force such entangled states to be distributed sequentially. in particular, we show that by reduction to classical network coding protocols for the $ k $ - pairs problem or multiple multicast problem in a fixed network $ g $, one can distribute entanglement between the transmitters and receivers with a clifford circuit whose quantum depth is some ( typically small and easily computed ) constant, which does not depend on the size of $ g $, however remote the transmitters and receivers are, or the number of transmitters and receivers. these results also generalise straightforwardly to qudits of any prime dimension. we demonstrate our results using a specialised formalism, distinct from and more efficient than the stabiliser formalism, which is likely to be helpful to reason about and prototype such quantum linear network coding circuits.
arxiv:1910.03315
we study largest singular values of large random matrices, each with mean of a fixed rank $ k $. our main result is a limit theorem as the number of rows and columns approach infinity, while their ratio approaches a positive constant. it provides a decomposition of the largest $ k $ singular values into the deterministic rate of growth, random centered fluctuations given as explicit linear combinations of the entries of the matrix, and a term negligible in probability. we use this representation to establish asymptotic normality of the largest singular values for random matrices with means that have block structure. we also deduce asymptotic normality for the largest eigenvalues of the normalized covariance matrix arising in a model of population genetics.
arxiv:1802.02960
full size single - sided gaas microstrip detectors with integrated coupling capacitors and bias resistors have been fabricated on 3 - inch substrate wafers. pecvd - deposited sio _ 2 and sio _ 2 / si _ 3n _ 4 layers were used to provide coupling capacitances of 32. 5 pf / cm and 61. 6 pf / cm, respectively. the resistors are made of sputtered cermet using a simple lift - off technique. the sheet resistivity of 78 kohm / sq. and the thermal coefficient of resistance of less than $ 4 \ times 10 ^ { - 3 } $ / degree c satisfy the demands of small - area biasing resistors working over a wide temperature range.
arxiv:physics/9709041
compact white dwarf ( wd ) binaries are important sources for space - based gravitational - wave ( gw ) observatories, and an increasing number of them are being identified by surveys like ztf. we study the effects of nonlinear dynamical tides in such binaries. we focus on the global three - mode parametric instability and show that it has a much lower threshold energy than the local wave - breaking condition studied previously. by integrating networks of coupled modes, we calculate the tidal dissipation rate as a function of orbital period. we construct phenomenological models that match these numerical results and use them to evaluate the spin and luminosity evolution of a wd binary. while in linear theory the wd ' s spin frequency can lock to the orbital frequency, we find that such a lock cannot be maintained when nonlinear effects are taken into account. instead, as the orbit decays, the spin and orbit go in and out of synchronization. each time they go out of synchronization, there is a brief but significant dip in the tidal heating rate. while most wds in compact binaries should have luminosities that are similar to previous traveling - wave estimates, a few percent should be about ten times dimmer because they reside in heating rate dips. this offers a potential explanation for the low luminosity of the co wd in j0651. lastly, we consider the impact of tides on the gw signal and show that lisa and tiango can constrain the wd ' s moment of inertia to better than 1 % for deci - hz systems.
arxiv:2005.03058
quantum communication between remote superconducting systems is being studied intensively to increase the number of integrated superconducting qubits and to realize a distributed quantum computer. since optical photons must be used for communication outside a dilution refrigerator, the direct conversion of microwave photons to optical photons has been widely investigated. however, the direct conversion approach suffers from added photon noise, heating due to a strong optical pump, and the requirement for large cooperativity. instead, for quantum communication between superconducting qubits, we propose an entanglement distribution scheme using a solid - state spin quantum memory that works as an interface for both microwave and optical photons. the quantum memory enables quantum communication without significant heating inside the refrigerator, in contrast to schemes using high - power optical pumps. moreover, introducing the quantum memory naturally makes it possible to herald entanglement and parallelization using multiple memories.
arxiv:2202.07888
we study embeddability of rational ruled surfaces as symplectic hyperplane sections into closed integral symplectic manifolds. from this we obtain results on stein fillability of boothby - wang bundles over rational ruled surfaces.
arxiv:2005.11483
planetary engulfment events can occur while host stars are on the main sequence. the addition of rocky planetary material during engulfment will lead to refractory abundance enhancements in the host star photosphere, but the level of enrichment and its duration will depend on mixing processes that occur within the stellar interior, such as convection, diffusion, and thermohaline mixing. we examine engulfment signatures by modeling the evolution of photospheric lithium abundances. because lithium can be burned before or after the engulfment event, it produces unique signatures that vary with time and host star type. using mesa stellar models, we quantify the strength and duration of these signatures following the engulfment of a 1, 10, or 100 $ m _ { \ oplus } $ planetary companion with bulk earth composition, for solar - metallicity host stars with masses ranging from 0. 5 $ - $ 1. 4 $ m _ { \ odot } $. we find that lithium is quickly depleted via burning in low - mass host stars ( $ \ lesssim 0. 7 \, m _ \ odot $ ) on a time scale of a few hundred myrs, but significant lithium enrichment signatures can last for gyrs in g - type stars ( $ \ sim \! 0. 9 \, m _ { \ odot } $ ). for more massive stars ( 1. 3 $ - $ 1. 4 $ m _ { \ odot } $ ), engulfment can enhance internal mixing and diffusion processes, potentially decreasing the surface lithium abundance. our predicted signatures from exoplanet engulfment are consistent with observed lithium - rich solar - type stars and abundance enhancements in chemically inhomogeneous binary stars.
arxiv:2207.13232
this paper focuses on providing the computation methods for the backward time tempered fractional feynman - kac equation, being one of the models recently proposed in [ wu, deng, and barkai, phys. rev. e, 84 ( 2016 ) 032151 ]. the discretization for the tempered fractional substantial derivative is derived, and the corresponding finite difference and finite element schemes are designed with well established stability and convergence. the performed numerical experiments show the effectiveness of the presented schemes.
arxiv:1605.04134
understanding the interaction of primordial gravitational waves ( gws ) with the cosmic microwave background ( cmb ) plasma is important for observational cosmology. in this article, we provide an analysis of an effect apparently overlooked as yet. we consider a single free electric charge and suppose that it can be agitated by primordial gws propagating through the cmb plasma, resulting in periodic, regular motion along particular directions. light reflected by the charge will be partially polarized, and this will imprint a characteristic pattern on the cmb. we study this effect by considering a simple model in which anisotropic incident electromagnetic ( em ) radiation is rescattered by a charge sitting in spacetime perturbed by gws and becomes polarized. as the charge is driven to move along particular directions, we calculate its dipole moment to determine the leading - order rescattered em radiation. the stokes parameters of the rescattered radiation exhibit a net linear polarization. we investigate how this polarization effect can be schematically represented out of the stokes parameters. we work out the representations of gradient modes ( e - modes ) and curl modes ( b - modes ) to produce polarization maps. although the polarization effect results from gws, we find that its representations, the e - and b - modes, do not practically reflect the gw properties such as strain amplitude, frequency and polarization states.
arxiv:1607.03779
we prove that a family $ \ mathcal { t } $ of distinct triangles on $ n $ given vertices that does not have a rainbow triangle ( that is, three edges, each taken from a different triangle in $ \ mathcal { t } $, that form together a triangle ) must be of size at most $ \ frac { n ^ 2 } { 8 } $. we also show that this result is sharp and characterize the extremal case. in addition, we discuss a version of this problem in which the triangles are not necessarily distinct, and show that in this case, the same bound holds asymptotically. after posting the original arxiv version of this paper, we learned that the sharp upper bound of $ \ frac { n ^ 2 } { 8 } $ was proved much earlier by győri ( 2006 ) and independently by frankl, füredi and simonyi ( unpublished ). győri also obtained a stronger version of our result for the case when repetitions are allowed.
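the $ n ^ 2 / 8 $ bound can be sanity checked by exhaustive search for very small $ n $; the helper names below are invented for this illustrative sketch, and the brute force is only feasible for $ n $ up to about 5.

```python
from itertools import combinations

def has_rainbow(family):
    # a rainbow triangle: three edges, taken from three different
    # triangles of the family, that together form a triangle
    for t1, t2, t3 in combinations(family, 3):
        for e1 in combinations(t1, 2):
            for e2 in combinations(t2, 2):
                for e3 in combinations(t3, 2):
                    edges = {frozenset(e1), frozenset(e2), frozenset(e3)}
                    vertices = set(e1) | set(e2) | set(e3)
                    # three distinct edges on three vertices = a triangle
                    if len(edges) == 3 and len(vertices) == 3:
                        return True
    return False

def max_rainbow_free(n):
    # size of the largest family of distinct triangles on n vertices
    # with no rainbow triangle (exhaustive search over subfamilies)
    triangles = list(combinations(range(n), 3))
    best = 0
    for mask in range(1 << len(triangles)):
        family = [t for i, t in enumerate(triangles) if mask >> i & 1]
        if len(family) > best and not has_rainbow(family):
            best = len(family)
    return best

# consistent with the n^2 / 8 bound: 16 / 8 = 2 and floor(25 / 8) = 3
print(max_rainbow_free(4), max_rainbow_free(5))
```

for $ n = 5 $ a size - 3 rainbow - free family is the "book" of triangles sharing a common edge, e.g. $ \{ 012, 013, 014 \} $, and the search confirms no size - 4 family avoids a rainbow triangle.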
arxiv:2209.15493
spin glass models involving multiple replicas with constrained overlaps have been studied in [ fpv92 ; pt07 ; pan18a ]. for the spherical versions of these models, [ ko19 ; ko20 ] showed that the limiting free energy is given by a parisi type minimization. in this work we show that for sherrington - kirkpatrick ( i. e. 2 - spin ) interactions, it can also be expressed in terms of a thouless - anderson - palmer ( tap ) variational principle. this is only the second spin glass model where a mathematically rigorous tap computation of the free energy at all temperatures and external fields has been achieved. the variational formula we derive here also confirms that the model is replica symmetric, a fact which is natural but not obviously deducible from its parisi formula.
arxiv:2304.04031
symbol alphabets of n - particle amplitudes in n = 4 super - yang - mills theory are known to contain certain cluster variables of gr ( 4, n ) as well as certain algebraic functions of cluster variables. in this paper we solve the $ c z = 0 $ matrix equations associated to several cells of the totally non - negative grassmannian, combining methods of arxiv : 2012. 15812 for rational letters and arxiv : 2007. 00646 for algebraic letters. we identify sets of parameterizations of the top cell of gr _ + ( 5, 9 ) for which the solutions produce all of ( and only ) the cluster variable letters of the 2 - loop nine - particle nmhv amplitude, and identify plabic graphs from which all of its algebraic letters originate.
arxiv:2106.01406
time series anomaly detection ( tsad ) is an evolving area of research motivated by its critical applications, such as detecting seismic activity, sensor failures in industrial plants, predicting crashes in the stock market, and so on. across domains, anomalies occur significantly less frequently than normal data, making the f1 - score the most commonly adopted metric for anomaly detection. however, in the case of time series, it is not straightforward to use the standard f1 - score because of the dissociation between ' time points ' and ' time events '. to accommodate this, anomaly predictions are adjusted before the $ f _ 1 $ - score evaluation, a procedure called point adjustment ( pa ). however, these adjustments are heuristics - based and biased towards true positive detection, resulting in over - estimated detector performance. in this work, we propose an alternative adjustment protocol called ' balanced point adjustment ' ( ba ). it addresses the limitations of existing point adjustment methods and provides guarantees of fairness backed by axiomatic definitions of tsad evaluation.
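for context, a minimal sketch of the conventional pa protocol (not the proposed ba protocol) is shown below, together with the f1 inflation it causes on a toy series; the function names and the example data are invented for illustration.

```python
def point_adjust(labels, preds):
    # standard point adjustment (pa): if any point inside a ground-truth
    # anomaly segment is flagged, count the whole segment as detected
    adjusted = list(preds)
    i = 0
    while i < len(labels):
        if labels[i] == 1:
            j = i
            while j < len(labels) and labels[j] == 1:
                j += 1                      # [i, j) is one anomaly segment
            if any(adjusted[i:j]):
                for k in range(i, j):
                    adjusted[k] = 1         # credit the entire segment
            i = j
        else:
            i += 1
    return adjusted

def f1(labels, preds):
    # pointwise f1-score
    tp = sum(l and p for l, p in zip(labels, preds))
    fp = sum((not l) and p for l, p in zip(labels, preds))
    fn = sum(l and (not p) for l, p in zip(labels, preds))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

labels = [0, 1, 1, 1, 1, 0, 0, 1, 1, 0]
preds  = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]   # one lucky hit in a 4-point segment
raw = f1(labels, preds)
pa = f1(labels, point_adjust(labels, preds))
print(round(raw, 3), round(pa, 3))        # 0.286 0.8
```

a single detected point inside a four - point segment lifts the f1 - score from about 0.29 to 0.8, which is exactly the true - positive bias that motivates replacing pa with a balanced protocol.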
arxiv:2409.13053
given a compact space $ k $ and a banach space $ e $ we study the structure of positive measures on the product space $ k \ times b _ { e ^ * } $ representing functionals on $ c ( k, e ) $, the space of $ e $ - valued continuous functions on $ k $. using the technique of disintegration we provide an alternative approach to the procedure of transference of measures introduced by batty ( 1990 ). this enables us to substantially strengthen some of his results, to discover a rich order structure on these measures, to identify maximal and minimal elements and to relate them to the classical choquet order.
arxiv:2405.04202
[ abridged ] we report the results of a high - resolution spectroscopic survey of all the stars more luminous than mv = 6. 5 mag within 14. 5 pc from the sun. we derive stellar parameters and perform a preliminary abundance and kinematic analysis of the f - g - k stars in the sample. the inferred metallicity ( [ fe / h ] ) distribution is centered at about - 0. 1 dex, and shows a standard deviation of 0. 2 dex. we identify a number of metal - rich k - type stars which appear to be very old, confirming the claims for the existence of such stars in the solar neighborhood. with atmospheric effective temperatures and gravities derived independently of the spectra, we find that our classical lte model - atmosphere analysis of metal - rich ( and mainly k - type ) stars provides discrepant abundances from neutral and ionized lines of several metals. based on transitions of majority species, we discuss abundances of 16 chemical elements. in agreement with earlier studies we find that the abundance ratios to iron of si, sc, ti, co, and zn become smaller as the iron abundance increases until approaching the solar values, but the trends reverse for higher iron abundances. at any given metallicity, stars with a ` low ' galactic rotational velocity tend to have high abundances of mg, si, ca, sc, ti, co, zn, and eu, but low abundances of ba, ce, and nd. the sun appears deficient by roughly 0. 1 dex in o, si, ca, sc, ti, y, ce, nd, and eu, compared to its immediate neighbors with similar iron abundances.
arxiv:astro-ph/0403108
we consider massless elementary particles in a quantum theory based on a galois field ( gfqt ). we previously showed that the theory has a new symmetry between particles and antiparticles, which has no analogue in the standard approach. we now prove that the symmetry is compatible with all operators describing massless particles. consequently, massless elementary particles can have only the half - integer spin ( in conventional units ), and the existence of massless neutral elementary particles is incompatible with the spin - statistics theorem. in particular, this implies that the photon and the graviton in the gfqt can only be composite particles.
arxiv:hep-th/0207192
For human action recognition (HAR) tasks, 3D convolutional neural networks have proven to be highly effective, achieving state-of-the-art results. This study introduces a novel streaming-architecture-based toolflow for mapping such models onto FPGAs, considering the model's inherent characteristics and the features of the targeted FPGA device. The HARFLOW3D toolflow takes as input a 3D CNN in ONNX format and a description of the FPGA characteristics, generating a design that minimizes the latency of the computation. The toolflow is comprised of a number of parts, including (i) a 3D CNN parser, (ii) a performance and resource model, (iii) a scheduling algorithm for executing 3D models on the generated hardware, (iv) a resource-aware optimization engine tailored for 3D models, and (v) an automated mapping to synthesizable code for FPGAs. The ability of the toolflow to support a broad range of models and devices is shown through a number of experiments on various 3D CNN and FPGA system pairs. Furthermore, the toolflow has produced high-performing results for 3D CNN models that have not been mapped to FPGAs before, demonstrating the potential of FPGA-based systems in this space. Overall, HARFLOW3D has demonstrated its ability to deliver competitive latency compared to a range of state-of-the-art hand-tuned approaches, achieving up to 5$\times$ better performance than some of the existing works.
arxiv:2303.17218
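The toolflow components listed above can be illustrated with a minimal sketch of parts (ii) and (iii): a performance model and a greedy scheduler for 3D convolution layers. The class names and the simple compute-bound cost model below are illustrative assumptions, not the actual HARFLOW3D implementation.

```python
# Hypothetical sketch of a latency-oriented performance model for mapping
# 3D CNN layers onto an FPGA. The cost model is a deliberately crude
# compute-bound estimate; a real toolflow would also model memory bandwidth,
# on-chip buffering, and resource sharing.
from dataclasses import dataclass

@dataclass
class Conv3DLayer:
    in_ch: int; out_ch: int
    depth: int; height: int; width: int   # output volume dimensions
    kd: int; kh: int; kw: int             # kernel dimensions

@dataclass
class FPGASpec:
    dsp_count: int       # hardware multipliers available
    clock_mhz: float

def macs(layer: Conv3DLayer) -> int:
    """Multiply-accumulate operations for one 3D convolution layer."""
    return (layer.in_ch * layer.out_ch * layer.depth * layer.height *
            layer.width * layer.kd * layer.kh * layer.kw)

def latency_ms(layer: Conv3DLayer, fpga: FPGASpec, parallelism: int) -> float:
    """Compute-bound latency estimate for a chosen parallelism factor."""
    p = min(parallelism, fpga.dsp_count)       # can't exceed available DSPs
    cycles = macs(layer) / p
    return cycles / (fpga.clock_mhz * 1e3)     # MHz -> cycles per millisecond

def schedule(layers, fpga: FPGASpec) -> float:
    """Greedy schedule: run layers sequentially, each with all DSPs."""
    return sum(latency_ms(l, fpga, fpga.dsp_count) for l in layers)
```

A scheduler of this shape makes the design-space trade-off explicit: raising `parallelism` lowers latency until the DSP budget of the device is exhausted.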
In this paper, we have studied the consequences of the $\mu$-$\tau$ reflection symmetry for leptogenesis in the type-I seesaw model with a diagonal Dirac neutrino mass matrix. We first obtain the phenomenologically allowed values of the model parameters, which show that there may exist zero or equal entries in the Majorana mass matrix for the right-handed neutrinos, and then study their predictions for the three right-handed neutrino masses, which show that there may exist two nearly degenerate right-handed neutrinos. We then study the consequences of the model for leptogenesis. Due to the $\mu$-$\tau$ reflection symmetry, leptogenesis can only work in the two-flavor regime. Furthermore, leptogenesis cannot work for the particular case of $r = 1$. Accordingly, for some benchmark values of $r \neq 1$, we give the constraints of leptogenesis on the relevant parameters. We also investigate the possibility of leptogenesis being induced by renormalization group evolution effects in two particular scenarios. For the particular case of $r = 1$, the renormalization group evolution effects break the orthogonality relations among different columns of $M^{}_{\rm D}$ and consequently allow leptogenesis to work. For the low-scale resonant leptogenesis scenario, realized for nearly degenerate right-handed neutrinos, the renormalization group evolution effects can break the $\mu$-$\tau$ reflection symmetry and consequently allow leptogenesis to work.
arxiv:2405.02804
We show that every connected compact or bordered Riemann surface contains a Cantor set whose complement admits a complete conformal minimal immersion in $\mathbb{R}^3$ with bounded image. The analogous result holds for holomorphic immersions into any complex manifold of dimension at least $2$, for holomorphic null immersions into $\mathbb{C}^n$ with $n \ge 3$, for holomorphic Legendrian immersions into an arbitrary complex contact manifold, and for superminimal immersions into any self-dual or anti-self-dual Einstein four-manifold.
arxiv:2202.07601
Starting from gravity as a Chern-Simons action for the AdS algebra in five dimensions, it is possible to deform the theory through an expansion of the Lie algebra that leads to a system consisting of the Einstein-Hilbert action plus nonminimally coupled matter. The deformed system is gauge invariant under the Poincaré group enlarged by an Abelian ideal. Although the resulting action naively looks like general relativity plus corrections due to matter sources, it is shown that the nonminimal couplings produce a radical departure from GR. Indeed, the dynamics is not continuously connected to the one obtained from the Einstein-Hilbert action. In a matter-free configuration and in the torsionless sector, the field equations are too strong a restriction on the geometry, as the metric must satisfy both the Einstein and pure Gauss-Bonnet equations. In particular, the five-dimensional Schwarzschild geometry fails to be a solution; however, configurations corresponding to a brane world with positive cosmological constant on the worldsheet are admissible when one of the matter fields is switched on. These results can be extended to higher odd dimensions.
arxiv:hep-th/0605174
A new method for selecting stars in the Galactic bar, based on 2MASS infrared photometry in combination with stellar proper motions from the Kharkiv XPM catalogue, has been implemented. In accordance with this method, red clump and red giant branch stars are preselected on the color-magnitude diagram and their photometric distances are calculated. Since the stellar proper motions are indicators of a larger velocity dispersion toward the bar and the spiral arms compared to the stars with circular orbits, applying constraints on the proper motions of the preselected stars that take into account the Galactic rotation has allowed the background stars to be eliminated. Based on a joint analysis of the velocities of the selected stars and their distribution in the Galactic plane, we have confidently identified the segment of the Galactic bar nearest to the Sun, with an orientation of $20^\circ$-$25^\circ$ with respect to the Galactic center-Sun direction and a semimajor axis of no more than 3 kpc.
arxiv:1402.3153
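The selection pipeline described above (CMD preselection, photometric distances, proper-motion cut) can be sketched as follows. The distance-modulus relation is standard; the assumed red clump absolute magnitude, the color window, and the proper-motion threshold are illustrative placeholders, not the paper's calibrated values.

```python
# Sketch of a red-clump selection pipeline: preselect stars on the
# color-magnitude diagram, compute photometric distances, and apply a
# proper-motion cut. Field names and numeric thresholds are hypothetical.
M_KS_RED_CLUMP = -1.6  # assumed absolute Ks magnitude of red clump stars

def photometric_distance_pc(apparent_mag, absolute_mag=M_KS_RED_CLUMP):
    """Distance modulus: m - M = 5 log10(d / 10 pc), solved for d in parsecs."""
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

def preselect_cmd(stars, color_min=0.5, color_max=0.8):
    """Keep stars inside an (illustrative) J-Ks color window on the CMD."""
    return [s for s in stars if color_min <= s["j_ks"] <= color_max]

def proper_motion_cut(stars, mu_max=5.0):
    """Reject background stars whose residual proper motion (after removing
    an assumed Galactic-rotation contribution) exceeds the threshold."""
    return [s for s in stars if abs(s["mu_residual"]) <= mu_max]

def select_bar_candidates(stars):
    """Full pipeline: CMD preselection, then the proper-motion constraint,
    attaching a photometric distance to each surviving star."""
    picked = proper_motion_cut(preselect_cmd(stars))
    for s in picked:
        s["dist_pc"] = photometric_distance_pc(s["ks"])
    return picked
```

The numerical content here is only the textbook distance-modulus formula; everything tied to the bar geometry in the paper comes from the joint velocity/position analysis, not from this sketch.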
We study nonlinear parabolic stochastic partial differential equations with Wick-power and Wick-polynomial type nonlinearities, set in the framework of white noise analysis. These equations include the stochastic Fujita equation, the stochastic Fisher-KPP equation and the stochastic FitzHugh-Nagumo equation, among many others. By implementing the theory of $C_0$-semigroups and evolution systems within the chaos expansion theory in infinite dimensional spaces, we prove existence and uniqueness of solutions for this class of SPDEs. In particular, we also treat the linear nonautonomous case and provide several applications featured as stochastic reaction-diffusion equations that arise in biology, medicine and physics.
arxiv:2303.06229
In recent years several wireless communication standards have been developed, and more are expected, each with a different scope in terms of spatial coverage, radio access capabilities, and mobility support. Heterogeneous networks combine multiple of these radio interfaces, both in the network infrastructure and in user equipment, which requires a new multi-radio framework enabling mobility and handover management across multiple radio access technologies (RATs). The use of heterogeneous networks can capitalize on the overlapping coverage and allow user devices to take advantage of the fact that there are multiple radio interfaces. This paper presents the functional architecture for such a framework and proposes a generic signaling exchange, applicable to a range of different handover management protocols, that enables seamless mobility. The interworking of radio resource management, access selection and mobility management is defined in a generic and modular way, which is extensible to future protocols and standards.
arxiv:1105.1516
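As a toy illustration of a generic handover signaling exchange of the kind proposed above, the sketch below drives a small state machine through one linear message flow. The message names, phases, and participants are hypothetical stand-ins, not the paper's actual protocol.

```python
# Illustrative generic handover signaling: each message advances the
# handover through exactly one phase; an out-of-order message is rejected.
from enum import Enum

class Phase(Enum):
    IDLE = 0        # no handover in progress
    REPORTED = 1    # UE has reported measurements of candidate cells
    PREPARED = 2    # target access point has admitted the UE
    COMMANDED = 3   # serving access point has instructed the UE to switch
    COMPLETED = 4   # UE has attached to the target

# (sender, message, phase the message moves the handover into)
EXCHANGE = [
    ("UE",      "MeasurementReport",   Phase.REPORTED),
    ("serving", "HandoverRequestAck",  Phase.PREPARED),
    ("serving", "HandoverCommand",     Phase.COMMANDED),
    ("UE",      "HandoverComplete",    Phase.COMPLETED),
]

def run_handover():
    """Replay the signaling exchange, enforcing strict phase ordering."""
    phase = Phase.IDLE
    trace = []
    for sender, message, next_phase in EXCHANGE:
        if next_phase.value != phase.value + 1:
            raise RuntimeError(f"out-of-order message {message} from {sender}")
        phase = next_phase
        trace.append(message)
    return phase, trace
```

The point of the generic framework is that different RAT-specific protocols can map their own messages onto such a common phase sequence, which is what makes the mobility management modular.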
In this study, the structural, electronic and optical properties of Pb-doped rutile SnO$_2$ were investigated using the range-separated hybrid exchange-correlation functional method. In the calculations, the LDA functional was used instead of the PBE functional. The electronic structure of SnO$_2$ obtained by this method is quite compatible with the experimental data. SnO$_2$ has an important field of use in optoelectronic devices due to its transparent and conductive nature. One of these important areas is the use of SnO$_2$ as an electron transport layer (ETL) in perovskite solar cells; therefore, the energy level of the conduction band of SnO$_2$ is important. In the Pb-doped SnO$_2$ cases, the band gap narrows as the Pb doping ratio increases. The band gap of SnO$_2$ can be narrowed from 3.60 eV to 3.02 eV with a 12.5% Pb doping ratio, and this narrowing is proportional to the amount of Pb. The calculation results obtained in this study show that the decrease in the energy level of the bottom of the conduction band plays an important role in the narrowing of the band gap, while there is no significant change in the energy level of the top of the valence band. Due to this effect of the Pb atom, the energy level of the conduction band can be adjusted by using the doping ratio of the Pb atom, and the band gap can be narrowed in a controlled manner. With Pb doping, the energy levels of the SnO$_2$ ETL can be adjusted over a range according to the type of perovskite used in the solar cell. In addition, doping with Pb does not create electron traps in the band gap, which is important for the transport of electrons.
arxiv:2010.15380
We present a panoramic view of the main scientific manuscripts left unpublished by the brightest Italian theoretical physicist of the 20th century, Ettore Majorana. We deal in particular: (i) with his very original "study" notes (the so-called "Volumetti"), already published by us in English, in 2003, c/o Kluwer Acad. Press, Dordrecht & Boston, and in the original Italian language, in 2006, c/o Zanichelli Pub., Bologna, Italy; and (ii) with a selection of his research notes (the so-called "Quaderni"), which we shall publish c/o Springer, Berlin. We seize the present opportunity for setting forth also some suitable, scarcely known information about Majorana's life and work, on the basis of documents (letters, testimonies, different documents...) discovered or collected by ourselves during the last decades. [A finished, enlarged version of this paper will appear as the editors' preface at the beginning of the coming book "Ettore Majorana - Unpublished Research Notes on Theoretical Physics", edited by S. Esposito, E. Recami, A. van der Merwe and R. Battiston, to be printed by Springer Verlag, Berlin.]
arxiv:0709.1183
The study of LFV decays of the Higgs boson, $h \to \ell_i \ell_j$, has become an active research subject, both from the experimental and theoretical points of view. Such decays vanish within the SM and are highly suppressed in several theoretical extensions. Due to its relevance and the relative simplicity of reconstructing the signal at future colliders, it is an important tool to probe SM extensions where it could reach detectable levels. Here we identify a mechanism that induces LFV Higgs interactions by linking it with the appearance of CP violation in the scalar sector, within the context of general multi-Higgs models. We then focus on the simplest model of this type to study its phenomenology. The scalar sector of this minimal model, consisting of a Higgs doublet and a Froggatt-Nielsen (FN) (complex) singlet, is studied thoroughly, including the scalar spectrum and the Yukawa interactions. Constraints on the parameters of the model are derived from low-energy observables and LHC Higgs data, which are then applied to study the resulting predicted rates for the decay $h \rightarrow \tau\mu$. Overall, branching ratios for $h \rightarrow \tau\mu$ of the order of $10^{-3}$ are obtained within this approach, consistent with all known constraints.
arxiv:1706.00054
We study the implicit bias of batch normalization trained by gradient descent. We show that when learning a linear model with batch normalization for binary classification, gradient descent converges to a uniform margin classifier on the training data with an $\exp(-\Omega(\log^2 t))$ convergence rate. This distinguishes linear models with batch normalization from those without batch normalization, in terms of both the type of implicit bias and the convergence rate. We further extend our result to a class of two-layer, single-filter linear convolutional neural networks, and show that batch normalization has an implicit bias towards a patch-wise uniform margin. Based on two examples, we demonstrate that patch-wise uniform margin classifiers can outperform maximum margin classifiers in certain learning problems. Our results contribute to a better theoretical understanding of batch normalization.
arxiv:2306.11680
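One structural property behind batch normalization's distinct implicit bias is that the output of a linear-plus-BN model is invariant to rescaling the weight vector, so gradient descent cannot grow margins merely by inflating $\|w\|$ as it can without BN. The numerical sketch below illustrates only this invariance; it is not the paper's construction or proof.

```python
# Minimal linear model with batch normalization over the batch dimension.
# Illustrates scale invariance: BN(X @ (c*w)) == BN(X @ w) up to the eps term.
import numpy as np

def batch_norm(z, gamma=1.0, beta=0.0, eps=1e-8):
    """Normalize a batch of pre-activations z to zero mean and unit variance,
    then apply the learnable affine parameters gamma and beta."""
    return gamma * (z - z.mean()) / np.sqrt(z.var() + eps) + beta

def bn_linear_logits(X, w, gamma=1.0, beta=0.0):
    """Linear model with BN: f(x_i) = gamma * BN((X @ w)_i) + beta.
    Rescaling w rescales mean and std of X @ w identically, so the
    normalized output is (nearly) unchanged."""
    return batch_norm(X @ w, gamma, beta)
```

Because the logits are invariant to $\|w\|$, the training dynamics are driven by the direction of $w$ and by $\gamma$, which is the setting in which the uniform-margin bias of the paper arises.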
3D recovery from multi-stereo and stereo images, as an important application of image-based perspective geometry, serves many applications in computer vision, remote sensing and geomatics. In this chapter, the authors utilize the imaging geometry and present approaches that perform 3D reconstruction from cross-view images that are drastically different in their viewpoints. We introduce our framework that takes ground-view images and satellite images for full 3D recovery, which includes the necessary methods for satellite- and ground-based point cloud generation from images, 3D data co-registration, fusion, and mesh generation. We demonstrate our proposed framework on a dataset consisting of twelve satellite images and 150K video frames acquired through a vehicle-mounted GoPro camera, and we present the reconstruction results. We have also compared our results with results generated from an intuitive processing pipeline that involves typical geo-registration and meshing methods.
arxiv:2106.14306
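The 3D data co-registration step mentioned above typically rests on estimating a rigid transform between point sets; below is a sketch of the standard least-squares (Kabsch/SVD) alignment, shown as a generic building block under that assumption rather than as the chapter's exact method.

```python
# Least-squares rigid alignment of two corresponding 3D point sets
# (Kabsch algorithm): find R, t minimizing sum_i ||R p_i + t - q_i||^2.
import numpy as np

def rigid_align(P, Q):
    """Return (R, t) such that Q ≈ P @ R.T + t, for Nx3 arrays P, Q
    with row-wise correspondences."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

In a cross-view setting the correspondences themselves are the hard part; once satellite- and ground-based point clouds share matched points, a closed-form step like this (often inside an ICP-style loop) produces the co-registration.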
Strain tuning has emerged as an appealing tool to tune the fundamental optical properties of solid-state quantum emitters. In particular, the wavelength and fine structure of quantum dot states can be tuned using hybrid semiconductor-piezoelectric devices. Here, we show how an applied external stress can directly impact the polarization properties of coupled InAs quantum dot-micropillar cavity systems. In our experiment, we find that we can reversibly tune the anisotropic polarization splitting of the fundamental microcavity mode by approximately 60 $\mu\text{eV}$. We discuss the origin of this tuning mechanism, which arises from an interplay between elastic deformation and the photoelastic effect in our micropillar. Finally, we exploit this effect to tune the quantum dot polarization opto-mechanically via the polarization-anisotropic Purcell effect. Our work paves the way for optomechanical and reversible tuning of the polarization and spin properties of light-matter coupled solid-state systems.
arxiv:2004.09445
Pressureless Euler-Poisson equations with attractive forces are standard models in Newtonian cosmology. In this article, we further develop the spectral dynamics method and apply a novel spectral-dynamics-integration method to study the blowup conditions for $C^{2}$ solutions with a bounded domain, $\left\vert X(t)\right\vert \leq X_{0}$, where $\left\vert \cdot \right\vert$ denotes the volume and $X_{0}$ is a positive constant. In particular, we show that if the cosmological constant $\Lambda < M/X_{0}$, with $M$ the total mass, then the non-trivial $C^{2}$ solutions in $R^{N}$ with the irrotational initial condition blow up in finite time.
arxiv:1403.6234
Recent experimental results suggest that the neutrinos of the Standard Model are massive, though light. Therefore they may mix with each other, giving rise to lepton flavour or even lepton number violating processes, depending on whether they are Dirac or Majorana particles. Furthermore, the lightness of the observed neutrinos may be explained by the existence of heavy ones, whose effects on LFV would be very sizeable. We present an analysis of the effect of massive neutrinos on flavour-changing decays of the Z boson into leptons, at the one-loop level, independent of neutrino mass models. Constraints from present experiments are taken into account.
arxiv:hep-ph/0006055