The Vera C. Rubin Observatory will carry out its Legacy Survey of Space and Time (LSST) with a single-exposure depth of $r{\sim}24.7$ and an anticipated baseline of 10 years, allowing access to the Milky Way's old halo not only at greater depth, but also with a longer baseline and better cadence than, e.g., PS1 3$\pi$ (Chambers et al. 2016). This will make LSST ideal for studying populations of variable stars such as RR Lyrae stars (RRL). Here, we address the question of observing-strategy optimization for LSST, as the survey footprint definition, single-visit exposure time, and the cadence of repeat visits in different filters are yet to be finalized. We present metrics used to assess the impact of different observing strategies on the reliable detectability and classification of standard-candle variable stars, including detection of the amplitude, period, and phase modulation effects of RR Lyrae stars, the so-called Blazhko effect (Blazhko 1907, Kollath et al. 2011), by evaluating metrics for simulated potential survey designs. So far, owing to the depth and cadence of typical all-sky surveys, it has been nearly impossible to study this effect in a larger sample. All-sky surveys with relatively few observations over a moderately long baseline allow only for fitting phase-folded RRL light curves, thus integrating over the complete survey length and hiding any information about possible period or phase modulation during the survey. On the other hand, surveys with a cadence suited to detecting slightly changing light curves usually have a relatively small footprint. LSST's survey strategy, however, will allow variable stars to be studied in a way that makes population studies possible.
arxiv:2109.13212
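The metric idea sketched in the abstract above can be illustrated with a toy calculation: phase-fold a simulated RR Lyrae light curve at its pulsation period and track the peak-to-peak amplitude in separate time slices, so that a slow Blazhko-like amplitude modulation shows up as season-to-season changes. The light-curve model, cadence, and modulation parameters below are illustrative assumptions, not the paper's actual metric.

```python
import numpy as np

def fold(times, period):
    """Phase-fold observation times at a trial period (phase in [0, 1))."""
    return (times / period) % 1.0

def seasonal_amplitude(times, mags, n_slices=5):
    """Crude modulation indicator: peak-to-peak magnitude range in time slices."""
    edges = np.linspace(times.min(), times.max(), n_slices + 1)
    amps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (times >= lo) & (times < hi)
        amps.append(mags[sel].max() - mags[sel].min() if sel.sum() >= 10 else np.nan)
    return np.array(amps)

# toy RRL-like light curve with a slow amplitude modulation (all values illustrative)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3650, 800))                 # 10-year baseline, 800 visits
period, blazhko = 0.55, 60.0                           # pulsation and modulation periods [days]
amp = 0.8 * (1 + 0.3 * np.sin(2 * np.pi * t / blazhko))
mag = 20.0 + amp * np.sin(2 * np.pi * fold(t, period)) + rng.normal(0, 0.02, t.size)
print(seasonal_amplitude(t, mag))                      # amplitude varies slice to slice
```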
Recently, end-to-end object detectors have gained significant attention from the research community due to their outstanding performance. However, DETR typically relies on supervised pretraining of the backbone on ImageNet, which limits the practical application of DETR and the design of the backbone, affecting the model's potential generalization ability. In this paper, we propose a new training method called step-by-step training. Specifically, in the first stage, a one-to-many pre-trained YOLO detector is used to initialize the end-to-end detector. In the second stage, the backbone and encoder are consistent with the DETR-like model, but only the detector needs to be trained from scratch. Thanks to this training method, the object detector does not need an additional dataset (ImageNet) to train the backbone, which makes the design of the backbone more flexible and dramatically reduces the training cost of the detector, benefiting the practical application of object detectors. At the same time, compared with the DETR-like model, the step-by-step training method can achieve higher accuracy than the traditional training method of the DETR-like model. With the aid of this novel training method, we propose a brand-new end-to-end real-time object detection model called DEYOv3. DEYOv3-N achieves 41.1% on COCO val2017 and 270 FPS on a T4 GPU, while DEYOv3-L achieves 51.3% AP and 102 FPS. Without the use of additional training data, DEYOv3 surpasses all existing real-time object detectors in terms of both speed and accuracy. It is worth noting that for models of N, S, and M scales, training on the COCO dataset can be completed using a single 24 GB RTX 3090 GPU. Code will be released at https://github.com/ouyanghaodong/DEYOv3.
arxiv:2309.11851
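A minimal sketch of the two-stage, step-by-step idea described above, with tiny placeholder modules standing in for a one-to-many (YOLO-style) pretrained backbone/encoder and a query-based head trained from scratch. Module names, shapes, the freezing choice, and the loss are illustrative assumptions, not DEYOv3's actual architecture or recipe.

```python
import torch
import torch.nn as nn

class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
    def forward(self, x):
        return self.conv(x)

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Conv2d(16, 16, 1)
    def forward(self, x):
        return self.proj(x)

class QueryHead(nn.Module):
    def __init__(self, num_queries=10, num_classes=80):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, 16))
        self.cls = nn.Linear(16, num_classes)
    def forward(self, feat):
        pooled = feat.mean(dim=(2, 3))                      # toy decoder: pool features
        q = self.queries.unsqueeze(0) + pooled.unsqueeze(1)
        return self.cls(q)                                  # (batch, queries, classes)

# Stage 1: pretend these weights come from a one-to-many (YOLO-style) detector
pretrained_backbone, pretrained_encoder = Backbone(), Encoder()

# Stage 2: reuse backbone/encoder weights, train only the end-to-end head from scratch
backbone, encoder, head = Backbone(), Encoder(), QueryHead()
backbone.load_state_dict(pretrained_backbone.state_dict())
encoder.load_state_dict(pretrained_encoder.state_dict())
for p in list(backbone.parameters()) + list(encoder.parameters()):
    p.requires_grad = False                                 # no ImageNet pretraining required

opt = torch.optim.AdamW(head.parameters(), lr=1e-4)
images = torch.randn(2, 3, 64, 64)
logits = head(encoder(backbone(images)))
loss = logits.mean()                                        # placeholder for the detection loss
loss.backward(); opt.step()
```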
Black hole (BH) shadows can be used to probe new physics in the form of ultra-light particles via the phenomenon of superradiant instability. By directly affecting the BH mass and spin, superradiance can lead to a time evolution of the BH shadow, which nonetheless has been argued to be unobservable through very long baseline interferometry (VLBI) over realistic observation timescales. We revisit the superradiance-induced BH shadow evolution including the competing effects of gas accretion and gravitational wave (GW) emission and, as a first step towards modelling realistic new-physics scenarios which predict the existence of multiple ultra-light species, we study the system in the presence of two ultra-light bosons, whose combined effect could help reduce the shadow evolution timescale. We find that accretion and GW emission play a negligible role in our results (justifying previous simplified analyses), and that, contrary to our intuition, the inclusion of an additional ultra-light boson does not shorten the BH shadow evolution timescale and hence does not improve detection prospects. However, we point out an important subtlety concerning the observationally meaningful definition of the superradiance-induced BH shadow evolution timescale, which reduces the latter by about an order of magnitude, opening up the possibility of observing the superradiance-induced BH shadow evolution with upcoming VLBI arrays, provided angular resolutions just below the $\mu{\rm as}$ level can be reached. As a concrete example, we show that the angular size of the shadow of Sgr A$^*$ can change by up to $0.6\,\mu{\rm as}$ over a period as short as $16$ years, which further strengthens the scientific case for targeting the shadow of Sgr A$^*$ with next-generation VLBI arrays.
arxiv:2112.06932
A computational model is presented to calculate the ground-state energy of neutral and charged excitons confined in semiconductor quantum dots. The model is based on the variational quantum Monte Carlo method and effective mass Hamiltonians. Through an iterative Newton-Raphson process minimizing the local energy, and (optional) parallelization of random walkers, fast and accurate estimates of both confinement and Coulomb binding energies can be obtained on standard desktop computers. To illustrate the reach of the model, we provide Fortran programs and illustrative calculations for colloidal CdSe nanoplatelets with large lateral dimensions and dielectric confinement, where electronic correlations are strong. The results compare well with exact variational calculations and largely outperform configuration interaction calculations in computational efficiency.
arxiv:2009.09662
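For readers unfamiliar with the method named above, the sketch below shows the bare bones of a variational Monte Carlo energy estimate on a toy problem (a 1D harmonic oscillator with a Gaussian trial wavefunction, in units where hbar = m = omega = 1), rather than the paper's effective-mass exciton Hamiltonian or its Fortran implementation.

```python
import numpy as np

def local_energy(x, a):
    # E_L = -(1/2) psi''/psi + x^2/2 for the trial wavefunction psi(x) = exp(-a x^2)
    return a + x**2 * (0.5 - 2 * a**2)

def vmc_energy(a, n_steps=20000, step=1.0, seed=0):
    """Metropolis sampling of |psi|^2 followed by averaging the local energy."""
    rng = np.random.default_rng(seed)
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # accept with probability |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < np.exp(-2 * a * (x_new**2 - x**2)):
            x = x_new
        energies.append(local_energy(x, a))
    return np.mean(energies[n_steps // 10:])   # discard burn-in

for a in (0.3, 0.5, 0.7):                      # the exact minimum is E = 0.5 at a = 0.5
    print(a, vmc_energy(a))
```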
Geoscientists use observed data to estimate properties of the Earth's interior. This often requires non-linear inverse problems to be solved and uncertainties to be estimated. Bayesian inference solves inverse problems under a probabilistic framework, in which uncertainty is represented by a so-called posterior probability distribution. Recently, variational inference has emerged as an efficient method to estimate Bayesian solutions. By seeking the closest approximation to the posterior distribution within any chosen family of distributions, variational inference yields a fully probabilistic solution. It is important to define expressive variational families so that the posterior distribution can be represented accurately. We introduce boosting variational inference (BVI) as a computationally efficient means to construct a flexible approximating family comprising all possible finite mixtures of simpler component distributions. We use Gaussian mixture components due to their fully parametric nature and the ease with which they can be optimised. We apply BVI to seismic travel time tomography and full waveform inversion, comparing its performance with other methods of solution. The results demonstrate that BVI achieves reasonable efficiency and accuracy while enabling the construction of a fully analytic expression for the posterior distribution. Samples that represent major components of uncertainty in the solution can be obtained analytically from each mixture component. We demonstrate that these samples can be used to solve an interrogation problem: to assess the size of a subsurface target structure. To the best of our knowledge, this is the first method in geophysics that provides both analytic and reasonably accurate probabilistic solutions to fully non-linear, high-dimensional Bayesian full waveform inversion problems.
arxiv:2312.17646
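The following is a deliberately simplified sketch of one "boosting" step for a 1D target density: given the current Gaussian mixture q, a new component is fitted by maximizing a residual-ELBO-like objective (expected log-density ratio plus the component's entropy) and mixed in with a fixed weight. Real BVI optimises a regularised residual ELBO and the mixture weights, and the geophysical targets are high-dimensional; everything here (target, weights, optimizer) is an illustrative assumption.

```python
import numpy as np
from scipy import optimize, stats

def log_p(x):
    # unnormalised 1D target: two well-separated Gaussian peaks
    return np.logaddexp(stats.norm.logpdf(x, -2, 0.5), stats.norm.logpdf(x, 2, 0.5))

def mixture_logpdf(x, comps, weights):
    terms = [np.log(w) + stats.norm.logpdf(x, m, s) for (m, s), w in zip(comps, weights)]
    return np.logaddexp.reduce(terms, axis=0)

def boosting_step(comps, weights, n_samples=2000, seed=0):
    z = np.random.default_rng(seed).normal(size=n_samples)     # fixed base samples
    def neg_objective(params):
        m, log_s = params
        x = m + np.exp(log_s) * z                               # reparameterised candidate samples
        entropy = 0.5 * np.log(2 * np.pi * np.e) + log_s        # Gaussian entropy
        return -(np.mean(log_p(x) - mixture_logpdf(x, comps, weights)) + entropy)
    m, log_s = optimize.minimize(neg_objective, x0=[0.0, 0.0], method="Nelder-Mead").x
    comps = comps + [(m, np.exp(log_s))]
    weights = [w * 0.5 for w in weights] + [0.5]                # naive fixed mixing weight
    return comps, weights

comps, weights = [(0.0, 3.0)], [1.0]                            # start from one broad Gaussian
for step in range(3):
    comps, weights = boosting_step(comps, weights, seed=step)
print(comps, weights)
```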
We introduce a general scheme that permits one to generate successive min-max problems for producing critical points of higher and higher indices for Palais-Smale functionals on Banach manifolds equipped with Finsler structures. We call the resulting tree of min-max problems a min-max hierarchy. Using the viscosity approach to the min-max theory of minimal surfaces introduced by the author in a series of recent works, we explain how this scheme can be deformed to produce smooth minimal surfaces of strictly increasing area in arbitrary codimension. We implement this scheme in the case of the $3$-dimensional sphere. In particular we give a min-max characterization of the Clifford torus and conjecture which minimal surfaces come next in the $S^3$ hierarchy. Among other results, we prove here the lower semi-continuity of the Morse index in the viscosity method below an area level.
arxiv:1705.09848
Let X be a smooth complex projective variety of dimension d. It is classical that ample line bundles on X satisfy many beautiful geometric, cohomological, and numerical properties that render their behavior particularly tractable. By contrast, examples due to Cutkosky and others have led to the common impression that the linear series associated to non-ample divisors are in general mired in pathology. However, starting with fundamental work of Fujita, Nakayama, and Tsuji, it has recently become apparent that arbitrary effective (or "big") divisors in fact display a surprising number of properties analogous to those of ample line bundles. The key is to study the properties in question from an asymptotic perspective. At the same time, many interesting questions and open problems remain. The purpose of the present expository note is to give an invitation to this circle of ideas. In the hope that this informal overview might serve as a jumping-off point for the technical literature in the area, we sketch many examples but provide no proofs. We focus on one particular invariant -- the "volume" of a line bundle -- that measures the rate of growth of the number of sections of powers of the bundle in question.
arxiv:math/0505054
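For orientation, the invariant discussed in the abstract above is usually defined as follows; this is the standard definition, stated here for context rather than quoted from the note itself:

\[
  \operatorname{vol}_X(L) \;=\; \limsup_{m \to \infty}
  \frac{h^0\!\bigl(X, \mathcal{O}_X(mL)\bigr)}{m^d / d!},
\]

so that $L$ is big exactly when $\operatorname{vol}_X(L) > 0$, and $\operatorname{vol}_X(L) = (L^d)$ when $L$ is ample.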
We have used an electromigration technique to fabricate a $\rm{C_{60}}$ single-molecule transistor (SMT). Besides describing our electromigration procedure, we focus on and present an experimental study of a single-molecule quantum dot containing an even number of electrons, revealing, for two different samples, a clear out-of-equilibrium Kondo effect. Low-temperature magneto-transport studies are provided, which demonstrate a Zeeman splitting of the finite-bias anomaly.
arxiv:0809.2706
We report an element-specific investigation of electronic and magnetic properties of the graphene/Ni(111) system. Using magnetic circular dichroism, the occurrence of an induced magnetic moment of the carbon atoms in the graphene layer aligned parallel to the Ni 3d magnetization is observed. We attribute this magnetic moment to the strong hybridization between C $\pi$ and Ni 3d valence band states. The net magnetic moment of carbon in the graphene layer is estimated to be in the range of $0.05-0.1\,\mu_B$ per atom.
arxiv:0907.4344
In scattering experiments, physicists observe so-called resonances as peaks at certain energy values in the measured scattering cross sections per solid angle. These peaks are usually associated with certain scattering processes, e.g., emission, absorption, or excitation of certain particles and systems. On the other hand, mathematicians define resonances as poles of an analytic continuation of the resolvent operator through complex dilations. A major challenge is to relate these scattering- and resonance-theoretic notions, e.g., to prove that the poles of the resolvent operator induce the above-mentioned peaks in the scattering matrix. In the case of quantum mechanics, this problem was addressed in numerous works that culminated in Simon's seminal paper [33], in which a general solution was presented for a large class of pair potentials. However, in quantum field theory the analogous problem has been open for several decades despite the fact that scattering and resonance theories have been well developed for many models. In certain regimes these models describe very fundamental phenomena, such as emission and absorption of photons by atoms, from which quantum mechanics originated. In this work we present a first non-perturbative formula that relates the scattering matrix to the resolvent operator in the massless spin-boson model. This result can be seen as major progress compared to our previous works [13] and [12], in which we only managed to derive a perturbative formula.
arxiv:1907.03013
We build a common-knowledge concept recognition system for a Systems Engineer's Virtual Assistant (SEVA) which can be used for downstream tasks such as relation extraction, knowledge graph construction, and question answering. The problem is formulated as a token classification task similar to named entity extraction. With the help of a domain expert and text processing methods, we construct a dataset annotated at the word level by carefully defining a labelling scheme to train a sequence model to recognize systems engineering concepts. We use a pre-trained language model and fine-tune it with the labeled dataset of concepts. In addition, we also create some essential datasets for information such as abbreviations and definitions from the systems engineering domain. Finally, we construct a simple knowledge graph using these extracted concepts along with some hyponym relations.
arxiv:2003.11687
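The token-classification framing described above can be sketched with Hugging Face Transformers as below. The base model, label set, example sentence, and the simple "subword inherits the word label" alignment are illustrative assumptions, not the paper's actual setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels_list = ["O", "B-CONCEPT", "I-CONCEPT"]          # hypothetical labelling scheme
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels_list))

words = ["The", "thermal", "control", "subsystem", "regulates", "temperature", "."]
word_labels = [0, 1, 2, 2, 0, 0, 0]                    # one multi-word concept span

enc = tok(words, is_split_into_words=True, return_tensors="pt", truncation=True)
aligned = []
for wid in enc.word_ids(batch_index=0):
    if wid is None:
        aligned.append(-100)                           # special tokens: ignored by the loss
    else:
        aligned.append(word_labels[wid])               # subword pieces inherit the word label
labels = torch.tensor([aligned])

out = model(**enc, labels=labels)                      # token-level cross-entropy loss
out.loss.backward()
torch.optim.AdamW(model.parameters(), lr=2e-5).step()
print(float(out.loss))
```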
We study the properties and the origin of the radio emission in the most luminous early-type galaxies (ETGs) in the nearby Universe (M_K < -25, recession velocity < 7,500 km/s) as seen by 150 MHz Low-Frequency Array (LOFAR) observations. LOFAR images are available for 188 of these giant ETGs (gETGs) and 146 (78%) of them are detected above a typical luminosity of ~10e21 W/Hz. They show a large spread in power, reaching up to ~10e26 W/Hz. We confirm a positive link between the stellar luminosity of gETGs and their median radio power, the detection rate, and the fraction of extended sources. About two-thirds (91) of the detected gETGs are unresolved, with sizes < 4 kpc, confirming the prevalence of compact radio sources in local sources. Forty-six gETGs show extended emission on scales ranging from 4 to 340 kpc, at least 80% of which have an FR I class morphology. Based on the morphology and spectral index of the extended sources, ~30% of them might be remnant or restarted sources, but further studies are needed to confirm this. Optical spectroscopy (available for 44 gETGs) indicates that for seven of them the nuclear gas is ionized by young stars, suggesting a contribution to their radio emission from star-forming regions. Their radio luminosities correspond to a star formation rate (SFR) in the range 0.1-8 Msun/yr and a median specific SFR of 0.8x10e-12 yr-1. The gas flowing toward the center of gETGs can accrete onto the supermassive black hole but can also stall at larger radii and form new stars, an indication that feedback does not completely quench star formation. The most luminous gETGs (25 galaxies with M_K < -25.8) are all detected at 150 MHz; however, they are not all currently turned on: at least four of them are remnant sources and at least one is likely powered by star formation.
arxiv:2202.08593
We present an interferometric search for large molecules, including methanol, methyl cyanide, ethyl cyanide, ethanol, and methyl formate in comets LINEAR (C/2002 T7) and NEAT (C/2001 Q4) with the Berkeley-Illinois-Maryland Association (BIMA) array. In addition, we also searched for transitions of the simpler molecules CS, SiO, HNC, HN13C, and 13CO. We detected transitions of methanol and CS around comet LINEAR and one transition of methanol around comet NEAT within a synthesized beam of ~20''. We calculated the total column density and production rate of each molecular species using the variable temperature and outflow velocity (VTOV) model described by Friedel et al. (2005). Considering the molecular production rate ratios with respect to water, comet T7 LINEAR is more similar to comet Hale-Bopp while comet Q4 NEAT is more similar to comet Hyakutake. It is unclear, however, due to such a small sample size, whether there is a clear distinction between a Hale-Bopp and a Hyakutake class of comet or whether comets have a continuous range of molecular production rate ratios.
arxiv:astro-ph/0601709
Recently, the concept of fog computing, which aims at providing time-sensitive data services, has become popular. In this model, computation is performed at the edge of the network instead of sending vast amounts of data to the cloud. Thus, fog computing provides low latency and location awareness to end users, and improves quality of service (QoS). One key feature in this model is the design of a payment plan from the network operator (NO) to the fog nodes (FNs) for the rental of their computing resources, such as computation capacity, spectrum, and transmission power. In this paper, we investigate the problem of how to design an efficient payment plan that maximizes the NO's revenue while maintaining the FNs' incentive to cooperate, through the moral hazard model in contract theory. We propose a multi-dimensional contract which considers the FNs' characteristics such as location, computation capacity, storage, transmission bandwidth, etc. First, a contract which pays the FNs by evaluating the resources they have provided from multiple aspects is proposed. Then, the utility maximization problem of the NO is formulated. Furthermore, we use numerical results to analyze the optimal payment plan and compare the NO's utility under different payment plans.
arxiv:1701.07877
" four elements " and proposed a mechanistic alternative of atoms and chemical reactions that could be subject to rigorous experiment. in the following decades, many important discoveries were made, such as the nature of ' air ' which was discovered to be composed of many different gases. the scottish chemist joseph black and the flemish jan baptist van helmont discovered carbon dioxide, or what black called ' fixed air ' in 1754 ; henry cavendish discovered hydrogen and elucidated its properties and joseph priestley and, independently, carl wilhelm scheele isolated pure oxygen. the theory of phlogiston ( a substance at the root of all combustion ) was propounded by the german georg ernst stahl in the early 18th century and was only overturned by the end of the century by the french chemist antoine lavoisier, the chemical analogue of newton in physics. lavoisier did more than any other to establish the new science on proper theoretical footing, by elucidating the principle of conservation of mass and developing a new system of chemical nomenclature used to this day. english scientist john dalton proposed the modern theory of atoms ; that all substances are composed of indivisible ' atoms ' of matter and that different atoms have varying atomic weights. the development of the electrochemical theory of chemical combinations occurred in the early 19th century as the result of the work of two scientists in particular, jons jacob berzelius and humphry davy, made possible by the prior invention of the voltaic pile by alessandro volta. davy discovered nine new elements including the alkali metals by extracting them from their oxides with electric current. british william prout first proposed ordering all the elements by their atomic weight as all atoms had a weight that was an exact multiple of the atomic weight of hydrogen. j. a. r. newlands devised an early table of elements, which was then developed into the modern periodic table of elements in the 1860s by dmitri mendeleev and independently by several other scientists including julius lothar meyer. the inert gases, later called the noble gases were discovered by william ramsay in collaboration with lord rayleigh at the end of the century, thereby filling in the basic structure of the table. organic chemistry was developed by justus von liebig and others, following friedrich wohler ' s synthesis of urea. other crucial 19th century advances were ; an understanding of valence bonding ( edward frankland in 1852 ) and the application of thermodynamics to chemistry ( j. w. gibbs and svante arrhenius in
https://en.wikipedia.org/wiki/Chemistry
A new recipe for concealing objects from detection is suggested. Unlike a traditional cloak, which deflects light around the core of the cloak to make the object inside invisible, our cloak guides the light to penetrate the core of the cloak but without striking some region of the cloak shell -- the so-called folded region. Full-wave analytical calculation shows that this cloak leads to a scattering enhancement instead of a scattering reduction, in contrast to the traditional cloak; the scattered field distribution can also be changed as if the scatterer were moved from its original position. Such an interesting phenomenon indicates that the proposed cloak can be used to disguise the true information about the object, e.g., its position, size, etc., and further mislead the observer and avoid being detected.
arxiv:0808.0215
We show that the affine cones over a general Fano-Mukai fourfold of genus $g = 7$, $8$, and $9$ are flexible. Equivalently, there is an infinitely transitive action of the special automorphism group on such affine cones. In particular, any Mukai fourfold of genus $7, 8$ and $9$ is $\Bbb A^2$-cylindrical.
arxiv:2208.09109
Iterated belief revision requires information about the current beliefs. This information is represented by mathematical structures called doxastic states. Most of the literature concentrates on how to revise a doxastic state and neglects that it may grow exponentially. This problem is studied here for the most common ways of storing a doxastic state. All four methods are able to store every doxastic state, but some do it in less space than others. In particular, the explicit representation (an enumeration of the current beliefs) is the most wasteful in terms of space. The level representation (a sequence of propositional formulae) and the natural representation (a history of natural revisions) are more compact than it. The lexicographic representation (a history of lexicographic revisions) is more compact still.
arxiv:2305.09200
In this paper, we use techniques of Conrey, Farmer, and Wallace to find spaces of modular forms $S_k(\Gamma_0(N))$ where all of the eigenspaces have Hecke eigenvalues defined over $\F_p$, and give a heuristic indicating that these are all such spaces.
arxiv:math/0606052
Ginzburg-Landau (GL) equations and the GL free energy for the flux phase and superconductivity are derived microscopically from the $t$-$J$ model on a square lattice. The order parameter (OP) for the flux phase has direct coupling to a magnetic field, in contrast to the superconducting OP, which has minimal coupling to a vector potential. Therefore, when the flux-phase OP has unidirectional spatial variation, staggered currents would flow in a perpendicular direction. The derived GL theory can be used for various problems in high-$T_c$ cuprate superconductors, e.g., states near a surface or impurities, and the effect of an external magnetic field. Since the GL theory derived microscopically directly reflects the electronic structure of the system, e.g., the shape of the Fermi surface that changes with doping, it can provide more useful information than that from phenomenological GL theories.
arxiv:1708.03542
We report a spatially resolved X-ray spectral analysis of NGC 4258 using archival {\it Chandra} and {\it XMM-Newton} observations. The {\it XMM-Newton} spectra of the nuclear region are well described by two power-law components, a soft (0.57 keV) thermal component, and an Fe K$\alpha$ line with EW = 40 $\pm$ 33 eV. The properties of the second, weaker power-law component are similar to those of an off-nuclear source $2.5\arcsec$ SW of the nucleus. The spectrum of the extended emission of the entire galaxy is well described by two thermal-component (MEKAL) models with temperatures $\simeq 0.60$ and 0.22 keV. The {\it Chandra} and {\it XMM-Newton} spectra along the anomalous arms show that the absorbing column density to the SE anomalous arm is consistent with absorption by gas in our Galaxy, while the absorbing column to the NW anomalous arm is higher, indicating that the NW arm is partially on the far side of the galactic disk. The combined {\it Chandra} data clearly detect the X-ray emission from the hot spots at the ends of the approximately N-S radio jets. By assuming the hot spots represent shocked thermal gas at the ends of the jets, we estimate shock powers of $\simeq 3 \times 10^{39} f^{-1/2}$ erg s$^{-1}$ ($f$ is the filling factor), similar to the radiative power in the inner anomalous arms, consistent with the notion that the jets could be responsible for heating the gas in the anomalous arms.
arxiv:astro-ph/0701569
The geodesic motion of pseudo-classical spinning particles in the Euclidean Taub-NUT space is analysed. The generalized Killing equations for spinning space are investigated and the constants of motion are derived in terms of the solutions of these equations. A simple exact solution, corresponding to trajectories lying on a cone, is given.
arxiv:hep-th/9401036
Organometallic lead halide perovskites are highly efficient materials for solar cells and other optoelectronic applications due to their high quantum efficiency and exceptional semiconducting properties. A peculiarity of these perovskites is the substantial ionic motion under external forces. Here, we reveal that electric-field- and light-induced ionic motion in MAPbX3 crystals (X = Cl, Br, I and MA = CH3NH3) leads to an unexpected piezoelectric-like response, an order of magnitude larger than in ferroelectric perovskite oxides. The nominal macroscopic symmetry of the crystals is broken by the redistribution of ionic species, which can be controlled deterministically by light and electric field. The revealed piezoelectric response is possibly present in other materials with significant ionic activity, but the unique feature of organometallic perovskites is the strong effect on the piezoelectric response of the interplay between ionic motion (MA+ and X-1) and photoelectrons generated under illumination.
arxiv:2202.05754
Hybrid quantum systems based on magnetic platforms have witnessed the birth and fast development of quantum spintronics. Until now, most studies have relied on magnetic excitations in low-damping magnetic insulators, particularly yttrium iron garnet, while a large class of magnetic systems is ruled out of this interdisciplinary field. Here we propose the generation of a magnon bundle in a hybrid magnet-qubit system, where two or more magnons are emitted simultaneously. By tuning the driving frequency of the qubit to match the detuning between the magnon and qubit modes, one can effectively generate a magnon bundle via super-Rabi oscillations. In contrast with conventional wisdom, magnetic dissipation plays an enabling role in generating the magnon bundle, where the relaxation time of magnons determines the typical time delay between two successive magnons. The maximal damping that allows an antibunched magnon bundle can reach the order of 0.1, which may break the monopoly of low-dissipation magnetic insulators in quantum spintronics and enable a large class of magnetic materials for quantum manipulation. Further, our findings may provide a scalable and generic platform to study multi-magnon physics and benefit the design of magnonic networks for quantum information processing.
arxiv:2301.09095
The rapid development of musical AI technologies has expanded the creative potential of various musical activities, ranging from music style transformation to music generation. However, little research has investigated how musical AIs can support music therapists, who urgently need new technological support. This study used a mixed method, including semi-structured interviews and a participatory design approach. By collaborating with music therapists, we explored design opportunities for musical AIs in music therapy. We presented the co-design outcomes involving the integration of musical AIs into a music therapy process, which was developed from a theoretical framework rooted in emotion-focused therapy. After that, we summarized the benefits and concerns surrounding musical AIs from the perspective of music therapists. Based on our findings, we discussed the opportunities and design implications for applying musical AIs to music therapy. Our work offers valuable insights for developing human-AI collaborative music systems in therapy involving complex procedures and specific requirements.
arxiv:2402.14503
Autonomous vehicles (AVs) require accurate metric and topological location estimates for safe, effective navigation and decision-making. Although many high-definition (HD) roadmaps exist, they are not always accurate, since public roads are dynamic, shaped unpredictably by both human activity and nature. Thus, AVs must be able to handle situations in which the topology specified by the map does not agree with reality. We present the variable structure multiple hidden Markov model (VSM-HMM) as a framework for localizing in the presence of topological uncertainty, and demonstrate its effectiveness on an AV where lane membership is modeled as a topological localization process. VSM-HMMs use a dynamic set of HMMs to simultaneously reason about location within a set of most likely current topologies, and therefore may also be applied to topological structure estimation as well as AV lane estimation. In addition, we present an extension to the Earth Mover's Distance which allows uncertainty to be taken into account when computing the distance between belief distributions on simplices of arbitrary relative sizes.
arxiv:1803.01378
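For context only: the basic quantity being extended above can be computed for two discrete lane-belief distributions of different sizes with SciPy's 1D Wasserstein (Earth Mover's) distance, as in the sketch below. This is not the paper's extension; the lane indices and belief values are made up for illustration.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# belief over 3 hypothesised lanes vs. belief over 4 lanes (weights each sum to 1)
lanes_a, belief_a = np.array([0, 1, 2]), np.array([0.7, 0.2, 0.1])
lanes_b, belief_b = np.array([0, 1, 2, 3]), np.array([0.1, 0.2, 0.6, 0.1])

d = wasserstein_distance(lanes_a, lanes_b, u_weights=belief_a, v_weights=belief_b)
print(f"EMD between lane beliefs: {d:.3f}")
```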
Transformer-based large language models (LLMs) encounter challenges in processing long sequences on edge devices due to the quadratic complexity of attention mechanisms and the growing memory demands of the key-value (KV) cache. Existing KV cache optimizations struggle with irreversible token eviction in long-output tasks, while alternative sequence modeling architectures prove costly to adopt within established Transformer infrastructure. We present EdgeInfinite, a memory-efficient solution for infinite contexts that integrates compressed memory into Transformer-based LLMs through a trainable memory-gating module. This approach maintains full compatibility with standard Transformer architectures, requires fine-tuning only a small subset of parameters, and enables selective activation of the memory-gating module for routing long- and short-context tasks. Experimental results show that EdgeInfinite achieves performance comparable to a baseline Transformer-based LLM on long-context benchmarks while optimizing memory consumption and time to first token.
arxiv:2503.22196
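A toy sketch of a memory-gating idea in the spirit of the abstract above (not EdgeInfinite's actual architecture): a learned sigmoid gate blends the output of local attention with a read-out from a compressed memory summary. Module names, shapes, and the blending rule are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GatedMemoryBlend(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(d_model, d_model)

    def forward(self, attn_out: torch.Tensor, memory_out: torch.Tensor) -> torch.Tensor:
        # attn_out, memory_out: (batch, seq_len, d_model)
        g = torch.sigmoid(self.gate(attn_out))           # per-dimension gate in (0, 1)
        return g * attn_out + (1.0 - g) * memory_out     # blend local and compressed context

d_model, blend = 64, GatedMemoryBlend(64)
attn_out = torch.randn(2, 8, d_model)                    # e.g. sliding-window attention output
memory_out = torch.randn(2, 8, d_model)                  # e.g. read-out from a compressed KV summary
print(blend(attn_out, memory_out).shape)                 # torch.Size([2, 8, 64])
```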
We create a one-dimensional strongly correlated quantum gas of $^{133}$Cs atoms with attractive interactions by direct laser cooling in 300~ms. After compressing and cooling the optically trapped atoms to the vibrational ground state along two tightly confined directions, the emergence of a non-Gaussian time-of-flight distribution along the third, weakly confined direction reveals that the system enters a quantum degenerate regime. We observe a strong reduction of two- and three-body spatial correlations and infer that the atoms are directly cooled into a highly correlated excited metastable state, known as a super-Tonks-Girardeau gas.
arxiv:1906.05334
Respirators, medical masks, and barrier face coverings all filter airborne particles using similar physical principles. However, they are tested for certification using a variety of standardized test methods, creating challenges for the comparison of differently certified products. We have performed systematic experiments to quantify and understand the differences between standardized test methods for N95 respirators (NIOSH TEB-APR-STP-0059 under US 42 CFR 84), medical face masks (ASTM F2299/F2100), and COVID-19-related barrier face coverings (ASTM F3502-21). Our experiments demonstrate the role of face velocity, particle properties (mean size, size variability, electric charge, density, and shape), measurement techniques, and environmental preconditioning. The measured filtration efficiency was most sensitive to changes in face velocity and particle charge. Relative to the NIOSH method, users of the ASTM F2299/F2100 method have commonly used non-neutralized (highly charged) aerosols as well as smaller face velocities, each of which may result in approximately 10% higher measured filtration efficiencies. In the NIOSH method, environmental conditioning at elevated humidity increased filtration efficiency in some commercial samples while decreasing it in others, indicating that measurement should be performed both with and without conditioning. More generally, our results provide an experimental basis for the comparison of respirators certified under various international methods, including FFP2, KN95, P2, Korea 1st Class, and DS2.
arxiv:2106.04059
It is widely believed that, for a given top mass, the Higgs mass has a lower bound: if $m_{\rm Higgs}$ is too small, the Higgs vacuum is unstable due to top dynamics. Based on vacuum instability, the state-of-the-art calculation of the lower bound is close to the current experimental limit. Using non-perturbative simulations and large-$N$ calculations, we show that the vacuum is in fact never unstable. Instead, we investigate the existence of a new lower bound, based on the intrinsic cut-off of this trivial theory.
arxiv:hep-lat/0308020
GrayStar is a stellar atmospheric and spectral line modelling, post-processing, and visualisation code, suitable for classroom demonstrations and laboratory-style assignments, that has been developed in Java and deployed in JavaScript and HTML. The only software needed to compute models and post-processed observables, and to visualise the resulting atmospheric structure and observables, is a common web browser. Therefore, the code will run on any common PC or related x86(-64) computer of the type that typically serves classroom data projectors, is found in undergraduate computer laboratories, or that students themselves own, including those with highly portable form factors such as netbooks and tablets. The user requires no experience with compiling source code, reading data files, or using plotting packages. More advanced students can view the JavaScript source code using the developer tools provided by common web browsers. The code is based on the approximate gray atmospheric solution and runs quickly enough on current common PCs to provide near-instantaneous results, allowing for real-time exploration of parameter space. I describe the user interface and its inputs and outputs and suggest specific pedagogical applications and projects. Therefore, this paper may serve as a GrayStar user manual for both instructors and students. In an accompanying paper, I describe the computational strategy and methodology as necessitated by Java and JavaScript. I have made the application itself, and the HTML, CSS, JavaScript, and Java source files, available to the community. The web application and source files may be found at www.ap.smu.ca/~ishort/graystar.
arxiv:1409.1891
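For background, the approximate gray atmospheric solution mentioned above is usually quoted in its Eddington form, a standard textbook relation stated here for orientation rather than taken from the paper itself:

\[
  T^4(\tau) \;=\; \tfrac{3}{4}\, T_{\mathrm{eff}}^4 \left( \tau + \tfrac{2}{3} \right),
\]

so the local temperature equals the effective temperature at optical depth $\tau = 2/3$.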
We summarize recent measurements made at the Tevatron collider using top event candidates. Cross section and mass measurements are discussed in a separate contribution to these proceedings. Here we report on studies of the top $p_T$ distribution in $t\bar{t}$ production and studies of single top production. The properties of top decays examined are: BF($t \to Wb$)/BF($t \to Wq$), helicity amplitudes of $W$'s from top decays, and correlations of $t\bar{t}$ decay products. Searches for new physics in rare top decays and a search for a state X $\to t\bar{t}$ are also reported.
arxiv:hep-ex/9909016
both a monomorphism and an epimorphism. A retraction if it has a right inverse, i.e. if there exists a morphism g : B → A with fg = 1_B. A section if it has a left inverse, i.e. if there exists a morphism g : B → A with gf = 1_A. An isomorphism if it has an inverse, i.e. if there exists a morphism g : B → A with fg = 1_B and gf = 1_A. An endomorphism if A = B. The class of endomorphisms of A is denoted End(A). For locally small categories, End(A) is a set and forms a monoid under morphism composition. An automorphism if f is both an endomorphism and an isomorphism. The class of automorphisms of A is denoted Aut(A). For locally small categories, it forms a group under morphism composition called the automorphism group of A. Every retraction is an epimorphism. Every section is a monomorphism. The following three statements are equivalent: f is a monomorphism and a retraction; f is an epimorphism and a section; f is an isomorphism. Relations among morphisms (such as fg = h) can most conveniently be represented with commutative diagrams, where the objects are represented as points and the morphisms as arrows. == Types of categories == In many categories, e.g. Ab or Vect_K, the hom-sets Hom(A, B) are not just sets but actually abelian groups, and the composition of morphisms is compatible with these group structures; i.e. it is bilinear. Such a category is called preadditive. If, furthermore, the category has all finite products and coproducts, it is called an additive category. If all morphisms have a kernel and a cokernel, and all epimorphisms are cokernels and all monomorphisms are kernels, then we speak of an abelian category. A typical example of an abelian category is the category of abelian groups. A category is called complete if all small limits exist in it. The categories of sets, abelian groups and topological spaces are complete. A category is called cartesian closed if it has finite direct products and a morphism
https://en.wikipedia.org/wiki/Category_(mathematics)
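The definitions of section, retraction, and isomorphism above can be illustrated concretely in the category of finite sets, where morphisms are ordinary functions; the specific sets and maps below are chosen only for illustration.

```python
# f : A -> B has a left inverse g (so f is a section and g a retraction);
# f is an isomorphism only if both composites are identities.
A = ["a1", "a2"]
B = ["b1", "b2", "b3"]

f = {"a1": "b1", "a2": "b2"}                  # injective, hence a monomorphism of sets
g = {"b1": "a1", "b2": "a2", "b3": "a1"}      # g : B -> A

def compose(second, first):
    """Return the composite 'second after first' as a dict."""
    return {x: second[first[x]] for x in first}

def is_identity(h, domain):
    return all(h[x] == x for x in domain)

print(is_identity(compose(g, f), A))   # True:  g∘f = 1_A, so f is a section
print(is_identity(compose(f, g), B))   # False: f∘g != 1_B, so f is not an isomorphism
```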
Taking the proton as an ensemble of quark-gluon Fock states and using the principle of detailed balance, we construct a simple statistical model for the parton distribution of the proton. The recently observed Bjorken-$x$ dependent light-flavor sea quark asymmetry $\bar{d}(x) - \bar{u}(x)$ can be well reproduced by Monte Carlo simulation as a pure statistical effect.
arxiv:hep-ph/0201214
This technical report outlines our submission to the zero-shot track of the Visual Anomaly and Novelty Detection (VAND) 2023 Challenge. Building on the performance of the WinCLIP framework, we aim to enhance the system's localization capabilities by integrating zero-shot segmentation models. In addition, we perform foreground instance segmentation, which enables the model to focus on the relevant parts of the image, thus allowing the models to better identify small or subtle deviations. Our pipeline requires no external data or information, allowing it to be directly applied to new datasets. Our team (Variance Vigilance Vanguard) ranked third in the zero-shot track of the VAND Challenge, and achieves an average F1-max score of 81.5/24.2 at the sample/pixel level on the VisA dataset.
arxiv:2306.09269
Developing complex engineered systems (CES) poses significant challenges for engineers, managers, designers, and businesspeople alike due to the inherent complexity of the systems and contexts involved. Furthermore, experts have expressed great interest in filling the gap in theory about how CES develop. This article begins to address that gap in two ways. First, it reviews the numerous definitions of CES along with existing theory and methods on CES development processes. Then, it proposes the Complex System Integrated Utilities Model (CESIUM), a novel framework for exploring how numerous system and development-process characteristics may affect the performance of CES. CESIUM creates simulated representations of a system architecture, the corresponding engineering organization, and the new product development process through which the organization designs the system. It does so by representing the system as a network of interdependent artifacts designed by agents. Agents iteratively design their artifacts through optimization and share information with other agents, thereby advancing the CES toward a solution. This paper describes the model, conducts a sensitivity analysis, provides validation, and suggests directions for future study.
arxiv:2103.12820
We investigate to what extent a generic, generation-dependent $U(1)$ symmetry acting on the quark Yukawa operators can reduce the number of free parameters by forcing some entries in the Yukawa matrices to vanish. The maximal reduction compatible with CP violation yields nine real parameters and one phase, which matches the number of physical observables, implying that such models have no free parameters. We derive a set of results: (i) the only possible structures have the form $M_4 \oplus M_5$, where the subscripts indicate the number of real parameters in the Yukawa matrices; (ii) there are only two inequivalent Yukawa structures, each one giving rise to six different models depending on the quark flavour assignments; (iii) the $U(1)$ symmetries that generate these textures all have a QCD anomaly, and hence are Peccei-Quinn symmetries, reinforcing the idea of a possible connection between the quark flavour puzzle and the axion solution to the strong CP problem; (iv) in some cases the contributions to the QCD anomaly of two generations cancel out, and this opens the possibility that the axion coupling to nucleons could be strongly suppressed. Flavour-violating axion couplings to quarks are completely fixed, up to the axion decay constant, providing a non-trivial complementarity between low-energy flavour-violating processes and standard axion searches.
arxiv:1811.09637
We study the C*-algebra crossed product $C_0(X) \rtimes G$ of a locally compact group $G$ acting properly on a locally compact Hausdorff space $X$. Under some mild extra conditions, which are automatic if $G$ is discrete or a Lie group, we describe in detail, and in terms of the action, the primitive ideal space of such crossed products as a topological space, in particular with respect to its fibring over the quotient space $G \backslash X$. We also give some results on the $K$-theory of such C*-algebras. These more or less compute the $K$-theory in the case of isolated orbits with non-trivial (finite) stabilizers. We also give a purely $K$-theoretic proof of a result due to Paul Baum and Alain Connes on $K$-theory with complex coefficients of crossed products by finite groups.
arxiv:1012.5214
The Japanese spacecraft Hayabusa 2 visited the asteroid (162173) Ryugu and provided many high-resolution images of its surface, revealing that Ryugu has a spinning-top shape with a prominent equatorial ridge, much like the shapes reported for some other asteroids. In this study, through dozens of numerical calculations, we demonstrate that during a period of fast rotation, ejecta from craters formed at lower and mid-latitudes can be deposited on the equatorial ridge. Assuming a rotation period of 3 h, we estimate that an equatorial ridge with a height of 50 m can be generated in 128 (+47/-27) My for a main-belt asteroid, or 3.1 (+4.2/-1.2) Gy for a near-Earth asteroid. Therefore, an equatorial ridge can form within the average mean collisional lifetime of a km-sized asteroid within the main belt, but not for near-Earth asteroids. Furthermore, our model may explain why blue (younger) material occurs on the equatorial ridge.
arxiv:2205.05816
We investigate class field towers of number fields obtained as fixed fields of modular representations of the absolute Galois group of the rational numbers. First, for each $k \in \{12, 16, 18, 20, 22, 26\}$, we give explicit rational primes $\ell$ such that the fixed field of the mod-$\ell$ representation attached to the unique normalized cusp eigenform of weight $k$ on $\mathrm{SL}_2(\mathbb{Z})$ has an infinite class field tower. Under a conjecture of Hardy and Littlewood, we further prove that there exist infinitely many such primes for each $k$ (in the above list). Second, given a non-CM curve $E/\mathbb{Q}$, we show that there exists an integer $m_E$ such that the fixed field of the representation attached to the $n$-division points of $E$ has an infinite class field tower for a set of integers $n$ of density one among integers coprime to $m_E$.
arxiv:1005.3003
We explain the methodology we developed for improving the interactions accomplished by an embedded conversational agent, drawing from conversation-analytic sequential and multimodal analysis. The use case is a Pepper robot that is expected to inform and orient users in a library. In order to propose and learn better interactive schema, we are creating a corpus of naturally occurring interactions that will be made available to the community. To do so, we propose an annotation practice based on some theoretical underpinnings about the use of language and multimodal resources in human-robot interaction. CCS Concepts: $\bullet$ Computing methodologies $\rightarrow$ Discourse, dialogue and pragmatics; $\bullet$ Human-centered computing $\rightarrow$ Text input; HCI theory, concepts and models; Field studies.
arxiv:2308.15097
Recently, it has been shown that altering the natural collisional power flow of the proton-boron 11 (pB11) fusion reaction can significantly reduce the Lawson product of ion density and confinement time required to achieve ignition. However, these products are still onerous -- on the order of $7 \times 10^{15}$ cm$^{-3}$ s under the most optimistic scenarios. Fortunately, a breakeven fusion power plant does not require an igniting plasma, but rather a reactor that produces more electrical power than it consumes. Here, we extend the existing 0D power balance analysis to check the conditions for power plant breakeven. We find that even for the base thermonuclear reaction, modern high-efficiency thermal engines should reduce the Lawson product to $1.2 \times 10^{15}$ cm$^{-3}$ s. We then explore the impact of several potential improvements, including fast proton heating, alpha power capture, direct conversion, and efficient heating. We find that such improvements could reduce the required Lawson product by a further order of magnitude, bringing aneutronic fusion to within target ITER design parameters.
arxiv:2310.18508
There are various notions of positivity for matrices and linear matrix-valued maps that play important roles in quantum information theory. The cones of positive semidefinite matrices and completely positive linear maps, which represent quantum states and quantum channels respectively, are the most ubiquitous positive cones. There are also many natural cones that can be regarded as "more" or "less" positive than these standard examples. In particular, entanglement theory deals with the cones of separable operators and entanglement witnesses, which satisfy very strong and weak positivity properties respectively. Rather complementary to the various cones that arise in entanglement theory are norms. The trace and operator norms for operators, and the diamond and completely bounded norms for superoperators, are the typical norms that are seen throughout quantum information theory. In this work our main goal is to develop a family of norms that play a role analogous to the cone of entanglement witnesses. We investigate the basic mathematical properties of these norms, including their relationships with other well-known norms, their isometry groups, and their dual norms. We also make the place of these norms in entanglement theory rigorous by showing that entanglement witnesses arise from minimal operator systems, and analogously our norms arise from minimal operator spaces. Finally, we connect the various cones and norms considered here to several seemingly unrelated problems from other areas. We characterize the problem of whether or not non-positive partial transpose bound entangled states exist in terms of one of our norms, and provide evidence in favour of their existence. We also characterize the minimum gate fidelity of a quantum channel, the maximum output purity and its completely bounded counterpart, and the geometric measure of entanglement in terms of these norms.
arxiv:1207.1479
These are the notes for a series of lectures at the Institute of Geometry and Topology of the University of Stuttgart, Germany, on July 13-15, 2022. We assume basic knowledge of isometric actions on Riemannian manifolds, including the normal slice theorem and the principal orbit type theorem. Lecture 1 introduces polar actions and culminates with Heintze, Liu, and Olmos's argument to characterize them in terms of integrability of the distribution of normal spaces to the principal orbits. The other two lectures are devoted to two of Lytchak and Thorbergsson's results. In Lecture 2 we briefly review Riemannian orbifolds from the metric point of view, and explain their characterization of orbifold points in the orbit space of a proper and isometric action in terms of polarity of the slice representation above. In Lecture 3 we present their proof of the fact that variationally complete actions in the sense of Bott and Samelson on non-negatively curved manifolds are hyperpolar. The appendix contains explanations of some results used in the lectures, namely: a more or less self-contained derivation of Wilking's transversal Jacobi equation; a discussion of Cartan's and Hermann's criteria for the existence of totally geodesic submanifolds; and a criterion for the polarity of isometric actions on symmetric spaces.
arxiv:2208.03577
We present an analysis of sixteen galaxy clusters, one group, and one galaxy drawn from the Chandra X-ray Observatory's data archive. These systems possess prominent X-ray surface brightness depressions associated with cavities or bubbles that were created by interactions between powerful radio sources and the surrounding hot gas. The minimum energy associated with the cavities ranges between pV ~ 10^55 erg in galaxies, groups, and poor clusters and pV ~ 10^60 erg in rich clusters. We evaluate the hypothesis that cooling in the hot gas can be quenched by energy injected into the surrounding gas by the rising bubbles. Nearly half of the systems in this study may have instantaneous mechanical luminosities large enough to balance cooling, at least for a short period of time, if the cavities are filled with a relativistic gas. We find a trend, or upper envelope, in the distribution of central X-ray luminosity versus instantaneous mechanical luminosity, in the sense that the most powerful cavities are found in the most X-ray-luminous systems. Such a trend would be expected if many of these systems produce bubbles at a rate that scales in proportion to the cooling rate of the surrounding gas. Finally, we use the X-ray cavities to measure the mechanical power of radio sources over six decades of radio luminosity, independently of the radio properties themselves. We find that the ratio of the instantaneous mechanical (kinetic) luminosity to the 1.4 GHz synchrotron luminosity ranges from a few to roughly a thousand. This wide range implies that the 1.4 GHz synchrotron luminosity is an unreliable gauge of the mechanical power of radio sources.
arxiv:astro-ph/0402348
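For reference, the cavity energies quoted above in units of pV are commonly converted to a total enthalpy by treating the bubble contents as an ideal gas with adiabatic index $\gamma$; this is the standard convention in the cavity-power literature, stated here as background rather than as this paper's specific choice:

\[
  E_{\mathrm{cav}} \;=\; \frac{\gamma}{\gamma - 1}\, pV \;=\; 4\,pV
  \quad \text{for a relativistic gas } (\gamma = 4/3).
\]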
The second-order post-Newtonian solution for light propagation in the field of a Kerr-Newman black hole is obtained via an iterative method. Based on this result, we further obtain the second-order post-Newtonian light deflection in Kerr-Newman spacetime, which is formulated in a unified form for arbitrary incident directions. All results are exhibited in the coordinate system constituted by the initial light-direction vector, the impact vector, and their cross product.
arxiv:1802.02331
For another decade, students were able to access class information with linked computer terminals. Online learning emerged in 1982 when the Western Behavioral Sciences Institute in La Jolla, California, opened its School of Management and Strategic Studies. The school employed computer conferencing through the New Jersey Institute of Technology's Electronic Information Exchange System (EIES) to deliver a distance education program to business executives. Starting in 1985, Connected Education offered the first totally online master's degree in media studies, through The New School in New York City, also via the EIES computer conferencing system. Subsequent courses were offered in 1986 by the Electronic University Network for DOS and Commodore 64 computers. In 2002, MIT began providing online classes free of charge. As of 2009, approximately 5.5 million students were taking at least one class online. Currently, one out of three college students takes at least one online course while in college. At DeVry University, out of all students that are earning a bachelor's degree, 80% earn two-thirds of their requirements online. Also, in 2014, 2.85 million students out of 5.8 million students that took courses online took all of their courses online. From this information, it can be concluded that the number of students taking classes online is on a steady increase. In 1971, Ivan Illich published a hugely influential book, Deschooling Society, in which he envisioned "learning webs" as a model for people to network the learning they needed. The 1970s and 1980s saw notable contributions in computer-based learning by Murray Turoff and Starr Roxanne Hiltz at the New Jersey Institute of Technology as well as developments at the University of Guelph in Canada. In the UK, the Council for Educational Technology supported the use of educational technology, in particular administering the government's National Development Programme in Computer Aided Learning (1973–1977) and the Microelectronics Education Programme (1980–1986). Videoconferencing was an important forerunner to the educational technologies known today. This work was especially popular with museum education. Even in recent years, videoconferencing has risen in popularity to reach over 20,000 students across the United States and Canada in 2008–2009. Disadvantages of this form of educational technology are readily apparent: image and sound quality are often grainy or pixelated; videoconferencing requires setting up a type of mini television studio within the museum for broadcast; space becomes an issue; and specialized equipment is required for both
https://en.wikipedia.org/wiki/Educational_technology
In the homogeneous space Sol$_3$, a translation surface is parameterized by $x(s, t) = \alpha(s) \ast \beta(t)$, where $\alpha$ and $\beta$ are curves contained in coordinate planes and $\ast$ denotes the group operation of Sol$_3$. In this paper we study translation surfaces in Sol$_3$ whose mean curvature vanishes.
arxiv:1010.1085
By absorbing fluctuations into a local background, separate universe simulations provide a powerful technique to characterize the response of small-scale observables to long-wavelength density fluctuations, for example those of the power spectrum and halo mass function, which lead to the squeezed-limit $n$-point function and halo bias, respectively. Using quintessence dark energy as the paradigmatic example, we extend these simulation techniques to cases where non-gravitational forces in other sectors establish a Jeans scale across which the growth of density fluctuations becomes scale dependent. By characterizing the separate universes with matching background expansion histories, we show that the power spectrum and mass function responses depend on whether the long-wavelength mode is above or below the Jeans scale. Correspondingly, the squeezed bispectrum and halo bias also become scale dependent. Models of bias that are effectively local in the density field at a single epoch, initial or observed, cannot describe this effect, which highlights the importance of temporal nonlocality in structure formation. Validated by these quintessence tests, our techniques are applicable to a wide range of models where the complex dynamics of additional fields affect the clustering of matter in the linear regime and where it would otherwise be difficult to simulate their impact in the nonlinear regime.
arxiv:1609.01701
utilization of triplet excitons, which generally emit poorly, is always fundamental to realize highly efficient organic light - emitting diodes ( leds ). while triplet harvest and energy transfer via electron exchange between triplet donor and acceptor are fully understood in doped organic phosphorescence and delayed fluorescence systems, the utilization and energy transfer of triplet excitons in quasi - two - dimensional ( quasi - 2d ) perovskite are still ambiguous. here, we use an orange - phosphorescence - emitting ultrathin organic layer to probe triplet behavior in the sky - blue - emitting quasi - 2d perovskite. the delicate white leds architecture enables a carefully tailored dexter - like energy - transfer mode that largely rescues the triplet excitons in quasi - 2d perovskite. our white organic - inorganic leds achieve maximum forward - viewing external quantum efficiency of 8. 6 % and luminance over 15000 cd m - 2, exhibiting a significant efficiency enhancement versus the corresponding sky - blue perovskite led ( 4. 6 % ). the efficient management of energy transfer between excitons in quasi - 2d perovskite and frenkel excitons in organic layer opens the door to fully utilizing excitons for white organic - inorganic leds.
arxiv:2112.00946
we construct solutions of non - uniform black strings in dimensions from $ d \ approx 9 $ all the way up to $ d = \ infty $, and investigate their thermodynamics and dynamical stability. our approach employs the large - $ d $ perturbative expansion beyond the leading order, including corrections up to $ 1 / d ^ 4 $. combining both analytical techniques and relatively simple numerical solution of odes, we map out the ranges of parameters in which non - uniform black strings exist in each dimension and compute their thermodynamics and quasinormal modes with accuracy. we establish with very good precision the existence of sorkin ' s critical dimension and we prove that not only the thermodynamic stability, but also the dynamic stability of the solutions changes at it.
arxiv:1802.08191
sigma clipping is commonly used in astronomy for outlier rejection, but the number of standard deviations beyond which one should clip data from a sample ultimately depends on the size of the sample. chauvenet rejection is one of the oldest, and simplest, ways to account for this, but, like sigma clipping, depends on the sample ' s mean and standard deviation, neither of which are robust quantities : both are easily contaminated by the very outliers they are being used to reject. many, more robust measures of central tendency, and of sample deviation, exist, but each has a tradeoff with precision. here, we demonstrate that outlier rejection can be both very robust and very precise if decreasingly robust but increasingly precise techniques are applied in sequence. to this end, we present a variation on chauvenet rejection that we call " robust " chauvenet rejection ( rcr ), which uses three decreasingly robust / increasingly precise measures of central tendency, and four decreasingly robust / increasingly precise measures of sample deviation. we show this sequential approach to be very effective for a wide variety of contaminant types, even when a significant - - even dominant - - fraction of the sample is contaminated, and especially when the contaminants are strong. furthermore, we have developed a bulk - rejection variant, to significantly decrease computing times, and rcr can be applied both to weighted data, and when fitting parameterized models to data. we present aperture photometry in a contaminated, crowded field as an example. rcr may be used by anyone at https : / / skynet. unc. edu / rcr, and source code is available there as well.
arxiv:1807.05276
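the sequential idea described above (start from robust but imprecise estimates, then refine) can be illustrated with a minimal chauvenet-style rejection loop. this is not the authors' released rcr code (available at the skynet url in the abstract); it is only a sketch, and the robust centre/scale choices here are simplified stand-ins for the measures used in the paper.

import numpy as np
from scipy.stats import norm

def chauvenet_style_reject(values, robust=True, max_iter=100):
    # iteratively reject points whose expected count at that deviation is < 0.5
    x = np.asarray(values, dtype=float)
    keep = np.ones(x.size, dtype=bool)
    for _ in range(max_iter):
        sample = x[keep]
        n = sample.size
        if n < 3:
            break
        if robust:
            mu = np.median(sample)                            # robust centre
            sigma = np.percentile(np.abs(sample - mu), 68.3)  # robust scale proxy
        else:
            mu, sigma = sample.mean(), sample.std(ddof=1)     # classical, easily contaminated
        if sigma <= 0:
            break
        tail_prob = 2.0 * norm.sf(np.abs(x - mu) / sigma)     # two-sided gaussian tail probability
        new_keep = keep & (n * tail_prob >= 0.5)              # chauvenet criterion
        if new_keep.sum() == keep.sum():
            break
        keep = new_keep
    return keep

running this first with robust=True and then re-estimating the centre and scale from the surviving points mimics, very loosely, the decreasingly robust / increasingly precise sequence the abstract describes.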
we discuss the properties of 137 cataclysmic variables ( cvs ) which are included in the sloan digital sky survey ( sdss ) spectroscopic data base, and for which accurate orbital periods have been measured. 92 of these systems are new discoveries from sdss and were followed - up in more detail over the past few years. 45 systems were previously identified as cvs because of the detection of optical outbursts and / or x - ray emission, and subsequently re - identified from the sdss spectroscopy. the period distribution of the sdss cvs differs dramatically from that of all the previously known cvs, in particular it contains a significant accumulation of systems in the orbital period range 80 - - 86 min. we identify this feature as the elusive " period minimum spike " predicted by cv population models, which resolves a long - standing discrepancy between compact binary evolution theory and observations. we show that this spike is almost entirely due to the large number of cvs with very low accretion activity identified by sdss. the optical spectra of these systems are dominated by emission from the white dwarf photosphere, and display little or no spectroscopic signature from the donor stars, suggesting very low - mass companion stars. we determine the average absolute magnitude of these low - luminosity cvs at the period minimum to be < m _ g > = 11. 6 + - 0. 7. comparison of the sdss cv sample to the cvs found in the hamburg quasar survey and the palomar green survey suggests that the depth of sdss is the key ingredient resulting in the discovery of a large number of intrinsically faint short - period systems.
arxiv:0905.3476
querying complex models for precise information ( e. g. traffic models, database systems, large ml models ) often entails intense computations and results in long response times. thus, weaker models which give imprecise results quickly can be advantageous, provided inaccuracies can be resolved using few queries to a stronger model. in the fundamental problem of computing a maximum - weight basis of a matroid, a well - known generalization of many combinatorial optimization problems, algorithms have access to a clean oracle to query matroid information. we additionally equip algorithms with a fast but dirty oracle modelling an unknown, potentially different matroid. we design and analyze practical algorithms which only use few clean queries w. r. t. the quality of the dirty oracle, while maintaining robustness against arbitrarily poor dirty matroids, approaching the performance of classic algorithms for the given problem. notably, we prove that our algorithms are, in many respects, best - possible. further, we outline extensions to other matroid oracle types, non - free dirty oracles and other matroid problems.
arxiv:2402.02774
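for orientation, the classical baseline that the clean/dirty-oracle algorithms are measured against is the matroid greedy algorithm, which queries a single (clean) independence oracle once per element. the sketch below shows only this baseline; the paper's contribution lies in interleaving a fast dirty oracle so that far fewer clean queries are needed.

def greedy_max_weight_basis(elements, weight, is_independent):
    # classic matroid greedy: scan elements by decreasing weight and keep an
    # element whenever the clean oracle says the result is still independent
    basis = []
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(basis + [e]):
            basis.append(e)
    return basis

# example with a uniform matroid of rank 2 (any set of size <= 2 is independent)
items = ["a", "b", "c", "d"]
w = {"a": 3.0, "b": 5.0, "c": 1.0, "d": 4.0}
print(greedy_max_weight_basis(items, lambda e: w[e], lambda s: len(s) <= 2))  # ['b', 'd']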
strong octupole correlations have been observed in the low - lying states of atomic nuclei across various mass regions. in this review, we provide an overview of beyond mean - field ( bmf ) studies of nuclear octupole collective motions with generator coordinate method ( gcm ) in combination with quantum - number projections that are implemented to restore the broken symmetries in nuclear mean - field states. we highlight recent developments within this framework and their applications to excitation spectra and electromagnetic transition rates in octupole - shaped nuclei and hypernuclei. we discuss the novel phenomena of nucleon clustering in light nuclei. additionally, we explore the phase transition from octupole vibrations to rotational motions as spin increases in heavy nuclei. lastly, we examine the status and future prospects of studies on octupole deformation effects in nuclear schiff moments. these studies, along with the upper limits of atomic electric dipole moment ( edm ), impose stringent constraints on beyond - standard - model time - reversal - violating nucleon - nucleon interactions.
arxiv:2309.09488
we calculate one - loop correction to the two - point functions of curvature perturbation in single - field inflation generated by cubic self - interaction. incorporating the observed red - tilted spectrum of curvature perturbation, the relevant one - loop correction takes a finite value and is inversely proportional to the spectral tilt. requiring one - loop correction to be much smaller than the tree - level contribution leads to an upper bound on primordial non - gaussianity. while the observationally allowed region of non - gaussian parameter space is found to be entirely included in the region where one - loop correction is smaller than the tree - level contribution, an appreciably large region has one - loop correction larger than 1 % or even 10 % of the latter. if future observations conclude non - gaussianity falls in such a region, then it would be important to incorporate higher - order corrections to the spectrum in order to achieve precise cosmology. in some extreme cases, where one - loop correction has a comparable magnitude to the tree - level contribution, it might indicate a breakdown of the cosmological perturbation theory in the context of single - field inflation.
arxiv:2204.05202
in our first article in this series ( " modular invariant of quantum tori i : definitions nonstandard and standard " arxiv : 0909. 0143 ) a modular invariant of quantum tori was defined. in this paper, we consider the case of the quantum torus associated to the golden mean. we show that the modular invariant is approximately 9538. 249655644 by producing an explicit formula for it involving weighted versions of the rogers - ramanujan functions.
arxiv:1204.2540
terminological knowledge representation systems ( tkrss ) are tools for designing and using knowledge bases that make use of terminological languages ( or concept languages ). we analyze from a theoretical point of view a tkrs whose capabilities go beyond the ones of presently available tkrss. the new features studied, often required in practical applications, can be summarized in three main points. first, we consider a highly expressive terminological language, called alcnr, including general complements of concepts, number restrictions and role conjunction. second, we allow to express inclusion statements between general concepts, and terminological cycles as a particular case. third, we prove the decidability of a number of desirable tkrs - deduction services ( like satisfiability, subsumption and instance checking ) through a sound, complete and terminating calculus for reasoning in alcnr - knowledge bases. our calculus extends the general technique of constraint systems. as a byproduct of the proof, we get also the result that inclusion statements in alcnr can be simulated by terminological cycles, if descriptive semantics is adopted.
arxiv:cs/9312101
the nature of the merger remnant of binary neutron star ( bns ) remains an open question. from the theoretical point of view, one possible outcome is a supra - massive neutron star ( smns ), which is supported by rigid rotation and survives for hundreds of seconds before collapsing into a black hole ( bh ). if this is the case, the smns can emit continuous gravitational waves ( gw ) and electromagnetic ( em ) radiation, particularly in the x - ray band. in this work, the ellipticity and initial frequency of the smns are constrained with a bayesian framework using simulated x - ray and gw signals, which could be detected by the transient high energy sky and early universe surveyor ( theseus ) and einstein telescope ( et ), respectively. we found that considering the x - ray emission alone cannot completely constrain the initial frequency and ellipticity of the smns, but it can reduce the ranges of the parameters. afterwards, we can use the posterior distribution of the x - ray parameter estimates as a prior for the gw parameter estimates. it was found that the 95 $ \ % $ credible region of the joint x - ray - gw analysis was about $ 10 ^ 5 $ times smaller than that of the x - ray analysis alone.
arxiv:2305.01364
this paper introduces cofie, a novel local geometry - aware neural surface representation. cofie is motivated by the theoretical analysis of local sdfs with quadratic approximation. we find that local shapes are highly compressive in an aligned coordinate frame defined by the normal and tangent directions of local shapes. accordingly, we introduce coordinate field, which is a composition of coordinate frames of all local shapes. the coordinate field is optimizable and is used to transform the local shapes from the world coordinate frame to the aligned shape coordinate frame. it largely reduces the complexity of local shapes and benefits the learning of mlp - based implicit representations. moreover, we introduce quadratic layers into the mlp to enhance expressiveness concerning local shape geometry. cofie is a generalizable surface representation. it is trained on a curated set of 3d shapes and works on novel shape instances during testing. when using the same amount of parameters with prior works, cofie reduces the shape error by 48 % and 56 % on novel instances of both training and unseen shape categories. moreover, cofie demonstrates comparable performance to prior works when using only 70 % fewer parameters.
arxiv:2406.03417
in practical chiller systems, applying efficient fault diagnosis techniques can significantly reduce energy consumption and improve energy efficiency of buildings. the success of the existing methods for fault diagnosis of chillers relies on the condition that sufficient labeled data are available for training. however, label acquisition is laborious and costly in practice. usually, the number of labeled data is limited and most data available are unlabeled. the existing methods cannot exploit the information contained in unlabeled data, which significantly limits the improvement of fault diagnosis performance in chiller systems. to make effective use of unlabeled data to further improve fault diagnosis performance and reduce the dependency on labeled data, we proposed a novel semi - supervised data - driven fault diagnosis method for chiller systems based on the semi - generative adversarial network, which incorporates both unlabeled and labeled data into learning process. the semi - generative adversarial network can learn the information of data distribution from unlabeled data and this information can help to significantly improve the diagnostic performance. experimental results demonstrate the effectiveness of the proposed method. under the scenario that there are only 80 labeled samples and 16000 unlabeled samples, the proposed method can improve the diagnostic accuracy to 84 %, while the supervised baseline methods only reach the accuracy of 65 % at most. besides, the minimal required number of labeled samples can be reduced by about 60 % with the proposed method when there are enough unlabeled samples.
arxiv:2011.00187
nowadays, driven by the increasing concern on diet and health, food computing has attracted enormous attention from both industry and research community. one of the most popular research topics in this domain is food retrieval, due to its profound influence on health - oriented applications. in this paper, we focus on the task of cross - modal retrieval between food images and cooking recipes. we present modality - consistent embedding network ( mcen ) that learns modality - invariant representations by projecting images and texts to the same embedding space. to capture the latent alignments between modalities, we incorporate stochastic latent variables to explicitly exploit the interactions between textual and visual features. importantly, our method learns the cross - modal alignments during training but computes embeddings of different modalities independently at inference time for the sake of efficiency. extensive experimental results clearly demonstrate that the proposed mcen outperforms all existing approaches on the benchmark recipe1m dataset and requires less computational cost.
arxiv:2004.01095
we analyse the two definitions of generalized quantifiers for logics of dependence and independence that have been proposed by f. engstr \ " om, comparing them with a more general, higher - order definition of team quantifier. we show that engstr \ " om ' s definitions ( and other quantifiers from the literature ) can be identified, by means of appropriate lifts, with special classes of team quantifiers. we point out that the new team quantifiers express a quantitative and a qualitative component, while engstr \ " om ' s quantifiers only range over the latter. we further argue that engstr \ " om ' s definitions are just embeddings of the first - order generalized quantifiers into team semantics, and fail to capture an adequate notion of team - theoretical generalized quantifier, save for the special cases in which the quantifiers are applied to flat formulas. we also raise several doubts concerning the meaningfulness of the monotone / nonmonotone distinction in this context. in the appendix we develop some proof theory for engstr \ " om ' s quantifiers.
arxiv:1709.07301
we provide an overview of the 3rd generation partnership project ( 3gpp ) work on evolving the 5g wireless technology to support non - terrestrial satellite networks. adapting 5g to support non - terrestrial networks entails a holistic design spanning across multiple areas from radio access network to services and system aspects to core and terminals. in this article, we describe the main topics of non - terrestrial networks, explain in detail the design aspects, and share various design rationales influencing standardization.
arxiv:2103.09156
the set of all cancellable elements of the lattice of semigroup varieties has recently been shown to be countably infinite. but the description of all cancellable elements of the lattice $ \ mathbb { mon } $ of monoid varieties remains unknown. this problem is addressed in the present article. the first example of a monoid variety with modular but non - distributive subvariety lattice is first exhibited. then a necessary condition of the modularity of an element in $ \ mathbb { mon } $ is established. these results play a crucial role in the complete description of all cancellable elements of the lattice $ \ mathbb { mon } $. it turns out that there are precisely five such elements.
arxiv:2101.02418
context. low - mass bodies, such as comets, asteroids, planetesimals, and free - floating planets, are continuously injected into the intra - cluster environment after expulsion from their host planetary systems. these can be modeled as massless particles ( mlps, hereafter ). the dynamics of large populations of mlps, however, has so far received little attention in the literature. aims. we investigate the dynamical evolution of mlp populations in star clusters, and characterize their kinematics and ejection rates. methods. we present nbody6 + + gpu - massless, a modified version of the n - body simulation code nbody6 + + gpu, that allows fast integration of star clusters that contain large numbers of massless particles ( mlps ). nbody6 + + gpu - massless contains routines specifically directed at the dynamical evolution of low - mass bodies, such as planets. results. unlike stars, mlps do not participate in the mass segregation process. instead, mlps mostly follow the gravitational potential of the star cluster, which gradually decreases over time due to stellar ejections and stellar evolution. the dynamical evolution of mlps is primarily affected by the evolution of the core of the star cluster. this is most apparent in the outer regions for clusters with higher initial densities. high escape rates of mlps are observed before the core collapse, after which escape rates remain stable. denser star clusters undergo a more intense core collapse, but this does not impact the dynamical evolution of mlps. the speeds of escaping stars are similar to those of escaping mlps, when disregarding the high - velocity ejections of neutron stars during the first 50 myr.
arxiv:2412.08785
despite profound successes, contrastive representation learning relies on carefully designed data augmentations using domain specific knowledge. this challenge is magnified in natural language processing where no general rules exist for data augmentation due to the discrete nature of natural language. we tackle this challenge by presenting a virtual augmentation supported contrastive learning of sentence representations ( vascl ). originating from the interpretation that data augmentation essentially constructs the neighborhoods of each training instance, we in turn utilize the neighborhood to generate effective data augmentations. leveraging the large training batch size of contrastive learning, we approximate the neighborhood of an instance via its k - nearest in - batch neighbors in the representation space. we then define an instance discrimination task regarding this neighborhood and generate the virtual augmentation in an adversarial training manner. we assess the performance of vascl on a wide range of downstream tasks, and set a new state - of - the - art for unsupervised sentence representation learning.
arxiv:2110.08552
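the neighbourhood construction described above can be approximated very simply: normalise the in-batch representations and take each instance's k most similar batch mates. the snippet below is a schematic of that step only (the adversarial generation of the virtual augmentation is not shown), and the function name is ours, not from the released code.

import numpy as np

def in_batch_neighbours(reps, k=5):
    # reps: (batch_size, dim) sentence representations for one training batch
    z = reps / np.linalg.norm(reps, axis=1, keepdims=True)  # unit-normalise
    sim = z @ z.T                                           # cosine similarities
    np.fill_diagonal(sim, -np.inf)                          # exclude self-similarity
    return np.argsort(-sim, axis=1)[:, :k]                  # indices of k nearest batch mates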
in mathematics, a knot is an embedding of the circle ( s1 ) into three - dimensional euclidean space, r3 ( also known as e3 ). often two knots are considered equivalent if they are ambient isotopic, that is, if there exists a continuous deformation of r3 which takes one knot to the other. a crucial difference between the standard mathematical and conventional notions of a knot is that mathematical knots are closed : there are no ends to tie or untie on a mathematical knot. physical properties such as friction and thickness also do not apply, although there are mathematical definitions of a knot that take such properties into account. the term knot is also applied to embeddings of sj in sn, especially in the case j = n - 2. the branch of mathematics that studies knots is known as knot theory and has many relations to graph theory. = = formal definition = = a knot is an embedding of the circle ( s1 ) into three - dimensional euclidean space ( r3 ), or the 3 - sphere ( s3 ), since the 3 - sphere is compact. two knots are defined to be equivalent if there is an ambient isotopy between them. = = = projection = = = a knot in r3 ( or alternatively in the 3 - sphere, s3 ), can be projected onto a plane r2 ( respectively a sphere s2 ). this projection is almost always regular, meaning that it is injective everywhere, except at a finite number of crossing points, which are the projections of only two points of the knot, and these points are not collinear. in this case, by choosing a projection side, one can completely encode the isotopy class of the knot by its regular projection by recording a simple over / under information at these crossings. in graph theory terms, a regular projection of a knot, or knot diagram is thus a quadrivalent planar graph with over / under - decorated vertices. the local modifications of this graph which allow to go from one diagram to any other diagram of the same knot ( up to ambient isotopy of the plane ) are called reidemeister moves. = = types of knots = = the simplest knot, called the unknot or trivial knot, is a round circle embedded in r3. in the ordinary sense of the word, the unknot is not " knotted " at all. the simplest nontrivial knots are the trefoil knot ( 31 in the table ), the figure - eight knot
https://en.wikipedia.org/wiki/Knot_(mathematics)
carrying information using generation and detection of the orbital current, instead of the spin current, is an emerging field of research, where the orbital hall effect ( ohe ) is an important ingredient. here, we propose a new mechanism of the ohe that occurs in { \ it non - } centrosymmetric materials. we show that the broken inversion symmetry in the 2d transition metal dichalcogenides ( tmdcs ) causes robust orbital moments, which flow in different directions due to the opposite berry curvatures under an applied electric field, leading to a large ohe. this is in complete contrast to the inversion - symmetric systems, where the orbital moment is induced only by the external electric field. we show that the valley - orbital locking as well as the ohe both appear even in the absence of the spin - orbit coupling. the non - zero spin - orbit coupling leads to the well - known valley - spin locking and the spin hall effect, which we find to be weak, making the tmdcs particularly suitable for direct observation of the ohe, with potential application in { \ it orbitronics }.
arxiv:2003.13181
we consider a landau - de gennes model for a suspension of small colloidal inclusions in a nematic host. we impose suitable anchoring conditions at the boundary of the inclusions, and we work in the dilute regime - i. e., the size of the inclusions is much smaller than the typical separation distance between them, so that the total volume occupied by the inclusions is small. by studying the homogenised limit, and proving rigorous convergence results for local minimisers, we compute the effective free energy for the doped material. in particular, we show that not only the phase transition temperature, but any coefficient of the quartic landau - de gennes bulk potential can be tuned, by suitably choosing the surface anchoring energy density.
arxiv:1901.03541
denoising diffusion models are a class of generative models which have recently achieved state - of - the - art results across many domains. gradual noise is added to the data using a diffusion process, which transforms the data distribution into a gaussian. samples from the generative model are then obtained by simulating an approximation of the time reversal of this diffusion initialized by gaussian samples. recent research has explored adapting diffusion models for sampling and inference tasks. in this paper, we leverage known connections to stochastic control akin to the f \ " ollmer drift to extend established neural network approximation results for the f \ " ollmer drift to denoising diffusion models and samplers.
arxiv:2305.09605
in this paper we ask whether the phenomenon of timing noise long known in electromagnetic pulsar astronomy is likely to be important in gravitational wave ( gw ) observations of spinning - down neutron stars. we find that timing noise is strong enough to be of importance only in the young pulsars, which must have larger triaxialities than theory predicts for their gw emission to be detectable. however, assuming that their gw emission is detectable, we list the pulsars for which timing noise is important, either because it is strong enough that its neglect by the observer would render the source undetectable, or else because it is a measurable feature of the gw signal. we also find that timing noise places a limit on the observation duration of a coherent blind gw search, and suggest that hierarchical search techniques might be able to cope with this problem. demonstration of the presence or absence of timing noise in the gw channel would give a new probe of neutron star physics.
arxiv:gr-qc/0406045
non - thermal electron acceleration efficiency.
arxiv:1710.06421
this paper provides the upper and lower bounds of blowup time and blowup rate as well as the exponential growth estimate of blowup solutions for a pseudo - parabolic equation with singular potential. these results complement the ones obtained in the previous literature.
arxiv:2405.11707
analytical arguments and dynamic monte carlo simulations show that the microstructure of field - driven solid - on - solid interfaces depends strongly on the dynamics. for nonconservative dynamics with transition rates that factorize into parts dependent only on the changes in interaction energy and field energy, respectively ( soft dynamics ), the intrinsic interface width is field - independent. for non - factorizing rates, such as the standard glauber and metropolis algorithms ( hard dynamics ), it increases with the field. consequences for the interface velocity and its anisotropy are discussed.
arxiv:cond-mat/0107615
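the distinction drawn above is between transition rates that factorize into an interaction-energy part and a field-energy part (soft) and rates such as metropolis that depend only on the total energy change (hard). the sketch below shows one common choice of each; it is illustrative, not necessarily the exact rates used in the paper.

import numpy as np

def hard_metropolis_rate(d_e_int, d_e_field, beta=1.0):
    # hard dynamics: rate depends on the total energy change and does not factorize
    return min(1.0, np.exp(-beta * (d_e_int + d_e_field)))

def soft_glauber_rate(d_e_int, d_e_field, beta=1.0):
    # soft dynamics: product of a glauber-like factor for the interaction part
    # and a glauber-like factor for the field part
    return 1.0 / (1.0 + np.exp(beta * d_e_int)) / (1.0 + np.exp(beta * d_e_field))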
we use the inverse mean curvature flow to prove a sharp alexandrov - fenchel - type inequality for a class of hypersurfaces in certain locally hyperbolic manifolds. as an application we derive an optimal penrose inequality for asymptotically locally hyperbolic graphs in any dimension $ n \ geq 3 $. when the horizon has the topology of a compact surface of genus at least one, this provides an affirmative answer, for this class of initial data sets, to a question posed by gibbons, chru \ ' sciel and simon on the validity of a penrose - type inequality for black hole solutions carrying a higher genus horizon.
arxiv:1304.7887
using scanning tunneling microscopy and ginzburg - landau simulations we explore vortex configurations in magnetically coupled nbse $ _ 2 $ - permalloy superconductor - ferromagnet bilayer. the permalloy film with stripe domain structure induces periodic local magnetic induction in the superconductor creating a series of pinning - antipinning channels for externally added magnetic flux quanta. such laterally confined abrikosov vortices form quasi - 1d arrays ( chains ). the transitions between multichain states occur through propagation of kinks at the intermediate fields. at high fields we show that the system becomes non - linear due to a change in both the number of vortices and the confining potential. the longitudinal instabilities of the resulting vortex structures lead to vortices ` levitating ' in the anti - pinning channels.
arxiv:0910.3030
the classical multiplicative ergodic theorem ( met ) of oseledets is generalized here to cocycles taking values in a semi - finite von neumann algebra. this allows for a continuous lyapunov distribution.
arxiv:2006.13293
the ( non - initialized, non - deterministic ) asynchronous systems ( in the input - output sense ) are multi - valued functions from m - dimensional signals to sets of n - dimensional signals, the concept being inspired by the modeling of the asynchronous circuits. our purpose is to state the problem of their stability.
arxiv:cs/0410075
we report charge sensing measurements of a silicon metal - oxide - semiconductor quantum dot using a single - electron transistor as a charge sensor with dynamic feedback control. using digitally controlled feedback, the sensor exhibits sensitive and robust detection of the charge state of the quantum dot, even in the presence of charge drifts and random charge rearrangements. the sensor enables the occupancy of the quantum dot to be probed down to the single electron level.
arxiv:1107.1557
in this paper, we propose a new aperiodic formulation of model predictive control for nonlinear continuous - time systems. unlike earlier approaches, we provide event - triggered conditions without using the optimal cost as a lyapunov function candidate. instead, we evaluate the time interval when the optimal state trajectory enters a local set around the origin. the obtained event - triggered strategy is more suitable for practical applications than the earlier approaches in two directions. first, it does not include parameters ( e. g., lipschitz constant parameters of stage and terminal costs ) which may be a potential source of conservativeness for the event - triggered conditions. second, the event - triggered conditions need to be checked only at certain sampling time instants, instead of continuously. this alleviates the sensing cost and makes the approach more suitable for practical implementation on a digital platform. the proposed event - triggered scheme is also validated through numerical simulations.
arxiv:1703.05088
and abbasid caliphate. in the byzantine empire, john philoponus, an alexandrian aristotelian commentator and christian theologian, was the first to question aristotle ' s physics teaching. unlike aristotle, who based his physics on verbal argument, philoponus instead relied on observation and argued for observation rather than resorting to a verbal argument. he introduced the theory of impetus. john philoponus ' criticism of aristotelian principles of physics served as inspiration for galileo galilei during the scientific revolution. a revival in mathematics and science took place during the time of the abbasid caliphate from the 9th century onward, when muslim scholars expanded upon greek and indian natural philosophy. the words alcohol, algebra and zenith all have arabic roots. = = = medieval natural philosophy ( 1100 – 1600 ) = = = aristotle ' s works and other greek natural philosophy did not reach the west until about the middle of the 12th century, when works were translated from greek and arabic into latin. the development of european civilization later in the middle ages brought with it further advances in natural philosophy. european inventions such as the horseshoe, horse collar and crop rotation allowed for rapid population growth, eventually giving way to urbanization and the foundation of schools connected to monasteries and cathedrals in modern - day france and england. aided by the schools, an approach to christian theology developed that sought to answer questions about nature and other subjects using logic. this approach, however, was seen by some detractors as heresy. by the 12th century, western european scholars and philosophers came into contact with a body of knowledge of which they had previously been ignorant : a large corpus of works in greek and arabic that were preserved by islamic scholars. through translation into latin, western europe was introduced to aristotle and his natural philosophy. these works were taught at new universities in paris and oxford by the early 13th century, although the practice was frowned upon by the catholic church. a 1210 decree from the synod of paris ordered that " no lectures are to be held in paris either publicly or privately using aristotle ' s books on natural philosophy or the commentaries, and we forbid all this under pain of ex - communication. " in the late middle ages, spanish philosopher dominicus gundissalinus translated a treatise by the earlier persian scholar al - farabi called on the sciences into latin, calling the study of the mechanics of nature scientia naturalis, or natural science. gundissalinus also proposed his classification of the natural sciences in his 1150 work on the division of philosophy. this was the first detailed classification
https://en.wikipedia.org/wiki/Natural_science
we have performed the underground dark matter search experiment with a sodium fluoride ( naf ) bolometer array from 2002 through 2003 at kamioka observatory ( 2700 m. w. e. ). the bolometer array consists of eight naf absorbers with a total mass of 176 g, and sensitive ntd germanium thermistors glued to each of them. this experiment aims for the direct detection of weakly interacting massive particles ( wimps ) via spin - dependent interaction. with an exposure of 3. 38 kg days, we derived the limits on the wimp - nucleon coupling coefficients, a _ p and a _ n. these limits confirmed and tightened those derived from our previous results with the lithium fluoride ( lif ) bolometer. our results excluded the parameter space complementary to the results obtained by nai detectors of ukdmc experiment.
arxiv:astro-ph/0306365
estimating the leverage effect from high - frequency data is vital but challenged by complex, dependent microstructure noise, often exhibiting non - gaussian higher - order moments. this paper introduces a novel multi - scale framework for efficient and robust leverage effect estimation under such flexible noise structures. we develop two new estimators, the subsampling - and - averaging leverage effect ( sale ) and the multi - scale leverage effect ( msle ), which adapt subsampling and multi - scale approaches holistically using a unique shifted window technique. this design simplifies the multi - scale estimation procedure and enhances noise robustness without requiring the pre - averaging approach. we establish central limit theorems and stable convergence, with msle achieving convergence rates of an optimal $ n ^ { - 1 / 4 } $ and a near - optimal $ n ^ { - 1 / 9 } $ for the noise - free and noisy settings, respectively. a cornerstone of our framework ' s efficiency is a specifically designed msle weighting strategy that leverages covariance structures across scales. this significantly reduces asymptotic variance and, critically, yields substantially smaller finite - sample errors than existing methods under both noise - free and realistic noisy settings. extensive simulations and empirical analyses confirm the superior efficiency, robustness, and practical advantages of our approach.
arxiv:2505.08654
sociometric badges are an emerging technology for studying how teams interact in physical places. audio data recorded by sociometric badges is often downsampled so as not to record the discussions of the sociometric badge holders. to gain more information about interactions inside teams with sociometric badges, a voice activity detector ( vad ) is deployed to measure verbal activity of the interaction. detecting voice activity from downsampled audio data is challenging because down - sampling destroys information from the data. we developed a vad using deep learning techniques that achieves only moderate accuracy in a low noise meeting setting and across variable noise levels despite excellent validation performance. experiences and lessons learned while developing the vad are discussed.
arxiv:2108.05553
implies an ethical change from describing and explaining of the existing world to shaping it. one can question the values of information system research, i. e., whose values and what values dominate it, emphasizing that research may openly or latently serve the interests of particular dominant groups. the interests served may be those of the host organization as perceived by its top management, those of information system users, those of information system professionals or potentially those of other stakeholder groups in society. = = academic examples of design science research = = there are limited references to examples of dsr, but adams has completed two phd research topics using peffers et al. ' s dsrp ( both associated with digital forensics but from different perspectives ) : 2013 : the advanced data acquisition model ( adam ) : a process model for digital forensic practice 2024 : the advanced framework for evaluating remote agents ( afera ) : a framework for digital forensic practitioners = = see also = = empirical research action research participant observation case study design thinking = = references = = = = research examples = = adams, r., hobbs, v., mann, g., ( 2013 ). the advanced data acquisition model ( adam ) : a process model for digital forensic practice. url : http : / / researchrepository. murdoch. edu. au / id / eprint / 14422 / 2 / 02whole. pdf = = further reading = = march, s. t., smith, g. f., ( 1995 ). design and natural science research on information technology. decision support systems, 15 ( 4 ), pp. 251 – 266. march, s. t., storey, v. c., ( 2008 ). design science in the information systems discipline : an introduction to the special issue on design science research, mis quarterly, vol. 32 ( 4 ), pp. 725 – 730. mettler t, eurich m, winter r ( 2014 ). " on the use of experiments in design science research : a proposition of an evaluation framework ". communications of the ais. 34 ( 1 ) : 223 – 240. opdenakker, raymond en carin cuijpers ( 2019 ), ’ effective virtual project teams : a design science approach to building a strategic momentum ’, springer verlag. van aken, j. e. ( 2004 ). management research based on the paradigm of the design sciences : the quest for field - tested and grounded technological rules. journal of management studies, 41 ( 2
https://en.wikipedia.org/wiki/Design_science_(methodology)
numerical weather prediction ( nwp ) system is an infrastructure that exerts considerable impacts on modern society. the traditional nwp system, however, resolves the prediction task by solving complex partial differential equations with a huge computing cluster, resulting in tons of carbon emission. exploring efficient and eco - friendly solutions for nwp attracts interest from artificial intelligence ( ai ) and earth science communities. to narrow the performance gap between the ai - based methods and the physics - based predictor, this work proposes a new transformer - based nwp framework, termed weatherformer, to model the complex spatio - temporal atmosphere dynamics and empower the capability of data - driven nwp. weatherformer innovatively introduces the space - time factorized transformer blocks to decrease the parameters and memory consumption, in which position - aware adaptive fourier neural operator ( pafno ) is proposed for location sensible token mixing. besides, two data augmentation strategies are utilized to boost the performance and decrease training consumption. extensive experiments on weatherbench dataset show weatherformer achieves superior performance over existing deep learning methods and further approaches the most advanced physical model.
arxiv:2409.16321
we generalize the strategy that we recently introduced to prove the existence of the thermodynamic limit for the sherrington - kirkpatrick and p - spin models to a wider class of mean field spin glass systems, including models with multi - component and non - ising type spins, mean field spin glasses with an additional curie - weiss interaction, and systems consisting of several replicas of the spin glass model, where replicas are coupled with terms depending on the mutual overlaps.
arxiv:cond-mat/0208579
chinese character recognition has attracted much research interest due to its wide applications. although it has been studied for many years, some issues in this field have not been completely resolved yet, e. g. the zero - shot problem. previous character - based and radical - based methods have not fundamentally addressed the zero - shot problem since some characters or radicals in test sets may not appear in training sets under a data - hungry condition. inspired by the fact that humans can generalize to know how to write characters unseen before if they have learned stroke orders of some characters, we propose a stroke - based method by decomposing each character into a sequence of strokes, which are the most basic units of chinese characters. however, we observe that there is a one - to - many relationship between stroke sequences and chinese characters. to tackle this challenge, we employ a matching - based strategy to transform the predicted stroke sequence to a specific character. we evaluate the proposed method on handwritten characters, printed artistic characters, and scene characters. the experimental results validate that the proposed method outperforms existing methods on both character zero - shot and radical zero - shot tasks. moreover, the proposed method can be easily generalized to other languages whose characters can be decomposed into strokes.
arxiv:2106.11613
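the matching-based strategy mentioned above (one stroke sequence may correspond to several characters) can be illustrated with a nearest-sequence lookup: compare the predicted stroke sequence against a reference table and return the character with the smallest edit distance. the table and helper names below are hypothetical, and the paper's matching procedure may differ in detail.

def edit_distance(a, b):
    # dynamic-programming levenshtein distance between two stroke sequences
    dp = list(range(len(b) + 1))
    for i, sa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, sb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (sa != sb))
    return dp[-1]

def match_character(pred_strokes, stroke_table):
    # stroke_table: dict mapping each character to its reference stroke sequence
    return min(stroke_table, key=lambda ch: edit_distance(pred_strokes, stroke_table[ch]))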
the effect of a diluted planckian radiation field on a xe gas at the electron temperature of 100 ev is investigated within the framework of a collisional radiative model, using the hullac code. the atomic model spans 19 charge states, includes 20 375 configurations and contains more than 2 x 10 ^ 6 levels. we have simulated detailed spectra comprising more than 10 ^ 9 transitions with the mixed uta model. the radiation temperature tr is varied from 0 to 1. 5 te. the dilution factor, d, applied to decrease the radiation field, is varied independently from 0 to 3 at fixed tr = te. in both cases, the average charge state z * increases from 15 to 27, but in different ways. it is shown that even a dilution d = 0. 01 changes z * by more than 1. 5. different combinations of tr and d yielding exactly the same z * may give line ratios sufficiently different to be observed. this fact is explained by the interplay of the shape of the radiation field and the atomic structure.
arxiv:1104.4248
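a diluted planckian field of the kind varied in the abstract above is usually written as the planck function at radiation temperature tr multiplied by a geometric dilution factor d. the helper below evaluates that mean intensity and is included only as a reminder of what the two knobs (tr and d) control; it is not taken from the hullac workflow.

import numpy as np
import scipy.constants as const

def diluted_planck_intensity(nu_hz, t_rad_k, dilution):
    # mean intensity J_nu = D * B_nu(T_r), with B_nu the planck function
    h, c, k = const.h, const.c, const.k
    b_nu = (2.0 * h * nu_hz ** 3 / c ** 2) / np.expm1(h * nu_hz / (k * t_rad_k))
    return dilution * b_nu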
detecting anomalies in a temporal sequence of graphs can be applied in areas such as the detection of accidents in transport networks and cyber attacks in computer networks. existing methods for detecting abnormal graphs can suffer from multiple limitations, such as high false positive rates as well as difficulties with handling variable - sized graphs and non - trivial temporal dynamics. to address this, we propose a technique where temporal dependencies are explicitly modelled via time series analysis of a large set of pertinent graph features, followed by using residuals to remove the dependencies. extreme value theory is then used to robustly model and classify any remaining extremes, aiming to produce low false positive rates. comparative evaluations on a multitude of graph instances show that the proposed approach obtains considerably better accuracy than tensorsplat and laplacian anomaly detection.
arxiv:2410.05687
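the pipeline described above (graph features, removal of temporal dependence, extreme value modelling of the residuals) can be sketched as follows. the feature set, the first-difference detrending and the thresholds are deliberately simplistic placeholders; the paper uses a richer feature set and proper time series models.

import numpy as np
import networkx as nx
from scipy.stats import genpareto

def snapshot_features(g):
    # a few per-snapshot summary features (placeholder set)
    degrees = [d for _, d in g.degree()]
    return np.array([g.number_of_nodes(), g.number_of_edges(),
                     float(np.mean(degrees)) if degrees else 0.0, nx.density(g)])

def flag_anomalous_snapshots(snapshots, tail_frac=0.10, alpha=0.01):
    feats = np.array([snapshot_features(g) for g in snapshots])
    # crude detrending: first differences stand in for a fitted time series model
    resid = np.linalg.norm(np.diff(feats, axis=0), axis=1)
    u = np.quantile(resid, 1.0 - tail_frac)                 # peaks-over-threshold level
    c, _, scale = genpareto.fit(resid[resid > u] - u, floc=0.0)
    # tail probability of each residual under the fitted generalized pareto tail
    p = np.where(resid > u,
                 tail_frac * genpareto.sf(resid - u, c, loc=0.0, scale=scale),
                 1.0)
    return p < alpha                                        # True marks a suspect transition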
universality of correlation functions obtained in parametric random matrix theory is explored in a multi - parameter formalism, through the introduction of a diffusion matrix $ d _ { ij } ( r ) $, and compared to results from a multi - parameter chaotic model. we show that certain universal correlation functions in 1 - d are no longer well defined by the metric distance between the points in parameter space, due to a global topological dependence on the path taken. by computing the density of diabolical points, which is found to increase quadratically with the dimension of the space, we find a universal measure of the density of diabolical points in chaotic systems.
arxiv:chao-dyn/9612007
d. kazhdan introduced in 1967 the property ( t ) for locally compact groups. in this article we prove that for $ n \ geq 3 $ and $ m \ in \ mathbb { n } $ the group $ sl _ n ( \ textbf { k } ) \ ltimes \ mathcal { m } _ { n, m } ( \ textbf { k } ) $ is a kazhdan group with infinite outer automorphism group.
arxiv:1110.3407
weak measurement [ 1, 19 ] combined with a quantum delayed - choice experiment that uses a quantum beam splitter instead of an ordinary beam splitter gives rise to a surprising amplification effect, i. e., a counterintuitive negative amplification effect. we show that this effect is caused by the wave and particle behaviours of the measured system and cannot be explained by a semiclassical wave theory, due to the entanglement of the system and the ancilla in the quantum beam splitter. the amplification mechanism based on wave - particle duality in quantum mechanics leads us to a scheme for the implementation of weak measurement in an optomechanical system.
arxiv:1509.00641
this paper describes a real - time general speech reconstruction ( gesper ) system submitted to the icassp 2023 speech signal improvement ( ssi ) challenge. the proposed system is a two - stage architecture, in which speech restoration is performed first and then cascaded with speech enhancement. we propose a complex spectral mapping - based generative adversarial network ( csm - gan ) as the speech restoration module for the first time. for noise suppression and dereverberation, the enhancement module is performed with fullband - wideband parallel processing. on the blind test set of the icassp 2023 ssi challenge, the proposed gesper system, which satisfies the real - time condition, achieves 3. 27 p. 804 overall mean opinion score ( mos ) and 3. 35 p. 835 overall mos, ranked 1st in both track 1 and track 2.
arxiv:2306.08454
canonical quantum mechanics postulates hermitian hamiltonians to ensure real eigenvalues. counterintuitively, a non - hermitian hamiltonian, satisfying combined parity - time ( pt ) symmetry, could display entirely real spectra above some phase - transition threshold. such a counterintuitive discovery has aroused extensive theoretical interest in extending canonical quantum theory by including non - hermitian but pt - symmetric operators in the last two decades. despite much fundamental theoretical success in the development of pt - symmetric quantum mechanics, an experimental observation of pseudo - hermiticity remains elusive as these systems with a complex potential seem absent in nature. but nevertheless, the notion of pt symmetry has thrived over the past ten years in many other branches of physics, including optics, photonics, amo physics, acoustics, electronic circuits, and material science, where a judicious balance of gain and loss constitutes a pt - symmetric system. here, although we concentrate upon reviewing recent progress on pt symmetry in optical microcavity systems, we also wish to present some new results that may help to accelerate the research in the area. such compound photonic structures with gain and loss provide a powerful platform for testing various theoretical proposals on pt symmetry, and initiate new possibilities for shaping optical beams and pulses beyond conservative structures. throughout this article there is an effort to clearly present the physical aspects of pt - symmetry in optical microcavity systems, but mathematical formulations are reduced to the indispensable ones. readers who prefer strict mathematical treatments should resort to the extensive list of references. despite the rapid progress on the subject, new ideas and applications of pt symmetry using optical microcavities are still expected in the future.
arxiv:1807.03645
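the phase transition mentioned above is easiest to see in the textbook two-mode gain/loss dimer, which is a standard illustration rather than anything specific to the microcavity systems reviewed in the paper: for coupling kappa and gain/loss rate gamma the eigenvalues are +/- sqrt(kappa^2 - gamma^2), real in the pt-unbroken phase (kappa > gamma) and purely imaginary in the broken phase (kappa < gamma).

import numpy as np

def pt_dimer_eigenvalues(kappa, gamma):
    # two coupled modes, one with gain (+i*gamma) and one with loss (-i*gamma)
    h = np.array([[1j * gamma, kappa],
                  [kappa, -1j * gamma]])
    return np.linalg.eigvals(h)

print(pt_dimer_eigenvalues(2.0, 1.0))  # ~ +/- 1.732, real: unbroken pt phase
print(pt_dimer_eigenvalues(1.0, 2.0))  # ~ +/- 1.732i, imaginary: broken pt phase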
in this paper we present an analysis of the magnetic toroidal moment and its relation to the various structural modes in r3c - distorted perovskites with magnetic cations on either the perovskite a or b site. we evaluate the toroidal moment in the limit of localized magnetic moments and show that the full magnetic symmetry can be taken into account by considering small induced magnetic moments on the oxygen sites. our results give a transparent picture of the possible coupling between magnetization, electric polarization, and toroidal moment, thereby highlighting the different roles played by the various structural distortions in multiferroic bifeo _ 3 and in the recently discussed isostructural material fetio _ 3, which has been predicted to exhibit electric field - induced magnetization switching.
arxiv:0901.2812
we consider the motion of an inviscid compressible fluid under the mutual interactions with a magnetic field. we show that the initial value problem is ill - posed in the class of weak solutions for a large class of physically admissible data. we also consider the same problem for an inviscid heat - conductive fluid and show the same result under certain restrictions imposed on the magnetic field. the main tool is the method of convex integration adapted to the euler system with ` variable coefficients '.
arxiv:1903.02039
for natural language understanding and generation, embedding concepts using an order - based representation is an essential task. unlike traditional point vector based representation, an order - based representation imposes geometric constraints on the representation vectors for explicitly capturing various semantic relationships that may exist between a pair of concepts. in existing literature, several approaches on order - based embedding have been proposed, mostly focusing on capturing hierarchical relationships ; examples include vectors in euclidean space, complex, hyperbolic, order, and box embedding. box embedding creates region - based rich representation of concepts, but along the process it sacrifices simplicity, requiring a custom - made optimization scheme for learning the representation. hyperbolic embedding improves embedding quality by exploiting the ever - expanding property of hyperbolic space, but it also suffers from the same fate as box embedding as gradient descent like optimization is not simple in the hyperbolic space. in this work, we propose binder, a novel approach for order - based representation. binder uses binary vectors for embedding, so the embedding vectors are compact with an order of magnitude smaller footprint than other methods. binder uses a simple and efficient optimization scheme for learning representation vectors with a linear time complexity. our comprehensive experimental results show that binder is very accurate, yielding competitive results on the representation task. but binder stands out from its competitors on the transitive closure link prediction task as it can learn concept embeddings just from the direct edges, whereas all existing order - based approaches rely on the indirect edges.
arxiv:2404.10924
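the order constraint that a binary embedding imposes can be stated with plain bitwise operations: under one natural convention (ours, not necessarily the paper's), a concept y is an ancestor of x if every bit set in y's vector is also set in x's vector. the snippet shows only this containment test, not the learning procedure.

import numpy as np

def is_ancestor(ancestor_bits, descendant_bits):
    # containment test on binary embedding vectors (arrays of 0s and 1s)
    return bool(np.all((ancestor_bits & descendant_bits) == ancestor_bits))

animal = np.array([1, 0, 1, 0], dtype=np.uint8)
dog    = np.array([1, 1, 1, 0], dtype=np.uint8)
print(is_ancestor(animal, dog))  # True: every bit of 'animal' is present in 'dog'
print(is_ancestor(dog, animal))  # False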
the fine dust detected by ir emission around the nearby beta pic analogue star hd172555 is very peculiar. the dust mineralogy is composed primarily of highly refractory, non - equilibrium materials, with approximately three - quarters of the si atoms in silica ( sio2 ) species. tektite and obsidian lab thermal emission spectra ( non - equilibrium glassy silicas found in impact and magmatic systems ) are required to fit the data. the best - fit model size distribution for the observed fine dust is dn / da = a ^ ( - 3. 95 + / - 0. 10 ). this steep size distribution, with abundant micron - sized particles, argues for a fresh source of material within the last 0. 1 myr. the location of the dust with respect to the star is at 5. 8 + / - 0. 6 au ( equivalent to 1. 9 + / - 0. 2 au from the sun ), within the terrestrial planet formation region but at the outer edge of any possible terrestrial habitability zone. the mass of fine dust is 4 x 10 ^ 19 - 2 x 10 ^ 20 kg, equivalent to a 150 - 200 km radius asteroid. significant emission features centered at 4 and 8 um due to fluorescing sio gas are also found. roughly 10 ^ 22 kg of sio gas, formed by vaporizing silicate rock, is also present in the system, and a separate population of very large, cool grains, massing 10 ^ 21 - 10 ^ 22 kg and equivalent to the largest sized asteroid currently found in the solar system ' s main asteroid belt, dominates the solid circumstellar material by mass. the makeup of the observed dust and gas, and the noted lack of a dense circumstellar gas disk, strong primary x - ray activity, or an extended disk of beta - meteoroids argues that the source of the observed circumstellar materials is a giant hypervelocity ( > 10 km sec ^ - 1 ) impact between large rocky planetesimals, similar to the ones which formed the moon and which stripped the surface crustal material off of mercury ' s surface.
arxiv:0906.2536