Columns: text (string, lengths 1 to 3.65k), source (string, lengths 15 to 79)
In this talk, we examine the physical unitarity in a massive Yang-Mills theory without the Higgs field, in which the color gauge symmetry is not spontaneously broken and is kept intact. For this purpose, we use a new framework proposed by one of the authors, based on a nonperturbative construction of a non-Abelian field describing a massive spin-one vector boson field, which enables us to perform perturbative and nonperturbative studies of the physical unitarity. Moreover, we present a new perturbative treatment of the physical unitarity after giving the general properties of the massive Yang-Mills theory. We then reproduce the violation of physical unitarity in a transparent way. This work is preliminary to subsequent works in which we present a nonperturbative framework to propose a possible scenario for restoring the physical unitarity in the Curci-Ferrari model. We discuss the implications for low-energy QCD in relation to color confinement, the glueball mass, and the BRST-invariant dimension-two condensate.
arxiv:1301.2480
We develop a game-theoretic model of malware protection using the state-of-the-art sandbox method, to characterize and compute optimal defense strategies for anti-malware. We model the strategic interaction between developers of malware (M) and anti-malware (AM) as a two-player game, where AM commits to a strategy of generating sandbox environments, and M responds by choosing to either attack or hide malicious activity based on the environment it senses. We characterize the condition for AM to protect all its machines, and identify conditions under which an optimal AM strategy can be computed efficiently. For other cases, we provide a quadratically constrained quadratic program (QCQP)-based optimization framework to compute the optimal AM strategy. In addition, we identify a natural and easy-to-compute strategy for AM which, as we show empirically, achieves AM utility that is close to the optimal AM utility in equilibrium.
arxiv:2202.13520
In this paper we investigate the scalar Aharonov-Bohm (AB) effect in two of its forms, i.e., its electric form and its gravitational form. The standard form of the electric AB effect involves having particles (such as electrons) move in regions with zero electric field but different electric potentials. When a particle is recombined with itself, it will have a different phase, which can show up as a change in the way the single particle interferes with itself when it is recombined with itself. In the case where one has quasi-static fields and potentials, the particle will invariably encounter fringing fields, which makes the theoretical and experimental status of the electric AB effect much less clear than that of the magnetic (or vector) AB effect. Here we propose using time-varying fields outside of a spherical shell, and potentials inside a spherical shell, to experimentally test the scalar AB effect. In our proposal a quantum system will always be in a field-free region but subjected to a non-zero time-varying potential. Furthermore, our system will not be spatially split and brought back together as in the magnetic AB experiment. Therefore there is no spatial interference and hence no shift in a spatial interference pattern to observe. Rather, there arise purely temporal interference phenomena. As in the magnetic AB experiments, these effects are non-classical. We present two versions of this idea: (i) a Josephson temporal interferometry experiment inside a superconducting spherical shell with a time-varying surface charge; (ii) a two-level atom experiment in which the atomic spectrum acquires FM sidebands when it is placed inside a spherical shell whose exterior mass is sinusoidally varying with time. The former leads to a time-varying internal magnetic field, and the latter leads to a time-varying gravitational redshift.
arxiv:1411.3627
Electroencephalography (EEG) signals are promising as alternatives to other biometrics owing to their protection against spoofing. Previous studies have focused on capturing individual variability by analyzing task/condition-specific EEG. This work attempts to model biometric signatures independent of task/condition by normalizing the associated variance. Toward this goal, the paper extends ideas from subspace-based text-independent speaker recognition and proposes novel modifications for modeling multi-channel EEG data. The proposed techniques assume that biometric information is present in the entire EEG signal and accumulate statistics across time in a high dimensional space. These high dimensional statistics are then projected to a lower dimensional space where the biometric information is preserved. The lower dimensional embeddings obtained using the proposed approach are shown to be task-independent. The best subspace system identifies individuals with accuracies of 86.4% and 35.9% on datasets with 30 and 920 subjects, respectively, using just nine EEG channels. The paper also provides insights into the subspace model's scalability to unseen tasks and individuals during training and the number of channels needed for subspace modeling.
arxiv:2007.13517
We investigate the cluster configurations in Li isotopes, which are described in the optimization of the multi-Slater determinants of the antisymmetrized molecular dynamics. Each Slater determinant in the superposition is determined simultaneously in the variation of the total energy. The configurations of the excited states are obtained by imposing the orthogonal condition to the ground-state configurations. In Li isotopes, various cluster configurations are confirmed and are related to the thresholds of the corresponding cluster emissions. For $^5$Li, we predict the $^3$He + $d$ clustering in the excited state as well as the mirror state of $^5$He with $^3$H + $d$. For $^{6-9}$Li, various combinations of the clusters are obtained in the ground and excited states, and the superposition of these basis states reproduces the observed energy spectra. For $^9$Li, we predict the linear-chain states consisting of various cluster configurations at 10--13 MeV of excitation energy.
arxiv:2501.16009
Can a probabilistic gambler get arbitrarily rich when all deterministic gamblers fail? We study this problem in the context of algorithmic randomness, introducing a new notion: almost everywhere computable randomness. A binary sequence $x$ is a.e. computably random if there is no probabilistic computable strategy which is total and succeeds on $x$ for a positive measure of oracles. Using the fireworks technique we construct a sequence which is partial computably random but not a.e. computably random. We also prove the separation between a.e. computable randomness and partial computable randomness, which happens exactly in the uniformly almost everywhere dominating Turing degrees.
arxiv:2112.04460
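To make the definition quoted in the abstract above concrete, here is one way to write it in the standard martingale formulation; the precise meanings of "strategy" and "succeeds" below are our assumptions and may differ in detail from the paper:

\[
x \in 2^{\omega} \text{ is a.e. computably random} \iff \neg\,\exists M \;\; \mu\bigl(\{\, z \in 2^{\omega} : M^{z} \text{ is a total martingale and } \textstyle\limsup_{n} M^{z}(x \upharpoonright n) = \infty \,\}\bigr) > 0 ,
\]

where $M$ ranges over oracle Turing machines, $\mu$ is the uniform measure on oracles $z$, and $x \upharpoonright n$ denotes the first $n$ bits of $x$.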
Cardiovascular diseases (CVD), including atherosclerotic CVD (ASCVD), are multifactorial diseases that present a major economic and social burden worldwide. Tremendous efforts have been made to understand traditional risk factors for ASCVD, but these risk factors account for only about half of all cases of ASCVD. There remains a critical need to identify nontraditional risk factors (e.g., genetic variants, genes) contributing to ASCVD. Further, incorporating functional knowledge in prediction models has the potential to reveal pathways associated with disease risk. We propose Bayesian hierarchical factor analysis models that associate multiple omics data, predict a clinical outcome, allow for prior functional information, and can accommodate clinical covariates. The models, motivated by available data and the need for other risk factors of ASCVD, are used for the integrative analysis of clinical, demographic, and multi-omics data to identify genetic variants, genes, and gene pathways potentially contributing to 10-year ASCVD risk in healthy adults. Our findings revealed several genetic variants, genes and gene pathways that were highly associated with ASCVD risk. Interestingly, some of these have been implicated in CVD risk. The others could be explored for their potential roles in CVD. Our findings underscore the merit in joint association and prediction models.
arxiv:2005.11586
The synchronization of digital twins (DT) serves as the cornerstone for effective operation of the DT framework. However, the limitations of channel capacity can greatly affect the data transmission efficiency of wireless communication. Unlike traditional communication methods, semantic communication transmits the intended meanings of physical objects instead of raw data, effectively saving bandwidth resources and reducing DT synchronization latency. Hence, we are committed to integrating semantic communication into the DT synchronization framework within the mobile edge computing system, aiming to enhance the DT synchronization efficiency of user devices (UDs). Our goal is to minimize the average DT synchronization latency of all UDs by jointly optimizing the synchronization strategy, transmission power of UDs, and computational resource allocation for both UDs and the base station. The formulated problem involves sequential decision-making across multiple coherent time slots. Furthermore, the mobility of UDs introduces uncertainties into the decision-making process. To solve this challenging optimization problem efficiently, we propose a soft actor-critic-based deep reinforcement learning algorithm to optimize the synchronization strategy and resource allocation. Numerical results demonstrate that our proposed algorithm can reduce synchronization latency by up to 13.2% and improve synchronization efficiency compared to other benchmark schemes.
arxiv:2503.04387
We establish the supercongruences for the fourteen rigid hypergeometric Calabi--Yau threefolds over $\mathbb{Q}$ conjectured by Rodriguez-Villegas in 2003. Our first method is based on Dwork's theory of $p$-adic unit roots, and it allows us to establish the supercongruences between the truncated hypergeometric series and the corresponding unit roots for ordinary primes. The other method makes use of the theory of hypergeometric motives; in particular, it adapts the techniques from the recent work of Beukers, Cohen and Mellit on finite hypergeometric sums over $\mathbb{Q}$. Essential ingredients in executing both approaches are the modularity of the underlying Calabi--Yau threefolds and a $p$-adic perturbation method applied to hypergeometric functions.
arxiv:1705.01663
A digital twin of a brushless direct current (BLDC) electric motor and propeller is developed for predicting the generated thrust when there is no motion of the system (static conditions). The model accounts for the back electromotive force, the propeller drag force, and the finite response time arising from the electromagnet winding inductance and DC resistance. The model is compared to a textbook model of BLDC dynamics and to experimental measurements on a KDE Direct KDE2315XF-885/885 KV motor with a 945 propeller and a Holybro electronic speed controller (ESC) driving an Air 2216/880 KV motor with a 1045 propeller. These systems are typically found on Group 1 uncrewed quadcopters (drones). Both the steady-state and transient dynamics depart substantially from linearized models found in the literature. This study is a starting point for disentangling the dynamics of the motor and the change in propeller dynamics due to complex airflow conditions.
arxiv:2312.09981
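The textbook-style model the abstract compares against can be written as two coupled first-order equations, one for the winding current and one for the rotor speed. Below is a minimal Python sketch of such a model under the usual linear back-EMF and quadratic propeller-drag assumptions; every parameter value is an illustrative placeholder, not a measured value for the KDE or Air motors discussed in the paper.

import numpy as np

R = 0.12          # winding DC resistance [ohm] (placeholder)
L = 25e-6         # winding inductance [H] (placeholder)
k_e = 60.0 / (885.0 * 2.0 * np.pi)   # back-EMF constant [V*s/rad], from an 885 KV-style rating
k_t = k_e         # torque constant [N*m/A]; numerically equal to k_e in SI units
J = 2e-5          # rotor + propeller inertia [kg*m^2] (placeholder)
k_d = 1.5e-7      # propeller drag-torque coefficient [N*m*s^2]: tau_drag = k_d * omega**2
c_T = 1.0e-5      # static thrust coefficient [N*s^2]: thrust = c_T * omega**2

def step(i, omega, V, dt):
    """One explicit-Euler step of the coupled electrical/mechanical equations."""
    di = (V - R * i - k_e * omega) / L          # back EMF opposes the drive voltage
    domega = (k_t * i - k_d * omega**2) / J     # motor torque vs. aerodynamic drag torque
    return i + dt * di, omega + dt * domega

# Step response to a constant drive voltage (static conditions, no vehicle motion).
dt, V = 1e-5, 8.0
i, omega = 0.0, 0.0
for _ in range(int(0.5 / dt)):
    i, omega = step(i, omega, V, dt)
print(f"steady state: {omega:.0f} rad/s, {i:.2f} A, thrust ~ {c_T * omega**2:.2f} N")

A model of this kind is linear only for small perturbations; the quadratic drag and thrust terms are one reason the measured steady-state and transient behavior can depart from linearized descriptions.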
In 1955, silicon dioxide surface passivation by Carl Frosch and Lincoln Derick, the first planar silicon dioxide transistors by Frosch and Derick in 1957, the planar process by Jean Hoerni, the monolithic integrated circuit chip by Robert Noyce at Fairchild Semiconductor in 1959, the metal-oxide-semiconductor field-effect transistor (MOSFET, or MOS transistor) demonstrated by a team at Bell Labs in 1960, and the single-chip microprocessor (Intel 4004) by Federico Faggin, Marcian Hoff, Masatoshi Shima and Stanley Mazor at Intel in 1971.

=== History of computer engineering education ===

The first computer engineering degree program in the United States was established in 1971 at Case Western Reserve University in Cleveland, Ohio. As of 2015, there were 250 ABET-accredited computer engineering programs in the U.S. In Europe, accreditation of computer engineering schools is done by a variety of agencies as part of the EQANIE network. Due to increasing job requirements for engineers who can concurrently design hardware, software, firmware, and manage all forms of computer systems used in industry, some tertiary institutions around the world offer a bachelor's degree generally called computer engineering. Both computer engineering and electronic engineering programs include analog and digital circuit design in their curriculum. As with most engineering disciplines, having a sound knowledge of mathematics and science is necessary for computer engineers.

== Education ==

Computer engineering is referred to as computer science and engineering at some universities. Most entry-level computer engineering jobs require at least a bachelor's degree in computer engineering, electrical engineering or computer science. Typically one must learn an array of mathematics such as calculus, linear algebra and differential equations, along with computer science. Degrees in electronic or electric engineering also suffice due to the similarity of the two fields. Because hardware engineers commonly work with computer software systems, a strong background in computer programming is necessary. According to the BLS, "a computer engineering major is similar to electrical engineering but with some computer science courses added to the curriculum". Some large firms or specialized jobs require a master's degree. It is also important for computer engineers to keep up with rapid advances in technology. Therefore, many continue learning throughout their careers. This can be helpful, especially when it comes to learning new skills or improving existing ones. For example, as the relative cost of fixing a bug increases the further along it is in the software development cycle, there can be greater
https://en.wikipedia.org/wiki/Computer_engineering
The architectures of biological hard materials reveal finely tailored complex assemblies of mineral crystals. Numerous recent studies associate the design of these local assemblies with impressive macroscopic response. Reproducing such exquisite control in technical ceramics conflicts with commonly used processing methods. Here, we circumvent this issue by combining the recently developed magnetically-assisted slip casting (MASC) technique with the well-established process of templated grain growth (TGG). MASC enables local control over the orientation of platelets dispersed among smaller isotropic particles. After a high-temperature pressure-less treatment, the grains of the final ceramic follow the same orientation as the initial platelets. This combination allows us to produce a 95% dense alumina part with a grain orientation following any deliberate orientation. We successfully fabricated microstructures inspired by biological materials with ceramics that present periodically varying patterns with a programmable pitch of a few tens of microns. We confirmed the capacity of the process to tailor local mechanical properties through local grain orientation using micro-indentation. This micrometer-scale control over the local mechanical properties could be applied to adapt ceramic structures to complex loads using this inexpensive and scalable process. In systems where functional properties also depend on anisotropic grain orientation, the principle presented here could enable the creation of new multifunctional ceramics.
arxiv:1807.04378
Smart traffic control and management has become an emerging application for deep reinforcement learning (DRL) to solve traffic congestion problems in urban networks. Different traffic control and management policies can be tested on a traffic simulation. Current DRL-based studies are mainly supported by microscopic simulation software (e.g., SUMO), which is not suitable for city-wide control due to the computational burden and gridlock effects. To the best of our knowledge, there is a lack of studies on large-scale traffic simulators for DRL testbeds, which could further hinder the development of DRL. In view of this, we propose a meso-macro traffic simulator for very large-scale DRL scenarios. The proposed simulator integrates mesoscopic and macroscopic traffic simulation models to improve efficiency and eliminate gridlocks. The mesoscopic link model simulates flow dynamics on roads, and the macroscopic bathtub model depicts vehicle movement in regions. Moreover, both types of models can be hybridized to accommodate various DRL tasks. This creates portals for mixed transportation applications under different contexts. The results show that the developed simulator takes only 46 seconds to finish a 24-hour simulation in a very large city with 2.2 million vehicles, which is much faster than SUMO. Additionally, we develop a graphic interface for users to visualize the simulation results in a web explorer. In the future, the developed meso-macro traffic simulator could serve as a new environment for very large-scale DRL problems.
arxiv:2105.13907
One of the great challenges facing atomically dispersed catalysts, including single-atom catalysts (SACs) and double-atom catalysts (DACs), is their ultra-low metal loading (typically less than 5 wt%), which fundamentally limits practical catalytic applications such as the oxygen reduction reaction (ORR) crucial to hydrogen fuel cells and metal-air batteries. Although important progress has been achieved on ultra-high-density (UHD) SACs, reports on UHD-DACs with stable uniform dispersion are still lacking. Herein, based on the experimentally synthesized M2N6 motif (M = Sc-Zn), we theoretically demonstrated the existence of UHD-DACs with metal loadings > 40 wt%, which were confirmed by systematic analysis of dynamic, thermal, mechanical, thermodynamic, and electrochemical stabilities. Furthermore, the ORR activities of the UHD-DACs are comparable with or even better than those of the experimentally synthesized low-density (LD) counterparts, and the Fe2N6 and Co2N6 UHD-DACs locate at the peak of the activity volcano with ultra-low overpotentials of 0.31 and 0.33 V, respectively. Finally, the spin magnetic moment of the active center is found to be a catalytic descriptor for the ORR on the DACs. Our work will stimulate the experimental exploration of ultra-high-density DACs and provides novel insight into the relationship between the ORR activity of the DACs and their spin states.
arxiv:2305.02620
The rapid development of quantum computing has demonstrated many unique characteristics of quantum advantages, such as richer feature representation and more secure protection of model parameters. This work proposes a vertical federated learning architecture based on variational quantum circuits to demonstrate the competitive performance of a quantum-enhanced pre-trained BERT model for text classification. In particular, our proposed hybrid classical-quantum model consists of a novel random quantum temporal convolution (QTC) learning framework replacing some layers in the BERT-based decoder. Our experiments on intent classification show that our proposed BERT-QTC model attains competitive experimental results on the SNIPS and ATIS spoken language datasets. In particular, BERT-QTC boosts the performance of the existing quantum circuit-based language model on two text classification datasets by 1.57% and 1.52% relative improvements. Furthermore, BERT-QTC can be feasibly deployed on both existing commercially accessible quantum computation hardware and CPU-based interfaces for ensuring data isolation.
arxiv:2203.03550
A conformable time-scale fractional calculus of order $\alpha \in\, ]0,1]$ is introduced. The basic tools for fractional differentiation and fractional integration are then developed. The Hilger time-scale calculus is obtained as a particular case, by choosing $\alpha = 1$.
arxiv:1505.03134
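For reference, the continuous-time (real-line) special case of the conformable fractional derivative, which the time-scale construction above generalizes, is usually defined as follows; this is the standard definition of Khalil et al., quoted here as background rather than taken from the paper itself:

\[
T_{\alpha}(f)(t) \;=\; \lim_{\varepsilon \to 0} \frac{f\bigl(t + \varepsilon\, t^{\,1-\alpha}\bigr) - f(t)}{\varepsilon}, \qquad t > 0,\; \alpha \in\, ]0,1],
\]

so that for a differentiable $f$ one has $T_{\alpha}(f)(t) = t^{1-\alpha} f'(t)$, and the case $\alpha = 1$ recovers the ordinary derivative, just as the Hilger time-scale calculus is recovered from the general theory at $\alpha = 1$.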
We consider a new coarse space for the ASM and RAS preconditioners to solve elliptic partial differential equations on perforated domains, where the numerous polygonal perforations represent structures such as walls and buildings in urban data. With the eventual goal of modelling urban floods by means of the nonlinear diffusive wave equation, this contribution focuses on the solution of linear problems on perforated domains. Our coarse space uses a polygonal subdomain partitioning and is spanned by Trefftz-like basis functions that are piecewise linear on the boundary of a subdomain and harmonic inside it. It is based on nodal degrees of freedom that account for the intersection between the perforations and the subdomain boundaries. As a reference, we compare this coarse space to the well-studied Nicolaides coarse space with the same subdomain partitioning. It is known that the Nicolaides space is unable to prevent stagnation in convergence when the subdomains are not connected; we work around this issue by separating each subdomain by disconnected component. Scalability and robustness are tested for multiple data sets based on realistic urban topography. Numerical results show that the new coarse space is very robust and reduces the number of Krylov iterations when compared to Nicolaides, independent of the complexity of the data.
arxiv:2211.05880
Wearable design is an interdisciplinary field that balances technological innovation, human factors, and human-computer interactions. Despite contributions from various disciplines, many projects lack stable interdisciplinary teams, which often leads to design failures. Large language models (LLMs) integrate diverse information and generate innovative solutions, making them a valuable tool for enhancing design processes. Thus, we have explored the use of LLMs in wearable design by combining design-thinking principles with LLM capabilities. We have developed the "Diamond of Thought" framework and analysed 1,603 prototypes and 1,129 products from a body-centric perspective to create a comprehensive database. We employed retrieval-augmented generation to input database details into the LLMs, ensuring applicability to wearable design challenges and integration of embodied cognition into the process. Our LLM-based methodology for wearables has been experimentally validated, demonstrating the potential of LLMs for the advancement of design practices. This study offers new tools and methods for future wearable designs.
arxiv:2410.06972
Precision continuous-wave NMR measurements have been carried out over the entire magnetization curve of EuO and are presented in tabular form. Two very closely spaced resonances are observed and are attributed to domain and domain-wall signals. Both of the signals are useful for analysis in the spin-wave region. Only the domain signal is measurable above ~50 K. The latter is used for fitting T_c and the critical exponent beta. The critical-region fits agree with previous measurements, within experimental error. The low-temperature data exhibit a clear-cut T^2 behavior, at variance with the expectations of conventional spin-wave theory. This result is discussed in relation to two semi-empirical spin-wave schemes, one formulated by N. Bykovetz, and one by U. Koebler. The NMR signal at 4.2 K gives no indication of a quadrupole splitting, in contradiction to the interpretation of several previous spin-echo NMR spectra observed in EuO. This issue remains unresolved.
arxiv:1005.1692
We studied the stellar population in the central 6.6x6.6 arcmin region of the ultra-deep (1 Msec) Chandra galactic field, the "Chandra Bulge Field" (CBF), approximately 1.5 degrees away from the Galactic center, using the Hubble Space Telescope ACS/WFC blue (F435W) and red (F625W) images. We mainly focus on the behavior of red clump giants, a distinct stellar population which is known to have an essentially constant intrinsic luminosity and color. By studying the variation in the position of the red clump giants on a spatially resolved color-magnitude diagram, we confirm the anomalous total-to-selective extinction ratio, as reported in previous work for other Galactic bulge fields. We show that the interstellar extinction in this area is <A_F625W> = 4 on average, but varies significantly between ~3-5 on angular scales as small as 1 arcminute. Using the distribution of red clump giants in an extinction-corrected color-magnitude diagram, we constrain the shape of a stellar-mass distribution model in the direction of this ultra-deep Chandra field, which will be used in a future analysis of the population of X-ray sources. We also show that the adopted model for the stellar density distribution predicts an infrared surface brightness in the direction of the "Chandra Bulge Field" in good agreement (i.e. within ~15%) with the actual measurements derived from the Spitzer/IRAC observations.
arxiv:1003.2965
Distributions exhibiting fat tails occur frequently in many different areas of science. A dynamical reason for fat tails can be a so-called superstatistics, where one has a superposition of local Gaussians whose variance fluctuates on a rather large spatio-temporal scale. After briefly reviewing this concept, we explore in more detail a class of superstatistics that hasn't been the subject of many investigations so far, namely superstatistics for which a suitable power beta^eta of the local inverse temperature beta is chi^2-distributed. We show that eta > 0 leads to power-law distributions, while eta < 0 leads to stretched exponentials. The special case eta = 1 corresponds to Tsallis statistics and the special case eta = -1 to exponential statistics of the square root of energy. Possible applications for granular media and hydrodynamic turbulence are discussed.
arxiv:cond-mat/0510841
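As background for the abstract above, the basic superstatistics construction writes the long-time stationary statistics as a mixture of local Boltzmann factors over a fluctuating inverse temperature; the formulas below are the standard textbook (Beck-Cohen) ones, not taken from the paper:

\[
B(E) \;=\; \int_{0}^{\infty} f(\beta)\, e^{-\beta E}\, d\beta ,
\]

where $f(\beta)$ is the distribution of the local inverse temperature (how the local factors are normalized is a further modelling choice). For a chi^2-distributed $\beta$, i.e. the case $\eta = 1$ above, this Laplace transform of a Gamma density gives exactly a $q$-exponential, $B(E) \propto \bigl(1 + (q-1)\,\beta_0 E\bigr)^{-1/(q-1)}$, which is the Tsallis form referred to in the abstract; other choices of $f$, such as chi^2-distributed $\beta^{\eta}$, lead to the power-law or stretched-exponential tails discussed there.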
We have examined superfluid properties of $^4$He confined to a nano-porous Gelsil glass that has nanopores 2.5 nm in diameter. The pressure-temperature phase diagram was determined by torsional oscillator, heat capacity and pressure studies. The superfluid transition temperature $T_{\mathrm{c}}$ approaches zero at 3.4 MPa, indicating a novel "quantum" superfluid transition. By heat capacity measurements, the nonsuperfluid phase adjacent to the superfluid and solid phases is identified to be a nanometer-scale, localized Bose condensation state, in which global phase coherence is destroyed. At high pressures, the superfluid density has a $T$-linear term, and $T_{\mathrm{c}}$ is proportional to the zero-temperature superfluid density. These results strongly suggest that phase fluctuations in the superfluid order parameter play a dominant role in the phase diagram and superfluid properties.
arxiv:0712.1153
Multiple object tracking (MOT) has rapidly progressed in recent years. Existing works tend to design a single tracking algorithm to perform both detection and association. Though ensemble learning has been exploited in many tasks, e.g., classification and object detection, it hasn't been studied in the MOT task, mainly owing to its complexity and evaluation metrics. In this paper, we propose a simple but effective ensemble method for MOT, called EnsembleMOT, which merges multiple tracking results from various trackers with spatio-temporal constraints. Meanwhile, several post-processing procedures are applied to filter out abnormal results. Our method is model-independent and doesn't need a learning procedure. What's more, it can easily work in conjunction with other algorithms, e.g., tracklet interpolation. Experiments on the MOT17 dataset demonstrate the effectiveness of the proposed method. Codes are available at https://github.com/dyhbupt/ensemblemot.
arxiv:2210.05278
We present exact calculations of the zero-temperature partition function for the $q$-state Potts antiferromagnet (equivalently, the chromatic polynomial) for families of arbitrarily long strip graphs of the square and triangular lattices with width $L_y = 4$ and boundary conditions that are doubly periodic or doubly periodic with reversed orientation (i.e. of torus or Klein bottle type). These boundary conditions have the advantage of removing edge effects. In the limit of infinite length, we calculate the exponent of the entropy, $W(q)$, and determine the continuous locus ${\cal B}$ where it is singular. We also give results for toroidal strips involving ``crossing subgraphs''; these make possible a unified treatment of torus and Klein bottle boundary conditions and enable us to prove that for a given strip, the locus ${\cal B}$ is the same for these boundary conditions.
arxiv:cond-mat/0007491
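For context, the quantities in the abstract above are related in the standard way used in this line of work (a sketch of the usual definitions, not text from the paper): the zero-temperature partition function of the $q$-state Potts antiferromagnet on a graph $G$ equals the chromatic polynomial, $Z(G, q, T=0) = P(G, q)$, and the ground-state degeneracy per site in the infinite-length limit of a strip with $n$ vertices is

\[
W(\{G\}, q) \;=\; \lim_{n \to \infty} P(G, q)^{1/n},
\]

so that the ground-state entropy per site is $S_0 = k_B \ln W$. The locus ${\cal B}$ is then the set of points in the complex $q$ plane where this limit function $W(q)$ is non-analytic.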
The ability of deep learning models to learn continuously is essential for adapting to new data categories and evolving data distributions. In recent years, approaches leveraging frozen feature extractors after an initial learning phase have been extensively studied. Many of these methods estimate per-class covariance matrices and prototypes based on backbone-derived feature representations. Within this paradigm, we introduce FeNeC (Feature Neighborhood Classifier) and FeNeC-Log, its variant based on the log-likelihood function. Our approach generalizes the existing concept by incorporating data clustering to capture greater intra-class variability. Utilizing the Mahalanobis distance, our models classify samples either through a nearest-neighbor approach or through trainable logit values assigned to consecutive classes. Our proposition reduces to the existing approaches in a special case while extending them with the ability to adapt more flexibly to the data. We demonstrate that the two FeNeC variants achieve competitive performance in scenarios where task identities are unknown and establish state-of-the-art results on several benchmarks.
arxiv:2503.14301
Measurements of the rates for the hadronic decays $B^{\pm} \to \pi K$ along with the CP-averaged $B^{\pm} \to \pi^{\pm} \pi^0$ branching ratio can be used to bound and extract the weak phase $\gamma = -\arg(V_{ub})$. Using preliminary CLEO data, we obtain the bounds $|\gamma| > 93$ degrees at 1 sigma, and $|\gamma| > 71$ degrees at 90% confidence level.
arxiv:hep-ph/9904321
Electronic band structure for electrons bound on periodic minimal surfaces is differential-geometrically formulated and numerically calculated. We focus on minimal surfaces because they are not only mathematically elegant (with the surface characterized completely in terms of "navels") but represent the topology of real systems such as zeolites and negative-curvature fullerene. The band structure turns out to be primarily determined by the topology of the surface, i.e., how the wavefunction interferes on a multiply-connected surface, so that the bands are little affected by the way in which we confine the electrons on the surface (thin-slab limit or zero thickness from the outset). Another curiosity is that different minimal surfaces connected by the Bonnet transformation (such as Schwarz's P- and D-surfaces) possess one-to-one correspondence in their band energies at Brillouin zone boundaries.
arxiv:cond-mat/0109512
Starting from a general operator representation in the time-frequency domain, this paper addresses the problem of approximating linear operators by operators that are diagonal or band-diagonal with respect to Gabor frames. A characterization of operators that can be realized as Gabor multipliers is given, necessary conditions for the existence of (Hilbert-Schmidt) optimal Gabor multiplier approximations are discussed, and an efficient method for the calculation of an operator's best approximation by a Gabor multiplier is derived. The spreading function of Gabor multipliers yields new error estimates for these approximations. Generalizations (multiple Gabor multipliers) are introduced for better approximation of overspread operators. The Riesz property of the projection operators involved in generalized Gabor multipliers is characterized, and a method for obtaining an operator's best approximation by a multiple Gabor multiplier is suggested. Finally, it is shown that in certain situations, generalized Gabor multipliers reduce to a finite sum of regular Gabor multipliers with adapted windows.
arxiv:0809.2698
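As background for the abstract above, a Gabor multiplier acts by pointwise multiplication of Gabor (time-frequency) coefficients; the standard definition, given here for orientation and not quoted from the paper, is

\[
G_{m, g, \gamma}\, f \;=\; \sum_{\lambda \in \Lambda} m(\lambda)\, \langle f, \pi(\lambda) g \rangle\, \pi(\lambda)\gamma ,
\]

where $\Lambda$ is a time-frequency lattice, $\pi(\lambda)$ denotes the time-frequency shift by $\lambda$, $g$ and $\gamma$ are analysis and synthesis windows, and $m$ is the symbol. Approximating a general operator $H$ by such a multiplier, e.g. in Hilbert-Schmidt norm, then amounts to choosing $m$ to minimize $\| H - G_{m, g, \gamma} \|_{HS}$.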
Motivated by the first observation of the double-charm tetraquark $T_{cc}^+(3875)$ by the LHCb collaboration, we investigate the nature of $T_{cc}^+$ as an isoscalar $DD^*$ hadronic molecule in a meson-exchange potential model incorporating coupled-channel effects and three-body unitarity. The $D^0D^0\pi^+$ invariant mass spectrum can be well described and the $T_{cc}^+$ pole structure can be precisely extracted. Under the hypothesis that the interactions between the heavy flavor hadrons can be saturated by the light meson-exchange potentials, the near-threshold dynamics of $T_{cc}^+$ can shed light on the binding of its heavy-quark spin symmetry (HQSS) partner $D^*D^*$ ($I = 0$) and on the nature of other heavy hadronic molecule candidates such as $X(3872)$ and $Z_c(3900)$ in the charmed-anticharmed systems. The latter states can be related to $T_{cc}^+$ in the meson-exchange potential model with limited assumptions based on the SU(3) flavor symmetry relations. The combined analysis, on the one hand, indicates the HQSS breaking effects among those HQSS partners, and on the other hand, highlights the role played by the short- and long-distance dynamics for the near-threshold $D^{(*)}D^{(*)}$ and $D^{(*)}\bar{D}^{(*)} + c.c.$ systems.
arxiv:2311.10067
We investigate the statistical distribution of source counts-in-cells in the second data release of the LOFAR Two-metre Sky Survey (LoTSS-DR2) and we test a computationally cheap method based on the counts-in-cells to estimate the two-point correlation function. We compare three stochastic models for the counts-in-cells, which result in a Poisson distribution, a compound Poisson distribution, and a negative binomial distribution. By analysing the variance of counts-in-cells for various cell sizes, we fit the reduced normalised variance to a single power-law model representing the angular two-point correlation function. Our analysis confirms that radio sources are not Poisson distributed, which is most likely due to multiple physical components of radio sources. Employing instead a Cox process, we show that there is strong evidence in favour of the negative binomial distribution above a flux density threshold of 2 mJy. Additionally, the mean number of radio components derived from the negative binomial distribution is in good agreement with corresponding estimates based on the value-added catalogue of LoTSS-DR2. The scaling of the counts-in-cells normalised variance with cell size is in good agreement with a power-law model for the angular two-point correlation. At a flux density threshold of 2 mJy and a signal-to-noise ratio of 7.5 for individual radio sources, we find that for a range of angular scales large enough to not be affected by the multi-component nature of radio sources, the value of the exponent of the power law ranges from -0.8 to -1.05. This closely aligns with findings from previous optical, infrared, and radio surveys of the large-scale structure. The scaling of the counts-in-cells statistics with cell size provides a computationally efficient method to estimate the two-point correlation properties, offering a valuable tool for future large-scale structure studies.
arxiv:2504.20723
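The link between the counts-in-cells variance and the angular two-point correlation function used above is the standard one, quoted here as background (the precise estimator and normalisation in the paper may differ): for cells of solid angle $\Omega$ with mean count $\bar{N}$,

\[
\frac{\langle N^2 \rangle - \bar{N}^2 - \bar{N}}{\bar{N}^2} \;=\; \frac{1}{\Omega^2} \int_{\Omega}\!\!\int_{\Omega} w(\theta_{12})\, d\Omega_1\, d\Omega_2 ,
\]

where the subtracted $\bar{N}$ removes the Poisson shot noise. For a power law $w(\theta) \propto \theta^{-\delta}$ with $0 < \delta < 2$, the cell-averaged integral scales with the cell angular size $\theta_c$ as $\theta_c^{-\delta}$, which is why fitting the scaling of the reduced normalised variance with cell size yields the exponent of the two-point correlation function.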
Intelligent metamaterials have attracted widespread research interest due to their self-adaptive capabilities and controllability. They hold great potential for advancing fluid control by providing responsive and flexible solutions. However, current designs of passive hydrodynamic metamaterials are limited by their fixed shapes and specific environments, lacking environmental adaptability. These two constraints hinder the broader application of hydrodynamic metamaterials. In this work, we propose a design for passive intelligent metashells that utilize extremely anisotropic parameters to endow hydrodynamic metamaterials with self-adaptive abilities and free-form shapes. Achieving the required anisotropic parameters is challenging, but we ingeniously accomplished this by creating isobaric conditions through increasing the water height in the shell region. We validated the design through finite-element simulations. This approach overcomes the limitations of existing passive hydrodynamic metamaterials, enhancing their intelligent behavior. Our model improves the flexibility and robustness of hydrodynamic metamaterials in complex and dynamic environments, providing insights for future designs and practical applications.
arxiv:2412.02964
Let M be a 1-motive defined over a field of characteristic 0. To M we can associate its motivic Galois group, G_mot(M), which is the geometrical interpretation of the Mumford-Tate group of M. We prove that the unipotent radical of the Lie algebra of G_mot(M) is the semi-abelian variety defined by the adjoint action of the semi-simplification of the Lie algebra of G_mot(M) on itself.
arxiv:math/0305046
Health care delivery is a collaborative process, requiring close coordination among networks of providers with specialized expertise. Yet in the United States, care is often spread across multiple disconnected providers (e.g., primary care physicians, specialists), leading to fragmented care delivery networks and contributing to higher costs and lower quality. While this problem is well known, there are relatively few quantitative tools available for characterizing care delivery networks at scale, thereby inhibiting deeper understanding of care fragmentation and efforts to address it. In this study, we conduct a large-scale analysis of care delivery networks across the United States using the discrete Hodge decomposition, an emerging method of topological data analysis. Using this technique, we decompose networks of patient flows among physicians into three orthogonal subspaces: gradient (acyclic flow), harmonic (global cyclic flow), and curl (local cyclic flow). We document substantial variation in the relative importance of each subspace, suggesting that there may be systematic differences in the organization of care delivery networks across health care markets. Moreover, we find that the relative importance of each subspace is predictive of local care cost and quality, with outcomes tending to be better with greater curl flow and worse with greater harmonic flow.
arxiv:2110.09637
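The discrete Hodge decomposition used above splits an edge flow into mutually orthogonal gradient, curl, and harmonic parts. Below is a minimal, self-contained Python sketch of that decomposition on a toy graph; the graph, the flow values, and the least-squares projections are illustrative only and are not the care-delivery data or the code from the study.

import numpy as np
from itertools import combinations

# Toy directed graph: edges as ordered node pairs, with a flow value per edge.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
flow = np.array([1.0, 2.0, 0.5, 1.5])  # hypothetical net flow on each edge

# B1: node-by-edge incidence matrix (its transpose is the graph gradient).
B1 = np.zeros((len(nodes), len(edges)))
for j, (u, v) in enumerate(edges):
    B1[u, j] = -1.0
    B1[v, j] = 1.0

# B2: edge-by-triangle incidence matrix (the curl operator), built from all
# 3-cliques of the graph with a consistent orientation.
edge_index = {e: i for i, e in enumerate(edges)}
triangles = [t for t in combinations(nodes, 3)
             if all((a, b) in edge_index or (b, a) in edge_index
                    for a, b in combinations(t, 2))]
B2 = np.zeros((len(edges), len(triangles)))
for k, (a, b, c) in enumerate(triangles):
    for (u, v), sign in [((a, b), 1.0), ((b, c), 1.0), ((a, c), -1.0)]:
        if (u, v) in edge_index:
            B2[edge_index[(u, v)], k] = sign
        else:
            B2[edge_index[(v, u)], k] = -sign

# Gradient component: least-squares projection of the flow onto im(B1^T).
s, *_ = np.linalg.lstsq(B1.T, flow, rcond=None)
f_grad = B1.T @ s
# Curl component: projection of the remainder onto im(B2) (local cyclic flow).
if triangles:
    w, *_ = np.linalg.lstsq(B2, flow - f_grad, rcond=None)
    f_curl = B2 @ w
else:
    f_curl = np.zeros_like(flow)
# Harmonic component: whatever is left (global cyclic flow).
f_harm = flow - f_grad - f_curl

for name, comp in [("gradient", f_grad), ("curl", f_curl), ("harmonic", f_harm)]:
    print(name, np.round(comp, 3), "squared norm:", round(float(comp @ comp), 3))

The squared norms of the three components give the relative importance of each subspace for the toy flow, which is the kind of summary statistic the abstract relates to care cost and quality.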
A highly efficient semi-empirical Hamiltonian has been developed and applied to model compact boron clusters of intermediate size. The Hamiltonian, in addition to including environment-dependent interactions and electron-electron correlations with the on-site charge calculated self-consistently, contains an environment-dependent excitation orbital energy to take into account the effect of atomic aggregation on the atomic orbitals. The Hamiltonian for boron has successfully characterized the electron deficiency of boron and captured the complex chemical bonding in various boron allotropes, including the planar and quasi-planar, the convex, the ring, the icosahedral, and the fullerene-like clusters, the two-dimensional monolayer sheets, and the alpha-boron bulk, demonstrating its transferability, robustness, reliability, and predictive power. The Hamiltonian has been applied to explore the existence of compact structures of boron clusters of intermediate size. Over 230 compact clusters, including random, rhombohedral, and spherical icosahedral structures, are obtained with sizes up to 768 atoms. It has been found that, energetically, clusters containing the most compacted icosahedral B12 balls (i.e., the body-like rhombohedral clusters and trimmed spherical-cut icosahedral clusters) are the most stable at large sizes (N_atom > 200) of boron clusters, while the spherical-cut icosahedral, random, and cage-like boron clusters are competitive at small or intermediate sizes (24 < N_atom < 200).
arxiv:1408.4931
Practical algorithms for solving the subgraph homeomorphism problem are known for only a few small pattern graphs: among these are the wheel graphs with four, five, six, and seven spokes. The length and difficulty of the proofs leading to these algorithms increase greatly as the size of the pattern graph increases. Proving a result for the wheel with six spokes requires extensive case analysis on many small graphs, and even more such analysis is needed for the wheel with seven spokes. This paper describes algorithms and programs used to automate the generation and testing of the graphs that arise as cases in these proofs. The main algorithm given may be useful in a more general context, for developing other characterizations of SHP-related properties.
arxiv:1311.0574
Generalizing the octahedral configuration of six congruent cylinders touching the unit sphere, we exhibit configurations of congruent cylinders associated to a pair of dual Platonic bodies.
arxiv:1904.02043
Out of thermal equilibrium, bosonic quantum systems can Bose-condense away from the ground state, featuring a macroscopic occupation of an excited state or even of multiple states in the so-called Bose-selection scenario. In previous work, a theory was developed that predicts in which states a driven-dissipative ideal Bose gas condenses. Here, we address the inverse problem: given a target state with desired condensate fractions in certain single-particle states, how can this configuration be achieved by tuning available control parameters? Which type of experimental setup allows for flexible condensation control? We solve these problems, on the one hand, by proposing a Bose "condenser", experimentally implementable in a superconducting circuit, where targeted Bose condensation into eigenstates of a chain of resonators is driven through the coupling to artificial quantum baths, realized via auxiliary two-level systems. On the other, we develop a theory to solve the inverse problem based on linear programming methods. We further discuss the engineering of transition points between different Bose condensation configurations, which may find application for amplification, heat-flow control, and the design of highly structured quantum baths.
arxiv:2311.02170
Low-luminosity active galactic nuclei (AGN) are believed to be surrounded by a collisionless, highly magnetized accretion flow. As a result, particle-in-cell simulations are the best tools to study the immediate vicinity of the event horizons of these supermassive black holes. We present a GPU-based general relativistic particle-in-cell (GRPIC) code framework called Aperture. Aperture is developed in C++, with compute kernels written in CUDA and HIP to take advantage of the massive acceleration modern GPUs enable. The code is organized in a fully modular way, allowing easy extensions to new physics problems. In this paper, we describe in detail the particle pusher, field solver, and charge-conserving current deposition algorithms employed in Aperture, and present test cases to validate their correctness. Then, we apply the code to study spark gaps and plasma injection in black hole magnetospheres. We find that the apparent location and time evolution of the gap depend on the observer. Our results reconcile the previous conflicting findings from 1D and 2D simulations in the literature.
arxiv:2503.04558
It is shown that the 1s level hyperfine populations prior to muon capture will be statistical when either target or beam is unpolarised, independent of the atomic level at which the hyperfine interaction becomes appreciable. This assertion holds in the absence of magnetic transitions during the cascade and is true because of minimal polarisation after atomic capture and selective feeding during the cascade.
arxiv:nucl-th/9212007
We consider $m$-colorings of the edges of a complete graph, where each color class is defined semi-algebraically with bounded complexity. The case $m = 2$ was first studied by Alon et al., who applied this framework to obtain surprisingly strong Ramsey-type results for intersection graphs of geometric objects and for other graphs arising in computational geometry. Considering larger values of $m$ is relevant, e.g., to problems concerning the number of distinct distances determined by a point set. For $p \ge 3$ and $m \ge 2$, the classical Ramsey number $R(p;m)$ is the smallest positive integer $n$ such that any $m$-coloring of the edges of $K_n$, the complete graph on $n$ vertices, contains a monochromatic $K_p$. It is a longstanding open problem that goes back to Schur (1916) to decide whether $R(p;m) = 2^{O(m)}$, for a fixed $p$. We prove that this is true if each color class is defined semi-algebraically with bounded complexity. The order of magnitude of this bound is tight. Our proof is based on the cutting lemma of Chazelle {\em et al.}, and on a Szemer\'edi-type regularity lemma for multicolored semi-algebraic graphs, which is of independent interest. The same technique is used to address the semi-algebraic variant of a more general Ramsey-type problem of Erd\H{o}s and Shelah.
arxiv:1505.07429
A real representation of a finite group naturally determines a polytope, generalizing the well-known Birkhoff polytope. This paper determines the structure of the polytope corresponding to the natural permutation representation of a general Frobenius group.
arxiv:1102.0988
Magnetic topological materials and their physical signatures are a focus of current research. Here, by first-principles calculations and symmetry analysis, we reveal topological semimetal states in an existing antiferromagnet, ThMn2Si2. Depending on the N\'eel vector orientation, the topological band crossings near the Fermi level form either a double-nodal loop or two pairs of Dirac points, which are all fourfold degenerate and robust under spin-orbit coupling. These topological features produce a large Berry connection polarizability, which leads to enhanced nonlinear transport effects. In particular, we evaluate the third-order current response, which dominates the transverse charge current. We show that the nonlinear response can be much more sensitive to topological phase transitions than the linear response, which offers a powerful tool for characterizing magnetic topological semimetals.
arxiv:2209.05736
Using a collapsar progenitor model of MacFadyen & Woosley, we have simulated the propagation of an axisymmetric jet through a collapsing rotating massive star with the GENESIS multi-dimensional relativistic hydrodynamic code. The jet forms as a consequence of an assumed (constant or variable) energy deposition in the range $10^{50}$ erg s$^{-1}$ to $10^{51}$ erg s$^{-1}$ within a $30^{\circ}$ cone around the rotation axis. The jet flow is strongly beamed ($\la$ few degrees), spatially inhomogeneous, and time dependent. The jet reaches the surface of the stellar progenitor ($r_{\ast} = 2.98 \times 10^{10}$ cm) intact. At breakout the maximum Lorentz factor of the jet flow is 33. After breakout the jet accelerates into the circumstellar medium, whose density is assumed to decrease exponentially and then be constant, $\rho_{\rm ext} = 10^{-5}$ g cm$^{-3}$. Outside the star the flow begins to expand also laterally ($v \sim c$), but the beam remains very well collimated. At a distance of $2.54\, r_{\ast}$, where the simulation ends, the Lorentz factor has increased to 44.
arxiv:astro-ph/9911098
High-quality speech dialogue datasets are crucial for speech-LLM development, yet existing acquisition methods face significant limitations. Human recordings incur high costs and privacy concerns, while synthetic approaches often lack conversational authenticity. To address these challenges, we introduce \textsc{SpeechDialogueFactory}, a production-ready framework for generating natural speech dialogues efficiently. Our solution employs a comprehensive pipeline including metadata generation, dialogue scripting, paralinguistic-enriched utterance simulation, and natural speech synthesis with voice cloning. Additionally, the system provides an interactive UI for detailed sample inspection and a high-throughput batch synthesis mode. Evaluations show that dialogues generated by our system achieve a quality comparable to human recordings while significantly reducing production costs. We release our work as an open-source toolkit, alongside example datasets available in English and Chinese, empowering researchers and developers in speech-LLM research and development.
arxiv:2503.23848
A stochastic gravitational wave background (SGWB) will affect the CMB anisotropies via weak lensing. Unlike weak lensing due to large scale structure, which only deflects photon trajectories, a SGWB has an additional effect of rotating the polarization vector along the trajectory. We study the relative importance of these two effects, deflection \& rotation, specifically in the context of E-mode to B-mode power transfer caused by weak lensing due to SGWB. Using weak lensing distortion of the CMB as a probe, we derive constraints on the spectral energy density ($\Omega_{GW}$) of the SGWB, sourced at different redshifts, without assuming any particular model for its origin. We present these bounds on $\Omega_{GW}$ for different power-law models characterizing the SGWB, indicating the threshold above which observable imprints of SGWB must be present in the CMB.
arxiv:1606.08862
It is a very old and interesting open problem to characterize those collections of embedded topological types of local plane curve singularities which may appear as singularities of a projective plane curve C of degree d. The goal of the present article is to give a complete (topological) classification of those cases when C is rational and it has a unique singularity which is locally irreducible (i.e., C is unicuspidal) with one Puiseux pair.
arxiv:math/0604420
A quantum metamaterial can be implemented as a quantum coherent 1D array of qubits placed in a transmission line. The properties of quantum metamaterials are determined by the local quantum state of the system. Here we show that a spatially-periodic quantum state of such a system can be realized without direct control of the constituent qubits, by their interaction with the initializing ("priming") pulses sent through the system in opposite directions. The properties of the resulting quantum photonic crystal are determined by the choice of the priming pulses. This proposal can be readily generalized to other implementations of quantum metamaterials.
arxiv:1303.1086
The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental $l = m = 2$ quasinormal mode to within $1\%$ or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than $1\%$ requires the inclusion of at least two quasinormal modes for any given angular harmonic mode $(\ell,\, m)$. We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
arxiv:1710.02156
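For orientation, the ringdown model referred to above (a superposition of quasinormal modes plus a power-law tail) has the standard schematic form, written here in our own notation rather than the paper's:

\[
h_{\ell m}(t) \;\simeq\; \sum_{n} A_{\ell m n}\, e^{-t/\tau_{\ell m n}} \cos\bigl(\omega_{\ell m n} t + \phi_{\ell m n}\bigr) \;+\; C_{\ell m}\, t^{-p},
\]

where $n$ labels the overtones of the $(\ell, m)$ angular harmonic, the frequencies $\omega_{\ell m n}$ and damping times $\tau_{\ell m n}$ are fixed by the remnant black hole's mass and spin, the amplitudes $A_{\ell m n}$ and phases $\phi_{\ell m n}$ depend on how the black hole was perturbed, and the last term is the power-law tail. The questions posed in the abstract concern how many overtones $n$ must be retained, and what error is made by dropping the tail term.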
Deploying federated learning at the wireless edge introduces federated edge learning (FEEL). Given FEEL's limited communication resources and potential mislabeled data on devices, improper resource allocation or data selection can hurt convergence speed and increase training costs. Thus, to realize an efficient FEEL system, this paper emphasizes jointly optimizing resource allocation and data selection. Specifically, in this work, through rigorously modeling the training process and deriving an upper bound on FEEL's one-round convergence rate, we establish a problem of joint resource allocation and data selection, which, unfortunately, cannot be solved directly. Toward this end, we equivalently transform the original problem into a solvable form via a variable substitution and then break it into two subproblems, that is, the resource allocation problem and the data selection problem. The two subproblems are mixed-integer non-convex and integer non-convex problems, respectively, and achieving their optimal solutions is a challenging task. Based on matching theory and applying the convex-concave procedure and gradient projection methods, we devise a low-complexity suboptimal algorithm for the two subproblems, respectively. Finally, the superiority of our proposed scheme of joint resource allocation and data selection is validated by numerical results.
arxiv:2407.02888
In railway transportation, the evaluation of track geometry is an indispensable requirement to ensure the safety and comfort of railway vehicles. A promising approach is to directly use vehicle dynamic responses to assess the impact of track geometry defects. However, the computational cost of obtaining the dynamic response of the vehicle body using dynamics simulation methods is large. Thus, it is important to obtain the dynamic response of the vehicle-track coupled system efficiently and accurately. In this work, a Branch Fourier Neural Operator (BFNO) model is proposed to obtain the dynamic response of the vehicle-track coupled system. The model takes into account the nonlinear relationship of the vehicle-track coupled system and realizes fast and accurate estimation of the system dynamic response. The relative loss (RLSE) of the BFNO model is 2.04%, a 64% reduction compared with the traditional neural network (CNN-GRU). In the frequency domain, the BFNO model achieves effective estimation of the dynamic response of the system within the primary frequency range. Compared with existing methods, our proposed model can make predictions at unseen time steps, enabling predictions from low to high time resolutions. Meanwhile, our proposed model is superior to commercial software in terms of efficiency. In the evaluation of track geometry, users can use the pre-trained BFNO to obtain the dynamic response with almost no computational cost.
arxiv:2402.18366
We study the geometry of germs of definable (semialgebraic or subanalytic) sets over a $p$-adic field from the metric, differential and measure-geometric point of view. We prove that the local density of such sets at each of their points does exist. We then introduce the notion of distinguished tangent cone with respect to some open subgroup with finite index in the multiplicative group of our field and show, as is the case in the real setting, that, up to some multiplicities, the local density may be computed on this distinguished tangent cone. We also prove that these distinguished tangent cones stabilize for small enough subgroups. We finally obtain the $p$-adic counterpart of the Cauchy-Crofton formula for the density. To prove these results we use the Lipschitz decomposition of definable $p$-adic sets of arXiv:0904.3853v1 and prove here the genericity of the regularity conditions for stratification such as the $(w_f)$, $(w)$, $(a_f)$, $(b)$ and $(a)$ conditions.
arxiv:0910.0799
the advances of cloud computing, fog computing and internet of things ( iot ) make the industries more prosperous than ever. a wide range of industrial systems such as transportation systems and manufacturing systems have been developed by integrating cloud computing, fog computing and iot successfully. security and privacy issues are a major concern that hinders the wide adoption of these novel techniques. in this paper, we focus on assured data deletion, an issue which is important but has received less attention in academia and industry. we first propose a framework to integrate the cloud, the fog and the things together to manage the stored data from industries or individuals. we then focus on secure data deletion in this framework by proposing an assured data deletion scheme which fulfills fine - grained access control over sensitive data and verifiable data deletion. only the data owners and the fog devices are involved when deleting a data key and validating the data deletion, which makes the protocol practical due to the features of low latency and real - time interaction of fog computing. the proposed protocol takes advantage of attribute - based encryption and is provably secure under the standard model. the theoretical analysis shows that the scheme meets the performance and functionality requirements, while the implementation results demonstrate the feasibility of our proposal.
arxiv:1804.02834
we demonstrate that the finiteness of the limiting values of the lower energy levels of a hydrogen atom under an unrestricted growth of the magnetic field, into which this atom is embedded, is achieved already when the vacuum polarization ( vp ) is calculated in the magnetic field within the approximation of the local action of euler - - heisenberg. we find that the mechanism for this saturation is different from the one acting when vp is calculated via the feynman diagram in the furry picture. we study the effective potential that appears when the adiabatic ( diagonal ) approximation is exploited for solving the schrödinger equation for the longitudinal degree of freedom of the electron on the lowest landau level in the atom. we find that the ( effective ) potential of a point - like charge remains nonsingular thanks to the growing screening provided by vp. the regularizing length turns out to be $ \ sqrt { \ alpha / 3 \ pi } \ lambdabar _ { \ mathrm { c } } $, where $ \ lambdabar _ { \ mathrm { c } } $ is the electron compton length. the family of effective potentials, labeled by growing values of the magnetic field, condenses towards a certain limiting, magnetic - field - independent potential - distance curve. the limiting values of even ground - state energies are determined for four magnetic quantum numbers using the karnakov - - popov method.
arxiv:2011.12422
we present the first direct measurements of the rest - frame 10 - 40 kev x - ray luminosity function ( xlf ) of active galactic nuclei ( agns ) based on a sample of 94 sources at 0. 1 < z < 3, selected at 8 - 24 kev energies from sources in the nustar extragalactic survey program. our results are consistent with the strong evolution of the agn population seen in prior, lower - energy studies of the xlf. however, different models of the intrinsic distribution of absorption, which are used to correct for selection biases, give significantly different predictions for the total number of sources in our sample, leading to small, systematic differences in our binned estimates of the xlf. adopting a model with a lower intrinsic fraction of compton - thick sources and a larger population of sources with column densities n _ h ~ 10 ^ { 23 - 24 } / cm2 or a model with stronger compton reflection component ( with a relative normalization of r ~ 2 at all luminosities ) can bring extrapolations of the xlf from 2 - 10 kev into agreement with our nustar sample. ultimately, x - ray spectral analysis of the nustar sources is required to break this degeneracy between the distribution of absorbing column densities and the strength of the compton reflection component and thus refine our measurements of the xlf. furthermore, the models that successfully describe the high - redshift population seen by nustar tend to over - predict previous, high - energy measurements of the local xlf, indicating that there is evolution of the agn population that is not fully captured by the current models.
arxiv:1511.04184
instance level detection and segmentation of thoracic diseases or abnormalities are crucial for automatic diagnosis in chest x - ray images. leveraging on constant structure and disease relations extracted from domain knowledge, we propose a structure - aware relation network ( sar - net ) extending mask r - cnn. the sar - net consists of three relation modules : 1. the anatomical structure relation module encoding spatial relations between diseases and anatomical parts. 2. the contextual relation module aggregating clues based on query - key pair of disease roi and lung fields. 3. the disease relation module propagating co - occurrence and causal relations into disease proposals. towards making a practical system, we also provide chestx - det, a chest x - ray dataset with instance - level annotations ( boxes and masks ). chestx - det is a subset of the public dataset nih chestx - ray14. it contains ~ 3500 images of 13 common disease categories labeled by three board - certified radiologists. we evaluate our sar - net on it and another dataset dr - private. experimental results show that it can enhance the strong baseline of mask r - cnn with significant improvements. the chestx - det is released at https : / / github. com / deepwise - ailab / chestx - det - dataset.
arxiv:2104.10326
in this article, we establish a liouville - type inequality for polynomials evaluated at the values of arbitrary siegel e - functions at non - zero algebraic points. additionally, we provide a comparable result within the framework of mahler m - functions.
arxiv:2502.09999
large language models ( llms ) excel in code - related tasks like code generation, but benchmark evaluations often overlook task characteristics, such as difficulty. moreover, benchmarks are usually built using tasks described with a single prompt, despite the formulation of prompts having a profound impact on the outcome. this paper introduces a generalist approach, taskeval, a framework using diverse prompts and item response theory ( irt ) to efficiently assess llms ' capabilities and benchmark task characteristics, improving the understanding of their performance. using two code generation benchmarks, humaneval + and classeval, as well as 5 code generation llms, we show that taskeval is capable of characterizing the properties of tasks. using topic analysis, we identify and analyze the tasks of the two benchmarks, grouping them into 17 and 21 topics, respectively. we also cross - analyze tasks ' characteristics with programming constructs ( e. g., variable assignment, conditions, etc. ) used by llms, highlighting some patterns related to tasks ' difficulty. finally, we conduct a comparison between the difficulty assessment of tasks by human annotators and llms. orthogonal to current benchmarking evaluation efforts, taskeval can assist researchers and practitioners in fostering better assessments of llms. the tasks ' characteristics can be used to identify shortcomings within existing benchmarks. this could be used to generate additional related tasks for the evaluation or improvement of llms.
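the abstract does not say which irt variant taskeval uses ; as an illustrative assumption, the sketch below uses the standard two - parameter logistic ( 2pl ) model, in which each task has a difficulty and a discrimination parameter and each llm an ability score.

```python
import numpy as np

def irt_2pl(theta, a, b):
    # two-parameter logistic model: probability that a model with ability theta
    # solves a task with discrimination a and difficulty b
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

abilities = np.array([-1.0, 0.0, 1.5])          # hypothetical llm ability scores
print(irt_2pl(abilities, a=1.2, b=-0.5))        # easy task: high success probabilities
print(irt_2pl(abilities, a=1.2, b=1.0))         # hard task: lower success probabilities
```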
arxiv:2407.21227
we find a new integration transformation which can convert a chirplet function to a fractional fourier transformation kernel. this new transformation is invertible and obeys the parseval theorem. under this transformation a new relationship between a phase space function and its weyl - wigner quantum correspondence operator is revealed.
arxiv:0902.1800
we study collider implications of variant axion models which naturally avoid the cosmological domain wall problem. we find that in such models the branching ratio of $ h \ to \ gamma \ gamma $ can be enhanced by a factor of 5 up to 30 as compared with the standard model prediction. the $ h \ to \ gamma \ gamma $ process is therefore a promising channel to discover a light higgs boson at the lhc and to probe the peccei - quinn charge assignment of the standard model fields from yukawa interactions.
arxiv:1005.1185
we introduce, for the first time, a new class of birnbaum - saunders nonlinear regression models potentially useful in lifetime data analysis. the class generalizes the regression model described by rieck and nedelman [ 1991, a log - linear model for the birnbaum - saunders distribution, technometrics, 33, 51 - 60 ]. we discuss maximum likelihood estimation for the parameters of the model, and derive closed - form expressions for the second - order biases of these estimates. our formulae are easily computed as ordinary linear regressions and are then used to define bias corrected maximum likelihood estimates. some simulation results show that the bias correction scheme yields nearly unbiased estimates without increasing the mean squared errors. we also give an application to a real fatigue data set.
arxiv:0901.4881
we report a photometric study of the wz sagittae - type dwarf nova pq andromedae. the light curve shows strong ( 0. 05 mag full amplitude ) signals with periods of 1263 ( 1 ) and 634 ( 1 ) s, and a likely double - humped signal with p = 80. 6 ( 2 ) min. we interpret the first two as nonradial pulsation periods of the underlying white dwarf, and the last as the orbital period of the underlying binary. we estimate a distance of 150 ( 50 ) pc from proper motions and the two standard candles available : the white dwarf and the dwarf - nova outburst. at this distance, the k magnitude implies that the secondary is probably fainter than any star on the main sequence - - indicating a mass below the kumar limit at 0. 075 m _ sol. pq and may be another " period bouncer ", where evolution now drives the binary out to longer period.
arxiv:astro-ph/0506135
game theory ' s prescriptive power typically relies on full rationality and / or self - play interactions. in contrast, this work sets aside these fundamental premises and focuses instead on heterogeneous autonomous interactions between two or more agents. specifically, we introduce a new and concise representation for repeated adversarial ( constant - sum ) games that highlights the necessary features enabling an automated planning agent to reason about how to score above the game ' s nash equilibrium, when facing heterogeneous adversaries. to this end, we present teamup, a model - based rl algorithm designed for learning and planning such an abstraction. in essence, it is somewhat similar to r - max with a cleverly engineered reward shaping that treats exploration as an adversarial optimization problem. in practice, it attempts to find an ally with which to tacitly collude ( in more than two - player games ) and then collaborates on a joint plan of actions that can consistently score a high utility in adversarial repeated games. we use the inaugural lemonade stand game tournament to demonstrate the effectiveness of our approach, and find that teamup is the best performing agent, demoting the tournament ' s actual winning strategy into second place. in our experimental analysis, we show that our strategy successfully and consistently builds collaborations with many different heterogeneous ( and sometimes very sophisticated ) adversaries.
arxiv:1203.3498
we compute the four - dimensional gaugino mass for a dp - brane extended in spacetime and wrapping a cycle on the internal geometry in a warped compactification with fluxes. motivated by the backreaction of gaugino bilinear vevs, we use generalized complex geometry to characterize the internal geometry as well as the cycle wrapped by the brane. we find that the rr fluxes and the non - closure of the generalized complex structures combine in the gaugino mass terms in the same form as they do in the bulk superpotential, while for the nsns fluxes there is a crucial minus sign in the component normal to the brane. our expression extends the known result for d3 and d7 - branes in calabi - yau manifolds, where the gaugino masses are induced respectively by the imaginary anti - self dual and imaginary self - dual components of the complex 3 - form flux $ g _ 3 $.
arxiv:2002.01481
mid - infrared observations of the andromeda galaxy, m31, obtained with the infrared array camera on board the spitzer space telescope, are presented. the image mosaics cover areas of approximately 3. 7deg x 1. 6deg and include the satellite galaxies m32 and ngc 205. the appearance of m31 varies dramatically in the different mid - infrared bands, from the smooth bulge and disk of the old stellar population seen at 3. 6um to the well - known ' 10 kpc ring ' dominating the 8um image. the similarity of the 3. 6um and optical isophotes and nearly constant optical - mid - infrared color over the inner 400 arcsec confirms that there is no significant extinction at optical wavelengths in m31 ' s bulge. the nuclear colors indicate the presence of dust but not an infrared - bright nucleus. the integrated 8um non - stellar luminosity implies a star formation rate of 0. 4 msun / yr, consistent with other indicators that show m31 to be a quiescent galaxy.
arxiv:astro-ph/0608593
we study the hyperfine interaction between the nuclear spins and the electrons in a hgte quantum well, which is the prime experimentally realized example of a two - dimensional topological insulator. the hyperfine interaction is a naturally present, internal source of broken time - reversal symmetry from the point of view of the electrons. the hgte quantum well is described by the so - called bernevig - hughes - zhang ( bhz ) model. the basis states of the bhz model are combinations of both s - and p - like symmetry states, which means that three kinds of hyperfine interactions play a role : ( i ) the fermi contact interaction, ( ii ) the dipole - dipole like coupling and ( iii ) the electron orbital to nuclear - spin coupling. we provide benchmark results for the forms and magnitudes of these hyperfine interactions within the bhz model, which give a good starting point for evaluating hyperfine interactions in any hgte nanostructure. we apply our results to the helical edge states of a hgte two - dimensional topological insulator and show how their total hyperfine interaction becomes anisotropic and dependent on the orientation of the sample edge within the plane. moreover, for the helical edge states the hyperfine interactions due to the p - like states can dominate over the s - like contribution in certain circumstances.
arxiv:1304.5096
we introduce phonon state tomography ( pst ) as a diagnostic probe of electron dynamics in solids whose phonons are optically excited by a laser pulse at initial time. using a projected - purified matrix - product states algorithm, pst decomposes the exact correlated electron - phonon wavefunction into contributions from purely electronic states corresponding to statistically typical configurations of the optically accessible phononic response, enabling a ' tomographic ' reconstruction of the electronic dynamics generated by the phonons. thus, pst may be used to diagnose electronic behavior in experiments that access only the phonon response, such as thermal diffuse x - ray and electron scattering. we study the dynamics of a metal whose infrared phonons are excited by an optical pulse at initial time and use it to simulate the sample - averaged momentum - resolved phonon occupancy and accurately reconstruct the electronic correlations. we also use pst to analyze the influence of different pulse shapes on the light - induced enhancement and suppression of electronic correlations.
arxiv:2403.04209
dijets observed near midrapidity in high - energy nuclear collisions result from large - angle scattering of low - $ x $ partons ( gluons ) within projectile hadrons as a signature manifestation of qcd. within the same collisions it has been claimed that hydrodynamic flows ( radial, elliptic and " higher harmonic " flows ) carried by a dense qcd medium or quark - gluon plasma ( qgp ) dominate the observed hadronic final state. the flow - qgp narrative is imposed { \ em a priori } on primary particle data, and of all possible analysis methods a subset a that seems to support that narrative is preferred. the present study explores an alternative minimum - bias ( mb ) jet narrative - - quantitative correspondence of mb dijet manifestations in the hadronic final state with measured { \ em isolated jet } properties. the latter incorporates a different set of methods b that emerge from inductive study of primary particle data without { \ em a priori } assumptions. the resulting system of methods and data manifestations is represented by a two - component ( soft + hard ) model ( tcm ) of hadron production. a survey of methods reveals that type a tends to discard substantial information carried by primary particle data whereas type b retains almost all information in both primary particle data from nuclear collisions and from isolated jets. the main goal of the present study is a review of mb dijet contributions to high - energy collisions in small and large systems relative to measured isolated - jet properties. representative analysis methods from types a and b are compared in the context of mb jet manifestations. this study suggests that at least some data features commonly attributed to flows actually result from mb dijets and thereby challenges the flow - qgp narrative.
arxiv:1701.07866
an overview of the smc data taking and the polarized deep inelastic scattering experiment is given. the new data on the deuteron extend the kinematic range and have considerably reduced statistical and systematic errors. the evaluation of the first moment of the spin dependent structure function is presented and the result for the bjorken sum rule from smc data alone is given. the spin contribution of the quarks to the spin of the nucleon is obtained with information from weak decays of baryons. in a new polarized semi - inclusive analysis the asymmetry of the difference between the number of positive and negative charged hadrons was studied. preliminary results are shown.
arxiv:hep-ex/9509007
universes, not just one, and endow each one with its own unique, unimaginable and incomparable character. it is impossible to disprove a claim when that claim as defined encompasses every conceivable contingency. creation science violates the principle of parsimony : parsimony favours those explanations which rely on the fewest assumptions. scientists prefer explanations that are consistent with known and supported facts and evidence and require the fewest assumptions to fill the remaining gaps. many of the alternative claims made in creation science retreat from simpler scientific explanations and introduce more complications and conjecture into the equation. creation science is not, and cannot be, empirically or experimentally tested : creationism posits supernatural causes which lie outside the realm of methodological naturalism and scientific experiment. science can only test empirical, natural claims. creation science is not correctable, dynamic, tentative or progressive : creation science adheres to a fixed and unchanging premise or " absolute truth, " the " word of god, " which is not open to change. any evidence that runs contrary to that truth must be disregarded. in science, all claims are tentative, they are forever open to challenge, and must be discarded or adjusted when the weight of evidence demands it. by invoking claims of " abrupt appearance " of species as a miraculous act, creation science is unsuited for the tools and methods demanded by science, and it cannot be considered scientific in the way that the term " science " is currently defined. scientists and science writers commonly characterize creation science as a pseudoscience. = = = historical, philosophical, and sociological criticism = = = historically, the debate of whether creationism is compatible with science can be traced back to 1874, the year science historian john william draper published his history of the conflict between religion and science. in it draper portrayed the entire history of scientific development as a war against religion. this presentation of history was propagated further by followers such as andrew dickson white in his two - volume a history of the warfare of science with theology in christendom ( 1896 ). their conclusions have been disputed. in the united states, the principal focus of creation science advocates is on the government - supported public school systems, which are prohibited by the establishment clause from promoting specific religions. historical communities have argued that biblical translations contain many translation errors and errata, and therefore that the use of biblical literalism in creation science is self - contradictory. = = kinds of creation science = = = = = biology
https://en.wikipedia.org/wiki/Creation_science
with the rapid development of large language models ( llms ), numerous mature applications of llms have emerged in the field of content safety detection. however, we have found that llms exhibit blind trust in safety detection agents. general llms can be compromised by hackers through this vulnerability. hence, this paper proposes an attack named feign agent attack ( f2a ). through such malicious forgery methods, adding fake safety detection results into the prompt, the defense mechanism of llms can be bypassed, thereby obtaining harmful content and hijacking the normal conversation. subsequently, a series of experiments was conducted. in these experiments, the hijacking capability of f2a on llms was analyzed and demonstrated, exploring the fundamental reasons why llms blindly trust safety detection results. the experiments involved various scenarios where fake safety detection results were injected into prompts, and the responses were closely monitored to understand the extent of the vulnerability. also, this paper provides a reasonable solution to this attack, emphasizing that it is important for llms to critically evaluate the results of augmented agents to prevent the generation of harmful content. by doing so, the reliability and security can be significantly improved, protecting the llms from f2a.
arxiv:2410.08776
there are known problems with the lorentz - dirac equation for a charged particle moving with acceleration in classical electrodynamics. a model of a particle extended in one dimension is proposed, and it is shown that electromagnetic self - interaction can lead ( with an appropriate choice of retarded and advanced interactions ) to zero change in particle momentum. the hypothesis is formulated : all relativistic internal forces of various nature can give zero change in particle momentum.
arxiv:hep-th/9707006
the brain produces rhythms in a variety of frequency bands. some are likely by - products of neuronal processes ; others are thought to be top - down. produced entirely naturally, these rhythms have clearly recognizable beats, but they are very far from periodic in the sense of mathematics. they produce signals that are broad - band, episodic, wandering in magnitude, in frequency and in phase ; the rhythm comes and goes, degrading and regenerating. rhythms with these characteristics do not match standard dynamical systems paradigms of periodicity, quasi - periodicity, or periodic motion in the presence of a brownian noise. thus far they have been satisfactorily reproduced only using networks of hundreds of integrate - and - fire neurons. in this paper, we tackle the mathematical question of whether signals with these properties can be generated from simpler dynamical systems. using an ode with two variables inspired by the fitzhugh - nagumo model, and varying randomly three parameters that control the magnitude, frequency and degree of degradation, we were able to replicate the qualitative characteristics of these natural brain rhythms. viewing the two variables as excitatory and inhibitory conductances of a typical neuron in a local population, our model produces results that closely resemble gamma - band activity in real cortex, including the moment - to - moment balancing of e and i - currents seen in experiments.
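the authors' specific two - variable ode is not reproduced here ; the sketch below is a generic fitzhugh - nagumo - type system in which three parameters ( drive, time - scale and recovery coupling ) drift as slow random walks, which is enough to produce episodic, wandering oscillations of the kind described.

```python
import numpy as np

def fhn_wandering(steps=20000, dt=0.01, seed=1):
    # euler integration of a fitzhugh-nagumo-like system whose parameters
    # drift slowly and randomly, producing episodic, broad-band rhythms
    rng = np.random.default_rng(seed)
    v, w = 0.0, 0.0
    drive, eps, b = 0.5, 0.08, 0.8            # drifting parameters (illustrative values)
    trace = np.empty(steps)
    for t in range(steps):
        drive = np.clip(drive + 0.002 * rng.standard_normal(), 0.0, 1.0)
        eps = np.clip(eps + 0.0005 * rng.standard_normal(), 0.02, 0.2)
        b = np.clip(b + 0.001 * rng.standard_normal(), 0.5, 1.2)
        dv = v - v**3 / 3.0 - w + drive
        dw = eps * (v + 0.7 - b * w)
        v, w = v + dt * dv, w + dt * dw
        trace[t] = v
    return trace

x = fhn_wandering()
print(x.mean(), x.std())
```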
arxiv:2006.04039
for every nonempty compact convex subset $ k $ of a normed linear space a ( unique ) point $ c _ k \ in k $, called the generalized chebyshev center, is distinguished. it is shown that $ c _ k $ is a common fixed point for the isometry group of the metric space $ k $. with use of the generalized chebyshev centers, the central measure $ \ mu _ x $ of an arbitrary compact metric space $ x $ is defined. for a large class of compact metric spaces, including the interval $ [ 0, 1 ] $ and all compact metric groups, another ` central ' measure is distinguished, which turns out to coincide with the lebesgue measure and the haar one for the interval and a compact metric group, respectively. an idea of distinguishing infinitely many points forming a dense subset of an arbitrary compact metric space is also presented.
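for intuition only : in the euclidean plane the chebyshev center of a finite sample of a compact set is the center of its smallest enclosing ball, and it can be approximated by a direct minimax optimization ; the paper's generalized construction for arbitrary normed spaces is more subtle than this sketch.

```python
import numpy as np
from scipy.optimize import minimize

def chebyshev_center(points):
    # numerical sketch: the point minimizing the maximum euclidean distance
    # to a finite sample of the set (centre of the smallest enclosing ball)
    pts = np.asarray(points, dtype=float)
    radius = lambda c: np.max(np.linalg.norm(pts - c, axis=1))
    res = minimize(radius, pts.mean(axis=0), method="Nelder-Mead")
    return res.x

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(chebyshev_center(square))  # approximately (0.5, 0.5)
```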
arxiv:1105.5706
we prove global well - posedness for the half - wave map with $ s ^ 2 $ target for small $ \ dot { h } ^ { \ frac { n } { 2 } } \ times \ dot { h } ^ { \ frac { n } { 2 } - 1 } $ initial data. we also prove the global well - posedness for the equation with $ \ mathbb { h } ^ 2 $ target for small smooth $ \ dot { b } ^ { \ frac { n } { 2 } } _ { 2, 1 } \ times \ dot { b } ^ { \ frac { n } { 2 } - 1 } _ { 2, 1 } $ initial data.
arxiv:2109.13657
we investigate an analogue to the wedderburn principal theorem ( wpt ) for a finite - dimensional jordan superalgebra $ j $ with solvable radical $ n $ such that $ n ^ 2 = 0 $ and $ j / n \ cong jp _ n $, $ n \ geq 3 $. we consider $ n $ as an irreducible $ jp _ n $ - bimodule and we prove that the wpt holds for $ j $.
arxiv:2001.07470
legendre curves are smooth plane curves which may have singular points, but still have a well defined smooth normal ( and corresponding tangent ) vector field. because of the existence of singular points, the usual curvature concept for regular curves cannot be straightforwardly extended to these curves. however, fukunaga and takahashi defined and studied functions that play the role of curvature functions of a legendre curve, and whose ratio extends the curvature notion in the usual sense. going in the same direction, our paper is devoted to the extension of the concept of circular curvature from regular to legendre curves, but additionally referring not only to the euclidean plane. for the first time we will extend the concept of legendre curves to normed planes. generalizing in such a way the results of the mentioned authors, we define new functions that play the role of circular curvature of legendre curves, and tackle questions concerning existence, uniqueness, and invariance under isometries for them. using these functions, we study evolutes, involutes, and pedal curves of legendre curves for normed planes, and the notion of contact between such curves is correspondingly extended, too. we also provide new ways to calculate the maslov index of a front in terms of our new curvature functions. it becomes clear that an inner product is not necessary in developing the theory of legendre curves. more precisely, only a fixed norm and the associated orthogonality ( of birkhoff type ) are necessary.
arxiv:1704.04927
we re - examine the likelihood for alien civilizations to develop communication technology on the basis of the general assumption that life elsewhere could have a non - carbon chemical foundation. we particularized the discussion to a complex silicon - based biochemistry in a nitrogen solvent, and elaborate on the environment in which such a chemistry is feasible, and if so, on what scales. more concretely, we determine the region outside the habitable zone where such organisms can grow and flourish and after that we study how our findings impact the recently derived upper limit on the fraction of living intelligent species that develop communication technology $ \ langle \ xi _ { \ rm biotec } \ rangle $. we also compare this new restriction on $ \ langle \ xi _ { \ rm biotec } \ rangle $ with that resulting from the extension of the habitable zone to accommodate subsurface exolife, originating in planets with subsurface ( water ) oceans.
arxiv:1908.01335
using axisymmetric simulations coupling special relativistic mhd, an approximate post - newtonian gravitational potential and two - moment neutrino transport, we show different paths for the formation of either protomagnetars or stellar mass black holes. the fraction of prototypical stellar cores which should result in collapsars depends on a combination of several factors, among which the structure of the progenitor star and the profile of specific angular momentum are probably the foremost. along with the implosion of the stellar core, we also obtain supernova - like explosions driven by neutrino heating and hydrodynamic instabilities or by magneto - rotational effects in cores of high - mass stars. in the latter case, highly collimated, mildly relativistic outflows are generated. we find that after a rather long post - collapse phase ( lasting > ~ 1 sec ) black holes may form in cases both of successful and failed supernovalike explosions. a basic trend is that cores with a specific angular momentum smaller than that obtained by standard, one - dimensional stellar evolution calculations form black holes ( and eventually collapsars ). complementary, protomagnetars result from stellar cores with the standard distribution of specific angular momentum obtained from prototypical stellar evolution calculations including magnetic torques and moderate to large mass loss rates.
arxiv:1703.09893
real - life events, behaviors and interactions produce sequential data. an important but rarely explored problem is to analyze those nonoccurring ( also called negative ) yet important sequences, forming negative sequence analysis ( nsa ). a typical nsa area is to discover negative sequential patterns ( nsps ) consisting of important non - occurring and occurring elements and patterns. the limited existing work on nsp mining relies on frequentist and downward closure property - based pattern selection, producing large and highly redundant nsps, nonactionable for business decision - making. this work makes the first attempt at actionable nsp discovery. it builds an nsp graph representation, quantifies both explicit occurrence - based and implicit non - occurrence - based element and pattern relations, and then discovers significant, diverse and informative nsps in the nsp graph to represent the entire nsp set for discovering actionable nsps. the dpp - based nsp representation and actionable nsp discovery method, einsp, introduces novel and significant contributions for nsa and sequence analysis : ( 1 ) it represents nsps by a determinantal point process ( dpp ) based graph ; ( 2 ) it quantifies actionable nsps in terms of their statistical significance, diversity, and strength of explicit / implicit element / pattern relations ; and ( 3 ) it models and measures both explicit and implicit element / pattern relations in the dpp - based nsp graph to represent direct and indirect couplings between nsp items, elements and patterns. we substantially analyze the effectiveness of einsp in terms of various theoretical and empirical aspects including complexity, item / pattern coverage, pattern size and diversity, implicit pattern relation strength, and data factors.
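the exact construction of the einsp graph is not given in the abstract ; as a minimal illustration of why a determinantal point process favours diverse pattern sets, the sketch below scores subsets of a toy dpp kernel with the standard l - ensemble probability det ( l_s ) / det ( l + i ).

```python
import numpy as np

def dpp_subset_probability(L, subset):
    # l-ensemble dpp: P(Y = S) = det(L_S) / det(L + I); similar (redundant)
    # items shrink the determinant, so diverse subsets are preferred
    S = list(subset)
    L = np.asarray(L, dtype=float)
    num = np.linalg.det(L[np.ix_(S, S)]) if S else 1.0
    den = np.linalg.det(L + np.eye(L.shape[0]))
    return num / den

# toy kernel over three patterns; patterns 0 and 1 are nearly identical
L = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
print(dpp_subset_probability(L, [0, 1]))  # small: redundant pair
print(dpp_subset_probability(L, [0, 2]))  # larger: diverse pair
```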
arxiv:2204.03571
in order to gain a better understanding of the state space of programs, with the aim of making their verification more tractable, models based on directed topological spaces have been introduced, allowing one to take into account equivalence between execution traces, as well as to translate features of the execution ( such as the presence of deadlocks ) into geometrical situations. in this context, many algorithms were introduced, based on a description of the geometrical models as regions consisting of unions of rectangles. we explain here that these constructions can actually be performed directly on the syntax of programs, thus resulting in representations which are more natural and easier to implement. in order to do so, we start from the observation that positions in a program can be described as partial explorations of the program. the operational semantics induces a partial order on positions, and regions can be defined as formal unions of intervals in the resulting poset. we then study the structure of such regions and show that, under reasonable conditions, they form a boolean algebra and admit a representation in normal form ( which corresponds to covering a space by maximal intervals ), thus supporting the constructions needed for the purpose of studying programs. all the operations involved here are given explicit algorithmic descriptions.
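as a toy illustration of the normal form ( covering by maximal intervals ), the sketch below merges a union of closed intervals over a totally ordered set of positions ; the paper works over the poset of program positions, which is more general than this one - dimensional case.

```python
def normal_form(intervals):
    # reduce a union of closed intervals over totally ordered positions
    # to its covering by maximal intervals
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1]:   # overlaps (or touches) the previous interval
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged

print(normal_form([(0, 3), (2, 5), (7, 9), (8, 8)]))  # [(0, 5), (7, 9)]
```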
arxiv:2112.14055
developing high - performing systems for detecting biomedical named entities has major implications. state - of - the - art deep - learning based solutions for entity recognition often require large annotated datasets, which are not available in the biomedical domain. transfer learning and multi - task learning have been shown to improve performance for low - resource domains. however, the applications of these methods are relatively scarce in the biomedical domain, and a theoretical understanding of why these methods improve the performance is lacking. in this study, we performed an extensive analysis to understand the transferability between different biomedical entity datasets. we found useful measures to predict transferability between these datasets. in addition, we propose combining transfer learning and multi - task learning to improve the performance of biomedical named entity recognition systems, a combination which, to the best of our knowledge, has not been applied before.
arxiv:2011.00425
we describe the circumstances that led to the discovery of kepler - 36b, and the subsequent characterization of its host planetary system. the kepler - 36 system is remarkable for its physical properties : the close separation of the planets, the contrasting densities of the planets despite their proximity, and the short chaotic timescale. its discovery and characterization was also remarkable for the novelty of the detection technique and for the precise characterization due to the large transit - timing variations caused by the close proximity of the planets, as well as the precise stellar parameters due to asteroseismology. this was the first multi - planet system whose transit data was processed using a fully consistent photometric - dynamical model, using population markov chain monte carlo techniques to precisely constrain system parameters. amongst those parameters, the stellar density was found to be consistent with a complementary, concurrent asteroseismic analysis. in a first, the 3d orientation of the planets was constrained from the lack of transit - duration variations. the system yielded insights into the composition and evolution of short - period planet systems. the denser planet appears to have an earth - like composition, with uncertainties comparable to the highest precision rocky exoplanet measurements, and the planet densities foreshadowed the rocky / gaseous boundary. the formation of this system remains a mystery, but should yield insights into the migration and evolution of compact exoplanet systems.
arxiv:1905.05229
in the development of model predictive controllers for pde - constrained problems, the use of reduced order models is essential to enable real - time applicability. besides local linearization approaches, proper orthogonal decomposition ( pod ) has been most widely used in the past in order to derive such models. due to the huge advances concerning both theory as well as the numerical approximation, a very promising alternative based on the koopman operator has recently emerged. in this chapter, we present two control strategies for model predictive control of nonlinear pdes using data - efficient approximations of the koopman operator. in the first one, the dynamic control system is replaced by a small number of autonomous systems with different yet constant inputs. the control problem is consequently transformed into a switching problem. in the second approach, a bilinear surrogate model is obtained via linear interpolation between two of these autonomous systems. using a recent convergence result for extended dynamic mode decomposition ( edmd ), convergence to the true optimum can be proved. we study the properties of these two strategies with respect to solution quality, data requirements, and complexity of the resulting optimization problem using the 1d burgers equation and the 2d navier - stokes equations as examples. finally, an extension for online adaptivity is presented.
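the koopman - based surrogates in this chapter rest on extended dynamic mode decomposition ; the snippet below is a bare - bones edmd step ( lift snapshot pairs through a dictionary of observables and solve a least - squares problem ), with a toy linear map and a monomial dictionary chosen only for illustration.

```python
import numpy as np

def edmd(X, Y, dictionary):
    # extended dynamic mode decomposition: find K such that Psi(X) @ K ~ Psi(Y),
    # where Psi lifts states through a dictionary of observables
    PsiX = np.array([dictionary(x) for x in X])
    PsiY = np.array([dictionary(y) for y in Y])
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)
    return K

# toy usage: the map x -> 0.9 x is recovered exactly with a monomial dictionary
dictionary = lambda x: np.array([1.0, x, x**2])
xs = np.linspace(-1.0, 1.0, 50)
ys = 0.9 * xs
print(np.round(edmd(xs, ys, dictionary), 3))  # ~ diag(1, 0.9, 0.81)
```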
arxiv:1806.09898
as part of an ongoing effort to characterize the high temperature phase of qcd, in a numerical simulation using the staggered fermion scheme, we measure the quark baryon density in the vicinity of a fixed test quark at high temperature and compare it with similar measurements at low temperature and at the crossover temperature. we find an extremely weak correlation at high temperature, suggesting that small color singlet clusters are unimportant in the thermal ensemble. we also find that at $ t = 0. 75 \ t _ c $ the total induced quark number shows a surprisingly large component attributable to baryonic screening. a companion simulation of a simple flux tube model produces similar results and also suggests a plausible phenomenological scenario : as the crossover temperature is approached from below, baryonic states proliferate. above the crossover temperature the mean size of color singlet clusters grows explosively, resulting in an effective electrostatic deconfinement.
arxiv:hep-lat/9309017
the world space observatory project is a new space mission concept, grown out of the needs of the astronomical community to have access to the part of the electromagnetic spectrum where all known physics can be studied on all possible time scales : the ultraviolet range. the physical diagnostics in this domain supply a richness of new experimental data unmatched by any other wavelength range, for the studies of the universe. as wso / uv has been driven by the needs of scientists from many different countries, a new implementation model was needed to bring the world space observatory to reality. the wso / uv consists of a single ultraviolet telescope in orbit, incorporating a primary mirror of 1. 7 m diameter feeding a uv spectrograph and uv imagers.
arxiv:astro-ph/0306554
chlorine ( cl ) is a chemical element of the group of the halogens and is between the 17th and the 20th most abundant elements in the solar system. it is thought to be produced from the capture of a proton or neutron by specific alpha - element isotopes during both hydrostatic and explosive oxygen burning, though some contribution may come from type ia supernovae. cl lines are quite rare in stellar spectra, so most of the information available about its abundance comes from analyzing the emission lines of ionized nebulae, especially the collisionally excited lines of cl2 + ( [ cl iii ] { \ lambda } { \ lambda } 5518, 5538 ). our goal is to accurately determine the cl abundance in h ii regions, and gather more information about its nucleosynthetic origin. for this work we used a sample of observations that encompasses the deepest spectra of h ii regions available in the literature, from both the milky way and other galaxies in the local universe, covering a range of oxygen ( o ) abundances, 12 + log ( o / h ), from 7. 18 to 8. 70. as a first step, we determine the most representative electron temperature of the zone of the nebulae where the cl2 + ion lies. to this aim we used a grid of photoionization models and diagnostics valid for other ions, as that parameter cannot be determined directly through [ cl iii ] lines. we then computed the total cl abundance using different sets of ionization correction factors to account for the contribution from unseen ionization stages.
arxiv:2410.18673
the wiedemann - franz law, connecting the electronic thermal conductivity to the electrical conductivity of a disordered metal, is generally found to be well satisfied even when electron - electron ( e - e ) interactions are strong. in ultra - clean conductors, however, large deviations from the standard form of the law are expected, due to the fact that e - e interactions affect the two conductivities in radically different ways. thus, the standard wiedemann - franz ratio between the thermal and the electric conductivity is reduced by a factor $ 1 + \ tau / \ tau _ { \ rm th } ^ { \ rm ee } $, where $ 1 / \ tau $ is the momentum relaxation rate, and $ 1 / \ tau _ { \ rm th } ^ { \ rm ee } $ is the relaxation time of the thermal current due to e - e collisions. here we study the density and temperature dependence of $ 1 / \ tau _ { \ rm th } ^ { \ rm ee } $ in the important case of doped, clean single layers of graphene, which exhibit record - high thermal conductivities. we show that at low temperature $ 1 / \ tau _ { \ rm th } ^ { \ rm ee } $ is $ 8 / 5 $ of the quasiparticle decay rate. we also show that the many - body renormalization of the thermal drude weight coincides with that of the fermi velocity.
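spelled out as an equation ( with the free - electron lorenz number $ l _ 0 $ written explicitly as an assumption about the normalization ), the reduction of the wiedemann - franz ratio described above reads

```latex
\frac{\kappa}{\sigma T} \;=\; \frac{L_0}{1 + \tau/\tau_{\rm th}^{\rm ee}},
\qquad
L_0 = \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2 .
```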
arxiv:1406.2940
by comparing semi - analytic galaxy catalogues with data from the sloan digital sky survey ( sdss ), we show that current galaxy formation models reproduce qualitatively the dependence of galaxy clustering and pairwise peculiar velocities on luminosity, but some subtle discrepancies with the data still remain. the comparisons are carried out by constructing a large set of mock galaxy redshift surveys that have the same selection function as the sdss data release four ( dr4 ). the mock surveys are based on two sets of semi - analytic catalogues presented by croton et al. and kang et al. from the mock catalogues, we measure the redshift - space projected two - point correlation function, the power spectrum, and the pairwise velocity dispersion ( pvd ) in fourier space and in configuration space, for galaxies in different luminosity intervals. we then compare these theoretical predictions with the measurements derived from the sdss dr4. on large scales and for galaxies brighter than l *, both sets of mock catalogues agree well with the data. for fainter galaxies, however, both models predict stronger clustering and higher pairwise velocities than observed. we demonstrate that this problem can be resolved if the fraction of faint satellite galaxies in massive haloes is reduced by ~ 30 % compared to the model predictions. a direct look into the model galaxy catalogues reveals that a significant fraction ( 15 % ) of faint galaxies ( $ - 18 < m _ { ^ { 0. 1 } r } < - 17 $ ) reside in haloes with $ m _ { vir } > 10 ^ { 13 } \ msun $, and this population is predominantly red in colour. these faint red galaxies are responsible for the high pvd values of low - luminosity galaxies on small scales.
arxiv:astro-ph/0701218
we study how the kinetic decoupling of dark matter ( dm ) within a minimal supersymmetric extension of the standard model, by adopting nine independent parameters ( mssm - 9 ), could improve our knowledge of the properties of the dm protohalos. we show that the most probable neutralino mass regions, which satisfy the relic density and the higgs mass constraints, are those with the lightest supersymmetric neutralino mass around 1 tev and 3 tev, corresponding to higgsino - like and wino - like neutralino, respectively. the kinetic decoupling temperature in the mssm - 9 scenario leads to a most probable protohalo mass in a range of $ m _ { \ mathrm { ph } } \ sim 10 ^ { - 12 } - 10 ^ { - 7 } \, m _ \ odot $. the part of the region closer to 2 tev also gives important contributions from the neutralino - stau co - annihilation, reducing the effective annihilation rate in the early universe. we also study how the size of the smallest dm substructures correlates to experimental signatures, such as the spin - dependent and spin - independent scattering cross sections, relevant for direct detection of dm. improvements on the spin - independent sensitivity might reduce the most probable range of the protohalo mass between $ \ sim $ 10 $ ^ { - 9 } \, m _ \ odot $ and $ \ sim $ 10 $ ^ { - 7 } \, m _ \ odot $, while the expected spin - dependent sensitivity provides weaker constraints. we show how the boost of the luminosity due to dm annihilation increases, depending on the protohalo mass. in the higgsino case, the protohalo mass is lower than the canonical value often used in the literature ( $ \ sim $ 10 $ ^ { - 6 } \, m _ \ odot $ ), while $ \ langle \ sigma v \ rangle $ does not deviate from $ \ langle \ sigma v \ rangle \ sim 10 ^ { - 26 } $ cm $ ^ 3 $ s $ ^ { - 1 } $ ; there is no significant enhancement of the luminosity. on the contrary, in the wino case, the protohalo mass is even lighter, and $ \ langle \ sigma v \ rangle $ is two orders of magnitude larger ; as its consequence, we see
arxiv:1506.01529
brightest cluster galaxies ( bcgs ) might have been assembled relatively late ( z < 1 ) via mergers. by exploiting the high - resolution hst / acs imaging, we find four bcgs ( cosmos - p 125516, 102810, 036694 and 089357 ) in major dry merging in 29 x - ray clusters at $ 0. 3 \ le z \ le 0. 6 $ in the cosmological evolutionary survey ( cosmos ). these bcgs show prominent but quiescent double nuclei with a magnitude difference of $ \ delta m < 1. 5 $ and a projected separation of $ r _ p < $ 10 kpc. clear signatures of interaction such as extended plumes and / or significant asymmetries are also observed in their residual images. we infer a major merger rate of $ 0. 55 \ pm0. 27 $ merger per gyr at $ z \ sim0. 43 $ assuming the merger time - scale estimate of kitzbichler & white ( 2008 ). this inferred rate is significantly higher than the rate in the local universe ( $ 0. 12 \ pm0. 03 $ at $ z \ sim0. 07 $ ) presented in liu et al. ( 2009 ). we estimate that present - day bcgs increase their luminosity ( mass ) by $ \ sim35 \ pm15 $ per cent $ ( f _ { mass } / 0. 5 ) $ via major dry mergers since $ z = 0. 6 $, where $ f _ { mass } $ is the mean mass fraction of companion galaxies accreted onto the central ones. although the statistical uncertainty due to our small sample size is relatively large, our finding is consistent with both recent observational and theoretical results. furthermore, in conjunction with our previous findings in liu et al. ( 2009 ), the discovery of these intermediate - redshift merging bcgs is clear evidence of ongoing assembly of bcgs via major dry mergers over the last $ \ sim $ 6 gyr.
arxiv:1412.1861
internal target experiments with high quality proton beams allow for a new class of experiments providing null tests of time reversal symmetry in forward scattering. this could yield more stringent limits on t - odd p - even observables. an excellent candidate for such experiments is the proton deuteron system. this system is analyzed in terms of effective t - violating p - conserving nucleon - nucleon interactions and bounds on coupling strengths that might be expected are given.
arxiv:nucl-th/9302002
previous studies of the vacuum polarization on de sitter have demonstrated that there is a simple, noncovariant representation of it in which the physics is transparent. there is also a cumbersome, covariant representation in which the physics is obscure. despite being unwieldy, the latter form has a powerful appeal for those who are concerned about de sitter invariance. we show that nothing is lost by employing the simple, noncovariant representation because there is a closed form procedure for converting its structure functions to those of the covariant representation. we also present a vastly improved technique for reading off the noncovariant structure functions from the primitive diagrams. and we discuss the issue of representing the vacuum polarization for a general metric background.
arxiv:1211.1342
manifestations of the solar magnetic activity through periodicities of about 11 and 2 years are now clearly seen in all solar activity indices. in this paper, we add information about the mechanism driving the 2 year period by studying the time and latitudinal properties of acoustic modes that are sensitive probes of the subsurface layers. we use almost 17 years of high quality resolved data provided by the global oscillation network group ( gong ) to investigate the solar cycle changes in p - mode frequencies for spherical degrees l from 0 to 120 and 1. 6 mhz < nu < 3. 5 mhz. for both periodic components of solar activity, we locate the origin of the frequency shift in the subsurface layers and find evidence for a sudden enhancement in amplitude just in the last few hundred kilometers. we also show that, in both cases, the size of the shift increases towards equatorial latitudes and from minimum to maximum of solar activity, but, in agreement with previous findings, the quasi - biennial periodicity ( qbp ) causes a weaker shift in mode frequencies and a slower enhancement than the one caused by the 11 year cycle. we compare our observational findings with the features predicted by different models that try to explain the origin of this qbp and conclude that the observed properties could result from the beating between a dipole and quadrupole magnetic configuration of the dynamo.
arxiv:1210.6796
of polar and geosynchronous weather satellites. the relationship typically involves nasa developing the space systems, launch solutions, and ground control technology for the satellites and noaa operating the systems and delivering weather forecasting products to users. multiple generations of noaa polar orbiting platforms have operated to provide detailed imaging of weather from low altitude. geostationary operational environmental satellites ( goes ) provide near - real - time coverage of the western hemisphere to ensure accurate and timely understanding of developing weather phenomenon. = = = united states space force = = = the united states space force ( ussf ) is the space service branch of the united states armed forces, while the national aeronautics and space administration ( nasa ) is an independent agency of the united states government responsible for civil spaceflight. nasa and the space force ' s predecessors in the air force have a long - standing cooperative relationship, with the space force supporting nasa launches out of kennedy space center, cape canaveral space force station, and vandenberg space force base, to include range support and rescue operations from task force 45. nasa and the space force also partner on matters such as defending earth from asteroids. space force members can be nasa astronauts, with colonel michael s. hopkins, the commander of spacex crew - 1, commissioned into the space force from the international space station on december 18, 2020. in september 2020, the space force and nasa signed a memorandum of understanding formally acknowledging the joint role of both agencies. this new memorandum replaced a similar document signed in 2006 between nasa and air force space command. = = = us geological survey = = = the landsat program is the longest - running enterprise for acquisition of satellite imagery of earth. it is a joint nasa / usgs program. on july 23, 1972, the earth resources technology satellite was launched. this was eventually renamed to landsat 1 in 1975. the most recent satellite in the series, landsat 9, was launched on september 27, 2021. the instruments on the landsat satellites have acquired millions of images. the images, archived in the united states and at landsat receiving stations around the world, are a unique resource for global change research and applications in agriculture, cartography, geology, forestry, regional planning, surveillance and education, and can be viewed through the us geological survey ( usgs ) " earthexplorer " website. the collaboration between nasa and usgs involves nasa designing and delivering the space system ( satellite ) solution, launching the satellite into orbit with the usgs operating the system once in orbit. as of october
https://en.wikipedia.org/wiki/NASA
we prove that the hausdorff dimension of an average conformal repeller is stable under random perturbations. our perturbation model uses the notion of a bundle random dynamical system.
arxiv:0909.5261
the pion mass difference generates a pronounced cusp in k - - > 3 pi decays. as has recently been pointed out by cabibbo and isidori, an accurate measurement of the cusp may allow one to pin down the s - wave pi pi scattering lengths to high precision. here, we present and illustrate an effective field theory framework that allows one to determine the structure of this cusp in a straightforward manner. the strictures imposed by analyticity and unitarity are respected automatically.
arxiv:hep-ph/0604084
the s - wave meson - nucleon interaction in the s = - 1 sector is studied by means of a coupled - channel lippmann schwinger equation, using the lowest order chiral lagrangian and a cut off to regularize the loop integrals. the position and width of the lambda ( 1405 ) resonance and the k ^ - p scattering cross sections at low energies are well reproduced. the inclusion of the eta lambda, eta sigma ^ 0 channels in the coupled system is found very important and allows a solution in terms of only the lowest order lagrangian. the model is applied to calculate the in - medium k ^ - self - energy to which we add a small p - wave piece resulting from the coupling to hyperon particle - nucleon hole excitations. the k ^ - feels an attraction of about - 100 mev at normal nuclear density. the lambda ( 1405 ) resonance shifts to energies above the $ k ^ - p $ threshold and ends up dissolving as density increases. it remains to be seen how these effects persist when the dressing of the kbar and the pi mesons is incorporated self - consistently in the calculation.
arxiv:nucl-th/9810014
we present a hereditary class of graphs of unbounded clique - width which is well - quasi - ordered by the induced subgraph relation. this result provides a negative answer to the question asked by daligault, rao and thomass \ ' e in ( " well - quasi - order of relabel functions ", order, 27 ( 3 ) : 301 - - 315, 2010 ).
arxiv:1503.00571
using mobile robots for autonomous patrolling of environments to prevent intrusions is a topic of increasing practical relevance. one of the most challenging scientific issues is the problem of finding effective patrolling strategies that, at each time point, determine the next moves of the patrollers in order to maximize some objective function. in the last few years, this problem has been addressed in a game theoretical fashion, explicitly considering the presence of an adversarial intruder. the general idea is that of modeling a patrolling situation as a game, played by the patrollers and the intruder, and of studying the equilibria of this game to derive effective patrolling strategies. in this paper we present a game theoretical formal framework for the determination of effective patrolling strategies that extends the previous proposals that appeared in the literature, by considering environments with arbitrary topology and arbitrary preferences for the agents. the main original contributions of this paper are the formulation of the patrolling game for generic graph environments, an algorithm for finding a deterministic equilibrium strategy, which is a fixed path through the vertices of the graph, and an algorithm for finding a non - deterministic equilibrium strategy, which is a set of probabilities for moving between adjacent vertices of the graph. both algorithms are analytically studied and experimentally validated, to assess their properties and efficiency.
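as a hedged toy complement ( not the paper's algorithm ), the check below verifies whether a given fixed patrol path, repeated forever on a graph with unit - time moves, revisits every vertex within the intruder's penetration time, the property that makes a deterministic strategy effective.

```python
def is_covering_walk(walk, n_vertices, penetration_time):
    # a closed patrol walk (repeated forever) deters the intruder if every
    # vertex is revisited within the penetration time; moves take unit time
    T = len(walk)
    for v in range(n_vertices):
        visits = [t for t, u in enumerate(walk) if u == v]
        if not visits:
            return False
        gaps = [((visits[(i + 1) % len(visits)] - visits[i]) % T) or T
                for i in range(len(visits))]
        if max(gaps) > penetration_time:
            return False
    return True

# toy line graph 0-1-2-3 patrolled back and forth
walk = [0, 1, 2, 3, 2, 1]
print(is_covering_walk(walk, 4, penetration_time=6))  # True
print(is_covering_walk(walk, 4, penetration_time=4))  # False: endpoints revisited every 6 steps
```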
arxiv:0912.3275
a number of recent works in astronomy and cosmology have relied upon theoretical he i emissivities, but we know of no effort to quantify the uncertainties in the atomic data. we analyze and assign uncertainties to all relevant atomic data, perform monte carlo analyses, and report standard deviations in the line emissivities. we consider two sets of errors, which we call " optimistic " and " pessimistic. " we also consider three different conditions, corresponding to prototypical galactic and extragalactic h ii regions and the epoch of cosmological recombination. in the extragalactic h ii case, the errors we obtain are comparable to or larger than the errors in some recent $ y _ p $ calculations, including those derived from cmb observations. we demonstrate a systematic effect on primordial abundance calculations ; this effect cannot be reduced by observing a large number of objects. in the cosmological recombination case, the errors are comparable to many of the effects considered in recent calculations.
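the monte carlo analysis itself is conceptually simple ; the sketch below propagates assigned fractional uncertainties of a few atomic inputs into a derived quantity by repeated gaussian perturbation. the emissivity function, input names and error values are placeholders, not the paper's actual model atom.

```python
import numpy as np

def monte_carlo_propagation(f, inputs, frac_errors, n_draws=10000, seed=0):
    # perturb each atomic-data input with a gaussian of its assigned fractional
    # error and report the mean and standard deviation of the derived quantity
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_draws):
        perturbed = {k: v * (1.0 + frac_errors[k] * rng.standard_normal())
                     for k, v in inputs.items()}
        draws.append(f(perturbed))
    draws = np.array(draws)
    return draws.mean(), draws.std()

# toy "emissivity" depending on two rates with 5% and 10% assigned errors
emissivity = lambda d: d["recomb_rate"] * d["branching"]
print(monte_carlo_propagation(emissivity,
                              {"recomb_rate": 2.0e-13, "branching": 0.3},
                              {"recomb_rate": 0.05, "branching": 0.10}))
```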
arxiv:0811.1216