Columns: text (string, lengths 1 to 3.65k); source (string, lengths 15 to 79)
We propose a new, nonparametric method for multivariate regression subject to convexity or concavity constraints on the response function. Convexity constraints are common in economics, statistics, operations research, financial engineering and optimization, but there is currently no multivariate method that is computationally feasible for more than a few hundred observations. We introduce convex adaptive partitioning (CAP), which creates a globally convex regression model from locally linear estimates fit on adaptively selected covariate partitions. CAP is computationally efficient, in stark contrast to current methods. The most popular method, the least squares estimator, has a computational complexity of $\mathcal{O}(n^3)$. We show that CAP has a computational complexity of $\mathcal{O}(n \log(n) \log(\log(n)))$ and also give consistency results. CAP is applied to value function approximation for pricing American basket options with a large number of underlying assets.
arxiv:1105.1924
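The core property CAP exploits is that a pointwise maximum of affine functions is globally convex. The sketch below is a minimal illustration of that idea only, not the paper's adaptive partitioning algorithm: it uses a fixed quantile partition along one covariate (where CAP selects partitions adaptively), fits a least-squares hyperplane per cell, and predicts with the pointwise max.

```python
import numpy as np

def fit_max_of_hyperplanes(X, y, n_parts=5):
    """Fit one least-squares hyperplane per partition cell; predicting with
    the pointwise max of affine functions yields a globally convex model."""
    # Fixed quantile bins along the first covariate (CAP chooses adaptively).
    edges = np.quantile(X[:, 0], np.linspace(0, 1, n_parts + 1))
    planes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (X[:, 0] >= lo) & (X[:, 0] <= hi)
        if mask.sum() < X.shape[1] + 1:
            continue  # too few points to fit a hyperplane in this cell
        A = np.hstack([X[mask], np.ones((mask.sum(), 1))])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        planes.append(coef)
    return np.array(planes)

def predict(planes, X):
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    return (A @ planes.T).max(axis=1)  # max over hyperplanes => convex

# Toy check on a noisy convex target y = ||x||^2
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = (X ** 2).sum(axis=1) + 0.05 * rng.normal(size=500)
planes = fit_max_of_hyperplanes(X, y)
print(predict(planes, np.array([[0.0, 0.0], [1.5, 1.5]])))
```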
Large-scale computer simulations are used to elucidate a longstanding controversy regarding the existence, or otherwise, of spin waves in paramagnetic bcc iron. Spin dynamics simulations of the dynamic structure factor of a Heisenberg model of Fe with first-principles interactions reveal that well-defined peaks persist far above the Curie temperature $T_c$. At large wave vectors these peaks can be ascribed to propagating spin waves; at small wave vectors the peaks correspond to over-damped spin waves. Paradoxically, spin wave excitations exist despite only limited magnetic short-range order at and above $T_c$.
arxiv:cond-mat/0501713
We examine the current state of neutralino dark matter and consider how the LEP constraints on the minimal supersymmetric standard model parameters are squeezing the available dark matter regions. We also show how cosmological constraints augment bounds coming from collider searches to further constrain the MSSM parameter space.
arxiv:hep-ph/9807481
Optical trapping has proven to be a valuable experimental technique for precisely controlling small dielectric objects. However, due to their very nature, conventional optical traps are diffraction-limited and require high intensities to confine dielectric objects. In this work, we propose a novel optical trap based on dielectric photonic crystal nanobeam cavities, which overcomes the limitations of conventional optical traps by significant factors. This is achieved by exploiting an optomechanically induced backaction mechanism between a dielectric nanoparticle and the cavities. We perform numerical simulations to show that our trap can fully levitate a submicron-scale dielectric particle with a trap width as narrow as 56 nm. It achieves a high trap stiffness, and therefore a high Q-frequency product for the particle's motion, while reducing the optical absorption by a factor of 43 compared to conventional optical tweezers. Moreover, we show that multiple laser tones can be used to further create a complex, dynamic potential landscape with feature sizes well below the diffraction limit. The presented optical trapping system offers new opportunities for precision sensing and fundamental quantum experiments based on levitated particles.
arxiv:2304.09522
We present results from modeling of quasi-simultaneous broadband (radio through X-ray) observations of the Galactic stellar black hole (BH) transient X-ray binary (XRB) systems XTE J1118+480 and GX 339-4 using an irradiated disc + compact jet model. In addition to quantifying the physical properties of the jet, we have developed a new irradiated disc model which also constrains the geometry and temperature of the outer accretion disc by assuming a disc heated by viscous energy release and X-ray irradiation from the inner regions. For XTE J1118+480, which has the better spectral coverage of the two at optical and near-IR (OIR) wavelengths, we show that the entire broadband continuum can be well described by an outflow-dominated model plus an irradiated disc. The best-fit radius of the outer edge of the disc is consistent with the Roche lobe geometry of the system, and the temperature of the outer edge of the accretion disc is similar to those found for other XRBs. Irradiation of the disc by the jet is found to be negligible for this source. For GX 339-4, the entire continuum is well described by the jet-dominated model alone, with no disc component required. For the two XRBs, which have very different physical and orbital parameters and were in different accretion states during the observations, the sizes of the jet base are similar, and both seem to prefer a high fraction of non-thermal electrons in the acceleration/shock region and a magnetically dominated plasma in the jet. These results, along with recent similar results from modeling other Galactic XRBs and AGNs, may suggest an inherent unity in diversity in the geometric and radiative properties of compact jets from accreting black holes.
arxiv:0904.2128
Entropy bounds render quantum corrections to the cosmological constant $\Lambda$ finite. Under certain assumptions, the natural value of $\Lambda$ is of order the observed dark energy density $\sim 10^{-10}\,{\rm eV}^4$, thereby resolving the cosmological constant problem. We note that the dark energy equation of state in these scenarios is $w \equiv p/\rho = 0$ over cosmological distances, which is strongly disfavored by observational data. Alternatively, $\Lambda$ in these scenarios might account for the diffuse dark matter component of the cosmological energy density.
arxiv:hep-th/0403052
Image captioning, the generation of natural language descriptions of images, has gained immense popularity in the recent past, and different deep-learning techniques have been devised for developing factual and stylized image captioning models. Previous models focused on generating factual and stylized captions separately, providing more than one caption for a single image, and the descriptions they generate suffer from out-of-vocabulary and repetition issues. To the best of our knowledge, no existing work integrates different captioning methods to describe the contents of an image with both factual and stylized (romantic and humorous) elements. To overcome these limitations, this paper presents a novel unified attention and multi-head attention-driven caption summarization transformer (UnMA-CapSumT) based captioning framework. It utilizes both factual captions and stylized captions, generated respectively by a modified adaptive attention-based factual image captioning model (MAA-FIC) and a style-factored Bi-LSTM with attention (SF-Bi-ALSTM) driven stylized image captioning model. The SF-Bi-ALSTM-based stylized IC model generates two prominent styles of expression: romance and humor. The proposed summarizer UnMHA-ST combines the factual and stylized descriptions of an input image to generate style-rich, coherent, summarized captions. The proposed UnMHA-ST transformer learns and summarizes different linguistic styles efficiently by incorporating the proposed FastText with attention word embedding (FTA-WE) and a pointer-generator network with a coverage mechanism to solve the out-of-vocabulary and repetition problems. Extensive experiments are conducted on Flickr8k and a subset of FlickrStyle10k, with supporting ablation studies, to prove the efficiency and efficacy of the proposed framework.
arxiv:2412.11836
Non-analyticities in the logarithm of the Loschmidt echo, known as dynamical quantum phase transitions (DQPTs), are a recently introduced attempt to classify the myriad of possible phenomena which can occur in far-from-equilibrium closed quantum systems. In this work, we analytically investigate the Loschmidt echo in nonequilibrium $s$-wave and topological $p_x + ip_y$ fermionic superfluids. We find that the presence of non-analyticities in the echo is not invariant under global rotations of the superfluid phase. We remedy this deficiency by introducing a more general notion of a grand canonical Loschmidt echo. Overall, our study shows that DQPTs are not a good indicator of the long-time dynamics of an interacting system. In particular, there are no DQPTs to tell apart distinct dynamical phases of quenched BCS superconductors. Nevertheless, they can signal a quench-induced change in topology and also keep track of solitons emerging from unstable stationary states of a BCS superconductor.
arxiv:2103.03754
The Gelfand–Naimark theorem supplies a one-to-one correspondence between commutative $C^*$-algebras and locally compact Hausdorff spaces, so any noncommutative $C^*$-algebra can be regarded as a generalization of a topological space. Generalizations of several topological invariants may be defined by algebraic methods; for example, the Serre–Swan theorem states that complex topological $K$-theory coincides with the $K$-theory of $C^*$-algebras. This article is concerned with the generalization of local systems. The classical construction of a local system presupposes the existence of a path groupoid, an object which noncommutative geometry does not contain. There is, however, a construction of local systems which uses covering projections, and the classical (commutative) notion of a covering projection does have a noncommutative generalization. This noncommutative generalization of covering projections supplies a generalization of local systems.
arxiv:1411.2505
Polar codes are proven to be capacity-achieving and have been shown to have finite-length performance equivalent to, or even better than, that of turbo/LDPC codes under some improved decoding algorithms over additive white Gaussian noise (AWGN) channels. Polar coding is based on the so-called channel polarization phenomenon induced by a transform over the underlying binary-input channel. Channel polarization has been found to be universal in many signal processing problems and has been applied to coded modulation schemes. In this paper, channel polarization is further extended to multiple antenna transmission following a multilevel coding principle. The multiple-input multiple-output (MIMO) channel under quadrature amplitude modulation (QAM) is transformed into a series of synthesized binary-input channels under a three-stage channel transform. Based on this generalized channel polarization, the proposed space-time polar coded modulation (STPCM) scheme allows a joint optimization of the binary polar coding, modulation and MIMO transmission. In addition, a practical solution for polar code construction over fading channels is provided, in which the fading channels are approximated by an AWGN channel that shares the same capacity as the original. Simulations over a MIMO channel with uncorrelated Rayleigh fast fading show that the proposed STPCM scheme can outperform the bit-interleaved turbo coded scheme, which is adopted in many existing communication systems, in all simulated cases.
arxiv:1312.1799
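The three-stage STPCM transform is too involved for a short sketch, but it builds on the same binary polarization kernel $F = [[1, 0], [1, 1]]$ as plain polar coding. As a hedged illustration of that kernel only (not the STPCM scheme itself), the recursion below computes the polar transform $x = u F^{\otimes n}$ over GF(2):

```python
def polar_encode(u):
    """Recursive polar transform x = u * F^{(tensor) n} over GF(2);
    len(u) must be a power of two."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    # Channel combining step: (u_a XOR u_b, u_b), applied recursively.
    upper = polar_encode([a ^ b for a, b in zip(u[:half], u[half:])])
    lower = polar_encode(u[half:])
    return upper + lower

print(polar_encode([1, 0, 1, 1]))  # -> [1, 1, 0, 1]
```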
We analyse the sample of pulsar proper motions, taking detailed account of the selection effects of the original surveys, and treat censored data using survival statistics. From a comparison of our results with Monte Carlo simulations, we find that the mean birth speed of a pulsar is 250-300 km/s, rather than the 450 km/s found by Lyne & Lorimer (1994). The resultant distribution is consistent with a Maxwellian with dispersion $\sigma_v = 190$ km/s. Despite the large birth velocities, we find that pulsars with long characteristic ages show the asymmetric drift, indicating that they are dynamically old. These pulsars may result from the low-velocity tail of the younger population, although modified by their origin in binaries and by evolution in the Galactic potential.
arxiv:astro-ph/9708071
We study a cosmological model of gravity coupled to three self-interacting scalar fields, one of them with a negative kinetic term. The theory has cosmological solutions described by three-dimensional quadratic autonomous equations, leading to strange attractors. The associated chaotic cosmologies exhibit highly fluctuating periods of contraction and expansion, alternating with long, steady periods in a de Sitter-like phase.
arxiv:2204.06018
A planetary system consists of a host star and one or more planets, arranged in a particular configuration. Here, we consider what information belongs to the configuration, or ordering, of 4286 Kepler planets in their 3277 planetary systems. First, we train a neural network model to predict the radius and period of a planet based on the properties of its host star and the radii and periods of its neighbors. The mean absolute error of the predictions of the trained model is a factor of 2.1 better than the MAE of the predictions of a naive model which draws randomly from dynamically allowable periods and radii. Second, we adapt a model used for unsupervised part-of-speech tagging in computational linguistics to investigate whether planets or planetary systems fall into natural categories with physically interpretable "grammatical rules." The model identifies two robust groups of planetary systems: (1) compact multi-planet systems and (2) systems around giant stars ($\log{g} \lesssim 4.0$), although the latter group is strongly sculpted by the selection bias of the transit method. These results reinforce the idea that planetary systems are not random sequences; instead, as a population, they contain predictable patterns that can provide insight into the formation and evolution of planetary systems.
arxiv:2105.09966
Deformations of spacelike hypersurfaces in space-time play an important role in discussions of general covariance and slicing independence in gravitational theories. In a canonical formulation, they provide the geometrical meaning of gauge transformations generated by the diffeomorphism and Hamiltonian constraints. However, it has been known for some time that the relationship between hypersurface deformations and general covariance is not a kinematical equivalence but holds only on the solution space of the constraints and requires their gauge equations and equations of motion to be used. The off-shell behavior of hypersurface deformations on their own, without imposing constraint and gauge equations, is therefore different from space-time diffeomorphisms. A complete understanding of this behavior is important for potential quantizations or modifications of general relativity in canonical form, and for compatible space-time geometries that may be implied by them. Here, a geometrical analysis of hypersurface deformations is performed, allowing for a dependence of the hypersurface deformation generators (the lapse function and the shift vector) on the phase-space degrees of freedom given by the geometry of an embedded spacelike hypersurface. The result is compared in detail with Poisson brackets of the gravitational constraints. As a new implication of physical relevance, covariance conditions are obtained for theories of emergent modified gravity without symmetry restrictions.
arxiv:2410.18807
A formalism of differential forms is developed for a variety of quantum and noncommutative situations.
arxiv:quant-ph/9807092
Globular clusters (GCs) are established emitters of high-energy (HE, 100 MeV < E < 100 GeV) $\gamma$-ray radiation, which could originate from the cumulative emission of the numerous millisecond pulsars (msPSRs) in the clusters' cores or from inverse Compton (IC) scattering of relativistic leptons accelerated in the GC environment. GCs could also constitute a new class of sources in the very-high-energy (VHE, E > 100 GeV) $\gamma$-ray regime, judging from the recent detection of emission from the direction of Terzan 5 with the H.E.S.S. telescope array. To search for VHE $\gamma$-ray sources associated with other GCs, and to put constraints on leptonic emission models, we systematically analyzed the observations towards 15 GCs taken with H.E.S.S. We searched for individual sources of VHE $\gamma$-rays from each GC in our sample and also performed a stacking analysis combining the data from all GCs to investigate the hypothesis of a population of faint emitters. Assuming IC emission as the source of the emission from Terzan 5, we calculated the expected $\gamma$-ray flux for each of the 15 GCs, based on their number of millisecond pulsars, their optical brightness and the energy density of background photon fields. We did not detect significant emission from any of the 15 GCs. The obtained flux upper limits allow us to rule out the simple IC/msPSR scaling model for NGC 6388 and NGC 7078. The upper limits derived from the stacking analyses are factors between 2 and 50 below the flux predicted by the simple leptonic model, depending on the assumed source extent and the dominant target photon fields. Therefore, Terzan 5 still remains exceptional among all GCs, as its VHE $\gamma$-ray emission either arises from extraordinarily efficient leptonic processes, or from a recent catastrophic event, or is even unrelated to the GC itself.
arxiv:1307.4555
We make a first step towards a classification of simple generalized Harish-Chandra modules which are not Harish-Chandra modules or weight modules of finite type. For an arbitrary algebraic reductive pair of complex Lie algebras $(\mathfrak{g}, \mathfrak{k})$, we construct, via cohomological induction, the fundamental series $F^\cdot(\mathfrak{p}, E)$ of generalized Harish-Chandra modules. We then use $F^\cdot(\mathfrak{p}, E)$ to characterize any simple generalized Harish-Chandra module with generic minimal $\mathfrak{k}$-type. More precisely, we prove that any such simple $(\mathfrak{g}, \mathfrak{k})$-module of finite type arises as the unique simple submodule of an appropriate fundamental series module $F^s(\mathfrak{p}, E)$ in the middle dimension $s$. Under the stronger assumption that $\mathfrak{k}$ contains a semisimple regular element of $\mathfrak{g}$, we prove that any simple $(\mathfrak{g}, \mathfrak{k})$-module with generic minimal $\mathfrak{k}$-type is necessarily of finite type, and hence obtain a reconstruction theorem for a class of simple $(\mathfrak{g}, \mathfrak{k})$-modules which can a priori have infinite type. We also obtain generic versions of some classical theorems of Harish-Chandra, such as the Harish-Chandra admissibility theorem. The paper is concluded by examples; in particular, we compute the genericity condition on a $\mathfrak{k}$-type for any pair $(\mathfrak{g}, \mathfrak{k})$ with $\mathfrak{k} \simeq \mathfrak{sl}(2)$.
arxiv:math/0409285
Humor is a defining characteristic of human beings. Our goal is to develop methods that automatically detect humorous statements and rank them on a continuous scale. In this paper we report results using a language model approach, and outline our plans for using methods from deep learning.
arxiv:1705.10272
Entangled-photon coincidence imaging is a method to nonlocally image an object by transmitting a pair of entangled photons through the object and a reference optical system, respectively. The image of the object can be extracted from the coincidence rate of these two photons. From a classical perspective, the image is proportional to the fourth-order correlation function of the wave field. Using classical statistical optics, we study a particular aspect of coincidence imaging with incoherent sources. As an application, we give a proposal to realize lensless Fourier-transform imaging, and discuss its applicability in X-ray diffraction.
arxiv:quant-ph/0408135
Disaggregation regression has become an important tool in spatial disease mapping for making fine-scale predictions of disease risk from aggregated response data. By including high-resolution covariate information and modelling the data-generating process on a fine scale, it is hoped that these models can accurately learn the relationships between covariates and response at a fine spatial scale. However, validating these high-resolution predictions can be a challenge, as often there is no data observed at this spatial scale. In this study, disaggregation regression was performed on simulated data in various settings and the resulting fine-scale predictions were compared to the simulated ground truth. Performance was investigated with varying numbers of data points, sizes of aggregated areas and levels of model misspecification. The effectiveness of cross-validation on the aggregate level as a measure of fine-scale predictive performance was also investigated. Predictive performance improved as the number of observations increased and as the size of the aggregated areas decreased. When the model was well specified, fine-scale predictions were accurate even with small numbers of observations and large aggregated areas. Under model misspecification, predictive performance was significantly worse for large aggregated areas but remained high when response data were aggregated over smaller regions. Cross-validation correlation on the aggregate level was a moderately good predictor of fine-scale predictive performance. While the simulations are unlikely to capture the nuances of real-life response data, this study gives insight into the effectiveness of disaggregation regression in different contexts.
arxiv:2005.03604
Continuous-time series are essential for many modern application areas, e.g. healthcare, automobiles, energy, finance, the internet of things (IoT) and other related areas. Different applications need to process and analyse massive amounts of data in time series structure in order to determine data-driven results, for example financial trend prediction, identifying the probability of the occurrence of a particular event, patient health record processing, and many more. However, modeling real-time data using a continuous-time series is challenging since the dynamical systems behind the data could be differential equations. Several research works have tried to solve the challenges of modelling continuous-time series using different neural network models and approaches for data processing and learning. Existing deep learning models are not free from challenges and limitations due to diversity among different attributes, behaviour, duration of steps, energy, and data sampling rate. This paper describes the general problem domain of time series and reviews the challenges of modelling continuous time series. We present a comparative analysis of recent developments in deep learning models and their contribution to solving different difficulties of modelling continuous time series. We also identify the limitations of existing neural network models and open issues. The main goal of this review is to understand the recent trends in neural network models used in different real-world applications with continuous-time data.
arxiv:2409.09106
In this paper, we present a model for semantic memory that allows machines to collect information and experiences to become more proficient with time. After semantic analysis of the sensory and other related data, the processed information is stored in a knowledge graph, which is then used to comprehend work instructions expressed in natural language. This imparts to industrial robots the cognitive behavior needed to execute the required tasks in a deterministic manner. The paper outlines the architecture of the system along with an implementation of the proposal.
arxiv:2101.01099
We introduce the concept and a default implementation of guided reasoning. A multi-agent system is a guided reasoning system iff one agent (the guide) primarily interacts with other agents in order to improve reasoning quality. We describe Logikon's default implementation of guided reasoning in non-technical terms. This is a living document we'll gradually enrich with more detailed information and examples. Code: https://github.com/logikon-ai/logikon
arxiv:2408.16331
A simple supersymmetric SO(10) GUT in five dimensions is considered. The fifth dimension is compactified on the $S^1/(Z_2 \times Z_2^\prime)$ orbifold possessing two inequivalent fixed points. In our setup, all matter and Higgs multiplets reside on one brane (the PS brane), where the original SO(10) gauge group is broken down to the Pati-Salam (PS) gauge group SU(4)$_c$ $\times$ SU(2)$_L$ $\times$ SU(2)$_R$ by the orbifold boundary condition, while only the SO(10) gauge multiplet resides in the bulk. The further breaking of the PS symmetry to the standard model gauge group is realized by Higgs multiplets on the PS brane, as usual in four-dimensional models. Proton decay is fully suppressed. In our simple setup, gauge coupling unification is realized after incorporating threshold corrections of Kaluza-Klein modes. When supersymmetry is assumed to be broken on the other brane, supersymmetry breaking is transmitted to the PS brane through gaugino mediation with the bulk gauge multiplet.
arxiv:0803.1758
Pretrained large language models (LLMs) are able to solve a wide variety of tasks through transfer learning. Various explainability methods have been developed to investigate their decision-making process. TracIn (Pruthi et al., 2020) is one such gradient-based method which explains model inferences based on the influence of training examples. In this paper, we explore the use of TracIn to improve model performance in the parameter-efficient tuning (PET) setting. We develop conversational safety classifiers via the prompt-tuning PET method and show how the unique characteristics of the PET regime enable TracIn to identify the cause of certain misclassifications by LLMs. We develop a new methodology for using gradient-based explainability techniques to improve model performance, G-BAIR: gradient-based automated iterative recovery. We show that G-BAIR can recover LLM performance on benchmarks after manually corrupting training labels. This suggests that influence methods like TracIn can be used to automatically perform data cleaning, and introduces the potential for interactive debugging and relabeling for PET-based transfer learning methods.
arxiv:2302.06598
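For orientation, TracIn as introduced by Pruthi et al. scores the influence of a training example on a test example as a sum over saved checkpoints of the learning rate times the inner product of the loss gradients. Below is a minimal PyTorch sketch of that score only; the G-BAIR recovery loop built around it is not reproduced here.

```python
import torch

def flat_grad(loss, params):
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def tracin_score(model, loss_fn, checkpoints, lrs, z_tr, y_tr, z_te, y_te):
    """Sum over checkpoints of lr * <grad loss(train), grad loss(test)>."""
    params = [p for p in model.parameters() if p.requires_grad]
    score = 0.0
    for state, lr in zip(checkpoints, lrs):
        model.load_state_dict(state)  # copies in place; params stay valid
        g_tr = flat_grad(loss_fn(model(z_tr), y_tr), params)
        g_te = flat_grad(loss_fn(model(z_te), y_te), params)
        score += lr * torch.dot(g_tr, g_te).item()
    return score

# Demo with a tiny classifier and a single "checkpoint"
model = torch.nn.Linear(4, 2)
loss_fn = torch.nn.CrossEntropyLoss()
z_tr, y_tr = torch.randn(1, 4), torch.tensor([0])
z_te, y_te = torch.randn(1, 4), torch.tensor([1])
print(tracin_score(model, loss_fn, [model.state_dict()], [0.1],
                   z_tr, y_tr, z_te, y_te))
```

A strongly positive score marks a helpful (proponent) training example for the test point, a negative score an opponent; G-BAIR uses such scores to flag likely mislabeled data.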
Studying human-robot interaction over time can provide insights into what really happens when a robot becomes part of people's everyday lives. "In the wild" studies inform the design of social robots, such as for the service industry, to enable them to remain engaging and useful beyond the novelty effect and initial adoption. This paper presents an "in the wild" experiment where we explored the evolution of interaction between users and a robo-barista. We show that perceived trust and prior attitudes are both important factors associated with the usefulness, adaptability and likeability of the robo-barista. A combination of interaction features and user attributes is used to predict user satisfaction. Qualitative insights illuminate users' robo-barista experience and contribute to a number of lessons learned for future long-term studies.
arxiv:2309.02942
Meta-gradient reinforcement learning (RL) allows agents to self-tune their hyper-parameters in an online fashion during training. In this paper, we identify a bias in the meta-gradient of current meta-gradient RL approaches. This bias comes from using a critic trained with the meta-learned discount factor for the advantage estimation in the outer objective, which requires a different discount factor. Because the meta-learned discount factor is typically lower than the one used in the outer objective, the resulting bias can cause the meta-gradient to favor myopic policies. We propose a simple solution to this issue: we eliminate the bias by using an alternative, \emph{outer} value function in the estimation of the outer loss. To obtain this outer value function, we add a second head to the critic network and train it alongside the classic critic, using the outer-loss discount factor. On an illustrative toy problem, we show that the bias can cause catastrophic failure of current meta-gradient RL approaches, and show that our proposed solution fixes it. We then apply our method to a more complex environment and demonstrate that fixing the meta-gradient bias can significantly improve performance.
arxiv:2211.10550
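A minimal sketch of the proposed fix as described in the abstract, with all names hypothetical: a critic with a shared torso and two value heads, where the usual head is trained with the meta-learned discount and the extra head with the fixed outer-loss discount, so the outer advantage can be estimated without the bias.

```python
import torch
import torch.nn as nn

class TwoHeadCritic(nn.Module):
    """Shared torso, two value heads: one for the meta-learned discount,
    one 'outer' head for the fixed discount used in the outer loss."""
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.inner_head = nn.Linear(hidden, 1)  # targets use meta gamma
        self.outer_head = nn.Linear(hidden, 1)  # targets use outer gamma

    def forward(self, obs):
        h = self.torso(obs)
        return self.inner_head(h), self.outer_head(h)

def td_target(reward, next_value, gamma):
    # Each head regresses onto a TD target built with its own discount.
    return reward + gamma * next_value.detach()

critic = TwoHeadCritic(obs_dim=8)
v_inner, v_outer = critic(torch.randn(16, 8))
print(v_inner.shape, v_outer.shape)  # torch.Size([16, 1]) each
```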
the variables, terms, or objects of interest so that persons other than the definer can measure or test them independently) (see also: reproducibility). Failure to make reasonable use of the principle of parsimony, i.e., failing to seek an explanation that requires the fewest possible additional assumptions when multiple viable explanations are possible (see: Occam's razor). Lack of boundary conditions: most well-supported scientific theories possess well-articulated limitations under which the predicted phenomena do and do not apply. Lack of effective controls in experimental design, such as the use of placebos and double-blinding. Lack of understanding of basic and established principles of physics and engineering.

=== Improper collection of evidence ===

Assertions that do not allow the logical possibility that they can be shown to be false by observation or physical experiment (see also: falsifiability). Assertion of claims that a theory predicts something that it has not been shown to predict. Scientific claims that do not confer any predictive power are considered at best "conjectures", or at worst "pseudoscience" (e.g., ignoratio elenchi). Assertion that claims which have not been proven false must therefore be true, and vice versa (see: argument from ignorance). Over-reliance on testimonial, anecdotal evidence, or personal experience: this evidence may be useful for the context of discovery (i.e., hypothesis generation), but should not be used in the context of justification (e.g., statistical hypothesis testing). Use of myths and religious texts as if they were fact, or basing evidence on readings of such texts. Use of concepts and scenarios from science fiction as if they were fact. This technique appeals to the familiarity that many people already have with science fiction tropes through the popular media. Presentation of data that seem to support claims while suppressing or refusing to consider data that conflict with those claims. This is an example of selection bias or cherry picking, a distortion of evidence or data that arises from the way that the data are collected. It is sometimes referred to as the selection effect. Repeating excessive or untested claims that have been previously published elsewhere, and promoting those claims as if they were facts; an accumulation of such uncritical secondary reports, which do not otherwise contribute their own empirical investigation, is called the Woozle effect. Reversed burden of proof: science places the burden of proof on
https://en.wikipedia.org/wiki/Pseudoscience
3D representation disentanglement aims to identify, decompose, and manipulate the underlying explanatory factors of 3D data, which helps AI fundamentally understand our 3D world. This task is currently under-explored and poses great challenges: (i) 3D representations are complex and in general contain much more information than a 2D image; (ii) many 3D representations are not well suited for gradient-based optimization, let alone disentanglement. To address these challenges, we use NeRF as a differentiable 3D representation, and introduce a self-supervised navigation scheme to identify interpretable semantic directions in the latent space. To the best of our knowledge, this novel method, dubbed NaviNeRF, is the first work to achieve fine-grained 3D disentanglement without any priors or supervision. Specifically, NaviNeRF is built upon the generative NeRF pipeline, and equipped with an outer navigation branch and an inner refinement branch. They are complementary: the outer navigation identifies global-view semantic directions, and the inner refinement is dedicated to fine-grained attributes. A synergistic loss is further devised to coordinate the two branches. Extensive experiments demonstrate that NaviNeRF has a superior fine-grained 3D disentanglement ability compared to previous 3D-aware models. Its performance is also comparable to editing-oriented models relying on semantic or geometry priors.
arxiv:2304.11342
We address the problem of efficiently compressing video for conferencing-type applications. We build on recent approaches based on image animation, which can achieve good reconstruction quality at very low bitrate by representing face motions with a compact set of sparse keypoints. However, these methods encode video in a frame-by-frame fashion, i.e. each frame is reconstructed from a reference frame, which limits the reconstruction quality when the bandwidth is larger. Instead, we propose a predictive coding scheme which uses image animation as a predictor, and codes the residual with respect to the actual target frame. The residuals can in turn be coded in a predictive manner, thus efficiently removing temporal dependencies. Our experiments indicate a significant bitrate gain, in excess of 70% compared to the HEVC video standard and over 30% compared to VVC, on a dataset of talking-head videos.
arxiv:2307.04187
For many applications one wishes to decide whether a certain set of numbers originates from an equiprobability distribution or whether they are unequally distributed. Distributions of relative frequencies may deviate significantly from the corresponding probability distributions due to finite-sample effects. Hence, it is not trivial to discriminate between an equiprobability distribution and non-equally distributed probabilities when knowing only frequencies. Based on analytical results, we provide a software tool which allows one to decide whether data correspond to an equiprobability distribution. The tool is available at http://bioinf.charite.de/equifreq/. Its application is demonstrated for the distribution of point mutations in coding genes.
arxiv:q-bio/0401041
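In the large-sample regime, the question the tool answers is the classic goodness-of-fit test against a uniform null; the paper's contribution is handling the finite-sample regime analytically. A sketch of the asymptotic counterpart with SciPy (toy counts, not data from the paper):

```python
from scipy.stats import chisquare

# Toy observed counts, e.g. point mutations per category
observed = [18, 25, 22, 15, 20]
stat, p = chisquare(observed)  # default null: equiprobability
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
# A small p-value is evidence against the equiprobability hypothesis.
```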
Artificial intelligence is a central topic in the computer science curriculum. Since 2011, a project-based learning methodology based on computer games has been designed and implemented in the artificial intelligence course at the University of the Bío-Bío. The project aims to develop software-controlled agents (bots) which are programmed using heuristic algorithms seen during the course. This methodology allows us to obtain good learning results, but several challenges have been found during its implementation. In this paper we show how linguistic descriptions of data can help to provide students and teachers with technical and personalized feedback about the learned algorithms. An algorithm behavior profile and a new Turing test for computer game bots based on linguistic modelling of complex phenomena are also proposed in order to deal with such challenges. To show and explore the possibilities of this new technology, a web platform has been designed and implemented by one of the authors, and its incorporation in the assessment process allows us to improve the teaching-learning process.
arxiv:1711.09744
The PHENIX experiment has conducted searches for the QCD critical point with measurements of multiplicity fluctuations, transverse momentum fluctuations, event-by-event kaon-to-pion ratios, elliptic flow, and correlations. Measurements have been made in several collision systems as a function of centrality and transverse momentum. The results do not show significant evidence of critical behavior in the collision systems and energies studied, although several interesting features are discussed.
arxiv:0909.2587
Using Wilson's numerical renormalization group (NRG) technique, we compute the zero-bias conductance and various correlation functions of a double quantum dot (DQD) system. We present different regimes within a phase diagram of the DQD system. By introducing a negative Hubbard $U$ on one of the quantum dots, we simulate the effect of electron-phonon coupling and explore the properties of the coexisting spin and charge Kondo state. In a triple quantum dot (TQD) system, a multi-stage Kondo effect appears, where localized moments on the quantum dots are screened successively at exponentially distinct Kondo temperatures.
arxiv:0705.4537
In this paper, we construct a hierarchy of hybrid numerical methods for multi-scale kinetic equations based on moment realizability matrices, a concept introduced by Levermore, Morokoff and Nadiga. Following such a criterion, one can consider hybrid schemes where the hydrodynamic part is given either by the compressible Euler or Navier-Stokes equations, or even by more general models, such as the Burnett or super-Burnett systems.
arxiv:1402.6304
We investigate the emergence of anti-ferromagnetic ordering and its effect on the helical edge states in a quantum spin Hall insulator in the presence of strong Coulomb interaction. Using dynamical mean-field theory, we show that the breakdown of lattice translational symmetry favours the formation of magnetic ordering with non-trivial spatial modulation. The onset of a non-uniform magnetization enables the coexistence of spin-ordered and topologically non-trivial states. An unambiguous signature of the persistence of the topological bulk property is the survival of bona fide edge states. We show that the penetration of the magnetic order is accompanied by the progressive reconstruction of gapless states in sub-peripheral layers, redefining the actual topological boundary within the system.
arxiv:1805.03999
We present a simple formalism to interpret two galaxy statistics, the UV luminosity function and two-point correlation functions, for star-forming galaxies at z ~ 4, 5, 6 in the context of LCDM cosmology. Both statistics are the result of how star formation takes place in DM halos, and thus are used to constrain how UV light depends on halo properties such as mass. The two measures were taken from the GOODS data, and are thus ideal for joint analysis. The two physical quantities we explore are the SF duty cycle, and the range of $L_{UV}$ that a halo of mass M can have (mean and variance). The former addresses the typical duration of SF activity in halos, while the latter addresses the averaged SF history and regularity of gas inflow into these systems. We explore various physical models consistent with the data, and find the following: 1) the typical duration of SF observed in the data is < 0.4 Gyr (1 sig); 2) the inferred scaling law between $L_{UV}$ and halo mass M from the observed slope of the LFs is roughly linear at all redshifts; and 3) $L_{UV}$ for a fixed halo mass decreases with time, implying that the SF efficiency (after dust extinction) is higher at earlier times. We explore several physical scenarios relating star formation to halo mass, but find that these scenarios are indistinguishable due to the limited range of halo mass probed by our data. In order to discriminate between different scenarios, we discuss constraining the bright-faint galaxy cross-correlation functions and the luminosity dependence of galaxy bias. (Abridged)
arxiv:0808.1727
A new isomeric $(4^-)$ state at 285.5(32) keV in $^{162}$Tb was reported by R. Orford et al. [Phys. Rev. C 102, 011303(R) (2020)] based on a Penning-trap mass measurement. Here we show that this result is not compatible with existing experimental data. The state identified as $^{162}$Tb$^{m}$, with a mass-excess value of $-65593.9(25)$ keV, is actually the $1^-$ ground state. The state identified as the ground state of $^{162}$Tb is most likely a molecular contaminant with the same mass-over-charge ratio.
arxiv:2405.04563
Random forests are widely claimed to capture interactions well. However, some simple examples suggest that they perform poorly in the presence of certain pure interactions which the conventional CART criterion struggles to capture during tree construction. We argue that simple alternative partitioning schemes used in the tree-growing procedure can enhance identification of these interactions. In a simulation study, we compare these variants to conventional random forests and extremely randomized trees. Our results validate that the modifications considered enhance the model's fitting ability in scenarios where pure interactions play a crucial role.
arxiv:2406.15500
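A toy version of a "pure interaction" makes the point concrete (my construction, not the paper's simulation design): the target $y = \mathrm{sign}(x_1)\,\mathrm{sign}(x_2)$ has no marginal signal in either variable alone, so the greedy CART criterion has nothing to optimize at the root split. The snippet compares conventional random forests with extremely randomized trees on this target:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 5))        # 3 pure-noise covariates
y = np.sign(X[:, 0]) * np.sign(X[:, 1])       # pure two-way interaction
X_test = rng.uniform(-1, 1, size=(500, 5))
y_test = np.sign(X_test[:, 0]) * np.sign(X_test[:, 1])

for name, est in [
    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("extra trees", ExtraTreesRegressor(n_estimators=200, random_state=0)),
]:
    print(name, est.fit(X, y).score(X_test, y_test))  # R^2 on held-out data
```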
We investigate consensus formation and the asymptotic consensus times in stylized individual- or agent-based models, in which global agreement is achieved through pairwise negotiations with or without a bias. Considering a class of individual-based models on finite complete graphs, we introduce a coarse-graining approach (lumping microscopic variables into macrostates) to analyze the ordering dynamics in an associated random-walk framework. Within this framework, which yields a linear system, we derive general equations for the expected consensus time and the expected time spent in each macrostate. Further, we present the asymptotic solutions of the 2-word naming game, and separately discuss its behavior under the influence of an external field and with the introduction of committed agents.
arxiv:1103.4659
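As a companion to the analytical macrostate treatment, a direct microscopic simulation of the 2-word naming game on a complete graph is short enough to sketch; this is the process whose lumped random walk the paper analyzes (unbiased, without committed agents):

```python
import random

def naming_game_consensus_time(n, seed=0):
    """Pairwise negotiations of the 2-word naming game on a complete
    graph; returns the number of interactions until global consensus."""
    random.seed(seed)
    inventories = [{random.choice("AB")} for _ in range(n)]
    t = 0
    while any(inv != inventories[0] or len(inv) != 1 for inv in inventories):
        t += 1
        speaker, hearer = random.sample(range(n), 2)
        word = random.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:
            # Success: both agents collapse to the spoken word.
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:
            inventories[hearer].add(word)
    return t

print(naming_game_consensus_time(100))
```

Averaging this consensus time over many seeds approximates the quantity the coarse-grained linear system computes exactly.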
It is routinely assumed that galaxy rotation curves are equal to their circular velocity curves (modulo some corrections), such that they are good dynamical mass tracers. We take a visualisation-driven approach to exploring the limits of the validity of this assumption for a sample of 33 low-mass galaxies ($60 < v_\mathrm{max}/\mathrm{km\,s^{-1}} < 120$) from the APOSTLE suite of cosmological hydrodynamical simulations. Only 3 of these have rotation curves nearly equal to their circular velocity curves at $z = 0$; the rest are undergoing a wide variety of dynamical perturbations. We use our visualisations to guide an assessment of how many galaxies are likely to be strongly perturbed by processes in several categories: mergers/interactions (affecting 6/33 galaxies), bulk radial gas inflows (19/33), vertical gas outflows (15/33), distortions driven by a non-spherical DM halo (17/33), warps (8/33), and winds due to motion through the IGM (5/33). Most galaxies fall into more than one of these categories; only 5/33 are not in any of them. The sum of these effects leads to an underestimation of the low-velocity slope of the baryonic Tully-Fisher relation ($\alpha \sim 3.1$ instead of $\alpha \sim 3.9$, where $M_\mathrm{bar} \propto v^\alpha$) that is difficult to avoid, and could plausibly be the source of a significant portion of the observed diversity in low-mass galaxy rotation curve shapes.
arxiv:2301.05242
One of the main drawbacks of standard cosmology, known as the horizon problem, was until now thought to be solvable only in an inflationary scenario. A delayed big bang in an inhomogeneous universe is shown to solve this problem while leaving unimpaired the main successful features of the standard model.
arxiv:astro-ph/9809134
and is the earliest example of the format still used in mathematics today, that of definition, axiom, theorem, and proof. Although most of the contents of the Elements were already known, Euclid arranged them into a single, coherent logical framework. The Elements was known to all educated people in the West up through the middle of the 20th century and its contents are still taught in geometry classes today. In addition to the familiar theorems of Euclidean geometry, the Elements was meant as an introductory textbook to all mathematical subjects of the time, such as number theory, algebra and solid geometry, including proofs that the square root of two is irrational and that there are infinitely many prime numbers. Euclid also wrote extensively on other subjects, such as conic sections, optics, spherical geometry, and mechanics, but only half of his writings survive. Archimedes (c. 287–212 BC) of Syracuse, widely considered the greatest mathematician of antiquity, used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, in a manner not too dissimilar from modern calculus. He also showed one could use the method of exhaustion to calculate the value of π with as much precision as desired, and obtained the most accurate value of π then known, 3 + 10/71 < π < 3 + 10/70. He also studied the spiral bearing his name, obtained formulas for the volumes of surfaces of revolution (paraboloid, ellipsoid, hyperboloid), and devised an ingenious method of exponentiation for expressing very large numbers. While he is also known for his contributions to physics and several advanced mechanical devices, Archimedes himself placed far greater value on the products of his thought and general mathematical principles. He regarded as his greatest achievement his finding of the surface area and volume of a sphere, which he obtained by proving these are 2/3 the surface area and volume of a cylinder circumscribing the sphere. Apollonius of Perga (c. 262–190 BC) made significant advances to the study of conic sections, showing that one can obtain all three varieties of conic section by varying the angle of the plane that cuts a double-napped cone. He also coined the terminology in use today for conic sections, namely parabola ("place beside" or "comparison"), "ellipse" ("deficiency"), and "hyperbola" ("a throw beyond"). His work
https://en.wikipedia.org/wiki/History_of_mathematics
We present the first symmetry inheritance analysis of fields nonminimally coupled to gravity. In this work we focus on the real scalar field $\phi$ with nonminimal coupling of the form $\xi \phi^2 R$. Possible cases of symmetry noninheriting fields are constrained by the properties of the Ricci tensor and the scalar potential. Examples of such spacetimes can be found among those which are "dressed" with the stealth scalar field, a nontrivial scalar field configuration with vanishing energy-momentum tensor. We classify the scalar field potentials which allow symmetry noninheriting stealth field configurations on top of the exact solutions of Einstein's gravitational field equation with the cosmological constant.
arxiv:1709.07456
Motivated by the recent CDF measurement of the $W$-boson mass, we study string-based particle physics models which can accommodate this deviation from the standard model. We consider an F-theory GUT in which the visible sector is realized on intersecting 7-branes, and extra-sector states arise from a probe D3-brane near an E-type Yukawa point. The D3-brane worldvolume naturally realizes a strongly coupled sector which mixes with the Higgs. In the limit where some extra-sector states get their mass solely from Higgs vevs, this leads to a contribution to the $\rho$ parameter which is too large, but as the D3-brane is separated from the 7-brane stack, this effect is suppressed, leading to O(1)-O(10) TeV scale extra-sector states and a correction to $\rho$ which would be in accord with the CDF result. We also estimate the contribution to the oblique electroweak parameter $S$, and find that it is compatible with existing constraints. This also leads to additional signatures, including diphoton resonances (as well as other diboson final states) in the O(1)-O(10) TeV range.
arxiv:2204.05302
As part of a long-term effort to understand pre-main sequence Li burning, we have obtained high-resolution spectroscopic observations of 14 late-type stars (G0-M1) in the young open cluster IC 4665. Most of the stars have H$\alpha$ filled in and Li absorption, as expected for their young age. From the equivalent widths of the H$\alpha$ emission excess (obtained using the spectral subtraction technique) and the Li I feature, we have derived H$\alpha$ emission fluxes and photospheric Li abundances. The mean Li abundance of IC 4665 solar-type stars is $\log N({\rm Li}) = 3.1$, the same as in other young clusters ($\alpha$ Per, Pleiades) and T Tauri stars. Our results support the conclusions from previous works that PMS Li depletion is very small for masses $\sim 1\,M_\odot$. Among the IC 4665 late-G and early K-type stars, there is a spread in Li abundances of about one order of magnitude. The Li-poor IC 4665 members have low H$\alpha$ excess and $v \sin i \le 10$. Hence, the Li-activity-rotation connection which has been clearly established in the Pleiades also seems to hold in IC 4665. One M-type IC 4665 star that we have observed does not show Li, implying a very efficient Li depletion, as observed in $\alpha$ Per stars of the same spectral type. The level of chromospheric activity and Li depletion among the low-mass stars of IC 4665 is similar to that in the Pleiades. In fact, we note that the Li abundance distributions in several young clusters ($\alpha$ Per, Pleiades, IC 2391, IC 4665) and in post-T Tauri stars are strikingly similar. This result suggests that H$\alpha$ emission and Li abundance are not well correlated with age for low-mass stars between 20 and 100 Myr old. We argue that a finer age indicator, the "LL-clock", would be the luminosity at
arxiv:astro-ph/9605038
In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function. Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly increased the possible applications of the concept. A function is often denoted by a letter such as f, g or h. The value of a function f at an element x of its domain (that is, the element of the codomain that is associated with x) is denoted by f(x); for example, the value of f at x = 4 is denoted by f(4). Commonly, a specific function is defined by means of an expression depending on x, such as $f(x) = x^2 + 1$; in this case, some computation, called function evaluation, may be needed to deduce the value of the function at a particular value; for example, if $f(x) = x^2 + 1$, then $f(4) = 4^2 + 1 = 17$. Given its domain and its codomain, a function is uniquely represented by the set of all pairs (x, f(x)), called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane. Functions are widely used in science, engineering, and most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics. The concept of a function has evolved significantly over centuries, from its informal origins in ancient mathematics to its formalization in the 19th century.
https://en.wikipedia.org/wiki/Function_(mathematics)
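The worked evaluation above translates directly into code; a one-function Python rendering of the running example:

```python
def f(x):
    return x ** 2 + 1  # the expression defining the function

assert f(4) == 17  # function evaluation at x = 4, as computed above
```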
An accurate prediction of watch time is of vital importance to enhancing user engagement in video recommender systems. To achieve this, there are four properties that a watch-time prediction framework should satisfy. First, despite its continuous value, watch time is also an ordinal variable, and the relative ordering between its values reflects differences in user preferences; these ordinal relations should be reflected in watch-time predictions. Second, the conditional dependence between video-watching behaviors should be captured in the model; for instance, one has to watch half of a video before finishing the whole video. Third, modeling watch time with a point estimate ignores the fact that the model might give results with high uncertainty, which could cause bad cases in recommender systems, so the framework should be aware of prediction uncertainty. Fourth, real-life recommender systems suffer from severe bias amplification, so an estimation without bias amplification is expected. We therefore propose TPM for watch-time prediction. Specifically, the ordinal ranks of watch time are introduced into TPM, and the problem is decomposed into a series of conditionally dependent classification tasks organized into a tree structure. The expectation of watch time can be generated by traversing the tree, and the variance of watch-time predictions is explicitly introduced into the objective function as a measure of uncertainty. Moreover, we illustrate that backdoor adjustment can be seamlessly incorporated into TPM, which alleviates bias amplification. Extensive offline evaluations have been conducted on public datasets, and TPM has been deployed in a real-world video app, Kuaishou, with over 300 million DAUs. The results indicate that TPM outperforms state-of-the-art approaches and indeed improves video consumption significantly.
arxiv:2306.03392
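A minimal sketch of the "expectation by tree traversal" idea described above, with the tree layout, bucket values and classifier stub all hypothetical (the variance term and backdoor adjustment are omitted): each internal node carries a conditional classifier, leaves carry representative watch-time values, and the expectation is the probability-weighted average over root-to-leaf paths.

```python
def expected_watch_time(node, cond_prob, x):
    """Traverse a binary tree of watch-time buckets; internal nodes hold
    conditional classifiers p(left | reached node, x)."""
    if "value" in node:          # leaf: representative watch time
        return node["value"]
    p_left = cond_prob(node["id"], x)
    return (p_left * expected_watch_time(node["left"], cond_prob, x)
            + (1 - p_left) * expected_watch_time(node["right"], cond_prob, x))

# Toy tree over four watch-time buckets (representative midpoints, seconds)
tree = {"id": 0,
        "left":  {"id": 1, "left": {"value": 5.0},  "right": {"value": 20.0}},
        "right": {"id": 2, "left": {"value": 45.0}, "right": {"value": 90.0}}}
uniform = lambda node_id, x: 0.5   # stand-in for the learned classifiers
print(expected_watch_time(tree, uniform, x=None))  # -> 40.0
```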
We study Hardy type inequalities in the framework of equalities. We present equalities which immediately imply Hardy type inequalities by dropping the remainder term. Simultaneously, we give a characterization of the class of functions which makes the remainder term vanish. A point of our observation is to apply an orthogonality property in a general Hilbert space, which gives a simple and direct understanding of the Hardy type inequalities as well as the nonexistence of nontrivial extremizers.
arxiv:1611.03580
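A classical one-dimensional instance of this "equality implies inequality" mechanism may help fix ideas (this is the textbook identity, not necessarily the one used in the paper): for $f \in C_c^\infty(0, \infty)$, integration by parts gives

\[
\int_0^\infty |f'(x)|^2 \, dx - \frac{1}{4} \int_0^\infty \frac{|f(x)|^2}{x^2} \, dx = \int_0^\infty \left| f'(x) - \frac{f(x)}{2x} \right|^2 dx .
\]

Dropping the nonnegative remainder on the right yields the Hardy inequality with its sharp constant $1/4$, and the remainder vanishes iff $f'(x) = f(x)/(2x)$, i.e. $f \propto \sqrt{x}$, which is not an admissible function; hence there are no nontrivial extremizers.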
In 1977, Erd\H{o}s and Hajnal made the conjecture that, for every graph $H$, there exists $c > 0$ such that every $H$-free graph $G$ has a clique or stable set of size at least $|G|^c$; and they proved that this is true with $|G|^c$ replaced by $2^{c\sqrt{\log |G|}}$. Until now, there has been no improvement on this result (for general $H$). We prove a strengthening: for every graph $H$, there exists $c > 0$ such that every $H$-free graph $G$ with $|G| \ge 2$ has a clique or stable set of size at least
$$2^{c\sqrt{\log |G| \log\log |G|}}.$$
Indeed, we prove the corresponding strengthening of a theorem of Fox and Sudakov, which in turn was a common strengthening of theorems of R\"odl, Nikiforov, and the theorem of Erd\H{o}s and Hajnal mentioned above.
arxiv:2301.10147
Deviations from the blackbody spectral energy distribution of the CMB are a precise probe of physical processes active both in the early universe (such as those connected to particle decays and inflation) and at later times (e.g. reionization and astrophysical emissions). Limited progress has been made in the characterization of these spectral distortions after the pioneering measurements of the FIRAS instrument on the COBE satellite in the early 1990s, which mainly targeted the measurement of their average amplitude across the sky. Since at present no follow-up mission is scheduled to update the FIRAS measurement, in this work we re-analyze the FIRAS data and produce a map of $\mu$-type spectral distortion across the sky. We provide an updated constraint on the $\mu$ distortion monopole, $|\langle \mu \rangle| < 47 \times 10^{-6}$ at 95% confidence level, which sharpens the previous FIRAS estimate by a factor of $\sim 2$. We also constrain primordial non-Gaussianities of curvature perturbations on scales $10 \lesssim k \lesssim 5 \times 10^4$ through the cross-correlation of $\mu$ distortion anisotropies with CMB temperature and, for the first time, the full set of polarization anisotropies from the Planck satellite. We obtain upper limits on $f_{\rm NL} \lesssim 3.6 \times 10^6$ and on its running $n_{\rm NL} \lesssim 1.4$ that are limited by the FIRAS sensitivity but robust against galactic and extragalactic foreground contamination. We revisit previous similar analyses based on data from the Planck satellite and show that, despite their significantly lower noise, they yield results similar to or worse than ours once all the instrumental and astrophysical uncertainties are properly accounted for. Our work is the first to self-consistently analyze data from a spectrometer and demonstrates the power of such an instrument to carry out this kind of science case with reduced systematic uncertainties.
arxiv:2206.02762
a protocol by ishai et al. \ ( focs 2006 ) showing how to implement distributed $ n $ - party summation from secure shuffling has regained relevance in the context of the recently proposed \ emph { shuffle model } of differential privacy, as it allows to attain the accuracy levels of the curator model at a moderate communication cost. to achieve statistical security $ 2 ^ { - \ sigma } $, the protocol by ishai et al. \ requires the number of messages sent by each party to { \ em grow } logarithmically with $ n $ as $ o ( \ log n + \ sigma ) $. in this note we give an improved analysis achieving a dependency of the form $ o ( 1 + \ sigma / \ log n ) $. conceptually, this addresses the intuitive question left open by ishai et al. \ of whether the shuffling step in their protocol provides a " hiding in the crowd " amplification effect as $ n $ increases. from a practical perspective, our analysis provides explicit constants and shows, for example, that the method of ishai et al. \ applied to summation of $ 32 $ - bit numbers from $ n = 10 ^ 4 $ parties sending $ 12 $ messages each provides statistical security $ 2 ^ { - 40 } $.
arxiv:1909.11225
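the protocol itself is simple to sketch. the following toy version, with hypothetical parameters, shows the split - and - mix structure : each party splits its input into m additive shares modulo q, a shuffler mixes all shares, and the analyzer sums them ; the improved analysis above concerns how small m can be for a given statistical security level.

```python
import secrets

q = 2 ** 32          # group size for 32-bit summation
m = 12               # messages per party (value from the example above)

def share(x, m, q):
    """split x into m additive shares modulo q."""
    parts = [secrets.randbelow(q) for _ in range(m - 1)]
    parts.append((x - sum(parts)) % q)
    return parts

inputs = [17, 250, 3, 99]
pool = [s for x in inputs for s in share(x, m, q)]
secrets.SystemRandom().shuffle(pool)       # the shuffler mixes all shares
print(sum(pool) % q == sum(inputs) % q)    # analyzer recovers the sum: True
```

the shuffle hides which share came from which party ; correctness holds for any m, and the security analysis determines how many shares per party are needed.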
we investigate a field theoretical approach to the jordan - brans - dicke ( jbd ) theory extended with a particular potential term on a cosmological background, starting from the motivation that the higgs field and the scale factor of the universe are related. based on this relation, it is possible to come up with two mathematically equivalent but different interpretations. from one point of view, the universe is static while the masses of the elementary particles change with time. the other, which we stick with throughout the manuscript, is that the universe is expanding while particle masses are constant. thus, a coupled lagrangian density of the jbd field and the scale factor ( the higgs field ), which exhibit a massive particle and a linearly expanding space at zeroth order respectively, is obtained. by performing a coordinate transformation in the field space for the reduced jbd action, whose kinetic part is a nonlinear sigma model, the lagrangian of the two scalar fields can be written in uncoupled form for the higgs mechanism. after this transformation, as a result of spontaneous symmetry breaking, the time dependent vacuum expectation value ( vev ) of the higgs field and the higgs bosons, the particles corresponding to quantized oscillation modes about the vacuum, are found.
arxiv:2101.10854
we study the feasibility of detecting noncommutative ( nc ) qed through neutral higgs boson ( h ) pair production at linear colliders ( lc ). this is based on the assumption that h interacts directly with the photon in ncqed, as suggested by symmetry considerations and strongly hinted at by our previous study on \ pi ^ 0 - photon interactions. we find the following striking features as compared to the standard model ( sm ) result : ( 1 ) generally larger cross sections for an nc scale of order 1 tev ; ( 2 ) completely different dependence on initial beam polarizations ; ( 3 ) distinct distributions in the polar and azimuthal angles ; and ( 4 ) day - night asymmetry due to the earth ' s rotation. these will help to separate nc signals from those in the sm or other new physics at lc. we emphasize the importance of treating properly the lorentz noninvariance problem and show how the impact of the earth ' s rotation can be used as an advantage for our purpose of searching for nc signals.
arxiv:hep-ph/0105090
to make a more accurate diagnosis of covid - 19, we propose a straightforward yet effective model. firstly, we analyse the characteristics of 3d ct scans and remove the non - lung parts, facilitating the model to focus on lesion - related areas and reducing computational cost. we use resnest50 as the strong feature extractor, initializing it with pretrained weights which have covid - 19 - specific prior knowledge. our model achieves a macro f1 score of 0. 94 on the validation set of the 4th cov19d competition challenge $ \ mathrm { i } $, surpassing the baseline by 16 %. this indicates its effectiveness in distinguishing between covid - 19 and non - covid - 19 cases, making it a robust method for covid - 19 detection.
arxiv:2403.11953
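a minimal sketch of the feature - extractor setup is below, assuming the timm model name ' resnest50d ' and generic imagenet weights ; the paper's covid - specific pretrained weights and preprocessing are not reproduced here, and slice - level scores are simply averaged for illustration.

```python
import timm
import torch

# resnest-50 backbone with a 2-way head; imagenet weights stand in for the
# covid-specific pretrained weights used in the paper (an assumption here)
model = timm.create_model('resnest50d', pretrained=True, num_classes=2)
model.eval()

volume = torch.randn(64, 3, 224, 224)   # 64 preprocessed lung slices
with torch.no_grad():
    logits = model(volume)
probs = logits.softmax(dim=1).mean(dim=0)  # average slice-level scores
print(probs)  # [p(non-covid), p(covid)]
```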
topological protection ensures stability of information and particle transport against perturbations. we explore experimentally and computationally the topologically protected transport of magnetic colloids above spatially inhomogeneous magnetic patterns, revealing that transport complexity can be encoded in both the driving loop and the pattern. complex patterns support intricate transport modes when the microparticles are subjected to simple time - periodic loops of a uniform magnetic field. we design a pattern featuring a topological defect that functions as an attractor or a repeller of microparticles, as well as a pattern that directs microparticles along a prescribed complex trajectory. using simple patterns and complex loops, we simultaneously and independently control the motion of several identical microparticles differing only in their positions above the pattern. combining complex patterns and complex loops we transport microparticles from unknown locations to predefined positions and then force them to follow arbitrarily complex trajectories concurrently. our findings pave the way for new avenues in transport control and dynamic self - assembly in colloidal science.
arxiv:2311.12165
the \ emph { canonical structures of the plane } are those that result, up to isomorphism, from the rings of the form $ \ mathds { r } [ x ] / ( ax ^ 2 + bx + c ) $ with $ a \ neq 0 $. that ring is isomorphic to $ \ mathds { r } [ \ theta ] $, where $ \ theta $ is the equivalence class of $ x $, which satisfies $ \ theta ^ 2 = ( - \ dfrac { c } { a } ) + \ theta ( - \ dfrac { b } { a } ) $. on the other hand, it is known that, up to isomorphism, there are only three canonical structures : the one corresponding to $ \ theta ^ 2 = - 1 $ ( the complex numbers ), $ \ theta ^ 2 = 1 $ ( the perplex or hyperbolic numbers ) and $ \ theta ^ 2 = 0 $ ( the parabolic numbers ). this article deals with the algebraic structure of the rings of integers $ \ mathds { z } [ \ theta ] $ in the perplex and parabolic cases by \ emph { analogy } to the complex case : the ring of gaussian integers. for those rings a \ emph { division algorithm } is proved and, as a consequence, the characterization of the prime and irreducible elements is obtained.
arxiv:0707.0700
families of dark solitons exist in superfluid fermi gases. the energy - velocity dispersion and the number of depleted particles completely determine the dynamics of dark solitons on a slowly - varying background density. for the unitary fermi gas we determine these relations from general scaling arguments and conservation of local particle number. we find solitons to oscillate sinusoidally at the trap frequency reduced by a factor of $ 1 / \ sqrt { 3 } $. numerical integration of the time - dependent bogoliubov - de gennes equation determines spatial profiles and soliton dispersion relations across the bec - bcs crossover and proves consistent with the scaling relations at unitarity.
arxiv:1011.5337
this work studies the problem of completing high - dimensional data ( referred to as tensors ) from partially observed samplings. we consider that a tensor is a superposition of multiple low - rank components. in particular, each component can be represented as multilinear connections over several latent factors and naturally mapped to a specific tensor network ( tn ) topology. in this paper, we propose a fundamental tensor decomposition ( td ) framework : multi - tensor network representation ( mtnr ), which can be regarded as a linear combination of a range of td models, e. g., candecomp / parafac ( cp ) decomposition, tensor train ( tt ), and tensor ring ( tr ). specifically, mtnr represents a high - order tensor as the addition of multiple tn models, and the topology of each tn is automatically generated instead of manually pre - designed. for the optimization phase, an adaptive topology learning ( atl ) algorithm is presented to obtain the latent factors of each tn based on a rank incremental strategy and a projection error measurement strategy. in addition, we theoretically establish the fundamental multilinear operations for tensors with tn representation, and reveal the structural transformation of mtnr to a single tn. finally, mtnr is applied to a typical task, tensor completion, and two effective algorithms are proposed for the exact recovery of incomplete data based on the alternating least squares ( als ) scheme and the alternating direction method of multipliers ( admm ) framework. extensive numerical experiments on synthetic data and real - world datasets demonstrate the effectiveness of mtnr compared with state - of - the - art methods.
arxiv:2109.04022
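as a concrete illustration of the simplest td model that mtnr combines, here is a generic cp - als sweep in plain numpy ; this is textbook cp decomposition, not the paper's adaptive topology learning algorithm.

```python
import numpy as np

def unfold(t, mode):
    """mode-n unfolding, remaining modes in original order (c layout)."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def khatri_rao(a, b):
    """column-wise kronecker product of (i, r) and (j, r) -> (i*j, r)."""
    r = a.shape[1]
    return (a[:, None, :] * b[None, :, :]).reshape(-1, r)

def cp_als_sweep(t, factors):
    """one als pass: update each factor by least squares in turn."""
    for mode in range(t.ndim):
        others = [f for i, f in enumerate(factors) if i != mode]
        kr = others[0]
        for f in others[1:]:
            kr = khatri_rao(kr, f)   # matches the c-order unfolding above
        factors[mode] = unfold(t, mode) @ np.linalg.pinv(kr.T)
    return factors

rng = np.random.default_rng(0)
t = rng.standard_normal((6, 7, 8))
r = 3
factors = [rng.standard_normal((s, r)) for s in t.shape]
for _ in range(20):
    factors = cp_als_sweep(t, factors)
```

mtnr would run several such models with different tn topologies in parallel and sum their reconstructions ; the atl rank - increment and projection - error logic is the paper's own contribution and is not sketched here.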
we survey results concerning behavior of positivity of line bundles and possible vanishing theorems in positive characteristic. we also try to describe variation of positivity in mixed characteristic. these problems are very much related to behavior of strong semistability of vector bundles, which is another main topic of the paper.
arxiv:1301.4450
we discuss how to resolve generic skew - symmetric and generic symmetric determinantal singularities. the key ingredients are ( skew - ) symmetry preserving matrix operations in order to deduce an inductive argument.
arxiv:2308.01460
we extend the nonconforming trefftz virtual element method introduced in arxiv : 1805. 05634 to the case of the fluid - fluid interface problem, that is, a helmholtz problem with piecewise constant wave number. with respect to the original approach, we address two additional issues : firstly, we define the coupling of local approximation spaces with piecewise constant wave numbers ; secondly, we enrich such local spaces with special functions capturing the physical behaviour of the solution to the target problem. as these two issues are directly related to an increase in the number of degrees of freedom, we use a reduction strategy inspired by arxiv : 1807. 11237, which allows us to mitigate the growth of the dimension of the approximation space when considering $ h $ - and $ p $ - refinements. this renders the new method highly competitive in comparison to other trefftz and quasi - trefftz technologies tailored for the helmholtz problem with piecewise constant wave number. a wide range of numerical experiments, including the $ p $ - version with quasi - uniform meshes and the $ hp $ - version with isotropic and anisotropic mesh refinements, is presented.
arxiv:1811.01645
a distance oracle is a compact representation of the shortest distance matrix of a graph. it can be queried to approximate shortest paths between any pair of vertices. any distance oracle that returns paths of worst - case stretch ( 2k - 1 ) must require space $ \ omega ( n ^ { 1 + 1 / k } ) $ for graphs of n nodes. the hard cases that enforce this lower bound are, however, rather dense graphs with average degree $ \ omega ( n ^ { 1 / k } ) $. we present distance oracles that, for sparse graphs, substantially break the lower bound barrier at the expense of higher query time. for any $ 1 \ leq \ alpha \ leq n $, our distance oracles can return stretch 2 paths using $ o ( m + n ^ 2 / \ alpha ) $ space and stretch 3 paths using $ o ( m + n ^ 2 / \ alpha ^ 2 ) $ space, at the expense of $ o ( \ alpha m / n ) $ query time. by setting appropriate values of $ \ alpha $, we get the first distance oracles that have size linear in the size of the graph, and return constant stretch paths in non - trivial query time. the query time can be further reduced to $ o ( \ alpha ) $, by using an additional $ o ( m \ alpha ) $ space for all our distance oracles, or at the cost of a small constant additive stretch. we use our stretch 2 distance oracle to present the first compact routing scheme with worst - case stretch 2. any compact routing scheme with stretch less than 2 must require linear memory at some nodes even for sparse graphs ; our scheme, hence, achieves the optimal stretch with non - trivial memory requirements. moreover, supported by large - scale simulations on graphs including the as - level internet graph, we argue that our stretch - 2 scheme would be simple and efficient to implement as a distributed compact routing protocol.
arxiv:1201.2703
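to illustrate the flavor of such space / query trade - offs, the sketch below builds a simple landmark - based oracle : exact distances are stored from a small sample of landmarks and queries are answered through the best landmark. this is illustrative only ; it is not the paper's stretch - 2 construction and carries no worst - case stretch guarantee.

```python
import heapq
import random

def dijkstra(graph, src):
    """graph: dict node -> list of (neighbor, weight), undirected."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

class LandmarkOracle:
    """stores exact distances from a few landmarks only; space grows with
    the landmark count, answer quality improves with it."""
    def __init__(self, graph, num_landmarks):
        nodes = list(graph)
        self.landmarks = random.sample(nodes, num_landmarks)
        self.from_lm = {l: dijkstra(graph, l) for l in self.landmarks}

    def query(self, u, v):
        # upper bound on d(u, v) via the best intermediate landmark
        return min(self.from_lm[l].get(u, float('inf')) +
                   self.from_lm[l].get(v, float('inf'))
                   for l in self.landmarks)
```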
in the present paper we consider a general family of two dimensional wave equations which represents a great variety of linear and nonlinear equations within the framework of the transformations of equivalence groups. we have investigated the existence problem of point transformations that lead mappings between linear and nonlinear members of particular families and determined the structure of the nonlinear terms of linearizable equations. we have also given examples about some equivalence transformations between linear and nonlinear equations and obtained exact solutions of nonlinear equations via the linear ones.
arxiv:1811.07224
separating overlapped nuclei is a major challenge in histopathology image analysis. recently published approaches have achieved promising overall performance on nuclei segmentation ; however, their performance on separating overlapped nuclei is quite limited. to address the issue, we propose a novel multitask learning network with a bending loss regularizer, bend - net, to separate overlapped nuclei accurately. the newly proposed multitask learning architecture enhances the generalization by learning shared representation from three tasks : instance segmentation, nuclei distance map prediction, and overlapped nuclei distance map prediction. the proposed bending loss assigns high penalties to concave contour points with large curvatures, and small penalties to convex contour points with small curvatures. minimizing the bending loss avoids generating contours that encompass multiple nuclei. in addition, two new quantitative metrics, the aggregated jaccard index of overlapped nuclei ( ajio ) and the accuracy of overlapped nuclei ( acco ), are designed for the evaluation of overlapped nuclei segmentation. we validate the proposed approach on the consep and monusegv1 datasets using seven quantitative metrics : aggregated jaccard index, dice, segmentation quality, recognition quality, panoptic quality, ajio, and acco. extensive experiments demonstrate that the proposed bend - net outperforms eight state - of - the - art approaches.
arxiv:2109.15283
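a minimal sketch of a curvature - weighted contour penalty in the spirit of the bending loss is given below : concave points, which typically mark junctions between overlapped nuclei, receive a large weight and convex points a small one. the discrete curvature proxy and the weights are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def bending_penalty(contour, concave_weight=10.0, convex_weight=1.0):
    """contour: (n, 2) array of points, ordered counter-clockwise."""
    prev_pts = np.roll(contour, 1, axis=0)
    next_pts = np.roll(contour, -1, axis=0)
    v1 = contour - prev_pts
    v2 = next_pts - contour
    # z-component of the cross product: > 0 convex, < 0 concave (ccw order)
    cross = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
    # turning-angle magnitude as a discrete curvature proxy
    angle = np.abs(np.arctan2(cross, (v1 * v2).sum(axis=1)))
    weights = np.where(cross < 0, concave_weight, convex_weight)
    return (weights * angle).sum()

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
print(bending_penalty(square))  # all convex corners -> small penalty
```

a contour that wraps around two touching nuclei necessarily contains sharply concave points at the junction, so the weighted penalty discourages exactly the failure mode described above.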
let f be a germ of holomorphic self - map of c ^ 2 at the origin o tangent to the identity, and with o as a non - dicritical isolated fixed point. a parabolic curve for f is a holomorphic f - invariant curve, with o on the boundary, attracted by o under the action of f. it has been shown that if the characteristic direction [ v ] has residual index not belonging to q ^ +, then there exist parabolic curves for f tangent to [ v ]. in this paper we prove, with a different method, that the conclusion still holds just assuming that the residual index is not vanishing ( at least when f is regular along [ v ] ).
arxiv:math/0501537
frequency - domain unsteady lifting - line theory is better developed than its time - domain counterpart. to take advantage of this, this paper transforms time - domain kinematics to the frequency domain, performs a convolution and then returns the results back to the time - domain. it demonstrates how well - developed frequency - domain methods can be easily applied to time - domain problems, enabling prediction of forces and moments on finite wings undergoing arbitrary kinematics.
arxiv:2105.05679
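the transform - convolve - invert idea is easy to sketch. the example below uses the classical 2d theodorsen function as a stand - in frequency response for pure heave ; the paper's unsteady lifting - line kernel for a finite wing would replace it, and the kinematics and constants are arbitrary.

```python
import numpy as np
from scipy.special import hankel2

def theodorsen(k):
    # c(k) = h1(k) / (h1(k) + i h0(k)), hankel functions of the 2nd kind
    k = np.where(k == 0, 1e-12, k)   # avoid the k = 0 singularity; c(0) = 1
    h0, h1 = hankel2(0, k), hankel2(1, k)
    return h1 / (h1 + 1j * h0)

n, dt = 1024, 0.01
t = np.arange(n) * dt
h = 0.1 * np.sin(2 * np.pi * 1.5 * t)    # arbitrary heave history
b, u = 0.5, 10.0                         # semichord, freestream speed

h_hat = np.fft.rfft(h)
omega = 2 * np.pi * np.fft.rfftfreq(n, dt)
k_red = omega * b / u                    # reduced frequency
# circulatory lift for pure heave (thin airfoil, rho = 1):
# l(omega) = 2*pi*rho*u*b * c(k) * (i*omega) * h(omega)
lift_hat = 2 * np.pi * 1.0 * u * b * theodorsen(k_red) * (1j * omega) * h_hat
lift = np.fft.irfft(lift_hat, n)
```

the same three steps — fft the kinematics, multiply by the frequency response, inverse fft — carry over directly when the response is the finite-wing lifting - line kernel instead of the 2d one.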
quantum key distribution has made continuous progress over the last 20 years and is now commercially available. however, secret key rates ( skr ) are still limited to a few mbps. here, we present custom multipixel superconducting nanowire single - photon detectors and fast acquisition and real - time key distillation electronics, removing two roadblocks and allowing an increase of the skr by more than an order of magnitude. in combination with a simple 2. 5 ghz clocked time - bin quantum key distribution system, we can generate secret keys at a rate of 64 mbps over a distance of 10. 0 km and at a rate of 3. 0 mbps over a distance of 102. 4 km with real - time key distillation.
arxiv:2210.16126
with the aim of generating new constraints on the ozi suppressed couplings of chiral perturbation theory a set of six equations of the roy and steiner type for the $ s $ - and $ p $ - waves of the $ \ pi k $ scattering amplitudes is derived. the range of validity and the multiplicity of the solutions are discussed. precise numerical solutions are obtained in the range $ e \ lesssim 1 $ gev which make use as input, for the first time, of the most accurate experimental data available at $ e > 1 $ gev for both $ \ pi k \ to \ pi k $ and $ \ pi \ pi \ to k \ bar { k } $ amplitudes. our main result is the determination of a narrow allowed region for the two s - wave scattering lengths. present experimental data below 1 gev are found to be in generally poor agreement with our results. a set of threshold expansion parameters, as well as sub - threshold parameters are computed. for the latter, matching with the su ( 3 ) chiral expansion at nlo is performed.
arxiv:hep-ph/0310283
how do stars that are more massive than the sun form, and thus how is the stellar initial mass function ( imf ) established? such intermediate - and high - mass stars may be born from relatively massive pre - stellar gas cores, which are more massive than the thermal jeans mass. the turbulent core accretion model invokes such cores as being in approximate virial equilibrium and in approximate pressure equilibrium with their surrounding clump medium. their internal pressure is provided by a combination of turbulence and magnetic fields. alternatively, the competitive accretion model requires strongly sub - virial initial conditions that then lead to extensive fragmentation to the thermal jeans scale, with intermediate - and high - mass stars later forming by competitive bondi - hoyle accretion. to test these models, we have identified four prime examples of massive ( ~ 100msun ) clumps from mid - infrared extinction mapping of infrared dark clouds ( irdcs ). fontani et al. found high deuteration fractions of n2h + in these objects, which are consistent with them being starless. here we present alma observations of these four clumps that probe the n2d + ( 3 - 2 ) line at 2. 3 " resolution. we find six n2d + cores and determine their dynamical state. their observed velocity dispersions and sizes are broadly consistent with the predictions of the turbulent core model of self - gravitating, magnetized ( with alfven mach number m _ a ~ 1 ) and virialized cores that are bounded by the high pressures of their surrounding clumps. however, in the most massive cores, with masses up to ~ 60msun, our results suggest that moderately enhanced magnetic fields ( so that m _ a ~ 0. 3 ) may be needed for the structures to be in virial and pressure equilibrium. magnetically regulated core formation may thus be important in controlling the formation of massive cores, inhibiting their fragmentation, and thus helping to establish the stellar imf.
arxiv:1303.4343
we confirm and characterize the exoplanetary systems kepler - 445 and kepler - 446 : two mid - m dwarf stars, each with multiple, small, short - period transiting planets. kepler - 445 is a metal - rich ( [ fe / h ] = + 0. 25 $ \ pm $ 0. 10 ) m4 dwarf with three transiting planets, and kepler - 446 is a metal - poor ( [ fe / h ] = - 0. 30 $ \ pm $ 0. 10 ) m4 dwarf also with three transiting planets. kepler - 445c is similar to gj 1214b : both in planetary radius and the properties of the host star. the kepler - 446 system is similar to the kepler - 42 system : both are metal - poor with large galactic space velocities and three short - period, likely - rocky transiting planets that were initially assigned erroneously large planet - to - star radius ratios. we independently determined stellar parameters from spectroscopy and searched for and fitted the transit light curves for the planets, imposing a strict prior on stellar density in order to remove correlations between the fitted impact parameter and planet - to - star radius ratio for short - duration transits. combining kepler - 445, kepler - 446 and kepler - 42, and isolating all mid - m dwarf stars observed by kepler with the precision necessary to detect similar systems, we calculate that 21 $ ^ { + 7 } _ { - 5 } $ % of mid - m dwarf stars host compact multiples ( multiple planets with periods of less than 10 days ) for a wide range of metallicities. we suggest that the inferred planet masses for these systems support highly efficient accretion of protoplanetary disk metals by mid - m dwarf protoplanets.
arxiv:1501.01305
we report a fully inclusive measurement of the flavour changing neutral current decay b - > s gamma in the energy range 1. 8 gev < e * < 2. 8 gev, covering 95 % of the total spectrum. using 140 fb ^ - 1 we obtain bf ( b - > s gamma ) = ( 3. 55 + / - 0. 32 ( stat ) + 0. 30 / - 0. 31 ( syst ) + 0. 11 / - 0. 07 ( theory ) ) x 10 ^ - 4. we also measure the first and second moments of the photon energy spectrum above 1. 8 gev and obtain < e > = 2. 292 + / - 0. 026 + / - 0. 034 gev and < e ^ 2 > - < e > ^ 2 = 0. 0305 + / - 0. 0074 + / - 0. 0063 gev ^ 2, where the errors are statistical and systematic.
arxiv:hep-ex/0403004
nuclear physics can be applied in various ways to the study of neutron stars. this thesis reports on one such application, where the relativistic mean - field approximation has been employed to calculate the equations of state of matter in the neutron star interior. in particular the equations of state of nuclear and neutron star matter of the nl3, pk1 and fsugold parameter sets were derived. a survey of available literature on neutron stars is presented and we use the derived equations of state to reproduce the properties of saturated nuclear matter as well as the mass - radius relationship of a static, spherical symmetric neutron star. results are compared to published values of the properties of saturated nuclear matter and to available observational data of the mass - radius relationship of neutron stars.
arxiv:0806.0747
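the mass - radius computation mentioned above amounts to integrating the tolman - oppenheimer - volkoff ( tov ) equations for a chosen equation of state. a bare - bones sketch in geometrized units ( $ g = c = 1 $ ) follows, with a simple polytrope standing in for the rmf equations of state derived in the thesis.

```python
import numpy as np

# polytropic eos p = k * rho**gamma as a stand-in equation of state
k_poly, gamma = 100.0, 2.0

def rho_of_p(p):
    return (np.maximum(p, 0.0) / k_poly) ** (1.0 / gamma)

def tov_mass_radius(p_c, dr=1e-3):
    """euler integration of the tov equations outward from the center."""
    r, m, p = dr, 0.0, p_c
    while p > 1e-12 * p_c:
        rho = rho_of_p(p)
        dm = 4 * np.pi * r**2 * rho
        # tov pressure gradient (g = c = 1)
        dp = -(rho + p) * (m + 4 * np.pi * r**3 * p) / (r * (r - 2 * m))
        m += dm * dr
        p += dp * dr
        r += dr
    return r, m   # (radius, enclosed mass) in geometrized units

print(tov_mass_radius(p_c=1e-3))
```

sweeping the central pressure p_c traces out the full mass - radius curve ; substituting the tabulated nl3, pk1 or fsugold equations of state for the polytrope is the step performed in the thesis.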
rovibrational energies, wave functions, and raman transition moments are reported for the lowest - energy states of the h $ _ 3 ^ + $ molecular ion including the magnetic couplings of the proton spins and molecular rotation in the presence of a weak external magnetic field. the rovibrational - hyperfine - zeeman hamiltonian matrix is constructed and diagonalized using the rovibrational eigenstates and the proton spin functions. the developed methodology can be used to compute hyperfine - zeeman effects also for higher - energy rovibrational excitations of h $ _ 3 ^ + $ and other polyatomic molecules. these developments will guide future experiments extending quantum logic spectroscopy to polyatomic systems.
arxiv:2410.19963
the total true strain accumulates incrementally as $ \ frac { dl } { l _ 0 } + \ frac { dl } { l _ 1 } + \ frac { dl } { l _ 2 } + \ cdots = \ sum _ i \ frac { dl } { l _ i } $. then, integrating gives $ \ int _ { l _ 0 } ^ { l _ i } \ frac { dl } { l } = \ ln \ left ( \ frac { l _ i } { l _ 0 } \ right ) = \ ln ( 1 + \ varepsilon _ e ) $, so the curve can be plotted in terms of the true stress $ \ sigma _ t $ and $ \ varepsilon _ e $. additionally, based on the true stress - strain curve, we can estimate the region where necking starts to happen. since necking appears after the ultimate tensile stress, where the maximum force is applied, the condition can be written as $ df = 0 = \ sigma _ t \, da _ i + a _ i \, d \ sigma _ t $, which rearranges to $ \ frac { d \ sigma _ t } { \ sigma _ t } = - \ frac { da _ i } { a _ i } $. this indicates that necking starts where the reduction of area becomes significant compared to the change in stress ; the stress then localizes to the specific area where the necking appears. further relations can be derived from the true stress - strain curve : taking logarithms of the true stress and strain gives an approximately linear relationship, $ \ sigma _ t = k ( \ varepsilon _ t ) ^ n $.
https://en.wikipedia.org/wiki/Deformation_(engineering)
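for a worked example of the relations above, the following sketch converts engineering stress - strain data to true values under the constant - volume assumption ( valid up to necking ) and fits the hardening law $ \ sigma _ t = k ( \ varepsilon _ t ) ^ n $ ; the data points are illustrative, not measured.

```python
import numpy as np

# sigma_t = sigma_e * (1 + eps_e), eps_t = ln(1 + eps_e); these formulas
# assume constant volume and therefore stop being valid beyond necking
eps_e = np.array([0.00, 0.02, 0.05, 0.10, 0.20])       # engineering strain
sigma_e = np.array([0.0, 210.0, 260.0, 300.0, 330.0])  # mpa, illustrative

eps_t = np.log1p(eps_e)
sigma_t = sigma_e * (1.0 + eps_e)

# fit the hardening law sigma_t = k * eps_t**n on the plastic points
mask = eps_t > 0
n_exp, log_k = np.polyfit(np.log(eps_t[mask]), np.log(sigma_t[mask]), 1)
print(f"n ~ {n_exp:.2f}, k ~ {np.exp(log_k):.0f} mpa")
```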
in this paper, we construct a consistent non - parametric test for testing the equality of population medians for different samples when the observations in each sample are independent and identically distributed. this test can be further used to test the equality of unknown location parameters for different samples. the method discussed in this paper can be extended to any quantile level instead of the median. we present the theoretical results and also demonstrate the performance of this test through simulation studies.
arxiv:2501.05136
free quantum field theories on curved backgrounds are discussed via three explicit examples : the real scalar field, the dirac field and the proca field. the first step consists of outlining the main properties of globally hyperbolic spacetimes, that is, the class of manifolds on which the classical dynamics of all physically relevant free fields can be written in terms of a cauchy problem. the set of all smooth solutions of the latter encompasses the dynamically allowed configurations, which are used to identify via a suitable pairing a collection of classical observables. as a last step we use such a collection to construct a $ * $ - algebra which encodes the information on the dynamics and on the canonical commutation or anti - commutation relations, depending on whether the underlying field is a boson or a fermion.
arxiv:1505.04298
we use homological perturbation machinery specific to the algebra category [ p. real, homological perturbation theory and associativity, homology, homotopy and applications, vol. 2, no. 5 ( 2000 ), 51 - 88 ] to give an algorithm for computing the differential structure of a small 1 - homological model for commutative differential graded algebras ( briefly, cdgas ). the complexity of the procedure is studied, and a computer package in mathematica for determining such models is described.
arxiv:math/0110331
the contextual multi - armed bandit ( mab ) is a widely used framework for problems requiring sequential decision - making under uncertainty, such as recommendation systems. in applications involving a large number of users, the performance of contextual mab can be significantly improved by facilitating collaboration among multiple users. this has been achieved by the clustering of bandits ( cb ) methods, which adaptively group the users into different clusters and achieve collaboration by allowing the users in the same cluster to share data. however, classical cb algorithms typically rely on numerical reward feedback, which may not be practical in certain real - world applications. for instance, in recommendation systems, it is more realistic and reliable to solicit preference feedback between pairs of recommended items rather than absolute rewards. to address this limitation, we introduce the first " clustering of dueling bandit algorithms " to enable collaborative decision - making based on preference feedback. we propose two novel algorithms : ( 1 ) clustering of linear dueling bandits ( coldb ) which models the user reward functions as linear functions of the context vectors, and ( 2 ) clustering of neural dueling bandits ( condb ) which uses a neural network to model complex, non - linear user reward functions. both algorithms are supported by rigorous theoretical analyses, demonstrating that user collaboration leads to improved regret bounds. extensive empirical evaluations on synthetic and real - world datasets further validate the effectiveness of our methods, establishing their potential in real - world applications involving multiple users with preference - based feedback.
arxiv:2502.02079
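the per - cluster building block of such methods is easy to sketch : a linear dueling model where the probability that item i beats item j is a sigmoid of $ ( x _ i - x _ j ) ^ t \ theta $, fitted online from observed duels. the sketch below uses plain logistic gradient steps and synthetic contexts ; it is not the paper's coldb / condb algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
d, lr = 5, 0.5
theta_star = rng.standard_normal(d)   # unknown preference vector
theta = np.zeros(d)                   # cluster-shared estimate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    x_i, x_j = rng.standard_normal(d), rng.standard_normal(d)
    z = x_i - x_j
    y = rng.random() < sigmoid(z @ theta_star)   # observed duel outcome
    # logistic-loss gradient step on the preference feedback
    theta += lr * (y - sigmoid(z @ theta)) * z

print(np.corrcoef(theta, theta_star)[0, 1])      # should approach 1
```

the clustering layer would decide which users share a single theta and pool their duels, which is where the improved regret bounds above come from.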
accurate forecasting of renewable generation is crucial to facilitate the integration of res into the power system. focusing on pv units, forecasting methods can be divided into two main categories : physics - based and data - based strategies, with ai - based models providing state - of - the - art performance. however, while these ai - based models can capture complex patterns and relationships in the data, they ignore the underlying physical prior knowledge of the phenomenon. therefore, in this paper we propose matnet, a novel self - attention transformer - based architecture for multivariate multi - step day - ahead pv power generation forecasting. it consists of a hybrid approach that combines the ai paradigm with the prior physical knowledge of pv power generation of physics - based methods. the model is fed with historical pv data and historical and forecast weather data through a multi - level joint fusion approach. the effectiveness of the proposed model is evaluated using the ausgrid benchmark dataset with different regression performance metrics. the results show that our proposed architecture significantly outperforms the current state - of - the - art methods. these findings demonstrate the potential of matnet in improving forecasting accuracy and suggest that it could be a promising solution to facilitate the integration of pv energy into the power grid.
arxiv:2306.10356
nonequilibrium systems with large - scale fluctuations of a suitable system parameter are often effectively described by a superposition of two statistics, a superstatistics. here we illustrate this concept by analysing experimental data of fluctuations in atmospheric wind velocity differences at florence airport.
arxiv:cond-mat/0508257
designing verilog modules requires meticulous attention to correctness, efficiency, and adherence to design specifications. however, manually writing verilog code remains a complex and time - consuming task that demands both expert knowledge and iterative refinement. leveraging recent advancements in large language models ( llms ) and their structured text generation capabilities, we propose verimind, an agentic llm framework for verilog code generation that significantly automates and optimizes the synthesis process. unlike traditional llm - based code generators, verimind employs a structured reasoning approach : given a user - provided prompt describing design requirements, the system first formulates a detailed train of thought before the final verilog code is generated. this multi - step methodology enhances interpretability, accuracy, and adaptability in hardware design. in addition, we introduce a novel evaluation metric - pass @ arc - which combines the conventional pass @ k measure with average refinement cycles ( arc ) to capture both success rate and the efficiency of iterative refinement. experimental results on diverse hardware design tasks demonstrate that our approach achieves up to an $ 8. 3 \ % $ improvement on the pass @ k metric and $ 8. 1 \ % $ on the pass @ arc metric. these findings underscore the transformative potential of agentic llms in automated hardware design, rtl development, and digital system synthesis.
arxiv:2503.16514
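the pass @ k ingredient of the proposed metric has a standard unbiased estimator ( from the codex paper of chen et al., 2021 ), sketched below ; how pass @ k is combined with average refinement cycles into pass @ arc is defined in the paper and is not reproduced here.

```python
from math import comb

def pass_at_k(n, c, k):
    """n samples, c correct: probability that a random size-k draw
    contains at least one correct sample."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=3, k=5))
```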
an infinite game on the set of real numbers appeared in matthew baker ' s work [ math. mag. 80 ( 2007 ), no. 5, pp. 377 - - 380 ] in which he asks whether it can help characterize countable subsets of the reals. this question is in a similar spirit to how the banach - mazur game characterizes meager sets in an arbitrary topological space. in a recent paper, will brian and steven clontz prove that in baker ' s game, player ii has a winning strategy if and only if the payoff set is countable. they also asked if it is possible, in general linear orders, for player ii to have a winning strategy on some uncountable set. to this we give a positive answer and moreover construct, for every infinite cardinal $ \ kappa $, a dense linear order of size $ \ kappa $ on which player ii has a winning strategy on all payoff sets. we finish with some future research questions, further underlining the difficulty in generalizing the characterization of brian and clontz to linear orders.
arxiv:2408.14624
quantum considerations have led many theorists to believe that classical black hole physics is modified not just deep inside black holes but at { \ it horizon scales }, or even further outward. the near - horizon regime has just begun to be observationally probed for astrophysical black holes - - both by ligo, and by the event horizon telescope. this suggests exciting prospects for observational constraints on or discovery of new quantum black hole structure. this paper overviews arguments for certain such structure and these prospects.
arxiv:1605.05341
we present keck / deimos spectroscopy of resolved stars in the m31 satellites andromeda xxviii and andromeda xxix. we show that these are likely self - bound galaxies based on 18 and 24 members in andromeda xxviii and andromeda xxix, respectively. andromeda xxviii has a systemic velocity of - 331. 1 + / - 1. 8 km / s and a velocity dispersion of 4. 9 + / - 1. 6 km / s, implying a mass - to - light ratio ( within r _ 1 / 2 ) of ~ 44 + / - 41. andromeda xxix has a systemic velocity of - 194. 4 + / - 1. 5 km / s and a velocity dispersion of 5. 7 + / - 1. 2 km / s, implying a mass - to - light ratio ( within r _ 1 / 2 ) of ~ 124 + / - 72. the internal kinematics and implied masses of andromeda xxviii and andromeda xxix are similar to dwarf spheroidals ( dsphs ) of comparable luminosities, implying that these objects are dark matter - dominated dwarf galaxies. despite the large projected distances from their host ( 380 and 188 kpc ), the kinematics of these dsphs suggest that they are bound m31 satellites.
arxiv:1302.0848
the compositional dependence of the lowest direct and indirect band gaps in $ \ text { ge } _ { 1 - y } \ text { sn } _ { y } $ has been determined from room - temperature photoluminescence measurements. this technique is particularly attractive for a comparison of the two transitions because distinct features in the spectra can be associated with the direct and indirect gaps. however, detailed modeling of these room temperature spectra is required to extract the band gap values with the high accuracy required to determine the sn concentration $ y _ { c } $ at which the alloy becomes a direct gap semiconductor. for the direct gap, this is accomplished using a microscopic model that allows the determination of direct gap energies with mev accuracy. for the indirect gap, it is shown that current theoretical models are inadequate to describe the emission properties of systems with close indirect and direct transitions. accordingly, an ad hoc procedure is used to extract the indirect gap energies from the data. for $ y $ < 0. 1 the resulting direct gap compositional dependence is given by $ \ delta e _ { 0 } = - ( 3. 57 \ pm 0. 06 ) y $ ( in ev ). for the indirect gap, the corresponding expression is $ \ delta e _ { \ text { ind } } = - ( 1. 64 \ pm 0. 10 ) y $ ( in ev ). if a quadratic function of composition is used to express the two transition energies over the entire compositional range $ 0 \ leq y \ leq 1 $, the quadratic ( bowing ) coefficients are found to be $ b _ { 0 } = 2. 46 \ pm 0. 06 $ ev ( for $ e _ { 0 } $ ) and $ b _ { \ text { ind } } = 0. 99 \ pm 0. 11 $ ev ( for $ e _ { \ text { ind } } $ ). these results imply a crossover concentration $ y _ { c } = 0. 073 _ { - 0. 006 } ^ { + 0. 007 } $, much lower than early theoretical predictions based on the virtual crystal approximation, but in better agreement with predictions based on large atomic supercells.
arxiv:1406.0448
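the quoted crossover can be checked in two lines from the linear slopes above, assuming room - temperature gaps for pure ge of about 0. 800 ev ( direct ) and 0. 660 ev ( indirect ) ; these endpoint values are assumptions of the sketch.

```python
# linear models for y < 0.1: e0(y) = e0_ge + slope_0 * y, and likewise
# for the indirect gap; the crossover is where the two lines intersect
e0_ge, eind_ge = 0.800, 0.660       # ev, assumed ge endpoint values
slope_0, slope_ind = -3.57, -1.64   # ev per unit sn fraction (from above)

y_c = (e0_ge - eind_ge) / (slope_ind - slope_0)
print(f"y_c ~ {y_c:.3f}")  # ~0.073, matching the reported crossover
```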
the role lattice qcd can play in $ b $ physics is surveyed. we include results for the decay constant, and discuss upcoming calculations of semileptonic form factors and neutral - meson mixing. together with experimental measurements, these calculations can determine the unitarity triangle. plenary talk presented at the workshop on $ b $ physics at hadron accelerators, snowmass, colo., 21 june - - 2 july, 1993.
arxiv:hep-ph/9310220
we provide a characterisation of the continuous - time markov models where the markov matrices from the model can be parameterised directly in terms of the associated rate matrices ( generators ). that is, each markov matrix can be expressed as the sum of the identity matrix and a rate matrix from the model. we show that the existence of an underlying jordan algebra provides a sufficient condition, which becomes necessary for ( so - called ) linear models. we connect this property to the well - known uniformization procedure for continuous - time markov chains by demonstrating that the property is equivalent to all markov matrices from the model taking the same form as the corresponding discrete time markov matrices in the uniformized process. we apply our results to analyse two model hierarchies practically important to phylogenetic inference, obtained by assuming ( i ) time - reversibility and ( ii ) permutation symmetry, respectively.
arxiv:2105.03558
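the parameterisation discussed above is exactly the uniformization construction, which is quick to verify numerically : given a rate matrix q, the matrix $ p = i + q / \ mu $ is markov whenever $ \ mu $ bounds the largest exit rate. a small numpy check, with an arbitrary q :

```python
import numpy as np

q = np.array([[-0.3, 0.2, 0.1],
              [0.1, -0.4, 0.3],
              [0.2, 0.2, -0.4]])

mu = np.max(-np.diag(q))          # uniformization rate
p = np.eye(3) + q / mu            # markov matrix of the uniformized chain

assert np.all(p >= 0) and np.allclose(p.sum(axis=1), 1.0)
# transition probabilities at time t can then be written as
# expm(q t) = sum_k e^{-mu t} (mu t)^k / k! * p^k
```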
we investigate the effect of pauli non - locality in the heavy - ion optical potential on sub - barrier fusion reactions. the são paulo potential, which takes into account the pauli non - locality and has been widely used in analyzing elastic scattering, has also recently been applied to heavy - ion fusion. however, the approximation employed in deriving the são paulo potential, based on the perey - buck semi - classical treatment of neutron induced reactions, must be assessed for charged particles tunneling through a barrier. it is the purpose of this note to look into this question. we consider the widely studied system $ ^ { 16 } $ o + $ ^ { 208 } $ pb at energies that span the barrier region from 10 mev below to 10 mev above. it seems that the non - locality plays a minor role. we find the são paulo potential to be quite adequate throughout the region.
arxiv:0705.0771
heusler - alloy - based magnetic tunnel junctions can potentially provide high magnetoresistance, small damping and fast switching. here, junctions with co2feal as a ferromagnetic electrode are fabricated by room - temperature sputtering on si / sio2 substrates. doping boron into co2feal is found to have a large positive impact on the structural, magnetic and transport properties of the junctions, with reduced interfacial roughness and substantially improved tunneling magnetoresistance. a two - level magnetoresistance is also observed in samples annealed at low temperature, which is believed to be related to the memristive effect of a tunnel barrier with impurities.
arxiv:2211.12448
the author has recently introduced an abstract algebraic framework of analogical proportions within the general setting of universal algebra. this paper studies analogical proportions in the boolean domain consisting of two elements 0 and 1 within his framework. it turns out that our notion of boolean proportions coincides with two prominent models from the literature in different settings. this means that we can capture two separate modellings of boolean proportions within a single framework which is mathematically appealing and provides further evidence for the robustness and applicability of the general framework.
arxiv:2109.00388
we propose a roadmap for leveraging the tremendous opportunities the internet of things ( iot ) has to offer. we argue that the combination of the recent advances in service computing and iot technology provide a unique framework for innovations not yet envisaged, as well as the emergence of yet - to - be - developed iot applications. this roadmap covers : emerging novel iot services, articulation of major research directions, and suggestion of a roadmap to guide the iot and service computing community to address key iot service challenges.
arxiv:2103.03043
lexical simplification ( ls ) aims to replace complex words in a given sentence with simpler alternatives of equivalent meaning, to simplify the sentence. recent unsupervised lexical simplification approaches rely only on the complex word itself, regardless of the given sentence, to generate candidate substitutions, which inevitably produces a large number of spurious candidates. in this paper, we propose a lexical simplification framework lsbert based on the pretrained representation model bert, that is capable of ( 1 ) making use of the wider context both when detecting the words in need of simplification and when generating substitute candidates, and ( 2 ) taking five high - quality features into account for ranking candidates, including bert prediction order, a bert - based language model, and the paraphrase database ppdb, in addition to the word frequency and word similarity commonly used in other ls methods. we show that our system outputs lexical simplifications that are grammatically correct and semantically appropriate, and obtains a clear improvement over baselines, outperforming the state - of - the - art by 29. 8 accuracy points on three well - known benchmarks.
arxiv:2006.14939
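the candidate - generation stage of an lsbert - style system can be sketched with the standard transformers fill - mask pipeline : mask the complex word and let bert propose in - context replacements. lsbert additionally concatenates the original sentence with the masked one and applies its five ranking features, which are omitted here.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

sentence = "the committee will scrutinize the proposal next week ."
complex_word = "scrutinize"
masked = sentence.replace(complex_word, fill.tokenizer.mask_token)

# context-aware substitute candidates with model scores
for cand in fill(masked, top_k=5):
    print(cand["token_str"], round(cand["score"], 3))
```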
in this paper, we take a first step towards answering the question of how to design fair machine learning algorithms that are robust to adversarial attacks. using a minimax framework, we aim to design an adversarially robust fair regression model that achieves optimal performance in the presence of an attacker who is able to add a carefully designed adversarial data point to the dataset or perform a rank - one attack on the dataset. by solving the proposed nonsmooth nonconvex - nonconcave minimax problem, the optimal adversary as well as the robust fairness - aware regression model are obtained. for both synthetic data and real - world datasets, numerical results illustrate that the proposed adversarially robust fair models have better performance on poisoned datasets than other fair machine learning models in both prediction accuracy and group - based fairness measure.
arxiv:2211.04449
like the rsk correspondence for symmetric groups, garfinkle defined a domino correspondence for type $ \ mathrm { b } $ and $ \ mathrm { d } $ coxeter groups. similar to the knuth relations, taskin and pietraho gave the plactic relations for the domino correspondence, and bonnaf \ ' e used them to classify the cells for type $ \ mathrm { b } $ coxeter groups. we give some further properties of the plactic relations and use these relations to describe the bidirected edges and the molecules of gelfand $ w $ - graphs for type $ \ mathrm { b } $ and $ \ mathrm { d } $ coxeter groups.
arxiv:2312.14043
the time dependent schr \ " odinger equation for an electron passing through a semiconductor quantum ring of nonzero width is solved in the presence of a perpendicular homogenous magnetic field. we study the effects of the lorentz force on the aharonov - bohm oscillations. within the range of incident momentum for which the ring is transparent at zero magnetic field, the lorentz force leads to a decrease of the oscillation amplitude, due to the asymmetry in the electron injection in the two arms of the ring. for structures in which the fast electrons are predominantly backscattered, the lorentz force assists in the transport producing an initial increase of the corresponding oscillation amplitude. furthermore, we discuss the effect of elastic scattering on a potential cavity within one of the arms of the ring. for the cavity tuned to shift maximally the phase of the maximum of the wave packet we observe a $ \ pi $ shift of the aharonov - bohm oscillations. for other cavity depths the oscillations with a period of half of the flux quantum are observed.
arxiv:cond-mat/0503268
mars today has no active volcanism and its atmosphere is oxidizing, dominated by the photochemistry of co2 and h2o. using a one - dimensional photochemical model, we consider whether plausible volcanic gas fluxes could have switched the redox - state of the past martian atmosphere to reducing conditions. in our model, the total quantity and proportions of volcanic gases depend on the water content, outgassing pressure, and oxygen fugacity of the source melt. we find that with reasonable melt parameters the past martian atmosphere ( ~ 3. 5 gyr to present ) could have easily reached reducing and anoxic conditions with modest levels of volcanism, > 0. 14 km ^ 3 / yr, well within the range of prior estimates. counter - intuitively we also find that more reducing melts with lower oxygen fugacity require greater amounts of volcanism to switch a paleo - atmosphere from oxidizing to reducing. the reason is that sulfur is more stable in such melts and lower absolute fluxes of sulfur - bearing gases more than compensate for increases in the proportions of h2 and co. these results imply that ancient mars should have experienced periods with anoxic and reducing atmospheres even through the mid - amazonian whenever volcanic outgassing was sustained at sufficient levels. reducing anoxic conditions are potentially conducive to the synthesis of prebiotic organic compounds, such as amino acids, and are therefore relevant to the possibility of life on mars. also, anoxic reducing conditions should have influenced the type of minerals that were formed on the surface or deposited from the atmosphere such as elemental polysulfur ( s8 ) as a signature of past reducing atmospheres. finally, our models allow us to estimate the amount of volcanically sourced atmospheric sulfate deposited over mars ' history, approximately 10 ^ 6 to 10 ^ 9 tmol, with a spread depending on assumed outgassing rate history and magmatic source conditions.
arxiv:2103.13012
dynamic and multimodal features are two important properties that widely exist in many real - world optimization problems. the former means that the objectives and / or constraints of a problem change over time, while the latter means there is more than one optimal solution ( sometimes including accepted local solutions ) in each environment. dynamic multimodal optimization problems ( dmmops ) have both of these characteristics ; they have been studied in the field of evolutionary computation and swarm intelligence for years and attract more and more attention. solving such problems requires optimization algorithms to simultaneously track multiple optima in changing environments, so that decision makers can pick out one optimal solution in each environment according to their experience and preferences, or quickly turn to another solution when the current one no longer works well. this is very helpful, especially when facing changing environments. in this competition, a test suite for dmmops is given, which models real - world applications. specifically, this test suite adopts 8 multimodal functions and 8 change modes to construct 24 typical dynamic multimodal optimization problems. meanwhile, a metric is also given to measure algorithm performance, which considers the average number of optimal solutions found in all environments. this competition will be very helpful in promoting the development of dynamic multimodal optimization algorithms.
arxiv:2201.00523
we use accurate estimates of aluminium abundance provided as part of the apogee data release 17 and gaia early data release 3 astrometry to select a highly pure sample of stars with metallicity $ - 1. 5 \ lesssim { \ rm [ fe / h ] } \ lesssim 0. 5 $ born in - situ in the milky way proper. we show that the low - metallicity ( [ fe / h ] $ \ lesssim - 1. 3 $ ) in - situ component that we dub aurora is kinematically hot with an approximately isotropic velocity ellipsoid and a modest net rotation. aurora stars exhibit large scatter in metallicity and in a number of element abundance ratios. the median tangential velocity of the in - situ stars increases sharply with increasing metallicity between [ fe / h ] $ = - 1. 3 $ and $ - 0. 9 $, the transition that we call the spin - up. the observed and theoretically expected age - metallicity correlations imply that this increase reflects a rapid formation of the milky way disk over $ \ approx 1 - 2 $ gyrs. the transformation of the stellar kinematics as a function of [ fe / h ] is accompanied by a qualitative change in chemical abundances : the scatter drops sharply once the galaxy builds up a disk during later epochs corresponding to [ fe / h ] $ > - 0. 9 $. results of galaxy formation models presented in this and other recent studies strongly indicate that the trends observed in the milky way reflect generic processes during the early evolution of progenitors of mw - sized galaxies : a period of chaotic pre - disk evolution, when gas is accreted along cold narrow filaments and when stars are born in irregular configurations, and subsequent rapid disk formation. the latter signals formation of a stable hot gaseous halo around the mw progenitor, which changes the mode of gas accretion and allows development of coherently rotating disk.
arxiv:2203.04980