text | source
---|---
a new criterion, necessary and sufficient for the separability of pure bipartite systems of arbitrary finite dimension, is demonstrated, and the corresponding finer quantitative measures or characterizations of entanglement ( beyond mere separability or non - separability determination ) are discussed. based on this criterion, we prove that the well - known peres - horodecki positivity - of - partial - transpose criterion is also necessary and sufficient for separability in the case of pure bipartite systems. the maximum value of entanglement and the corresponding maximally entangled states are also worked out in detail.
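As an illustration of the Peres-Horodecki test for pure bipartite states, the sketch below (assuming NumPy; the two-qubit states are illustrative) builds the density matrix of a pure state, takes the partial transpose over the second subsystem, and checks whether all eigenvalues are non-negative:

```python
import numpy as np

def partial_transpose(rho, d_a, d_b):
    # transpose only the second (B) subsystem of a bipartite density matrix
    r = rho.reshape(d_a, d_b, d_a, d_b)
    return r.transpose(0, 3, 2, 1).reshape(d_a * d_b, d_a * d_b)

def is_ppt(psi, d_a, d_b, tol=1e-10):
    # Peres-Horodecki: separable states keep a positive partial transpose
    rho = np.outer(psi, psi.conj())
    return bool(np.linalg.eigvalsh(partial_transpose(rho, d_a, d_b)).min() >= -tol)

# product (separable) state |0>|0> passes the test
product = np.kron([1.0, 0.0], [1.0, 0.0])
# maximally entangled Bell state (|00> + |11>)/sqrt(2) fails it
bell = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)
```

For pure states, as the abstract notes, a negative eigenvalue of the partial transpose is equivalent to entanglement.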
|
arxiv:1304.5294
|
a model for the current - voltage characteristic of the junction between an ion - sensitive membrane and an electrolyte solution is derived and compared with numerical simulations of the poisson - nernst - planck model for ion transport. the expression resembles that of a semiconductor pn junction with a non - ideality factor of 2. the non - ideality is correlated with the voltage drop in the electrolyte induced by the re - arrangement of the counter - ions.
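For intuition, a Shockley-type characteristic with non-ideality factor n = 2 can be sketched as follows (the saturation current and thermal voltage values are illustrative, not taken from the paper):

```python
import numpy as np

def junction_current(v, i_sat=1e-9, n=2.0, v_t=0.0257):
    # Shockley-type i-v law; n = 2 mimics the derived non-ideality,
    # attributed in the paper to the voltage drop in the electrolyte
    return i_sat * (np.exp(v / (n * v_t)) - 1.0)
```

A larger n flattens the exponential turn-on, so at a fixed forward bias the n = 2 junction carries less current than an ideal (n = 1) one.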
|
arxiv:2409.10530
|
for a prime number $ p \ ge 5 $, we consider three classical cusp eigenforms $ f _ j ( z ) $ of weights $ k _ 1, k _ 2, k _ 3 $, of conductors $ n _ 1, n _ 2, n _ 3 $, and of nebentypus characters $ \ psi _ j \ bmod n _ j $. according to h. hida and r. coleman, one can include each $ f _ j $ into a $ p $ - adic analytic family $ k _ j \ mapsto \ { f _ { j, k _ j } \ } $ of cusp eigenforms $ f _ { j, k _ j } $ of weights $ k _ j $ in such a way that $ f _ { j, k _ j } = f _ j $, and that all their fourier coefficients $ a _ n ( f _ { j, k _ j } ) $ are given by certain $ p $ - adic analytic functions $ k _ j \ mapsto a _ { n, j } ( k _ j ) $. the purpose of this paper is to describe a four - variable $ p $ - adic $ l $ - function attached to garrett ' s triple product of three coleman families $ k _ j \ mapsto \ { f _ { j, k _ j } \ } $ of cusp eigenforms of three fixed slopes $ \ sigma _ j = v _ p ( \ alpha _ { p, j } ^ { ( 1 ) } ( k _ j ) ) \ ge 0 $, where $ \ alpha _ { p, j } ^ { ( 1 ) } = \ alpha _ { p, j } ^ { ( 1 ) } ( k _ j ) $ is an eigenvalue ( which depends on $ k _ j $ ) of atkin ' s operator $ u = u _ p $ acting on fourier expansions by $ u ( \ sum _ { n \ ge 0 } ^ \ infty a _ { n } q ^ n ) = \ sum _ { n \ ge 0 } ^ \ infty a _ { np } q ^ n $. we consider the $ p $ - adic weight space $ x $ containing all $ ( k _ j, \ psi _ j ) $. our $ p $ - adic $ l $ - functions are mel
|
arxiv:math/0607204
|
we introduce the situated corpus of understanding transactions ( scout ), a multi - modal collection of human - robot dialogue in the task domain of collaborative exploration. the corpus was constructed from multiple wizard - of - oz experiments where human participants gave verbal instructions to a remotely - located robot to move and gather information about its surroundings. scout contains 89,056 utterances and 310,095 words from 278 dialogues averaging 320 utterances per dialogue. the dialogues are aligned with the multi - modal data streams available during the experiments : 5,785 images and 30 maps. the corpus has been annotated with abstract meaning representation and dialogue - amr to identify the speaker ' s intent and meaning within an utterance, and with transactional units and relations to track relationships between utterances and reveal patterns of the dialogue structure. we describe how the corpus and its annotations have been used to develop autonomous human - robot systems and enable research in open questions of how humans speak to robots. we release this corpus to accelerate progress in autonomous, situated, human - robot dialogue, especially in the context of navigation tasks where details about the environment need to be discovered.
|
arxiv:2411.12844
|
self - supervised contrastive learning is a powerful tool to learn visual representations without labels. prior work has primarily focused on evaluating the recognition accuracy of various pre - training algorithms, but has overlooked other behavioral aspects. in addition to accuracy, distributional robustness plays a critical role in the reliability of machine learning models. we design and conduct a series of robustness tests to quantify how contrastive learning and supervised learning differ in their behavior under downstream or pre - training data distribution changes. these tests leverage data corruptions at multiple levels, ranging from pixel - level gamma distortion to patch - level shuffling and to dataset - level distribution shift. our tests unveil intriguing robustness behaviors of contrastive and supervised learning. on the one hand, under downstream corruptions, we generally observe that contrastive learning is surprisingly more robust than supervised learning. on the other hand, under pre - training corruptions, we find contrastive learning vulnerable to patch shuffling and pixel intensity change, yet less sensitive to dataset - level distribution change. we attempt to explain these results through the role of data augmentation and feature space properties. our insights have implications for improving the downstream robustness of supervised learning.
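The two corruptions named here, pixel-level gamma distortion and patch-level shuffling, can be sketched roughly as follows (assuming NumPy and images normalized to [0, 1]; the parameter values are illustrative):

```python
import numpy as np

def gamma_distort(img, gamma=2.0):
    # pixel-level corruption: exponentiate normalized intensities
    return np.clip(img, 0.0, 1.0) ** gamma

def patch_shuffle(img, patch=4, seed=0):
    # patch-level corruption: cut the image into patch x patch tiles
    # and rearrange them in a random order
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    coords = [(i, j) for i in range(0, h, patch) for j in range(0, w, patch)]
    tiles = [img[i:i + patch, j:j + patch].copy() for i, j in coords]
    order = rng.permutation(len(tiles))
    out = np.empty_like(img)
    for (i, j), k in zip(coords, order):
        out[i:i + patch, j:j + patch] = tiles[k]
    return out
```

Note that patch shuffling preserves the pixel histogram while destroying spatial structure, which is what makes it a probe of structure-sensitive representations.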
|
arxiv:2206.05259
|
since the introduction of totalsegmentator ct, there has been demand for a similarly robust automated mri segmentation tool that can be applied across all mri sequences and anatomic structures. in this retrospective study, an nnu - net model ( totalsegmentator ) was trained on mri and ct examinations to segment 80 anatomic structures relevant for use cases such as organ volumetry, disease characterization, surgical planning and opportunistic screening. examinations were randomly sampled from routine clinical studies to represent real - world examples. dice scores were calculated between the predicted segmentations and expert radiologist reference standard segmentations to evaluate model performance on an internal test set and two external test sets, and to compare against two publicly available models and totalsegmentator ct. the model was also applied to an internal dataset containing abdominal mris to investigate age - dependent volume changes. a total of 1143 examinations ( 616 mris, 527 cts ; median age 61 years, iqr 50 - 72 ) were split into a training set ( n = 1088 ; ct and mri ) and an internal test set ( n = 55 ; mri only ) ; two external test sets ( amos, n = 20 ; chaos, n = 20 ; mri only ) and an internal aging - study dataset of 8672 abdominal mris ( median age 59 years, iqr 45 - 70 ) were also included. the model showed a dice score of 0.839 on the internal test set and outperformed the two other models ( dice score, 0.862 versus 0.759 ; and 0.838 versus 0.560 ; p <.001 for both ). the proposed open - source, easy - to - use model allows for automatic, robust segmentation of 80 structures, extending the capabilities of totalsegmentator to mris of any sequence. the ready - to - use online tool is available at https://totalsegmentator.com, the model at https://github.com/wasserth/totalsegmentator, and the dataset at https://zenodo.org/records/14710732.
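The Dice score used for evaluation is a standard overlap measure; a minimal NumPy version for binary masks might look like this:

```python
import numpy as np

def dice_score(pred, ref):
    # Dice = 2 |A ∩ B| / (|A| + |B|) for binary segmentation masks
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```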
|
arxiv:2405.19492
|
the well - known interpretational difficulties with nonlinear schr \ " odinger and von neumann equations can be reduced to the problem of computing multiple - time correlation functions in the absence of a heisenberg picture. having no heisenberg picture, one often resorts to zeno - type reasoning which explicitly involves the projection postulate as a means of computing conditional and joint probabilities. although the method works well in linear quantum mechanics, it completely fails for nonlinear evolutions. we propose an alternative way of performing the same task in linear quantum mechanics and show that the method smoothly extends to the nonlinear domain. the trick is to use appropriate time - dependent hamiltonians which involve " switching - off functions ". we apply the technique to the epr problem in nonlinear quantum mechanics and show that the paradoxes of gisin and polchinski disappear.
|
arxiv:quant-ph/0106051
|
scene graph generation ( sgg ) has made tremendous progress in recent years. however, its underlying long - tailed distribution of predicate classes is a challenging problem. for extremely unbalanced predicate distributions, existing approaches usually construct complicated context encoders to extract the intrinsic relevance of scene context to predicates, and complex networks to improve the learning ability of network models for highly imbalanced predicate distributions. to address the unbiased sgg problem, we introduce a simple yet effective method dubbed context - aware mixture - of - experts ( came ) to improve model diversity and mitigate biased sgg without complicated design. specifically, we propose to integrate the mixture of experts with a divide - and - ensemble strategy to remedy the severely long - tailed distribution of predicate classes, which is applicable to the majority of unbiased scene graph generators. the biased sgg is thereby reduced, and the model tends to produce more evenly distributed predicate predictions. however, experts with identical weights are not sufficiently diverse to differentiate between the various predicate distribution levels. to enable the network to dynamically exploit the rich scene context and further boost model diversity, we simply use a built - in module to create a context encoder. the importance of each expert to the scene context, and of each predicate to each expert, is dynamically associated via expert weighting ( ew ) and predicate weighting ( pw ) strategies. we have conducted extensive experiments on three tasks using the visual genome dataset, showing that came outperforms recent methods and achieves state - of - the - art performance. our code will be made publicly available.
|
arxiv:2208.07109
|
a partition is a $ t $ - core partition if $ t $ is not one of its hook lengths. let $ c _ t ( n ) $ be the number of $ t $ - core partitions of $ n $. in 1999, stanton conjectured $ c _ t ( n ) \ le c _ { t + 1 } ( n ) $ if $ 4 \ le t \ ne n - 1 $. this was proved for $ t $ fixed and $ n $ sufficiently large by anderson, and for small values of $ t $ by kim and rouse. in this paper, we prove stanton ' s conjecture in general. our approach is to find a saddle point asymptotic formula for $ c _ t ( n ) $, valid in all ranges of $ t $ and $ n $. this includes the known asymptotic formulas for $ c _ t ( n ) $ as special cases, and shows that the behavior of $ c _ t ( n ) $ depends on how $ t ^ 2 $ compares in size to $ n $. for example, our formula implies that if $ t ^ 2 = \ kappa n + o ( t ) $, then $ c _ t ( n ) = \ frac { \ exp \ left ( 2 \ pi \ sqrt { a n } \ right ) } { b n } ( 1 + o ( 1 ) ) $ for suitable constants $ a $ and $ b $ defined in terms of $ \ kappa $.
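The counts $ c _ t ( n ) $ can be computed from the classical generating function $ \sum _ n c _ t ( n ) q ^ n = \prod _ { k \ge 1 } ( 1 - q ^ { t k } ) ^ t / ( 1 - q ^ k ) $; the sketch below multiplies out the series coefficient by coefficient, which lets one spot-check Stanton's inequality for small $ n $:

```python
def t_core_counts(t, nmax):
    # series coefficients of prod_{k>=1} (1 - q^{t k})^t / (1 - q^k)
    c = [0] * (nmax + 1)
    c[0] = 1
    for k in range(1, nmax + 1):        # multiply by 1 / (1 - q^k)
        for n in range(k, nmax + 1):
            c[n] += c[n - k]
    for k in range(1, nmax // t + 1):   # multiply by (1 - q^{t k})^t
        m = t * k
        for _ in range(t):
            for n in range(nmax, m - 1, -1):
                c[n] -= c[n - m]
    return c
```

For t = 2 this recovers the classical fact that a 2-core of n exists exactly when n is a triangular number.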
|
arxiv:2406.02982
|
game science ( chinese : 游戏科学 ; pinyin : youxi kexue ) is a chinese video game development and publishing company founded by feng ji and yang qi in 2014. the studio is headquartered in shenzhen and has an additional office in hangzhou. it is best known for developing the video game black myth : wukong ( 2024 ). == history == === formation and early period ( 2014 – 2017 ) === game science was founded on 13 june 2014. the seven founding members were former employees of tencent and had worked as developers on the massively multiplayer online game asura at the company. at the time of the studio ' s formation, china ' s mobile games market was rapidly expanding, so they made the decision to develop mobile games in order to survive as a studio. in collaboration with publisher netease, game science developed 100 heroes, a mobile game inspired by the romance of the three kingdoms. the game attracted 500 thousand players in its first month and nearly 800 thousand players in its first year. yang qi proposed a single - player game as their next project, but the idea was shelved due to the high costs and risks involved for a newly established studio. instead, their next project became the mobile game art of war : red tides. in 2019, the game was acquired by chaoxi guangnian, a game company under bytedance. lilith games ceo wang xiwen — a former colleague of feng ji at tencent — introduced feng ji and hero games ceo daniel wu to each other, which ultimately led to wu investing in game science. during a meeting in their early days, game science committed to pursuing a vision of creating games that move and resonate with them personally. during a speech at an art exhibition in april 2025, feng ji remarked that this was a core value of the studio.
he explained that the idea is that a project can progress effectively if game developers, as users themselves, have a better understanding of both the work and its players, but that they only represent themselves and thus must constantly experiment to find the intersection between themselves and players. the studio ' s vision also retained the ideas reflected in feng ji ' s 2007 article " who murdered our games? " ( 谁谋杀了我们的游戏? ), which offers a critique from the perspective of a game planner, arguing that many games fail before they even leave the development stage ; these failures occur when development teams lack excitement for the games they are creating, and the industry has fostered a mentality where players are treated like
|
https://en.wikipedia.org/wiki/Game_Science
|
we review attempts to estimate the influence of global cosmological expansion on local systems. here ` local ' is taken to mean that the sizes of the considered systems are much smaller than cosmologically relevant scales. for example, such influences can affect orbital motions as well as configurations of compact objects, like black holes. we also discuss how measurements of distances, velocities, etc. of moving objects, based on the exchange of electromagnetic signals, are influenced. as an application we compare the orders of magnitude of such effects with the scale set by the apparently anomalous acceleration of the pioneer 10 and 11 spacecraft, which is of order 10 ^ - 9 m / s ^ 2. we find no reason to believe that the latter is of cosmological origin. however, the general problem of gaining a qualitative and quantitative understanding of how the cosmological dynamics influences local systems remains challenging, with only partial clues so far provided by exact solutions to the field equations of general relativity.
|
arxiv:0810.2712
|
this paper presents an architecture for generating music for video games based on the transformer deep learning model. our motivation is to be able to customize the generation according to the taste of the player, who can select a corpus of training examples corresponding to his or her preferred musical style. the system generates various musical layers, following the standard layering strategy currently used by composers designing video game music. to adapt the generated music to the gameplay and to the players ' situation, we use an arousal - valence model of emotions to control the selection of musical layers. we discuss current limitations and prospects for the future, such as collaborative and interactive control of the musical components.
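One minimal way to let an arousal-valence state drive layer selection (the layer names and emotion annotations below are invented for illustration, not taken from the paper) is to pick the layers whose annotation lies closest to the current game state in emotion space:

```python
import math

def select_layers(state, layers, n=2):
    # state: (valence, arousal) of the current game situation
    # layers: name -> (valence, arousal) annotation for each musical layer
    def dist(name):
        v, a = layers[name]
        return math.hypot(v - state[0], a - state[1])
    return sorted(layers, key=dist)[:n]

# hypothetical layer annotations in [-1, 1] x [-1, 1] emotion space
layers = {"calm_pad": (0.5, -0.5),
          "tense_strings": (-0.3, 0.4),
          "battle_drums": (-0.5, 0.8)}
```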
|
arxiv:2207.01698
|
this paper studies the equilibrium price of an asset that is traded in continuous time between n agents who have heterogeneous beliefs about the state process underlying the asset ' s payoff. we propose a tractable model where agents maximize expected returns under quadratic costs on inventories and trading rates. the unique equilibrium price is characterized by a weakly coupled system of linear parabolic equations which shows that holding and liquidity costs play dual roles. we derive the leading - order asymptotics for small transaction and holding costs which give further insight into the equilibrium and the consequences of illiquidity.
|
arxiv:1905.05730
|
turbulent flows at the surface of the ocean deviate from geostrophic equilibrium on scales smaller than about 10 km. these scales are associated with important vertical transport of active and passive tracers, and should play a prominent role in the heat transport at climatic scales and for plankton dynamics. measuring velocity fields on such small scales is notoriously difficult but new, high - resolution satellite altimetry is starting to reveal them. however, the satellite - derived velocities essentially represent the geostrophic flow component, and the impact of unresolved ageostrophic motions on particle dispersion needs to be understood to properly characterize transport properties. here, we investigate ocean fine - scale turbulence using a model that represents some of the processes due to ageostrophic dynamics. we take a lagrangian approach and focus on the predictability of the particle dynamics, comparing trajectories advected by either the full flow or by its geostrophic component only. our results indicate that, over long times, relative dispersion is marginally affected by the filtering of the ageostrophic component. nevertheless, advection by the filtered flow leads to an overestimation of the typical pair - separation rate, and to a bias on trajectories ( in terms of displacement from the actual ones ), whose importance grows with the rossby number. we further explore the intensity of the transient particle clustering induced by ageostrophic motions and find that it can be significant, even for small flow compressibility. indeed, we show that clustering is here due to the interplay between compressibility and persistent flow structures that trap particles, enhancing their aggregation.
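A Lagrangian diagnostic of the kind compared here, the mean-square pair separation (relative dispersion) as a function of time, is straightforward to compute from stored trajectories; a minimal NumPy sketch (the array layout is an assumption):

```python
import numpy as np

def relative_dispersion(traj):
    # traj: array (time, pairs, 2, dim) holding positions of particle pairs;
    # returns <|x1 - x2|^2>, averaged over pairs, at each time step
    sep = traj[:, :, 0, :] - traj[:, :, 1, :]
    return (sep ** 2).sum(axis=-1).mean(axis=1)
```

Comparing this curve for pairs advected by the full flow versus the geostrophic component gives the kind of predictability statement the abstract describes.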
|
arxiv:2406.03915
|
in r - parity conserving supersymmetric ( susy ) models the lightest susy particle ( lsp ) is stable and a candidate for dark matter. depending on the coupling and mass of this particle, the lifetime of the next - to - lightest susy particle ( nlsp ) may be large compared to experimental time scales. in particular, if the nlsp is a charged particle and its decay length is of the order of the earth ' s diameter, cherenkov telescopes might observe parallel muon - like tracks of nlsp pairs produced in neutrino - nucleon interactions in the earth ' s interior. we have investigated two susy scenarios with a long - lived stau nlsp and a gravitino lsp in view of their observability with the icecube detector.
|
arxiv:astro-ph/0610775
|
a phenomenological model for the hyperon - nucleon interactions is constructed by using the quark cluster model approach to the short - distance baryon - baryon interactions. the model contains the su ( 3 ) symmetric meson exchange interaction at large distances and the quark - exchange interaction at short distances. the main feature of the model is the strong channel dependence of the short - range repulsion due to the quark model symmetry. it is pointed out that two channels, ( $ i $, $ s $ ) = ( 1 / 2, 0 ) and ( 3 / 2, 1 ), of the s - wave sigma - nucleon interaction have extremely strong repulsion at short distances.
|
arxiv:hep-ph/9312275
|
we calculate the spectra of inverse compton ( ic ) emissions in gamma - ray burst ( grb ) shocks produced when relativistic ejecta encounters the external interstellar medium, assuming a broken power - law approximation to the synchrotron seed spectrum. four ic processes, including the synchrotron self - compton ( ssc ) processes in grb forward and reverse shocks, and two combined - ic processes ( i. e. scattering of reverse shock photons on the electrons in forward shocks and forward shock photons on the electrons in reverse shocks ), are considered. we find that the ssc emission from reverse shocks dominates over other emission processes in energy bands from tens of mev to tens of gev, for a wide range of shock parameters. this mechanism may be responsible for the prompt high energy gamma - rays detected by the energetic gamma ray experiment telescope ( egret ). at tev energy bands, however, the combined - ic emissions and / or the ssc emission from the forward shocks become increasingly dominant for a moderately steep distribution of shocked electrons.
|
arxiv:astro-ph/0104128
|
the quasinormal modes ( qnms ) of a regular black hole with charge are calculated in the eikonal approximation. in the eikonal limit the qnms of a black hole are determined by the parameters of the unstable circular null geodesics. the behavior of the qnms is compared with that of the reissner - nordstr \ " { o } m black hole ; this is done by fixing some of the parameters that characterize the black holes and varying another. we observe that the parameter related to an effective cosmological constant at small distances determines the behavior of the qnms of the regular black hole with charge.
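For reference, the eikonal-limit relation alluded to here is usually written (in the standard form popularized by Cardoso et al., with $ \Omega _ c $ the orbital angular velocity of the unstable circular null geodesic and $ \lambda $ its Lyapunov exponent) as

$$ \omega _ { \ell n } \approx \ell \, \Omega _ c - i \left ( n + \tfrac { 1 } { 2 } \right ) | \lambda |, $$

so the real part of the qnm frequency tracks the orbital frequency and the imaginary part the instability timescale of the null orbit.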
|
arxiv:1810.09034
|
we present the tolman iv spacetime representing a compact fluid sphere in bigravity. here we explore the effect of the scale parameter $ k $ on the local matter distribution of compact stars. we model three well - known compact stars and show that lower values of $ k $ lead to a stiffer eos. this claim is also supported by the graphical analysis. it can be observed that the sound speed and the adiabatic index are larger for lower values of $ k $. it is also seen that all the solutions of einstein ' s field equations still satisfy the field equations in the presence of a background metric $ \ gamma _ { \ mu \ nu } $. however, the density and pressure are modified by an extra term from the constant curvature background, thus affecting the eos. one can also think of the parameter $ \ alpha \ equiv 1 / k ^ 2 $ as a coupling constant between $ g _ { \ mu \ nu } $ and $ \ gamma _ { \ mu \ nu } $ ; consequently, the stronger the coupling, the stiffer the eos. as $ k \ rightarrow \ infty $, the background de - sitter spacetime reduces to minkowski spacetime and the coupling vanishes. the solution satisfies the causality condition, all the energy conditions and equilibrium under gravitational and hydrostatic forces. the stability of the local stellar structure is enhanced by reducing the scalar curvature of the background spacetime.
|
arxiv:2006.07135
|
we study newton - type methods for inverse problems described by nonlinear operator equations $ f ( u ) = g $ in banach spaces where the newton equations $ f ' ( u _ n ; u _ { n + 1 } - u _ n ) = g - f ( u _ n ) $ are regularized variationally using a general data misfit functional and a convex regularization term. this generalizes the well - known iteratively regularized gauss - newton method ( irgnm ). we prove convergence and convergence rates as the noise level tends to 0, both for an a priori stopping rule and for a lepskii - type a posteriori stopping rule. our analysis includes previous order - optimal convergence rate results for the irgnm as special cases. the main focus of this paper is on inverse problems with poisson data where the natural data misfit functional is given by the kullback - leibler divergence. two examples of such problems are discussed in detail : an inverse obstacle scattering problem with amplitude data of the far - field pattern and a phase retrieval problem. the performance of the proposed method for these problems is illustrated in numerical examples.
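For a finite-dimensional, quadratic-misfit special case, one IRGNM step solves a Tikhonov-regularized linearization; the sketch below (the toy operator and regularization schedule are illustrative, not the paper's Banach-space setting) iterates such steps with a decreasing sequence of regularization parameters:

```python
import numpy as np

def irgnm(F, dF, g, u0, alphas):
    # each step minimizes ||dF(u) h - (g - F(u))||^2 + a ||u + h - u0||^2
    u = u0.copy()
    for a in alphas:
        J, r = dF(u), g - F(u)
        A = J.T @ J + a * np.eye(len(u))
        b = J.T @ r + a * (u0 - u)
        u = u + np.linalg.solve(A, b)
    return u

# toy nonlinear operator F(u) = u^3 (componentwise) for illustration
F = lambda u: u ** 3
dF = lambda u: np.diag(3.0 * u ** 2)
```

In the paper the quadratic penalty is replaced by a general convex regularizer and the quadratic misfit by, e.g., the Kullback-Leibler divergence; the iteration structure is the same.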
|
arxiv:1105.2690
|
hedonic games provide a natural model of coalition formation among self - interested agents. the associated problem of finding stable outcomes in such games has been extensively studied. in this paper, we identify simple conditions on expressivity of hedonic games that are sufficient for the problem of checking whether a given game admits a stable outcome to be computationally hard. somewhat surprisingly, these conditions are very mild and intuitive. our results apply to a wide range of stability concepts ( core stability, individual stability, nash stability, etc. ) and to many known formalisms for hedonic games ( additively separable games, games with w - preferences, fractional hedonic games, etc. ), and unify and extend known results for these formalisms. they also have broader applicability : for several classes of hedonic games whose computational complexity has not been explored in prior work, we show that our framework immediately implies a number of hardness results for them.
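As a concrete instance: in an additively separable hedonic game an agent's utility is the sum of its values for its coalition-mates, and Nash stability means no agent gains by unilaterally moving. A brute-force check (the two-agent values are a toy example) can be sketched as:

```python
def utility(v, i, coalition):
    # additively separable utility: sum of i's values for coalition-mates
    return sum(v[i][j] for j in coalition if j != i)

def is_nash_stable(v, partition):
    # Nash stable: no agent strictly prefers joining another coalition
    # of the partition, or going alone
    for k, coal in enumerate(partition):
        for i in coal:
            current = utility(v, i, coal)
            options = [c for m, c in enumerate(partition) if m != k] + [set()]
            if any(utility(v, i, tgt | {i}) > current for tgt in options):
                return False
    return True

# two agents who value each other positively
v = [[0, 1], [1, 0]]
```

The hardness results in the paper concern deciding whether *any* stable partition exists, which in general cannot be settled by such a per-partition check.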
|
arxiv:1507.03474
|
this paper is about sheaf cohomology for varieties ( schemes ) in characteristic $ p > 0 $. we assume the presence of a frobenius splitting ( see v. b. mehta and a. ramanathan, frobenius splitting and cohomology vanishing for schubert varieties, annals of math. 122 ( 1985 ), 27 - - 40 ). the main result is that a non - zero higher direct image under a proper map of the ideal sheaf of a compatibly frobenius split subvariety cannot have a support whose inverse image is contained in that subvariety. earlier vanishing theorems for frobenius split varieties were based on direct limits and serre ' s vanishing theorem, but our theorem is based on inverse limits and grothendieck ' s theorem on formal functions. the result implies a grauert - - riemenschneider type theorem.
|
arxiv:alg-geom/9202009
|
we present asymptotic giant branch ( agb ) models of solar metallicity, to allow the interpretation of observations of galactic agb stars, whose distances should soon be available after the first release of the gaia catalogue. we find an abrupt change in the agb physical and chemical properties, occurring at the threshold mass to ignite hot bottom burning, i. e. $ 3.5 m _ { \ odot } $. stars with mass below $ 3.5 m _ { \ odot } $ reach the c - star stage and eject into the interstellar medium gas enriched in carbon, nitrogen and $ ^ { 17 } o $. the higher - mass counterparts evolve at large luminosities, between $ 3 \ times 10 ^ 4 l _ { \ odot } $ and $ 10 ^ 5 l _ { \ odot } $. the mass expelled from the massive agb stars shows the imprint of proton - capture nucleosynthesis, with considerable production of nitrogen and sodium and destruction of $ ^ { 12 } c $ and $ ^ { 18 } o $. comparisons with the most recent results from other research groups are discussed, to evaluate the robustness of the present findings. finally, we compare the models with recent observations of galactic agb stars, outlining the possibility offered by gaia to shed new light on the evolutionary properties of this class of objects.
|
arxiv:1607.02876
|
the navier - stokes - korteweg ( nsk ) system is a classical diffuse interface model which is based on van der waals ' theory of capillarity. diffuse interface methods have gained much interest for modelling two - phase flow in porous media. however, for the numerical solution of the nsk equations two major challenges have to be faced. first, an extended numerical stencil is required due to a third - order term in the linear momentum and total energy equations. in addition, the dispersive contribution in the linear momentum equations prevents the straightforward use of contact angle boundary conditions. second, any real gas equation of state is based on a non - convex helmholtz free energy potential, which may cause the eigenvalues of the jacobian of the first - order fluxes to become imaginary inside the spinodal region. in this work, a thermodynamically consistent relaxation model is presented which is used to approximate the nsk equations. the model is complemented by thermodynamically consistent non - equilibrium boundary conditions which take contact angle effects into account. due to the relaxation approach, the contribution of the korteweg tensor in the linear momentum and total energy equations can be reduced to second - order terms, which enables a straightforward implementation of contact angle boundary conditions in a numerical scheme. moreover, the definition of a modified pressure function enables the formulation of first - order fluxes which remain strictly hyperbolic in the entire spinodal region. the present work is a generalization of a previously presented parabolic relaxation model for the isothermal nsk equations.
|
arxiv:2208.05310
|
in this paper we study in detail a system of two weakly coupled harmonic oscillators. this system may be viewed as a simple model for the interaction between a photon and a photodetector. we obtain exact solutions for the general case. we then compute approximate solutions for the case of a single photon ( where one oscillator is initially in its first excited state ) reaching a photodetector in its ground state ( the other oscillator ). the approximate solutions represent the state of both the photon and the photodetector after the interaction, which is not an eigenstate of the individual hamiltonians for each particle, and therefore the energies of each particle do not exist in the copenhagen interpretation of quantum mechanics. we use the approximate solutions that we obtained to compute bohmian trajectories and to study the energy transfer between the two particles. we conclude that even in the bohmian view the energy of each individual particle is not well defined, as the nonlocal quantum potential is not negligible even after the coupling is turned off.
|
arxiv:quant-ph/0307193
|
this paper summarises the results of our research on macroscopic entanglement in spin systems and free bosonic gases. we explain how entanglement can be observed using entanglement witnesses which are themselves constructed within the framework of thermodynamics and are thus macroscopic observables. these thermodynamical entanglement witnesses result in bounds on macroscopic parameters of the system, such as the temperature, the energy or the susceptibility, below which entanglement must be present. the derived bounds indicate a relationship between the occurrence of entanglement and the establishment of order, possibly resulting in phase transition phenomena. we give a short overview of the concepts developed in condensed matter physics to capture the characteristics of phase transitions, in particular in terms of order and correlation functions. finally we ask and speculate whether entanglement could be a generalised order concept by itself, relevant in ( quantum induced ) phase transitions such as bec, and whether taking this view may help us to understand the underlying process of high - t superconductivity.
|
arxiv:quant-ph/0610268
|
channel - based pruning has achieved significant success in accelerating deep convolutional neural networks, whose pipeline is an iterative three - step procedure : ranking, pruning and fine - tuning. however, this iterative procedure is computationally expensive. in this study, we present a novel computationally efficient channel pruning approach based on coarse ranking that utilizes the intermediate results during fine - tuning to rank the importance of filters, built upon state - of - the - art works with data - driven ranking criteria. the goal of this work is not to propose a single improved approach built upon a specific channel pruning method, but to introduce a new general framework that works for a series of channel pruning methods. various benchmark image datasets ( cifar - 10, imagenet, birds - 200, and flowers - 102 ) and network architectures ( alexnet and vgg - 16 ) are utilized to evaluate the proposed approach for object classification purposes. experimental results show that the proposed method can achieve almost identical performance to the corresponding state - of - the - art works ( baseline ) while our ranking time is negligibly short. specifically, with the proposed method, 75 % and 54 % of the total computation time for the whole pruning procedure can be reduced for alexnet on cifar - 10, and for vgg - 16 on imagenet, respectively. our approach would significantly facilitate pruning practice, especially on resource - constrained platforms.
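The rank-then-prune step can be illustrated with a simple magnitude criterion (an L1-norm proxy stands in here for the paper's data-driven coarse ranking gathered during fine-tuning; shapes and pruning ratio are illustrative):

```python
import numpy as np

def rank_filters(weights):
    # weights: (out_channels, in_channels, k, k); one score per output filter
    return np.abs(weights).sum(axis=(1, 2, 3))

def prune_mask(scores, ratio):
    # mark the lowest-scoring fraction `ratio` of filters for removal
    drop = np.argsort(scores)[:int(len(scores) * ratio)]
    mask = np.ones(len(scores), dtype=bool)
    mask[drop] = False
    return mask
```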
|
arxiv:1902.06385
|
we study type - b conformal anomalies associated with $ \ frac { 1 } { 2 } $ - bps coulomb - branch operators in 4d $ \ mathcal n = 2 $ superconformal field theories. when the vacuum preserves the conformal symmetry these anomalies coincide with the two - point function coefficients in the coulomb - branch chiral ring. they are non - trivial functions of exactly - marginal couplings that can be determined from the $ s ^ 4 $ partition function. in this paper, we examine the fate of these anomalies in vacua of the higgs - branch moduli space, where conformal symmetry is spontaneously broken. we argue non - perturbatively that these anomalies are covariantly constant on conformal manifolds. in some cases, this can be used to show that they match in the broken and unbroken phases. thus, we uncover a new class of data on the higgs branch of 4d $ \ mathcal n = 2 $ conformal field theories that are exactly computable. an interesting application of this matching occurs in $ \ mathcal n = 2 $ circular quivers that deconstruct the 6d ( 2, 0 ) theory on a torus. in that context, we argue that 4d supersymmetric localisation can be used to calculate non - trivial data involving $ \ frac { 1 } { 2 } $ - bps operators of the 6d theory as exact functions of the complex structure of the torus.
|
arxiv:1911.05827
|
A new variational mode decomposition (VMD) based deep learning approach is proposed in this paper for the time series forecasting problem. First, VMD is adopted to decompose the original time series into several sub-signals. Then, a convolutional neural network (CNN) is applied to learn the reconstruction patterns on the decomposed sub-signals to obtain several reconstructed sub-signals. Finally, a long short-term memory (LSTM) network is employed to forecast the time series with the decomposed sub-signals and the reconstructed sub-signals as inputs. The proposed VMD-CNN-LSTM approach originates from the decomposition-reconstruction-ensemble framework and innovates by embedding the reconstruction, single forecasting, and ensemble steps in a unified deep learning approach. To verify the forecasting performance of the proposed approach, four typical time series datasets are introduced for empirical analysis. The empirical results demonstrate that the proposed approach consistently outperforms the benchmark approaches in terms of forecasting accuracy, and also indicate that the reconstructed sub-signals obtained by the CNN are important for further improving the forecasting performance.
|
arxiv:2002.09695
|
This paper proposes a clustering, labeling, then augmenting framework that significantly enhances performance in semi-supervised text classification (SSTC) tasks, effectively addressing the challenge of vast datasets with limited labeled examples. Unlike traditional SSTC approaches that rely on a predefined small set of labeled data to generate pseudo-labels for the unlabeled data, this framework innovatively employs clustering to select representative "landmarks" for labeling. These landmarks subsequently act as intermediaries in an ensemble of augmentation techniques, including retrieval-augmented generation (RAG), large language model (LLM) based rewriting, and synonym substitution, to generate synthetic labeled data without creating pseudo-labels for the unlabeled data. Empirical results show that even in complex text document classification scenarios involving over 100 categories, our method achieves state-of-the-art accuracies of 95.41% on the Reuters dataset and 82.43% on the Web of Science dataset. Our approach significantly reduces the reliance on human labeling efforts and the associated expenses, while simultaneously ensuring high data quality and minimizing privacy risks. The fine-tuning results further show the efficiency of fine-tuning LLMs for text classification tasks, highlighting a robust solution for leveraging limited labeled data.
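A minimal sketch of the landmark-selection step (our illustration, not the paper's implementation): cluster the unlabeled text embeddings and hand the point nearest each centroid to the human annotator. The tiny k-means below is a stand-in for whatever clustering method the framework actually uses, and `select_landmarks` is our name.

```python
import numpy as np

def select_landmarks(X, k, iters=20, seed=0):
    """Pick k representative 'landmark' points from embeddings X (n, d):
    run a small k-means, then return the index of the point nearest each
    centroid. Those landmarks are what gets labeled by humans."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid, then update centroids.
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = X[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    return dists.argmin(axis=0)  # one point index per centroid

# Toy demo: three well-separated 2-D "document embedding" clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.1, size=(50, 2))
               for c in ([0, 0], [5, 5], [0, 5])])
landmarks = select_landmarks(X, k=3)
print("landmark indices to label:", landmarks)
```

In the full framework the labeled landmarks would then seed the RAG / rewriting / synonym-substitution augmentations described above.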
|
arxiv:2411.06175
|
Unravelling the nature of energy transport in multi-chromophoric photosynthetic complexes is essential to extract valuable design blueprints for light-harvesting applications. Long-range exciton transport in such systems is facilitated by a combination of delocalized excitation wavefunctions (excitons) and remarkable exciton diffusivities. The unambiguous identification of the exciton transport, however, is intrinsically challenging due to the system's sheer complexity. Here we address this challenge by employing a novel spectroscopic lab-on-a-chip approach: a combination of ultrafast coherent two-dimensional spectroscopy and microfluidics working in tandem with theoretical modelling. This allowed us to unveil exciton transport throughout the entire hierarchical supramolecular structure of a double-walled artificial light-harvesting complex. We show that at low exciton densities, the outer layer acts as an antenna that supplies excitons to the inner tube, while under high excitation fluences it protects the inner tube from overburning. Our findings shed light on the excitonic trajectories across different sub-units of a multi-layered supramolecular structure and underpin the great potential of artificial light-harvesting complexes for directional excitation energy transport.
|
arxiv:1907.04604
|
We consider the computation of the mean of sequences in the quantum model of computation. We determine the query complexity in the case of sequences which satisfy a $p$-summability condition for $1 \le p < 2$. This settles a problem left open in Heinrich (2001).
|
arxiv:quant-ph/0109038
|
The concepts of risk-aversion, chance-constrained optimization, and robust optimization have developed significantly over the last decade. The statistical learning community has also witnessed rapid theoretical and applied growth by relying on these concepts. A modeling framework, called distributionally robust optimization (DRO), has recently received significant attention in both the operations research and statistical learning communities. This paper surveys the main concepts of and contributions to DRO, and its relationships with robust optimization, risk-aversion, chance-constrained optimization, and function regularization.
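As a one-formula illustration of the framework surveyed here (our notation, not the paper's): where classical stochastic programming minimizes an expected loss under one fixed distribution, DRO hedges against a worst case over an ambiguity set $\mathcal{U}$ of distributions,

```latex
\min_{\theta \in \Theta} \; \sup_{Q \in \mathcal{U}}
  \; \mathbb{E}_{\xi \sim Q}\bigl[\ell(\theta; \xi)\bigr],
```

and the choice of $\mathcal{U}$ (moment-based, distance-based, etc.) is what links DRO to robust optimization, risk measures, chance constraints, and regularization.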
|
arxiv:1908.05659
|
Complex, concentrated, multi-component alloys have been shown to display outstanding thermo-mechanical properties, which have typically been attributed to sluggish diffusion, entropic, and lattice-distortion effects. Here, we investigate two metal alloys with such exemplary properties, the equiatomic, single-phase, face-centered-cubic (FCC) alloys NiCoCr and NiCoCrFeMn, and we compare their microstructural kinetics to the behavior of a pure-Ni FCC metal. We perform long-time kinetic Monte Carlo (KMC) simulations, and we analyze in detail the kinetics of atomic vacancies. We find that vacancies in both concentrated alloys exhibit subdiffusive thermally driven dynamics, in direct contrast to the diffusive dynamics of pure Ni. The subdiffusive dynamics should be attributed to dynamical sluggishness, which is modeled by a fractional Brownian random walk. Furthermore, we analyze the statistics of waiting times, and we interpret long power-law-distributed rest periods as a direct consequence of barrier energy scales and lattice distortions.
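The subdiffusion diagnostic used for such vacancy trajectories can be illustrated independently of the KMC data: fit $\mathrm{MSD}(t) \propto t^{\alpha}$ on a log-log scale, where $\alpha < 1$ signals subdiffusion and $\alpha \approx 1$ ordinary diffusion. The sketch below (our illustration; function names are ours) applies the fit to a plain unbiased random walk, which should recover $\alpha \approx 1$.

```python
import numpy as np

def msd(traj, max_lag):
    """Time-averaged mean-squared displacement of a 1-D trajectory
    for lags 1..max_lag."""
    return np.array([np.mean((traj[lag:] - traj[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

def anomalous_exponent(traj, max_lag=50):
    """Fit MSD(t) ~ t^alpha on a log-log scale; alpha < 1 flags
    subdiffusion, alpha ~ 1 ordinary diffusion."""
    m = msd(traj, max_lag)
    lags = np.arange(1, max_lag + 1)
    alpha, _ = np.polyfit(np.log(lags), np.log(m), 1)
    return alpha

# Control case: an ordinary +/-1 random walk is diffusive (alpha near 1).
rng = np.random.default_rng(0)
walk = np.cumsum(rng.choice([-1.0, 1.0], size=200_000))
alpha_hat = anomalous_exponent(walk)
print("fitted exponent:", round(alpha_hat, 2))
```

A trajectory generated by fractional Brownian motion with Hurst exponent $H < 1/2$ would instead yield $\alpha = 2H < 1$ under the same fit.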
|
arxiv:2304.04255
|
Today, relativistic calculations are known to provide a very successful means for the study of open-shell atoms and ions. But although accurate atomic data are obtained from these computations, they are traditionally carried out in jj-coupling and, hence, often do not allow for a simple LSJ classification of the atomic levels as needed by experiment. In fact, this lack of a proper spectroscopic notation from relativistic structure calculations has recently hampered not only the spectroscopy of medium and heavy elements, but also the interpretation and analysis of inner-shell processes, for which the occurrence of additional vacancies usually leads to a very detailed fine structure. Therefore, in order to facilitate the classification of atomic levels from such computations, here we present a program (within the RATIP environment) which helps transform the atomic wave functions from jj-coupled multiconfiguration Dirac-Fock computations into an LS-coupled representation. Besides a proper LSJ assignment of the atomic levels, the program also supports the full transformation of the wave functions if required for (nonrelativistic) computations.
|
arxiv:physics/0406006
|
We present Wikipedia-based Polyglot Dirichlet Allocation (WikiPDA), a crosslingual topic model that learns to represent Wikipedia articles written in any language as distributions over a common set of language-independent topics. It leverages the fact that Wikipedia articles link to each other and are mapped to concepts in the Wikidata knowledge base, such that, when represented as bags of links, articles are inherently language-independent. WikiPDA works in two steps, by first densifying bags of links using matrix completion and then training a standard monolingual topic model. A human evaluation shows that WikiPDA produces more coherent topics than monolingual text-based LDA, thus offering crosslinguality at no cost. We demonstrate WikiPDA's utility in two applications: a study of topical biases in 28 Wikipedia editions, and crosslingual supervised classification. Finally, we highlight WikiPDA's capacity for zero-shot language transfer, where a model is reused for new languages without any fine-tuning. Researchers can benefit from WikiPDA as a practical tool for studying Wikipedia's content across its 299 language editions in interpretable ways, via an easy-to-use library publicly available at https://github.com/epfl-dlab/WikiPDA.
|
arxiv:2009.11207
|
We discuss selected aspects of the magnetic field evolution of solar-type stars. Most of the stars with activity cycles lie in the range where the normalized chromospheric calcium emission increases linearly with the inverse Rossby number. For Rossby numbers below about a quarter of the solar value, the activity saturates and no cycles have been found. For Rossby numbers above the solar value, again no activity cycles have been found, but now the activity goes up again for a major fraction of the stars. Rapidly rotating stars show nonaxisymmetric large-scale magnetic fields, but there is disagreement between models and observations regarding the actual value of the Rossby number at which this happens. We also discuss the prospects of detecting the sign of magnetic helicity using various linear polarization techniques, both at the stellar surface using the parity-odd contribution to linear polarization and above the surface using Faraday rotation.
|
arxiv:2004.00439
|
We present a numerical implementation, based on Wannier interpolation, of a Kubo-Greenwood formalism for computing the spatially dispersive optical conductivity in crystals at first order in the wave vector of light. This approach is more efficient than direct $\textit{ab initio}$ methods because, with less computational cost, it allows for a much finer sampling of reciprocal space, resulting in better resolved spectra. Moreover, Wannier interpolation avoids errors arising from truncation of the sums over conduction bands when evaluating the spatially dispersive optical matrix elements. We validate our method by computing the optical activity spectrum of selected crystals, both polar (GaN) and chiral (trigonal Te, trigonal Se, and $\alpha$-quartz), and comparing with the existing literature.
|
arxiv:2504.09742
|
Metasurfaces consisting of an array of planar sub-wavelength structures have shown great potential in controlling thermal infrared radiation, including its intensity, coherence, and polarization. These capabilities, together with their two-dimensional nature, make thermal metasurfaces an ultracompact multifunctional platform for infrared light manipulation. Integrating functionalities such as amplitude, phase (spectrum and directionality), and polarization on a single metasurface offers fascinating device responses. However, it remains a significant challenge to concurrently optimize the optical, electrical, and thermal responses of a thermal metasurface in a small footprint. In this work, we develop a center-contacted electrode-line design for a thermal infrared metasurface based on a gold nanorod array, which allows local Joule heating to electrically excite the emission without undermining the localized surface plasmonic resonance. The narrowband emission of the thermal metasurfaces and their robustness against temperature nonuniformity demonstrated in this work have important implications for applications in infrared imaging, sensing, and energy harvesting.
|
arxiv:2208.10484
|
We have switched GaAs/AlAs and AlGaAs/AlAs planar microcavities that operate in the "original" (O) telecom band by exploiting the instantaneous electronic Kerr effect. We observe that the resonance frequency reversibly shifts within one picosecond. We investigate experimentally and theoretically the role of several main parameters: the material backbone and its electronic bandgap, the pump power, the quality factor, and the duration of the switch pulse. The magnitude of the shift is reduced when the backbone of the central $\lambda$-layer has a greater electronic bandgap; pumping with photon energies near the bandgap resonantly enhances the switched magnitude. Our model shows that the magnitude of the resonance frequency shift depends on the pump pulse duration and is maximized when the duration matches the cavity storage time that is set by the quality factor. We provide the settings for the essential parameters so that the frequency shift of the cavity resonance can be increased to one linewidth.
|
arxiv:1508.02776
|
We analyze in detail the anomaly cancellation conditions for the strongly coupled $E_8 \times E_8$ heterotic string introduced by Horava and Witten, and find new features compared to the ten-dimensional Green-Schwarz mechanism. We project the corresponding Lagrangian of the zero-mode fields onto ten dimensions. We find that it has a simple interpretation provided by the conjectured heterotic string/fivebrane duality. The part which originates from eleven dimensions is naturally described in fivebrane language. We discuss physical couplings and scales in four dimensions.
|
arxiv:hep-th/9701048
|
In this article we investigate some "unexpected" properties of the "infinite power tower" \[ y = f(x) = x^{x^{x^{\cdot^{\cdot^{\cdot}}}}} \] The material collected here is also intended as a potential guide for teachers of high-school/undergraduate students interested in planning an activity of investigative mathematics in the classroom, where knowledge is gained through the active, creative and cooperative use of diversified mathematical tools (and some ingenuity). The activity should preferably be carried out in a laboratory style, with no preclusions on the paths chosen and undertaken by the students and with little or no information imparted from the teacher's desk. The teacher should then act just as a guide and a facilitator. The mathematical prerequisites to follow this path are: functions, properties of exponentials and logarithms, sequences, limits, and derivatives. The topics presented should then be accessible to undergraduate or "advanced high school" students.
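The defining iteration is easy to experiment with numerically, which fits the laboratory spirit described here: the tower is the limit of $y \mapsto x^y$, which converges exactly for $e^{-e} \le x \le e^{1/e}$ (a classical result). A minimal sketch, with our function name:

```python
import math

def power_tower(x, iters=1000):
    """Iterate y -> x**y starting from y = 1; the infinite power tower
    is the limit when it exists (for e**-e <= x <= e**(1/e))."""
    y = 1.0
    for _ in range(iters):
        y = x ** y
    return y

# Classic worked example: the tower at x = sqrt(2) converges to 2,
# since 2 solves y = sqrt(2)**y and the iteration is contracting there.
print(power_tower(math.sqrt(2)))
# At the right endpoint x = e**(1/e) the limit is e, but convergence
# is slow (the fixed point is non-contracting at the boundary).
print(power_tower(math.e ** (1 / math.e)))
```

Trying values of $x$ just above $e^{1/e}$ shows the iterates diverging, a natural classroom experiment.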
|
arxiv:1908.05559
|
We investigate the mutual proximity effect in a normal metal contacted to a superconductor through a magnetic interface. Analytical and self-consistent numerical results are presented, and we consider both the diffusive and ballistic regimes. We focus on the density of states in both the normal and superconducting regions, and find that the presence of spin-dependent phase shifts occurring at the interface qualitatively modifies the density of states. In particular, we find that the proximity-induced pairing amplitudes in the normal-metal region undergo a conversion at the Fermi level from pure even-frequency to odd-frequency. Above a critical value of the interface spin-polarization (or, equivalently, for fixed interface spin-polarization, above a critical interface resistance), only odd-frequency correlations remain. This is accompanied by the replacement of the familiar proximity minigap or pseudogap in the normal layer by an enhancement of the density of states above its normal-state value for energies near the chemical potential. The robustness of this effect against inelastic scattering, impurity scattering, and the depletion of the superconducting order parameter close to the interface is investigated. We also study the inverse proximity effect in the diffusive limit. We find that the above-mentioned conversion persists also for thin superconducting layers comparable in size to the superconducting coherence length $\xi_\text{s}$, as long as the inverse proximity effect is relatively weak. Concomitantly, we find a shift in the critical interface resistance at which the pairing conversion occurs. Our findings suggest a robust and simple method for producing purely odd-frequency superconducting correlations that can be tested experimentally.
|
arxiv:1004.1176
|
Iterates of quantum operations and their convergence are investigated in the context of mean ergodic theory. We discuss in detail the convergence of the iterates and show that the uniform ergodic theorem plays an essential role. Our results follow from some general theorems concerning completely positive maps, mean ergodic operators, and operator algebras on Hilbert spaces. A few examples for both finite- and infinite-dimensional Hilbert spaces are presented as well.
|
arxiv:1911.03956
|
Usually, hyperparallel quantum computation can speed up quantum computing, largely reduce the quantum resources consumed, resist noise, and simplify the storage of quantum information. Here, we present the first scheme for self-error-corrected hyperparallel photonic quantum computation working with both the polarization and the spatial-mode degrees of freedom of photon systems simultaneously. It can prevent bit-flip errors from happening with an imperfect nonlinear interaction under nearly realistic conditions. We give a way to design the universal hyperparallel photonic quantum controlled-NOT (CNOT) gate on a two-photon system, resorting to the nonlinear interaction between a circularly polarized photon and the electron spin in a quantum dot in a double-sided microcavity system, taking the imperfect interaction under nearly realistic conditions into account. Its self-error-corrected pattern prevents bit-flip errors from happening in the hyperparallel quantum CNOT gate, guarantees robust fidelity, and relaxes the requirements for its experiment. Meanwhile, this scheme works in a failure-heralded way. We also generalize this approach to achieve a self-error-corrected hyperparallel quantum CNOT$^n$ gate working on a multiple-photon system. These good features make this scheme more useful for photonic quantum computation and quantum communication in the future.
|
arxiv:1802.00113
|
This paper examines the evolving performance practices of Ludwig van Beethoven's cello sonatas, with a particular focus on tempo and portamento between 1930 and 2012. It integrates analyses of 22 historical recordings and advancements in recording technology to shed light on changes in interpretative approaches. By comparing Beethoven's metronome markings, as understood through contemporaries such as Czerny and Moscheles, with their application in modern performances, my research highlights notable deviations. These differences demonstrate the challenges performers face in reconciling historical tempos with the demands of contemporary performance practice. My study pays special attention to the diminishing use of audible portamento in the latter half of the 20th century, contrasted with a gradual increase in tempo after 1970. This development is linked to broader cultural and pedagogical shifts, including the adoption of fingering techniques that reduce hand shifts, thereby facilitating greater technical precision at faster tempos. Nonetheless, my study identifies the persistence of 'silent portamento' as an expressive device, allowing performers to retain stylistic expression without compromising rhythmic integrity. My paper offers valuable insights for performers and scholars alike, advocating a critical reassessment of Beethoven's tempo markings and the nuanced application of portamento in modern performance practice.
|
arxiv:2502.00030
|
Active learning is an important technique for low-resource sequence labeling tasks. However, current active sequence labeling methods use the queried samples alone in each iteration, which is an inefficient way of leveraging human annotations. We propose a simple but effective data augmentation method to improve the label efficiency of active sequence labeling. Our method, SeqMix, simply augments the queried samples by generating extra labeled sequences in each iteration. The key difficulty is to generate plausible sequences along with token-level labels. In SeqMix, we address this challenge by performing mixup for both sequences and token-level labels of the queried samples. Furthermore, we design a discriminator during sequence mixup, which judges whether the generated sequences are plausible or not. Our experiments on named entity recognition and event detection tasks show that SeqMix can improve the standard active sequence labeling method by $2.27\%$--$3.75\%$ in terms of $F_1$ scores. The code and data for SeqMix can be found at https://github.com/rz-zhang/SeqMix.
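The core mixup step can be sketched as follows. This is our illustration of the general idea, not the authors' exact recipe (the plausibility discriminator is omitted, and the Beta parameter is a placeholder): interpolate the token embeddings and one-hot token labels of two queried sequences with a shared Beta-distributed weight.

```python
import numpy as np

def seqmix(emb_a, labels_a, emb_b, labels_b, n_classes, alpha=8.0, seed=0):
    """Token-level mixup of two queried sequences: blend token embeddings
    and one-hot label vectors with weight lam ~ Beta(alpha, alpha)."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)
    onehot = np.eye(n_classes)
    mixed_emb = lam * emb_a + (1 - lam) * emb_b
    mixed_labels = lam * onehot[labels_a] + (1 - lam) * onehot[labels_b]
    return mixed_emb, mixed_labels, lam

# Toy demo: two queried sequences of 5 tokens with 16-dim embeddings
# and 3 label classes (e.g. O / B-ENT / I-ENT).
rng = np.random.default_rng(1)
emb_a, emb_b = rng.normal(size=(2, 5, 16))
labels_a = np.array([0, 1, 1, 2, 0])
labels_b = np.array([2, 2, 0, 1, 1])
emb, lab, lam = seqmix(emb_a, labels_a, emb_b, labels_b, n_classes=3)
print(emb.shape, lab.shape)  # mixed embeddings and soft token labels
```

The mixed pairs would then be added to the labeled pool alongside the queried samples; in the paper a discriminator additionally filters out implausible generated sequences.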
|
arxiv:2010.02322
|
The inverse problem of amplitude reconstruction on an inclined line, based on the values of the amplitude or its modulus as recorded on a semi-infinite line orthogonal to the beam propagation direction, is considered within the framework of the 2D parabolic equation. It is demonstrated that this inverse problem, in the case when the complex image-plane amplitude is known, can be reduced to a singular Cauchy-type integral equation. The existence of its solutions requires that certain conditions be met, but if a solution exists it is necessarily unique. The obtained integral equation is then approximated piecewise, and the resulting linear algebraic system is solved numerically while applying the necessary regularization procedures to enhance the stability of its solutions. Finally, an iterative method of phase retrieval is developed and a set of numerical experiments is performed.
|
arxiv:1804.04718
|
HighPT is a Mathematica package for the analysis of high-energy data of semileptonic transitions at hadron colliders. It allows one to compute high-$p_T$ tail observables for semileptonic processes, i.e. Drell-Yan cross sections, for dilepton and monolepton final states at the LHC. These observables can be calculated at tree level within the Standard Model Effective Field Theory, including the relevant operators up to dimension eight to ensure a consistent description of the cross section including terms of $\mathcal{O}(\Lambda^{-4})$ in the cutoff scale $\Lambda$. For new physics models with new mediators that can be resolved at LHC energies, HighPT can also account for the full propagation effects of these new bosonic states at tree level. Using the available data from the high-$p_T$ tails in the relevant LHC Run-II searches by the ATLAS and CMS collaborations, HighPT can also construct the corresponding likelihoods for all possible flavors of the leptonic final states. As an illustration, we derive and compare constraints on Wilson coefficients at different orders in the effective field theory expansion, and we investigate lepton flavor violation for the $S_3$ leptoquark model. The HighPT code is publicly available at https://github.com/HighPT/HighPT.
|
arxiv:2207.10756
|
This letter attempts to design a surveillance scheme by adopting an active reconfigurable intelligent surface (RIS). Different from the conventional passive RIS, an active RIS can not only adjust the phase shift but also amplify the amplitude of the reflected signal. With such reflection, the reflected signal of the active RIS can jointly adjust the signal-to-interference-plus-noise ratio (SINR) of the suspicious receiver and the legitimate monitor; hence proactive eavesdropping at the physical layer can be effectively realized. We formulate the optimization problem with the target of maximizing the eavesdropping rate to obtain the optimal reflecting coefficient matrix of the active RIS. The formulated optimization problem is nonconvex fractional programming and challenging to deal with. We then solve the problem by approximating it as a series of convex constraints. Simulation results validate the effectiveness of our designed surveillance scheme and show that the proposed active-RIS-aided surveillance scheme performs well in terms of eavesdropping rate compared with the scheme with a passive RIS.
|
arxiv:2210.13010
|
Let $V$ be a subvariety of codimension $\leq g$ of the moduli space $\mathcal{A}_g$ of principally polarized abelian varieties of dimension $g$, or of the moduli space $\widetilde{\mathcal{M}}_g$ of curves of compact type of genus $g$. We prove that the set $E_1(V)$ of elements of $V$ which map onto an elliptic curve is analytically dense in $V$. From this we deduce that if $V \subset \mathcal{A}_g$ is complete, then $V$ has codimension equal to $g$ and the set of elements of $V$ isogenous to a product of $g$ elliptic curves is countable and analytically dense in $V$. We also prove a technical property of the conormal sheaf of $V$ if $V \subset \widetilde{\mathcal{M}}_g$ (or $\mathcal{A}_g$) is complete of codimension $g$.
|
arxiv:alg-geom/9609008
|
The aim of this paper is to study foliations that remain invariant under parallel transport along the integral curves of vector fields of other foliations. Following this idea, we define a new concept of stability between foliations. A particular case of stability (called regular stability) is studied, giving a useful characterization in terms of the Riemann curvature tensor. This characterization allows us to prove that there are no regularly self-stable foliations of dimension greater than 1 in Schwarzschild and Robertson-Walker space-times. Finally, we study the existence of regularly self-stable foliations in other space-times, like $pp$-wave space-times.
|
arxiv:gr-qc/0501099
|
Initially, we derive a nonlinear integral equation for the vacuum counting function of the spin-1/2 XYZ chain in the {\it disordered regime}, thus paralleling similar results by Kl\"umper \cite{klu}, achieved through a different technique in the {\it antiferroelectric regime}. In terms of the counting function we obtain the usual physical quantities, like the energy and the transfer matrix (eigenvalues). Then, we introduce a double scaling limit which appears to describe the sine-Gordon theory on a cylindrical geometry, thus generalising famous results in the plane by Luther \cite{lut} and Johnson et al. \cite{jkm}. Furthermore, after extending the nonlinear integral equation to excitations, we derive scattering amplitudes involving solitons/antisolitons first, and bound states later. The latter case comes out as manifestly related to the deformed Virasoro algebra of Shiraishi et al. \cite{skao}. Although this nonlinear integral equation framework was contrived to deal with finite geometries, we prove it to be effective for discovering or rediscovering S-matrices. As a particular example, we prove that this unique model furnishes explicitly two S-matrices, proposed respectively by Zamolodchikov \cite{zame} and Lukyanov-Mussardo-Penati \cite{luk,mp} as plausible scattering descriptions of unknown integrable field theories.
|
arxiv:hep-th/0504122
|
This paper describes our DKU-OPPO system for the 2022 Spoofing-Aware Speaker Verification (SASV) Challenge. First, we split the joint task into speaker verification (SV) and spoofing countermeasure (CM), and these two tasks are optimized separately. For ASV systems, four state-of-the-art methods are employed. For CM systems, we propose two methods on top of the challenge baseline to further improve the performance, namely embedding random sampling augmentation (ERSA) and one-class confusion loss (OCCL). Second, we also explore whether the SV embedding could help improve CM system performance. We observe a dramatic performance degradation of existing CM systems on the domain-mismatched VoxCeleb2 dataset. Third, we compare different fusion strategies, including parallel score fusion and sequential cascaded systems. Compared to the 1.71% SASV-EER baseline, our submitted cascaded system obtains a 0.21% SASV-EER on the challenge's official evaluation set.
|
arxiv:2207.07510
|
By comparing photoemission spectroscopy with a non-perturbative dynamical mean-field theory extension of many-body ab initio calculations, we show, in the prominent case of pentacene crystals, that excellent agreement with experiment for the bandwidth, dispersion, and lifetime of the hole carrier bands can be achieved in organic semiconductors, provided that one properly accounts for the coupling to molecular vibrational modes and the presence of disorder. Our findings rationalize the growing experimental evidence that even the best band structure theories based on a many-body treatment of electronic interactions cannot reproduce the experimental photoemission data in this important class of materials.
|
arxiv:1111.2148
|
Testing with randomly generated inputs (fuzzing) has gained significant traction due to its capacity to expose program vulnerabilities automatically. Fuzz testing campaigns generate large amounts of data, making them ideal for the application of machine learning (ML). Neural program smoothing (NPS), a specific family of ML-guided fuzzers, aims to use a neural network as a smooth approximation of the program target for new test case generation. In this paper, we conduct the most extensive evaluation of NPS fuzzers against standard gray-box fuzzers (>11 CPU years and >5.5 GPU years), and make the following contributions: (1) We find that the original performance claims for NPS fuzzers do not hold; a gap we relate to fundamental, implementation, and experimental limitations of prior works. (2) We contribute the first in-depth analysis of the contribution of machine learning and gradient-based mutations in NPS. (3) We implement Neuzz++, which shows that addressing the practical limitations of NPS fuzzers improves performance, but that standard gray-box fuzzers almost always surpass NPS-based fuzzers. (4) As a consequence, we propose new guidelines targeted at benchmarking fuzzing based on machine learning, and present MLFuzz, a platform with GPU access for easy and reproducible evaluation of ML-based fuzzers. Neuzz++, MLFuzz, and all our data are public.
|
arxiv:2309.16618
|
…$y$ and $\alpha = 1/p - 1/q$, thus providing a noncommutative analogue of a classical result. Furthermore, we investigate the corresponding result for noncommutative martingale Hardy spaces. Namely, there is a constant $\mathrm{C}$ depending only on $\alpha$ such that if $x = (x_k)_{k \geq 1}$ is a finite noncommutative martingale in the martingale Hardy space $\mathcal{H}_1(\mathcal{M})$ then $\|I^\alpha x\|_{\mathcal{H}_{1/(1-\alpha)}(\mathcal{M})} \leq \mathrm{C} \|x\|_{\mathcal{H}_1(\mathcal{M})}$.
|
arxiv:1501.06016
|
The study of the ground state of spinless fermions in 2D disordered clusters (Phys. Rev. Lett. {\bf 83}, 1826 (1999)) has suggested the existence of a new quantum phase for intermediate Coulomb-energy-to-kinetic-energy ratios $r_s$. Exact diagonalization of the same small clusters shows that its low-energy excitations (quantum ergodicity above a few ``hexatic'' excitations characterized by oriented currents) significantly differ from those occurring in the Fermi glass (weak $r_s$) and in the pinned Wigner crystal (large $r_s$). The ``hexatic'' excitations vanish for temperatures of the order of the Fermi temperature.
|
arxiv:cond-mat/9910271
|
This paper studies the approximate and null controllability of impulse-controlled systems of heat equations coupled by a pair $(A, B)$ of constant matrices. We present a necessary and sufficient condition for approximate controllability, which is exactly Kalman's controllability rank condition for $(A, B)$. We prove that when such a system is approximately controllable, the approximate controllability over an interval $[0, T]$ can be realized by adding controls at arbitrary $n$ different control instants $0 < \tau_1 < \tau_2 < \cdots < \tau_n < T$, provided that $\tau_n - \tau_1 < d_A$, where $d_A = \min\{\pi/|\mathrm{Im}\,\lambda| : \lambda \in \sigma(A)\}$. We also show that, in general, such systems are not null controllable.
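The Kalman rank condition invoked here is directly checkable numerically: $(A, B)$ is controllable iff the matrix $[B, AB, \ldots, A^{n-1}B]$ has full row rank $n$. A minimal sketch (our illustration, using a standard double-integrator example rather than anything from the paper):

```python
import numpy as np

def kalman_controllable(A, B):
    """Kalman rank test: (A, B) is controllable iff
    rank [B, AB, ..., A^(n-1) B] = n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# Double integrator: force input reaches both position and velocity.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
print(kalman_controllable(A, B))      # controllable

# Input acting only on the first state cannot steer the second one.
B_bad = np.array([[1.0],
                  [0.0]])
print(kalman_controllable(A, B_bad))  # not controllable
```

For the coupled heat equations in the paper, the same rank condition on the coupling pair $(A, B)$ is what characterizes approximate controllability.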
|
arxiv:1701.05717
|
Generative models operate at fixed resolution, even though natural images come in a variety of sizes. As high-resolution details are downsampled away and low-resolution images are discarded altogether, precious supervision is lost. We argue that every pixel matters and create datasets with variable-size images, collected at their native resolutions. To take advantage of varied-size data, we introduce continuous-scale training, a process that samples patches at random scales to train a new generator with variable output resolutions. First, conditioning the generator on a target scale allows us to generate higher-resolution images than previously possible, without adding layers to the model. Second, by conditioning on continuous coordinates, we can sample patches that still obey a consistent global layout, which also allows for scalable training at higher resolutions. Controlled FFHQ experiments show that our method can take advantage of multi-resolution training data better than discrete multi-scale approaches, achieving better FID scores and cleaner high-frequency details. We also train on other natural image domains including churches, mountains, and birds, and demonstrate arbitrary-scale synthesis with both coherent global layouts and realistic local details, going beyond 2K resolution in our experiments. Our project page is available at: https://chail.github.io/anyres-gan/.
|
arxiv:2204.07156
|
We present the compilation and properties of a meta-catalogue of X-ray detected clusters of galaxies, the MCXC. This very large catalogue is based on publicly available ROSAT All Sky Survey-based (NORAS, REFLEX, BCS, SGP, NEP, MACS, and CIZA) and serendipitous (160SD, 400SD, SHARC, WARPS, and EMSS) cluster catalogues. Data have been systematically homogenised to an overdensity of 500, and duplicate entries originating from overlaps between the survey areas of the individual input catalogues are carefully handled. The MCXC comprises 1743 clusters with virtually no duplicate entries. For each cluster the MCXC provides: three identifiers, a redshift, coordinates, membership of the original catalogue, and a standardised 0.1-2.4 keV band luminosity $L_{500}$, total mass $M_{500}$, and radius $R_{500}$. The meta-catalogue additionally furnishes information on overlaps between the input catalogues and the luminosity ratios when measurements from different surveys are available, and also gives notes on individual objects. The MCXC is available in electronic format for maximum usefulness in X-ray, SZ, and multi-wavelength studies.
|
arxiv:1007.1916
|
We investigate polytopes inscribed in a sphere that are normally equivalent (or strongly isomorphic) to a given polytope $P$. We show that the associated space of polytopes, called the inscribed cone of $P$, is closed under Minkowski addition. Inscribed cones are interpreted as type cones of ideal hyperbolic polytopes and as deformation spaces of Delaunay subdivisions. In particular, testing whether there is an inscribed polytope normally equivalent to $P$ is polynomial-time solvable. Normal equivalence is decided on the level of normal fans, and we study the structure of inscribed cones for various classes of polytopes and fans, including simple, simplicial, and even ones. We classify (virtually) inscribable fans in dimension $2$ as well as inscribable permutahedra and nestohedra. A second goal of the paper is to introduce inscribed virtual polytopes. Polytopes with a fixed normal fan $\mathcal{N}$ form a monoid with respect to Minkowski addition, and the associated Grothendieck group is called the type space of $\mathcal{N}$. Elements of the type space correspond to formal Minkowski differences and are naturally equipped with vertices, and hence with a notion of inscribability. We show that inscribed virtual polytopes form a subgroup, which can be non-trivial even if $\mathcal{N}$ does not have actual inscribed polytopes. We relate inscribed virtual polytopes to routed particle trajectories, that is, piecewise-linear trajectories of particles in a ball with restricted directions. The state space gives rise to connected groupoids generated by reflections, called reflection groupoids. The endomorphism groups of reflection groupoids can be thought of as discrete holonomy groups of the trajectories, and we determine when they are reflection groups.
|
arxiv:2012.07724
|
We prove quantitative sub-ballisticity for the self-avoiding walk on the hexagonal lattice. Namely, we show that with high probability a self-avoiding walk of length $n$ does not exit a ball of radius $O(n/\log n)$. Previously, only a non-quantitative $o(n)$ bound was known from the work of Duminil-Copin and Hammond \cite{DCH13}. As an important ingredient of the proof, we show that at criticality the partition function of bridges of height $T$ decays polynomially fast to $0$ as $T$ tends to infinity, which we believe to be of independent interest.
|
arxiv:2310.17299
|
The target asymmetry for electroproduction of vector mesons is investigated within the handbag approach. While the generalized parton distribution (GPD) $H$ is taken from a previous analysis of the electroproduction cross section, we here construct the GPD $E$ from double distributions and constrain it by the Pauli form factors of the nucleon, positivity bounds and sum rules. Predictions for the target asymmetry are given for various vector mesons, and we discuss how experimental data on the asymmetry will further constrain $E$ and what we may learn about the angular momenta carried by the partons.
|
arxiv:0809.4126
|
Small progress measures is one of the classical parity game solving algorithms. For games with n vertices, m edges and d different priorities, the original algorithm computes the winning regions and a winning strategy for one of the players in O(dm·(n/floor(d/2))^floor(d/2)) time. Computing a winning strategy for the other player requires a re-run of the algorithm on that player's winning region, thus increasing the runtime complexity to O(dm·(n/ceil(d/2))^ceil(d/2)) for computing the winning regions and winning strategies for both players. We modify the algorithm so that it derives the winning strategies for both players in one pass. This reduces the upper bound on strategy derivation for SPM to O(dm·(n/floor(d/2))^floor(d/2)). At the basis of our modification is a novel operational interpretation of the least progress measure, which we provide.
|
arxiv:1509.07207
|
The spectral zeta function of the quantum Rabi Hamiltonian is considered. It is shown that the spectral zeta function converges to the Riemann zeta function as the coupling constant goes to infinity. Moreover, the path measure associated with the ground state of the quantum Rabi Hamiltonian is constructed on a discontinuous path space, and several applications are shown.
|
arxiv:2405.09158
|
We argue that the worldvolume theories of D-branes probing orbifolds with discrete torsion develop, in the large quiver limit, new non-commutative directions. This provides an explicit `deconstruction' of a wide class of noncommutative theories. This also provides insight into the physical meaning of discrete torsion and its relation to the T-dual B field. We demonstrate that the strict large quiver limit reproduces the matrix theory construction of higher-dimensional D-branes, and argue that finite `fuzzy moose' theories provide novel regularizations of non-commutative theories and explicit string theory realizations of gauge theories on fuzzy tori. We also comment briefly on the relation to NCOS, (2,0) and little string theories.
|
arxiv:hep-th/0111079
|
A novel approach for a delay line interferometer (DLI) based purely on forward Bragg scattering is proposed. We have numerically and experimentally demonstrated that a Bragg grating can deliver the functionality of a DLI in its transmission mode along a single common interfering optical path, instead of the conventional DLI implementation with two interfering optical paths. As a proof of concept, a fiber Bragg grating has been designed and fabricated, showing the desired functionality in the transmission mode of the Bragg grating. The proposed "Bragg-DLI" approach is applicable to any kind of Bragg grating technology, such as volume Bragg gratings, dielectric mirrors, silicon photonics, and other optical-waveguide-based Bragg structures.
|
arxiv:1611.06794
|
We evaluated the parametric instabilities of the LCGT (Japanese interferometric gravitational wave detector project) arm cavity. The number of unstable modes of LCGT is 10 times smaller than that of Advanced LIGO (USA). Since the strength of the instabilities of LCGT depends on the mirror curvature more weakly than that of Advanced LIGO, the requirement on the mirror curvature accuracy is easier to achieve. The difference in the parametric instabilities between LCGT and Advanced LIGO arises from the thermal noise reduction methods (LCGT: cooled sapphire mirrors; Advanced LIGO: fused silica mirrors with larger laser beams), which are the main strategies of the projects. Elastic Q reduction by the barrel surface (0.2 mm thick Ta$_2$O$_5$) coating is effective to suppress instabilities in the LCGT arm cavity. Therefore, the cryogenic interferometer is a smart solution for the parametric instabilities, in addition to thermal noise and thermal lensing.
|
arxiv:0805.2385
|
In the recent past, every discipline and every industry has had its own methods of developing products, be it software development, mechanics, construction, psychology, and so on. These demarcations work fine as long as the requirements are within one discipline. However, if the project extends over several disciplines, interfaces have to be created and coordinated between the methods of these disciplines. Performance is an important quality aspect of web services because of their distributed nature. Predicting the performance of web services during early stages of software development is significant. In industry, a prototype of these applications is developed during the analysis phase of the software development life cycle (SDLC). However, performance models are generated from UML models, and methodologies for predicting the performance from UML models are available. Hence, in this paper, a methodology for developing a use case model and an activity model from the user interface is presented. The methodology is illustrated with a case study on Amazon.com.
|
arxiv:1201.2031
|
A large-scale complex system comprising many, often spatially distributed, dynamical subsystems with partial autonomy and complex interactions is called a system of systems. This paper describes an efficient algorithm for model predictive control of a class of systems of systems for which the overall objective function is the sum of convex quadratic cost functions of (locally) constrained linear subsystems that are coupled through a set of (global) linear constraints on the subsystems' coordination parameters. The proposed control algorithm is based on parametrization and splitting of the underlying optimization problem into one global coordination problem and a set of local optimization problems pertaining to individual subsystems. The local optimization problems are solved off-line, via parametric optimization, while the coordination problem is solved on-line. The properties of the local parametric solutions are utilized to solve the coordination problem very efficiently. In particular, it is shown that, for a fixed number of coupling constraints, the coordination problem can be solved with a linear-time algorithm in a finite number of iterations if all subsystems have one-dimensional coordination parameters.
|
arxiv:1804.01377
|
It is shown that the distribution of low-variability periods in the activity of human heart rate typically follows a multi-scaling Zipf's law. The presence or failure of a power law, as well as the values of the scaling exponents, are personal characteristics depending on the daily habits of the subjects. Meanwhile, the distribution function of the low-variability periods as a whole discriminates efficiently between various heart pathologies. This new technique is also applicable to other non-linear time series and reflects those aspects of the underlying intermittent dynamics which are not covered by other methods of linear and nonlinear analysis.
|
arxiv:physics/0110075
|
The status of the theory of color confinement is discussed.
|
arxiv:hep-lat/0610102
|
A transformation of gamma max-infinitely divisible laws, viz. geometric gamma max-infinitely divisible laws, is considered in this paper. Some of its distributional and divisibility properties are discussed, and a random-time-changed extremal process corresponding to this distribution is presented. A new kind of invariance (stability) under geometric maxima is proved, and a max-AR(1) model corresponding to it is also discussed.
|
arxiv:0801.2083
|
With the rapid development of Internet technology and the widespread popularity of Internet applications, online activities have gradually become an indispensable part of people's daily life. The original recommendation learning algorithm is mainly based on user/micro-video interactions, modeling the user-micro-video connection relationship, which makes it difficult to capture the more complex relationships between nodes. To address these problems, we propose a personalized recommendation model based on a graph neural network, which exploits the ability of graph neural networks to tap deep information in graph data more effectively. It transforms the input user rating information and item side information into a graph structure for effective feature extraction, based on an importance sampling strategy. The importance-based sampling strategy measures the importance of neighbor nodes to the central node by calculating the relationship tightness between the neighbor nodes and the central node, and selects neighbor nodes for the recommendation task based on their importance level, which makes it possible to select the sampling neighbors with the most influence on the target micro-video nodes in a more targeted way. The pooling aggregation strategy, on the other hand, trains the aggregation weights by feeding the neighborhood node features into a fully connected layer before aggregating them, then introduces a pooling layer for feature aggregation, and finally combines the obtained neighborhood aggregation features with the target node itself. This directly introduces a symmetric trainable function that fuses the neighborhood weight training into the model, so as to better capture the differential features of different neighborhood nodes in a learnable manner and allow a more accurate representation of the current node's features.
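The importance-based neighbor sampling described above can be sketched in a few lines. The scoring of "relationship tightness" is not specified in the abstract, so the tightness dictionary and function name below are illustrative stand-ins:

```python
import random

def importance_sample_neighbors(neighbors, tightness, k, rng):
    """Pick k neighbors of a central node without replacement, with
    probability proportional to an importance (tightness) score.
    The tightness scores here are placeholders for whatever relationship
    measure the model computes."""
    pool = list(neighbors)
    weights = [tightness[v] for v in pool]
    chosen = []
    for _ in range(min(k, len(pool))):
        r = rng.random() * sum(weights)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                chosen.append(pool.pop(i))
                weights.pop(i)
                break
        else:  # guard against floating-point shortfall
            chosen.append(pool.pop())
            weights.pop()
    return chosen

rng = random.Random(0)
picked = importance_sample_neighbors(
    [1, 2, 3, 4], {1: 0.1, 2: 0.5, 3: 0.2, 4: 0.2}, k=2, rng=rng)
```

High-tightness neighbors are picked more often across repeated draws, which is the targeting effect the abstract describes.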
|
arxiv:2205.10588
|
PHENIX has a well-defined program for measuring the polarized gluon distribution in the nucleon. We measure the gluon polarization in the proton with polarized $p$-$p$ collisions at PHENIX. The measurements of gluon polarization via direct-photon production and heavy-flavor production can be significantly improved by the silicon vertex tracker upgrade. We have studied the possible improvements of the gluon polarization measurements using Monte Carlo simulation, and they are shown and discussed in this paper.
|
arxiv:hep-ex/0501047
|
Recent advances in zero-shot and few-shot learning have shown promise for a scope of research and practical purposes. However, this fast-growing area lacks standardized evaluation suites for non-English languages, hindering progress outside the Anglo-centric paradigm. To address this line of research, we propose TAPE (Text Attack and Perturbation Evaluation), a novel benchmark that includes six more complex NLU tasks for Russian, covering multi-hop reasoning, ethical concepts, logic and commonsense knowledge. TAPE's design focuses on systematic zero-shot and few-shot NLU evaluation: (i) linguistically oriented adversarial attacks and perturbations for analyzing robustness, and (ii) subpopulations for nuanced interpretation. A detailed analysis of testing the autoregressive baselines indicates that simple spelling-based perturbations affect performance the most, while paraphrasing the input has a more negligible effect. At the same time, the results demonstrate a significant gap between the neural and human baselines for most tasks. We publicly release TAPE (tape-benchmark.com) to foster research on robust LMs that can generalize to new tasks when little to no supervision is available.
|
arxiv:2210.12813
|
Driven by the interplay among artificial intelligence, digital twins, and wireless networks, 6G is envisaged to go beyond data-centric services to provide intelligent and immersive experiences. To efficiently support intelligent tasks with customized service requirements, it becomes critical to develop novel information compression and transmission technologies, which typically involve coupled sensing, communication, and computation processes. To this end, task-oriented communication stands out as a disruptive technology for 6G system design by exploiting task-specific information structures and folding the communication goals into the design of task-level transmission strategies. In this article, by developing task-oriented information extraction and network resource orchestration strategies, we demonstrate the effectiveness of task-oriented communication principles for typical intelligent tasks, including federated learning, edge inference, and semantic communication.
|
arxiv:2303.10920
|
Using the Swift data of GRB 050315, we make progress on the uniqueness of our theoretically predicted gamma-ray burst (GRB) structure, composed of a proper GRB (P-GRB), emitted at the transparency of an electron-positron plasma with suitable baryon loading, and an afterglow comprising the so-called "prompt emission" due to external shocks. Thanks to the Swift observations, we can theoretically fit detailed light curves for selected energy bands on a continuous time scale ranging over 10^6 seconds. The theoretically predicted instantaneous spectral distribution over the entire afterglow confirms a clear hard-to-soft behavior encompassing, continuously, the "prompt emission" all the way to the latest phases of the afterglow. Consequences of the instrumental threshold on the definition of "short" and "long" GRBs are discussed.
|
arxiv:0705.2453
|
We study conformal field theory with the symmetry algebra $\mathcal{A}(2,p) = \hat{\mathfrak{gl}}(n)_2/\hat{\mathfrak{gl}}(n-p)_2$. In order to support the conjecture that this algebra acts on the moduli space of instantons on $\mathbb{C}^2/\mathbb{Z}_p$, we calculate the characters of its representations and check their coincidence with the generating functions of the fixed points of the moduli space of instantons. We show that the algebra $\mathcal{A}(2,p)$ can be realized in two ways. The first realization is connected with the cross-product of $p$ Virasoro and $p$ Heisenberg algebras: $\mathcal{H}^p \times \mathrm{Vir}^p$. The second realization is connected with $\mathcal{H}^p \times \hat{\mathfrak{sl}}(p)_2 \times (\hat{\mathfrak{sl}}(2)_p \times \hat{\mathfrak{sl}}(2)_{n-p}/\hat{\mathfrak{sl}}(2)_n)$. The equivalence of these two realizations provides a non-trivial identity for the characters of $\mathcal{A}(2,p)$. The moduli space of instantons on $\mathbb{C}^2/\mathbb{Z}_p$ admits two different compactifications. This leads to two different bases for the representations of $\mathcal{A}(2,p)$. We use this fact to explain the existence of two forms of the instanton pure partition functions.
|
arxiv:1306.3938
|
Video quality assessment (VQA) is essential for quantifying perceptual quality in various video processing workflows, spanning from camera capture systems to over-the-top streaming platforms. While recent supervised VQA models have made substantial progress, the reliance on manually annotated datasets -- a process that is labor-intensive, costly, and difficult to scale up -- has hindered further optimization of their generalization to unseen video content and distortions. To bridge this gap, we introduce a self-supervised learning framework for VQA to learn quality assessment capabilities from large-scale, unlabeled web videos. Our approach leverages a \textbf{learning-to-rank} paradigm to train a large multimodal model (LMM) on video pairs automatically labeled in two ways: quality pseudo-labeling by existing VQA models, and relative quality ranking based on synthetic distortion simulations. Furthermore, we introduce a novel \textbf{iterative self-improvement training strategy}, where the trained model acts as an improved annotator to iteratively refine the annotation quality of the training data. By training on a dataset $10\times$ larger than the existing VQA benchmarks, our model: (1) achieves zero-shot performance on in-domain VQA benchmarks that matches or surpasses supervised models; (2) demonstrates superior out-of-distribution (OOD) generalization across diverse video content and distortions; and (3) sets a new state of the art when fine-tuned on human-labeled datasets. Extensive experimental results validate the effectiveness of our self-supervised approach in training generalized VQA models. The datasets and code will be publicly released to facilitate future research.
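The learning-to-rank idea on automatically ordered video pairs reduces to a standard pairwise loss. The logistic form below is one common choice, used here for illustration; the paper's exact objective may differ:

```python
import math

def pairwise_rank_loss(score_better, score_worse):
    """Logistic pairwise ranking loss on a video pair whose relative
    quality order comes from pseudo-labels or synthetic distortions:
    the loss shrinks as the model scores the better clip higher."""
    return math.log1p(math.exp(-(score_better - score_worse)))

# A correctly ordered pair is penalized less than an inverted one.
print(pairwise_rank_loss(3.0, 1.0) < pairwise_rank_loss(1.0, 3.0))  # True
```

With equal scores the loss is log 2, so the model is pushed to separate the pair in the pseudo-labeled direction.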
|
arxiv:2505.03631
|
The big model has emerged as a new research paradigm that can be applied to various downstream tasks with only minor effort for domain adaptation. Correspondingly, this study tackles camouflaged object detection (COD) leveraging the Segment Anything Model (SAM). Previous studies declared that SAM is not workable for COD, but this study reveals that SAM works if promoted properly, for which we devise a new framework to render point promotions: first, we develop the Promotion Point Targeting Network (PPT-net) to leverage multi-scale features in predicting the probabilities of camouflaged objects' presence at given candidate points over the image. Then, we develop a Key Point Selection (KPS) algorithm to deploy both positive and negative point promotions contrastively to SAM to guide the segmentation. It is the first work to facilitate a big model for COD, and it achieves plausible results experimentally over the existing methods on 3 datasets under 6 metrics. This study demonstrates an off-the-shelf methodology for COD by leveraging SAM, which gains an advantage over designing professional models from scratch, not only in performance, but also in turning the problem into a less challenging task, that is, seeking informative but not exactly precise promotions.
|
arxiv:2505.09123
|
Self-supervised speech representation learning aims to extract meaningful factors from the speech signal that can later be used across different downstream tasks, such as speech and/or emotion recognition. Existing models, such as HuBERT, however, can be fairly large and thus may not be suitable for edge speech applications. Moreover, realistic applications typically involve speech corrupted by noise and room reverberation, hence models need to provide representations that are robust to such environmental factors. In this study, we build on the so-called DistilHuBERT model, which distils HuBERT to a fraction of its original size, with three modifications, namely: (i) augment the training data with noise and reverberation, while the student model needs to distill the clean representations from the teacher model; (ii) introduce a curriculum learning approach where increasing levels of noise are introduced as the model trains, thus helping with convergence and with the creation of more robust representations; and (iii) introduce a multi-task learning approach where the model also reconstructs the clean waveform jointly with the distillation task, thus also acting as an enhancement step to ensure additional environmental robustness of the representation. Experiments on three SUPERB tasks show the advantages of the proposed method not only relative to the original DistilHuBERT, but also to the original HuBERT, thus showing the advantages of the proposed method for ``in the wild'' edge speech applications.
|
arxiv:2211.06562
|
In the field of algorithmic fairness, significant attention has been put on group fairness criteria, such as demographic parity and equalized odds. Nevertheless, these objectives, measured as global averages, have raised concerns about persistent local disparities between sensitive groups. In this work, we address the problem of local fairness, which ensures that the predictor is unbiased not only in terms of expectations over the whole population, but also within any subregion of the feature space, unknown at training time. To enforce this objective, we introduce ROAD, a novel approach that leverages the distributionally robust optimization (DRO) framework within a fair adversarial learning objective, where an adversary tries to infer the sensitive attribute from the predictions. Using an instance-level re-weighting strategy, ROAD is designed to prioritize inputs that are likely to be locally unfair, i.e. where the adversary faces the least difficulty in reconstructing the sensitive attribute. Numerical experiments demonstrate the effectiveness of our method: it achieves Pareto dominance with respect to local fairness and accuracy for a given global fairness level across three standard datasets, and also enhances fairness generalization under distribution shift.
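The instance-level re-weighting idea can be sketched concretely: give the largest training weights to inputs where the adversary's reconstruction loss is lowest. The softmax-style form and temperature below are one plausible instantiation for illustration, not the paper's exact formula:

```python
import math

def adversary_based_weights(adv_losses, tau=1.0):
    """Normalized exp(-loss/tau) weights: inputs on which the adversary
    reconstructs the sensitive attribute most easily (low adversary loss)
    receive the largest weights, prioritizing likely-locally-unfair regions."""
    w = [math.exp(-loss / tau) for loss in adv_losses]
    total = sum(w)
    return [x / total for x in w]

weights = adversary_based_weights([0.1, 1.0, 2.5])
print(weights[0] > weights[1] > weights[2])  # True
```

The temperature tau controls how sharply the training signal concentrates on the worst-off subregions, which is the knob a DRO formulation effectively tunes.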
|
arxiv:2310.18413
|
The epidemic curve and the final extent of the COVID-19 pandemic are usually predicted from the rate of early exponential rise using the SIR model. These predictions implicitly assume full social mixing, which is not generally plausible. Here I show a counterexample to these predictions, based on random propagation of an epidemic in Barab\'asi--Albert scale-free network models. The start of the epidemic suggests $R_0 = 2.6$, but unlike the $\Omega \approx 70\%$ predicted by the SIR model, the networks reach a final extent of only $\Omega \approx 4\%$ without external mitigation and $\Omega \approx 0.5$--$1.5\%$ with mitigation. The daily infection rate at the peak is also 1--1.5 orders of magnitude less than in SIR models. Quarantining only the 1.5\% most active superspreaders has a similar effect on the extent and peak infection rate as blindly quarantining a random 50\% of the full community.
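The setup is easy to reproduce in miniature: grow a Barabási–Albert graph by preferential attachment and run a discrete-time SIR epidemic on it. The graph size and the beta/gamma values below are illustrative choices, not the paper's parameters:

```python
import random

def barabasi_albert_edges(n, m, rng):
    """Grow a Barabasi-Albert scale-free graph: each new node attaches to
    m existing nodes chosen proportionally to their current degree."""
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    stubs = [v for e in edges for v in e]  # degree-weighted node pool
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((t, new))
            stubs += [t, new]
    return edges

def sir_on_graph(edges, n, beta, gamma, rng, seed_node=0):
    """Discrete-time SIR: each step, every infected node infects each
    susceptible neighbor with prob. beta and recovers with prob. gamma.
    Returns the final fraction of ever-infected (recovered) nodes."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    state = ["S"] * n
    state[seed_node] = "I"
    infected = {seed_node}
    while infected:
        new_inf, recovered = set(), set()
        for u in infected:
            for v in adj[u]:
                if state[v] == "S" and rng.random() < beta:
                    new_inf.add(v)
            if rng.random() < gamma:
                recovered.add(u)
        for v in new_inf:
            state[v] = "I"
        for u in recovered:
            state[u] = "R"
        infected = (infected - recovered) | new_inf
    return state.count("R") / n

rng = random.Random(1)
edges = barabasi_albert_edges(500, 2, rng)
extent = sir_on_graph(edges, 500, beta=0.1, gamma=0.3, rng=rng)
```

Comparing `extent` against the homogeneous-mixing SIR prediction for the same early growth rate is exactly the kind of discrepancy the abstract reports.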
|
arxiv:2004.00067
|
The kinetics of magnetization reversal of stripe-shaped permalloy-niobium hybrid nanofilms is studied in the 6-300 K temperature range by means of a magneto-optical visualization technique. The influence of niobium on the type of magnetic domain walls and on the magnetic domain structure of permalloy, via the interface quality and via the distortion of stray fields, is found. A memory effect, in which the superconducting niobium remembers the initial magnetic domain structure of the permalloy upon cooling below $T_c$, is found. The memory is erased only by heating the hybrid above $T_c$.
|
arxiv:1112.2359
|
A streamflow time series encompasses a large amount of hidden information, and reliable prediction of its future behavior remains a challenge. It seems that the use of information measures can significantly contribute to determining the time horizon of rivers and improving predictability. Using the Kolmogorov complexity (KC) and its derivatives (the KC spectrum and its highest value), and the Lyapunov exponent (LE), it has previously been shown that the degree of streamflow predictability depends on human activities, environmental factors, and natural characteristics. This paper applies the KC and LE measures to investigate the randomness and chaotic behavior of the monthly streamflow of 1879 rivers from the United States for the period from 1950 to 2015 and evaluates their time horizons via the Lyapunov and Kolmogorov times (LT and KT, respectively).
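Kolmogorov complexity itself is uncomputable, so studies like this one estimate it from a compression-style parsing of a coarse-grained series. The LZ78-style phrase count below is a common proxy, shown here as a sketch; the paper's exact estimator may differ:

```python
def lz78_phrase_count(symbols):
    """Count phrases in an incremental (LZ78-style) parsing of a symbol
    string; more phrases indicate a more complex, less predictable series."""
    phrases, w, count = set(), "", 0
    for ch in symbols:
        w += ch
        if w not in phrases:
            phrases.add(w)
            count += 1
            w = ""
    return count

def binarize(series):
    """Coarse-grain a numeric series to a 0/1 string about its median."""
    s = sorted(series)
    median = s[len(s) // 2]
    return "".join("1" if x > median else "0" for x in series)

# An alternating series parses into more phrases than a constant one.
print(lz78_phrase_count("0101010101"), lz78_phrase_count("0000000000"))  # 5 4
```

Normalizing the phrase count by its maximum for the string length gives a complexity in [0, 1] that can be compared across rivers.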
|
arxiv:2301.11983
|
The goal of the paper is to analyze a Gaudin model for a polynomial representation of the Kohno-Drinfeld Lie algebra associated with the multinomial distribution. The main result is the construction of an explicit basis of the space of polynomials consisting of common eigenfunctions of Gaudin operators in terms of Aomoto-Gelfand hypergeometric series. The construction shows that the polynomials in this basis are also common eigenfunctions of the operators for a dual Gaudin model acting on the degree indices, and therefore they provide a solution to a multivariate discrete bispectral problem.
|
arxiv:2303.08206
|
…$1.5 \times 10^9\,M_\odot$. Along with the stellar mass of $3 \times 10^{11}\,M_\odot$, these give a black hole-to-bulge mass ratio of $M_{\rm BH}/M_{\rm bulge} \gtrsim 0.005$. This is in agreement with studies on the evolution of the $M_{\rm BH}/M_{\rm bulge}$ relationship at high redshifts, which find a departure from the local value of ~0.002.
|
arxiv:1204.5480
|
In this paper we focus on landscape animation, which aims to generate time-lapse videos from a single landscape image. Motion is crucial for landscape animation, as it determines how objects move in videos. Existing methods are able to generate appealing videos by learning motion from real time-lapse videos. However, current methods suffer from inaccurate motion generation, which leads to unrealistic video results. To tackle this problem, we propose a model named FGLA to generate high-quality and realistic videos by learning a fine-grained motion embedding for landscape animation. Our model consists of two parts: (1) a motion encoder which embeds time-lapse motion in a fine-grained way; (2) a motion generator which generates realistic motion to animate input images. To train and evaluate on diverse time-lapse videos, we build the largest high-resolution time-lapse video dataset with diverse scenes, namely Time-Lapse-D, which includes 16,874 video clips with over 10 million frames. Quantitative and qualitative experimental results demonstrate the superiority of our method. In particular, our method achieves relative improvements of 19% on LPIPS and 5.6% on FVD compared with state-of-the-art methods on our dataset. A user study carried out with 700 human subjects shows that our approach visually outperforms existing methods by a large margin.
|
arxiv:2109.02216
|
The kinematics of the decay of a bound proton is governed by the proton spectral function. We evaluate this quantity in $^{16}$O using information from nuclear physics experiments. It also includes a correlated part. The reliability of this evaluation is sufficient to open the possibility of correlated cuts in the missing mass and momentum variables in order to identify the decay events from bound protons, with a possible increase of the signal-to-noise ratio.
|
arxiv:1002.3301
|
As observational datasets become larger and more complex, so too are the questions being asked of these data. Data simulations, i.e., synthetic data with properties (pixelization, noise, PSF, artifacts, etc.) akin to real data, are therefore increasingly required for several purposes, including: (1) testing complicated measurement methods, (2) comparing models and astrophysical simulations to observations in a manner that requires as few assumptions about the data as possible, (3) predicting observational results based on models and astrophysical simulations for, e.g., proposal planning, and (4) mitigating risk for future observatories and missions by effectively priming and testing pipelines. We advocate for an increase in using synthetic data to plan for and interpret real observations as a matter of routine. This will require funding for (1) facilities to provide robust data simulators for their instruments, telescopes, and surveys, and (2) making synthetic data publicly available in archives (much like real data) so as to lower the barrier of entry to all.
|
arxiv:1907.07184
|
looped transformers have shown exceptional neural algorithmic reasoning capability in simulating traditional graph algorithms, but their application to more complex structures like hypergraphs remains underexplored. hypergraphs generalize graphs by modeling higher-order relationships among multiple entities, enabling richer representations but introducing significant computational challenges. in this work, we extend the looped transformer architecture's neural algorithmic reasoning capability to simulate hypergraph algorithms, addressing the gap between neural networks and combinatorial optimization over hypergraphs. specifically, we propose a novel degradation mechanism for reducing hypergraphs to graph representations, enabling the simulation of graph-based algorithms such as dijkstra's shortest path. furthermore, we introduce a hyperedge-aware encoding scheme to simulate hypergraph-specific algorithms, exemplified by helly's algorithm. we establish theoretical guarantees for these simulations, demonstrating the feasibility of processing high-dimensional and combinatorial data using looped transformers. this work highlights the potential of transformers as general-purpose algorithmic solvers for structured data.
|
arxiv:2501.10688
|
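the abstract above describes reducing a hypergraph to an ordinary graph so that standard graph algorithms such as dijkstra's can be run on it. a minimal sketch of that idea is shown below, using clique expansion as the reduction — an assumption for illustration, since the abstract does not spell out the paper's specific degradation mechanism:

```python
import heapq

def hypergraph_to_graph(vertices, hyperedges):
    # Clique expansion: every pair of vertices that shares a hyperedge
    # becomes an edge, weighted by the (minimum) weight of such hyperedges.
    adj = {v: {} for v in vertices}
    for members, w in hyperedges:
        for u in members:
            for v in members:
                if u != v:
                    adj[u][v] = min(adj[u].get(v, float("inf")), w)
    return adj

def dijkstra(adj, source):
    # Standard Dijkstra shortest-path over the reduced graph.
    dist = {v: float("inf") for v in adj}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

for example, with a weight-1 hyperedge {a, b, c} and a weight-2 hyperedge {c, d}, the shortest a-to-d distance in the reduced graph is 3.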
s. koumandos and s. ruscheweyh posed the following conjecture: for $\rho \in (0,1]$ and $0 < \mu \leq \mu^{\ast}(\rho)$, the partial sum $s_n^{\mu}(z) = \displaystyle\sum_{k=0}^{n} \frac{(\mu)_k}{k!} z^k$, $0 < \mu \leq 1$, $|z| < 1$, satisfies \begin{align*} (1-z)^{\rho}\, s_n^{\mu}(z) \prec \left(\frac{1+z}{1-z}\right)^{\rho}, \qquad n \in \mathbb{N}, \end{align*} where $\mu^{\ast}(\rho)$ is the unique solution of \begin{align*} \int_0^{(\rho+1)\pi} \sin(t - \rho\pi)\, t^{\mu-1}\, dt = 0. \end{align*} this conjecture is already settled for $\rho = \frac{1}{2}$, $\frac{1}{4}$, $\frac{3}{4}$ and $\rho = 1$. in this work, we validate this conjecture for an open neighbourhood of $\rho = \frac{1}{3}$ and in a weaker form for $\rho = \frac{2}{3}$. the particular value of the conjecture leads to several consequences related to starlike functions.
|
arxiv:1806.06999
|
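the constant $\mu^{\ast}(\rho)$ in the abstract above is defined implicitly by a sign change of the integral $\int_0^{(\rho+1)\pi} \sin(t-\rho\pi)\, t^{\mu-1}\, dt$. a small numerical sketch can locate it: the substitution $u = t^{\mu}$ removes the integrable singularity at $t = 0$, the integral is negative for small $\mu$ (the singular negative part near $t=0$ dominates) and equals $1 + \cos(\rho\pi) > 0$ at $\mu = 1$, so bisection applies. the quadrature scheme and bracketing interval are illustrative choices, not taken from the paper:

```python
import math

def g(mu, rho, n=8001):
    # Integral from the definition of mu*(rho), after substituting u = t^mu:
    #   (1/mu) * ∫_0^{((rho+1)*pi)^mu} sin(u^{1/mu} - rho*pi) du,
    # evaluated with the composite trapezoid rule.
    upper = ((rho + 1.0) * math.pi) ** mu
    h = upper / (n - 1)
    total = 0.0
    for i in range(n):
        u = i * h
        f = math.sin(u ** (1.0 / mu) - rho * math.pi)
        w = 0.5 if i in (0, n - 1) else 1.0  # trapezoid endpoint weights
        total += w * f
    return total * h / mu

def mu_star(rho, lo=0.05, hi=1.0):
    # Bisection: g(lo) < 0, g(1) = 1 + cos(rho*pi) > 0 for rho < 1.
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if g(mid, rho) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

for $\rho = \frac{1}{3}$ (the case treated in the paper), the returned root lies in $(0,1)$ and drives the integral to zero within quadrature accuracy.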
we establish a discrete analog of the rényi entropy comparison due to bobkov and madiman. for log-concave variables on the integers, the min entropy is within log e of the usual shannon entropy. additionally, we investigate the entropic rogers-shephard inequality studied by madiman and kontoyiannis, and establish a sharp rényi version for certain parameters in both the continuous and discrete cases.
|
arxiv:2005.10930
|
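the comparison in the abstract above — for a log-concave variable on the integers, shannon entropy exceeds min entropy by at most log e (i.e., at most 1 nat) — can be illustrated numerically. the geometric distribution is used here as a standard example of a log-concave law on the integers; this is an illustration of the stated bound, not the paper's proof:

```python
import math

def shannon_entropy(pmf):
    # Shannon entropy in nats.
    return -sum(p * math.log(p) for p in pmf if p > 0)

def min_entropy(pmf):
    # Min entropy in nats: -log of the largest point mass.
    return -math.log(max(pmf))

def geometric_pmf(p, n_terms=10000):
    # Geometric distribution on {0, 1, 2, ...}, a log-concave law on the
    # integers; truncated far enough out that the tail mass is negligible.
    return [p * (1.0 - p) ** k for k in range(n_terms)]

for p in (0.9, 0.5, 0.1, 0.01):
    pmf = geometric_pmf(p)
    gap = shannon_entropy(pmf) - min_entropy(pmf)
    # Gap stays within log e = 1 nat, as the comparison states.
    assert 0.0 <= gap <= 1.0
```

as the success probability shrinks, the gap approaches the extremal value of 1 nat (e.g., it is about 0.995 nats at p = 0.01), consistent with the bound being tight.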
with the widespread popularity of internet celebrity marketing all over the world, short video production has gradually become a popular way of presenting product information. however, traditional video production usually includes a series of procedures such as script writing, video filming in a professional studio, video clipping, special effects rendering, customized post-processing, and so forth, not to mention that multilingual videos are not accessible to those who do not speak multiple languages. these complicated procedures usually need a professional team to complete, which makes short video production costly in both time and money. this paper presents an intelligent system, namely virbo, that supports the automatic generation of talking avatar videos. from simply a user-specified script, virbo uses a deep generative model to generate a target talking video. meanwhile, the system also supports multimodal inputs to customize the video with a specified face, a specified voice and special effects. the system also integrates a multilingual customization module that supports generating multilingual talking avatar videos in batches, with hundreds of delicate templates and creative special effects. through a series of user studies and demo tests, we found that virbo can generate talking avatar videos whose quality matches those from a professional team while reducing the entire production cost significantly. this intelligent system will effectively promote the video production industry and facilitate internet marketing regardless of language barriers and cost challenges.
|
arxiv:2403.11700
|
two degenerate (one bosonic and one fermionic) vacua.
|
arxiv:1610.07205
|
families of codes such as group codes, constacyclic codes and skew cyclic codes, some of which were independently suggested in the literature, turn out to be special instances of the general family of crossed product codes. the hamming metric is a main feature of ambient code spaces which is used to evaluate the efficiency of their various codes. this note aims at classifying the ambient spaces of crossed products up to hamming isometry. we establish a criterion for two crossed products of a group g over a base ring r to be isometric in terms of a certain g-automorphism action on the second cohomology of g with coefficients in r*. this classification is demonstrated by two families of examples, namely crossed products of cyclic groups over finite fields, and twisted group algebras of elementary abelian groups over the complex field and over finite fields. we also determine when crossed products belonging to these families are (relative) semisimple, and in particular, when they admit only trivial codes.
|
arxiv:1708.01854
|
in this article, we illustrate the scaling properties of a family of solutions for $n$ attractive bosonic atoms in the limit of large $n$. these solutions represent the quantized dynamics of solitonic degrees of freedom in atomic droplets. in dimensions lower than two, or $d = 2-\epsilon$, we demonstrate that the number of isotropic droplet states scales as $n^{3/2}/\epsilon^{1/2}$, and for $\epsilon = 0$, or $d = 2$, scales as $n^2$. the ground state energies scale as $n^{2/\epsilon+1}$ in $d = 2-\epsilon$, and when $d = 2$, scale as an exponential function of $n$. we obtain the universal energy spectra and the generalized tjon relation; their scaling properties are uniquely determined by the asymptotic freedom of quantum bosonic fields at short distances, a distinct feature in low dimensions. we also investigate the effect of quantum loop corrections that arise from various virtual processes and show that the resultant lifetime for a wide range of excited states scales as $n^{\epsilon/2} e^{1-\epsilon/2}$.
|
arxiv:1406.0519
|
specific heat data for the quasi-one-dimensional quantum magnet copper benzoate (cu(c$_6$d$_5$coo)$_2 \cdot 3$d$_2$o) is analyzed in the framework of an effective low-energy description in terms of a sine-gordon theory.
|
arxiv:cond-mat/9811309
|