text (string, lengths 1 to 3.65k) | source (string, lengths 15 to 79)
in the present work, the effect of near-degeneracy on rotational petersen diagrams (rpd) is analysed. seismic models are computed considering rotation effects on both equilibrium models and adiabatic oscillation frequencies (including second-order near-degeneracy effects). contamination of coupled modes and coupling strength on the first radial modes are studied in detail. analysis of relative intrinsic amplitudes of near-degenerate modes reveals that the identity of the fundamental radial mode and its coupled quadrupole pair is almost unaltered once near-degeneracy effects are considered. however, for the first overtone, a mixed radial/quadrupole identity is always predicted. the effect of near-degeneracy on the oscillation frequencies becomes critical for rotational velocities larger than 15-20 km/s, for which large wriggles in the evolution of the period ratios are obtained (up to $10^{-2}$). such wriggles imply uncertainties, in terms of metallicity determinations using rpd, reaching up to 0.50 dex, which can be critical for pop. i hads (high-amplitude $\delta$ scuti stars). in terms of mass determinations, uncertainties reaching up to 0.5 $m_\odot$ are predicted. the location of such wriggles is found to be independent of metallicity and rotational velocity, and is governed mainly by the avoided-crossing phenomenon.
arxiv:0709.2006
we consider scaling of the mean square dipole moments in a plasma with logarithmic interactions in a two - and three - dimensional system. in both cases, we establish the existence of a low - temperature regime where the mean square dipole moment does not scale with system size and a high - temperature regime where it does scale with system size. thus, there is a nonanalytic change in the polarizability of the system as a function of temperature, and hence a metal - insulator transition in both cases. the relevance of this transition in three dimensions to quantum phase transitions in 2 + 1 - dimensional systems is briefly discussed.
arxiv:cond-mat/0311524
the main goal of this study is to provide a comparison of libraries using machine learning methods. experts in natural language processing (nlp) are becoming more and more interested in sentiment analysis (sa) of text. the objective of employing nlp text analysis techniques is to recognize and categorize feelings expressed in twitter users' utterances. this examination also looks at issues with sa and the libraries utilized, and surveys a number of cooperative methods to classify emotional polarity. according to recent research, the naive bayes classifier, decision tree classifier, maxent classifier, sklearn classifier, sklearn multinomialnb classifier, and other conjoint learning algorithms are very effective. five python and r libraries (nltk, textblob, vader, transformers with gpt and bert pretrained models, and tidytext) are used in the study to apply sentiment analysis techniques. four machine learning models, decision tree (dt), support vector machine (svm), naive bayes (nb), and k-nearest neighbor (knn), are also used. a comparative study was carried out to evaluate how well libraries for sa operate in the social network environment. the measures used to assess the best algorithms in this experiment, which used a single data set for each method, were precision, recall, and f1 score. we conclude that the bert transformer method, with an accuracy of 0.973, is recommended for sentiment analysis.
arxiv:2307.14311
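The abstract above (arXiv:2307.14311) names NLTK, TextBlob, VADER, Transformers, and tidytext as the libraries under comparison. As a hedged illustration only (not the authors' code; the tweet text below is invented), this is how two of the Python libraries are typically invoked:

```python
# Illustrative sketch: scoring one (made-up) tweet with NLTK's VADER and with TextBlob.
# Assumes `nltk` and `textblob` are installed; downloads the VADER lexicon if missing.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from textblob import TextBlob

nltk.download("vader_lexicon", quiet=True)

tweet = "The new update is great, but the app still crashes sometimes."

# VADER returns neg/neu/pos components plus a compound score in [-1, 1].
vader_scores = SentimentIntensityAnalyzer().polarity_scores(tweet)

# TextBlob returns polarity in [-1, 1] and subjectivity in [0, 1].
blob = TextBlob(tweet)

print("VADER:", vader_scores)
print("TextBlob polarity:", blob.sentiment.polarity)
```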
now that pev neutrinos have been discovered by icecube, we optimistically entertain the possibility that neutrinos with energy above 100 pev exist. we evaluate the dependence of event rates of such neutrinos on the neutrino - nucleon cross section at observatories that detect particles, atmospheric fluorescence, or cherenkov radiation, initiated by neutrino interactions. we consider how ( i ) a simple scaling of the total standard model neutrino - nucleon cross section, ( ii ) a new elastic neutral current interaction, and ( iii ) a new completely inelastic interaction, individually impact event rates.
arxiv:1502.06337
in the field of spin - controlled semiconductor lasers, massive effort has been focused upon materials with long spin relaxation times ( ~ ns ). in contrast, we demonstrate room - temperature spin - polarized ultrafast pulsed lasing in ingaas quantum wells ( ~ 10 ps ) embedded within a gaas microcavity. the microcavity studied here is similar to vertical - cavity surface - emitting lasers ( vcsel ) used in optical communication. unlike a vcsel, the present polariton laser has nonlinear output and energy shifts owing to the mixing of the free - carrier polarization and cavity light field. at room temperature, we observe features resembling those in exciton - polariton condensates at cryogenic temperatures, including the spontaneous build - up of spatial coherence, macroscopic occupation, and spin polarization. our results should stimulate activities to exploit spin - orbit interaction and many - body effects for fundamental studies of quantum light - matter fluids and developments of spin - dependent optoelectronic devices.
arxiv:1310.0882
we introduce a notion of genuine distributed coherence. such a notion is based on the possibility of concentrating on individual systems the coherence present in a distributed system, by making use of incoherent unitary transformations. we define an entropic quantifier of genuine distributed multipartite coherence for generic mixed states, and we focus on the bipartite pure - state case. in the latter case we derive necessary and sufficient conditions for the possibility of fully localizing the coherence, hence identifying the conditions for genuine distributed bipartite coherence. we analyze in detail the quantitative problem for the case of two - qubit pure states, identifying the states with the largest amount of genuine distributed coherence. interestingly, such states do not have maximal global coherence nor maximal coherence rank.
arxiv:1801.03919
similarity matching is a core operation in siamese trackers. most siamese trackers carry out similarity learning via cross correlation that originates from the image matching field. however, unlike 2 - d image matching, the matching network in object tracking requires 4 - d information ( height, width, channel and time ). cross correlation neglects the information from channel and time dimensions, and thus produces ambiguous matching. this paper proposes a spatio - temporal matching process to thoroughly explore the capability of 4 - d matching in space ( height, width and channel ) and time. in spatial matching, we introduce a space - variant channel - guided correlation ( svc - corr ) to recalibrate channel - wise feature responses for each spatial location, which can guide the generation of the target - aware matching features. in temporal matching, we investigate the time - domain context relations of the target and the background and develop an aberrance repressed module ( arm ). by restricting the abrupt alteration in the interframe response maps, our arm can clearly suppress aberrances and thus enables more robust and accurate object tracking. furthermore, a novel anchor - free tracking framework is presented to accommodate these innovations. experiments on challenging benchmarks including otb100, vot2018, vot2020, got - 10k, and lasot demonstrate the state - of - the - art performance of the proposed method.
arxiv:2105.02408
we present the first observations of the transverse component of photospheric magnetic field acquired by the imaging magnetograph sunrise / imax. using an automated detection method, we obtain statistical properties of 4536 features with significant linear polarization signal. their rate of occurrence is 1 - 2 orders of magnitude larger than values reported by previous studies. we show that these features have no characteristic size or lifetime. they appear preferentially at granule boundaries with most of them being caught in downflow lanes at some point in their evolution. only a small percentage are entirely and constantly embedded in upflows ( 16 % ) or downflows ( 8 % ).
arxiv:1008.1535
with the extensive use of vision-language models in various downstream tasks, evaluating their robustness is crucial. in this paper, we propose a benchmark for assessing the robustness of vision-language models. we believe that a robust model should properly understand both linguistic and visual semantics and be resilient to explicit variations. in pursuit of this goal, we create new variants of texts and images in the ms-coco test set and re-evaluate the state-of-the-art (sota) models with the new data. specifically, we alter the meaning of text by replacing a word, and generate visually altered images that maintain some visual context while introducing noticeable pixel changes through image mixing techniques. our evaluations on the proposed benchmark reveal substantial performance degradation in many sota models (e.g., image-to-text recall@1: 81.9% $\rightarrow$ 48.4% in blip, 66.1% $\rightarrow$ 37.6% in vse$\infty$), with the models often favoring the altered texts/images over the original ones. this indicates that current vision-language models struggle with subtle changes and often fail to understand the overall context of texts and images. based on these findings, we propose a semantic contrastive loss and a visual contrastive loss to learn more robust embeddings. datasets and code are available at https://github.com/pseulki/rococo.
arxiv:2304.10727
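The abstract (arXiv:2304.10727) proposes semantic and visual contrastive losses without detailing them; the sketch below shows only the standard InfoNCE-style image-text contrastive loss that such losses typically build on. The embedding size and temperature are arbitrary assumptions, not values from the paper.

```python
# Generic image-text contrastive (InfoNCE-style) loss sketch, not the paper's exact losses.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric cross-entropy over cosine-similarity logits of matched image/text pairs."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(img.size(0))         # i-th image matches i-th caption
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```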
we study the costs and benefits of different quantum approaches to finding approximate solutions of constrained combinatorial optimization problems with a focus on maximum independent set. in the lagrange multiplier approach we analyze the dependence of the output on graph density and circuit depth. the quantum alternating ansatz approach is then analyzed and we examine the dependence on different choices of initial states. the quantum alternating ansatz approach, although powerful, is expensive in terms of quantum resources. a new algorithm based on a " dynamic quantum variational ansatz " ( dqva ) is proposed that dynamically changes to ensure the maximum utilization of a fixed allocation of quantum resources. our analysis and the new proposed algorithm can also be generalized to other related constrained combinatorial optimization problems.
arxiv:2010.06660
this project investigates the behavior of multi - head attention in transformer models, specifically focusing on the differences between benign and trojan models in the context of sentiment analysis. trojan attacks cause models to perform normally on clean inputs but exhibit misclassifications when presented with inputs containing predefined triggers. we characterize attention head functions in trojan and benign models, identifying specific ' trojan ' heads and analyzing their behavior.
arxiv:2406.16925
frames in a hilbert space that are generated by operator orbits are vastly studied because of the applications in dynamic sampling and signal recovery. we demonstrate in this paper a representation theory for frames generated by operator orbits that provides explicit constructions of the frame and the operator when the operators are not surjective. it is known that the kaczmarz algorithm for stationary sequences in hilbert spaces generates a frame that arises from an operator orbit where the operator is not surjective. in this paper, we show that every frame generated by a not surjective operator in any hilbert space arises from the kaczmarz algorithm. furthermore, we show that the operators generating these frames are similar to rank one perturbations of unitary operators. after this, we describe a large class of operator orbit frames that arise from fourier expansions for singular measures. moreover, we classify all measures that possess frame - like fourier expansions arising from two - sided operator orbit frames. finally, we show that measures that possess frame - like fourier expansions arising from two - sided operator orbits are weighted lebesgue measure with weight satisfying a weak $ a _ { 2 } $ condition, even in the non - frame case. we also use these results to classify measures with other types of frame - like fourier expansions.
arxiv:2409.10706
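For readers unfamiliar with the Kaczmarz algorithm that arXiv:2409.10706 builds on, here is the classical finite-dimensional iteration (cyclic projections onto hyperplanes). The paper itself works with stationary sequences in general Hilbert spaces, which this sketch does not attempt to reproduce.

```python
# Minimal sketch of the classical (cyclic) Kaczmarz iteration in R^n.
import numpy as np

def kaczmarz(A, b, n_sweeps=200):
    """Approximate a solution of A x = b by cyclically projecting onto <a_i, x> = b_i."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(m):
            a = A[i]
            x = x + (b[i] - a @ x) / (a @ a) * a  # orthogonal projection onto the i-th hyperplane
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
x_true = rng.normal(size=5)
x_hat = kaczmarz(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))  # small residual for this consistent system
```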
as natural extensions of the boson realizations of the su ( 2 ) - and the su ( 1, 1 ) - algebra, the so ( 4 ) - and the so ( 3, 1 ) - algebras are presented in the form of boson realizations with four kinds of boson operators. for each algebra, two forms are discussed. one is constructed in terms of two sets of the boson operators which play a role of spherical tensor with rank 1 / 2. the other is based on the ranks 1 and 0. as a possible application, the runge - lenz - pauli vector, which is famous in the hydrogen atom, is derived with some aspects.
arxiv:nucl-th/0411075
motivated by the result that an ` approximate ' evaluation of the jones polynomial of a braid at a $ 5 ^ { th } $ root of unity can be used to simulate the quantum part of any algorithm in the quantum complexity class bqp, and results relating bqp to the counting class gapp, we introduce a form of additive approximation which can be used to simulate a function in bqp. we show that all functions in the classes # p and gapp have such an approximation scheme under certain natural normalisations. however we are unable to determine whether the particular functions we are motivated by, such as the above evaluation of the jones polynomial, can be approximated in this way. we close with some open problems motivated by this work.
arxiv:0908.2122
the general phenomenon of shell structure in atomic nuclei has been understood since the pioneering work of goeppert-mayer, haxel, jensen and suess. they realized that the experimental evidence for nuclear magic numbers could be explained by introducing a strong spin-orbit interaction in the nuclear shell model potential. however, our detailed knowledge of nuclear forces and the mechanisms governing the structure of nuclei, in particular far from stability, is still incomplete. in nuclei with equal neutron and proton numbers ($n = z$), the unique nature of the atomic nucleus as an object composed of two distinct types of fermions can be expressed as enhanced correlations arising between neutrons and protons occupying orbitals with the same quantum numbers. such correlations have been predicted to favor a new type of nuclear superfluidity: isoscalar neutron-proton pairing, in addition to normal isovector pairing (see fig. 1). despite many experimental efforts these predictions have not been confirmed. here, we report on the first observation of excited states in the $n = z = 46$ nucleus $^{92}$pd. gamma rays emitted following the $^{58}$ni($^{36}$ar, 2$n$)$^{92}$pd fusion-evaporation reaction were identified using a combination of state-of-the-art high-resolution $\gamma$-ray, charged-particle and neutron detector systems. our results reveal evidence for a spin-aligned, isoscalar neutron-proton coupling scheme, different from the previous prediction. we suggest that this coupling scheme replaces normal superfluidity (characterized by seniority coupling) in the ground and low-lying excited states of the heaviest $n = z$ nuclei. the strong isoscalar neutron-proton correlations in these $n = z$ nuclei are predicted to have a considerable impact on their level structures, and to influence the dynamics of the stellar rapid proton capture nucleosynthesis process.
arxiv:1101.2187
this paper addresses a general method of polynomial transformation of hypergeometric equations. examples of some classical special equations of mathematical physics are generated. heun ' s equation and exceptional jacobi polynomials are also treated.
arxiv:1306.4889
two integral structures on the q - vector space of modular forms of weight two on x _ 0 ( n ) are compared at primes p exactly dividing n. when p = 2 and n is divisible by a prime that is 3 mod 4, this comparison leads to an algorithm for computing the space of weight one forms mod 2 on x _ 0 ( n / 2 ). for p arbitrary and n > 4 prime to p, a way to compute the hecke algebra of mod p modular forms of weight one on gamma _ 1 ( n ) is presented, using forms of weight p, and, for p = 2, parabolic group cohomology with mod 2 coefficients. appendix a is a letter from mestre to serre, of october 1987, where he reports on computations of weight one forms mod 2 of prime level. appendix b reports on an implementation for p = 2 in magma, using stein ' s modular symbols package, with which mestre ' s computations are redone and slightly extended.
arxiv:math/0312019
neutron stars offer a great opportunity to study highly compressed hadronic matter experimentally and theoretically. however, the so - called hyperon - puzzle arises at neutron star densities. the hyperon coexistence with other particles in compressed matter softens the equation of state and many widely - accepted models fail to reproduce precise observations of large neutron star masses. here, we propose a novel mechanism to retain the stiffness of the high density state with hyperons by considering the explicit momentum dependence of their in - medium potentials. our approach modifies conventional strangeness threshold conditions and generates new threshold effects on hyperons in high - density matter. we demonstrate these effects within the non - linear derivative model, which incorporates baryon momentum - dependent fields based on empirical and microscopic studies. it turns out that even soft momentum - dependent strangeness fields do prohibit their populations in neutron star matter. the generic momentum dependence of strangeness potentials, as modeled by the non - linear derivative approach, is crucial for resolving the long - standing hyperon - puzzle in neutron stars.
arxiv:2402.08329
the recent discovery of microwave-induced vanishing resistance states in a two-dimensional electron system (2des) is an unexpected and surprising phenomenon. in these experiments the magnetoresistance of a high-mobility 2des under the influence of microwave radiation of frequency $\omega$, at moderate values of the magnetic field, exhibits strong oscillations with zero-resistance states (zrs) governed by the ratio $\omega/\omega_c$, where $\omega_c$ is the cyclotron frequency. in this work we present a model for the photoconductivity of a two-dimensional electron system (2des) subjected to a magnetic field. the model includes the microwave and landau contributions in a non-perturbative, exact way, while impurity scattering effects are treated perturbatively. in our model, the landau-floquet states act coherently with respect to the oscillating field of the impurities, which in turn induces transitions between these levels. based on this formalism, we provide a kubo-like formula that takes into account the oscillatory floquet structure of the problem. we study the effects of both short-range and long-range disorder on the photoconductivity. our calculation yields an oscillatory magnetoresistance behavior with the correct period and phase. it is found that, in agreement with experiment, negative dissipation can only be induced in very high mobility samples. we analyze the dependence of the results on the microwave power and polarization. for high-intensity radiation, multi-photon processes take place, predicting new negative-resistance states centered at $\omega/\omega_c = 1/2$ and $\omega/\omega_c = 3/2$.
arxiv:cond-mat/0407468
we present an examination of the multi-wavelength observation of a c7.9 flare which occurred on 1998 november 10. this is the first imaging observation of quasi-periodic pulsations (qpps). four bursts were observed with the hard x-ray telescope aboard yohkoh and with the nobeyama radioheliograph during the impulsive phase of the flare. in the second burst, the hard x-ray and microwave time profiles clearly showed a qpp. we estimated the alfven transit time along the flare loop using images from the soft x-ray telescope aboard yohkoh and photospheric magnetograms, and found that the transit time was almost equal to the period of the qpp. we therefore suggest, based on a shock acceleration model, that variations of macroscopic magnetic structures, such as oscillations of coronal loops, affect the efficiency of particle injection/acceleration.
arxiv:astro-ph/0111018
the elastic neutron form factors $ g _ { en } $ and $ g _ { mn } $ are calculated in a gpd framework using gpds obtained from fits to proton elastic form factors $ g _ { ep } $ and $ g _ { mp } $, and isospin symmetry, with no further changes in parameters. the results for $ g _ { en } $ are in good agreement with existing data, while those for $ g _ { mn } $ are fair. the calculations predict the form factors for future measurements at higher $ q ^ 2 $.
arxiv:hep-ph/0307162
crime is an unlawful act that carries legal repercussions. bangladesh has a high crime rate due to poverty, population growth, and many other socio-economic issues. for law enforcement agencies, understanding crime patterns is essential for preventing future criminal activity, and for this purpose these agencies need a structured crime database. this paper introduces a novel crime dataset that contains temporal, geographic, weather, and demographic data about 6574 crime incidents in bangladesh. we manually gather crime news articles spanning seven years from a daily newspaper archive and extract basic features from the raw text. using these basic features, we then consult standard geo-location and weather data service providers to garner information related to the collected crime incidents. furthermore, we collect demographic information from bangladesh national census data. all of this information is combined, resulting in a standard machine learning dataset. together, 36 features are engineered for the crime prediction task. five supervised machine learning classification algorithms are then evaluated on this newly built dataset, and satisfactory results are achieved. we also conduct exploratory analysis on various aspects of the dataset. this dataset is expected to serve as the foundation for crime incidence prediction systems for bangladesh and other countries. the findings of this study will help law enforcement agencies to forecast and contain crime as well as to ensure optimal resource allocation for crime patrol and prevention.
arxiv:2211.01551
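The abstract (arXiv:2211.01551) evaluates supervised classifiers with precision, recall, and F1. Below is a minimal sketch of that evaluation loop using synthetic stand-in data (the actual 36-feature crime dataset is not reproduced here) and four of the model families named in the abstract, via scikit-learn:

```python
# Hedged sketch: synthetic data standing in for the Bangladesh crime dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import precision_recall_fscore_support

X, y = make_classification(n_samples=2000, n_features=36, n_informative=12,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    p, r, f1, _ = precision_recall_fscore_support(y_te, pred, average="macro")
    print(f"{name}: precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```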
we address the following important question : how to distinguish kitaev models experimentally realized on small lattices from other non - topological interacting spin models. based on symmetry arguments and exact diagonalization, we show that a particularly characteristic pattern of spin - spin correlations survives despite finite size, open boundary and thermal effects. the pattern is robust against small residual perturbing interactions and can be utilized to distinguish the kitaev interactions from other interactions such as antiferromagnetic heisenberg interactions. the effect of external magnetic field is also considered and found to be not critical.
arxiv:0906.0017
future direct searches for low-mass dark matter particles with germanium detectors, such as supercdms snolab, are expected to be limited by backgrounds from radioactive isotopes activated by cosmogenic radiation inside the germanium. there are limited experimental data available to constrain production rates and a large spread of theoretical predictions. we examine the calculation of expected production rates, and analyze data from the second run of the cdms low ionization threshold experiment (cdmslite) to estimate the rates for several isotopes. we model the measured cdmslite spectrum and fit for contributions from tritium and other isotopes. using the knowledge of the detector history, these results are converted to cosmogenic production rates at sea level. the production rates in atoms/(kg$\cdot$day) are $74 \pm 9$ for $^3$h, $1.5 \pm 0.7$ for $^{55}$fe, $17 \pm 5$ for $^{65}$zn, and $30 \pm 18$ for $^{68}$ge.
arxiv:1806.07043
in our sne ic and ic-bl the $^{56}$ni is mixed up to the outer layers, suggesting that sn ic progenitors are de facto helium poor.
arxiv:1408.4084
with the aim to determine the spatial distribution of the dark matter halo, we investigate two polar ring galaxies, ngc 4262 and sprc-7. for both galaxies the stellar kinematics data for the central galaxy were obtained from optical spectroscopy at the 6-m telescope of the special astrophysical observatory of the russian academy of sciences. the information about the polar gaseous components was taken from optical 3d-spectroscopic observations of ionized gas (for sprc-7) and h i radio observations (for ngc 4262). sprc-7 is a system with a relative angle $\delta = 73^{\circ}$ towards the central galaxy and a quite massive stellar-gaseous polar component, whereas ngc 4262 is the classic polar case with $\delta = 88^{\circ}$ where the polar ring mainly consists of neutral gas with a negligible stellar contribution to the mass. we are hence dealing with two different systems, and the results are quite diverse too. the observed properties of both galaxies were compared with the results of self-consistent simulations of the velocity fields of the polar component along with the rotation curve of the central lenticular galaxy. for sprc-7 we have found a slightly flattened halo towards the polar plane with the axis ratio $c/a \simeq 1.7 \pm 0.2$ for the isothermal halo model and $c/a \simeq 1.5 \pm 0.2$ for the nfw model. the case of ngc 4262 is more unusual: the shape of the dark matter distribution varies strongly with radius. namely, the dark matter halo is flattened in the vicinity of the galactic disc ($c/a \approx 0.4 \pm 0.1$), however it is prolate far beyond the central galaxy ($c/a \approx 1.7$ for the isothermal halo and $c/a \approx 2.3$ for nfw).
arxiv:1404.1247
an efficient modal feature fusion strategy is the key to achieving accurate segmentation of brain glioma. however, due to the specificity of different mri modes, it is difficult to carry out cross-modal fusion with large differences in modal features, resulting in the model ignoring rich feature information. on the other hand, the problem of multi-modal feature redundancy interaction occurs in parallel networks due to the proliferation of feature dimensions, further increasing the difficulty of multi-modal feature fusion at the bottom end. in order to solve the above problems, we propose a novel complementary feature compression interaction network (cfci-net), which realizes the complementary fusion and compression interaction of multi-modal feature information with an efficient mode fusion strategy. firstly, we propose a selective complementary feature fusion (scff) module, which adaptively fuses rich cross-modal feature information by complementary soft selection weights. secondly, a modal feature compression interaction (mfci) transformer is proposed to deal with the multi-mode fusion redundancy problem when the feature dimension surges. the mfci transformer is composed of modal feature compression (mfc) and modal feature interaction (mfi) to realize redundancy feature compression and multi-mode feature interactive learning. in mfi, we propose a hierarchical interactive attention mechanism based on multi-head attention. evaluations on the brats2019 and brats2020 datasets demonstrate that cfci-net achieves superior results compared to state-of-the-art models. code: https://github.com/cdmm0/cfci-net
arxiv:2503.16149
the theory of dark energy stars illustrates how the behavior of matter near to certain kinds of quantum critical phase transitions can be given a geometrical interpretation by regarding the criticality tuning parameter as an extra dimension. in the case of a superfluid with vanishing speed of sound, the implied geometry resembles 5 - dimensional anti - de - sitter. in a dark energy star this geometry applies both inside and outside the horizon radius, so the ads - cft correspondence is consistent with the idea that the surface of a compact astrophysical object represents a quantum critical phase transition of space - time. the superfluid transition in a chiron gas, which was originally proposed as a theory of high temperature superconductivity, may provide an exact theory of this transition.
arxiv:0907.4397
recent cosmological observations and compatible theory offer an understanding of long-mysterious dark matter and dark energy. the postulate of universal conformal local weyl scaling symmetry, without dark matter, modifies action integrals for both einstein-hilbert gravitation and the higgs scalar field by nonclassical gravitational terms. conformal theory accounts both for observed excessive external galactic orbital velocities and for accelerating cosmic expansion. su(2) symmetry-breaking is retained but dark energy is implied rather than nonzero higgs particle mass. these results are compatible with existence of a massive neutral particle or resonance $w_2$ at 125 gev, described as composite scalar $g_{\mu\nu} w_-^\mu w_+^\nu$ and $g_{\mu\nu} z^{\mu *} z^\nu$ interacting strongly via quark exchange. decay modes would be consistent with those observed at lhc. higgs scalar field $\phi$ is dressed by the $w_2$ field to produce lagrangian term $\lambda (\phi^\dagger \phi)^2$.
arxiv:1304.4650
many current problems of interest in quantum non-equilibrium are described by time-local master equations (tlmes) for the density matrix that are not of the lindblad form, that is, that are not strictly probability conserving and/or markovian. here we describe a generic approach by which the system of interest that obeys the tlme is coupled to an ancilla, such that the dynamics of the combined system-plus-ancilla is markovian and thus described by a lindblad equation. this in turn allows us to recover the properties of the original tlme dynamics from a physical unravelling of this associated lindblad dynamics. we discuss applications of this generic mapping in two areas of current interest. the first is that of "thermodynamics of trajectories", where non-lindblad master equations encode the large-deviation properties of the dynamics, and we show that the relevant large-deviation functions (i.e. dynamical free energies) can be recovered from appropriate observables of the ancilla. the second is that of quantum filters, where we show that tracking a quantum system undergoing a continuous homodyne measurement with another quantum system of the same size will inherently be inefficient in our framework.
arxiv:1311.7394
since black holes can only accrete sub-keplerian matter, a companion black hole orbiting on a circular and instantaneously keplerian orbit around a central, massive black hole in a galactic centre will lose angular momentum and energy to the accreting matter. this loss could be a significant fraction of the loss due to gravitational wave (gw) emission, and the corresponding gw signal would be modified. we discuss these effects in the light of modern accretion disk theory.
arxiv:astro-ph/0012529
hydrodynamic simulations suggest that galactic gas disks form when coplanar gas spirals into the inner regions of the disk. we recently presented a simple " modified accretion disk " model of viscous galactic disks in which star - formation is fed by a radial flow of gas. however, little observational evidence has been presented for such inflows, which are expected to be only a few km s $ ^ { - 1 } $ in the central regions of the disk, i. e. within three disk scale - lengths, but could reach of order 50 - 100 km s $ ^ { - 1 } $ in the very outer disk. the effects of systematic inflow on the 2 - d velocity field are examined and it is shown that these are quite similar to those produced by geometric warps of the disks, with twist distortions of both the kinematic major and minor axes. this makes it potentially difficult to distinguish between these in practice. by comparing the handedness of the observed twisting of the kinematic axes and of the spiral arms for a sample of nearby galaxies, we find ( assuming that the spiral arms are generally trailing ) that the effects of warps are in fact likely to dominate over the effects of radial inflows. however, the common practice of treating these twist distortions of the kinematic major and minor axes as being due only to warps can lead, for galaxies of low - to - intermediate inclinations, to substantial underestimates of any systematic inflow.
arxiv:2205.04215
we review both the counting rule and the influence of the evolution in $q^2$ for the large-$x_{bj}$ behaviour of the valence quark distribution functions. based on a factorization procedure we present a more general perturbative treatment to compute this behaviour. a complete analysis is performed in the scalar $\phi^3_{[6]}$ theory for the parton distribution function of the "meson", which shows that logarithmic corrections arise from the distribution amplitude and that the reference momentum squared $q_0^2$ is fixed by $x_{bj}$.
arxiv:hep-ph/9406260
motivated by privacy concerns in sequential decision - making on sensitive data, we address the challenge of nonparametric contextual multi - armed bandits ( mab ) under local differential privacy ( ldp ). we develop a uniform - confidence - bound - type estimator, showing its minimax optimality supported by a matching minimax lower bound. we further consider the case where auxiliary datasets are available, subject also to ( possibly heterogeneous ) ldp constraints. under the widely - used covariate shift framework, we propose a jump - start scheme to effectively utilize the auxiliary data, the minimax optimality of which is further established by a matching lower bound. comprehensive experiments on both synthetic and real - world datasets validate our theoretical results and underscore the effectiveness of the proposed methods.
arxiv:2503.08098
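arXiv:2503.08098 works under local differential privacy; as background only, the sketch below shows the standard Laplace mechanism a user could apply to a bounded reward before reporting it. It is not the paper's estimator, and the epsilon value and reward bounds are arbitrary assumptions.

```python
# Standard epsilon-LDP Laplace mechanism for a reward known to lie in [lo, hi];
# the server only ever sees the perturbed values.
import numpy as np

def privatize_reward(r, epsilon, lo=0.0, hi=1.0, rng=None):
    """Release r + Laplace noise scaled to the sensitivity (hi - lo) / epsilon."""
    rng = np.random.default_rng() if rng is None else rng
    return r + rng.laplace(scale=(hi - lo) / epsilon)

rng = np.random.default_rng(1)
rewards = rng.uniform(size=10_000)                                   # true rewards in [0, 1]
private = np.array([privatize_reward(r, epsilon=1.0, rng=rng) for r in rewards])
print(rewards.mean(), private.mean())  # aggregate statistics survive; individual values are noisy
```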
attribution methods are an easy to use tool for investigating and validating machine learning models. multiple methods have been suggested in the literature and it is not yet clear which method is most suitable for a given task. in this study, we tested the robustness of four attribution methods, namely gradient * input, guided backpropagation, layer - wise relevance propagation and occlusion, for the task of alzheimer ' s disease classification. we have repeatedly trained a convolutional neural network ( cnn ) with identical training settings in order to separate structural mri data of patients with alzheimer ' s disease and healthy controls. afterwards, we produced attribution maps for each subject in the test data and quantitatively compared them across models and attribution methods. we show that visual comparison is not sufficient and that some widely used attribution methods produce highly inconsistent outcomes.
arxiv:1909.08856
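Of the four attribution methods tested in arXiv:1909.08856, gradient*input is the simplest to write down. A hedged sketch for a toy 2D classifier follows; the study's actual model is a CNN on structural MRI, which is not reproduced here.

```python
# Gradient*input attribution for an arbitrary differentiable classifier (PyTorch).
import torch
import torch.nn as nn

def gradient_times_input(model, x, target_class):
    """Return the gradient*input attribution map for a single input tensor."""
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target_class]   # scalar logit for the chosen class
    score.backward()
    return (x.grad * x).detach()

# Toy 2D CNN standing in for the MRI model used in the study.
model = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(4, 2))
x = torch.randn(1, 1, 64, 64)
attribution = gradient_times_input(model, x, target_class=1)
print(attribution.shape)  # same shape as the input
```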
all classical lie algebras can be realized à la schwinger in terms of fermionic oscillators. we show that the same can be done for their $q$-deformed counterparts by simply replacing the fermionic oscillators with anyonic ones defined on a two-dimensional lattice. the deformation parameter $q$ is a phase related to the anyonic statistical parameter. a crucial rôle in this construction is played by a sort of bosonization formula which gives the generators of the quantum algebras in terms of the undeformed ones. the entire procedure works even on one-dimensional chains; in such a case $q$ can also be real.
arxiv:hep-th/9304108
popular wisdom suggests that measuring the tensor-to-scalar ratio $r$ on cmb scales is a "proof of inflation", since one generic prediction is a scale-invariant tensor spectrum while alternatives predict $r$ that is many orders of magnitude below the sensitivity of future experiments. a bouncing universe with sourced fluctuations allows for nearly scale-invariant spectra of both scalar and tensor perturbations, challenging this point of view. past works have analyzed the model until the bounce, under the assumption that the bounce will not change the final predictions. in this work, we discard this assumption. we explicitly follow the evolution of the universe and fluctuations across the bounce until reheating. the evolution is stable, and the existence of the sourced fluctuations does not destroy the bounce. the bounce enhances the scalar spectrum while leaving the tensor spectrum unchanged. the enhancement depends on the duration of the bounce: a shorter bounce implies a larger enhancement. the model matches current observations and predicts any viable tensor-to-scalar ratio $r \lesssim 10^{-2}$, which may be observed in upcoming cmb experiments. hence, a measurement of $r$ will no longer be a "proof of inflation", and a sourced bounce is a viable paradigm with distinct predictions.
arxiv:2308.00256
contrastive language-image pre-training (clip) achieves promising results in 2d zero-shot and few-shot learning. despite the impressive performance in 2d, applying clip to help the learning in 3d scene understanding has yet to be explored. in this paper, we make the first attempt to investigate how clip knowledge benefits 3d scene understanding. we propose clip2scene, a simple yet effective framework that transfers clip knowledge from 2d image-text pre-trained models to a 3d point cloud network. we show that the pre-trained 3d network yields impressive performance on various downstream tasks, i.e., annotation-free and fine-tuning with labelled data for semantic segmentation. specifically, built upon clip, we design a semantic-driven cross-modal contrastive learning framework that pre-trains a 3d network via semantic and spatial-temporal consistency regularization. for the former, we first leverage clip's text semantics to select the positive and negative point samples and then employ the contrastive loss to train the 3d network. in terms of the latter, we force the consistency between the temporally coherent point cloud features and their corresponding image features. we conduct experiments on semantickitti, nuscenes, and scannet. for the first time, our pre-trained network achieves annotation-free 3d semantic segmentation with 20.8% and 25.08% miou on nuscenes and scannet, respectively. when fine-tuned with 1% or 100% labelled data, our method significantly outperforms other self-supervised methods, with improvements of 8% and 1% miou, respectively. furthermore, we demonstrate the generalizability for handling cross-domain datasets. code is publicly available at https://github.com/runnanchen/clip2scene.
arxiv:2301.04926
this paper presents the eclipse plug - ins for the task flow model in the discovery method. these plug - ins provide an ide for the task algebra compiler and the model - checking tools. the task algebra is the formal representation for the task model and it is based on simple and compound tasks. the model - checking techniques were developed to validate task models represented in the algebra.
arxiv:1107.2683
the number of orphan radio afterglows associated with gamma-ray bursts (grbs) that should be detected by a flux-limited radio survey is calculated. it is shown that for jetted grbs this number is smaller for smaller jet opening angle ($\theta$), contrary to naive expectation. for a beaming factor $f_b^{-1} = (\theta^2/2)^{-1} = 500$, roughly the value inferred by frail et al. (2001) from analysis of afterglow light curves, we predict that between several hundred and several thousand orphan radio afterglows should be detectable (over all sky) above 1 mjy at ghz frequencies at any given time. this orphan population is dominated by sources lying at distances of a few hundred mpc and having an age of ~1 yr. a search for point-like radio transients with flux densities greater than 6 mjy was conducted using the first and nvss surveys, yielding a list of 25 orphan candidates. we argue that most of the candidates are unlikely to be radio supernovae. however, the possibility that they are radio-loud agns cannot be ruled out without further observations. our analysis sets an upper limit for the all-sky number of radio orphans, which corresponds to a lower limit $f_b^{-1} > 10$ on the beaming factor. rejection of all candidates found in our search would imply $f_b^{-1} > 100$. this, and the fact that some candidates may indeed be radio afterglows, strongly motivate further observations of these transients.
arxiv:astro-ph/0203262
we study the one - neutron removal strength of the 7he ground state, which provides us with the 6he - n component in 7he. the he isotopes are described on the basis of the 4he + xn cluster model ( x = 1, 2, 3 ). the complex scaling method is applied to describe not only the gamow resonances but also the nonresonant continuum states of valence neutrons, with the correct boundary condition of particle decays. the one - neutron removal strength of 7he into the unbound states of 6he is calculated using the complex - scaled green ' s function, in which a complex - scaled complete set of 4he + n + n states is adopted. using this framework, we investigate resonant and nonresonant contributions of the strength, which individually produce specific structures in the distributions. in addition, we propose a method to obtain the real - value strength using the complex values of spectroscopic factors of gamow states. as a result, the 6he ( 2 + ) resonance is found to give the largest contribution.
arxiv:0907.0531
the nonequilibrium dynamics of a periodically driven extended xy model, in the presence of a linear time-dependent magnetic field, is investigated using the notion of dynamical quantum phase transitions (dqpts). along similar lines to equilibrium phase transitions, the main purpose of this work is to search for fundamental concepts such as scaling and universality at the ramped-quench dqpts. we have shown that the critical points of the model, where the gap closing occurs, can be moved by tuning the driven frequency, and consequently the presence/absence of dqpts can be flexibly controlled by adjusting the driven frequency. taking advantage of this property, we have uncovered that, for a ramp across the single quantum critical point, the critical mode at which dqpts occur is classified into three regions: the kibble-zurek (kz) region, where the critical mode scales linearly with the square root of the sweep velocity, the pre-saturated (ps) region, and the saturated (s) region, where the critical mode makes a plateau versus the sweep velocity. for a ramp that crosses two critical points, the critical modes disclose just the kz and ps regions. on the basis of numerical simulations, we find that the dynamical free energy scales linearly with time as it approaches the dqpt time, with exponent $\nu = 1 \pm 0.01$ for all sweep velocities and driven frequencies.
arxiv:2310.15101
low grade gliomas ( lggs ) are infiltrative and incurable primary brain tumours with typically slow evolution. these tumours usually occur in young and otherwise healthy patients, bringing controversies in treatment planning since aggressive treatment may lead to undesirable side effects. thus, for management decisions it would be valuable to obtain early estimates of lgg growth potential. here we propose a simple mathematical model of lgg growth and its response to chemotherapy which allows the growth of lggs to be described in real patients. the model predicts, and our clinical data confirms, that the speed of response to chemotherapy is related to tumour aggressiveness. moreover, we provide a formula for the time to radiological progression, which can be possibly used as a measure of tumour aggressiveness. finally, we suggest that the response to a few chemotherapy cycles upon diagnosis might be used to predict tumour growth and to guide therapeutical actions on the basis of the findings.
arxiv:1702.05307
in the $d$-dimensional hypercube bin packing problem, a given list of $d$-dimensional hypercubes must be packed into the smallest number of hypercube bins. epstein and van stee [siam j. comput. 35 (2005)] showed that the asymptotic performance ratio $\rho$ of the online bounded space variant is $\Omega(\log d)$ and $O(d/\log d)$, and conjectured that it is $\Theta(\log d)$. we show that $\rho$ is in fact $\Theta(d/\log d)$, using probabilistic arguments.
arxiv:2107.14161
conventional neural structures tend to communicate through analog quantities such as currents or voltages; however, as cmos devices shrink and supply voltages decrease, the dynamic range of voltage/current-domain analog circuits becomes narrower, the available margin becomes smaller, and noise immunity decreases. moreover, the use of operational amplifiers (op-amps) and continuous-time or clocked comparators in conventional designs leads to high energy consumption and large chip area, which would be detrimental to building spiking neural networks. in view of this, we propose a neural structure for generating and transmitting time-domain signals, including a neuron module, a synapse module, and two weight modules. the proposed neural structure is driven by a leakage current of mos transistors and uses an inverter-based comparator to realize a firing function, thus providing higher energy and area efficiency compared to conventional designs. the proposed neural structure is fabricated using tsmc 65 nm cmos technology. the proposed neuron and synapse occupy areas of 127 $\mu$m$^2$ and 231 $\mu$m$^2$, respectively, while achieving millisecond time constants. actual chip measurements show that the proposed structure implements the temporal signal communication function with millisecond time constants, which is a critical step toward hardware reservoir computing for human-computer interaction. simulation results of the spiking neural network for reservoir computing with the behavioral model of the proposed neural structure demonstrate the learning function.
arxiv:2208.11881
deep neural networks have shown excellent performance for stereo matching. many efforts focus on the feature extraction and similarity measurement of the matching cost computation step while less attention is paid on cost aggregation which is crucial for stereo matching. in this paper, we present a learning - based cost aggregation method for stereo matching by a novel sub - architecture in the end - to - end trainable pipeline. we reformulate the cost aggregation as a learning process of the generation and selection of cost aggregation proposals which indicate the possible cost aggregation results. the cost aggregation sub - architecture is realized by a two - stream network : one for the generation of cost aggregation proposals, the other for the selection of the proposals. the criterion for the selection is determined by the low - level structure information obtained from a light convolutional network. the two - stream network offers a global view guidance for the cost aggregation to rectify the mismatching value stemming from the limited view of the matching cost computation. the comprehensive experiments on challenge datasets such as kitti and scene flow show that our method outperforms the state - of - the - art methods.
arxiv:1801.04065
horn functions form a subclass of boolean functions possessing interesting structural and computational properties. these functions play a fundamental role in algebra, artificial intelligence, combinatorics, computer science, database theory, and logic. in the present paper, we introduce the subclass of hypergraph horn functions that generalizes matroids and equivalence relations. we provide multiple characterizations of hypergraph horn functions in terms of implicate - duality and the closure operator, which are respectively regarded as generalizations of matroid duality and mac lane - steinitz exchange property of matroid closure. we also study algorithmic issues on hypergraph horn functions, and show that the recognition problem ( i. e., deciding if a given definite horn cnf represents a hypergraph horn function ) and key realization ( i. e., deciding if a given hypergraph is realized as a key set by a hypergraph horn function ) can be done in polynomial time, while implicate sets can be generated with polynomial delay.
arxiv:2301.05461
automatically generated reports from medical images promise to improve the workflow of radiologists. existing methods consider an image - to - report modeling task by directly generating a fully - fledged report from an image. however, this conflates the content of the report ( e. g., findings and their attributes ) with its style ( e. g., format and choice of words ), which can lead to clinically inaccurate reports. to address this, we propose a two - step approach for radiology report generation. first, we extract the content from an image ; then, we verbalize the extracted content into a report that matches the style of a specific radiologist. for this, we leverage radgraph - - a graph representation of reports - - together with large language models ( llms ). in our quantitative evaluations, we find that our approach leads to beneficial performance. our human evaluation with clinical raters highlights that the ai - generated reports are indistinguishably tailored to the style of individual radiologist despite leveraging only a few examples as context.
arxiv:2310.17811
this paper investigates the problem of combinatorial multiarmed bandits with stochastic submodular (in expectation) rewards and full-bandit delayed feedback, where the delayed feedback is assumed to be composite and anonymous. in other words, the delayed feedback is composed of components of rewards from past actions, with unknown division among the sub-components. three models of delayed feedback (bounded adversarial, stochastic independent, and stochastic conditionally independent) are studied, and regret bounds are derived for each of the delay models. ignoring the problem-dependent parameters, we show that the regret bound for all the delay models is $\tilde{O}(T^{2/3} + T^{1/3} \nu)$ for time horizon $T$, where $\nu$ is a delay parameter defined differently in the three cases, thus demonstrating an additive term in regret with delay in all three delay models. the considered algorithm is demonstrated to outperform other full-bandit approaches with delayed composite anonymous feedback.
arxiv:2303.13604
we present deep ground-based {\it b} and {\it r} observations of 12 fields in the small magellanic cloud (smc). the resulting color-magnitude diagrams (cmds) reach the oldest main-sequence (ms) turnoff at $m_{r} \thicksim 3.5$ and reveal the stellar population differences between the part of the galaxy facing the large magellanic cloud (lmc) and an area on the opposite side. in the southern part of the galaxy, we found that there are still intermediate-age stars as far as 4 kpc from the smc center. the chemical enrichment history (ceh) in one of our smc fields is also presented.
arxiv:astro-ph/0703225
this paper provides a technical review of position and speed sensorless methods for controlling brushless direct current ( bldc ) motor drives, including the background analysis using sensors, limitations and advances. the performance and reliability of bldc motor drivers have been improved because the conventional control and sensing techniques have been improved through sensorless technology. then, in this paper sensorless advances are reviewed and recent developments in this area are introduced with their inherent advantages and drawbacks, including the analysis of practical implementation issues and applications. the study includes a deep overview of state - of - the - art back - emf sensing methods, which includes terminal voltage sensing, third harmonic voltage integration, terminal current sensing, back - emf integration and pwm strategies. also, the most relevant techniques based on estimation and models are briefly analysed, such as sliding - mode observer, extended kalman filter, model reference adaptive system, adaptive observers ( full - order and pseudoreduced - order ) and artificial neural networks.
arxiv:2402.05263
we consider the problem of upper bounding the number of circular transpositions needed to sort a permutation. it is well known that any permutation can be sorted using at most $n(n-1)/2$ adjacent transpositions. we show that, if we allow all adjacent transpositions, as well as the transposition that interchanges the element in position 1 with the element in the last position, then the number of transpositions needed is at most $n^2/4$. this answers an open question posed by feng, chitturi and sudborough (2010).
arxiv:1402.4867
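A quick numeric check of the classical bound quoted in arXiv:1402.4867: sorting by adjacent transpositions takes exactly the number of inversions, which is at most n(n-1)/2. The paper's n^2/4 algorithm with the extra wrap-around transposition is not reproduced here.

```python
# Worst-case count of adjacent transpositions over all permutations of n elements.
import itertools

def adjacent_swaps_to_sort(perm):
    """Bubble sort, counting adjacent transpositions (equals the number of inversions)."""
    a, count = list(perm), 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                count += 1
    return count

n = 6
worst = max(adjacent_swaps_to_sort(p) for p in itertools.permutations(range(n)))
print(worst, n * (n - 1) // 2)  # both 15: the reversed permutation attains the bound
```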
research in the humanities involves different methods such as for example hermeneutics and semiotics. humanities scholars usually do not search for the ultimate correct answer to a question, but instead, explore the issues and details that surround it. context is always important, and context can be social, historical, political, cultural, or ethnic. an example of research in the humanities is historical research, which is embodied in historical method. historians use primary sources and other evidence to systematically investigate a topic, and then to write histories in the form of accounts of the past. other studies aim to merely examine the occurrence of behaviours in societies and communities, without particularly looking for reasons or motivations to explain these. these studies may be qualitative or quantitative, and can use a variety of approaches, such as queer theory or feminist theory. === artistic research === artistic research, also seen as 'practice-based research', can take form when creative works are considered both the research and the object of research itself. it is the debatable body of thought which offers an alternative to purely scientific methods in research in its search for knowledge and truth. the controversial trend of artistic teaching becoming more academics-oriented is leading to artistic research being accepted as the primary mode of enquiry in art as in the case of other disciplines. one of the characteristics of artistic research is that it must accept subjectivity as opposed to the classical scientific methods. as such, it is similar to the social sciences in using qualitative research and intersubjectivity as tools to apply measurement and critical analysis. artistic research has been defined by the school of dance and circus (dans och cirkushögskolan, doch), stockholm in the following manner – "artistic research is to investigate and test with the purpose of gaining knowledge within and for our artistic disciplines. it is based on artistic practices, methods, and criticality. through presented documentation, the insights gained shall be placed in a context." artistic research aims to enhance knowledge and understanding with presentation of the arts. a simpler understanding by julian klein defines artistic research as any kind of research employing the artistic mode of perception. for a survey of the central problematics of today's artistic research, see giaco schiesser. according to artist hakan topal, in artistic research, "perhaps more so than other disciplines, intuition is utilized as a method to identify a wide range of new and unexpected productive modalities". most writers, whether of fiction or non-fiction
https://en.wikipedia.org/wiki/Research
the graph isomorphism problem is a central problem with numerous applications in different fields. thus, finding an efficient and easy-to-implement method to discriminate non-isomorphic graphs is valuable. in this paper, a new method is introduced which is very simple and easy to implement, but very efficient in discriminating non-isomorphic graphs in practice. this method does not need any heuristic attempt and is based on the eigenvalues of a new matrix representation for graphs. it almost always separates non-isomorphic $n$-vertex graphs in time $O(n^3)$ and, in worst cases such as strongly regular graphs, in time $O(n^4)$. here, we show that this method successfully characterizes the isomorphism classes of the studied instances of strongly regular graphs (up to 64 vertices). strongly regular graphs are believed to be hard cases of the graph isomorphism problem.
arxiv:1611.01936
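the entry above ( arxiv:1611.01936 ) describes separating non - isomorphic graphs by comparing eigenvalues of a matrix representation. as a minimal sketch of that idea — using the ordinary adjacency spectrum as a stand - in, since the abstract does not specify the new matrix representation, and with function names of our own — spectra that differ certify non - isomorphism, while equal spectra leave the pair undecided:

```python
# minimal sketch of spectrum-based isomorphism screening, assuming the
# adjacency matrix as a stand-in for the paper's (unspecified) matrix
# representation; graphs with different fingerprints cannot be isomorphic.
import numpy as np

def spectral_fingerprint(adj, decimals=8):
    """sorted eigenvalues of a symmetric matrix representation of a graph."""
    eigvals = np.linalg.eigvalsh(np.asarray(adj, dtype=float))  # dense symmetric solve, o(n^3)
    return tuple(np.round(np.sort(eigvals), decimals))

def maybe_isomorphic(adj_a, adj_b):
    """false means definitely non-isomorphic; true means 'not separated'."""
    return spectral_fingerprint(adj_a) == spectral_fingerprint(adj_b)

# usage: a path p3 vs. a triangle c3 (same vertex count, different spectra)
p3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
c3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(maybe_isomorphic(p3, c3))  # False
```

the $O(n^3)$ cost quoted in the abstract matches the dense symmetric eigenvalue computation used in this stand - in.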
navigating a bipedal robot to a desired destination in various environments is an important problem, but it is difficult to solve in real time because the high degree of freedom of a biped robot makes the computation time very long. to overcome this, many researchers have proposed navigation through footstep planning. footstep planning usually uses the shortest distance or angle as the objective function in an a * search. recently, the energy required for human walking, which is widely used in human dynamics and can be approximated by a polynomial function, has been proposed as a cost function that better explains the movement of a bipedal robot. in addition, for real - time navigation, practical methods have been suggested in which the action set of the a * algorithm is not fixed but changes in size according to the situation, so that the computation time does not increase much, together with ways of handling collisions with the external environment. in this thesis, a polynomial function approximating the energy required for human walking is adopted as the cost function, and a heuristic function considering the angular difference between the robot and the destination, which does not appear in previous studies, is newly proposed and proved. in addition, a new method to integrate the adaptive action set and the energy of human walking is proposed, together with an efficient collision avoidance method and a method to reduce the local minimum problem within this framework. finally, the footstep planning algorithm with all of these features is integrated with the mapping and walking algorithms to solve the navigation problem, and is validated in simulation and on a real robot.
arxiv:2211.05555
as autonomous robots increasingly become part of daily life, they will often encounter dynamic environments while only having limited information about their surroundings. unfortunately, due to the possible presence of malicious dynamic actors, it is infeasible to develop an algorithm that can guarantee collision - free operation. instead, one can attempt to design a control technique that guarantees the robot is not - at - fault in any collision. in the literature, making such guarantees in real time has been restricted to static environments or specific dynamic models. to ensure not - at - fault behavior, a robot must first correctly sense and predict the world around it within some sufficiently large sensor horizon ( the prediction problem ), then correctly control relative to the predictions ( the control problem ). this paper addresses the control problem by proposing reachability - based trajectory design for dynamic environments ( rtd - d ), which guarantees that a robot with an arbitrary nonlinear dynamic model correctly responds to predictions in arbitrary dynamic environments. rtd - d first computes a forward reachable set ( frs ) offline of the robot tracking parameterized desired trajectories that include fail - safe maneuvers. then, for online receding - horizon planning, the method provides a way to discretize predictions of an arbitrary dynamic environment to enable real - time collision checking. the frs is used to map these discretized predictions to trajectories that the robot can track while provably not - at - fault. one such trajectory is chosen at each iteration, or the robot executes the fail - safe maneuver from its previous trajectory which is guaranteed to be not at fault. rtd - d is shown to produce not - at - fault behavior over thousands of simulations and several real - world hardware demonstrations on two robots : a segway, and a small electric vehicle.
arxiv:1902.02851
in this paper, we investigate the existence of online learning algorithms with bandit feedback that simultaneously guarantee $O(1)$ regret compared to a given comparator strategy, and $O(\sqrt{T})$ regret compared to the best strategy in hindsight, where $T$ is the number of rounds. we provide the first affirmative answer to this question. in the context of symmetric zero - sum games, both in normal - and extensive form, we show that our results allow us to guarantee to risk at most $O(1)$ loss while being able to gain $\Omega(T)$ from exploitable opponents, thereby combining the benefits of both no - regret algorithms and minimax play.
arxiv:2502.11673
let $D$ be an integral domain with quotient field $K$. a star - operation $\star$ on $D$ is a closure operation $A \longmapsto A^\star$ on the set of nonzero fractional ideals, $F(D)$, of $D$ satisfying the properties : $(xD)^\star = xD$ and $(xA)^\star = xA^\star$ for all $x \in K^\ast$ and $A \in F(D)$. let $\mathcal{S}$ be a multiplicatively closed set of ideals of $D$. for $A \in F(D)$ define $A_{\mathcal{S}} = \{ x \in K \mid xI \subseteq A \text{ for some } I \in \mathcal{S} \}$. then $D_{\mathcal{S}}$ is an overring of $D$ and $A_{\mathcal{S}}$ is a fractional ideal of $D_{\mathcal{S}}$. let $\mathcal{S}$ be a multiplicative set of finitely generated nonzero ideals of $D$ and $A \in F(D)$ ; then the map $A \longmapsto A_{\mathcal{S}}$ is a finite character star - operation if and only if for each $I \in \mathcal{S}$, $I_v = D$. we give an example to show that this result is not true if the ideals are not assumed to be finitely generated. in general, the map $A \longmapsto A_{\mathcal{S}}$ is a star - operation if and only if $\bar{\mathcal{S}}$, the saturation of $\mathcal{S}$, is a localizing gv - system. we also discuss star - operations of the form $A \longmapsto \cap AD_\alpha$, where $D = \cap D_\alpha$.
arxiv:math/0301046
we have investigated the relation among the directions of the momenta of the matter, neutrinos, and the proto - neutron star in a collapse - driven supernova in order to discuss the pulsar kick. in particular, we have investigated the effects of the pulsar motion on the explosion, which were neglected in previous studies. as a result, it is suggested that the direction of the total momentum of the matter and neutrinos is opposite to that of the momentum of the proto - neutron star in the asymmetric explosion models. this is because the center of the explosion deviates from the center of the progenitor due to the pulsar motion. this picture is common among the asymmetric explosion models. so if we assume that the pulsar motion is caused by an asymmetric supernova explosion, the neutron star born in sn 1987a, which has not been found yet, will be moving in the southern part of the remnant. in other words, if a neutron star is found in the southern part of the sn 1987a remnant, the observation will support asymmetric explosion models better than binary models.
arxiv:astro-ph/9911077
assuming that vector and scalar diquarks exist in the quark - gluon plasma near the critical temperature $T_c$, baryons can be produced through the processes of quarks and diquarks forming $(1/2)^+$ baryon states. ratios of different baryons can be estimated through this method if such a qgp with diquarks exists.
arxiv:hep-ph/0303164
we examine the effects of confinement on the dynamics of premelted films driven by thermomolecular pressure gradients. our approach is to modify a well - studied setting in which the thermomolecular pressure gradient is driven by a temperature gradient parallel to an interfacially premelted elastic wall. the modification treats the increase in viscosity associated with the thinning of films, studied in a wide variety of materials, using a power law and we examine the consequent evolution of the confining elastic wall. we treat ( 1 ) a range of interactions that are known to underlie interfacial premelting and ( 2 ) a constant temperature gradient wherein the thermomolecular pressure gradient is a constant. the difference between the cases with and without the proximity effect arises in the volume flux of premelted liquid. the proximity effect increases the viscosity as the film thickness decreases thereby requiring the thermomolecular pressure driven flux to be accommodated at higher temperatures where the premelted film thickness is the largest. implications for experiment and observations of frost heave are discussed.
arxiv:1707.06577
by a new approximate method, dimensional free harnack inequalities are established for a class of semilinear stochastic differential equations in hilbert space with multiplicative noise. these inequalities are applied to study the strong feller property for the semigroup and some properties of invariant measure.
arxiv:1208.2343
the resistance of a homogeneous semiconductor increases quadratically with magnetic field at low fields and, except in very special cases, saturates at fields much larger than the inverse of the carrier mobility, a number typically of order 1 tesla. here, we argue that a macroscopically disordered and strongly inhomogeneous semiconductor will instead show a non - saturating magnetoresistance, with typically a quasi - linear behaviour up to very large fields, and possibly also extending down to very low fields, depending on the degree of inhomogeneity. we offer this as a possible explanation of the observed anomalously large magnetoresistance in doped silver chalcogenides. furthermore, our model of an inhomogeneous semiconductor can be developed into magnetoresistive devices that possess a large, controllable, linear response.
arxiv:cond-mat/0312020
in this paper, a higher order finite difference scheme is proposed for generalized fractional diffusion equations ( gfdes ). the fractional diffusion equation is considered in terms of the generalized fractional derivatives ( gfds ), which use scale and weight functions in their definition. the gfd reduces to the riemann - liouville, caputo, and other fractional derivatives in particular cases. due to the importance of the scale and weight functions in describing the behaviour of real - life physical systems, we present solutions of the gfdes for various scale and weight functions. the convergence and stability analysis of the finite difference scheme ( fds ) is also discussed to validate the proposed method. we consider test examples for the numerical simulation of the fds to justify the proposed numerical method.
arxiv:2206.03194
over the last few years, several computational techniques have been devised to recover protein complexes from the protein interaction ( ppi ) networks of organisms. these techniques model " dense " subnetworks within ppi networks as complexes. however, our comprehensive evaluations revealed that these techniques fail to reconstruct many ' gold standard ' complexes that are " sparse " in the networks ( only 71 recovered out of 123 known yeast complexes embedded in a network of 9704 interactions among 1622 proteins ). in this work, we propose a novel index called component - edge ( ce ) score to quantitatively measure the notion of " complex derivability " from ppi networks. using this index, we theoretically categorize complexes as " sparse " or " dense " with respect to a given network. we then devise an algorithm sparc that selectively employs functional interactions to improve the ce scores of predicted complexes, and thereby elevates many of the " sparse " complexes to " dense ". this empowers existing methods to detect these " sparse " complexes. we demonstrate that our approach is effective in reconstructing significantly many complexes missed previously ( 104 recovered out of the 123 known complexes or ~ 47 % improvement ).
arxiv:1301.0363
we present the solutions of equations of degrees 3 and 4 using galois theory and some simple fourier analysis for finite groups, together with historical comments on these and other solution methods.
arxiv:1009.2373
spin qubits in quantum dots are a promising technology for quantum computing due to their fast response time and long coherence times. an electromagnetic pulse is applied to the system for a specific duration to perform a desired rotation. to avoid decoherence, the amplitude and gate time must be highly accurate. in this work, we aim to study the impact of leakage during the gate time evolution of a spin qubit encoded in a double quantum dot device. we prove that, in the weak interaction regime, leakage introduces a shift in the phase of the time evolution operator, causing over - or under - rotations. indeed, controlling the leakage terms is useful for adjusting the time needed to perform a quantum computation. this is crucial for running fault - tolerant algorithms and is beneficial for quantum error mitigation techniques.
arxiv:2411.19179
in 1993 e. i. zelmanov asked the following question in the dniester notebook : " suppose that $F_{2,m}$ is a 2 - generated associative ring with the identity $x^m = 0$. is it true that the nilpotency degree of $F_{2,m}$ has exponential growth? " we show that the nilpotency degree of an $l$ - generated associative algebra with the identity $x^d = 0$ is smaller than $\Psi(d,d,l)$, where $\Psi(n,d,l) = 2^{18} l (nd)^{3 \log_3 (nd) + 13} d^2$. this result gives a definitive answer to zelmanov ' s question. it is a consequence of the following fact, which is based on the combinatorics of words. let $l$, $n$ and $d > n$ be positive integers. then all words over an alphabet of cardinality $l$ whose length is greater than $\Psi(n,d,l)$ are either $n$ - divided or contain a $d$ - th power of a subword, where a word $w$ is $n$ - divided if it can be represented in the form $w = w_0 w_1 \cdots w_n$ such that $w_1 >' w_2 >' \cdots >' w_n$ ; the symbol $>'$ denotes the lexicographical order here. a. i. shirshov proved that the set of non - $n$ - divided words over an alphabet of cardinality $l$ has bounded height $h$ over the set $y$ consisting of all words of degree $< n$. shirshov ' s original estimate was merely recursive ; in 1982 a double exponent was obtained by a. g. kolotov, and in 1993 a. ya. belov obtained an exponential estimate. we show that $h < \Phi(n,l)$, where $\Phi(n,l) = 2^{87} n^{12 \log_3 n + 48} l$. our proof uses latyshev ' s idea of applying dilworth ' s theorem.
arxiv:1207.2987
it is of great interest to quantify the contributions of genetic variation to brain structure and function, which are usually measured by high - dimensional imaging data ( e. g., magnetic resonance imaging ). in addition to the variance, the covariance patterns in the genetic effects of a functional phenotype are of biological importance, and covariance patterns have been linked to psychiatric disorders. the aim of this paper is to develop a scalable method to estimate heritability and the non - stationary covariance components in high - dimensional imaging data from twin studies. our motivating example is from the human connectome project ( hcp ). several major big - data challenges arise from estimating the genetic and environmental covariance functions of functional phenotypes extracted from imaging data, such as cortical thickness with 60, 000 vertices. notably, truncating to positive eigenvalues and their eigenfunctions from unconstrained estimators can result in large bias. this motivated our development of a novel estimator ensuring positive semidefiniteness. simulation studies demonstrate large improvements over existing approaches, both with respect to heritability estimates and covariance estimation. we applied the proposed method to cortical thickness data from the hcp. our analysis suggests fine - scale differences in covariance patterns, identifying locations in which genetic control is correlated with large areas of the brain and locations where it is highly localized.
arxiv:1905.07502
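the entry above ( arxiv:1905.07502 ) notes that truncating an unconstrained covariance estimate to its positive eigenvalues can introduce large bias, which motivates their constrained estimator. purely as an illustration of that naive baseline step ( not the paper ' s method ), a sketch of projecting a symmetric matrix onto the positive semidefinite cone:

```python
# sketch of the naive psd projection the abstract warns about: truncate
# negative eigenvalues of an unconstrained symmetric covariance estimate.
# this is the baseline step, not the paper's bias-corrected estimator.
import numpy as np

def project_to_psd(sym_mat):
    """nearest (frobenius) positive semidefinite matrix to a symmetric input."""
    vals, vecs = np.linalg.eigh(sym_mat)
    vals_clipped = np.clip(vals, 0.0, None)        # drop negative eigenvalues
    return (vecs * vals_clipped) @ vecs.T

# usage: an indefinite "covariance" estimate gets forced onto the psd cone
rng = np.random.default_rng(0)
noisy = rng.normal(size=(5, 5))
noisy = (noisy + noisy.T) / 2                      # symmetrize
psd = project_to_psd(noisy)
print(np.linalg.eigvalsh(psd).min() >= -1e-10)     # True
```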
the main fundamental principles characterizing the vacuum field structure are formulated, and the modeling of the related vacuum medium and charged point particle dynamics by means of the devised field - theoretic tools is analyzed. the work is devoted to studying the vacuum structure, special relativity, the electrodynamics of interacting charged point particles and quantum mechanics, and is a continuation of \cite{bpt,brt1}. based on the vacuum field theory no - geometry approach, a lagrangian and hamiltonian reformulation of some alternative classical electrodynamics models is devised. the dirac - type quantization procedure, based on the canonical hamiltonian formulation, is developed for some alternative electrodynamics models. within the developed approach, the possibility of a combined description of both electrodynamics and gravity is analyzed.
arxiv:0810.3303
we study the problem of optimal control for mean - field stochastic partial differential equations ( stochastic evolution equations ) driven by a brownian motion and an independent poisson random measure, in the case of \textit{partial information} control. one important novelty of our problem is represented by the introduction of \textit{general mean-field} operators, acting on both the controlled state process and the control process. we first formulate a sufficient and a necessary maximum principle for this type of control. we then prove existence and uniqueness of the solution of such general forward and backward mean - field stochastic partial differential equations. we finally apply our results to find the explicit optimal control for an optimal harvesting problem.
arxiv:1704.03430
the following notes derive from review lectures on closed string field theory given at the galileo galilei institute for theoretical physics in march 2019.
arxiv:1905.06785
focused ion beam ( fib ) microscopy suffers from source shot noise - random variation in the number of incident ions in any fixed dwell time - along with random variation in the number of detected secondary electrons per incident ion. this multiplicity of sources of randomness increases the variance of the measurements and thus worsens the trade - off between incident ion dose and image accuracy. time - resolved sensing combined with maximum likelihood estimation from the resulting sets of measurements greatly reduces the effect of source shot noise. through fisher information analysis and monte carlo simulations, the reduction in mean - squared error or reduction in required dose is shown to be by a factor approximately equal to the secondary electron yield. experiments with a helium ion microscope ( him ) are consistent with the analyses and suggest accuracy improvement for a fixed source dose, or reduced source dose for a desired imaging accuracy, by a factor of about 3.
arxiv:1906.03285
recent advances in differentiable structure learning have framed the combinatorial problem of learning directed acyclic graphs as a continuous optimization problem. various aspects, including data standardization, have been studied to identify factors that influence the empirical performance of these methods. in this work, we investigate critical limitations in differentiable structure learning methods, focusing on settings where the true structure can be identified up to markov equivalence classes, particularly in the linear gaussian case. while ng et al. ( 2024 ) highlighted potential non - convexity issues in this setting, we demonstrate and explain why the use of $\ell_1$ - penalized likelihood in such cases is fundamentally inconsistent, even if the global optimum of the optimization problem can be found. to resolve this limitation, we develop a hybrid differentiable structure learning method based on $\ell_0$ - penalized likelihood with hard acyclicity constraint, where the $\ell_0$ penalty can be approximated by different techniques including gumbel - softmax. specifically, we first estimate the underlying moral graph, and use it to restrict the search space of the optimization problem, which helps alleviate the non - convexity issue. experimental results show that the proposed method enhances empirical performance both before and after data standardization, providing a more reliable path for future advancements in differentiable structure learning, especially for learning markov equivalence classes.
arxiv:2410.18396
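the entry above ( arxiv:2410.18396 ) mentions approximating the $\ell_0$ penalty with techniques including gumbel - softmax. a minimal sketch of the binary - concrete / gumbel - softmax relaxation of 0/1 edge indicators is shown below ; variable names and the temperature value are illustrative assumptions, not taken from the paper:

```python
# minimal sketch of a gumbel-softmax (binary concrete) relaxation of the
# 0/1 edge indicators whose expected sum approximates an l0 penalty;
# variable names and constants are illustrative, not from the paper.
import numpy as np

def gumbel_sigmoid(logits, temperature=0.5, rng=None):
    """differentiable-in-logits relaxation of bernoulli(sigmoid(logits))."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-9, 1.0 - 1e-9, size=np.shape(logits))
    logistic_noise = np.log(u) - np.log1p(-u)      # difference of two gumbels
    return 1.0 / (1.0 + np.exp(-(logits + logistic_noise) / temperature))

# usage: relaxed adjacency mask for a 4-node graph and its soft l0 penalty
logits = np.zeros((4, 4))                          # unconstrained edge scores
mask = gumbel_sigmoid(logits, temperature=0.5)
l0_penalty = mask.sum()                            # approaches the edge count as temperature -> 0
print(mask.round(2), l0_penalty)
```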
image hash codes are produced by binarizing the embeddings of convolutional neural networks ( cnn ) trained for either classification or retrieval. while proxy embeddings achieve good performance on both tasks, they are non - trivial to binarize, due to a rotational ambiguity that encourages non - binary embeddings. the use of a fixed set of proxies ( weights of the cnn classification layer ) is proposed to eliminate this ambiguity, and a procedure to design proxy sets that are nearly optimal for both classification and hashing is introduced. the resulting hash - consistent large margin ( hclm ) proxies are shown to encourage saturation of hashing units, thus guaranteeing a small binarization error, while producing highly discriminative hash - codes. a semantic extension ( shclm ), aimed to improve hashing performance in a transfer scenario, is also proposed. extensive experiments show that shclm embeddings achieve significant improvements over state - of - the - art hashing procedures on several small and large datasets, both within and beyond the set of training classes.
arxiv:2007.13912
deep learning ( dl ) methods, especially those based on physics - driven dl, have become the state - of - the - art for reconstructing sub - sampled magnetic resonance imaging ( mri ) data. however, studies have shown that these methods are susceptible to small adversarial input perturbations, or attacks, resulting in major distortions in the output images. various strategies have been proposed to reduce the effects of these attacks, but they require retraining and may lower reconstruction quality for non - perturbed / clean inputs. in this work, we propose a novel approach for mitigating adversarial attacks on mri reconstruction models without any retraining. our framework is based on the idea of cyclic measurement consistency. the output of the model is mapped to another set of mri measurements for a different sub - sampling pattern, and this synthesized data is reconstructed with the same model. intuitively, without an attack, the second reconstruction is expected to be consistent with the first, while with an attack, disruptions are present. a novel objective function is devised based on this idea, which is minimized within a small ball around the attack input for mitigation. experimental results show that our method substantially reduces the impact of adversarial perturbations across different datasets, attack types / strengths and pd - dl networks, and qualitatively and quantitatively outperforms conventional mitigation methods that involve retraining. finally, we extend our mitigation method to two important practical scenarios : a blind setup, where the attack strength or algorithm is not known to the end user ; and an adaptive attack setup, where the attacker has full knowledge of the defense strategy. our approach remains effective in both cases.
arxiv:2501.01908
strong electron correlation effects in the photophysics of quasi - one - dimensional $\pi$ - conjugated organic systems such as polyenes, polyacetylenes, polydiacetylenes, etc., have been extensively studied. far less is known on correlation effects in two - dimensional $\pi$ - conjugated systems. here we present theoretical and experimental evidence for moderate repulsive electron - electron interactions in a number of finite polycyclic aromatic hydrocarbon molecules with $D_{6h}$ symmetry. we show that the excited state orderings in these molecules are reversed relative to that expected within one - electron and mean - field theories. our results reflect similarities as well as differences in the role and magnitude of electron correlation effects in these two - dimensional molecules compared to those in polyenes.
arxiv:1311.0567
lhc has reported tantalizing hints for a higgs boson of mass 125 gev decaying into two photons. we focus on two - higgs - doublet models, and study the interesting possibility that the heavier scalar ( H ) has been seen, with the lightest scalar ( h ) having thus far escaped detection. non - observation of h at lep severely constrains the parameter - space of two - higgs - doublet models. we analyze cases where the decay H --> h h is kinematically allowed, and cases where it is not, in the context of type i, type ii, lepton - specific, and flipped models.
arxiv:1201.0019
some challenging problems in tracking multiple objects include the time - dependent cardinality, unordered measurements and object parameter labeling. in this paper, we employ bayesian nonparametric methods to address these challenges. in particular, we propose modeling the multiple object parameter state prior using the dependent dirichlet and pitman - yor processes. these nonparametric models have been shown to be more flexible and robust, when compared to existing methods, for estimating the time - varying number of objects, providing object labeling and identifying measurement to object associations. monte carlo sampling methods are then proposed to efficiently learn the trajectory of objects from noisy measurements. using simulations, we demonstrate the estimation performance advantage of the new methods when compared to existing algorithms such as the generalized labeled multi - bernoulli filter.
arxiv:2004.10798
in this work, we identify and characterize intra - pulse intensity noise shaping by saturable absorbers applied in mode - locked lasers and ultra - low noise nonlinear fiber amplifiers. reshaped intra - pulse intensity noise distributions are shown to be inevitably interconnected with self - amplitude modulation, the fundamental physical mechanism for initiation and stabilization of ultra - short pulses in the steady - state of a mode - locked laser. a theoretical model is used to describe the ultrafast saturation dynamics by an intra - pulse noise transfer function for widely - applied slow and fast saturable absorbers. for experimental verification of the theoretical results, spectrally - resolved relative intensity noise measurements are applied on chirped input pulses to enable the direct measurement of intra - pulse noise transfer functions using a versatile experimental platform. it is further demonstrated, how the characterized intra - pulse intensity noise distribution of ultrafast laser systems can be utilized for quantum - limited intensity noise suppression via tailored optical bandpass filtering.
arxiv:2303.13420
this study explores the design of an efficient rebate policy in auction markets, focusing on a continuous - time setting with competition among market participants. in this model, a stock exchange collects transaction fees from auction investors executing block trades to buy or sell a risky asset, then redistributes these fees as rebates to competing market makers submitting limit orders. market makers influence both the price at which the asset trades and their arrival intensity in the auction. we frame this problem as a principal - multi - agent problem and provide necessary and sufficient conditions to characterize the nash equilibrium among market makers. the exchange ' s optimization problem is formulated as a high - dimensional hamilton - jacobi - bellman equation with poisson jump processes, which is solved using a verification result. to numerically compute the optimal rebate and transaction fee policies, we apply the deep bsde method. our results show that optimal transaction fees and rebate structures improve market efficiency by narrowing the spread between the auction clearing price and the asset ' s fundamental value, while ensuring a minimal gain for both market makers indexed on the price of the asset on a coexisting limit order book.
arxiv:2501.12591
a method for the detection and visualization of trends, periodicities, and local peculiarities in measurement series ( the dl - method ), based on dfa technology ( detrended fluctuation analysis ), is proposed. the essence of the method lies in plotting the absolute deviation of the points of the accumulated measurement series from the corresponding values of its linear approximation. it is shown that the dl - method in some cases determines local peculiarities better than wavelet analysis. the proposed easy - to - implement approach can be used in the analysis of time series in such fields as economics and sociology.
arxiv:0903.3328
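the entry above ( arxiv:0903.3328 ) describes the dl - method as the absolute deviation of the accumulated measurement series from its linear approximation. a minimal numpy sketch under that reading ( function names are ours, and the change - point example is synthetic ):

```python
# minimal sketch of the dl-method as described: accumulate the series,
# fit a straight line, and return absolute deviations from that line.
# function names are illustrative, not from the paper.
import numpy as np

def dl_profile(series):
    """absolute deviation of the cumulative series from its linear fit."""
    y = np.cumsum(np.asarray(series, dtype=float))  # accumulation series
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, deg=1)      # linear approximation
    return np.abs(y - (slope * x + intercept))      # local peculiarities show as peaks

# usage: a noisy series with a level shift halfway through
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(2.0, 1.0, 200)])
profile = dl_profile(data)
print(int(profile.argmax()))                        # peaks near the change point (~200)
```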
quantum algorithms for solving linear systems of equations remain challenging to run on real devices. this challenge arises from the need for deep circuits and numerous ancilla qubits. we introduce the quantum conjugate gradient ( qcg ) method using the quantum eigenvalue transformation ( qet ). the circuit depth of this algorithm depends on the square root of the coefficient matrix ' s condition number $\kappa$, representing a square - root improvement compared to previous quantum algorithms, while the total query complexity worsens. the number of ancilla qubits is constant, similar to other qet - based algorithms. additionally, to implement the qcg method efficiently, we devise a qet - based technique that uses only the positive side of the polynomial ( denoted by $P(x)$ for $x \in [0,1]$ ). we conduct numerical experiments by applying our algorithm to the one - dimensional poisson equation and successfully solve it. based on the numerical results, our algorithm significantly improves circuit depth, outperforming another qet - based algorithm by three to four orders of magnitude.
arxiv:2404.02713
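the entry above ( arxiv:2404.02713 ) builds a quantum analogue of the conjugate gradient method via qet. for reference only, here is the textbook classical conjugate gradient iteration for a symmetric positive - definite system, applied to a 1d poisson - like matrix similar to the entry ' s test problem ; nothing in this sketch is specific to the quantum algorithm:

```python
# classical conjugate gradient for a symmetric positive-definite system,
# shown only as the textbook method the qcg entry above quantizes;
# nothing here is specific to the quantum algorithm.
import numpy as np

def conjugate_gradient(a_mat, b, tol=1e-10, max_iter=None):
    n = len(b)
    x = np.zeros(n)
    r = b - a_mat @ x                 # residual
    p = r.copy()                      # search direction
    rs_old = r @ r
    for _ in range(max_iter or n):
        ap = a_mat @ p
        alpha = rs_old / (p @ ap)
        x += alpha * p
        r -= alpha * ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# usage: 1d poisson-like tridiagonal system
a_mat = 2 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
b = np.ones(5)
print(np.allclose(a_mat @ conjugate_gradient(a_mat, b), b))  # True
```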
weyl semi - metals are three dimensional generalizations of graphene with point - like fermi surfaces. their linear electronic dispersion leads to a window in the particle - hole excitation spectrum which allows for undamped propagation of collective excitations. we argue that interactions in weyl semi - metals generically lead to well - defined exciton modes. however, using a minimal model for interactions, we show that the exciton binding energy is exponentially small for weak interactions. this is due to effective two - dimensional character in the space of particle - hole pairs that are available for bound state formation. this is ultimately a consequence of linear electronic dispersion in three dimensions. nevertheless, intermediate interaction strengths can lead to sharp spin - carrying excitonic resonances. we demonstrate this in a model weyl semi - metal with broken time - reversal symmetry and hubbard interactions, using grpa ( generalized random phase approximation ) analysis. excitons in weyl semi - metals have evoked interest as their condensation could lead to an axionic charge density wave order. however, we find that the leading instability corresponds to intra - valley spin density wave order which shifts the weyl points without opening a gap. our results suggest interesting directions for experimental studies of three dimensional dirac systems.
arxiv:1808.05233
a multiple operator integral ( moi ) is an indispensable tool in several branches of noncommutative analysis. however, there are substantial technical issues with the existing literature on the " separation of variables " approach to defining mois, especially when the underlying hilbert spaces are not separable. in this paper, we provide a detailed development of this approach in a very general setting that resolves existing technical issues. along the way, we characterize several kinds of " weak " operator valued integrals in terms of easily checkable conditions and prove a useful minkowski - type integral inequality for maps with values in a semifinite von neumann algebra.
arxiv:2107.03687
test - time compute scaling has emerged as a new axis along which to improve model accuracy, where additional computation is used at inference time to allow the model to think longer for more challenging problems. one promising approach for test - time compute scaling is search against a process reward model, where a model generates multiple potential candidates at each step of the search, and these partial trajectories are then scored by a separate reward model in order to guide the search process. the diversity of trajectories in the tree search process affects the accuracy of the search, since increasing diversity promotes more exploration. however, this diversity comes at a cost, as divergent trajectories have less kv sharing, which means they consume more memory and slow down the search process. previous search methods either do not perform sufficient exploration, or else explore diverse trajectories but have high latency. we address this challenge by proposing efficient tree search ( ets ), which promotes kv sharing by pruning redundant trajectories while maintaining necessary diverse trajectories. ets incorporates a linear programming cost model to promote kv cache sharing by penalizing the number of nodes retained, while incorporating a semantic coverage term into the cost model to ensure that we retain trajectories which are semantically different. we demonstrate how ets can achieve 1.8$\times$ reduction in average kv cache size during the search process, leading to 1.4$\times$ increased throughput relative to prior state - of - the - art methods, with minimal accuracy degradation and without requiring any custom kernel implementation. code is available at : https://github.com/squeezeailab/ets.
arxiv:2502.13575
- by - $n$ matrices with real entries. its subgroups are referred to as matrix groups or linear groups. the dihedral group example mentioned above can be viewed as a ( very small ) matrix group. another important matrix group is the special orthogonal group $\mathrm{SO}(n)$. it describes all possible rotations in $n$ dimensions. rotation matrices in this group are used in computer graphics. representation theory is both an application of the group concept and important for a deeper understanding of groups. it studies the group by its group actions on other spaces. a broad class of group representations are linear representations in which the group acts on a vector space, such as the three - dimensional euclidean space $\mathbb{R}^3$. a representation of a group $G$ on an $n$ - dimensional real vector space is simply a group homomorphism $\rho : G \to \mathrm{GL}(n, \mathbb{R})$ from the group to the general linear group. this way, the group operation, which may be abstractly given, translates to the multiplication of matrices, making it accessible to explicit computations. a group action gives further means to study the object being acted on. on the other hand, it also yields information about the group. group representations are an organizing principle in the theory of finite groups, lie groups, algebraic groups and topological groups, especially ( locally ) compact groups. === galois groups === galois groups were developed to help solve polynomial equations by capturing their symmetry features. for example, the solutions of the quadratic equation $ax^2 + bx + c = 0$ are given by $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$. each solution can be obtained by replacing the $\pm$ sign by $+$ or $-$ ; analogous formulae are known for cubic and quartic equations, but do not exist in
https://en.wikipedia.org/wiki/Group_(mathematics)
one of the main challenges for biomedical research lies in the computer - assisted integrative study of large and increasingly complex combinations of data in order to understand molecular mechanisms. the preservation of the materials and methods of such computational experiments with clear annotations is essential for understanding an experiment, and this is increasingly recognized in the bioinformatics community. our assumption is that offering means of digital, structured aggregation and annotation of the objects of an experiment will provide necessary meta - data for a scientist to understand and recreate the results of an experiment. to support this we explored a model for the semantic description of a workflow - centric research object ( ro ), where an ro is defined as a resource that aggregates other resources, e. g., datasets, software, spreadsheets, text, etc. we applied this model to a case study where we analysed human metabolite variation by workflows.
arxiv:1311.2789
we discuss the log minimal model theory for log surfaces. we show that the log minimal model program, the finite generation of log canonical rings, and the log abundance theorem for log surfaces hold true under assumptions weaker than the usual framework of the log minimal model theory.
arxiv:1001.3902
in several recent papers some concepts of convex analysis were extended to discrete sets. this paper is one more step in this direction. it is well known that a local minimum of a convex function is always its global minimum. we study some discrete objects that share this property and provide several examples of convex families related to graphs and to two - person games in normal form.
arxiv:2306.10948
in this paper, we discuss optical properties of the topologically charged rotating black hole. we study the horizon, the photon region, the shadow of the black hole and other observables. the results show that in addition to the black hole spin parameter $a$, the other two parameters, the tidal charge $\beta$ and the electric charge $q$, also affect the horizon, the photon region and the black hole shadow. within a certain range, as the three parameters increase, the horizon distance, the shape of the photon region and the black hole shadow all shrink. moreover, as these three parameters increase, the distortion parameter $\delta_s$ gradually increases, while the peak of the black hole energy emission rate decreases.
arxiv:2101.01374
this paper describes our approach for semeval - 2024 task 9 : brainteaser : a novel task defying common sense. the brainteaser task comprises multiple - choice question answering designed to evaluate the models ' lateral thinking capabilities. it consists of sentence puzzle and word puzzle subtasks that require models to defy default common - sense associations and exhibit unconventional thinking. we propose a unique strategy to improve the performance of pre - trained language models, notably the gemini 1. 0 pro model, in both subtasks. we employ static and dynamic few - shot prompting techniques and introduce a model - generated reasoning strategy that utilizes the llm ' s reasoning capabilities to improve performance. our approach demonstrated significant improvements, showing that it performed better than the baseline models by a considerable margin but fell short of performing as well as the human annotators, thus highlighting the efficacy of the proposed strategies.
arxiv:2405.16129
in this paper we review the predictions of the replica approach on the probability distribution of the overlaps among replicas and on the sample to sample fluctuations of this probability. we stress the role of replica equivalence in obtaining relations which do not depend on the form of replica symmetry breaking. a comparison is done with the results obtained with a different rigorous approach. the role of ultrametricity and of other algebraic properties is discussed. it is shown that the ultrametric solution can be obtained from a variational principle.
arxiv:cond-mat/9801081
we present a new inner bound for the rate region of the $t$ - stage successive - refinement problem with side - information. we also present a new upper bound for the rate - distortion function for lossy - source coding with multiple decoders and side - information. characterising this rate - distortion function is a long - standing open problem, and it is widely believed that the tightest upper bound is provided by theorem 2 of heegard and berger ' s paper " rate distortion when side information may be absent ", \emph{ieee trans. inform. theory}, 1985. we give a counterexample to heegard and berger ' s result.
arxiv:0901.1705
we prove that the chebyshev spline orthoprojectors are uniformly bounded on $L^\infty$.
arxiv:1807.07161
in this article, we aim to study the stability and dynamic transition of an electrically conducting fluid in the presence of an external uniform horizontal magnetic field and rotation based on a boussinesq approximation model. by analyzing the spectrum of the linear part of the model and verifying the validity of the principle of exchange of stability, we take a hybrid approach combining theoretical analysis with numerical computation to study the transition from a simple real eigenvalue, a pair of complex conjugate eigenvalues and a real eigenvalue of multiplicity two, respectively. the center manifold reduction theory is applied to reduce the infinite dimensional system to the corresponding finite dimensional one together with one or several non - dimensional transition numbers that determine the dynamic transition types. careful numerical computations are performed to determine these transition numbers as well as related temporal and flow patterns etc. our results indicate that both continuous and jump transitions can occur at certain parameter region.
arxiv:2110.07951
we present a cryogenic ion trapping system designed for large scale quantum simulation of spin models. our apparatus is based on a segmented - blade ion trap enclosed in a 4 k cryostat, which enables us to routinely trap over 100 $^{171}$yb$^+$ ions in a linear configuration for hours due to a low background gas pressure from differential cryo - pumping. we characterize the cryogenic vacuum by using trapped ion crystals as a pressure gauge, measuring both inelastic and elastic collision rates with the molecular background gas. we demonstrate nearly equidistant ion spacing for chains of up to 44 ions using anharmonic axial potentials. this reliable production and lifetime enhancement of large linear ion chains will enable quantum simulation of spin models that are intractable with classical computer modelling.
arxiv:1802.03118
this paper demonstrates that metre is a privileged indicator of authorial style in classical latin hexameter poetry. using only metrical features, pairwise classification experiments are performed between 5 first - century authors ( 10 comparisons ) using four different machine - learning models. the results showed a two - label classification accuracy of at least 95 % with samples as small as ten lines and no greater than eighty lines ( up to around 500 words ). these sample sizes are an order of magnitude smaller than those typically recommended for bow ( ' bag of words ' ) or n - gram approaches, and the reported accuracy is outstanding. additionally, this paper explores the potential for novelty ( forgery ) detection, or ' one - class classification '. an analysis of the disputed aldine additamentum ( sil. ital. puni. 8 : 144 - 225 ) concludes ( p = 0. 0013 ) that the metrical style differs significantly from that of the rest of the poem.
arxiv:1911.12478
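the entry above ( arxiv:1911.12478 ) reports pairwise author classification from metrical features alone. a hedged sklearn sketch of such a pairwise experiment is given below ; the synthetic feature vectors ( e.g. per - foot dactyl rates ) stand in for real scansion data, so the feature layout and class separation are assumptions rather than the paper ' s setup:

```python
# hedged sketch of pairwise authorship classification from metrical features;
# the synthetic vectors (e.g. per-foot dactyl rates) stand in for real
# scansion data, so the feature layout is an assumption, not the paper's.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_feats = 200, 8                      # e.g. 8 metrical rates per sample
author_a = rng.normal(0.45, 0.1, size=(n_samples, n_feats))
author_b = rng.normal(0.55, 0.1, size=(n_samples, n_feats))
features = np.vstack([author_a, author_b])
labels = np.array([0] * n_samples + [1] * n_samples)

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(model, features, labels, cv=5)
print(scores.mean())                             # pairwise accuracy estimate
```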
quantifying the forces between and within macromolecules is a necessary first step in understanding the mechanics of molecular structure, protein folding, and enzyme function and performance. in such macromolecular settings, dynamic single - molecule force spectroscopy ( dfs ) has been used to distort bonds. the resulting responses, in the form of rupture forces, work applied, and trajectories of displacements, have been used to reconstruct bond potentials. such approaches often rely on simple parameterizations of one - dimensional bond potentials, assumptions on equilibrium starting states, and / or large amounts of trajectory data. parametric approaches typically fail at inferring complex - shaped bond potentials with multiple minima, while piecewise estimation may not guarantee smooth results with the appropriate behavior at large distances. existing techniques, particularly those based on work theorems, also do not address spatial variations in the diffusivity that may arise from spatially inhomogeneous coupling to other degrees of freedom in the macromolecule, thereby presenting an incomplete picture of the overall bond dynamics. to solve these challenges, we have developed a comprehensive empirical bayesian approach that incorporates data and regularization terms directly into a path integral. all experimental and statistical parameters in our method are estimated empirically from the data. upon testing our method on simulated data, our regularized approach requires fewer data and allows simultaneous inference of both complex bond potentials and diffusivity profiles.
arxiv:1502.06415
we present a method to create diffusion - based video models from pretrained text - to - image ( t2i ) models. recently, animatediff proposed freezing the t2i model while only training temporal layers. we advance this method by proposing a unique architecture, incorporating a mapping network and frame - wise tokens, tailored for video generation while maintaining the diversity and creativity of the original t2i model. key innovations include novel loss functions for temporal smoothness and a mitigating gradient sampling technique, ensuring realistic and temporally consistent video generation despite limited public video data. we have successfully integrated video - specific inductive biases into the architecture and loss functions. our method, built on the frozen stablediffusion model, simplifies training processes and allows for seamless integration with off - the - shelf models like controlnet and dreambooth. project page : https://kwonminki.github.io/harivo
arxiv:2410.07763