text | source
---|---
We consider flows of a viscoelastic fluid that obey a constitutive law of integral type. Existence and uniqueness results for solutions of the initial boundary value problem are proved, and the stationary case is studied.
|
arxiv:1111.5029
|
We study invariant measures of a piecewise expanding map in $\mathbb{R}^m$ defined by an expanding similitude modulo a lattice. Using the result of Bang on a problem of Tarski, we show that when the similarity ratio is not less than $m+1$, the map has an absolutely continuous invariant measure equivalent to the $m$-dimensional Lebesgue measure, under a mild assumption on the fundamental domain. Applying the method to the case $m=2$, we obtain an alternative proof of the result of Akiyama and Caalim (2015), together with some improvements.
|
arxiv:1509.04785
|
We present a measurement of the double longitudinal spin asymmetry in inclusive $\pi^0$ production in polarized proton-proton collisions at $\sqrt{s} = 200$ GeV. The data were taken at the Relativistic Heavy Ion Collider with average beam polarizations of 26%. The measurements are the first of a program to study the longitudinal spin structure of the proton, using strongly interacting probes, at collider energies. The asymmetry is presented for transverse momenta of 1-5 GeV/c at mid-rapidity, where next-to-leading-order perturbative quantum chromodynamics (NLO pQCD) calculations describe the unpolarized cross section well. The observed asymmetry is small and is compared with an NLO pQCD calculation for a range of polarized gluon distributions.
|
arxiv:hep-ex/0404027
|
Cross-domain few-shot segmentation (CD-FSS) first pre-trains a model on a large-scale source-domain dataset and then transfers it to data-scarce target-domain datasets for pixel-level segmentation. The significant domain gap between the source and target datasets leads to a sharp decline in the performance of existing few-shot segmentation (FSS) methods in cross-domain scenarios. In this work, we discover an intriguing phenomenon: simply filtering different frequency components for target domains can lead to a significant performance improvement, sometimes as high as 14% mIoU. We then delve into this phenomenon for an interpretation, and find that such improvements stem from the reduced inter-channel correlation in feature maps, which benefits CD-FSS with enhanced robustness against domain gaps and larger activated regions for segmentation. Based on this, we propose a lightweight frequency masker, which further reduces channel correlations through an amplitude-phase masker (APM) module and an adaptive channel phase attention (ACPA) module. Notably, APM introduces only 0.01% additional parameters but improves average performance by over 10%, and ACPA introduces only 2.5% additional parameters but further improves performance by over 1.5%, significantly surpassing state-of-the-art CD-FSS methods.
|
arxiv:2410.22135
|
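The frequency-filtering idea above can be illustrated with a toy sketch. This is our own illustration, not the paper's APM or ACPA module: transform each channel of a feature map to the frequency domain, zero out a random subset of amplitude components while keeping the phase, and transform back.

```python
import numpy as np

def amplitude_mask(feat, keep=0.5, rng=None):
    """Toy illustration of 'filtering frequency components':
    per-channel 2D FFT, randomly drop a fraction (1 - keep) of
    amplitude components (phase is preserved), then inverse FFT.
    Not the paper's APM module; parameters are illustrative."""
    F = np.fft.fft2(feat, axes=(-2, -1))
    amp, phase = np.abs(F), np.angle(F)
    rng = rng or np.random.default_rng(0)
    mask = rng.random(amp.shape) < keep   # True = keep this component
    F_masked = amp * mask * np.exp(1j * phase)
    return np.real(np.fft.ifft2(F_masked, axes=(-2, -1)))

# With keep=1.0 nothing is dropped and the input is reconstructed.
feats = np.random.default_rng(1).standard_normal((4, 8, 8))
filtered = amplitude_mask(feats, keep=0.5)
```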
We evaluate the extent of the regions within the $\alpha$ Centauri AB star system where small planets are able to orbit for billion-year timescales, and we calculate the positions on the sky plane where planets on stable orbits about either stellar component may appear. We confirm the qualitative results of Wiegert and Holman (AJ 113, 1445, 1997) regarding the approximate size of the regions of stable orbits, which are larger for orbits retrograde with respect to the binary than for prograde orbits. Additionally, we find that mean motion resonances with the binary orbit leave an imprint on the limits of orbital stability, and the effects of the Lidov-Kozai mechanism are also readily apparent.
|
arxiv:1604.04917
|
CTA 102, classified as a flat spectrum radio quasar at z = 1.037, produced an exceptionally bright optical flare in September 2012. Following the Fermi-LAT detection of enhanced gamma-ray activity, we densely monitored this source in the optical and near-infrared bands for the subsequent ten nights using twelve telescopes in Japan and South Africa. On MJD 56197 (2012 September 27, 4-5 days after the peak of the bright gamma-ray flare), the polarized flux showed a transient increase, while the total flux and polarization angle (PA) remained almost constant during this "orphan polarized-flux flare". We also detected a prominent intra-night flare on MJD 56202. The total and polarized fluxes showed quite similar temporal variations, but the PA again remained constant during the flare. Interestingly, the polarization angles during the two flares were significantly different from the jet direction. The emergence of a new emission component with a polarization degree (PD) as high as 40% would be responsible for the two observed flares, and such a high PD indicates the presence of a highly ordered magnetic field at the emission site. We argue that the well-ordered magnetic field, and even the observed polarization angles, which are roughly perpendicular to the jet, are reasonably accounted for by transverse shock(s) propagating down the jet.
|
arxiv:1304.2453
|
The performance of cooperative diversity schemes at low signal-to-noise ratios (LSNR) was recently studied by Avestimehr et al. [1], who emphasized the importance of diversity gain over multiplexing gain at low SNRs. It has also been pointed out that continuous energy transfer to the channel is necessary for achieving the max-flow min-cut bound at LSNR. Motivated by this, we propose the use of selection decode-and-forward (SDF) at LSNR and analyze its performance in terms of the outage probability. We also propose an energy optimization scheme which further brings down the outage probability.
|
arxiv:0707.0234
|
We introduce seven new versions of the Kirchhoff-law-Johnson-(like)-noise (KLJN) classical physical secure key exchange scheme and a new transient protocol for practically perfect security. While these practical improvements offer progressively enhanced security and/or speed under non-ideal conditions, the fundamental physical laws providing the security remain the same. In the "intelligent" KLJN (iKLJN) scheme, Alice and Bob utilize the fact that they exactly know not only their own resistor value but also the stochastic time function of their own noise, which they generate before feeding it into the loop. In the "multiple" KLJN (MKLJN) system, Alice and Bob have publicly known identical sets of different resistors with a proper, publicly known truth table for the bit interpretation of their combination. In the "keyed" KLJN (KKLJN) system, by using secure communication with a previously shared key, Alice and Bob share a proper time-dependent truth table for the bit interpretation of the resistor situation for each secure bit exchange step while generating the next key. The remaining four KLJN schemes are combinations of the above protocols that synergistically enhance the security properties: the "intelligent-multiple" (iMKLJN), the "intelligent-keyed" (iKKLJN), the "keyed-multiple" (KMKLJN) and the "intelligent-keyed-multiple" (iKMKLJN) KLJN key exchange systems. Finally, we introduce a new transient protocol offering practically perfect security without privacy amplification, which is not needed in practical applications but is shown for the sake of ongoing discussions.
|
arxiv:1302.3901
|
Rings are ubiquitous around giant planets in our solar system. They evolve jointly with the nearby satellite system. They could form either during the giant planet formation process or much later, as a result of large-scale dynamical instabilities either in the local satellite system or at the planetary scale. We review here the main characteristics of rings in our solar system, and discuss their main evolution processes and possible origins. We also discuss the recent discovery of rings around small bodies.
|
arxiv:1805.08963
|
In this paper we investigate a staggered discontinuous Galerkin method for the Helmholtz equation with large wave number on general quadrilateral and polygonal meshes. The method is highly flexible in allowing rough grids, such as trapezoidal grids and highly distorted grids, and at the same time is numerical-flux free. Furthermore, it allows hanging nodes, which can simply be treated as additional vertices. By exploiting a modified duality argument, stability and convergence can be proved under the condition that $\kappa h$ is sufficiently small, where $\kappa$ is the wave number and $h$ is the mesh size. Error estimates for both the scalar and vector variables in the $L^2$ norm are established. Several numerical experiments verify our theoretical results and demonstrate the capability of our method for capturing singular solutions.
|
arxiv:1904.12091
|
Hyperparameter optimization, also known as hyperparameter tuning, is a widely recognized technique for improving model performance. Regrettably, when training private ML models, many practitioners often overlook the privacy risks associated with hyperparameter optimization, which could potentially expose sensitive information about the underlying dataset. Currently, the only existing approach to privacy-preserving hyperparameter optimization is to uniformly and randomly select hyperparameters for a number of runs, subsequently reporting the best-performing hyperparameter. In contrast, in non-private settings, practitioners commonly utilize "adaptive" hyperparameter optimization methods such as Gaussian-process-based optimization, which select the next candidate based on information gathered from previous outputs. This substantial contrast between private and non-private hyperparameter optimization underscores a critical concern. In our paper, we introduce DP-HyPO, a pioneering framework for "adaptive" private hyperparameter optimization, aiming to bridge the gap between private and non-private hyperparameter optimization. To accomplish this, we provide a comprehensive differential privacy analysis of our framework. Furthermore, we empirically demonstrate the effectiveness of DP-HyPO on a diverse set of real-world datasets.
|
arxiv:2306.05734
|
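The non-adaptive baseline described above, uniform random selection with the best run reported, can be sketched as follows. The function name and the toy scoring function are our own illustration; a real deployment would also account for the privacy cost of the number of runs.

```python
import random

def private_random_search(candidates, private_eval, n_runs, rng):
    """Baseline private HPO the abstract contrasts with adaptive
    methods: pick hyperparameters uniformly at random for n_runs
    runs and report the best-performing one (hypothetical sketch;
    private_eval stands in for, e.g., a DP-SGD training run)."""
    best_hp, best_score = None, float("-inf")
    for _ in range(n_runs):
        hp = rng.choice(candidates)   # uniform, non-adaptive choice
        score = private_eval(hp)      # privately computed utility
        if score > best_score:
            best_hp, best_score = hp, score
    return best_hp, best_score

rng = random.Random(0)
candidates = [1e-3, 1e-2, 1e-1]
best_hp, best_score = private_random_search(
    candidates, lambda lr: -abs(lr - 1e-2), n_runs=10, rng=rng)
```

An adaptive method like DP-HyPO would instead bias the sampling distribution toward promising regions using earlier outputs, which is exactly what the uniform baseline cannot do.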
We present Local Naive Bayes Nearest Neighbor, an improvement to the NBNN image classification algorithm that increases classification accuracy and improves its ability to scale to large numbers of object classes. The key observation is that only the classes represented in the local neighborhood of a descriptor contribute significantly and reliably to their posterior probability estimates. Instead of maintaining a separate search structure for each class, we merge all of the reference data together into one search structure, allowing quick identification of a descriptor's local neighborhood. We show an increase in classification accuracy when we ignore adjustments to the more distant classes, and show that the run time grows with the log of the number of classes rather than linearly, as in the original. This gives a 100-times speed-up over the original method on the Caltech 256 dataset. We also provide the first head-to-head comparison of NBNN against spatial pyramid methods using a common set of input features. We show that Local NBNN outperforms all previous NBNN-based methods and the original spatial pyramid model. However, we find that Local NBNN, while competitive, does not beat state-of-the-art spatial pyramid methods that use local soft assignment and max-pooling.
|
arxiv:1112.0059
|
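The core Local NBNN idea, one merged search structure with distance adjustments only for locally represented classes, can be sketched roughly as follows. Brute-force search stands in for the approximate k-NN index a real implementation would use, and the parameter choices are illustrative.

```python
import numpy as np

def local_nbnn_classify(query_descs, ref_descs, ref_labels, n_classes, k=3):
    """Rough sketch of Local NBNN: all reference descriptors live in
    ONE search structure, and only the classes found among a query
    descriptor's k nearest neighbours get a (distance - background)
    adjustment, where the background is the (k+1)-th neighbour
    distance. Classes absent from the neighbourhood contribute 0."""
    totals = np.zeros(n_classes)
    for d in query_descs:
        dists = np.linalg.norm(ref_descs - d, axis=1)
        order = np.argsort(dists)[: k + 1]
        background = dists[order[-1]]          # (k+1)-th neighbour distance
        for c in set(ref_labels[order[:k]]):
            in_c = [i for i in order[:k] if ref_labels[i] == c]
            totals[c] += dists[in_c].min() - background
    return int(np.argmin(totals))
```

Because the adjustments are negative only for nearby classes, the lowest total picks the class whose descriptors dominate the local neighbourhoods.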
Exceptional points (EPs) correspond to degeneracies of open systems. They are attracting much interest in optics, optoelectronics, plasmonics, and condensed matter physics. In the classical and semiclassical approaches, Hamiltonian EPs (HEPs) are usually defined as degeneracies of non-Hermitian Hamiltonians such that at least two eigenfrequencies are identical and the corresponding eigenstates coalesce. HEPs result from continuous, mostly slow, nonunitary evolution without quantum jumps. Clearly, quantum jumps should be included in a fully quantum approach to make it equivalent to, e.g., the Lindblad master-equation approach. Thus, we suggest defining EPs via degeneracies of a Liouvillian superoperator (including the full Lindbladian term, LEPs), and we clarify the relations between HEPs and LEPs. We prove two main theorems: Theorem 1 shows that, in the quantum limit, LEPs and HEPs must have essentially different properties. Theorem 2 dictates a condition under which, in the "semiclassical" limit, LEPs and HEPs recover the same properties. In particular, we demonstrate the validity of Theorem 1 by studying systems which have (1) an LEP but no HEPs, and (2) both LEPs and HEPs, but for shifted parameters. As for Theorem 2, (3) we show that these two types of EPs become essentially equivalent in the semiclassical limit. We introduce a series of mathematical techniques to unveil analogies and differences between HEPs and LEPs. We analytically compare LEPs and HEPs for some quantum and semiclassical prototype models with loss and gain.
|
arxiv:1909.11619
|
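For readers unfamiliar with HEPs, a standard two-mode toy model (a textbook illustration, not one of the paper's prototype models) shows the eigenvalue coalescence that defines an exceptional point:

```python
import numpy as np

def eigvals(g, gamma, omega=0.0):
    """Toy 2x2 non-Hermitian Hamiltonian with a Hamiltonian EP:
        H = [[omega - i*gamma, g], [g, omega + i*gamma]]
    has eigenvalues omega +/- sqrt(g**2 - gamma**2), so the two
    eigenvalues (and their eigenvectors) coalesce at g = gamma."""
    H = np.array([[omega - 1j * gamma, g],
                  [g, omega + 1j * gamma]])
    return np.linalg.eigvals(H)

split = eigvals(2.0, 1.0)   # away from the EP: two real eigenvalues +/- sqrt(3)
merged = eigvals(1.0, 1.0)  # at the EP: a single degenerate eigenvalue
```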
We construct the holographic dual of two colored quasiparticles in maximally supersymmetric Yang-Mills theory entangled in a color-singlet EPR pair. In the holographic dual, the entanglement is encoded in the geometry of a non-traversable wormhole on the worldsheet of the flux tube connecting the pair. This gives a simple example supporting the recent claim by Maldacena and Susskind that EPR pairs and non-traversable wormholes are equivalent descriptions of the same physics.
|
arxiv:1307.1132
|
Quadrotors are highly nonlinear dynamical systems that require carefully tuned controllers to be pushed to their physical limits. Recently, learning-based control policies have been proposed for quadrotors, as they would potentially allow learning direct mappings from high-dimensional raw sensory observations to actions. Due to sample inefficiency, training such learned controllers on the real platform is impractical or even impossible. Training in simulation is attractive but requires transferring policies between domains, which demands that trained policies be robust to the domain gap. In this work, we make two contributions: (i) we perform the first benchmark comparison of existing learned control policies for agile quadrotor flight and show that training a control policy that commands body rates and thrust results in more robust sim-to-real transfer compared to a policy that directly specifies individual rotor thrusts; (ii) we demonstrate for the first time that such a control policy trained via deep reinforcement learning can control a quadrotor in real-world experiments at speeds over 45 km/h.
|
arxiv:2202.10796
|
Both classical and relativistic weak-field, slow-motion perturbations to planetary orbits can be treated as perturbative corrections to the Keplerian model. In particular, tidal forces and general relativity (GR) induce small precession rates of the apsidal line. Accurate measurements of these effects in transiting exoplanets could be used to test GR and to gain information about planetary interiors. Unfortunately, models for transiting planets have a high degree of degeneracy in the orbital parameters, which, combined with the uncertainties of photometric transit observations, results in large errors on the determination of the argument of periastron and precludes a direct evaluation of the apsidal line precession. Moreover, tidal and GR precession timescales are many orders of magnitude longer than orbital periods, so that over the observational time spans required to accumulate a precession signal strong enough to be detected, even small systematic errors in transit ephemerides add up to cancel out the tiny variations due to precession. Here we present a more feasible way to detect tidal and GR precession rates through the observation of variations of the time interval ($\Delta\tau$) between primary and secondary transits of hot Jupiters, and we propose the most promising target for such a detection, WASP-14 b. For this planet we expect a cumulative $\Delta\tau \approx -250$ s, due to tidal and relativistic precession, since its first photometric observations.
|
arxiv:2105.02555
|
We propose to apply the back and forth nudging (BFN) method, used for geophysical data assimilation, to estimate the initial state of a quantum system. We consider a cloud of atoms interacting with a magnetic field while a single observable is continuously measured over time using homodyne detection. The BFN method relies on designing an observer forward and backward in time. The state of the BFN observer is continuously updated by the measured data and tends to converge to the system's state. The proposed estimator appears to be globally asymptotically convergent when the system is observable. A detailed convergence proof and simulations are given for the two-level case. A discussion of the extension of the algorithm to the multilevel case is also presented.
|
arxiv:1007.3911
|
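The forward-backward observer structure of BFN can be sketched on a classical stand-in: a harmonic oscillator $x' = Ax$ with position measurements. This is our own illustrative toy, not the paper's quantum system; the gain, step size, and sweep count are arbitrary choices.

```python
import numpy as np

def bfn_estimate(y, dt, K=2.0, sweeps=20):
    """Minimal back-and-forth nudging sketch: each sweep integrates
    a nudged observer forward through the measurement record y
    (y[k] = position at step k), then backward through it in
    reverse; the backward end state is the running estimate of the
    initial state x(0). Illustrative classical stand-in only."""
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator
    e1 = np.array([1.0, 0.0])                 # we measure x[0] only
    x = np.zeros(2)                           # initial guess
    for _ in range(sweeps):
        for yk in y:                          # forward pass, nudged to data
            x = x + dt * (A @ x + K * (yk - x[0]) * e1)
        for yk in y[::-1]:                    # backward pass (time reversed,
            x = x + dt * (-A @ x + K * (yk - x[0]) * e1)  # nudging sign kept stable)
    return x                                  # estimate of x(0)
```

The nudging term damps the observer error in both time directions, which is what makes the repeated sweeps contract toward the true initial state when the system is observable.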
Many proteins interact with and deform double-stranded DNA in cells. Single-molecule experiments have studied the elasticity of DNA with helix-deforming proteins, including proteins that bend DNA. These experiments increase the need for theories of DNA elasticity which include helix-deforming proteins. Previous theoretical work on bent DNA has examined a long DNA molecule with many nonspecifically binding proteins. However, recent experiments used relatively short DNA molecules with a single, well-defined bend site. Here we develop a simple theoretical description of the effect of a single bend. We then include the description of the bend in the finite wormlike chain (FWLC) model of short DNA molecules attached to beads. We predict how the DNA force-extension relation changes due to formation of a single permanent kink, at all values of the applied stretching force. Our predictions show that high-resolution single-molecule experiments could determine the bend angle induced upon protein binding.
|
arxiv:physics/0607267
|
Transition disks have dust-depleted inner regions and may represent an intermediate step of an ongoing disk dispersal process, where planet formation is probably in progress. Recent millimetre observations of transition disks reveal radially and azimuthally asymmetric structures, where micron- and millimetre-sized dust particles may not spatially coexist. These properties can be the result of particle trapping and grain growth in pressure bumps originating from the disk's interaction with a planetary companion. The multiple features observed in some transition disks, such as SR 21, suggest the presence of more than one planet. We study the gas and dust distributions of a disk hosting two massive planets as a function of different disk and dust parameters. Observational signatures, such as the spectral energy distribution, sub-millimetre images, and polarised images, are simulated for the various parameters. We confirm that planets can lead to particle trapping, although for a disk with high viscosity ($\alpha_{\rm turb} = 10^{-2}$) the planet should be more massive than $5\,M_{\rm Jup}$ and dust fragmentation should occur with low efficiency ($v_{f} \sim 30\,{\rm m\,s}^{-1}$). This will lead to a ring-like feature, as observed in transition disks in the millimetre. When trapping occurs, we find that a smooth distribution of micron-sized grains throughout the disk, sometimes observed in scattered light, can only happen if the combination of planet mass and turbulence is such that small grains are not fully filtered out. A high disk viscosity ($\alpha_{\rm turb} = 10^{-2}$) ensures a replenishment of the cavity in micron-sized dust, while for lower viscosity ($\alpha_{\rm turb} = 10^{-3}$) the planet mass is constrained to be less than $5\,M_{\rm Jup}$. In these cases, the gas distribution is likely to show low-amplitude azimuthal asymmetries caused by disk eccentricity rather than by long-lived vortices.
|
arxiv:1410.5963
|
In many astronomical works, the structure of vector fields, such as the differences in celestial object coordinates between catalogs or celestial object velocities, is analyzed by decomposition into vector spherical harmonics (VSH). This method has shown high efficiency in many studies, but, at the same time, comparing the results obtained by different authors can be difficult because of different approaches to constructing the VSH system and even different notations. To facilitate this task, this paper provides a comparison of the three VSH systems most often used in works on astrometry and stellar astronomy.
|
arxiv:2410.06075
|
For Huntington's disease, identification of brain regions related to motor impairment can be useful for developing interventions to alleviate the motor symptoms, the major symptoms of the disease. However, the effects of the brain regions on motor impairment may vary across different groups of patients. Hence, our interest is not only to identify the brain regions but also to understand how their effects on motor impairment differ by patient group. This can be cast as a model selection problem for a varying-coefficient regression. However, this is challenging when there is a pre-specified group structure among the variables. We propose a novel variable selection method for a varying-coefficient regression with such structured variables. Our method is empirically shown to select relevant variables consistently, and it screens irrelevant variables better than existing methods. Hence, our method leads to a model with higher sensitivity, a lower false discovery rate, and higher prediction accuracy than existing methods. Finally, we found that the effects of the brain regions on motor impairment differ by disease severity. To the best of our knowledge, our study is the first to identify such interaction effects between disease severity and brain regions, which indicates the need for intervention customized by disease severity.
|
arxiv:2007.06076
|
The emerging technologies related to mobile data, especially CDR data, have great potential for mobility and transportation applications. However, such data present some challenges due to their spatio-temporal characteristics and sparseness. Therefore, in this article, we introduce a new model to refine the positioning accuracy of mobile devices using only CDR data and coverage area locations. The adopted method has three steps: first, we determine which movement model (move or stay) is associated with the coverage areas where the mobile device was connected, using a Kalman filter; then, we simultaneously estimate the location or position of the device; finally, we apply map matching to snap the positioning to the correct road segment. The results are very encouraging; nevertheless, improvements can be made to the movement models and the map matching, for example by introducing a more sophisticated, data-driven movement model and a map matching that uses the detected movement model type, matching "stay" locations to buildings and "move" locations to roads.
|
arxiv:1902.09399
|
The alignment of interstellar dust grains is described by the joint distribution function for certain "internal" and "external" variables, where the former describe the orientation of a grain's axes with respect to its angular momentum, J, and the latter describe the orientation of J relative to the interstellar magnetic field. I show how the large disparity between the dynamical timescales of the internal and external variables, typically 2-3 orders of magnitude, can be exploited to greatly simplify calculations of the required distribution. The method is based on an "adiabatic approximation" which closely resembles the Born-Oppenheimer approximation in quantum mechanics. The adiabatic approximation prescribes an analytic distribution function for the "fast" dynamical variables and a simplified Fokker-Planck equation for the "slow" variables which can be solved straightforwardly using various techniques. These solutions are accurate to order epsilon, where epsilon is the ratio of the fast and slow dynamical timescales. As a simple illustration of the method, I derive an analytic solution for the joint distribution established when Barnett relaxation acts in concert with gas damping. The statistics of the analytic solution agree with the results of laborious numerical calculations which do not exploit the adiabatic approximation.
|
arxiv:astro-ph/9707126
|
In this paper, we introduce a mathematical framework for analyzing and optimizing multi-operator cellular networks that are allowed to share spectrum licenses and infrastructure elements. The proposed approach exploits stochastic geometry for modeling the locations of cellular base stations and for computing the aggregate average rate. The trade-offs that emerge from sharing spectrum frequencies and cellular base stations are quantified and discussed.
|
arxiv:1608.06168
|
The aim of this paper is to leverage the free-energy principle and its corollary process theory, active inference, to develop a generic, generalizable model of the representational capacities of living creatures; that is, a theory of phenotypic representation. Given their ubiquity, we are concerned with distributed forms of representation (e.g., population codes), whereby patterns of ensemble activity in living tissue come to represent the causes of sensory input or data. The active inference framework rests on the Markov blanket formalism, which allows us to partition systems of interest, such as biological systems, into internal states, external states, and the blanket (active and sensory) states that render internal and external states conditionally independent of each other. In this framework, the representational capacity of living creatures emerges as a consequence of their Markovian structure and nonequilibrium dynamics, which together entail a dual-aspect information geometry. This entails a modest representational capacity: internal states have an intrinsic information geometry that describes their trajectory over time in state space, as well as an extrinsic information geometry that allows internal states to encode (the parameters of) probabilistic beliefs about (fictive) external states. Building on this, we describe here how, in an automatic and emergent manner, information about stimuli can come to be encoded by groups of neurons bound by a Markov blanket; this is known as the neuronal packet hypothesis. As a concrete demonstration of this type of emergent representation, we present numerical simulations showing that self-organizing ensembles of active inference agents sharing the right kind of probabilistic generative model are able to encode recoverable information about a stimulus array.
|
arxiv:2008.03238
|
Here we investigate the connection between broad emission line shapes and the continuum light curve variability timescales of type-1 active galactic nuclei (AGN). We developed a new model that describes optical broad emission lines with an accretion disk line profile plus additional ring emission. We connect ring radii with orbital timescales derived from optical light curves and, using Kepler's third law, calculate the mass of the central supermassive black hole (SMBH). The obtained central black hole masses are in good agreement with those from other methods. This indicates that the variability timescales of AGN may not be stochastic, but rather connected to orbital timescales which depend on the central SMBH mass.
|
arxiv:1805.07007
|
Navigating efficiently to an object in an unexplored environment is a critical skill for general-purpose intelligent robots. Recent approaches to this object goal navigation problem have embraced a modular strategy, integrating classical exploration algorithms (notably frontier exploration) with a learned semantic mapping/exploration module. This paper introduces a novel informative path planning and 3D object probability mapping approach. The mapping module computes the probability of the object of interest through semantic segmentation and a Bayes filter. Additionally, it stores probabilities for common objects, which semantically guides the exploration based on common-sense priors from a large language model. The planner terminates when the current viewpoint captures enough voxels identified with high confidence as the object of interest. Although our planner follows a zero-shot approach, it achieves state-of-the-art performance as measured by the success weighted by path length (SPL) and soft SPL in the Habitat ObjectNav Challenge 2023, outperforming other works by more than 20%. Furthermore, we validate its effectiveness on real robots. Project webpage: https://ippon-paper.github.io/
|
arxiv:2410.19697
|
Contrastive vision-language pre-training, known as CLIP, has shown promising effectiveness in addressing downstream image recognition tasks. However, recent works revealed that the CLIP model can be implanted with a downstream-oriented backdoor. On downstream tasks, a victim model performs well on clean samples but predicts a specific target class whenever a specific trigger is present. For injecting a backdoor, existing attacks depend on a large amount of additional data to maliciously fine-tune the entire pre-trained CLIP model, which makes them inapplicable to data-limited scenarios. In this work, motivated by the recent success of learnable prompts, we address this problem by injecting a backdoor into the CLIP model in the prompt learning stage. Our method, named BadCLIP, is built on a novel and effective mechanism for backdoor attacks on CLIP, i.e., influencing both the image and text encoders with the trigger. It consists of a learnable trigger applied to images and a trigger-aware context generator, such that the trigger can change text features via trigger-aware prompts, resulting in a powerful and generalizable attack. Extensive experiments conducted on 11 datasets verify that the clean accuracy of BadCLIP is similar to that of advanced prompt learning methods and the attack success rate is higher than 99% in most cases. BadCLIP is also generalizable to unseen classes, and shows a strong generalization capability under cross-dataset and cross-domain settings.
|
arxiv:2311.16194
|
QMA (Quantum Merlin-Arthur) is the quantum analogue of the class NP. There are a few QMA-complete problems, most notably the "local Hamiltonian" problem introduced by Kitaev. In this dissertation we show some new QMA-complete problems. The first one is "consistency of local density matrices": given several density matrices describing different (constant-size) subsets of an n-qubit system, decide whether these are consistent with a single global state. This problem was first suggested by Aharonov. We show that it is QMA-complete, via an oracle reduction from local Hamiltonian. This uses algorithms for convex optimization with a membership oracle, due to Yudin and Nemirovskii. Next we show that two problems from quantum chemistry, "fermionic local Hamiltonian" and "N-representability," are QMA-complete. These problems arise in calculating the ground state energies of molecular systems. N-representability is a key component in recently developed numerical methods using the contracted Schrödinger equation. Although these problems have been studied since the 1960s, it is only recently that the theory of quantum computation has allowed us to properly characterize their complexity. Finally, we study some special cases of the consistency problem, pertaining to 1-dimensional and "stoquastic" systems. We also give an alternative proof of a result due to Jaynes: whenever local density matrices are consistent, they are consistent with a Gibbs state.
|
arxiv:0712.3041
|
we use the theory of pseudo - holomorphic quilts to establish a counterpart, in symplectic floer homology, to the gysin sequence for the homology of a sphere - bundle. in a motivating class of examples, this " symplectic gysin sequence " is precisely analogous to an exact sequence describing the behaviour of seiberg - witten monopole floer homology for 3 - manifolds under connected sum.
|
arxiv:0807.1863
|
in this paper, we present toss, which introduces text to the task of novel view synthesis ( nvs ) from just a single rgb image. while zero - 1 - to - 3 has demonstrated impressive zero - shot open - set nvs capability, it treats nvs as a pure image - to - image translation problem. this approach suffers from the challengingly under - constrained nature of single - view nvs : the process lacks means of explicit user control and often results in implausible nvs generations. to address this limitation, toss uses text as high - level semantic information to constrain the nvs solution space. toss fine - tunes text - to - image stable diffusion pre - trained on large - scale text - image pairs and introduces modules specifically tailored to image and camera pose conditioning, as well as dedicated training for pose correctness and preservation of fine details. comprehensive experiments are conducted with results showing that our proposed toss outperforms zero - 1 - to - 3 with more plausible, controllable and multiview - consistent nvs results. we further support these results with comprehensive ablations that underscore the effectiveness and potential of the introduced semantic guidance and architecture design.
|
arxiv:2310.10644
|
domain specificity of embedding models is critical for effective performance. however, existing benchmarks, such as finmteb, are primarily designed for high - resource languages, leaving low - resource settings, such as korean, under - explored. directly translating established english benchmarks often fails to capture the linguistic and cultural nuances present in low - resource domains. in this paper, titled twice : what advantages can low - resource domain - specific embedding models bring? a case study on korea financial texts, we introduce korfinmteb, a novel benchmark for the korean financial domain, specifically tailored to reflect its unique cultural characteristics in low - resource languages. our experimental results reveal that while the models perform robustly on a translated version of finmteb, their performance on korfinmteb uncovers subtle yet critical discrepancies, especially in tasks requiring deeper semantic understanding, that underscore the limitations of direct translation. this discrepancy highlights the necessity of benchmarks that incorporate language - specific idiosyncrasies and cultural nuances. the insights from our study advocate for the development of domain - specific evaluation frameworks that can more accurately assess and drive the progress of embedding models in low - resource settings.
|
arxiv:2502.07131
|
a graph on 5 vertices consisting of 2 copies of the cycle graph c3 sharing a common vertex is called the butterfly graph ( b ). the smallest natural number s such that any two - colouring ( say red and blue ) of the edges of kj * s has a copy of a red b or a blue g is called the multipartite ramsey number of the butterfly graph versus g. this number is denoted by mj ( b, g ). in this paper we find the exact values of mj ( b, g ) when g represents any connected proper subgraph of k4 with at least one edge.
|
arxiv:1901.01658
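The butterfly graph itself is small enough to write down directly; a minimal pure-Python sketch (vertex labels arbitrary) that encodes B and checks its basic parameters:

```python
# Butterfly graph B: two copies of the cycle C3 sharing one common vertex.
# Vertices 0..4; vertex 0 is the shared cut vertex.
BUTTERFLY_EDGES = {
    frozenset(e) for e in [(0, 1), (1, 2), (2, 0),   # first triangle
                           (0, 3), (3, 4), (4, 0)]   # second triangle
}

def degree(v, edges):
    """Number of edges incident to vertex v."""
    return sum(1 for e in edges if v in e)

# B has 5 vertices and 6 edges; the shared vertex has degree 4.
n_vertices = len({v for e in BUTTERFLY_EDGES for v in e})
n_edges = len(BUTTERFLY_EDGES)
```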
|
this paper continues our previous study of feynman integrals in configuration spaces and their algebro - geometric and motivic aspects. we consider here both massless and massive feynman amplitudes, from the point of view of potential theory. we consider a variant of the wonderful compactification of configuration spaces that works simultaneously for all graphs with a given number of vertices and that also accounts for the external structure of feynman graphs. as in our previous work, we consider two versions of the feynman amplitude in configuration space, which we refer to as the real and complex versions. in the real version, we show that we can extend to the massive case a method of evaluating feynman integrals, based on expansion in gegenbauer polynomials, that we investigated previously in the massless case. in the complex setting, we show that we can use algebro - geometric methods to renormalize the feynman amplitudes, so that the renormalized values of the feynman integrals are given by periods of a mixed tate motive. the regularization and renormalization procedure is based on pulling back the form to the wonderful compactification and replacing it with a cohomologous one with logarithmic poles. a complex of forms with logarithmic poles, endowed with an operator of pole subtraction, determines a rota - baxter algebra on the wonderful compactifications. we can then apply the renormalization procedure via birkhoff factorization, after interpreting the regularization as an algebra homomorphism from the connes - kreimer hopf algebra of feynman graphs to the rota - baxter algebra. we obtain in this setting a description of the renormalization group. we also extend the period interpretation to the case of dirac fermions and gauge bosons.
|
arxiv:1308.5687
|
after an x - ray binary experiences a transient jet ejection, it undergoes a phase in which its x - ray light curve is dominated, for some time, by thermal emission from an accretion disk surrounding the black hole. the accretion physics in the thermal - dominant state is understood better than in any other, and it is therefore the best state for comparing observations to theoretical models. here, i present simulations that study the way a thermally - emitting disk might be expected to behave immediately after a large - scale, steady jet has been removed from the system in the form of a sudden ejection. i simulate the ejection ' s effect on the disk by allowing the strength of turbulence ( modeled by the alpha parameter of shakura and sunyaev ) to increase rapidly in time, and i show how this change can lead to an outburst in an otherwise - steady disk. the motivation for treating the jet removal in this way is the fact that many models for jets involve large - scale magnetic fields that should inhibit the magnetorotational instability believed to drive turbulence ; this should naturally lead to a rapid increase in turbulence when the magnetic field is ejected from the system or otherwise destroyed during the ejection event. i show how the timescale and luminosity of the outburst can be controlled by the manner in which alpha is allowed to change, and i briefly discuss ways in which these simulations can be compared to observations of x - ray binaries, in particular grs 1915 + 105, which shows the most complex and variable behavior of any black hole system in outburst.
|
arxiv:astro-ph/0612236
|
this presentation at the conference " 50 years of cp violation ", 10 - 11 july 2014, held at queen mary university of london covers an introduction to time reversal and motion reversal, the phenomenology of k0 - k0bar transitions, unitarity relations, the first observation of t violation in 1970, later results from bell - steinberger unitarity analyses, searches for t violation in k0 to pi ell nu decay amplitudes and in the transverse muon polarisation of k to pi mu nu decays, direct observation of t violation by motion reversal in k0 - k0bar transitions, and " direct t violation " in k0 to pi pi, i = 2 decays ( epsilon - prime ).
|
arxiv:1411.1862
|
we show the best current simulations of the absorption and emission features predicted from thermal - radiative winds produced from x - ray illumination of the outer accretion disc in binary systems. we use the density and velocity structure derived from a radiation hydrodynamic code as input to a monte - carlo radiation transport calculation. the initial conditions are matched to those of the black hole binary system h1743 - 322 in its soft, disc dominated state, where wind features are seen in chandra grating data. our simulation fits well to the observed line profile, showing that these physical wind models can be the origin of the absorption features seen, rather than requiring a magnetically driven wind. we show how the velocity structure is the key observable discriminator between magnetic and thermal winds. magnetic winds are faster at smaller radii, whereas thermal winds transition to a static atmosphere at smaller radii. new data from xrism ( due for launch jan 2022 ) will give an unprecedented view of the physics of the wind launch and acceleration processes, but the existence of static atmospheres in small disc systems already rules out magnetic winds which assume self - similar magnetic fields from the entire disc as the origin of the absorption features seen.
|
arxiv:1911.01660
|
we investigated the electronic structure and lattice dynamics of multiferroic mnwo4 by optical spectroscopy. with variation of polarization, temperature, and magnetic field, we obtained optical responses over a wide range of photon energies. the electronic structure of mnwo4 near to the fermi level was examined, with inter - band transitions identified in optical conductivity spectra above a band - gap of 2. 5 ev. as for the lattice dynamics, we identified all the infrared transverse optical phonon modes available according to the group - theory analysis. although we did not observe much change in global electronic structure across the phase transition temperatures, an optical absorption at around 2. 2 ev showed an evident change depending upon the spin configuration and magnetic field. the behavior of this band - edge absorption indicates that spin - orbit coupling plays an important role in multiferroic mnwo4.
|
arxiv:1004.4254
|
we present a novel design of loop - gap resonator, the loop - zag resonator, for sub - x - band electron - spin resonance spectroscopy. the loop - zag design can achieve improved coupling to small - sample spin systems through the improvement of sample filling factor and rf $ b _ 1 $ field. by introducing " zags " to the resonator ' s gap path, the capacitance is increased, accommodating a smaller loop size and thereby a larger filling factor to maintain the requisite resonant frequency. we present experimental spectra on five different resonators, each with approximately the same resonant frequency of $ \ sim2. 9 $ ~ ghz, showing that an increase in the number of zags and reduction in loop size gives rise to higher sensitivity. finite - element simulations of these resonators provide estimates of the improved filling factors obtained through the addition of zags. the frequency range over which this loop - zag design is practical enables a breadth of future applications in microwave engineering, including esr and esr - like quantum information microwave techniques.
|
arxiv:2307.11269
|
we formulate a " correct " version of the quillen conjecture on linear group homology for certain arithmetic rings and provide evidence for the new conjecture. in this way we predict that the linear group homology has a direct summand looking like an unstable form of milnor k - theory and we call this new theory " homological symbols algebra ". as a byproduct we prove the quillen conjecture in homological degree two for the rank two and the prime 5.
|
arxiv:0804.3553
|
a quark model is applied to the spectrum of baryons containing heavy quarks. the model gives masses for the known heavy baryons that are in agreement with experiment, but for the doubly - charmed baryon cascade _ { cc }, the model prediction is too heavy. mixing between the cascade _ q and cascade _ q ^ \ prime states is examined and is found to be small for the lowest lying states. in contrast with this, mixing between the cascade _ { bc } and cascade _ { bc } ^ \ prime states is found to be large, and the implication of this mixing for properties of these states is briefly discussed. we also examine heavy - quark spin - symmetry multiplets, and find that many states in the model can be placed in such multiplets. we compare our predictions with those of a number of other authors.
|
arxiv:0711.2492
|
conflict - free replicated data types ( crdts ) are distributed data structures designed for fault tolerance and high availability. crdts can be taxonomized into state - based crdts, in which replicas apply updates locally and periodically broadcast their local state to other replicas, and operation - based ( op - based ) crdts, in which every state - updating operation is individually broadcast and applied at each replica. in the literature, state - based and op - based crdts are considered equivalent due to the existence of algorithms that transform one kind of crdt into the other. in particular, verification techniques and results for one kind of crdt are often said to be applicable to the other kind, thanks to this equivalence. however, what it means for state - based and op - based crdts to emulate each other has never been made fully precise. in particular, emulation is nontrivial since state - based and op - based crdts place different requirements on the behavior of the underlying network with regard to both the causal ordering of message delivery, and the granularity of the messages themselves. in this paper, we specify and formalize crdt emulation in terms of simulation by modeling crdts and their interactions with the network as formal transition systems. we show that emulation can be understood as weak simulations between the transition systems of the original and emulating crdt systems, thus closing a gap in the crdt literature. we precisely characterize which properties of crdt systems are preserved by our weak simulations, and therefore which properties can be said to be applicable to state - based crdts as long as they are applicable to op - based crdts and vice versa. finally, we leverage our emulation results to obtain a general representation independence result for crdts : intuitively, clients of a crdt cannot tell whether they are interacting with a state - based or op - based crdt in particular.
|
arxiv:2504.05398
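The state-based / op-based distinction above can be made concrete with a grow-only counter (G-Counter), the standard textbook CRDT; the class names below are illustrative, not from the paper:

```python
class StateGCounter:
    """State-based G-Counter: replicas merge full state vectors (join = pointwise max)."""
    def __init__(self, replica_id, n_replicas):
        self.rid = replica_id
        self.vec = [0] * n_replicas

    def increment(self):
        self.vec[self.rid] += 1

    def merge(self, other):
        # Join in the lattice of vectors: pointwise maximum.
        self.vec = [max(a, b) for a, b in zip(self.vec, other.vec)]

    def value(self):
        return sum(self.vec)


class OpGCounter:
    """Op-based G-Counter: replicas broadcast individual 'inc' operations."""
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return ("inc",)  # operation to broadcast to the other replicas

    def apply(self, op):
        if op == ("inc",):
            self.count += 1

    def value(self):
        return self.count
```

Note that state-based merge is idempotent (re-merging the same state is harmless) while op-based apply is not, which is precisely why the two styles place different delivery requirements on the network — the gap the paper's emulation framework formalizes.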
|
interstellar complex organic molecules ( icoms ) can be loosely defined as chemical compounds with at least six atoms in which at least one is carbon. the observations of icoms in star - forming regions have shown that they contain an important fraction of carbon in a molecular form, which can be used to synthesize more complex, even biotic molecules. hence, icoms are major actors in the increasing molecular complexity in space and they might have played a role in the origin of terrestrial life. understanding how icoms are formed is relevant for predicting the ultimate organic chemistry reached in the interstellar medium. one possibility is that they are synthesized on the interstellar grain icy surfaces, via recombination of previously formed radicals. the present work focuses on the reactivity of hco with ch3 / nh2 on the grain icy surfaces, investigated by means of quantum chemical simulations. the goal is to carry out a systematic study using different computational approaches and models for the icy surfaces. specifically, dft computations have been bench - marked with caspt2 and ccsd ( t ) methods, and the ice mantles have been mimicked with cluster models of 1, 2, 18 and 33 h2o molecules, in which different reaction sites have been considered. our results indicate that the hco + ch3 / nh2 reactions, if they actually occur, have two major competitive channels : the formation of icoms ch3cho / nh2cho, or the formation of co + ch4 / nh3. these two channels are either barrierless or present relatively low ( $ \ leq $ 10 kj / mol equal to about 1200 k ) energy barriers. finally, we briefly discuss the astrophysical implications of these findings.
|
arxiv:1909.12686
|
molecular property prediction ( mpp ) is vital for drug discovery, crop protection, and environmental science. over the last decades, diverse computational techniques have been developed, from using simple physical and chemical properties and molecular fingerprints in statistical models and classical machine learning to advanced deep learning approaches. in this review, we aim to distill insights from current research on employing transformer models for mpp. we analyze the currently available models and explore key questions that arise when training and fine - tuning a transformer model for mpp. these questions encompass the choice and scale of the pre - training data, optimal architecture selections, and promising pre - training objectives. our analysis highlights areas not yet covered in current research, inviting further exploration to enhance the field ' s understanding. additionally, we address the challenges in comparing different models, emphasizing the need for standardized data splitting and robust statistical analysis.
|
arxiv:2404.03969
|
the safety of our day - to - day life depends crucially on the correct functioning of embedded software systems which control the functioning of more and more technical devices. many of these software systems are time - critical. hence, computations must not only be correct, but must also be issued in a timely fashion. worst case execution time ( wcet ) analysis is concerned with computing tight upper bounds for the execution time of a system in order to provide formal guarantees for the proper timing behaviour of a system. central for this is to compute safe and tight bounds for loops and recursion depths. in this paper, we highlight the tubound approach to this challenge, at whose heart is a constraint logic based approach for loop analysis.
|
arxiv:0903.2251
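The core task — safe, tight loop bounds — can be illustrated for the simplest case of counted loops. The toy computation below only hints at the constraint-logic analysis TuBound actually performs; it is not the tool's algorithm:

```python
import math

def counted_loop_bound(start, limit, step):
    """Upper bound on iterations of `for (i = start; i < limit; i += step)`.

    A safe bound for a simple counted loop; real WCET tools derive such
    bounds from constraint systems over the program's variables.
    """
    if step <= 0:
        raise ValueError("non-positive step: loop may not terminate")
    if limit <= start:
        return 0
    return math.ceil((limit - start) / step)
```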
|
we reanalyze the inference of the protosolar abundance of deuterium made by geiss ( 1993 ) from measurements of 3he / 4he in the solar wind. we use an evolutionary solar model with microscopic diffusion, constrained to fit the present age, radius and luminosity, as well as the observed ratio of heavy elements to hydrogen. the protosolar 2h / 1h is obtained from the best fit of 3he / 4he in the solar wind. taking for the protosolar 3he / 4he the value measured in jupiter by the galileo probe ( niemann et al. 1996 ), we derive the protosolar 2h / 1h = ( 3. 01 + - 0. 17 ) 10 ^ { - 5 } ratio. compared to the present interstellar medium value ( linsky et al. 1993 ), this result is compatible with models of the chemical evolution of the galaxy in the solar neighborhood ; it is also marginally compatible with the jovian 2h / 1h = ( 5 + - 2 ) 10 ^ { - 5 } ratio measured by galileo.
|
arxiv:astro-ph/9704069
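The inference chain above rests on a simple balance: protosolar deuterium was burned to 3He in the Sun, so the solar-wind 3He/4He samples the protosolar (D+3He)/4He, and subtracting the protosolar 3He/4He measured in Jupiter isolates D. A zeroth-order sketch of that algebra, with illustrative input values (not the paper's fitted ones, which come from an evolutionary solar model with diffusion):

```python
def protosolar_d_to_h(he3_he4_wind, he3_he4_proto, he4_h):
    """Toy Geiss-style estimate: D/H = [(3He/4He)_wind - (3He/4He)_proto] * (4He/H).

    Zeroth-order balance assuming all protosolar D was converted to 3He;
    the paper refines this with a solar model including microscopic diffusion.
    """
    return (he3_he4_wind - he3_he4_proto) * he4_h

# Illustrative inputs (order of magnitude only): solar-wind 3He/4He,
# Jovian (protosolar) 3He/4He, and He/H by number.
d_h = protosolar_d_to_h(he3_he4_wind=4.4e-4, he3_he4_proto=1.66e-4, he4_h=0.1)
```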
|
in this paper, we propose magicstylegan and magicstylegan - ada - both incarnations of the state - of - the - art models stylegan2 and stylegan2 ada - to experiment with their capacity of transfer learning into a rather different domain : creating new illustrations for the vast universe of the game " magic : the gathering " cards. this is a challenging task especially due to the variety of elements present in these illustrations, such as humans, creatures, artifacts, and landscapes - not to mention the plethora of art styles of the images made by various artists throughout the years. to solve the task at hand, we introduced a novel dataset, named mtg, with thousands of illustrations from diverse card types and rich in metadata. the resulting set is a dataset composed of a myriad of both realistic and fantasy - like illustrations. in addition, to investigate the effects of diversity, we also introduced subsets that contain specific types of concepts, such as forests, islands, faces, and humans. we show that simpler models, such as dcgans, are not able to learn to generate proper illustrations in any setting. on the other hand, we train instances of magicstylegan using all proposed subsets, being able to generate high quality illustrations. we perform experiments to understand how well pre - trained features from stylegan2 can be transferred towards the target domain. we show that in well trained models we can find particular instances of the noise vector that realistically represent real images from the dataset. moreover, we provide both quantitative and qualitative studies to support our claims and to demonstrate that magicstylegan is the state - of - the - art approach for generating magic illustrations. finally, this paper highlights some emerging properties regarding transfer learning in gans, which is still a somewhat under - explored field in generative learning research.
|
arxiv:2205.14442
|
humans can easily deduce the relative pose of an unseen object, without label / training, given only a single query - reference image pair. this is arguably achieved by incorporating ( i ) 3d / 2. 5d shape perception from a single image, ( ii ) render - and - compare simulation, and ( iii ) rich semantic cue awareness to furnish ( coarse ) reference - query correspondence. existing methods implement ( i ) by a 3d cad model or well - calibrated multiple images and ( ii ) by training a network on specific objects, which necessitate laborious ground - truth labeling and tedious training, potentially leading to challenges in generalization. moreover, ( iii ) was less exploited in the paradigm of ( ii ), despite that the coarse correspondence from ( iii ) enhances the compare process by filtering out non - overlapped parts under substantial pose differences / occlusions. motivated by this, we propose a novel 3d generalizable relative pose estimation method by elaborating ( i ) with a 2. 5d shape from an rgb - d reference, ( ii ) with an off - the - shelf differentiable renderer, and ( iii ) with semantic cues from a pretrained model like dinov2. specifically, our differentiable renderer takes the 2. 5d rotatable mesh textured by the rgb and the semantic maps ( obtained by dinov2 from the rgb input ), then renders new rgb and semantic maps ( with back - surface culling ) under a novel rotated view. the refinement loss comes from comparing the rendered rgb and semantic maps with the query ones, back - propagating the gradients through the differentiable renderer to refine the 3d relative pose. as a result, our method can be readily applied to unseen objects, given only a single rgb - d reference, without label / training. 
extensive experiments on linemod, lm - o, and ycb - v show that our training - free method significantly outperforms the sota supervised methods, especially under the rigorous acc @ 5 / 10 / 15 { \ deg } metrics and the challenging cross - dataset settings.
|
arxiv:2406.18453
|
because the currently most popular method of calculating plasma self - consistent fields is an incomplete method which, strictly speaking, is not suitable for scientific investigation, we develop this method into a complete, reliable basic tool for scientific investigation. here, the text after the paragraph around eq. ( 7 ) has two different presentations ; the latter presentation is more concise and straightforward. pacs : 52. 65. - y, 52. 35. - g.
|
arxiv:1008.2298
|
this study introduces unoranic +, a novel method that integrates unsupervised feature orthogonalization with the ability of a vision transformer to capture both local and global relationships for improved robustness and generalizability. the streamlined architecture of unoranic + effectively separates anatomical and image - specific attributes, resulting in robust and unbiased latent representations that allow the model to demonstrate excellent performance across various medical image analysis tasks and diverse datasets. extensive experimentation demonstrates unoranic + ' s reconstruction proficiency, corruption resilience, as well as capability to revise existing image distortions. additionally, the model exhibits notable aptitude in downstream tasks such as disease classification and corruption detection. we confirm its adaptability to diverse datasets of varying image sources and sample sizes which positions the method as a promising algorithm for advanced medical image analysis, particularly in resource - constrained environments lacking large, tailored datasets. the source code is available at https : / / github. com / sdoerrich97 / unoranic - plus.
|
arxiv:2409.12276
|
building on an analogy to ductile fracture mechanics, we quantify the size of debris particles created during adhesive wear. earlier work suggested a linear relation between tangential work and wear debris volume, assuming that the debris size is proportional to the micro contact size multiplied by the junction shear strength. however, the present study reveals deviations from linearity. these deviations can be rationalized with fracture mechanics and imply that less work is necessary to generate debris than what was assumed. here, we postulate that the work needed to detach a wear particle is made of the surface energy expended to create new fracture surfaces, and also of plastic work within a fracture process zone of a given width around the cracks. our theoretical model, validated by molecular dynamics simulations, reveals a super - linear scaling relation between debris volume ( $ v _ d $ ) and tangential work ( $ w _ t $ ) : $ v _ d \ sim w _ t ^ { 3 / 2 } $ in 3d and $ v _ d \ sim w _ t ^ { 2 } $ in 2d. this study provides a theoretical foundation to estimate the statistical distribution of sizes of fine particles emitted due to adhesive wear processes.
|
arxiv:2207.09561
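The super-linear 3D scaling V_d ~ W_t^{3/2} can be recovered numerically by a log-log slope fit; the snippet below is only a sanity check on synthetic data obeying that law, not the paper's analysis:

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) against log(x), i.e. the fitted power-law exponent."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic 3D data following V_d = c * W_t^{3/2} (c arbitrary).
wt = [1.0, 2.0, 5.0, 10.0, 20.0]
vd = [0.3 * w ** 1.5 for w in wt]
exponent = loglog_slope(wt, vd)
```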
|
entity alignment ( ea ) aims to merge two knowledge graphs ( kgs ) by identifying equivalent entity pairs. while existing methods heavily rely on human - generated labels, it is prohibitively expensive to incorporate cross - domain experts for annotation in real - world scenarios. the advent of large language models ( llms ) presents new avenues for automating ea with annotations, inspired by their comprehensive capability to process semantic information. however, it is nontrivial to directly apply llms for ea since the annotation space in real - world kgs is large. llms could also generate noisy labels that may mislead the alignment. to this end, we propose a unified framework, llm4ea, to effectively leverage llms for ea. specifically, we design a novel active learning policy to significantly reduce the annotation space by prioritizing the most valuable entities based on the entire inter - kg and intra - kg structure. moreover, we introduce an unsupervised label refiner to continuously enhance label accuracy through in - depth probabilistic reasoning. we iteratively optimize the policy based on the feedback from a base ea model. extensive experiments demonstrate the advantages of llm4ea on four benchmark datasets in terms of effectiveness, robustness, and efficiency. codes are available via https : / / github. com / chensycn / llm4ea _ official.
|
arxiv:2405.16806
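The active-learning idea — spend the annotation budget on the structurally most valuable entities — can be caricatured with a degree-centrality proxy. The paper's actual policy uses both inter-KG and intra-KG structure with probabilistic label refinement; everything below is an illustrative stand-in:

```python
def rank_entities_for_annotation(edges, budget):
    """Rank entities by degree as a crude proxy for structural value.

    `edges` is a list of (head, tail) pairs from one KG; return the
    `budget` highest-degree entities, i.e. those whose labels would
    constrain the most neighbours (ties broken alphabetically).
    """
    degree = {}
    for h, t in edges:
        degree[h] = degree.get(h, 0) + 1
        degree[t] = degree.get(t, 0) + 1
    ranked = sorted(degree, key=lambda v: (-degree[v], v))
    return ranked[:budget]
```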
|
the normal state properties of the recently discovered ferropnictide superconductors might hold the key to understanding their exotic superconductivity. using point - contact spectroscopy we show that andreev reflection between an epitaxial thin film of ba ( fe $ _ { 0. 92 } $ co $ _ { 0. 08 } $ ) $ _ 2 $ as $ _ 2 $ and a silver tip can be seen in the normal state of the film up to temperature $ t \ sim1. 3t _ c $, where $ t _ c $ is the critical temperature of the superconductor. andreev reflection far above $ t _ c $ can be understood only when superconducting pairs arising from strong fluctuation of the phase of the complex superconducting order parameter exist in the normal state. our results provide spectroscopic evidence of phase - incoherent superconducting pairs in the normal state of the ferropnictide superconductors.
|
arxiv:1004.4852
|
we present a detailed study of the effects of the initial distribution on the kinetic evolution of the irreversible reaction a + b - > 0 in one dimension. our analytic as well as numerical work is based on a reaction - diffusion model of this reaction. we focus on the role of initial density fluctuations in the creation of the macroscopic patterns that lead to the well - known kinetic anomalies in this system. in particular, we discuss the role of the long wavelength components of the initial fluctuations in determining the long - time behavior of the system. we note that the frequently studied random initial distribution is but one of a variety of possible distributions leading to interesting anomalous behavior. our discussion includes an initial distribution with correlated a - b pairs and one in which the initial distribution forms a fractal pattern. the former is an example of a distribution whose long wavelength components are suppressed, while the latter exemplifies one whose long wavelength components are enhanced, relative to those of the random distribution.
|
arxiv:physics/9806012
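The reaction-diffusion model of A + B -> 0 in one dimension is easy to simulate directly; a minimal lattice sketch with a random initial distribution (all parameters illustrative):

```python
import random
from collections import Counter

def simulate_ab(n_sites, n_pairs, steps, seed=0):
    """Random walkers of species A and B on a 1D ring; an A and a B on the
    same site annihilate (A + B -> 0). Returns the surviving A count per
    step; by construction len(A) == len(B) throughout."""
    rng = random.Random(seed)

    def annihilate(a, b):
        # Remove one A-B pair for every coincidence on a site.
        ca, cb = Counter(a), Counter(b)
        for site in set(ca) & set(cb):
            k = min(ca[site], cb[site])
            ca[site] -= k
            cb[site] -= k
        def expand(c):
            return [s for s, m in c.items() for _ in range(m)]
        return expand(ca), expand(cb)

    a = [rng.randrange(n_sites) for _ in range(n_pairs)]
    b = [rng.randrange(n_sites) for _ in range(n_pairs)]
    a, b = annihilate(a, b)
    history = [len(a)]
    for _ in range(steps):
        a = [(x + rng.choice((-1, 1))) % n_sites for x in a]
        b = [(x + rng.choice((-1, 1))) % n_sites for x in b]
        a, b = annihilate(a, b)
        history.append(len(a))
    return history
```

With a random initial distribution, local density fluctuations segregate the survivors into A-rich and B-rich domains, which is the origin of the anomalously slow decay the abstract discusses.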
|
let $ e $ be a countable directed graph. we show that $ c ^ * ( e ) $ is af - embeddable if and only if no loop in $ e $ has an entrance. the proof is constructive and is in the same spirit as the drinen - tomforde desingularization.
|
arxiv:1405.7757
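The combinatorial condition — no loop has an entrance — is directly checkable on finite graphs. A sketch for a single given cycle, using the convention that an entrance is an edge ending on the cycle that is not itself a cycle edge:

```python
def has_entrance(adj, cycle):
    """Does the given directed cycle have an entrance?

    `adj` maps vertex -> list of out-neighbours; `cycle` is a vertex list
    [v0, ..., v(k-1)] with edges vi -> v(i+1 mod k). An entrance is an
    edge whose range lies on the cycle but which is not a cycle edge.
    """
    k = len(cycle)
    cycle_edges = {(cycle[i], cycle[(i + 1) % k]) for i in range(k)}
    on_cycle = set(cycle)
    for u, outs in adj.items():
        for v in outs:
            if v in on_cycle and (u, v) not in cycle_edges:
                return True
    return False
```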
|
we discuss a solution of the equations of motion of five - dimensional gauged type iib supergravity that describes confining su ( n ) gauge theories at large n and large ' t hooft parameter. we prove confinement by computing the wilson loop, and we show that our solution is generic, independent of most of the details of the theory. in particular, the einstein - frame metric near its singularity, and the condensates of scalar, composite operators are universal. also universal is the discreteness of the glueball mass spectrum and the existence of a mass gap. the metric is also identical to a generically confining solution recently found in type 0b theory.
|
arxiv:hep-th/9903026
|
this paper introduces cfp, a system that searches intra - operator parallelism configurations by leveraging runtime profiles of actual parallel programs. the key idea is to profile a limited space by identifying a new structure named parallelblock, which is a group of operators with the property of communication - free tensor partition propagation : the partition of its input tensor can propagate through all operators to its output tensor without introducing communication or synchronization. based on this property, an optimal tensor partition of operators within a parallelblock should be inferred from the partition of the input tensor through partition propagation to prevent avoidable communication. thus, the search space can be reduced by only profiling each parallelblock with different input tensor partitions at its entry, instead of enumerating all combinations among operators within the parallelblock. moreover, the search space is further reduced by identifying parallelblock sequences ( segments ) with similar parallel behavior. cfp computes the overall performance of the model based on the profiles of all segments. on gpt, llama, and moe models, cfp achieves up to a 1. 51x, 1. 31x, and 3. 43x speedup over the state - of - the - art framework, alpa.
|
arxiv:2504.00598
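The ParallelBlock idea — group consecutive operators through which an input partition propagates without communication — can be caricatured as a greedy segmentation over an op sequence. The op taxonomy below is purely illustrative (not CFP's actual partition-propagation analysis):

```python
# Ops assumed to propagate a batch-dimension partition without communication
# (elementwise / row-local ops), vs ops assumed to force resharding.
PROPAGATES = {"add", "mul", "relu", "matmul_rowwise"}

def parallel_blocks(ops):
    """Greedily split an op sequence into maximal communication-free blocks.

    Each partition-breaking op ends the current block and sits in its own
    block, so only one profiling run per block entry partition is needed
    instead of one per combination of ops.
    """
    blocks, current = [], []
    for op in ops:
        if op in PROPAGATES:
            current.append(op)
        else:
            if current:
                blocks.append(current)
            blocks.append([op])
            current = []
    if current:
        blocks.append(current)
    return blocks
```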
|
the technique of distillation helps transform a cumbersome neural network into a compact network so that the model can be deployed on alternative hardware devices. the main advantages of distillation - based approaches include a simple training process, support from most off - the - shelf deep learning software, and no special hardware requirements. in this paper, we propose a guideline to distill the architecture and knowledge of pre - trained standard cnns simultaneously. we first make a quantitative analysis of the baseline network, including computational cost and storage overhead in different components. and then, according to the analysis results, optional strategies can be adopted for the compression of fully - connected layers. for vanilla convolution layers, the proposed parsimonious convolution ( parconv ) block, consisting only of depthwise separable convolution and pointwise convolution, is used as a direct replacement without other adjustments such as the widths and depths in the network. finally, knowledge distillation with multiple losses is adopted to improve the performance of the compact cnn. the proposed algorithm is first verified on offline handwritten chinese text recognition ( hctr ) where the cnns are characterized by tens of thousands of output nodes and trained by hundreds of millions of training samples. compared with the cnn in the state - of - the - art system, our proposed joint architecture and knowledge distillation can reduce the computational cost by > 10x and model size by > 8x with negligible accuracy loss. and then, by conducting experiments on one of the most popular data sets : mnist, we demonstrate the proposed approach can also be successfully applied on mainstream backbone networks.
|
arxiv:1912.07806
|
we load atoms into every site of an optical lattice and selectively spin flip atoms in a sublattice consisting of every other site. these selected atoms are separated from their unselected neighbors by less than an optical wavelength. we also show spin - dependent transport, where atomic wave packets are coherently separated into adjacent sites according to their internal state. these tools should be useful for quantum information processing and quantum simulation of lattice models with neutral atoms.
|
arxiv:quant-ph/0702039
|
split inference ( si ) partitions deep neural networks into distributed sub - models, enabling privacy - preserving collaborative learning. nevertheless, it remains vulnerable to data reconstruction attacks ( dras ), wherein adversaries exploit exposed smashed data to reconstruct raw inputs. despite extensive research on adversarial attack - defense games, a shortfall remains in the fundamental analysis of privacy risks. this paper establishes a theoretical framework for privacy leakage quantification using information theory, defining it as the adversary ' s certainty and deriving both average - case and worst - case error bounds. we introduce fisher - approximated shannon information ( fsinfo ), a novel privacy metric utilizing fisher information ( fi ) for operational privacy leakage computation. we empirically show that our privacy metric correlates well with empirical attacks and investigate some of the factors that affect privacy leakage, namely the data distribution, model size, and overfitting.
|
arxiv:2504.10016
|
we discuss the gravitational wave background produced by bouncing models based on a full quantum evolution of a universe filled with a perfect fluid. using an ontological interpretation for the background wave function allows us to solve the mode equations for the tensorial perturbations, and we find the spectral index as a function of the fluid equation of state.
|
arxiv:gr-qc/0605060
|
over the past few decades, tremendous efforts have been devoted to understanding self-duality at the quantum critical point, which enlarges the global symmetry and constrains the dynamics. in this letter, we employ large-scale density matrix renormalization group simulations to investigate the critical spin chain with long-range interaction $V(r) \sim 1/r^{\alpha}$. remarkably, we reveal that the long-range interaction drives the deconfined criticality towards a first-order phase transition as $\alpha$ decreases. more strikingly, the emergent self-duality leads to an emergent symmetry and manifests at these first-order critical points. this discovery is reminiscent of self-duality protected multicritical points and provides an example of a critical line with generalized symmetry. our work has far-reaching implications for ongoing experimental efforts in rydberg atom quantum simulators.
|
arxiv:2309.01652
|
in this work we experimentally demonstrate a photon - pair source with correlations in the frequency and polarization degrees of freedom. we base our source on the spontaneous four - wave mixing ( sfwm ) process in a photonic crystal fiber. we show theoretically that the two - photon state is the coherent superposition of up to six distinct sfwm processes, each corresponding to a distinct combination of polarizations for the four waves involved and giving rise to an energy - conserving pair of peaks. our experimental measurements, both in terms of single and coincidence counts, confirm the presence of these pairs of peaks, while we also present related numerical simulations with excellent experiment - theory agreement. we explicitly show how the pump frequency and polarization may be used to effectively control the signal - idler photon - pair properties, defining which of the six processes can participate in the overall two - photon state and at which optical frequencies. we analyze the signal - idler correlations in frequency and polarization, and in terms of fiber characterization, we input the sfwm - peak experimental data into a genetic algorithm which successfully predicts the values of the parameters that characterize the fiber cross section, as well as predict the particular sfwm process associated with a given pair of peaks. we believe our work will help advance the exploitation of photon - pair correlations in the frequency and polarization degrees of freedom.
|
arxiv:2109.02232
|
possibilities of all - angle left - handed negative refraction in 2d honeycomb and kagome lattices made of dielectric rods in air are discussed for the refractive indices 3. 1 and 3. 6. in contrast to triangular lattice photonic crystals made of rods in air, both the honeycomb and kagome lattices show all - angle left - handed negative refraction in the case of the tm2 band for low normalized frequencies. certain advantages of the honeycomb and kagome structures over the triangular lattice are emphasized. this specially concerns the honeycomb lattice with its circle - like equifrequency contours where the effective indices are close to - 1 for a wide range of incident angles and frequencies.
|
arxiv:cond-mat/0605031
|
while large models pre - trained on high - quality data exhibit excellent performance across various reasoning tasks, including mathematical reasoning ( e. g. gsm8k, multiarith ), specializing smaller models to excel at mathematical reasoning remains a challenging problem. common approaches to address this challenge include knowledge distillation, where smaller student models learn from large pre - trained teacher models, and data augmentation, such as rephrasing questions. despite these efforts, smaller models struggle with arithmetic computations, leading to errors in mathematical reasoning. in this work, we focus on leveraging a programmatically generated arithmetic dataset to enhance the reasoning capabilities of smaller models. we investigate two key approaches to incorporate this dataset - - ( 1 ) intermediate fine - tuning, where a model is fine - tuned on the arithmetic dataset before being trained on a reasoning dataset, and ( 2 ) integrating the arithmetic dataset into the instruction - tuning mixture, allowing the model to learn arithmetic skills alongside general instruction - following abilities. our experiments on multiple reasoning benchmarks demonstrate that incorporating an arithmetic dataset, whether through targeted fine - tuning or within the instruction - tuning mixture, enhances the models ' arithmetic capabilities, which in turn improves their mathematical reasoning performance.
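a programmatically generated arithmetic dataset of the kind described above can be sketched in a few lines; the exact question format, operand ranges, and operator mix used in the paper are not specified here, so everything below is illustrative.

```python
import random

# minimal sketch of a synthetic arithmetic dataset; format and ranges are
# assumptions, not the paper's actual generation recipe.
def make_arithmetic_example(rng, max_operand=999):
    a, b = rng.randint(0, max_operand), rng.randint(0, max_operand)
    op = rng.choice(["+", "-", "*"])
    answer = {"+": a + b, "-": a - b, "*": a * b}[op]
    return {"question": f"What is {a} {op} {b}?", "answer": str(answer)}

rng = random.Random(0)
dataset = [make_arithmetic_example(rng) for _ in range(3)]
for ex in dataset:
    print(ex["question"], "->", ex["answer"])
```

such a generator produces unlimited exactly-labeled examples, which is what makes it attractive both for intermediate fine-tuning and for mixing into an instruction-tuning corpus.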
|
arxiv:2502.12855
|
practical applications of thermoacoustic tomography require numerical inversion of the spherical mean radon transform with the centers of integration spheres occupying an open surface. solution of this problem is needed (both in 2-d and 3-d) because frequently the region of interest cannot be completely surrounded by the detectors, as happens, for example, in breast imaging. we present an efficient numerical algorithm for solving this problem in 2-d (similar methods are applicable in the 3-d case). our method is based on the numerical approximation of plane waves by certain single layer potentials related to the acquisition geometry. after the densities of these potentials have been precomputed, each subsequent image reconstruction has the complexity of the regular filtered backprojection algorithm for the classical radon transform. the performance of the method is demonstrated in several numerical examples: the algorithm produces very accurate reconstructions if the data are accurate and sufficiently well sampled; on the other hand, it is sufficiently stable with respect to noise in the data.
|
arxiv:0807.1355
|
in this paper, we report on a parallel free-viewpoint video synthesis algorithm that can efficiently reconstruct a high-quality 3d scene representation of sports scenes. the proposed method focuses on a scene that is captured by multiple synchronized cameras featuring wide baselines. the following strategies are introduced to accelerate the production of a free-viewpoint video while taking the improvement of visual quality into account: (1) a sparse point cloud is reconstructed using a volumetric visual hull approach, and an exact 3d roi is found for each object using an efficient connected components labeling algorithm; the reconstruction of a dense point cloud is then accelerated by applying the visual hull only in the rois; (2) an accurate polyhedral surface mesh is built by estimating the exact intersections between grid cells and the visual hull; (3) the appearance of the reconstructed representation is reproduced in a view-dependent manner that renders the non-occluded and occluded regions with the nearest camera and its neighboring cameras, respectively. the production of volleyball and judo sequences demonstrates the effectiveness of our method in terms of both execution time and visual quality.
|
arxiv:1903.11785
|
with the increasing spread of ar head - mounted displays suitable for everyday use, interaction with information becomes ubiquitous, even while walking. however, this requires constant shifts of our attention between walking and interacting with virtual information to fulfill both tasks adequately. accordingly, we as a community need a thorough understanding of the mutual influences of walking and interacting with digital information to design safe yet effective interactions. thus, we systematically investigate the effects of different ar anchors ( hand, head, torso ) and task difficulties on user experience and performance. we engage participants ( n = 26 ) in a dual - task paradigm involving a visual working memory task while walking. we assess the impact of dual - tasking on both virtual and walking performance, and subjective evaluations of mental and physical load. our results show that head - anchored ar content least affected walking while allowing for fast and accurate virtual task interaction, while hand - anchored content increased reaction times and workload.
|
arxiv:2502.20944
|
we studied the $e^+e^- \to h\gamma$ process at the full simulation level, using a realistic detector model, to study the feasibility of constraining the sm effective field theory (smeft) $h\gamma z$ coefficient, $\zeta_{az}$, at the ilc. assuming the international large detector (ild) operating at the 250 gev ilc, it is shown that the $e^+e^- \to h\gamma$ process is much more difficult to observe than naively expected if there is no bsm contribution. we thus put upper limits on the cross section of this process. the expected combined 95% c.l. upper limits for full polarisations $(p_{e^-}, p_{e^+}) = (-100\%, +100\%)$ and $(+100\%, -100\%)$ are $\frac{\sigma_{h\gamma}^{l}}{\sigma_{sm}^{l}} < 5.0$ and $\frac{\sigma_{h\gamma}^{r}}{\sigma_{sm}^{r}} < 61.9$, respectively. the resultant 95% c.l. limit on $\zeta_{az}$ is $-0.020 < \zeta_{az} < 0.003$.
|
arxiv:2203.07202
|
in this paper we address the importance and impact of employing structure-preserving neural networks as surrogates for the analytical physics-based models typically employed to describe the rheology of non-newtonian fluids in stokes flows. in particular, we propose, and test on real-world scenarios, a novel strategy to build data-driven rheological models based on the use of input-output convex neural networks (icnns), a special class of feedforward neural network scalar-valued functions that are convex with respect to their inputs. moreover, we show, through a detailed campaign of numerical experiments, that the use of icnns is of paramount importance to guarantee the well-posedness of the associated non-newtonian stokes differential problem. finally, building upon a novel perturbation result for non-newtonian stokes problems, we study the impact of our data-driven icnn-based rheological model on the accuracy of the finite element approximation.
|
arxiv:2401.07121
|
the existence of global nonnegative weak solutions is proved for coupled one - dimensional lubrication systems that describe the evolution of nanoscopic bilayer thin polymer films that take account of navier - slip or no - slip conditions at both liquid - liquid and liquid - solid interfaces. in addition, in the presence of attractive van der waals and repulsive born intermolecular interactions existence of positive smooth solutions is shown.
|
arxiv:1211.2216
|
we performed spectral line observations of co j = 2 - 1, 13co j = 1 - 0, and c18o j = 1 - 0 and polarimetric observations in the 1. 3 mm continuum and co j = 2 - 1 toward a multiple protostar system, l1448 irs 3, in the perseus molecular complex at a distance of ~ 250 pc, using the bima array. in the 1. 3 mm continuum, two sources ( irs 3a and 3b ) were clearly detected with estimated envelope masses of 0. 21 and 1. 15 solar masses, and one source ( irs 3c ) was marginally detected with an upper mass limit of 0. 03 solar masses. in co j = 2 - 1, we revealed two outflows originating from irs 3a and 3b. the masses, mean number densities, momentums, and kinetic energies of outflow lobes were estimated. based on those estimates and outflow features, we concluded that the two outflows are interacting and that the irs 3a outflow is nearly perpendicular to the line of sight. in addition, we estimated the velocity, inclination, and opening of the irs 3b outflow using bayesian statistics. when the opening angle is ~ 20 arcdeg, we constrain the velocity to ~ 45 km / s and the inclination angle to ~ 57 arcdeg. linear polarization was detected in both the 1. 3 mm continuum and co j = 2 - 1. the linear polarization in the continuum shows a magnetic field at the central source ( irs 3b ) perpendicular to the outflow direction, and the linear polarization in the co j = 2 - 1 was detected in the outflow regions, parallel or perpendicular to the outflow direction. moreover, we comprehensively discuss whether the binary system of irs 3a and 3b is gravitationally bound, based on the velocity differences detected in 13co j = 1 - 0 and c18o j = 1 - 0 observations and on the outflow features. the specific angular momentum of the system was estimated as ~ 3e20 cm ^ 2 / s, comparable to the values obtained from previous studies on binaries and molecular clouds in taurus.
|
arxiv:astro-ph/0609176
|
traveling wave theory is deployed today to improve the monitoring of transmission lines in electrical power grids. most traveling wave methods require prior knowledge of the wave propagation characteristic of the transmission line, which is a major source of error as the value changes during the operation of the line. to improve the localization of events on transmission lines, we propose a new online localization method that simultaneously determines the frequency-dependent wave propagation characteristic from the traveling wave measurements of the event. compared to conventional methods, this is achieved with one additional traveling wave measurement, but the method can still be applied in different measurement setups. we have derived the method based on the complex continuous wavelet transform. the accuracy of the method is evaluated in a simulation with a frequency-dependent transmission line model of the ieee 39-bus system. the method was developed independently of the type of event and evaluated in test setups considering different lengths of the monitored line, line types and event locations. the localization accuracy is compared with existing online methods and analyzed with regard to the characterization capabilities. the method has a high relative localization accuracy, in the range of 0.1%, under different test conditions.
|
arxiv:2304.01733
|
we investigate the ice-templating behaviour of alumina suspensions by in-situ x-ray radiography and tomography. we focus here on the formation and structure of the transitional zone, which forms during the initial instants of freezing. for many applications, this part is undesirable since the resulting porosity is heterogeneous in size, morphology and orientation. we investigate the influence of the composition of alumina suspensions on the formation of the transitional zone. alumina particles are dispersed by three different dispersants, in various quantities, or by hydrochloric acid. we show that the height and morphology of the transitional zone are determined by the growth of large dendritic ice crystals growing in a supercooled state, much faster than the cellular freezing front. when the freezing temperature decreases, the degree of supercooling increases. this results in a faster freezing front velocity and increases the dimensions of the transitional zone. it is therefore possible to adjust the dimensions of the transitional zone by changing the composition of the alumina suspensions. the counter-ion na+ has the most dramatic influence on the freezing temperature of the suspensions, yielding a predominance of cellular ice crystals instead of the usual lamellar crystals.
|
arxiv:1804.08700
|
the goal of this paper is to give a short review of recent results of the authors concerning classical hamiltonian many-particle systems. we hope that these results support a new possible formulation of boltzmann's ergodicity hypothesis, which reads as follows: for almost all potentials, minimal contact with the external world, through only one particle of $N$, is sufficient for ergodicity, but only if this contact has no memory. new results for the quantum case are also presented.
|
arxiv:1705.06050
|
with the aid of nanosecond time - resolved x - ray diffraction techniques, we have explored the complex structural dynamics of bismuth under laser - driven compression. the results demonstrate that shocked bismuth undergoes a series of structural transformations involving four solid structures : the bi - i, bi - ii, bi - iii and bi - v phases. the transformation from the bi - i phase to the bi - v phase occurs within 4 ns under shock compression at ~ 11 gpa, showing no transient phases with the available experimental conditions. successive phase transitions ( bi - v - > bi - iii - > bi - ii - > bi - i ) during the shock release within 30 ns have also been resolved, which were inaccessible using other dynamic techniques.
|
arxiv:1310.1150
|
averaging checkpoints along the training trajectory is a simple yet powerful approach to improve the generalization performance of machine learning models and reduce training time. motivated by these potential gains, and in an effort to fairly and thoroughly benchmark this technique, we present an extensive evaluation of averaging techniques in modern deep learning, which we perform using algoperf \citep{dahl_benchmarking_2023}, a large-scale benchmark for optimization algorithms. we investigate whether weight averaging can reduce training time, improve generalization, and replace learning rate decay, as suggested by recent literature. our evaluation across seven architectures and datasets reveals that averaging significantly accelerates training and yields considerable efficiency gains, at the price of a minimal implementation and memory cost, while mildly improving generalization across all considered workloads. finally, we explore the relationship between averaging and learning rate annealing and show how to optimally combine the two to achieve the best performance.
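the core operation being benchmarked is just a parameter-wise mean over saved checkpoints. a dependency-free sketch (real training loops would average tensors, and the paper's exact averaging schedule is not reproduced here):

```python
# uniform average of checkpoints along a training trajectory, in the spirit
# of stochastic weight averaging; plain lists stand in for weight tensors.
def average_checkpoints(checkpoints):
    n = len(checkpoints)
    return {k: [sum(c[k][i] for c in checkpoints) / n
                for i in range(len(checkpoints[0][k]))]
            for k in checkpoints[0]}

ckpts = [
    {"w": [1.0, 2.0], "b": [0.0]},
    {"w": [3.0, 4.0], "b": [1.0]},
]
print(average_checkpoints(ckpts))  # {'w': [2.0, 3.0], 'b': [0.5]}
```

the minimal implementation and memory cost mentioned in the abstract comes from the fact that a single running average (one extra copy of the weights) suffices instead of storing every checkpoint.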
|
arxiv:2502.06761
|
upper semicontinuous ( usc ) functions arise in the analysis of maximization problems, distributionally robust optimization, and function identification, which includes many problems of nonparametric statistics. we establish that every usc function is the limit of a hypo - converging sequence of piecewise affine functions of the difference - of - max type and illustrate resulting algorithmic possibilities in the context of approximate solution of infinite - dimensional optimization problems. in an effort to quantify the ease with which classes of usc functions can be approximated by finite collections, we provide upper and lower bounds on covering numbers for bounded sets of usc functions under the attouch - wets distance. the result is applied in the context of stochastic optimization problems defined over spaces of usc functions. we establish confidence regions for optimal solutions based on sample average approximations and examine the accompanying rates of convergence. examples from nonparametric statistics illustrate the results.
|
arxiv:1709.06730
|
the proliferation of small - scale renewable generators and price - responsive loads makes it a challenge for distribution network operators ( dnos ) to schedule the controllable loads of the load aggregators and the generation of the generators in real - time. additionally, the high computational burden and violation of the entities ' ( i. e., load aggregators ' and generators ' ) privacy make a centralized framework impractical. in this paper, we propose a decentralized energy trading algorithm that can be executed by the entities in a real - time fashion. to address the privacy issues, the dno provides the entities with proper control signals using the lagrange relaxation technique to motivate them towards an operating point with maximum profit for entities. to deal with uncertainty issues, we propose a probabilistic load model and robust framework for renewable generation. the performance of the proposed algorithm is evaluated on an ieee 123 - node test feeder. when compared with a benchmark of not performing load management for the aggregators, the proposed algorithm benefits both the load aggregators and generators by increasing their profit by 17. 8 % and 10. 3 %, respectively. when compared with a centralized approach, our algorithm converges to the solution of the dno ' s centralized problem with a significantly lower running time in 50 iterations per time slot.
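the price-signal mechanism behind lagrange relaxation can be sketched as a toy dual decomposition: the operator broadcasts a multiplier (a "price") on the supply-demand balance, each entity best-responds privately, and the price is updated by a subgradient step. the quadratic utilities and all numbers below are illustrative assumptions, not the paper's model.

```python
def best_response(price, a, b, lo, hi):
    # each entity privately maximizes a*x - 0.5*b*x**2 - price*x, clipped to [lo, hi]
    return min(max((a - price) / b, lo), hi)

def decentralized_dispatch(entities, supply, steps=200, lr=0.05):
    price = 0.0
    for _ in range(steps):
        total = sum(best_response(price, *e) for e in entities)
        # operator's subgradient step: raise the price when demand exceeds supply
        price += lr * (total - supply)
    return price, total

# two hypothetical load aggregators and 9 units of available generation
entities = [(10.0, 1.0, 0.0, 8.0), (12.0, 2.0, 0.0, 8.0)]
price, total = decentralized_dispatch(entities, supply=9.0)
print(round(price, 3), round(total, 3))  # converges near price 4.667, total 9.0
```

note that the operator never sees the entities' utility parameters, only their aggregate response, which is the privacy-preserving feature the decentralized framework relies on.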
|
arxiv:1705.02575
|
we define the notion of central orderings for a general commutative ring $A$, generalizing the notion of central points of irreducible real algebraic varieties. we study a central and a precentral locus, which both live in the real spectrum of the ring $A$ and allow one to state central positivstellensätze in the spirit of hilbert's 17th problem.
|
arxiv:2307.04430
|
research community evaluations in information retrieval, such as nist ' s text retrieval conference ( trec ), build reusable test collections by pooling document rankings submitted by many teams. naturally, the quality of the resulting test collection thus greatly depends on the number of participating teams and the quality of their submitted runs. in this work, we investigate : i ) how the number of participants, coupled with other factors, affects the quality of a test collection ; and ii ) whether the quality of a test collection can be inferred prior to collecting relevance judgments from human assessors. experiments conducted on six trec collections illustrate how the number of teams interacts with various other factors to influence the resulting quality of test collections. we also show that the reusability of a test collection can be predicted with high accuracy when the same document collection is used for successive years in an evaluation campaign, as is common in trec.
|
arxiv:2012.13292
|
high-quality radio frequency (rf) components are imperative for efficient wireless communication. however, these components can degrade over time and need to be identified so that they can either be replaced or have their effects compensated. the identification of these components can be done through observation and analysis of constellation diagrams. however, in the presence of multiple distortions, it is very challenging to isolate and identify the rf components responsible for the degradation. this paper highlights the difficulty and importance of identifying distorted rf components. furthermore, a deep multi-task learning algorithm is proposed to identify the distorted components in this challenging scenario. extensive simulations show that the proposed algorithm can automatically detect multiple distorted rf components with high accuracy in different scenarios.
|
arxiv:2207.01707
|
progress in ai has relied on human - generated data, from annotator marketplaces to the wider internet. however, the widespread use of large language models now threatens the quality and integrity of human - generated data on these very platforms. we argue that this issue goes beyond the immediate challenge of filtering ai - generated content - - it reveals deeper flaws in how data collection systems are designed. existing systems often prioritize speed, scale, and efficiency at the cost of intrinsic human motivation, leading to declining engagement and data quality. we propose that rethinking data collection systems to align with contributors ' intrinsic motivations - - rather than relying solely on external incentives - - can help sustain high - quality data sourcing at scale while maintaining contributor trust and long - term participation.
|
arxiv:2502.07732
|
based on ideas of pigola and setti \cite{ps}, we prove that immersed submanifolds with bounded mean curvature of cartan-hadamard manifolds are feller. we also consider riemannian submersions $\pi \colon M \to N$ with compact minimal fibers and, based on various criteria for parabolicity and stochastic completeness (see \cite{grygor'yan}), prove that $M$ is feller, parabolic or stochastically complete if and only if the base $N$ is feller, parabolic or stochastically complete, respectively.
|
arxiv:1109.3380
|
three sudden spin-down events, termed `anti-glitches', were recently discovered in the accreting pulsar ngc 300 ulx-1 by the \textit{neutron star interior composition explorer} (nicer) mission. unlike previous anti-glitches detected in decelerating magnetars, these are the first anti-glitches recorded in an accelerating pulsar. one standard theory is that pulsar spin-up glitches are caused by avalanches of collectively unpinning vortices that transfer angular momentum from the superfluid interior to the crust of a neutron star. here we test whether vortex avalanches are also consistent with the anti-glitches in ngc 300 ulx-1, with the angular momentum transfer reversed. we perform $N$-body simulations of up to $5 \times 10^{3}$ pinned vortices in two dimensions in secularly accelerating and decelerating containers. vortex avalanches routinely occur in both scenarios, propagating inwards and outwards respectively. the implications for observables, such as size and waiting time statistics, are considered briefly.
|
arxiv:2205.05896
|
we propose and analyze a recipient - anonymous stochastic routing model to study a fundamental trade - off between anonymity and routing delay. an agent wants to quickly reach a goal vertex in a network through a sequence of routing actions, while an overseeing adversary observes the agent ' s entire trajectory and tries to identify her goal among those vertices traversed. we are interested in understanding the probability that the adversary can correctly identify the agent ' s goal ( anonymity ), as a function of the time it takes the agent to reach it ( delay ). a key feature of our model is the presence of intrinsic uncertainty in the environment, so that each of the agent ' s intended steps is subject to random perturbation and thus may not materialize as planned. using large - network asymptotics, our main results provide near - optimal characterization of the anonymity - delay trade - off under a number of network topologies. our main technical contributions are centered around a new class of " noise - harnessing " routing strategies that adaptively combine intrinsic uncertainty from the environment with additional artificial randomization to achieve provably efficient obfuscation.
|
arxiv:1911.08875
|
is composed of simple nodes and its problem-solving capacity derives from the connections between them. neural nets are textbook implementations of this approach. some critics of this approach feel that while these models approach biological reality as a representation of how the system works, they lack explanatory power because, even in systems endowed with simple connection rules, the emerging high complexity makes them less interpretable at the connection level than they apparently are at the macroscopic level. other approaches gaining in popularity include (1) dynamical systems theory, (2) mapping symbolic models onto connectionist models (neural-symbolic integration or hybrid intelligent systems), and (3) bayesian models, which are often drawn from machine learning. all the above approaches tend either to be generalized to the form of integrated computational models of a synthetic / abstract intelligence (i.e. cognitive architecture), in order to be applied to the explanation and improvement of individual and social / organizational decision-making and reasoning, or to focus on single simulative programs (or microtheories / "middle-range" theories) modelling specific cognitive faculties (e.g. vision, language, categorization etc.).

=== neurobiological methods ===

research methods borrowed directly from neuroscience and neuropsychology can also help us to understand aspects of intelligence. these methods allow us to understand how intelligent behavior is implemented in a physical system.

- single-unit recording
- direct brain stimulation
- animal models
- postmortem studies

== key findings ==

cognitive science has given rise to models of human cognitive bias and risk perception, and has been influential in the development of behavioral finance, part of economics.
it has also given rise to a new theory of the philosophy of mathematics ( related to denotational mathematics ), and many theories of artificial intelligence, persuasion and coercion. it has made its presence known in the philosophy of language and epistemology as well as constituting a substantial wing of modern linguistics. fields of cognitive science have been influential in understanding the brain ' s particular functional systems ( and functional deficits ) ranging from speech production to auditory processing and visual perception. it has made progress in understanding how damage to particular areas of the brain affect cognition, and it has helped to uncover the root causes and results of specific dysfunction, such as dyslexia, anopsia, and hemispatial neglect. = = notable researchers = = some of the more recognized names in cognitive science are usually either the most controversial or the most cited. within philosophy,
|
https://en.wikipedia.org/wiki/Cognitive_science
|
the dual magneto - hydrodynamics of dyonic plasma describes the study of electrodynamics equations along with the transport equations in the presence of electrons and magnetic monopoles. in this paper, we formulate the quaternionic dual fields equations, namely, the hydro - electric and hydro - magnetic fields equations which are an analogous to the generalized lamb vector field and vorticity field equations of dyonic cold plasma fluid. further, we derive the quaternionic dirac - maxwell equations for dual magneto - hydrodynamics of dyonic cold plasma. we also obtain the quaternionic dual continuity equations that describe the transport of dyonic fluid. finally, we establish an analogy of alfven wave equation which may generate from the flow of magnetic monopoles in the dyonic field of cold plasma. the present quaternionic formulation for dyonic cold plasma is well invariant under the duality, lorentz and cpt transformations.
|
arxiv:1806.08221
|
we describe in detail the method we have used to determine the ckm angles gamma, alpha and beta using flavour symmetries between non - leptonic b decays. this method is valid in the context of the sm but also in presence of new physics not affecting the amplitudes.
|
arxiv:hep-ph/0306058
|
sparse adversarial attacks fool deep neural networks ( dnns ) through minimal pixel perturbations, often regularized by the $\ell_0$ norm. recent efforts have replaced this norm with a structural sparsity regularizer, such as the nuclear group norm, to craft group - wise sparse adversarial attacks. the resulting perturbations are thus explainable and hold significant practical relevance, shedding light on an even greater vulnerability of dnns. however, crafting such attacks poses an optimization challenge, as it involves computing norms for groups of pixels within a non - convex objective. we address this by presenting a two - phase algorithm that generates group - wise sparse attacks within semantically meaningful areas of an image. initially, we optimize a quasinorm adversarial loss using the $1/2$-quasinorm proximal operator tailored for non - convex programming. subsequently, the algorithm transitions to a projected nesterov's accelerated gradient descent with $2$-norm regularization applied to perturbation magnitudes. rigorous evaluations on cifar - 10 and imagenet datasets demonstrate a remarkable increase in group - wise sparsity, e.g., $50.9\%$ on cifar - 10 and $38.4\%$ on imagenet ( average case, targeted attack ). this performance improvement is accompanied by significantly faster computation times, improved explainability, and a $100\%$ attack success rate.
|
arxiv:2311.17434
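The group-wise sparsity mechanism described in the abstract above can be illustrated with a proximal operator. A minimal numpy sketch follows, using group soft-thresholding (the prox of the convex group $\ell_{2,1}$ norm) as a simpler stand-in for the non-convex $1/2$-quasinorm prox the paper mentions; the toy vector and pixel groups are invented for illustration:

```python
import numpy as np

def prox_group_l2(v, groups, lam):
    """Proximal operator of lam * sum_g ||v_g||_2 (group soft-thresholding):
    groups whose l2 norm falls below lam are zeroed out entirely, which is
    the mechanism that yields group-wise sparse perturbations."""
    out = np.zeros_like(v, dtype=float)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * v[g]   # shrink the surviving group
    return out

# toy perturbation over 6 pixels split into two groups of 3
v = np.array([0.1, 0.2, 0.1, 2.0, 1.5, 1.0])
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
p = prox_group_l2(v, groups, lam=0.5)
# the weak first group is suppressed entirely; the strong one only shrinks
```

Iterating such a prox step inside a gradient-based attack is what concentrates the perturbation on a few pixel groups rather than scattering it across the image.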
|
we prove that the discriminant of a nonsingular space curve of genus $g \geq 2$ is stable with respect to the standard action of the special linear group.
|
arxiv:1206.6708
|
particle beams with highly asymmetric emittance ratios are expected at the interaction point of high energy colliders. these asymmetric beams can be used to drive high gradient wakefields in dielectrics and plasma. in the case of plasma, the high aspect ratio of the drive beam creates a transversely elliptical blowout cavity and the asymmetry in the ion column creates asymmetric focusing in the two transverse planes. the ellipticity of the blowout depends on the ellipticity and normalized charge density of the beam. in this paper, simulations are performed to investigate the ellipticity of the wakefield based on the initial driver beam parameters. the matching conditions for this elliptical cavity are discussed. example cases employing the attainable parameter space at the awa and facet facilities are also presented.
|
arxiv:2305.01902
|
advanced data analysis techniques have proved to be crucial for extracting information from noisy images. here we show that principal component analysis can be successfully applied to ultracold gases to unveil their collective excitations. by analyzing the correlations in a series of images we are able to identify the collective modes which are excited, determine their population, image their eigenfunction, and measure their frequency. our method allows us to discriminate the relevant modes from other noise components and is robust with respect to the data sampling procedure. it can be extended to other dynamical systems including cavity polariton quantum gases or trapped ions.
|
arxiv:1410.1675
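The PCA procedure sketched in this abstract can be mocked up on synthetic data. A minimal numpy sketch follows; the 1-d "image" profile, mode shape, oscillation period, and noise level are all invented for illustration, and show how the leading principal component recovers a collective mode's eigenfunction and frequency:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical data: 200 snapshots of a 64-pixel 1-d density profile with one
# collective mode (sine-shaped eigenfunction) oscillating with period 25
x = np.linspace(0.0, np.pi, 64)
mode = np.sin(x)                                   # spatial eigenfunction
t = np.arange(200)
signal = np.outer(np.cos(2.0 * np.pi * t / 25.0), mode)
images = signal + 0.3 * rng.standard_normal((200, 64))

# principal component analysis: subtract the mean image and diagonalize the
# pixel-pixel covariance through an SVD of the centered data matrix
centered = images - images.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)

pc1 = vt[0]                                        # leading principal component
overlap = abs(pc1 @ mode) / np.linalg.norm(mode)   # alignment with the mode

coeff = centered @ pc1                             # time trace of the mode
spectrum = np.abs(np.fft.rfft(coeff - coeff.mean()))
freq = int(np.argmax(spectrum[1:]) + 1)            # dominant frequency bin
# 200 snapshots / period 25 = 8 cycles, so the peak sits in bin 8
```

The leading principal component aligns with the mode's eigenfunction even though each individual image is noisy, and the Fourier transform of its time trace yields the mode frequency.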
|
we report on a series of experiments in which all decision trees consistent with the training data are constructed. these experiments were run to gain an understanding of the properties of the set of consistent decision trees and the factors that affect the accuracy of individual trees. in particular, we investigated the relationship between the size of a decision tree consistent with some training data and the accuracy of the tree on test data. the experiments were performed on a massively parallel maspar computer. the results of the experiments on several artificial and two real world problems indicate that, for many of the problems investigated, smaller consistent decision trees are on average less accurate than the average accuracy of slightly larger trees.
|
arxiv:cs/9403101
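A toy version of this experiment can be sketched in pure Python: enumerate trees consistent with a small boolean training set (here one tree per attribute ordering, a simplified stand-in for the paper's exhaustive construction on a parallel machine) and record leaf count versus test accuracy. The target concept and sample sizes are invented for illustration, and the size/accuracy effect the paper reports is not guaranteed to reproduce on this toy:

```python
import itertools
import random

def build_tree(rows, order):
    """Split on attributes in the given order until every leaf is pure."""
    labels = {label for _, label in rows}
    if len(labels) == 1:
        return ("leaf", labels.pop())
    attr, rest = order[0], order[1:]
    left = [r for r in rows if r[0][attr] == 0]
    right = [r for r in rows if r[0][attr] == 1]
    if not left or not right:            # attribute does not split this node
        return build_tree(rows, rest)
    return ("split", attr, build_tree(left, rest), build_tree(right, rest))

def predict(tree, x):
    while tree[0] == "split":
        _, attr, lo, hi = tree
        tree = hi if x[attr] else lo
    return tree[1]

def n_leaves(tree):
    return 1 if tree[0] == "leaf" else n_leaves(tree[2]) + n_leaves(tree[3])

random.seed(1)
target = lambda x: x[0] ^ x[1]           # true concept; features 2, 3 irrelevant
universe = list(itertools.product((0, 1), repeat=4))
train = [(x, target(x)) for x in random.sample(universe, 10)]
test = [(x, target(x)) for x in universe]

results = []                             # (leaf count, test accuracy) per tree
for order in itertools.permutations(range(4)):
    tree = build_tree(train, list(order))
    assert all(predict(tree, x) == y for x, y in train)   # consistency check
    acc = sum(predict(tree, x) == y for x, y in test) / len(test)
    results.append((n_leaves(tree), acc))
```

Orderings that split on the irrelevant features first produce larger consistent trees, so `results` exhibits the spread of sizes and accuracies that the paper's experiments analyze at scale.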
|
bayesian neural networks ( bnns ) can account for both aleatoric and epistemic uncertainty. however, in bnns the priors are often specified over the weights which rarely reflects true prior knowledge in large and complex neural network architectures. we present a simple approach to incorporate prior knowledge in bnns based on external summary information about the predicted classification probabilities for a given dataset. the available summary information is incorporated as augmented data and modeled with a dirichlet process, and we derive the corresponding \emph{summary evidence lower bound}. the approach is founded on bayesian principles, and all hyperparameters have a proper probabilistic interpretation. we show how the method can inform the model about task difficulty and class imbalance. extensive experiments show that, with negligible computational overhead, our method parallels and in many cases outperforms popular alternatives in accuracy, uncertainty calibration, and robustness against corruptions with both balanced and imbalanced data.
|
arxiv:2207.01234
|
we consider a system of independent one - dimensional random walkers where new particles are added at the origin at fixed rate whenever there is no older particle present at the origin. a poisson ansatz leads to a semi - linear lattice heat equation and predicts that starting from the empty configuration the total number of particles grows as $c\sqrt{t}\log t$. we confirm this prediction and also describe the asymptotic macroscopic profile of the particle configuration.
|
arxiv:1410.4344
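A discrete-time caricature of this particle system is easy to simulate. This is an illustrative approximation, not the continuous-time model of the paper: here every walker jumps once per step and at most one particle is injected per step:

```python
import random

random.seed(0)

def simulate(steps):
    """Walkers on the integer lattice; each step every walker jumps +-1, and a
    new walker is injected at the origin whenever no walker sits there."""
    positions = []
    counts = []
    for _ in range(steps):
        positions = [p + random.choice((-1, 1)) for p in positions]
        if 0 not in positions:
            positions.append(0)
        counts.append(len(positions))
    return counts

counts = simulate(2000)
# particles are never removed, so the count grows, but sublinearly: injection
# is blocked whenever an older walker has returned to the origin
```

The blocking by recurrent older walkers is what slows the growth from the trivial linear rate down to the $\sqrt{t}\log t$ scale predicted by the Poisson ansatz.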
|
we consider a loop - quantum gravity inspired modification of general relativity, where the holst action is generalized by making the barbero - immirzi ( bi ) parameter a scalar field, whose value could be dynamically determined. the modified theory leads to a non - zero torsion tensor that corrects the field equations through quadratic first - derivatives of the bi field. such a correction is equivalent to general relativity in the presence of a scalar field with non - trivial kinetic energy. the stress - energy of this field is automatically covariantly conserved by its own dynamical equations of motion, thus satisfying the strong equivalence principle. every general relativistic solution remains a solution to the modified theory for any constant value of the bi field. for arbitrary time - varying bi fields, a study of cosmological solutions reduces the scalar field stress - energy to that of a pressureless perfect fluid in a comoving reference frame, forcing the scale factor dynamics to be equivalent to those of a stiff equation of state. upon ultraviolet completion, this model could provide a natural mechanism for k - inflation, where the role of the inflaton is played by the bi field and inflation is driven by its non - trivial kinetic energy instead of a potential.
|
arxiv:0807.2652
|
we present quantitative versions of bohr's theorem on general dirichlet series $D = \sum a_n e^{-\lambda_n s}$ under different assumptions on the frequency $\lambda := (\lambda_n)$, including the conditions introduced by bohr and landau. using the summation method by typical (first) means invented by m. riesz, without any condition on $\lambda$, we give upper bounds for the norm of the partial sum operator $S_N(D) := \sum_{n=1}^{N} a_n(D) e^{-\lambda_n s}$ of length $N$ on the space $\mathcal{D}_\infty^{ext}(\lambda)$ of all somewhere convergent $\lambda$-dirichlet series allowing a holomorphic and bounded extension to the open right half plane $[\mathrm{Re} > 0]$. as a consequence, for some classes of $\lambda$'s we obtain a montel theorem in $\mathcal{D}_\infty(\lambda)$, the space of all $D \in \mathcal{D}_\infty^{ext}(\lambda)$ which converge on $[\mathrm{Re} > 0]$. moreover, following the ideas of neder, we give a construction of frequencies $\lambda$ for which $\mathcal{D}_\infty(\lambda)$ fails to be complete.
|
arxiv:1812.04925
|
the reproducibility crisis has led to an increasing number of replication studies being conducted. sample sizes for replication studies are often calculated using conditional power based on the effect estimate from the original study. however, this approach is not well suited as it ignores the uncertainty of the original result. bayesian methods are used in clinical trials to incorporate prior information into power calculations. we propose to adapt this methodology to the replication framework and promote the use of predictive instead of conditional power in the design of replication studies. moreover, we describe how extensions of the methodology to sequential clinical trials can be tailored to replication studies. conditional and predictive power calculated at an interim analysis are compared and we argue that predictive power is a useful tool to decide whether to stop a replication study prematurely. a recent project on the replicability of social sciences is used to illustrate the properties of the different methods.
|
arxiv:2004.10814
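Under a normal approximation with a flat prior, conditional and predictive power both have simple closed forms. A stdlib-only sketch follows; the effect size and standard errors are invented for illustration, with one-sided level 0.025:

```python
from math import erf, sqrt

Z_ALPHA = 1.959963984540054  # z_{1 - 0.025}, one-sided significance level

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def conditional_power(theta_o, se_r):
    """Replication power treating the original estimate theta_o as the true
    effect; ignores the uncertainty of the original study."""
    return phi(theta_o / se_r - Z_ALPHA)

def predictive_power(theta_o, se_o, se_r):
    """Power averaged over the flat-prior posterior theta ~ N(theta_o, se_o^2);
    the replication estimate is then marginally N(theta_o, se_r^2 + se_o^2)."""
    return phi((theta_o - Z_ALPHA * se_r) / sqrt(se_r**2 + se_o**2))

# hypothetical numbers: original estimate 0.5 (se 0.2), replication se 0.15
cp = conditional_power(0.5, 0.15)
pp = predictive_power(0.5, 0.2, 0.15)
# averaging over the original study's uncertainty pulls the power toward 50%
```

The gap between `cp` and `pp` is exactly the effect the abstract highlights: conditional power overstates the chance of a successful replication because it pretends the original estimate is known without error.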
|
we have previously applied several models of high - frequency quasi - periodic oscillations ( hf qpos ) to estimate the spin of the central kerr black hole in the three galactic microquasars, grs 1915+105, gro j1655-40, and xte j1550-564. here we explore the alternative possibility that the central compact body is a super - spinning object ( or a naked singularity ) with the external space - time described by kerr geometry with a dimensionless spin parameter $a = cJ/GM^2 > 1$. we calculate the relevant spin intervals for a subset of hf qpo models considered in the previous study. our analysis indicates that for all but one of the considered models there exists at least one interval of $a > 1$ that is compatible with constraints given by the ranges of the central compact object mass independently estimated for the three sources. for most of the models, the inferred values of $a$ are several times higher than the extreme kerr black hole value $a = 1$. these values may be too high since the spin of superspinars is often assumed to rapidly decrease due to accretion when $a \gg 1$. in this context, we conclude that only the epicyclic and the keplerian resonance models provide estimates that are compatible with the expectation of just a small deviation from $a = 1$.
|
arxiv:1410.6129
|