text (stringlengths 1 to 3.65k) | source (stringlengths 15 to 79)
---|---|
We have computed the hadronic light-by-light (LbL) contribution to the muon anomalous magnetic moment $a_\mu$ in the framework of chiral perturbation theory with the inclusion of the lightest resonance multiplets as dynamical fields (R$\chi$T). A more accurate prediction of this hadronic contribution is essential in view of the future J-PARC and FNAL projects aimed at reducing the uncertainty in this observable. We therefore computed the pseudoscalar transition form factor and proposed the measurement of the $e^+e^-\to\mu^+\mu^-\pi^0$ cross section and dimuon invariant mass spectrum to determine its parameters more accurately. We then evaluated the pion exchange contribution to $a_\mu$, obtaining $(6.66\pm0.21)\cdot10^{-10}$. By comparing the pion exchange contribution and the pion-pole approximation to the corresponding transition form factor ($\pi$TFF), we recalled that the latter underestimates the complete $\pi$TFF contribution by (15-20)%. Finally, we obtained the $\eta^{(\prime)}$ TFF, finding a total contribution of the lightest pseudoscalar exchanges of $(10.47\pm0.54)\cdot10^{-10}$, in agreement with previous results and with a smaller error.
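The abstract quotes the pion-exchange piece and the total of the three lightest pseudoscalar exchanges, but not the combined $\eta + \eta'$ share; that share follows by simple subtraction of central values (our arithmetic, not a number from the paper):

```python
# Central values from the abstract, in units of 10^-10; the eta + eta'
# share is derived here by subtraction of central values (illustration only,
# no propagation of the quoted uncertainties).
pion_exchange = 6.66
total_pseudoscalar = 10.47
eta_etaprime = total_pseudoscalar - pion_exchange
print(f"eta + eta' exchange ~ {eta_etaprime:.2f} x 10^-10")
```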
|
arxiv:1612.03331
|
We study the luminosity function (LF) of the high-z galaxy population with 3 < z < 4 using a purely i-band magnitude-selected spectroscopic sample obtained in the framework of the VVDS. We determine the LF from the VVDS, taking care to add as few assumptions and as simple corrections as possible, and compare our results with those obtained from photometric studies based on Lyman-break selections or photo-z measurements. We find that in the range 3 < z < 4, the VVDS LF is parameterized by phi* = 1.24 +- 0.50 10^-3 mag^-1 Mpc^-3 and M* = -21.49 +- 0.19, assuming a slope alpha = -1.4 consistent with most previous studies. While phi* is comparable to previously found values, M* is significantly brighter, by at least about 0.5 mag. Using the conservative slope -1.4, we find a luminosity density (LD) at 1700A of rho(M < -18.5) = 2.4 10^19 W Mpc^-3 and rho_tot = 3.1 10^19 W Mpc^-3, comparable to that estimated in other studies. The unexpectedly large number of very bright galaxies found in the VVDS indicates that the color-selection and photo-z techniques generally used to build high-z galaxy samples may be affected by a significant fraction of color-measurement failures, or by incomplete modelling of the mix of stellar emission, AGN contribution, dust absorption, and intergalactic extinction assumed to identify high-z galaxies, making pure magnitude selection better able to trace the full population. Because of the difficulty of identifying all low-luminosity galaxies in a spectroscopic survey, the LD could still be significantly underestimated. We also find that the relative contribution of the most luminous galaxies compared to the fainter ones is at least twice as large in the VVDS as in former estimates. Therefore, the VVDS paints a quite different picture of the role of the most actively star-forming galaxies in the history of star formation.
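The parameterization by phi*, M*, and alpha is the standard Schechter form in magnitudes (standard background, assumed here rather than spelled out in the abstract):

```latex
\phi(M)\,dM = 0.4\,\ln 10\;\phi^{*}
  \left[10^{\,0.4\,(M^{*}-M)}\right]^{\alpha+1}
  \exp\!\left[-10^{\,0.4\,(M^{*}-M)}\right] dM
```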
|
arxiv:astro-ph/0608176
|
Implicit in the study of magnetic materials is the concept of spin Hamiltonians, which emerge as the low-energy theories of correlation-driven insulators. In order to predict and establish such Hamiltonians for real materials, a variety of first-principles (ab initio) methods have been developed, based on density functional theory and wavefunction methodologies. In this review, we provide a basic introduction to such methods and to the essential concepts of low-energy Hamiltonians, with a focus on their practical capabilities and limitations. We further discuss our recent efforts toward understanding a variety of complex magnetic systems that present unique challenges from the perspective of ab initio approaches.
|
arxiv:1811.06553
|
The wedge product of vectors has been shown to yield the generalised entanglement measure I-concurrence, wherein the separability of a multiparty qubit system arises from the parallelism of vectors in the underlying Hilbert space of the subsystems. Here, we demonstrate the geometrical conditions on the post-measurement vectors which maximize the entanglement corresponding to the bi-partitions and can yield non-identical sets of maximally entangled states. The Bell states in the two-qubit case, and GHZ and GHZ-like states with superpositions of four constituents for three qubits, naturally arise as the maximally entangled states. The geometric conditions for maximally entangled two-qudit systems are derived, leading to the generalised Bell states, for which the reduced density matrices are maximally mixed. We further show that the reduced density matrix for an arbitrary finite-dimensional subsystem of a general qudit state can be constructed from the overlap of the post-measurement vectors. Using this approach, we discuss the trade-off between the local properties, namely predictability and coherence, and the global property, entanglement, for a non-maximally entangled two-qubit state.
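For reference, the generalised Bell states mentioned here take the standard form below (standard notation, assumed rather than quoted from the paper), whose reduced density matrices are maximally mixed:

```latex
|\Phi\rangle = \frac{1}{\sqrt{d}} \sum_{i=0}^{d-1} |i\rangle_A |i\rangle_B,
\qquad
\rho_A = \mathrm{Tr}_B\,|\Phi\rangle\langle\Phi| = \frac{\mathbb{1}_d}{d}
```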
|
arxiv:2103.04986
|
Sirius, the seventh-nearest stellar system, is a visual binary containing the metallic-line A1 V star Sirius A, the brightest star in the sky, orbited in a 50.13-year period by Sirius B, the brightest and nearest white dwarf (WD). Using images obtained over nearly two decades with the Hubble Space Telescope (HST), along with photographic observations covering almost 20 years and nearly 2300 historical measurements dating back to the 19th century, we determine precise orbital elements for the visual binary. Combined with the parallax and the motion of the A component, these elements yield dynamical masses of 2.063 +/- 0.023 Msun and 1.018 +/- 0.011 Msun for Sirius A and B, respectively. Our precise HST astrometry rules out third bodies orbiting either star in the system down to masses of ~15-25 Mjup. The location of Sirius B in the H-R diagram is in excellent agreement with theoretical cooling tracks for WDs of its dynamical mass, and implies a cooling age of ~126 Myr. The position of Sirius B in the mass-radius plane is also consistent with WD theory, assuming a carbon-oxygen core. Including the pre-WD evolutionary timescale of the assumed progenitor, the total age of Sirius B is about 228 +/- 10 Myr. We calculated evolutionary tracks for stars with the dynamical mass of Sirius A, using two independent codes. We find it necessary to assume a slightly sub-solar metallicity, of about 0.85 Zsun, to fit its location in the luminosity-radius plane. The age of Sirius A based on these models is about 237-247 Myr, with uncertainties of +/- 15 Myr, consistent with that of the WD companion. We discuss astrophysical puzzles presented by the Sirius system, including the probability that the two stars must have interacted in the past, even though there is no direct evidence for this and the orbital eccentricity remains high.
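As a quick consistency check on the quoted dynamical masses, Kepler's third law in solar units ($a^3 = M_{\rm tot} P^2$, with $a$ in AU and $P$ in years) yields the visual orbit's semi-major axis; the ~19.8 AU figure below is derived by us for illustration and is not a number quoted in the abstract.

```python
# Kepler's third law in solar units: a^3 = M_total * P^2
# (a in AU, M in solar masses, P in years). Inputs are the dynamical
# masses and orbital period quoted in the abstract.
m_a, m_b = 2.063, 1.018   # dynamical masses of Sirius A and B (Msun)
period = 50.13            # orbital period (years)
a_au = ((m_a + m_b) * period**2) ** (1.0 / 3.0)
print(f"implied semi-major axis: {a_au:.1f} AU")
```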
|
arxiv:1703.10625
|
The masses of open bottom mesons ($B$ ($B^+$, $B^0$), $\bar{B}$ ($B^-$, $\bar{B^0}$), $B_s$ (${B_s}^0$, $\bar{{B_s}^0}$)) and upsilon states ($\Upsilon(1S)$, $\Upsilon(2S)$, $\Upsilon(3S)$, $\Upsilon(4S)$, and $\Upsilon(1D)$) are investigated in isospin-asymmetric strange hadronic medium at finite temperature in the presence of strong magnetic fields, using a chiral effective Lagrangian approach. For charged baryons, the magnetic field introduces contributions from Landau energy levels. The masses of the open bottom mesons are modified through their interactions with the baryons and the scalar mesons, which themselves undergo modifications in the magnetized medium. The charged open bottom mesons receive additional positive mass shifts due to Landau quantization in the presence of the magnetic field. The medium mass shift of the upsilon states originates from the modification of the gluon condensate, simulated by the variation of the dilaton field ($\chi$), and from a quark mass term in the magnetized medium. The open bottom mesons and upsilon states experience a mass drop in the magnetized medium. The masses of these mesons initially increase with a rise in temperature, and beyond a high value of the temperature their masses are observed to drop. When the temperature is below 90 MeV, the in-medium masses of the mesons increase with an increase in the magnetic field; at high temperatures ($T > 90$ MeV), however, the masses are observed to drop with increasing magnetic field. The dominant in-medium effects are the density effects, which can have observable consequences in the asymmetric heavy-ion collisions planned at the Compressed Baryonic Matter experiment at FAIR, the future facility at GSI.
|
arxiv:2206.05715
|
Recent computations of scattering amplitudes show that N = 8 supergravity is surprisingly well behaved in the ultraviolet and may even be ultraviolet finite in perturbation theory. The novel cancellations necessary for ultraviolet finiteness first appear at one loop in the guise of the "no-triangle hypothesis". We study one-loop amplitudes in pure Einstein gravity and point out the existence of cancellations similar to those found previously in N = 8 supergravity. These cancellations go beyond those found in the one-loop effective action. Using unitarity, this suggests that generic theories of quantum gravity based on the Einstein-Hilbert action may be better behaved in the ultraviolet at higher loops than suggested by naive power counting, though without additional (supersymmetric) cancellations they diverge. We comment on future studies that should be performed to support this proposal.
|
arxiv:0707.1035
|
The next generation of radioactive ion beam facilities, which will give experimental access to many exotic nuclei, is presently being developed. These facilities will make it possible to study very short-lived exotic nuclei with extreme values of isospin, far from the line of beta stability. Such nuclei will be produced with very low cross sections, and new detector arrays are being developed to study them. The Neutron Wall, a neutron detector array, is located at the SPIRAL facility at GANIL. In this work the Neutron Wall has been characterized regarding neutron detection efficiency and discrimination between neutrons and gamma rays. The possibility of increasing the efficiency by raising the high voltage of the photomultiplier tubes has also been studied. For SPIRAL2, a new neutron detector array, NEDA, is being developed. NEDA will operate in a high gamma-ray background environment, which puts high demands on the quality of the discrimination between neutrons and gamma rays. To improve the discrimination, pulse-shape discrimination techniques utilizing digital electronics have been developed and evaluated with regard to the bit resolution and sampling frequency of the ADC. The conclusion is that an ADC with a bit resolution of 12 bits and a sampling frequency of 100 MS/s is adequate for pulse-shape discrimination of neutrons and gamma rays in the neutron energy range 0.3-12 MeV.
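A common digital pulse-shape discrimination technique of the kind evaluated here is charge comparison: the ratio of late ("tail") charge to total charge separates neutron pulses, which have a slower scintillation decay, from gamma-ray pulses. The sketch below is illustrative only (invented pulse shapes and gate lengths, not the NEDA implementation):

```python
# Charge-comparison pulse-shape discrimination sketch: neutrons deposit
# relatively more charge in the pulse tail than gamma rays do.
def tail_to_total(samples, tail_start):
    """Return tail/total charge ratio for a baseline-subtracted pulse."""
    total = sum(samples)
    tail = sum(samples[tail_start:])
    return tail / total if total > 0 else 0.0

# Two toy pulses with equal total charge but different decay tails
# (made-up numbers for illustration).
gamma_pulse = [0, 40, 100, 50, 20, 8, 3, 1, 0, 0]
neutron_pulse = [0, 35, 90, 45, 25, 12, 8, 5, 2, 0]

for name, pulse in [("gamma", gamma_pulse), ("neutron", neutron_pulse)]:
    print(name, round(tail_to_total(pulse, tail_start=4), 3))
```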
|
arxiv:0905.2132
|
We present a new Monte Carlo method which couples path integrals for finite-temperature protons with quantum Monte Carlo for ground-state electrons, and we apply it to metallic hydrogen at pressures beyond molecular dissociation. We report data for the equation of state at temperatures across the melting of the proton crystal. Our data exhibit more structure and higher melting temperatures of the proton crystal than Car-Parrinello molecular dynamics results. This method fills the gap between high-temperature electron-proton path integral and ground-state diffusion Monte Carlo methods.
|
arxiv:physics/0405056
|
We investigate the generation of a seed magnetic field through the Chern-Simons coupling between the U(1) gauge field and an axion field that commences to oscillate at various epochs, depending on its mass scale. We address axions that begin oscillating during inflation, during reheating, and in the radiation-dominated era after the thermalization of the universe. We study the resonant generation mechanisms and highlight that an oscillation time scale small compared to that of the cosmic expansion can lead to efficient resonant generation of a (hyper)magnetic field, even for an $\mathcal{O}(1)$ coupling. In addition, we demonstrate that the generated field can be helical, due to the tachyonic amplification phase prior to the onset of oscillation. Furthermore, it is shown that parametric resonance during reheating can generate a circularly polarized (hyper)magnetic field in a void region with present amplitude $B_0 = 3\times10^{-15}$ Gauss and coherence length $\lambda_0 = 0.3$ pc, without being plagued by backreaction issues.
|
arxiv:1909.00288
|
One of the most promising ways to study the epoch of reionization (EoR) is through radio observations of the redshifted 21-cm line emission from neutral hydrogen. These observations are complicated by the fact that the mapping of redshifts to line-of-sight positions is distorted by the peculiar velocities of the gas. Such distortions can be a source of error if they are not properly understood, but they also encode information about cosmology and astrophysics. We study the effects of redshift space distortions on the power spectrum of 21-cm radiation from the EoR using large-scale $N$-body and radiative transfer simulations. We quantify the anisotropy introduced in the 21-cm power spectrum by redshift space distortions, and show how it evolves as reionization progresses and how it relates to the underlying physics. We go on to study the effects of redshift space distortions on LOFAR observations, taking instrument noise and foreground subtraction into account. We find that LOFAR should be able to directly observe the power spectrum anisotropy due to redshift space distortions at spatial scales around $k \sim 0.1$ Mpc$^{-1}$ after $\gtrsim 1000$ hours of integration time. At larger scales, sample errors become a limiting factor, while at smaller scales detector noise and foregrounds make the extraction of the signal problematic. Finally, we show how the astrophysical information contained in the evolution of the anisotropy of the 21-cm power spectrum can be extracted from LOFAR observations, and how it can be used to distinguish between different reionization scenarios.
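In linear theory, the anisotropy referred to here is commonly expressed through $\mu = \cos\theta$, the cosine of the angle between the wavevector and the line of sight, by decomposing the redshift-space 21-cm power spectrum into even powers of $\mu$ (a standard quasi-linear form, assumed here rather than quoted from the paper):

```latex
P^{s}_{21}(k,\mu) = P_{\mu^{0}}(k) + \mu^{2}\,P_{\mu^{2}}(k) + \mu^{4}\,P_{\mu^{4}}(k)
```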
|
arxiv:1303.5627
|
In this letter, inspired by the maximum inter-element spacing (IES) constraint (MISC) criterion, an enhanced MISC-based (EMISC) sparse array (SA) with high uniform degrees-of-freedom (uDOFs) and low mutual coupling (MC) is proposed, analyzed, and discussed in detail. For the EMISC SA, an IES set is first determined by the maximum IES and the number of elements. The EMISC SA is then composed of seven uniform linear sub-arrays (ULSAs) derived from this IES set. An analysis of the uDOFs and the weight function shows that the proposed EMISC SA outperforms the IMISC SA in terms of uDOFs and MC. Simulation results show a significant advantage of the EMISC SA over other existing SAs.
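The uDOF figure of merit comes from the difference coarray of the sensor positions: the longest run of consecutive lags it contains. A minimal sketch of that computation, using an arbitrary toy array (not the EMISC geometry from the letter):

```python
# Difference coarray and uniform DOFs of a sparse linear array
# (positions in units of the fundamental spacing).
def uniform_dofs(positions):
    """Count the consecutive lags around 0 present in the difference coarray."""
    lags = {p - q for p in positions for q in positions}
    n = 0
    while n + 1 in lags:
        n += 1
    return 2 * n + 1  # lags -n..n are all present

# A toy nested-style array: dense segment plus a sparse segment.
positions = [0, 1, 2, 3, 7, 11, 15]
print(uniform_dofs(positions))
```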
|
arxiv:2309.09044
|
A self-consistent model is developed to investigate the attachment/detachment kinetics of two soft, deformable microspheres with irregular surfaces, coated with flexible binding ligands. The model highlights how the microscale binding kinetics of these ligands, as well as the attractive/repulsive potential of the charged surface, affects the static deformed configuration of the spheres. It is shown that in the limit of a smooth, neutrally charged surface (i.e., the Debye length $\kappa \rightarrow \infty$), interacting via elastic binders (i.e., the stiffness coefficient $\lambda \rightarrow 0$), the adhesion mechanics approaches the regime of application of the JKR theory; in this particular limit, the contact radius scales with the particle radius according to the scaling law $r_c \propto R^{\frac{2}{3}}$. We show that adhesion dominates in larger particles with highly charged surfaces and with resilient binders. The normal stress distribution within the contact area varies with the binder stiffness coefficient, from a maximum at the center to a maximum at the periphery of the region. Surface heterogeneities result in diminished adhesion, with a distinct reduction in the pull-off force, a larger separation gap, weaker normal stress, and a limited area of adhesion. These results are in agreement with published experimental findings.
|
arxiv:1510.02813
|
As the Routing Protocol for Low power and lossy networks (RPL) became the standard for routing in Internet of Things (IoT) networks, many researchers have investigated the security aspects of this protocol. However, no work (to the best of our knowledge) has investigated the use of the security mechanisms included in the RPL standard, mainly because no IoT operating system had yet implemented these features. A partial implementation of the RPL security mechanisms for the Contiki operating system was presented recently (by Perazzo et al.), which provided us with an opportunity to examine them. In this paper, we investigate the effects and challenges of using the RPL security mechanisms under common routing attacks. First, a comparison of RPL performance with and without its security mechanisms, under four routing attacks (blackhole, selective-forward, neighbor, and wormhole attacks), is conducted using several metrics (e.g., average data packet delivery rate, average data packet delay, and average power consumption). This comparison is performed using two commonly used radio duty-cycle protocols. Secondly, based on the observations from this comparison, we propose two techniques that can reduce the effects of such attacks without adding new security mechanisms to RPL. An evaluation of these techniques shows improved performance of RPL under the investigated attacks, except for the wormhole attack.
|
arxiv:2004.07815
|
We investigate the nucleon-to-$\Delta$ transition form factors in a soft-wall AdS/QCD model and in a light-front quark-diquark model inspired by AdS/QCD. From the transition form factors we evaluate the transition charge densities which govern the nucleon-to-$\Delta$ excitation. We consider both the unpolarized and the transversely polarized cases. The AdS/QCD predictions are compared with available experimental data and with the results of the global parameterization MAID2007.
|
arxiv:1605.00997
|
Given $(M,g)$ a smooth compact Riemannian manifold without boundary of dimension $n \geq 3$, we consider the first conformal eigenvalue, which is by definition the supremum of the first eigenvalue of the Laplacian among all metrics conformal to $g$ of volume 1. We prove that it is always greater than $n \omega_n^{\frac{2}{n}}$, the value it takes in the conformal class of the round sphere, unless $(M,g)$ is conformally diffeomorphic to the standard sphere.
|
arxiv:1310.4698
|
We performed a precise calculation of physical quantities related to the axial structure of the nucleon using the 2+1-flavor lattice QCD gauge configurations (PACS10 configurations) generated at the physical point, with a lattice volume larger than $(10\;\mathrm{fm})^4$, by the PACS Collaboration. The nucleon matrix element of the axial-vector current contains two nucleon form factors: the axial-vector form factor $F_A$ and the induced pseudoscalar form factor $F_P$. Recently, lattice QCD simulations have succeeded in reproducing the experimental value of the axial-vector coupling $g_A$, determined from $F_A(q^2)$ at zero momentum transfer $q^2 = 0$, at the percent level of statistical accuracy. However, the $F_P$ form factor has so far not reproduced the experimental values well, owing to strong $\pi N$ excited-state contamination. We therefore proposed a simple subtraction method for removing the leading $\pi N$-state contribution, and succeeded in reproducing for $F_P(q^2)$ the values obtained by two experiments: muon capture on the proton and pion electro-production. The novel approach can also be applied to the nucleon pseudoscalar matrix element to determine the pseudoscalar form factor $G_P$ with the help of the axial Ward-Takahashi identity. The resulting form factors, $F_P(q^2)$ and $G_P(q^2)$, are in good agreement with the prediction of the pion-pole dominance model. In the new analysis, the induced pseudoscalar coupling $g_P^\ast$ and the pion-nucleon coupling $g_{\pi NN}$ can be evaluated with a few percent accuracy, including systematic uncertainties, using existing data calculated at two lattice spacings.
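The pion-pole dominance model referred to here relates the two axial form factors in the standard way (standard relation with $m_N$ the nucleon mass and $m_\pi$ the pion mass, written for spacelike momentum transfer $Q^2 = -q^2$; this convention is our assumption, not quoted from the abstract):

```latex
F_P(Q^2) \simeq \frac{2\,m_N\, F_A(Q^2)}{Q^2 + m_\pi^2}
```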
|
arxiv:2505.06854
|
Today's systems have diverse needs that are difficult to address using one-size-fits-all commodity DRAM. Unfortunately, although system designers can theoretically adapt commodity DRAM chips to meet their particular design goals (e.g., by reducing access timings to improve performance, or implementing system-level RowHammer mitigations), we observe that designers today lack sufficient insight into commodity DRAM chips' reliability characteristics to implement these techniques in practice. In this work, we make a case for DRAM manufacturers to provide increased transparency into key aspects of DRAM reliability (e.g., basic chip design properties and testing strategies). Doing so enables system designers to make informed decisions to better adapt commodity DRAM to modern systems' needs while preserving its cost advantages. To support our argument, we study four ways that system designers can adapt commodity DRAM chips to system-specific design goals: (1) improving DRAM reliability; (2) reducing DRAM refresh overheads; (3) reducing DRAM access latency; and (4) mitigating RowHammer attacks. We observe that adopting solutions for any of the four goals requires system designers to make assumptions about a DRAM chip's reliability characteristics. These assumptions discourage system designers from using such solutions in practice, due to the difficulty of both making and relying upon the assumptions. We identify DRAM standards as the root of the problem: current standards rigidly enforce a fixed operating point, with no specifications for how a system designer might explore alternative operating points. To overcome this problem, we introduce a two-step approach that reevaluates DRAM standards with a focus on transparency of DRAM reliability, so that system designers are encouraged to make the most of commodity DRAM technology for both current and future DRAM chips.
|
arxiv:2204.10378
|
Here, we study the problem of decoding information transmitted through unknown quantum states. We assume that Alice encodes an alphabet into a set of orthogonal quantum states, which are then transmitted to Bob. However, the quantum channel that mediates the transmission maps the orthogonal states into non-orthogonal states, possibly mixed. If an accurate model of the channel is unavailable, then the states received by Bob are unknown. In order to decode the transmitted information, we propose to train a measurement device to achieve the smallest possible error in the discrimination process. This is achieved by supplementing the quantum channel with a classical one, which allows the transmission of the information required for the training, and by resorting to a noise-tolerant optimization algorithm. We demonstrate the training method in the case of minimum-error discrimination and show that it achieves error probabilities very close to the optimal one. In particular, in the case of two unknown pure states our proposal approaches the Helstrom bound. A similar result holds for larger numbers of states in higher dimensions. We also show that a reduction of the search space, which is used in the training process, leads to a considerable reduction in the required resources. Finally, we apply our proposal to the case of the dephasing channel, reaching an accurate value of the optimal error probability.
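The Helstrom bound mentioned here has a closed form for two pure states with priors $p_0, p_1$ and overlap $|\langle\psi_0|\psi_1\rangle|$: $P_{\rm err} = \tfrac{1}{2}\bigl(1 - \sqrt{1 - 4 p_0 p_1 |\langle\psi_0|\psi_1\rangle|^2}\bigr)$. A one-function sketch (the overlap values fed in are arbitrary illustrations):

```python
# Minimum-error (Helstrom) bound for discriminating two pure states.
import math

def helstrom_error(p0, p1, overlap):
    """Optimal discrimination error for priors p0, p1 and |<psi0|psi1>| = overlap."""
    return 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * p0 * p1 * overlap**2))

# Orthogonal states are perfectly distinguishable; identical ones are not.
print(helstrom_error(0.5, 0.5, 0.0), helstrom_error(0.5, 0.5, 1.0))
```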
|
arxiv:2111.13568
|
The presence of the $(B+L)$-conserving decay modes $n \to K^+ e^-$, $n \to K^+ \mu^-$, $p \to K^+ e^- \pi^+$, and $p \to K^+ \mu^- \pi^+$ is shown to be a characteristic feature of a class of models with explicit breaking of $R$-parity. These modes dominate over the $(B-L)$-conserving ones in certain regions of the parameter space; the impact of this scenario on nucleon decay searches at Super-Kamiokande is discussed.
|
arxiv:hep-ph/9503227
|
The magnetic field effect on the pion superfluid phase transition is investigated in the framework of a Pauli-Villars regularized NJL model. Instead of directly dealing with the charged pion condensate, we apply Goldstone's theorem (the massless Goldstone boson $\pi^+$) to determine the onset of the pion superfluid phase, and obtain the phase diagram in the space of magnetic field, temperature, isospin chemical potential, and baryon chemical potential. At weak magnetic field, it is analytically proved that the critical isospin chemical potential of the pion superfluid phase transition is equal to the mass of the $\pi^+$ meson in the magnetic field. The pion superfluid phase is retarded to higher isospin chemical potential, and can survive at higher temperature and higher baryon chemical potential under an external magnetic field.
|
arxiv:2009.11550
|
Local exact controllability of the 1D NLS (subject to zero boundary conditions) with distributed control is shown to hold in an $H^1$-neighbourhood of the nonlinear ground state. The Hilbert Uniqueness Method (HUM), due to J.-L. Lions, is applied to the linear control problem that arises by linearization around the ground state. The application of HUM crucially depends on the spectral properties of the linearized NLS operator, which are given in detail.
|
arxiv:math/0608135
|
In this paper we study the cross section at leading order in $1/Q$ for polarized Drell-Yan scattering at measured lepton-pair transverse momentum $Q_T$. We find that for a hadron with spin $1/2$ the quark content at leading order is described by six distribution functions for each flavor, which depend on both the lightcone momentum fraction $x$ and the quark transverse momentum $\bm{k}_T^2$. These functions are illustrated for a free-quark ensemble. The cross sections for both longitudinal and transverse polarizations are expressed in terms of convolution integrals over the distribution functions.
|
arxiv:hep-ph/9403227
|
Shortcuts to adiabaticity (STA) are techniques that allow rapid variation of the system Hamiltonian without inducing excess heating. Fast optical transfer of atoms between different locations is a prime example of an STA application. We show that the boundary conditions on the atomic position, which are imposed to find the STA trajectory, lead to highly impractical boundary conditions for the optical trap. Our experimental results demonstrate that, as a result, previously suggested STA trajectories generally do not perform well. We develop and demonstrate two complementary methods that solve the boundary-condition problem and allow the construction of realistic and flexible STA movements. Our technique can also account for non-harmonic terms in the confining potential.
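A concrete example of the boundary conditions at issue is the textbook fifth-order polynomial transport ramp, which moves the trap by a distance $d$ with zero velocity and acceleration at both ends (a standard illustrative trajectory, not the authors' corrected protocol):

```python
# Fifth-order STA transport ramp in rescaled time s = t/T:
#   x(s) = d * (10 s^3 - 15 s^4 + 6 s^5)
# chosen so that x, dx/ds, and d^2x/ds^2 vanish at s = 0 and
# x = d, dx/ds = d^2x/ds^2 = 0 at s = 1.
def x(s, d=1.0):
    return d * (10 * s**3 - 15 * s**4 + 6 * s**5)

def v(s, d=1.0):  # dx/ds
    return d * (30 * s**2 - 60 * s**3 + 30 * s**4)

print(x(0.0), x(1.0), v(0.0), v(1.0))
```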
|
arxiv:1805.11889
|
The wavelet transform of polarized fluorescence spectra of human breast tissues is found to localize spectral features that can reliably differentiate normal and malignant tissue types. The intensity differences between parallel- and perpendicularly-polarized fluorescence spectra are subjected to investigation, since these are relatively free of the diffusive background. A number of parameters, capturing spectral variations and subtle changes in the diseased tissues in the visible wavelength regime, are clearly identifiable in the wavelet domain. These manifest in both the average low-pass and the high-frequency high-pass wavelet coefficients.
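The low-pass/high-pass split described here can be illustrated with a single level of the Haar wavelet transform, the simplest discrete wavelet (the input values below are arbitrary, not tissue data):

```python
# One level of the Haar DWT: pairwise averages (low-pass, "approximation")
# and pairwise differences (high-pass, "detail"), each scaled by 1/sqrt(2).
import math

def haar_level(signal):
    """Return (approximation, detail) coefficients for an even-length signal."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

a, d = haar_level([4.0, 2.0, 5.0, 5.0])
print(a, d)
```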
|
arxiv:1205.0447
|
At SwissFEL, a fully non-invasive characterization of the beam energy distribution can be performed by means of a synchrotron radiation monitor (SRM) imaging, in the visible, the transverse profile of the electron beam in a magnetic chicane. Under conditions of off-crest acceleration in the SwissFEL injector, and according to a first-order Taylor series expansion, the relative energy distribution of the electron beam can be linearly expressed as a function of the distribution of the electron longitudinal coordinates via the coefficient of the RF-induced beam energy chirp. It is hence possible to express the distribution of the electron longitudinal coordinates at the entrance of the magnetic chicane of SwissFEL as a function of the distribution of the dispersed electron trajectories in the horizontal plane of the magnetic chicane. Machine parameters and instrument data at SwissFEL can be acquired beam-synchronously. A shot-by-shot correlation between the analysis results of the SRM camera images and the linear coefficient of the beam energy chirp resulting from the RF parameters of the injector can hence be established. Finally, taking into account the compression factor of the magnetic chicane, the electron bunch length at the exit of the magnetic chicane can be expressed as a function of the horizontal size of the electron beam imaged by the SRM. Bunch-length measurements performed by means of the SRM in the first magnetic chicane of SwissFEL will be presented.
|
arxiv:1905.08081
|
Memorization of training data is an active research area, yet our understanding of the inner workings of neural networks is still in its infancy. Recently, Haim et al. (2022) proposed a scheme to reconstruct training samples from multilayer perceptron binary classifiers, effectively demonstrating that a large portion of training samples are encoded in the parameters of such networks. In this work, we extend their findings in several directions, including reconstruction from multiclass and convolutional neural networks. We derive a more general reconstruction scheme that is applicable to a wider range of loss functions, such as regression losses. Moreover, we study the various factors that contribute to networks' susceptibility to such reconstruction schemes. Intriguingly, we observe that using weight decay during training increases reconstructability, both in terms of quantity and quality. Additionally, we examine the influence of the number of neurons relative to the number of training samples on reconstructability. Code: https://github.com/gonbuzaglo/decoreco
|
arxiv:2307.01827
|
V. V. Shchigolev has proven that over any infinite field k of characteristic p > 2, the T-space generated by G = {x_1^p, x_1^p x_2^p, ...} is finitely based, which answered a question raised by A. V. Grishin. Shchigolev went on to conjecture that every infinite subset of G generates a finitely based T-space. In this paper, we prove that Shchigolev's conjecture is correct by showing that for any field of characteristic p > 2, the T-space generated by any subset {x_1^p x_2^p ... x_{i_1}^p, x_1^p x_2^p ... x_{i_2}^p, ...}, i_1 < i_2 < i_3 < ..., of G has a T-space basis of size at most i_2 - i_1 + 1.
|
arxiv:0911.1709
|
For optimal placement and orchestration of network services, it is crucial that their structure and semantics are specified clearly and comprehensively and are available to an orchestrator. Existing specification approaches are either ambiguous or miss important aspects of the behavior of the virtual network functions (VNFs) forming a service. We propose to formally and unambiguously specify the behavior of these functions and services using queuing Petri nets (QPNs). QPNs are an established method that allows one to express queuing, synchronization, stochastically distributed processing delays, and changing traffic volume and characteristics at each VNF. With QPNs, multiple VNFs can be connected into complete network services in any structure, even specifying bidirectional network services containing loops. We propose a tool-based workflow that supports the specification of network services and the automatic generation of corresponding simulation code to enable an in-depth analysis of their behavior and performance. In a case study, we show how developers can benefit from analysis insights, e.g., to anticipate the impact of different service configurations. We also discuss how management and orchestration systems can benefit from our clear and comprehensive specification approach and its extensive analysis possibilities, leading to better placement of VNFs and improved quality of service.
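The queuing behavior QPNs capture for chained VNFs can be hinted at with a much simpler toy: VNFs modeled as tandem FIFO queues with stochastic service times (a plain Lindley-style recursion, not a QPN; all rates and arrival patterns below are invented for illustration):

```python
# Toy tandem-queue model of VNFs in series: each station has one server,
# FIFO order, and exponentially distributed service times.
import random

def tandem_departures(arrivals, rates, rng):
    """Departure times of jobs passing through the stations in series."""
    times = list(arrivals)
    for rate in rates:              # each VNF = one queueing station
        free = 0.0                  # time at which the server is next free
        out = []
        for t in times:
            start = max(t, free)    # wait if the server is busy
            free = start + rng.expovariate(rate)
            out.append(free)
        times = out
    return times

rng = random.Random(1)
arrivals = [0.1 * i for i in range(100)]            # deterministic arrivals
departures = tandem_departures(arrivals, [15.0, 12.0], rng)
print(round(departures[-1] - arrivals[-1], 3))      # sojourn of the last job
```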
|
arxiv:1803.07007
|
We introduce the notion of pointwise coverage to measure the explainability properties of machine learning classifiers. An explanation for a prediction is a definably simple region of the feature space sharing the same label as the prediction, and the coverage of an explanation measures its size or generalizability. With this notion of explanation, we investigate whether there is a natural characterization of the most explainable classifier. In accordance with our intuitions, we prove that the binary linear classifier is, up to negligible sets, uniquely the most explainable classifier.
|
arxiv:1910.08595
|
Studying the radial variation of the stellar mass function in globular clusters (GCs) has proved a valuable tool to explore the collisional dynamics leading to mass segregation and core collapse. In order to study the radial dependence of the luminosity and mass function of M10, we used deep, high-resolution archival ACS/HST images, reaching out to approximately the cluster's half-mass radius ($r_{hm}$), combined with deep WFPC2 images that extend our radial coverage to more than 2$r_{hm}$. From our photometry, we derived a radial mass segregation profile and a global mass function that we compared with those of simulated clusters containing different energy sources (namely hard binaries and/or an IMBH) able to halt core collapse and to quench mass segregation. A set of direct N-body simulations of GCs, with and without an IMBH of mass 1% of the total cluster mass, comprising different initial mass functions (IMFs) and primordial binary fractions, was used to predict the observed mass segregation profile and mass function. The mass segregation profile of M10 is not compatible with cluster models without either an IMBH or primordial binaries, as a source of energy appears to be moderately quenching mass segregation in the cluster. Unfortunately, the present observational uncertainty on the binary fraction in M10 does not allow us to confirm the presence of an IMBH in the cluster, since an IMBH, a dynamically non-negligible binary fraction (~5%), or both can equally well explain the radial dependence of the cluster mass function.
|
arxiv:1003.0280
|
Finite-temperature compact electrodynamics in (2+1) dimensions is studied in the presence of external electromagnetic fields. The deconfinement temperature is found to be insensitive to the external fields. This result corroborates our observation that external fields create additional small-size magnetic dipoles from the vacuum which do not spoil the confining properties of the model at low temperature. However, the Polyakov loop is not an order parameter of confinement: it can vanish in the deconfined phase in the presence of an external field, and this does not mean the restoration of confinement for certain external field fluxes. As a next step in the study of (2+1)D QED, the influence of monopoles on the photon propagator is studied. First results are presented showing this connection in the confining phase (without external field).
|
arxiv:hep-lat/0110038
|
Software vulnerability detection is crucial for high-quality software development. Recently, studies utilizing graph neural networks (GNNs) to learn graph representations of code in vulnerability detection tasks have achieved remarkable success. However, existing graph-based approaches mainly face two limitations that prevent them from generalizing well to large code graphs: (1) the interference of noise information in the code graph; (2) the difficulty in capturing long-distance dependencies within the graph. To mitigate these problems, we propose a novel vulnerability detection method, ANGLE, whose novelty mainly embodies hierarchical graph refinement and context-aware graph representation learning. The former hierarchically filters redundant information in the code graph, thereby reducing the size of the graph, while the latter collaboratively employs a graph Transformer and a GNN to learn code graph representations from both the global and local perspectives, thus capturing long-distance dependencies. Extensive experiments demonstrate promising results on three widely used benchmark datasets: our method significantly outperforms several other baselines in terms of accuracy and F1 score. In particular, on large code graphs, ANGLE achieves an improvement in accuracy of 34.27% to 161.93% compared to the state-of-the-art method, AMPLE. Such results demonstrate the effectiveness of ANGLE in vulnerability detection tasks.
|
arxiv:2412.10164
|
A white paper is a report or guide that informs readers concisely about a complex issue and presents the issuing body's philosophy on the matter. It is meant to help readers understand an issue, solve a problem, or make a decision. Since the 1990s, this type of document has proliferated in business. Today, a business-to-business (B2B) white paper falls under grey literature, more akin to a marketing presentation meant to persuade customers and partners and promote a certain product or viewpoint. The term originated in the 1920s to mean a type of position paper or industry report published by a department of the UK government.

== In government ==

The term white paper originated with the British government, with the Churchill White Paper of 1922 being an early example. In the British government, a white paper is usually the less extensive version of the so-called blue book, both terms being derived from the colour of the document's cover. White papers are a "tool of participatory democracy ... not [an] unalterable policy commitment". "White papers have tried to perform the dual role of presenting firm government policies while at the same time inviting opinions upon them." In Canada, a white paper is "a policy document, approved by Cabinet, tabled in the House of Commons and made available to the general public". The "provision of policy information through the use of white and green papers can help to create an awareness of policy issues among parliamentarians and the public and to encourage an exchange of information and analysis. They can also serve as educational techniques." White papers are a way the government can present policy preferences before it introduces legislation. Publishing a white paper tests public opinion on controversial policy issues and helps the government gauge its probable impact. By contrast, green papers, which are issued much more frequently, are more open-ended. Also known as consultation documents, green papers may merely propose a strategy to implement in the details of other legislation, or they may set out proposals on which the government wishes to obtain public views and opinion. Examples of governmental white papers include, in Australia, Full Employment in Australia and, in the United Kingdom, the White Paper of 1939 and the 1966 Defence White Paper. In Israeli history, the British White Paper of 1939 – marking a sharp turn against Zionism in British policy and at the time greeted with great anger by the Jewish Yishuv community in Mandatory Palestine – is remembered as "the White Paper" (in Hebrew ha'sefer ha'lava
|
https://en.wikipedia.org/wiki/White_paper
|
The optical activity of a chiral medium is discussed from the viewpoint of energy transfer. The absorbed energy of the polarized light in the optically active medium is transferred to the mechanical rotation of the chiral molecules. They acquire a helicity-dependent geometric phase due to the passage of the polarized light, which loses energy by undergoing an optical rotation. The entanglement of a polarized photon and a fermion is the very source of this behavior. This theoretical insight is reflected in an experimental study with six essential and five non-essential amino acids.
|
arxiv:1909.07795
|
We study quasi-Monte Carlo (QMC) integration of smooth functions defined over the multi-dimensional unit cube. Inspired by a recent work of Pan and Owen, we study a new construction-free median QMC rule which can exploit the smoothness and the weights of function spaces adaptively. For weighted Korobov spaces, we draw a sample of $r$ independent generating vectors of rank-1 lattice rules, compute the integral estimate for each, and approximate the true integral by the median of these $r$ estimates. For weighted Sobolev spaces, we use the same approach but with the rank-1 lattice rules replaced by high-order polynomial lattice rules. A major advantage over the existing approaches is that we do not need to construct good generating vectors by a computer search algorithm, while our median QMC rule achieves almost the optimal worst-case error rate for the respective function space with any smoothness and weights, with a probability that converges to 1 exponentially fast as $r$ increases. Numerical experiments illustrate and support our theoretical findings.
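As a rough illustration of the median rule for the lattice-rule case, the following sketch (all names, parameter values, and the test integrand are my own, not the paper's) draws random generating vectors, forms one rank-1 lattice estimate per vector, and returns the median of the estimates:

```python
import numpy as np

def lattice_rule(f, z, n):
    """Rank-1 lattice rule: average f over the points {i*z/n}, i = 0..n-1."""
    i = np.arange(n)
    points = np.mod(np.outer(i, z) / n, 1.0)  # n x d array of lattice points
    return np.mean([f(x) for x in points])

def median_qmc(f, d, n, r, rng):
    """Median QMC rule: draw r independent random generating vectors,
    compute one lattice-rule estimate for each, return the median."""
    estimates = []
    for _ in range(r):
        z = rng.integers(1, n, size=d)  # random generating vector, entries in [1, n)
        estimates.append(lattice_rule(f, z, n))
    return np.median(estimates)

rng = np.random.default_rng(0)
# smooth test integrand on [0,1]^2 with known integral 1
f = lambda x: np.prod(1.0 + np.sin(2 * np.pi * x))
est = median_qmc(f, d=2, n=1024, r=11, rng=rng)
```

The point of the construction is visible even in this toy: occasional bad generating vectors produce outlier estimates, but the median discards them without any search for good vectors.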
|
arxiv:2201.09413
|
The Clausius inequality, one of the classical formulations of the second law, was recently found to be violated in the quantum regime. Here this result is formulated in the context of a mesoscopic or nanoscale linear RLC circuit interacting with a thermal bath. Previous experiments in this and related fields are analyzed, and possibilities for experimental detection of the violation are pointed out. We argue that recent experiments have reached the range of temperatures where the effect should be visible, and that a part of the proposal has already been confirmed.
|
arxiv:cond-mat/0205156
|
The problem of reservation in a large distributed system is analyzed via a new mathematical model. A typical application is a station-based car-sharing system, which can be described as a closed stochastic network where the nodes are the stations and the customers are the cars. The user can reserve both the car and the parking space. In this paper, we study the evolution of the system when the reservation of parking spaces and cars is effective for all users. The asymptotic behavior of the underlying stochastic network is given when the number $N$ of stations and the fleet size increase at the same rate. The analysis involves a Markov process on a state space with dimension of order $N^2$. It is quite remarkable that the state process describing the evolution of the stations, whose dimension is of order $N$, converges in distribution, although it is not Markov, to a non-homogeneous Markov process. We prove this mean-field convergence. We also prove, using combinatorial arguments, that the mean-field limit has a unique equilibrium measure when the time between reserving and picking up the car is sufficiently small. This result extends the case where only the parking space can be reserved.
|
arxiv:2201.08298
|
We present a study of the static magnetic properties of individual GaAs-Fe$_{33}$Co$_{67}$ core-shell nanorods. X-ray magnetic circular dichroism combined with photoemission electron microscopy (XMCD-PEEM) and scanning transmission X-ray microscopy (STXM) were used to investigate the magnetic nanostructures. The magnetic layer is purposely designed to establish a magnetic easy axis along neither a high-symmetry nor a mirror axis, to promote 3D magnetic helical order on the curved surface. In practice, two types of magnetic textures with in-plane magnetization were found inside the nanostructures' facets: magnetic domains with almost longitudinal or almost perpendicular magnetization with respect to the axis of the tube. We observe that a magnetic field applied perpendicular to the long axis of the nanostructure can add an azimuthal component to the previously almost longitudinal magnetization.
|
arxiv:2408.15036
|
Statistical resampling methods have become feasible for parametric estimation, hypothesis testing, and model validation now that the computer is a ubiquitous tool for statisticians. This essay focuses on the resampling technique for parametric estimation known as the jackknife procedure. To outline the usefulness of the method and its place in the general class of statistical resampling techniques, I first briefly delineate two similar resampling methods: the bootstrap and the permutation test. I then outline the jackknife method and show an example of its use.
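The jackknife itself is compact enough to state in code. This generic leave-one-out sketch (my own, not taken from the essay) computes the bias-corrected estimate and the jackknife standard error of an arbitrary statistic:

```python
import numpy as np

def jackknife(data, stat):
    """Leave-one-out jackknife: return (bias-corrected estimate,
    jackknife estimate of the standard error) for statistic `stat`."""
    n = len(data)
    theta_hat = stat(data)
    # leave-one-out replicates of the statistic
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])
    theta_bar = loo.mean()
    bias = (n - 1) * (theta_bar - theta_hat)
    se = np.sqrt((n - 1) / n * np.sum((loo - theta_bar) ** 2))
    return theta_hat - bias, se

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=200)
est, se = jackknife(x, np.mean)
```

A handy sanity check: for the sample mean, the jackknife bias is exactly zero and the jackknife standard error reduces algebraically to the usual $s/\sqrt{n}$.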
|
arxiv:1606.00497
|
We exhibit, in a model with simple dynamics, specifically a particle in a square box or two particles in one-dimensional boxes, that if an experimenter can prepare the initial wave function of a system, the maximal information about the positions of Bohmian particles that is compatible with "no signalling" is that they are distributed according to $|\psi(x)|^2$. In particular, the positions cannot be prepared independently of the wave function. Any sharper "actual" position of the particle must be inaccessible, since it could be used to send signals instantaneously. This is a consequence of the non-local character of the Bohmian dynamical law.
|
arxiv:1902.03752
|
Simple examples are constructed that show the entanglement of two qubits being both increased and decreased by interactions on just one of them. One of the two qubits interacts with a third qubit, a control, that is never entangled or correlated with either of the two entangled qubits individually, and is never entangled with, but becomes correlated with, the system of those two qubits. The two entangled qubits do not interact, but their state can change from maximally entangled to separable or from separable to maximally entangled. Similar changes for the two qubits are made with a swap operation between one of the qubits and a control; then there are compensating changes of entanglement that involve the control. When the entanglement increases, the map that describes the change of the state of the two entangled qubits is not completely positive. A combination of two independent interactions that individually give exponential decay of the entanglement can cause the entanglement to not decay exponentially but, instead, go to zero at a finite time.
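The swap scenario in particular is easy to check numerically. Below is a small sketch (my own construction, using the standard Wootters concurrence as the entanglement measure) in which swapping one half of a Bell pair with a pure-state control takes the pair from maximally entangled to separable:

```python
import numpy as np

# Pauli-Y for the spin flip in Wootters' concurrence
Y = np.array([[0, -1j], [1j, 0]])
YY = np.kron(Y, Y)

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    rho_tilde = YY @ rho.conj() @ YY
    evals = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.abs(evals)))[::-1]  # descending square roots
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def reduced_ab(psi):
    """Reduced density matrix of qubits A,B of a 3-qubit pure state (order A,B,C)."""
    m = psi.reshape(4, 2)        # rows index (A,B), columns index C
    return m @ m.conj().T        # trace out the control C

# |Phi+>_AB (x) |0>_C : A,B maximally entangled, control C in a pure state
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
psi = np.kron(bell, np.array([1, 0]))

# SWAP acting on qubits A and C: |a,b,c> -> |c,b,a>
SWAP_AC = np.zeros((8, 8))
for a in range(2):
    for b in range(2):
        for c in range(2):
            SWAP_AC[(c << 2) | (b << 1) | a, (a << 2) | (b << 1) | c] = 1

c_before = concurrence(reduced_ab(psi))           # maximally entangled: 1
c_after = concurrence(reduced_ab(SWAP_AC @ psi))  # separable: 0
```

After the swap the entanglement has moved to the B-C pair, illustrating the compensating change of entanglement involving the control.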
|
arxiv:0704.0461
|
Speaker verification systems often degrade significantly when there is a language mismatch between training and testing data. Being able to improve a cross-lingual speaker verification system using unlabeled data can greatly increase the robustness of the system and reduce human labeling costs. In this study, we introduce an unsupervised adversarial discriminative domain adaptation (ADDA) method to effectively learn an asymmetric mapping that adapts the target-domain encoder to the source domain, where the target domain and source domain are speech data from different languages. ADDA, together with a popular domain adversarial training (DAT) approach, is evaluated on a cross-lingual speaker verification task: the training data is in English from NIST SRE04-08, Mixer 6, and Switchboard, and the test data is in Chinese from AISHELL-1. We show that with ADDA adaptation, the equal error rate (EER) of the x-vector system decreases from 9.331% to 7.645%, a relative 18.07% reduction of EER, and a 6.32% reduction from DAT as well. Further data analysis of the ADDA-adapted speaker embedding shows that the learned speaker embeddings can perform well on speaker classification for the target-domain data, and are less dependent on the shift in language.
|
arxiv:1908.01447
|
Schemes for the creation of N-particle entangled Greenberger-Horne-Zeilinger (GHZ) states are important for understanding multi-particle non-classical correlations. Here, a theoretical scheme for the creation of a multi-particle GHZ state is presented, implemented on a target ensemble of N three-level Rydberg atoms in a $\Lambda$ configuration and a single Rydberg atom as a control, using stimulated Raman adiabatic passage (STIRAP). We work in the Rydberg blockade regime for the ensemble atoms, induced by the excitation of the control atom to a high-lying Rydberg level. It is shown that, using STIRAP, atoms can be adiabatically transferred from one ground state of the ensemble to the other, depending on the state of the control atom, with high fidelity. Measurement of the control atom in a specific basis after this conditional transfer facilitates one-step creation of an N-particle GHZ state. A thorough analysis of the adiabaticity conditions for this scheme and the influence of radiative decay from the excited Rydberg levels is presented. We show that this scheme is immune to the decay rate of the excited level in the ensemble atoms and provides a robust way of creating GHZ states.
|
arxiv:1803.02844
|
Air-gapped computers are computers which are kept isolated from the Internet because they store and process sensitive information. When highly sensitive data is involved, an air-gapped computer might also be kept secluded in a Faraday cage. The Faraday cage prevents the leakage of electromagnetic signals emanating from various computer parts, which may be picked up remotely by an eavesdropping adversary. The air-gap separation, coupled with the Faraday shield, provides a high level of isolation, preventing the potential leakage of sensitive data from the system. In this paper, we show how attackers can bypass Faraday cages and air-gaps in order to leak data from highly secure computers. Our method is based on an exploitation of the magnetic field generated by the computer CPU. Unlike electromagnetic radiation (EMR), low-frequency magnetic radiation propagates through the air, penetrating metal shielding such as Faraday cages (e.g., a compass still works inside a Faraday cage). We introduce a malware code-named ODINI that can control the low-frequency magnetic fields emitted from the infected computer by regulating the load of the CPU cores. Arbitrary data can be modulated and transmitted on top of the magnetic emission and received by a magnetic receiver (bug) placed nearby. We provide technical background and examine the characteristics of the magnetic fields. We implement a malware prototype and discuss the design considerations along with the implementation details. We also show that the malicious code does not require special privileges (e.g., root) and can successfully operate from within isolated virtual machines (VMs) as well.
|
arxiv:1802.02700
|
We theoretically investigate electron spin operations driven by applied electric fields in a semiconductor double quantum dot (DQD). Our model describes a DQD formed in a semiconductor nanowire with the longitudinal potential modulated by local gating. The eigenstates for two-electron occupation, including spin-orbit interaction, are calculated and then used to construct a model for the charge transport cycle in the DQD, taking into account the spatial dependence and spin mixing of states. The dynamics of the system is simulated with the aim of implementing protocols for qubit operations, that is, controlled transitions between the singlet and triplet states. In order to obtain fast spin manipulation, the dynamics takes advantage of the anticrossings of energy levels introduced by the spin-orbit and interdot couplings. The theory of optimal quantum control is invoked to find the specific electric-field driving that performs qubit logical operations. We demonstrate that it is possible to perform with high efficiency a universal set of quantum gates $\{$CNOT, H$\otimes$I, I$\otimes$H, T$\otimes$I, I$\otimes$T$\}$, where H is the Hadamard gate, T is the $\pi/8$ gate, and I is the identity, even in the presence of a fast charge transport cycle and charge noise effects.
|
arxiv:1710.02499
|
We establish "higher depth" analogues of the regularized determinants due to Milnor for zeros of cuspidal automorphic L-functions of $\mathrm{GL}_d$ over a general number field. This is a generalization of the result of Deninger about the regularized determinant for zeros of the Riemann zeta function.
|
arxiv:0909.4925
|
We outline a rigorous algorithm, first suggested by Casson, for determining whether a closed orientable 3-manifold $M$ is hyperbolic, and for computing the hyperbolic structure, if one exists. The algorithm requires that a procedure has been given to solve the word problem in $\pi_1(M)$.
|
arxiv:math/0102154
|
Ultrasound simulation based on ray tracing enables the synthesis of highly realistic images. It can provide an interactive environment for training sonographers as an educational tool. However, due to high computational demand, there is a trade-off between image quality and interactivity, potentially leading to sub-optimal results at interactive rates. In this work we introduce a deep learning approach based on adversarial training that mitigates this trade-off by improving the quality of simulated images with constant computation time. An image-to-image translation framework is utilized to translate low-quality images into high-quality versions. To incorporate anatomical information potentially lost in low-quality images, we additionally provide segmentation maps to the image translation. Furthermore, we propose to leverage information from acoustic attenuation maps to better preserve acoustic shadows and directional artifacts, an invaluable feature for ultrasound image interpretation. The proposed method yields an improvement of 7.2% in Fréchet inception distance and 8.9% in patch-based Kullback-Leibler divergence.
|
arxiv:2006.10850
|
We discuss the stochastic background of gravitational waves from ultra-compact neutron star-white dwarf (NS-WD) binaries at cosmological distances. Under the assumption that accreting neutron stars and donor white dwarf stars form most of the low-mass X-ray binaries (LMXBs), our calculation makes use of recent results related to the luminosity function determined from X-ray observations. Even after accounting for detached NS-WD binaries not captured in X-ray data, the NS-WD background is at least an order of magnitude below that due to extragalactic white dwarf-white dwarf binaries and below the detectability level of the Laser Interferometer Space Antenna (LISA) at frequencies between 10^{-5} Hz and 10^{-1} Hz. While the extragalactic background is unlikely to be detected, we suggest that around one to ten Galactic NS-WD binaries may be resolved with LISA such that their positions are determined to an accuracy of several degrees on the sky.
|
arxiv:astro-ph/0406467
|
Recently, graphics processors (GPUs) have been increasingly leveraged in a variety of scientific computing applications. However, architectural differences between CPUs and GPUs necessitate the development of algorithms that take advantage of GPU hardware. As sparse matrix-vector multiplication (SpMV) operations are commonly used in finite element analysis, a new SpMV algorithm and several variations are developed for unstructured finite element meshes on GPUs. The effective bandwidth of current GPU algorithms and the newly proposed algorithms is measured and analyzed for 15 sparse matrices of varying sizes and sparsity structures. The effects of optimization and the differences between the new GPU algorithm and its variants are then studied. Lastly, both new and current SpMV GPU algorithms are utilized in a GPU CG solver in GPU finite element simulations of the heart. These results are then compared against a parallel PETSc finite element implementation. The effective bandwidth tests indicate that the new algorithms compare very favorably with current algorithms for a wide variety of sparse matrices and can yield very notable benefits. GPU finite element simulation results demonstrate the benefit of using GPUs for finite element analysis, and also show that the proposed algorithms can yield speedup factors of up to 12-fold for real finite element applications.
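The abstract does not spell out the kernels, but the baseline that SpMV variants build on is the compressed sparse row (CSR) format. Here is a plain sequential sketch (names mine, not the paper's algorithm) of the operation that GPU kernels parallelize, typically assigning one row's dot product to a thread or warp:

```python
import numpy as np

def dense_to_csr(A):
    """Compress a dense matrix into CSR arrays (values, column indices, row pointers)."""
    vals, cols, rowptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0:
                vals.append(v)
                cols.append(j)
        rowptr.append(len(vals))
    return np.array(vals, float), np.array(cols), np.array(rowptr)

def spmv_csr(vals, cols, rowptr, x):
    """y = A @ x with A in CSR form; the outer loop over rows is the
    natural unit of parallelism on a GPU."""
    n = len(rowptr) - 1
    y = np.zeros(n)
    for i in range(n):
        start, end = rowptr[i], rowptr[i + 1]
        y[i] = np.dot(vals[start:end], x[cols[start:end]])
    return y

A = np.array([[4.0, 0, 1], [0, 3, 0], [2, 0, 5]])
x = np.array([1.0, 2.0, 3.0])
vals, cols, rowptr = dense_to_csr(A)
y = spmv_csr(vals, cols, rowptr, x)
```

The irregular, row-dependent slice lengths are exactly what makes the memory access pattern, and hence the effective bandwidth measured in the paper, the central design concern on GPUs.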
|
arxiv:1501.00324
|
Recent searchable symmetric encryption (SSE) schemes enable secure searching over an encrypted database stored on a server while limiting the information leaked to the server. These schemes focus on hiding the access pattern, which refers to the set of documents that match the client's queries. This provides protection against current attacks that largely depend on this leakage to succeed. However, most SSE constructions also leak whether or not two queries aim for the same keyword, also called the search pattern. In this work, we show that search pattern leakage can severely undermine current SSE defenses. We propose an attack that leverages both access and search pattern leakage, as well as some background and query distribution information, to recover the keywords of the queries performed by the client. Our attack follows a maximum likelihood estimation approach, and is easy to adapt against SSE defenses that obfuscate the access pattern. We empirically show that our attack is efficient, outperforms other proposed attacks, and completely thwarts two out of the three defenses we evaluate it against, even when these defenses are set to high privacy regimes. These findings highlight that hiding the search pattern, a feature that most constructions lack, is key to providing practical privacy guarantees in SSE.
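To make the role of the search pattern concrete, here is a deliberately stripped-down toy (not the paper's attack: it ignores the access pattern entirely and replaces maximum likelihood estimation with naive frequency-rank matching; all tokens and frequencies are invented) showing how repeated-query counts plus auxiliary popularity data already suggest a keyword assignment:

```python
def frequency_attack(observed_counts, keyword_freqs):
    """Toy search-pattern attack: rank the deduplicated query tokens by how
    often each recurred, rank keywords by known popularity, and match
    rank-for-rank -- a crude stand-in for a maximum likelihood assignment."""
    ranked_queries = sorted(observed_counts, key=observed_counts.get, reverse=True)
    ranked_keywords = sorted(keyword_freqs, key=keyword_freqs.get, reverse=True)
    return dict(zip(ranked_queries, ranked_keywords))

# hypothetical leakage: query token -> number of times the same token was issued
observed = {"t1": 50, "t2": 9, "t3": 30}
# hypothetical auxiliary knowledge: keyword -> relative query popularity
aux = {"invoice": 0.48, "password": 0.31, "meeting": 0.08}
guess = frequency_attack(observed, aux)
```

Even this crude matcher shows why leaking which queries repeat is dangerous: the counts alone, without any document identifiers, can be aligned against public query statistics.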
|
arxiv:2010.03465
|
The BGO-OD experiment at the University of Bonn's ELSA accelerator facility in Germany is ideally suited to investigating photoproduction at extreme forward angles. It combines a highly segmented BGO electromagnetic calorimeter at central angles with an open dipole magnetic spectrometer in the forward direction. This allows the detection of forward-going kaons and complex final states of mixed charge from hyperon decays. Current projects at the BGO-OD experiment include strangeness production of $\gamma p \rightarrow K^+\Lambda/\Sigma^0$ at forward angles, $K^0\Sigma^0$ with a deuteron target, and $K^+\Lambda(1405)$ line shape and cross section measurements.
|
arxiv:2008.02023
|
Robustness is a key concern for Rust library development because Rust promises no risk of undefined behavior if developers use safe APIs only. Fuzzing is a practical approach for examining the robustness of programs. However, existing fuzzing tools are not directly applicable to library APIs due to the absence of fuzz targets. Instead, designing fuzz targets case by case mainly relies on human effort, which is labor-intensive. To address this problem, this paper proposes a novel automated fuzz target generation approach for fuzzing Rust libraries via API dependency graph traversal. We identify several essential requirements for library fuzzing, including validity and effectiveness of fuzz targets, high API coverage, and efficiency. To meet these requirements, we first employ breadth-first search with pruning to find API sequences under a length threshold, then we search backward for longer sequences covering the remaining APIs, and finally we optimize the sequence set as a set covering problem. We implement our fuzz target generator and conduct fuzzing experiments with AFL++ on several popular real-world Rust projects. Our tool generates 7 to 118 fuzz targets for each library, with API coverage up to 0.92. We exercise each target with a time budget of 24 hours and find 30 previously unknown bugs in seven libraries.
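The final optimization step is a classic set covering problem. A sketch of the standard greedy heuristic for it (the API names and candidate sequences below are invented for illustration; the paper's actual optimization may differ) looks like:

```python
def greedy_cover(sequences, universe):
    """Greedy set cover: repeatedly pick the API sequence that covers the
    most not-yet-covered APIs, until no candidate adds coverage."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sequences, key=lambda s: len(uncovered & set(s)))
        gain = uncovered & set(best)
        if not gain:
            break  # remaining APIs are unreachable by any candidate sequence
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

# hypothetical candidate fuzz-target sequences found by BFS / backward search
candidates = [
    ("new", "push", "pop"),
    ("new", "len"),
    ("new", "push", "iter", "len"),
    ("new", "clear"),
]
apis = {"new", "push", "pop", "iter", "len", "clear"}
targets, missed = greedy_cover(candidates, apis)
```

Greedy selection keeps the final target set small while preserving the API coverage found in the search phases, which matters because every retained target consumes fuzzing time.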
|
arxiv:2104.12064
|
Current image generation and editing methods primarily process textual prompts as direct inputs without reasoning about visual composition and explicit operations. We present Generation Chain-of-Thought (GoT), a novel paradigm that enables generation and editing through an explicit language reasoning process before outputting images. This approach transforms conventional text-to-image generation and editing into a reasoning-guided framework that analyzes semantic relationships and spatial arrangements. We define the formulation of GoT and construct large-scale GoT datasets containing over 9M samples with detailed reasoning chains capturing semantic-spatial relationships. To leverage the advantages of GoT, we implement a unified framework that integrates Qwen2.5-VL for reasoning chain generation with an end-to-end diffusion model enhanced by our novel semantic-spatial guidance module. Experiments show our GoT framework achieves excellent performance on both generation and editing tasks, with significant improvements over baselines. Additionally, our approach enables interactive visual generation, allowing users to explicitly modify reasoning steps for precise image adjustments. GoT pioneers a new direction for reasoning-driven visual generation and editing, producing images that better align with human intent. To facilitate future research, we make our datasets, code, and pretrained models publicly available at https://github.com/rongyaofang/GoT.
|
arxiv:2503.10639
|
Model-agnostic feature attribution algorithms (such as SHAP and LIME) are ubiquitous techniques for explaining the decisions of complex classification models, such as deep neural networks. However, since complex classification models produce superior performance when trained on low-level (or encoded) features, in many cases the explanations generated by these algorithms are neither interpretable nor usable by humans. Methods proposed in recent studies that support the generation of human-interpretable explanations are impractical because they require a fully invertible transformation function that maps the model's input features to the human-interpretable features. In this work, we introduce Latent SHAP, a black-box feature attribution framework that provides human-interpretable explanations without requiring a fully invertible transformation function. We demonstrate Latent SHAP's effectiveness using (1) a controlled experiment where invertible transformation functions are available, which enables robust quantitative evaluation of our method, and (2) celebrity attractiveness classification (using the CelebA dataset) where invertible transformation functions are not available, which enables thorough qualitative evaluation of our method.
|
arxiv:2211.14797
|
Next-generation wireless networks target high network availability, ubiquitous coverage, and extremely high data rates for mobile users. This requires exploring new frequency bands, e.g., mmWaves, moving toward ultra-dense deployments in urban locations, and providing ad hoc, resilient connectivity in rural scenarios. The design of the backhaul network plays a key role in advancing how the access part of the wireless system supports next-generation use cases. Wireless backhauling, such as the newly introduced integrated access and backhaul (IAB) concept in 5G, provides a promising solution, also leveraging mmWave technology and steerable beams to mitigate interference and scalability issues. At the same time, however, managing and optimizing a complex wireless backhaul introduces additional challenges for the operation of cellular systems. This paper presents a strategy for the optimal creation of the backhaul network, considering various constraints related to network topology, robustness, and flow management. We evaluate its feasibility and efficiency using synthetic and realistic network scenarios based on 3D modeling of buildings and ray tracing. We implement and prototype our solution as a dynamic IAB control framework based on the Open Radio Access Network (RAN) architecture, and demonstrate its functionality in Colosseum, a large-scale wireless network emulator with hardware in the loop.
|
arxiv:2410.22246
|
We use a kinetic-equation approach to describe the propagation of ultra-high-energy cosmic-ray protons and nuclei, comparing theoretical results with the observations of the Pierre Auger Observatory.
|
arxiv:1412.7380
|
we introduce a transfer matrix method for the spectral analysis of discrete hermitian operators with locally finite hopping. such operators can be associated with a locally finite graph structure and the method works in principle on any such graph. the key result is a spectral averaging formula well known for jacobi or 1 - channel operators giving the spectral measure at a root vector by a weak limit of products of transfer matrices. here, we assume an increase in the rank for the connections between spherical shells which is a typical situation and true on finite dimensional lattices $\mathbb{Z}^d$. the products of transfer matrices are considered as a transformation of the relations of 'boundary resolvent data' along the shells. the trade off is that at each level or shell with more forward than backward connections ( rank - increase ) we have a set of transfer matrices at a fixed spectral parameter. still, considering these products we can relate the minimal norm growth over the set of all products with the spectral measure at the root and obtain several criteria for absolutely continuous spectrum. finally, we give some examples of operators on stair - like graphs ( increasing width ) which have absolutely continuous spectrum with a sufficiently fast decaying random shell - matrix - potential.
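in the classical jacobi / 1-channel case the objects in question are ordinary 2x2 transfer matrices, and the products whose norm growth controls the spectral measure look as follows (a pure-python sketch with hypothetical helper names):

```python
def transfer(z, v):
    # one-step transfer matrix for the recursion u(n+1) = (z - v) u(n) - u(n-1)
    return ((z - v, -1.0), (1.0, 0.0))

def matmul(a, b):
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def product_up_to(z, potential):
    # left-multiply the transfer matrices along the chain
    p = ((1.0, 0.0), (0.0, 1.0))
    for v in potential:
        p = matmul(transfer(z, v), p)
    return p

# free case v = 0 at z = 0: the transfer matrix is a rotation by 90 degrees,
# so products stay bounded in norm -- consistent with z = 0 lying in the
# absolutely continuous spectrum [-2, 2] of the free 1d operator
p = product_up_to(0.0, [0.0] * 4)
```

in the rank-increasing setting of the paper a *set* of such matrices appears at each shell and it is the minimal norm growth over all products that matters.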
|
arxiv:1903.10114
|
automated lesion segmentation from computed tomography ( ct ) is an important and challenging task in medical image analysis. while many advancements have been made, there is room for continued improvement. one hurdle is that ct images can exhibit high noise and low contrast, particularly at lower dosages. to address this, we focus on a preprocessing method for ct images that uses a stacked generative adversarial network ( sgan ) approach. the first gan reduces the noise in the ct image and the second gan generates a higher resolution image with enhanced boundaries and high contrast. to make up for the absence of high quality ct images, we detail how to synthesize a large number of low - and high - quality natural images and use transfer learning with progressively larger amounts of ct images. we apply both the classic grabcut method and the modern holistically nested network ( hnn ) to lesion segmentation, testing whether sgan can yield improved lesion segmentation. experimental results on the deeplesion dataset demonstrate that the sgan enhancements alone can push grabcut performance over hnn trained on original images. we also demonstrate that hnn + sgan performs best compared against four other enhancement methods, including when using only a single gan.
|
arxiv:1807.07144
|
fluctuations around the de sitter solution of the einstein - cartan field equations are obtained in terms of the primordial matter density fluctuations and the spin - torsion density, with the matter density fluctuations taken from cobe data. the einstein - de sitter solution is shown to be unstable even in the absence of torsion. the spin - torsion density fluctuation is computed directly from the einstein - cartan equations and from cobe data.
|
arxiv:gr-qc/9912104
|
median absolute deviation ( hereafter mad ) is known as a robust alternative to the ordinary variance. it has been widely utilized to induce robust statistical inferential procedures. in this paper, we investigate the strong and weak bahadur representations of its bootstrap counterpart. as a useful application, we utilize the results to derive the weak bahadur representation of the bootstrap sample projection depth weighted mean, a quite important location estimator depending on mad.
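for concreteness, the estimator and its bootstrap counterpart can be computed in a few lines (a numerical sketch only; the paper's contribution concerns bahadur representations of the bootstrap mad, not the resampling itself):

```python
import random, statistics

def mad(xs):
    # median absolute deviation about the sample median
    m = statistics.median(xs)
    return statistics.median([abs(x - m) for x in xs])

def bootstrap_mad(xs, n_boot=500, seed=0):
    # bootstrap replicates of mad: resample with replacement, recompute
    rng = random.Random(seed)
    n = len(xs)
    return [mad([xs[rng.randrange(n)] for _ in range(n)]) for _ in range(n_boot)]

data = [1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 100.0]   # one gross outlier
# median is 3.0; sorted absolute deviations [0, .5, .5, 1, 1, 2, 97] -> mad 1.0,
# unmoved by the outlier that would dominate the ordinary variance
reps = bootstrap_mad(data)
```

the spread of `reps` is the bootstrap estimate of the mad's sampling variability.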
|
arxiv:1802.10302
|
- order arithmetic and their use ", proceedings of the international congress of mathematicians ( vancouver, b. c., 1974 ), vol. 1, montreal : canad. math. congress, pp. 235–242, mr 0429508. friedman, harvey ( 1976 ), baldwin, john ; martin, d. a. ; soare, r. i. ; tait, w. w. ( eds. ), " systems of second - order arithmetic with restricted induction, i, ii ", meeting of the association for symbolic logic, the journal of symbolic logic, 41 ( 2 ) : 557–559, doi : 10.2307/2272259, jstor 2272259. hirschfeldt, denis r. ( 2014 ), slicing the truth, lecture notes series of the institute for mathematical sciences, national university of singapore, vol. 28, world scientific. hunter, james ( 2008 ), reverse topology ( pdf ) ( phd thesis ), university of wisconsin–madison. kohlenbach, ulrich ( 2005 ), " higher order reverse mathematics ", in simpson, stephen g. ( ed. ), reverse mathematics 2001 ( pdf ), lecture notes in logic, cambridge university press, pp. 281–295, citeseerx 10.1.1.643.551, doi : 10.1017/9781316755846.018, isbn 9781316755846. normann, dag ; sanders, sam ( 2018 ), " on the mathematical and foundational significance of the uncountable ", journal of mathematical logic, 19 : 1950001, arxiv : 1711.08939, doi : 10.1142/s0219061319500016, s2cid 119120366. simpson, stephen g. ( 2009 ), subsystems of second - order arithmetic, perspectives in logic ( 2nd ed. ), cambridge university press, doi : 10.1017/cbo9780511581007, isbn 978-0-521-88439-6, mr 2517689. stillwell, john ( 2018 ), reverse mathematics, proofs from the inside out, princeton university press, isbn 978-0-691-17717-5. solomon, reed ( 1999 ), " ordered groups : a case study in reverse mathematics ", the bulletin of symbolic logic, 5 ( 1 ) : 45–58, citeseerx 10.
|
https://en.wikipedia.org/wiki/Reverse_mathematics
|
the mechanism by which the supermassive black holes that power bright quasars at high redshift form remains unknown. one possibility is that... the monolithic collapse of a massive protogalactic disc... leads to the formation of a quasi - star : a growing black hole, initially of typical stellar mass, embedded in a hydrostatic giant - like envelope. quasi - stars are the main object of study in this dissertation.... in chapter 1, i introduce the problem posed by the supermassive black holes that power high - redshift quasars.... in chapter 2, i outline the cambridge stars code and the modifications that are made to model quasi - star envelopes. in chapter 3, i present models of quasi - stars where the base of the envelope is located at the bondi radius of the black hole. the black holes in these models are subject to a robust upper fractional mass limit of about one tenth. in addition, the final black hole mass is sensitive to the choice of the inner boundary radius of the envelope. in chapter 4, i construct alternative models of quasi - stars by drawing from work on convection - and advection - dominated accretion flows... the evolution of these quasi - stars is qualitatively different from those described in chapter 3.... [t]he core black holes are no longer subject to a fractional mass limit and ultimately accrete all of the material in their envelopes. in chapter 5, i demonstrate that the fractional mass limit found in chapter 3... is in essence the same as the schönberg - chandrasekhar limit. the analysis demonstrates... that limits exist under a wider range of circumstances than previously thought. a test is provided that determines whether a composite polytrope is at a fractional mass limit. in chapter 6, i apply this test to realistic stellar models and find evidence that the existence of fractional mass limits is connected to the evolution of stars into red giants.
|
arxiv:1207.5972
|
salazar, dunn and graham in [ salazar et al., 2006 ] presented an improved feng - rao bound for the minimum distance of dual codes. in this work we take the improvement a step further. both the original bound by salazar et al., as well as our improvement, are lifted so that they deal with generalized hamming weights. we also demonstrate the advantage of working with one - way well - behaving pairs rather than weakly well - behaving or well - behaving pairs.
|
arxiv:1305.1091
|
the halo concentration - mass relation has ubiquitous use in modeling the matter field for cosmological and astrophysical analyses, and including the imprints from galaxy formation physics is essential for its robust usage. many analyses, however, probe the matter around halos selected by a given halo / galaxy property, rather than by halo mass, and the imprints under each selection choice can be different. we employ the camels simulation suite to quantify the astrophysics and cosmology dependence of the concentration - mass relation, $c_{\rm vir}-m_{\rm vir}$, when selected on five properties : ( i ) velocity dispersion, ( ii ) formation time, ( iii ) halo spin, ( iv ) stellar mass, and ( v ) gas mass. we construct simulation - informed nonlinear models for all properties as a function of halo mass, redshift, and six cosmological / astrophysical parameters, with a mass range $m_{\rm vir} \in [10^{11}, 10^{14.5}]\, m_\odot / h$. there are many mass - dependent imprints in all halo properties, with clear connections across different properties and non - linear couplings between the parameters. finally, we extract the $c_{\rm vir}-m_{\rm vir}$ relation for subsamples of halos that have scattered above / below the mean property - $m_{\rm vir}$ relation for a chosen property. selections on gas mass or stellar mass have a significant impact on the astrophysics / cosmology dependence of $c_{\rm vir}$, while those on any of the other three properties have a significant ( mild ) impact on the cosmology ( astrophysics ) dependence. we show that ignoring such selection effects can lead to errors of $\approx 25\%$ in baryon imprint modelling of $c_{\rm vir}$. our nonlinear model for all properties is made publicly available.
|
arxiv:2311.03491
|
this paper proposes to use a newly - derived transformed inertial navigation system ( ins ) mechanization to fuse ins with other complementary navigation systems. through formulating the attitude, velocity and position as one group state of the group of double direct spatial isometries se2(3), the transformed ins mechanization has proven to be group affine, which means that the corresponding vector error state model will be trajectory - independent. in order to make use of the transformed ins mechanization in inertial based integration, both the right and left vector error state models are derived. the ins / gps and ins / odometer integration are investigated as two representatives of inertial based integration. some application aspects of the derived error state models in the two applications are presented, which include how to select the error state model, initialization of the se2(3) based error state covariance and feedback correction corresponding to the error state definitions. extensive monte carlo simulations and land vehicle experiments are conducted to evaluate the performance of the derived error state models. it is shown that the most striking superiority of using the derived error state models is their ability to handle large initial attitude misalignments, which is a direct consequence of the log - linearity property of the derived error state models. therefore, the derived error state models can be used in the so - called attitude alignment for the two applications. moreover, the derived right error state - space model is also preferable for long - endurance ins / odometer integration due to the filtering consistency afforded by its weaker dependence on the global state estimate.
|
arxiv:2103.02229
|
in this talk we report on selected topics on hadrons in nuclei. the first topic is the renormalization of the width of the $\lambda(1520)$ in a nuclear medium. this is followed by a short update of the situation of the $\omega$ in the medium. the investigation of the properties of $\bar{k}$ in the nuclear medium from the study of the $(k_{flight}, p)$ reaction is also addressed, as well as properties of x, y, z charmed and hidden charm resonances in a nuclear medium. finally we address the novel issue of multimeson states.
|
arxiv:1102.3981
|
the 'quantum gravity in the lab' paradigm suggests that quantum computers might shed light on quantum gravity by simulating the cft side of the ads / cft correspondence and mapping the results to the ads side. this relies on the assumption that the duality map ( the 'dictionary' ) is efficient to compute. in this work, we show that the complexity of the ads / cft dictionary is surprisingly subtle : there might be cases in which one can efficiently apply operators to the cft state ( a task we call 'operator reconstruction' ) without being able to extract basic properties of the dual bulk state such as its geometry ( which we call 'geometry reconstruction' ). geometry reconstruction corresponds to the setting where we want to extract properties of a completely unknown bulk dual from a simulated cft boundary state. we demonstrate that geometry reconstruction may be generically hard due to the connection between geometry and entanglement in holography. in particular we construct ensembles of states whose entanglement approximately obeys the ryu - takayanagi formula for arbitrary geometries, but which are nevertheless computationally indistinguishable. this suggests that even for states with the special entanglement structure of holographic cft states, geometry reconstruction might be hard. this result should be compared with existing evidence that operator reconstruction is generically easy in ads / cft. a useful analogy for the difference between these two tasks is quantum fully homomorphic encryption ( fhe ) : this encrypts quantum states in such a way that no efficient adversary can learn properties of the state, but operators can be applied efficiently to the encrypted state. we show that quantum fhe can separate the complexity of geometry reconstruction vs operator reconstruction, which raises the question of whether fhe could be a useful lens through which to view ads / cft.
|
arxiv:2411.04978
|
the theory of geometric zeta functions for locally symmetric spaces, as initiated by selberg and continued by numerous mathematicians, is generalized to the case of higher rank spaces. we show analytic continuation, describe the divisor in terms of tangential cohomology and in terms of group cohomology, which generalizes the patterson conjecture. we also extend the range of zeta functions by considering higher dimensional flats.
|
arxiv:dg-ga/9511006
|
we contribute improvements to a lagrangian dual solution approach applied to large - scale optimization problems whose objective functions are convex, continuously differentiable and possibly nonlinear, while the non - relaxed constraint set is compact but not necessarily convex. such problems arise, for example, in the split - variable deterministic reformulation of stochastic mixed - integer optimization problems. the dual solution approach needs to address the nonconvexity of the non - relaxed constraint set while being efficiently implementable in parallel. we adapt the augmented lagrangian method framework to address the presence of nonconvexity in the non - relaxed constraint set and the need for efficient parallelization. the development of our approach is most naturally compared with the development of proximal bundle methods and especially with their use of serious step conditions. however, deviations from these developments allow for an improvement in efficiency with which parallelization can be utilized. pivotal in our modification to the augmented lagrangian method is the use of an integration of approaches based on the simplicial decomposition method ( sdm ) and the nonlinear block gauss - seidel ( gs ) method. an adaptation of a serious step condition associated with proximal bundle methods allows for the approximation tolerance to be automatically adjusted. under mild conditions optimal dual convergence is proven, and we report computational results on test instances from the stochastic optimization literature. we demonstrate improvement in parallel speedup over a baseline parallel approach.
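the outer augmented-lagrangian / inner block-sweep structure described above can be illustrated on a two-variable toy problem (illustrative objective with closed-form block updates; the paper's method handles nonconvex constraint sets, sdm inner solves and parallel blocks, none of which appear here):

```python
def augmented_lagrangian_demo(rho=10.0, iters=100):
    """toy of the split-variable setting: minimize (x-1)^2 + (y-3)^2
    subject to the consistency constraint x = y, via an augmented
    lagrangian outer loop with one block gauss-seidel inner sweep."""
    x = y = lam = 0.0
    for _ in range(iters):
        # inner gauss-seidel sweep: each block minimum is closed-form here
        x = (2.0 + rho * y - lam) / (2.0 + rho)
        y = (6.0 + lam + rho * x) / (2.0 + rho)
        lam += rho * (x - y)          # multiplier (dual) update
    return x, y, lam

x, y, lam = augmented_lagrangian_demo()
# optimality conditions give x = y = 2 with multiplier lam = -2
assert abs(x - 2.0) < 1e-6 and abs(y - 2.0) < 1e-6 and abs(lam + 2.0) < 1e-6
```

in the stochastic-programming reformulation the analogue of x and y are scenario copies of the first-stage variables, and the blocks can be swept in parallel.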
|
arxiv:1702.00526
|
we calculate the top - quark - induced three - loop corrections of o(alpha_s^2 g_f m_t^2) to the yukawa couplings of the first five quark flavours in the framework of the minimal standard model with an intermediate - mass higgs boson, with mass m_h << 2 m_t. the calculation is performed using an effective - lagrangian approach implemented with the hard - mass procedure. as an application, we derive the o(alpha_s^2 g_f m_t^2) corrections to the h -> q q-bar partial decay widths, including the case q = b. the couplings of the higgs boson to pairs of leptons and intermediate bosons being known to o(alpha_s^2 g_f m_t^2), this completes the knowledge of such corrections in the higgs sector. we express the results both in the ms - bar and on - shell schemes of mass renormalization. we recover the notion that the qcd perturbation expansions exhibit a worse convergence behaviour in the on - shell scheme than they do in the ms - bar scheme.
|
arxiv:hep-ph/9701277
|
we consider the hartle - hawking wavefunction of the universe defined as a euclidean path integral that satisfies the " no - boundary proposal ". we focus on the simplest minisuperspace model that comprises a single scale factor degree of freedom and a positive cosmological constant. the model can be seen as a non - linear $\sigma$ - model with a line - segment base. we reduce the path integral over the lapse function to an integral over the proper length of the base and use diffeomorphism - invariant measures for the ghosts and the scale factor. as a result, the gauge - fixed path integral is independent of the gauge. however, we point out that all field redefinitions of the scale factor degree of freedom yield different choices of gauge - invariant path - integral measures. for each prescription, we compute the wavefunction at the semi - classical level and find a different result. we resolve in each case the ambiguity in the form of the wheeler - dewitt equation at this level of approximation. by imposing that the hamiltonians associated with these possibly distinct quantum theories are hermitian, we determine the inner products of the corresponding hilbert spaces and find that they lead to a universal norm, at least semi - classically. quantum predictions are thus independent of the prescription at this level of approximation. finally, all wavefunctions of the hilbert spaces of the minisuperspace model we consider turn out to be non - normalizable, including the no - boundary states.
|
arxiv:2103.15168
|
network intrusion detection systems ( nids ) to detect malicious attacks continue to meet challenges. nids are often developed offline while they face auto - generated port scan infiltration attempts, resulting in a significant time lag from adversarial adaption to nids response. to address these challenges, we use hypergraphs focused on internet protocol addresses and destination ports to capture evolving patterns of port scan attacks. the derived set of hypergraph - based metrics are then used to train an ensemble machine learning ( ml ) based nids that allows for real - time adaption in monitoring and detecting port scanning activities, other types of attacks, and adversarial intrusions with high accuracy, precision and recall. this adaptive ml nids was developed through the combination of ( 1 ) intrusion examples, ( 2 ) nids update rules, ( 3 ) attack threshold choices to trigger nids retraining requests, and ( 4 ) a production environment with no prior knowledge of the nature of network traffic. 40 scenarios were auto - generated to evaluate the ml ensemble nids comprising three tree - based models. the resulting ml ensemble nids was extended and evaluated with the cic - ids2017 dataset. results show that under the model settings of an update - all - nids rule ( specifically retrain and update all three models upon the same nids retraining request ) the proposed ml ensemble nids evolved intelligently and produced the best results with nearly 100 % detection performance throughout the simulation.
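the retraining-request logic can be sketched as follows; the detector cutoffs, the majority vote, and the alert-rate trigger are all illustrative stand-ins for the paper's tree-based ensemble and update rules:

```python
from collections import deque

class AdaptiveNIDS:
    """minimal sketch of an adaptive retraining loop: an ensemble votes
    per event, and a retraining ("update-all") request is triggered when
    the recent alert rate crosses a threshold."""
    def __init__(self, threshold=0.5, window=20):
        self.window = deque(maxlen=window)
        self.threshold = threshold
        self.retrain_requests = 0
        # three trivial stand-in detectors over a scalar "port-scan score"
        self.cutoffs = [0.6, 0.7, 0.8]

    def observe(self, score):
        votes = sum(score > c for c in self.cutoffs)
        alert = votes >= 2                       # majority vote
        self.window.append(alert)
        rate = sum(self.window) / len(self.window)
        if len(self.window) == self.window.maxlen and rate > self.threshold:
            self.retrain_requests += 1           # fire a retraining request
            self.window.clear()                  # restart the sliding window
        return alert

nids = AdaptiveNIDS()
# 20 benign events followed by 20 port-scan-like events
alerts = [nids.observe(s) for s in [0.1] * 20 + [0.9] * 20]
```

in the paper the request would retrain all three tree models on fresh intrusion examples; here it only counts.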
|
arxiv:2211.03933
|
we analyze the scaling of the condensation energy $e_\delta$ divided by $\gamma$, $e_\delta/\gamma \simeq n(0)\delta_1^2/\gamma$, of both conventional superconductors and unconventional high - $t_c$ ones, where $n(0)$ is the density of states, $\delta_1$ is the maximum value of the superconducting gap and $\gamma$ is the sommerfeld coefficient. for the first time, we show that the universal scaling $e_\delta/\gamma \propto t_c^2$ applies equally to conventional superconductors and unconventional high - $t_c$ ones. our consideration is based on both facts : bogoliubov quasiparticles act in conventional and unconventional superconductors, and the corresponding flat band is deformed by the unconventional superconducting state. as a result, our theoretical observations based on the fermion condensation theory are in good agreement with experimental facts.
|
arxiv:2402.02532
|
detached eclipsing binaries ( debs ) are ideal targets for accurate measurement of masses and radii of their component stars. if at least one of the stars has evolved off the main sequence ( ms ), the masses and radii give a strict constraint on the age of the stars. several debs containing a bright k giant and a fainter ms star have been discovered by the kepler satellite. the mass and radius of a red giant ( rg ) star can also be derived from its asteroseismic signal. the parameters determined in this way depend on stellar models and may contain systematic errors. it is important to validate the asteroseismically determined mass and radius with independent methods. this can be done when stars are members of stellar clusters or members of debs. kic 8410637 consists of an rg and an ms star. the aim is to derive accurate masses and radii for both components and provide the foundation for a strong test of the asteroseismic method and the accuracy of the deduced mass, radius and age. we analyse high - resolution spectra from three different spectrographs. we also calculate a fit to the kepler light curve and use ground - based photometry to determine the flux ratios between the component stars in the bvri passbands. we measured the masses and radii of the stars in the deb, and the classical parameters teff, log g and [fe/h] from the spectra and ground - based photometry. the rg component of kic 8410637 is most likely in the core helium - burning red clump phase of evolution and has an age and composition very similar to the stars in the open cluster ngc 6819. the mass of the rg in kic 8410637 should therefore be similar to the mass of rgs in ngc 6819, thus lending support to the most up - to - date version of the asteroseismic scaling relations. this is the first direct measurement of both mass and radius for an rg to be compared with values for rgs from asteroseismic scaling relations.
|
arxiv:1307.0314
|
cosmological shock waves result from supersonic flow motions induced by hierarchical clustering of nonlinear structures in the universe. these shocks govern the nature of cosmic plasma through thermalization of gas and acceleration of nonthermal, cosmic - ray ( cr ) particles. we study the statistics and energetics of shocks formed in cosmological simulations of a concordance $\lambda$cdm universe, with a special emphasis on the effects of non - gravitational processes such as radiative cooling, photoionization / heating, and galactic superwind feedbacks. adopting an improved model for gas thermalization and cr acceleration efficiencies based on nonlinear diffusive shock acceleration calculations, we then estimate the gas thermal energy and the cr energy dissipated at shocks through the history of the universe. since shocks can serve as sites for generation of vorticity, we also examine the vorticity that should have been generated mostly at curved shocks in cosmological simulations. we find that the dynamics and energetics of shocks are governed primarily by the gravity of matter, so other non - gravitational processes do not affect significantly the global energy dissipation and vorticity generation at cosmological shocks. our results reinforce scenarios in which the intracluster medium and warm - hot intergalactic medium contain energetically significant populations of nonthermal particles and turbulent flow motions.
|
arxiv:0704.1521
|
the nine - year h. e. s. s. galactic plane survey ( hgps ) yielded the most uniform observation scan of the inner milky way in the tev gamma - ray band to date. the sky maps and source catalogue of the hgps allow for a systematic study of the population of tev pulsar wind nebulae found throughout the last decade. to investigate the nature and evolution of pulsar wind nebulae, for the first time we also present several upper limits for regions around pulsars without a detected tev wind nebula. our data exhibit a correlation of tev surface brightness with pulsar spin - down power $\dot{e}$. this seems to be caused both by an increase of extension with decreasing $\dot{e}$, and hence with time, compatible with a power law $r_\mathrm{pwn}(\dot{e}) \sim \dot{e}^{-0.65 \pm 0.20}$, and by a mild decrease of tev gamma - ray luminosity with decreasing $\dot{e}$, compatible with $l_{1-10\,\mathrm{tev}} \sim \dot{e}^{0.59 \pm 0.21}$. we also find that the offsets of pulsars with respect to the wind nebula centres with ages around 10 kyr are frequently larger than can be plausibly explained by pulsar proper motion and could be due to an asymmetric environment. in the present data, it seems that a large pulsar offset is correlated with a high apparent tev efficiency $l_{1-10\,\mathrm{tev}}/\dot{e}$. in addition to 14 hgps sources considered as firmly identified pulsar wind nebulae and 5 additional pulsar wind nebulae taken from literature, we find 10 hgps sources that form likely tev pulsar wind nebula candidates. using a model that subsumes the present common understanding of the very high - energy radiative evolution of pulsar wind nebulae, we find that the trends and variations of the tev observables and limits can be reproduced to a good level, drawing a consistent picture of present - day tev data and theory.
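power laws like the quoted radius and luminosity trends are typically obtained by least-squares fits in log-log space; a minimal sketch on synthetic data (made-up normalization and scatter, purely illustrative, not the hgps measurements):

```python
import math, random

def fit_power_law(xs, ys):
    # ordinary least squares in log-log space: y ~ c * x^alpha
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

# synthetic nebulae: radius ~ edot^-0.65 with lognormal scatter
rng = random.Random(1)
edot = [10 ** rng.uniform(34, 39) for _ in range(200)]
radius = [e ** -0.65 * math.exp(rng.gauss(0, 0.2)) for e in edot]
alpha = fit_power_law(edot, radius)
# the fitted slope recovers the input exponent to within the scatter
```

with 200 points and 0.2 dex-scale scatter the slope uncertainty is far below the quoted ±0.20, so the recovery is tight.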
|
arxiv:1702.08280
|
we present the results of a search and study of central abundance drops in a volume - limited sample ( z <= 0.071 ) of 101 x - ray galaxy groups and clusters. these are best observed in nearby, and so best resolved, groups and clusters, making our sample ideal for their detection. out of the 65 groups and clusters in our sample for which we have abundance profiles, 8 of them have certain central abundance drops, with possible central abundance drops in another 6. all sources with central abundance drops have x - ray cavities, and all bar one exception have a central cooling time <= 1 gyr. these central abundance drops can be generated if the iron injected by stellar mass loss processes in the core of these sources is in grains, which then become incorporated in the central dusty filaments. these, in turn, are dragged outwards by the bubbling feedback process in these sources. we find that data quality significantly affects the detection of central abundance drops, inasmuch as a higher number of counts in the central 20 kpc of a source makes it easier to detect a central abundance drop, as long as these counts are more than ~ 13000. on the other hand, the magnitude of the central abundance drop does not depend on the number of these counts, though the statistical significance of the measured drop does. finally, in line with the scenario briefly outlined above, we find that, for most sources, the location of x - ray cavities acts as an upper limit to the location of the peak in the radial metallicity distribution.
|
arxiv:1411.6040
|
wireless energy transfer ( wet ) has attracted significant attention recently for providing energy supplies wirelessly to electrical devices without the need of wires or cables. among different types of wet techniques, the radio frequency ( rf ) signal enabled far - field wet is most practically appealing to power energy constrained wireless networks in a broadcast manner. to overcome the significant path loss over wireless channels, multi - antenna or multiple - input multiple - output ( mimo ) techniques have been proposed to enhance the transmission efficiency and distance for rf - based wet. however, in order to reap the large energy beamforming gain in mimo wet, acquiring the channel state information ( csi ) at the energy transmitter ( et ) is an essential task. this task is particularly challenging for wet systems, since existing channel training and feedback methods used for communication receivers may not be implementable at the energy receiver ( er ) due to its hardware limitation. to tackle this problem, in this paper we consider a multiuser mimo system for wet, where a multiple - antenna et broadcasts wireless energy to a group of multiple - antenna ers concurrently via transmit energy beamforming. by taking into account the practical energy harvesting circuits at the er, we propose a new channel learning method that requires only one feedback bit from each er to the et per feedback interval. the feedback bit indicates the increase or decrease of the harvested energy by each er between the present and previous intervals, which can be measured without changing the existing hardware at the er. based on such feedback information, the et adjusts transmit beamforming in different training intervals and at the same time obtains improved estimates of the mimo channels to ers by applying a new approach termed analytic center cutting plane method ( accpm ).
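the one-bit feedback idea can be illustrated with a much simpler scheme than the paper's accpm estimator: the et perturbs its beam, the er reports only whether harvested energy increased, and the et keeps or reverts the perturbation (the toy channel and all names below are hypothetical):

```python
import cmath, random

def one_bit_beam_training(h, iters=300, step=0.1, seed=0):
    """sign-feedback beam search: accept a random beam perturbation only
    when the single feedback bit reports higher harvested energy.
    a sketch of the feedback principle, not the paper's accpm method."""
    rng = random.Random(seed)
    n = len(h)
    w = [1.0 / n ** 0.5] * n                 # unit-power starting beam

    def energy(w):
        return abs(sum(hi * wi for hi, wi in zip(h, w))) ** 2

    prev = energy(w)
    for _ in range(iters):
        cand = [wi + step * complex(rng.gauss(0, 1), rng.gauss(0, 1)) for wi in w]
        norm = sum(abs(c) ** 2 for c in cand) ** 0.5
        cand = [c / norm for c in cand]      # keep transmit power fixed
        e = energy(cand)
        if e > prev:                          # the single feedback bit
            w, prev = cand, e
    return prev

h = [cmath.exp(1j * k) for k in range(4)]    # toy 4-antenna channel
final = one_bit_beam_training(h)
# optimal mrt energy here is ||h||^2 = 4; the isotropic start gives ~0.9,
# and the search should recover a large fraction of the beamforming gain
```

accpm replaces this blind search with a cutting-plane localization of the channel, which is far more feedback-efficient.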
|
arxiv:1312.1444
|
recently, neural network models for natural language processing tasks have attracted increasing attention for their ability to alleviate the burden of manual feature engineering. however, previous neural models cannot extract complicated feature compositions as well as traditional methods with discrete features can. in this work, we propose a feature - enriched neural model for the joint chinese word segmentation and part - of - speech tagging task. specifically, to simulate the feature templates of traditional discrete feature based models, we use different filters to model the complex compositional features with convolutional and pooling layers, and then utilize long distance dependency information with a recurrent layer. experimental results on five different datasets show the effectiveness of our proposed model.
|
arxiv:1611.05384
|
quantum nonlocality and nonclassicality are two remarkable characteristics of quantum theory, and offer quantum advantages in some quantum information processing tasks. motivated by recent work on the interplay between nonclassicality quantified by average correlation [ tschaffon et al., phys. rev. res. 5, 023063 ( 2023 ) ] and bell nonlocality, in this paper we aim to establish the relationship between the average correlation and the violation of the three - setting linear steering inequality for two - qubit systems. exact lower and upper bounds of average correlation versus steering are obtained, and the respective states which saturate those bounds are also characterized. for clarity of our presentation, we illustrate these results with examples from well - known classes of two - qubit states. moreover, the dynamical behavior of these two quantifiers is carefully analyzed under the influence of local unital and nonunital noisy channels. the results suggest that average correlation is closely related to the violation of three - setting linear steering, like its relationship with bell violation. particularly, for a given class of states, the hierarchy of nonclassicality - steering - bell nonlocality is demonstrated.
|
arxiv:2410.11219
|
As automatic speech recognition (ASR) systems are getting better, there is increasing interest in using the ASR output to do downstream natural language processing (NLP) tasks. However, there are few open source toolkits that can be used to generate reproducible results on different spoken language understanding (SLU) benchmarks. Hence, there is a need to build an open source standard that enables a faster start into SLU research. We present ESPnet-SLU, which is designed for quick development of spoken language understanding in a single framework. ESPnet-SLU is a project inside the end-to-end speech processing toolkit ESPnet, which is a widely used open-source standard for various speech processing tasks like ASR, text to speech (TTS) and speech translation (ST). We enhance the toolkit to provide implementations for various SLU benchmarks that enable researchers to seamlessly mix and match different ASR and NLU models. We also provide pretrained models with intensively tuned hyper-parameters that can match or even outperform the current state-of-the-art performances. The toolkit is publicly available at https://github.com/espnet/espnet.
|
arxiv:2111.14706
|
Due to technological advances in the field of radio technology and its availability, the number of interference signals in the radio spectrum is continuously increasing. Interference signals must be detected in a timely fashion in order to maintain standards and keep emergency frequencies open. To this end, specialized (multi-channel) receivers are used for spectrum monitoring. In this paper, the performances of two different approaches for controlling the available receiver resources are compared. The methods used for resource management (ReMa) are linear frequency tuning as a heuristic approach and a Q-learning algorithm from the field of reinforcement learning. To test the methods to be investigated, a simplified scenario was designed with two receiver channels monitoring ten non-overlapping frequency bands with non-uniform signal activity. For this setting, it is shown that the Q-learning algorithm used has a significantly higher detection rate than the heuristic approach, at the expense of a smaller exploration rate. In particular, the Q-learning approach can be parameterized to allow for a suitable trade-off between detection and exploration rate.
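The described setup (two channels, ten bands, non-uniform activity) is easy to reproduce as a toy simulation. The activity probabilities, epsilon, and learning rate below are assumptions for illustration, not the paper's parameters:

```python
import random

random.seed(1)
N_BANDS, N_CH = 10, 2
# Assumed non-uniform probability that a band carries a signal in a time step.
activity = [0.02, 0.05, 0.40, 0.01, 0.30, 0.03, 0.25, 0.02, 0.10, 0.02]

def signals():
    return [random.random() < p for p in activity]

def run_sweep(steps):
    # Heuristic baseline: linear frequency tuning, cycling both channels
    # through the bands in order.
    hits = pos = 0
    for _ in range(steps):
        s = signals()
        bands = [(pos + k) % N_BANDS for k in range(N_CH)]
        hits += sum(s[b] for b in bands)
        pos = (pos + N_CH) % N_BANDS
    return hits

def run_qlearn(steps, eps=0.1, alpha=0.1):
    # Stateless Q-learning: one Q-value per band = estimated detection reward.
    q = [0.0] * N_BANDS
    hits = 0
    for _ in range(steps):
        s = signals()
        if random.random() < eps:                       # explore
            bands = random.sample(range(N_BANDS), N_CH)
        else:                                           # exploit: top-Q bands
            bands = sorted(range(N_BANDS), key=lambda b: -q[b])[:N_CH]
        for b in bands:
            r = 1.0 if s[b] else 0.0
            q[b] += alpha * (r - q[b])
            hits += s[b]
    return hits

steps = 5000
sweep_hits, q_hits = run_sweep(steps), run_qlearn(steps)
```

Raising `eps` trades detection rate for exploration rate, which is exactly the trade-off the abstract parameterizes.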
|
arxiv:2307.05763
|
The construction of four-dimensional supersymmetric gauge theories via the fivebrane of M theory wrapped around a Riemann surface has been successfully applied to the computation of holomorphic quantities of field theory. In this paper we compute non-holomorphic quantities in the eleven-dimensional supergravity limit of M theory. While the Kähler potential on the Coulomb branch of N=2 theories is correctly reproduced, higher derivative terms in the N=2 effective action differ from what is expected for the four-dimensional gauge theory. For the Kähler potential of N=1 theories at an abelian Coulomb phase, the result again differs from what is expected for the four-dimensional gauge theory. Using a gravitational back-reaction method for the fivebrane we compute the metric on the Higgs branch of N=2 gauge theories. Here we find agreement with the results expected for the gauge theories. A similar computation of the metric on N=1 Higgs branches yields information on the complex structure associated with the flavor rotation in one case and the classical metric in another. We discuss what four-dimensional field theory quantities can be computed via the fivebrane in the supergravity limit of M theory.
|
arxiv:hep-th/9711143
|
We report the storage and retrieval of a small microwave field from a superconducting resonator into collective excitations of a spin ensemble. The spins are nitrogen-vacancy centers in a diamond crystal. The storage time, of the order of 30 ns, is limited by inhomogeneous broadening of the spin ensemble.
|
arxiv:1109.3960
|
The main result of this paper is a new exact algorithm computing the estimate given by the least trimmed squares (LTS). The algorithm works under very weak assumptions. To prove that, we study the respective objective function using basic techniques of analysis and linear algebra.
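The paper's own algorithm is not reproduced in the abstract; the sketch below only illustrates the LTS objective itself with a brute-force exact minimizer, using the standard fact that exchanging the two minimizations reduces exact LTS to the best OLS fit over all h-subsets. Data and h are made up for the example:

```python
import itertools

def ols_line(pts):
    # Closed-form simple-regression OLS fit y ~ a + b*x.
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    d = n * sxx - sx * sx
    b = (n * sxy - sx * sy) / d
    a = (sy - b * sx) / n
    return a, b

def sse(pts, a, b):
    return sum((y - a - b * x) ** 2 for x, y in pts)

def exact_lts(pts, h):
    # min over beta of the sum of the h smallest squared residuals
    # = min over h-subsets of that subset's own OLS error (min-min exchange).
    best = None
    for sub in itertools.combinations(pts, h):
        a, b = ols_line(sub)
        e = sse(sub, a, b)
        if best is None or e < best[0]:
            best = (e, a, b)
    return best[1], best[2]

# Points on y = 2x + 1 plus two gross outliers; LTS ignores the outliers.
pts = [(x, 2 * x + 1) for x in range(8)] + [(2.5, 30.0), (5.5, -40.0)]
a, b = exact_lts(pts, h=8)
```

The enumeration costs C(n, h) fits, which is exactly the combinatorial burden that dedicated exact algorithms such as the paper's aim to tame.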
|
arxiv:1001.1297
|
Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records. In this work, we suggest a slightly more difficult data-to-text generation task, and investigate how effective current approaches are on this task. In particular, we introduce a new, large-scale corpus of data records paired with descriptive documents, propose a series of extractive evaluation methods for analyzing performance, and obtain baseline results using current neural generation methods. Experiments show that these models produce fluent text, but fail to convincingly approximate human-generated documents. Moreover, even templated baselines exceed the performance of these neural models on some metrics, though copy- and reconstruction-based extensions lead to noticeable improvements.
|
arxiv:1707.08052
|
Finding the stationary states of a free energy functional is an important problem in phase field crystal (PFC) models. Many efforts have been devoted to designing numerical schemes with energy dissipation and mass conservation properties. However, most existing approaches are time-consuming due to the requirement of small effective step sizes. In this paper, we discretize the energy functional and propose efficient numerical algorithms for solving the constrained non-convex minimization problem. A class of gradient-based approaches, the so-called adaptive accelerated Bregman proximal gradient (AA-BPG) methods, is proposed and the convergence property is established without global Lipschitz constant requirements. A practical Newton method is also designed to further accelerate the local convergence with convergence guarantee. One key feature of our algorithms is that the energy dissipation and mass conservation properties hold during the iteration process. Moreover, we develop a hybrid acceleration framework to accelerate the AA-BPG methods and most existing approaches through coupling with the practical Newton method. Extensive numerical experiments, including two three-dimensional periodic crystals in the Landau-Brazovskii (LB) model and a two-dimensional quasicrystal in the Lifshitz-Petrich (LP) model, demonstrate that our approaches have adaptive step sizes which lead to significant acceleration over many existing methods when computing complex structures.
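AA-BPG itself is not reproduced here; the sketch below only demonstrates, on a made-up 1-D double-well energy, the two structural properties the abstract emphasizes: monotone energy dissipation (via a backtracking line search, a crude stand-in for the adaptive step size) and exact mass conservation (by projecting the gradient onto the zero-mean subspace). The energy, grid size, and coupling are all assumptions:

```python
import math

# Toy periodic double-well energy on a 1-D grid:
#   E(u) = sum_i [ (u_i^2 - 1)^2 / 4 + (KAPPA/2) (u_{i+1} - u_i)^2 ],
# minimized subject to fixed mean(u) (mass conservation).
N, KAPPA = 32, 0.5

def energy(u):
    e = sum((x * x - 1.0) ** 2 / 4.0 for x in u)
    e += sum(KAPPA / 2.0 * (u[(i + 1) % N] - u[i]) ** 2 for i in range(N))
    return e

def grad(u):
    g = [(x * x - 1.0) * x for x in u]
    for i in range(N):
        g[i] += KAPPA * (2 * u[i] - u[(i - 1) % N] - u[(i + 1) % N])
    return g

def project(g):
    # Remove the mean component so every step conserves total mass.
    m = sum(g) / len(g)
    return [x - m for x in g]

def minimize(u, iters=200, step0=1.0):
    e = energy(u)
    for _ in range(iters):
        g = project(grad(u))
        t = step0
        while t > 1e-12:
            v = [ui - t * gi for ui, gi in zip(u, g)]
            ev = energy(v)
            if ev <= e:          # accept: energy never increases
                u, e = v, ev
                break
            t *= 0.5             # backtrack toward a dissipative step
    return u, e

u0 = [0.2 * math.sin(2 * math.pi * i / N) + 0.1 for i in range(N)]
u, e = minimize(u0)
```

The real AA-BPG methods replace this plain descent with Bregman proximal steps and restarted acceleration, but the dissipation/conservation invariants checked here are the same ones the paper maintains.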
|
arxiv:2002.09898
|
The quon algebra describes particles, "quons," that are neither fermions nor bosons, using a label $q$ that parametrizes a smooth interpolation between bosons ($q = 1$) and fermions ($q = -1$). Understanding the relation of quons on the one side and bosons or fermions on the other can shed light on the different properties of these two kinds of operators and the statistics which they carry. In particular, local bilinear observables can be constructed from bosons and fermions, but not from quons. In this paper we construct bosons and fermions from quon operators. For bosons, our construction works for $-1 \leq q \leq 1$. The case $q = -1$ is paradoxical, since that case makes a boson out of fermions, which would seem to be impossible. Nonetheless, when the limit $q \to -1$ is taken from above, the construction works. For fermions, the analogous construction works for $-1 \leq q \leq 1$, which includes the paradoxical case $q = 1$.
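The abstract does not spell out the defining relation; in the standard quon literature (Greenberg's convention, assumed here) the interpolation is realized by a single q-mutator:

```latex
% The q-mutator defining the quon algebra (assumed convention):
a_k\, a_l^{\dagger} \;-\; q\, a_l^{\dagger} a_k \;=\; \delta_{kl},
\qquad -1 \le q \le 1 .
% q = 1  recovers the boson commutator      [a_k, a_l^{\dagger}] = \delta_{kl};
% q = -1 recovers the fermion anticommutator \{a_k, a_l^{\dagger}\} = \delta_{kl}.
```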
|
arxiv:hep-th/0107058
|
In the spirit of the recently developed LDA+U and LDA+DMFT methods, we implement a combination of density functional theory in its local density approximation (LDA) with a $k$- and $\omega$-dependent self-energy found from the diagrammatic fluctuation exchange (FLEX) approximation. The active Hilbert space here is described by the correlated subset of electrons, which allows us to tremendously reduce the sizes of the matrices needed to represent charge and spin susceptibilities. The method is perturbative in nature but accounts for both bubble and ladder diagrams and accumulates the physics of momentum-resolved spin fluctuations missing in such popular approaches as GW. As an application, we study correlation effects on band structures in V and Pd. The d-electron self-energies emergent from this calculation are found to be remarkably k-independent. However, when we compare our calculated electronic mass enhancements against LDA+DMFT, we find that for the long-standing problem of spin fluctuations in Pd, LDA+FLEX delivers better agreement with experiment, although this conclusion depends on the particular value of the Hubbard $U$ used in the simulation. We also discuss outcomes of recently proposed combinations of k-dependent FLEX with DMFT.
|
arxiv:1802.02471
|
We study the formation of disks via the cooling flow of gas within galactic haloes using smoothed particle hydrodynamics simulations. These simulations resolve mass scales of a few thousand solar masses in the gas component for the first time. Thermal instabilities result in the formation of numerous warm clouds that are pressure-confined by the hot ambient halo gas. The clouds fall slowly onto the disk through non-spherical accretion from material flowing preferentially down the angular momentum axis. The rotational velocity of the infalling cold gas decreases as a function of height above the disk, closely resembling that of the extra-planar gas recently observed around the spiral galaxy NGC 891.
|
arxiv:astro-ph/0507296
|
We discuss an ${\cal R} + {\cal R}^n$ class of modified ${\cal N} = 1$, d = 4 supergravity models where the deformation is a monomial ${\cal R}^n \big|_F$ in the chiral scalar curvature multiplet ${\cal R}$ of the "old minimal" auxiliary field formulation. The scalaron and goldstino multiplets are dual to each other in this theory. Since one of them is not dynamical, this theory, as recently shown, cannot be used as the supersymmetric completion of $R + R^n$ gravity. This is confirmed by investigating the scalar potential and its critical points in the dual standard supergravity formulation with a single chiral multiplet with specific Kähler potential and superpotential. We study the vacuum structure of this dual theory and we find that there is always a supersymmetric Minkowski critical point, which however is pathological for $n \geq 3$ as it corresponds to a corner ($n = 3$) or a cusp ($n > 3$) point of the potential. For $n > 3$ an anti-de Sitter regular supersymmetric vacuum emerges. As a result, this class of models is not appropriate to describe inflation. We also find the mass spectrum and we provide a general formula for the masses of the scalars of a chiral multiplet around the anti-de Sitter critical point and their relation to $OSp(1,4)$ unitary representations.
|
arxiv:1310.0399
|
The Michaelis-Menten mechanism is probably the best known model for an enzyme-catalyzed reaction. For spatially homogeneous concentrations, quasi-steady state (QSS) reductions are well known, but this is not the case when chemical species are allowed to diffuse. We discuss QSS reductions for both the irreversible and the reversible Michaelis-Menten reaction in the latter setting, given small initial enzyme concentration and slow diffusion. Our work is based on a heuristic method to obtain an ordinary differential equation which admits reduction by Tikhonov-Fenichel theory. We do not give convergence proofs, but we provide numerical results that support the accuracy of the reductions.
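For the well-mixed (no-diffusion) case that the abstract takes as its starting point, the classical QSS reduction is easy to check numerically: the full mass-action system is compared against the standard reduction $\dot s = -k_2 e_0 s/(K_M + s)$ with $K_M = (k_{-1}+k_2)/k_1$. All rate constants below are assumed for illustration; the small parameter is the initial enzyme concentration $e_0$:

```python
# Full mass-action Michaelis-Menten kinetics vs. the standard well-mixed
# QSS reduction; rate constants are illustrative assumptions.
k1, km1, k2, e0, s0 = 10.0, 1.0, 1.0, 0.01, 1.0
Km = (km1 + k2) / k1          # Michaelis constant, here 0.2

def full(T, dt=1e-3):
    # s = substrate, c = enzyme-substrate complex; free enzyme e = e0 - c.
    s, c = s0, 0.0
    for _ in range(int(T / dt)):
        ds = -k1 * (e0 - c) * s + km1 * c
        dc = k1 * (e0 - c) * s - (km1 + k2) * c
        s, c = s + dt * ds, c + dt * dc
    return s

def qss(T, dt=1e-3):
    # Reduced equation: ds/dt = -k2 * e0 * s / (Km + s).
    s = s0
    for _ in range(int(T / dt)):
        s += dt * (-k2 * e0 * s / (Km + s))
    return s

s_full, s_red = full(50.0), qss(50.0)
```

With $e_0 \ll s_0$ the two trajectories agree up to an $O(e_0)$ error after the initial boundary layer, which is the kind of accuracy the paper's numerical results support in the reaction-diffusion setting.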
|
arxiv:1707.04043
|
The UKIRT Infrared Deep Sky Survey (UKIDSS) is the first of a new generation of hemispheric imaging projects to extend the work of the Two Micron All Sky Survey (2MASS) by reaching three magnitudes deeper in YJHK imaging, to K = 18.2 (5-sigma, Vega) over wide fields. Better complementing existing optical surveys such as the Sloan Digital Sky Survey (SDSS), the resulting public imaging catalogues provide new photometry of rare object samples too faint to be reached previously. The first data release of UKIDSS has already surpassed 2MASS in terms of photons gathered, and using this new dataset we examine the near-infrared properties of 2837 quasars found in the SDSS and newly catalogued by UKIDSS in ~189 square degrees. The matched quasars include the RA range 22h to 4h on the southern equatorial stripe (SDSS Stripe 82), an area of significant future follow-up possibilities with deeper surveys and pointed observations. The sample covers the redshift and absolute magnitude ranges 0.08 < z < 5.03 and -29.5 < M_i < -22.0, and 98 per cent of SDSS quasars have matching UKIDSS data. We discuss the photometry, astrometry, and various colour properties of the quasars. We also examine the effectiveness of quasar/star separation using the near-infrared passbands. The combination of SDSS ugriz photometry with the YJHK near-infrared photometry from UKIDSS over large areas of sky has enormous potential for advancing our understanding of the quasar population.
|
arxiv:astro-ph/0612608
|
Upcoming and current large astronomical survey experiments often seek to constrain cosmological parameters via measurements of subtle effects such as weak lensing, which can only be measured statistically. In these cases, instrumental effects in the image-plane CCDs need to be accounted and/or corrected for in measurement algorithms. Otherwise, the systematic errors induced in the measurements might overwhelm the size of the desired effects. Lateral electric fields in the bulk of the CCDs, caused by field-shaping potentials or by space charge that builds up as the electrons in the image are acquired, can cause lateral deflections of the electrons drifting in the CCD bulk. Here, I report on the LSST effort to model these effects on a photon-by-photon basis by use of a Monte Carlo technique. The eventual goal of this work is to produce a CCD model, validated by laboratory data, which can then be used to evaluate its effects on weak lensing science.
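The photon-by-photon idea can be caricatured in one dimension: each converted photoelectron drifts toward a pixel, but already-collected charge deflects it laterally before it is binned. The kernel, coupling strength, and geometry below are toy assumptions, not the LSST model:

```python
import random

NPIX, CENTER, N = 21, 10, 20000

def collect(coupling, seed=3):
    # Monte Carlo accumulation of N photoelectrons into a 1-D pixel array.
    rng = random.Random(seed)
    pix = [0] * NPIX
    for _ in range(N):
        x = rng.gauss(CENTER, 1.0)   # conversion point of the photoelectron
        # Toy lateral field of the already-stored charge: each filled pixel
        # pushes the drifting electron away with an odd 1/r^2-like kernel.
        f = sum(q * (x - k) / (1.0 + (x - k) ** 2) ** 1.5
                for k, q in enumerate(pix))
        x += coupling * f
        j = min(max(int(round(x)), 0), NPIX - 1)
        pix[j] += 1
    return pix

def var(pix):
    # Second central moment of the collected charge profile.
    n = sum(pix)
    m = sum(k * q for k, q in enumerate(pix)) / n
    return sum(q * (k - m) ** 2 for k, q in enumerate(pix)) / n

flat = collect(0.0)      # no lateral fields: pure PSF
bf = collect(2e-5)       # space-charge deflection broadens the spot
```

The systematic broadening of `bf` relative to `flat` is the kind of spot-size bias that, left uncorrected, would contaminate weak-lensing shape statistics.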
|
arxiv:1505.03639
|
Mixed-membership (MM) models such as latent Dirichlet allocation (LDA) have been applied to microbiome compositional data to identify latent subcommunities of microbial species. These subcommunities are informative for understanding the biological interplay of microbes and for predicting health outcomes. However, microbiome compositions typically display substantial cross-sample heterogeneities in subcommunity compositions, that is, variability in the proportions of microbes in shared subcommunities across samples, which is not accounted for in prior analyses. As a result, LDA can produce inference which is highly sensitive to the specification of the number of subcommunities and often divides a single subcommunity into multiple artificial ones. To address this limitation, we incorporate the logistic-tree normal (LTN) model into LDA to form a new MM model. This model allows cross-sample variation in the composition of each subcommunity around some "centroid" composition that defines the subcommunity. Incorporation of auxiliary Pólya-Gamma variables enables a computationally efficient collapsed blocked Gibbs sampler to carry out Bayesian inference under this model. By accounting for such heterogeneity, our new model restores the robustness of the inference in the specification of the number of subcommunities and allows meaningful subcommunities to be identified.
|
arxiv:2109.05386
|
In terms of mathematical tools, the PDE thesis underlines the insuperably high threshold of the cacophony of environmental stimuli (the stimulus noise) for young organisms at the onset of life. It argues that the temporal (phase) synchronization of neural activity based on dynamical self-organizing processes in neural networks, i.e., any dynamical binding together or integration into a representation of the perceptual object by means of a synchronization mechanism, cannot help organisms distinguish a relevant cue (informative stimulus) to overcome this noise threshold.
|
https://en.wikipedia.org/wiki/Cognitive_science
|
In this paper, we report the advantage of using an AC actuating signal for driving MEMS actuators instead of DC voltages. The study is based upon micromirror devices used in digital mode for optical switching operation. When the pull-in effect is used, charge injection occurs while the micromirror is maintained in the deflected position. To avoid this effect, a geometrical solution is to realize grounded landing electrodes which are electrostatically separated from the control electrodes. Another solution is the use of an AC signal, which eliminates charge injection, particularly if a bipolar signal is used. Long-term experiments have demonstrated the reliability of such a command signal in avoiding the injection of electric charges.
|
arxiv:0802.3075
|
Haptic feedback is an important component of creating an immersive mixed reality experience. Traditionally, haptic forces are rendered in response to the user's interactions with the virtual environment. In this work, we explore the idea of rendering haptic forces in a proactive manner, with the explicit intention of influencing the user's behavior through compelling haptic forces. To this end, we present a framework for active haptic guidance in mixed reality, using one or more robotic haptic proxies to influence user behavior and deliver a safer and more immersive virtual experience. We provide details on common challenges that need to be overcome when implementing active haptic guidance, and discuss example applications that show how active haptic guidance can be used to influence the user's behavior. Finally, we apply active haptic guidance to a virtual reality navigation problem, and conduct a user study that demonstrates that active haptic guidance creates a safer and more immersive experience for users.
|
arxiv:2301.05311
|