text | source
---|---
A method is given to rapidly compute quasisymmetric stellarator magnetic fields for plasma confinement, without the need to call a three-dimensional magnetohydrodynamic equilibrium code inside an optimization iteration. The method is based on direct solution of the equations of magnetohydrodynamic equilibrium and quasisymmetry using Garren and Boozer's expansion about the magnetic axis (Phys. Fluids B 3, 2805 (1991)), and it is several orders of magnitude faster than the conventional optimization approach. The work here extends the method of Landreman, Sengupta and Plunk (J. Plasma Phys. 85, 905850103 (2019)), which was limited to flux surfaces with elliptical cross-section, to higher order in the aspect-ratio expansion. As a result, configurations can be generated with strong shaping that achieve quasisymmetry to high accuracy. Using this construction, we give the first numerical demonstrations of Garren and Boozer's ideal scaling of quasisymmetry-breaking with the cube of the inverse aspect ratio. We also demonstrate a strongly nonaxisymmetric configuration (vacuum $\iota > 0.4$) in which symmetry-breaking mode amplitudes throughout a finite volume are $< 2\times 10^{-7}$, the smallest ever reported. To generate boundary shapes of finite-minor-radius configurations, a careful analysis is given of the effect of substituting a finite minor radius into the near-axis expansion. The approach here can provide analytic insight into the space of possible quasisymmetric stellarator configurations, and it can be used to generate good initial conditions for conventional stellarator optimization.
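A cubic scaling law like the one demonstrated above is typically verified by fitting the log-log slope of the symmetry-breaking amplitude against inverse aspect ratio. The sketch below illustrates that fit on synthetic data (the amplitudes are hypothetical, not results from the paper):

```python
import math

# Sketch of extracting a scaling exponent such as Garren and Boozer's
# epsilon**3 law: fit the log-log slope of a symmetry-breaking amplitude
# B_err versus inverse aspect ratio epsilon. The data here are synthetic
# (B_err = 0.5 * epsilon**3), purely to illustrate the procedure.
eps = [0.05, 0.1, 0.2, 0.4]
b_err = [0.5 * e**3 for e in eps]

# least-squares slope in log-log space
lx = [math.log(e) for e in eps]
ly = [math.log(b) for b in b_err]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
print(round(slope, 6))  # → 3.0 for exactly cubic data
```

With real data the fitted slope would only approach 3 as the expansion becomes accurate, which is the behavior the paper demonstrates numerically.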
|
arxiv:1908.10253
|
CDMS II data from the 5-tower runs at the Soudan Underground Laboratory were reprocessed with an improved charge-pulse fitting algorithm. Two new analysis techniques to reject surface-event backgrounds were applied to the 612 kg-days germanium-detector WIMP-search exposure. An extended analysis was also completed by decreasing the 10 keV analysis threshold to $\sim$5 keV, to increase sensitivity near a WIMP mass of 8 GeV/$c^2$. After unblinding, there were zero candidate events above a deposited energy of 10 keV and 6 events in the lower-threshold analysis. This yielded minimum WIMP-nucleon spin-independent scattering cross-section limits of $1.8\times 10^{-44}$ and $1.18\times 10^{-41}$ cm$^2$ at 90% confidence for 60 and 8.6 GeV/$c^2$ WIMPs, respectively. This improves the previous CDMS II result by a factor of 2.4 (2.7) for 60 (8.6) GeV/$c^2$ WIMPs.
|
arxiv:1504.05871
|
Hard-TeV BL Lacs are a new type of blazar characterized by a hard intrinsic TeV spectrum, locating the peak of their gamma-ray emission in the spectral energy distribution (SED) above 2-10 TeV. Such high energies are problematic for the Compton emission in a standard one-zone leptonic model. We study six examples of this new type of BL Lac in the hard X-ray band with the NuSTAR satellite. Together with simultaneous observations with the Swift satellite, we fully constrain the peak of the synchrotron emission in their SED, and test the leptonic synchrotron self-Compton (SSC) model. We confirm the extreme nature of 5 objects also in the synchrotron emission. We do not find evidence of additional emission components in the hard X-ray band. We find that a one-zone SSC model can in principle reproduce the extreme properties of both peaks in the SED, from X-ray up to TeV energies, but at the cost of i) extreme electron energies with very low radiative efficiency, ii) conditions heavily out of equipartition (by 3 to 5 orders of magnitude), and iii) not accounting for the simultaneous UV data, which then should belong to a different emission component, possibly the same as the far-IR (WISE) data. We find evidence of this separation of the UV and X-ray emission in at least two objects. In any case, the TeV electrons must not "see" the UV or lower-energy photons, even if coming from different zones/populations, or the increased radiative cooling would steepen the VHE spectrum.
|
arxiv:1711.06282
|
The paper is concerned with Förster resonance energy transfer (FRET) considered as a mechanism for communication between nanodevices. Two solved issues are reported in the paper, namely signal generation and signal storage in FRET-based nanonetworks. First, luciferase molecules are proposed as FRET transmitters, able to generate FRET signals themselves by taking energy from chemical reactions without any external light exposure. Second, channelrhodopsins are suggested as FRET receivers, as they can convert FRET signals into voltage. Further, medical in-body systems where both molecule types might be successfully applied are discussed. Luciferase-channelrhodopsin communication is modeled and its performance is numerically validated, reporting on its throughput, bit error rate, propagation delay and energy consumption.
|
arxiv:1802.04886
|
Possible indirect detection of the neutralino, through its gamma-ray annihilation products, by the forthcoming GLAST satellite from our Galactic halo, M31, M87 and the dwarf galaxies Draco and Sagittarius is studied. Gamma-ray fluxes are evaluated for two representative energy thresholds, 0.1 GeV and 1.0 GeV, at which the spatial resolution of GLAST varies considerably. Apart from the dwarfs, which are described either by a modified Plummer profile or by a tidally-truncated King profile, fluxes are compared for halos with central cusps and cores. It is demonstrated that substructures, irrespective of their profiles, enhance the gamma-ray emission only marginally. The expected gamma-ray intensity above 1 GeV at high Galactic latitudes is consistent with the residual emission derived from EGRET data if the density profile has a central core and the neutralino mass is less than 50 GeV, whereas for a central cusp only a substantial enhancement would explain the observations. From M31, the flux can be detected above 0.1 GeV and 1.0 GeV by GLAST only if the neutralino mass is below 300 GeV and the density profile has a central cusp, in which case a significant boost in the gamma-ray emission is produced by the central black hole. For Sagittarius, the flux above 0.1 GeV is detectable by GLAST provided the neutralino mass is below 50 GeV. From M87 and Draco the fluxes are always below the sensitivity limit of GLAST.
|
arxiv:astro-ph/0401378
|
We examine the possible reionization of the intergalactic medium (IGM) by the source UDF033238.7-274839.8 (hereafter HUDF-JD2), which was discovered in deep {\it HST}/VLT/{\it Spitzer} images obtained as part of the Great Observatories Origins Deep Survey and {\it Hubble} Ultra Deep Field projects. Mobasher et al. (2005) have identified HUDF-JD2 as a massive ($\sim 6\times 10^{11}\,M_\odot$) post-starburst galaxy at redshift $z \gtrsim 6.5$. We find that HUDF-JD2 may be capable of reionizing its surrounding region of the universe, starting the process at a redshift as high as $z \approx 15\pm 5$.
|
arxiv:astro-ph/0509605
|
We investigate the existence and stability of dissipative soliton solutions in a system described by a complex Ginzburg-Landau (CGL) equation with an asymmetric complex potential, which is obtained from the original parity reflection-time reversal ($\mathcal{PT}$) symmetric Rosen-Morse potential. In this study, the stability of the solutions is examined by numerical analysis, showing that solitons are stable in some parameter ranges for both the self-focusing and self-defocusing nonlinear modes. Dynamical properties such as evolution and transverse energy flow for both modes are also analyzed. The obtained results are useful for experimental designs and applications in related fields.
|
arxiv:1911.05605
|
Low-density parity-check (LDPC) codes with the parity-based approach for distributed joint source-channel coding (DJSCC) with decoder side information are described in this paper. The parity-based approach can achieve the theoretical limit. Different edge degree distributions are used for source variable nodes and parity variable nodes. In particular, the codeword-averaged density evolution (CADE) is presented for asymmetrically correlated nonuniform sources over an asymmetric memoryless transmission channel. Extensive simulations show that the splitting of variable nodes can improve the coding efficiency of suboptimal codes and lower the error floor.
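The codeword-averaged density evolution above generalizes the standard density-evolution recursion for LDPC ensembles. As a simpler, classical point of reference (not the paper's correlated-source setup), here is density evolution for a regular (3,6) ensemble over the binary erasure channel, where the (3,6) degrees and the BEC are assumptions made purely for illustration:

```python
# Standard density evolution for a regular (3,6) LDPC code on the binary
# erasure channel (BEC): track the variable-to-check erasure probability x.
# For degree distributions lambda(x) = x**2 and rho(x) = x**5, the update is
#   x <- eps * (1 - (1 - x)**5)**2
# Decoding succeeds iff x -> 0, which happens below the ensemble threshold.
def de_erasure(eps, iters=2000):
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** 5) ** 2
    return x

# The (3,6) BEC threshold is about 0.429: below it decoding succeeds,
# above it the recursion stalls at a nonzero fixed point.
print(de_erasure(0.3))  # decays to (numerically) zero
print(de_erasure(0.5))  # stuck at a nonzero fixed point
```

The CADE of the paper replaces this single scalar recursion with averages over codewords and asymmetric channel conditions, but the convergence logic is the same.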
|
arxiv:1404.2231
|
Automatically generating realistic musical performance motion can greatly enhance digital media production, often involving collaboration between professionals and musicians. However, capturing the intricate body, hand, and finger movements required for accurate musical performances is challenging. Existing methods often fall short due to the complex mapping between audio and motion, typically requiring additional inputs such as scores or MIDI data. In this work, we present SyncViolinist, a multi-stage end-to-end framework that generates synchronized violin performance motion solely from audio input. Our method overcomes the challenge of capturing both global and fine-grained performance features through two key modules: a bowing/fingering module and a motion generation module. The bowing/fingering module extracts detailed playing information from the audio, which the motion generation module uses to create precise, coordinated body motions reflecting the temporal granularity and nature of the violin performance. We demonstrate the effectiveness of SyncViolinist with significantly improved qualitative and quantitative results on unseen violin performance audio, outperforming state-of-the-art methods. Extensive subjective evaluations involving professional violinists further validate our approach. The code and dataset are available at https://github.com/kakanat/syncviolinist.
|
arxiv:2412.08343
|
We introduce a notion of index for shrinkers of the mean curvature flow. We then prove a gap theorem for the index of rotationally symmetric immersed shrinkers in $\mathbb{R}^3$, namely, that such shrinkers have index at least 3 unless they are one of the stable ones: the sphere, the cylinder, or the plane. We also provide a generalization of the result to higher dimensions.
|
arxiv:1603.06539
|
From the complex motions of robots to the oxygen binding of hemoglobin, the function of many mechanical systems depends on large, coordinated movements of their components. Such movements arise from a network of physical interactions in the form of links that transmit forces between constituent elements. However, the principled design of specific movements is made difficult by the number and nonlinearity of interactions. Here, we model mechanical systems as linkages of rigid bonds (edges) connected by joints (nodes), and formulate a simple but powerful framework for designing full nonlinear coordinated motions using concepts from dynamical systems theory. We begin with principles for designing finite and infinitesimal motions in small modules, and show that each module is a one-dimensional map between distances across pairs of nodes. Next, we represent the act of combining modules as an iteration of this map, and design networks whose geometries reflect the map's fixed points, limit cycles, and chaos. We use this representation to design different folding sequences, from a deployable network and a soliton to a branched network acting as a mechanical AND gate. Finally, we design large changes in curvature of the entire network, and construct physical networks from laser-cut acrylic, origami, and 3D-printed material to demonstrate the framework's potential and versatility for designing the full conformational trajectory of morphing metamaterials and structures.
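The module-as-map picture can be made concrete with a generic one-dimensional map, a hypothetical stand-in for the paper's distance-to-distance maps: chaining modules corresponds to iterating the map, and the chained geometry settles at the map's fixed point when the map is contracting there.

```python
# Illustrative sketch only (not the paper's actual linkage maps): iterating a
# one-dimensional map f, as when combining modules in series, converges to a
# fixed point x* = f(x*) whenever |f'(x*)| < 1.
def iterate(f, x0, n):
    x = x0
    traj = [x]
    for _ in range(n):
        x = f(x)
        traj.append(x)
    return traj

# Hypothetical contracting module map f(x) = 0.5*x + 1, fixed point x* = 2.
f = lambda x: 0.5 * x + 1.0
traj = iterate(f, x0=0.0, n=50)
print(traj[-1])  # converges to 2.0
```

A map with slope of magnitude greater than one at the fixed point would instead produce the limit cycles or chaotic folding sequences the paper exploits.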
|
arxiv:1906.08400
|
We study the reaction $e^+e^-\to\pi^0\gamma$ based on a dispersive representation of the underlying $\pi^0\to\gamma\gamma^*$ transition form factor. As a first application, we evaluate the contribution of the $\pi^0\gamma$ channel to the hadronic-vacuum-polarization correction to the anomalous magnetic moment of the muon. We find $a_\mu^{\pi^0\gamma}\big|_{\leq 1.35\,\text{GeV}} = 43.8(6)\times 10^{-11}$, in line with evaluations from the direct integration of the data. Second, our fit determines the resonance parameters of the $\omega$ and $\phi$. We observe good agreement with the $e^+e^-\to 3\pi$ channel, explaining a previous tension in the $\omega$ mass between $\pi^0\gamma$ and $3\pi$ by an unphysical phase in the fit function. Combining both channels we find $\bar m_\omega = 782.736(24)\,\text{MeV}$ and $\bar m_\phi = 1019.457(20)\,\text{MeV}$ for the masses including vacuum-polarization corrections. The $\phi$ mass agrees perfectly with the PDG average, which is dominated by determinations from the $\bar K K$ channel, demonstrating consistency with $3\pi$ and $\pi^0\gamma$. For the $\omega$ mass, our result is consistent but more precise, exacerbating tensions with the $\omega$ mass extracted via isospin-breaking effects from the $2\pi$ channel.
|
arxiv:2007.12696
|
Current one-stage action detection methods, which simultaneously predict action boundaries and the corresponding class, do not estimate or use a measure of confidence in their boundary predictions, which can lead to inaccurate boundaries. We incorporate the estimation of boundary confidence into one-stage anchor-free detection, through an additional prediction head that predicts the refined boundaries with higher confidence. We obtain state-of-the-art performance on the challenging EPIC-KITCHENS-100 action detection benchmark as well as the standard THUMOS14 action detection benchmark, and achieve improvement on the ActivityNet-1.3 benchmark.
|
arxiv:2210.14284
|
The transport properties of the two-dimensional system in HgTe-based quantum wells containing simultaneously electrons and holes of low densities are examined. The Hall resistance, as a function of perpendicular magnetic field, reveals an unconventional behavior, different from the classical N-shaped dependence typical for bipolar systems with electron-hole asymmetry. The quantum features of the magnetotransport are explained by means of a numerical calculation of the Landau level spectrum based on the Kane Hamiltonian. The origin of the quantum Hall plateau $\sigma_{xy} = 0$ near the charge neutrality point is attributed to special features of Landau quantization in our system.
|
arxiv:1210.7219
|
Guided-wave platforms such as fiber and silicon-on-insulator waveguides show great advances over traditional free-space implementations in quantum information technology, with the significant advantages of low transmission loss, low cost, integrability and compatibility with mature fiber communication systems. Interference between independent photon sources is the key to realizing complex quantum systems for more sophisticated applications such as multi-photon entanglement generation and quantum teleportation. In this work, we report Hong-Ou-Mandel interference between two independent all-fiber photon pair sources over two 100 GHz dense wavelength division multiplexing channels; the visibility reaches 53.2(8.4)% (82.9(5.3)%) without (with) background counts subtracted. In addition, we give a general theoretical description of the purity of photon pair generation in dispersion-shifted fiber and obtain the optimized condition for high-purity photon pair generation. We also obtain a maximum coincidence-to-background ratio of 131 by cooling the fiber in liquid nitrogen. Our study shows the great promise of integrated optical elements for future scalable quantum information processing.
|
arxiv:1607.02301
|
We review nonlinear gauge theory and its application to two-dimensional gravity. We construct a gauge theory based on nonlinear Lie algebras, which is an extension of the usual gauge theory based on Lie algebras and a new approach to the generalization of gauge theory. Two-dimensional gravity is derived from the nonlinear Poincaré algebra, yielding a new Yang-Mills-like formulation of the gravitational theory. As typical examples, we investigate $R^2$ gravity with dynamical torsion and a generic form of 'dilaton' gravity. The supersymmetric extension of this theory is also discussed.
|
arxiv:hep-th/9312059
|
Numerous studies have reported performance enhancement of a thermophotovoltaic (TPV) system when an emitter is separated by a nanoscale gap from a TPV cell. Although $p$-$n$-junction-based TPV cells have been widely used for near-field TPV systems, Schottky-junction-based near-field TPV systems have drawn attention recently for their ease of fabrication. However, existing studies mostly focused on the photocurrent generated on the metal side, because the energy required for the metal-side photocurrent (i.e., the Schottky barrier height) is smaller than the bandgap energy. Here, we suggest a precise performance analysis model for the Schottky-junction-based near-field TPV system, including photocurrent generation on the semiconductor side, by considering the transport of minority carriers within the semiconductor. It is found that most of the total photocurrent in the Schottky-junction-based near-field TPV system is generated on the semiconductor side. We also demonstrate that further enhancement in the photocurrent generation can be achieved by re-absorbing the usable photon energy in the metal with the help of a backside reflector. The present work will provide a design guideline for the Schottky-junction-based near-field TPV system taking into account three types of photocurrents.
|
arxiv:1811.12625
|
Direct writing using a focused electron beam allows for fabricating truly three-dimensional structures of sub-wavelength dimensions in the visible spectral regime. The resulting sophisticated geometries are perfectly suited for studying light-matter interaction at the nanoscale. Their overall optical response strongly depends not only on the geometry but also on the optical properties of the deposited material. In the case of the typically used metal-organic precursors, the deposits show a substructure of metallic nanocrystals embedded in a carbonaceous matrix. Since gold-containing precursor media are especially interesting for optical applications, we experimentally determine the effective permittivity of such an effective material. Our experiment is based on spectroscopic measurements of planar deposits. The retrieved permittivity shows a systematic dependence on the gold particle density and cannot be sufficiently described using the common Maxwell-Garnett approach for an effective medium.
|
arxiv:1510.06610
|
The two-dimensional Ising model is studied by performing computer simulations with one of the Monte Carlo algorithms, the worm algorithm. The critical temperature $T_c$ of the phase transition is calculated by the use of the critical exponents, and the results are compared to the analytical result, giving very high accuracy. We also show that magnetic ordering of impurities distributed on a graphene sheet is possible, by simulating the properly constructed model using the worm algorithm. The value of $T_c$ is estimated. Furthermore, the dependence of $T_c$ on the interaction constants is explored. We outline how one can proceed in investigating this relation in the future.
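For readers unfamiliar with Monte Carlo studies of the Ising model, the sketch below shows the simpler single-spin-flip Metropolis algorithm rather than the worm algorithm used in the paper; the lattice size, temperature, and sweep count are arbitrary illustration choices. Below the exact critical temperature $T_c = 2/\ln(1+\sqrt{2}) \approx 2.269$ (in units of $J/k_B$), the magnetization per spin stays near $\pm 1$.

```python
import math
import random

# Minimal 2D Ising Metropolis sketch (the paper uses the worm algorithm;
# Metropolis is shown here only as a simpler Monte Carlo baseline).
def metropolis_ising(L=16, T=1.0, sweeps=200, seed=1):
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]  # start from the ordered state
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        # sum of the four nearest neighbors with periodic boundaries
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb  # energy cost of flipping this spin (J = 1)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] *= -1
    m = sum(map(sum, spins)) / (L * L)  # magnetization per spin
    return m

m = metropolis_ising()
print(m)  # well below T_c ~ 2.269, |m| remains close to 1
```

The worm algorithm instead samples in an extended configuration space of open loops, which avoids the critical slowing down that plagues local updates like this one near $T_c$.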
|
arxiv:1303.0429
|
We use extensive multi-wavelength photometric data from the Great Observatories Origins Deep Survey (GOODS) to estimate photometric redshifts for a sample of 434 galaxies with spectroscopic redshifts in the Chandra Deep Field South. Using the Bayesian method, which incorporates redshift/magnitude priors, we estimate photometric redshifts for galaxies in the range 18 < R(AB) < 25.5, giving an rms scatter of 0.11. The outlier fraction is < 10%, with the outlier-clipped rms being 0.047. We examine the accuracy of photometric redshifts for several special sub-classes of objects. The results for extremely red objects are more accurate than those for the sample as a whole, with an rms of 0.051 and very few outliers (3%). Photometric redshifts for active galaxies, identified from their X-ray emission, have a dispersion of 0.104, with a 10% outlier fraction, similar to that for normal galaxies. Employing a redshift/magnitude prior in this process seems to be crucial in improving the agreement between photometric and spectroscopic redshifts.
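The distinction between the full and outlier-clipped rms quoted above can be sketched as follows; the clipping threshold and the residuals are assumptions for illustration, not the paper's exact recipe or data:

```python
# Sketch of an outlier-clipped rms: a few catastrophic photo-z failures
# dominate the full rms, so they are excluded above a (hypothetical) cut.
def rms(xs):
    return (sum(x * x for x in xs) / len(xs)) ** 0.5

def clipped_rms(residuals, cut=0.15):
    kept = [r for r in residuals if abs(r) <= cut]  # drop outliers
    return rms(kept), 1 - len(kept) / len(residuals)  # clipped rms, outlier fraction

# Hypothetical photo-z residuals: a tight core plus two catastrophic outliers.
res = [0.02, -0.03, 0.01, 0.05, -0.04, 0.00, 0.8, -0.6]
full = rms(res)
clip, frac = clipped_rms(res)
print(full, clip, frac)  # the clipped rms is far smaller than the full rms
```

This is why the paper reports both numbers: the full rms (0.11) is inflated by the < 10% of outliers, while the clipped rms (0.047) describes the accuracy for the well-behaved majority.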
|
arxiv:astro-ph/0309068
|
We use Halpha and FUV GALEX data for a large sample of nearby objects to study the high-mass star formation activity of normal late-type galaxies. The data are corrected for dust attenuation using the most accurate techniques at present available, namely the Balmer decrement and the total far-infrared to FUV flux ratio. The sample shows a highly dispersed distribution in the Halpha to FUV flux ratio, indicating that two of the most commonly used star formation tracers give star formation rates with uncertainties up to a factor of 2-3. The high dispersion is due to the presence of AGN, where the UV and the Halpha emission can be contaminated by nuclear activity; highly inclined galaxies, for which the applied extinction corrections are probably inaccurate; or starburst galaxies, where the stationarity in the star formation history required for transforming Halpha and UV luminosities into star formation rates is not satisfied. Excluding these objects we reach an uncertainty of ~50% on the SFR. The Halpha to FUV flux ratio increases with the total stellar mass. If limited to normal star-forming galaxies, however, this relationship reduces to a weak trend that might be totally removed using different extinction correction recipes. In these objects the Halpha to FUV flux ratio also seems barely related to the FUV-H colour, the H-band effective surface brightness, the total star formation activity and the gas fraction. The data are consistent with a Kroupa and Salpeter initial mass function in the high-mass stellar range and imply, for a Salpeter IMF, that the variations of the slope cannot exceed 0.25, from g = 2.35 for massive galaxies to g = 2.60 in low-luminosity systems. We show however that these observed trends, if real, can be due to the different micro-history of star formation in massive galaxies with respect to dwarfs.
|
arxiv:0910.3521
|
We analyze electronic excitations (excitations generated by adding or removing one electron) in the bulk of fractional quantum Hall states in the Jain sequence, using composite fermion Chern-Simons field theory. Starting from a mean-field approximation in which gauge field fluctuations are neglected, we use symmetry to constrain the possible composite fermion states contributing to the electronic Green's function, and expect discrete, infinitely sharp peaks in the electronic spectral function. We further consider the electronic excitations in particle-hole-conjugate fractional quantum Hall states. Gauge field fluctuations play an increasingly important role in the electron spectral function as the filling factor approaches 1/2, and evolve the discrete coherent peaks into a broad continuum even in the absence of impurities. In that limit, we switch to the electron perspective and calculate the electron spectral function via a linked-cluster approximation in the low to intermediate energy range. Finally, we compare our results with recent experiments.
|
arxiv:2406.09382
|
We report the detection of WASP-35b, a planet transiting a metal-poor ([Fe/H] = -0.15) star in the southern hemisphere; WASP-48b, an inflated planet which may have spun up its slightly evolved host star of 1.75 R_sun in the northern hemisphere; and the independent discovery of HAT-P-30b/WASP-51b, a new planet in the northern hemisphere. Using WASP, RISE, FTS and TRAPPIST photometry, with CORALIE, SOPHIE and NOT spectroscopy, we determine that WASP-35b has a mass of 0.72 +/- 0.06 M_J and radius of 1.32 +/- 0.03 R_J, and orbits with a period of 3.16 days; WASP-48b has a mass of 0.98 +/- 0.09 M_J, radius of 1.67 +/- 0.08 R_J and orbits in 2.14 days; while WASP-51b, with an orbital period of 2.81 days, is found to have a mass of 0.76 +/- 0.05 M_J and radius of 1.42 +/- 0.04 R_J, agreeing with the values of 0.71 +/- 0.03 M_J and 1.34 +/- 0.07 R_J reported for HAT-P-30b.
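The quoted masses and radii imply bulk densities well below Jupiter's, consistent with inflated hot Jupiters; the quick check below uses only the values in the abstract (the density ratio $\rho/\rho_{\rm Jup} = (M/M_{\rm Jup})/(R/R_{\rm Jup})^3$ is a standard identity, not a result from the paper):

```python
# Bulk densities relative to Jupiter from the quoted (mass, radius) in
# Jupiter units: rho/rho_Jup = (M/M_Jup) / (R/R_Jup)**3. Purely a
# consistency illustration of the abstract's numbers.
planets = {
    "WASP-35b": (0.72, 1.32),
    "WASP-48b": (0.98, 1.67),
    "WASP-51b": (0.76, 1.42),
}
for name, (m, r) in planets.items():
    print(name, round(m / r ** 3, 2))
# → WASP-35b 0.31, WASP-48b 0.21, WASP-51b 0.27 (all well below 1)
```

WASP-48b, flagged in the abstract as inflated, indeed comes out the least dense of the three.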
|
arxiv:1104.2827
|
Recently, joint radar-communication (JRC) systems have gained considerable interest for several applications such as vehicular communications, indoor localization and activity recognition, covert military communications, and satellite-based remote sensing. In these frameworks, bistatic/passive radar deployments with directional beams explore the angular search space and identify mobile users/radar targets. Subsequently, directional communication links are established with these mobile users. Consequently, JRC parameters such as the time trade-off between the radar exploration and communication service tasks have direct implications on the network throughput. Using tools from stochastic geometry (SG), we derive several system design and planning insights for deploying such networks and demonstrate how efficient radar detection can augment the communication throughput in a JRC system. Specifically, we provide a generalized analytical framework to maximize the network throughput by optimizing JRC parameters such as the exploration/exploitation duty cycle, the radar bandwidth, the transmit power, and the pulse repetition interval. The analysis is further extended to monostatic radar conditions, which is a special case in our framework. The theoretical results are experimentally validated through Monte Carlo simulations. Our analysis highlights that for a larger bistatic range, a lower operating bandwidth and a higher duty cycle must be employed to maximize the network throughput. Furthermore, we demonstrate how a reduced success in radar detection due to higher clutter density deteriorates the overall network throughput. Finally, we show a peak reliability of 70% of the JRC link metrics for a single bistatic transceiver configuration.
|
arxiv:2201.03221
|
The tensor charges of the nucleon are calculated in the framework of the SU(3) chiral quark soliton model. The rotational $1/N_c$ and strange quark mass corrections are taken into account up to linear order. We obtain the following numerical values for the tensor charges: $\delta u = 1.12$, $\delta d = -0.42$, and $\delta s = -0.008$. In contrast to the axial charges, the tensor charges in our model are closer to those of the nonrelativistic quark model; in particular, the net number of transversely polarized strange quarks in a transversely polarized nucleon, $\delta s$, is compatible with zero.
|
arxiv:hep-ph/9604442
|
For the first time, we introduce a rapid wavelength-swept, passively mode-locked fiber laser in an all-polarization-maintaining and all-fiber configuration. Achieving an exceptional wavelength sweep rate of up to 19 kHz through external modulation of the LD driver pump current, this laser offers a high sweep rate, simple cavity design, cost-effectiveness, and excellent repeatability.
|
arxiv:2406.15419
|
We consider the decoupling of neutrinos in the early universe in the presence of non-standard neutral-current neutrino-electron interactions (NSI). We first discuss a semi-analytical approach to solve the relevant kinetic equations and then present the results of fully numerical and momentum-dependent calculations, including flavor neutrino oscillations. We present our results in terms of both the effective number of neutrino species (N_eff) and the impact on the abundance of He-4 produced during big bang nucleosynthesis. We find that, for NSI parameters within the ranges allowed by present laboratory data, non-standard neutrino-electron interactions do not essentially modify the density of relic neutrinos nor the bounds on neutrino properties from cosmological observables, such as their mass. Nonetheless, the presence of neutrino-electron NSI may enhance the entropy transfer from electron-positron pairs into neutrinos instead of photons, up to a value of N_eff = 3.12, almost three times the correction to N_eff = 3 that appears for standard weak interactions.
|
arxiv:hep-ph/0607267
|
The FETI-DP, BDDC and P-FETI-DP preconditioners are derived in a particularly simple abstract form. It is shown that their properties can be obtained from only a very small set of algebraic assumptions. The presentation is purely algebraic and does not use any particular definition of method components, such as substructures and coarse degrees of freedom. It is then shown that P-FETI-DP and BDDC are in fact the same. The FETI-DP and BDDC preconditioned operators are of the same algebraic form, and the standard condition number bound carries over to arbitrary abstract operators of this form. The equality of eigenvalues of BDDC and FETI-DP also holds in the minimalist abstract setting. The abstract framework is illustrated on a standard substructuring example.
|
arxiv:0708.4031
|
The $N_c$-dependence of the vertices $PPP\gamma$, where $P$ is a pseudoscalar meson and $N_c$ is the number of colors, is analyzed with regard to the $N_c$-dependence of the quark charges. It is shown that the best processes for the determination of $N_c$ are the reactions $K\gamma \to K\pi$ and $\pi^\pm\gamma \to \pi^\pm\eta$ as well as the decay $\eta \to \pi^+\pi^-\gamma$. The measurement of the cross section $\sigma(\pi^-\gamma \to \pi^-\eta)$ at the VES facility at IHEP agrees with the value $N_c = 3$.
|
arxiv:hep-ph/0202046
|
It was suggested that a programmable matter system (composed of multiple computationally weak mobile particles) should remain connected at all times, since otherwise reconnection is difficult and may be impossible. At the same time, it was not clear that allowing the system to disconnect carried a significant advantage in terms of time complexity. We demonstrate, for a fundamental task, that of leader election, an algorithm where the system disconnects and then reconnects automatically in a non-trivial way (particles can move far away from their former neighbors and later reconnect to others). Moreover, the runtime of the temporarily disconnecting deterministic leader election algorithm is linear in the diameter. Hence, the disconnecting-reconnecting algorithm is as fast as previous randomized algorithms. When comparing to previous deterministic algorithms, we note that some of the previous work assumed weaker schedulers. Still, the runtime of all the previous deterministic algorithms that did not assume special shapes of the particle system (shapes with no holes) was at least quadratic in $n$, where $n$ is the number of particles in the system. (Moreover, the new algorithm is even faster in some parameters than the deterministic algorithms that did assume special initial shapes.) Since leader election is an important module in algorithms for various other tasks, the presented algorithm can be useful for speeding up other algorithms under the assumption of a strong scheduler. This leaves open the question: "Can a deterministic algorithm be as fast as the randomized ones also under weaker schedulers?"
|
arxiv:2106.01108
|
many scalar field theory models with complex actions are invariant under the antilinear ( $ pt $ ) symmetry operation $ l ^ { \ ast } ( - \ chi ) = l ( \ chi ) $. models in this class include the $ i \ phi ^ { 3 } $ model, the bose gas at finite density and polyakov loop spin models at finite density. this symmetry may be used to obtain a dual representation where weights in the functional integral are real but not necessarily positive. for a subclass of models satisfying a dual positive weight condition, the partition function is manifestly positive. the sign problem is eliminated ; such models are easily simulated by a simple local algorithm in any number of dimensions. simulations of models in this subclass show a rich set of behaviors. propagators may exhibit damped oscillations, indicating a clear violation of spectral positivity. pattern formation may also occur, with both stripe and bubble morphologies possible. the existence of a positive representation is constrained by lee - yang zeros : a positive representation cannot exist everywhere in the neighborhood of such a zero. simulation results raise the possibility that pattern - forming behavior may occur in finite density qcd in the vicinity of the critical line.
|
arxiv:1811.11112
|
in this work, we study the behavior of the nonabelian five - dimensional chern - simons term at finite temperature regime in order to verify the possible nonanalyticity. we employ two methods, a perturbative and a non - perturbative one. no scheme of regularization is needed, and we verify the nonanalyticity of the self - energy of the photon in the origin of momentum space by two conditions that do not commute, namely, the static limit $ ( k _ 0 = 0, \ vec k \ rightarrow 0 ) $ and the long wavelength limit $ ( k _ 0 \ rightarrow 0, \ vec k = 0 ) $, while its tensorial structure holds in both limits.
|
arxiv:2011.12333
|
we investigate the laminar flow of two - fluid mixtures inside a simple network of inter - connected tubes. the fluid system is comprised of two miscible newtonian fluids of different viscosity which do not mix and remain as nearly distinct phases. downstream of a diverging network junction the two fluids do not necessarily split in equal fraction and thus heterogeneity is introduced into network. we find that in the simplest network, a single loop with one inlet and one outlet, under steady inlet conditions the flow rates and distribution of the two fluids within the network loop can undergo persistent spontaneous oscillations. we develop a simple model which highlights the basic mechanism of the instability and we demonstrate that the model can predict the region of parameter space where oscillations exist. the model predictions are in good agreement with experimental observations.
|
arxiv:1409.3785
|
a numerical model of the peripheral circulation and a dynamical model of the large vessels and the heart are discussed in this paper. they are combined into a global model of blood circulation. some results of numerical simulations concerning matter transport through the human organism and heart diseases are presented at the end.
|
arxiv:0712.4342
|
i review the ability of the lhc ( large hadron collider ), nlc ( next linear lepton collider ) and fmc ( first muon collider ) to detect and study higgs bosons, with emphasis on the higgs bosons of extended higgs sectors, especially those of the minimal supersymmetric standard model ( mssm ). particular attention is given to means for distinguishing the lightest neutral cp - even higgs boson of the mssm from the single higgs boson of the minimal standard model ( sm ).
|
arxiv:hep-ph/9705282
|
at the heart of technology transitions lie complex processes of social and industrial dynamics. the quantitative study of sustainability transitions requires modelling work, which necessitates a theory of technology substitution. many, if not most, contemporary modelling approaches for future technology pathways overlook most aspects of transitions theory, for instance dimensions of heterogeneous investor choices, dynamic rates of diffusion and the profile of transitions. a significant body of literature however exists that demonstrates how transitions follow s - shaped diffusion curves or lotka - volterra systems of equations. this framework is used ex - post since timescales can only be reliably obtained in cases where the transitions have already occurred, precluding its use for studying cases of interest where nascent innovations in protective niches await favourable conditions for their diffusion. in principle, scaling parameters of transitions can, however, be derived from knowledge of industrial dynamics, technology turnover rates and technology characteristics. in this context, this paper presents a theory framework for evaluating the parameterisation of s - shaped diffusion curves for use in simulation models of technology transitions without the involvement of historical data fitting, making use of standard demography theory applied to technology at the unit level. the classic lotka - volterra competition system emerges from first principles from demography theory, its timescales explained in terms of technology lifetimes and industrial dynamics. the theory is placed in the context of the multi - level perspective on technology transitions, where innovation and the diffusion of new socio - technical regimes take a prominent place, as well as discrete choice theory, the primary theoretical framework for introducing agent diversity.
|
arxiv:1304.3602
|
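as a minimal illustration of the s - shaped diffusion discussed above - a sketch with hypothetical parameters, not the paper's demography - based parameterisation - the two - technology lotka - volterra competition system reduces to the logistic equation ds/dt = s ( 1 - s ) / tau, which can be integrated by forward euler:

```python
import math

def logistic_share(t, t0=0.0, tau=1.0):
    """closed-form logistic s-curve: market share of the entrant
    technology, with substitution timescale tau (hypothetical units)."""
    return 1.0 / (1.0 + math.exp(-(t - t0) / tau))

def simulate_substitution(s0=0.01, tau=1.0, dt=0.001, steps=10000):
    """forward-euler integration of ds/dt = s*(1 - s)/tau, the
    two-technology limit of the lotka-volterra competition system."""
    s, shares = s0, [s0]
    for _ in range(steps):
        s += dt * s * (1.0 - s) / tau
        shares.append(s)
    return shares
```

here tau plays the role of the substitution timescale that the paper derives from technology lifetimes and industrial dynamics rather than from historical data fitting.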
geometrical complexities in natural fault zones, such as steps and gaps, pose a challenge in seismic hazard studies as they can act as barriers to seismic ruptures. in this study, we propose a criterion, which is based on the rate - and - state equation, to estimate the efficiency of an earthquake rupture to jump between two spatially disconnected faults. the proposed jump criterion is tested using 2d quasi - dynamic numerical simulations of the seismic cycle. the criterion successfully predicts fault jumps where the coulomb stress change fails to do so. the criterion includes the coulomb stress change as a parameter but is also dependent on other important parameters, among which the absolute normal stress on the fault which the rupture is to jump to. based on the criterion, the maximum jump distance increases with decreasing absolute normal stress, i. e. as the rupture process occurs closer to the surface or as pore pressure increases. the criterion implies that an earthquake can jump to an infinite distance at the surface if the normal stress is allowed to go to zero. thus, the properties of the surface layers are of the utmost importance in terms of the maximum rupture jump distance allowed. the absolute normal stress is the main controlling parameter, followed by the uncertainty on the slip of an earthquake, which controls the coulomb stress impact on the receiver fault. finally, we also propose a method to compute the probability of an earthquake rupture jumping, which allows one to consider uncertainties in the geometrical configuration between two faults.
|
arxiv:2501.15948
|
the notion of age of information ( aoi ) has become an important performance metric in network and control systems. information freshness, represented by aoi, naturally arises in the context of caching. we address optimal scheduling of cache updates for a time - slotted system where the contents vary in size. there is limited capacity for the cache and for making content updates. each content is associated with a utility function that is monotonically decreasing in the aoi. for this combinatorial optimization problem, we present the following contributions. first, we provide theoretical results settling the boundary of problem tractability. in particular, by a reformulation using network flows, we prove the boundary is essentially determined by whether or not the contents are of equal size. second, we derive an integer linear formulation for the problem, of which the optimal solution can be obtained for small - scale scenarios. next, via a mathematical reformulation, we derive a scalable optimization algorithm using repeated column generation. in addition, the algorithm computes a bound of global optimum, that can be used to assess the performance of any scheduling solution. performance evaluation of large - scale scenarios demonstrates the strengths of the algorithm in comparison to a greedy schedule. finally, we extend the applicability of our work to cyclic scheduling.
|
arxiv:2005.00445
|
the balanced connected subgraph problem ( bcs ) was recently introduced by bhore et al. ( caldam 2019 ). in this problem, we are given a graph $ g $ whose vertices are colored by red or blue. the goal is to find a maximum connected subgraph of $ g $ having the same number of blue vertices and red vertices. they showed that this problem is np - hard even on planar graphs, bipartite graphs, and chordal graphs. they also gave some positive results : bcs can be solved in $ o ( n ^ 3 ) $ time for trees and $ o ( n + m ) $ time for split graphs and properly colored bipartite graphs, where $ n $ is the number of vertices and $ m $ is the number of edges. in this paper, we show that bcs can be solved in $ o ( n ^ 2 ) $ time for trees and $ o ( n ^ 3 ) $ time for interval graphs. the former result can be extended to bounded treewidth graphs. we also consider a weighted version of bcs ( wbcs ). we prove that this variant is weakly np - hard even on star graphs and strongly np - hard even on split graphs and properly colored bipartite graphs, whereas the unweighted counterpart is tractable on those graph classes. finally, we consider an exact exponential - time algorithm for general graphs. we show that bcs can be solved in $ 2 ^ { n / 2 } n ^ { o ( 1 ) } $ time. this algorithm is based on a variant of dreyfus - wagner algorithm for the steiner tree problem.
|
arxiv:1910.07305
|
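the bcs objective can be made concrete with a tiny exhaustive search - an exponential - time illustration of the problem statement only, not the $ o ( n ^ 2 ) $ tree algorithm or the $ 2 ^ { n / 2 } n ^ { o ( 1 ) } $ exact algorithm of the paper:

```python
from itertools import combinations

def is_connected(vertices, edges):
    """dfs connectivity check on the subgraph induced by `vertices`."""
    vs = set(vertices)
    if not vs:
        return False
    adj = {v: set() for v in vs}
    for u, w in edges:
        if u in vs and w in vs:
            adj[u].add(w)
            adj[w].add(u)
    seen, stack = set(), [next(iter(vs))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == vs

def max_balanced_connected(colors, edges):
    """exhaustive search over vertex subsets, largest first: returns
    the size of a largest connected subgraph with equally many 'r'
    and 'b' vertices (0 if none exists)."""
    n = len(colors)
    for k in range(n, 1, -1):
        for sub in combinations(range(n), k):
            reds = sum(1 for v in sub if colors[v] == 'r')
            if 2 * reds == k and is_connected(sub, edges):
                return k
    return 0
```

for example, an alternating path r - b - r - b is itself balanced and connected, while a red star with three blue leaves admits only a balanced pair.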
boolean satisfiability ( sat ) has an extensive application domain in computer science, especially in electronic design automation applications. circuit synthesis, optimization, and verification problems can be solved by transforming the original problems to sat problems. however, the sat problem is known to be np - complete, which means there is no known efficient method to solve it. therefore, an efficient sat solver to enhance performance is always desired. we propose a hardware acceleration method for sat problems. by surveying the properties of sat problems and the decoding of low - density parity - check ( ldpc ) codes, a special class of error - correcting codes, we discover that both of them are constraint satisfaction problems. the belief propagation algorithm has been successfully applied to the decoding of ldpc codes, and the corresponding decoder hardware designs are extensively studied. therefore, we propose a belief propagation based algorithm to solve sat problems. with this algorithm, the sat solver can be accelerated by hardware. a software simulator is implemented to verify the proposed algorithm and estimate the performance improvement. our experiment results show that the time complexity does not increase with the size of the sat problems and the proposed method can achieve at least a 30x speedup compared to minisat.
|
arxiv:1603.05314
|
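the cnf constraint - satisfaction formulation that such a solver targets can be illustrated with a minimal exhaustive check over assignments ( dimacs - style signed literals ) - a sketch of the problem itself, not of the belief - propagation hardware algorithm proposed in the paper:

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """exhaustive cnf satisfiability check. each clause is a list of
    nonzero ints: literal v means variable v is true, -v means false
    (dimacs convention). returns a satisfying assignment as a tuple
    of booleans, or None if the formula is unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return bits
    return None
```

every clause is a local constraint over a few variables, which is exactly the structure that ldpc - style message passing exploits.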
the paper presents a comparative study of state - of - the - art approaches for the question classification task : logistic regression, convolutional neural networks ( cnn ), long short - term memory networks ( lstm ) and quasi - recurrent neural networks ( qrnn ). all models use pre - trained glove word embeddings and are trained on human - labeled data. the best accuracy is achieved using a cnn model with five convolutional layers and various kernel sizes stacked in parallel, followed by one fully connected layer. the model reached 90. 7 % accuracy on the trec 10 test set. all the model architectures in this paper were developed from scratch in pytorch, in a few cases based on reliable open - source implementations.
|
arxiv:2001.00571
|
let m be a compact, connected, m - dimensional manifold without boundary and p > 1. for 1 < p \ leq m, we prove that the first eigenvalue \ lambda _ { 1, p } of the p - laplacian is bounded on each conformal class of riemannian metrics of volume one on m. for p > m, we show that any conformal class of riemannian metrics on m contains metrics of volume one with \ lambda _ { 1, p } arbitrarily large. as a consequence, we obtain that in two dimensions \ lambda _ { 1, p } is uniformly bounded on the space of riemannian metrics of volume one if 1 < p \ leq 2, respectively unbounded if p > 2.
|
arxiv:1210.5129
|
high - dimensional biomarkers such as genomics are increasingly being measured in randomized clinical trials. consequently, there is a growing interest in developing methods that improve the power to detect biomarker - treatment interactions. we adapt recently proposed two - stage interaction detecting procedures in the setting of randomized clinical trials. we also propose a new stage 1 multivariate screening strategy using ridge regression to account for correlations among biomarkers. for this multivariate screening, we prove the asymptotic between - stage independence, required for family - wise error rate control, under biomarker - treatment independence. simulation results show that in various scenarios, the ridge regression screening procedure can provide substantially greater power than the traditional one - biomarker - at - a - time screening procedure in highly correlated data. we also exemplify our approach in two real clinical trial data applications.
|
arxiv:2004.12028
|
we study separations between two fundamental models ( or \ emph { ansätze } ) of antisymmetric functions, that is, functions $ f $ of the form $ f ( x _ { \ sigma ( 1 ) }, \ ldots, x _ { \ sigma ( n ) } ) = \ text { sign } ( \ sigma ) f ( x _ 1, \ ldots, x _ n ) $, where $ \ sigma $ is any permutation. these arise in the context of quantum chemistry, and are the basic modeling tool for wavefunctions of fermionic systems. specifically, we consider two popular antisymmetric ansätze : the slater representation, which leverages the alternating structure of determinants, and the jastrow ansatz, which augments slater determinants with a product by an arbitrary symmetric function. we construct an antisymmetric function in $ n $ dimensions that can be efficiently expressed in jastrow form, yet provably cannot be approximated by slater determinants unless there are exponentially ( in $ n ^ 2 $ ) many terms. this represents the first explicit quantitative separation between these two ansätze.
|
arxiv:2208.03264
|
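the two ansätze can be illustrated for two particles in one dimension ( the gaussian orbitals below are hypothetical, chosen only for the example ) : a slater determinant is antisymmetric by construction, and multiplying it by a symmetric jastrow factor preserves antisymmetry:

```python
import math

# hypothetical single-particle orbitals, chosen only for illustration
phi_a = lambda x: math.exp(-x * x)
phi_b = lambda x: x * math.exp(-x * x)

def slater2(x1, x2):
    """two-particle slater determinant:
    det [[phi_a(x1), phi_b(x1)], [phi_a(x2), phi_b(x2)]]."""
    return phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)

def jastrow_slater(x1, x2):
    """jastrow ansatz: a symmetric correlation factor multiplying the
    slater determinant; the product remains antisymmetric."""
    sym = math.exp(-0.5 * (x1 - x2) ** 2)
    return sym * slater2(x1, x2)
```

swapping the two arguments flips the sign in both cases, and the determinant vanishes when the coordinates coincide, reflecting the pauli exclusion principle.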
most text - to - image customization techniques fine - tune models on a small set of \ emph { personal concept } images captured in minimal contexts. this often results in the model becoming overfitted to these training images and unable to generalize to new contexts in future text prompts. existing customization methods are built on the success of effectively representing personal concepts as textual embeddings. thus, in this work, we resort to diversifying the context of these personal concepts \ emph { solely } within the textual space by simply creating a contextually rich set of text prompts, together with a widely used self - supervised learning objective. surprisingly, this straightforward and cost - effective method significantly improves semantic alignment in the textual space, and this effect further extends to the image space, resulting in higher prompt fidelity for generated images. additionally, our approach does not require any architectural modifications, making it highly compatible with existing text - to - image customization methods. we demonstrate the broad applicability of our approach by combining it with four different baseline methods, achieving notable clip score improvements.
|
arxiv:2410.10058
|
on 2010 november 23 - 25, we found that the amplitude of v1162 orionis, a delta scuti star, recovered to about 0. 18 mag in the v band. an updated ( o - c ) diagram is provided for all available maximum data.
|
arxiv:1109.3840
|
we revisit the " epsilon - intelligence " model of toth et al. ( 2011 ), that was proposed as a minimal framework to understand the square - root dependence of the impact of meta - orders on volume in financial markets. the basic idea is that most of the daily liquidity is " latent " and furthermore vanishes linearly around the current price, as a consequence of the diffusion of the price itself. however, the numerical implementation of toth et al. was criticised as being unrealistic, in particular because all the " intelligence " was conferred to market orders, while limit orders were passive and random. in this work, we study various alternative specifications of the model, for example allowing limit orders to react to the order flow, or changing the execution protocols. by and large, our study lends strong support to the idea that the square - root impact law is a very generic and robust property that requires very few ingredients to be valid. we also show that the transition from super - diffusion to sub - diffusion reported in toth et al. is in fact a cross - over, but that the original model can be slightly altered in order to give rise to a genuine phase transition, which is of interest on its own. we finally propose a general theoretical framework to understand how a non - linear impact may appear even in the limit where the bias in the order flow is vanishingly small.
|
arxiv:1311.6262
|
in this paper, we investigate the general notion of the slope for families of curves $ f : x \ to y $. the main result is an answer to the above question when $ \ dim y = 2 $, and we prove a lower bound for this new slope in this case over fields of any characteristic. both the notion and the slope inequality are compatible with the theory for $ \ dim y = 0, 1 $ in a very natural way, and this gives a strong evidence that the slope for an $ n $ - fold fibration of curves $ f : x \ to y $ may be $ k _ { x / y } ^ n / \ mathrm { ch } _ { n - 1 } ( f _ * \ omega _ { x / y } ) $. rather than the usual stability methods, the whole proof of the slope inequality here is based on a completely new method using characteristic $ p > 0 $ geometry. a simpler version of this method yields a new proof of the slope inequality when $ \ dim y = 1 $.
|
arxiv:1512.03933
|
of the 1980s. since then, solid - state devices have all but completely taken over. vacuum tubes are still used in some specialist applications such as high power rf amplifiers, cathode - ray tubes, specialist audio equipment, guitar amplifiers and some microwave devices. in april 1955, the ibm 608 was the first ibm product to use transistor circuits without any vacuum tubes and is believed to be the first all - transistorized calculator to be manufactured for the commercial market. the 608 contained more than 3, 000 germanium transistors. thomas j. watson jr. ordered all future ibm products to use transistors in their design. from that time on transistors were almost exclusively used for computer logic circuits and peripheral devices. however, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass - production basis, which limited them to a number of specialised applications. the mosfet was invented at bell labs between 1955 and 1960. it was the first truly compact transistor that could be miniaturised and mass - produced for a wide range of uses. its advantages include high scalability, affordability, low power consumption, and high density. it revolutionized the electronics industry, becoming the most widely used electronic device in the world. the mosfet is the basic element in most modern electronic equipment. as the complexity of circuits grew, problems arose. one problem was the size of the circuit. a complex circuit like a computer was dependent on speed. if the components were large, the wires interconnecting them must be long. the electric signals took time to go through the circuit, thus slowing the computer. the invention of the integrated circuit by jack kilby and robert noyce solved this problem by making all the components and the chip out of the same block ( monolith ) of semiconductor material. the circuits could be made smaller, and the manufacturing process could be automated. 
this led to the idea of integrating all components on a single - crystal silicon wafer, which led to small - scale integration ( ssi ) in the early 1960s, and then medium - scale integration ( msi ) in the late 1960s, followed by vlsi. in 2008, billion - transistor processors became commercially available. = = subfields = = = = devices and components = = an electronic component is any component in an electronic system either active or passive. components are connected together, usually by being soldered to a printed circuit board ( pcb ), to create an
|
https://en.wikipedia.org/wiki/Electronics
|
observational data from the fermi gamma - ray space telescope are analyzed with the goal of looking for variations in gamma - ray flux from young shell - like supernova remnants. a uniform methodological approach is adopted for all snrs considered. the g1. 9 + 0. 3 and kepler snrs are not detected. the light curves of the cas a and tycho snrs are compatible with a steady gev flux during the recent ten years, as are the x - ray and radio fluxes. less confident results on sn1006 and sn1987a are discussed.
|
arxiv:1912.06452
|
automatically selecting exposure bracketing ( images exposed differently ) is important to obtain a high dynamic range image by using multi - exposure fusion. unlike previous methods that have many restrictions such as requiring camera response function, sensor noise model, and a stream of preview images with different exposures ( not accessible in some scenarios e. g. some mobile applications ), we propose a novel deep neural network to automatically select exposure bracketing, named ebsnet, which is sufficiently flexible without having the above restrictions. ebsnet is formulated as a reinforced agent that is trained by maximizing rewards provided by a multi - exposure fusion network ( mefnet ). by utilizing the illumination and semantic information extracted from just a single auto - exposure preview image, ebsnet can select an optimal exposure bracketing for multi - exposure fusion. ebsnet and mefnet can be jointly trained to produce favorable results against recent state - of - the - art approaches. to facilitate future research, we provide a new benchmark dataset for multi - exposure selection and fusion.
|
arxiv:2005.12536
|
we describe a time - resolved monitoring technique for heterogeneous media. our approach is based on the spatial variations of the cross - coherence of coda waveforms acquired at fixed positions but at different dates. to locate and characterize a weak change that occurred between successive acquisitions, we use a maximum likelihood approach combined with a diffusive propagation model. we illustrate this technique, called locadiff, with numerical simulations. in several illustrative examples, we show that the change can be located with a precision of a few wavelengths and its effective scattering cross - section can be retrieved. the precision of the method depending on the number of source receiver pairs, time window in the coda, and errors in the propagation model is investigated. limits of applications of the technique to real - world experiments are discussed.
|
arxiv:1007.3103
|
the order of the post - newtonian expansion needed to extract, in a reliable and accurate manner, the fully general relativistic gravitational wave signal from inspiralling compact binaries is explored. a class of approximate wave forms, called p - approximants, is constructed based on the following two inputs : ( a ) the introduction of two new energy - type and flux - type functions e ( v ) and f ( v ), respectively ; ( b ) the systematic use of pade approximation for constructing successive approximants of e ( v ) and f ( v ). the new p - approximants are not only more effectual ( larger overlaps ) and more faithful ( smaller biases ) than the standard taylor approximants, but also converge faster and monotonically. the presently available o ( v / c ) ^ 5 - accurate post - newtonian results can be used to construct p - approximant wave forms that provide overlaps with the exact wave form larger than 96. 5 %, implying that more than 90 % of potential events can be detected with the aid of p - approximants, as opposed to a mere 10 - 15 % that would be detectable using standard post - newtonian approximants.
|
arxiv:gr-qc/9708034
|
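the pade construction underlying the p - approximants can be sketched generically ( this illustrates pade resummation of an arbitrary taylor series, not the specific e ( v ) and f ( v ) functions of the paper ) : matching ( a0 + a1 x ) / ( 1 + b1 x ) to c0 + c1 x + c2 x ^ 2 through second order gives b1 = - c2 / c1, a0 = c0, a1 = c1 + b1 c0:

```python
import math

def pade11(c0, c1, c2):
    """[1/1] pade approximant of the series c0 + c1*x + c2*x**2 + ...
    returned as a callable x -> (a0 + a1*x) / (1 + b1*x)."""
    b1 = -c2 / c1              # matches the x**2 coefficient
    a0, a1 = c0, c1 + b1 * c0  # match the x**0 and x**1 coefficients
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

# [1/1] pade of exp(x) from its taylor coefficients: (1 + x/2)/(1 - x/2)
exp11 = pade11(1.0, 1.0, 0.5)
```

for exp ( x ) at moderate x, the pade form built from the same three coefficients is already closer to the exact value than the second - order taylor polynomial - the same kind of accelerated convergence the abstract reports for p - approximants.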
we give an explicit formula for the mordell - weil rank of an abelian fibered variety and some of its applications for an abelian fibered hyperkähler manifold. as a byproduct, we also give an explicit example of an abelian fibered variety in which the picard number of the generic fiber in the sense of scheme is different from the picard number of generic closed fibers.
|
arxiv:math/0703245
|
in this paper distribution amplitudes of pseudoscalar and vector nonrelativistic mesons are considered. using equations of motion for the distribution amplitudes, relations are derived which allow one to calculate the masses of nonrelativistic pseudoscalar and vector mesons if the leading twist distribution amplitudes are known. these relations can also be rewritten as relations between the masses of nonrelativistic mesons and an infinite series of qcd operators, which can be considered as an exact version of the gremm - kapustin relation in nrqcd.
|
arxiv:0912.1781
|
despite recent advances, object detection in aerial images is still a challenging task. specific problems in aerial images makes the detection problem harder, such as small objects, densely packed objects, objects in different sizes and with different orientations. to address small object detection problem, we propose a two - stage object detection framework called " focus - and - detect ". the first stage which consists of an object detector network supervised by a gaussian mixture model, generates clusters of objects constituting the focused regions. the second stage, which is also an object detector network, predicts objects within the focal regions. incomplete box suppression ( ibs ) method is also proposed to overcome the truncation effect of region search approach. results indicate that the proposed two - stage framework achieves an ap score of 42. 06 on visdrone validation dataset, surpassing all other state - of - the - art small object detection methods reported in the literature, to the best of authors ' knowledge.
|
arxiv:2203.12976
|
the reflection of a point $ v $ across the hyperplane $ \ { x : x \ cdot a = c \ } $ not through the origin is { \ displaystyle \ operatorname { ref } _ { a, c } ( v ) = v - 2 { \ frac { v \ cdot a - c } { a \ cdot a } } a. } = = see also = = additive inverse coordinate rotations and reflections householder transformation inversive geometry plane of rotation reflection mapping reflection group reflection symmetry
|
https://en.wikipedia.org/wiki/Reflection_(mathematics)
|
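the affine reflection formula above translates directly into code ( a minimal sketch for points given as coordinate lists ):

```python
def reflect(v, a, c):
    """reflection of point v across the hyperplane {x : x . a = c},
    per ref_{a,c}(v) = v - 2 * (v . a - c) / (a . a) * a."""
    dot_va = sum(vi * ai for vi, ai in zip(v, a))
    dot_aa = sum(ai * ai for ai in a)
    s = 2.0 * (dot_va - c) / dot_aa
    return [vi - s * ai for vi, ai in zip(v, a)]
```

with a = ( 0, 1 ) and c = 0 this reflects across the x - axis, and applying the map twice returns the original point, since a reflection is an involution.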
the structure and dynamics of fluids confined in nanoporous media differ from those in bulk, which can be probed using nmr relaxation measurements. we here show, using atomistic molecular dynamics simulations of water in a slit nanopore, that the behavior of the nmr relaxation rate, r1, with varying surface interaction and confinement strength can be estimated from the exchange statistics of fluid molecules between the adsorbed surface layer and the bulk region, where molecules undergo intermittent dynamics. we employ first return passage time calculations to quantify the molecular exchange statistics, thereby linking microscopic parameters of the confined fluid - such as adsorption time, pore size, and diffusion coefficient - to the nmr relaxation rate. this approach allows one to predict and interpret the molecular relaxation of fluids at interfaces using merely concepts of statistical mechanics and can be generalized to closed and open geometries.
|
arxiv:2501.17596
|
this paper presents a generative model for super - resolution in routine clinical magnetic resonance images ( mri ), of arbitrary orientation and contrast. the model recasts the recovery of high resolution images as an inverse problem, in which a forward model simulates the slice - select profile of the mr scanner. the paper introduces a prior based on multi - channel total variation for mri super - resolution. bias - variance trade - off is handled by estimating hyper - parameters from the low resolution input scans. the model was validated on a large database of brain images. the validation showed that the model can improve brain segmentation, that it can recover anatomical information between images of different mr contrasts, and that it generalises well to the large variability present in mr images of different subjects. the implementation is freely available at https://github.com/brudfors/spm_superres
|
arxiv:1810.03422
|
we present a brief discussion of the general form of the amplitude that describes the two - pion photoproduction process. we outline an effective lagrangian method that we are using to calculate this amplitude, and comment briefly on a few aspects of the calculation.
|
arxiv:hep-ph/9708236
|
we unambiguously identify, in experiment and theory, a previously overlooked holographic interference pattern in strong - field ionization, dubbed " the spiral ", stemming from two trajectories for which the binding potential and the laser field are equally critical. we show that, due to strong interaction with the core, these trajectories are optimal tools for probing the target \ textbf { after } ionization and for revealing obfuscated phases in the initial bound states. the spiral is shown to be responsible for interference carpets, formerly attributed to direct above - threshold ionization trajectories, and we show the carpet - interference condition is a general property due to the field symmetry.
|
arxiv:2003.02239
|
we present our results on the gamma - ray emission from interaction - powered supernovae ( sne ), a recently discovered sn type that is suggested to be surrounded by a circumstellar medium ( csm ) with densities 10 ^ 7 - 10 ^ 12 cm ^ - 3. such high densities favor inelastic collisions between relativistic protons accelerated in the sn blast wave and csm protons and the production of gamma - ray photons through neutral pion decays. using a numerical code that includes synchrotron radiation, adiabatic losses due to the expansion of the source, photon - photon interactions, proton - proton collisions and proton - photon interactions, we calculate the multi - wavelength non - thermal photon emission soon after the shock breakout and follow its temporal evolution until 100 - 1000 days. focusing on the gamma - ray emission at > 100 mev, we show that this could be detectable by the fermi - lat telescope for nearby ( < 10 mpc ) sne with dense csm ( > 10 ^ 11 cm ^ - 3 ).
|
arxiv:1607.05847
|
a widely used tool in the study of risk, insurance and extreme values is the mean excess plot. one use is for validating a generalized pareto model for the excess distribution. this paper investigates some theoretical and practical aspects of the use of the mean excess plot.
|
arxiv:0907.5236
|
in this work, we study the stochastic dynamics of micro - magnetics interacting with a spin - current torque. we extend the previously constructed stochastic landau - lifshitz equation to the case with spin - current torque, and verify the conditions of detailed balance. then we construct various thermodynamics quantities such as work and heat, and prove the second law of thermodynamics. due to the existence of spin - torque and the asymmetry of the kinetic matrix, a novel effect of entropy pumping shows up. as a consequence, the system may behave as a heat engine which constantly transforms heat into magnetic work. finally, we derive a fluctuation theorem for the joint probability density function of the pumped entropy and the total work, and verify it using numerical simulations.
|
arxiv:2406.02220
|
we explain why the australian electoral commission should perform an audit of the paper senate ballots against the published preference data files. we suggest four different post - election audit methods appropriate for australian senate elections. we have developed prototype code for all of them and tested it on preference data from the 2016 election.
|
arxiv:1610.00127
|
we present calculations indicating the possibility of a new class of type i supernovae. in this new paradigm relativistic terms enhance the self-gravity of a carbon-oxygen white dwarf as it passes or orbits near a black hole. this relativistic compression can cause the central density to exceed the threshold for pycnonuclear reactions, so that a thermonuclear runaway ensues. we consider three possible environments: 1) white dwarfs orbiting a low-mass black hole; 2) white dwarfs encountering a massive black hole in a dense globular cluster; and 3) white dwarfs passing a supermassive black hole in a dense galactic core. we estimate the rate at which such events could occur out to a redshift of z = 1. event rates are estimated to be significantly less than the rate of normal type ia supernovae for all three classes. nevertheless, such events may be frequent enough to warrant a search for this new class of supernova. we propose several observable signatures which might be used to identify this type of event and speculate that such an event might have produced the observed "mixed-morphology" sgr a east supernova remnant in the galactic core.
|
arxiv:astro-ph/0307337
|
molecular representation learning ( mrl ) has long been crucial in the fields of drug discovery and materials science, and it has made significant progress due to the development of natural language processing ( nlp ) and graph neural networks ( gnns ). nlp treats the molecules as one dimensional sequential tokens while gnns treat them as two dimensional topology graphs. based on different message passing algorithms, gnns have various performance on detecting chemical environments and predicting molecular properties. herein, we propose directed graph attention networks ( d - gats ) : the expressive gnns with directed bonds. the key to the success of our strategy is to treat the molecular graph as directed graph and update the bond states and atom states by scaled dot - product attention mechanism. this allows the model to better capture the sub - structure of molecular graph, i. e., functional groups. compared to other gnns or message passing neural networks ( mpnns ), d - gats outperform the state - of - the - art on 13 out of 15 important molecular property prediction benchmarks.
|
arxiv:2305.14819
|
diffraction of elastic waves is considered for a system consisting of two parallel arrays of thin (subwavelength) cylinders that are arranged periodically. the embedding medium supports waves of all polarizations, one longitudinal and two transverse, having different dispersion relations. interaction with the scatterers mixes the longitudinal mode and one of the transverse modes. it is shown that the system supports bound states in the continuum (bsc) that have no specific polarization, that is, there are standing waves localized in the scattering structure whose wave numbers lie in the first open diffraction channels for both longitudinal and transverse modes. bscs are shown to exist only for specific distances between the arrays and for specific values of the wave vector component along the array. an analytic solution is obtained for such bscs. for distances between the parallel arrays much larger than the wavelength, bscs are proven to exist due to destructive interference of the far-field resonance radiation, similar to the interference in a fabry-perot interferometer, which can occur simultaneously for both propagating modes.
|
arxiv:1906.00955
|
with the improvements of los angeles in many aspects, growing numbers of people tend to live in or travel to the city. the primary objective of this paper is to apply a set of methods for the time series analysis of traffic accidents in los angeles in the past few years. the number of traffic accidents, collected monthly from 2010 to 2019, reveals that traffic accidents happen seasonally and increase with fluctuation. this paper utilizes ensemble methods to combine several different models of the data from various perspectives, which can lead to better forecasting accuracy. the ima(1,1), ets(a,n,a), and two models with fourier terms fail the independence assumption check. however, the online gradient descent (ogd) model generated by the ensemble method fits the data well and is the best-performing of our candidate models. therefore, future traffic accidents can be forecast more accurately from previous data through our model, which can help planners make better decisions.
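as a rough illustration of the ensemble idea (not the paper's exact ogd model — the learning rate, squared-error loss, and projection step below are our own choices), online gradient descent can learn convex weights over several candidate forecasters:

```python
# online gradient descent over convex combination weights of several
# forecasters; a minimal stand-in for the ensemble scheme in the abstract.
def ogd_ensemble(expert_preds, truths, lr=0.1):
    """expert_preds: list of per-step tuples of expert forecasts."""
    k = len(expert_preds[0])
    w = [1.0 / k] * k
    for preds, y in zip(expert_preds, truths):
        yhat = sum(wi * p for wi, p in zip(w, preds))
        grad = [2.0 * (yhat - y) * p for p in preds]  # gradient of squared error
        w = [wi - lr * g for wi, g in zip(w, grad)]
        w = [max(wi, 0.0) for wi in w]                # clip to nonnegative
        s = sum(w) or 1.0
        w = [wi / s for wi in w]                      # renormalize onto the simplex
    return w
```

run on a stream where one expert is consistently right, the weight mass shifts to that expert, which is the behavior that makes the combined forecast track the best candidate model.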
|
arxiv:1911.12813
|
this paper addresses the problem of registering multiple point sets. solutions to this problem are often approximated by repeatedly solving for pairwise registration, which results in an uneven treatment of the sets forming a pair : a model set and a data set. the main drawback of this strategy is that the model set may contain noise and outliers, which negatively affects the estimation of the registration parameters. in contrast, the proposed formulation treats all the point sets on an equal footing. indeed, all the points are drawn from a central gaussian mixture, hence the registration is cast into a clustering problem. we formally derive batch and incremental em algorithms that robustly estimate both the gmm parameters and the rotations and translations that optimally align the sets. moreover, the mixture ' s means play the role of the registered set of points while the variances provide rich information about the contribution of each component to the alignment. we thoroughly test the proposed algorithms on simulated data and on challenging real data collected with range sensors. we compare them with several state - of - the - art algorithms, and we show their potential for surface reconstruction from depth data.
|
arxiv:1609.01466
|
quantum many-body scar states are exceptional finite energy density eigenstates in an otherwise thermalizing system that do not satisfy the eigenstate thermalization hypothesis. we investigate the fate of exact many-body scar states under perturbations. at small system sizes, deformed scar states described by perturbation theory survive. however, we argue for their eventual thermalization in the thermodynamic limit from the finite-size scaling of the off-diagonal matrix elements. nevertheless, we show numerically and analytically that the nonthermal properties of the scars survive for a parametrically long time in quench experiments. we present a rigorous argument that lower-bounds the thermalization time for any scar state as $t^{*} \sim O(\lambda^{-1/(1+d)})$, where $d$ is the spatial dimension of the system and $\lambda$ is the perturbation strength.
|
arxiv:1910.07669
|
the recently proposed detection transformer (detr) has established a fully end-to-end paradigm for object detection. however, detr suffers from slow training convergence, which hinders its applicability to various detection tasks. we observe that detr's slow convergence is largely attributed to the difficulty in matching object queries to relevant regions due to the unaligned semantics between object queries and encoded image features. with this observation, we design semantic-aligned-matching detr++ (sam-detr++) to accelerate detr's convergence and improve detection performance. the core of sam-detr++ is a plug-and-play module that projects object queries and encoded image features into the same feature embedding space, where each object query can be easily matched to relevant regions with similar semantics. besides, sam-detr++ searches for multiple representative keypoints and exploits their features for semantic-aligned matching with enhanced representation capacity. furthermore, sam-detr++ can effectively fuse multi-scale features in a coarse-to-fine manner on the basis of the designed semantic-aligned matching. extensive experiments show that the proposed sam-detr++ achieves superior convergence speed and competitive detection accuracy. additionally, as a plug-and-play method, sam-detr++ can complement existing detr convergence solutions with even better performance, achieving 44.8% ap with merely 12 training epochs and 49.1% ap with 50 training epochs on coco val2017 with resnet-50. codes are available at https://github.com/zhanggongjie/sam-detr.
|
arxiv:2207.14172
|
in many scenarios of a language identification task, the user will specify a small set of languages which he/she can speak instead of a large set of all possible languages. we want to model such prior knowledge into the way we train our neural networks, by replacing the commonly used softmax loss function with a novel loss function named tuplemax loss. in fact, a typical language identification system launched in north america has about 95% of users who speak no more than two languages. using the tuplemax loss, our system achieved a 2.33% error rate, a relative 39.4% improvement over the 3.85% error rate of the standard softmax loss method.
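our reading of the tuplemax idea (the exact formulation is in the paper; this sketch and its names are ours) is to average pairwise softmax losses between the target language and each other candidate the user declared, rather than normalizing over all languages:

```python
import math

# tuplemax-style loss sketch: for each non-target candidate j, compute the
# negative log-softmax of the target over the pair (target, j), then average.
def tuplemax_loss(logits, target, candidates):
    losses = []
    for j in candidates:
        if j == target:
            continue
        pair = [logits[target], logits[j]]
        m = max(pair)                                   # stabilize the log-sum-exp
        log_z = m + math.log(sum(math.exp(p - m) for p in pair))
        losses.append(log_z - logits[target])           # -log pairwise softmax
    return sum(losses) / len(losses)
```

with equal logits over a two-language candidate set the loss is $\log 2$, and it decreases monotonically as the target logit grows, which is the behavior a pairwise discrimination loss should have.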
|
arxiv:1811.12290
|
despite the influx of unprecedented-quality data from the fermi gamma-ray space telescope collected over nine years of operation, the contribution of normal star-forming galaxies to the extragalactic gamma-ray background (egrb) remains poorly constrained. different estimates are discrepant in both their underlying physical assumptions and their results. with several detections and many upper limits for the gamma-ray fluxes of nearby star-forming galaxies now available, estimates that rely on empirical scalings between gamma-ray and longer-wavelength luminosities have become possible and increasingly popular. in this paper we examine factors that can bias such estimates, including: a) possible sources of nontrivial redshift dependence; b) dependence on the choice of star-formation tracer; c) uncertainties in the slope and normalisation of empirical scalings. we find that such biases can be significant, pointing towards the need for more sophisticated models of the star-forming galaxy contribution to the gamma-ray background, implementing more, and more confident, physics in their buildup. finally, we show that there are large regions of acceptable parameter space in observational inputs that significantly overproduce the gamma-ray background, implying that the observed level of the background can yield significant constraints on models of the average cosmic gamma-ray emissivity associated with star formation.
|
arxiv:1711.11046
|
network nonlocality is an advanced study of quantum nonlocality that involves network structures beyond bell's theorem. the development of quantum networks has the potential to enable many technological applications in several quantum information processing tasks. here, we focus on the role that the independence of the measurement choices of the end parties plays in a network, and on how it can be used to enhance security in a quantum network. in both the three-party, two-source bilocal network and the four-party, three-source star network scenarios, we show a practical way to understand the relaxation of these assumptions, which can strengthen a real security protocol against an attacker attempting to breach network communications. theoretically, we prove that by relaxing the measurement independence of only one end party we can create standard network nonlocality (snn) and the stronger full network nonlocality (fnn), and obtain maximal quantum violation of the classical no-signalling local model. we distinguish between the two types of network nonlocality in the sense that fnn is stronger than snn, i.e., fnn requires all the sources in a network to distribute nonlocal resources.
|
arxiv:2405.12379
|
the distance of the very young open cluster westerlund 2, which contains the very massive binary system wr 20a and is likely associated with a tev source, has been the subject of much debate. we attempt a joint analysis of spectroscopic and photometric data of eclipsing binaries in the cluster to constrain its distance. a sample of 15 stars, including three eclipsing binaries ( msp 44, msp 96, and msp 223 ) was monitored with the flames multi - object spectrograph. the spectroscopic data are analysed together with existing bv photometry. the analysis of the three eclipsing binaries clearly supports the larger values of the distance, around 8 kpc, and rules out values of about 2. 4 - 2. 8 kpc that have been suggested in the literature. furthermore, our spectroscopic monitoring reveals no clear signature of binarity with periods shorter than 50 days in either the wn6ha star wr 20b, the early o - type stars msp 18, msp 171, msp 182, msp 183, msp 199, and msp 203, or three previously unknown mid o - type stars. the only newly identified candidate binary system is msp 167. the absence of a binary signature is especially surprising for wr 20b and msp 18, which were previously found to be bright x - ray sources. the distance of westerlund 2 is confirmed to be around 8 kpc as previously suggested based on the spectrophotometry of its population of o - type stars and the analysis of the light curve of wr 20a. our results suggest that short - period binary systems are not likely to be common, at least not among the population of o - type stars in the cluster.
|
arxiv:1109.1086
|
infectious diseases are among the most prominent threats to mankind. when preventive health care cannot be provided, a viable means of disease control is the isolation of individuals, who may be infected. to study the impact of isolation, we propose a system of delay differential equations and offer our model analysis based on the geometric theory of semi - flows. calibrating the response to an outbreak in terms of the fraction of infectious individuals isolated and the speed with which this is done, we deduce the minimum response required to curb an incipient outbreak, and predict the ensuing endemic state should the infection continue to spread.
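the paper's model is a system of delay differential equations; as a much cruder stand-in (our own construction, with made-up parameter names, not the paper's equations), a forward-euler sir-type sketch already shows how an isolation rate $q$ curbs an outbreak once $\beta < \gamma + q$:

```python
# forward-euler integration of an sir-type model with an extra isolation
# channel: infectious individuals are removed at rate q in addition to the
# recovery rate gamma. illustrative only; the paper uses delay equations.
def simulate(beta, gamma, q, days, dt=0.01):
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i          # new infections per unit time
        ds = -new_inf
        di = new_inf - gamma * i - q * i
        dr = gamma * i + q * i          # recovered or isolated
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r
```

with $\beta = 0.5$ and $\gamma = 0.2$, the no-isolation run ($q = 0$) depletes susceptibles, while $q = 1$ makes the effective reproduction number subcritical and the incipient outbreak dies out — the "minimum response" idea of the abstract in miniature.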
|
arxiv:1804.02696
|
we present the results of a study of the azimuthal characteristics of ionospheric and seismic effects of the 'chelyabinsk' meteorite, based on the data from a network of gps receivers, the coherent decameter radar ekb superdarn, and a network of seismic stations. it is shown that 6-14 minutes after the bolide explosion, the gps network observed a cone-shaped wavefront of tids that is interpreted as a ballistic acoustic wave. the typical observed tid propagation velocity was 661 ± 256 m/s, which corresponds to the expected acoustic wave speed at 240 km height. 14 minutes after the bolide explosion, at distances of 200 km, we observed the emergence and propagation of a tid with a spherical wavefront, interpreted as the gravitational mode of internal acoustic waves. the propagation velocity of this tid was 337 ± 89 m/s, which corresponds to the propagation velocity of these waves in similar situations. at the ekb superdarn radar, we observed tids in the sector of azimuthal angles close to the perpendicular to the meteorite trajectory. the observed tid velocity (400 m/s) and azimuthal properties correlate well with the model of a ballistic wave propagating at 120-140 km altitude. it is shown that the azimuthal distribution of the amplitude of vertical seismic oscillations can be described qualitatively by a model of a vertical strike-slip rupture propagating at 1 km/s along the meteorite fall trajectory to a distance of about 40 km. these parameters correspond to the direction and velocity of propagation of the ballistic wave peak along the ground. it is shown that the model of a ballistic wave caused by the supersonic motion and burning of the meteorite in the upper atmosphere can satisfactorily explain the various azimuthal ionospheric effects observed by the coherent decameter radar ekb superdarn and the gps-receiver network, as well as the azimuthal characteristics of seismic waves at large distances.
|
arxiv:1506.05863
|
in this paper we derive a list of all the possible indecomposable normalized rank-two vector bundles without intermediate cohomology on the prime fano threefolds and on the complete intersection calabi-yau threefolds, say $v$, of picard number $\rho = 1$. for any such bundle $\e$, if it exists, we find the projective invariants of the curves $c \subset v$ which are the zero-locus of general global sections of $\e$. in turn, a curve $c \subset v$ with such invariants is a section of a bundle $\e$ from our lists. this way we reduce the problem of existence of such bundles on $v$ to the problem of existence of curves with prescribed properties contained in $v$. in part of the cases in our lists the existence of such curves on the general $v$ is known, and we pose the question of existence on the general $v$ for the other types of curves from the lists.
|
arxiv:math/0103010
|
the uncanny valley phenomenon refers to the feeling of unease that arises when interacting with characters that appear almost, but not quite, human - like. first theorised by masahiro mori in 1970, it has since been widely observed in different contexts from humanoid robots to video games, in which it can result in players feeling uncomfortable or disconnected from the game, leading to a lack of immersion and potentially reducing the overall enjoyment. the phenomenon has been observed and described mostly through behavioural studies based on self - reported scales of uncanny feeling : however, there is still no consensus on its cognitive and perceptual origins, which limits our understanding of its impact on player experience. in this paper, we present a study aimed at identifying the mechanisms that trigger the uncanny response by collecting and analysing both self - reported feedback and eeg data.
|
arxiv:2306.16233
|
given a flow network, the minimum flow decomposition ( mfd ) problem is finding the smallest possible set of weighted paths whose superposition equals the flow. it is a classical, strongly np - hard problem that is proven to be useful in rna transcript assembly and applications outside of bioinformatics. we improve an existing ilp ( integer linear programming ) model by dias et al. [ recomb 2022 ] for dags by decreasing the solver ' s search space using solution safety and several other optimizations. this results in a significant speedup compared to the original ilp, of up to 55 - 90x on average on the hardest instances. moreover, we show that our optimizations apply also to mfd problem variants, resulting in similar speedups, going up to 123x on the hardest instances. we also developed an ilp model of reduced dimensionality for an mfd variant in which the solution path weights are restricted to a given set. this model can find an optimal mfd solution for most instances, and overall, its accuracy significantly outperforms that of previous greedy algorithms while being up to an order of magnitude faster than our optimized ilp.
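for context, the greedy baselines the abstract compares against can be sketched as repeatedly stripping the widest source-to-sink path from the flow (this is a standard heuristic, not the paper's ilp; the brute-force widest-path search below is only meant for toy instances):

```python
# greedy flow decomposition on a dag: repeatedly remove the widest s-t path.
# a classic heuristic baseline, not the optimized ilp of the paper.
def widest_path(flow, u, sink):
    """return (path, width) of the widest positive-flow path u -> sink, or None."""
    if u == sink:
        return [sink], float("inf")
    best = None
    for (a, b), f in flow.items():
        if a == u and f > 0:
            sub = widest_path(flow, b, sink)
            if sub is not None:
                w = min(sub[1], f)
                if best is None or w > best[1]:
                    best = ([u] + sub[0], w)
    return best

def greedy_decompose(flow, source, sink):
    """return a list of (path, weight) pairs whose superposition equals the flow."""
    flow = dict(flow)  # work on a copy
    paths = []
    while (best := widest_path(flow, source, sink)) is not None:
        path, w = best
        for edge in zip(path, path[1:]):
            flow[edge] -= w
        paths.append((path, w))
    return paths
```

on instances where greedy is suboptimal, the ilp finds a smaller path set — that accuracy gap, at much lower running time than a naive ilp, is what the abstract's optimizations target.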
|
arxiv:2311.10563
|
scattering amplitudes in a range of quantum field theories can be computed using the cachazo-he-yuan (chy) formalism. in theories with colour ordering, the key ingredient is the so-called parke-taylor factor. in this note we give a fully $\text{SL}(2,\mathbb{C})$-covariant definition and study the properties of a new integrand called the string parke-taylor factor. it has an $\alpha'$ expansion whose leading coefficient is the field-theoretic parke-taylor factor. its main application is that it leads to a chy formulation of open string tree-level amplitudes. in fact, the definition of the string parke-taylor factor was motivated by trying to extend the compact formula for the first $\alpha'$ correction found by he and zhang, while the main ingredient in its definition is a determinant of a matrix introduced in the context of string theory by stieberger and taylor.
|
arxiv:1705.10323
|
we identify a new type of periodic evolution that appears in driven quantum systems. provided that the instantaneous ( adiabatic ) energies are equidistant we show how such systems can be mapped to ( time - dependent ) tilted single - band lattice models. having established this mapping, the dynamics can be understood in terms of bloch oscillations in the instantaneous energy basis. in our lattice model the site - localized states are the adiabatic ones, and the bloch oscillations manifest as a periodic repopulation among these states, or equivalently a periodic change in the system ' s instantaneous energy. our predictions are confirmed by considering two different models : a driven harmonic oscillator and a landau - zener grid model. both models indeed show convincing, or even perfect, oscillations. to strengthen the link between our energy bloch oscillations and the original spatial bloch oscillations we add a random disorder that breaks the translational invariance of the spectrum. this verifies that the oscillating evolution breaks down and instead turns into a ballistic spreading.
|
arxiv:1808.08061
|
the paper studies several properties of laplace hyperfunctions, introduced by h.~komatsu in the one dimensional case and by the authors in the higher dimensional cases, from the viewpoint of \v{c}ech-dolbeault cohomology theory, which enables us, for example, to construct the laplace transformation and its inverse in a simple way. we also give some applications to a system of pdes with constant coefficients.
|
arxiv:2210.04226
|
although only a small portion of muscles are affected in motion diseases and disorders, medical therapies do not distinguish between healthy and unhealthy muscles. in this paper, a method is devised to calculate the neural stimuli of the lower body during the gait cycle and check whether any group of muscles is not acting properly. for this reason, an agent-based model of human muscle is proposed. the agent is able to convert neural stimuli to the force generated by the muscle and vice versa. it can be used in many areas, including medical education and research and prosthesis development. then, the boots algorithm is designed, based on a biomechanical model of the human lower body, to perform reverse dynamics of human motion by computing the forces generated by each muscle group. using the agent-driven model of human muscle and the boots algorithm, a user-friendly application is developed which can calculate the number of neural stimuli received by each muscle during the gait cycle. the application can be used by clinical experts to distinguish between healthy and unhealthy muscles.
|
arxiv:2212.12760
|
a very small fraction of (runaway) massive stars have masses exceeding $60$-$70\,\rm m_{\odot}$ and are predicted to evolve as luminous-blue-variable and wolf-rayet stars before ending their lives as core-collapse supernovae. our 2d axisymmetric hydrodynamical simulations explore how a fast wind ($2000\,\rm km\,s^{-1}$) and high mass-loss rate ($10^{-5}\,\rm m_{\odot}\,yr^{-1}$) can impact the morphology of the circumstellar medium. it is shaped as a 100 pc-scale wind nebula which can be pierced by the driving star when it moves supersonically with velocity $20$-$40\,\rm km\,s^{-1}$ through the interstellar medium (ism) in the galactic plane. the motion of such runaway stars displaces the position of the supernova explosion out of their bow shock nebula, imposing asymmetries on the eventual shock wave expansion and engendering cygnus-loop-like supernova remnants. we conclude that the size (up to more than $200\,\rm pc$) of the filamentary wind cavity, in which the chemically enriched supernova ejecta expand, mixing the wind and ism materials efficiently by at least $10\%$ in number density, can be used as a tracer of the runaway nature of the very massive progenitors of such $0.1\,\rm myr$ old remnants. our results motivate further observational campaigns devoted to the bow shock of the very massive star bd+43 3654 and to the close surroundings of the synchrotron-emitting wolf-rayet shell g2.4+1.4.
|
arxiv:2002.08268
|
cross - architecture binary similarity comparison is essential in many security applications. recently, researchers have proposed learning - based approaches to improve comparison performance. they adopted a paradigm of instruction pre - training, individual binary encoding, and distance - based similarity comparison. however, instruction embeddings pre - trained on external code corpus are not universal in diverse real - world applications. and separately encoding cross - architecture binaries will accumulate the semantic gap of instruction sets, limiting the comparison accuracy. this paper proposes a novel cross - architecture binary similarity comparison approach with multi - relational instruction association graph. we associate mono - architecture instruction tokens with context relevance and cross - architecture tokens with potential semantic correlations from different perspectives. then we exploit the relational graph convolutional network ( r - gcn ) to perform type - specific graph information propagation. our approach can bridge the gap in the cross - architecture instruction representation spaces while avoiding the external pre - training workload. we conduct extensive experiments on basic block - level and function - level datasets to prove the superiority of our approach. furthermore, evaluations on a large - scale real - world iot malware reuse function collection show that our approach is valuable for identifying malware propagated on iot devices of various architectures.
|
arxiv:2206.12236
|
in this master thesis we analyze the complexity of sorting a set of strings. the complexity of sorting strings can be naturally expressed in terms of the prefix trie induced by the set of strings, in a model of computation that accounts for symbol comparisons and not just comparisons between whole strings. upper and lower bounds in this model have previously been analyzed for classical algorithms such as quicksort and mergesort. here we extend the analysis to another classical algorithm, heapsort. we also give an analysis for the version of the algorithm that uses binomial heaps as the heap implementation.
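the symbol-comparison cost model can be made concrete with a wrapper that counts inspected symbols rather than whole-string comparisons — a toy sketch of the model, not the thesis' analysis (the counting convention below is our simplification):

```python
import heapq

# wrap strings so that each __lt__ call counts the symbols examined up to
# (and including) the first mismatch; heapsort then runs under a
# symbol-comparison cost model rather than a string-comparison one.
class CountingStr:
    symbol_comparisons = 0

    def __init__(self, s):
        self.s = s

    def __lt__(self, other):
        i = 0
        while i < len(self.s) and i < len(other.s) and self.s[i] == other.s[i]:
            i += 1
        CountingStr.symbol_comparisons += i + 1  # symbols inspected this call
        return self.s < other.s

def heapsort_strings(strings):
    """sort strings via a binary heap, charging symbol comparisons."""
    heap = [CountingStr(s) for s in strings]
    heapq.heapify(heap)
    return [heapq.heappop(heap).s for _ in range(len(heap))]
```

sets of strings sharing long common prefixes drive the symbol-comparison count up without changing the number of string comparisons, which is exactly why the prefix trie is the natural yardstick.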
|
arxiv:1408.5422
|
given a regular weight $\omega$ and a positive borel measure $\mu$ on the unit disc $\mathbb{d}$, the toeplitz operator associated with $\mu$ is $$\mathcal{t}_\mu(f)(z) = \int_{\mathbb{d}} f(\zeta) \bar{b_z^\omega(\zeta)} \, d\mu(\zeta),$$ where $b^\omega_{z}$ are the reproducing kernels of the weighted bergman space $a^2_\omega$. we describe bounded and compact toeplitz operators $\mathcal{t}_\mu : a^p_\omega \to a^q_\omega$, $1 < q, p < \infty$, in terms of carleson measures and the berezin transform $$\widetilde{\mathcal{t}_\mu}(z) = \frac{\langle \mathcal{t}_\mu(b^\omega_{z}), b^\omega_{z} \rangle_{a^2_\omega}}{\|b_z^\omega\|^2_{a^2_\omega}}.$$ we also characterize schatten class toeplitz operators in terms of the berezin transform and apply this result to study schatten class composition operators.
|
arxiv:1607.04394
|
this paper presents categorization of croatian texts using non - standard words ( nsw ) as features. non - standard words are : numbers, dates, acronyms, abbreviations, currency, etc. nsws in croatian language are determined according to croatian nsw taxonomy. for the purpose of this research, 390 text documents were collected and formed the skipez collection with 6 classes : official, literary, informative, popular, educational and scientific. text categorization experiment was conducted on three different representations of the skipez collection : in the first representation, the frequencies of nsws are used as features ; in the second representation, the statistic measures of nsws ( variance, coefficient of variation, standard deviation, etc. ) are used as features ; while the third representation combines the first two feature sets. naive bayes, cn2, c4. 5, knn, classification trees and random forest algorithms were used in text categorization experiments. the best categorization results are achieved using the first feature set ( nsw frequencies ) with the categorization accuracy of 87 %. this suggests that the nsws should be considered as features in highly inflectional languages, such as croatian. nsw based features reduce the dimensionality of the feature space without standard lemmatization procedures, and therefore the bag - of - nsws should be considered for further croatian texts categorization experiments.
|
arxiv:1408.6746
|
developing aerial robots that can both safely navigate and execute assigned mission without any human intervention - i. e., fully autonomous aerial mobility of passengers and goods - is the larger vision that guides the research, design, and development efforts in the aerial autonomy space. however, it is highly challenging to concurrently operationalize all types of aerial vehicles that are operating fully autonomously sharing the airspace. full autonomy of the aerial transportation sector includes several aspects, such as design of the technology that powers the vehicles, operations of multi - agent fleets, and process of certification that meets stringent safety requirements of aviation sector. thereby, autonomous advanced aerial mobility is still a vague term and its consequences for researchers and professionals are ambiguous. to address this gap, we present a comprehensive perspective on the emerging field of autonomous advanced aerial mobility, which involves the use of unmanned aerial vehicles ( uavs ) and electric vertical takeoff and landing ( evtol ) aircraft for various applications, such as urban air mobility, package delivery, and surveillance. the article proposes a scalable and extensible autonomy framework consisting of four main blocks : sensing, perception, planning, and controls. furthermore, the article discusses the challenges and opportunities in multi - agent fleet operations and management, as well as the testing, validation, and certification aspects of autonomous aerial systems. finally, the article explores the potential of monolithic models for aerial autonomy and analyzes their advantages and limitations. the perspective aims to provide a holistic picture of the autonomous advanced aerial mobility field and its future directions.
|
arxiv:2311.04472
|
Control barrier functions (CBFs) have been widely used for synthesizing controllers in safety-critical applications. When used as a safety filter, a CBF provides a simple and computationally efficient way to obtain safe controls from a possibly unsafe performance controller. Despite its conceptual simplicity, constructing a valid CBF is well known to be challenging, especially for high-relative-degree systems under nonconvex constraints. Recently, work has been done to learn a valid CBF from data based on a handcrafted CBF (HCBF). Even though the HCBF gives a good initialization point, it still requires a large amount of data to train the CBF network. In this work, we propose a new method to learn more efficiently from the collected data through a novel prioritized data sampling strategy. A priority score is computed from the loss value of each data point. Then, a probability distribution based on the priority scores of the data points is used to sample data and update the learned CBF. Using our proposed approach, we can learn a valid CBF that recovers a larger portion of the true safe set using a smaller amount of data. The effectiveness of our method is demonstrated in simulation on a unicycle and a two-link arm.
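The sampling step described above can be sketched generically: losses become priority scores, scores become a probability distribution, and batches are drawn from it. The exact score form and any tempering exponent are assumptions here, not taken from the paper.

```python
import random

def priority_probs(losses, alpha=1.0, eps=1e-6):
    """Turn per-sample losses into a sampling distribution.

    Higher-loss points get higher priority; `alpha` tempers the bias
    and `eps` keeps zero-loss points sampleable. The exact score used
    in the paper may differ; this is the generic prioritized form.
    """
    scores = [(abs(l) + eps) ** alpha for l in losses]
    total = sum(scores)
    return [s / total for s in scores]

def sample_batch(data, losses, k, rng=random):
    """Draw k points with replacement, weighted by priority."""
    probs = priority_probs(losses)
    idx = rng.choices(range(len(data)), weights=probs, k=k)
    return [data[i] for i in idx]

losses = [0.01, 0.5, 2.0, 0.1]
probs = priority_probs(losses)
batch = sample_batch(["a", "b", "c", "d"], losses, k=3)
```

High-loss points (here the third one) dominate the distribution, so the learned CBF is refined where it currently fits worst, which is the intuition behind needing less data overall.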
|
arxiv:2303.05973
|
We use flux-transmission correlations in Ly$\alpha$ forests to measure the imprint of baryon acoustic oscillations (BAO). The study uses spectra of 157,783 quasars in the redshift range $2.1 \le z \le 3.5$ from the Sloan Digital Sky Survey (SDSS) data release 12 (DR12). Besides the statistical improvements over our previous studies using SDSS DR9 and DR11, we have implemented numerous improvements in the analysis procedure, allowing us to construct a physical model of the correlation function and to investigate potential systematic errors in the determination of the BAO peak position. The Hubble distance, $D_H = c/H(z)$, relative to the sound horizon is $D_H(z=2.33)/r_d = 9.07 \pm 0.31$. The best-determined combination of the comoving angular-diameter distance, $D_M$, and the Hubble distance is found to be $D_H^{0.7} D_M^{0.3}/r_d = 13.94 \pm 0.35$. This value is $1.028 \pm 0.026$ times the prediction of the flat-$\Lambda$CDM model consistent with the cosmic microwave background (CMB) anisotropy spectrum. The errors include marginalization over the effects of unidentified high-density absorption systems and fluctuations in ultraviolet ionizing radiation. Independently of the CMB measurements, the combination of our results and other BAO observations determines the open-$\Lambda$CDM density parameters to be $\Omega_m = 0.296 \pm 0.029$, $\Omega_\Lambda = 0.699 \pm 0.100$ and $\Omega_k = -0.002 \pm 0.119$.
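As a purely arithmetic illustration of how the two quoted central values relate, one can back out the implied $D_M/r_d$ from $D_H/r_d$ and the combined measurement; errors and their correlations are ignored, so the derived number is only indicative.

```python
# Back out the implied D_M/r_d from the two quoted central values:
# D_H/r_d = 9.07 and D_H^0.7 * D_M^0.3 / r_d = 13.94. Uncertainties and
# their correlations are ignored; this is only an arithmetic illustration.
dh_rd = 9.07
combo = 13.94

dm_rd = (combo / dh_rd**0.7) ** (1 / 0.3)

# Reconstructing the combination from the derived value must return 13.94.
check = dh_rd**0.7 * dm_rd**0.3
```

The derived $D_M/r_d \approx 38$ at $z = 2.33$ shows why the paper quotes the $D_H^{0.7} D_M^{0.3}$ combination: it is the direction in the $(D_H, D_M)$ plane that the correlation data constrain best.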
|
arxiv:1702.00176
|
Material realization of the non-Abelian Kitaev spin liquid phase - an example of Ising topological order (ITO) - has been the subject of intense research in recent years. The $4d$ honeycomb Mott insulator $\alpha$-RuCl$_3$ has emerged as a leading candidate, as it enters a field-induced magnetically disordered state where a half-integer quantized thermal Hall conductivity $\kappa_{xy}$ was reported. Further, a recent report of a sign change in the quantized $\kappa_{xy}$ across a certain crystallographic direction is strong evidence for a topological phase transition between two ITOs with opposite Chern numbers. Although this is a fascinating result, independent verification remains elusive, and one may ask if there is a thermodynamic quantity sensitive to the phase transition. Here we propose that the magnetotropic coefficient $k$ under in-plane magnetic fields would serve such a purpose. We report a singular feature in $k$ that indicates a topological phase transition across the $\hat{b}$-axis, where ITO is prohibited by a $C_2$ symmetry. If the transition in $\alpha$-RuCl$_3$ is indeed a direct transition between ITOs, then this feature in $k$ should be observable.
|
arxiv:2004.13723
|
A variety of new physics models allows for neutrinos to up-scatter into heavier states. If the incident neutrino is energetic enough, the heavy neutrino may travel some distance before decaying. In this work, we consider the atmospheric neutrino flux as a source of such events. At IceCube, this would lead to a "double-bang" (DB) event topology, similar to what is predicted to occur for tau neutrinos at ultra-high energies. The DB event topology has an extremely low background rate from coincident atmospheric cascades, making this a distinctive signature of new physics. Our results indicate that IceCube should already be able to derive new competitive constraints on models with GeV-scale sterile neutrinos using existing data.
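The separation between the two "bangs" is set by the boosted decay length $L = \gamma\beta c\tau$. The sketch below evaluates it for illustrative placeholder values of mass, energy, and lifetime; none of these numbers are taken from the paper.

```python
import math

C = 2.998e8  # speed of light, m/s

def decay_length(E_GeV, m_GeV, tau_s):
    """Lab-frame decay length L = gamma * beta * c * tau of a boosted
    heavy state with energy E, mass m, and proper lifetime tau."""
    gamma = E_GeV / m_GeV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return gamma * beta * C * tau_s   # metres

# A 1 GeV state carrying 100 GeV with a 10 ns proper lifetime travels
# a few hundred metres before decaying, a scale resolvable inside the
# roughly kilometre-sized IceCube detector. (Placeholder values.)
L = decay_length(E_GeV=100.0, m_GeV=1.0, tau_s=1e-8)
```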
|
arxiv:1707.08573
|
Despite decades of study, many unknowns remain about the mechanisms governing human locomotion. Current models and motor control theories can only partially capture the phenomenon, which may be a major cause of the reduced efficacy of lower-limb rehabilitation therapies. Recently, it has been proposed that human locomotion can be planned in the task space by taking advantage of the gravitational pull acting on the centre of mass (CoM), modelling the attractor dynamics. The proposed model represents the CoM transversal trajectory as a harmonic oscillator propagating on the attractor manifold. However, the vertical trajectory of the CoM, controlled through ankle strategies, has not yet been accurately captured. Research question: is it possible to improve the model accuracy by introducing a mathematical model of the ankle strategies that coordinates the heel-strike and toe-off strategies with the CoM movement? Our solution consists of closed-form equations that plan human-like trajectories for the CoM, the foot swing, and the ankle strategies. We tested our model by extracting biomechanics and posture data during locomotion from the motion capture trajectories of 12 healthy subjects at 3 self-selected speeds, generating a virtual subject based on the average of the collected data. The model output shows that the walking trajectories of the virtual subject have features consistent with our motion capture data. Additionally, the data analysis shows that our model regulates the stance phase of the foot as humans do. The model demonstrates that locomotion can be modelled as attractor dynamics, supporting the existence of a nonlinear map that our nervous system learns. It can support a deeper investigation of locomotion motor control, potentially improving locomotion rehabilitation and assistive technologies.
|
arxiv:1802.03498
|
We study the congeniality property of algebras, as defined by Bao, He, and Zhang, in order to establish a version of Auslander's theorem for various families of filtered algebras. It is shown that the property is preserved under homomorphic images and tensor products under some mild conditions. Examples of congenial algebras in this paper include enveloping algebras of Lie superalgebras, iterated differential operator rings, quantized Weyl algebras, down-up algebras, and symplectic reflection algebras.
|
arxiv:1808.09003
|
We present an implementation of alchemical free energy simulations at the quantum mechanical level by directly interpolating the electronic Hamiltonian. The method is compatible with any level of electronic structure theory and requires only one quantum calculation for each molecular dynamics step, in contrast to the multiple energy evaluations that would be needed when interpolating the ground-state energies. We demonstrate the correctness and applicability of the technique by computing alchemical free energy changes of gas-phase molecules, with both nuclear and electron creation/annihilation. We also show an initial application to first-principles pKa calculation for solvated molecules, where we quantum mechanically annihilate a bonded proton.
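The structure of alchemical interpolation, $H(\lambda) = (1-\lambda)H_A + \lambda H_B$ with thermodynamic integration of $\langle \partial H/\partial\lambda \rangle$, can be illustrated with a classical toy system where every average is analytic. This is not the paper's quantum-mechanical scheme, only the same integration structure on a 1D harmonic well.

```python
import math

# Toy alchemical Hamiltonian interpolation on U(x; k) = k x^2 / 2,
# interpolating the spring constant from kA to kB. Classical and 1D,
# so the thermodynamic-integration integrand is available in closed form.
kT = 1.0
kA, kB = 1.0, 4.0

def dF_dlam(lam):
    # dH/dlam = (kB - kA) x^2 / 2, and <x^2> = kT / k(lam) for a
    # harmonic well, giving the TI integrand exactly.
    k = (1 - lam) * kA + lam * kB
    return 0.5 * (kB - kA) * kT / k

# Trapezoidal integration of <dH/dlam> over lambda in [0, 1].
n = 1000
dF = sum(0.5 * (dF_dlam(i / n) + dF_dlam((i + 1) / n)) / n
         for i in range(n))

# Analytic configurational free energy difference: (kT/2) ln(kB/kA).
exact = 0.5 * kT * math.log(kB / kA)
```

In the paper's method the role of `dF_dlam` is played by a single quantum calculation per MD step on the interpolated electronic Hamiltonian, which is what avoids evaluating both endpoint energies at every step.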
|
arxiv:2408.17002
|
Ground-penetrating radar mounted on a micro aerial vehicle (MAV) is a promising tool to assist humanitarian landmine clearance. However, the quality of synthetic aperture radar images depends on accurate and precise motion estimation of the radar antennas as well as generating informative viewpoints with the MAV. This paper presents a complete and automatic airborne ground-penetrating synthetic aperture radar (GPSAR) system. The system consists of a spatially calibrated and temporally synchronized industrial-grade sensor suite that enables navigation above ground level, radar imaging, and optical imaging. A custom mission planning framework allows generation and automatic execution of stripmap and circular (GPSAR) trajectories controlled above ground level, as well as aerial imaging survey flights. A factor-graph-based state estimator fuses measurements from a dual-receiver real-time kinematic (RTK) global navigation satellite system (GNSS) and an inertial measurement unit (IMU) to obtain precise, high-rate platform positions and orientations. Ground truth experiments showed sensor timing as accurate as 0.8 us and as precise as 0.1 us, with localization rates of 1 kHz. The dual-position-factor formulation improves online localization accuracy by up to 40% and batch localization accuracy by up to 59% compared to a single position factor with uncertain heading initialization. Our field trials validated a localization accuracy and precision that enables coherent radar measurement addition and detection of radar targets buried in sand. This validates the potential as an aerial landmine detection system.
|
arxiv:2106.10108
|
Several different "hat games" have recently received a fair amount of attention. Typically, in a hat game, one or more players are required to correctly guess their hat colour when given some information about other players' hat colours. Some versions of these games have been motivated by research in complexity theory and have ties to well-known research problems in coding theory, and some variations have led to interesting new research. In this paper, we review Ebert's hat game, which garnered a considerable amount of publicity in the late 90's and early 00's, and the hats-on-a-line game. Then we introduce a new hat game which is a "hybrid" of these two games and provide an optimal strategy for playing the new game. The optimal strategy is quite simple, but the proof involves an interesting combinatorial argument.
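For concreteness, the classic optimal strategy for Ebert's three-player game can be verified by brute force: each player who sees two matching hats guesses the opposite colour, and everyone else passes. The team wins whenever someone guesses and no guess is wrong, which happens for 6 of the 8 colour assignments.

```python
from itertools import product

def guesses(hats):
    """Each player sees the other two hats; if those match, guess the
    opposite colour, otherwise pass (None). Colours are 0/1."""
    out = []
    for i in range(3):
        others = [hats[j] for j in range(3) if j != i]
        out.append(1 - others[0] if others[0] == others[1] else None)
    return out

def team_wins(hats):
    g = guesses(hats)
    someone_guessed = any(x is not None for x in g)
    no_wrong = all(x is None or x == hats[i] for i, x in enumerate(g))
    return someone_guessed and no_wrong

# Enumerate all 2^3 hat assignments; the strategy loses only on
# 000 and 111, winning 6 of 8 (a 75% success rate).
wins = sum(team_wins(h) for h in product([0, 1], repeat=3))
```

The 75% rate beats the naive 50% because wrong guesses are concentrated onto the two all-same assignments, the same covering-code idea that generalizes via Hamming codes to $2^k - 1$ players.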
|
arxiv:1001.3850
|