text | source
---|---|
Currently, the world has been facing the brunt of a pandemic caused by the disease COVID-19 for the last two years. To study the spread of such infectious diseases it is important to understand not only their temporal evolution but also their spatial evolution. In this work, the spread of this disease has been studied with a cellular automata (CA) model to find its temporal and spatial behavior. Here, we have proposed a neighborhood criterion which helps us to measure the social confinement at the time of the disease spread. The two main parameters of our model are (i) the disease transmission probability (q), which measures the infectivity of the disease, and (ii) the exponent (n), which measures the degree of social confinement. We have studied various spatial growths of the disease by simulating this CA model. Finally, we have fitted our model to the COVID-19 data of India for the various waves and matched the model predictions for each wave to see how the different parameters vary with respect to infectivity and restrictions on social interaction.
|
arxiv:2307.16423
|
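The CA model above is described only at the level of its two parameters. As a rough sketch of the general idea, the following toy Python implements an SIR-style probabilistic cellular automaton driven by a transmission probability q; the paper's specific neighborhood criterion and confinement exponent n are not reproduced here, and the plain von Neumann neighborhood and one-step recovery are illustrative assumptions.

```python
import random

def step(grid, q):
    """One synchronous update of a toy probabilistic cellular automaton for
    epidemic spread: 0 = susceptible, 1 = infected, 2 = recovered.
    Each susceptible cell becomes infected with probability q per infected
    von Neumann neighbor; infected cells recover after one step."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 0:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = (i + di) % n, (j + dj) % n  # periodic boundary
                    if grid[ni][nj] == 1 and random.random() < q:
                        new[i][j] = 1
                        break
            elif grid[i][j] == 1:
                new[i][j] = 2
    return new

random.seed(0)
N = 21
grid = [[0] * N for _ in range(N)]
grid[N // 2][N // 2] = 1  # single seed infection at the center
for _ in range(30):
    grid = step(grid, q=0.6)
total_ever_infected = sum(cell != 0 for row in grid for cell in row)
print(total_ever_infected)
```

Varying q then changes the speed and eventual extent of the spatial growth, which is the kind of behavior the abstract reports studying.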
Using real road testing to optimize autonomous driving algorithms is time-consuming and capital-intensive. To solve this problem, we propose a GAN-based model that is capable of generating high-quality images across different domains. We further leverage contrastive learning to train the model in a self-supervised way, using image data acquired in the real world with real sensors and simulated images from 3D games. In this paper, we also apply an attention mechanism module to emphasize features that contain more information about the source domain, according to their measure of significance. Finally, the generated images are used as datasets to train neural networks on a variety of downstream tasks, verifying that the approach can fill the gap between the virtual and real worlds.
|
arxiv:2302.12052
|
The possible quantum Hall ferromagnet at filling factor $\nu = 0$ is investigated for the zero-energy ($n = 0$) Landau level of two-dimensional massless Dirac fermions in $\alpha$-(BEDT-TTF)$_2$I$_3$ under pressure, with tilted cones and a twofold valley degeneracy resulting from time-reversal symmetry. In the case of Dirac cones without tilting, the long-range Coulomb interaction in the $n = 0$ Landau level exhibits SU(2) valley-pseudospin symmetry even to order $O(a/l_{\rm H})$, in contrast to the $n \ne 0$ Landau levels, where $a$ and $l_{\rm H}$ represent the lattice constant and the magnetic length, respectively. This characteristic comes from the fact that zero-energy states in a particular valley are restricted to only one of the spinor components, whereas the other spinor component is necessarily zero. In the case of tilted Dirac cones, as found in $\alpha$-(BEDT-TTF)$_2$I$_3$, one obtains a non-zero value of the second component, and the backscattering processes between valleys become non-zero. It is shown that this fact can lead to easy-plane (XY-type) pseudospin ferromagnetism. In this case, the phase fluctuations of the order parameters can be described by the XY model, leading to a Kosterlitz-Thouless transition at low temperature. In view of these theoretical results, experimental findings on the resistivity of $\alpha$-(BEDT-TTF)$_2$I$_3$ are discussed.
|
arxiv:0907.1160
|
The q-exponential distributions, which are generalizations of the Zipf-Mandelbrot power-law distribution, are frequently encountered in complex systems at their stationary states. From the viewpoint of the principle of maximum entropy, they can apparently be derived from three different generalized entropies: the Rényi entropy, the Tsallis entropy, and the normalized Tsallis entropy. Accordingly, mere fitting of observed data by the q-exponential distributions does not lead to identification of the correct physical entropy. Here, the stabilities of these entropies, i.e., their behaviors under arbitrary small deformations of a distribution, are examined. It is shown that, among the three, the Tsallis entropy is stable and can provide an entropic basis for the q-exponential distributions, whereas the others are unstable and cannot represent any experimentally observable quantities.
|
arxiv:cond-mat/0206078
|
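For concreteness, the q-exponential function and the Tsallis entropy discussed above can be written down directly. This is a sketch of the standard textbook formulas, not code from the paper:

```python
import math

def q_exponential(x, q):
    """Generalized exponential e_q(x) = [1 + (1-q)x]^(1/(1-q)) with the usual
    cutoff at zero when the bracket is negative; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def tsallis_entropy(p, q):
    """S_q = (1 - sum_i p_i^q) / (q - 1); recovers the Shannon entropy
    (in nats) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.25]
print(tsallis_entropy(p, 1.0))  # Shannon limit
print(tsallis_entropy(p, 2.0))  # (1 - sum p_i^2) / 1 = 0.625
```

Maximizing S_q subject to appropriate constraints yields q-exponential stationary distributions, which is why fits alone cannot distinguish the three candidate entropies.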
We employ classical and ring-polymer molecular dynamics simulations to study the effect of nuclear quantum fluctuations on the structure and the water exchange dynamics of aqueous solutions of lithium and fluoride ions. While we obtain reasonably good agreement with experimental data for solutions of lithium by augmenting the Coulombic interactions between the ion and the water molecules with a standard Lennard-Jones ion-oxygen potential, the same is not true for solutions of fluoride, for which we find that a potential with a softer repulsive wall gives much better agreement. A small degree of destabilization of the first hydration shell is found in quantum simulations of both ions when compared with classical simulations, with the shell becoming less sharply defined and the mean residence time of the water molecules in the shell decreasing. In line with these modest differences, we find that the mechanisms of the exchange processes are unaffected by quantization, so a classical description of these reactions gives qualitatively correct and quantitatively reasonable results. We also find that the quantum effects in solutions of lithium are larger than in solutions of fluoride. This is partly due to the stronger interaction of lithium with water molecules, partly due to the lighter mass of lithium, and partly due to competing quantum effects in the hydration of fluoride, which are absent in the hydration of lithium.
|
arxiv:1501.06592
|
homology group and the (n − k)-th cohomology group. = = = Duality in algebraic and arithmetic geometry = = = The same duality pattern holds for a smooth projective variety over a separably closed field, using l-adic cohomology with $\mathbf{Q}_\ell$-coefficients instead. This is further generalized to possibly singular varieties, using intersection cohomology instead, a duality called Verdier duality. Serre duality or coherent duality is similar to the statements above, but applies to cohomology of coherent sheaves instead. With increasing level of generality, it turns out, an increasing amount of technical background is helpful or necessary to understand these theorems: the modern formulation of these dualities can be done using derived categories and certain direct and inverse image functors of sheaves (with respect to the classical analytic topology on manifolds for Poincaré duality, l-adic sheaves and the étale topology in the second case, and with respect to coherent sheaves for coherent duality). Yet another group of similar duality statements is encountered in arithmetic: étale cohomology of finite, local and global fields (also known as Galois cohomology, since étale cohomology over a field is equivalent to group cohomology of the (absolute) Galois group of the field) admits similar pairings. The absolute Galois group G(F_q) of a finite field, for example, is isomorphic to $\widehat{\mathbf{Z}}$, the profinite completion of Z, the integers. Therefore, the perfect pairing (for any G-module M) $H^n(G, M) \times H^{1-n}(G, \mathrm{Hom}(M, \mathbf{Q}/\mathbf{Z})) \to \mathbf{Q}/\mathbf{Z}$ is a direct consequence of Pontryagin duality of finite groups. For local and global fields, similar statements exist (local duality and global or Poitou-Tate duality). = = See also = = = = Notes = = = = References = = = = = Duality in general = = = Atiyah, Michael (2007). "Duality in mathematics and physics: lecture notes from the Institut de Matemàtica de la Universitat de Barcelona (IMUB)" (PDF). Kostrikin, A. I. (2001) [1994], "Duality", Encyclopedia of Mathematics, EMS Press.
|
https://en.wikipedia.org/wiki/Duality_(mathematics)
|
Conductance switching has been reported in many molecular junction devices, but in most cases has not been convincingly explained. We investigate conductance switching in Pt / stearic acid monolayer / Ti devices using pressure-modulated conductance microscopy. For devices with conductance $G \gg G_Q$ or $G \ll G_Q$, where $G_Q = 2e^2/h$ is the conductance quantum, pressure-induced conductance peaks < 30 nm in diameter are observed, indicating the formation of nanoscale conducting pathways between the electrodes. For devices with $G \sim 1$-$2\,G_Q$, in addition to conductance peaks we also observed conductance dips and oscillations in response to localized pressure. These results can be modeled by considering interfering electron waves along a quantum conductance channel between two partially transmitting electrode surfaces. Our findings underscore the possible use of these devices as atomic-scale switches.
|
arxiv:cond-mat/0703259
|
It is now well known that a combined analysis of Sunyaev-Zel'dovich (SZ) effect and X-ray emission observations can be used to determine the angular diameter distance to galaxy clusters, from which the Hubble constant is derived. Given that the SZ/X-ray Hubble constant is determined through a geometrical description of clusters, the accuracy to which such distance measurements can be made depends on how well one can describe intrinsic cluster shapes. Using the observed X-ray isophotal axial ratio distribution for a sample of galaxy clusters, we discuss intrinsic cluster shapes and, in particular, whether clusters can be described by axisymmetric models, such as oblate and prolate ellipsoids. These models are currently favored when determining the SZ/X-ray Hubble constant. We show that the current observational data on the asphericity of galaxy clusters suggest that clusters are more consistent with a prolate rather than an oblate distribution. We address the possibility that clusters are intrinsically triaxial by viewing triaxial ellipsoids at random angles, with the intrinsic axial ratios following an isotropic Gaussian distribution. We discuss implications of our results for current attempts at measuring the Hubble constant using galaxy clusters and suggest that an unbiased estimate of the Hubble constant, not fundamentally limited by projection effects, would eventually be possible with the SZ/X-ray method.
|
arxiv:astro-ph/9905094
|
I discuss the comparison of current theoretical calculations of epsilon'/epsilon with the experimental data. Lacking reliable "first-principles" calculations, phenomenological approaches may help in understanding correlations among the different contributions and the available experimental data. In particular, in the chiral quark model approach, the same dynamics which underlies the Delta I = 1/2 selection rule in kaon decays appears to enhance the K -> pi pi matrix element of the Q_6 gluonic penguin, thus driving epsilon'/epsilon into the range of the recent experimental measurements.
|
arxiv:hep-ph/9908268
|
Mike Lockwood and Mathew Owens discuss how eclipse observations are aiding the development of a climatology of near-Earth space.
|
arxiv:2105.12559
|
We consider a non-homogeneous Gompertz diffusion process whose parameters are modified by generally time-dependent exogenous factors included in the infinitesimal moments. The proposed model is able to describe tumor dynamics under the effect of anti-proliferative and/or cell-death-induced therapies. We assume that such therapies can also modify the infinitesimal variance of the diffusion process. An estimation procedure, based on a control group and two treated groups, is proposed to infer the model by estimating the constant parameters and the time-dependent terms. Moreover, several concatenated hypothesis tests are considered in order to confirm or reject the need to include time-dependent functions in the infinitesimal moments. Simulations are provided to evaluate the efficiency of the suggested procedures and to validate the hypothesis tests. Finally, an application to real data is considered.
|
arxiv:2401.15382
|
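The abstract does not give the explicit infinitesimal moments. As an illustrative assumption only, the following Euler-Maruyama sketch uses a common Gompertz-type diffusion with a time-dependent therapy term c(t) subtracted from the drift:

```python
import math
import random

def simulate_gompertz(x0, alpha, beta, sigma, c=lambda t: 0.0,
                      T=10.0, n=1000, rng=None):
    """Euler-Maruyama path of a Gompertz-type diffusion (assumed form)
        dX_t = X_t * (alpha - c(t) - beta * ln X_t) dt + sigma * X_t dW_t,
    where c(t) models a time-dependent (therapy) exogenous factor.
    Returns the value at time T."""
    rng = rng or random.Random(0)
    dt = T / n
    x = x0
    for i in range(n):
        t = i * dt
        drift = x * (alpha - c(t) - beta * math.log(x))
        x += drift * dt + sigma * x * rng.gauss(0.0, math.sqrt(dt))
        x = max(x, 1e-12)  # keep the path strictly positive
    return x

# Untreated (c = 0) vs. constantly treated (c = 0.5) endpoints:
print(simulate_gompertz(1.0, 1.0, 1.0, 0.05))
print(simulate_gompertz(1.0, 1.0, 1.0, 0.05, c=lambda t: 0.5))
```

With sigma = 0 the path converges to the carrying capacity exp((alpha - c)/beta), so a constant therapy term lowers the plateau, which is the qualitative behavior the model is meant to capture.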
A class K of structures is controlled if, for all cardinals $\lambda$, the relation of $L_{\infty, \lambda}$-equivalence partitions K into a set of equivalence classes (as opposed to a proper class). We prove that no pseudo-elementary class with the independence property is controlled. By contrast, there is a pseudo-elementary class with the strict order property that is controlled.
|
arxiv:math/0303345
|
Loneliness has been associated with negative outcomes for physical and mental health. Understanding how people express and cope with various forms of loneliness is critical for early screening and targeted interventions to reduce loneliness, particularly among vulnerable groups such as young adults. To examine how different forms of loneliness and coping strategies manifest in loneliness self-disclosure, we built a dataset, FIG-Loneliness (FIne-Grained Loneliness), using Reddit posts in two young-adult-focused forums and two loneliness-related forums covering a diverse age group. We provided annotations by trained human annotators for binary and fine-grained loneliness classifications of the posts. Trained on FIG-Loneliness, two BERT-based models were used to understand loneliness forms and authors' coping strategies in these forums. Our binary loneliness classification achieved an accuracy above 97%, and fine-grained loneliness category classification reached an average accuracy of 77% across all labeled categories. With FIG-Loneliness and the model predictions, we found that loneliness expressions in the young-adult-related forums were distinct from those in the other forums. Posts in young-adult-focused forums were more likely to express concerns pertaining to peer relationships, and were potentially more sensitive to the geographical isolation caused by the COVID-19 pandemic lockdown. We also showed that different forms of loneliness differ in their use of coping strategies.
|
arxiv:2201.07423
|
Using soft X-ray absorption spectroscopy and magnetic circular dichroism at the Co $L_{2,3}$ edge, we reveal that the spin-state transition in LaCoO$_3$ can be well described by a low-spin ground state and a triply degenerate high-spin first excited state. From the temperature dependence of the spectral lineshapes we find that LaCoO$_3$ at finite temperatures is an inhomogeneous mixed-spin-state system. Crucially, the magnetic circular dichroism signal in the paramagnetic state carries a large orbital momentum. This directly shows that the currently accepted low-/intermediate-spin picture is at variance with the data. Parameters derived from these spectroscopies fully explain existing magnetic susceptibility, electron spin resonance and inelastic neutron data.
|
arxiv:cond-mat/0610457
|
Generative approaches have recently been shown to be effective for both entity disambiguation and entity linking (EL; i.e., joint mention detection and disambiguation). However, the previously proposed autoregressive formulation for EL suffers from (i) high computational cost due to a complex (deep) decoder, (ii) non-parallelizable decoding that scales with the source sequence length, and (iii) the need for training on a large amount of data. In this work, we propose a very efficient approach that parallelizes autoregressive linking across all potential mentions and relies on a shallow and efficient decoder. Moreover, we augment the generative objective with an extra discriminative component, i.e., a correction term which lets us directly optimize the generator's ranking. Taken together, these techniques tackle all the above issues: our model is >70 times faster and more accurate than the previous generative method, outperforming state-of-the-art approaches on the standard English dataset AIDA-CoNLL. Source code is available at https://github.com/nicola-decao/efficient-autoregressive-el
|
arxiv:2109.03792
|
The effects of light, long-lived gluinos on $2 \to 2$ processes at hadron colliders are examined. Such particles can mediate single-squark resonant production via $q\tilde{g} \to \tilde{q} \to q\tilde{g}$, which would significantly modify the dijet data sample. We find that squark masses in the range $130 < m_{\tilde q} < 694, 595, 573$ GeV are excluded for gluino masses of $0.4, 1.3, 5.0$ GeV from existing UA2 and Tevatron data on dijet bump searches and angular distributions. Run II of the Tevatron has the capability of excluding this scenario for squark masses up to $\sim 1$ TeV.
|
arxiv:hep-ph/9612377
|
We prove that the Landau-Lifshitz-Gilbert equation in three space dimensions with homogeneous Neumann boundary conditions admits arbitrarily smooth solutions, given that the initial data is sufficiently close to a constant function.
|
arxiv:1606.00086
|
This document presents an in-depth examination of stock market sentiment through the integration of convolutional neural networks (CNN) and gated recurrent units (GRU), enabling precise risk alerts. The robust feature extraction capability of the CNN is utilized to preprocess and analyze extensive network text data, identifying local features and patterns. The extracted feature sequences are then input into the GRU model to capture the progression of emotional states over time and their potential impact on future market sentiment and risk. This approach addresses the order dependence and long-term dependencies inherent in time series data, resulting in a detailed analysis of stock market sentiment and effective early warnings of future risks.
|
arxiv:2412.10199
|
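As a minimal NumPy sketch of the CNN-then-GRU pipeline described above (this is not the paper's architecture; the layer sizes, random weights, and scalar risk head are illustrative assumptions), the two stages compose as:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution over a (seq_len, in_dim) sequence followed by ReLU.
    kernels: (n_filters, width, in_dim) -> output (seq_len - width + 1, n_filters).
    This is the 'local feature extraction' (CNN) stage."""
    width = kernels.shape[1]
    steps = x.shape[0] - width + 1
    out = np.stack([np.tensordot(x[t:t + width], kernels, axes=([0, 1], [1, 2]))
                    for t in range(steps)])
    return np.maximum(out, 0.0)

def gru_last_state(seq, Wz, Uz, Wr, Ur, Wh, Uh):
    """Run a GRU over a (steps, in_dim) sequence and return the final hidden
    state; this is the 'temporal progression' (GRU) stage."""
    h = np.zeros(Wz.shape[0])
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    for x in seq:
        z = sigmoid(Wz @ x + Uz @ h)            # update gate
        r = sigmoid(Wr @ x + Ur @ h)            # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
        h = (1 - z) * h + z * h_tilde
    return h

seq_len, emb, n_filters, width, hidden = 20, 8, 6, 3, 5
text = rng.normal(size=(seq_len, emb))          # stand-in for embedded text
kernels = rng.normal(size=(n_filters, width, emb)) * 0.1
params = [rng.normal(size=s) * 0.1 for s in
          [(hidden, n_filters), (hidden, hidden)] * 3]
features = conv1d(text, kernels)
h = gru_last_state(features, *params)
risk = 1.0 / (1.0 + np.exp(-h.sum()))           # toy scalar risk-alert head
print(features.shape, h.shape)
```

In a real system the convolution and GRU weights would of course be trained end-to-end on labeled sentiment/risk data rather than drawn at random.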
We propose a dataset, AVASpeech-SMAD, to assist speech and music activity detection research. With frame-level music labels, the proposed dataset extends the existing AVASpeech dataset, which originally consists of 45 hours of audio with speech activity labels. To the best of our knowledge, the proposed AVASpeech-SMAD is the first open-source dataset that features strong polyphonic labels for both music and speech. The dataset was manually annotated and verified via an iterative cross-checking process. A simple automatic examination was also implemented to further improve the quality of the labels. Evaluation results from two state-of-the-art SMAD systems are also provided as a benchmark for future reference.
|
arxiv:2111.01320
|
Connected vehicle data (CVD) is one of the most promising emerging mobility data sources, greatly increasing the ability to effectively monitor transportation system performance. A commercial vehicle trajectory dataset was evaluated for market penetration and coverage to establish whether it represents a sufficient sample of the vehicle volumes across the statewide roadway network of New Jersey. The dataset (officially named Wejo Vehicle Movement Data) was compared to the vehicle volumes obtained from 46 weigh-in-motion (WIM) traffic count stations during the corresponding two-month period. The observed market penetration rates of the movement data for the interstate highways, non-interstate expressways, major arterials, and minor arterials are 2.55% (std. dev. 0.76%), 2.31% (std. dev. 1.07%), 3.25% (std. dev. 1.48%), and 4.39% (std. dev. 2.65%), respectively. Additionally, the temporal resolution of the dataset (i.e., the time interval between consecutive Wejo vehicle trips captured at a given roadway section, time-of-day variation, day-of-month variation) was also found to be consistent among the evaluated WIM locations. Although relatively low (less than 5%), the consistent market penetration, combined with a uniform spatial distribution of equipped vehicles within the traffic flow, could enable or enhance a wide range of traffic analytics applications.
|
arxiv:2208.04703
|
We present a computation of the decay constant $f_{B_s}$ in quenched QCD. Our strategy is to combine new precise data from the static approximation with an interpolation of the decay constant around the charm quark mass region. This computation is the first step in demonstrating the feasibility of a strategy for $f_B$ in full QCD. The continuum limits in the static theory and at finite mass are taken separately and will be further improved.
|
arxiv:hep-lat/0309072
|
One can argue that one of the main roles of the subject of statistics is to characterize what the evidence in collected data says about questions of scientific interest. There are two broad questions, which we will refer to as the estimation question and the hypothesis assessment question. For estimation, the evidence in the data should determine a particular value of an object of interest together with a measure of the accuracy of the estimate, while for hypothesis assessment, the evidence in the data should provide evidence in favor of or against some hypothesized value of the object of interest together with a measure of the strength of the evidence. This will be referred to as the evidential approach to statistical reasoning, which can be contrasted with the behavioristic or decision-theoretic approach, where the notion of loss is introduced and the goal is to minimize expected losses. While the two approaches often lead to similar outcomes, this is not always the case, and it is commonly argued that the evidential approach is better suited to scientific applications. This paper traces the history of the evidential approach and summarizes current developments.
|
arxiv:2406.05843
|
Autoregressive models used to generate responses in open-domain dialogue systems often struggle to take long-term context into account and to maintain consistency over a dialogue. Previous research in open-domain dialogue generation has shown that the use of \emph{auxiliary tasks} can introduce inductive biases that encourage the model to improve these qualities. However, most previous research has focused on encoder-only or encoder/decoder models, while the use of auxiliary tasks in \emph{decoder-only} autoregressive models is under-explored. This paper describes an investigation where four different auxiliary tasks are added to small and medium-sized GPT-2 models fine-tuned on the PersonaChat and DailyDialog datasets. The results show that the introduction of the new auxiliary tasks leads to small but consistent improvements in evaluations of the investigated models.
|
arxiv:2304.08115
|
Owing to a high degree of geometrical frustration, the Ising antiferromagnet on a kagome lattice (IAKL) is known to exhibit no long-range ordering at any temperature, including in the ground state. Nevertheless, at low temperatures it shows a strongly correlated, highly fluctuating regime known as a cooperative paramagnet or classical spin liquid. The ground state is characterized by a macroscopic degeneracy which translates to a relatively large value of the residual entropy. It has been shown that the presence of a macroscopic degeneracy associated with geometrical frustration below the saturation field can facilitate an enhanced magnetocaloric effect (MCE), which can exceed that of an ideal paramagnet with equivalent spin by more than an order of magnitude. In the present study we investigate the magnetic and magnetocaloric properties of the IAKL by Monte Carlo simulation. In particular, we calculate the entropy of the system using the thermodynamic integration method and evaluate quantities which characterize the MCE, such as the isothermal entropy change and the adiabatic temperature change in a varying magnetic field. It is found that the IAKL shows the most interesting magnetocaloric properties at low temperatures and moderate magnetic fields, suggesting its potential use in technological applications for low-temperature magnetic refrigeration.
|
arxiv:1905.11494
|
Channel pruning is one of the predominant approaches for deep model compression. Existing pruning methods either train from scratch with sparsity constraints on the channels, or minimize the reconstruction error between the pre-trained feature maps and the compressed ones. Both strategies suffer from limitations: the former kind is computationally expensive and difficult to converge, whilst the latter kind optimizes the reconstruction error but ignores the discriminative power of the channels. To overcome these drawbacks, we investigate a simple yet effective method, called discrimination-aware channel pruning, to choose the channels that really contribute to the discriminative power. To this end, we introduce additional losses into the network to increase the discriminative power of the intermediate layers, and then select the most discriminative channels for each layer by considering both the additional loss and the reconstruction error. Last, we propose a greedy algorithm to conduct channel selection and parameter optimization in an iterative way. Extensive experiments demonstrate the effectiveness of our method. For example, on ILSVRC-12, our pruned ResNet-50 with a 30% reduction in channels even outperforms the original model by 0.39% in top-1 accuracy.
|
arxiv:1810.11809
|
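The greedy selection idea can be sketched on the reconstruction-error part alone (the paper's discriminative loss term and the parameter re-optimization step are omitted here; this simplified criterion is an illustrative assumption, not the authors' full algorithm):

```python
import numpy as np

def greedy_channel_select(contribs, target, k):
    """Greedily pick k channels whose summed contributions best reconstruct
    the target feature map.
    contribs: (n_channels, n_samples) per-channel output contributions.
    target:   (n_samples,) feature map of the unpruned layer.
    Returns (sorted selected indices, final squared reconstruction error)."""
    selected, recon = [], np.zeros_like(target)
    for _ in range(k):
        best, best_err = None, None
        for c in range(len(contribs)):
            if c in selected:
                continue
            err = np.sum((target - recon - contribs[c]) ** 2)
            if best_err is None or err < best_err:
                best, best_err = c, err
        selected.append(best)
        recon = recon + contribs[best]
    return sorted(selected), float(np.sum((target - recon) ** 2))

rng = np.random.default_rng(1)
contribs = rng.normal(size=(8, 100))
contribs[[2, 5]] *= 30.0               # make two channels dominant
target = contribs.sum(axis=0)
kept, err = greedy_channel_select(contribs, target, k=2)
print(kept)                            # the dominant channels are selected first
```

Discrimination-aware pruning additionally scores each channel by how much it reduces an auxiliary classification loss, so channels that reconstruct well but carry little class information can still be dropped.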
Recently, distributed controller architectures have been quickly gaining popularity in software-defined networking (SDN). However, the use of distributed controllers introduces a new and important request dispatching (RD) problem, with the goal that every SDN switch properly dispatches its requests among all controllers so as to optimize network performance. This goal can be fulfilled by designing an RD policy to guide the distribution of requests at each switch. In this paper, we propose a multi-agent deep reinforcement learning (MA-DRL) approach to automatically design RD policies with high adaptability and performance. This is achieved through a new problem formulation in the form of a multi-agent Markov decision process (MA-MDP), a new adaptive RD policy design and a new MA-DRL algorithm called MA-PPO. Extensive simulation studies show that our MA-DRL technique can effectively train RD policies to significantly outperform hand-crafted policies, model-based policies, as well as RD policies learned via single-agent DRL algorithms.
|
arxiv:2103.03022
|
The expected length of longest common subsequences is a problem that has been in the literature for at least twenty-five years. Determining the limiting constants $\gamma_k$ appears to be quite difficult, and the current best bounds leave much room for improvement. Boutet de Monvel studied an independent version of the problem that he calls the Bernoulli matching model, and its relation to the longest common subsequence problem. This paper continues this pursuit by focusing on a simplification we term r-reach. For the string model, $L_r(u, v)$ is the length of the longest common subsequence of u and v given that each matched pair of letters is no more than r letters apart.
|
arxiv:math/0412375
|
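Under the natural reading that a matched pair of positions (i, j) must satisfy |i - j| <= r (this interpretation of "no more than r letters apart" is an assumption), L_r is computable by a one-line modification of the classical LCS dynamic program:

```python
def lcs_r(u, v, r):
    """Length of the longest common subsequence of u and v in which every
    matched pair of positions (i, j) satisfies |i - j| <= r."""
    m, n = len(u), len(v)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            if u[i - 1] == v[j - 1] and abs(i - j) <= r:  # r-reach restriction
                dp[i][j] = max(dp[i][j], dp[i - 1][j - 1] + 1)
    return dp[m][n]

print(lcs_r("abcde", "acbde", r=0))            # only aligned positions may match
print(lcs_r("abcde", "acbde", r=len("abcde"))) # r large enough: ordinary LCS
```

At r = 0 the quantity degenerates to counting agreeing positions, while for r >= max(|u|, |v|) it coincides with the usual LCS, so r interpolates between the two regimes the paper exploits.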
Following Poupard's study of strictly ordered binary trees with respect to two parameters, namely, the "end of minimal chain" and the "parent of maximum leaf," a true tree calculus is developed to solve a system of partial difference equations and then make a joint study of those two statistics. Their joint distribution is shown to be symmetric and is expressed in the form of an explicit three-variable generating function.
|
arxiv:1304.2484
|
We investigate the ability of the $\mu \rightarrow e$ facilities Mu2e and COMET to probe, or discover, new physics with their detector validation datasets. The validation of the detector response may be performed using a dedicated run with $\mu^+$, collecting data below the Michel edge, $E_e \lesssim 52$ MeV; an alternative strategy using $\pi^+ \rightarrow e^+ \nu_e$ may also be considered. We focus primarily on a search for a monoenergetic $e^+$ produced via the two-body decays $\mu^+ \rightarrow e^+ X$ or $\pi^+ \rightarrow e^+ X$, with $X$ a light new-physics particle. Mu2e can potentially explore parameter space beyond present astrophysical and laboratory constraints for a set of well-motivated models including: axion-like particles with flavor-violating couplings ($\mu^+ \rightarrow e^+ a$), massive $Z'$ bosons ($\mu^+ \rightarrow Z' e^+$), and heavy neutral leptons ($\pi^+ \rightarrow e^+ N$). The projected sensitivities presented herein can be achieved in a matter of days.
|
arxiv:2310.00043
|
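The position of the monoenergetic e+ line follows from standard two-body decay kinematics (a textbook formula, not taken from the paper); for a massless X it reproduces the quoted Michel-edge endpoint:

```python
M_MU = 105.6583755   # MeV, muon mass
M_PI = 139.57039     # MeV, charged pion mass
M_E  = 0.51099895    # MeV, electron mass

def positron_energy(m_parent, m_x, m_e=M_E):
    """Energy of the e+ in the two-body decay parent -> e+ X, parent at rest:
        E_e = (m_parent^2 + m_e^2 - m_X^2) / (2 m_parent)."""
    return (m_parent ** 2 + m_e ** 2 - m_x ** 2) / (2.0 * m_parent)

print(round(positron_energy(M_MU, 0.0), 2))  # ~52.83 MeV: the Michel edge
print(round(positron_energy(M_PI, 0.0), 2))  # ~69.79 MeV for pi+ -> e+ X
```

A nonzero m_X shifts the line below the endpoint, which is what lets a bump search below the Michel edge constrain the mass of the light new-physics particle.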
We review some relations occurring between the combinatorial intersection theory on the moduli spaces of stable curves and the asymptotic behavior of the 't Hooft-Kontsevich matrix integrals. In particular, we give an alternative proof of the Witten-Di Francesco-Itzykson-Zuber theorem, which expresses derivatives of the partition function of intersection numbers as matrix integrals, using techniques based on diagrammatic calculus and combinatorial relations among intersection numbers. These techniques extend to a more general interaction potential.
|
arxiv:math/0111082
|
profiles in the ICM.
|
arxiv:1901.02903
|
This paper introduces a fast algorithm for simultaneous inversion and determinant computation of small matrices in the context of fully polarimetric synthetic aperture radar (PolSAR) image processing and analysis. The proposed fast algorithm is based on the computation of the adjoint matrix and the symmetry of the input matrix. The algorithm is implemented on a general-purpose graphics processing unit (GPGPU) and compared to the usual approach based on Cholesky factorization. The assessment with simulated observations and data from an actual PolSAR sensor shows a speedup factor of about two when compared to the usual Cholesky factorization. Moreover, the expressions provided here can be implemented on any platform.
|
arxiv:1807.08084
|
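The adjoint-based idea can be sketched for a general 3x3 matrix (PolSAR covariance matrices are 3x3 Hermitian, and the paper additionally exploits that symmetry to skip redundant cofactors; this general closed form is for illustration only):

```python
def inv3_det3(a):
    """Inverse and determinant of a 3x3 matrix (list of lists, real or complex)
    via the adjugate: A^{-1} = adj(A) / det(A). The cofactors are written out
    explicitly, which is branch-free and well suited to per-pixel GPU kernels."""
    (a00, a01, a02), (a10, a11, a12), (a20, a21, a22) = a
    c00 = a11 * a22 - a12 * a21
    c01 = a12 * a20 - a10 * a22
    c02 = a10 * a21 - a11 * a20
    det = a00 * c00 + a01 * c01 + a02 * c02  # cofactor expansion, first row
    adj = [[c00, a02 * a21 - a01 * a22, a01 * a12 - a02 * a11],
           [c01, a00 * a22 - a02 * a20, a02 * a10 - a00 * a12],
           [c02, a01 * a20 - a00 * a21, a00 * a11 - a01 * a10]]
    inv = [[adj[i][j] / det for j in range(3)] for i in range(3)]
    return inv, det

A = [[2.0, 0.0, 1.0], [0.0, 3.0, 0.0], [1.0, 0.0, 2.0]]
inv, det = inv3_det3(A)
print(det)  # 9.0
```

Because the determinant falls out of the same cofactor expansion, inversion and determinant computation cost essentially one pass, which is the "simultaneous" aspect highlighted in the abstract.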
Dark matter or modifications of the Newtonian inverse-square law in the solar system are studied with accurate planetary astrometric data. From extra perihelion precession and possible changes in Kepler's third law, we get an upper limit on the local dark matter density, $\rho_{DM} < 3 \times 10^{-16}$ kg/m$^3$ at the 2-sigma confidence level. Variations in the $1/r^2$ behavior are considered in the form of either a possible Yukawa-like interaction or a modification of gravity of MOND type. Up to scales of $10^{11}$ m, scale-dependent deviations in the gravitational acceleration are very small. We examined the MOND interpolating function $\mu$ in the regime of strong gravity. Gradually varying forms of $\mu$ suggested by fits of rotation curves are excluded, whereas the standard form $\mu(x) = x/(1+x^2)^{1/2}$ is still compatible with the data. In combination with constraints from galactic rotation curves and theoretical considerations on the external field effect, the absence of any significant deviation from inverse-square attraction in the solar system makes the range of acceptable interpolating functions significantly narrower. Future radio ranging observations of outer planets with an accuracy of a few tenths of a meter could either give positive evidence of dark matter or disprove modifications of gravity.
|
arxiv:astro-ph/0606197
|
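The standard interpolating function quoted above defines the MOND acceleration implicitly through mu(g/a0) g = g_N. A simple bisection solver (the value a0 = 1.2e-10 m/s^2 is the conventional choice, an assumption not stated in the abstract) makes the two regimes concrete:

```python
import math

A0 = 1.2e-10  # m/s^2, conventional MOND acceleration scale

def mu_standard(x):
    """'Standard' interpolating function mu(x) = x / sqrt(1 + x^2)."""
    return x / math.sqrt(1.0 + x * x)

def mond_acceleration(g_newton, mu=mu_standard):
    """Solve mu(g / A0) * g = g_newton for g by bisection.
    The bracket covers both the Newtonian (g ~ g_N) and the deep-MOND
    (g ~ sqrt(g_N * A0)) regimes."""
    lo = g_newton                                        # mu <= 1 => f(lo) <= g_N
    hi = g_newton + math.sqrt(g_newton * A0) + A0        # generous upper bound
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mu(mid / A0) * mid < g_newton:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

g_saturn = 6.5e-5  # rough Newtonian acceleration of Saturn toward the Sun
print(mond_acceleration(g_saturn) / g_saturn)  # essentially 1 at solar-system accelerations
```

At solar-system accelerations (far above a0) the correction is fractionally of order (a0/g)^2, which is why planetary data constrain only how quickly mu approaches 1 rather than the deep-MOND limit itself.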
In this paper, we construct a large class of new simple modules over the twisted $N = 2$ superconformal algebra. These new simple modules are restricted modules based on the simple modules over certain finite-dimensional solvable Lie superalgebras, including various versions of Whittaker modules. We show that they are also twisted modules for the universal $N = 2$ superconformal vertex algebra. On the other hand, we give an explicit characterization of the simple restricted modules over the twisted $N = 2$ superconformal algebra $\mathcal{T}$ under the condition that $T_t \in \mathcal{T}$ acts injectively for some $t \in \frac{1}{2} + \mathbb{Z}_+$.
|
arxiv:2303.06898
|
rapid advancements in multimodal large language models have enabled the creation of hyper - realistic images from textual descriptions. however, these advancements also raise significant concerns about unauthorized use, which hinders their broader distribution. traditional watermarking methods often require complex integration or degrade image quality. to address these challenges, we introduce a novel framework towards effective user attribution for latent diffusion models via watermark - informed blending ( teawib ). teawib incorporates a unique ready - to - use configuration approach that allows seamless integration of user - specific watermarks into generative models. this approach ensures that each user can directly apply a pre - configured set of parameters to the model without altering the original model parameters or compromising image quality. additionally, noise and augmentation operations are embedded at the pixel level to further secure and stabilize watermarked images. extensive experiments validate the effectiveness of teawib, showcasing the state - of - the - art performance in perceptual quality and attribution accuracy.
|
arxiv:2409.10958
|
the presence of celestial companions means that any planet may be subject to three kinds of harmonic mechanical forcing: tides, precession/nutation, and libration. these forcings can generate flows in internal fluid layers, such as fluid cores and subsurface oceans, whose dynamics then significantly differ from solid body rotation. in particular, tides in non-synchronized bodies and libration in synchronized ones are known to be capable of exciting the so-called elliptical instability, i.e. a generic instability corresponding to the destabilization of two-dimensional flows with elliptical streamlines, leading to three-dimensional turbulence. we aim here at confirming the relevance of such an elliptical instability in terrestrial bodies by determining its growth rate, as well as its consequences on energy dissipation, on magnetic field induction, and on heat flux fluctuations on planetary scales. previous studies and theoretical results for the elliptical instability are re-evaluated and extended to cope with an astrophysical context. in particular, generic analytical expressions of the elliptical instability growth rate are obtained using a local wkb approach, simultaneously considering for the first time (i) a local temperature gradient due to an imposed temperature contrast across the considered layer or to the presence of a volumic heat source and (ii) an imposed magnetic field along the rotation axis, coming from an external source. the theoretical results are applied to the telluric planets and moons of the solar system as well as to three super-earths: 55 cnc e, corot-7b, and gj 1214b. for the tide-driven elliptical instability in non-synchronized bodies, only the early earth core is shown to be clearly unstable. for the libration-driven elliptical instability in synchronized bodies, the core of io is shown to be stable, contrary to previous thinking, whereas the cores of europa, 55 cnc e, corot-7b and gj 1214b can be unstable.
the subsurface ocean of europa is slightly unstable. however, these present states do not preclude more unstable situations in the past.
|
arxiv:1203.1796
|
a self - exciting spatio - temporal point process is fitted to incident data from the uk national traffic information service to model the rates of primary and secondary accidents on the m25 motorway in a 12 - month period during 2017 - 18. this process uses a background component to represent primary accidents, and a self - exciting component to represent secondary accidents. the background consists of periodic daily and weekly components, a spatial component and a long - term trend. the self - exciting components are decaying, unidirectional functions of space and time. these components are determined via kernel smoothing and likelihood estimation. temporally, the background is stable across seasons with a daily double peak structure reflecting commuting patterns. spatially, there are two peaks in intensity, one of which becomes more pronounced during the study period. self - excitation accounts for 6 - 7 % of the data with associated time and length scales around 100 minutes and 1 kilometre respectively. in - sample and out - of - sample validation are performed to assess the model fit. when we restrict the data to incidents that resulted in large speed drops on the network, the results remain coherent.
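the decaying, unidirectional self-exciting component can be sketched as a hawkes-style conditional intensity. this is a minimal illustration, not the fitted model: the background rate, branching weight, and the roughly 100-minute and 1-kilometre scales are placeholder values loosely matched to the abstract, and the unidirectional space kernel is one plausible reading.

```python
import numpy as np

def hawkes_intensity(t, x, events, mu=0.5, alpha=0.065, beta=1.0 / 100.0, sigma=1.0):
    """Conditional intensity of a self-exciting spatio-temporal point process:
    a constant background mu plus decaying, unidirectional excitation from
    past events. beta ~ 1/(100 minutes) and sigma ~ 1 km echo the scales
    quoted in the abstract; the exact kernel shape here is an assumption."""
    lam = mu
    for t_i, x_i in events:
        # unidirectional: only earlier events, and only upstream in space
        if t_i < t and x_i <= x:
            lam += (alpha * beta * np.exp(-beta * (t - t_i))
                    * np.exp(-0.5 * ((x - x_i) / sigma) ** 2)
                    / (sigma * np.sqrt(2.0 * np.pi)))
    return lam
```

a secondary-accident rate evaluated shortly after and just downstream of a primary incident exceeds the background rate, as the model intends.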
|
arxiv:2004.14194
|
using the radio observations by first and nvss, we build a sample of 151 radio variable quasars selected from the sloan digital sky survey data release 3 (sdss dr3). six (probably another two) among them are classified as broad absorption line (bal) quasars, with radio flux variations of a few tens of percent within 1.5-5 years. such large amplitudes of variation imply brightness temperatures much higher than the inverse compton limit ($10^{12}$ k) in all the bal quasars, suggesting the presence of relativistic jets beaming toward the observer. the angle between the outflow and the jet is constrained to be less than $\sim 20^{\circ}$. such bal quasars with polar outflows are beyond the simple unification models of bal quasars and non-bal quasars, which hypothesize that bal quasars are normal quasars seen nearly edge-on.
|
arxiv:astro-ph/0510243
|
in recent years, the fervent demand for computational power across various domains has prompted hardware manufacturers to introduce specialized computing hardware aimed at enhancing computational capabilities. particularly, the utilization of tensor hardware supporting low precision has gained increasing prominence in scientific research. however, the use of low-precision tensor hardware for computational acceleration often introduces errors, posing the fundamental challenge of achieving effective acceleration while maintaining computational accuracy. this paper improves the methodology by incorporating low-precision quantization, employing a residual matrix for error correction, and combining a vector-wise quantization method. the key innovation lies in the use of sparse matrices instead of dense matrices when compensating for errors with a residual matrix. by focusing solely on values that may significantly impact relative errors under a specified threshold, this approach controls quantization errors while reducing computational complexity. experimental results demonstrate that this method can effectively control the quantization error while maintaining a high acceleration effect. on the cpu, the improved algorithm achieves up to a 15% accuracy improvement together with a 1.46x speedup.
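as a rough illustration of the idea (not the paper's implementation), the following numpy sketch performs a vector-wise (per-row) symmetric quantization of a matrix product and then compensates with a residual matrix that is sparsified by a threshold, so only entries with large quantization error contribute to the correction. the bit width, threshold rule, and row-wise scheme are all assumptions.

```python
import numpy as np

def quantize_rowwise(M, bits=8):
    """Per-row symmetric quantization to signed integers (vector-wise scheme)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(M).max(axis=1, keepdims=True) / qmax
    scale[scale == 0] = 1.0
    q = np.round(M / scale).astype(np.int32)
    return q, scale

def matmul_with_sparse_residual(A, B, tau=0.1):
    """Low-precision product plus a sparse residual correction: only entries
    of A whose quantization error exceeds tau * (row scale) are kept in the
    residual matrix, so the correction pass works on a sparse matrix."""
    qa, sa = quantize_rowwise(A)
    qb, sb = quantize_rowwise(B.T)                 # column-wise scales for B
    approx = (qa @ qb.T) * (sa * sb.T)             # dequantized int product
    R = A - qa * sa                                # quantization error of A
    R[np.abs(R) < tau * sa] = 0.0                  # sparsify: keep large errors
    return approx + R @ B
```

with `tau=0` every residual entry is kept (maximal correction); a large `tau` prunes the residual entirely and recovers the uncorrected low-precision product.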
|
arxiv:2403.06924
|
robust surfaces capable of reducing flow drag, controlling heat and mass transfer, and resisting fouling in fluid flows are important for various applications. in this context, textured surfaces impregnated with a liquid lubricant show promise due to their ability to sustain a liquid-liquid layer that induces slippage. however, theoretical and numerical studies suggest that the slippage can be compromised by surfactants in the overlying fluid, which contaminate the liquid-liquid interface and generate marangoni stresses. in this study, we use doppler optical coherence tomography, an interferometric imaging technique, combined with numerical simulations to investigate how surfactants influence the slip length of lubricant-infused surfaces with longitudinal grooves in a laminar flow. we introduce surfactants by adding tracer particles (milk) to the working fluid (water). local measurements of slip length at the liquid-liquid interface are significantly smaller than theoretical predictions for clean interfaces (schönecker & hardt 2013). in contrast, measurements are in good agreement with numerical simulations of fully immobilized interfaces, indicating that milk particles adsorbed at the interface are responsible for the reduction in slippage. this work provides the first experimental evidence that liquid-liquid interfaces within textured surfaces can become immobilized in the presence of surfactants and flow.
|
arxiv:2501.12872
|
and techniques, and... began to construct his own problem sets through which his students could learn their craft." in russia, stephen timoshenko reformed instruction around exercises. in 1913 he was teaching strength of materials at the petersburg state university of means of communication. as he wrote in 1968: "[practical] exercises were not given at the institute, and on examinations the students were asked only theoretical questions from the adopted textbook. i had to put an end to this kind of teaching as soon as possible. the students clearly understood the situation, realized the need for better assimilation of the subject, and did not object to the heavy increase in their work load. the main difficulty was with the teachers – or more precisely, with the examiners, who were accustomed to basing their exams on the book. putting practical problems on the exams complicated their job. they were persons along in years... the only hope was to bring younger people into teaching." see also: algorithm, worked-example effect. external links: tatyana afanasyeva (1931) exercises in experimental geometry from pacific institute for the mathematical sciences; vladimir arnold (2004) exercises for students from age 5 to 15 at imaginary platform; james alfred ewing (1911) examples in mathematics, mechanics, navigation and nautical astronomy, heat and steam, electricity, for the use of junior officers afloat from internet archive; jim hefferon & others (2004) linear algebra at wikibooks
|
https://en.wikipedia.org/wiki/Exercise_(mathematics)
|
the concepts of twisted knot theory and singular knot theory inspire the introduction of singular twisted knot theory. this study showcases similar findings for singular twisted links, including the alexander theorem and the markov theorem derived from knot theory. moreover, in this paper we define singular twisted virtual braids and their monoid structure. additionally, we provide both a monoid and a reduced monoid presentation for singular twisted virtual braids.
|
arxiv:2403.17383
|
clinical notes are unstructured text generated by clinicians during patient encounters. clinical notes are usually accompanied by a set of metadata codes from the international classification of diseases (icd). the icd code is an important code used in various operations, including insurance, reimbursement, medical diagnosis, etc. therefore, it is important to classify icd codes quickly and accurately. however, annotating these codes is costly and time-consuming. so we propose a model based on bidirectional encoder representations from transformers (bert) using the sequence attention method for automatic icd code assignment. we evaluate our approach on the medical information mart for intensive care iii (mimic-iii) benchmark dataset. our model achieved a macro-averaged f1 of 0.62898 and a micro-averaged f1 of 0.68555, outperforming the state-of-the-art model on the mimic-iii dataset. the contribution of this study is a method of applying bert to documents and a sequence attention method that can capture important sequence information appearing in documents.
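the "sequence attention" idea of weighting token encodings by importance can be sketched, in isolation from bert, as a learned-query attention pooling over a sequence of hidden vectors. this is a generic illustration of attention pooling, not the paper's exact architecture; `w` stands in for a learned parameter vector.

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # numerical stability
    e = np.exp(z)
    return e / e.sum()

def sequence_attention_pool(H, w):
    """H: (seq_len, hidden) token encodings (e.g. transformer outputs);
    w: (hidden,) attention query. Returns an attention-weighted document
    vector plus the weights, which highlight informative positions."""
    scores = H @ w            # (seq_len,) relevance of each token
    alpha = softmax(scores)   # non-negative weights summing to 1
    return alpha @ H, alpha
```

the resulting document vector would then feed a multi-label classification head over icd codes.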
|
arxiv:2106.07932
|
quantum one - class support vector machines leverage the advantage of quantum kernel methods for semi - supervised anomaly detection. however, their quadratic time complexity with respect to data size poses challenges when dealing with large datasets. in recent work, quantum randomized measurements kernels and variable subsampling were proposed, as two independent methods to address this problem. the former achieves higher average precision, but suffers from variance, while the latter achieves linear complexity to data size and has lower variance. the current work focuses instead on combining these two methods, along with rotated feature bagging, to achieve linear time complexity both to data size and to number of features. despite their instability, the resulting models exhibit considerably higher performance and faster training and testing times.
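variable subsampling, the ingredient that buys linear complexity in data size, can be illustrated with a purely classical stand-in: an ensemble of kernel one-class scorers, each fit on a random subsample whose size is drawn from a range. the rbf kernel here replaces the quantum randomized-measurements kernel, and the scoring rule (mean similarity to the subsample) is a simplified proxy for a one-class svm.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def variable_subsampling_scores(X_train, X_test, n_models=10,
                                size_range=(8, 16), seed=0):
    """Ensemble of kernel one-class scorers, each fit on a random subsample
    of variable size, so per-model cost is O(size^2) rather than O(n^2).
    Higher score = more normal; anomalies get low mean similarity."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X_test))
    for _ in range(n_models):
        m = rng.integers(size_range[0], size_range[1] + 1)   # variable size
        idx = rng.choice(len(X_train), size=m, replace=False)
        scores += rbf_kernel(X_test, X_train[idx]).mean(axis=1)
    return scores / n_models
```

averaging over many small, differently sized subsamples trades the single large fit for cheap models whose variance is reduced by the ensemble, which is the motivation the abstract describes.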
|
arxiv:2407.20753
|
inspired by an extremely simplified view of earthquakes, we propose a stochastic domino cellular automaton model exhibiting avalanches. from elementary combinatorial arguments we derive a set of nonlinear equations describing the automaton. exact relations between the average parameters of the model are presented. depending on the imposed triggering, the model reproduces both exponential and inverse power statistics of clusters.
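the abstract does not spell out the automaton's rules, so the following is a hedged, minimal domino-style variant consistent with its spirit rather than the paper's exact model: random triggering either occupies an empty cell or topples the whole contiguous occupied cluster, producing an avalanche-size statistic.

```python
import random

def domino_step(lattice, i):
    """One update of a minimal stochastic domino automaton (a guess at the
    model's spirit, not the paper's exact rules): hitting an empty cell
    fills it; hitting an occupied cell topples the whole contiguous
    cluster, returning the avalanche size (0 means no avalanche)."""
    n = len(lattice)
    if not lattice[i]:
        lattice[i] = True
        return 0
    left = i
    while left > 0 and lattice[left - 1]:
        left -= 1
    right = i
    while right < n - 1 and lattice[right + 1]:
        right += 1
    for j in range(left, right + 1):   # the cluster falls like dominoes
        lattice[j] = False
    return right - left + 1

def simulate(n=200, steps=5000, seed=1):
    """Drive the automaton with uniform random triggering and collect
    avalanche sizes for cluster statistics."""
    random.seed(seed)
    lattice = [False] * n
    sizes = []
    for _ in range(steps):
        s = domino_step(lattice, random.randrange(n))
        if s:
            sizes.append(s)
    return sizes
```

a histogram of the returned sizes is the cluster statistic whose exponential or power-law character the paper relates to the triggering rule.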
|
arxiv:1009.4609
|
we propose a model for a power - counting renormalizable field theory living in a fractal spacetime. the action is lorentz covariant and equipped with a stieltjes measure. the system flows, even in a classical sense, from an ultraviolet regime where spacetime has hausdorff dimension 2 to an infrared limit coinciding with a standard $ d $ - dimensional field theory. we discuss the properties of a scalar field model at classical and quantum level. classically, the field lives on a fractal which exchanges energy - momentum with the bulk of integer topological dimension d. although an observer experiences dissipation, the total energy - momentum is conserved. the field spectrum is a continuum of massive modes. the gravitational sector and einstein equations are discussed in detail, also on cosmological backgrounds. we find ultraviolet cosmological solutions and comment on their implications for the early universe.
|
arxiv:1001.0571
|
we demonstrate a contradiction of quantum mechanics with local hidden variable theories for continuous variable quadrature phase amplitude ("position" and "momentum") measurements, by way of a violation of a bell inequality. for any quantum state, this contradiction is lost for situations where the quadrature phase amplitude results are always macroscopically distinct. we show that for optical realisations of this experiment, where one uses homodyne detection techniques to perform the quadrature phase amplitude measurement, one has an amplification prior to detection, so that macroscopic fields are incident on photodiode detectors. the high efficiencies of such detectors may open a way for a loophole-free test of local hidden variable theories.
|
arxiv:quant-ph/0010024
|
the kinetics of intrinsic and dopant-enhanced solid phase epitaxy (spe) is studied in amorphous germanium (a-ge) layers formed by ion implantation on <100> ge substrates. the spe rates were measured with a time-resolved reflectivity (trr) system between 300 and 540 °c and found to have an activation energy of (2.15 ± 0.04) ev. to interpret the trr measurements the refractive indices of the a-ge layers were measured at the two wavelengths used, 1.152 and 1.532 μm. for the first time, spe rate measurements on thick a-ge layers (> 3 μm) have also been performed to distinguish between bulk and near-surface spe growth rate behavior. possible effects of explosive crystallization on thick a-ge layers are considered. when h is present in a-ge it is found to have a considerably greater retarding effect on the spe rate than for similar concentrations in a-si layers. hydrogen is found to reduce the pre-exponential spe velocity factor but not the activation energy of spe. however, the extent of h indiffusion into a-ge surface layers during spe is about one order of magnitude less than that observed for a-si layers. this is thought to be due to the lack of a stable surface oxide on a-ge. dopant-enhanced kinetics were measured in a-ge layers containing uniform concentration profiles of implanted as or al spanning the concentration regime 1-10 × 10^19 cm^-3. dopant compensation effects are also observed in a-ge layers containing equal concentrations of as and al, where the spe rate is similar to the intrinsic rate. various spe models are considered in light of these data.
|
arxiv:1007.5209
|
we propose a three-potential formalism for the three-body coulomb scattering problem. the corresponding integral equations are mathematically well-behaved and can successfully be solved by the coulomb-sturmian separable expansion method. the results show perfect agreement with existing low-energy $n$-$d$ and $p$-$d$ scattering calculations.
|
arxiv:nucl-th/9701027
|
we study the zariski closure of points in local deformation rings corresponding to potential semi - stable representations with certain prescribed $ p $ - adic hodge theoretic properties. we show in favourable cases that the closure is equal to a union of irreducible components of the deformation space. we also study an analogous question for global hecke algebras.
|
arxiv:1809.06598
|
many modern theories which try to unify gravity with the standard model of particle physics, such as string theory, propose two key modifications to the commonly known physical theories: (i) the existence of additional space dimensions; (ii) the existence of a minimal length distance or maximal resolution. while extra dimensions have received wide coverage in publications over the last ten years (especially due to the prediction of micro black hole production at the lhc), the phenomenology of models with a minimal length is still less investigated. in a summer study project for bachelor students in 2010 we explored some phenomenological implications of the potential existence of a minimal length. in this paper we review the idea and formalism of a quantum gravity induced minimal length in the generalised uncertainty principle framework as well as in the coherent state approach to non-commutative geometry. these approaches are effective models which can make model-independent predictions for experiments and are ideally suited for phenomenological studies. pedagogical examples are provided to grasp the effects of a quantum gravity induced minimal length. this article is intended for graduate students and non-specialists interested in quantum gravity.
|
arxiv:1202.1500
|
image segmentation is an essential component in many image processing and computer vision tasks. the primary goal of image segmentation is to simplify an image for easier analysis, and there are two broad approaches for achieving this : edge based methods, which extract the boundaries of specific known objects, and region based methods, which partition the image into regions that are statistically homogeneous. one of the more prominent edge finding methods, known as the level set method, evolves a zero - level contour in the image plane with gradient descent until the contour has converged to the object boundaries. while the classical level set method and its variants have proved successful in segmenting real images, they are susceptible to becoming stuck in noisy regions of the image plane without a priori knowledge of the image and they are unable to provide details beyond object outer boundary locations. we propose a modification to the variational level set image segmentation method that can quickly detect object boundaries by making use of random point initialization. we demonstrate the efficacy of our approach by comparing the performance of our method on real images to that of the prominent canny method.
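as a toy companion to the discussion (not the proposed method), the following piecewise-constant two-region iteration captures the region-competition flavor of variational, region-based segmentation with a random initialization; it omits the curvature and regularization terms of a true level-set evolution, so it is only a sketch of the underlying idea.

```python
import numpy as np

def two_phase_segment(img, iters=50, seed=0):
    """Minimal piecewise-constant (Chan-Vese-style) region competition with
    random initialization: each pixel flips toward whichever region mean it
    is closer to. A toy stand-in for full level-set evolution: no curvature
    term, so contours are not smoothed."""
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal(img.shape)          # random field initialization
    for _ in range(iters):
        inside, outside = phi > 0, phi <= 0
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[outside].mean() if outside.any() else 0.0
        # sign of phi now marks the nearer of the two region means
        phi = (img - c2) ** 2 - (img - c1) ** 2
    return phi > 0
```

on a clean synthetic image this converges to the two intensity classes; the classical level-set machinery adds the regularization needed for noisy real images.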
|
arxiv:2112.12355
|
an abundance of datasets and availability of reliable evaluation metrics have resulted in strong progress in factoid question answering ( qa ). this progress, however, does not easily transfer to the task of long - form qa, where the goal is to answer questions that require in - depth explanations. the hurdles include ( i ) a lack of high - quality data, and ( ii ) the absence of a well - defined notion of the answer ' s quality. in this work, we address these problems by ( i ) releasing a novel dataset and a task that we call asqa ( answer summaries for questions which are ambiguous ) ; and ( ii ) proposing a reliable metric for measuring performance on asqa. our task focuses on factoid questions that are ambiguous, that is, have different correct answers depending on interpretation. answers to ambiguous questions should synthesize factual information from multiple sources into a long - form summary that resolves the ambiguity. in contrast to existing long - form qa tasks ( such as eli5 ), asqa admits a clear notion of correctness : a user faced with a good summary should be able to answer different interpretations of the original ambiguous question. we use this notion of correctness to define an automated metric of performance for asqa. our analysis demonstrates an agreement between this metric and human judgments, and reveals a considerable gap between human performance and strong baselines.
|
arxiv:2204.06092
|
we prove an existence theorem for gauge invariant $l^2$-normal neighborhoods of the reduction loci in the space ${\cal a}_a(e)$ of oriented connections on a fixed hermitian 2-bundle $e$. we use this to obtain results on the topology of the moduli space ${\cal b}_a(e)$ of (not necessarily irreducible) oriented connections, and to study the donaldson $\mu$-classes globally around the reduction loci. in this part of the article we make essential use of the concept of a harmonic section in a sphere bundle with respect to a euclidean connection. second, we concentrate on moduli spaces of instantons on definite 4-manifolds with arbitrary first betti number. we prove strong generic regularity results which imply (for bundles with "odd" first chern class) the existence of a connected, dense open set of "good" metrics for which all the reductions in the uhlenbeck compactification of the moduli space are simultaneously regular. these results can be used to define new donaldson type invariants for definite 4-manifolds. the idea behind this construction is to notice that, for a good metric $g$, the geometry of the instanton moduli spaces around the reduction loci is always the same, independently of the choice of $g$. the connectedness of the space of good metrics is important in order to prove that no wall-crossing phenomena (jumps of invariants) occur. moreover, we notice that, for low instanton numbers, the corresponding moduli spaces are a priori compact and contain no reductions at all, so in these cases the existence of well-defined donaldson type invariants is obvious. the natural question is to decide whether these new donaldson type invariants yield essentially new differential-topological information on the base manifold, or have a purely topological nature.
|
arxiv:0704.2625
|
a recent experiment on nb-doped bi2se3 showed that zero field magnetization appears below the superconducting transition temperature. this gives evidence that the superconducting state breaks time-reversal symmetry spontaneously and may be in the chiral topological phase. this is in sharp contrast to the cu-doped case, which is possibly in the nematic phase and breaks rotational symmetry spontaneously. by deriving the free energy of the system from a microscopic model, we show that the magnetic moments of the nb atoms can be polarized by the chiral cooper pairs and enlarge the phase space of the chiral topological phase compared to the nematic phase. we further show that the chiral topological phase is a weyl superconducting phase with bulk nodal points which are connected by surface majorana arcs.
|
arxiv:1608.05825
|
a reconfigurable phononic crystal (pnc) is proposed where elastic properties can be modulated by rotation of asymmetric solid scatterers immersed in water. the scatterers are metallic rods with cross-section of a 120° circular sector. the orientation of each rod is independently controlled by an external electric motor, which allows continuous variation of the local scattering parameters and dispersion of sound in the entire crystal. due to the asymmetry of the scatterers, the crystal band structure possesses highly anisotropic bandgaps. synchronous rotation of all the scatterers by a definite angle changes the regime of reflection to a regime of transmission and vice versa. the same mechanically tunable structure functions as a gradient index medium under incremental angular reorientation of the rods along both rows and columns, and can subsequently serve as a tunable acoustic lens, an acoustic beam splitter, and an acoustic beam steerer.
|
arxiv:2304.10008
|
in this paper, we address an open problem of zero-shot learning. its principle is based on learning a mapping that associates feature vectors extracted from, e.g., images with attribute vectors that describe objects and/or scenes of interest. in turn, this allows classifying unseen object classes and/or scenes by matching feature vectors via the mapping to a newly defined attribute vector describing a new class. due to the importance of such a learning task, there exist many methods that learn semantic, probabilistic, linear or piece-wise linear mappings. in contrast, we apply well-established kernel methods to learn a non-linear mapping between the feature and attribute spaces. we propose an easy learning objective inspired by the linear discriminant analysis, kernel-target alignment and kernel polarization methods that promotes incoherence. we evaluate the performance of our algorithm on polynomial as well as shift-invariant gaussian and cauchy kernels. despite the simplicity of our approach, we obtain state-of-the-art results on several zero-shot learning datasets and benchmarks, including the recent awa2 dataset.
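the feature-to-attribute mapping idea can be sketched with plain kernel ridge regression: learn a non-linear map from features to attribute space, then classify a test point by its nearest class attribute vector (which may belong to an unseen class). this is a generic baseline, not the paper's incoherence-promoting objective, and the gaussian kernel and regularizer are assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian (shift-invariant) kernel between rows of X and Y."""
    d2 = ((X[:, None] - Y[None, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_map(X, A, lam=1e-3, gamma=1.0):
    """Kernel ridge regression from feature space to attribute space:
    alpha = (K + lam I)^{-1} A, returning the induced non-linear map."""
    K = rbf(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), A)
    return lambda Z: rbf(Z, X, gamma) @ alpha

def zero_shot_predict(phi, Z, class_attrs):
    """Map test features to attribute space, then pick the nearest class
    prototype; unseen classes are handled by adding their attribute rows."""
    A_hat = phi(Z)
    d = ((A_hat[:, None] - class_attrs[None, :]) ** 2).sum(-1)
    return d.argmin(axis=1)
```

swapping the attribute table `class_attrs` for one that includes never-seen classes is exactly what makes the scheme zero-shot.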
|
arxiv:1802.01279
|
a hypergraph $h$ is properly colored if for every vertex $v \in v(h)$, all the edges incident to $v$ have distinct colors. in this paper, we show that if $h_1, \cdots, h_s$ are properly colored $k$-uniform hypergraphs on $n$ vertices, where $n \geq 3k^2 s$, and $e(h_i) > \binom{n}{k} - \binom{n-s+1}{k}$, then there exists a rainbow matching of size $s$, containing one edge from each $h_i$. this generalizes some previous results on the erdős matching conjecture.
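the object the theorem guarantees can be checked directly on small instances: a rainbow matching of size $s$ is one edge from each of the $s$ hypergraphs, pairwise vertex-disjoint. the following brute-force search illustrates the definition only; it is exponential and says nothing about the threshold in the theorem.

```python
def rainbow_matching(hypergraphs, s):
    """Brute-force search for a rainbow matching of size s: one edge from
    each of the first s hypergraphs, pairwise vertex-disjoint. Each
    hypergraph is a list of edges given as frozensets. Exponential time,
    suitable only for tiny instances."""
    def extend(chosen, used, i):
        if i == s:
            return list(chosen)
        for e in hypergraphs[i]:
            if used.isdisjoint(e):          # edge avoids vertices used so far
                result = extend(chosen + [e], used | e, i + 1)
                if result:
                    return result
        return None                          # backtrack
    return extend([], frozenset(), 0)
```

returning `None` certifies that no rainbow matching of the requested size exists in the instance.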
|
arxiv:1808.04954
|
$x^{j}\right\}_{j=0}^{n}$ by some lower triangular $(n+1) \times (n+1)$ matrix $\mathbf{\pi}_{n}.$
|
arxiv:1303.0627
|
it was realized recently that the chordal, radial and dipolar sles are special cases of a general slit holomorphic stochastic flow. we characterize those slit holomorphic stochastic flows which generate level lines of the gaussian free field. in particular, we describe the modifications of the gaussian free field ( gff ) corresponding to the chordal and dipolar sle with drifts. finally, we develop a version of conformal field theory based on the background charge and dirichlet boundary condition modifications of gff and present martingale - observables for these types of sles.
|
arxiv:1508.03160
|
an outreach effort has started at michigan state university to bring particle physics, the large hadron collider, and the atlas experiment to a general audience at the abrams planetarium on the msu campus. a team of undergraduate students majoring in physics, communications arts & sciences, and journalism are putting together short clips about atlas and the lhc to be shown at the planetarium.
|
arxiv:1109.2839
|
this study addresses the effect of the magnetic hyperfine interaction on the relativistic h1s wave functions. these are used to calculate the electric, magnetic, and confinement force densities acting on the 1s electron. the magnetic field couples dirac equations for different angular momenta. these are solved numerically for the hyperfine singlet and triplet, as well as for a classical magnetic dipole. in the singlet ground state the hyperfine interaction shifts the electron density toward the proton. a similar shift is found for the classical dipole, and an opposite shift for the triplet. the cross - over between charge accumulation and depletion occurs at 1. 325 times the bohr radius. the behavior of the wave functions is investigated down to distances smaller than the proton radius, including the incorporation of virtual positrons. the force densities are determined and balanced against each other.
|
arxiv:1702.05844
|
a phenomenological model of self - organization explaining the emergence of a complexity with features that apparently satisfy the specific criteria usually required for recognizing the appearance of life in laboratory is presented. the described phenomenology, justified by laboratory experiments, is essentially based on local self - enhancement and long - range inhibition. the complexity represents a primitive organism self - assembled in a gaseous medium revealing, immediately after its " birth ", many of the prerequisite features that attribute them the quality to evolve, under suitable conditions, into a living cell.
|
arxiv:0708.4067
|
mathematical models represent one of the fundamental ways of studying nature. in particular, epidemic models have shown to be particularly useful in understanding the course of diseases and in planning effective control policies. a particular type of epidemic model considers the individuals divided into populations. when studied on graphs, it is already known that the graph topology can play an important role in the evolution of the disease. at the same time, one may want to study the effect of the presence of an underlying attraction landscape of the vertices, apart from the underlying topology itself. in this work, we study metapopulations with a small number of individuals in the presence of an attraction landscape. individuals move across populations and get infected according to the sis compartmental model. by using a markov chain approach, we provide a numerical approximation to the prediction of the long-term prevalence of the disease. more specifically, an approach that combines two binomial distributions for mobility, with appropriate assumptions, is proposed to approximate the model. the problem setting is simulated through monte-carlo experiments and the obtained results are compared to the analytical approach. substantial agreement is observed between both approaches, which corroborates the effectiveness of the reported numerical approach. in addition, we also study the impact of different levels of attraction landscapes, as well as propagation on the local scale of the entire population. all in all, this study proposes a potentially effective approach to a mostly unexplored setting of disease transmission.
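a minimal monte-carlo step in this spirit might look like the following: binomial infection and recovery within each population, then binomial migration with destinations drawn from an attraction landscape. the specific transmission form `1-(1-beta)^I` and the single-vector landscape are assumptions, not the paper's exact construction.

```python
import numpy as np

def sis_metapop_step(S, I, beta, gamma, move_p, attract, rng):
    """One Monte-Carlo step of an SIS metapopulation. Within each population,
    new infections ~ Binomial(S, 1-(1-beta)^I) and recoveries
    ~ Binomial(I, gamma); then each individual migrates with probability
    move_p to a population drawn from the attraction landscape `attract`
    (a probability vector over populations). Arrays S, I are integer counts."""
    new_inf = rng.binomial(S, 1.0 - (1.0 - beta) ** I)
    rec = rng.binomial(I, gamma)
    S, I = S - new_inf + rec, I + new_inf - rec
    for counts in (S, I):                      # mobility, compartment by compartment
        movers = rng.binomial(counts, move_p)
        counts -= movers
        counts += rng.multinomial(movers.sum(), attract)
    return S, I
```

iterating this step and averaging the infected fraction over many runs gives the monte-carlo estimate of long-term prevalence that the analytical approximation is compared against.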
|
arxiv:2111.13168
|
intent classification and slot filling are two critical tasks for natural language understanding. traditionally the two tasks have been deemed to proceed independently. however, more recently, joint models for intent classification and slot filling have achieved state - of - the - art performance, and have proved that there exists a strong relationship between the two tasks. this article is a compilation of past work in natural language understanding, especially joint intent classification and slot filling. we observe three milestones in this research so far : intent detection to identify the speaker ' s intention, slot filling to label each word token in the speech / text, and finally, joint intent classification and slot filling tasks. in this article, we describe trends, approaches, issues, data sets, evaluation metrics in intent classification and slot filling. we also discuss representative performance values, describe shared tasks, and provide pointers to future work, as given in prior works. to interpret the state - of - the - art trends, we provide multiple tables that describe and summarise past research along different dimensions, including the types of features, base approaches, and dataset domain used.
|
arxiv:2101.08091
|
this note highlights how russia uses the international academic sphere - including scientometric databases, international publishers, and international organizations - as a propaganda tool to legitimize its appropriation of ukrainian territories.
|
arxiv:2410.10274
|
we identify, for visco-elasto-plastic (vep) glassy polymers, physical phenomena during berkovich nanoindentation, a locally imposed deformation. live visuals via in situ nanoindentation indicate mainly sink-in during loading, with pile-up after unloading. scanning probe microscopy (spm) indicates significant volume-conserving upflow below the tip, for these high-nu, compliant materials, with compliance correlated to high geometric fractional contact (including blunt height, h_b), (h_c + h_b)/(h_m + h_b) ~ 0.86-0.95. we adapt the ideal conical indentation framework to vep berkovich nanoindentation, to calculate the contact area and visually depict the upflow and the displacement paths in the material. the combination of spm and p-h data indicates a mixed comparison with uniaxial modulus and yield stress, with conventionally defined hardness h < 3*sig_y, and nanoindentation modulus e_n > e. by rationally removing viscoelastic (ve) effects from the loading p-h data, we find instant, zero-time hardness, h_l0 > 3*sig_y. we apply the power law model to only the recovery onset, to estimate pure elastic recovery. we then deconvolute the vep nanoindentation into the conventional ep and elastic contributions, isolating the ve component. constraint-induced sink-in, pile-up and ve recovery of the highly yielded tip-apex region mirror the converse constrained deformation effects, governing the trends in conventionally defined h_l0 and e_n for glassy polymers.
|
arxiv:2405.16180
|
we study the theory of scattering for the system consisting of a schrödinger equation and a wave equation with a yukawa type coupling, in space dimension 3. we prove in particular the existence of modified wave operators for that system with no size restriction on the data, and we determine the asymptotic behaviour in time of solutions in the range of the wave operators. the method consists in solving the wave equation, substituting the result into the schrödinger equation, which then becomes both nonlinear and nonlocal in time, and treating the latter by the method previously used for a family of generalized hartree equations with long range interactions.
|
arxiv:math/0107087
|
the present situation on a comparison of the theoretical evaluation of the muon anomalous magnetic moment with the experimental one is briefly reviewed. then, by means of a recently elaborated unitary and analytic model of the meson transition form factors, the contributions of $e^+e^- \to p(s)\gamma$ processes to the muon $g-2$ are estimated.
|
arxiv:hep-ph/0401134
|
we present a numerical investigation of three - dimensional, short - wavelength linear instabilities in kelvin - helmholtz ( kh ) vortices in homogeneous and stratified environments. the base flow, generated using two - dimensional numerical simulations, is characterized by the reynolds number and the richardson number defined based on the initial one - dimensional velocity and buoyancy profiles. the local stability equations are then solved on closed streamlines in the vortical base flow, which is assumed quasi - steady. for the unstratified case, the elliptic instability at the vortex core dominates at early times, before being taken over by the hyperbolic instability at the vortex edge. for the stratified case, the early time instabilities comprise a dominant elliptic instability at the core and a hyperbolic instability strongly influenced by stratification at the vortex edge. at intermediate times, the local approach shows a new branch of instability ( convective branch ) that emerges at the vortex core and subsequently moves towards the vortex edge. a few more convective instability branches appear at the vortex core and move away, before coalescing to form the most unstable region inside the vortex periphery at large times. the dominant instability characteristics from the local approach are shown to be in good qualitative agreement with results from global instability studies for both homogeneous and stratified cases. compartmentalized analyses are then used to elucidate the role of shear and stratification on the identified instabilities. the role of buoyancy is shown to be critical after the primary kh instability saturates, with the dominant convective instability shown to occur in regions with the strongest statically unstable layering. we conclude by highlighting the potentially insightful role that the local approach may offer in understanding the secondary instabilities in other flows.
|
arxiv:1712.05868
|
a wireless sensor network (wsn) typically consists of base stations and a large number of wireless sensors. the sensory data gathered from the whole network at a certain time snapshot can be visualized as an image. as a result, information hiding techniques can be applied to this "sensory data image". steganography refers to the technology of hiding data in digital media without drawing any suspicion, while steganalysis is the art of detecting the presence of steganography. this article provides a brief review of steganography and steganalysis applications for wireless sensor networks (wsns). we then show that steganographic techniques are relevant both to sensed-data authentication in wireless sensor networks and, from the attacker's point of view, to attacks on sensed data, a perspective which has not yet been investigated in the literature. our simulation results show that the sink level is unable to detect an attack carried out by the nsf5 algorithm on sensed data.
|
arxiv:1706.08136
|
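as a toy illustration of hiding data in a "sensory data image" (arxiv:1706.08136 above), here is a naive least-significant-bit (lsb) scheme. the nsf5 algorithm mentioned in the abstract is far more sophisticated (matrix embedding in jpeg dct coefficients); this sketch only shows the basic steganographic idea of embedding without visible distortion:

```python
# Naive LSB steganography on a toy "sensory data image":
# overwrite the least significant bit of each pixel with one message bit,
# so each pixel value changes by at most 1.
def embed(pixels, message_bits):
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(pixels, n_bits):
    # read the message back from the low bits
    return [p & 1 for p in pixels[:n_bits]]

sensor_image = [200, 13, 77, 154, 90, 31, 240, 66]  # toy 8-"pixel" snapshot
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(sensor_image, secret)
recovered = extract(stego, len(secret))
```

because the per-pixel change is at most one count, a simple sink-side integrity check on the raw values would not flag the modified image, which is the detection problem the abstract's steganalysis discussion addresses.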
atom probe tomography ( apt ) is a burgeoning characterization technique that provides compositional mapping of materials in three - dimensions at near - atomic scale. since its significant expansion in the past 30 years, we estimate that one million apt datasets have been collected, each containing millions to billions of individual ions. their analysis and the extraction of microstructural information has largely relied upon individual users whose varied level of expertise causes clear and documented bias. current practices hinder efficient data processing, and make challenging standardization and the deployment of data analysis workflows that would be compliant with fair data principles. over the past decade, building upon the long - standing expertise of the apt community in the development of advanced data processing or data mining techniques, there has been a surge of novel machine learning ( ml ) approaches aiming for user - independence, and that are efficient, reproducible, and robust from a statistics perspective. here, we provide a snapshot review of this rapidly evolving field. we begin with a brief introduction to apt and the nature of the apt data. this is followed by an overview of relevant ml algorithms and a comprehensive review of their applications to apt. we also discuss how ml can enable discoveries beyond human capability, offering new insights into the mechanisms within materials. finally, we provide guidance for future directions in this domain.
|
arxiv:2504.14378
|
gauge fields are special in the sense that they are invariant under gauge transformations and {\em ``ipso facto''} they lead to problems when we try quantizing them straightforwardly. to circumvent this problem we need to specify a gauge condition to fix the gauge, so that the fields that are connected by gauge invariance are not overcounted in the process of quantization. the usual way we do this in the light-front is through the introduction of a lagrange multiplier, $(n \cdot a)^2$, where $n_\mu$ is the external light-like vector, i.e., $n^2 = 0$, and $a_\mu$ is the vector potential. this leads to the usual light-front propagator with all the ensuing characteristics, such as the prominent $(k \cdot n)^{-1}$ pole which has been the subject of much research. however, it has long been recognized that this procedure is incomplete in that there remains a residual gauge freedom still to be fixed by some ``ad hoc'' prescription, and this is normally worked out to remedy some unwieldy aspect that emerges along the way. in this work we propose a new lagrange multiplier for the light-front gauge that leads to the correctly defined propagator with no residual gauge freedom left. this is accomplished via a $(n \cdot a)(\partial \cdot a)$ term in the lagrangian density. this leads to a well-defined and exact, though lorentz non-invariant, propagator.
|
arxiv:nucl-th/0303016
|
specialized hardware accelerators aid the rapid advancement of artificial intelligence ( ai ), and their efficiency impacts ai ' s environmental sustainability. this study presents the first publication of a comprehensive ai accelerator life - cycle assessment ( lca ) of greenhouse gas emissions, including the first publication of manufacturing emissions of an ai accelerator. our analysis of five tensor processing units ( tpus ) encompasses all stages of the hardware lifespan - from raw material extraction, manufacturing, and disposal, to energy consumption during development, deployment, and serving of ai models. using first - party data, it offers the most comprehensive evaluation to date of ai hardware ' s environmental impact. we include detailed descriptions of our lca to act as a tutorial, road map, and inspiration for other computer engineers to perform similar lcas to help us all understand the environmental impacts of our chips and of ai. a byproduct of this study is the new metric compute carbon intensity ( cci ) that is helpful in evaluating ai hardware sustainability and in estimating the carbon footprint of training and inference. this study shows that cci improves 3x from tpu v4i to tpu v6e. moreover, while this paper ' s focus is on hardware, software advancements leverage and amplify these gains.
|
arxiv:2502.01671
|
as data sharing has become more prevalent, three pillars - archives, standards, and analysis tools - have emerged as critical components in facilitating effective data sharing and collaboration. this paper compares four freely available intracranial neuroelectrophysiology data repositories : data archive for the brain initiative ( dabi ), distributed archives for neurophysiology data integration ( dandi ), openneuro, and brain - code. the aim of this review is to describe archives that provide researchers with tools to store, share, and reanalyze both human and non - human neurophysiology data based on criteria that are of interest to the neuroscientific community. the brain imaging data structure ( bids ) and neurodata without borders ( nwb ) are utilized by these archives to make data more accessible to researchers by implementing a common standard. as the necessity for integrating large - scale analysis into data repository platforms continues to grow within the neuroscientific community, this article will highlight the various analytical and customizable tools developed within the chosen archives that may advance the field of neuroinformatics.
|
arxiv:2306.15041
|
we explore toponium, the smallest known quantum bound state of a top quark and its antiparticle, bound by the strong force. with a bohr radius of $8 \times 10^{-18}$ m and a lifetime of $2.5 \times 10^{-25}$ s, toponium uniquely probes microphysics. unlike all other hadrons, it is governed by ultraviolet freedom. this distinction offers novel insights into quantum chromodynamics. our analysis reveals a toponium signal exceeding $5\sigma$ in the distribution of the cross section ratio between $e^+e^- \rightarrow b\bar{b}$ and $e^+e^- \rightarrow q\bar{q}$ ($q = b, c, s, d, u$), based on 400 fb$^{-1}$ of data collected at $\sqrt{s} \approx 341~{\rm gev}$. this discovery enables a top quark mass measurement with an uncertainty reduced by a factor of ten compared to current precision levels. moreover, this method improves the systematic uncertainty by at least a factor of 12 compared to any other possible method.
|
arxiv:2412.11254
|
we investigate charge qubit measurements using a single electron transistor, with a focus on the backaction-induced renormalization of qubit parameters. it is revealed that the renormalized dynamics leads to a number of intriguing features in the detector's noise spectra, and therefore needs to be accounted for to properly understand the measurement result. noticeably, the level renormalization gives rise to a strongly enhanced signal-to-noise ratio, which can even exceed the universal upper bound imposed quantum mechanically on linear-response detectors.
|
arxiv:1010.4622
|
we propose a nonparametric factorization approach for sparsely observed tensors. the sparsity does not mean zero - valued entries are massive or dominated. rather, it implies the observed entries are very few, and even fewer with the growth of the tensor ; this is ubiquitous in practice. compared with the existent works, our model not only leverages the structural information underlying the observed entry indices, but also provides extra interpretability and flexibility - - it can simultaneously estimate a set of location factors about the intrinsic properties of the tensor nodes, and another set of sociability factors reflecting their extrovert activity in interacting with others ; users are free to choose a trade - off between the two types of factors. specifically, we use hierarchical gamma processes and poisson random measures to construct a tensor - valued process, which can freely sample the two types of factors to generate tensors and always guarantees an asymptotic sparsity. we then normalize the tensor process to obtain hierarchical dirichlet processes to sample each observed entry index, and use a gaussian process to sample the entry value as a nonlinear function of the factors, so as to capture both the sparse structure properties and complex node relationships. for efficient inference, we use dirichlet process properties over finite sample partitions, density transformations, and random features to develop a stochastic variational estimation algorithm. we demonstrate the advantage of our method in several benchmark datasets.
|
arxiv:2110.10082
|
we establish the asymptotic behaviour of the sum of squared residual autocovariances and autocorrelations for the class of multivariate power-transformed asymmetric models. we then derive a portmanteau test and establish the asymptotic distribution of the proposed statistics. these asymptotic results are illustrated by monte carlo experiments. an application to bivariate real financial data is also proposed.
|
arxiv:2404.12685
|
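the portmanteau test of arxiv:2404.12685 above is built from autocorrelations of squared residuals. a univariate, ljung-box-style sketch of such a statistic follows (the paper's multivariate power-transformed version has its own asymptotic distribution; this only shows the basic construction):

```python
import random

random.seed(1)

def acf(x, lag):
    # sample autocorrelation of x at the given lag
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    cov = sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n))
    return cov / var

def portmanteau(residuals, max_lag):
    # Ljung-Box-type statistic on SQUARED residuals: under the null of
    # no remaining (nonlinear) dependence it is approximately
    # chi-squared with max_lag degrees of freedom
    sq = [r * r for r in residuals]
    n = len(sq)
    return n * (n + 2) * sum(acf(sq, h) ** 2 / (n - h)
                             for h in range(1, max_lag + 1))

# white-noise residuals should give a small statistic on average
res = [random.gauss(0, 1) for _ in range(2000)]
q = portmanteau(res, 10)
```

in practice one would compare q to the appropriate chi-squared quantile; for residuals from a fitted model the degrees of freedom are adjusted for the number of estimated parameters, which is part of what the paper's asymptotic analysis makes precise.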
the topology of complex classical paths is investigated to discuss quantum tunnelling splittings in one - dimensional systems. here the hamiltonian is assumed to be given as polynomial functions, so the fundamental group for the riemann surface provides complete information on the topology of complex paths, which allows us to enumerate all the possible candidates contributing to the semiclassical sum formula for tunnelling splittings. this naturally leads to action relations among classically disjoined regions, revealing entirely non - local nature in the quantization condition. the importance of the proper treatment of stokes phenomena is also discussed in hamiltonians in the normal form.
|
arxiv:1709.10144
|
in this paper we study the problem of learning minimum - energy controls for linear systems from heterogeneous data. specifically, we consider datasets comprising input, initial and final state measurements collected using experiments with different time horizons and arbitrary initial conditions. in this setting, we first establish a general representation of input and sampled state trajectories of the system based on the available data. then, we leverage this data - based representation to derive closed - form data - driven expressions of minimum - energy controls for a wide range of control horizons. further, we characterize the minimum number of data required to reconstruct the minimum - energy inputs, and discuss the numerical properties of our expressions. finally, we investigate the effect of noise on our data - driven formulas, and, in the case of noise with known second - order statistics, we provide corrected expressions that converge asymptotically to the true optimal control inputs.
|
arxiv:2006.10895
|
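a scalar illustration of the minimum-energy control setting in arxiv:2006.10895 above. note the paper derives closed-form expressions directly from raw data matrices, with no identification step; the sketch below takes the simpler identify-then-design route to the same input, which is easier to show in a few lines:

```python
import random

random.seed(2)

# Scalar sketch: identify (a, b) in x_{t+1} = a x_t + b u_t by least
# squares from one noiseless experiment, then build the minimum-energy
# input steering x0 = 0 to x_target in T steps.
A_TRUE, B_TRUE = 0.9, 0.5

xs, us = [1.0], []
for _ in range(50):                      # one experiment, arbitrary inputs
    u = random.uniform(-1, 1)
    us.append(u)
    xs.append(A_TRUE * xs[-1] + B_TRUE * u)

# normal equations [sxx sxu; sxu suu][a; b] = [sxy; suy]
sxx = sum(x * x for x in xs[:-1]); suu = sum(u * u for u in us)
sxu = sum(x * u for x, u in zip(xs[:-1], us))
sxy = sum(x * y for x, y in zip(xs[:-1], xs[1:]))
suy = sum(u * y for u, y in zip(us, xs[1:]))
det = sxx * suu - sxu * sxu
a = (sxy * suu - suy * sxu) / det
b = (sxx * suy - sxu * sxy) / det

# x_T = sum_t a^(T-1-t) b u_t, so the minimum-norm (minimum-energy)
# input is proportional to the reachability coefficients c_t
T, x_target = 5, 1.0
c = [a ** (T - 1 - t) * b for t in range(T)]
gram = sum(ci * ci for ci in c)
u_opt = [ci * x_target / gram for ci in c]

x = 0.0
for u in u_opt:                          # verify on the true system
    x = A_TRUE * x + B_TRUE * u
```

with noisy data the identified (a, b) would be biased, which is exactly the regime where the paper's corrected, asymptotically exact data-driven expressions matter.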
histology-based grade classification is clinically important for many cancer types in stratifying patients into distinct treatment groups. in prostate cancer, the gleason score is a grading system used to measure the aggressiveness of prostate cancer from the spatial organization of cells and the distribution of glands. however, the subjective interpretation of the gleason score often suffers from large interobserver and intraobserver variability. previous work in deep learning-based objective gleason grading requires manual pixel-level annotation. in this work, we propose a weakly-supervised approach for grade classification in tissue micro-arrays (tma) using graph convolutional networks (gcns), in which we model the spatial organization of cells as a graph to better capture the proliferation and community structure of tumor cells. as node-level features in our graph representation, we learn the morphometry of each cell using a contrastive predictive coding (cpc)-based self-supervised approach. we demonstrate that on a five-fold cross validation our method can achieve $0.9659 \pm 0.0096$ auc using only tma-level labels. our method demonstrates a 39.80% improvement over standard gcns with texture features and a 29.27% improvement over gcns with vgg19 features. our proposed pipeline can be used to objectively stratify low and high risk cases, reducing inter- and intra-observer variability and pathologist workload.
|
arxiv:1910.13328
|
we report a detailed optical study of the clusters abell 2125 and 2645. both clusters have z = 0.25 and richness class 4, yet contrast strongly in blue fraction and radio galaxy population. we find 27 spectroscopically confirmed radio galaxies in the blue cluster, a2125, and only four in a2645. the excess radio population in a2125 occurs entirely at l(20cm) < 10^23 w/hz, where one expects star formation to be responsible for the radio emission. most of the radio galaxies have optical properties consistent with types later than e/s0, but emission lines weaker than would be expected for the sfrs implied by the radio emission. thus we suspect dust obscuration is important. the cluster-cluster merger in a2125 seems likely to play a part in these phenomena.
|
arxiv:astro-ph/9905004
|
we study resonance patterns of a spiral-shaped dielectric microcavity with chaotic ray dynamics. many resonance patterns of this microcavity, with refractive indices $n = 2$ and 3, exhibit strong localization of simple geometric shape, and we call them {\em quasi-scarred resonances} in the sense that there is, unlike in conventional scarring, no underlying periodic orbit. it is shown that the formation of a quasi-scarred pattern can be understood in terms of ray dynamical probability distributions and wave properties like uncertainty and interference.
|
arxiv:nlin/0403025
|
we refine our previous study of a $ud\bar{b}\bar{b}$ tetraquark resonance with quantum numbers $i(j^p) = 0(1^-)$, which is based on antiheavy-antiheavy lattice qcd potentials, by including heavy quark spin effects via the mass difference of the $b$ and the $b^*$ meson. this leads to a coupled channel schrödinger equation, where the two channels correspond to $bb$ and $b^* b^*$, respectively. we search for $t$ matrix poles in the complex energy plane, but do not find any indication for the existence of a tetraquark resonance in this refined coupled channel approach. we also vary the antiheavy-antiheavy potentials as well as the $b$ quark mass to further understand the dynamics of this four-quark system.
|
arxiv:2211.15765
|
this paper presents three batch estimation methods that use noisy ground velocity and heading measurements from a vehicle executing a circular orbit ( or similar large heading change maneuver ) to estimate the speed and direction of a steady, uniform, flow - field. the methods are based on a simple kinematic model of the vehicle ' s motion and use curve - fitting or nonlinear least - square optimization. a monte carlo simulation with randomized flow conditions is used to evaluate the batch estimation methods while varying the measurement noise of the data and the interval of unique heading traversed during the maneuver. the methods are also compared using experimental data obtained with a bluefin - 21 unmanned underwater vehicle performing a series of circular orbit maneuvers over a five hour period in a tide - driven flow.
|
arxiv:2402.17078
|
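the flow-field estimation problem of arxiv:2402.17078 above becomes linear in the unknowns once written component-wise under the simple kinematic model: vg_x = s*cos(h) + w_x and vg_y = s*sin(h) + w_y, with vehicle speed s and flow (w_x, w_y). below is a linear least-squares sketch under that model with synthetic orbit data (one plausible batch estimator; the paper's three methods, and its noise levels, may differ):

```python
import math
import random

random.seed(3)

# Synthetic circular-orbit data: noisy ground-velocity components at
# evenly spaced headings. True values are illustrative.
S_TRUE, WX_TRUE, WY_TRUE = 1.5, 0.3, -0.2   # speed and flow (m/s)
NOISE = 0.05

rows, rhs = [], []
for i in range(72):                          # one full orbit, 5-degree steps
    h = math.radians(5 * i)
    rows.append([math.cos(h), 1.0, 0.0])     # vg_x equation in (s, wx, wy)
    rhs.append(S_TRUE * math.cos(h) + WX_TRUE + random.gauss(0, NOISE))
    rows.append([math.sin(h), 0.0, 1.0])     # vg_y equation
    rhs.append(S_TRUE * math.sin(h) + WY_TRUE + random.gauss(0, NOISE))

# normal equations A^T A x = A^T b, solved by Gaussian elimination
n = 3
ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
atb = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(n)]
for col in range(n):                         # forward elimination
    for r in range(col + 1, n):
        f = ata[r][col] / ata[col][col]
        for c2 in range(col, n):
            ata[r][c2] -= f * ata[col][c2]
        atb[r] -= f * atb[col]
sol = [0.0] * n
for r in range(n - 1, -1, -1):               # back substitution
    sol[r] = (atb[r] - sum(ata[r][c] * sol[c]
                           for c in range(r + 1, n))) / ata[r][r]
speed, flow_x, flow_y = sol
```

a full heading sweep is what makes the three unknowns identifiable; with only a small heading interval the cos/sin columns become nearly collinear with the constant columns, which is the sensitivity the paper's monte carlo study probes.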
we prove new identities between the values of the rogers dilogarithm function and describe a connection between these identities and spectra in conformal field theory.
|
arxiv:hep-th/9212150
|
we prove a hölder inequality for kms states, which generalises a well-known trace inequality. our results are based on the theory of non-commutative $l_p$-spaces.
|
arxiv:1103.3608
|
we study the single spin asymmetry in the lepton angular distribution of drell-yan processes in the framework of collinear factorization. the asymmetry has been studied in the past and different results have been obtained. in our study we take an approach different from those used in the existing studies. we explicitly calculate the transverse-spin dependent part of the differential cross-section with suitable parton states. because the spin is transverse, one has to take multi-parton states for this purpose. our result agrees with one of the existing results. a possible reason for the disagreement with the others is discussed.
|
arxiv:1203.6415
|
we propose a new nonparametric procedure for the detection and estimation of multiple structural breaks in the autocovariance function of a multivariate (second-order) piecewise stationary process, which also identifies the components of the series where the breaks occur. the new method is based on a comparison of the estimated spectral distribution on different segments of the observed time series and consists of three steps: it starts with a consistent test, which allows us to establish the existence of structural breaks at a controlled type i error. secondly, it estimates sets containing possible break points, and finally these sets are reduced to identify the relevant structural breaks and the corresponding components which are responsible for the changes in the autocovariance structure. in contrast to all other methods which have been proposed in the literature, our approach does not make any parametric assumptions, is not especially designed for detecting one single change point, and addresses the problem of multiple structural breaks in the autocovariance function directly, without using the binary segmentation algorithm. we prove that the new procedure detects all components and the corresponding locations where structural breaks occur with probability converging to one as the sample size increases, and we provide data-driven rules for the selection of all regularization parameters. the results are illustrated by analyzing financial returns, and in a simulation study it is demonstrated that the new procedure outperforms the currently available nonparametric methods for detecting breaks in the dependency structure of multivariate time series.
|
arxiv:1309.1309
|
by using an appropriate version of the synchronous sir model, we studied the effects of dilution and mobility on the critical immunization rate. we showed that, by applying time-dependent monte carlo (mc) simulations at criticality, and taking into account the optimization of the power law for the density of infected individuals, the critical immunization necessary to block the epidemic in two-dimensional lattices decreases as dilution increases, with a logarithmic dependence. on the other hand, mobility minimizes such effects, and the critical immunization is greater when the probability of movement of the individuals increases.
|
arxiv:1411.7105
|
the mechanical failure of amorphous media is a ubiquitous phenomenon from material engineering to geology. it has been noticed for a long time that the phenomenon is " scale - free ", indicating some type of criticality. in spite of attempts to invoke " self - organized criticality ", the physical origin of this criticality, and also its universal nature, being quite insensitive to the nature of microscopic interactions, remained elusive. recently we proposed that the precise nature of this critical behavior is manifested by a spinodal point of a thermodynamic phase transition. moreover, at the spinodal point there exists a divergent correlation length which is associated with the system - spanning instabilities ( known also as shear bands ) which are typical to the mechanical yield. demonstrating this requires the introduction of an " order parameter " that is suitable for distinguishing between disordered amorphous systems, and an associated correlation function, suitable for picking up the growing correlation length. the theory, the order parameter, and the correlation functions used are universal in nature and can be applied to any amorphous solid that undergoes mechanical yield. critical exponents for the correlation length divergence and the system size dependence are estimated. the phenomenon is seen at its sharpest in athermal systems, as is explained below ; in this paper we extend the discussion also to thermal systems, showing that at sufficiently high temperatures the spinodal phenomenon is destroyed by thermal fluctuations.
|
arxiv:1704.05285
|
this article reports on how diagrammatic identities of yang-mills theory translate to diagrammatics for pure gravity. for this, we consider the einstein-hilbert action and follow the approach of capper, leibbrandt, and medrano and expand the inverse metric density around the minkowski metric. by analogy to yang-mills theory, cancellation identities are constructed for the graviton as well as the ghost vertices up to the valency of six.
|
arxiv:2007.08894
|
energy-conserving, angular momentum-changing collisions between protons and highly excited rydberg hydrogen atoms are important for precise understanding of atomic recombination at the photon decoupling era, and the elemental abundance after primordial nucleosynthesis. early approaches to $\ell$-changing collisions used perturbation theory for only dipole-allowed ($\delta\ell = \pm 1$) transitions. an exact non-perturbative quantum mechanical treatment is possible, but it comes at computational cost for highly excited rydberg states. in this note we show how to obtain a semi-classical limit that is accurate and simple, and develop further physical insights afforded by the non-perturbative quantum mechanical treatment.
|
arxiv:1707.09256
|
this study focuses on comparing deep learning methods for the segmentation and quantification of uncertainty in prostate segmentation from mri images. the aim is to improve the workflow of prostate cancer detection and diagnosis. seven different u-net-based architectures, augmented with monte-carlo dropout, are evaluated for automatic segmentation of the central zone, peripheral zone, transition zone, and tumor, with uncertainty estimation. the top-performing model in this study is the attention r2u-net, achieving a mean intersection over union (iou) of 76.3% and a dice similarity coefficient (dsc) of 85% for segmenting all zones. additionally, attention r2u-net exhibits the lowest uncertainty values, particularly in the boundaries of the transition zone and tumor, when compared to the other models.
|
arxiv:2308.04653
|
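the uncertainty quantification in arxiv:2308.04653 above relies on monte-carlo dropout: dropout is kept active at inference, several stochastic forward passes are run, and uncertainty is read from the spread of the outputs. a toy one-layer illustration of the mechanism (the weights and input here are made up; the paper applies this inside full u-net-style segmentation models):

```python
import random
import statistics

random.seed(4)

# Toy MC-dropout: a single linear "layer" with inverted dropout that
# stays active at prediction time. Weights are illustrative only.
W = [0.8, -0.4, 0.6, 0.2]
P_DROP = 0.5

def forward(x):
    # dropout mask resampled on every call; kept units scaled by 1/(1-p)
    return sum(w * xi * (random.random() >= P_DROP) / (1 - P_DROP)
               for w, xi in zip(W, x))

def predict_with_uncertainty(x, passes=200):
    # mean of the stochastic passes is the prediction, their spread the
    # (epistemic) uncertainty estimate
    outs = [forward(x) for _ in range(passes)]
    return statistics.mean(outs), statistics.stdev(outs)

mean, std = predict_with_uncertainty([1.0, 0.5, -0.2, 0.3])
```

in the segmentation setting this per-output standard deviation, computed pixel-wise, is what produces the uncertainty maps that the study reports as being largest at zone and tumor boundaries.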
we characterize a novel orthorhombic phase ( gamma ) of naalh4, discovered using first - principles molecular dynamics, and discuss its relevance to the dehydrogenation mechanism. this phase is close in energy to the known low - temperature structure and becomes the stabler phase above 320 k, thanks to a larger vibrational entropy associated with alh4 rotational modes. the structural similarity of gamma - naalh4 to alpha - na3alh6 suggests it acts as a key intermediate during hydrogen release. findings are consistent with recent experiments recording an unknown phase during dehydrogenation.
|
arxiv:0910.2760
|
( abridged ) using the uh8k mosaic camera, we have measured the angular correlation function \omega(\theta) for 100,000 galaxies over four widely separated fields totalling ~1 deg^2 and reaching iab ~ 25.5. with this sample we investigate the dependence of \omega(\theta) at 1', a_\omega(1'), on sample median iab magnitude in the range 19.5 < i(ab-med) < 24. our results show that a_\omega(1') decreases monotonically to iab ~ 25. at bright magnitudes, \omega(\theta) is consistent with a power-law of slope \delta = -0.8 for 0.2' < \theta < 3.0', but at fainter magnitudes we find \delta ~ -0.6. at the 3\sigma level, our observations are still consistent with \delta = -0.8. furthermore, in the magnitude ranges 18.5 < iab < 24.0 and 18.5 < iab < 23.0 we find galaxies with 2.6 < (v-i)ab < 2.9 have a_\omega(1')'s which are ~10x higher than field values. we demonstrate that our model redshift distributions for the faint galaxy population are in good agreement with current spectroscopic observations. using these predictions, we find that for low-omega cosmologies and assuming r_0 = 4.3/h mpc, in the range 19.5 < i(ab-med) < 22, the growth of galaxy clustering is \epsilon ~ 0. however, at 22 < i(ab-med) < 24.0, our observations are consistent with \epsilon > 1. models with \epsilon ~ 0 cannot simultaneously match both bright and faint measurements of a_\omega(1'). we show how this result is a natural consequence of the ``bias-free'' nature of the \epsilon formalism and is consistent with the field galaxy population in the range 22.0 < iab < 24.0 being dominated by galaxies of low intrinsic luminosity.
|
arxiv:astro-ph/0107526
|
we investigate the performance of superconducting nanowire photon detectors fabricated from ultra-thin nb. a direct comparison is made between these detectors and similar nanowire detectors fabricated from nbn. we find that nb detectors are significantly more susceptible than nbn to thermal instability (latching) at high bias. we show that the devices can be stabilized by reducing the input resistance of the readout. nb detectors optimized in this way are shown to have approximately 2/3 the reset time of similar large-active-area nbn detectors of the same geometry, with approximately 6% detection efficiency for single photons at 470 nm.
|
arxiv:0901.1146
|
we study noncooperative games in which each player's objective is composed of a sequence of ordered, and potentially conflicting, preferences. problems of this type naturally model a wide variety of scenarios: for example, drivers at a busy intersection must balance the desire to make forward progress with the risk of collision. mathematically, these problems possess a nested structure, and to behave properly players must prioritize their most important preference, and only consider less important preferences to the extent that they do not compromise performance on more important ones. we consider multi-agent, noncooperative variants of these problems, and seek generalized nash equilibria in which each player's decision reflects both its hierarchy of preferences and other players' actions. we make two key contributions. first, we develop a recursive approach for deriving the first-order optimality conditions of each player's nested problem. second, we propose a sequence of increasingly tight relaxations, each of which can be transcribed as a mixed complementarity problem and solved via existing methods. experimental results demonstrate that our approach reliably converges to equilibrium solutions that strictly reflect players' individual ordered preferences.
|
arxiv:2410.21447
|
a conceptual set - up for measuring the electric field in silicon detectors by electro - optical imaging is proposed. it is based on the franz - keldysh effect which describes the electric field dependence of the absorption of light with an energy close to the silicon band gap. using published data, a measurement accuracy of 1 to 4 kv / cm is estimated. the set - up is intended for determining the electric field in radiation - damaged silicon detectors as a function of irradiation fluence and particle type, temperature and bias voltage. the overall concept and the individual components of the set - up are presented.
|
arxiv:2007.15503
|