text | source
---|---|
StarFinder is an IDL code for the deep analysis of stellar fields, designed for well-sampled images with high and low Strehl factor. An important feature is the possibility to measure the anisoplanatic effect in wide-field adaptive optics observations and to exploit this knowledge to improve the analysis of the observed field. A description of the method and applications to real AO data are presented.
|
arxiv:astro-ph/9911354
|
We present a bilateral teleoperation system for task learning and robot motion generation. Our system includes a bilateral teleoperation platform and deep learning software. The software uses human demonstrations performed on the teleoperation platform to collect visual images and robot encoder values, and it leverages these paired datasets to learn the inter-modal correspondence between visual images and robot motion. In detail, it combines deep convolutional autoencoders (DCAE) over image regions with a recurrent neural network with long short-term memory units (LSTM-RNN) over robot motor angles to learn motions taught by human teleoperation. The learned models are used to predict new motion trajectories for similar tasks. Experimental results show that our system can adaptively generate motion for similar scooping tasks. A detailed analysis is performed on the failure cases in the experimental results, and some insights about the capabilities and limitations of the system are summarized.
|
arxiv:1810.10414
|
We show how the directional collective response of atomic arrays to light can be exploited for the dissipative generation of entangled atomic states, relevant, e.g., for quantum metrology. We consider an atomic array illuminated by a paraxial beam of a squeezed-vacuum field and demonstrate that quantum-squeezing correlations are dissipatively transferred to the array atoms, resulting in an atomic spin-squeezed steady state. We find that the entanglement-transfer efficiency, and hence the degree of spin squeezing, is determined by the resonant optical reflectivity of the array. Considering realistic cases of a finite-size array and illuminating beam, we find how the spin-squeezing strength scales with system parameters, such as the number of layers in the array and its spatial overlap with the beam. We discuss applications in atomic clocks in both the optical and microwave domains.
|
arxiv:2311.03898
|
In this overview we discuss some recent non-uniqueness results for the isentropic Euler equations of gas dynamics, with particular attention to the role of some admissibility criteria proposed in the literature.
|
arxiv:1508.02937
|
An empirical Bayes problem has an unknown prior to be estimated from data. The predictive recursion (PR) algorithm provides fast nonparametric estimation of mixing distributions and is ideally suited for empirical Bayes applications. This paper presents a general notion of empirical Bayes asymptotic optimality, and it is shown that PR-based procedures satisfy this property under certain conditions. As an application, the problem of in-season prediction of baseball batting averages is considered. There, the PR-based empirical Bayes rule performs well in terms of prediction error and its ability to capture the distribution of the latent features.
|
arxiv:1210.5235
|
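The predictive recursion update in the abstract above is a one-pass stochastic recursion over the data. The following is a minimal sketch of PR for the batting-average setting, assuming a binomial likelihood, a uniform grid on [0, 1], and a standard decaying weight sequence; the function names, grid, and weight exponent are illustrative choices, not the paper's.

```python
# Predictive recursion (PR) sketch for nonparametric mixing-density estimation.
# Model: x_i ~ Binomial(n_i, theta), theta ~ f (unknown mixing density on [0, 1]),
# as in the batting-average application. Grid, weights, and kernel are illustrative.
import numpy as np
from math import comb

def predictive_recursion(x, n, grid, f0=None, gamma=0.67):
    """One pass of PR over data (x_i hits out of n_i at-bats)."""
    f = np.ones_like(grid, dtype=float) if f0 is None else f0.copy()
    d = grid[1] - grid[0]                       # uniform grid spacing
    f /= f.sum() * d                            # normalize the initial guess
    for i, (xi, ni) in enumerate(zip(x, n), start=1):
        w = (i + 1) ** (-gamma)                 # decaying weight sequence
        k = np.array([comb(ni, xi) * t**xi * (1 - t)**(ni - xi) for t in grid])
        m = (k * f).sum() * d                   # current marginal density of x_i
        f = (1 - w) * f + w * k * f / m         # PR update
        f /= f.sum() * d                        # guard against numerical drift
    return f

rng = np.random.default_rng(0)
theta = rng.beta(8, 20, size=200)               # simulated "true" latent abilities
n = np.full(200, 50)
x = rng.binomial(n, theta)
grid = np.linspace(0.005, 0.995, 100)
f_hat = predictive_recursion(x, n, grid)
print(round(float((f_hat * grid).sum() * (grid[1] - grid[0])), 2))  # mean of estimated mixing density
```

The estimated mixing density can then drive the empirical Bayes rule: each player's prediction is the posterior mean of theta under `f_hat` given their observed record.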
A popular way to create detailed yet easily controllable 3D shapes is via procedural modeling, i.e., generating geometry using programs. Such programs consist of a series of instructions along with their associated parameter values. To fully realize the benefits of this representation, a shape program should be compact and only expose degrees of freedom that allow for meaningful manipulation of the output geometry. One way to achieve this goal is to design higher-level macro operators that, when executed, expand into a series of commands from the base shape-modeling language. However, manually authoring such macros, much like shape programs themselves, is difficult and largely restricted to domain experts. In this paper, we present ShapeMOD, an algorithm for automatically discovering macros that are useful across large datasets of 3D shape programs. ShapeMOD operates on shape programs expressed in an imperative, statement-based language. It is designed to discover macros that make programs more compact by minimizing the number of function calls and free parameters required to represent an input shape collection. We run ShapeMOD on multiple collections of programs expressed in a domain-specific language for 3D shape structures. We show that it automatically discovers a concise set of macros that abstract out common structural and parametric patterns that generalize over large shape collections. We also demonstrate that the macros found by ShapeMOD improve performance on downstream tasks, including shape generative modeling and inferring programs from point clouds. Finally, we conduct a user study indicating that ShapeMOD's discovered macros make interactive shape editing more efficient.
|
arxiv:2104.06392
|
The origin of large magnetic fields in the universe remains unknown. We investigate here a mechanism operating before recombination and based on known physics. The source of the vorticity is the change in the photon distribution function caused by fluctuations in the background photons. We show that the magnetic field generated in the MHD limit, due to Coulomb scattering, is of the order of $10^{-49}$ G on a coherence scale of 10 kpc. We explicitly show that the magnetic fields generated by this process are sustainable and are not erased by resistive diffusion. We compare the results with current observations and discuss the implications. Our seed magnetic fields are generated on small scales, whereas the main mechanisms studied in the literature operate on scales larger than 1 Mpc. However, compared to more exotic theories generating seed magnetic fields on similar scales, the strength of our fields is generally smaller.
|
arxiv:1504.07853
|
Pruning well-trained neural networks is effective for achieving a promising accuracy-efficiency trade-off in computer vision regimes. However, most existing pruning algorithms focus only on the classification task defined on the source domain. In contrast to the strong transferability of the original model, a pruned network is hard to transfer to complicated downstream tasks such as object detection (arXiv:2012.04643). In this paper, we show that the image-level pretraining task is not capable of pruning models for diverse downstream tasks. To mitigate this problem, we introduce image reconstruction, a pixel-level task, into the traditional pruning framework. Concretely, an autoencoder is trained based on the original model, and then the pruning process is optimized with both autoencoder and classification losses. An empirical study on benchmark downstream tasks shows that the proposed method clearly outperforms state-of-the-art results.
|
arxiv:2202.11484
|
We present results on the effects of interstitial oxygen and carbon on a bulk-niobium superconducting radio-frequency cavity. Previous experiments have shown that high-temperature (~800 $^\circ\text{C}$) nitrogen doping plays the dominant role in the reduction of the electron mean free path in the RF penetration layer of niobium, which leads to a decrease in microwave surface resistance and a suppression of the temperature-dependent component of the surface resistance with increasing accelerating gradient. In this work, we show that oxygen and carbon doping has very similar effects on cavity performance, demonstrating that these effects are not unique to nitrogen. The preparation method used to introduce interstitial oxygen and carbon has the advantage that it is done at lower temperatures than high-temperature nitrogen doping and does not require post-treatment electropolishing.
|
arxiv:1612.08291
|
In this paper we introduce a general version of the Loewner differential equation which allows us to present a new and unified treatment of both the radial equation introduced in 1923 by K. Loewner and the chordal equation introduced in 2000 by O. Schramm. In particular, we prove that evolution families in the unit disc are in one-to-one correspondence with solutions to this new type of Loewner equation. Also, we give a Berkson-Porta type formula for non-autonomous weak holomorphic vector fields which generate such Loewner differential equations, and we study in detail geometric and dynamical properties of evolution families.
|
arxiv:0807.1594
|
Since its first identification on the surface of Ganymede in 1995, molecular oxygen (O2) ice has been at the center of a scientific debate, as the surface temperature of the Jovian moon is on average well above the freezing point of O2. Laboratory evidence suggested that solid O2 may either exist in a cold (<50 K) subsurface layer of the icy surface of Ganymede or reside in an atmospheric haze of the moon. Alternatively, O2 is constantly replenished at the surface through ion irradiation of water-containing ices. A conclusive answer on the existence of solid O2 on the surface of Ganymede has been hampered by the lack of detailed, extensive observational datasets. We present new ground-based, high-resolution spectroscopic observations of Ganymede's surface obtained at the Telescopio Nazionale Galileo. These are combined with dedicated laboratory measurements of ultraviolet-visible (UV-vis) photoabsorption spectra of O2 ice, both pure and mixed with other species of potential interest for the Galilean satellites. Our study confirms that the two bands identified in the visible spectra of Ganymede's surface are due to the (1,0) and (0,0) transition bands of O2 ice. Oxygen-rich ice mixtures including water (H2O) and carbon dioxide (CO2) reproduce observational reflectance data of Ganymede's surface better than pure O2 ice in the temperature range 20-35 K. Solid H2O and CO2 also provide an environment where O2 ice can be trapped at higher temperatures than its pure-ice desorption under vacuum space conditions. Our experiments at different temperatures also show that the (1,0)/(0,0) ratio of the CO2:O2 = 1:2 ice mixture at 35 K has the value closest to observations, while at 30 K the (1,0)/(0,0) ratio seems to be mixture independent, with the exception of the N2:O2 = 1:2 ice mixture. The present work will support the ESA/JUICE mission to the Jovian system.
|
arxiv:2205.01659
|
We study the propagation of guided light along an array of three-level atoms in the vicinity of an optical nanofiber under the condition of electromagnetically induced transparency. We examine two schemes of atomic levels and field polarizations where the guided probe field is quasilinearly polarized along the major or minor principal axis, which is parallel or perpendicular, respectively, to the radial direction of the atomic position. Our numerical calculations indicate that 200 cesium atoms in a linear array with a length of 100 $\mu$m, at a distance of 200 nm from the surface of a nanofiber with a radius of 250 nm, can slow down the speed of guided probe light by a factor of about $3.5\times10^6$ (the corresponding group delay is about 1.17 $\mu$s). In the neighborhood of the Bragg resonance, a significant fraction of the guided probe light can be reflected back with a negative group delay. The reflectivity and the group delay of the reflected field do not depend on the propagation direction of the probe field. However, when the input guided light is quasilinearly polarized along the major principal axis, the transmittivity and the group delay of the transmitted field depend substantially on the propagation direction of the probe field. Under the Bragg resonance condition, an array of atoms prepared in an appropriate internal state can transmit guided light polarized along the major principal axis in one specific direction even in the limit of infinitely large atom numbers. The directionality of transmission of guided light through the array of atoms is a consequence of the existence of a longitudinal component of the guided light field as well as the ellipticity of both the field polarization and the atomic dipole vector.
|
arxiv:1502.04151
|
This paper focuses on inverse reinforcement learning for autonomous navigation using distance and semantic category observations. The objective is to infer a cost function that explains demonstrated behavior while relying only on the expert's observations and state-control trajectory. We develop a map encoder, which infers semantic category probabilities from the observation sequence, and a cost encoder, defined as a deep neural network over the semantic features. Since the expert cost is not directly observable, the model parameters can only be optimized by differentiating the error between demonstrated controls and a control policy computed from the cost estimate. We propose a new model of expert behavior that enables error minimization using a closed-form subgradient computed only over a subset of promising states via a motion-planning algorithm. Our approach allows generalizing the learned behavior to new environments with new spatial configurations of the semantic categories. We analyze the different components of our model in a MiniGrid environment. We also demonstrate that our approach learns to follow traffic rules in the autonomous-driving CARLA simulator by relying on semantic observations of buildings, sidewalks, and road lanes.
|
arxiv:2101.00186
|
Entanglement, the non-local correlations present in multipartite quantum systems, is a curious feature of quantum mechanics and the fuel of quantum technology. It is therefore a major priority to develop energy-conserving and simple methods for generating high-fidelity entangled states. In the case of light, entanglement can be realized by interactions with matter, although the required nonlinear interaction is typically weak, limiting its applicability. Here, we show how a single two-level emitter deterministically coupled to light in a nanophotonic waveguide is used to realize genuine photonic quantum entanglement for excitation at the single-photon level. By virtue of the efficient optical coupling, two-photon interactions are strongly mediated by the emitter, realizing a giant nonlinearity that leads to entanglement. We experimentally generate and verify energy-time entanglement by violating a Bell inequality (Clauser-Horne-Shimony-Holt Bell parameter of $S = 2.67(16) > 2$) in an interferometric measurement of the two-photon scattering response. As an attractive feature of this approach, the two-level emitter acts as a passive scatterer initially prepared in the ground state, i.e., no advanced spin control is required. This experiment is a fundamental advancement that may pave a new route toward ultra-low-energy synthesis of photonic entangled states for quantum simulators or metrology.
|
arxiv:2306.12801
|
We study the initial beam-acquisition problem in millimeter-wave (mm-wave) networks from the perspective of best-arm identification in multi-armed bandits (MABs). For the stationary environment, we propose a novel algorithm called Concurrent Beam Exploration (CBE), in which multiple beams are grouped based on the beam indices and simultaneously activated to detect the presence of the user. The best beam is then identified using a Hamming decoding strategy. For the case of orthogonal and highly directional thin beams, we characterize the performance of CBE in terms of the probability of missed detection and false alarm in a beam group (BG). Leveraging this, we derive the probability of beam-selection error and prove that CBE outperforms the state-of-the-art strategies in this metric. Then, for abruptly changing environments, e.g., in the case of moving blockages, we characterize the performance of the classical sequential halving (SH) algorithm. In particular, we derive conditions on the distribution of the change under which the beam-selection error is exponentially bounded. In case the change is restricted to a subset of the beams, we devise a strategy called K-Sequential Halving and Exhaustive Search (K-SHES) that leads to an improved bound on the beam-selection error compared to SH. This policy is particularly useful when a near-optimal beam becomes optimal during the beam-selection procedure due to abruptly changing channel conditions. Finally, we demonstrate the efficacy of the proposed scheme by employing it in a tandem beam-refinement and data-transmission scheme.
|
arxiv:2307.05023
|
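The classical sequential halving baseline analyzed above splits a fixed pull budget over log2(K) elimination rounds. The sketch below illustrates it on a toy beam-sweep model; the per-beam detection probabilities, budget, and seed are invented for illustration and are not taken from the paper.

```python
# Sequential halving (SH) for best-beam (best-arm) identification.
# Toy model: arm = beam index, a pull returns 1 if the user is detected.
import math
import random

def sequential_halving(pull, n_arms, budget, rng):
    """Return the arm SH identifies as best, using at most `budget` pulls."""
    arms = list(range(n_arms))
    rounds = math.ceil(math.log2(n_arms))
    for _ in range(rounds):
        # Split the budget evenly across rounds and surviving arms.
        pulls_each = max(1, budget // (len(arms) * rounds))
        means = {a: sum(pull(a, rng) for _ in range(pulls_each)) / pulls_each
                 for a in arms}
        arms.sort(key=lambda a: means[a], reverse=True)
        arms = arms[:max(1, len(arms) // 2)]    # eliminate the worse half
    return arms[0]

# Illustrative per-beam detection probabilities (beam 2 is the true best).
snr = [0.10, 0.25, 0.90, 0.40]
pull = lambda a, rng: 1 if rng.random() < snr[a] else 0
rng = random.Random(7)
best = sequential_halving(pull, n_arms=4, budget=4000, rng=rng)
print(best)
```

The K-SHES variant described in the abstract additionally runs an exhaustive search over a retained subset of K beams to cope with abrupt changes; the elimination core is the same.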
The structural relaxation of multilayer graphene is essential in describing the interesting electronic properties induced by intentional misalignment of successive layers, including the recently reported superconductivity in twisted bilayer graphene. This is difficult to accomplish without an accurate interatomic potential. Here, we present a new registry-dependent Kolmogorov-Crespi-type interatomic potential to model interlayer interactions in multilayer graphene structures. It consists of two parts, representing attractive interaction due to dispersion and repulsive interaction due to anisotropic overlap of electronic orbitals. An important new feature is a dihedral-angle-dependent term that is added to the repulsive part in order to describe correctly several distinct stacking states that the original Kolmogorov-Crespi potential cannot distinguish. We refer to the new model as the dihedral-angle-corrected registry-dependent interlayer potential (DRIP). Computations for several test problems show that DRIP correctly reproduces the binding, sliding, and twisting energies and forces obtained from ab initio total-energy calculations based on density functional theory. We use the new potential to study the structural properties of a twisted graphene bilayer and the exfoliation of graphene from graphite. Our potential is available through the OpenKIM interatomic potential repository at https://openkim.org.
|
arxiv:1808.04485
|
In this paper we present our studies of the stellar populations and star formation histories (SFHs) for the Reines et al. sample of 136 dwarf galaxies that host active galactic nuclei (AGNs), selected from the Sloan Digital Sky Survey Data Release 8. We derive stellar populations and reconstruct SFHs for these AGN-host dwarfs using the stellar population synthesis code STARLIGHT. Our results suggest that these AGN-host dwarfs have assembled their stellar masses within a narrow period of time, with stellar mass-weighted ages in the range of $10^9 - 10^{10}$ yr, but show a wide diversity of SFHs, with luminosity-weighted stellar ages in the range of $10^7 - 10^{10}$ yr. The old population ($t > 10^9$ yr) contributes most of the galaxy light for the majority of the sample; the young population ($t < 10^8$ yr) also appears in significant but widely varying fractions, while the intermediate-age population ($10^8 < t < 10^9$ yr) in general contributes less to the optical continuum at 4020 $\r{A}$. We also find that these dwarfs follow a mass-metallicity relation similar to that of normal star-forming galaxies, indicating that AGNs have little effect on the chemical evolution of the host galaxy. We further investigate the relation between the derived SFHs and the morphology of the host galaxy and find no correlation. Comparing the SFHs with the luminosity of the [OIII]$\lambda$5007 line ($L_{\rm [OIII]}$), we find a mild correlation when $L_{\rm [OIII]} > 10^{39}$ erg s$^{-1}$, indicating a physical connection between star formation and AGN activity in these dwarf galaxies.
|
arxiv:2009.05227
|
Ground-state and finite-temperature properties of a system of coupled frustrated and/or dimerized spin-1/2 chains, modeling e.g. the CuGeO$_3$ compound, are reviewed. Special emphasis is put on the investigation of the role of impurity doping. A chain-mean-field computation, combining exact diagonalizations of the chain Hamiltonians with a mean-field treatment of the weak interchain couplings, is performed in order to map the microscopic model onto a low-energy effective model. The latter describes a two-dimensional system of effective spin-1/2 local moments interacting via spatially anisotropic long-range spin-exchange interactions. An extensive study of this effective model is performed by stochastic series expansion quantum Monte Carlo for a wide range of temperatures and impurity concentrations. Interesting scaling behaviors of the uniform and staggered spin susceptibilities (above a small Néel ordering temperature due to a residual 3D coupling) can be interpreted in terms of the formation of large clusters of correlated spins carrying a finite magnetization. Such results are reproduced satisfactorily by a new real-space RG able to deal with long-range interactions in two dimensions.
|
arxiv:cond-mat/0502369
|
We characterize $n$-rectifiable metric measure spaces as those spaces that admit a countable Borel decomposition such that each piece has positive and finite $n$-densities and satisfies one of the following: it is an $n$-dimensional Lipschitz differentiability space; it has $n$ independent Alberti representations; it satisfies David's condition for an $n$-dimensional chart. The key tool is an iterative grid construction which allows us to show that the image under a chart map of a ball with a high density of curves from the Alberti representations contains a large portion of a uniformly large ball, and hence satisfies David's condition. This allows us to apply previously known "bilipschitz pieces" results on the charts.
|
arxiv:1409.4242
|
Fog computing has emerged as a promising technology that can bring cloud applications closer to the physical IoT devices at the network edge. While it is widely known what cloud computing is, how data centers can build the cloud infrastructure, and how applications can make use of this infrastructure, there is no common picture of what fog computing and a fog node, as its main building block, really are. One of the first attempts to define a fog node was made by Cisco, qualifying a fog computing system as a mini-cloud located at the edge of the network and implemented through a variety of edge devices, interconnected by a variety of, mostly wireless, communication technologies. Thus, a fog node would be the infrastructure implementing said mini-cloud. Other proposals have their own definitions of what a fog node is, usually in relation to a specific edge device, a specific use case, or an application. In this paper, we first survey the state of the art in technologies for fog computing nodes as building blocks of fog computing, paying special attention to contributions that analyze the role edge devices play in the fog-node definition. We summarize and compare the concepts and lessons learned from their implementation, and we show how a conceptual framework is emerging towards a unifying fog-node definition. We focus on core functionalities of a fog node as well as on the accompanying opportunities and challenges towards their practical realization in the near future.
|
arxiv:1611.09193
|
Any supervised machine learning analysis is required to provide an estimate of the out-of-sample predictive performance. However, it is imperative to also provide a quantification of the uncertainty of this performance in the form of a confidence or credible interval (CI), and not just a point estimate. In an AutoML setting, estimating the CI is challenging due to the "winner's curse", i.e., the bias of estimation due to cross-validating several machine learning pipelines and selecting the winning one. In this work, we perform a comparative evaluation of 9 state-of-the-art methods and variants for CI estimation in an AutoML setting on a corpus of real and simulated datasets. The methods are compared in terms of inclusion percentage (does a 95% CI include the true performance at least 95% of the time?), CI tightness (tighter CIs are preferable as being more informative), and execution time. The evaluation is the first that covers most, if not all, such methods and extends previous work to imbalanced and small-sample tasks. In addition, we present a variant, called BBC-F, of an existing method (the bootstrap bias correction, or BBC) that maintains the statistical properties of the BBC but is more computationally efficient. The results support that BBC-F and BBC dominate the other methods in all metrics measured.
|
arxiv:2406.08099
|
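The bootstrap bias correction (BBC) discussed above re-selects the winning pipeline inside each bootstrap resample of the pooled out-of-sample predictions and scores it on the out-of-bag samples, which removes the winner's-curse optimism. The sketch below is a minimal illustration of that idea; the simulated prediction matrix, accuracies, and function name are invented for the example, and this is not the paper's BBC-F variant.

```python
# Bootstrap bias correction (BBC) sketch for the "winner's curse" in
# pipeline selection over pooled out-of-sample predictions.
import random

def bbc(preds, y, n_boot=200, rng=None):
    """preds[c][i]: out-of-sample prediction of configuration c on sample i.
    Returns a bias-corrected accuracy estimate for the selected winner."""
    rng = rng or random.Random(0)
    n = len(y)
    scores = []
    for _ in range(n_boot):
        boot = [rng.randrange(n) for _ in range(n)]
        oob = set(range(n)) - set(boot)
        if not oob:
            continue
        acc = lambda c, idx: sum(preds[c][i] == y[i] for i in idx) / len(idx)
        # Select the winner on the bootstrap sample only ...
        winner = max(range(len(preds)), key=lambda c: acc(c, boot))
        # ... and score it on the held-out (out-of-bag) samples.
        scores.append(acc(winner, list(oob)))
    return sum(scores) / len(scores)

rng = random.Random(1)
y = [rng.randrange(2) for _ in range(300)]
# Three hypothetical pipelines; each prediction is correct with a fixed probability.
probs = [0.60, 0.70, 0.65]
preds = [[yi if rng.random() < p else 1 - yi for yi in y] for p in probs]
corrected = bbc(preds, y)
print(round(corrected, 3))
```

A CI can then be formed from the percentiles of the per-bootstrap out-of-bag scores instead of their mean, which is the usual way BBC yields intervals rather than point estimates.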
We use lower and upper solutions to investigate the existence of the greatest and least solutions for quasimonotone systems of measure differential equations. The established results are then used to study the solvability of Stieltjes differential equations, a recent unification of discrete, continuous, and impulsive systems. The applicability of our results is illustrated in a simple model for a bacteria population.
|
arxiv:1803.08860
|
We study a system of equations arising in the Chern-Simons model on finite graphs. Using an iteration scheme and the method of upper and lower solutions, we obtain existence of solutions in the non-critical case. The critical case is dealt with by a priori estimates. Our results generalize those of Huang et al. (Journal of Functional Analysis 281(10) (2021), Paper No. 109218).
|
arxiv:2206.12863
|
The leading-order coefficients of the beta function of QCD are computed in a large-$N_f$ expansion. They are in agreement with the three-loop $\overline{\text{MS}}$ calculation. The method involves computing the anomalous dimension of the operator $(G^2_{\mu\nu})^2$ at the $d$-dimensional fixed point in the non-abelian Thirring model, to which QCD is equivalent in this limit. The effect the $O(1/N_f)$ corrections have on the location of the infrared stable fixed point for a range of $N_f$ is also examined.
|
arxiv:hep-ph/9602214
|
Strong disorder, characterized by a small product of the Fermi wave vector $k_F$ and the electron mean free path $l$, drives superconductors towards an insulating state. Such disorder can be introduced by making the films very thin. Here, we present a 3-nm Mo2N film with $k_F l \sim 2$ and a resistive superconducting transition temperature $T_c = 2$ K, heavily suppressed in comparison with the bulk $T_c$. A superconducting density of states (DOS) with smeared gap-like peaks and in-gap states, the so-called Dynes DOS, is observed by low-temperature tunneling spectroscopy despite a sharp resistive transition. Spectral maps obtained by scanning tunneling microscopy are related to the surface topography. The maps show a spatial variation of the superconducting energy gap on the order of 20% which is not accidental but correlates well with the surface corrugation: protrusions reveal a larger gap, smaller spectral smearing, and fewer in-gap states. In agreement with our previous measurements on ultrathin MoC films, we suggest that the film-substrate interface, introducing local pair breaking, is responsible for the observed effects and, more generally, for the suppression of superconductivity in these ultrathin films.
|
arxiv:2311.17506
|
Confined smectic-A liquid crystals (SmA LCs) form topological defects called focal conic domains (FCDs) that focus light as gradient-index lenses. Here, we exploit surface curvature to self-assemble FCDs in a single step into a hierarchical structure (coined a "flower pattern") molded by the fluid interface pinned at the top of a micropillar. The structure resembles the compound eyes of some invertebrates, which consist of hundreds of microlenses on a curved interface, able to focus and construct images in three dimensions. Here we demonstrate that these flowers are indeed "compound eyes" with important features that have not been demonstrated previously in the literature. The eccentric FCDs gradually change in size with radial distance from the edge of the micropillar, resulting in a variable microlens focal length that ranges from a few microns to a few tens of microns within a single "flower". We show that the microlenses can construct a composite 3D image from different depths of field. Moreover, the smectic "compound eye" can be reconfigured by heating and cooling at the LC phase-transition temperature; its field of view can be manipulated by tuning the curvature of the LC interface, and the lenses are sensitive to light polarization.
|
arxiv:1505.01449
|
The spectra of far-infrared transmission in Tb3Fe5O12 magnetoelectric single crystals have been studied in the range between 15 and 100 cm$^{-1}$, in magnetic fields up to 10 T, and at temperatures between 5 and 150 K. We attribute some of the observed infrared-active excitations to electric-dipole transitions between ligand-field-split states of Tb$^{3+}$ ions. Anticrossing between the magnetic exchange excitation and the ligand-field transition occurs at temperatures between 60 and 80 K. The corresponding coupling energy for this interaction is 6 cm$^{-1}$. Temperature-induced softening of the hybrid IR excitation correlates with the increase of the static dielectric constant. We discuss the possibility of hybrid excitations of magnons and ligand-field states and their possible connection to the magnetoelectric effect in Tb3Fe5O12.
|
arxiv:1005.2705
|
We build the $q = -1$ deformation of the plane on a product of two copies of algebras of functions on the plane. This algebra contains a subalgebra of functions on the plane. We present a general scheme (which could also be used to construct quaternions from pairs of complex numbers) and use it to derive differential structures and a metric, and to discuss sample field-theoretical models.
|
arxiv:q-alg/9503007
|
An optoelectronic oscillator exhibiting a large delay in its feedback loop is studied both experimentally and theoretically. We show that multiple square-wave oscillations may coexist for the same values of the parameters (multirhythmicity). Depending on the sign of the phase shift, these regimes admit either periods close to an integer fraction of the delay or periods close to an odd-integer fraction of twice the delay. These periodic solutions emerge from successive Hopf bifurcation points and stabilize at a finite amplitude following a scenario similar to the Eckhaus instability in spatially extended systems. We find quantitative agreement between experiments and numerical simulations. The linear stability of the square waves is substantiated analytically by determining the stable fixed points of a map.
|
arxiv:1410.0840
|
Large language models (LLMs) often struggle with strict memory, latency, and power demands. To meet these demands, various forms of dynamic sparsity have been proposed that reduce compute on an input-by-input basis. These methods improve over static methods by exploiting the variance across individual inputs, which has steadily grown with the exponential increase in training data. Yet, the increasing depth of modern models, currently with hundreds of layers, has opened opportunities for dynamic layer sparsity, which skips the computation of entire layers. In this work, we explore the practicality of layer sparsity by profiling residual connections and establish the relationship between model depth and layer sparsity. For example, the residual blocks in the OPT-66B model have a median contribution of 5% to its output. We then take advantage of this dynamic sparsity and propose Radial Networks, which perform token-level routing between layers guided by a trained router module. These networks can be used in a post-training distillation from sequential networks or trained from scratch to co-learn the router and layer weights. They enable scaling to larger model sizes by decoupling the number of layers from the dynamic depth of the network, and their design allows for layer reuse. By varying the compute token by token, they reduce the overall resources needed for generating entire sequences. Overall, this leads to larger-capacity networks with significantly lower compute and serving costs for large language models.
|
arxiv:2404.04900
|
atoms made of a particle and an antiparticle are unstable, usually surviving less than a microsecond. antihydrogen, made entirely of antiparticles, is believed to be stable, and it is this longevity that holds the promise of precision studies of matter - antimatter symmetry. we have recently demonstrated trapping of antihydrogen atoms by releasing them after a confinement time of 172 ms. a critical question for future studies is : how long can anti - atoms be trapped? here we report the observation of anti - atom confinement for 1000 s, extending our earlier results by nearly four orders of magnitude. our calculations indicate that most of the trapped anti - atoms reach the ground state. further, we report the first measurement of the energy distribution of trapped antihydrogen which, coupled with detailed comparisons with simulations, provides a key tool for the systematic investigation of trapping dynamics. these advances open up a range of experimental possibilities, including precision studies of cpt symmetry and cooling to temperatures where gravitational effects could become apparent.
|
arxiv:1104.4982
|
the hidden - variables premise is shown to be equivalent to the existence of generic filters for algebras of commuting propositions and for certain more general propositional systems. the significance of this equivalence is interpreted in light of the theory of generic filters and boolean - valued models in set theory ( the method of forcing ). the apparent stochastic nature of quantum observation is derived for these hidden - variables models.
|
arxiv:quant-ph/0506040
|
we prove a version of the arezzo - pacard - singer blow - up theorem in the setting of poincar \ ' e type metrics. we apply this to give new examples of extremal poincar \ ' e type metrics. a key feature is an additional obstruction which has no analogue in the compact case. this condition is conjecturally related to ensuring the metrics remain of poincar \ ' e type.
|
arxiv:1811.12584
|
the recently released persistent memory ( pm ) offers high performance, persistence, and is cheaper than dram. this opens up new possibilities for indexes that operate and persist data directly on the memory bus. recent learned indexes exploit data distribution and have shown great potential for some workloads. however, none support persistence or instant recovery, and existing pm - based indexes typically evolve b + - trees without considering learned indexes. this paper proposes apex, a new pm - optimized learned index that offers high performance, persistence, concurrency, and instant recovery. apex is based on alex, a state - of - the - art updatable learned index, to combine and adapt the best of past pm optimizations and learned indexes, allowing it to reduce pm accesses while still exploiting machine learning. our evaluation on intel dcpmm shows that apex can perform up to ~ 15x better than existing pm indexes and can recover from failures in ~ 42ms.
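The learned-index idea that APEX builds on can be sketched in a few lines: a model (here a simple least-squares line, far simpler than ALEX's or APEX's structures, and with none of the PM-specific optimizations) predicts a key's position in sorted data, and a search bounded by the model's maximum error corrects the prediction:

```python
import bisect

class LearnedIndex:
    """Minimal learned index: a linear model predicts a key's position in a
    sorted array; a bounded local search corrects the prediction."""
    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        xs, ys = self.keys, range(n)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = sum((x - mx) ** 2 for x in xs) or 1.0
        self.a, self.b = cov / var, my - (cov / var) * mx
        # the worst prediction error defines the search window
        self.err = max(abs(self._predict(x) - i) for i, x in enumerate(xs))

    def _predict(self, key):
        return min(max(int(self.a * key + self.b), 0), len(self.keys) - 1)

    def lookup(self, key):
        p = self._predict(key)
        lo, hi = max(p - self.err, 0), min(p + self.err + 1, len(self.keys))
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < len(self.keys) and self.keys[i] == key else None
```

The point of the design is that lookups touch only the small window around the prediction, which is what lets a PM-resident index like APEX cut media accesses.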
|
arxiv:2105.00683
|
classifiers for medical image analysis are often trained with a single consensus label, based on combining labels given by experts or crowds. however, disagreement between annotators may be informative, and thus removing it may not be the best strategy. as a proof of concept, we predict whether a skin lesion from the isic 2017 dataset is a melanoma or not, based on crowd annotations of visual characteristics of that lesion. we compare using the mean annotations, illustrating consensus, to standard deviations and other distribution moments, illustrating disagreement. we show that the mean annotations perform best, but that the disagreement measures are still informative. we also make the crowd annotations used in this paper available at \ url { https : / / figshare. com / s / 5cbbce14647b66286544 }.
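A minimal sketch of the feature construction described above: reduce each lesion's per-characteristic crowd scores to a consensus feature (the mean) and a disagreement feature (the standard deviation), which a downstream classifier could consume; the function name and data layout are illustrative, not the paper's code:

```python
from statistics import mean, pstdev

def crowd_features(annotations):
    """annotations: one list of crowd scores per visual characteristic.
    Returns consensus (mean) and disagreement (population std) features."""
    return {
        "mean": [mean(a) for a in annotations],
        "std": [pstdev(a) for a in annotations],
    }
```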
|
arxiv:1806.08174
|
this paper introduces a data - adaptive non - parametric approach for the estimation of time - varying spectral densities from nonstationary time series. time - varying spectral densities are commonly estimated by local kernel smoothing. the performance of these nonparametric estimators, however, depends crucially on the smoothing bandwidths that need to be specified in both time and frequency direction. as an alternative and extension to traditional bandwidth selection methods, we propose an iterative algorithm for constructing localized smoothing kernels data - adaptively. the main idea, inspired by the concept of propagation - separation ( polzehl and spokoiny 2006 ), is to determine for a point in the time - frequency plane the largest local vicinity over which smoothing is justified by the data. by shaping the smoothing kernels nonparametrically, our method not only avoids the problem of bandwidth selection in the strict sense but also becomes more flexible. it not only adapts to changing curvature in smoothly varying spectra but also adjusts for structural breaks in the time - varying spectrum.
|
arxiv:1512.00825
|
strong bounds - going beyond sarnak ' s density hypothesis - are obtained for the number of automorphic forms for the congruence subgroup gamma _ 0 ( q ) of sl _ n ( z ) violating the ramanujan conjecture at any given unramified place. the proof is based on a relative trace formula of kuznetsov type and best - possible bounds for certain kloosterman sums for gl ( n ). further applications are given.
|
arxiv:1906.07459
|
we use the data of wisconsin h $ \ alpha $ mapper ( wham ) to test the hypothesis of whether the amplitudes and spectrum of density fluctuations measured by wham can be matched to the data obtained for interstellar scintillations and scattering. to do this, first of all, we adjusted the mean level of signal in the adjacent patches of the data. then, assuming that the spectrum is kolmogorov, we successfully matched the amplitudes of turbulence obtained from the wham data and the interstellar density fluctuations reported in the existing literature. as a result, we conclude that the existing data is consistent with the kolmogorov cascade which spans from $ 10 ^ 6 $ to $ 10 ^ { 17 } $ $ m $.
|
arxiv:0905.4413
|
recent experiments with silicon qubits demonstrated strong coupling of a microwave resonator to the spin of a single electron in a double quantum dot, opening up the possibility of long - range spin - spin interactions. we present our theoretical calculation of effective interactions between distant quantum dot spins coupled by a resonator, and propose a protocol for fast, high - fidelity two - qubit gates consistent with experimentally demonstrated capabilities. our simulations show that, in the presence of noise, spin - spin entangling gates significantly outperform cavity - mediated gates on charge qubits.
|
arxiv:1902.05704
|
let $ s _ { \ rm lcm } ( n ) $ denote the set of permutations $ \ pi $ of $ [ n ] = \ { 1, 2, \ dots, n \ } $ such that $ { \ rm lcm } [ j, \ pi ( j ) ] \ le n $ for each $ j \ in [ n ] $. further, let $ s _ { \ rm div } ( n ) $ denote the number of permutations $ \ pi $ of $ [ n ] $ such that $ j \ mid \ pi ( j ) $ or $ \ pi ( j ) \ mid j $ for each $ j \ in [ n ] $. clearly $ s _ { \ rm div } ( n ) \ subset s _ { \ rm lcm } ( n ) $. we get upper and lower bounds for the counts of these sets, showing they grow geometrically. we also prove a conjecture from a recent paper on the number of " anti - coprime " permutations of $ [ n ] $, meaning that each $ \ gcd ( j, \ pi ( j ) ) > 1 $ except when $ j = 1 $.
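Both permutation classes are easy to enumerate by brute force for small $n$, which also verifies the containment $s_{\rm div}(n) \subset s_{\rm lcm}(n)$ stated above (exhaustive enumeration only; the paper's bounds are asymptotic):

```python
from itertools import permutations
from math import lcm

def count_lcm_perms(n):
    """Count permutations pi of [1..n] with lcm(j, pi(j)) <= n for all j."""
    return sum(
        all(lcm(j, p[j - 1]) <= n for j in range(1, n + 1))
        for p in permutations(range(1, n + 1))
    )

def count_div_perms(n):
    """Count permutations pi of [1..n] with j | pi(j) or pi(j) | j for all j."""
    return sum(
        all(j % p[j - 1] == 0 or p[j - 1] % j == 0 for j in range(1, n + 1))
        for p in permutations(range(1, n + 1))
    )
```

For n = 3, for instance, exactly the identity and the transpositions (1 2) and (1 3) qualify for both classes.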
|
arxiv:2206.01699
|
hawking radiation of uncharged and charged scalars from accelerating and rotating black holes is studied. we calculate the tunneling probabilities of these particles from the rotation and acceleration horizons of these black holes. using the tunneling method we recover the correct hawking temperature as well.
|
arxiv:1102.0029
|
in one of its versions, the second law states : " it is impossible to construct an engine which will work in a complete cycle, and produces no effect except the raising of a weight and cooling of a heat reservoir. " while the second law is considered as one of the most robust laws of nature, it is still challenging how to interpret it in a fully quantum domain. here we unpack the true meaning of the " cyclicity " and formulate the second law for a generic quantum battery via its asymptotic properties of a charging process rather than in terms of a single cycle. as a paradigm, we propose a machine consisting of a battery that repeatedly interacts with identically prepared systems. we then propose the second law in the form : the ergotropy of the battery may increase indefinitely if and only if systems are in a non - passive state. one of the most interesting features of this new formulation is the appearance of the passive states that naturally generalize the notion of the heat bath. in this paper, we provide a handful of results that supports this formulation for diagonal systems. interestingly, our methodology meets a well - known theory of markov chains, according to which we classify the general charging processes based on the passivity / non - passivity of charging systems. in particular, the adopted mathematics allows us to distinguish a subtle asymptotic difference between the indefinite increase of the battery ' s energy ( induced by the maximally mixed states ) and of ergotropy ( induced by the non - passive states ) in terms of the so - called null - recurrent versus transient markov chains.
|
arxiv:2209.05339
|
obtaining the rest mass of leptons with electric charge minus 1 is pursued by considering the existence of a medium made up of sub - quantum particles, called etherons, having a rest energy at the lowest limit which is possible in the universe. this medium is assumed to have a periodic structure, that generates zones of allowed and forbidden energy. the basic assumption consists in considering the photon interaction with this hypothetical medium to be similar with the interaction of the electrons with the particles of a crystalline lattice. it is further assumed that an inverse particle - antiparticle annihilation process in the presence of the periodical sub - quantum field generates the particles of the universe. the quantization of the photons in this sub - quantum lattice is achieved with the help of the operator of the square of the energy and a well - known formula of f. bloch has been further used to empirically fix the lattice parameters. the rest energy of fundamental particles would correspond to zones of allowed energy.
|
arxiv:0908.1945
|
we study the implications of a large nu _ mu - nu _ tau mixing angle on flavour changing transitions of quarks and leptons in supersymmetric extensions of the standard model. two patterns of supersymmetry breaking are considered, models with modular invariance and the standard scenario of universal soft breaking terms at the gut scale. the analysis is performed for two symmetry groups g x u ( 1 ) _ f, with g = su ( 5 ) and g = su ( 3 ) ^ 3, where u ( 1 ) _ f is a family symmetry. models with modular invariance are in agreement with observations only for restricted scalar quark and gaugino masses, ( m _ squark ^ 2 ) / ( m _ gluino ^ 2 ) \ simeq 7 / 9 and m _ bino > 350 gev. a characteristic feature of models with large tan beta and radiatively induced flavour mixing is a large branching ratio for mu - > e gamma. for both symmetry groups and for the considered range of supersymmetry breaking mass parameters we find br ( mu - > e gamma ) > 10 ^ ( - 14 ).
|
arxiv:hep-ph/9912317
|
performance problems are often observed in embedded software systems. the reasons for poor performance are frequently not obvious. bottlenecks can occur in any of the software components along the execution path. therefore it is important to instrument and monitor the different components contributing to the runtime behavior of an embedded software system. performance analysis tools can help locate performance bottlenecks in embedded software systems by monitoring the software ' s execution and producing easily understandable performance data. we maintain and further develop a tool for analyzing the performance of nokia mobile phone software. the user can select among four performance analysis reports to be generated : average processor load, processor utilization, task execution time statistics, and task execution timeline. each of these reports provides important information about where execution time is being spent. the demo will show how the tool helps to identify performance bottlenecks in nokia mobile phone software and better understand areas of poor performance.
|
arxiv:cs/0310001
|
we present a sub - kpc resolved study of the interstellar medium properties in sdp. 81, a z = 3. 042 strongly gravitationally lensed dusty star - forming galaxy, based on high - resolution, multi - band alma observations of the fir continuum, co ladder and the [ cii ] line. using a visibility - plane lens modelling code, we achieve a median source - plane resolution of ~ 200 pc. we use photon - dominated region ( pdr ) models to infer the physical conditions - far - uv field strength, density, and pdr surface temperature - of the star - forming gas on 200 - pc scales, finding a fuv field strength of ~ 10 ^ 3 - 10 ^ 4 g0, gas density of ~ 10 ^ 5 cm ^ - 3 and cloud surface temperatures up to 1500 k, similar to those in the orion trapezium region. the [ cii ] emission is significantly more extended than the fir continuum : ~ 50 per cent of [ cii ] emission arises outside the fir - bright region. the resolved [ cii ] / fir ratio varies by almost 2 dex across the source, down to ~ 2x10 ^ - 4 in the star - forming clumps. the observed [ cii ] / fir deficit trend is consistent with thermal saturation of the c + fine - structure level occupancy at high gas temperatures. we make the source - plane reconstructions of all emission lines and continuum data publicly available.
|
arxiv:1912.12538
|
we propose a notion of stability for constant k - mean curvature hypersurfaces in a general riemannian manifold and we give some applications. when the ambient manifold is a space form, our notion coincides with the known one, given by means of the variational problem. our approach led us to work with two different stability operators and we are able to relate stability to the study of the respective first eigenvalues. moreover, we prove that embedded rotational spheres with constant k - mean curvature in h^n x r or in s^n x r are not stable.
|
arxiv:1912.12103
|
tail dependence plays an essential role in the characterization of joint extreme events in multivariate data. however, most standard tail dependence parameters assume continuous margins. this note presents a form of tail dependence suitable for non - continuous and discrete margins. we derive a representation of tail dependence based on the volume of a copula and prove its properties. we utilize a bivariate regular variation to show that our new metric is consistent with the standard tail dependence parameters on continuous margins. we further define tail dependence on autocorrelated margins, where the tail dependence parameter examines lagged correlation in the sample.
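For intuition, a crude empirical estimate of upper tail dependence at level q — the joint exceedance probability over both marginal q-quantiles, normalized by 1 − q — can be computed as follows; this is an illustrative sample estimator, not the copula-volume construction of the note:

```python
def _quantile(xs, q):
    """Order statistic at level q (crude empirical quantile)."""
    s = sorted(xs)
    return s[min(int(q * len(s)), len(s) - 1)]

def upper_tail_dependence(pairs, q=0.9):
    """Fraction of points jointly exceeding both marginal q-quantiles,
    normalized by 1 - q."""
    xq = _quantile([x for x, _ in pairs], q)
    yq = _quantile([y for _, y in pairs], q)
    joint = sum(1 for x, y in pairs if x > xq and y > yq) / len(pairs)
    return joint / (1 - q)
```

Comonotone data gives a value near 1, countermonotone data gives 0, matching the intuition that tail dependence measures co-occurrence of extremes.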
|
arxiv:2502.01271
|
many popular robust estimators are $ u $ - quantiles, most notably the hodges - lehmann location estimator and the $ q _ n $ scale estimator. we prove a functional central limit theorem for the sequential $ u $ - quantile process without any moment assumptions and under weak short - range dependence conditions. we further devise an estimator for the long - run variance and show its consistency, from which the convergence of the studentized version of the sequential $ u $ - quantile process to a standard brownian motion follows. this result can be used to construct cusum - type change - point tests based on $ u $ - quantiles, which do not rely on bootstrapping procedures. we demonstrate this approach in detail using the example of the hodges - lehmann estimator for robustly detecting changes in the central location. a simulation study confirms the very good robustness and efficiency properties of the test. two real - life data sets are analyzed.
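The Hodges-Lehmann location estimator mentioned above is a $u$-quantile with a very short definition — the median of all pairwise (Walsh) averages; the sketch below uses the common one-sample variant that includes the i = j pairs:

```python
from itertools import combinations
from statistics import median

def hodges_lehmann(xs):
    """One-sample Hodges-Lehmann estimator: median of the Walsh
    averages (x_i + x_j) / 2 over all pairs i <= j."""
    walsh = [(a + b) / 2 for a, b in combinations(xs, 2)]
    walsh.extend(xs)  # the i == j pairs
    return median(walsh)
```

Unlike the sample mean, a single outlier barely moves it, which is why it is a natural statistic for robust CUSUM-type change-point tests.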
|
arxiv:1503.04161
|
a conceptual example is first analyzed to show that efficient wireless communications is possible, when user equipment ( ue ) receiver, bs transmitter or / and the scatter ( reflector ) in wireless channels employ the required channel state information ( csi ) to remove the randomness of signal phase. then, the principles and optimization of three reflective intelligent surface ( ris ) assisted mmwave ( ris - mmwave ) models are introduced. the first model assumes one bs, one ris and one ue ; the second one assumes one bs, one ris and multiple ues ; while the third ris - mmwave model assumes one bs, multiple riss and multiple ues. furthermore, the optimization of bs precoder and ris phase - shifts is addressed in the context of the massive ris - mmwave scenarios, where the number of bs antennas and that of ris reflection elements are significantly larger than the number of supported ues. the analyses demonstrate that, while the deployment of riss with mmwave is capable of solving the blockage problem and has the potential to significantly improve efficiency, finding the near - optimum solutions for ris phase - shifts is highly challenging in practice.
|
arxiv:2403.11260
|
it is easy to show that the lower and the upper box dimensions of a bounded set in euclidean space are invariant with respect to the ambient space. in this article we show that the minkowski content of a minkowski measurable set is also invariant with respect to the ambient space when normalized by an appropriate constant. in other words, the value of the normalized minkowski content of a bounded, minkowski measurable set is intrinsic to the set.
|
arxiv:1207.3279
|
water is an important component of exoplanets, with its distribution, i. e., whether at the surface or deep inside, fundamentally influencing the planetary properties. the distribution of water in most exoplanets is determined by yet - unknown partitioning coefficients at extreme conditions. our new first - principles molecular dynamics simulations reveal that water strongly partitions into iron over silicate at high pressures and thus would preferentially stay in a planet ' s core. furthermore, we model planet interiors by considering the effect of water on density, melting temperature, and water partitioning. the results shatter the notion of water worlds as imagined before : the majority of the bulk water budget ( even more than 95 % ) can be stored deep within the core and the mantle, and not at the surface. for planets more massive than ~ 6 earth masses and for earth - size planets ( of lower mass and small water budgets ), the majority of water resides deep in their cores. whether water is assumed to be at the surface or at depth can affect the radius by up to 25 % for a given mass. this has drastic consequences for the inferred water distribution in exoplanets from mass - radius data.
|
arxiv:2401.16394
|
we discuss magnetotransport measurements on individual single - wall carbon nanotubes with low contact resistance, performed as a function of temperature and gate voltage. we find that the application of a magnetic field perpendicular to the tube axis results in a large magnetoconductance of the order of e ^ 2 / h at low temperature. we demonstrate that this magnetoconductance consists of a sample - specific and of an ensemble - averaged contribution, both of which decrease with increasing temperature. the observed behavior resembles very closely the behavior of more conventional multi - channel mesoscopic wires, exhibiting universal conductance fluctuations and weak localization. a theoretical analysis of our experiments will enable to reach a deeper understanding of phase - coherent one - dimensional electronic motion in swnts.
|
arxiv:cond-mat/0411141
|
( 2009 ). practical guide to computer simulations. world scientific. isbn 978 - 981 - 283 - 415 - 7. archived from the original on february 11, 2009. retrieved may 3, 2012. nonweiler, t. r. ( 1986 ). computational mathematics : an introduction to numerical approximation. john wiley and sons. isbn 978 - 0 - 470 - 20260 - 9. gentle, j. e. ( 2007 ). foundations of computational science. springer - verlag. isbn 978 - 0 - 387 - 00450 - 1. white, r. e. ( 2003 ). computational mathematics : models, methods, and analysis with matlab. chapman and hall. isbn 978 - 1584883647. yang, x. s. ( 2008 ). introduction to computational mathematics. world scientific. isbn 978 - 9812818171. strang, g. ( 2007 ). computational science and engineering. wiley. isbn 978 - 0961408817. = = external links = = foundations of computational mathematics, a non - profit organization international journal of computer discovered mathematics
|
https://en.wikipedia.org/wiki/Computational_mathematics
|
non - equilibrium two - parameter pumping transport through graphene ribbons, attached to reservoirs is described. a tight - binding model is solved using keldysh formalism, and the crossover between adiabatic and non - adiabatic regimes is studied. pumped dc currents through armchair ribbons show properties common in two - dimensional systems. the width - dependent dc current in zigzag ribbons, reveals that edge states akin to those in two - dimensional topological insulators, do not contribute to pumped transport in the adiabatic regime. the interplay between propagating and evanescent modes is discussed.
|
arxiv:1203.3952
|
we present here the natural extension of our pila - wilkie type estimates on the number of rational points of the transcendent part of a compact analytic subset of $ \ mathbb { f } _ { q } ( ( 1 / t ) ) ^ { n } $ to analogous subsets of $ k ^ { n } $, where $ k $ is a general local field of any characteristic. this complements the analogous estimate provided by f. loeser, g. comte and r. cluckers in characteristic 0.
|
arxiv:1711.08089
|
transfer function design is crucial in volume rendering, as it directly influences the visual representation and interpretation of volumetric data. however, creating effective transfer functions that align with users ' visual objectives is often challenging due to the complex parameter space and the semantic gap between transfer function values and features of interest within the volume. in this work, we propose a novel approach that leverages recent advancements in language - vision models to bridge this semantic gap. by employing a fully differentiable rendering pipeline and an image - based loss function guided by language descriptions, our method generates transfer functions that yield volume - rendered images closely matching the user ' s intent. we demonstrate the effectiveness of our approach in creating meaningful transfer functions from simple descriptions, empowering users to intuitively express their desired visual outcomes with minimal effort. this advancement streamlines the transfer function design process and makes volume rendering more accessible to a wider range of users.
|
arxiv:2406.15634
|
we present the detection of infall, rotation and outflow kinematic signatures towards both a protostellar source, vla 1623 and what was initially thought to be a pre - protostellar core, sm1n, in the rho - ophiuchus a region. the kinematic signatures of early star formation were detected in the dense molecular gas surrounding the embedded sources using high signal - to - noise millimeter and submillimeter data. centroid velocity maps made with hco + j = 4 - > 3 and j = 1 - > 0 line emission exhibit the blue bulge signature of infall, which is predicted to be seen when infall motion dominates over rotational motion. further evidence for infalling gas is found in the hco + blue asymmetric line profiles and red asymmetric opacity profiles. we also performed co j = 3 - > 2 and j = 1 - > 0 observations to determine the direction, orientation, and extent of molecular outflows, and report the discovery of a new bipolar outflow possibly driven by sm1n.
|
arxiv:astro-ph/0605163
|
let $ k $ be an elementary extension of $ \ mathbb { q } _ p $, $ v $ be the set of finite $ a \ in k $, $ \ mathrm { st } $ be the standard part map $ k ^ m \ to \ mathbb { q } ^ m _ p $, and $ x \ subseteq k ^ m $ be $ k $ - definable. delon has shown that $ \ mathbb { q } ^ m _ p \ cap x $ is $ \ mathbb { q } _ p $ - definable. yao has shown that $ \ dim \ mathbb { q } ^ m _ p \ cap x \ leq \ dim x $ and $ \ dim \ mathrm { st } ( v ^ n \ cap x ) \ leq \ dim x $. we give new $ \ mathrm { nip } $ - theoretic proofs of these results and show that both inequalities hold in much more general settings. we also prove the analogous results for the expansion $ \ mathbb { q } ^ { \ mathrm { an } } _ p $ of $ \ mathbb { q } _ p $ by all analytic functions $ \ mathbb { z } ^ n _ p \ to \ mathbb { q } _ p $. as an application we show that if $ ( x _ k ) _ { k \ in \ mathbb { n } } $ is a sequence of elements of an $ \ mathbb { q } ^ { \ mathrm { an } } _ p $ - definable family of subsets of $ \ mathbb { q } ^ m _ p $ which converges in the hausdorff topology to $ x \ subseteq \ mathbb { q } ^ m _ p $ then $ x $ is $ \ mathbb { q } ^ { \ mathrm { an } } _ p $ - definable and $ \ dim x \ leq \ limsup _ { k \ to \ infty } \ dim x _ k $.
|
arxiv:2004.13109
|
toral automorphisms, represented by unimodular integer matrices, are investigated with respect to their symmetries and reversing symmetries. we characterize the symmetry groups of gl ( n, z ) matrices with simple spectrum through their connection with unit groups in orders of algebraic number fields. for the question of reversibility, we derive necessary conditions in terms of the characteristic polynomial and the polynomial invariants. we also briefly discuss extensions to ( reversing ) symmetries within affine transformations, to pgl ( n, z ) matrices, and to the more general setting of integer matrices beyond the unimodular ones.
|
arxiv:math/0006092
|
a method of exact all - order summation of leading infrared logarithms in two dimensional massless $ \ phi ^ 4 $ - type non - renormalizable effective field theories ( efts ) is developed. the method is applied to the $ { \ rm o } ( n ) $ - symmetric eft, which is a two - dimensional sibling of the four dimensional $ { \ rm o } ( n + 1 ) / { \ rm o } ( n ) $ sigma - model. for the first time the exact all - order summation of the $ \ left ( e ^ { 2 } \ ln ( 1 / e ) \ right ) ^ n $ contributions ( chiral logarithms ) for the $ 2 \ to 2 $ scattering amplitudes is performed in closed analytical form. the cases when the resulting amplitudes turn to be meromorphic functions with an infinite number of poles ( landau poles ) are identified. this provides the first explicit example of quasi - renormalizable field theories.
|
arxiv:1811.12289
|
the fermi - pasta - ulam ( fpu ) paradox consists of the nonequipartition of energy among normal modes of a weakly anharmonic atomic chain model. in the harmonic limit each normal mode corresponds to a periodic orbit in phase space and is characterized by its wave number $ q $. we continue normal modes from the harmonic limit into the fpu parameter regime and obtain persistence of these periodic orbits, termed here $ q $ - breathers ( qb ). they are characterized by time periodicity, exponential localization in the $ q $ - space of normal modes and linear stability up to a size - dependent threshold amplitude. trajectories computed in the original fpu setting are perturbations around these exact qb solutions. the qb concept is applicable to other nonlinear lattices as well.
|
arxiv:nlin/0504036
|
pokhran - ii ( operation shakti ) was a series of five nuclear weapon tests conducted by india in may 1998. the bombs were detonated at the indian army ' s pokhran test range in rajasthan. it was the second instance of nuclear testing conducted by india, after the first test, smiling buddha, in may 1974. the test consisted of five detonations, the first of which was claimed to be a two - stage fusion bomb while the remaining four were fission bombs. the first three tests were carried out simultaneously on 11 may 1998 and the last two were detonated two days later on 13 may 1998. the tests were collectively called operation shakti, and the five nuclear bombs were designated as shakti - i to shakti - v. the chairman of the atomic energy commission of india described each of the explosions to be equivalent to several tests carried out over the years by various nations. while announcing the tests, the indian government declared india as a nuclear state and that the tests achieved the main objective of giving the capability to build fission bombs and thermonuclear weapons with yields up to 200 kilotons. while the indian fission bombs have been documented, the design and development of thermonuclear weapons remains uncertain after the tests. as a consequence of the tests, united nations security council resolution 1172 was enacted and economic sanctions were imposed by countries including japan and the united states. = = history = = = = = early nuclear programme ( 1944 – 1965 ) = = = efforts towards building a nuclear bomb, infrastructure, and research on related technologies have been undertaken by india since the end of the second world war. the origins of india ' s nuclear programme go back to 1945 when nuclear physicist homi bhabha established the tata institute of fundamental research ( tifr ) with the aid of tata group. after indian independence, the atomic energy act was passed on 15 april 1948, that established the indian atomic energy commission ( iaec ).
in 1954, department of atomic energy ( dae ) was established which was responsible for the atomic development programme and was allocated a significant amount of the defence budget in the subsequent years. in 1956, the first nuclear reactor became operational at bhabha atomic research centre ( barc ), becoming the first operating reactor in asia. in 1961, india commissioned a reprocessing plant to produce weapon grade plutonium. in 1962, india was engaged in a war with china, and with china conducting its own nuclear test in 1964, india accelerated its development of nuclear weapons. with two reactors operational in
|
https://en.wikipedia.org/wiki/Pokhran-II
|
considering ultracold atoms traversing a high - q fabry - perot cavity, we theoretically demonstrate a quantum nondemolition measurement of the photon number. this fully quantum mechanical approach may be understood utilizing concepts as effective mass and group velocity of the atom. the various photon numbers induce a splitting of the atomic wave packet, and a time - of - flight measurement of the atom thereby reveals the photon number. while repeated atomic measurements increase the efficiency of the protocol, it is shown that by considering long interaction times only a few atoms are needed to resolve the photon number with almost perfect accuracy.
|
arxiv:0909.1958
|
we show that reinforcement learning ( rl ) methods for solving text - based games ( tbgs ) often fail to generalize on unseen games, especially in small data regimes. to address this issue, we propose context relevant episodic state truncation ( crest ) for irrelevant token removal in observation text for improved generalization. our method first trains a base model using q - learning, which typically overfits the training games. the base model ' s action token distribution is used to perform observation pruning that removes irrelevant tokens. a second bootstrapped model is then retrained on the pruned observation text. our bootstrapped agent shows improved generalization in solving unseen textworld games, using 10x - 20x fewer training games compared to previous state - of - the - art methods while also requiring fewer training episodes.
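The observation-pruning step can be sketched as a simple filter: tokens whose relevance score (in CREST, derived from the base model's action-token distribution) falls below a threshold are dropped before retraining; the scoring dictionary here is a hypothetical stand-in for that distribution:

```python
def prune_observation(obs_tokens, token_scores, threshold):
    """Keep only observation tokens whose relevance score meets the
    threshold; tokens absent from the score table default to 0 and
    are removed."""
    return [t for t in obs_tokens if token_scores.get(t, 0.0) >= threshold]
```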
|
arxiv:2009.11896
|
medical dialogue generation ( mdg ) has gained increasing attention due to its substantial practical value. previous works typically employ a sequence - to - sequence framework to generate medical responses by modeling dialogue context as sequential text with annotated medical entities. while these methods have been successful in generating fluent responses, they fail to provide process explanations of reasoning and require extensive entity annotation. to address these limitations, we propose the method bootstrap prompting for explicit reasoning in mdg ( bp4er ), which explicitly models mdg ' s multi - step reasoning process and iteratively enhances this reasoning process. we employ a least - to - most prompting strategy to guide a large language model ( llm ) in explicit reasoning, breaking down mdg into simpler sub - questions. these sub - questions build on answers from previous ones. additionally, we introduce two distinct bootstrapping techniques for prompting, which autonomously correct errors and facilitate the llm ' s explicit reasoning. this approach eliminates the need for entity annotation and increases the transparency of the mdg process by explicitly generating the intermediate reasoning chain. the experimental findings on the two public datasets indicate that bp4er outperforms state - of - the - art methods in terms of both objective and subjective evaluation metrics.
|
arxiv:2403.19414
|
in mimetic gravity, we derive $ d $ - dimensional charged black hole solutions having flat or cylindrical horizons with zero curvature boundary. these black holes are asymptotically ( a ) ds. we study both linear and nonlinear forms of the maxwell field equations in two separate contexts. for the nonlinear case, we derive a new solution having a metric with monopole, dipole and quadrupole terms. the most interesting feature of this black hole is that its dipole and quadrupole terms are related by a constant. however, the solution reduces to the linear case of the maxwell field equations when this constant acquires a null value. also, we apply a coordinate transformation and derive rotating black hole solutions ( for both linear and nonlinear cases ). we show that the nonlinear black hole has stronger curvature singularities than the corresponding known black hole solutions in general relativity. we show that the obtained solutions could have at most two horizons. we determine the critical mass of the degenerate horizon at which the two horizons coincide. we study the thermodynamical stability of the solutions. we note that the nonlinear electrodynamics drives a second - order phase transition where the heat capacity has an infinite discontinuity.
|
arxiv:1809.02289
|
we consider approximate dynamic programming for the infinite - horizon stationary $ \ gamma $ - discounted optimal control problem formalized by markov decision processes. while in the exact case it is known that there always exists an optimal policy that is stationary, we show that when using value function approximation, looking for a non - stationary policy may lead to a better performance guarantee. we define a non - stationary variant of mpi that unifies a broad family of approximate dp algorithms of the literature. for this algorithm we provide an error propagation analysis in the form of a performance bound of the resulting policies that can improve the usual performance bound by a factor $ o ( 1 - \ gamma ) $, which is significant when the discount factor $ \ gamma $ is close to 1. doing so, our approach unifies recent results for value and policy iteration. furthermore, we show, by constructing a specific deterministic mdp, that our performance guarantee is tight.
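a numerical illustration of the kind of improvement discussed above. with per - iteration value - function error $ \epsilon $, the classical stationary bound scales as $ 2 \gamma \epsilon / ( 1 - \gamma ) ^ 2 $, while an $ m $ - periodic non - stationary policy can achieve roughly $ 2 \gamma \epsilon / ( ( 1 - \gamma ) ( 1 - \gamma ^ m ) ) $ ; the exact constants here are an assumption for illustration, see the paper for the precise statement :

```python
# Compare the two performance-bound scalings; constants are illustrative
# assumptions, not the paper's exact theorem. As the period m grows, the
# ratio of the bounds approaches (1 - gamma): an O(1 - gamma) improvement.

def stationary_bound(gamma, eps):
    return 2 * gamma * eps / (1 - gamma) ** 2

def nonstationary_bound(gamma, eps, m):
    return 2 * gamma * eps / ((1 - gamma) * (1 - gamma ** m))

gamma, eps = 0.99, 0.1
for m in (1, 10, 100, 1000):
    print(m, nonstationary_bound(gamma, eps, m) / stationary_bound(gamma, eps))
# the ratio (1 - gamma) / (1 - gamma**m) tends to (1 - gamma) = 0.01
```

for $ m = 1 $ the two bounds coincide, recovering the stationary case.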
|
arxiv:1304.5610
|
modern deep learning has revealed a surprising statistical phenomenon known as benign overfitting, with high - dimensional linear regression being a prominent example. this paper contributes to ongoing research on the ordinary least squares ( ols ) interpolator, focusing on the partial regression setting, where only a subset of coefficients is implicitly regularized. on the algebraic front, we extend cochran ' s formula and the leave - one - out residual formula for the partial regularization framework. on the stochastic front, we leverage our algebraic results to design several homoskedastic variance estimators under the gauss - markov model. these estimators serve as a basis for conducting statistical inference, albeit with slight conservatism in their performance. through simulations, we study the finite - sample properties of these variance estimators across various generative models.
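the partial regression setting above builds on a classical identity ( the frisch - waugh - lovell theorem, which cochran ' s formula generalizes ) : the multiple - regression coefficient on x2 equals the slope from regressing residualized y on residualized x2. a pure - python check on synthetic, noise - free data :

```python
# Verify the partial-regression (FWL) identity numerically: partial x1 and
# an intercept out of both y and x2, then regress residuals on residuals.
# Data are synthetic with true coefficient 3 on x2, so OLS recovers it exactly.

def simple_ols(x, y):
    """Slope and intercept of y ~ 1 + x by least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

def residuals(x, y):
    s, c = simple_ols(x, y)
    return [b - (c + s * a) for a, b in zip(x, y)]

x1 = [0, 1, 2, 3, 4]
x2 = [1, 0, 2, 1, 3]
y = [1 + 2 * a + 3 * b for a, b in zip(x1, x2)]  # true coefficient on x2 is 3

ry = residuals(x1, y)   # y with x1 (and intercept) partialled out
rx = residuals(x1, x2)  # x2 with x1 (and intercept) partialled out
beta2 = sum(a * b for a, b in zip(rx, ry)) / sum(a * a for a in rx)
print(beta2)  # 3.0, matching the multiple-regression coefficient on x2
```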
|
arxiv:2411.06593
|
nonparametric or distribution - free charts can be useful in statistical process control problems when there is limited or lack of knowledge about the underlying process distribution. in this paper, a phase ii shewhart - type chart is considered for location, based on reference data from phase i analysis and the well - known mann - whitney statistic. control limits are computed using lugannani - rice - saddlepoint, edgeworth, and other approximations along with monte carlo estimation. the derivations take account of estimation and the dependence from the use of a reference sample. an illustrative numerical example is presented. the in - control performance of the proposed chart is shown to be much superior to the classical shewhart $ \ bar { x } $ chart. further comparisons on the basis of some percentiles of the out - of - control conditional run length distribution and the unconditional out - of - control arl show that the proposed chart is almost as good as the shewhart $ \ bar { x } $ chart for the normal distribution, but is more powerful for a heavy - tailed distribution such as the laplace, or for a skewed distribution such as the gamma. interactive software, enabling a complete implementation of the chart, is made available on a website.
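a sketch of the phase ii monitoring idea : each test sample is compared to the phase i reference sample via the mann - whitney statistic, and the chart signals when the statistic crosses its control limits. the data below are made up, and the limits in practice come from the saddlepoint / edgeworth approximations or monte carlo estimation described above :

```python
# Mann-Whitney U statistic between a phase I reference sample and a phase II
# test sample. A value near m*n (or near 0) suggests a location shift.
# Control limits are not computed here; the paper derives them accounting
# for the dependence induced by the shared reference sample.

def mann_whitney_u(reference, test):
    """Number of (x, y) pairs with x < y; ties count 0.5."""
    u = 0.0
    for x in reference:
        for y in test:
            if x < y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

reference = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]  # in-control phase I data
shifted = [10.6, 10.9, 10.7, 10.8]              # test sample with an upward shift
u = mann_whitney_u(reference, shifted)
m, n = len(reference), len(shifted)
print(u, m * n)  # u equal to m*n: every test value exceeds every reference value
```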
|
arxiv:0805.2292
|
we study second - order divergence - form systems on half - infinite cylindrical domains with a bounded and possibly rough base, subject to homogeneous mixed boundary conditions on the lateral boundary and square integrable dirichlet, neumann, or regularity data on the cylinder base. assuming that the coefficients a are close to coefficients a \ _ 0 that are independent of the unbounded direction with respect to the modified carleson norm of dahlberg, we prove a priori estimates and establish well - posedness if a \ _ 0 has a special structure. we obtain a complete characterization of weak solutions whose gradient either has an l ^ 2 - bounded non - tangential maximal function or satisfies a lusin area bound. our method relies on the first - order formalism of axelsson, mcintosh, and the first author and the recent solution of kato ' s conjecture for mixed boundary conditions due to haller - dintelmann, tolksdorf, and the second author.
|
arxiv:1511.01715
|
for fixed positive integer $ n $, $ p \ in [ 0, 1 ] $, $ a \ in ( 0, 1 ) $, we prove that if a function $ g : \ mathbb { s } ^ { n - 1 } \ to \ mathbb { r } $ is sufficiently close to 1, in the $ c ^ a $ sense, then there exists a unique convex body $ k $ whose $ l _ p $ curvature function equals $ g $. this was previously established for $ n = 3 $, $ p = 0 $ by chen, feng, liu \ cite { cfl22 } and in the symmetric case by chen, huang, li, liu \ cite { chll20 }. relatedly, we show that if $ p = 0 $ and $ n = 4 $ or $ n \ leq 3 $ and $ p \ in [ 0, 1 ) $, and the $ l _ p $ curvature function $ g $ of a ( sufficiently regular, containing the origin ) convex body $ k $ satisfies $ \ lambda ^ { - 1 } \ leq g \ leq \ lambda $, for some $ \ lambda > 1 $, then $ \ max _ { x \ in \ mathbb { s } ^ { n - 1 } } h _ k ( x ) \ leq c ( p, \ lambda ) $, for some constant $ c ( p, \ lambda ) > 0 $ that depends only on $ p $ and $ \ lambda $. this also extends a result from chen, feng, liu \ cite { cfl22 }. along the way, we obtain a result, possibly of independent interest, concerning the question of when the support of the $ l _ p $ surface area measure is lower dimensional. finally, we establish a strong non - uniqueness result for the $ l _ p $ minkowski problem, for $ - n < p < 0 $.
|
arxiv:2308.03367
|
compositional zero - shot learning ( czsl ) aims to recognize unseen compositions formed from states and objects seen during training. since the same state may vary in visual appearance when entangled with different objects, czsl remains a challenging task. some methods recognize state and object with two separately trained classifiers, ignoring the interaction between object and state ; other methods try to learn a joint representation of the state - object compositions, leading to a domain gap between the seen and unseen composition sets. in this paper, we propose a novel siamese contrastive embedding network ( scen ) ( code : https : / / github. com / xduxyli / scen - master ) for unseen composition recognition. considering the entanglement between state and object, we embed the visual feature into a siamese contrastive space to capture their prototypes separately, alleviating the interaction between state and object. in addition, we design a state transition module ( stm ) to increase the diversity of training compositions, improving the robustness of the recognition model. extensive experiments indicate that our method significantly outperforms state - of - the - art approaches on three challenging benchmark datasets, including the recently proposed c - qga dataset.
|
arxiv:2206.14475
|
we briefly recount the long friendship that developed between ludwig and us ( moshe flato and i ), since we first met at icm 1966 in moscow. that friendship extended to his school and family, and persists to this day. its strong personal impact and main scientific components are sketched, including reflexions on what mathematical physics is ( or should be ).
|
arxiv:2304.07577
|
kinship verification is a long - standing research challenge in computer vision. visual differences in the face have a significant effect on the recognition capabilities of kinship systems. we argue that aggregating multiple sources of visual knowledge can better describe the characteristics of the subject for precise kinship identification. typically, age - invariant features can represent more natural facial details, and such age - related transformations are essential for face recognition due to the biological effects of aging. however, existing methods mainly focus on single - view image features for kinship identification, while more meaningful visual properties such as race and age are ignored in the feature learning step. to this end, we propose a novel deep collaborative multi - modal learning ( dcml ) method to integrate the underlying information presented in facial properties in an adaptive manner, strengthening the facial details for effective unsupervised kinship verification. specifically, we construct a well - designed adaptive feature fusion mechanism, which can jointly leverage complementary properties from different visual perspectives to produce composite features and draw greater attention to the most informative components of the spatial feature maps. in particular, an adaptive weighting strategy is developed based on a novel attention mechanism, which enhances the dependencies between different properties by decreasing the information redundancy in channels in a self - adaptive manner. extensive experimental evaluations on four widely - used datasets show that our dcml method consistently outperforms state - of - the - art kinship verification methods.
|
arxiv:2109.02804
|
in applications that involve sensor data, a useful measure of signal - to - noise ratio ( snr ) is the ratio of the root - mean - squared ( rms ) signal to the rms sensor noise. the present paper shows that, for numerical differentiation, the traditional snr is ineffective. in particular, it is shown that, for a harmonic signal with harmonic sensor noise, a natural and relevant snr is given by the ratio of the rms of the derivative of the signal to the rms of the derivative of the sensor noise. for a harmonic signal with white sensor noise, an effective snr is derived. implications of these observations for signal processing are discussed.
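the point above can be checked numerically : for a harmonic signal $ a \sin ( \omega t ) $ with harmonic noise, differentiation scales each component ' s rms by its frequency, so the derivative - based snr is ( signal amplitude times frequency ) over ( noise amplitude times frequency ). the amplitudes and frequencies below are made - up illustration values :

```python
import math

# Raw SNR vs derivative SNR for a low-frequency signal with small-amplitude,
# high-frequency harmonic sensor noise. Differentiation multiplies each
# component's rms by its frequency, so the noise can dominate the derivative
# even when the raw SNR looks excellent.

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

A, w = 1.0, 2.0      # signal amplitude and frequency (illustrative)
a, v = 0.01, 200.0   # noise amplitude and frequency (illustrative)

ts = [k * 5e-4 for k in range(20000)]  # 10 s sampled at 2 kHz
snr_raw = rms([A * math.sin(w * t) for t in ts]) / rms([a * math.sin(v * t) for t in ts])
snr_deriv = rms([A * w * math.cos(w * t) for t in ts]) / rms([a * v * math.cos(v * t) for t in ts])
print(snr_raw, snr_deriv)  # raw SNR near A/a = 100; derivative SNR near (A*w)/(a*v) = 1
```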
|
arxiv:2501.14906
|
let $ \ mfp ( d ) $ be a standard parabolic subalgebra of $ \ mfsl _ { n + 1 } ( k ) $ and $ \ mfu $ be the corresponding nilradical defined over an algebraically closed field $ k $ of characteristic $ p > 0 $. we construct a finite connected quiver $ q ( d ) $, through which we provide a combinatorial characterization of the centralizer $ c _ { \ mfu } ( x ( d ) ) $ of the richardson element $ x ( d ) $. we specifically focus on the centralizer when the levi factor of $ \ mfp ( d ) $ is determined by either one or two simple roots. this allows us to demonstrate that, under certain mild restrictions, the saturation rank of $ \ mfu $ equals the semisimple rank of the algebraic $ k $ - group $ \ sl _ { n + 1 } ( k ) $.
|
arxiv:2405.01956
|
the independence polynomial of a graph is the generating polynomial for the number of independent sets of each size, and its roots are called { \ em independence roots }. we investigate the stability of such polynomials, that is, conditions under which the roots lie in the left half - plane ( all of the real roots of independence polynomial are negative and hence lie in this half - plane ). we show stability for all independence polynomials of graphs with independence number at most three, but for larger independence number we show that the independence polynomials can have roots arbitrarily far to the right. we provide families of graphs whose independence polynomials are stable and ones that are not, utilizing various graph operations.
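to make the object concrete, a brute - force computation of the independence polynomial of a small graph : for the path on 4 vertices ( independence number 2, hence at most 3 ) the polynomial is 1 + 4x + 3x ^ 2, whose roots - 1 and - 1 / 3 are real and negative, hence in the left half - plane :

```python
from itertools import combinations

# Coefficient list c[k] = number of independent sets of size k, by exhaustive
# enumeration (fine for small graphs; exponential in general).

def independence_polynomial(n, edges):
    edge_set = {frozenset(e) for e in edges}
    coeffs = [0] * (n + 1)
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
                coeffs[k] += 1
    return coeffs

path4 = [(0, 1), (1, 2), (2, 3)]  # path graph on 4 vertices
print(independence_polynomial(4, path4))  # [1, 4, 3, 0, 0], i.e. 1 + 4x + 3x^2
```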
|
arxiv:1802.02478
|
the differential p _ t spectrum for vector boson production is computed at next - to - leading fixed order and including the resummation of threshold logarithms at next - to - next - to - leading logarithmic accuracy. a comparison is made to atlas data on direct photon and w production at high transverse momentum p _ t, finding excellent agreement. the resummation is achieved by factorizing contributions associated with different scales using soft - collinear effective theory. each part is then calculated perturbatively and the individual contributions are combined using renormalization group methods. a key advantage of the effective theory framework is that it indicates a set of natural scale choices, in contrast to the fixed - order calculation. resummation of logarithms of ratios of these scales leads to better agreement with data and reduced theoretical uncertainties.
|
arxiv:1206.6115
|
context. gas in protoplanetary disks mostly cools via thermal accommodation with dust particles. thermal relaxation is thus highly sensitive to the local dust size distributions and the spatial distribution of the grains. so far, the interplay between thermal relaxation and gas turbulence has not been dynamically modeled in hydrodynamic simulations of protoplanetary disks with dust. aims. we aim to study the effects of the vertical shear instability ( vsi ) on the thermal relaxation times, and vice versa. we are particularly interested in the influence of the initial dust grain size on the vsi and whether the emerging turbulence is sustained over long timescales. results. we find that the emergence of the vsi is strongly dependent on the initial dust grain size. coagulation also counteracts the emergence of hydrodynamic turbulence in our simulations, as shown by others before. starting a simulation with larger grains ( 100 $ \ mu $ m ) generally leads to a less turbulent outcome. while the inner disk regions ( within $ \ sim $ 70 au ) develop turbulence in all three simulations, we find that the simulations with larger particles do not develop vsi in the outer disk. conclusions. our simulations with dynamically calculated thermal accommodation times based on the drifting and settling dust distribution show that the vsi, once developed in a disk, can be sustained over long timescales, even if grain growth is occurring. the vsi corrugates the dust layer and even diffuses the smaller grains into the upper atmosphere, where they can cool the gas. whether the instability can emerge for a specific stratification depends on the initial dust grain sizes and the initial dust scale height. if the grains are initially $ \ gtrsim $ 100 $ \ mu $ m and if the level of turbulence is initially assumed to be low, we find no vsi turbulence in the outer disk regions.
|
arxiv:2406.10335
|
this textbook introduces the basic concepts of the theory of causal fermion systems, a recent approach to the description of fundamental physics. the theory yields quantum mechanics, general relativity and quantum field theory as limiting cases and is therefore a candidate for a unified physical theory. from the mathematical perspective, causal fermion systems provide a general framework for describing and analyzing non - smooth geometries and " quantum geometries. " the dynamics is described by a novel variational principle, the causal action principle. the book includes a detailed summary of the mathematical and physical preliminaries. it explains the physical concepts behind the causal fermion system approach from the basics. moreover, all the mathematical objects and structures are introduced step by step. the mathematical methods used for the analysis of causal fermion systems and the causal action principle are introduced in depth. many examples and applications are worked out. the textbook is addressed to master and graduate students in mathematics or physics. furthermore, it serves as a reference work for researchers working in the field.
|
arxiv:2411.06450
|
we introduce and study higher spherical algebras, an exotic family of finite - dimensional algebras over an algebraically closed field. we prove that every such algebra is derived equivalent to a higher tetrahedral algebra studied in [ 7 ], and hence that it is a tame symmetric periodic algebra of period four.
|
arxiv:1905.03205
|
we propose a novel approach to reduce memory consumption of the backpropagation through time ( bptt ) algorithm when training recurrent neural networks ( rnns ). our approach uses dynamic programming to balance a trade - off between caching of intermediate results and recomputation. the algorithm is capable of tightly fitting within almost any user - set memory budget while finding an optimal execution policy that minimizes the computational cost. computational devices have limited memory capacity, and maximizing computational performance given a fixed memory budget is a practical use - case. we provide asymptotic computational upper bounds for various regimes. the algorithm is particularly effective for long sequences. for sequences of length 1000, our algorithm saves 95 \ % of memory usage while using only one third more time per iteration than the standard bptt.
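a back - of - envelope illustration of the cache - vs - recompute trade - off behind memory - efficient bptt. this is the classic sqrt - checkpointing special case, not the paper ' s full dynamic program ( which optimizes the execution policy for any memory budget ) ; the accounting below is a simplified assumption :

```python
import math

# With roughly sqrt(T) evenly spaced checkpoints, memory drops from O(T)
# stored activations to O(sqrt(T)), at the cost of about one extra forward
# pass: each segment is recomputed once from its checkpoint during backward.

def sqrt_checkpoint_costs(T):
    k = max(1, round(math.sqrt(T)))     # number of checkpoints
    segment = math.ceil(T / k)          # steps between checkpoints
    memory = k + segment                # checkpoints + one in-flight segment
    forward_steps = T + (T - segment)   # original pass + recomputation
    return memory, forward_steps

for T in (100, 1000, 10000):
    print(T, sqrt_checkpoint_costs(T))
# memory grows like 2*sqrt(T) while compute stays below 2*T forward steps
```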
|
arxiv:1606.03401
|
this work explores how population - based engagement prediction can address cold - start at scale in large learning resource collections. the paper introduces i ) vle, a novel dataset that consists of content and video based features extracted from publicly available scientific video lectures coupled with implicit and explicit signals related to learner engagement, ii ) two standard tasks related to predicting and ranking context - agnostic engagement in video lectures with preliminary baselines and iii ) a set of experiments that validate the usefulness of the proposed dataset. our experimental results indicate that the newly proposed vle dataset leads to building context - agnostic engagement prediction models that are significantly more performant than ones based on previous datasets, mainly attributable to the increase in training examples. the vle dataset ' s suitability for building models towards computer science / artificial intelligence education focused on e - learning / mooc use - cases is also evidenced. further experiments in combining the built model with a personalising algorithm show promising improvements in addressing the cold - start problem encountered in educational recommenders. to our knowledge, this is the largest and most diverse publicly available dataset that deals with learner engagement prediction tasks. the dataset, helper tools, descriptive statistics and example code snippets are available publicly.
|
arxiv:2207.01504
|
we report on theoretical and experimental investigations of the nonlinear tolerance of single carrier and digital multicarrier approaches with probabilistically shaped constellations. experimental transmission of pcs16qam is assessed at 120 gbd over an ultra - long - haul distance.
|
arxiv:2109.10553
|
despite much attention, the comparison of reduced - dimension representations of high - dimensional data remains a challenging problem in multiple fields, especially when representations remain high - dimensional compared to sample size. we offer a framework for evaluating the topological similarity of high - dimensional representations of very high - dimensional data, a regime where topological structure is more likely captured in the distribution of topological " noise " than a few prominent generators. treating each representational map as a metric embedding, we compute the vietoris - rips persistence of its image. we then use the topological bootstrap to analyze the re - sampling stability of each representation, assigning a " prevalence score " for each nontrivial basis element of its persistence module. finally, we compare the persistent homology of representations using a prevalence - weighted variant of the wasserstein distance. notably, our method is able to compare representations derived from different samples of the same distribution and, in particular, is not restricted to comparisons of graphs on the same vertex set. in addition, representations need not lie in the same metric space. we apply this analysis to a cross - sectional sample of representations of functional neuroimaging data in a large cohort and hierarchically cluster under the prevalence - weighted wasserstein. we find that the ambient dimension of a representation is a stronger predictor of the number and stability of topological features than its decomposition rank. our findings suggest that important topological information lies in repeatable, low - persistence homology generators, whose distributions capture important and interpretable differences between high - dimensional data representations.
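one small, concrete piece of the pipeline described above : 0 - dimensional vietoris - rips persistence of a point cloud. components are born at scale 0 and die at the minimum - spanning - tree edge weights ( single linkage ), computed here with kruskal ' s algorithm. higher - dimensional persistence, the bootstrap prevalence scores, and the prevalence - weighted wasserstein comparison are beyond this sketch :

```python
from itertools import combinations
import math

# H0 death times of a Vietoris-Rips filtration: sort all pairwise distances
# and union-find merge clusters; each merge kills one connected component.

def h0_death_times(points):
    dists = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(len(points)), 2)
    )
    parent = list(range(len(points)))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    deaths = []
    for d, i, j in dists:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # one component dies when two clusters merge
    return deaths

# two well-separated pairs: two small deaths (within pairs), one large (between)
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 0.0), (5.1, 0.0)]
print(h0_death_times(pts))  # approximately [0.1, 0.1, 4.9]
```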
|
arxiv:2306.13802
|
the vast challenges have been shown to be an effective tool in visual analytics education, encouraging student learning while enforcing good visualization design and development practices. however, research has observed that students often struggle at identifying a good " starting point " when tackling the vast challenge. consequently, students who could not identify a good starting point failed at finding the correct solution to the challenge. in this paper, we propose a preliminary guideline for helping students approach the vast challenge and identify initial analysis directions. we recruited two students to analyze the vast 2017 challenge using a hypothesis - driven approach, where they were required to pre - register their hypotheses prior to inspecting and analyzing the full dataset. from their experience, we developed a prescriptive guideline for other students to tackle vast challenges. in a preliminary study, we found that the students were able to use the guideline to generate well - formed hypotheses that could lead them towards solving the challenge. additionally, the students reported that with the guideline, they felt like they had concrete steps that they could follow, thereby alleviating the burden of identifying a good starting point in their analysis process.
|
arxiv:2211.00567
|
this text is based on lectures by the author in the summer school ` algebraic geometry and hypergeometric functions ' in istanbul in june 2005. it gives a review of some of the basic aspects of the theory of hypergeometric structures of gelfand, kapranov and zelevinsky, including differential equations, integrals and series, with emphasis on the latter. the secondary fan is constructed and subsequently used to describe the ` geography ' of the domains of convergence of the \ gamma - series. a solution to certain resonance problems is presented and applied in the context of mirror symmetry. many examples and some exercises are given throughout the paper.
|
arxiv:math/0511351
|
gravitational wave observations offer unique opportunities to probe gravity in the strong and dynamical regime, which was difficult to access previously. we here review two theory - agnostic ways to carry out tests of general relativity with gravitational waves, namely ( i ) parameterized waveform tests and ( ii ) consistency tests between the inspiral and merger - ringdown portions. for each method, we explain the formalism, followed by results from existing events, and finally we discuss future prospects with upgraded detectors, including the possibility of using multi - band gravitational - wave observations with ground - based and space - borne interferometers. we show that such future observations have the potential to improve upon current bounds on theories beyond general relativity by many orders of magnitude. we conclude by listing several open questions that remain to be addressed.
|
arxiv:1908.07103
|
the defining trait of magnetars, the most strongly magnetized neutron stars ( nss ), is their transient activity in the x / $ \ gamma $ - bands. in particular, many of them undergo phases of enhanced emission, the so - called outbursts, during which the luminosity rises by a factor $ \ sim $ 10 $ - $ 1000 in a few hours to then decay over months / years. outbursts often exhibit a thermal spectrum, associated with the appearance of hotter regions on the surface of the star, which subsequently change in shape and cool down. here we simulate the unfolding of a sudden, localized heat injection in the external crust of a ns with a 3d magneto - thermal evolution code, finding that this can reproduce the main features of magnetar outbursts. a full 3d treatment allows us to study for the first time the inherently asymmetric hot - spots which appear on the surface of the star as the result of the injection and to follow the evolution of their temperature and shape. we investigate the effects produced by different physical conditions in the heated region, highlighting in particular how the geometry of the magnetic field plays a key role in determining the properties of the event.
|
arxiv:2208.10178
|
we report on the oxidation of self - assembled silicene nanoribbons grown on the ag ( 110 ) surface using scanning tunneling microscopy and high - resolution photoemission spectroscopy. the results show that silicene nanoribbons present a strong resistance towards oxidation using molecular oxygen. this can be overcome by increasing the electric field in the stm tunnel junction above a threshold of + 2. 6 v to induce oxygen dissociation and reaction. the higher reactivity of the silicene nanoribbons towards atomic oxygen is observed as expected. the hr - pes confirm these observations : even at high exposures of molecular oxygen, the si 2p core - level peaks corresponding to pristine silicene remain dominant, reflecting a very low reactivity to molecular oxygen. complete oxidation is obtained following exposure to high doses of atomic oxygen ; the si 2p core level peak corresponding to pristine silicene disappears.
|
arxiv:2006.13780
|
chimera states arising in the classic kuramoto system of two - dimensional phase coupled oscillators are transient, but they are " long " transients in the sense that the average transient lifetime grows exponentially with the system size. for reasonably large systems, e. g., those consisting of a few hundred oscillators, it is infeasible to numerically calculate or experimentally measure the average lifetime, so the chimera states are practically permanent. we find that small perturbations in the third dimension, which make the system " slightly " three - dimensional, will reduce the transient lifetime dramatically. in particular, under such a perturbation, the practically infinite average transient lifetime becomes extremely short, because it scales only logarithmically with the magnitude of the perturbation. physically, this means that reducing the perturbation strength over many orders of magnitude, insofar as it is not zero, would result in only an incremental increase in the lifetime. the uncovered type of fragility of chimera states raises concerns about their observability in physical systems.
|
arxiv:2004.04769
|
well - designed diagnostic tasks have played a key role in studying the failure of neural nets ( nns ) to generalize systematically. famous examples include scan and compositional table lookup ( ctl ). here we introduce ctl + +, a new diagnostic dataset based on compositions of unary symbolic functions. while the original ctl is used to test length generalization or productivity, ctl + + is designed to test systematicity of nns, that is, their capability to generalize to unseen compositions of known functions. ctl + + splits functions into groups and tests performance on group elements composed in a way not seen during training. we show that recent ctl - solving transformer variants fail on ctl + +. the simplicity of the task design allows for fine - grained control of task difficulty, as well as many insightful analyses. for example, we measure how much overlap between groups is needed by tested nns for learning to compose. we also visualize how learned symbol representations in outputs of functions from different groups are compatible in case of success but not in case of failure. these results provide insights into failure cases reported on more complex compositions in the natural language domain. our code is public.
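a toy version of the systematicity split described above ( the function names and group structure are illustrative, not the actual ctl + + specification ) : unary functions are partitioned into groups, training sees some cross - group compositions, and testing uses a held - out composition of functions that were each seen individually :

```python
from itertools import product

# Compositions of known unary functions, with one cross-group pair held out.
# A systematic learner that mastered the training pairs should also handle
# the unseen test pair; this is the generalization CTL++-style splits probe.

functions = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: 2 * x,
    "neg": lambda x: -x,
    "sqr": lambda x: x * x,
}
groups = {"A": ["inc", "dbl"], "B": ["neg", "sqr"]}

all_pairs = list(product(groups["A"], groups["B"]))
train_pairs = all_pairs[:-1]
test_pairs = all_pairs[-1:]  # unseen composition of individually known functions

def apply_composition(pair, x):
    f, g = pair
    return functions[f](functions[g](x))  # f(g(x))

print(test_pairs, apply_composition(test_pairs[0], 3))  # [('dbl', 'sqr')] 18
```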
|
arxiv:2210.06350
|
physics - informed neural networks ( pinn ) have evolved into a powerful tool for solving partial differential equations, which has been applied to various fields such as energy, environment, engineering, etc. when utilizing pinn to solve partial differential equations, it is common to rely on automatic differentiation ( ad ) to compute the residuals of the governing equations. this can lead to certain precision losses, thus affecting the accuracy of the network prediction. this paper proposes a finite volume physics - informed neural network ( fv - pinn ), designed to address steady - state problems of incompressible flow. this method divides the solution domain into multiple grids. instead of calculating the residuals of the navier - stokes equations at collocation points within the grid, as is common in traditional pinns, this approach evaluates them at gaussian integral points on the grid boundaries using gauss ' s theorem. the loss function is constructed using the gaussian integral method, and the differentiation order for velocity is reduced. to validate the effectiveness of this approach, we predict the velocity and pressure fields for two typical examples in fluid topology optimization. the results are compared with the commercial software comsol, which indicates that fv - pinn significantly improves the prediction accuracy of both the velocity and pressure fields while accelerating the training speed of the network.
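a numerical check of the divergence - theorem identity that the method above exploits : the volume integral of div f over a cell equals the outward boundary flux, evaluated here with 2 - point gauss quadrature on each edge of the unit square. the field f = ( x ^ 2, x y ) with div f = 3x is a simplified stand - in for the navier - stokes residual terms used in the paper :

```python
import math

# Boundary-flux evaluation of a cell integral via Gauss's theorem.
# 2-point Gauss quadrature on [0, 1] is exact for polynomials up to degree 3,
# so the flux matches the exact volume integral of div F = 3x, which is 1.5.

GAUSS_PTS = [0.5 - 0.5 / math.sqrt(3), 0.5 + 0.5 / math.sqrt(3)]  # nodes on [0, 1]

def Fx(x, y): return x * x
def Fy(x, y): return x * y

def boundary_flux():
    flux = 0.0
    for t in GAUSS_PTS:                 # each node has weight 1/2; edges have length 1
        flux += 0.5 * ( Fx(1.0, t)      # right edge, outward normal (+1, 0)
                      - Fx(0.0, t)      # left edge, outward normal (-1, 0)
                      + Fy(t, 1.0)      # top edge, outward normal (0, +1)
                      - Fy(t, 0.0))     # bottom edge, outward normal (0, -1)
    return flux

print(boundary_flux())  # 1.5, the integral of div F = 3x over the unit square
```

in fv - pinn the same move lets the residual use one fewer derivative of the velocity field than the ad - based collocation residual.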
|
arxiv:2411.17095
|
let $ g $ be a principal modulus with rational fourier coefficients for a discrete subgroup of $ \ mathrm { sl } _ 2 ( \ mathbb { r } ) $ between $ \ gamma ( n ) $ and $ \ gamma _ 0 ( n ) ^ \ dag $ for a positive integer $ n $. let $ k $ be an imaginary quadratic field. we give a simple proof of the fact that the singular value of $ g $ generates the ray class field modulo $ n $ or the ring class field of the order of conductor $ n $ over $ k $. furthermore, we construct primitive generators of ray class fields of arbitrary moduli over $ k $ in terms of hasse ' s two generators.
|
arxiv:1102.1174
|
we consider model a dynamics for a quench from the disordered into the ordered phase of su ( 3 ) lattice gauge theory and the analogue 3d 3 - state potts model. for the gauge model this corresponds to a rapid heating from the confined to the deconfined phase. the exponential growth factors of low - lying structure function modes are numerically calculated. the linear theory of spinodal decomposition is used to determine the critical modes. this allows for the debye screening mass estimation in an effective phenomenological model. the quench leads to competing vacuum domains, which make the equilibration of the qcd vacuum after the heating non - trivial. the influence of such domains on the gluonic energy density is studied.
|
arxiv:hep-lat/0509040
|
using a separable many - body variational wavefunction, we formulate a self - consistent effective hamiltonian theory for fermionic many - body system. the theory is applied to the two - dimensional hubbard model as an example to demonstrate its capability and computational effectiveness. most remarkably for the hubbard model in 2 - d, a highly unconventional quadruple - fermion non - cooper - pair order parameter is discovered.
|
arxiv:1910.12136
|
magnon spin currents in insulating magnets are useful for low - power spintronics. however, in magnets stacked by antiferromagnetic ( afm ) exchange coupling, which have recently aroused significant interest for potential applications in spintronics, these currents are largely counteracted by opposite magnetic sublattices, thus suppressing their net effect. contrary to this common observation, here, we show that magnets with x - type afm stacking, where opposite magnetic sublattices form orthogonal intersecting chains, support giant magnon spin currents with minimal compensation. our model hamiltonian calculations predict magnetic chain locking of magnon spin currents in these x - type magnets, significantly reducing their compensation ratio. in addition, the one - dimensional nature of the chain - like magnetic sublattices enhances magnon spin conductivities, surpassing those of two - dimensional ferromagnets and canonical altermagnets. notably, uncompensated x - type magnets, such as odd - layer antiferromagnets and ferrimagnets, can exhibit magnon spin currents polarized opposite to those expected from their net magnetization. these unprecedented properties of x - type magnets, combined with their inherent advantages resulting from afm coupling, offer a promising new path for low - power high - performance spintronics.
|
arxiv:2502.13511
|
a major challenge in cyber - threat analysis is combining information from different sources to find the person or the group responsible for a cyber - attack. it is one of the most important technical and policy challenges in cyber - security. the lack of ground truth for the individual responsible for an attack has limited previous studies. in this paper, we take a first step towards overcoming this limitation by building a dataset from the capture - the - flag event held at defcon, and propose an argumentation model based on a formal reasoning framework called delp ( defeasible logic programming ) designed to aid an analyst in attributing a cyber - attack. we build models from latent variables to reduce the search space of culprits ( attackers ), and show that this reduction significantly improves the accuracy of classification - based approaches in identifying the attacker, from 37 % to 62 %.
|
arxiv:1607.02171
|
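the latent-variable pruning step described in the abstract above can be sketched in a few lines. this is a hypothetical illustration, not the paper's delp model: the "toolchain" latent feature, team names, and data are all invented, and in the paper the reduced set would then feed a classifier or defeasible-logic argumentation.

```python
# Sketch: a latent variable observed in the attack artifacts (here, a coarse
# "toolchain" label recovered from the payload) prunes the candidate-attacker
# set before any classification is attempted, shrinking the search space.

def prune_candidates(candidates, observed_latent):
    """Keep only candidates whose latent profile matches the observation."""
    return [c for c in candidates if c["toolchain"] == observed_latent]

candidates = [
    {"team": "alpha",   "toolchain": "pyobf"},
    {"team": "bravo",   "toolchain": "packer-x"},
    {"team": "charlie", "toolchain": "pyobf"},
    {"team": "delta",   "toolchain": "upx"},
]

# Suppose the captured payload is identified as obfuscated with "pyobf":
reduced = prune_candidates(candidates, "pyobf")
```

with the candidate pool halved, a downstream classifier has fewer culprits to distinguish, which is the mechanism behind the reported accuracy gain.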
we propose a novel stochastic radio resource allocation strategy that achieves long - term fairness considering backhaul and air - interface capacity limitations. the base station is considered to be powered only by a finite battery that is recharged by an energy harvesting source. such energy harvesting is also taken into account in the proposed resource allocation strategy. this technical scenario can be found in remote rural areas where the backhaul connection is very limited and the base stations are fed with solar panels of reduced size. our results show that the proposed scheme achieves higher fairness among the users and, in some cases, a higher sum - rate compared with the well - known proportional fair scheduler.
|
arxiv:1501.03711
|
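the proportional fair scheduler used as the baseline in the abstract above is a standard algorithm and can be sketched compactly. this is a generic pf scheduler, not the paper's proposed stochastic scheme; the averaging window `tc` and the rayleigh rate model are illustrative assumptions.

```python
import numpy as np

def pf_schedule(rates, tc=100.0, T0=1e-3):
    """Proportional fair scheduling: at each slot serve the user maximizing
    instantaneous rate over average throughput, r_i(t) / T_i(t), then update
    the exponentially weighted average throughput with window tc."""
    n_slots, n_users = rates.shape
    T = np.full(n_users, T0)        # average throughput estimates
    served = np.zeros(n_users)      # slots allocated to each user
    for t in range(n_slots):
        i = int(np.argmax(rates[t] / T))
        served[i] += 1
        r = np.zeros(n_users)
        r[i] = rates[t, i]          # only the served user gets rate this slot
        T = (1.0 - 1.0 / tc) * T + (1.0 / tc) * r
    return served

# Three users with different mean channel quality (Rayleigh fading):
rng = np.random.default_rng(0)
rates = rng.rayleigh(scale=[1.0, 2.0, 3.0], size=(5000, 3))
slots = pf_schedule(rates)
```

dividing by the running average throughput is what makes the scheduler fair in time share even when users have very different mean rates, which is the baseline behavior the proposed stochastic scheme is compared against.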