I review a class of exact string backgrounds, which appear in hierarchies, where the boundary of the target space of an exact sigma model is itself the target space of another exact model. From the worldsheet viewpoint this is due to the existence of $(1,1)$ operators based on parafermions. From the target-space side, it is reminiscent of the structure of maximally symmetric Friedmann-Robertson-Walker cosmological solutions, though with broken homogeneity. Cosmological evolution in this framework raises again the question of the nature of time in string theory.
arxiv:hep-th/0612243
Many clustering algorithms are guided by certain cost functions such as the widely-used $k$-means cost. These algorithms divide data points into clusters with often complicated boundaries, creating difficulties in explaining the clustering decision. In a recent work, Dasgupta, Frost, Moshkovitz, and Rashtchian (ICML 2020) introduced explainable clustering, where the cluster boundaries are axis-parallel hyperplanes and the clustering is obtained by applying a decision tree to the data. The central question here is: how much does the explainability constraint increase the value of the cost function? Given $d$-dimensional data points, we show an efficient algorithm that finds an explainable clustering whose $k$-means cost is at most $k^{1-2/d}\,\mathrm{poly}(d\log k)$ times the minimum cost achievable by a clustering without the explainability constraint, assuming $k, d \ge 2$. Taking the minimum of this bound and the $k\,\mathrm{polylog}(k)$ bound in independent work by Makarychev-Shan (ICML 2021), Gamlath-Jia-Polak-Svensson (2021), or Esfandiari-Mirrokni-Narayanan (2021), we get an improved bound of $k^{1-2/d}\,\mathrm{polylog}(k)$, which we show is optimal for every choice of $k, d \ge 2$ up to a poly-logarithmic factor in $k$. For $d = 2$ in particular, we show an $O(\log k \log\log k)$ bound, improving near-exponentially over the previous best bound of $O(k \log k)$ by Laber and Murtinho (ICML 2021).
arxiv:2106.15566
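The decision-tree clustering described above can be illustrated with a small sketch: run ordinary $k$-means first, then build a threshold tree whose axis-parallel cuts separate the reference centers. The published algorithms choose cuts carefully to bound the cost increase; the greedy center-splitting heuristic below is only a toy stand-in, and all function names are illustrative, not from the paper.

```python
import numpy as np

def threshold_tree(points, centers):
    """Recursively split with axis-parallel cuts until each leaf contains
    exactly one reference center (toy version of explainable k-means)."""
    def build(idx_pts, idx_ctr):
        if len(idx_ctr) == 1:
            return ("leaf", idx_ctr[0])
        # candidate thresholds: midpoints between distinct center coordinates;
        # greedily pick the cut that splits the centers most evenly
        best = None
        for d in range(centers.shape[1]):
            vals = np.unique(centers[idx_ctr, d])
            for a, b in zip(vals[:-1], vals[1:]):
                theta = (a + b) / 2
                n_left = sum(centers[c, d] <= theta for c in idx_ctr)
                balance = abs(2 * n_left - len(idx_ctr))
                if best is None or balance < best[0]:
                    best = (balance, d, theta)
        _, d, theta = best
        lp = [i for i in idx_pts if points[i, d] <= theta]
        rp = [i for i in idx_pts if points[i, d] > theta]
        lc = [c for c in idx_ctr if centers[c, d] <= theta]
        rc = [c for c in idx_ctr if centers[c, d] > theta]
        return ("node", d, theta, build(lp, lc), build(rp, rc))
    return build(list(range(len(points))), list(range(len(centers))))

def assign(tree, x):
    """Follow the axis-parallel cuts down to a leaf cluster label."""
    while tree[0] == "node":
        _, d, theta, left, right = tree
        tree = left if x[d] <= theta else right
    return tree[1]
```

The resulting clustering is explainable in the paper's sense: each point's cluster is justified by a short list of single-coordinate threshold comparisons.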
Let $T$ be a torus acting on $\mathbb{C}^n$ in such a way that, for all $1 \leq k \leq n$, the induced action on the Grassmannian $G(k,n)$ has only isolated fixed points. This paper proposes a natural, elementary, explicit description of the corresponding $T$-equivariant Schubert calculus. In a suitable natural basis of the $T$-equivariant cohomology, seen as a module over the $T$-equivariant cohomology of a point, it is formally the same as the ordinary cohomology of a Grassmann bundle. The main result, useful for computational purposes, is that the $T$-equivariant cohomology of $G(k,n)$ can be realized as the quotient of a ring generated by derivations on the exterior algebra of a free module of rank $n$ over the $T$-equivariant cohomology of a point.
arxiv:math/0703445
The Laplace-Beltrami operator (LBO) is a fundamental object associated to Riemannian manifolds, which encodes all intrinsic geometry of the manifolds and has many desirable properties. Recently, we proposed a novel numerical method, the point integral method (PIM), to discretize the Laplace-Beltrami operator on point clouds \cite{lss}. In this paper, we analyze the convergence of the point integral method (PIM) for the Poisson equation with Neumann boundary condition on submanifolds isometrically embedded in Euclidean spaces.
arxiv:1403.2141
We present results from a 20 ksec XMM observation of Mrk 231. EPIC data reveal strong line emission due to Fe K$\alpha$, which has rarely been detected in this class. The line energy is consistent with an origin in neutral Fe, and the width is equivalent to a velocity dispersion ~18,000 km/s; thus the line may be attributed to transmission and/or reflection from a distribution of emitting clouds. If, instead, the line originates in the accretion disk, then the line strength and flat X-ray continuum support some contribution from reflection, although the data disfavor a model where the hard X-ray band is purely reflected X-rays from a disk. Line parameters are similar to those obtained for the BAL QSO H1413+117.
arxiv:astro-ph/0308030
We construct all skew braces of size $pq$ (where $p > q$ are primes) by using Byott's classification of Hopf-Galois extensions of the same degree. For $p \not\equiv 1 \pmod{q}$ there exists only one skew brace, which is the trivial one. When $p \equiv 1 \pmod{q}$, we have $2q+2$ skew braces, two of which are of cyclic type (so, contained in Rump's classification) and $2q$ of non-abelian type.
arxiv:1908.03228
In this paper, we study the restoration of gauge symmetry and up to half the supersymmetry ($N=(2,0)$ or $N=(1,1)$ in two dimensions) for $N=2$ non-abelian Chern-Simons theories in the presence of a boundary. We describe the boundary action, which is a supersymmetric WZW model coupled to the bulk Chern-Simons theory. Unlike the $N=1$ case, higher supersymmetry ($N=(2,0)$) will endow the group manifold of the WZW model with a complex structure. Therefore, the $N=(2,0)$ WZW model in our paper is constructed via a coset space $G_C/G$, where $G$ is the same as the gauge group in the Chern-Simons action.
arxiv:1601.05429
We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a broad data set of visual concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are given labels across a range of objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that the interpretability of units is equivalent to random linear combinations of units; then we apply our method to compare the latent representations of various networks when trained to solve different supervised and self-supervised training tasks. We further analyze the effect of training iterations, compare networks trained with different initializations, examine the impact of network depth and width, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.
arxiv:1704.05796
Generating novel and creative scientific hypotheses is a cornerstone in achieving artificial general intelligence. Large language and reasoning models have the potential to aid in the systematic creation, selection, and validation of scientifically informed hypotheses. However, current foundation models often struggle to produce scientific ideas that are both novel and feasible. One reason is the lack of a dedicated dataset that frames scientific hypothesis generation (SHG) as a natural language generation (NLG) task. In this paper, we introduce HypoGen, the first dataset of approximately 5500 structured problem-hypothesis pairs extracted from top-tier computer science conferences, structured with a Bit-Flip-Spark schema, where the Bit is the conventional assumption, the Spark is the key insight or conceptual leap, and the Flip is the resulting counterproposal. HypoGen uniquely integrates an explicit chain-of-reasoning component that reflects the intellectual process from Bit to Flip. We demonstrate that framing hypothesis generation as conditional language modelling, with the model fine-tuned on Bit-Flip-Spark and the chain-of-reasoning (and where, at inference, we only provide the Bit), leads to improvements in the overall quality of the hypotheses. Our evaluation employs automated metrics and LLM judge rankings for overall quality assessment. We show that by fine-tuning on our HypoGen dataset we improve the novelty, feasibility, and overall quality of the generated hypotheses. The HypoGen dataset is publicly available at huggingface.co/datasets/universetbd/hypogen-dr1.
arxiv:2504.12976
In this paper we prove some generalizations of the classical Hasse-Davenport product relation for certain arithmetic factors defined on $p$-adic fields; among them one finds the epsilon-factors appearing in Tate's thesis. We then show that these generalizations are equivalent to some representation-theoretic identities relating the determinant of ramified local coefficient matrices defined for coverings of $\mathrm{SL}(2,F)$ to Plancherel measures and gamma-factors.
arxiv:2306.10928
In this chapter, using ab-initio molecular dynamics, we introduce the latest simulation results on two materials for flash memory devices: Ge2Sb2Te5 and Ge-Se-Cu-Ag. This chapter is a review of our previous work, including some of our published figures and text in Cai et al. (2010) and Prasai & Drabold (2011), and also includes several new results.
arxiv:1103.6051
Operating HADES at the future FAIR SIS-100 accelerator challenges the rate capability of the DAQ and electronics. A new, more robust version of the front-end electronics needs to be built for the HADES drift chamber system. Due to the unavailability of the previously used ASD-8 analog read-out ASIC, PASTTREC (PANDA straw tube read-out ASIC) was tested as an ASD-8 replacement in different scenarios, including a beam test. PASTTREC falls 20% short of the ASD-8 time precision but performs better w.r.t. signal charge measurements and overall operation stability. The measured time precision as a function of distance to the sense wire was modeled within a 3D Garfield simulation of the HADES drift cell.
arxiv:1810.12695
To conduct a more realistic evaluation of virtualized network function resource allocation algorithms, researchers need data on: (1) potential NF chains (policies), (2) traffic flows passing through these NF chains, (3) how dynamic traffic changes affect the NFs (scale out/in), and (4) different data center architectures for the NF chains. However, there are no publicly available real data sets on NF chains and the traffic that passes through them. We have therefore used data from previous empirical analyses and made some assumptions to derive the data required to evaluate resource allocation algorithms for VNFs. We developed four programs to model the gathered data and generate the required data. All gathered data and data modelling programs are publicly available in a GitHub repository.
arxiv:1702.00369
Given a permutation $\pi \in S_n$, construct a graph $G_\pi$ on the vertex set $\{1, 2, \ldots, n\}$ by joining $i$ to $j$ if (i) $i < j$ and $\pi(i) < \pi(j)$ and (ii) there is no $k$ such that $i < k < j$ and $\pi(i) < \pi(k) < \pi(j)$. We say that $\pi$ is forest-like if $G_\pi$ is a forest. We first characterize forest-like permutations in terms of pattern avoidance, and then by a certain linear map being onto. Thanks to recent results of Woo and Yong, this shows that forest-like permutations characterize Schubert varieties which are locally factorial. Thus forest-like permutations generalize smooth permutations (corresponding to smooth Schubert varieties). We compute the generating function of forest-like permutations. As in the smooth case, it turns out to be algebraic. We then adapt our method to count permutations for which $G_\pi$ is a tree, or a path, and recover the known generating function of smooth permutations.
arxiv:math/0603617
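The graph construction above is elementary enough to state as code. The sketch below (function names are illustrative) builds $G_\pi$ directly from the definition and tests forest-ness with a union-find cycle check; for instance, the identity permutation yields a path, while $\pi = 1324$ (one-line notation) produces a 4-cycle and so is not forest-like.

```python
def permutation_graph(pi):
    """Build G_pi on {1,...,n}: join i < j when pi(i) < pi(j) and there is
    no k with i < k < j and pi(i) < pi(k) < pi(j).  pi is 1-indexed,
    given as a list in one-line notation."""
    n = len(pi)
    p = {i + 1: pi[i] for i in range(n)}
    edges = []
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            if p[i] < p[j] and not any(
                p[i] < p[k] < p[j] for k in range(i + 1, j)
            ):
                edges.append((i, j))
    return edges

def is_forest(n, edges):
    """A graph on n vertices is a forest iff union-find never closes a cycle."""
    parent = list(range(n + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False  # edge inside an existing component closes a cycle
        parent[ra] = rb
    return True
```

For $\pi = 1324$ the edges are $(1,2), (1,3), (2,4), (3,4)$, which form the cycle $1$-$2$-$4$-$3$-$1$.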
It is shown that all the countable superatomic Boolean algebras of finite rank have the small index property.
arxiv:2206.07017
We perform molecular (CO and CN) line observations using the IRAM 30m telescope toward two small regions near the western edge of supernova remnant (SNR) W50/SS433. The CO observation reveals spatial correspondence of two molecular clumps at the local-standard-of-rest (LSR) velocity around +53 km s$^{-1}$ with multiwavelength local features of the W50/SS433 system. One of the two clumps appears to be embedded in a void of diffuse radio and X-ray emission. Toward the two clumps, asymmetric broad line profiles of the $^{12}$CO lines are obtained, which provide kinematic evidence of the association between the clumps and the jet-related gas. The $^{12}$CO $J=2-1$/$J=1-0$ line ratios ($>0.9$) and the kinetic temperatures ($\sim 30$ K) of the clumps are distinctively higher than those of the clumps at other LSR velocities along the same line of sight, which may be physical signatures of the association. We show that the clump coincident with the void can survive the thermal heating if it is surrounded by hot gas, with an evaporation timescale much larger than the age of SNR W50. We also show that the thermal equilibrium in the high-temperature clumps can be maintained by the heating of the penetrating environmental cosmic rays. CN ($J=3/2-1/2$) line emission is detected in the two clumps, and the derived CN abundances are much higher than those in interstellar molecular clouds (MCs) and in SNR-interacting MCs.
arxiv:2002.09829
The relatively hot temperature of the human body causes people to act as long-wave infrared light sources. Since this emitted light has a longer wavelength than visible light, many surfaces in typical scenes act as infrared mirrors with strong specular reflections. We exploit the thermal reflections of a person onto objects in order to locate their position and reconstruct their pose, even if they are not visible to a normal camera. We propose an analysis-by-synthesis framework that jointly models the objects, people, and their thermal reflections, which allows us to combine generative models with differentiable rendering of reflections. Quantitative and qualitative experiments show our approach works in highly challenging cases, such as with curved mirrors or when the person is completely unseen by a normal camera.
arxiv:2305.01652
Medical studies of chronic disease are often interested in the relation between longitudinal risk factor profiles and individuals' later-life disease outcomes. These profiles may typically be subject to intermediate structural changes due to treatment or environmental influences. The analysis of such studies may be handled by the joint model framework. However, current joint modeling does not consider structural changes in the residual variability of the risk profile, nor does it consider the influence of subject-specific residual variability on the time-to-event outcome. In the present paper, we extend the joint model framework to address these two heterogeneous intra-individual variabilities. A Bayesian approach is used to estimate the unknown parameters, and simulation studies are conducted to investigate the performance of the method. The proposed joint model is applied to the Framingham Heart Study to investigate the influence of anti-hypertensive medication on systolic blood pressure variability together with its effect on the risk of developing cardiovascular disease. We show that anti-hypertensive medication is associated with elevated systolic blood pressure variability and that increased variability elevates the risk of developing cardiovascular disease.
arxiv:1912.06398
Experimental observations of highly reduced thermal conductivity in surface-roughness-dominated silicon nanowires have generated renewed interest in low-dimensional thermoelectric devices. Using a previous work where the scattering of phonons from a rough surface is mapped to scattering from randomly situated localized phonons in the bulk of a smooth nanowire, we consider the thermal current across a nanowire for various strengths of surface disorder. We use non-equilibrium Green's function techniques that allow us to evaluate the thermal current beyond the linear response regime, for arbitrary cold and hot temperatures of the two semi-infinite connecting leads. We show how the surface roughness affects the frequency dependence of the thermal current, eventually leading to a temperature-dependent reduction of the net current at high temperatures. We use a universal disorder parameter to describe the surface roughness as has been proposed, and show that the dependence of the net current on this parameter provides a natural explanation for the experimentally observed differences between smooth and rough surfaces. We argue that a systematic study of the thermal current for different values of the temperature difference between the two sides of a surface-roughness-dominated nanowire for various strengths of disorder would help in our understanding of how best to optimize the thermoelectric efficiency.
arxiv:1906.06739
The morphology of the stagnated plasma resulting from Magnetized Liner Inertial Fusion (MagLIF) is measured by imaging the self-emission x-rays coming from the multi-keV plasma. Equivalent diagnostic response can be generated by integrated radiation-magnetohydrodynamic (rad-MHD) simulations from programs such as HYDRA and GORGON. There have been only limited quantitative ways to compare the image morphology, that is the texture, of simulations and experiments. We have developed a metric of image morphology based on the Mallat scattering transformation (MST), a transformation that has proved to be effective at distinguishing textures, sounds, and written characters. This metric is designed, demonstrated, and refined by classifying ensembles (i.e., classes) of synthetic stagnation images, and by regressing an ensemble of synthetic stagnation images to the morphology (i.e., model) parameters used to generate the synthetic images. We use this metric to quantitatively compare simulations to experimental images, to compare experimental images to each other, and to estimate the morphological parameters of the experimental images with uncertainty. This coordinate space has proved very adept at enabling a sophisticated relative background subtraction in the MST space, which was needed to compare the experimental self-emission images to the rad-MHD simulation images.
arxiv:2005.01600
In this study, a multiple-comparison approach is developed for detecting faint hyperspectral sources. The detection method relies on a sparse and non-negative representation on a highly coherent dictionary to track a spatially varying source. A robust control of the detection errors is ensured by learning the test statistic distributions on the data. The resulting control is based on the false discovery rate, to take into account the large number of pixels to be tested. This method is applied to data recently recorded by the three-dimensional spectrograph Multi Unit Spectroscopic Explorer (MUSE).
arxiv:1702.00609
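The false-discovery-rate control mentioned above can be illustrated with the standard Benjamini-Hochberg step-up procedure. The paper itself calibrates its test statistic distributions on the data; the sketch below only shows the generic FDR mechanics when many pixels are tested at once, and the function name is my own.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses under the
    Benjamini-Hochberg step-up procedure at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # compare sorted p-values to the BH line alpha * i / m
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        kmax = np.max(np.nonzero(below)[0])  # largest i with p_(i) <= alpha*i/m
        reject[order[: kmax + 1]] = True     # step-up: reject all smaller p-values
    return reject
```

Note the step-up character: a hypothesis can be rejected even when its own p-value exceeds its threshold, provided some larger p-value falls under the BH line.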
Learned optimizers -- neural networks that are trained to act as optimizers -- have the potential to dramatically accelerate training of machine learning models. However, even when meta-trained across thousands of tasks at huge computational expense, blackbox learned optimizers often struggle with stability and generalization when applied to tasks unlike those in their meta-training set. In this paper, we use tools from dynamical systems to investigate the inductive biases and stability properties of optimization algorithms, and apply the resulting insights to designing inductive biases for blackbox optimizers. Our investigation begins with a noisy quadratic model, where we characterize the conditions under which optimization is stable, in terms of the eigenvalues of the training dynamics. We then introduce simple modifications to a learned optimizer's architecture and meta-training procedure which lead to improved stability and improve the optimizer's inductive bias. We apply the resulting learned optimizer to a variety of neural network training tasks, where it outperforms the current state-of-the-art learned optimizer -- at matched optimizer computational overhead -- with regard to optimization performance and meta-training speed, and is capable of generalization to tasks far different from those it was meta-trained on.
arxiv:2209.11208
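The eigenvalue stability analysis mentioned above rests on a classical fact that is easy to check numerically: gradient descent on a quadratic $\tfrac{1}{2}x^\top H x$ iterates $x \mapsto (I - \eta H)x$, so it is stable exactly when every eigenvalue of $I - \eta H$ lies inside the unit circle, i.e. $\eta < 2/\lambda_{\max}(H)$. A minimal sketch of this criterion (illustrative only, not the paper's learned-optimizer analysis):

```python
import numpy as np

def gd_stable(hessian, lr):
    """Gradient descent on 0.5 * x^T H x iterates x <- (I - lr*H) x;
    it is stable iff the spectral radius of I - lr*H is below 1,
    equivalently lr < 2 / lambda_max(H)."""
    H = np.asarray(hessian, dtype=float)
    T = np.eye(len(H)) - lr * H  # linear map applied at each step
    return bool(np.max(np.abs(np.linalg.eigvals(T))) < 1.0)
```

For $H = \mathrm{diag}(1, 10)$ the threshold is $\eta = 2/10 = 0.2$: a step size of 0.1 is stable, 0.25 is not.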
In this paper we derive a variety of functional inequalities for general homogeneous invariant hypoelliptic differential operators on nilpotent Lie groups. The obtained inequalities include Hardy, Rellich, Hardy-Littlewood-Sobolev, Gagliardo-Nirenberg, Caffarelli-Kohn-Nirenberg and Trudinger-Moser inequalities. Some of these estimates have been known in the case of the sub-Laplacians; however, for more general hypoelliptic operators almost all of them appear to be new, as no approaches for obtaining such estimates have been available. Moreover, we obtain several versions of local and global weighted Trudinger-Moser inequalities with remainder terms, critical Hardy and weighted Gagliardo-Nirenberg inequalities, which appear to be new also in the case of the sub-Laplacian. Curiously, we also show the equivalence of many of these critical inequalities, as well as asymptotic relations between their best constants. The approach developed in this paper relies on establishing integral versions of Hardy inequalities on homogeneous groups, for which we also find necessary and sufficient conditions on the weights for such inequalities to hold. Consequently, we link such integral Hardy inequalities to different hypoelliptic inequalities by using the Riesz and Bessel kernels associated to the described hypoelliptic operators.
arxiv:1805.01064
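For orientation, the classical Euclidean Hardy inequality that results of this type generalize reads, for $f \in C_c^\infty(\mathbb{R}^n \setminus \{0\})$ and $1 \le p < n$, with sharp constant:

```latex
\int_{\mathbb{R}^n} \frac{|f(x)|^p}{|x|^p}\,dx
\;\le\;
\left(\frac{p}{\,n-p\,}\right)^{p}
\int_{\mathbb{R}^n} |\nabla f(x)|^p\,dx .
```

On a homogeneous group, $|x|$ is replaced by a homogeneous quasi-norm, $n$ by the homogeneous dimension, and $\nabla$ by the relevant horizontal or hypoelliptic gradient; the integral Hardy inequalities of the abstract characterize when weighted versions of this estimate hold.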
This expository article proves some results of Ferguson on the approximation of continuous functions on a compact subset of $\mathbb{R}$ by polynomials with integral coefficients.
arxiv:math/0103004
We extend the applications of prescriptive unitarity beyond the planar limit to provide local, polylogarithmic, integrand-level representations of six-particle MHV scattering amplitudes in both maximally supersymmetric Yang-Mills theory and gravity. The integrand basis we construct is diagonalized on a spanning set of non-vanishing leading singularities that ensures the manifest matching of all soft-collinear singularities in both theories. As a consequence, this integrand basis naturally splits into infrared-finite and infrared-divergent parts, with hints toward an integrand-level exponentiation of infrared divergences. Importantly, we use the same basis of integrands for both theories, so that the presence or absence of residues at infinite loop momentum becomes a feature detectable by inspecting the cuts of the theory. Complete details of our results are provided as ancillary files. This work has been updated to take into account the results of [arXiv:1911.09106], which leads to a simpler and more uniform representation of these amplitudes.
arxiv:1909.09131
We have studied the spin-splitting effect in a four-terminal two-dimensional (2D) electron gas system with two potential barriers generated by two surface metal gates and an external perpendicular magnetic field. The calculations show that by tuning the voltage applied to the gates, the injected spin-unpolarized current can be split into different spin currents with high efficiency. The split currents flow out of the geometry from different output leads separately. The spin freedom of the outputs can be controlled by simply tuning the voltage on the gates. This phenomenon is a result of the combination of three effects: the potential barriers, Zeeman splitting, and edge currents. Furthermore, by tuning the voltage on the gates, the outflow spin of the current in one terminal can be switched. Therefore, these features open up the possibility of making a spin filter or a switching device based on the four-terminal 2D electron gas system.
arxiv:1812.02065
We tested the Hořava-Lifshitz (HL) quantum gravity model by using the Lü, Mei and Pope solutions for primordial black holes (PBHs) and the observational upper bounds on the PBH density parameters. We found that, although the HL model is severely constrained, it is not ruled out. When our analysis is combined with that of Dutta and Saridakis, the observed value of the density parameter $\Omega_{PBH}$ might rise by several percent as the running energy parameter $\lambda$ increases.
arxiv:1009.1703
In this paper we develop an optimisation-based approach to multivariate Chebyshev approximation on a finite grid. We consider two models: multivariate polynomial approximation and multivariate generalised rational approximation. In the second case the approximations are ratios of linear forms and the basis functions are not limited to monomials. It is already known that in the case of multivariate polynomial approximation on a finite grid the corresponding optimisation problems can be reduced to solving a linear programming problem, while the area of multivariate rational approximation is not so well understood. In this paper we demonstrate that in the case of multivariate generalised rational approximation the corresponding optimisation problems are quasiconvex. This statement remains true even when the basis functions are not limited to monomials. We then apply a bisection method, which is a general method for quasiconvex optimisation; this method converges to an optimal solution with given precision. We demonstrate that the convex feasibility problems appearing in the bisection method can be solved using linear programming. Finally, we compare the deviation error and computational time for multivariate polynomial and generalised rational approximation with the same number of decision variables.
arxiv:2101.11786
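The bisection scheme described above can be written down concretely (the multivariate case only changes how the basis matrices are tabulated on the grid). With the denominator normalized by $q(x) \ge 1$, the level-set question "is the best Chebyshev error $\le z$?" becomes the linear feasibility problem $|p(x) - f(x)q(x)| \le z\,q(x)$ over the grid. The following is a minimal sketch, assuming SciPy's `linprog` for the LP feasibility checks; all function names are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def feasible(z, f_vals, P, Q):
    """LP feasibility of the level set {max_x |p(x)/q(x) - f(x)| <= z}.
    P, Q are (n_grid, n_basis) matrices of numerator/denominator basis
    values on the grid; unknowns are the stacked coefficients [a; b]."""
    n, ka = P.shape
    kb = Q.shape[1]
    # (p - f q) - z q <= 0,  -(p - f q) - z q <= 0,  and q >= 1 (normalization)
    A1 = np.hstack([P, -(f_vals + z)[:, None] * Q])
    A2 = np.hstack([-P, (f_vals - z)[:, None] * Q])
    A3 = np.hstack([np.zeros((n, ka)), -Q])
    A_ub = np.vstack([A1, A2, A3])
    b_ub = np.concatenate([np.zeros(2 * n), -np.ones(n)])
    res = linprog(np.zeros(ka + kb), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (ka + kb))
    return res.status == 0  # 0 = optimal found, i.e. the LP is feasible

def bisect_best_error(f_vals, P, Q, lo=0.0, hi=None, tol=1e-6):
    """Bisection on z: the level sets are convex (LPs), so the smallest
    achievable Chebyshev error on the grid is found to precision tol."""
    if hi is None:
        # assumes the constant function is in the spans, so p = 0, q = 1
        # achieves error max|f|
        hi = float(np.max(np.abs(f_vals)))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid, f_vals, P, Q):
            hi = mid
        else:
            lo = mid
    return hi
```

For example, $f(x) = 1/(1+x)$ is exactly representable with numerator and denominator bases $\{1, x\}$, so the bisection drives the error to essentially zero; approximating $f(x) = x$ on $[0,1]$ by a constant recovers the well-known best error of $0.5$.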
We provide the expected constructions of weakly $\omega$-categorified models (in the sense of Bressie) of the theories of groups and quandles, which arise by replacing the homotopies used to give equivalence relations in the theory of fundamental groups, fundamental quandles, and knot quandles with homotopies of all orders used as arrows of categorical dimensions one and greater, and discuss other related constructions of weakly $\omega$-categorified algebras.
arxiv:2006.15188
One of the most important tasks in the gamma-ray burst field is the classification of the bursts. Many studies have demonstrated the existence of a third kind (intermediate duration) of GRBs in the BATSE data. Recent works have analyzed BeppoSAX and Swift observations and can also identify three types of GRBs in these data sets; however, the class memberships are probabilistic. We have enough observed redshifts to calculate the redshift and spatial distribution of the intermediate GRBs. They are significantly farther than the short bursts and seem to be closer than the long ones.
arxiv:1504.06427
This paper addresses a challenging problem: how to generate multi-view cloth images from only a single-view input. To generate realistic-looking images with different views from the input, we propose a new image generation model, termed VariGANs, that combines the strengths of variational inference and generative adversarial networks (GANs). Our proposed VariGANs model generates the target image in a coarse-to-fine manner instead of in a single pass, which suffers from severe artifacts. It first performs variational inference to model the global appearance of the object (e.g., shape and color) and produce a coarse image with a different view. Conditioned on the generated low-resolution images, it then proceeds to perform adversarial learning to fill in details and generate images with details consistent with the input. Extensive experiments conducted on two clothing datasets, MVC and DeepFashion, have demonstrated that images of a novel view generated by our model are more plausible than those generated by existing approaches, in terms of more consistent global appearance as well as richer and sharper details.
arxiv:1704.04886
A theory of massive gravity depends on a non-dynamical 'reference metric' $f_{\mu\nu}$, which is often taken to be the flat Minkowski metric. In this paper we examine the theory of perturbations on a background with metric $g_{\mu\nu}$ which does not coincide with the reference metric $f_{\mu\nu}$. We derive the mass term for general perturbations on this background and show that it generically is not of the form of the Fierz-Pauli mass term. We explicitly compute it for some cosmological situations.
arxiv:1309.2245
We report on the identification of a soft gamma-ray source, IGR J17204-3554, detected with the IBIS imager on board the INTEGRAL satellite. The source has a 20-100 keV flux of ~3x10^-11 erg cm^-2 s^-1 and is spatially coincident with NGC 6334, a molecular cloud located in the Sagittarius arm of the Milky Way. Diffuse X-ray emission has been reported from this region by ASCA and interpreted as coming from five far-infrared cores located in the cloud. However, the combined ASCA spectrum, with a 9 keV temperature, was difficult to explain in terms of emission from the young pre-main-sequence stars known to be embedded in the star forming regions. Detection of gamma-rays makes this interpretation even more unrealistic and suggests the presence of a high energy source in or behind the cloud. Follow-up observations with Swift and archival Chandra data allow us to disentangle the NGC 6334 enigma by locating an extragalactic object with the proper spectral characteristics to explain the gamma-ray emission. The combined Chandra/IBIS spectrum is well fitted by an absorbed power law with index 1.2 +/- 0.1, N_H = (1.4 +/- 0.1)x10^23 cm^-2 and an unabsorbed 2-10 keV flux of 0.5x10^-11 erg cm^-2 s^-1. This column density is in excess of the Galactic value, implying that we are detecting a background galaxy concealed by the molecular cloud and further hidden by material located either in the galaxy itself or between IGR J17204-3554 and the cloud.
arxiv:astro-ph/0510338
Minimax optimization problems have attracted a lot of attention over the past few years, with applications ranging from economics to machine learning. While advanced optimization methods exist for such problems, characterizing their dynamics in stochastic scenarios remains notably challenging. In this paper, we pioneer the use of stochastic differential equations (SDEs) to analyze and compare minimax optimizers. Our SDE models for stochastic gradient descent-ascent, stochastic extragradient, and stochastic Hamiltonian gradient descent are provable approximations of their algorithmic counterparts, clearly showcasing the interplay between hyperparameters, implicit regularization, and implicit curvature-induced noise. This perspective also allows for a unified and simplified analysis strategy based on the principles of It\^o calculus. Finally, our approach facilitates the derivation of convergence conditions and closed-form solutions for the dynamics in simplified settings, unveiling further insights into the behavior of different optimizers.
arxiv:2402.12508
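A discrete-time toy version of the comparison these SDE models formalize: on the bilinear saddle $f(x,y) = xy$, simultaneous gradient descent-ascent spirals away from the saddle point at the origin, while extragradient contracts toward it. The sketch below (deterministic and illustrative only; it is not the paper's SDE analysis) checks both behaviors:

```python
import numpy as np

def gda_step(x, y, lr):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y."""
    gx, gy = y, x          # df/dx = y, df/dy = x
    return x - lr * gx, y + lr * gy

def extragradient_step(x, y, lr):
    """Extragradient: take a look-ahead step, then update with the
    gradient evaluated at the look-ahead point."""
    gx, gy = y, x
    xh, yh = x - lr * gx, y + lr * gy
    gxh, gyh = yh, xh
    return x - lr * gxh, y + lr * gyh

def run(step, steps=200, lr=0.1):
    """Iterate from (1, 1) and return the final distance to the saddle."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        x, y = step(x, y, lr)
    return np.hypot(x, y)
```

On this example GDA multiplies the distance to the origin by $\sqrt{1+\eta^2} > 1$ each step, whereas the extragradient map has eigenvalue modulus $\sqrt{(1-\eta^2)^2 + \eta^2} < 1$ for $0 < \eta < 1$, so the two trajectories separate cleanly.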
Named entity recognition (NER) is a well and widely studied task in natural language processing. Recently, nested NER has attracted more attention due to its practicality and difficulty. Existing works for nested NER ignore the recognition order and the boundary position relations of nested entities. To address these issues, we propose a novel seq2seq model named GPRL, which formulates the nested NER task as an entity triplet sequence generation process. GPRL adopts a reinforcement learning method to generate entity triplets, decoupling the entity order from the gold labels, and aims to learn a reasonable recognition order of entities via trial and error. Based on statistics of boundary distance for nested entities, GPRL designs a Gaussian prior to represent the boundary distance distribution between nested entities and adjusts the output probability distribution of nested boundary tokens accordingly. Experiments on three nested NER datasets demonstrate that GPRL outperforms previous nested NER models.
arxiv:2305.07266
hermitian linear matrix pencils are ubiquitous in control theory, operator systems, semidefinite optimization, and real algebraic geometry. this survey reviews the fundamental features of the matricial solution set of a linear matrix inequality, the free spectrahedron, from the perspective of free real algebraic geometry. namely, among matricial solution sets of noncommutative polynomial inequalities, free spectrahedra are precisely the convex ones. furthermore, a procedure for detecting free spectrahedra and producing their representing linear matrix pencils is discussed. finally, free spectrahedra admit a perfect positivstellensatz, leading to a semidefinite programming formulation of eigenvalue optimization over convex matricial sets constrained by noncommutative polynomial inequalities.
arxiv:2407.08450
exploring the climate impacts of various anthropogenic emissions scenarios is key to making informed decisions for climate change mitigation and adaptation. state - of - the - art earth system models can provide detailed insight into these impacts, but have a large associated computational cost on a per - scenario basis. this large computational burden has driven recent interest in developing cheap machine learning models for the task of climate model emulation. in this manuscript, we explore the efficacy of randomly wired neural networks for this task. we describe how they can be constructed and compare them to their standard feedforward counterparts using the climatebench dataset. specifically, we replace the serially connected dense layers in multilayer perceptrons, convolutional neural networks, and convolutional long short - term memory networks with randomly wired dense layers and assess the impact on model performance for models with 1 million and 10 million parameters. we find that models with less complex architectures see the greatest performance improvement with the addition of random wiring ( up to 30. 4 % for multilayer perceptrons ). furthermore, out of 24 different model architecture, parameter count, and prediction task combinations, only one saw a statistically significant performance deficit in randomly wired networks compared to their standard counterparts, with 14 cases showing statistically significant improvement. we also find no significant difference in prediction speed between networks with standard feedforward dense layers and those with randomly wired layers. these findings indicate that randomly wired neural networks may be suitable direct replacements for traditional dense layers in many standard models.
arxiv:2212.03369
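the random-wiring idea above can be sketched in miniature: sample a random dag over the layers and feed each layer the sum of its predecessors' outputs, instead of a serial chain. the layer count, edge probability, and scalar tanh "layers" below are toy assumptions, not the climatebench architectures from the text:

```python
import math
import random

random.seed(0)

n_layers, edge_prob = 6, 0.5

# edges only go from lower to higher index, so the graph is acyclic by
# construction
edges = [(i, j) for i in range(n_layers) for j in range(i + 1, n_layers)
         if random.random() < edge_prob]
# keep every layer reachable: if node j has no incoming edge, wire j-1 -> j
for j in range(1, n_layers):
    if not any(dst == j for _, dst in edges):
        edges.append((j - 1, j))

def forward(x):
    # each "layer" is a scalar tanh unit with a random fixed weight; a real
    # model would use dense layers here
    weights = [random.uniform(-1.0, 1.0) for _ in range(n_layers)]
    acts = [0.0] * n_layers
    acts[0] = math.tanh(weights[0] * x)
    for j in range(1, n_layers):
        fan_in = sum(acts[i] for i, dst in edges if dst == j)
        acts[j] = math.tanh(weights[j] * fan_in)
    return acts[-1]

out = forward(1.0)
```

the same parameter budget as a serial multilayer perceptron is spent, but the connectivity pattern is sampled rather than fixed, which is the replacement the paper evaluates for dense layers.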
imaging is the representation or reproduction of an object ' s form ; especially a visual representation ( i. e., the formation of an image ). imaging technology is the application of materials and methods to create, preserve, or duplicate images. imaging science is a multidisciplinary field concerned with the generation, collection, duplication, analysis, modification, and visualization of images, including imaging things that the human eye cannot detect. as an evolving field it includes research and researchers from physics, mathematics, electrical engineering, computer vision, computer science, and perceptual psychology. imagers are imaging sensors. = = imaging chain = = the foundation of imaging science as a discipline is the " imaging chain " – a conceptual model describing all of the factors which must be considered when developing a system for creating visual renderings ( images ). in general, the links of the imaging chain include : the human visual system. designers must also consider the psychophysical processes which take place in human beings as they make sense of information received through the visual system. the subject of the image. when developing an imaging system, designers must consider the observables associated with the subjects which will be imaged. these observables generally take the form of emitted or reflected energy, such as electromagnetic energy or mechanical energy. the capture device. once the observables associated with the subject are characterized, designers can then identify and integrate the technologies needed to capture those observables. for example, in the case of consumer digital cameras, those technologies include optics for collecting energy in the visible portion of the electromagnetic spectrum, and electronic detectors for converting the electromagnetic energy into an electronic signal. the processor. 
for all digital imaging systems, the electronic signals produced by the capture device must be manipulated by an algorithm which formats the signals so they can be displayed as an image. in practice, there are often multiple processors involved in the creation of a digital image. the display. the display takes the electronic signals which have been manipulated by the processor and renders them on some visual medium. examples include paper ( for printed, or " hard copy " images ), television, computer monitor, or projector. note that some imaging scientists will include additional " links " in their description of the imaging chain. for example, some will include the " source " of the energy which " illuminates " or interacts with the subject of the image. others will include storage and / or transmission systems. = = subfields = = subfields within imaging science include
https://en.wikipedia.org/wiki/Imaging
range separated hybrid density functionals are very successful in describing a wide range of molecular and solid state properties accurately. range separated hybrid functionals are designed from a spherically averaged or system averaged reverse engineered exchange hole. in the present attempt, we extend the screened range separated hybrid functional scheme to the meta - gga rung by using the tao - mo semilocal exchange hole ( or functional ). the hybrid functional proposed here utilizes the spherically averaged density matrix expansion based exchange hole in the range separation scheme. for the slowly varying density correction, we employ the range separation scheme only through the local density approximation ( lda ) based exchange hole coupled with the slowly varying tao - mo enhancement factor through the conventional wisdom technique. comprehensive testing shows that the present functional accurately describes several molecular properties. the most appealing feature of this screened hybrid functional is that it should be practically very useful in describing solid state properties at the meta - gga level.
arxiv:1712.05323
detection of cyber attacks in smart power distribution grids with unbalanced configurations poses challenges due to the inherent nonlinear nature of these uncertain and stochastic systems. it originates from the intermittent characteristics of the distributed energy resources ( ders ) generation and load variations. moreover, the unknown behavior of cyber attacks, especially false data injection attacks ( fdias ) in the distribution grids with complex temporal correlations and the limited amount of labeled data increases the vulnerability of the grids and imposes a high risk in the secure and reliable operation of the grids. to address these challenges, this paper proposes an unsupervised adversarial autoencoder ( aae ) model to detect fdias in unbalanced power distribution grids integrated with ders, i. e., pv systems and wind generation. the proposed method utilizes long short - term memory ( lstm ) in the structure of the autoencoder to capture the temporal dependencies in the time - series measurements and leverages the power of generative adversarial networks ( gans ) for better reconstruction of the input data. the advantage of the proposed data - driven model is that it can detect anomalous points for the system operation without reliance on abstract models or mathematical representations. to evaluate the efficacy of the approach, it is tested on ieee 13 - bus and 123 - bus systems with historical meteorological data ( wind speed, ambient temperature, and solar irradiance ) as well as historical real - world load data under three types of data falsification functions. the comparison of the detection results of the proposed model with other unsupervised learning methods verifies its superior performance in detecting cyber attacks in unbalanced power distribution grids.
arxiv:2404.02923
charge balance functions, which identify balancing particle - antiparticle pairs on a statistical basis, have been shown to be sensitive to whether hadronization is delayed by several fm / c in relativistic heavy ion collisions. results from two classes of models are presented here, microscopic hadronic models and thermal models. the microscopic models give results which are contrary to recently published pi + pi - balance functions from the star collaboration, whereas the thermal models roughly reproduce the experimental results. this suggests that charge conservation is local at breakup, which is in line with expectations for a delayed hadronization. predictions are also presented for balance functions binned as a function of q _ inv.
arxiv:nucl-th/0401008
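a toy calculation shows how a balance function picks out balancing pairs: each event below creates pi+ / pi- pairs close in rapidity, mimicking local charge conservation at breakup. the pair generator, widths, and rapidity window are illustrative assumptions, not the star measurement or either model class from the text:

```python
import random

random.seed(1)

def make_event(n_pairs=100):
    # each balancing pair shares a common rapidity plus small smearing
    particles = []  # (charge, rapidity)
    for _ in range(n_pairs):
        y = random.uniform(-2.0, 2.0)
        particles.append((+1, y + random.gauss(0.0, 0.3)))
        particles.append((-1, y - random.gauss(0.0, 0.3)))
    return particles

def balance(events, dy_max=0.5):
    # b = 1/2 [ (n+- - n++)/n+ + (n-+ - n--)/n- ] over ordered pairs with
    # |delta y| < dy_max; the combinatorial background cancels between the
    # opposite-sign and same-sign counts, leaving the correlated pairs
    total = 0.0
    for particles in events:
        counts = {}
        for a, (qa, ya) in enumerate(particles):
            for b, (qb, yb) in enumerate(particles):
                if a != b and abs(ya - yb) < dy_max:
                    counts[(qa, qb)] = counts.get((qa, qb), 0) + 1
        n_plus = sum(1 for q, _ in particles if q > 0)
        n_minus = len(particles) - n_plus
        total += 0.5 * (
            (counts.get((1, -1), 0) - counts.get((1, 1), 0)) / n_plus
            + (counts.get((-1, 1), 0) - counts.get((-1, -1), 0)) / n_minus
        )
    return total / len(events)

b_local = balance([make_event() for _ in range(20)])
```

with locally produced pairs the balance function is strongly peaked at small relative rapidity; a generator that separates the balancing charges early and widely would flatten it, which is the diagnostic the paper exploits.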
are likely to be new exotic states. more experimental data is needed to confirm the existence of these resonance states.
arxiv:2502.11818
the census of stellar streams and dwarf galaxies in the milky way provides direct constraints on galaxy formation models and the nature of dark matter. the desi milky way survey ( with a footprint of 14, 000 deg $ ^ 2 $ and a depth of $ r < 19 $ mag ) delivers the largest sample of distant metal - poor stars compared to previous optical fiber - fed spectroscopic surveys. this makes desi an ideal survey to search for previously undetected streams and dwarf galaxies. we present a detailed characterization of the cocytos stream, which was re - discovered using a clustering analysis with a catalog of giants in the desi year 3 data, supplemented with magellan / mage spectroscopy. our analysis reveals a relatively metal - rich ( [ fe / h ] $ = - 1. 3 $ ) and thick stream ( width $ = 1. 5 ^ \ circ $ ) at a heliocentric distance of $ \ approx 25 $ kpc, with an internal velocity dispersion of 6. 5 - 9 km s $ ^ { - 1 } $. the stream ' s metallicity, radial orbit, and proximity to the virgo stellar overdensities suggest that it is most likely a disrupted globular cluster that came in with the gaia - enceladus merger. we also confirm its association with the pyxis globular cluster. our result showcases the ability of wide - field spectroscopic surveys to kinematically discover faint disrupted dwarfs and clusters, enabling constraints on the dark matter distribution in the milky way.
arxiv:2504.11687
dynamical systems can autonomously adapt their organization so that the required target dynamics is reproduced. in the previous rapid communication [ phys. rev. e 90, 030901 ( r ) ( 2014 ) ], it was shown how such systems can be designed using delayed feedbacks. here, the proposed method is further analyzed and improved. its extension to adaptable systems, where delays are absent and inertial feedbacks are instead employed, is suggested. numerical tests for three different models, including networks of phase and amplitude oscillators, are performed.
arxiv:1611.01036
the recent surge in large language models ( llms ) has garnered significant attention across numerous fields. fine - tuning is often required to fit general llms for a specific domain, such as web - based healthcare systems. however, two problems arise when fine - tuning llms for medical applications. one is the task variety problem, which involves distinct tasks in real - world medical scenarios. the variety often leads to sub - optimal fine - tuning due to data imbalance and seesaw problems. besides, the large number of parameters in llms leads to huge time and computation consumption during fine - tuning. to address these two problems, we propose a novel parameter efficient fine - tuning framework for multi - task medical applications, dubbed moelora. the designed framework aims to absorb both the benefits of mixture - of - expert ( moe ) for multi - task learning and low - rank adaptation ( lora ) for parameter efficient fine - tuning. to unify moe and lora, we devise multiple experts as the trainable parameters, where each expert consists of a pair of low - rank matrices to retain the small size of trainable parameters. then, a task - motivated gate function for all moelora layers is proposed, which can control the contributions of each expert and produce distinct parameters for various tasks. we conduct experiments on a multi - task medical dataset, indicating moelora outperforms the existing parameter efficient fine - tuning methods. the code is available online.
arxiv:2310.18339
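the core mechanism described above, low-rank expert pairs mixed by a task gate, can be sketched in a few lines. the dimensions, the random initialization, and the hand-set gate vectors below are toy assumptions, not the paper's configuration:

```python
import random

random.seed(0)

d, r, n_experts = 64, 2, 4  # hidden size, lora rank, number of experts (toy)

def rand_mat(rows, cols, scale=0.1):
    return [[random.uniform(-scale, scale) for _ in range(cols)]
            for _ in range(rows)]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

# each expert is a low-rank pair (A: d x r, B: r x d), so it adds only
# 2*d*r trainable parameters instead of the d*d of a full update matrix
experts = [(rand_mat(d, r), rand_mat(r, d)) for _ in range(n_experts)]

def moelora_delta(x, gate_weights):
    # the task-specific gate mixes the experts' low-rank updates:
    # delta(x) = sum_i g_i * A_i (B_i x), added to the frozen layer output
    out = [0.0] * d
    for (A, B), g in zip(experts, gate_weights):
        low = matvec(B, x)   # project down to rank r
        up = matvec(A, low)  # project back up to d
        out = [o + g * u for o, u in zip(out, up)]
    return out
```

a medical-coding task and a diagnosis task would receive different gate vectors, so the same shared experts yield distinct effective parameters per task, which is how the framework sidesteps the seesaw problem while keeping the trainable parameter count small.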
the size and the computational load of fine - tuning large - scale pre - trained neural networks are becoming two major obstacles to adopting machine learning in many applications. continual learning ( cl ) can serve as a remedy by enabling knowledge transfer across sequentially arriving tasks, which relaxes the need to fine - tune all network weights from scratch. however, existing cl algorithms primarily consider learning unimodal vision - only or language - only tasks. we develop a transformer - based cl architecture for learning bimodal vision - and - language tasks based on dynamically increasing the number of learnable parameters and using knowledge distillation. the new additional parameters are used to specialize the network for each task. our approach enables sharing information between the tasks while addressing the challenge of catastrophic forgetting. our approach scales to a large number of tasks because it requires little memory and time overhead. our model reaches state - of - the - art performance on challenging vision - and - language tasks.
arxiv:2303.14423
we characterize the magnetic activity of m dwarfs to provide the planet community with information on the energy input from the star ; in particular, in addition to the frequency of optical flares directly observed with tess, we aim at estimating the corresponding x - ray flare frequencies, making use of the small pool of known events observed simultaneously in both wavebands. we identified 112 m dwarfs with a tess magnitude < = 11. 5 for which tess can probe the full habitable zone for transits. these 112 stars have 1276 two - minute cadence tess light curves from the primary mission, which we searched for rotational modulation and flares. we study the link between rotation and flares and between flare properties, for example the flare amplitude - duration relation and cumulative flare energy frequency distributions ( ffds ). assuming that each optical flare is associated with a flare in the x - ray band, and making use of published simultaneous kepler / k2 and xmm - newton flare studies, we estimate the x - ray energy released by our detected tess flare events. our calibration also involves the relation between flare energies in the tess and k2 bands. we detected more than 2500 optical flare events on a fraction of about 32 % of our targets and found reliable rotation periods only for 12 stars, which is a fraction of about 11 %. for these 12 targets, we present cumulative ffds and ffd power law fits. we construct ffds in the x - ray band by calibrating optical flare energies to the x - rays. in the absence of directly observed x - ray ffds for main - sequence stars, our predictions can serve for estimates of the high - energy input to the planet of a typical fast - rotating early - or mid - m dwarf.
arxiv:2207.03794
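a cumulative flare frequency distribution (ffd) and its power-law fit, of the kind described above, can be sketched directly: sort the flare energies, assign each the rate of flares at or above that energy, and fit a line in log-log space. the flare energies and monitoring baseline below are invented toy numbers, not values from the tess sample:

```python
import math

# toy flare energies in erg (assumed); a real target would supply hundreds
# of detected events
energies = [1e31, 2e31, 3e31, 5e31, 8e31, 1.3e32,
            2.1e32, 3.4e32, 5.5e32, 1e33]
obs_days = 27.0  # assumed monitoring baseline (one tess sector)

# cumulative ffd: rate of flares with energy >= e, so the largest flare
# gets the lowest rate
e_sorted = sorted(energies, reverse=True)
log_e = [math.log10(e) for e in e_sorted]
log_rate = [math.log10((i + 1) / obs_days) for i in range(len(e_sorted))]

# least-squares line log10(rate) = alpha * log10(e) + beta
n = len(log_e)
mx, my = sum(log_e) / n, sum(log_rate) / n
alpha = (sum((x - mx) * (y - my) for x, y in zip(log_e, log_rate))
         / sum((x - mx) ** 2 for x in log_e))
beta = my - alpha * mx
```

the fitted slope alpha is negative (rarer flares at higher energies); calibrating each optical energy to an x-ray energy, as the paper does, shifts the distribution along the energy axis without changing this construction.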
ordinal ω3 as efa and is conservative over efa for π02 sentences. weak weak konig ' s lemma is the statement that a subtree of the infinite binary tree having no infinite paths has an asymptotically vanishing proportion of the leaves at length n ( with a uniform estimate as to how many leaves of length n exist ). an equivalent formulation is that any subset of cantor space that has positive measure is nonempty ( this is not provable in rca0 ). wwkl0 is obtained by adjoining this axiom to rca0. it is equivalent to the statement that if the unit real interval is covered by a sequence of intervals then the sum of their lengths is at least one. the model theory of wwkl0 is closely connected to the theory of algorithmically random sequences. in particular, an ω - model of rca0 satisfies weak weak konig ' s lemma if and only if for every set x there is a set y that is 1 - random relative to x. dnr ( short for " diagonally non - recursive " ) adds to rca0 an axiom asserting the existence of a diagonally non - recursive function relative to every set. that is, dnr states that, for any set a, there exists a total function f such that for all e the eth partial recursive function with oracle a is not equal to f. dnr is strictly weaker than wwkl ( lempp et al., 2004 ). δ11 - comprehension is in certain ways analogous to arithmetical transfinite recursion as recursive comprehension is to weak konig ' s lemma. it has the hyperarithmetical sets as minimal ω - model. arithmetical transfinite recursion proves δ11 - comprehension but not the other way around. σ11 - choice is the statement that if η ( n, x ) is a σ11 formula such that for each n there exists an x satisfying η then there is a sequence of sets xn such that η ( n, xn ) holds for each n. σ11 - choice also has the hyperarithmetical sets as minimal ω - model. arithmetical transfinite recursion proves σ11 - choice but not the other way around. 
hbu ( short for " uncountable heine - borel " ) expresses the ( open - cover ) compactness of the unit interval, involving uncountable covers.
https://en.wikipedia.org/wiki/Reverse_mathematics
autonomous driving techniques have been flourishing in recent years while thirsting for huge amounts of high - quality data. however, it is difficult for real - world datasets to keep up with the pace of changing requirements due to their expensive and time - consuming experimental and labeling costs. therefore, more and more researchers are turning to synthetic datasets to easily generate rich and changeable data as an effective complement to the real world and to improve the performance of algorithms. in this paper, we summarize the evolution of synthetic dataset generation methods and review the work to date on synthetic datasets related to single and multi - task categories for autonomous driving research. we also discuss the role that synthetic datasets play in the evaluation and gap testing of autonomous driving related algorithms, and their positive effect on algorithm testing, especially on trustworthiness and safety aspects. finally, we discuss general trends and possible development directions. to the best of our knowledge, this is the first survey focusing on the application of synthetic datasets in autonomous driving. this survey also raises awareness of the problems of real - world deployment of autonomous driving technology and provides researchers with a possible solution.
arxiv:2304.12205
this paper explores educational interactions involving humans and artificial intelligences not as sequences of prompts and responses, but as a social process of conversation and exploration. in this conception, learners continually converse with ai language models within a dynamic computational medium of internet tools and resources. learning happens when this distributed system sets goals, builds meaning from data, consolidates understanding, reconciles differences, and transfers knowledge to new domains. building social generative ai for education will require development of powerful ai systems that can converse with each other as well as humans, construct external representations such as knowledge maps, access and contribute to internet resources, and act as teachers, learners, guides and mentors. this raises fundamental problems of ethics. such systems should be aware of their limitations, their responsibility to learners and the integrity of the internet, and their respect for human teachers and experts. we need to consider how to design and constrain social generative ai for education.
arxiv:2306.10063
we describe a family of decidable propositional dynamic logics, where atomic modalities satisfy some extra conditions ( for example, given by axioms of the logics k5, s5, or k45 for different atomic modalities ). it follows from recent results ( kikot, shapirovsky, zolin, 2014 ; 2020 ) that if a modal logic $ l $ admits a special type of filtration ( so - called definable filtration ), then its enrichments with modalities for the transitive closure and converse relations also admit definable filtration. we use these results to show that if logics $ l _ 1, \ ldots, l _ n $ admit definable filtration, then the propositional dynamic logic with converse extended by the fusion $ l _ 1 * \ ldots * l _ n $ has the finite model property.
arxiv:2303.09948
into seven colleges. georgia tech has sought to expand its undergraduate and graduate offerings in less technical fields, primarily those under the ivan allen college of liberal arts, which saw a 20 % increase in admissions in 2008. also, even in the ivan allen college, the institute does not offer bachelor of arts and masters of arts degrees, only bachelor of science and master of science degrees. georgia tech ' s honors program is highly selective and designed to cater to the most intellectually curious undergraduates from all six colleges. = = = funding = = = the georgia institute of technology is a public institution that receives funds from the state of georgia, tuition, fees, research grants, and alumni contributions. in 2014, the institute ' s revenue amounted to about $ 1. 422 billion. fifteen percent came from state appropriations and grants while 20 % originated from tuition and fees. grants and contracts accounted for 55 % of all revenue. expenditures were about $ 1. 36 billion. forty - eight percent went to research and 19 % went to instruction. the georgia tech foundation runs the university ' s endowment and was incorporated in 1932. it includes several wholly owned subsidiaries that own land on campus or in midtown and lease the land back to the georgia board of regents and other companies and organizations. assets totaled $ 1. 882 billion and liabilities totaled $ 0. 478 billion in 2014. as of 2007, georgia tech had the most generous alumni donor base, percentage wise, of any public university ranked in the top 50. in 2015, the university received a $ 30 million grant from atlanta philanthropist diana blank to build the " most environmentally - sound building ever constructed in the southeast. " = = academics = = = = = undergraduate admissions = = = the 2022 annual ranking of u. s. news & world report categorizes georgia institute of technology as " most selective. 
" for the class of 2029 ( enrolled fall 2025 ), georgia tech received 66, 895 applications from first - time, first - year students, and accepted 8, 640 ( 12. 74 % ). in the 2028 cycle, of those accepted, nearly 4, 000 enrolled, a yield rate ( the percentage of accepted students who choose to attend the university ) of 45. 8 %. of the 77 % of the incoming freshman class who submitted sat scores ; the middle 50 percent composite scores were 1440. of the 35 % of enrolled freshmen in 2023 who submitted act scores ; the middle 50 percent composite score was between 32 georgia tech ' s freshman retention rate is 98 %
https://en.wikipedia.org/wiki/Georgia_Tech
an su ( 2 ) vectorlike singlet quark with charge either + 2 / 3 ( t ' ) or - 1 / 3 ( b ' ) is predicted in many extensions of the standard model. the mixing of these quarks with the top or bottom leads to flavor changing yukawa interactions and neutral currents. the decay modes of the heavier mass eigenstates are therefore different from those of standard model type chiral quarks. the large hadron collider ( lhc ) will provide an ideal environment to look for signals of these exotic quarks. considering all decays, including those involving z - and yukawa interactions, we show how one can distinguish between t ' and b ' from ratios of event rates with different lepton multiplicities. the ability to reconstruct the higgs boson with a mass around 125. 5 gev plays an important role in such differentiation.
arxiv:1404.3374
many important datasets contain samples that are missing one or more feature values. maintaining the interpretability of machine learning models in the presence of such missing data is challenging. singly or multiply imputing missing values complicates the model ' s mapping from features to labels. on the other hand, reasoning on indicator variables that represent missingness introduces a potentially large number of additional terms, sacrificing sparsity. we solve these problems with m - gam, a sparse, generalized, additive modeling approach that incorporates missingness indicators and their interaction terms while maintaining sparsity through l0 regularization. we show that m - gam provides similar or superior accuracy to prior methods while significantly improving sparsity relative to either imputation or naive inclusion of indicator variables.
arxiv:2412.02646
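the indicator-variable encoding that m-gam builds on can be sketched with a simple feature expansion: each feature becomes a pair (observed value with missing set to zero, missingness flag), so an additive model can assign separate shape terms to the value and to the fact of its absence. this is a generic illustration; the paper's exact construction, including interaction terms and the l0 regularization that keeps the model sparse, is more involved:

```python
def augment(row):
    # expand each possibly-missing feature x_j into (x_j or 0, m_j),
    # where m_j = 1 flags a missing value; no imputation is performed,
    # so the mapping from raw features to model inputs stays transparent
    out = []
    for x in row:
        missing = x is None
        out.append(0.0 if missing else float(x))  # observed-value term
        out.append(1.0 if missing else 0.0)       # missingness indicator
    return out

encoded = augment([2.5, None, 0.0])
```

a sparse additive model fit on these expanded features can then keep an indicator term only when missingness itself is predictive, which is the sparsity-versus-interpretability trade-off the paper targets with l0 regularization.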
the excessive application of pesticides, particularly the overreliance on insecticides for the protection of desirable crops from pests, has posed a significant threat to both ecological systems and human health due to environmental pollution. this research outlines a comprehensive approach to recognizing and quantifying the presence of insecticides through the application of spectroscopic and electrochemical sensing methods. the detection of emamectin benzoate ( eb ), a commonly used insecticide, was performed utilizing vivianenes, a 2d phosphate that has been mechanically exfoliated from the naturally occurring vivianite minerals. this investigation examined the structural and compositional characteristics of vivianenes, utilizing a range of characterization methods. the spectroscopic analyses reveal the molecular interactions and structural modifications that take place during the interaction of eb with the 2d template. electrochemical investigations employing cyclic voltammetry were performed for different concentrations of eb to enable real - time monitoring of the pesticide. the modified sensing electrode using vivianene demonstrated a linear range from 50 mg / l down to 10 micro g / l, effectively detecting eb molecules at levels significantly below the hazardous threshold. fully atomistic molecular dynamics simulations were also carried out to obtain further insights into the interaction mechanisms of the eb with the vivianites, and the results corroborate the adsorption mechanism. our results highlight the potential application of 2d phosphate minerals as advanced sensors to enhance agricultural monitoring and promote sustainable development.
arxiv:2502.00813
the operation of a novel nonvolatile memory device based on a conductive ferroelectric / non - ferroelectric thin film multilayer stack is simulated numerically. the simulation involves the self - consistent steady state solution of poisson ' s equation and the transport equation for electrons assuming a drift - diffusion transport mechanism. special emphasis is put on the screening of the spontaneous polarization by conduction electrons as a function of the applied voltage. depending on the orientation of the polarization in the ferroelectric layer, a high and a low resistive state are found giving rise to a hysteretic i - v characteristic. the r _ high to r _ low ratio ranging from > 50 % to several orders of magnitude is calculated as a function of the dopant content.
arxiv:cond-mat/0312609
el niño - southern oscillation ( enso ) is the most predominant interannual variability in the tropics, significantly impacting global weather and climate. in this paper, a framework of low - order conceptual models for the enso is systematically derived from a spatially - extended stochastic dynamical system with full mathematical rigor. the spatially - extended stochastic dynamical system has a linear, deterministic, and stable dynamical core. it also exploits a simple stochastic process with multiplicative noise to parameterize the intraseasonal wind burst activities. a principal component analysis based on the eigenvalue decomposition method is applied to provide a low - order conceptual model that succeeds in characterizing the large - scale dynamical and non - gaussian statistical features of the eastern pacific el niño events. despite the low dimensionality, the conceptual modeling framework contains outputs for all the atmosphere, ocean, and sea surface temperature components with detailed spatiotemporal patterns. this contrasts with many existing conceptual models focusing only on a small set of specified state variables. the stochastic versions of many state - of - the - art low - order models, such as the recharge - discharge and the delayed oscillators, become special cases within this framework. the rigorous derivation of such low - order models provides a unique way to connect models with different spatiotemporal complexities. the framework also facilitates understanding the instantaneous and memory effects of stochastic noise in contributing to the large - scale dynamics of the enso.
arxiv:2212.14179
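the recharge-discharge oscillator mentioned above, with multiplicative noise standing in for intraseasonal wind bursts, can be sketched with a simple euler-maruyama integration. all coefficients below are illustrative assumptions chosen only to give a stable, noisy oscillation, not values derived in the paper:

```python
import math
import random

random.seed(0)

# toy stochastic recharge-discharge oscillator:
#   dT = (r*T + gamma*h) dt + sigma*(1 + mu*T) dW
#   dh = (-omega*T - lam*h) dt
# T is the sst anomaly, h the thermocline depth anomaly; the noise
# amplitude grows with warm conditions (multiplicative wind-burst noise)
r, gamma_, omega, lam = -0.1, 0.8, 0.8, 0.1  # damping / coupling (assumed)
sigma, mu = 0.15, 0.5                        # noise level / state dependence
dt, steps = 0.01, 5000

t_sst, h = 0.5, 0.0
traj = []
for _ in range(steps):
    dw = random.gauss(0.0, math.sqrt(dt))  # brownian increment
    t_sst, h = (t_sst + (r * t_sst + gamma_ * h) * dt
                + sigma * (1.0 + mu * t_sst) * dw,
                h + (-omega * t_sst - lam * h) * dt)
    traj.append(t_sst)
```

because the noise amplitude depends on the state, long simulations of this kind produce the skewed, non-gaussian sst statistics that distinguish strong el niño events, which is the feature the conceptual framework is built to capture.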
a concern about cutting - edge or " frontier " ai foundation models is that an adversary may use the models for preparing chemical, biological, radiological, or nuclear ( cbrn ), cyber, or other attacks. at least two methods can identify foundation models with potential dual - use capability ; each has advantages and disadvantages : a. open benchmarks ( based on openly available questions and answers ), which are low - cost but accuracy - limited by the need to omit security - sensitive details ; and b. closed red team evaluations ( based on private evaluation by cbrn and cyber experts ), which are higher - cost but can achieve higher accuracy by incorporating sensitive details. we propose a research and risk - management approach using a combination of methods including both open benchmarks and closed red team evaluations, in a way that leverages advantages of both methods. we recommend that one or more groups of researchers with sufficient resources and access to a range of near - frontier and frontier foundation models run a set of foundation models through dual - use capability evaluation benchmarks and red team evaluations, then analyze the resulting sets of models ' scores on benchmark and red team evaluations to see how correlated those are. if, as we expect, there is substantial correlation between the dual - use potential benchmark scores and the red team evaluation scores, then implications include the following : the open benchmarks should be used frequently during foundation model development as a quick, low - cost measure of a model ' s dual - use potential ; and if a particular model gets a high score on the dual - use potential benchmark, then more in - depth red team assessments of that model ' s dual - use capability should be performed. we also discuss limitations and mitigations for our approach, e. g., if model developers try to game benchmarks by including a version of benchmark test data in a model ' s training data.
arxiv:2405.10986
we present our findings of a large - scale screening for new synthesizable materials in five m - sn binaries, m = na, ca, cu, pd, and ag. the focus on these systems was motivated by the known richness of m - sn properties with potential applications in energy storage, electronics packaging, and superconductivity. for the systematic exploration of the large configuration space, we relied on our recently developed maise - net framework that constructs accurate neural network interatomic potentials and utilizes them to accelerate ab initio global structure searches. the scan of over two million candidate phases at a fraction of the typical ab initio calculation cost has uncovered 29 possible intermetallics thermodynamically stable at different temperatures and pressures ( 1 bar and 20 gpa ). notable predictions of ambient - pressure materials include a simple hp6 - nasn $ _ 2 $ phase, fcc - based pd - rich alloys, ti36 - pdsn $ _ 2 $ with a new prototype, and several high - temperature sn - rich ground states in the na - sn, cu - sn, and ag - sn systems. our modeling work also involved ab initio ( re ) examination of previously observed m - sn compounds that helped explain the entropy - driven stabilization of known cu - sn phases. the study demonstrates the benefits of guiding structure searches with machine learning potentials and significantly expands the number of predicted thermodynamically stable crystalline intermetallics achieved with this strategy so far.
arxiv:2306.10223
in dynamic epistemic logic ( van ditmarsch, van der hoek, & kooi, 2008 ) it is customary to use an action frame ( baltag & moss, 2004 ; baltag, moss, & solecki, 1998 ) to describe different views of a single action. in this article, action frames are extended to add or remove agents ; we call these agent - update frames. this can be done selectively so that only some specified agents get information of the update, which can be used to model several interesting examples such as private update and deception, studied earlier by baltag and moss ( 2004 ) ; sakama ( 2015 ) ; van ditmarsch, van eijck, sietsma, and wang ( 2012 ). the product update of a kripke model by an action frame is an abbreviated way of describing the transformed kripke model which is the result of performing the action. this is substantially extended to a sum - product update of a kripke model by an agent - update frame in the new setting. these ideas are applied to an ai problem of modelling a story. we show that dynamic epistemic logics, with update modalities now based on agent - update frames, continue to have sound and complete proof systems. decision procedures for model checking and satisfiability have expected complexity. for a sublanguage, there are polynomial space algorithms.
arxiv:2211.02452
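the product update mentioned above is a concrete construction: the updated model lives on pairs ( world, action ) whose world satisfies the action's precondition. a minimal sketch of the classical baltag - moss - solecki product update (not the paper's extended sum - product update with agent addition / removal, which is its novel contribution):

```python
def product_update(worlds, rel, val, actions, act_rel, pre):
    # Classical product update of a Kripke model by an action frame:
    # keep pairs (w, a) where world w satisfies the precondition of
    # action a; agent ag relates (w, a) to (v, b) iff w~v and a~b.
    new_worlds = [(w, a) for w in worlds for a in actions if pre[a](val[w])]
    new_rel = {
        ag: {((w, a), (v, b))
             for (w, a) in new_worlds for (v, b) in new_worlds
             if (w, v) in rel[ag] and (a, b) in act_rel[ag]}
        for ag in rel
    }
    new_val = {(w, a): val[w] for (w, a) in new_worlds}
    return new_worlds, new_rel, new_val

# Toy example: agent "a" cannot distinguish w1 (p true) from w2
# (p false); a public announcement of p eliminates w2.
worlds = ["w1", "w2"]
val = {"w1": {"p"}, "w2": set()}
rel = {"a": {(u, v) for u in worlds for v in worlds}}
announce_p = ["e"]
act_rel = {"a": {("e", "e")}}
pre = {"e": lambda facts: "p" in facts}

nw, nr, nv = product_update(worlds, rel, val, announce_p, act_rel, pre)
```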
we report the novel critical behavior of magnetization in low carrier concentration systems utes and uses that exhibit the large negative magnetoresistance around the ferromagnetic transition temperatures t _ c ~ 85 and 23 k, respectively. utes and uses crystallize in the same orthorhombic tinisi - type crystal structure as those of uranium ferromagnetic superconductors urhge and ucoge. we determine the critical exponents, beta for the spontaneous magnetization m _ s, gamma for the magnetic susceptibility chi, and delta for the magnetization isotherm at t _ c with several methods. the ferromagnetic states in utes and uses have strong uniaxial magnetic anisotropy. however, the critical exponents in the two compounds are different from those in the three - dimensional ising model with short - range magnetic exchange interactions. similar sets of the critical exponents have been reported for the uranium ferromagnetic superconductors uge _ 2 and urhge, and uranium intermetallic ferromagnets urhsi, uir and u ( co _ 0. 98os _ 0. 02 ) al. the universality class of the ferromagnetic transitions in utes and uses may belong to the same one for the uranium compounds. the novel critical phenomenon associated with the ferromagnetic transition is observed not only in the uranium intermetallic ferromagnets with the itinerant 5f electrons but also in the low carrier concentration systems utes and uses with the localized 5f electrons. the large negative magnetoresistance in utes and uses, and the superconductivity in uge _ 2 and urhge share the similarity of their closeness to the ferromagnetism characterized by the novel critical exponents.
arxiv:1908.07652
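critical exponents such as beta are extracted by fitting power laws near t _ c. a minimal sketch of the idea on synthetic data (the temperatures, t _ c and exponent below are invented; the paper extracts exponents from measured magnetization with several standard methods such as modified arrott plots):

```python
import math

def fit_power_law_exponent(ts, ms, tc):
    # Least-squares slope of log(M) vs log(1 - T/Tc): estimates the
    # critical exponent beta in M_s ~ (1 - T/Tc)^beta.
    xs = [math.log(1.0 - t / tc) for t in ts]
    ys = [math.log(m) for m in ms]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic magnetization curve with a known exponent beta = 0.36
# (illustrative only, not data from the paper).
tc = 85.0
temps = [60.0, 65.0, 70.0, 75.0, 80.0, 83.0]
beta_true = 0.36
mags = [(1.0 - t / tc) ** beta_true for t in temps]

beta_est = fit_power_law_exponent(temps, mags, tc)
```

on noiseless synthetic data the fit recovers the exponent exactly; on real data the choice of fitting window near t _ c matters.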
the purpose of this article is to view the penrose kite from the perspective of symplectic geometry.
arxiv:0712.1978
we present keck / lris - b spectra for a sample of ten aegis x - ray agn host galaxies and thirteen post - starburst galaxies from sdss and deep2 at 0. 2 < z < 0. 8 in order to investigate the presence, properties, and influence of outflowing galactic winds at intermediate redshifts. we focus on galaxies that either host a low - luminosity agn or have recently had their star formation quenched to test whether these galaxies have winds of sufficient velocity to potentially clear gas from the galaxy. we find, using absorption features of fe ii, mg ii, and mg i, that six of the ten ( 60 % ) x - ray agn host galaxies and four of the thirteen ( 31 % ) post - starburst galaxies have outflowing galactic winds, with typical velocities of ~ 200 km / s. we additionally find that most of the galaxies in our sample show line emission, possibly from the wind, in either fe ii * or mg ii. a total of 100 % of our x - ray agn host sample ( including four red sequence galaxies ) and 77 % of our post - starburst sample has either blueshifted absorption or line emission. several k + a galaxies have small amounts of cool gas absorption at the systemic velocity, indicating that not all of the cool gas has been expelled. we conclude that while outflowing galactic winds are common in both x - ray low - luminosity agn host galaxies and post - starburst galaxies at intermediate redshifts, the winds are likely driven by supernovae ( as opposed to agn ) and do not appear to have sufficiently high velocities to quench star formation in these galaxies.
arxiv:1104.0681
the capabilities of the atlas, cms and lhcb detectors to reconstruct jets at forward rapidities ( | \ eta | > 3 ) in p - p collisions at the cern large hadron collider are reviewed. the qcd and higgs physics motivations for such measurements are summarised. details are given on studies that provide information on the parton structure and evolution at small values of fractional momenta in the proton.
arxiv:0911.1273
the relation between classically chaotic dynamics and quantum localization is studied in a system that violates the assumptions of the kolmogorov - arnold - moser ( kam ) theorem, namely, a kicked rotor in a discontinuous potential barrier. we show that the discontinuous barrier induces chaos and more than two distinct sub - diffusive energy growth regimes, the latter being an unusual feature for hamiltonian chaos. we show that the dynamical localization in the quantized version of this system carries the imprint of non - kam classical dynamics through the dependence of quantum break time on sub - diffusion exponents. we briefly comment on the experimental feasibility of this system.
arxiv:1603.05777
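as a point of reference, the smooth ( kam ) kicked rotor reduces to the chirikov standard map, whose chaotic diffusive energy growth is easy to reproduce numerically. the sketch below uses the smooth map only for illustration; the paper's system replaces the smooth kick with a discontinuous barrier and exhibits several distinct sub - diffusive regimes instead:

```python
import math
import random

def kick_ensemble(k, n_kicks, n_particles, seed=0):
    # Chirikov standard map: p' = p + K sin(theta), theta' = theta + p'.
    # Returns the ensemble-averaged kinetic "energy" <p^2> after
    # n_kicks kicks, starting from p = 0 and random phases.
    rng = random.Random(seed)
    thetas = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_particles)]
    ps = [0.0] * n_particles
    for _ in range(n_kicks):
        for i in range(n_particles):
            ps[i] += k * math.sin(thetas[i])
            thetas[i] = (thetas[i] + ps[i]) % (2.0 * math.pi)
    return sum(p * p for p in ps) / n_particles

# Well inside the chaotic regime (K = 5) the energy keeps growing
# with the number of kicks, the hallmark of chaotic diffusion.
e_early = kick_ensemble(k=5.0, n_kicks=10, n_particles=500)
e_late = kick_ensemble(k=5.0, n_kicks=500, n_particles=500)
```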
rcsedv2 ( https://rcsed2.voxastro.org/ ), the second reference catalog of spectral energy distributions of galaxies includes the largest homogeneously processed photometric dataset for 4 million galaxies assembled from several wide - field surveys. here we describe the methodology of the photometric data homogenization. we first correct all photometric measurements for the foreground galactic extinction, then convert them into the photometric system we adopted as a standard ( galex + sdss + ukidss + wise ). we computed aperture corrections into several pre - defined apertures by using published galaxy sizes / light profiles and image quality for each of the surveys. we accounted for k - corrections using our own analytic approximations. such a homogeneous photometric catalog allows us to build fully calibrated seds for the galaxies in our sample ( defined by the availability of their spectra ) and enables direct scientific analysis of this unique extragalactic dataset.
arxiv:2112.04868
we show that the reasoning which led the author of arxiv : 1310. 6252 to reach his conclusions relies on an incorrect criterion for the existence of normalizable bound solutions. we reinforce that the general result derived in the appendix of our paper ( arxiv : 1310. 2185 ), namely, that " there are no tachyonic ( i. e., unstable ) modes for minimally coupled scalar fields in asymptotically flat spherically symmetric static spacetimes containing no horizons " is indeed correct.
arxiv:1310.7849
we introduce a max - plus analogue of the petrov - galerkin finite element method to solve finite horizon deterministic optimal control problems. the method relies on a max - plus variational formulation. we show that the error in the sup norm can be bounded from the difference between the value function and its projections on max - plus and min - plus semimodules, when the max - plus analogue of the stiffness matrix is exactly known. in general, the stiffness matrix must be approximated : this requires approximating the operation of the lax - oleinik semigroup on finite elements. we consider two approximations relying on the hamiltonian. we derive a convergence result, in arbitrary dimension, showing that for a class of problems, the error estimate is of order $ \ delta + \ delta x ( \ delta ) ^ { - 1 } $ or $ \ sqrt { \ delta } + \ delta x ( \ delta ) ^ { - 1 } $, depending on the choice of the approximation, where $ \ delta $ and $ \ delta x $ are respectively the time and space discretization steps. we compare our method with another max - plus based discretization method previously introduced by fleming and mceneaney. we give numerical examples in dimension 1 and 2.
arxiv:math/0603619
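in the max - plus semiring the role of matrix - vector multiplication is played by ( a x ) _ i = max _ j ( a _ { ij } + x _ j ), which is how a max - plus stiffness matrix acts on a vector of finite - element coefficients. a minimal sketch with an arbitrary toy matrix (entries invented; the paper's contribution is how to approximate such a matrix from the lax - oleinik semigroup, which is not shown here):

```python
# In the max-plus semiring, "addition" is max and "multiplication"
# is +; the neutral element for max is -infinity.
NEG_INF = float("-inf")

def maxplus_matvec(a, x):
    # (A x)_i = max_j (a_ij + x_j)
    return [max(aij + xj for aij, xj in zip(row, x)) for row in a]

# Toy 3x3 example (entries chosen arbitrarily for illustration;
# NEG_INF plays the role of a structural zero).
a = [[0.0, -1.0, NEG_INF],
     [-2.0, 0.0, -1.0],
     [NEG_INF, -3.0, 0.0]]
x = [1.0, 4.0, 2.0]

y = maxplus_matvec(a, x)
```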
background : concept extraction, a subdomain of natural language processing ( nlp ) with a focus on extracting concepts of interest, has been adopted to computationally extract clinical information from text for a wide range of applications ranging from clinical decision support to care quality improvement. objectives : in this literature review, we provide a methodology review of clinical concept extraction, aiming to catalog development processes, available methods and tools, and specific considerations when developing clinical concept extraction applications. methods : based on the preferred reporting items for systematic reviews and meta - analyses ( prisma ) guidelines, a literature search was conducted for retrieving ehr - based information extraction articles written in english and published from january 2009 through june 2019 from ovid medline in - process & other non - indexed citations, ovid medline, ovid embase, scopus, web of science, and the acm digital library. results : a total of 6, 686 publications were retrieved. after title and abstract screening, 228 publications were selected. the methods used for developing clinical concept extraction applications were discussed in this review.
arxiv:1910.11377
misinformation on social media presents a major threat to modern societies. while previous research has analyzed the virality across true and false social media posts, not every misleading post is necessarily equally viral. rather, misinformation has different characteristics and varies in terms of its believability and harmfulness - which might influence its spread. in this work, we study how the perceived believability and harmfulness of misleading posts are associated with their virality on social media. specifically, we analyze ( and validate ) a large sample of crowd - annotated social media posts from twitter ' s birdwatch platform, on which users can rate the believability and harmfulness of misleading tweets. to address our research questions, we implement an explanatory regression model and link the crowd ratings for believability and harmfulness to the virality of misleading posts on twitter. our findings imply that misinformation that is ( i ) easily believable and ( ii ) not particularly harmful is associated with more viral resharing cascades. these results offer insights into how different kinds of crowd fact - checked misinformation spread and suggest that the most viral misleading posts are often not the ones that are particularly concerning from the perspective of public safety. from a practical view, our findings may help platforms to develop more effective strategies to curb the proliferation of misleading posts on social media.
arxiv:2302.05443
this paper presents a numerical study on a fast marching method based back projection reconstruction algorithm for photoacoustic tomography in heterogeneous media. transcranial imaging is used here as a case study. to correct for the phase aberration from the heterogeneity ( i. e., skull ), the fast marching method is adopted to compute the phase delay based on the known speed of sound distribution, and the phase delay is taken into account by the back projection algorithm for more accurate reconstructions. it is shown that the proposed algorithm is more accurate than the conventional back projection algorithm, but slightly less accurate than the time reversal algorithm particularly in the area close to the skull. however, the image reconstruction time for the proposed algorithm can be as little as 124 ms when implemented by a gpu ( 512 sensors, 21323 pixels reconstructed ), which is two orders of magnitude faster than the time reversal reconstruction. the proposed algorithm, therefore, not only corrects for the phase aberration, but can be also potentially implemented in a real - time manner.
arxiv:1501.03869
we use time - dependent hartree fock approximation to study the collective mode spectra of nu = 2 quantum hall bilayers in tilted magnetic field allowing for charge imbalance as well as tunneling between the two layers. in a previous companion paper to this work, we studied the zero temperature global phase diagram of this system which was found to include a symmetric and ferromagnetic phases as well as a first order transition between two canted phases with spontaneously broken u ( 1 ) symmetry. we further found that this first order transition line ends in a quantum critical point within the canted region. in the current work, we study the excitation spectra of all of these phases and pay particular attention to the behavior of the collective modes near the phase transitions. we find, most interestingly, that the first order transition between the two canted phases is signaled by a near softening of a magnetoroton minimum. many of the collective mode features explored here should be accessible experimentally in light scattering experiments.
arxiv:cond-mat/0401044
thomassen conjectured that triangle - free planar graphs have an exponential number of $ 3 $ - colorings. we show this conjecture to be equivalent to the following statement : there exists a positive real $ \ alpha $ such that whenever $ g $ is a planar graph and $ a $ is a subset of its edges whose deletion makes $ g $ triangle - free, there exists a subset $ a ' $ of $ a $ of size at least $ \ alpha | a | $ such that $ g - ( a \ setminus a ' ) $ is $ 3 $ - colorable. this equivalence allows us to study restricted situations, where we can prove the statement to be true.
arxiv:1702.00588
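for the small graphs one meets when experimenting with such statements, proper 3 - colorings can simply be counted by backtracking. the sketch below verifies the chromatic - polynomial count for the 5 - cycle, a triangle - free planar graph:

```python
def count_colorings(n_vertices, edges, n_colors=3):
    # Backtracking count of proper colorings; exponential in general,
    # but fine for tiny illustrative graphs.
    adj = {v: set() for v in range(n_vertices)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = {}

    def rec(v):
        if v == n_vertices:
            return 1
        total = 0
        for c in range(n_colors):
            if all(color.get(u) != c for u in adj[v]):
                color[v] = c
                total += rec(v + 1)
                del color[v]
        return total

    return rec(0)

# The 5-cycle is planar and triangle-free; its chromatic polynomial
# (k-1)^n + (-1)^n (k-1) gives 2^5 - 2 = 30 proper 3-colorings.
c5_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n_colorings = count_colorings(5, c5_edges)
```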
we present a technique to identify exact analytic expressions for the multi - quantum eigenstates of a linear chain of coupled qubits. a choice of hilbert subspaces is described which allows an exact solution of the stationary schr \ " { o } dinger equation without imposing periodic boundary conditions and without neglecting end effects, fully including the dipole - dipole nearest - neighbor interaction between the atoms. the treatment is valid for an arbitrary coherent excitation in the atomic system, any number of atoms, any size of the chain relative to the resonant wavelength and arbitrary initial conditions of the atomic system. the procedure we develop is general enough to be adopted for the study of excitation in an arbitrary array of atoms including spin chains and one - dimensional bose - einstein condensates.
arxiv:quant-ph/0505206
the limited priors required by neural networks make them the dominating choice to encode and learn policies using reinforcement learning ( rl ). however, they are also black - boxes, making it hard to understand the agent ' s behaviour, especially when working on the image level. therefore, neuro - symbolic rl aims at creating policies that are interpretable in the first place. unfortunately, interpretability is not explainability. to achieve both, we introduce neurally guided differentiable logic policies ( nudge ). nudge exploits trained neural network - based agents to guide the search of candidate - weighted logic rules, then uses differentiable logic to train the logic agents. our experimental evaluation demonstrates that nudge agents can induce interpretable and explainable policies while outperforming purely neural ones and showing good flexibility to environments of different initial states and problem sizes.
arxiv:2306.01439
weighted minwise hashing ( wmh ) is one of the fundamental subroutines, required by many celebrated approximation algorithms, commonly adopted in industrial practice for large - scale search and learning. the resource bottleneck of the algorithms is the computation of multiple ( typically a few hundreds to thousands ) independent hashes of the data. the fastest hashing algorithm is by ioffe \ cite { proc : ioffe _ icdm10 }, which requires one pass over the entire data vector, $ o ( d ) $ ( $ d $ is the number of non - zeros ), for computing one hash. however, the requirement of multiple hashes demands hundreds or thousands passes over the data. this is very costly for modern massive datasets. in this work, we break this expensive barrier and show an expected constant amortized time algorithm which computes $ k $ independent and unbiased wmh in time $ o ( k ) $ instead of $ o ( dk ) $ required by ioffe ' s method. moreover, our proposal only needs a few bits ( 5 - 9 bits ) of storage per hash value compared to around $ 64 $ bits required by the state - of - the - art methodologies. experimental evaluations, on real datasets, show that for computing 500 wmh, our proposal can be 60000x faster than the ioffe ' s method without losing any accuracy. our method is also around 100x faster than approximate heuristics capitalizing on the efficient " densified " one permutation hashing schemes \ cite { proc : onehashlsh _ icml14 }. given the simplicity of our approach and its significant advantages, we hope that it will replace existing implementations in practice.
arxiv:1602.08393
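the key property of a weighted minwise hash is that element i wins with probability w _ i / sum ( w ). the classical exponential - clock construction below has this property but costs o ( d ) per hash, i.e. it illustrates the kind of baseline being improved upon, not the paper's o ( 1 ) amortized scheme; a full scheme would also key the randomness per ( hash, element ) so that different sets share clocks:

```python
import math
import random

def weighted_min_hash(weights, seed):
    # One hash: give each element an exponential "clock" -ln(u)/w
    # (an Exp(w) variate) and return the element whose clock rings
    # first; element i wins with probability w_i / sum(w).
    rng = random.Random(seed)
    best, best_key = None, float("inf")
    for elem in sorted(weights):
        u = rng.random()
        key = -math.log(u) / weights[elem]
        if key < best_key:
            best, best_key = elem, key
    return best

# Empirically check the winning probability: with weights 3:1,
# element "a" should win about 75% of the time.
weights = {"a": 3.0, "b": 1.0}
n_trials = 20000
hits_a = sum(weighted_min_hash(weights, s) == "a" for s in range(n_trials))
frac_a = hits_a / n_trials
```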
in the last few years, the low - momentum nucleon - nucleon ( nn ) interaction v - low - k derived from free - space nn potentials has been successfully used in shell - model calculations. v - low - k is a smooth potential which preserves the deuteron binding energy as well as the half - on - shell t - matrix of the original nn potential up to a momentum cutoff lambda. in this paper we put to the test a new low - momentum nn potential derived from chiral perturbation theory at next - to - next - to - next - to - leading order with a sharp low - momentum cutoff at 2. 1 fm - 1. shell - model calculations for the oxygen isotopes using effective hamiltonians derived from both types of low - momentum potential are performed. we find that the two potentials show the same perturbative behavior and yield very similar results.
arxiv:nucl-th/0701065
einstein action of gravity is obtained from a gauge theory, if our spacetime was once in two folds with a double lorentz symmetry. after the dual symmetry breaks spontaneously, lorentz symmetry absorbs gauge symmetry, while the gauge field begins to drive the vierbein and the spin connection. this gauge model of gravity has many bosons and fermions with negative norms, which will be undetectable. the consistency of these particles with the copenhagen interpretation of wave functions, and their relation to the presence of dark matter as well as the absence of antimatter in the universe are discussed.
arxiv:2212.13131
we study a new procedure to measure the sound horizon scale via baryonic acoustic oscillations ( bao ). instead of fitting the measured power spectrum ( ps ) to a theoretical model containing the cosmological informations and all the nonlinear effects, we define a procedure to project out ( or to " extract " ) the oscillating component from a given nonlinear ps. we show that the bao scale extracted in this way is extremely robust and, moreover, can be reproduced by simple theoretical models at any redshift. by using n - body simulations, we discuss the effect of the nonlinear evolution of the matter field, of redshift space distortions and of scale - dependent halo bias, showing that all these effects can be reproduced with sub - percent accuracy. we give a one - parameter theoretical model based on a simple ( ir ) modification of 1 - loop perturbation theory, which reproduces the bao scale from measurements of halo clustering in redshift space at better than $ 0. 1 \ % $ level and does not need any external uv input, such as coefficients measured from n - body simulations.
arxiv:1708.00375
we present a collective coordinate approach to describe coupled phase oscillators. we apply the method to study synchronisation in a kuramoto model. in our approach an N - dimensional kuramoto model is reduced to an n - dimensional ordinary differential equation with n < < N, constituting an immense reduction in complexity. the onset of both local and global synchronisation is reproduced to good numerical accuracy, and we are able to describe both soft and hard transitions. by introducing 2 collective coordinates the approach is able to describe the interaction of two partially synchronised clusters in the case of bimodally distributed native frequencies. furthermore, our approach allows us to accurately describe finite size scalings of the critical coupling strength. we corroborate our analytical results by comparing with numerical simulations of the kuramoto model with all - to - all coupling networks for several distributions of the native frequencies.
arxiv:1505.05243
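the full N - dimensional kuramoto model that the collective - coordinate approach reduces is itself easy to integrate directly, which is how such reductions are usually validated. a minimal euler - integration sketch (all parameter values are arbitrary choices, not taken from the paper):

```python
import math
import random

def simulate_kuramoto(n, coupling, dt, steps, seed=0):
    # Euler integration of the all-to-all Kuramoto model
    #   d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i),
    # returning the final order parameter r = |mean(exp(i*theta))|.
    rng = random.Random(seed)
    omegas = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    thetas = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            drive = sum(math.sin(thetas[j] - thetas[i]) for j in range(n))
            new.append(thetas[i] + dt * (omegas[i] + coupling * drive / n))
        thetas = new
    cr = sum(math.cos(t) for t in thetas) / n
    ci = sum(math.sin(t) for t in thetas) / n
    return math.hypot(cr, ci)

# Well above the synchronisation threshold the oscillators phase-lock
# and r approaches 1; far below it r stays small (finite-size noise).
r_strong = simulate_kuramoto(n=30, coupling=4.0, dt=0.05, steps=2000)
r_weak = simulate_kuramoto(n=30, coupling=0.05, dt=0.05, steps=2000)
```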
the causal structure for measurement bias ( mb ) remains controversial. aided by the directed acyclic graph ( dag ), this paper proposes a new structure for measuring one singleton variable whose mb arises in the selection of an imperfect i / o device - like measurement system. for effect estimation, however, an extra source of mb arises from any redundant association between a measured exposure and a measured outcome. the misclassification will be bidirectionally differential for a common outcome, unidirectionally differential for a causal relation, and non - differential for a common cause between the measured exposure and the measured outcome or a null effect. the measured exposure can actually affect the measured outcome, or vice versa. reverse causality is a concept defined at the level of measurement. our new dags have clarified the structures and mechanisms of mb.
arxiv:2012.10980
the properties of uniform hyperbolicity and dominated splitting have been introduced to study the stability of the dynamics of diffeomorphisms. one meets difficulties when one tries to extend these definitions to vector fields and shantao liao has shown that it is more relevant to consider the linear poincar \ ' e flow rather than the tangent flow in order to study the properties of the derivative. in this paper we define the notion of singular domination, an analog of the dominated splitting for the linear poincar \ ' e flow which is robust under perturbations. based on this, we give a new definition of multi - singular hyperbolicity which is equivalent to the one recently introduced by bonatti - da luz. the novelty of our definition is that it does not involve the blowup of the singular set and the renormalization cocycle of the linear flows.
arxiv:2003.07099
spin collective phenomena including superradiance are even today being intensively investigated with experimental tests performed based on state - of - the - art quantum technologies. such attempts are not only for the simple experimental verification of predictions from the last century but also as a motivation to explore new applications of spin collective phenomena and the coherent control of the coupling between spin ensembles and reservoirs. in this paper, we investigate the open quantum dynamics of two spin ensembles ( double spin domains ) coupled to a common bosonic reservoir. we analyze in detail the dynamics of our collective state and its structure by focusing on both the symmetry and asymmetry of this coupled spin system. we find that when the spin size of one of the double domains is larger than that of the other domain, at the steady state this system exhibits two novel collective behaviors : the negative - temperature state relaxation in the smaller spin domain and the reservoir - assisted quantum entanglement between the two domains. these results are the consequence of the asymmetry of this system and the decoherence driven by the common reservoir.
arxiv:1804.09926
generative adversarial network ( gan ) is a strong deep learning model that has shown its value in applications such as image processing and data enhancement. here, we propose a gan - based quantum topological toric code decoder and we apply it to devise a quantum teleportation protocol which is robust to depolarizing noisy environments. we construct the generator and discriminator networks of gan, train the network using the eigenvalue dataset of the toric code, and obtain an optimized decoder with a high decoding threshold compared to some existing decoders. the simulation experiments at code distances $ d = 3 $ and $ d = 5 $ show that the fidelity threshold of this gan decoder is about $ p = 0. 2108 $, which is significantly larger than the threshold $ p = 0. 1099 $ of the classical decoding model. also, the quantum teleportation protocol, optimized for noise resistance under $ d = 3 $ and $ d = 5 $ topological code, shows a fidelity improvement within the depolarizing noise threshold range of $ p < 0. 06503 $ and $ p < 0. 07512 $, respectively. with appropriate dataset training, the decoder can be adapted to other error models. more broadly, the proposed gan model provides a novel approach for topological code decoders, offering a versatile framework for different types of noise processing.
arxiv:2409.06984
recently, very high - dimensional feature representations, e. g., fisher vector, have achieved excellent performance for visual recognition and retrieval. however, these lengthy representations always cause extremely heavy computational and storage costs and even become unfeasible in some large - scale applications. a few existing techniques can transfer very high - dimensional data into binary codes, but they still require the reduced code length to be relatively long to maintain acceptable accuracies. to target a better balance between computational efficiency and accuracies, in this paper, we propose a novel embedding method called binary projection bank ( bpb ), which can effectively reduce the very high - dimensional representations to medium - dimensional binary codes without sacrificing accuracies. instead of using conventional single linear or bilinear projections, the proposed method learns a bank of small projections via the max - margin constraint to optimally preserve the intrinsic data similarity. we have systematically evaluated the proposed method on three datasets : flickr 1m, ilsvr2010 and ucf101, showing competitive retrieval and recognition accuracies compared with state - of - the - art approaches, but with a significantly smaller memory footprint and lower coding complexity.
arxiv:1509.04916
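a useful baseline for mapping high - dimensional vectors to binary codes is classical sign - random - projection lsh, which the learned projection bank of the paper is designed to improve on. a minimal sketch (dimensions, seeds and test vectors are all arbitrary):

```python
import random

def sign_projection_code(x, n_bits, seed=0):
    # Random hyperplane (sign) projections: one bit per Gaussian
    # projection; nearby vectors get similar codes, opposite vectors
    # get complementary codes.  This is classical LSH for cosine
    # similarity, shown only as a baseline; the paper's BPB method
    # instead learns a bank of small projections with a max-margin
    # objective.
    rng = random.Random(seed)
    code = []
    for _ in range(n_bits):
        proj = sum(rng.gauss(0.0, 1.0) * xi for xi in x)
        code.append(1 if proj >= 0 else 0)
    return code

def hamming_agreement(a, b):
    return sum(ai == bi for ai, bi in zip(a, b)) / len(a)

rng = random.Random(7)
x = [rng.gauss(0.0, 1.0) for _ in range(256)]
x_near = [xi + 1e-6 * rng.gauss(0.0, 1.0) for xi in x]  # tiny perturbation
x_far = [-xi for xi in x]                               # opposite direction

agree_near = hamming_agreement(sign_projection_code(x, 64),
                               sign_projection_code(x_near, 64))
agree_far = hamming_agreement(sign_projection_code(x, 64),
                              sign_projection_code(x_far, 64))
```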
the strong beams of high - frequency gravitational waves ( gw ) emitted by cusps and kinks of cosmic strings are studied in detail. as a consequence of these beams, the stochastic ensemble of gw ' s generated by a cosmological network of oscillating loops is strongly non gaussian, and includes occasional sharp bursts that stand above the ` ` confusion ' ' gw noise made of many smaller overlapping bursts. even if only 10 % of all string loops have cusps these bursts might be detectable by the planned gw detectors ligo / virgo and lisa for string tensions as small as $ g \ mu \ sim 10 ^ { - 13 } $. in the implausible case where the average cusp number per loop oscillation is extremely small, the smaller bursts emitted by the ubiquitous kinks will be detectable by lisa for string tensions as small as $ g \ mu \ sim 10 ^ { - 12 } $. we show that the strongly non gaussian nature of the stochastic gw ' s generated by strings modifies the usual derivation of constraints on $ g \ mu $ from pulsar timing experiments. in particular the usually considered ` ` rms gw background ' ' is, when $ g \ mu \ gtrsim 10 ^ { - 7 } $, an overestimate of the more relevant confusion gw noise because it includes rare, intense bursts. the consideration of the confusion gw noise suggests that a grand unified theory ( gut ) value $ g \ mu \ sim 10 ^ { - 6 } $ is compatible with existing pulsar data, and that a modest improvement in pulsar timing accuracy could detect the confusion noise coming from a network of cuspy string loops down to $ g \ mu \ sim 10 ^ { - 11 } $. the gw bursts discussed here might be accompanied by gamma ray bursts.
arxiv:gr-qc/0104026
the performance limits of monolayer transition metal dichalcogenide transistors are examined with a ballistic mosfet model. using ab - initio theory, we calculate the band structures of two - dimensional ( 2d ) transition metal dichalcogenide ( mx2 ). we find the lattice structures of monolayer mx2 remain the same as the bulk mx2. within the ballistic regime, the performances of monolayer mx2 transistors are better compared to the silicon transistors if thin high - { \ kappa } gate insulator is used. this makes monolayer mx2 promising 2d materials for future nanoelectronic device applications.
arxiv:1106.4362
in this work we build a unifying framework to interpolate between density - driven and geometry - based algorithms for data clustering, and specifically, to connect the mean shift algorithm with spectral clustering at discrete and continuum levels. we seek this connection through the introduction of fokker - planck equations on data graphs. besides introducing new forms of mean shift algorithms on graphs, we provide new theoretical insights on the behavior of the family of diffusion maps in the large sample limit as well as provide new connections between diffusion maps and mean shift dynamics on a fixed graph. several numerical examples illustrate our theoretical findings and highlight the benefits of interpolating density - driven and geometry - based clustering algorithms.
arxiv:2108.08687
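the density - driven side of this interpolation is the classical mean shift iteration: each point repeatedly moves to the kernel - weighted average of the data and thereby climbs towards a density mode. a minimal 1 - d sketch with a gaussian kernel (toy data, not from the paper, which works with graph - based variants of this dynamics):

```python
import math

def mean_shift_1d(points, bandwidth, n_iters=50):
    # Gaussian-kernel mean shift: move every query point to the
    # kernel-weighted average of the (fixed) data until it settles
    # near a mode of the kernel density estimate.
    data = list(points)
    shifted = list(points)
    for _ in range(n_iters):
        new = []
        for x in shifted:
            ws = [math.exp(-((x - d) ** 2) / (2 * bandwidth ** 2))
                  for d in data]
            new.append(sum(w * d for w, d in zip(ws, data)) / sum(ws))
        shifted = new
    return shifted

# Two well-separated 1-d clusters; every point should converge to
# the mode of its own cluster, yielding the clustering.
points = [-0.2, 0.0, 0.1, 0.3, 9.8, 10.0, 10.1, 10.4]
modes = mean_shift_1d(points, bandwidth=1.0)
```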
in the era of 6g, featuring compelling visions of digital twins and metaverses, extended reality ( xr ) has emerged as a vital conduit connecting the digital and physical realms, garnering widespread interest. ensuring a fully immersive wireless xr experience stands as a paramount technical necessity, demanding the liberation of xr from the confines of wired connections. in this paper, we first introduce the technologies applied in the wireless xr domain, delve into their benefits and limitations, and highlight the ongoing challenges. we then propose a novel deployment framework for a broad xr pipeline, termed " gesa - xrf ", inspired by the core philosophy of semantic communication ( semcom ) which shifts the concern from " how " to transmit to " what " to transmit. particularly, the framework comprises three stages : data collection, data analysis, and data delivery. in each stage, we integrate semantic awareness to achieve streamlined transmission and employ generative artificial intelligence ( gai ) to achieve collaborative refinements. for the data collection of multi - modal data with differentiated data volumes and heterogeneous latency requirements, we propose a novel semcom paradigm based on multi - modal fusion and separation and a gai - based robust superposition scheme. to perform a comprehensive data analysis, we employ multi - task learning to perform the prediction of field of view and personalized attention and discuss the possible preprocessing approaches assisted by gai. lastly, for the data delivery stage, we present a semantic - aware multicast - based delivery strategy aimed at reducing pixel level redundant transmissions and introduce the gai collaborative refinement approach. the performance gain of the proposed gesa - xrf is preliminarily demonstrated through a case study.
arxiv:2404.06182
we calculate the explicit probability distribution function for the flux between sites in a simple discrete time diffusive system composed of independent random walkers. we highlight some of the features of the distribution and we discuss its relation to the local instantaneous entropy production in the system. our results are applicable both to equilibrium and non - equilibrium steady states as well as for certain time dependent situations.
arxiv:cond-mat/0511450
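for symmetric hopping the net flux across a bond is a difference of two binomial counts, which is easy to check by monte carlo. the sketch below (site occupations, hopping probability and sample sizes are arbitrary illustrative choices) reproduces the zero mean and finite fluctuations expected for equal occupations:

```python
import random

def sample_flux(n_left, n_right, p_hop=0.5, n_samples=20000, seed=1):
    # Net one-step flux across a bond when the independent walkers on
    # the two adjacent sites each hop towards the bond with probability
    # p_hop: the flux is a difference of two binomial random variables.
    rng = random.Random(seed)
    fluxes = []
    for _ in range(n_samples):
        right_movers = sum(rng.random() < p_hop for _ in range(n_left))
        left_movers = sum(rng.random() < p_hop for _ in range(n_right))
        fluxes.append(right_movers - left_movers)
    return fluxes

# Equal mean occupations: the mean flux vanishes (up to sampling
# noise) while its variance is (n_left + n_right) * p * (1 - p) = 5.
fluxes = sample_flux(n_left=10, n_right=10)
mean_flux = sum(fluxes) / len(fluxes)
var_flux = sum((f - mean_flux) ** 2 for f in fluxes) / len(fluxes)
```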
in this note, we do the following: a) by using lacey's recent technique, we give an alternative proof of conde-alonso and rey's domination theorem, which states that each positive dyadic operator of arbitrary complexity is pointwise dominated by a positive dyadic operator of zero complexity: \[ \sum_{S \in \mathcal{S}} \langle f \rangle^{\mu}_{S^{(k)}} 1_S \lesssim (k+1) \sum_{S' \in \mathcal{S}'} \langle f \rangle^{\mu}_{S'} 1_{S'}. \] b) by following the analogy between median and mean oscillation, we extend lerner's local median oscillation decomposition to arbitrary (possibly non-doubling) measures: \[ \lvert f - m(f, \hat{S_0}) \rvert 1_{S_0} \lesssim \sum_{S \in \mathcal{S}} \big( \omega_{\lambda}(f; S) + \lvert m(f, S) - m(f, \hat{S}) \rvert \big) 1_S. \] this can be viewed as a median oscillation decomposition adapted to the dyadic (martingale) bmo. as an application of the decomposition, we give an alternative proof of the dyadic (martingale) john--nirenberg inequality, and of lacey's domination theorem, which states that each martingale transform is pointwise dominated by a positive dyadic operator of complexity zero.
arxiv:1502.05942
relativistic hartree-fock and random phase approximation methods for open shells are used to calculate ionization potentials and static scalar polarizabilities of eight superheavy elements with an open $6d$ shell, which include db, sg, bh, hs, mt, ds, rg and cn ($z$ = 105 to 112). inter-electron correlations are taken into account with the use of a semi-empirical polarization potential, whose parameters are chosen to fit the known ionization potentials of lighter atoms. calculations for lighter atoms are also used to illustrate the accuracy of the approach.
arxiv:1602.08190
we present a novel and simple solution to atomic broadcast (ab). we reduce ab to two subproblems. one of them is reliable broadcast (rb). the other is a subproblem we introduce and call weakly-terminating binary agreement (wba), which relaxes binary agreement (ba) protocols by not always terminating; wba therefore admits much simpler solutions than ba. we discuss concrete solutions to rb and wba, and we prove safety, liveness, and censorship resilience of our new ab protocol.
arxiv:2205.06314
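the abstract above does not spell out the protocol, but the shape of the reduction can be illustrated with a toy, single-round skeleton: reliable broadcast delivers an unordered set of values at every correct replica, a binary-agreement-style predicate (standing in for wba, assumed here to return the same decision everywhere) filters the batch, and a deterministic sort yields the shared total order. this is a hypothetical sketch of the general rb-plus-agreement pattern, not the paper's construction.

```python
def atomic_order(rb_delivered, wba_decide):
    """toy one-shot reduction: `rb_delivered` is the (unordered) set of
    values delivered by reliable broadcast; `wba_decide` stands in for a
    weakly-terminating binary agreement outcome deciding, identically at
    every replica, which values enter the batch.  a deterministic sort
    then turns the agreed-upon batch into one total delivery order."""
    batch = [v for v in rb_delivered if wba_decide(v)]
    return sorted(batch)

# two replicas that rb-delivered the same set agree on the final order,
# even though sets carry no arrival order
replica_a = {"tx3", "tx1", "tx2"}
replica_b = {"tx2", "tx3", "tx1"}
decide = lambda v: v != "tx2"   # stand-in agreement outcome
assert atomic_order(replica_a, decide) == atomic_order(replica_b, decide)
```

the real difficulty the paper addresses lives inside the two primitives, in particular handling wba executions that never terminate; the skeleton only shows why agreement on set membership plus determinism suffices for a total order.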
this paper studies privacy and secure function evaluation in communication complexity. the focus is on quantum versions of the model and on protocols with only approximate privacy against honest players. we show that the privacy loss (the minimum divulged information) in computing a function can be decreased exponentially by using quantum protocols, while the class of privately computable functions (i.e., those with privacy loss 0) is not enlarged by quantum protocols. quantum communication combined with small information leakage, on the other hand, makes certain functions computable (almost) privately which are not computable using either quantum communication without leakage or classical communication with leakage. we also give an example of an exponential reduction of the communication complexity of a function by allowing a privacy loss of $o(1)$ instead of privacy loss 0.
arxiv:quant-ph/0110038
the cohomology theory of links, introduced by the author, is combinatorial. dror bar-natan recently wrote a program that found the ranks of the cohomology groups of all prime knots with up to 11 crossings. his surprising experimental data are discussed in this note.
arxiv:math/0201306
in this paper, we consider the directional differentiability of the metric projection and its properties in the uniformly convex and uniformly smooth bochner space l^p(s; x), in which (s, a, mu) is a positive measure space and x is a uniformly convex and uniformly smooth banach space. let a in a be arbitrary with mu(a) > 0, and define the subspace l^p(a; x) of l^p(s; x), which is considered as a closed and convex subset of l^p(s; x). we first study the properties of the normalized duality mapping in l^p(s; x) and in l^p(a; x). for any c in l^p(a; x) and r > 0, we define a closed ball b_a(c; r) in l^p(a; x) and a cylinder c_a(c; r) in l^p(s; x) with base b_a(c; r). then, we investigate some optimal properties of the corresponding metric projections p(l^p(a; x)), p(b_a(c; r)) and p(c_a(c; r)), including their inverse images, their directional differentiability, and the precise solutions of their directional derivatives.
arxiv:2311.00942
the internet of things is an ecosystem of interconnected devices that are accessible through the internet. recent research focuses on adding more smartness and intelligence to these edge devices, which makes them susceptible to various kinds of security threats. these edge devices rely on cryptographic techniques to encrypt the pre-processed data collected from sensors deployed in the field. in this regard, the block cipher has been one of the most reliable options through which data security is accomplished. the strength of block encryption algorithms against different attacks depends on their nonlinear primitives, called substitution boxes (s-boxes). for the design of s-boxes, mainly algebraic and chaos-based techniques are used, but researchers have also found various weaknesses in these techniques. on the other side, the literature endorses true random numbers for information security, for the reason that true random numbers are purely non-deterministic. in this paper, firstly, a natural dynamical phenomenon is utilized for the generation of s-boxes based on true random numbers. secondly, a systematic literature review (slr) was conducted to determine which metaheuristic optimization technique is most widely adopted in the current decade for the optimization of s-boxes. based on the outcome of the slr, the genetic algorithm is chosen for the optimization of s-boxes. the results of our method validate that the proposed dynamic s-boxes are effective for block ciphers. moreover, our results show that the proposed substitution boxes achieve better
arxiv:2206.09424
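the core fitness measure in s-box optimization is nonlinearity, computed from the walsh transform of the s-box's component functions. the sketch below evaluates it for 4-bit s-boxes (the abstract's s-boxes are presumably 8-bit; 4 bits keeps the example fast) and wraps it in a minimal (1+1)-style evolutionary loop that swaps entries to preserve bijectivity. this is a generic illustration of the nonlinearity/genetic-optimization idea, not the paper's algorithm or its true-random seeding.

```python
import random

def nonlinearity(sbox, n=4):
    """nonlinearity of an n-bit s-box via the walsh transform:
    nl = 2^(n-1) - (1/2) * max over a and b != 0 of
    | sum_x (-1)^( b.S(x) xor a.x ) |, dot products over gf(2)."""
    size = 1 << n
    parity = [bin(v).count("1") & 1 for v in range(size)]  # gf(2) dot-product parity
    worst = 0
    for b in range(1, size):          # nonzero output mask
        for a in range(size):         # input mask
            w = sum((-1) ** (parity[b & sbox[x]] ^ parity[a & x])
                    for x in range(size))
            worst = max(worst, abs(w))
    return (size >> 1) - worst // 2

def mutate(sbox, rng):
    """swap two entries, which keeps the s-box a permutation"""
    s = list(sbox)
    i, j = rng.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def optimize(generations=60, seed=0):
    """toy (1+1)-style evolutionary search for a high-nonlinearity
    bijective 4-bit s-box; a stand-in for the genetic algorithm the
    abstract selects, not a reproduction of it."""
    rng = random.Random(seed)
    best = list(range(16))
    rng.shuffle(best)
    best_nl = nonlinearity(best)
    for _ in range(generations):
        cand = mutate(best, rng)
        nl = nonlinearity(cand)
        if nl >= best_nl:             # accept improvements and plateaus
            best, best_nl = cand, nl
    return best, best_nl
```

as sanity checks, the identity s-box is affine and has nonlinearity 0, while the present cipher's 4-bit s-box reaches the optimal value 4 for this size.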
we define the homology of a simplicial set with coefficients in a segal $\Gamma$-set ($\mathbf{S}$-module). we show the relevance of this new homology with values in $\mathbf{S}$-modules by proving that, taking as coefficients the $\mathbf{S}$-modules at the archimedean place over the structure sheaf on $\overline{\mathrm{Spec}\,\mathbb{Z}}$ introduced in our previous work, one obtains on the singular homology with real coefficients of a topological space $X$ a norm equivalent to the gromov norm. moreover, we prove that the two norms agree when $X$ is an oriented compact riemann surface.
arxiv:1905.03310
in entanglement-based quantum key distribution (qkd), the generation and detection of multi-photon modes leads to a trade-off between entanglement visibility and two-fold coincidence events when maximizing the secure key rate (skr). we produce a predictive model for the optimal two-fold coincidence probability per coincidence window, given the channel efficiency and detector dark count rate of a given system. this model is experimentally validated and used in simulations for qkd with satellites as well as optical fibers.
arxiv:1210.0209
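the trade-off in the abstract above can be reproduced qualitatively with a toy rate model: the two-fold coincidence probability grows with the pair-generation number, while multi-pair emission inflates the error rate, so the secret fraction eventually collapses and an interior optimum appears. the functional forms and every parameter below (`eta`, `e0`, `c`) are illustrative stand-ins, not the paper's validated model.

```python
import math

def h2(x):
    """binary entropy in bits"""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def key_rate(mu, eta=0.1, e0=0.01, c=0.05):
    """toy entanglement-based qkd rate per coincidence window:
    coincidences scale ~ linearly with the pair number mu, while
    multi-pair emission adds errors ~ c * mu, so the secret fraction
    1 - 2 h2(e) hits zero at large mu (illustrative model only)."""
    p_coinc = mu * eta * eta          # two-fold coincidence probability
    e = e0 + c * mu                   # qber: baseline plus multi-pair term
    secret = max(0.0, 1.0 - 2.0 * h2(e))
    return p_coinc * secret

def optimal_mu(lo=1e-4, hi=5.0, steps=2000):
    """grid search for the rate-maximizing pair number"""
    grid = (lo + (hi - lo) * i / steps for i in range(steps + 1))
    return max((key_rate(m), m) for m in grid)[1]
```

with these placeholder numbers the rate vanishes both for very weak sources and once the error rate exceeds the secret-fraction threshold, which is exactly the shape of trade-off the paper's predictive model optimizes.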
to reach a deeper understanding of fluid interfaces, it is necessary to identify a meaningful coarse-graining length that separates intrinsic fluctuations from capillary ones, given the lack of a proper statistical-mechanical definition of the latter. here, with the help of unsupervised learning techniques, we introduce a new length scale based on the local density of the fluid. this length scale follows a scaling law that diverges more mildly than the bulk correlation length upon approaching the critical point. this allows us to distinguish regimes of correlated and uncorrelated capillary waves from that of intrinsic fluctuations.
arxiv:2201.00911