text | source
---|---|
We compute a twisted first cohomology group of the automorphism group of a free group with coefficients in the abelianization $V$ of the IA-automorphism group of a free group. In particular, we show that it is generated by two crossed homomorphisms constructed with the Magnus representation and the Magnus expansion, due to Morita and Kawazumi respectively. As a corollary, we see that the first Johnson homomorphism does not extend to the automorphism group of a free group as a crossed homomorphism when the rank of the free group is greater than 4.
|
arxiv:0901.0589
|
We study spherically symmetric solutions with a scalar field in the shift-symmetric subclass of the Horndeski theory. Constructing an effective energy-momentum tensor of the scalar field based on the two-fluid model, we decompose the scalar field into two components: dark matter and dark energy. We find that the dark-matter fluid is pressureless and that its energy density distribution obeys the inverse-square law. We show that the scalar-field dark matter can explain the galaxy rotation curve, and we discuss the time evolution of the dark matter in the cosmic background.
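A density profile obeying the inverse-square law implies a flat rotation curve: for $\rho(r) = A/r^2$ the enclosed mass grows linearly with $r$, so the circular velocity $v = \sqrt{GM(r)/r}$ is independent of radius. A minimal numerical check of this statement (the normalization $A$ and the units are illustrative assumptions, not values from the paper):

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / M_sun

def rotation_velocity(r_kpc, A=1.0e7):
    """Circular velocity for a halo with density rho(r) = A / r^2
    (A in M_sun / kpc).  The enclosed mass is M(r) = 4*pi*A*r, so
    v(r) = sqrt(G*M(r)/r) = sqrt(4*pi*G*A), independent of r."""
    m_enc = 4.0 * np.pi * A * r_kpc
    return np.sqrt(G * m_enc / r_kpc)

radii = np.linspace(1.0, 50.0, 200)   # kpc
v = rotation_velocity(radii)          # identical value at every radius
```

The flat curve falls out analytically here; the paper's point is that the shift-symmetric scalar realizes such a profile dynamically.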
|
arxiv:1911.04741
|
The open radio access network (RAN) aims to bring openness and intelligence to the traditionally closed and proprietary RAN technology and to offer flexibility, performance improvement, and cost-efficiency in RAN deployment and operation. This paper provides a comprehensive survey of open RAN development. We briefly summarize the RAN evolution history and the state-of-the-art technologies applied in open RAN. The open RAN-related projects, activities, and standardization efforts are then discussed. We then summarize the challenges and future research directions required to support open RAN. Finally, we discuss some solutions to tackle these issues from the open-source perspective.
|
arxiv:2208.09125
|
Lab-on-a-chip (LOC) applications have emerged as invaluable physical and life sciences tools. The advantages stem from advanced system miniaturization, thus requiring far less sample volume while allowing for complex functionality, increased reproducibility, and high throughput. However, LOC applications necessitate extensive sensor miniaturization to fully leverage these inherent advantages. Atom-sized quantum sensors are highly promising to bridge this gap and have enabled measurements of temperature and of electric and magnetic fields on the nano- to microscale. Nevertheless, the technical complexity of both disciplines has so far impeded an uncompromising combination of LOC systems and quantum sensors. Here, we present a fully integrated microfluidic platform for solid-state spin quantum sensors, such as the nitrogen-vacancy (NV) center in diamond. Our platform fulfills all technical requirements, such as fast spin manipulation, enabling full quantum sensing capabilities, biocompatibility, and easy adaptability to arbitrary channel and chip geometries. To illustrate the vast potential of quantum sensors in LOC systems, we demonstrate various NV center-based sensing modalities for chemical analysis in our microfluidic platform, ranging from paramagnetic ion detection to high-resolution microscale NV-NMR. Consequently, our work opens the door for novel chemical analysis capabilities within LOC devices, with applications in electrochemistry, high-throughput reaction screening, bioanalytics, organ-on-a-chip, or single-cell studies.
|
arxiv:2209.01651
|
By means of numerical simulation we confirm that in graphene with point defects a quasigap opens in the vicinity of the resonance state with increasing impurity concentration. We prove that states inside this quasigap can no longer be described by a wavevector and are strongly localized. We visualize states corresponding to the density-of-states maxima within the quasigap and show that they arise from impurity pair clusters.
|
arxiv:0908.0653
|
We construct biregular models of families of log del Pezzo surfaces with rigid cyclic quotient singularities such that a general member in each family is well-formed and quasismooth. Each biregular model consists of an infinite series of such families of surfaces, parameterized by the natural numbers $\mathbb{N}$. Each family in these models is represented by either a codimension 3 Pfaffian format modelled on the Plücker embedding of Gr(2,5) or a codimension 4 format modelled on the Segre embedding of \(\mathbb{P}^2 \times \mathbb{P}^2\). In particular, we show the existence of two biregular models in codimension 4 which are bi-parameterized, giving rise to an infinite series of models of families of log del Pezzo surfaces. We identify those models of surfaces which do not admit a \(\mathbb{Q}\)-Gorenstein deformation to a toric variety.
|
arxiv:1711.10222
|
Conformal field theories play a central role in theoretical physics, with many applications ranging from condensed matter to string theory. The conformal bootstrap studies conformal field theories using mathematical consistency conditions and has seen great progress over the last decade. In this thesis we present an implementation of analytic bootstrap methods for perturbative conformal field theories in dimensions greater than two, which we achieve by combining large spin perturbation theory with the Lorentzian inversion formula. In the presence of a small expansion parameter, not necessarily the coupling constant, we develop this into a systematic framework, applicable to a wide range of theories. The first two chapters provide the necessary background and a review of the analytic bootstrap. This is followed by a chapter which describes the method in detail, taking the form of a practical guide to large spin perturbation theory by means of a step-by-step implementation. The second part of the thesis presents several explicit implementations of the framework, taking examples from a number of well-studied conformal field theories. We show how many literature results can be reproduced from a purely bootstrap perspective and how a variety of new results can be derived.
|
arxiv:2008.12600
|
We first review how to determine the rate of vibrational energy relaxation (VER) using perturbation theory. We then apply those theoretical results to the problem of VER of a CD stretching mode in the protein cytochrome c. We model cytochrome c in vacuum as a normal mode system with the lowest-order anharmonic coupling elements. We find that, for the ``lifetime'' width parameter $\gamma = 3 \sim 30$ cm$^{-1}$, the VER time is $0.2 \sim 0.3$ ps, which agrees rather well with the previous classical calculation using the quantum correction factor method, and is consistent with spectroscopic experiments by Romesberg's group. We decompose the VER rate into separate contributions from two modes, and find that the most significant contribution, which depends on the ``lifetime'' width parameter, comes from those modes most resonant with the CD vibrational mode.
|
arxiv:q-bio/0403019
|
We consider i.i.d. supercritical bond percolation on $\mathbb{Z}^d$, where every edge is open with probability $p > p_c(d)$, with $p_c(d)$ denoting the critical parameter for this percolation. It is known that there exists almost surely a unique infinite open cluster $C_p$ [7]. We are interested in the regularity properties in $p$ of the anchored isoperimetric profile of the infinite cluster $C_p$. For $d \ge 2$, we prove that the anchored isoperimetric profile defined in [4] is Lipschitz continuous on all intervals $[p_0, p_1] \subset (p_c(d), 1)$.
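The emergence of the unique infinite cluster above $p_c$ can be illustrated numerically. The sketch below uses a finite torus as a stand-in for $\mathbb{Z}^2$ (where $p_c(2) = 1/2$ for bond percolation); the lattice size and the two probabilities are illustrative choices, not values from the paper. It opens each edge independently with probability $p$ and measures the fraction of sites in the largest open cluster via union-find:

```python
import random

class DSU:
    """Union-find over site indices, used to merge open clusters."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def largest_cluster_fraction(L, p, seed=0):
    """Open each nearest-neighbour edge of an L x L torus independently
    with probability p; return the fraction of sites in the largest
    open cluster."""
    rng = random.Random(seed)
    dsu = DSU(L * L)
    for x in range(L):
        for y in range(L):
            i = x * L + y
            if rng.random() < p:                  # edge to (x+1, y)
                dsu.union(i, ((x + 1) % L) * L + y)
            if rng.random() < p:                  # edge to (x, y+1)
                dsu.union(i, x * L + (y + 1) % L)
    sizes = {}
    for i in range(L * L):
        r = dsu.find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / (L * L)

# below p_c only small clusters survive; above it a giant cluster
# occupies a positive fraction of the sites
theta_sub = largest_cluster_fraction(60, 0.30)
theta_sup = largest_cluster_fraction(60, 0.70)
```

The fraction for $p$ above $1/2$ is a finite-volume proxy for the density of the infinite cluster $C_p$ studied in the abstract.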
|
arxiv:1901.00367
|
The need for wearable or abandoned microsystems, as well as the trend toward lower power consumption of electronic devices, makes miniaturized renewable energy generators a viable alternative to batteries. Among the different alternatives, an interesting option is the use of inertial microgenerators for energy scavenging from vibrations present in the environment. These devices constitute perpetual energy sources without the need for refilling, thus being well suited for abandoned sensors, wireless systems, or microsystems which must be embedded within the structure without outside physical connections. Different electromagnetic energy scavenging devices have been described in the literature [1, 2, 3], based on the use of a velocity-damped resonator, which is well suited for harvesting vibrational energy induced by the operation of machines. These vibrations are characterized by a well-defined frequency (in the range between a few Hz and a few kHz) and low displacement amplitudes. Adjusting the resonant frequency of the system to that of the vibrations allows amplification of these low-amplitude displacements. Moreover, for these applications, the use of an electromagnetic device has the potential advantages of a good level of compatibility with Si microsystem technology, as well as the possibility of relatively high electromechanical coupling with simple designs.
|
arxiv:0805.0855
|
The mean field approximation to the Ising model is a canonical variational tool that is used for analysis and inference in Ising models. We provide a simple and optimal bound for the KL error of the mean field approximation for Ising models on general graphs, and extend it to higher order Markov random fields. Our bound improves on previous bounds obtained in work in the graph limit literature by Borgs, Chayes, Lovász, Sós, and Vesztergombi and another recent work by Basak and Mukherjee. Our bound is tight up to lower order terms. Building on the methods used to prove the bound, along with techniques from combinatorics and optimization, we study the algorithmic problem of estimating the (variational) free energy for Ising models and general Markov random fields. For a graph $G$ on $n$ vertices and interaction matrix $J$ with Frobenius norm $\|J\|_F$, we provide algorithms that approximate the free energy within an additive error of $\epsilon n \|J\|_F$ in time $\exp(\mathrm{poly}(1/\epsilon))$. We also show that approximation within $(n \|J\|_F)^{1-\delta}$ is NP-hard for every $\delta > 0$. Finally, we provide more efficient approximation algorithms, which find the optimal mean field approximation, for ferromagnetic Ising models and for Ising models satisfying Dobrushin's condition.
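The mean field approximation discussed here replaces the Gibbs measure by the closest product distribution; its magnetisations solve the fixed-point equations $m_i = \tanh(\beta((Jm)_i + h_i))$, and the resulting variational free energy is what the paper's algorithms estimate. A minimal sketch of the naive fixed-point iteration (the chain coupling and field values are illustrative assumptions):

```python
import numpy as np

def mean_field_ising(J, h, beta=1.0, iters=500):
    """Naive mean-field iteration m_i = tanh(beta * (J m + h)_i) for an
    Ising model with symmetric interaction matrix J and external field h.
    Returns the magnetisations m and the variational free energy of the
    product distribution with those means."""
    n = len(h)
    m = np.full(n, 0.01)              # small symmetry-breaking start
    for _ in range(iters):
        m = np.tanh(beta * (J @ m + h))
    energy = -0.5 * m @ J @ m - h @ m          # mean-field energy
    p_up = (1.0 + m) / 2.0                     # per-spin marginals
    p_dn = (1.0 - m) / 2.0
    ent = -np.sum(p_up * np.log(p_up) + p_dn * np.log(p_dn))
    free_energy = energy - ent / beta          # F = U - S / beta
    return m, free_energy

# ferromagnetic chain with a small positive field: spins align with h
J = np.zeros((4, 4))
for i in range(3):
    J[i, i + 1] = J[i + 1, i] = 1.0
m, F = mean_field_ising(J, np.full(4, 0.5), beta=1.0)
```

The paper's KL-error bound quantifies exactly how far such a product distribution can be from the true Gibbs measure.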
|
arxiv:1802.06126
|
Lossy image compression is essential for Mars exploration missions due to the limited bandwidth between Earth and Mars. However, the compression may introduce visual artifacts that complicate the geological analysis of the Martian surface. Existing quality enhancement approaches, primarily designed for Earth images, fall short for Martian images due to a lack of consideration for the unique Martian semantics. In response to this challenge, we conduct an in-depth analysis of Martian images, yielding two key insights based on semantics: the presence of texture similarities and the compact nature of texture representations in Martian images. Inspired by these findings, we introduce MarsQE, an innovative, semantic-informed, two-phase quality enhancement approach specifically designed for Martian images. The first phase involves the semantic-based matching of texture-similar reference images, and the second phase enhances image quality by transferring texture patterns from these reference images to the compressed image. We also develop a post-enhancement network to further reduce compression artifacts and achieve superior compression quality. Our extensive experiments demonstrate that MarsQE significantly outperforms existing approaches for Earth images, establishing a new benchmark for quality enhancement of Martian images.
|
arxiv:2404.09433
|
Detector-device-independent quantum key distribution (DDI-QKD) held the promise of being robust to detector side channels, a major security loophole in QKD implementations. In contrast to what has been claimed, however, we demonstrate that the security of DDI-QKD is not based on post-selected entanglement, and we introduce various eavesdropping strategies that show that DDI-QKD is in fact insecure against detector side-channel attacks, as well as against other attacks that exploit device imperfections of the receiver. Our attacks are valid even when the QKD apparatuses are built by the legitimate users of the system themselves, and thus free of malicious modifications, which is a key assumption in DDI-QKD.
|
arxiv:1607.05814
|
Weak gravitational lensing is a powerful probe of the dark sector, once measurement systematic errors can be controlled. In Refregier & Amara (2014), a calibration method based on forward modeling, called MCCL, was proposed. This relies on fast image simulations (e.g., UFig; Bergé et al. 2013) that capture the key features of galaxy populations and measurement effects. The MCCL approach has been used in Herbel et al. (2017) to determine the redshift distribution of cosmological galaxy samples and, in the process, the authors derived a model for the galaxy population mainly based on broad-band photometry. Here, we test this model by forward modeling the 40 narrow-band photometric measurements provided by the novel PAU Survey (PAUS). For this purpose, we apply the same forced photometric pipeline to data and simulations using SExtractor (Bertin & Arnouts 1996). The image simulation scheme performance is assessed at the image and at the catalogue level. We find good agreement in the distribution of pixel values, the magnitudes, the magnitude-size relation, and the interband correlations. A principal component analysis is then performed in order to derive a global comparison of the narrow-band photometry between the data and the simulations. We use a `mixing' matrix to quantify the agreement between the observed and simulated sets of principal components (PCs). We find good agreement, especially for the first three most significant PCs. We also compare the coefficients of the PC decomposition. While there are slight differences for some coefficients, we find that the distributions are in good agreement. Together, our results show that the galaxy population model derived from broad-band photometry is in good overall agreement with the PAUS data. This offers good prospects for incorporating spectral information into the galaxy model by adjusting it to the PAUS narrow-band data using forward modeling.
|
arxiv:1805.05340
|
In this paper we consider iterated integrals of multiple polylogarithm functions and prove some explicit relations among multiple polylogarithm functions. We then apply the relations obtained to find numerous formulas for alternating multiple zeta values in terms of unit-exponent alternating multiple zeta values. In particular, we prove several conjectures given by Borwein-Bradley-Broadhurst \cite{bbbl1997}, and give some general results. Furthermore, we discuss Kaneko-Yamamoto multiple zeta values, and establish some relations between them and multiple zeta values. Finally, we establish a linear relation identity for alternating multiple zeta values.
|
arxiv:1908.03065
|
In the process of exploring the world, curiosity constantly drives humans to cognize new things. Suppose you are a zoologist: for a presented animal image, you can recognize it immediately if you know its class; otherwise, you would more likely attempt to cognize it by exploiting the side-information (e.g., semantic information) you have accumulated. Inspired by this, this paper decomposes the generalized zero-shot learning (G-ZSL) task into an open set recognition (OSR) task and a zero-shot learning (ZSL) task, where OSR recognizes seen classes (if we have seen (or known) them) and rejects unseen classes (if we have never seen (or known) them before), while ZSL identifies the unseen classes rejected by the former. Simultaneously, without violating OSR's assumption that only known-class knowledge is available in training, we also make a first attempt to explore a new generalized open set recognition (G-OSR) task by introducing the accumulated side-information from known classes to OSR. For G-ZSL, such a decomposition effectively solves the class overfitting problem, in which unseen classes are easily misclassified as seen classes; this problem is ubiquitous in most existing G-ZSL methods. On the other hand, for G-OSR, introducing such semantic information of known classes not only improves the recognition performance but also endows OSR with the cognitive ability for unknown classes. Specifically, a visual and semantic prototypes-jointly guided convolutional neural network (VSG-CNN) is proposed to fulfill these two tasks (G-ZSL and G-OSR) in a unified end-to-end learning framework. Extensive experiments on benchmark datasets demonstrate the advantages of our learning framework.
|
arxiv:1908.03983
|
In this paper, we investigate the Joule-Thomson effects for AdS black holes with a global monopole. We study the effect of the global monopole parameter $\eta$ on the inversion temperature and the isenthalpic curves. The obtained results are compared with the Joule-Thomson expansion of a van der Waals fluid, and the equivalence is noted. The phase transition occurring in the extended phase space of this black hole is analogous to that of a van der Waals gas. Our study shows that the global monopole parameter $\eta$ plays a very important role in the Joule-Thomson expansion.
|
arxiv:1805.11053
|
This paper is concerned with the analysis of a class of optimal control problems governed by a time-harmonic eddy current system with a dipole source, which is taken as the control variable. A mathematical model is set up for the state equation where the dipole source takes the form of a Dirac mass located in the interior of the conducting domain. A non-standard approach featuring the fundamental solution of a $\operatorname{curl}\operatorname{curl} - \operatorname{Id}$ operator is proposed to address the well-posedness of the state problem, leading to a split structure of the state field as the sum of a singular part and a regular part. The aim of the control is the best approximation of desired electric and magnetic fields via a suitable $L^2$-quadratic tracking cost functional. Here, special attention is devoted to establishing an adjoint calculus which is consistent with the form of the state variable, and in this way first-order optimality conditions are eventually derived.
|
arxiv:1912.01504
|
In this note we study the Bochner formula on smooth metric measure spaces. We introduce weighted curvature conditions that imply vanishing of all Betti numbers.
|
arxiv:2005.02604
|
Aims. Despite photometry and spectroscopy of its oscillations obtained over the past 25 years, the pulsation frequency spectrum of the rapidly oscillating Ap (roAp) star gamma Equ has remained poorly understood. Better time-series photometry, combined with recent advances to incorporate interior magnetic field geometry into pulsational models, enables us to perform improved asteroseismology of this roAp star. Methods. We obtained 19 days of continuous high-precision photometry of gamma Equ with the MOST (Microvariability & Oscillations of STars) satellite. The data were reduced with two different reduction techniques, and significant frequencies were identified. Those frequencies were fitted by interpolating a grid of pulsation models that include dipole magnetic fields of various polar strengths. Results. We identify 7 frequencies in gamma Equ that we associate with 5 high-overtone p-modes and the 1st and 2nd harmonics of the dominant p-mode. One of the modes and both harmonics are new discoveries for this star. Our best model solution (1.8 M_sun, log T_eff ~ 3.882; polar field strength ~ 8.1 kG) leads to unique mode identifications for these frequencies ($\ell = 0, 1, 2$ and 4). This is the first purely asteroseismic fit to a grid of magnetic models. We measure amplitude and phase modulation of the primary frequency due to beating with a closely spaced frequency which had never been resolved. This casts doubt on theories that such modulation, unrelated to the rotation of the star, is due to a stochastic excitation mechanism.
|
arxiv:0801.0863
|
Cross-correlation signals are recorded from fluorescence photons scattered in free space off a trapped-ion structure. The analysis of the signal allows for unambiguously revealing the spatial frequency, and thus the distance, as well as the spatial alignment of the ions. For the case of two ions we obtain from the cross-correlations a spatial frequency $f_\text{spatial} = 1490 \pm 2_\text{stat.} \pm 8_\text{syst.}\,\text{rad}^{-1}$, where the statistical uncertainty improves with the integrated number of correlation events as $N^{-0.51 \pm 0.06}$. We independently determine the spatial frequency to be $1494 \pm 11\,\text{rad}^{-1}$, proving excellent agreement. Expanding our method to the case of three ions, we demonstrate its functionality for two-dimensional arrays of emitters of indistinguishable photons, serving as a model system to yield structural information where direct imaging techniques fail.
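The idea of reading a spatial frequency out of a correlation signal can be sketched in miniature: for two emitters the cross-correlation varies sinusoidally across the detection angle, so the frequency can be taken from the peak of a discrete Fourier transform. The signal model, the angular range, and the reuse of the abstract's 1490 rad$^{-1}$ as ground truth are illustrative assumptions, not the experimental analysis:

```python
import numpy as np

f_true = 1490.0                        # rad^-1, value borrowed from the abstract
theta = np.linspace(-0.2, 0.2, 4096)   # detection angle (rad), illustrative range
g2 = 1.0 + np.cos(f_true * theta)      # idealized two-emitter correlation signal

# the peak of the Fourier spectrum gives the spatial frequency
spectrum = np.abs(np.fft.rfft(g2 - g2.mean()))
freqs = np.fft.rfftfreq(theta.size, d=theta[1] - theta[0])   # cycles per rad
f_est = 2.0 * np.pi * freqs[np.argmax(spectrum)]             # back to rad^-1
```

With this window the frequency-bin spacing limits the estimate to a few rad$^{-1}$; the experiment's much smaller uncertainty comes from fitting rather than from a raw FFT peak.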
|
arxiv:2012.13206
|
High-quality reference data from diffusion Monte Carlo calculations are presented for bulk sI methane hydrate, a complex crystal exhibiting both hydrogen-bond- and dispersion-dominated interactions. The performance of some commonly used exchange-correlation functionals and all-atom point charge force fields is evaluated. Our results show that none of the exchange-correlation functionals tested are sufficient to describe both the energetics and the structure of methane hydrate accurately, whilst the point charge force fields perform badly in their description of the cohesive energy but fare well for the dissociation energetics. By comparing to ice Ih, we show that a good prediction of the volume and cohesive energies for the hydrate relies primarily on an accurate description of the hydrogen-bonded water framework, but that to correctly predict the stability of the hydrate with respect to dissociation into ice Ih and methane gas, accuracy in the water-methane interaction is also required. Our results highlight the difficulty that density functional theory faces in describing both the hydrogen-bonded water framework and the dispersion-bound methane.
|
arxiv:1402.6874
|
We present an 82 ks Chandra ACIS-I observation of a large-scale hierarchical complex, which consists of various clusters/groups of galaxies and low-surface-brightness X-ray emission at z = 0.247. This high-resolution {\sl Chandra} observation allows us for the first time to separate unambiguously the X-ray contributions from discrete sources and large-scale diffuse hot gas. We detect 99 X-ray sources in a $17^\prime \times 17^\prime$ field. Ten of these sources are identified as members of the complex and are mostly radio-bright. Whereas unresolved X-ray sources tend to be associated with galaxies in intermediate-density environments, the extended X-ray emission peaks at bright radio galaxies in the central cluster. In particular, a distinct X-ray trail appears on one side of the fast-moving galaxy C153, clearly due to ram-pressure stripping. The diffuse X-ray emission from the central cluster can be characterized by a thermal plasma with a characteristic temperature of $3.2_{-0.4}^{+0.5}$ keV and a heavy element abundance of $0.24_{-0.12}^{+0.15}$ solar (90% confidence uncertainties). In comparison, a patch of low-surface-brightness X-ray emission apparently originates in relatively low-density intergalactic gas with a characteristic temperature of $0.98_{-0.27}^{+0.22}$ keV and an abundance of $\lesssim 0.09$ solar. The Chandra observation, together with extensive multi-wavelength data, indicates that the complex represents a projection of several galaxy sub-structures, which may be undergoing major mergers. We discuss the dynamic states of the complex and its sub-structures, as well as the properties of the X-ray-emitting galaxies and their relationship to their environments.
|
arxiv:astro-ph/0404602
|
Multi-step manipulation tasks, where robots interact with their environment and must apply process forces based on the perceived situation, remain challenging to learn and prone to execution errors. Accurately simulating these tasks is also difficult. Hence, it is crucial for robust task performance to learn how to coordinate end-effector pose and applied force, monitor execution, and react to deviations. To address these challenges, we propose a learning approach that directly infers both low- and high-level task representations from user demonstrations on the real system. We developed an unsupervised task segmentation algorithm that combines intention recognition and feature clustering to infer the skills of a task. We leverage the inferred characteristic features of each skill in a novel unsupervised anomaly detection approach to identify deviations from the intended task execution. Together, these components form a comprehensive framework capable of incrementally learning task decisions and new behaviors as new situations arise. Compared to state-of-the-art learning techniques, our approach significantly reduces the required amount of training data and computational complexity while efficiently learning complex in-contact behaviors and recovery strategies. Our proposed task segmentation and anomaly detection approaches outperform state-of-the-art methods on force-based tasks evaluated on two different robotic systems.
|
arxiv:2505.04565
|
An accurate prediction of house prices is a fundamental requirement for various sectors, including real estate and mortgage lending. It is widely recognized that a property's value is not solely determined by its physical attributes but is significantly influenced by its surrounding neighbourhood. Meeting the diverse housing needs of individuals while balancing budget constraints is a primary concern for real estate developers. To this end, we addressed the house price prediction problem as a regression task and thus employed various machine learning techniques capable of expressing the significance of independent variables. We made use of the housing dataset of Ames, Iowa, USA to compare the support vector regressor, random forest regressor, XGBoost, multilayer perceptron, and multiple linear regression algorithms for house price prediction. Afterwards, we identified the key factors that influence housing costs. Our results show that XGBoost is the best-performing model for house price prediction.
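The simplest of the models compared above is multiple linear regression, which already exposes the significance of each independent variable through the sign and magnitude of its coefficients. A self-contained sketch on synthetic stand-in data (the features, coefficients, and noise level are invented for illustration; this is not the Ames dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in for a housing table: area (m^2), bedrooms, age (years)
n = 500
X = np.column_stack([
    rng.uniform(50, 250, n),      # living area
    rng.integers(1, 6, n),        # bedrooms
    rng.uniform(0, 60, n),        # house age
])
true_w = np.array([1200.0, 8000.0, -300.0])   # price per unit of each feature
y = X @ true_w + 25000.0 + rng.normal(0, 5000.0, n)

# multiple linear regression via ordinary least squares
A = np.column_stack([np.ones(n), X])          # prepend intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, w = coef[0], coef[1:]

pred = A @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

The negative coefficient recovered for age mirrors the kind of "key factor" reading the paper performs; tree ensembles such as XGBoost capture the same signal nonparametrically via feature importances.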
|
arxiv:2402.04082
|
Large language models (LLMs) are proven to benefit substantially from retrieval-augmented generation (RAG) in alleviating hallucinations when confronted with knowledge-intensive questions. RAG adopts information retrieval techniques to inject external knowledge from semantically relevant documents as input contexts. However, since today's internet is flooded with noisy and fabricated content, it is inevitable that RAG systems are vulnerable to these noises and prone to responding incorrectly. To this end, we propose to optimize the retrieval-augmented generator with an Adversarial Tuning Multi-agent system (ATM). ATM steers the generator toward a robust perspective of useful documents for question answering, with the help of an auxiliary attacker agent, by adversarially tuning the agents for several iterations. After rounds of multi-agent iterative tuning, the generator can eventually better discriminate useful documents from fabrications. The experimental results verify the effectiveness of ATM, and we also observe that the generator can achieve better performance compared to state-of-the-art baselines.
|
arxiv:2405.18111
|
The null horizon focusing equation is equivalent, via the fluid/gravity correspondence, to the entropy balance law of the fluid. Using this equation we derive a simple novel formula for the bulk viscosity of the fluid. The formula is expressed in terms of the dependence of scalar fields at the horizon on thermodynamic variables such as the entropy and charge densities. We apply the formula to three classes of gauge theory plasmas: non-conformal branes, perturbations of the $\mathcal{N}=4$ supersymmetric Yang-Mills theory, and holographic models of QCD, and discuss its range of applicability.
|
arxiv:1103.1657
|
We utilize machine learning to study the string landscape. Deep data dives and conjecture generation are proposed as useful frameworks for utilizing machine learning in the landscape, and examples of each are presented. A decision tree accurately predicts the number of weak Fano toric threefolds arising from reflexive polytopes, each of which determines a smooth F-theory compactification, and linear regression generates a previously proven conjecture for the gauge group rank in an ensemble of $\frac43 \times 2.96 \times 10^{755}$ F-theory compactifications. Logistic regression generates a new conjecture for when $E_6$ arises in the large ensemble of F-theory compactifications, which is then rigorously proven. This result may be relevant for the appearance of visible sectors in the ensemble. Through conjecture generation, machine learning is useful not only for numerics, but also for rigorous results.
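The conjecture-generation workflow can be illustrated in miniature: fit an interpretable model, then read a candidate closed-form statement off its parameters. The sketch below uses a hidden linear rule on toy integer features as a stand-in for the $E_6$ ensemble data; the features, the rule, and the training setup are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# labels follow a hidden rule y = 1 iff 2*a - b > 0 on integer features
n = 2000
X = rng.integers(-5, 6, size=(n, 2)).astype(float)
y = (2.0 * X[:, 0] - X[:, 1] > 0).astype(float)

# logistic regression by plain gradient descent
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * np.mean(p - y)

acc = np.mean((X @ w + b > 0) == (y == 1))
ratio = w[0] / -w[1]   # compare against the hidden rule's coefficient 2
```

Reading the fitted weights (positive on `a`, negative on `b`, ratio near 2) suggests the exact rule, which one would then try to prove; this is the spirit in which the paper turns a logistic fit into a rigorous statement about $E_6$.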
|
arxiv:1707.00655
|
We extend the algebra of local observables in topological conformal field theories by nonlocal operators. This allows us to construct parameter-dependent operations realized via certain integrals over the compactified moduli spaces, satisfying analogues of the Leibniz and higher-order Leibniz identities holding up to homotopy. We conjecture that one can construct a complete set of such operations which lead to a parameter-dependent version of Loday's homotopy Leibniz algebras.
|
arxiv:1301.6382
|
Over finite fields, if the image of a polynomial map is not the entire field, then its cardinality can be bounded above by a significantly smaller value. Earlier results bound the cardinality of the value set using the degree of the polynomial, but more recent results make use of the powers of all monomials. In this paper, we explore the geometric properties of the Newton polytope and show how they allow for tighter upper bounds on the cardinality of the multivariate value set. We then explore a method which allows for even stronger upper bounds, regardless of whether one uses the multivariate degree or the Newton polytope to bound the value set. Effectively, this provides an alternate proof of Kosters' degree bound, an improved Newton polytope-based bound, and an improvement of a degree matrix-based result given by Zan and Cao.
|
arxiv:1507.04085
|
We propose a viscosity-enhanced generative pre-trained physics-informed neural network with a transform layer (VGPT-PINN) for solving parameterized nonlinear conservation laws. The VGPT-PINN extends the traditional physics-informed neural networks, and their recently proposed generative pre-trained strategy for linear model reduction, to nonlinear model reduction and shock-capturing domains. By utilizing an adaptive meta-network, a simultaneously trained transform layer, viscosity enhancement strategies, implementable shock interaction analysis, and a separable training process, the VGPT-PINN efficiently captures complex parameter-dependent shock formations and interactions. Numerical results of the VGPT-PINN applied to the families of the inviscid Burgers' equation and the Euler equations, parameterized by their initial conditions, demonstrate the robustness and accuracy of the proposed technique. It accurately solves for the viscosity solution via very few neurons without leveraging any {\it a priori} knowledge of the equations or their initial conditions.
|
arxiv:2501.01587
|
in this note we look at the freeness for complex affine hypersurfaces. if $ x \ subset \ mathbb { c } ^ n $ is such a hypersurface, and $ d $ denotes the associated projective hypersurface, obtained by taking the closure of $ x $ in $ \ mathbb { p } ^ n $, then we relate first the jacobian syzygies of $ d $ and those of $ x $. then we introduce two types of freeness for an affine hypersurface $ x $, and prove various relations between them and the freeness of the projective hypersurface $ d $. we write down a proof of the folklore result saying that an affine hypersurface is free if and only if all of its singularities are free, in the sense of k. saito ' s definition in the local setting. in particular, smooth affine hypersurfaces and affine plane curves are always free. some other results, involving global tjurina numbers and minimal degrees of non trivial syzygies are also explored.
|
arxiv:2106.15501
|
a first - principles approach is demonstrated to calculate the work function of bi2te3. the reference potential and the vacuum energy levels are extracted from the bi2te3 ( 0001 ) surface structure using the reference potential method based on density functional theory. the one - shot g0w0 many - body perturbation theory is used to place the bulk band edge energies with respect to the reference level and the vacuum energy. finally, a work function of 5. 301 - 5. 131 ev is predicted for the bi2te3 ( 0001 ) surface and compared to that of various elements.
|
arxiv:1711.04891
|
coherent control of bound state processes via the interfering overlapping resonances scenario [ christopher et al., j. chem. phys. 123, 064313 ( 2006 ) ] is developed to control intramolecular vibrational redistribution ( ivr ). the approach is applied to the flow of population between bonds in a model of chaotic ocs vibrational dynamics, showing the ability to significantly alter the extent and rate of ivr by varying quantum interference contributions.
|
arxiv:quant-ph/0703263
|
cities play a vital role in providing water to vast populations. due to an increasing population and the urbanization process, urban water demand is expected to double by 2050. however, some cities are expanding in locations with limited water retention capacity, scarce rainfall and unfavourable shape. this study quantifies the impact of urban form on water scarcity across more than 100 cities in latin america, asia, and africa. the analysis integrates indicators of urban morphology, satellite imagery for natural resources, and terrain metrics to model cities ' influence on water accessibility. water tariffs, proximity to critical infrastructure, and access to piped water are used to assess scarcity in urban areas. results show that locations that are far from the centre, with smaller buildings and fewer constructed surfaces, are less likely to have water access. urban sprawl negatively impacts water availability within a city, decreases proximity to critical infrastructure and increases water tariffs. keeping everything else constant, a location that is twice as far from the city centre has 38 % less proximity to critical infrastructure. also, water tariffs are up to 75 % higher and the capacity of cities to provide water drops by half in more sparse cities.
|
arxiv:2402.06676
|
lidar has become an important component of perception systems in autonomous driving, but the challenges of training data acquisition and annotation have emphasized the role of sensor - to - sensor domain adaptation. in this work, we address the problem of lidar upsampling. learning on lidar point clouds is a rather challenging task due to their irregular and sparse structure. here we propose a method for lidar point cloud upsampling which can reconstruct fine - grained lidar scan patterns. the key idea is to utilize edge - aware dense convolutions for both feature extraction and feature expansion. additionally, applying a more accurate sliced wasserstein distance facilitates learning of the fine lidar sweep structures. this in turn enables our method to employ a one - stage upsampling paradigm without the need for coarse and fine reconstruction. we conduct several experiments to evaluate our method and demonstrate that it provides better upsampling.
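the sliced wasserstein distance mentioned above can be sketched generically (random 1 - d projections plus sorting; this is not the paper's implementation, and the point clouds below are synthetic stand - ins for lidar scans):

```python
# generic sliced wasserstein-2 distance between equal-size point clouds:
# project both clouds onto random unit directions, sort the 1-d projections,
# and average the squared transport cost over projections.
import numpy as np

def sliced_wasserstein(x, y, n_proj=128, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=x.shape[1])
        theta /= np.linalg.norm(theta)            # random unit direction
        px, py = np.sort(x @ theta), np.sort(y @ theta)
        total += np.mean((px - py) ** 2)          # 1-d w2^2 for sorted samples
    return np.sqrt(total / n_proj)

rng = np.random.default_rng(1)
cloud = rng.normal(size=(256, 3))                 # stand-in for a lidar scan
d_same = sliced_wasserstein(cloud, cloud)         # identical clouds -> 0
d_shift = sliced_wasserstein(cloud, cloud + 2.0)  # rigid shift -> positive
```

for identical clouds every sorted projection matches exactly, so the distance is zero, while a rigid shift produces a distance close to the shift's norm.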
|
arxiv:2301.13558
|
in both spinel compounds geco $ _ 2 $ o $ _ 4 $ and geni $ _ 2 $ o $ _ 4 $, which order antiferromagnetically ( at $ t _ n = 23. 5 k $ and $ t _ { n _ 1 } = 12. 13 k $, $ t _ { n _ 2 } = 11. 46 k $ ) with different curie - weiss temperatures ( $ t _ { cw } $ = 80. 5 k and - 15 k ), the usual magnetic frustration criterion $ f = | t _ { cw } | / t _ n \gg 1 $ is not fulfilled. using neutron powder diffraction and magnetization measurements up to 55 t, both compounds are found to share a close magnetic ground state at low temperature and a similar magnetic behavior ( but with a different energy scale ), even though their spin anisotropy and first - neighbor exchange interactions are quite different. this magnetic behavior can be understood when considering the main four magnetic exchange interactions. the frustration mechanisms are then elucidated.
|
arxiv:cond-mat/0608061
|
a number field is said to be a cm - number field if it is a totally imaginary quadratic extension of a totally real number field. we define a totally imaginary number field to be of cm - type if it contains a cm - subfield, and of tr - type if it does not contain a cm - subfield. for quartic totally imaginary number fields when ordered by discriminant, we show that about 66. 95 % are of tr - type and about 33. 05 % are of cm - type. for a sextic totally imaginary number field we classify its type in terms of its galois group and possibly some additional information about the location of complex conjugation in the galois group.
|
arxiv:2401.16586
|
we investigate the impact of photorefractive effect on lithium niobate integrated quantum photonic circuits dedicated to continuous variable on - chip experiments. the circuit main building blocks, i. e. cavities, directional couplers, and periodically poled nonlinear waveguides are studied. this work demonstrates that, even when the effect of photorefractivity is weaker than spatial mode hopping, they might compromise the success of on - chip quantum photonics experiments. we describe in detail the characterization methods leading to the identification of this possible issue. we also study to which extent device heating represents a viable solution to counter this effect. we focus on photorefractive effect induced by light at 775 nm, in the context of the generation of non - classical light at 1550 nm telecom wavelength.
|
arxiv:2007.11375
|
this work provides a complete description of quasistationary distributions ( qsds ) for markov chains with a unique absorbing state and an irreducible set of non - absorbing states. as is well - known, every qsd has an associated absorption parameter describing the exponential tail of the absorption time under the law of the process with the qsd as the initial distribution. the analysis associated with the existence and representation of qsds corresponding to a given parameter is according to whether the moment generating function of the absorption time starting from any non - absorbing state evaluated at the parameter is finite or infinite, the finite or infinite moment generating function regimes, respectively. for parameters in the finite regime, it is shown that when they exist, all qsds are in the convex cone of a martin entry boundary associated with the parameter. the infinite regime corresponds to at most one parameter value and at most one qsd. in this regime, when a qsd exists, it is unique and can be represented by a renewal - type formula. multiple applications of the findings are presented, including revisiting some of the main classical results in the area.
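as a minimal finite - state sketch of the objects discussed here (toy numbers, not from the paper): restricting the transition matrix to the irreducible non - absorbing set, a qsd can be computed as the normalized left perron eigenvector, with the associated absorption parameter read off from the perron eigenvalue.

```python
# qsd for a 3-state chain with absorbing state 0 and non-absorbing {1, 2}.
# from 1: absorb w.p. 0.2, go to 2 w.p. 0.8; from 2: to 1 or stay, 0.5 each.
import numpy as np

Q = np.array([[0.0, 0.8],     # transition matrix restricted to {1, 2}
              [0.5, 0.5]])

nu = np.array([0.5, 0.5])     # left power iteration towards the perron vector
for _ in range(500):
    nu = nu @ Q
    nu /= nu.sum()

theta = (nu @ Q).sum()        # per-step survival probability under the qsd
```

started from nu, the chain survives each step with probability theta, so the absorption time has a geometric ( exponential ) tail with rate - log theta, matching the absorption parameter described above.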
|
arxiv:2402.11154
|
in this paper we introduce a method that allows one to prove uniform local results for one - dimensional discrete schrödinger operators with sturmian potentials. we apply this method to the transfer matrices in order to study the lyapunov exponent and the growth rate of eigenfunctions. this gives uniform vanishing of the lyapunov exponent on the spectrum for all irrational rotation numbers. for irrational rotation numbers with bounded continued fraction expansion, it gives uniform existence of the lyapunov exponent on the whole complex plane. moreover, it yields uniform polynomial upper bounds on the growth rate of transfer matrices for irrational rotation numbers with bounded density. in particular, all our results apply to the fibonacci case.
|
arxiv:math-ph/9905008
|
if the neutrinos are to be identified with the primary source of ultra - high energy cosmic rays ( uhecr ), their interaction with relic neutrinos is of great importance in understanding their long intergalactic journey. in theories with large compact dimensions, the exchange of a tower of massive spin - 2 gravitons ( kaluza - klein excitations ) gives an extra contribution to the $ \ nu \ bar { \ nu } \ longrightarrow f \ bar { f } $ and $ \ gamma \ gamma $ processes, along with the opening of a new channel for the neutrinos to annihilate with the relic cosmic neutrino background, $ \ nu \ bar { \ nu } \ longrightarrow g _ { kk } $, to produce bulk gravitons in the extra dimensions. this will affect their attenuation. we compute the contribution of these kaluza - klein excitations to the above processes and find that for parameters of the theory constrained by supernova cooling, the contribution does indeed become dominant above $ \ sqrt { s } \ simeq 300 $ gev.
|
arxiv:hep-ph/0005030
|
for a differential field $ f $ having an algebraically closed field of constants, we analyze the structure of picard - vessiot extensions of $ f $ whose differential galois groups are unipotent algebraic groups and apply these results to study stability problems in integration in finite terms and the inverse problem in differential galois theory for unipotent algebraic groups.
|
arxiv:2504.04846
|
as in statistical physics, the concept of universality plays an important, albeit qualitative, role in the field of comparative mythology. here we apply statistical mechanical tools to analyse the networks underlying three iconic mythological narratives with a view to identifying common and distinguishing quantitative features. of the three narratives, an anglo - saxon and a greek text are mostly believed by antiquarians to be partly historically based while the third, an irish epic, is often considered to be fictional. here we show that network analysis is able to discriminate real from imaginary social networks and place mythological narratives on the spectrum between them. moreover, the perceived artificiality of the irish narrative can be traced back to anomalous features associated with six characters. considering these as amalgams of several entities or proxies, renders the plausibility of the irish text comparable to the others from a network - theoretic point of view.
|
arxiv:1205.4324
|
repair mechanisms are important within resilient systems to maintain the system in an operational state after an error occurred. usually, constraints on the repair mechanisms are imposed, e. g., concerning the time or resources required ( such as energy consumption or other kinds of costs ). for systems modeled by markov decision processes ( mdps ), we introduce the concept of resilient schedulers, which represent control strategies guaranteeing that these constraints are always met within some given probability. assigning rewards to the operational states of the system, we then aim towards resilient schedulers which maximize the long - run average reward, i. e., the expected mean payoff. we present a pseudo - polynomial algorithm that decides whether a resilient scheduler exists and if so, yields an optimal resilient scheduler. we show also that already the decision problem asking whether there exists a resilient scheduler is pspace - hard.
|
arxiv:1707.03223
|
previous work ( pradines, 1966, aof and brown, 1992 ) has given a setting for a holonomy lie groupoid of a locally lie groupoid. here we develop analogous 2 - dimensional notions starting from a locally lie crossed module of groupoids. this involves replacing the ehresmann notion of a local smooth coadmissible section of a groupoid by a local smooth coadmissible homotopy ( or free derivation ) for the crossed module case. the development also has to use corresponding notions for certain types of double groupoids. this leads to a holonomy lie groupoid rather than double groupoid, but one which involves the 2 - dimensional information.
|
arxiv:math/0009082
|
in this paper, we propose two new classes of tensors : double b - tensors and quasi - double b - tensors, give some properties of double b - tensors and quasi - double b - tensors, discuss their relationships with b - tensors and positive definite tensors and show that even order symmetric double b - tensors and even order symmetric quasi - double b - tensors are positive definite. these give some checkable sufficient conditions for positive definiteness of tensors.
|
arxiv:1408.2299
|
we estimate the accretion rates of 235 classical t tauri star ( ctts ) candidates in the lagoon nebula using $ ugri $ h $ \ alpha $ photometry from the vphas + survey. our sample consists of stars displaying h $ \ alpha $ - excess, the intensity of which is used to derive accretion rates. for a subset of 87 stars, the intensity of the $ u $ - band excess is also used to estimate accretion rates. we find the mean variation in accretion rates measured using h $ \ alpha $ and $ u $ - band intensities to be $ \ sim $ 0. 17 dex, agreeing with previous estimates ( 0. 04 - 0. 4 dex ) but for a much larger sample. the spatial distribution of ctts align with the location of protostars and molecular gas suggesting that they retain an imprint of the natal gas fragmentation process. strong accretors are concentrated spatially, while weak accretors are more distributed. our results do not support the sequential star forming processes suggested in the literature.
|
arxiv:1507.06786
|
due to the global pandemic of covid - 19, there is an urgent need to utilize existing technologies to their full potential. the internet of things ( iot ) is regarded as one of the most trending technologies with great potential in fighting against the coronavirus outbreak. the iot comprises a sparse network in which iot devices sense the environment and send useful data over the internet. in this paper, we examine the current status of iot applications related to covid - 19, identify their deployment and operational challenges, and suggest possible opportunities to further contain the pandemic. furthermore, we perform an analysis for implementing iot in which internal and external factors are discussed.
|
arxiv:2007.12268
|
$ r(p) = \frac{\alpha}{n} + (1 - \alpha) \sum_{j \rightarrow i} \frac{1}{n_j} x_j^{(k)} $

another way of looking at it:

$ r(a) = \frac{r_b}{b_{\text{(outlinks)}}} + \cdots + \frac{r_n}{n_{\text{(outlinks)}}} $

=== centrality measures ===

information about the relative importance of nodes and edges in a graph can be obtained through centrality measures, widely used in disciplines like sociology. centrality measures are essential when a network analysis has to answer questions such as: "which nodes in the network should be targeted to ensure that a message or information spreads to all or most nodes in the network?" or, conversely, "which nodes should be targeted to curtail the spread of a disease?" formally established measures of centrality are degree centrality, closeness centrality, betweenness centrality, eigenvector centrality, and katz centrality. the objective of the network analysis generally determines the type of centrality measure(s) to be used. degree centrality of a node in a network is the number of links incident on the node. closeness centrality determines how "close" a node is to other nodes in a network by measuring the sum of the shortest distances (geodesic paths) between that node and all other nodes in the network. betweenness centrality determines the relative importance of a node by measuring the amount of traffic flowing through that node to other nodes in the network; this is done by measuring the fraction of paths connecting all pairs of nodes that contain the node of interest. group betweenness centrality measures the amount of traffic flowing through a group of nodes.
eigenvector centrality is a more sophisticated version of degree centrality in which the centrality of a node depends not only on the number of links incident on the node but also on the quality of those links. this quality factor is determined by the eigenvectors of the network's adjacency matrix.
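the pagerank iteration written above can be sketched in a few lines of plain python; the four - edge toy graph and node names are invented for illustration, with alpha as the teleport weight:

```python
# power iteration for r(p_i) = alpha/n + (1 - alpha) * sum_{j -> i} x_j / n_j,
# where n_j is the out-degree of node j. toy example, no dangling nodes.

def pagerank(edges, alpha=0.15, iters=100):
    nodes = sorted({u for e in edges for u in e})
    n = len(nodes)
    out = {v: sum(1 for u, _ in edges if u == v) for v in nodes}
    r = {v: 1.0 / n for v in nodes}                # uniform start
    for _ in range(iters):
        new = {v: alpha / n for v in nodes}        # teleport term
        for u, v in edges:
            new[v] += (1 - alpha) * r[u] / out[u]  # mass pushed along links
        r = new
    return r

ranks = pagerank([("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")])
```

here node c, which collects links from both a and b, ends up with the highest score, and the scores sum to one at every iteration.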
|
https://en.wikipedia.org/wiki/Network_science
|
these are the mini - proceedings of the workshop "spontaneously broken chiral symmetry and hard qcd phenomena" held at the physikzentrum bad honnef from july 15 to 19, 2002. every author presents a summary of his talk. the transparencies of all speakers and further information on the workshop can be found on the website: http://www.tp2.ruhr-uni-bochum.de/hardreactions.html
|
arxiv:hep-ph/0211291
|
mathematics is a far reaching discipline and its tools appear in many applications. in this paper we discuss its role in music and signal processing by revisiting the use of mathematics in algorithms that can extract chord information from recorded music. we begin with a light introduction to the theory of music and motivate the use of fourier analysis in audio processing. we introduce the discrete and continuous fourier transforms and investigate their use in extracting important information from audio data.
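as a hedged illustration of the kind of analysis described here (the sample rate and the two pitches below are invented, not taken from the paper), one can recover the notes of a synthesized two - note dyad from the peaks of its discrete fourier transform:

```python
# synthesize one second of a4 (440 hz) plus a quieter c#5 (~554.37 hz),
# take the magnitude spectrum, and read off the two strongest bins.
import numpy as np

sr = 8000                                     # samples per second
t = np.arange(sr) / sr                        # 1 s of audio -> 1 hz bins
x = np.sin(2 * np.pi * 440.0 * t) + 0.5 * np.sin(2 * np.pi * 554.37 * t)

mag = np.abs(np.fft.rfft(x))                  # discrete fourier transform
freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)   # bin index -> frequency in hz
peaks = sorted(freqs[np.argsort(mag)[-2:]])   # two strongest frequencies
```

the strongest bins land at 440 hz and 554 hz (the off - bin 554.37 hz component leaks, but its nearest bin still dominates), which is the raw spectral information a chord - extraction algorithm would then map to note names.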
|
arxiv:1306.2859
|
a key function of the lexicon is to express novel concepts as they emerge over time through a process known as lexicalization. the most common lexicalization strategies are the reuse and combination of existing words, but they have typically been studied separately in the areas of word meaning extension and word formation. here we offer an information - theoretic account of how both strategies are constrained by a fundamental tradeoff between competing communicative pressures : word reuse tends to preserve the average length of word forms at the cost of less precision, while word combination tends to produce more informative words at the expense of greater word length. we test our proposal against a large dataset of reuse items and compounds that appeared in english, french and finnish over the past century. we find that these historically emerging items achieve higher levels of communicative efficiency than hypothetical ways of constructing the lexicon, and both literal reuse items and compounds tend to be more efficient than their non - literal counterparts. these results suggest that reuse and combination are both consistent with a unified account of lexicalization grounded in the theory of efficient communication.
|
arxiv:2411.05379
|
in this article, we study the distribution of large values of the riemann zeta function on the 1 - line. we obtain an improved density function concerning large values, holding in the same range as that given by granville and soundararajan.
|
arxiv:2110.03293
|
this paper describes our deep learning - based approach to multilingual aspect - based sentiment analysis as part of semeval 2016 task 5. we use a convolutional neural network ( cnn ) for both aspect extraction and aspect - based sentiment analysis. we cast aspect extraction as a multi - label classification problem, outputting probabilities over aspects parameterized by a threshold. to determine the sentiment towards an aspect, we concatenate an aspect vector with every word embedding and apply a convolution over it. our constrained system ( unconstrained for english ) achieves competitive results across all languages and domains, placing first or second in 5 and 7 out of 11 language - domain pairs for aspect category detection ( slot 1 ) and sentiment polarity ( slot 3 ) respectively, thereby demonstrating the viability of a deep learning - based approach for multilingual aspect - based sentiment analysis.
|
arxiv:1609.02748
|
we have measured the nuclear transparency of the incoherent diffractive $ a ( e, e ' \ rho ^ 0 ) $ process in $ ^ { 12 } $ c and $ ^ { 56 } $ fe targets relative to $ ^ 2 $ h using a 5 gev electron beam. the nuclear transparency, the ratio of the produced $ \ rho ^ 0 $ ' s on a nucleus relative to deuterium, which is sensitive to the $ \ rho a $ interaction, was studied as a function of the coherence length ( $ l _ c $ ), the lifetime of the hadronic fluctuation of the virtual photon, and the four - momentum transfer squared ( $ q ^ 2 $ ). while the transparency for both $ ^ { 12 } $ c and $ ^ { 56 } $ fe showed no $ l _ c $ dependence, a significant $ q ^ 2 $ dependence was measured, which is consistent with calculations that included color transparency effects.
|
arxiv:1201.2735
|
the recent discovery of the extremely lensed earendel object at $ z = 6. 2 $ is remarkable in that it is likely a single star or stellar multiple, observed within the first billion years of cosmic history. depending on its mass, which is still uncertain but will soon be more tightly constrained with the james webb space telescope, the earendel star might even be a member of the first generation of stars, the so - called population iii ( pop iii ). by combining results from detailed cosmological simulations of the assembly of the first galaxies, including the enrichment of the pristine gas with heavy chemical elements, with assumptions on key stellar parameters, we quantify the probability that earendel has indeed a pop iii origin. we find that this probability is non - negligible throughout the mass range inferred for earendel, specifically ranging from a few percent at the lower - mass end to near unity for some pop iii initial mass function ( imf ) models towards the high - mass end of the allowed range. for models that extend the metal - enriched imf to $ 500 \, \mathrm { m } _ \odot $, the likelihood of earendel being a pop iii star stays at the few to ten percent level. we discuss the implications of such a discovery for the overall endeavor to probe the hitherto so elusive first stars in the universe.
|
arxiv:2207.02863
|
the consistency of planet formation models suffers from the disconnection between the regime of small and large bodies. this is primarily caused by so - called growth barriers : the direct growth of larger bodies is halted at centimetre - sized objects and particular conditions are required for the formation of larger, gravitationally bound planetesimals. we aim to connect models of dust evolution and planetesimal formation to identify regions of protoplanetary discs that are favourable for the formation of kilometre - sized bodies and the first planetary embryos. we combine semi - analytical models of viscous protoplanetary disc evolution, dust growth and drift including backreaction of the dust particles on the gas, and planetesimal formation via the streaming instability into one numerical code. we investigate how planetesimal formation is affected by the mass of the protoplanetary disc, its initial dust content, and the stickiness of dust aggregates. we find that the dust growth and drift leads to a global redistribution of solids. the pile - up of pebbles in the inner disc provides local conditions where the streaming instability is effective. planetesimals form in an annulus with its inner edge lying between 0. 3 au and 1 au and its width ranging from 0. 3 au to 3 au. the resulting surface density of planetesimals follows a radial profile that is much steeper than the initial disc profile. these results support formation of terrestrial planets in the solar system from a narrow annulus of planetesimals, which reproduces their peculiar mass ratios.
|
arxiv:1607.05734
|
a general method is presented which allows one to determine from the local gauge invariant observables of a quantum field theory the underlying particle and symmetry structures appearing at the lower ( ultraviolet ) end of the spatio - temporal scale. particles which are confined to small scales, i. e., do not appear in the physical spectrum, can be uncovered in this way without taking recourse to gauge fields or indefinite metric spaces. in this way notions such as quark, gluon, colour symmetry and confinement acquire a new and intrinsic meaning which is stable under gauge or duality transformations. the method is illustrated by the example of the schwinger model.
|
arxiv:hep-th/9511002
|
a study on the relation between the smooth structure of a symplectic homotopy k3 surface and its symplectic symmetries is initiated. a measurement of exoticness of a symplectic homotopy k3 surface is introduced, and the influence of an effective action of a k3 group via symplectic symmetries is investigated. it is shown that an effective action by various maximal symplectic k3 groups forces the corresponding homotopy k3 surface to be minimally exotic with respect to our measure. ( however, the standard k3 is the only known example of such minimally exotic homotopy k3 surfaces. ) the possible structure of a finite group of symplectic symmetries of a minimally exotic homotopy k3 surface is determined and future research directions are indicated.
|
arxiv:0709.2448
|
an improved composite - boson theory of quantum hall ferromagnets is formulated both for the monolayer and bilayer systems. in this scheme the field operator describes solely the physical degrees of freedom representing the deviation from the ground state. skyrmions are charged excitations confined to the lowest landau level. by evaluating the excitation energy of one skyrmion in the interlayer - coherent phase it is shown that the bilayer qh state becomes more stable as the interlayer density difference becomes larger.
|
arxiv:cond-mat/9810003
|
we discuss different exotic phases and components of matter from the crust to the core of neutron stars based on theoretical models for equations of state relevant to core collapse supernova simulations and neutron star merger. parameters of the models are constrained from laboratory experiments. it is observed that equations of state involving strangeness degrees of freedom such as hyperons and bose - einstein condensates are compatible with 2m $ _ { solar } $ neutron stars. the role of hyperons is explored on the evolution and stability of the protoneutron star ( pns ) in the context of sn1987a. moment of inertia, mass and radius which are direct probes of neutron star interior are computed and their observational consequences are discussed. we continue our study on the dense matter under strong magnetic fields and its application to magnetoelastic oscillations of neutron stars.
|
arxiv:1709.07260
|
we map the distribution and properties of the milky way ' s interstellar medium as traced by diffuse interstellar bands ( dibs ) detected in near - infrared stellar spectra from the sdss - iii / apogee survey. focusing exclusively on the strongest dib in the h - band, at ~ 1. 527 microns, we present a projected map of the dib absorption field in the galactic plane, using a set of about 60, 000 sightlines that reach up to 15 kpc from the sun and probe up to 30 magnitudes of visual extinction. the strength of this dib is linearly correlated with dust reddening over three orders of magnitude in both dib equivalent width ( w _ dib ) and extinction, with a power law index of 1. 01 + / - 0. 01, a mean relationship of w _ dib / a _ v = 0. 1 angstrom mag ^ - 1, and a dispersion of ~ 0. 05 angstrom mag ^ - 1 at extinctions characteristic of the galactic midplane. these properties establish this dib as a powerful, independent probe of dust extinction over a wide range of a _ v values. the subset of about 14, 000 robustly detected dib features have an exponential w _ dib distribution. we empirically determine the intrinsic rest wavelength of this transition to be lambda _ 0 = 15, 272. 42 angstrom, and then calculate absolute radial velocities of the carrier, which display the kinematical signature of the rotating galactic disk. we probe the dib carrier distribution in three dimensions and show that it can be characterized by an exponential disk model with a scaleheight of about 100 pc and a scalelength of about 5 kpc. finally, we show that the dib distribution also traces large - scale galactic structures, including the central long bar and the warp of the outer disk.
|
arxiv:1406.1195
|
large amounts of deep optical images will be available in the near future, allowing statistically significant studies of low surface brightness structures such as intracluster light ( icl ) in galaxy clusters. the detection of these structures requires efficient algorithms dedicated to this task, where traditional methods struggle. we present our new detection algorithm with wavelets for intracluster light studies ( dawis ), developed and optimised for the detection of low surface brightness sources in images, in particular ( but not limited to ) icl. dawis follows a multiresolution vision based on wavelet representation to detect sources, embedded in an iterative procedure called the synthesis - by - analysis approach to restore the complete unmasked light distribution of these sources with very good quality. the algorithm is built so that sources can be classified according to criteria that depend on the analysis goal; in this work we present the case of icl detection and the measurement of icl fractions. we test the efficiency of dawis on 270 mock images of galaxy clusters with various icl profiles and compare its efficiency to more traditional icl detection methods such as the surface brightness threshold method. we also run dawis on a real galaxy cluster image, and compare the output to results obtained with previous multiscale analysis algorithms. we find in simulations that on average dawis is able to disentangle galaxy light from icl more efficiently, and to detect a greater quantity of icl flux due to the way it handles sky background noise. we also show that the icl fraction, a metric used on a regular basis to characterise icl, is subject to several measurement biases on both galaxy and icl fluxes. in the real galaxy cluster image, dawis detects a faint and extended source with an absolute magnitude two orders of magnitude brighter than previous multiscale methods.
|
arxiv:2101.03835
|
we present results of mid - infrared spectroscopic mapping observations of six star - forming regions in the small magellanic cloud from the spitzer spectroscopic survey of the smc ( s4mc ). we detect the mid - ir emission from polycyclic aromatic hydrocarbons ( pahs ) in all of the mapped regions, greatly increasing the range of environments where pahs have been spectroscopically detected in the smc. we investigate the variations of the mid - ir bands in each region and compare our results to studies of the pah bands in the sings sample and in a sample of low - metallicity starburst galaxies. pah emission in the smc is characterized by low ratios of the 6 - 9 micron features relative to the 11. 3 micron feature and weak 8. 6 and 17. 0 micron features. interpreting these band ratios in the light of laboratory and theoretical studies, we find that pahs in the smc tend to be smaller and less ionized than those in higher metallicity galaxies. based on studies of pah destruction, we argue that a size distribution shifted towards smaller pahs cannot be the result of processing in the interstellar medium, but instead reflects differences in the formation of pahs at low metallicity. finally, we discuss the implications of our observations for our understanding of the pah life - cycle in low - metallicity galaxies - - - namely that the observed deficit of pahs may be a consequence of pahs forming with smaller average sizes and therefore being more susceptible to destruction under typical interstellar medium conditions.
|
arxiv:1109.0999
|
recently zagier proved a remarkable q - series identity. we show that this identity can also be proved by modifying franklin's classical proof of euler's pentagonal number theorem.
|
arxiv:math/0009036
|
we propose here a static and axisymmetric braneworld in six dimensions as a string - like model extension. for a subtle warp function, this scenario provides near - brane corrections. by varying the bulk cosmological constant, we obtain a source which passes through different phases. the solution is defined both in the interior and the exterior of the string and satisfies the weak energy condition. a smooth gravitational massless mode is localized on the brane, whose core is displaced from the origin. in contrast to the thin string model, the massive solutions have high amplitude near the brane. by means of an analogue quantum potential analysis, we show that s - wave gravitational kaluza - klein modes are permissible as resonant states.
|
arxiv:1410.3164
|
predicting the future behavior of moving agents is essential for real world applications. it is challenging as the intent of the agent and the corresponding behavior is unknown and intrinsically multimodal. our key insight is that for prediction within a moderate time horizon, the future modes can be effectively captured by a set of target states. this leads to our target - driven trajectory prediction ( tnt ) framework. tnt has three stages which are trained end - to - end. it first predicts an agent ' s potential target states $ t $ steps into the future, by encoding its interactions with the environment and the other agents. tnt then generates trajectory state sequences conditioned on targets. a final stage estimates trajectory likelihoods and a final compact set of trajectory predictions is selected. this is in contrast to previous work which models agent intents as latent variables, and relies on test - time sampling to generate diverse trajectories. we benchmark tnt on trajectory prediction of vehicles and pedestrians, where we outperform state - of - the - art on argoverse forecasting, interaction, stanford drone and an in - house pedestrian - at - intersection dataset.
|
arxiv:2008.08294
|
recent advances in semi - supervised learning have shown tremendous potential in overcoming a major barrier to the success of modern machine learning algorithms : access to vast amounts of human - labeled training data. previous algorithms based on consistency regularization can harness the abundance of unlabeled data to produce impressive results on a number of semi - supervised benchmarks, approaching the performance of strong supervised baselines using only a fraction of the available labeled data. in this work, we challenge the long - standing success of consistency regularization by introducing self - supervised regularization as the basis for combining semantic feature representations from unlabeled data. we perform extensive comparative experiments to demonstrate the effectiveness of self - supervised regularization for supervised and semi - supervised image classification on svhn, cifar - 10, and cifar - 100 benchmark datasets. we present two main results : ( 1 ) models augmented with self - supervised regularization significantly improve upon traditional supervised classifiers without the need for unlabeled data ; ( 2 ) together with unlabeled data, our models yield semi - supervised performance competitive with, and in many cases exceeding, prior state - of - the - art consistency baselines. lastly, our models have the practical utility of being efficiently trained end - to - end and require no additional hyper - parameters to tune for optimal performance beyond the standard set for training neural networks. reference code and data are available at https://github.com/vuptran/sesemi
|
arxiv:1906.10343
|
as deep neural networks ( dnns ) continue to drive advancements in artificial intelligence, the design of hardware accelerators faces growing concerns over embodied carbon footprint due to complex fabrication processes. 3d integration improves performance but introduces sustainability challenges, making carbon - aware optimization essential. in this work, we propose a carbon - efficient design methodology for 3d dnn accelerators, leveraging approximate computing and genetic algorithm - based design space exploration to optimize carbon delay product ( cdp ). by integrating area - efficient approximate multipliers into multiply - accumulate ( mac ) units, our approach effectively reduces silicon area and fabrication overhead while maintaining high computational accuracy. experimental evaluations across three technology nodes ( 45nm, 14nm, and 7nm ) show that our method reduces embodied carbon by up to 30 % with negligible accuracy drop.
|
arxiv:2504.09851
|
the standard way to teach models is by feeding them lots of data. however, this approach often teaches models incorrect ideas because they pick up on misleading signals in the data. to prevent such misconceptions, we must necessarily provide additional information beyond the training data. prior methods incorporate additional instance - level supervision, such as labels for misleading features or additional labels for debiased data. however, such strategies require a large amount of labeler effort. we hypothesize that people are good at providing textual feedback at the concept level, a capability that existing teaching frameworks do not leverage. we propose clarify, a novel interface and method for interactively correcting model misconceptions. through clarify, users need only provide a short text description of a model ' s consistent failure patterns. then, in an entirely automated way, we use such descriptions to improve the training process. clarify is the first end - to - end system for user model correction. our user studies show that non - expert users can successfully describe model misconceptions via clarify, leading to increased worst - case performance in two datasets. we additionally conduct a case study on a large - scale image dataset, imagenet, using clarify to find and rectify 31 novel hard subpopulations.
|
arxiv:2402.03715
|
with increasing energy prices, low income households are known to forego or minimize the use of electricity to save on energy costs. if a household is on a prepaid electricity program, it can be automatically and immediately disconnected from service if there is no balance in its prepaid account. such households need to actively ration the amount of energy they use by deciding which appliances to use and for how long. we present a tool that helps households extend the availability of their critical appliances by limiting the use of discretionary ones, and prevent disconnections. the proposed method is based on a linear optimization problem that only uses average power demand as an input and can be solved to optimality using a simple greedy approach. we compare the model with two mixed - integer linear programming models that require more detailed demand forecasts and optimization solvers for implementation. in a numerical case study based on real household data, we assess the performance of the different models under different accuracy and granularity of demand forecasts. our results show that our proposed linear model is much simpler to implement, while providing similar performance under realistic circumstances.
|
arxiv:2408.14703
|
convolutional neural networks ( cnn ) are the dominant deep neural network ( dnn ) architecture for computer vision. recently, transformer and multi - layer perceptron ( mlp ) - based models, such as vision transformer and mlp - mixer, started to lead new trends as they showed promising results in the imagenet classification task. in this paper, we conduct empirical studies on these dnn structures and try to understand their respective pros and cons. to ensure a fair comparison, we first develop a unified framework called spach which adopts separate modules for spatial and channel processing. our experiments under the spach framework reveal that all structures can achieve competitive performance at a moderate scale. however, they demonstrate distinctive behaviors when the network size scales up. based on our findings, we propose two hybrid models using convolution and transformer modules. the resulting hybrid - ms - s + model achieves 83.9% top - 1 accuracy with 63m parameters and 12.3g flops. it is already on par with the sota models with sophisticated designs. the code and models are publicly available at https://github.com/microsoft/spach.
|
arxiv:2108.13002
|
quantum channels are quintessential to quantum information, being used in all protocols, and describing how systems evolve in space and time. as such, they play a key role in the manipulation of quantum resources, and they are often resources themselves, called dynamical resources. this forces us to go beyond standard resource theories of quantum states. here we provide a rigorous foundation for dynamical resource theories, where the resources into play are quantum channels, explaining how to manipulate dynamical resources with free superchannels. in particular, when the set of free superchannels is convex, we present a novel construction of an infinite and complete family of convex resource monotones, giving necessary and sufficient conditions for convertibility under free superchannels. after showing that the conversion problem in convex dynamical resource theories can be solved with conic linear programming, we define various resource - theoretic protocols for dynamical resources. these results serve as the framework for the study of concrete examples of theories of dynamical resources, such as dynamical entanglement theory.
|
arxiv:2101.01552
|
bacterial suspensions exhibit a wide range of collective phenomena arising from interactions between individual cells. here we show that serratia marcescens cells near an air - liquid interface spontaneously aggregate into dynamic clusters through surface - mediated hydrodynamic interactions. these long - lived clusters translate randomly and rotate in the counter - clockwise direction ; they continuously evolve, merge with others and split into smaller ones. measurements indicate that long - ranged hydrodynamic interactions have a strong influence on cluster properties. bacterial clusters change material and fluid transport near the interface and hence may have environmental and biological consequences.
|
arxiv:1504.04089
|
motivated by mobile edge computing and wireless data centers, we study a wireless distributed computing framework where the distributed nodes exchange information over a wireless interference network. our framework follows the structure of mapreduce. this framework consists of map, shuffle, and reduce phases, where map and reduce are computation phases and shuffle is a data transmission phase. in our setting, we assume that the transmission is operated over a wireless interference network. we demonstrate that, by duplicating the computation work at a cluster of distributed nodes in the map phase, one can reduce the amount of transmission load required for the shuffle phase. in this work, we characterize the fundamental tradeoff between computation load and communication load, under the assumption of one - shot linear schemes. the proposed scheme is based on side information cancellation and zero - forcing, and we prove that it is optimal in terms of computation - communication tradeoff. the proposed scheme outperforms the naive tdma scheme with single node transmission at a time, as well as the coded tdma scheme that allows coding across data, in terms of the computation - communication tradeoff.
|
arxiv:1802.00894
|
the natural impedance, or dynamic relationship between force and motion, of a human operator can determine the stability of exoskeletons that use interaction - torque feedback to amplify human strength. while human impedance is typically modelled as a linear system, our experiments on a single - joint exoskeleton testbed involving 10 human subjects show evidence of nonlinear behavior : a low - frequency asymptotic phase for the dynamic stiffness of the human that is different than the expected zero, and an unexpectedly consistent damping ratio as the stiffness and inertia vary. to explain these observations, this paper considers a new frequency - domain model of the human joint dynamics featuring a complex - valued stiffness comprising a real stiffness term and a hysteretic damping term. using a statistical f - test we show that the hysteretic damping term is not only significant but is even more significant than the linear damping term. further analysis reveals a linear trend linking hysteretic damping and the real part of the stiffness, which allows us to simplify the complex stiffness model down to a 1 - parameter system. then, we introduce and demonstrate a customizable fractional - order controller that exploits this hysteretic damping behavior to improve strength amplification bandwidth while maintaining stability, and explore a tuning approach which ensures that this stability property is robust to muscle co - contraction for each individual.
|
arxiv:2009.12446
|
many promising nanophotonics endeavours hinge upon the unique plasmonic properties of nanometallic structures with narrow non - metallic gaps, which support super - concentrated bonding modes that singularly redshift with decreasing separations. in this letter, we present a descriptive physical picture, complemented by elementary asymptotic formulae, of a nonlocal mechanism for plasmon - redshift saturation at subnanometric gap widths. thus, by considering the electron - charge and field distributions in the close vicinity of the metal - vacuum interface, we show that nonlocality is asymptotically manifested as an effective potential discontinuity. for bonding modes in the near - contact limit, the latter discontinuity is shown to be effectively equivalent to a widening of the gap. as a consequence, the resonance - frequency near - contact asymptotics are a renormalisation of the corresponding local ones. specifically, the renormalisation furnishes an asymptotic plasmon - frequency lower bound that scales with the $ 1 / 4 $ - power of the fermi wavelength. we demonstrate these remarkable features in the prototypical cases of nanowire and nanosphere dimers, showing agreement between our elementary expressions and previously reported numerical computations.
|
arxiv:1511.04895
|
we study the allen - cahn equation with a cubic - quintic nonlinear term and a $ q $ - trace - class stochastic forcing in two spatial dimensions. this stochastic partial differential equation ( spde ) is used as a test case to understand how numerical continuation methods can be carried over to the spde setting. first, we compute the deterministic bifurcation diagram for the pde, i.e. without stochastic forcing. in this case, two locally asymptotically stable steady state solution branches exist upon variation of the linear damping term. then we consider the lyapunov operator equation for the locally linearized system around steady states for the spde. we discretize the full spde using a combination of finite differences and spectral noise approximation, obtaining a finite - dimensional system of stochastic ordinary differential equations ( sodes ). the large system of sodes is used to approximate the lyapunov operator equation via covariance matrices. the covariance matrices are numerically continued along the two bifurcation branches. we show that we can quantify the stochastic fluctuations along the branches. we also demonstrate scaling laws near branch and fold bifurcation points. furthermore, we perform computational tests to show that, even with a sub - optimal computational setup, we can quantify the subexponential - timescale fluctuations near the deterministic steady states upon stochastic forcing on a standard desktop computer setup. hence, the proposed method for numerical continuation of spdes has the potential to allow for rapid parametric uncertainty quantification of spatio - temporal stochastic systems.
|
arxiv:1408.4000
|
the current global health emergency triggered by the pandemic covid - 19 is one of the greatest challenges mankind faces in this generation. computational simulations have played an important role in predicting the development of the current pandemic. such simulations enable early indications of the future projections of the pandemic and are useful for estimating the efficiency of control actions in the battle against the sars - cov - 2 virus. the seir model is a well - known method used in computational simulations of infectious viral diseases and it has been widely used to model other epidemics such as ebola, sars, mers, and influenza a. this paper presents a modified seirs model with additional exit conditions in the form of death rates and resusceptibility, where we can tune the exit conditions in the model to extend prediction of the current projections of the pandemic into three possible outcomes : death, recovery, and recovery with a possibility of resusceptibility. the model also considers specific information such as the ageing factor of the population, time delay in the development of the pandemic due to control action measures, as well as resusceptibility with temporal immune response. owing to huge variations in clinical symptoms exhibited by covid - 19, the proposed model aims to better reflect the current scenario and reported case data, so that the spread of the disease and the efficiency of the control actions taken can be better understood. the model is evaluated using two case studies for verification and prediction, based on real - world data from south korea and northern ireland, respectively.
|
arxiv:2004.01974
|
we present the formalism of q - stars with local or global u ( 1 ) symmetry. the equations we formulate are solved numerically and provide the main features of the soliton star. we study its behavior when the symmetry is local in contrast to the global case. a general result is that the soliton remains stable and does not decay into free particles and the electrostatic repulsion preserves it from gravitational collapse. we also investigate the case of a q - star with non - minimal energy - momentum tensor and find that the soliton is stable even in some cases of collapse when the coupling to gravity is absent.
|
arxiv:hep-th/0205197
|
a key trait of stochastic optimizers is that multiple runs of the same optimizer in attempting to solve the same problem can produce different results. as a result, their performance is evaluated over several repeats, or runs, on the problem. however, the accuracy of the estimated performance metrics depends on the number of runs and should be studied using statistical tools. we present a statistical analysis of the common metrics, and develop guidelines for experiment design to measure the optimizer ' s performance using these metrics to a high level of confidence and accuracy. to this end, we first discuss the confidence interval of the metrics and how they are related to the number of runs of an experiment. we then derive a lower bound on the number of repeats in order to guarantee achieving a given accuracy in the metrics. using this bound, we propose an algorithm to adaptively adjust the number of repeats needed to ensure the accuracy of the evaluated metric. our simulation results demonstrate the utility of our analysis and how it allows us to conduct reliable benchmarking as well as hyperparameter tuning and prevent us from drawing premature conclusions regarding the performance of stochastic optimizers.
|
arxiv:2503.16589
|
the exact analytical formulas for the transverse momentum distributions of the bose - einstein, fermi - dirac and maxwell - boltzmann statistics of particles with nonzero mass in the framework of the tsallis normalized and tsallis unnormalized ( also known as tsallis - 1 and tsallis - 2 ) statistics have been consistently derived. the final exact results were expressed in terms of the series expansions in the integral representation. the zeroth term approximation to both quantum and classical statistics of particles has been introduced. we have revealed that the phenomenological classical tsallis distribution ( widely used in high energy physics ) is equal to the distribution of the tsallis unnormalized statistics in the zeroth term approximation, but the phenomenological quantum tsallis distributions ( introduced by definition on the basis of the generalized entropy of the ideal gas ) do not correspond to the distributions of the tsallis statistics. we have found that in the ranges of the entropic parameter relevant to the processes of high - energy physics ( $ q < 1 $ for tsallis - 1 and $ q > 1 $ for tsallis - 2 ) the tsallis statistics is divergent. therefore, to obtain physical results, we have regularized the tsallis statistics by introducing an upper cut - off in the series expansion. the exact numerical results for the bose - einstein, fermi - dirac and maxwell - boltzmann statistics of particles in the tsallis normalized and unnormalized statistics have been obtained. we observed that the exact results of the tsallis statistics strongly enhanced the production of high - $ p _ { t } $ hadrons in comparison with the usual phenomenological tsallis distribution function at the same values of $ q $. the $ q $ - duality of the tsallis normalized and unnormalized statistics for the massive particles was studied.
|
arxiv:1903.06118
|
vortices are considered in relativistic maxwell - higgs systems in interaction with a neutral scalar field. the gauge field interacts with the neutral field via the presence of generalized permeability, and the charged and neutral scalar fields interact in a way dictated by the presence of first order differential equations that solve the equations of motion. the neutral field may be seen as the source field of the vortex, and we study some possibilities, which modify the standard maxwell - higgs solution and include internal structure to the vortex.
|
arxiv:1803.06242
|
we study kernel least - squares estimation under a norm constraint. this form of regularisation is known as ivanov regularisation and it provides better control of the norm of the estimator than the well - established tikhonov regularisation. ivanov regularisation can be studied under minimal assumptions. in particular, we assume only that the rkhs is separable with a bounded and measurable kernel. we provide rates of convergence for the expected squared $ l^2 $ error of our estimator under the weak assumption that the variance of the response variables is bounded and the unknown regression function lies in an interpolation space between $ l^2 $ and the rkhs. we then obtain faster rates of convergence when the regression function is bounded by clipping the estimator. in fact, we attain the optimal rate of convergence. furthermore, we provide a high - probability bound under the stronger assumption that the response variables have subgaussian errors and that the regression function lies in an interpolation space between $ l^\infty $ and the rkhs. finally, we derive adaptive results for the settings in which the regression function is bounded.
|
arxiv:1706.03678
|
humanoid locomotion is a challenging task due to its inherent complexity and high - dimensional dynamics, as well as the need to adapt to diverse and unpredictable environments. in this work, we introduce a novel learning framework for effectively training a humanoid locomotion policy that imitates the behavior of a model - based controller while extending its capabilities to handle more complex locomotion tasks, such as more challenging terrain and higher velocity commands. our framework consists of three key components : pre - training through imitation of the model - based controller, fine - tuning via reinforcement learning, and model - assumption - based regularization ( mar ) during fine - tuning. in particular, mar aligns the policy with actions from the model - based controller only in states where the model assumption holds to prevent catastrophic forgetting. we evaluate the proposed framework through comprehensive simulation tests and hardware experiments on a full - size humanoid robot, digit, demonstrating a forward speed of 1.5 m/s and robust locomotion across diverse terrains, including slippery, sloped, uneven, and sandy terrains.
|
arxiv:2504.09833
|
a fundamental challenge of recommendation systems ( rs ) is understanding the causal dynamics underlying users' decision making. most existing literature addresses this problem by using causal structures inferred from domain knowledge. however, there are numerous phenomena for which domain knowledge is insufficient, and the causal mechanisms must be learnt from the feedback data. discovering the causal mechanism from rs feedback data is both novel and challenging, since rs itself is a source of intervention that can influence both the users' exposure and their willingness to interact. also for this reason, most existing solutions become inappropriate since they require data collected free from any rs. in this paper, we first formulate the underlying causal mechanism as a causal structural model and describe a general causal structure learning framework grounded in the real - world working mechanism of rs. the essence of our approach is to acknowledge the unknown nature of rs intervention. we then derive the learning objective from our framework and propose an augmented lagrangian solver for efficient optimization. we conduct both simulation and real - world experiments to demonstrate how our approach compares favorably to existing solutions, together with the empirical analysis from sensitivity and ablation studies.
|
arxiv:2210.10256
|
we study the effect of including flavor changing neutral currents ( fcnc ) in the analysis of the neutrino signal of a supernova burst. when we include the effect of the fcnc which are beyond the standard model ( sm ) in the study of the msw resonant conversion, we obtain dramatic changes in the $ \delta m^2 $ - $ \sin^2 ( 2\theta ) $ probability contours for neutrino detection.
|
arxiv:hep-ph/9711424
|
exploratory search is an open - ended information retrieval process that aims at discovering knowledge about a topic or domain rather than searching for a specific answer or piece of information. conversational interfaces are particularly suitable for supporting exploratory search, allowing users to refine queries and examine search results through interactive dialogues. in addition to conversational search interfaces, knowledge graphs are also useful in supporting information exploration due to their rich semantic representation of data items. in this study, we demonstrate the synergistic effects of combining knowledge graphs and conversational interfaces for exploratory search, bridging the gap between structured and unstructured information retrieval. to this end, we propose a knowledge - driven dialogue system for exploring news articles by asking natural language questions and using the graph structure to navigate between related topics. based on a user study with 54 participants, we empirically evaluate the effectiveness of the graph - based exploratory search and discuss design implications for developing such systems.
|
arxiv:2310.05150
|
in this paper, we investigate the deflection of a charged particle moving in the equatorial plane of kerr - newman spacetime, focusing on the weak field limit. to this end, we use the jacobi geometry, which can be described in three equivalent forms, namely the randers - finsler metric, the zermelo navigation problem, and the $ ( n + 1 ) $ - dimensional stationary spacetime picture. based on the randers data and the gauss - bonnet theorem, we utilize the osculating riemannian manifold method and the generalized jacobi metric method to study the deflection angle, respectively. in the $ ( n + 1 ) $ - dimensional spacetime picture, the motion of the charged particle follows a null geodesic, and thus we use the standard geodesic method to calculate the deflection angle. all three methods lead to the same second - order deflection angle, which is obtained for the first time. the result shows that the black hole spin $ a $ affects the deflection of charged particles both gravitationally and magnetically at the leading order ( order $ \mathcal{o} ( [ m ]^2 / b^2 ) $ ). when $ qq / e < 2m $, $ a $ will decrease ( or increase ) the deflection of a prograde ( or retrograde ) charged signal. if $ qq / e > 2m $, the opposite happens, and the ray is divergently deflected by the lens. we also show that the effect of the magnetic charge of the dyonic kerr - newman black hole on the deflection angle is independent of the particle's charge.
|
arxiv:2108.05273
|
as a promising 3d generation technique, multiview diffusion ( mvd ) has received a lot of attention due to its advantages in terms of generalizability, quality, and efficiency. by finetuning pretrained large image diffusion models with 3d data, the mvd methods first generate multiple views of a 3d object based on an image or text prompt and then reconstruct 3d shapes with multiview 3d reconstruction. however, the sparse views and inconsistent details in the generated images make 3d reconstruction challenging. we present mvd $ ^ 2 $, an efficient 3d reconstruction method for multiview diffusion ( mvd ) images. mvd $ ^ 2 $ aggregates image features into a 3d feature volume by projection and convolution and then decodes volumetric features into a 3d mesh. we train mvd $ ^ 2 $ with 3d shape collections and mvd images prompted by rendered views of 3d shapes. to address the discrepancy between the generated multiview images and ground - truth views of the 3d shapes, we design a simple - yet - efficient view - dependent training scheme. mvd $ ^ 2 $ improves the 3d generation quality of mvd and is fast and robust to various mvd methods. after training, it can efficiently decode 3d meshes from multiview images within one second. we train mvd $ ^ 2 $ with zero - 123 + + and objectverse - lvis 3d dataset and demonstrate its superior performance in generating 3d models from multiview images generated by different mvd methods, using both synthetic and real images as prompts.
|
arxiv:2402.14253
|
we investigate the possibilities of observing the decay mode for $ ^{124} $ xe in which two electrons are captured, two neutrinos are emitted, and the final daughter nucleus is in its ground state, using dark matter experiments with liquid xenon. the first upper limit of the decay half - life is calculated to be 1.66 $ \times 10^{21} $ years at a 90% confidence level ( c.l. ), obtained with the published background data from the xenon100 experiment. employing a known background model from the large underground xenon ( lux ) experiment, we predict that the detection of double - electron capture of $ ^{124} $ xe to the ground state of $ ^{124} $ te with lux will yield approximately 115 events, assuming a half - life of 2.9 $ \times 10^{21} $ years. we conclude that measuring $ ^{124} $ xe 2 $ \nu $ double - electron capture to the ground state of $ ^{124} $ te can be performed more precisely with the proposed lux - zeplin ( lz ) experiment.
|
arxiv:1310.1946
|
there exist many resource allocation problems in the field of wireless communications which can be formulated as generalized assignment problems ( gap ). the gap is a generic form of the linear sum assignment problem ( lsap ) and is more challenging to solve owing to the presence of both equality and inequality constraints. we propose a novel deep unsupervised learning ( dul ) approach to solve the gap in a time - efficient manner. more specifically, we propose a new approach that facilitates training a deep neural network ( dnn ) using a customized loss function. this customized loss function comprises the objective function and penalty terms corresponding to both equality and inequality constraints. furthermore, we propose to employ a softmax activation function at the output of the dnn along with tensor splitting, which simplifies the customized loss function and guarantees meeting the equality constraint. as a case study, we consider a typical user - association problem in a wireless network, formulate it as a gap, and consequently solve it using our proposed dul approach. numerical results demonstrate that the proposed dul approach provides near - optimal results with significantly lower time - complexity.
|
arxiv:2103.14548
|
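The penalty-plus-softmax construction described above can be sketched as follows (a minimal NumPy illustration of the loss, not the paper's actual DNN or training loop; variable names are hypothetical): a row-wise softmax makes each item's assignment probabilities sum to one, so the equality constraint holds by construction, and only the inequality (capacity) constraints need penalty terms.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gap_penalty_loss(logits, profit, weight, capacity, lam=10.0):
    """Penalty-style loss for a relaxed GAP instance.

    logits: (n_items, n_bins) raw network outputs; the row-wise softmax
    guarantees each item is (softly) assigned to exactly one bin.
    """
    x = softmax(logits, axis=1)
    objective = -(profit * x).sum()                   # maximize total profit
    load = (weight[:, None] * x).sum(axis=0)          # expected load per bin
    violation = np.maximum(load - capacity, 0.0)      # inequality slack only
    return objective + lam * (violation ** 2).sum()
```

Minimizing this loss over the logits (by gradient descent in a real DNN setting) trades off profit against capacity violations; with loose capacities the penalty term vanishes and the loss reduces to the negated profit.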
knowledge of the global magnetic field distribution on the Sun's surface and its evolution is crucial for modeling the coronal magnetic field, understanding solar wind dynamics, computing the heliospheric open flux distribution, and predicting solar cycle strength. as the far side of the Sun cannot be observed directly and high-latitude observations always suffer from projection effects, we often rely on surface flux transport (SFT) simulations to model the long-term global magnetic field distribution. meridional circulation, the large-scale north-south component of the surface flow profile, is one of the key components of the SFT simulation that requires further constraints near high latitudes. predicting the photospheric magnetic field distribution requires knowledge of the flow profile in the future, which in turn demands reconstruction of that same flow at the current time so that it can be estimated at a later time. by performing observing system simulation experiments, we demonstrate how the ensemble Kalman filter technique, when used with an SFT model, can be utilized to make "posterior" estimates of flow profiles that can drive the model forward to forecast the photospheric magnetic field distribution.
|
arxiv:2409.15233
|
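The ensemble Kalman filter analysis step mentioned above can be sketched generically (this is the textbook stochastic EnKF update with a linear observation operator, not the paper's SFT-specific implementation): each ensemble member is nudged toward perturbed observations through a gain built from the sample covariance.

```python
import numpy as np

def enkf_analysis(ensemble, obs, H, obs_var, rng):
    """Stochastic EnKF update.

    ensemble: (n_state, n_members) forecast ensemble
    obs:      (n_obs,) observation vector
    H:        (n_obs, n_state) linear observation operator
    """
    n_state, n_mem = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (n_mem - 1)                        # sample covariance
    R = obs_var * np.eye(len(obs))                   # observation error cov.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    # perturbed observations keep the posterior ensemble spread consistent
    obs_pert = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (len(obs), n_mem))
    return ensemble + K @ (obs_pert - H @ ensemble)
```

In a data assimilation cycle this analysis step alternates with forecast steps in which each member is propagated by the dynamical model (here, the SFT model).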
split learning (SL) is an emergent distributed learning framework that can mitigate the computation and wireless communication overhead of federated learning. it splits a machine learning model into a device-side model and a server-side model at a cut layer. devices train only their allocated model and transmit the activations of the cut layer to the server. however, SL can lead to data leakage, as the server can reconstruct the input data using the correlation between the input and the intermediate activations. although allocating more layers to the device-side model can reduce the possibility of data leakage, this leads to more energy consumption for resource-constrained devices and more training time for the server. moreover, non-IID datasets across devices reduce the convergence rate, leading to increased training time. in this paper, a new personalized SL framework is proposed. for this framework, a novel approach for choosing the cut layer is developed that optimizes the tradeoff between the energy consumption for computation and wireless transmission, training time, and data privacy. in the considered framework, each device personalizes its device-side model to mitigate non-IID datasets while sharing the same server-side model for generalization. to balance the energy consumption for computation and wireless transmission, training time, and data privacy, a multiplayer bargaining problem is formulated to find the optimal cut layer between the devices and the server. to solve the problem, the Kalai-Smorodinsky bargaining solution (KSBS) is obtained using the bisection method with a feasibility test. simulation results show that the proposed personalized SL framework with the cut layer from the KSBS achieves the optimal sum utility by balancing energy consumption, training time, and data privacy, and that it is also robust to non-IID datasets.
|
arxiv:2212.06107
|
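The Kalai-Smorodinsky idea behind the cut-layer selection can be illustrated with a toy two-player version (a generic sketch, not the paper's multiplayer formulation or its utilities): the KSBS is the Pareto point where the players' normalized utility gains are equal, and with a one-dimensional decision variable (e.g. the cut-layer index relaxed to a continuum) it can be found by bisection.

```python
def ksbs_bisection(u1, u2, lo, hi, tol=1e-6):
    """Find x in [lo, hi] where the two players' normalized utilities are equal.

    Assumes u1 is increasing in x and u2 is decreasing in x (a classic
    tradeoff), so g(x) = u1_norm(x) - u2_norm(x) is increasing with one root.
    """
    u1_lo, u1_hi = u1(lo), u1(hi)
    u2_lo, u2_hi = u2(lo), u2(hi)
    def g(x):
        n1 = (u1(x) - u1_lo) / (u1_hi - u1_lo)   # player 1's relative gain
        n2 = (u2(x) - u2_hi) / (u2_lo - u2_hi)   # player 2's relative gain
        return n1 - n2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For symmetric linear utilities the solution lands at the midpoint of the tradeoff, which matches the intuition that the KSBS equalizes each player's fraction of its ideal gain.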
objectives: an increasing number of CAD/CAM (computer-aided design/computer-aided manufacturing) hybrid materials have been introduced to the dental market in recent years. in addition, CAD/CAM hybrid materials for additive manufacturing (AM) are becoming more attractive in digital dentistry. studies on material microstructures using micro-computed tomography ($\mu$-CT) combined with scanning electron microscopy (SEM) have so far been available only to a limited extent. methods: one CAD/CAM three-dimensional (3D) printable hybrid material (VarseoSmile Crown Plus) and two CAD/CAM millable hybrid materials (VITA ENAMIC; VOCO Grandio), as well as one direct composite material (Ceram.x duo), were included in the present study. cylindrical samples with a diameter of 2 mm were produced from each material and investigated by means of synchrotron radiation $\mu$-CT at a voxel size of 0.65 $\mu$m. different samples of the same materials, obtained by cutting and polishing, were investigated by SEM. results: the 3D-printed hybrid material showed some agglomerations and a more irregular distribution of fillers, as well as a visible layered macrostructure and a few spherical pores due to the printing process. the CAD/CAM millable hybrid materials revealed a more homogeneous distribution of ceramic particles. the direct composite material showed multiple air bubbles and microstructural irregularities stemming from manual processing. significance: the $\mu$-CT and SEM analysis revealed different microstructures even though the materials belong to the same class. it could be shown that $\mu$-CT and SEM imaging are valuable tools for understanding the microstructure and related mechanical properties of materials.
|
arxiv:2308.07341
|
coronary angiography (CAG) is the gold-standard imaging modality for evaluating coronary artery disease, but its interpretation and subsequent treatment planning rely heavily on expert cardiologists. to enable AI-based decision support, we introduce a two-stage, physician-curated pipeline and a bilingual (Japanese/English) CAG image-report dataset. first, we sample 14,686 frames from 539 exams and annotate them for key-frame detection and left/right laterality; a ConvNeXt-Base CNN trained on this data achieves 0.96 F1 on laterality classification, even on low-contrast frames. second, we apply the CNN to 243 independent exams, extract 1,114 key frames, and pair each with its pre-procedure report and expert-validated diagnostic and treatment summary, yielding a parallel corpus. we then fine-tune three open-source VLMs (PaliGemma 2, Gemma 3, and a ConceptCLIP-enhanced Gemma 3) via LoRA and evaluate them using VLScore and cardiologist review. although PaliGemma 2 with LoRA attains the highest VLScore, Gemma 3 with LoRA achieves the top clinician rating (mean 7.20/10); we designate this best-performing model as CAG-VLM. these results demonstrate that specialized, fine-tuned VLMs can effectively assist cardiologists in generating clinical reports and treatment recommendations from CAG images.
|
arxiv:2505.04964
|
we study the optimal constant in a Sobolev inequality for BV functions with zero mean value that vanish outside a bounded open set. we are interested in finding the best possible embedding constant in terms of the measure of the domain alone. we set up an optimal shape problem and completely characterize the behavior of optimal domains.
|
arxiv:1305.6271
|
knowledge graph completion (KGC) attempts to predict missing facts in a knowledge graph (KG). recently, there has been an increased focus on designing KGC methods that excel in the inductive setting, where a portion or all of the entities and relations seen at inference are unobserved during training. numerous benchmark datasets have been proposed for inductive KGC, all of which are subsets of existing KGs used for transductive KGC. however, we find that the current procedure for constructing inductive KGC datasets inadvertently creates a shortcut that can be exploited even while disregarding the relational information. specifically, we observe that the personalized PageRank (PPR) score can achieve strong or near-SOTA performance on most inductive datasets. in this paper, we study the root cause of this problem. using these insights, we propose an alternative strategy for constructing inductive KGC datasets that helps mitigate the PPR shortcut. we then benchmark multiple popular methods on the newly constructed datasets and analyze their performance. the new benchmark datasets help promote a better understanding of the capabilities and challenges of inductive KGC by removing shortcuts that obfuscate performance.
|
arxiv:2406.11898
|
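The personalized PageRank baseline at the heart of the shortcut can be sketched with a plain power iteration (a generic textbook version, not the paper's exact scoring setup): candidate tails are ranked by their PPR score seeded at the query head, ignoring relation types entirely.

```python
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.85, iters=100):
    """PPR via power iteration on a row-normalized adjacency matrix."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # row-normalize, leaving dangling nodes (no out-links) as zero rows
    P = np.divide(adj, out_deg, out=np.zeros_like(adj, dtype=float),
                  where=out_deg > 0)
    e = np.zeros(n)
    e[seed] = 1.0                                  # restart distribution
    r = e.copy()
    for _ in range(iters):
        r = alpha * (P.T @ r) + (1 - alpha) * e    # walk + teleport back to seed
    return r

# Toy graph: node 0 links to 1 and 2; node 1 links to 2.
A = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]], dtype=float)
scores = personalized_pagerank(A, seed=0)
```

Because node 2 receives probability mass from both node 0 and node 1, it outranks node 1; this kind of purely structural proximity is exactly the relation-agnostic signal the abstract says can dominate inductive KGC benchmarks.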
objective: to develop and validate a deep learning model for the identification of out-of-body images in endoscopic videos. background: surgical video analysis facilitates education and research. however, video recordings of endoscopic surgeries can contain privacy-sensitive information, especially if out-of-body scenes are recorded. therefore, identification of out-of-body scenes in endoscopic videos is of major importance for preserving the privacy of patients and operating room staff. methods: a deep learning model was trained and evaluated on an internal dataset of 12 different types of laparoscopic and robotic surgeries. external validation was performed on two independent multicentric test datasets of laparoscopic gastric bypass and cholecystectomy surgeries. all images extracted from the video datasets were annotated as inside or out-of-body. model performance was evaluated against human ground truth annotations by measuring the area under the receiver operating characteristic curve (ROC AUC). results: the internal dataset, consisting of 356,267 images from 48 videos, and the two multicentric test datasets, consisting of 54,385 and 58,349 images from 10 and 20 videos, respectively, were annotated. compared to ground truth annotations, the model identified out-of-body images with 99.97% ROC AUC on the internal test dataset. the mean $\pm$ standard deviation ROC AUC was 99.94 $\pm$ 0.07% on the multicentric gastric bypass dataset and 99.71 $\pm$ 0.40% on the multicentric cholecystectomy dataset. conclusion: the proposed deep learning model can reliably identify out-of-body images in endoscopic videos. the trained model is publicly shared, facilitating privacy preservation in surgical video analysis.
|
arxiv:2301.07053
|
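For reference, the ROC AUC metric reported above can be computed directly from raw scores and binary labels; a dependency-free sketch using the rank-statistic (Mann-Whitney) equivalence, where AUC is the probability that a random positive outranks a random negative:

```python
def roc_auc(labels, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly.

    Ties between a positive and a negative score count as half a win.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(n_pos * n_neg) version is fine for illustration; production code would sort once and use ranks, as library implementations do.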