text (stringlengths 1–3.65k) | source (stringlengths 15–79) |
---|---|
A search for new dielectron mass resonances using data recorded by the CDF II detector and corresponding to an integrated luminosity of 5.7/fb is presented. No significant excess over the expected Standard Model prediction is observed. In this dataset, an event with the highest dielectron mass ever observed (960 GeV/c^2) was recorded. The results are interpreted in the Randall-Sundrum (RS) model. Combined with the 5.4/fb diphoton analysis, the RS-graviton lower mass limit for the coupling $k/\bar{M}_{Pl} = 0.1$ is 1058 GeV/c^2, making it the strongest limit to date.
|
arxiv:1103.4650
|
Large-scale public datasets with high-quality annotations are rarely available for intelligent medical imaging research, due to data privacy concerns and the cost of annotation. In this paper, we release SynFundus-1M, a high-quality synthetic dataset containing over one million fundus images covering \textbf{eleven disease types}. Furthermore, we deliberately assign four readability labels to the key regions of the fundus images. To the best of our knowledge, SynFundus-1M is currently the largest fundus dataset with the most sophisticated annotations. Leveraging over 1.3 million private authentic fundus images from various scenarios, we trained a powerful denoising diffusion probabilistic model, named SynFundus-Generator. The released SynFundus-1M images are generated by SynFundus-Generator under predefined conditions. To demonstrate the value of SynFundus-1M, extensive experiments are designed in terms of the following aspects: 1) Authenticity of the images: we randomly blend the synthetic images with authentic fundus images and find that experienced annotators can hardly distinguish the synthetic images from authentic ones. Moreover, we show that the disease-related vision features (e.g. lesions) are well simulated in the synthetic images. 2) Effectiveness for downstream fine-tuning and pretraining: we demonstrate that retinal disease diagnosis models of either convolutional neural network (CNN) or vision transformer (ViT) architectures can benefit from SynFundus-1M, and compared to the datasets commonly used for pretraining, models trained on SynFundus-1M not only achieve superior performance but also demonstrate faster convergence on various downstream tasks. SynFundus-1M is already publicly available for the open-source community.
|
arxiv:2312.00377
|
We propose the world's first longitudinal GOSNR estimation using a correlation template method at the Rx, without any monitoring devices located in the middle of the link. The proposed method is experimentally demonstrated in a 12-span link with a commercial transceiver.
|
arxiv:2310.06807
|
Surface plasmon resonance of metal nanostructures has broad application prospects in the fields of photocatalysis, optical sensing, biomarkers, and surface-enhanced Raman scattering. This paper reports a graphene-assisted method for preparing large-scale single-crystal Ag(111) nanoparticle arrays based on the ion implantation technique. By surface periodic treatment and annealing of the implanted sample, regularly arranged Ag nanoparticles can be prepared on the sample surface. A new application for graphene is proposed, namely as a perfect barrier layer to prevent metal atoms from evaporating or diffusing. All the Ag NPs show (111) crystal orientation. Besides, the Ag atoms are covered by graphene immediately when they precipitate from the substrate, which can prevent them from being oxidized. On the basis of this structure, as one of the applications of metal SPR, we measured the Raman enhancement effect and found that the G peak of the Raman spectrum of graphene is enhanced by a factor of about 20.
|
arxiv:1912.01814
|
Methods for random-effects meta-analysis require an estimate of the between-study variance, $\tau^2$. The performance of estimators of $\tau^2$ (measured by bias and coverage) affects their usefulness in assessing heterogeneity of study-level effects, and also the performance of related estimators of the overall effect. For the effect measure mean difference (MD), we review five point estimators of $\tau^2$ (the popular methods of DerSimonian-Laird, restricted maximum likelihood, and Mandel and Paule (MP); the less-familiar method of Jackson; and a new method (WT) based on the improved approximation to the distribution of the $Q$ statistic by \cite{kulinskaya2004welch}), five interval estimators for $\tau^2$ (profile likelihood, Q-profile, Biggerstaff and Jackson, Jackson, and the new WT method), six point estimators of the overall effect (the five related to the point estimators of $\tau^2$ and an estimator whose weights use only study-level sample sizes), and eight interval estimators for the overall effect (five based on the point estimators for $\tau^2$, the Hartung-Knapp-Sidik-Jonkman (HKSJ) interval, a modification of HKSJ, and an interval based on the sample-size-weighted estimator). We obtain empirical evidence from extensive simulations and an example.
|
arxiv:1904.01948
|
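As context for the estimators reviewed above, the sketch below shows the widely used DerSimonian-Laird moment estimator of $\tau^2$ for a mean-difference meta-analysis. It is a minimal illustration of the general idea, not the authors' implementation, and the example study data are invented.

```python
import numpy as np

def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimator of the between-study variance tau^2.

    effects   : per-study effect estimates (e.g. mean differences)
    variances : per-study within-study variances
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                           # inverse-variance (fixed-effect) weights
    y_bar = np.sum(w * y) / np.sum(w)     # fixed-effect pooled estimate
    q = np.sum(w * (y - y_bar) ** 2)      # Cochran's Q statistic
    k = len(y)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (k - 1)) / denom)  # truncate negative estimates at zero

# Hypothetical mean differences and their variances from five studies
tau2 = dersimonian_laird_tau2([0.2, 0.5, 0.1, 0.4, 0.3],
                              [0.04, 0.09, 0.05, 0.08, 0.06])
print(f"DL estimate of tau^2: {tau2:.4f}")
```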
A quantum network is a set of nodes connected by channels, through which the nodes communicate photons and classical information. The classical structural complexity of a quantum network may be defined through its physical structure, i.e. the mutual position of nodes and the channels connecting them. We show here that the classical structural complexity of a quantum network does not restrict the structural complexity of entanglement graphs which may be created in the quantum network with local operations and classical communication. We show, in particular, that a 1D quantum network can simulate both simple entanglement graphs, such as lattices and random graphs, and complex small-world graphs.
|
arxiv:1602.06154
|
We present a quantum variational algorithm based on a novel circuit that generates all permutations that can be spanned by one- and two-qubit permutation gates. The construction of the circuits follows from group-theoretical results, most importantly the Bruhat decomposition of the group generated by the \(\mathtt{CX}\) gates. These circuits require a number of qubits that scales logarithmically with the permutation dimension, and are therefore employable in near-term applications. We further augment the circuits with ancilla qubits to enlarge their span, and with these we build ansatze to tackle permutation-based optimization problems such as quadratic assignment problems and graph isomorphisms. The resulting quantum algorithm, \textsc{quper}, is competitive with respect to classical heuristics, and we could simulate its behavior up to a problem with $256$ variables, requiring $20$ qubits.
|
arxiv:2505.05981
|
Both knowledge graphs and user-item interaction graphs are frequently used in recommender systems due to their ability to provide rich information for modeling users and items. However, existing studies have often focused on only one of these sources (either the knowledge graph or the user-item interaction graph), resulting in underutilization of the benefits that can be obtained by integrating both sources of information. In this paper, we propose DEKGCI, a novel double-sided recommendation model. In DEKGCI, we use the high-order collaborative signals from the user-item interaction graph to enrich the user representations on the user side. Additionally, we utilize the high-order structural and semantic information from the knowledge graph to enrich the item representations on the item side. DEKGCI simultaneously learns the user and item representations to effectively capture the joint interactions between users and items. Three real-world datasets are adopted in the experiments to evaluate DEKGCI's performance, and the experimental results demonstrate its high effectiveness compared to seven state-of-the-art baselines in terms of AUC and ACC.
|
arxiv:2306.13837
|
We introduce IgDiff, an antibody variable domain diffusion model based on a general protein backbone diffusion framework which was extended to handle multiple chains. Assessing the designability and novelty of the structures generated with our model, we find that IgDiff produces highly designable antibodies that can contain novel binding regions. The backbone dihedral angles of sampled structures show good agreement with a reference antibody distribution. We verify these designed antibodies experimentally and find that all express with high yield. Finally, we compare our model with a state-of-the-art generative backbone diffusion model on a range of antibody design tasks, such as the design of the complementarity determining regions or the pairing of a light chain to an existing heavy chain, and show improved properties and designability.
|
arxiv:2405.07622
|
We prove the equidistribution of (weighted) periodic orbits of the geodesic flow on noncompact negatively curved manifolds toward equilibrium states in the narrow topology, i.e. in the dual of bounded continuous functions. We deduce an exact asymptotic counting for periodic orbits (weighted or not), which was previously known only for geometrically finite manifolds.
|
arxiv:1907.10898
|
This work presents numerical simulations of meteoroid streams released by comet 21P/Giacobini-Zinner over the period 1850-2030. The initial methodology, based on Vaubaillon et al. (2005), has been updated and modified to account for the evolution of the comet's dust production along its orbit. The peak time, intensity, and duration of the shower were assessed using simulated activity profiles that are calibrated to match observations of historic Draconid outbursts. The characteristics of all the main apparitions of the shower are reproduced, with a peak time accuracy of half an hour and an intensity estimate correct to within a factor of 2 (visual showers) or 3 (radio outbursts). Our model also revealed the existence of a previously unreported strong radio outburst on October 9, 1999 that has since been confirmed by archival radar measurements. The first results of the model, presented in Egal et al. (2018), provided one of the best predictions of the recent 2018 outburst. Three future radio outbursts are predicted in the next decade, in 2019, 2025 and 2029. The strongest activity is expected in 2025, when the Earth encounters the young 2012 trail. Because of the dynamical uncertainties associated with comet 21P's orbital evolution between the 1959 and 1965 apparitions, observations of the 2019 radio outburst would be particularly helpful to improve the confidence of subsequent forecasts.
|
arxiv:1904.12185
|
Strong correlation effects, such as a dramatic increase in the effective mass of the carriers of electricity, recently observed in the low-density electron gas have provided spectacular support for the existence of a sharp metal-insulator transition in dilute two-dimensional electron gases. Here we show that strong correlations, normally expected only for narrow integer-filled bands, can be effectively enhanced even far away from integer filling, due to incipient charge ordering driven by non-local Coulomb interactions. This general mechanism is illustrated by solving an extended Hubbard model using dynamical mean-field theory. Our findings account for the key aspects of the experimental phase diagram, and reconcile the early viewpoints of Wigner and Mott. The interplay of short-range charge order and local correlations should result in a three-peak structure in the spectral function of the electrons, which should be observable in tunneling and optical spectroscopy.
|
arxiv:0809.5037
|
The non-fungible token (NFT), a derivative of the Ethereum blockchain's token standard, is a distinguishable token. These tokens are bound to digital properties that give them a unique identification, which fulfills the aim of making tokens distinguishable. They are used as evidence of ownership for the digital asset to which they are bound. With these non-fungible tokens, the problem of proving ownership of a digital asset is being solved, and developers hope to use this technique to solve many more real-world problems, whether by providing tradability solutions for art, real estate, or many other sectors. At the time of writing, NFTs have shown unpredictable growth in recent years, and this has stimulated the prosperity of DApps (decentralized applications). Despite this growth and the worldwide attention garnered, with many mainstream key people investing in it, the NFT ecosystem is still in a developing stage and is still premature. This paper is an attempt to systematically summarize NFT developments, so that aspiring developers have a resource to start with and can aid the development process further.
|
arxiv:2308.14389
|
In this note we gather and review some facts about the existence of toric spaces over 3-dimensional simple polytopes. First, over every combinatorial 3-polytope there exists a quasitoric manifold. Second, there exist combinatorial 3-polytopes that do not correspond to any smooth projective toric variety. We restate the proof of the second claim, which does not refer to complicated algebro-geometric techniques. It follows from these results that any fullerene supports quasitoric manifolds but does not support smooth projective toric varieties.
|
arxiv:1607.03377
|
The deflection of light in the strong field limit is an important test for alternative theories of gravity. However, solutions for the metric that allow for analytic computations are not always available. We implement a hybrid analytic-numerical approximation to determine the deflection angle in static, spherically symmetric spacetimes. We apply this to a set of numerical black hole solutions within the class of theories known as degenerate higher order scalar-tensor (DHOST) theories. Comparing our results to a more time-consuming full numerical integration, we find that we can accurately describe the deflection angle for light rays passing at arbitrary distances from the photon sphere with a combination of two analytic-numerical approximations. Furthermore, we find a range of parameters where our DHOST black holes predict strong lensing effects whose size is comparable with the uncertainty in the properties of the supermassive black hole in M87 reported by the Event Horizon Telescope, showing that strong lensing is a viable alternative to put constraints on these models.
|
arxiv:2007.09473
|
We give a short proof of the following fact. Let $\Sigma$ be a connected, finitely connected, noncompact manifold without boundary. If $g$ is a complete Riemannian metric on $\Sigma$ whose Gaussian curvature $K$ is nonnegative at infinity, then $K$ must be integrable. In particular, we obtain a new short proof of the fact that if $\Sigma$ admits a complete metric whose Gaussian curvature is nonnegative and positive at one point, then $\Sigma$ is diffeomorphic to $\mathbb{R}^2$.
|
arxiv:1609.07631
|
We show that the vacuum permeability and permittivity may originate from the magnetization and the polarization of continuously appearing and disappearing fermion pairs. We then show that if we simply model the propagation of the photon in vacuum as a series of transient captures within these ephemeral pairs, we can derive a finite photon velocity. Requiring that this velocity is equal to the speed of light constrains our model of vacuum. Within this approach, the propagation of a photon is a statistical process at scales much larger than the Planck scale. Therefore we expect its time of flight to fluctuate. We propose an experimental test of this prediction.
|
arxiv:1302.6165
|
Topological statistical theory provides the foundation for a modern mathematical reformulation of classical statistical theory: structural statistics emphasizes the structural assumptions that accompany distribution families and the set of structure-preserving transformations between them, given by their statistical morphisms. The resulting language is designed to integrate complicated structured model spaces like deep-learning models and to close the gap to topology and differential geometry. To preserve compatibility with classical statistics, the language comprises corresponding concepts for standard information criteria like sufficiency and completeness.
|
arxiv:1912.10266
|
In this paper, we predict the existence of low-frequency nonlocal plasmon excitations at the vacuum-surface interface of a superlattice of $N$ graphene layers interacting with a thick conducting substrate. This is different from graphite, which allows inter-layer hopping. A dispersion function is derived which incorporates the polarization function of the graphene monolayers (MLGs) and the dispersion function of a semi-infinite electron liquid at whose surface the electrons scatter specularly. We find that this surface plasmon-polariton is not damped by the particle-hole excitations (PHEs) or the bulk modes and separates below the continuum mini-band of bulk plasmon modes. For a conducting substrate with surface plasmon frequency $\omega_s = \omega_p/\sqrt{2}$, the surface plasmon frequency of the hybrid structure always lies below $\omega_s$. The intensity of this mode depends on the distance of the graphene layers from the surface of the conductor, the energy band gap between the valence and conduction bands of MLG and, most importantly, on the number of two-dimensional (2D) layers. Furthermore, the hybrid structure has no surface plasmon for a sufficiently large number ($N \stackrel{>}{\sim} 7$) of layers. The existence of two plasmons with different dispersion relations indicates that quasiparticles with different group velocities may coexist for various ranges of wavelength, which is determined by the number of layers in the superlattice.
|
arxiv:1508.01963
|
Large-scale deep learning models contribute to significant performance improvements on a variety of downstream tasks. Current data and model parallelism approaches utilize model replication and partition techniques to support the distributed training of ultra-large models. However, directly deploying these systems often leads to sub-optimal training efficiency due to complex model architectures and strict device memory constraints. In this paper, we propose Optimal Sharded Data Parallel (OSDP), an automated parallel training system that combines the advantages of both data and model parallelism. Given the model description and the device information, OSDP makes trade-offs between memory consumption and hardware utilization, and thus automatically generates the distributed computation graph and maximizes the overall system throughput. In addition, OSDP introduces operator splitting to further alleviate peak memory footprints during training with negligible overheads, which enables the trainability of larger models as well as higher throughput. Extensive experimental results of OSDP on multiple different kinds of large-scale models demonstrate that the proposed strategy outperforms the state-of-the-art in multiple regards.
|
arxiv:2305.09940
|
Let $H$ be a pointed Hopf algebra with abelian coradical. Let $A \supseteq B$ be left (or right) coideal subalgebras of $H$ that contain the coradical of $H$. We show that $A$ has a PBW basis over $B$, provided that $H$ satisfies certain mild conditions. In the case that $H$ is a connected graded Hopf algebra of characteristic zero and $A$ and $B$ are both homogeneous of finite Gelfand-Kirillov dimension, we show that $A$ is a graded iterated Ore extension of $B$. These results turn out to be conceptual consequences of a structure theorem for each pair $S \supseteq T$ of homogeneous coideal subalgebras of a connected graded braided bialgebra $R$ with braiding satisfying certain mild conditions. The structure theorem claims the existence of a well-behaved PBW basis of $S$ over $T$. The approach to the structure theorem is constructive, by means of a combinatorial method based on Lyndon words and braided commutators, which was originally developed by V. K. Kharchenko for primitively generated braided Hopf algebras of diagonal type. Since in our context we do not assume a priori that $R$ is primitively generated, new methods and ideas are introduced to handle the corresponding difficulties, among others.
|
arxiv:2301.02139
|
We introduce novel finite element schemes for curve diffusion and elastic flow in arbitrary codimension. The schemes are based on a variational form of a system that includes a specifically chosen tangential motion. We derive optimal $L^2$- and $H^1$-error bounds for continuous-in-time semidiscrete finite element approximations that use piecewise linear elements. In addition, we consider fully discrete schemes and, in the case of curve diffusion, prove unconditional stability for it. Finally, we present several numerical simulations, including some convergence experiments that confirm the derived error bounds. The presented simulations suggest that the tangential motion leads to equidistribution in practice.
|
arxiv:2402.16799
|
In evolution, the effects of a single deleterious mutation can sometimes be compensated for by a second mutation which recovers the original phenotype. Such epistatic interactions have implications for the structure of genome space, namely that networks of genomes encoding the same phenotype may not be connected by single mutational moves. We use the folding of RNA sequences into secondary structures as a model genotype-phenotype map and explore the neutral spaces corresponding to networks of genotypes with the same phenotype. In most of these networks, we find that it is not possible to connect all genotypes to one another by single point mutations. Instead, a network for a phenotypic structure with $n$ bonds typically fragments into at least $2^n$ neutral components, often of similar size. While components of the same network generate the same phenotype, they show important variations in their properties, most strikingly in their evolvability and mutational robustness. This heterogeneity implies contingency in the evolutionary process.
|
arxiv:1108.1150
|
the " high acceptance dielectron spectrometer " ( hades ) at gsi, darmstadt, is investigating the production of e + e - pairs in a + a, p + a and n + n collisions. the latter program allows for the reconstruction of individual sources. this strategy will be roughly outlined in this contribution and preliminary pp / pn data is shown.
|
arxiv:0712.1505
|
Lower and upper bounds on the size of a covering of subspaces in the Grassmann graph $\cG_q(n, r)$ by subspaces from the Grassmann graph $\cG_q(n, k)$, $k \geq r$, are discussed. The problem is of interest from four points of view: coding theory, combinatorial designs, $q$-analogs, and projective geometry. In particular we examine coverings based on lifted maximum rank distance codes, combined with spreads and a recursive construction. New constructions are given for $q = 2$ with $r = 2$ or $r = 3$. We discuss the density for some of these coverings. Tables for the best known coverings, for $q = 2$ and $5 \leq n \leq 10$, are presented. We present some questions concerning possible constructions of new coverings of smaller size.
|
arxiv:1111.4319
|
psifx is a plug-and-play multi-modal feature extraction toolkit, aiming to facilitate and democratize the use of state-of-the-art machine learning techniques for human sciences research. It is motivated by a need (a) to automate and standardize data annotation processes, otherwise involving expensive, lengthy, and inconsistent human labor, such as the transcription or coding of behavior changes from audio and video sources; (b) to develop and distribute open-source community-driven psychology research software; and (c) to enable large-scale access and ease of use for non-expert users. The framework contains an array of tools for tasks such as speaker diarization, closed-caption transcription and translation from audio, as well as body, hand, and facial pose estimation and gaze tracking from video. The package has been designed with a modular and task-oriented approach, enabling the community to add or update new tools easily. We strongly hope that this package will provide psychologists a simple and practical solution for efficiently extracting a range of audio, linguistic, and visual features from audio and video, thereby creating new opportunities for in-depth study of real-time behavioral phenomena.
|
arxiv:2407.10266
|
Let $\Omega$ in $\mathbb{R}^m$ be a compact connected $m$-dimensional real analytic domain with boundary and $\phi$ be a primal navigation function, i.e. a real analytic Morse function on $\Omega$ with a unique minimum and with minus gradient vector field $g$ of $\phi$ on the boundary of $\Omega$ pointed inwards along each coordinate. Related to a robotics problem, we define a sequential hybrid process on $\Omega$ for $g$ starting from any initial point $q_0$ in the interior of $\Omega$ as follows: at each step we restrict ourselves to an affine subspace where a collection of coordinates are fixed and allow the other coordinates to change along an integral curve of the projection of $g$ onto the subspace. We prove that, provided each coordinate appears infinitely many times in the coordinate choices during the process, the process converges to a critical point of $\phi$. That critical point is the unique minimum for a dense subset of primal navigation functions. We also present an upper bound for the total length of the trajectories close to a critical point.
|
arxiv:1510.02133
|
Despite the rise of deep learning in numerous areas of computer vision and image processing, iris recognition has not benefited considerably from these trends so far. Most of the existing research on deep iris recognition is focused on new models for generating discriminative and robust iris representations and relies on methodologies akin to traditional iris recognition pipelines. Hence, the proposed models do not approach iris recognition in an end-to-end manner, but rather use standard heuristic iris segmentation (and unwrapping) techniques to produce normalized inputs for the deep learning models. However, because deep learning is able to model very complex data distributions and nonlinear data changes, an obvious question arises: how important is the use of traditional segmentation methods in a deep learning setting? To answer this question, we present in this paper an empirical analysis of the impact of iris segmentation on the performance of deep learning models using a simple two-stage pipeline consisting of a segmentation and a recognition step. We evaluate how the accuracy of segmentation influences recognition performance, but also examine if segmentation is needed at all. We use the CASIA-Thousand and SBVPI datasets for the experiments and report several interesting findings.
|
arxiv:1901.10431
|
On the basis of the quasiclassical theory of superconductivity, we obtain a formula for the local density of states (LDOS) around a vortex core of superconductors with anisotropic pair potential and Fermi surface in arbitrary directions of magnetic fields. Earlier results on the LDOS of d-wave superconductors and NbSe$_2$ are naturally interpreted within our theory geometrically; the region with high intensity of the LDOS observed in numerical calculations turns out to be the enveloping curve of the trajectory of Andreev bound states. We discuss experimental results on YNi$_2$B$_2$C within the quasiclassical theory of superconductivity.
|
arxiv:cond-mat/0605441
|
be stated as solved or not. Notable historical conjectures were finally proven. In 1976, Wolfgang Haken and Kenneth Appel proved the four color theorem, controversial at the time for the use of a computer to do so. Andrew Wiles, building on the work of others, proved Fermat's Last Theorem in 1995. Paul Cohen and Kurt Gödel proved that the continuum hypothesis is independent of (could neither be proved nor disproved from) the standard axioms of set theory. In 1998, Thomas Callister Hales proved the Kepler conjecture, also using a computer. Mathematical collaborations of unprecedented size and scope took place. An example is the classification of finite simple groups (also called the "enormous theorem"), whose proof between 1955 and 2004 required 500-odd journal articles by about 100 authors, filling tens of thousands of pages. A group of French mathematicians, including Jean Dieudonné and André Weil, publishing under the pseudonym "Nicolas Bourbaki", attempted to exposit all of known mathematics as a coherent rigorous whole. The resulting several dozen volumes have had a controversial influence on mathematical education. Differential geometry came into its own when Albert Einstein used it in general relativity. Entirely new areas of mathematics such as mathematical logic, topology, and John von Neumann's game theory changed the kinds of questions that could be answered by mathematical methods. All kinds of structures were abstracted using axioms and given names like metric spaces, topological spaces etc. As mathematicians do, the concept of an abstract structure was itself abstracted and led to category theory. Grothendieck and Serre recast algebraic geometry using sheaf theory. Large advances were made in the qualitative study of dynamical systems that Poincaré had begun in the 1890s. Measure theory was developed in the late 19th and early 20th centuries. Applications of measures include the Lebesgue integral, Kolmogorov's axiomatisation of probability theory, and ergodic theory. Knot theory greatly expanded. Quantum mechanics led to the development of functional analysis, a branch of mathematics that was greatly developed by Stefan Banach and his collaborators who formed the Lwów school of mathematics. Other new areas include Laurent Schwartz's distribution theory, fixed point theory, singularity theory and René Thom's catastrophe theory, model theory, and Mandelbrot's fractals. Lie theory with its Lie groups and Lie algebras became one of the major areas of study. Non-standard analysis, introduced by Abraham Robinson, rehabili
|
https://en.wikipedia.org/wiki/History_of_mathematics
|
The origin of the highest energy cosmic rays is yet unknown. An appealing possibility is the so-called Z-burst scenario, in which a large fraction of these cosmic rays are decay products of Z bosons produced in the scattering of ultrahigh energy neutrinos on cosmological relic neutrinos. The comparison between the observed and predicted spectra constrains the mass of the heaviest neutrino. The required neutrino mass is fairly robust against variations of the presently unknown quantities, such as the amount of relic neutrino clustering, the universal photon radio background and the extragalactic magnetic field. Considering different possibilities for the ordinary cosmic rays, the required neutrino masses are determined. In the most plausible case, that the ordinary cosmic rays are of extragalactic origin and the universal radio background is strong enough to suppress high energy photons, the required neutrino mass is 0.08 eV < m_nu < 0.40 eV. The required ultrahigh energy neutrino flux should be detected in the near future by experiments such as AMANDA, RICE or the Pierre Auger Observatory.
|
arxiv:hep-ph/0210123
|
Accurate segmentation of glioma brain tumors is crucial for diagnosis and treatment planning. Deep learning techniques offer promising solutions, but optimal model architectures remain under investigation. We used the BraTS 2021 dataset, selecting T1 with contrast enhancement (T1CE), T2, and fluid-attenuated inversion recovery (FLAIR) sequences for model development. The proposed Attention Xception UNet (AXUNet) architecture integrates an Xception backbone with dot-product self-attention modules, inspired by state-of-the-art (SOTA) large language models such as Google Bard and OpenAI ChatGPT, within a UNet-shaped model. We compared AXUNet with SOTA models. Comparative evaluation on the test set demonstrated improved results over baseline models. Inception-UNet and Xception-UNet achieved mean Dice scores of 90.88 and 93.24, respectively. Attention ResUNet (AResUNet) attained a mean Dice score of 92.80, with the highest score of 84.92 for enhancing tumor (ET) among all models. Attention gate UNet (AGUNet) yielded a mean Dice score of 90.38. AXUNet outperformed all models with a mean Dice score of 93.73. It demonstrated superior Dice scores across whole tumor (WT) and tumor core (TC) regions, achieving 92.59 for WT, 86.81 for TC, and 84.89 for ET. The integration of the Xception backbone and dot-product self-attention mechanisms in AXUNet showcases enhanced performance in capturing spatial and contextual information. The findings underscore the potential utility of AXUNet in facilitating precise tumor delineation.
|
arxiv:2503.20446
|
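For readers unfamiliar with the metric quoted above, the short sketch below computes the Dice score between a predicted and a ground-truth segmentation mask. It is a generic illustration with made-up toy masks, not code from the paper.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks (0/1 arrays)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2D masks standing in for one tumor sub-region (e.g. enhancing tumor)
pred   = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
target = np.array([[0, 1, 1], [1, 1, 0], [0, 0, 0]])
print(f"Dice: {dice_score(pred, target):.3f}")  # 2*3 / (3+4) ~= 0.857
```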
We have developed a system for automatic facial expression recognition, which runs on Google Glass and delivers real-time social cues to the wearer. We evaluate the system as a behavioral aid for children with autism spectrum disorder (ASD), who can greatly benefit from real-time non-invasive emotional cues and are more sensitive to sensory input than neurotypically developing children. In addition, we present a mobile application that enables users of the wearable aid to review their videos along with auto-curated emotional information on the video playback bar. This integrates our learning aid into the context of behavioral therapy. Expanding on our previous work describing in-lab trials, this paper presents our system and application-level design decisions in depth, as well as the interface learnings gathered during the use of the system by multiple children with ASD in an at-home iterative trial.
|
arxiv:2002.06581
|
In a recent Letter, one of us pointed out how differences in preparation procedures for quantum experiments can lead to non-trivial differences in the results of the experiment. The differences arise from the initial correlations between the system and the environment. Therefore, any quantum experiment that is prone to influences from the environment must be prepared carefully. In this paper, we study quantum process tomography in light of this. We suggest several experimental setups where the preparation of the initial state plays a role in the final outcome of the experiment. We show that by studying the linearity and the positivity of the resulting maps, the experimenter can determine the nature of the initial correlations between the system and the environment.
|
arxiv:0904.4663
|
Let $G$ be a semisimple Lie group with discrete series. We use maps $K_0(C^*_r G) \to \mathbb{C}$ defined by orbital integrals to recover group theoretic information about $G$, including information contained in $K$-theory classes not associated to the discrete series. An important tool is a fixed point formula for equivariant indices obtained by the authors in an earlier paper. Applications include a tool to distinguish classes in $K_0(C^*_r G)$, the (known) injectivity of Dirac induction, versions of Selberg's principle in $K$-theory and for matrix coefficients of the discrete series, a Tannaka-type duality, and a way to extract characters of representations from $K$-theory. Finally, we obtain a continuity property near the identity element of $G$ of families of maps $K_0(C^*_r G) \to \mathbb{C}$, parametrised by semisimple elements of $G$, defined by stable orbital integrals. This implies a continuity property for $L$-packets of discrete series characters, which in turn can be used to deduce a (well-known) expression for formal degrees of discrete series representations from Harish-Chandra's character formula.
|
arxiv:1803.07208
|
In this paper, we present Context-Adaptive PARRoT (CA-PARRoT) as an extension of our previous work, Predictive Ad-hoc Routing fueled by Reinforcement learning and Trajectory knowledge (PARRoT). Short-term effects, as occurring in urban surroundings, have been shown to have a negative impact on the reinforcement learning (RL)-based routing process. Therefore, we add a timer-based compensation mechanism to the update process and introduce a hybrid machine learning (ML) approach to classify radio environment prototypes (REPs) with a dedicated ML component, enabling the protocol to adapt autonomously to its context. The performance of the novel protocol is evaluated in comprehensive network simulations considering different REPs and is compared to well-known established routing protocols for mobile ad-hoc networks (MANETs). The results show that CA-PARRoT is capable of compensating for the challenges encountered in different REPs, improves its key performance indicators (KPIs) by up to 23% compared to PARRoT, and outperforms established routing protocols by up to 50%.
|
arxiv:2107.06190
|
The Toda criterion of the Gaussian curvature is applied to calculate analytically the transition energy from regular to chaotic motion in a schematic model describing the interaction between collective dipole and quadrupole modes in atomic nuclei.
|
arxiv:nucl-th/9707024
|
The relativistic six-quark equations are found in the framework of the dispersion relation technique. The approximate solutions of these equations, using the method based on the extraction of leading singularities of the amplitudes, are obtained. The relativistic six-quark amplitudes of hexaquarks including the quarks of three flavors ($u$, $d$, $s$) are calculated. The poles of these amplitudes determine the masses of six-quark systems.
|
arxiv:1003.0257
|
In this paper, we present a novel method for no-reference image quality assessment (NR-IQA), which predicts the perceptual quality score of a given image without using any reference image. The proposed method harnesses three functions: (i) the visual attention mechanism, which affects many aspects of visual perception including image quality assessment, yet is overlooked in the NR-IQA literature; the method assumes that the fixation areas on an image contain key information for the process of IQA; (ii) the robust averaging strategy, which is a means, supported by psychology studies, of integrating multiple/step-wise evidence to make a final perceptual judgment; (iii) multi-task learning, which is believed to be an effectual means to shape representation learning and could result in a more generalized model. To exploit the synergy of the three, we consider NR-IQA as a dynamic perception process, in which the model samples a sequence of "informative" areas and aggregates the information to learn a representation for the tasks of jointly predicting the image quality score and the distortion type. The model learning is implemented by a reinforcement strategy, in which the rewards of both tasks guide the learning of the optimal sampling policy to acquire the "task-informative" image regions so that the predictions can be made accurately and efficiently (in terms of the sampling steps). The reinforcement learning is realized by a deep network with the policy gradient method and trained through back-propagation. In experiments, the model is tested on the TID2008 dataset and it outperforms several state-of-the-art methods. Furthermore, the model is very efficient in the sense that a small number of fixations are used in NR-IQA.
|
arxiv:1612.03530
|
The infrared excess around the white dwarf G29-38 can be explained by emission from an opaque flat ring of dust with an inner radius 0.14 of the radius of the Sun and an outer radius approximately equal to the Sun's. This ring lies within the Roche region of the white dwarf, where an asteroid could have been tidally destroyed, producing a system reminiscent of Saturn's rings. Accretion onto the white dwarf from this circumstellar dust can explain the observed calcium abundance in the atmosphere of G29-38. Either as a bombardment by a series of asteroids or because of one large disruption, the total amount of matter accreted onto the white dwarf may have been comparable to the total mass of asteroids in the solar system, or, equivalently, about 1% of the mass in the asteroid belt around the main sequence star zeta Lep.
|
arxiv:astro-ph/0301411
|
We construct a realistic supersymmetric model for superheavy metastable cosmic strings (CSs) that can be investigated in the current pulsar timing array (PTA) experiments. We consider shifted $\mu$-hybrid inflation in which the symmetry breaking $SU(4)_c \times SU(2)_L \times U(1)_R \rightarrow SU(3)_c \times SU(2)_L \times U(1)_{B-L} \times U(1)_R$ proceeds along an inflationary trajectory such that the topologically unstable primordial monopoles are inflated away. The breaking of $U(1)_{B-L} \times U(1)_R \rightarrow U(1)_Y$ after inflation ends yields the metastable CSs that generate the stochastic gravitational wave background (SGWB), which is consistent with the current PTA data set. The scalar spectral index $n_s$ and the tensor-to-scalar ratio $r$ are also compatible with Planck 2018. We briefly discuss both reheating and leptogenesis in this model.
|
arxiv:2308.11410
|
We study a two-terminal graphene Josephson junction with contacts shaped to form a narrow constriction, less than 100 nm in length. The contacts are made from type II superconductors and are able to withstand magnetic fields high enough to reach the quantum Hall (QH) regime in graphene. In this regime, the device conductance is determined by edge states, plus the contribution from the constricted region. In particular, the constriction area can support supercurrents up to fields of ~2.5 T. Moreover, enhanced conductance is observed through a wide range of magnetic fields and gate voltages. This additional conductance and the appearance of supercurrent are attributed to tunneling between counter-propagating quantum Hall edge states along opposite superconducting contacts.
|
arxiv:1904.11689
|
We provide analytical solutions to the thermodynamic Bethe ansatz equations in the large and small density approximations. We extend results previously obtained for the leading order behaviour of the scaling function of affine Toda field theories related to simply laced Lie algebras to the non-simply laced case. The comparison with semi-classical methods shows perfect agreement for the simply laced case. We derive the Y-systems for affine Toda field theories with real coupling constant and employ them to improve the large density approximations. We test the quality of our analysis explicitly for the sinh-Gordon model and the $(G_2^{(1)}, D_4^{(3)})$-affine Toda field theory.
|
arxiv:hep-th/0002185
|
Can a mere next-token predictor faithfully model human intelligence? We crystallize this emerging concern and correct popular misconceptions surrounding it, and advocate a simple multi-token objective. As a starting point, we argue that the two often-conflated phases of next-token prediction (autoregressive inference and teacher-forced training) must be treated distinctly. The popular criticism that errors can compound during autoregressive inference crucially assumes that teacher-forcing has learned an accurate next-token predictor. This assumption sidesteps a more deep-rooted problem we expose: in certain classes of tasks, teacher-forcing can simply fail to learn an accurate next-token predictor in the first place. We describe a general mechanism of how teacher-forcing can fail, and design a minimal planning task where both the Transformer and the Mamba architecture empirically fail in that manner, remarkably, despite the task being straightforward to learn. Finally, we provide preliminary evidence that this failure can be resolved using a simple modification that predicts multiple tokens in advance. We hope this finding can ground future debates and inspire explorations beyond the next-token prediction paradigm. We make our code available under https://github.com/gregorbachmann/next-token-failures
|
arxiv:2403.06963
|
In hybrid inflation and running mass inflation models, it is possible that the inflaton field will fragment into non-topological solitons, resulting in a highly inhomogeneous post-inflation era prior to reheating. In supersymmetric models with a conventional homogeneous post-inflation era, the dynamics of flat direction scalars are determined by $cH^2$ corrections to the mass squared terms, coming from the energy density of the Universe combined with Planck-scale suppressed interactions. Here we reconsider the $cH^2$ corrections for a universe dominated by inflatonic non-topological solitons. We show that the dynamics in this case are typically equivalent to the case $c = 0$, even in the vicinity of the non-topological solitons. Thus Affleck-Dine baryogenesis will proceed as in the original $c = 0$ Affleck-Dine scenario.
|
arxiv:hep-ph/0305306
|
It is a classical result that the multilinear component of the free Lie algebra is isomorphic (as a representation of the symmetric group) to the top (co)homology of the proper part of the poset of partitions $\Pi_n$ tensored with the sign representation. We generalize this result in order to study the multilinear component of the free Lie algebra with multiple compatible Lie brackets. We introduce a new poset of weighted partitions $\Pi_n^k$ that allows us to generalize the result. The new poset is a generalization of $\Pi_n$ and of the poset of weighted partitions $\Pi_n^w$ introduced by Dotsenko and Khoroshkin and studied by the author and Wachs for the case of two compatible brackets. We prove that the poset $\Pi_n^k$ with a top element added is EL-shellable and hence Cohen-Macaulay. This and other properties of $\Pi_n^k$ enable us to answer questions posed by Liu on free multibracketed Lie algebras. In particular, we obtain various dimension formulas and multicolored generalizations of the classical Lyndon and comb bases for the multilinear component of the free Lie algebra. We also obtain a plethystic formula for the Frobenius characteristic of the representation of the symmetric group on the multilinear component of the free multibracketed Lie algebra.
|
arxiv:1408.5415
|
The Industrial Internet of Things (IIoT) has become a critical technology to accelerate the process of digital and intelligent transformation of industries. As the cooperative relationships between smart devices in IIoT become more complex, obtaining deterministic responses for IIoT periodic time-critical computing tasks becomes a crucial and nontrivial problem. However, few current works in cloud/edge/fog computing focus on this problem. This paper is a pioneer in exploring the deterministic scheduling and network structural optimization problems for IIoT periodic time-critical computing tasks. We first formulate the two problems and derive theorems to help quickly identify computation and network resource sharing conflicts. Based on this, we propose a deterministic scheduling algorithm, \textit{IIoTBroker}, which realizes a deterministic response for each IIoT task by optimizing fine-grained computation and network resource allocations, and a network optimization algorithm, \textit{IIoTDeployer}, which provides a cost-effective structural upgrade solution for existing IIoT networks. Our simulation results show that our methods are cost-friendly, scalable, and guarantee deterministic responses with low computation cost.
|
arxiv:2402.16870
|
In the spirit of studying the information paradox as a scattering problem, we pose and answer the following questions: i) What is the scattering amplitude for $n$ particles to emerge from a large black hole when two energetic particles are thrown into it? ii) How long would we have to wait to recover the information sent in? The answer to the second question has long been expected to be the Page time, a quantity associated with the lifetime of the black hole. We answer the first by evaluating an infinite number of `ladder of ladders' Feynman diagrams to all orders in $M_{Pl}/M_{BH}$. Such processes can generically be calculated in effective field theory in the black hole eikonal phase, where scattering energies satisfy $E M_{BH} \gg M^2_{Pl}$. Importantly, interactions are mediated by a fluctuating metric; a fixed geometry is insufficient to capture these effects. We find that the characteristic time spent by the particles in the scattering region (the so-called Eisenbud-Wigner time delay) is indeed the Page time, confirming the long-standing expectation. This implies that the entropy of radiation continues to increase, after the particles are thrown in, until after the Page time, when information begins to re-emerge.
|
arxiv:2110.14673
|
We present the analysis of multi-wavelength XMM-Newton data from the Seyfert galaxy NGC 3783, including UV imaging, X-ray and UV lightcurves, the 0.2-10 keV X-ray continuum, the iron K-alpha emission line, and high-resolution spectroscopy and modelling of the soft X-ray warm absorber. The 0.2-10 keV spectral continuum can be well reproduced by a power-law at higher energies; we detect a prominent Fe K-alpha emission line, with both broad and narrow components, and a weaker emission line at 6.9 keV which is probably a combination of Fe K-beta and Fe XXVI. We interpret the significant deficit of counts in the soft X-ray region as being due to absorption by ionised gas in the line of sight. This is demonstrated by the large number of narrow absorption lines in the RGS spectrum from iron, oxygen, nitrogen, carbon, neon, argon, magnesium, silicon and sulphur. The wide range of iron states present in the spectrum enables us to deduce the ionisation structure of the absorbing medium. We find that our spectrum contains evidence of absorption by at least two phases of gas: a hotter phase containing plasma with a log ionisation parameter xi (where xi is in erg cm/s) of 2.4 and greater, and a cooler phase with log xi centred around 0.3. The gas in both phases is outflowing at speeds of around 800 km/s. The main spectral signature of the cold phase is the unresolved transition array (UTA) of M-shell iron, which is the deepest yet observed; its depth requires either that the abundance of iron, in the cold phase, is several times that of oxygen, with respect to solar abundances, or that the absorption lines associated with this phase are highly saturated. The cold phase is associated with ionisation states that would also absorb in the UV.
|
arxiv:astro-ph/0206316
|
We discuss the existence of de Sitter inflationary solutions for the string-inspired fourth-derivative gravity theories with a dilaton field. We consider a space-time of arbitrary dimension D and an arbitrary parametrization of the target space metric. The specific features of the theory in dimension D = 4 and those of the special ghost-free parametrization of the metric are found. We also consider similar string-inspired theories with torsion and construct an inflationary solution with torsion and dilaton for D = 4. The stability of the inflationary solutions is also investigated.
|
arxiv:hep-th/9706179
|
In this paper, we introduce a class of twisted matrix algebras of $M_2(E)$ and twisted direct products of $E \times E$ for an algebra $E$. Let $A$ be a noetherian Koszul Artin-Schelter regular algebra, $z \in A_2$ be a regular central element of $A$, and $B = A_p[y_1, y_2; \sigma]$ be a graded double Ore extension of $A$. We use the Clifford deformation $C_{A^!}(z)$ of the Koszul dual $A^!$ to study the noncommutative quadric hypersurface $B/(z + y_1^2 + y_2^2)$. We prove that the stable category of graded maximal Cohen-Macaulay modules over $B/(z + y_1^2 + y_2^2)$ is equivalent to certain bounded derived categories, which involve a twisted matrix algebra of $M_2(C_{A^!}(z))$ or a twisted direct product of $C_{A^!}(z) \times C_{A^!}(z)$ depending on the values of $p$. These results are presented as skew versions of Knörrer's periodicity theorem. Moreover, we show that $B/(z + y_1^2 + y_2^2)$ may not be a noncommutative graded isolated singularity even if $A/(z)$ is.
|
arxiv:2406.03024
|
The Becchi-Rouet-Stora-Tyutin (BRST) operator quantization of a finite-dimensional gauge system featuring two quadratic super-Hamiltonian and m linear supermomentum constraints is studied as a model for quantizing generally covariant gauge theories. The proposed model "completely" mimics the constraint algebra of general relativity. The Dirac constraint operators are identified by realizing the BRST generator of the system as a Hermitian nilpotent operator, and a physical inner product is introduced to complete a consistent quantization procedure.
|
arxiv:gr-qc/0102062
|
Lung cancer is the leading cause of cancer-related death worldwide, and early diagnosis is critical to improving patient outcomes. To diagnose cancer, a highly trained pulmonologist must navigate a flexible bronchoscope deep into the branched structure of the lung for biopsy. The biopsy fails to sample the target tissue in 26-33% of cases, largely because of poor registration with the preoperative CT map. To improve intraoperative registration, we develop two deep learning approaches, called AirwayNet and BifurcationNet, to localize the bronchoscope in the preoperative CT map based on the bronchoscopic video in real time. The networks are trained entirely on simulated images derived from the patient-specific CT. When evaluated on recorded bronchoscopy videos in a phantom lung, AirwayNet outperforms other deep learning localization algorithms with an area under the precision-recall curve of 0.97. Using AirwayNet, we demonstrate autonomous driving in the phantom lung based on video feedback alone. The robot reaches four targets in the left and right lungs in 95% of the trials. On recorded videos in eight human cadaver lungs, AirwayNet achieves areas under the precision-recall curve ranging from 0.82 to 0.997.
|
arxiv:1907.08136
|
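As a side note on the metric reported above, the area under the precision-recall curve can be computed from per-frame scores and binary labels as in the minimal sketch below. This is a generic scikit-learn usage example with made-up data, not the authors' evaluation code.

```python
from sklearn.metrics import average_precision_score

# Hypothetical per-frame labels (1 = correct airway location) and model confidence scores
y_true   = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_scores = [0.92, 0.40, 0.85, 0.77, 0.35, 0.66, 0.55, 0.20, 0.88, 0.71]

# Average precision summarizes the precision-recall curve as a single number
pr_auc = average_precision_score(y_true, y_scores)
print(f"Area under the precision-recall curve: {pr_auc:.3f}")
```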
The paper presents an idealized warehouse environment where humans and robots are both present and need to communicate to avoid mutual collisions. The warehouse shelves are modelled as metallic (PEC) parallelepipeds, which are the largest obstacles of practical interest, while for communication a UWB Gaussian signal at 3.994 GHz with 468 MHz bandwidth is used. The signal propagation is analyzed using ray tracing software, with the goal of determining the range of communication and the optimum antenna configurations. Several scenarios have been analyzed and discussed. It is found that the major influence on the communication range is exerted by the type of polarization of the antennas placed on the human and the robot, while the optimum range is obtained when all the antennas are vertically polarized.
|
arxiv:2001.03160
|
Cosmic voids, as typical under-density regions in the large scale Universe, are known for their hyperbolic properties, i.e. their ability to deviate photon beams. The under-density then acts as a negative curvature in hyperbolic spaces. The hyperbolicity of voids has to lead to distortion in the statistical analysis of galactic surveys. We reveal the sensitivity of the hyperbolicity, and hence of the distortion, with respect to the ratio of void/wall scales, which are observable parameters. This provides, in principle, the possibility of using the distortion in galactic surveys to reveal the line-of-sight number of cosmic voids and their characteristic scales.
|
arxiv:2103.03245
|
Signatures of the formation of a strongly interacting thermalized matter of partons have been observed in nucleus-nucleus, proton-nucleus, and high-multiplicity proton-proton collisions at LHC energies. Strangeness enhancement in such ultra-relativistic heavy-ion collisions is considered to be a consequence of this thermalized phase, known as quark-gluon plasma (QGP). Simultaneously, proper modeling of the hadronic energy fraction in interactions of ultra-high energy cosmic rays (UHECRs) has been proposed as a solution for the muon puzzle, an unexpected excess of muons in air showers. These interactions have center-of-mass collision energies of the order of the energies attained at the LHC or even higher, indicating that the possibility of a thermalized partonic state cannot be overlooked in UHECR-air interactions. This work investigates the hadronic energy fraction and strangeness enhancement to explore QGP-like phenomena in UHECR-air interactions using various high-energy hadronic models. A core-corona system with a thermalized core undergoing statistical hadronization is considered through the EPOS LHC model. In contrast, PYTHIA 8, QGSJET II-04, and SIBYLL 2.3d consider string fragmentation without thermalization. We have found that EPOS LHC gives a better description of strangeness enhancement as compared to other models. We conclude that adequately treating all the relevant effects and further retuning the models is necessary to explain the observed effects.
|
arxiv:2304.00294
|
while large language models ( llms ) have achieved state - of - the - art performance on a wide range of medical question answering ( qa ) tasks, they still face challenges with hallucinations and outdated knowledge. retrieval - augmented generation ( rag ) is a promising solution and has been widely adopted. however, a rag system can involve multiple flexible components, and there is a lack of best practices regarding the optimal rag setting for various medical purposes. to systematically evaluate such systems, we propose the medical information retrieval - augmented generation evaluation ( mirage ), a first - of - its - kind benchmark including 7, 663 questions from five medical qa datasets. using mirage, we conducted large - scale experiments with over 1. 8 trillion prompt tokens on 41 combinations of different corpora, retrievers, and backbone llms through the medrag toolkit introduced in this work. overall, medrag improves the accuracy of six different llms by up to 18 % over chain - of - thought prompting, elevating the performance of gpt - 3. 5 and mixtral to gpt - 4 - level. our results show that the combination of various medical corpora and retrievers achieves the best performance. in addition, we discovered a log - linear scaling property and the " lost - in - the - middle " effects in medical rag. we believe our comprehensive evaluations can serve as practical guidelines for implementing rag systems for medicine.
|
arxiv:2402.13178
|
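as a rough illustration of the retrieve - then - read pattern that medrag - style systems build on ( not the medrag toolkit or the mirage corpora themselves ), here is a minimal sketch with a toy tf - idf retriever and a prompt - building step ; the snippets, question, and function names are invented for the example.

```python
# Minimal retrieve-then-read sketch (not the MedRAG toolkit itself).
# The tiny in-memory corpus stands in for a real medical corpus and retriever.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "metformin is a first-line therapy for type 2 diabetes.",
    "ace inhibitors can cause a persistent dry cough.",
    "warfarin requires regular inr monitoring.",
]

def retrieve(question, k=2):
    """Return the k corpus snippets most similar to the question (TF-IDF)."""
    vec = TfidfVectorizer().fit(corpus + [question])
    doc_mat = vec.transform(corpus)
    q_vec = vec.transform([question])
    scores = cosine_similarity(q_vec, doc_mat).ravel()
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

def build_prompt(question, snippets):
    """Assemble a grounded prompt; a backbone LLM would consume this."""
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    q = "which common drug class is associated with a dry cough?"
    print(build_prompt(q, retrieve(q)))
```

in a full system the tf - idf step would be swapped for a stronger retriever over a medical corpus, and the prompt would be passed to the backbone llm ; only the overall control flow is meant to carry over.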
oj287 is a quasi - periodic quasar with roughly 12 year optical cycles. it displays prominent outbursts which are predictable in a binary black hole model. the model predicted a major optical outburst in december 2015. we found that the outburst did occur within the expected time range, peaking on 2015 december 5 at magnitude 12. 9 in the optical r - band. based on swift / xrt satellite measurements and optical polarization data, we find that it included a major thermal component. its timing provides an accurate estimate for the spin of the primary black hole, chi = 0. 313 ± 0. 01. the present outburst also confirms the established general relativistic properties of the system such as the loss of orbital energy to gravitational radiation at the 2 % accuracy level and it opens up the possibility of testing the black hole no - hair theorem with a 10 % accuracy during the present decade.
|
arxiv:1603.04171
|
biological agents, such as humans and animals, are capable of making decisions out of a very large number of choices in a limited time. they can do so because they use their prior knowledge to find a solution that is not necessarily optimal but good enough for the given task. in this work, we study the motion coordination of multiple drones under the above - mentioned paradigm, bounded rationality ( br ), to achieve cooperative motion planning tasks. specifically, we design a prior policy that provides useful goal - directed navigation heuristics in familiar environments and is adaptive in unfamiliar ones via reinforcement learning augmented with an environment - dependent exploration noise. integrating this prior policy in the game - theoretic bounded rationality framework allows agents to quickly make decisions in a group considering other agents ' computational constraints. our investigation assures that agents with a well - informed prior policy increase the efficiency of the collective decision - making capability of the group. we have conducted rigorous experiments in simulation and in the real world to demonstrate that the ability of informed agents to navigate to the goal safely can guide the group to coordinate efficiently under the br framework.
|
arxiv:2307.15798
|
we prove that the supersingular k3 surface of artin invariant 1 in characteristic p ( where p denotes an arbitrary prime ) admits a model over $ \ mathbb { f } _ p $ with picard number 21.
|
arxiv:1105.4993
|
martingale methods are used to study the almost everywhere convergence of general function series. applications are given to ergodic series, which improves recent results of fan \ cite { fanetds }, and to dilated series, including davenport series, which completes results of gaposhkin \ cite { gaposhkin67 } ( see also \ cite { gaposhkin68 } ). application is also given to the almost everywhere convergence with respect to riesz products of lacunary series.
|
arxiv:1511.08586
|
short term unpredictability is discovered numerically for high reynolds number fluid flows under periodic boundary conditions. furthermore, the abundance of the short term unpredictability is also discovered. these discoveries support our theory that fully developed turbulence is constantly driven by such short term unpredictability.
|
arxiv:1702.02993
|
we propose a proper orthogonal decomposition ( pod ) - galerkin based reduced order model ( rom ) for a leray model. for the implementation of the model, we combine a two - step algorithm called evolve - filter ( ef ) with a computationally efficient finite volume method. the main novelty of the proposed approach lies in applying spatial filtering both for the collection of the snapshots and in the reduced order model, as well as in considering the pressure field at the reduced level. in both steps of the ef algorithm, velocity and pressure fields are approximated by using different pod bases and coefficients. for the reconstruction of the pressure fields, we use a pressure poisson equation approach. we test our rom on two benchmark problems : 2d and 3d unsteady flow past a cylinder at reynolds number 0 < = re < = 100. the accuracy of the reduced order model is assessed against results obtained with the full order model. for the 2d case, a parametric study with respect to the filtering radius is also presented.
|
arxiv:2009.13593
|
deep learning techniques have gained a lot of traction in the field of nlp research. the aim of this paper is to predict the age and gender of an individual by inspecting their written text. we propose a supervised bert - based classification technique in order to predict the age and gender of bloggers. the dataset used contains 681284 rows of data, with the information of the blogger ' s age, gender, and text of the blog written by them. we compare our algorithm to previous works in the same domain and achieve a better accuracy and f1 score. the accuracy reported for the prediction of age group was 84. 2 %, while the accuracy for the prediction of gender was 86. 32 %. this study relies on the raw capabilities of bert to predict the classes of textual data efficiently. this paper shows promising capability in predicting the demographics of the author with high accuracy and can have wide applicability across multiple domains.
|
arxiv:2305.08633
|
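the abstract above does not spell out the model head or hyperparameters, so the following is only a hedged sketch of the kind of bert - based classification it describes, using the hugging face transformers api ; the three - way age - group head, the example text, and the inference - only setup are assumptions, and fine - tuning on the blog dataset would precede any real prediction.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical 3-way age-group head; the paper's exact label set and
# training hyperparameters are not specified here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)

texts = ["i just finished my final exams and cannot wait for summer break"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=128,
                  return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits      # shape: (batch_size, num_labels)
pred = logits.argmax(dim=-1)            # predicted age-group index
print(pred)
```

a second head ( or a second fine - tuned model ) would handle the binary gender label in the same way.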
the tutte polynomial and derksen ' s $ \ mathcal { g } $ - invariant are the universal deletion - contraction and valuative matroid and polymatroid invariants, respectively. there are only a handful of well known invariants ( like the matroid kazhdan - lusztig polynomials ) between ( in terms of fineness ) the tutte polynomial and derksen ' s $ \ mathcal { g } $ - invariant. the aim of this study is to define a spectrum of generalized tutte polynomials to fill the gap between the tutte polynomial and derksen ' s $ \ mathcal { g } $ - invariant. these polynomials are built by taking repeated convolution products of universal tutte characters studied by dupont, fink, and moci and using the framework of ardila and sanchez for studying valuative invariants. we develop foundational aspects of these polynomials by showing they are valuative on generalized permutahedra and present a generalized deletion - contraction formula. we apply these results on chain tutte polynomials to obtain formulas for the m \ " obius polynomial, the opposite characteristic polynomial, a generalized m \ " obius polynomial, ford ' s expected codimension of a matroid variety, and derksen ' s $ \ mathcal { g } $ - invariant.
|
arxiv:2305.02874
|
in this article, we consider transitive dynamical systems that have the shadowing property and whose entropy functions are upper semicontinuous. for these systems, when ergodic optimization is restricted to the subset of invariant measures whose metric entropy is equal to or greater than a given constant, we prove that for generic real continuous functions the optimizing measure is unique, ergodic, has full support, and has metric entropy equal to the given constant. similar results also hold for suspension flows over transitive subshifts of finite type, c ^ r ( r \ geq 2 ) - generic geometric lorenz attractors and c ^ 1 - generic singular hyperbolic attractors.
|
arxiv:2112.12453
|
we give the rectangle condition for strong irreducibility of heegaard splittings of $ 3 $ - manifolds with non - empty boundary. we apply this to a generalized heegaard splitting of a $ 2 $ - fold covering of $ s ^ 3 $ branched along a link. the condition implies that any thin meridional level surface in the link complement is incompressible. we also show that the additivity of knot width holds for a composite knot satisfying the condition.
|
arxiv:1007.2521
|
high - quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. searching the high - dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. these tasks can only be performed reliably and efficiently with monte carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. we show how the markov - chain monte carlo method known as hamiltonian monte carlo, or hybrid monte carlo, can be adapted to this context. it is applicable when an efficient parameterization of the state space is available. the resulting random walk is entirely inside the physical parameter space, and the hamiltonian dynamics enable us to take big steps, thereby avoiding strong correlations between successive sample points while enjoying a high acceptance rate. we use examples of single and double qubit measurements for illustration.
|
arxiv:1407.7806
|
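a minimal toy version of hamiltonian monte carlo with leapfrog integration on a 2 - d gaussian target may help make the sampler concrete ; the quantum - state application would replace the log - density and its gradient with ones defined over an efficient parameterization of the state space, and the step size and trajectory length used here are arbitrary choices, not values from the paper.

```python
import numpy as np

# Toy HMC on a 2-D standard normal target.
def log_prob(x):
    return -0.5 * np.dot(x, x)          # log density up to a constant

def grad_log_prob(x):
    return -x

def hmc_step(x, step=0.2, n_leap=20, rng=np.random.default_rng()):
    p = rng.standard_normal(x.shape)            # resample momentum
    x_new, p_new = x.copy(), p.copy()
    # leapfrog integration of the Hamiltonian dynamics
    p_new += 0.5 * step * grad_log_prob(x_new)
    for _ in range(n_leap - 1):
        x_new += step * p_new
        p_new += step * grad_log_prob(x_new)
    x_new += step * p_new
    p_new += 0.5 * step * grad_log_prob(x_new)
    # Metropolis accept/reject on the total Hamiltonian (potential + kinetic)
    h_old = -log_prob(x) + 0.5 * np.dot(p, p)
    h_new = -log_prob(x_new) + 0.5 * np.dot(p_new, p_new)
    return x_new if rng.random() < np.exp(h_old - h_new) else x

rng = np.random.default_rng(0)
x, samples = np.zeros(2), []
for _ in range(2000):
    x = hmc_step(x, rng=rng)
    samples.append(x.copy())
print(np.mean(samples, axis=0), np.var(samples, axis=0))  # near (0,0) and (1,1)
```

the long leapfrog trajectories are what let the sampler take big, weakly correlated steps while keeping the acceptance rate high, which is the property the abstract emphasizes.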
to date, antineutrino experiments built for the purpose of demonstrating a nonproliferation capability have typically employed organic scintillators, have been situated as close to the core as possible - typically a few meters to tens of meters away - and have not exceeded a few tons in size. one problem with this approach is that proximity to the reactor core requires accommodation by the host facility. water cherenkov detectors located offsite, at distances of a few kilometers or greater, may facilitate non - intrusive monitoring and verification of reactor activities over a large area. as the standoff distance increases, the detector target mass must scale accordingly. this article quantifies the degree to which a kiloton - scale gadolinium - doped water - cherenkov detector can exclude the existence of undeclared reactors within a specified distance, and remotely detect the presence of a hidden reactor in the presence of declared reactors, by verifying the operational power and standoff distance using a feldman - cousins based likelihood analysis. a 1 - kton scale ( fiducial ) water cherenkov detector can exclude gigawatt - scale nuclear reactors up to tens of kilometers within a year. when attempting to identify the specific range and power of a reactor, the detector energy resolution was not sufficient to delineate between the two.
|
arxiv:2210.09391
|
potentially hazardous asteroids ( phas ), a special subset of near - earth objects, are both dangerous and scientifically valuable. phas that truly undergo close approaches with the earth ( dubbed caphas ) are of particular interest and extensively studied. the concept and study of capha can be extended to other solar system planets, which have significant implications for future planet - based observations and explorations. in this work, we conduct numerical simulations that incorporate the yarkovsky effect to study the transformation of main belt asteroids into caphas of terrestrial planets, using precise nominal timesteps, especially to ensure the reliability of the results for mercury and venus. our simulations predict a total of 1893 mercury - caphas, 3014 venus - caphas, 3791 earth - caphas and 18066 mars - caphas, with an occurrence frequency of about 1, 9, 15 and 66 per year, respectively. the values for mars - caphas are consistent with our previous work, which were based on simulations with a larger nominal timestep. the predicted occurrence frequency and velocity distribution of earth - caphas are in reasonable agreement with the observed population of earth - caphas. we also find that certain asteroids can be caught in close approach with different planets at different times, raising an interesting possibility of using them as transportation between terrestrial planets in the future.
|
arxiv:2502.13489
|
we explain how h \ " ormander ' s classical solution of the dbar - equation in the plane with a weight which permits growth near infinity carries over to the rather opposite situation when we ask for decay near infinity. here, however, a natural condition on the datum needs to be imposed. the condition is not only natural but also necessary to have the result at least in the fock weight case.
|
arxiv:1311.2020
|
we report on drastic change of vortex dynamics with increase of quenched disorder : for rather weak disorder we found a single vortex creep regime, which we attribute to a bragg - glass phase, while for enhanced disorder we found an increase of both the depinning current and activation energy with magnetic field, which we attribute to entangled vortex phase. we also found that introduction of additional defects always increases the depinning current, but it increases activation energy only for elastic vortex creep, while it decreases activation energy for plastic vortex creep.
|
arxiv:cond-mat/0104510
|
recently a lot of progress has been made towards a full classification of new physics effects in higgs observables by means of effective dimension six operators. specifically, higgs production in association with a high transverse momentum jet has been suggested as a way to discriminate between operators that modify the higgs - top coupling and operators that induce an effective higgs - gluon coupling - - - a distinction that is hard to achieve with signal strength measurements alone. with this article we would like to draw attention to another source of new physics in higgs + jet observables : the triple gluon operator $ o _ { 3g } $ ( consisting of three factors of the gluon field strength tensor ). we compute the distortions of kinematic distributions in higgs + jet production at a 14 tev lhc due to $ o _ { 3g } $ and compare them with the distortions due to dimension six operators involving the higgs doublet. we find that the transverse momentum, the jet rapidity and the difference between the higgs and jet rapidity are well suited to distinguish between the contributions from $ o _ { 3g } $ and those from other operators, and that the size of the distortions are similar if the wilson coefficients are of the same order as the expected bounds from other observables. we conclude that a full analysis of new physics in higgs + jet observables must take the contributions from $ o _ { 3g } $ into account.
|
arxiv:1411.2029
|
effectively tuning the magnetic state with current is essential for novel spintronic devices. magnetic van der waals ( vdw ) materials have shown superior properties for magnetic information storage applications based on the efficient spin torque effect. however, for most known vdw ferromagnets, ferromagnetic transition temperatures below room temperature strongly impede their applications, and a room - temperature vdw spintronic device with low energy consumption is still a long - sought goal. here, we realize highly efficient room - temperature nonvolatile magnetic switching by current in a single - material device based on the vdw ferromagnet fe3gate2. moreover, the switching current density and power dissipation are about 300 and 60000 times smaller than those of conventional spin - orbit - torque devices based on magnet / heavy - metal heterostructures. these findings mark important progress toward the application of magnetic vdw materials in the fields of spintronics and magnetic information storage.
|
arxiv:2308.12710
|
active particle assemblies can exhibit a wide range of interesting dynamical phases depending on internal parameters such as density, adhesion strength or self - propulsion. active self - rotations are rarely studied in this context, although they can be relevant for active matter systems, as we illustrate by analyzing the motion of chlamydomonas reinhardtii algae under different experimental conditions. inspired by this example, we simulate the dynamics of a system of interacting active disks endowed with active torques. at low packing fractions, adhesion causes the formation of small rotating clusters, resembling those observed when algae are stressed. at higher densities, the model shows a jamming to unjamming transition promoted by active torques and hindered by adhesion. our results yield a comprehensive picture of the dynamics of active rotators, providing useful guidance to interpret experimental results in cellular systems where rotations might play a role.
|
arxiv:2003.06239
|
emergent communication studies the development of language between autonomous agents, aiming to improve understanding of natural language evolution and increase communication efficiency. while temporal aspects of language have been considered in computational linguistics, there has been no research on temporal references in emergent communication. this paper addresses this gap, by exploring how agents communicate about temporal relationships. we analyse three potential influences for the emergence of temporal references : environmental, external, and architectural changes. our experiments demonstrate that altering the loss function is insufficient for temporal references to emerge ; rather, architectural changes are necessary. however, a minimal change in agent architecture, using a different batching method, allows the emergence of temporal references. this modified design is compared with the standard architecture in a temporal referential games environment, which emphasises temporal relationships. the analysis indicates that over 95 \ % of the agents with the modified batching method develop temporal references, without changes to their loss function. we consider temporal referencing necessary for future improvements to the agents ' communication efficiency, yielding a closer to optimal coding as compared to purely compositional languages. our readily transferable architectural insights provide the basis for their incorporation into other emergent communication settings.
|
arxiv:2310.06555
|
an induced subposet $ ( p _ 2, \ le _ 2 ) $ of a poset $ ( p _ 1, \ le _ 1 ) $ is a subset of $ p _ 1 $ such that for every two $ x, y \ in p _ 2 $, $ x \ le _ 2 y $ if and only if $ x \ le _ 1 y $. the boolean lattice $ q _ n $ of dimension $ n $ is the poset consisting of all subsets of $ \ { 1, \ dots, n \ } $ ordered by inclusion. given two posets $ p _ 1 $ and $ p _ 2 $ the poset ramsey number $ r ( p _ 1, p _ 2 ) $ is the smallest integer $ n $ such that in any blue / red coloring of the elements of $ q _ n $ there is either a monochromatically blue induced subposet isomorphic to $ p _ 1 $ or a monochromatically red induced subposet isomorphic to $ p _ 2 $. we provide upper bounds on $ r ( p, q _ n ) $ for two classes of $ p $ : parallel compositions of chains, i. e. \ posets consisting of disjoint chains which are pairwise element - wise incomparable, as well as subdivided $ q _ 2 $, which are posets obtained from two parallel chains by adding a common minimal and a common maximal element. this completes the determination of $ r ( p, q _ n ) $ for posets $ p $ with at most $ 4 $ elements. if $ p $ is an antichain $ a _ t $ on $ t $ elements, we show that $ r ( a _ t, q _ n ) = n + 3 $ for $ 3 \ le t \ le \ log \ log n $. additionally, we briefly survey proof techniques in the poset ramsey setting $ p $ versus $ q _ n $.
|
arxiv:2303.04462
|
in this work, we explore the de broglie - bohm quantum cosmology for a stiff matter, $ p = \ rho $, anisotropic n - dimensional universe. one begins by considering a gaussian wave function for the universe, which depends on the momenta parameters $ q _ 1 $ and $ q _ 2 $, in addition to the dispersion parameters $ \ sigma _ 1 $ and $ \ sigma _ 2 $. our solutions show that the extra dimensions are stabilized through a dynamical compactification mechanism within the quantum cosmology framework. in this case, we find two distinct configurations for the dynamics of the extra dimensions. the first configuration features larger extra dimensions at the bounce, which subsequently undergo compactification to a smaller size. in contrast, the second configuration exhibits a smaller extra dimension at the bounce, evolving toward a larger, finite, and stabilized value. we also address the particular five - dimensional case where the wheeler - dewitt equation degenerates.
|
arxiv:2502.13402
|
the $ p $ - laplacian is a nonlinear partial differential equation, parametrized by $ p \ in [ 1, \ infty ] $. we provide new numerical algorithms, based on the barrier method, for solving the $ p $ - laplacian numerically in $ o ( \ sqrt { n } \ log n ) $ newton iterations for all $ p \ in [ 1, \ infty ] $, where $ n $ is the number of grid points. we confirm our estimates with numerical experiments.
|
arxiv:2007.15044
|
we review recent developments in our understanding of the dynamics of strongly - coupled chiral $ su ( n ) $ gauge theories in four dimensions, problems which are potentially important in our quest to go beyond the standard $ su ( 3 ) _ { qcd } \ times ( su ( 2 ) \ times u ( 1 ) ) _ { gws } $ model of the fundamental interactions. the generalized symmetries and associated new ' t hooft anomaly - matching constraints allow us to exclude, in a wide class of chiral gauge theories, confining vacuum with full flavor symmetries supported by a set of color - singlet massless composite fermions. the color - flavor - locked dynamical higgs phase, dynamical abelianization or more general symmetry breaking phase, appear as plausible ir dynamics, depending on the massless matter fermions present. we revisit and discuss critically several well - known confinement criteria in the literature, for both chiral and vectorlike gauge theories, and propose tentative, new criteria for discriminating different phases. finally, we review an idea which might sound rather surprising at first, but is indeed realized in some softly - broken supersymmetric theories, that confinement in qcd is a small deformation ( in the ir end of the renormalization - group flow ) of a strongly - coupled, nonlocal, nonabelian conformal fixed point.
|
arxiv:2403.15775
|
we analyze magnetic kinematic dynamo in a conducting fluid where the stationary shear flow is accompanied by relatively weak random velocity fluctuations. the diffusionless and diffusion regimes are described. the growth rates of the magnetic field moments are related to the statistical characteristics of the flow describing divergence of the lagrangian trajectories. the magnetic field correlation functions are examined, we establish their growth rates and scaling behavior. general assertions are illustrated by explicit solution of the model where the velocity field is short - correlated in time.
|
arxiv:1010.5904
|
in this work, we designed a completely blind video quality assessment algorithm using the deep video prior. this work mainly explores the utility of deep video prior in estimating the visual quality of the video. in our work, we have used a single distorted video and a reference video pair to learn the deep video prior. at inference time, the learned deep prior is used to restore the original videos from the distorted videos. the ability of learned deep video prior to restore the original video from the distorted video is measured to quantify distortion in the video. our hypothesis is that the learned deep video prior fails in restoring the highly distorted videos. the restoring ability of deep video prior is proportional to the distortion present in the video. therefore, we propose to use the distance between the distorted video and the restored video as the perceptual quality of the video. our algorithm is trained using a single video pair and it does not need any labelled data. we show that our proposed algorithm outperforms the existing unsupervised video quality assessment algorithms in terms of lcc and srocc on a synthetically distorted video quality assessment dataset.
|
arxiv:2410.22566
|
in mathematics, value may refer to several, strongly related notions. in general, a mathematical value may be any definite mathematical object. in elementary mathematics, this is most often a number – for example, a real number such as π or an integer such as 42. the value of a variable or a constant is any number or other mathematical object assigned to it. physical quantities have numerical values attached to units of measurement. the value of a mathematical expression is the object assigned to this expression when the variables and constants in it are assigned values. the value of a function, given the value ( s ) assigned to its argument ( s ), is the quantity assumed by the function for these argument values. for example, if the function f is defined by f ( x ) = 2x² − 3x + 1, then assigning the value 3 to its argument x yields the function value 10, since f ( 3 ) = 2 · 3² − 3 · 3 + 1 = 10. if the variable, expression or function only assumes real values, it is called real - valued. likewise, a complex - valued variable, expression or function only assumes complex values. = = see also = = value function value ( computer science ) absolute value truth value = = references = =
|
https://en.wikipedia.org/wiki/Value_(mathematics)
|
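the worked example in the entry above can be checked directly in code ; this is only a restatement of the arithmetic, nothing more.

```python
# Direct check of the worked example: f(x) = 2x^2 - 3x + 1, so f(3) = 10.
def f(x):
    return 2 * x**2 - 3 * x + 1

assert f(3) == 10
print(f(3))   # 10 — the value of the function at argument 3
```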
this is the first paper of a three - part series in which we develop a theory of conformal blocks for $ c _ 2 $ - cofinite vertex operator algebras ( voas ) that are not necessarily rational. the ultimate goal of this series is to prove a sewing - factorization theorem ( and in particular, a factorization formula ) for conformal blocks over holomorphic families of compact riemann surfaces, associated to grading - restricted ( generalized ) modules of $ c _ 2 $ - cofinite voas. in this paper, we prove that if $ \ mathbb v $ is a $ c _ 2 $ - cofinite voa, if $ \ mathfrak x $ is a compact riemann surface with $ n $ incoming marked points and $ m $ outgoing ones, each equipped with a local coordinate, and if $ \ mathbb w $ is a grading - restricted $ \ mathbb v ^ { \ otimes n } $ - module, then the ` ` dual fusion product " exists as a grading - restricted $ \ mathbb v ^ { \ otimes m } $ - module. indeed, we prove a more general version of this result without assuming $ \ mathbb v $ to be $ c _ 2 $ - cofinite. our main method is a generalization of the propagation of conformal blocks.
|
arxiv:2305.10180
|
we show that a general one - dimensional ( 1d ) lattice with nonlinear inter - particle interactions can always be thermalized for arbitrarily small nonlinearity in the thermodynamic limit, thus proving equipartition hypothesis in statistical physics for an important class of systems. particularly, we find that in the lattices of interaction potential $ v ( x ) = x ^ 2 / 2 + \ lambda x ^ n / n $ with $ n \ geq4 $, there is \ textit { a universal scaling law } for the thermalization time $ t ^ { eq } $, i. e., $ t ^ { eq } \ propto \ lambda ^ { - 2 } \ epsilon ^ { - ( n - 2 ) } $, where $ \ epsilon $ is the energy density. numerical simulations confirm that it is accurate for an even $ n $. a slight correction is needed for an odd $ n $, which is due to the chirikov overlap occurring in the weakly nonlinear regime between extra vibration modes excited by the asymmetry of potential. based on this scaling law, as well as previous prediction for the case of $ n = 3 $, a universal formula for the thermalization time for a 1d lattice with a general interaction potential is obtained.
|
arxiv:1811.05697
|
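the quoted scaling law only fixes $ t ^ { eq } $ up to a constant, so the toy computation below compares ratios for assumed values of the nonlinearity strength and energy density ; the numbers are illustrative, not taken from the paper.

```python
# Illustration of the quoted scaling t_eq ∝ λ^(-2) ε^(-(n-2)); only ratios
# are meaningful here, since the proportionality constant is not specified.
def t_eq_scaling(lmbda, eps, n):
    return lmbda**-2 * eps**-(n - 2)

n = 4                          # quartic interaction term x^4/4
base = t_eq_scaling(0.1, 0.02, n)
halved = t_eq_scaling(0.1, 0.01, n)
print(halved / base)           # 4.0: halving ε quadruples t_eq when n = 4
```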
a light ( $ m _ { \ nu d } \ lesssim $ mev ) dark fermion mixing with the standard model neutrinos can naturally equilibrate with the neutrinos via oscillations and scattering. in the presence of dark sector interactions, production of dark fermions is generically suppressed above bbn, but then enhanced at later times. over much of the parameter space, we find that the dark sector equilibrates, even for mixing angles $ \ theta _ 0 $ as small as $ 10 ^ { - 13 } $, and equilibration occurs at $ t _ { \ rm equil } \ simeq m _ { \ nu d } \ left ( \ theta _ 0 ^ 2 m _ { pl } / m _ { \ nu d } \ right ) ^ { 1 / 5 } $ which is naturally at most a few orders of magnitude above the dark fermion mass. the implications of this are twofold : one, that light states are often only constrained by the cmb and lss without leaving an imprint on bbn, and two, that sectors which equilibrate before recombination will typically have a mass threshold before recombination, as well. this can result in dark radiation abruptly transitioning from non - interacting to interacting, or vice - versa, a ' ' step ' ' in the amount of dark radiation, and dark matter with similar transitions in its interactions, all of which can leave important signals in the cmb and lss, and may be relevant for cosmological tensions in observables such as $ h _ 0 $ or $ s _ 8 $. minimal models leave an unambiguous imprint on the cmb above the sensitivity of upcoming experiments.
|
arxiv:2301.10792
|
in this paper, we present an experimental study for the classification of perceived human stress using non - invasive physiological signals. these include electroencephalography ( eeg ), galvanic skin response ( gsr ), and photoplethysmography ( ppg ). we conducted experiments consisting of steps including data acquisition, feature extraction, and perceived human stress classification. the physiological data of $ 28 $ participants are acquired in an open eye condition for a duration of three minutes. four different features are extracted in time domain from eeg, gsr and ppg signals and classification is performed using multiple classifiers including support vector machine, the naive bayes, and multi - layer perceptron ( mlp ). the best classification accuracy of 75 % is achieved by using mlp classifier. our experimental results have shown that our proposed scheme outperforms existing perceived stress classification methods, where no stress inducers are used.
|
arxiv:1905.06384
|
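a hedged sketch of the classification stage on synthetic stand - in features ( the actual eeg / gsr / ppg time - domain features and the 28 - subject recordings are not reproduced here ) shows how the svm, naive bayes and mlp comparison could be set up with scikit - learn.

```python
# Classification stage only, on synthetic stand-in features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((28, 12))    # 28 subjects x (3 signals * 4 features)
y = rng.integers(0, 2, size=28)      # placeholder stressed / not-stressed labels

for name, clf in [("svm", SVC()),
                  ("naive bayes", GaussianNB()),
                  ("mlp", MLPClassifier(max_iter=2000))]:
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=4)
    print(name, scores.mean())
```

with random labels the accuracies will hover around chance ; the point is only the structure of the pipeline and the classifier comparison, not the reported numbers.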
let $ 3 \ le d \ le k $ and $ \ nu \ ge 0 $ be fixed and $ \ mathcal { f } \ subset \ binom { [ n ] } { k } $. the matching number of $ \ mathcal { f } $, denoted by $ \ nu ( \ mathcal { f } ) $, is the maximum number of pairwise disjoint sets in $ \ mathcal { f } $, and $ \ mathcal { f } $ is $ d $ - cluster - free if it does not contain $ d $ sets with the union of size at most $ 2k $ and empty intersection. in this paper, we give a lower bound and an upper bound for the maximum size of a $ d $ - cluster - free family with a matching number at least $ \ nu + 1 $. in particular, our result of the case $ \ nu = 1 $ settles a conjecture of mammoliti and britz. we also introduce a tur \ ' { a } n problem in hypergraphs that allows multiple edges, which may be of independent interest.
|
arxiv:1811.07064
|
in a recent paper the authors beliakova, blanchet and gainutdinov have shown that the modified trace on the category $ h $ - pmod of the projective modules corresponds to the symmetrised integral on the finite dimensional pivotal hopf algebra $ h $. we generalize this fact to the context of $ g $ - graded categories and hopf $ g $ - coalgebra studied by turaev - virelizier. we show that the symmetrised $ g $ - integral on a finite type pivotal hopf $ g $ - coalgebra induces a modified trace in the associated $ g $ - graded category.
|
arxiv:1804.02416
|
as the elderly population increases, the proportion of dementia patients grows. the social cost of caring for dementia patients has therefore become a major concern for many nations. this article introduces a dementia assistive system operated by various sensors and devices installed in the body area and activity area of patients. since this system runs on a network comprising many nodes, it requires techniques to reduce the network performance degradation caused by densely deployed sensors and devices. this article also reviews existing protocols for sensor and device communication at both low - rate and high - rate transmission.
|
arxiv:1510.04240
|
the problem of concern in this work is the construction of divergence - free fields given scattered horizontal components. as is customary, the problem is formulated as a pde - constrained least squares problem. the novelty of our approach is to construct the so - called adjusted field as the unique solution along an appropriately chosen descent direction. the latter is obtained by the adjoint equation technique. it is shown that sasaki ' s classical adjusted field is a particular case. with this choice of descent directions, the underlying mass - consistent model leads to an elliptic problem, which is solved by means of a radial basis function method. finally, some numerical results for wind field adjustment are presented.
|
arxiv:1612.00788
|
we investigate the dynamics of the xxz spin chain after a geometric quench, which is realized by connecting two half - chains prepared in their ground states with zero and maximum magnetizations, respectively. the profiles of magnetization after the subsequent time evolution are studied numerically by density - matrix renormalization group methods, and a comparison to the predictions of generalized hydrodynamics yields a very good agreement. we also calculate the profiles of entanglement entropy and propose an ansatz for the noninteracting xx case, based on arguments from conformal field theory. in the general interacting case, the propagation of the entropy front is studied numerically both before and after the reflection from the chain boundaries. finally, our results for the magnetization fluctuations indicate a leading order proportionality relation to the entanglement entropy.
|
arxiv:1902.05834
|
in this report i will present the recent results on k mesons from the kloe experiment at the dafne e + e - collider working at the center of mass energy ~ 1gev ~ m _ { phi }. they include v _ { us } determinations, the test on the unitarity of the first row of the ckm matrix and the related experimental measurements. tests of lepton universality from leptonic and semileptonic decays will be also discussed. then i will present tests of quantum coherence, cpt and lorentz symmetry performed by studying the time evolution of the neutral kaon system.
|
arxiv:0805.1969
|
let $ x $ be a compact k \ " ahler fourfold with klt singularities and vanishing first chern class, smooth in codimension two. we show that $ x $ admits a beauville - bogomolov decomposition : a finite quasi - \ ' etale cover of $ x $ splits as a product of a complex torus and singular calabi - yau and irreducible holomorphic symplectic varieties. we also prove that $ x $ has small projective deformations and the fundamental group of $ x $ is projective. to obtain these results, we propose and study a new version of the lipman - zariski conjecture.
|
arxiv:2101.06764
|
the dynamics and thermodynamics of melting in two - dimensional coulomb clusters is revisited using molecular dynamics and monte carlo simulations. several parameters are considered, including the lindemann index, the largest lyapunov exponent and the diffusion constant. in addition to the orientational and radial melting processes, isomerizations and complex size effects are seen to occur in a very similar way to atomic and molecular clusters. the results are discussed in terms of the energy landscape represented through disconnectivity graphs, with proper attention paid to the broken ergodicity problems in simulations. clusters bound by 1 / r ^ 3 and e ^ { - \ kappa r } / r forces, and heterogeneous clusters made of singly - and doubly - charged species, are also studied, as well as the evolution toward larger systems.
|
arxiv:physics/0511034
|
the scale of fermion mass generation can, as shown by appelquist and chanowitz, be bounded from above by relating it to the scale of unitarity violation in the helicity nonconserving amplitude for fermion - anti - fermion pairs to scatter into pairs of longitudinally polarized electroweak gauge bosons. in this paper, we examine the process t tbar - > w _ l w _ l in a family of phenomenologically - viable deconstructed higgsless models and we show that scale of unitarity violation depends on the mass of the additional vector - like fermion states that occur in these theories ( the states that are the deconstructed analogs of kaluza - klein partners of the ordinary fermions in a five - dimensional theory ). for sufficiently light vector fermions, and for a deconstructed theory with sufficiently many lattice sites ( that is, sufficiently close to the continuum limit ), the appelquist - chanowitz bound can be substantially weakened. more precisely, we find that, as one varies the mass of the vector - like fermion for fixed top - quark and gauge - boson masses, the bound on the scale of top - quark mass generation interpolates smoothly between the appelquist - chanowitz bound and one that can, potentially, be much higher. in these theories, therefore, the bound on the scale of fermion mass generation is independent of the bound on the scale of gauge - boson mass generation. while our analysis focuses on deconstructed higgsless models, any theory in which top - quark mass generation proceeds via the mixing of chiral and vector fermions will give similar results.
|
arxiv:hep-ph/0702281
|
a speedy pixon algorithm for image reconstruction is described. two applications of the method to simulated astronomical data sets are also reported. in one case, galaxy clusters are extracted from multiwavelength microwave sky maps using the spectral dependence of the sunyaev - zel ' dovich effect to distinguish them from the microwave background fluctuations and the instrumental noise. the second example involves the recovery of a sharply peaked emission profile, such as might be produced by a galaxy cluster observed in x - rays. these simulations show the ability of the technique both to detect sources in low signal - to - noise data and to deconvolve a telescope beam in order to recover the internal structure of a source.
|
arxiv:astro-ph/9912078
|
in this paper, we investigate the effect of the u. s. - - china trade war on stock markets from a financial contagion perspective, based on high - frequency financial data. specifically, to account for risk contagion between the u. s. and china stock markets, we develop a novel jump - diffusion process. for example, we consider three channels for volatility contagion - - such as integrated volatility, positive jump variation, and negative jump variation - - and each stock market is able to affect the other stock market as an overnight risk factor. we develop a quasi - maximum likelihood estimator for model parameters and establish its asymptotic properties. furthermore, to identify contagion channels and test the existence of a structural break, we propose hypothesis test procedures. from the empirical study, we find evidence of financial contagion from the u. s. to china and evidence that the risk contagion channel has changed from integrated volatility to negative jump variation.
|
arxiv:2111.09655
|
existing parking recommendation solutions mainly focus on finding and suggesting parking spaces based only on unoccupied options. however, there are other factors associated with parking spaces that can influence someone ' s choice of parking, such as fare, parking rules, walking distance to the destination, travel time, and the likelihood of being unoccupied at a given time. more importantly, these factors may change over time and conflict with each other, which makes the recommendations produced by current parking recommender systems ineffective. in this paper, we propose a novel problem called multi - objective parking recommendation. we present a solution by designing a multi - objective parking recommendation engine called moparker that considers various conflicting factors together. specifically, we utilise a non - dominated sorting technique to calculate a set of pareto - optimal solutions, consisting of recommended trade - off parking spots. we conduct extensive experiments using two real - world datasets to show the applicability of our multi - objective recommendation methodology.
|
arxiv:2106.07384
|
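a minimal non - dominated ( pareto ) filter over a handful of invented parking options illustrates the kind of trade - off set the abstract describes ; the spot names and attribute values are hypothetical, and all objectives are treated as smaller - is - better.

```python
# Minimal non-dominated (Pareto) filter over candidate parking spots.
candidates = {
    "spot_a": (4.0, 250, 0.2),   # (fare $, walk distance m, occupancy risk)
    "spot_b": (2.5, 600, 0.1),
    "spot_c": (5.0, 300, 0.4),   # dominated by spot_a
    "spot_d": (1.0, 900, 0.5),
}

def dominates(u, v):
    """u dominates v if it is no worse in every objective and better in one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(options):
    return {name: obj for name, obj in options.items()
            if not any(dominates(other, obj)
                       for o, other in options.items() if o != name)}

print(sorted(pareto_front(candidates)))   # ['spot_a', 'spot_b', 'spot_d']
```

the surviving spots are the recommended trade - offs : none of them can be improved in one objective without getting worse in another, which is exactly what a non - dominated sorting step computes.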
in this work, we propose an asymptotic preserving scheme for a non - linear kinetic reaction - transport equation, in the regime of sharp interface. with a non - linear reaction term of kpp - type, a phenomenon of front propagation has been proved in [ 9 ]. this behavior can be highlighted by considering a suitable hyperbolic limit of the kinetic equation, using a hopf - cole transform. it has been proved in [ 6, 8, 11 ] that the logarithm of the distribution function then converges to the viscosity solution of a constrained hamilton - jacobi equation. the hyperbolic scaling and the hopf - cole transform make the kinetic equation stiff. thus, the numerical resolution of the problem is challenging, since the standard numerical methods usually lead to high computational costs in these regimes. the asymptotic preserving ( ap ) schemes have been typically introduced to deal with this difficulty, since they are designed to be stable along the transition to the macroscopic regime. the scheme we propose is adapted to the non - linearity of the problem, enjoys a discrete maximum principle and solves the limit equation in the sense of viscosity. it is based on a dedicated micro - macro decomposition, attached to the hopf - cole transform. as it is well adapted to the singular limit, our scheme is able to cope with singular behaviors in space ( sharp interface ), and possibly in velocity ( concentration in the velocity distribution ). various numerical tests are proposed, to illustrate the properties and the efficiency of our scheme.
|
arxiv:1705.06054
|