text | source
---|---
The recently observed high-spin rotational bands in the odd-$A$ nuclei $^{247,249}$Cm and $^{249}$Cf [Tandel \textit{et al.}, Phys. Rev. C 82 (2010) 041301(R)] are investigated by using the cranked shell model (CSM) with the pairing correlations treated by a particle-number-conserving (PNC) method, in which the blocking effects are taken into account exactly. The experimental moments of inertia and alignments, and their variations with the rotational frequency $\omega$, are reproduced very well by the PNC-CSM calculations. By examining the $\omega$-dependence of the occupation probability of each cranked Nilsson orbital near the Fermi surface and the contributions of valence orbitals to the angular momentum alignment in each major shell, the level crossing and upbending mechanism in each nucleus is understood clearly.
|
arxiv:1101.3607
|
We provide line luminosities and spectroscopic templates of prominent optical emission lines of 133 Galactic Wolf-Rayet stars by exploiting Gaia DR3 parallaxes and optical spectrophotometry, and provide comparisons with 112 counterparts in the Magellanic Clouds. Average line luminosities of the broad blue (He II 4686, C III 4647,51, N III 4634,41, N V 4603,20) and yellow (C IV 5801,12) emission features for WN, WN/C, WC and WO stars have application in characterising the Wolf-Rayet populations of star-forming regions of distant, unresolved galaxies. Early-type WN stars reveal lower line luminosities in more metal-poor environments, but the situation is less clear for late-type WN stars. LMC WC4-5 line luminosities are higher than their Milky Way counterparts, with line luminosities of Magellanic Cloud WO stars higher than Galactic stars. We highlight other prominent optical emission lines: N IV 3478,85 for WN and WN/C stars, O IV 3403,13 for WC and WO stars, and O VI 3811,34 for WO stars. We apply our calibrations to representative metal-poor and metal-rich WR galaxies, IC 4870 and NGC 3049, respectively, with spectral templates also applied based on a realistic mix of subtypes. Finally, the global blue and C IV 5801,12 line luminosities of the Large (Small) Magellanic Clouds are 2.6e38 erg/s (9e36 erg/s) and 8.8e37 erg/s (4e36 erg/s), respectively, with the cumulative WR line luminosity of the Milky Way estimated to be an order of magnitude higher than the LMC.
|
arxiv:2301.11297
|
(Abridged) The lifetime of protoplanetary disks around young stars limits the timescale when planets form. A disk dissipation timescale < 10 Myr was inferred from surveys providing the fraction of stars with disks in young stellar clusters with different ages. However, most previous surveys focused on the compact region within ~2 pc of the clusters' centers, for which the disk fraction information considering the outer part is practically absent. We aim to test if disk fraction estimates change when inferred from an extended region around the clusters' centers. Gaia EDR3 data and a best-suited, Virtual Observatory (VO)-based tool, Clusterix, are used to identify member stars for a representative sample of 19 young clusters considering two concentric fields of view (FOV) with radii ~20 pc and ~2 pc. Our analysis reveals that the inner disk fractions inferred from the compact and the extended regions are equal within ~10%, which does not support a previous hypothesis proposing that disk fractions should be significantly larger considering extended regions. A list of member and disk stars in each cluster is provided and stored in a VO-compliant archive. Averaged values and plots characterizing the whole clusters are also provided, including HR diagrams based on Gaia colors and absolute magnitudes. Our results cover the largest fields ever probed when dealing with disk fractions for all clusters analysed, and imply that their complete characterization requires the use of wide FOVs. The resulting database is a benchmark for future detailed studies of young clusters, whose disk fractions must be accurately determined by using multi-wavelength analysis potentially combined with data from coming Gaia releases.
|
arxiv:2206.03511
|
##rs, text enlargers, organization tools, word predictions, and talking word processors falls under the category of educational software. = = Eating impairments = = Adaptive eating devices include items commonly used by the general population, like spoons, forks and plates. However, they become assistive technology when they are modified to accommodate the needs of people who have difficulty using standard cutlery due to a disabling condition. Common modifications include increasing the size of the utensil handle to make it easier to grasp. Plates and bowls may have a guard on the edge that stops food being pushed off the dish when it is being scooped. More sophisticated equipment for eating includes manual and powered feeding devices. These devices support those who have little or no hand and arm function and enable them to eat independently. = = In sports = = Assistive technology in sports is an area of technology design that is growing. Assistive technology is the array of new devices created to enable sports enthusiasts who have disabilities to play. Assistive technology may be used in adaptive sports, where an existing sport is modified to enable players with a disability to participate; or, assistive technology may be used to invent completely new sports with athletes with disabilities exclusively in mind. An increasing number of people with disabilities are participating in sports, leading to the development of new assistive technology. Assistive technology devices can be simple, or "low-technology", or they may use highly advanced technology. "Low-tech" devices can include velcro gloves and adaptive bands and tubes. "High-tech" devices can include all-terrain wheelchairs and adaptive bicycles. Accordingly, assistive technology can be found in sports ranging from local community recreation to the elite Paralympic Games.
More complex assistive technology devices have been developed over time, and as a result, sports for people with disabilities "have changed from being a clinical therapeutic tool to an increasingly competition-oriented activity". = = In education = = In the United States there are two major pieces of legislation that govern the use of assistive technology within the school system. The first is Section 504 of the Rehabilitation Act of 1973, and the second is the Individuals with Disabilities Education Act (IDEA), which was first enacted in 1975 under the name the Education for All Handicapped Children Act. In 2004, during the reauthorization period for IDEA, the National Instructional Material Access Center (NIMAC) was created, which provided a repository of accessible text, including publishers' textbooks, to students with a qualifying disability. Files provided are in XML format and used as
|
https://en.wikipedia.org/wiki/Assistive_technology
|
Tissue growth underpins a wide array of biological and developmental processes, and numerical modeling of growing systems has been shown to be a useful tool for understanding these processes. However, the phenomena that can be captured are often limited by the size of systems that can be modeled. Here, we address this limitation by introducing a lattice-Boltzmann method (LBM) for a growing system that is able to efficiently model hydrodynamic length scales. The model incorporates a novel approach to describing the growing front of a tissue, which we use to investigate the dynamics of the interface of growing model tissues. We find that the interface grows with scaling in agreement with the Kardar-Parisi-Zhang (KPZ) universality class when growth in the system is bulk driven. Interestingly, we also find the emergence of a previously unreported hydrodynamic instability when proliferation is restricted to the tissue edge. We then develop an analytical theory to show that the instability arises due to a coupling between the number of cells actively proliferating and the position of the interface.
|
arxiv:2303.02210
|
A computer study is performed to estimate the influence of the small-$k_t$ region in the BFKL evolution equation. We consider the small-$x$ region of the deep inelastic structure function $F_2$ and show that the magnitude of the small-$k_t$ region depends on $Q^2$ and $x_B$. We suggest that the width of the $\log k_t^2$ distribution in the final state may serve as an additional footprint of BFKL dynamics. For diffractive dissociation it is shown that the contribution of the infrared region is large, even for large $Q^2$. This contribution becomes smaller only if restrictions on the final state are imposed.
|
arxiv:hep-ph/9511399
|
It is shown that neutron matter interacting through the Argonne v18 pair potential plus modern variants of the Urbana or Illinois three-body forces is unstable. For the energy of $N$ neutrons $E(N)$, which interact through these forces, we prove mathematically that $E(N) = -cN^3 + \mathcal{O}(N^{8/3})$, where $c > 0$ is a constant. This means that: (i) the energy per particle and the neutron density diverge rapidly for large neutron numbers; (ii) bound states of $N$ neutrons exist for $N$ large enough. The neutron matter collapse is possible due to the form of the repulsive core in the three-body forces, which vanishes when three nucleons occupy the same site in space. The old variant of the forces, Urbana VI, where the phenomenological repulsive core does not vanish at the origin, resolves this problem. We prove that to prevent the collapse one should add a repulsive term to the Urbana IX potential, which should be larger than 50 MeV when 3 nucleons occupy the same spatial position.
|
arxiv:1306.5573
|
Out-of-distribution (OOD) detection is essential for the reliability of ML models. Most existing methods for OOD detection learn a fixed decision criterion from a given in-distribution dataset and apply it universally to decide if a data point is OOD. Recent work~\cite{fang2022is} shows that given only in-distribution data, it is impossible to reliably detect OOD data without extra assumptions. Motivated by this theoretical result and recent exploration of test-time adaptation methods, we propose a non-parametric test-time \textbf{ada}ptation framework for \textbf{o}ut-of-\textbf{d}istribution \textbf{d}etection (\abbr). Unlike conventional methods, \abbr utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions. The framework incorporates detected OOD instances into decision-making, reducing false positive rates, particularly when ID and OOD distributions overlap significantly. We demonstrate the effectiveness of \abbr through comprehensive experiments on multiple OOD detection benchmarks; extensive empirical studies show that \abbr significantly improves the performance of OOD detection over state-of-the-art methods. Specifically, \abbr reduces the false positive rate (FPR95) by $23.23\%$ on the CIFAR-10 benchmarks and $38\%$ on the ImageNet-1k benchmarks compared to the advanced methods. Lastly, we theoretically verify the effectiveness of \abbr.
|
arxiv:2311.16420
|
Let $\mu(G)$ and $\mu_{\min}(G)$ be the largest and smallest eigenvalues of the adjacency matrix of a graph $G$. We refine quantitatively the following two results on graph spectra: (i) if $H$ is a proper subgraph of a connected graph $G$, then $\mu(G) > \mu(H)$; (ii) if $G$ is a connected nonbipartite graph, then $\mu(G) > -\mu_{\min}(G)$.
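The two classical inequalities refined in this abstract are easy to check numerically on a small example. The sketch below (an illustration, not part of the paper; the chosen graph and helper function are our own) verifies both on a 5-cycle with one chord, which is connected and nonbipartite.

```python
import numpy as np

def adjacency_extremes(edges, n):
    """Return (largest, smallest) adjacency eigenvalues of a simple graph on n vertices."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    vals = np.linalg.eigvalsh(A)  # eigenvalues in ascending order for symmetric matrices
    return vals[-1], vals[0]

# G: a 5-cycle with one chord -- connected and nonbipartite (it contains a triangle).
G_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
mu_G, mu_min_G = adjacency_extremes(G_edges, 5)

# H: a proper subgraph of G obtained by dropping the chord (H is the plain 5-cycle).
mu_H, _ = adjacency_extremes(G_edges[:-1], 5)

print(mu_G > mu_H)       # (i) the largest eigenvalue strictly grows with proper supergraphs
print(mu_G > -mu_min_G)  # (ii) holds because G is connected and nonbipartite
```

For a bipartite graph the spectrum is symmetric about zero, so (ii) would fail with equality; the chord breaking bipartiteness is what makes the second inequality strict.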
|
arxiv:math/0609111
|
Alzheimer's disease (AD) is a common neurodegenerative disorder nowadays. Amyloid-beta (A$\beta$) and tau proteins are among the main contributors to the development or propagation of AD. In AD, A$\beta$ proteins clump together to form plaques and disrupt cell functions. On the other hand, the abnormal chemical change in the brain helps to build sticky tau tangles that block the neuron's transport system. Astrocytes generally maintain a healthy balance in the brain by clearing the A$\beta$ plaques (toxic A$\beta$). However, over-activated astrocytes release chemokines and cytokines in the presence of A$\beta$ and react to pro-inflammatory cytokines, further increasing the production of A$\beta$. In this paper, we construct a mathematical model that can capture astrocytes' dual behaviour. Furthermore, we reveal that the disease propagation depends on the current time instance and the disease's earlier status, called the ``memory effect''. We consider a fractional-order network mathematical model to capture the influence of such a memory effect on AD propagation. We have integrated brain connectome data into the model and studied the memory effect, the dual role of astrocytes, and the brain's neuronal damage. Based on the pathology, primary, secondary, and mixed tauopathy parameters are considered in the model. Due to the mixed tauopathy, different brain nodes or regions in the brain connectome accumulate different toxic concentrations of A$\beta$ and tau proteins. Finally, we explain how the memory effect can slow down the propagation of such toxic proteins in the brain, decreasing the rate of neuronal damage.
|
arxiv:2208.03540
|
Many cooperative multi-agent problems require agents to learn individual tasks while contributing to the collective success of the group. This is a challenging task for current state-of-the-art multi-agent reinforcement learning algorithms, which are designed to maximize either the global reward of the team or the individual local rewards. The problem is exacerbated when either of the rewards is sparse, leading to unstable learning. To address this problem, we present decomposed multi-agent deep deterministic policy gradient (De-MADDPG): a novel cooperative multi-agent reinforcement learning framework that simultaneously learns to maximize the global and local rewards. We evaluate our solution on the challenging defensive escort team problem and show that our solution achieves significantly better and more stable performance than the direct adaptation of the MADDPG algorithm.
|
arxiv:2003.10598
|
Many exclusive $c \to d/s \ell^+ \nu_\ell~(\ell = e, \mu, \tau)$ transitions have been well measured, and they can be used to test the theoretical calculations. Motivated by this, we study the $D \to P/V/S \ell^+ \nu_\ell$ decays induced by the $c \to d/s \ell^+ \nu_\ell$ transitions with the SU(3) flavor symmetry approach, where $P$ denotes the pseudoscalar meson, $V$ denotes the vector meson, and $S$ denotes the scalar meson with a mass below $1$ GeV. The different decay amplitudes of the $D \to P \ell^+ \nu_\ell$, $D \to V \ell^+ \nu_\ell$ or $D \to S \ell^+ \nu_\ell$ decays can be related by using the SU(3) flavor symmetry and by considering the SU(3) flavor breaking. Using the present data on $D \to P/V/S \ell^+ \nu_\ell$, we predict the not yet measured or not yet well measured processes in the $D \to P/V/S \ell^+ \nu_\ell$ decays. We find that the SU(3) flavor symmetry approach works well in the semileptonic $D \to P/V \ell^+ \nu_\ell$ decays. For the $D \to S \ell^+ \nu_\ell$ decays, only the decay $D^+_s \to f_0(980) e^+ \nu_e$ has been measured; the branching ratios of the $D^+_s \to f_0(980) e^+ \nu_e$ and $D \to S (S \to P_1 P_2) \ell^+ \nu_\ell$ decays are used to constrain the nonperturbative parameters and then to predict not yet measured $D \to S \ell^+ \nu_\ell$ decays; in addition, the two-quark and the four-quark scenarios for the
|
arxiv:2301.00079
|
We discuss a Poincar\'e invariant coupled-channel formalism which is based on the point form of relativistic quantum mechanics. Electromagnetic scattering of an electron by a 2-body bound state is treated as a 2-channel problem for a Bakamjian-Thomas-type mass operator. In this way retardation effects in the photon-exchange interaction are fully taken into account. The electromagnetic current of the 2-body bound state is then extracted from the one-photon-exchange optical potential. As an application we calculate electromagnetic pion and deuteron form factors. Wrong cluster properties, inherent in the Bakamjian-Thomas framework, are seen to cause spurious (unphysical) contributions in the current. These are separated and eliminated in an unambiguous way such that one is left with a current that has all the desired properties.
|
arxiv:1011.0170
|
Consider d uniformly random permutation matrices on n labels, and consider the sum of these matrices along with their transposes. The total can be interpreted as the adjacency matrix of a random regular graph of degree 2d on n vertices. We consider limit theorems for various combinatorial and analytical properties of this graph (or the matrix) as n grows to infinity, either when d is kept fixed or grows slowly with n. In a suitable weak convergence framework, we prove that the (finite but growing in length) sequences of the number of short cycles and of cyclically non-backtracking walks converge to distributional limits. We estimate the total variation distance from the limit using Stein's method. As an application of these results we derive limits of linear functionals of the eigenvalues of the adjacency matrix. A key step in this latter derivation is an extension of the Kahn-Szemer\'edi argument for estimating the second largest eigenvalue for all values of d and n.
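The random-matrix model in this abstract is simple to instantiate. The following sketch (our own illustration; the function name and parameters are not from the paper) builds the sum of d random permutation matrices and their transposes, and checks that every row and column sums to 2d, i.e. that the result is the adjacency matrix of a 2d-regular multigraph.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_adjacency(n, d, rng):
    """Sum of d uniformly random n-by-n permutation matrices and their transposes.

    Each permutation matrix contributes 1 to every row and column sum, and so does
    its transpose, so all row/column sums equal 2d. Self-loops and repeated edges
    are possible and appear as diagonal entries and entries greater than 1.
    """
    A = np.zeros((n, n), dtype=int)
    for _ in range(d):
        P = np.eye(n, dtype=int)[rng.permutation(n)]  # random permutation matrix
        A += P + P.T
    return A

A = permutation_adjacency(n=100, d=3, rng=rng)
print(A.sum(axis=0))  # every column sums to 2d = 6
```

The limit theorems in the paper concern statistics of this matrix (short cycles, non-backtracking walks, linear eigenvalue functionals) as n grows; this sketch only fixes the combinatorial object they are about.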
|
arxiv:1109.4094
|
In this paper, we advance in investigating a specific set of eigenvectors of a linearly independent square root of the identity matrix to obtain a new class of spin-half mass-dimension-one fermions. Such eigenvectors, after an appropriate dual-structure examination, may serve as expansion coefficients of a local field.
|
arxiv:2203.05992
|
We study the first derivative of the staggered magnetization squared, $dm^\dag(\theta)^2/d\theta$, and the second derivative, $d^2 E_0(\theta)/d\theta^2$, of the ground-state energy per site. The parameter $\theta$ controls the anisotropy between horizontal and vertical couplings in a two-dimensional (2D) spin-1/2 antiferromagnetic Heisenberg model. It is shown that both derivatives diverge at $\theta = 1$, where the anisotropic 2D model reduces to the 1D model.
|
arxiv:cond-mat/9711021
|
We propose a variation of spacetime noncommutative field theory to realize the stringy spacetime uncertainty relation without breaking any of the global symmetries of the homogeneous isotropic universe. We study the spectrum of metric perturbations in this model for a wide class of accelerating background cosmologies. Spacetime noncommutativity leads to a coupling between the fluctuation modes and the background cosmology which is nonlocal in time. For each mode, there is a critical time at which the spacetime uncertainty relation is saturated; this is the time when the mode is generated. These effects lead to a spectrum of fluctuations whose spectral index is different from what is obtained for commutative spacetime in the infrared region, but is unchanged in the ultraviolet region. In the special case of an exponentially expanding background, we find a scale-invariant spectrum, but with a different magnitude than in the context of commutative spacetime if the Hubble constant is above the string scale.
|
arxiv:hep-th/0203119
|
We review the status and perspectives of the search for eta-mesic helium at the cooler synchrotron COSY.
|
arxiv:0909.3979
|
We construct a map between Bloch's higher Chow groups and Deligne homology for smooth, complex quasiprojective varieties on the level of complexes. For complex projective varieties this results in a formula which generalizes at the same time the classical Griffiths Abel-Jacobi map and the Borel/Beilinson/Goncharov regulator type maps.
|
arxiv:math/0409116
|
The elimination of diseases such as polio by Dr. Jonas Salk. Gene mapping and gene sequencing, invented by Drs. Mark Skolnik and Walter Gilbert, respectively, are the two technologies that made the Human Genome Project feasible. Computer science, built upon a foundation of theoretical linguistics, discrete mathematics, and electrical engineering, studies the nature and limits of computation. Subfields include computability, computational complexity, database design, computer networking, artificial intelligence, and the design of computer hardware. One area in which advances in computing have contributed to more general scientific development is by facilitating the large-scale archiving of scientific data. Contemporary computer science typically distinguishes itself by emphasizing mathematical 'theory' in contrast to the practical emphasis of software engineering. Einstein's paper "On the Quantum Theory of Radiation" outlined the principles of the stimulated emission of photons. This led to the invention of the laser (light amplification by the stimulated emission of radiation) and the optical amplifier, which ushered in the information age. It is optical amplification that allows fiber-optic networks to transmit the massive capacity of the internet. Based on wireless transmission of electromagnetic radiation and global networks of cellular operation, the mobile phone became a primary means to access the internet. = = = Developments in political science and economics = = = In political science during the 20th century, the study of ideology, behaviouralism and international relations led to a multitude of 'pol-sci' subdisciplines including rational choice theory, voting theory, game theory (also used in economics), psephology, political geography/geopolitics, political anthropology/political psychology/political sociology, political economy, policy analysis, public administration, comparative political analysis and peace studies/conflict analysis.
In economics, John Maynard Keynes prompted a division between microeconomics and macroeconomics in the 1920s. Under Keynesian economics, macroeconomic trends can overwhelm economic choices made by individuals. Governments should promote aggregate demand for goods as a means to encourage economic expansion. Following World War II, Milton Friedman created the concept of monetarism. Monetarism focuses on using the supply and demand of money as a method for controlling economic activity. In the 1970s, monetarism was adapted into supply-side economics, which advocates reducing taxes as a means to increase the amount of money available for economic expansion. Other modern schools of economic thought are new classical economics and new Keynesian economics. New classical economics was developed in the 1970s, emphasizing solid microeconomics as the basis for macroeconomic growth
|
https://en.wikipedia.org/wiki/History_of_science
|
Neural networks (NNs) usually hinder any insight into the reasoning behind their predictions. We demonstrate how influence functions can unravel the black box of NNs when trained to predict the phases of the one-dimensional extended spinless Fermi-Hubbard model at half-filling. Results provide strong evidence that the NN correctly learns an order parameter describing the quantum transition in this model. We demonstrate that influence functions allow one to check that the network, trained to recognize known quantum phases, can predict new unknown ones within the data set. Moreover, we show they can guide physicists in understanding patterns responsible for the phase transition. This method requires no a priori knowledge of the order parameter, has no dependence on the NN's architecture or the underlying physical model, and is therefore applicable to a broad class of physical models or experimental data.
|
arxiv:2004.04711
|
In this paper, we verify Carl de Boor's conjecture on ideal projectors for real ideal projectors of type partial derivative by proving that there exists a positive $\eta \in \mathbb{R}$ such that a real ideal projector of type partial derivative $P$ is the pointwise limit of a sequence of Lagrange projectors which are perturbed from $P$ up to $\eta$ in magnitude. Furthermore, we present an algorithm for computing the value of such an $\eta$ when the range of the Lagrange projectors is spanned by the Gr\"{o}bner \'{e}scalier of their kernels w.r.t. lexicographic order.
|
arxiv:1102.2475
|
Unsupervised domain adaptation (UDA) aims to leverage a label-rich source domain to solve tasks on a related unlabeled target domain. It is a challenging problem, especially when a large domain gap lies between the source and target domains. In this paper we propose a novel solution named SSRT (Safe Self-Refinement for Transformer-based domain adaptation), which brings improvement from two aspects. First, encouraged by the success of vision transformers in various vision tasks, we arm SSRT with a transformer backbone. We find that the combination of a vision transformer with simple adversarial adaptation surpasses the best reported convolutional neural network (CNN)-based results on the challenging DomainNet benchmark, showing its strong transferable feature representation. Second, to reduce the risk of model collapse and improve the effectiveness of knowledge transfer between domains with large gaps, we propose a safe self-refinement strategy. Specifically, SSRT utilizes predictions of perturbed target domain data to refine the model. Since the model capacity of a vision transformer is large and predictions in such challenging tasks can be noisy, a safe training mechanism is designed to adaptively adjust the learning configuration. Extensive evaluations are conducted on several widely tested UDA benchmarks, and SSRT achieves consistently the best performance, including 85.43% on Office-Home, 88.76% on VisDA-2017 and 45.2% on DomainNet.
|
arxiv:2204.07683
|
the walls of a victim's stomach. Toxicology, a subfield of forensic chemistry, focuses on detecting and identifying drugs, poisons, and other toxic substances in biological samples. Forensic toxicologists work on cases involving drug overdoses, poisoning, and substance abuse. Their work is critical in determining whether harmful substances played a role in a person's death or impairment. James Marsh was the first to apply this new science to the art of forensics. He was called by the prosecution in a murder trial to give evidence as a chemist in 1832. The defendant, John Bodle, was accused of poisoning his grandfather with arsenic-laced coffee. Marsh performed the standard test by mixing a suspected sample with hydrogen sulfide and hydrochloric acid. While he was able to detect arsenic as yellow arsenic trisulfide, when it was shown to the jury it had deteriorated, allowing the suspect to be acquitted due to reasonable doubt. Annoyed by that, Marsh developed a much better test. He combined a sample containing arsenic with sulfuric acid and arsenic-free zinc, resulting in arsine gas. The gas was ignited, and it decomposed to pure metallic arsenic which, when passed to a cold surface, would appear as a silvery-black deposit. So sensitive was the test, known formally as the Marsh test, that it could detect as little as one-fiftieth of a milligram of arsenic. He first described this test in the Edinburgh Philosophical Journal in 1836. = = = Ballistics and firearms = = = Ballistics is "the science of the motion of projectiles in flight". In forensic science, analysts examine the patterns left on bullets and cartridge casings after being ejected from a weapon. When fired, a bullet is left with indentations and markings that are unique to the barrel and firing pin of the firearm that ejected it. This examination can help scientists identify possible makes and models of weapons connected to a crime.
Henry Goddard at Scotland Yard pioneered the use of bullet comparison in 1835. He noticed a flaw in the bullet that killed the victim and was able to trace this back to the mold that was used in the manufacturing process. = = = Anthropometry = = = The French police officer Alphonse Bertillon was the first to apply the anthropological technique of anthropometry to law enforcement, thereby creating an identification system based on physical measurements. Before that time, criminals could be identified only by name or photograph. Dissatisfied with the ad hoc methods used to identify captured
|
https://en.wikipedia.org/wiki/Forensic_science
|
Three families of quarks and leptons, one Higgs to rule them all, and in the darkness bind them.
|
arxiv:1402.3841
|
Modern data sources are typically of large-scale and multi-modal nature, and acquired on irregular domains, which poses serious challenges to traditional deep learning models. These issues are partially mitigated by either extending existing deep learning algorithms to irregular domains through graphs, or by employing tensor methods to alleviate the computational bottlenecks imposed by the curse of dimensionality. To simultaneously resolve both of these issues, we introduce a novel multi-graph tensor network (MGTN) framework, which leverages the desirable properties of graphs, tensors and neural networks in a physically meaningful and compact manner. This equips MGTNs with the ability to exploit local information in irregular data sources at a drastically reduced parameter complexity, and over a range of learning paradigms such as regression, classification and reinforcement learning. The benefits of the MGTN framework, especially its ability to avoid overfitting through the inherent low-rank regularization properties of tensor networks, are demonstrated through its superior performance against competing models in the individual tensor, graph, and neural network domains.
|
arxiv:2103.14998
|
In some models, periodic configurations can be shown to be stable under both global $\ell^2$ and local perturbations. This is not the case for aperiodic media. The specific class of aperiodic media we are interested in arises from taking two 2D periodic crystals and stacking them parallel at a relative twist. In periodic media, phonons are generalized eigenvectors for a stability operator acting on $\ell^2$, coming from a mechanical energy. The goal of our analysis is to provide phonons in the given class of aperiodic media with meaning. As rigorously established for the 1D Frenkel-Kontorova model and previously applied by one of the authors, we assume that we can parametrize minimizing lattice deformations w.r.t. local perturbations via continuous stacking-periodic functions, for which we previously derived a continuous energy density functional. Such (continuous) energy densities are analytically and computationally much more accessible than discrete energy functionals. In order to pass to an $\ell^2$-based energy functional, we also study the offset energy w.r.t. given lattice deformations, under $\ell^1$-perturbations. Our findings show that, in the case of an undeformed bilayer heterostructure, while the energy density can be shown to be stable under the assumption of stability of the individual layers, the offset energy fails to be stable in the case of twisted bilayer graphene. We then establish conditions for stability and instability of the offset energy w.r.t. the relaxed lattice. Finally, we show that, in the case of incommensurate bilayer homostructures, i.e., two equal layers, if we choose minimizing deformations according to the global energy density above, the offset energy is stable in the limit of zero twist angle. Consequently, in this case, one can then define phonons as generalized eigenvectors w.r.t. the stability operator associated with the offset energy.
|
arxiv:2409.06151
|
the experimental realization of correlated quantum phases with ultracold gases in optical lattices and their theoretical understanding has witnessed remarkable progress during the last decade. in this review we introduce basic concepts and tools to describe the many - body physics of quantum gases in optical lattices. this includes the derivation of effective lattice hamiltonians from first principles and an overview of the emerging quantum phases. additionally, state - of - the - art numerical tools to quantitatively treat bosons or fermions on different lattices are introduced.
|
arxiv:1312.5772
|
we show that every geodesic metric space admitting an injective continuous map into the plane as well as every planar graph has nagata dimension at most two, hence asymptotic dimension at most two. this relies on and answers a question in a very recent work by fujiwara and papasoglu. we conclude that all three - dimensional hadamard manifolds have nagata dimension three. as a consequence, all such manifolds are absolute lipschitz retracts.
|
arxiv:2004.10576
|
facial recognition models are increasingly employed by commercial enterprises, government agencies, and cloud service providers for identity verification, consumer services, and surveillance. these models are often trained using vast amounts of facial data processed and stored in cloud - based platforms, raising significant privacy concerns. users ' facial images may be exploited without their consent, leading to potential data breaches and misuse. this survey presents a comprehensive review of current methods aimed at preserving facial image privacy in cloud - based services. we categorize these methods into two primary approaches : image obfuscation - based protection and adversarial perturbation - based protection. we provide an in - depth analysis of both categories, offering qualitative and quantitative comparisons of their effectiveness. additionally, we highlight unresolved challenges and propose future research directions to improve privacy preservation in cloud computing environments.
|
arxiv:2501.08665
|
we extend the standard model gauge group by a gauged $u(1)_r$ r-symmetry or a gauged $u(1)'$. the requirement of cancellation of anomalies is very constraining but can be achieved by adding three or four hidden-sector fields which are standard model singlets. the $u(1)_r$ or $u(1)'$ quantum numbers of these singlets are usually large, producing a non-renormalisable superpotential with a high power in the singlet fields. we have minimized the supergravity scalar potential and have found solutions where the vacuum expectation values of all hidden-sector singlet fields are less than the planck mass, $<z_m> = o(m_{pl}/10)$. this produces the small supersymmetry scale of order the weak scale from only the planck scale. the mu problem is simultaneously solved in this manner.
|
arxiv:hep-ph/9607261
|
vision transformer architectures have been demonstrated to work very effectively for image classification tasks. efforts to solve more challenging vision tasks with transformers rely on convolutional backbones for feature extraction. in this paper we investigate the use of a pure transformer architecture ( i. e., one with no cnn backbone ) for the problem of 2d body pose estimation. we evaluate two vit architectures on the coco dataset. we demonstrate that using an encoder - decoder transformer architecture yields state of the art results on this estimation problem.
|
arxiv:2112.04981
|
we present evidence, via a large survey of 191 new spectra along with previously - published spectra, of a divide in the 3 - $ \ mu $ m spectral properties of the low - albedo asteroid population. one group ( " sharp - types " or st, with band centers $ < $ 3 $ \ mu $ m ) has a spectral shape consistent with carbonaceous chondrite meteorites, while the other group ( " not - sharp - types " or nst, with bands centered $ > $ 3 $ \ mu $ m ) is not represented in the meteorite literature but is as abundant as the sts among large objects. both groups are present in most low - albedo asteroid taxonomic classes, and except in limited cases taxonomic classifications based on 0. 5 - 2. 5 - $ \ mu $ m data alone cannot predict whether an asteroid is st or nst. statistical tests show the sts and nsts differ in average band depth, semi - major axis, and perihelion at confidence levels $ \ ge $ 98 \ %, while not showing significant differences in albedo. we also show that many nsts have a 3 - $ \ mu $ m absorption band shape like comet 67p, and likely represent an important small - body composition throughout the solar system. a simple explanation for the origin of these groups is formation on opposite sides of the ammonia snow line, with the nst group accreting h2o and nh3 and the st group only accreting h2o, with subsequent thermal and chemical evolution resulting in the minerals seen today. such an explanation is consistent with recent dynamical modeling of planetesimal formation and delivery, and suggests that much more outer solar system material was delivered to the main asteroid belt than would be thought based on the number of d - class asteroids found today.
|
arxiv:2205.09166
|
with the development of artificial intelligence, more and more attention has been put onto generative models, which represent creativity, a very important aspect of intelligence. in recent years, diffusion models have been studied and proven to be more reasonable and effective than previous methods. however, common diffusion frameworks suffer from controllability problems. although extra conditions have been considered by some works to guide the diffusion process for a specific target generation, they only control the generation result but not its process. in this work, we propose a new adaptive framework, the $\textit{adaptively controllable diffusion (ac-diff) model}$, to automatically and fully control the generation process, including not only the type of generation result but also the length and parameters of the generation process. both inputs and conditions will first be fed into a $\textit{conditional time-step (cts) module}$ to determine the number of steps needed for a generation. then, according to the length of the process, the diffusion rate parameters will be estimated through our $\textit{adaptive hybrid noise schedule (ahns) module}$. we further train the network with the corresponding adaptive sampling mechanism to learn how to adjust itself according to the conditions for overall performance improvement. to enable its practical applications, ac-diff is expected to largely reduce the average number of generation steps and execution time while maintaining the same performance as diffusion models in the literature.
|
arxiv:2411.15199
|
low - resolution infrared ( ir ) array sensors enable people counting applications such as monitoring the occupancy of spaces and people flows while preserving privacy and minimizing energy consumption. deep neural networks ( dnns ) have been shown to be well - suited to process these sensor data in an accurate and efficient manner. nevertheless, the space of dnns ' architectures is huge and its manual exploration is burdensome and often leads to sub - optimal solutions. to overcome this problem, in this work, we propose a highly automated full - stack optimization flow for dnns that goes from neural architecture search, mixed - precision quantization, and post - processing, down to the realization of a new smart sensor prototype, including a microcontroller with a customized instruction set. integrating these cross - layer optimizations, we obtain a large set of pareto - optimal solutions in the 3d - space of energy, memory, and accuracy. deploying such solutions on our hardware platform, we improve the state - of - the - art achieving up to 4. 2x model size reduction, 23. 8x code size reduction, and 15. 38x energy reduction at iso - accuracy.
|
arxiv:2402.01226
|
it was recently demonstrated [ j. electron. imaging, 25 ( 2 ), 2016 ] that one can perform fast non - local means ( nlm ) denoising of one - dimensional signals using a method called lifting. the cost of lifting is independent of the patch length, which dramatically reduces the run - time for large patches. unfortunately, it is difficult to directly extend lifting for non - local means denoising of images. to bypass this, the authors proposed a separable approximation in which the image rows and columns are filtered using lifting. the overall algorithm is significantly faster than nlm, and the results are comparable in terms of psnr. however, the separable processing often produces vertical and horizontal stripes in the image. this problem was previously addressed by using a bilateral filter - based post - smoothing, which was effective in removing some of the stripes. in this letter, we demonstrate that stripes can be mitigated in the first place simply by involving the neighboring rows ( or columns ) in the filtering. in other words, we use a two - dimensional search ( similar to nlm ), while still using one - dimensional patches ( as in the previous proposal ). the novelty is in the observation that one can use lifting for performing two - dimensional searches. the proposed approach produces artifact - free images, whose quality and psnr are comparable to nlm, while being significantly faster.
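the authors' lifting scheme is more elaborate, but its central payoff — a patch-distance cost that does not depend on the patch length — can be sketched in one dimension with cumulative sums. everything below (function name, parameter values, the test signal) is invented for illustration and is not the paper's algorithm:

```python
import math
import random

def nlm_1d_fast(x, search=10, half_patch=3, h=1.5):
    """1D non-local means where, for each offset d, patch distances are
    built from a cumulative sum of squared differences, so the per-offset
    cost is O(N) regardless of the patch length."""
    n = len(x)
    num, den = [0.0] * n, [0.0] * n
    for d in range(-search, search + 1):
        # csum[i+1] - csum[j] = sum of (x[t] - x[t+d])^2 over t in [j, i]
        csum = [0.0] * (n + 1)
        for i in range(n):
            e = (x[i] - x[i + d]) ** 2 if 0 <= i + d < n else 0.0
            csum[i + 1] = csum[i] + e
        for i in range(n):
            if not 0 <= i + d < n:
                continue
            lo, hi = max(i - half_patch, 0), min(i + half_patch, n - 1)
            dist = csum[hi + 1] - csum[lo]          # O(1) patch distance
            w = math.exp(-dist / (h * h))
            num[i] += w * x[i + d]
            den[i] += w
    return [num[i] / den[i] for i in range(n)]

# denoise a noisy constant signal: the output should be much smoother
rng = random.Random(0)
noisy = [1.0 + rng.gauss(0.0, 0.2) for _ in range(64)]
out = nlm_1d_fast(noisy)
```

the same cumulative-sum trick is what lets the search window (`search`) and the patch half-width (`half_patch`) be varied independently without changing the asymptotic cost.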
|
arxiv:1710.09552
|
we compare convergence rates of metropolis-hastings chains to multi-modal target distributions when the proposal distributions can be of "local" and "small world" type. in particular, we show that by adding occasional long-range jumps to a given local proposal distribution, one can turn a chain that is "slowly mixing" (in the complexity of the problem) into a chain that is "rapidly mixing." to do this, we obtain spectral gap estimates via a new state decomposition theorem and apply an isoperimetric inequality for log-concave probability measures. we discuss potential applicability of our result to metropolis-coupled markov chain monte carlo schemes.
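the effect can be illustrated with a toy sampler: a bimodal target that a purely local random walk crosses extremely slowly, but that a "small world" proposal (local steps plus occasional long-range jumps) mixes across easily. this is only a numerical illustration of the phenomenon, not the paper's spectral-gap analysis; all names and parameter values are made up:

```python
import math
import random

# bimodal target: equal mixture of N(-10, 1) and N(+10, 1), in log form
# (log-sum-exp keeps the value finite even far from both modes)
def target_logpdf(x):
    a = -0.5 * (x + 10.0) ** 2
    b = -0.5 * (x - 10.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def mh_small_world(n_steps, p_jump=0.1, seed=0):
    """Metropolis-Hastings with a 'small world' proposal: mostly local
    Gaussian steps, plus occasional long-range jumps."""
    rng = random.Random(seed)
    x, samples = -10.0, []
    for _ in range(n_steps):
        step = 20.0 if rng.random() < p_jump else 1.0   # long-range vs local
        prop = x + rng.gauss(0.0, step)
        # the mixture proposal is symmetric, so the plain MH ratio applies
        if math.log(1.0 - rng.random()) < target_logpdf(prop) - target_logpdf(x):
            x = prop
        samples.append(x)
    return samples

samples = mh_small_world(20000)
# with long-range jumps the chain hops between the two modes; a purely
# local chain (p_jump=0) started at -10 stays stuck near that mode
print(sum(s > 0 for s in samples) / len(samples))
```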
|
arxiv:math/0703021
|
a systematic ab initio search for low-enthalpy phases of disilane (si$_2$h$_6$) at high pressures was performed based on the minima hopping method. we found a novel metallic phase of disilane with cmcm symmetry, which is enthalpically more favorable than the recently proposed structures of disilane up to 280 gpa, but reveals compositional instability below 190 gpa. the cmcm phase has a moderate electron-phonon coupling yielding a superconducting transition temperature t_c of around 20 k at 100 gpa, decreasing to 13 k at 220 gpa. these values are an order of magnitude smaller than previously predicted t_c for disilane, and cast strong doubts on the possibility of high-t_c superconductivity in these systems as well as in other hydrogen-rich compounds under moderate pressure.
|
arxiv:1111.6302
|
objective : radiotherapy uses precise doses of radiation to treat cancer, requiring accurate verification, e. g. using the electronic portal imaging device ( epid ), to guide treatment. to develop an effective artificial intelligence ( ai ) model for error detection and treatment verification, a large and well - annotated dataset of epid images is needed, however, acquiring such high quality real data is difficult. while synthetic epid data could be a viable alternative, it is critical to ensure that this data is as realistic as possible to effectively train an accurate and reliable ai model. the measurement uncertainty that is not modeled in epid predictions but is present on real measured epid images can hinder downstream tasks such as error detection and classification. our research aims to improve synthetic epid data through image - to - image ( i2i ) translation based on deep generative modeling. approach : a dataset of 989 predicted epid images and corresponding measured epid images was used. we evaluate both paired and unpaired generative modeling approaches for this task. for the former, we introduce a novel modification of variational autoencoder ( vae ) to i2i, a method that, to the best of our knowledge, has not been previously explored for this task. for the latter, we use unsupervised image - to - image translation networks ( unit ). results : our results show that both models achieved some degree of i2i translation, with our novel modification of the vae model outperforming the unit model in improving key metrics ( mean absolute error : 4. 1 cgy vs 6. 4 cgy ; relative dose difference in - field : 2. 5 % vs 5. 5 % ; absolute dose difference in - field : 5. 3 cgy vs 10. 8 cgy ). significance : this enhanced synthetic data is expected to improve downstream tasks such as training neural networks for automated error detection and error classification in radiotherapy.
|
arxiv:2410.01828
|
we articulate confocal microscopy and electron spin resonance to implement spin - to - charge conversion in a small ensemble of nitrogen - vacancy ( nv ) centers in bulk diamond, and demonstrate charge conversion of neighboring defects conditional on the nv spin state. we build on this observation to show time - resolved nv spin manipulation and ancilla - charge - aided nv spin state detection via integrated measurements. our results hint at intriguing opportunities in the search for enhanced forms of color - center - based metrology and information processing down to the limit of individual point defects.
|
arxiv:2003.13148
|
non-standard neutrino interactions (nsi) are vector contact interactions involving two neutrinos and two first generation fermions, which can affect neutrino propagation in matter. su(2) gauge invariance suggests that nsi should be accompanied by more observable charged lepton contact interactions. however, these can be avoided at tree level in various ways. we focus on lepton flavour-changing nsi, suppose they are generated by new physics heavier than $m_w$ that does not induce (charged) lepton flavour violation (lfv) at tree level, and show that lfv is generated at one loop in most cases. the current constraints on charged lepton flavour violation therefore suggest that mu <-> e flavour-changing nsi are unobservable and tau <-> l flavour-changing nsi are an order of magnitude weaker than the weak interactions. this conclusion can be avoided if the heavy new physics conspires to cancel the one-loop lfv, or if nsi are generated by light new physics to which our analysis does not apply.
|
arxiv:1909.07406
|
the superconducting axion search experiment (supax) is a haloscope designed to probe axion-like particles (alps) as candidates for dark matter and solutions to the strong cp problem. alps are predicted to couple to photons, allowing their detection through resonant conversion in electromagnetic cavities placed within strong magnetic fields. supax employs a 12 t magnetic field and tunable superconducting cavities with resonance frequencies ranging from 2 ghz to 7.2 ghz, enabling the exploration of axion masses between 8 $\mu$ev and 30 $\mu$ev. the tuning mechanism, based on piezo motors and gas-pressure regulation, allows for simultaneous scanning of up to three frequencies, significantly improving search efficiency. this paper presents the technical design of the supax experiment, preliminary r&d efforts, and results from prototype experiments. in particular, we exclude dark photons with masses around $35\,\mu$ev with a kinetic mixing parameter $\chi > 5 \cdot 10^{-14}$, i.e. a region of parameter space which has not been previously explored.
|
arxiv:2505.07541
|
a diverse workforce and open culture are essential to satisfaction in the workplace, to innovation and creativity, and to the ability of an organisation to attract and retain talent. to ensure a diverse and inclusive workplace, efforts can be made to remove and prevent physical, systematic and attitudinal barriers. the invited talk on diversity and inclusion in astronomy given by michèle péron at this conference set the stage for discussion of this important topic, and presented how some of our institutions are addressing the issues and problems that exist, so as to set up a positive work environment for all. the aim of this bof was to take some of the points raised by péron and present them for discussion by the bof participants. it was intended that the bof be a forum for frank discussion and positive suggestions that participants could take back to their institutions.
|
arxiv:1904.08278
|
the existence of at least two solutions to a superlinear integral equation in a cone is proved using the krasnoselskii fixed point theorem. the result is applied to dirichlet bvps with the fractional laplacian.
|
arxiv:1311.0645
|
we report the discovery, using fors1 at the eso - vlt and espadons at the cfht, of magnetic fields in the young a - type stars hd 101412, v380 ori and hd 72106a. two of these stars ( hd 101412 and v380 ori ) are pre - main sequence herbig ae / be ( haebe ) stars, while one ( hd 72106a ) is physically associated with a haebe star. remarkably, evidence of surface abundance spots is detected in the spectra of hd 72106a. the magnetic fields of these objects display intensities of order 1 kg at the photospheric level, are ordered on global scales, and appear in approximately 10 % of the stars studied. based on these properties, the detected stars may well represent pre - main sequence progenitors of the magnetic ap / bp stars. the low masses inferred for these objects ( 2. 6, 2. 8 and 2. 4 solar masses ) represent additional contradictions to the hypothesis of hubrig et al. ( 2000 ), who claim that magnetic fields appear in intermediate - mass stars only after 30 % of their main sequence evolution is complete. finally, we fail to confirm claims by hubrig et al. ( 2004 ) of magnetic fields in the herbig ae star hd 139614.
|
arxiv:astro-ph/0509295
|
we investigate the resonance behaviour in a system composed by n - coupled duffing oscillators where only the first oscillator is driven by a periodic force, assuming a nearest neighbour coupling. we have derived the frequency - response equations for a system composed of two - coupled oscillators by using a theoretical approach. interestingly, the frequency - response curve displays two resonance peaks and one anti - resonance. a theoretical prediction of the response amplitudes of two oscillators closely match with the numerically computed amplitudes. we analyse the effect of the coupling strength on the resonance and anti - resonance frequencies and the response amplitudes at these frequencies. for the n - coupled oscillators system, in general, there are n - resonant peaks and ( n - 1 ) anti - resonant peaks. for large values of n, except for the first resonance, other resonant peaks are weak due to linear damping. the resonance behaviours observed in the n - coupled duffing oscillators are also realized in an electronic analog circuit simulation of the equations. understanding the role of coupling and system size has the potential applications in music, structural engineering, power systems, biological networks, electrical and electronic systems.
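in the small-amplitude (linearized) limit, the two-resonances-plus-one-anti-resonance structure described above can be verified directly from the 2x2 impedance system for two coupled oscillators. the sketch below uses invented parameter values and is not the paper's duffing computation:

```python
# Steady-state response of two coupled, damped oscillators with only the
# first one driven, in the small-amplitude (linear) limit of the Duffing chain:
#   x1'' + d*x1' + w0^2*x1 + k*(x1 - x2) = F cos(w t)
#   x2'' + d*x2' + w0^2*x2 + k*(x2 - x1) = 0
# Complex steady-state amplitudes solve [z, -k; -k, z][A1; A2] = [F; 0],
# with z(w) = w0^2 + k - w^2 + i*d*w.
def response(w, w0=1.0, d=0.05, k=0.3, F=1.0):
    z = w0 ** 2 + k - w ** 2 + 1j * d * w
    det = z * z - k * k
    return abs(F * z / det), abs(F * k / det)   # |A1| (driven), |A2|

ws = [0.5 + 0.001 * i for i in range(1500)]     # frequency sweep 0.5 .. 2.0
a1 = [response(w)[0] for w in ws]

# interior local maxima of |A1|: two resonances are expected, near w0
# (in-phase mode) and sqrt(w0^2 + 2k) (out-of-phase mode); the dip between
# them (anti-resonance of the driven oscillator) sits near sqrt(w0^2 + k)
peaks = [ws[i] for i in range(1, len(a1) - 1) if a1[i - 1] < a1[i] > a1[i + 1]]
print(peaks)
```

for n coupled oscillators the impedance system becomes tridiagonal and the same sweep would show n resonant peaks, with the higher ones progressively damped, consistent with the abstract.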
|
arxiv:1510.01564
|
stellar members of binary systems are formed from the same material, therefore they should be chemically identical. however, recent high - precision studies have unveiled chemical differences between the two members of binary pairs composed by sun - like stars. the very existence of these chemically inhomogeneous binaries represents one of the most contradictory examples in stellar astrophysics and source of tension between theory and observations. it is still unclear whether the abundance variations are the result of chemical inhomogeneities in the protostellar gas clouds or instead if they are due to planet engulfment events occurred after the stellar formation. while the former scenario would undermine the belief that the chemical makeup of a star provides the fossil information of the environment where it formed, a key assumption made by several studies of our galaxy, the second scenario would shed light on the possible evolutionary paths of planetary systems. here, we perform a statistical study on 107 binary systems composed by sun - like stars to provide - for the first time - unambiguous evidence in favour of the planet engulfment scenario. we also establish that planet engulfment events occur in stars similar to our own sun with a probability ranging between 20 and 35 $ \ % $. this implies that a significant fraction of planetary systems undergo very dynamical evolutionary paths that can critically modify their architectures, unlike our solar system which has preserved its planets on nearly circular orbits. this study also opens to the possibility of using chemical abundances of stars to identify which ones are the most likely to host analogues of the calm solar system.
|
arxiv:2108.12040
|
we study a scalar zero mode originated from extradimensional components of a gauge field in a six - dimensional theory compactified on a magnetized torus. we confirm it is a nambu - goldstone boson of the translational symmetry on the torus which is breaking spontaneously due to magnetic flux. we also show explicitly it is massless up to the two - loop level. moreover, we discuss full order contributions by considering the effective potential.
|
arxiv:1912.04581
|
we study conditions that ensure uniqueness theorems of cuntz - krieger type for relative cuntz - pimsner algebras $ \ mathcal { o } ( j, x ) $ associated to a $ c ^ * $ - correspondence $ x $ over a $ c ^ * $ - algebra $ a $. we give general sufficient conditions phrased in terms of a multivalued map $ \ widehat { x } $ acting on the spectrum $ \ widehat { a } $ of $ a $. when $ x ( j ) $ is of type i we construct a directed graph dual to $ x $ and prove a uniqueness theorem using this graph. when $ x ( j ) $ is liminal, we show that topological freeness of this graph is equivalent to the uniqueness property for $ \ mathcal { o } ( j, x ) $, as well as to an algebraic condition, which we call $ j $ - acyclicity of $ x $. as an application we improve the fowler - raeburn uniqueness theorem for the toeplitz algebra $ \ mathcal { t } _ x $. we give new simplicity criteria for $ \ mathcal { o } _ x $. we generalize and enhance uniqueness results for relative quiver $ c ^ * $ - algebras of muhly and tomforde. we also discuss applications to crossed products by endomorphisms.
|
arxiv:1801.03142
|
we argue that a nonthermally looking spectrum of a gamma - ray burst ( grb ) can be formed as a superposition of a set of thermal blackbody spectra. this superposition may be done by time integration which is present even in ` time resolved ' grb spectroscopy. a nonthermal spectrum can be obtained also by the space integration which should take place unless all the emission comes from a plane front moving exactly towards the observer. we propose a model of the gamma - ray burst spectrum formation based on this idea. this model allows the grb radiation to be optically thick and to have higher values of baryon load. thus the latter is limited by the energy considerations only, and not by the condition of a small optical depth.
|
arxiv:astro-ph/9902378
|
knowledge distillation ( kd ) has made remarkable progress in the last few years and become a popular paradigm for model compression and knowledge transfer. however, almost all existing kd algorithms are data - driven, i. e., relying on a large amount of original training data or alternative data, which is usually unavailable in real - world scenarios. in this paper, we devote ourselves to this challenging problem and propose a novel adversarial distillation mechanism to craft a compact student model without any real - world data. we introduce a model discrepancy to quantificationally measure the difference between student and teacher models and construct an optimizable upper bound. in our work, the student and the teacher jointly act the role of the discriminator to reduce this discrepancy, when a generator adversarially produces some " hard samples " to enlarge it. extensive experiments demonstrate that the proposed data - free method yields comparable performance to existing data - driven methods. more strikingly, our approach can be directly extended to semantic segmentation, which is more complicated than classification, and our approach achieves state - of - the - art results. code and pretrained models are available at https : / / github. com / vainf / data - free - adversarial - distillation.
|
arxiv:1912.11006
|
multi-typed objects multi-view multi-instance multi-label learning (m4l) deals with interconnected multi-typed objects (or bags) that are made of diverse instances, represented with heterogeneous feature views and annotated with a set of non-exclusive but semantically related labels. m4l is more general and powerful than the typical multi-view multi-instance multi-label learning (m3l), which only accommodates single-typed bags and lacks the power to jointly model the naturally interconnected multi-typed objects of the physical world. to tackle this novel and challenging learning task, we develop a joint matrix factorization based solution (m4l-jmf). particularly, m4l-jmf first encodes the diverse attributes and multiple inter(intra)-associations among multi-typed bags into respective data matrices, and then jointly factorizes these matrices into low-rank ones to explore the composite latent representation of each bag and its instances (if any). in addition, it incorporates a dispatch and aggregation term to distribute the labels of bags to individual instances and reversely aggregate the labels of instances to their affiliated bags in a coherent manner. experimental results on benchmark datasets show that m4l-jmf achieves significantly better results than simple adaptations of existing m3l solutions on this novel problem.
|
arxiv:2010.02539
|
== examples ==
$1/x$ has oscillation ∞ at $x = 0$, and oscillation 0 at other finite $x$ and at −∞ and +∞. $\sin \frac{1}{x}$ (the topologist's sine curve) has oscillation 2 at $x = 0$, and 0 elsewhere. $\sin x$ has oscillation 0 at every finite $x$, and 2 at −∞ and +∞. $(-1)^x$, or 1, −1, 1, −1, 1, −1, ..., has oscillation 2. in the last example the sequence is periodic, and any sequence that is periodic without being constant will have non-zero oscillation. however, non-zero oscillation does not usually indicate periodicity. geometrically, the graph of an oscillating function on the real numbers follows some path in the xy-plane, without settling into ever-smaller regions. in well-behaved cases the path might look like a loop coming back on itself, that is, periodic behaviour; in the worst cases quite irregular movement covering a whole region.

== continuity ==
oscillation can be used to define continuity of a function, and is easily equivalent to the usual ε-δ definition (in the case of functions defined everywhere on the real line): a function f is continuous at a point $x_0$ if and only if the oscillation is zero; in symbols, $\omega_f(x_0) = 0$. a benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point. for example, in the classification of discontinuities: in a removable discontinuity, the distance that the value of the function is off by is the oscillation; in a jump discontinuity, the size of the jump is the oscillation (assuming that the value at the point lies between these limits from the two sides); in
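the examples can be probed numerically by sampling sup f − inf f on a small punctured neighbourhood of the point; a finite grid only approximates the limiting oscillation, so this is a rough illustration:

```python
import math

def oscillation_at(f, x0, radius, n=5000):
    """Estimate the oscillation of f at x0: sup f - inf f over a punctured
    neighbourhood of the given radius, sampled on a finite grid. The true
    oscillation is the limit as radius -> 0, which a grid only approximates."""
    vals = [f(x0 + radius * k / n) for k in range(-n, n + 1) if k != 0]
    return max(vals) - min(vals)

# sin(1/x) attains values arbitrarily close to both -1 and 1 near 0,
# so its oscillation at 0 is 2, in line with the examples above
for r in (1e-1, 1e-2, 1e-3):
    print(r, oscillation_at(lambda x: math.sin(1.0 / x), 0.0, r))
```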
|
https://en.wikipedia.org/wiki/Oscillation_(mathematics)
|
among expanding discoveries of quantum phases in moiré superlattices, correlated insulators stand out as both the most stable and most commonly observed. despite the central importance of these states in moiré physics, little is known about their underlying nature. here, we use pump-probe spectroscopy to show distinct time-domain signatures of correlated insulators at fillings of one (v = -1) and two (v = -2) holes per moiré unit cell in the angle-aligned wse2/ws2 system. following photo-doping, we find that the disordering time of the v = -1 state is independent of excitation density (n_ex), as expected from the characteristic phonon response time associated with a polaronic state. in contrast, the disordering time of the v = -2 state scales with (n_ex)^(-0.5), in agreement with plasmonic screening from free holons and doublons. these states display disparate reordering behavior dominated either by first order (v = -1) or second order (v = -2) recombination, suggesting the presence of hubbard excitons and free carrier-like holons/doublons, respectively. our work delineates the roles of electron-phonon (e-ph) versus electron-electron (e-e) interactions in correlated insulators on the moiré landscape and establishes non-equilibrium responses as mechanistic signatures for distinguishing and discovering quantum phases.
|
arxiv:2406.15067
|
the tight - binding model is closely associated with the modified random - phase approximation to thoroughly explore the electron - electron interactions in trilayer ab - stacked graphene. the intralayer and interlayer atomic / coulomb interactions dominate the collective and electron - hole excitations. the unusual energy bands are directly reflected in the diverse transferred momentum - frequency phase diagrams. there exist three kinds of plasmon modes during the variation of the doping level, being accompanied with the complicated intraband and interband single - particle excitations. the excitation behaviors are greatly diversified by the number of layers. the theoretical predictions require the high - resolution experimental examinations.
|
arxiv:1803.10715
|
we report on experiments with cold thermal $ ^ 7 $ li atoms confined in combined magnetic and electric potentials. a novel type of three - dimensional trap was formed by modulating a magnetic guide using electrostatic fields. we observed atoms trapped in a string of up to six individual such traps, a controlled transport of an atomic cloud over a distance of 400 $ \ mu $ m, and a dynamic splitting of a single trap into a double well potential. applications for quantum information processing are discussed.
|
arxiv:quant-ph/0306111
|
we consider symmetric polynomials, p, in the noncommutative free variables ( x _ 1, x _ 2,..., x _ g ). we define the noncommutative complex hessian of p and we call a noncommutative symmetric polynomial noncommutative plurisubharmonic if it has a noncommutative complex hessian that is positive semidefinite when evaluated on all tuples of n x n matrices for every size n. in this paper, we show that the symmetric noncommutative plurisubharmonic polynomials are precisely the noncommutative convex polynomials with a noncommutative analytic change of variables ; i. e., a noncommutative symmetric polynomial, p, is noncommutative plurisubharmonic if and only if it has the form p = \ sum f _ j ^ t f _ j + \ sum k _ j k _ j ^ t + f + f ^ t where the sums are finite and f _ j, k _ j, f are all noncommutative analytic. we also present a theory of noncommutative integration for noncommutative polynomials and we prove a noncommutative version of the frobenius theorem. a subsequent paper by greene proves that if the noncommutative complex hessian of p takes positive semidefinite values on a " noncommutative open set " then the noncommutative complex hessian takes positive semidefinite values on all matrix tuples. thus, p has the form above. the proof in the subsequent paper draws on most of the theorems in this paper together with a very different technique involving representations of noncommutative quadratic functions.
|
arxiv:1101.0107
|
the method of closure testing for analysing the effectiveness of a pdf fitting procedure is discussed. in order to pass a closure test, a fitting methodology must be able to reproduce a known generating function in a fit to an ideal pseudo - dataset generated by that pdf up to the level of experimental uncertainty in the data. here we present an initial study of the closure property of the nnpdf fitting methodology. an idealised pseudo - dataset is generated by a set of toy pdfs that differ substantially from previous nnpdf determinations. in a fit to this pseudodata, the nnpdf methodology is shown to be able to reproduce well the toy pdfs used as a generating function.
|
arxiv:1307.2046
|
we study a class of parabolic quasilinear systems, in which the diffusion matrix is not uniformly elliptic, but satisfies a petrovskii condition of positivity of the real part of the eigenvalues. local well - posedness has been known since the work of amann in the 90s, by a semi - group method. we revisit these results in the context of sobolev spaces modelled on l ^ 2 and exemplify our method with the skt system, showing the existence of local, non - negative, strong solutions.
|
arxiv:2407.08226
|
bitcoin stands as a groundbreaking development in decentralized exchange throughout human history, enabling transactions without the need for intermediaries. by leveraging cryptographic proof mechanisms, bitcoin eliminates the reliance on third - party financial institutions. ethereum, ranking as the second - largest cryptocurrency by market capitalization, builds upon bitcoin ' s groundwork by introducing smart contracts and decentralized applications. ethereum strives to surpass the limitations of bitcoin ' s scripting language, achieving full turing - completeness for executing intricate computational tasks. solana introduces a novel architecture for high - performance blockchain, employing timestamps to validate decentralized transactions and significantly boosting block creation throughput. through a comprehensive examination of these blockchain technologies, their distinctions, and the associated challenges, this paper aims to offer valuable insights and comparative analysis for both researchers and practitioners.
|
arxiv:2404.04841
|
many modern network designs incorporate " failover " paths into routers ' forwarding tables. we initiate the theoretical study of the conditions under which such resilient routing tables can guarantee delivery of packets.
|
arxiv:1207.3732
|
we perform a detailed analysis of the phenomenological properties of hidden abelian gauge bosons with a kinetic mixing with the ordinary photon within type iib flux compactifications. we study the interplay between moduli stabilisation and the green - schwarz mechanism that gives mass to the hidden photon paying particular attention to the role of d - terms. we present two generic classes of explicit calabi - yau examples with an isotropic and an anisotropic shape of the extra dimensions showing how the last case turns out to be very promising to make contact with current experiments. in fact, anisotropic compactifications lead naturally to a gev - scale hidden photon ( " dark forces " that can be searched for in beam dump experiments ) for an intermediate string scale ; or even to an mev - scale hidden photon ( which could lead to a " hidden cmb " and can be tested by light - shining - through - a - wall experiments ) in the case of tev - scale strings.
|
arxiv:1103.3705
|
in software - defined networks ( sdn ), a controller program is in charge of deploying diverse network functionality across a large number of switches, but this comes at a great risk : deploying buggy controller code could result in network and service disruption and security loopholes. the automatic detection of bugs or, even better, verification of their absence is thus most desirable, yet the size of the network and the complexity of the controller makes this a challenging undertaking. in this paper we propose mocs, a highly expressive, optimised sdn model that allows capturing subtle real - world bugs, in a reasonable amount of time. this is achieved by ( 1 ) analysing the model for possible partial order reductions, ( 2 ) statically pre - computing packet equivalence classes and ( 3 ) indexing packets and rules that exist in the model. we demonstrate its superiority compared to the state of the art in terms of expressivity, by providing examples of realistic bugs that a prototype implementation of mocs in uppaal caught, and performance / scalability, by running examples on various sizes of network topologies, highlighting the importance of our abstractions and optimisations.
|
arxiv:2004.11988
|
the formation of solid macroscopic grains ( pebbles ) in protoplanetary discs is the first step toward planet formation. we aim to study the distribution of pebbles and the chemical composition of their ice mantles in a young protoplanetary disc. we use the two - dimensional hydrodynamical code feosad in the thin - disc approximation, which is designed to model the global evolution of a self - gravitating viscous protoplanetary disc taking into account dust coagulation and fragmentation, thermal balance, and phase transitions and transport of the main volatiles ( h $ _ 2 $ o, co $ _ { 2 } $, ch $ _ { 4 } $ and co ), which can reside in the gas, on small dust ( $ < 1 $ $ \ mu $ m ), on grown dust ( $ > 1 $ $ \ mu $ m ) and on pebbles. we model the dynamics of the protoplanetary disc from the cloud collapse to the 500 kyr moment. we determine the spatial distribution of pebbles and composition of their ice mantles and estimate the mass of volatiles on pebbles, grown dust and small dust. we show that pebbles form as early as 50 kyr after the disc formation and exist until the end of simulation ( 500 kyr ), providing prerequisites for planet formation. all pebbles formed in the model are covered by icy mantles. using a model considering accretion and desorption of volatiles onto dust / pebbles, we find that the ice mantles on pebbles consist mainly of h $ _ 2 $ o and co $ _ { 2 } $, and are carbon - depleted compared to gas and ices on small and grown dust, which contain more co and ch $ _ 4 $. this suggests a possible dominance of oxygen in the composition of planets formed from pebbles under these conditions.
|
arxiv:2403.02895
|
this paper presents an offering of some of the myriad connections between combinatorics and probability, directed in particular toward combinatorialists. the choice of material was dictated by the author ' s own interests, tastes and familiarity, as well as by a desire to present results with either complete proofs or well developed sketches of proofs, and to ensure that the arguments are rather accessible to combinatorialists. the first several sections collect some concepts and rudimentary results from probability theory that are needed to understand the rest of the paper.
|
arxiv:2105.13834
|
this allowed for the powerful computers and other electronic devices we see today. = = = microelectronics and nanoelectronics = = = microelectronics engineering deals with the design and microfabrication of very small electronic circuit components for use in an integrated circuit or sometimes for use on their own as a general electronic component. the most common microelectronic components are semiconductor transistors, although all main electronic components ( resistors, capacitors, etc. ) can be created at a microscopic level. nanoelectronics is the further scaling of devices down to nanometer levels. modern devices are already in the nanometer regime, with below 100 nm processing having been standard since around 2002. microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon ( at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide ) to obtain the desired transport of electronic charge and control of current. the field of microelectronics involves a significant amount of chemistry and material science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics. = = = signal processing = = = signal processing deals with the analysis and manipulation of signals. signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information. for analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. for digital signals, signal processing may involve the compression, error detection and error correction of digitally sampled signals.
signal processing is a very mathematically oriented and intensive area forming the core of digital signal processing and it is rapidly expanding with new applications in every field of electrical engineering such as communications, control, radar, audio engineering, broadcast engineering, power electronics, and biomedical engineering as many already existing analog systems are replaced with their digital counterparts. analog signal processing is still important in the design of many control systems. dsp processor ics are found in many types of modern electronic devices, such as digital television sets, radios, hi - fi audio equipment, mobile phones, multimedia players, camcorders and digital cameras, automobile control systems, noise cancelling headphones, digital spectrum analyzers, missile guidance systems, radar systems, and telematics systems. in such products, dsp may be
|
https://en.wikipedia.org/wiki/Electrical_engineering
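as a concrete instance of the digital filtering described above, the sketch below applies a 5 - tap moving - average fir filter to suppress a fast interfering component ( illustrative python, not part of the article ; all signals and parameters are invented for the example ) :

```python
import numpy as np

# a 5-tap moving-average FIR filter, the simplest digital low-pass filter:
# each output sample is the mean of the current and previous four inputs.

def moving_average(x, taps=5):
    kernel = np.ones(taps) / taps
    return np.convolve(x, kernel, mode="valid")

t = np.arange(200)
signal = np.sin(2 * np.pi * t / 50)                  # slow component (kept)
noisy = signal + 0.5 * np.sin(2 * np.pi * t / 4)     # fast interference (removed)

filtered = moving_average(noisy)

# the filter attenuates the fast component: the maximum deviation from the
# clean signal shrinks after filtering ("valid" mode trims 2 samples per edge)
err_before = np.abs(noisy[2:-2] - signal[2:-2]).max()
err_after = np.abs(filtered - signal[2:-2]).max()
assert err_after < err_before
```

the period - 4 interference nearly cancels over a 5 - sample window, while the period - 50 component passes with gain close to one, which is why the error drops.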
|
we characterise and investigate co - higgs sheaves and associated algebraic and combinatorial invariants on toric varieties. in particular, we compute explicit examples.
|
arxiv:2004.03721
|
we propose a numerical algorithm for finding optimal measurements for quantum - state discrimination. the theory of semidefinite programming provides a simple check of the optimality of the numerically obtained results.
|
arxiv:quant-ph/0201109
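the paper ' s sdp algorithm is not reproduced here ; as a hedged illustration, for two states the optimum is known in closed form ( the helstrom measurement ), and a few lines of numpy can verify that the measurement built from the sign of $ p _ 0 \rho _ 0 - p _ 1 \rho _ 1 $ attains the trace - norm bound :

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density(n):
    # random density matrix: positive semidefinite with unit trace
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = m @ m.conj().T
    return rho / np.trace(rho).real

p0, p1 = 0.4, 0.6
rho0, rho1 = random_density(3), random_density(3)

# Helstrom measurement: project onto the sign of Delta = p0*rho0 - p1*rho1
delta = p0 * rho0 - p1 * rho1
w, v = np.linalg.eigh(delta)
P0 = v[:, w > 0] @ v[:, w > 0].conj().T  # projector onto positive eigenspace
P1 = np.eye(3) - P0

p_success = (p0 * np.trace(P0 @ rho0) + p1 * np.trace(P1 @ rho1)).real
helstrom = 0.5 * (1 + np.abs(w).sum())   # 1/2 * (1 + ||Delta||_1)

assert abs(p_success - helstrom) < 1e-10
```

the equality follows from tr ( p0 delta ) being the sum of the positive eigenvalues of delta, which is exactly ( || delta || _ 1 + tr delta ) / 2.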
|
b or b ≤ a. dense order : for all a and b in p for which a < b, there is a c in p such that a < c < b. least - upper - bound property : every non - empty subset of p with an upper bound has a least upper bound ( supremum ) in p. = = = ordered fields = = = if ( f, +, × ) is a field and ≤ is a total order on f, then ( f, +, ×, ≤ ) is called an ordered field if and only if : a ≤ b implies a + c ≤ b + c ; 0 ≤ a and 0 ≤ b implies 0 ≤ a × b. both ( q, +, ×, ≤ ) and ( r, +, ×, ≤ ) are ordered fields, but ≤ cannot be defined in order to make ( c, +, ×, ≤ ) an ordered field, because −1 is the square of i and would therefore be positive. besides being an ordered field, r also has the least - upper - bound property. in fact, r can be defined as the only ordered field with that quality. = = chained notation = = the notation a < b < c stands for " a < b and b < c ", from which, by the transitivity property above, it also follows that a < c. by the above laws, one can add or subtract the same number to all three terms, or multiply or divide all three terms by the same nonzero number and reverse all inequalities if that number is negative. hence, for example, a < b + e < c is equivalent to a − e < b < c − e. this notation can be generalized to any number of terms : for instance, a1 ≤ a2 ≤... ≤ an means that ai ≤ ai + 1 for i = 1, 2,..., n − 1. by transitivity, this condition is equivalent to ai ≤ aj for any 1 ≤ i ≤ j ≤ n. when solving inequalities using chained notation, it is possible and sometimes necessary to evaluate the terms independently. for instance, to solve the inequality 4
|
https://en.wikipedia.org/wiki/Inequality_(mathematics)
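the chained - notation rules above ( shifting all terms by the same quantity, reversal under multiplication by a negative ) can be checked numerically ; this is an illustrative sketch, not part of the article :

```python
# illustrative check of the chained-notation rules for inequalities

def chained(*terms):
    """True iff the terms form a strictly increasing chain t0 < t1 < ... < tn."""
    return all(x < y for x, y in zip(terms, terms[1:]))

a, b, c, e = 1.0, 2.5, 5.0, 0.7

# a < b + e < c is equivalent to a - e < b < c - e (subtract e from all terms)
assert chained(a, b + e, c) == chained(a - e, b, c - e)

# multiplying all terms by a negative number reverses every inequality
k = -3.0
assert chained(a, b, c)
assert chained(k * c, k * b, k * a)  # reversed order after negation
```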
|
the origin of light is an unsolved mystery in nature. recently, it was suggested that light may originate from a new kind of order - quantum order. to test this idea in experiments, we study systems of screened magnetic / electric dipoles in 2d and 3d lattices. we show that our models contain an artificial light - a photon - like collective excitation. we discuss how to design realistic devices that realize our models. we show that the " speed of light " and the " fine structure constant " of the artificial light can be tuned in our models. the properties of artificial atoms ( bound states of pairs of artificial charges ) are also discussed. the existence of artificial light ( as well as artificial electrons ) in condensed matter systems suggests that elementary particles, such as the photon and the electron, may not be elementary. they may be collective excitations of quantum order in our vacuum. our models further suggest that a gauge theory is a string - net theory in disguise. light is a fluctuation of nets of large closed strings and charge is the end of open strings.
|
arxiv:cond-mat/0210040
|
we first describe a system of inequalities ( horn ' s inequalities ) that characterizes eigenvalues of sums of hermitian matrices. when we apply this system to integral hermitian matrices, one can directly test it by using littlewood - richardson coefficients. in this paper, we apply horn ' s inequalities to analyze the eigenvalues of an integral line graph $ g $ of a connected bipartite graph. then we show that the diameter of $ g $ is at most $ 2 \ omega ( g ) $, where $ \ omega ( g ) $ is the clique number of $ g $. also using horn ' s inequalities, we show that for every odd integer $ k \ geq 19 $, a non - complete $ k $ - regular ramanujan graph has an eigenvalue less than $ - 2 $.
|
arxiv:2303.01304
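as an illustrative companion ( one member of the family, not the full horn system ), the weyl inequality $ \lambda _ { i + j - 1 } ( a + b ) \le \lambda _ i ( a ) + \lambda _ j ( b ) $ can be verified numerically for random hermitian matrices :

```python
import numpy as np

rng = np.random.default_rng(0)

def hermitian(n):
    # random Hermitian matrix
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

n = 5
A, B = hermitian(n), hermitian(n)

# eigenvalues in decreasing order, indexed from 1 as in Weyl's inequality
lam = lambda M: np.sort(np.linalg.eigvalsh(M))[::-1]
a, b, c = lam(A), lam(B), lam(A + B)

# Weyl: lambda_{i+j-1}(A+B) <= lambda_i(A) + lambda_j(B) whenever i+j-1 <= n
for i in range(1, n + 1):
    for j in range(1, n + 1):
        if i + j - 1 <= n:
            assert c[i + j - 2] <= a[i - 1] + b[j - 1] + 1e-10
```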
|
we prove, assuming that the bohr - sommerfeld rules hold, that the joint spectrum near a focus - focus critical value of a quantum integrable system determines the classical lagrangian foliation around the full focus - focus leaf. the result applies, for instance, to h - pseudodifferential operators, and to berezin - toeplitz operators on prequantizable compact symplectic manifolds.
|
arxiv:1302.2260
|
optimistic concurrency control ( occ ) can exploit the strengths of parallel hardware to provide excellent performance for uncontended transactions, and is popular in high - performance in - memory databases and transactional systems. but at high contention levels, occ is susceptible to frequent aborts, leading to wasted work and degraded performance. contention managers, mixed optimistic / pessimistic concurrency control algorithms, and novel optimistic - inspired concurrency control algorithms, such as tictoc, aim to address this problem, but these mechanisms introduce sometimes - high overheads of their own. we show that in real - world benchmarks, traditional occ can outperform these alternative mechanisms by simply adding fine - grained version timestamps ( using different timestamps for disjoint components of each record ). with fine - grained timestamps, occ gets 1. 14x tictoc ' s throughput in tpc - c at 128 cores ( previous work reported tictoc having 1. 8x higher throughput than occ at 80 hyperthreads ). our study shows that timestamp granularity has a greater impact than previously thought on the performance of transaction processing systems, and should not be overlooked in the push for faster concurrency control schemes.
|
arxiv:1811.04967
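a minimal sketch of the paper ' s central idea, fine - grained version timestamps, under simplifying assumptions ( a single - threaded toy with a hypothetical record / validate api, not the actual system ) :

```python
# toy sketch of fine-grained version timestamps for OCC validation:
# each record keeps one version per disjoint component, so a write to
# one component does not abort transactions that only read another.

class Record:
    def __init__(self, **fields):
        self.fields = dict(fields)
        self.versions = {k: 0 for k in fields}   # per-component timestamps

    def read(self, key):
        return self.fields[key], self.versions[key]

    def write(self, key, value):
        self.fields[key] = value
        self.versions[key] += 1

def validate(read_set):
    """OCC validation: every component read must still carry the version seen."""
    return all(rec.versions[key] == ver for rec, key, ver in read_set)

r = Record(balance=100, last_login="2024-01-01")

# txn A reads `balance`; a concurrent txn then updates `last_login` only
_, v = r.read("balance")
read_set = [(r, "balance", v)]
r.write("last_login", "2024-01-02")

assert validate(read_set)        # commits: only a disjoint component changed
r.write("balance", 90)
assert not validate(read_set)    # aborts: the component actually read changed
```

with a single coarse per - record version, the first validation would have failed too, which is exactly the spurious abort the paper eliminates.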
|
we prove the haag duality property of any translation invariant pure state on $ \ clb = \ otimes _ { \ iz } m _ d ( c ), \ ; d \ ge 2 $, where $ m _ d ( c ) $ is the set of $ d \ times d $ dimensional matrices over the field of complex numbers. we also prove a necessary and sufficient condition for a translation invariant factor state to be pure on $ \ clb $. this result makes it possible to study such a pure state with additional symmetry. we prove that a real, lattice symmetric, reflection positive translation invariant pure state with exponentially decaying two point spatial correlation function is a split state. further, there exists no translation invariant pure state on $ \ clb $ that is real, lattice symmetric, reflection positive and $ su ( 2 ) $ invariant when $ d $ is an even integer. this in particular says that the heisenberg iso - spin anti - ferromagnet model for 1 / 2 - odd integer spin degrees of freedom admits spontaneous symmetry breaking at its ground states.
|
arxiv:0904.2104
|
the recently proposed low - scale quantum gravity scenario is expected to have a significant impact on the mirror world hypothesis. some aspects of this influence are investigated here, assuming that the fundamental gravity scale is near a tev. it is shown that future colliders will be capable of producing mirror matter, but an experimental signature that would distinguish such events from the background is unclear. the " smoking gun " signals of the mirror world would be an observation of decays like $ \ upsilon ( 2s ) \ to \ tilde \ chi _ { b2 } \ gamma $. but unfortunately the expected branching ratios are very small. finally, it is shown that a mirror supernova will be quite a spectacular event for our world too, because a considerable amount of ordinary energy is expected to be emitted in the first several seconds.
|
arxiv:hep-ph/9908208
|
we have measured k - shell x - ray spectra of highly ionized argon and phosphorus following charge exchange with molecular hydrogen at low collision energy in an electron beam ion trap using an x - ray calorimeter array with $ \ sim $ 6 ev resolution. we find that the emission at the high - end of the lyman series is greater by a factor of 2 for phosphorus than for argon, even though the measurement was performed concurrently and the atomic numbers are similar. this does not agree with current theoretical models and deviates from the trend observed in previous measurements.
|
arxiv:1008.2478
|
the phase diagram of the collapse of a two - dimensional infinite branched polymer interacting with the solvent and with itself through contact interactions is studied from the $ q \ to 1 $ limit of an extension of the $ q - $ states potts model. exact solution on the bethe lattice and migdal - kadanoff renormalization group calculations show that there is a line of $ \ theta $ transitions from the extended to a single compact phase. the $ \ theta $ line, governed by three different fixed points, consists of two lines of extended - - compact transitions which are in different universality classes and meet in a multicritical point. on the other hand, directed branched polymers are shown to be completely determined by the strongly embedded case and there is a single $ \ theta $ transition which is in the directed percolation universality class.
|
arxiv:cond-mat/9601105
|
sanitizers are a relatively recent trend in software engineering. they aim at automatically finding bugs in programs, and they are now commonly available to programmers as part of compiler toolchains. for example, the llvm project includes out - of - the - box sanitizers to detect thread safety ( tsan ), memory ( asan, msan, lsan ), or undefined behaviour ( ubsan ) bugs. in this article, we present nsan, a new sanitizer for locating and debugging floating - point numerical issues, implemented inside the llvm sanitizer framework. nsan puts emphasis on practicality. it aims at providing precise, and actionable feedback, in a timely manner. nsan uses compile - time instrumentation to augment each floating - point computation in the program with a higher - precision shadow which is checked for consistency during program execution. this makes nsan between 1 and 4 orders of magnitude faster than existing approaches, which allows running it routinely as part of unit tests, or detecting issues in large production applications.
|
arxiv:2102.12782
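nsan ' s shadow - execution idea can be mimicked in a few lines : mirror each float32 computation in float64 and flag large divergence. this is an illustrative python analogue, not the llvm implementation :

```python
import numpy as np

# toy version of the shadow-execution idea: a float32 result is compared
# against a float64 "shadow" of the same computation, and a large relative
# divergence flags a numerical issue.

def shadow_check(x32, x64, rel_tol=1e-4):
    denom = max(abs(float(x64)), 1e-30)
    return abs(float(x32) - float(x64)) / denom <= rel_tol

# catastrophic cancellation: (a + b) - a with |b| << |a|
a, b = np.float32(1e8), np.float32(1e-3)
r32 = (a + b) - a                          # float32 loses b entirely
r64 = (np.float64(a) + np.float64(b)) - np.float64(a)

assert not shadow_check(r32, r64)          # the shadow flags the inconsistency

# a benign computation passes the check
s32, s64 = np.float32(1.5) * np.float32(2.0), 1.5 * 2.0
assert shadow_check(s32, s64)
```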
|
posterior predictive p - values ( ppps ) have become popular tools for bayesian model assessment, being general - purpose and easy to use. however, interpretation can be difficult because their distribution is not uniform under the hypothesis that the model did generate the data. calibrated ppps ( cppps ) can be obtained via a bootstrap - like procedure, yet remain unavailable in practice due to high computational cost. this paper introduces methods to enable efficient approximation of cppps and their uncertainty for fast model assessment. we first investigate the computational trade - off between the number of calibration replicates and the number of mcmc samples per replicate. provided that the mcmc chain from the real data has converged, using short mcmc chains per calibration replicate can save significant computation time compared to naive implementations, without significant loss in accuracy. we propose different variance estimators for the cppp approximation, which can be used to confirm quickly the lack of evidence against model misspecification. as variance estimation uses effective sample sizes of many short mcmc chains, we show these can be approximated well from the real - data mcmc chain. the procedure for cppp is implemented in nimble, a flexible framework for hierarchical modeling that supports many models and discrepancy measures.
|
arxiv:2306.04866
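a schematic version of the calibration loop, using a conjugate normal model so that posterior sampling is exact rather than mcmc ( all model choices, sizes and names here are illustrative assumptions, not the paper ' s setup ) :

```python
import numpy as np

rng = np.random.default_rng(7)

def ppp(y, n_draws=200):
    """posterior predictive p-value for a N(mu, 1) model with flat prior,
    discrepancy = sample mean; posterior is mu | y ~ N(ybar, 1/n)."""
    n, ybar = len(y), y.mean()
    count = 0
    for _ in range(n_draws):
        mu = rng.normal(ybar, 1 / np.sqrt(n))      # exact posterior draw
        y_rep = rng.normal(mu, 1.0, size=n)        # predictive replicate
        count += y_rep.mean() >= ybar
    return count / n_draws

y_obs = rng.normal(0.0, 1.0, size=30)
p_obs = ppp(y_obs)

# calibration replicates: generate data from the fitted model, recompute the
# ppp on each replicate, then locate the observed ppp in that distribution
n, ybar = len(y_obs), y_obs.mean()
p_reps = []
for _ in range(100):
    mu = rng.normal(ybar, 1 / np.sqrt(n))
    y_cal = rng.normal(mu, 1.0, size=n)
    p_reps.append(ppp(y_cal))

cppp = np.mean([p <= p_obs for p in p_reps])       # calibrated p-value
assert 0.0 <= cppp <= 1.0
```

in real applications each calibration replicate needs its own ( short ) mcmc run, which is exactly the computational trade - off the paper studies.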
|
the problem of analytical estimation of the lyapunov exponents and lyapunov timescales of the motion in multiplets of interacting nonlinear resonances is considered. to this end, we elaborate a unified framework, based on the separatrix map theory, which incorporates both an earlier approach for the first fundamental model of perturbed resonance ( given by the perturbed pendulum hamiltonian ) and a new one for its second fundamental model ( given by the perturbed andoyer hamiltonian ). within this framework, new accurate estimates for the lyapunov timescales of the inner and outer subsystems of the solar planetary system are presented and discussed.
|
arxiv:2411.01939
|
as one of the most widespread social dynamics, cooperative behavior is among the most fascinating collective phenomena. several animal species, from social insects to human beings, feature social groups altruistically working for a common benefit. this collaborative conduct pervades the actions and opinions of individuals, yielding strategic decision - making between political, religious, ethnic, and economic social puzzles. here, we explore how cooperative behavior phenomena impact collective opinion dynamics and entropy generation in social groups. we select a random fraction $ f $ of community members as collaborative individuals and model the opinion dynamics using a social temperature parameter $ q $ that functions as a social anxiety noise. with probability $ q $, regular individuals oppose their companions about a social decision, assuming group dissent. collaborative agents experience a reduced effective social noise $ \ mu q $, where $ 0 < \ mu < 1 $ is the social anxiety noise sensibility parameter that enhances social validation. we perform numerical simulations and mean - field analysis and find the system undergoes nonequilibrium order - disorder phase transitions with expressive social entropy production. our results also highlight the effects of an individual social anxiety attenuation level in enhancing group consensus and inducing exuberant collective phenomena in complex systems.
|
arxiv:2311.05803
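a minimal mean - field sketch of the noise - attenuation mechanism : agents flip against the sampled local majority with probability equal to their noise level, so the attenuated noise $ \mu q $ yields a more ordered state. this toy only illustrates the idea, it is not the paper ' s full dynamics ( the parameter values below are invented ) :

```python
import numpy as np

rng = np.random.default_rng(5)

def consensus(noise, n=500, steps=20000):
    """mean-field majority-vote dynamics: with probability `noise` an agent
    adopts the opposite of its sampled neighborhood's majority opinion."""
    spins = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        neighbors = spins[rng.integers(n, size=5)]   # odd size: no ties
        majority = 1 if neighbors.sum() > 0 else -1
        spins[i] = -majority if rng.random() < noise else majority
    return abs(spins.mean())

q, mu = 0.4, 0.3          # q above the order-disorder threshold, mu*q below it
c_regular = consensus(q)          # high noise: disordered, near-zero consensus
c_collab = consensus(mu * q)      # attenuated noise: ordered state

assert c_collab > c_regular       # noise attenuation enhances group consensus
```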
|
x - ray and gamma fluxes from high - intensity laser - plasma interaction are extremely short, well beyond the temporal resolution of any detector. if laser pulses come repetitively, the single - photon counting technique makes it possible to accumulate the photon spectra ; however, their relation to the spectrum of the initial fast - electron population in the plasma is not straightforward. we present an efficient and fast approach based on the geant4 package that significantly reduces the computer time needed to reconstruct the high - energy tail of the electron spectrum from experimental data, accounting for the pileup effect. here, we first tabulate the gamma spectrum from monoenergetic electron bunches of different energies for a given experimental setup, and then compose the simulated spectrum. to account for the pileups, we derive an analytical formula to reverse the data. we also consider errors coming from the approximation of the initial electron spectrum by a sum of monoenergetic impacts, the finite range of the electron spectrum, etc., and give estimates on how to choose modelling parameters to minimize the approximation errors. finally, we present an example of experimental data treatment for the case of laser - solid interaction using a 50 fs laser pulse with intensity above 10 ^ { 18 } w / cm ^ 2.
|
arxiv:2212.08925
|
in the united states, hurricanes are the most devastating natural disasters, causing billions of dollars worth of damage every year. more importantly, construction jobsites are classified among the most vulnerable environments to severe wind events. during hurricanes, unsecured and incomplete elements of construction sites, such as scaffolding, plywood sheets, and metal rods, become potential wind - borne debris, causing cascading damage to the construction projects and the neighboring communities. thus, it is no wonder that construction firms implement jobsite emergency plans to enforce preparedness responses before extreme weather events. however, relying on checklist - based emergency action plans to carry out thorough hurricane preparedness is challenging in large - scale and complex site environments. to enable systematic responses for hurricane preparedness, we have proposed a vision - based technique to identify and analyze the potential wind - borne debris in construction jobsites. building on this, this paper demonstrates the fidelity of a new machine vision - based method to support construction site hurricane preparedness and further discusses its implications. the outcomes indicate that the convenience of visual data collection and the advantages of machine vision - based frameworks enable rapid scene understanding and thus provide a critical heads - up for practitioners to recognize and localize the potential wind - borne debris in construction jobsites and effectively implement hurricane preparedness.
|
arxiv:2110.12968
|
large language models ( llms ) have shown impressive performance in various tasks, including knowledge graph completion ( kgc ). however, current studies mostly apply llms to classification tasks, like identifying missing triplets, rather than ranking - based tasks, where the model ranks candidate entities based on plausibility. this focus limits the practical use of llms in kgc, as real - world applications prioritize highly plausible triplets. additionally, while graph paths can help infer the existence of missing triplets and improve completion accuracy, they often contain redundant information. to address these issues, we propose kg - cf, a framework tailored for ranking - based kgc tasks. kg - cf leverages llms ' reasoning abilities to filter out irrelevant contexts, achieving superior results on real - world datasets. the code and datasets are available at \ url { https : / / anonymous. 4open. science / r / kg - cf }.
|
arxiv:2501.02711
|
for the purpose of finding benchmark quality solutions to time dependent sn transport problems, we develop a numerical method in a discontinuous galerkin ( dg ) framework that utilizes time dependent cell edges, which we call a moving mesh, and an uncollided source treatment. the dg method for discretizing space is a powerful solution technique on smooth problems and is robust on non - smooth problems. in order to realize the potential of the dg method to spectrally resolve smooth problems, our moving mesh and uncollided source treatment is devised to circumvent discontinuities in the solution or the first derivative of the solutions that are admitted in transport calculations. the resulting method achieves spectral convergence on smooth problems, like a standard dg implementation. when applied to problems with nonsmooth sources that induce discontinuities, our moving mesh, uncollided source method returns a significantly more accurate solution than the standard dg method. on problems with smooth sources, we observe spectral convergence even in problems with wave fronts. in problems where the angular flux is inherently non - smooth, as in ganapol ' s ( 2001 ) well known plane pulse benchmark, we do not observe an elevated order of accuracy when compared with static meshes, but there is a reduction in error that is nearly three orders of magnitude.
|
arxiv:2206.13445
|
this paper aims to continue the classification of non - smooth regular curves, but over fields of characteristic three. these curves were originally introduced by zariski as generic fibers of counterexamples to bertini ' s theorem on the variation of singular points of linear series. such a classification has been introduced by st \ " ohr, taking advantage of the equivalent theory of non - conservative function fields, which in turn occurs only over non - perfect fields $ k $ of characteristic $ p > 0 $. we propose here a different approach, relying on the fact that a non - smooth regular curve in $ \ mathbb { p } ^ n _ k $ provides a singular curve when viewed inside $ \ mathbb { p } ^ n _ { k ^ { 1 / p } } $. hence we were naturally led to the question of characterizing singular curves in $ \ mathbb { p } ^ n _ { k ^ { 1 / p } } $ coming from regular curves in $ \ mathbb { p } ^ n _ k $. to understand this phenomenon we consider the notion of integrable connections with zero $ p $ - curvature to extend katz ' s version of cartier ' s theorem for purely inseparable morphisms, where we solve the above characterization in the slightly more general setup of coherent sheaves. moreover, we also had to introduce some new local invariants attached to non - smooth points, such as the differential degree. as an application of the theory developed here, we classify complete, geometrically integral, non - smooth regular curves $ c $ of genus $ 3 $, over a separably closed field $ k $ of characteristic $ 3 $, whose base extension $ c \ times _ { \ operatorname { spec } k } { \ operatorname { spec } \ overline { k } } $ is non - hyperelliptic with normalization having geometric genus $ 1 $.
|
arxiv:2501.17353
|
we consider the two - hop interference channel ( ic ), which consists of two source - destination pairs communicating with each other via two relays. we analyze the degrees of freedom ( dof ) of this network when the relays are restricted to perform linear schemes, and the channel gains are constant ( i. e., slow fading ). we show that, somewhat surprisingly, by using vector - linear strategies at the relays, it is possible to achieve 4 / 3 sum - dof when the channel gains are real. the key achievability idea is to alternate relaying coefficients across time, to create different end - to - end interference structures ( or topologies ) at different times. although each of these topologies has only 1 sum - dof, we manage to achieve 4 / 3 by coding across them. furthermore, we develop a novel outer bound that matches our achievability, hence characterizing the sum - dof of two - hop interference channels with linear schemes. as for the case of complex channel gains, we characterize the sum - dof with linear schemes to be 5 / 3. we also generalize the results to the multi - antenna setting, characterizing the sum - dof with linear schemes to be 2m - 1 / 3 ( for complex channel gains ), where m is the number of antennas at each node.
|
arxiv:1309.0898
|
in this paper we describe our submissions to the 2nd and 3rd slavner shared tasks held at bsnlp 2019 and bsnlp 2021, respectively. the tasks focused on the analysis of named entities in multilingual web documents in slavic languages with rich inflection. our solution takes advantage of large collections of both unstructured and structured documents. the former serve as data for unsupervised training of language models and embeddings of lexical units. the latter refers to wikipedia and its structured counterpart - wikidata, our source of lemmatization rules, and real - world entities. with the aid of those resources, our system could recognize, normalize and link entities, while being trained with only small amounts of labeled data.
|
arxiv:2104.13456
|
an extensive body of empirical research has revealed remarkable regularities in the acquisition, organization, deployment, and neural representation of human semantic knowledge, thereby raising a fundamental conceptual question : what are the theoretical principles governing the ability of neural networks to acquire, organize, and deploy abstract knowledge by integrating across many individual experiences? we address this question by mathematically analyzing the nonlinear dynamics of learning in deep linear networks. we find exact solutions to this learning dynamics that yield a conceptual explanation for the prevalence of many disparate phenomena in semantic cognition, including the hierarchical differentiation of concepts through rapid developmental transitions, the ubiquity of semantic illusions between such transitions, the emergence of item typicality and category coherence as factors controlling the speed of semantic processing, changing patterns of inductive projection over development, and the conservation of semantic similarity in neural representations across species. thus, surprisingly, our simple neural model qualitatively recapitulates many diverse regularities underlying semantic development, while providing analytic insight into how the statistical structure of an environment can interact with nonlinear deep learning dynamics to give rise to these regularities.
|
arxiv:1810.10531
|
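as a hedged illustration of the dynamics this abstract describes ( not the authors' code ), the following minimal numpy sketch trains a two - layer linear network by gradient descent on a hypothetical task with whitened inputs and two input - output modes of different singular value. it reproduces, under these toy assumptions, the stage - like transitions in which stronger modes are learned before weaker ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical task statistics: whitened inputs and an input-output
# correlation matrix with two modes of different singular value.
d_in, d_out, hidden = 4, 4, 4
u = np.linalg.qr(rng.standard_normal((d_out, d_out)))[0]
v = np.linalg.qr(rng.standard_normal((d_in, d_in)))[0]
s = np.array([3.0, 1.0, 0.0, 0.0])            # task singular values
sigma = u @ np.diag(s) @ v.T                  # input-output correlations

# two-layer linear network y = w2 @ w1 @ x with small random init.
w1 = 1e-3 * rng.standard_normal((hidden, d_in))
w2 = 1e-3 * rng.standard_normal((d_out, hidden))

lr, mode_strength = 0.05, []
for step in range(1000):
    w = w2 @ w1
    grad = w - sigma                          # d/dw of 0.5*||sigma - w||^2
    w1 -= lr * (w2.T @ grad)
    w2 -= lr * (grad @ w1.T)
    # track how much of each task mode the network has acquired.
    mode_strength.append(np.diag(u.T @ (w2 @ w1) @ v))

mode_strength = np.array(mode_strength)
# stage-like transitions: the strong mode (s=3) is acquired before the
# weak one (s=1), mirroring hierarchical differentiation of concepts.
t_strong = int(np.argmax(mode_strength[:, 0] > 1.5))
t_weak = int(np.argmax(mode_strength[:, 1] > 0.5))
print(t_strong, t_weak)
```

the exponential growth rate of each mode scales with its singular value, so the crossing times printed above are well separated, which is the mechanism behind the rapid developmental transitions the paper analyzes.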
gaseous detectors are used in both low energy and high energy physics experiments. the present day gaseous detectors, i. e., the micro - pattern gaseous detectors ( mpgd ), are more efficient and faster. the gas electron multiplier ( gem ) is quite well known among the mpgd members. the mpgds are also being used in other applications like tomography / imaging ; moreover, the hybridization of two different kinds of mpgd has recently become another emerging subject of r & d. ion feedback is an intrinsic drawback of gaseous detectors. however, it is not a big issue where the event rate is not very high, or the drift volume is not too large. here, we show a simple experimental technique to find the ion feedback of a single gem foil. this can address the experiments / applications where a single gem foil is employed.
|
arxiv:1708.02118
|
we present experimental observations and numerical simulations of nonequilibrium spatial structures in a trapped bose - einstein condensate subject to oscillatory perturbations. in experiment, collective excitations appear first, followed by quantum vortices. increasing the amount of injected energy leads to the formation of vortex tangles representing quantum turbulence. we study what happens beyond the regime of quantum turbulence as the amount of injected energy is increased further. in such a strongly nonequilibrium bose - condensed system of trapped atoms, the vortices are destroyed and a new kind of spatial structure develops, exhibiting an essentially heterogeneous spatial density. the structure resembles fog, consisting of high - density droplets, or grains, surrounded by regions of low density. the grains are randomly distributed in space, where they move. they live long enough to be treated as a type of metastable object. such structures have been observed in nonequilibrium trapped bose gases of $ ^ { 87 } $ rb subject to the action of alternating fields. here we present experimental results and support them by numerical simulations. the granular, or fog, structure is essentially different from the state of wave turbulence that develops when the amount of injected energy is increased still further.
|
arxiv:1407.5603
|
thanks to the rapidly developing technology, unmanned aerial vehicles ( uavs ) are able to complete a number of tasks in cooperation with each other without need for human intervention. in recent years, uavs, which are widely utilized in military missions, have begun to be deployed in civilian applications and mostly for commercial purposes. with their growing numbers and range of applications, uavs are becoming more and more popular ; on the other hand, they are also the target of various threats which can exploit vulnerabilities of uav systems in order to cause destructive effects. it is therefore critical that security is ensured for uavs and the networks that provide communication between uavs. this survey seeks to provide a comprehensive perspective on security within the domain of uavs and flying ad hoc networks ( fanets ). our approach incorporates attack surface analysis and aligns it with the identification of potential threats. additionally, we discuss countermeasures proposed in the existing literature in two categories : preventive and detection strategies. our primary focus centers on the security challenges inherent to fanets, acknowledging their susceptibility to insider threats due to their decentralized and dynamic nature. to provide a deeper understanding of these challenges, we simulate and analyze four distinct routing attacks on fanets, using realistic parameters to evaluate their impact. hence, this study transcends a standard review by integrating an attack analysis based on extensive simulations. finally, we rigorously examine open issues and propose research directions to guide future endeavors in this field.
|
arxiv:2306.14281
|
topological insulator phases of non - interacting particles have been generalized from periodic crystals to amorphous lattices, which raises the question of whether topologically ordered quantum many - body phases may similarly exist in amorphous systems. here we construct a soluble chiral amorphous quantum spin liquid by extending the kitaev honeycomb model to random lattices with fixed coordination number three. the model retains its exact solubility, but the presence of plaquettes with an odd number of sides leads to a spontaneous breaking of time reversal symmetry. we unearth a rich phase diagram displaying abelian as well as non - abelian quantum spin liquid phases with a remarkably simple ground state flux pattern. furthermore, we show that the system undergoes a finite - temperature phase transition to a conducting thermal metal state and discuss possible experimental realisations.
|
arxiv:2208.08246
|
long - range spatial coherence can be induced in thermal emitters by embedding a periodic grating into a material supporting propagating polaritons or dielectric modes. however, the emission angle and frequency cannot be defined simultaneously and uniquely, resulting in emission at unusable angles or frequencies. here, we explore superstructure gratings ( ssgs ) to control the spatial and spectral properties of thermal emitters. ssgs have long - range periodicity, but a unit cell that provides tailorable bragg components to interact with light. these bragg components allow simultaneous launching of polaritons with different frequencies / wavevectors in a single grating, manifesting as additional spatial and spectral bands upon the emission profile. as the unit cell period approaches the spatial coherence length, the coherence properties of the superstructure will be lost. whilst the 1d k - space representation of the grating provides insights into the emission, the etch depth of the grating can result in strong polariton - polariton interactions. an emergent effect of these interactions is the creation of polaritonic band gaps, and defect states that can have a well - defined frequency and emission angle. in all, our results show experimentally how even in simple 1d gratings there is significant design flexibility for engineering the profile of thermal emitters, bound by finite coherence length.
|
arxiv:2012.08611
|
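the key idea in this abstract, that a superstructure unit cell contributes several tailorable bragg components where a simple grating contributes essentially one, can be illustrated with a toy fourier decomposition ( an assumption - laden sketch, not the authors' design ; the profiles, pitches, and 25% threshold below are all hypothetical ).

```python
import numpy as np

def bragg_spectrum(profile):
    """magnitudes of the fourier (bragg) components of a grating profile."""
    return np.abs(np.fft.rfft(profile)) / len(profile)

n = 1024
x = np.arange(n) / n

# simple grating: a single groove pitch -> power concentrated in one
# fundamental spatial harmonic (plus weak odd overtones).
simple = (np.mod(x, 1 / 8) < 1 / 16).astype(float)

# superstructure grating: same long-range period, but a unit cell that
# mixes two groove pitches -> several comparable bragg components.
ssg = (((np.mod(x, 1 / 8) < 1 / 16) & (x < 0.5))
       | ((np.mod(x, 1 / 12) < 1 / 24) & (x >= 0.5))).astype(float)

s_simple = bragg_spectrum(simple)
s_ssg = bragg_spectrum(ssg)

def strong(s):
    """number of harmonics (above dc) within 25% of the strongest one."""
    return int(np.sum(s[1:] >= 0.25 * s[1:].max()))

print(strong(s_simple), strong(s_ssg))
```

each strong fourier component corresponds to a bragg wavevector the grating can supply to launch a polariton, so the superstructure's richer spectrum maps onto the additional spatial and spectral emission bands described above.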
we construct landau - ginzburg lagrangians for minimal bosonic ( $ n = 0 $ ) $ w $ - models perturbed with the least relevant field, inspired by the theory of $ n = 2 $ supersymmetric landau - ginzburg lagrangians. they agree with the lagrangians for unperturbed models previously found with zamolodchikov ' s method. we briefly study their properties, e. g. the perturbation algebra and the soliton structure. we conclude that the known properties of $ n = 2 $ solitons ( bps, lines in $ w $ plane, etc. ) hold as well. hence, a connection with a generalized supersymmetric structure of minimal $ w $ - models is conjectured.
|
arxiv:hep-th/9602001
|
in many practical applications of multiple hypothesis testing using the false discovery rate ( fdr ), the given hypotheses can be naturally partitioned into groups, and one may not only want to control the number of false discoveries ( wrongly rejected null hypotheses ), but also the number of falsely discovered groups of hypotheses ( we say a group is falsely discovered if at least one hypothesis within that group is rejected, when in reality the group contains only nulls ). in this paper, we introduce the p - filter, a procedure which unifies and generalizes the standard fdr procedure by benjamini and hochberg and the global null testing procedure by simes. we first prove that our proposed method can simultaneously control the overall fdr at the finest level ( individual hypotheses treated separately ) and the group fdr at coarser levels ( when such groups are user - specified ). we then generalize the p - filter procedure even further to handle multiple partitions of hypotheses, since that might be natural in many applications. for example, in neuroscience experiments, we may have a hypothesis for every ( discretized ) location in the brain, and at every ( discretized ) timepoint : does the stimulus correlate with activity in location x at time t after the stimulus was presented? in this setting, one might want to group hypotheses by location and by time. importantly, our procedure can handle multiple partitions which are nonhierarchical ( i. e., one partition may arrange p - values by voxel, and another partition arranges them by time point ; neither one is nested inside the other ). we prove that our procedure controls fdr simultaneously across these multiple layers, under assumptions that are standard in the literature : we do not need the hypotheses to be independent, but require a nonnegative dependence condition known as prds.
|
arxiv:1512.03397
|
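the two building blocks the p - filter unifies, the benjamini - hochberg step - up procedure and the simes global null test, can be sketched in a few lines of numpy. this is only the two ingredients applied layer by layer on hypothetical p - values, not the full p - filter fixed - point algorithm.

```python
import numpy as np

def bh_rejections(pvals, alpha):
    """benjamini-hochberg step-up procedure: boolean rejection mask."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True          # reject the k smallest p-values
    return mask

def simes_pvalue(pvals):
    """simes combination p-value for a group's global null."""
    p = np.sort(np.asarray(pvals, dtype=float))
    m = len(p)
    return float(np.min(m * p / np.arange(1, m + 1)))

# hypothetical data: group 0 contains signal, group 1 is pure noise.
groups = [np.array([1e-4, 3e-4, 0.2, 0.7]),
          np.array([0.3, 0.55, 0.8, 0.95])]

# finest layer: bh over all individual hypotheses.
individual = bh_rejections(np.concatenate(groups), alpha=0.1)
# coarser layer: bh over the per-group simes p-values.
group_p = [simes_pvalue(g) for g in groups]
group_level = bh_rejections(group_p, alpha=0.1)
print(individual, group_level)
```

on this toy data only the two genuinely small p - values survive the individual layer and only the first group survives the group layer ; the p - filter's contribution is proving that such simultaneous control holds jointly across layers under prds.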
a relatively recent study by mars and senovilla provided us with a uniqueness result for the exterior vacuum gravitational field generated by an isolated distribution of matter in axial rotation in equilibrium in general relativity. the generalisation to exterior electrovacuum gravitational fields, to include charged rotating objects, is presented here.
|
arxiv:gr-qc/0507062
|
it is encouraging to see the progress that has been made in bridging videos and natural language. however, mainstream video captioning methods suffer from slow inference speed due to the sequential manner of autoregressive decoding, and prefer generating generic descriptions due to the insufficient training of visual words ( e. g., nouns and verbs ) and inadequate decoding paradigm. in this paper, we propose a non - autoregressive decoding based model with a coarse - to - fine captioning procedure to alleviate these defects. in implementations, we employ a bi - directional self - attention based network as our language model for achieving inference speedup, based on which we decompose the captioning procedure into two stages, where the model has different focuses. specifically, given that visual words determine the semantic correctness of captions, we design a mechanism of generating visual words to not only promote the training of scene - related words but also capture relevant details from videos to construct a coarse - grained sentence " template ". thereafter, we devise dedicated decoding algorithms that fill in the " template " with suitable words and modify inappropriate phrasing via iterative refinement to obtain a fine - grained description. extensive experiments on two mainstream video captioning benchmarks, i. e., msvd and msr - vtt, demonstrate that our approach achieves state - of - the - art performance, generates diverse descriptions, and obtains high inference efficiency. our code is available at https : / / github. com / yangbang18 / non - autoregressive - video - captioning.
|
arxiv:1911.12018
|
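the coarse - to - fine, mask - predict style of non - autoregressive decoding described above can be sketched with a toy example. everything here is hypothetical : the vocabulary, the fixed confidence table ( standing in for the bi - directional self - attention language model ), and the linear commit schedule are illustrative assumptions, not the paper's model.

```python
import numpy as np

# tiny vocabulary and a fixed, hypothetical per-position confidence
# table standing in for a bidirectional language model's predictions.
VOCAB = ["a", "man", "is", "playing", "guitar"]
CONF = np.array([
    [0.90, 0.02, 0.02, 0.02, 0.04],   # position 0 -> "a"
    [0.05, 0.80, 0.05, 0.05, 0.05],   # position 1 -> "man"
    [0.10, 0.10, 0.60, 0.10, 0.10],   # position 2 -> "is"
    [0.05, 0.05, 0.05, 0.70, 0.15],   # position 3 -> "playing" (visual word)
    [0.10, 0.05, 0.05, 0.10, 0.70],   # position 4 -> "guitar"  (visual word)
])

def mask_predict(conf, iterations=3):
    """all positions are predicted in parallel; each round commits the
    most confident slots first (the coarse "template"), then fills in
    the rest over later rounds -- iterative refinement in miniature."""
    length = conf.shape[0]
    tokens = [None] * length
    for it in range(1, iterations + 1):
        best = conf.argmax(axis=1)                # parallel prediction
        best_conf = conf.max(axis=1)
        n_keep = int(np.ceil(length * it / iterations))
        for i in np.argsort(-best_conf)[:n_keep]:
            tokens[i] = VOCAB[best[i]]            # commit confident slots
    return " ".join(tokens)

print(mask_predict(CONF))
```

because all positions are scored in parallel, decoding cost grows with the number of refinement rounds rather than the caption length, which is the source of the inference speedup the abstract claims.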
we give the inversion formula and the plancherel formula for the hypergeometric fourier transform associated with a root system of type $ bc $, when the multiplicity parameters are not necessarily nonnegative.
|
arxiv:2007.08281
|
demonstrating small error rates by integrating quantum error correction ( qec ) into an architecture of quantum computing is the next milestone towards scalable fault - tolerant quantum computing ( ftqc ). encoding logical qubits with superconducting qubits and surface codes is considered a promising candidate for ftqc architectures. in this paper, we propose an ftqc architecture, which we call q3de, that enhances the tolerance to multi - bit burst errors ( mbbes ) by cosmic rays with moderate changes and overhead. there are three core components in q3de : in - situ anomaly detection, dynamic code deformation, and optimized error decoding. in this architecture, mbbes are detected only from syndrome values for error correction. the effect of mbbes is immediately mitigated by dynamically increasing the encoding level of logical qubits and re - estimating the probable recovery operation by rolling back the decoding process. we investigate the performance and overhead of the q3de architecture with quantum - error simulators and demonstrate that q3de effectively reduces the period of mbbes by 1000 times and halves the size of their region. therefore, q3de significantly relaxes the requirement of qubit density and qubit chip size to realize ftqc. our scheme is versatile for mitigating mbbes, i. e., temporal variations of error properties, on a wide range of physical devices and ftqc architectures since it relies only on the standard features of topological stabilizer codes.
|
arxiv:2501.00331
|