text (stringlengths 1-3.65k) | source (stringlengths 15-79)
---|---|
Direct-write multi-photon laser lithography (MPL) combines the highest resolution on the nanoscale with essentially unlimited 3D design freedom. Over the past years, the groundbreaking potential of this technique has been demonstrated in various application fields, including micromechanics, material sciences, microfluidics, life sciences as well as photonics, where in-situ printed optical coupling elements offer new perspectives for package-level system integration. However, millimeter-wave (mmW) and terahertz (THz) devices could not yet leverage the unique strengths of MPL, even though the underlying devices and structures could also greatly benefit from 3D freeform microfabrication. One of the key challenges in this context is the fact that functional mmW and THz structures require materials with high electrical conductivity and low dielectric losses, which are not amenable to structuring by multi-photon polymerization. In this work, we introduce and experimentally demonstrate a novel approach that makes it possible to leverage MPL for fabricating high-performance mmW and THz structures with hitherto unachieved functionalities. Our concept exploits in-situ printed polymer templates that are selectively coated through highly directive metal deposition techniques in combination with precisely aligned 3D-printed shadowing structures. The resulting metal-coated freeform structures offer high surface quality in combination with low dielectric losses and conductivities comparable to bulk material values, while lending themselves to fabrication on planar mmW/THz circuits. We experimentally show the viability of our concept by demonstrating a series of functional THz structures such as THz interconnects, probe tips, and suspended antennas. We believe that our approach offers disruptive potential in the field of mmW and THz technology and may unlock an entirely new realm of laser-based 3D manufacturing.
|
arxiv:2401.03316
|
Long, stable and free-standing linear atomic carbon wires have been carved out from graphene recently [Meyer et al.: Nature (London) 2008, 454, 319; Jin et al.: Phys. Rev. Lett. 2009, 102, 205501]. They can be considered as extremely narrow graphene nanoribbons or extremely thin carbon nanotubes. It might even be possible to make use of the high-strength and identical (without chirality) carbon wires as transport channels or on-chip interconnects for field-effect transistors. Here we investigate electron transport properties of linear atomic carbon wire-graphene junctions by the nonequilibrium Green's function method combined with density functional theory. For short wires, linear ballistic transport is observed in odd-numbered wires but destroyed by the Peierls distortion in even-numbered wires. For wires longer than 2.1 nm as fabricated above, however, the ballistic conductance of carbon wire-graphene junctions is remarkably robust against the Peierls distortion, structural imperfections, and hydrogen impurity adsorption of the linear carbon wires, except for oxygen impurities. As such, epoxy groups might be the origin of the low conductance experimentally observed in carbon wires. Moreover, double atomic carbon wires exhibit a negative differential resistance (NDR) effect.
|
arxiv:0910.4006
|
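The odd/even Peierls dichotomy noted in the abstract above can be illustrated with a far simpler toy than the paper's NEGF+DFT treatment: a dimerized one-dimensional tight-binding chain with alternating hoppings t1 and t2, whose bands are E(k) = +/-|t1 + t2*e^{ik}|, so the direct gap is 2|t1 - t2|. Equal hoppings (no distortion) give a gapless, ballistic chain; unequal hoppings (a Peierls distortion) open a gap. The hopping values below are made-up illustrative numbers, not values from the paper.

```python
import math

def band_gap(t1, t2, nk=2001):
    """Minimum direct gap of a dimerized 1-D chain with alternating
    hoppings t1, t2; the two bands are E(k) = +/-|t1 + t2*exp(ik)|."""
    gap = float("inf")
    for i in range(nk):
        k = -math.pi + 2 * math.pi * i / (nk - 1)
        e = abs(complex(t1 + t2 * math.cos(k), t2 * math.sin(k)))
        gap = min(gap, 2 * e)
    return gap

uniform_gap = band_gap(1.0, 1.0)   # undistorted chain: gapless (metallic)
peierls_gap = band_gap(1.1, 0.9)   # distorted chain: gap = 2|t1 - t2| = 0.4
```

The gap closes at k = pi when t1 = t2, which is why the undistorted (odd-numbered) wires conduct ballistically, while any dimerization immediately opens a gap.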
We introduce a novel approach for rendering static and dynamic 3D neural signed distance functions (SDFs) in real time. We rely on nested neighborhoods of zero-level sets of neural SDFs, and mappings between them. This framework supports animations and achieves real-time performance without the use of spatial data structures. It consists of three uncoupled algorithms representing the rendering steps. The multiscale sphere tracing focuses on minimizing iteration time by using coarse approximations in earlier iterations. The neural normal mapping transfers details from a fine neural SDF to a surface nested in a neighborhood of its zero-level set. It is smooth and does not depend on surface parametrizations. As a result, it can be used to fetch smooth normals for discrete surfaces such as meshes and to skip later iterations when sphere tracing level sets. Finally, we propose an algorithm for analytic normal calculation for MLPs and describe ways to obtain sequences of neural SDFs to use with the algorithms.
|
arxiv:2201.09147
|
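The sphere-tracing step underlying the abstract above admits a compact illustration. The sketch below uses an analytic sphere SDF as a stand-in for a neural SDF (the scene, ray, and tolerance are made-up illustrative values); the paper's multiscale variant would additionally loosen the hit tolerance `eps` in early iterations and tighten it later.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Analytic SDF of a sphere, standing in for a neural SDF."""
    return math.dist(p, center) - radius

def sphere_trace(origin, direction, sdf, eps=1e-4, max_iter=128, t_max=100.0):
    """Classic sphere tracing: march along the ray by the SDF value,
    which is always a safe step for a true signed distance function."""
    t = 0.0
    for _ in range(max_iter):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:      # hit: within eps of the zero-level set
            return t
        t += d           # safe step size
        if t > t_max:
            break
    return None          # miss

# A ray aimed straight at the sphere hits its surface at distance 2.0
t_hit = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
```

Because the SDF value is a lower bound on the distance to the surface, each step cannot overshoot, which is what makes coarse approximations safe to use in early iterations.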
This paper studies convergence to equilibrium for second-order Langevin dynamics under general growth conditions on the potential. Although we are principally motivated by the case when the potential is singular, e.g. when the dynamics has repulsive forces and/or interactions, the results presented in this paper hold more generally. In particular, our main result is that, given (very) basic structural and growth conditions on the potential, the dynamics relaxes to equilibrium exponentially fast in an explicitly measurable way. The ``explicitness'' of this result comes directly from the constants appearing in the growth conditions, which can all be readily estimated, and a local Poincar\'{e} constant for the invariant measure $\mu$. This result is applied to the specific situation of a singular interaction and polynomial confining well to provide explicit estimates on the exponential convergence rate $e^{-\sigma}$ in terms of the number $n \geqslant 1$ of particles in the system. We will see that $\sigma \geqslant c/(\rho \vee n^p)$, where $\rho > 0$ is the local Poincar\'{e} constant for $\mu$ and $c > 0, p \geqslant 1$ are constants that are independent of $n$.
|
arxiv:1907.03092
|
Non-zero spin-orbit coupling has been reported in several unconventional superconductors due to the absence of inversion symmetry. This contrasts with cuprate superconductors, where such an interaction has long been neglected. The recent report of a non-trivial spin-orbit coupling in the overdoped Bi2212 cuprate superconductor has re-opened an old debate on both the source and role of such an interaction and its evolution throughout the superconducting dome. Using high-resolution spin- and angle-resolved photoemission spectroscopy, we reveal a momentum-dependent spin texture throughout the hole-doped side of the superconducting phase diagram for single- and double-layer bismuth-based cuprates. The universality of the reported effect among different dopings and the disappearance of the spin polarization upon lead substitution suggest a common source. We argue that local structural fluctuations of the CuO planes and the resulting charge imbalance may cause local inversion symmetry breaking and spin polarization, which might be crucial for understanding cuprate physics.
|
arxiv:2408.05913
|
We consider a one-dimensional random walk $S_n$ having i.i.d. increments with zero mean and finite variance. We continue our study of asymptotic expansions for the local probabilities $\mathbf{P}(S_n = x, \tau_0 > n)$, which was started in \cite{DTW23}. The expansions obtained there make sense only in the zone $x = o\left(\frac{\sqrt{n}}{\log^{1/2} n}\right)$. In the present paper we derive an alternative expansion, which deals with $x$ of order $\sqrt{n}$.
|
arxiv:2412.09145
|
We find new characterizations for the points in the \textit{symmetrized polydisc} $\mathbb{G}_n$, a family of domains associated with the spectral interpolation, defined by \[ \mathbb{G}_n := \left\{ \left( \sum_{1 \leq i \leq n} z_i, \sum_{1 \leq i < j \leq n} z_i z_j, \dots, \prod_{i=1}^n z_i \right) : |z_i| < 1, i = 1, \dots, n \right\}. \] We introduce a new family of domains which we call \textit{the extended symmetrized polydisc} $\widetilde{\mathbb{G}}_n$, defined in the following way: \begin{align*} \widetilde{\mathbb{G}}_n := \bigg\{ (y_1, \dots, y_{n-1}, q) \in \mathbb{C}^n :\; q \in \mathbb{D},\; y_j = \beta_j + \bar{\beta}_{n-j} q,\; \beta_j \in \mathbb{C} &\text{ and } \\ |\beta_j| + |\beta_{n-j}| < {n \choose j} &\text{ for } j = 1, \dots, n-1 \bigg\}. \end{align*} We show that $\mathbb{G}_n = \widetilde{\mathbb{G}}_n$ for $n = 1, 2$ and that $\mathbb{G}_n \subsetneq \widetilde{\mathbb{G}}_n$ for $n \geq 3$. We first obtain a variety of characterizations for the points in $\widetilde{\mathbb{G}}_n$ and we apply these necessary and sufficient conditions to produce an analogous set of characterizations for the points in $\mathbb{G}_n$. Also we obtain similar characterizations for the points in $\Gamma_n \setminus \mathbb{G}_n$, where $\Gamma_n$
|
arxiv:1904.03745
|
While in every organization corporate culture and history change over time, intentional efforts to identify performance problems are of particular interest when trying to understand the current state of an organization. The results of past improvement initiatives can shed light on the evolution of an organization and represent, with the advantage of perfect hindsight, a learning opportunity for future process improvements. The opportunity to test this premise occurred in an applied research collaboration with the Swedish Transport Administration, the government agency responsible for the planning, implementation, and maintenance of long-term rail, road, shipping, and aviation infrastructure in Sweden. This article is part of a theme issue on process improvement.
|
arxiv:2309.12439
|
Consider a diffusion process corresponding to the operator $L = \frac12 a \frac{d^2}{dx^2} + b \frac{d}{dx}$ and which is transient to $+\infty$. For $c > 0$, we give an explicit criterion in terms of the coefficients $a$ and $b$ which determines whether or not the diffusion almost surely eventually stops making down-crossings of length $c$. As a particular case, we show that if $a = 1$, then the diffusion almost surely stops making down-crossings of length $c$ if $b(x) \ge \frac{1}{2c} \log x + \frac{\gamma}{c} \log\log x$, for some $\gamma > 1$ and for large $x$, but makes down-crossings of length $c$ at arbitrarily large times if $b(x) \le \frac{1}{2c} \log x + \frac{1}{c} \log\log x$, for large $x$.
|
arxiv:0912.1973
|
Context. The majority of bright extragalactic gamma-ray sources are blazars. Only a few radio galaxies have been detected by Fermi/LAT. Recently, the GHz-peaked spectrum source PKS 1718-649 was confirmed to be gamma-ray bright, providing further evidence for the existence of a population of gamma-ray loud, compact radio galaxies. A spectral turnover in the radio spectrum in the MHz to GHz range is a characteristic feature of these objects, which are thought to be young due to their small linear sizes. The multiwavelength properties of the gamma-ray source PMN J1603-4904 suggest that it is a member of this source class. Aims. The known radio spectrum of PMN J1603-4904 can be described by a power law above 1 GHz. Using observations from the Giant Metrewave Radio Telescope (GMRT) at 150, 325, and 610 MHz, we investigate the behaviour of the spectrum at lower frequencies to search for a low-frequency turnover. Methods. Data from the TIFR GMRT Sky Survey (TGSS ADR) catalogue and archival GMRT observations were used to construct the first MHz to GHz spectrum of PMN J1603-4904. Results. We detect a low-frequency turnover of the spectrum and measure the peak position at about 490 MHz (rest-frame), which, using the known relation of peak frequency and linear size, translates into a maximum linear source size of ~1.4 kpc. Conclusions. The detection of the MHz peak indicates that PMN J1603-4904 is part of this population of radio galaxies with turnover frequencies in the MHz to GHz regime. Therefore it can be considered the second confirmed object of this kind detected in gamma-rays. Establishing this gamma-ray source class will help to investigate the gamma-ray production sites and to test broadband emission models.
|
arxiv:1609.01992
|
Design proponents sought to reintroduce creationist ideas into science classrooms while sidestepping the First Amendment's prohibition against religious infringement. However, the intelligent design curriculum was struck down as a violation of the Establishment Clause in Kitzmiller v. Dover Area School District; the judge in the case ruled "that ID is nothing less than the progeny of creationism." Today, creation science as an organized movement is primarily centered within the United States. Creation science organizations are also known in other countries, most notably Creation Ministries International, which was founded (under the name Creation Science Foundation) in Australia. Proponents are usually aligned with a Christian denomination, primarily with those characterized as evangelical, conservative, or fundamentalist. While creationist movements also exist in Islam and Judaism, these movements do not use the phrase creation science to describe their beliefs. == Issues == Creation science has its roots in the work of young Earth creationist George McCready Price disputing modern science's account of natural history, focusing particularly on geology and its concept of uniformitarianism, and his efforts instead to furnish an alternative empirical explanation of observable phenomena which was compatible with strict biblical literalism. Price's work was later discovered by civil engineer Henry M. Morris, who is now considered to be the father of creation science. Morris and later creationists expanded the scope with attacks against the broad spectrum of scientific findings that point to the antiquity of the universe and common ancestry among species, including a growing body of evidence from the fossil record, absolute dating techniques, and cosmogony. The proponents of creation science often say that they are concerned with religious and moral questions as well as natural observations and predictive hypotheses. Many state that their opposition to scientific evolution is primarily based on religion. The overwhelming majority of scientists are in agreement that the claims of science are necessarily limited to those that develop from natural observations and experiments which can be replicated and substantiated by other scientists, and that claims made by creation science do not meet those criteria. Duane Gish, a prominent creation science proponent, has similarly claimed, "We do not know how the Creator created, what processes He used, for He used processes which are not now operating anywhere in the natural universe. This is why we refer to creation as special creation. We cannot discover by scientific investigation anything about the creative processes used by the Creator." But he also makes the same claim against science's evolutionary theory, maintaining that on the subject of origins, scientific evolution is a
|
https://en.wikipedia.org/wiki/Creation_science
|
Let $M$ be a closed triangulable manifold, and let $\Delta$ be a triangulation of $M$. What is the smallest number of vertices that $\Delta$ can have? How big or small can the number of edges of $\Delta$ be as a function of the number of vertices? More generally, what are the possible face numbers ($f$-numbers, for short) that $\Delta$ can have? In other words, what restrictions does the topology of $M$ place on the possible $f$-numbers of triangulations of $M$? To make things even more interesting, we can add some combinatorial conditions on the triangulations we are considering (e.g., flagness, balancedness, etc.) and ask what additional restrictions these combinatorial conditions impose. While only a few theorems in this area of combinatorics were known a couple of decades ago, in the last ten years or so the field simply exploded with new results and ideas. Thus we feel that a survey paper is long overdue. As new theorems are being proved while we are typing this chapter, and as we have only a limited number of pages, we apologize in advance to our friends and colleagues, some of whose results will not get mentioned here.
|
arxiv:1505.06380
|
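One concrete data point for the face-number questions posed in the abstract above: the boundary of the (d+1)-simplex triangulates the d-sphere with the minimum possible number of vertices, d+2, and its f-numbers are binomial coefficients. The sketch below is our own illustration, not taken from the survey; it computes these f-vectors and checks the Euler characteristic.

```python
from math import comb

def simplex_boundary_f_vector(d):
    """f-vector of the boundary of the (d+1)-simplex, the vertex-minimal
    triangulation of the d-sphere: f_i = C(d+2, i+1) for i = 0..d."""
    return [comb(d + 2, i + 1) for i in range(d + 1)]

def euler_characteristic(f):
    """Alternating sum f_0 - f_1 + f_2 - ... of the face numbers."""
    return sum((-1) ** i * fi for i, fi in enumerate(f))

f_s2 = simplex_boundary_f_vector(2)  # boundary of the tetrahedron: [4, 6, 4]
chi = euler_characteristic(f_s2)     # 4 - 6 + 4 = 2, as for any 2-sphere
```

The Euler relation is the simplest of the linear constraints that the topology of a manifold places on the possible f-numbers of its triangulations.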
The calculus of finite differences is a solid foundation for the development of operations such as the derivative and the integral for infinite sequences. Here we show a way to extend it to finite sequences. We can then define convexity for finite sequences and some related concepts. To finalize, we propose a way to go from our extension back to the calculus of finite differences.
|
arxiv:1606.02182
|
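The discrete operations discussed in the abstract above are easy to make concrete. The sketch below (function names are our own, not the paper's) implements the forward difference operator and the second-difference test for convexity of a finite sequence; summing the differences telescopes back to the endpoints, the discrete analogue of the fundamental theorem of calculus.

```python
def forward_diff(seq):
    """Forward difference (delta a)_n = a_{n+1} - a_n, the sequence
    analogue of the derivative; one element shorter than the input."""
    return [b - a for a, b in zip(seq, seq[1:])]

def is_convex(seq):
    """A finite sequence is convex when its second difference is
    nonnegative wherever it is defined."""
    return all(d >= 0 for d in forward_diff(forward_diff(seq)))

squares = [n * n for n in range(6)]  # 0, 1, 4, 9, 16, 25
diffs = forward_diff(squares)        # 1, 3, 5, 7, 9 (the odd numbers)
telescoped = sum(diffs)              # equals squares[-1] - squares[0] = 25
```

Note that each application of the difference operator shortens the sequence by one, which is exactly the boundary bookkeeping that an extension to finite sequences has to manage.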
We investigate the relevance of multiple-orbital and multipole effects during high-harmonic generation (HHG). The time-dependent configuration-interaction singles (TDCIS) approach is used to study the impact of the detailed description of the residual electron-ion interaction on the HHG spectrum. We find that the shape and position of the Cooper minimum in the HHG spectrum of argon change significantly depending on whether or not interchannel interactions are taken into account. The HHG yield can be underestimated by up to 2 orders of magnitude in the energy region of 30-50 eV. We show that the argument of low ionization probability is not sufficient to justify ignoring multiple-orbital contributions. Additionally, we find the HHG yield is sensitive to the nonspherical multipole character of the electron-ion interaction.
|
arxiv:1202.4855
|
There have been major advances in the field of data science in the last few decades, and these have been utilized for different engineering disciplines and applications. Artificial intelligence (AI), machine learning (ML) and deep learning (DL) algorithms have been utilized for civil structural health monitoring (SHM), especially for damage detection applications using sensor data. Although ML and DL methods show superior learning skills for complex data structures, they require plenty of data for training. However, in SHM, data collection from civil structures can be expensive and time-consuming; in particular, getting useful data (damage-associated data) can be challenging. The objective of this study is to address the data scarcity problem for damage detection applications. This paper employs 1-D Wasserstein deep convolutional generative adversarial networks using gradient penalty (1-D WDCGAN-GP) for synthetic labelled acceleration data generation. Then, the generated data is augmented with varying ratios for the training dataset of a 1-D deep convolutional neural network (1-D DCNN) for damage detection application. The damage detection results show that the 1-D WDCGAN-GP can be successfully utilized to tackle data scarcity in vibration-based damage detection applications of civil structures. Keywords: structural health monitoring (SHM), structural damage detection, 1-D deep convolutional neural networks (1-D DCNN), 1-D generative adversarial networks (1-D GAN), Wasserstein generative adversarial networks with gradient penalty (WGAN-GP)
|
arxiv:2112.03478
|
High-dimensional quantum information processing has become a mature field of research, with several different approaches being adopted for the encoding of $d$-dimensional quantum systems. Such progress has fueled the search for reliable quantum tomographic methods aiming at the characterization of these systems, with most of these methods specifically designed for a given scenario. Here, we report on a new tomographic method based on multiply symmetric states and on experimental investigations to study its performance in higher dimensions. Unlike other methods, it is guaranteed to exist in any dimension and provides a significant reduction in the number of measurement outcomes when compared to standard quantum tomography. Furthermore, in the case of odd dimensions, the method requires the least possible number of measurement outcomes. In our experiment we adopt the technique where high-dimensional quantum states are encoded using the linear transverse momentum of single photons and are controlled by spatial light modulators. Our results show that fidelities of $0.984 \pm 0.009$ with ensemble sizes of only $1.5 \times 10^5$ photons in dimension $d = 15$ can be obtained in typical laboratory conditions, thus showing its practicability in higher dimensions.
|
arxiv:1808.07148
|
We introduce two partially overlapping classes of pathwise dualities between interacting particle systems that are based on commutative monoids (semigroups with a neutral element) and semirings, respectively. For interacting particle systems whose local state space has two elements, this approach yields a unified treatment of the well-known additive and cancellative dualities. For local state spaces with three or more elements, we discover several new dualities.
|
arxiv:2108.01492
|
Roadside collaborative perception refers to a system where multiple roadside units collaborate to pool their perceptual data, assisting vehicles in enhancing their environmental awareness. Existing roadside perception methods concentrate on model design but overlook data issues like calibration errors, sparse information, and multi-view consistency, leading to poor performance on recently published datasets. To significantly enhance roadside collaborative perception and address critical data issues, we present the first simulation framework, RoCo-Sim, for roadside collaborative perception. RoCo-Sim is capable of generating diverse, multi-view consistent simulated roadside data through dynamic foreground editing and full-scene style transfer of a single image. RoCo-Sim consists of four components: (1) camera extrinsic optimization ensures accurate 3D-to-2D projection for roadside cameras; (2) a novel multi-view occlusion-aware sampler (MOAS) determines the placement of diverse digital assets within 3D space; (3) DepthSAM innovatively models foreground-background relationships from single-frame fixed-view images, ensuring multi-view consistency of the foreground; and (4) a scalable post-processing toolkit generates more realistic and enriched scenes through style transfer and other enhancements. RoCo-Sim significantly improves roadside 3D object detection, outperforming SOTA methods by 83.74 on RCooper-Intersection and 83.12 on TUMTraf-V2X for AP70. RoCo-Sim fills a critical gap in roadside perception simulation. Code and pre-trained models will be released soon: https://github.com/duyuwen-duen/roco-sim
|
arxiv:2503.10410
|
Demand for data-intensive workloads and confidential computing are the prominent research directions shaping the future of cloud computing. Computer architectures are evolving to accommodate the computing of large data better. Protecting the computation of sensitive data is also an imperative yet challenging objective; processor-supported secure enclaves serve as the key element in confidential computing in the cloud. However, side-channel attacks are threatening their security boundaries. Current processor architectures consume a considerable portion of their cycles in moving data. Near-data computation is a promising approach that minimizes redundant data movement by placing computation inside storage. In this paper, we present a novel design for processing-in-memory (PIM) as a data-intensive workload accelerator for confidential computing. Based on our observation that moving computation closer to memory can achieve efficiency of computation and confidentiality of the processed information simultaneously, we study the advantages of confidential computing \emph{inside} memory. We then explain our security model and programming model developed for PIM-based computation offloading. We construct our findings into a software-hardware co-design, which we call PIM-Enclave. Our design illustrates the advantages of PIM-based confidential computing acceleration. Our evaluation shows PIM-Enclave can provide side-channel-resistant secure computation offloading and run data-intensive applications with negligible performance overhead compared to a baseline PIM model.
|
arxiv:2111.03307
|
Observing the dynamics of compact astrophysical objects provides insights into their inner workings, thereby probing physics under extreme conditions. The immediate vicinity of an active supermassive black hole, with its event horizon, photon ring, accretion disk, and relativistic jets, is a perfect place to study general relativity and magneto-hydrodynamics. The observations of M87* with very long baseline interferometry (VLBI) by the Event Horizon Telescope (EHT) allow us to investigate its dynamical processes on time scales of days. Compared to regular radio interferometers, VLBI networks typically have fewer antennas and low signal-to-noise ratios (SNRs). Furthermore, the source is variable, prohibiting integration over time to improve SNR. Here, we present an imaging algorithm that copes with the data scarcity and temporal evolution, while providing uncertainty quantification. Our algorithm views the imaging task as a Bayesian inference problem of a time-varying brightness, exploits the correlation structure in time, and reconstructs a $2+1+1$-dimensional time-variable and spectrally resolved image at once. We apply this method to the EHT observation of M87* and validate our approach on synthetic data. The time- and frequency-resolved reconstruction of M87* confirms variable structures on the emission ring. The reconstruction indicates extended and time-variable emission structures outside the ring itself.
|
arxiv:2002.05218
|
This paper studies distributed nonconvex optimization problems with stochastic gradients for a multi-agent system, in which each agent aims to minimize the sum of all agents' cost functions by using local compressed information exchange. We propose a distributed stochastic gradient descent (SGD) algorithm, suitable for a general class of compressors. We show that the proposed algorithm achieves the linear speedup convergence rate $\mathcal{O}(1/\sqrt{nT})$ for smooth nonconvex functions, where $T$ and $n$ are the number of iterations and agents, respectively. If the global cost function additionally satisfies the Polyak--{\L}ojasiewicz condition, the proposed algorithm can linearly converge to a neighborhood of the global optimum, regardless of whether the stochastic gradient is unbiased or not. Numerical experiments are carried out to verify the efficiency of our algorithm.
|
arxiv:2403.01322
|
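A toy rendition of the compressed-communication idea in the abstract above (the cost functions, constants, and the choice of compressor are our own illustrative assumptions, not the paper's algorithm): each agent holds a quadratic cost, sparsifies its gradient with a biased top-k compressor, and an error-feedback residual retains whatever the compressor dropped. In line with the abstract's "neighborhood" statement, the iterate settles into a small neighborhood of the global optimum despite the biased compression.

```python
import random

def top_k(vec, k):
    """Biased top-k compressor: transmit only the k largest-magnitude
    entries; everything else is sent as zero."""
    keep = sorted(range(len(vec)), key=lambda i: -abs(vec[i]))[:k]
    out = [0.0] * len(vec)
    for i in keep:
        out[i] = vec[i]
    return out

# Agent i holds f_i(x) = 0.5*||x - c_i||^2, so the minimizer of the sum
# of all local costs is the mean of the targets c_i.
random.seed(0)
n_agents, dim = 4, 6
targets = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_agents)]
opt = [sum(t[j] for t in targets) / n_agents for j in range(dim)]

x = [0.0] * dim
residual = [[0.0] * dim for _ in range(n_agents)]  # error-feedback memory
lr = 0.02
for _ in range(3000):
    avg = [0.0] * dim
    for i, t in enumerate(targets):
        grad = [x[j] - t[j] for j in range(dim)]
        # compress gradient + residual; remember what was left out
        msg = top_k([grad[j] + residual[i][j] for j in range(dim)], k=4)
        residual[i] = [grad[j] + residual[i][j] - msg[j] for j in range(dim)]
        avg = [avg[j] + msg[j] / n_agents for j in range(dim)]
    x = [x[j] - lr * avg[j] for j in range(dim)]

err = max(abs(x[j] - opt[j]) for j in range(dim))  # distance to optimum
```

Without the error-feedback residual, the bias of top-k would accumulate; with it, the dropped coordinates are eventually transmitted, which is one standard way such biased compressors are handled.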
Comments on the results presented at the conference "Frontiers Beyond the Standard Model," FTPI, Oct. 2012. This summary traces a historical perspective. v2: a reference corrected and a footnote added; v3: a few grammar mistakes and typos corrected, author's address and e-mail address added.
|
arxiv:1211.0004
|
To a transitive pseudo-Anosov flow $\varphi$ on a $3$-manifold $M$ and a representation $\rho$ of $\pi_1(M)$, we associate a zeta function $\zeta_{\varphi, \rho}(s)$ defined for $\Re s \gg 1$, generalizing the Anosov case. For a class of ``smooth pseudo-Anosov flows'', we prove that $\zeta_{\varphi, \rho}(s)$ has a meromorphic continuation to $\mathbb{C}$. We also prove a version of the Fried conjecture for smooth pseudo-Anosov flows which, under some conditions on $\rho$, relates $\zeta_{\varphi, \rho}(0)$ to the Reidemeister torsion of $M$. Finally we prove a topological analogue of the Dirichlet class number formula. In order to deal with singularities, we use $C^\infty$ versions of the approaches of Rugh and Sanchez--Morgado, based on Markov partitions.
|
arxiv:2409.17014
|
A Borel resummation scheme of subtracting the perturbative contribution from the average plaquette is proposed using the bilocal expansion of the Borel transform. It is shown that the remnant of the average plaquette, after subtraction of the perturbative contribution, scales as a dim-4 condensate. A critical review of the existing procedure of renormalon subtraction is presented.
|
arxiv:1003.0231
|
We study traveling time and traveling length for tracer dispersion in porous media. We model porous media by two-dimensional bond percolation, and we model flow by tracer particles driven by a pressure difference between two points separated by Euclidean distance $r$. We find that the minimal traveling time $t_{\min}$ scales as $t_{\min} \sim r^{1.40}$, which is different from the scaling of the most probable traveling time, $\tilde{t} \sim r^{1.64}$. We also calculate the length of the path corresponding to the minimal traveling time and find $\ell_{\min} \sim r^{1.13}$, and that the most probable traveling length scales as $\tilde{\ell} \sim r^{1.21}$. We present the relevant distribution functions and scaling relations.
|
arxiv:cond-mat/9903066
|
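The minimal-path quantities studied in the abstract above can be sampled directly. The sketch below (grid size, occupation probability, and endpoints are our own illustrative choices) builds a 2-D bond-percolation lattice and measures the shortest open path between two sites by breadth-first search; repeating such measurements over many realizations and distances r is what produces scaling exponents of the kind quoted above.

```python
import random
from collections import deque

def percolation_bonds(n, p, rng):
    """Open each nearest-neighbour bond of an n-by-n square lattice
    independently with probability p (2-D bond percolation)."""
    open_bond = {}
    for x in range(n):
        for y in range(n):
            if x + 1 < n:
                open_bond[((x, y), (x + 1, y))] = rng.random() < p
            if y + 1 < n:
                open_bond[((x, y), (x, y + 1))] = rng.random() < p
    return open_bond

def shortest_path_length(n, open_bond, src, dst):
    """BFS over open bonds: length (in lattice steps) of the minimal
    open path between src and dst, or None if they are disconnected."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        (x, y), d = queue.popleft()
        if (x, y) == dst:
            return d
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nb[0] < n and 0 <= nb[1] < n) or nb in seen:
                continue
            bond = ((x, y), nb) if ((x, y), nb) in open_bond else (nb, (x, y))
            if open_bond.get(bond, False):
                seen.add(nb)
                queue.append((nb, d + 1))
    return None

n = 20
rng = random.Random(1)
bonds = percolation_bonds(n, p=0.6, rng=rng)
l_min = shortest_path_length(n, bonds, (0, 0), (n - 1, n - 1))

# Sanity check on a fully open lattice: the minimal path between opposite
# corners is the Manhattan distance, 2*(n-1) steps.
full = percolation_bonds(n, p=1.0, rng=rng)
l_full = shortest_path_length(n, full, (0, 0), (n - 1, n - 1))
```

On a diluted lattice the minimal path, when it exists, is at least the Manhattan distance and typically longer, which is the origin of the super-linear scaling of path length with distance.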
Higher-order topological insulators are a recently discovered class of materials that can possess zero-dimensional localized states regardless of the dimension of the lattice. Here, we experimentally demonstrate that the topological corner-localized modes of higher-order topological insulators can be symmetry-protected bound states in the continuum; these states do not hybridize with the surrounding bulk states of the lattice even in the absence of a bulk bandgap. As such, this class of structures has potential applications in confining and controlling light in systems that do not support a complete photonic bandgap.
|
arxiv:2006.06524
|
We describe our ongoing program designed to measure the SN-Ia rate in a sample of massive z = 0.5-0.9 galaxy clusters. The SN-Ia rate is a poorly known observable, especially at high z and in cluster environments. The SN rate and its redshift dependence can serve as powerful discriminants for a number of key issues in astrophysics and cosmology. Our observations will put clear constraints on the characteristic SN-Ia ``delay time'', the typical time between the formation of a stellar population and the explosion of some of its members as SNe-Ia. Such constraints can exclude entire categories of SN-Ia progenitor models, since different models predict different delays. These data will also help to resolve the question of the dominant source of the high metallicity in the intracluster medium (ICM): SNe-Ia, or core-collapse SNe from an early stellar population with a top-heavy IMF, perhaps those Population III stars responsible for the early re-ionization of the universe. Since clusters are excellent laboratories for studying enrichment (they generally have a simple star-formation history, and matter cannot leave their deep potentials), the results will be relevant for understanding metal enrichment in general, and the possible role of first-generation stars in early universal enrichment. Observations obtained so far during Cycles 14 and 15 yield many SNe in our cluster fields, but our follow-up campaign reveals most are not in cluster galaxies.
|
arxiv:astro-ph/0611920
|
Anti-lock brake system (ABS) is a mandatory active safety feature on road vehicles, with analogous systems for aircraft and locomotives. This feature aims to prevent locking of the wheels when braking and to improve the handling performance, as well as reduce the stopping distance of the vehicle. Estimation uncertainties in the vehicle state and environment (road surface) are often neglected or handled separately from the ABS controller, leading to sub-optimal braking. In this paper, a dual control for exploration-exploitation (DCEE) approach is taken toward the ABS problem which achieves both accurate state (and environment) estimation and superior braking performance. Compared with popular extremum seeking methods, improvements of up to $15\%$ and $8.5\%$ are shown in stopping time and stopping distance, respectively. A regularized particle filter with a Markov chain Monte Carlo step is used to estimate vehicle states and parameters of the Magic Formula tyre model, which includes the peak friction coefficient for the environment. The effectiveness of the DCEE approach is demonstrated across a range of driving scenarios such as low and high speeds; snow, wet and dry roads; and changing road surfaces.
|
arxiv:2306.14730
|
a combination of molecular - dynamics simulations, theoretical predictions, and previous experiments are used in a two - part study to determine the role of the knudsen layer in rapid granular flows. first, a robust criterion for the identification of the thickness of the knudsen layer is established : a rapid deterioration in navier - stokes - order prediction of the heat flux is found to occur in the knudsen layer. for ( experimental ) systems in which heat flux measurements are not easily obtained, a rule - of - thumb for estimating the knudsen layer thickness follows, namely that such effects are evident within 2. 5 ( local ) mean free paths of a given boundary. second, comparisons of simulation and experimental data with navier - stokes order theory are used to provide a measure as to when knudsen layer effects become non - negligible. specifically, predictions that do not account for the presence of a knudsen layer appear reliable for knudsen layers collectively composing up to 20 % of the domain, whereas deterioration of such predictions becomes apparent when the domain is composed entirely of the knudsen layer.
|
arxiv:cond-mat/0703349
|
power flow analysis plays a crucial role in examining the electricity flow within a power system network. by performing power flow calculations, the system ' s steady - state variables, including voltage magnitude, phase angle at each bus, and active / reactive power flow across branches, can be determined. while the widely used dc power flow model offers speed and robustness, it may yield inaccurate line flow results for certain transmission lines. this issue becomes more critical when dealing with renewable energy sources such as wind farms, which are often located far from the main grid. obtaining precise line flow results for these critical lines is vital for subsequent operations. to address these challenges, data - driven approaches leverage historical grid profiles. in this paper, a graph neural network ( gnn ) model is trained using historical power system data to predict power flow outcomes. the gnn model enables rapid estimation of line flows. a comprehensive performance analysis is conducted, comparing the proposed gnn - based power flow model with the traditional dc power flow model, as well as deep neural network ( dnn ) and convolutional neural network ( cnn ) models. the results on test systems demonstrate that the proposed gnn - based power flow model provides more accurate solutions with high efficiency compared to benchmark models.
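the core gnn operation -- aggregating neighbour features over the bus-branch graph -- can be sketched without any deep-learning framework. the 4-bus topology, the injections, and the random weights below are all assumptions for illustration; the paper's actual model is trained on historical power-flow solutions.

```python
import math
import random

random.seed(0)

# Toy 4-bus system: adjacency list of the grid graph (an assumption, not
# the paper's test system). Node features: [P_injection, Q_injection].
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
x = {0: [1.0, 0.2], 1: [-0.5, 0.1], 2: [0.3, -0.1], 3: [-0.8, 0.0]}

def gnn_layer(x, adj, W):
    """One message-passing layer: mean-aggregate neighbour features,
    concatenate with the node's own features, apply a linear map + tanh."""
    out = {}
    for v, feats in x.items():
        nbrs = adj[v]
        agg = [sum(x[u][k] for u in nbrs) / len(nbrs) for k in range(len(feats))]
        z = feats + agg  # self features || aggregated neighbour features
        out[v] = [math.tanh(sum(wij * zj for wij, zj in zip(row, z)))
                  for row in W]
    return out

# Random illustrative weights (2 output channels from 4 inputs); a real
# model would learn these from historical power-flow solutions.
W = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
h = gnn_layer(x, adj, W)
```

because the layer operates on the adjacency list rather than a fixed input vector, the same weights apply to grids of any size -- the structural advantage a gnn has over the dnn and cnn baselines compared in the paper.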
|
arxiv:2307.02049
|
perhaps the simplest first - principles approach to electronic structure is to fit the charge distribution of each orbital pair and use those fits wherever they appear in the entire electron - electron ( ee ) interaction energy. the charge distributions in quantum chemistry are typically represented as sums over products of gaussian orbital basis functions. if fitted, they are also represented as a sum over single - center gaussian fitting basis functions. with two representations of the charge distributions, the proper definition of energy is ambiguous. to remedy this, we require that the variation of the energy with respect to a product of orbitals generates a fitted potential. this makes the quantum - mechanical energy robust, i. e. corrected to first order for the error made using an incomplete fitting basis. the coupled orbital and fitting equations are then the result of making the energy stationary with respect to two independent sets of variables. we define the potentials and unique energies for methods based on the hartree - fock model and variationally fit the full ee interaction in dft. we compare implementations of variational fitting in dft at six different levels for three different functionals. our calculations are performed on transition metal atoms, for which first - order coulomb errors, due to an incomplete fitting basis, are significant. variational first - order exchange and correlation errors have similar magnitude in all cases. robust energy differences are much smaller, particularly in the local density approximation.
|
arxiv:1511.02253
|
we present a combined model for magnetic field generation and transport in cool stars with outer convection zones. the mean toroidal magnetic field, which is generated by a cyclic thin - layer alpha - omega dynamo at the bottom of the convection zone is taken to determine the emergence probability of magnetic flux tubes in the photosphere. following the nonlinear rise of the unstable thin flux tubes, emergence latitudes and tilt angles of bipolar magnetic regions are determined. these quantities are put into a surface flux transport model, which simulates the surface evolution of magnetic flux under the effects of large - scale flows and turbulent diffusion. first results are discussed for the case of the sun and for more rapidly rotating solar - type stars.
|
arxiv:1111.2453
|
period, which results in energy efficiency and sustainability. egain lowers building energy consumption and emissions while determining time for maintenance where inefficiencies are observed. = = = solar power = = = = = computational sustainability = = = = = sustainable agriculture = = = sustainable agriculture is an approach to farming that utilizes technology in a way that ensures food protection, while ensuring the long - term health and productivity of agricultural systems, ecosystems, and communities. historically, technological advancements have significantly contributed to increasing agricultural productivity and reducing physical labor. the national institute of food and agriculture improves sustainable agriculture through the use of funded programs aimed at fulfilling human food and fiber needs, improving environmental quality, and preserving natural resources vital to the agricultural economy, optimizing the utilization of both nonrenewable and on - farm resources while integrating natural biological cycles and controls as appropriate, maintaining the economic viability of farm operations, and to foster an improved quality of life for farmers and society at large. among its initiatives, the nifa wants to improve farm and ranch practices, integrated pest management, rotational grazing, soil conservation, water quality / wetlands, cover crops, crop / landscape diversity, nutrient management, agroforestry, and alternative marketing. 
= = education = = courses aimed at developing graduates with some specific skills in environmental systems or environmental technology are becoming more common and fall into three broad classes : environmental engineering or environmental systems courses oriented towards a civil engineering approach in which structures and the landscape are constructed to blend with or protect the environment ; environmental chemistry, sustainable chemistry or environmental chemical engineering courses oriented towards understanding the effects ( good and bad ) of chemicals in the environment. such awards can focus on mining processes pollutants and commonly also cover biochemical processes ; environmental technology courses are oriented towards producing electronic, electrical, or electrotechnology graduates capable of developing devices and artifacts that can monitor, measure, model, and control environmental impact, including monitoring and managing energy generation from renewable sources and developing novel energy generation technologies. = = see also = = = = references = = = = further reading = = oecd studies on environmental innovation invention and transfer of environmental technologies. oecd. september 2011. isbn 978 - 92 - 64 - 11561 - 3. = = external links = =
|
https://en.wikipedia.org/wiki/Environmental_technology
|
we summarize recent results for the gribov - zwanziger lagrangian which includes the effect of restricting the path integral to the first gribov region. these include the two loop msbar and one loop mom gap equations for the gribov mass.
|
arxiv:0711.3622
|
the random sequential adsorption ( rsa ) model is modified to describe damage and crack accumulation. the exclusion for object deposition ( for damaged region formation ) is not for the whole object, as in the standard rsa, but only for the initial point ( or higher - dimensional defect ) from which the damaged region or crack initiates. the one - dimensional variant of the model is solved exactly.
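the one - dimensional variant described above is easy to explore by monte carlo. the sketch below is an illustrative reading of the model (segment bodies may overlap existing damage; only the initiation point is excluded), with arbitrary system size and segment length, not the paper's exact solution.

```python
import random

random.seed(1)

def simulate_point_exclusion_rsa(L=100.0, a=1.0, attempts=20000):
    """Modified RSA in 1D: a segment [x, x+a] is deposited only if its
    initial point x is not already inside a damaged (covered) region;
    overlap of the segment body with existing damage is allowed.
    Returns the covered fraction (toy Monte Carlo, illustrative only)."""
    segments = []  # left endpoints of deposited damage segments

    def covered(x):
        return any(s <= x <= s + a for s in segments)

    for _ in range(attempts):
        x = random.uniform(0.0, L - a)
        if not covered(x):       # exclusion applies to the initial point only
            segments.append(x)

    # Measure the damaged fraction on a fine grid.
    grid = 10000
    hits = sum(covered(i * L / grid) for i in range(grid))
    return hits / grid

coverage = simulate_point_exclusion_rsa()
```

unlike standard 1d rsa, which jams at a finite coverage (~0.7476 for random segment deposition), the point - exclusion variant keeps accepting deposits in any uncovered region, so the damaged fraction approaches complete coverage at long times.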
|
arxiv:0712.3567
|
we study the decay rate of process b - > k l + l - ( l = e, mu ) and some of its other related observables, like forward backward asymmetry ( a _ { fb } ), polarization asymmetry ( pa ) and cp - asymmetry ( a _ { cp } ) in r - parity violating ( r _ { p } ) minimal supersymmetric standard model ( mssm ). the analysis shows that r _ { p } yukawa coupling products contribute significantly to the branching fraction of b - > k l + l - within 1 sigma and 2 sigma. study shows that pa and a _ { fb } are sensitive enough to r _ { p } yukawa coupling products and turn out to be good predictions for measurement in future experiments. the cp - asymmetry calculated in this framework agrees well with the recently reported value ( i. e. 7 % ).
|
arxiv:0903.0969
|
we consider the energy - critical defocusing nonlinear wave equation ( nlw ) on $ \ mathbb { r } ^ d $, $ d = 4 $ and $ 5 $. we prove almost sure global existence and uniqueness for nlw with rough random initial data in $ h ^ s ( \ mathbb { r } ^ d ) \ times h ^ { s - 1 } ( \ mathbb { r } ^ d ) $, with $ 0 < s \ leq 1 $ if $ d = 4 $, and $ 0 \ leq s \ leq 1 $ if $ d = 5 $. the randomization we consider is naturally associated with the wiener decomposition and with modulation spaces. the proof is based on a probabilistic perturbation theory. under some additional assumptions, for $ d = 4 $, we also prove the probabilistic continuous dependence of the flow with respect to the initial data ( in the sense proposed by burq and tzvetkov ).
|
arxiv:1406.1782
|
this paper introduces the concept of hyers - ulam stability for linear relations in normed linear spaces and presents several intriguing results that characterize the hyers - ulam stability of closed linear relations in hilbert spaces. additionally, sufficient conditions are established under which the sum and product of two hyers - ulam stable linear relations remain stable.
|
arxiv:2501.15204
|
the characterization and reconstruction of heterogeneous materials, such as porous media and electrode materials, involve the application of image processing methods to data acquired by microscopy techniques. in this study, we present a theoretical analysis of the effects of the image size reduction, due to a gradual decimation of the original image. three different decimation procedures were implemented and their consequences on the discrete correlation functions and the coarseness are reported and analyzed. a normalization for each of the correlation functions has been performed. when the loss of statistical information has not been significant for a decimated image, its normalized correlation function is forecast by the trend of the original image. in contrast, when the decimated image does not represent the statistical evidence of the original one, the normalized correlation function deviates from the reference function. moreover, the equally weighted sum of the average of the squared differences leads to a definition of an overall error. during the first stages of the gradual decimation, the error remains relatively small and independent of the decimation procedure. above a threshold defined by the correlation length of the reference function, the error becomes a function of the number of decimation steps. at this stage, some statistical information is lost and the error becomes dependent on the decimation procedure. these results may help us to restrict the amount of information that one can afford to lose during a decimation process, in order to reduce the computational and memory cost, when one aims to diminish the time consumed by a characterization or reconstruction technique, yet maintaining the statistical quality of the digitized sample.
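the discrete two - point correlation and a simple decimation step can be sketched in one dimension. the synthetic binary medium and the keep-every-other-pixel decimation below are assumptions for illustration (the paper compares three decimation procedures on real microscopy data).

```python
import random

random.seed(2)

def two_point(img, rmax):
    """Discrete two-point probability S2(r): probability that two points a
    distance r apart (periodic boundary) both lie in phase 1."""
    n = len(img)
    return [sum(img[i] * img[(i + r) % n] for i in range(n)) / n
            for r in range(rmax)]

def decimate(img):
    """One decimation step: keep every other pixel (the simplest of the
    possible procedures; the paper compares several)."""
    return img[::2]

# Synthetic 1D binary medium with short-range correlations: the phase
# switches with small probability, giving a finite correlation length.
n = 4096
img, state = [], 0
for _ in range(n):
    if random.random() < 0.1:
        state = 1 - state
    img.append(state)

s2_full = two_point(img, 8)
s2_dec = two_point(decimate(img), 8)
```

note that s2(0) equals the volume fraction exactly, and decimation doubles the physical distance per pixel -- which is why correlations must be normalized before comparing a decimated image against the original reference function.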
|
arxiv:1712.03183
|
a k - matrix formalism is used to relate single - channel and multi - channel fits. we show how the single - channel formalism changes as new hadronic channels become accessible. these relations are compared to those commonly used to fit pseudoscalar meson photoproduction data.
|
arxiv:nucl-th/0510025
|
much of the research in social computing analyzes data from social media platforms, which may inherently carry biases. an overlooked source of such bias is the over - representation of weird ( western, educated, industrialized, rich, and democratic ) populations, which might not accurately mirror the global demographic diversity. we evaluated the dependence on weird populations in research presented at the aaai icwsm conference ; the only venue whose proceedings are fully dedicated to social computing research. we did so by analyzing 494 papers published from 2018 to 2022, which included full research papers, dataset papers and posters. after filtering out papers that analyze synthetic datasets or those lacking clear country of origin, we were left with 420 papers from which 188 participants in a crowdsourcing study with full manual validation extracted data for the weird scores computation. this data was then used to adapt existing weird metrics to be applicable for social media data. we found that 37 % of these papers focused solely on data from western countries. this percentage is significantly less than the percentages observed in research from chi ( 76 % ) and facct ( 84 % ) conferences, suggesting a greater diversity of dataset origins within icwsm. however, the studies at icwsm still predominantly examine populations from countries that are more educated, industrialized, and rich in comparison to those in facct, with a special note on the ' democratic ' variable reflecting political freedoms and rights. this points out the utility of social media data in shedding light on findings from countries with restricted political freedoms. based on these insights, we recommend extensions of current " paper checklists " to include considerations about the weird bias and call for the community to broaden research inclusivity by encouraging the use of diverse datasets from underrepresented regions.
|
arxiv:2406.02090
|
we construct braid group actions on coideal subalgebras of quantized enveloping algebras which appear in the theory of quantum symmetric pairs. in particular, we construct an action of the semidirect product of z ^ n and the classical braid group in n strands on the coideal subalgebra corresponding to the symmetric pair ( sl _ { 2n } ( c ), sp _ { 2n } ( c ) ). this proves a conjecture by molev and ragoucy. we expect similar actions to exist for all symmetric lie algebras. the given actions are inspired by lusztig ' s braid group action on quantized enveloping algebras and are defined explicitly on generators. braid group and algebra relations are verified with the help of the package quagroup within the computer algebra program gap.
|
arxiv:1102.4185
|
public agencies are increasingly publishing open data to increase transparency and fuel data - driven innovation. for these organizations, maintaining sufficient data quality is key to continuous re - use but also heavily dependent on feedback loops being initiated between data publishers and users. this paper reports from a longitudinal engagement with scandinavian transportation agencies, where such feedback loops have been successfully established. based on these experiences, we propose four distinct types of data feedback loops in which both data publishers and re - users play critical roles.
|
arxiv:2110.01023
|
few - shot named entity recognition ( ner ) is a task aiming to identify named entities via limited annotated samples. recently, prototypical networks have shown promising performance in few - shot ner. most prototypical networks utilize the entities from the support set to construct label prototypes and use the query set to compute span - level similarities and optimize these label prototype representations. however, these methods are usually unsuitable for fine - tuning in the target domain, where only the support set is available. in this paper, we propose promptner : a novel prompting method for few - shot ner via k nearest neighbor search. we use prompts that contain entity category information to construct label prototypes, which enables our model to fine - tune with only the support set. our approach achieves excellent transfer learning ability, and extensive experiments on the few - nerd and crossner datasets demonstrate that our model achieves superior performance over state - of - the - art methods.
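the nearest - neighbour classification step over label prototypes can be illustrated with toy vectors. the three - dimensional embeddings and label set below are assumptions; a real system would obtain span and prompt representations from a pretrained encoder.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (math.sqrt(sum(a * a for a in u))
           * math.sqrt(sum(b * b for b in v)))
    return num / den if den else 0.0

# Toy embeddings standing in for prompt-derived label prototypes
# (illustrative values, not outputs of any actual encoder).
prototypes = {
    "PER": [0.9, 0.1, 0.0],
    "LOC": [0.1, 0.9, 0.1],
    "ORG": [0.0, 0.2, 0.9],
}

def knn_label(span_vec, prototypes):
    """Nearest-neighbour search over label prototypes: rank labels by
    cosine similarity to the span representation and return the top one."""
    return max(prototypes, key=lambda lbl: cosine(span_vec, prototypes[lbl]))

label = knn_label([0.8, 0.2, 0.1], prototypes)
```

because the prototypes are built from prompts rather than from support - set entities alone, this lookup step needs no query set at fine - tuning time -- the property the abstract emphasizes.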
|
arxiv:2305.12217
|
a general theory for the intrinsic ( lorentzian ) linewidth of photonic - crystal surface - emitting lasers ( pcsels ) is presented. the effect of spontaneous emission is modeled by a classical langevin force entering the equation for the slowly varying waves. the solution of the coupled - wave equations, describing the propagation of four basic waves within the plane of the photonic crystal, is expanded in terms of the solutions of the associated spectral problem, i. e. the laser modes. expressions are given for photon number, rate of spontaneous emission into the laser mode, petermann factor and effective henry factor entering the general formula for the linewidth. the theoretical framework is applied to the calculation of the linewidth - power product of air - hole and all - semiconductor pcsels. for output powers in the watt range, intrinsic linewidths in the khz range are obtained in agreement with recent experimental results.
|
arxiv:2402.11246
|
in this paper, we explore the problem of identifying substitute relationships between food pairs from real - world food consumption data as a first step towards healthier food recommendation. our method is inspired by the distributional hypothesis in linguistics. specifically, we assume that foods that are consumed in similar contexts are more likely to be similar dietarily. for example, a turkey sandwich can be considered a suitable substitute for a chicken sandwich if both tend to be consumed with french fries and salad. to evaluate our method, we constructed a real - world food consumption dataset from myfitnesspal ' s public food diary entries and obtained ground - truth human judgements of food substitutes from a crowdsourcing service. the experiment results suggest the effectiveness of the method in identifying suitable substitutes.
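the distributional idea -- foods eaten in similar contexts are similar -- reduces to comparing co - occurrence vectors. the meal logs below are made - up stand - ins for diary entries, used only to show the mechanics.

```python
import math
from collections import Counter

# Toy meal logs standing in for food-diary entries (illustrative data,
# not the paper's dataset).
meals = [
    ["turkey sandwich", "french fries", "salad"],
    ["chicken sandwich", "french fries", "salad"],
    ["turkey sandwich", "salad", "iced tea"],
    ["chicken sandwich", "french fries", "iced tea"],
    ["pancakes", "maple syrup", "coffee"],
    ["waffles", "maple syrup", "coffee"],
]

def context_vector(food, meals):
    """Distributional representation: counts of co-consumed foods."""
    ctx = Counter()
    for meal in meals:
        if food in meal:
            ctx.update(f for f in meal if f != food)
    return ctx

def cosine(c1, c2):
    """Cosine similarity between two sparse count vectors."""
    num = sum(c1[k] * c2[k] for k in c1)
    den = (math.sqrt(sum(v * v for v in c1.values()))
           * math.sqrt(sum(v * v for v in c2.values())))
    return num / den if den else 0.0

sim = cosine(context_vector("turkey sandwich", meals),
             context_vector("chicken sandwich", meals))
```

on this toy data the two sandwiches share the fries / salad / iced - tea context and score highly, while pancakes (a maple - syrup / coffee context) score zero against either -- the exact behaviour the abstract's example describes.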
|
arxiv:1607.08807
|
observations of pre - / proto - stellar cores in young star - forming regions show them to be mass segregated, i. e. the most massive cores are centrally concentrated, whereas pre - main sequence stars in the same star - forming regions ( and older regions ) are not. we test whether this apparent contradiction can be explained by the massive cores fragmenting into stars of much lower mass, thereby washing out any signature of mass segregation in pre - main sequence stars. whilst our fragmentation model can reproduce the stellar initial mass function, we find that the resultant distribution of pre - main sequence stars is mass segregated to an even higher degree than that of the cores, because massive cores still produce massive stars if the number of fragments is reasonably low ( between one and five ). we therefore suggest that the reason cores are observed to be mass segregated and stars are not is likely due to dynamical evolution of the stars, which can move significant distances in star - forming regions after their formation.
|
arxiv:1909.07982
|
we study the scattering of monochromatic planar scalar waves in a geometry that interpolates between the schwarzschild solution, regular black holes and traversable wormhole spacetimes. we employ the partial waves approach to compute the differential scattering cross section of the regular black hole, as well as of the wormhole solutions. we compare our full numerical results with the classical geodesic scattering and the glory approximation, obtaining excellent agreement in the appropriate regime of validity of such approximations. we obtain that the differential scattering cross section for the regular black hole case is similar to the schwarzschild result. notwithstanding, the results for wormholes can be very distinctive from the black hole ones. in particular, we show that the differential scattering cross section for wormholes considerably decreases at large scattering angles for resonant frequencies.
|
arxiv:2211.09886
|
a detailed analysis of gravitational slip, a new post - general relativity cosmological parameter characterizing the degree of departure of the laws of gravitation from general relativity on cosmological scales, is presented. this phenomenological approach assumes that cosmic acceleration is due to new gravitational effects ; the amount of spacetime curvature produced per unit mass is changed in such a way that a universe containing only matter and radiation begins to accelerate as if under the influence of a cosmological constant. changes in the law of gravitation are further manifest in the behavior of the inhomogeneous gravitational field, as reflected in the cosmic microwave background, weak lensing, and evolution of large - scale structure. the new parameter, $ \ varpi _ 0 $, is naively expected to be of order unity. however, a multiparameter analysis, allowing for variation of all the standard cosmological parameters, finds that $ \ varpi _ 0 = 0. 09 ^ { + 0. 74 } _ { - 0. 59 } ( 2 \ sigma ) $ where $ \ varpi _ 0 = 0 $ corresponds to a $ \ lambda $ cdm universe under general relativity. future probes of the cosmic microwave background ( planck ) and large - scale structure ( euclid ) may improve the limits by a factor of four.
|
arxiv:0901.0919
|
it is shown that in a subcritical random graph with given vertex degrees satisfying a power law degree distribution with exponent $ \ gamma > 3 $, the largest component is of order $ n ^ { 1 / ( \ gamma - 1 ) } $. more precisely, the order of the largest component is approximately given by a simple constant times the largest vertex degree. these results are extended to several other random graph models with power law degree distributions. this proves a conjecture by durrett.
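the setting can be explored numerically with a configuration model: sample a power - law degree sequence, pair half - edges uniformly at random, and measure the largest component via union - find. the parameters below ( $ \ gamma = 3. 5 $, degree cutoff 100, $ n = 2000 $ ) are illustrative; whether a particular degree sequence is actually subcritical depends on its first two moments, not on the exponent alone.

```python
import random

random.seed(3)

def power_law_degree(gamma, dmax=100):
    """Sample a degree from P(d) proportional to d^-gamma, d = 1..dmax
    (inverse-CDF sampling over a truncated power law)."""
    weights = [d ** -gamma for d in range(1, dmax + 1)]
    r, acc = random.random() * sum(weights), 0.0
    for d, w in enumerate(weights, start=1):
        acc += w
        if acc >= r:
            return d
    return dmax

def largest_component_size(n, degrees):
    """Configuration model: pair half-edges uniformly, then find the
    largest connected component with union-find (path halving)."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    random.shuffle(stubs)
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for i in range(0, len(stubs) - 1, 2):   # consecutive stubs form edges
        a, b = find(stubs[i]), find(stubs[i + 1])
        if a != b:
            parent[a] = b

    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

n = 2000
degrees = [power_law_degree(3.5) for _ in range(n)]
biggest = largest_component_size(n, degrees)
```

in the subcritical regime the theorem predicts the largest component tracks the largest degree up to a constant factor, so rerunning with different seeds and comparing `biggest` against `max(degrees)` makes the scaling visible empirically.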
|
arxiv:0708.4404
|
data normalization is an essential task when modeling a classification system. when dealing with data streams, data normalization becomes especially challenging since we may not know in advance the properties of the features, such as their minimum / maximum values, and these properties may change over time. we compare the accuracies generated by eight well - known distance functions in data streams without normalization, normalized considering the statistics of the first batch of data received, and considering the previous batch received. we argue that experimental protocols for streams that consider the full stream as normalized are unrealistic and can lead to biased and poor results. our results indicate that using the original data stream without applying normalization, and the canberra distance, can be a good combination when no information about the data stream is known beforehand.
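the two ingredients compared above -- a distance function and batch - based min / max normalization -- are compact enough to sketch directly. the sample batch and query point are made up for illustration.

```python
def canberra(u, v):
    """Canberra distance: sum of |u_i - v_i| / (|u_i| + |v_i|); terms with
    a zero denominator contribute 0 by convention."""
    total = 0.0
    for a, b in zip(u, v):
        den = abs(a) + abs(b)
        if den:
            total += abs(a - b) / den
    return total

def minmax_from_batch(batch):
    """Per-feature min/max estimated from one batch -- the 'previous
    batch' statistics a streaming normalizer has to rely on."""
    lo = [min(col) for col in zip(*batch)]
    hi = [max(col) for col in zip(*batch)]
    return lo, hi

def normalize(x, lo, hi):
    # Values outside the batch's observed range land outside [0, 1]:
    # the drift problem that makes stream normalization unreliable.
    return [(xi - l) / (h - l) if h > l else 0.0
            for xi, l, h in zip(x, lo, hi)]

batch = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]
lo, hi = minmax_from_batch(batch)
scaled = normalize([4.0, 5.0], lo, hi)
```

because each canberra term is already scale - free ( bounded by 1 per feature ), the distance tolerates unnormalized features -- which is consistent with the paper's finding that raw streams plus canberra distance can work well when nothing is known about the stream in advance.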
|
arxiv:2307.00106
|
a photoelectric monitoring program has been applied, during the last four years, to five central stars of planetary nebulae ( pnns ) with strong o vi $ \ lambda $ 3811 - 34 \ aa \ emission. ngc 6905 and, marginally, ngc 2452, show intrinsic luminosity variations, while ngc 7026, ic 2003 and ngc 1501 have constant luminosity within a few mmag. photometric data have been analyzed with the best available packages for power - spectra reductions. both pulsators have periods and physical characteristics well encompassed by the theoretical pulsational models relative to these stars.
|
arxiv:astro-ph/9506046
|
quasi - periodic pulsations ( qpps ) are frequently detected in solar and stellar flares, but the underlying physical mechanisms are still to be ascertained. here, we show microwave qpps during a solar flare originating from quasi - periodic magnetic reconnection at the flare current sheet. they appear as two vertically detached but closely related sources with the brighter ones located at flare loops and the weaker ones along the stretched current sheet. although the brightness temperatures of the two microwave sources differ greatly, they vary in phase with periods of about 10 - 20 s and 30 - 60 s. the gyrosynchrotron - dominated microwave spectra also present a quasi - periodic soft - hard - soft evolution. these results suggest that relevant high - energy electrons are accelerated by quasi - periodic reconnection, likely arising from the modulation of magnetic islands within the current sheet as validated by a 2. 5 - dimensional magnetohydrodynamic simulation.
|
arxiv:2212.08318
|
a variant of the accelerating cosmology reconstruction program is developed for $ f ( r ) $ gravity and for a modified yang - mills / maxwell theory. reconstruction schemes in terms of e - foldings and by using an auxiliary scalar field are developed and carefully compared, for the case of $ f ( r ) $ gravity. an example of a model with a transient phantom behavior without real matter is explicitly discussed in both schemes. further, the two reconstruction schemes are applied to the more physically interesting case of a yang - mills / maxwell theory, again with explicit examples. detailed comparison of the two schemes of reconstruction is presented also for this theory. it seems to support, as well, physical non - equivalence of the two frames.
|
arxiv:1004.5021
|
we study three and four jet production in hadronic collisions at next - to - leading order accuracy in massless qcd. we cross check results previously obtained by the blackhat collaboration for the lhc with a centre - of - mass energy of sqrt ( s ) = 7 tev and present new results for the lhc operating at 8 tev. we find large negative nlo corrections reducing the leading - order cross sections by about 40 - 50 %. furthermore we observe an important reduction of the scale uncertainty. in addition to the cross sections we also present results for differential distributions. the dynamical renormalization / factorization scale used in the calculation leads to a remarkably stable k - factor. the results presented here were obtained with the njet package, a publicly available library for the evaluation of one - loop amplitudes in massless qcd.
|
arxiv:1209.0098
|
gravastars, theoretical alternatives to black holes, have captured the interest of scientists in astrophysics due to their unique properties. this paper aims to further investigate the exact solution of a novel gravastar model based on the mazur - mottola ( 2004 ) method within the framework of general relativity, specifically by incorporating the cloud of strings and quintessence. by analyzing the gravitational field and energy density of gravastars, valuable insights into the nature of compact objects in the universe can be gained. understanding the stability of gravastars is also crucial for our comprehension of black holes and alternative compact objects. for this purpose, we present the einstein field equations with the modified matter source and calculate the exact solutions for the inner and intermediate regions of gravastars. the exterior region is considered as a black hole surrounded by the cloud of strings and quintessence, and the spacetimes are matched using the darmois - israel formalism. an investigation is conducted on the stability of gravastars using linearized radial perturbation. additionally, the proper length, energy content, and entropy of the shell are computed. the stability of gravastars is positively correlated with the enhancement of the cloud of strings parameter, while it is negatively correlated with the growth in the quintessence field parameter. the paper concludes with a summary of the findings and their implications in the field of astrophysics and cosmology.
|
arxiv:2309.17023
|
this paper generalizes results concerning strong convexity of two - stage mean - risk models with linear recourse to distortion risk measures. introducing the concept of ( restricted ) partial strong convexity, we conduct an in - depth analysis of the expected excess functional with respect to the decision variable and the threshold parameter. these results allow to derive sufficient conditions for strong convexity of models building on the conditional value - at - risk due to its variational representation. via kusuoka representation these carry over to comonotonic and distortion risk measures, where we obtain verifiable conditions in terms of the distortion function. for stochastic optimisation models, we point out implications for quantitative stability with respect to perturbations of the underlying probability measure. recent work in \ cite { ba14 } and \ cite { waxi17 } also gives testimony to the importance of strong convexity for the convergence rates of modern stochastic subgradient descent algorithms and in the setting of machine learning.
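the variational representation of the conditional value - at - risk referred to above is the rockafellar - uryasev formula, in which the expected excess functional appears explicitly; stated for reference ( with $ \alpha \in ( 0, 1 ) $ the confidence level and $ t $ the threshold parameter analyzed in the paper ) :

```latex
\operatorname{CVaR}_\alpha(X)
  \;=\; \min_{t \in \mathbb{R}}
  \left\{ \, t + \frac{1}{1-\alpha}\,
  \mathbb{E}\big[ (X - t)_+ \big] \right\},
\qquad (x)_+ := \max\{x,\,0\}.
```

this is why strong convexity of the expected excess $ \mathbb { e } [ ( x - t ) _ + ] $ jointly in the decision variable and the threshold $ t $ transfers to the cvar - based model, and -- via kusuoka representation -- onward to comonotonic and distortion risk measures.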
|
arxiv:1812.08109
|
while the population of main sequence debris discs is well constrained, little is known about debris discs around evolved stars. this paper provides a theoretical framework considering the effects of stellar evolution on debris discs, particularly the production and loss of dust within them. here we repeat a steady state model fit to disc evolution statistics for main sequence a stars, this time using realistic grain optical properties, then evolve that population to consider its detectability at later epochs. our model predicts that debris discs around giant stars are harder to detect than on the main sequence because radiation pressure is more effective at removing small dust around higher luminosity stars. just 12 % of first ascent giants within 100pc are predicted to have discs detectable with herschel at 160um. however this is subject to the uncertain effect of sublimation on the disc, which we propose can thus be constrained with such observations. our model also finds that the rapid decline in stellar luminosity results in only very young white dwarfs having luminous discs. as such systems are on average at larger distances they are hard to detect, but we predict that the stellar parameters most likely to yield a disc detection are a white dwarf at 200pc with cooling age of 0. 1myr, in line with observations of the helix nebula. our model does not predict close - in ( < 0. 01au ) dust, as observed for some white dwarfs, however we find that stellar wind drag leaves significant mass ( ~ 10 ^ { - 2 } msolar ), in bodies up to ~ 10m in diameter, inside the disc at the end of the agb phase which may replenish these discs.
|
arxiv:1007.4517
|
the phase - shifting digital holography ( psdh ) is a widely used approach for recovering signals by their interference ( with reference waves ) intensity measurements. such measurements are traditionally from multiple shots ( corresponding to multiple reference waves ). however, the imaging of dynamic signals requires a single - shot psdh approach, namely, such an approach depends only on the intensity measurements from the interference with a single reference wave. in this paper, based on the uniform admissibility of plane ( or spherical ) reference wave and the interference intensity - based approximation to quasi - interference intensity, the nonnegative refinable function is applied to establish the single - shot psdh in sobolev space. our approach is conducted by the intensity measurements from the interference of the signal with a single reference wave. the main results imply that the approximation version from such a single - shot approach converges exponentially to the signal as the level increases. moreover, like the transport of intensity equation ( tie ), our results can be interpreted from the perspective of intensity difference.
|
arxiv:2303.02839
|
we study interaction and radial polarization effects on the absorption spectrum of neutral bound magnetoexcitons confined in quantum - ring structures. we show that the size and orientation of the exciton ' s dipole moment, as well as the interaction screening, play important roles in the aharonov - bohm oscillations. in particular, the excitonic absorption peaks display a - b oscillations both in position and amplitude for weak electron - hole interaction and large radial polarization. the presence of impurity scattering induces anticrossings in the exciton spectrum, leading to a modulation in the absorption strength. these properties could be used in experimental investigations of the effect in semiconductor quantum - ring structures.
|
arxiv:cond-mat/0504569
|
this paper is concerned with the study of scalability in nonlinear heterogeneous networks affected by communication delays and disturbances. after formalizing the notion of scalability, we give two sufficient conditions to assess this property. our results can be used to study leader - follower and leaderless networks and also allow us to consider the case when the desired configuration of the system changes over time. we show how our conditions can be turned into design guidelines to guarantee scalability and illustrate their effectiveness via numerical examples.
|
arxiv:2006.07422
|
a potential implementation of quantum - information schemes in semiconductor nanostructures is studied. to this end, the formal theory of quantum encoding for avoiding errors is recalled and the existence of noiseless states for model systems is discussed. based on this theoretical framework, we analyze the possibility of designing noiseless quantum codes in realistic semiconductor structures. in the specific implementation considered, information is encoded in the lowest energy sector of charge excitations of a linear array of quantum dots. the decoherence channel considered is electron - phonon coupling. we show that besides the well - known phonon bottleneck, reducing single - qubit decoherence, suitable many - qubit initial preparation as well as register design may enhance the decoherence time by several orders of magnitude. this behaviour stems from the effective one - dimensional character of the phononic environment in the relevant region of physical parameters.
|
arxiv:quant-ph/9808036
|
in the framework of the model of the polar singlet - triplet jahn - teller centers the cross - section is obtained for magnetic neutron scattering in high - $ t _ { c } $ cuprates. multi - mode character of the $ cuo _ { 4 } $ cluster ground manifold in the new phase of polar centers determines the dependence of magnetic form - factor on the local structure and charge state of the center. it is shown that magnetic inelastic neutron scattering in the system of the polar singlet - triplet jahn - teller centers permits one to investigate the non - magnetic charge and structure excitations.
|
arxiv:cond-mat/9709033
|
this paper investigates a discretization scheme for mean curvature motion on point cloud varifolds with particular emphasis on singular evolutions. to define the varifold a local covariance analysis is applied to compute an approximate tangent plane for the points in the cloud. the core ingredient of the mean curvature motion model is the regularization of the first variation of the varifold via convolution with kernels with small stencil. consistency with the evolution velocity for a smooth surface is proven if a sufficiently small stencil and a regular sampling are taken into account. furthermore, an implicit and a semi - implicit time discretization are derived. the implicit scheme comes with discrete barrier properties known for the smooth, continuous evolution, whereas the semi - implicit scheme still ensures very good approximation properties in all our numerical experiments while being easy to implement. it is shown that the proposed method is robust with respect to noise and recovers the evolution of smooth curves as well as the formation of singularities such as triple points in 2d or minimal cones in 3d.
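the local covariance analysis used to define the varifold can be sketched in a few lines: the tangent plane at a point is spanned by the leading eigenvectors of the covariance of its neighbors. the neighborhood radius, function name and grid example below are illustrative choices, not the paper's exact construction:

```python
import numpy as np

def local_tangent_frame(points, center, radius):
    """Estimate tangent plane and normal at `center` via local covariance
    analysis of neighboring points. eigh returns ascending eigenvalues, so
    the first eigenvector approximates the normal, the rest the tangent."""
    nb = points[np.linalg.norm(points - center, axis=1) <= radius]
    C = np.cov((nb - nb.mean(axis=0)).T)
    _, V = np.linalg.eigh(C)
    return V[:, 1:], V[:, 0]   # tangent directions, normal direction

# points sampled on the plane z = 0: the normal should come out as +-e_z
g = np.linspace(-1.0, 1.0, 5)
xx, yy = np.meshgrid(g, g)
pts = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])
tangent, normal = local_tangent_frame(pts, np.zeros(3), radius=2.0)
```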
|
arxiv:2010.09419
|
current datasets for vehicular applications are mostly collected in north america or europe. models trained or evaluated on these datasets might suffer from geographical bias when deployed in other regions. specifically, for scene classification, a highway in a latin american country differs drastically from an autobahn, for example, both in design and maintenance levels. we propose vwise, a novel benchmark for road - type classification and scene classification tasks, in addition to tasks focused on external contexts related to vehicular applications in latam. we collected over 520 video clips covering diverse urban and rural environments across latin american countries, annotated with six classes of road types. we also evaluated several state - of - the - art classification models in baseline experiments, obtaining over 84 % accuracy. with this dataset, we aim to enhance research on vehicular tasks in latin america.
|
arxiv:2406.03273
|
in this paper, an approach for generalizing the gromov - hausdorff metric is presented, which applies to metric spaces equipped with some additional structure. a special case is the gromov - hausdorff - prokhorov metric between measured metric spaces. this abstract framework unifies several existing gromov - hausdorff - type metrics for metric spaces equipped with a measure, a point, a closed subset, a curve, a tuple of such structures, etc. along with reviewing these special cases in the literature, several new examples are also presented. two frameworks are provided, one for compact metric spaces and the other for boundedly - compact pointed metric spaces. in both cases, a gromov - hausdorff - type metric is defined and its topological properties are studied. in particular, completeness and separability are proved under some conditions. this enables one to study random metric spaces equipped with additional structures, which is the main motivation of this work.
|
arxiv:1812.03760
|
we present next - to - leading order ( nlo ) calculations including qcd and electroweak ( ew ) corrections for $ 2 \ ell2 \ nu $ diboson signatures with two opposite - charge leptons and two neutrinos. specifically, we study the processes $ pp \ to e ^ + \ mu ^ - \ nu _ { e } \ bar \ nu _ { \ mu } $ and $ pp \ to e ^ + e ^ - \ nu \ bar \ nu $, including all relevant off - shell diboson channels, $ w ^ + w ^ - $, $ zz $, $ \ gamma z $, as well as non - resonant contributions. photon - induced processes are computed at nlo ew, and we discuss subtle differences related to the definition and the renormalisation of the coupling $ \ alpha $ for processes with initial - and final - state photons. all calculations are performed within the automated munich / sherpa + openloops frameworks, and we provide numerical predictions for the lhc at 13 tev. the behaviour of the corrections is investigated with emphasis on the high - energy regime, where nlo ew effects can amount to tens of percent due to large sudakov logarithms. the interplay between $ ww $ and $ zz $ contributions to the same - flavour channel, $ pp \ to e ^ + e ^ - \ nu \ bar \ nu $, is discussed in detail, and a quantitative analysis of photon - induced contributions is presented. finally, we consider approximations that account for all sources of large logarithms, at high and low energy, by combining virtual ew corrections with a yfs soft - photon resummation or a qed parton shower.
|
arxiv:1705.00598
|
we investigate a machine learning approach to option greeks approximation based on gaussian process ( gp ) surrogates. the method takes in noisily observed option prices, fits a nonparametric input - output map and then analytically differentiates the latter to obtain the various price sensitivities. our motivation is to compute greeks in cases where direct computation is expensive, such as in local volatility models, or can only ever be done approximately. we provide a detailed analysis of numerous aspects of gp surrogates, including choice of kernel family, simulation design, choice of trend function and impact of noise. we further discuss the application to delta hedging, including a new lemma that relates quality of the delta approximation to discrete - time hedging loss. results are illustrated with two extensive case studies that consider estimation of delta, theta and gamma and benchmark approximation quality and uncertainty quantification using a variety of statistical metrics. among our key take - aways are the recommendation to use matern kernels, the benefit of including virtual training points to capture boundary conditions, and the significant loss of fidelity when training on stock - path - based datasets.
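the core mechanism of the approach — fitting a gp to noisy prices and differentiating the posterior mean analytically to get a sensitivity — can be sketched compactly. the snippet below is a hedged one - dimensional illustration with an rbf kernel on a synthetic smooth function, not the paper's matern - kernel setup; the kernel length - scale, noise level and function names are my own choices:

```python
import numpy as np

def gp_mean_and_grad(X, y, Xs, ell=0.5, sigma2=1e-4):
    """Fit a GP with an RBF kernel to noisy observations (X, y) and return
    the posterior mean and its analytic derivative at test points Xs."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    alpha = np.linalg.solve(k(X, X) + sigma2 * np.eye(len(X)), y)
    Ks = k(Xs, X)
    mean = Ks @ alpha
    # differentiate the kernel: d/dx k(x, xi) = -(x - xi)/ell^2 * k(x, xi)
    grad = (-(Xs[:, None] - X[None, :]) / ell ** 2 * Ks) @ alpha
    return mean, grad

rng = np.random.default_rng(1)
X = np.linspace(0.0, np.pi, 40)
y = np.sin(X) + 0.001 * rng.standard_normal(40)   # "noisy prices"
Xs = np.array([1.0, 1.5, 2.0])
mean, grad = gp_mean_and_grad(X, y, Xs)           # grad plays the role of delta
```

for option data the derivative of the posterior mean with respect to the spot is exactly the delta surrogate discussed in the abstract; higher greeks follow from higher kernel derivatives.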
|
arxiv:2010.08407
|
techniques of matrix completion aim to impute a large portion of missing entries in a data matrix through a small portion of observed ones. in practice, including collaborative filtering, prior information and special structures are usually employed in order to improve the accuracy of matrix completion. in this paper, we propose a unified nonconvex optimization framework for matrix completion with linearly parameterized factors. in particular, by introducing a condition referred to as correlated parametric factorization, we can conduct a unified geometric analysis for the nonconvex objective by establishing uniform upper bounds for low - rank estimation resulting from any local minimum. perhaps surprisingly, the condition of correlated parametric factorization holds for important examples including subspace - constrained matrix completion and skew - symmetric matrix completion. the effectiveness of our unified nonconvex optimization method is also empirically illustrated by extensive numerical simulations.
|
arxiv:2003.13153
|
phonons change remarkably the interatomic bond lengths in solids, and this work suggests a novel method by which this behavior can be displayed and analyzed. the bond - length spectrum is plotted for each of the different atomic bonding types. when comparing the bond lengths to those of an un - deformed crystal through the so - called difference bond - length spectrum, the effect of phonons is clearly visible. the perovskite lattice of srtio3 is used as an example and several lattice vibration modes are applied in a frozen - phonon calculation in a 2x2x2 supercell. ab - initio dft simulations using the vasp software were performed to calculate the density of states. the results show the important finding, reported here for the first time, that certain phonon interactions with shorter ti - o bonds decrease the band gap, while changes in the sr - ti bond length enlarge the band gap.
|
arxiv:0711.0567
|
building on the symmetry classification of disordered fermions, we give a proof of the proposal by kitaev, and others, for a " bott clock " topological classification of free - fermion ground states of gapped systems with symmetries. our approach differs from previous ones in that ( i ) we work in the standard framework of hermitian quantum mechanics over the complex numbers, ( ii ) we directly formulate a mathematical model for ground states rather than spectrally flattened hamiltonians, and ( iii ) we use homotopy - theoretic tools rather than k - theory. key to our proof is a natural transformation that squares to the standard bott map and relates the ground state of a d - dimensional system in symmetry class s to the ground state of a ( d + 1 ) - dimensional system in symmetry class s + 1. this relation gives a new vantage point on topological insulators and superconductors.
|
arxiv:1409.2537
|
we investigate cosmological consequences of an inflationary model which incorporates a generic seesaw extension ( types i and ii ) of the standard model of particle physics. a non - minimal coupling between the inflaton field and the ricci scalar is considered as well as radiative corrections at one loop order. this connection between the inflationary dynamics with neutrino physics results in a predictive model whose observational viability is investigated in light of the current cosmic microwave background data, baryon acoustic oscillation observations and type ia supernovae measurements. our results show that the non - minimal coupled seesaw potential provides a good description of the observational data when radiative corrections are positive. such a result favours the type ii seesaw mechanism over type i and may be an indication of physics beyond the standard model.
|
arxiv:2002.05154
|
the rapid growth of scientific literature makes it challenging for researchers to identify novel and impactful ideas, especially across disciplines. modern artificial intelligence ( ai ) systems offer new approaches, potentially inspiring ideas not conceived by humans alone. but how compelling are these ai - generated ideas, and how can we improve their quality? here, we introduce scimuse, which uses 58 million research papers and a large - language model to generate research ideas. we conduct a large - scale evaluation in which over 100 research group leaders - - from natural sciences to humanities - - ranked more than 4, 400 personalized ideas based on their interest. this data allows us to predict research interest using ( 1 ) supervised neural networks trained on human evaluations, and ( 2 ) unsupervised zero - shot ranking with large - language models. our results demonstrate how future systems can help generate compelling research ideas and foster unforeseen interdisciplinary collaborations.
|
arxiv:2405.17044
|
recently it was realized that the zigzag magnetic order in kitaev materials can be stabilized by small negative off - diagonal interactions called the $ \ gamma ' $ terms. to fully understand the effect of the $ \ gamma ' $ interactions, we investigate the quantum $ k $ - $ \ gamma $ - $ \ gamma ' $ model on the honeycomb lattice using the variational monte carlo method. two multinode z $ _ 2 $ quantum spin liquids ( qsls ) are found at $ \ gamma ' > 0 $, one of which is the previously found proximate kitaev spin liquid called the pksl14 state which shares the same projective symmetry group ( psg ) with the kitaev spin liquid. a remarkable result is that a $ \ pi $ - flux state with a distinct psg appears at larger $ \ gamma ' $. the $ \ pi $ - flux state is characterized by an enhanced periodic structure in the spinon dispersion in the original brillouin zone ( bz ), which is experimentally observable. interestingly, two pksl8 states are competing with the $ \ pi $ - flux state and one of them can be stabilized by six - spin ring - exchange interactions. the physical properties of these nodal qsls are studied by applying magnetic fields and the results depend on the number of cones. our study infers that there exist a family of zero - flux qsls that contain $ 6n + 2, n \ in \ mathbb z $ majorana cones and a family of $ \ pi $ - flux qsls containing $ 4 ( 6n + 2 ) $ cones in the original bz. it provides guidelines for experimental realization of non - kitaev qsls in relevant materials.
|
arxiv:2003.10488
|
in the papers ziegler ( 2001 ) and goldstein ( 2012 ) it was previously shown that any subset of the boolean cube $ s \ subset \ { 0, 1 \ } ^ n $ for $ n \ leq 9 $ can be partitioned into $ n + 1 $ parts of smaller diameter, i. e., the borsuk conjecture holds for such subsets. in this paper, it is shown that this is also true for $ n = 10 $ ; however, the complexity of the computational verification increases significantly. in order to perform the computations in a reasonable time, several heuristics were developed to reduce the search tree. the sat solver $ \ textbf { kissat } $ was used to cut off the search branches.
|
arxiv:2504.01233
|
we propose two types, namely type - i and type - ii, of quantum stabilizer codes using quadratic residue sets of prime modulus given by the form $ p = 4n \ pm1 $. the proposed type - i stabilizer codes are of cyclic structure and code length $ n = p $. they are constructed based on multi - weight circulant matrix generated from idempotent polynomial, which is obtained from a quadratic residue set. the proposed type - ii stabilizer codes are of quasi - cyclic ( qc ) structure and code length $ n = pk $, where $ k $ is the size of a quadratic residue set. they are constructed based on structured sparse - graph codes derived from proto - matrix and circulant permutation matrix. with the proposed methods, we design rich classes of cyclic and quasi - cyclic quantum stabilizer codes with variable code length. we show how the commutative constraint ( also referred to as the symplectic inner product constraint ) for quantum codes can be satisfied for each proposed construction method. we also analyze both the dimension and distance for type - i stabilizer codes and the dimension of type - ii stabilizer codes. for the cyclic quantum stabilizer codes, we show that they meet the existing distance bounds in literature.
|
arxiv:1407.8249
|
this work builds on and confirms the theoretical findings of part 1 of this paper, moarref & jovanovi \ ' c ( 2010 ). we use direct numerical simulations of the navier - stokes equations to assess the efficacy of blowing and suction in the form of streamwise traveling waves for controlling the onset of turbulence in a channel flow. we highlight the effects of the modified base flow on the dynamics of velocity fluctuations and net power balance. our simulations verify the theoretical predictions of part 1 that the upstream traveling waves promote turbulence even when the uncontrolled flow stays laminar. on the other hand, the downstream traveling waves with parameters selected in part 1 are capable of reducing the fluctuations ' kinetic energy, thereby maintaining the laminar flow. in flows driven by a fixed pressure gradient, a positive net efficiency as large as 25 % relative to the uncontrolled turbulent flow can be achieved with downstream waves. furthermore, we show that these waves can also relaminarize fully developed turbulent flows at low reynolds numbers. we conclude that the theory developed in part 1 for the linearized flow equations with uncertainty has considerable ability to predict full - scale phenomena.
|
arxiv:1006.4598
|
recent advances in artificial intelligence make it progressively hard to distinguish between genuine and counterfeit media, especially images and videos. one recent development is the rise of deepfake videos, based on manipulating videos using advanced machine learning techniques. this involves replacing the face of an individual from a source video with the face of a second person, in the destination video. this idea is becoming progressively refined as deepfakes are getting increasingly seamless and simpler to compute. combined with the outreach and speed of social media, deepfakes could easily fool individuals when depicting someone saying things that never happened and thus could persuade people into believing fictional scenarios, creating distress, and spreading fake news. in this paper, we examine a technique for possible identification of deepfake videos. we use euler video magnification which applies spatial decomposition and temporal filtering on video data to highlight and magnify hidden features like skin pulsation and subtle motions. our approach uses features extracted from the euler technique to train three models to classify counterfeit and unaltered videos and compare the results with existing techniques.
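the temporal - filtering half of euler video magnification is simple to sketch: band - pass each pixel's intensity along the time axis and amplify the retained band. the ideal fft filter, parameter names and the single - pixel example below are illustrative simplifications of the full method ( which also uses spatial decomposition ):

```python
import numpy as np

def eulerian_magnify(frames, fps, lo, hi, alpha):
    """Ideal temporal band-pass filter along the time axis of a video
    (shape (T, H, W)), with the filtered band amplified by alpha."""
    F = np.fft.fft(frames, axis=0)
    freqs = np.fft.fftfreq(frames.shape[0], d=1.0 / fps)
    keep = (np.abs(freqs) >= lo) & (np.abs(freqs) <= hi)
    band = np.fft.ifft(F * keep[:, None, None], axis=0).real
    return frames + alpha * band

# a single pixel pulsating at 2 hz; the 1-3 hz band doubles its amplitude
t = np.arange(30) / 30.0
frames = np.sin(2 * np.pi * 2.0 * t)[:, None, None]
out = eulerian_magnify(frames, fps=30, lo=1.0, hi=3.0, alpha=1.0)
```

for deepfake detection, features such as the amplified skin - pulsation band would then be fed to the downstream classifiers.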
|
arxiv:2101.11563
|
in this paper we construct random conformal snowflakes with large integral means spectrum at different points. these new estimates are a significant improvement over the previously known lower bound on the universal spectrum. our estimates are within 5 - 10 percent of the conjectured value of the universal spectrum.
|
arxiv:0710.4175
|
analyzing temporal data ( e. g., wearable device data ) requires a decision about how to combine information from the recent and distant past. in the context of classifying sleep status from actigraphy, webster ' s rescoring rules offer one popular solution based on the long - term patterns in the output of a moving - window model. unfortunately, the question of how to optimize rescoring rules for any given setting has remained unsolved. to address this problem and expand the possible use cases of rescoring rules, we propose rephrasing these rules in terms of epoch - specific features. our features take two general forms : ( 1 ) the time lag between now and the most recent [ or closest upcoming ] bout of time spent in a given state, and ( 2 ) the length of the most recent [ or closest upcoming ] bout of time spent in a given state. given any initial moving window model, these features can be defined recursively, allowing for straightforward optimization of rescoring rules. joint optimization of the moving window model and the subsequent rescoring rules can also be implemented using gradient - based optimization software, such as tensorflow. beyond binary classification problems ( e. g., sleep - wake ), the same approach can be applied to summarize long - term patterns for multi - state classification problems ( e. g., sitting, walking, or stair climbing ). we find that optimized rescoring rules improve the performance of sleep - wake classifiers, achieving accuracy comparable to that of certain neural network architectures.
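the two "most recent" feature families described above can be computed recursively in a single forward pass over the epochs, which is what makes them cheap to optimize. the sketch below is a minimal illustration of that recursion for a binary sleep - wake sequence; the function and variable names are my own, and the symmetric "closest upcoming" features would be obtained by the same pass run backwards:

```python
def bout_features(states, target=1):
    """Epoch-wise features defined in one recursive forward pass:
    lag[t]  - epochs since the most recent epoch spent in `target`
    blen[t] - length of the most recent bout of `target` seen so far."""
    lag, blen = [], []
    cur_run, last_len, last_seen = 0, 0, None
    for t, s in enumerate(states):
        if s == target:
            cur_run += 1
            last_len, last_seen = cur_run, t
        else:
            cur_run = 0
        lag.append(t - last_seen if last_seen is not None else t + 1)
        blen.append(last_len)
    return lag, blen

# wake = 0, sleep = 1
lag, blen = bout_features([0, 1, 1, 0, 0, 1, 0])
```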
|
arxiv:2104.14291
|
roy ' s equations are used to check if the scalar - isoscalar pion - pion scattering amplitudes fitted to experimental data fulfill crossing symmetry conditions. it is shown that the amplitudes describing the " down - flat " phase shift solution satisfy crossing symmetry below 1 gev while the amplitudes fitted to the " up - flat " data do not. in this way the long standing " up - down " ambiguity in the phenomenological determination of the scalar - isoscalar pion - pion amplitudes has been resolved, confirming the independent result of the recent joint analysis of the pi + pi - and pi0pi0 data.
|
arxiv:hep-ph/0310082
|
we propose a new algorithm for the problem of recovering data that adheres to multiple, heterogeneous low - dimensional structures from linear observations. focusing on data matrices that are simultaneously row - sparse and low - rank, we propose and analyze an iteratively reweighted least squares ( irls ) algorithm that is able to leverage both structures. in particular, it optimizes a combination of non - convex surrogates for row - sparsity and rank, a balancing of which is built into the algorithm. we prove locally quadratic convergence of the iterates to a simultaneously structured data matrix in a regime of minimal sample complexity ( up to constants and a logarithmic factor ), which is known to be impossible for a combination of convex surrogates. in experiments, we show that the irls method exhibits favorable empirical convergence, identifying simultaneously row - sparse and low - rank matrices from fewer measurements than state - of - the - art methods. code is available at https : / / github. com / ckuemmerle / simirls.
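to make the irls idea concrete, the sketch below shows the classical single - structure version for plain sparse recovery: reweighted least squares on a smoothed $ \ ell _ p $ surrogate with an annealed smoothing parameter. this is only the building block; the paper's contribution is the joint reweighting for simultaneously row - sparse and low - rank matrices, which this toy does not implement. all parameter choices here are illustrative:

```python
import numpy as np

def irls_sparse(A, b, p=0.5, iters=60):
    """IRLS for sparse recovery: minimize the smoothed surrogate
    sum_i (x_i^2 + eps^2)^(p/2) subject to A x = b, halving eps each step.
    Each iterate is the weighted least-norm solution
    x = W^-1 A^T (A W^-1 A^T)^-1 b with weights w_i = (x_i^2+eps^2)^(p/2-1)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    eps = 1.0
    for _ in range(iters):
        w_inv = (x ** 2 + eps ** 2) ** (1 - p / 2)   # inverse weights
        AWA = A @ (w_inv[:, None] * A.T)
        x = w_inv * (A.T @ np.linalg.solve(AWA, b))
        eps = max(eps / 2, 1e-4)                     # anneal the smoothing
    return x

rng = np.random.default_rng(2)
m, n, s = 20, 30, 2
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n))
x_hat = irls_sparse(A, A @ x_true)
```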
|
arxiv:2306.04961
|
huge volumes of data from domain - specific applications such as medical, financial, telephone and shopping records of individuals are regularly generated. sharing of these data has proved to be beneficial for data mining applications. since data mining often involves data that contains personally identifiable information, releasing such data may result in privacy breaches. on one hand such data is an important asset to business decision making when analyzed. on the other hand data privacy concerns may prevent data owners from sharing information for data analysis. in order to share data while preserving privacy, the data owner must come up with a solution which achieves the dual goal of privacy preservation as well as accuracy of the data mining tasks, mainly clustering and classification. privacy preserving data publishing ( ppdp ) is the study of eliminating privacy threats like linkage attacks while preserving data utility by anonymizing a data set before publishing. the proposed work is an extension of k - anonymization where a privacy gain ( prgain ) is computed for selective anonymization of sets of tuples. classification and clustering characteristics of the original data and of the data anonymized using the proposed algorithm have been evaluated in terms of information loss, execution time, and privacy achieved. the algorithm has been evaluated against standard data sets and the analysis shows that values for sensitive attributes are preserved with minimal information loss.
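the k - anonymity property that the proposed extension builds on is easy to state in code: after generalization, every combination of quasi - identifier values must be shared by at least k records, so a linkage attack cannot single out an individual. the toy table and field names below are illustrative, not from the paper:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """A table is k-anonymous w.r.t. the quasi-identifiers if every
    combination of their values is shared by at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(size >= k for size in groups.values())

# toy generalized table: ages bucketed, zip codes partially masked
records = [
    {"age": "20-30", "zip": "481**", "disease": "flu"},
    {"age": "20-30", "zip": "481**", "disease": "cold"},
    {"age": "30-40", "zip": "481**", "disease": "flu"},
]
```

here the table is 1 - anonymous but not 2 - anonymous, since the " 30-40 " group contains a single record; a selective scheme like the one proposed would generalize only such under - populated groups.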
|
arxiv:1403.5250
|
visually rich documents ( vrds ) are essential in academia, finance, medical fields, and marketing due to their multimodal information content. traditional methods for extracting information from vrds depend on expert knowledge and manual labor, making them costly and inefficient. the advent of deep learning has revolutionized this process, introducing models that leverage multimodal information ( vision, text, and layout ) along with pretraining tasks to develop comprehensive document representations. these models have achieved state - of - the - art performance across various downstream tasks, significantly enhancing the efficiency and accuracy of information extraction from vrds. in response to the growing demands and rapid developments in visually rich document understanding ( vrdu ), this paper provides a comprehensive review of deep learning - based vrdu frameworks. we systematically survey and analyze existing methods and benchmark datasets, categorizing them based on adopted strategies and downstream tasks. furthermore, we compare different techniques used in vrdu models, focusing on feature representation and fusion, model architecture, and pretraining methods, while highlighting their strengths, limitations, and appropriate scenarios. finally, we identify emerging trends and challenges in vrdu, offering insights into future research directions and practical applications. this survey aims to provide a thorough understanding of vrdu advancements, benefiting both academic and industrial sectors.
|
arxiv:2408.01287
|
we investigate the reactions p n - > d omega and p n - > d phi close to threshold and at higher energies. near threshold we calculate the s - wave amplitudes within the framework of the two - step model which is described by a triangle graph with pi - mesons in the intermediate state and find a ratio of the s - wave amplitudes squared of r = | a ( phi ) | ^ 2 / | a ( omega ) | ^ 2 = ( 4 - 8 ) x 10 ^ { - 3 }. any significant enhancement of the experimental value of r ( phi / omega ) over this prediction can be interpreted as a possible contribution of the intrinsic s - anti s component in the nucleon - wave function. we present arguments that there is a strong resonance effect in the omega n channel close to threshold. at higher energies we calculate the differential cross sections of the reactions p n - > d omega, p n - > d phi and the ratio of the phi / omega yields within the framework of the quark - gluon string model. an irregular behavior of the phi / omega - ratio is found at s < = 12 gev ^ 2 due to the interference of the t - and u - channel contributions.
|
arxiv:nucl-th/9808050
|
adopting fpga as an accelerator in datacenters is becoming mainstream for customized computing, but the fact that fpgas are hard to program creates a steep learning curve for software programmers. even with the help of high - level synthesis ( hls ), accelerator designers still have to manually perform code reconstruction and cumbersome parameter tuning to achieve the optimal performance. while many learning models have been leveraged by existing work to automate the design of efficient accelerators, the unpredictability of modern hls tools becomes a major obstacle for them to maintain high accuracy. to address this problem, we propose an automated dse framework - autodse - that leverages a bottleneck - guided coordinate optimizer to systematically find a better design point. autodse detects the bottleneck of the design in each step and focuses on high - impact parameters to overcome it. the experimental results show that autodse is able to identify the design point that achieves, on the geometric mean, 19. 9x speedup over one cpu core for machsuite and rodinia benchmarks. compared to the manually optimized hls vision kernels in xilinx vitis libraries, autodse can reduce their optimization pragmas by 26. 38x while achieving similar performance. with less than one optimization pragma per design on average, we are making progress towards democratizing customizable computing by enabling software programmers to design efficient fpga accelerators.
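the bottleneck - guided coordinate optimizer can be sketched generically: evaluate the design, identify the current bottleneck, and only explore the parameters known to influence it. the toy design space below ( unrolling vs. tiling, a max - of - two - stages cost model, and all names ) is my own illustration, not autodse's actual hls cost model:

```python
def bottleneck_dse(params, candidates, affects, evaluate, max_steps=10):
    """Bottleneck-guided coordinate search: at each step, find the current
    bottleneck and try new values only for parameters that influence it."""
    cost, bottleneck = evaluate(params)
    for _ in range(max_steps):
        best_cost, best_move = cost, None
        for p, values in candidates.items():
            if bottleneck not in affects[p]:
                continue                     # skip low-impact parameters
            for v in values:
                c, _ = evaluate({**params, p: v})
                if c < best_cost:
                    best_cost, best_move = c, (p, v)
        if best_move is None:
            break                            # bottleneck cannot be improved
        params[best_move[0]] = best_move[1]
        cost, bottleneck = evaluate(params)
    return params, cost

# toy model: unrolling speeds up compute, tiling speeds up memory;
# the design's latency is the slower of the two stages
def evaluate(p):
    compute, memory = 60.0 / p["unroll"], 32.0 / p["tile"]
    return max(compute, memory), ("compute" if compute >= memory else "memory")

affects = {"unroll": {"compute"}, "tile": {"memory"}}
candidates = {"unroll": [1, 2, 4, 8], "tile": [1, 2, 4]}
best, cost = bottleneck_dse({"unroll": 1, "tile": 1}, candidates, affects, evaluate)
```

the search alternates between the two stages as the bottleneck flips, which is the behavior that lets autodse converge with far fewer evaluations than exhaustive exploration.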
|
arxiv:2009.14381
|
it has been recently proved that the category of n - manifolds of degree $ n $, i. e., $ \ mathbb n $ - graded supermanifolds of degree $ n $ in which the parity agrees with the gradation, is equivalent to the category of purely even $ n $ - tuple vector superbundles with a certain action of the symmetry group $ s _ n $ permuting the vector bundle structures ; this can be viewed as a desuperization of n - manifolds. we put this result into a much wider context of graded structures on supermanifolds and describe explicitly several canonical equivalences of the corresponding categories in a purely geometrical and constructive way ; the desuperization equivalence functor is a composition of some of them. our constructions are completely canonical, we use such tools of supergeometry as the iterated tangent functor, the parity reversion in vector superbundles, and the modern view of $ n $ - tuple vector bundles as a sequence of commuting euler vector fields of the vector bundle structures in question. all this opens new horizons in the land of graded supergeometry.
|
arxiv:2505.06366
|
we study the geometry and topology of hilbert schemes of points on the orbifold surface [ c ^ 2 / g ], respectively the singular quotient surface c ^ 2 / g, where g is a finite subgroup of sl ( 2, c ) of type a or d. we give a decomposition of the ( equivariant ) hilbert scheme of the orbifold into affine space strata indexed by a certain combinatorial set, the set of young walls. the generating series of euler characteristics of hilbert schemes of points of the singular surface of type a or d is computed in terms of an explicit formula involving a specialized character of the basic representation of the corresponding affine lie algebra ; we conjecture that the same result holds also in type e. our results are consistent with known results in type a, and are new for type d.
|
arxiv:1512.06848
|
we use the well - known lusternik - schnirelman theory to prove the existence of a nondecreasing sequence of variational eigenvalues for the subelliptic $ p $ - laplacian subject to the dirichlet boundary condition.
|
arxiv:2307.03013
|
in this short note, we study sums of the shape $ \ sum _ { n \ leqslant x } { f ( [ x / n ] ) } / { [ x / n ] }, $ where $ f $ is the euler totient function $ \ varphi $, the dedekind function $ \ psi $, the sum - of - divisors function $ \ sigma $ or the alternating sum - of - divisors function $ \ beta. $ we improve previous results when $ f = \ varphi $ and derive new estimates when $ f = \ psi, f = \ sigma $ and $ f = \ beta. $
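the sums in question are straightforward to evaluate numerically, which is useful for sanity - checking asymptotic estimates. a direct - count totient and a naive $ o ( x ) $ evaluation are used below purely for illustration; in practice one would group the $ o ( \ sqrt x ) $ distinct values of $ [ x / n ] $:

```python
from math import gcd

def phi(n):
    # Euler totient by direct count (fine for illustration only)
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def floor_sum(x, f=phi):
    # sum over n <= x of f([x/n]) / [x/n], with [.] the floor
    return sum(f(x // n) / (x // n) for n in range(1, x + 1))
```

for example, floor_sum(4) sums phi(4)/4 + phi(2)/2 + phi(1)/1 + phi(1)/1 = 1/2 + 1/2 + 1 + 1.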
|
arxiv:2109.02924
|
the absence of a missing moment of inertia in clean solid $ ^ 4 $ he suggests that the minimal experimentally relevant model is one in which disorder induces superfluidity in a bosonic lattice. to this end, we explore the relevance of the disordered bose - hubbard model in this context. we posit that a clean array of $ ^ 4 $ he atoms is a self - generated mott insulator, that is, the $ ^ 4 $ he atoms constitute the lattice as well as the ` charge carriers '. with this assumption, we are able to interpret the textbook defect - driven supersolids as excitations of either the lower or upper hubbard bands. in the experiments at hand, disorder induces a closing of the mott gap through the generation of mid - gap localized states at the chemical potential. depending on the magnitude of the disorder, we find that the destruction of the mott state takes place for $ d + z > 4 $ either through a bose glass phase ( strong disorder ) or through a direct transition to a superfluid ( weak disorder ). for $ d + z < 4 $, disorder is always relevant. the critical value of the disorder that separates these two regimes is shown to be a function of the boson filling, interaction and the momentum cut off. we apply our work to the experimentally observed enhancement that $ ^ 3 $ he impurities have on the onset temperature for the missing moment of inertia. we find quantitative agreement with experimental trends.
|
arxiv:cond-mat/0612505
|
we present a quantum algorithm for approximating the linear structures of a boolean function $ f $. different from previous algorithms ( such as simon ' s and shor ' s algorithms ), which rely on restrictions on the boolean function, our algorithm applies to every boolean function with no promise. our methods are based on the bernstein - vazirani algorithm, which identifies linear boolean functions, and on the idea of simon ' s period - finding algorithm. more precisely, we obtain how the extent of approximation changes over time, and we also recover quasi - linear structures if they exist. we further show that the running time of the quantum algorithm needed to settle this question completely is governed by the relative differential uniformity $ \ delta _ f $ of $ f $. roughly speaking, the smaller $ \ delta _ f $ is, the less time is needed.
|
arxiv:1404.0611
|
in this paper we describe synergy, a highly parallelizable, linear planning system based on the genetic programming paradigm. rather than reasoning about the world it is planning for, synergy uses artificial selection, recombination and a fitness measure to generate linear plans that solve conjunctive goals. we ran synergy on several domains ( e. g., the briefcase problem and a few variants of the robot navigation problem ), and the experimental results show that our planner is capable of handling problem instances that are one to two orders of magnitude larger than the ones solved by ucpop. to reduce the search space and to enhance the expressive power of synergy, we also propose two major extensions to our planning system : a formalism for using hierarchical planning operators, and a framework for planning in dynamic environments.
|
arxiv:cs/9810016
|
evolutionary game theory has impacted many fields of research by providing a mathematical framework for studying the evolution and maintenance of social and moral behaviors. this success is owed in large part to the demonstration that the central equation of this theory - the replicator equation - is the deterministic limit of a stochastic imitation ( social learning ) dynamics. here we offer an alternative elementary proof of this result, which holds for the scenario where players compare their instantaneous ( not average ) payoffs to decide whether to maintain or change their strategies, and only more successful individuals can be imitated.
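For intuition about the deterministic limit, the replicator equation for a two-strategy game can be integrated directly. The sketch below uses a hawk-dove payoff matrix with V = 2, C = 4 (our illustrative choice, giving an interior equilibrium at x* = V/C = 0.5) and plain Euler steps; it is the limiting ODE, not the stochastic imitation dynamics analyzed in the paper:

```python
def replicator_step(x, dt=0.01):
    # One Euler step of dx/dt = x (1 - x) (f_hawk - f_dove) for the
    # hawk-dove payoffs [[(V-C)/2, V], [0, V/2]] with V=2, C=4.
    f_hawk = -1.0 * x + 2.0 * (1 - x)   # payoff to hawks at hawk-fraction x
    f_dove = 0.0 * x + 1.0 * (1 - x)    # payoff to doves
    return x + dt * x * (1 - x) * (f_hawk - f_dove)

x = 0.1  # initial fraction of hawks
for _ in range(5000):
    x = replicator_step(x)
print(x)  # approaches the mixed equilibrium x* = 0.5
```

Starting from any interior point, the trajectory converges to the mixed equilibrium, matching the standard deterministic prediction.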
|
arxiv:2404.00754
|
while deep learning ( dl ) - based video deraining methods have achieved significant success recently, they still suffer from two major drawbacks. firstly, most of them do not sufficiently model the characteristics of the rain layers of rainy videos. in fact, rain layers exhibit strong physical properties ( e. g., direction, scale and thickness ) in the spatial dimension and natural continuities in the temporal dimension, and thus can generally be modelled as a spatial - temporal process in statistics. secondly, current dl - based methods depend heavily on labeled synthetic training data, whose rain types always deviate from those in unlabeled real data. such a gap between synthetic and real data sets leads to poor performance when applying them in real scenarios. to address these issues, this paper proposes a new semi - supervised video deraining method, in which a dynamic rain generator is employed to fit the rain layer, expecting to better depict its insightful characteristics. specifically, this dynamic generator consists of one emission model and one transition model that simultaneously encode the spatially physical structure and the temporally continuous changes of rain streaks, respectively, both of which are parameterized as deep neural networks ( dnns ). furthermore, different prior formats are designed for the labeled synthetic and unlabeled real data, so as to fully exploit the common knowledge underlying them. last but not least, we also design a monte carlo em algorithm to solve this model. extensive experiments are conducted to verify the superiority of the proposed semi - supervised deraining model.
|
arxiv:2103.07939
|
the science of value, or value science, is a creation of philosopher robert s. hartman, which attempts to formally elucidate value theory using both formal and symbolic logic. = = fundamentals = = the fundamental principle, which functions as an axiom, and can be stated in symbolic logic, is that a thing is good insofar as it exemplifies its concept. to put it another way, " a thing is good if it has all its descriptive properties. " this means, according to hartman, that the good thing has a name, that the name has a meaning defined by a set of properties, and that the thing possesses all of the properties in the set. a thing is bad if it does not fulfill its description. he introduces three basic dimensions of value, systemic, extrinsic and intrinsic, for sets of properties — perfection is to systemic value what goodness is to extrinsic value and what uniqueness is to intrinsic value — each with its own cardinality : finite, $ \ aleph _ 0 $ and $ \ aleph _ 1 $. in practice, the terms " good " and " bad " apply to finite sets of properties, since this is the only case where there is a ratio between the total number of desired properties and the number of such properties possessed by some object being valued. ( in the case where the number of properties is countably infinite, the extrinsic dimension of value, the exposition as well as the mere definition of a specific concept is taken into consideration. ) hartman quantifies this notion by the principle that each property of the thing is worth as much as each other property, depending on the level of abstraction. hence, if a thing has $ n $ properties, each of them — if on the same level of abstraction — is proportionally worth $ n ^ { - 1 } $. = = infinite sets of properties = = hartman goes on to consider infinite sets of properties. hartman claims that according to a theorem of transfinite mathematics, any collection of material objects is at most denumerably infinite. this is not, in fact, a theorem of mathematics. but, according to hartman, people are capable of a denumerably infinite set of predicates, intended in as many ways, which he gives as $ \ aleph _ 1 $. as this yields a notional cardinality of the continuum, hartman advises that when setting out to describe a person,
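Hartman's finite case reduces to a simple ratio: the fraction of the concept's defining properties that the object actually possesses, each property counting equally. A sketch under our own naming (`goodness`; the example property sets are invented for illustration):

```python
def goodness(expected_properties, observed_properties):
    # Hartman's measure for a finite concept: the fraction of the
    # concept's defining properties that the valued object possesses.
    # Each of the n properties contributes equally (worth 1/n).
    expected = set(expected_properties)
    if not expected:
        raise ValueError("a concept must define at least one property")
    return len(expected & set(observed_properties)) / len(expected)

chair = {"seat", "legs", "back", "supports sitting"}
print(goodness(chair, {"seat", "legs", "back", "supports sitting"}))  # 1.0
print(goodness(chair, {"seat", "legs"}))                              # 0.5
```

A ratio of 1.0 corresponds to "good" (the thing fulfills its description); anything less measures how far it falls short.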
|
https://en.wikipedia.org/wiki/Science_of_value
|
we study quintessential inflation with an inverse hyperbolic type potential $ v ( \ phi ) = { v _ 0 } / { \ cosh \ left ( { \ phi ^ n } / { \ lambda ^ n } \ right ) } $, where $ v _ 0 $, $ \ lambda $ and $ n $ are parameters of the theory. we obtain a bound on $ \ lambda $ for different values of the parameter $ n $. the spectral index and the tensor - to - scalar ratio fall in the $ 1 \ sigma $ bound given by the planck 2015 data for $ n \ geq 5 $ for certain values of $ \ lambda $. however, for $ 3 \ leq n < 5 $ there exist values of $ \ lambda $ for which the spectral index and the tensor - to - scalar ratio fall only within the $ 2 \ sigma $ bound of the planck data. furthermore, we show that the scalar field with the given potential can also give rise to late time acceleration if we invoke the coupling to massive neutrino matter. we also consider the instant preheating mechanism with yukawa interaction and put bounds on the coupling constants for our model using the nucleosynthesis constraint on relic gravity waves produced during inflation.
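The shape of such a potential is easy to probe numerically. The sketch below evaluates $ v ( \ phi ) $ and the first slow-roll parameter $ \ epsilon = \ tfrac { 1 } { 2 } ( v ' / v ) ^ 2 $ in reduced Planck units, with illustrative parameter values of our own choosing; the paper's actual bounds on $ \ lambda $ are not reproduced here:

```python
from math import cosh

def V(phi, V0=1.0, lam=1.0, n=6):
    # Inverse hyperbolic potential V(phi) = V0 / cosh(phi**n / lam**n).
    return V0 / cosh(phi ** n / lam ** n)

def epsilon(phi, h=1e-6, **kw):
    # First slow-roll parameter eps = 0.5 * (V'/V)**2 in reduced Planck
    # units, with V' taken by a central finite difference.
    dV = (V(phi + h, **kw) - V(phi - h, **kw)) / (2 * h)
    return 0.5 * (dV / V(phi, **kw)) ** 2

print(epsilon(0.5, n=6))  # tiny near the flat plateau around phi = 0
```

For large $ n $ the potential develops a flat plateau near the origin (where $ \ epsilon \ ll 1 $, supporting inflation) and decays steeply for $ \ phi \ gtrsim \ lambda $, which is the qualitative shape quintessential inflation requires.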
|
arxiv:1708.00156
|
celestial amplitude plays an important role in the understanding of holography. computing celestial amplitudes by recursion can deepen our understanding of the structure of celestial amplitudes. as an important recursion method, the berends - giele ( bg ) currents on the celestial sphere are worth studying. in this paper, we study the celestial bg recursion and utilize this to calculate some typical examples. we also explore the ope behavior of celestial bg currents. moreover, we generalize the " sewing procedure " for bg currents to the celestial case.
|
arxiv:2307.14772
|
it has been widely observed that capitalization - weighted indexes can be beaten by surprisingly simple, systematic investment strategies. indeed, in the u. s. stock market, equal - weighted portfolios, random - weighted portfolios, and other naive, non - optimized portfolios tend to outperform a capitalization - weighted index over the long term. this outperformance is generally attributed to beneficial factor exposures. here, we provide a deeper, more general explanation of this phenomenon by decomposing portfolio log - returns into an average growth and an excess growth component. using a rank - based empirical study we argue that the excess growth component plays the major role in explaining the outperformance of naive portfolios. in particular, individual stock growth rates are not as critical as is traditionally assumed.
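The excess growth component referred to here has a standard closed form in stochastic portfolio theory, $ \ gamma ^ * = \ tfrac { 1 } { 2 } ( \ sum _ i w _ i \ sigma _ { ii } - \ sum _ { i, j } w _ i w _ j \ sigma _ { ij } ) $. A sketch with an invented two-stock covariance matrix, showing that a diversified portfolio picks up extra growth while a single-stock portfolio gets none:

```python
def excess_growth(weights, cov):
    # Excess growth rate gamma* = 0.5 * (weighted average variance
    # minus portfolio variance), the diversification term in the
    # decomposition of portfolio log-returns.
    n = len(weights)
    avg_var = sum(weights[i] * cov[i][i] for i in range(n))
    port_var = sum(weights[i] * weights[j] * cov[i][j]
                   for i in range(n) for j in range(n))
    return 0.5 * (avg_var - port_var)

# Two uncorrelated stocks with 40% and 20% annual volatility (illustrative).
cov = [[0.16, 0.0], [0.0, 0.04]]
print(excess_growth([0.5, 0.5], cov))   # 0.025
print(excess_growth([1.0, 0.0], cov))   # 0.0
```

Because portfolio variance is always at most the weighted average of individual variances, $ \ gamma ^ * \ geq 0 $, and it is strictly positive for any genuinely diversified portfolio, which is the mechanism behind the outperformance of naive portfolios discussed above.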
|
arxiv:1809.03769
|
circadian clocks are oscillatory genetic networks that help organisms adapt to the 24 - hour day / night cycle. the clock of the green alga ostreococcus tauri is the simplest plant clock discovered so far. its many advantages as an experimental system facilitate the testing of computational predictions. we present a model of the ostreococcus clock in the stochastic process algebra bio - pepa and exploit its mapping to different analysis techniques, such as ordinary differential equations, stochastic simulation algorithms and model - checking. the small number of molecules reported for this system tests the limits of the continuous approximation underlying differential equations. we investigate the difference between continuous - deterministic and discrete - stochastic approaches. stochastic simulation and model - checking allow us to formulate new hypotheses on the system behaviour, such as the presence of self - sustained oscillations in single cells under constant light conditions. we investigate how to model the timing of dawn and dusk in the context of model - checking, which we use to compute how the probability distributions of key biochemical species change over time. these show that the relative variation in expression level is smallest at the time of peak expression, making peak time an optimal experimental phase marker. building on these analyses, we use approaches from evolutionary systems biology to investigate how changes in the rate of mrna degradation impacts the phase of a key protein likely to affect fitness. we explore how robust this circadian clock is towards such potential mutational changes in its underlying biochemistry. our work shows that multiple approaches lead to a more complete understanding of the clock.
|
arxiv:1002.4661
|