text | source
---|---
imitating successful behavior is a natural and frequently applied approach when facing scenarios for which we have little or no experience upon which to base our decisions. in this paper, we consider such behavior in atomic congestion games. we propose to study the concurrent imitation dynamics that emerge when each player samples another player and possibly imitates this agent's strategy if the anticipated latency gain is sufficiently large. our main focus is on convergence properties. using a potential function argument, we show that our dynamics converge in a monotonic fashion to stable states, in which no player can improve its latency by imitating somebody else. as our main result, we show rapid convergence to approximate equilibria, at which only a small fraction of agents sustains a latency significantly above or below average. in particular, imitation dynamics behave like fully polynomial-time approximation schemes (FPTAS). fixing all other parameters, the convergence time depends only logarithmically on the number of agents. since imitation processes are not innovative, they cannot discover unused strategies, and strategies may become extinct with non-zero probability. for the case of singleton games, we show that the probability of this event occurring is negligible. additionally, we prove that the social cost of a stable state reached by our dynamics is not much worse than that of an optimal state in singleton congestion games with linear latency functions. finally, we discuss how the protocol can be extended such that, in the long run, the dynamics converge to a Nash equilibrium.
|
arxiv:0808.2081
|
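The imitation rule in the abstract above is easy to prototype. Below is a minimal sketch, assuming a toy singleton congestion game with linear latency functions; the instance, the gain threshold, and the migration probability are illustrative choices, not the paper's exact protocol.

```python
import random

# Toy singleton congestion game: each of N agents picks one of R resources,
# and a resource's latency is linear in its load (illustrative coefficients).
R, N, ROUNDS = 4, 1000, 50
coeff = [1.0, 2.0, 3.0, 4.0]            # latency(r) = coeff[r] * load(r)
strategy = [random.randrange(R) for _ in range(N)]

def latencies():
    load = [0] * R
    for s in strategy:
        load[s] += 1
    return [coeff[r] * load[r] for r in range(R)]

for _ in range(ROUNDS):
    lat = latencies()
    nxt = strategy[:]
    for i in range(N):
        j = random.randrange(N)          # sample another agent uniformly
        gain = lat[strategy[i]] - lat[strategy[j]]
        if gain > 1.0 and random.random() < 0.5:  # imitate only on a large gain
            nxt[i] = strategy[j]
    strategy = nxt                        # all agents migrate concurrently

print("final loads:", [strategy.count(r) for r in range(R)])
```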
in this paper we study the relationship between queries and search engines by exploring adaptive properties based on a simple search engine model. we use set theory, employing words and terms to define singleton event spaces in the search engine model, and then establish the inclusion relations between one singleton and another.
|
arxiv:1212.3906
|
using a sample of cosmic voids identified in the Sloan Digital Sky Survey Data Release 7, we study the star formation activity of void galaxies. the properties of galaxies living in voids are compared with those of galaxies living in the void shells and with a control sample representing the general galaxy population. void galaxies appear to form stars more efficiently than shell galaxies and the control sample. this result cannot be interpreted as a consequence of the bias towards low masses in underdense regions, as void galaxy subsamples with the same mass distribution as the control sample also show statistically different specific star formation rates. this highlights the fact that galaxy evolution in voids is slower with respect to the evolution of the general population. nevertheless, when only the star-forming galaxies are considered, we find that the star formation rate is insensitive to the environment, as the main sequence is remarkably constant in the three samples under consideration. this fact implies that environmental effects manifest themselves as fast quenching mechanisms, while leaving the non-quenched galaxies almost unaffected, as their star formation activity is largely regulated by the mass of their halo. we also analyse galaxy properties as a function of void-centric distance and find that the enhancement in the star formation activity with respect to the control sample is observable up to a radial distance of 1.5 Rvoid. this result can be used as a suitable definition of void shells. finally, we find that larger voids show an enhanced star formation activity in their shells with respect to their smaller counterparts, which could be related to the different dynamical evolution experienced by voids of different sizes.
|
arxiv:1410.0023
|
the dual-phase time projection chamber using liquid xenon as target material is one of the most successful detectors for dark matter direct searches, and has improved the sensitivity of searches for weakly interacting massive particles by almost five orders of magnitude over the past several decades. however, it remains a great challenge for the dual-phase liquid xenon time projection chamber to be used as the detector in next-generation dark matter search experiments ($\sim$50-tonne sensitive mass), in terms of reaching a sufficiently high field strength for drifting electrons and a sufficiently low background rate. here we propose a single-phase liquid xenon time projection chamber with a detector geometry similar to a Geiger counter as a potential detector technique for future dark matter searches, one which trades off field uniformity for fewer isolated charge signals. a preliminary field simulation and signal reconstruction study has shown that such a single-phase time projection chamber is technically feasible and can have sufficiently good signal reconstruction performance for dark matter direct searches.
|
arxiv:2102.06903
|
in this paper, we propose a model-free unsupervised learning approach to forecast real-time locational marginal prices (RTLMPs) in wholesale electricity markets. by organizing system-wide hourly RTLMP data into a three-dimensional (3D) tensor consisting of a series of time-indexed matrices, we formulate the RTLMP forecasting problem as one of generating the next matrix of forecasted RTLMPs given the historical RTLMP tensor, and propose a generative adversarial network (GAN) model to forecast RTLMPs. the proposed formulation preserves the spatio-temporal correlations among system-wide RTLMPs in the format of the historical RTLMP tensor. the proposed GAN model learns these spatio-temporal correlations from the historical RTLMP tensors and generates RTLMPs that are statistically similar and temporally coherent with the historical data. the proposed approach forecasts system-wide RTLMPs using only publicly available historical price data, without involving confidential system model information such as system parameters, topology, or operating conditions. the effectiveness of the proposed approach is verified through case studies using historical RTLMP data from the Southwest Power Pool (SPP).
|
arxiv:2011.04717
|
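The tensor layout described above is simple to set up. A small illustration follows, with placeholder dimensions and random data standing in for historical RTLMPs; only the data organization is shown, not the GAN itself.

```python
import numpy as np

# Hypothetical dimensions: D days, H hours per day, K pricing nodes.
D, H, K = 365, 24, 50
rtlmp = np.random.rand(D, H, K)      # placeholder for historical RTLMP data

# Each time-indexed (H x K) matrix is one day's price surface; the task is:
# given the history rtlmp[:t], generate the next matrix rtlmp[t].
history, target = rtlmp[:-1], rtlmp[-1]
print(history.shape, target.shape)   # (364, 24, 50) (24, 50)
```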
this paper describes a new algorithm for computing a nonnegative low-rank matrix (NLRM) approximation for nonnegative matrices. our approach is completely different from classical nonnegative matrix factorization (NMF), which has been studied for more than twenty-five years. for a given nonnegative matrix, the usual NMF approach is to determine two nonnegative low-rank matrices such that the distance between their product and the given matrix is as small as possible. the proposed NLRM approach, by contrast, determines a single nonnegative low-rank matrix minimizing the distance to the given nonnegative matrix. there are two advantages. (i) the minimized distance achieved by the proposed NLRM method can be smaller than that achieved by NMF, implying that NLRM can obtain a better low-rank matrix approximation. (ii) our low-rank matrix automatically admits a matrix singular value decomposition, which provides a significant index based on singular values that can be used to identify important singular basis vectors; this information cannot be obtained from classical NMF. the proposed NLRM approximation algorithm is derived using alternating projection onto the low-rank matrix manifold and the nonnegativity constraint set. experimental results are presented to demonstrate the above-mentioned advantages of the proposed NLRM method compared with the NMF method.
|
arxiv:1912.06836
|
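The alternating projection idea above can be sketched in a few lines: project onto the set of rank-$k$ matrices via truncated SVD, then onto the nonnegative orthant by clipping. This is only a minimal sketch of the principle, not the paper's exact algorithm.

```python
import numpy as np

def nlrm(A, rank, iters=200):
    """Alternating projection between the rank-`rank` matrices (truncated
    SVD) and the nonnegative orthant (elementwise clipping)."""
    X = A.copy()
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank projection
        X = np.maximum(X, 0.0)                     # nonnegativity projection
    return X

A = np.abs(np.random.rand(30, 20))
X = nlrm(A, rank=5)
print("approximation error:", np.linalg.norm(A - X))
```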
glass-forming liquids exhibit slow dynamics below their melting temperatures, maintaining an amorphous structure reminiscent of normal liquids. distinguishing microscopic structures in the supercooled and high-temperature regimes remains a debated topic. building on recent advances in machine learning, particularly graph neural networks (GNNs), our study automatically extracts features, unveiling fundamental mechanisms driving structural changes at varying temperatures. we employ the self-attention mechanism to generate attention coefficients that quantify the importance of connections between graph nodes, providing insight into the rationale behind the GNN's predictions. exploring structural changes with decreasing temperature through the GNN with self-attention, together with physically defined structural descriptors, including the bond-orientational order parameter, Voronoi cell volume, and coordination number, we identify strong correlations between high attention coefficients and more disordered structures as a key indicator of variations in glass-forming liquids.
|
arxiv:2505.00993
|
we have analyzed new and archival IUE observations of narrow-line Seyfert 1 galaxies (NLS1) in order to revise the ultraviolet (UV) properties of this sub-group of active galactic nuclei (AGN). we have found broad wings in the strongest UV emission lines, ruling out the hypothesis that there is no broad-line emission region in this type of object. since the similarities in spectral energy distributions from the far-infrared (FIR) to the soft X-rays in both narrow-line and broad-line Seyfert 1 galaxies do not suggest that the nuclei of NLS1 are hidden from direct view, we discuss the possibility that the line-emitting material in NLS1 is optically thin.
|
arxiv:astro-ph/9706127
|
a compact graph-like space is a triple $(X, V, E)$ where $X$ is a compact, metrizable space, $V \subseteq X$ is a closed zero-dimensional subset, and $E$ is an index set such that $X \setminus V \cong E \times (0, 1)$. new characterizations of compact graph-like spaces are given, connecting them to certain classes of continua and to standard subspaces of Freudenthal compactifications of locally finite graphs. these are applied to characterize Eulerian graph-like compacta.
|
arxiv:1609.00933
|
a joint project between the Canadian Astronomy Data Centre (CADC) of the National Research Council Canada and the Italian Istituto Nazionale di Astrofisica - Osservatorio Astronomico di Trieste (INAF-OATs), partially funded by the EGI-Engage H2020 European project, is devoted to deploying an integrated infrastructure, based on the International Virtual Observatory Alliance (IVOA) standards, to access and exploit astronomical data. currently CADC-CANFAR provides scientists with an access, storage, and computation facility based on software libraries implementing a set of IVOA standards. the deployment of a twin infrastructure, built on essentially the same open-source software libraries, has been started at INAF-OATs. this new infrastructure now provides users with an access control service and a storage service. the final goal of the ongoing project is to build a geographically distributed integrated infrastructure providing complete interoperability, both in user access control and in data sharing. this paper describes the target infrastructure, the main user requirements covered, the technical choices, and the implemented solutions.
|
arxiv:1712.02610
|
the growing problem of unsolicited bulk e-mail, also known as "spam", has generated a need for reliable anti-spam e-mail filters. filters of this type have so far been based mostly on manually constructed keyword patterns. an alternative approach has recently been proposed, whereby a Naive Bayesian classifier is trained automatically to detect spam messages. we test this approach on a large collection of personal e-mail messages, which we make publicly available in "encrypted" form, contributing towards standard benchmarks. we introduce appropriate cost-sensitive measures, investigating at the same time the effect of attribute-set size, training-corpus size, lemmatization, and stop lists, issues that have not been explored in previous experiments. finally, the Naive Bayesian filter is compared, in terms of performance, to a filter that uses keyword patterns and is part of a widely used e-mail reader.
|
arxiv:cs/0008019
|
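A modern reconstruction of the core classifier from the abstract takes only a few lines with scikit-learn; the toy corpus below is a placeholder, and the paper's attribute selection, lemmatization, stop lists, and cost-sensitive evaluation are omitted.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny placeholder corpus; the paper uses a large personal e-mail collection.
mails = ["cheap pills buy now", "meeting agenda attached",
         "win money fast", "lunch tomorrow?"]
labels = [1, 0, 1, 0]                      # 1 = spam, 0 = legitimate

vec = CountVectorizer()                     # bag-of-words attributes
X = vec.fit_transform(mails)
clf = MultinomialNB().fit(X, labels)        # Naive Bayes training

test = vec.transform(["buy cheap meeting"])
print(clf.predict_proba(test))              # P(legitimate), P(spam)
```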
the recent advances in language modeling have significantly improved the generative capabilities of deep neural models: in 2019 OpenAI released GPT-2, a pre-trained language model that can autonomously generate coherent, non-trivial, and human-like text samples. since then, ever more powerful text generative models have been developed. adversaries can exploit these tremendous generative capabilities to enhance social bots that will have the ability to write plausible deepfake messages, hoping to contaminate public debate. to prevent this, it is crucial to develop deepfake social media message detection systems. however, to the best of our knowledge, no one has yet addressed the detection of machine-generated texts on social networks like Twitter or Facebook. with the aim of helping research in this detection field, we collected the first dataset of real deepfake tweets, TweepFake. it is real in the sense that each deepfake tweet was actually posted on Twitter. we collected tweets from a total of 23 bots, imitating 17 human accounts. the bots are based on various generation techniques, i.e., Markov chains, RNN, RNN+Markov, LSTM, and GPT-2. we also randomly selected tweets from the humans imitated by the bots to obtain an overall balanced dataset of 25,572 tweets (half human- and half bot-generated). the dataset is publicly available on Kaggle. lastly, we evaluated 13 deepfake text detection methods (based on various state-of-the-art approaches) to both demonstrate the challenges that TweepFake poses and create a solid baseline of detection techniques. we hope that TweepFake can offer the opportunity to tackle deepfake detection on social media messages as well.
|
arxiv:2008.00036
|
recent research in the runtime analysis of estimation of distribution algorithms (EDAs) has focused on univariate EDAs for multi-valued decision variables. in particular, the runtime of the multi-valued cGA (r-cGA) and UMDA on multi-valued functions has been a significant area of study. Adak and Witt (PPSN 2024) and Hamano et al. (ECJ 2024) independently performed a first runtime analysis of the r-cGA on the r-valued OneMax function (r-OneMax). Adak and Witt also introduced a different r-valued OneMax function called G-OneMax. however, for that function only empirical results have been provided so far, owing to the increased complexity of its runtime analysis: r-OneMax involves categorical values of two types only, while G-OneMax encompasses all possible values. in this paper, we present the first theoretical runtime analysis of the r-cGA on the G-OneMax function. we demonstrate that the runtime is $O(nr^3 \log^2 n \log r)$ with high probability. additionally, we refine the previously established runtime analysis of the r-cGA on r-OneMax, improving the previous bound to $O(nr \log n \log r)$, which improves the state of the art by an asymptotic factor of $\log n$ and is tight for the binary case. moreover, we include for the first time the case of frequency borders.
|
arxiv:2503.21439
|
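The r-cGA analyzed above maintains, for every position, a frequency vector over the $r$ values and nudges it toward the better of two sampled offspring. The sketch below is a simplified illustration: the fitness is a plain "count matches against a target" r-valued OneMax stand-in rather than the paper's exact r-OneMax/G-OneMax definitions, and the border handling is a naive clamp-and-renormalize.

```python
import random

def rcga(n=20, r=4, K=50, max_iters=20000):
    target = [random.randrange(r) for _ in range(n)]
    fitness = lambda x: sum(xi == ti for xi, ti in zip(x, target))
    freq = [[1.0 / r] * r for _ in range(n)]   # one frequency vector per position
    sample = lambda: [random.choices(range(r), weights=f)[0] for f in freq]
    lo = 1.0 / (n * r)                         # frequency border (assumed form)
    for it in range(max_iters):
        x, y = sample(), sample()
        if fitness(x) < fitness(y):
            x, y = y, x                        # x is now the winner
        for i in range(n):
            if x[i] != y[i]:
                freq[i][x[i]] += 1.0 / K       # shift mass toward the winner
                freq[i][y[i]] -= 1.0 / K
                freq[i] = [max(p, lo) for p in freq[i]]  # respect the borders
                s = sum(freq[i])
                freq[i] = [p / s for p in freq[i]]
        if fitness(x) == n:
            return it                          # optimum sampled
    return max_iters

print("iterations until optimum:", rcga())
```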
we apply the Hamiltonian formulation of teleparallel theories of gravity in 2+1 dimensions to a circularly symmetric geometry. we find a family of one-parameter black hole solutions. the BTZ solution fixes the unique free parameter of the theory. the resulting field equations coincide with the teleparallel equivalent of Einstein's three-dimensional equations. we calculate the gravitational energy of the black holes by means of the simple expression that arises in the Hamiltonian formulation and conclude that the resulting value is identical to that calculated by means of the Brown-York method.
|
arxiv:gr-qc/0301079
|
we put forward reverse-engineering protocols to shape in time the components of the magnetic field in order to manipulate a single spin, two independent spins with different gyromagnetic factors, and two interacting spins, in short amounts of time. we also use these techniques to design protocols that are robust against imprecise knowledge of the gyromagnetic factors for the one-spin problem, or to generate entangled states for two or more spins coupled by dipole-dipole interactions.
|
arxiv:1705.05164
|
(abridged) we describe the cascade of plasma waves or turbulence injected, presumably by reconnection, at scales comparable to the size of a solar flare loop down to scales comparable to particle gyroradii, and evaluate their damping by various mechanisms. we show that classical viscous damping is unimportant for magnetically dominated or low-beta plasmas and that the primary damping mechanism is collisionless damping by the background particles. we show that the damping rate is proportional to the total random momentum density of the particles. for solar flare conditions this means that in most flares, except the very large ones, the damping is dominated by thermal background electrons. for large flares one requires acceleration of essentially all background electrons into a nonthermal distribution, so that the accelerated electrons can be important in the damping of the waves. in general, damping by thermal or nonthermal protons is negligible compared to that by electrons, except for quasi-perpendicularly propagating waves or for rare proton-dominated flares with strong nuclear gamma-ray line emission. using the damping rate we determine the critical scale below which the damping becomes important and the spectrum of the turbulence steepens. this critical scale, however, depends strongly on the angle of propagation with respect to the magnetic field direction. the waves can cascade down to very small scales, such as the gyroradii of the particles, at small angles (quasi-parallel propagation) and possibly near 90 degrees (quasi-perpendicular propagation), giving rise to a highly anisotropic spectral distribution.
|
arxiv:astro-ph/0508567
|
the monocrystalline silicon neutron beam window, a thin circular plate, is one of the key components of neutron spectrometers. monocrystalline silicon is a brittle material, and its strength is not constant but follows a Weibull distribution. the window is therefore designed not simply by average strength but according to the survival rate. bending is the main deformation mode of the window, so the dangerous parts of the neutron beam window are stress-linearized into a combination of membrane stress and bending stress. using the Weibull distribution of the bending strength of monocrystalline silicon, based on a large body of experimental data, the optimized neutron beam window is 1.5 mm thick. its survival rate is 0.9994 and its transmittance is 0.98447; it meets both the physical requirements and the mechanical strength requirements.
|
arxiv:1411.5991
|
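The survival-rate criterion above rests on the two-parameter Weibull model, under which a uniformly stressed brittle part survives with probability $\exp[-(\sigma/\sigma_0)^m]$. A toy evaluation follows; the modulus and characteristic strength are placeholders, not the paper's measured values.

```python
import math

def weibull_survival(sigma, sigma0, m):
    """P(survive) = exp(-(sigma/sigma0)^m) for a uniformly stressed part."""
    return math.exp(-(sigma / sigma0) ** m)

m = 5.0          # Weibull modulus (placeholder)
sigma0 = 300.0   # characteristic strength, MPa (placeholder)
for sigma in (50.0, 100.0, 150.0):
    print(f"{sigma:6.1f} MPa -> survival {weibull_survival(sigma, sigma0, m):.6f}")
```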
generative models have seen an explosion in popularity with the release of huge generative diffusion models like Midjourney and Stable Diffusion to the public. because of this new ease of access, questions surrounding the automated collection of data and issues regarding content ownership have started to build. in this paper we present new work which aims to provide ways of protecting content when shared with the public. we show that a generative diffusion model trained on data that has been imperceptibly watermarked will generate new images with these watermarks present. we further show that if a given watermark is correlated with a certain feature of the training data, the generated images will also exhibit this correlation. using statistical tests, we show that we are able to determine whether a model has been trained on marked data, and which data was marked. as a result, our system offers a solution for protecting intellectual property when sharing content online.
|
arxiv:2308.11123
|
in the gauge theory of gravity based on the Poincaré group (the semidirect product of the Lorentz group and the spacetime translations), the mass (energy-momentum) and the spin are treated on an equal footing as the sources of the gravitational field. the corresponding spacetime manifold carries the Riemann-Cartan geometric structure with nontrivial curvature and torsion. we describe some aspects of the classical Poincaré gauge theory of gravity. namely, the Lagrange-Noether formalism is presented in full generality, and the family of quadratic (in the curvature and the torsion) models is analyzed in detail. we discuss the special case of spinless matter and demonstrate that Einstein's theory arises as a degenerate model in the class of quadratic Poincaré theories. another central point is an overview of the so-called double duality method for constructing exact solutions of the classical field equations.
|
arxiv:gr-qc/0601090
|
type 2 quasars are an important constituent of active galaxies, possibly representing the evolutionary precursors of traditionally studied type 1 quasars. we characterize the black hole mass ($M_{\rm BH}$) and Eddington ratio ($L_{\rm bol}/L_{\rm Edd}$) for 669 type 2 quasars selected from the Sloan Digital Sky Survey, using black hole masses estimated from the $M_{\rm BH}-\sigma_{\ast}$ relation and bolometric corrections scaled from the extinction-corrected $[\rm O\,III]\,\lambda5007$ luminosity. when stellar velocity dispersions cannot be measured directly from the spectra, we estimate them from the core velocity dispersions of the narrow emission lines $[\rm O\,II]\,\lambda\lambda3726,3729$, $[\rm S\,II]\,\lambda\lambda6716,6731$, and $[\rm O\,III]\,\lambda5007$, which are shown to trace the gravitational potential of the stars. energy input from the active nucleus still imparts significant perturbations to the gas kinematics, especially to high-velocity, blueshifted wings. nonvirial motions in the gas become most noticeable in systems with high Eddington ratios. the black hole masses of our sample of type 2 quasars range from $M_{\rm BH} \approx 10^{6.5}$ to $10^{10.4}\,M_\odot$ (median $10^{8.2}\,M_\odot$). type 2 quasars have characteristically large Eddington ratios ($L_{\rm bol}/L_{\rm Edd} \approx 10^{-2.9}-10^{1.8}$; median $10^{-0.7}$), slightly higher than in type 1 quasars of similar redshift; the luminosities of $\sim$20% of the sample formally exceed the Eddington limit. the high Eddington ratios may be consistent with the notion that obscured quasars evolve into unobscured quasars.
|
arxiv:1804.09852
|
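The two scaling steps used above are short to write down. The sketch below assumes one common literature calibration of the $M_{\rm BH}-\sigma_\ast$ relation and the standard Eddington luminosity; the paper's adopted coefficients may differ.

```python
import math

def mbh_from_sigma(sigma_kms, alpha=8.13, beta=4.02):
    """log10(M_BH/Msun) = alpha + beta * log10(sigma / 200 km/s)
    (a Tremaine-style calibration, assumed here)."""
    return 10 ** (alpha + beta * math.log10(sigma_kms / 200.0))

def eddington_ratio(L_bol, M_bh):
    L_edd = 1.26e38 * M_bh      # erg/s per solar mass (standard value)
    return L_bol / L_edd

M = mbh_from_sigma(180.0)       # sigma from a narrow-line core width
print(f"M_BH ~ 10^{math.log10(M):.1f} Msun, "
      f"L/L_Edd = {eddington_ratio(1e45, M):.3f}")
```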
this paper investigates the Castelnuovo-Mumford regularity of the generic hyperplane section of projective curves in the positive characteristic case, and yields an application to a sharp bound on the regularity of nondegenerate projective varieties.
|
arxiv:math/9809042
|
given a locally convex vector space with a topology induced by Hilbert seminorms and a continuous bilinear form on it, we construct a topology on its symmetric algebra such that the usual star product of exponential type becomes continuous. many properties of the resulting locally convex algebra are explained. we compare this approach to various other discussions of convergent star products in finite and infinite dimensions. we pay special attention to the case of a Hilbert space and to nuclear spaces.
|
arxiv:1703.05577
|
in this paper, we consider the information content of the maximum ranked set sampling procedure with unequal samples (MRSSU) in terms of Tsallis entropy, a nonadditive generalization of Shannon entropy. we obtain several results on Tsallis entropy, including bounds, monotonicity properties, stochastic orders, and sharp bounds under some assumptions. we also compare the uncertainty and information content of MRSSU with its counterpart in simple random sampling (SRS) data. finally, we develop some characterization results in terms of the cumulative Tsallis entropy and residual Tsallis entropy of MRSSU and SRS data.
|
arxiv:2010.13139
|
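For a density $f$, the Tsallis entropy of order $q$ is $S_q = (1 - \int f^q)/(q - 1)$. The sketch below compares a single uniform draw (SRS) with the maximum of $m$ uniform draws, the order statistic underlying MRSSU; the distribution, set size, and order $q$ are illustrative.

```python
import numpy as np

def tsallis_entropy(pdf, grid, q):
    """S_q = (1 - integral of f^q) / (q - 1), via the trapezoid rule."""
    return (1.0 - np.trapz(pdf(grid) ** q, grid)) / (q - 1.0)

x = np.linspace(1e-6, 1.0, 10000)
f_srs = lambda t: np.ones_like(t)        # U(0,1) density
m = 5                                    # set size (illustrative)
f_max = lambda t: m * t ** (m - 1)       # density of the max of m uniforms

q = 2.0
print("SRS draw   :", tsallis_entropy(f_srs, x, q))
print("max of m=5 :", tsallis_entropy(f_max, x, q))
```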
pre-training has been investigated to improve the efficiency and performance of training neural operators in data-scarce settings. however, it is largely in its infancy due to the inherent complexity and diversity of partial differential equation (PDE) data, such as long trajectories, multiple scales, and varying dimensions. in this paper, we present a new auto-regressive denoising pre-training strategy, which allows for more stable and efficient pre-training on PDE data and generalizes to various downstream tasks. moreover, by designing a flexible and scalable model architecture based on Fourier attention, we can easily scale up the model for large-scale pre-training. we train our PDE foundation model with up to 0.5B parameters on 10+ PDE datasets with more than 100k trajectories. extensive experiments show that we achieve SOTA on these benchmarks and validate the strong generalizability of our model, significantly enhancing performance on diverse downstream PDE tasks such as 3D data. code is available at https://github.com/thu-ml/DPOT.
|
arxiv:2403.03542
|
Matsumoto conjectured that for any Finsler manifold $(M, F)$ for which the restriction of the fundamental tensor to the indicatrix of $F$ is positive definite, the absolute length $F(X)$ of any tangent vector $X \in T_xM$ is the global minimum of the relative length $|X|_Y$ as $Y$ varies along the indicatrix $I_x \subset T_xM$ of $F$. in this note, we disprove this conjecture by presenting a counterexample.
|
arxiv:1801.07821
|
we discuss possible implications of a large interaction cross section between cosmic rays and dark matter particles due to new physics at the TeV scale. in particular, in models with extra dimensions and a low fundamental scale of gravity, the cross section grows very fast at trans-Planckian energies. we argue that the knee observed in the cosmic ray flux could be caused by such interactions. we show that this hypothesis implies a well-defined flux of secondary gamma rays that seems consistent with Milagro observations.
|
arxiv:0904.0921
|
we consider dynamic subgraph connectivity problems for planar graphs. in this model there is a fixed underlying planar graph, where each edge and vertex is either "off" (failed) or "on" (recovered). we wish to answer connectivity queries with respect to the "on" subgraph. the model has two natural variants, one in which there are $d$ edge/vertex failures that precede all connectivity queries, and one in which failures/recoveries and queries are intermixed. we present a $d$-failure connectivity oracle for planar graphs that processes any $d$ edge/vertex failures in $\mathrm{sort}(d,n)$ time so that connectivity queries can be answered in $\mathrm{pred}(d,n)$ time. (here $\mathrm{sort}$ and $\mathrm{pred}$ are the times for integer sorting and integer predecessor search over a subset of $[n]$ of size $d$.) our algorithm has two discrete parts. the first is an algorithm tailored to triconnected planar graphs. it makes use of Barnette's theorem, which states that every triconnected planar graph contains a degree-3 spanning tree. the second part is a generic reduction from general (planar) graphs to triconnected (planar) graphs. our algorithm is, moreover, provably optimal. an implication of Patrascu and Thorup's lower bound on predecessor search is that no $d$-failure connectivity oracle (even on trees) can beat $\mathrm{pred}(d,n)$ query time. we extend our algorithms to the subgraph connectivity model where edge/vertex failures (but no recoveries) are intermixed with connectivity queries. in triconnected planar graphs each failure and query is handled in $O(\log n)$ amortized time, whereas in general planar graphs both bounds become $O(\log^2 n)$.
|
arxiv:1204.4159
|
we study spontaneous scalarization of electrically charged extremal black holes in $d \geq 4$ spacetime dimensions. such a phenomenon is caused by symmetry breaking due to quartic interactions of the scalar-Higgs potential and a Stueckelberg interaction with the electromagnetic and gravitational fields, characterized by the couplings $a$ and $b$, respectively. we use the entropy representation of the states in the vicinity of the horizon, apply the inverse attractor mechanism for the scalar field, and analyze analytically the thermodynamic stability of the system using the laws of thermodynamics. as a result, we find that the scalar field condenses on the horizon only in spacetimes that are asymptotically non-flat, $\Lambda \neq 0$ (dS or AdS), and whose extremal black holes have non-planar horizons $k = \pm 1$, provided that the mass $m$ of the scalar field belongs to a mass interval (area code) that differs for each set of boundary conditions specified by $(\Lambda, k)$. the process of scalarization describes a second-order phase transition of the black hole, from the extremal Reissner-Nordström (A)dS one to the corresponding extremal hairy one. furthermore, for the transition to happen, the interaction has to be strong enough, and all physical quantities on the horizon depend at most on the effective Higgs-Stueckelberg interaction $am^2 - 2b$. most of our results are general, valid for any parameter and any spacetime dimension.
|
arxiv:2203.14388
|
time integration of ODEs or time-dependent PDEs with the required resolution of the fastest time scales of the system can be very costly if the system exhibits multiple time scales of different magnitudes. if the different time scales are localised to different components, corresponding to localisation in space for a PDE, efficient time integration thus requires that we use different time steps for different components. we present an overview of the multi-adaptive Galerkin methods mcG(q) and mdG(q), recently introduced in a series of papers by the author. in these methods, the time step sequence is selected individually and adaptively for each component, based on an a posteriori error estimate of the global error. the multi-adaptive methods require the solution of large systems of nonlinear algebraic equations, which are solved using explicit-type iterative solvers (fixed-point iteration). if the system is stiff, these iterations may fail to converge, corresponding to the well-known fact that standard explicit methods are inefficient for stiff systems. to resolve this problem, we present an adaptive strategy for explicit time integration of stiff ODEs, in which the explicit method is adaptively stabilised by a small number of small, stabilising time steps.
|
arxiv:1205.2805
|
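The idea of individual, component-wise time steps can be conveyed with a crude multirate example: a fast and a slow component advanced with different forward-Euler step sizes, the slow value held frozen within each macro step. This is only a cartoon of component-wise stepping; it is not the mcG(q)/mdG(q) methods, their a posteriori error control, or the stabilisation strategy.

```python
import math

# u1' = -100 (u1 - cos t)   (fast component)
# u2' = -u2                 (slow component)
dt_slow, substeps = 0.1, 100            # fast step = dt_slow / substeps
u1, u2, t = 1.0, 1.0, 0.0
for _ in range(50):                     # macro steps, sized for the slow part
    u2 += dt_slow * (-u2)               # one slow Euler step
    dt_fast = dt_slow / substeps
    for _ in range(substeps):           # many fast Euler steps per macro step
        u1 += dt_fast * (-100.0 * (u1 - math.cos(t)))
        t += dt_fast
print(u1, u2)
```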
to increase the spectral efficiency of coherent communication systems, lasers with ever-narrower linewidths are required, as they enable higher-order modulation formats with lower bit-error rates. in particular, semiconductor lasers are a key component due to their compactness, low power consumption, and potential for mass production. in field-testing scenarios their output is coupled to a fiber, making them susceptible to external optical feedback (EOF). this has a detrimental effect on stability, and it is traditionally countered by employing, for example, optical isolators and angled output waveguides. in this work, EOF is explored in a novel way with the aim of reducing and stabilizing the laser linewidth. EOF has traditionally been studied in the case where it is applied to only one side of the laser cavity. in contrast, this work generalizes to the case of feedback on both sides. it is implemented using photonic components available via generic foundry platforms, thus creating a path towards devices with a high technology-readiness level. numerical results show an improvement in performance of the double-feedback case with respect to the single-feedback case. in particular, by appropriately selecting the phase of the feedback from both sides, a broad stability regime is discovered. this work paves the way towards low-cost, integrated, and stable narrow-linewidth lasers.
|
arxiv:2112.13895
|
we learn about the world from a diverse range of sensory information. automated systems lack this ability, as investigation has centred on processing information presented in a single form. adapting architectures to learn from multiple modalities creates the potential to learn rich representations of the world, but current multimodal systems only deliver marginal improvements over unimodal approaches. neural networks learn sampling noise during training, with the result that performance on unseen data is degraded. this research introduces a second objective over the multimodal fusion process, learned with variational inference. regularisation methods are implemented in the inner training loop to control variance, and the modular structure stabilises performance as additional neurons are added to layers. this framework is evaluated on a multilabel classification task with textual and visual inputs to demonstrate the potential for multiple objectives and probabilistic methods to lower variance and improve generalisation.
|
arxiv:2008.11450
|
the subject of this paper is a Jacobian, introduced by F. Lazzeri (unpublished), associated to every compact oriented Riemannian manifold whose dimension is twice an odd number. we start the investigation of Torelli-type problems and Schottky-type problems for Lazzeri's Jacobian; in particular we examine the case of tori with flat metrics. besides, we study Lazzeri's Jacobian for Kähler manifolds and its relationship with other Jacobians. finally we examine Lazzeri's Jacobian of a bundle.
|
arxiv:math/9812110
|
a common network inference problem, arising from real-world data constraints, is how to infer a dynamic network from its time-aggregated adjacency matrix and time-varying marginals (i.e., row and column sums). prior approaches to this problem have repurposed the classic iterative proportional fitting (IPF) procedure, also known as Sinkhorn's algorithm, with promising empirical results. however, the statistical foundation for using IPF has not been well understood: under what settings does IPF provide principled estimation of a dynamic network from its marginals, and how well does it estimate the network? in this work, we establish such a setting by identifying a generative network model whose maximum likelihood estimates are recovered by IPF. our model both reveals implicit assumptions on the use of IPF in such settings and enables new analyses, such as structure-dependent error bounds on IPF's parameter estimates. when IPF fails to converge on sparse network data, we introduce a principled algorithm that guarantees IPF converges under minimal changes to the network structure. finally, we conduct experiments with synthetic and real-world data, which demonstrate the practical value of our theoretical and algorithmic contributions.
|
arxiv:2402.18697
|
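For reference, the IPF/Sinkhorn iteration discussed above alternately rescales rows and columns of a nonnegative seed matrix until the target marginals are (approximately) matched. A compact version, with made-up seed and marginals:

```python
import numpy as np

def ipf(X, row_targets, col_targets, iters=500, eps=1e-12):
    """Iterative proportional fitting: alternate row/column rescaling."""
    X = X.astype(float).copy()
    for _ in range(iters):
        X *= (row_targets / (X.sum(axis=1) + eps))[:, None]
        X *= (col_targets / (X.sum(axis=0) + eps))[None, :]
    return X

seed = np.random.rand(3, 3)          # e.g., a time-aggregated adjacency matrix
rows = np.array([2.0, 1.0, 1.0])     # time-varying row sums
cols = np.array([1.5, 1.5, 1.0])     # time-varying column sums
F = ipf(seed, rows, cols)
print(F.sum(axis=1), F.sum(axis=0))  # ~rows and ~cols when IPF converges
```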
capitalizing on the directed nature of the atomic fluxes in molecular beam epitaxy, we propose and demonstrate the sequential directional deposition of lateral (In,Ga)N shells on GaN nanowires. in this approach, a sub-monolayer thickness of each constituent atomic species, i.e. Ga, In, and N, is deposited subsequently from the same direction by rotating the sample and operating the shutters accordingly. using multiple iterations of this process, we achieve the growth of homogeneous shells on a single side facet of the nanowires. for higher In content, and thus higher lattice mismatch, we observe a strain-induced bending of the nanowire heterostructures. the incorporation of In and the resulting emission spectra are systematically investigated as a function of both the growth temperature and the In/Ga flux ratio.
|
arxiv:2307.11235
|
vision-language models have made significant strides recently, demonstrating superior performance across a range of tasks, e.g. optical character recognition and complex diagram analysis. building on this trend, we introduce a new vision-language model, POINTS1.5, designed to excel in various real-world applications. POINTS1.5 is an enhancement of POINTS1.0 and incorporates several key innovations: i) we replace the original CLIP vision encoder, which had a fixed image resolution, with a NaViT-style vision encoder that supports native dynamic high resolution. this allows POINTS1.5 to process images of any resolution without needing to split them into tiles. ii) we add bilingual support to POINTS1.5, significantly enhancing its capability in Chinese. due to the scarcity of open-source Chinese datasets for vision-language models, we collect numerous images from the Internet and annotate them using a combination of manual and automatic methods. iii) we propose a set of rigorous filtering methods for visual instruction tuning datasets. we comprehensively evaluate all these filtering methods and choose the most effective ones to obtain the final visual instruction tuning set. thanks to these innovations, POINTS1.5 significantly outperforms POINTS1.0 and demonstrates strong performance across a range of real-world applications. notably, POINTS1.5-7B is trained on fewer than 4 billion tokens and ranks first on the OpenCompass leaderboard among models with fewer than 10 billion parameters.
|
arxiv:2412.08443
|
service providers employ different transport technologies like PDH, SDH/SONET, OTN, DWDM, Ethernet, and MPLS-TP to support different types of traffic and service requirements. dynamic service provisioning requires the use of online algorithms that automatically compute the path to be taken to satisfy a given service request. a typical transport network element supports adaptation of multiple technologies and multiple layers of those technologies to carry the input traffic. further, transport networks are deployed such that they follow different topologies, like linear, ring, mesh, protected linear, and dual homing, in different layers. path computation for service requests considering the above factors is the focus of this work, in which we propose a new mechanism for building an auxiliary graph that models each layer as a node within each network element, creates adaptation edges between them, and also supports the creation of special edges to represent different types of topologies. logical links that represent multiplexing or adaptation are also created in the auxiliary graph. an initial weight assignment scheme for non-adaptation edges that considers both link distance and link capacity is proposed, along with three dynamic weight assignment functions that consider the current utilization of the links. path computation algorithms considering adaptation and topologies are proposed over the auxiliary graph structure. the performance of the algorithms is evaluated, and it is found that, for one of the dynamic weight functions and a certain combination of values proposed as part of the weight assignment, the weighted number of requests accepted is higher and the weighted capacity provisioned is lower.
|
arxiv:1901.01531
|
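The auxiliary-graph construction above can be miniaturized: expand each network element into one node per technology layer, join layers inside an element with adaptation edges, join like layers across elements with link edges, and run a shortest-path algorithm on the result. The two-element, two-layer instance and its weights below are invented for illustration.

```python
import heapq

def dijkstra(adj, src, dst):
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")

adj = {}
def edge(u, v, w):                       # undirected edge helper
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))

# Elements A and B, each with a client (ETH) and a server (OTN) layer.
edge(("A", "ETH"), ("A", "OTN"), 0.1)    # adaptation edge inside A
edge(("B", "ETH"), ("B", "OTN"), 0.1)    # adaptation edge inside B
edge(("A", "OTN"), ("B", "OTN"), 1.0)    # OTN-layer link between A and B

print(dijkstra(adj, ("A", "ETH"), ("B", "ETH")))   # 1.2
```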
Computational Science and Engineering Magazine, 4(2):39–43 (1997); G. Hager and G. Wellein, Introduction to High Performance Computing for Scientists and Engineers, Chapman and Hall (2010); A. K. Hartmann, Practical Guide to Computer Simulations, World Scientific (2009); the journal Computational Methods in Science and Technology (open access), Polish Academy of Sciences; the journal Computational Science and Discovery, Institute of Physics; R. H. Landau, C. C. Bordeianu, and M. Jose Paez, A Survey of Computational Physics: Introductory Computational Science, Princeton University Press (2008). == External links == Journal of Computational Science; The Journal of Open Research Software; the National Center for Computational Science at Oak Ridge National Laboratory
|
https://en.wikipedia.org/wiki/Computational_science
|
since its first demonstration in graded-index multimode fibers, spatial beam self-cleaning has attracted growing research interest. it allows for the propagation of beams with a bell-shaped spatial profile, thus enabling the use of multimode fibers for several applications, from biomedical imaging to high-power beam delivery. so far, beam self-cleaning has been studied experimentally under several different conditions, whereas theoretically it has been described as the irreversible energy transfer from high-order modes towards the fundamental mode, in analogy with a beam condensation mechanism. here, we provide a definitive theoretical description of beam self-cleaning by means of a semi-classical statistical mechanics model of wave thermalization. this approach is confirmed by an extensive experimental characterization based on a holographic mode decomposition technique, employing laser pulses with temporal durations ranging from femtoseconds up to nanoseconds. an excellent agreement between theory and experiment is found, which demonstrates that beam self-cleaning can be fully described in terms of the basic conservation laws of statistical mechanics.
|
arxiv:2111.08063
|
in two influential contributions, Rosenbaum (2005, 2020) advocated for using the distances between component-wise ranks, instead of the original data values, to measure covariate similarity when constructing matching estimators of average treatment effects. while the intuitive benefits of using covariate ranks for matching estimation are apparent, there is no theoretical understanding of such procedures in the literature. we fill this gap by demonstrating that Rosenbaum's rank-based matching estimator, when coupled with a regression adjustment, enjoys the properties of double robustness and semiparametric efficiency without the need to enforce restrictive covariate moment assumptions. our theoretical findings further emphasize the statistical virtues of employing ranks for estimation and inference, more broadly aligning with the insights put forth by Peter Bickel in his 2004 Rietz lecture (Bickel, 2004).
|
arxiv:2312.07683
|
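The rank-based matching idea can be sketched quickly: replace covariates by their component-wise ranks, match each treated unit to its nearest control in rank distance, and average the outcome differences. The synthetic data, 1-nearest-neighbor matching, and the omission of the regression adjustment are all simplifications relative to the estimator studied above.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
T = rng.binomial(1, 0.4, size=n)                    # treatment indicator
Y = X @ np.array([1.0, -0.5, 0.2]) + 2.0 * T + rng.normal(size=n)

R = np.column_stack([rankdata(X[:, j]) for j in range(p)])  # columnwise ranks

treated = np.where(T == 1)[0]
control = np.where(T == 0)[0]
matches = [control[np.argmin(np.abs(R[control] - R[i]).sum(axis=1))]
           for i in treated]                        # 1-NN in rank (L1) distance
print("ATT estimate:", round(float((Y[treated] - Y[matches]).mean()), 3))
```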
in solid mechanics and fluid mechanics being important components of the engineering curriculum. continuum mechanics is also an important branch of mathematics in its own right. it has served as the inspiration for a vast range of difficult research questions for mathematicians involved in the analysis of partial differential equations, differential geometry and the calculus of variations. perhaps the most well-known mathematical problem posed by a continuum mechanical system is the question of Navier-Stokes existence and smoothness. prominent career mathematicians, rather than engineers, who have contributed to the mathematics of continuum mechanics are Clifford Truesdell, Walter Noll, Andrey Kolmogorov and George Batchelor. an essential discipline for many fields in engineering is that of control engineering. the associated mathematical theory of this specialism is control theory, a branch of applied mathematics that builds off the mathematics of dynamical systems. control theory has played a significant enabling role in modern technology, serving a foundational role in electrical, mechanical and aerospace engineering. like continuum mechanics, control theory has also become a field of mathematical research in its own right, with mathematicians such as Aleksandr Lyapunov, Norbert Wiener, Lev Pontryagin and Fields Medallist Pierre-Louis Lions contributing to its foundations. === Scientific computing === scientific computing includes applied mathematics (especially numerical analysis), computing science (especially high-performance computing), and mathematical modelling in a scientific discipline. === Computer science === computer science relies on logic, algebra, discrete mathematics such as graph theory, and combinatorics. === Operations research and management science === operations research and management science are often taught in faculties of engineering, business, and public policy. === Statistics === applied mathematics has substantial overlap with the discipline of statistics. statistical theorists study and improve statistical procedures with mathematics, and statistical research often raises mathematical questions. statistical theory relies on probability and decision theory, and makes extensive use of scientific computing, analysis, and optimization; for the design of experiments, statisticians use algebra and combinatorial design. applied mathematicians and statisticians often work in a department of mathematical sciences (particularly at colleges and small universities). === Actuarial science === actuarial science applies probability, statistics, and economic theory to assess risk in insurance, finance and other industries and professions. === Mathematical economics === mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. the applied methods usually refer to nontrivial mathematical techniques or approaches. mathematical economics is based on statistics
|
https://en.wikipedia.org/wiki/Applied_mathematics
|
direct numerical simulations two-way coupled with inertial particles are used to investigate the particle distribution and two-way coupling effect of low-inertia ($St_{LSM} = 0.0625$, $St_{VLSM} = 0.009$) and high-inertia ($St_{LSM} = 0.475$, $St_{VLSM} = 0.069$) particles associated with the large-scale motions (LSMs) and very-large-scale motions (VLSMs) in an open channel flow at a Reynolds number of $Re_\tau = 550$. one method of filtering the VLSMs from the flow is via artificial domain truncation, which alters the mean particle concentration profile and particle clustering due to the removal of VLSMs from a large-domain simulation. in order to exclude possible correlation of the turbulence introduced by a small domain size with periodic boundary conditions, low- and high-pass filtering is performed during the simulation to isolate the particle interaction with different spatial scales. the results show that particle accumulation and turbophoresis are under-predicted without VLSMs, whereas the particle clustering and two-way coupling effects are mainly determined by particle coupling with LSMs. in the inner layer, the elongated streamwise anisotropic particle clustering can be reproduced by particles coupling solely with LSMs for low Stokes number ($St_{LSM} = 0.0625$) particles. however, we do not observe similar particle clustering behavior in the outer layer as seen in the full simulation by coupling particles with either LSMs or VLSMs for high Stokes number ($St_{VLSM} = 0.069$) particles. this indicates that the organized particle structures are formed by the joint action of LSMs and VLSMs, especially for high Stokes number particles in the outer layer.
|
arxiv:1906.01779
|
the ongoing work to upgrade ALS to ALS-U demands strict RF requirements, such as a low-jitter and low-spur frequency reference, to meet its accelerator and science goals. a low phase noise dual-frequency master oscillator (MO), in which the two frequencies are related by a fractional ratio of 608/609, together with flexible divide-by-four frequency outputs, has been consolidated into a single chassis. an optical fiber clock distribution system has been selected over the old coax system used in ALS to distribute these signals to various clients across the facility, providing high electrical isolation between outputs and therefore lower phase errors. a Xilinx FPGA ties the MO chassis together by providing an RS-485 interface to monitor and control the system. the new system aims to deliver phase-continuous frequencies with a phase noise (integrated RMS jitter from 1 Hz to 1 MHz) of less than 200 femtoseconds per output. this paper discusses the design, implementation, performance, and installation of the new MO generation and distribution system.
|
arxiv:2310.15509
|
a simple argument indicates that covariant loop gravity (spinfoam theory) predicts a maximal acceleration, and hence forbids the development of curvature singularities. this supports the results obtained for cosmology and black holes using canonical methods.
|
arxiv:1307.3228
|
we study the Bethe ansatz equations for the quantum KdV model, which are also known to be solved by the spectral determinants of a specific family of anharmonic oscillators called monster potentials (the ODE/IM correspondence). these Bethe ansatz equations depend on two parameters, identified with the momentum and the degree at infinity of the anharmonic oscillators. we provide a complete classification of the solutions with only real and positive roots, when the degree is greater than 2, in terms of admissible sequences of holes. in particular, we prove that admissible sequences of holes are naturally parameterised by integer partitions, and we prove that they are in one-to-one correspondence with solutions of the Bethe ansatz equations if the momentum is large enough. consequently, we deduce that the monster potentials are complete, in the sense that every solution of the Bethe ansatz equations coincides with the spectrum of a unique monster potential. this essentially (i.e. up to gaps in the previous literature) proves the ODE/IM correspondence for the quantum KdV model and monster potentials, as conjectured by Dorey-Tateo and Bazhanov-Lukyanov-Zamolodchikov, when the degree is greater than 2. our approach is based on the transformation of the Bethe ansatz equations into a free-boundary nonlinear integral equation, akin to the equations known in the physics literature as DDV, KBP, or NLIE, of which we develop the mathematical theory from the beginning.
|
arxiv:2112.14625
|
the aim of this study is to establish many new inequalities for the operator $A$-norm and $A$-numerical radius of sums of bounded linear operators in Hilbert spaces. in particular, two refinements are made to the generalized triangle inequality for the operator norm. additionally, we examine a number of intriguing applications involving two bounded linear operators in the Cartesian decomposition of an operator. these inequalities improve and generalize several earlier results in the literature. some applications of our results are given, and we also present some examples to support them.
|
arxiv:2502.17696
|
such as glass, with minimal plastic deformation ranges, to break.
|
https://en.wikipedia.org/wiki/Deformation_(engineering)
|
certain well-known spacetimes of general relativity (GR) are generated from the collision of suitable null sources coupled with gravitational waves. this is a classical process underlying the full nonlinearity of GR that may be considered an alternative to quantum creativity at a large scale. the Schwarzschild, de Sitter, anti-de Sitter, and $\gamma$-metrics are given as examples.
|
arxiv:2304.14771
|
the key step of syndrome-based decoding of Reed-Solomon codes up to half the minimum distance is to solve the so-called key equation. list decoding algorithms, capable of decoding beyond half the minimum distance, are based on interpolation and factorization of multivariate polynomials. this article provides a link between syndrome-based decoding approaches based on key equations and the interpolation-based list decoding algorithms of Guruswami and Sudan for Reed-Solomon codes. the original interpolation conditions of Guruswami and Sudan for Reed-Solomon codes are reformulated in terms of a set of key equations. these equations provide a structured homogeneous linear system of equations of block-Hankel form, which can be solved by an adaptation of the fundamental iterative algorithm. for an $(n,k)$ Reed-Solomon code, a multiplicity $s$, and a list size $\ell$, our algorithm has time complexity $O(\ell s^4 n^2)$.
|
arxiv:1110.3898
|
network slicing in 5G/6G non-terrestrial networks (NTN) is confronted with mobility and traffic variability. an artificial intelligence (AI) based digital twin (DT) architecture with deep reinforcement learning (DRL) using the deep deterministic policy gradient (DDPG) is proposed for dynamic optimization of resource allocation. the DT virtualizes network states to enable predictive analysis, while DRL adjusts bandwidth for the eMBB slice. simulations show a 25% latency reduction compared to static methods, with enhanced resource utilization. this scalable solution supports 5G/6G NTN applications like disaster recovery and urban blockage.
|
arxiv:2505.08328
|
we present renormalization constants of overlap quark bilinear operators on 2+1-flavor domain wall fermion configurations. both overlap and domain wall fermions have chiral symmetry on the lattice. the scale-independent renormalization constant for the local axial vector current is computed using a Ward identity. the renormalization constants for the scalar, pseudoscalar, and vector currents are calculated in the RI-MOM scheme. results in the $\overline{\rm MS}$ scheme are obtained by using perturbative conversion ratios. the analysis uses in total six ensembles with lattice sizes $24^3 \times 64$ and $32^3 \times 64$.
|
arxiv:1312.0375
|
in this paper, we focus on knapsack cones, a specific type of simplicial cones that arise naturally in the context of the knapsack problem $x_1 a_1 + \cdots + x_n a_n = a_0$. we present a novel combinatorial decomposition for these cones, named \texttt{DecDenu}, which aligns with Barvinok's unimodular cone decomposition within the broader framework of algebraic combinatorics. computer experiments support our conjecture that the \texttt{DecDenu} algorithm is polynomial when the number of variables $n$ is fixed. if true, \texttt{DecDenu} will provide the first alternative polynomial algorithm for Barvinok's unimodular cone decomposition, at least for denumerant cones. the \texttt{CTEuclid} algorithm is designed for MacMahon's partition analysis and is notable for being the first algorithm to solve the counting problem for magic squares of order 6. we have enhanced the \texttt{CTEuclid} algorithm by incorporating \texttt{DecDenu}, resulting in the \texttt{LLLCTEuclid} algorithm. this enhanced algorithm makes significant use of the LLL algorithm and stands out as an effective elimination-based approach.
|
arxiv:2406.13974
|
sampling from high-dimensional distributions and volume approximation of convex bodies are fundamental operations that appear in optimization, finance, engineering, artificial intelligence, and machine learning. in this paper we present volesti, an R package that provides efficient, scalable algorithms for volume estimation and for uniform and Gaussian sampling from convex polytopes. volesti scales to hundreds of dimensions, handles three different types of polyhedra efficiently, and provides sampling routines previously unavailable in R. we demonstrate the power of volesti by solving several challenging problems using the R language.
|
arxiv:2007.01578
|
we derive crystal braneworld solutions, comprising intersecting families of parallel $(n+2)$-branes in a $(4+n)$-dimensional AdS space. each family consists of alternating positive- and negative-tension branes. in the simplest case of exactly orthogonal families, there arise different crystals with unbroken 4D Poincaré invariance on the intersections, where our world can reside. a crystal can be finite along some direction, either because that direction is compact or because it ends on a segment of AdS bulk, or infinite, where the branes continue forever. if the crystal is interlaced by connected 3-branes directed both along the intersections and orthogonal to them, it can be viewed as an example of the manyfold universe proposed recently by Arkani-Hamed, Dimopoulos, Dvali, and the author. there are new ways of generating hierarchies, since the bulk volume of the crystal and the lattice spacing affect the 4D Planck mass. the low-energy physics is sensitive to the boundary conditions in the bulk and has to satisfy the same constraints discussed for the manyfold universe. phenomenological considerations favor either finite crystals or crystals which are infinite but have broken translational invariance in the bulk. the most distinctive signature of the bulk structure is that the bulk gravitons are Bloch waves with a band spectrum, which we explicitly construct in the case of a five-dimensional theory.
|
arxiv:hep-th/9912125
|
we derive an exact solution of an explicitly time - dependent multichannel model of quantum mechanical nonadiabatic transitions. in the limit $n \gg 1$, where $n$ is the number of states, we find that the survival probability of the initially populated state remains finite despite an almost arbitrary choice of a large number of parameters. this observation proves that quantum mechanical nonadiabatic transitions among a large number of states can effectively keep memory of the initial state of the system. this property can lead to strongly non - ergodic behavior even in the thermodynamic limit of some systems with a broad distribution of coupling constants and the lack of energy conservation.
|
arxiv:1303.5122
|
physics - inspired molecular representations are the cornerstone of similarity - based learning applied to solve chemical problems. despite their conceptual and mathematical diversity, this class of descriptors shares a common underlying philosophy : they all rely on the molecular information that determines the form of the electronic schr \ " odinger equation. existing representations take the most varied forms, from non - linear functions of atom types and positions to atom densities and potential, up to complex quantum chemical objects directly injected into the ml architecture. in this work, we present the spectrum of approximated hamiltonian matrices ( spa $ ^ \ mathrm { h } $ m ) as an alternative pathway to construct quantum machine learning representations through leveraging the foundation of the electronic schr \ " odinger equation itself : the electronic hamiltonian. as the hamiltonian encodes all quantum chemical information at once, spa $ ^ \ mathrm { h } $ m representations not only distinguish different molecules and conformations, but also different spin, charge, and electronic states. as a proof of concept, we focus here on efficient spa $ ^ \ mathrm { h } $ m representations built from the eigenvalues of a hierarchy of well - established and readily - evaluated " guess " hamiltonians. these spa $ ^ \ mathrm { h } $ m representations are particularly compact and efficient for kernel evaluation and their complexity is independent of the number of different atom types in the database.
|
arxiv:2110.13037
|
let $g$ be a complete kac - moody group of rank $n \geq 2$ over the finite field of order $q$, with weyl group $w$ and building $\delta$. we first show that if $w$ is right - angled, then for all $q \not\equiv 1 \pmod 4$ the group $g$ admits a cocompact lattice $\gamma$ which acts transitively on the chambers of $\delta$. we also obtain a cocompact lattice for $q \equiv 1 \pmod 4$ in the case that $\delta$ is bourdon's building. as a corollary of our constructions, for certain right - angled $w$ and certain $q$, the lattice $\gamma$ has a surface subgroup. we also show that if $w$ is a free product of spherical special subgroups, then for all $q$, the group $g$ admits a cocompact lattice $\gamma$ with $\gamma$ a finitely generated free group. our proofs use generalisations of our results in rank 2 concerning the action of certain finite subgroups of $g$ on $\delta$, together with covering theory for complexes of groups.
|
arxiv:1203.2680
|
giant multipole resonances in nd and sm isotopes are studied by employing the quasiparticle - random - phase approximation on the basis of the skyrme energy - density - functional method. deformation effects on giant resonances are investigated in these isotopes, which manifest a typical nuclear shape change from spherical to prolate shapes. the peak energy, the broadening, and the deformation splitting of the isoscalar giant monopole ( isgmr ) and quadrupole ( isgqr ) resonances agree well with measurements. the magnitude of the peak splitting and the fraction of the energy - weighted strength in the lower peak of the isgmr reflect the nuclear deformation. the experimental data on isgmr, isgdr, and isgqr are consistent with the nuclear - matter incompressibility $k \simeq 210 - 230$ mev and the effective mass $m^*_0/m \simeq 0.8 - 0.9$. however, the high - energy octupole resonance ( heor ) in $^{144}$sm seems to indicate a smaller effective mass, $m^*_0/m \simeq 0.7 - 0.8$. a further precise measurement of heor is desired to determine the effective mass.
|
arxiv:1305.6437
|
no firm evidence has existed that the ancient maya civilization recorded specific occurrences of meteor showers or outbursts in the corpus of maya hieroglyphic inscriptions. in fact, there has been no evidence of any pre - hispanic civilization in the western hemisphere recording any observations of any meteor showers on any specific dates. the authors numerically integrated meteoroid - sized particles released by comet halley as early as 1404 bc to identify years within the maya classic period, ad 250 - 909, when eta aquariid outbursts might have occurred. outbursts determined by computer model were then compared to specific events in the maya record to see if any correlation existed between the date of the event and the date of the outburst. the model was validated by successfully explaining several outbursts around the same epoch in the chinese record. some outbursts observed by the maya were due to recent revolutions of comet halley, within a few centuries, and some to resonant behavior in older halley trails, of the order of a thousand years. examples were found of several different jovian mean motion resonances as well as the 1 : 3 saturnian resonance that have controlled the dynamical evolution of meteoroids in apparently observed outbursts.
|
arxiv:1707.08246
|
classification of partially occluded images is a highly challenging computer vision problem even for the cutting edge deep learning technologies. to achieve a robust image classification for occluded images, this paper proposes a novel scheme using subspace decomposition based estimation ( sdbe ). the proposed sdbe - based classification scheme first employs a base convolutional neural network to extract the deep feature vector ( dfv ) and then utilizes the sdbe to compute the dfv of the original occlusion - free image for classification. the sdbe is performed by projecting the dfv of the occluded image onto the linear span of a class dictionary ( cd ) along the linear span of an occlusion error dictionary ( oed ). the cd and oed are constructed respectively by concatenating the dfvs of a training set and the occlusion error vectors of an extra set of image pairs. two implementations of the sdbe are studied in this paper : the $l_1$ - norm and the squared $l_2$ - norm regularized least - squares estimates. by employing the resnet - 152, pre - trained on the ilsvrc2012 training set, as the base network, the proposed sdbe - based classification scheme is extensively evaluated on the caltech - 101 and ilsvrc2012 datasets. extensive experimental results demonstrate that the proposed sdbe - based scheme dramatically boosts the classification accuracy for occluded images, and achieves around $22.25\%$ increase in classification accuracy under $20\%$ occlusion on the ilsvrc2012 dataset.
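a minimal numpy sketch of the squared $l_2$ - norm variant as described above: decompose the occluded dfv over the concatenated cd and oed, then keep only the class part. the dictionary sizes, synthetic feature, and regularization weight are placeholders, not the paper's settings:

```python
import numpy as np

def sdbe_l2(phi, C, O, lam=0.1):
    """Estimate the occlusion-free deep feature vector (toy sketch).

    phi : (d,)   DFV of the occluded image
    C   : (d, k) class dictionary (training-set DFVs as columns)
    O   : (d, m) occlusion error dictionary
    lam : placeholder ridge weight
    """
    D = np.hstack([C, O])
    # ridge-regularized least squares: min ||phi - D w||^2 + lam ||w||^2
    w = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ phi)
    a = w[:C.shape[1]]            # coefficients over the class dictionary
    return C @ a                  # estimated occlusion-free DFV

rng = np.random.default_rng(0)
C, O = rng.standard_normal((512, 40)), rng.standard_normal((512, 10))
phi = C[:, 3] + 0.5 * O[:, 2]     # synthetic "occluded" feature
phi_hat = sdbe_l2(phi, C, O)      # should lie close to C[:, 3]
```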
|
arxiv:2001.04066
|
we rewrite the martin - siggia - rose ( msr ) formalism for the statistical dynamics of classical fields in a covariant second order form appropriate for the statistical dynamics of relativistic field theory. this second order formalism is related to a rotation of schwinger ' s closed time path ( ctp ) formalism for quantum dynamics, with the main difference being that certain vertices are absent in the classical theory. these vertices are higher order in an $ \ hbar $ expansion. the structure of the second order formulation of the schwinger dyson ( s - d ) equations is identical to that of the rotated ctp formalism apart from initial conditions on the green ' s functions and the absence of these vertices. we then discuss self - consistent truncation schemes based on keeping certain graphs in the two - particle irreducible effective action made up of bare vertices and exact green ' s functions.
|
arxiv:hep-ph/0106113
|
a dyson - schwinger - based model of pomeron exchange is employed to calculate diffractive rho -, phi - and j / psi - meson electroproduction cross sections. it is shown that the magnitude of the current - quark mass m _ f of the quark and antiquark inside the produced vector meson determines the onset of the asymptotic - q ^ 2 power - law behavior of the cross section, and how correlated quark - exchanges are included to provide a complete picture of the diffractive electroproduction of light vector mesons applicable over all energies and photon momenta q ^ 2.
|
arxiv:nucl-th/9806065
|
derandomization is one of the classic topics studied in the theory of parallel computations, dating back to the early 1980s. despite much work, all known techniques lead to deterministic algorithms that are not work - efficient. for instance, for the well - studied problem of maximal independent set - - e. g., [ karp, wigderson stoc ' 84 ; luby stoc ' 85 ; luby focs ' 88 ] - - state - of - the - art deterministic algorithms require at least $ m \ cdot poly ( \ log n ) $ work, where $ m $ and $ n $ denote the number of edges and vertices. hence, these deterministic algorithms will remain slower than their trivial sequential counterparts unless we have at least $ poly ( \ log n ) $ processors. in this paper, we present a generic parallel derandomization technique that moves exponentially closer to work - efficiency. the method iteratively rounds fractional solutions representing the randomized assignments to integral solutions that provide deterministic assignments, while maintaining certain linear or quadratic objective functions, and in an \ textit { essentially work - efficient } manner. as example end - results, we use this technique to obtain deterministic algorithms with $ m \ cdot poly ( \ log \ log n ) $ work and $ poly ( \ log n ) $ depth for problems such as maximal independent set, maximal matching, and hitting set.
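the paper's rounding is parallel and far more general; as a minimal sequential illustration of the same principle (rounding a fractional one-half assignment to an integral one while never decreasing the conditional expectation of a quadratic objective, here the cut size), consider the classic method of conditional expectations:

```python
def derandomized_cut(n, edges):
    """Round the fractional 'each side with prob. 1/2' solution to an
    integral cut via conditional expectations; final cut >= len(edges)/2."""
    side = {}
    for v in range(n):
        # only edges to already-fixed neighbours move the conditional
        # expectation; unfixed neighbours contribute 1/2 either way
        cut_if_0 = sum(1 for u, w in edges
                       if v in (u, w) and side.get(u + w - v) == 1)
        cut_if_1 = sum(1 for u, w in edges
                       if v in (u, w) and side.get(u + w - v) == 0)
        side[v] = 0 if cut_if_0 >= cut_if_1 else 1
    return side, sum(1 for u, w in edges if side[u] != side[w])

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
side, cut = derandomized_cut(4, edges)
assert cut >= len(edges) / 2       # guaranteed by the greedy choice
```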
|
arxiv:2504.15700
|
how does pore liquid reconfigure within shear bands in wet granular media? conventional wisdom predicts that liquid is drawn into dilating granular media. we, however, find a depletion of liquid in shear bands despite increased porosity due to dilatancy. this apparent paradox is resolved by a microscale model for liquid transport at low liquid contents induced by rupture and reconfiguration of individual liquid bridges. measured liquid content profiles show macroscopic depletion bands similar to results of numerical simulations. we derive a modified diffusion description for rupture - induced liquid migration.
|
arxiv:1205.0999
|
the utility of jet spectroscopy at the lhc is compromised by the existence of multiple interactions within a bunch crossing. the energy deposits from these interactions at the design luminosity of the lhc may degrade the dijet mass resolution unless great care is taken. energy clusters making up the jet can be required to have an energy flow with respect to the jet axis which resembles qcd. in addition, subsidiary information such as the jet mass or the out of jet cone mass or transverse momentum can be deployed so as to alleviate the adverse effects of pileup.
|
arxiv:hep-ex/0402031
|
we introduce the nuclear electronic all - particle density matrix renormalization group ( neap - dmrg ) method for solving the time - independent schr\"odinger equation simultaneously for electrons and other quantum species. in contrast to already existing multicomponent approaches, in this work we construct from the outset a multi - reference trial wave function with stochastically optimized non - orthogonal gaussian orbitals. by iteratively refining the gaussians' positions and widths, we obtain a compact multi - reference expansion for the multicomponent wave function. we extend the dmrg algorithm to multicomponent wave functions to take into account inter - and intra - species correlation effects. the efficient parametrization of the total wave function as a matrix product state allows neap - dmrg to accurately approximate full configuration interaction energies of molecular systems with more than three nuclei and twelve particles in total, which is currently a major challenge for other multicomponent approaches. we present neap - dmrg results for two few - body systems, i. e., h$_2$ and h$_3^+$, and one larger system, namely bh$_3$.
|
arxiv:2003.04446
|
$k = 0$. in this paper, we completely solve the case $k = 2$ using estimates for gauss sums and the aid of a computer; we also obtain a new condition for the existence of $k$ - normal elements in $\mathbb{f}_{q^n}$.
|
arxiv:2007.11169
|
branched junction molecule assembly of dna nanostructures, pioneered by seeman ' s laboratory in the 1980s, has become increasingly sophisticated, as have the assembly targets. a critical design step is finding minimal sets of branched junction molecules that will self - assemble into target structures without unwanted substructures forming. we use graph theory, which is a natural design tool for self - assembling dna complexes, to address this problem. after determining that finding optimal design strategies for this method is generally np - complete, we provide pragmatic solutions in the form of programs for special settings and provably optimal solutions for natural assembly targets such as platonic solids, regular lattices, and nanotubes. these examples also illustrate the range of design challenges.
|
arxiv:2108.00035
|
using the rigorous constitutive linearization of the second variation introduced in [6], we study weak stability of the homogeneous deformation of an axially compressed circular cylindrical shell, regarded as a 3 - dimensional hyperelastic body. we show that such a deformation becomes weakly unstable at a critical load that coincides with the value of the bifurcation load in von k\'arm\'an - donnell shell theory. we also show that the linear bifurcation modes described by the koiter circle [11] minimize the second variation asymptotically. the key ingredients of our analysis are the asymptotically sharp estimates of the korn constant for cylindrical shells and korn - like inequalities on components of the deformation gradient tensor in cylindrical coordinates. the notion of buckling equivalence introduced in [6] is developed further and becomes central in this work. a link between features of this theory and the sensitivity of the critical load to imperfections of load and shape is conjectured.
|
arxiv:1301.6079
|
we use the spin - polarized excitons in a single quantum dot to design optical controls for basic operations in quantum computing. we examine the ultrafast nonlinear optical processes required and use the coherent nonlinear optical responses to deduce if such processes are physically reasonable. the importance and construction of an entangled state of polarized exciton states in a single quantum dot is explained. we put our proposal in perspective with respect to a number of theoretical suggestions of utilizing the semiconductor quantum dots.
|
arxiv:cond-mat/0009307
|
we present large language model for mixed reality ( llmr ), a framework for the real - time creation and modification of interactive mixed reality experiences using llms. llmr leverages novel strategies to tackle difficult cases where ideal training data is scarce, or where the design goal requires the synthesis of internal dynamics, intuitive analysis, or advanced interactivity. our framework relies on text interaction and the unity game engine. by incorporating techniques for scene understanding, task planning, self - debugging, and memory management, llmr outperforms the standard gpt - 4 by 4x in average error rate. we demonstrate llmr's cross - platform interoperability with several example worlds, and evaluate it on a variety of creation and modification tasks to show that it can produce and edit diverse objects, tools, and scenes. finally, we conducted a usability study ( n = 11 ) with a diverse set of participants, which revealed that they had positive experiences with the system and would use it again.
|
arxiv:2309.12276
|
the nonlinear evolution of an ion ring instability in a low - beta magnetospheric plasma is considered. the evolution of the two - dimensional ring distribution is essentially quasilinear. ignoring nonlinear processes, the time scale for the quasilinear evolution is the same as for the linear instability, $1/\tau_{\rm ql} \sim \gamma_{\rm l}$. however, when nonlinear processes become important, a new time scale becomes relevant to the wave saturation mechanism. induced nonlinear scattering of the lower - hybrid waves by plasma electrons is the dominant nonlinearity relevant for plasmas in the inner magnetosphere and typically occurs on the timescale $1/\tau_{\rm nl} \sim \omega \, (m_i/m_e) \, w/nt$, where $w$ is the wave energy density, $nt$ is the thermal energy density of the background plasma, and $m_i/m_e$ is the ion to electron mass ratio, which has the consequence that the wave amplitude saturates at a low level, and the timescale for quasilinear relaxation is extended by orders of magnitude.
|
arxiv:1103.3715
|
the key notion to understand the left determined olschok model category of star - shaped cattani - sassone transition systems is past - similarity. two states are past - similar if they have homotopic pasts. an object is fibrant if and only if the set of transitions is closed under past - similarity. a map is a weak equivalence if and only if it induces an isomorphism after the identification of all past - similar states. the last part of this paper is a discussion about the link between causality and homotopy.
|
arxiv:1607.07678
|
we explore reprocessing models for a sample of 17 hypervariable quasars, taken from the sloan digital sky survey reverberation mapping ( sdss - rm ) project, which all show coordinated optical luminosity hypervariability with amplitudes of factors $ \ gtrsim 2 $ between 2014 and 2020. we develop and apply reprocessing models for quasar light curves in simple geometries that are likely to be representative of quasar inner environments. in addition to the commonly investigated thin - disk model, we include the thick - disk and hemisphere geometries. the thick - disk geometry could, for instance, represent a magnetically - elevated disk, whereas the hemisphere model can be interpreted as a first - order approximation for any optically - thick out - of - plane material caused by outflows / winds, warped / tilted disks, etc. of the 17 quasars in our sample, eleven are best - fit by a hemisphere geometry, five are classified as thick disks, and both models fail for just one object. we highlight the successes and shortcomings of our thermal reprocessing models in case studies of four quasars that are representative of the sample. while reprocessing is unlikely to explain all of the variability we observe in quasars, we present our classification scheme as a starting point for revealing the likely geometries of reprocessing for quasars in our sample and hypervariable quasars in general.
|
arxiv:2306.13120
|
in linear disordered systems anderson localization makes any wave packet stay localized for all times. its fate in nonlinear disordered systems is under intense theoretical debate and experimental study. we resolve this dispute by showing that at any small but finite nonlinearity ( energy ) value there is a finite probability for anderson localization to break up and for propagating nonlinear waves to take over. this probability increases with nonlinearity ( energy ) and reaches unity at a certain threshold, determined by the initial wave packet size. moreover, the spreading probability stays finite also in the limit of infinite packet size at fixed total energy. these results are generalized to higher dimensions as well.
|
arxiv:1108.0899
|
moore - tachikawa varieties are certain hamiltonian holomorphic symplectic varieties conjectured in the context of $ 2 $ - dimensional topological quantum field theories. we discuss several constructions related to these varieties.
|
arxiv:2104.05555
|
from an operational and planning perspective, it is important to quantify the impact of increasing penetration of photovoltaics on the distribution system. most existing impact assessment studies are scenario - based, where derived results are scenario specific and not generalizable. moreover, stochasticity in the temporal behavior of spatially distributed pvs requires a large number of scenarios that increase with the size of the network and the level of penetration. therefore, we propose a new computationally efficient analytical framework of probabilistic voltage sensitivity analysis ( pvsa ) that allows for stochastic analysis of voltage change due to random changes in pv generation. we first derive an analytical approximation for the voltage change at any node of the network due to a change in power at other nodes in an unbalanced distribution network. the quality of this approximation is reinforced via bounds on the approximation error. then, we derive the probability distribution of the voltage change at a certain node due to random changes in power injections / consumptions at multiple locations of the network. the accuracy of the proposed pvsa is illustrated using a modified version of the ieee 37 bus test system. the proposed pvsa can serve as a powerful tool for proactive monitoring / control and ease the computational burden associated with perturbation - based cybersecurity mechanisms.
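a hedged numpy sketch of the probabilistic step only: once a sensitivity matrix mapping power changes to voltage changes is available (a random placeholder below; the paper derives it analytically for unbalanced feeders), the distribution of the voltage change follows by linearity of gaussians:

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_pv = 37, 5
# placeholder sensitivities; in the paper these come from the network model
S = rng.uniform(0.001, 0.01, size=(n_nodes, n_pv))

# random PV power changes: zero mean, independent, std 50 kW each (toy values)
mu_p = np.zeros(n_pv)
cov_p = np.diag(np.full(n_pv, 50.0 ** 2))

# linearized voltage change dv = S @ dp, hence gaussian with:
mu_v = S @ mu_p
cov_v = S @ cov_p @ S.T
node = 12
print(f"node {node}: E[dv] = {mu_v[node]:.3f}, "
      f"std = {np.sqrt(cov_v[node, node]):.3f}")
```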
|
arxiv:2009.05734
|
it is shown that negative ions are ejected from gas - phase water molecules when bombarded with positive ions at kev energies typical of solar - wind velocities. this finding is relevant for studies of planetary and cometary atmospheres, as well as for radiolysis and radiobiology. emission of both h$^-$ and heavier ( o$^-$ and oh$^-$ ) anions, with a larger yield for h$^-$, was observed in 6.6 - kev $^{16}$o$^+$ + h$_2$o collisions. the experimental setup allowed separate identification of anions formed in collisions with many - body dynamics from those created in hard, binary collisions. most of the anions are emitted with low kinetic energy due to many - body processes. model calculations show that both nucleus - nucleus interactions and electronic excitations contribute to the observed large anion emission yield.
|
arxiv:1506.09006
|
noteworthy strides continue to be made in the development of full - duplex millimeter wave ( mmwave ) communication systems, but most of this progress has been built on theoretical models and validated through simulation. in this work, we conduct a long overdue real - world evaluation of full - duplex mmwave systems using off - the - shelf 60 ghz phased arrays. using an experimental full - duplex base station, we collect over 200, 000 measurements of self - interference by electronically sweeping its transmit and receive beams across a dense spatial profile, shedding light on the effects of the environment, array positioning, and beam steering direction. we then call attention to five key challenges faced by practical full - duplex mmwave systems and, with these in mind, propose a general framework for beamforming - based full - duplex solutions. guided by this framework, we introduce a novel solution called steer +, a more robust version of recent work called steer, and experimentally evaluate both in a real - world setting with actual downlink and uplink users. rather than purely minimize self - interference as with steer, steer + makes use of additional measurements to maximize spectral efficiency, which proves to make it much less sensitive to one ' s choice of design parameters. we experimentally show that steer + can reliably reduce self - interference to near or below the noise floor while maintaining high snr on the downlink and uplink, thus enabling full - duplex operation purely via beamforming.
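a toy numpy contrast of the two selection rules described, with all measurements synthetic placeholders rather than the paper's 60 ghz data: a steer-like rule minimizing self-interference versus a steer+-like rule maximizing a sum spectral-efficiency surrogate:

```python
import numpy as np

rng = np.random.default_rng(7)
n_tx, n_rx = 16, 16                        # candidate beams per side (toy)
INR = rng.normal(10, 8, (n_tx, n_rx))      # self-interference-to-noise (dB)
SNR_dl = rng.normal(20, 3, n_tx)           # downlink SNR per tx beam (dB)
SNR_ul = rng.normal(18, 3, n_rx)           # uplink SNR per rx beam (dB)

def rate(snr_db, inr_db=None):
    lin = 10 ** (snr_db / 10)
    if inr_db is not None:                 # SI degrades the uplink only
        lin = lin / (1 + 10 ** (inr_db / 10))
    return np.log2(1 + lin)

# STEER-like rule: pick the beam pair with least self-interference
i, j = np.unravel_index(np.argmin(INR), INR.shape)

# STEER+-like rule: pick the pair maximizing downlink + uplink rate
SE = rate(SNR_dl)[:, None] + rate(SNR_ul[None, :], INR)
k, l = np.unravel_index(np.argmax(SE), SE.shape)
print("min-SI pair:", (i, j), "max-SE pair:", (k, l))
```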
|
arxiv:2307.10523
|
advances in the development of quantum computing processors have brought ample opportunities to test the performance of various quantum algorithms with practical implementations. in this paper we report on implementations of a quantum compression algorithm that can efficiently compress unknown quantum information. we restricted ourselves to the compression of three pure qubits into two qubits, as the complexity of even such a simple implementation is barely within the reach of today's quantum processors. we implemented the algorithm on ibm quantum processors with two different topological layouts - a fully connected triangle processor and a partially connected line processor. it turns out that the incomplete connectivity of the line processor affects the performance only minimally. on the other hand, the transpilation, i. e. the compilation of the circuit into gates physically available to the quantum processor, crucially influences the result. we have also seen that compression followed by immediate decompression is, even for such a simple case, on the edge of or even beyond the capabilities of currently available quantum processors.
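the compression circuit itself is too involved for a short sketch, but the transpilation effect the authors highlight is easy to reproduce with qiskit; the toy circuit, basis gates, and coupling maps below are illustrative, not the actual ibm backends used:

```python
from qiskit import QuantumCircuit, transpile

# a small 3-qubit circuit with a gate between non-adjacent qubits 0 and 2
qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)
qc.cx(0, 2)
qc.cx(1, 2)

basis = ["rz", "sx", "x", "cx"]
line = [[0, 1], [1, 0], [1, 2], [2, 1]]        # partially connected layout
triangle = line + [[0, 2], [2, 0]]             # fully connected layout

for name, cmap in [("line", line), ("triangle", triangle)]:
    tqc = transpile(qc, basis_gates=basis, coupling_map=cmap,
                    optimization_level=3, seed_transpiler=0)
    # the line layout forces SWAP insertion, inflating depth and cx count
    print(name, "depth:", tqc.depth(), "ops:", tqc.count_ops())
```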
|
arxiv:2201.10999
|
this paper analyzes the consequences of electric current generation at the front of the bow shock ( bs ) and the dependence of the direction of this current on the imf. the conditions of this current closure through the body of the magnetosphere are discussed. it is shown that the process of penetration of the external current into magnetized plasma has a two - stage character. initially, a change in current on the boundary gives rise to a region of surface charge, the field of which polarizes the near - wall layer with a thickness on the order of one proton gyroradius. the polarization process involves the formation of the displacement current, which produces the ampere force accelerating the plasma inside the double layer. when the plasma velocity reaches the electric drift velocity ( within a time on the order of the inverse gyrofrequency of protons ), the electric field in this plasma disappears, whereas in a fixed frame of reference, on the contrary, it reaches equilibrium values. the front of variation of the electric field penetrates the plasma with the velocity of a fast magnetosonic wave. a change in the convection velocity field causes a redistribution of plasma pressure. the appearance of corresponding gradients signifies the penetration of current into the plasma. the gradients keep changing until a new steady state is reached, to which the new convection velocity field and the new plasma pressure field correspond. we estimate the time in which this new state is reached.
|
arxiv:physics/0306041
|
an all high - latitude sky survey for cool carbon giant stars in the galactic halo has revealed 75 such stars, of which the majority are new detections. of these, more than half are clustered on a great circle on the sky which intersects the center of sagittarius dwarf galaxy ( sgr ) and is parallel to its proper motion vector, while many of the remainder are outlying magellanic cloud c - stars. a pole - count analysis of the carbon star distribution clearly indicates that the great circle stream we have isolated is statistically significant, being a 5 - 6 sigma over - density. these two arguments strongly support our conclusion that a large fraction of the halo carbon stars originated in sgr. the stream orbits the galaxy between the present location of sgr, 16 kpc from the galactic center, and the most distant stream carbon star, at ~ 60 kpc. it follows neither a polar nor a galactic plane orbit, so that a large range in both galactic r and z distances are probed. that the stream is observed as a great circle indicates that the galaxy does not exert a significant torque upon the stream, so the galactic potential must be nearly spherical in the regions probed by the stream. we present n - body experiments simulating this disruption process as a function of the distribution of mass in the galactic halo. a likelihood analysis shows that, in the galactocentric distance range 16 kpc < r < 60 kpc, the dark halo is most likely almost spherical. we rule out, at high confidence levels, the possibility that the halo is significantly oblate, with isodensity contours of aspect q _ m < 0. 7. this result is quite unexpected and contests currently popular galaxy formation models. ( abridged )
|
arxiv:astro-ph/0004011
|
the discovery of superconductivity at megabar ( mb ) pressures in hydrogen sulfide h3s, then in metal polyhydrides, starting with binary ones, lah10, etc., and ending with ternary ones, including ( la, y ) h10, revolutionized the field of condensed matter physics. these discoveries strengthen hopes for a solution to the century - old problem of creating materials that are superconducting at room temperature. in experiments performed over the past 5 years at mb pressures, in addition to the synthesis of the hydrides itself, their physical properties were studied using optical, x - ray and m\"ossbauer spectroscopy, as well as galvanomagnetic measurement techniques. this paper presents the major results of galvanomagnetic studies, including measurements in high static ( up to 21 t ) and pulsed ( up to 70 t ) magnetic fields. measurements of resistance drops to a vanishingly small level at temperatures below the critical value tc, a decrease in the critical temperature tc with increasing magnetic field, as well as diamagnetic screening, indicate the superconducting state of the polyhydrides. the results of measurements of the isotope effect, together with the effect of magnetic impurities on tc, indicate the electron - phonon mechanism of electron pairing. however, electron - electron correlations in polyhydrides are by no means small, both in the superconducting and normal states. it is possible that this is precisely what accounts for the unusual properties of polyhydrides that have not yet received a satisfactory explanation, such as a linear temperature dependence of the second critical field hc2 ( t ), a linear dependence of resistance r ( t ), and a linear magnetoresistance, very similar to that discovered by p. l. kapitza in 1929.
|
arxiv:2406.11344
|
solid - state single - photon emitters provide a versatile platform for exploring quantum technologies such as optically connected quantum networks. a key challenge is to ensure optical coherence and spectral stability of the emitters. here, we introduce a high - bandwidth 'check - probe' scheme to quantitatively measure ( laser - induced ) spectral diffusion and ionisation rates, as well as homogeneous linewidths. we demonstrate these methods on single v2 centers in commercially available bulk - grown 4h - silicon carbide. despite observing significant spectral diffusion under laser illumination ( $\gtrsim$ ghz / s ), the optical transitions are narrow ( $\sim$ 35 mhz ), and remain stable in the dark ( $\gtrsim$ 1 s ). through landau - zener - st\"uckelberg interferometry, we determine the optical coherence to be near - lifetime limited ( $t_2 = 16.4(4)$ ns ), hinting at the potential for using bulk - grown materials for developing quantum technologies. these results advance our understanding of spectral diffusion of quantum emitters in semiconductor materials, and may have applications for studying charge dynamics across other platforms.
|
arxiv:2409.13018
|
despite the deleterious effect of hardware impairments on communication systems, most prior works have not investigated their impact on widely used relay systems. most importantly, the application of inexpensive transceivers, which are prone to hardware impairments, is the most cost - efficient way to implement massive multiple - input multiple - output ( mimo ) systems. consequently, this paper investigates the impact of hardware impairments on mimo relay networks with a large number of antennas. specifically, we obtain the general expression for the ergodic capacity of dual - hop ( dh ) amplify - and - forward ( af ) relay systems. next, given the advantages of free probability ( fp ) theory in comparison with other known techniques in the area of large random matrix theory, we pursue a large - limit analysis in terms of the number of antennas and users, shedding light on the behavior of relay systems afflicted by hardware impairments.
|
arxiv:1509.04674
|
interaction - free measurement is shown to arise from the forward - scattered wave accompanying absorption : a " quantum silhouette " of the absorber. accordingly, the process is not free of interaction. for a perfect absorber the forward - scattered wave is locked both in amplitude and in phase. for an imperfect one it has a nontrivial phase of dynamical origin ( ` ` colored silhouette " ), measurable by interferometry. other examples of quantum silhouettes, all controlled by unitarity, are briefly discussed.
|
arxiv:quant-ph/9804058
|
this research aims to study a self - supervised 3d clothing reconstruction method, which recovers the geometry shape and texture of human clothing from a single image. compared with existing methods, we observe that three primary challenges remain : ( 1 ) 3d ground - truth meshes of clothing are usually inaccessible due to annotation difficulties and time costs ; ( 2 ) conventional template - based methods are limited to modeling non - rigid objects, e. g., handbags and dresses, which are common in fashion images ; ( 3 ) the inherent ambiguity compromises the model training, such as the dilemma between a large shape with a remote camera or a small shape with a close camera. in an attempt to address the above limitations, we propose a causality - aware self - supervised learning method to adaptively reconstruct 3d non - rigid objects from 2d images without 3d annotations. in particular, to resolve the inherent ambiguity among four implicit variables, i. e., camera position, shape, texture, and illumination, we introduce an explainable structural causal map ( scm ) to build our model. the proposed model structure follows the spirit of the causal map, which explicitly considers the prior template in the camera estimation and shape prediction. during optimization, the causality intervention tool, i. e., two expectation - maximization loops, is deeply embedded in our algorithm to ( 1 ) disentangle four encoders and ( 2 ) facilitate the prior template. extensive experiments on two 2d fashion benchmarks ( atr and market - hq ) show that the proposed method can yield high - fidelity 3d reconstruction. furthermore, we also verify the scalability of the proposed method on a fine - grained bird dataset, i. e., cub. the code is available at https://github.com/layumi/3d-magic-mirror.
|
arxiv:2204.13096
|
unconstrained remote gaze estimation remains challenging mostly due to its vulnerability to large variability in head pose. prior solutions struggle to maintain reliable accuracy in unconstrained remote gaze tracking. among them, appearance - based solutions demonstrate tremendous potential for improving gaze accuracy. however, existing works still suffer from head movement and are not robust enough to handle real - world scenarios. in particular, most of them study gaze estimation under controlled scenarios where the collected datasets often cover limited ranges of both head pose and gaze, which introduces further bias. in this paper, we propose novel end - to - end appearance - based gaze estimation methods that can more robustly incorporate different levels of head - pose representations into gaze estimation. our method can generalize to real - world scenarios with low image quality, different lightings, and scenarios where direct head - pose information is not available. to better demonstrate the advantage of our methods, we further propose a new benchmark dataset with the richest distribution of head - gaze combinations reflecting real - world scenarios. extensive evaluations on several public datasets and our own dataset demonstrate that our method consistently outperforms the state - of - the - art by a significant margin.
|
arxiv:2004.03737
|
a copula $c$ of continuous random variables $x$ and $y$ is called an \emph{implicit dependence copula} if there exist functions $\alpha$ and $\beta$ such that $\alpha(x) = \beta(y)$ almost surely, which is equivalent to $c$ being factorizable as the $*$ - product of a left invertible copula and a right invertible copula. every implicit dependence copula is supported on the graph of $f(x) = g(y)$ for some measure - preserving functions $f$ and $g$, but the converse is not true in general. we obtain a characterization of copulas with implicit dependence supports in terms of the non - atomicity of two newly defined associated $\sigma$ - algebras. as an application, we give a broad sufficient condition under which a self - similar copula has an implicit dependence support. under certain extra conditions, we explicitly compute the left invertible and right invertible factors of the self - similar copula.
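for reference, a sketch of the $*$ - product in question (the darsow - nguyen - olsen product, standard in this literature):

```latex
% the *-product of copulas A and B (Darsow--Nguyen--Olsen):
(A * B)(x, y) \;=\; \int_0^1 \partial_2 A(x, t)\, \partial_1 B(t, y)\,\mathrm{d}t,
% where \partial_i is the partial derivative in the i-th argument (defined
% a.e.); an implicit dependence copula is one admitting C = A * B with
% A left invertible and B right invertible with respect to this product.
```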
|
arxiv:1606.07602
|
we theoretically analyze a quasi - two - dimensional system of fermionic polar molecules in a harmonic transverse confining potential. the renormalized energy bands are calculated by solving the hartree - fock equation numerically for various trap and dipolar interaction strengths. the inter - subband excitations of the system are studied in the conserving time - dependent hartree - fock ( tdhf ) approximation from the perspective of lattice modulation spectroscopy experiments. we find that the excitation spectrum consists of both inter - subband particle - hole excitation continuums and anti - bound excitons, arising from the anisotropic nature of dipolar interactions. the excitonic modes capture the majority of the spectral weight. we also evaluate the inter - subband transition rates in order to investigate the nature of the excitonic modes and find that they are anti - bound states formed from particle - hole excitations arising from several subbands. our results indicate that the excitonic effects are present for interaction strengths and temperatures accessible in current experiments with polar molecules.
|
arxiv:1106.4345
|
we propose to take advantage of the common knowledge of the characteristic function of the swap rate process as modelled in the libor market model with stochastic volatility and displaced diffusion ( ddsvlmm ) to derive analytical expressions of the gradient of swaption prices with respect to the model parameters. we use this result to derive an efficient calibration method for the ddsvlmm using gradient - based optimization algorithms. our study relies on and extends the work by ( cui et al., 2017 ), which developed the analytical gradient for fast calibration of the heston model, based on an alternative formulation of the heston moment generating function proposed by ( del ba\~no et al., 2010 ). our main conclusion is that analytical gradient - based calibration is highly competitive for the ddsvlmm, as it significantly limits the number of steps in the optimization algorithm while improving its accuracy. the efficiency of this novel approach is compared to classical standard optimization procedures.
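a minimal scipy sketch of the calibration loop's shape: a least-squares objective over swaption prices with an analytic gradient handed to a gradient-based optimizer. the pricer below is a toy stand-in that only mimics the interface of the paper's characteristic-function-based pricer:

```python
import numpy as np
from scipy.optimize import minimize

def model_price_and_grad(theta, quote):
    """Toy stand-in for the DDSVLMM swaption pricer and its analytic
    gradient w.r.t. theta; the real pricer uses the swap rate's
    characteristic function, which we do not reproduce here."""
    p = quote @ theta ** 2
    return p, 2.0 * quote * theta

def calibrate(theta0, quotes, market_prices):
    def objective(theta):
        err, grad = 0.0, np.zeros_like(theta)
        for q, p_mkt in zip(quotes, market_prices):
            p, dp = model_price_and_grad(theta, q)
            err += (p - p_mkt) ** 2
            grad += 2.0 * (p - p_mkt) * dp      # chain rule
        return err, grad
    # jac=True: the objective returns (value, gradient) in a single call
    return minimize(objective, theta0, jac=True, method="L-BFGS-B")

rng = np.random.default_rng(0)
quotes = rng.uniform(0.5, 1.5, size=(20, 4))    # synthetic "market" setup
theta_true = np.array([0.3, 0.8, 0.2, 0.5])
prices = [q @ theta_true ** 2 for q in quotes]
res = calibrate(np.full(4, 0.4), quotes, prices)
```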
|
arxiv:2006.13521
|
despite recent progress, the astrophysical channels responsible for rapid neutron capture ( r - process ) nucleosynthesis remain an unsettled question. observations of kilonovae following gravitational wave - detected neutron star mergers established mergers as one site of the r - process, but additional sources may be needed to fully explain r - process enrichment in the universe. one intriguing possibility is that rapidly rotating massive stars undergoing core collapse launch r - process - rich outflows off the accretion disks formed from their infalling matter. in this scenario, r - process winds comprise one component of the supernova ( sn ) ejecta produced by " collapsar " explosions. we present the first systematic study of the effects of r - process enrichment on the emission from collapsar - generated sne. we semi - analytically model r - process sn emission from explosion out to late times, and determine its distinguishing features. the ease with which r - process sne can be identified depends on how effectively wind material mixes into the initially r - process - free outer layers of the ejecta. in many cases, enrichment produces a near infrared ( nir ) excess that can be detected within ~ 75 days of explosion. we also discuss optimal targets and observing strategies for testing the r - process collapsar theory, and find that frequent monitoring of optical and nir emission from high - velocity sne in the first few months after explosion offers a reasonable chance of success while respecting finite observing resources. such early identification of r - process collapsar candidates also lays the foundation for nebular - phase spectroscopic follow - up in the near - and mid - infrared, for example with the james webb space telescope.
|
arxiv:2205.10421
|
interference of photons emerging from independent sources is essential for modern quantum information processing schemes, above all quantum repeaters and linear - optics quantum computers. we report an observation of non - classical interference of two single photons originating from two independent, separated sources, which were actively synchronized with an r. m. s. timing jitter of 260 fs. the resulting ( two - photon ) interference visibility was $83 \pm 4\%$.
|
arxiv:quant-ph/0603048
|
we prove the existence of extremal sasakian structures occurring on a countably infinite number of distinct contact structures on $ t ^ 2 \ times s ^ 3 $ and certain related manifolds. these structures occur in bouquets and exhaust the sasaki cones in all except one case in which there are no extremal metrics.
|
arxiv:1108.2005
|
we propose a novel class of deep stochastic predictors for classifying metric data on graphs within the pac - bayes risk certification paradigm. classifiers are realized as linearly parametrized deep assignment flows with random initial conditions. building on the recent pac - bayes literature and data - dependent priors, this approach enables ( i ) to use risk bounds as training objectives for learning posterior distributions on the hypothesis space and ( ii ) to compute tight out - of - sample risk certificates of randomized classifiers more efficiently than related work. comparison with empirical test set errors illustrates the performance and practicality of this self - certifying classification method.
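a hedged sketch of how such an out-of-sample risk certificate is typically computed, using the langford - seeger / maurer pac - bayes - kl form inverted numerically; the paper's exact bound and constants may differ:

```python
import numpy as np

def kl_bernoulli(q, p):
    """KL(Ber(q) || Ber(p)), clipped away from the boundary."""
    eps = 1e-12
    q, p = min(max(q, eps), 1 - eps), min(max(p, eps), 1 - eps)
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

def kl_inverse(emp_risk, rhs):
    """Largest p with kl(emp_risk || p) <= rhs, found by bisection."""
    lo, hi = emp_risk, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if kl_bernoulli(emp_risk, mid) <= rhs else (lo, mid)
    return lo

def risk_certificate(emp_risk, kl_post_prior, n, delta=0.05):
    """PAC-Bayes-kl: true risk <= kl_inverse(emp, rhs) w.p. >= 1 - delta."""
    rhs = (kl_post_prior + np.log(2 * np.sqrt(n) / delta)) / n
    return kl_inverse(emp_risk, rhs)

# hypothetical numbers purely for illustration
print(risk_certificate(emp_risk=0.08, kl_post_prior=15.0, n=50000))
```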
|
arxiv:2201.11162
|
notably by richard roberts and joseph whitworth. the development of interchangeable parts through what is now called the american system of manufacturing began in the firearms industry at the u. s. federal arsenals in the early 19th century, and became widely used by the end of the century. until the enlightenment era, little progress was made in water supply and sanitation and the engineering skills of the romans were largely neglected throughout europe. the first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in paisley, scotland, john gibb, installed an experimental filter, selling his unwanted surplus to the public. the first treated public water supply in the world was installed by engineer james simpson for the chelsea waterworks company in london in 1829. the first screw - down water tap was patented in 1845 by guest and chrimes, a brass foundry in rotherham. the practice of water treatment soon became mainstream, and the virtues of the system were made starkly apparent after the investigations of the physician john snow during the 1854 broad street cholera outbreak demonstrated the role of the water supply in spreading the cholera epidemic. === second industrial revolution ( 1860s – 1914 ) === the 19th century saw astonishing developments in transportation, construction, manufacturing and communication technologies originating in europe. after a recession at the end of the 1830s and a general slowdown in major inventions, the second industrial revolution was a period of rapid innovation and industrialization that began in the 1860s or around 1870 and lasted until world war i. it included rapid development of chemical, electrical, petroleum, and steel technologies connected with highly structured technology research. telegraphy developed into a practical technology in the 19th century to help run the railways safely. along with the development of telegraphy was the patenting of the first telephone. march 1876 marks the date that alexander graham bell officially patented his version of an " electric telegraph ". although bell is credited with the creation of the telephone, it is still debated who actually developed the first working model. building on improvements in vacuum pumps and materials research, incandescent light bulbs became practical for general use in the late 1870s. edison electric illuminating company, a company founded by thomas edison with financial backing from spencer trask, built and managed the first electricity network. electrification was rated the most important technical development of the 20th century as the foundational infrastructure for modern civilization. this invention had a profound effect on the workplace because factories could now have second and third shift workers. shoe production was mechanized during the mid 19th century.
|
https://en.wikipedia.org/wiki/History_of_technology
|
we investigate models of the life annuity insurance when the company invests its reserve into a risky asset with price following a geometric brownian motion. our main result is an exact asymptotic of the ruin probabilities for the case of exponentially distributed benefits. as in the case of non - life insurance with exponential claims, the ruin probabilities are either decreasing with a rate given by a power function ( the case of small volatility ) or equal to unit identically ( the case of large volatility ). the result allows us to quantify the share of reserve to invest into such a risky asset to avoid a catastrophic outcome : the ruin with probability one. we address also the question of smoothness of the ruin probabilities as a function of the initial reserve for generally distributed jumps.
|
arxiv:1505.04331
|
convolutional neural networks ( cnns ) have recently become a favored technique for image denoising due to their adaptive learning ability, especially with a deep configuration. however, their efficacy is inherently limited owing to their homogeneous network formation built on the exclusive use of linear convolution. in this study, we propose a heterogeneous network model which allows greater flexibility for embedding additional non - linearity at the core of the data transformation. to this end, we propose the idea of an operational neuron and operational neural networks ( onns ), which enable a flexible non - linear and heterogeneous configuration employing both inter - and intra - layer neuronal diversity. furthermore, we propose a robust operator search strategy inspired by hebbian theory, called synaptic plasticity monitoring ( spm ), which can make data - driven choices for the non - linearities in any architecture. an extensive set of comparative evaluations of onns and cnns over two severe image denoising problems yields conclusive evidence that onns enriched by non - linear operators can achieve superior denoising performance against cnns with both equivalent and well - known deep configurations.
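a toy numpy illustration of the operational-neuron idea: the multiply-and-sum of a convolutional neuron is replaced by a chosen nodal operator and pool operator. the operators below are examples only; the paper selects them per layer in a data-driven way via spm:

```python
import numpy as np

def operational_conv1d(x, w, nodal=np.multiply, pool=np.sum):
    """1-D 'operational' convolution: y[i] = pool(nodal(w, window)).
    nodal=np.multiply with pool=np.sum recovers ordinary convolution
    (cross-correlation); other choices yield a non-linear neuron."""
    k = len(w)
    return np.array([pool(nodal(w, x[i:i + k]))
                     for i in range(len(x) - k + 1)])

x = np.sin(np.linspace(0, 6, 50))
w = np.array([0.5, -1.0, 0.5])

linear = operational_conv1d(x, w)                  # a standard conv neuron
harmonic = operational_conv1d(x, w,
                              nodal=lambda w, u: np.sin(w * u),
                              pool=np.median)      # one operational choice
```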
|
arxiv:2009.00612
|
rough set theory, a mathematical tool to deal with inexact or uncertain knowledge in information systems, originally described the indiscernibility of elements by equivalence relations. covering rough sets are a natural extension of classical rough sets, obtained by relaxing the partitions arising from equivalence relations to coverings. recently, some topological concepts such as neighborhood have been applied to covering rough sets. in this paper, we further investigate covering rough sets based on neighborhoods by approximation operations. we show that the upper approximation based on neighborhoods can be defined equivalently without using neighborhoods. to analyze the coverings themselves, we introduce unary and composition operations on coverings. a notion of homomorphism is provided to relate two covering approximation spaces. we also examine the properties of approximations preserved by the operations and homomorphisms, respectively.
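a small python sketch of the neighborhood-based operators discussed, using the standard definitions (the neighborhood of x is the intersection of all covering blocks containing x); the covering below is a made-up example:

```python
from functools import reduce

def neighborhood(x, cover):
    """N(x): intersection of all blocks of the covering containing x."""
    return reduce(set.intersection, [K for K in cover if x in K])

def lower_approx(X, universe, cover):
    """Elements whose neighborhood is fully inside X."""
    return {x for x in universe if neighborhood(x, cover) <= X}

def upper_approx(X, universe, cover):
    """Elements whose neighborhood meets X."""
    return {x for x in universe if neighborhood(x, cover) & X}

U = {1, 2, 3, 4}
cover = [{1, 2, 3}, {2, 3, 4}, {1, 4}]   # a covering, not a partition
X = {1, 2}
print(lower_approx(X, U, cover))          # {1}
print(upper_approx(X, U, cover))          # {1, 2, 3}
```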
|
arxiv:0911.5394
|
we give some illustrative applications of our recent result on decompositions of labelled complexes, including some new results on decompositions of hypergraphs with coloured or directed edges. for example, we give fairly general conditions for decomposing an edge - coloured graph into rainbow triangles, and for decomposing an r - digraph into tight q - cycles.
|
arxiv:1807.05770
|
the most efficient algorithms for finding maximum independent sets in both theory and practice use reduction rules to obtain a much smaller problem instance called a kernel. the kernel can then be solved quickly using exact or heuristic algorithms - - - or by repeatedly kernelizing recursively in the branch - and - reduce paradigm. it is of critical importance for these algorithms that kernelization is fast and returns a small kernel. current algorithms are either slow but produce a small kernel, or fast and give a large kernel. we attempt to accomplish both of these goals simultaneously, by giving an efficient parallel kernelization algorithm based on graph partitioning and parallel bipartite maximum matching. we combine our parallelization techniques with two techniques to accelerate kernelization further : dependency checking that prunes reductions that cannot be applied, and reduction tracking that allows us to stop kernelization when reductions become less fruitful. our algorithm produces kernels that are orders of magnitude smaller than the fastest kernelization methods, while having a similar execution time. furthermore, our algorithm is able to compute kernels with size comparable to the smallest known kernels, but up to two orders of magnitude faster than previously possible. finally, we show that our kernelization algorithm can be used to accelerate existing state - of - the - art heuristic algorithms, allowing us to find larger independent sets faster on large real - world networks and synthetic instances.
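a sequential toy of the two simplest reduction rules such kernelizers apply exhaustively (degree-zero and degree-one vertices); the paper's contribution is applying these and heavier rules in parallel with dependency checking and reduction tracking:

```python
def kernelize_mis(adj):
    """Exhaustively apply degree-0 and degree-1 MIS reductions.
    adj: dict vertex -> set of neighbours (undirected graph).
    Returns (forced_in, kernel_adj): vertices forced into the independent
    set, and the remaining (smaller) kernel graph."""
    adj = {v: set(ns) for v, ns in adj.items()}
    forced = set()
    queue = [v for v in adj if len(adj[v]) <= 1]
    while queue:
        v = queue.pop()
        if v not in adj or len(adj[v]) > 1:
            continue                        # stale queue entry
        forced.add(v)                       # take v into the solution
        for u in list(adj.pop(v)):          # its <= 1 neighbour is excluded
            for w in adj[u]:
                if w != v:
                    adj[w].discard(u)
                    if len(adj[w]) <= 1:
                        queue.append(w)
            adj.pop(u)
    return forced, adj

# a path 0-1-2-3 reduces completely, with {0, 3} forced into the MIS
forced, kernel = kernelize_mis({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}})
print(forced, kernel)                       # {0, 3} {}
```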
|
arxiv:1708.06151
|