We analyze nonabelian massive Higgs-free theories in the causal Epstein-Glaser approach. Recently, there has been renewed interest in these models. In particular we consider the well-known Curci-Ferrari model and the nonabelian Stückelberg models. We explicitly show the reason why the considered models fail to be unitary. In our approach only the asymptotic (linear) BRS-symmetry has to be considered.
arxiv:hep-th/9511176
Particle decays do not constitute a spin "measurement" in the quantum-mechanical sense, but still modify the spin state, in particular for an entangled system. We show that for a spin-entangled pair of particles the entanglement of the system can increase after the decay of one particle. This unique phenomenon has no equivalent for stable particles and could be observable in top pair production at a high-energy polarized $e^+e^-$ collider.
arxiv:2401.06854
We address the local well-posedness for the stochastic Navier-Stokes system with multiplicative cylindrical noise in the whole space. More specifically, we prove that there exists a unique local strong solution to the system in $L^p(\mathbb{R}^3)$ for $p > 3$.
arxiv:2301.12877
Apart from Mercury, which has no known co-orbital companions, Venus remains the inner planet that hosts the smallest number of known co-orbitals (two): (322756) 2001 CK32 and 2002 VE68. Both objects have absolute magnitudes 18 < H < 21 and were identified as Venus co-orbitals in 2004. Here, we analyse the orbit of the recently discovered asteroid 2012 XE133, with H = 23.5 mag, to conclude that it is a new Venus co-orbital currently following a transitional trajectory between Venus' Lagrangian points L5 and L3. The object could have been a 1:1 librator for several thousand years, and it may leave the resonance with Venus within the next few hundred years, after a close encounter with the Earth. Our calculations show that its dynamical status as co-orbital, as well as that of the two previously known Venus co-orbitals, is controlled by the Earth-Moon system, with Mercury playing a secondary role. The three temporary co-orbitals follow rather chaotic but similar trajectories with e-folding times of the order of 100 yr. Of the three co-orbitals, 2012 XE133 currently follows the most perturbed path. An actual collision with the Earth during the next 10,000 yr cannot be discarded. Extrapolation of the number distribution of Venus co-orbitals as a function of absolute magnitude suggests that dozens of objects similar to 2012 XE133 could be transient companions to Venus. Some additional objects that were or will be transient co-orbitals of Venus are also briefly discussed.
arxiv:1303.3705
fuel cells instead of batteries, and conducted the first American spacewalks and rendezvous operations. The Ranger program was started in the 1950s as a response to Soviet lunar exploration; however, most missions ended in failure. The Lunar Orbiter program had greater success, mapping the surface in preparation for Apollo landings, conducting meteoroid detection, and measuring radiation levels. The Surveyor program conducted uncrewed lunar landings and takeoffs, as well as taking surface and regolith observations. Despite the setback caused by the Apollo 1 fire, which killed three astronauts, the program proceeded. Apollo 8 was the first crewed spacecraft to leave low Earth orbit and the first human spaceflight to reach the Moon. The crew orbited the Moon ten times on December 24 and 25, 1968, and then traveled safely back to Earth. The three Apollo 8 astronauts (Frank Borman, James Lovell, and William Anders) were the first humans to see the Earth as a globe in space, the first to witness an Earthrise, and the first to see and manually photograph the far side of the Moon. The first lunar landing was conducted by Apollo 11. Commanded by Neil Armstrong with astronauts Buzz Aldrin and Michael Collins, Apollo 11 was one of the most significant missions in NASA's history, marking the end of the Space Race when the Soviet Union gave up its lunar ambitions. As the first human to step on the surface of the Moon, Neil Armstrong uttered the now famous words: "That's one small step for man, one giant leap for mankind." NASA would conduct six total lunar landings as part of the Apollo program, with Apollo 17 concluding the program in 1972.

==== End of Apollo ====

Wernher von Braun had advocated for NASA to develop a space station since the agency was created. In 1973, following the end of the Apollo lunar missions, NASA launched its first space station, Skylab, on the final launch of the Saturn V. Skylab reused a significant amount of Apollo and Saturn hardware, with a repurposed Saturn V third stage serving as the primary module for the space station. Damage to Skylab during its launch required spacewalks to be performed by the first crew to make it habitable and operational. Skylab hosted nine missions and was decommissioned in 1974 and deorbited in 1979, two years before the first launch of the Space Shuttle and any possibility of boosting its orbit. In 1975, the Apollo–Soyuz mission was the first ever international spaceflight and a major diplomatic accomplishment between the Cold War
https://en.wikipedia.org/wiki/NASA
In this paper, we present the metamorphic testing of an in-use deep learning based forecasting application. The application looks at past data of system characteristics (e.g. 'memory allocation') to predict outages in the future. We focus on two statistical/machine learning based components: (a) detection of correlation between system characteristics, and (b) estimation of the future value of a system characteristic using an LSTM (a deep learning architecture). In total, 19 metamorphic relations have been developed, and we provide proofs and algorithms where applicable. We evaluated our method in two settings. In the first, we executed the relations on the actual application and uncovered 8 previously unknown issues. In the second, we generated hypothetical bugs, through mutation testing, on a reference implementation of the LSTM-based forecaster and found that 65.9% of the bugs were caught by the relations.
arxiv:1907.06632
Quantum groups and quantum homogeneous spaces, developed by several authors since the '80s, provide a large class of examples of algebras which for many reasons we interpret as 'coordinate algebras' over noncommutative spaces. This dissertation is an attempt to understand them from the point of view of Connes' noncommutative geometry.
arxiv:0811.3187
er) and "practical mathematics" (including mixture, mathematical series, plane figures, stacking bricks, sawing of timber, and piling of grain). In the latter section, he stated his famous theorem on the diagonals of a cyclic quadrilateral. Brahmagupta's theorem: if a cyclic quadrilateral has diagonals that are perpendicular to each other, then the perpendicular line drawn from the point of intersection of the diagonals to any side of the quadrilateral always bisects the opposite side. Chapter 12 also included a formula for the area of a cyclic quadrilateral (a generalisation of Heron's formula), as well as a complete description of rational triangles (i.e. triangles with rational sides and rational areas). Brahmagupta's formula: the area $A$ of a cyclic quadrilateral with sides of lengths $a, b, c, d$ is given by $A = \sqrt{(s-a)(s-b)(s-c)(s-d)}$, where $s$, the semiperimeter, is given by $s = \frac{a+b+c+d}{2}$. Brahmagupta's theorem on rational triangles: a triangle with rational sides $a, b, c$ and rational area is of the form $a = \frac{u^2}{v} + v$, $b = \frac{u^2}{w} + w$, $c = \frac{u^2}{v} + \frac{u^2}{w} - (v + w)$ for some rational numbers $u$, $v$, and $w$. Chapter 18 contained 103 Sanskrit verses which began with rules for arithmetical operations involving zero and negative numbers, and is considered the first systematic treatment
https://en.wikipedia.org/wiki/Indian_mathematics
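Both of the formulas quoted above are easy to check numerically. A minimal Python sketch (the function names are ours, not from the source; Heron's formula serves as an independent check of the rational-triangle parametrization):

```python
import math

def brahmagupta_area(a, b, c, d):
    """Area of a cyclic quadrilateral with side lengths a, b, c, d:
    A = sqrt((s-a)(s-b)(s-c)(s-d)), with semiperimeter s = (a+b+c+d)/2."""
    s = (a + b + c + d) / 2
    return math.sqrt((s - a) * (s - b) * (s - c) * (s - d))

def rational_triangle(u, v, w):
    """Brahmagupta's parametrization of triangles with rational sides and area:
    a = u^2/v + v, b = u^2/w + w, c = u^2/v + u^2/w - (v + w)."""
    return (u * u / v + v, u * u / w + w, u * u / v + u * u / w - (v + w))

# A unit square is a cyclic quadrilateral with area 1.
print(brahmagupta_area(1, 1, 1, 1))  # 1.0

# u=6, v=3, w=2 gives the (15, 20, 25) right triangle; Heron's formula
# confirms its area is the rational number 150.
a, b, c = rational_triangle(6, 3, 2)
s = (a + b + c) / 2
print(a, b, c, math.sqrt(s * (s - a) * (s - b) * (s - c)))
```

Note that Heron's formula is recovered from Brahmagupta's by letting one side length shrink to zero, which is why the square-root expressions look alike.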
The Allen-Cahn equation with constant and degenerate mobility, and with polynomial and logarithmic energy functionals, is discretized using symmetric interior penalty discontinuous Galerkin (SIPG) finite elements in space. We show that the energy-stable average vector field (AVF) method, as the time integrator for gradient systems like the Allen-Cahn equation, satisfies the energy-decreasing property for the fully discrete scheme. The numerical results for the one- and two-dimensional Allen-Cahn equation with periodic boundary conditions, using adaptive time stepping, reveal that the discrete energy decreases monotonically, the phase separation and metastability phenomena can be observed, and the ripening time is detected correctly.
arxiv:1409.3997
In this paper, we study how the notions of geometric formality according to Kotschick, and other geometric formalities adapted to the Hermitian setting, evolve under the action of the Chern-Ricci flow on class VII surfaces, including Hopf and Inoue surfaces, and on Kodaira surfaces.
arxiv:1906.01424
Smooth transition autoregressive (STAR) models are widely used to capture nonlinearities in univariate and multivariate time series. Existence of a stationary solution is typically assumed, implicitly or explicitly. In this paper we describe conditions for stationarity and ergodicity of vector STAR models. The key condition is that the joint spectral radius of certain matrices is below 1, which is not guaranteed if only the separate spectral radii are below 1. Our result allows one to use recently introduced toolboxes from computational mathematics to verify the stationarity and ergodicity of vector STAR models.
arxiv:1805.11311
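The abstract's key point, that the joint spectral radius (JSR) of a matrix set can exceed 1 even when every individual spectral radius is below 1, is easy to see numerically. A hedged sketch: the matrix pair below is a standard textbook example, not taken from the paper, and the product-based estimate is only a lower bound on the JSR.

```python
import itertools
import math

def mat_mul(A, B):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def spectral_radius(M):
    """Spectral radius of a 2x2 matrix via trace and determinant."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(det)  # complex conjugate pair: |lambda|^2 = det

def jsr_lower_bound(mats, k):
    """max over all length-k products P of rho(P)^(1/k);
    this is a lower bound on the joint spectral radius of mats."""
    best = 0.0
    for word in itertools.product(mats, repeat=k):
        P = word[0]
        for M in word[1:]:
            P = mat_mul(P, M)
        best = max(best, spectral_radius(P) ** (1.0 / k))
    return best

a = 0.9
A = [[a, a], [0.0, a]]
B = [[a, 0.0], [a, a]]
# Each matrix alone has spectral radius 0.9 < 1, yet mixed products
# grow: the JSR of {A, B} is a*(1+sqrt(5))/2, roughly 1.456 > 1.
print(spectral_radius(A), spectral_radius(B), jsr_lower_bound([A, B], 8))
```

The alternating product AB already reveals the growth: its spectral radius exceeds 1 even though neither factor's does, which is exactly why separate spectral radii below 1 do not guarantee stationarity.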
We find multicenter (Majumdar-Papapetrou type) solutions of Eddington-inspired Born-Infeld gravity coupled to electromagnetic fields governed by a Born-Infeld-like Lagrangian. We construct the general solution for an arbitrary number of centers in equilibrium and then discuss the properties of their one-particle configurations, including the existence of bounces and the regularity (geodesic completeness) of these spacetimes. Our method can be used to construct multicenter solutions in other theories of gravity.
arxiv:2006.08180
We show that there exists an invertible frequently hypercyclic operator on $\ell^1(\mathbb{N})$ whose inverse is not frequently hypercyclic.
arxiv:1910.04452
The exclusive rare radiative B meson decay to the orbitally excited tensor $K_2^*(1430)$ meson is investigated in the framework of the relativistic quark model based on the quasipotential approach in quantum field theory. The calculated branching ratio $BR(B \to K_2^*(1430)\gamma) = (1.7 \pm 0.6) \times 10^{-5}$, as well as the ratio $BR(B \to K_2^*(1430)\gamma)/BR(B \to K^*(892)\gamma) = 0.38 \pm 0.08$, is found to be in good agreement with recent experimental data from CLEO.
arxiv:hep-ph/0009308
We study the problem of query attribute value extraction, which aims to identify named entities from user queries as diverse surface-form attribute values and then transform them into canonical forms. The problem consists of two phases: named entity recognition (NER) and attribute value normalization (AVN). However, existing works focus only on the NER phase and neglect the equally important AVN. To bridge this gap, this paper proposes a unified query attribute value extraction system for e-commerce search named QUEACO, which involves both phases. Moreover, by leveraging large-scale weakly-labeled behavior data, we further improve the extraction performance with less supervision cost. Specifically, for the NER phase, QUEACO adopts a novel teacher-student network, where a teacher network trained on the strongly-labeled data generates pseudo-labels to refine the weakly-labeled data for training a student network. Meanwhile, the teacher network can be dynamically adapted by feedback on the student's performance on the strongly-labeled data, to maximally denoise the noisy supervision from the weak labels. For the AVN phase, we also leverage the weakly-labeled query-to-attribute behavior data to normalize surface-form attribute values from queries into canonical forms from products. Extensive experiments on a real-world large-scale e-commerce dataset demonstrate the effectiveness of QUEACO.
arxiv:2108.08468
The problem of estimating the total mass of a visual binary when its orbit is incomplete is treated with Bayesian methods. The posterior mean of a mass estimator is approximated by a triple integral over orbital period, time of periastron, and orbital eccentricity. This reduction to 3-D from the 7-D space defined by the conventional Campbell parameters is achieved by adopting the Thiele-Innes elements and exploiting the linearity with respect to the four Thiele-Innes constants. The formalism is tested on synthetic observational data covering a variable fraction of a model binary's orbit. The posterior mean of the mass estimator is numerically found to be unbiased when the data cover > 40% of the orbit.
arxiv:1309.2868
The importance of IXPs in interconnecting different networks and exchanging traffic locally has been well studied over the last few years. However, far less is known about the role IXPs play as a platform for enabling large-scale content delivery and reaching a worldwide customer base. In this paper, we study the infrastructure deployment of a content hypergiant, Netflix, and show that the combined worldwide IXP substrate is the major cornerstone of its content delivery network. To meet its worldwide demand for high-quality video delivery, Netflix has built a dedicated CDN. Its scale allows us to study a major part of the Internet ecosystem, by observing how Netflix takes advantage of the combined capabilities of IXPs and ISPs present in different regions. We find wide disparities in the regional Netflix deployment and in traffic levels at IXPs and ISPs across various local ecosystems. This highlights the complexity of large-scale content delivery as well as differences in the capabilities of IXPs in specific regions. On a global scale, we find that the footprint provided by IXPs allows Netflix to deliver most of its traffic directly from them. This highlights the additional role that IXPs play in the Internet ecosystem: not just interconnection, but also allowing players such as Netflix to deliver significant amounts of traffic.
arxiv:1606.05519
The problem of developing binary classifiers from positive and unlabeled data is often encountered in machine learning. A common requirement in this setting is to approximate posterior probabilities of the positive and negative classes for a previously unseen data point. This problem can be decomposed into two steps: (i) the development of accurate predictors that discriminate between positive and unlabeled data, and (ii) the accurate estimation of the prior probabilities of positive and negative examples. In this work we primarily focus on the latter subproblem. We study nonparametric class prior estimation and formulate this problem as an estimation of mixing proportions in two-component mixture models, given a sample from one of the components and another sample from the mixture itself. We show that estimation of mixing proportions is generally ill-defined, and propose a canonical form to obtain identifiability while maintaining the flexibility to model any distribution. We use insights from this theory to elucidate the optimization surface of the class priors and propose an algorithm for estimating them. To address the problems of high-dimensional density estimation, we provide practical transformations to low-dimensional spaces that preserve class priors. Finally, we demonstrate the efficacy of our method on univariate and multivariate data.
arxiv:1601.01944
A method to boot a cluster of diskless network clients from a single write-protected NFS root file system is shown. The problems encountered when first implementing the setup, and their solutions, are discussed. Finally, the setup is briefly compared to using a kernel-embedded root file system.
arxiv:cs/0410007
It is unlikely that the state used as the starting point of DCC formation in the quenched approximation can be reached at high temperatures. Chiral symmetry is restored in the linear sigma model by Goldstone modes (pions), because such isospin p-wave states carry more entropy. In this paper we estimate this effect of isospin-angular motion in the mean-field approximation, assuming equipartition of the energy.
arxiv:hep-ph/9603299
This dissertation presents the results of a thorough study of ultracold bosonic and fermionic gases in three-dimensional and quasi-one-dimensional systems. Although the analyses are carried out within various theoretical frameworks (Gross-Pitaevskii, Bethe ansatz, local density approximation, etc.), the main tool of the study is the quantum Monte Carlo method in different modifications (variational Monte Carlo, diffusion Monte Carlo, and fixed-node Monte Carlo methods). We benchmark our Monte Carlo calculations by recovering known analytical results (perturbative theories in dilute limits, exactly solvable models, etc.) and extend the calculations to regimes where the results are so far unknown. In particular we calculate the equation of state and correlation functions for gases in various geometries and with various interatomic interactions.
arxiv:1412.4529
For a complex number s, the s-order integral of a function f fulfilling some conditions is defined as the action of an operator, denoted $J^s$, on f. The definition of the operator $J^s$ is given first for complex numbers s with positive real part. Then, using the fact that the one-order derivative operator, denoted $D^1$, is the left inverse of the operator $J^1$, an s-order derivative operator, denoted $D^s$, is also defined for all complex numbers s with positive real part. Finally, using the relation $J^s = D^{-s}$, the definitions of the s-order integral and s-order derivative are extended to all complex numbers s. An extension of the domain of definition of the operators is given as well.
arxiv:1209.0400
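The abstract does not spell out the operator explicitly; a common concrete realization of such a $J^s$ for Re(s) > 0 is the Riemann-Liouville integral, $(J^s f)(x) = \frac{1}{\Gamma(s)} \int_0^x (x-t)^{s-1} f(t)\, dt$. The numerical sketch below is our illustration under that assumption, not a reproduction of the paper's construction:

```python
import math

def rl_integral(f, s, x, n=20000):
    """Numerically evaluate the Riemann-Liouville integral of real order s > 0:
    (J^s f)(x) = 1/Gamma(s) * integral_0^x (x - t)^(s-1) f(t) dt,
    using the midpoint rule (midpoints sidestep the endpoint singularity
    of the kernel when 0 < s < 1)."""
    h = x / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += (x - t) ** (s - 1) * f(t)
    return total * h / math.gamma(s)

# Sanity checks against closed forms: J^1 is ordinary integration,
# and J^s t = Gamma(2)/Gamma(2+s) * x^(1+s) for the identity function.
print(rl_integral(lambda t: t, 1.0, 1.0))  # ~0.5
print(rl_integral(lambda t: t, 0.5, 1.0))  # ~Gamma(2)/Gamma(2.5)
```

With this choice the semigroup property $J^s J^r = J^{s+r}$ holds, which is what makes the extension $J^s = D^{-s}$ in the abstract natural.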
Snapshot compressive imaging (SCI) refers to compressive imaging systems where multiple frames are mapped into a single measurement, with video compressive imaging and hyperspectral compressive imaging as two representative applications. Though exciting results on high-speed videos and hyperspectral images have been demonstrated, the poor reconstruction quality precludes SCI from wide applications. This paper aims to boost the reconstruction quality of SCI by exploiting the high-dimensional structure in the desired signal. We build a joint model to integrate the nonlocal self-similarity of video/hyperspectral frames and the rank minimization approach with the SCI sensing process. Following this, an alternating minimization algorithm is developed to solve this non-convex problem. We further investigate the special structure of the sampling process in SCI to tackle the computational workload and memory issues in SCI reconstruction. Both simulation and real-data (captured by four different SCI cameras) results demonstrate that our proposed algorithm leads to significant improvements over current state-of-the-art algorithms. We hope our results will encourage researchers and engineers to pursue compressive imaging further for real applications.
arxiv:1807.07837
Confining Dirac fermions in graphene by electrostatic fields is a challenging task. Electric quantum dots created by a scanning tunneling microscope (STM) tip can trap zero-energy quasiparticles. The Lorentzian quantum well provides a faithful, exactly solvable approximation to such a potential, hosting zero-energy bound states for certain values of the coupling constant. We show that in this critical configuration, the system can be related to the free-particle model by means of a supersymmetric transformation. The revealed shape invariance of the model greatly simplifies the calculation of the zero modes and naturally explains the degeneracy of the zero energy.
arxiv:2401.13864
We describe a tracer in a bath of soft Brownian colloids by a particle coupled to the density field of the other bath particles. From the Dean equation, we derive an exact equation for the evolution of the whole system, and show that the density field evolution can be linearized in the limit of a dense bath. This linearized Dean equation with the tracer treated separately is validated by the reproduction of previous results on the mean-field liquid structure and transport properties. Then, the tracer is subjected to an external force and we compute the density profile around it, its mobility, and its diffusion coefficient. Our results exhibit effects such as bias-enhanced diffusion that are very similar to those observed in the opposite limit of a hard-core lattice gas, indicating the robustness of these effects. Our predictions are successfully tested against molecular dynamics simulations.
arxiv:1401.5515
We consider some critical claims concerning our above paper, and reply to these claims.
arxiv:1905.08553
Exciton polaritons are hybrid particles of excitons (bound electron-hole pairs) and cavity photons, which are renowned for displaying Bose-Einstein condensation and other coherent phenomena at elevated temperatures. However, their formation in semiconductor microcavities is often accompanied by the appearance of an incoherent bath of optically dark excitonic states that can interact with polaritons via their matter component. Here we show that the presence of such a dark excitonic medium can "dress" polaritons with density fluctuations to form coherent polaron-like quasiparticles, thus fundamentally modifying their character. We employ a many-body Green's function approach that naturally incorporates correlations beyond the standard mean-field theories applied to this system. With increasing exciton density, we find a reduction in the light-matter coupling that arises from the polaronic dressing cloud rather than any saturation induced by the fermionic constituents of the exciton. In particular, we observe the strongest effects when the spin of the polaritons is opposite to that of the excitonic medium. In this case, the coupling to light generates an additional polaron quasiparticle, the biexciton polariton, which emerges due to the dark-exciton counterpart of a polariton Feshbach resonance. Our results can explain recent experiments on polariton interactions in two-dimensional semiconductors and potentially provide a route to tailoring the properties of exciton polaritons and their correlations.
arxiv:2312.00985
The seesaw mechanism explains the exceptional smallness of neutrino masses by the presence of very heavy Majorana masses, and leads to the appearance of Majorana particles and to direct lepton number violation. The author proposes a seesaw scenario that produces only Dirac neutrinos with the same violation. This scenario appears possible for heavy neutrinos with non-perturbative Higgs boson couplings. Such a scenario could be realized in the model describing the structure of the weak mixing matrices for quarks and leptons through the existence of very heavy mirror analogs of Standard Model fermions. The non-perturbativity of the problem hinders an analytical solution, but the derived conditions indicate that the mechanism under consideration preferentially generates only Dirac neutrinos. This phenomenon may be relevant for leptogenesis if all existing neutrinos turn out to be of the Dirac type.
arxiv:2004.02272
For a pair of coupled rectangular random matrices we consider the squared singular values of their product, which form a determinantal point process. We show that the limiting mean distribution of these squared singular values is described by the second component of the solution to a vector equilibrium problem. This vector equilibrium problem is defined for three measures, with an upper constraint on the first measure and an external field on the second measure. We carry out the steepest descent analysis for a $4 \times 4$ matrix-valued Riemann-Hilbert problem, which characterizes the correlation kernel and is related to mixed-type multiple orthogonal polynomials associated with the modified Bessel functions. A careful study of the vector equilibrium problem, combined with this asymptotic analysis, ultimately leads to the aforementioned convergence result for the limiting mean distribution, an explicit form of the associated spectral curve, as well as local Sine, Meijer-G and Airy universality results for the squared singular values considered.
arxiv:1908.05708
We investigate the mKdV hierarchy with integral type of source (mKdVHWS), which consists of the reduced AKNS eigenvalue problem with $r = q$ and the mKdV hierarchy with an extra term given by the integral of the square eigenfunction. First we propose a method to find the explicit evolution equation for the eigenfunction of the auxiliary linear problems of the mKdVHWS. Then we determine the evolution equations of the scattering data corresponding to the mKdVHWS, which allow us to solve the equations in the mKdVHWS by the inverse scattering transformation.
arxiv:nlin/0205024
In this article, we discuss the propagation of scalar fields in conformally transformed spacetimes with either minimal or conformal coupling. The conformally coupled equation of motion is transformed into a one-dimensional Schrödinger-like equation with a potential that is invariant under conformal transformation. In a second stage, we argue that calculations based on conformal coupling yield the same Hawking temperature as those based on minimal coupling. Finally, it is conjectured that the quasinormal modes of black holes are invariant under conformal transformation.
arxiv:1403.1014
We describe an application of tropical moduli spaces to complex dynamics. A post-critically finite branched covering $\varphi$ of $S^2$ induces a pullback map on the Teichmüller space of complex structures on $S^2$; this descends to an algebraic correspondence on the moduli space of point configurations of $\mathbb{CP}^1$. We make a case for studying the action of the tropical moduli space correspondence by making explicit the connections between objects that have come up in one guise in tropical geometry and in another guise in complex dynamics. For example, a Thurston obstruction for $\varphi$ corresponds to a ray that is fixed by the tropical moduli space correspondence and scaled by a factor $\ge 1$. This article is intended to be accessible to algebraic and tropical geometers as well as to complex dynamicists.
arxiv:2402.14421
Vision transformers have achieved impressive performance on many vision tasks. However, they may suffer from high redundancy in capturing local features in shallow layers. Local self-attention or early-stage convolutions are thus utilized, which sacrifice the capacity to capture long-range dependencies. A challenge then arises: can we achieve efficient and effective global context modeling in the early stages of a neural network? To address this issue, we draw inspiration from the design of superpixels, which reduce the number of image primitives in subsequent processing, and introduce super tokens into the vision transformer. Super tokens attempt to provide a semantically meaningful tessellation of visual content, reducing the token number in self-attention while preserving global modeling. Specifically, we propose a simple yet strong super token attention (STA) mechanism with three steps: the first samples super tokens from visual tokens via sparse association learning, the second performs self-attention on the super tokens, and the last maps them back to the original token space. STA decomposes vanilla global attention into multiplications of a sparse association map and a low-dimensional attention, leading to high efficiency in capturing global dependencies. Based on STA, we develop a hierarchical vision transformer. Extensive experiments demonstrate its strong performance on various vision tasks. In particular, without any extra training data or labels, it achieves 86.4% top-1 accuracy on ImageNet-1K with fewer than 100M parameters. It also achieves 53.9 box AP and 46.8 mask AP on the COCO detection task, and 51.9 mIoU on the ADE20K semantic segmentation task. Code is released at https://github.com/hhb072/stvit.
arxiv:2211.11167
Previous methods have dealt with discrete manipulation of facial attributes such as smile, sad, angry, surprise, etc., out of canonical expressions; they are not scalable and operate in a single modality. In this paper, we propose a novel framework that supports continuous edits and multi-modality portrait manipulation using adversarial learning. Specifically, we adapt cycle-consistency to the conditional setting by leveraging additional facial landmark information. This has two effects: first, cycle mapping induces bidirectional manipulation and identity preservation; second, paired samples from different modalities can thus be utilized. To ensure high-quality synthesis, we adopt a texture loss that enforces texture consistency, and multi-level adversarial supervision that facilitates gradient flow. Quantitative and qualitative experiments show the effectiveness of our framework in performing flexible and multi-modality portrait manipulation with photo-realistic effects.
arxiv:1807.01826
We investigate a conjecture that describes the characters of large families of RCFTs in terms of contour integrals of Feigin-Fuchs type. We provide a simple algorithm to determine the modular S-matrix for arbitrary numbers of characters as a sum over paths. Thereafter we focus on the cases of 2, 3 and 4 characters, where agreement between the critical exponents of the integrals and the characters implies that the conjecture is true. In these cases, we compute the modular S-matrix explicitly, verify that it agrees with expectations for known theories, and use it to compute degeneracies and multiplicities of primaries. We also compute S in an 8-character example to provide additional evidence for the original conjecture. Along the way we note that the Verlinde formula provides interesting constraints on the critical exponents of RCFTs in this context.
arxiv:1912.04298
This article describes an application of three well-known statistical methods to the field of game-tree search: using a large number of classified Othello positions, feature weights for evaluation functions with a game-phase-independent meaning are estimated by means of logistic regression, Fisher's linear discriminant, and the quadratic discriminant function for normally distributed features. Thereafter, the playing strengths are compared by means of tournaments between the resulting versions of a world-class Othello program. In this application, logistic regression, which is used here for the first time in the context of game playing, leads to better results than the other approaches.
arxiv:cs/9512106
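As a hedged illustration of the first of the three methods, here is a minimal batch-gradient-descent logistic regression fitting feature weights on toy "position" data. The features, labels, and hyperparameters are invented for this sketch; the paper's actual Othello features and training set are not reproduced here.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit weights w and bias b so that sigmoid(w.x + b) approximates
    P(position is a win), via batch gradient descent on the log-loss."""
    n_feat = len(xs[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_feat
        gb = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the logit
            for i in range(n_feat):
                gw[i] += err * x[i]
            gb += err
        for i in range(n_feat):
            w[i] -= lr * gw[i] / len(xs)
        b -= lr * gb / len(xs)
    return w, b

# Toy "positions": feature vector (disc difference, corner count),
# labeled 1 (win) when a weighted sum favoring corners is positive.
random.seed(0)
xs = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
ys = [1 if 0.5 * x1 + 2.0 * x2 > 0 else 0 for x1, x2 in xs]
w, b = fit_logistic(xs, ys)
print(w, b)  # the corner weight should dominate, mirroring the labels
```

The fitted weights can then be plugged directly into an evaluation function, which is the appeal of this method in the game-playing setting: the sigmoid output has a probabilistic interpretation across game phases.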
Nonlinear frequency conversion plays a crucial role in advancing the functionality of next-generation optical systems. Portable metrology references and quantum networks will demand highly efficient second-order nonlinear devices, and the intense nonlinear interactions of nanophotonic waveguides can be leveraged to meet these requirements. Here we demonstrate second harmonic generation (SHG) in GaAs-on-insulator waveguides with an unprecedented efficiency of 40 W$^{-1}$ for a single-pass device. This result is achieved by minimizing the propagation loss and optimizing phase-matching. We investigate surface-state absorption and design the waveguide geometry for modal phase-matching with tolerance to fabrication variation. A 2.0 $\mu$m pump is converted to a 1.0 $\mu$m signal in a length of 2.9 mm with a wide signal bandwidth of 148 GHz. Tunable and efficient operation is demonstrated over a temperature range of 45 $^{\circ}$C with a slope of 0.24 nm/$^{\circ}$C. Wafer bonding between GaAs and SiO$_2$ is optimized to minimize waveguide loss, and the devices are fabricated on 76 mm wafers with high uniformity. We expect this device to enable fully integrated self-referenced frequency combs and high-rate entangled photon pair generation.
arxiv:1912.12346
in this work, we investigate the effects of logarithms on the asymptotic behavior of power expansion / ope in super - renormalizable qfts. we perform a careful investigation of the large $ p ^ 2 $ expansion of a scalar - scalar two - point function at the next - to - leading order in the large - $ n $ expansion, in a large - $ n $ $ o ( n ) $ quartic model that is populated by logarithms. we show that because the large - $ p ^ 2 $ logarithms of the individual bubbles can be amplified by bubble - chains, there are factorial enhancements to the power expansion. we show how the factorial enhancements appear separately in the coefficient functions and operator condensates, and demonstrate how they are canceled off - diagonally across different powers. restricted to any given power, the factorial enhancements are no longer canceled. the large - $ p ^ 2 $ power expansion is divergent.
arxiv:2502.02031
we study an anderson impurity in a semiconducting host using the density matrix renormalization group technique. we use the $ u = 0 $ one - dimensional anderson hamiltonian at half filling as the semiconducting host since it has a hybridization gap. by varying the hybridization of the host, we can control the size of the semiconducting gap. we consider chains with 25 sites and we place the anderson impurity ( with $ u > 0 $ ) in the middle of the chain. we dope the half - filled system with one hole and we find two regimes : when the hybridization of the impurity is small, the hole density and the spin are localized near the impurity. when the hybridization of the impurity is large, the hole and spin density are spread over the lattice. additional holes avoid the impurity and are extended throughout the lattice. away from half - filling, the semiconductor with an impurity is analogous to a double well potential with a very high barrier. we also examine the chemical potential as a function of electron filling, and we find that the impurity introduces midgap states when the impurity hybridization is small.
arxiv:cond-mat/9603175
approximate nearest neighbor search ( anns ), which enables efficient semantic similarity search in large datasets, has become a fundamental component of critical applications such as information retrieval and retrieval - augmented generation ( rag ). however, anns is a well - known i / o - intensive algorithm with a low compute - to - i / o ratio, often requiring massive storage due to the large volume of high - dimensional data. this leads to i / o bottlenecks on cpus and memory limitations on gpus. dram - based processing - in - memory ( dram - pim ) architecture, which offers high bandwidth, large - capacity memory, and the ability to perform efficient computation in or near the data, presents a promising solution for anns. in this work, we investigate the use of commercial dram - pim for anns for the first time and propose drim - ann, an optimized anns engine based on dram - pims from upmem. notably, given that the target dram - pim exhibits an even lower compute - to - i / o ratio than basic anns, we leverage lookup tables ( luts ) to replace more multiplications with i / o operations. we then systematically tune anns to search for optimized configurations with lower computational load, aligning the compute - to - i / o ratio of anns with that of dram - pims while maintaining accuracy constraints. building on this tuned anns algorithm, we further explore implementation optimizations to fully utilize the two thousand parallel processing units with private local memory in dram - pims. to address the load imbalance caused by anns requests distributed across different clusters of large datasets, we propose a load - balancing strategy that combines static data layout optimization with dynamic runtime request scheduling. experimental results on representative datasets show that drim - ann achieves an average performance speedup of 2. 92x compared to a 32 - thread cpu counterpart.
arxiv:2410.15621
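the lut trick mentioned above can be illustrated with a product - quantization - style sketch ( hedged : the codebooks and data below are random toys, not the drim - ann implementation ) :

```python
import numpy as np

# each database vector is stored as m small centroid ids; per query, a lookup
# table of subspace distances is built once, and every candidate distance then
# becomes m table lookups plus adds -- no multiplications per candidate.
rng = np.random.default_rng(1)
d, m, ks = 8, 4, 16                        # dim, subspaces, centroids per subspace
sub = d // m
codebooks = rng.normal(size=(m, ks, sub))
db = rng.integers(0, ks, size=(100, m))    # compressed database codes

q = rng.normal(size=d)
lut = np.stack([((codebooks[j] - q[j * sub:(j + 1) * sub]) ** 2).sum(axis=1)
                for j in range(m)])        # shape (m, ks), built once per query
dists = lut[np.arange(m), db].sum(axis=1)  # lookup-dominated, i/o-friendly
```

by construction, the looked - up value equals the exact squared distance from the query to the reconstructed ( quantized ) database vector.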
this analysis is based on the same ideas and numerical inputs of the recent paper by ciuchini et al. on the subject. some approximations are applied, which make analytical calculations applicable in most of the work, thus avoiding monte carlo integration. the final result is practically identical to the one obtained by the more detailed numerical analysis.
arxiv:hep-ph/0107067
we investigate the properties of conduction electrons in single - walled armchair carbon nanotubes ( swnt ) in the presence of both transverse electric and magnetic fields. we find that these fields provide a controlled means of tuning low - energy band structure properties such as inducing gaps in the spectrum, breaking various symmetries and altering the fermi velocities. we show that the fields can strongly affect electron - electron interaction, yielding tunable luttinger liquid physics, the possibility of spin - charge - band separation, and a competition between spin - density - wave and charge - density - wave order. for short tubes, the fields can alter boundary conditions and associated single - particle level spacings as well as quantum dot behavior.
arxiv:0812.1851
we consider the problem of designing auctions which maximize consumer surplus ( i. e., the social welfare minus the payments charged to the buyers ). in the consumer surplus maximization problem, a seller with a set of goods faces a set of strategic buyers with private values, each of whom aims to maximize their own individual utility. the seller, in contrast, aims to allocate the goods in a way which maximizes the total buyer utility. the seller must then elicit the values of the buyers in order to decide what goods to award each buyer. the canonical approach in mechanism design to ensure truthful reporting of the private information is to find appropriate prices to charge each buyer in order to align their objective with the objective of the seller. indeed, there are many celebrated results to this end when the seller ' s objective is welfare maximization [ clarke, 1971, groves, 1973, vickrey, 1961 ] or revenue maximization [ myerson, 1981 ]. however, in the case of consumer surplus maximization the picture is less clear - - using high payments to ensure the highest value bidders are served necessarily decreases their surplus utility, but using low payments may lead the seller into serving lower value bidders. our main result in this paper is a framework for designing mechanisms which maximize consumer surplus. we instantiate our framework in a variety of canonical multi - parameter auction settings ( i. e., unit - demand bidders with heterogeneous items, multi - unit auctions, and auctions with divisible goods ) and use it to design auctions achieving consumer surplus with optimal approximation guarantees against the total social welfare. along the way, we answer an open question posed by hartline and roughgarden [ 2008 ] for the two bidders single item setting.
arxiv:2402.16972
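a two - bidder, one - item toy example ( numbers purely illustrative ) makes the described tension concrete :

```python
# one item, two bidders with private values v_high > v_low.
v_high, v_low = 10.0, 6.0

# second-price (vcg) auction: truthful and welfare-optimal, but the winner
# pays the second-highest value, which eats into consumer surplus.
surplus_vcg = v_high - v_low        # 4.0

# zero payments would leave surplus v_high if the high-value bidder still won,
# but without payments bidders have no incentive to report truthfully, so the
# seller risks serving the low-value bidder instead.
surplus_best_case = v_high          # 10.0
surplus_worst_case = v_low          # 6.0
```

high payments guarantee the right winner but shrink surplus ; low payments raise potential surplus but lose the incentive guarantee, which is exactly the trade - off the proposed framework navigates.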
spectral formation in steady state, spherical accretion onto neutron stars and black holes is examined by solving numerically and analytically the equation of radiative transfer. the photons escape diffusively and their energy gains come from their scattering off thermal electrons in the converging flow of the accreting gas. we show that the bulk motion of the flow is more efficient in upscattering photons than thermal comptonization in the range of non - relativistic electron temperatures. the spectrum observed at infinity is a power law with an exponential turnover at energies of order the electron rest mass. especially in the case of accretion into a black hole, the spectral energy power - law index is distributed around 1. 5. because bulk motion near the horizon ( 1 - 5 schwarzschild radii ) is most likely a necessary characteristic of accretion into a black hole, we claim that observation of an extended power law up to about the electron rest mass, formed as a result of bulk motion comptonization, is real observational evidence for the existence of an underlying black hole.
arxiv:astro-ph/9611084
evolution at high - z protoclusters, although future observations are necessary for confirmation.
arxiv:1705.10330
although luhmann formulated with modesty and precaution, for example in die wissenschaft der gesellschaft ( 1990a, at pp. 412f. ), that his theory claims to be a universal one because it is self - referential, the " operational closure " that follows from this assumption easily generates a problem for empirical research. can a theory which considers society - - and science as one of its subsystems - - operationally closed, nevertheless contribute to the project of enlightenment which popper ( 1945 ) so vigorously identified as the driver of an open society? how can a theory which proclaims itself to be circular and universal nevertheless claim to celebrate " the triumph of the enlightenment " ( luhmann, 1990a, at p. 548 )? is the lack of an empirical program of research building on luhmann ' s theory fortuitous or does it indicate that this theory should be considered as a philosophy rather than a heuristic for the explanation of operations in social systems?
arxiv:0911.1041
reconfigurable intelligent surfaces ( ris ) show great promise in the realm of 6th generation ( 6g ) wireless systems, particularly in the areas of localization and communication. their cost - effectiveness and energy efficiency enable the integration of numerous passive and reflective elements, enabling near - field propagation. in this paper, we tackle the challenges of ris - aided 3d localization and synchronization in multipath environments, focusing on the near - field of mmwave systems. specifically, our approach involves formulating a maximum likelihood ( ml ) estimation problem for the channel parameters. to initiate this process, we leverage a combination of canonical polyadic decomposition ( cpd ) and orthogonal matching pursuit ( omp ) to obtain coarse estimates of the time of arrival ( toa ) and angle of departure ( aod ) under the far - field approximation. subsequently, distances are estimated using $ l _ { 1 } $ - regularization based on a near - field model. additionally, we introduce a refinement phase employing the spatial alternating generalized expectation maximization ( sage ) algorithm. finally, a weighted least squares approach is applied to convert channel parameters into position and clock offset estimates. to extend the estimation algorithm to ultra - large ( ul ) ris - assisted localization scenarios, it is further enhanced to reduce errors associated with far - field approximations, especially in the presence of significant near - field effects, achieved by narrowing the ris aperture. moreover, the cram \ ' er - rao bound ( crb ) is derived and the ris phase shifts are optimized to improve the positioning accuracy. numerical results affirm the efficacy of the proposed estimation algorithm.
arxiv:2403.06460
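the omp step used above for the coarse estimates can be sketched generically ( hedged : the dictionary below is random gaussian, not a near - field channel model, and this is not the paper ' s implementation ) :

```python
import numpy as np

# greedy sparse recovery: repeatedly pick the dictionary atom most correlated
# with the residual, then re-fit the coefficients on the selected support.
def omp(A, y, k):
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    return support, coef, r

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 60))
A /= np.linalg.norm(A, axis=0)             # unit-norm atoms
x0 = np.zeros(60)
x0[5], x0[17] = 2.0, -1.5                  # 2-sparse ground truth
support, coef, resid = omp(A, A @ x0, 2)
```

in the paper the sparse coefficients index delays and angles ; here the sketch only shows the greedy selection - and - refit loop.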
surfaces admitting flows all of whose orbits are dense are called minimal. minimal orientable surfaces were characterized by j. c. beni \ ` { e } re in 1998, leaving open the nonorientable case. this paper fills this gap by providing a characterization of minimal nonorientable surfaces of finite genus. we also construct an example of a minimal nonorientable surface with infinite genus and conjecture that any nonorientable surface without combinatorial boundary is minimal.
arxiv:1608.08788
in this paper we analyze the performance of single stream and multi - stream spatial multiplexing ( sm ) systems employing opportunistic scheduling in the presence of interference. in the proposed downlink framework, every active user reports the post - processing signal - to - interference - plus - noise - power - ratio ( post - sinr ) or the receiver specific mutual information ( mi ) to its own transmitter using a feedback channel. the combination of scheduling and multi - antenna receiver processing leads to substantial interference suppression gain. specifically, we show that opportunistic scheduling exploits spatial interference alignment ( sia ) property inherent to a multi - user system for effective interference mitigation. we obtain bounds for the outage probability and the sum outage capacity for single stream and multi stream sm employing real or complex encoding for a symmetric interference channel model. the techniques considered in this paper are optimal in different operating regimes. we show that the sum outage capacity can be maximized by reducing the sm rate to a value less than the maximum allowed value. the optimum sm rate depends on the number of interferers and the number of available active users. in particular, we show that the generalized multi - user sm ( mu sm ) method employing real - valued encoding provides a performance that is either comparable, or significantly higher than that of mu sm employing complex encoding. a combination of analysis and simulation is used to describe the trade - off between the multiplexing rate and sum outage capacity for different antenna configurations.
arxiv:1307.3701
accurate and consistent mental interpretation of fluoroscopy to determine the position and orientation of acetabular bone fragments in 3d space is difficult. we propose a computer assisted approach that uses a single fluoroscopic view and quickly reports the pose of an acetabular fragment without any user input or initialization. intraoperatively, but prior to any osteotomies, two constellations of metallic ball - bearings ( bbs ) are injected into the wing of a patient ' s ilium and lateral superior pubic ramus. one constellation is located on the expected acetabular fragment, and the other is located on the remaining, larger, pelvis fragment. the 3d locations of each bb are reconstructed using three fluoroscopic views and 2d / 3d registrations to a preoperative ct scan of the pelvis. the relative pose of the fragment is established by estimating the movement of the two bb constellations using a single fluoroscopic view taken after osteotomy and fragment relocation. bb detection and inter - view correspondences are automatically computed throughout the processing pipeline. the proposed method was evaluated on a multitude of fluoroscopic images collected from six cadaveric surgeries performed bilaterally on three specimens. mean fragment rotation error was 2. 4 + / - 1. 0 degrees, mean translation error was 2. 1 + / - 0. 6 mm, and mean 3d lateral center edge angle error was 1. 0 + / - 0. 5 degrees. the average runtime of the single - view pose estimation was 0. 7 + / - 0. 2 seconds. the proposed method demonstrates accuracy similar to other state of the art systems which require optical tracking systems or multiple - view 2d / 3d registrations with manual input. the errors reported on fragment poses and lateral center edge angles are within the margins required for accurate intraoperative evaluation of femoral head coverage.
arxiv:1910.10187
we evaluate the van der waals ( vdw ) interaction energy at zero temperature between two undoped strained graphene layers separated by a finite distance. we consider the following three models for the anisotropic case : ( a ) where one of the two layers is uniaxially strained, ( b ) the two layers are strained in the same direction, and ( c ) one of the layers is strained in the perpendicular direction with respect to the other. we find that for all three models and given value of the electron - electron interaction coupling, the vdw interaction energy increases with increasing anisotropy. the effect is most striking for the case when both layers are strained in the same direction where we observe up to an order of magnitude increase in the strained relative to the unstrained case. we also investigate the effect of electron - electron interaction renormalization in the region of large separation between the strained graphene layers. we find that the many - body renormalization contributions to the correlation energy are non - negligible and the vdw interaction energy decreases as a function of increasing distance between the layers due to renormalization of the fermi velocity, the anisotropy, and the effective interaction. our analysis can be useful in designing novel graphene - based vdw heterostructures, which in recent times have seen an upsurge in research activity.
arxiv:1402.3369
the character integral representation of one loop partition functions is useful to establish the relation between partition functions of conformal fields on weyl equivalent spaces. the euclidean space $ s ^ a \ times ads _ b $ can be mapped to $ s ^ { a + b } $ provided $ s ^ a $ and $ ads _ b $ are of the same radius. as an example, to begin with, we show that the partition functions in the character integral representation of conformally coupled free scalars and fermions are identical on $ s ^ a \ times ads _ b $ and $ s ^ { a + b } $. we then demonstrate that the partition functions of higher derivative conformal scalars and fermions are also the same on hyperbolic cylinders and branched spheres. the partition function of the four - derivative conformal vector gauge field on the branched sphere in $ d = 6 $ dimension can be expressed as an integral over ` naive ' bulk and ` naive ' edge characters. however, the partition function of the conformal vector gauge field on $ s ^ 1 _ q \ times ads _ 5 $ contains only the ` naive ' bulk part of the partition function. this follows the same pattern which was observed for the partition functions of conformal $ p $ - form fields on hyperbolic cylinders. we use the partition function of higher derivative conformal fields on hyperbolic cylinders to obtain a linear relationship between the hofman - maldacena variables which enables us to show that these theories are non - unitary.
arxiv:2108.00929
we prove a law of large numbers for the range of rotor walks with random initial configuration on regular trees and on galton - watson trees. more precisely, we show that on the classes of trees under consideration, even in the case when the rotor walk is recurrent, the range grows at linear speed. we also show the existence of the speed for such rotor walks.
arxiv:1805.05746
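the rotor mechanism itself is simple to simulate ; a minimal sketch on a cycle ( hedged : not a tree, and with a deterministic initial rotor configuration rather than the random one studied in the paper ) :

```python
# rotor walk: each vertex cycles deterministically through its neighbors;
# the walker moves along the current rotor, which is then advanced.
n = 11
neighbors = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
rotor = {v: 0 for v in range(n)}            # initial rotor configuration

pos, visited = 0, {0}
for _ in range(200):
    pos_next = neighbors[pos][rotor[pos]]
    rotor[pos] = (rotor[pos] + 1) % len(neighbors[pos])
    pos = pos_next
    visited.add(pos)
# the range is the set of visited vertices; the paper shows its size grows
# at linear speed in the number of steps on the trees under consideration
```

on this small cycle the walk quickly covers every vertex ; the interesting regime in the paper is the growth rate of the range on infinite trees.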
with the emergence of smart grids as the primary means of distribution across wide areas, the importance of improving its resilience to faults and mishaps is increasing. the reliability of a distribution system depends upon its tolerance to attacks and the efficiency of restoration after an attack occurs. this paper proposes a unique approach to the restoration of smart grids under attack by impostors or due to natural calamities via optimal islanding of the grid with primary generators and distributed generators ( dgs ) into sub - grids minimizing the amount of load shed which needs to be incurred and at the same time minimizing the number of switching operations via graph theory. the minimum load which needs to be shed is computed in the first stage followed by selecting the nodes whose load needs to be shed to achieve such a configuration and then finally deriving the sequence of switching operations required to achieve the configuration. the proposed method is tested against standard ieee 37 - bus and a 1069 - bus grid system and the minimum load shed along with the sequencing steps to optimal configuration and time to achieve such a configuration are presented which demonstrates the effectiveness of the method when compared to the existing methods in the field. moreover, the proposed algorithm can be easily modified to incorporate any other constraints which might arise due to any operational configuration of the grid.
arxiv:2011.03214
we show that fermion systems with random interactions lead to strong coupling of glassy order and fermionic correlations, which culminates in the implementation of parisi replica permutation symmetry breaking ( rpsb ) in their t = 0 quantum field theories. precursor effects below fermionic at - lines become stronger as the temperature decreases and play a crucial role within the entire low t regime. the parisi ultrametric structure is shown to determine low energy excitations and the dynamic behaviour of fermionic correlations for large times, which is predicted to affect transport properties in metallic ( and superconducting ) spin glasses. thus we reveal quantum dynamical fingerprints of the parisi scheme. these effects, being strongest as t - > 0, are contrasted with quantum spin glass transitions at t = 0 displaying only small rpsb corrections at low t. rpsb - effects moreover appear to influence the loci of the ground state transitions at o ( t ^ 0 ) and hence the phase diagrams. we derive a new representation of the t = 0 green ' s function which leads to a map of the fermionic ( insulating ) spin glass solution to the local limit solution of a hubbard model with a random repulsive interaction. we obtain the distribution of the hubbard interaction fluctuation and its dependence on the order of rpsb. a generalized mapping between metallic spin glass and random u hubbard model is conjectured. the new representation of the green ' s function at t = 0 is suggested to be useful for generalizations to superconductors with spin glass phases.
arxiv:cond-mat/9803239
we use the wise - 2mass infrared galaxy catalog matched with pan - starrs1 ( ps1 ) galaxies to search for a supervoid in the direction of the cosmic microwave background cold spot. our imaging catalog has median redshift $ z \ simeq 0. 14 $, and we obtain photometric redshifts from ps1 optical colours to create a tomographic map of the galaxy distribution. the radial profile centred on the cold spot shows a large low density region, extending over 10 ' s of degrees. motivated by previous cosmic microwave background results, we test for underdensities within two angular radii, $ 5 ^ \ circ $, and $ 15 ^ \ circ $. the counts in photometric redshift bins show significantly low densities at high detection significance, $ \ gtrsim 5 \ sigma $ and $ \ gtrsim 6 \ sigma $, respectively, for the two fiducial radii. the line - of - sight position of the deepest region of the void is $ z \ simeq 0. 15 - 0. 25 $. our data, combined with an earlier measurement by granett et al. 2010, are consistent with a large $ r _ { \ rm void } = ( 220 \ pm 50 ) h ^ { - 1 } mpc $ supervoid with $ \ delta _ { m } \ simeq - 0. 14 \ pm 0. 04 $ centered at $ z = 0. 22 \ pm0. 03 $. such a supervoid, constituting at least a $ \ simeq 3. 3 \ sigma $ fluctuation in a gaussian distribution of the $ \ lambda cdm $ model, is a plausible cause for the cold spot.
arxiv:1405.1566
we report the discovery of bright, fast, radio flashes lasting tens of seconds with the aartfaac high - cadence all - sky survey at 60 mhz. the vast majority of these coincide with known, bright radio sources that brighten by factors of up to 100 during such an event. we attribute them to magnification events induced by plasma near the earth, most likely in the densest parts of the ionosphere. they can occur both in relative isolation, during otherwise quiescent ionospheric conditions, and in large clusters during more turbulent ionospheric conditions. using a toy model, we show that the more extreme ( up to a factor of 100 or so ) magnification events likely originate in the region of peak electron density in the ionosphere, at an altitude of 300 - 400 km. distinguishing these events from genuine astrophysical transients is imperative for future surveys searching for low frequency radio transients at timescales below a minute.
arxiv:2003.11138
this paper deals with the two fundamental problems concerning the handling of large n - gram language models : indexing, that is compressing the n - gram strings and associated satellite data without compromising their retrieval speed ; and estimation, that is computing the probability distribution of the strings from a large textual source. regarding the problem of indexing, we describe compressed, exact and lossless data structures that achieve, at the same time, high space reductions and no time degradation with respect to state - of - the - art solutions and related software packages. in particular, we present a compressed trie data structure in which each word following a context of fixed length k, i. e., its preceding k words, is encoded as an integer whose value is proportional to the number of words that follow such context. since the number of words following a given context is typically very small in natural languages, we lower the space of representation to compression levels that were never achieved before. despite the significant savings in space, our technique introduces a negligible penalty at query time. regarding the problem of estimation, we present a novel algorithm for estimating modified kneser - ney language models, that have emerged as the de - facto choice for language modeling in both academia and industry, thanks to their relatively low perplexity performance. estimating such models from large textual sources poses the challenge of devising algorithms that make a parsimonious use of the disk. the state - of - the - art algorithm uses three sorting steps in external memory : we show an improved construction that requires only one sorting step thanks to exploiting the properties of the extracted n - gram strings. with an extensive experimental analysis performed on billions of n - grams, we show an average improvement of 4. 5x on the total running time of the state - of - the - art approach.
arxiv:1806.09447
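the context - dependent remapping at the heart of the trie compression can be sketched as follows ( toy corpus, context length k = 1 ) :

```python
from collections import defaultdict

# within each context of length k, the words that follow it are renumbered
# 0, 1, 2, ... in order of first appearance; since few distinct words follow
# a given context in natural language, the resulting integers stay small and
# compress far better than global vocabulary ids.
corpus = "the cat sat on the mat the cat ran".split()
k = 1
followers = defaultdict(dict)    # context -> {word: small context-local id}
codes = []
for i in range(k, len(corpus)):
    ctx, w = tuple(corpus[i - k:i]), corpus[i]
    ids = followers[ctx]
    codes.append(ids.setdefault(w, len(ids)))
# codes == [0, 0, 0, 0, 1, 0, 0, 1]: each id fits in 1 bit here, versus
# log2(vocabulary size) bits for a global word id
```

the real data structure stores these context - local integers in a compressed trie ; the sketch only shows why the integers are small.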
the dispersion relation of the em field in a chiral medium is discussed from the viewpoint of constitutive equations to be used as a partner of the maxwell equations. the popular form of the drude - born - fedorov ( dbf ) constitutive equations is criticized via a comparison with the first - principles macroscopic constitutive equations. the two sets of equations show a decisive difference in the dispersion curve in the resonant region of chiral, left - handed character, in the form of presence or absence of linear crossing at k = 0. the dbf equations could be used only as a phenomenology in the off - resonant region.
arxiv:1501.01078
improvements in quantum technology are threatening our daily cybersecurity, as a capable quantum computer can break all currently employed asymmetric cryptosystems. in preparation for the quantum era, the national institute of standards and technology ( nist ) initiated in 2016 a standardization process for public - key encryption ( pke ) schemes, key - encapsulation mechanisms ( kem ) and digital signature schemes. in 2023, nist made an additional call for post - quantum signatures. with this chapter we aim at providing a survey on code - based cryptography, focusing on pkes and signature schemes. we cover the main frameworks introduced in code - based cryptography and analyze their security assumptions. we provide the mathematical background in a lecture notes style, with the intention of reaching a wider audience.
arxiv:2201.07119
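the decoding problems underlying these schemes can be illustrated with a toy syndrome decoder for the [ 7, 4 ] hamming code ( hedged : real code - based systems use large random - looking codes precisely because such easy decoding is then unavailable to an attacker ) :

```python
import numpy as np

# parity-check matrix whose j-th column is the binary representation of j,
# so the syndrome of a single-bit error directly spells the error position.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def correct_single_error(r):
    s = H @ r % 2
    pos = int(s[0] + 2 * s[1] + 4 * s[2])   # syndrome read as a 1-based position
    if pos:
        r = r.copy()
        r[pos - 1] ^= 1                      # flip the erroneous bit back
    return r
```

for a structured code like this, the syndrome reveals the error instantly ; the security assumption in code - based cryptography is that the same task is hard for a random code.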
any non - affine one - to - one binary gate can be wired together with suitable inputs to give and, or, not and fan - out gates, and so suffices to construct a general - purpose computer.
arxiv:1504.03376
the lensing properties of superconducting cosmic strings endowed with a time dependent pulse of lightlike current are investigated. the metric outside the core of the string belongs to the $ pp $ - wave class, with a deficit angle. we study the field theoretic bosonic witten model coupled to gravity, and we show that the full metric ( both outside and inside the core ) is a taub - kerr - schild generalization of that for the static string with no current. it is shown that the double image due to the deficit angle evolves in an unambiguous way as a pulse of lightlike current passes between the source and the observer. observational consequences of this signature of the existence of cosmic strings are briefly discussed.
arxiv:gr-qc/9403025
we calculate the almost sure hausdorff dimension of the random covering set $ \ limsup _ { n \ to \ infty } ( g _ n + \ xi _ n ) $ in $ d $ - dimensional torus $ \ mathbb t ^ d $, where the sets $ g _ n \ subset \ mathbb t ^ d $ are parallelepipeds, or more generally, linear images of a set with nonempty interior, and $ \ xi _ n \ in \ mathbb t ^ d $ are independent and uniformly distributed random points. the dimension formula, derived from the singular values of the linear mappings, holds provided that the sequences of the singular values are decreasing.
arxiv:1207.3615
the strong coupling constant is a fundamental parameter of nature. it can be extracted from experiments measuring three - jet events in electron - positron annihilation. for this extraction precise theoretical calculations for jet rates and event shapes are needed. in this talk i will discuss the nnlo calculation for these observables.
arxiv:1001.1281
we present the chandra acis and asca gis results for a series of four long - term observations on doar 21, roxs 21 and roxs 31 ; the x - ray brightest t - tauri stars ( ttss ) in the rho ophiuchi cloud. in the four observations with a net exposure of ~ 600 ksec, we found six, three and two flares from doar 21, roxs 21 and roxs 31, respectively ; hence the flare rate is fairly high. the spectra of doar 21 are well fitted with a single - temperature plasma model, while those of roxs 21 and roxs 31 need an additional soft plasma component. since doar 21 is younger ( ~ 10 ^ 5 yr ) than roxs 21 and roxs 31 ( ~ 10 ^ 6 yr ), these results may indicate that the soft component gradually increases as t - tauri stars age. the abundances are generally sub - solar and vary from element to element. both high - fip ( first ionization potential ) and low - fip elements show enhancement over the mean abundances. an unusual giant flare is detected from roxs 31. the peak luminosity and temperature are ~ 10 ^ 33 ergs s ^ - 1 and ~ 10 kev, respectively. the temperature reaches its peak value before the flux maximum, and is nearly constant ( 4 - - 5 kev ) during the decay phase, indicating successive energy release during the flare. the abundances and absorption show dramatic variability from the quiescent to flare phase.
arxiv:astro-ph/0202287
based on the semiclassical tunneling method, we focus on charged fermions tunneling from the higher - dimensional reissner - nordstr \ " { o } m black hole. we first simplify the dirac equation by semiclassical approximation, and then a semiclassical hamilton - jacobi equation is obtained. using the hamilton - jacobi equation, we study the hawking temperature and fermion tunneling rate at the event horizon of the higher - dimensional reissner - nordstr \ " { o } m black hole spacetime. finally, the correct entropy is calculated by the method beyond semiclassical approximation.
arxiv:0903.1983
context. the class 0 protostellar binary iras 16293 - 2422 is an interesting target for ( sub ) millimeter observations due to, both, the rich chemistry toward the two main components of the binary and its complex morphology. its proximity to earth allows the study of its physical and chemical structure on solar system scales using high angular resolution observations. such data reveal a complex morphology that cannot be accounted for in traditional, spherical 1d models of the envelope. aims. the purpose of this paper is to study the environment of the two components of the binary through 3d radiative transfer modeling and to compare with data from the atacama large millimeter / submillimeter array. such comparisons can be used to constrain the protoplanetary disk structures, the luminosities of the two components of the binary and the chemistry of simple species. methods. we present 13co, c17o and c18o j = 3 - 2 observations from the alma protostellar interferometric line survey ( pils ), together with a qualitative study of the dust and gas density distribution of iras 16293 - 2422. a 3d dust and gas model including disks and a dust filament between the two protostars is constructed which qualitatively reproduces the dust continuum and gas line emission. results and conclusions. radiative transfer modeling of source a and b, with the density solution of an infalling, rotating collapse or a protoplanetary disk model, can match the constraints for the disk - like emission around source a and b from the observed dust continuum and co isotopologue gas emission. if a protoplanetary disk model is used around source b, it has to have an unusually high scale - height in order to reach the dust continuum peak emission value, while fulfilling the other observational constraints. our 3d model requires source a to be much more luminous than source b ; la ~ 18 $ l _ \ odot $ and lb ~ 3 $ l _ \ odot $.
arxiv:1712.06984
we present a physically reasonable source for a static, axially - symmetric solution to the einstein equations. arguments are provided supporting our belief that the exterior space - time produced by such a source, describing a quadrupole correction to the schwarzschild metric, is particularly suitable ( among known solutions of the weyl family ) for discussing the properties of quasi - spherical gravitational fields.
arxiv:gr-qc/0410105
in this paper we study the classical single machine scheduling problem where the objective is to minimize the total weight of tardy jobs. our analysis focuses on the case where one or more of three natural parameters is either constant or is taken as a parameter in the sense of parameterized complexity. these three parameters are the number of different due dates, processing times, and weights in our set of input jobs. we show that the problem belongs to the class of fixed parameter tractable ( fpt ) problems when combining any two of these three parameters. we also show that the problem is polynomial - time solvable when the latter two parameters are constant, complementing a classical result of karp, who showed that the problem is np - hard already for a single due date.
arxiv:1709.05751
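for intuition on the objective above, here is a brute - force sketch for small instances (not the fpt algorithms of the paper). it uses the classical fact that some optimal schedule runs the on - time jobs in earliest - due - date (edd) order, so it suffices to search over subsets taken in edd order; job tuples and the helper name are illustrative choices.

```python
from itertools import combinations

def min_weight_tardy(jobs):
    """jobs: list of (processing_time, due_date, weight).
    Returns the minimum total weight of tardy jobs (brute force)."""
    # In some optimal schedule the on-time jobs appear in EDD order,
    # so it suffices to test subsets scheduled by earliest due date.
    order = sorted(jobs, key=lambda j: j[1])
    total_w = sum(w for _, _, w in jobs)
    best_on_time = 0
    for r in range(len(order) + 1):
        for subset in combinations(order, r):  # subsets preserve EDD order
            t, ok = 0, True
            for p, d, _ in subset:
                t += p
                if t > d:          # some job in the subset would finish late
                    ok = False
                    break
            if ok:
                best_on_time = max(best_on_time,
                                   sum(w for _, _, w in subset))
    return total_w - best_on_time
```

this runs in exponential time, which is exactly why the parameterized results of the paper matter for larger instances.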
generating future frames given a few context ( or past ) frames is a challenging task. it requires modeling the temporal coherence of videos and multi - modality in terms of diversity in the potential future states. current variational approaches for video generation tend to marginalize over multi - modal future outcomes. instead, we propose to explicitly model the multi - modality in the future outcomes and leverage it to sample diverse futures. our approach, diverse video generator, uses a gaussian process ( gp ) to learn priors on future states given the past and maintains a probability distribution over possible futures given a particular sample. in addition, we leverage the changes in this distribution over time to control the sampling of diverse future states by estimating the end of ongoing sequences. that is, we use the variance of gp over the output function space to trigger a change in an action sequence. we achieve state - of - the - art results on diverse future frame generation in terms of reconstruction quality and diversity of the generated sequences.
arxiv:2107.04619
we study the effect of a dynamical gauge field on kaplan ' s chiral fermion on a boundary in the strong gauge coupling limit in the extra dimension. to all orders of the hopping parameter expansion, we prove exact parity invariance of the fermion propagator on the boundary. this means that the chiral property of the boundary fermion, which seems to survive even in the presence of the gauge field from a perturbative point of view, is completely destroyed by the dynamics of the gauge field.
arxiv:hep-lat/9405014
score - based generative models ( sgms ) have demonstrated remarkable synthesis quality. sgms rely on a diffusion process that gradually perturbs the data towards a tractable distribution, while the generative model learns to denoise. the complexity of this denoising task is, apart from the data distribution itself, uniquely determined by the diffusion process. we argue that current sgms employ overly simplistic diffusions, leading to unnecessarily complex denoising processes, which limit generative modeling performance. based on connections to statistical mechanics, we propose a novel critically - damped langevin diffusion ( cld ) and show that cld - based sgms achieve superior performance. cld can be interpreted as running a joint diffusion in an extended space, where the auxiliary variables can be considered " velocities " that are coupled to the data variables as in hamiltonian dynamics. we derive a novel score matching objective for cld and show that the model only needs to learn the score function of the conditional distribution of the velocity given data, an easier task than learning scores of the data directly. we also derive a new sampling scheme for efficient synthesis from cld - based diffusion models. we find that cld outperforms previous sgms in synthesis quality for similar network architectures and sampling compute budgets. we show that our novel sampler for cld significantly outperforms solvers such as euler - maruyama. our framework provides new insights into score - based denoising diffusion models and can be readily used for high - resolution image synthesis. project page and code : https://nv-tlabs.github.io/cld-sgm.
arxiv:2112.07068
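to make the extended - space forward diffusion concrete, here is a toy numpy simulation of damped langevin dynamics in a joint (position, velocity) space for a unit quadratic potential, with gamma = 2 chosen so the dynamics are critically damped. this is only an illustration of the kind of diffusion cld builds on, not the authors' implementation; the bimodal initial data and all step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 10000, 1e-2, 2000
gamma = 2.0                            # critical damping for unit mass / unit curvature
x = rng.choice([-2.0, 2.0], size=n)    # toy bimodal "data"
v = np.zeros(n)                        # velocities start deterministic, as in CLD
for _ in range(steps):
    # dx = v dt ;  dv = -x dt - gamma v dt + sqrt(2 gamma) dW  (noise only on v)
    x = x + v * dt
    v = v - (x + gamma * v) * dt + np.sqrt(2 * gamma * dt) * rng.standard_normal(n)
# after enough time the joint (x, v) distribution approaches a standard Gaussian
```

the point of the sketch is that the noise is injected only into the velocity channel, yet the positions still forget the bimodal data distribution, which is the mechanism the abstract describes.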
nonconvex and nonsmooth optimization problems are frequently encountered in much of statistics, business, science and engineering, but they are not yet widely recognized as a technology in the sense of scalability. a reason for this relatively low degree of popularity is the lack of a well developed system of theory and algorithms to support the applications, as is the case for its convex counterpart. this paper aims to take one step in the direction of disciplined nonconvex and nonsmooth optimization. in particular, we consider in this paper some constrained nonconvex optimization models in block decision variables, with or without coupled affine constraints. in the case without coupled constraints, we show a sublinear rate of convergence to an $ \ epsilon $ - stationary solution in the form of variational inequality for a generalized conditional gradient method, where the convergence rate is shown to be dependent on the hölderian continuity of the gradient of the smooth part of the objective. for the model with coupled affine constraints, we introduce corresponding $ \ epsilon $ - stationarity conditions, and apply two proximal - type variants of the admm to solve such a model, assuming the proximal admm updates can be implemented for all the block variables except for the last block, for which either a gradient step or a majorization - minimization step is implemented. we show an iteration complexity bound of $ o ( 1 / \ epsilon ^ 2 ) $ to reach an $ \ epsilon $ - stationary solution for both algorithms. moreover, we show that the same iteration complexity of a proximal bcd method follows immediately. numerical results are provided to illustrate the efficacy of the proposed algorithms for tensor robust pca.
arxiv:1605.02408
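for readers unfamiliar with conditional gradient methods, the sketch below shows the plain frank - wolfe iteration in the simplest convex setting, least squares over an l1 ball; the paper's generalized method handles nonsmooth nonconvex objectives, which this sketch does not. function name, step size and problem are illustrative.

```python
import numpy as np

def frank_wolfe_l1(A, b, radius, iters=200):
    """Conditional-gradient sketch: minimise ||Ax - b||^2 over the l1 ball."""
    n = A.shape[1]
    x = np.zeros(n)
    for k in range(iters):
        grad = 2 * A.T @ (A @ x - b)
        # linear minimisation oracle on the l1 ball: a signed vertex
        i = np.argmax(np.abs(grad))
        s = np.zeros(n)
        s[i] = -radius * np.sign(grad[i])
        x += 2.0 / (k + 2) * (s - x)   # standard O(1/k) step size
    return x
```

each iteration needs only a linear minimisation over the feasible set rather than a projection, which is the defining feature of this family of methods.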
we show that every toric sasaki - einstein manifold $ s $ admits a special legendrian submanifold $ l $ which arises as the link $ { \ rm fix } ( \ tau ) \ cap s $ of the fixed point set $ { \ rm fix } ( \ tau ) $ of an anti - holomorphic involution $ \ tau $ on the cone $ c ( s ) $. in particular, an irregular toric sasaki - einstein manifold $ s ^ { 2 } \ times s ^ { 3 } $ has a special legendrian torus $ s ^ { 1 } \ times s ^ { 1 } $. moreover, we also obtain a special legendrian submanifold in $ \ sharp m ( s ^ { 2 } \ times s ^ { 3 } ) $ for each $ m \ ge 1 $.
arxiv:1201.1080
in this paper, we look for solutions to the following coupled schrödinger system \begin{equation*} \begin{cases} -\delta u + \lambda_{1} u = \alpha_{1} |u|^{p-2} u + \mu_{1} u^{3} + \rho v^{2} u & \text{in } \mathbb{r}^{n}, \\ -\delta v + \lambda_{2} v = \alpha_{2} |v|^{p-2} v + \mu_{2} v^{3} + \rho u^{2} v & \text{in } \mathbb{r}^{n}, \end{cases} \end{equation*} with the additional conditions $\int_{\mathbb{r}^{n}} u^{2} \, dx = b_{1}^{2}$ and $\int_{\mathbb{r}^{n}} v^{2} \, dx = b_{2}^{2}$. here $b_{1}, b_{2} > 0$ are prescribed, $n \leq 3$, $\mu_{1}, \mu_{2}, \alpha_{1}, \alpha_{2}, \rho > 0$, $p \in (2, 4)$, and the frequencies $\lambda_{1}, \lambda_{2}$ are unknown and will appear as lagrange multipliers. in the one - dimensional case, the energy functional is bounded from below on the product of $l^{2}$ - spheres, and normalized ground states exist and are obtained as global minimizers. when $n = 2$, the energy functional is not always bounded on the product of $l^{2}$ - spheres ; we prove the existence of normalized ground states under suitable conditions on $b_{1}$ and $b_{2}$, which are obtained as global minimizers. when $n = 3$, we show that under suitable conditions on $b_{1}$ and $b_{2}$, at least two normalized solutions exist : one is a ground state and the other is an excited state. we also show the limit behavior of the normalized
arxiv:2108.10317
the late - time spectra of the kilonova at2017gfo associated with gw170817 exhibit a strong emission line feature at $ 2. 1 \, { \ rm \ mu m } $. the line structure develops with time and there is no apparent blue - shifted absorption feature in the spectra, suggesting that this emission line feature is produced by electron collision excitation. we attribute the emission line to a fine structure line of tellurium ( te ) iii, which is one of the most abundant elements in the second r - process peak. by using a synthetic spectral modeling including fine structure emission lines with the solar r - process abundance pattern beyond the first r - process peak, i. e., atomic mass numbers $ a \ gtrsim 88 $, we demonstrate that [ te iii ] $ 2. 10 \, \ rm \ mu m $ is indeed expected to be the strongest emission line in the near infrared region. we estimate that the required mass of te iii is $ \ sim 10 ^ { - 3 } m _ { \ odot } $, corresponding to merger ejecta of $ 0. 05 m _ { \ odot } $, which is in agreement with the mass estimated from the kilonova light curve.
arxiv:2307.00988
we consider the resource allocation problem and its numerical solution. the following constructions are demonstrated : 1 ) the walrasian price - adjustment mechanism for determining the equilibrium ; 2 ) the decentralized role of the prices ; 3 ) slater ' s method for price restrictions ( dual lagrange multipliers ) ; 4 ) a new mechanism for determining equilibrium prices, in which prices are fully controlled not by the center ( government ), but by the economic agents - - nodes ( factories ). in the economic literature, only the convergence of the considered methods is proved. in contrast, this paper provides an accurate analysis of the convergence rate of the described procedures for determining the equilibrium. the analysis is based on the primal - dual nature of the suggested algorithms. more precisely, in this article we propose the economic interpretation of the following numerical primal - dual methods of convex optimization : dichotomy and the subgradient projection method. numerical experiments conclude the paper.
arxiv:1806.09071
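the dichotomy mechanism mentioned above can be sketched in a few lines: bisect on the excess demand, under the illustrative assumption that demand is decreasing and supply increasing in price (a simplified setting, not the paper's exact model).

```python
def equilibrium_price(demand, supply, lo, hi, tol=1e-8):
    """Dichotomy (bisection) on excess demand.
    Assumes demand(p) is decreasing and supply(p) increasing on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if demand(mid) > supply(mid):   # excess demand -> raise the price
            lo = mid
        else:                           # excess supply -> lower the price
            hi = mid
    return 0.5 * (lo + hi)
```

each halving step needs one evaluation of excess demand, so the method reaches precision eps in o(log(1/eps)) evaluations, which is the kind of rate statement the paper makes precise.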
we demonstrate that p - adic analysis is a natural basis for the construction of a wide variety of ultrametric diffusion models constrained by hierarchical energy landscapes. a general analytical description in terms of p - adic analysis is given for a class of models. two exactly solvable examples, i. e. the ultrametric diffusion constrained by a linear energy landscape and the ultrametric diffusion with a reaction sink, are considered. we show that such models can be applied both to relaxation in complex systems and to rate processes coupled to rearrangement of the complex surrounding.
arxiv:cond-mat/0106506
it is shown that various definitions of $ \ varphi $ - connes amenability as introduced independently in \ cite { gh - ja, mah, sh - am }, are just rediscovering existing notions and presenting them in different ways. it is also proved that even $ \ varphi $ - contractibility as defined in \ cite { sangani }, is equivalent to an older and simpler concept.
arxiv:1706.06161
the human brain has inspired novel concepts complementary to classical and quantum computing architectures, such as artificial neural networks and neuromorphic computers, but it is not clear how their performances compare. here we report a new methodological framework for benchmarking cognitive performance based on solving computational problems with increasing problem size. we determine computational efficiencies in experiments with human participants and benchmark these against complexity classes. we show that a neuromorphic architecture with limited field - of - view size and added noise provides a good approximation to our results. the benchmarking also suggests there is no quantum advantage on the scales of human capability compared to the neuromorphic model. thus, the framework offers unique insights into the computational efficiency of the brain by considering it a black box.
arxiv:2305.14363
the effects of electronic correlations and orbital degeneracy on thermoelectric properties are studied within the context of multi - orbital hubbard models on different lattices. we use dynamical mean field theory with iterative perturbation theory as a solver to calculate the self - energy of the models in a wide range of interaction strengths. the seebeck coefficient, which measures the voltage drop in response to a temperature gradient across the system, shows a non - monotonic behavior with temperature in the presence of strong correlations. this anomalous behavior is associated with a crossover from a fermi liquid metal at low temperatures to a bad metal with incoherent excitations at high temperatures. we find that for interactions comparable to the bandwidth the seebeck coefficient acquires large values at low temperatures. moreover, for strongly correlated cases, where the interaction is larger than the bandwidth, the figure of merit is enhanced over a wide range of temperatures because of decreasing electronic contributions to the thermal conductivity. we also find that multi - orbital systems will typically yield larger thermopower compared to single orbital models.
arxiv:1308.2582
the active star - forming region w33b is a source of oh and h2o maser emission located in distinct zones around the central object. the aim was to obtain the complete stokes pattern of polarised oh maser emission, to trace its variability and to investigate flares and long - term variability of the h2o maser and evolution of individual emission features. observations in the oh lines at a wavelength of 18 cm were carried out on the nançay radio telescope ( france ) at a number of epochs in 2008 - - 2014 ; h2o line observations ( long - term monitoring ) at a wavelength of 1. 35 cm were performed on the 22 - metre radio telescope of the pushchino radio astronomy observatory ( russia ) between 1981 and 2014. we have observed strong variability of the emission features in the main 1665 - and 1667 - mhz oh lines as well as in the 1612 - mhz satellite line. zeeman splitting has been detected in the 1665 - mhz oh line at 62 km / s and in the 1667 - mhz line at 62 and 64 km / s. the magnetic field intensity was estimated to be from 2 to 3 mg. the h2o emission features form filaments, chains with radial - velocity gradients, or more complicated structures including large - scale ones. long - term observations of the hydroxyl maser in the w33b region have revealed narrowband polarised emission in the 1612 - mhz line with a double - peak profile characteristic of type iib circumstellar masers. the 30 - year monitoring of the water - vapour maser in w33b showed several strong flares of the h2o line. the observed radial - velocity drift of the h2o emission features suggests propagation of an excitation wave in the masing medium with a gradient of radial velocities. in oh and h2o masers some turbulent motions of material are inferred.
arxiv:1412.0462
the ao327 drift survey for radio pulsars and transients used the arecibo telescope from 2010 until its collapse in 2020. ao327 collected ~ 3100 hours of data at 327 mhz with a time resolution of 82 us and frequency resolution of 24 khz. while the main motivation for such surveys is the discovery of new pulsars and new, even unforeseen, types of radio transients, they also serendipitously collect a wealth of data on known pulsars. we present an electronic catalog of data and data products on 206 pulsars whose periodic emission was detected by ao327 and are listed in the atnf catalog of all published pulsars. the ao327 data products include dedispersed time series at full time resolution, average ( " folded " ) pulse profiles, gaussian pulse profile templates, and an absolute phase reference that allows phase - aligning the ao327 pulse profiles in a physically meaningful manner with profiles from data taken with other instruments. we also provide machine - readable tables with uncalibrated flux measurements at 327 mhz and pulse widths at 50 % and 10 % of the pulse peak determined from the fitted gaussian profile templates. the ao327 catalog data set can be used in applications like population analysis of radio pulsars, pulse profile evolution studies in time and frequency, cone and core emission of the pulsar beam, scintillation, pulse intensity distributions, and others. it also constitutes a ready - made resource for teaching signal processing and pulsar astronomy techniques.
arxiv:2401.01947
the wiener - hopf factorization is obtained in closed form for a phase type approximation to the cgmy lévy process. this allows, for the approximation, exact computation of first passage times to barrier levels via laplace transform inversion. calibration of the cgmy model to market option prices defines the risk neutral process for which we infer the first passage times of stock prices to 30 % of the price level at contract initiation. these distributions are then used in pricing 50 % recovery rate equity default swap ( eds ) contracts and the resulting prices are compared with the prices of credit default swaps ( cds ). an illustrative analysis is presented for these contracts on ford and gm.
arxiv:0711.2807
the first direct detection of gravitational waves may be made through observations of pulsars. the principal aim of pulsar timing array projects being carried out worldwide is to detect ultra - low frequency gravitational waves ( f ~ 10 ^ - 9 to 10 ^ - 8 hz ). such waves are expected to be caused by coalescing supermassive binary black holes in the cores of merged galaxies. it is also possible that a detectable signal could have been produced in the inflationary era or by cosmic strings. in this paper we review the current status of the parkes pulsar timing array project ( the only such project in the southern hemisphere ) and compare the pulsar timing technique with other forms of gravitational - wave detection such as ground - and space - based interferometer systems.
arxiv:0812.2721
the sample mean is among the most well studied estimators in statistics, having many desirable properties such as unbiasedness and consistency. however, when analyzing data collected using a multi - armed bandit ( mab ) experiment, the sample mean is biased and much remains to be understood about its properties. for example, when is it consistent, how large is its bias, and can we bound its mean squared error? this paper delivers a thorough and systematic treatment of the bias, risk and consistency of mab sample means. specifically, we identify four distinct sources of selection bias ( sampling, stopping, choosing and rewinding ) and analyze them both separately and together. we further demonstrate that a new notion of \ emph { effective sample size } can be used to bound the risk of the sample mean under suitable loss functions. we present several carefully designed examples to provide intuition on the different sources of selection bias we study. our treatment is nonparametric and algorithm - agnostic, meaning that it is not tied to a specific algorithm or goal. in a nutshell, our proofs combine variational representations of information - theoretic divergences with new martingale concentration inequalities.
arxiv:1902.00746
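of the four sources of selection bias named above, the "choosing" bias is the easiest to see in simulation: with two arms of identical true mean, reporting the sample mean of the better - looking arm is biased upward even without any adaptive sampling. the sketch below is an illustration of that phenomenon, not an analysis from the paper; arm count, pull count and distributions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
runs, pulls = 100000, 10
# two arms, both with true mean 0; pull each a fixed number of times
m = rng.standard_normal((runs, 2, pulls)).mean(axis=2)   # per-arm sample means
chosen = m.max(axis=1)       # report the arm that looks best ("choosing")
bias = chosen.mean()         # > 0: for two iid arms, approx 1/sqrt(pi * pulls)
```

the true mean of every arm is zero, yet the reported estimate averages about 0.18 here, so "choosing" alone already breaks unbiasedness before stopping or adaptive sampling enter the picture.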
electrons on the half - filled honeycomb lattice are expected to undergo a direct continuous transition from the semimetallic into the antiferromagnetic insulating phase with increase of on - site hubbard repulsion. we attempt to further quantify the critical behavior at this quantum phase transition by means of functional renormalization group ( rg ), within an effective gross - neveu - yukawa theory for an so ( 3 ) order parameter ( " chiral heisenberg universality class " ). our calculation yields an estimate of the critical exponents $ \ nu \ simeq 1. 31 $, $ \ eta _ \ phi \ simeq 1. 01 $, and $ \ eta _ \ psi \ simeq 0. 08 $, in reasonable agreement with the second - order expansion around the upper critical dimension. to test the validity of the present method we use the conventional gross - neveu - yukawa theory with z ( 2 ) order parameter ( " chiral ising universality class " ) as a benchmark system. we explicitly show that our functional rg approximation in the sharp - cutoff scheme becomes one - loop exact both near the upper as well as the lower critical dimension. directly in 2 + 1 dimensions, our chiral - ising results agree with the best available predictions from other methods within the single - digit percent range for $ \ nu $ and $ \ eta _ \ phi $ and the double - digit percent range for $ \ eta _ \ psi $. while one would expect a similar performance of our approximation in the chiral heisenberg universality class, discrepancies with the results of other calculations here are more significant. discussion and summary of various approaches is presented.
arxiv:1402.6277
predicting molecular properties ( e. g., atomization energy ) is an essential issue in quantum chemistry that could speed up much research progress, such as drug design and substance discovery. traditional studies based on density functional theory ( dft ) in physics are proved to be time - consuming for predicting a large number of molecules. recently, machine learning methods, which consider much rule - based information, have also shown potential for this issue. however, the complex inherent quantum interactions of molecules are still largely underexplored by existing solutions. in this paper, we propose a generalizable and transferable multilevel graph convolutional neural network ( mgcn ) for molecular property prediction. specifically, we represent each molecule as a graph to preserve its internal structure. moreover, the well - designed hierarchical graph neural network directly extracts features from the conformation and spatial information followed by the multilevel interactions. as a consequence, the multilevel overall representations can be utilized to make the prediction. extensive experiments on both datasets of equilibrium and off - equilibrium molecules demonstrate the effectiveness of our model. furthermore, the detailed results also prove that mgcn is generalizable and transferable for the prediction.
arxiv:1906.11081
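the basic building block of such models, a graph convolution that mixes each atom's features with its neighbours', can be sketched as below. this is a generic single layer for intuition only; mgcn's actual hierarchy of interaction levels and edge embeddings is considerably more elaborate.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step: average each node's neighbourhood
    features (including itself), then apply a linear map and ReLU.
    H: node features (n, d); A: adjacency (n, n); W: weights (d, d')."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalise by degree
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)
```

stacking several such layers lets information propagate over multi - hop neighbourhoods, which is the mechanism that lets a graph network capture interactions beyond directly bonded atoms.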
we develop librationism, { \ pounds }, and clarify some mathematical and philosophical matters which relate to the particular manner in which it deals with the paradoxes and to its usefulness as a foundation for mathematics and type free reasoning. we isolate a domination operation which unlike the power set operation is not paradoxical and which helps us isolate the definable real numbers. we show that { \ pounds } plus a postulate and a postulation interprets zfc ; our strategy for achieving this involves extending an interpretation by harvey friedman of zf in a system weaker than zf with collection minus extensionality and a novel notion of $ librationist \ capture $ which entails collection, specification and choice in desired contexts.
arxiv:1407.3877
image denoising is the first step in many biomedical image analysis pipelines and deep learning ( dl ) based methods are currently best performing. a new category of dl methods such as noise2void or noise2self can be used fully unsupervised, requiring nothing but the noisy data. however, this comes at the price of reduced reconstruction quality. the recently proposed probabilistic noise2void ( pn2v ) improves results, but requires an additional noise model for which calibration data needs to be acquired. here, we present improvements to pn2v that ( i ) replace histogram based noise models by parametric noise models, and ( ii ) show how suitable noise models can be created even in the absence of calibration data. this is a major step since it actually renders pn2v fully unsupervised. we demonstrate that all proposed improvements are not only academic but indeed relevant.
arxiv:1911.12291
we propose a method to explore the flavor structure of leptons using diffusion models, which are a form of generative artificial intelligence ( generative ai ). we consider a simple extension of the standard model with the type i seesaw mechanism and train a neural network to generate the neutrino mass matrix. by utilizing transfer learning, the diffusion model generates $ 10 ^ 4 $ solutions that are consistent with the neutrino mass squared differences and the leptonic mixing angles. the distributions of the cp phases and the sums of neutrino masses, which are not included in the conditional labels but are calculated from the solutions, exhibit non - trivial tendencies. in addition, the effective mass in neutrinoless double beta decay is concentrated near the boundaries of the existing confidence intervals, allowing us to verify the obtained solutions through future experiments. an inverse approach using the diffusion model is expected to facilitate the experimental verification of flavor models from a perspective distinct from conventional analytical methods.
arxiv:2503.21432
high - order gas - kinetic scheme ( hgks ) with 5th - order non - compact reconstruction has been well implemented for implicit large eddy simulation ( iles ) in nearly incompressible turbulent channel flows. in this study, the hgks with higher - order non - compact reconstruction and compact reconstruction will be validated in turbulence simulation. for higher - order non - compact reconstruction, 7th - order normal reconstruction and tangential reconstruction are implemented. in terms of compact reconstruction, 5th - order normal reconstruction is adopted. the current work aims to show the benefits of high - order non - compact reconstruction and compact reconstruction for iles. the accuracy of hgks is verified by numerical simulation of three - dimensional advection of density perturbation. for the non - compact 7th - order scheme, 16 gaussian points are required on the cell interface to preserve the order of accuracy. then, hgks with non - compact and compact reconstruction is used in the three - dimensional taylor - green vortex ( tgv ) problem and turbulent channel flows. accurate iles solutions have been obtained from hgks. in terms of the physical modeling underlying the numerical algorithms, the compact reconstruction has consistent physical and numerical domains of dependence, without employing additional information from cells that have no direct physical connection with the targeted cell. the compact gks shows a favorable performance for turbulence simulation in resolving multi - scale structures.
arxiv:2208.07713
this article deals with different generalizations of the discrete stability property. three possible definitions of discrete stability are introduced, followed by a study of some particular cases of discrete stable distributions and their properties.
arxiv:1502.02588
in the version \ cite { fiziev14 } of this paper we presented for the first time the basic equations and relations for relativistic static spherically symmetric stars ( ssss ) in the model of minimal dilatonic gravity ( mdg ). this model is { \ em locally } equivalent to the f ( r ) theory of gravity and gives an alternative description of the effects of dark matter and dark energy using the brans - dicke dilaton $ \ phi $. to outline the basic properties of the mdg model of ssss and to compare them with general relativistic results, in the present paper we use the relativistic equation of state ( eos ) of neutron matter as an ideal fermi neutron gas at zero temperature. we overcome the well - known difficulties of the physics of ssss in the f ( r ) theories of gravity \ cite { felice10, berti } applying novel highly nontrivial nonlinear boundary conditions, which depend on the global properties of the solution and on the eos. we also introduce two pairs of new notions : cosmological - energy - pressure densities and dilaton - energy - pressure densities, as well as two new eos for them : cosmological eos ( ceos ) and dilaton eos ( deos ). special attention is paid to the dilatonic sphere ( in brief - - disphere ) of ssss, introduced in this paper for the first time. using several realistic eos for neutron star ( ns ) : sly, bsk19, bsk20 and bsk21, and the current observational two - solar - masses limit, we derive an estimate for the scalar - field mass $ m _ \ phi \ sim 10 ^ { - 13 } ev / c ^ 2 \ div 4 \ times 10 ^ { - 11 } ev / c ^ 2 $. thus, the present version of the paper reflects some of the recent developments of the topic.
arxiv:1402.2813
we present an analytical model for the cosmological accretion of gas onto dark matter halos, based on a similarity solution applicable to spherical systems. performing simplified radiative transfer, we compute how the accreting gas turns increasingly neutral as it self - shields from the ionising background, and obtain the column density, $ n _ { \ rm hi } $, as a function of impact parameter. the resulting column - density distribution function ( cddf ) is in excellent agreement with observations. the analytical expression elucidates ( 1 ) why halos over a large range in mass contribute about equally to the cddf as well as ( 2 ) why the cddf evolves so little with redshift in the range $ z = 2 \ rightarrow 5 $. we show that the model also predicts reasonable dla line - widths ( $ v _ { 90 } $ ), bias and molecular fractions. integrating over the cddf yields the mass density in neutral gas, $ \ omega _ { \ rm hi } $, which agrees well with observations. $ \ omega _ { \ rm hi } ( z ) $ is nearly constant even though the accretion rate onto halos evolves. we show that this occurs because the fraction of time that the inflowing gas is neutral depends on the dynamical time of the halo, which is inversely proportional to the accretion rate. encapsulating results from cosmological simulations, the simple model shows that most lyman - limit system and damped lyman - alpha absorbers are associated with the cosmological accretion of gas onto halos.
arxiv:2010.15857
network function virtualization ( nfv ) carries the potential for on - demand deployment of network algorithms in virtual machines ( vms ). in large clouds, however, vm resource allocation incurs delays that hinder the dynamic scaling of such nfv deployment. parallel resource management is a promising direction for boosting performance, but it may significantly increase the communication overhead and the decline ratio of deployment attempts. our work analyzes the performance of various placement algorithms and provides empirical evidence that state - of - the - art parallel resource management dramatically increases the decline ratio of deterministic algorithms but hardly affects randomized algorithms. we, therefore, introduce apsr - - an efficient parallel random resource management algorithm that requires information only from a small number of hosts and dynamically adjusts the degree of parallelism to provide provable decline ratio guarantees. we formally analyze apsr, evaluate it on real workloads, and integrate it into the popular openstack cloud management platform. our evaluation shows that apsr matches the throughput provided by other parallel schedulers, while achieving up to 13x lower decline ratio and a reduction of over 85 % in communication overheads.
arxiv:2202.07710
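the flavour of randomized placement the paper advocates can be sketched as follows: probe only a small random sample of hosts and place on the best fit among them, declining the attempt if none can host the vm. this is an illustrative sketch, not apsr itself (which in addition adapts its degree of parallelism to bound the decline ratio); the data structures and names are assumptions.

```python
import random

def place_vm(hosts, demand, sample_size=5, rng=random):
    """Randomised placement sketch.
    hosts: dict host_id -> free capacity (mutated on success).
    Returns the chosen host id, or None if the attempt is declined."""
    probed = rng.sample(list(hosts), min(sample_size, len(hosts)))
    feasible = [h for h in probed if hosts[h] >= demand]
    if not feasible:
        return None                                   # declined attempt
    best = max(feasible, key=lambda h: hosts[h])      # least loaded in probe
    hosts[best] -= demand
    return best
```

because each scheduler instance only reads a handful of random hosts, many such instances can run in parallel with little shared state, which is the intuition behind why randomization tolerates parallelism better than deterministic best - fit.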
java manages memory automatically : once no references to an object remain, the unreachable memory becomes eligible to be freed by the garbage collector. something similar to a memory leak may still occur if a programmer ' s code holds a reference to an object that is no longer needed, typically when objects that are no longer needed are stored in containers that are still in use. if methods for a non - existent object are called, a null pointer exception is thrown. one of the ideas behind java ' s automatic memory management model is that programmers can be spared the burden of having to perform manual memory management. in some languages, memory for the creation of objects is implicitly allocated on the stack or explicitly allocated and deallocated from the heap. in the latter case, the responsibility of managing memory resides with the programmer. if the program does not deallocate an object, a memory leak occurs. if the program attempts to access or deallocate memory that has already been deallocated, the result is undefined and difficult to predict, and the program is likely to become unstable or crash. this can be partially remedied by the use of smart pointers, but these add overhead and complexity. garbage collection does not prevent logical memory leaks, i. e. those where the memory is still referenced but never used. garbage collection may happen at any time. ideally, it will occur when a program is idle. it is guaranteed to be triggered if there is insufficient free memory on the heap to allocate a new object ; this can cause a program to stall momentarily. explicit memory management is not possible in java. java does not support c / c + + style pointer arithmetic, where object addresses can be arithmetically manipulated ( e. g. by adding or subtracting an offset ). this allows the garbage collector to relocate referenced objects and ensures type safety and security.
as in c++ and some other object-oriented languages, variables of java's primitive data types are either stored directly in fields (for objects) or on the stack (for methods) rather than on the heap, as is commonly true for non-primitive data types (but see escape analysis). this was a conscious decision by java's designers for performance reasons. java contains multiple types of garbage collectors. since java 9, hotspot uses the garbage-first garbage collector (g1gc) as the default. however, there are also several other garbage collectors that can be used to manage the heap, such as the z garbage collector (zgc).
https://en.wikipedia.org/wiki/Java_(programming_language)
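the "logical memory leak" described above, memory that is still referenced but never used, can be shown in a few lines of java. the cache below is a hypothetical example, not a pattern taken from the article:

```java
import java.util.ArrayList;
import java.util.List;

// hypothetical illustration of a logical memory leak: a reachable
// container keeps its elements alive, so the garbage collector cannot
// reclaim them even though the program never reads them again.
public class LogicalLeak {
    public static final List<byte[]> cache = new ArrayList<>();

    static void handleRequest() {
        byte[] buffer = new byte[1024];
        cache.add(buffer);                 // kept "just in case", never read
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) handleRequest();
        // every buffer is still strongly reachable through the cache,
        // so none of this memory is eligible for collection.
        System.out.println("retained buffers: " + cache.size());
        cache.clear();                     // dropping the references makes
        System.out.println("after clear: " + cache.size());  // them collectible
    }
}
```

no language-level deallocation is involved: the fix is simply to stop referencing the objects, after which the collector is free to reclaim them.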
this paper discusses the key principles of gigabit passive optical network (gpon), which is based on time division multiplexing passive optical network (tdm pon), and of wavelength division multiplexing passive optical network (wdm pon), which is considered to be the next-generation passive optical network. in the present-day scenario, access to broadband is increasing at a rapid pace. because of the advantages of fiber access in terms of capacity and cost, most countries have started deploying gpon access as an important part of their national strategies. though gpon is promising, it has a few limitations. on the other hand, wdm pon, a next-generation network, is quite promising: unlike gpon, it is easily scalable and interoperable across different vendors. this paper provides an overview of gpon and wdm pon and their key dissimilarities in terms of technicalities and cost.
arxiv:1308.5356
this paper addresses the problem of video summarization: given an input video, the goal is to select a subset of the frames to create a summary video that optimally captures the important information of the input video. with the large amount of video available online, video summarization provides a useful tool that assists video search, retrieval, browsing, etc. in this paper, we formulate video summarization as a sequence labeling problem. unlike existing approaches that use recurrent models, we propose fully convolutional sequence models to solve video summarization. we first establish a novel connection between semantic segmentation and video summarization, and then adapt popular semantic segmentation networks for video summarization. extensive experiments and analysis on two benchmark datasets demonstrate the effectiveness of our models.
arxiv:1805.10538
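the paper's networks are not reproduced here; as a hedged toy of the sequence-labeling view it describes, the sketch below scores every frame with a single hand-set 1d convolution followed by a sigmoid. the kernel weights and per-frame features are invented for illustration:

```java
// toy illustration of video summarization as sequence labeling: a single
// 1d convolution maps a per-frame feature to a per-frame "keep" probability.
// the paper uses deep fully convolutional networks; this fixed kernel is
// only meant to show the per-frame labeling structure.
public class ConvLabeler {
    // 1d convolution with zero padding at the boundaries, then a sigmoid
    public static double[] score(double[] frames, double[] kernel) {
        int r = kernel.length / 2;
        double[] out = new double[frames.length];
        for (int t = 0; t < frames.length; t++) {
            double s = 0;
            for (int k = 0; k < kernel.length; k++) {
                int idx = t + k - r;
                if (idx >= 0 && idx < frames.length) s += kernel[k] * frames[idx];
            }
            out[t] = 1.0 / (1.0 + Math.exp(-s));   // probability of keeping frame t
        }
        return out;
    }

    public static void main(String[] args) {
        double[] motion = {0, 0, 5, 5, 0, 0, 0, 4, 0};  // invented per-frame feature
        double[] probs = score(motion, new double[]{0.2, 0.6, 0.2});
        StringBuilder summary = new StringBuilder();
        for (int t = 0; t < probs.length; t++)
            if (probs[t] > 0.9) summary.append(t).append(' ');
        System.out.println("selected frames: " + summary.toString().trim());
    }
}
```

because every output score depends only on a local window of frames, the whole sequence is labeled in one pass, with no recurrent state.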
markov chain monte carlo (mcmc) methods are powerful computational tools for the analysis of complex statistical problems. however, their computational efficiency is highly dependent on the chosen proposal distribution, which is generally difficult to find. one way to solve this problem is to use adaptive mcmc algorithms, which automatically tune the statistics of the proposal distribution during the mcmc run. a new adaptive mcmc algorithm, called the variational bayesian adaptive metropolis (vbam) algorithm, is developed. the vbam algorithm updates the proposal covariance matrix using the variational bayesian adaptive kalman filter (vb-akf). a strong law of large numbers for the vbam algorithm is proven. empirical convergence results for three simulated examples and two real-data examples are also provided.
arxiv:1308.5875
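vbam itself relies on the vb-akf covariance update, which is not reproduced here; as a hedged baseline, the sketch below implements plain haario-style adaptive metropolis in one dimension, where the proposal variance is tuned from the chain's running empirical variance. all constants are illustrative:

```java
import java.util.Random;

// hedged 1-d sketch of adaptive metropolis (haario-style): the gaussian
// proposal's variance is tuned from the running empirical variance of the
// chain so far. vbam replaces this empirical estimate with a variational
// bayesian kalman-filter (vb-akf) update; that step is not reproduced here.
public class AdaptiveMetropolis {
    static double logTarget(double x) { return -0.5 * x * x; }  // n(0, 1)

    public static double[] sample(int n, long seed) {
        Random rng = new Random(seed);
        double[] chain = new double[n];
        double x = 0, mean = 0, m2 = 0;     // welford running statistics
        for (int i = 0; i < n; i++) {
            // fixed proposal during a short burn-in, adapted afterwards;
            // 2.4^2 is the classic optimal-scaling constant in one dimension
            double var = i < 100 ? 1.0 : 2.4 * 2.4 * (m2 / i + 1e-6);
            double prop = x + Math.sqrt(var) * rng.nextGaussian();
            if (Math.log(rng.nextDouble()) < logTarget(prop) - logTarget(x))
                x = prop;                   // metropolis accept/reject
            chain[i] = x;
            double d = x - mean;
            mean += d / (i + 1);
            m2 += d * (x - mean);
        }
        return chain;
    }

    public static void main(String[] args) {
        double[] c = sample(20000, 7);
        double s = 0, s2 = 0;
        for (double v : c) { s += v; s2 += v * v; }
        double m = s / c.length;
        System.out.printf("sample mean %.2f, sample variance %.2f%n",
                          m, s2 / c.length - m * m);
    }
}
```

for a standard normal target, the sample mean and variance of a long chain should settle near 0 and 1; the point of the adaptation is that no manual tuning of the proposal variance was needed to get there.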