text | source
---|---
We suggest using Einstein's static universe metric for the metastable state after reheating, instead of the Friedmann-Robertson-Walker spacetime. In this case the strong static gravitational potential leads to an effective reduction of the Higgs vacuum expectation value, which is found to be compatible with the Standard Model first-order electroweak phase transition conditions. Gravity could also increase the CP-violating effects for particles that cross the new-phase bubble walls, and thus can lead to a successful electroweak baryogenesis scenario.
|
arxiv:1702.08445
|
Past research has proposed numerous hardware prefetching techniques, most of which rely on exploiting one specific type of program context information (e.g., program counter, cacheline address) to predict future memory accesses. These techniques either completely neglect a prefetcher's undesirable effects (e.g., memory bandwidth usage) on the overall system, or incorporate system-level feedback as an afterthought to a system-unaware prefetch algorithm. We show that prior prefetchers often lose their performance benefit over a wide range of workloads and system configurations due to their inherent inability to take multiple different types of program context and system-level feedback information into account while prefetching. In this paper, we make a case for designing a holistic prefetch algorithm that learns to prefetch using multiple different types of program context and system-level feedback information inherent to its design. To this end, we propose Pythia, which formulates the prefetcher as a reinforcement learning agent. For every demand request, Pythia observes multiple different types of program context information to make a prefetch decision. For every prefetch decision, Pythia receives a numerical reward that evaluates prefetch quality under the current memory bandwidth usage. Pythia uses this reward to reinforce the correlation between program context information and prefetch decision to generate highly accurate, timely, and system-aware prefetch requests in the future. Our extensive evaluations using simulation and hardware synthesis show that Pythia outperforms multiple state-of-the-art prefetchers over a wide range of workloads and system configurations, while incurring only 1.03% area overhead over a desktop-class processor and requiring no software changes in workloads. The source code of Pythia can be freely downloaded from https://github.com/cmu-safari/pythia.
|
arxiv:2109.12021
|
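The Pythia entry above describes its reinforcement-learning loop only at a high level. As a minimal, hypothetical sketch in the spirit of that description — the state features, action set, and reward values below are illustrative assumptions, not Pythia's actual design — a tabular Q-learning prefetcher could look like this:

```python
import random
from collections import defaultdict

# Hypothetical state: (program-context signature, bandwidth-pressure bucket).
# Actions: candidate prefetch offsets in cache lines; 0 means "no prefetch".
ACTIONS = [0, 1, 2, 4, 8]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.05

q_table = defaultdict(float)  # (state, action) -> learned expected reward

def choose_offset(state):
    """Epsilon-greedy prefetch decision for a demand request."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def reward(accurate, timely, bandwidth_high):
    """Illustrative reward shaping: accurate and timely prefetches are
    rewarded; inaccurate ones are penalized more when bandwidth is scarce."""
    if accurate and timely:
        return 1.0
    if accurate:
        return 0.5
    return -2.0 if bandwidth_high else -1.0

def update(state, action, r, next_state):
    """Standard Q-learning update reinforcing context -> decision correlations."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (r + GAMMA * best_next
                                         - q_table[(state, action)])
```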
This paper investigates the thermodynamic stability of a generic cosmological fluid known as the van der Waals fluid in the context of a flat FRW universe. It is treated as a perfect fluid that obeys the equation of state $p = \frac{\gamma\rho}{1-\beta\rho} - \alpha\rho^{2}$, $0 \leq \gamma < 1$, where $\rho$ stands for the energy density and $p$ for the pressure of the fluid. In this regard, we discuss the behavior of the physical parameters to analyze the evolution of the universe. We investigate whether the cosmological scenario fulfills the third law of thermodynamics using the specific heat formalism. Next we discuss the thermal equation of state and, by means of the adiabatic, specific heat, and isothermal conditions from classical thermodynamics, we examine the thermal stability.
|
arxiv:2110.11770
|
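The quoted equation of state is straightforward to evaluate numerically. A minimal sketch, with illustrative parameter values rather than ones from the paper, computing the pressure and the derivative dp/dρ (whose positivity is one simple classical stability diagnostic, not the paper's full specific-heat analysis):

```python
import numpy as np

def pressure(rho, gamma=0.5, beta=0.1, alpha=0.2):
    """van der Waals fluid: p = gamma*rho/(1 - beta*rho) - alpha*rho**2."""
    return gamma * rho / (1.0 - beta * rho) - alpha * rho**2

def dp_drho(rho, gamma=0.5, beta=0.1, alpha=0.2):
    """Analytic derivative dp/drho = gamma/(1-beta*rho)^2 - 2*alpha*rho."""
    return gamma / (1.0 - beta * rho)**2 - 2.0 * alpha * rho

rho = np.linspace(0.01, 5.0, 200)
stable = dp_drho(rho) >= 0
print(f"fraction of sampled densities with dp/drho >= 0: {stable.mean():.2f}")
```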
Computer software for aerospace applications, including flight software, ground control software, test & evaluation software, etc. Risk and reliability – the study of risk and reliability assessment techniques and the mathematics involved in the quantitative methods. Noise control – the study of the mechanics of sound transfer. Aeroacoustics – the study of noise generation via either turbulent fluid motion or aerodynamic forces interacting with surfaces. Flight testing – designing and executing flight test programs in order to gather and analyze performance and handling qualities data in order to determine if an aircraft meets its design and performance goals and certification requirements. The basis of most of these elements lies in theoretical physics, such as fluid dynamics for aerodynamics or the equations of motion for flight dynamics. There is also a large empirical component. Historically, this empirical component was derived from testing of scale models and prototypes, either in wind tunnels or in the free atmosphere. More recently, advances in computing have enabled the use of computational fluid dynamics to simulate the behavior of the fluid, reducing time and expense spent on wind-tunnel testing. Those studying hydrodynamics or hydroacoustics often obtain degrees in aerospace engineering. Additionally, aerospace engineering addresses the integration of all components that constitute an aerospace vehicle (subsystems including power, aerospace bearings, communications, thermal control, life support system, etc.) and its life cycle (design, temperature, pressure, radiation, velocity, lifetime). == Degree programs == Aerospace engineering may be studied at the advanced diploma, bachelor's, master's, and Ph.D. levels in aerospace engineering departments at many universities, and in mechanical engineering departments at others. A few departments offer degrees in space-focused astronautical engineering. Some institutions differentiate between aeronautical and astronautical engineering. Graduate degrees are offered in advanced or specialty areas for the aerospace industry. A background in chemistry, physics, computer science and mathematics is important for students pursuing an aerospace engineering degree. == In popular culture == The term "rocket scientist" is sometimes used to describe a person of great intelligence since rocket science is seen as a practice requiring great mental ability, especially technically and mathematically. The term is used ironically in the expression "it's not rocket science" to indicate that a task is simple. Strictly speaking, the use of "science" in "rocket science" is a misnomer since science is about understanding the origins, nature, and behavior of the universe; engineering is about using scientific and engineering principles to solve problems and develop new technology. The more etymologically correct version of this phrase would
|
https://en.wikipedia.org/wiki/Aerospace_engineering
|
Unconstrained video recognition and deep convolutional networks (DCNs) are two recently active topics in computer vision. In this work, we apply DCNs as frame-based recognizers for video recognition. Our preliminary studies, however, show that video corpora with complete ground truth are usually not large and diverse enough to learn a robust model. Networks trained directly on the video data set suffer from significant overfitting and have a poor recognition rate on the test set. The same lack-of-training-sample problem limits the usage of deep models on a wide range of computer vision problems where obtaining training data is difficult. To overcome the problem, we perform transfer learning from images to videos to utilize the knowledge in a weakly labeled image corpus for video recognition. The image corpus helps to learn important visual patterns for natural images, while these patterns are ignored by models trained only on the video corpus. Therefore, the resultant networks have better generalizability and a better recognition rate. We show that, by means of transfer learning from image to video, we can learn a frame-based recognizer with only 4k videos. Because the image corpus is weakly labeled, the entire learning process requires only 4k annotated instances, which is far less than the million-scale image data sets required by previous works. The same approach may be applied to other visual recognition tasks where only scarce training data is available, and it improves the applicability of DCNs in various computer vision problems. Our experiments also reveal the correlation between meta-parameters and the performance of DCNs, given the properties of the target problem and data. These results lead to a heuristic for meta-parameter selection for future research, which does not rely on a time-consuming meta-parameter search.
|
arxiv:1409.4127
|
We compiled an all-sky catalog of 451 nearby galaxies, each having an individual distance estimate $D \la 10$ Mpc or a radial velocity $V_{LG} < 550$ km s$^{-1}$. The catalog contains data on the basic optical and HI properties of the galaxies: their diameters, absolute magnitudes, morphological types, optical and HI surface brightnesses, rotational velocities, indicative mass-to-luminosity and HI mass-to-luminosity ratios, as well as a so-called "tidal index", which quantifies the galaxy environment. We expect the catalog completeness to be $\sim 75$% within 8 Mpc. About 85% of the Local Volume population are dwarf (dIrr, dIm, dSph) galaxies with $M_B > -17.0$, which contribute about 4% to the local luminosity density and $\sim(10-16)$% to the local HI mass density. We found that the mean local baryon density $\Omega_b(< 8\,\mathrm{Mpc}) = 2.3$% amounts to only half of the global baryon density, $\Omega_b = (4.7 \pm 0.6)$% (Spergel et al. 2003). The mean-square pairwise difference of radial velocities is about 100 km s$^{-1}$ for spatial separations within 1 Mpc, increasing to $\sim 300$ km s$^{-1}$ on a scale of $\sim 3$ Mpc.
|
arxiv:astro-ph/0410078
|
A non-malleable code is a relaxed version of an error-correcting code: decoding a modified codeword results in either the original message or a completely unrelated value. Thus, if an adversary corrupts a codeword, he cannot get any information from the codeword. This means that non-malleable codes are useful for providing a security guarantee in situations where the adversary can overwrite the encoded message. In 2010, Dziembowski et al. showed a construction of non-malleable codes against an adversary who can falsify codewords bitwise independently. In this paper, we consider an extended adversarial model (the affine error model) where the adversary can falsify codewords bitwise independently or replace some bits with values obtained by applying an affine map over a limited number of bits. We prove that the non-malleable codes (for the bitwise error model) provided by Dziembowski et al. are still non-malleable against the adversary in the affine error model.
|
arxiv:1701.07914
|
A new simulation technique to obtain the synchronized steady-state solutions existing in coupled oscillator systems is presented. The technique departs from a semi-analytical formulation presented in previous works. It extends the model of the admittance function describing each individual oscillator to a piecewise-linear one. This provides a global formulation of the coupled system, considering the whole characteristic of each voltage-controlled oscillator (VCO) in the array. In comparison with the previous local formulation, the new formulation significantly improves the accuracy in the prediction of the system synchronization ranges. The technique has been tested by comparison with computationally demanding circuit-level harmonic balance simulations in an array of van der Pol-type oscillators, and then applied to a coupled system of FET-based oscillators at 5 GHz, with very good agreement with measurements.
|
arxiv:2404.12780
|
Elementary mathematics, also known as primary or secondary school mathematics, is the study of mathematics topics that are commonly taught at the primary or secondary school levels around the world. It includes a wide range of mathematical concepts and skills, including number sense, algebra, geometry, measurement, and data analysis. These concepts and skills form the foundation for more advanced mathematical study and are essential for success in many fields and everyday life. The study of elementary mathematics is a crucial part of a student's education and lays the foundation for future academic and career success. == Strands of elementary mathematics == === Number sense and numeration === Number sense is an understanding of numbers and operations. In the 'number sense and numeration' strand students develop an understanding of numbers by being taught various ways of representing numbers, as well as the relationships among numbers. Properties of the natural numbers, such as divisibility and the distribution of prime numbers, are studied in basic number theory, another part of elementary mathematics. Elementary focus: === Spatial sense === 'Measurement skills and concepts' or 'spatial sense' are directly related to the world in which students live. Many of the concepts that students are taught in this strand are also used in other subjects such as science, social studies, and physical education. In the measurement strand students learn about the measurable attributes of objects, in addition to the basic metric system. Elementary focus: The measurement strand consists of multiple forms of measurement, as Marian Small states: "Measurement is the process of assigning a qualitative or quantitative description of size to an object based on a particular attribute." === Equations and formulas === A formula is an entity constructed using the symbols and formation rules of a given logical language. For example, determining the volume of a sphere requires a significant amount of integral calculus or its geometrical analogue, the method of exhaustion; but, having done this once in terms of some parameter (the radius for example), mathematicians have produced a formula to describe the volume. An equation is a formula of the form A = B, where A and B are expressions that may contain one or several variables called unknowns, and "=" denotes the equality binary relation. Although written in the form of a proposition, an equation is not a statement that is either true or false, but a problem consisting of finding the values, called solutions, that, when substituted for the unknowns, yield equal values of the expressions A and B. For example, 2 is the unique solution of the
|
https://en.wikipedia.org/wiki/Elementary_mathematics
|
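The sphere-volume example in the excerpt above can be made explicit: carrying out the integral once, in terms of the radius r, yields the familiar closed-form formula.

```latex
V = \int_{-r}^{r} \pi\,(r^2 - x^2)\,dx
  = \pi\left[\, r^2 x - \frac{x^3}{3} \,\right]_{-r}^{r}
  = \frac{4}{3}\,\pi r^3 .
```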
DCAT is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web. Since its first release in 2014 as a W3C recommendation, DCAT has seen wide adoption across communities and domains, particularly in conjunction with implementing the FAIR data principles (for findable, accessible, interoperable and reusable data). These implementation experiences, besides demonstrating the fitness of DCAT to meet its intended purpose, helped identify existing issues and gaps. Moreover, over the last few years, additional requirements emerged in data catalogs, given the increasing practice of documenting not only datasets but also data services and APIs. This paper illustrates the new version of DCAT, explaining the rationale behind its main revisions and extensions, based on the collected use cases and requirements, and outlines the issues yet to be addressed in future versions of DCAT.
|
arxiv:2303.08883
|
We present Picasso, a CUDA-based library comprising novel modules for deep learning over complex real-world 3D meshes. Hierarchical neural architectures have proved effective in multi-scale feature extraction, which signifies the need for fast mesh decimation. However, existing methods rely on CPU-based implementations to obtain multi-resolution meshes. We design GPU-accelerated mesh decimation to facilitate network resolution reduction efficiently on the fly. Pooling and unpooling modules are defined on the vertex clusters gathered during decimation. For feature learning over meshes, Picasso contains three types of novel convolutions, namely facet2vertex, vertex2facet, and facet2facet convolution. Hence, it treats a mesh as a geometric structure comprising vertices and facets, rather than a spatial graph with edges as previous methods do. Picasso also incorporates a fuzzy mechanism in its filters for robustness to mesh sampling (vertex density). It exploits Gaussian mixtures to define fuzzy coefficients for the facet2vertex convolution, and barycentric interpolation to define the coefficients for the remaining two convolutions. In this release, we demonstrate the effectiveness of the proposed modules with competitive segmentation results on S3DIS. The library will be made public through https://github.com/hlei-ziyan/picasso.
|
arxiv:2103.15076
|
Network slice placement, i.e., the allocation of resources from a virtualized substrate network, is an optimization problem which can be formulated as a multiobjective integer linear programming (ILP) problem. However, to cope with the complexity of such a continuous task while seeking optimality and automation, the use of machine learning (ML) techniques appears as a promising approach. We introduce a hybrid placement solution based on deep reinforcement learning (DRL) and a dedicated optimization heuristic based on the power-of-two-choices principle. The DRL algorithm uses the so-called asynchronous advantage actor-critic (A3C) algorithm for fast learning, and graph convolutional networks (GCN) to automate feature extraction from the physical substrate network. The proposed heuristically-assisted DRL (HA-DRL) accelerates the learning process and gains in resource usage when compared against other state-of-the-art approaches, as the evaluation results show.
|
arxiv:2105.06741
|
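The power-of-two-choices principle invoked by the entry above is a standard load-balancing idea: sample two candidates uniformly at random and place the new item on the less loaded one. A minimal generic sketch over plain bins — not the paper's slice-placement heuristic, which operates on substrate-network resources:

```python
import random

def place(loads):
    """Power of two choices: sample two bins, use the less loaded one."""
    i, j = random.sample(range(len(loads)), 2)
    k = i if loads[i] <= loads[j] else j
    loads[k] += 1
    return k

loads = [0] * 100
for _ in range(10_000):
    place(loads)
print("max load:", max(loads), "vs. average:", sum(loads) / len(loads))
```

The appeal of the rule is that the single extra sample reduces the expected maximum load for n items in n bins from Θ(log n / log log n) to Θ(log log n), a classic result on randomized balanced allocations.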
This paper studies the minimum weight set cover (MinWSC) problem with a {\em small neighborhood cover} (SNC) property proposed by Agarwal {\it et al.} in \cite{agarwal.}. A parallel algorithm for MinWSC with the $\tau$-SNC property is presented, obtaining approximation ratio $\tau(1+3\varepsilon)$ in $O(L \log_{1+\varepsilon} \frac{n^3}{\varepsilon^2} + 4\tau^3 2^\tau L^2 \log n)$ rounds, where $0 < \varepsilon < \frac{1}{2}$ is a constant, $n$ is the number of elements, and $L$ is a parameter related to the SNC property. Our results not only improve the approximation ratio obtained in \cite{agarwal.}, but also answer two questions proposed in \cite{agarwal.}.
|
arxiv:2202.03872
|
In this paper, we discuss the results of a new particle pusher in realistic ultra-strong electromagnetic fields such as those encountered around rotating neutron stars. After presenting results of this algorithm in simple fields and comparing them to the expected exact analytical solutions, we present new simulations for a rotating magnetic dipole in vacuum for a millisecond pulsar, using the Deutsch solution. Particles are injected within the magnetosphere, neglecting radiation reaction, interactions among them, and their feedback on the fields. Our simulations are therefore not yet fully self-consistent, because the Maxwell equations are not solved according to the current produced by these particles. The code highlights the symmetrical behaviour of particles of opposite charge-to-mass ratio $q/m$ with respect to the north and south hemispheres. The relativistic Lorentz factor of the accelerated particles is proportional to this ratio $q/m$: protons reach up to $\gamma_p \simeq 10^{10.7}$, whereas electrons reach up to $\gamma_e \simeq 10^{14}$. Our simulations show that particles can be either captured by the neutron star, trapped around it, or ejected far from it, well outside the light cylinder. In fact, for a given charge-to-mass ratio, particles follow similar trajectories. These particle orbits show some depleted directions, especially at high magnetic inclination with respect to the rotation axis for positive charges and at low inclination for negative charges, because of symmetry. Other directions are preferred and loaded with a high density of particles, some directions concentrating the highest or lowest acceleration efficiencies.
|
arxiv:2007.04797
|
The problem of formulating synchronous variational principles in the context of general relativity is discussed. Based on the analogy with classical relativistic particle dynamics, the existence of variational principles is pointed out in relativistic classical field theory which are either asynchronous or synchronous. The historical Einstein-Hilbert and Palatini variational formulations are found to belong to the first category. Nevertheless, it is shown that an alternative route exists which permits one to cast these principles in terms of equivalent synchronous Lagrangian variational formulations. The advantage is twofold. First, synchronous approaches allow one to overcome the lack of gauge symmetry of the asynchronous principles. Second, the property of manifest covariance of the theory is also restored at all levels, including the symbolic Euler-Lagrange equations, with the variational Lagrangian density now being identified with a $4$-scalar. As an application, a joint synchronous variational principle holding both for the non-vacuum Einstein and Maxwell equations is displayed, with the matter source being described by means of a Vlasov kinetic treatment.
|
arxiv:1609.04418
|
Recently it has been discussed whether a bow shock ahead of the heliospheric stagnation region exists or not. This discussion was triggered by measurements indicating that the Alfvén speed and that of fast magnetosonic waves are higher than the flow speed of the local interstellar medium (LISM) relative to the heliosphere, and it resulted in the conclusion that there might exist either a bow wave or a slow magnetosonic shock. We demonstrate here that including the He$^{+}$ component of the LISM yields both an Alfvén and a fast magnetosonic wave speed lower than the LISM flow speed. Consequently, the scenario of a bow shock in front of the heliosphere, as modelled in numerous simulations of the interaction of the solar wind with the LISM, remains valid.
|
arxiv:1312.1197
|
This is a slightly revised version of the presidential address (general) delivered at the 84th Annual Conference of the Indian Mathematical Society, held at Jammu, India, during November 2018.
|
arxiv:2004.02100
|
We study the semileptonic branching fraction of the $B$-meson into higher resonances of charmed mesons $D^{**}$ by using the Bjorken sum rule and the heavy quark effective theory (HQET). This sum rule and the current experiments on $B$-meson semileptonic decay into $D$ and $D^*$ predict that the branching ratio into $D^{**} l \nu_l$ is about 1.7%. This predicted value is larger than the values obtained from various theoretical hadron models based on HQET.
|
arxiv:hep-ph/9603355
|
Information retrieval techniques have demonstrated exceptional capabilities in identifying semantic similarities across diverse domains through robust feature representations. However, their potential in guiding synthesis tasks, particularly cross-view image synthesis, remains underexplored. Cross-view image synthesis presents significant challenges in establishing reliable correspondences between drastically different viewpoints. To address this, we propose a novel retrieval-guided framework that reimagines how retrieval techniques can facilitate effective cross-view image synthesis. Unlike existing methods that rely on auxiliary information, such as semantic segmentation maps or preprocessing modules, our retrieval-guided framework captures semantic similarities across different viewpoints, trained through contrastive learning to create a smooth embedding space. Furthermore, a novel fusion mechanism leverages these embeddings to guide image synthesis while learning and encoding both view-invariant and view-specific features. To further advance this area, we introduce VIGOR-GEN, a new urban-focused dataset with complex viewpoint variations in real-world scenarios. Extensive experiments demonstrate that our retrieval-guided approach significantly outperforms existing methods on the CVUSA, CVACT and VIGOR-GEN datasets, particularly in retrieval accuracy (R@1) and synthesis quality (FID). Our work bridges information retrieval and synthesis tasks, offering insights into how retrieval techniques can address complex cross-domain synthesis challenges.
|
arxiv:2411.19510
|
We have performed systematic studies of narrow Fe-K line (6.4 keV) flux variability and Ni-K line intensity for Seyfert galaxies, using Suzaku and XMM-Newton archival data. Significant Fe-K line variability of several tens of percent was detected for pairs of observations separated by 1000-2000 days (Cen A, IC 4329A, NGC 3516, and NGC 4151) and 158 days (NGC 3516). These timescales are larger by a factor of 10-100 than the light-crossing time of the inner radius of the torus, consistent with the view that X-ray reflection by the torus is a main origin of the narrow Fe-K line. The Ni-K line was detected at the $>2\sigma$ level for the Circinus galaxy, Cen A, Mrk 3, NGC 4388, and NGC 4151. The mean and variance of the Ni-K$\alpha$ to Fe-K$\alpha$ line intensity ratio are 0.066 and 0.026, respectively. Comparing this with a Monte Carlo simulation of reflection, the Ni to Fe abundance ratio is 1.9 $\pm$ 0.8 solar. We discuss the results and the possibility of Ni abundance enhancement.
|
arxiv:1603.08630
|
The dependence of the Raman spectrum on the excitation energy has been investigated for ABA- and ABC-stacked few-layer graphene in order to establish the fingerprint of the stacking order and the number of layers, which affect the transport and optical properties of few-layer graphene. Five different excitation sources with energies of 1.96, 2.33, 2.41, 2.54 and 2.81 eV were used. The position and the line shape of the Raman 2D, G*, N, M, and other combination modes show dependence on the excitation energy as well as on the stacking order and the thickness. One can unambiguously determine the stacking order and the thickness by comparing the 2D band spectra measured with two different excitation energies, or by carefully comparing weaker combination Raman modes such as the N, M, or LOLA modes. The criteria for unambiguous determination of the stacking order and the number of layers up to 5 layers are established.
|
arxiv:1404.1252
|
The framework of Bodlaender et al. (ICALP 2008) and Fortnow and Santhanam (STOC 2008) allows us to exclude the existence of polynomial kernels for a range of problems under reasonable complexity-theoretical assumptions. However, there are also some issues that are not addressed by this framework, including the existence of Turing kernels such as the "kernelization" of Leaf Out-Branching(k) into a disjunction over n instances of size poly(k). Observing that Turing kernels are preserved by polynomial parametric transformations, we define a kernelization hardness hierarchy, akin to the M- and W-hierarchy of ordinary parameterized complexity, by the PPT-closure of problems that seem likely to be fundamentally hard for efficient Turing kernelization. We find that several previously considered problems are complete for our fundamental hardness class, including Min Ones d-SAT(k), Binary NDTM Halting(k), Connected Vertex Cover(k), and Clique(k log n), the clique problem parameterized by k log n.
|
arxiv:1110.0976
|
We develop a novel framework for fully decentralized offloading policy design in multi-access edge computing (MEC) systems. The system comprises $N$ power-constrained user equipments (UEs) assisted by an edge server (ES) to process incoming tasks. Tasks are labeled with urgency flags, and in this paper we classify them under three urgency levels, namely high, moderate, and low urgency. We formulate the problem of designing computation decisions for the UEs within a large-population noncooperative game framework, where each UE selfishly decides how to split task execution between its local onboard processor and the ES. We employ the weighted average age of information (AoI) metric to quantify information freshness at the UEs. Increased onboard processing consumes more local power, while increased offloading may potentially incur a higher average AoI due to other UEs' packets being offloaded to the same ES. Thus, we use the mean-field game (MFG) formulation to compute approximate decentralized Nash equilibrium offloading and local computation policies for the UEs that balance information freshness against local power consumption. Finally, we provide a projected gradient descent-based algorithm to numerically assess the merits of our approach.
|
arxiv:2501.05660
|
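The projected gradient descent mentioned at the end of the entry above is a standard primitive. A minimal sketch, assuming each UE's decision variable is an offloading fraction constrained to [0, 1]; the quadratic objective below is a hypothetical placeholder, not the paper's AoI-plus-power cost:

```python
import numpy as np

def projected_gradient_descent(grad, x0, lr=0.05, steps=200):
    """Gradient step followed by projection onto the box [0, 1]^n."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = np.clip(x - lr * grad(x), 0.0, 1.0)  # clip = projection onto box
    return x

# Hypothetical smooth trade-off with minimizer at 0.3 for every UE.
grad = lambda x: 2.0 * (x - 0.3)  # gradient of ||x - 0.3||^2
x_star = projected_gradient_descent(grad, x0=np.ones(5))
print(x_star)  # each entry converges to ~0.3, inside the feasible box
```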
As various forms of fraud proliferate on Ethereum, it is imperative to safeguard against these malicious activities to protect susceptible users from being victimized. While current studies solely rely on graph-based fraud detection approaches, it is argued that they may not be well-suited for dealing with highly repetitive, skew-distributed and heterogeneous Ethereum transactions. To address these challenges, we propose BERT4ETH, a universal pre-trained Transformer encoder that serves as an account representation extractor for detecting various fraud behaviors on Ethereum. BERT4ETH features the superior modeling capability of the Transformer to capture the dynamic sequential patterns inherent in Ethereum transactions, and addresses the challenges of pre-training a BERT model for Ethereum with three practical and effective strategies, namely repetitiveness reduction, skew alleviation and heterogeneity modeling. Our empirical evaluation demonstrates that BERT4ETH outperforms state-of-the-art methods with significant enhancements on the phishing account detection and de-anonymization tasks. The code for BERT4ETH is available at: https://github.com/git-disl/bert4eth.
|
arxiv:2303.18138
|
The electron scattering from periodic line defects on the surface of topological insulators with a hexagonal warping effect is investigated theoretically by means of a transfer matrix method. The influence of surface line defects, acting as structural ripples, on the propagation of electrons is studied in two perpendicular directions, owing to the asymmetry of the warped energy contour under momentum exchange. The transmission profiles and the details of the resonant peaks, which vary with the number of defects and the strength of their potentials, are strongly dependent on the direction in which the line defects extend. At low energies, the quantum interference between the incident and reflected propagating electrons makes the dominant contribution to the transmission resonances, while at high energies multiple scattering processes on the constant-energy contour also appear because of the strong warping effect. By increasing the spatial separation between the line defects, the minimum value of the electrical conductance remains significantly high at low incident energies, while it may approach zero at high energies as the number of defects is increased. Our findings suggest that potential ripples on the surface of topological insulators can be utilized to control the local electronic properties of these materials.
|
arxiv:2007.12867
|
We consider solving nonlinear optimization problems with a stochastic objective and deterministic equality constraints. We assume for the objective that its evaluation, gradient, and Hessian are inaccessible, while one can compute their stochastic estimates by, for example, subsampling. We propose a stochastic algorithm based on sequential quadratic programming (SQP) that uses a differentiable exact augmented Lagrangian as the merit function. To motivate our algorithm design, we first revisit and simplify an old SQP method \citep{lucidi1990recursive} developed for solving deterministic problems, which serves as the skeleton of our stochastic algorithm. Based on the simplified deterministic algorithm, we then propose a non-adaptive SQP for dealing with the stochastic objective, where the gradient and Hessian are replaced by stochastic estimates but the stepsizes are deterministic and prespecified. Finally, we incorporate a recent stochastic line search procedure \citep{paquette2020stochastic} into the non-adaptive stochastic SQP to adaptively select the random stepsizes, which leads to an adaptive stochastic SQP. The global "almost sure" convergence for both the non-adaptive and adaptive SQP methods is established. Numerical experiments on nonlinear problems in the CUTEst test set demonstrate the superiority of the adaptive algorithm.
|
arxiv:2102.05320
|
Dawn-dusk asymmetries are ubiquitous features of the coupled solar-wind-magnetosphere-ionosphere system. During the last decades, the increasing availability of satellite and ground-based measurements has made it possible to study these phenomena in more detail. Numerous publications have documented the existence of persistent asymmetries in processes, properties and topology of plasma structures in various regions of geospace. In this paper, we present a review of our present knowledge of some of the most pronounced dawn-dusk asymmetries. We focus on four key aspects: (1) the role of external influences such as the solar wind and its interaction with the Earth's magnetosphere; (2) properties of the magnetosphere itself; (3) the role of the ionosphere; and (4) feedback and coupling between regions. We have also identified potential inconsistencies and gaps in our understanding of dawn-dusk asymmetries in the Earth's magnetosphere and ionosphere.
|
arxiv:1701.04701
|
For all Poincaré-invariant Lagrangians of the form ${\cal L} \equiv f(F_{\mu\nu})$ in three Euclidean dimensions, where $f$ is any invariant function of a non-compact $U(1)$ field strength $F_{\mu\nu}$, we find that the only continuum limit (described by just such a gauge field) is that of free field theory: first we approximate a gauge-invariant version of Wilson's renormalization group by neglecting all higher-derivative terms $\sim \partial^n F$ in ${\cal L}$, but allowing for a general non-vanishing anomalous dimension. Then we prove analytically that the resulting flow equation has only one acceptable fixed point: the Gaussian fixed point. The possible relevance to high-$T_c$ superconductivity is briefly discussed.
|
arxiv:hep-th/9503225
|
In his Nobel Prize lecture, Victor Hess urged that different instruments, working together, should be used to solve the problem of the origin of cosmic rays. I review some of the key developments that have opened up the new fields of direct and indirect multi-messenger astronomy and that are guiding us to the solution of this riddle. I then discuss, very briefly, some of the new instruments that are shortly to come on line, and give examples to show the long lead times from conception to implementation that occur in this field. I conclude with some remarks about very ambitious future projects. The paper is not intended as a review: rather, it is an attempt to set down issues discussed in the Hess Memorial Public Lecture given at the 2019 ICRC in Madison, Wisconsin, and accessible at www.icrc2019.org.
|
arxiv:1909.00670
|
It is suggested that virtual gluon clusters exist in the nucleon, and that such colorless and colored objects manifest themselves in the small-$x_B$ region of inelastic lepton-nucleon scattering processes. The relationship between the space-time properties of such clusters and the striking features observed in these scattering processes is discussed. A phase-space model is used to show how quantitative results can be obtained in such an approach. The results of this model calculation are in reasonable agreement with the existing data. Further experiments are suggested.
|
arxiv:hep-ph/9509422
|
Most current work in NLP utilizes deep learning, which requires a lot of training data and computational power. This paper investigates the strengths of genetic algorithms (GAs) for extractive summarization, as we hypothesized that GAs could construct more efficient solutions for the summarization task due to their customizability relative to deep learning models. This is done by building a vocabulary set whose words are represented as an array of weights, and optimizing that set of weights with the GA. These weights can be used to build an overall weighting of a sentence, which can then be passed to some threshold for extraction. Our results showed that the GA was able to learn a weight representation that could filter out excessive vocabulary and thus dictate sentence importance based on common English words.
|
arxiv:2105.02365
|
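Since the abstract above spells out its mechanism (word weights evolved by a GA; sentences scored by summed weights and thresholded), a compact sketch is natural. Everything here — the toy vocabulary, the fitness function, the GA hyperparameters — is an illustrative assumption, not the paper's actual setup:

```python
import random

VOCAB = ["data", "model", "results", "method", "the", "of", "and"]

def sentence_score(sentence, weights):
    """Sum of learned word weights; higher means more extract-worthy."""
    return sum(weights.get(w, 0.0) for w in sentence.lower().split())

def summarize(sentences, weights, threshold=1.0):
    """Extract every sentence whose score clears the threshold."""
    return [s for s in sentences if sentence_score(s, weights) > threshold]

def evolve(fitness, pop_size=30, generations=50, mut_rate=0.1):
    """Plain GA over weight vectors: truncation selection, uniform
    crossover, Gaussian mutation."""
    pop = [[random.uniform(-1, 1) for _ in VOCAB] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
            child = [w + random.gauss(0, 0.2) if random.random() < mut_rate
                     else w for w in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

if __name__ == "__main__":
    # Hypothetical fitness: reward down-weighting stop words.
    stop = {"the", "of", "and"}
    fitness = lambda ws: sum(-w if t in stop else w for t, w in zip(VOCAB, ws))
    weights = dict(zip(VOCAB, evolve(fitness)))
    print(summarize(["The model results", "Of the and of"], weights))
```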
Nanostructured silicon is a promising material for thermoelectric conversion, because the thermal conductivity in silicon nanostructures can be strongly reduced with respect to that of bulk materials. We present thermal conductivity measurements, performed with the 3$\omega$ technique, of suspended monocrystalline silicon thin films (nanomembranes or nanoribbons) with smooth and rough surfaces. We find evidence for a significant effect of surface roughness on phonon propagation: the measured thermal conductivity for the rough structures is well below that predicted by theoretical models which take into account diffusive scattering at the nanostructure walls. Conversely, the electrical conductivity appears to be substantially unaffected by surface roughness: the measured resistances of smooth and rough nanostructures are comparable once the geometrical factors are taken into account. Nanomembranes are more easily integrable in large-area devices than nanowires, are mechanically stronger, and can handle much larger electrical currents (thus enabling the fabrication of thermoelectric devices that can supply higher power levels than currently existing solutions).
|
arxiv:1710.04403
|
Calibration is an essential prerequisite for the accurate data fusion of LiDAR and camera sensors. Traditional calibration techniques often require specific targets or suitable scenes to obtain reliable 2D-3D correspondences. To tackle the challenge of target-less and online calibration, deep neural networks have been introduced to solve the problem in a data-driven manner. While previous learning-based methods have achieved impressive performance on specific datasets, they still struggle in complex real-world scenarios. Most existing works focus on improving calibration accuracy but overlook the underlying mechanisms. In this paper, we revisit the development of learning-based LiDAR-camera calibration and encourage the community to pay more attention to the underlying principles to advance practical applications. We systematically analyze the paradigm of mainstream learning-based methods, and identify the critical limitations of regression-based methods with the widely used data generation pipeline. Our findings reveal that most learning-based methods inadvertently operate as retrieval networks, focusing more on single-modality distributions than on cross-modality correspondences. We also investigate how the input data format and preprocessing operations impact network performance, and summarize the regression clues to inform further improvements.
|
arxiv:2501.16969
|
We analyze the topological nature of $c=1$ string theory at the self-dual radius. We find that it admits two distinct topological field theory structures characterized by two different puncture operators. We show this first in the unperturbed theory, in which the only parameter is the cosmological constant, and then in the presence of an arbitrary infinitesimal tachyonic perturbation. We also discuss in detail a Landau-Ginzburg representation of one of the two topological field theory structures.
|
arxiv:hep-th/9505140
|
Dark matter simulations can serve as a basis for creating galaxy histories via the galaxy-dark matter connection. Here, one such model by Becker (2015) is implemented with several variations on three different dark matter simulations. Stellar masses and star formation rates are assigned to all simulation subhalos at all times, using subhalo mass gain to determine stellar mass gain. The observational properties of the resulting galaxy distributions are compared to each other and to observations for a range of redshifts from 0 to 2. Although many of the galaxy distributions seem reasonable, there are noticeable differences as simulations, subhalo mass gain definitions, or subhalo mass definitions are altered, suggesting that the model should change as these properties are varied. Agreement with observations may improve by including redshift dependence in the added-by-hand random contribution to the star formation rate. There appears to be an excess of faint quiescent galaxies as well (perhaps due in part to differing definitions of quiescence). The ensemble of galaxy formation histories for these models tends to have more scatter around the average histories (for a fixed final stellar mass) than the two more predictive and elaborate semi-analytic models of Guo et al. (2013) and Henriques et al. (2015), and requires more basis fluctuations (using PCA) to capture 90 percent of the scatter around the average histories. The codes to plot model predictions (in some cases alongside observational data) are publicly available to test other mock catalogues at https://github.com/jdcphysics/validation. Information on how to use these codes is in the appendix.
|
arxiv:1609.03956
|
Various coarse-grained models have been proposed to study spreading dynamics on networks. A microscopic theory is needed to connect the spreading dynamics with individual behaviors. In this Letter, we unify the description of different spreading dynamics on complex networks by decomposing the microscopic dynamics into two basic processes, the aging process and the contact process. A microscopic dynamical equation is derived to describe the dynamics of individual nodes on the network. The hierarchy of a duration coarse-grained (DCG) approach is obtained to study duration-dependent processes, where the transition rates depend on the duration of an individual node in a state. Applied to epidemic spreading, this formalism readily reproduces different epidemic models, e.g., the susceptible-infected-recovered and the susceptible-infected-susceptible models, and associates the corresponding macroscopic spreading parameters with the microscopic transition rates. The DCG approach enables us to obtain the steady state of the general SIS model with arbitrary duration-dependent recovery and infection rates. The present hierarchical formalism can also be used to describe the spreading of information and public opinions, or to model a reliability theory in networks.
|
arxiv:2009.06919
|
. Berndt, Ramanujan's Lost Notebook: Part III (Springer, 2012, ISBN 978-1-4614-3809-0); George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part IV (Springer, 2013, ISBN 978-1-4614-4080-2); George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part V (Springer, 2018, ISBN 978-3-319-77832-7); M. P. Chaudhary, A simple solution of some integrals given by Srinivasa Ramanujan (Resonance: J. Sci. Education – publication of Indian Academy of Science, 2008); M. P. Chaudhary, Mock theta functions to mock theta conjectures, Scientia, Series A: Math. Sci. (22) (2012) 33–46; M. P. Chaudhary, On modular relations for the Roger-Ramanujan type identities, Pacific J. Appl. Math., 7(3) (2016) 177–184. == Selected publications on Ramanujan and his work == == Selected publications on works of Ramanujan == == See also == == Footnotes == == References == == External links == === Media links === Biswas, Soutik (16 March 2006). "Film to celebrate mathematics genius". BBC. Retrieved 24 August 2006. Feature film on mathematics genius Ramanujan by Dev Benegal and Stephen Fry. BBC radio programme about Ramanujan – episode 5. A biographical song about Ramanujan's life. "Why did this mathematician's equations make everyone so angry?". Thoughty2. 11 April 2022. Retrieved 29 June 2022 – via YouTube. === Biographical links === Srinivasa Ramanujan at the Mathematics Genealogy Project. O'Connor, John J.; Robertson, Edmund F., "Srinivasa Ramanujan", MacTutor History of Mathematics Archive, University of St Andrews. Weisstein, Eric Wolfgang (ed.). "Ramanujan, Srinivasa (1887–1920)". ScienceWorld. A short biography of Ramanujan. "Our devoted site for great mathematical genius". === Other links === Wolfram, Stephen (27 April 2016). "Who
|
https://en.wikipedia.org/wiki/Srinivasa_Ramanujan
|
We consider a linear regression model with a spatially correlated error term on a lattice. When estimating coefficients in the linear regression model, the generalized least squares estimator (GLSE) is used if the covariance structure is known. However, the GLSE for large spatial data sets is computationally expensive, because it involves inverting the covariance matrix of the error terms across all observations. To reduce the computational complexity, we propose a pseudo best estimator (PBE) using spatial covariance structures approximated by separable covariance functions. We derive the asymptotic covariance matrix of the PBE and compare it with those of the least squares estimator (LSE) and the GLSE through simulations. Monte Carlo simulations demonstrate that the PBE using separable covariance functions has superior accuracy to the LSE, which does not contain the information of the spatial covariance structure, even if the true process has an isotropic Matérn covariance function. Additionally, our proposed PBE is computationally efficient relative to the GLSE for large spatial data sets.
|
arxiv:1212.6596
|
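The computational payoff of a separable covariance approximation comes from Kronecker structure: if the lattice covariance factors as Σ = A ⊗ B, then Σ⁻¹ = A⁻¹ ⊗ B⁻¹, so only the small factors need inverting. A minimal numerical check with illustrative matrices (not the paper's estimator or its Matérn truth):

```python
import numpy as np

# Small SPD factors standing in for row/column covariances on a lattice.
idx4, idx5 = np.arange(4), np.arange(5)
A = np.eye(4) + 0.3 * np.exp(-np.abs(np.subtract.outer(idx4, idx4)))
B = np.eye(5) + 0.3 * np.exp(-np.abs(np.subtract.outer(idx5, idx5)))

sigma_inv_direct = np.linalg.inv(np.kron(A, B))               # O((mn)^3)
sigma_inv_kron = np.kron(np.linalg.inv(A), np.linalg.inv(B))  # O(m^3 + n^3)

print(np.allclose(sigma_inv_direct, sigma_inv_kron))  # True
```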
Lack of conservation has been the biggest drawback of meshfree generalized finite difference methods (GFDMs). In this paper, we present a novel modification of classical meshfree GFDMs to include local balances which produce an approximate conservation of numerical fluxes. This numerical flux conservation is done within the usual moving least squares framework. Unlike finite volume methods, it is based on locally defined control cells, rather than a globally defined mesh. We present the application of this method to an advection-diffusion equation and the incompressible Navier-Stokes equations. Our simulations show that the introduction of flux conservation significantly reduces the conservation errors in meshfree GFDMs.
|
arxiv:1701.08973
|
We report our investigation of input signal amplification in unidirectionally coupled monostable Duffing oscillators in one and two dimensions, with the first oscillator alone driven by a weak periodic signal. Applying a perturbation theory, we obtain a set of nonlinear equations for the response amplitude of the coupled oscillators. We identify the conditions for undamped signal propagation with enhanced amplitude through the coupled oscillators. When the number of oscillators increases, the response amplitude approaches a limiting value, which we determine. We also analyse the signal amplification in the coupled oscillators in two dimensions with a fraction of the oscillators chosen randomly for coupling and forcing.
|
arxiv:1404.5397
|
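A minimal numerical sketch of the setup this entry describes: a one-dimensional chain of monostable Duffing oscillators, unidirectionally coupled, with only the first driven by a weak periodic force. Parameter values are illustrative assumptions, not those analyzed in the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

N, d, w0sq, beta = 20, 0.5, 1.0, 1.0   # chain length, damping, linear/cubic stiffness
f, w, eps = 0.1, 1.0, 1.2              # weak drive on oscillator 1, coupling strength

def rhs(t, y):
    x, v = y[:N], y[N:]
    drive = np.zeros(N)
    drive[0] = f * np.cos(w * t)        # only the first oscillator is forced
    coupling = np.zeros(N)
    coupling[1:] = eps * x[:-1]         # unidirectional coupling x_{i-1} -> x_i
    a = -d * v - w0sq * x - beta * x**3 + drive + coupling
    return np.concatenate([v, a])

sol = solve_ivp(rhs, (0, 500), np.zeros(2 * N), max_step=0.05)
x_late = sol.y[:N, sol.t > 400]         # discard transients
print("response amplitudes along the chain:", np.ptp(x_late, axis=1) / 2)
```

With coupling strong enough, the printed amplitudes grow along the chain before saturating, which is the qualitative behavior (amplification approaching a limiting value) the abstract describes.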
The paper defends the thesis that it is possible to maintain some conceptual preconditions for overcoming relativistic tendencies in the modern philosophy of science ("there are no general foundations in philosophy of science"). We find at least two general foundations in philosophy of science. On the one hand, it is realistic to reveal, on the basis of a special understanding of time, the value of time not only in natural-scientific thought (especially in the theory of gravity) but also in humanitarian knowledge. That is why philosophy of science has an independent position in epistemology and ontology, corresponding to the interpretation of time as a general category of scientific thinking. The nature of time has an internally inconsistent (paradoxical) character: time is a phenomenon which exists and does not exist at the same time. This phenomenon is identified with imaginary movement and also with the ideal (formal) process of formation of nature. The general understanding of time is connected with its "mathematical" meaning as a calculable formal regulation of language practice, and also with the universal organizing rules for the quantitative parameters of the understanding of natural (physical) processes. On the other hand, an actual branch of philosophy of science exists on the basis of the disclosure of the aprioristic limits of consciousness in its cultural and historical development. Here a special interpretation of time is possible: in that context, time is the connection of the action of a cultural phenomenon, or its "energy", with some kind of "weight", the historical importance of a separate limit of consciousness, through an analogue of "distance" as the intensity of cultural-historical space (or "the oppositional nature of the interaction of mental intentions").
|
arxiv:1407.3585
|
Due to the strong dependency of our societies on global navigation satellite systems and their vulnerability to outages, there is an urgent need for additional navigation systems. A possible approach for such an additional system uses the communication signals of the emerging LEO satellite mega-constellations as signals of opportunity. The Doppler shift of those signals is leveraged to calculate positioning, navigation and timing information. For this, the signals have to be detected and their frequency has to be estimated. In this paper, we present the results of Starlink signal measurements. The results are used to develop a novel correlation-based detection algorithm for Starlink burst signals. The carrier frequency of the detected bursts is measured and the attainable positioning accuracy is estimated. It is shown that the presented algorithms are applicable for a navigation solution in an operationally relevant setup using an omnidirectional antenna.
|
arxiv:2304.09535
|
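The correlation-based burst detection referenced above follows a standard pattern: correlate the received complex baseband samples against a known reference sequence and threshold the correlation peak. A minimal sketch on a synthetic signal — the actual Starlink burst structure, reference sequence, and thresholds are defined in the paper, not here:

```python
import numpy as np

rng = np.random.default_rng(1)
preamble = np.exp(2j * np.pi * rng.random(64))        # known reference sequence
rx = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) * 0.5
rx[1000:1064] += preamble                             # bury one burst in noise

# np.correlate conjugates its second argument, so this is a matched filter.
corr = np.abs(np.correlate(rx, preamble, mode="valid"))
peak = int(np.argmax(corr))
threshold = 5.0 * np.median(corr)                     # crude CFAR-like threshold
if corr[peak] > threshold:
    print(f"burst detected at sample {peak}")         # prints 1000
```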
In recent years, a specific machine learning method called deep learning has gained huge traction, as it has obtained astonishing results in broad applications such as pattern recognition, speech recognition, computer vision, and natural language processing. Recent research has also shown that deep learning techniques can be combined with reinforcement learning methods to learn useful representations for problems with high-dimensional raw data input. This chapter reviews the recent advances in deep reinforcement learning, with a focus on the most used deep architectures, such as autoencoders, convolutional neural networks and recurrent neural networks, which have been successfully combined with the reinforcement learning framework.
|
arxiv:1806.08894
|
The main objective of this work is to study a natural class of catalytic Ornstein-Uhlenbeck (O-U) processes with a measure-valued random catalyst, for example, super-Brownian motion. We relate this to the class of affine processes, which provides a unified setting in which to view Ornstein-Uhlenbeck processes, superprocesses, and Ornstein-Uhlenbeck processes with superprocess catalyst. We then review some basic properties of super-Brownian motion which we need, and introduce the Ornstein-Uhlenbeck process with catalyst given by a superprocess. The main results are the affine characterization of the characteristic functional (Laplace transform) of the joint catalytic O-U process and catalyst process, and the identification of basic properties of the quenched and annealed versions of these processes.
|
arxiv:1411.4907
|
Endoscopic surgery relies on two-dimensional views, posing challenges for surgeons in depth perception and instrument manipulation. While monocular visual simultaneous localization and mapping (MVSLAM) has emerged as a promising solution, its implementation in endoscopic procedures faces significant challenges due to hardware limitations, such as the use of a monocular camera and the absence of odometry sensors. This study presents BodySLAM, a robust deep learning-based MVSLAM approach that addresses these challenges through three key components: CycleVO, a novel unsupervised monocular pose estimation module; the integration of the state-of-the-art Zoe architecture for monocular depth estimation; and a 3D reconstruction module creating a coherent surgical map. The approach is rigorously evaluated using three publicly available datasets (Hamlyn, EndoSLAM, and SCARED) spanning laparoscopy, gastroscopy, and colonoscopy scenarios, and benchmarked against four state-of-the-art methods. Results demonstrate that CycleVO exhibited competitive performance with the lowest inference time among pose estimation methods while maintaining robust generalization capabilities, whereas Zoe significantly outperformed existing algorithms for depth estimation in endoscopy. BodySLAM's strong performance across diverse endoscopic scenarios demonstrates its potential as a viable MVSLAM solution for endoscopic applications.
|
arxiv:2408.03078
|
The mean of a random variable can be understood as a $\textit{linear}$ functional on the space of probability distributions. Quantum computing is known to provide a quadratic speedup over classical Monte Carlo methods for mean estimation. In this paper, we investigate whether a similar quadratic speedup is achievable for estimating $\textit{non-linear}$ functionals of probability distributions. We propose a quantum-inside-quantum Monte Carlo algorithm that achieves such a speedup for a broad class of non-linear estimation problems, including nested conditional expectations and stochastic optimization. Our algorithm improves upon the direct application of the quantum multilevel Monte Carlo algorithm introduced by An et al. The existing lower bound indicates that our algorithm is optimal up to polylogarithmic factors. A key innovation of our approach is a new sequence of multilevel Monte Carlo approximations specifically designed for quantum computing, which is central to the algorithm's improved performance.
|
arxiv:2502.05094
|
In this paper we present a new approach for testing QM against the realism aspect of hidden variable theory (HVT). We consider successive measurements of non-commuting operators on an input spin-$s$ state. The key point is that, although these operators are non-commuting, they act on different states, so that the joint probabilities for the outputs of successive measurements are well defined. We show that, in this scenario, HVT leads to Bell-type inequalities for the correlation between the outputs of successive measurements. We account for the maximum violation of these inequalities by quantum correlations by varying the spin value and the number of successive measurements. Our approach can be used to obtain a measure of the deviation of QM from realism, say in terms of the amount of information needed to be transferred between successive measurements in order to classically simulate the quantum correlations.
|
arxiv:quant-ph/0602005
|
Flow harmonics ($v_n$) in the Fourier expansion of the azimuthal distribution of particles are widely used to quantify the anisotropy in particle emission in high-energy heavy-ion collisions. The symmetric cumulants, $SC(m,n)$, are used to measure the correlations between different orders of flow harmonics. These correlations are used to constrain the initial conditions and the transport properties of the medium in theoretical models. In this Letter, we present the first measurements of the four-particle symmetric cumulants in Au+Au collisions at $\sqrt{s_{NN}}$ = 39 and 200 GeV from data collected by the STAR experiment at RHIC. We observe that $v_2$ and $v_3$ are anti-correlated in all centrality intervals, with similar correlation strengths from 39 GeV Au+Au to 2.76 TeV Pb+Pb (measured by the ALICE experiment). The $v_2$-$v_4$ correlation seems to be stronger at 39 GeV than at higher collision energies. The initial-stage anti-correlations between second- and third-order eccentricities are sufficient to describe the measured correlations between $v_2$ and $v_3$. The best description of the $v_2$-$v_4$ correlations at $\sqrt{s_{NN}}$ = 200 GeV is obtained with the inclusion of the system's nonlinear response to the initial eccentricities, accompanied by the viscous effect with $\eta/s > 0.08$. Theoretical calculations using different initial conditions, equations of state and viscous coefficients need to be further explored to extract the $\eta/s$ of the medium created at RHIC.
|
arxiv:1803.03876
|
current techniques in machine learning are so far unable to learn classifiers that are robust to adversarial perturbations. however, they are able to learn non - robust classifiers with very high accuracy, even in the presence of random perturbations. towards explaining this gap, we highlight the hypothesis that $ \ textit { robust classification may require more complex classifiers ( i. e. more capacity ) than standard classification. } $ in this note, we show that this hypothesis is indeed possible, by giving several theoretical examples of classification tasks and sets of " simple " classifiers for which : ( 1 ) there exists a simple classifier with high standard accuracy, and also high accuracy under random $ \ ell _ \ infty $ noise. ( 2 ) any simple classifier is not robust : it must have high adversarial loss with $ \ ell _ \ infty $ perturbations. ( 3 ) robust classification is possible, but only with more complex classifiers ( exponentially more complex, in some examples ). moreover, $ \ textit { there is a quantitative trade - off between robustness and standard accuracy among simple classifiers. } $ this suggests an alternate explanation of this phenomenon, which appears in practice : the tradeoff may occur not because the classification task inherently requires such a tradeoff ( as in [ tsipras - santurkar - engstrom - turner - madry ` 18 ] ), but because the structure of our current classifiers imposes such a tradeoff.
|
arxiv:1901.00532
|
we report vlt - isaac and keck - nirspec near - infrared spectroscopy for a sample of 30 0. 47 < z < 0. 92 cfrs galaxies and five [ oii ] - selected, m _ b, ab < - 21. 5, z ~ 1. 4 galaxies. we have measured halpha and [ nii ] line fluxes for the cfrs galaxies which have [ oii ], hbeta and [ oiii ] line fluxes available from optical spectroscopy. for the z ~ 1. 4 objects we measured hbeta and [ oiii ] emission line fluxes from j - band spectra, and halpha line fluxes plus upper limits for [ nii ] fluxes from h - band spectra. we derive the extinction and oxygen abundances for the sample using a method based on a set of ionisation parameter and oxygen abundance diagnostics, simultaneously fitting the [ oii ], hbeta, [ oiii ], halpha and [ nii ] line fluxes. our most salient conclusions are : a ) the source of gas ionisation in the 30 cfrs and in all z ~ 1. 4 galaxies is not due to agn activity ; b ) about one third of the 0. 47 < z < 0. 92 cfrs galaxies in our sample have substantially lower metallicities than local galaxies with similar luminosities and star formation rates ; c ) comparison with a chemical evolution model indicates that these low metallicity galaxies are unlikely to be the progenitors of metal - poor dwarf galaxies at z ~ 0, but more likely the progenitors of massive spirals ; d ) the z ~ 1. 4 galaxies are characterized by the high [ oiii ] / [ oii ] line ratios, low extinction and low metallicity that are typical of lower luminosity cadis galaxies at 0. 4 < z < 0. 7, and of more luminous lyman break galaxies at z ~ 3. 1, but not seen in cfrs galaxies at 0. 4 < z < 1. 0 ; e ) the properties of the z ~ 1. 4 galaxies suggest that the period of rapid chemical evolution takes place progressively in lower mass systems as the universe ages, and thus provides further support for a downsizing picture of galaxy formation, at least from z ~ 1. 4 to today.
|
arxiv:astro-ph/0509114
|
the presence of inhomogeneities modifies the cosmic distances through the gravitational lensing effect, and, indirectly, must affect the main cosmological tests. assuming that the dark energy is a smooth component, the simplest way to account for the influence of clustering is to suppose that the average evolution of the expanding universe is governed by the total matter - energy density whereas the focusing of light is only affected by a fraction of the total matter density quantified by the $ \ alpha $ dyer - roeder parameter. by using two different samples of sne type ia data, the $ \ omega _ m $ and $ \ alpha $ parameters are constrained by applying the zeldovich - kantowski - dyer - roeder ( zkdr ) luminosity distance redshift relation for a flat ( $ \ lambda $ cdm ) model. a $ \ chi ^ { 2 } $ - analysis using the 115 sne ia data of the astier { \ it et al. } ( 2006 ) sample constrains the density parameter to be $ \ omega _ m = 0. 26 _ { - 0. 07 } ^ { + 0. 17 } $ ( $ 2 \ sigma $ ) while the $ \ alpha $ parameter is weakly limited ( all the values $ \ in [ 0, 1 ] $ are allowed even at 1 $ \ sigma $ ). however, a similar analysis based on the 182 sne ia data of riess { \ it et al. } ( 2007 ) constrains the pair of parameters to be $ \ omega _ m = 0. 33 ^ { + 0. 09 } _ { - 0. 07 } $ and $ \ alpha \ geq 0. 42 $ ( $ 2 \ sigma $ ). basically, this occurs because the riess { \ it et al. } sample extends to appreciably higher redshifts. as a general result, even considering the existence of inhomogeneities as described by the smoothness $ \ alpha $ parameter, the einstein - de sitter model is ruled out by the two samples with a high degree of statistical confidence ( $ 11. 5 \ sigma $ and $ 9. 9 \ sigma $, respectively ). the inhomogeneous hubble - sandage diagram discussed here highlights the necessity of the dark energy, and a deceleration / acceleration transition at $ z \ sim 0. 5 $ is also required.
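a schematic of the $ \chi^2 $ - analysis described above, assuming gaussian errors on the distance moduli ; `mu_model` is a hypothetical callable standing in for the numerical integration of the zkdr luminosity distance ( not reproduced here ) :

```python
import numpy as np

def chi2(mu_obs, sigma, mu_th):
    """gaussian chi^2 over a supernova sample (arrays of equal length)."""
    return np.sum(((mu_obs - mu_th) / sigma) ** 2)

def grid_scan(z, mu_obs, sigma, mu_model, omegas, alphas):
    """brute-force scan over (omega_m, alpha); mu_model(z, om, al) is assumed
    to return the zkdr distance modulus at redshifts z."""
    best = (np.inf, None, None)
    for om in omegas:
        for al in alphas:
            c2 = chi2(mu_obs, sigma, mu_model(z, om, al))
            if c2 < best[0]:
                best = (c2, om, al)
    return best
```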
|
arxiv:0709.3679
|
enumerating ramified coverings of the sphere with fixed ramification types is a well - known problem first considered by a. hurwitz. up to now, explicit solutions have been obtained only for some families of ramified coverings, for instance, those realized by polynomials in one complex variable. in this paper we obtain an explicit answer for a large new family of coverings, namely, the coverings realized by simple almost polynomials, defined below. unlike most other results in the field, our formula is obtained by elementary methods.
|
arxiv:math/0504588
|
graph anomaly detection ( gad ) has achieved success and has been widely applied in various domains, such as fraud detection, cybersecurity, finance security, and biochemistry. however, existing graph anomaly detection algorithms focus on distinguishing individual entities ( nodes or graphs ) and overlook the possibility of anomalous groups within the graph. to address this limitation, this paper introduces a novel unsupervised framework for a new task called group - level graph anomaly detection ( gr - gad ). the proposed framework first employs a variant of graph autoencoder ( gae ) to locate anchor nodes that belong to potential anomaly groups by capturing long - range inconsistencies. subsequently, group sampling is employed to sample candidate groups, which are then fed into the proposed topology pattern - based graph contrastive learning ( tpgcl ) method. tpgcl utilizes the topology patterns of groups as clues to generate embeddings for each candidate group and thus distinguish anomaly groups. the experimental results on both real - world and synthetic datasets demonstrate that the proposed framework shows superior performance in identifying and localizing anomaly groups, highlighting it as a promising solution for gr - gad. datasets and codes of the proposed framework are at the github repository https://anonymous.4open.science/r/topology-pattern-enhanced-unsupervised-group-level-graph-anomaly-detection.
|
arxiv:2308.01063
|
we consider the motion of an incompressible shear - thickening power - law - like non - newtonian fluid in $ r ^ 3 $ with a variable power - law index. this system of nonlinear partial differential equations arises in mathematical models of electrorheological fluids. the aim of this paper is to investigate the large - time behaviour of the difference $ u - \ tilde { u } $ where $ u $ is a strong solution of the given equations with the initial data $ u _ 0 $ and $ \ tilde { u } $ is the strong solution of the same equations with perturbed initial data $ u _ 0 + w _ 0 $. the initial perturbation $ w _ 0 $ is not required to be small, but is assumed to satisfy certain decay condition. in particular, we can show that $ ( 1 + t ) ^ { - \ frac { \ gamma } { 2 } } \ lesssim \ | u ( t ) - \ tilde { u } ( t ) \ | _ 2 \ lesssim ( 1 + t ) ^ { - \ frac { \ gamma } { 2 } } $, for sufficiently large $ t > 0 $, where $ \ gamma \ in ( 2, \ frac { 5 } { 2 } ) $. the proof is based on the observation that the solution of the linear heat equation describes the asymptotic behaviour of the solutions of the electrorheological fluids well for sufficiently large time $ t > 0 $, and the generalized fourier splitting method with an iterative argument. furthermore, it will also be discussed that the argument used in the present paper can improve the previous results for the generalized newtonian fluids with a constant power - law index.
|
arxiv:2203.16116
|
within the framework of the massive o ( $ n $ ) nonlinear sigma model extended to the next - to - leading order in the chiral counting ( for $ n = 3 $ corresponding to the two ( - quark ) - flavor chiral perturbation theory ), we calculate the relativistic six - pion scattering amplitude at low energy up to and including terms $ \ mathcal { o } ( p ^ 4 ) $. results for the pion mass, decay constant and the four - pion amplitude in the case of $ n $ ( meson ) flavors at $ \ mathcal { o } ( p ^ 4 ) $ are also presented.
|
arxiv:2107.06291
|
accurate segmentation of power lines in various aerial images is very important for uav flight safety. the complex background and very thin structures of power lines, however, make it an inherently difficult task in computer vision. this paper presents plgan, a simple yet effective method based on generative adversarial networks, to segment power lines from aerial images with different backgrounds. instead of directly using the adversarial networks to generate the segmentation, we take their certain decoding features and embed them into another semantic segmentation network by considering more context, geometry, and appearance information of power lines. we further exploit the appropriate form of the generated images for high - quality feature embedding and define a new loss function in the hough - transform parameter space to enhance the segmentation of very thin power lines. extensive experiments and comprehensive analysis demonstrate that our proposed plgan outperforms the prior state - of - the - art methods for semantic segmentation and line detection.
|
arxiv:2204.07243
|
we study how the spectral properties of resonance fluorescence propagate through a two - atom system. within the weak - driving - field approximation we find that, as we go from one atom to the next, the power spectrum exhibits both sub - natural linewidth narrowing and large asymmetries while the spectrum of squeezing narrows but remains otherwise unchanged. analytical results for the observed spectral features of the fluorescence are provided and their origin is thoroughly discussed.
|
arxiv:physics/9902020
|
a solar sail is one of the most promising space exploration systems because of its theoretically infinite specific impulse using solar radiation pressure ( srp ). recently, some researchers proposed " transformable spacecrafts " that can actively reconfigure their body configurations with actuatable joints. the transformable spacecrafts are expected to greatly enhance orbit and attitude control capability due to their high redundancy in control degrees of freedom if they are used like solar sails. however, their large number of inputs poses difficulties in control, and therefore, previous researchers imposed strong constraints to limit their potential control capabilities. this paper addresses novel attitude control techniques for transformable spacecrafts under srp. the authors have constructed two proposed methods ; one of these is a joint angle optimization to acquire arbitrary srp force and torque, and the other is a momentum damping control driven by joint angle actuation. our proposed methods are formulated in general forms and applicable to any transformable spacecraft that has front faces that can dominantly receive srp on each body. the validity of the proposed methods is confirmed by numerical simulations. this paper contributes to making the most of the high control redundancy of transformable spacecrafts without consuming any expendable propellants, which is expected to greatly enhance orbit and attitude control capability.
|
arxiv:2301.08435
|
because of the limitations of the infrared imaging principle and the properties of infrared imaging systems, infrared images have some drawbacks, including a lack of details, indistinct edges, and a large amount of salt - and - pepper noise. to improve the sparse characteristics of the image while maintaining the image edges and weakening staircase artifacts, this paper proposes a method that uses the lp quasinorm instead of the l1 norm in an overlapping group sparse total variation method for infrared image deblurring. the lp quasinorm introduces another degree of freedom, better describes image sparsity characteristics, and improves image restoration. furthermore, we adopt the accelerated alternating direction method of multipliers and fast fourier transform theory in the proposed method to improve the efficiency and robustness of our algorithm. experiments show that under different conditions for blur and salt - and - pepper noise, the proposed method leads to excellent performance in terms of objective evaluation and subjective visual results.
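a plausible form of the variational objective sketched above, inferred from the abstract rather than quoted from the paper ( $ H $ is the blur operator, $ f $ the observed image ; the $ \ell_1 $ fidelity term is a common choice under salt - and - pepper noise ) :

$$ \min_u \; \Phi^{p}_{\mathrm{OGS\text{-}TV}}(u) + \lambda \, \| H u - f \|_1, \qquad 0 < p < 1, $$

where $ \Phi^{p}_{\mathrm{OGS\text{-}TV}} $ denotes the overlapping group sparse total variation penalty with the $ \ell_p $ quasinorm replacing the $ \ell_1 $ norm, and the problem would be split and solved with an accelerated admm.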
|
arxiv:1812.11725
|
we calculate the fiducial and differential $ w ^ { \ pm } / z ^ 0 + jet ( s ) $ production cross - sections in the presence of electroweak ( ew ) corrections through virtual loop contributions to the matrix elements ( mes ) of the processes and real partonic cascade emissions. the calculations are carried out for proton - proton collisions at $ \ sqrt { s } = 13 $ tev, using the herwig 7 general - purpose monte - carlo event generator with leading - order or next - to - leading - order mes that are interfaced with different parton - shower configurations. the results are compared with precision experimental measurements from the atlas collaboration and with similar predictions within the $ k _ t $ - factorisation framework, providing a test for the validity of the newly - implemented qcd $ \ oplus $ qed $ \ oplus $ ew parton shower in herwig 7. it is shown that the inclusion of ew radiation in the parton shower simulations improves herwig 7 ' s predictions in describing the experimental data. additionally, the inclusion of parton shower - induced real ew emissions can take precedence over the incorporation of virtual ew corrections for the simulation of ew - sensitive events.
|
arxiv:2112.15487
|
using the quantum - mechanical approach combined with the image charge method we calculated the lowest energy levels of the impurities and neutral vacancies with two electrons or holes located in the vicinity of the flat surface of different solids. we obtained that the magnetic triplet state is the ground state of the impurities and neutral vacancies in the vicinity of the surface, while the nonmagnetic singlet is the ground state in the bulk for e. g. the he atom, li +, be + +, etc. ions. the energy difference between the lowest triplet and singlet states strongly depends on the electron ( hole ) effective mass, the dielectric permittivity of the solid and the distance from the surface. pair interaction of the identical surface defects ( two doubly charged impurities or vacancies with two electrons or holes ) reveals the ferromagnetic spin state with the maximal exchange energy at a definite distance between the defects ( ~ 5 - 25 nm ). we obtained that the nonmagnetic singlet state is the lowest one for a molecule with two electrons formed by a pair of identical surface impurities ( like surface hydrogen ), while its next state with a deep enough negative energy minimum is the magnetic triplet. the metastable magnetic triplet state appearing for such a molecule at the surface indicates the possibility of metastable ortho - states of the hydrogen - like molecules, while they are absent in the bulk of the material. we hope that the obtained results could provide an alternative mechanism for the room temperature ferromagnetism observed in tio2, hfo2, and in2o3 thin films with a contribution from the oxygen vacancies.
|
arxiv:1006.3670
|
extending the ideal mhd stability code mishka, a new code, mishka - a, is developed to study the impact of pressure anisotropy on plasma stability. based on full anisotropic equilibrium and geometry, the code can provide normal mode analysis with three fluid closure models : the single adiabatic model ( sa ), the double adiabatic model ( cgl ) and the incompressible model. a study on the plasma continuous spectrum shows that in low beta, large aspect ratio plasma, the main impact of anisotropy lies in the modification of the bae gap and the sound frequency, if the q profile is conserved. the sa model preserves the bae gap structure as ideal mhd, while in cgl the lowest frequency branch does not touch zero frequency at the resonant flux surface where $ m + nq = 0 $, inducing a gap at very low frequency. also, the bae gap frequency with bi - maxwellian distribution in both model becomes higher if $ p _ \ perp > p _ \ parallel $ with a q profile dependency. as a benchmark of the code, we study the m / n = 1 / 1 internal kink mode. numerical calculation of the marginal stability boundary with bi - maxwellian distribution shows a good agreement with the generalized incompressible bussac criterion [ a. b. mikhailovskii, sov. j. plasma phys 9, 190 ( 1983 ) ] : the mode is stabilized ( destabilized ) if $ p _ \ parallel < p _ \ perp ( p _ \ parallel > p _ \ perp ) $.
|
arxiv:1502.02411
|
the pairwise comparisons method is a convenient tool used when the relative order of preferences among different concepts ( alternatives ) needs to be determined. there are several popular implementations of this method, including the eigenvector method, the least squares method, the chi squares method and others. each of the above methods comes with one or more inconsistency indices that help to decide whether the consistency of the input guarantees obtaining a reliable output, and thus taking the optimal decision. this article explores the relationship between inconsistency of input and discrepancy of output. a global ranking discrepancy describes to what extent the obtained results correspond to the single expert ' s assessments. on the basis of the inconsistency and discrepancy indices, two properties of the weight deriving procedure are formulated. these properties are proven for the eigenvector method and koczkodaj ' s inconsistency index. several estimates using koczkodaj ' s inconsistency index for the principal eigenvalue, saaty ' s inconsistency index and the condition of order preservation are also provided.
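a minimal sketch of the eigenvector method and saaty ' s inconsistency index mentioned above ( koczkodaj ' s triad - based index is omitted ; the matrix below is toy data ) :

```python
import numpy as np

def eigenvector_weights(A):
    """priority vector = normalized principal eigenvector of the
    reciprocal pairwise-comparison matrix A (A[j, i] = 1 / A[i, j])."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                # principal eigenvalue lambda_max
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

def saaty_ci(A):
    """saaty's consistency index; zero iff A is fully consistent."""
    n = A.shape[0]
    _, lam_max = eigenvector_weights(A)
    return (lam_max - n) / (n - 1)

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, _ = eigenvector_weights(A)
print(w, saaty_ci(A))                       # small positive ci: mild inconsistency
```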
|
arxiv:1401.8219
|
dark energy is an important science driver of many upcoming large - scale surveys. with small, stable seeing and low thermal infrared background, dome a, antarctica, offers a unique opportunity for shedding light on fundamental questions about the universe. we show that a deep, high - resolution imaging survey of 10, 000 square degrees in \ emph { ugrizyjh } bands can provide competitive constraints on dark energy equation of state parameters using type ia supernovae, baryon acoustic oscillations, and weak lensing techniques. such a survey may be partially achieved with a coordinated effort of the kunlun dark universe survey telescope ( kdust ) in \ emph { yjh } bands over 5000 - - 10, 000 deg $ ^ 2 $ and the large synoptic survey telescope in \ emph { ugrizy } bands over the same area. moreover, the joint survey can take advantage of the high - resolution imaging at dome a to further tighten the constraints on dark energy and to measure dark matter properties with strong lensing as well as galaxy - - galaxy weak lensing.
|
arxiv:1005.3810
|
this paper shows how to improve real - time object detection in complex robotics applications, by exploring new visual features as adaboost weak classifiers. these new features are symmetric haar filters ( enforcing global horizontal and vertical symmetry ) and n - connexity control points. experimental evaluation on a car database shows that the latter appear to provide the best results for the vehicle - detection problem.
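a sketch of the rectangle - sum machinery behind haar - like features, with one way to form a horizontally symmetric feature by pairing a rectangle with its mirror ; the exact filter layouts of the paper may differ :

```python
import numpy as np

def integral_image(img):
    """cumulative sums so any rectangle sum costs o(1) lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """sum of img[r0:r1, c0:c1) from the integral image ii."""
    s = ii[r1 - 1, c1 - 1]
    if r0 > 0: s -= ii[r0 - 1, c1 - 1]
    if c0 > 0: s -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: s += ii[r0 - 1, c0 - 1]
    return s

def symmetric_haar_response(img, r0, c0, r1, c1):
    """difference between a rectangle and its horizontal mirror;
    responds with zero on inputs that are left-right symmetric."""
    ii = integral_image(img)
    w = img.shape[1]
    left = rect_sum(ii, r0, c0, r1, c1)
    right = rect_sum(ii, r0, w - c1, r1, w - c0)
    return left - right
```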
|
arxiv:0910.1293
|
twin - field quantum key distribution ( tf - qkd ) can beat the linear bound of repeaterless qkd systems. after the proposal of the original protocol, multiple papers have extended the protocol to prove its security. however, these works are limited to the case where the two channels have equal amounts of loss ( i. e. are symmetric ). in a practical network setting, it is very likely that the channels are asymmetric due to e. g. geographical locations. in this paper we extend the " simple tf - qkd " protocol to the scenario with asymmetric channels. we show that by simply adjusting the two signal states of the two users ( and not the decoy states ) they can effectively compensate for channel asymmetry and consistently obtain an order of magnitude higher key rate than the previous symmetric protocol. it can also provide 2 - 3 times higher key rate than the strategy of deliberately adding fibre to the shorter channel until the channels have equal loss ( and is more convenient as users only need to optimize their laser intensities and do not need to physically modify the channels ). we also perform a simulation for a practical case with three decoy states and finite data size, and show that our method works well and has a clear advantage over prior art methods with realistic parameters.
|
arxiv:1907.05291
|
variable selection in high dimensional space has challenged many contemporary statistical problems from many frontiers of scientific disciplines. recent technological advances have made it possible to collect a huge amount of covariate information such as microarray, proteomic and snp data via bioimaging technology while observing survival information on patients in clinical studies. thus, the same challenge applies to survival analysis in order to understand the association between genomics information and clinical information about the survival time. in this work, we extend the sure screening procedure of fan and lv ( 2008 ) to cox ' s proportional hazards model, with an iterative version available. numerical simulation studies have shown encouraging performance of the proposed method in comparison with other techniques such as lasso. this demonstrates the utility and versatility of the iterative sure independence screening scheme.
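a minimal numpy sketch of the marginal screening step, using the cox partial - likelihood score statistic at $ \beta = 0 $ as the marginal utility ; the paper ' s exact utility and its iterative variant are more involved, and ties in event times are ignored here :

```python
import numpy as np

def cox_marginal_scores(X, time, event):
    """|score| of each covariate at beta = 0: sum over events of
    (x_i - risk-set mean of x). X is n x p; time, event are length n."""
    order = np.argsort(time)
    X, time, event = X[order], time[order], event[order]
    n, p = X.shape
    scores = np.zeros(p)
    for i in range(n):
        if event[i]:
            scores += X[i] - X[i:].mean(axis=0)   # subjects still at risk
    return np.abs(scores)

def sis_select(X, time, event, d):
    """sure independence screening: keep the d top-ranked covariates."""
    return np.argsort(cox_marginal_scores(X, time, event))[::-1][:d]
```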
|
arxiv:1002.3315
|
due to the interconnectedness of financial entities, estimating certain key properties of a complex financial system ( e. g. the implied level of systemic risk ) requires detailed information about the structure of the underlying network. however, since data about financial linkages are typically subject to confidentiality, network reconstruction techniques become necessary to infer both the presence of connections and their intensity. recently, several " horse races " have been conducted to compare the performance of the available financial network reconstruction methods. these comparisons, however, were based on arbitrarily - chosen similarity metrics between the real and the reconstructed network. here we establish a generalised maximum - likelihood approach to rigorously define and compare weighted reconstruction methods. our generalization maximizes the conditional entropy to solve the problem represented by the fact that the density - dependent constraints required to reliably reconstruct the network are typically unobserved. the resulting approach admits as input any reconstruction method for the purely binary topology and, conditionally on the latter, exploits the available partial information to infer link weights. we find that the most reliable method is obtained by " dressing " the best - performing binary method with an exponential distribution of link weights having a properly density - corrected and link - specific mean value and propose two unbiased ( in the sense of maximum conditional entropy ) variants of it. while the one named crema is perfectly general ( as a particular case, it can place optimal weights on a network whose topology is known ), the one named cremb is recommended both in case of full uncertainty about the network topology and if the existence of some links is certain. in these cases, the cremb is faster and reproduces empirical networks with highest generalised likelihood.
|
arxiv:1811.09829
|
due to asymptotic freedom, qcd is guaranteed to be accessible to perturbative methods at asymptotically high temperatures. however, in 1979 linde has pointed out the existence of an " infrared wall ", beyond which an infinite number of feynman diagrams contribute. following a proposal by braaten and nieto, it is shown explicitly how the limits to computability that this infrared problem poses can be overcome in the framework of dimensionally reduced effective theories.
|
arxiv:hep-ph/0410130
|
the aim of object - centric vision is to construct an explicit representation of the objects in a scene. this representation is obtained via a set of interchangeable modules called \ emph { slots } or \ emph { object files } that compete for local patches of an image. the competition has a weak inductive bias to preserve spatial continuity ; consequently, one slot may claim patches scattered diffusely throughout the image. in contrast, the inductive bias of human vision is strong, to the degree that attention has classically been described with a spotlight metaphor. we incorporate a spatial - locality prior into state - of - the - art object - centric vision models and obtain significant improvements in segmenting objects in both synthetic and real - world datasets. similar to human visual attention, the combination of image content and spatial constraints yield robust unsupervised object - centric learning, including less sensitivity to model hyperparameters.
|
arxiv:2305.19550
|
the existence of the unique strong solution for a class of stochastic differential equations with non - lipschitz coefficients was established recently. in this paper, we shall investigate the dependence with respect to the initial values. we shall prove that the non - confluence of solutions holds under our general conditions. to obtain a continuous version, the modulus of continuity of the coefficients is assumed to be less than $ | x - y | \ log \ frac { 1 } { | x - y | } $. in this case, it will give rise to a flow of homeomorphisms if the coefficients are compactly supported.
|
arxiv:math/0311034
|
a timed network consists of an arbitrary number of initially identical 1 - clock timed automata, interacting via hand - shake communication. in this setting there is no unique central controller, since all automata are initially identical. we consider the universal safety problem for such controller - less timed networks, i. e., verifying that a bad event ( enabling some given transition ) is impossible regardless of the size of the network. this universal safety problem is dual to the existential coverability problem for timed - arc petri nets, i. e., does there exist a number $ m $ of tokens, such that starting with $ m $ tokens in a given place, and none in the other places, some given transition is eventually enabled. we show that these problems are pspace - complete.
|
arxiv:1806.08170
|
in an earlier paper, the authors introduced partial translation algebras as a generalisation of group c * - algebras. here we establish an extension of partial translation algebras, which may be viewed as an excision theorem in this context. we apply this general framework to compute the k - theory of partial translation algebras and group c * - algebras in the context of almost invariant subspaces of discrete groups. this generalises the work of cuntz, lance, pimsner and voiculescu. in particular we provide a new perspective on pimsner ' s calculation of the k - theory for a graph product of groups.
|
arxiv:1304.7130
|
we give several inequalities on generalized entropies involving tsallis entropies, using some inequalities obtained by improvements of young ' s inequality. we also give a generalized han ' s inequality.
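for reference, the tsallis entropy appearing above is, for a probability distribution $ p $ and parameter $ q \neq 1 $ ( standard definition ) :

$$ S_q(p) = \frac{1 - \sum_i p_i^q}{q - 1}, \qquad \lim_{q \to 1} S_q(p) = - \sum_i p_i \log p_i, $$

recovering the shannon entropy in the $ q \to 1 $ limit.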
|
arxiv:1104.0360
|
we propose a new general approach for estimating the effect of a binary treatment on a continuous and potentially highly skewed response variable, the generalized quantile treatment effect ( gqte ). the gqte is defined as the difference between a function of the quantiles under the two treatment conditions. as such, it represents a generalization over the standard approaches typically used for estimating a treatment effect ( i. e., the average treatment effect and the quantile treatment effect ) because it allows the comparison of any arbitrary characteristic of the outcome ' s distribution under the two treatments. following dominici et al. ( 2005 ), we assume that a pre - specified transformation of the two quantiles is modeled as a smooth function of the percentiles. this assumption allows us to link the two quantile functions and thus to borrow information from one distribution to the other. the main theoretical contribution we provide is the analytical derivation of a closed form expression for the likelihood of the model. exploiting this result we propose a novel bayesian inferential methodology for the gqte. we show some finite sample properties of our approach through a simulation study which confirms that in some cases it performs better than other nonparametric methods. as an illustration we finally apply our methodology to the 1987 national medicare expenditure survey data to estimate the difference in the single hospitalization medical cost distributions between cases ( i. e., subjects affected by smoking attributable diseases ) and controls.
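a schematic form of the estimand consistent with the abstract ( the paper ' s exact definition may differ in detail ) : with $ Q_1 $ and $ Q_0 $ the outcome quantile functions under treatment and control, and $ g $ a pre - specified transformation,

$$ \mathrm{GQTE}(p) = g\big(Q_1(p)\big) - g\big(Q_0(p)\big), \qquad p \in (0,1), $$

which reduces to the usual quantile treatment effect when $ g $ is the identity and can target other features of a skewed cost distribution through other choices of $ g $.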
|
arxiv:1509.01042
|
this paper introduces an effective framework for designing memoryless dissipative full - state feedbacks for general linear delay systems via the krasovski \ u { i } functional ( kf ) approach, where an unlimited number of pointwise and general distributed delays ( dds ) exists in the state, input and output. to handle the infinite dimensionality of dds, we employ the kronecker - seuret decomposition ( ksd ) which we recently proposed for analyzing matrix - valued functions in the context of delay systems. the ksd enables factorization or least - squares approximation of any number of $ \ mathcal { l } ^ 2 $ dd kernels from any number of dds without introducing conservatism. this also facilitates the construction of a complete - type kf with flexible integral kernels, following from an application of a novel integral inequality derived from the least - squares principle. our solution includes two theorems and an iterative algorithm to compute controller gains without relying on nonlinear solvers. a challenging numerical example, intractable for existing methods, underscores the efficacy of this approach.
|
arxiv:2504.00165
|
the existing flare prediction methods primarily rely on photospheric magnetic field parameters from the entire active region ( ar ), such as space - weather hmi activity region patches ( sharp ) parameters. however, these parameters may not capture the details of the ar evolution preceding flares. the magnetic structure within the core area of an ar is essential for predicting large solar flares. this paper utilizes the area of high photospheric free energy density ( hed region ) as a proxy for the ar core region. we construct two datasets : sharp and hed datasets. the ars contained in both datasets are identical. furthermore, the start and end times for the same ar in both datasets are identical. we develop six models for 24 - hour solar flare forecasting, utilizing the sharp and hed datasets. we then compare their categorical and probabilistic forecasting performance. additionally, we conduct an analysis of parameter importance. the main results are as follows : ( 1 ) among the six solar flare prediction models, the models using hed parameters outperform those using sharp parameters in both categorical and probabilistic prediction, indicating the important role of the hed region in the flare initiation process. ( 2 ) the transformer flare prediction model stands out significantly in true skill statistic ( tss ) and brier skill score ( bss ), surpassing the other models. ( 3 ) in the parameter importance analysis, the total photospheric free magnetic energy density ( $ \ mathrm { e _ { free } } $ ) within the hed parameters excels in both categorical and probabilistic forecasting. similarly, among the sharp parameters, the r _ value stands out as the most effective parameter for both categorical and probabilistic forecasting.
|
arxiv:2410.18562
|
in this paper we prove a conjecture by wocjan, elphick and anekstein ( 2018 ) which upper bounds the sum of the squares of the positive ( or negative ) eigenvalues of the adjacency matrix of a graph by an expression that behaves monotonically in terms of the vector chromatic number. one of our lemmas is a strengthening of the cauchy - schwarz inequality for hermitian matrices when one of the matrices is positive semidefinite. a related conjecture due to bollob \ ' as and nikiforov ( 2007 ) replaces the vector chromatic number by the clique number and sums over the first two eigenvalues only. we prove a version of this conjecture with weaker constants. an important consequence of our work is a proof that for any fixed $ r $, computing a rank $ r $ optimum solution to the vector chromatic number semidefinite program is np - hard. we also present a vertex - weighted version of some of our results, and we show how it leads quite naturally to the known vertex - weighted version of the motzkin - straus quadratic optimization formulation for the clique number.
|
arxiv:2411.08184
|
in the last few years we have developed stellar model atmospheres which included effects of anomalous abundances and strong magnetic field. the full treatment of anomalous zeeman splitting and polarized radiative transfer were introduced in the model atmosphere calculations for the first time. in this investigation we present results of modelling the atmosphere of one of the most extreme magnetic chemically peculiar stars, hd137509. this bp sicrfe star has the mean surface magnetic field modulus of about 29kg. we use the recent version of the line - by - line opacity sampling stellar model atmosphere code llmodels, which incorporates the full treatment of zeeman splitting of spectral lines, detailed polarized radiative transfer and arbitrary abundances. we compare model predictions with photometric and spectroscopic observations of the star, aiming to reach a self - consistency between the abundance pattern derived from high - resolution spectra and abundances used for model atmosphere calculation. based on magnetic model atmospheres, we redetermined abundances and fundamental parameters of hd137509 using spectroscopic and photometric observations. this allowed us to obtain a better agreement between observed and theoretical parameters compared to non - magnetic models with individual or scaled - solar abundances. we confirm that the magnetic field effects should be taken into account in the stellar parameter determination and abundance analysis.
|
arxiv:0806.1296
|
we investigate possible regimes in spatially flat vacuum cosmological models in cubic lovelock gravity. the spatial section is a product of three - and extra - dimensional isotropic subspaces. this is the second paper of the series and we consider d = 5 and general d > = 6 cases here. for each d case we found critical values for $ \ alpha $ ( gauss - bonnet coupling ) and $ \ beta $ ( cubic lovelock coupling ) which separate different dynamical cases and study the dynamics in each region to find all regimes for all initial conditions and for arbitrary values of $ \ alpha $ and $ \ beta $. the results suggest that for d > = 3 there are regimes with realistic compactification originating from ` generalized taub ' solution. the endpoint of the compactification regimes is either anisotropic exponential solution ( for $ \ alpha > 0 $, $ \ mu \ equiv \ beta / \ alpha ^ 2 < \ mu _ 1 $ ( including entire $ \ beta < 0 $ ) ) or standard kasner regime ( for $ \ alpha > 0 $, $ \ mu > \ mu _ 1 $ ). for d > = 8 there is additional regime which originates from high - energy ( cubic lovelock ) kasner regime and ends as anisotropic exponential solution. it exists in two domains : $ \ alpha > 0 $, $ \ beta < 0 $, $ \ mu \ leqslant \ mu _ 4 $ and entire $ \ alpha > 0 $, $ \ beta > 0 $. let us note that for d > = 8 and $ \ alpha > 0 $, $ \ beta < 0 $, $ \ mu < \ mu _ 4 $ there are two realistic compactification regimes which exist at the same time and have two different anisotropic exponential solutions as a future asymptotes. for d > = 8 and $ \ alpha > 0 $, $ \ beta > 0 $, $ \ mu < \ mu _ 2 $ there are two realistic compactification regimes but they lead to the same anisotropic exponential solution. this behavior is quite different from the einstein - gauss - bonnet case. there are two more unexpected observations among the results - - all realistic compactification regimes exist only for $ \ alpha > 0 $ and there is no smooth transition from high - energy kasner regime to low - energy one with realistic compactification.
|
arxiv:1807.01601
|
we address the effect of orientation of the accretion disk plane and the geometry of the broad - line region ( blr ) in the context of understanding the distribution of quasars along their main sequence. we utilize the photoionization code cloudy to model the blr, incorporating the ` un - constant ' virial factor. we show the preliminary results of the analysis to highlight the co - dependence of the eigenvector 1 parameter, $ \ mathrm { r _ { feii } } $ on the broad h $ \ beta $ fwhm ( i. e. the line dispersion ) and the inclination angle ( $ \ theta $ ), assuming fixed values for the eddington ratio ( $ \ mathrm { l _ { bol } } / \ mathrm { l _ { edd } } $ ), black hole mass ( $ \ mathrm { m _ { bh } } $ ), spectral energy distribution ( sed ) shape, cloud density ( n $ \ rm { _ { h } } $ ) and composition.
|
arxiv:1912.03118
|
we present novel neutral and uncharged solutions that describe the cluster of einstein in the teleparallel equivalent of general relativity ( tegr ). to this end, we use a tetrad field with non - diagonal spherical symmetry which gives the vanishing of the off - diagonal components for the gravitational field equations in the tegr theory. the clusters are calculated by using an anisotropic energy - momentum tensor. we solve the field equations of tegr theory, using two assumptions : the first one is by using an equation of state that relates density with tangential pressure while the second postulate is to assume a specific form of one of the two unknown functions that appear in the non - diagonal tetrad field. among many things presented in this study, we investigate the static stability specification. we also study the tolman - oppenheimer - volkoff equation of these solutions in addition to the conditions of energy. the causality constraints with the adiabatic index in terms of the limit of stability are discussed.
|
arxiv:2002.11471
|
based on the former work of many experts on the jacobian conjecture and an essential analysis of the intrinsic topology of linear maps, i completely prove the jacobian conjecture by demonstrating the injectivity of real keller maps in any $ n $ dimensions.
|
arxiv:2008.09101
|
expressive text - to - speech ( tts ) has become a hot research topic recently, mainly focusing on modeling prosody in speech. prosody modeling has several challenges : 1 ) the extracted pitch used in previous prosody modeling works has inevitable errors, which hurt the prosody modeling ; 2 ) different attributes of prosody ( e. g., pitch, duration and energy ) are dependent on each other and produce the natural prosody together ; and 3 ) due to the high variability of prosody and the limited amount of high - quality data for tts training, the distribution of prosody cannot be fully shaped. to tackle these issues, we propose prosospeech, which enhances the prosody using quantized latent vectors pre - trained on large - scale unpaired and low - quality text and speech data. specifically, we first introduce a word - level prosody encoder, which quantizes the low - frequency band of the speech and compresses prosody attributes in the latent prosody vector ( lpv ). then we introduce an lpv predictor, which predicts lpv given a word sequence. we pre - train the lpv predictor on large - scale text and low - quality speech data and fine - tune it on the high - quality tts dataset. finally, our model can generate expressive speech conditioned on the predicted lpv. experimental results show that prosospeech can generate speech with richer prosody compared with baseline methods.
|
arxiv:2202.07816
|
quantum mechanics has been argued to be a coarse - graining of some underlying deterministic theory. here we support this view by establishing a map between certain solutions of the schroedinger equation, and the corresponding solutions of the irrotational navier - stokes equation for viscous fluid flow. as a physical model for the fluid itself we propose the quantum probability fluid. it turns out that the ( state - dependent ) viscosity of this fluid is proportional to planck ' s constant, while the volume density of entropy is proportional to boltzmann ' s constant. stationary states have zero viscosity and a vanishing time rate of entropy density. on the other hand, the nonzero viscosity of nonstationary states provides an information - loss mechanism whereby a deterministic theory ( a classical fluid governed by the navier - stokes equation ) gives rise to an emergent theory ( a quantum particle governed by the schroedinger equation ).
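for background, the standard madelung substitution that underlies such hydrodynamic readings of the schroedinger equation ( the paper ' s specific map onto a viscous navier - stokes flow is built on top of this and is not reproduced here ) :

$$ \psi = \sqrt{\rho}\, e^{i S / \hbar}, \qquad \partial_t \rho + \nabla \cdot (\rho \mathbf{v}) = 0, \qquad \mathbf{v} = \frac{\nabla S}{m}, $$

which recasts $ |\psi|^2 = \rho $ as the density of a probability fluid with velocity field $ \mathbf{v} $.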
|
arxiv:1409.7036
|
a graph $ g $ is $ m $ - minor - universal if every graph with at most $ m $ edges ( and no isolated vertices ) is a minor of $ g $. we prove that the $ d $ - dimensional hypercube, $ q _ d $, is $ \ omega \ left ( \ frac { 2 ^ d } { d } \ right ) $ - minor - universal, and that there exists an absolute constant $ c > 0 $ such that $ q _ d $ is not $ \ frac { c2 ^ d } { \ sqrt { d } } $ - minor - universal. similar results are obtained in a more generalized setting, where we bound the size of minors in a product of finite connected graphs. a key component of our proof is the following claim regarding the decomposition of a permutation of a box into simpler, one - dimensional permutations : let $ n _ 1, \ dots, n _ d $ be positive integers, and define $ x : = [ n _ 1 ] \ times \ dots \ times [ n _ d ] $. we prove that every permutation $ \ sigma : x \ to x $ can be expressed as $ \ sigma = \ sigma _ 1 \ circ \ dots \ circ \ sigma _ { 2d - 1 } $, where each $ \ sigma _ i $ is a one - dimensional permutation, meaning it fixes all coordinates except possibly one. we discuss future directions and pose open problems.
|
arxiv:2501.13730
|
entropy serves as a central observable which indicates uncertainty in many chemical, thermodynamical, biological and ecological systems, and the principle of maximum entropy ( maxent ) is widely supported in natural science. recently, entropy has been employed to describe social systems in which human subjects interact with each other, but the principle of maximum entropy has never been reported from this field empirically. by using laboratory experimental data, we test the uncertainty of strategy type in various competing environments with a two - person constant - sum $ 2 \ times 2 $ game. empirical evidence shows that, in this competing game environment, the outcome of human decision - making obeys the principle of maximum entropy.
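a minimal sketch of the entropy computation such a test relies on ; the counts below are toy data, not taken from the experiment :

```python
import numpy as np

def shannon_entropy(counts):
    """entropy of the empirical strategy-type distribution."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

counts = [37, 41, 35, 39]          # hypothetical observations of k = 4 types
h = shannon_entropy(counts)
h_max = np.log(len(counts))        # maxent benchmark: uniform over k types
print(h, h_max)                    # maxent predicts h close to h_max
```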
|
arxiv:1111.2405
|
the concept of low - voltage depletion and accumulation of electron charge in semiconductors, utilized in field - effect transistors ( fets ), is one of the cornerstones of current information processing technologies. spintronics, which is based on manipulating the collective state of electron spins in a ferromagnet, provides complementary technologies for reading magnetic bits or for solid - state memories. the integration of these two distinct areas of microelectronics in one physical element, with a potentially major impact on the power consumption and scalability of future devices, requires finding efficient means for controlling magnetization electrically. current - induced magnetization switching phenomena represent a promising step towards this goal ; however, they rely on relatively large current densities. the direct approach of controlling the magnetization by low - voltage charge depletion effects is seemingly unfeasible, as the two worlds of semiconductors and metal ferromagnets are separated by many orders of magnitude in their typical carrier concentrations. here we demonstrate that this concept is viable by reporting persistent magnetization switchings induced by short electrical pulses of a few volts in an all - semiconductor, ferromagnetic p - n junction.
|
arxiv:0807.0906
|
sports officials around the world are facing incredible challenges due to the unfair practices performed by athletes to improve their performance in the game. these include the intake of hormone - based drugs or transfusion of blood to increase their strength and the result of their training. however, the current direct tests for detecting these cases are laboratory - based methods, which are limited by cost factors, the availability of medical experts, etc. this leads us to seek indirect tests. with the growing interest in artificial intelligence in healthcare, it is important to propose an algorithm based on blood parameters to improve decision making. in this paper, we propose a statistical and machine learning - based approach to identify the presence of the doping substance rhepo in blood samples.
|
arxiv:2203.00001
|
the current limit on the unitarity condition of the first row of the ckm matrix | v _ ud | ^ 2 + | v _ us | ^ 2 + | v _ ub | ^ 2 = 1 + \ delta _ ckm is \ delta _ ckm = - 0. 0001 ( 6 ). in 2010 the same was \ delta _ ckm = + 0. 0001 ( 6 ). although the only difference is a sign, with an absolute change of one third of the quoted uncertainty, a substantial amount of work has been done in the last two years to improve the knowledge of all the contributions to this stringent limit on ckm unitarity, and more is expected in the next years. in this paper we present an organized summary of all the important contributions presented during the wg1 sessions, referring as much as possible to the contribution papers prepared by the individual authors.
|
arxiv:1408.6374
|
this paper studies the mutation - selection balance in three simplified replication models. the first model considers a population of organisms replicating via the production of asexual spores. the second model considers a sexually replicating population that produces identical gametes. the third model considers a sexually replicating population that produces distinct sperm and egg gametes. all models assume diploid organisms whose genomes consist of two chromosomes, each of which is taken to be functional if equal to some master sequence, and defective otherwise. in the asexual population, the asexual diploid spores develop directly into adult organisms. in the sexual populations, the haploid gametes enter a haploid pool, where they may fuse with other haploids. the resulting immature diploid organisms then proceed to develop into mature organisms. based on an analysis of all three models, we find that, as organism size increases, a sexually replicating population can only outcompete an asexually replicating population if the adult organisms produce distinct sperm and egg gametes. a sexual replication strategy that is based on the production of large numbers of sperm cells to fertilize a small number of eggs is found to be necessary in order to maintain a sufficiently low cost for sex for the strategy to be selected for over a purely asexual strategy. we discuss the usefulness of this model in understanding the evolution and maintenance of sexual replication as the preferred replication strategy in complex, multicellular organisms.
|
arxiv:0707.3464
|
the semifinite part of $\mu$ is

$$ \mu_{\text{sf}} \;=\; \mu|_{\mu^{\text{pre}}(\mathbb{R}_{>0})} \,\cup\, \bigl(\{A \in \mathcal{A} : \sup\{\mu(B) : B \in \mathcal{P}(A)\} = +\infty\} \times \{+\infty\}\bigr) \,\cup\, \bigl(\{A \in \mathcal{A} : \sup\{\mu(B) : B \in \mathcal{P}(A)\} < +\infty\} \times \{0\}\bigr). $$

since $\mu_{\text{sf}}$ is semifinite, it follows that if $\mu = \mu_{\text{sf}}$ then $\mu$ is semifinite. it is also evident that if $\mu$ is semifinite then $\mu = \mu_{\text{sf}}$. non - examples : every $0\text{-}\infty$ measure that is not the zero measure is not semifinite. ( here, we say $0\text{-}\infty$ measure to mean a measure whose range lies in $\{0, +\infty\}$ : $(\forall A \in \mathcal{A})(\mu(A) \in \{0, +\infty\})$. ) below we give examples of $0\text{-}\infty$ measures that are not zero measures. let $X$ be nonempty, let $\mathcal{A}$ be a $\sigma$
|
https://en.wikipedia.org/wiki/Measure_(mathematics)
|
in this paper we prove the nonlinear instability of the shear flow in three dimensional inviscid non - conducting boussinesq equations with rotation. we establish the nonlinear instability of the shear motion with respect to a general class of perturbation, by means of constructing an approximate solution containing the shear flow and an exponentially growing profile, which is deduced from the geostrophic limit model.
|
arxiv:2410.06835
|
the main purpose of this work is to characterize the almost sure local structure stability of solutions to a class of linear stochastic partial functional differential equations ( spfdes ) by investigating the lyapunov exponents and invariant manifolds near the stationary point. it is firstly proved that the trajectory field of the delayed stochastic partial functional differential equation admits an almost sure continuous version which is compact for $ t > \ tau $ by a delicate construction based on the random semiflow generated by the diffusion term. then it is proved that this version generates a random dynamical system ( rds ) by the wong - zakai approximation of the stochastic partial differential equation constructed by the diffusion term. subsequently, it is shown that the constructed linear cocycle admits a fixed ( at most ) countable set of lyapunov exponents, and the associated oseledets random filtration of the banach space is obtained by adopting the infinite - dimensional multiplicative ergodic theorem in banach spaces established by lian and lu [ \ textit { mem amer math soc, 2010, 206 : 967 } ]. as a by - product, the stable - manifolds theorem for the linear spfde in the hyperbolic case is also established.
|
arxiv:2303.04102
|
the last decade brought a significant increase in the amount of data and a variety of new inference methods for reconstructing the detailed evolutionary history of various cancers. this brings the need for designing efficient procedures for comparing rooted trees representing the evolution of mutations in tumor phylogenies. bernardini et al. [ cpm 2019 ] recently introduced a notion of the rearrangement distance for fully - labelled trees motivated by this necessity. this notion originates from two operations : one that permutes the labels of the nodes, the other that affects the topology of the tree. each operation alone defines a distance that can be computed in polynomial time, while the actual rearrangement distance, that combines the two, was proven to be np - hard. we answer two open questions left unanswered by the previous work. first, what is the complexity of computing the permutation distance? second, is there a constant - factor approximation algorithm for estimating the rearrangement distance between two arbitrary trees? we answer the first one by showing, via a two - way reduction, that calculating the permutation distance between two trees on $ n $ nodes is equivalent, up to polylogarithmic factors, to finding the largest cardinality matching in a sparse bipartite graph. in particular, by plugging in the algorithm of liu and sidford [ arxiv 2020 ], we obtain an $ o ( n ^ { 4 / 3 + o ( 1 ) } ) $ time algorithm for computing the permutation distance between two trees on $ n $ nodes. then we answer the second question positively, and design a linear - time constant - factor approximation algorithm that does not need any assumption on the trees.
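a sketch of the matching side of the reduction above, computing a maximum - cardinality matching in a sparse bipartite graph with hopcroft - karp ; the actual two - way reduction from the permutation distance is the paper ' s contribution :

```python
import networkx as nx

def max_matching_size(left, right, edges):
    """maximum-cardinality matching in the bipartite graph (left, right, edges)."""
    g = nx.Graph()
    g.add_nodes_from(left, bipartite=0)
    g.add_nodes_from(right, bipartite=1)
    g.add_edges_from(edges)
    m = nx.bipartite.hopcroft_karp_matching(g, top_nodes=left)
    return len(m) // 2             # the dict records each matched pair twice

print(max_matching_size(["a", "b"], [1, 2], [("a", 1), ("b", 1), ("b", 2)]))  # 2
```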
|
arxiv:2002.05600
|
we extend the gelfand and graev construction of generalized fourier transforms on basic affine space from split groups to quasi - split groups over a local non - archimedean field $ f $.
|
arxiv:1912.07071
|
the transportation sector is the single largest contributor to us emissions and the second largest globally. electric vehicles ( evs ) are expected to represent half of global car sales by 2035, emerging as a pivotal solution to reduce emissions and enhance grid flexibility. the electrification of buildings, manufacturing, and transportation is expected to grow electricity demand substantially over the next decade. without effectively managed ev charging, evs could strain energy grid infrastructure and increase electricity costs. drawing on de - identified 2023 ev telematics data from rivian automotive, this study found that 72 % of home charging commenced after the customer plugged in their vehicle regardless of utility time of use ( tou ) tariffs or managed charging programs. in fewer than 26 % of charging sessions in the sample, ev owners actively scheduled charging times to align or participate in utility tariffs or programs. with a majority of drivers concurrently plugged in during optimal charging periods yet not actively charging, the study identified an opportunity to reduce individual ev owner costs and carbon emissions through smarter charging habits without significant behavioral modifications or sacrifice in user preferences. by optimizing home charging schedules within existing plug - in and plug - out windows, the study suggests that ev owners can save an average of $ 140 annually and reduce the associated carbon emissions of charging their ev by as much as 28 %.
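a sketch of the kind of optimization implied above : shift a fixed charging demand to the cheapest hours inside the observed plug - in window, leaving plug - in and plug - out times untouched. the tariff and energy figures are toy assumptions, not rivian data :

```python
import numpy as np

def schedule_charging(price, plug_in, plug_out, kwh_needed, kw_max):
    """greedy: fill the cheapest 1-hour slots within [plug_in, plug_out)."""
    hours = np.arange(plug_in, plug_out)
    plan = np.zeros_like(price, dtype=float)
    remaining = kwh_needed
    for h in hours[np.argsort(price[hours])]:   # cheapest hours first
        e = min(kw_max, remaining)
        plan[h] = e
        remaining -= e
        if remaining <= 0:
            break
    return plan                                  # kwh assigned per hour

price = np.array([0.30] * 16 + [0.12] * 8)       # toy tou tariff ($/kwh), hours 0-23
plan = schedule_charging(price, plug_in=18, plug_out=24, kwh_needed=30, kw_max=11)
print(plan @ price)                              # cost of the shifted schedule
```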
|
arxiv:2503.03167
|
we present new theoretical tools, based on fluctuational electrodynamics and the integral - equation approach to computational electromagnetism, for numerical modeling of forces and torques on bodies of complex shapes and materials due to emission of thermal radiation out of thermal equilibrium. this extends our recently - developed fluctuating - surface - current ( fsc ) and fluctuating - volume - current ( fvc ) techniques for radiative heat transfer to the computation of non - equilibrium fluctuation - induced forces and torques ; as we show, the extension is non - trivial due to the greater computational cost of modeling radiative momentum transfer, including new singularities that must be carefully neutralized. we introduce a new analytical cancellation technique that addresses these challenges and allows, for the first time, accurate and efficient prediction of non - equilibrium forces and torques on bodies of essentially arbitrary shapes - - - including asymmetric and chiral particles - - - and complex material properties, including continuously - varying and anisotropic dielectrics. we validate our approach by showing that it reproduces known results, then present new numerical predictions of non - equilibrium self - propulsion, self - rotation, and momentum - transfer phenomena in complex geometries that would be difficult or impossible to study with existing methods. our findings indicate that the fluctuation - induced dynamics of micron - size room - temperature bodies in cold environments involve microscopic length scales but macroscopic time scales, with typical linear and angular velocities on the order of microns / second and radians / second ; for a micron - scale gear driven by thermal radiation from a nearby chiral emitter, we find a strong and non - monotonic dependence of the magnitude and even the \ textit { sign } of the induced torque on the temperature of the emitter.
|
arxiv:1708.01985
|
diffusion models have enabled high - quality, conditional image editing capabilities. we propose to expand their arsenal, and demonstrate that off - the - shelf diffusion models can be used for a wide range of cross - domain compositing tasks. among numerous others, these include image blending, object immersion, texture - replacement and even cg2real translation or stylization. we employ a localized, iterative refinement scheme which infuses the injected objects with contextual information derived from the background scene, and enables control over the degree and types of changes the object may undergo. we conduct a range of qualitative and quantitative comparisons to prior work, and exhibit that our method produces higher quality and realistic results without requiring any annotations or training. finally, we demonstrate how our method may be used for data augmentation of downstream tasks.
|
arxiv:2302.10167
|
a procedure and optical concept is introduced for ultrashort pulsed laser cleaving of transparent materials with tailored edges in a single pass. the procedure is based on holographically splitting a number of foci along the desired edge geometry including c - shaped edges with local 45 { \ deg } tangential angles to the surface. single - pass, full thickness laser modifications are achieved requiring single - side access to the workpiece only without inclining the optical head. after having induced laser modifications with feed rates of 1 m / s actual separation is performed using a selective etching strategy.
|
arxiv:2111.01612
|