text (string, lengths 1–3.65k) | source (string, lengths 15–79)
---|---
The metallicity of galactic gaseous halos provides insights into the accretion and feedback of galaxies. The nearby edge-on galaxy NGC 891 has a multi-component gaseous halo and a background AGN (LQAC 035+042 003) projected 5 kpc above the disk near the minor axis. Against the UV continuum of this AGN, we detect lines from 13 ions associated with NGC 891 in new {\it HST}/COS spectra. Most of the absorption is from warm ionized gas with $\log T = 4.22\pm0.04$, $\log n_{\rm H} = -1.26\pm0.51$, and $\log N_{\rm H} = 20.81\pm0.20$. The metallicity of volatile elements (i.e., C, N, and S) is about half solar ($\rm [X/H]\approx -0.3\pm0.3$), while Mg, Fe, and Ni show lower metallicities of $\rm [X/H]\approx -0.9$. The absorption system shows the depletion pattern seen in warm Galactic diffuse clouds, which is consistent with a mixture of ejected solar-metallicity disk gas and the hot X-ray-emitting halo ($Z = 0.1-0.2 Z_\odot$). The warm ionized gas is about 5 times more massive than the cold \ion{H}{1}-emitting gas around the galactic center, which might lead to accretion at a mean rate of $10^2~M_\odot\rm~yr^{-1}$ for a period of time. We also detect low-metallicity ($\approx 0.1~Z_\odot$) gas toward LQAC 035+042 003 at $110\rm~km~s^{-1}$ (a high-velocity cloud) and toward another sight line (3C 66A; 108 kpc projected from NGC 891) at $30\rm~km~s^{-1}$. This low-metallicity material could be cold-mode accretion from the IGM or tidal disruption of satellites in the NGC 891 halo.
|
arxiv:1904.04716
|
Steady-state and transient antiplane dynamic processes in structured solids consisting of uniform periodic square-cell lattices connected by a lattice layer of different bond stiffnesses and point masses are analyzed. A semi-infinite lattice covered by a layer is also considered. Localization phenomena characterized by waveguide-like propagation along the layer direction and exponential attenuation along its normal are studied. Waveguide pass-bands and attenuation factors are obtained analytically, while transient processes developed under the action of a monochromatic local source are numerically simulated. As a result, it is shown how a two-dimensional problem is transformed with time into a quasi-one-dimensional one and how a layer traps the source energy. Special attention is paid to revealing particularities of transient waves in cases where steady-state solutions are absent: resonant waves with frequencies demarcating pass- and stop-bands at the ends of the Brillouin zone, and wave transition in the vicinities of transition points in dispersion curves. In the latter case, a simultaneous onset of different localization phenomena, a spatial star-like beaming and a one-dimensional waveguide-like localization, is shown.
|
arxiv:1104.0328
|
Residential electrification of transport and heat is changing consumption and its characteristics significantly. Previous studies have demonstrated the impact of socio-techno-economic determinants on residential consumption. However, they fail to capture the distributional characteristics of such consumer groups, which impact network planning and flexibility assessment. Using actual residential electricity consumption profile data for 720,000 households in Denmark, we demonstrate that heat pumps are more likely to influence aggregated peak consumption than electric vehicles, while other socio-economic factors, such as occupancy, dwelling area and income, show little impact. Comparing the extrapolation of a comprehensive rollout of heat pumps or electric vehicles indicates that the most common consumer category deploying heat pumps has 14% higher maximum consumption during peak load hours, 46% higher average consumption and twice the median consumption compared to households owning an electric vehicle. Electric vehicles already show flexibility, with coincidence factors ranging between 5-15% and a maximum of 17%, whereas heat pumps are mostly baseload. The detailed and holistic outcomes of this study support flexibility assessment and grid planning in future studies, as well as the operation of flexible technologies.
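The coincidence factor mentioned above can be illustrated with a small sketch. This assumes the standard textbook definition (aggregate peak divided by the sum of individual peaks), which the abstract does not spell out; the toy profiles are invented for illustration.

```python
# Coincidence factor sketch (standard definition assumed, not taken from the
# study): aggregate peak demand divided by the sum of individual household
# peaks. Values well below 1 indicate demand diversity, i.e. flexibility.
def coincidence_factor(profiles):
    """profiles: list of per-household consumption time series (equal length)."""
    aggregate = [sum(step) for step in zip(*profiles)]
    return max(aggregate) / sum(max(p) for p in profiles)

# three toy households whose peaks fall in different hours
households = [
    [1.0, 4.0, 1.0, 0.5],
    [0.5, 1.0, 4.0, 1.0],
    [4.0, 1.0, 0.5, 1.0],
]
print(coincidence_factor(households))  # 0.5 -> individual peaks do not coincide
```

A pure baseload technology, in contrast, peaks everywhere at once and drives the factor toward 1.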
|
arxiv:2210.03524
|
We present results of RXTE observations of the X-ray source Cyg X-2 during 1996-1999. Its power density spectra in the 0.1-128-Hz band are fitted by a model that takes into account the power-law spectral behavior at frequencies below and above the break frequency, with the introduction of one or more Lorentzian lines to describe the peaks of quasi-periodic oscillations that correspond to the horizontal branch of the Z track. The RXTE observations have revealed a positive correlation between the break frequency and the indices of the two parts of the spectrum: the spectrum steepens with increasing break frequency both above and below the break frequency.
|
arxiv:astro-ph/0110400
|
We investigate the validity of fluctuation theorems for an asymmetric rotor experiment in a granular gas. A first state, with a Gaussian distribution of the angular velocity, is found to be well described by a first-order Langevin equation. We show that fluctuation theorems are valid for the injected work and for the total entropy production. In a second state the angular velocity distribution is double-peaked due to a spontaneous symmetry breaking: a convection roll develops in the granular gas, which strongly couples to the rotor. Surprisingly, in this case similar symmetry relations hold, which lead to a good prediction for the height ratio of the two peaks.
|
arxiv:1204.6130
|
Self-gravitating systems (SGS) in the universe are generally thought to be non-extensive, and often show long tails in various distribution functions. In principle, these non-Boltzmann properties are naturally expected from the peculiar property of gravity: it is long-range and unshielded. Therefore the ordinary Boltzmann statistical mechanics would not be applicable to these self-gravitating systems in its naive form. In order to step further, we quantitatively investigate the above two properties, non-extensivity and long tails, by explicitly introducing various models of statistical mechanics. We use the data of the CfA II South redshift survey and apply the count-in-cell method. We study four statistical mechanics, (1) Boltzmann, (2) fractal, (3) R\'enyi, and (4) Tsallis, and use the Akaike information criterion (AIC) for a fair comparison.
|
arxiv:astro-ph/0304301
|
An equitable $k$-partition of a graph $G$ is a collection of induced subgraphs $(G[V_1], G[V_2], \ldots, G[V_k])$ of $G$ such that $(V_1, V_2, \ldots, V_k)$ is a partition of $V(G)$ and $-1 \le |V_i| - |V_j| \le 1$ for all $1 \le i < j \le k$. We prove that every planar graph admits an equitable $2$-partition into $3$-degenerate graphs, an equitable $3$-partition into $2$-degenerate graphs, and an equitable $3$-partition into two forests and one graph.
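The definition above is easy to operationalize. The following sketch checks the two conditions directly: the parts must partition the vertex set, and any two part sizes may differ by at most one (the degeneracy conditions of the theorem are not checked here).

```python
from itertools import combinations

def is_equitable_partition(vertices, parts):
    """Check that `parts` partitions `vertices` and that any two parts
    differ in size by at most 1, per the definition in the text."""
    # parts must be pairwise disjoint and cover all vertices
    union = set().union(*parts)
    if union != set(vertices) or sum(len(p) for p in parts) != len(vertices):
        return False
    # |V_i| - |V_j| must lie in [-1, 1] for every pair of parts
    return all(abs(len(p) - len(q)) <= 1 for p, q in combinations(parts, 2))

# 7 vertices split into 3 nearly equal classes: equitable
print(is_equitable_partition(range(7), [{0, 1, 2}, {3, 4}, {5, 6}]))      # True
# sizes 4, 1, 2 differ by more than 1: not equitable
print(is_equitable_partition(range(7), [{0, 1, 2, 3}, {4}, {5, 6}]))      # False
```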
|
arxiv:1907.09911
|
Observations of WASP-121 b have suggested an under-abundance of titanium and titanium oxide in its terminator region. In this study, we aim to determine whether this depletion is global by investigating the day-side emission spectrum. We analyse 8 epochs of high-resolution spectra obtained with ESPRESSO, targeting orbital phases when the day-side is in view. We use a cross-correlation method to search for various atoms, TiO and VO, and compare to models. We constrain the velocities and phase function of the emission signal using a Bayesian framework. We report significant detections of Ca I, V I, Cr I, Mn I, Fe I, Co I and Ni I, but not Ti I or TiO. Models containing Ti are unable to reproduce the data. The detected signals are consistent with the known orbital and systemic velocities and with peak emission originating from the sub-stellar point. We find that Ti is depleted from regions of the atmosphere where transmission and emission spectroscopy are sensitive. We interpret this as evidence for the night-side condensation of titanium, preventing it from being mixed back into the upper layers of the atmosphere elsewhere on the planet. Species with lower condensation temperatures are unaffected, implying that sharp chemical transitions exist between ultra-hot Jupiters that have slight differences in temperature or dynamical properties. As TiO can act as a strong source of stratospheric heating, cold-trapping creates a coupling between the thermal structures on the day-side and night-side, and thus condensation chemistry needs to be included in global circulation models. Observed elemental abundances in hot Jupiters are not reliably representative of bulk abundances unless night-side condensation is accounted for or the planet is hot enough to avoid night-side cold traps entirely. Planetary rotation may significantly lower the apparent orbital velocity of emission signals.
|
arxiv:2210.12847
|
Historical maps contain detailed geographic information difficult to find elsewhere, covering long periods of time (e.g., 125 years for the historical topographic maps in the US). However, these maps typically exist as scanned images without searchable metadata. Existing approaches to making historical maps searchable rely on tedious manual work (including crowd-sourcing) to generate the metadata (e.g., geolocations and keywords). Optical character recognition (OCR) software could alleviate the required manual work, but the recognition results are individual words instead of location phrases (e.g., "black" and "mountain" vs. "black mountain"). This paper presents an end-to-end approach to address the real-world problem of finding and indexing historical map images. This approach automatically processes historical map images to extract their text content and generates a set of metadata that is linked to large external geospatial knowledge bases. The linked metadata in the RDF (Resource Description Framework) format supports complex queries for finding and indexing historical maps, such as retrieving all historical maps covering mountain peaks higher than 1,000 meters in California. We have implemented the approach in a system called mapKurator. We have evaluated mapKurator using historical maps from several sources with various map styles, scales, and coverage. Our results show significant improvement over the state-of-the-art methods. The code has been made publicly available as modules of the Kartta Labs project at https://github.com/kartta-labs/project.
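The word-versus-phrase problem mentioned above ("black" + "mountain" vs. "black mountain") can be sketched as a simple grouping of OCR word boxes. This is an illustrative toy, not the mapKurator pipeline; the tuple layout and gap thresholds are assumptions.

```python
# Toy sketch (hypothetical data layout, not the mapKurator API): merge OCR
# words whose bounding boxes sit on the same line and are horizontally
# adjacent into location phrases.
def group_words(words, max_gap=15):
    """words: list of (text, x_min, x_max, y) tuples in reading order."""
    phrases, current = [], [words[0]]
    for w in words[1:]:
        prev = current[-1]
        same_line = abs(w[3] - prev[3]) < 5          # similar baseline y
        close = 0 <= w[1] - prev[2] <= max_gap       # small horizontal gap
        if same_line and close:
            current.append(w)
        else:
            phrases.append(" ".join(t for t, *_ in current))
            current = [w]
    phrases.append(" ".join(t for t, *_ in current))
    return phrases

print(group_words([("black", 0, 40, 10), ("mountain", 50, 120, 10),
                   ("lake", 0, 30, 40)]))  # ['black mountain', 'lake']
```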
|
arxiv:2112.01671
|
Nuclear engineering is the engineering discipline concerned with designing and applying systems that utilize the energy released by nuclear processes. The most prominent application of nuclear engineering is the generation of electricity. Worldwide, some 440 nuclear reactors in 32 countries generate 10 percent of the world's electricity through nuclear fission. In the future, it is expected that nuclear fusion will add another nuclear means of generating energy. Both reactions make use of the nuclear binding energy released when atomic nucleons are either separated (fission) or brought together (fusion). The energy available is given by the binding energy curve, and the amount generated is much greater than that generated through chemical reactions. Fission of 1 gram of uranium yields as much energy as burning 3 tons of coal or 600 gallons of fuel oil, without adding carbon dioxide to the atmosphere.

== History ==

Nuclear engineering was born in 1938, with the discovery of nuclear fission. The first artificial nuclear reactor, CP-1, was designed by a team of physicists who were concerned that Nazi Germany might also be seeking to build a bomb based on nuclear fission. (The earliest known nuclear reaction on Earth occurred naturally, 1.7 billion years ago, in Oklo, Gabon, Africa.) The second artificial nuclear reactor, the X-10 Graphite Reactor, was also a part of the Manhattan Project, as were the plutonium-producing reactors of the Hanford Engineer Works. The first nuclear reactor to generate electricity was Experimental Breeder Reactor I (EBR-I), which did so near Arco, Idaho, in 1951. EBR-I was a standalone facility, not connected to a grid, but a later Idaho research reactor in the BORAX series did briefly supply power to the town of Arco in 1955. The first commercial nuclear power plant, built to be connected to an electrical grid, is the Obninsk Nuclear Power Plant, which began operation in 1954. The second is the Shippingport Atomic Power Station, which produced electricity in 1957. For a chronology, from the discovery of uranium to the current era, see Outline history of nuclear energy or History of nuclear power. Also see History of nuclear engineering part 1: Radioactivity, part 2: Building the bomb, and part 3: Atoms for peace. See List of commercial nuclear reactors for a comprehensive listing of nuclear power reactors and IAEA Power Reactor Information System (PRIS) for worldwide and country-level statistics on nuclear power generation.

== Sub-disciplines ==

Nuclear engineers work in such areas as the following: nuclear reactor design, which
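The "1 gram of uranium versus 3 tons of coal" comparison above can be checked with a back-of-the-envelope calculation, assuming roughly 200 MeV released per U-235 fission and about 24 MJ/kg for coal (typical textbook values; both are assumptions, not stated in the text).

```python
# Back-of-the-envelope check of the "1 g of uranium ~ 3 tons of coal" claim,
# assuming ~200 MeV per U-235 fission and ~24 MJ/kg for coal (assumed values).
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13

atoms_per_gram = AVOGADRO / 235                 # U-235 nuclei in 1 g
energy_joules = atoms_per_gram * 200 * MEV_TO_J # ~8.2e10 J
coal_equivalent_kg = energy_joules / 24e6       # coal ~ 24 MJ/kg

print(f"1 g U-235 ~ {energy_joules:.2e} J ~ {coal_equivalent_kg/1000:.1f} t of coal")
```

The result lands at roughly 3-3.5 tonnes of coal, consistent with the figure in the text.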
|
https://en.wikipedia.org/wiki/Nuclear_engineering
|
Large language models (LLMs) have achieved remarkable performance in recent years but are fundamentally limited by the underlying training data. To improve models beyond the training data, recent works have explored how LLMs can be used to generate synthetic data for autonomous self-improvement. However, successive steps of self-improvement can reach a point of diminishing returns. In this work, we propose a complementary approach towards self-improvement where finetuning is applied to a multiagent society of language models. A group of language models, all starting from the same base model, are independently specialized by updating each one using data generated through multiagent interactions among the models. By training each model on independent sets of data, we illustrate how this approach enables specialization across models and diversification over the set of models. As a result, our overall system is able to preserve diverse reasoning chains and autonomously improve over many more rounds of finetuning than single-agent self-improvement methods. We quantitatively illustrate the efficacy of the approach across a wide suite of reasoning tasks.
|
arxiv:2501.05707
|
We establish a Schubert calculus for Bott-Samelson resolutions in the algebraic cobordism ring of a complete flag variety $G/B$.
|
arxiv:0903.3936
|
Distributed energy resources (DERs), such as rooftop solar panels, are growing rapidly and are reshaping power systems. To promote DERs, a feed-in tariff (FIT) is usually adopted by utilities to pay DER owners certain fixed rates for supplying energy to the grid. An alternative to FIT is a market-based approach; i.e., consumers and DER owners trade energy in an auction-based peer-to-peer (P2P) market, and the rates are determined by a market clearing process. However, the complexities in such a market and agents' bounded rationality may invalidate many well-established theories on auction design and hinder market development. To address this issue, we propose an automated bidding framework for a repeated auction based on multi-armed bandit learning, which aims to minimize each bidder's cumulative regret. Numerical results indicate convergence of such a multi-agent learning game to a steady state. For comparison purposes, we apply the framework to three different auction designs to realize a P2P market.
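The bandit view of repeated bidding can be sketched in a few lines. This is a minimal epsilon-greedy illustration of the general idea, not the paper's regret-minimizing algorithm, and the "market" below is a toy payoff function invented for the example.

```python
import random

# Minimal epsilon-greedy sketch of repeated-auction bidding: each bidder
# treats a discrete set of bid prices as bandit arms and learns from realized
# payoffs. (Illustrative only; the paper's algorithm and market-clearing
# rules are not reproduced here.)
class BanditBidder:
    def __init__(self, bid_levels, epsilon=0.1):
        self.bids = list(bid_levels)
        self.eps = epsilon
        self.counts = [0] * len(self.bids)
        self.values = [0.0] * len(self.bids)   # running mean payoff per arm

    def choose(self):
        if random.random() < self.eps:
            return random.randrange(len(self.bids))        # explore
        return max(range(len(self.bids)), key=lambda i: self.values[i])

    def update(self, arm, payoff):
        self.counts[arm] += 1
        self.values[arm] += (payoff - self.values[arm]) / self.counts[arm]

random.seed(0)
bidder = BanditBidder(bid_levels=[0.05, 0.10, 0.15])
for _ in range(2000):
    arm = bidder.choose()
    # toy market: bidding 0.10 clears most often and pays best on average
    payoff = {0: 0.2, 1: 1.0, 2: 0.5}[arm] + random.gauss(0, 0.1)
    bidder.update(arm, payoff)
print(bidder.bids[max(range(3), key=lambda i: bidder.values[i])])
```

After enough rounds the empirical means concentrate and the bidder settles on the best-paying bid level, mirroring the convergence to a steady state reported in the abstract.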
|
arxiv:2002.09435
|
We show that the globular cluster mass function (GCMF) in the Milky Way depends on cluster half-mass density (rho_h) in the sense that the turnover mass M_TO increases with rho_h while the width of the GCMF decreases. We argue that this is the expected signature of the slow erosion of a mass function that initially rose towards low masses, predominantly through cluster evaporation driven by internal two-body relaxation. We find excellent agreement between the observed GCMF -- including its dependence on internal density rho_h, central concentration c, and galactocentric distance R_gc -- and a simple model in which the relaxation-driven mass-loss rates of clusters are approximated by -dM/dt = mu_ev ~ rho_h^{1/2}. In particular, we recover the well-known insensitivity of M_TO to R_gc. This feature does not derive from a literal "universality" of the GCMF turnover mass, but rather from a significant variation of M_TO with rho_h -- the expected outcome of relaxation-driven cluster disruption -- plus significant scatter in rho_h as a function of R_gc. Our conclusions are the same if the evaporation rates are assumed to depend instead on the mean volume or surface densities of clusters inside their tidal radii, as mu_ev ~ rho_t^{1/2} or mu_ev ~ Sigma_t^{3/4} -- alternative prescriptions that are physically motivated but involve cluster properties (rho_t and Sigma_t) that are not as well defined or as readily observable as rho_h. In all cases, the normalization of mu_ev required to fit the GCMF implies cluster lifetimes that are within the range of standard values (although falling towards the low end of this range). Our analysis does not depend on any assumptions or information about velocity anisotropy in the globular cluster system.
|
arxiv:0704.0080
|
...vability and set-theoretic forcing. Intuitionistic logic was developed by Heyting to study Brouwer's program of intuitionism, in which Brouwer himself avoided formalization. Intuitionistic logic specifically does not include the law of the excluded middle, which states that each sentence is either true or its negation is true. Kleene's work with the proof theory of intuitionistic logic showed that constructive information can be recovered from intuitionistic proofs. For example, any provably total function in intuitionistic arithmetic is computable; this is not true in classical theories of arithmetic such as Peano arithmetic.

=== Algebraic logic ===

Algebraic logic uses the methods of abstract algebra to study the semantics of formal logics. A fundamental example is the use of Boolean algebras to represent truth values in classical propositional logic, and the use of Heyting algebras to represent truth values in intuitionistic propositional logic. Stronger logics, such as first-order logic and higher-order logic, are studied using more complicated algebraic structures such as cylindric algebras.

== Set theory ==

Set theory is the study of sets, which are abstract collections of objects. Many of the basic notions, such as ordinal and cardinal numbers, were developed informally by Cantor before formal axiomatizations of set theory were developed. The first such axiomatization, due to Zermelo, was extended slightly to become Zermelo-Fraenkel set theory (ZF), which is now the most widely used foundational theory for mathematics. Other formalizations of set theory have been proposed, including von Neumann-Bernays-Gödel set theory (NBG), Morse-Kelley set theory (MK), and New Foundations (NF). Of these, ZF, NBG, and MK are similar in describing a cumulative hierarchy of sets. New Foundations takes a different approach; it allows objects such as the set of all sets at the cost of restrictions on its set-existence axioms. The system of Kripke-Platek set theory is closely related to generalized recursion theory. Two famous statements in set theory are the axiom of choice and the continuum hypothesis. The axiom of choice, first stated by Zermelo, was proved independent of ZF by Fraenkel, but has come to be widely accepted by mathematicians. It states that given a collection of nonempty sets there is a single set C that contains
|
https://en.wikipedia.org/wiki/Mathematical_logic
|
A theory of ordinal powers of the ideal $\mathfrak{G}_{\mathcal{S}}$ of $\mathcal{S}$-ghost morphisms is developed by introducing, for every ordinal $\lambda$, the $\lambda$-th inductive power $\mathcal{J}^{(\lambda)}$ of an ideal $\mathcal{J}$. The generalized $\lambda$-generating hypothesis ($\lambda$-GGH) for an ideal $\mathcal{J}$ of an exact category $\mathcal{A}$ is the proposition that the $\lambda$-th inductive power $\mathcal{J}^{(\lambda)}$ is an object ideal. It is shown that under mild conditions every inductive power of a ghost ideal is an object-special preenveloping ideal. When $\lambda$ is infinite, the proof is based on an ideal version of Eklof's Lemma. When $\lambda$ is an infinite regular cardinal, the generalized $\lambda$-generating hypothesis is established for the ghost ideal $\mathfrak{G}_{\mathcal{S}}$ in the case when $\mathcal{A}$ is a locally $\lambda$-presentable Grothendieck category and $\mathcal{S}$ is a set of $\lambda$-presentable objects in $\mathcal{A}$ such that $^\perp(\mathcal{S}^\perp)$ contains a generating set for $\mathcal{A}$. As a consequence of $\lambda$-GGH for the ghost ideal $\mathfrak{G}_{R\mbox{-}\mathrm{Mod}}$ in the category of modules $R\mbox{-}\mathrm{Mod}$ over a ring, it is shown that if the class of pure projective left $R$-modules is closed under extensions, then every left FP-projective module is pure projective. A restricted version $n$-GGH($\mathfrak{G}(\mathbf{C}(R))$) for the ghost ideal in $\mathbf{C}(R)$ is also considered and it is shown that $n$-GGH($
|
arxiv:2411.05250
|
The last two decades have seen an explosive growth in the theory and practice of both quantum computing and machine learning. Modern machine learning systems process huge volumes of data and demand massive computational power. As silicon semiconductor miniaturization approaches its physical limits, quantum computing is increasingly being considered to cater to these computational needs in the future. Small-scale quantum computers and quantum annealers have been built and are already being sold commercially. Quantum computers can benefit machine learning research and application across all science and engineering domains. However, owing to its roots in quantum mechanics, research in this field has so far been confined within the purview of the physics community, and most work is not easily accessible to researchers from other disciplines. In this paper, we provide a background and summarize key results of quantum computing before exploring its application to supervised machine learning problems. By eschewing results from physics that have little bearing on quantum computation, we hope to make this introduction accessible to data scientists, machine learning practitioners, and researchers from across disciplines.
|
arxiv:2006.12025
|
Weak interactions in neutron $\beta$-decay exhibit parity violation through the preferential emission of right-handed antineutrinos. We identify this symmetry breaking with a reduction of phase space due to the small neutrino mass. During a brief interval of momentum exchange, a small-mass neutrino puts the emission process close to the bifurcation horizon of Rindler space, doubly covered by Minkowski space ${\cal M}$ as dictated by the equivalence principle of general relativity. In the limit of arbitrarily small mass, this two-sheet covering effectively collapses into a single sheet, reducing the dimension of Dirac spinors from four to two, leaving neutrinos single-handed. This predicts a similar reduction to single-handed particle states in electrons created at TeV energies, which may be tested with the planned linear leptonic colliders. If confirmed, right-handed small-mass neutrinos are expected to exist at sufficiently low energies.
|
arxiv:2502.09855
|
The exotic range of known planetary systems has provoked an equally exotic range of physical explanations for their diverse architectures. However, constraining formation processes requires mapping the observed exoplanet population to that which initially formed in the protoplanetary disc. Numerous results suggest that (internal or external) dynamical perturbation alters the architectures of some exoplanetary systems. Isolating planets that have evolved without any perturbation can help constrain formation processes. We consider the Kepler multiples, which have low mutual inclinations and are unlikely to have been dynamically perturbed. We apply a modelling approach similar to that of Mulders et al. (2018), additionally accounting for the two-dimensionality of the radius ($R = 0.3-20\,R_\oplus$) and period ($P = 0.5-730$ days) distribution. We find that an upper limit in planet mass of the form $M_{\rm lim} \propto a^\beta \exp(-a_{\rm in}/a)$, for semi-major axis $a$ and a broad range of $a_{\rm in}$ and $\beta$, can reproduce a distribution of $P$, $R$ that is indistinguishable from the observed distribution by our comparison metric. The index is consistent with $\beta = 1.5$, expected if growth is limited by accretion within the Hill radius. This model is favoured over models assuming a separable PDF in $P$, $R$. The limit, extrapolated to longer periods, is coincident with the orbits of RV-discovered planets ($a > 0.2$ au, $M > 1\,M_{\rm J}$) around recently identified low-density host stars, hinting at isolation-mass-limited growth. We discuss the necessary circumstances for a coincidental age-related bias as the origin of this result, concluding that such a bias is possible but unlikely. We conclude that, in light of the evidence that some planetary systems have been dynamically perturbed, simple models for planet growth during the formation stage are worth revisiting.
|
arxiv:2105.02907
|
We present a novel algorithm to perform the Hessenberg reduction of an $n\times n$ matrix $A$ of the form $A = D + UV^*$ where $D$ is diagonal with real entries and $U$ and $V$ are $n\times k$ matrices with $k \le n$. The algorithm has a cost of $O(n^2 k)$ arithmetic operations and is based on the quasiseparable matrix technology. Applications are shown to solving polynomial eigenvalue problems, and some numerical experiments are reported in order to analyze the stability of the approach.
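The structure of the input can be illustrated with a generic reduction. The sketch below builds a diagonal-plus-low-rank matrix $A = D + UV^T$ and reduces it with the standard $O(n^3)$ Householder method; it does not reproduce the paper's $O(n^2 k)$ quasiseparable algorithm, only the problem setting and the invariants any Hessenberg reduction must preserve.

```python
import numpy as np

# Generic Householder reduction to upper Hessenberg form (O(n^3)), applied to
# a matrix with the diagonal-plus-low-rank structure described in the text.
def hessenberg_reduce(A):
    """Return an upper Hessenberg matrix orthogonally similar to A."""
    H = np.array(A, dtype=float)
    n = H.shape[0]
    for j in range(n - 2):
        x = H[j + 1:, j]
        if np.linalg.norm(x[1:]) < 1e-14:
            continue  # column already in Hessenberg form
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        v /= np.linalg.norm(v)
        # similarity transform by the Householder reflector I - 2 v v^T
        H[j + 1:, :] -= 2.0 * np.outer(v, v @ H[j + 1:, :])
        H[:, j + 1:] -= 2.0 * np.outer(H[:, j + 1:] @ v, v)
    return H

rng = np.random.default_rng(1)
n, k = 8, 2
A = np.diag(rng.standard_normal(n)) + rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
H = hessenberg_reduce(A)
print(np.allclose(np.tril(H, -2), 0))        # upper Hessenberg structure
print(np.isclose(np.trace(H), np.trace(A)))  # similarity invariant preserved
```

The point of the quasiseparable approach is that, for this structured $A$, the same result can be computed with cost growing only linearly in $k$ per column.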
|
arxiv:1501.07812
|
This paper presents the basic ideas and properties of elliptic functions and elliptic integrals as an expository essay. It explores some of their numerous consequences and includes applications to problems such as the simple pendulum, the Euler rigid body motion and some other integrable Hamiltonian systems.
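The simple-pendulum application mentioned above is a compact worked example: the exact period involves the complete elliptic integral $K(m)$, computed here via the arithmetic-geometric mean identity $K(m) = \pi / (2\,\mathrm{agm}(1, \sqrt{1-m}))$ (a standard identity, assumed here rather than taken from the essay).

```python
import math

# Exact pendulum period T = 4 sqrt(L/g) K(m), m = sin^2(theta0/2), with K
# evaluated by the arithmetic-geometric mean (standard identity, assumed).
def agm(a, b, tol=1e-15):
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def pendulum_period(length, theta0, g=9.81):
    m = math.sin(theta0 / 2) ** 2                      # elliptic parameter
    K = math.pi / (2 * agm(1.0, math.sqrt(1.0 - m)))   # complete integral K(m)
    return 4 * math.sqrt(length / g) * K

small = pendulum_period(1.0, 0.01)                 # tiny amplitude
harmonic = 2 * math.pi * math.sqrt(1.0 / 9.81)     # small-angle formula
print(abs(small - harmonic) < 1e-4)                # True: K(0) = pi/2 limit
print(pendulum_period(1.0, math.radians(90)) > harmonic)  # True: longer period
```

At zero amplitude $K(0) = \pi/2$ recovers the familiar $T = 2\pi\sqrt{L/g}$; at large amplitude the period grows, which is exactly the elliptic correction the essay discusses.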
|
arxiv:0707.1137
|
The topic of the glass transition gives rise to a wide diversity of views. It is, accordingly, characterized by a lack of agreement on which would be the most profitable theoretical perspective. In this chapter, I provide some elements that can help in sorting out the many theoretical approaches, understanding their foundations, and discussing their validity and mutual compatibility. Along the way, I describe the progress made in the last twenty years, including new insights concerning the spatial heterogeneity of the dynamics and the characteristic length scales associated with the glass transition. An emphasis is put on those theories that associate glass formation with growing collective behavior and emerging universality.
|
arxiv:1010.2938
|
We demonstrate how a superposition of coherent states can be generated for a microwave field inside a coplanar transmission line coupled to a single superconducting charge qubit, with the addition of a single classical magnetic pulse for chirping of the qubit transition frequency. We show how the qubit dephasing induces decoherence on the field superposition state, and how it can be probed by the qubit charge detection. The character of the charge qubit relaxation process itself is imprinted in the field state decoherence profile.
|
arxiv:1104.5189
|
Crowd movement guidance has been a fascinating problem in various fields, such as easing traffic congestion during unusual events and evacuating people from an emergency-affected area. To grab the reins of crowds, there has been considerable demand for a decision support system that can answer a typical question: "What will be the outcomes of each of the possible options in the current situation?" In this paper, we consider the problem of estimating the effects of crowd movement guidance from past data. To cope with the limited amount of available data, biased by past decision-makers, we leverage two recent techniques in deep representation learning for spatial data analysis and causal inference. We use a spatial convolutional operator to extract effective spatial features of crowds from a small amount of data, and balanced representation learning based on integral probability metrics to mitigate the selection bias and missing counterfactual outcomes. To evaluate the performance of estimating the treatment effects of possible guidance, we use a multi-agent simulator to generate realistic data on evacuation scenarios in a crowded theater, since there are no available datasets recording outcomes of all possible crowd movement guidance. The results of three experiments demonstrate that our proposed method reduces the estimation error by at most 56% compared to state-of-the-art methods.
|
arxiv:2102.03980
|
Compressed sensing techniques enable efficient acquisition and recovery of sparse, high-dimensional data signals via low-dimensional projections. In this work, we propose uncertainty autoencoders, a learning framework for unsupervised representation learning inspired by compressed sensing. We treat the low-dimensional projections as noisy latent representations of an autoencoder and directly learn both the acquisition (i.e., encoding) and amortized recovery (i.e., decoding) procedures. Our learning objective optimizes for a tractable variational lower bound to the mutual information between the datapoints and the latent representations. We show how our framework provides a unified treatment to several lines of research in dimensionality reduction, compressed sensing, and generative modeling. Empirically, we demonstrate a 32% improvement on average over competing approaches for the task of statistical compressed sensing of high-dimensional datasets.
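The compressed-sensing setting that inspires the framework can be demonstrated in a few lines: a sparse signal is recovered from a small number of random linear projections. Recovery here uses classical orthogonal matching pursuit, not the learned acquisition and amortized recovery proposed in the paper; the dimensions and signal model are invented for the demo.

```python
import numpy as np

# Classical compressed-sensing demo: sparse x, random projections y = A x,
# greedy recovery by orthogonal matching pursuit (OMP). Illustrative only.
rng = np.random.default_rng(0)
n, m, s = 64, 32, 3                      # ambient dim, measurements, sparsity
x = np.zeros(n)
idx = rng.choice(n, size=s, replace=False)
x[idx] = rng.uniform(1.0, 3.0, size=s) * rng.choice([-1.0, 1.0], size=s)
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random acquisition matrix
y = A @ x                                       # low-dimensional projection

# OMP: repeatedly pick the column best correlated with the residual, then
# re-fit the coefficients on the selected support by least squares.
support, residual = [], y.copy()
for _ in range(s):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print(np.allclose(x_hat, x))   # exact recovery, with high probability
```

The uncertainty autoencoder replaces both the fixed random matrix `A` and the iterative solver with learned, amortized counterparts.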
|
arxiv:1812.10539
|
Complexifying spacetime has many interesting applications, from the construction of higher-dimensional unification, to providing a useful framework for quantum gravity, to better defining some local symmetries that suffer singularities in real spacetime. In this context, spacetime is here extended to complex spacetime, and standard general coordinate invariance is extended to complex holomorphic general coordinate transformations. This is possible by introducing a non-Riemannian measure of integration, which transforms according to the inverse of the Jacobian of the coordinate transformation, thereby avoiding non-holomorphic behavior. This replaces the traditional square root of the determinant of the metric, $\sqrt{-g}$, which is not globally holomorphic, as well as the determinant of the vierbein, which is sensitive to the vierbein orientations and not invariant under local Lorentz transformations with negative determinants. A contribution to the cosmological term appears as an integration constant in the equations of motion. A proposed action for Finsler geometry, which involves $-g$ rather than $\sqrt{-g}$, also constitutes an example of a holomorphic general-coordinate-invariant modified-measure gravitational theory.
|
arxiv:2308.09246
|
starting from a model that consists of a semiclassical spin coupled to two leads we present a microscopic derivation of the langevin equation for the direction of the spin. for slowly - changing direction it takes on the form of the stochastic landau - lifschitz - gilbert equation. we give expressions for the gilbert damping parameter and the strength of the fluctuations, including their bias - voltage dependence. at nonzero bias - voltage the fluctuations and damping are not related by the fluctuation - dissipation theorem. we find, however, that in the low - frequency limit it is possible to introduce a voltage - dependent effective temperature that characterizes the fluctuations in the direction of the spin, and its transport - steady - state probability distribution function.
|
arxiv:0705.1432
|
in this paper we compute the holographic entanglement entropy for massive flavors in the d3 - d7 system, for arbitrary mass and various entangling region geometries. we show that the universal terms in the entanglement entropy exactly match those computed in the dual theory using conformal perturbation theory. we derive holographically the universal terms in the entanglement entropy for a cft perturbed by a relevant operator, up to second order in the coupling ; our results are valid for any entangling region geometry. we present a new method for computing the entanglement entropy of any top - down brane probe system using kaluza - klein holography and illustrate our results with massive flavors at finite density. finally we discuss the differential entropy for brane probe systems, emphasising that the differential entropy captures only the effective lower - dimensional einstein metric rather than the ten - dimensional geometry.
|
arxiv:1505.07697
|
we investigate a graphene quantum pump, adiabatically driven by two thin potential barriers vibrating around their equilibrium positions. for highly doped leads, the pumped current per mode diverges at the dirac point due to the more efficient contribution of the evanescent modes to the pumping process. the pumped current shows an oscillatory behavior with increasing amplitude as a function of the carrier concentration. this effect is in contrast to the decreasing oscillatory behavior of the analogous normal pump. the graphene pump driven by two vibrating thin barriers operates more efficiently than the graphene pump driven by two oscillating thin barriers.
|
arxiv:1512.06400
|
for graphs h and g, let the induced ramsey number ir(h, g) be the smallest number of vertices in a graph f such that, for any coloring of the edges of f in red and blue, there is either a red induced copy of h or a blue induced copy of g. in this note we consider the case where g = s_n is a star on n edges, for large n, and h is a fixed graph. we prove that (r-1)n < ir(h, s_n) < (r-1)(r-1)n + cn, for any c > 0, sufficiently large n, and r denoting the chromatic number of h. the lower bound is asymptotically tight for any fixed bipartite h. the upper bound is attained up to a constant factor, for example when h is a clique.
|
arxiv:2002.01297
|
we present new detections of the co ( 5 - 4 ), co ( 7 - 6 ), [ ci ] ( 1 - 0 ) and [ ci ] ( 2 - 1 ) molecular and atomic line transitions towards the unlensed, obscured quasar ams12 ( z = 2. 7672 ), observed with the iram pdbi. this is the first unlensed, high redshift source to have both [ ci ] transitions detected. continuum measurements between 70 $ \ mu $ m and 3 mm are used to constrain the fir sed, and we find a best fit fir luminosity of log [ lfir / lsol ] = 13. 5 + / - 0. 1, dust temperature t _ d = 88 + / - 8 k and emissivity index { \ beta } = 0. 6 + / - 0. 1. the highly - excited molecular gas probed by co ( 3 - 2 ), ( 5 - 4 ) and ( 7 - 6 ), is modelled with large velocity gradient ( lvg ) models. the gas kinetic temperature t _ g, density n ( h2 ), and the characteristic size r0, are determined using the dust temperature from the fir sed as a prior for the gas temperature. the best fitting parameters are t _ g = 90 + / - 8 k, n ( h2 ) = 10 ^ ( 3. 9 + / - 0. 1 ) cm ^ ( - 3 ) and r0 = 0. 8 + / - 0. 04 kpc. the ratio of the [ ci ] lines gives a [ ci ] excitation temperature of 43 + / - 10 k, indicating the [ ci ] and the high - excitation co are not in thermal equilibrium. the [ ci ] excitation temperature is below that of t _ d and t _ g of the high - excitation co, perhaps because [ ci ] lies at a larger radius where there may also be a large reservoir of co at a cooler temperature, perhaps detectable through the co ( 1 - 0 ). using the [ ci ] ( 1 - 0 ) line we can estimate the strength of the co ( 1 - 0 ) line and hence the gas mass. this suggests that a significant fraction ( ~ 30 % ) of the molecular gas is missed from the high - excitation line analysis. the eddington limited black hole mass is found from the bolometric luminosity to be mbh > ~
|
arxiv:1204.5480
|
we consider a degenerate nonlinear pde of elliptic type: $$-\mathrm{div}\left(a(|x|)|\nabla w(x)|^{p-2}\nabla w(x)\right) + h\left(|x|, w(x), \left\langle \nabla w(x), \frac{x}{|x|}\right\rangle\right) = \phi(w(x)),$$ where $x$ belongs to the ball in $\mathbf{R}^n$. using an argument based on opial-type inequalities, we investigate qualitative properties of its radial solutions, e.g. maximum principles and monotonicity, as well as nonexistence of nontrivial solutions.
|
arxiv:1908.08915
|
$ \ alpha $ - rucl $ _ 3 $ has been hinted as a spin - orbital - assisted mott insulator in proximity to a kitaev spin liquid state. here we present arpes measurements on single crystal $ \ alpha $ - rucl $ _ 3 $ in both the pristine and electron - doped states, and combine them with lda + soc + u calculations performed for the several low - energy competing magnetically ordered states as well as the paramagnetic state. a large mott gap is found in the measured band structure of the pristine compound that persists to more than 20 times beyond the magnetic ordering temperature, though the paramagnetic calculation shows almost no gap. upon electron doping, spectral weight is transferred into the gap but the new states still maintain a sizable gap from the fermi edge. these findings are most consistent with a mott insulator with a somewhat exotic evolution out of the mott state with both temperature and doping, likely related to unusually strong spin fluctuations.
|
arxiv:1603.02279
|
event-based cameras are bio-inspired sensors that detect light changes asynchronously for each pixel. they are increasingly used in fields like computer vision and robotics because of several advantages over traditional frame-based cameras, such as high temporal resolution, low latency, and high dynamic range. as with any camera, the output's quality depends on how well the camera's settings, called biases for event-based cameras, are configured. while frame-based cameras have advanced automatic configuration algorithms, there are very few such tools for tuning these biases. a systematic testing framework would require observing the same scene with different biases, which is tricky since event cameras only generate events when there is movement. event simulators exist, but since biases heavily depend on the electrical circuit and the pixel design, available simulators are not well suited for bias tuning. to allow reproducibility, we present biasbench, a novel event dataset containing multiple scenes with settings sampled in a grid-like pattern. we present three different scenes, each with a quality metric of the downstream application. additionally, we present a novel, rl-based method to facilitate online bias adjustments.
|
arxiv:2504.18235
|
an equation of motion for open quantum systems incorporating memory effects and initial correlations with the environment is presented in terms of an effective liouville operator that solely acts on states of the system. the environment can induce memory effects via the frequency dependence of the effective liouville and initial correlations can be mapped to a shifted frequency dependent initial state within the system. the equation of motion generalizes the well known semi - group dynamic equations. in generic systems the effective liouville has a non - degenerate zero mode. by probability conservation one can demonstrate that a generic open system reaches, in the long time limit, a stationary state, which is independent of any initial condition.
|
arxiv:1810.06458
|
dust reverberation mapping is a powerful method to investigate the structure of the dusty tori in agns, and it has been performed on more than a hundred type 1 agns. however, no clear results have been reported for type 2 agns because their strong optical-uv extinction completely hides their accretion disc emission. here we focus on an x-ray-bright type 2 agn, ngc 2110, and utilize the 2-20 kev x-ray variation monitored by maxi to trace the disc emission, instead of the optical-uv variation. comparing it with light curves in the wise infrared (ir) w1 band ($\lambda = 3.4$ $\mu$m) and w2 band ($\lambda = 4.6$ $\mu$m) with cross-correlation analyses, we found candidates for the dust reverberation time lag at $\sim 60$ days, $\sim 130$ days, and $\sim 1250$ days between the x-ray flux variation and those of the ir bands. by examining the best-fitting x-ray and ir light curves with the derived time lags, we found that the time lag of $\sim 130$ days is most favoured. with this time lag, the relation between time lag and luminosity for ngc 2110 is consistent with those in type 1 agns, suggesting that the dust reverberation in ngc 2110 mainly originates in hot dust in the innermost region of the torus, the same as in type 1 agns. as demonstrated by the present study, simultaneous x-ray and ir monitoring can be a promising tool for performing dust reverberation mapping on type 2 agns.
|
arxiv:2005.07339
|
the nonlinear hall effect ( nlhe ), which can produce a transverse voltage without any magnetic field, is a potential alternative for rectification or frequency doubling. however, the low temperature detection of nlhe limits its applications. here, we report the room - temperature nlhe in a type - ii weyl semimetal tairte4, which hosts a robust nlhe due to substantial broken inversion symmetry and large band overlapping at the fermi level. we also observe a temperature - induced sign inversion of nlhe in tairte4. our theoretical calculations suggest that the observed sign inversion is a result of temperature - induced shift in the chemical potential indicating a direct correlation of nlhe with the electronic structure at the fermi surface. finally, the room - temperature nlhe in tairte4 is exploited to demonstrate the wireless rf rectification with zero external bias and magnetic field. this work opens a door to realizing room temperature applications based on the nlhe in weyl semimetals.
|
arxiv:2012.14104
|
after a brief introduction to the statistical description of data, these lecture notes focus on quantum field theories as they emerge from lattice models in the critical limit. for the simulation of these lattice models, markov chain monte - carlo methods are widely used. we discuss the heat bath and, more modern, cluster algorithms. the ising model is used as a concrete illustration of important concepts such as correspondence between a theory of branes and quantum field theory or the duality map between strong and weak couplings. the notes then discuss the inclusion of gauge symmetries in lattice models and, in particular, the continuum limit in which quantum yang - mills theories arise.
|
arxiv:0711.3004
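the heat-bath update discussed in these lecture notes can be sketched for the 2d ising model in a few lines; the short run below, deep in the ordered phase, is only a sanity check, and the lattice size and sweep count are illustrative.

```python
import numpy as np

def heat_bath_sweep(spins, beta, rng):
    """one heat-bath sweep of the 2d ising model with periodic boundaries:
    each spin is redrawn from its exact conditional distribution given the
    sum h of its four neighbours, p(up) = 1 / (1 + exp(-2 * beta * h))."""
    L = spins.shape[0]
    for i in range(L):
        for j in range(L):
            h = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                 + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
            spins[i, j] = 1 if rng.random() < p_up else -1
    return spins

rng = np.random.default_rng(42)
L = 16
spins = np.ones((L, L), dtype=int)             # start in the ordered state
for _ in range(50):
    heat_bath_sweep(spins, beta=1.0, rng=rng)  # beta well above critical ~0.4407
magnetization = abs(spins.mean())
```

unlike the cluster algorithms mentioned above, this local update suffers from critical slowing down near the transition, which is one motivation for the more modern methods the notes discuss.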
|
a multitude of individuals across the globe grapple with motor disabilities. neural prosthetics utilizing brain-computer interface (bci) technology show promise for improving motor rehabilitation outcomes. the intricate nature of eeg data poses a significant hurdle for current bci systems. recently, a qualitative repository of eeg signals tied to both upper and lower limb execution of motor and motor imagery tasks has been unveiled. despite this, the performance of the machine learning (ml) models trained on this dataset was deficient, and the evaluation framework seemed insufficient. to enhance outcomes, robust feature engineering (signal processing) methodologies are implemented. a collection of time-domain, frequency-domain, and wavelet-derived features was obtained from 16-channel eeg signals, and the maximum relevance minimum redundancy (mrmr) approach was employed to identify the four most significant features. for classification, k-nearest neighbors (knn), support vector machine (svm), decision tree (dt), and naïve bayes (nb) models were implemented with these selected features, evaluating their effectiveness through metrics such as testing accuracy, precision, recall, and f1 score. by leveraging svm with a gaussian kernel, a maximum testing accuracy of 92.50% for motor activities and 95.48% for imagery activities is achieved. these results are notably more dependable than those of the previous study, where the peak accuracy was 74.36%. this work provides an in-depth analysis of the mi limb eeg dataset and will help in designing and developing simple, cost-effective and reliable bci systems for neuro-rehabilitation.
|
arxiv:2412.07175
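the kind of time- and frequency-domain features described above can be sketched with plain numpy; the channel count, sampling rate, and the synthetic 10 hz "mu rhythm" below are illustrative stand-ins for real eeg, and this is not the paper's exact feature set.

```python
import numpy as np

def features(epoch, fs):
    """simple per-channel features of the kind used for motor-imagery
    classification: time-domain statistics plus band power from the fft
    (mu band, 8-13 hz)."""
    feats = {
        "mean": epoch.mean(axis=1),
        "var": epoch.var(axis=1),
        "rms": np.sqrt((epoch ** 2).mean(axis=1)),
    }
    freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
    band = (freqs >= 8.0) & (freqs <= 13.0)
    feats["mu_power"] = psd[:, band].sum(axis=1)
    return feats

rng = np.random.default_rng(0)
fs, n_ch, n_s = 160, 16, 640                 # 4 s of 16-channel data at 160 hz
t = np.arange(n_s) / fs
# noise plus a synthetic 10 hz rhythm standing in for motor-related activity
epoch = rng.normal(size=(n_ch, n_s)) + 2.0 * np.sin(2 * np.pi * 10.0 * t)
f = features(epoch, fs)
```

feature vectors of this form would then be ranked (e.g. by mrmr) and fed to a classifier such as an svm, as in the study above.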
|
by combining classical molecular dynamics simulations and density functional theory total energy calculations, we study the possibility of doping graphene with b / n atoms using low - energy ion irradiation. our simulations show that the optimum irradiation energy is 50 ev with substitution probabilities of 55 % for n and 40 % for b. we further estimate probabilities for different defect configurations to appear under b / n ion irradiation. we analyze the processes responsible for defect production and report an effective swift chemical sputtering mechanism for n irradiation at low energies ( ~ 125 ev ) which leads to production of single vacancies. our results show that ion irradiation is a promising method for creating hybrid c - b / n structures for future applications in the realm of nanoelectronics.
|
arxiv:1102.0645
|
heavy fermion materials have been a rich playground for strongly correlated physics for decades. however, engineering tunable and synthesizable heavy fermion materials remains a challenge. we strive to integrate heavy fermion properties into carbon boron clathrates as a universal structure which can host a diverse array of interesting physical phenomena. using a combination of density functional theory and dynamical mean field theory, we study two rare earth carbon boron clathrates, smb $ _ 3 $ c $ _ 3 $ and ceb $ _ 3 $ c $ _ 3 $, and explore properties arising from the strong electronic correlations. we find a significant increase in the density of states at the fermi level in ceb $ _ 3 $ c $ _ 3 $ as the temperature is lowered, indicating the development of a heavy electron state. in smb $ _ 3 $ c $ _ 3 $, a potential kondo insulating state is identified. both findings point to rare earth carbon boron clathrates as novel strongly correlated materials within a universally tunable structure, offering a fresh platform to innovate upon conventional heavy - fermion materials design.
|
arxiv:2310.06094
|
the scenario approach is widely used in robust control system design and chance - constrained optimization, maintaining convexity without requiring assumptions about the probability distribution of uncertain parameters. however, the approach can demand large sample sizes, making it intractable for safety - critical applications that require very low levels of constraint violation. to address this challenge, we propose a novel yet simple constraint scaling method, inspired by large deviations theory. under mild nonparametric conditions on the underlying probability distribution, we show that our method yields an exponential reduction in sample size requirements for bilinear constraints with low violation levels compared to the classical approach, thereby significantly improving computational tractability. numerical experiments on robust pole assignment problems support our theoretical findings.
|
arxiv:2411.07361
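the sample-size explosion the abstract addresses can be seen from a commonly quoted sufficient bound for the scenario approach, roughly $n \geq (2/\epsilon)(\ln(1/\beta) + d)$ for $d$ decision variables, violation level $\epsilon$, and confidence $1-\beta$; the sketch below uses illustrative numbers and this classical-style bound, not the paper's constraint-scaling method.

```python
import math

def scenario_sample_size(eps, beta, d):
    """a standard sufficient number of scenarios for a convex program with d
    decision variables, violation level eps and confidence 1 - beta
    (a classical-style bound, not the scaled method proposed above)."""
    return math.ceil((2.0 / eps) * (math.log(1.0 / beta) + d))

# the required number of scenarios explodes as the violation level shrinks
sizes = {eps: scenario_sample_size(eps, beta=1e-6, d=10) for eps in (1e-2, 1e-4, 1e-6)}
```

the linear growth in $1/\epsilon$ is what makes very low violation levels intractable for the classical approach, motivating the exponential reduction claimed above.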
|
this paper explores the experimental search potential for sbottom pair production in an r - parity conserving scenario at the lhc run - 3 and hl - lhc. the sbottom decays with a 100 % br via a chargino, $ \ tilde { b } _ 1 \ to t \ tilde { \ chi } _ 1 ^ \ pm $, which subsequently decays to a $ w $ boson and a neutralino, $ \ tilde { \ chi } _ 1 ^ \ pm \ to w \ tilde { \ chi } _ 1 ^ 0 $, also with a 100 % br. the study follows the atlas object definitions and event selection criteria from ref. jhep06 ( 2020 ) 046, focusing on rpc2l1b and rpc2l2b signal regions defined with same - sign leptons and at least one $ b $ - tagged jet. projected exclusion limits are presented in the $ \ tilde { b } _ 1 $ - $ \ tilde { \ chi } _ 1 ^ 0 $ mass plane for three center - of - mass energies ( 13 tev, 13. 6 tev, and 14 tev ) and three integrated luminosity scenarios ( 139 fb $ ^ { - 1 } $, 300 fb $ ^ { - 1 } $, and 3000 fb $ ^ { - 1 } $ ). keywords : sbottom pair production, same - sign leptons, multi - leptons
|
arxiv:2412.19327
|
in the spirit of classic works of wilson on the renormalization group and operator product expansion, a new framework for the study of the theory space of euclidean quantum field theories has been introduced. this formalism is particularly useful for elucidating the structure of the short - distance expansions of the $ n $ - point functions of a renormalizable quantum field theory near a non - trivial fixed point. we review and apply this formalism in the study of the scaling limit of the two dimensional massive ising model. renormalization group analysis and operator product expansions determine all the non - analytic mass dependence of the short - distance expansion of the correlation functions. an extension of the first order variational formula to higher orders provides a manifestly finite scheme for the perturbative calculation of the operator product coefficients to any order in parameters. a perturbative expansion of the correlation functions follows. we implement this scheme for a systematic study of correlation functions involving two spin operators. we show how the necessary non - trivial integrals can be calculated. as two concrete examples we explicitly calculate the short - distance expansion of the spin - spin correlation function to third order and the spin - spin - energy density correlation function to first order in the mass. we also discuss the applicability of our results to perturbations near other non - trivial fixed points corresponding to other unitary minimal models.
|
arxiv:hep-th/9312207
|
we present results from a ~ 55 ks long xmm - newton observation of the obscured agn, ngc 5643, performed in july 2009. a previous, shorter ( about 10 ks ) xmm - newton observation in february 2003 had left two major issues open, the nature of the hard x - ray emission ( compton - thin vs compton - thick ) and of the soft x - ray excess ( photoionized vs collisionally ionized matter ). the new observation shows that the source is compton - thick and that the dominant contribution to the soft x - ray emission is by photoionized matter ( even if it is still unclear whether collisionally ionized matter may contribute as well ). we also studied three bright x - ray sources that are in the field of ngc 5643. the ulx ngc 5643 x - 1 was confirmed to be very luminous, even if more than a factor 2 fainter than in 2003. we then provided the first high quality spectrum of the cluster of galaxies abell 3602. the last source, cxoj143244. 5 - 442020, is likely an unobscured agn, possibly belonging to abell 3602.
|
arxiv:1307.1591
|
in this paper, we study the problem of signal estimation from noisy non - linear measurements when the unknown $ n $ - dimensional signal is in the range of an $ l $ - lipschitz continuous generative model with bounded $ k $ - dimensional inputs. we make the assumption of sub - gaussian measurements, which is satisfied by a wide range of measurement models, such as linear, logistic, 1 - bit, and other quantized models. in addition, we consider the impact of adversarial corruptions on these measurements. our analysis is based on a generalized lasso approach ( plan and vershynin, 2016 ). we first provide a non - uniform recovery guarantee, which states that under i. i. d. ~ gaussian measurements, roughly $ o \ left ( \ frac { k } { \ epsilon ^ 2 } \ log l \ right ) $ samples suffice for recovery with an $ \ ell _ 2 $ - error of $ \ epsilon $, and that this scheme is robust to adversarial noise. then, we apply this result to neural network generative models, and discuss various extensions to other models and non - i. i. d. ~ measurements. moreover, we show that our result can be extended to the uniform recovery guarantee under the assumption of a so - called local embedding property, which is satisfied by the 1 - bit and censored tobit models.
|
arxiv:2006.12415
|
this is a review talk on the uv and infrared selected galaxies. the central question addressed is : do uv and infrared surveys see the 2 sides of star formation of the same population, or star formation of 2 different populations? we first review the literature on the uv and ir selected galaxy samples, try to quantify the difference and overlaps between these two populations of star forming galaxies. we then present some preliminary results of a galex / swire comparison study for ir and uv selected galaxies at z = 0. 6, in an attempt to constrain the evolution of the dust attenuation and of stellar mass of these galaxies.
|
arxiv:astro-ph/0601086
|
we prove that every raag ( a right - angled artin group ) embeds in the group of hamiltonian symplectomorphisms of the 2 - sphere.
|
arxiv:1104.0348
|
we propose a data - driven, coarse - graining formulation in the context of equilibrium statistical mechanics. in contrast to existing techniques which are based on a fine - to - coarse map, we adopt the opposite strategy by prescribing a probabilistic coarse - to - fine map. this corresponds to a directed probabilistic model where the coarse variables play the role of latent generators of the fine scale ( all - atom ) data. from an information - theoretic perspective, the framework proposed provides an improvement upon the relative entropy method and is capable of quantifying the uncertainty due to the information loss that unavoidably takes place during the cg process. furthermore, it can be readily extended to a fully bayesian model where various sources of uncertainties are reflected in the posterior of the model parameters. the latter can be used to produce not only point estimates of fine - scale reconstructions or macroscopic observables, but more importantly, predictive posterior distributions on these quantities. predictive posterior distributions reflect the confidence of the model as a function of the amount of data and the level of coarse - graining. the issues of model complexity and model selection are seamlessly addressed by employing a hierarchical prior that favors the discovery of sparse solutions, revealing the most prominent features in the coarse - grained model. a flexible and parallelizable monte carlo - expectation - maximization ( mc - em ) scheme is proposed for carrying out inference and learning tasks. a comparative assessment of the proposed methodology is presented for a lattice spin system and the spc / e water model.
|
arxiv:1605.08301
|
we address the question of when a covering of the boundary of a surface can be extended to a covering of the surface (equivalently: when is there a branched cover with a prescribed monodromy). if such an extension is possible, when can the total space be taken to be connected? when can the extension be taken to be regular? we give necessary and sufficient conditions for both finite and infinite covers (infinite covers are our main focus). in order to prove our results, we establish group-theoretic results of independent interest, such as the following extension (and simplification) of a theorem of ore: every element of the infinite symmetric group is the commutator of two elements which, together, act transitively.
|
arxiv:0901.3594
|
a recent suggestion [ plb 774 ( 2017 ) 522 ] that purely - $ \ lambda ^ { \ ast } ( 1405 ) $ nuclei provide the absolute minimum energy in charge - neutral baryon matter for baryon - number $ a \ gtrsim 8 $, is tested within rmf calculations. a broad range of $ \ lambda ^ { \ ast } $ interaction strengths, commensurate with $ ( \ bar k \ bar k nn ) _ { i = 0 } $ binding energy assumed to be of order 100 mev, is scanned. it is found that the binding energy per $ \ lambda ^ { \ ast } $, $ b / a $, saturates for $ a \ gtrsim 120 $ with values of $ b / a $ considerably below 100 mev, implying that $ \ lambda ^ { \ ast } ( 1405 ) $ matter is highly unstable against strong decay to $ \ lambda $ and $ \ sigma $ hyperon aggregates. the central density of $ \ lambda ^ { \ ast } $ matter is found to saturate as well, at roughly twice nuclear matter density. moreover, it is shown that the underlying very strong $ \ bar k n $ potentials, fitted for isospin $ i = 0 $ to the mass and width values of $ \ lambda ^ { \ ast } ( 1405 ) $, fail to reproduce values of single - nucleon absorption fractions deduced across the periodic table from $ k ^ - $ capture - at - rest bubble chamber experiments.
|
arxiv:1805.11368
|
this paper proposes a new mutual independence test for a large number of high-dimensional random vectors. the test statistic is based on the characteristic function of the empirical spectral distribution of the sample covariance matrix. the asymptotic distributions of the test statistic under the null and local alternative hypotheses are established as the dimensionality and the sample size of the data are comparable. we apply this test to examine multiple ma(1) and ar(1) models, and panel data models with some spatial cross-sectional structures. in addition, applied in a flexible fashion, the proposed test can capture some dependent but uncorrelated structures, for example, nonlinear ma(1) models, multiple arch(1) models and vandermonde matrices. simulation results are provided for detecting these dependent structures. an empirical study of dependence between closing stock prices of several companies from the new york stock exchange (nyse) demonstrates that cross-sectional dependence is prevalent in stock markets.
|
arxiv:1205.6607
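the main ingredient of the test statistic — the characteristic function of the empirical spectral distribution of the sample covariance matrix — can be sketched directly; the dimensions below are illustrative, and this omits the centering and limiting-distribution machinery of the paper.

```python
import numpy as np

def esd_char_function(X, t_grid):
    """characteristic function of the empirical spectral distribution of the
    sample covariance matrix s = x x^t / n, evaluated on a grid of t values:
    phi(t) = (1/p) * sum_j exp(i * t * lambda_j)."""
    p, n = X.shape
    S = X @ X.T / n
    lam = np.linalg.eigvalsh(S)                    # eigenvalues of s
    return np.array([np.exp(1j * t * lam).mean() for t in t_grid])

rng = np.random.default_rng(0)
p, n = 50, 200                        # dimension comparable to the sample size
X = rng.normal(size=(p, n))           # independent rows: the "null" case
phi = esd_char_function(X, np.array([0.0, 0.5, 1.0]))
```

under dependence between the rows, the eigenvalue distribution — and hence phi — deviates from its null limit, which is what the test exploits.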
|
until recently synthetic agb models had not taken into account the break - down of the core mass - luminosity ( mc - l ) relation due to the occurrence of envelope burning in the most massive ( m > 3. 5 msun ) and luminous ( mbol > - 6 ) stars. marigo et al. ( 1998 ) made the first attempt to consistently include the related over - luminosity effect ( i. e. above the mc - l relation ) in synthetic tp - agb calculations. in this paper the reliability of the solution scheme is tested by comparison with the results of complete evolutionary calculations for a 7 msun agb star undergoing envelope burning ( e. g. bloecker & schoenberner 1991 ). indeed, the method proves to be valid as it is able to reproduce with remarkable accuracy several evolutionary features of the 7 msun star. we present extensive synthetic tp - agb calculations for stars with initial masses of 3. 5, 4. 0, 4. 5, and 5. 0 msun, and three choices of the initial metallicity, i. e. z = 0. 019, z = 0. 008, and z = 0. 004. three values of the mixing - length parameter are used, i. e. alpha = 1. 68, 2. 0, 2. 5. we investigate the dependence of envelope burning on such stellar parameters ( m, z, and alpha ). the comparison between different cases gives hints on the interplay between envelope burning over - luminosity and mass loss, and related effects on tp - agb lifetimes.
|
arxiv:astro-ph/9805312
|
we obtain time - resolved spectra of spontaneous emission and resonance fluorescence of a single multilevel emitter where two antiparallel transitions interfere and cause quantum beats. after rising as a single broad peak, the spontaneous emission spectrum turns into a doublet of subnatural peaks and then fades for long times. for strong field resonance fluorescence, the beat signature is the formation of doublet sidebands, which initially grow asymmetrically but end up symmetrical. we stress the filter bandwidth ' s crucial role in the spectral resolution and causal evolution.
|
arxiv:2412.14038
|
we study a model of a photon mode dipole - coupled to a medium of two - level oscillators in a microcavity in the presence of dephasing processes introduced by coupling to external baths. decoherence processes can be classified as pair - breaking or non - pair - breaking in analogy with magnetic or non - magnetic impurities in superconductors. in the absence of dephasing, the ground state of the model is a polariton condensate with a gap in the excitation spectrum. increase of the pair - breaking parameter $ \ gamma $ reduces the gap, which becomes zero at a critical value $ \ gamma _ { c1 } $ ; for large $ \ gamma $, the conventional laser regime is obtained in a way that demonstrates its close analogy to a gapless superconductor. in contrast, weak non - pair - breaking processes have no qualitative effect on the condensate or the existence of a gap, although they lead to inhomogeneous broadening of the excitations.
|
arxiv:cond-mat/0204271
|
despite the astonishing success of standard $\lambda$cdm cosmology, there is mounting evidence for a tension with observations at small and intermediate scales. we introduce a simple model where both cold dark matter (dm) and sterile neutrinos are charged under a new $u(1)_x$ gauge interaction. the resulting dm self-interactions resolve the tension with the observed abundances and internal density structures of dwarf galaxies. at the same time, the sterile neutrinos can account for both the small hot dm component favored by cosmological observations and the neutrino anomalies found in short-baseline experiments.
|
arxiv:1312.4947
|
due to the wide distribution and usage of digital media, an important issue is the protection of digital content. a number of algorithms and techniques have been developed for digital watermarking. in this paper, an invisible image watermarking procedure is considered. the watermark is created as a pseudo-random sequence and embedded in a certain region of the image, obtained using the haar wavelet decomposition. generally, a watermarking procedure should be robust to various attacks - filtering, noise, etc. here we consider the compressive sensing scenario, a new signal processing technique that may influence robustness. the focus of this paper is the possibility of watermark detection under a compressive sensing attack with different numbers of available image coefficients. the quality of the reconstructed images is evaluated using the peak signal-to-noise ratio (psnr). the theory is supported with experimental results.
|
arxiv:1502.01996
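a minimal version of the embedding and detection pipeline described above — a pseudo-random watermark added to a haar subband, correlation detection, and psnr evaluation — can be sketched with numpy; the image is a random stand-in and the embedding strength alpha is an illustrative choice, not the paper's setting.

```python
import numpy as np

def haar2d(img):
    """one level of the 2d haar wavelet transform: ll, lh, hl, hh subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * ll.shape[0], a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def psnr(orig, other):
    """peak signal-to-noise ratio for 8-bit images, in db."""
    mse = np.mean((orig - other) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in for a real image
w = rng.choice([-1.0, 1.0], size=(32, 32))               # pseudo-random watermark
alpha = 5.0                                              # embedding strength (invisibility vs robustness)

ll, lh, hl, hh = haar2d(img)
marked = ihaar2d(ll, lh + alpha * w, hl, hh)             # embed in the lh subband

# detection: correlate the lh subband of the marked image with the watermark
corr = np.mean(haar2d(marked)[1] * w)
```

a compressive sensing attack would reconstruct the image from a subset of coefficients before this detection step; the question studied above is how large that subset must be for corr to remain significant.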
|
the d-dimensional n-spin facilitated kinetic ising model is studied analytically starting from the usual master equations and their transformation into a fock-space representation. the evolution of relevant operators is rewritten in terms of a projection formalism. the obtained frequency matrices and memory terms are analyzed. in particular, the influence of the memory terms is approached by using standard techniques of the usual mode-coupling approach. the temperature dependence of the relaxation times related to the n-spin facilitated kinetic ising model shows a weak non-arrhenius behavior. furthermore, a characteristic stretched decay of the correlation function is obtained.
|
arxiv:cond-mat/0004445
|
existing models encounter bottlenecks in balancing performance and computational efficiency when modeling long sequences. although the state space model ( ssm ) has achieved remarkable success in handling long sequence tasks, it still faces the problem of a large number of parameters. in order to further improve the efficiency of ssm, we propose a new state space layer based on multiple - input multiple - output ssm, called efficient ssm ( essm ). our essm is built on the convolutional representation of multiple - input multiple - output ( mimo ) ssm. we propose a variety of effective strategies to improve the computational efficiency. the diagonalization of the system matrix first decouples the original system. then a fast tensor convolution is proposed based on the fast fourier transform. in addition, the block diagonalization of the ssm further reduces the model parameters and improves the model flexibility. extensive experimental results show that the performance of the proposed model on multiple databases matches the performance of state - of - the - art models, such as s4, and is significantly better than transformers and lstm. in the model efficiency benchmark, the parameters of essm are only 12. 89 \ % of lstm and 13. 24 \ % of mamba. the training speed of essm is 3. 94 times faster than lstm and 1. 35 times faster than mamba. code is available at : \ href { https : / / github. com / leonty1 / essm } { https : / / github. com / leonty1 / essm }.
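a toy sketch ( not the paper's essm ; a scalar single - input single - output case with made - up coefficients ) of why diagonalization helps : with a diagonal state matrix, the ssm impulse - response kernel decouples into a sum of geometric sequences, and applying that kernel as a causal convolution reproduces the recurrence exactly :

```python
# toy convolutional view of a discrete ssm:
#   x_{t+1} = A x_t + B u_t,  y_t = C x_{t+1}
# with diagonal A = diag(a_j), the kernel decouples: K_k = sum_j c_j * a_j**k * b_j

def ssm_kernel(a, b, c, length):
    """impulse-response kernel of a diagonal single-input single-output ssm."""
    return [sum(cj * (aj ** k) * bj for aj, bj, cj in zip(a, b, c))
            for k in range(length)]

def causal_conv(kernel, u):
    """y_t = sum_{k<=t} K_k u_{t-k} (naive causal convolution)."""
    return [sum(kernel[k] * u[t - k] for k in range(t + 1)) for t in range(len(u))]

def ssm_recurrence(a, b, c, u):
    """reference: run the state-space recurrence step by step."""
    x = [0.0] * len(a)
    y = []
    for ut in u:
        x = [aj * xj + bj * ut for aj, bj, xj in zip(a, b, x)]
        y.append(sum(cj * xj for cj, xj in zip(c, x)))
    return y

a, b, c = [0.9, -0.5], [1.0, 0.3], [0.4, 1.1]  # made-up diagonal system
u = [1.0, 0.0, 2.0, -1.0, 0.5]
y_conv = causal_conv(ssm_kernel(a, b, c, len(u)), u)
y_rec = ssm_recurrence(a, b, c, u)
```

in practice the convolution is evaluated with the fast fourier transform rather than this naive loop, which is where the speedup claimed above comes from.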
|
arxiv:2402.15290
|
the rapid development of diagnostic technologies in healthcare is leading to higher requirements for physicians to handle and integrate the heterogeneous, yet complementary data that are produced during routine practice. for instance, the personalized diagnosis and treatment planning for a single cancer patient relies on the various images ( e. g., radiological, pathological, and camera images ) and non - image data ( e. g., clinical data and genomic data ). however, such decision - making procedures can be subjective, qualitative, and have large inter - subject variabilities. with the recent advances in multi - modal deep learning technologies, an increasingly large number of efforts have been devoted to a key question : how do we extract and aggregate multi - modal information to ultimately provide more objective, quantitative computer - aided clinical decision making? this paper reviews the recent studies on dealing with such a question. briefly, this review will include the ( 1 ) overview of current multi - modal learning workflows, ( 2 ) summarization of multi - modal fusion methods, ( 3 ) discussion of the performance, ( 4 ) applications in disease diagnosis and prognosis, and ( 5 ) challenges and future directions.
|
arxiv:2203.15588
|
we present an experiment to characterize our new linear ion trap designed for the operation of a many - ion optical clock using 115 - in ^ + as clock ions. for the characterization of the trap as well as the sympathetic cooling of the clock ions we use 172 - yb ^ +. the trap design has been derived from finite element method ( fem ) calculations and a first prototype based on glass - reinforced thermoset laminates was built. this paper details the trap manufacturing process and micromotion measurement. excess micromotion is measured using photon - correlation spectroscopy with a resolution of 1. 1nm in motional amplitude, and residual axial rf fields in this trap are compared to fem calculations. with this method, we demonstrate a sensitivity to systematic clock shifts due to excess micromotion of | ( { \ delta } { \ nu } / { \ nu } ) | = 8. 5x10 ^ - 20. based on the measurement of axial rf fields of our trap, we estimate a number of twelve ions that can be stored per trapping segment and used as an optical frequency standard with a fractional inaccuracy of \ leq 1x10 ^ - 18 due to micromotion.
|
arxiv:1206.5111
|
diffusion models are mainly studied on image data. however, non - image data ( e. g., tabular data ) are also prevalent in real applications and tend to be noisy due to some inevitable factors in the stage of data collection, degrading the generation quality of diffusion models. in this paper, we consider a novel problem setting where every collected sample is paired with a vector indicating the data quality : risk vector. this setting applies to many scenarios involving noisy data and we propose risk - sensitive sde, a type of stochastic differential equation ( sde ) parameterized by the risk vector, to address it. with some proper coefficients, risk - sensitive sde can minimize the negative effect of noisy samples on the optimization of diffusion models. we conduct systematic studies for both gaussian and non - gaussian noise distributions, providing analytical forms of risk - sensitive sde. to verify the effectiveness of our method, we have conducted extensive experiments on multiple tabular and time - series datasets, showing that risk - sensitive sde permits a robust optimization of diffusion models with noisy samples and significantly outperforms previous baselines.
|
arxiv:2402.02081
|
we present a novel benchmark and associated evaluation metrics for assessing the performance of text anonymization methods. text anonymization, defined as the task of editing a text document to prevent the disclosure of personal information, currently suffers from a shortage of privacy - oriented annotated text resources, making it difficult to properly evaluate the level of privacy protection offered by various anonymization methods. this paper presents tab ( text anonymization benchmark ), a new, open - source annotated corpus developed to address this shortage. the corpus comprises 1, 268 english - language court cases from the european court of human rights ( echr ) enriched with comprehensive annotations about the personal information appearing in each document, including their semantic category, identifier type, confidential attributes, and co - reference relations. compared to previous work, the tab corpus is designed to go beyond traditional de - identification ( which is limited to the detection of predefined semantic categories ), and explicitly marks which text spans ought to be masked in order to conceal the identity of the person to be protected. along with presenting the corpus and its annotation layers, we also propose a set of evaluation metrics that are specifically tailored towards measuring the performance of text anonymization, both in terms of privacy protection and utility preservation. we illustrate the use of the benchmark and the proposed metrics by assessing the empirical performance of several baseline text anonymization models. the full corpus along with its privacy - oriented annotation guidelines, evaluation scripts and baseline models are available on : https : / / github. com / norskregnesentral / text - anonymisation - benchmark
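as a toy sketch of what privacy / utility metrics for masking could look like ( these definitions are illustrative, not the metrics proposed in the paper ; spans are half - open token - index intervals ) :

```python
# illustrative anonymization metrics: span-level privacy recall (fraction of
# annotated sensitive spans fully covered by a mask) and a crude utility proxy
# (fraction of tokens left unmasked). these are NOT the tab paper's metrics.

def privacy_recall(sensitive_spans, masked_spans):
    """a sensitive span counts as protected only if some masked span contains it."""
    covered = 0
    for s_start, s_end in sensitive_spans:
        if any(m_start <= s_start and s_end <= m_end
               for m_start, m_end in masked_spans):
            covered += 1
    return covered / len(sensitive_spans) if sensitive_spans else 1.0

def utility_retained(num_tokens, masked_spans):
    """share of the document's tokens that survive masking."""
    masked = sum(end - start for start, end in masked_spans)
    return 1.0 - masked / num_tokens

sens = [(0, 2), (10, 12), (30, 33)]   # annotated sensitive spans
masked = [(0, 2), (9, 13)]            # spans an anonymizer chose to mask
print(privacy_recall(sens, masked), utility_retained(100, masked))
```

the tension the paper measures is visible even in this toy : masking more spans raises privacy recall but lowers the retained utility.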
|
arxiv:2202.00443
|
this paper presents a conforming finite element semi - discretization of the streamfunction form of the one - layer unsteady quasi - geostrophic equations, which are a commonly used model for large - scale wind - driven ocean circulation. we derive optimal error estimates and present numerical results.
|
arxiv:1405.7836
|
higher criticism, or second - level significance testing, is a multiple - comparisons concept mentioned in passing by tukey. it concerns a situation where there are many independent tests of significance and one is interested in rejecting the joint null hypothesis. tukey suggested comparing the fraction of observed significances at a given \ alpha - level to the expected fraction under the joint null. in fact, he suggested standardizing the difference of the two quantities and forming a z - score ; the resulting z - score tests the significance of the body of significance tests. we consider a generalization, where we maximize this z - score over a range of significance levels 0 < \ alpha \ leq \ alpha _ 0. we are able to show that the resulting higher criticism statistic is effective at resolving a very subtle testing problem : testing whether n normal means are all zero versus the alternative that a small fraction is nonzero. the subtlety of this ` ` sparse normal means ' ' testing problem can be seen from work of ingster and jin, who studied such problems in great detail. in their studies, they identified an interesting range of cases where the small fraction of nonzero means is so small that the alternative hypothesis exhibits little noticeable effect on the distribution of the p - values either for the bulk of the tests or for the few most highly significant tests. in this range, when the amplitude of nonzero means is calibrated with the fraction of nonzero means, the likelihood ratio test for a precisely specified alternative would still succeed in separating the two hypotheses.
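a sketch of the maximized statistic described above, computed from a list of p - values ( the example p - values are synthetic ) :

```python
import math

def higher_criticism(pvalues, alpha0=0.5):
    """hc statistic: maximize the standardized z-score comparing the observed
    fraction of significances i/n to the expected fraction p_(i), over the
    smallest alpha0 fraction of the sorted p-values."""
    n = len(pvalues)
    p = sorted(pvalues)
    scores = []
    for i in range(1, int(alpha0 * n) + 1):
        pi = p[i - 1]
        if 0.0 < pi < 1.0:
            scores.append(math.sqrt(n) * (i / n - pi) / math.sqrt(pi * (1.0 - pi)))
    return max(scores)

# synthetic example with 100 tests: either near-uniform p-values (joint null)
# or a small fraction of highly significant tests (sparse alternative)
p_null = [(i + 0.5) / 100 for i in range(100)]
p_alt = [1e-4] * 5 + [(i + 0.5) / 100 for i in range(95)]
```

under the null the statistic stays small, while a handful of very small p - values makes it large, which is exactly the sparse - alternative sensitivity discussed above.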
|
arxiv:math/0410072
|
in this paper, we introduce a nonlinear optimization problem whose objective function is the convex log - sum - exp function and the feasible region is defined as a system of fuzzy relational inequalities ( fri ) defined by the lukasiewicz t - norm. some necessary and sufficient conditions are derived to determine the feasibility of the problem. the feasible solution set is characterized in terms of a finite number of closed convex cells. since the feasible solution set of fris is non - convex, conventional methods may not be directly employed. an algorithm is presented for solving this nonlinear problem. it is proved that the algorithm can find the exact optimal solution and an example is presented to illustrate the proposed algorithm.
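the log - sum - exp objective itself can be evaluated stably with the standard max - shift trick, independent of the fri feasibility machinery in the paper ; a minimal sketch :

```python
import math

def log_sum_exp(x):
    """numerically stable log(sum_i exp(x_i)) via the max-shift trick:
    lse(x) = m + log(sum_i exp(x_i - m)), with m = max(x)."""
    m = max(x)
    return m + math.log(sum(math.exp(xi - m) for xi in x))

# the naive form math.log(sum(math.exp(xi))) overflows for inputs like these;
# the shifted form does not
big = log_sum_exp([1000.0, 1000.0])  # equals 1000 + log 2
```

convexity of this function is what makes the objective well behaved on each of the closed convex cells mentioned above.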
|
arxiv:2206.09716
|
the remarkable performance of deep learning models and their applications in consequential domains ( e. g., facial recognition ) introduces important challenges at the intersection of equity and security. fairness and robustness are two desired notions often required in learning models. fairness ensures that models do not disproportionately harm ( or benefit ) some groups over others, while robustness measures the models ' resilience against small input perturbations. this paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases the model robustness to adversarial samples. the reported analysis sheds light on the factors causing such contrasting behavior, identifying the distance to the decision boundary across groups as a key explainer for this behavior. extensive experiments on non - linear models and different architectures validate the theoretical findings in multiple vision domains. finally, the paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
|
arxiv:2211.11835
|
we present the first combined study of the recently discovered source igr j16283 - 4838 with swift, integral, and rxte. the source, discovered by integral on april 7, 2005, shows a highly absorbed ( variable n _ h = 0. 4 - 1. 7 x 1e23 1 / cm * * 2 ) and flat ( photon index = 1 ) spectrum in the swift / xrt and rxte / pca data. no optical counterpart is detectable ( v > 20 mag ), but a possible infrared counterpart within the swift / xrt error radius is detected in the 2mass and spitzer / glimpse survey. the observations suggest that igr j16283 - 4838 is a high mass x - ray binary containing a neutron star embedded in compton thick material. this makes igr j16283 - 4838 a member of the class of highly absorbed hmxbs, discovered by integral.
|
arxiv:astro-ph/0506170
|
jump is an algebraic modeling language embedded in the julia programming language. jump allows users to model optimization problems of a variety of kinds, including linear programming, integer programming, conic optimization, semidefinite programming, and nonlinear programming, and handles the low - level details of communicating with solvers. after nearly 10 years in development, jump 1. 0 was released in march, 2022. in this short communication, we highlight the improvements to jump from recent releases up to and including 1. 0.
|
arxiv:2206.03866
|
in this paper, we give a faster width - dependent algorithm for mixed packing - covering lps. mixed packing - covering lps are fundamental to combinatorial optimization in computer science and operations research. our algorithm finds a $ 1 + \ eps $ approximate solution in time $ o ( N w / \ eps ) $, where $ N $ is the number of nonzero entries in the constraint matrix and $ w $ is the maximum number of nonzeros in any constraint. this run - time is better than nesterov ' s smoothing algorithm, which requires $ o ( N \ sqrt { n } w / \ eps ) $, where $ n $ is the dimension of the problem. our work utilizes the framework of area convexity introduced in [ sherman - focs ' 17 ] to obtain the best dependence on $ \ eps $ while breaking the infamous $ \ ell _ { \ infty } $ barrier to eliminate the factor of $ \ sqrt { n } $. the current best width - independent algorithm for this problem runs in time $ o ( N / \ eps ^ 2 ) $ [ young - arxiv - 14 ] and hence has worse running time dependence on $ \ eps $. many real life instances of the mixed packing - covering problems exhibit small width and for such cases, our algorithm can report higher precision results when compared to width - independent algorithms. as a special case of our result, we report a $ 1 + \ eps $ approximation algorithm for the densest subgraph problem which runs in time $ o ( md / \ eps ) $, where $ m $ is the number of edges in the graph and $ d $ is the maximum graph degree.
|
arxiv:1909.12387
|
we compute exactly both the spectral function of the electron and of the small polaron for the two site holstein model. we find that for intermediate coupling, the small polaron is a better fundamental excitation of the system than the electron. however, the lang - firsov approximation fails to predict the right dispersion relation for the small polaron.
|
arxiv:cond-mat/9704237
|
we briefly review the status of three - family grand unified string models.
|
arxiv:hep-th/9807240
|
using the observation that configurations of n polymers with hard core interactions on a closed random surface correspond to random surfaces with n boundary components we calculate the free energy of a gas of polymers interacting with fully quantized two - dimensional gravity. we derive the equation of state for the polymer gas and find that all the virial coefficients beyond the second one vanish identically.
|
arxiv:hep-th/9801130
|
deep neural networks have been demonstrated to achieve phenomenal success in many domains, and yet their inner mechanisms are not well understood. in this paper, we investigate the curvature of image manifolds, i. e., the manifold deviation from being flat in its principal directions. we find that state - of - the - art trained convolutional neural networks for image classification have a characteristic curvature profile along layers : an initial steep increase, followed by a long phase of a plateau, and followed by another increase. in contrast, this behavior does not appear in untrained networks in which the curvature flattens. we also show that the curvature gap between the last two layers has a strong correlation with the generalization capability of the network. moreover, we find that the intrinsic dimension of latent codes is not necessarily indicative of curvature. finally, we observe that common regularization methods such as mixup yield flatter representations when compared to other methods. our experiments show consistent results over a variety of deep learning architectures and multiple data sets. our code is publicly available at https : / / github. com / azencot - group / crlm
|
arxiv:2305.19730
|
we briefly describe the construction of a renormalizable gauge model based on the nonlocal gauge invariant mass operator $ f \ frac { 1 } { d ^ 2 } f $. we also take a look at the unitarity of the resulting model.
|
arxiv:0710.1524
|
we give a construction that takes a simple linear algebraic group $ g $ over a field and produces a commutative, unital, and simple non - associative algebra $ a $ over that field. two attractions of this construction are that ( 1 ) when $ g $ has type $ e _ 8 $, the algebra $ a $ is obtained by adjoining a unit to the 3875 - dimensional representation and ( 2 ) it is effective, in that the product operation on $ a $ can be implemented on a computer. a description of the algebra in the $ e _ 8 $ case has been requested for some time, and interest has been increased by the recent proof that $ e _ 8 $ is the full automorphism group of that algebra. the algebras obtained by our construction have an unusual peirce spectrum.
|
arxiv:2005.07618
|
in 2020, i designed the course cmsc 20630 / 30630 human - robot interaction : research and practice as a hands - on introduction to human - robot interaction ( hri ) research for both undergraduate and graduate students at the university of chicago. since 2020, i have taught and refined this course each academic year. human - robot interaction : research and practice focuses on the core concepts and cutting - edge research in the field of human - robot interaction ( hri ), covering topics that include : nonverbal robot behavior, verbal robot behavior, social dynamics, norms & ethics, collaboration & learning, group interactions, applications, and future challenges of hri. course meetings involve students in the class leading discussions about cutting - edge peer - reviewed research hri publications. students also participate in a quarter - long collaborative research project, where they pursue an hri research question that often involves conducting their own human - subjects research study where they recruit human subjects to interact with a robot. in this paper, i detail the structure of the course and its learning goals as well as my reflections and student feedback on the course.
|
arxiv:2403.18692
|
auguste comte, in particular, illustrated with his work the transition from a theological to a metaphysical stage and, from this, to a positive stage. comte took care of the classification of the sciences as well as a transition of humanity towards a situation of progress attributable to a re - examination of nature according to the affirmation of ' sociality ' as the basis of the scientifically interpreted society. = = = romanticism = = = the romantic movement of the early 19th century reshaped science by opening up new pursuits unexpected in the classical approaches of the enlightenment. the decline of romanticism occurred because a new movement, positivism, began to take hold of the ideals of the intellectuals after 1840 and lasted until about 1880. at the same time, the romantic reaction to the enlightenment produced thinkers such as johann gottfried herder and later wilhelm dilthey whose work formed the basis for the culture concept which is central to the discipline. traditionally, much of the history of the subject was based on colonial encounters between western europe and the rest of the world, and much of 18th - and 19th - century anthropology is now classed as scientific racism. during the late 19th century, battles over the " study of man " took place between those of an " anthropological " persuasion ( relying on anthropometrical techniques ) and those of an " ethnological " persuasion ( looking at cultures and traditions ), and these distinctions became part of the later divide between physical anthropology and cultural anthropology, the latter ushered in by the students of franz boas. = = 20th century = = science advanced dramatically during the 20th century. there were new and radical developments in the physical and life sciences, building on the progress from the 19th century. = = = theory of relativity and quantum mechanics = = = the beginning of the 20th century brought the start of a revolution in physics.
the long - held theories of newton were shown not to be correct in all circumstances. beginning in 1900, max planck, albert einstein, niels bohr and others developed quantum theories to explain various anomalous experimental results, by introducing discrete energy levels. not only did quantum mechanics show that the laws of motion did not hold on small scales, but the theory of general relativity, proposed by einstein in 1915, showed that the fixed background of spacetime, on which both newtonian mechanics and special relativity depended, could not exist. in 1925, werner heisenberg and erwin schrodinger formulated quantum mechanics, which explained the preceding quantum theories. currently, general relativity and quantum mechanics
|
https://en.wikipedia.org/wiki/History_of_science
|
the brain is in a state of perpetual reverberant neural activity, even in the absence of specific tasks or stimuli. shedding light on the origin and functional significance of such a dynamical state is essential to understanding how the brain transmits, processes, and stores information. an inspiring, albeit controversial, conjecture proposes that some statistical characteristics of empirically observed neuronal activity can be understood by assuming that brain networks operate in a dynamical regime near the edge of a phase transition. moreover, the resulting critical behavior, with its concomitant scale invariance, is assumed to carry crucial functional advantages. here, we present a data - driven analysis based on simultaneous high - throughput recordings of the activity of thousands of individual neurons in various regions of the mouse brain. to analyze these data, we synergistically combine cutting - edge methods for the study of brain activity ( such as a phenomenological renormalization group approach and techniques that infer the general dynamical state of a neural population ), while designing complementary tools. this strategy allows us to uncover strong signatures of scale invariance that is " quasi - universal " across brain regions and reveal that all these areas operate, to a greater or lesser extent, near the edge of instability. furthermore, this framework allows us to distinguish between quasi - universal background activity and non - universal input - related activity. taken together, this study provides strong evidence that brain networks actually operate in a critical regime which, among other functional advantages, provides them with a scale - invariant substrate of activity covariances that can sustain optimal input representations.
|
arxiv:2111.12067
|
in this paper, we present a bit stream feature based energy model that accurately estimates the energy required to decode a given hevc - coded bit stream. to this end, we take a model from the literature and extend it by explicitly modeling the in - loop filters, which has not been done before. furthermore, to prove its superior estimation performance, it is compared to seven different energy models from the literature. by using a unified evaluation framework we show how accurately the required decoding energy for different decoding systems can be approximated. we give thorough explanations of the model parameters and explain how the model variables are derived. to show the modeling capabilities in general, we test the estimation performance for different decoding software and hardware solutions, where we find that the proposed model outperforms the models from the literature by reaching frame - wise mean estimation errors of less than 7 % for software and less than 15 % for hardware based systems.
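a hypothetical sketch of such a feature - based linear energy model ( the feature names and per - feature energy costs below are invented for illustration and are not the paper's fitted values ) :

```python
# sketch of a bit-stream-feature energy model: estimated decoding energy
# = sum over features of (feature count in the bit stream) x (energy per use).
# all names and coefficients are made up for illustration.

def estimate_decoding_energy(feature_counts, energy_per_feature):
    """linear feature-based estimate of the energy needed to decode a frame."""
    return sum(feature_counts.get(f, 0) * e for f, e in energy_per_feature.items())

coeffs = {"intra_blocks": 2.0e-6, "inter_blocks": 3.5e-6,
          "deblocking_edges": 0.8e-6, "sao_applications": 1.1e-6}  # joules per use
frame = {"intra_blocks": 120, "inter_blocks": 900,
         "deblocking_edges": 4000, "sao_applications": 350}
energy = estimate_decoding_energy(frame, coeffs)
```

in such models the coefficients would be fitted per decoding system ( software or hardware ), which is why the estimation errors quoted above differ between the two.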
|
arxiv:2203.00466
|
we provide schauder estimates for nonlinear beltrami equations and lower bounds of the jacobians for homeomorphic solutions. the results were announced in arxiv : 1412. 4046 but here we give detailed proofs.
|
arxiv:1511.08370
|
graphene and few - layer graphene at high bias expose a wealth of phenomena due to the high temperatures reached. with in - situ transmission electron microscopy ( tem ) we observe directly how the current modifies the structure, and vice versa. in some samples, cracks propagate from the edges of the flakes, leading to the formation of narrow constrictions or to nanometer spaced gaps after breakdown. in other samples we find layer - by - layer evaporation of few - layer graphene, which could be exploited for the controlled production of single layer graphene from multi - layered samples. surprisingly, we even find that two pieces of graphene that overlap can heal out at high bias and form one continuous sheet. these findings open up new avenues to structure graphene for specific device applications.
|
arxiv:1203.3219
|
we analyze the stability of the einstein static universe by considering homogeneous perturbations in the context of f ( g ) modified gauss - bonnet theories of gravity. by considering a generic form of f ( g ), the stability region of the einstein static universe is parameterized by the linear equation of state parameter w = p / rho and the second derivative f " ( g ) of the gauss - bonnet term.
|
arxiv:0902.2982
|
layered ruthenates are prototype materials with strong structure - property correlations. we report the structural and physical properties of double - layered perovskite $ \ rm sr _ 3 ( ru _ { 1 - x } mn _ x ) _ 2 o _ 7 $ single crystals with $ 0 \ le x \ le 0. 7 $. single crystal x - ray diffraction refinements reveal that mn doping on the ru site leads to the shrinkage of unit - cell volume and disappearance of ( ru / mn ) o $ _ 6 $ octahedron rotation when x > 0. 16, while the crystal structure remains tetragonal. correspondingly, the electric and magnetic properties change with x. the electrical resistivity reveals metallic character ( $ d \ rho / dt > 0 $ ) at high temperatures but insulating behavior ( $ d \ rho / dt < 0 $ ) below a characteristic temperature t _ mit. interestingly, t _ mit is different from t _ m, at which magnetic susceptibility reaches maximum. t _ mit monotonically increases with increasing x while t _ m shows non - monotonic dependence with x. the difference between t _ mit and t _ m ( t _ mit > t _ m ) becomes larger when x > 0. 16. the constructed phase diagram consists of five distinct regions, demonstrating that the physical properties of such a system can easily be tuned by chemical doping.
|
arxiv:1108.0392
|
the zwicky transient facility ( ztf ) is a powerful time domain survey facility with a large field of view. we apply the synthetic tracking technique to integrate a ztf long - dwell dataset, which consists of 133 nominal 30 - second exposure frames spanning about 1. 5 hours, to search for slowly moving asteroids down to approximately 23rd magnitude. we found more than one thousand objects from searching 40 ccd - quadrant subfields, each of which covers a field size of $ \ sim $ 0. 73 deg $ ^ 2 $. while most of the objects are main belt asteroids, there are asteroids belonging to families of trojan, hilda, hungaria, phocaea, and near - earth asteroids. such an approach is effective and productive. here we report the data processing and results.
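a one - dimensional toy version of synthetic tracking ( real processing works on 2 - d frames with sub - pixel shifts ; the amplitudes and velocity here are made up ) : a faint source moving between frames stacks coherently only when the frames are shifted along the correct trial velocity before co - adding :

```python
# toy 1-d shift-and-add: co-add frames after shifting frame t by velocity*t
# pixels, so a source moving at that velocity piles up in one pixel.

def shift_and_add(frames, velocity):
    """stack frames along a trial velocity (pixels per frame)."""
    n = len(frames[0])
    stacked = [0.0] * n
    for t, frame in enumerate(frames):
        shift = int(round(velocity * t))
        for x in range(n):
            src = x + shift
            if 0 <= src < n:
                stacked[x] += frame[src]
    return stacked

# a unit-amplitude source moving 2 pixels per frame, starting at pixel 3
frames = []
for t in range(5):
    f = [0.0] * 30
    f[3 + 2 * t] = 1.0
    frames.append(f)

stacked = shift_and_add(frames, 2.0)
print(max(stacked), stacked.index(max(stacked)))  # → 5.0 3
```

with the correct trial velocity the signal grows linearly with the number of frames, while a wrong velocity ( e. g. 0 ) leaves it at the single - frame level, which is how faint slow movers are recovered from the 133 - frame dwell.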
|
arxiv:1907.11299
|
we measure the r - band galaxy luminosity function ( lf ) across environments over the redshift range 0 < $ z $ < 0. 107 using the sdss. we divide our sample into galaxies residing in large scale voids ( void galaxies ) and those residing in denser regions ( wall galaxies ). the best fitting schechter parameters for void galaxies are : log $ \ phi ^ * $ = - 3. 40 $ \ pm $ 0. 03 log ( mpc $ ^ { - 3 } $ ), $ m ^ * $ = - 19. 88 $ \ pm $ 0. 05, and $ \ alpha $ = - 1. 20 $ \ pm $ 0. 02. for wall galaxies, the best fitting parameters are : log $ \ phi ^ * $ = - 2. 86 $ \ pm $ 0. 02 log ( mpc $ ^ { - 3 } $ ), $ m ^ * $ = - 20. 80 $ \ pm $ 0. 03, and $ \ alpha $ = - 1. 16 $ \ pm $ 0. 01. we find a shift in the characteristic magnitude, $ m ^ * $, towards fainter magnitudes for void galaxies and find no significant difference between the faint - end slopes of the void and wall galaxy lfs. we investigate how low surface brightness selection effects can affect the galaxy lf. to attempt to examine a sample of galaxies that is relatively free of surface brightness selection effects, we compute the optical galaxy lf of galaxies detected by the blind hi survey, alfalfa. we find that the global lf of the alfalfa sample is not well fit by a schechter function, because of the presence of a wide dip in the lf around $ m _ r $ = - 18 and an upturn at fainter magnitudes ( $ \ alpha $ ~ - 1. 47 ). we compare the hi selected r - band lf to various lfs of optically selected populations to determine where the hi selected optical lf obtains its shape. we find that sample selection plays a large role in determining the shape of the lf.
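the quoted schechter fits can be evaluated directly ; a minimal sketch of the magnitude form of the schechter function with the best - fit parameters above :

```python
import math

# schechter luminosity function in absolute-magnitude form; phi(M) dM gives
# the number density of galaxies per unit magnitude:
#   phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x),  x = 10^(-0.4 (M - M*))

def schechter_mag(M, log_phi_star, M_star, alpha):
    x = 10.0 ** (-0.4 * (M - M_star))
    return (0.4 * math.log(10.0) * (10.0 ** log_phi_star)
            * x ** (alpha + 1.0) * math.exp(-x))

void_lf = schechter_mag(-20.0, -3.40, -19.88, -1.20)  # void-galaxy fit
wall_lf = schechter_mag(-20.0, -2.86, -20.80, -1.16)  # wall-galaxy fit
```

at $ m = m ^ * $ the function reduces to 0. 4 ln ( 10 ) $ \ phi ^ * e ^ { - 1 } $, which is a handy sanity check on any implementation.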
|
arxiv:1508.04199
|
automatic generation of a high - quality video from a single image remains a challenging task despite the recent advances in deep generative models. this paper proposes a method that can create a high - resolution, long - term animation using convolutional neural networks ( cnns ) from a single landscape image where we mainly focus on skies and waters. our key observation is that the motion ( e. g., moving clouds ) and appearance ( e. g., time - varying colors in the sky ) in natural scenes have different time scales. we thus learn them separately and predict them with decoupled control while handling future uncertainty in both predictions by introducing latent codes. unlike previous methods that infer output frames directly, our cnns predict spatially - smooth intermediate data, i. e., for motion, flow fields for warping, and for appearance, color transfer maps, via self - supervised learning, i. e., without explicitly - provided ground truth. these intermediate data are applied not to each previous output frame, but to the input image only once for each output frame. this design is crucial to alleviate error accumulation in long - term predictions, which is the essential problem in previous recurrent approaches. the output frames can be looped like cinemagraph, and also be controlled directly by specifying latent codes or indirectly via visual annotations. we demonstrate the effectiveness of our method through comparisons with state - of - the - art methods on video prediction as well as appearance manipulation.
|
arxiv:1910.07192
|
order 10. gary mcguire proved a minimum uniquely solvable sudoku requires 17 clues. symbolic validation (via computer algebra) of conjectures to motivate the search for an analytical proof. solutions to a special case of the quantum three-body problem known as the hydrogen molecule-ion were found using standard quantum chemistry basis sets before realizing they all lead to the same unique analytical solution in terms of a generalization of the lambert w function. related to this work is the isolation of a previously unknown link between gravity theory and quantum mechanics in lower dimensions (see quantum gravity and references therein). in the realm of relativistic many-bodied mechanics, namely the time-symmetric wheeler–feynman absorber theory: the equivalence between an advanced liénard–wiechert potential of particle j acting on particle i and the corresponding potential for particle i acting on particle j was demonstrated exhaustively to order $1/c^{10}$ before being proved mathematically. the wheeler–feynman theory has regained interest because of quantum nonlocality. in the realm of linear optics, verification of the series expansion of the envelope of the electric field for ultrashort light pulses travelling in non-isotropic media. previous expansions had been incomplete: the outcome revealed an extra term vindicated by experiment. evaluation of infinite series, infinite products and integrals (also see symbolic integration), typically by carrying out a high-precision numerical calculation, and then using an integer relation algorithm (such as the inverse symbolic calculator) to find a linear combination of mathematical constants that matches this value. for example, the following identity was rediscovered by enrico au-yeung, a student of jonathan borwein, using computer search and the pslq algorithm in 1993:
$$\sum_{k=1}^{\infty} \frac{1}{k^2}\left(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{k}\right)^2 = \frac{17\pi^4}{360}.$$
visual investigations in
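In the spirit of the experimental-mathematics workflow described above, the rediscovered identity can be checked numerically in a few lines; the truncation point below is an arbitrary illustrative choice.

```python
import math

def euler_sum(n_terms):
    """Partial sum of sum_{k>=1} (1/k^2) * H_k^2,
    where H_k = 1 + 1/2 + ... + 1/k is the k-th harmonic number."""
    total, h = 0.0, 0.0
    for k in range(1, n_terms + 1):
        h += 1.0 / k              # running harmonic number H_k
        total += (h * h) / (k * k)
    return total

closed_form = 17 * math.pi ** 4 / 360
print(euler_sum(200_000), closed_form)  # partial sum approaches 17*pi^4/360 ≈ 4.59987
```

A high-precision value of this kind is exactly what an integer relation algorithm such as PSLQ would match against combinations of constants like π⁴.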
|
https://en.wikipedia.org/wiki/Experimental_mathematics
|
it was recently pointed out that so-called "superhydrides", hydrogen-rich materials that appear to become superconducting at high temperatures and pressures, exhibit physical properties that are different from both conventional and unconventional standard type i and type ii superconductors [1, 2]. here we consider magnetic field expulsion in the first material in this class, discovered in 2015: sulfur hydride [3]. a nuclear resonant scattering experiment has been interpreted as a demonstration that the meissner effect takes place in this material [4, 5]. here we point out that the observed effect, under the assumption that the system is in thermodynamic equilibrium, implies a meissner pressure [6] in this material that is {\it much larger} than that of standard superconductors. this suggests that hydride superconductors are qualitatively different from the known standard superconductors {\it if} they are superconductors.
|
arxiv:2101.01701
|
engineering topological quantum order has become a major field of physics. many advances have been made by synthesizing gauge fields in cold atomic systems. here, we carry over these developments to other platforms which are extremely well suited for quantum engineering, namely trapped ions and nano-trapped atoms. since these systems are typically one-dimensional, the action of artificial magnetic fields has so far received little attention. however, exploiting the long-range nature of interactions, loops with non-vanishing magnetic fluxes become possible even in one-dimensional settings. this gives rise to intriguing phenomena, such as fractal energy spectra, flat bands with localized edge states, and topological many-body states. we elaborate on a simple scheme for generating the required artificial fluxes by periodically driving an xy spin chain. concrete estimates demonstrating the experimental feasibility for trapped ions and atoms in waveguides are given.
|
arxiv:1412.6059
|
the european north sea has a vast renewable energy potential and can be a powerhouse for europe's energy transition. however, currently there is uncertainty about how much offshore wind energy can be integrated, whether offshore grids should be meshed, and to what extent offshore hydrogen should play a role. to address these questions, we use the open-source energy system optimization model pypsa-eur to model a european carbon-neutral sector-coupled energy system in high spatial and temporal resolution. we let the model endogenously decide how much offshore wind is deployed and which infrastructure is used to integrate it. we find that with point-to-point connections like we have today, 310 gw of offshore wind can be integrated in the north sea. however, if we allow meshed networks and hydrogen, this can be raised to 420 gw, with cost savings of up to 15 billion euros per year. furthermore, we only observe significant amounts (up to 75 gw) of floating wind turbines in the north sea if we have offshore hydrogen production. generally, the model opts for offshore wind integration through a mix of both electricity and hydrogen infrastructure. however, the bulk of the offshore energy is transported as hydrogen, twice as much as the amount transported as electricity. moreover, we find that the offshore power network is mainly used for offshore wind integration, with only a small portion used for inter-country transmission.
|
arxiv:2404.09721
|
let $m$ be a positive integer and $q \in (1, m+1]$. a $q$-expansion of a real number $x$ is a sequence $(c_i) = c_1 c_2 \cdots$ with $c_i \in \{0, 1, \ldots, m\}$ such that $x = \sum_{i=1}^{\infty} c_i q^{-i}$. in this paper we study the set $\mathcal{U}_q^j$ consisting of those real numbers having exactly $j$ $q$-expansions. our main result is that for lebesgue almost every $q \in (q_{KL}, m+1)$, we have $$\dim_H \mathcal{U}_q^j \leq \max\{0, 2\dim_H \mathcal{U}_q - 1\} \quad \text{for all } j \in \{2, 3, \ldots\}.$$ here $q_{KL}$ is the komornik-loreti constant. as a corollary of this result, we show that for any $j \in \{2, 3, \ldots\}$, the function mapping $q$ to $\dim_H \mathcal{U}_q^j$ is not continuous.
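For intuition, one particular q-expansion of a number can be produced by the classical greedy algorithm: at each step, take the largest admissible digit. The sketch below (function name assumed, not from the paper) illustrates the definition; note it produces a single expansion, whereas the paper counts numbers having exactly j of them.

```python
def greedy_q_expansion(x, q, m, n_digits):
    """Greedy digit sequence (c_i), c_i in {0,...,m}, such that
    sum_i c_i * q**(-i) approximates x in [0, m/(q-1)]."""
    digits, r = [], x
    for _ in range(n_digits):
        r *= q
        c = min(m, int(r))  # largest digit not exceeding the scaled remainder
        digits.append(c)
        r -= c
    return digits

q = (1 + 5 ** 0.5) / 2                      # golden ratio base, digits {0, 1}
digits = greedy_q_expansion(0.5, q, m=1, n_digits=30)
approx = sum(c * q ** -(i + 1) for i, c in enumerate(digits))
print(digits[:8], approx)                   # approx lies within ~q**-30 of 0.5
```

The truncation error after n digits is at most m/(q-1) times q**(-n), so 30 golden-ratio digits already recover x to better than a millionth.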
|
arxiv:2105.11608
|
stream-based runtime monitoring frameworks are safety assurance tools that check the runtime behavior of a system against a formal specification. this tutorial provides a hands-on introduction to rtlola, a real-time monitoring toolkit for cyber-physical systems and networks. rtlola processes, evaluates, and aggregates streams of input data, such as sensor readings, and provides a real-time analysis in the form of comprehensive statistics and logical assessments of the system's health. rtlola has been applied successfully in monitoring autonomous systems such as unmanned aircraft. the tutorial guides the reader through the development of a stream-based specification for an autonomous drone observing other flying objects in its flight path. each tutorial section provides an intuitive introduction, highlighting useful language features and specification patterns, and gives a more in-depth explanation of technical details for the advanced reader. finally, we discuss how runtime monitors generated from rtlola specifications can be integrated into a variety of systems and discuss different monitoring applications.
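Stream-based specifications declare output streams that aggregate input streams, plus triggers that fire when a logical assessment fails. As a language-agnostic illustration of that idea (a toy Python sketch; this is not the RTLola language or API), consider a monitor that keeps a sliding-window average over altitude readings and counts envelope violations:

```python
from collections import deque

class AltitudeMonitor:
    """Toy stream monitor: an output stream (sliding-window average of a
    sensor input stream) and a trigger that counts excursions above a
    safe envelope. Illustrative only; not the RTLola API."""
    def __init__(self, window=5, max_alt=120.0):
        self.window = deque(maxlen=window)   # bounded memory, as in stream monitors
        self.max_alt = max_alt
        self.violations = 0

    def feed(self, altitude):
        self.window.append(altitude)
        avg = sum(self.window) / len(self.window)
        if avg > self.max_alt:               # trigger condition
            self.violations += 1
        return avg

mon = AltitudeMonitor(window=3, max_alt=100.0)
for reading in [90, 95, 110, 120, 130]:
    mon.feed(reading)
print(mon.violations)  # 2 of the 5 window averages exceed 100
```

The bounded window is the key property: like a generated RTLola monitor, the sketch processes each event in constant time and memory, which is what makes online monitoring of a live system feasible.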
|
arxiv:2501.15913
|
in this paper, we describe the gap implementation of crossed modules of commutative algebras and cat$^1$-algebras and their equivalence.
|
arxiv:1311.6692
|
using large-scale call detail records of anonymised mobile phone service subscribers with demographic and location information, we investigate how a long-distance residential move within the country affects the mobile communication patterns between an ego who moved and a frequently called alter who did not move. by using clustering methods in analysing the call frequency time series, we find that such ego-alter pairs are grouped into two clusters: those with the call frequency increasing and those with the call frequency decreasing after the move of the ego. this indicates that such residential moves are correlated with a change in the communication pattern soon after moving. we find that the pre-move calling behaviour is a relevant predictor for the post-move calling behaviour. while demographic and location information can help in predicting whether the call frequency will rise or decay, they are not relevant in predicting the actual call frequency volume. we also note that at four months after the move, most of these close pairs maintain contact, even if the call frequency is decreased.
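The two-cluster split of ego-alter pairs (call frequency rising vs. decaying after the move) can be caricatured with a simple proxy: compare mean call counts before and after the move month. This is an illustrative stand-in for the clustering described in the abstract, not the study's method, and all names below are assumptions.

```python
def classify_pair(series, move_idx):
    """Label an ego-alter monthly call-frequency series as 'rising' or
    'decaying' after the ego's move, by comparing mean call counts before
    and after the move month (a crude proxy for time-series clustering)."""
    pre, post = series[:move_idx], series[move_idx:]
    pre_mean = sum(pre) / len(pre)
    post_mean = sum(post) / len(post)
    return "rising" if post_mean > pre_mean else "decaying"

print(classify_pair([4, 5, 4, 9, 10, 11], move_idx=3))   # rising
print(classify_pair([8, 9, 7, 3, 2, 1], move_idx=3))     # decaying
```

A real analysis would cluster the full normalised time series (e.g., with k-means on the post-move trajectories) rather than threshold a single summary statistic.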
|
arxiv:2009.00252
|
we show that in $L(\mathbb{R})$, assuming large cardinals, $\mathsf{HOD}{\parallel}\eta^{+\mathsf{HOD}}$ is locally definable from $\mathsf{HOD}{\parallel}\eta$ for all $\mathsf{HOD}$-cardinals $\eta \in [\boldsymbol{\delta}^2_1, \theta)$. this is a further elaboration of the statement "$\mathsf{HOD}^{L(\mathbb{R})}$ is a core model below $\theta$" made by john steel.
|
arxiv:2308.01072
|
conventional planet formation theory suggests that chondritic materials have delivered crucial atmospheric and hydrospheric elements such as carbon (c), nitrogen (n), and hydrogen (h) onto primitive earth. however, recent measurements highlight significant discrepancies in elemental ratios between terrestrial parent bodies and the supposed planet building blocks. here we present a volatile evolution model during the assembly of earth and earth-like planets. our model includes impact losses, atmosphere-mantle exchange, and time-dependent effects of accretion and outgassing calculated from dynamical modeling outcomes. exploring a wide range of planetesimal properties (i.e., size and composition) as well as impact histories informed by n-body accretion simulations, we find that while the degree of c-n-h fractionation has inherent stochasticity, the evolution of c/n and c/h ratios can be traced back to certain properties of the protoplanet and projectiles. interestingly, the majority of our earth-like planets acquire superchondritic final c/n ratios, implying that the volatile elemental ratios on terrestrial planets are driven by the complex interplay between delivery, atmospheric ablation, and mantle degassing.
|
arxiv:2207.13087
|
we investigate, in a certain decoupling limit, the effect of having a constant c-field on the m-theory five-brane using an open membrane probe. we define an open membrane metric for the five-brane that remains non-degenerate in the limit. the canonical quantisation of the open membrane boundary leads to a noncommutative loop space which is a functional analogue of the noncommutative geometry that occurs for d-branes.
|
arxiv:hep-th/0005026
|
in this chapter, we present an overview of sources of biologically relevant astrophysical radiation and the effects of that radiation on organisms and their habitats. we consider both electromagnetic and particle radiation, with an emphasis on ionizing radiation and ultraviolet light, all of which can impact organisms directly as well as indirectly through modifications of their habitats. we review what is known about specific sources, such as supernovae, gamma-ray bursts, and stellar activity, including the radiation produced and likely rates of significant events. we discuss both negative and potential positive impacts on individual organisms and their environments, and how radiation in a broad context affects habitability.
|
arxiv:1711.02748
|
given the ubiquity of multi-task scenarios in practical systems, multi-task learning (mtl) has found widespread application across diverse domains. in real-world scenarios, these tasks often have different priorities. for instance, in web search, relevance is often prioritized over other metrics, such as click-through rates or user engagement. existing frameworks pay insufficient attention to the prioritization among different tasks, and typically adjust task-specific loss function weights to differentiate task priorities. however, this approach encounters challenges as the number of tasks grows, leading to exponential increases in hyper-parameter tuning complexity. furthermore, the simultaneous optimization of multiple objectives can negatively impact the performance of high-priority tasks due to interference from lower-priority tasks. in this paper, we introduce a novel multi-task learning framework employing lagrangian differential multiplier methods for step-wise multi-task optimization. it is designed to boost the performance of high-priority tasks without interference from other tasks. its primary advantage lies in its ability to automatically optimize multiple objectives without requiring balancing hyper-parameters for different tasks, thereby eliminating the need for manual tuning. additionally, we provide theoretical analysis demonstrating that our method ensures optimization guarantees, enhancing the reliability of the process. we demonstrate its effectiveness through experiments on multiple public datasets and through its application in taobao search, a large-scale industrial search ranking system, resulting in significant improvements across various business metrics.
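The underlying idea — optimize lower-priority objectives subject to the high-priority objective staying near its optimum, with a Lagrange multiplier that is learned instead of hand-tuned — can be illustrated on scalar toy losses. This is a generic primal-descent / dual-ascent sketch under assumed toy losses, not the paper's algorithm:

```python
def train(steps=50_000, lr=0.01, eps=0.01):
    """Minimize a low-priority loss (w-3)^2 subject to the high-priority
    loss (w-1)^2 <= eps, via gradient descent on the parameter w and dual
    ascent on the multiplier lam. lam replaces a hand-tuned task weight:
    it grows automatically while the priority constraint is violated."""
    w, lam = 0.0, 0.0
    for _ in range(steps):
        hi = (w - 1.0) ** 2                            # high-priority task loss
        grad_w = 2 * (w - 3.0) + lam * 2 * (w - 1.0)   # d/dw [lo + lam * (hi - eps)]
        w -= lr * grad_w
        lam = max(0.0, lam + lr * (hi - eps))          # dual ascent, lam >= 0
    return w, lam

w, lam = train()
print(w, lam)  # w settles near 1 + sqrt(eps) ≈ 1.1: the high-priority constraint binds
```

Because the multiplier adapts per constraint, adding more lower-priority tasks adds more multipliers but no new hyper-parameters to grid-search, which is the tuning advantage the abstract emphasizes.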
|
arxiv:2412.12092
|