text: stringlengths 1 to 3.65k
source: stringlengths 15 to 79
( abridged ) we present near - infrared ks - band imaging of 13 high redshift ( 0. 6 < z < 1. 3 ) bl lac objects. we clearly detect the host in eight objects, and marginally in three others. in all cases, the host galaxy is well represented by an r ^ 1 / 4 surface brightness law. the host galaxies of high redshift bl lacs are large ( < r ( e ) > ~ 7 kpc ) and very luminous ( < m ( k ) > = - 27. 9 + - 0. 7 ), ~ 3 mag brighter than l *, and ~ 1 mag brighter than brightest cluster galaxies. they are also ~ 1 mag brighter than low redshift radio galaxies and appear to deviate from their k - z relationship. on the other hand, the high luminosities agree with the few optical studies of high redshift bl lac hosts. the nuclear luminosity and the nucleus - galaxy luminosity ratio of the high redshift bl lacs are much larger than those in low redshift bl lacs. this may be due to either a higher intrinsic nuclear luminosity, or enhanced luminosity because of strong beaming. contrary to what is observed in low redshift bl lacs, the luminosities of the host galaxy and of the nucleus are fairly well correlated, as expected from the black hole mass - bulge luminosity relationship. high redshift bl lacs radiate with a wide range of power with respect to their eddington luminosity, and this power is intermediate between those in nearby bl lacs and in luminous radio - loud quasars. the high redshift bl lac host galaxies appear to be ~ 2 mag brighter than those at low redshift. this is likely due to a strong selection effect in the surveys of bl lacs that makes observable only the most luminous sources at z > 0. 5 and produces a correlation between the nuclear and the host luminosity. however, this may also suggest strong luminosity evolution which is inconsistent with a simple passive evolution of the host galaxies, and requires a contribution from relatively recent star formation episodes.
arxiv:astro-ph/0505443
the search for baryon - number - violating ( bnv ) nucleon decay is an intriguing probe of new physics beyond the sm in future neutrino experiments with enhanced sensitivity. the dark sector states such as an axion or axion - like particle ( alp ) can induce nucleon decays with distinct signature and kinematics from the conventional nucleon decays. in this work, we study the alp effective field theories ( efts ) with baryon number violation and the impact of light alp on bnv nucleon decays. we revisit the dimension - 8 bnv operators in the extended efts with an alp field $ a $ respecting shift symmetry. the low - energy eft operators with $ | \ delta ( b - l ) | = 2 $ and $ | \ delta ( b - l ) | = 0 $ are matched to the baryon chiral perturbation theory. we obtain the effective chiral lagrangian and the bnv interactions between alp and baryons / mesons. the alp interactions lead to two - body baryon decays $ b \ to \ ell ~ ( { \ rm or } ~ \ nu ) ~ a $ and three - body nucleon decays $ n \ to m ~ \ ell ~ ( { \ rm or } ~ \ nu ) ~ a $. we obtain the constraints on the uv scale from the invisible $ \ lambda ^ 0 $ decay search at besiii, the invisible neutron decay search at kamland and proton decay search at super - k. we also show the projections of some other baryon / nucleon decays and present the distinct distributions of kinematic observable.
arxiv:2406.11382
the question of the existence of bogoliubov ' s quasiparticles in the bcs wave functions underneath gutzwiller ' s projection is of importance to strongly correlated systems. we develop a method to examine the two - particle excitations of gutzwiller - projected bcs wave functions by using the variational monte carlo approach. we find that the exact gutzwiller - projected quasiparticle ( gqp ) dispersions are quantitatively reproduced by the gutzwiller - projected bogoliubov quasiparticles ( gbqp ) except in the regions where d - wave cooper pairing is strong. since gqp still shows higher energy than gbqp near the antinodes, we believe gbqp provides a reasonable description of the low - energy excitations in strongly correlated superconducting systems. in addition, the intimate connection between gutzwiller ' s projection and d - wave cooper pairing may also imply that strong correlations play a significant role in the nodal - antinodal dichotomy seen by photoemission experiments in cuprates.
arxiv:1303.7060
properly annotated multimedia content is crucial for supporting advances in many information retrieval applications. it enables, for instance, the development of automatic tools for the annotation of large and diverse multimedia collections. in the context of everyday sounds and online collections, the content to describe is very diverse and involves many different types of concepts, often organised in large hierarchical structures called taxonomies. this makes the task of manually annotating content arduous. in this paper, we present our user - centered development of two tools for the manual annotation of audio content from a wide range of types. we conducted a preliminary evaluation of functional prototypes involving real users. the goal is to evaluate them in a real context, engage in discussions with users, and inspire new ideas. a qualitative analysis was carried out including usability questionnaires and semi - structured interviews. this revealed interesting aspects to consider when developing tools for the manual annotation of audio content with labels drawn from large hierarchical taxonomies.
arxiv:1811.10988
the clouds have a great impact on venus ' s energy budget and climate evolution, but their three - dimensional structure is still not well understood. here we incorporate a simple venus cloud physics scheme into a flexible gcm to investigate the three - dimensional cloud spatial variability. our simulations show good agreement with observations in terms of the vertical profiles of clouds and h2so4 vapor. h2o vapor is overestimated above the clouds due to efficient transport in the cloud region. the cloud top decreases as latitude increases, qualitatively consistent with venus express observations. the underlying mechanism is the combination of h2so4 chemical production and meridional circulation. the mixing ratios of h2so4 at 50 - 60 km and h2o vapors in the main cloud deck basically exhibit maxima around the equator, because temperature controls the saturation vapor mixing ratios of the two species. the cloud mass distribution is subject to both h2so4 chemical production and dynamical transport and shows a pattern that peaks around the equator in the upper cloud while peaking at mid - high latitudes in the middle cloud. at low latitudes, h2so4 and h2o vapors, cloud mass loading and acidity show semidiurnal variations at different altitude ranges, which can be validated against future missions. our model emphasizes the complexity of the venus climate system and the great need for more observations and simulations to unravel its spatial variability and underlying atmospheric and / or geological processes.
arxiv:2407.15966
certain subsets of limit sets of geometrically finite fuchsian groups with parabolic elements are considered. it is known that jarn \ ' { \ i } k limit sets determine a " weak multifractal spectrum " of the patterson measure in this situation. this paper will describe a natural generalisation of these sets, called strict jarn \ ' { \ i } k limit sets, and show how these give rise to another weak multifractal spectrum. number - theoretical interpretations of these results in terms of continued fractions will also be given.
arxiv:1111.4945
we present a wide - field $ ( 60 \ arcmin \ times 30 \ arcmin ) $ study of a dense region within the polaris flare, hereafter referred to as the ` polaris molecular cloud ', using $ ^ { 12 } $ co, $ ^ { 13 } $ co, and c $ ^ { 18 } $ o ( $ j = 1 - 0 $ ) observations at $ 20 \ arcsec $ resolution, obtained with the nobeyama 45 m radio telescope. the analysis reveals molecular gas formation occurring at column densities up to $ \ sim10 ^ { 21 } $ cm $ ^ { - 2 } $, evidenced by an anti - correlation between $ \ textsc { hi } $ and co distributions, indicating active atomic - to - molecular gas conversion. we found a threshold column density for molecular formation at $ \ sim5 \ times10 ^ { 20 } $ cm $ ^ { - 2 } $, which is common among more evolved molecular clouds. the co - to - h $ _ 2 $ conversion factor, $ x _ { \ rm co } $, was found to be $ 0. 7 \ times 10 ^ { 20 } $ h $ _ 2 $ cm $ ^ { - 2 } $ ( k km s $ ^ { - 1 } ) ^ { - 1 } $, lower than the solar neighborhood average. our chemical models estimate the cloud ' s age to be $ \ sim10 ^ { 5 } - 10 ^ { 6 } $ years, suggesting an early stage of molecular cloud evolution. this interpretation is consistent with the observed low $ x _ { \ rm co } $ factor. while virial analysis suggests that the entire cloud is gravitationally unbound, we identified several filamentary structures extending from the main cloud body. these filaments show systematic velocity gradients of $ 0. 5 - 1. 5 $ km s $ ^ { - 1 } $ pc $ ^ { - 1 } $, and analysis of the velocities shows that the molecular gas within them is falling toward the main cloud body, following a free - fall model. this suggests ongoing mass accumulation processes through the filaments, demonstrating that gravitational processes can be important even at column densities of $ \ sim10 ^ { 21 } $ cm $ ^ { - 2 } $.
arxiv:2502.10668
the most distant kuiper belt objects appear to be clustered in longitude of perihelion and in orbital pole position. to date, the only two suggestions for the cause of these apparent clusterings have been either the effects of observational bias or the existence of a distant giant planet in an eccentric inclined orbit known as planet nine. to determine if observational bias can be the cause of these apparent clusterings, we develop a rigorous method of quantifying the observational biases in the observations of longitude of perihelion and orbital pole position. from this now more complete understanding of the biases, we calculate that the probability that these distant kuiper belt objects would be clustered as strongly as observed in both longitude of perihelion and in orbital pole position is only 0. 2 %. while explanations other than planet nine may someday be found, the statistical significance of this clustering is now difficult to discount.
arxiv:1901.07115
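The clustering-significance number quoted in the entry above can be illustrated with a toy Monte Carlo that ignores the survey-bias weighting central to the paper: draw longitudes of perihelion from a uniform null, measure concentration with the Rayleigh statistic, and compare with an assumed observed value. The sample size and observed clustering strength below are illustrative placeholders, not the paper's numbers, and the uniform null stands in for the bias-corrected distribution the authors actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_resultant_length(angles_rad):
    """Rayleigh statistic R in [0, 1]; R -> 1 means tight angular clustering."""
    return np.abs(np.mean(np.exp(1j * angles_rad)))

n_objects = 14        # assumed number of distant KBOs in the sample (illustrative)
r_observed = 0.6      # assumed observed clustering strength (illustrative)
n_trials = 100_000

# Null hypothesis: longitudes of perihelion are uniform on [0, 360) degrees.
# (The paper instead folds the quantified survey biases into this null.)
r_null = np.array([
    mean_resultant_length(rng.uniform(0.0, 2.0 * np.pi, n_objects))
    for _ in range(n_trials)
])

p_value = np.mean(r_null >= r_observed)
print(f"P(clustering >= observed | uniform null) = {p_value:.4f}")
```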
we complete the classification of order $ 5 $ nonsymplectic automorphisms on hyper - k \ " ahler fourfolds deformation equivalent to the hilbert square of a k3 surface. we then compute the topological lefschetz number of natural automorphisms of generalized kummer fourfolds and we describe the geometry of their fixed loci.
arxiv:1902.01685
cosmic ray muons, as they pass through a material, undergo multiple coulomb scattering ( mcs ). the analysis of the muon scattering angle in a material provides us with an opportunity to study the characteristics of the material and its internal 3d structure, as the scattering angle depends on the atomic number, the density of the material, and the thickness of the medium at a given energy. we have used the geant4 toolkit to study the scattering angle and utilize this information to identify the material. we have analyzed the density - dependent and density - independent scattering angles and observed various patterns for distinct periods in the periodic table.
arxiv:2302.00447
open many body quantum systems play a paramount role in various branches of physics, such as quantum information, nonlinear optics or condensed matter. the dissipative character of open systems has gained a lot of interest especially within the fields of quantum optics, due to unprecedented stabilization of quantum coherence, and quantum information, with its desire to control environmental degrees of freedom. we look beyond the typical mechanism of dissipation associated with an external source and show that strongly interacting many particle systems can create quantum decoherence within themselves. we study a quantum bosonic two - dimensional many body system with extended interactions between particles. analytical calculations show that the system can be driven out of its coherent state, which is prevalent among commonly used setups. however, we also observe a revival of the superfluid phase within the same framework for sufficiently large interaction strength. the breakdown of quantum coherence is inevitable, but can be misinterpreted if one assumes improper coupling between the constituents of the many particle system. we show an adequate path to retrieve physically relevant results and consider its limitations. the system displays a natural cutoff that enforces the breakdown of superfluidity.
arxiv:2308.16423
we review the schr \ " odinger picture of field theory in curved spacetime and using this formalism, the power spectrum of massive non - interacting, minimally coupled scalars in a fixed de sitter background is obtained. to calculate the n - point function in schr \ " odinger field theory, the " in - in " formalism is extended in the friedmann - lema \ ^ itre - robertson - walker ( flrw ) universe. we compute the three - point function for primordial scalar field fluctuation in the single field inflation by this in - in formalism. the results are the same as the three - point function in the heisenberg picture.
arxiv:1610.05038
big data processing systems store, process, and analyze huge volumes of structured and unstructured data through cluster analysis, which helps in identifying unseen patterns and the relationships between them. clustering analysis over the shared machines in big data technologies helps in deriving these relations and making decisions using data in context. it can handle every form of raw, tabular data along with structured, semi - structured, and unstructured data. the data does not have to possess the linearity property, and it can reflect associative and correlative patterns and groupings. the main contribution and findings of this paper are to gather and summarize recent big data clustering techniques, along with their strengths and weaknesses, in any distributed environment.
arxiv:2211.05339
the sun is the most studied of stars and a laboratory of fundamental physics. however, the understanding of our star is stained by the solar modelling problem which can stem from various causes. we combine inversions of sound speed, an entropy proxy and the ledoux discriminant with the position of the base of the convective zone and the photospheric helium abundance to test combinations of ingredients such as equation of state, abundance and opacity tables. we study the potential of the inversions to constrain ad - hoc opacity modifications and additional mixing in the sun. we show that they provide constraints on these modifications to the ingredients and that the solar problem likely occurs from various sources and using phase shifts with our approach is the next step to take.
arxiv:1902.10390
we present a novel hierarchical formulation of the fourth - order forward symplectic integrator and its numerical implementation in the gpu - accelerated direct - summation n - body code frost. the new integrator is especially suitable for simulations with a large dynamical range due to its hierarchical nature. the strictly positive integrator sub - steps in a fourth - order symplectic integrator are made possible by computing an additional gradient term in addition to the newtonian accelerations. all force calculations and kick operations are synchronous so the integration algorithm is manifestly momentum - conserving. we also employ a time - step symmetrisation procedure to approximately restore the time - reversibility with adaptive individual time - steps. we demonstrate in a series of binary, few - body and million - body simulations that frost conserves energy to a level of $ | \ delta e / e | \ sim 10 ^ { - 10 } $ while errors in linear and angular momentum are practically negligible. for typical star cluster simulations, we find that frost scales well up to $ n _ \ mathrm { gpu } ^ \ mathrm { max } \ sim 4 \ times n / 10 ^ 5 $ gpus, making direct summation n - body simulations beyond $ n = 10 ^ 6 $ particles possible on systems with several hundred and more gpus. due to the nature of hierarchical integration the inclusion of a kepler solver or a regularised integrator with post - newtonian corrections for close encounters and binaries in the code is straightforward.
arxiv:2011.14984
in the causal adjustment setting, variable selection techniques based on either the outcome model or the treatment allocation model alone can result in the omission of confounders, which leads to bias, or the inclusion of spurious variables, which leads to variance inflation, in the propensity score. we propose a variable selection method based on a penalized objective function which considers the outcome and treatment assignment models simultaneously. the proposed method facilitates confounder selection in high - dimensional settings. we show that under regularity conditions our method attains the oracle property. the selected variables are used to form a doubly robust regression estimator of the treatment effect. simulation results are presented and economic growth data are analyzed. specifically, we study the effect of life expectancy as a measure of population health on the average growth rate of gross domestic product per capita.
arxiv:1511.08501
alice electrodynamics ( aed ) is a theory of electrodynamics in which charge conjugation is a local gauge symmetry. in this paper we investigate a charge instability in alice electrodynamics in ( 2 + 1 ) - dimensions due to this local charge conjugation. the instability manifests itself through the creation of a pair of alice fluxes. the final state is one in which the charge is completely delocalized, i. e., it is carried as cheshire charge by the flux pair that gets infinitely separated. we determine the decay rate in terms of the parameters of the model. the relation of this phenomenon with other salient features of 2 - dimensional compact qed, such as linear confinement due to instantons / monopoles, is discussed.
arxiv:hep-th/0304186
deep learning ( dl ) has brought significant advances to robotics vision tasks. however, most existing dl methods have a major shortcoming : they rely on a static inference paradigm inherent in traditional computer vision pipelines. on the other hand, recent studies have found that active perception improves the perception abilities of various models by going beyond these static paradigms. despite the significant potential of active perception, it poses several challenges, primarily involving substantial changes in training pipelines for deep learning models. to overcome these limitations, in this work, we propose a generic supervised active perception pipeline for object detection that can be trained using existing off - the - shelf object detectors, while also leveraging advances in simulation environments. to this end, the proposed method employs an additional neural network architecture that estimates better viewpoints in cases where the object detector confidence is insufficient. the proposed method was evaluated on synthetic datasets, constructed within the webots robotics simulator, showcasing its effectiveness in two object detection cases.
arxiv:2312.10200
holistic understanding of multiphase reactive flow mechanisms such as co $ _ 2 $ dissolution, multiphase displacement, and snap - off events is vital for optimisation of large - scale industrial operations like co $ _ 2 $ sequestration, enhanced oil recovery, and geothermal energy. recent advances in three - dimensional ( 3d ) printing allow for cheap and fast manufacturing of complex porosity models, which enable investigation of specific flow processes in a repeatable manner as well as sensitivity analysis for small geometry alterations. however, there are concerns regarding dimensional fidelity, shape conformity and surface quality, and therefore the printing quality and printer limitations must be benchmarked. we present an experimental investigation into the ability of 3d printing to generate custom - designed micromodels accurately and repeatably down to a minimum pore throat size of 140 micrometers, which is representative of the average pore - throat size in coarse sandstones. homogeneous and heterogeneous micromodel geometries are designed, and then the 3d printing process is optimised to achieve repeatable experiments with single - phase fluid flow. finally, particle image velocimetry is used to compare the velocity map obtained from flow experiments in 3d printed micromodels with the map generated with direct numerical simulation ( openfoam software ), and an accurate match is obtained. this work indicates that 3d printed micromodels can be used to accurately investigate pore - scale processes present in co $ _ 2 $ sequestration, enhanced oil recovery and geothermal energy applications more cheaply than traditional micromodel methods.
arxiv:2103.03597
in this work we construct an extension for the category of 0 - modules by analogy with [ h. - j. baues and g. wirshing, cohomology of small categories, j. pure appl. algebra, 38 ( 1985 ), 187 - 211 ]. the 0 - cohomology functor becomes a derived functor in the extended category. as an application of this construction we calculate the cohomological dimension of so - called 0 - free monoids.
arxiv:0802.4414
the attention mechanism in sequence - to - sequence models is designed to model the alignments between acoustic features and output tokens in speech recognition. however, attention weights produced by models trained end to end do not always correspond well with actual alignments, and several studies have further argued that attention weights might not even correspond well with the relevance attribution of frames. regardless, visual similarity between attention weights and alignments is widely used during training as an indicator of the model ' s quality. in this paper, we treat the correspondence between attention weights and alignments as a learning problem by imposing a supervised attention loss. experiments have shown significantly improved performance, suggesting that learning the alignments well during training critically determines the performance of sequence - to - sequence models.
arxiv:2204.12308
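One simple way to realize a supervised attention loss of the kind described above is to add a cross-entropy term between the decoder's attention distributions and reference alignments (for example, from a forced aligner), weighted against the usual token loss. The PyTorch sketch below is a generic formulation under that assumption; the tensor names, the interpolation weight `lambda_attn`, and the use of hard one-hot alignments are illustrative choices, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def supervised_attention_loss(attn_weights, align_targets, token_mask):
    """
    attn_weights : (batch, out_len, in_len) attention distributions (rows sum to 1)
    align_targets: (batch, out_len) index of the reference input frame per output token
    token_mask   : (batch, out_len) 1.0 for real tokens, 0.0 for padding
    """
    log_attn = torch.log(attn_weights.clamp_min(1e-8))
    nll = F.nll_loss(
        log_attn.transpose(1, 2),   # (batch, in_len, out_len): class dim goes second
        align_targets,
        reduction="none",
    )                               # (batch, out_len)
    return (nll * token_mask).sum() / token_mask.sum().clamp_min(1.0)

def total_loss(token_logits, token_targets, attn_weights, align_targets,
               token_mask, lambda_attn=0.5):
    # Standard token cross-entropy plus the alignment supervision term.
    ce = F.cross_entropy(token_logits.transpose(1, 2), token_targets)
    return ce + lambda_attn * supervised_attention_loss(
        attn_weights, align_targets, token_mask)
```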
the effect of platinum ( pt ) bottom electrode texture on the tetragonality, dielectric, ferroelectric, and polarization switching response of pulsed laser deposited ba0. 8sr0. 2tio3 ( bst ) thin films has been studied. the x - ray diffraction and raman analysis revealed the higher tetragonality of bst films when they were grown on a more highly ( 111 ) - textured pt layer. properties such as dielectric permittivity, polarization, switching time, and leakage currents were found to be correlated with the tetragonality and orientation of the bst films. the polarization current was observed to be higher in bst films on the epitaxial pt layer and exhibits an exponential dependence on the electric field. the voltage - current measurements displayed ohmic behavior of the leakage current irrespective of pt texture for low voltages ( up to 1 v ), whereas at higher voltages the conduction mechanism was found to be dependent on the texture of the bottom pt electrode.
arxiv:2501.12454
discrete structure rules for validating molecular structures are usually limited to fulfillment of the octet rule or similar simple deterministic heuristics. we propose a model, inspired by language modeling from natural language processing, with the ability to learn from a collection of undirected molecular graphs, enabling fitting of any underlying structure rule present in the collection. we introduce an adaptation of the popular transformer model, which can learn relationships between atoms and bonds. to our knowledge, the transformer adaptation is the first model that is trained to solve the unsupervised task of recovering partially observed molecules. in this work, we assess how different degrees of information impact performance w. r. t. fitting the qm9 dataset, which conforms to the octet rule, and fitting the zinc dataset, which contains hypervalent molecules and ions requiring the model to learn a more complex structure rule. more specifically, we test a full discrete graph with bond order information, a full discrete graph with only connectivity, a bag - of - neighbors, a bag - of - atoms, and count - based unigram statistics. these results provide encouraging evidence that neural networks, even when only connectivity is available, can learn arbitrary molecular structure rules specific to a dataset, as the transformer adaptation surpasses a strong octet rule baseline on the zinc dataset.
arxiv:2001.03517
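The "recovering partially observed molecules" task in the entry above can be pictured as masked-token recovery: atom tokens are partially masked and an encoder is trained to restore them. The sketch below is a minimal, generic version of that unsupervised task using a plain transformer encoder; it is not the paper's modified architecture (which also encodes bond and connectivity information), and the vocabulary size, masking rate, and model dimensions are illustrative.

```python
import torch
import torch.nn as nn

VOCAB = 16            # assumed atom-type vocabulary, including PAD=0 and MASK=1
PAD, MASK = 0, 1

class AtomMaskedRecovery(nn.Module):
    def __init__(self, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model, padding_idx=PAD)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):
        pad_mask = tokens.eq(PAD)
        h = self.encoder(self.embed(tokens), src_key_padding_mask=pad_mask)
        return self.head(h)                     # (batch, seq, VOCAB)

def masked_recovery_step(model, tokens, mask_rate=0.15):
    """Mask a fraction of atom tokens and compute the recovery loss on them only."""
    corrupt = tokens.clone()
    maskable = tokens.ne(PAD)
    chosen = maskable & (torch.rand_like(tokens, dtype=torch.float) < mask_rate)
    corrupt[chosen] = MASK
    logits = model(corrupt)
    if chosen.any():
        return nn.functional.cross_entropy(logits[chosen], tokens[chosen])
    return logits.sum() * 0.0                   # no masked positions this batch
```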
qatar science & technology park ( qstp ) is a home for international technology companies in qatar, and an incubator of start - up technology businesses. inaugurated in march 2009 as a part of qatar foundation, the science park is intended to spur the development of qatar ' s knowledge economy. at an investment of more than $ 800 million by qatar foundation, the qstp also became qatar ' s first free - trade zone. = = facilities and functions = = the qstp complex was designed by australian architects woods bagot, who won the australian institute of architects national award for their design. qstp functions by providing office and lab space to tenant companies, in a complex of multi - user and single - user buildings, and by providing professional services and support programs to those companies, such as the flagship qstp xlr8, a tech business accelerator program. in september 2005 the government of qatar passed a law making the science park a " free zone ", allowing foreign companies to set up a 100 percent owned entity free from tax and duties. = = = management and operations = = = qstp has a staff of 1000, which includes all individuals that work at the science park for the qstp entities, as well as qstp management. = = collaboration and ecosystem = = a feature of qstp is that it is co - located at qatar foundation ' s education city alongside international universities. these include carnegie mellon, cornell, georgetown, northwestern, texas a & m and virginia commonwealth. the science park helps its tenant companies to collaborate with the universities, and acts as an incubator for spin - out ventures from the universities ( and other sources ). = = = tenant companies and requirements = = = tenants of qstp are required to make technology development their main activity but can also trade commercially. the first companies to join qstp were eads, exxonmobil, ge, microsoft, rolls - royce, shell, total and ihorizons. = = goals and objectives = = qstp aims to grow qatar ' s knowledge economy by encouraging companies and institutes from around the world to develop and commercialise their technology in these sectors in qatar. in addition, it has a major responsibility and commitment to develop the role of qataris themselves in the r & d technology sector, which includes providing high tech training, supporting new business startups and enhancing technology management skills. the move towards a knowledge based society requires focus and this is how it is structured for the coming years ahead. the second
https://en.wikipedia.org/wiki/Qatar_Science_&_Technology_Park
the experimental data on goes magnetic measurements and plasma measurements on lanl geosynchronous satellites are used to select 169 case events containing 638 geosynchronous magnetopause crossings ( gmcs ) in 1995 to 2001. we study the necessary conditions for the geosynchronous magnetopause crossings using a scatter plot of the gmcs in the coordinate space of psw versus bz. in this representation the upstream solar wind conditions exhibit a sharp envelope boundary below which no gmcs occur. the boundary has two straight horizontal branches where bz does not influence the magnetopause location. the first branch is located in the range of psw = 21 npa for large positive bz and is associated with an asymptotic regime of the pressure balance. the second branch asymptotically approaches the range of psw = 4. 8 npa under very strong negative bz and is associated with a regime of bz influence saturation. we suggest that the saturation is caused by a relatively high contribution of the magnetospheric thermal pressure to the pressure balance at the magnetopause. the intermediate region of the boundary for moderate negative and small positive imf bz can be well approximated by a hyperbolic tangent function. we interpret the envelope boundary as the range of upstream solar wind conditions required for a gmc at the point on the magnetopause located closest to the earth ( the " perigee " point ). we find that the dipole tilt angle and dawn - dusk asymmetry influence the " perigee " point location. we find that the agsm latitude of this point depends linearly on the dipole tilt angle with a slope of about - 0. 5. the agsm longitude of the " perigee " point decreases with imf bz at a rate of about 2 angular minutes per 1 nt. an empirical model predicting the magnetopause crossing of the geosynchronous orbit at the " perigee " point is proposed.
arxiv:1109.6513
the celebrated exchange fluctuation theorem - - proposed by jarzynski and w \ ' ojcik ( phys. rev. lett. 92, 230602 ( 2004 ) ) for heat exchange between two systems in thermal equilibrium at different temperatures - - is explored here for quantum gaussian states in thermal equilibrium. we employ the wigner distribution function formalism for quantum states, which exhibits a close resemblance with the classical phase - space trajectory description, to arrive at this theorem. for two gaussian states in thermal equilibrium at two different temperatures kept in contact with each other for a fixed duration of time, we show that the quantum jarzynski - w \ ' ojcik theorem agrees with the corresponding classical result in the limit \ hbar - > 0.
arxiv:2007.04255
according to the ads / cft duality, the superconformal index of a superconformal field theory should have an ads interpretation as a euclidean functional integral with periodic boundary conditions on the fermions. unlike the thermal case, the euclidean continuation of the supersymmetric ads black hole does not smoothly fill in these boundary conditions, leading us to ask the title question. in the context of ads3 / cft2, we show using supersymmetric localization that the gravitational functional integral for the elliptic genus localizes onto asymptotically ads3 configurations that are annihilated by a certain supercharge, in the relevant off - shell supergravity theory. for ( 0, 4 ) superconformal field theories, we find such a localizing configuration in the 5d $ { \ cal n } = 2 $ off - shell supergravity theory that is asymptotically $ ads _ 3 \ times s ^ 2 $. this configuration interpolates smoothly between the supersymmetric btz black hole in the interior and a constant gauge field configuration at the boundary, thus smoothly filling in the ( + + ) boundary conditions for spinors on the boundary torus. it has an action equal to the supersymmetric btz black hole, holomorphic in the complex structure $ \ tau $ of the boundary torus. our results have interesting implications for the black hole farey tail in ads3, as well as for higher dimensional ads theories.
arxiv:1112.4844
we propose a protocol to probe the ultrafast evolution and dephasing of coherent electronic excitation in molecules in the time domain by the intrinsic streaking field generated by the molecule itself. coherent electronic motion in the endohedral fullerene \ necsixty ~ is initiated by a moderately intense femtosecond uv - vis pulse leading to coherent oscillations of the molecular dipole moment that persist after the end of the laser pulse. the resulting time - dependent molecular near - field is probed through the momentum modulation of photoemission from the central neon atom by a time - delayed attosecond xuv pulse. our ab - initio time - dependent density functional theory and classical trajectory simulations predict that this self - streaking signal accurately traces the molecular dipole oscillations in real time. we discuss the underlying processes and give an analytical model that captures the essence of our ab - initio simulations.
arxiv:1505.05857
it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price - setting power. for a given market of a commodity, demand is the relation of the quantity that all buyers would be prepared to purchase at each unit price of the good. demand is often represented by a table or a graph showing price and quantity demanded ( as in the figure ). demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. a term for this is " constrained utility maximisation " ( with income and wealth as the constraints on demand ). here, utility refers to the hypothesised relation of each individual consumer for ranking different commodity bundles as more or less preferred. the law of demand states that, in general, price and quantity demanded in a given market are inversely related. that is, the higher the price of a product, the less of it people would be prepared to buy ( other things unchanged ). as the price of a commodity falls, consumers move toward it from relatively more expensive goods ( the substitution effect ). in addition, purchasing power from the price decline increases ability to buy ( the income effect ). other factors can change demand ; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. all determinants are predominantly taken as constant factors of demand and supply. supply is the relation between the price of a good and the quantity available for sale at that price. it may be represented as a table or graph relating price and quantity supplied. producers, for example business firms, are hypothesised to be profit maximisers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. supply is typically represented as a function relating price and quantity, if other factors are unchanged. that is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. the higher price makes it profitable to increase production. just as on the demand side, the position of the supply can shift, say from a change in the price of a productive input or a technical improvement. the " law of supply " states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. here as well, the determinants of supply, such as price of substitutes, cost of production, technology
https://en.wikipedia.org/wiki/Economics
rag systems rely on rerankers to identify relevant documents. however, fine - tuning these models remains challenging due to the scarcity of annotated query - document pairs. existing distillation - based approaches suffer from training - inference misalignment and fail to capture interdependencies among candidate documents. to overcome these limitations, we reframe the reranking process as an attention - mask problem and propose gumbel reranking, an end - to - end training framework for rerankers aimed at minimizing the training - inference gap. in our approach, reranker optimization is reformulated as learning a stochastic, document - wise top - $ k $ attention mask using the gumbel trick and relaxed top - $ k $ sampling. this formulation enables end - to - end optimization by minimizing the overall language loss. experiments across various settings consistently demonstrate performance gains, including a 10. 4 \ % improvement in recall on hotpotqa for distinguishing indirectly relevant documents.
arxiv:2502.11116
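The document-wise stochastic top-k attention mask described above can be built from the standard Gumbel trick plus a successive-softmax relaxation of top-k sampling. The sketch below shows only that generic construction; the reranker scoring model, the temperature, and how the resulting soft mask enters the downstream language-modeling loss are assumptions rather than the paper's exact formulation.

```python
import torch

def gumbel_noise(shape, eps=1e-10):
    u = torch.rand(shape)
    return -torch.log(-torch.log(u + eps) + eps)

def relaxed_topk_mask(scores, k, tau=0.5):
    """
    scores: (n_docs,) reranker scores for the candidate documents.
    Returns a differentiable soft mask in [0, 1] whose entries sum to roughly k.
    """
    logits = scores + gumbel_noise(scores.shape)
    khot = torch.zeros_like(scores)
    onehot = torch.zeros_like(scores)
    for _ in range(k):
        # Suppress already-selected documents, then take a soft argmax.
        logits = logits + torch.log((1.0 - onehot).clamp_min(1e-10))
        onehot = torch.softmax(logits / tau, dim=-1)
        khot = khot + onehot
    return khot.clamp(max=1.0)

# Usage idea: weight retrieved documents by this mask and train the reranker
# end-to-end by minimizing the generator's overall language loss.
scores = torch.randn(20, requires_grad=True)
mask = relaxed_topk_mask(scores, k=5)
print(mask.sum())   # about 5, and gradients flow back into `scores`
```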
for a manifold embedded in an inner product space, we express geometric quantities such as { \ it hamilton vector fields, affine and levi - civita connections, curvature } in global coordinates. instead of coordinate indices, the global formulas for most quantities are expressed as { \ it operator - valued } expressions, using an { \ it affine projection } to the tangent bundle. for a submersion image of an embedded manifold, we introduce { \ it liftings } of hamilton vector fields, allowing us to use embedded coordinates on horizontal bundles. we derive a { \ it gauss - codazzi equation } for affine connections on vector bundles. this approach allows us to evaluate geometric expressions globally, and could be used effectively with modern numerical frameworks in applications. examples considered include rigid body mechanics and hamilton mechanics on grassmann manifolds. we show explicitly the cross - curvature ( mtw - tensor ) for the { \ it kim - mccann } metric with a reflector antenna - type cost function on the space of positive - semidefinite matrices of fixed rank has nonnegative cross - curvature, while the corresponding cost could have negative cross - curvature on grassmann manifolds, except for projective spaces.
arxiv:2307.10017
the nearby cloud l1642 is one of only two known very high latitude ( | b | > 30 deg ) clouds actively forming stars. it is a rare example of star formation in isolated conditions, and can reveal important details of star formation in general, e. g., of the effect of magnetic fields. we compare herschel dust emission structures and magnetic field orientation revealed by planck polarization maps in l1642. the high - resolution ( $ \ sim20 " $ ) herschel data reveal a complex structure including a dense, compressed central clump, and low density striations. the planck polarization data ( at 10 $ ' $ resolution ) reveal an ordered magnetic field pervading the cloud and aligned with the surrounding striations. there is a complex interplay between the cloud structure and large scale magnetic field. this suggests that the magnetic field is closely linked to the formation and evolution of the cloud. co rotational emission confirms that the striations are connected with the main clumps and likely to contain material either falling into or flowing out of the clumps. there is a clear transition from aligned to perpendicular structures approximately at a column density of $ n _ { \ rm { h } } = 1. 6 \ times 10 ^ { 21 } \, { { \ rm cm } } ^ { - 2 } $. comparing the herschel maps with the planck polarization maps shows the close connection between the magnetic field and cloud structure even in the finest details of the cloud.
arxiv:1512.03775
we calculate to next - to - leading order accuracy the high - energy elastic scattering cross section for an electron off of a classical point source. we use the $ \ overline { \ mathrm { ms } } $ renormalization scheme to tame the ultraviolet divergences, while the infrared singularities are dealt with using the well - known kinoshita - lee - nauenberg theorem. we show for the first time how to correctly apply the kinoshita - lee - nauenberg theorem diagrammatically in a next - to - leading order scattering process. we improve on previous works by including all initial and final state soft radiative processes, including absorption and an infinite sum of partially disconnected amplitudes. crucially, we exploit the monotone convergence theorem to prove that our delicate rearrangement of this formally divergent series is uniquely correct. this rearrangement yields a factorization of the infinite contribution from the initial state soft photons that then cancels in the physically observable cross section. since we use the $ \ overline { \ mathrm { ms } } $ renormalization scheme, our result is valid up to arbitrarily large momentum transfers between the source and the scattered electron as long as $ \ alpha \ log ( 1 / \ delta ) \ ll 1 $ and $ \ alpha \ log ( 1 / \ delta ) \ log ( \ Delta / E ) \ ll 1 $, where $ \ Delta $ and $ \ delta $ are the experimental energy and angular resolutions, respectively, and $ E $ is the energy of the scattered electron. our work aims at computing the nlo corrections to the energy loss of a highly energetic parton propagating in a quark - gluon plasma.
arxiv:1706.09989
a non - empirical exchange functional based on an interpolation between two limits of the electron density, the slowly varying limit and the asymptotic limit, is proposed. in the slowly varying limit, we follow the study by kleinman in 1984 which considered the response of a free - electron gas to an external periodic potential, but further assume that the perturbing potential also induces bragg diffraction of the fermi electrons. the interpolation function is motivated by the exact exchange functional of a hydrogen atom. combined with our recently proposed correlation functional, tests on 56 small molecules show that, for the first - row molecules, the exchange - correlation combo predicts the total energies four times more accurately than presently available quantum monte carlo results. for the second - row molecules, errors in the core - electron exchange energies can be corrected, leading to the most accurate molecular total energy predictions to date despite minimal computational effort. the calculated bond energies, zero point energies, and dipole moments are also presented.
arxiv:1706.01343
we have studied several radio galaxies at low radio frequencies using the gmrt. our prime motivation is to detect faint radio emission at very low frequencies due to low - energy electrons. our results provide evidence that there exist two classes of sources on morphological grounds. the first class is explained by the simple picture of spectral electron ageing, but in the second class the low - frequency synchrotron emission fades ( nearly ) as rapidly as the high - frequency synchrotron emission. in addition, in several sources, the spectra of low - surface - brightness features are flatter than the spectra of high - surface - brightness features, which suggests that either the simple picture of spectral electron ageing needs revision or we need to re - examine the formation mechanism of such sources. the images and statistics, the relevance of these results, and the role of the gmrt in exploring several unknowns are presented.
arxiv:0904.2724
let $ q $ be a power of the prime 3. a locally 5 - arc transitive $ g $ - graph of pushing up type is constructed for each value of $ q $. for $ q = 3 $, the $ g $ - graph constructed provides an example of a graph with a vertex stabilizer amalgam of shape $ { \ cal e } _ 1 $ in the sense of [ 1 ]. whereas, for the other values of $ q $, the vertex stabilizer amalgam of the $ g $ - graph is of a previously unknown shape. in particular, for $ q \ neq 3 $, these graphs are the first examples of locally 5 - arc transitive graphs containing a vertex $ z $ for which the group that fixes all 3 - arcs originating at $ z $ is non - trivial.
arxiv:2404.05353
we report optical observations during the first hour of the gamma - ray burst ( grb ) afterglow of grb021004. our observation revealed the existence of a short plateau phase, in which the afterglow remained at almost constant brightness, before an ordinary rapid fading phase. this plateau phase lasted for about 2 hours, from 0. 024 to 0. 10 d after the burst, which corresponds to a gap in the early afterglow light curve of grb990123. we propose that the plateau phase can be interpreted as the natural evolution of synchrotron emission from the forward shock region of a blast wave. the time when the typical frequency of the synchrotron emission passes through the optical range has been predicted to be about 0. 1 d after the burst, which is consistent with the observed light curve. our scenario hence implies that the observed feature in grb021004 is a common feature of grb afterglows.
arxiv:astro-ph/0303119
the tunneling rate, with exact prefactor, is calculated to first order in \ hbar for a closed frw universe filled with perfect fluid violating the strong energy condition. the calculations are performed by applying the dilute - instanton approximation on the corresponding duru - kleinert path integral. it is shown that a closed frw universe filled with a perfect fluid with small violation of strong energy condition is more probable to tunnel than the same universe with large violation of strong energy condition.
arxiv:gr-qc/9811081
we prove criteria for a { \ it ' magnetic ' weyl operator } to be in a schatten - von neumann class by extending a method developed by h. cordes, t. kato and g. arsu.
arxiv:1601.04613
the strong coupling constants of light pseudoscalar mesons with spin - - 3 / 2 and spin - - 1 / 2 heavy baryons are calculated in the framework of light cone qcd sum rules. it is shown that each class of transitions among members of the sextet spin - - 3 / 2 to sextet spin - - 1 / 2 baryons and that of the sextet spin - - 3 / 2 to spin - - 1 / 2 anti - - triplet baryons is described by only one invariant function. we also estimate the widths of kinematically allowed transitions. our results on decay widths are in good agreement with the existing experimental data, as well as predictions of other nonperturbative approaches.
arxiv:1012.5935
we have studied the neutron diffuse scattering in the relaxor pmn. the diffuse scattering appears around the burns temperature ( ~ 620k ), indicating its origin from the polar nanoregions ( pnr ). while the relative diffuse intensities are consistent with previous reports, they are entirely different from those of the lowest - energy to phonon. because of that, it has been considered that this to mode could not be the ferroelectric soft mode. recently, a neutron scattering study has unambiguously shown that the to mode does soften on cooling. if the diffuse scattering in pmn originates from the soft mode condensation, then the atomic displacements must satisfy the center of mass condition. but, the atomic displacements determined from diffuse scattering intensities do not fulfill this condition. to resolve this contradiction, we propose a simple model in which the total atomic displacement consists of two components : $ \ delta _ { cm } $ is created by the soft mode condensation, satisfying the center of mass condition, and, $ \ delta _ { shift } $ represents a uniform displacement of the pnr along their polar direction relative to the surrounding ( unpolarized ) cubic matrix. within this framework, we can successfully describe the neutron diffuse scattering intensities observed in pmn.
arxiv:cond-mat/0109386
this note briefly reviews the { \ it mirror principle } as developed in the series of papers \ llyi \ llyii \ llyiii \ llyiv \ lchy. we illustrate this theory with a few new examples. one of them gives an intriguing connection to a problem of counting holomorphic disks and annuli. this note has been submitted for the proceedings of the workshop on strings, duality and geometry at the c. r. m. in montreal in march 2000.
arxiv:math/0010064
we construct and train an artificial neural network called the back - propagation neural network to describe the evolution of the type ia supernova spectrum by using the data from the cfa supernova program. this network method has many attractive features, and one of them is that the constructed model is differentiable. benefitting from this, we calculate the absorption velocity and its variation. the model we constructed can well describe not only the spectrum of sne ia with wavelength range from $ 3500 \ aa $ to $ 8000 \ aa $, but also the light - curve evolution with phase time from $ - 15 $ to $ 50 $ with different colors. moreover, the number of parameters needed during the training process is much less than the usual methods.
arxiv:1801.01723
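Because the fitted network in the entry above is differentiable, quantities such as the absorption velocity can be read off from gradients of the model. A minimal sketch of the idea follows: a small MLP maps (phase, wavelength) to flux, and autograd supplies d(flux)/d(wavelength), which can be used to locate and track absorption minima. The architecture and normalizations here are placeholders, not the trained CfA model.

```python
import torch
import torch.nn as nn

class SpectrumMLP(nn.Module):
    """f(phase, wavelength) -> flux; differentiable in both inputs."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, phase, wavelength):
        # Crude normalization to keep inputs O(1); adjust to the training data.
        x = torch.stack([phase / 50.0, (wavelength - 5750.0) / 2250.0], dim=-1)
        return self.net(x).squeeze(-1)

model = SpectrumMLP()
phase = torch.full((200,), -5.0)
wave = torch.linspace(3500.0, 8000.0, 200, requires_grad=True)

flux = model(phase, wave)
dflux_dwave, = torch.autograd.grad(flux.sum(), wave)

# Near an absorption minimum the derivative crosses zero; the minimum's
# blueshift from the rest wavelength gives the absorption velocity.
print(dflux_dwave[:5])
```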
recent observation of sagittarius a $ ^ * $ ( sgr a $ ^ * $ ) by the event horizon telescope ( eht ) collaboration has uncovered various unanswered questions in black hole ( bh ) physics. besides, it may also probe various beyond the standard model ( bsm ) scenarios. one of the most profound possibilities is the search for ultralight bosons ( ulbs ) using bh superradiance ( sr ). eht observations imply that sgr a $ ^ * $ has a non - zero spin. using this observation, we derive bounds on the mass of ulbs with purely gravitational interactions. considering self - interacting ultralight axions, we constrain new regions in the parameter space of decay constant, for a certain spin of sgr a $ ^ * $. future observations of various spinning bhs can improve the present constraints on ulbs.
arxiv:2208.03530
prethermalization refers to the relaxation to a quasi - stationary state before reaching thermal equilibrium. recently, it is found that not only local conserved quantities but also entanglement plays a key role in a special type of prethermalization, called entanglement prethermalization. here, we show that in the tomonaga - luttinger model the entanglement prethermalization can also be explained by the conventional prethermalization of two independent subsystems without entanglement. moreover, it is argued that prethermalization in the tomonaga - luttinger model is essentially different from entanglement prethermalization in the lieb - liniger model because of the different types of energy degeneracies.
arxiv:1708.02404
theoretical dft calculations using gga + u and hse06 frameworks enabled vibrational mode assignment and partial ( atomic ) phonon dos determination in kagf3 perovskite, a low - dimensional magnetic fluoroargentate ( ii ). twelve bands in the spectra of kagf3 were assigned to either ir active or raman active modes, reaching very good correlation with experimental values ( r2 > 0. 997 ). low - temperature raman measurements indicate that the intriguing spin - peierls - like phase transition at 230 k is an order - disorder transition and it does not strongly impact the vibrational structure of the material.
arxiv:2012.07010
we give three new proofs of a theorem of c. sabbah asserting that the weight filtration of the limit mixed hodge structure at infinity of cohomologically tame polynomials coincides with the monodromy filtration up to a certain shift depending on the unipotent or non - unipotent monodromy part.
arxiv:1110.4840
since the beginning of the covid - 19 spread, the number of studies on epidemic models has increased dramatically. it is important for policymakers to know how the disease will spread and what the effects of policies and the environment are on the spreading. in this paper, we propose two extensions to the standard sir model : ( a ) we consider the prevention measures adopted based on the current severity of the infection ; those measures are adaptive and change over time ; ( b ) multiple cities and regions are considered, with population movements between those cities and regions, while taking into account that each region may have different prevention measures. although adaptive measures and mobility of the population were often observed during the pandemic, these effects are rarely explicitly modeled and studied in the classical epidemic models. the model we propose gives rise to a plateau phenomenon : the number of people infected by the disease stays at the same level during an extended period of time. we show what conditions need to be met in order for the spreading to exhibit a plateau period in a single city. in addition, this phenomenon is interdependent when considering multiple cities. we verify from the real - world data that the plateau phenomenon does exist in many regions of the world in the current covid - 19 development. finally, we provide a theoretical analysis of the plateau phenomenon for the single - city model and derive a series of results on the emergence and the ending of the plateau, as well as on the height and length of the plateau. our theoretical results match well with our experimental findings.
arxiv:2011.03376
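The two extensions named above (infection-dependent, time-varying prevention measures and inter-city mobility) can be written compactly as a coupled SIR system. The sketch below is a generic illustration under assumed functional forms: a contact rate that is throttled when the current infected fraction exceeds a threshold, and a fixed daily mobility matrix. The parameter values and the on/off response function are illustrative, not taken from the paper.

```python
import numpy as np

def simulate(S, I, R, M, beta0=0.3, gamma=0.1, days=300,
             threshold=0.01, suppression=0.2):
    """
    S, I, R : arrays of length n_cities (population counts per compartment).
    M[i, j] : fraction of city j's population moving to city i per day
              (columns sum to 1; the diagonal is 'staying put').
    """
    history = []
    for _ in range(days):
        N = S + I + R
        frac_inf = I / np.maximum(N, 1.0)
        # Adaptive measures: each city cuts contacts once infections are high.
        beta = np.where(frac_inf > threshold, beta0 * suppression, beta0)
        new_inf = beta * S * I / np.maximum(N, 1.0)
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        # Population movement between cities, applied compartment-wise.
        S, I, R = M @ S, M @ I, M @ R
        history.append(I.copy())
    return np.array(history)

# Two cities, outbreak seeded in city 0, with 1% daily population exchange.
M = np.array([[0.99, 0.01],
              [0.01, 0.99]])
traj = simulate(S=np.array([9.9e5, 1.0e6]), I=np.array([1.0e3, 0.0]),
                R=np.zeros(2), M=M)
print(traj[::50, :])   # measures switching on and off can hold I near a plateau
```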
this paper addresses the design of linear and nonlinear stabilization procedures for high - order continuous galerkin ( cg ) finite element discretizations of scalar conservation laws. we prove that the standard cg method is entropy conservative for the square entropy. in general, the rate of entropy production / dissipation depends on the residual of the governing equation and on the accuracy of the finite element approximation to the entropy variable. the inclusion of linear high - order stabilization generates an additional source / sink in the entropy budget equation. to balance the amount of entropy production in each cell, we construct entropy - dissipative element contributions using a coercive bilinear form and a parameter - free entropy viscosity coefficient. the entropy stabilization term is high - order consistent, and optimal convergence behavior is achieved in practice. to enforce preservation of local bounds in addition to entropy stability, we use the bernstein basis representation of the finite element solution and a new subcell flux limiting procedure. the underlying inequality constraints ensure the validity of localized entropy conditions and local maximum principles. the benefits of the proposed modifications are illustrated by numerical results for linear and nonlinear test problems.
arxiv:2005.08788
the conclusion of an ai challenge is not the end of its lifecycle ; ensuring a long - lasting impact requires meticulous post - challenge activities. the long - lasting impact also needs to be organised. this chapter covers the various activities after the challenge is formally finished. this work identifies target audiences for post - challenge initiatives and outlines methods for collecting and organizing challenge outputs. the multiple outputs of the challenge are listed, along with the means to collect them. the central part of the chapter is a template for a typical post - challenge paper, including possible graphs and advice on how to turn the challenge into a long - lasting benchmark.
arxiv:2312.06036
in the demanding biosensing environment, improving selection efficiency strategies has become an issue of great significance. dna minicircles containing between 200 and 400 base - pairs, also named microdna, are representative of the supercoiled dna loops found in nature. their short size makes them extremely susceptible to writhe and twist, which are known to play a central role in dna denaturation. we investigate minicircle lengths and superhelical densities that induce dna denaturation bubbles of nanometer size with well - defined, long lifetimes. mesoscopic modeling and accelerated dynamics simulations allow us to study accurately the thermodynamic and dynamical properties associated with the nucleation and closure mechanisms of long - lived denaturation bubbles. our results pave the way for new types of dna biosensors with enhanced selectivity for specific dna binding proteins.
arxiv:1805.04287
some properties of a local discontinuous galerkin ( ldg ) algorithm are demonstrated for the problem of evaluating a second derivative $ g = f _ { xx } $ for a given $ f $. ( this is a somewhat unusual problem, but it is useful for understanding the initial transient response of an algorithm for diffusion equations. ) ldg uses an auxiliary variable to break this up into two first order equations and then applies techniques by analogy to dg algorithms for advection equations. this introduces an asymmetry into the solution that depends on the choice of upwind directions for these two first order equations. when using piecewise linear basis functions, this ldg solution $ g _ h $ is shown not to converge in an $ l _ 2 $ norm because the slopes in each cell diverge. however, when ldg is used in a time - dependent diffusion problem, this error in the second derivative term is transient and rapidly decays away, so that the overall error is bounded. i. e., the ldg approximation $ f _ h ( x, t ) $ for a diffusion equation $ \ partial f / \ partial t = f _ { xx } $ converges to the proper solution ( as has been shown before ), even though the initial rate of change $ \ partial f _ h / \ partial t $ does not converge. we also show results from the recovery discontinuous galerkin ( rdg ) approach, which gives symmetric solutions that can have higher rates of convergence for a stencil that couples the same number of cells.
arxiv:1405.5907
we evaluate the contribution of susy - qcd to top - charm associated production at next generation linear colliders. our results show that the production cross section of the process $ e ^ + e ^ - \ to t \ bar c { or } \ bar t c $ could be as large as 0. 1 fb, which is larger than the prediction of the sm by a factor of $ 10 ^ 8 $.
arxiv:hep-ph/9904273
we present an in - depth analysis on the strength of the almost 10, 000 passwords from users of an instant messaging server in italy. we estimate the strength of those passwords, and compare the effectiveness of state - of - the - art attack methods such as dictionaries and markov chain - based techniques. we show that the strength of passwords chosen by users varies enormously, and that the cost of attacks based on password strength grows very quickly when the attacker wants to obtain a higher success percentage. in accordance with existing studies we observe that, in the absence of measures for enforcing password strength, weak passwords are common. on the other hand we discover that there will always be a subset of users with extremely strong passwords that are very unlikely to be broken. the results of our study will help in evaluating the security of password - based authentication means, and they provide important insights for inspiring new and better proactive password checkers and password recovery tools.
arxiv:0907.3402
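as a concrete illustration of the markov chain - based strength estimates mentioned above, the sketch below scores a password by its negative log - likelihood under a first - order character model. the tiny training list, the smoothing constant and the 128 - symbol alphabet are illustrative assumptions, not the paper's setup.

```python
# score passwords by negative log-likelihood (in bits) under a first-order
# character-level markov model; higher = less predictable for this attacker.
import math
from collections import defaultdict

train = ["password", "123456", "qwerty", "letmein", "dragon", "monkey"]

counts = defaultdict(lambda: defaultdict(int))
for pw in train:
    s = "^" + pw + "$"                                  # start / end markers
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1

def strength_bits(pw, alpha=1.0, vocab=128):
    s = "^" + pw + "$"
    bits = 0.0
    for a, b in zip(s, s[1:]):
        row = counts[a]
        total = sum(row.values())
        p = (row[b] + alpha) / (total + alpha * vocab)  # add-alpha smoothing
        bits += -math.log2(p)
    return bits

for pw in ["123456", "correct horse battery staple"]:
    print(pw, round(strength_bits(pw), 1), "bits")
```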
let $ l $ be a separable quadratic extension of either $ \ mathbb { q } $ or $ \ mathbb { f } _ q ( t ) $. we propose efficient algorithms for finding isomorphisms between quaternion algebras over $ l $. our techniques are based on computing maximal one - sided ideals of the corestriction of a central simple $ l $ - algebra.
arxiv:2007.06981
in recent years, deep or reinforcement learning approaches have been applied to optimise investment portfolios through learning the spatial and temporal information under the dynamic financial market. yet in most cases, the existing approaches may produce biased trading signals based on the conventional price data due to substantial market noise, which can fail to balance investment returns and risks. accordingly, a multi - agent and self - adaptive portfolio optimisation framework integrated with attention mechanisms and time series, namely the masaat, is proposed in this work in which multiple trading agents are created to observe and analyse the price series and directional change data that recognises the significant changes of asset prices at different levels of granularity for enhancing the signal - to - noise ratio of price series. afterwards, by reconstructing the tokens of financial data in a sequence, the attention - based cross - sectional analysis module and temporal analysis module of each agent can effectively capture the correlations between assets and the dependencies between time points. besides, a portfolio generator is integrated into the proposed framework to fuse the spatial - temporal information and then summarise the portfolios suggested by all trading agents to produce a new ensemble portfolio for reducing biased trading actions and balancing the overall returns and risks. the experimental results clearly demonstrate that the masaat framework achieves impressive enhancement when compared with many well - known portfolio optimisation approaches on three challenging data sets of djia, s & p 500 and csi 300. more importantly, our proposal has potential strengths in many possible applications for future study.
arxiv:2404.08935
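the directional change data mentioned in the abstract above can be extracted with a simple threshold rule : record an event whenever the price reverses by more than a fixed fraction from its running extreme. the sketch below is a minimal version with an invented threshold and a synthetic price path ; it is not the masaat preprocessing code.

```python
# extract directional change (dc) events from a price series at threshold theta.
import numpy as np

def directional_changes(prices, theta=0.02):
    events = []
    mode, extreme = "up", prices[0]          # assume an initial up-trend
    for i, p in enumerate(prices[1:], start=1):
        if mode == "up":
            if p > extreme:
                extreme = p                  # new running high
            elif p <= extreme * (1 - theta):
                events.append((i, p, "down")); mode, extreme = "down", p
        else:
            if p < extreme:
                extreme = p                  # new running low
            elif p >= extreme * (1 + theta):
                events.append((i, p, "up")); mode, extreme = "up", p
    return events

rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(0.005 * rng.standard_normal(500)))
print(directional_changes(prices)[:5])
```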
over the last decade, applications of neural networks ( nns ) have spread to various aspects of our lives. a large number of companies base their businesses on building products that use neural networks for tasks such as face recognition, machine translation, and self - driving cars. much of the intellectual property underpinning these products is encoded in the exact parameters of the neural networks. consequently, protecting these is of utmost priority to businesses. at the same time, many of these products need to operate under a strong threat model, in which the adversary has unfettered physical control of the product. in this work, we present barracuda, a novel attack on general purpose graphic processing units ( gpus ) that can extract parameters of neural networks running on the popular nvidia jetson nano device. barracuda uses correlation electromagnetic analysis to recover parameters of real - world convolutional neural networks.
arxiv:2312.07783
fourier - positivity, i. e. the mathematical property that a function has a positive fourier transform, can be used as a constraint on the parametrization of qcd dipole - target cross - sections or wilson line correlators in transverse position ( r ) space. they are bessel transforms of positive transverse momentum dependent gluon distributions. using mathematical fourier - positivity constraints on the limit r - > 0 behavior of the dipole amplitudes, we identify the common origin of the violation of fourier - positivity for various, however phenomenologically convenient, dipole models. it is due to the behavior r ^ { 2 + epsilon }, epsilon > 0, softer, even slightly, than color transparency. fourier - positivity seems thus to conflict with the present dipole formalism when it includes a qcd running coupling constant alpha ( r ).
arxiv:1604.01932
brillouin scattering has applications ranging from signal processing, sensing and microscopy, to quantum information and fundamental science. most of these applications rely on the electrostrictive interaction between light and phonons. here we show that in liquids optically - induced surface deformations can provide an alternative and far stronger interaction. this allows the demonstration of ultralow threshold brillouin lasing and strong phonon - mediated optical coupling for the first time. this form of strong coupling is a key capability for brillouin - reconfigurable optical switches and circuits, for photonic quantum interfaces, and to generate synthetic electromagnetic fields. while applicable to liquids quite generally, our demonstration uses superfluid helium. configured as a brillouin gyroscope this provides the prospect of measuring superfluid circulation with unprecedented precision, and to explore the rich physics of quantum fluid dynamics, from quantized vorticity to quantum turbulence.
arxiv:1907.06811
we investigate the influence of the granularity of the lattice on the potential between monopoles. using the flux definition of monopoles we introduce their centers of mass and are able to realize continuous shifts of the monopole positions. we find periodic deviations from the $ 1 / r $ - behavior of the monopole - antimonopole potential leading to local extrema. we suppose that these meta - stabilities may influence the order of the phase transition in compact qed.
arxiv:hep-lat/0409007
model merging aims to build a multi - task learner by combining the parameters of individually fine - tuned models without additional training. while a straightforward approach is to average model parameters across tasks, this often results in suboptimal performance due to interference among parameters across tasks. in this paper, we present intriguing results that weight averaging implicitly induces task vectors centered around the weight averaging itself and that applying a low - rank approximation to these centered task vectors significantly improves merging performance. our analysis shows that centering the task vectors effectively reduces task interference and that most of the task - specific knowledge is concentrated in the top singular vectors. our method demonstrates robust and scalable performance on vision benchmarks across varying numbers of tasks and model sizes. furthermore, we observe that our approach is applicable to natural language processing tasks with competitive performance.
arxiv:2412.12153
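the sketch below is a rough, assumption - laden rendering of the mechanics described above : centre each task vector on the weight average and keep only its top singular directions. how the low - rank pieces are recombined here ( simple averaging ) and the choice of rank are our own assumptions, not necessarily the paper's merging rule.

```python
# merge fine-tuned weight matrices via centred task vectors + low-rank truncation.
import torch

def merge_centered_lowrank(finetuned, rank=4):
    avg = torch.stack(finetuned).mean(dim=0)               # plain weight averaging
    merged = avg.clone()
    for w in finetuned:
        centered = w - avg                                 # task vector centred on the average
        U, S, Vh = torch.linalg.svd(centered, full_matrices=False)
        lowrank = (U[:, :rank] * S[:rank]) @ Vh[:rank, :]  # keep top singular directions
        merged += lowrank / len(finetuned)
    return merged

torch.manual_seed(0)
pretrained = torch.randn(16, 16)
finetuned = [pretrained + 0.1 * torch.randn(16, 16) for _ in range(3)]
print(merge_centered_lowrank(finetuned).shape)
```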
we investigate a class of non - involutive solutions of the yang - baxter equation which generalize self - distributive ( derived ) solutions. in particular, we study generalized multipermutation solutions in this class. we show that the yang - baxter ( permutation ) groups of such solutions are nilpotent. we formulate the results in the language of biracks.
arxiv:1906.03960
we introduce bayesian multi - tensor factorization, a model that is the first bayesian formulation for joint factorization of multiple matrices and tensors. the research problem generalizes the joint matrix - tensor factorization problem to arbitrary sets of tensors of any depth, including matrices, can be interpreted as unsupervised multi - view learning from multiple data tensors, and can be generalized to relax the usual trilinear tensor factorization assumptions. the result is a factorization of the set of tensors into factors shared by any subsets of the tensors, and factors private to individual tensors. we demonstrate the performance against existing baselines in multiple tensor factorization tasks in structural toxicogenomics and functional neuroimaging.
arxiv:1412.4679
topological network motifs represent functional relationships within and between regulatory and protein - protein interaction networks. enriched motifs often aggregate into self - contained units forming functional modules. theoretical models for network evolution by duplication - divergence mechanisms and for network topology by hierarchical scale - free networks have suggested a one - to - one relation between network motif enrichment and aggregation, but this relation has never been tested quantitatively in real biological interaction networks. here we introduce a novel method for assessing the statistical significance of network motif aggregation and for identifying clusters of overlapping network motifs. using an integrated network of transcriptional, posttranslational and protein - protein interactions in yeast we show that network motif aggregation reflects a local modularity property which is independent of network motif enrichment. in particular our method identified novel functional network themes for a set of motifs which are not enriched yet aggregate significantly and challenges the conventional view that network motif enrichment is the most basic organizational principle of complex networks.
arxiv:1109.1932
calculated by a microprocessor in the receiver. the position can be displayed as latitude and longitude, or as a marker on an electronic map. gps receivers are incorporated in almost all cellphones and in vehicles such as automobiles, aircraft, and ships, and are used to guide drones, missiles, cruise missiles, and even artillery shells to their target, and handheld gps receivers are produced for hikers and the military. radio beacon – a fixed location terrestrial radio transmitter which transmits a continuous radio signal used by aircraft and ships for navigation. the locations of beacons are plotted on navigational maps used by aircraft and ships. vhf omnidirectional range ( vor ) – a worldwide aircraft radio navigation system consisting of fixed ground radio beacons transmitting between 108. 00 and 117. 95 mhz in the very high frequency ( vhf ) band. an automated navigational instrument on the aircraft displays a bearing to a nearby vor transmitter. a vor beacon transmits two signals simultaneously on different frequencies. a directional antenna transmits a beam of radio waves that rotates like a lighthouse at a fixed rate, 30 times per second. when the directional beam is facing north, an omnidirectional antenna transmits a pulse. by measuring the difference in phase of these two signals, an aircraft can determine its bearing ( or " radial " ) from the station accurately. by taking a bearing on two vor beacons an aircraft can determine its position ( called a " fix " ) to an accuracy of about 90 metres ( 300 ft ). most vor beacons also have a distance measuring capability, called distance measuring equipment ( dme ) ; these are called vor / dme ' s. the aircraft transmits a radio signal to the vor / dme beacon and a transponder transmits a return signal. from the propagation delay between the transmitted and received signal the aircraft can calculate its distance from the beacon. this allows an aircraft to determine its location " fix " from only one vor beacon. since line - of - sight vhf frequencies are used vor beacons have a range of about 200 miles for aircraft at cruising altitude. tacan is a similar military radio beacon system which transmits in 962 – 1213 mhz, and a combined vor and tacan beacon is called a vortac. the number of vor beacons is declining as aviation switches to the rnav system that relies on global positioning system satellite navigation. instrument landing system ( ils ) - a short range radio navigation aid at
https://en.wikipedia.org/wiki/Radio
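two of the measurements described in the excerpt above reduce to one - line computations : the vor radial equals the phase lag between the rotating and reference signals, and the dme distance follows from the round - trip delay. the 50 microsecond transponder reply delay used below is a typical value and an assumption, since the text does not give it.

```python
# tiny worked examples of a vor bearing and a dme slant distance.
C = 299_792_458.0                     # speed of light, m/s

def vor_radial(phase_lag_deg):
    # the measured phase lag of the rotating signal equals the radial in degrees
    return phase_lag_deg % 360.0

def dme_distance_km(round_trip_delay_us, transponder_delay_us=50.0):
    # subtract the fixed reply delay, halve for one way, convert to km
    one_way_s = (round_trip_delay_us - transponder_delay_us) * 1e-6 / 2.0
    return one_way_s * C / 1000.0

print(vor_radial(137.0))                     # aircraft on the 137-degree radial
print(round(dme_distance_km(650.0), 1))      # roughly 90 km slant range
```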
the dominant part of the difference between the observed and model frequencies of the sun can be approximated by a power law. we show that when this empirical law is employed to correct the model frequencies and then the small frequency separations are used for solar age determination, the results are consistent with the meteoritic age ( 4. 563 gyr < t < 4. 576 gyr ). we present the results and compare with those obtained by using the ratios of small to large frequency separations.
arxiv:1004.2215
in studies of the dynamic failure of brittle hydrogels, a bound has been placed on the process zone scale - the scale where material separation and ultimate failure occur. for the polyacrylamide hydrogel system under study, this bound is set at 20 microns. thus, any subtle alterations to the material at a \ emph { smaller } scale should not in principle alter the dynamic fracture response of the hydrogel. here we test this directly by embedding sub - micron - scale latex polystyrene microspheres within the brittle polyacrylamide hydrogel at a solids fraction of 0. 1 \ %. we verify that the spheres are well - distributed throughout the hydrogel material at this concentration with optical microscopy, and reconstruct the 3d distribution of these spheres using laser scanning confocal microscopy in backscatter mode. finally, we test the fracture behavior of this gel with the dilute, embedded sub - micron spheres, and find that the brittle material failure modality common to this material \ emph { without } the sub - micron spheres is indeed retained. by comparing the crack tip opening displacement, fracture energy and the crack ' s speed with established data from prior experimental work, we demonstrate that this material ' s failure is brittle, as it is in good agreement with the pure hydrogel system.
arxiv:2004.04137
this paper focuses on the control of a system composed of an unmanned aerial vehicle ( uav ) and an unmanned ground vehicle ( ugv ) which cooperate to manipulate an object. the two units are subject to actuator saturations and cooperate to move the object to a desired pose, characterized by its position and inclination. the paper proposes a control strategy where the ground vehicle is tasked to deploy the object to a certain position, whereas the aerial vehicle adjusts its inclination. the ground vehicle is governed by a saturated proportional - derivative control law. the aerial vehicle is regulated by means of a cascade control specifically designed for this problem that is able to exploit the mechanical interconnection. the stability of the overall system is proved through input - to - state stability and small gain theorem arguments. to solve the problem of constraints satisfaction, a nonlinear reference governor scheme is implemented. numerical simulations are provided to demonstrate the effectiveness of the proposed method.
arxiv:1602.08987
we present a study of the luminosity density distribution of the galactic bar using number counts of red clump giants ( rcgs ) from the ogle - iii survey. the data were recently published by nataf et al. ( 2013 ) for 9019 fields towards the bulge and have $ 2. 94 \ times 10 ^ 6 $ rc stars over a viewing area of $ 90. 25 \, \ textrm { deg } ^ 2 $. the data include the number counts, mean distance modulus ( $ \ mu $ ), dispersion in $ \ mu $ and full error matrix, from which we fit the data with several tri - axial parametric models. we use the markov chain monte carlo ( mcmc ) method to explore the parameter space and find that the best - fit model is the $ e _ 3 $ model, with a distance to the gc of 8. 13 kpc, and that the ratio of the semi - major and semi - minor bar axis scale lengths in the galactic plane, $ x _ { 0 }, y _ { 0 } $, to the vertical bar scale length $ z _ 0 $ is $ x _ 0 : y _ 0 : z _ 0 \ approx 1. 00 : 0. 43 : 0. 40 $ ( close to being prolate ). the scale length of the stellar density profile along the bar ' s major axis is $ \ sim $ 0. 67 kpc and has an angle of $ 29. 4 ^ \ circ $, slightly larger than the value obtained from a similar study based on ogle - ii data. the number of estimated rc stars within the field of view is $ 2. 78 \ times 10 ^ 6 $, which is systematically lower than the observed value. we subtract the smooth parametric model from the observed counts and find that the residuals are consistent with the presence of an x - shaped structure in the galactic centre ; the excess over the estimated mass content is $ \ sim 5. 8 % $. we estimate the total mass of the bar to be $ \ sim 1. 8 \ times 10 ^ { 10 } m _ \ odot $. our results can be used as a key ingredient to construct new density models of the milky way and will have implications on the predictions of the optical depth to gravitational microlensing and the patterns of hydrodynamical gas flow in the milky way.
arxiv:1303.6430
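the fitting machinery described above can be illustrated with a toy metropolis - hastings run : a single - parameter exponential star - count profile fitted to synthetic data. the model, noise level, proposal width and flat prior are illustrative assumptions and have nothing to do with the actual $ e _ 3 $ bar model.

```python
# toy metropolis-hastings fit of an exponential scale length to noisy counts.
import numpy as np

rng = np.random.default_rng(1)
r = np.linspace(0.1, 5.0, 40)                        # kpc
true_scale, amp = 0.67, 1.0e4
counts = amp * np.exp(-r / true_scale) * rng.normal(1.0, 0.05, r.size)
sigma = 0.05 * amp * np.exp(-r / true_scale)

def log_like(scale):
    model = amp * np.exp(-r / scale)
    return -0.5 * np.sum(((counts - model) / sigma) ** 2)

chain, scale = [], 1.0
logp = log_like(scale)
for _ in range(20_000):
    prop = scale + 0.02 * rng.standard_normal()      # random-walk proposal
    if prop > 0:
        logp_prop = log_like(prop)
        if np.log(rng.random()) < logp_prop - logp:  # metropolis acceptance
            scale, logp = prop, logp_prop
    chain.append(scale)

post = np.array(chain[5_000:])                       # discard burn-in
print(f"scale length = {post.mean():.3f} +/- {post.std():.3f} kpc")
```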
in this work, we provide a q - generalization of flexible algebras and related bialgebraic structures, including center - symmetric ( also called antiflexible ) algebras, and their bialgebras. their basic properties are derived and discussed. their connection with known algebraic structures, previously developed in the literature, is established. a q - generalization of myung ' s theorem is given. main properties related to bimodules, matched pairs and dual bimodules as well as their algebraic consequences are investigated and analyzed. finally, the equivalence between q - generalized flexible algebras, their manin triple and bialgebras is established.
arxiv:1712.07751
recommender systems are ubiquitous yet often difficult for users to control, and adjust if recommendation quality is poor. this has motivated conversational recommender systems ( crss ), with control provided through natural language feedback. however, as with most application domains, building robust crss requires training data that reflects system usage : here, conversations with user utterances paired with items that cover a wide range of preferences. this has proved challenging to collect scalably using conventional methods. we address the question of whether it can be generated synthetically, building on recent advances in natural language. we evaluate in the setting of item set recommendation, noting the increasing attention to this task motivated by use cases like music, news, and recipe recommendation. we present talkthewalk, which synthesizes realistic high - quality conversational data by leveraging domain expertise encoded in widely available curated item collections, generating a sequence of hypothetical yet plausible item sets, then using a language model to produce corresponding user utterances. we generate over one million diverse playlist curation conversations in the music domain, and show these contain consistent utterances with relevant item sets nearly matching the quality of an existing but small human - collected dataset for this task. we demonstrate the utility of the generated synthetic dataset on a conversational item retrieval task and show that it improves over both unsupervised baselines and systems trained on a real dataset.
arxiv:2301.11489
we show that a pseudospectral representation of the wavefunction using multiple spatial domains of variable size yields a highly accurate, yet efficient method to solve the time - dependent schr \ " odinger equation. the overall spatial domain is split into non - overlapping intervals whose size is chosen according to the local de broglie wavelength. a multi - domain weak formulation of the schr \ " odinger equation is obtained by representing the wavefunction by lagrange polynomials with compact support in each domain, discretized at the legendre - gauss - lobatto points. the resulting hamiltonian is sparse, allowing for efficient diagonalization and storage. accurate time evolution is carried out by the chebychev propagator, involving only sparse matrix - vector multiplications. our approach combines the efficiency of mapped grid methods with the accuracy of spectral representations based on gaussian quadrature rules and the stability and convergence properties of polynomial propagators. we apply this method to high - harmonic generation and examine the role of the initial state for the harmonic yield near the cutoff.
arxiv:1611.09034
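the chebychev propagator mentioned above needs only matrix - vector products, so it can be shown on a small dense toy hamiltonian. the sketch below expands exp ( - iht ) in chebyshev polynomials with bessel - function coefficients and compares against exact diagonalization ; the matrix size, time and expansion order are illustrative assumptions, and the sparse multi - domain hamiltonian of the paper is not reproduced.

```python
# chebyshev expansion of exp(-i*H*t)|psi> using only matrix-vector products.
import numpy as np
from scipy.special import jv                        # bessel functions of the first kind

def chebyshev_propagate(H, psi, t, order=60):
    emin, emax = np.linalg.eigvalsh(H)[[0, -1]]
    dE, Ebar = (emax - emin) / 2.0, (emax + emin) / 2.0
    Hn = (H - Ebar * np.eye(len(H))) / dE           # spectrum mapped into [-1, 1]
    phi_prev, phi = psi, -1j * (Hn @ psi)           # chebyshev recursion vectors
    out = jv(0, dE * t) * phi_prev + 2 * jv(1, dE * t) * phi
    for k in range(2, order):
        phi_prev, phi = phi, -2j * (Hn @ phi) + phi_prev
        out = out + 2 * jv(k, dE * t) * phi
    return np.exp(-1j * Ebar * t) * out

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = (A + A.T) / 2.0
psi = np.zeros(50, dtype=complex); psi[0] = 1.0

approx = chebyshev_propagate(H, psi, t=1.0)
w, V = np.linalg.eigh(H)
exact = V @ (np.exp(-1j * w * 1.0) * (V.T @ psi))
print("max deviation from exact propagation:", np.max(np.abs(approx - exact)))
```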
in this study, we employed fourier - based quantum phase estimation ( qpe ) to calculate x - ray absorption spectroscopy ( xas ) spectra. the primary focus of this study is the calculation of the xas spectra of transition metal $ l _ { 2, 3 } $ - edges, which are dominated by strong correlation effects. first, the fe $ l _ { 2, 3 } $ - edge x - ray absorption near - edge structure of fepo $ _ 4 $ is calculated using a noiseless simulator. the present computation involves a comparison of three types of input states : a uniform superposition state, optimal entangled input state, and slater function state. subsequently, we investigated the resolution error of the qpe and statistical error attributed to the measurements. it was revealed that post - processing to introduce lorentzian broadening reduces the statistical error, which becomes a significant problem for a large number of qubits. subsequently, we implemented qpe on a trapped - ion quantum computer, encompassing three orbitals within the active space. to this end, we implemented qpe using dynamic circuits to reduce ancilla qubits and [ [ k + 2, k, 2 ] ] quantum error detection code to mitigate the quantum noise inherent in current quantum computers. as a result, it was demonstrated that hardware noise was reduced, and spectra close to the noiseless ones were obtained.
arxiv:2505.08612
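a purely classical stand - in can convey the fourier route to a broadened spectrum that the abstract describes : build the autocorrelation < psi | e ^ { - iht } | psi >, damp it exponentially in time ( which is exactly a lorentzian broadening in frequency ), and fourier transform. the random toy hamiltonian and initial state below are assumptions ; no quantum circuit, transition - metal model or error - detection code is reproduced.

```python
# classical toy: lorentzian-broadened spectrum from the time autocorrelation.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 40))
H = (A + A.T) / 2.0
w, V = np.linalg.eigh(H)

psi = rng.standard_normal(40) + 1j * rng.standard_normal(40)
psi /= np.linalg.norm(psi)
weights = np.abs(V.conj().T @ psi) ** 2              # overlaps with eigenstates

nt, dt, eta = 5000, 0.02, 0.15                       # time grid and broadening width
t = np.arange(nt) * dt
corr = (weights[None, :] * np.exp(-1j * w[None, :] * t[:, None])).sum(axis=1)
corr *= np.exp(-eta * t)                             # lorentzian broadening in time

omega = np.linspace(w.min() - 1.0, w.max() + 1.0, 400)
spectrum = (np.exp(1j * omega[:, None] * t[None, :]) * corr[None, :]).real.sum(axis=1) * dt / np.pi

print("spectrum peak near", round(omega[np.argmax(spectrum)], 2),
      "| largest-weight eigenvalue", round(w[np.argmax(weights)], 2))
```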
the unique properties of a central potential of the form $ - \ beta e ^ { - r } r ^ { \ gamma } $ were studied using the recently developed critical parameter technique. the particular cases of $ \ gamma = 0 $ and $ \ gamma = - 1 $ yield, respectively, the exponential and yukawa potentials widely used in atomic, molecular and nuclear physics. we found different behavior of the energy levels of this potential for three different ranges of the value of $ \ gamma $. for $ \ gamma \ geq0 $ it was found that the energy of bound states with the same principal quantum number $ n $ decreases with increasing angular momentum $ \ ell $. the gaussian and woods - saxon potentials also show this behavior. on the contrary, for $ - 2 \ leq \ gamma \ leq - 1 $ increasing $ \ ell $ gives a higher energy, resembling the hulthen potential. however, a potential with $ - 1 < \ gamma < 0 $ possesses mixed properties, which give rise to several interesting results. for one, the order of energy levels with different quantum numbers is not preserved when varying the parameter $ \ beta $. this leads to a quantum degeneracy of the states, and in fact, for a given value of $ \ gamma $ we can find the values $ \ beta _ { thr } $ for which two energy levels with different quantum numbers coincide. another interesting phenomenon is the possibility, for some values of $ \ gamma $ in this range, for two new energy levels with different quantum numbers to appear simultaneously when $ \ beta $ reaches their common critical value.
arxiv:1205.4408
we present chandra acis - i and acis - s observations ( $ \ sim $ 200 ks in total ) of the x - ray luminous elliptical galaxy ngc 4636, located in the outskirts of the virgo cluster. a soft band ( 0. 5 - 2 kev ) image shows the presence of a bright core in the center surrounded by an extended x - ray corona and two pronounced quasi - symmetric, 8 kpc long, arm - like features. each of these features defines the rim of an ellipsoidal bubble. an additional bubble - like feature, whose northern rim is located $ \ sim2 $ kpc south of the north - eastern arm, is detected as well. we present surface brightness and temperature profiles across the rims of the bubbles, showing that their edges are sharp and characterized by temperature jumps of about 20 - 25 %. through a comparison of the observed profiles with theoretical shock models, we demonstrate that a scenario where the bubbles were produced by shocks, probably driven by energy deposited off - center by jets, is the most viable explanation for the x - ray morphology observed in the central part of ngc 4636.
arxiv:0909.2942
the ability to walk in new scenarios is a key milestone on the path toward real - world applications of legged robots. in this work, we introduce meta strategy optimization, a meta - learning algorithm for training policies with latent variable inputs that can quickly adapt to new scenarios with a handful of trials in the target environment. the key idea behind mso is to expose the same adaptation process, strategy optimization ( so ), to both the training and testing phases. this allows mso to effectively learn locomotion skills as well as a latent space that is suitable for fast adaptation. we evaluate our method on a real quadruped robot and demonstrate successful adaptation in various scenarios, including sim - to - real transfer, walking with a weakened motor, or climbing up a slope. furthermore, we quantitatively analyze the generalization capability of the trained policy in simulated environments. both real and simulated experiments show that our method outperforms previous methods in adaptation to novel tasks.
arxiv:1909.12995
the pseudo - fermion representation for $ s = 1 / 2 $ quantum spins introduces unphysical states in the hilbert space which can be projected out using the popov - fedotov trick. however, state - of - the - art implementations of the functional renormalization group method for pseudo - fermions have so far omitted the popov - fedotov projection. instead, restrictions to zero temperature were made and absence of unphysical contributions to the ground - state was assumed. we question this belief by exact diagonalization of several small - system counterexamples where unphysical states do contribute to the ground state. we then introduce the popov - fedotov projection to pseudo - fermion functional renormalization, enabling finite temperature computations with only minor technical modifications to the method. at large and intermediate temperatures, our results are perturbatively controlled and we confirm their accuracy in benchmark calculations. at lower temperatures, the accuracy degrades due to truncation errors in the hierarchy of flow equations. interestingly, these problems cannot be alleviated by switching to the parquet approximation. we introduce the spin projection as a method - intrinsic quality check. we also show that finite temperature magnetic ordering transitions can be studied via finite - size scaling.
arxiv:2209.13484
in this work, a stochastic, physics - based model for lithium - ion batteries ( libs ) is presented in order to study the effects of parametric model uncertainties on the cell capacity, voltage, and concentrations. to this end, the proposed uncertainty quantification ( uq ) approach, based on sparse polynomial chaos expansions, relies on a small number of battery simulations. within this uq framework, the identification of most important uncertainty sources is achieved by performing a global sensitivity analysis via computing the so - called sobol ' indices. such information aids in designing more efficient and targeted quality control procedures, which consequently may result in reducing the lib production cost. an lic $ _ 6 $ / licoo $ _ 2 $ cell with 19 uncertain parameters discharged at 0. 25c, 1c and 4c rates is considered to study the performance and accuracy of the proposed uq approach. the results suggest that, for the considered cell, the battery discharge rate is a key factor affecting not only the performance variability of the cell, but also the determination of most important random inputs.
arxiv:1505.07776
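the global sensitivity step described above can be sketched with a plain monte carlo pick - and - freeze estimator of first - order sobol ' indices, here applied to an invented three - parameter surrogate rather than the 19 - parameter electrochemical model, and without the sparse polynomial chaos machinery.

```python
# first-order sobol' indices via a saltelli-style pick-and-freeze estimator.
import numpy as np

def model(x):
    # toy capacity-like surrogate; purely illustrative
    return 10.0 + 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(3)
n, d = 50_000, 3
A = rng.uniform(-1.0, 1.0, (n, d))
B = rng.uniform(-1.0, 1.0, (n, d))
yA, yB = model(A), model(B)
var = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # replace column i only
    S_i = np.mean(yB * (model(ABi) - yA)) / var  # first-order index estimate
    print(f"first-order index of parameter {i + 1}: {S_i:.3f}")
```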
we study the diffusion and thermal conductivity of charged particles in the weakly ionized plasma with the power - law q - distributions in nonextensive statistics and without the magnetic field. electrons and ions have different q - parameters and temperature. we derive new expressions of the diffusion, thermal diffusion, thermal conductivity and thermoelectric coefficients of electrons and ions respectively in the plasma. it is shown that these transport coefficients depend significantly on the q - parameters in the power - law q - distributed plasma and thus they have different properties from those derived in the traditional statistics with a maxwellian distribution.
arxiv:1903.03589
kernels are powerful and versatile tools in machine learning and statistics. although the notion of universal kernels and characteristic kernels has been studied, kernel selection still greatly influences the empirical performance. while learning the kernel in a data driven way has been investigated, in this paper we explore learning the spectral distribution of kernel via implicit generative models parametrized by deep neural networks. we called our method implicit kernel learning ( ikl ). the proposed framework is simple to train and inference is performed via sampling random fourier features. we investigate two applications of the proposed ikl as examples, including generative adversarial networks with mmd ( mmd gan ) and standard supervised learning. empirically, mmd gan with ikl outperforms vanilla predefined kernels on both image and text generation benchmarks ; using ikl with random kitchen sinks also leads to substantial improvement over existing state - of - the - art kernel learning algorithms on popular supervised learning benchmarks. theory and conditions for using ikl in both applications are also studied as well as connections to previous state - of - the - art methods.
arxiv:1902.10214
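inference in the method above goes through sampling random fourier features from a learned spectral distribution. the sketch below shows the feature map and the kernel it induces for the simplest fixed choice, a gaussian spectral distribution ( which recovers the rbf kernel ) ; the learned generator itself is not modelled here, and the dimensions are illustrative assumptions.

```python
# random fourier features: phi(x) = sqrt(2/D) cos(Wx + b) approximates k(x, y).
import numpy as np

def rff(X, W, b):
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

rng = np.random.default_rng(4)
d, D, gamma = 5, 2000, 0.5
W = np.sqrt(2.0 * gamma) * rng.standard_normal((D, d))   # spectral samples of the rbf kernel
b = rng.uniform(0.0, 2.0 * np.pi, D)

X = rng.standard_normal((100, d))
K_approx = rff(X, W, b) @ rff(X, W, b).T
K_exact = np.exp(-gamma * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
print("max kernel approximation error:", np.max(np.abs(K_approx - K_exact)))
```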
we investigate the cohomology rings of regular semisimple hessenberg varieties whose hessenberg functions are of the form $ h = ( h ( 1 ), n, \ dots, n ) $ in lie type $ a _ { n - 1 } $. the main result of this paper gives an explicit presentation of the cohomology rings in terms of generators and their relations. our presentation naturally specializes to borel ' s presentation of the cohomology ring of the flag variety and it is compatible with the representation of the symmetric group $ \ mathfrak { s } _ n $ on the cohomology constructed by j. tymoczko. as a corollary, we also give an explicit presentation of the $ \ mathfrak { s } _ n $ - invariant subring of the cohomology ring.
arxiv:1704.00934
we consider the optimal stopping problem with non - linear $ f $ - expectation ( induced by a bsde ) without making any regularity assumptions on the reward process $ \ xi $ and with a general filtration. we show that the value family can be aggregated by an optional process $ y $. we characterize the process $ y $ as the $ \ mathcal { e } ^ f $ - snell envelope of $ \ xi $. we also establish an infinitesimal characterization of the value process $ y $ in terms of a reflected bsde with $ \ xi $ as the obstacle. to do this, we first establish a comparison theorem for irregular rbsdes. we give an application to the pricing of american options with irregular pay - off in an imperfect market model.
arxiv:1611.09179
we study the alpha = j _ 2 / j _ 1 - dependence of the magnetization process in the j _ 1 - j _ 2 model on a square lattice with frustrating couplings j _ 2 along the diagonals. perturbation expansions around alpha = j _ 2 / j _ 1 = 0 and 1 / alpha = 0 yield an adequate description of the magnetization curve in the antiferromagnetic and collinear antiferromagnetic phase, respectively. the transition from one phase to the other ( 0. 5 < alpha < 0. 7 ) leaves pronounced structures in the longitudinal and transverse structure factors at p = ( pi, pi ) and p = ( 0, pi ).
arxiv:cond-mat/0110341
igrj19294 + 1816 was discovered by integral in 2009 during a bright x - ray outburst and was classified as a possible be x - ray binary or supergiant fast x - ray transient. on 2010 october 28, the source displayed a second x - ray outburst and a two - month - long monitoring campaign with swift was carried out to follow the evolution of the source x - ray flux during the event. we report on the integral and swift observations of the second x - ray outburst observed from igrj19294 + 1816. we detected pulsations in the x - ray emission from the source at \ sim12. 5 s up to 50 kev. the source x - ray flux decreased smoothly during the two months of observation, displaying only marginal spectral changes. due to the relatively rapid decay of the source x - ray flux, no significant variations of the source spin period across the event could be measured. this prevented a firm confirmation of the previously suggested orbital period of the source at 117 d. this periodicity was also searched for by using archival swift / bat data. we detected a marginally significant peak in the periodogram and determined the best period at 116. 2 \ pm0. 6 days ( estimated chance probability of a spurious detection 1 % ). the smooth decline of the source x - ray flux across the two months of observations after the onset of the second outburst, together with its relatively low value of the spin period and the absence of remarkable changes in the spectral parameters ( i. e., the absorption column density ), suggests that igrj19294 + 1816 is most likely another member of the be x - ray binaries discovered by integral and not a supergiant fast x - ray transient.
arxiv:1105.2727
we show that every $ 3 $ - connected $ k _ { 2, \ ell } $ - minor free graph with minimum degree at least $ 4 $ has maximum degree at most $ 7 \ ell $. as a consequence, we show that every 3 - connected $ k _ { 2, \ ell } $ - minor free graph with minimum degree at least $ 5 $ and no twins of degree $ 5 $ has bounded size. our proofs use steiner trees and nested cuts ; in particular, they do not rely on ding ' s characterization of $ k _ { 2, \ ell } $ - minor free graphs.
arxiv:2301.02133
= = like intuitionism, constructivism involves the regulative principle that only mathematical entities which can be explicitly constructed in a certain sense should be admitted to mathematical discourse. in this view, mathematics is an exercise of the human intuition, not a game played with meaningless symbols. instead, it is about entities that we can create directly through mental activity. in addition, some adherents of these schools reject non - constructive proofs, such as using proof by contradiction when showing the existence of an object or when trying to establish the truth of some proposition. important work was done by errett bishop, who managed to prove versions of the most important theorems in real analysis as constructive analysis in his 1967 foundations of constructive analysis. = = = = finitism = = = = finitism is an extreme form of constructivism, according to which a mathematical object does not exist unless it can be constructed from natural numbers in a finite number of steps. in her book philosophy of set theory, mary tiles characterized those who allow countably infinite objects as classical finitists, and those who deny even countably infinite objects as strict finitists. the most famous proponent of finitism was leopold kronecker, who said : god created the natural numbers, all else is the work of man. ultrafinitism is an even more extreme version of finitism, which rejects not only infinities but finite quantities that cannot feasibly be constructed with available resources. another variant of finitism is euclidean arithmetic, a system developed by john penn mayberry in his book the foundations of mathematics in the theory of sets. mayberry ' s system is aristotelian in general inspiration and, despite his strong rejection of any role for operationalism or feasibility in the foundations of mathematics, comes to somewhat similar conclusions, such as, for instance, that super - exponentiation is not a legitimate finitary function. = = = structuralism = = = structuralism is a position holding that mathematical theories describe structures, and that mathematical objects are exhaustively defined by their places in such structures, consequently having no intrinsic properties. for instance, it would maintain that all that needs to be known about the number 1 is that it is the first whole number after 0. likewise all the other whole numbers are defined by their places in a structure, the number line. other examples of mathematical objects might include lines and planes in geometry, or elements and operations in abstract algebra. structuralism is an epistemologically realistic view in that it holds that
https://en.wikipedia.org/wiki/Philosophy_of_mathematics
one - shot generative domain adaptation aims to transfer a pre - trained generator on one domain to a new domain using one reference image only. however, it remains very challenging for the adapted generator ( i ) to generate diverse images inherited from the pre - trained generator while ( ii ) faithfully acquiring the domain - specific attributes and styles of the reference image. in this paper, we present a novel one - shot generative domain adaptation method, i. e., difa, for diverse generation and faithful adaptation. for global - level adaptation, we leverage the difference between the clip embedding of the reference image and the mean embedding of source images to constrain the target generator. for local - level adaptation, we introduce an attentive style loss which aligns each intermediate token of the adapted image with its corresponding token of the reference image. to facilitate diverse generation, selective cross - domain consistency is introduced to select and retain the domain - sharing attributes in the editing latent $ \ mathcal { w } + $ space to inherit the diversity of the pre - trained generator. extensive experiments show that our method outperforms the state of the art both quantitatively and qualitatively, especially for the cases of large domain gaps. moreover, our difa can easily be extended to zero - shot generative domain adaptation with appealing results. code is available at https : / / github. com / 1170300521 / difa.
arxiv:2207.08736
we report 4. 5 micron luminosities for 27 nearby ( d < 5 mpc ) dwarf irregular galaxies measured with the spitzer infrared array camera. we have constructed the 4. 5 micron luminosity - metallicity ( l - z ) relation for 25 dwarf galaxies with secure distance and interstellar medium oxygen abundance measurements. the 4. 5 micron l - z relation is 12 + log ( o / h ) = ( 5. 78 + / - 0. 21 ) + ( - 0. 122 + / - 0. 012 ) m _ [ 4. 5 ], where m _ [ 4. 5 ] is the absolute magnitude at 4. 5 micron. the dispersion in the near - infrared l - z relation is smaller than the corresponding dispersion in the optical l - z relation. the subsequently derived stellar mass - metallicity m - z relation is 12 + log ( o / h ) = ( 5. 65 + / - 0. 23 ) + ( 0. 298 + / - 0. 030 ) log mstar, and extends the sdss m - z relation to lower mass by about 2. 5 dex. we find that the dispersion in the m - z relation is similar over five orders of magnitude in stellar mass, and that the relationship between stellar mass and interstellar medium metallicity is similarly tight from high - mass to low - mass systems. we find a larger scatter at low mass in the relation between effective yield and total baryonic mass. in fact, there are a few dwarf galaxies with large yields, which is difficult to explain if galactic winds are ubiquitous in dwarf galaxies. the low scatter in the l - z and m - z relationships is difficult to understand if galactic superwinds or blowout are responsible for the low metallicities at low mass or luminosity. naively, one would expect an ever increasing scatter at lower masses, which is not observed.
arxiv:astro-ph/0605036
the lux - zeplin ( lz ) experiment will search for dark matter particle interactions with a detector containing a total of 10 tonnes of liquid xenon within a double - vessel cryostat. the large mass and proximity of the cryostat to the active detector volume demand the use of material with extremely low intrinsic radioactivity. we report on the radioassay campaign conducted to identify suitable metals, the determination of factors limiting radiopure production, and the selection of titanium for construction of the lz cryostat and other detector components. this titanium has been measured with activities of $ ^ { 238 } $ u $ _ { e } $ ~ $ < $ 1. 6 ~ mbq / kg, $ ^ { 238 } $ u $ _ { l } $ ~ $ < $ 0. 09 ~ mbq / kg, $ ^ { 232 } $ th $ _ { e } $ ~ $ = 0. 28 \ pm 0. 03 $ ~ mbq / kg, $ ^ { 232 } $ th $ _ { l } $ ~ $ = 0. 25 \ pm 0. 02 $ ~ mbq / kg, $ ^ { 40 } $ k ~ $ < $ 0. 54 ~ mbq / kg, and $ ^ { 60 } $ co ~ $ < $ 0. 02 ~ mbq / kg ( 68 \ % cl ). such low intrinsic activities, which are some of the lowest ever reported for titanium, enable its use for future dark matter and other rare event searches. monte carlo simulations have been performed to assess the expected background contribution from the lz cryostat with this radioactivity. in 1, 000 days of wimp search exposure of a 5. 6 - tonne fiducial mass, the cryostat will contribute only a mean background of $ 0. 160 \ pm0. 001 $ ( stat ) $ \ pm0. 030 $ ( sys ) counts.
arxiv:1702.02646
we consider the 2d hubbard model in the strong - coupling case ( u > > w ) and at low electron density ( nd ^ 2 < < 1 ). we find an antibound state as a pole in the two - particle t - matrix. the contribution of this pole in the self - energy reproduces a two - pole structure in the dressed one - particle green - function similar to the hubbard - i approximation. we also discuss briefly the engelbrecht - randeria mode which corresponds to the pairing of two holes below the bottom of the band for u > > w and low electron density. both poles produce non - trivial corrections to landau fermi - liquid picture already at low electron density but do not destroy it in 2d
arxiv:1105.4428
in recent years, natural language processing ( nlp ) models have demonstrated remarkable capabilities in various domains beyond traditional text generation. in this work, we introduce peptidegpt, a protein language model tailored to generate protein sequences with distinct properties : hemolytic activity, solubility, and non - fouling characteristics. to facilitate a rigorous evaluation of these generated sequences, we established a comprehensive evaluation pipeline consisting of ideas from bioinformatics to retain valid proteins with ordered structures. first, we rank the generated sequences based on their perplexity scores, then we filter out those lying outside the permissible convex hull of proteins. finally, we predict the structure using esmfold and select the proteins with plddt values greater than 70 to ensure ordered structure. the properties of generated sequences are evaluated using task - specific classifiers - peptidebert and happenn. we achieved an accuracy of 76. 26 % in hemolytic, 72. 46 % in non - hemolytic, 78. 84 % in non - fouling, and 68. 06 % in solubility protein generation. our experimental results demonstrate the effectiveness of peptidegpt in de novo protein design and underscore the potential of leveraging nlp - based approaches for paving the way for future innovations and breakthroughs in synthetic biology and bioinformatics. codes, models, and data used in this study are freely available at : https : / / github. com / aayush - shah14 / peptidegpt.
arxiv:2410.19222
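the first filter of the evaluation pipeline, ranking generated sequences by perplexity, can be sketched with the hugging face causal - lm interface. the checkpoint name below ( " gpt2 " ) is only a stand - in for the actual peptidegpt weights, the toy sequences are invented, and the later convex - hull and esmfold plddt filters are not shown.

```python
# rank candidate sequences by perplexity under a causal language model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                 # stand-in checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(seq: str) -> float:
    enc = tok(seq, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss   # mean token negative log-likelihood
    return float(torch.exp(loss))

candidates = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "GGGGGGGGGGGG", "MALWMRLLPLL"]
for seq in sorted(candidates, key=perplexity):              # lower = more model-like
    print(f"{perplexity(seq):10.1f}  {seq}")
```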
the diverse isotopic and elemental signatures produced in different nucleosynthetic sites are passed on to successive generations of stars. by tracing these chemical signatures back through the stellar populations of the galaxy, it is possible to unravel its nucleosynthetic history and even to study stars which are now extinct. this review considers recent applications of " stellar genetics " to examine the earliest episodes of nucleosynthesis in the universe, in population iii stars and the big bang.
arxiv:astro-ph/0111109
we study a space - fractional stefan problem with dirichlet boundary conditions. it is a model that describes superdiffusive phenomena. our main result is the existence of a unique classical solution to this problem. in the proof we apply the theory of evolution operators and the schauder fixed point theorem. it appears that studying the fractional stefan problem with dirichlet boundary conditions requires substantial modifications of the approach in comparison with the existing results for problems with different kinds of boundary conditions.
arxiv:2308.03502
we give a combinatorial description of the rational cohomology of the moduli spaces of pointed genus 1 curves with $ n $ marked points and level $ n $ structures. more precisely, we explicitly describe the $ e _ 2 $ term of the leray spectral sequence of the forgetful mapping $ \ euscript { m } _ { 1, n } ( n ) \ to \ euscript { m } _ { 1, 1 } ( n ) $ and show that the result is isomorphic to the rational cohomology of $ \ euscript { m } _ { 1, n } ( n ) $ as a rational mixed hodge structure equipped with an action of the symmetric group $ \ mathfrak { s } _ n $. the classical moduli space $ \ euscript { m } _ { 1, n } $ is the particular case n = 1.
arxiv:1303.5693
we introduce gauge - invariant quark and gluon angular momentum distributions after making a generalization of the angular momentum density operators. from the quark angular momentum distribution, we define the gauge - invariant and leading - twist quark { \ it orbital } angular momentum distribution $ l _ q ( x ) $. the latter can be extracted from data on the polarized and unpolarized quark distributions and the off - forward distribution $ e ( x ) $ in the forward limit. we comment upon the evolution equations obeyed by this as well as other orbital distributions considered in the literature.
arxiv:hep-ph/9804337
graphs play a central role in modeling complex relationships in data, yet most graph learning methods falter when faced with cold - start nodes - - new nodes lacking initial connections - - due to their reliance on adjacency information. to tackle this, we propose sparc, a groundbreaking framework that introduces a novel approach to graph learning by utilizing generalizable spectral embeddings. with a simple yet powerful enhancement, sparc empowers state - of - the - art methods to make predictions on cold - start nodes effectively. by eliminating the need for adjacency information during inference and effectively capturing the graph ' s structure, we make these methods suitable for real - world scenarios where new nodes frequently appear. experimental results demonstrate that our framework outperforms existing models on cold - start nodes across tasks such as node classification, node clustering, and link prediction. sparc provides a solution to the cold - start problem, advancing the field of graph learning.
arxiv:2411.01532
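the core idea above, an embedding that new edge - less nodes can still receive, can be sketched in a few lines : compute laplacian eigenvector embeddings for the observed graph, then fit a map from node features to those embeddings so that a cold - start node is embedded from its features alone. the random graph, random features and ridge regressor below are toy assumptions, not the sparc architecture.

```python
# spectral embeddings for training nodes + a feature-to-embedding map for cold starts.
import numpy as np

rng = np.random.default_rng(5)
n, d, k = 200, 16, 8
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T                       # symmetric adjacency, no self-loops
X = rng.standard_normal((n, d))                      # node features

L = np.diag(A.sum(axis=1)) - A                       # combinatorial laplacian
w, V = np.linalg.eigh(L)
Z = V[:, 1:k + 1]                                    # skip the trivial constant eigenvector

lam = 1e-2                                           # ridge regression: features -> embedding
W_map = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Z)

x_new = rng.standard_normal(d)                       # cold-start node: features, no edges
z_new = x_new @ W_map                                # predicted spectral embedding
print(z_new.round(3))
```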
accurate and up - to - date models describing the behavior of software systems are seldom available in practice. to address this issue, software engineers may use specification mining techniques, which can automatically derive models that capture the behavior of the system under analysis. so far, most specification mining techniques focused on the functional behavior of the systems, with specific emphasis on models that represent the ordering of operations, such as temporal rules and finite state models. although useful, these models are inherently partial. for instance, they miss the timing behavior, which is extremely relevant for many classes of systems and components, such as shared libraries and user - driven applications. mining specifications that include both the functional and the timing aspects can improve the applicability of many testing and analysis solutions. this paper addresses this challenge by presenting the timed k - tail ( tkt ) specification mining technique that can mine timed automata from program traces. since timed automata can effectively represent the interplay between the functional and the timing behavior of a system, tkt could be exploited in those contexts where time - related information is relevant. our empirical evaluation shows that tkt can efficiently and effectively mine accurate models. the mined models have been used to identify executions with anomalous timing. the evaluation shows that most of the anomalous executions have been correctly identified while producing few false positives.
arxiv:1705.08399
we represent the slow, glassy equilibrium dynamics of a line in a two - dimensional random potential landscape as driven by an array of asymptotically independent two - state systems, or loops, fluctuating on all length scales. the assumption of independence enables a fairly complete analytic description. we obtain good agreement with monte carlo simulations when the free energy barriers separating the two sides of a loop of size l are drawn from a distribution whose width and mean scale as l ^ ( 1 / 3 ), in agreement with recent results for scaling of such barriers.
arxiv:cond-mat/9512039
since the onset of the covid - 19 outbreak in wuhan, china, numerous forecasting models have been proposed to project the trajectory of coronavirus infection cases. we propose a new discrete - time markov chain transition matrix model that directly incorporates stochastic behavior and for which parameter estimation is straightforward from available data. using such data from china ' s hubei province ( for which wuhan is the provincial capital city ), the model is shown to be flexible, robust, and accurate. as a result, it has been adopted by the first shanghai assistance medical team in wuhan ' s jinyintan hospital, which was the first designated hospital to take covid - 19 patients in the world. the forecast has been used for preparing medical staff, intensive care unit ( icu ) beds, ventilators, and other critical care medical resources and for supporting real - time medical management decisions. empirical data from china ' s first two months ( january / february ) of fighting covid - 19 was collected and used to enhance the model by embedding npi efficiency into the model. we applied the model to forecast italy, south korea, and iran on march 9. later we made forecasts for spain, germany, france, us on march 24. again, the model has performed very well, proven to be flexible, robust, and accurate for most of these countries / regions outside china.
arxiv:2007.01201
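a minimal version of the discrete - time markov chain model described above is a row - stochastic daily transition matrix acting on a vector of population shares. the states and transition probabilities below are invented for illustration and are not the estimates obtained from the hubei data.

```python
# propagate population shares with a daily markov transition matrix.
import numpy as np

states = ["susceptible", "infected", "hospitalised", "recovered", "deceased"]
P = np.array([                         # row-stochastic daily transition matrix (illustrative)
    [0.995, 0.005, 0.000, 0.000, 0.000],
    [0.000, 0.900, 0.080, 0.020, 0.000],
    [0.000, 0.000, 0.850, 0.130, 0.020],
    [0.000, 0.000, 0.000, 1.000, 0.000],
    [0.000, 0.000, 0.000, 0.000, 1.000],
])
assert np.allclose(P.sum(axis=1), 1.0)

x = np.array([0.999, 0.001, 0.0, 0.0, 0.0])          # initial population shares
for day in range(0, 61, 15):
    print(day, dict(zip(states, x.round(4))))
    x = x @ np.linalg.matrix_power(P, 15)            # advance 15 days
```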
the susceptibility of deep neural networks ( dnns ) to adversarial attacks undermines their reliability across numerous applications, underscoring the necessity for an in - depth exploration of these vulnerabilities and the formulation of robust defense strategies. the deepfool algorithm by moosavi - dezfooli et al. ( 2016 ) represents a pivotal step in identifying minimal perturbations required to induce misclassification of input images. nonetheless, its generic methodology falls short in scenarios necessitating targeted interventions. additionally, previous research studies have predominantly concentrated on the success rate of attacks without adequately addressing the consequential distortion of images, the maintenance of image quality, or the confidence threshold required for misclassification. to bridge these gaps, we introduce the enhanced targeted deepfool ( et deepfool ) algorithm, an evolution of deepfool that not only facilitates the specification of desired misclassification targets but also incorporates a configurable minimum confidence score. our empirical investigations demonstrate the superiority of this refined approach in maintaining the integrity of images and minimizing perturbations across a variety of dnn architectures. unlike previous iterations, such as the targeted deepfool by gajjar et al. ( 2022 ), our method grants unparalleled control over the perturbation process, enabling precise manipulation of model responses. preliminary outcomes reveal that certain models, including alexnet and the advanced vision transformer, display commendable robustness to such manipulations. this discovery of varying levels of model robustness, as unveiled through our confidence level adjustments, could have far - reaching implications for the field of image recognition. our code is available at https : / / github. com / fazlelabib / et _ deepfool.
arxiv:2310.13019
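the sketch below captures only the interface the abstract emphasises, a chosen target class and a configurable confidence threshold, using a plain iterative gradient attack ; it is not the authors ' enhanced targeted deepfool algorithm, and the toy linear classifier and step size are assumptions.

```python
# targeted iterative gradient attack with a confidence-based stopping rule.
import torch
import torch.nn.functional as F

def targeted_attack(model, x, target, conf=0.95, step=1e-2, max_iter=200):
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(max_iter):
        logits = model(x_adv)
        if F.softmax(logits, dim=1)[0, target] >= conf:   # configurable confidence stop
            break
        loss = F.cross_entropy(logits, torch.tensor([target]))
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - step * grad.sign()).detach().requires_grad_(True)
    return x_adv.detach(), (x_adv - x).detach()

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
x = torch.rand(1, 3, 8, 8)
x_adv, delta = targeted_attack(model, x, target=3)
print("target probability:", float(F.softmax(model(x_adv), dim=1)[0, 3]),
      "| perturbation l2 norm:", float(delta.norm()))
```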