text | source
---|---
Pixel-based language models have emerged as a compelling alternative to subword-based language modelling, particularly because they can represent virtually any script. PIXEL, a canonical example of such a model, is a vision transformer that has been pre-trained on rendered text. While PIXEL has shown promising cross-script transfer abilities and robustness to orthographic perturbations, it falls short of outperforming monolingual subword counterparts like BERT in most other contexts. This discrepancy raises questions about the amount of linguistic knowledge learnt by these models and whether their performance in language tasks stems more from their visual capabilities than from their linguistic ones. To explore this, we probe PIXEL using a variety of linguistic and visual tasks to assess its position on the vision-to-language spectrum. Our findings reveal a substantial gap between the model's visual and linguistic understanding. The lower layers of PIXEL predominantly capture superficial visual features, whereas the higher layers gradually learn more syntactic and semantic abstractions. Additionally, we examine variants of PIXEL trained with different text rendering strategies, discovering that introducing certain orthographic constraints at the input level can facilitate earlier learning of surface-level features. With this study, we hope to provide insights that aid the further development of pixel-based language models.
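A layer-wise probe of the kind used above is, at its core, a linear classifier trained on frozen hidden states. The sketch below illustrates only that recipe; the hidden-state shapes, the random stand-in data, and the 17-class label set are assumptions, not details from the paper.

```python
# Minimal layer-wise probing sketch (hypothetical shapes and data;
# not the paper's code or tasks).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_layer(states: np.ndarray, labels: np.ndarray) -> float:
    """Fit a linear probe on frozen representations from one layer and
    return held-out accuracy, a proxy for how linearly decodable the
    target property (e.g., POS tags) is at that depth."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        states, labels, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# Stand-in for per-layer hidden states of shape (n_tokens, hidden_dim).
rng = np.random.default_rng(0)
labels = rng.integers(0, 17, size=500)          # e.g., 17 UPOS classes
for layer in range(12):
    states = rng.normal(size=(500, 768))
    print(f"layer {layer:2d}: probe accuracy {probe_layer(states, labels):.2f}")
```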
|
arxiv:2410.12011
|
Structural condition identification based on monitoring data is important for automatic civil infrastructure asset management. Nevertheless, the monitoring data are almost always insufficient, because the real-time monitoring data of a structure only reflect a limited number of structural conditions, while the number of possible structural conditions is infinite. With insufficient monitoring data, the identification performance may significantly degrade. This study aims to tackle this challenge by proposing a deep transfer learning (TL) approach for structural condition identification. It effectively integrates physics-based and data-driven methods by generating various training data based on the calibrated finite element (FE) model, pretraining a deep learning (DL) network, and transferring its embedded knowledge to the real monitoring/testing domain. Its performance is demonstrated in a challenging case: vibration-based condition identification of steel frame structures with bolted connection damage. The results show that even though the training data are from a different domain and with different types of labels, intrinsic physics can be learned through the pretraining process, and the TL results can be clearly improved, with the identification accuracy increasing from 81.8% to 89.1%. The comparative studies show that SHMnet with three convolutional layers stands out as the pretraining DL architecture, with 21.8% and 25.5% higher identification accuracy values over the other two networks, VGGNet-16 and ResNet-18. The findings of this study advance the potential application of the proposed approach towards expert-level condition identification based on limited real-world training data.
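The pretrain-then-transfer loop can be sketched as below. The toy network, channel sizes, signal shapes, and the choice to freeze the feature extractor are illustrative assumptions; the paper's SHMnet and its training details are not reproduced here.

```python
# Sketch of FE-pretraining followed by transfer to real monitoring data
# (assumed shapes and a toy network; not SHMnet itself).
import torch
import torch.nn as nn

class ToyCNN(nn.Module):
    """Three conv blocks over 1D vibration signals, echoing the
    three-convolutional-layer depth reported best above."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, 7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x))

def fit(model, params, x, y, epochs=5):
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

model = ToyCNN(n_classes=5)
# 1) Pretrain on FE-model-generated signals (source domain).
fe_x, fe_y = torch.randn(64, 1, 256), torch.randint(0, 5, (64,))
fit(model, model.parameters(), fe_x, fe_y)
# 2) Transfer: freeze the features, retrain the head on scarce real data.
for p in model.features.parameters():
    p.requires_grad = False
model.head = nn.Linear(64, 3)            # real task has different labels
real_x, real_y = torch.randn(16, 1, 256), torch.randint(0, 3, (16,))
fit(model, model.head.parameters(), real_x, real_y)
```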
|
arxiv:2307.15249
|
Agents affected by their own future states in a one-dimensional discrete dynamical system (1-DDS) can replicate two-dimensional images. It is shown that such replication requires a toroidal spacetime, and three rules are needed to calculate the number of iterations required for exact replication. It is argued that the retrocausal updating used by a 1-DDS can replicate any n-dimensional digital object. It is shown that the way iterations approach a final image differs between randomly generated images and non-random, meaningful images. Two instances of real-world events that seem to imply such retrocausal replication are discussed.
|
arxiv:2009.00835
|
We construct gravitational modifications that go beyond Horndeski, namely theories with extended nonminimal derivative couplings, in which the coefficient functions depend not only on the scalar field but also on its kinetic energy. Such theories prove to be ghost-free in a cosmological background. We investigate the early-time cosmology and show that a de Sitter inflationary phase can be realized as a pure result of the novel gravitational couplings. Additionally, we study the late-time evolution, where we obtain an effective dark energy sector which arises from the scalar field and its extended couplings to gravity. We extract various cosmological observables and analyse their behavior at small redshifts for three choices of potentials, namely the exponential, the power-law, and the Higgs potential. We show that the universe passes from deceleration to acceleration in the recent cosmological past, while the effective dark-energy equation-of-state parameter tends to the cosmological-constant value at present. Finally, the effective dark energy can be phantom-like, although the scalar field is canonical, which is an advantage of the model.
|
arxiv:1609.01503
|
We consider a one-dimensional family of rational surfaces with automorphisms. In a degeneration of this family, the limiting map is the identity map on a special fiber. We check that the map on the total space of the family has indeterminacy in the special fiber. However, we show that after blowing up at an indeterminate curve, there is an induced birational map on the exceptional divisor over the indeterminate curve. Moreover, we show that this map has dynamical degree 16.
|
arxiv:2407.20896
|
Break-junction experiments allow investigating electronic and spintronic properties at the atomic and molecular scale. These experiments generate, by their very nature, broad and asymmetric distributions of the observables of interest, and thus a full statistical interpretation is warranted. We show here that understanding the complete distribution is essential for obtaining reliable estimates. We demonstrate this for Au atomic point contacts, where by adopting Bayesian reasoning we can reliably estimate the distance to the transition state, $x^{\ddag}$, the associated free energy barrier, $\Delta G^{\ddag}$, and the curvature $\nu$ of the free energy surface. Obtaining robust estimates requires less experimental effort than with previous methods and fewer assumptions, and thus leads to a significant reassessment of the kinetic parameters in this paradigmatic atomic-scale structure. Our proposed Bayesian reasoning offers a powerful and general approach for interpreting inherently stochastic data that yield broad, asymmetric distributions for which analytical models of the distribution may be developed.
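The core of such an analysis is a posterior over model parameters computed from the full, skewed sample rather than from its mean alone. The sketch below does this for a generic log-normal stand-in on a parameter grid with flat priors; the physical model linking parameters to $x^{\ddag}$, $\Delta G^{\ddag}$, and $\nu$ is in the paper and is not reproduced.

```python
# Grid-based Bayesian posterior for a broad, asymmetric sample
# (log-normal stand-in; flat priors; illustrative only).
import numpy as np

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.8, size=200)   # stand-in observable

mus = np.linspace(-1.0, 1.0, 201)
sigmas = np.linspace(0.1, 2.0, 191)
logd = np.log(data)
# Log-likelihood of i.i.d. log-normal data on the (mu, sigma) grid,
# dropping constants that cancel after normalization.
logL = np.array([[-data.size * np.log(s)
                  - 0.5 * np.sum(((logd - mu) / s) ** 2)
                  for s in sigmas] for mu in mus])
post = np.exp(logL - logL.max())
post /= post.sum()

mu_hat = (post.sum(axis=1) * mus).sum()
sigma_hat = (post.sum(axis=0) * sigmas).sum()
print(f"posterior means: mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")
```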
|
arxiv:2309.10812
|
In this paper, we propose trellis coded quantization (TCQ) based limited feedback techniques for massive multiple-input single-output (MISO) frequency division duplexing (FDD) systems in temporally and spatially correlated channels. We exploit the correlation present in the channel to effectively quantize channel direction information (CDI). For multiuser (MU) systems with matched-filter (MF) precoding, we show that the number of feedback bits required by the random vector quantization (RVQ) codebook to match even a small fraction of the perfect-CDI signal-to-interference-plus-noise ratio (SINR) performance is large. With such large numbers of bits, the exhaustive search required by conventional codebook approaches makes them infeasible for massive MISO systems. Motivated by this, we propose a differential TCQ scheme for temporally correlated channels that transforms the source constellation at each stage in a trellis using 2D translation and scaling techniques. We derive a scaling parameter for the source constellation as a function of the temporal correlation and the number of BS antennas. We also propose a TCQ based limited feedback scheme for spatially correlated channels where the channel is quantized directly without performing decorrelation at the receiver. Simulation results show that the proposed TCQ schemes outperform the existing noncoherent TCQ (NTCQ) schemes by improving the spectral efficiency and beamforming gain of the system. The proposed differential TCQ also reduces the feedback overhead of the system compared to the differential NTCQ method.
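For intuition about why the RVQ baseline above becomes infeasible, the sketch below quantizes a channel direction against a random codebook with an explicit exhaustive search, whose cost grows as 2^B. The antenna count, bit budget, and correlation criterion are illustrative assumptions, not the paper's settings.

```python
# RVQ baseline sketch: exhaustive search over a random codebook
# (illustrative parameters; not the proposed TCQ scheme).
import numpy as np

rng = np.random.default_rng(0)
M, B = 64, 12                      # BS antennas, feedback bits
codebook = rng.normal(size=(2**B, M)) + 1j * rng.normal(size=(2**B, M))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

h = rng.normal(size=M) + 1j * rng.normal(size=M)
cdi = h / np.linalg.norm(h)        # channel direction information

# Exhaustive search: O(2^B) inner products -- the cost that becomes
# prohibitive as B grows with the number of antennas.
corr = np.abs(codebook @ cdi.conj())
best = int(np.argmax(corr))
print(f"index {best}, |<c,h>|^2 = {corr[best]**2:.3f}")
```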
|
arxiv:1405.6052
|
We investigate various excited states of the sine-Gordon model on a strip with Dirichlet boundary conditions on both boundaries using a nonlinear integral equation (NLIE) approach.
|
arxiv:hep-th/0312176
|
Rotating and magnetized protoneutron stars (PNSs) may drive relativistic magneto-centrifugally accelerated winds as they cool immediately after core collapse. The wind fluid near the star is composed of neutrons and protons, and the neutrons become relativistic while collisionally coupled with the ions. Here, we argue that the neutrons in the flow eventually undergo inelastic collisions around the termination shock inside the stellar material, producing ~0.1-1 GeV neutrinos, without relying on cosmic-ray acceleration mechanisms. Even higher-energy neutrinos may be produced via particle acceleration mechanisms. We show that PINGU and Hyper-Kamiokande can detect such neutrinos from nearby core-collapse supernovae by reducing the atmospheric neutrino background via coincident detection of MeV neutrinos or gravitational waves and optical observations. Detection of these GeV and/or higher-energy neutrinos would provide important clues to the physics of magnetic acceleration, nucleosynthesis, the relation between supernovae and gamma-ray bursts, and the properties of newly born neutron stars.
|
arxiv:1303.2612
|
Weak-value amplification (WVA) provides a way for amplified detection of a tiny physical signal at the expense of a lower detection probability. Despite this trade-off, due to its robustness against certain types of noise, WVA has advantages over conventional measurements in precision metrology. Moreover, it has been shown that WVA-based metrology can reach the Heisenberg limit using entangled resources, but preparing macroscopic entangled resources remains challenging. Here we demonstrate a novel WVA scheme based on iterative interactions, achieving the Heisenberg-limited precision scaling without resorting to entanglement. This indicates that the perceived advantages of entanglement-assisted WVA are in fact due to iterative interactions between each particle of an entangled system and a meter, rather than coming from the entanglement itself. Our work opens a practical pathway for achieving Heisenberg-limited WVA without using fragile and experimentally demanding entangled resources.
|
arxiv:2109.03762
|
The multiplicity fraction of stars, down to the substellar regime, is a parameter of fundamental importance for stellar formation, evolution, and planetology. The census of multiple stars in the solar neighborhood is, however, incomplete. Our study is aimed at detecting companions of Hipparcos catalog stars from the proper motion anomaly (PMA) they induce on their host star, namely the difference between the long-term Hipparcos-Gaia and short-term Gaia proper motion vectors. We also aim to detect resolved, gravitationally bound companions of the Hipparcos stars (117,955 stars) and of the Gaia EDR3 stars closer than 100 pc (542,232 stars). In order to identify gravitationally bound visual companions of our sample, we searched the EDR3 catalog for common proper-motion (CPM) candidates. The detection of tangential velocity anomalies with a median accuracy of 26 cm/s per parsec of distance is demonstrated. This improvement in accuracy by a factor of 2.5 compared to DR2 results in PMA detection limits on companions that are well into the planetary mass regime for many targets. We identify 37,515 Hipparcos stars with a significant PMA (S/N > 3), namely a fraction of 32%, and 12,914 (11%) hosting CPM bound candidate companions. After including the EDR3 renormalised unit weight error (RUWE > 1.4) as an additional indicator, 50,720 stars of the Hipparcos catalog (43%) exhibit at least one signal of binarity. Among the EDR3 stars within 100 pc, we find CPM bound candidate companions for 39,490 stars (7.3%). The search for companions using a combination of the PMA, CPM, and RUWE indicators improves the exhaustivity of the multiplicity survey. The detection of CPM companions of very bright stars (heavily saturated on the Gaia detectors) that are classical benchmark objects for stellar physics provides a useful proxy for estimating their distance with a higher accuracy than with Hipparcos.
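The proper motion anomaly itself reduces to a vector difference scaled to a tangential velocity, as the sketch below shows. The factor 4.74 (km/s corresponding to 1 au/yr) is the standard conversion; the sample star and its numbers are made up for illustration.

```python
# Tangential-velocity anomaly from a proper motion anomaly (made-up numbers).
import numpy as np

def pma_kms(pm_gaia, pm_hg, parallax_mas):
    """pm_* are (pmRA*, pmDec) in mas/yr; pm_hg is the long-term
    Hipparcos-Gaia mean proper motion. Returns the tangential
    velocity anomaly in km/s (4.74 km/s = 1 au/yr)."""
    dpm = np.asarray(pm_gaia) - np.asarray(pm_hg)      # mas/yr
    dist_pc = 1000.0 / parallax_mas
    return 4.74e-3 * np.hypot(*dpm) * dist_pc

# Hypothetical star at 20 pc with a 0.1 mas/yr anomaly:
print(f"{pma_kms((10.0, -5.0), (10.08, -5.06), 50.0):.4f} km/s")
```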
|
arxiv:2109.10912
|
We introduce a variant of the multiplicative sewing lemma in [Gerasimovičs, Hocquet, Nilssen; J. Funct. Anal. 281 (2021)] which yields arbitrarily high order weak approximations to stochastic differential equations, extending the cubature approximation on Wiener space introduced by Lyons and Victoir. Our analysis allows us to derive stability estimates and explicit weak convergence rates. As a particular example, a cubature approximation for stochastic differential equations driven by continuous Gaussian martingales is given.
|
arxiv:2206.10297
|
We fit the parameters of a differential equations model describing the production of the gap gene proteins Hunchback and Knirps along the antero-posterior axis of the embryo of Drosophila. As initial data for the differential equations model, we take the antero-posterior distribution of the proteins Bicoid, Hunchback, and Tailless at the beginning of cleavage cycle 14. We calibrate and validate the model with experimental data using single- and multi-objective evolutionary optimization techniques. In the multi-objective optimization technique, we compute the associated Pareto fronts. We analyze the cross-regulation mechanism between the gap-gene protein pair Hunchback-Knirps, and we show that the posterior distribution of Hunchback follows the experimental data if Hunchback is negatively regulated by the Huckebein protein. This approach enables prediction of the posterior localization of the Huckebein protein on the embryo, and we validate with the experimental data the genetic regulatory network responsible for the antero-posterior distribution of the gap gene protein Hunchback. We discuss the importance of Pareto multi-objective optimization techniques in the calibration and validation of biological models.
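As a small illustration of the multi-objective machinery mentioned above, the sketch below extracts the Pareto front (the non-dominated set) from a cloud of candidate fits with two error objectives; the data are random stand-ins, not model fits.

```python
# Pareto front extraction for two minimization objectives (stand-in data).
import numpy as np

def pareto_mask(costs: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated rows of `costs` (shape (n, 2)):
    a point is dominated if some other point is <= in every objective
    and strictly < in at least one."""
    n = len(costs)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominators = (np.all(costs <= costs[i], axis=1)
                      & np.any(costs < costs[i], axis=1))
        if dominators.any():
            mask[i] = False
    return mask

rng = np.random.default_rng(0)
costs = rng.random((200, 2))     # (error vs. dataset A, error vs. dataset B)
front = costs[pareto_mask(costs)]
print(f"{len(front)} Pareto-optimal fits out of {len(costs)}")
```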
|
arxiv:0912.4391
|
Long short-term memory (LSTM) networks have recently shown remarkable performance in several tasks dealing with natural language generation, such as image captioning or poetry composition. Yet only a few works have analyzed text generated by LSTMs in order to quantitatively evaluate to what extent such artificial texts resemble those generated by humans. We compared the statistical structure of LSTM-generated language to that of written natural language, and to those produced by Markov models of various orders. In particular, we characterized the statistical structure of language by assessing word-frequency statistics, long-range correlations, and entropy measures. Our main finding is that while both LSTM- and Markov-generated texts can exhibit features similar to real ones in their word-frequency statistics and entropy measures, LSTM texts are shown to reproduce long-range correlations at scales comparable to those found in natural language. Moreover, for LSTM networks, a temperature-like parameter controlling the generation process shows an optimal value (for which the produced texts are closest to real language) that is consistent across all the different statistical features investigated.
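Two of the statistics named above are easy to compute directly; the sketch below implements simple proxies for the unigram entropy and the Zipf (rank-frequency) exponent on a toy corpus. These are generic estimators, not the paper's exact procedures.

```python
# Word-frequency and entropy statistics of a text sample (toy corpus).
from collections import Counter
import numpy as np

def unigram_entropy(tokens):
    """Shannon entropy (bits/word) of the unigram distribution."""
    counts = np.array(list(Counter(tokens).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def zipf_exponent(tokens):
    """Slope of log(frequency) vs log(rank); roughly -1 for natural text."""
    freqs = np.sort(np.array(list(Counter(tokens).values())))[::-1]
    ranks = np.arange(1, freqs.size + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return float(slope)

text = "the cat sat on the mat and the dog sat on the rug".split()
print(f"H = {unigram_entropy(text):.2f} bits/word, "
      f"Zipf slope = {zipf_exponent(text):.2f}")
```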
|
arxiv:1804.04087
|
The increasing popularity of cloud computing has resulted in a proliferation of data centers. Effective placement of data centers improves network performance and minimizes clients' perceived latency. The problem of determining the optimal placement of data centers in a large network is a classical uncapacitated $k$-median problem. Traditional works have focused on centralized algorithms, which require knowledge of the overall network topology and information about the customers' service demands. Moreover, centralized algorithms are computationally expensive and do not scale well with the size of the network. We propose a fully distributed algorithm with linear complexity to optimize the locations of data centers. The proposed algorithm utilizes an iterative two-step optimization approach. Specifically, in each iteration, it first partitions the whole network into $k$ regions through a distributed partitioning algorithm; then, within each region, it determines the local approximate optimal location through a distributed message-passing algorithm. When the underlying network is a tree topology, we show that the overall cost is monotonically decreasing between successive iterations and the proposed algorithm converges in a finite number of iterations. Extensive simulations on both synthetic and real Internet topologies show that the proposed algorithm achieves performance comparable with that of centralized algorithms that require global information and have higher computational complexity.
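For intuition, the two-step iteration can be mimicked in a centralized Euclidean toy: partition clients by nearest center, then re-solve a discrete 1-median inside each region. This is only an analogue of the distributed message-passing algorithm described above, with made-up point data in place of a network topology.

```python
# Centralized stand-in for the two-step iteration: partition, then
# re-solve a local 1-median per region (Euclidean toy version).
import numpy as np

def k_median_iterate(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Step 1: partition -- assign each client to its nearest center.
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        region = d.argmin(axis=1)
        # Step 2: within each region, pick the member point minimizing
        # the summed distance to the region (the discrete 1-median).
        for r in range(k):
            p = points[region == r]
            if len(p):
                cost = np.linalg.norm(p[:, None] - p[None], axis=2).sum(axis=0)
                centers[r] = p[cost.argmin()]
    return centers, region

pts = np.random.default_rng(1).random((300, 2))
centers, region = k_median_iterate(pts, k=4)
print(np.round(centers, 3))
```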
|
arxiv:1802.01289
|
Although the set of permutation symmetries of a complex network can be very large, few of the symmetries give rise to stable synchronous patterns. Here we present a new framework and develop techniques for controlling synchronization patterns in complex networks of coupled chaotic oscillators. Specifically, according to the network permutation symmetry, we design a small-size and weighted network, namely the control network, and use it to control the large-size complex network by means of pinning coupling. We argue mathematically that for any of the network symmetries, there always exists a critical pinning strength beyond which the unstable synchronous pattern associated with this symmetry can be stabilized. The feasibility of the control method is verified by numerical simulations of both artificial and real-world networks, and is demonstrated by an experiment with coupled chaotic circuits. Our studies pave the way for the control of dynamical patterns in complex networks.
|
arxiv:1511.00892
|
Numerical simulations of hydrated proteins show that protein hydration shells are polarized into a ferroelectric cluster with a large magnitude of its average dipole moment. The emergence of this new mesophase dramatically alters the statistics of electrostatic fluctuations at the protein/water interface. The linear-response relation between the average electrostatic potential and its variance breaks down, with the breadth of the electrostatic noise far exceeding the expectations of linear response theories. The dynamics of these non-Gaussian electrostatic fluctuations are dominated by a slow (~1 ns) component which freezes in at the temperature of the dynamical transition of proteins.
|
arxiv:1001.4476
|
is 4 years. Unless prevented by physical limits of computation and time quantization, this process would achieve infinite computing power in 4 years, properly earning the name "singularity" for the final state. This form of intelligence explosion is described in Yudkowsky (1996). == Emergence of superintelligence == A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence, arguing that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world. The related concept "speed superintelligence" describes an AI that can function like a human mind, only much faster. For example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in 30 physical seconds. Such a difference in information processing speed could drive the singularity. Technology forecasters and researchers disagree regarding when, or whether, human intelligence will likely be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that bypass human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies focus on scenarios that combine these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. The 2016 book The Age of Em by Robin Hanson describes a hypothetical future scenario in which human brains are scanned and digitized, creating "uploads" or digital versions of human consciousness. In this future, the development of these uploads may precede or coincide with the emergence of superintelligent artificial intelligence. == Variations == === Non-AI singularity === Some writers use "the singularity" in a broader way to refer to any radical changes in society brought about by new technology (such as molecular nanotechnology), although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity. == Predictions == There have been numerous dates predicted for the attainment of singularity.
|
https://en.wikipedia.org/wiki/Technological_singularity
|
The relationship between the size of the whole and the size of the parts in language and music is known to follow the Menzerath-Altmann law at many levels of description (morphemes, words, sentences...). Qualitatively, the law states that the larger the whole, the smaller its parts; e.g., the longer a word (in syllables), the shorter its syllables (in letters or phonemes). This patterning has also been found in genomes: the longer a genome (in chromosomes), the shorter its chromosomes (in base pairs). However, it has been argued recently that mean chromosome length is trivially a pure power function of chromosome number with an exponent of -1. The functional dependency between mean chromosome size and chromosome number in groups of organisms from three different kingdoms is studied. The fit of a pure power function yields exponents between -1.6 and 0.1. It is shown that an exponent of -1 is unlikely for fungi, gymnosperm plants, insects, reptiles, ray-finned fishes, and amphibians. Even when the exponent is very close to -1, adding an exponential component is able to yield a better fit with regard to a pure power law in plants, mammals, ray-finned fishes, and amphibians. The parameters of the Menzerath-Altmann law in genomes deviate significantly from a power law with a -1 exponent, with the exception of birds and cartilaginous fishes.
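The model comparison at the heart of this argument, a pure power law against a power law with an exponential component, can be sketched with standard least-squares fitting. The data below are synthetic stand-ins; the real genome datasets and fitting protocol are in the paper.

```python
# Fitting mean chromosome length vs chromosome number: pure power law
# against a power law with an exponential component (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def power(n, c, a):
    return c * n ** a

def power_exp(n, c, a, b):
    return c * n ** a * np.exp(b * n)

rng = np.random.default_rng(0)
n = np.arange(2, 40, dtype=float)                  # chromosome number
y = 120 * n ** -0.9 * np.exp(-0.01 * n) * rng.lognormal(0, 0.05, n.size)

for model, p0 in ((power, (100, -1)), (power_exp, (100, -1, 0))):
    p, _ = curve_fit(model, n, y, p0=p0, maxfev=10000)
    rss = float(((y - model(n, *p)) ** 2).sum())
    print(model.__name__, np.round(p, 3), f"RSS = {rss:.3g}")
```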
|
arxiv:1201.1746
|
We consider the estimation of the location of the pole and the memory parameter, $\lambda^0$ and $\alpha$ respectively, of covariance stationary linear processes whose spectral density function $f(\lambda)$ satisfies $f(\lambda) \sim C|\lambda - \lambda^0|^{-\alpha}$ in a neighborhood of $\lambda^0$. We define a consistent estimator of $\lambda^0$ and derive its limit distribution $Z_{\lambda^0}$. As in related optimization problems, when the true parameter value can lie on the boundary of the parameter space, we show that $Z_{\lambda^0}$ is distributed as a normal random variable when $\lambda^0 \in (0, \pi)$, whereas for $\lambda^0 = 0$ or $\pi$, $Z_{\lambda^0}$ is a mixture of discrete and continuous random variables with weights equal to $1/2$. More specifically, when $\lambda^0 = 0$, $Z_{\lambda^0}$ is distributed as a normal random variable truncated at zero. Moreover, we describe and examine a two-step estimator of the memory parameter $\alpha$, showing that neither its limit distribution nor its rate of convergence is affected by the estimation of $\lambda^0$. Thus, we reinforce and extend previous results with respect to the estimation of $\alpha$ when $\lambda^0$ is assumed to be known a priori. A small Monte Carlo study is included to illustrate the finite sample performance of our estimators.
|
arxiv:math/0508317
|
We rely on a recent method for determining edge spectra and use it to compute the Chern numbers for Hofstadter models on the honeycomb lattice having rational magnetic flux per unit cell. Based on the bulk-edge correspondence, the Chern number $\sigma_H$ is given as the winding number of an eigenvector of a $2 \times 2$ transfer matrix, as a function of the quasi-momentum $k \in (0, 2\pi)$. This method is computationally efficient (of order $O(n^4)$ in the resolution of the desired image). It also shows that for the honeycomb lattice the solution for $\sigma_H$ for flux $p/q$ in the $r$-th gap conforms with the Diophantine equation $r = \sigma_H \cdot p + s \cdot q$, which determines $\sigma_H \bmod q$. A window such as $\sigma_H \in (-q/2, q/2)$, or possibly shifted, provides a natural further condition for $\sigma_H$, which however turns out not to be met. Based on extensive numerical calculations, we conjecture that the solution conforms with the relaxed condition $\sigma_H \in (-q, q)$.
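The numerical primitive here, a winding number of a complex-valued function over $(0, 2\pi)$, is straightforward to implement, as the generic sketch below shows. The test function is an arbitrary curve of winding 2; the paper's transfer-matrix eigenvector is not reproduced.

```python
# Winding number of a complex function of k over (0, 2*pi)
# (generic sketch with an arbitrary test curve, not the transfer matrix).
import numpy as np

def winding_number(f_vals: np.ndarray) -> int:
    """Sum of wrapped phase increments of a sampled closed curve f(k),
    divided by 2*pi."""
    dphi = np.diff(np.angle(f_vals))
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return int(np.round(dphi.sum() / (2 * np.pi)))

k = np.linspace(0, 2 * np.pi, 2001)
f = np.exp(2j * k) * (1.5 + 0.3 * np.cos(k))      # winds twice around 0
print(winding_number(f))                           # -> 2
```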
|
arxiv:1403.1270
|
This paper investigates the impact of query topology on the difficulty of answering conjunctive queries in the presence of OWL 2 QL ontologies. Our first contribution is to clarify the worst-case size of positive existential (PE), non-recursive Datalog (NDL), and first-order (FO) rewritings for various classes of tree-like conjunctive queries, ranging from linear queries to bounded treewidth queries. Perhaps our most surprising result is a superpolynomial lower bound on the size of PE-rewritings that holds already for linear queries and ontologies of depth 2. More positively, we show that polynomial-size NDL-rewritings always exist for tree-shaped queries with a bounded number of leaves (and arbitrary ontologies), and for bounded treewidth queries paired with bounded depth ontologies. For FO-rewritings, we equate the existence of polysize rewritings with well-known problems in Boolean circuit complexity. As our second contribution, we analyze the computational complexity of query answering and establish tractability results (either NL- or LOGCFL-completeness) for a range of query-ontology pairs. Combining our new results with those from the literature yields a complete picture of the succinctness and complexity landscapes for the considered classes of queries and ontologies.
|
arxiv:1406.3047
|
communications industry. Given its different forms, there are various ways of representing uncertainty and modelling economic agents' responses to it. Game theory is a branch of applied mathematics that considers strategic interactions between agents, one kind of uncertainty. It provides a mathematical foundation of industrial organisation, discussed above, to model different types of firm behaviour, for example in an oligopolistic industry (few sellers), but is equally applicable to wage negotiations, bargaining, contract design, and any situation where individual agents are few enough to have perceptible effects on each other. In behavioural economics, it has been used to model the strategies agents choose when interacting with others whose interests are at least partially adverse to their own. In this, it generalises maximisation approaches developed to analyse market actors such as in the supply and demand model and allows for incomplete information of actors. The field dates from the 1944 classic Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern. It has significant applications seemingly outside of economics in such diverse subjects as the formulation of nuclear strategies, ethics, political science, and evolutionary biology. Risk aversion may stimulate activity that in well-functioning markets smooths out risk and communicates information about risk, as in markets for insurance, commodity futures contracts, and financial instruments. Financial economics, or simply finance, describes the allocation of financial resources. It also analyses the pricing of financial instruments, the financial structure of companies, the efficiency and fragility of financial markets, financial crises, and related government policy or regulation. Some market organisations may give rise to inefficiencies associated with uncertainty. Based on George Akerlof's "Market for Lemons" article, the paradigm example is of a dodgy second-hand car market. Customers without knowledge of whether a car is a "lemon" depress its price below what a quality second-hand car would be. Information asymmetry arises here if the seller has more relevant information than the buyer but no incentive to disclose it. Related problems in insurance are adverse selection, such that those at most risk are most likely to insure (say, reckless drivers), and moral hazard, such that insurance results in riskier behaviour (say, more reckless driving). Both problems may raise insurance costs and reduce efficiency by driving otherwise willing transactors from the market ("incomplete markets"). Moreover, attempting to reduce one problem, say adverse selection by mandating insurance, may add to another, say moral hazard. Information economics, which studies such problems, has relevance in
|
https://en.wikipedia.org/wiki/Economics
|
Stencil computations are widely used in HPC applications. Today, many HPC platforms use GPUs as accelerators. As a result, understanding how to perform stencil computations fast on GPUs is important. While implementation strategies for low-order stencils on GPUs have been well studied in the literature, not all of the proposed enhancements work well for high-order stencils, such as those used for seismic modeling. Furthermore, coping with boundary conditions often requires different computational logic, which complicates efficient exploitation of the thread-level parallelism on GPUs. In this paper, we study high-order stencils and their unique characteristics on GPUs. We manually crafted a collection of implementations of a 25-point seismic modeling stencil in CUDA, along with the related boundary conditions. We evaluate their code shapes, memory hierarchy usage, data-fetching patterns, and other performance attributes. We conducted an empirical evaluation of these stencils using several mature and emerging tools and discuss our quantitative findings. Among our implementations, we achieve twice the performance of a proprietary code developed in C and mapped to GPUs using OpenACC. Additionally, several of our implementations have excellent performance portability.
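For reference, a 25-point stencil of the seismic-modeling kind is a radius-4 star in 3D (the center plus 4 points in each direction along each axis). The NumPy sketch below shows the access pattern only; the coefficients are placeholders, not a calibrated finite-difference scheme, and the boundary cells are simply skipped to mirror the separate boundary logic discussed above.

```python
# NumPy reference for a 25-point (radius-4, star-shaped) 3D stencil
# (placeholder coefficients; interior-only, boundaries left untouched).
import numpy as np

R = 4
c0, c = -2.5, [1.6, -0.2, 0.03, -0.002]   # placeholder FD weights

def stencil_25pt(u: np.ndarray) -> np.ndarray:
    out = np.zeros_like(u)
    core = (slice(R, -R),) * 3            # interior region only
    out[core] = c0 * u[core]
    for axis in range(3):
        for r, w in enumerate(c, start=1):
            # +/- r neighbors along this axis: 3 axes * 4 radii * 2
            # directions + center = 25 points total.
            out[core] += w * (np.roll(u, +r, axis)[core] +
                              np.roll(u, -r, axis)[core])
    return out

u = np.random.default_rng(0).random((64, 64, 64))
print(stencil_25pt(u).shape)
```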
|
arxiv:2009.04619
|
Here we report on a transparent method to characterize individual layers in a double-layer electron system which forms in a wide quantum well and to determine their electron densities. The technique relies on the simultaneous measurement of the capacitances between the electron system and gates located on either side of the well. Modifications to the electron wave function due to the population of the second subband and the appearance of an additional electron layer can be detected. The magnetic field dependence of these capacitances is dominated by quantum corrections caused by the occupation of Landau levels in the nearest electron layer. The technique should be equally applicable to other implementations of a double-layer electron system.
|
arxiv:1712.10171
|
Let $V$ be a representation of the modular group $\Gamma$ of dimension $p$. We show that the $\mathbb{Z}$-graded space $\mathcal{H}(V)$ of holomorphic vector-valued modular forms associated to $V$ is a free module of rank $p$ over the algebra $\mathcal{M}$ of classical holomorphic modular forms. We study the nature of $\mathcal{H}$ considered as a functor from $\Gamma$-modules to graded $\mathcal{M}$-lattices and give some applications, including the calculation of the Hilbert-Poincaré series of $\mathcal{H}(V)$ in some cases.
|
arxiv:0901.4367
|
We investigate the physical properties of equilibrium sequences of non-self-gravitating surfaces that characterize thick disks around a rotating deformed compact object described by a stationary generalization of the static q-metric. The spacetime corresponds to an exact solution of Einstein's field equations, so we can perform the analysis for arbitrary values of the quadrupole moment and rotation parameter. To study the properties of this disk model, we analyze bounded trajectories in this spacetime. Further, we find that depending on the values of the parameters, we can have various disc structures that can easily be distinguished from the static case and also from the Schwarzschild background. We argue that this study may be used to evaluate the rotation and quadrupole parameters of the central compact object.
|
arxiv:2205.03842
|
It is meaningful to detect outliers in traffic data for traffic management. However, distinguishing outliers in a large-scale database is a massive task for human operators. In this paper, we present two methods, a kernel smoothing naïve Bayes (NB) method and a Gaussian mixture model (GMM) method, to automatically detect hardware errors as well as abnormal traffic events in traffic data collected at a four-arm junction in Hong Kong. Traffic data were recorded in a video format and converted to spatial-temporal (ST) traffic signals by statistics. The ST signals are then projected onto a two-dimensional (2D) (x, y)-coordinate plane by principal component analysis (PCA) for dimension reduction. We assume that inlier data are normally distributed. As such, the NB and GMM methods are successfully applied to outlier detection (OD) for traffic data. The kernel smoothing NB method assumes the existence of kernel distributions in traffic data and uses Bayes' theorem to perform OD. In contrast, the GMM method assumes the traffic data are formed by a mixture of Gaussian distributions and exploits a confidence region for OD. This paper addresses the modeling of each method and evaluates their respective performances. Experimental results show that the NB algorithm with a triangle kernel and the GMM method achieve up to 93.78% and 94.50% accuracies, respectively.
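The PCA-then-GMM pipeline can be sketched in a few lines with scikit-learn. The synthetic data, the two-component mixture, and the 2% likelihood cutoff standing in for a confidence region are all illustrative assumptions, not the paper's tuned settings.

```python
# PCA-to-2D + GMM outlier detection sketch (synthetic stand-in for the
# spatial-temporal traffic signals; thresholds are illustrative).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
inliers = rng.normal(0, 1, size=(500, 20))
outliers = rng.normal(4, 1, size=(10, 20))
X = np.vstack([inliers, outliers])

xy = PCA(n_components=2).fit_transform(X)      # project ST signals to 2D
gmm = GaussianMixture(n_components=2, random_state=0).fit(xy)

# Flag points outside the model's high-confidence region: here, the
# lowest 2% of log-likelihood scores, a stand-in for a confidence region.
scores = gmm.score_samples(xy)
flagged = scores < np.quantile(scores, 0.02)
print(f"flagged {flagged.sum()} of {len(X)} samples")
```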
|
arxiv:1512.08413
|
We have obtained improved spectra of key fundamental band lines of H3+, R(1,1)l, R(3,3)l, and R(2,2)l, and ro-vibrational transitions of CO on sightlines toward the luminous infrared sources GCIRS 3 and GCIRS 1W, each located in the central cluster of the Galactic center within several arcseconds of Sgr A*. The spectra reveal absorption occurring in three kinds of gaseous environments: (1) cold dense and diffuse gas associated with foreground spiral/lateral arms; (2) warm and diffuse gas absorbing over a wide and mostly negative velocity range, which appears to fill a significant fraction of the Galaxy's central molecular zone (CMZ); and (3) warm, dense and compact clouds with velocities near +50 km s^-1, probably within 1-2 pc of the center. The absorptions by the first two cloud types are nearly identical for all the sources in the central cluster, and are similar to those previously observed on sightlines from Sgr A* to 30 pc east of it. Cloud type (3), which has only been observed toward the central cluster, shows distinct differences between the sightlines to GCIRS 3 and GCIRS 1W, which are separated on the sky by only 0.33 pc in projection. We identify this material as part of an inward extension of the circumnuclear disk previously known from HCN mapping. Lower limits on the products of the hydrogen ionization rate zeta and the path length L are 2.3 x 10^5 cm s^-1 and 1.5 x 10^3 cm s^-1 for the warm and diffuse CMZ gas and for the warm and dense clouds in the core, respectively. The limits indicate that the ionization rates in these regions are well above 10^-15 s^-1.
|
arxiv:1404.2271
|
We show that layered quenched randomness in planar magnets leads to an unusual intermediate phase between the conventional ferromagnetic low-temperature and paramagnetic high-temperature phases. In this intermediate phase, which is part of the Griffiths region, the spin-wave stiffness perpendicular to the random layers displays anomalous scaling behavior, with a continuously variable anomalous exponent, while the magnetization and the stiffness parallel to the layers both remain finite. Analogous results hold for superfluids and superconductors. We study the two phase transitions into the anomalous elastic phase, and we discuss the universality of these results, implications of finite sample size, and possible experiments.
|
arxiv:1003.5201
|
We observe that a slight adjustment of a method of Caffarelli, Li, and Nirenberg yields that plurisubharmonic functions extend across subharmonic singularities as long as the singularities form a closed set of measure zero. This solves a problem posed by Chirka.
|
arxiv:2102.01591
|
The tilted balance among competing interactions can yield a rich variety of ground states of quantum matter. In most Ce-based heavy fermion systems, this can often be qualitatively described by the famous Doniach phase diagram, owing to the competition between the Kondo screening and the Ruderman-Kittel-Kasuya-Yosida exchange interaction. Here, we report an unusual pressure-temperature phase diagram beyond the Doniach one in CeCuP2. At ambient pressure, CeCuP2 displays typical heavy-fermion behavior, albeit with a very low carrier density. With lowering temperature, it shows a crossover from a non-Fermi liquid to a Fermi liquid at around 2.4 K. But surprisingly, the Kondo coherence temperature decreases with increasing pressure, opposite to the behavior of most Ce-based heavy fermion compounds. Upon further compression, two superconducting phases are revealed. At 48.0 GPa, the transition temperature reaches 6.1 K, the highest among all Ce-based heavy fermion superconductors. We argue for possible roles of valence tuning and fluctuations associated with its special crystal structure, in addition to the hybridization effect. These unusual phase diagrams suggest that CeCuP2 is a novel platform for studying the rich heavy-fermion physics beyond the conventional Doniach paradigm.
|
arxiv:2104.02992
|
By a rectangular distributive lattice we mean the direct product of two non-singleton finite chains. We prove that the retracts (ordered by set inclusion and together with the empty set) of a rectangular distributive lattice $G$ form a lattice, which we denote by Ret($G$). Also, we describe and count the retracts of $G$. Some easy properties of retracts, retractions, and retraction kernels of (mainly distributive) lattices are observed, and several examples are presented, including a 12-element modular lattice $M$ such that Ret($M$) is not a lattice.
|
arxiv:2112.12498
|
The energy shift in molecular spectra due to the interaction of the nuclear magnetic quadrupole moment ($M$) with electrons is equal to $\delta E_M = M W_M P_M$, where $W_M$ is a constant determined by the electronic structure of the molecule and $P_M$ is a dimensionless constant. We extended the method for the calculation of parity nonconservation effects in triatomic molecules developed in Ref. [A. Petrov and A. Zakharova, Phys. Rev. A 105, L050801 (2022)] to the case of the $P_M$ constant and applied it to $^{173}$YbOH in the first excited $v=1$ bending mode. Results of our calculations are required for the extraction of the $M$ value from the YbOH experiment.
|
arxiv:2208.13881
|
The microlensing optical depth to Baade's window constrains the minimum total mass in baryonic matter within the solar circle to be greater than 3.9 x 10^{10} solar masses, assuming the inner Galaxy is barred with a viewing angle of roughly 20 degrees. From the kinematics of solar neighbourhood stars, the local surface density of dark matter is about 30 +/- 15 solar masses per square parsec. We construct cuspy haloes normalised to the local dark matter density and calculate the circular-speed curve of the halo in the inner Galaxy. This is added in quadrature to the rotation curve provided by the stellar and ISM discs, together with a bar sufficiently massive so that the baryonic matter in the inner Galaxy reproduces the microlensing optical depth. Such models violate the observational constraint provided by the tangent-velocity data in the inner Galaxy (typically at radii 2-4 kpc). The high baryonic contribution required by the microlensing is consistent with implications from hydrodynamical modelling and the pattern speed of the Galactic bar. We conclude that the cuspy haloes favoured by the cold dark matter cosmology (and its variants) are inconsistent with the observational data on the Galaxy.
|
arxiv:astro-ph/0108505
|
We study the Klein four-group ($K_4$) symmetry of the time-dependent Schrödinger equation for the conformal mechanics model of de Alfaro-Fubini-Furlan (AFF) with confining harmonic potential and coupling constant $g = \nu(\nu+1) \geq -1/4$. We show that it undergoes a complete or partial (at half-integer $\nu$) breaking on eigenstates of the system, and is the automorphism of the $\mathfrak{osp}(2,2)$ superconformal symmetry in super-extensions of the model by inducing a transformation between the exact and spontaneously broken phases of $\mathcal{N}=2$ Poincaré supersymmetry. We exploit the $K_4$ symmetry and its relation with the conformal symmetry to construct the dual Darboux transformations which generate spectrally shifted pairs of the rationally deformed AFF models. Two distinct pairs of intertwining operators originating from the Darboux duality allow us to construct complete sets of the spectrum-generating ladder operators that identify the specific finite-gap structure of a deformed system and generate three distinct related versions of a nonlinearly deformed $\mathfrak{sl}(2,\mathbb{R})$ algebra as its symmetry. We show that at half-integer $\nu$, the Jordan states associated with confluent Darboux transformations enter the construction, and the spectrum of rationally deformed AFF systems undergoes structural changes.
|
arxiv:1902.00538
|
We present the first results of the VLBA Imaging and Polarimetry Survey (VIPS), a 5 GHz VLBI survey of 1,127 sources with flat radio spectra. Through automated data reduction and imaging routines, we have produced publicly available I, Q, and U images and have detected polarized flux density from 37% of the sources. We have also developed an algorithm to use each source's I image to automatically classify it as a point-like source, a core-jet, a compact symmetric object (CSO) candidate, or a complex source. The mean ratio of the polarized to total 5 GHz flux density for VIPS sources with detected polarized flux density ranges from 1% to 20%, with a median value of about 5%. We have also found significant evidence that the directions of the jets in core-jet systems tend to be perpendicular to the electric vector position angles (EVPAs). The data are consistent with a scenario in which ~24% of the polarized core-jets have EVPAs that are anti-aligned with the directions of their jet components and which have a substantial amount of Faraday rotation. In addition to these initial results, plans for future follow-up observations are discussed.
|
arxiv:astro-ph/0611459
|
AR/VR, and AI). The 2014 IT budget of the US federal government was nearly $82 billion. IT costs, as a percentage of corporate revenue, have grown 50% since 2002, putting a strain on IT budgets. When looking at current companies' IT budgets, 75% are recurrent costs, used to "keep the lights on" in the IT department, and 25% are the cost of new initiatives for technology development. The average IT budget has the following breakdown: 34% personnel costs (internal), 31% after correction; 16% software costs (external/purchasing category), 29% after correction; 33% hardware costs (external/purchasing category), 26% after correction; 17% costs of external service providers (external/services), 14% after correction. The estimated amount of money spent in 2022 is just over US$6 trillion. == Technological capacity == The world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes in 1986 to 15.8 in 1993, over 54.5 in 2000, and to 295 (optimally compressed) exabytes in 2007, and some 5 zettabytes in 2014. This is the informational equivalent of 1.25 stacks of CD-ROM from the Earth to the Moon in 2007, and the equivalent of 4,500 stacks of printed books from the Earth to the Sun in 2014. The world's technological capacity to receive information through one-way broadcast networks was 432 exabytes of (optimally compressed) information in 1986, 715 (optimally compressed) exabytes in 1993, 1.2 (optimally compressed) zettabytes in 2000, and 1.9 zettabytes in 2007. The world's effective capacity to exchange information through two-way telecommunication networks was 281 petabytes of (optimally compressed) information in 1986, 471 petabytes in 1993, 2.2 (optimally compressed) exabytes in 2000, 65 (optimally compressed) exabytes in 2007, and some 100 exabytes in 2014. The world's technological capacity to compute information with humanly guided general-purpose computers grew from 3.0 × 10^8 MIPS in 1986 to 6.4 × 10^12 MIPS in 2007. == Sector in the OECD == The following is a list of OECD countries by share of ICT sector in total value added in 2013. == ICT Development Index ==
|
https://en.wikipedia.org/wiki/Information_and_communications_technology
|
For a regular polyhedron (or polygon) centered at the origin, the coordinates of the vertices are eigenvectors of the graph Laplacian for the skeleton of that polyhedron (or polygon) associated with the first (non-trivial) eigenvalue. In this paper, we generalize this relationship. For a given graph, we study the eigenvalue optimization problem of maximizing the first (non-trivial) eigenvalue of the graph Laplacian over non-negative edge weights. We show that the spectral realization of the graph using the eigenvectors corresponding to the solution of this problem, under certain assumptions, is a centered, unit-distance graph realization that has maximal total variance. This result gives a new method for generating unit-distance graph realizations and is based on convex duality. A drawback of this method is that the dimension of the realization is given by the multiplicity of the extremal eigenvalue, which is typically unknown prior to solving the eigenvalue optimization problem. Our results are illustrated with a number of examples.
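The polygon case mentioned in the opening sentence is easy to reproduce numerically: the eigenvectors of the cycle-graph Laplacian at its first non-trivial eigenvalue recover a regular polygon (up to rotation and scale). The sketch below uses uniform weights on a hexagon skeleton; the paper's contribution, optimizing the edge weights, is not implemented here.

```python
# Spectral realization sketch: coordinates from the Laplacian
# eigenvectors at the first non-trivial eigenvalue (cycle graph C_6,
# uniform weights; recovers a regular hexagon up to rotation).
import numpy as np

n = 6                                   # hexagon skeleton: cycle C_6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

evals, evecs = np.linalg.eigh(L)
# Eigenvalue 0 is trivial (constant vector); the next eigenvalue has
# multiplicity 2 here, giving a 2D realization.
coords = evecs[:, 1:3]
coords /= np.linalg.norm(coords[0])     # put vertices on the unit circle

print(np.round(coords, 3))              # a regular hexagon, rotated
```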
|
arxiv:2206.10010
|
Spectra of stochastic gravitational waves (GWs) generated in cosmological first-order phase transitions are computed within strongly correlated theories with a dual holographic description. The theories are mostly used as models of dark sectors. In particular, we consider the so-called Witten-Sakai-Sugimoto model, an $SU(N)$ gauge theory coupled to different matter fields in both the fundamental and the adjoint representations. The model has a well-known top-down holographic dual description which allows us to perform reliable calculations in the strongly coupled regime. We consider the GW spectra from bubble collisions and sound waves arising from two different kinds of first-order phase transitions: a confinement/deconfinement one and a chiral symmetry breaking/restoration one. Depending on the model parameters, we find that the GW spectra may fall within the sensitivity region of ground-based and space-based interferometers, as well as of pulsar timing arrays. In the latter case, the signal could be compatible with the recent potential observation by NANOGrav. When the two phase transitions happen at different critical temperatures, characteristic spectra with double frequency peaks show up. Moreover, in this case we explicitly show how to correct the redshift factors appearing in the formulae for the GW power spectra to account for the fact that adiabatic expansion from the first transition to the present time can no longer be assumed.
|
arxiv:2011.08757
|
Despite recent advances in automating theorem proving in full first-order theories, inductive reasoning still poses a serious challenge to state-of-the-art theorem provers. The reason is that in first-order logic induction requires an infinite number of axioms, which is not a feasible input for a computer-aided theorem prover requiring a finite input. Mathematical practice is to specify these infinite sets of axioms as axiom schemes. Unfortunately, these schematic definitions cannot be formalized in first-order logic, and are therefore not supported as inputs for first-order theorem provers. In this work we introduce a new method, inspired by the field of axiomatic theories of truth, that allows the expression of schematic inductive definitions in the standard syntax of multi-sorted first-order logic. Further, we test the practical feasibility of the method with state-of-the-art theorem provers, comparing it to solvers' native techniques for handling induction. This paper is an extended version of the LFMTP 21 submission with the same title.
|
arxiv:2106.05066
|
We derive general results relating revivals in the dynamics of quantum many-body systems to the entanglement properties of energy eigenstates. For a D-dimensional lattice system of N sites initialized in a low-entangled and short-range correlated state, our results show that a perfect revival of the state after a time at most poly(N) implies the existence of "quantum many-body scars", whose number grows at least as the square root of N up to poly-logarithmic factors. These are energy eigenstates with energies placed in an equally spaced ladder and with Rényi entanglement entropy scaling as log(N) plus an area law term for any region of the lattice. This shows that quantum many-body scars are a necessary condition for revivals, independent of particularities of the Hamiltonian leading to them. We also present results for approximate revivals and for revivals of expectation values of observables, and prove that the duration of revivals of states has to become vanishingly short with increasing system size.
|
arxiv:1911.05637
|
Entomophthoralean fungi are insect-pathogenic fungi characterized by their active discharge of infective conidia that infect insects. Our aim was to study the effects of temperature on the discharge and to characterize the variation in the associated temporal pattern of a newly discovered Pandora species, with a focus on peak location and shape of the discharge. Mycelia were incubated at various temperatures in darkness, and conidial discharge was measured over time. We used a novel modification of a statistical model (pavpop), which simultaneously estimates phase and amplitude effects, in a setting of generalized linear models. This model is used to test hypotheses on peak location and discharge of conidia. The statistical analysis showed that high temperature leads to an early and fast-decreasing peak, whereas there were no significant differences in the total number of discharged conidia. Using the proposed model, we also quantified the biological variation in the timing of the peak location at a fixed temperature.
|
arxiv:1811.04446
|
Computational fluid dynamics (CFD) is increasingly used to study blood flows in patient-specific arteries for understanding certain cardiovascular diseases. The techniques work quite well for relatively simple problems but need improvements when the problems become harder: when (1) the geometry becomes complex (from a few branches to a full pulmonary artery), (2) the model becomes more complex (from a fluid-only calculation to a coupled fluid-structure interaction calculation), (3) both the fluid and wall models become highly nonlinear, and (4) the computer on which we run the simulation is a supercomputer with tens of thousands of processor cores. To push the limit of CFD on all four fronts, in this paper we develop and study a highly parallel algorithm for solving a monolithically coupled fluid-structure system for the modeling of the interaction of the blood flow and the arterial wall. As a case study, we consider a patient-specific, full-size pulmonary artery obtained from CT (computed tomography) images, with an artificially added layer of wall with a fixed thickness. The fluid is modeled with a system of incompressible Navier-Stokes equations, and the wall is modeled by a geometrically nonlinear elasticity equation. As far as we know, this is the first time the unsteady blood flow in a full pulmonary artery has been simulated without assuming a rigid wall. The proposed numerical algorithm and software scale well beyond 10,000 processor cores on a supercomputer for solving the fluid-structure interaction problem discretized with a stabilized finite element method in space and an implicit scheme in time involving hundreds of millions of unknowns.
|
arxiv:1810.04289
|
We study the problem of syncing the lip movement in a video with the audio stream. Our solution finds an optimal alignment using a dual-domain recurrent neural network that is trained on synthetic data we generate by dropping and duplicating video frames. Once the alignment is found, we modify the video in order to sync the two sources. Our method is shown to greatly outperform the literature methods on a variety of existing and new benchmarks. As an application, we demonstrate our ability to robustly align text-to-speech generated audio with an existing video stream. Our code and samples are available at https://github.com/itsyoavshalev/end-to-end-lip-synchronization-with-a-temporal-autoencoder.
|
arxiv:2203.16224
|
We investigate the magnetic properties of the $S=1$ antiferromagnetic diamond lattice Ni$X_2$(pyrimidine)$_2$ ($X$ = Cl, Br), hosting a single-ion anisotropy (SIA) orientation which alternates between neighbouring sites. Through neutron diffraction measurements of the $X$ = Cl compound, the ordered-state spins are found to align collinearly along a pseudo-easy-axis, a unique direction created by the intersection of two easy planes. Similarities in the magnetization, exhibiting spin-flop transitions, and in the magnetic susceptibility of the two compounds imply that the same magnetic structure and a pseudo-easy-axis are also present for $X$ = Br. We estimate the Hamiltonian parameters by combining analytical calculations and Monte Carlo (MC) simulations of the spin-flop and saturation fields. The MC simulations also reveal that the spin-flop transition occurs when the applied field is parallel to the pseudo-easy-axis. Contrary to conventional easy-axis systems, there exist field directions perpendicular to the pseudo-easy-axis for which the magnetic saturation is approached asymptotically and no symmetry-breaking phase transition is observed at finite fields.
|
arxiv:2405.15623
|
Supersymmetry analyses will potentially be a central area for experiments at the LHC and at a future e+e- linear collider. Results from the two facilities will mutually complement and augment each other so that a comprehensive and precise picture of the supersymmetric world can be developed. We will demonstrate in this report how coherent analyses at LHC and LC experiments can be used to explore the breaking mechanism of supersymmetry and to reconstruct the fundamental theory at high energies, in particular at the grand unification scale. This will be exemplified for minimal supergravity in detailed experimental simulations performed for the Snowmass reference point SPS1a.
|
arxiv:hep-ph/0403133
|
the bilevel program is an optimization problem where the constraint involves solutions to a parametric optimization problem. it is well - known that the value function reformulation provides an equivalent single - level optimization problem but it results in a nonsmooth optimization problem which never satisfies the usual constraint qualification such as the mangasarian - fromovitz constraint qualification ( mfcq ). in this paper we show that even the first order sufficient condition for metric subregularity ( which is in general weaker than mfcq ) fails at each feasible point of the bilevel program. we introduce the concept of directional calmness condition and show that under the directional calmness condition, the directional necessary optimality condition holds. while the directional optimality condition is in general sharper than the non - directional one, the directional calmness condition is in general weaker than the classical calmness condition and hence is more likely to hold. we perform the directional sensitivity analysis of the value function and propose the directional quasi - normality as a sufficient condition for the directional calmness. an example is given to show that the directional quasi - normality condition may hold for the bilevel program.
|
arxiv:2004.01783
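for readers unfamiliar with the value function reformulation mentioned above, a schematic form ( with upper - level objective $F$ and lower - level data $f, g$; the notation is assumed here, not taken from the paper ) is:

\[
\min_{x,y} \; F(x,y) \quad \text{s.t.} \quad g(x,y) \le 0, \quad f(x,y) \le \varphi(x), \qquad \text{where } \varphi(x) := \min_{y'} \{\, f(x,y') : g(x,y') \le 0 \,\}.
\]

since $f(x,y) \ge \varphi(x)$ holds for every feasible pair by definition, the constraint $f(x,y) \le \varphi(x)$ is active at every feasible point, which is the intuitive reason why standard constraint qualifications such as mfcq cannot hold for this reformulation.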
|
the long time behavior of a kind of fully magnetic effected nonlinear piezoelectric beam with viscoelastic infinite memory is considered. the well - posedness of this nonlinear coupled pdes system is shown by means of semigroup theory and the banach fixed point theorem. based on frequency domain analysis, it is proved that the corresponding coupled linear system can be indirectly stabilized exponentially by only one viscoelastic infinite memory term, which is located on one equation of these strongly coupled pdes. then the exponential decay of the solution to the nonlinear coupled pdes system is established by the energy estimation method under a certain condition.
|
arxiv:2204.03824
|
the key kinetic barrier to dolomite formation is related to the surface mg2 + - h2o complex, which hinders binding of surface mg2 + ions to the co3 2 - ions in solution. it has been proposed that this reaction can be catalyzed by dissolved hydrogen sulfide. to characterize the role of dissolved hydrogen sulfide in the dehydration of surface mg 2 + ions, ab initio simulations based on density functional theory ( dft ) were carried out to study the thermodynamics of competitive adsorption of hydrogen sulfide and water on dolomite ( 104 ) surfaces from solution. we find that water is thermodynamically more stable on the surface with the difference in adsorption energy of - 13. 6 kj / mol ( in vacuum ) and - 12. 8 kj / mol ( in aqueous solution ). however, aqueous hydrogen sulfide adsorbed on the surface increases the mg2 + - h2o distances on surrounding surface sites. two possible mechanisms were proposed for the catalytic effects of adsorbed hydrogen sulfide on the anhydrous ca - mg - carbonate crystallization at low temperature.
|
arxiv:1608.03332
|
quality - of - service ( qos ) data plays a crucial role in cloud service selection. since users cannot access all services, qos can be represented by a high - dimensional and incomplete ( hdi ) matrix. latent factor analysis ( lfa ) models have been proven effective as low - rank representation techniques for addressing this issue. however, most lfa models rely on first - order optimizers and use l2 - norm regularization, which can lead to lower qos prediction accuracy. to address this issue, this paper proposes a double regularized second - order latent factor ( drslf ) model with two key ideas : a ) integrating l1 - norm and l2 - norm regularization terms to enhance the low - rank representation performance ; b ) incorporating second - order information by calculating the hessian - vector product in each conjugate gradient step. experimental results on two real - world response - time qos datasets demonstrate that drslf has a higher low - rank representation capability than two baselines.
|
arxiv:2505.03822
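to make the second - order ingredient above concrete, here is a small python sketch for one user - factor subproblem: with the item factors held fixed, the smooth part of the loss is quadratic, so a conjugate - gradient step only needs hessian - vector products, never the assembled hessian. the l1 term is handled crudely by soft - thresholding after the solve; the paper's exact update rule and parameter choices are not reproduced here.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def update_user(u, V, idx, r, lam1=0.01, lam2=0.1):
    """u: (k,) factors of one user; idx, r: observed item ids and qos values."""
    Vo = V[idx]                                   # (m, k) observed item factors
    grad = 2 * Vo.T @ (Vo @ u - r) + 2 * lam2 * u
    hvp = lambda p: 2 * Vo.T @ (Vo @ p) + 2 * lam2 * p  # hessian-vector product
    H = LinearOperator((u.size, u.size), matvec=hvp)
    step, _ = cg(H, grad, maxiter=20)             # newton direction via cg
    u_new = u - step
    return np.sign(u_new) * np.maximum(np.abs(u_new) - lam1, 0.0)  # l1 prox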
|
this paper builds model - theoretic tools to detect changes in complexity among the simple theories. we develop a generalization of dividing, called shearing, which depends on a so - called context c. this leads to defining c - superstability, a syntactical notion, which includes supersimplicity as a special case. we prove a separation theorem showing that for any countable context c and any two theories $ t _ 1 $, $ t _ 2 $ such that $ t _ 1 $ is c - superstable and $ t _ 2 $ is c - unsuperstable, and for arbitrarily large $ \ mu $, it is possible to build models of any theory interpreting both $ t _ 1 $ and $ t _ 2 $ whose restriction to $ \ tau ( t _ 1 ) $ is $ \ mu $ - saturated and whose restriction to $ \ tau ( t _ 2 ) $ is not $ \ aleph _ 1 $ - saturated. ( this suggests " c - superstable " is really a dividing line. ) the proof uses generalized ehrenfeucht - mostowski models, and along the way, we clarify the use of these techniques to realize certain types while omitting others. in some sense, shearing allows us to study the interaction of complexity coming from the usual notion of dividing in simple theories and the more combinatorial complexity detected by the general definition. this work is inspired by our recent progress on keisler ' s order, but does not use ultrafilters, rather aiming to build up the internal model theory of these classes.
|
arxiv:1810.09604
|
let $ d \ geq 2 $ be an integer and let $ 2d / ( d - 1 ) < q \ leq \ infty $. in this paper we investigate the sharp form of the mixed norm fourier extension inequality \ begin { equation * } \ big \ | \ widehat { f \ sigma } \ big \ | _ { l ^ q _ { { \ rm rad } } l ^ 2 _ { { \ rm ang } } ( \ mathbb { r } ^ d ) } \ leq { \ bf c } _ { d, q } \, \ | f \ | _ { l ^ 2 ( \ mathbb { s } ^ { d - 1 }, { \ rm d } \ sigma ) }, \ end { equation * } established by l. vega in 1988. letting $ \ mathcal { a } _ d \ subset ( 2d / ( d - 1 ), \ infty ] $ be the set of exponents for which the constant functions on $ \ mathbb { s } ^ { d - 1 } $ are the unique extremizers of this inequality, we show that : ( i ) $ \ mathcal { a } _ d $ contains the even integers and $ \ infty $ ; ( ii ) $ \ mathcal { a } _ d $ is an open set in the extended topology ; ( iii ) $ \ mathcal { a } _ d $ contains a neighborhood of infinity $ ( q _ 0 ( d ), \ infty ] $ with $ q _ 0 ( d ) \ leq \ left ( \ tfrac { 1 } { 2 } + o ( 1 ) \ right ) d \ log d $. in low dimensions we show that $ q _ 0 ( 2 ) \ leq 6. 76 \, ; \, q _ 0 ( 3 ) \ leq 5. 45 \, ; \, q _ 0 ( 4 ) \ leq 5. 53 \, ; \, q _ 0 ( 5 ) \ leq 6. 07 $. in particular, this breaks for the first time the even exponent barrier in sharp fourier restriction theory. the crux of the matter in our approach is to establish a hierarchy between certain weighted norms of bessel functions, a nontrivial question of independent interest within the theory of special functions.
|
arxiv:1710.10365
|
we study the dissipative dynamics of a harmonic oscillator which couples linearly through its position and its momentum to two independent heat baths at the same temperature. we argue that this model describes a large spin in a ferromagnet. we find that some effects of the two heat baths partially cancel each other. this leads to unexpected features such as underdamped oscillations and long relaxation times in the strong coupling regime. such a partial frustration of dissipation can be ascribed to the canonically conjugate character of position and momentum. we compare this model to the scenario where a single heat bath couples linearly to both the position and the momentum of the central oscillator. in that case less surprising behavior occurs for strong coupling. the dynamical evolution of the quantum purity for a single and a double wave packet is also investigated.
|
arxiv:cond-mat/0608484
|
optically polarizable nitrogen - vacancy ( nv ) center in diamond enables the hyperpolarization of $ ^ { 13 } $ c nuclear spins at low magnetic field and room temperature. however, achieving a high level of polarization comparable to conventional dynamic nuclear polarization has remained challenging. here we demonstrate that, at below 10 mt, a $ ^ { 13 } $ c polarization of 5 % can be obtained, equivalent to an enhancement ratio over $ 7 \ times 10 ^ 6 $. we used high - purity diamond with a low initial nitrogen concentration ( $ < $ 1 ppm ), which also results in a long storage time exceeding 100 minutes. by aligning the magnetic field along [ 100 ], the number of nv spins participating in polarization transfer increases fourfold. we conducted a comprehensive optimization of field intensity and microwave ( mw ) frequency - sweep parameters for this field orientation. the optimum mw sweep width suggests that polarization transfer occurs primarily to bulk $ ^ { 13 } $ c spins through the integrated solid effect followed by nuclear spin diffusion.
|
arxiv:2409.19489
|
we present density functional theory calculations on the crystal structure, equation of state, vibrational properties and electronic structure of nitrogen - rich solid energetic material guanidinium 2 - methyl - 5 - nitraminotetrazolate ( g - mnat ). the ground state structural properties calculated with dispersion corrected density functionals are in good agreement with experiment. the computed equilibrium crystal structure is further used to calculate the equation of state and zone - center vibrational frequencies of the material. the electronic band structure is calculated, and the material is found to be an indirect band gap semiconductor with a gap value of 3. 04 ev.
|
arxiv:1402.3432
|
let $ h $ be a complex hilbert space and let $ { \ mathcal f } _ { s } ( h ) $ be the real vector space of all self - adjoint finite rank operators on $ h $. we prove the following non - injective version of wigner ' s theorem : every linear operator on $ { \ mathcal f } _ { s } ( h ) $ sending rank one projections to rank one projections ( without any additional assumption ) is either induced by a linear or conjugate - linear isometry or constant on the set of rank one projections.
|
arxiv:1808.02783
|
the analysis of the first two years of ogle data revealed 9 microlensing events of the galactic bulge stars, with the characteristic time scales in the range $ 8. 6 < t _ 0 < 62 $ days, where $ t _ 0 = r _ e / v $. the optical depth to microlensing is larger than $ ( 3. 3 \ pm 1. 2 ) \ times 10 ^ { - 6 } $, in excess of current theoretical estimates, indicating a much higher efficiency for microlensing by either bulge or disk lenses. we argue that the lenses are likely to be ordinary stars in the galactic bar, which has its long axis elongated towards us. a relation between $ t _ 0 $ and the lens masses remains unknown until a quantitative model of bar microlensing becomes available. at this time we have no evidence that the ogle events are related to dark matter. the geometry of lens distribution can be determined observationally when the microlensing rate is measured over a larger range of galactic longitudes, like $ - 10 ^ o < l < + 10 ^ o $, and the relative proper motions of the galactic bulge ( bar ) stars are measured with the hst.
|
arxiv:astro-ph/9407010
|
this paper addresses the issue of intellectual property management in the knowledge - based economy. the starting point in carrying out the study is the presentation, in the first phase, of some concepts regarding intellectual capital. arguments are made that the knowledge - based economy is a challenge for the current century. the subject of intellectual property is approached through the prism of a topical concept operationalized in the current global economic context. the main institutions that are directly related to this concept are mentioned. the topic of patents related to wos - indexed scientific papers is also debated, along with a series of statistics and studies on the state of patent protection worldwide in the top fields. the last part of the paper contains the conclusions and our own points of view on the debated topic.
|
arxiv:2501.08232
|
a finite group r is a ci - group if, whenever s and t are subsets of r with the cayley graphs cay ( r, s ) and cay ( r, t ) isomorphic, there exists an automorphism x of r with s ^ x = t. the classification of ci - groups is an open problem in the theory of cayley graphs and is closely related to the isomorphism problem for graphs. this paper is a contribution towards this classification, as we show that every dihedral group of order 6p, with p > 3 prime, is a ci - group.
|
arxiv:1402.4373
|
this paper presents results pertaining to sequential methods for support recovery of sparse signals in noise. specifically, we show that any sequential measurement procedure fails provided the average number of measurements per dimension grows slower than log s / d ( f0 | | f1 ) where s is the level of sparsity, and d ( f0 | | f1 ) the kullback - leibler divergence between the underlying distributions. for comparison, we show any non - sequential procedure fails provided the number of measurements grows at a rate less than log n / d ( f1 | | f0 ), where n is the total dimension of the problem. lastly, we show that a simple procedure termed sequential thresholding guarantees exact support recovery provided the average number of measurements per dimension grows faster than ( log s + log log n ) / d ( f0 | | f1 ), a mere additive factor more than the lower bound.
|
arxiv:1105.4540
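a toy python sketch of sequential thresholding as described above: each pass takes fresh measurements of the surviving coordinates and discards those whose statistic falls below a threshold, so the measurement budget concentrates on the likely support. the `sample` callback, the gaussian setup and the fixed threshold are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def sequential_thresholding(sample, n, passes=10, m_per_pass=25, thresh=0.5):
    """sample(i, m): returns m fresh noisy measurements of coordinate i."""
    alive = np.arange(n)
    for _ in range(passes):
        stats = np.array([sample(i, m_per_pass).mean() for i in alive])
        alive = alive[stats > thresh]       # discard unpromising coordinates
    return alive                            # estimated support

# toy usage: coordinates 0..4 carry unit signal, the rest are pure noise
rng = np.random.default_rng(1)
sample = lambda i, m: (1.0 if i < 5 else 0.0) + rng.standard_normal(m)
print(sequential_thresholding(sample, n=1000))  # typically recovers {0, ..., 4}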
|
the high - dimensional linear model $ y = x \ beta ^ 0 + \ epsilon $ is considered and the focus is put on the problem of recovering the support $ s ^ 0 $ of the sparse vector $ \ beta ^ 0. $ we introduce lasso - zero, a new $ \ ell _ 1 $ - based estimator whose novelty resides in an " overfit, then threshold " paradigm and the use of noise dictionaries concatenated to $ x $ for overfitting the response. to select the threshold, we employ the quantile universal threshold based on a pivotal statistic that requires neither knowledge nor preliminary estimation of the noise level. numerical simulations show that lasso - zero performs well in terms of support recovery and provides an excellent trade - off between high true positive rate and low false discovery rate compared to competitors. our methodology is supported by theoretical results showing that when no noise dictionary is used, lasso - zero recovers the signs of $ \ beta ^ 0 $ under weaker conditions on $ x $ and $ s ^ 0 $ than the lasso and achieves sign consistency for correlated gaussian designs. the use of noise dictionary improves the procedure for low signals.
|
arxiv:1805.05133
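a rough python sketch of the " overfit, then threshold " paradigm described above: gaussian noise dictionaries are appended to $ x $, an ( almost ) interpolating lasso fit is computed for each, and the aggregated coefficients on the real columns are thresholded. the quantile universal threshold of the paper is replaced by a user - supplied value `tau`, and the median aggregation across dictionaries is an assumption of this sketch.

import numpy as np
from sklearn.linear_model import Lasso

def lasso_zero(X, y, n_dicts=5, tau=0.1, rng=np.random.default_rng(0)):
    n, p = X.shape
    betas = []
    for _ in range(n_dicts):
        G = rng.standard_normal((n, n))          # one noise dictionary per fit
        Xa = np.hstack([X, G / np.linalg.norm(G, axis=0)])
        fit = Lasso(alpha=1e-6, max_iter=100000).fit(Xa, y)  # near-interpolation
        betas.append(fit.coef_[:p])              # keep only the real columns
    beta = np.median(betas, axis=0)              # aggregate over dictionaries
    beta[np.abs(beta) < tau] = 0.0               # threshold step
    return beta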
|
with dynamic electricity pricing, the operation of water distribution systems ( wds ) is expected to become more variable. the pumps, which move water from reservoirs to tanks and consumers, can serve as energy storage alternatives if properly operated. nevertheless, optimal wds scheduling is challenged by the hydraulic law, according to which the pressure along a pipe drops proportionally to its squared water flow. the optimal water flow ( owf ) task is formulated here as a mixed - integer non - convex problem incorporating flow and pressure constraints, critical for the operation of fixed - speed pumps, tanks, reservoirs, and pipes. the hydraulic constraints of the owf problem are subsequently relaxed to second - order cone constraints. to restore feasibility of the original non - convex constraints, a penalty term is appended to the objective of the relaxed owf. the modified problem can be solved as a mixed - integer second - order cone program, which is analytically shown to yield wds - feasible minimizers under certain sufficient conditions. under these conditions, by suitably weighting the penalty term, the minimizers of the relaxed problem can attain arbitrarily small optimality gaps, thus providing owf solutions. numerical tests using real - world demands and prices on benchmark wds demonstrate the relaxation to be exact even for setups where the sufficient conditions are not met.
|
arxiv:1806.07988
|
experimental tests of the standard model of particle physics ( sm ) find excellent agreement with its predictions. since the original formation of the sm, experiments have provided little guidance regarding the explanations of phenomena outside the sm, such as the baryon asymmetry and dark matter. nor have we understood the aesthetic and theoretical problems of the sm, despite years of searching for physics beyond the standard model ( bsm ) at particle colliders. some bsm particles can be produced at colliders yet evade being discovered, if the reconstruction and analysis procedures are not matched to the characteristics of the particle. an example is particles with large lifetimes. as interest in searches for such long - lived particles ( llps ) grows rapidly, a review of the topic is presented in this article. the broad range of theoretical motivations for llps and the experimental strategies and methods employed to search for them are described. results from decades of llp searches are reviewed, as are opportunities for the next generation of searches at both existing and future experiments.
|
arxiv:1810.12602
|
this paper shows how to construct explicitly an automaton that generates an arbitrary numerical semigroup.
|
arxiv:2303.12715
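this is not the paper's automaton construction, but a short python sketch showing the regular structure such an automaton exploits: membership in a numerical semigroup is decided by a simple dynamic program, and beyond the frobenius number every integer belongs to the semigroup, which is what makes a finite - state description possible.

def semigroup_members(gens, limit):
    member = [False] * (limit + 1)
    member[0] = True                      # the empty sum of generators
    for n in range(1, limit + 1):
        member[n] = any(n >= g and member[n - g] for g in gens)
    return [n for n in range(limit + 1) if member[n]]

# 0, 3, 5, 6, then everything from 8 upward: the frobenius number of <3, 5> is 7
print(semigroup_members([3, 5], 20))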
|
bloch oscillations ( bos ) of quantum particles manifest themselves as periodic spreading and re - localization of the associated wave functions when traversing lattice potentials subject to external gradient forces. although bos are deeply rooted in the very foundations of quantum mechanics, all experimental observations of this phenomenon so far have only contemplated dynamics of one or two particles initially prepared in separable local states, which is well described by classical wave physics. evidently, a more general description of genuinely quantum bos will be achieved upon excitation of a bloch - oscillator lattice system by nonlocal states, that is, containing correlations in contradiction with local realism. here we report the first experimental observation of bos of two - particle einstein - podolsky - rosen ( epr ) states, whose associated n - particle wave functions are nonlocal by nature. the time evolution of two - photon epr states in bloch - oscillators, whether symmetric, antisymmetric or partially symmetric, reveals unexpected transitions from particle antibunching to bunching. consequently, the initial state can be tailored to produce spatial correlations akin to bosons, fermions or anyons. these results pave the way for a wider class of photonic quantum simulators.
|
arxiv:1501.01764
|
the procrustes - wasserstein problem consists in matching two high - dimensional point clouds in an unsupervised setting, and has many applications in natural language processing and computer vision. we consider a planted model with two datasets $ x, y $ that consist of $ n $ datapoints in $ \ mathbb { r } ^ d $, where $ y $ is a noisy version of $ x $, up to an orthogonal transformation and a relabeling of the data points. this setting is related to the graph alignment problem in geometric models. in this work, we focus on the euclidean transport cost between the point clouds as a measure of performance for the alignment. we first establish information - theoretic results, in the high ( $ d \ gg \ log n $ ) and low ( $ d \ ll \ log n $ ) dimensional regimes. we then study computational aspects and propose the ping - pong algorithm, alternatively estimating the orthogonal transformation and the relabeling, initialized via a franke - wolfe convex relaxation. we give sufficient conditions for the method to retrieve the planted signal after one single step. we provide experimental results to compare the proposed approach with the state - of - the - art method of grave et al. ( 2019 ).
|
arxiv:2405.14532
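a minimal python sketch of the ping - pong idea described above, alternating an orthogonal procrustes step ( via svd ) with a relabeling step ( via linear assignment ). the franke - wolfe initialization of the paper is omitted and replaced by the identity relabeling, so this only illustrates the alternation itself.

import numpy as np
from scipy.optimize import linear_sum_assignment

def ping_pong(X, Y, n_iter=20):
    """X, Y: (n, d) point clouds with Y ~ permutation(X) @ Q + noise."""
    n = X.shape[0]
    perm = np.arange(n)                       # naive starting relabeling
    for _ in range(n_iter):
        # procrustes step: best orthogonal Q for the current relabeling
        U, _, Vt = np.linalg.svd(X[perm].T @ Y)
        Q = U @ Vt
        # relabeling step: assignment minimizing the euclidean transport cost
        cost = -(X @ Q) @ Y.T                 # maximize matched inner products
        row, col = linear_sum_assignment(cost)
        perm = np.empty(n, dtype=int)
        perm[col] = row                       # X[perm[j]] is matched to Y[j]
    return Q, perm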
|
predicting likely noise levels, determining an acceptable level for that noise, and determining how the noise can be controlled. environmental acoustics work is usually done by acoustic consultants or those working in environmental health. recent research work has put a strong emphasis on soundscapes, the positive use of sound ( e. g. fountains, bird song ), and the preservation of tranquility. = = = musical acoustics = = = musical acoustics is concerned with researching and describing the physics of music and its perception – how sounds employed as music work. this includes : the function and design of musical instruments including electronic synthesizers ; the human voice ( the physics and neurophysiology of singing ) ; computer analysis of music and composition ; the clinical use of music in music therapy, and the perception and cognition of music. = = = noise control = = = noise control is a set of strategies to reduce noise pollution by reducing noise at its source, by inhibiting sound propagation using noise barriers or similar, or by the use of ear protection ( earmuffs or earplugs ). control at the source is the most cost - effective way of providing noise control. noise control engineering applied to cars and trucks is known as noise, vibration, and harshness ( nvh ). other techniques to reduce product noise include vibration isolation, application of acoustic absorbent and acoustic enclosures. acoustical engineering can go beyond noise control to look at what is the best sound for a product, for instance, manipulating the sound of door closures on automobiles. = = = psychoacoustics = = = psychoacoustics tries to explain how humans respond to what they hear, whether that is an annoying noise or beautiful music. in many branches of acoustic engineering, a human listener is a final arbitrator as to whether a design is successful, for instance, whether sound localisation works in a surround sound system. " psychoacoustics seeks to reconcile acoustical stimuli and all the scientific, objective, and physical properties that surround them, with the physiological and psychological responses evoked by them. " = = = speech = = = speech is a major area of study for acoustical engineering, including the production, processing and perception of speech. this can include physics, physiology, psychology, audio signal processing and linguistics. speech recognition and speech synthesis are two important aspects of the machine processing of speech. ensuring speech is transmitted intelligibly, efficiently and with high quality ; in rooms, through public address systems and through telephone systems are other important areas of
|
https://en.wikipedia.org/wiki/Acoustical_engineering
|
for an $ n $ - vertex digraph $ g = ( v, e ) $, a \ emph { shortcut set } is a ( small ) subset of edges $ h $ taken from the transitive closure of $ g $ that, when added to $ g $, guarantees that the diameter of $ g \ cup h $ is small. shortcut sets, introduced by thorup in 1993, have a wide range of applications in algorithm design, especially in the context of parallel, distributed and dynamic computation on directed graphs. a folklore result in this context shows that every $ n $ - vertex digraph admits a shortcut set of linear size ( i. e., of $ o ( n ) $ edges ) that reduces the diameter to $ \ widetilde { o } ( \ sqrt { n } ) $. despite extensive research over the years, the question of whether one can reduce the diameter to $ o ( \ sqrt { n } ) $ with $ \ widetilde { o } ( n ) $ shortcut edges has been left open. we provide the first improved diameter - sparsity tradeoff for this problem, breaking the $ \ sqrt { n } $ diameter barrier. specifically, we show an $ o ( n ^ { \ omega } ) $ - time randomized algorithm for computing a linear shortcut set that reduces the diameter of the digraph to $ \ widetilde { o } ( n ^ { 1 / 3 } ) $. this narrows the gap w. r. t. the current diameter lower bound of $ \ omega ( n ^ { 1 / 6 } ) $ by [ huang and pettie, swat ' 18 ]. moreover, we show that a diameter of $ \ widetilde { o } ( n ^ { 1 / 2 } ) $ can in fact be achieved with a \ emph { sublinear } number of $ o ( n ^ { 3 / 4 } ) $ shortcut edges. formally, letting $ s ( n, d ) $ be the bound on the size of the shortcut set required in order to reduce the diameter of any $ n $ - vertex digraph to at most $ d $, our algorithms yield : \ [ s ( n, d ) = \ begin { cases } \ widetilde { o } ( n ^ 2 / d ^ 3 ), & \ text { for ~ } d \ leq n ^ { 1 / 3 }, \ \ \ widetilde { o } ( ( n / d ) ^ { 3 / 2 } ), & \ text { for ~ } d \ geq n ^ { 1 / 3 }. \ end { cases } \ ]
|
arxiv:2111.13240
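for intuition, here is a python sketch of the folklore construction mentioned above ( the baseline the paper improves on, not the paper's algorithm ): sample roughly sqrt(n) * log(n) vertices and shortcut every reachable pair among them; a long path hits the sample about every sqrt(n) hops with high probability, so the diameter drops to roughly sqrt(n) while the edge count stays near-linear.

import math, random
import networkx as nx

def folklore_shortcuts(G, seed=0):
    random.seed(seed)
    n = G.number_of_nodes()
    k = int(math.sqrt(n) * math.log(n + 1)) + 1
    S = random.sample(list(G.nodes), min(k, n))
    H = []
    for u in S:
        reach = nx.descendants(G, u)          # transitive-closure targets of u
        H += [(u, v) for v in S if v in reach]
    return H                                  # about |S|^2, i.e. O~(n), edges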
|
the spin - orbit coupling may generate a spin transverse force on a moving electron spin, which gives a heuristic picture of the quantum transverse transport of electrons. a relation between the spin and anomalous hall conductances and the spin force was established, and applied to several systems. it was predicted that a sign change of the anomalous hall conductance can occur in diluted magnetic semiconductors with a narrow band, which can be used to identify the intrinsic mechanism experimentally.
|
arxiv:cond-mat/0601152
|
poisson ' s equation plays a fundamental role as a tool for performance evaluation and optimization of markov chains. for continuous - time birth - death chains with possibly unbounded transition and cost rates as addressed herein, when analytical solutions are unavailable its numerical solution can in theory be obtained by a simple forward recurrence. yet, this may suffer from numerical instability, which can hide the structure of exact solutions. this paper presents three main contributions : ( 1 ) it establishes a structural result ( convexity of the relative cost function ) under mild conditions on transition and cost rates, which is relevant for proving structural properties of optimal policies in markov decision models ; ( 2 ) it elucidates the root cause, extent and prevalence of instability in numerical solutions by standard forward recurrence ; and ( 3 ) it presents a novel forward - backward recurrence scheme to compute accurate numerical solutions. the results are applied to the accurate evaluation of the bias and the asymptotic variance, and are illustrated in an example.
|
arxiv:2207.13550
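a small numpy sketch of the forward recurrence discussed above for a truncated birth - death chain, using the convention c ( i ) - g + ( qh ) ( i ) = 0 with birth rates lam, death rates mu, and increments d [ i ] = h [ i + 1 ] - h [ i ]. as the abstract notes, this forward march can be numerically unstable, which is what motivates the paper's forward - backward scheme ( not reproduced here ).

import numpy as np

def forward_recurrence(lam, mu, c, g):
    """lam, mu, c: rates and costs on states 0..n; g: average cost."""
    n = len(c) - 1
    d = np.zeros(n)                      # d[i] = h[i+1] - h[i]
    d[0] = (g - c[0]) / lam[0]           # boundary: mu[0] = 0
    for i in range(1, n):
        # lam[i]*d[i] - mu[i]*d[i-1] = g - c[i]
        d[i] = (mu[i] * d[i - 1] + g - c[i]) / lam[i]
    # h with h[0] = 0; the equation at the truncation boundary n is not enforced
    return np.concatenate(([0.0], np.cumsum(d)))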
|
a fantastic resource for learning about the inner workings of everyday items... break [ ing ] down complex concepts into easy - to - understand explanations, providing viewers with a greater appreciation for the technology that surrounds them ". lifehacker ' s michelle ehrhardt wrote that watson ' s " documentary style approach is comprehensive yet approachable, and while topics often have some bearing on what you have in your house right now, the channel has also done lgr oddware - style breakdowns on odd trends or gadgets that aren ' t really around anymore ". ehrhardt called watson " a sort of guru for home appliances ", " explain [ ing ] the history and methodology behind common devices like air conditioners, dishwashers, and power outlets in a genuinely fun way that might also teach you a few tricks and tips that will make your life better ". adam juniper, writing in digital camera world, called watson and free ' s video on the magicube " a brilliant job of placing the different single - use flash technologies in context — historically and economically — showing how they work and then going above and beyond in explaining exactly how they work ". watson ' s video on the automatic sunbeam radiant toaster went viral in 2019, with sean hollister of the verge praising it as " [ possibly ] the smartest thing you watch today ". hollister similarly praised watson ' s video detailing the mechanics of the popcorn button present on most consumer microwaves. the channel has also received praise from academics. the media studies scholar marek jancovic called watson ' s video on the famous ringer of the western electric model 500 telephone — in which watson deduces that modern feature films still use a sample of the ring derived from a sound effect lp record pressed off - center and severely warped — an example of what jancovic calls " media epigraphy ". jancovic wrote that watson ' s findings represent " impressive deductions [ w ] orthy of a detective novel ". dan macisaac, a professor of physics at suny buffalo state, has praised watson ' s explainers on home wiring, calling some of the concepts discussed illuminating, particularly on the details of plug design, electrical outlet orientation, north american home wiring, and the dangers of certain extension cords. macisaac recommended some technology connections videos as supplementary material for his introductory electromagnetism course. in 2023, watson published a video on the lack of use of brake lights in some electric vehicles during regenerative braking. he demonstrated
|
https://en.wikipedia.org/wiki/Technology_Connections
|
we review new constraints on the yukawa - type corrections to newtonian gravity obtained recently from gravitational experiments and from the measurements of the casimir force. special attention is paid to the constraints following from the most precise dynamic determination of the casimir pressure between the two parallel plates by means of a micromechanical torsional oscillator. the possibility of setting limits on the predictions of chameleon field theories using the results of gravitational experiments and casimir force measurements is discussed.
|
arxiv:0802.0866
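for reference, the yukawa - type correction to newtonian gravity discussed above is commonly parametrized as an exponential addition to the potential between two point masses,

\[
V(r) = - \frac{G m_1 m_2}{r} \left( 1 + \alpha \, e^{-r/\lambda} \right),
\]

where $ \alpha $ is the dimensionless interaction strength and $ \lambda $ the range; the experiments mentioned above translate into exclusion regions in the ( $ \lambda $, $ \alpha $ ) plane. this is the standard parametrization in this literature, not a formula quoted from the paper itself.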
|
aims. the statistical equilibrium of neutral and ionized silicon in the atmospheres of metal - poor stars is discussed. non - local thermodynamic equilibrium effects are investigated and the silicon abundances in metal - poor stars determined. methods. we have used high resolution, high signal to noise ratio spectra from the uves spectrograph at the eso vlt telescope. line formation calculations of si i and si ii in the atmospheres of metal - poor stars are presented for atomic models of silicon including 174 terms and 1132 line transitions. recent improved calculations of si i and si ii photoionization cross - sections are taken into account, and the influence of the free - free quasi - molecular absorption in the ly alpha wing is investigated by comparing theoretical and observed fluxes of metal - poor stars. all abundance results are derived from lte and nlte statistical equilibrium calculations and spectrum synthesis methods. results. it is found that the extreme ultraviolet radiation is very important for metal - poor stars, especially for the high temperature, very metal - poor stars. the radiative bound - free cross - sections also play a very important role for these stars. conclusions. nlte effects for si are found to be important for metal - poor stars, in particular for warm metal - poor stars. it is found that these effects depend on the temperature. for warm metal - poor stars, the nlte abundance correction reaches ~ 0. 2 dex relative to standard lte calculations. our results indicate that si is overabundant for metal - poor stars.
|
arxiv:0907.4928
|
we study $ gl ( 2 ) $ - structures on differential manifolds. the structures play a fundamental role in the geometric theory of ordinary differential equations. we prove that any $ gl ( 2 ) $ - structure on an even dimensional manifold gives rise to a certain almost - complex structure on a bundle over the original manifold. further, we exploit a natural notion of integrability for the $ gl ( 2 ) $ - structures, which is a counterpart of the self - duality for the 4 - dimensional conformal structures. we relate the integrability of the $ gl ( 2 ) $ - structures to the integrability of the almost - complex structures. this allows us to perform a twistor - like construction for the $ gl ( 2 ) $ - geometry. moreover, we provide an explicit construction of a canonical connection for any $ gl ( 2 ) $ - structure.
|
arxiv:1910.12669
|
in this paper, we present the lingoly benchmark, a novel benchmark for advanced reasoning abilities in large language models. using challenging linguistic olympiad puzzles, we evaluate ( i ) capabilities for in - context identification and generalisation of linguistic patterns in very low - resource or extinct languages, and ( ii ) abilities to follow complex task instructions. the lingoly benchmark covers more than 90 mostly low - resource languages, minimising issues of data contamination, and contains 1, 133 problems across 6 formats and 5 levels of human difficulty. we assess performance with both direct accuracy and comparison to a no - context baseline to penalise memorisation. scores from 11 state - of - the - art llms demonstrate the benchmark to be challenging, and models perform poorly on the higher difficulty problems. on harder problems, even the top model only achieved 38. 7 % accuracy, a 24. 7 % improvement over the no - context baseline. large closed models typically outperform open models, and in general, the higher resource the language, the better the scores. these results indicate, in absence of memorisation, true multi - step out - of - domain reasoning remains a challenge for current language models.
|
arxiv:2406.06196
|
this paper presents translatotron 3, a novel approach to unsupervised direct speech - to - speech translation from monolingual speech - text datasets by combining masked autoencoder, unsupervised embedding mapping, and back - translation. experimental results in speech - to - speech translation tasks between spanish and english show that translatotron 3 outperforms a baseline cascade system, reporting $ 18. 14 $ bleu points improvement on the synthesized unpaired - conversational dataset. in contrast to supervised approaches that necessitate real paired data, or specialized modeling to replicate para - / non - linguistic information such as pauses, speaking rates, and speaker identity, translatotron 3 showcases its capability to retain it. audio samples can be found at http : / / google - research. github. io / lingvo - lab / translatotron3
|
arxiv:2305.17547
|
in this second part we study first the group $ aut _ { \ mathbb { q } } ( s ) $ of numerically trivial automorphisms of a properly elliptic surface $ s $, that is, of a minimal surface with kodaira dimension $ \ kappa ( s ) = 1 $, in the case $ \ chi ( s ) \ geq 1 $. our first surprising result is that, against what has been believed for over 40 years, we have nontrivial such groups for $ p _ g ( s ) > 0 $. indeed, we show even that there is no absolute upper bound for their cardinalities $ | aut _ { \ mathbb { q } } ( s ) | $. at any rate, we give explicit and nearly optimal upper bounds for $ | aut _ { \ mathbb { q } } ( s ) | $ in terms of the numerical invariants of $ s $, as $ \ chi ( s ) $, or the irregularity $ q ( s ) $, or the bigenus $ p _ 2 ( s ) $. moreover, we come quite close to a complete description of the possible groups $ aut _ { \ mathbb { q } } ( s ) $ as 2 - generated finite abelian groups, and we give an effective criterion for surfaces to have trivial $ aut _ { \ mathbb { q } } ( s ) $. our second surprising results concern the group $ aut _ { \ mathbb { z } } ( s ) $ of cohomologically trivial automorphisms ; we are able to give the explicit upper bounds for $ | aut _ { \ mathbb { z } } ( s ) | $ in special cases : $ 9 $ when $ p _ g ( s ) = 0 $, and the sharp upper bound $ 3 $ when $ s $ ( i. e., the pluricanonical elliptic fibration ) is isotrivial. we produce also non isotrivial examples where $ aut _ { \ mathbb { z } } ( s ) $ is a cyclic group of order $ 2 $ or $ 3 $.
|
arxiv:2412.17033
|
we propose a method to identify quasars radiating closest to the eddington limit, defining primary and secondary selection criteria in the optical, uv and x - ray spectral range based on the 4d eigenvector 1 formalism. we then show that it is possible to derive a redshift - independent estimate of luminosity for extreme eddington ratio sources. using preliminary samples of these sources in three redshift intervals ( as well as two mock samples ), we test a range of cosmological models. results are consistent with concordance cosmology but the data are insufficient for deriving strong constraints. mock samples indicate that application of the method proposed in this paper using dedicated observations would allow to set stringent limits on omega _ m and significant constraints on omega _ lambda.
|
arxiv:1405.2727
|
we investigate the dynamical importance of a newly recognized possible source of significant feedback generated during structure formation ; namely cosmic ray ( cr ) pressure. we present evidence for the existence of numerous shocks in the hot gas of galaxy clusters ( gcs ). we employ for the first time an explicit numerical treatment of cr acceleration and transport in hydro simulations of structure formation. according to our results, crs provide an important fraction of the total pressure inside gcs, up to several tenths. this was true even at high redshift ( z = 2 ), meaning that such non - thermal component could affect the evolution of structure formation.
|
arxiv:astro-ph/0005445
|
examining a catalogue of isolated galaxy pairs, a preferred orbital intervelocity of ~ 150 km / s was recently reported. this discovery is difficult to reconcile with the expectations from newtonian numerical simulations of cosmological structure formations. in a previous paper we have shown that a preferred intervelocity for galaxy pairs is expected in modified newtonian dynamics ( mond ). here a detailed analysis of the mond predictions is presented, showing that a remarkable agreement with the observations can be achieved for a mass to light ratio m / l ~ 1 in solar units. this agrees with the expectations for a typical stellar population, without requiring non - baryonic dark matter for these systems.
|
arxiv:2202.13766
|
in this work, we study the evolution of the susceptible individuals during the spread of an epidemic modeled by the susceptible - infected - recovered ( sir ) process spreading on top of complex networks. using an edge - based compartmental approach and percolation tools, we find that a time - dependent quantity $ \ phi _ s ( t ) $, namely, the probability that a given neighbor of a node is susceptible at time $ t $, is the control parameter of a node void percolation process involving those nodes on the network not reached by the disease. we show that there exists a critical time $ t _ c $ above which the giant susceptible component is destroyed. as a consequence, in order to preserve a macroscopic connected fraction of the network composed of healthy individuals, which guarantees its functionality, any mitigation strategy should be implemented before this critical time $ t _ c $. our theoretical results are confirmed by extensive simulations of the sir process.
|
arxiv:1206.2720
|
evolving software is challenging, even more so when it exists in many different variants. such software evolves not only in time, but also in space - - another dimension of complexity. while evolution in space is supported by a variety of product - line and variability management tools, many of which originate from research, their level of evaluation varies significantly, which threatens their relevance for practitioners and future research. many tools have only been evaluated on ad hoc datasets, minimal examples or available preprocessor - based product lines, missing the early clone & own phases and the re - engineering into configurable platforms - - large parts of the actual evolution lifecycle of variant - rich systems. our long - term goal is to provide benchmarks to increase the maturity of evaluating such tools. however, providing manually curated benchmarks that cover the whole evolution lifecycle and that are detailed enough to serve as ground truths, is challenging. we present the framework vpbench, which generates source - code histories of variant - rich systems. vpbench comprises several modular generators relying on evolution operators that systematically and automatically evolve real codebases and document the evolution in detail. we provide simple and more advanced generators - - e. g., relying on code transplantation techniques to obtain whole features from external, real - world projects. we define requirements and demonstrate how vpbench addresses them for the generated version histories, focusing on support for evolution in time and space, the generation of detailed meta - data about the evolution, also considering compileability and extensibility.
|
arxiv:2112.01315
|
long - wave near - infrared ( lwnir ) dyes have garnered significant attention, particularly in biomedical applications, due to their ability to enhance light absorption, making them highly effective for in vivo imaging and phototherapy. among these dyes, cyanines are notable for their broad tunability across the ultraviolet ( uv ) to lwnir spectrum and their ability to form j - aggregates, which result in narrow absorption and enhanced emission peaks, often accompanied by a red - shift in their spectra. in this study, we investigate the fluorescence properties of three known tricarbocyanine dyes, uncovering new emission bands in the lwnir region. these dyes exhibit two distinct fluorescence peaks between 1605 and 1840 nm, with an emission tail extending up to ~ 2200 nm. the intensity of these peaks varies depending on dye concentration. furthermore, we assess the photostability, ph sensitivity, and thermal stability of the dyes, providing key insights into their potential for stable and efficient biomedical applications. our study provides a deep investigation of the spectral characteristics of these dyes, seeking to enhance their potential application in biomedical imaging.
|
arxiv:2505.02602
|
in this paper, we examine and analyze the challenges associated with developing and introducing language technologies to low - resource language communities. while doing so, we bring to light the successes and failures of past work in this area, challenges being faced in doing so, and what they have achieved. throughout this paper, we take a problem - facing approach and describe essential factors which the success of such technologies hinges upon. we present the various aspects in a manner which clarifies and lays out the different tasks involved, which can aid organizations looking to make an impact in this area. we take the example of gondi, an extremely low - resource indian language, to reinforce and complement our discussion.
|
arxiv:1912.03457
|
the field of galactic archaeology aims to understand the origins and evolution of the stellar populations in the milky way, as a way to understand galaxy formation and evolution in general. the galah ( galactic archaeology with hermes ) survey is an ambitious australian - led project to explore the galactic history of star formation, chemical evolution, minor mergers and stellar migration. galah is using the hermes spectrograph, a novel, highly multiplexed, four - channel high - resolution optical spectrograph, to collect high - quality r ~ 28, 000 spectra for one million stars in the milky way. from these data we will determine stellar parameters, radial velocities and abundances for up to 29 elements per star, and carry out a thorough chemical tagging study of the nearby galaxy. there are clear complementarities between galah and other ongoing and planned galactic archaeology surveys, and also with ancillary stellar data collected by major cosmological surveys. combined, these data sets will provide a revolutionary view of the structure and history of the milky way.
|
arxiv:1507.00079
|
excitons in quantum dots are excellent sources of polarization - entangled photon pairs, but a quantitative understanding of their interaction with the nuclear spin bath is still missing. here we investigate the role of hyperfine energy shifts using experimentally accessible parameters and derive an upper limit to the achievable entanglement fidelity. our results are consistent with all available literature, indicate that spin - noise is often the dominant process limiting the entanglement in ingaas quantum dots, and suggest routes to alleviate its effect.
|
arxiv:2302.05983
|
we study the effect of a magnetic field on strange quark matter and apply the results to strange stars. we find that the strange star becomes more compact in the presence of a strong magnetic field.
|
arxiv:hep-ph/9508251
|
nowadays, academic research relies not only on sharing with the academic community the scientific results obtained by research groups while studying certain phenomena, but also on sharing computer codes developed within the community. in the field of atomistic modeling, these were at first software packages for classical atomistic modeling, later quantum - mechanical modeling, and now, with the fast growth of the field of machine - learning potentials, packages implementing such potentials. in this paper we present the mlip - 3 package for constructing moment tensor potentials and performing their active training. this package builds on the mlip - 2 package ( novikov et al. ( 2020 ), the mlip package : moment tensor potentials with mpi and active learning. machine learning : science and technology, 2 ( 2 ), 025002. ), though with a number of improvements, including active learning on atomic neighborhoods of a possibly large atomistic simulation.
|
arxiv:2304.13144
|
in this work, we develop a method named twinning, for partitioning a dataset into statistically similar twin sets. twinning is based on split, a recently proposed model - independent method for optimally splitting a dataset into training and testing sets. twinning is orders of magnitude faster than the split algorithm, which makes it applicable to big data problems such as data compression. twinning can also be used for generating multiple splits of a given dataset to aid divide - and - conquer procedures and $ k $ - fold cross validation.
|
arxiv:2110.02927
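a toy python sketch in the spirit of twinning: repeatedly pair an unassigned point with its nearest unassigned neighbour and send the two to opposite halves, so both halves trace the same regions of space. the actual algorithm is substantially cleverer and faster ( this naive version rebuilds a kd - tree every step and is quadratic ), so treat it only as an illustration of the goal.

import numpy as np
from scipy.spatial import cKDTree

def naive_twin(X, rng=np.random.default_rng(0)):
    """X: (n, d) array; returns index arrays of two statistically similar halves."""
    unassigned = list(range(len(X)))
    rng.shuffle(unassigned)
    A, B = [], []
    cur = unassigned.pop()
    while unassigned:
        rest = np.array(unassigned)
        nn = int(rest[cKDTree(X[rest]).query(X[cur])[1]])  # nearest unassigned
        unassigned.remove(nn)
        A.append(cur); B.append(nn)          # split the pair across the halves
        cur = unassigned.pop() if unassigned else None
    return np.array(A), np.array(B)          # with odd n, one point is dropped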
|
the robot ' s objective is to rehabilitate the pipe joints of fresh water supply systems by crawling into water canals and applying a restoration material to repair the pipes. the robot ' s structure consists of six wheeled - legs, three on the front separated by 120 { \ deg } and three on the back in the same configuration, supporting the structure along the centre of the pipe. in this configuration the robot is able to clean and seal with a rotating tool, similar to a cylindrical robot, covering the entire 3d in - pipe space.
|
arxiv:2001.10057
|
self - supervised learning ( ssl ) has recently achieved tremendous empirical advancements in learning image representation. however, our understanding of the principle behind learning such a representation is still limited. this work shows that joint - embedding ssl approaches primarily learn a representation of image patches, which reflects their co - occurrence. such a connection to co - occurrence modeling can be established formally, and it supplements the prevailing invariance perspective. we empirically show that learning a representation for fixed - scale patches and aggregating local patch representations as the image representation achieves similar or even better results than the baseline methods. we denote this process as bagssl. even with 32x32 patch representation, bagssl achieves 62 % top - 1 linear probing accuracy on imagenet. on the other hand, with a multi - scale pretrained model, we show that the whole image embedding is approximately the average of local patch embeddings. while the ssl representation is relatively invariant at the global scale, we show that locality is preserved when we zoom into local patch - level representation. further, we show that patch representation aggregation can improve various sota baseline methods by a large margin. the patch representation is considerably easier to understand, and this work takes a step toward demystifying self - supervised representation learning.
|
arxiv:2206.08954
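a minimal pytorch sketch of the aggregation step described above: embed fixed - scale patches independently with any pretrained joint - embedding ssl encoder and use the average of the patch embeddings as the image representation. the `encoder`, patch size and stride here are placeholders, not the paper's exact configuration.

import torch

def bag_of_patches_embedding(images, encoder, patch=32, stride=32):
    """images: (b, c, h, w) tensor; returns one embedding per image."""
    patches = images.unfold(2, patch, stride).unfold(3, patch, stride)
    b, c, nh, nw, _, _ = patches.shape
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, patch, patch)
    with torch.no_grad():
        z = encoder(patches)                      # (b * nh * nw, dim)
    return z.reshape(b, nh * nw, -1).mean(dim=1)  # average patch embeddings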
|
full one - loop electroweak corrections for an $ e ^ - e ^ + \ rightarrow t \ bar { t } $ process associated with sequential $ t \ rightarrow b \ mu \ nu _ \ mu $ decay are discussed. at the one - loop level, the spin - polarization effects of the initial electron and positron beams are included in the total and differential cross sections. a narrow - width approximation is used to treat the top - quark production and decay while including full spin correlations between them. we observed that the radiative corrections due to the weak interaction have a large polarization dependence in both the total and differential cross sections. therefore, experimental observables that depend on angular distributions, such as the forward - backward asymmetry of the top production angle, must be treated carefully including radiative corrections. we also observed that the energy distribution of bottom quarks is significantly affected by the radiative corrections.
|
arxiv:1706.03432
|
we calculate the frequency and damping of the scissors mode in a classical gas as a function of temperature and coupling strength. our results show good agreement with the main features observed in recent measurements of the scissors mode in an ultracold gas of $ ^ 6 $ li atoms. the comparison between theory and experiment involves no fitting parameters and thus allows an identification of non - classical effects at and near the unitarity limit.
|
arxiv:0709.1617
|
essay selected for honorable mention 2014 by the gravity research foundation. we study an isothermal system of semi - degenerate self - gravitating fermions in general relativity ( gr ). the most general solutions present mass density profiles with a central degenerate compact core governed by quantum statistics followed by an extended plateau, and ending in a power law behaviour $ r ^ { - 2 } $. by fixing the fermion mass $ m $ in the kev regime, the different solutions depending on the free parameters of the model : the degeneracy and temperature parameters at the center, are systematically constructed along the one - parameter sequences of equilibrium configurations up to the critical point, which is represented by the maximum in a central density ( $ \ rho _ 0 $ ) vs. core mass ( $ m _ c $ ) diagram. we show that for fully degenerate cores, the oppenheimer - volkoff ( ov ) mass limit $ m _ { c } ^ { cr } \ propto m _ { pl } ^ 3 / m ^ 2 $ is obtained, while instead for low degenerate cores, the critical core mass increases showing the temperature effects in a non linear way. the main result of this work is that when applying this theory to model the distribution of dark matter in big elliptical galaxies from miliparsec distance - scales up to $ 10 ^ 2 $ kpc, we do not find any critical core - halo configuration of self - gravitating fermions, able to explain both the most super - massive dark object at their center together with the dm halo simultaneously.
|
arxiv:1405.7505
|
we describe the in - orbit performance of the soft x - ray imaging system consisting of the soft x - ray telescope and the soft x - ray imager aboard hitomi. verification and calibration of imaging and spectroscopic performance are carried out making the best use of the limited data of less than three weeks. basic performance including a large field of view of 38 ' x38 ' is verified with the first light image of the perseus cluster of galaxies. amongst the small number of observed targets, the on - minus - off pulse image for the out - of - time events of the crab pulsar enables us to measure a half power diameter of the telescope as about 1. 3 '. the average energy resolution measured with the onboard calibration source events at 5. 89 kev is 179 $ \ pm $ 3 ev in full width at half maximum. light leak and cross talk issues affected the effective exposure time and the effective area, respectively, because all the observations were performed before optimizing an observation schedule and parameters for the dark level calculation. screening the data affected by these two issues, we measure the background level to be 5. 6x10 ^ { - 6 } counts s ^ { - 1 } arcmin ^ { - 2 } cm ^ { - 2 } in the energy band of 5 - 12 kev, which is seven times lower than that of the suzaku xis - bi.
|
arxiv:1709.08829
|
detection rules have traditionally been designed for rational agents that minimize the bayes risk ( average decision cost ). with the advent of crowd - sensing systems, there is a need to redesign binary hypothesis testing rules for behavioral agents, whose cognitive behavior is not captured by traditional utility functions such as bayes risk. in this paper, we adopt prospect theory based models for decision makers. we consider special agent models namely optimists and pessimists in this paper, and derive optimal detection rules under different scenarios. using an illustrative example, we also show how the decision rule of a human agent deviates from the bayesian decision rule under various behavioral models, considered in this paper.
|
arxiv:1610.01085
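an illustrative python sketch of how a behavioral agent's likelihood - ratio threshold can deviate from the bayesian one. a prelec - style probability weighting function is assumed here for concreteness; the paper's specific prospect - theoretic value and weighting functions may differ.

import numpy as np

def weight(p, gamma=0.7):
    return np.exp(-(-np.log(p)) ** gamma)     # prelec probability distortion

def lrt_threshold(p0, c_fa, c_md, gamma=None):
    """bayes threshold when gamma is None, else a behaviorally distorted one."""
    w0, w1 = (p0, 1 - p0) if gamma is None else (weight(p0, gamma),
                                                 weight(1 - p0, gamma))
    return (w0 * c_fa) / (w1 * c_md)          # decide h1 when LR(y) exceeds this

print(lrt_threshold(0.8, 1.0, 1.0))             # rational agent: 4.0
print(lrt_threshold(0.8, 1.0, 1.0, gamma=0.7))  # distorted agent: about 2.8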
|
we propose a simplified model of outflow / jet driven by the blandford - payne ( bp ) process from advection dominated accretion flows ( adaf ) and derive the expressions of the bp power and disk luminosity based on the conservation laws of mass, angular momentum and energy. we fit the 2 - - 10 kev luminosity and kinetic power of 15 active galactic nuclei ( agns ) of sub - eddington luminosity. it is found that there exists an anti - correlation between the accretion rate and the advection parameter, which could be used to explain the correlation between eddington - scaled kinetic power and bolometric luminosity of the 15 samples. in addition, the ledlow - owen relation for fr i / ii dichotomy is re - expressed in a parameter space consisting of logarithm of dimensionless accretion rate versus that of the bh mass. it turns out that the fr i / ii dichotomy is determined mainly by the dimensionless accretion rate, being insensitive to the bh mass, and the dividing accretion rate is less than the critical accretion rate for adafs, suggesting that fr i sources are all in the adaf state.
|
arxiv:0906.1323
|
we study the twisted cohomology groups of $ a _ \ infty $ - algebras defined by twisting elements and their behavior under morphisms and homotopies using the bar construction. we define higher massey products on the cohomology groups of general $ a _ \ infty $ - algebras and establish the naturality under morphisms and their dependency on defining systems. the above constructions are also considered for $ c _ \ infty $ - algebras. we construct a spectral sequence converging to the twisted cohomology groups and show that the higher differentials are given by the $ a _ \ infty $ - algebraic massey products.
|
arxiv:0912.1775
|
the observational data in the near - infrared bands ( h and k ) have been used for modeling the mean light curves. the visual observational data have been fitted in the same way, and the infrared and visual mean light curves were compared. all parameters and fourier coefficients of the mean light curves were obtained. a periodogram analysis of the brightness variations has also been carried out.
|
arxiv:1607.03722
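a generic numpy sketch of the kind of fit from which the fourier coefficients mentioned above are read off: a low - order fourier series in the pulsation phase, fitted by least squares. the order and phase convention are assumptions, not details from the paper.

import numpy as np

def fit_mean_light_curve(phase, mag, order=4):
    """phase: pulsation phase in [0, 1); mag: observed magnitudes."""
    cols = [np.ones_like(phase)]
    for k in range(1, order + 1):
        cols += [np.cos(2 * np.pi * k * phase), np.sin(2 * np.pi * k * phase)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, mag, rcond=None)   # least-squares fit
    amp = np.hypot(coef[1::2], coef[2::2])           # fourier amplitudes A_k
    return coef, amp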
|