text | source
---|---
As a consequence of the huge advancement of the electronic health record (EHR) in healthcare settings, the My Health Record (MHR) was introduced in Australia. However, security and privacy concerns have been encumbering the development of the system. Even though the MHR system is claimed to be patient-centred and patient-controlled, there are several instances where healthcare providers (other than the usual provider) and the system operators who maintain the system can easily access it, and these unauthorised accesses can lead to a breach of patient privacy. This is one of the main consumer concerns affecting uptake of the system. In this paper, we propose a patient-centred MHR framework that requests authorisation from the patient before accessing their sensitive health information. The proposed model increases the involvement and satisfaction of patients in their healthcare, and also suggests a mobile security system for granting online permission to access the MHR system.
|
arxiv:1802.00575
|
Assuming a conjectural upper bound for the least prime in an arithmetic progression, we show that n-bit integers may be multiplied in O(n log n · 4^(log* n)) bit operations.
|
arxiv:1611.07144
|
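The integer-multiplication bound above uses the iterated logarithm log*(n), the number of times log2 must be applied before the value drops to at most 1. A minimal sketch (the function name is ours) shows how slowly it grows:

```python
import math

def log_star(n: float) -> int:
    """Iterated base-2 logarithm: how many applications of log2 until n <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

print(log_star(16))        # 16 -> 4 -> 2 -> 1, so 3
print(log_star(2**65536))  # even this astronomically large input gives only 5
```

Because log*(n) is at most 5 for any physically representable input, the 4^(log* n) factor in the bound is effectively a constant.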
Electronic properties of a graphene layer sandwiched between two hexagonal boron nitride sheets have been studied using first-principles calculations and a minimal tight-binding model. It is shown that for the ABC-stacked structure, in the absence of an external field, the bands are linear in the vicinity of the Dirac points, as in the case of single-layer graphene. For a certain atomic configuration, the electric-field effect allows opening of a band gap of over 230 meV. We believe that this mechanism of energy-gap tuning could significantly improve the characteristics of graphene-based field-effect transistors and pave the way for future electronic applications.
|
arxiv:1007.3238
|
The Fokker-Planck equation can be used in a partially coherent imaging context to model the evolution of the intensity of a paraxial X-ray wave field with propagation. This forms a natural generalisation of the transport-of-intensity equation. The X-ray Fokker-Planck equation can simultaneously account for both propagation-based phase contrast and the diffusive effects of sample-induced small-angle X-ray scattering, when forming an X-ray image of a thin sample. Two derivations are given for the Fokker-Planck equation associated with X-ray imaging, together with a Kramers-Moyal generalisation thereof. Both equations are underpinned by the concept of unresolved speckle due to unresolved sample micro-structure. These equations may be applied to the forward problem of modelling image formation in the presence of both coherent and diffusive energy transport. They may also be used to formulate associated inverse problems of retrieving the phase shifts due to a sample placed in an X-ray beam, together with the diffusive properties of the sample. The domain of applicability of the Fokker-Planck and Kramers-Moyal equations for paraxial imaging is at least as broad as that of the transport-of-intensity equation which they generalise; hence the technique is also expected to be useful for paraxial imaging using visible light, electrons and neutrons.
|
arxiv:1908.01473
|
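For context on the abstract above, the structure it describes can be written schematically (our notation, a sketch rather than the paper's exact equations): the transport-of-intensity equation gains a diffusive term, with I the intensity, phi the phase, k the wavenumber and D(x, y) an effective diffusion coefficient encoding unresolved speckle from sample micro-structure.

```latex
% Transport-of-intensity equation and its Fokker-Planck-type generalisation
% (schematic; D(x,y) is an assumed effective diffusion coefficient):
\frac{\partial I}{\partial z}
  = -\frac{1}{k}\,\nabla_{\perp}\cdot\left(I\,\nabla_{\perp}\phi\right)
\qquad\longrightarrow\qquad
\frac{\partial I}{\partial z}
  = -\frac{1}{k}\,\nabla_{\perp}\cdot\left(I\,\nabla_{\perp}\phi\right)
    + \nabla_{\perp}^{2}\!\left[D(x,y)\,I\right]
```

The first term carries the coherent (phase-contrast) energy transport, while the added Laplacian term carries the diffusive transport due to small-angle scattering.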
In this work, we introduce a novel numerical method for solving initial value problems associated with a given differential equation. Our approach utilizes a spline approximation of the theoretical solution alongside the integral formulation of the analytical solution. Furthermore, we offer a rigorous proof of the method's order and provide a comprehensive stability analysis. Additionally, we showcase the effectiveness of the method through some examples, comparing it with Taylor methods of the same order.
|
arxiv:2409.20369
|
An exact representation of the Baker-Campbell-Hausdorff formula as a power series in just one of the two variables is constructed. Closed-form coefficients of this series are found in terms of hyperbolic functions, which contain all of the dependence on the second variable. It is argued that this exact series may then be truncated and expected to give a good approximation to the full expansion if only the perturbative variable is small. This improves upon existing formulae, which require both variables to be small. As such, this may allow access to larger phase spaces in physical problems which employ the Baker-Campbell-Hausdorff formula, along with enabling new problems to be tackled.
|
arxiv:1807.07884
|
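For reference alongside the abstract above, the first few terms of the standard Baker-Campbell-Hausdorff expansion (the well-known symmetric form, not the paper's one-variable resummation) are:

```latex
Z = \log\!\left(e^{X} e^{Y}\right)
  = X + Y + \tfrac{1}{2}\,[X,Y]
    + \tfrac{1}{12}\,\bigl[X,[X,Y]\bigr]
    - \tfrac{1}{12}\,\bigl[Y,[X,Y]\bigr] + \cdots
```

Truncations of this series are accurate only when both X and Y are small, which is precisely the limitation the abstract's one-variable series aims to remove.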
We propose a novel scheme for accurately determining hundred-hertz linewidths by the delayed self-heterodyne method, in which the delay time is far less than the laser's coherence time. This goes beyond the former understanding of the delayed self-heterodyne technique, which was thought to require a prohibitively long fiber. The self-heterodyne autocorrelation function and power spectrum are evaluated, and by numerical analysis we show that the -3 dB width of the power spectrum can be applied to self-heterodyne linewidth measurements. For laser linewidths less than 100 Hz, the linewidth can be measured directly with a 10 km fiber, and in the more general case the linewidth can be deduced from the -20 dB or -40 dB width of the fitted Lorentzian curve.
|
arxiv:1307.2417
|
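A quick back-of-the-envelope calculation shows why the conventional requirement (delay far exceeding the coherence time) is prohibitive for the hundred-hertz linewidths discussed above. The numbers here are our assumptions, not the paper's: a Lorentzian lineshape, fiber group index n = 1.468, and the common rule of thumb that the delay fiber should be roughly six coherence lengths.

```python
import math

C = 299_792_458.0   # vacuum speed of light, m/s
N_FIBER = 1.468     # assumed group index of standard single-mode fiber

def coherence_length_m(linewidth_hz: float) -> float:
    """Coherence length in fiber for a Lorentzian line: L_c = c / (pi * n * dv)."""
    return C / (math.pi * N_FIBER * linewidth_hz)

L_c = coherence_length_m(100.0)                      # a 100 Hz linewidth laser
print(f"coherence length: {L_c / 1e3:.0f} km")       # ~650 km
print(f"conventional ~6 L_c delay fiber: {6 * L_c / 1e3:.0f} km")
```

A 10 km fiber, as used in the abstract's scheme, is two orders of magnitude shorter than this conventional estimate.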
We report the most sensitive search yet made for solar-like oscillations. We observed the star alpha Cen A in Balmer-line equivalent widths over six nights with both the 3.9-m Anglo-Australian Telescope and the ESO 3.6-m telescope. We set an upper limit on oscillations of 1.4 times solar and found tentative evidence for p-mode oscillations. We also found a power excess at low frequencies which has the same slope and strength as the power seen from granulation in the Sun. We therefore suggest that we have made the first detection of temporal fluctuations due to granulation in a solar-like star.
|
arxiv:astro-ph/9811299
|
Exact solutions for a magnetized plasma in a vorticity-containing shear flow at constant temperature are presented. This is followed by the modification of these solutions by thermomagnetic currents in the presence of temperature gradients. It is shown that solutions which are unstable for a subsonic flow are stable if the flow is supersonic. The results are applied to the problem of vorticity shear flow stabilization of a linear z-pinch discharge.
|
arxiv:0909.4750
|
Mechanochemically synthesized amorphous 50SISOMO [50AgI-25Ag2O-25MoO3] fast ionic conductor shows a high ionic conductivity of ~6×10^-3 Ω^-1 cm^-1 at room temperature. The highest ionic conductivity is achieved for the 36 h milled sample, which is more than three orders of magnitude higher than that of crystalline AgI at room temperature. The samples are thermally stable at least up to ~70 °C. Thermoelectric power studies on 50SISOMO amorphous fast ionic conductors (a-SIC) have been carried out in the temperature range 300-330 K. The thermoelectric power (S) is found to vary linearly with the inverse of the absolute temperature, and can be expressed by the equation -S = [(0.19×10^3/T) + 0.25] mV/K. The heat of transport (Q*) of the Ag+ ion, i.e. 0.19 eV, is nearly equal to the activation energy (E), i.e. 0.20 eV, of Ag+ ion migration calculated from the conductivity plots, indicating that the material has an average structure. This is also in consonance with earlier theories on heats of transport of ions in ionic solids.
|
arxiv:1204.1602
|
Discrete-state stochastic models have become a well-established approach to describing biochemical reaction networks that are influenced by the inherent randomness of cellular events. In recent years, several methods for accurately approximating the statistical moments of such models have become very popular, since they allow an efficient analysis of complex networks. We propose a generalized method of moments approach for inferring the parameters of reaction networks, based on a sophisticated matching of the statistical moments of the corresponding stochastic model and the sample moments of population snapshot data. The proposed parameter estimation method exploits recently developed moment-based approximations and provides estimators with desirable statistical properties when a large number of samples is available. We demonstrate the usefulness and efficiency of the inference method on two case studies. The generalized method of moments provides accurate and fast estimation of unknown parameters of reaction networks. The accuracy increases when moments of order higher than two are also considered. In addition, the variance of the estimator decreases when more samples are given or when higher-order moments are included.
|
arxiv:1605.01213
|
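The moment-matching idea in the abstract above can be illustrated with a toy example far simpler than the paper's reaction-network setting (our choice of model, not the authors'): estimate the rate of a Poisson distribution by matching its first two theoretical moments (mean = variance = lam) to the sample moments, with an identity weighting matrix.

```python
import math
import random
import statistics

random.seed(0)
lam_true = 4.0

def poisson_sample(lam: float) -> int:
    """Crude Poisson sampler via products of uniforms (Knuth's method)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

data = [poisson_sample(lam_true) for _ in range(20_000)]
m1 = statistics.fmean(data)       # sample mean
m2c = statistics.pvariance(data)  # sample (central) second moment

# Identity-weighted moment matching: minimise (m1 - lam)^2 + (m2c - lam)^2,
# which has the closed-form solution lam = (m1 + m2c) / 2.
lam_hat = (m1 + m2c) / 2
print(round(lam_hat, 2))
```

With both moments included, the estimate averages two noisy views of the same parameter, which is the same mechanism by which higher-order moments tighten the estimators in the abstract.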
For classical discrete systems under constant composition, typically referred to as substitutional alloys, we examine local nonlinearity in the canonical average phi. We have previously investigated the local and global behavior of the nonlinearity through the previously introduced vector field A and through the tropical limit of that vector field, respectively. While these studies indicated the importance of constraints on the structural degrees of freedom (SDFs) for global nonlinearity, it has remained unclear how the constraints on the SDFs affect local nonlinearity. Based on the statistical manifold, we build an intuitive bridge between the SDF-based information and local nonlinearity, decomposing the local nonlinearity into two (for binary alloys with pair correlations) or three (otherwise) contributions in terms of the Kullback-Leibler divergence, where this decomposition is independent of temperature and many-body interactions, and is defined on individual configurations. We also find that we can provide an A-dependent as well as an A-independent decomposition of the local nonlinearity, where the non-separability in the SDFs and its nonadditive character are independent of A, which indicates that information about the evolution of the vector field is required to address the non-separability and nonadditivity. The present work enables us to quantify how configuration-dependent constraints on the SDFs affect local nonlinearity in the canonical average for multicomponent alloys.
|
arxiv:1811.09612
|
In this paper, we study the phase structure of $Z_2$ lattice gauge theories that appear as effective field theories describing low-energy properties of frustrated antiferromagnets in two dimensions. Spin operators are expressed in terms of Schwinger bosons, and an emergent U(1) gauge symmetry reduces to a $Z_2$ gauge symmetry as a result of the condensation of a bilinear operator of the Schwinger boson describing a short-range spiral order. We investigated the phase structure of the gauge theories by means of Monte Carlo simulations, and found that there exist three phases: a phase with a long-range spiral order, a dimer state, and a spin liquid with deconfined spinons. The detailed phase structure and the properties of the phase transitions depend on the details of the models.
|
arxiv:0909.5030
|
Arboreal networks are a generalization of rooted trees, defined by keeping the tree-like structure but dropping the requirement for a single root. Just as the class of cographs is precisely the class of undirected graphs that can be explained by a labelled rooted tree (T, t), we show that the class of distance-hereditary graphs is precisely the class of undirected graphs that can be explained by a labelled arboreal network (N, t).
|
arxiv:2502.08251
|
The thermodynamic behavior of a quantum Brownian motion coupled to a classical heat bath is studied. We define a heat operator by generalizing stochastic energetics and show the energy balance (first law) and an upper bound on the expectation value of the heat operator (second law). We further find that this upper bound depends on the memory effect induced by quantum fluctuations, and hence the maximum extractable work can be qualitatively modified in quantum thermodynamics.
|
arxiv:1604.03476
|
Point cloud analysis has received much attention recently, and segmentation is one of the most important tasks. The success of existing approaches is attributed to deep network design and large amounts of labelled training data, where the latter is assumed to be always available. However, obtaining 3D point cloud segmentation labels is often very costly in practice. In this work, we propose a weakly supervised point cloud segmentation approach which requires only a tiny fraction of points to be labelled in the training stage. This is made possible by learning gradient approximation and exploiting additional spatial and color smoothness constraints. Experiments are done on three public datasets with different degrees of weak supervision. In particular, our proposed method can produce results that are close to, and sometimes even better than, its fully supervised counterpart with 10× fewer labels.
|
arxiv:2004.04091
|
The Milky Way is often considered to be the best example of a spiral for which the dark matter not only dominates the outer kinematics, but also plays a major dynamical role in the inner galaxy: the Galactic disk is therefore said to be "sub-maximal." This conclusion is important to the understanding of the evolution of galaxies and the viability of particular dark matter models. The Galactic evidence rests on a number of structural and kinematic measurements, many of which have recently been revised. The new constraints indicate not only that the Galaxy is a more typical member of its class (Sb-Sc spirals) than previously thought, but also require a re-examination of the question of whether or not the Milky Way disk is maximal. By applying to the Milky Way the same definition of "maximal disk" that is applied to external galaxies, it is shown that the new observational constraints are consistent with a Galactic maximal disk of reasonable $M/L$. In particular, the local disk column can be substantially less than the oft-quoted required $\Sigma_{\odot} \approx 100\,M_{\odot}\,\mathrm{pc}^{-2}$, as low as $40\,M_{\odot}\,\mathrm{pc}^{-2}$ in the extreme case, and still be maximal, in the sense that the dark halo provides negligible rotation support in the inner galaxy. This result has possible implications for any conclusion that rests on assumptions about the potentials of the Galactic disk or dark halo, and in particular for the interpretation of microlensing results along both LMC and bulge lines of sight.
|
arxiv:astro-ph/9608164
|
For any integer $k \geq 2$, let $\{Q_{n}^{(k)}\}_{n \geq -(k-2)}$ denote the $k$-generalized Pell-Lucas sequence, which starts with $0, \ldots, 0, 2, 2$ ($k$ terms), where each subsequent term is twice the previous term plus the sum of the $k-1$ terms before it. In this paper, we find all the $k$-generalized Pell-Lucas numbers that are concatenations of two repdigits.
|
arxiv:2303.05293
|
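The sequence and the property studied in the abstract above are both easy to compute. This sketch uses the recurrence as commonly defined in the literature on $k$-generalized Pell-Lucas numbers, $Q_n = 2Q_{n-1} + Q_{n-2} + \dots + Q_{n-k}$ with $k-2$ leading zeros followed by 2, 2; the helper names are ours, and no claim is made about the paper's final answer.

```python
def k_pell_lucas(k: int, count: int) -> list[int]:
    """First `count` terms of the k-generalized Pell-Lucas sequence."""
    seq = [0] * (k - 2) + [2, 2]
    while len(seq) < count:
        window = seq[-k:]                      # last k terms
        seq.append(2 * window[-1] + sum(window[:-1]))
    return seq[:count]

def is_repdigit(s: str) -> bool:
    return len(s) >= 1 and len(set(s)) == 1

def concat_of_two_repdigits(n: int) -> bool:
    """True if n's decimal expansion splits into two repdigit blocks."""
    s = str(n)
    return any(is_repdigit(s[:i]) and is_repdigit(s[i:])
               for i in range(1, len(s)))

terms = k_pell_lucas(2, 10)
print(terms)  # [2, 2, 6, 14, 34, 82, 198, 478, 1154, 2786]
print([q for q in terms if concat_of_two_repdigits(q)])
```

Note that every two-digit number trivially qualifies (each digit is a repdigit of length one), so the interesting part of such results concerns numbers with three or more digits.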
Outflows of atomic neutral hydrogen with velocities up to 1500 km/s are detected in a growing number of radio galaxies. Outflows with similar velocities are also detected in ionized gas, suggesting a common origin for the extreme kinematics of these two phases of the gas. The high detection rate of such outflows in young (or restarted) radio sources appears to be related to the existence of a dense ISM around these objects. Such a dense ISM can have important consequences for the evolution of the radio source and the galaxy as a whole. Here we summarize the recent results obtained and the characteristics derived so far for these outflows. We also discuss possible mechanisms (e.g. interaction between the radio plasma and the ISM, and adiabatically expanding broad-emission-line clouds) that may be at the origin of these phenomena.
|
arxiv:astro-ph/0410222
|
In this paper, we propose several consensus protocols of the first and second order for networked multi-agent systems and provide explicit representations of their asymptotic states. These representations involve the eigenprojection of the Laplacian matrix of the dependency digraph. In particular, we study regularization models for the problem of coordination when the dependency digraph does not contain a converging tree. In models of the first kind, the system is supplemented by a dummy agent, a "hub" that uniformly but very weakly influences the agents and, in turn, depends on them. In models of the second kind, we assume the presence of very weak background links between the agents. Besides that, we present a description of the asymptotics of the classical second-order consensus protocol.
|
arxiv:1811.10430
|
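A minimal first-order consensus simulation illustrates the role of the Laplacian and of a converging tree mentioned in the abstract above (the three-agent graph is our toy example, not the paper's): the continuous-time dynamics x' = -Lx are integrated with Euler steps, and because the digraph contains a converging tree rooted at agent 2, all states agree asymptotically.

```python
# Adjacency: A[i][j] = 1 means agent i listens to agent j.
A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]   # chain 0 <- 1 <- 2: a converging tree rooted at agent 2
n = len(A)

# Laplacian L = D - A, with D = diag(out-degree of each agent).
L = [[(sum(A[i]) if i == j else 0) - A[i][j] for j in range(n)]
     for i in range(n)]

x = [1.0, 5.0, -2.0]   # initial states
dt = 0.1
for _ in range(2000):  # Euler integration of x' = -L x
    x = [x[i] - dt * sum(L[i][j] * x[j] for j in range(n))
         for i in range(n)]

print([round(v, 4) for v in x])  # [-2.0, -2.0, -2.0]: all follow the root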
A new functional for the entropy that is asymptotically correct in both the high- and low-density limits is proposed. The new form is $S = S^{(id)} + S^{(ln)} + S^{(r)} + S^{(c)}$, where the new term $S^{(c)}$ depends on the $p$-body density fluctuations $\alpha_p$ and has the form $S^{(c)} = \langle N \rangle \left\{ \ln 2 - 1 + \sum_{p=2}^{\infty} \frac{(\ln 2)^p}{p!} \alpha_p - \left[ \exp(\alpha_2 - 1) - \alpha_2 \right] \right\} + \hat{S}$, where $\hat{S}$ renormalizes the ring approximation $S^{(r)}$. This result is obtained by analyzing the functional dependence of the most general expression of the entropy. Two main results for $S^{(c)}$ are proven: i) in the thermodynamic limit, only the functional dependence on the one-body distribution function survives, and ii) by summing to infinite order the leading contributions in the density, a new numerical expression for the entropy is proposed, with a new renormalized ring approximation included. The relationship of these results to the incompressible approximation to the entropy is discussed.
|
arxiv:cond-mat/0003230
|
We study superconducting FeSe (Tc = 9 K), which exhibits the tetragonal-orthorhombic structural transition (Ts = 90 K) without any antiferromagnetic ordering, by utilizing angle-resolved photoemission spectroscopy. In the detwinned orthorhombic state, the energy position of the dyz-orbital band at the Brillouin zone corner is 50 meV higher than that of dxz, indicating orbital order similar to the NaFeAs and BaFe2As2 families. Evidence of orbital order also appears in the hole bands at the Brillouin zone center. Precisely measured temperature dependence using strain-free samples shows that the onset of the orbital ordering (To) occurs very close to Ts, thus suggesting that the electronic nematicity above Ts is considerably weaker in FeSe compared to the BaFe2As2 family.
|
arxiv:1407.1418
|
We have recently introduced two new computing models, self-similar cellular automata and self-similar Petri nets. Self-similar automata result from a progressive, infinite tessellation of space and time. Self-similar Petri nets consist of a potentially infinite sequence of coupled transitions with ever-increasing firing rates. Both models are capable of hypercomputations and can, for instance, "solve" the halting problem for Turing machines. We survey the main definitions and propositions and add new results regarding the indeterminism of self-similar cellular automata.
|
arxiv:0908.0835
|
In this article we study the action of the one-loop dilatation operator on operators with a classical dimension of order N. These operators belong to the SU(2) sector and are constructed using two complex fields, Y and Z. For these operators, non-planar diagrams contribute already at the leading order in N, and the planar and large-N limits are distinct. The action of the one-loop and two-loop dilatation operators reduces to a set of decoupled oscillators and factorizes into an action on the Z fields and an action on the Y fields. Direct computation has shown that the action on the Y fields is the same at one and two loops. In this article, using the SU(2) symmetry algebra as well as structural features of field theory, we give compelling evidence that the factor in the dilatation operator that acts on the Ys is given by the one-loop expression, at any loop order.
|
arxiv:1312.6227
|
In nature and the engineering world, acquired signals are usually affected by multiple complicated factors and appear as multicomponent nonstationary modes. In such situations, and many others, it is necessary to separate these signals into a finite number of monocomponents to represent the intrinsic modes and underlying dynamics implicated in the source signals. In this paper, we consider the mode retrieval of a multicomponent signal which has crossing instantaneous frequencies (IFs), meaning that some of the components of the signal overlap in the time-frequency domain. We use the chirplet transform (CT) to represent a multicomponent signal in the three-dimensional space of time, frequency and chirp rate, and introduce a CT-based signal separation scheme (CT3S) to retrieve modes. In addition, we analyze the error bounds for IF estimation and component recovery with this scheme. We also propose a matched filter along certain specific time-frequency lines with respect to the chirp rate, so that nonstationary signals are further separated and more concentrated in the three-dimensional space of the CT. Furthermore, based on the approximation of source signals with linear chirps at any local time, we propose an innovative signal reconstruction algorithm, called the group filter-matched CT3S (GFCT3S), which takes a group of components into consideration simultaneously. GFCT3S is suitable for signals with crossing IFs. It also decreases component recovery errors when the IF curves of different components do not cross but are fast-varying and close to one another. Numerical experiments on synthetic and real signals show our method is more accurate and consistent in signal separation than the empirical mode decomposition, synchrosqueezing transform, and other approaches.
|
arxiv:2010.01498
|
In this paper we present the results of observations of seventeen HII regions in thirteen galaxies from the SIGRID sample of isolated gas-rich irregular dwarf galaxies. The spectra of all but one of the galaxies exhibit the auroral [OIII] 4363 Å line, from which we calculate the electron temperature, Te, and the gas-phase oxygen abundance. Five of the objects are blue compact dwarf (BCD) galaxies, of which four have not previously been analysed spectroscopically. We include one unusual galaxy which exhibits no evidence of the [NII] λλ6548, 6584 Å lines, suggesting a particularly low metallicity (< Zsolar/30). We compare the electron-temperature-based abundances with those derived using eight of the new strong-line diagnostics presented by Dopita et al. (2013). Using a method derived from first principles for calculating the total oxygen abundance, we show that the discrepancy between the Te-based and strong-line gas-phase abundances has now been reduced to within ~0.07 dex. The chemical abundances are consistent with what is expected from the luminosity-metallicity relation. We derive estimates of the electron densities and find them to be between ~5 and ~100 cm^-3. We find no evidence for a nitrogen plateau for objects in this sample with metallicities between 0.15 and 0.5 Zsolar.
|
arxiv:1403.6903
|
We consider a continuous-time stochastic dynamic game between a stopper (player 1, the owner of an asset yielding an income) and a controller (player 2, the manager of the asset), where the manager is either effective or non-effective. An effective manager can choose to exert low or high effort, which corresponds to a high or a low positive drift for the accumulated income of the owner, with random noise in terms of Brownian motion; high effort comes at a cost for the manager. The manager earns a salary until the game is stopped by the owner, after which no income is earned either. A non-effective manager cannot act but still receives a salary. For this game we study (Nash) equilibria using stochastic filtering methods; in particular, in equilibrium the manager controls the learning rate (regarding the manager type) of the owner. First, we consider a strong formulation of the game, which requires restrictive assumptions on the admissible controls, and find an equilibrium of (double) threshold type. Second, we consider a weak formulation, where a general set of admissible controls is considered. We show that the threshold equilibrium of the strong formulation is also an equilibrium in the weak formulation.
|
arxiv:2307.01623
|
Efficient decomposition of permutation unitaries is vital, as they frequently appear in quantum computing. In this paper, we identify the key properties that impact the decomposition process of permutation unitaries. We then classify these decompositions based on the identified properties, establishing a comprehensive framework for analysis. We demonstrate the applicability of the presented framework through the widely used multi-controlled Toffoli gate, revealing that the existing decompositions in the literature belong to only three of the ten identified classes. Motivated by this finding, we propose transformations that can adapt a given decomposition into a member of another class, enabling resource reduction.
|
arxiv:2312.11644
|
Using mean relative peculiar velocity measurements for pairs of galaxies, we estimate the cosmological density parameter $\Omega_m$ and the amplitude of density fluctuations $\sigma_8$. Our results suggest that our statistic is a robust and reproducible measure of the mean pairwise velocity and thereby the $\Omega_m$ parameter. We get $\Omega_m = 0.30^{+0.17}_{-0.07}$ and $\sigma_8 = 1.13^{+0.22}_{-0.23}$. These estimates do not depend on prior assumptions about the adiabaticity of the initial density fluctuations, the ionization history, or the values of other cosmological parameters.
|
arxiv:astro-ph/0305078
|
We use the harmonic maps ansatz to find exact solutions of the Einstein-Maxwell-dilaton-axion (EMDA) equations. The solutions are harmonic maps invariant under the symplectic real group in four dimensions, $Sp(4, \mathbb{R}) \sim O(5)$. We find solutions of the EMDA field equations for one- and two-dimensional subspaces of the symplectic group. In particular, to illustrate the method, we find space-times that generalise the Schwarzschild solution with dilaton, axion and electromagnetic fields.
|
arxiv:0905.4097
|
For a bipartite honeycomb lattice, we show that the Berry phase depends not only on the shape of the system but also on the hopping couplings. Using the entanglement entropy spectra obtained by diagonalizing the block Green's function matrices, the maximally entangled state with eigenvalue $\lambda_M = 1/2$ of the reduced density matrix is shown to have a one-to-one correspondence to the zero-energy states of the lattice with open boundaries, which depend on the Berry phase. For systems with finite bearded edges along the $x$-direction we find critical hopping couplings: the maximally entangled states (zero-energy states) appear pair by pair as one increases the hopping coupling $h$ over the critical couplings $h_c$.
|
arxiv:1004.3707
|
This paper introduces LAFT, a novel feature transformation method designed to incorporate user knowledge and preferences into anomaly detection using natural language. Accurately modeling the boundary of normality is crucial for distinguishing abnormal data, but this is often challenging due to limited data or the presence of nuisance attributes. While unsupervised methods that rely solely on data without user guidance are common, they may fail to detect anomalies of specific interest. To address this limitation, we propose Language-Assisted Feature Transformation (LAFT), which leverages the shared image-text embedding space of vision-language models to transform visual features according to user-defined requirements. Combined with anomaly detection methods, LAFT effectively aligns visual features with user preferences, allowing anomalies of interest to be detected. Extensive experiments on both toy and real-world datasets validate the effectiveness of our method.
|
arxiv:2503.01184
|
RZBENCH is a benchmark suite that was specifically developed to reflect the requirements of scientific supercomputer users at the University of Erlangen-Nuremberg (FAU). It comprises a number of application and low-level codes under a common build infrastructure that fosters maintainability and expandability. This paper reviews the structure of the suite and briefly introduces the most relevant benchmarks. In addition, some widely known standard benchmark codes are reviewed in order to emphasize the need for a critical review of often-cited performance results. Benchmark data is presented for the HLRB-II at LRZ Munich and a local InfiniBand Woodcrest cluster, as well as two uncommon system architectures: a bandwidth-optimized InfiniBand cluster based on single-socket nodes ("Port Townsend") and an early version of Sun's highly threaded T2 architecture ("Niagara 2").
|
arxiv:0712.3389
|
We prove localization and Zariski-Mayer-Vietoris for higher Grothendieck-Witt groups, alias Hermitian $K$-groups, of schemes admitting an ample family of line bundles. No assumption on the characteristic is needed, and our schemes can be singular. Along the way, we prove additivity, fibration and approximation theorems for the Hermitian $K$-theory of exact categories with weak equivalences and duality.
|
arxiv:0811.4632
|
A phylogeny describes the evolutionary history of an evolving population. Evolutionary search algorithms can perfectly track the ancestry of candidate solutions, illuminating a population's trajectory through the search space. However, phylogenetic analyses are typically limited to post-hoc studies of search performance. We introduce phylogeny-informed subsampling, a new class of subsampling methods that exploit runtime phylogenetic analyses for solving test-based problems. Specifically, we assess two phylogeny-informed subsampling methods, individualized random subsampling and ancestor-based subsampling, on three diagnostic problems and ten genetic programming (GP) problems from program synthesis benchmark suites. Overall, we found that phylogeny-informed subsampling methods enable problem-solving success at extreme subsampling levels where other subsampling methods fail. For example, phylogeny-informed subsampling methods more reliably solved program synthesis problems when evaluating just one training case per individual, per generation. However, at moderate subsampling levels, phylogeny-informed subsampling generally performed no better than random subsampling on GP problems. Our diagnostic experiments show that phylogeny-informed subsampling improves diversity maintenance relative to random subsampling, but its effects on a selection scheme's capacity to rapidly exploit fitness gradients varied by selection scheme. Continued refinement of phylogeny-informed subsampling techniques offers a promising new direction for scaling up evolutionary systems to handle problems with many expensive-to-evaluate fitness criteria.
|
arxiv:2402.01610
|
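To make the evaluation-budget savings in the abstract above concrete, here is a rough sketch of per-individual case subsampling at the extreme level the abstract mentions (one training case per individual, per generation). This is our reading of "individualized random subsampling", not the authors' code; the constants and function name are illustrative.

```python
import random

random.seed(1)

NUM_CASES = 20            # full training-case set
POP_SIZE = 10
CASES_PER_INDIVIDUAL = 1  # the extreme subsampling level from the abstract

def assign_cases(pop_size: int, num_cases: int, per_individual: int):
    """Give every individual its own independent random subset of cases."""
    return [random.sample(range(num_cases), per_individual)
            for _ in range(pop_size)]

assignment = assign_cases(POP_SIZE, NUM_CASES, CASES_PER_INDIVIDUAL)
evaluations = sum(len(cases) for cases in assignment)
# 10 evaluations per generation instead of POP_SIZE * NUM_CASES = 200
print(evaluations)
```

The phylogeny-informed variants additionally use lineage information when picking each subset (e.g. accounting for which cases an individual's ancestors already saw), which is what distinguishes them from the plain random assignment sketched here.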
NuSTAR observed G1.9+0.3, the youngest known supernova remnant in the Milky Way, for 350 ks and detected emission up to $\sim$30 keV. The remnant's X-ray morphology does not change significantly across the energy range from 3 to 20 keV. A combined fit between NuSTAR and Chandra shows that the spectrum steepens with energy. The spectral shape can be well fitted with synchrotron emission from a power-law electron energy distribution with an exponential cutoff, with no additional features. It can also be described by a purely phenomenological model such as a broken power law or a power law with an exponential cutoff, though these descriptions lack physical motivation. Using a fixed radio flux at 1 GHz of 1.17 Jy for the synchrotron model, we get a column density of $N_{\rm H} = (7.23 \pm 0.07) \times 10^{22}$ cm$^{-2}$, a spectral index of $\alpha = 0.633 \pm 0.003$, and a roll-off frequency of $\nu_{\rm rolloff} = (3.07 \pm 0.18) \times 10^{17}$ Hz. This can be explained by particle acceleration, to a maximum energy set by the finite remnant age, in a magnetic field of about 10 $\mu$G, for which our roll-off implies a maximum energy of about 100 TeV for both electrons and ions. Much higher magnetic-field strengths would produce an electron spectrum that was cut off by radiative losses, giving a much higher roll-off frequency that is independent of magnetic-field strength. In this case, ions could be accelerated to much higher energies. A search for $^{44}$Ti emission in the 67.9 keV line results in an upper limit of $1.5 \times 10^{-5}$ ph cm$^{-2}$ s$^{-1}$, assuming a line width of 4.0 keV (1 sigma).
|
arxiv:1411.6752
|
we prove that the set of $ ( x _ 1, \ ldots, x _ d ) \ in [ 0, 1 ) ^ d $, such that $ $ \ underline { \ lim } _ { N \ to \ infty } \ left | \ sum _ { n = 1 } ^ N \ exp ( 2 \ pi i ( x _ 1n + \ ldots + x _ dn ^ d ) ) \ right | = 0, $ $ contains a dense $ \ mathcal { G } _ \ delta $ set in $ [ 0, 1 ) ^ d $ and has a positive hausdorff dimension. similar statements are also established for the generalised gaussian sums $ $ \ sum _ { n = 1 } ^ N \ exp ( 2 \ pi i x n ^ d ), \ qquad x \ in [ 0, 1 ). $ $
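The exponential sums in question are easy to explore numerically. `weyl_sum` below is an illustrative helper (not from the paper) that evaluates the partial sums $S_N$ for a given coefficient vector $(x_1, \ldots, x_d)$:

```python
import cmath
import math

def weyl_sum(coeffs, N):
    """Partial sum S_N = sum_{n=1}^N exp(2*pi*i*(x_1*n + ... + x_d*n^d)),
    where coeffs = [x_1, ..., x_d]."""
    total = 0j
    for n in range(1, N + 1):
        phase = sum(x * n ** (k + 1) for k, x in enumerate(coeffs))
        total += cmath.exp(2j * math.pi * phase)
    return total

# At the origin every term is 1, so |S_N| = N.
print(abs(weyl_sum([0.0, 0.0], 100)))  # 100.0
# For d = 1 and x = 1/2 the terms alternate -1, +1, so S_2 cancels.
print(abs(weyl_sum([0.5], 2)))  # ≈ 0 (up to float rounding)
```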
|
arxiv:1907.03101
|
in this paper, we investigate the conditions under which symplectic discretization preserves local boundedness for 2 - dimensional hamiltonian dynamical systems.
|
arxiv:1303.4834
|
controlling infectious diseases is a major health priority because they can spread and infect humans, thus evolving into epidemics or pandemics. therefore, early detection of infectious diseases is a significant need, and many researchers have developed models to diagnose them in the early stages. this paper reviewed research articles for recent machine - learning ( ml ) algorithms applied to infectious disease diagnosis. we searched the web of science, sciencedirect, pubmed, springer, and ieee databases from 2015 to 2022, identified the pros and cons of the reviewed ml models, and discussed the possible recommendations to advance the studies in this field. we found that most of the articles used small datasets, and few of them used real - time data. our results demonstrated that a suitable ml technique depends on the nature of the dataset and the desired goal.
|
arxiv:2206.07365
|
far - uv circular dichroism ( cd ) spectroscopy provides a rapid, sensitive, nondestructive tool to analyze protein conformation by monitoring secondary structure composition. originally intended for educational purposes, a spreadsheet - based program that implements a rudimentary routine for fitting and simulating far - uv protein cd spectra became widely used in research papers too, as it allowed very quick deconvolution of spectra into secondary structure compositions and easy simulation of spectra expected for defined secondary structure contents. to make such software more readily available, i present here an online version that runs directly on web browsers allowing even faster analyses on any modern device and without the need for any spreadsheet or third - party programs. the new version further extends the original capabilities to fit and simulate alpha, beta and random coil contents now including also beta turns ; it allows one to quickly select the effective spectral window used for fitting and enables interactive exploration of the effects of changes in secondary structure composition on the resulting spectra. the web app allows ubiquitous implementation in biophysics courses for example right on student smartphones, and seamless, rapid tests in research settings before moving to more advanced analysis programs ( a few proposed here too ). the web app is freely available without registration at http://lucianoabriata.altervista.org/jsinscience/cd/cd3.html
|
arxiv:2006.06275
|
the natural action of the symmetric group on the configuration spaces f ( x ; n ) induces an action on the kriz model e ( x ; n ). the representation theory of this dga is studied and a big acyclic subcomplex which is sn - invariant is described.
|
arxiv:1204.1272
|
several data - driven approaches based on information theory have been proposed for analyzing high - order interactions involving three or more components of a network system. most of these methods are defined only in the time domain and rely on the assumption of stationarity in the underlying dynamics, making them inherently unable to detect frequency - specific behaviors and track transient functional links in physiological networks. this study introduces a new framework which enables the time - varying and time - frequency analysis of high - order interactions in network of random processes through the spectral representation of vector autoregressive models. the time - and frequency - resolved analysis of synergistic and redundant interactions among groups of processes is ensured by a robust identification procedure based on a recursive least squares estimator with a forgetting factor. validation on simulated networks illustrates how the time - frequency analysis is able to highlight transient synergistic behaviors emerging in specific frequency bands which cannot be detected by time - domain stationary analyses. the application on brain evoked potentials in rats elicits the presence of redundant information timed with whisker stimulation and mostly occurring in the contralateral hemisphere. the proposed framework enables a comprehensive time - varying and time - frequency analysis of the hierarchical organization of dynamic networks. as our approach goes beyond pairwise interactions, it is well suited for the study of transient high - order behaviors arising during state transitions in many network systems commonly studied in physiology, neuroscience and other fields.
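The recursive least squares estimator with a forgetting factor that underpins the identification procedure can be sketched in its simplest scalar form. The real framework estimates vector autoregressive coefficients; this single-parameter version is only meant to show the structure of the update:

```python
def rls_step(theta, P, x, y, lam=0.98):
    """One scalar recursive-least-squares update with forgetting factor lam
    (0 < lam <= 1; smaller lam forgets old samples faster)."""
    k = P * x / (lam + x * P * x)        # gain
    theta = theta + k * (y - x * theta)  # prediction-error correction
    P = (P - k * x * P) / lam            # covariance update, inflated by 1/lam
    return theta, P

# track the time-invariant relation y = 2 * x from streaming samples
theta, P = 0.0, 100.0
for t in range(1, 201):
    x = (t % 5) + 1.0
    theta, P = rls_step(theta, P, x, 2.0 * x)
print(theta)  # ≈ 2.0
```

Because `P` is continually re-inflated by `1/lam`, the estimator keeps adapting, which is what allows the time-varying analysis described above.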
|
arxiv:2503.12421
|
we define the multidegrees of a tropical variety. we prove that the positivity of a multidegree of a certain tropical variety is governed by the dimensions of the images of the tropical variety under suitable projection maps. as an application, we give a tropical proof of the criterion of the positivity of the multidegrees of a closed subscheme of a multi - projective space, originally proved by castillo et al.
|
arxiv:2306.10589
|
ultra - heavy dark matter is a class of candidates for which direct detection experiments are ineffective due to the suppressed dark matter flux. we explore the potential of large underwater acoustic arrays, developed for ultra - high energy neutrino detection, to detect ultra - heavy dark matter. as ultra - heavy dark matter traverses seawater, it deposits energy through nuclear scattering, generating thermo - acoustic waves detectable by hydrophones. we derive the dark matter - induced acoustic pressure wave from first principles and characterise attenuation effects, including frequency - dependent modifications due to viscous and chemical relaxation effects in seawater, providing an improved framework for signal modelling. our sensitivity analysis for a hypothetical 100 cubic kilometre hydrophone array in the mediterranean sea shows that such an array could probe unexplored regions of parameter space for ultra - heavy dark matter, with sensitivity to both spin - independent and spin - dependent interactions. our results establish acoustic detection as a promising dark matter search method, paving the way for analysing existing hydrophone data and guiding future detector designs.
|
arxiv:2502.17593
|
let $ m $ be an excluded minor for the class of $ \ mathbb { p } $ - representable matroids for some partial field $ \ mathbb p $, and let $ n $ be a $ 3 $ - connected strong $ \ mathbb { p } $ - stabilizer that is non - binary. we prove that either $ m $ is bounded relative to $ n $, or, up to replacing $ m $ by a $ \ delta $ - $ y $ - equivalent excluded minor, we can choose a pair of elements $ \ { a, b \ } $ such that either $ m \ backslash \ { a, b \ } $ is $ n $ - fragile, or $ m ^ * \ backslash \ { a, b \ } $ is $ n ^ * $ - fragile.
|
arxiv:1603.09713
|
quantifying the variation in yield component traits of maize ( zea mays l. ), which together determine the overall productivity of this globally important crop, plays a critical role in plant genetics research, plant breeding, and the development of improved farming practices. grain yield per acre is calculated by multiplying the number of plants per acre, ears per plant, number of kernels per ear, and the average kernel weight. the number of kernels per ear is determined by the number of kernel rows per ear multiplied by the number of kernels per row. traditional manual methods for measuring these two traits are time - consuming, limiting large - scale data collection. recent automation efforts using image processing and deep learning encounter challenges such as high annotation costs and uncertain generalizability. we tackle these issues by exploring large vision models for zero - shot, annotation - free maize kernel segmentation. by using an open - source large vision model, the segment anything model ( sam ), we segment individual kernels in rgb images of maize ears and apply a graph - based algorithm to calculate the number of kernels per row. our approach successfully identifies the number of kernels per row across a wide range of maize ears, showing the potential of zero - shot learning with foundation vision models combined with image processing techniques to improve automation and reduce subjectivity in agronomic data collection. all our code is open - sourced to make these affordable phenotyping methods accessible to everyone.
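The yield decomposition stated above is simple arithmetic. The figures below are hypothetical, chosen only to illustrate how the component traits multiply:

```python
def grain_yield_per_acre(plants_per_acre, ears_per_plant,
                         kernel_rows_per_ear, kernels_per_row,
                         avg_kernel_weight_g):
    """Grain yield (grams per acre) from the yield component traits."""
    kernels_per_ear = kernel_rows_per_ear * kernels_per_row
    total_kernels = plants_per_acre * ears_per_plant * kernels_per_ear
    return total_kernels * avg_kernel_weight_g

# hypothetical figures: 30,000 plants/acre, 1 ear/plant,
# 16 rows x 35 kernels per ear, 0.3 g per kernel
print(grain_yield_per_acre(30000, 1, 16, 35, 0.3))  # ≈ 5.04e6 g per acre
```

The segmentation pipeline in the paper automates the measurement of `kernel_rows_per_ear` and `kernels_per_row`; the other factors come from field counts and kernel weighing.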
|
arxiv:2502.13399
|
i provide a prescription to define space, at a given moment, for an arbitrary observer in an arbitrary ( sufficiently regular ) curved space - time. this prescription, based on synchronicity ( simultaneity ) arguments, defines a foliation of space - time, which corresponds to a family of canonically associated observers. it provides also a natural global reference frame ( with space and time coordinates ) for the observer, in space - time ( or rather in the part of it which is causally connected to him ), which remains minkowskian along his world - line. this definition intends to provide a basis for the problem of quantization in curved space - time, and / or for non inertial observers. application to minkowski space - time illustrates clearly the fact that different observers see different spaces. it allows, for instance, to define space everywhere without ambiguity, for the langevin observer ( involved in the langevin pseudoparadox of twins ). applied to the rindler observer ( with uniform acceleration ) it leads to the rindler coordinates, whose choice is so justified with a physical basis. this leads to an interpretation of the unruh effect, as due to the observer ' s dependence of the definition of space ( and time ). this prescription is also applied in cosmology, for inertial observers in the friedmann - lemaitre models : space for the observer appears to differ from the hypersurfaces of homogeneity, which do not obey the simultaneity requirement. i work out two examples : the einstein - de sitter model, in which space, for an inertial observer, is neither flat nor homogeneous, and the de sitter case.
|
arxiv:gr-qc/0107010
|
although known for almost a century, the photophoretic force has only recently been considered in astrophysical context for the first time. in our work, we have examined the effect of photophoresis, acting together with stellar gravity, radiation pressure, and gas drag, on the evolution of solids in transitional circumstellar disks. we have applied our calculations to four different systems : the disks of hr 4796a and hd 141569a, which are several myr old ab - type stars, and two hypothetical systems that correspond to the solar nebula after disk dispersal has progressed sufficiently for the disk to become optically thin. our results suggest that solid objects migrate inward or outward, until they reach a certain size - dependent stability distance from the star. the larger the bodies, the closer to the star they tend to accumulate. photophoresis increases the stability radii, moving objects to larger distances. what is more, photophoresis may cause formation of a belt of objects, but only in a certain range of sizes and only around low - luminosity stars. the effects of photophoresis are noticeable in the size range from several micrometers to several centimeters ( for older transitional disks ) or even several meters ( for younger, more gaseous, ones ). we argue that due to gas damping, rotation does not substantially inhibit photophoresis.
|
arxiv:0711.2595
|
courses for graduate students. they typically run for 2 years full - time, with varying amounts of research involved. = = norway = = norway follows the bologna process. for engineering, the master of science academic degree has been recently introduced and has replaced the previous award forms " sivilingeniør " ( engineer, a. k. a. engineering master ) and " hovedfag " ( academic master ). both were awarded after 5 years of university - level studies and required the completion of a scientific thesis. " siv. ing ", is a protected title traditionally awarded to engineering students who completed a five - year education at the norwegian university of science and technology ( norwegian : norges teknisk - naturvitenskapelige universitet, ntnu ) or other university programs deemed to be equivalent in academic merit. historically there was no bachelor ' s degree involved and today ' s program is a five years master ' s degree education. the " siv. ing " title is in the process of being phased out, replaced by ( for now, complemented by ) the " m. sc. " title. by and large, " siv. ing " is a title tightly being held on to for the sake of tradition. in academia, the new program offers separate three - year bachelor and two - year master programs. it is awarded in the natural sciences, mathematics and computer science fields. the completion of a scientific thesis is required. all master ' s degrees are designed to certify a level of education and qualify for a doctorate program. master of science in business is the english title for those taking a higher business degree, " siviløkonom " in norwegian. in addition, there is, for example, the ' master of business administration ' ( mba ), a practically oriented master ' s degree in business, but with less mathematics and econometrics, due to its less specific entry requirements and smaller focus on research. 
= = pakistan = = pakistan inherited its conventions pertaining to higher education from united kingdom after independence in 1947. master of science degree is typically abbreviated as m. sc. ( as in the united kingdom ) and which is awarded after 16 years of education ( equivalent with a bachelor ' s degree in the us and many other countries ). recently, in pursuance to some of the reforms by the higher education commission of pakistan ( the regulatory body of higher education in pakistan ), the traditional 2 - year bachelor of science ( b. sc. ) degree has been replaced by
|
https://en.wikipedia.org/wiki/Master_of_Science
|
we present a unification of mixed - space quantum representations in condensed matter physics ( cmp ) and quantum field theory ( qft ). the unifying formalism is based on being able to expand any quantum operator, for bosons, fermions, and spin systems, using a universal basis operator y ( u, v ) involving mixed hilbert spaces of p and q, respectively, where p and q are momentum and position operators in cmp ( which can be considered as a bosonization of free bloch electrons which incorporates the pauli exclusion principle and fermi - dirac distribution ), whereas these are related to the creation and annihilation operators in qft, where { \ psi } ^ { { \ dag } } = - ip and { \ psi } = q. the expansion coefficient is the fourier transform of the wigner quantum distribution function ( lattice weyl transform ) otherwise known as the characteristic distribution function. thus, in principle, fermionization via jordan - wigner for spin systems, as well as the holstein - primakoff transformation from boson to the spin operators can be performed depending on the ease of the calculations. unitary transformation on the creation and annihilation operators themselves is also employed, as exemplified by the bogoliubov transformation. moreover, whenever y ( u, v ) is already expressed in matrix form, m _ { ij }, e. g. the pauli spin matrices, the jordan - schwinger transformation is a map to bilinear expressions of creation and annihilation operators which expedites computation of representations. we show that the well - known coherent states formulation of quantum physics is a special case of the present unification. a new formulation of qft based on q - distribution of functional - field variables is suggested. the case of nonequilibrium quantum transport physics, which not only involves non - hermitian operators but also time - reversal symmetry breaking, is discussed in the appendix.
|
arxiv:2204.07691
|
we define a derived version of mazur ' s galois deformation ring. it is a pro - simplicial ring $ \ mathcal { r } $ classifying deformations of a fixed galois representation to simplicial coefficient rings ; its zeroth homotopy group $ \ pi _ 0 \ mathcal { r } $ recovers mazur ' s deformation ring. we give evidence that these rings $ \ mathcal { r } $ occur in the wild : for suitable galois representations, the langlands program predicts that $ \ pi _ 0 \ mathcal { r } $ should act on the homology of an arithmetic group. we explain how the taylor - wiles method can be used to upgrade such an action to a graded action of $ \ pi _ * \ mathcal { r } $ on the homology.
|
arxiv:1608.07236
|
we propose the use of microcanonical analyses for numerical studies of peptide aggregation transitions. performing multicanonical monte carlo simulations of a simple hydrophobic - polar continuum model for interacting heteropolymers of finite length, we find that the microcanonical entropy behaves convex in the transition region, leading to a negative microcanonical specific heat. as this effect is also seen in first - order - like transitions of other finite systems, our results provide clear evidence for recent hints that the characterisation of phase separation in first - order - like transitions of finite systems profits from this microcanonical view.
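The link between a convex entropy and a negative microcanonical specific heat follows from $c(e) = -s'(e)^2 / s''(e)$ (with $k_B = 1$): wherever $s''(e) > 0$, the specific heat is negative. The toy entropy below is purely illustrative, not the model of the paper:

```python
def specific_heat(S, E, h=1e-4):
    """Microcanonical specific heat C(E) = -S'(E)^2 / S''(E) (k_B = 1),
    with derivatives taken by central finite differences."""
    d1 = (S(E + h) - S(E - h)) / (2 * h)
    d2 = (S(E + h) - 2 * S(E) + S(E - h)) / h ** 2
    return -d1 ** 2 / d2

# a toy convex entropy patch, mimicking the "back-bending" transition region
S = lambda E: E + E ** 2
print(specific_heat(S, 1.0))  # ≈ -4.5, negative because S'' > 0
```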
|
arxiv:0710.4575
|
we explore the dynamics of the r - modes in accreting neutron stars in two ways. first, we explore how dissipation in the magneto - viscous boundary layer ( mvbl ) at the crust - core interface governs the damping of r - mode perturbations in the fluid interior. two models are considered : one assuming an ordinary - fluid interior, the other taking the core to consist of superfluid neutrons, type ii superconducting protons, and normal electrons. we show, within our approximations, that no solution to the magnetohydrodynamic equations exists in the superfluid model when both the neutron and proton vortices are pinned. however, if just one species of vortex is pinned, we can find solutions. when the neutron vortices are pinned and the proton vortices are unpinned there is much more dissipation than in the ordinary - fluid model, unless the pinning is weak. when the proton vortices are pinned and the neutron vortices are unpinned the dissipation is comparable or slightly less than that for the ordinary - fluid model, even when the pinning is strong. we also find in the superfluid model that relatively weak radial magnetic fields ~ 10 ^ 9 g ( 10 ^ 8 k / t ) ^ 2 greatly affect the mvbl, though the effects of mutual friction tend to counteract the magnetic effects. second, we evolve our two models in time, accounting for accretion, and explore how the magnetic field strength, the r - mode saturation amplitude, and the accretion rate affect the cyclic evolution of these stars. if the r - modes control the spin cycles of accreting neutron stars we find that magnetic fields can affect the clustering of the spin frequencies of low mass x - ray binaries ( lmxbs ) and the fraction of these that are currently emitting gravitational waves.
|
arxiv:gr-qc/0206001
|
we consider a single spin in a constant magnetic field or an anisotropy field. we show that additional external time - periodic fields with zero mean may generate nonzero time - averaged spin components which are vanishing for the time - averaged hamiltonian. the reason is a lowering of the dynamical symmetry of the system. a harmonic signal with proper orientation is enough to display the effect. we analyze the problem both with and without dissipation, both for quantum spins ( s = 1 / 2, 1 ) and classical spins. the results are of importance for controlling the system ' s state using high or low frequency fields and for using new resonance techniques which probe internal system parameters, to name a few.
|
arxiv:cond-mat/0002463
|
we propose the nonlinear regression convolutional encoder - decoder ( nrced ), a novel framework for mapping a multivariate input to a multivariate output. in particular, we implement our algorithm within the scope of 12 - lead surface electrocardiogram ( ecg ) reconstruction from intracardiac electrograms ( egm ) and vice versa. the goal of performing this task is to allow for improved point - of - care monitoring of patients with an implanted device to treat cardiac pathologies. we will achieve this goal with 12 - lead ecg reconstruction and by providing a new diagnostic tool for classifying atypical heartbeats. the algorithm is evaluated on a dataset retroactively collected from 14 patients. correlation coefficients calculated between the reconstructed and the actual ecg show that the proposed nrced method represents an efficient, accurate, and superior way to synthesize a 12 - lead ecg. we can also achieve the same reconstruction accuracy with only one egm lead as input. we also tested the model in a non - patient specific way and saw a reasonable correlation coefficient. the model was also executed in the reverse direction to produce egm signals from a 12 - lead ecg and found that the correlation was comparable to the forward direction. lastly, we analyzed the features learned in the model and determined that the model learns an overcomplete basis of our 12 - lead ecg space. we then use this basis of features to create a new diagnostic tool for identifying atypical and diseased heartbeats. this resulted in a roc curve with an associated area under the curve value of 0. 98, demonstrating excellent discrimination between the two classes.
|
arxiv:2012.06003
|
cubesat technology is an emerging alternative to large - scale space telescopes due to its short development time and cost - effectiveness. mevcube is a proposed cubesat mission to study the least explored mev gamma - ray sky, also known as the ` mev gap '. besides being sensitive to a plethora of astrophysical phenomena, mevcube can also be important in the hunt for dark matter. if dark matter is made up of evaporating primordial black holes, then it can produce photons in the sensitivity range of mevcube. besides, particle dark matter can also decay or annihilate to produce final state gamma - ray photons. we perform the first comprehensive study of dark matter discovery potential of a near - future mevcube cubesat mission. in all cases, we find that mevcube will have much better discovery reach compared to existing limits in the parameter space. this may be an important step towards discovering dark matter through its non - gravitational interactions.
|
arxiv:2501.13162
|
the connection between black hole thermodynamics and chemistry is extended to the lower - dimensional regime by considering the rotating and charged btz metric in the $ ( 2 + 1 ) $ - d and $ ( 1 + 1 ) $ - d limits of einstein gravity. the smarr relation is naturally upheld in both btz cases, where those with $ q \ ne 0 $ violate the reverse isoperimetric inequality and are thus superentropic. the inequality can be maintained, however, with the addition of a new thermodynamic work term associated with the mass renormalization scale. the $ d \ rightarrow 0 $ limit of a generic $ ( d + 2 ) $ - dimensional einstein gravity theory is also considered to derive the smarr and komar relations, although the opposite sign definitions of the cosmological constant and thermodynamic pressure from the $ d > 2 $ cases must be adopted in order to satisfy the relation. the requirement of positive entropy implies a lower bound on the mass of a $ ( 1 + 1 ) $ - d black hole. promoting an associated constant of integration to a thermodynamic variable allows one to define a " rotation " in one spatial dimension. neither the $ d = 3 $ nor the $ d \ rightarrow 2 $ black holes exhibit any interesting phase behaviour.
|
arxiv:1509.05481
|
recent inelastic neutron scattering ( ins ) experiments on la $ _ { 2 - x } $ sr $ _ x $ cuo $ _ 4 $ have established the existence of a { \ it magnetic coherence effect }, i. e., strong frequency and momentum dependent changes of the spin susceptibility, $ \ chi ' ' $, in the superconducting phase. we show, using the spin - fermion model for incommensurate antiferromagnetic spin fluctuations, that the magnetic coherence effect establishes the ability of ins experiments to probe the electronic spectrum of the cuprates, in that the effect arises from the interplay of an incommensurate magnetic response, the form of the underlying fermi surface, and the opening of the d - wave gap in the fermionic spectrum. in particular, we find that the magnetic coherence effect observed in ins experiments on la $ _ { 2 - x } $ sr $ _ x $ cuo $ _ 4 $ requires that the fermi surface be closed around $ ( \ pi, \ pi ) $ up to optimal doping. we present several predictions for the form of the magnetic coherence effect in yba $ _ 2 $ cu $ _ 3 $ o $ _ { 6 + x } $ in which an incommensurate magnetic response has been observed in the superconducting state.
|
arxiv:cond-mat/0004378
|
origami structures have been proposed as a means of creating three - dimensional structures from the micro - to the macroscale, and as a means of fabricating mechanical metamaterials. the design of such structures requires a deep understanding of the kinematics of origami fold patterns. here, we study the configurations of non - euclidean origami, folding structures with gaussian curvature concentrated on the vertices. the kinematics of such structures depends crucially on the sign of the gaussian curvature. the configuration space of non - intersecting, oriented vertices with positive gaussian curvature decomposes into disconnected subspaces ; there is no pathway between them without tearing the origami. in contrast, the configuration space of negative gaussian curvature vertices remain connected. this provides a new mechanism by which the mechanics and folding of an origami structure could be controlled.
|
arxiv:1910.01008
|
we show that, on the riemann hypothesis, $ \ limsup _ { x \ to \ infty } i ( x ) / x ^ { 2 } \ leq 0. 8603 $, where $ i ( x ) = \ int _ x ^ { 2x } ( \ psi ( t ) - t ) ^ 2 \, dt. $ this proves ( and improves on ) a claim by pintz from 1982. we also show unconditionally that $ \ frac { 1 } { 5 \, 374 } \ leq i ( x ) / x ^ 2 $ for sufficiently large $ x $, and that $ i ( x ) / x ^ { 2 } $ has no limit as $ x \ rightarrow \ infty $.
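For small arguments, the Chebyshev function $\psi(x)$ appearing in $i(x)$ can be evaluated by brute force. This naive sketch is only for checking small values, not for the asymptotic regime the theorem concerns:

```python
import math

def chebyshev_psi(x):
    """Chebyshev's psi function: sum of log p over prime powers p^k <= x."""
    psi = 0.0
    for p in range(2, int(x) + 1):
        # trial-division primality test, fine for tiny x
        if all(p % q for q in range(2, math.isqrt(p) + 1)):
            k = p
            while k <= x:  # count each prime power p, p^2, p^3, ... up to x
                psi += math.log(p)
                k *= p
    return psi

# psi(10) = 3 log 2 + 2 log 3 + log 5 + log 7 (prime powers 2,4,8; 3,9; 5; 7)
print(chebyshev_psi(10))  # ≈ 7.83
```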
|
arxiv:2008.06140
|
vehicle - to - vehicle ( v2v ) communication is intended to improve road safety through distributed information sharing ; however, this type of system faces a design challenge : it is difficult to predict and optimize how human agents will respond to the introduction of this information. bayesian games are a standard approach for modeling such scenarios ; in a bayesian game, agents probabilistically adopt various types on the basis of a fixed, known distribution. agents in such models ostensibly perform bayesian inference, which may not be a reasonable cognitive demand for most humans. to complicate matters, the information provided to agents is often implicitly dependent on agent behavior, meaning that the distribution of agent types is a function of the behavior of agents ( i. e., the type distribution is endogenous ). in this paper, we study an existing model of v2v communication, but relax it along two dimensions : first, we pose a behavior model which does not require human agents to perform bayesian inference ; second, we pose an equilibrium model which avoids the challenging endogenous recursion. surprisingly, we show that the simplified non - bayesian behavior model yields the exact same equilibrium behavior as the original bayesian model, which may lend credibility to bayesian models. however, we also show that the original endogenous equilibrium model is strictly necessary to obtain certain informational paradoxes ; these paradoxes do not appear in the simpler exogenous model. this suggests that standard bayesian game models with fixed type distributions are not sufficient to express certain important phenomena.
|
arxiv:2307.03382
|
the recent emergence of strain gradient engineering directly affects the nanomechanics, optoelectronics and thermal transport fields in 2d materials. more specifically, large suspended graphene under very high stress represents the quintessence for nanomechanical mass detection through unique molecular reactions. different techniques have been used to induce strain in 2d materials, for instance by applying tip indentation, pressure or substrate bending on a graphene membrane. nevertheless, an efficient way to control the strain of a structure is to engineer the system geometry as shown in everyday life in architecture and acoustics. similarly, we studied the concentration of strain in artificial nanoconstrictions ( ~ 100 nm ) in a suspended epitaxial bilayer graphene membrane with different geometries and lengths ranging from 10 to 40 micrometers. we carefully isolated the strain signature from micro - raman measurements and extracted information on a scale below the laser spot size by analyzing the broadened shape of our raman peaks, up to 100 cm - 1. we potentially measured a strong strain concentration in a nanoconstriction up to 5 percent, which is 20 times larger than the native epitaxial graphene strain. moreover, with a bilayer graphene, our configuration naturally enhanced the native asymmetric strain between the upper and lower graphene layers. in contrast to previous results, we can achieve any kind of complex strain tensor in graphene thanks to our structural approach. this method completes the previous strain - induced techniques and opens up new perspectives for bilayer graphene and 2d heterostructures based devices.
|
arxiv:1901.08487
|
consider a stationary poisson process in a $ d $ - dimensional hyperbolic space. for $ r > 0 $ define the point process $ \ xi _ r ^ { ( k ) } $ of exceedance heights over a suitable threshold of the hyperbolic volumes of $ k $ th nearest neighbour balls centred around the points of the poisson process within a hyperbolic ball of radius $ r $ centred at a fixed point. the point process $ \ xi _ r ^ { ( k ) } $ is compared to an inhomogeneous poisson process on the real line with intensity function $ e ^ { - u } $ and point process convergence in the kantorovich - rubinstein distance is shown. from this, a quantitative limit theorem for the hyperbolic maximum $ k $ th nearest neighbour ball with a limiting gumbel distribution is derived.
|
arxiv:2209.12730
|
the conditions for the development of a kelvin - helmholtz instability ( khi ) for the quark - gluon plasma ( qgp ) flow in a peripheral heavy - ion collision are investigated. the projectile and target side particles are separated by an energetically motivated hypothetical surface, characterized with a phenomenological surface tension. in such a view, a classical potential flow approximation is considered and the onset of the khi is studied. the growth rate of the instability is computed as a function of phenomenological parameters characteristic of the qgp fluid : viscosity, surface tension and flow layer thickness.
|
arxiv:1302.1691
|
in this paper, an algorithm designed to detect characteristic cough events in audio recordings is presented, significantly reducing the time required for manual counting. using time - frequency representations and independent subspace analysis ( isa ), sound events that exhibit characteristics of coughs are automatically detected, producing a summary of the events detected. using a dataset created from publicly available audio recordings, this algorithm has been tested on a variety of synthesized audio scenarios representative of those likely to be encountered by subjects undergoing an ambulatory cough recording, achieving a true positive rate of 76 % with an average of 2. 85 false positives per minute.
|
arxiv:2104.06798
|
the belief function approach to uncertainty quantification, as proposed in the dempster - shafer theory of evidence, is built upon general mathematical models for set - valued observations, called random sets. set - valued predictions are the most natural representations of uncertainty in machine learning. in this paper, we introduce a concept called epistemic deep learning based on the random - set interpretation of belief functions to model epistemic learning in deep neural networks. we propose a novel random - set convolutional neural network for classification that produces scores for sets of classes by learning set - valued ground truth representations. we evaluate different formulations of entropy and distance measures for belief functions as viable loss functions for these random - set networks. we also discuss methods for evaluating the quality of epistemic predictions and the performance of epistemic random - set neural networks. we demonstrate through experiments that the epistemic approach produces better performance results when compared to traditional approaches of estimating uncertainty.
|
arxiv:2206.07609
|
the existence of incomplete and imprecise data has moved the database paradigm from deterministic to probabilistic information. probabilistic databases contain tuples that may or may not exist with some probability. as a result, the number of possible deterministic database instances that can be observed from a probabilistic database grows exponentially with the number of probabilistic tuples. in this paper, we consider the problem of answering both aggregate and non - aggregate queries on massive probabilistic databases. we adopt the tuple independence model, in which each tuple is assigned a probability value. we develop a method that exploits probability generating functions ( pgf ) to answer such queries efficiently. our method maintains a polynomial for each tuple. it incrementally builds a master polynomial that expresses the distribution of the possible result values precisely. we also develop an approximation method that finds the distribution of the result value with negligible errors. our experiments suggest that our methods are orders of magnitude faster than the most recent systems that answer such queries, including maybms and sprout. in our experiments, we were able to scale up to several terabytes of data on tpc - h queries, while existing methods could only run for a few gigabytes of data on the same queries.
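as a hedged illustration of the generating - function idea above ( the paper's actual system is far more general ), the distribution of a count ( * ) query under tuple independence can be built by multiplying one small polynomial per tuple ; all function names below are invented for this sketch.

```python
# Illustrative sketch, not the paper's implementation: under the
# tuple-independence model, the probability generating function (PGF)
# of COUNT(*) is the product of one polynomial (1 - p) + p*x per tuple
# that satisfies the query predicate. The coefficient of x^k in the
# master polynomial is P(result = k).

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def count_distribution(probs):
    """Distribution of COUNT(*) over tuples with existence probabilities."""
    master = [1.0]                                  # PGF of an empty query
    for p in probs:
        master = poly_mul(master, [1.0 - p, p])     # tuple absent / present
    return master                                   # master[k] = P(count = k)

dist = count_distribution([0.5, 0.5])
```

two fair coin - flip tuples give the binomial distribution [ 0. 25, 0. 5, 0. 25 ], and the master polynomial is built incrementally, one tuple at a time, as the abstract describes.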
|
arxiv:1307.0844
|
we compute the primordial curvature spectrum generated during warm inflation, including shear viscous effects. the primordial spectrum is dominated by the thermal fluctuations of the radiation bath, sourced by the dissipative term of the inflaton field. the dissipative coefficient \ upsilon, computed from first principles in the close - to - equilibrium approximation, depends in general on the temperature t, and this dependence renders the system of the linear fluctuations coupled. whenever the dissipative coefficient is larger than the hubble expansion rate h, there is a growing mode in the fluctuations before horizon crossing. however, dissipation intrinsically means departures from equilibrium, and therefore the presence of a shear viscous pressure in the radiation fluid. this in turn acts as an extra friction term for the radiation fluctuations that tends to damp the growth of the perturbations. independently of the t functional dependence of the dissipation and the shear viscosity, we find that when the shear viscous coefficient \ zeta _ s is larger than 3 \ rho _ r / h at horizon crossing, \ rho _ r being the radiation energy density, the shear damping effect wins and there is no growing mode in the spectrum.
|
arxiv:1106.0701
|
discussion paper on " fast approximate inference for arbitrarily large semiparametric regression models via message passing " by wand [ arxiv : 1602. 07412 ].
|
arxiv:1609.05615
|
a field - theoretical description of the photoproduction of two pions off the nucleon is presented that applies to real as well as virtual photons in the one - photon approximation. the lorentz - covariant theory is complete at the level of all explicit faddeev - type three - body final - state mechanisms of dressed interacting hadrons, including those of the nonlinear dyson - schwinger type. all electromagnetic currents are constructed to satisfy their respective ( generalized ) ward - takahashi identities and thus satisfy local gauge invariance as a matter of course. the faddeev - type ordering structure results in a natural expansion of the full two - pion photoproduction current $ \ mpp ^ \ mu $ in terms of multiple loops that preserve gauge invariance order by order in the number of loops, which in turn lends itself naturally to practical applications of increasing sophistication with increasing number of loops.
|
arxiv:1211.0703
|
reliability is essential for storing files in many applications of distributed storage systems. to maintain reliability, when a storage node fails, a new node should be regenerated by a repair process. most of the previous results on the repair problem assume perfect ( error - free ) links in the networks. however, in practice, especially in a wireless network, the transmitted packets ( for repair ) may be lost due to, e. g., link failure or buffer overflow. we study the repair problem of distributed storage systems in packet erasure networks, where a packet loss is modeled as an erasure. the minimum repair - bandwidth, namely the amount of information sent from the surviving nodes to the new node, is established under the ideal assumption of an infinite number of packet transmissions. we also study the bandwidth - storage tradeoffs in erasure networks. then, the use of repairing storage nodes ( nodes with smaller storage space ) is proposed to reduce the repair - bandwidth. we study the minimal storage of repairing storage nodes. for the case of a finite number of packet transmissions, the probability of successful repairing is investigated. we show that repair with a finite number of packet transmissions may use much larger bandwidth than the minimum repair - bandwidth. finally, we propose a combinatorial optimization problem, which results in the optimal repair - bandwidth for a given packet erasure probability and finite packet transmissions.
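the minimum repair - bandwidth mentioned above comes from the regenerating - codes framework. the following is a sketch of the classic cut - set bound ( dimakis et al. ) — background the abstract assumes rather than a result it states : with per - node storage alpha and a download of beta from each of d helpers, a file of size m recoverable from any k nodes must satisfy m <= sum over i = 0.. k - 1 of min ( alpha, ( d - i ) beta ).

```python
# Background sketch, not this paper's contribution: the cut-set bound
# of regenerating codes. Sweeping beta (per-helper download) against
# alpha (per-node storage) traces the storage-bandwidth tradeoff curve
# the abstract refers to.

def max_file_size(alpha, beta, k, d):
    """Largest file size supported at the (alpha, beta) tradeoff point."""
    assert d >= k, "this form assumes repair contacts at least k helpers"
    return sum(min(alpha, (d - i) * beta) for i in range(k))

# bandwidth-limited point: storage is plentiful, helpers' downloads bind
m_bw = max_file_size(alpha=100, beta=1, k=2, d=3)   # min(100,3) + min(100,2)
```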
|
arxiv:1405.3188
|
in this paper, we consider the spectral theory of linear differential - algebraic equations ( daes ) for periodic daes in canonical form, i. e., \ begin { equation * } j \ frac { df } { dt } + hf = \ lambda wf, \ end { equation * } where $ j $ is a constant skew - hermitian $ n \ times n $ matrix that is not invertible, both $ h = h ( t ) $ and $ w = w ( t ) $ are $ d $ - periodic hermitian $ n \ times n $ - matrices with lebesgue measurable functions as entries, and $ w ( t ) $ is positive semidefinite and invertible for a. e. $ t \ in \ mathbb { r } $ ( i. e., lebesgue almost everywhere ). under some additional hypotheses on $ h $ and $ w $, called the local index - 1 hypotheses, we study the maximal and the minimal operators $ l $ and $ l _ 0 ' $, respectively, associated with the differential - algebraic operator $ \ mathcal { l } = w ^ { - 1 } ( j \ frac { d } { dt } + h ) $, both treated as unbounded operators in a hilbert space $ l ^ 2 ( \ mathbb { r } ; w ) $ of weighted square - integrable vector - valued functions. we prove the following : ( i ) the minimal operator $ l _ 0 ' $ is a densely defined and closable operator ; ( ii ) the maximal operator $ l $ is the closure of $ l _ 0 ' $ ; ( iii ) $ l $ is a self - adjoint operator on $ l ^ 2 ( \ mathbb { r } ; w ) $ with no eigenvalues of finite multiplicity, but may have eigenvalues of infinite multiplicity. as an important application, we show that for 1d photonic crystals with passive lossless media, maxwell ' s equations for the electromagnetic fields become, under separation of variables, periodic daes in canonical form satisfying our hypotheses, so that our spectral theory applies to them ( a primary motivation for this paper ).
|
arxiv:2211.02134
|
let $ f _ 1,..., f _ m $ be $ m \ ge 2 $ germs of biholomorphisms of $ \ c ^ n $, fixing the origin, with $ ( \ d f _ 1 ) _ o $ diagonalizable and such that $ f _ 1 $ commutes with $ f _ h $ for any $ h = 2,..., m $. we prove that, under certain arithmetic conditions on the eigenvalues of $ ( \ d f _ 1 ) _ o $ and some restrictions on their resonances, $ f _ 1,..., f _ m $ are simultaneously holomorphically linearizable if and only if there exists a particular complex manifold invariant under $ f _ 1,..., f _ m $.
|
arxiv:0812.3579
|
based on simplifications of previous numerical calculations [ graf and l \ " { o } wen, phys. rev. e \ textbf { 59 }, 1932 ( 1999 ) ], we propose algebraic free energy expressions for the smectic - a liquid crystal phase and the crystal phases of hard spherocylinders. quantitative agreement with simulations is found for the resulting equations of state. the free energy expressions can be used to straightforwardly compute the full phase behavior for all aspect ratios and to provide a suitable benchmark for exploring how attractive interrod interactions mediate the phase stability through perturbation approaches such as free - volume or van der waals theory.
|
arxiv:2006.01555
|
electrolyte gating is a powerful technique for accumulating large carrier densities in surface two - dimensional electron systems ( 2des ). yet this approach suffers from significant sources of disorder : electrochemical reactions can damage or alter the surface of interest, and the ions of the electrolyte and various dissolved contaminants sit angstroms from the 2des. accordingly, electrolyte gating is well - suited to studies of superconductivity and other phenomena robust to disorder, but of limited use when reactions or disorder must be avoided. here we demonstrate that these limitations can be overcome by protecting the sample with a chemically inert, atomically smooth sheet of hexagonal boron nitride ( bn ). we illustrate our technique with electrolyte - gated strontium titanate, whose mobility improves more than tenfold when protected with bn. we find this improvement even for our thinnest bn, of measured thickness 6 angstroms, with which we can accumulate electron densities nearing 10 ^ 14 cm ^ - 2. our technique is portable to other materials, and should enable future studies where high carrier density modulation is required but electrochemical reactions and surface disorder must be minimized.
|
arxiv:1410.3034
|
the design, construction and performance characteristics of a simple axial - field ionization chamber suitable for identifying ions in a radioactive beam are presented. optimized for use with low - energy radioactive beams ( < 5 mev / a ) the detector presents only three 0. 5 $ \ mu $ m / cm $ ^ 2 $ foils to the beam in addition to the detector gas. a fast charge sensitive amplifier ( csa ) integrated into the detector design is also described. coupling this fast csa to the axial field ionization chamber produces an output pulse with a risetime of 60 - 70 ns and a fall time of 100 ns, making the detector capable of sustaining a relatively high rate. tests with an $ \ alpha $ source establish the detector energy resolution as $ \ sim $ 8 $ \ % $ for an energy deposit of $ \ sim $ 3. 5 mev. the energy resolution with beams of 2. 5 and 4. 0 mev / a $ ^ { 39 } $ k ions and the dependence of the energy resolution on beam intensity is measured. at an instantaneous rate of 3 x 10 $ ^ 5 $ ions / s the energy resolution has degraded to 14 % with a pileup of 12 %. the good energy resolution of this detector at rates up to 3 x 10 $ ^ 5 $ ions / s makes it an effective tool in the characterization of low - energy radioactive beams.
|
arxiv:1608.03179
|
depth estimation is a cornerstone for autonomous driving, yet acquiring per - pixel depth ground truth for supervised learning is challenging. self - supervised surround depth estimation ( sssde ) from consecutive images offers an economical alternative. while previous sssde methods have proposed different mechanisms to fuse information across images, few of them explicitly consider the cross - view constraints, leading to inferior performance, particularly in overlapping regions. this paper proposes an efficient and consistent pose estimation design and two loss functions to enhance cross - view consistency for sssde. for pose estimation, we propose to use only front - view images to reduce training memory and sustain pose estimation consistency. the first loss function is the dense depth consistency loss, which penalizes the difference between predicted depths in overlapping regions. the second one is the multi - view reconstruction consistency loss, which aims to maintain consistency between reconstruction from spatial and spatial - temporal contexts. additionally, we introduce a novel flipping augmentation to improve the performance further. our techniques enable a simple neural model to achieve state - of - the - art performance on the ddad and nuscenes datasets. last but not least, our proposed techniques can be easily applied to other methods. the code is available at https : / / github. com / denyingmxd / cvcdepth.
|
arxiv:2407.04041
|
the degree of synchronization and the amount of dynamical cluster formation in electroencephalographic ( eeg ) signals are characterized by employing two order parameters introduced in the context of coupled chaotic systems subject to external noise. these parameters are calculated in eeg signals from a group of healthy subjects and a group of epileptic patients, including a patient experiencing an epileptic crisis. the evolution of these parameters shows the occurrence of intermittent synchronization and clustering in the brain activity during an epileptic crisis. significantly, the existence of an instantaneous maximum of synchronization previous to the onset of a crisis is revealed by this procedure. the mean values of the order parameters and their standard deviations are compared between both groups of individuals.
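the abstract does not spell out its two order parameters ; purely as a generic illustration, a widely used synchronization measure for a set of oscillator phases is the modulus of the mean phase vector ( the kuramoto order parameter ), which may differ from the quantities actually used in the paper.

```python
import cmath

# Generic illustration only (not necessarily the paper's order
# parameters): the Kuramoto order parameter r = |sum_j exp(i*theta_j)|/n
# equals 1 for fully synchronized phases and is near 0 when phases are
# incoherent.

def kuramoto_r(phases):
    """Degree of phase synchronization of a population of oscillators."""
    z = sum(cmath.exp(1j * th) for th in phases) / len(phases)
    return abs(z)
```

tracking such a quantity over sliding windows of instantaneous phases ( e. g. from a hilbert transform of each eeg channel ) is how intermittent synchronization is typically quantified.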
|
arxiv:nlin/0506014
|
the intensity of a default time is obtained by assuming that the default indicator process has an absolutely continuous compensator. here we drop the assumption of absolute continuity with respect to the lebesgue measure and only assume that the compensator is absolutely continuous with respect to a general $ \ sigma $ - finite measure. this allows, for example, to incorporate the merton model into the generalized intensity based framework. an extension of the black - cox model is also considered. we propose a class of generalized merton models and study absence of arbitrage by a suitable modification of the forward rate approach of heath - jarrow - morton ( 1992 ). finally, we study affine term structure models which fit in this class. they exhibit stochastic discontinuities in contrast to the affine models previously studied in the literature.
|
arxiv:1512.03896
|
we consider cones in a hilbert space associated to two von neumann algebras and determine when one algebra is included in the other. if a cone is associated to a von neumann algebra, the jordan structure is naturally recovered from it, and we can characterize the projections of the given von neumann algebra with this structure in some special situations.
|
arxiv:0801.4259
|
task - oriented communications, mostly using learning - based joint source - channel coding ( jscc ), aim to design a communication - efficient edge inference system by transmitting task - relevant information to the receiver. however, only transmitting task - relevant information without introducing any redundancy may cause robustness issues in learning due to channel variations, and jscc, which directly maps the source data into continuous channel input symbols, poses compatibility issues with existing digital communication systems. in this paper, we address these two issues by first investigating the inherent tradeoff between the informativeness of the encoded representations and the robustness to information distortion in the received representations, and then propose a task - oriented communication scheme with digital modulation, named discrete task - oriented jscc ( dt - jscc ), where the transmitter encodes the features into a discrete representation and transmits it to the receiver with the digital modulation scheme. in the dt - jscc scheme, we develop a robust encoding framework, named robust information bottleneck ( rib ), to improve the communication robustness to channel variations, and derive a tractable variational upper bound of the rib objective function using the variational approximation to overcome the computational intractability of mutual information. the experimental results demonstrate that the proposed dt - jscc achieves better inference performance than the baseline methods with low communication latency, and exhibits robustness to channel variations due to the applied rib framework.
|
arxiv:2209.10382
|
we study the density of states and the optical conductivity of a kondo lattice which is immersed in a massless dirac fermi sea, as characterized by a linear dispersion relation. as a result of the hybridization $ v $ with the $ f $ - electron levels, the pseudo - gap in the conduction band becomes duplicated and is shifted both into the upper and the lower quasiparticle band. we find that due to the linear dispersion of the dirac fermions, the kondo insulator gap is observable in the optical conductivity in contrast to the kondo lattice system in a conventional conduction band, and the resulting gap \, [ $ \ delta _ { \ rm gap } ( t ) $ ] depends on temperature. the reason is that the kondo insulator gap is an { \ it indirect gap } in conventional kondo lattices, while it becomes a { \ it direct gap } in the dirac fermi sea. we find that the optical conductivity attains two peaks and is vanishing exactly at $ 2 b ( t ) v $ where $ b $ depends on temperature.
|
arxiv:1312.5041
|
we describe verse by verse, our experiment in augmenting the creative process of writing poetry with an ai. we have created a group of ai poets, styled after various american classic poets, that can offer generated lines of verse as suggestions while a user is composing a poem. in this paper, we describe the underlying system that produces these suggestions. it includes a generative model, which is tasked with generating a large corpus of lines of verse offline that are then stored in an index, and a dual - encoder model that is tasked with recommending the next possible set of verses from our index given the previous line of verse.
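a toy sketch of the retrieval step described above ; embed ( ) here is a stand - in bag - of - words count, not the paper's dual - encoder model, and all names are invented for illustration.

```python
import math

# Toy sketch of suggestion-by-retrieval: candidate verses are embedded
# offline into an index; at composition time the previous line is
# embedded and the most similar candidates are suggested. The bag-of-
# words embed() stands in for the paper's learned dual-encoder.

def embed(text):
    """Sparse bag-of-words vector as a token -> count dict."""
    v = {}
    for token in text.lower().split():
        v[token] = v.get(token, 0) + 1
    return v

def cosine(a, b):
    dot = sum(c * b.get(t, 0) for t, c in a.items())
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest(index_lines, previous_line, k=1):
    """Rank the offline index by similarity to the previous line."""
    q = embed(previous_line)
    ranked = sorted(index_lines, key=lambda s: cosine(q, embed(s)), reverse=True)
    return ranked[:k]

best = suggest(["the sea rolls on beneath the moon",
                "a quiet road through winter fields"],
               "the moon above the rolling sea")
```

in the real system the index would hold a large pre - generated corpus and the encoder would capture meaning rather than token overlap, but the offline - index / online - lookup split is the same.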
|
arxiv:2103.17205
|
superconductivity ( sc ) occurring at low densities of mobile electrons is still a mystery since the standard theories do not apply in this regime. we address this problem by using a microscopic model for ferroelectric ( fe ) modes, which mediate an effective attraction between electrons. when the dispersion of modes, around zero momentum, is steep, forward scattering is the main pairing process and the self - consistent equation for the gap function can be solved analytically. the solutions exhibit unique features : different momentum components of the gap function are decoupled, and at the critical regime of the fe modes, different frequency components are also decoupled. this leads to effects that can be observed experimentally : the gap function can be non - monotonic in temperature and the critical temperature can be independent of the chemical potential. the model is applicable to lightly doped polar semiconductors, in particular, strontium titanate.
|
arxiv:1803.09987
|
fine - tuning large - scale pretrained models has led to tremendous progress in well - studied modalities such as vision and nlp. however, similar gains have not been observed in many other modalities due to a lack of relevant pretrained models. in this work, we propose orca, a general cross - modal fine - tuning framework that extends the applicability of a single large - scale pretrained model to diverse modalities. orca adapts to a target task via an align - then - refine workflow : given the target input, orca first learns an embedding network that aligns the embedded feature distribution with the pretraining modality. the pretrained model is then fine - tuned on the embedded data to exploit the knowledge shared across modalities. through extensive experiments, we show that orca obtains state - of - the - art results on 3 benchmarks containing over 60 datasets from 12 modalities, outperforming a wide range of hand - designed, automl, general - purpose, and task - specific methods. we highlight the importance of data alignment via a series of ablation studies and demonstrate orca ' s utility in data - limited regimes.
|
arxiv:2302.05738
|
we study the conductance afforded by a normal - metal probe which is directly contacting the helical edge modes of a quantum spin hall insulator ( qshi ). we show a $ 2e ^ 2 / h $ conductance peak at zero temperature in qshi - based superconductor - ferromagnet hybrids due to the formation of a single majorana bound state ( mbs ). in a corresponding josephson junction hosting a pair of mbss, a $ 4e ^ 2 / h $ conductance peak is found at zero temperature. the conductance quantization is robust to changes of the relevant system parameters and, remarkably, remains unaltered as the distance between the probe and the mbss increases. in the low temperature limit, the conductance peak is robust as long as the probe is placed within the localization length of the mbss. our findings can therefore provide an effective way to detect the existence of mbss in qshi systems.
|
arxiv:2110.04472
|
the observed velocities of pulsars suggest the possibility that sterile neutrinos with mass of several kev are emitted from a cooling neutron star. the same sterile neutrinos could constitute all or part of cosmological dark matter. the neutrino - driven kicks can exhibit delays depending on the mass and the mixing angle, which can be compared with the pulsar data. we discuss the allowed ranges of sterile neutrino parameters, consistent with the latest cosmological and x - ray bounds, which can explain the pulsar kicks for different delay times.
|
arxiv:0801.4734
|
and jacques tits. the university of chicago ' s 1960 – 61 group theory year brought together group theorists such as daniel gorenstein, john g. thompson and walter feit, laying the foundation of a collaboration that, with input from numerous other mathematicians, led to the classification of finite simple groups, with the final step taken by aschbacher and smith in 2004. this project exceeded previous mathematical endeavours by its sheer size, in both length of proof and number of researchers. research concerning this classification proof is ongoing. group theory remains a highly active mathematical branch, impacting many other fields, as the examples below illustrate. = = elementary consequences of the group axioms = = basic facts about all groups that can be obtained directly from the group axioms are commonly subsumed under elementary group theory. for example, repeated applications of the associativity axiom show that the unambiguity of a ⋅ b ⋅ c = ( a ⋅ b ) ⋅ c = a ⋅ ( b ⋅ c ) { \ displaystyle a \ cdot b \ cdot c = ( a \ cdot b ) \ cdot c = a \ cdot ( b \ cdot c ) } generalizes to more than three factors. because this implies that parentheses can be inserted anywhere within such a series of terms, parentheses are usually omitted. = = = uniqueness of identity element = = = the group axioms imply that the identity element is unique ; that is, there exists only one identity element : any two identity elements e { \ displaystyle e } and f { \ displaystyle f } of a group are equal, because the group axioms imply e = e ⋅ f = f { \ displaystyle e = e \ cdot f = f }. it is thus customary to speak of the identity element of the group. = = = uniqueness of inverses = = = the group axioms also imply that the inverse of each element is unique. let a group element a { \ displaystyle a } have both b { \ displaystyle b } and c { \ displaystyle c } as inverses. 
then b = b ⋅ e ( e is the identity element ) = b ⋅ ( a ⋅ c ) ( c and a are inverses of each other ) = ( b ⋅ a ) ⋅ c ( associativity ) = e ⋅ c ( b is an inverse of a ) = c ( e is the identity element ), and therefore b = c { \ displaystyle b = b \ cdot e = b \ cdot ( a \ cdot c ) = ( b \ cdot a ) \ cdot c = e \ cdot c = c }.
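the uniqueness arguments above can also be checked mechanically on a finite group given by its cayley table ; a small illustrative sketch ( all names invented ):

```python
# For a finite magma given as a Cayley table over {0, ..., n-1}, list
# its two-sided identity elements and, for a group, the inverses of an
# element. The group axioms guarantee at most one identity and exactly
# one inverse per element, which these brute-force searches confirm.

def identities(table):
    n = len(table)
    return [e for e in range(n)
            if all(table[e][x] == x and table[x][e] == x for x in range(n))]

def inverses(table, e, a):
    """All two-sided inverses of a with respect to identity e."""
    n = len(table)
    return [b for b in range(n) if table[a][b] == e and table[b][a] == e]

z3 = [[(i + j) % 3 for j in range(3)] for i in range(3)]  # cyclic group Z_3
```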
|
https://en.wikipedia.org/wiki/Group_(mathematics)
|
time lags occur in a vast range of real - world dynamical systems due to finite reaction times or propagation speeds. here we derive an analytical approach to determine the asymptotic stability of synchronous states in networks of coupled inertial oscillators with constant delay. building on the master stability formalism, our technique provides necessary and sufficient delay master stability conditions. we apply it to two classes of potential future power grids, where processing delays in control dynamics will likely pose a challenge as renewable energies proliferate. distinguishing between phase and frequency delay, our method offers an insight into how bifurcation points depend on the network topology of these system designs.
|
arxiv:1911.09730
|
decision making is a challenging task in online recommender systems. the decision maker often needs to choose a contextual item at each step from a set of candidates. contextual bandit algorithms have been successfully deployed in such applications, owing to their trade - off between exploration and exploitation and their state - of - the - art performance in minimizing online costs. however, the applicability of existing contextual bandit methods is limited by the over - simplified assumptions of the problem, such as assuming a simple form of the reward function or assuming a static environment where the states are not affected by previous actions. in this work, we put forward policy gradients for contextual recommendations ( pgcr ) to solve the problem without those unrealistic assumptions. it optimizes over a restricted class of policies where the marginal probability of choosing an item ( in expectation over the other items ) has a simple closed form, and the gradient of the expected return over the policy in this class is in a succinct form. moreover, pgcr leverages two useful heuristic techniques called time - dependent greed and actor - dropout. the former ensures that pgcr is empirically greedy in the limit, and the latter addresses the trade - off between exploration and exploitation by using the policy network with dropout as a bayesian approximation. pgcr can solve the standard contextual bandits as well as its markov decision process generalization. therefore it can be applied to a wide range of realistic settings of recommendations, such as personalized advertising. we evaluate pgcr on toy datasets as well as a real - world dataset of personalized music recommendations. experiments show that pgcr enables fast convergence and low regret, and outperforms both classic contextual - bandit and vanilla policy gradient methods.
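pgcr itself is not reproduced here ; as context, the following is a minimal sketch of the vanilla policy - gradient baseline the abstract compares against, applied to a plain ( non - contextual ) 3 - armed bandit. all parameter choices are illustrative.

```python
import math
import random

# Minimal REINFORCE sketch on a 3-armed bandit with a softmax policy.
# This is only the "vanilla policy gradient" baseline mentioned in the
# abstract, not PGCR (which adds closed-form choice marginals,
# time-dependent greed and actor-dropout).

def softmax(prefs):
    m = max(prefs)
    ex = [math.exp(p - m) for p in prefs]
    s = sum(ex)
    return [e / s for e in ex]

def train(mean_rewards, steps=4000, lr=0.1, seed=0):
    rng = random.Random(seed)
    prefs = [0.0] * len(mean_rewards)
    for _ in range(steps):
        probs = softmax(prefs)
        a = rng.choices(range(len(prefs)), weights=probs)[0]
        r = mean_rewards[a] + rng.gauss(0, 0.05)      # noisy reward
        for i in range(len(prefs)):                   # grad of log pi(a)
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += lr * r * grad
    return softmax(prefs)

probs = train([0.0, 0.0, 1.0])   # the policy should favour the third arm
```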
|
arxiv:1802.04162
|
a quasigroup $ q $ is called maximally nonassociative if for $ x, y, z \ in q $ we have that $ x \ cdot ( y \ cdot z ) = ( x \ cdot y ) \ cdot z $ only if $ x = y = z $. we show that, with finitely many exceptions, there exists a maximally nonassociative quasigroup of order $ n $ whenever $ n $ is not of the form $ n = 2p _ 1 $ or $ n = 2p _ 1p _ 2 $ for primes $ p _ 1, p _ 2 $ with $ p _ 1 \ le p _ 2 < 2p _ 1 $.
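the defining property above can be checked directly on small examples ; a brute - force sketch, cubic in the order and purely illustrative.

```python
from itertools import product

# Brute-force check of the defining property from the abstract: a
# quasigroup given by its Cayley table over {0, ..., n-1} is maximally
# nonassociative iff x*(y*z) == (x*y)*z forces x == y == z.

def is_maximally_nonassociative(table):
    n = len(table)
    for x, y, z in product(range(n), repeat=3):
        if table[x][table[y][z]] == table[table[x][y]][z] and not (x == y == z):
            return False
    return True

z3 = [[(i + j) % 3 for j in range(3)] for i in range(3)]  # the group Z_3
# any group of order > 1 is associative everywhere, so it fails the test,
# while the trivial order-1 quasigroup qualifies vacuously
```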
|
arxiv:1912.07040
|
this paper proposes a reconciliation of two different theories of information. the first, originally proposed in a lesser - known work by claude shannon, describes how the information content of channels can be described qualitatively, but still abstractly, in terms of information elements, i. e. equivalence relations over the data source domain. shannon showed that these elements form a complete lattice, with the order expressing when one element is more informative than another. in the context of security and information flow this structure has been independently rediscovered several times, and used as a foundation for reasoning about information flow. the second theory of information is dana scott ' s domain theory, a mathematical framework for giving meaning to programs as continuous functions over a particular topology. scott ' s partial ordering also represents when one element is more informative than another, but in the sense of computational progress, i. e. when one element is a more defined or evolved version of another. to give a satisfactory account of information flow in programs it is necessary to consider both theories together, to understand what information is conveyed by a program viewed as a channel ( à la shannon ) but also by the definedness of its encoding ( à la scott ). we combine these theories by defining the lattice of computable information ( loci ), a lattice of preorders rather than equivalence relations. loci retains the rich lattice structure of shannon ' s theory, filters out elements that do not make computational sense, and refines the remaining information elements to reflect how scott ' s ordering captures the way that information is presented. we show how the new theory facilitates the first general definition of termination - insensitive information flow properties, a weakened form of information flow property commonly targeted by static program analyses.
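shannon's information elements — equivalence relations over the source domain — can be illustrated concretely : observing two channels at once yields the common refinement of their induced partitions, the least element above both in the lattice ordered by informativeness ( conventions for which direction is called join vs meet vary ; this sketch just computes the refinement, with all names invented ).

```python
# Illustrative sketch of Shannon's "information elements": a channel is
# abstracted as a partition of its input domain (inputs in the same
# block are indistinguishable from the output). Observing two channels
# together induces the common refinement of their partitions.

def refinement(p1, p2, domain):
    """Common refinement of two partitions given as element -> label maps."""
    blocks = {}
    for x in domain:
        key = (p1(x), p2(x))          # the pair of observations made on x
        blocks.setdefault(key, set()).add(x)
    return sorted(map(sorted, blocks.values()))

domain = range(8)
parity = lambda x: x % 2              # a channel revealing the low bit
high = lambda x: x // 4               # a channel revealing the high bit
combined = refinement(parity, high, domain)
```

the combined observation distinguishes strictly more inputs than either channel alone, which is exactly the "more informative than" order on information elements.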
|
arxiv:2211.10099
|
a monochromatic gamma ray line results when dark matter particles in the galactic halo annihilate to produce a two body final state which includes a photon. such a signal is very distinctive from astrophysical backgrounds, and thus represents an incisive probe of theories of dark matter. we compare the recent null results of searches for gamma ray lines in the galactic center and other regions of the sky with the predictions of effective theories describing the interactions of dark matter particles with the standard model. we find that the null results of these searches provide constraints on the nature of dark matter interactions with ordinary matter which are complementary to constraints from other observables, and stronger than collider constraints in some cases.
|
arxiv:1009.0008
|
the results of a kinematic study of the galactic bulge based on spectra of red giants in fields at projected distances of 1. 4 - 1. 8 kpc from the galactic center are presented. there is a marked trend of kinematics with metallicity, in the sense that the more metal - poor population has higher velocity dispersion and lower rotation velocity than the metal - rich population. the k giants more metal - poor than [ fe / h ] = - 1 have halo - like kinematics, with no significant rotation and sigma = 120 km / s independent of galactocentric distance. the velocity dispersion of the giants with [ fe / h ] > - 1 decreases with increasing galactocentric distance, and this population is rotating with v = 0. 9 km / s / degree. the present observations, together with the observed metallicity gradient, imply bulge formation through dissipational collapse. in such a picture, low angular momentum gas lost from the formation of the halo was deposited in the bulge. the bulge would then be younger than the halo. observations of the galactic globular cluster system are consistent with this picture, if we associate the metal - rich globular clusters within 3 kpc of the galactic center with the bulge rather than with the thick disk or halo. data on the rr lyr in baade ' s window are also consistent with this picture if these stars are considered part of the inner halo rather than the bulge.
|
arxiv:astro-ph/9509109
|
we study an inflationary scenario in supergravity model with a gauge kinetic function. we find exact anisotropic power - law inflationary solutions when both the potential function for an inflaton and the gauge kinetic function are exponential type. the dynamical system analysis tells us that the anisotropic power - law inflation is an attractor for a large parameter region.
|
arxiv:1010.5307
|
The goal of this paper is to design optimal multilevel solvers for the finite element approximation of second-order linear elliptic problems with piecewise constant coefficients on bisection grids. Local multigrid and BPX preconditioners are constructed based on local smoothing only at the newest vertices and their immediate neighbors. The analysis of eigenvalue distributions for these local multilevel preconditioned systems shows that there are only a fixed number of eigenvalues which are deteriorated by the large jump. The remaining eigenvalues are bounded uniformly with respect to the coefficients and the mesh size. Therefore, the resulting preconditioned conjugate gradient algorithm will converge with an asymptotic rate independent of the coefficients and logarithmic with respect to the mesh size. As a result, the overall computational complexity is nearly optimal.
|
arxiv:1006.3277
|
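The preconditioned conjugate gradient iteration the abstract refers to can be sketched in a few lines. This is a generic PCG with a simple Jacobi (diagonal) preconditioner standing in for the paper's local multilevel construction, applied to a hypothetical 1D finite element problem with a large coefficient jump; it is illustrative only, not the paper's solver.

```python
def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient; M_inv holds the inverse diagonal preconditioner."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual r = b - A x, with x = 0
    z = [M_inv[i] * r[i] for i in range(n)]    # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for it in range(1, max_iter + 1):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        for i in range(n):
            x[i] += alpha * p[i]
            r[i] -= alpha * Ap[i]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            return x, it
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x, max_iter

def stiffness(n, k_left=1.0, k_right=1e4):
    """1D FE stiffness matrix, piecewise constant coefficient jumping mid-domain."""
    k = [k_left if e < (n + 1) // 2 else k_right for e in range(n + 1)]
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = k[i] + k[i + 1]
        if i + 1 < n:
            A[i][i + 1] = A[i + 1][i] = -k[i + 1]
    return A
```

Running `pcg(stiffness(15), [1.0]*15, M_inv)` with `M_inv` the inverse diagonal converges despite the 10^4 coefficient jump; the paper's point is that a properly built multilevel preconditioner makes the iteration count essentially independent of both the jump and the mesh size.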
If a large quantum computer (QC) existed today, what type of physical problems could we efficiently simulate on it that we could not simulate on a classical Turing machine? In this paper we argue that a QC could solve some relevant physical "questions" more efficiently. The existence of one-to-one mappings between different algebras of observables or between different Hilbert spaces allows us to represent and imitate any physical system by any other one (e.g., a bosonic system by a spin-1/2 system). We explain how these mappings can be performed, showing quantum networks useful for the efficient evaluation of some physical properties, such as correlation functions and energy spectra.
|
arxiv:quant-ph/0304063
|
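A minimal classical sketch of the kind of mapping the abstract refers to: a single bosonic mode truncated to `nmax` Fock states fits in a finite-dimensional (e.g., few-qubit) Hilbert space, with the canonical commutator [a, a†] = 1 holding exactly on all but the highest retained state. The helper names below are ours, not from the paper, and the code only builds the truncated operators, not the quantum networks.

```python
import math

def truncated_boson_ops(nmax):
    """Annihilation/creation matrices on Fock states |0>..|nmax-1> (hard cutoff)."""
    a = [[0.0] * nmax for _ in range(nmax)]
    for n in range(1, nmax):
        a[n - 1][n] = math.sqrt(n)          # a|n> = sqrt(n) |n-1>
    adag = [[a[j][i] for j in range(nmax)] for i in range(nmax)]  # transpose
    return a, adag

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

For nmax = 4 (two qubits), the commutator a a† − a† a equals the identity on |0>, |1>, |2> and deviates only on the cutoff state |3>, which is why such truncated encodings faithfully imitate bosonic dynamics confined to low occupation numbers.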
The radiative viscosity of superfluid $npe$ matter is studied, and it is found that to the lowest order in $\delta\mu/T$ the ratio of the radiative viscosity to the bulk viscosity is the same as that of the normal matter.
|
arxiv:1001.0382
|
Precise measurements of $b \to c \tau \bar\nu$ decays require large resource-intensive Monte Carlo (MC) samples, which incorporate detailed simulations of detector responses and physics backgrounds. Extracted parameters may be highly sensitive to the underlying theoretical models used in the MC generation. Because new physics (NP) can alter decay distributions and acceptances, the standard practice of fitting NP Wilson coefficients to SM-based measurements of the $R(D^{(*)})$ ratios can be biased. The newly developed Hammer software tool enables efficient reweighting of MC samples to arbitrary NP scenarios or to any hadronic matrix elements. We demonstrate how Hammer allows such biases to be avoided through self-consistent fits directly to the NP Wilson coefficients. We also present example analyses that demonstrate the sizeable biases that can otherwise occur from naive NP interpretations of SM-based measurements. The Hammer library is presently interfaced with several existing experimental analysis frameworks, and we provide an overview of its structure.
|
arxiv:2002.00020
|
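The event-by-event reweighting idea can be illustrated without the library itself: if each simulated event carries its SM and NP amplitude contributions, its weight under an NP hypothesis with Wilson coefficient c is |A_SM + c·A_NP|² / |A_SM|². This is a schematic stand-in for Hammer's amplitude-tensor machinery, with invented event data and names; a real analysis sums over helicity amplitudes and phase space.

```python
def np_weights(events, c_np):
    """Per-event weights reweighting an SM sample to an NP hypothesis.

    events: list of (a_sm, a_np) complex amplitude pairs per event
    (hypothetical inputs for illustration).
    """
    return [abs(a_sm + c_np * a_np) ** 2 / abs(a_sm) ** 2
            for a_sm, a_np in events]
```

With c_np = 0 every weight is 1 and the SM sample is reproduced; scanning c_np in a fit then reuses the same expensive detector-level sample at every point, which is the efficiency gain the abstract describes.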
The magnetic relaxation and hysteresis of a system of single-domain particles with dipolar interactions are studied by Monte Carlo simulations. We model the system by a chain of classical Heisenberg spins with randomly oriented easy axes and a log-normal distribution of anisotropy constants, interacting through dipole-dipole interactions. Extending the so-called $T\ln(t/\tau_0)$ method to interacting systems, we show how to relate the simulated relaxation curves to the effective energy barrier distributions responsible for the long-time relaxation. We find that the relaxation law changes from quasi-logarithmic to power-law when increasing the interaction strength. This fact is shown to be due to the appearance of an increasing number of small energy barriers caused by the reduction of the anisotropy energy barriers as the local dipolar fields increase.
|
arxiv:cond-mat/0311139
|
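The scaling variable at the heart of the $T\ln(t/\tau_0)$ method is easy to state: with Arrhenius relaxation times $\tau = \tau_0 e^{E/k_B T}$, barriers up to height $E = k_B T \ln(t/\tau_0)$ have relaxed by time $t$, so curves measured at different temperatures collapse when plotted against this variable. A minimal sketch in units with $k_B = 1$ (the function name and default $\tau_0$ are ours):

```python
import math

def effective_barrier(t, T, tau0=1e-9):
    """Barrier height (k_B = 1) probed at observation time t and temperature T."""
    return T * math.log(t / tau0)
```

Two (t, T) pairs with equal T ln(t/τ0) probe the same point of the barrier distribution, e.g. (t = 1 s, T = 10) matches (t = τ0·exp(E/20), T = 20); the paper exploits exactly this collapse to extract effective barrier distributions from simulated relaxation curves.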