text | source
---|---
In this paper we introduce the three main notions of probability used by physicists and discuss how these are to be used when invoking spacelike-separated observers in a relativistic framework. We discuss a standard EPRB experiment and concentrate upon problems of the interpretation of probabilities. We promote a particularly conservative interpretation of this experiment (which need not invoke an objective notion of collapse) where probabilities are, tentatively, passively Lorentz invariant. We also argue that the Heisenberg picture is preferable in relativistic situations due to a conflict between the Schrödinger picture and passive Lorentz transformations of probabilities. Throughout most of this paper we discuss the relative-frequency interpretation of probability, as this is the most commonly used. We also introduce the logically necessary notion of 'prior-frequency' in discussing whether the choice made by one observer can have any causal effect upon the measurement results of another. We also critically examine the foundational use of relative frequency in no-signalling theorems. We argue that standard quantum theory (SQT) and special relativity (SR) are probabilistically compatible, although we do not discuss whether they are compatible on the level of individual events.
|
arxiv:quant-ph/0501131
|
In 1983, Bouchet proved that every bidirected graph with a nowhere-zero integer flow has a nowhere-zero 216-flow, and conjectured that 216 could be replaced with 6. This paper shows that for cyclically 5-edge-connected bidirected graphs that number can be replaced with 8.
|
arxiv:2309.00704
|
Sahlqvist theory is extended to the fragments of the intuitionistic propositional calculus that include the conjunction connective. This allows us to introduce a Sahlqvist theory of intuitionistic character amenable to arbitrary protoalgebraic deductive systems. As an application, we obtain a Sahlqvist theorem for the fragments of the intuitionistic propositional calculus that include the implication connective and for the extensions of intuitionistic linear logic.
|
arxiv:2208.00691
|
An explicit and complete set of constants of the motion is constructed algorithmically for Friedmann-Lemaître-Robertson-Walker (FLRW) models consisting of an arbitrary number of non-interacting species. The inheritance of constants of the motion from simpler models as more species are added is stressed. It is then argued that all FLRW models admit what amounts to a unique candidate for a gravitational epoch function (a dimensionless scalar invariant, derivable from the Riemann tensor without differentiation, that is monotone throughout the evolution of the universe). The same relations that lead to the construction of constants of the motion allow an explicit evaluation of this function. In the simplest of all models, the $\Lambda$CDM model, it is shown that the epoch function exists for all models with $\Lambda > 0$, but for almost no models with $\Lambda \leq 0$.
|
arxiv:gr-qc/0603028
|
Public disclosure of important security information, such as knowledge of vulnerabilities or exploits, often occurs in blogs, tweets, mailing lists, and other online sources months before proper classification into structured databases. To facilitate timely discovery of such knowledge, we propose a novel semi-supervised learning algorithm, PACE, for identifying and classifying relevant entities in text sources. The main contribution of this paper is an enhancement of the traditional bootstrapping method for entity extraction: a time-memory trade-off that circumvents a costly corpus search while strengthening pattern nomination, which should increase accuracy. An implementation in the cyber-security domain is discussed, as well as the challenges that the security domain imposes on natural language processing.
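The cached-pattern idea can be illustrated with a toy bootstrapping round: seed entities nominate the contexts they co-occur with, and a precomputed pattern-to-entity index replaces the corpus re-scan (the time-memory trade-off). This is our own minimal sketch; the toy corpus, the one-word-context "pattern", and all names are illustrative assumptions, not PACE's actual design:

```python
from collections import defaultdict

# Toy corpus standing in for blog posts / tweets about vulnerabilities.
corpus = [
    "attackers exploit CVE-2014-0160 in openssl",
    "attackers exploit CVE-2013-3893 in internet explorer",
    "researchers disclosed CVE-2014-0160 yesterday",
]

def extract_pairs(sentence):
    """Yield (pattern, candidate) pairs, where the 'pattern' is simply
    the word immediately preceding the candidate token."""
    words = sentence.split()
    for i in range(1, len(words)):
        yield words[i - 1], words[i]

# Time-memory trade-off: index every (pattern, candidate) pair once,
# so later bootstrapping rounds avoid a full corpus rescan.
pattern_index = defaultdict(set)
for sentence in corpus:
    for pattern, candidate in extract_pairs(sentence):
        pattern_index[pattern].add(candidate)

# One bootstrapping round: seeds nominate patterns, patterns nominate entities.
seeds = {"CVE-2014-0160"}
nominated_patterns = {p for p, cands in pattern_index.items() if cands & seeds}
new_entities = set().union(*(pattern_index[p] for p in nominated_patterns)) - seeds
print(sorted(new_entities))  # → ['CVE-2013-3893']
```

Real systems use richer context patterns and confidence scoring, but the lookup-instead-of-rescan structure is the same.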
|
arxiv:1308.4648
|
This report deals with the basic concepts involved in deducing transit times for quantum scattering: the stationary phase method and its relation to delay times for relativistic and non-relativistic tunneling particles. We notice that the applicability of this method is constrained by several subtleties in deriving the phase time that describes the localization of scattered wave packets. We investigate the general relation between phase times and dwell times for quantum tunneling/scattering. Considering a symmetrical collision of two identical wave packets with a one-dimensional barrier, we demonstrate that these two distinct transit-time definitions are explicitly connected. The traversal times are obtained for a symmetrized (two identical bosons) and an antisymmetrized (two identical fermions) quantum colliding configuration. Multiple wave packet decomposition shows us that the phase time (group delay) describes the exact position of the scattered particles and, in addition to the exact relation with the dwell time, leads to a correct conceptual understanding of both transit-time definitions. Finally, we extend the non-relativistic formalism to the solutions for the tunneling zone of a one-dimensional electrostatic potential in the relativistic (Dirac to Klein-Gordon) wave equation, where the incoming wave packet exhibits the possibility of being almost totally transmitted through the potential barrier. The conditions for the occurrence of accelerated tunneling transmission probabilities are all quantified, and the problematic superluminal interpretation based on the non-relativistic tunneling dynamics is revisited.
|
arxiv:0903.2530
|
The magnetic dipole moment (MDM) and the electric dipole moment (EDM) of leptons are calculated under the assumption of lepton flavor violation (LFV) induced by spin-1 unparticles with both vector and axial-vector couplings to leptons, including a CP-violating phase. The experimental limits on the muon MDM and LFV processes such as the decay $l_i \to l_j l_k l_k$ are then used to constrain the LFV couplings for particular values of the unparticle operator dimension $d_u$ and the unparticle scale $\Lambda_u$, assuming that LFV transitions between the tau and muon leptons are dominant. It is found that the current experimental constraints favor a scenario with dominance of the vector couplings over the axial-vector couplings. We also obtain estimates for the EDMs of the electron and the muon, which are well below the current experimental limits.
|
arxiv:1109.4890
|
We study two families of cyclotomic graphs and perfect codes in them. They are Cayley graphs on the additive group of $\mathbb{Z}[\zeta_m]/A$, with connection sets $\{\pm(\zeta_m^i + A) : 0 \le i \le m-1\}$ and $\{\pm(\zeta_m^i + A) : 0 \le i \le \phi(m)-1\}$, respectively, where $\zeta_m$ ($m \ge 2$) is a primitive $m$th root of unity, $A$ a nonzero ideal of $\mathbb{Z}[\zeta_m]$, and $\phi$ Euler's totient function. We call them the $m$th cyclotomic graph and the second-kind $m$th cyclotomic graph, and denote them by $G_m(A)$ and $G^*_m(A)$, respectively. We give a necessary and sufficient condition for $D/A$ to be a perfect $t$-code in $G^*_m(A)$ and a necessary condition for $D/A$ to be such a code in $G_m(A)$, where $t \ge 1$ is an integer and $D$ an ideal of $\mathbb{Z}[\zeta_m]$ containing $A$. In the case when $m = 3, 4$, $G_m((\alpha))$ is known as an Eisenstein-Jacobi and a Gaussian network, respectively, and we obtain necessary conditions for $(\beta)/(\alpha)$ to be a perfect $t$-code in $G_m((\alpha))$, where $0 \ne \alpha, \beta \in \mathbb{Z}[\zeta_m]$ with $\beta$ dividing $\alpha$. In the literature such conditions are known to be sufficient when $m = 4$ and $m = 3$ under an additional condition. We give a classification of all first-kind Frobenius circulants of valency $2p$ and prove that they are all $p$th cyclotomic graphs, where
|
arxiv:1502.03272
|
This article explores the relationship between Schubert varieties and equivariant embeddings, using the framework of homogeneous fiber bundles over flag varieties. We show that the homogeneous fiber bundles obtained from Bott-Samelson-Demazure-Hansen varieties are always toroidal. Furthermore, we identify the wonderful varieties among them. We give a short proof of a conjecture of Gao, Hodges, and Yong for deciding when a Schubert variety is spherical with respect to an action of a Levi subgroup. By using BP-decompositions, we obtain a characterization of the smooth spherical Schubert varieties. Among the other applications of our results are: 1) a characterization of the spherical Bott-Samelson-Demazure-Hansen varieties, 2) an alternative proof of the fact that, in type A, every singular Schubert variety of torus complexity 1 is a spherical Schubert variety, and 3) a proof of the fact that, for simply laced algebraic groups of adjoint type, every spherical $G$-Schubert variety is locally rigid, that is to say, the first cohomology of its tangent sheaf vanishes.
|
arxiv:2305.00468
|
This paper presents the calculation of the electrical power transported by the electromagnetic fields of two parallel wires carrying opposite DC currents. The Poynting vector is developed in bipolar coordinates and symbolically integrated over different surfaces. For perfectly conducting wires, the purely longitudinal power in the space surrounding the wires is shown to be equal to that produced by the battery (and consumed by the load resistor). For resistive wires, the longitudinal power transported by the fields is shown to diminish with the distance traveled, and the loss is proved to be equal to the power entering the wires via the fields at their surfaces.
|
arxiv:2305.11827
|
We prove an inequality for the spectral radius of products of non-negative matrices conjectured by X. Zhan. We show that for all $n \times n$ non-negative matrices $A$ and $B$, $\rho(A \circ B) \le \rho((A \circ A)(B \circ B))^{1/2} \le \rho(AB)$, where $\circ$ represents the Hadamard product.
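The inequality chain is easy to sanity-check numerically; the following sketch (ours, using NumPy, not the paper's proof) verifies it on random non-negative matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(M):
    """Spectral radius: largest modulus of an eigenvalue."""
    return max(abs(np.linalg.eigvals(M)))

for _ in range(100):
    n = int(rng.integers(2, 6))
    A = rng.random((n, n))  # entrywise non-negative
    B = rng.random((n, n))
    lhs = rho(A * B)                      # rho(A ∘ B), Hadamard product
    mid = rho((A * A) @ (B * B)) ** 0.5   # rho((A∘A)(B∘B))^{1/2}
    rhs = rho(A @ B)                      # rho(AB)
    # Small tolerances absorb floating-point error in the eigensolver.
    assert lhs <= mid + 1e-9 <= rhs + 2e-9, (lhs, mid, rhs)
print("inequality held on all random trials")
```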
|
arxiv:0907.3312
|
Speech synthesis methods can create realistic-sounding speech, which may be used for fraud, spoofing, and misinformation campaigns. Forensic methods that detect synthesized speech are important for protection against such attacks. Forensic attribution methods provide even more information about the nature of synthesized speech signals because they identify the specific speech synthesis method (i.e., speech synthesizer) used to create a speech signal. Due to the increasing number of realistic-sounding speech synthesizers, we propose a speech attribution method that generalizes to new synthesizers not seen during training. To do so, we investigate speech synthesizer attribution in both a closed-set scenario and an open-set scenario. In other words, we consider some speech synthesizers to be "known" synthesizers (i.e., part of the closed set) and others to be "unknown" synthesizers (i.e., part of the open set). We represent speech signals as spectrograms and train our proposed method, known as the Compact Attribution Transformer (CAT), on the closed set for multi-class classification. Then, we extend our analysis to the open set to attribute synthesized speech signals to both known and unknown synthesizers. We utilize a t-distributed stochastic neighbor embedding (t-SNE) on the latent space of the trained CAT to differentiate between each unknown synthesizer. Additionally, we explore Poly-1 loss formulations to improve attribution results. Our proposed approach successfully attributes synthesized speech signals to their respective speech synthesizers in both closed- and open-set scenarios.
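The spectrogram representation mentioned above is a standard log-magnitude short-time Fourier transform; a minimal NumPy sketch follows (the frame length, hop size, and toy tone are our illustrative choices, and this is not the CAT model itself):

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Log-magnitude spectrogram via a Hann-windowed short-time FFT.
    Returns an array of shape (num_frames, frame_len // 2 + 1)."""
    window = np.hanning(frame_len)
    frames = [
        signal[start:start + frame_len] * window
        for start in range(0, len(signal) - frame_len + 1, hop)
    ]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(mags)  # log compression of the magnitudes

# A toy 1 kHz tone sampled at 16 kHz stands in for a speech signal.
t = np.arange(16000) / 16000.0
spec = spectrogram(np.sin(2 * np.pi * 1000.0 * t))
print(spec.shape)  # (num_frames, 129)
```

The resulting 2-D array is what an image-style classifier such as a transformer or CNN would consume.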
|
arxiv:2210.07546
|
Finely tuning MPI applications and understanding the influence of key parameters (number of processes, granularity, collective operation algorithms, virtual topology, and process placement) is critical to obtain good performance on supercomputers. Given the high cost of running applications at scale, doing so solely to optimize their performance is particularly expensive. Having inexpensive but faithful predictions of expected performance would be a great help for researchers and system administrators. The methodology we propose decouples the complexity of the platform, which is captured through statistical models of the performance of its main components (MPI communications, BLAS operations), from the complexity of adaptive applications by emulating the application and skipping regular non-MPI parts of the code. We demonstrate the capability of our method with High-Performance Linpack (HPL), the benchmark used to rank supercomputers in the TOP500, which requires careful tuning. We present (1) how the open-source version of HPL can be slightly modified to allow a fast emulation on a single commodity server at the scale of a supercomputer. We then present (2) an extensive (in)validation study that compares simulation with real experiments and demonstrates our ability to consistently predict the performance of HPL within a few percent. This study allows us to identify the main modeling pitfalls (e.g., spatial and temporal node variability, or network heterogeneity and irregular behavior) that need to be considered. Last, we show (3) how our "surrogate" allows studying several subtle HPL parameter optimization problems while accounting for uncertainty on the platform.
|
arxiv:2102.07674
|
According to Kelvin, a point pressure source uniformly traveling over the surface of deep calm water leaves behind a universal wake pattern confined within a $39^{\circ}$ sector and consisting of the so-called transverse and diverging wavefronts. Actual ship wakes differ in their appearance both from each other and from Kelvin's prediction. The difference can be attributed to a deviation from the point-source limit and, for a given shape of the disturbance, is quantified by the Froude number $F$. We show that within linear theory the effect of an arbitrary disturbance on the wake pattern can be mimicked by an effective pressure distribution. Further, the resulting wake patterns are qualitatively different depending on whether water-piercing is present or not ("sharp" vs "smooth" disturbances). For smooth pressure sources, we generalize Kelvin's stationary phase argument to encompass finite-size effects and classify the resulting wake patterns. Specifically, we show that there exist two characteristic Froude numbers, $F_1$ and $F_2 > F_1$, such that the wake is only present if $F \gtrsim F_1$. For $F_1 \lesssim F \lesssim F_2$, the wake consists of the transverse wavefronts confined within a sector of an angle that may be smaller than Kelvin's. An additional $39^{\circ}$ wake made of both the transverse and diverging wavefronts is found for $F \gtrsim F_2$. If the pressure source has a sharp boundary, the wake is always present and features additional interference effects. Specifically, for a constant-pressure line-segment source mimicking a slender ship, the wake pattern can be understood as due to two opposing effective wakes resembling (but not identical to) Kelvin's and originating at the segment's ends.
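The $39^{\circ}$ figure quoted above is twice the Kelvin half-angle $\arcsin(1/3) \approx 19.47^{\circ}$, a classical consequence of the deep-water dispersion relation; a one-line stdlib check:

```python
import math

# Kelvin half-angle for deep-water gravity waves: arcsin(1/3).
half_angle = math.degrees(math.asin(1.0 / 3.0))
full_wedge = 2.0 * half_angle
print(round(half_angle, 2), round(full_wedge, 2))  # 19.47 38.94
```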
|
arxiv:1902.01884
|
We have studied the optical and electrical spectra from an n-i-p LED as a function of magnetic field. This sample incorporated three GaAs quantum wells in the intrinsic region. The device had excess n-type doping and, as a result, the quantum wells were populated by a two-dimensional Landau electron gas. The broad $B = 0$ emission band evolved into a series of discrete features in the presence of a magnetic field. These were identified as interband transitions between Landau levels with different values of $l$ associated with the sub-bands, subject to the selection rule. An energy splitting between the two polarised components was observed for each Landau-level transition. This was found to be equal to the sum of the conduction- and valence-band spin splittings. We used the known value of the electron g-factor to determine the valence-band spin splittings. Our experimental values were compared to the numerically calculated values and were found to be in reasonable agreement.
|
arxiv:2101.00937
|
In the setting of continuous maps between compact orientable manifolds of the same dimension, there is a well-known averaging formula for the coincidence Lefschetz number in terms of the Lefschetz numbers of lifts to some finite covering space. We state and prove an analogous averaging formula for the coincidence Reidemeister trace. This generalizes a recent formula in fixed point theory by Liu and Zhao. We give two separate and independent proofs of our main result: one using methods developed by Kim and the first author for averaging Nielsen numbers, and one using an axiomatic approach for the local Reidemeister trace. We also give some examples and state some open questions for the nonorientable case.
|
arxiv:1610.09035
|
Tipping is a phenomenon in multistable systems where small changes in inputs cause large changes in outputs. When a parameter varies on a certain time scale, the rate of variation affects the tipping behaviors. These behaviors are undesirable in thermoacoustic systems, which are widely used in aviation, power generation and other industries. Thus, this paper considers the tipping behaviors of a thermoacoustic system with time-varying parameters and combined excitations of additive and multiplicative colored noises. Transient dynamical behaviors for the proposed thermoacoustic model are obtained through the reduced Fokker-Planck-Kolmogorov equation derived by a standard stochastic averaging method. Then, the tipping problems of rate-dependent thermoacoustic systems with random fluctuations are studied by virtue of the obtained probability density functions. Our results show that the rate delays the value of the tipping parameter compared to the one under the quasi-steady assumption, which we call a rate-dependent tipping-delay phenomenon. Besides, the influences of the initial values, the rate, the changing time of the parameters, and the correlation time of the noises on the rate-dependent tipping-delay phenomenon are analyzed in detail. These results are of great significance for research in related fields such as aviation and land gas turbines.
|
arxiv:2001.08987
|
To improve the signal-to-interference ratio (SIR) and make better use of the file diversity provided by random caching, we consider two types of linear receivers, i.e., the maximal ratio combining (MRC) receiver and the partial zero forcing (PZF) receiver, at users in a large-scale cache-enabled single-input multi-output (SIMO) network. First, for each receiver, by utilizing tools from stochastic geometry, we derive a tractable expression and a tight upper bound for the successful transmission probability (STP). In the case of the MRC receiver, we also derive a closed-form expression for the asymptotic outage probability in the low-SIR-threshold regime. Then, for each receiver, we maximize the STP. In the case of the MRC receiver, we consider the maximization of the tight upper bound on the STP by optimizing the caching distribution, which is a non-convex problem. We obtain a stationary point by solving an equivalent difference-of-convex (DC) programming problem using the concave-convex procedure (CCCP). We also obtain a closed-form asymptotically optimal solution in the low-SIR-threshold regime. In the case of the PZF receiver, we consider the maximization of the tight upper bound on the STP by optimizing the caching distribution and the degrees-of-freedom (DoF) allocation (for boosting the signal power), which is a mixed discrete-continuous problem. Based on structural properties, we obtain a low-complexity near-optimal solution by using an alternating optimization approach. The analysis and optimization results reveal the impact of antenna resources at users on random caching. Finally, by numerical results, we show that the random caching design with the PZF receiver achieves significant performance gains over the random caching design with the MRC receiver and some baseline caching designs.
|
arxiv:1801.02743
|
This paper studies the classification of high-dimensional Gaussian signals from low-dimensional noisy, linear measurements. In particular, it provides upper bounds (sufficient conditions) on the number of measurements required to drive the probability of misclassification to zero in the low-noise regime, both for random measurements and designed ones. Such bounds reveal two important operational regimes that are a function of the characteristics of the source: i) when the number of classes is less than or equal to the dimension of the space spanned by signals in each class, reliable classification is possible in the low-noise regime by using a one-vs-all measurement design; ii) when the dimension of the spaces spanned by signals in each class is lower than the number of classes, reliable classification is guaranteed in the low-noise regime by using a simple random measurement design. Simulation results with both synthetic and real data show that our analysis is sharp, in the sense that it is able to gauge the number of measurements required to drive the misclassification probability to zero in the low-noise regime.
|
arxiv:1607.02801
|
The organizers of this meeting have asked me to present perspectives of nuclear physics. This means identifying the areas where nuclear physics will be expanding in the near future. In six chapters a short overview of these areas will be given, where I expect that nuclear physics will develop quite fast: A. quantum chromodynamics and effective field theories in the confinement region; B. nuclear structure at the limits; C. high-energy heavy-ion collisions; D. nuclear astrophysics; E. neutrino physics; F. tests of physics beyond the standard model by rare processes. After a survey of these six points I will pick out a few topics to treat in more detail; there is no time to give detailed examples for all six. I shall discuss the following examples of the six topics mentioned above: 1. the perturbative chiral quark model and the nucleon $\sigma$-term, 2. VAMPIR (variation after mean-field projection in realistic model spaces and with realistic forces) as an example of the nuclear structure renaissance, 3. measurement of important astrophysical nuclear reactions in the Gamow peak, 4. the solar neutrino problem. As examples of testing new physics beyond the standard model by rare processes I had prepared to speak about the measurement of the electric neutron dipole moment and of neutrinoless double beta decay, but time is limited and so I have to skip these points, although they are extremely interesting.
|
arxiv:nucl-th/0212030
|
Meal preparation is an important instrumental activity of daily living (IADL). While existing research has explored robotic assistance in meal preparation tasks such as cutting and cooking, the crucial task of peeling has received less attention. Robot-assisted peeling, conventionally a bimanual task, is challenging to deploy in the homes of care recipients using two wheelchair-mounted robot arms due to ergonomic and transferring challenges. This paper introduces a robot-assisted peeling system utilizing a single robotic arm and an assistive cutting board, inspired by the way individuals with one functional hand prepare meals. Our system incorporates a multimodal active perception module to determine whether an area on the food is peeled, a human-in-the-loop long-horizon planner to perform task planning while catering to a user's preference for peeling coverage, and a compliant controller to peel the food items. We demonstrate the system on 12 food items representing the extremes of different shapes, sizes, skin thicknesses, surface textures, skin vs flesh colors, and deformability.
|
arxiv:2404.06570
|
Crowdsourcing platforms enable companies to propose tasks to a large crowd of users. The workers receive compensation for their work according to the seriousness of the tasks they manage to accomplish. The evaluation of the quality of responses obtained from the crowd remains one of the most important problems in this context. Several methods have been proposed to estimate the expertise level of crowd workers. We propose an innovative measure of expertise assuming that we possess a dataset with an objective comparison of the items concerned. Our method is based on the definition of four factors within the theory of belief functions. We compare our method to the Fagin distance on a dataset from a real experiment, where users have to assess the quality of some audio recordings. Then, we propose to fuse both the Fagin distance and our expertise measure.
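The abstract does not spell out its four belief-function factors; as background, the standard fusion operator in the theory of belief functions is Dempster's rule of combination, sketched below. The frame of discernment ("expert" vs "novice") and the mass values are our illustrative assumptions, not the paper's:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal elements
    are frozensets; mass falling on the empty intersection (the
    'conflict') is renormalized away."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two sources rating a worker as expert (E) or novice (N); EN = ignorance.
E, N, EN = frozenset("E"), frozenset("N"), frozenset("EN")
m1 = {E: 0.6, EN: 0.4}          # source 1: mostly expert, some ignorance
m2 = {E: 0.5, N: 0.3, EN: 0.2}  # source 2: mixed evidence
fused = dempster_combine(m1, m2)
print({''.join(sorted(s)): round(v, 3) for s, v in fused.items()})
# → {'E': 0.756, 'N': 0.146, 'EN': 0.098}
```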
|
arxiv:1907.10588
|
In this paper, we study the discrete Morse flow for the Ricci flow on the football, which is the 2-sphere with the north and south poles removed and with the metric $g_0$ of constant scalar curvature, and for the porous medium equation on a bounded regular domain in the plane. We show that, with a suitable assumption about $g(0)$, we have a weak approximated discrete Morse flow for the approximated Ricci flow and the porous medium equation on any time interval.
|
arxiv:1203.2225
|
We study the distant red galaxy (DRG, $J - K_s > 2.3$) neighbour population of quasi-stellar objects (QSOs) selected from the Sloan Digital Sky Survey (SDSS) in the redshift range $1 < z < 2$. We perform a similar analysis for optically obscured AGNs (i.e. with a limiting magnitude $I > 24$) detected in the mid-infrared (24 $\mu$m) with the Spitzer Space Telescope at a mean redshift $z \sim 2.2$ in the FLAMINGOS Extragalactic Survey (FLAMEX). Both the QSO and obscured-AGN target samples cover 4.7 deg$^2$ in the same region of the sky. We find a significant difference in the environments of these two target samples: neighbouring galaxies close to QSOs tend to be bluer than galaxies in optically obscured source environments. We also present results on the cross-correlation function of DRGs around QSOs and optically faint mid-infrared sources. The correlation length obtained for the QSO targets is $r_0 = 5.4 \pm 1.6$ Mpc h$^{-1}$ with a slope of $\gamma = 1.94 \pm 0.10$. For the optically obscured galaxy sample we find $r_0 = 8.9 \pm 1.4$ Mpc h$^{-1}$ and a slope of $\gamma = 2.27 \pm 0.20$. These results indicate that optically faint obscured sources are located in denser environments of evolved red galaxies compared to QSOs.
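Taking the quoted power-law fits at face value, the cross-correlation amplitudes $\xi(r) = (r_0/r)^{\gamma}$ can be compared directly; a small sketch (the evaluation radius of 1 Mpc/h is our illustrative choice):

```python
def xi(r, r0, gamma):
    """Power-law two-point correlation function (r, r0 in Mpc/h)."""
    return (r0 / r) ** gamma

r = 1.0  # Mpc/h
xi_qso = xi(r, r0=5.4, gamma=1.94)  # QSO-DRG fit from the abstract
xi_agn = xi(r, r0=8.9, gamma=2.27)  # obscured-AGN-DRG fit
# The obscured sources cluster several times more strongly at this scale.
print(round(xi_agn / xi_qso, 1))
```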
|
arxiv:astro-ph/0702155
|
In this study, we investigate the convergence rates for the homogenization of elliptic equations with lower-order terms under the spectral gap assumption, in both bounded domains and the entire space. Our analysis demonstrates that lower-order terms significantly affect the convergence rate, particularly in the full space, where the rate changes from \(O(\epsilon)\) (observed without lower-order terms) to \(O(\epsilon^{d/(d+2)})\) due to their influence. In contrast, in bounded domains, the convergence rate remains \(O(\epsilon^{1/2})\), as boundary conditions exert a stronger influence than the lower-order terms. To manage the complexities introduced by lower-order terms, we develop a novel technique that localizes the analysis within small grids, enabling the application of the Poincaré inequality for effective estimates. This work builds upon existing frameworks, offering a refined approach to quantitative homogenization with lower-order terms.
|
arxiv:2410.22726
|
The role of interfacial nonidealities and disorder in thermal transport across interfaces is traditionally assumed to add resistance to heat transfer, decreasing the thermal boundary conductance (TBC).$^1$ However, recent computational works have suggested that interfacial defects can enhance this thermal boundary conductance through the emergence of unique vibrations that are intrinsic to the material interface and defect atoms,$^{2-6}$ a finding that contradicts traditional theory and conventional understanding. By manipulating the local heat flux of the atomic vibrations that comprise these interfacial modes, in principle, the TBC can be increased. In this work, we provide evidence that interfacial defects can enhance the TBC across interfaces through the emergence of unique high-frequency vibrational modes that arise from atomic mass defects at the interface with relatively small masses. We demonstrate ultrahigh TBC at amorphous SiOC:H/SiC:H interfaces, approaching 1 GW m$^{-2}$ K$^{-1}$, that is further increased through the introduction of nitrogen defects. The fact that disordered interfaces can exhibit such high conductances, which can be further increased with additional defects, offers a unique direction for controlling interfacial thermal transport, which becomes important in manipulating heat transfer across materials with high densities of interfaces.
|
arxiv:1710.09440
|
Realizability notions in mathematical logic have a long history, which can be traced back to the work of Stephen Kleene in the 1940s, aimed at exploring the foundations of intuitionistic logic. Kleene's initial realizability laid the groundwork for more sophisticated notions such as Kreisel's modified realizability and various modern approaches. In this context, our work aligns with the lineage of realizability strategies that emphasize the accumulation, rather than the propagation, of precise witnesses. In this paper, we introduce a new notion of realizability, namely herbrandized modified realizability. This novel form of (cumulative) realizability, presented within the framework of semi-intuitionistic logic, is based on a recently developed star combinatory calculus, which enables the gathering of witnesses into nonempty finite sets. We also show that the analysis can be extended from logic to (Heyting) arithmetic.
|
arxiv:2402.16437
|
This article is dedicated to the investigation of difficulties involved in understanding the homomorphism concept. It is not restricted to group theory but, on the contrary, raises the issue of developing teaching strategies aimed at gaining access to structuralist thinking. Emphasis is put on epistemological analysis and its interaction with didactics in an attempt to make abstract algebra more accessible.
|
arxiv:1303.7089
|
Large language models (LLMs) have revolutionized natural language processing (NLP) tasks by achieving state-of-the-art performance across a range of benchmarks. Central to the success of these models is the integration of sophisticated architectural components aimed at improving training stability, convergence speed, and generalization capabilities. Among these components, normalization operations, such as layer normalization (LayerNorm), emerge as a pivotal technique, offering substantial benefits to overall model performance. However, previous studies have indicated that normalization operations can substantially elevate processing latency and energy usage. In this work, we adopt the principles of algorithm and hardware co-design, introducing a holistic normalization accelerating method named HAAN. The evaluation results demonstrate that HAAN can achieve significantly better hardware performance compared to state-of-the-art solutions.
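For reference, the LayerNorm operation that HAAN targets normalizes each token's feature vector to zero mean and unit variance before an affine rescale; a minimal NumPy sketch of the reference computation (HAAN's hardware-side approximations are not reproduced here):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize over the last (feature) axis, then rescale.
    The per-token mean/variance reductions are the latency- and
    energy-hungry part that normalization accelerators target."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, 8 features
y = layer_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(np.allclose(y.mean(axis=-1), 0.0, atol=1e-6))  # True
```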
|
arxiv:2502.11832
|
We investigate the problem of mapping, through Morita equivalence, odd-dimensional noncommutative lattice gauge theories onto suitable matrix models. We specialize our analysis to noncommutative three-dimensional QED (NCQED) and scalar QED (NCSQED), for which we explicitly build the corresponding matrix model.
|
arxiv:hep-th/0211060
|
a neuroprosthesis, as one type of precision medicine device, aims to manipulate neuronal signals of the brain in a closed-loop fashion, while receiving stimulus from the environment and controlling some part of our brain/body. in terms of vision, incoming information can be processed by the brain on millisecond timescales. the retina computes visual scenes and then sends its output as neuronal spikes to the cortex for further computation. therefore, the neuronal signal of interest for retinal neuroprosthesis is the spike. closed-loop computation in neuroprosthesis includes two stages: encoding stimulus into neuronal signal, and decoding neuronal signal back into stimulus. here we review some of the recent progress on visual computation models that use spikes for analyzing natural scenes, including static images and dynamic movies. we hypothesize that for a better understanding of computational principles in the retina, one needs a hypercircuit view of the retina, in which the different functional network motifs revealed in the cortical neuronal network should be taken into consideration for the retina. the different building blocks of the retina, including a diversity of cell types and synaptic connections, whether chemical synapses or electrical synapses (gap junctions), make the retina an ideal neuronal network for adapting the computational techniques developed in artificial intelligence to the modeling of encoding/decoding visual scenes. altogether, one needs a systems approach of visual computation with spikes to advance the next generation of retinal neuroprosthesis as an artificial visual system.
|
arxiv:2001.04064
|
multimodal contrastive pretraining, exemplified by models like clip, has been found to be vulnerable to backdoor attacks. while current backdoor defense methods primarily employ conventional data augmentation to create augmented samples aimed at feature alignment, these methods fail to capture the distinct features of backdoor samples, resulting in suboptimal defense performance. observations reveal that adversarial examples and backdoor samples exhibit similarities in the feature space within the compromised models. building on this insight, we propose adversarial backdoor defense ( abd ), a novel data augmentation strategy that aligns features with meticulously crafted adversarial examples. this approach effectively disrupts the backdoor association. our experiments demonstrate that abd provides robust defense against both traditional uni - modal and multimodal backdoor attacks targeting clip. compared to the current state - of - the - art defense method, cleanclip, abd reduces the attack success rate by 8. 66 % for badnet, 10. 52 % for blended, and 53. 64 % for badclip, while maintaining a minimal average decrease of just 1. 73 % in clean accuracy.
|
arxiv:2409.15968
|
drug discovery is the process of identifying compounds which have potentially meaningful biological activity. a major challenge that arises is that the number of compounds to search over can be quite large, sometimes numbering in the millions, making experimental testing intractable. for this reason computational methods are employed to filter out those compounds which do not exhibit strong biological activity. this filtering step, also called virtual screening reduces the search space, allowing for the remaining compounds to be experimentally tested. in this paper we propose several novel approaches to the problem of virtual screening based on canonical correlation analysis ( cca ) and on a kernel - based extension. spectral learning ideas motivate our proposed new method called indefinite kernel cca ( ikcca ). we show the strong performance of this approach both for a toy problem as well as using real world data with dramatic improvements in predictive accuracy of virtual screening over an existing methodology.
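the linear building block of the approach, classical cca, can be sketched via a whitened-cross-covariance svd; this illustrates plain cca only, not the paper's spectral-learning-motivated indefinite kernel extension (ikcca):

```python
import numpy as np

def cca(X, Y, eps=1e-8):
    # canonical correlation analysis: whiten each block, then take the
    # singular values of the cross-covariance, which are the canonical
    # correlations between the two views
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))   # inverse "square root" of Cxx
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    return np.clip(np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False), 0.0, 1.0)

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))   # shared latent signal across both views
X = np.hstack([z + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 2))])
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 2))])
rho = cca(X, Y)   # first canonical correlation is close to 1
```

in the virtual-screening setting the two views would be, e.g., chemical descriptors and biological activity profiles; the shared latent signal plays the role of the common structure cca extracts.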
|
arxiv:1202.3302
|
there is an increasing demand for sentiment analysis of text from social media, much of which is code-mixed. systems trained on monolingual data fail on code-mixed data due to the complexity of mixing at different levels of the text. however, very few resources are available for code-mixed data to create models specific to this data. although much research in multilingual and cross-lingual sentiment analysis has used semi-supervised or unsupervised methods, supervised methods still perform better. only a few datasets for popular languages such as english-spanish, english-hindi, and english-chinese are available. there are no resources available for malayalam-english code-mixed data. this paper presents a new gold standard corpus for sentiment analysis of code-mixed text in malayalam-english, annotated by voluntary annotators. this gold standard corpus achieved a krippendorff's alpha above 0.8 for the dataset. we use this new corpus to provide a benchmark for sentiment analysis in malayalam-english code-mixed texts.
|
arxiv:2006.00210
|
this work analyzes the gompertz - pareto distribution ( gpd ) of personal income, formed by the combination of the gompertz curve, representing the overwhelming majority of the economically less favorable part of the population of a country, and the pareto power law, which describes its tiny richest part. equations for the lorenz curve, gini coefficient and the percentage share of the gompertzian part relative to the total income are all written in this distribution. we show that only three parameters, determined by linear data fitting, are required for its complete characterization. consistency checks are carried out using income data of brazil from 1981 to 2007 and they lead to the conclusion that the gpd is consistent and provides a coherent and simple analytical tool to describe personal income distribution data.
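the gini coefficient used in this analysis can be computed from sample incomes via the empirical lorenz curve; the sketch below is a generic nonparametric computation, not the paper's three-parameter gpd fit:

```python
import numpy as np

def gini(incomes):
    # gini = 1 - 2 * (area under the empirical lorenz curve),
    # with the area computed by the trapezoidal rule
    x = np.sort(np.asarray(incomes, dtype=float))
    lorenz = np.concatenate([[0.0], np.cumsum(x) / x.sum()])
    p_step = 1.0 / (len(lorenz) - 1)
    area = np.sum((lorenz[1:] + lorenz[:-1]) / 2.0) * p_step
    return 1.0 - 2.0 * area

g_equal = gini([1.0] * 1000)                 # perfect equality -> 0
g_concentrated = gini([0.0] * 999 + [1.0])   # extreme concentration -> near 1
```

in the gpd picture, the gompertz segment shapes the bulk of the lorenz curve while the pareto tail controls its behaviour near the richest percentiles.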
|
arxiv:1010.1994
|
extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria ( in the archaebacteria kingdom ), a term that has fallen out of use. archaeal cells have unique properties separating them from the other two domains, bacteria and eukaryota. archaea are further divided into multiple recognized phyla. archaea and bacteria are generally similar in size and shape, although a few archaea have very different shapes, such as the flat and square cells of haloquadratum walsbyi. 
despite this morphological similarity to bacteria, archaea possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably for the enzymes involved in transcription and translation. other aspects of archaeal biochemistry are unique, such as their reliance on ether lipids in their cell membranes, including archaeols. archaea use more energy sources than eukaryotes : these range from organic compounds, such as sugars, to ammonia, metal ions or even hydrogen gas. salt - tolerant archaea ( the haloarchaea ) use sunlight as an energy source, and other species of archaea fix carbon, but unlike plants and cyanobacteria, no known species of archaea does both
|
https://en.wikipedia.org/wiki/Biology
|
what have become known as the " darmois " and " lichnerowicz " junction conditions are often stated to be equivalent, " essentially " equivalent, in a " sense " equivalent, and so on. one even sees not infrequent reference to the " darmois - lichnerowicz " conditions. whereas the equivalence of these conditions is manifest in gaussian - normal coordinates, a fact that has been known for close to a century, this equivalence does not extend to a loose definition of " admissible " coordinates ( coordinates in which the metric and its first order derivatives are continuous ). we show this here by way of a simple, but physically relevant, example. in general, a loose definition of the " lichnerowicz " conditions gives additional restrictions, some of which simply amount to a convenient choice of gauge, and some of which amount to real physical restrictions, away from strict " admissible " coordinates. the situation was totally confused by a very influential, and now frequently misquoted, paper by bonnor and vickers, that erroneously claimed a proof of the equivalence of the " darmois " and " lichnerowicz " conditions within this loose definition of " admissible " coordinates. a correct proof, based on a strict definition of " admissible " coordinates, was given years previous by israel. it is that proof, generally unrecognized, that we must refer to. attention here is given to a clarification of the subject, and to the history of the subject, which, it turns out, is rather fascinating in itself.
|
arxiv:1705.01090
|
seidel-smith and hendricks used equivariant floer cohomology to define some spectral sequences from symplectic khovanov homology and heegaard floer homology. these spectral sequences give rise to smith-type inequalities. similar-looking spectral sequences have been defined by lee, bar-natan, ozsv\'ath-szab\'o, lipshitz-treumann, szab\'o, sarkar-seed-szab\'o, and others. in this paper we give another construction of equivariant floer cohomology with respect to a finite group action and use it to prove some invariance properties of these spectral sequences; prove that some of these spectral sequences agree; improve hendricks's smith-type inequalities; give some theoretical and practical computability results for these spectral sequences; define some new spectral sequences conjecturally related to sarkar-seed-szab\'o's; and introduce a new concordance homomorphism and concordance invariants. we also digress to prove invariance of manolescu's reduced symplectic khovanov homology.
|
arxiv:1510.02449
|
a two dimensional eigenvalue problem (2devp) of a hermitian matrix pair $(A, C)$ is introduced in this paper. the 2devp can be viewed as a linear algebraic formulation of the well-known eigenvalue optimization problem for the parameter matrix $H(\mu) = A - \mu C$. we present fundamental properties of the 2devp, such as the existence of solutions, a necessary and sufficient condition for there to be finitely many 2d-eigenvalues, and variational characterizations. we use two eigenvalue optimization problems, the minmax of two rayleigh quotients and the computation of the distance to instability, to show their connections with the 2devp and the new insights into these problems that derive from its properties.
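the eigenvalue optimization problem over the parameter matrix can be illustrated numerically with a brute-force grid search; this toy sketch is for intuition only and is not the paper's 2devp machinery:

```python
import numpy as np

def largest_eig(A, C, mu):
    # largest eigenvalue of the hermitian parameter matrix H(mu) = A - mu*C
    # (eigvalsh returns eigenvalues in ascending order, so take the last)
    return np.linalg.eigvalsh(A - mu * C)[-1]

# toy pair: the eigenvalue curves are 1 - mu and 3 + mu, crossing at mu = -1
A = np.diag([1.0, 3.0])
C = np.diag([1.0, -1.0])
mus = np.linspace(-5.0, 5.0, 2001)
vals = np.array([largest_eig(A, C, m) for m in mus])
mu_star = mus[np.argmin(vals)]   # minimizer of the max-eigenvalue curve
```

the minimum sits at the nonsmooth crossing point where two eigenvalue curves coalesce, which is the kind of behaviour the 2devp formulation is designed to capture linearly.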
|
arxiv:1911.08109
|
. in 1998, ccs was incorporated into the university of edinburgh ' s school of informatics. = = binding problem in cognitive science = = one of the core aims of cognitive science is to achieve an integrated theory of cognition. this requires integrative mechanisms explaining how the information processing that occurs simultaneously in spatially segregated ( sub - ) cortical areas in the brain is coordinated and bound together to give rise to coherent perceptual and symbolic representations. one approach is to solve this " binding problem " ( that is, the problem of dynamically representing conjunctions of informational elements, from the most basic perceptual representations ( " feature binding " ) to the most complex cognitive representations, like symbol structures ( " variable binding " ) ), by means of integrative synchronization mechanisms. in other words, one of the coordinating mechanisms appears to be the temporal ( phase ) synchronization of neural activity based on dynamical self - organizing processes in neural networks, described by the binding - by - synchrony ( bbs ) hypothesis from neurophysiology. connectionist cognitive neuroarchitectures have been developed that use integrative synchronization mechanisms to solve this binding problem in perceptual cognition and in language cognition. in perceptual cognition the problem is to explain how elementary object properties and object relations, like the object color or the object form, can be dynamically bound together or can be integrated to a representation of this perceptual object by means of a synchronization mechanism ( " feature binding ", " feature linking " ). 
in language cognition the problem is to explain how semantic concepts and syntactic roles can be dynamically bound together or can be integrated to complex cognitive representations like systematic and compositional symbol structures and propositions by means of a synchronization mechanism ( " variable binding " ) ( see also the " symbolism vs. connectionism debate " in connectionism ). however, despite significant advances in understanding the integrated theory of cognition ( specifically the binding problem ), the debate on this issue of beginning cognition is still in progress. from the different perspectives noted above, this problem can be reduced to the issue of how organisms at the simple reflexes stage of development overcome the threshold of the environmental chaos of sensory stimuli : electromagnetic waves, chemical interactions, and pressure fluctuations. the so - called primary data entry ( pde ) thesis poses doubts about the ability of such an organism to overcome this cue threshold on its own. in
|
https://en.wikipedia.org/wiki/Cognitive_science
|
we consider the time evolution of two entropy - like quantities, the holographic entanglement entropy and causal holographic information, in a model of holographic thermalization dual to the gravitational collapse of a thin planar shell. unlike earlier calculations valid in different limits, we perform a full treatment of the dynamics of the system, varying both the shell ' s equation of state and initial position. in all cases considered, we find that between an early period related to the acceleration of the shell and a late epoch of saturation towards the thermal limit, the entanglement entropy exhibits universal linear growth in time in accordance with the prediction of liu and suh. as intermediate steps of our analysis, we explicitly construct a coordinate system continuous at the location of an infinitely thin shell and derive matching conditions for geodesics and extremal surfaces traversing this region.
|
arxiv:1405.7015
|
we present measurements of branching fractions and cp - violating asymmetries in b0 - > rho + - pi - + and b0 - > rho - k + decays. the results are obtained from a data sample of 88. 9 10 ^ 6 upsilon ( 4s ) - > b bbar decays collected with the babar detector at the pep - ii asymmetric - energy b factory at slac. from a time - dependent maximum likelihood fit we measure the charge - averaged branching fractions b ( b0 - > rho + - pi - + ) = ( 22. 6 + - 1. 8 ( stat ) + - 2. 2 ( syst ) ) 10 ^ ( - 6 ) and b ( b0 - > rho - k + ) = ( 7. 3 + 1. 3 - 1. 2 + - 1. 3 ) 10 ^ ( - 6 ) ; and the cp - violating charge asymmetries acp ( rho pi ) = - 0. 18 + - 0. 08 + - 0. 03 and acp ( rho k ) = 0. 28 + - 0. 17 + - 0. 08 ; the direct cp violation parameter c ( rho pi ) = 0. 36 + - 0. 18 + - 0. 04 and the mixing - induced cp violation parameter s ( rho pi ) = 0. 19 + - 0. 24 + - 0. 03 ; and the dilution parameters dc ( rho pi ) = 0. 28 + 0. 18 - 0. 19 + - 0. 04 and ds ( rho pi ) = 0. 15 + - 0. 25 + - 0. 03.
|
arxiv:hep-ex/0306030
|
for any $N \ge 2$ and $\alpha = (\alpha_1, \cdots, \alpha_{N+1}) \in (0, \infty)^{N+1}$, let $\mu^{(N)}_{\alpha}$ be the dirichlet distribution with parameter $\alpha$ on the set $\Delta^{(N)} := \{x \in [0,1]^N : \ \sum_{1 \le i \le N} x_i \le 1\}.$ the multivariate dirichlet diffusion is associated with the dirichlet form $$\mathscr{E}_\alpha^{(N)}(f, f) := \sum_{n=1}^N \int_{\Delta^{(N)}} \bigg(1 - \sum_{1 \le i \le N} x_i\bigg) x_n (\partial_n f)^2(x) \, \mu^{(N)}_\alpha(dx)$$ with domain $\mathscr{D}(\mathscr{E}_\alpha^{(N)})$ being the closure of $C^1(\Delta^{(N)})$. we prove the nash inequality $$\mu_\alpha^{(N)}(f^2) \le C \mathscr{E}_\alpha^{(N)}(f, f)^{\frac{p}{p+1}} \mu_\alpha^{(N)}(|f|)^{\frac{2}{p+1}}, \ \ f \in \mathscr{D}(\mathscr{E}_\alpha^{(N)}), \ \mu_\alpha^{(N)}(f) = 0$$ for some constant $C > 0$ and $p = (\alpha_{N+1} - 1)^+ + \sum_{i=1}^N 1 \lor (2\alpha_i),$ where the constant $p$ is sharp when $\max_{1 \le i \le N} \alpha_i \le 1/2
|
arxiv:1801.09209
|
a systematic study of the electronic properties of single layer sb ( antimonene ) nanoribbons is presented. by using a 6 - orbital tight - binding hamiltonian, we study the electronic band structure of finite ribbons with zigzag or armchair termination. we show that there is good agreement between ab initio calculations and the tight - binding model. we study how the size of the gap can be controlled by applying an external bias potential. an electric field applied perpendicular to the antimonene layer is found to increase the band gap, while a transverse bias potential leads to a position dependent reduction of the band gap. both kinds of bias potential break inversion symmetry of the crystal. this, together with the strong intrinsic spin - orbit coupling of antimonene, leads to spin - splitting of the valence band states.
|
arxiv:1807.04597
|
observations of the polarization of the cosmic microwave backround ( cmb ) have the potential to place much tighter constraints on cosmological parameters than observations of the fluctuations in temperature alone. we discuss using cmb polarization to constrain parameters relevant for distinguishing among popular models for cosmological inflation, using the map and planck satellite missions as example cases. of particular interest is the ability to detect tiny contributions to the cmb anisotropy from tensor modes, which is fundamentally limited by cosmic variance in temperature - only observations. the ability to detect a tensor / scalar ratio $ r \ sim 0. 01 $ would allow precision tests of interesting inflation models, and is possible with a modest increase in sensitivity over that planned for the planck satellite, or potentially by ground - based experiments.
|
arxiv:astro-ph/9806259
|
the security performance of chaos - based image encryption algorithms heavily depends on the complexity of the underlying chaotic system. to enhance encryption effectiveness, it is crucial to design chaotic systems with improved dynamic properties. this paper proposes a novel approach, the 3d cascaded cross - coupling method ( 3d - ccc ), for constructing 3d hyperchaotic systems by combining three one - dimensional chaotic systems, which can be identical or different. using this method, we develop a new 3d hyperchaotic map, 3d - icccls, which exhibits superior chaotic characteristics, including good ergodicity, randomness, positive lyapunov exponents, and high spectral entropy. furthermore, we introduce a color image encryption algorithm based on 3d - icccls. the proposed scheme treats the three color channels as an integrated unit, employing cross - channel bit mixing followed by simultaneous permutation and diffusion. this approach achieves a strong encryption effect in a single round. experimental results demonstrate that the algorithm provides a large key space, high key sensitivity, and strong resistance against common attacks,
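the defining property that makes such maps useful for encryption, sensitive dependence on the key, can be shown with a single 1d logistic map, the kind of seed map a 3d-ccc-style construction would cross-couple; the actual 3d-icccls map is not reproduced here:

```python
def logistic(x, r=3.99):
    # one step of the logistic map, a standard 1d chaotic seed map
    return r * x * (1.0 - x)

def trajectory(x, n, r=3.99):
    traj = []
    for _ in range(n):
        x = logistic(x, r)
        traj.append(x)
    return traj

# two "keys" differing by 1e-10 decorrelate after a short transient
ta = trajectory(0.3, 200)
tb = trajectory(0.3 + 1e-10, 200)
max_gap = max(abs(a - b) for a, b in zip(ta[-50:], tb[-50:]))
```

a positive lyapunov exponent amplifies the 1e-10 perturbation to order one within a few dozen iterations, which is the source of the key sensitivity reported in the abstract.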
|
arxiv:2503.23655
|
the two - pass information bottleneck ( tpib ) based speaker diarization system operates independently on different conversational recordings. tpib system does not consider previously learned speaker discriminative information while diarizing new conversations. hence, the real time factor ( rtf ) of tpib system is high owing to the training time required for the artificial neural network ( ann ). this paper attempts to improve the rtf of the tpib system using an incremental transfer learning approach where the parameters learned by the ann from other conversations are updated using current conversation rather than learning parameters from scratch. this reduces the rtf significantly. the effectiveness of the proposed approach compared to the baseline ib and the tpib systems is demonstrated on standard nist and ami conversational meeting datasets. with a minor degradation in performance, the proposed system shows a significant improvement of 33. 07 % and 24. 45 % in rtf with respect to tpib system on the nist rt - 04eval and ami - 1 datasets, respectively.
|
arxiv:1902.08051
|
we study o ( alpha ^ 2 beta _ 0 ) perturbative corrections to matrix elements entering two - body exclusive decays of the form b - > pi pi, pi k in the qcd factorization formalism, including chirally enhanced power corrections, and discuss the effect of these corrections on direct cp asymmetries, which receive their first contribution at o ( alpha ). we find that the o ( alpha ^ 2 beta _ 0 ) corrections are often as large as the o ( alpha ) corrections. we find large uncertainties due to renormalization scale dependence as well as poor knowledge of the non - perturbative parameters. we assess the effect of the perturbative corrections on the direct cp violation parameters of b - > pi ^ + pi ^ -.
|
arxiv:hep-ph/0504024
|
advancements in sensors, algorithms, and compute hardware have made 3d perception feasible in real time. current methods to compare and evaluate the quality of a 3d model, such as chamfer, hausdorff, and earth - mover ' s distance, are uni - dimensional and have limitations, including an inability to capture coverage, local variations in density and error, and sensitivity to outliers. in this paper, we propose an evaluation framework for point clouds ( empir3d ) that consists of four metrics : resolution to quantify the ability to distinguish between individual parts in the point cloud, accuracy to measure registration error, coverage to evaluate the portion of missing data, and artifact score to characterize the presence of artifacts. through detailed analysis, we demonstrate the complementary nature of each of these dimensions and the improvements they provide compared to the aforementioned uni - dimensional measures. furthermore, we illustrate the utility of empir3d by comparing our metrics with uni - dimensional metrics for two 3d perception applications ( slam and point cloud completion ). we believe that empir3d advances our ability to reason about point clouds and helps better debug 3d perception applications by providing a richer evaluation of their performance. our implementation of empir3d, custom real - world datasets, evaluations on learning methods, and detailed documentation on how to integrate the pipeline will be made available upon publication.
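for concreteness, one of the uni-dimensional baselines the abstract critiques, the symmetric chamfer distance, can be sketched as follows (our illustrative implementation, not empir3d's):

```python
import numpy as np

def chamfer_distance(P, Q):
    # symmetric chamfer distance between point clouds P (N,3) and Q (M,3):
    # mean nearest-neighbour distance in both directions
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
Q = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
cd = chamfer_distance(P, Q)
```

collapsing the comparison to a single scalar like this cannot separate resolution, coverage, and artifacts, which is the motivation for the four complementary metrics proposed above.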
|
arxiv:2306.03660
|
this paper is a shortened version of the full paper that was published in the journal frontiers of psychology in may 2022. in recent decades, the scientific study of consciousness has significantly increased our understanding of this elusive phenomenon. yet, despite critical development in our understanding of the functional side of consciousness, we still lack a fundamental theory regarding its phenomenal aspect. the phenomenal aspect of consciousness is the first - person answer to what it is like question, and it has thus far proved recalcitrant to direct scientific investigation. the question of how the brain, or any cognitive system, can create conscious experience out of neural representations poses a great conundrum to science. naturalistic dualists argue that it is composed of a primitive, private, nonreductive element of reality. illusionists, on the other hand, argue that it is merely a cognitive illusion. we contend that both the dualist and illusionist positions are flawed because they tacitly assume consciousness to be an absolute property that does not depend on the observer. we developed a conceptual and a mathematical argument for a relativistic theory of consciousness in which a system either has or does not have phenomenal consciousness with respect to some observer. according to the theory, phenomenal consciousness is neither private nor delusional, just relativistic. in the frame of reference of the cognitive system, it will be observable ( first - person perspective ) and in other frame of reference it will not ( third - person perspective ). these two cognitive frames of reference are both correct, just as in the case of an observer that claims to be at rest while another will claim that the observer has constant velocity. neither observer position can be privileged, as they both describe the same underlying reality.
|
arxiv:2502.07247
|
a polytopal digraph $ g ( p ) $ is an orientation of the skeleton of a convex polytope $ p $. the possible non - degenerate pivot operations of the simplex method in solving a linear program over $ p $ can be represented as a special polytopal digraph known as an lp digraph. presently there is no general characterization of which polytopal digraphs are lp digraphs, although four necessary properties are known : acyclicity, unique sink orientation ( uso ), the holt - klee property and the shelling property. the shelling property was introduced by avis and moriyama ( 2009 ), where two examples are given in $ d = 4 $ dimensions of polytopal digraphs satisfying the first three properties but not the shelling property. the smaller of these examples has $ n = 7 $ vertices. avis, miyata and moriyama ( 2009 ) constructed for each $ d \ ge 4 $ and $ n \ ge d + 2 $, a $ d $ - polytope $ p $ with $ n $ vertices which has a polytopal digraph which is an acyclic uso that satisfies the holt - klee property, but does not satisfy the shelling property. the construction was based on a minimal such example, which has $ d = 4 $ and $ n = 6 $. in this paper we explore the shelling condition further. first we give an apparently stronger definition of the shelling property, which we then prove is equivalent to the original definition. using this stronger condition we are able to give a more general construction of such families. in particular, we show that given any 4 - dimensional polytope $ p $ with $ n _ 0 $ vertices whose unique sink is simple, we can extend $ p $ for any $ d \ ge 4 $ and $ n \ ge n _ 0 + d - 4 $ to a $ d $ - polytope with these properties that has $ n $ vertices. finally we investigate the strength of the shelling condition for $ d $ - crosspolytopes, for which develin ( 2004 ) has given a complete characterization of lp orientations.
|
arxiv:1110.3078
|
we present a new method for renormalisation group improvement of the effective potential of a quantum field theory with an arbitrary number of scalar fields. the method amounts to solving the renormalisation group equation for the effective potential with the boundary conditions chosen on the hypersurface where quantum corrections vanish. this hypersurface is defined through a suitable choice of a field - dependent value for the renormalisation scale. the method can be applied to any order in perturbation theory and it is a generalisation of the standard procedure valid for the one - field case. in our method, however, the choice of the renormalisation scale does not eliminate individual logarithmic terms but rather the entire loop corrections to the effective potential. it allows us to evaluate the improved effective potential for arbitrary values of the scalar fields using the tree - level potential with running coupling constants as long as they remain perturbative. this opens the possibility of studying various applications which require an analysis of multi - field effective potentials across different energy scales. in particular, the issue of stability of the scalar potential can be easily studied beyond tree level.
|
arxiv:1801.05258
|
we show that any parabolic generating pair of a genus - one hyperbolic 2 - bridge knot group is equivalent to the upper or lower meridian pair. as an application, we obtain a complete classification of the epimorphisms from 2 - bridge knot groups to genus - one hyperbolic 2 - bridge knot groups.
|
arxiv:1508.03793
|
a novel monte carlo technique has been developed to determine lifetimes of excited states in the tens - to - hundreds femtoseconds range. the method is applied to low - energy heavy - ion binary reactions populating nuclei with complex velocity distributions. its relevance is demonstrated in connection with the $ ^ { 18 } $ o ( 7. 0 mev / u ) + $ ^ { 181 } $ ta experiment, performed at ganil with the agata + vamos + paris setup, to study neutron - rich o, c, n,... nuclei. excited states in $ ^ { 17 } $ o and $ ^ { 19 } $ o, with known lifetimes, are used to validate the method over the $ \ sim $ 20 - 400 fs lifetime - sensitivity range. emphasis is given to the unprecedented position resolution provided by $ \ gamma $ - tracking arrays, which turns out to be essential for reaching the required accuracy in doppler - shift correction, at the basis of the detailed analysis of $ \ gamma $ - ray lineshape and resulting state lifetime determination. the technique is anticipated to be an important tool for lifetime investigations in exotic neutron - rich nuclei, produced with intense isol - type beams.
|
arxiv:2012.05180
|
we establish uniform error bounds for the l1 discretization of the caputo derivative of h\"older continuous functions. the result can be understood as: error = (degree of smoothness - order of the derivative). we present an elementary proof and illustrate its optimality with numerical examples.
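the standard l1 scheme being analyzed can be sketched in a few lines; as a sanity check we use $u(t) = t$, for which the piecewise-linear interpolant underlying the scheme, and hence the l1 formula itself, is exact:

```python
import math

def l1_caputo(u, tau, alpha):
    # L1 approximation of the caputo derivative of order alpha in (0, 1)
    # at grid points t_n = n * tau, given samples u[0], ..., u[N]
    N = len(u) - 1
    c = tau ** (-alpha) / math.gamma(2.0 - alpha)
    b = [(k + 1) ** (1.0 - alpha) - k ** (1.0 - alpha) for k in range(N)]
    return [c * sum(b[k] * (u[n - k] - u[n - k - 1]) for k in range(n))
            for n in range(1, N + 1)]

# exact caputo derivative of u(t) = t is  t^(1 - alpha) / Gamma(2 - alpha)
alpha, tau, N = 0.5, 1e-3, 1000
u = [n * tau for n in range(N + 1)]
approx = l1_caputo(u, tau, alpha)
exact = (N * tau) ** (1.0 - alpha) / math.gamma(2.0 - alpha)
```

for merely hölder continuous $u$ the interpolation error no longer vanishes, which is where the smoothness-minus-order error bound of the paper takes over.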
|
arxiv:2411.10833
|
radio interferometric data are used to estimate the sky brightness distributions in radio frequencies. here we focus on estimators of the large - scale structure and the power spectrum of the sky brightness distribution inferred from radio interferometric observations and assess their efficacy using simulated observations of the model sky. we find that while the large - scale distribution can be unbiasedly estimated from the reconstructed image from the interferometric data, estimates of the power spectrum of the intensity fluctuations calculated from the image are generally biased. the bias is more pronounced for diffuse emission. the visibility based power spectrum estimator, however, gives an unbiased estimate of the true power spectrum. we conclude that for an observation with diffuse emission the reconstructed image can be used to estimate the large - scale distribution of the intensity, while to estimate the power spectrum, visibility based methods should be preferred.
|
arxiv:1805.08398
|
the non - gaussian nature of the epoch of reionization ( eor ) 21 - cm signal has a significant impact on the error variance of its power spectrum $ p ( { \ bf \ textit { k } } ) $. we have used a large ensemble of semi - numerical simulations and an analytical model to estimate the effect of this non - gaussianity on the entire error - covariance matrix $ { \ mathcal { c } } _ { ij } $. our analytical model shows that $ { \ mathcal { c } } _ { ij } $ has contributions from two sources. one is the usual variance for a gaussian random field which scales inversely of the number of modes that goes into the estimation of $ p ( { \ bf \ textit { k } } ) $. the other is the trispectrum of the signal. using the simulated 21 - cm signal ensemble, an ensemble of the randomized signal and ensembles of gaussian random ensembles we have quantified the effect of the trispectrum on the error variance $ { \ mathcal { c } } _ { ij } $. we find that its relative contribution is comparable to or larger than that of the gaussian term for the $ k $ range $ 0. 3 \ leq k \ leq 1. 0 \, { \ rm mpc } ^ { - 1 } $, and can be even $ \ sim 200 $ times larger at $ k \ sim 5 \, { \ rm mpc } ^ { - 1 } $. we also establish that the off - diagonal terms of $ { \ mathcal { c } } _ { ij } $ have statistically significant non - zero values which arise purely from the trispectrum. this further signifies that the error in different $ k $ modes are not independent. we find a strong correlation between the errors at large $ k $ values ( $ \ ge 0. 5 \, { \ rm mpc } ^ { - 1 } $ ), and a weak correlation between the smallest and largest $ k $ values. there is also a small anti - correlation between the errors in the smallest and intermediate $ k $ values. these results are relevant for the $ k $ range that will be probed by the current and upcoming eor 21 - cm experiments.
|
arxiv:1508.00896
|
as with classical statistics, functional regression models are invaluable in the analysis of functional data. while there are now extensive tools with accompanying theory available for linear models, there is still a great deal of work to be done on nonlinear models for functional data. in this work we consider the additive function - on - function regression model, a type of nonlinear model that uses an additive relationship between the functional outcome and functional covariate. we present an estimation methodology built upon reproducing kernel hilbert spaces, and establish optimal rates of convergence for our estimates in terms of prediction error. we also discuss computational challenges that arise with such complex models, developing a representer theorem for our estimate as well as a more practical and computationally efficient approximation. simulations and an application to cumulative intraday returns around the 2008 financial crisis are also provided.
|
arxiv:1708.03372
|
the numerical implementation of a finite element discretization method for the stream function formulation of the linearized navier - stokes equations is considered. algorithm 1 is applied using the argyris element. three global orderings of nodes are selected and examined in order to determine which yields the best banded structure of the matrix, and a fluid flow calculation with a known solution is used as a test problem. visualizations of global node orderings, matrix sparsity patterns and stream function contours are displayed, showing the main features of the flow.
|
arxiv:math/0406070
|
we study the spacetime obtained by superimposing two equal aichelburg - sexl shock waves in d dimensions traveling, head - on, in opposite directions. considering the collision in a boosted frame, one shock becomes stronger than the other, and a perturbative framework to compute the metric in the future of the collision is set up. the geometry is given, in first order perturbation theory, as an integral solution, in terms of initial data on the null surface where the strong shock has support. we then extract the radiation emitted in the collision by using a d - dimensional generalisation of the landau - lifshitz pseudo - tensor and compute the percentage of the initial centre of mass energy epsilon emitted as gravitational waves. in d = 4 we find epsilon = 25. 0 %, in agreement with the result of d ' eath and payne. as d increases, this percentage increases monotonically, reaching 40. 0 % in d = 10. our result is always within the bound obtained from apparent horizons by penrose, in d = 4, yielding 29. 3 %, and eardley and giddings, in d > 4, which also increases monotonically with dimension, reaching 41. 2 % in d = 10. we also present the waveforms and provide a physical interpretation for the observed peaks, in terms of the null generators of the shocks.
|
arxiv:1105.2298
|
abstraction in mathematics is the process of extracting the underlying structures, patterns or properties of a mathematical concept, removing any dependence on real world objects with which it might originally have been connected, and generalizing it so that it has wider applications or matching among other abstract descriptions of equivalent phenomena. in other words, to be abstract is to remove context and application. two of the most highly abstract areas of modern mathematics are category theory and model theory. = = description = = many areas of mathematics began with the study of real world problems, before the underlying rules and concepts were identified and defined as abstract structures. for example, geometry has its origins in the calculation of distances and areas in the real world, and algebra started with methods of solving problems in arithmetic. abstraction is an ongoing process in mathematics and the historical development of many mathematical topics exhibits a progression from the concrete to the abstract. for example, the first steps in the abstraction of geometry were historically made by the ancient greeks, with euclid ' s elements being the earliest extant documentation of the axioms of plane geometry — though proclus tells of an earlier axiomatisation by hippocrates of chios. in the 17th century, descartes introduced cartesian co - ordinates which allowed the development of analytic geometry. further steps in abstraction were taken by lobachevsky, bolyai, riemann and gauss, who generalised the concepts of geometry to develop non - euclidean geometries. later in the 19th century, mathematicians generalised geometry even further, developing such areas as geometry in n dimensions, projective geometry, affine geometry and finite geometry. finally felix klein ' s " erlangen program " identified the underlying theme of all of these geometries, defining each of them as the study of properties invariant under a given group of symmetries. 
this level of abstraction revealed connections between geometry and abstract algebra. in mathematics, abstraction can be advantageous in the following ways : it reveals deep connections between different areas of mathematics. known results in one area can suggest conjectures in another related area. techniques and methods from one area can be applied to prove results in other related areas. patterns from one mathematical object can be generalized to other similar objects in the same class. on the other hand, abstraction can also be disadvantageous in that highly abstract concepts can be difficult to learn. a degree of mathematical maturity and experience may be needed for conceptual assimilation of abstractions. bertrand russell, in the scientific outlook ( 1931 ), writes that " ordinary language is totally unsuit
|
https://en.wikipedia.org/wiki/Abstraction_(mathematics)
|
we derive the next order correction to the dirac exchange energy for the free electron gas in a box with zero boundary conditions in the thermodynamic limit. the correction is of the order of the surface area of the box, and comes from three different contributions : ( i ) a real - space boundary layer, ( ii ) a boundary - condition - induced small shift of fermi momentum and bulk density, and ( iii ) a long - range electrostatic finite - size correction. moreover we show that the lda, in addition to capturing the bulk term exactly, also produces a correction of the correct order but not the correct size. gga corrections are found to be capable of capturing the surface term exactly, provided the gradient enhancement factor satisfies a simple explicit integral constraint. for current ggas such as b88 and pbe we find that the new constraint is not satisfied and the size of the surface correction is overestimated by about ten percent. the new constraint might thus be of interest for the design of future exchange functionals.
|
arxiv:2303.11370
|
microbeam radiation therapy ( mrt ) utilizes coplanar synchrotron radiation beamlets and is a proposed treatment approach for several tumour diagnoses that currently have poor clinical treatment outcomes, such as gliosarcomas. prescription dose estimations for treating preclinical gliosarcoma models in mrt studies at the imaging and medical beamline at the australian synchrotron currently rely on monte carlo ( mc ) simulations. the steep dose gradients associated with the $50\,\mu$m wide coplanar beamlets present a significant challenge for precise mc simulation of the mrt irradiation treatment field in a short time frame. much research has been conducted on fast dose estimation methods for clinically available treatments. however, such methods, including gpu monte carlo implementations and machine learning ( ml ) models, are unavailable for novel and emerging cancer radiation treatment options like mrt. in this work, the successful application of a fast and accurate machine learning dose prediction model in a retrospective preclinical mrt rodent study is presented for the first time. the ml model predicts the peak doses in the path of the microbeams and the valley doses between them, delivered to the gliosarcoma in rodent patients. the predictions of the ml model show excellent agreement with low - noise mc simulations, especially within the investigated tumour volume. this agreement is despite the ml model being deliberately trained with mc - calculated samples exhibiting significantly higher statistical uncertainties. the successful use of high - noise training set data samples, which are much faster to generate, encourages and accelerates the transfer of the ml model to different treatment modalities for other future applications in novel radiation cancer therapies.
|
arxiv:2212.05659
|
fuzzy logic programming is a growing declarative paradigm aiming to integrate fuzzy logic into logic programming. one of the most difficult tasks when specifying a fuzzy logic program is determining the right weights for each rule, as well as the most appropriate fuzzy connectives and operators. in this paper, we introduce a symbolic extension of fuzzy logic programs in which some of these parameters can be left unknown, so that the user can easily see the impact of their possible values. furthermore, given a number of test cases, the most appropriate values for these parameters can be automatically computed.
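the idea of leaving a rule weight symbolic and instantiating it from test cases can be illustrated with a toy sketch. the rule, the product t - norm and the truth values below are all made up for illustration ; the paper's framework is far more general ( symbolic connectives as well as weights ) :

```python
# hypothetical rule with a symbolic weight w (product t - norm assumed):
#   good_hotel(X) <-[w]- close(X) AND cheap(X)
def rule_truth(w, close, cheap):
    return w * (close * cheap)

test_cases = [  # (close, cheap, expected truth of the head) - made-up data
    (0.9, 0.8, 0.60),
    (0.6, 0.9, 0.45),
    (0.3, 0.4, 0.10),
]

def error(w):
    # squared deviation of the rule's output from the test cases
    return sum((rule_truth(w, c, ch) - t) ** 2 for c, ch, t in test_cases)

# instantiate the symbolic weight with the value best fitting the test cases
best_w = min((i / 100 for i in range(101)), key=error)
print(best_w)  # around 0.83
```

the grid search stands in for the automatic computation of parameter values mentioned in the abstract.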
|
arxiv:1608.04688
|
in the 20th century, as a result of scientific progress and the second industrial revolution, technology stopped being considered a distinct academic discipline and took on the meaning : the systemic use of knowledge to practical ends. = = history = = = = = prehistoric = = = tools were initially developed by hominids through observation and trial and error. around 2 mya ( million years ago ), they learned to make the first stone tools by hammering flakes off a pebble, forming a sharp hand axe. this practice was refined 75 kya ( thousand years ago ) into pressure flaking, enabling much finer work. the discovery of fire was described by charles darwin as " possibly the greatest ever made by man ". archaeological, dietary, and social evidence point to " continuous [ human ] fire - use " at least 1. 5 mya. fire, fueled with wood and charcoal, allowed early humans to cook their food to increase its digestibility, improving its nutrient value and broadening the number of foods that could be eaten. the cooking hypothesis proposes that the ability to cook promoted an increase in hominid brain size, though some researchers find the evidence inconclusive. archaeological evidence of hearths was dated to 790 kya ; researchers believe this is likely to have intensified human socialization and may have contributed to the emergence of language. other technological advances made during the paleolithic era include clothing and shelter. no consensus exists on the approximate time of adoption of either technology, but archaeologists have found archaeological evidence of clothing 90 - 120 kya and shelter 450 kya. as the paleolithic era progressed, dwellings became more sophisticated and more elaborate ; as early as 380 kya, humans were constructing temporary wood huts. clothing, adapted from the fur and hides of hunted animals, helped humanity expand into colder regions ; humans began to migrate out of africa around 200 kya, initially moving to eurasia. 
= = = neolithic = = = the neolithic revolution ( or first agricultural revolution ) brought about an acceleration of technological innovation, and a consequent increase in social complexity. the invention of the polished stone axe was a major advance that allowed large - scale forest clearance and farming. this use of polished stone axes increased greatly in the neolithic but was originally used in the preceding mesolithic in some areas such as ireland. agriculture fed larger populations, and the transition to sedentism allowed for the simultaneous raising of more children, as infants no longer needed to be carried around by nomads. additionally, children could contribute labor to the raising of crops
|
https://en.wikipedia.org/wiki/Technology
|
in the problem of online load balancing on uniformly related machines with bounded migration, jobs arrive online one after another and have to be immediately placed on one of a given set of machines without knowledge about jobs that may arrive later on. each job has a size and each machine has a speed, and the load due to a job assigned to a machine is obtained by dividing the first value by the second. the goal is to minimize the maximum overall load any machine receives. however, unlike in the pure online case, each time a new job arrives it contributes a migration potential equal to the product of its size and a certain migration factor. this potential can be spent to reassign jobs either right away ( non - amortized case ) or at any later time ( amortized case ). semi - online models of this flavor have been studied intensively for several fundamental problems, e. g., load balancing on identical machines and bin packing, but uniformly related machines have not been considered up to now. in the present paper, the classical doubling strategy on uniformly related machines is combined with migration to achieve an $(8/3 + \varepsilon)$ - competitive algorithm and a $(4 + \varepsilon)$ - competitive algorithm with $O(1/\varepsilon)$ amortized and non - amortized migration, respectively, while the best known competitive ratio in the pure online setting is roughly $5.828$.
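the doubling idea underlying such algorithms can be sketched as follows. migration, amortization and the exact constants from the paper are omitted ; this is only the classical guess - and - double skeleton on related machines, with assumptions chosen for brevity ( first job used for the initial guess, factor 2 slack ) :

```python
def online_doubling(jobs, speeds):
    """assign jobs online, keeping a guess of the optimal makespan.

    each job goes to the fastest machine whose load would stay within
    2 * guess; if none fits, the guess is doubled and we retry."""
    loads = [0.0] * len(speeds)
    guess = jobs[0] / max(speeds)  # lower bound from the first job
    order = sorted(range(len(speeds)), key=lambda i: -speeds[i])
    for size in jobs:
        while True:
            for m in order:        # try machines from fastest to slowest
                if loads[m] + size / speeds[m] <= 2 * guess:
                    loads[m] += size / speeds[m]
                    break
            else:                  # no machine fits: the guess was too small
                guess *= 2
                continue
            break
    return loads

print(online_doubling([4.0, 2.0, 2.0], [2.0, 1.0]))  # [4.0, 0.0]
```

the competitive algorithms in the paper refine this skeleton by migrating previously placed jobs, paid for by the accumulated migration potential.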
|
arxiv:2209.00565
|
$q$ - breathers are exact time - periodic solutions of extended nonlinear systems continued from the normal modes of the corresponding linearized system. they are localized in the space of normal modes. the existence of these solutions in a weakly anharmonic atomic chain explained essential features of the fermi - pasta - ulam ( fpu ) paradox. we study $q$ - breathers in one -, two - and three - dimensional discrete nonlinear schr\"{o}dinger ( dnls ) lattices - - theoretical playgrounds for light propagation in nonlinear optical waveguide networks, and the dynamics of cold atoms in optical lattices. we prove the existence of these solutions for weak nonlinearity. we find that the localization of $q$ - breathers is controlled by a single parameter which depends on the norm density, nonlinearity strength and seed wave vector. at a critical value of that parameter $q$ - breathers delocalize via resonances, signaling a breakdown of the normal mode picture and a transition into a strong mode - mode interaction regime. in particular this breakdown takes place at one of the edges of the normal mode spectrum, and in a singular way also in the center of that spectrum. a stability analysis of $q$ - breathers supplements these findings. for three - dimensional lattices, we find $q$ - breather vortices, which violate time reversal symmetry and generate a vortex ring flow of energy in normal mode space.
|
arxiv:0801.1055
|
spin photocurrents generated by homogeneous optical excitation with circularly polarized radiation in quantum wells ( qws ) are reviewed. the absorption of circularly polarized light results in optical spin orientation due to the transfer of the angular momentum of photons to electrons of a two - dimensional electron gas ( 2deg ). it is shown that in quantum wells belonging to one of the gyrotropic crystal classes a non - equilibrium spin polarization of uniformly distributed electrons causes a directed motion of electron in the plane of the qw. a characteristic feature of this electric current, which occurs in unbiased samples, is that it reverses its direction upon changing the radiation helicity from left - handed to right - handed and vice versa. two microscopic mechanisms are responsible for the occurrence of an electric current linked to a uniform spin polarization in a qw : the spin polarization induced circular photogalvanic effect and the spin - galvanic effect. in both effects the current flow is driven by an asymmetric distribution of spin polarized carriers in k - space of systems with lifted spin degeneracy due to k - linear terms in the hamiltonian. spin photocurrents provide methods to investigate spin relaxation and to conclude on the in - plane symmetry of qws. the effect can also be utilized to develop fast detectors to determine the degree of circular polarization of a radiation beam. furthermore spin photocurrents at infrared excitation were used to demonstrate and investigate monopolar spin orientation of free carriers.
|
arxiv:cond-mat/0304266
|
lower a posteriori error bounds obtained using the standard bubble function approach are reviewed in the context of anisotropic meshes. a numerical example is given that clearly demonstrates that the short - edge jump residual terms in such bounds are not sharp. hence, for linear finite element approximations of the laplace equation in polygonal domains, a new approach is employed to obtain essentially sharper lower a posteriori error bounds and thus to show that the upper error estimator in the recent paper [ n. kopteva, numer. math., 137 ( 2017 ), 607 - 642 ] is efficient on certain anisotropic meshes.
|
arxiv:1906.05703
|
turbulent and vortical flows are ubiquitous and their characterization is crucial for the understanding of several natural and industrial processes. among different techniques to study spatio - temporal flow fields, complex networks represent a recent and promising tool to deal with the large amount of data on turbulent flows and shed light on their physical mechanisms. the aim of this review is to bring together the main findings achieved so far from the application of network - based techniques to study turbulent and vortical flows. a critical discussion on the potentialities and limitations of the network approach is provided, thus giving an ordered portrayal of the current diversified literature. the present review can boost future network - based research on turbulent and vortical flows, promoting the establishment of complex networks as a widespread tool for turbulence analysis.
|
arxiv:2011.01639
|
it is demonstrated that hypersurfaces with a flat centroaffine metric are governed by a system of nonlinear pdes known as the equations of associativity of 2 - dimensional topological field theory.
|
arxiv:math/0205248
|
we are conducting a project aimed at surveys and repeated observations of red variables ( or long - period variables ) in globular clusters. using the irsf / sirius near - infrared facility located in south africa, we are observing 145 globular clusters that are accessible from the site. in this contribution, we present our observations and preliminary results. we have discovered many red variables, especially in the bulge region, whose membership in the clusters remains to be confirmed. using a sample of all red variables ( both already known and newly discovered ones ) in globular clusters except those projected onto the bulge region, we produce a log p - k diagram and compare it with those for the bulge and the large magellanic cloud. a prominent feature is that the bright part of the overtone - pulsators ' sequence ( b + and c ' ) is absent.
|
arxiv:astro-ph/0509714
|
we propose a new shared task of semantic retrieval from legal texts, in which a so - called contract discovery is to be performed, where legal clauses are extracted from documents, given a few examples of similar clauses from other legal acts. the task differs substantially from conventional nli and shared tasks on legal information extraction ( e. g., one has to identify text span instead of a single document, page, or paragraph ). the specification of the proposed task is followed by an evaluation of multiple solutions within the unified framework proposed for this branch of methods. it is shown that state - of - the - art pretrained encoders fail to provide satisfactory results on the task proposed. in contrast, language model - based solutions perform better, especially when unsupervised fine - tuning is applied. besides the ablation studies, we addressed questions regarding detection accuracy for relevant text fragments depending on the number of examples available. in addition to the dataset and reference results, lms specialized in the legal domain were made publicly available.
|
arxiv:1911.03911
|
bounded irreducible local siegel disks include classical siegel disks of polynomials, bounded irreducible siegel disks of rational and entire functions, and the examples of herman and moeckel. we show that there are only two possibilities for the structure of the boundary of such a disk : either the boundary admits a nice decomposition onto a circle, or it is an indecomposable continuum.
|
arxiv:math/9210225
|
for a free presentation $0 \to r \to f \to g \to 0$ of a leibniz algebra $g$, the baer invariant ${\cal m}^{\sf lie}(g) = \frac{r \cap [f,f]_{lie}}{[f,r]_{lie}}$ is called the schur multiplier of $g$ relative to the liezation functor, or schur lie - multiplier. for a two - sided ideal $n$ of a leibniz algebra $g$, we construct a four - term exact sequence relating the schur lie - multipliers of $g$ and $g/n$, which is applied to study and characterize lie - nilpotency, lie - stem covers and lie - capability of leibniz algebras.
|
arxiv:1703.07148
|
this paper formulates and studies a novel algorithm for federated learning from large collections of local datasets. this algorithm capitalizes on an intrinsic network structure that relates the local datasets via an undirected " empirical " graph. we model such big data over networks using a networked linear regression model. each local dataset has individual regression weights. the weights of close - knit sub - collections of local datasets are enforced to deviate only little. this leads naturally to a network lasso problem which we solve using a primal - dual method. we obtain a distributed federated learning algorithm via a message passing implementation of this primal - dual method. we provide a detailed analysis of the statistical and computational properties of the resulting federated learning algorithm.
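the networked regression model can be illustrated with a toy sketch. here a quadratic coupling penalty stands in for the non - smooth network - lasso term, and plain gradient descent stands in for the primal - dual method ; four nodes form two clusters and all numbers are made up :

```python
# four local datasets (x_i, y_i) on a graph with two tight clusters
data = [(1.0, 2.0), (1.0, 2.1), (1.0, -1.0), (1.0, -1.1)]
edges = [(0, 1), (2, 3)]   # "empirical" graph: edges within clusters
lam = 1.0                  # coupling strength
w = [0.0] * len(data)      # one regression weight per local dataset

for _ in range(500):       # gradient descent on the penalized objective
    grad = [2.0 * (w[i] * x - y) * x for i, (x, y) in enumerate(data)]
    for i, j in edges:     # coupling pulls neighbouring weights together
        grad[i] += 2.0 * lam * (w[i] - w[j])
        grad[j] += 2.0 * lam * (w[j] - w[i])
    w = [wi - 0.05 * g for wi, g in zip(w, grad)]

print(w)  # weights pool toward 2.05 and -1.05 within the clusters
```

the actual network lasso replaces the quadratic penalty with a sum of norms of weight differences, which forces exact pooling within clusters rather than mere shrinkage.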
|
arxiv:2010.14159
|
stars can be either disrupted as tidal disruption events ( tdes ) or swallowed as a whole by massive black holes ( mbhs ) at galactic centers when they approach sufficiently close to these mbhs. in this work, we investigate the correlations of such stellar consumption rates with both the mbh mass $m_{\rm bh}$ and the inner slope of the host galaxy mass density distribution $\alpha$. we introduce a simplified analytical power - law model with a power - law stellar mass density distribution surrounding mbhs and separate the contributions of two - body relaxation and stellar orbital precession for the stellar orbital angular momentum evolution in nonspherical galaxy potentials. the stellar consumption rates derived from this simplified model agree well with the numerical results obtained with a more realistic treatment of stellar distributions and dynamics around mbhs, providing an efficient way to estimate tde rates. the origin of the correlations of stellar consumption rates with $m_{\rm bh}$ and $\alpha$ is explained by the dependence of this analytical model on those mbh / host galaxy properties and by the separation of the stellar angular momentum evolution mechanisms. we propose that the strong positive correlation between the rates of stellar consumption due to two - body relaxation and $\alpha$ provides one interpretation for the overrepresentation of tdes found in some rare e + a / poststarburst galaxies. we find high tde rates for giant stars, up to those for solar - type stars. the understanding of the origin of the correlations of the stellar consumption rates will be necessary for obtaining the demographics of mbhs and their host galaxies via tdes.
|
arxiv:2306.10996
|
high - throughput molecular profiling technologies have produced high - dimensional multi - omics data, enabling systematic understanding of living systems at the genome scale. studying molecular interactions across different data types helps reveal signal transduction mechanisms across different classes of molecules. in this paper, we develop a novel bayesian representation learning method that infers the relational interactions across multi - omics data types. our method, bayesian relational learning ( bayrel ) for multi - omics data integration, takes advantage of a priori known relationships among the same class of molecules, modeled as a graph at each corresponding view, to learn view - specific latent variables as well as a multi - partite graph that encodes the interactions across views. our experiments on several real - world datasets demonstrate enhanced performance of bayrel in inferring meaningful interactions compared to existing baselines.
|
arxiv:2010.05895
|
in this short note we define a new cohomology for a lie algebroid $\mathcal{A}$, which we call the \emph{twisted cohomology} of $\mathcal{A}$ by an odd cocycle $\theta$ in the lie algebroid cohomology of $\mathcal{A}$. we prove that this cohomology only depends on the lie algebroid cohomology class $[\theta]$ of the odd cocycle $\theta$. we give a few examples showing that this new cohomology encompasses various well - known cohomology theories.
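a hedged sketch of the construction this likely follows, in analogy with twisted de rham cohomology ( the differential below is an assumption, not a quote from the note ) : given the lie algebroid differential $d$ and an odd cocycle $\theta$, set

```latex
d_\theta \omega \;=\; d\omega + \theta \wedge \omega .
```

because $\theta$ is odd, $\theta \wedge \theta = 0$, and because $d\theta = 0$, the cross terms cancel and
$d_\theta^2 \omega = d\theta \wedge \omega + \theta \wedge \theta \wedge \omega = 0$,
so $d_\theta$ is a differential and its cohomology is the twisted cohomology. if $\theta' = \theta + d\eta$ with $\eta$ of even degree, multiplication by $e^{\eta}$ intertwines $d_\theta$ and $d_{\theta'}$, which is the mechanism behind the dependence on the class $[\theta]$ only.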
|
arxiv:1706.04482
|
we conduct a pilot study selectively evaluating the cognitive abilities ( decision making and spatial reasoning ) of two recently released generative transformer models, chatgpt and dall - e 2. input prompts were constructed following neutral a priori guidelines, rather than adversarial intent. post hoc qualitative analysis of the outputs shows that dall - e 2 is able to generate at least one correct image for each spatial reasoning prompt, but most images generated are incorrect ( even though the model seems to have a clear understanding of the objects mentioned in the prompt ). similarly, in evaluating chatgpt on the rationality axioms developed under the classical von neumann - morgenstern utility theorem, we find that, although it demonstrates some level of rational decision - making, many of its decisions violate at least one of the axioms even under reasonable constructions of preferences, bets, and decision - making prompts. chatgpt ' s outputs on such problems generally tended to be unpredictable : even as it made irrational decisions ( or employed an incorrect reasoning process ) for some simpler decision - making problems, it was able to draw correct conclusions for more complex bet structures. we briefly comment on the nuances and challenges involved in scaling up such a ' cognitive ' evaluation or conducting it with a closed set of answer keys ( ' ground truth ' ), given that these models are inherently generative and open - ended in responding to prompts.
|
arxiv:2302.09068
|
in this work, ag0 nanoparticles ( nps ) were synthesized and detected by flow - injection analysis coupled to collinear dual - beam thermal lens spectrometric ( tls ) detection. the estimated limit of detection was 0. 8 microgram / l. the use of the ionpac cryptand g1 column enabled ag0 nps detection in the presence of interfering ions normally present in water. ag0 nanofluids ( nfs ) were further characterized by time - resolved tls and beam deflection spectrometry to determine the thermal diffusivity and conductivity of the nfs. the applied methods were found to be fast, simple, reliable, and highly sensitive.
|
arxiv:2503.21355
|
controller tuning is a labor - intensive process that requires human intervention and expert knowledge. bayesian optimization has been applied successfully in different fields to automate this process. however, when tuning on hardware, such as in automotive applications, strict safety requirements often arise. to obtain safety guarantees, many existing safe bayesian optimization methods rely on assumptions that are hard to verify in practice. this leads to the use of unjustified heuristics in many applications, which invalidates the theoretical safety guarantees. furthermore, applications often require multiple safety constraints to be satisfied simultaneously. building on recently proposed lipschitz - only safe bayesian optimization, we develop an algorithm that relies on readily interpretable assumptions and satisfies multiple safety constraints at the same time. we apply this algorithm to the problem of automatically tuning a trajectory - tracking controller of a self - driving car. results both from simulations and an actual test vehicle underline the algorithm ' s ability to learn tracking controllers without leaving the track or violating any other safety constraints.
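the lipschitz - only safety certificate at the heart of such methods can be sketched in a few lines. this is a one - dimensional toy with made - up constants ; the actual algorithm combines this certificate with bayesian optimization and tracks several constraints :

```python
L = 2.0          # assumed lipschitz constant of the constraint function
threshold = 0.0  # safety requirement: constraint value must stay <= 0
evaluated = [(-1.0, -3.0), (0.5, -1.5)]  # (input, measured constraint value)

def certified_safe(x):
    """x is provably safe if some evaluated point (xe, c) certifies it:
    the constraint can rise by at most L * |x - xe| away from xe."""
    return any(c + L * abs(x - xe) <= threshold for xe, c in evaluated)

# with multiple constraints, require a certificate for each one separately
candidates = [i / 10 for i in range(-25, 26)]
safe_set = [x for x in candidates if certified_safe(x)]
print(min(safe_set), max(safe_set))
```

only inputs inside the certified safe set are ever evaluated on hardware, which is what makes the guarantee hold without further assumptions beyond the lipschitz bound.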
|
arxiv:2501.12969
|
cryptoassets such as cryptocurrencies and tokens are increasingly traded on decentralized exchanges. the advantage for users is that the funds are not in the custody of a centralized external entity. however, these exchanges are prone to manipulative behavior. in this paper, we illustrate how wash trading activity can be identified on two of the first popular limit order book - based decentralized exchanges on the ethereum blockchain, idex and etherdelta. we identify a lower bound of accounts and trading structures that meet the legal definitions of wash trading, discovering that they are responsible for a wash trading volume equivalent to 159 million u. s. dollars. while self - trades and two - account structures are predominant, complex forms also occur. we quantify these activities, finding that on both exchanges, more than 30 % of all traded tokens have been subject to wash trading activity. on etherdelta, 10 % of the tokens have almost exclusively been wash traded. all data is made available for future research. our findings underpin the need for countermeasures that are applicable in decentralized systems.
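the two predominant structures mentioned above can be flagged with a simple filter. the trade list is a toy example ; the detection in the paper also handles longer cycles and the precise legal volume definitions :

```python
from collections import defaultdict

trades = [  # (buyer, seller, volume) - toy data, not real exchange records
    ("a", "a", 10.0),                  # self - trade
    ("b", "c", 5.0), ("c", "b", 5.0),  # two - account round trip
    ("d", "e", 7.0),                   # ordinary trade
]

self_trades = [t for t in trades if t[0] == t[1]]

# signed net volume per account pair: a pure round trip nets to zero
net = defaultdict(float)
for buyer, seller, vol in trades:
    if buyer != seller:
        pair = tuple(sorted((buyer, seller)))
        net[pair] += vol if buyer == pair[0] else -vol

round_trips = [pair for pair, n in net.items() if n == 0.0]
wash_volume = sum(v for _, _, v in self_trades) + sum(
    v for b, s, v in trades
    if b != s and tuple(sorted((b, s))) in round_trips)
print(wash_volume)  # 10 (self - trade) + 5 + 5 (round trip) = 20
```

in the paper this kind of filter yields the reported lower bound, since only structures that provably net to zero are counted.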
|
arxiv:2102.07001
|
we study signatures of cosmic superstring networks containing strings of multiple tensions and y - junctions, on the cosmic microwave background ( cmb ) temperature and polarisation spectra. focusing on the crucial role of the string coupling constant $g_s$, we show that the number density and energy density of the scaling network are dominated by different types of string in the $g_s \sim 1$ and $g_s \ll 1$ limits. this can lead to an observable shift in the position of the b - mode peak : a distinct signal leading to a direct constraint on $g_s$. we forecast the joint bounds on $g_s$ and the fundamental string tension $\mu_f$ from upcoming and future cmb polarisation experiments, as well as the signal to noise in detecting the difference between b - mode signals in the limiting cases of large and small $g_s$. we show that such a detectable shift is within reach of planned experiments.
|
arxiv:1105.6198
|
quantum walks are roughly analogous to classical random walks, and like classical walks they have been used to find new ( quantum ) algorithms. when studying the behavior of large graphs or combinations of graphs it is useful to find the response of a subgraph to signals of different frequencies. in so doing we can replace an entire subgraph with a single vertex with frequency dependent scattering coefficients. in this paper a simple technique for quickly finding the scattering coefficients of any quantum graph will be presented. these scattering coefficients can be expressed entirely in terms of the characteristic polynomial of the graph ' s time step operator. moreover, with these in hand we can easily derive the " impulse response " which is the key to predicting the response of a graph to any signal. this gives us a powerful set of tools for rapidly understanding the behavior of graphs or for reducing a large graph into its constituent subgraphs regardless of how they are connected.
|
arxiv:1503.00253
|
graph neural networks ( gnns ) have shown promising results in various tasks, among which link prediction is an important one. gnn models usually follow a node - centric message passing procedure that aggregates the neighborhood information to the central node recursively. following this paradigm, features of nodes are passed through edges without caring about where the nodes are located and which role they played. however, the neglected topological information is shown to be valuable for link prediction tasks. in this paper, we propose structure enhanced graph neural network ( seg ) for link prediction. seg introduces the path labeling method to capture surrounding topological information of target nodes and then incorporates the structure into an ordinary gnn model. by jointly training the structure encoder and deep gnn model, seg fuses topological structures and node features to take full advantage of graph information. experiments on the ogb link prediction datasets demonstrate that seg achieves state - of - the - art results among all three public datasets.
|
arxiv:2201.05293
|
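the abstract above mentions a path labeling method for capturing the topology around two target nodes, but does not specify it. a hedged sketch in the same spirit labels every node by its shortest - path distances to the two targets ( the helper names are assumptions, not the paper ' s method ) :

```python
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path (hop) distances from src over an adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def structural_labels(adj, u, v):
    """Label each node by its distances to the two link-prediction targets;
    -1 marks nodes unreachable from a target."""
    du, dv = bfs_dist(adj, u), bfs_dist(adj, v)
    return {n: (du.get(n, -1), dv.get(n, -1)) for n in adj}
```

such distance pairs are one common way to inject target - aware structure into an otherwise node - centric gnn.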
inspired by the works of hughes [ 17, 18 ], we formalize and prove the well posedness of a hyperbolic - - elliptic system whose solutions describe the dynamics of a moving crowd. the resulting model is here shown to be well posed and the time of evacuation from a bounded environment is proved to be finite. this model also provides a microscopic description of the individuals ' behaviors.
|
arxiv:1610.07450
|
the uniform shear flow for the rarefied gas is governed by the time - dependent spatially homogeneous boltzmann equation with a linear shear force. the main feature of such flow is that the temperature may increase in time due to the shearing motion that induces viscous heat and the system becomes far from equilibrium. for maxwell molecules, we establish the unique existence, regularity, shear - rate - dependent structure and non - negativity of self - similar profiles for any small shear rate. the non - negativity is justified through the large time asymptotic stability even in spatially inhomogeneous perturbation framework, and the exponential rates of convergence are also obtained with the size proportional to the second order shear rate. the analysis supports the numerical result that the self - similar profile admits an algebraic high - velocity tail that is the key difficulty to overcome in the proof.
|
arxiv:2008.02551
|
miner $ \ nu $ a ( main injector experiment $ \ nu $ - a ) is a new few - gev neutrino cross section experiment that began taking data in the fnal numi ( fermi national accelerator laboratory neutrinos at the main injector ) beam - line in march of 2010. miner $ \ nu $ a employs a fine - grained scintillator detector capable of complete kinematic characterization of neutrino interactions. this paper describes the miner $ \ nu $ a data acquisition system ( daq ) including the read - out electronics, software, and computing architecture.
|
arxiv:1209.1120
|
background : the recent explosion of experimental techniques in single molecule biophysics has generated a variety of novel time series data requiring equally novel computational tools for analysis and inference. this article describes in general terms how graphical modeling may be used to learn from biophysical time series data using the variational bayesian expectation maximization algorithm ( vbem ). the discussion is illustrated by the example of single - molecule fluorescence resonance energy transfer ( smfret ) versus time data, where the smfret time series is modeled as a hidden markov model ( hmm ) with gaussian observables. a detailed description of smfret is provided as well. results : the vbem algorithm returns the model ' s evidence and an approximating posterior parameter distribution given the data. the former provides a metric for model selection via maximum evidence ( me ), and the latter a description of the model ' s parameters learned from the data. me / vbem provide several advantages over the more commonly used approach of maximum likelihood ( ml ) optimized by the expectation maximization ( em ) algorithm, the most important being a natural form of model selection and a well - posed ( non - divergent ) optimization problem. conclusions : the results demonstrate the utility of graphical modeling for inference of dynamic processes in single molecule biophysics.
|
arxiv:1009.0857
|
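the hmm - with - gaussian - observables likelihood underlying the abstract above can be sketched with the scaled forward recursion ; vbem itself is considerably more involved, and all names here are assumptions :

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def forward_loglik(obs, pi, trans, emis):
    """Log-likelihood of observations under a Gaussian-observable HMM,
    computed with the scaled forward recursion to avoid underflow.
    pi: initial state probabilities; trans[i][j]: i -> j transition
    probability; emis: one (mu, sigma) pair per hidden state."""
    n = len(pi)
    alpha = [pi[s] * gauss_pdf(obs[0], *emis[s]) for s in range(n)]
    scale = sum(alpha)
    loglik = math.log(scale)
    alpha = [a / scale for a in alpha]
    for x in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * gauss_pdf(x, *emis[j])
                 for j in range(n)]
        scale = sum(alpha)
        loglik += math.log(scale)   # accumulate the log of each rescaling
        alpha = [a / scale for a in alpha]
    return loglik
```

vbem replaces the point estimates implicit here with approximate posterior distributions over the hmm parameters, which is what yields the evidence used for model selection.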
after a brief review of the results of solar, atmospheric and long - baseline neutrino oscillation experiments which led to the current three - neutrino mixing paradigm, we discuss indications of neutrino oscillation experiments in favor of short - baseline oscillations which require the existence of one or more sterile neutrinos. we show that the simplest possibility of existence of one sterile neutrino is not enough to fit all data of short - baseline neutrino oscillation experiments because of two tensions : a tension between neutrino and antineutrino data and a tension between appearance and disappearance data. the tension between neutrino and antineutrino data is eliminated with the addition of a second sterile neutrino which allows cp - violating effects in short - baseline experiments. in this case the tension between appearance and disappearance data is reduced, but cannot be eliminated.
|
arxiv:1106.4479
|
high mass galaxies, with halo masses $ m _ { 200 } \ ge 10 ^ { 10 } m _ { \ odot } $, reveal a remarkable near - linear relation between their globular cluster ( gc ) system mass and their host galaxy halo mass. extending this relation to the mass range of dwarf galaxies has been problematic due to the difficulty in measuring independent halo masses. here we derive new halo masses based on stellar and hi gas kinematics for a sample of nearby dwarf galaxies with gc systems. we find that the gc system mass - - halo mass relation for galaxies populated by gcs holds from halo masses of $ m _ { 200 } \ sim 10 ^ { 14 } m _ { \ odot } $ down to below $ m _ { 200 } $ $ \ sim 10 ^ 9 m _ { \ odot } $, although there is a substantial increase in scatter towards low masses. in particular, three well - studied ultra diffuse galaxies, with dwarf - like stellar masses, reveal a wide range in their gc - to - halo mass ratios. we compare our gc system - - halo mass relation to the recent model of el badry et al., finding that their fiducial model does not reproduce our data in the low mass regime. this may suggest that gc formation needs to be more efficient than assumed in their model, or it may be due to the onset of stochastic gc occupation in low mass halos. finally, we briefly discuss the stellar mass - halo mass relation for our low mass galaxies with gcs, and we suggest some nearby dwarf galaxies for which searches for gcs may be fruitful.
|
arxiv:1809.07831
|
this article presents the results of research into the causes of the gibbs paradox in the formulation discussed by j. w. gibbs himself. in this formulation, we are talking about an inexplicable ( paradoxical ) jump in the entropy of mixing of two ideal gases during the transition from mixing different gases to mixing identical ones. it is shown that the entropy of mixing of different ideal gases and the entropy of mixing of identical ideal gases are different ( non - identical ) functions of the same gas parameters. the so - called paradoxical jump in the entropy of mixing is therefore not a jump in the value of a single function, but the difference between the values of different functions evaluated at the same values of the variables and parameters on which they depend. those who sought an explanation of the original gibbs paradox did not notice this and tried to solve an unsolvable, falsely posed problem : to find a parameter whose change, during the transition from different to identical gases, causes the difference in the values of these non - identical functions.
|
arxiv:2301.00653
|
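as a worked illustration of the entropy - of - mixing function discussed above ( for two different ideal gases at equal temperature and pressure ; for identical gases the corresponding function is zero, which is the non - identity the abstract emphasizes ) :

```python
import math

R = 8.314  # gas constant, J/(mol K)

def mixing_entropy(n1, n2):
    """Entropy of mixing n1 and n2 moles of two *different* ideal gases
    at equal temperature and pressure: -nR * sum(x_i * ln(x_i))."""
    n = n1 + n2
    x1, x2 = n1 / n, n2 / n
    return -n * R * (x1 * math.log(x1) + x2 * math.log(x2))
```

for one mole of each gas this gives 2r ln 2, the familiar per - two - moles mixing entropy ; the point of the abstract is that this expression and the ( identically zero ) identical - gas expression are simply different functions of the same parameters.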
network function virtualization ( nfv ) has the potential to significantly reduce the capital and operating expenses, shorten product release cycle, and improve service agility. in this paper, we focus on minimizing the total number of virtual network function ( vnf ) instances to provide a specific service ( possibly at different locations ) to all the flows in a network. certain network security and analytics applications may allow fractional processing of a flow at different nodes ( corresponding to datacenters ), giving an opportunity for greater optimization of resources. through a reduction from the set cover problem, we show that this problem is np - hard and cannot even be approximated within a factor of ( 1 - o ( 1 ) ) ln ( m ) ( where m is the number of flows ) unless p = np. then, we design two simple greedy algorithms and prove that they achieve an approximation ratio of ( 1 - o ( 1 ) ) ln ( m ) + 2, which is asymptotically optimal. for special cases where each node hosts multiple vnf instances ( which is typically true in practice ), we also show that our greedy algorithms have a constant approximation ratio. further, for tree topologies we develop an optimal greedy algorithm by exploiting the inherent topological structure. finally, we conduct extensive numerical experiments to evaluate the performance of our proposed algorithms in various scenarios.
|
arxiv:1702.01154
|
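the greedy algorithms in the abstract above are not spelled out ; a minimal sketch of the classic greedy set - cover heuristic they are based on ( names and data layout are assumptions ) :

```python
def greedy_vnf_cover(flows_per_node, all_flows):
    """Greedy set-cover heuristic: repeatedly place a VNF instance at the
    node that can process the most still-uncovered flows.
    flows_per_node: dict node -> set of flow ids reachable at that node."""
    uncovered = set(all_flows)
    placed = []
    while uncovered:
        best = max(flows_per_node, key=lambda n: len(flows_per_node[n] & uncovered))
        gain = flows_per_node[best] & uncovered
        if not gain:          # remaining flows cannot be covered by any node
            break
        placed.append(best)
        uncovered -= gain
    return placed
```

this greedy choice is what yields the ( 1 - o ( 1 ) ) ln ( m ) + 2 style approximation guarantee cited in the abstract, matching the set - cover lower bound up to lower - order terms.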
in daugman - style iris recognition, the textures of the left and right irises of the same person are traditionally considered as being as different as the irises of two unrelated persons. however, previous research indicates that humans can detect that two iris images are from different eyes of the same person, or eyes of monozygotic twins, with an accuracy of about 80 %. in this work, we employ a siamese network architecture and contrastive learning to categorize a pair of iris images as coming from monozygotic or non - monozygotic irises. this could potentially be applied, for example, as a fast, noninvasive test to determine if twins are monozygotic or non - monozygotic. we construct a dataset comprising both synthetic monozygotic pairs ( images of different irises of the same individual ) and natural monozygotic pairs ( images of irises from persons who are identical twins ), in addition to non - monozygotic pairs from unrelated individuals, ensuring a comprehensive evaluation of the model ' s capabilities. to gain deeper insights into the learned representations, we train and analyze three variants of the model using ( 1 ) the original input images, ( 2 ) iris - only images, and ( 3 ) non - iris - only images. this comparison reveals the critical importance of iris - specific textural details and contextual ocular cues in identifying monozygotic iris patterns. the results demonstrate that models leveraging full eye - region information outperform those trained solely on iris - only data, emphasizing the nuanced interplay between iris and ocular characteristics. our approach achieves accuracy levels using the full iris image that exceed those previously reported for human classification of monozygotic iris pairs. this study presents the first classifier designed to determine whether a pair of iris images originates from monozygotic individuals.
|
arxiv:2503.09749
|
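the contrastive learning objective mentioned above can be illustrated with the standard pairwise contrastive loss ( hadsell et al. formulation ; this is a generic sketch, not the paper ' s exact loss ) :

```python
def contrastive_loss(distance, is_same, margin=1.0):
    """Pairwise contrastive loss on an embedding distance: pull genuine
    (here, monozygotic) pairs together, and push impostor pairs apart
    until they are at least `margin` away."""
    if is_same:
        return distance ** 2
    return max(0.0, margin - distance) ** 2
```

training a siamese network with this loss shapes the embedding so that a simple distance threshold separates monozygotic from non - monozygotic pairs.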
we present causal amortized active structure learning ( caasl ), an active intervention design policy that can select interventions that are adaptive, real - time and that does not require access to the likelihood. this policy, an amortized network based on the transformer, is trained with reinforcement learning on a simulator of the design environment, and a reward function that measures how close the true causal graph is to a causal graph posterior inferred from the gathered data. on synthetic data and a single - cell gene expression simulator, we demonstrate empirically that the data acquired through our policy results in a better estimate of the underlying causal graph than alternative strategies. our design policy successfully achieves amortized intervention design on the distribution of the training environment while also generalizing well to distribution shifts in test - time design environments. further, our policy also demonstrates excellent zero - shot generalization to design environments with dimensionality higher than that during training, and to intervention types that it has not been trained on.
|
arxiv:2405.16718
|
an intact cuo $ _ 2 $ plane is widely believed to be a prerequisite for the high - $ t _ c $ superconductivity in cuprate superconductors. however, an exception may exist in the superconducting ba $ _ 2 $ cuo $ _ { 3 + \ delta } $ materials where cuo chains play a more important role. from first - principles density functional theory calculations, we have studied the electronic and magnetic structures of ba $ _ 2 $ cuo $ _ { 3 + \ delta } $. the stoichiometric ba $ _ 2 $ cuo $ _ 3 $ and ba $ _ 2 $ cuo $ _ 4 $ contain quasi - one - dimensional cuo chains and intact two - dimensional cuo $ _ 2 $ planes, respectively. in comparison with the nonmagnetic metal ba $ _ 2 $ cuo $ _ 4 $, ba $ _ 2 $ cuo $ _ 3 $ is found to be an antiferromagnetic ( afm ) mott insulator. it possesses a nearest - neighbor intra - chain antiferromagnetic ( afm ) coupling and a weak inter - chain interaction, and its lowest unoccupied band and highest occupied band are contributed by cu 3 $ d _ { b ^ 2 - c ^ 2 } $ - orbital ( or $ d _ { x ^ 2 - y ^ 2 } $ - orbital if we denote the $ bc $ - plane as the $ xy $ - plane ) and o 2 $ p $ - orbitals, respectively. total energy calculations indicate that the oxygen vacancies in ba $ _ 2 $ cuo $ _ { 3 + \ delta } $ prefer to reside in the planar sites rather than the apical oxygens in the cuo chains, in agreement with the experimental observation. furthermore, we find that the magnetic frustrations or spin fluctuations can be effectively induced by moderate charge doping. this suggests that the superconducting pairing in oxygen - enriched ba $ _ 2 $ cuo $ _ { 3 + \ delta } $ or oxygen - deficient ba $ _ 2 $ cuo $ _ { 4 - \ delta } $ is likely to be mainly driven by the afm fluctuations within cuo chains.
|
arxiv:1901.11392
|
the rich order parameter of spin density waves allows for an unusual object of a complex topological nature : a half - integer dislocation combined with a semi - vortex of the staggered magnetization. it becomes energetically preferable to an ordinary dislocation due to enhanced coulomb interactions in the semiconducting regime. generation of these objects changes, e. g., the narrow band noise frequency.
|
arxiv:cond-mat/0004313
|
relations between so - called harness processes and initial enlargements of the filtration of a levy process with its positions at fixed times are investigated.
|
arxiv:math/0406563
|
new instruments and telescopes covering the optical and ultra - violet spectral regions have revealed a range of small - scale dynamic features, many of which may be related. for example, the range of spicule - like features hints towards a spectrum of features and not just two types ; however, direct observational evidence in terms of tracking spicules across multiple wavelengths is needed in order to provide further insight into the dynamics of the sun ' s outer atmosphere. this paper uses h $ \ alpha $ data obtained with the crisp imaging spectropolarimeter instrument on the swedish 1 - m solar telescope, together with transition region data from the interface region imaging spectrograph ( the sji 1400 { \ aa } channel plus spectra in the si iv 1394 { \ aa } line ), to track spicules termed rapid blue - shifted excursions ( rbes ). the rbes as seen in the h $ \ alpha $ blue - wing images presented here can be sub - divided into two categories : single or multi - threaded features. based on the h $ \ alpha $ spectra, the features can be divided into events showing broadening and line core absorption, events showing broadening and line core emission, events with a purely blue - shifted h $ \ alpha $ profile without any absorption in the red wing, and events with a broadened line profile in which the blue - wing absorption is stronger than the red. of the rbe - like events that have a si iv 1394 { \ aa } line profile, 78 % show a si iv line flux increase. most of these features show a second broadened si iv component that is slightly blue - shifted.
|
arxiv:2306.02945
|