text | source
---|---
We provide a long-time existence and sub-convergence result for the elastic flow of a three-network in $\mathbb{R}^n$ under some mild topological assumptions. The evolution is such that the sum of the elastic energies of the three curves plus their weighted lengths decreases in time. Natural boundary conditions are considered at the boundary of the curves and at the triple junction.
|
arxiv:1812.11367
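The energy being dissipated can be made concrete. A standard form of the elastic energy of curves $\gamma_i$ with curvatures $\kappa_i$ and length weights $\lambda_i > 0$ (a generic sketch; the paper's exact normalization may differ) is:

```latex
E(\gamma_1,\gamma_2,\gamma_3) = \sum_{i=1}^{3} \left( \int_{\gamma_i} \kappa_i^{2} \, ds + \lambda_i \, L(\gamma_i) \right),
```

so the monotonicity statement above is that $t \mapsto E(\gamma_1(t),\gamma_2(t),\gamma_3(t))$ is non-increasing along the flow.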
|
We simulate the response of a particle-stabilized emulsion droplet in an external force field, such as gravity, acting equally on all $N$ particles. We show that the field strength required for breakup (at fixed initial area fraction) decreases markedly with droplet size, because the forces act cumulatively, not individually, to detach the interfacial particles. The breakup mode involves the collective destabilization of a solidified particle raft occupying the lower part of the droplet, leading to a critical force per particle that scales approximately as $N^{-1/2}$.
|
arxiv:1203.0410
|
Spatio-temporal receptive fields (STRFs) of visual neurons are often estimated by spike-triggered averaging of binary pseudo-random stimulus sequences. The spike train of a visual neuron is recorded simultaneously with the stimulus presentation. The neuron's STRF is estimated by averaging the stimulus frames that coincide with spikes at fixed latencies. Although this is a widely used technique, an analytical method for determining the statistical significance of the estimated STRF pixel values seems to be lacking. Such a significance test would be useful for identifying the significant features of the STRF and investigating their relationship with experimental variables. Here, the distribution of the estimated STRF pixel values is derived for given spike trains under the null hypothesis that spike occurrences and stimulus values are statistically independent. This distribution is then used to compute amplitude thresholds that determine the STRF pixels at which the null hypothesis can be rejected at a desired two-tailed significance level. It is also proposed that the size of the receptive field may be inferred from the significant pixels. The application of the proposed method is illustrated on spike trains collected from individual mouse retinal ganglion cells.
|
arxiv:2407.16013
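Under such a null hypothesis, a binary ±1 stimulus gives each averaged frame zero mean and unit variance, so an STA pixel is approximately normal with variance $1/n_{\rm spikes}$ for large spike counts. A minimal sketch of the resulting two-tailed threshold test (a Gaussian approximation only; the paper derives the distribution exactly):

```python
from statistics import NormalDist

def sta_threshold(n_spikes: int, alpha: float = 0.05) -> float:
    """Two-tailed amplitude threshold for a spike-triggered-average pixel
    under the null hypothesis that spikes and a binary +/-1 stimulus are
    independent.  Each averaged stimulus value then has mean 0 and unit
    variance, so the STA pixel is approximately N(0, 1/n_spikes)."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return z / n_spikes ** 0.5

def significant_pixels(sta, n_spikes, alpha=0.05):
    """Indices of STA pixels whose magnitude exceeds the threshold."""
    thr = sta_threshold(n_spikes, alpha)
    return [i for i, v in enumerate(sta) if abs(v) > thr]
```

For 100 spikes at the 5% level the threshold is about 0.196, so only pixels with |STA| above that would be kept as receptive-field structure.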
|
Although syntactic information is beneficial for many NLP tasks, combining it with contextual information between words to solve the coreference resolution problem needs further exploration. In this paper, we propose an end-to-end parser that combines pre-trained BERT with a syntactic relation graph attention network (RGAT) to take a deeper look at the role of syntactic dependency information in the coreference resolution task. In particular, the RGAT model is first proposed, then used to understand the syntactic dependency graph and learn better task-specific syntactic embeddings. An integrated architecture incorporating BERT embeddings and syntactic embeddings is constructed to generate blended representations for the downstream task. Our experiments on the public Gendered Ambiguous Pronouns (GAP) dataset show that, with supervised learning of the syntactic dependency graph and without fine-tuning the entire BERT model, we increase the F1-score of the previous best model (RGCN-with-BERT) from 80.3% to 82.5%, and the F1-score of single BERT embeddings from 78.5% to 82.5%. Experimental results on another public dataset, OntoNotes 5.0, demonstrate that the performance of the model is also improved by incorporating syntactic dependency information learned by RGAT.
|
arxiv:2309.04977
|
In this paper we address the issue of Gribov copies in SU(N), N > 2, Euclidean Yang-Mills theories quantized in the maximal Abelian gauge. A few properties of the Gribov region in this gauge are established. As in the SU(2) case, the Gribov region turns out to be convex, bounded along the off-diagonal directions in field space, and unbounded along the diagonal ones. The implementation of the restriction to the Gribov region in the functional integral is discussed through the introduction of the horizon function, whose construction is outlined in detail. The influence of this restriction on the behavior of the gluon and ghost propagators of the theory is also investigated, together with a set of dimension-two condensates.
|
arxiv:1002.1659
|
In synaptic molecular communication, the activation of postsynaptic receptors by neurotransmitters (NTs) is governed by a stochastic reaction-diffusion process and is hence inherently random. It is currently not fully understood how this randomness impacts downstream signaling in the target cell and, ultimately, neural computation and learning. The statistical characterization of the reaction-diffusion process is difficult because the reversible bimolecular reaction of NTs and receptors renders the system nonlinear. Consequently, existing models for the receptor occupancy in the synaptic cleft rely on simplifying assumptions and approximations which limit their practical applicability. In this work, we propose a novel statistical model for the reaction-diffusion process governing synaptic signal transmission in terms of the chemical master equation (CME). We show how to compute the CME efficiently and verify the accuracy of the obtained results with stochastic particle-based computer simulations (PBSs). Furthermore, we compare the proposed model to two benchmark models from the literature and show that it provides more accurate results when compared against PBSs. Finally, the proposed model is used to study the impact of the system parameters on the statistical dependence between binding events of NTs and receptors. In summary, the proposed model is a step towards a complete statistical characterization of synaptic signal transmission.
|
arxiv:2109.14986
|
This paper presents a method for the robust selection of measurements in a simultaneous localization and mapping (SLAM) framework. Existing methods check consistency or compatibility on a pairwise basis; however, many measurement types are not sufficiently constrained in a pairwise scenario to determine whether either measurement is inconsistent with the other. This paper presents group-$k$ consistency maximization (G$k$CM), which estimates the largest set of measurements that is internally group-$k$ consistent. Solving for the largest set of group-$k$ consistent measurements can be formulated as an instance of the maximum clique problem on generalized graphs and can be solved by adapting current methods. This paper evaluates the performance of G$k$CM using simulated data and compares it to the pairwise consistency maximization (PCM) presented in previous work.
|
arxiv:2209.02658
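The clique formulation can be illustrated at toy scale. The sketch below brute-forces the largest internally consistent subset for a user-supplied consistency predicate; for k = 2 this coincides with a maximum clique of the pairwise-consistency graph. The scalar "measurements" and threshold predicate are invented stand-ins, and real G$k$CM uses specialized clique solvers rather than enumeration:

```python
from itertools import combinations

def max_consistent_set(measurements, consistent, k=2):
    """Return the largest subset of measurement indices in which every
    k-element group passes the `consistent` predicate.  For k=2 this is a
    maximum clique on the pairwise-consistency graph.  Exponential-time
    enumeration: feasible only for small toy inputs."""
    n = len(measurements)
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            groups = combinations(subset, min(k, size))
            if all(consistent(*(measurements[i] for i in g)) for g in groups):
                return list(subset)
    return []
```

With scalar measurements and "consistent if within 1.0 of each other", mutually close values form the returned set while outliers are rejected.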
|
In this paper, we propose a malware categorization method that models malware behavior in terms of instructions using PageRank. PageRank computes the ranks of web pages based on structural information; in malware analysis it can likewise compute ranks of instructions that capture the structural information of the instructions. Our malware categorization method uses the computed ranks as features in machine learning algorithms. In the evaluation, we compare the effectiveness of different PageRank algorithms and also investigate bagging and boosting algorithms to improve the categorization accuracy.
|
arxiv:1608.00866
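As a sketch of the feature-extraction step, the following computes PageRank by power iteration over a small instruction-transition graph; treating the resulting rank vector as a classifier feature vector is the idea described above. The three-instruction graph is invented purely for illustration:

```python
def pagerank(graph, damping=0.85, iters=100):
    """Power-iteration PageRank over an adjacency dict {node: [successors]}.
    In the categorization scheme sketched here, nodes would be instructions
    and edges control-flow transitions between them; the rank vector then
    serves as a fixed-length feature vector for a classifier."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, succs in graph.items():
            if succs:
                share = damping * rank[v] / len(succs)
                for w in succs:
                    new[w] += share
            else:  # dangling node: spread its rank uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank
```

On a symmetric cycle of three instructions the ranks are uniform, and the vector sums to 1, as expected of a stationary distribution.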
|
We show that every globally asymptotically stable system with a twice continuously differentiable vector field admits a local polynomial Lyapunov function on an arbitrary bounded neighborhood of the origin.
|
arxiv:1201.3311
|
We present detailed science cases that a large fraction of the Indian AGN community is interested in pursuing with the upcoming Square Kilometre Array (SKA). These interests range from understanding low-luminosity active galactic nuclei in the nearby universe to powerful radio galaxies at high redshifts. Important unresolved science questions in AGN physics are discussed. Ongoing low-frequency surveys with the SKA pathfinder telescope GMRT are highlighted.
|
arxiv:1610.08174
|
Type III radio bursts are intense radio emissions triggered by beams of energetic electrons often associated with solar flares. These exciter beams propagate outwards from the Sun along open magnetic field lines in the corona and in the interplanetary (IP) medium. We performed a statistical survey of 29 simple and isolated IP type III bursts observed by the STEREO/Waves instruments between January 2013 and September 2014. We investigated their time-frequency profiles in order to derive the speed and acceleration of the exciter electron beams. We show that these beams noticeably decelerate in the IP medium. The obtained speeds range from $\sim 0.02c$ up to $\sim 0.35c$ depending on the initial assumptions. This corresponds to electron energies between tens of eV and hundreds of keV; in order to explain the characteristic energies or speeds of type III electrons ($\sim 0.1c$) observed simultaneously with Langmuir waves at 1 au, the emission of type III bursts near the peak should be predominantly at the double plasma frequency. The derived properties of electron beams can be used as input parameters for computer simulations of interactions between the beam and the plasma in the IP medium.
|
arxiv:1507.06874
|
Wavefront sensing is a widely used non-interferometric, single-shot, quantitative technique providing the spatial phase of a beam. The phase is obtained by integrating the measured wavefront gradient. Complex and random wavefields intrinsically contain a high density of singular phase structures (optical vortices) associated with non-conservative gradients, making this integration step especially delicate. Here, using a high-resolution wavefront sensor, we experimentally demonstrate a systematic approach for achieving the complete and quantitative reconstruction of complex wavefronts. Based on Stokes' theorem, we propose an image segmentation algorithm that provides an accurate determination of the charge and location of optical vortices. This technique is expected to benefit several fields requiring complex media characterization.
|
arxiv:2101.07114
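The circulation idea behind Stokes'-theorem-based vortex detection can be sketched discretely: summing wrapped phase differences around a closed pixel loop yields $2\pi$ times the enclosed topological charge. A minimal version on a single 2x2 plaquette (the paper's segmentation machinery is not attempted here):

```python
import math

def wrap(dphi):
    """Wrap a phase difference into (-pi, pi]."""
    return (dphi + math.pi) % (2.0 * math.pi) - math.pi

def vortex_charge(phase, i, j):
    """Topological charge of the 2x2 plaquette with top-left corner (i, j):
    the sum of wrapped phase differences around the closed loop, divided by
    2*pi.  A nonzero integer signals an optical vortex inside the plaquette,
    the discrete counterpart of the circulation in Stokes' theorem."""
    loop = [phase[i][j], phase[i][j + 1], phase[i + 1][j + 1],
            phase[i + 1][j], phase[i][j]]
    circulation = sum(wrap(b - a) for a, b in zip(loop, loop[1:]))
    return round(circulation / (2.0 * math.pi))
```

A phase field winding once around the plaquette center gives charge +1, while a flat phase gives 0.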
|
The $\kappa$-deformation of the (2+1)D anti-de Sitter, Poincaré and de Sitter groups is presented through a unified approach in which the curvature of the spacetime (or the cosmological constant) is treated as an explicit parameter. The Drinfel'd double and the Poisson-Lie structure underlying the $\kappa$-deformation are explicitly given, and the three quantum kinematical groups are obtained as quantizations of the corresponding Poisson-Lie algebras. As a consequence, the non-commutative (2+1)D spacetimes that generalize $\kappa$-Minkowski space to the (anti-)de Sitter ones are obtained. Moreover, non-commutative 4D spaces of (time-like) geodesics can be defined, and they can be interpreted as a novel possibility to introduce non-commutative worldlines. Furthermore, quantum (anti-)de Sitter algebras are presented both in the known basis related to (2+1) quantum gravity and in a new one which generalizes the bicrossproduct basis. In this framework, the quantum deformation parameter is related to the Planck length, and the existence of a kind of "duality" between the cosmological constant and the Planck scale is also envisaged.
|
arxiv:hep-th/0401244
|
We make a connection between the structure of the bidisc and a distinguished subgroup of its automorphism group. The automorphism group of the bidisc, as is well known, is of dimension six and acts transitively. We observe that it contains a subgroup that is isomorphic to the automorphism group of the open unit disc, and this subgroup partitions the bidisc into a complex curve and a family of strongly pseudoconvex hypersurfaces that are non-spherical as CR-manifolds. Our work reverses this process and shows that any $2$-dimensional Kobayashi-hyperbolic manifold, whose automorphism group (which is known, from the general theory, to be a Lie group) has a $3$-dimensional subgroup that is non-solvable (as a Lie group) and that acts on the manifold to produce a collection of orbits possessing essentially the characteristics of the concretely known collection of orbits mentioned above, is biholomorphic to the bidisc. The distinguished subgroup is interesting in its own right. It turns out that if we consider any subdomain of the bidisc that is a union of a proper sub-collection of the orbits mentioned above, then the automorphism group of this subdomain can be expressed very simply in terms of this distinguished subgroup.
|
arxiv:2104.05001
|
, PyMod, TIP-STRUCTFAST, COMPASS, 3D-PSSM, SAM-T02, SAM-T99, HHpred, FUGUE, 3D-JIGSAW, META-PP, Rosetta, and I-TASSER.

==== Protein threading ====

Protein threading can be used when a reliable homologue for the query sequence cannot be found. This method begins by obtaining a query sequence and a library of template structures. Next, the query sequence is threaded over the known template structures. The candidate models are scored using scoring functions based on the potential energy models of both the query and template sequence. The match with the lowest potential energy model is then selected. Methods and servers for retrieving threading data and performing calculations are listed here: GenTHREADER, pGenTHREADER, pDomTHREADER, ORFeus, PROSPECT, BioShell-Threading, FFAS03, RaptorX, HHpred, LOOPP server, SPARKS-X, SEGMER, THREADER2, EsyPred3D, LIBRA, TOPITS, RAPTOR, COTH, MUSTER. For more information on rational design see site-directed mutagenesis.

=== Multivalent binding ===

Multivalent binding can be used to increase the binding specificity and affinity through avidity effects. Having multiple binding domains in a single biomolecule or complex increases the likelihood of other interactions occurring via individual binding events. Avidity, or effective affinity, can be much higher than the sum of the individual affinities, providing a cost- and time-effective tool for targeted binding.

==== Multivalent proteins ====

Multivalent proteins are relatively easy to produce by post-translational modifications or by multiplying the protein-coding DNA sequence. The main advantage of multivalent and multispecific proteins is that they can increase the effective affinity for a target of a known protein. In the case of an inhomogeneous target, using a combination of proteins resulting in multispecific binding can increase specificity, which has high applicability in protein therapeutics. The most common example of multivalent binding are antibodies, and there is extensive research on bispecific antibodies. Applications of bispecific antibodies cover a broad spectrum that includes diagnosis, imaging, prophylaxis, and therapy.

=== Directed evolution ===

In directed evolution, random mutagenesis
|
https://en.wikipedia.org/wiki/Protein_engineering
|
Although numerous simulations have been done to understand the effects of intense bursts of star formation on high surface brightness galaxies, few attempts have been made to understand how localized starbursts would affect both the color and surface brightness of low surface brightness (LSB) galaxies. To remedy this, we have run 53 simulations involving bursts of star formation activity in LSB galaxies, varying both the underlying galaxy properties and the parameters describing the starbursts. We discovered that although changing the total color of a galaxy was fairly straightforward, it was virtually impossible to alter a galaxy's central surface brightness, and thereby remove it from the LSB galaxy classification, without placing a high (and fairly artificial) threshold on the underlying gas density. The primary effect of large amounts of induced star formation was to produce a centralized core (bulge) component which is generally not observed in LSB galaxies. The noisy morphological appearance of LSB galaxies, as well as their noisy surface brightness profiles, can be reproduced by considering small bursts of star formation that are localized within the disk. The trigger mechanism for such bursts is likely distant/weak tidal encounters. The stability of the disk central surface brightness against these periods of star formation argues that the large space density of LSB galaxies at z = 0 should hold to substantially higher redshifts.
|
arxiv:astro-ph/9808359
|
We present a class of solutions of Einstein-Yang-Mills systems with arbitrary gauge groups and space-time dimensions, which are symmetric under the action of the group of spatial rotations. Our approach is based on the dimensional reduction method for gauge and gravitational fields and relates symmetric EYM solutions to certain solutions of two-dimensional Einstein-Yang-Mills-Higgs-dilaton theory. Application of this method to four-dimensional spherically symmetric (pseudo-)Riemannian space-time yields, in particular, new solutions describing both a magnetic and an electric charge at the center of a black hole. Moreover, we give an example of a solution with a non-Abelian gauge group in six-dimensional space-time. We also comment on the stability of the obtained solutions.
|
arxiv:gr-qc/9602006
|
This paper presents a benchmark of stream processing throughput comparing Apache Spark Streaming (under file-, TCP socket-, and Kafka-based stream integration) with a prototype P2P stream processing framework, HarmonicIO. Maximum throughput is measured for a spectrum of stream processing loads, specifically those with large message sizes (up to 10 MB) and heavy CPU loads, which are more typical of scientific computing use cases (such as microscopy) than of enterprise contexts. A detailed exploration of the performance characteristics of these streaming sources, under varying loads, reveals an interplay of performance trade-offs, uncovering the boundaries of good performance for each framework and streaming source integration. We compare with theoretical bounds in each case. Based on these results, we suggest which frameworks and streaming sources are likely to offer good performance for a given load. Broadly, the advantage of Spark's rich feature set comes at a cost of sensitivity to message size in particular: common stream source integrations can perform poorly in the 1 MB to 10 MB range. The simplicity of HarmonicIO offers more robust performance in this region, especially for raw CPU utilization.
|
arxiv:1807.07724
|
We present a methodology for constructing efficient, high-order (in time) accurate numerical schemes for a class of gradient flows with appropriately Lipschitz-continuous nonlinearity. There are several ingredients to the strategy: exponential time differencing (ETD), multi-step (MS) methods, the idea of stabilization, and the technique of interpolation. These are synthesized to develop a generic $k$th-order-in-time efficient linear numerical scheme with the help of an artificial regularization term of the form $A\tau^k \frac{\partial}{\partial t}\mathcal{L}^{p(k)} u$, where $\mathcal{L}$ is the positive definite linear part of the flow and $\tau$ is the uniform time step size. The exponent $p(k)$ is determined explicitly by the strength of the Lipschitz nonlinear term relative to $\mathcal{L}$, together with the desired temporal order of accuracy $k$. To validate our theoretical analysis, the thin-film epitaxial growth without slope selection model is examined with a fourth-order ETD-MS discretization in time and a Fourier pseudo-spectral discretization in space. Our numerical results on convergence and energy stability are in accordance with the theoretical results.
|
arxiv:2102.10988
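To make the ETD ingredient concrete, here is a first-order scalar toy for $u' = -Lu + f(u)$: the linear part is integrated exactly while the (stabilized) nonlinearity is frozen over each step. This only illustrates the mechanism; it is not the paper's $k$th-order ETD-MS scheme, and the constant `a` merely mimics the role of the stabilization parameter:

```python
import math

def etd1(u0, L, f, tau, steps, a=1.0):
    """First-order stabilized exponential-time-differencing sketch for the
    scalar flow u' = -L*u + f(u), with L > 0 the linear coefficient.  The
    shifted nonlinearity g = f(u) + a*u is frozen over each step and the
    augmented linear part (L + a) is integrated exactly, so the underlying
    flow is unchanged.  Illustrative toy, not the paper's ETD-MS method."""
    u = u0
    lam = L + a           # stabilized linear coefficient
    for _ in range(steps):
        g = f(u) + a * u  # shift so the continuous flow is unchanged
        u = math.exp(-lam * tau) * u + (1.0 - math.exp(-lam * tau)) * g / lam
    return u
```

For pure linear decay (f = 0, a = 0) the scheme is exact, and equilibria of the flow are preserved for any stabilization constant.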
|
In this paper, we present a simple baseline for visual grounding for autonomous driving which outperforms state-of-the-art methods while retaining minimal design choices. Our framework minimizes the cross-entropy loss over the cosine distance between multiple image ROI features and a text embedding (representing the given sentence/phrase). We use pre-trained networks to obtain the initial embeddings and learn a transformation layer on top of the text embedding. We perform experiments on the Talk2Car dataset and achieve 68.7% AP50 accuracy, improving upon the previous state of the art by 8.6%. By showing promise in simpler alternatives, our investigation suggests a reconsideration of approaches that employ sophisticated attention mechanisms, multi-stage reasoning, or complex metric learning loss functions.
|
arxiv:2009.06066
|
Coal blending is a critically important process in the coal mining industry, as it directly influences the number of product tonnes and the total revenue generated by a mine site. Coal blending represents a challenging and complex problem with numerous blending possibilities, multiple constraints and competing objectives. At many mine sites, blending decisions are made using heuristics that have been developed through experience, or by using computer-assisted control algorithms or linear programming. While current blending procedures have achieved profitable outcomes in the past, they often result in a sub-optimal utilization of high-quality coal. This sub-optimality has a considerable negative impact on mine site productivity, as it can reduce the amount of lower-quality ROM coal that is blended and sold. This article reviews the coal blending problem and discusses some of the difficult trade-offs and challenges that arise in trying to address it. We highlight some of the risks of making simplifying assumptions and the limitations of current software optimization systems. We conclude by explaining how the mining industry would significantly benefit from research and development into optimization algorithms and technologies that are better able to combine computer optimization capabilities with the important insights of engineers and quality control specialists.
|
arxiv:1405.0276
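The basic tension (selling more lower-quality ROM coal while meeting a quality spec) can be sketched as a toy two-stream blend search. All stream names, numbers, and the single quality constraint here are invented for illustration and bear no relation to a real mine plan:

```python
def best_blend(high, low, min_quality, step=0.01):
    """Toy version of the blending trade-off: mix a high-quality and a
    low-quality coal stream (each a dict with 'tonnes' and 'quality',
    e.g. energy content) so that the tonnage-weighted blend quality meets
    a minimum spec while selling as much of the low stream as possible.
    Returns (fraction_of_low_stream_used, blended_tonnes)."""
    best = (0.0, high["tonnes"])
    n = int(round(1.0 / step))
    for k in range(n + 1):
        x = k * step                  # fraction of the low stream blended in
        t_low = x * low["tonnes"]
        tonnes = high["tonnes"] + t_low
        quality = (high["tonnes"] * high["quality"]
                   + t_low * low["quality"]) / tonnes
        if quality >= min_quality and tonnes > best[1]:
            best = (x, tonnes)
    return best
```

Tightening the quality spec shrinks the fraction of the low stream that can be sold, which is exactly the revenue trade-off described above.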
|
We investigate physical layer security over the N-wave with diffuse power (NWDP) fading model, which is typically encountered in realistic wireless scenarios in the context of millimeter-wave communications and emerging 5G technologies. More specifically, novel exact expressions for the secrecy outage probability (SOP) and a lower bound on the secrecy outage probability are derived in terms of well-known elementary and special functions. Furthermore, we provide useful insights on how increasing the number, the relative amplitudes, and the power of the dominant specular components improves secrecy performance. In this context, we show that it is possible to achieve a very good secrecy performance when: (i) the relative amplitudes of the dominant specular components of the legitimate channel are sufficiently large compared to those of the eavesdropper channel, and (ii) the power of Bob's dominant components is significantly greater than the power of Eve's dominant rays. The validity of the proposed expressions is confirmed via Monte Carlo simulations.
|
arxiv:2002.05206
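The Monte Carlo validation step can be sketched as follows, using simple Rayleigh fading in place of the NWDP model purely to keep the example self-contained; an outage is declared whenever the instantaneous secrecy capacity falls below the target secrecy rate:

```python
import math
import random

def secrecy_outage_prob(snr_bob, snr_eve, rate_s, trials=100_000, seed=1):
    """Monte Carlo estimate of the secrecy outage probability under
    Rayleigh fading (exponentially distributed channel gains), used here
    instead of NWDP purely for illustration.  An outage occurs when the
    instantaneous secrecy capacity log2(1+g_B) - log2(1+g_E) falls below
    the target secrecy rate rate_s."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        g_b = snr_bob * rng.expovariate(1.0)
        g_e = snr_eve * rng.expovariate(1.0)
        c_s = max(0.0, math.log2(1.0 + g_b) - math.log2(1.0 + g_e))
        if c_s < rate_s:
            outages += 1
    return outages / trials
</```

As expected from point (ii) above, the estimated SOP drops sharply as Bob's average SNR grows relative to Eve's.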
|
We present the calculation of elastic and inelastic high-energy small-angle electron-positron scattering with a {\it per mille} accuracy. PACS numbers: 12.15.Lk, 12.20.-m, 12.20.Ds, 13.40.-f
|
arxiv:hep-ph/9607228
|
We prove two lower bounds for the complexity of non-log-concave sampling within the framework of Balasubramanian et al. (2022), who introduced the use of Fisher information (FI) bounds as a notion of approximate first-order stationarity in sampling. Our first lower bound shows that averaged LMC is optimal in the regime of large FI, by reducing the problem of finding stationary points in non-convex optimization to sampling. Our second lower bound shows that in the regime of small FI, obtaining an FI of at most $\varepsilon^2$ with respect to the target distribution requires $\text{poly}(1/\varepsilon)$ queries, which is surprising as it rules out the existence of high-accuracy algorithms (e.g., algorithms using Metropolis-Hastings filters) in this context.
|
arxiv:2210.02482
|
In this paper, we use the "complexity equals action" (CA) conjecture to discuss the action growth rate in a black hole with multiple Killing horizons in a higher-curvature theory of gravity. Based on the Noether charge formalism of Iyer and Wald, a general formalism is developed for finding the action growth rate within the WDW patch in the late-time approximation. Moreover, as an application, we apply this formalism to $U(1)$-invariant matter fields and utilize our results in two specific cases. Our results are universal and can be regarded as an extension from the asymptotically AdS case to arbitrary asymptotics.
|
arxiv:1810.00758
|
We design a consistent Galerkin scheme for the approximation of the vectorial modified Korteweg-de Vries equation. We demonstrate that the scheme conserves energy up to machine precision. In this sense the method is consistent with the energy balance of the continuous system. This energy balance ensures there is no numerical dissipation, allowing for extremely accurate long-time simulations free from numerical artifacts. Various numerical experiments are shown demonstrating the asymptotic convergence of the method with respect to the discretisation parameters. Some simulations are also presented that correctly capture the unusual interactions between solitons in the vectorial setting.
|
arxiv:1710.03527
|
The notion of a Moreau envelope is central to the analysis of first-order optimization algorithms for machine learning. Yet, it has not been developed and extended so as to apply to a deep network and, more broadly, to a machine learning system with a differentiable programming implementation. We define a compositional calculus adapted to Moreau envelopes and show how to integrate it within differentiable programming. The proposed framework casts in a mathematical optimization framework several variants of gradient back-propagation related to the idea of the propagation of virtual targets.
|
arxiv:2012.15458
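For a single variable, the Moreau envelope and its proximal operator are easy to write down. The sketch below evaluates the envelope of $f(u) = |u|$ via soft-thresholding, recovering the Huber function; the paper's compositional calculus for whole networks is far beyond this toy:

```python
def prox_abs(x, t):
    """Proximal operator of f(u) = |u|: soft-thresholding with parameter t."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def moreau_env_abs(x, t):
    """Moreau envelope  e_t f(x) = min_u { |u| + (1/(2t)) * (u - x)^2 },
    evaluated at the minimizer given by the prox.  For f = |.| this is the
    Huber function: quadratic near 0, linear in the tails, i.e. a smooth
    surrogate of |x| with 1/t-Lipschitz gradient."""
    u = prox_abs(x, t)
    return abs(u) + (u - x) ** 2 / (2.0 * t)
```

With t = 1, points inside the threshold give the quadratic branch x**2/2, and points outside give the linear branch |x| - 1/2.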
|
The use of synthetic fuels is a promising way to reduce emissions significantly. To accelerate cost-effective, large-scale synthetic fuel deployment, we optimize a novel 1 MW PtL plant in terms of PtL efficiency and fuel production costs. For numerous plants, the available waste heat and temperature level depend on the operating point. Thus, to optimize efficiency and costs, the choice of the operating point is included in the heat exchanger network synthesis. All nonlinearities are approximated using piecewise linear models, and the problem is transformed into an MILP. Adapting the epsilon-constraint method allows us to solve the multi-criteria problem with uniformly distributed solutions on the Pareto front. The results show that, compared to the conventional design process, the production cost can be reduced to 1.83 EUR/kg and the PtL efficiency can be increased to 61.30%. By applying the presented method, climate-neutral synthetic fuels can be promoted and emissions reduced in the long term.
|
arxiv:2310.09294
|
We used 3.6, 8.0, 70, and 160 micron Spitzer Space Telescope data, James Clerk Maxwell Telescope HARP-B CO J=(3-2) data, National Radio Astronomy Observatory 12 m telescope CO J=(1-0) data, and Very Large Array HI data to investigate the relations among PAHs, cold (~20 K) dust, molecular gas, and atomic gas within NGC 2403, an SABcd galaxy at a distance of 3.13 Mpc. The dust surface density is mainly a function of the total (atomic and molecular) gas surface density and galactocentric radius. The gas-to-dust ratio monotonically increases with radius, varying from ~100 in the nucleus to ~400 at 5.5 kpc. The slope of the gas-to-dust ratio is close to that of the oxygen abundance, suggesting that metallicity strongly affects the gas-to-dust ratio within this galaxy. The exponential scale length of the radial profile of the CO J=(3-2) emission is statistically identical to the scale length of the stellar continuum-subtracted 8 micron (PAH 8 micron) emission. However, CO J=(3-2) and PAH 8 micron surface brightnesses appear uncorrelated when examining sub-kpc-sized regions.
|
arxiv:0911.3369
|
Studies of superconductivity in multiband correlated electronic systems have become one of the central topics in condensed matter/materials physics. In this paper, we present the results of thermodynamic measurements on the superconducting filled skutterudite system Pr$_{1-x}$Ce$_x$Pt$_4$Ge$_{12}$ ($0 \leq x \leq 0.2$) to investigate how the substitution of Ce at Pr sites affects superconductivity. We find that an increase in Ce concentration leads to a suppression of the superconducting transition temperature from $T_c \sim 7.9$ K for $x = 0$ to $T_c \sim 0.6$ K for $x = 0.14$. Our analysis of the specific heat data for $x \leq 0.07$ reveals that superconductivity must develop in at least two bands: the superconducting order parameter has nodes on one Fermi pocket and remains fully gapped on the other. Both the nodal and nodeless gap values decrease with Ce substitution, with the nodal gap being suppressed more strongly. Ultimately, the higher Ce concentration samples ($x > 0.07$) display a nodeless gap only.
|
arxiv:1607.03563
|
This paper describes jMetalPy, an object-oriented Python-based framework for multi-objective optimization with metaheuristic techniques. Building upon our experiences with the well-known jMetal framework, we have developed a new multi-objective optimization software platform aiming not only at replicating the former in a different programming language, but also at taking advantage of the full feature set of Python, including its facilities for fast prototyping and the large number of available libraries for data processing, data analysis, data visualization, and high-performance computing. As a result, jMetalPy provides an environment for solving multi-objective optimization problems focused not only on traditional metaheuristics, but also on techniques supporting preference articulation and dynamic problems, along with a rich set of features related to the automatic generation of statistical data from the results generated, as well as the real-time and interactive visualization of the Pareto front approximations produced by the algorithms. jMetalPy additionally offers support for parallel computing on multicore and cluster systems. We include some use cases to explore the main features of jMetalPy and to illustrate how to work with it.
|
arxiv:1903.02915
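The Pareto front approximations mentioned above rest on the dominance relation, which is simple to state in code. A generic helper for minimization problems (this is an illustrative standalone sketch, not jMetalPy's own API):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors,
    i.e. the kind of Pareto front approximation a multi-objective
    framework visualizes.  Quadratic-time, fine for small result sets."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Dominated points such as (3, 3) in the presence of (2, 2) are filtered out, leaving only the trade-off frontier.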
|
Recent imaging of protoplanetary disks with high resolution and contrast has revealed a striking variety of substructure. Of particular interest are cases where near-infrared scattered light images show evidence for low-intensity annular "gaps." The origins of such structures are still uncertain, but the interaction of the gas disk with planets is a common interpretation. We study the impact that the evolution of the solid material can have on the observable properties of disks in a simple scenario without any gravitational or hydrodynamical disturbances to the gas disk structure. Even with a smooth and continuous gas density profile, we find that the scattered light emission produced by small dust grains can exhibit ring-like depressions similar to those presented in recent observations. The physical mechanisms responsible for these features rely on the inefficient fragmentation of dust particles. The occurrence and position of the proposed "gap" features depend most strongly on the dust-to-gas ratio, the fragmentation threshold velocity, the strength of the turbulence, and the age of the disk, and should be generic (at some radius) for typically adopted disk parameters. The same physical processes can affect the thermal emission at optically thin wavelengths ($\sim 1$ mm), although the behavior can be more complex; unlike for disk-planet interactions, a "gap" should not be present at these longer wavelengths.
|
arxiv:1510.05660
|
data integrity is crucial for ensuring data correctness and quality, maintained through integrity constraints that must be continuously checked, especially in data - intensive systems like oltp. while dbmss handle common constraints well, complex constraints often require ad - hoc solutions. research since the 1980s has focused on automatic and simplified integrity constraint checking, leveraging the assumption that databases are consistent before updates. this paper discusses using program transformation operators to generate simplified integrity constraints, focusing on complex constraints expressed in denial form. in particular, we target a class of integrity constraints, called extended denials, which are more general than tuple - generating dependencies and equality - generating dependencies. these techniques can be readily applied to standard database practices and can be directly translated into sql.
|
arxiv:2412.20871
|
we present the results of a new selection technique to identify powerful ( $ l _ { \ rm 500 \, mhz } > 10 ^ { 27 } \, $ whz $ ^ { - 1 } $ ) radio galaxies towards the end of the epoch of reionisation. our method is based on the selection of bright radio sources showing radio spectral curvature at the lowest frequency ( $ \ sim 100 \, $ mhz ) combined with the traditional faintness in $ k - $ band for high redshift galaxies. this technique is only possible thanks to the galactic and extra - galactic all - sky murchison wide - field array ( gleam ) survey which provides us with 20 flux measurements across the $ 70 - 230 \, $ mhz range. for this pilot project, we focus on the gama 09 field to demonstrate our technique. we present the results of our follow - up campaign with the very large telescope, australian telescope compact array and the atacama large millimetre array ( alma ) to locate the host galaxy and to determine its redshift. of our four candidate high redshift sources, we find two powerful radio galaxies in the $ 1 < z < 3 $ range, confirm one at $ z = 5. 55 $ and present a very tentative $ z = 10. 15 $ candidate. their near - infrared and radio properties show that we are preferentially selecting some of the most radio luminous objects, hosted by massive galaxies very similar to powerful radio galaxies at $ 1 < z < 5 $. our new selection and follow - up technique for finding powerful radio galaxies at $ z > 5. 5 $ has a high $ 25 - 50 \ % $ success rate.
|
arxiv:2111.08104
|
collaborative recommendation is an information - filtering technique that attempts to present information items that are likely of interest to an internet user. traditionally, collaborative systems deal with situations with two types of variables, users and items. in its most common form, the problem is framed as trying to estimate ratings for items that have not yet been consumed by a user. despite wide - ranging literature, little is known about the statistical properties of recommendation systems. in fact, no clear probabilistic model even exists which would allow us to precisely describe the mathematical forces driving collaborative filtering. to provide an initial contribution to this, we propose to set out a general sequential stochastic model for collaborative recommendation. we offer an in - depth analysis of the so - called cosine - type nearest neighbor collaborative method, which is one of the most widely used algorithms in collaborative filtering, and analyze its asymptotic performance as the number of users grows. we establish consistency of the procedure under mild assumptions on the model. rates of convergence and examples are also provided.
|
arxiv:1010.0499
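a toy sketch of the cosine-type nearest-neighbor collaborative method that the abstract analyzes: a missing rating is predicted as a similarity-weighted average over the users who rated the item. the users, items and ratings below are invented placeholders, not data from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity of two rating dicts over their co-rated items."""
    common = [i for i in u if i in v]
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = math.sqrt(sum(u[i] ** 2 for i in common)) * \
          math.sqrt(sum(v[i] ** 2 for i in common))
    return num / den

def predict(ratings, user, item):
    """Similarity-weighted average of the ratings given to `item` by peers."""
    peers = [(cosine(ratings[user], ratings[v]), ratings[v][item])
             for v in ratings if v != user and item in ratings[v]]
    w = sum(s for s, _ in peers)
    return sum(s * r for s, r in peers) / w if w else None

ratings = {"ann": {"a": 5, "b": 3}, "bob": {"a": 4, "b": 3, "c": 4},
           "eve": {"a": 1, "b": 5, "c": 2}}
print(predict(ratings, "ann", "c"))  # pulled toward bob's rating of 4
```

the asymptotic analysis in the paper concerns exactly this kind of estimate as the number of users (peers) grows.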
|
we prove that the order complex of a geometric lattice has a convex ear decomposition. as a consequence, if d ( l ) is the order complex of a rank ( r + 1 ) geometric lattice l, then for all i \ leq r / 2 the h - vector of d ( l ) satisfies h ( i - 1 ) \ leq h ( i ) and h ( i ) \ leq h ( r - i ). we also obtain several inequalities for the flag h - vector of d ( l ) by analyzing the weak bruhat order of the symmetric group. as an application, we obtain a zonotopal cd - analogue of the dowling - wilson characterization of geometric lattices which minimize whitney numbers of the second kind. in addition, we are able to give a combinatorial flag h - vector proof of h ( i - 1 ) \ leq h ( i ) when i \ leq ( 2 / 7 ) ( r + 5 / 2 ).
|
arxiv:math/0405535
|
we review the recent progress in the density functional theory for superconductors ( scdft ). motivated by the long - studied plasmon mechanism of superconductivity, we have constructed an exchange - correlation kernel entering the scdft gap equation which includes the plasmon effect. for the case of lithium under high pressures, we show that the plasmon effect substantially enhances the transition temperature ( tc ) by cooperating with the conventional phonon mechanism and results in a better agreement between the theoretical and experimentally observed tc. our present formalism will be a first step to density functional theory for unconventional superconductors.
|
arxiv:1401.1578
|
the absolute branching ratio of the $ k ^ + \ rightarrow \ pi ^ + \ pi ^ - \ pi ^ + ( \ gamma ) $ decay, inclusive of final - state radiation, has been measured using $ \ sim $ 17 million tagged $ k ^ + $ mesons collected with the kloe detector at da $ \ phi $ ne, the frascati $ \ phi $ - factory. the result is : \ [ br ( k ^ + \ rightarrow \ pi ^ + \ pi ^ - \ pi ^ + ( \ gamma ) ) = 0. 05565 \ pm 0. 00031 _ { stat } \ pm 0. 00025 _ { syst } \ ] a factor $ \ simeq $ 5 more precise with respect to the previous result. this work completes the program of precision measurements of the dominant kaon branching ratios at kloe.
|
arxiv:1407.2028
|
the water ascent in tall trees is subject to controversy : plant biologists debate the validity of the cohesion - tension theory which considers strong negative pressures in microtubes of xylem carrying the crude sap. this article aims to point out that liquids are submitted at the walls to intermolecular forces inferring density gradients making heterogeneous liquid layers and therefore disqualifying the navier - stokes equations for nanofilms. the crude sap motion takes the disjoining pressure gradient into account and the sap flow dramatically increases such that the watering of nanolayers may be analogous to a microscopic flow. application to microtubes of xylem avoids the problem of cavitation and enables us to understand why the ascent of sap is possible for very tall trees.
|
arxiv:1106.1275
|
the first szego limit theorem has been extended by bump - diaconis and tracy - widom to limits of other minors of toeplitz matrices. we extend their results still further to allow more general measures and more general determinants. we also give a new extension to higher dimensions, which extends a theorem of helson and lowdenslager.
|
arxiv:math/0205052
|
we prove that the magnitude ( co ) homology of an enriched category can, under some technical assumptions, be described in terms of derived functors between certain abelian categories. we show how this statement is specified for the cases of quasimetric spaces, finite quasimetric spaces, and finite digraphs. for quasimetric spaces, we define the notion of a distance module over a quasimetric space, define the functor of ( co ) invariants of a distance module and show that the magnitude ( co ) homology can be presented via its derived functors. as a corollary we obtain that the magnitude cohomology of a quasimetric space can be presented in terms of ext functors in the category of distance modules. for finite quasimetric spaces, we show that magnitude ( co ) homology can be presented in terms of tor and ext functors over a certain graded algebra, called the distance algebra of the quasimetric space. for finite digraphs, the distance algebra is a bound quiver algebra. in addition, we show that the magnitude cohomology algebra of a finite quasimetric space can be described as a yoneda algebra.
|
arxiv:2402.14466
|
in many modern imaging applications the desire to reconstruct high resolution images, coupled with the abundance of data from acquisition using ultra - fast detectors, have led to new challenges in image reconstruction. a main challenge is that the resulting linear inverse problems are massive. the size of the forward model matrix exceeds the storage capabilities of computer memory, or the observational dataset is enormous and not available all at once. row - action methods that iterate over samples of rows can be used to approximate the solution while avoiding memory and data availability constraints. however, their overall convergence can be slow. in this paper, we introduce a sampled limited memory row - action method for linear least squares problems, where an approximation of the global curvature of the underlying least squares problem is used to speed up the initial convergence and to improve the accuracy of iterates. we show that this limited memory method is a generalization of the damped block kaczmarz method, and we prove linear convergence of the expectation of the iterates and of the error norm up to a convergence horizon. numerical experiments demonstrate the benefits of these sampled limited memory row - action methods for massive 2d and 3d inverse problems in tomography applications.
|
arxiv:1912.07962
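a plain-python sketch of the classical row-action (kaczmarz) iteration that the paper generalizes: the iterate is projected onto one sampled row's hyperplane $ a _ i \cdot x = b _ i $ at a time, so only a single row is ever held in memory. the limited-memory curvature correction introduced in the paper is omitted; the system below is a small invented example.

```python
import random

def kaczmarz(A, b, sweeps=200, seed=0):
    """Randomized Kaczmarz for a consistent linear system A x = b."""
    rng = random.Random(seed)
    x = [0.0] * len(A[0])
    rows = list(range(len(A)))
    for _ in range(sweeps):
        i = rng.choice(rows)                 # sample one row
        ai, bi = A[i], b[i]
        # project x onto the hyperplane a_i . x = b_i
        r = (bi - sum(a * xj for a, xj in zip(ai, x))) / sum(a * a for a in ai)
        x = [xj + r * a for xj, a in zip(x, ai)]
    return x

A = [[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]]
x_true = [1.0, 2.0]
b = [sum(a * t for a, t in zip(row, x_true)) for row in A]  # consistent rhs
x = kaczmarz(A, b)
print(x)  # close to [1.0, 2.0]
```

in the massive tomography setting of the paper, each "row" would be a sampled block of the forward model, and the damped block variant with curvature information speeds up this basic scheme.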
|
many ionic surfactants with wide applications in personal - care and house - hold detergency show limited water solubility at lower temperatures ( krafft point ). this drawback can be overcome by using mixed solutions, where the ionic surfactant is incorporated in mixed micelles with another surfactant, which is soluble at lower temperatures. the solubility and electrolytic conductivity for a binary surfactant mixture of anionic methyl ester sulfonates ( mes ) with nonionic alkyl polyglucoside and alkyl polyoxyethylene ether at 5 deg c during long - term storage were measured. phase diagrams were established ; a general theoretical model for their explanation was developed and checked experimentally. the binary and ternary phase diagrams for studied surfactant mixtures include phase domains : mixed micelles ; micelles and crystallites ; crystallites, and molecular solution. the proposed general methodology, which utilizes the equations of molecular thermodynamics at minimum number of experimental measurements, is convenient for construction of such phase diagrams. the results could increase the range of applicability of mes - surfactants with relatively high krafft temperature, but with various useful properties such as excellent biodegradability and skin compatibility ; stability in hard water ; good wetting and cleaning performance.
|
arxiv:2105.11824
|
in this brief review, i summarize the new development on the correspondence between noncommutative ( nc ) field theory and gravity, shortly referred to as the ncft / gravity correspondence. i elucidate why a gauge theory in nc spacetime should be a theory of gravity. a basic reason for the ncft / gravity correspondence is that the $ \ lambda $ - symmetry ( or b - field transformations ) in nc spacetime can be considered on a par with diffeomorphisms, which results from the darboux theorem. this fact leads to a striking picture about gravity : gravity can emerge from a gauge theory in nc spacetime. gravity is then a collective phenomenon emerging from gauge fields living in fuzzy spacetime.
|
arxiv:hep-th/0612231
|
we describe a new method involving wavelet transforms for deriving the wind velocity associated with atmospheric turbulence layers from generalized scidar measurements. the algorithm analyses the cross - correlation of a series of scintillation patterns separated by lapses of dt, 2dt, 3dt, 4dt and 5dt using wavelet transforms. wavelet analysis provides the position, direction and altitude of the different turbulence layers detected in each cross - correlation. the comparison and consistency of the turbulent layer displacements in consecutive cross - correlations allow the determination of their velocities and avoid misidentifications associated with noise and / or overlapping layers. to validate the algorithm, we have compared the velocity of turbulence layers derived on four nights with the wind vertical profile provided by balloon measurements. the software is fully automated and is able to analyse huge amounts of generalized scidar measurements.
|
arxiv:astro-ph/0608595
|
nowadays adult content represents a non - negligible proportion of the web content. it is of the utmost importance to protect children from this content. search engines, as an entry point for web navigation, are ideally placed to deal with this issue. in this paper, we propose a method that builds a safe index, i. e. an adult - content - free index, for search engines. this method is based on a filter that uses only textual information from the web page and the associated url.
|
arxiv:1512.00198
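a toy illustration of the idea in the abstract: flag a page using only its url and textual content against a small blocklist, and index only the pages that pass. the word list, urls and page texts are invented placeholders, not the paper's actual filter.

```python
# hypothetical blocklist; a real filter would use a learned lexicon
BLOCKLIST = {"xxx", "porn", "adult"}

def is_safe(url, text):
    """True if neither the URL tokens nor the page text hit the blocklist."""
    tokens = set(url.lower().replace("/", " ").replace(".", " ").split())
    tokens |= set(text.lower().split())
    return BLOCKLIST.isdisjoint(tokens)

pages = [("http://example.com/cooking", "pasta recipes for kids"),
         ("http://xxx.example.com/", "adult content inside")]
safe_index = [url for url, text in pages if is_safe(url, text)]
print(safe_index)
```

only the first page reaches the safe index; the second is rejected on both its url and its text, which mirrors the paper's use of the two textual signals.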
|
in this article, we study the thermodynamic behavior of anisotropic ( rod - and disk - shaped ) nanoparticles within a block copolymer matrix by using self - consistent field theory ( scft ) simulation. in particular, we introduce various defect structures of block copolymers to precisely control the location of anisotropic particles. different from previous studies using spherical nanoparticles within block copolymer model defects, anisotropic particles are aligned with a preferred orientation near the defect center due to the combined effects of stretching and interfacial energy of block copolymers. our results are important for precise control of anisotropic nanoparticle arrays for designing various functional nanomaterials.
|
arxiv:1707.08959
|
we discuss second quantization, discrete symmetry transformations and inner products in free non - hermitian scalar quantum field theories with pt symmetry, focusing on a prototype model of two complex scalar fields with anti - hermitian mass mixing. whereas the definition of the inner product is unique for theories described by hermitian hamiltonians, its formulation is not unique for non - hermitian hamiltonians. energy eigenstates are not orthogonal with respect to the conventional dirac inner product, so we must consider additional discrete transformations to define a positive - definite norm. we clarify the relationship between canonical - conjugate operators and introduce the additional discrete symmetry c ', previously introduced for quantum - mechanical systems, and show that the c ' pt inner product does yield a positive - definite norm, and hence is appropriate for defining the fock space in non - hermitian models with pt symmetry in terms of energy eigenstates. we also discuss similarity transformations between pt - symmetric non - hermitian scalar quantum field theories and hermitian theories, showing that they would require modification in the presence of interactions. as an illustration of our discussion, we compare particle mixing in a hermitian theory and in the corresponding non - hermitian model with pt symmetry, showing how the latter maintains unitarity and exhibits mixing between scalar and pseudoscalar bosons.
|
arxiv:2006.06656
|
query evaluation over probabilistic databases is known to be intractable in many cases, even in data complexity, i. e., when the query is fixed. although some restrictions of the queries [ 19 ] and instances [ 4 ] have been proposed to lower the complexity, these known tractable cases usually do not apply to combined complexity, i. e., when the query is not fixed. this leaves open the question of which query and instance languages ensure the tractability of probabilistic query evaluation in combined complexity. this paper proposes the first general study of the combined complexity of conjunctive query evaluation on probabilistic instances over binary signatures, which we can alternatively phrase as a probabilistic version of the graph homomorphism problem, or of a constraint satisfaction problem ( csp ) variant. we study the complexity of this problem depending on whether instances and queries can use features such as edge labels, disconnectedness, branching, and edges in both directions. we show that the complexity landscape is surprisingly rich, using a variety of technical tools : automata - based compilation to d - dnnf lineages as in [ 4 ], $ \ beta $ - acyclic lineages using [ 10 ], the x - property for tractable csp from [ 24 ], graded dags [ 27 ] and various coding techniques for hardness proofs.
|
arxiv:1703.03201
|
let $ \ omega ^ d $ be the $ d $ - dimensional drinfeld symmetric space for a finite extension $ f $ of $ \ mathbb { q } _ p $. let $ \ sigma ^ 1 $ be a geometrically connected component of the first drinfeld covering of $ \ omega ^ d $ and let $ \ mathbb { f } $ be the residue field of the unique degree $ d + 1 $ unramified extension of $ f $. we show that the natural homomorphism determined by the second drinfeld covering from the group of characters of $ ( \ mathbb { f }, + ) $ to $ \ text { pic } ( \ sigma ^ 1 ) [ p ] $ is injective. in particular, $ \ text { pic } ( \ sigma ^ 1 ) [ p ] \ neq 0 $. we also show that all vector bundles on $ \ omega ^ 1 $ are trivial, which extends the classical result that $ \ text { pic } ( \ omega ^ 1 ) = 0 $.
|
arxiv:2307.12942
|
internet of things ( iot ) occupies a vital place in our everyday lives. iot networks are composed of smart devices which communicate and transfer information without the physical intervention of humans. this proliferation and the autonomous nature of iot systems make these devices threatened and prone to severe kinds of attacks. in this paper, we introduce behavior capturing and verification procedures in blockchain - supported smart - iot systems that can demonstrate trust - level confidence to outside networks. we define a custom \ emph { behavior monitor } and implement it on a selected node that can extract the activity of each device and analyze the behavior using a deep machine learning strategy. besides, we deploy trusted execution technology ( tee ), which can be used to provide a secure execution environment ( enclave ) for sensitive application code and data on the blockchain. finally, in the evaluation phase we analyze data from various iot devices infected by the mirai attack. the evaluation results show the strength of our proposed method in terms of accuracy and the time required for detection.
|
arxiv:2001.01841
|
we derive the dynamical equations for a non - local gravity model in the palatini formalism and we discuss some of the properties of this model. we have shown that, in some specific cases, the vacuum solutions of general relativity are also vacuum solutions of the non - local model, so we conclude that, at least in this case, the singularities of einstein ' s gravity are not removed.
|
arxiv:1511.03578
|
we present a hybrid quantum - classical electrostatic particle - in - cell ( pic ) method, where the electrostatic field poisson solver is implemented on a quantum computer simulator using a hybrid classical - quantum neural network ( hnn ) using data - driven and physics - informed learning approaches. the hnn is trained on classical pic simulation results and executed via a pennylane quantum simulator. the remaining computational steps, including particle motion and field interpolation, are performed on a classical system. to evaluate the accuracy and computational cost of this hybrid approach, we test the hybrid quantum - classical electrostatic pic against the two - stream instability, a standard benchmark in plasma physics. our results show that the quantum poisson solver achieves comparable accuracy to classical methods. it also provides insights into the feasibility of using quantum computing and hnns for plasma simulations. we also discuss the computational overhead associated with current quantum computer simulators, showing the challenges and potential advantages of hybrid quantum - classical numerical methods.
|
arxiv:2505.09260
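the field solve that the paper offloads to a hybrid quantum-classical network is, classically, a 1d poisson solve on the pic grid. a plain-python sketch of that classical baseline (not the paper's pennylane code): solve $ \phi'' = - \rho $ with $ \phi = 0 $ at both walls via the thomas algorithm for the tridiagonal finite-difference system.

```python
def poisson_1d(rho, h):
    """Solve phi'' = -rho on a uniform grid, phi = 0 at both boundaries."""
    n = len(rho)
    # tridiagonal system: (phi[i-1] - 2 phi[i] + phi[i+1]) = -rho[i] h^2
    a, b, c = [1.0] * n, [-2.0] * n, [1.0] * n
    d = [-r * h * h for r in rho]
    for i in range(1, n):                      # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    phi = [0.0] * n
    phi[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        phi[i] = (d[i] - c[i] * phi[i + 1]) / b[i]
    return phi

# uniform charge rho = 1 on [0, L]: analytic solution is phi(x) = x (L - x) / 2
n, L = 99, 1.0
h = L / (n + 1)
phi = poisson_1d([1.0] * n, h)
x_mid = (n // 2 + 1) * h
print(phi[n // 2], x_mid * (L - x_mid) / 2.0)  # numerical vs analytic midpoint
```

for a quadratic solution the second-order scheme is exact up to roundoff, which makes this a convenient correctness check before swapping in a learned solver.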
|
the prevalence of sexual reproduction ( " sex " ) in eukaryotes is an enigma of evolutionary biology. that sex increases genetic variation explains, in essence, only its long - term superiority. the accumulation of harmful mutations exerts an immediate and ubiquitous pressure on organisms. contrary to common sense, our theoretical model suggests that the reproductive rate can actively influence the accumulation of harmful mutations. the interaction of the reproductive rate and the integrated harm of mutations gives rise to a critical reproductive rate r *. a population will become irreversibly extinct once its reproductive rate falls below r *. a sexual population has an r * lower than 1 and an asexual population has an r * higher than 1. the mean reproductive rate of a population that has reached carrying capacity must reduce to 1. this explains the widespread occurrence of sex as well as the persistence of facultative and asexual organisms. computer simulations significantly support our conclusion.
|
arxiv:1306.5373
|
we study the effect of rho ^ 0 - gamma mixing in e ^ + e ^ - to pi ^ + pi ^ - and its relevance for the comparison of the square modulus of the pion form factor | f ^ ( e ) _ pi | ^ 2, as measured in e ^ + e ^ - annihilation experiments, and | f ^ ( tau ) _ pi | ^ 2, the corresponding quantity obtained after accounting for known isospin breaking effects by an isospin rotation from the tau - decay spectra. after correcting the tau data for the missing rho - gamma mixing contribution, besides the other known isospin symmetry violating corrections, the pi pi i = 1 part of the hadronic vacuum polarization contribution to the muon g - 2 is fully compatible between tau - based and e ^ + e ^ - based evaluations. tau data thus confirm the result obtained with e ^ + e ^ - data. our evaluation based on all e ^ + e ^ - data including more recent babar and kloe data yields a _ mu ^ ( had ) = 690. 75 ( 4. 72 ) x 10 ^ { - 10 } ( e ^ + e ^ - based ), while including tau data we find a _ mu ^ ( had ) = 690. 96 ( 4. 65 ) x 10 ^ { - 10 } ( e ^ + e ^ - + tau based ). this backs the ~ 3 sigma deviation between theory and experiment. for the tau di - pion branching fraction we find b ^ { cvc } _ { pi pi ^ 0 } = 25. 20 \ pm 0. 17 \ pm 0. 28 from e ^ + e ^ - + cvc, while b _ { pi pi ^ 0 } = 25. 34 \ pm 0. 06 \ pm 0. 08 is evaluated directly from the tau spectra.
|
arxiv:1101.2872
|
in this review paper, we summarise the goals of asteroseismic studies of close binary stars. we first briefly recall the basic principles of asteroseismology, and highlight how the binarity of a star can be an asset, but also a complication, for the interpretation of the stellar oscillations. we discuss a few sample studies of pulsations in close binaries and summarise some case studies. this leads us to conclude that asteroseismology of close binaries is a challenging field of research, but with large potential for the improvement of current stellar structure theory. finally, we highlight the best observing strategy to make efficient progress in the near future.
|
arxiv:astro-ph/0701459
|
against heavily defended enemy sites such as command and control centers or surface - to - air missile ( sam ) batteries. enemy radar will cover the airspace around these sites with overlapping coverage, making undetected entry by conventional aircraft nearly impossible. stealthy aircraft can also be detected, but only at short ranges around the radars ; for a stealthy aircraft there are substantial gaps in the radar coverage. thus a stealthy aircraft flying an appropriate route can remain undetected by radar. even if a stealth aircraft is detected, fire - control radars operating in c, x and ku bands cannot paint ( for missile guidance ) low observable ( lo ) jets except at very close ranges. many ground - based radars exploit doppler filtering to improve sensitivity to objects having a radial velocity component relative to the radar. mission planners use their knowledge of enemy radar locations and the rcs pattern of the aircraft to design a flight path that minimizes radial speed while presenting the lowest - rcs aspects of the aircraft to the threat radar. to be able to fly these " safe " routes, it is necessary to understand an enemy ' s radar coverage ( see electronic intelligence ). airborne or mobile radar systems such as airborne early warning and control ( aew & c, awacs ) can complicate tactical strategy for stealth operation. = = research = = after the invention of electromagnetic metasurfaces, the conventional means to reduce rcs have been improved significantly. as mentioned earlier, the main objective in purpose shaping is to redirect scattered waves away from the backscattered direction, which is usually the source. however, this usually compromises aerodynamic performance. one feasible solution, which has extensively been explored in recent time, is to use metasurfaces which can redirect scattered waves without altering the geometry of a target.
such metasurfaces can primarily be classified in two categories : ( i ) checkerboard metasurfaces, ( ii ) gradient index metasurfaces. similarly, negative index metamaterials are artificial structures for which refractive index has a negative value for some frequency range, such as in microwave, infrared, or possibly optical. these offer another way to reduce detectability, and may provide electromagnetic near - invisibility in designed wavelengths. plasma stealth is a phenomenon proposed to use ionized gas, termed a plasma, to reduce rcs of vehicles. interactions between electromagnetic radiation and ionized gas have been studied extensively for many purposes, including concealing vehicles from radar. various methods might form a
|
https://en.wikipedia.org/wiki/Stealth_technology
|
we report the structural and physical properties of epitaxial bi2fecro6 thin films on epitaxial srruo3 grown on ( 100 ) - oriented srtio3 substrates by pulsed laser ablation. the 300 nm thick films exhibit both ferroelectricity and magnetism at room temperature with a maximum dielectric polarization of 2. 8 microc / cm2 at emax = 82 kv / cm and a saturated magnetization of 20 emu / cc ( corresponding to ~ 0. 26 bohr magneton per rhombohedral unit cell ), with coercive fields below 100 oe. our results confirm the predictions made using ab - initio calculations about the existence of multiferroic properties in bi2fecro6.
|
arxiv:cond-mat/0608178
|
a group of massive galaxies at redshifts of $ z \ gtrsim 7 $ have been recently detected by the james webb space telescope ( jwst ), which were unexpected to form so early within the framework of standard big bang cosmology. in this work, we propose that this puzzle can be explained by the presence of some primordial black holes ( pbhs ) with a mass of $ \ sim 1000 m _ \ odot $. these pbhs, clothed in dark matter halos and undergoing super - eddington accretion, serve as seeds for early galaxy formation with masses of $ \ sim 10 ^ { 8 } - 10 ^ { 10 } ~ m _ \ odot $ at high redshift, thus accounting for the jwst observations. using a hierarchical bayesian inference framework to constrain the pbh mass distribution models, we find that the lognormal model with $ m _ { \ rm c } \ sim 750 m _ \ odot $ is preferred over other hypotheses. these rapidly growing bhs are expected to emit strong radiation and may appear as high - redshift compact objects, similar to those recently discovered by jwst. although we focus on pbhs in this work, the bound on the initial mass of the seed black holes remains robust even if they were formed through astrophysical channels.
|
arxiv:2303.09391
|
large occlusions result in a significant decline in image classification accuracy. during inference, diverse types of unseen occlusions introduce out - of - distribution data to the classification model, causing accuracy to drop as low as 50 %. as occlusions encompass spatially connected regions, conventional methods involving feature reconstruction are inadequate for enhancing classification performance. we introduce learn : latent enhancing feature reconstruction network - - an auto - encoder based network that can be incorporated into the classification model before its classifier head without modifying the weights of the classification model. in addition to reconstruction and classification losses, training of learn effectively combines intra - and inter - class losses calculated over its latent space, which leads to improved recovery of the latent space of occluded data while preserving its class - specific discriminative information. on the occludedpascal3d + dataset, the proposed learn outperforms standard classification models ( vgg16 and resnet - 50 ) by a large margin, and state - of - the - art methods by up to 2 %. in cross - dataset testing, our method improves the average classification accuracy by more than 5 % over the state - of - the - art methods. in every experiment, our model consistently maintains excellent accuracy on in - distribution data.
|
arxiv:2402.06936
|
we consider the problem of identifying the causal direction between two discrete random variables using observational data. unlike previous work, we keep the most general functional model but make an assumption on the unobserved exogenous variable : inspired by occam ' s razor, we assume that the exogenous variable is simple in the true causal direction. we quantify simplicity using r \ ' enyi entropy. our main result is that, under natural assumptions, if the exogenous variable has low $ h _ 0 $ entropy ( cardinality ) in the true direction, it must have high $ h _ 0 $ entropy in the wrong direction. we establish several algorithmic hardness results about estimating the minimum entropy exogenous variable. we show that the problem of finding the exogenous variable with minimum entropy is equivalent to the problem of finding minimum joint entropy given $ n $ marginal distributions, also known as minimum entropy coupling problem. we propose an efficient greedy algorithm for the minimum entropy coupling problem, that for $ n = 2 $ provably finds a local optimum. this gives a greedy algorithm for finding the exogenous variable with minimum $ h _ 1 $ ( shannon entropy ). our greedy entropy - based causal inference algorithm has similar performance to the state of the art additive noise models in real datasets. one advantage of our approach is that we make no use of the values of random variables but only their distributions. our method can therefore be used for causal inference for both ordinal and also categorical data, unlike additive noise models.
|
arxiv:1611.04035
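a sketch in the spirit of the paper's greedy algorithm for the minimum entropy coupling problem with $ n = 2 $ marginals: repeatedly match the largest remaining masses of the two marginals and assign their minimum as joint mass. the marginals below are invented; this is an illustrative heuristic, not the paper's exact procedure.

```python
import heapq
import math

def greedy_coupling(p, q):
    """Greedily build a joint distribution with marginals p and q."""
    hp = [(-x, i) for i, x in enumerate(p)]   # max-heaps via negation
    hq = [(-x, j) for j, x in enumerate(q)]
    heapq.heapify(hp)
    heapq.heapify(hq)
    joint = {}
    while hp and hq:
        a, i = heapq.heappop(hp)              # largest remaining mass in p
        b, j = heapq.heappop(hq)              # largest remaining mass in q
        m = min(-a, -b)
        if m <= 1e-12:
            break
        joint[(i, j)] = joint.get((i, j), 0.0) + m
        if -a - m > 1e-12:                    # push back unmatched remainder
            heapq.heappush(hp, (a + m, i))
        if -b - m > 1e-12:
            heapq.heappush(hq, (b + m, j))
    return joint

def entropy(ps):
    return -sum(x * math.log2(x) for x in ps if x > 0)

p, q = [0.5, 0.3, 0.2], [0.6, 0.4]
joint = greedy_coupling(p, q)
print(joint, entropy(joint.values()))  # marginals preserved, low joint entropy
```

the joint entropy of the greedy coupling sits between max(h(p), h(q)) and h(p) + h(q); minimizing it corresponds to finding the simplest exogenous variable in the causal-direction test.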
|
= information systems ( is ) is the study of complementary networks of hardware and software ( see information technology ) that people and organizations use to collect, filter, process, create, and distribute data. the acm ' s computing careers describes is as : " a majority of is [ degree ] programs are located in business schools ; however, they may have different names such as management information systems, computer information systems, or business information systems. all is degrees combine business and computing topics, but the emphasis between technical and organizational issues varies among programs. for example, programs differ substantially in the amount of programming required. " the study of is bridges business and computer science, using the theoretical foundations of information and computation to study various business models and related algorithmic processes within a computer science discipline. the field of computer information systems ( cis ) studies computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society while is emphasizes functionality over design. = = = information technology = = = information technology ( it ) is the application of computers and telecommunications equipment to store, retrieve, transmit, and manipulate data, often in the context of a business or other enterprise. the term is commonly used as a synonym for computers and computer networks, but also encompasses other information distribution technologies such as television and telephones. several industries are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, e - commerce, and computer services. = = research and emerging technologies = = dna - based computing and quantum computing are areas of active research for both computing hardware and software, such as the development of quantum algorithms. 
potential infrastructure for future technologies includes dna origami on photolithography and quantum antennae for transferring information between ion traps. by 2011, researchers had entangled 14 qubits. fast digital circuits, including those based on josephson junctions and rapid single flux quantum technology, are becoming more nearly realizable with the discovery of nanoscale superconductors. fiber - optic and photonic ( optical ) devices, which already have been used to transport data over long distances, are starting to be used by data centers, along with cpu and semiconductor memory components. this allows the separation of ram from cpu by optical interconnects. ibm has created an integrated circuit with both electronic and optical information processing in one chip. this is denoted cmos - integrated nanophotonics ( cinp ). one benefit of optical interconnects is that motherboards, which formerly
|
https://en.wikipedia.org/wiki/Computing
|
in this paper, approximate linear minimum variance ( lmv ) filters for continuous - discrete state space models are introduced. the filters are obtained by means of a recursive approximation to the predictions for the first two moments of the state equation. it is shown that the approximate filters converge to the exact lmv filter when the error between the predictions and their approximations decreases. as a particular instance, the order - $ \ beta $ local linearization filters are presented and expounded in detail. practical algorithms are also provided and their performance in simulation is illustrated with various examples. the proposed filters are intended for the recurrent practical situation where a nonlinear stochastic system should be identified from a reduced number of partial and noisy observations distant in time.
|
arxiv:1207.6023
|
we consider the coupling between two networks, each having n nodes whose individual dynamics is modeled by a two - state master equation. the intra - network interactions are all to all, whereas the inter - network interactions involve only a small percentage of the total number of nodes. we demonstrate that the dynamics of the mean field for a single network has an equivalent description in terms of a langevin equation for a particle in a double - well potential. the coupling of two networks or equivalent coupling of two langevin equations demonstrates synchronization or antisynchronization between two systems, depending on the sign of the interaction. the anti - synchronized behavior is explained in terms of the potential function and the inter - network interaction. the relative entropy is used to establish that the conditions for maximum information transfer between the networks are consistent with the principle of complexity management and occurs when one system is near the critical state. the limitations of the langevin modeling of the network coupling are also discussed.
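the mean - field - to - langevin reduction described above lends itself to a quick numerical check. the sketch below assumes a standard quartic double - well $v(x) = -kx^2/2 + kx^4/4$ and a linear coupling term, neither of which is spelled out in the abstract; the sign convention for the coupling strength `eps` is chosen to mirror the claimed synchronization / anti - synchronization behavior.

```python
import math
import random

def simulate_coupled_wells(k=1.0, eps=0.2, noise=0.3, dt=0.01, steps=20000, seed=0):
    """Euler-Maruyama integration of two coupled overdamped Langevin particles,
    each in the (assumed) double-well potential V(x) = -k x^2/2 + k x^4/4.
    eps > 0 couples the particles attractively, eps < 0 repulsively.
    Returns the time-averaged product <x*y>: positive values suggest
    synchronization, negative values anti-synchronization."""
    rng = random.Random(seed)
    x, y = 1.0, -1.0          # start the two "networks" in opposite wells
    corr = 0.0
    for _ in range(steps):
        fx = k * x - k * x**3 + eps * (y - x)   # -dV/dx plus coupling
        fy = k * y - k * y**3 + eps * (x - y)
        x += fx * dt + noise * math.sqrt(dt) * rng.gauss(0, 1)
        y += fy * dt + noise * math.sqrt(dt) * rng.gauss(0, 1)
        corr += x * y
    return corr / steps
```

with these ( assumed ) conventions, a repulsive coupling keeps the two trajectories pinned in opposite wells, while an attractive one tends to pull them into the same well.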
|
arxiv:1508.03270
|
it is of course well known that the usual definitions of riemann integration and riemann integrals are equivalent to simpler definitions which can be expressed in terms of just one sequence of partitions, using dyadic intervals or dyadic squares or dyadic cubes for univariate, double, or triple integrals respectively. these lecture notes, intended mainly for undergraduates, present and prove some basic standard results of riemann integration in detail, taking advantage of this simpler definition. the last section of the notes provides a proof of the equivalence of this definition with the classical one. but the implicit suggestion is that the classical definition need be the concern of specialists only, and that regular students can probably do just about everything that they need to do with riemann integration by working only with the simpler dyadic definition. this is a preliminary version of these notes which will surely be updated later. it is very difficult to believe that there do not exist any other documents which give a systematic treatment of riemann integration using this approach. the author would be very grateful for any information about any such documents.
|
arxiv:1311.6021
|
a set $ b $ is said to be \ emph { sum - free } if there are no $ x, y, z \ in b $ with $ x + y = z $. we show that there exists a constant $ c > 0 $ such that any set $ a $ of $ n $ integers contains a sum - free subset $ a ' $ of size $ | a ' | \ geqslant n / 3 + c \ log \ log n $. this answers a longstanding problem in additive combinatorics, originally due to erd \ h { o } s.
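the statement is easy to experiment with. the helper below pairs a brute - force sum - free check with the classical erdős averaging construction ( dilate by a multiplier modulo a prime $p = 3k + 2$ and keep the elements whose residues land in the middle third, which is sum - free mod $p$ ). this already guarantees a subset of size $> n/3$; the $\log\log n$ gain proved in the paper is of course not captured by this sketch.

```python
def is_sum_free(s):
    """True if no x + y = z with x, y, z in s (x = y allowed)."""
    s = set(s)
    return all(x + y not in s for x in s for y in s)

def large_sum_free_subset(a):
    """Erdos's averaging argument: pick a prime p = 3k + 2 exceeding
    2 * max|a|; the middle-third residues {k+1, ..., 2k+1} are sum-free
    mod p, and some dilation t*a drops more than |a|/3 elements into it."""
    a = [x for x in a if x != 0]
    m = 2 * max(abs(x) for x in a) + 1
    p = m + ((2 - m) % 3)                    # first candidate = 2 (mod 3)
    while not all(p % d for d in range(2, int(p**0.5) + 1)):
        p += 3                               # stay congruent to 2 mod 3
    k = (p - 2) // 3
    mid = set(range(k + 1, 2 * k + 2))       # sum-free interval mod p
    best = []
    for t in range(1, p):                    # averaging over multipliers
        sub = [x for x in a if (t * x) % p in mid]
        if len(sub) > len(best):
            best = sub
    return best
```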
|
arxiv:2502.08624
|
for fifth - generation ( 5g ) and 5g - advanced networks, outage reduction within the context of reliability is a key objective since outage denotes the time period when a user equipment ( ue ) cannot communicate with the network. earlier studies have shown that in the experimental high mobility scenario considered, outage is dominated by the interruption time that stems from the random access channel ( rach ) - based handover process from the serving cell to the target cell. a handover by itself is a necessary mobility process to prevent mobility failures and their associated outage. this paper proposes a rach - less handover signaling scheme for the 3rd generation partnership project ( 3gpp ) conditional handover ( cho ) mechanism. the proposed scheme exploits the decoupling between the cho preparation and execution phases to establish initial synchronization between the ue and the target cell through an early acquisition of the timing advance. this significantly curtails the rach process and therefore the handover interruption time. results based on a system - level simulation - based mobility study have shown that the proposed scheme significantly reduces the outage and its constituent handover interruption time relatively by 18. 7 % and 43. 2 %, respectively.
|
arxiv:2403.10286
|
in this paper we propose that a restricted version of logical inference can be implemented with self - attention networks. we are aiming at showing that llms ( large language models ) constructed with transformer networks can make logical inferences. we would reveal the potential of llms by analyzing self - attention networks, which are main components of transformer networks. our approach is not based on semantics of natural languages but operations of logical inference. % point of view. we show that hierarchical constructions of self - attention networks with feed forward networks ( ffns ) can implement top - down derivations for a class of logical formulae. we also show bottom - up derivations are also implemented for the same class. we believe that our results show that llms implicitly have the power of logical inference.
|
arxiv:2410.11396
|
in this note we show that the standard \ mbox { rayleigh - schr \ " odinger } ( rs ) perturbation method gives the same result as the hypervirial perturbative method ( hpm ), for an approximate analytic expression for the energy eigenvalues of the bounded quartic oscillator. this connection between the hpm and the rs method went unnoticed for a long time, apparently because it was not obvious that the resulting polygamma sums to be evaluated in the rs method could, in fact, be expressed in closed form.
|
arxiv:1702.01116
|
as a vital strategic resource, oil has an essential influence on the world economy, diplomacy and military development. using oil trade data to dynamically monitor and warn about international trade risks is an urgent need. based on the un comtrade data from 1988 to 2017, we construct unweighted and weighted global oil trade networks ( otns ). complex network theories have some advantages in analyzing global oil trade as a system with numerous economies and complicated relationships. this paper establishes a trading - based network model for global oil trade to study the evolving efficiency, criticality and robustness of economies and the relationships between oil trade partners. the results show that for unweighted otns, the efficiency of oil flows gradually increases with growing complexity of the otns, and the weighted efficiency indicators are more capable of highlighting the impact of major events on the otns. the identified critical economies and trade relationships have more important strategic significance in the real market. the simulated deliberate attacks corresponding to national bankruptcy, trade blockade, and economic sanctions have a more significant impact on the robustness than random attacks. when the economies are promoting high - quality economic development, and continuously enhancing positions in the otn, more attention needs to be paid to the identified critical economies and trade relationships. to conclude, some suggestions for application are given according to the results.
|
arxiv:2004.05325
|
quasi - particle interference ( qpi ) measurements have provided a powerful tool for determining the momentum dependence of the gap of unconventional superconductors. here we examine the possibility of using such measurements to probe the frequency and momentum dependence of the electron self - energy. for illustration, we calculate the qpi response function for a cuprate - like fermi surface with an electron self - energy from an rpa approximation. then we try to reextract the self - energy from the dispersion of the peaks in the qpi response function using different approaches. we show that in principle it is possible to extract the self - energy from the qpi response for certain nested momentum directions. we discuss some of the limitations that one faces.
|
arxiv:1310.2761
|
in this work, we show how to construct indistinguishability obfuscation from subexponential hardness of four well - founded assumptions. we prove : let $ \ tau \ in ( 0, \ infty ), \ delta \ in ( 0, 1 ), \ epsilon \ in ( 0, 1 ) $ be arbitrary constants. assume sub - exponential security of the following assumptions, where $ \ lambda $ is a security parameter, and the parameters $ \ ell, k, n $ below are large enough polynomials in $ \ lambda $ : - the sxdh assumption on asymmetric bilinear groups of a prime order $ p = o ( 2 ^ \ lambda ) $, - the lwe assumption over $ \ mathbb { z } _ { p } $ with subexponential modulus - to - noise ratio $ 2 ^ { k ^ \ epsilon } $, where $ k $ is the dimension of the lwe secret, - the lpn assumption over $ \ mathbb { z } _ p $ with polynomially many lpn samples and error rate $ 1 / \ ell ^ \ delta $, where $ \ ell $ is the dimension of the lpn secret, - the existence of a boolean prg in $ \ mathsf { nc } ^ 0 $ with stretch $ n ^ { 1 + \ tau } $, then, ( subexponentially secure ) indistinguishability obfuscation for all polynomial - size circuits exists.
|
arxiv:2008.09317
|
in this paper, we study limit cycle bifurcations for a kind of non - smooth polynomial differential systems by perturbing a piecewise linear hamiltonian system with a center at the origin and a homoclinic loop around the origin. by using the first melnikov function of piecewise near - hamiltonian systems, we give lower bounds of the maximal number of limit cycles in hopf and homoclinic bifurcations, and derive an upper bound of the number of limit cycles that bifurcate from the periodic annulus between the center and the homoclinic loop up to the first order in the perturbation parameter. in the case when the degree of perturbing terms is low, we obtain a precise result on the number of zeros of the first melnikov function.
|
arxiv:1109.6476
|
in this article, we have studied transformation formulas of zeta function at odd integers over an arbitrary number field which in turn generalizes ramanujan ' s identity for the riemann zeta function. the above transformation leads to a new number field extension of eisenstein series, which satisfies the transformation $ z \ mapsto - 1 / z $ like an integral weight modular form over sl $ _ 2 ( \ z ) $. the results provide number of important applications, which are important in studying the behaviour of odd zeta values as well as lambert series in an arbitrary number field.
|
arxiv:2304.07870
|
multi - detector observations of individual air showers are critical to make significant progress to precisely determine cosmic - ray quantities such as mass and energy of individual events and thus bring us a step forward in answering the open questions in cosmic - ray physics. an enhancement of icetop, the surface array of the icecube neutrino observatory, is currently underway and includes adding antennas and scintillators to the existing array of ice - cherenkov tanks. the radio component will improve the characterization of the primary particles by providing an estimation of x $ _ \ text { max } $ and a direct sampling of the electromagnetic cascade, both important for per - event mass classification. a prototype station has been operated at the south pole and has observed showers, simultaneously, with the tanks, scintillator panels, and antennas. the observed radio signals of these events are unique as they are measured in the 70 to 350 \, mhz band, higher than many other cosmic - ray experiments. we present a comparison of the detected events with the waveforms from coreas simulations, convoluted with the end - to - end electronics response, as a verification of the analysis chain. using the detector response and the measurements of the prototype station as input, we update a monte - carlo - based study on the potential of the enhanced surface array for the hybrid detection of air showers by scintillators and radio antennas.
|
arxiv:2107.09666
|
since its introduction in 2006 - 2007, paris traceroute and its multipath detection algorithm ( mda ) have been used to conduct well over a billion ip level multipath route traces from platforms such as m - lab. unfortunately, the mda requires a large number of packets in order to trace an entire topology of load balanced paths between a source and a destination, which makes it undesirable for platforms that otherwise deploy paris traceroute, such as ripe atlas. in this paper we present a major update to the paris traceroute tool. our contributions are : ( 1 ) mda - lite, an alternative to the mda that significantly cuts overhead while maintaining a low failure probability ; ( 2 ) fakeroute, a simulator that enables validation of a multipath route tracing tool ' s adherence to its claimed failure probability bounds ; ( 3 ) multilevel multipath route tracing, with, for the first time, a traceroute tool that provides a router - level view of multipath routes ; and ( 4 ) surveys at both the ip and router levels of multipath routing in the internet, showing, among other things, that load balancing topologies have increased in size well beyond what has been previously reported as recently as 2016. the data and the software underlying these results are publicly available.
|
arxiv:1809.10070
|
the recently proposed aether scalar tensor ( aest ) model reproduces both the successes of particle dark matter on cosmological scales and those of modified newtonian dynamics ( mond ) on galactic scales. but the aest model reproduces mond only up to a certain maximum galactocentric radius. since mond is known to fit very well to observations at these scales, this raises the question of whether the aest model comes into tension with data. we tested whether or not the aest model is in conflict with observations using a recent analysis of data for weak gravitational lensing. we solved the equations of motion of the aest model, analyzed the solutions ' behavior, and compared the results to observational data. the aest model shows some deviations from mond at the radii probed by weak gravitational lensing. the data show no clear indication of these predicted deviations.
|
arxiv:2301.03499
|
in this article we investigate the number of subrings of $ \ z ^ d $ using subring zeta functions and $ p $ - adic integration.
|
arxiv:1008.2053
|
we critically reexamine two possible dark matter candidates within the standard model. first, we consider the $ uuddss $ exa - quark. its qcd binding energy could be large enough to make it ( quasi ) stable. we show that the cosmological dark matter abundance is reproduced thermally if its mass is 1. 2 gev. however, we also find that such mass is excluded by the stability of oxygen nuclei. second, we consider the possibility that the instability in the higgs potential leads to the formation of primordial black holes while avoiding vacuum decay during inflation. we show that the non - minimal higgs coupling to gravity must be as small as allowed by quantum corrections, $ | \ xi _ h | < 0. 01 $. even so, one must assume that the universe survived in $ e ^ { 120 } $ independent regions to fluctuations that lead to vacuum decay with probability 1 / 2 each.
|
arxiv:1803.10242
|
lusin ' s theorem states that, for every borel - measurable function $ \ bf { f } $ on $ \ mathbb r $ and every $ \ epsilon > 0 $, there exists a continuous function $ \ bf { g } $ on $ \ mathbb r $ which is equal to $ \ bf { f } $ except on a set of measure $ < \ epsilon $. we give a proof of this result using computability theory, relating it to the near - uniformity of the turing jump operator, and use this proof to derive several uniform computable versions. easier results, which we prove by the same methods, include versions of lusin ' s theorem with baire category in place of lebesgue measure and also with cantor space $ 2 ^ { \ mathbb n } $ in place of $ \ mathbb r $. the distinct processes showing generalized lowness for generic sets and for a set of full measure are seen to explain the differences between versions of lusin ' s theorem.
|
arxiv:1908.06302
|
it was recently found that the information - to - work conversion in a quantum szilard engine can be increased by using a working medium of bosons with attractive interactions. in the original scheme, the work output depends on the insertion and removal position of an impenetrable barrier that acts like a piston, separating the chambers of the engine. here, we show that the barrier removal process can be made fully reversible, resulting in a full information - to - work conversion if we also allow for the interaction strength to change during the cycle. hence, it becomes possible to reach the maximum work output per cycle dictated by the second law of thermodynamics. these findings can, for instance, be experimentally verified with ultra - cold atoms as a working medium, where a change of interaction strength can be controlled by feshbach resonances.
|
arxiv:1803.07918
|
conventional sensor - based localization relies on high - precision maps, which are generally built using specialized mapping techniques involving high labor and computational costs. in the architectural, engineering and construction industry, building information models ( bim ) are available and can provide informative descriptions of environments. this paper explores an effective way to localize a mobile 3d lidar sensor on bim - generated maps considering both geometric and semantic properties. first, original bim elements are converted to semantically augmented point cloud maps using categories and locations. after that, a coarse - to - fine semantic localization is performed to align laser points to the map based on iterative closest point registration. the experimental results show that the semantic localization can track the pose successfully with only one lidar sensor, thus demonstrating the feasibility of the proposed mapping - free localization framework. the results also show that using semantic information can help reduce localization errors on bim - generated maps.
|
arxiv:2205.00816
|
we report on the non - adiabatic offset of the initial electron momentum distribution in the plane of polarization upon single ionization of argon by strong field tunneling and show how to experimentally control the degree of non - adiabaticity. two - color counter - and co - rotating fields ( 390 and 780 nm ) are compared to show that the non - adiabatic offset strongly depends on the temporal evolution of the laser electric field. we introduce a simple method for the direct access to the non - adiabatic offset using two - color counter - and co - rotating fields. further, for a single - color circularly polarized field at 780 nm we show that the radius of the experimentally observed donut - like distribution increases for increasing momentum in the light propagation direction. our observed initial momentum offsets are well reproduced by the strong - field approximation ( sfa ). a mechanistic picture is introduced that links the measured non - adiabatic offset to the magnetic quantum number of virtually populated intermediate states.
|
arxiv:1805.05898
|
additive cost register automata ( acra ) map strings to integers using a finite set of registers that are updated using assignments of the form " x : = y + c " at every step. the corresponding class of additive regular functions has multiple equivalent characterizations, appealing closure properties, and a decidable equivalence problem. in this paper, we solve two decision problems for this model. first, we define the register complexity of an additive regular function to be the minimum number of registers that an acra needs to compute it. we characterize the register complexity by a necessary and sufficient condition regarding the largest subset of registers whose values can be made far apart from one another. we then use this condition to design a pspace algorithm to compute the register complexity of a given acra, and establish a matching lower bound. our results also lead to a machine - independent characterization of the register complexity of additive regular functions. second, we consider two - player games over acras, where the objective of one of the players is to reach a target set while minimizing the cost. we show the corresponding decision problem to be exptime - complete when costs are non - negative integers, but undecidable when costs are integers.
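the update discipline described above — every step performs parallel assignments of the form x : = y + c — is simple enough to interpret directly. the encoding below ( dictionaries for the transition function and the output function ) is my own illustrative choice, not the paper's formalism.

```python
def run_acra(word, init_state, init_regs, delta, output):
    """Interpret an additive cost register automaton (ACRA).
    delta maps (state, symbol) to (next_state, updates), where updates maps
    each register name x to a pair (y, c) encoding the parallel assignment
    x := y + c. output maps a final state to (register, c); the value of the
    word is that register's content plus c."""
    q, regs = init_state, dict(init_regs)
    for sym in word:
        q, updates = delta[(q, sym)]
        regs = {x: regs[y] + c for x, (y, c) in updates.items()}  # parallel
    reg, c = output[q]
    return regs[reg] + c
```

for example, a one - state, one - register acra with updates x : = x + 1 on symbol a and x : = x + 0 on symbol b maps each word to its number of a's.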
|
arxiv:1304.7029
|
the explanation presented in [ taichenachev et al, phys. rev. a { \ bf 61 }, 011802 ( 2000 ) ] according to which the electromagnetically induced absorption ( eia ) resonances observed in degenerate two level systems are due to coherence transfer from the excited to the ground state is experimentally tested in a hanle type experiment observing the parametric resonance on the $ d _ 1 $ line of $ ^ { 87 } $ rb. while eia occurs in the $ f = 1 \ to f ^ { \ prime } = 2 $ transition in a cell containing only $ rb $ vapor, collisions with a buffer gas ( $ 30 torr $ of $ ne $ ) cause the sign reversal of this resonance as a consequence of collisional decoherence of the excited state. a theoretical model in good qualitative agreement with the experimental results is presented.
|
arxiv:quant-ph/0211065
|
false data injection attacks ( fdias ) pose a significant security threat to power system state estimation. to detect such attacks, recent studies have proposed machine learning ( ml ) techniques, particularly deep neural networks ( dnns ). however, most of these methods fail to account for the risk posed by adversarial measurements, which can compromise the reliability of dnns in various ml applications. in this paper, we present a dnn - based fdia detection approach that is resilient to adversarial attacks. we first analyze several adversarial defense mechanisms used in computer vision and show their inherent limitations in fdia detection. we then propose an adversarial - resilient dnn detection framework for fdia that incorporates random input padding in both the training and inference phases. our simulations, based on an ieee standard power system, demonstrate that this framework significantly reduces the effectiveness of adversarial attacks while having a negligible impact on the dnns ' detection performance.
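the abstract does not spell out the padding scheme, so the following is only a hedged illustration of what random input padding at training and inference time could look like: the measurement vector is embedded at random positions in a longer vector, with fresh randomness at every call, so an adversary cannot align a crafted perturbation with the detector's input layout. all names and the filler distribution are assumptions for the sketch.

```python
import random

def random_pad(x, padded_len, rng):
    """Grow the measurement vector x to padded_len by inserting random filler
    values at random positions, preserving the order of the original entries.
    Using fresh randomness per call (training and inference alike) turns the
    effective detector input into a moving target for adversarial examples."""
    assert padded_len >= len(x)
    positions = sorted(rng.sample(range(padded_len), len(x)))
    out = [rng.uniform(-1.0, 1.0) for _ in range(padded_len)]  # random filler
    for pos, val in zip(positions, x):
        out[pos] = val
    return out
```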
|
arxiv:2102.09057
|
we study a symmetric random walk ( rw ) in one spatial dimension in an environment formed by several zones of finite width, where the probability of transition between two neighboring points and the corresponding diffusion coefficient are fixed differently in each zone. we derive analytically the probability of finding a walker at a given position and time. the probability distribution function is found to be non - gaussian because of adsorption in the bulk of the zones and partial reflection at the separation points. the time dependence of the mean squared displacement of a walker is studied as well, revealing transient anomalous behavior compared with an ordinary rw.
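a minimal monte - carlo version of such a zoned walk ( a lattice walk with zone - dependent hop probabilities standing in for zone - wise diffusion coefficients — both simplifications of the model in the abstract ) already shows the slowdown of the mean squared displacement relative to an ordinary rw:

```python
import random

def zoned_msd(p_zone, zone_width, steps, walkers=2000, seed=0):
    """Ensemble mean-squared displacement for a 1-D lattice walk whose hop
    probability depends on the zone floor(|x| / zone_width) the walker is in
    (p_zone is indexed modulo its length). At each tick the walker moves +-1
    with the local probability p and stays put otherwise."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            p = p_zone[(abs(x) // zone_width) % len(p_zone)]
            if rng.random() < p:
                x += 1 if rng.random() < 0.5 else -1
        total += x * x
    return total / walkers
```

with a single zone of p = 1 this reduces to the ordinary rw ( msd equal to the number of steps ); alternating fast and slow zones depresses the msd.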
|
arxiv:1610.08046
|
for the calabi - yau threefolds $ x $ constructed by c. schoen as fiber products of generic rational elliptic surfaces, we show that the action of the automorphism group of $ x $ on the k \ " ahler cone of $ x $ has a rationally polyhedral fundamental domain. the second author has conjectured that this statement will hold in general, the example presented here being the first non - trivial case in which the statement has been checked. the conjecture was motivated by the desire to use a construction of e. looijenga to compactify certain moduli spaces which arise in the study of conformal field theory and ` ` mirror symmetry. ' '
|
arxiv:alg-geom/9212004
|
we analyze the generic structure of einstein tensor projected onto a 2 - d spacelike surface s defined by unit timelike and spacelike vectors u _ i and n _ i respectively, which describe an accelerated observer ( see text ). assuming that flow along u _ i defines an approximate killing vector x _ i, we then show that near the corresponding rindler horizon, the flux j _ a = g _ ab x ^ b along the ingoing null geodesics k _ i normalised to have unit killing energy, given by j. k, has a natural thermodynamic interpretation. moreover, change in cross - sectional area of the k _ i congruence yields the required change in area of s under virtual displacements \ emph { normal } to it. the main aim of this note is to clearly demonstrate how, and why, the content of einstein equations under such horizon deformations, originally pointed out by padmanabhan, is essentially different from the result of jacobson, who employed the so called clausius relation in an attempt to derive einstein equations from such a clausius relation. more specifically, we show how a \ emph { very specific geometric term } [ reminiscent of hawking ' s quasi - local expression for energy of spheres ] corresponding to change in \ emph { gravitational energy } arises inevitably in the first law : $ de _ g / d \ lambda \ propto \ int _ h da \, r _ { ( 2 ) } $ ( see text ) - - the contribution of this purely geometric term would be missed in attempts to obtain area ( and hence entropy ) change by integrating the raychaudhuri equation.
|
arxiv:1010.2207
|
and remains hard to compute in general. = = applications = = = = = linear algebra and commutative algebra = = = if a ≠ 0, then the equation ax = b has a unique solution x in a field f, namely x = a ^ { - 1 } b. this immediate consequence of the definition of a field is fundamental in linear algebra. for example, it is an essential ingredient of gaussian elimination and of the proof that any vector space has a basis. the theory of modules ( the analogue of vector spaces over rings instead of fields ) is much more complicated, because the above equation may have several or no solutions. in particular systems of linear equations over a ring are much more difficult to solve than in the case of fields, even in the especially simple case of the ring z of the integers. = = = finite fields : cryptography and coding theory = = = a widely applied cryptographic routine uses the fact that discrete exponentiation, i. e., computing a ^ n = a · a · ⋯ · a ( n factors, for an integer n ≥ 1 ) in a ( large ) finite field fq can be performed much more efficiently than the discrete logarithm, which is the inverse operation, i. e., determining the solution n to an equation a ^ n = b. in elliptic curve cryptography, the multiplication in a finite field is replaced by the operation of adding points on an elliptic curve, i. e., the solutions of an equation of the form y ^ 2 = x ^ 3 + ax + b. finite fields are also used in coding theory and combinatorics. = = = geometry : field of functions = = = functions on a suitable topological space x into a field f can be added and multiplied pointwise, e. g., the product of two functions is defined by the product of their values within the domain : ( f · g ) ( x ) = f ( x ) · g ( x ). this makes these functions an f - commutative algebra. for having a field of functions, one must consider algebras of functions that are integral domains. in this case the ratios of two functions, i. e., expressions of the form f ( x ) / g ( x ), form a field, called field of functions. this occurs in two main cases. when x is a complex
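the asymmetry mentioned above — exponentiation cheap, discrete logarithm hard — is concrete even at toy sizes. square - and - multiply needs only o ( log n ) multiplications, while the naive logarithm search below is linear in p ( and every known general - purpose algorithm remains super - polynomial in the bit length of p ):

```python
def pow_mod(a, n, p):
    """Square-and-multiply: computes a^n mod p in O(log n) multiplications,
    which is why discrete exponentiation in a large finite field is cheap."""
    result, base = 1, a % p
    while n > 0:
        if n & 1:
            result = result * base % p
        base = base * base % p
        n >>= 1
    return result

def discrete_log_naive(a, b, p):
    """Brute-force search for the smallest n with a^n = b (mod p) -- cost
    linear in p, i.e. exponential in the bit length, illustrating the
    one-way asymmetry exploited in cryptography."""
    x = 1
    for n in range(p):
        if x == b:
            return n
        x = x * a % p
    return None
```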
|
https://en.wikipedia.org/wiki/Field_(mathematics)
|
we describe a new paradigm for understanding both relativistic motions and particle acceleration in the m87 jet : a magnetically dominated relativistic flow that naturally produces four relativistic magnetohydrodynamic ( mhd ) shocks ( forward / reverse fast and slow modes ). we apply this model to a set of optical super - and subluminal motions discovered by biretta and coworkers with the { \ em hubble space telescope } during 1994 - - 1998. the model concept consists of ejection of a { \ em single } relativistic poynting jet, which possesses a coherent helical ( poloidal + toroidal ) magnetic component, at the remarkably flaring point hst - 1. we are able to reproduce quantitatively proper motions of components seen in the { \ em optical } observations of hst - 1 with the same model we used previously to describe similar features in radio vlbi observations in 2005 - - 2006. this indicates that the quad relativistic mhd shock model can be applied generally to recurring pairs of super / subluminal knots ejected from the upstream edge of the hst - 1 complex as observed from radio to optical wavelengths, with forward / reverse fast - mode mhd shocks then responsible for observed moving features. moreover, we identify such intrinsic properties as the shock compression ratio, degree of magnetization, and magnetic obliquity and show that they are suitable to mediate diffusive shock acceleration of relativistic particles via the first - order fermi process. we suggest that relativistic mhd shocks in poynting - flux dominated helical jets may play a role in explaining observed emission and proper motions in many agns.
|
arxiv:1403.3477
|
we introduce a variant of the mac model ( hudson and manning, iclr 2018 ) with a simplified set of equations that achieves comparable accuracy, while training faster. we evaluate both models on clevr and cogent, and show that transfer learning with fine - tuning results in a 15 point increase in accuracy, matching the state of the art. finally, in contrast, we demonstrate that improper fine - tuning can actually reduce a model ' s accuracy as well.
|
arxiv:1811.06529
|
we discuss some easy statements dealing with linear inhomogeneous diophantine approximation. surprisingly, we did not find some of them in the literature.
|
arxiv:2205.14961
|
the advanced machine learning algorithm nestore ( next strong related earthquake ) was developed to forecast strong aftershocks in earthquake sequences and has been successfully tested in italy, western slovenia, greece, and california. nestore calculates the probability of aftershocks reaching or exceeding the magnitude of the main earthquake minus one and classifies clusters as type a or b based on a 0. 5 probability threshold. in this study, nestore was applied to japan using data from the japan meteorological agency catalog ( 1973 - 2024 ). due to japan ' s high seismic activity and class imbalance, new algorithms were developed to complement nestore. the first is a hybrid cluster identification method using etas - based stochastic declustering and deterministic graph - based selection. the second, repenese ( relevant features, class imbalance percentage, neighbour detection, selection ), is optimized for detecting outliers in skewed class distributions. a new seismicity feature was proposed, showing good results in forecasting cluster classes in japan. trained with data from 1973 to 2004 and tested from 2005 to 2023, the method correctly forecasted 75 % of a clusters and 96 % of b clusters, achieving a precision of 0. 75 and an accuracy of 0. 94 six hours after the mainshock. it accurately classified the 2011 tōhoku event cluster. near - real - time forecasting was applied to the sequence after the april 17, 2024 m6. 6 earthquake in shikoku, classifying it as a " type b cluster, " with validation expected on october 31, 2024.
|
arxiv:2408.12956
|
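The classification rule described in the abstract above can be sketched in a few lines. This is our own minimal illustration of the stated criteria (probability threshold 0.5, target magnitude = mainshock magnitude minus one), not NESTORE's actual implementation; the function names are hypothetical.

```python
# Sketch of the NESTORE-style cluster typing described above (assumed
# interface, not the published code). A cluster is "type A" when the
# forecast probability of a strong aftershock meets the 0.5 threshold.

def classify_cluster(p_strong_aftershock: float, threshold: float = 0.5) -> str:
    """Return 'A' or 'B' given the forecast probability of a strong aftershock."""
    return "A" if p_strong_aftershock >= threshold else "B"

def aftershock_is_strong(m_aftershock: float, m_mainshock: float) -> bool:
    """Magnitude criterion defining the target class: aftershock reaches
    or exceeds the mainshock magnitude minus one."""
    return m_aftershock >= m_mainshock - 1.0

print(classify_cluster(0.7))           # A
print(aftershock_is_strong(5.8, 6.6))  # True (5.8 >= 5.6)
```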
for networked control systems, cyber-security issues have gained much attention in recent years. in this paper, we consider the so-called zero dynamics attacks, which form an important class of false data injection attacks, with a special focus on the effects of quantization in a sampled-data control setting. when the attack signals must be quantized, some error is necessarily introduced, potentially increasing the chance of detection through the output of the system. in this paper, we show, however, that the attacker may reduce such errors by not quantizing the attack signal directly. we present two approaches for generating quantized attacks that keep the error in the output smaller than a specified level by exploiting knowledge of the system dynamics. the methods are based on a dynamic quantization technique and a modified version of zero dynamics attacks. numerical examples are provided to verify the effectiveness of the proposed methods.
|
arxiv:2303.11982
|
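To make the quantization-error issue in the abstract above concrete, here is a minimal sketch of the baseline it improves on: directly quantizing a signal with a uniform quantizer introduces a per-sample error of at most half the step size. This illustrates only the naive baseline, not the paper's dynamic-quantization method.

```python
# Illustration of the baseline error bound (not the paper's method):
# a uniform mid-tread quantizer perturbs each sample by at most step/2.
# The paper's approach instead shapes quantization using knowledge of the
# system dynamics so the *output* error stays below a chosen level.

def quantize(u: float, step: float) -> float:
    """Uniform mid-tread quantizer with the given step size."""
    return step * round(u / step)

step = 0.1
signal = [0.03, -0.26, 0.55, 1.234]
errors = [abs(u - quantize(u, step)) for u in signal]

# every per-sample error is bounded by half the quantization step
print(max(errors) <= step / 2 + 1e-12)  # True
```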
this work presents a 340-ghz frequency-modulated continuous-wave (fmcw) pulse-doppler radar. the radar system is based on a transceiver module with about one milliwatt output power and more than 30-ghz bandwidth. the front-end optics consists of an off-axis parabola fed by a horn antenna from the transceiver unit, resulting in a collimated radar beam. the digital radar waveform generation allows for coherent and arbitrary fmcw pulse waveforms. the performance in terms of sensitivity and resolution (range / cross-range / velocity) is demonstrated, and the system's ability to detect and map single particles (0.1-10 mm diameter), as well as clouds of particles, at a 5-m distance, is presented. a range resolution of 1 cm and a cross-range resolution of a few centimeters (3-db beam-width) allow for the characterization of the dynamics of particle clouds with a measurement voxel size of a few cubic centimeters. the monitoring of particle dynamics is of interest in several industrial applications, such as in the manufacturing of pharmaceuticals and the control / analysis of fluidized bed combustion reactors.
|
arxiv:2301.00558
|
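The range-resolution figure in the abstract above follows from the standard FMCW relation delta_R = c / (2·B). The 15 GHz effective-bandwidth figure below is our back-of-the-envelope assumption to match the reported 1 cm resolution, not a number from the paper.

```python
# Standard FMCW range-resolution formula: delta_R = c / (2 * B).
# The module's full ~30 GHz bandwidth would give ~0.5 cm; the reported
# 1 cm resolution corresponds to an effective (processed) bandwidth of
# roughly 15 GHz -- an assumption on our part, not stated in the paper.

C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Range resolution in meters for a given sweep bandwidth in Hz."""
    return C / (2.0 * bandwidth_hz)

print(f"{range_resolution(30e9) * 100:.2f} cm")  # 0.50 cm
print(f"{range_resolution(15e9) * 100:.2f} cm")  # 1.00 cm
```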
that allows large language models (llms) to solve a problem as a series of intermediate steps before giving a final answer. in 2022, google brain reported that chain-of-thought prompting improves reasoning ability by inducing the model to answer a multi-step problem with steps of reasoning that mimic a train of thought. chain-of-thought techniques were developed to help llms handle multi-step reasoning tasks, such as arithmetic or commonsense reasoning questions. for example, given the question "q: the cafeteria had 23 apples. if they used 20 to make lunch and bought 6 more, how many apples do they have?", google claims that a cot prompt might induce the llm to answer "a: the cafeteria had 23 apples originally. they used 20 to make lunch. so they had 23 - 20 = 3. they bought 6 more apples, so they have 3 + 6 = 9. the answer is 9." when applied to palm, a 540-billion-parameter language model, according to google, cot prompting significantly aided the model, allowing it to perform comparably with task-specific fine-tuned models on several tasks, achieving state-of-the-art results at the time on the gsm8k mathematical reasoning benchmark. it is possible to fine-tune models on cot reasoning datasets to enhance this capability further and stimulate better interpretability. an example of a cot prompt: q: {question} a: let's think step by step. as originally proposed by google, each cot prompt included a few q&a examples. this made it a few-shot prompting technique. however, according to researchers at google and the university of tokyo, simply appending the words "let's think step by step" has also proven effective, which makes cot a zero-shot prompting technique. openai claims that this prompt allows for better scaling, as a user no longer needs to formulate many specific cot q&a examples. = = = in-context learning = = = in-context learning refers to a model's ability to temporarily learn from prompts.
for example, a prompt may include a few examples for a model to learn from, such as asking the model to complete "maison → house, chat → cat, chien →" (the expected response being dog), an approach called few-shot learning. in-context learning is an emergent ability of large language models. it is an emergent property of model scale, meaning that breaks in downstream scaling
|
https://en.wikipedia.org/wiki/Prompt_engineering
|
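The two prompting styles described above can be sketched as prompt-string builders. This only constructs the strings; the model call itself is omitted, and the few-shot example is the one quoted in the text.

```python
# Sketch of the two CoT prompting styles described above: few-shot CoT
# prepends worked Q&A examples; zero-shot CoT just appends
# "Let's think step by step."

FEW_SHOT_EXAMPLE = (
    "Q: The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?\n"
    "A: The cafeteria had 23 apples originally. They used 20 to make lunch. "
    "So they had 23 - 20 = 3. They bought 6 more apples, so they have "
    "3 + 6 = 9. The answer is 9.\n"
)

def few_shot_cot(question: str) -> str:
    """Few-shot CoT: worked examples followed by the new question."""
    return f"{FEW_SHOT_EXAMPLE}\nQ: {question}\nA:"

def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: no examples, just the step-by-step trigger phrase."""
    return f"Q: {question}\nA: Let's think step by step."

print(zero_shot_cot("How many legs do 4 spiders have?"))
```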
vector addition systems with states (vass) provide a well-known and fundamental model for the analysis of concurrent processes and parameterized systems, and are also used as abstract models of programs in resource bound analysis. in this paper we study the problem of obtaining asymptotic bounds on the termination time of a given vass. in particular, we focus on the practically important case of obtaining polynomial bounds on termination time. our main contributions are as follows: first, we present a polynomial-time algorithm for deciding whether a given vass has a linear asymptotic complexity. we also show that if the complexity of a vass is not linear, it is at least quadratic. second, we classify vass according to quantitative properties of their cycles. we show that certain singularities in these properties are the key reason for non-polynomial asymptotic complexity of vass. in the absence of singularities, we show that the asymptotic complexity is always polynomial and of the form $\theta(n^k)$, for some integer $k \leq d$, where $d$ is the dimension of the vass. we present a polynomial-time algorithm computing the optimal $k$. for general vass, the same algorithm, which is based on a complete technique for the construction of ranking functions in vass, produces a valid lower bound, i.e., a $k$ such that the termination complexity is $\omega(n^k)$. our results are based on new insights into the geometry of vass dynamics, which hold the potential for further applicability to vass analysis.
|
arxiv:1804.10985
|
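To illustrate the $\theta(n^k)$ behaviour discussed above, here is a toy 2-dimensional VASS of our own construction (not an example from the paper) whose longest run from a configuration of size $n$ has length exactly $(n+1)^2$, i.e. $\theta(n^2)$ with $k = d = 2$. Two states shuttle mass between the counters, and each round trip pays one unit of a budget that starts at $n$.

```python
# Toy 2-D VASS with states s1, s2 (our own example, not from the paper):
#   s1 self-loop: (-1, +1)   transfer x -> y
#   s1 -> s2    : ( 0,  0)
#   s2 self-loop: (+1, -1)   transfer y -> x
#   s2 -> s1    : ( 0, -1)   pay one unit of budget
# Each round trip of length ~2m spends 1 unit of x+y, so from (n, 0) the
# longest run has Theta(n^2) steps. This scheduler realizes that run.

def longest_run_steps(n: int) -> int:
    """Length of the maximal run from configuration (n, 0) in state s1."""
    x, y, state, steps = n, 0, "s1", 0
    while True:
        if state == "s1":
            if x > 0:
                x, y = x - 1, y + 1   # s1 loop
            else:
                state = "s2"          # s1 -> s2 edge
            steps += 1
        else:
            if y > 1:
                x, y = x + 1, y - 1   # s2 loop
                steps += 1
            elif y == 1:
                y, state = 0, "s1"    # s2 -> s1 edge, budget decreases
                steps += 1
            else:
                return steps          # no transition enabled: terminated

# run length is (n + 1)^2, hence quadratic asymptotic complexity
for n in (2, 10):
    print(longest_run_steps(n), (n + 1) ** 2)
```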
parse trees are fundamental syntactic structures in both computational linguistics and compiler construction. we argue in this paper that, in both fields, there are good incentives for model-checking sets of parse trees for some word according to a context-free grammar. we put forward the adequacy of propositional dynamic logic (pdl) on trees in these applications, and study as a sanity check the complexity of the corresponding model-checking problem: although complete for exponential time in the general case, we find natural restrictions on grammars for our applications and establish complexities ranging from nondeterministic polynomial time to polynomial space in the relevant cases.
|
arxiv:1211.5256
|
for a complex analytic map f from n-space to p-space with n < p and with an isolated instability at the origin, the disentanglement of f is a local stabilization of f that is analogous to the milnor fibre for functions. for mono-germs it is known that the disentanglement is a wedge of spheres of possibly varying dimensions. in this paper we give a condition that allows us to deduce that the same is true for a large class of multi-germs.
|
arxiv:0807.3662
|