in this study, we analyze the recently proposed charge transfer fluctuations within a finite pseudo - rapidity space. as the charge transfer fluctuation is a measure of the local charge correlation length, it is capable of detecting inhomogeneity in the hot and dense matter created by heavy ion collisions. we predict that going from peripheral to central collisions, the charge transfer fluctuations at midrapidity should decrease substantially while the charge transfer fluctuations at the edges of the observation window should decrease by a small amount. these are consequences of having a strongly inhomogeneous matter where the qgp component is concentrated around midrapidity. we also show how to constrain the values of the charge correlation lengths in both the hadronic phase and the qgp phase using the charge transfer fluctuations.
arxiv:nucl-th/0506025
the kneser - poulsen conjecture says that if a finite collection of balls in a d - dimensional euclidean space is rearranged so that the distance between each pair of centers does not get smaller, then the volume of the union of these balls also does not get smaller. in this paper we prove that if in the initial configuration the intersection of any two balls has common points with no more than d + 1 other balls, then the conjecture holds.
arxiv:1006.0529
to defend a perimeter, but they cannot do so with 100 % certainty. heuristics may find an attacker within the network, but often generate so many alerts that critical alerts are missed. in a large enterprise, the alert volume may reach millions of alerts per day. security operations personnel cannot process most of the activity easily, yet it only takes one successful penetration to compromise an entire network. this means cyber - attackers can penetrate these networks and move unimpeded for months, stealing data and intellectual property. deception technology produces alerts that are the end product of a binary process. probability is essentially reduced to two values : 0 % and 100 %. any party that seeks to identify, ping, enter, or view any trap, or that utilizes a lure, is immediately identified as malicious by this behavior, because anyone touching these traps or lures should not be doing so. this certainty is an advantage over the many extraneous alerts generated by heuristics and probability - based detection. best practice shows that deception technology is not a stand - alone strategy. deception technology is an additional compatible layer to the existing defense - in - depth cyber defense. partner integrations make it most useful. the goal is to add protection against the most advanced and sophisticated human attackers that will successfully penetrate the perimeter. = = see also = = cybercrime network security proactive cyber defense = = references = = = = further reading = = lance spitzner ( 2002 ). honeypots tracking hackers. addison - wesley. isbn 0 - 321 - 10895 - 7. sean bodmer ; max kilger ; gregory carpenter ; jade jones ( 2012 ). reverse deception : organized cyber threat counter - exploitation. mcgraw - hill education. isbn 978 - 0071772495.
https://en.wikipedia.org/wiki/Deception_technology
distributed stream processing systems ( dsps ) like apache storm and spark streaming enable composition of continuous dataflows that execute persistently over data streams. they are used by internet of things ( iot ) applications to analyze sensor data from smart city cyber - infrastructure, and make active utility management decisions. as the ecosystem of such iot applications that leverage shared urban sensor streams continues to grow, applications will perform duplicate pre - processing and analytics tasks. this offers the opportunity to collaboratively reuse the outputs of overlapping dataflows, thereby improving the resource efficiency. in this paper, we propose \ emph { dataflow reuse algorithms } that, given a submitted dataflow, identify the intersection of reusable tasks and streams from a collection of running dataflows to form a \ emph { merged dataflow }. similar algorithms to unmerge dataflows when they are removed are also proposed. we implement these algorithms for the popular apache storm dsps, and validate their performance and resource savings for 35 synthetic dataflows based on public opmw workflows with diverse arrival and departure distributions, and on 21 real iot dataflows from riotbench.
arxiv:1709.03332
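The task-reuse idea above can be sketched as a signature-based merge: two tasks are shareable when they apply the same operation to recursively identical inputs. This is only a minimal illustration, not Storm's API; the dataflow encoding (a dict of `task -> (operation, inputs)`) and all task names are hypothetical, and a real merge would also remap task names between dataflows (here they coincide for brevity).

```python
def merge_dataflow(running, new):
    """merge `new` into `running`, reusing tasks whose (op, inputs)
    signature already exists in the running dataflow."""
    def signature(flow, task, cache):
        # a task's signature is its op plus its inputs' signatures
        if task not in cache:
            op, inputs = flow[task]
            cache[task] = (op, tuple(signature(flow, t, cache) for t in inputs))
        return cache[task]

    run_sigs = {}
    sig_to_task = {signature(running, t, run_sigs): t for t in running}
    merged, reused, new_sigs = dict(running), set(), {}
    for task in new:
        if signature(new, task, new_sigs) in sig_to_task:
            reused.add(task)      # already computed by a running dataflow
        else:
            merged[task] = new[task]
    return merged, reused

running = {"src": ("read_stream", ()), "clean": ("filter", ("src",))}
new = {"src": ("read_stream", ()), "clean": ("filter", ("src",)),
       "agg": ("window_avg", ("clean",))}
merged, reused = merge_dataflow(running, new)
print(sorted(reused))   # tasks shared with the running dataflow
```

Unmerging would walk the same signatures in reverse, dropping tasks whose only consumers belonged to the removed dataflow.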
at the future $ e ^ + e ^ - $ colliders, the doubly heavy baryon generated via the photoproduction mechanism is promisingly observable and can be well studied.
arxiv:2310.14315
we present a systematic study of isospin impurities ( $ \ alpha _ { \ rm isb } $ ) to the wave functions of $ t = 1 / 2 $, $ 11 \ leq a \ leq 47 $ mirror nuclei and the isospin - symmetry - breaking ( isb ) corrections ( $ \ delta _ { \ rm isb } ^ { \ rm v } $ ) to their ground state vector $ \ beta $ - decays using, for the first time, multi - reference charge - dependent density functional theory ( mr - dft ) that includes a strong - force - rooted class - iii interaction adjusted to correct for the nolen - schiffer anomaly in nuclear masses. we demonstrate that, unexpectedly, the strong - force - rooted isovector force gives rise to a large systematic increase of $ \ alpha _ { \ rm isb } $ and $ \ delta _ { \ rm isb } ^ { \ rm v } $ as compared to the results obtained within mr - dft that uses the coulomb interaction as the only source of isb. this, in turn, increases the central value of the $ v _ { \ rm ud } $ element of the ckm matrix extracted from the $ t = 1 / 2 $ mirrors, bringing it closer to the value obtained from the purely vector superallowed $ 0 ^ + \ to 0 ^ + $ transitions. in order to compute the value of $ v _ { \ rm ud } $, we performed a precision calculation of the fermi matrix elements in $ a = 19, 21, 35 $, and 37 mirror nuclei using a dft - rooted configuration - interaction model that includes all relevant axially - deformed particle - hole configurations built upon nilsson orbitals originating from the spherical $ sd $ shell. our calculations yield $ | v _ { \ rm ud } | = 0. 9736 ( 16 ) $.
arxiv:1909.09350
a perfect tensor of order $ d $ is a state of four $ d $ - level systems that is maximally entangled under any bipartition. these objects have attracted considerable attention in quantum information and many - body theory. perfect tensors generalize the combinatorial notion of orthogonal latin squares ( ols ). deciding whether ols of a given order exist has historically been a difficult problem. the case $ d = 6 $ proved particularly thorny, and was popularized by leonhard euler in terms of a putative constellation of " 36 officers ". it took more than a century to show that euler ' s puzzle has no solution. after yet another century, its quantum generalization was resolved in the affirmative : 36 entangled officers can be suitably arranged. however, the construction and verification of known instances relies on elaborate computer codes. ( in particular, leonhard would have had no means of dealing with such solutions to his own puzzle - - an unsatisfactory state of affairs ). in this paper, we present the first human - made order - $ 6 $ perfect tensors. we decompose the hilbert space $ ( \ mathbb { c } ^ 6 ) ^ { \ otimes 2 } $ of two quhexes into the direct sum $ ( \ mathbb { c } ^ 3 ) ^ { \ otimes 2 } \ oplus ( \ mathbb { c } ^ 3 ) ^ { \ otimes 3 } $ comprising superpositions of two - qutrit and three - qutrit states. perfect tensors arise when certain clifford unitaries are applied separately to the two sectors. technically, our construction realizes solutions to the perfect functions ansatz recently proposed by rather. generalizing an observation of bruzda and \. zyczkowski, we show that any solution of this kind gives rise to a two - unitary complex hadamard matrix, of which we construct infinite families. finally, we sketch a formulation of the theory of perfect tensors in terms of quasi - orthogonal decompositions of matrix algebras.
arxiv:2504.15401
efficient riccati equation based techniques for the approximate solution of discrete time linear regulator problems are restricted in their application to problems with quadratic terminal payoffs. where non - quadratic terminal payoffs are required, these techniques fail due to the attendant non - quadratic value functions involved. in order to compute these non - quadratic value functions, it is often necessary to appeal directly to dynamic programming in the form of grid - or element - based iterations for the value function. these iterations suffer from poor scalability with respect to problem dimension and time horizon. in this paper, a new max - plus based method is developed for the approximate solution of discrete time linear regulator problems with non - quadratic payoffs. this new method is underpinned by the development of new fundamental solutions to such linear regulator problems, via max - plus duality. in comparison with a typical grid - based approach, a substantial reduction in computational effort is observed in applying this new max - plus method. a number of simple examples are presented that illustrate this and other observations.
arxiv:1306.5060
an efficient scheme of generating an ultra - tightly focused proton bunch with radius on the nanometer scale is proposed. a needlelike proton filament of transverse size on the nanometer scale is obtained based on multi - dimension particle - in - cell ( pic ) simulations. the regime is achieved via a laser irradiating a solid target with a pre - channeled density profile. the theoretical analysis shows that the transverse electric field dramatically transits from a defocusing dipole to a double - dipole structure when the initial target density distribution changes from uniform to pre - channeled. the inner dipole of the electric field tightly focuses the proton beam to the order of magnitude of nanometers. 3d simulations verify the scheme under realistic conditions. various pre - channeled density profiles, including linear, parabolic and arbitrarily stepped ones, prove to work well for the regime, demonstrating the robustness and practicality of the scheme in experiment.
arxiv:2205.08408
the source coding problem with action - dependent side information at the decoder has recently been introduced to model data acquisition in resource - constrained systems. in this paper, an efficient algorithm for numerical computation of the rate - distortion - cost function for this problem is proposed, and a convergence proof is provided. moreover, a two - stage code design based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical wyner - ziv codes, one for each action. specific coding / decoding strategies are designed based on ldgm codes and message passing. through numerical examples, the proposed code design is shown to achieve performance close to the lower bound dictated by the rate - distortion - cost function.
arxiv:1301.6190
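Numerical computation of rate-distortion curves is classically done with the Blahut-Arimoto iteration, which the paper's rate-distortion-cost algorithm extends to action-dependent side information. As a hedged sketch of the underlying machinery only (not the paper's algorithm), here is plain Blahut-Arimoto for one point of the curve, checked against the known closed form for a uniform binary source with Hamming distortion.

```python
import numpy as np

def blahut_arimoto(p_x, dist, s, n_iter=500):
    """classic blahut-arimoto iteration for one point of the
    rate-distortion curve at slope parameter s (rates in nats)."""
    n_y = dist.shape[1]
    q = np.full(n_y, 1.0 / n_y)              # output marginal
    for _ in range(n_iter):
        # optimal test channel given the current output marginal
        w = q[None, :] * np.exp(-s * dist)
        w /= w.sum(axis=1, keepdims=True)
        q = p_x @ w                           # re-estimate the marginal
    d_avg = float((p_x[:, None] * w * dist).sum())
    rate = float((p_x[:, None] * w * np.log(w / q[None, :])).sum())
    return rate, d_avg

# uniform binary source with hamming distortion
p_x = np.array([0.5, 0.5])
dist = np.array([[0.0, 1.0], [1.0, 0.0]])
rate, d = blahut_arimoto(p_x, dist, s=2.0)
print(round(rate, 3), round(d, 3))
```

For this symmetric source the iteration reproduces the analytic result d = 1/(1+e^s) and r = ln 2 - h_b(d).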
a complete astrophysical and dynamical study of the close visual binary system ( cvbs ) ( a7v + f0v ), finsen 350, is presented. beginning with the entire observational spectral energy distribution ( sed ) and the magnitude difference between the subcomponents, al - wardat ' s complex method for analyzing close visual binary stars ( cvbs ) was applied as a reverse method of building the individual and entire synthetic seds of the system. this was combined with docobo ' s analytic method to calculate the new orbits. although possible short ( $ \ approx $ 9 years ) and long period ( $ \ approx $ 18 years ) orbits could be considered taking into account the similar results of the stellar masses obtained for each of them ( 3. 07 and 3. 41 $ m _ { \ odot } $, respectively ), we confirmed that the short solution is correct. in addition, other physical, geometrical and dynamical parameters of this system such as the effective temperatures, surface gravity accelerations, absolute magnitudes, radii, the dynamical parallax, etc., are reported. the main sequence phase of both components, with an age of around 0. 79 gyr, is confirmed.
arxiv:1802.03811
distributed multiple - input multiple - output ( d - mimo ) is a promising technology for simultaneous communication and positioning. however, phase synchronization between multiple access points in d - mimo is challenging and methods that function without the need for phase synchronization are highly desired. therefore, we present a method for d - mimo that performs direct positioning of a moving device based on the delay - doppler characteristics of the channel state information ( csi ). our method relies on particle - filter - based bayesian inference with a state - space model. we use recent measurements from a sub - 6 ghz d - mimo ofdm system in an industrial environment to demonstrate near - centimeter accuracy under partial line - of - sight ( los ) conditions and decimeter accuracy under fully obstructed los.
arxiv:2404.15936
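The particle-filter-based Bayesian inference can be illustrated with a deliberately simplified model: a bootstrap filter tracking a 1-d constant-velocity device from noisy position measurements. The delay-Doppler CSI likelihood of the paper is replaced here by a Gaussian position likelihood, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps, dt = 2000, 50, 0.1
meas_std, proc_std = 0.5, 0.1

# ground truth: device moving at 1 m/s; noisy position observations
truth = np.arange(1, n_steps + 1) * dt
obs = truth + rng.normal(0, meas_std, n_steps)

# particle state: [position, velocity]; position starts known at 0
particles = np.zeros((n_particles, 2))
particles[:, 1] = rng.normal(1.0, 0.5, n_particles)  # velocity prior

estimates = []
for z in obs:
    # predict: constant-velocity motion plus process noise
    particles[:, 0] += particles[:, 1] * dt
    particles += rng.normal(0, proc_std, particles.shape)
    # update: weight particles by the measurement likelihood
    w = np.exp(-0.5 * ((z - particles[:, 0]) / meas_std) ** 2)
    w /= w.sum()
    estimates.append(float(w @ particles[:, 0]))
    # resample to avoid weight degeneracy
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

rmse = float(np.sqrt(np.mean((np.array(estimates) - truth) ** 2)))
print(f"position rmse: {rmse:.2f} m")
```

The filtered track beats the raw measurement noise because the state-space model fuses the motion prior with each observation.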
in this paper, we have proposed a system based on the k - l transform to recognize different hand gestures. the system consists of five steps : skin filtering, palm cropping, edge detection, feature extraction, and classification. first, the hand was detected using skin filtering, and palm cropping was performed to extract only the palm portion of the hand. the extracted image was then processed using the canny edge detection technique to extract the outline image of the palm. after palm extraction, the features of the hand were extracted using the k - l transform technique, and finally the input gesture was recognized using a proper classifier. in our system, we have tested 10 different hand gestures, and the recognition rate obtained was 96 %. hence we propose an easy approach to recognize different hand gestures.
arxiv:1306.2599
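The K-L (Karhunen-Loeve) transform used for feature extraction is projection onto principal components. A minimal numpy sketch, where the array shapes and toy data are assumptions rather than the paper's setup:

```python
import numpy as np

def kl_features(images, n_components=10):
    """project flattened edge maps onto their top principal
    directions (the karhunen-loeve / k-l transform)."""
    X = images.reshape(len(images), -1).astype(float)
    X -= X.mean(axis=0)                     # center the data
    # right singular vectors = eigenvectors of the covariance matrix
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T          # k-l coefficients

# toy usage: 20 random 32x32 binary "edge maps" (e.g. canny output)
rng = np.random.default_rng(0)
feats = kl_features(rng.integers(0, 2, (20, 32, 32)), n_components=5)
print(feats.shape)
```

The resulting coefficient vectors would then be fed to the classifier in the final step.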
the paper describes some probabilistic and combinatorial aspects of the nonlinear fourier transform associated with the akns - zs problems. in the first of the two main results, we show that a family of polytopes that appear in a power expansion of the nonlinear fourier transforms is distributed according to the beta probability distribution. we establish this result by studying an euler type discretization of the nonlinear fourier transform. this approach provides our second main result, discovering a novel discrete probability distribution that approximates the beta distribution. the numbers of alternating ordered partitions of an integer into distinct parts are distributed according to our new distribution. using another discretization, we also find a formula for the values of alternating ordered partitions into non - distinct parts. we find a connection between this discretization and the multinomial distribution.
arxiv:2108.10158
this paper presents a new approach to identifying and eliminating mislabeled training instances for supervised learning. the goal of this approach is to improve classification accuracies produced by learning algorithms by improving the quality of the training data. our approach uses a set of learning algorithms to create classifiers that serve as noise filters for the training data. we evaluate single algorithm, majority vote and consensus filters on five datasets that are prone to labeling errors. our experiments illustrate that filtering significantly improves classification accuracy for noise levels up to 30 percent. an analytical and empirical evaluation of the precision of our approach shows that consensus filters are conservative at throwing away good data at the expense of retaining bad data and that majority filters are better at detecting bad data at the expense of throwing away good data. this suggests that for situations in which there is a paucity of data, consensus filters are preferable, whereas majority vote filters are preferable for situations with an abundance of data.
arxiv:1106.0219
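The majority-vote and consensus filters can be sketched as follows: several base learners predict each training instance's label with the instance held out, and the instance is flagged as mislabeled when the learners disagree with its given label (all of them for consensus, more than half for majority). The sketch below substitutes simple leave-one-out k-NN classifiers for the paper's learning algorithms, so it illustrates the voting logic only.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k):
    # brute-force k-nearest-neighbour majority vote (binary labels)
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    votes = y_train[np.argsort(d, axis=1)[:, :k]]
    return (votes.mean(axis=1) > 0.5).astype(int)

def filter_noise(X, y, ks=(1, 3, 5), mode="consensus"):
    """return a boolean mask of instances to keep."""
    n = len(y)
    disagree = []
    for k in ks:
        preds = np.empty(n, dtype=int)
        for i in range(n):
            mask = np.arange(n) != i        # leave the instance out
            preds[i] = knn_predict(X[mask], y[mask], X[i:i + 1], k)[0]
        disagree.append(preds != y)
    disagree = np.stack(disagree)
    if mode == "consensus":
        noisy = disagree.all(axis=0)        # all filters must object
    else:                                   # majority vote
        noisy = disagree.sum(axis=0) > len(ks) / 2
    return ~noisy

# two well-separated clusters with a few flipped labels
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[:5] = 1                                   # 5 mislabeled points
keep = filter_noise(X, y, mode="majority")
print(int((~keep).sum()), "instances flagged")
```

Swapping `mode` between "majority" and "consensus" reproduces the trade-off described above: consensus discards less good data at the cost of retaining more bad data.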
we introduce another notion of bounded logarithmic mean oscillation in the n - torus and give an equivalent definition in terms of boundedness of multi - parameter paraproducts from the dyadic little bmo of cotlar - sadosky to the product bmo of chang - fefferman. we also obtain a sufficient condition for the boundedness of iterated commutators with hilbert transforms between the strong notions of these two spaces.
arxiv:1205.6328
this paper proposes qdfo, a dataflow - based optimization approach to microsoft qir. qdfo consists of two main functions : one is to preprocess the qir code so that the llvm optimizer can capture more optimization opportunities, and the other is to optimize the qir code so that duplicate loading and constructing of qubits and qubit arrays can be avoided. we evaluated our work on the ibm challenge dataset ; the results show that our method effectively reduces redundant operations in the qir code. we also completed a preliminary implementation of qdfo and conducted a case study on real - world code. our observational study indicates that the llvm optimizer can further optimize the qir code preprocessed by our algorithm. both the experiments and the case study demonstrate the effectiveness of our approach.
arxiv:2406.19592
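Avoiding duplicate loading of qubits and qubit arrays is, at heart, a value-numbering / common-subexpression-elimination pass. The toy linear IR below is hypothetical (not actual QIR), and the pass assumes loads are pure with no intervening writes:

```python
def eliminate_duplicate_loads(instructions):
    """forward pass replacing repeated pure 'load' results with the
    first occurrence (toy value numbering over a linear ir)."""
    seen = {}       # (op, operand) -> first result register
    alias = {}      # removed register -> canonical register
    out = []
    for res, op, arg in instructions:
        arg = alias.get(arg, arg)          # rewrite operands first
        if op == "load":
            key = (op, arg)
            if key in seen:
                alias[res] = seen[key]     # duplicate: drop instruction
                continue
            seen[key] = res
        out.append((res, op, arg))
    return out

prog = [("%1", "load", "q0"), ("%2", "h", "%1"),
        ("%3", "load", "q0"),              # duplicate load of q0
        ("%4", "x", "%3")]
for ins in eliminate_duplicate_loads(prog):
    print(ins)
```

A production pass over real QIR would additionally have to respect control flow and any instruction that can invalidate a load.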
muon detectors and neutron monitors were recently installed at syowa station, in the antarctic, to observe different types of secondary particles resulting from cosmic ray interactions simultaneously from the same location. continuing observations will give new insight into the response of muon detectors to atmospheric and geomagnetic effects. operation began in february, 2018 and the system has been stable with a duty - cycle exceeding 94 %. muon data shows a clear seasonal variation, which is expected from the atmospheric temperature effect. we verified successful operation by showing that the muon and neutron data are consistent with those from other locations by comparing intensity variations during a space weather event. we have established a web page to make real time data available with interactive graphics ( http : / / polaris. nipr. ac. jp / ~ cosmicrays / ).
arxiv:2101.09887
understanding the motion of the debris cloud produced by an anti - satellite test can help us to assess the danger of these tests. this study presents the orbit status of 57 fragments of the india anti - satellite test observed by celestrak and presented in the norad two - line element sets. 10 of these observed fragments have apogee altitudes larger than 1000. 0 km, the maximum being 1725. 7 km. we also numerically calculated the number of debris fragments ; the results show that the number of fragments with diameter larger than 0. 2 m is 14, the number with diameter larger than 0. 01 m is 6587, and the number with diameter larger than 0. 001 m is 7. 22e + 5. secondary collisions of the debris will produce more fragments in space. the lifetime of the fragments depends on the initial orbit parameters and the sizes of the debris.
arxiv:2008.05142
creative technology ltd., or creative labs pte ltd., is a singaporean multinational electronics company mainly dealing with audio technologies and products such as speakers, headphones, sound cards and other digital media. founded by sim wong hoo, creative was highly influential in the advancement of pc audio in the 1990s following the introduction of its sound blaster card and technologies ; the company continues to develop sound blaster products including embedding them within partnered mainboard manufacturers and laptops. the company also has overseas offices in shanghai, tokyo, dublin and the silicon valley. creative technology has been listed on the singapore exchange ( sgx ) since 1994. = = history = = = = = 1981 – 1996 = = = creative technology was founded in 1981 by childhood friends and ngee ann polytechnic schoolmates sim wong hoo and ng kai wa. originally a computer repair shop in pearl ' s centre in chinatown, the company eventually developed an add - on memory board for the apple ii computer. later, creative spent $ 500, 000 developing the cubic ct, an ibm - compatible pc adapted for the chinese language and featuring multimedia features like enhanced color graphics and a built - in audio board capable of producing speech and melodies. with lack of demand for multilingual computers and few multimedia software applications available, the cubic was a commercial failure. shifting focus from language to music, creative developed the creative music system, a pc add - on card. sim established creative labs, inc. in the united states ' silicon valley and convinced software developers to support the sound card, renamed game blaster and marketed by radioshack ' s tandy division. the success of this audio interface led to the development of the standalone sound blaster sound card, introduced at the 1989 comdex show just as the multimedia pc market, fueled by intel ' s 386 cpu and microsoft windows 3. 0, took off. 
the success of sound blaster helped grow creative ' s revenue from us $ 5. 4 million in 1989 to us $ 658 million in 1994. in 1993, the year after creative ' s 1992 initial public offering, former ashton - tate ceo ed esber joined creative labs as ceo to assemble a management team to support the company ' s rapid growth. esber brought in a team of us executives, including rich buchanan ( graphics ), gail pomerantz ( marketing ), and rich sorkin ( sound products, and later communications, oem and business development ). this group played key roles in reversing a brutal market share decline caused by
https://en.wikipedia.org/wiki/Creative_Technology
semi - leptonic and leptonic decays of b - mesons are important probes for testing the sm and theories beyond it because of their relative cleanliness and far fewer theoretical uncertainties. in semi - leptonic decays based on the quark level transition $ b \ to s \ tau ^ + \ tau ^ - $, apart from the branching ratio one can study many other ( possible ) observables associated with the final state leptons, like the lepton pair forward backward asymmetry, lepton polarization asymmetries etc. but as proposed recently, if we can tag the b - meson then one can measure the polarization asymmetries of both the leptons. here we will study the polarization asymmetries of both the final state leptons in the sm and the minimal supersymmetric extension ( mssm ) of it.
arxiv:hep-ph/0305242
the measurement of the mass of the w boson is one of the prime goals of the tevatron experiments. in this contribution, a review is given of the most recent determinations of the w boson mass ( mw ) at the tevatron. the combined tevatron result, mw = 80. 420 + / - 0. 031 gev, is now more precise than the combined lep result, leading to a world average value of mw = 80. 399 + / - 0. 023 gev.
arxiv:1009.2903
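The world average quoted above is what an inverse-variance weighted combination yields when correlations are neglected. As a worked check, combining the Tevatron value with an assumed LEP value of 80.376 +/- 0.033 GeV (the abstract quotes only the combined result) reproduces the stated world average:

```python
def combine(measurements):
    """inverse-variance weighted average of (value, error) pairs,
    neglecting correlations between the measurements."""
    weights = [1.0 / e**2 for _, e in measurements]
    mean = sum(w * m for w, (m, _) in zip(weights, measurements)) / sum(weights)
    err = (1.0 / sum(weights)) ** 0.5
    return mean, err

tevatron = (80.420, 0.031)
lep = (80.376, 0.033)    # assumed lep value; the abstract quotes
                         # only the combined world average
mw, err = combine([tevatron, lep])
print(f"mw = {mw:.3f} +/- {err:.3f} gev")   # -> mw = 80.399 +/- 0.023 gev
```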
it is a well known analytic result in general relativity that the 2 - dimensional area of the apparent horizon of a black hole remains invariant regardless of the motion of the observer, and in fact is independent of the $ t = constant $ slice, which can be quite arbitrary in general relativity. nonetheless the explicit computation of horizon area is often substantially more difficult in some frames ( complicated by the coordinate form of the metric ), than in other frames. here we give an explicit demonstration for very restricted metric forms of ( schwarzschild and kerr ) vacuum black holes. in the kerr - schild coordinate expression for these spacetimes they have an explicit lorentz - invariant form. we consider { \ it boosted } versions with the black hole moving through the coordinate system. since these are stationary black hole spacetimes, the apparent horizons are two dimensional cross sections of their event horizons, so we compute the areas of apparent horizons in the boosted space with ( boosted ) $ t = constant $, and obtain the same result as in the unboosted case. note that while the invariance of area is generic, we deal only with black holes in the kerr - schild form, and consider only one particularly simple change of slicing which amounts to a boost. even with these restrictions we find that the results illuminate the physics of the horizon as a null surface and provide a useful pedagogical tool. as far as we can determine, this is the first explicit calculation of this type demonstrating the area invariance of horizons. further, these calculations are directly relevant to transformations that arise in computational representation of moving black holes. we present an application of this result to initial data for boosted black holes.
arxiv:0708.0276
the power law $ 1 / f ^ { \ alpha } $ in the power spectrum characterizes the fluctuating observables of many complex natural systems. considering the energy levels of a quantum system as a discrete time series where the energy plays the role of time, the level fluctuations can be characterized by the power spectrum. using a family of quantum billiards, we analyze the order to chaos transition in terms of this power spectrum. a power law $ 1 / f ^ { \ alpha } $ is found at all the transition stages, and it is shown that the exponent $ \ alpha $ is related to the chaotic component of the classical phase space of the quantum system.
arxiv:cond-mat/0502130
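The power-spectrum characterization can be checked numerically in the integrable limit: for an uncorrelated (Poisson) spectrum the $\delta_n$ statistic behaves as a random walk, so its averaged power spectrum should follow $1/f^{\alpha}$ with $\alpha \approx 2$ (chaotic, GOE-like spectra would instead give $\alpha \approx 1$). A sketch of that check, with the ensemble sizes chosen ad hoc:

```python
import numpy as np

rng = np.random.default_rng(0)
n_levels, n_realizations = 512, 200
log_pk = np.zeros(n_levels // 2 - 1)

for _ in range(n_realizations):
    # poisson (uncorrelated) spectrum: unit-mean exponential spacings
    levels = np.cumsum(rng.exponential(1.0, n_levels))
    delta = levels - np.arange(1, n_levels + 1)   # the delta_n statistic
    pk = np.abs(np.fft.rfft(delta)[1:n_levels // 2]) ** 2
    log_pk += np.log(pk)
log_pk /= n_realizations

# fit log <p(f)> vs log f over the low-frequency region
freqs = np.arange(1, n_levels // 2)
cut = n_levels // 8
alpha = -np.polyfit(np.log(freqs[:cut]), log_pk[:cut], 1)[0]
print(round(float(alpha), 2))
```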
we study four - dimensional gauge theories on oriented and non - spin spacetime manifolds. on such manifolds, each line operator arises only either as a boson or a fermion. based on physical arguments, a method of systematically assigning spin labels to line operators is proposed, and several consistency checks are performed. this is used to classify all possible sets of allowed line operators - - including their spins - - for gauge theories with simple lie algebras. the lagrangian descriptions of the theories with these sets of allowed line operators are given. finally, the one - form symmetries of these theories are studied by coupling to background gauge fields, and their ' t hooft anomalies are computed.
arxiv:1911.00589
it is investigated under which conditions an adiabatic adaption of the dynamic and spectral information of vector mesons to the changing medium in heavy ion collisions, as assumed in schematic model calculations and microscopic transport simulations, is a valid assumption. to this end, time dependent medium modifications of low mass vector mesons are studied within a non - equilibrium quantum field theoretical description. timescales for the adaption of the spectral properties are given and non - equilibrium dilepton yields are calculated, leading to the result that memory effects are not negligible for most scenarios.
arxiv:hep-ph/0504278
we consider how to forecast progress in the domain of quantum computing. for this purpose we collect a dataset of quantum computer systems to date, scored on their physical qubits and gate error rate, and we define an index combining both metrics, the generalized logical qubit. we study the relationship between physical qubits and gate error rate, and tentatively conclude that they are positively correlated ( albeit with some room for doubt ), indicating a frontier of development that trades off between them. we also apply a log - linear regression on the metrics to provide a tentative upper bound on how much progress can be expected over time. within the ( generally optimistic ) assumptions of our model, including the key assumption that exponential progress in qubit count and gate fidelity will continue, we estimate that proof - of - concept fault - tolerant computation based on superconductor technology is unlikely ( < 5 % confidence ) to be exhibited before 2026, and that quantum devices capable of factoring rsa - 2048 are unlikely ( < 5 % confidence ) to exist before 2039. it is of course possible that these milestones will in fact be reached earlier, but this would require faster progress than has yet been seen.
arxiv:2009.05045
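A log-linear regression of this kind amounts to fitting a straight line to a log-metric versus time and reading off a doubling time. The milestone data below are illustrative placeholders, not the paper's dataset:

```python
import numpy as np

# hypothetical (year, physical-qubit) milestones -- placeholders only
years = np.array([2016, 2017, 2018, 2019, 2020])
qubits = np.array([5, 16, 50, 53, 65])

# log-linear model: log(qubits) = a * (year - 2016) + b
a, b = np.polyfit(years - 2016, np.log(qubits), 1)
doubling_time = np.log(2) / a              # years per doubling

def predicted(year):
    return float(np.exp(a * (year - 2016) + b))

print(f"doubling time ~ {doubling_time:.1f} yr, "
      f"2026 projection ~ {predicted(2026):.0f} qubits")
```

Extrapolations of this kind inherit the fit's assumption that exponential growth continues, which is exactly the caveat the abstract flags.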
we investigate the consequence of the dimension reduction on the magnetic anisotropy of fept and copt nanoparticles. using an extension of the magnetic anisotropy model of n \ ' eel, we show that, due to a statistical finite size effect, chemically disordered clusters can display a magnetic anisotropy energy ( mae ) as high as 0. 5 \ times10 ^ 6 j / m3, more than one order of magnitude higher than the bulk mae. concerning l10 ordered clusters, we show that the surface induces a reduction of the mae as compared to the bulk, due to the symmetry breaking at the cluster surface, which modifies the chemical order.
arxiv:1105.6292
the born cross sections of $ e ^ { + } e ^ { - } \ to \ sigma ^ { 0 } \ bar { \ sigma } ^ { 0 } $ are measured at center - of - mass energies from $ 2. 3864 $ to $ 3. 0200 $ gev using data samples with an integrated luminosity of $ 328. 5 $ pb $ ^ { - 1 } $ collected with the besiii detector operating at the bepcii collider. the analysis makes use of a novel reconstruction method for energies near production threshold, while a single - tag method is employed at other center - of - mass energies. the measured cross sections are consistent with earlier results from babar, with a substantially improved precision. the cross - section lineshape can be well described by a perturbative qcd - driven energy function. in addition, the effective form factors of the $ \ sigma ^ { 0 } $ baryon are determined. the results provide precise experimental input for testing various theoretical predictions.
arxiv:2110.04510
among the solar proxies studied in the sun in time, k cet ( hd 20630 ) stands out as potentially having a mass very close to solar and a young age. in this study, we monitored the magnetic field and the chromospheric activity from the ca ii h & k lines of k cet. we used least - squares deconvolution ( lsd, donati et al. 1997 ), simultaneously extracting the information contained in all 8, 000 photospheric lines of the echelogram ( k1 type star ). to reconstruct a reliable magnetic map and characterize the surface differential rotation of k cet we used 14 exposures spread over 2 months, in order to cover at least two rotational cycles ( prot ~ 9. 2 days ). the lsd technique was applied to detect the zeeman signature of the magnetic field in each of our 14 observations and to measure its longitudinal component. in order to reconstruct the magnetic field geometry of k cet, we applied the zeeman doppler imaging ( zdi ) inversion method. zdi revealed a structure in the radial magnetic field consisting of a polar magnetic spot. in this study, we present the first - look results of a high - resolution spectropolarimetric campaign to characterize the activity and the magnetic fields of this young solar proxy.
arxiv:1310.7620
we investigate low - temperature transport characteristics of a side - coupled double quantum dot where only one of the dots is directly connected to the leads. we observe fano resonances, which arise from interference between discrete levels in one dot and the kondo effect, or cotunneling in general, in the other dot, playing the role of a continuum. the kondo resonance is partially suppressed by destructive fano interference, reflecting novel fano - kondo competition. we also present a theoretical calculation based on the tight - binding model with slave boson mean field approximation, which qualitatively reproduces the experimental findings.
arxiv:0912.1926
the flow of a gas through a porous medium is considered in the case of pressure - dependent permeability. approximate self - similar solutions of the boundary - value problems are found.
arxiv:1409.8236
we present a simpler proof of the existence of an exact number of one or more limit cycles for the lienard system $ \ dot { x } = y - f ( x ) $, $ \ dot { y } = - g ( x ) $, under weaker conditions on the odd functions $ f ( x ) $ and $ g ( x ) $ as compared to those available in the literature. we also give improved estimates of the amplitude of the limit cycle of the van der pol equation for various values of the nonlinearity parameter. moreover, the amplitude is shown to be independent of the asymptotic nature of $ f $ as $ | x | \ to \ infty $.
arxiv:1008.2372
the last phase of the formation of rocky planets is dominated by collisions among moon - to mars - sized planetary embryos. simulations of this phase need to handle the difficulty of including the post - impact material without saturating the numerical integrator. a common approach is to include the collision - generated material by clustering it into few bodies with the same mass and uniformly scattering them around the collision point. however, this approach oversimplifies the properties of the collision material by neglecting features that can play important roles in the final structure and composition of the system. in this study, we present a statistical analysis of the orbital architecture, mass, and size distributions of the material generated through embryo - embryo collisions and show how they can be used to develop a model that can be directly incorporated into the numerical integrations. for instance, results of our analysis indicate that the masses of the fragments follow an exponential distribution with an exponent of $ - 2. 21 \ pm0. 17 $ over the range of $ 10 ^ { - 7 } $ to $ 2 \ times 10 ^ { - 2 } $ earth - masses. the distribution of the post - impact velocities show that a large number of fragments are scattered toward the central star. the latter is a new finding that may be quite relevant to the delivery of material from the outer regions of the asteroid belt to the accretion zones of terrestrial planets. finally, we present an analytical model for the 2d distribution of fragments that can be directly incorporated into numerical integrations.
arxiv:2110.02977
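the fragment - mass distribution reported above ( exponent $ - 2. 21 \ pm 0. 17 $ over $ 10 ^ { - 7 } $ to $ 2 \ times 10 ^ { - 2 } $ earth masses ) is a truncated power law, which can be sampled by inverting its cdf. a minimal illustrative sketch, not the paper's implementation ( function name and defaults are ours ):

```python
import numpy as np

def sample_power_law(alpha, m_min, m_max, n, seed=0):
    """Inverse-CDF sampling from p(m) ∝ m**alpha on [m_min, m_max] (alpha != -1)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    # CDF of the truncated power law inverts in closed form
    a, b = m_min ** (alpha + 1), m_max ** (alpha + 1)
    return (a + u * (b - a)) ** (1.0 / (alpha + 1))

# fragment masses in Earth masses, using the exponent quoted in the abstract
masses = sample_power_law(-2.21, 1e-7, 2e-2, 10_000)
```

with this exponent most of the sampled fragments sit near the lower mass cut, as expected for a steep power law.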
this paper defines and discusses mouse level computational intelligence ( mlci ) as a grand challenge for the coming century. it provides a specific roadmap to reach that target, citing relevant work and review papers and discussing the relation to funding priorities in two nsf funding activities : the ongoing energy, power and adaptive systems program ( epas ) and the recent initiative in cognitive optimization and prediction ( copn ). it elaborates on the first step, vector intelligence, a challenge in the development of universal learning systems, which itself will require considerable new research to attain. this in turn is a crucial prerequisite to true functional understanding of how mammal brains achieve such general learning capabilities.
arxiv:1404.0554
in spin - polarized itinerant electron systems, collective spin - wave modes arise from dynamical exchange and correlation ( xc ) effects. we here consider spin waves in doped paramagnetic graphene with adjustable zeeman - type band splitting. the spin waves are described using time - dependent spin - density - functional response theory, treating dynamical xc effects within the slater and singwi - tosi - land - sjolander approximations. we obtain spin - wave dispersions and spin stiffnesses as a function of doping and spin polarization, and discuss prospects for their experimental observation.
arxiv:2110.00045
the wasserstein distance on multivariate non - degenerate gaussian densities is a riemannian distance. after reviewing the properties of the distance and the metric geodesic, we present an explicit form of the riemannian metric on positive - definite matrices and compute its tensor form with respect to the trace inner product. the tensor is a matrix which is the solution to a lyapunov equation. we compute the explicit formula for the riemannian exponential, the normal coordinates charts and the riemannian gradient. finally, the levi - civita covariant derivative is computed in matrix form together with the differential equation for the parallel transport. while all computations are given in matrix form, we also discuss the use of a special moving frame.
arxiv:1801.09269
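the gaussian wasserstein distance discussed above has a well - known closed form : $ w _ 2 ^ 2 = \ | m _ 1 - m _ 2 \ | ^ 2 + tr ( s _ 1 + s _ 2 - 2 ( s _ 1 ^ { 1 / 2 } s _ 2 s _ 1 ^ { 1 / 2 } ) ^ { 1 / 2 } ) $. a minimal numpy sketch ( helper names are ours, not from the paper ):

```python
import numpy as np

def _sqrtm_psd(A):
    """Symmetric PSD matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def gaussian_w2(m1, S1, m2, S2):
    """2-Wasserstein distance between N(m1, S1) and N(m2, S2)."""
    r1 = _sqrtm_psd(S1)
    cross = _sqrtm_psd(r1 @ S2 @ r1)
    # Bures term; clip tiny negative values from floating-point error
    bures = max(np.trace(S1 + S2 - 2.0 * cross), 0.0)
    diff = np.asarray(m1) - np.asarray(m2)
    return np.sqrt(np.sum(diff ** 2) + bures)
```

for equal covariances the bures term vanishes and the distance reduces to the euclidean distance between the means.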
for any finite type connected surface $ s $, we give an infinite presentation of the fundamental group $ \ pi _ 1 ( s, \ ast ) $ of $ s $ based at an interior point $ \ ast \ in { s } $ whose generators are represented by simple loops. when $ s $ is non - orientable, we also give an infinite presentation of the subgroup of $ \ pi _ 1 ( s, \ ast ) $ generated by elements which are represented by simple loops whose regular neighborhoods are annuli.
arxiv:2003.13224
we study the variation of the mean cross section with the density of the samples in the quantum scattering of a particle by a disordered target. the target consists of a set of pointlike scatterers, each having an equal probability of being anywhere inside a sphere whose radius may be modified. we first prove that scattering by a pointlike scatterer is characterized by a single phase shift $ { \ delta } $ which takes on its values in $ ] 0 \,, { \ pi } [ $ and that the scattering by $ { \ rm n } $ pointlike scatterers is described by a system of only $ { \ rm n } $ equations. we then show with the help of numerical calculations that there are two stages in the variation of the mean cross section as the density of the samples ( the radius of the target ) increases ( decreases ). depending on the value of $ { \ delta } $, the mean cross section first either increases or decreases, each of the two behaviours originating from double scattering ; it decreases uniformly for any value of $ { \ delta } $ as the density increases further, a behaviour which results from multiple scattering and which follows that of the cross section for diffusion by a hard sphere potential of decreasing radius. the expression of the mean cross section is derived in the particular case of an unlimited number of contributions of successive scatterings.
arxiv:1908.10798
sense embedding learning methods learn multiple vectors for a given ambiguous word, corresponding to its different word senses. for this purpose, different methods have been proposed in prior work on sense embedding learning that use different sense inventories, sense - tagged corpora and learning methods. however, not all existing sense embeddings cover all senses of ambiguous words equally well due to the discrepancies in their training resources. to address this problem, we propose the first - ever meta - sense embedding method - - neighbour preserving meta - sense embeddings, which learns meta - sense embeddings by combining multiple independently trained source sense embeddings such that the sense neighbourhoods computed from the source embeddings are preserved in the meta - embedding space. our proposed method can combine source sense embeddings that cover different sets of word senses. experimental results on word sense disambiguation ( wsd ) and word - in - context ( wic ) tasks show that the proposed meta - sense embedding method consistently outperforms several competitive baselines.
arxiv:2305.19092
the unusually narrow features in the fluorescence from rubidium - 85 driven by cooling and repumper laser fields, reported in an earlier experiment [ 1 ] are explained on the basis of a four - level density matrix calculation. quantum effects alter the efficiency of atom transfer by the probe ( repumper ) laser to the levels connected by the pump ( cooling ) laser. this combined with the double resonance condition [ 1 ], results in velocity selection from co - propagating and counter propagating probe and pump beams resulting in narrow fluorescence peaks from a thermal gas at room temperature.
arxiv:cond-mat/0401279
this dissertation explores two main topics : 1 ) color transparency and quasi - elastic knockout reactions involving pions and $ \ rho $ mesons ; and 2 ) determination of the $ j / \ psi $ - nucleon scattering amplitude and scattering length via $ j / \ psi $ electroproduction on the deuteron. it is shown that at the energies available at the compass experiment at cern, color transparency should be detectable in the reaction $ \ pi + a \ to \ pi + p + ( a - 1 ) ^ * $ ( proton knockout ). it is also shown that color transparency should be detectable in the electroproduction reaction $ \ gamma ^ * + a \ to \ rho + p + ( a - 1 ) ^ * $ at small $ q ^ 2 $ ( where $ q ^ 2 $ is the virtuality of the photon ) but large $ t $ ( 4 - momentum transfer squared to the knocked out proton ), which represents an as - yet unexplored kinematic region in the search for ct effects in electroproduction of vector mesons. calculations are also presented for the reaction $ \ gamma ^ * + d \ to j / \ psi + p + n $ at jlab energies in order to determine the feasibility of measuring the elastic $ j / \ psi $ - nucleon scattering amplitude and / or scattering length. it is found that it may be possible to measure the $ j / \ psi $ - nucleon scattering amplitude at lower energies than previous measurements, but the scattering length cannot be measured.
arxiv:1303.2736
antiferromagnetic spin - 1 chains host the celebrated symmetry protected topological haldane phase, whose spin - 1 / 2 edge states were evidenced in bulk by, e. g., electron spin resonance ( esr ). recent success in assembling effective spin - 1 antiferromagnetic chains from nanographene and porphyrin molecules opens the possibility of local, site - by - site, characterization. the nascent technique of combined esr - stm is able to measure the spin dynamics with atomic real - space resolution, and could fully reveal and manipulate the spin - 1 / 2 degree of freedom. in this work, we combine exact diagonalization and dmrg to investigate the local dynamic spin structure factor of the different phases of the bilinear - biquadratic hamiltonian with single - ion anisotropy in the presence of an external magnetic field. we find that the signature of the haldane phase is a low - energy peak created by singlet - triplet transitions in the edge - state manifold. we predict that the signature peak is experimentally observable, although for chains of length above n = 30 its energy should first be tuned by application of an external magnetic field. we fully characterize the peak in real - space and energy, and further show its robustness to weak anisotropy and a relevant range of temperatures.
arxiv:2312.07147
many different types of fractional calculus have been defined, which may be categorised into broad classes according to their properties and behaviours. two types that have been much studied in the literature are the hadamard - type fractional calculus and tempered fractional calculus. this paper establishes a connection between these two definitions, writing one in terms of the other by making use of the theory of fractional calculus with respect to functions. by extending this connection in a natural way, a generalisation is developed which unifies several existing fractional operators : riemann - - liouville, caputo, classical hadamard, hadamard - type, tempered, and all of these taken with respect to functions. the fundamental calculus of these generalised operators is established, including semigroup and reciprocal properties as well as application to some example functions. function spaces are constructed in which the new operators are defined and bounded. finally, some formulae are derived for fractional integration by parts with these operators.
arxiv:1907.04551
it is often stated that quantum mechanics only makes statistical predictions and that a quantum state is described by the various probability distributions associated with it. can we describe a quantum state completely in terms of probabilities and then use it to describe quantum dynamics? what is the origin of the probability distribution for a maximally specified quantum state? is quantum mechanics ` local ' or is there an essential nonlocality ( nonseparability ) inherent in quantum mechanics? these questions are discussed in this paper. the decay of an unstable quantum state and the time dependence of minimum uncertainty states for future as well as past times are also discussed.
arxiv:quant-ph/0109159
we explore a new solution to the $ \ mu $ - problem. in the scenario of susy - breaking mediation through anti - generation fields, we find that the $ b \ mu $ term has its origin in a seesaw - type mechanism as well as in a loop diagram through gauge interactions. it is shown that the dominant contributions to the $ b \ mu $ term are controlled by the flavor symmetry in the model.
arxiv:hep-ph/0011004
the increasing population of elderly people is associated with the need to meet their increasing requirements and to provide solutions that can improve their quality of life in a smart home. in addition to fear and anxiety towards interfacing with systems, cognitive disabilities, weakened memory, disorganized behavior and even physical limitations are some of the problems that elderly people tend to face with increasing age. the essence of providing technology - based solutions to address these needs of elderly people and to create smart and assisted living spaces for the elderly lies in developing systems that can adapt by addressing their diversity and can augment their performances in the context of their day to day goals. therefore, this work proposes a framework for the development of a personalized intelligent assistant to help elderly people perform activities of daily living ( adls ) in a smart and connected internet of things ( iot ) based environment. this personalized intelligent assistant can analyze different tasks performed by the user and recommend activities by considering their daily routine, current affective state and the underlying user experience. to demonstrate the efficacy of this proposed framework, it has been tested on a couple of datasets for modelling an average user and a specific user respectively. the results presented show that the model achieves a performance accuracy of 73. 12 % when modelling a specific user, which is considerably higher than its performance while modelling an average user. this upholds the relevance of the development and implementation of the proposed framework.
arxiv:2107.07344
the study in [ phys. rev. lett. 117, 014102 ( 2016 ) ] discovered a novel type of chimera state known as coherence - resonance chimera ( crc ), which combines the effects of coherence resonance ( cr ) and the spatial property of classical chimeras. in this letter, we present yet another novel form of chimera, which we refer to as self - induced - stochastic - resonance breathing chimera ( sisr - bc ), which differs fundamentally from the crc in that it combines the mechanism and effects of self - induced stochastic resonance ( sisr, previously shown in [ phys. rev. e 72, 031105 ( 2005 ) ] to be intrinsically different from cr ), the symmetry breaking in the rotational coupling between the slow and fast subsystems of the coupled oscillators, and the property of breathing chimera - - a form of chimera state characterized by non - stationary periodic dynamics of coherent - incoherent patterns with a periodically oscillating global order parameter. unlike other types of chimeras, including crc, sisr - bc demonstrates remarkable resilience to a relatively wide range of stochastic perturbations, persists even when the purely excitable system is significantly distant from the hopf bifurcation threshold - - thanks to the mechanism of sisr - - and globally attracts random distributions of initial conditions. considering its potential impact on information processing in neuronal networks, sisr - bc could have special significance and applications.
arxiv:2305.04538
the heine - borel theorem for uncountable coverings has recently emerged as an interesting and central principle in higher - order reverse mathematics and computability theory, formulated as follows : hbu is the heine - borel theorem for uncountable coverings given as $ \ cup _ { x \ in [ 0, 1 ] } ( x - \ psi ( x ), x + \ psi ( x ) ) $ for arbitrary $ \ psi : [ 0, 1 ] \ rightarrow \ mathbb { r } ^ { + } $, i. e. the original formulation going back to cousin ( 1895 ) and lindelöf ( 1903 ). in this paper, we show that hbu is equivalent to its restriction to functions continuous almost everywhere, an elegant robustness result. we also obtain a nice splitting hbu $ \ leftrightarrow $ [ whbu $ ^ { + } $ + hbc $ _ { 0 } $ + wkl $ _ 0 $ ] where whbu $ ^ { + } $ is a strengthening of vitali ' s covering theorem and where hbc $ _ { 0 } $ is the heine - borel theorem for countable collections ( and \ textbf { not sequences } ) of basic open intervals, as formulated by borel himself in 1898.
arxiv:2106.05602
fu orionis is the prototype of a class of eruptive young stars ( ` ` fuors ' ' ) characterized by strong optical outbursts. we recently completed an exploratory survey of fuors using xmm - newton to determine their x - ray properties, about which little was previously known. the prototype fu ori and v1735 cyg were detected. the x - ray spectrum of fu ori was found to be unusual, consisting of a cool moderately - absorbed component plus a hotter component viewed through an absorption column density that is an order of magnitude higher. we present here a sensitive ( 99 ks ) follow - up x - ray observation of fu ori obtained at higher angular resolution with chandra acis - s. the unusual multi - component spectrum is confirmed. the hot component is centered on fu ori and dominates the emission above 2 kev. it is variable ( a signature of magnetic activity ) and is probably coronal emission originating close to fu ori ' s surface viewed through cool gas in fu ori ' s strong wind or accretion stream. in contrast, the x - ray centroid of the soft emission below 2 kev is offset 0. 20 arcsec to the southeast of fu ori, toward the near - ir companion ( fu ori s ). this offset amounts to slightly less than half the separation between the two stars. the most likely explanation for the offset is that the companion contributes significantly to the softer x - ray emission below 2 kev ( and weakly above 2 kev ). the superimposed x - ray contributions from fu ori and the companion resolve the paradox posed by xmm - newton of an apparently single x - ray source viewed through two different absorption columns.
arxiv:1008.4090
this article discusses the possibility of automating the student ' s design work through the use of an automated project management system. the purpose, structure and formalism of the automated workplace of the student - designer ( awsd ) are described, and its structural - functional diagram is shown.
arxiv:1311.2056
emissions at the time. = = = = 20th century = = = = in the 1900s, the discipline of environmental science as it is known today began to take shape. the century is marked by significant research, literature, and international cooperation in the field. in the early 20th century, criticism from dissenters downplayed the effects of global warming. at this time, few researchers were studying the dangers of fossil fuels. after a 1. 3 degrees celsius temperature anomaly was found in the atlantic ocean in the 1940s, however, scientists renewed their studies of gaseous heat trapping from the greenhouse effect ( although only carbon dioxide and water vapor were known to be greenhouse gases then ). nuclear development following the second world war allowed environmental scientists to intensively study the effects of carbon and make advancements in the field. further knowledge from archaeological evidence brought to light the changes in climate over time, particularly ice core sampling. environmental science was brought to the forefront of society in 1962 when rachel carson published an influential piece of environmental literature, silent spring. carson ' s writing led the american public to pursue environmental safeguards, such as bans on harmful chemicals like the insecticide ddt. another important work, the tragedy of the commons, was published by garrett hardin in 1968 in response to accelerating natural degradation. in 1969, environmental science once again became a household term after two striking disasters : ohio ' s cuyahoga river caught fire due to the amount of pollution in its waters and a santa barbara oil spill endangered thousands of marine animals, both receiving prolific media coverage. consequently, the united states passed an abundance of legislation, including the clean water act and the great lakes water quality agreement. 
the following year, in 1970, the first ever earth day was celebrated worldwide and the united states environmental protection agency ( epa ) was formed, legitimizing the study of environmental science in government policy. in the next two years, the united nations created the united nations environment programme ( unep ) in stockholm, sweden to address global environmental degradation. much of the interest in environmental science throughout the 1970s and the 1980s was characterized by major disasters and social movements. in 1978, hundreds of people were relocated from love canal, new york after carcinogenic pollutants were found to be buried underground near residential areas. the next year, in 1979, the nuclear power plant on three mile island in pennsylvania suffered a meltdown and raised concerns about the dangers of radioactive waste and the safety of nuclear energy. in response to landfills and toxic waste often disposed of near their homes, the
https://en.wikipedia.org/wiki/Environmental_science
rate - independent systems arise in a number of applications. usually, weak solutions to such problems with potentially very low regularity are considered, requiring mathematical techniques capable of handling nonsmooth functions. in this work we prove the existence of hölder - regular strong solutions for a class of rate - independent systems. we also establish additional higher regularity results that guarantee the uniqueness of strong solutions. the proof proceeds via a time - discrete rothe approximation and careful elliptic regularity estimates depending in a quantitative way on the ( local ) convexity of the potential featuring in the model. in the second part of the paper we show that our strong solutions may be approximated by a fully discrete numerical scheme based on a spatial finite element discretization, whose rate of convergence is consistent with the regularity of strong solutions whose existence and uniqueness are established.
arxiv:1702.01427
if dark matter consists of cold, neutral particles with a non - zero magnetic moment, then, in the presence of an external magnetic field, a measurable gyromagnetic faraday effect becomes possible. this enables direct constraints on the nature and distribution of such dark matter through detailed measurements of the polarization and temperature of the cosmic microwave background radiation.
arxiv:astro-ph/0611684
we measured the optical properties of mixed valent vanadium oxide nanoscrolls and their metal exchanged derivatives in order to investigate the charge dynamics in these compounds. in contrast to the prediction of a metallic state for the metal exchanged derivatives within a rigid band model, we find that the injected charges in mn $ ^ { 2 + } $ exchanged vanadium oxide nanoscrolls are pinned. a low - energy electronic excitation associated with the pinned carriers appears in the far infrared and persists at low temperature, suggesting that the nanoscrolls are weak metals in their bulk form, dominated by inhomogeneous charge disproportionation and madelung energy effects.
arxiv:0704.3861
the classic likelihood ratio test for testing the equality of two covariance matrices breaks down due to the singularity of the sample covariance matrices when the data dimension $ p $ is larger than the sample size $ n $. in this paper, we present a conceptually simple method using random projection to project the data onto a one - dimensional random subspace so that the conventional methods can be applied. both one - sample and two - sample tests for high - dimensional covariance matrices are studied. asymptotic results are established and numerical results are given to compare our method with state - of - the - art methods in the literature.
arxiv:1511.01611
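a minimal sketch of the random - projection idea described above ( not the authors' exact test statistic ) : project both high - dimensional samples onto a single random direction, then compare the resulting one - dimensional variances with a classical ratio. function and variable names are illustrative :

```python
import numpy as np

def projected_variance_ratio(X, Y, seed=0):
    """Project both samples onto one random unit direction; return the
    ratio of projected sample variances (a classical 1-D quantity)."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(X.shape[1])
    u /= np.linalg.norm(u)
    return np.var(X @ u, ddof=1) / np.var(Y @ u, ddof=1)

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 50))  # p = 50 would already trouble the LRT if n were small
Y = rng.standard_normal((2000, 50))
ratio = projected_variance_ratio(X, Y)  # near 1 under equal covariances
```

under equal covariance matrices the projected variances agree in expectation for every direction, so the ratio concentrates near 1; a scaled second sample pushes it away from 1.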
$ k $ - subset sampling is ubiquitous in machine learning, enabling regularization and interpretability through sparsity. the challenge lies in rendering $ k $ - subset sampling amenable to end - to - end learning. this has typically involved relaxing the reparameterized samples to allow for backpropagation, with the risk of introducing high bias and high variance. in this work, we fall back to discrete $ k $ - subset sampling on the forward pass. this is coupled with using the gradient with respect to the exact marginals, computed efficiently, as a proxy for the true gradient. we show that our gradient estimator, simple, exhibits lower bias and variance compared to state - of - the - art estimators, including the straight - through gumbel estimator when $ k = 1 $. empirical results show improved performance on learning to explain and sparse linear regression. we provide an algorithm for computing the exact elbo for the $ k $ - subset distribution, obtaining significantly lower loss compared to sota.
arxiv:2210.01941
the study of multipartite entanglement is much less developed than the bipartite scenario. recently, a string of results have proposed using tools from topological data analysis ( tda ) to attach topological quantities to multipartite states. however, these quantities are not directly connected to concrete information processing tasks, making their interpretations vague. we take the first steps in connecting these abstract topological quantities to operational interpretations of entanglement in two scenarios. first, we provide a bound on the integrated euler characteristic defined by hamilton and leditzky via an average distillable entanglement, which we develop as a generalization of the meyer - wallach entanglement measure studied by a. j. scott in 2004. this allows us to connect the distance of an error correcting code to the integrated euler characteristic. second, we provide a characterization of a class of graph states containing the ghz state via the birth and death times of the connected components and 1 - dimensional cycles of the entanglement complex. in other words, the entanglement distance behavior of the first betti number $ \ beta _ 1 ( \ varepsilon ) $ allows us to determine if a state is locally equivalent to a ghz state, potentially providing new verification schemes for highly entangled states.
arxiv:2505.00642
= = isometries = = an isometry is a distance - preserving map between metric spaces. given a metric space, or a set and scheme for assigning distances between elements of the set, an isometry is a transformation which maps elements to another metric space such that the distance between the elements in the new metric space is equal to the distance between the elements in the original metric space. in a two - dimensional or three - dimensional space, two geometric figures are congruent if they are related by an isometry : related by either a rigid motion, or a composition of a rigid motion and a reflection. up to a relation by a rigid motion, they are equal if related by a direct isometry. isometries have been used to unify the working definition of symmetry in geometry and for functions, probability distributions, matrices, strings, graphs, etc. = = symmetries of differential equations = = a symmetry of a differential equation is a transformation that leaves the differential equation invariant. knowledge of such symmetries may help solve the differential equation. a line symmetry of a system of differential equations is a continuous symmetry of the system of differential equations. knowledge of a line symmetry can be used to simplify an ordinary differential equation through reduction of order. for ordinary differential equations, knowledge of an appropriate set of lie symmetries allows one to explicitly calculate a set of first integrals, yielding a complete solution without integration. symmetries may be found by solving a related set of ordinary differential equations. solving these equations is often much simpler than solving the original differential equations. = = symmetry in probability = = in the case of a finite number of possible outcomes, symmetry with respect to permutations ( relabelings ) implies a discrete uniform distribution. 
in the case of a real interval of possible outcomes, symmetry with respect to interchanging sub - intervals of equal length corresponds to a continuous uniform distribution. in other cases, such as " taking a random integer " or " taking a random real number ", there are no probability distributions at all symmetric with respect to relabellings or to exchange of equally long subintervals. other reasonable symmetries do not single out one particular distribution, or in other words, there is not a unique probability distribution providing maximum symmetry. there is one type of isometry in one dimension that may leave the probability distribution unchanged, that is reflection in a point, for example zero. a possible symmetry for randomness with positive outcomes is that the former applies for the logarithm, i. e.
https://en.wikipedia.org/wiki/Symmetry_in_mathematics
we perform a nf = 2 + 1 lattice qcd simulation to determine the quark spin fractions of hadrons using the feynman - hellmann theorem. by introducing an external spin operator to the fermion action, the matrix elements relevant for quark spin fractions are extracted from the linear response of the hadron energies. simulations indicate that the feynman - hellmann method offers statistical precision that is comparable to the standard three - point function approach, with the added benefit that it is less susceptible to excited state contamination. this suggests that the feynman - hellmann technique offers a promising alternative for calculations of quark line disconnected contributions to hadronic matrix elements. at the su ( 3 ) - flavour symmetry point, we find that the connected quark spin fractions are universally in the range 55 - 70 % for vector mesons and octet and decuplet baryons. there is an indication that the amount of spin suppression is quite sensitive to the strength of su ( 3 ) breaking.
arxiv:1405.3019
in this paper, we aim to compute numerical approximation integral by using an adaptive monte carlo algorithm. we propose a stratified sampling algorithm based on an iterative method which splits the strata following some quantities called indicators which indicate where the variance takes relative big values. the stratification method is based on the optimal allocation strategy in order to decrease the variance from iteration to another. numerical experiments show and confirm the efficiency of our algorithm.
arxiv:1507.05721
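the stratification idea above can be sketched as follows : split $ [ 0, 1 ] $ into equal strata, sample uniformly within each, and combine the per - stratum means with their widths as weights. this is a fixed - allocation sketch, not the paper's adaptive splitting with optimal allocation :

```python
import numpy as np

def stratified_mc(f, n_strata=10, n_per=100, rng=None):
    """Stratified Monte Carlo estimate of the integral of f over [0, 1],
    with equal allocation of n_per samples to each stratum."""
    rng = np.random.default_rng(rng)
    edges = np.linspace(0.0, 1.0, n_strata + 1)
    est = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        u = rng.uniform(a, b, n_per)
        est += (b - a) * f(u).mean()  # width-weighted stratum mean
    return est

est = stratified_mc(lambda x: x ** 2, rng=0)  # true value: 1/3
```

because the integrand varies little within each narrow stratum, the estimator's variance is far below that of plain monte carlo with the same total budget; the adaptive scheme in the paper refines the strata where indicator variances remain large.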
a significant aspect of the phase - ii upgrade of the atlas detector is the replacement of the current inner detector with the atlas inner tracker ( itk ). the atlas itk is an all - silicon detector consisting of a pixel tracker and a strip tracker. sensors for the itk strip tracker have been developed to withstand the high radiation environment in the atlas detector after the high luminosity upgrade of the large hadron collider at cern, which will significantly increase the rate of particle collisions and resulting particle tracks. during their operation in the atlas detector, sensors for the itk strip tracker are expected to accumulate fluences up to 1. 6 x 10 ^ 15 n _ eq / cm ^ 2 ( including a safety factor of 1. 5 ), which will significantly affect their performance. one characteristic of interest for highly irradiated sensors is the shape and homogeneity of the electric field inside its active area. for the results presented here, diodes with edge structures similar to full size atlas sensors were irradiated up to fluences comparable to those in the atlas itk strip tracker and their electric fields mapped using a micro - focused x - ray beam ( beam diameter 2x3 { \ mu } m ^ 2 ). this study shows the extension and shape of the electric field inside highly irradiated diodes over a range of applied bias voltages. additionally, measurements of the outline of the depleted sensor areas allow a comparison of the measured leakage current for different fluences with expectations for the corresponding active areas.
arxiv:2103.15807
we give a metric characterization of the scalar curvature of a smooth riemannian manifold, analyzing the maximal distance between $ ( n + 1 ) $ points in infinitesimally small neighborhoods of a point. since this characterization is purely in terms of the distance function, it could be used to approach the problem of defining the scalar curvature on a non - smooth metric space. in the second part we will discuss this issue, focusing in particular on alexandrov spaces and surfaces with bounded integral curvature.
arxiv:1710.07178
can crowds serve as useful allies in policy design? how do non - expert crowds perform relative to experts in the assessment of policy measures? does the geographic location of non - expert crowds, with relevance to the policy context, alter the performance of non - experts crowds in the assessment of policy measures? in this work, we investigate these questions by undertaking experiments designed to replicate expert policy assessments with non - expert crowds recruited from virtual labor markets. we use a set of ninety - six climate change adaptation policy measures previously evaluated by experts in the netherlands as our control condition to conduct experiments using two discrete sets of non - expert crowds recruited from virtual labor markets. we vary the composition of our non - expert crowds along two conditions : participants recruited from a geographical location directly relevant to the policy context and participants recruited at - large. we discuss our research methods in detail and provide the findings of our experiments.
arxiv:1702.04219
it is shown that a necessary and sufficient condition for an archimedean copula generator to generate a $d$-dimensional copula is that the generator is a $d$-monotone function. the class of $d$-dimensional archimedean copulas is shown to coincide with the class of survival copulas of $d$-dimensional $\ell_1$-norm symmetric distributions that place no point mass at the origin. the $d$-monotone archimedean copula generators may be characterized using a little-known integral transform of williamson [duke math. j. 23 (1956) 189--207] in an analogous manner to the well-known bernstein--widder characterization of completely monotone generators in terms of the laplace transform. these insights allow the construction of new archimedean copula families and provide a general solution to the problem of sampling multivariate archimedean copulas. they also yield useful expressions for the $d$-dimensional kendall function and kendall's rank correlation coefficients and facilitate the derivation of results on the existence of densities and the description of singular components for archimedean copulas. the existence of a sharp lower bound for archimedean copulas with respect to the positive lower orthant dependence ordering is shown.
arxiv:0908.3750
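for the completely monotone special case mentioned in the abstract above (generators that are laplace transforms, the bernstein--widder side), the classical marshall-olkin frailty construction gives a concrete sampler. the sketch below does this for a clayton generator $\psi(t) = (1+t)^{-1/\theta}$; the choice of clayton and all names are illustrative assumptions, and the paper's williamson-transform method is more general, covering all $d$-monotone generators.

```python
import numpy as np

def sample_clayton(n, d, theta, seed=0):
    """Sample n points from a d-dimensional Clayton copula.

    Uses the Marshall-Olkin frailty construction for completely
    monotone generators psi(t) = (1 + t)^(-1/theta), the Laplace
    transform of a Gamma(1/theta) variable.  This is the classical
    special case of the general sampling problem discussed in the
    abstract, not the paper's Williamson-transform algorithm.
    """
    rng = np.random.default_rng(seed)
    v = rng.gamma(1.0 / theta, 1.0, size=(n, 1))   # frailty V
    e = rng.exponential(size=(n, d))               # iid Exp(1) variables
    return (1.0 + e / v) ** (-1.0 / theta)         # U_i = psi(E_i / V)

u = sample_clayton(5000, 2, theta=2.0)
```

for clayton with theta = 2, kendall's tau is theta / (theta + 2) = 0.5, so the two coordinates of the sample should show strong positive rank dependence.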
in higher educational institutes, many students have to struggle hard to complete different courses since there is no dedicated support offered to students who need special attention in the registered courses. machine learning techniques can be utilized for students ' grades prediction in different courses. such techniques would help students to improve their performance based on predicted grades and would enable instructors to identify such individuals who might need assistance in the courses. in this paper, we use collaborative filtering ( cf ), matrix factorization ( mf ), and restricted boltzmann machines ( rbm ) techniques to systematically analyze a real - world data collected from information technology university ( itu ), lahore, pakistan. we evaluate the academic performance of itu students who got admission in the bachelor ' s degree program in itu ' s electrical engineering department. the rbm technique is found to be better than the other techniques used in predicting the students ' performance in the particular course.
arxiv:1708.08744
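the matrix-factorization technique from the abstract above can be sketched with a small sgd loop over observed (student, course, grade) records. this is a generic mf sketch with invented toy data; the paper additionally compares cf and rbm models, and its itu dataset is of course not reproduced here.

```python
import numpy as np

def mf_predict(triples, n_students, n_courses, rank=3, lr=0.02,
               reg=0.01, epochs=500, seed=0):
    """Predict grades by factorizing the sparse student x course matrix.

    triples: observed (student, course, grade) records.
    Returns the dense matrix of predicted grades from the learned
    low-rank factors P and Q.  A generic SGD matrix-factorization
    sketch, not the paper's exact model.
    """
    rng = np.random.default_rng(seed)
    p = rng.normal(1.0, 0.1, (n_students, rank))   # student factors
    q = rng.normal(1.0, 0.1, (n_courses, rank))    # course factors
    for _ in range(epochs):
        for s, c, g in triples:
            err = g - p[s] @ q[c]
            p[s] += lr * (err * q[c] - reg * p[s])
            q[c] += lr * (err * p[s] - reg * q[c])
    return p @ q.T

# toy data: 3 students, 2 courses, grades on a 4-point scale
obs = [(0, 0, 3.7), (0, 1, 3.3), (1, 0, 2.0), (2, 1, 3.9)]
pred = mf_predict(obs, 3, 2)
```

after training, the reconstructed matrix fits the observed grades closely and also fills in the unobserved (student, course) cells from the shared factors, which is exactly how such a model would flag students likely to struggle in a course they have not taken yet.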
we propose a learning algorithm capable of learning from label proportions instead of direct data labels. in this scenario, our data are arranged into various bags of a certain size, and only the proportions of each label within a given bag are known. this is a common situation in cases where per - data labeling is lengthy, but a more general label is easily accessible. several approaches have been proposed to learn in this setting with linear models in the multiclass setting, or with nonlinear models in the binary classification setting. here we investigate the more general nonlinear multiclass setting, and compare two differentiable loss functions to train end - to - end deep neural networks from bags with label proportions. we illustrate the relevance of our methods on an image classification benchmark, and demonstrate the possibility to learn accurate image classifiers from bags of images.
arxiv:1905.12909
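a natural differentiable bag-level loss for the setting in the abstract above is the cross-entropy between the known label proportions of a bag and the mean of the per-instance softmax outputs in that bag. the sketch below is one such loss, written in numpy with invented names; it is not necessarily either of the two losses the paper compares.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def bag_proportion_loss(logits, bag_ids, bag_props):
    """Cross-entropy between known per-bag label proportions and the
    mean predicted class distribution of each bag.

    logits:    (n_instances, n_classes) model outputs.
    bag_ids:   (n_instances,) bag index of each instance.
    bag_props: (n_bags, n_classes) known label proportions.
    A sketch of one natural differentiable bag-level loss.
    """
    probs = softmax(logits)
    loss = 0.0
    for b, target in enumerate(bag_props):
        mean_pred = probs[bag_ids == b].mean(axis=0)
        loss -= (target * np.log(mean_pred + 1e-12)).sum()
    return loss / len(bag_props)

logits = np.array([[8.0, 0.0], [8.0, 0.0], [0.0, 8.0], [0.0, 8.0]])
bag_ids = np.array([0, 0, 1, 1])
props = np.array([[1.0, 0.0], [0.0, 1.0]])
loss_good = bag_proportion_loss(logits, bag_ids, props)
```

predictions whose bag-level averages match the given proportions drive the loss toward the entropy of the proportions (here zero), while predictions that contradict the proportions are penalized heavily; since the loss is differentiable in the logits, it can train a deep network end to end.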
collection of massive well-annotated samples is effective in improving object detection performance but is extremely laborious and costly. instead of data collection and annotation, the recently proposed cut-paste methods [12, 15] show the potential to augment the training dataset by cutting foreground objects and pasting them on proper new backgrounds. however, existing cut-paste methods cannot guarantee that synthetic images always precisely model visual context, and all of them require external datasets. to handle the above issues, this paper proposes a simple yet effective instance-switching (is) strategy, which generates new training data by switching instances of the same class between different images. our is naturally preserves contextual coherence in the original images while requiring no external dataset. to guide our is toward better detection performance, we explore the issues of instance imbalance and class importance in datasets, which frequently occur and adversely affect detection performance. to this end, we propose a novel progressive and selective instance-switching (psis) method to augment training data for object detection. the proposed psis enhances instance balance by combining selective re-sampling with a class-balanced loss, and considers class importance by progressively augmenting the training dataset guided by detection performance. the experiments are conducted on the challenging ms coco benchmark, and results demonstrate our psis brings clear improvement over various state-of-the-art detectors (e.g., faster r-cnn, fpn, mask r-cnn and sniper), showing the superiority and generality of our psis. code and models are available at: https://github.com/hwang64/psis.
arxiv:1906.00358
let $spp(n)$ be the set $\left\{ \big( |a+a|, |aa| \big) : a \subseteq \mathbb{N}, |a| = n \right\}$ of sum-product pairs, where $a+a$ is the sumset $\{a+b : a, b \in a\}$ and $aa$ is the product set $\{ab : a, b \in a\}$. we construct a dataset consisting of 1162868 sets whose sum-product pairs are at least $84\%$ of $spp(n)$ for each $n \le 32$. notably, we do *not* see evidence in favor of erd\H{o}s's sum-product conjecture in our dataset. for $n \le 6$, we prove the exact value of $spp(n)$. we include a number of conjectures, open problems, and observations motivated by this dataset, and a large number of color visualizations.
arxiv:2411.08139
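the objects in the abstract above are easy to compute by brute force for tiny n: enumerate n-element subsets of {1, ..., m} and collect the pairs (|A+A|, |AA|). restricting the elements to {1, ..., m} is our simplification and can miss pairs that require larger elements, whereas the paper's dataset targets at least 84% of spp(n) up to n = 32.

```python
from itertools import combinations

def sum_product_pairs(n, m):
    """All pairs (|A+A|, |AA|) over n-element subsets A of {1, ..., m}.

    Brute force, so only feasible for tiny n and m; capping the
    elements at m is a simplification that can miss achievable pairs.
    """
    pairs = set()
    for a in combinations(range(1, m + 1), n):
        sums = {x + y for x in a for y in a}
        prods = {x * y for x in a for y in a}
        pairs.add((len(sums), len(prods)))
    return pairs

pairs3 = sum_product_pairs(3, 12)
```

for n = 3, the arithmetic progression {1, 2, 3} gives the pair (5, 6) and the geometric progression {1, 2, 4} gives (6, 5); both |A+A| and |AA| always lie between 2n - 1 = 5 and n(n+1)/2 = 6.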
we study the problem of underscreened kondo physics in an interacting electronic system modeled by a luttinger liquid (ll). we find that the leading temperature dependence of thermodynamic quantities like the specific heat and spin susceptibility is fermi-liquid-like in nature. however, anomalous power-law exponents are seen in the subleading terms. we also discuss possible realizations through single and double quantum dot configurations coupled to ll leads and the consequences for electronic transport. the leading low-temperature transport behavior is seen to exhibit, in general, non-fermi-liquid ll behavior, unlike the thermodynamic quantities.
arxiv:cond-mat/0607237
to write down a path integral for the ashtekar gravity one must solve three fundamental problems. first, one must understand rules of complex contour functional integration with holomorphic action. second, one should find which gauges are compatible with reality conditions. third, one should evaluate the faddeev - popov determinant produced by these conditions. in the present paper we derive the brst path integral for the hilbert - palatini gravity. we show, that for certain class of gauge conditions this path integral can be re - written in terms of the ashtekar variables. reality conditions define contours of integration. for our class of gauges all ghost terms coincide with what one could write naively just ignoring any jacobian factors arising from the reality conditions.
arxiv:gr-qc/9806001
proteolysis of the multimeric blood coagulation protein von willebrand factor ( vwf ) by adamts13 is crucial for prevention of microvascular thrombosis. adamts13 cleaves vwf within the mechanosensitive a2 domain, which is believed to open under shear flow. here, we combine fluorescence correlation spectroscopy ( fcs ) and a microfluidic shear cell to monitor real - time kinetics of full - length vwf proteolysis as a function of shear stress. for comparison, we also measure the michaelis - menten kinetics of adamts13 cleavage of wild - type vwf in the absence of shear but partially denaturing conditions. under shear, adamts13 activity on full - length vwf arises without denaturing agent as evidenced by fcs and gel - based multimer analysis. in agreement with brownian hydrodynamics simulations, we find a sigmoidal increase of the enzymatic rate as a function of shear at a threshold shear rate 5522 / s. the same flow - rate dependence of adamts13 activity we also observe in blood plasma, which is relevant to predict hemostatic dysfunction.
arxiv:1512.05127
the frame - like covariant lagrangian formulation of bosonic and fermionic mixed - symmetry type higher spin massless fields propagating on the ads ( d ) background is proposed. higher spin fields are described in terms of gauge p - forms which carry tangent indices representing certain traceless tensor or gamma transversal spinor - tensor representations of the ads ( d ) algebra o ( d - 1, 2 ) ( or o ( d, 1 ) for bosonic fields in ds ( d ) ). manifestly gauge invariant abelian higher spin field strengths are introduced for the general case. we describe the general framework and demonstrate how it works for the mixed - symmetry type fields associated with the three - cell " hook " and arbitrary two - row rectangular tableaux. the manifestly gauge invariant actions for these fields are presented in a simple form. the flat limit is also analyzed.
arxiv:hep-th/0311164
the developers of ethereum smart contracts often implement administrating patterns, such as censoring certain users, creating or destroying balances on demand, destroying smart contracts, or injecting arbitrary code. these routines turn an erc20 token into an administrated token - the type of ethereum smart contract that we scrutinize in this research. we discover that many smart contracts are administrated, and the owners of these tokens carry lesser social and legal responsibilities compared to the traditional centralized actors that those tokens intend to disrupt. this entails two major problems : a ) the owners of the tokens have the ability to quickly steal all the funds and disappear from the market ; and b ) if the private key of the owner ' s account is stolen, all the assets might immediately turn into the property of the attacker. we develop a pattern recognition framework based on 9 syntactic features characterizing administrated erc20 tokens, which we use to analyze existing smart contracts deployed on ethereum mainnet. our analysis of 84, 062 unique ethereum smart contracts reveals that nearly 58 % of them are administrated erc20 tokens, which accounts for almost 90 % of all erc20 tokens deployed on ethereum. to protect users from the frivolousness of unregulated token owners without depriving the ability of these owners to properly manage their tokens, we introduce safelyadministrated - a library that enforces a responsible ownership and management of erc20 tokens. the library introduces three mechanisms : deferred maintenance, board of trustees and safe pause. we implement and test safelyadministrated in the form of solidity abstract contract, which is ready to be used by the next generation of safely administrated erc20 tokens.
arxiv:2107.10979
for a knot k in s ^ 3, let t ( k ) be the characteristic toric sub - orbifold of the orbifold ( s ^ 3, k ) as defined by bonahon and siebenmann. if k has unknotting number one, we show that an unknotting arc for k can always be found which is disjoint from t ( k ), unless either k is an em - knot ( of eudave - munoz ) or ( s ^ 3, k ) contains an em - tangle after cutting along t ( k ). as a consequence, we describe exactly which large algebraic knots ( ie algebraic in the sense of conway and containing an essential conway sphere ) have unknotting number one and give a practical procedure for deciding this ( as well as determining an unknotting crossing ). among the knots up to 11 crossings in conway ' s table which are obviously large algebraic by virtue of their description in the conway notation, we determine which have unknotting number one. combined with the work of ozsvath - szabo, this determines the knots with 10 or fewer crossings that have unknotting number one. we show that an alternating, large algebraic knot with unknotting number one can always be unknotted in an alternating diagram. as part of the above work, we determine the hyperbolic knots in a solid torus which admit a non - integral, toroidal dehn surgery. finally, we show that having unknotting number one is invariant under mutation.
arxiv:math/0601265
deep learning techniques have significantly advanced in providing accurate visual odometry solutions by leveraging large datasets. however, generating uncertainty estimates for these methods remains a challenge. traditional sensor fusion approaches in a bayesian framework are well - established, but deep learning techniques with millions of parameters lack efficient methods for uncertainty estimation. this paper addresses the issue of uncertainty estimation for pre - trained deep - learning models in monocular visual odometry. we propose formulating a factor graph on an implicit layer of the deep learning network to recover relative covariance estimates, which allows us to determine the covariance of the visual odometry ( vo ) solution. we showcase the consistency of the deep learning engine ' s covariance approximation with an empirical analysis of the covariance model on the euroc datasets to demonstrate the correctness of our formulation.
arxiv:2403.13170
denote by $g$ the group of interval exchange transformations (iets) on the unit interval. let $g_{per} \subset g$ be the subgroup generated by torsion elements in $g$ (periodic iets), and let $g_{rot} \subset g$ be the subset of 2-iets (rotations). the elements of the subgroup $g_1 = \langle g_{per}, g_{rot} \rangle \subset g$ (generated by the sets $g_{per}$ and $g_{rot}$) are characterized constructively in terms of their sah-arnoux-fathi (saf) invariant. the characterization implies that a non-rotation type 3-iet lies in $g_1$ if and only if the lengths of its exchanged intervals are linearly dependent over $\mathbb{Q}$. in particular, $g_1 \subsetneq g$. the main tools used in the paper are the saf invariant and a recent result by y. vorobets that $g_{per}$ coincides with the commutator subgroup of $g$.
arxiv:1208.1023
we report an extensive search for lyman - alpha emitters ( laes ) at z = 6. 5 in the subaru deep field. subsequent spectroscopy with subaru and keck identified eight more laes, giving a total of 17 spectroscopically confirmed laes at z = 6. 5. based on this spectroscopic sample of 17, complemented by a photometric sample of 58 laes, we have derived a more accurate lyman - alpha luminosity function of laes at z = 6. 5, which reveals an apparent deficit at the bright end of ~ 0. 75 mag fainter l *, compared with that observed at z = 5. 7. the difference in the lae luminosity functions between z = 5. 7 and 6. 5 is significant at the 3 - sigma level, which is reduced to 2 - sigma when cosmic variance is taken into account. this result may imply that the reionization of the universe has not been completed at z = 6. 5. we found that the spatial distribution of laes at z = 6. 5 was homogeneous over the field. we discuss the implications of these results for the reionization of the universe.
arxiv:astro-ph/0604149
one of the elegant achievements in the history of proof theory is the characterization of the provably total recursive functions of an arithmetical theory by its proof-theoretic ordinal as a way to measure the time complexity of the functions. unfortunately, the machinery is not sufficiently fine-grained to be applicable to the weak theories on the one hand and to capture the bounded functions with bounded definitions of strong theories on the other. in this paper, we develop such a machinery to address the bounded theorems of both strong and weak theories of arithmetic. in the first part, we provide a refined version of ordinal analysis to capture the feasibly definable and bounded functions that are provably total in $\mathrm{PA} + \bigcup_{\beta \prec \alpha} \mathrm{TI}(\prec_\beta)$, the extension of peano arithmetic by transfinite induction up to the ordinals below $\alpha$. roughly speaking, we identify the functions as the ones that are computable by a sequence of $\mathrm{PV}$-provable polynomial time modifications on an initial polynomial time value, where the computational steps are indexed by the ordinals below $\alpha$, decreasing by the modifications. in the second part, and choosing $l \leq k$, we use a similar technique to capture the functions with bounded definitions in the theory $T^k_2$ (resp. $S^k_2$) as the functions computable by an exponentially (resp. polynomially) long sequence of $\mathrm{PV}_{k-l+1}$-provable reductions between $l$-turn games starting with an explicit $\mathrm{PV}_{k-l+1}$-provable winning strategy for the first game.
arxiv:2404.11218
the nearest neighbor rule is a classic yet essential classification model, particularly in problems where the supervising information is given by pairwise dissimilarities and the embedding function are not easily obtained. prototype selection provides means of generalization and improving efficiency of the nearest neighbor model, but many existing methods assume and rely on the analyses of the input vector space. in this paper, we explore a dissimilarity - based, parametrized model of the nearest neighbor rule. in the proposed model, the selection of the nearest prototypes is influenced by the parameters of the respective prototypes. it provides a formulation for minimizing the violation of the extended nearest neighbor rule over the training set in a tractable form to exploit numerical techniques. we show that the minimization problem reduces to a large - margin principle learning and demonstrate its advantage by empirical comparisons with other prototype selection methods.
arxiv:1509.08102
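a minimal version of the parametrized nearest-prototype rule sketched in the abstract above: each prototype carries a parameter that shifts its effective dissimilarity before the argmin, so learning the parameters reshapes which prototype wins. the per-prototype bias form below is our simplifying assumption, not necessarily the paper's exact parametrization.

```python
import numpy as np

def predict(dissim, labels, bias):
    """Nearest-prototype classification from a dissimilarity matrix.

    dissim: (n_queries, n_prototypes) pairwise dissimilarities.
    labels: (n_prototypes,) prototype class labels.
    bias:   (n_prototypes,) per-prototype parameters that shift the
            effective dissimilarity before taking the nearest
            prototype.  A simplified stand-in for the paper's
            parametrized selection rule.
    """
    nearest = np.argmin(dissim + bias, axis=1)
    return labels[nearest]

d = np.array([[0.2, 0.9, 1.1],
              [1.0, 0.3, 0.8]])
y = np.array([0, 1, 1])
print(predict(d, y, np.zeros(3)))                  # -> [0 1]
# a large bias on prototype 0 effectively removes it from the model:
print(predict(d, y, np.array([5.0, 0.0, 0.0])))    # -> [1 1]
```

note that the rule only needs the dissimilarity matrix, never an embedding of the inputs, which is the point of the dissimilarity-based setting; training then amounts to choosing the bias vector (and the retained prototypes) to minimize violations of the extended nearest neighbor rule.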
segregation patterns of size - bidisperse particle mixtures in a fully - three - dimensional flow produced by alternately rotating a spherical tumbler about two perpendicular axes are studied over a range of particle sizes and volume ratios using both experiments and a continuum model. pattern formation results from the interaction of size segregation with chaotic regions and non - mixing islands of the flow. specifically, large particles in the flowing surface layer are preferentially deposited in non - mixing islands despite the effects of collisional diffusion and chaotic transport. the protocol - dependent structure of the unstable manifolds of the flow surrounding the non - mixing islands provides further insight into why certain segregation patterns are more robust than others.
arxiv:1901.02988
designing effective game tutorials is crucial for a smooth learning curve for new players, especially in games with many rules and complex core mechanics. evaluating the effectiveness of these tutorials usually requires multiple iterations with testers who have no prior knowledge of the game. recent vision - language models ( vlms ) have demonstrated significant capabilities in understanding and interpreting visual content. vlms can analyze images, provide detailed insights, and answer questions about their content. they can recognize objects, actions, and contexts in visual data, making them valuable tools for various applications, including automated game testing. in this work, we propose an automated game - testing solution to evaluate the quality of game tutorials. our approach leverages vlms to analyze frames from video game tutorials, answer relevant questions to simulate human perception, and provide feedback. this feedback is compared with expected results to identify confusing or problematic scenes and highlight potential errors for developers. in addition, we publish complete tutorial videos and annotated frames from different game versions used in our tests. this solution reduces the need for extensive manual testing, especially by speeding up and simplifying the initial development stages of the tutorial to improve the final game experience.
arxiv:2408.08396
inspired by human vision, we propose a new periphery - fovea multi - resolution driving model that predicts vehicle speed from dash camera videos. the peripheral vision module of the model processes the full video frames in low resolution. its foveal vision module selects sub - regions and uses high - resolution input from those regions to improve its driving performance. we train the fovea selection module with supervision from driver gaze. we show that adding high - resolution input from predicted human driver gaze locations significantly improves the driving accuracy of the model. our periphery - fovea multi - resolution model outperforms a uni - resolution periphery - only model that has the same amount of floating - point operations. more importantly, we demonstrate that our driving model achieves a significantly higher performance gain in pedestrian - involved critical situations than in other non - critical situations.
arxiv:1903.09950
we investigate the creation and control of emergent collective behavior and quantum correlations using feedback in an emitter - waveguide system using a minimal model. employing homodyne detection of photons emitted from a laser - driven emitter ensemble into the modes of a waveguide allows to generate intricate dynamical phases. in particular, we show the emergence of a time - crystal phase, the transition to which is controlled by the feedback strength. feedback enables furthermore the control of many - body quantum correlations, which become manifest in spin squeezing in the emitter ensemble. developing a theory for the dynamics of fluctuation operators we discuss how the feedback strength controls the squeezing and investigate its temporal dynamics and dependence on system size. the largely analytical results allow to quantify spin squeezing and fluctuations in the limit of large number of emitters, revealing critical scaling of the squeezing close to the transition to the time - crystal. our study corroborates the potential of integrated emitter - waveguide systems - - which feature highly controllable photon emission channels - - for the exploration of collective quantum phenomena and the generation of resources, such as squeezed states, for quantum enhanced metrology.
arxiv:2102.02719
we present an abstract method for deriving decay estimates on the resolvents and semigroups of non-symmetric operators in banach spaces in terms of estimates in another smaller reference banach space. this applies to a class of operators writing as a regularizing part plus a dissipative part. the core of the method is a high-order quantitative factorization argument on the resolvents and semigroups. we then apply this approach to the fokker-planck equation, to the kinetic fokker-planck equation in the torus, and to the linearized boltzmann equation in the torus. we finally use this information on the linearized boltzmann semigroup to study perturbative solutions for the nonlinear boltzmann equation. we introduce a non-symmetric energy method to prove nonlinear stability in this context in $L^1_v L^\infty_x (1+|v|^k)$, $k > 2$, with sharp rate of decay in time. as a consequence of these results we obtain the first constructive proof of exponential decay, with sharp rate, towards global equilibrium for the full nonlinear boltzmann equation for hard spheres, conditionally to some smoothness and (polynomial) moment estimates. this improves the result in [32] where polynomial rates at any order were obtained, and solves the conjecture raised in [91, 29, 86] about the optimal decay rate of the relative entropy in the h-theorem.
arxiv:1006.5523
numerous and all unsuccessful attempts of experimental search for monopole in cosmic rays and on accelerators in high energy particle collisions have been done since the possibility of existence of a magnetic monopole has been surveyed in 1931. also the searches have been carried out in mica for monopole tracks as well as for relict monopoles, entrapped by ferromagnetic inclusions in iron - ores, moon rock and meteorites. a new method of search for supermassive cosmic and relict monopoles by magnetically ordered film is considered. this approach resembles the traditional method of nuclear emulsion chamber. apparently the proposed method is particularly attractive for detection of relict monopoles, released from melting iron ore.
arxiv:hep-ph/9909528
several explanations for the existence of ultra high energy cosmic rays ( uhecr ) invoke the idea that they originate from the decay of massive particles created in the reheating following inflation. it has been suggested that the decay products can explain the observed isotropic flux of uhecr. we have calculated the anisotropy expected for various models of the dark matter distribution and find that present data are too sparse above 4 x 10 ^ 19 ev to discriminate between different models. however, after three years of operation of the southern part of the pierre auger observatory great progress in testing the proposals is expected.
arxiv:astro-ph/9905240
this paper presents two efficient primality tests that quickly and accurately test all integers up to $ 2 ^ { 64 } $.
arxiv:2311.07048
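a standard way to test 64-bit primality deterministically, for comparison with the abstract above, is miller-rabin with the first twelve primes as witnesses, a set known to be sufficient for all n below about 3.3 x 10^24 and hence for all n < 2^64. this is a textbook construction, not necessarily either of the two tests the paper proposes.

```python
def is_prime(n):
    """Deterministic Miller-Rabin primality test for 64-bit integers.

    Testing the first twelve primes as witnesses is known to be
    sufficient for all n < 3.3 * 10^24, which covers n < 2^64.
    A standard construction, not the paper's algorithms.
    """
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in witnesses:
        if n % p == 0:
            return n == p
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False        # a witnesses the compositeness of n
    return True
```

the three-argument `pow` keeps every intermediate value reduced mod n, so the test runs in a handful of modular exponentiations even at the top of the 64-bit range.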
we present a consistent effective theory that violates the null energy condition (nec) without developing any instabilities or other pathological features. the model is the ghost condensate with the global shift symmetry softly broken by a potential. we show that this system can drive a cosmological expansion with dh/dt > 0. demanding the absence of instabilities in this model requires dh/dt <~ h^2. we then construct a general low-energy effective theory that describes scalar fluctuations about an arbitrary frw background, and argue that the qualitative features found in our model are very general for stable systems that violate the nec. violating the nec allows dramatically non-standard cosmological histories. to illustrate this, we construct an explicit model in which the expansion of our universe originates from an asymptotically flat state in the past, smoothing out the big-bang singularity within control of a low-energy effective theory. this gives an interesting alternative to standard inflation for solving the horizon problem. we also construct models in which the present acceleration has w < -1; a periodic ever-expanding universe; and a model with a smooth "bounce" connecting a contracting and expanding phase.
arxiv:hep-th/0606090
this paper considers a finite sample perspective on the problem of identifying an lti system from a finite set of possible systems using trajectory data. to this end, we use the maximum likelihood estimator to identify the true system and provide an upper bound for its sample complexity. crucially, the derived bound does not rely on a potentially restrictive stability assumption. additionally, we leverage tools from information theory to provide a lower bound to the sample complexity that holds independently of the used estimator. the derived sample complexity bounds are analyzed analytically and numerically.
arxiv:2409.11141
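for a scalar system with gaussian noise of known variance, the maximum likelihood estimator over a finite candidate set reduces to picking the candidate with the smallest residual sum of squares on the trajectory. the sketch below illustrates this scalar special case with invented numbers; the paper treats general (possibly unstable) systems and bounds the sample complexity of the estimator.

```python
import numpy as np

def ml_select(x, candidates):
    """Maximum likelihood choice among finitely many scalar LTI models.

    Under x[t+1] = a * x[t] + w[t] with i.i.d. Gaussian noise of known
    variance, maximizing the trajectory likelihood over the finite
    candidate set is equivalent to minimizing the residual sum of
    squares.  A scalar sketch of the finite-hypothesis setting only.
    """
    resid = [np.sum((x[1:] - a * x[:-1]) ** 2) for a in candidates]
    return candidates[int(np.argmin(resid))]

# simulate a trajectory from the true system a = 0.9
rng = np.random.default_rng(0)
T, a_true = 200, 0.9
x = np.empty(T)
x[0] = 1.0
for t in range(T - 1):
    x[t + 1] = a_true * x[t] + 0.1 * rng.standard_normal()

a_hat = ml_select(x, [0.2, 0.9, 1.4])
```

note that nothing in the estimator requires the candidates to be stable: an unstable candidate like a = 1.4 simply accumulates a large residual, which is in the spirit of the paper's point that no stability assumption is needed.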
neural radiance field (nerf) has shown impressive results in novel view synthesis, particularly in virtual reality (vr) and augmented reality (ar), thanks to its ability to represent scenes continuously. however, when just a few input view images are available, nerf tends to overfit the given views and thus make the estimated depths of pixels share almost the same value. unlike previous methods that conduct regularization by introducing complex priors or additional supervisions, we propose a simple yet effective method that explicitly builds depth-aware consistency across input views to tackle this challenge. our key insight is that by forcing the same spatial points to be sampled repeatedly in different input views, we are able to strengthen the interactions between views and therefore alleviate the overfitting problem. to achieve this, we build the neural networks on layered representations (i.e., multiplane images), and the sampling point can thus be resampled on multiple discrete planes. furthermore, to regularize the unseen target views, we constrain the rendered colors and depths from different input views to be the same. although simple, extensive experiments demonstrate that our proposed method can achieve better synthesis quality over state-of-the-art methods.
arxiv:2402.16407
in this letter we have presented a novel version of "long-lived" gluinos in supersymmetric models with the gluino the lightest ordinary supersymmetric particle (losp) and an axino lsp. within certain ranges of the axion decay constant $f_a < 1 \times 10^{10}$ gev, the gluino mass bounds are reduced to less than 1000 gev. the best limits can be obtained by looking for decaying r-hadrons in the detector, where the gluino decays to a gluon and an axino in the calorimeters. susy models with a gluino losp can occur over a significant region of parameter space in either mirage-mediation or general gauge-mediated susy breaking models. the gluino losp is not constrained by cosmology, but in this scenario the axion/axino may be good dark matter candidates.
arxiv:1508.04373
we study time-reversal symmetry in dynamical systems with finite phase space, with applications to birational maps reduced over finite fields. for a polynomial automorphism with a single family of reversing symmetries, a universal (i.e., map-independent) distribution function $R(x) = 1 - e^{-x}(1+x)$ has been conjectured to exist for the normalized cycle lengths of the reduced map in the large field limit (j. a. g. roberts and f. vivaldi, nonlinearity 18 (2005) 2171-2192). we show that these statistics correspond to those of a composition of two random involutions, having an appropriate number of fixed points. this model also explains the experimental observation that, asymptotically, almost all cycles are symmetrical, and that the probability of occurrence of repeated periods is governed by a poisson law.
arxiv:0905.4135
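the random-involution model in the abstract above is easy to simulate: compose two random involutions with prescribed numbers of fixed points and collect the cycle lengths of the composition, whose suitably normalized empirical distribution can then be compared against $R(x) = 1 - e^{-x}(1+x)$. the fixed-point counts in the sketch below are illustrative only; the appropriate counts depend on the map being modeled.

```python
import numpy as np

def random_involution(n, n_fixed, rng):
    """Random involution on {0, ..., n-1} with exactly n_fixed fixed
    points (n - n_fixed must be even): choose which points move,
    then pair them up into 2-cycles."""
    pts = rng.permutation(n)
    inv = np.arange(n)
    moved = pts[n_fixed:]
    for i in range(0, len(moved), 2):
        a, b = moved[i], moved[i + 1]
        inv[a], inv[b] = b, a
    return inv

def cycle_lengths(perm):
    """Lengths of the cycles of a permutation given as an array."""
    seen = np.zeros(len(perm), dtype=bool)
    out = []
    for s in range(len(perm)):
        j, length = s, 0
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        if length:
            out.append(length)
    return out

rng = np.random.default_rng(1)
n = 1000
g = random_involution(n, 2, rng)    # the fixed-point counts here are
h = random_involution(n, 0, rng)    # illustrative, not the paper's
lengths = cycle_lengths(g[h])       # cycles of the composition g o h
```

since `g[h]` is the permutation i -> g(h(i)), the cycle lengths partition {0, ..., n-1}; repeating the experiment over many draws builds the empirical cycle-length statistics to set against the conjectured universal law.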
we examine the gluon distributions in nuclei in the asymptotic region defined by $q^2 \to \infty$, $x \to 0$. an analysis using the double asymptotic scaling variables of ball and forte is proposed. new scaling relations are predicted which can help disentangle the different mechanisms of low-$x$ perturbative qcd evolution in nuclei.
arxiv:hep-ph/0003235
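For context, the Ball-Forte double asymptotic scaling analysis is usually phrased in terms of two variables; the normalization below follows a common convention and may differ from the one used in this paper:

```latex
\sigma \equiv \sqrt{\xi \, \zeta}, \qquad
\rho \equiv \sqrt{\xi / \zeta}, \qquad
\xi \equiv \ln\frac{x_0}{x}, \qquad
\zeta \equiv \ln\frac{\alpha_s(Q_0^2)}{\alpha_s(Q^2)}
```

Double asymptotic scaling is then the statement that the suitably rescaled gluon distribution rises as $\exp(2\gamma\sigma)$, with $\gamma = \sqrt{12/\beta_0}$, becoming independent of $\rho$ at large $\sigma$ and $\rho$.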
In this paper, we introduce the concept of two-way coding, which originates in communication theory, where it characterizes coding schemes for two-way channels, into control theory, particularly to facilitate the analysis and design of feedback control systems under injection attacks. Moreover, we propose the notion of attack decoupling, and show how the controller and the two-way coding can be co-designed to nullify the transfer function from attack to plant, rendering the attack effect zero both in the transient phase and in steady state.
arxiv:1909.01999
There is a huge number of excellent and comprehensive textbooks on quantum mechanics. They differ mainly in their approach, more or less oriented toward the formalism rather than the phenomenology, as well as in the topics covered. These lectures have been based mainly on the classical textbook by Gasiorowicz (1974). I must confess that the main reason for my choice of Gasiorowicz (1974) is affective, as it was the textbook from which I first learned the basic principles of quantum mechanics. Beyond my personal taste, I now recognize that Gasiorowicz (1974) is still a very good textbook on quantum mechanics, with a rigorous theoretical approach accompanied by a wide collection of applications. While the textbook by Gasiorowicz was my main basis, I have also taken much from other textbooks such as Phillips (2003), as well as from the excellent classical textbook by Dirac (1981). In order to avoid complications in the mathematics and in the notation, the topic is presented in these notes with reference to one-dimensional systems, with just a few marginal extensions to the three-dimensional formulation.
arxiv:1201.4234
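As a flavor of the one-dimensional systems these notes focus on, here is a minimal numerical sketch (not taken from the notes themselves): solving the time-independent Schrodinger equation for a particle in an infinite square well, with hbar = m = 1 and L = 1, by the shooting method. The ground-state energy should approach pi^2/2, roughly 4.9348.

```python
import math

def psi_at_wall(E, steps=2000):
    """Integrate psi'' = -2*E*psi across [0, 1] with psi(0) = 0 and a small
    initial slope, using a central-difference (Stoermer) step; return psi(1).
    An eigenvalue is an E for which psi vanishes at the far wall."""
    h = 1.0 / steps
    psi_prev, psi = 0.0, h          # psi(0) = 0, psi(h) ~ h
    for _ in range(steps - 1):
        psi_prev, psi = psi, 2 * psi - psi_prev - 2 * E * h * h * psi
    return psi

# Bisect on E: psi(1) changes sign across the ground-state eigenvalue,
# which for the infinite well lies between 4 and 6.
lo, hi = 4.0, 6.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if psi_at_wall(lo) * psi_at_wall(mid) <= 0:
        hi = mid
    else:
        lo = mid
E0 = 0.5 * (lo + hi)
print(E0)  # close to pi**2 / 2
```

The same shooting-and-bisection loop finds excited states by widening the bracket, which is why the one-dimensional setting is so convenient pedagogically.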
The increasing interest in open source software has led to the emergence of large language-specific package distributions of reusable software libraries, such as npm and RubyGems. These software packages can be subject to vulnerabilities that may expose dependent packages through explicitly declared dependencies. Using Snyk's vulnerability database, this article empirically studies vulnerabilities affecting npm and RubyGems packages. We analyse how and when these vulnerabilities are disclosed and fixed, and how their prevalence changes over time. We also analyse how vulnerable packages expose their direct and indirect dependents to vulnerabilities. We distinguish between two types of dependents: packages distributed via the package manager, and external GitHub projects depending on npm packages. We observe that the number of vulnerabilities in npm is increasing and that they are disclosed faster than vulnerabilities in RubyGems. For both package distributions, the time required to disclose vulnerabilities is increasing over time. Vulnerabilities in npm packages affect a median of 30 package releases, compared to 59 releases for RubyGems packages. A large proportion of external GitHub projects is exposed to vulnerabilities coming from direct or indirect dependencies. 33% and 40% of the dependency vulnerabilities to which projects and packages, respectively, are exposed have their fixes in more recent releases within the same major release range of the used dependency. Our findings reveal that more effort is needed to better secure open source package distributions.
arxiv:2106.06747
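The "fix within the same major release range" statistic can be illustrated with a toy computation; the advisory entries below are made-up examples, not data from the study or from Snyk's database.

```python
# Toy advisories: the vulnerable release a dependent currently uses,
# and the first release containing the fix. Entries are illustrative only.
advisories = [
    {"package": "pkg-a", "used": "4.17.11", "fixed_in": "4.17.21"},
    {"package": "pkg-b", "used": "5.2.0",   "fixed_in": "6.0.1"},
    {"package": "pkg-c", "used": "1.0.4",   "fixed_in": "1.1.0"},
]

def major(version):
    """Leading component of a semver-style version string."""
    return int(version.split(".")[0])

# A fix is reachable by a minor/patch upgrade when the fixing release
# shares the major version of the release currently in use.
same_major = [a for a in advisories
              if major(a["used"]) == major(a["fixed_in"])]
share = len(same_major) / len(advisories)
print(f"{share:.0%} of fixes stay within the same major range")
```

On this toy data two of three fixes stay in range; the study reports 33% and 40% for the real npm and RubyGems ecosystems.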
Intrusion detection systems (IDS) are key components for securing critical infrastructures, capable of detecting malicious activities on networks or hosts. The procedure of implementing an IDS for Internet of Things (IoT) networks is not without challenges, due to the variability of these systems and specifically the difficulty in accessing data. The specifics of these very constrained devices render the design of an IDS capable of dealing with the varied attacks a very challenging problem and a very active research subject. In the current state of the literature, a number of approaches have been proposed to improve the efficiency of intrusion detection, catering to some of these limitations, such as resource constraints and mobility. In this article, we review works on IDSs specifically for these kinds of devices from 2008 to 2018, collecting a total of 51 different IDS papers. We summarise the current themes of the field, summarise the techniques employed to train and deploy the IDSs, and provide a qualitative evaluation of these approaches. While these works provide valuable insights and solutions for sub-parts of these constraints, we discuss the limitations of the solutions as a whole, in particular which kinds of attacks these approaches struggle to detect and the setup limitations that are unique to this kind of system. We find that although several papers claim novelty, few inter-paper comparisons have been made; there is a dire need for sharing of datasets and there are almost no shared code repositories, raising the need for a thorough comparative evaluation.
arxiv:2105.08096
Among metrics of constant positive curvature on a punctured compact Riemann surface with conical singularities at the punctures, dihedral monodromy means that the action of the monodromy group globally preserves a pair of antipodal points. Using recent results about local invariants of quadratic differentials, we give a complete characterization of the set of conical angles realized by some cone spherical metric with dihedral monodromy.
arxiv:2112.00594
We consider dimensional crossover for an $O(N)$ Landau-Ginzburg-Wilson model on a $d$-dimensional film geometry of thickness $L$ in the large-$N$ limit. We calculate the full universal crossover scaling forms for the free energy and the equation of state. We compare the results obtained using "environmentally friendly" renormalization with those found using a direct, non-renormalization-group approach. A set of effective critical exponents is calculated, and scaling laws for these exponents are shown to hold exactly, thereby yielding non-trivial relations between the various thermodynamic scaling functions.
arxiv:cond-mat/9601146