Quasigroup equational definitions are given.
|
arxiv:1003.3175
|
The average transverse energy of nucleons and intermediate mass fragments observed in the heavy ion reaction Xe (50A MeV) + Sn shows the same linear increase as a function of their mass as observed in heavy ion collisions up to the highest energies available today, and fits well into the systematics. At higher energies this observation has been interpreted as a sign of a strong radial flow in an otherwise thermalized system. Investigating the reaction with quantum molecular dynamics simulations, we find between 50A MeV and 200A MeV a change in the reaction mechanism. At 50A MeV the apparent radial flow is merely caused by an in-plane flow and Coulomb repulsion. The average transverse fragment energy does not change in the course of the reaction and is equal to the initial fragment energy due to the Fermi motion. At 200A MeV there are two kinds of fragments: those formed from spectator matter and those from the center of the reaction. There the transverse energy is caused by the pressure from the compressed nuclear matter. In both cases we observe a binary event structure, even in central collisions. This demonstrates as well the non-thermal character of the reaction. The actual process which leads to multifragmentation is rather complex and is discussed in detail.
|
arxiv:nucl-th/9805010
|
We study the dictionary learning (aka sparse coding) problem of obtaining a sparse representation of data points, by learning \emph{dictionary vectors} upon which the data points can be written as sparse linear combinations. We view this problem from a geometry perspective as the spanning set of a subspace arrangement, and focus on understanding the case when the underlying hypergraph of the subspace arrangement is specified. For this fitted dictionary learning problem, we completely characterize the combinatorics of the associated subspace arrangements (i.e., their underlying hypergraphs). Specifically, a combinatorial rigidity-type theorem is proven for a type of geometric incidence system. The theorem characterizes the hypergraphs of subspace arrangements that generically yield (a) at least one dictionary, (b) a locally unique dictionary (i.e., at most a finite number of isolated dictionaries) of the specified size. We are unaware of prior application of combinatorial rigidity techniques in the setting of dictionary learning, or even in machine learning. We also provide a systematic classification of problems related to dictionary learning, together with various algorithms, their assumptions and performance.
|
arxiv:1402.7344
|
Modern cryptography algorithms are commonly used to ensure information security. Prime numbers are needed in many asymmetric cryptography algorithms. For example, the RSA algorithm selects two large prime numbers and multiplies them together to obtain a large composite number whose factorization is very difficult. Producing a prime number is not an easy task, as primes are not distributed regularly through the integers. Primality testing algorithms are used to determine whether a particular number is prime or composite. In this paper, an intensive survey is thoroughly conducted among several primality testing algorithms, showing the pros and cons, the time complexity, and a brief summary of each algorithm. In addition, an implementation of these algorithms is carried out using Java and Python as programming languages to evaluate the efficiency of both the algorithms and the programming languages.
|
arxiv:2006.08444
|
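As a hedged illustration of the abstract above (not code from the paper itself), a minimal Miller-Rabin test, one of the standard probabilistic primality algorithms such a survey would cover, can be sketched in Python:

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test (sketch)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness of compositeness
    return True
```

Each round errs with probability at most 1/4, so 20 rounds make a false "prime" answer astronomically unlikely; this is the kind of accuracy/time trade-off the survey compares across tests.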
Total variation (TV) is a widely used regularizer for stabilizing the solution of ill-posed inverse problems. In this paper, we propose a novel proximal-gradient algorithm for minimizing a TV-regularized least-squares cost functional. Our method replaces the standard proximal step of TV by a simpler alternative that computes several independent proximals. We prove that the proposed parallel proximal method converges to the TV solution while requiring no sub-iterations. The results in this paper could enhance the applicability of TV for solving very large scale imaging inverse problems.
|
arxiv:1510.00466
|
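The idea of replacing the TV proximal by several independent proximals can be illustrated on a 1-D toy problem. The sketch below is not the authors' algorithm: it splits the 1-D total variation into even- and odd-indexed pairwise difference terms, each of which has a closed-form and independently computable (hence parallelizable) proximal, and alternates them with a gradient step on the least-squares term.

```python
import numpy as np

def pair_prox(x, idx, lam):
    """Closed-form prox of lam * sum |x[i+1] - x[i]| over disjoint pairs (i, i+1):
    shrink each pairwise difference toward zero by up to 2*lam."""
    a, b = x[idx], x[idx + 1]
    d = b - a
    d_new = np.sign(d) * np.maximum(np.abs(d) - 2 * lam, 0.0)
    t = (d - d_new) / 2
    x[idx] = a + t
    x[idx + 1] = b - t
    return x

def tv_denoise(y, lam=0.5, step=0.5, iters=200):
    """Toy proximal-gradient loop for (1/2)||x - y||^2 + lam * TV(x) in 1-D,
    using two independent pairwise proximals per iteration."""
    x = y.astype(float).copy()
    n = len(y)
    even = np.arange(0, n - 1, 2)
    odd = np.arange(1, n - 1, 2)
    for _ in range(iters):
        x = x - step * (x - y)              # gradient step on the quadratic
        x = pair_prox(x, even, step * lam)  # prox of even-difference terms
        x = pair_prox(x, odd, step * lam)   # prox of odd-difference terms
    return x
```

The two `pair_prox` calls touch disjoint coordinates internally, which is what makes each of them embarrassingly parallel; the paper's contribution is a convergence guarantee for a scheme of this flavor without sub-iterations.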
We study the motion of a random walker in one longitudinal and d transverse dimensions with a quenched power-law correlated velocity field in the longitudinal x-direction. The model is a modification of the Matheron-de Marsily (MdM) model, with long-range velocity correlation. For a velocity correlation function dependent on transverse coordinates y as 1/(a + |y_1 - y_2|)^alpha, we analytically calculate the two-time correlation function of the x-coordinate. We find that the motion of the x-coordinate is a fractional Brownian motion (fBm), with a Hurst exponent H = max[1/2, (1 - alpha/4), (1 - d/4)]. From this and known properties of fBm, we calculate the disorder-averaged persistence probability of x(t) up to time t. We also find the lines in the parameter space of d and alpha along which there is marginal behaviour. We present results of simulations which support our analytical calculation.
|
arxiv:cond-mat/0511008
|
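A minimal simulation of the classic Matheron-de Marsily setup can make the model above concrete. This sketch uses uncorrelated layer velocities; the power-law correlated field studied in the abstract would replace the i.i.d. draws. All names and parameters are illustrative.

```python
import numpy as np

def mdm_walk(steps=10000, layers=200, seed=0):
    """Matheron-de Marsily walker: unbiased random walk in a transverse
    layer index y, advection in x by a quenched random layer velocity v(y).
    Returns the x-trajectory."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(layers)   # quenched velocity field, one value per layer
    y = layers // 2                   # start in the middle layer
    x = np.zeros(steps)
    for t in range(1, steps):
        y = (y + rng.choice((-1, 1))) % layers  # transverse diffusion (periodic)
        x[t] = x[t - 1] + v[y]                  # longitudinal advection
    return x
```

Averaging x(t)^2 over many disorder realizations of `v` recovers the superdiffusive growth characteristic of the MdM model; the Hurst exponent quoted in the abstract describes the correlated generalization.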
We provide an informal overview of the theory of transport equations with non-smooth velocity fields, and of some applications of this theory to the well-posedness of hyperbolic systems of conservation laws.
|
arxiv:0911.2675
|
The minimum heat cost of computation is subject to bounds arising from Landauer's principle. Here, I derive bounds on finite modelling -- the production or anticipation of patterns (time-series data) -- by devices that model the pattern in a piecewise manner and are equipped with a finite amount of memory. When producing a pattern, I show that the minimum dissipation is proportional to the information in the model's memory about the pattern's history that never manifests in the device's future behaviour and must be expunged from memory. I provide a general construction of models that allow this dissipation to be reduced to zero. By also considering devices that consume, or effect arbitrary changes on, a pattern, I discuss how these finite models can form an information reservoir framework consistent with the second law of thermodynamics.
|
arxiv:1912.03217
|
We discuss whether some perturbed Friedmann-Robertson-Walker (FRW) universes could be creatable, i.e., could have vanishing energy, linear momentum and angular momentum, as would be expected if the universe arose as a quantum fluctuation. On account of previous results, the background is assumed to be either closed (with very small curvature) or flat. In the first case, fully arbitrary linear perturbations are considered; whereas in the flat case, we assume the existence of: (i) inflationary scalar perturbations, that is to say, Gaussian adiabatic scalar perturbations having a spectrum close to the Harrison-Zel'dovich one, and (ii) arbitrary tensor perturbations. We conclude that any closed perturbed universe is creatable, and also that, irrespective of the spectrum and properties of the inflationary gravitational waves, perturbed flat FRW universes with standard inflation are not creatable. Some considerations on pre-inflationary scalar perturbations are also presented. The creatable character of perturbed FRW universes is studied, for the first time, in this paper.
|
arxiv:0804.0861
|
In this short paper, we establish connection formulae for trivariate $q$-polynomials.
|
arxiv:2205.00713
|
The mean apparent magnitude of Starlink Mini direct-to-cell (DTC) satellites is 4.62, while the mean of magnitudes adjusted to a uniform distance of 1000 km is 5.50. DTCs average 4.9 times brighter than other Starlink Mini spacecraft at a common distance. We cannot currently separate the effects of the DTC antenna itself, the different attitude modes that may be required for DTC operations, and to what extent brightness mitigation procedures were in place at the times of our observations. In a best case scenario, where DTC brightness mitigation is as successful as that for other Minis and the DTC antenna does not add significantly to brightness, we estimate that DTCs will be about 2.6 times as bright as the others based upon their lower altitudes. The DTCs spend a greater fraction of their time in the Earth's shadow than satellites at higher altitudes. That will offset some of their impact on astronomical observing.
|
arxiv:2407.03092
|
We demonstrate that the plasmon frequency and Drude weight of the electron liquid in a doped graphene sheet are strongly renormalized by electron-electron interactions even in the long-wavelength limit. This effect is not captured by the random phase approximation (RPA) commonly used to describe electron fluids, and is due to coupling between the center of mass motion and the pseudospin degree of freedom of graphene's massless Dirac fermions. Making use of diagrammatic perturbation theory to first order in the electron-electron interaction, we show that this coupling enhances both the plasmon frequency and the Drude weight relative to the RPA value. We also show that interactions are responsible for a significant enhancement of the optical conductivity at frequencies just above the absorption threshold. Our predictions can be checked by far-infrared spectroscopy or inelastic light scattering.
|
arxiv:1101.4291
|
In this paper, we analyze a knapsack scheme suggested by Su, which relies on a new method called the permutation combination method. We demonstrate that this permutation method is useless to the security of the scheme. Because of the special super-increasing construction, we can break the scheme using the algorithm provided by Shamir. Finally, we provide an enhanced version of Su's scheme to avoid these attacks.
|
arxiv:1005.4012
|
The O(N) model of layered antiferro- and ferromagnets with a weak interlayer coupling and/or easy-axis anisotropy is considered. A renormalization group (RG) analysis in this model is performed, the results for N = 3 being expected to agree with those of the 1/M expansion in the CP^{M-1} model at M = 2. The quantum and classical cases are considered. A crossover from an isotropic 2D-like to a 3D Heisenberg (or 2D Ising) regime is investigated within the 1/N expansion. Analytical results for the temperature dependence of the (sublattice) magnetization are obtained in different regimes. The RG results for the ordering temperature are derived. In the quantum case they coincide with the corresponding results of the 1/N expansion. Numerical calculations based on the equations obtained yield a good agreement with experimental data on the layered perovskites La2CuO4, K2NiF4 and Rb2NiF4, and with Monte Carlo results for anisotropic classical systems.
|
arxiv:cond-mat/9704220
|
The activations of language transformers like GPT-2 have been shown to linearly map onto brain activity during speech comprehension. However, the nature of these activations remains largely unknown and presumably conflates distinct linguistic classes. Here, we propose a taxonomy to factorize the high-dimensional activations of language models into four combinatorial classes: lexical, compositional, syntactic, and semantic representations. We then introduce a statistical method to decompose, through the lens of GPT-2's activations, the brain activity of 345 subjects recorded with functional magnetic resonance imaging (fMRI) during the listening of ~4.6 hours of narrated text. The results highlight two findings. First, compositional representations recruit a more widespread cortical network than lexical ones, and encompass the bilateral temporal, parietal and prefrontal cortices. Second, contrary to previous claims, syntax and semantics are not associated with separated modules, but, instead, appear to share a common and distributed neural substrate. Overall, this study introduces a versatile framework to isolate, in brain activity, the distributed representations of linguistic constructs.
|
arxiv:2103.01620
|
We present 850 $\mu$m polarization and $\rm C^{18}O(3-2)$ molecular line observations toward the X-shaped Nebula in the California molecular cloud using the JCMT SCUBA-2/POL-2 and HARP instruments. The 850 $\mu$m emission shows that the observed region includes two elongated filamentary structures (Fil1 and Fil2) having chains of regularly spaced cores. We measured the mass per unit length of the filaments and found that Fil1 and Fil2 are thermally super- and subcritical, respectively, but both are subcritical if nonthermal turbulence is considered. The mean projected spacings ($\Delta \bar{S}$) of cores in Fil1 and Fil2 are 0.13 and 0.16 pc, respectively. $\Delta \bar{S}$ are smaller than the $4 \times$ filament width expected in the classical cylinder fragmentation model. The large-scale magnetic field orientations shown by Planck are perpendicular to the long axes of Fil1 and Fil2, while those in the filaments obtained from the high-resolution polarization data of JCMT are disturbed, but those in Fil1 tend to have longitudinal orientations. Using the modified Davis-Chandrasekhar-Fermi (DCF) method, we estimated the magnetic field strengths ($B_{\rm pos}$) of the filaments, which are 110 $\pm$ 80 and 90 $\pm$ 60 $\mu$G. We calculated the gravitational, kinematic, and magnetic energies of the filaments, and found that the fraction of magnetic energy is larger than 60% in both filaments. We propose that a dominant magnetic energy may lead the filament to be fragmented into aligned cores as suggested by Tang et al., and that a shorter core spacing can be due to a projection effect via the inclined geometry of the filaments or due to a non-negligible longitudinal magnetic field in the case of Fil1.
|
arxiv:2305.09949
|
Let $C$ be a smooth irreducible projective curve and let $(L, H^0(C, L))$ be a complete and generated linear series on $C$. Denote by $M_L$ the kernel of the evaluation map $H^0(C, L) \otimes \mathcal{O}_C \to L$. The exact sequence $0 \to M_L \to H^0(C, L) \otimes \mathcal{O}_C \to L \to 0$ fits into a commutative diagram that we call Butler's diagram. This diagram induces in a natural way a multiplication map on global sections $m_W : W^{\vee} \otimes H^0(K_C) \to H^0(S^{\vee} \otimes K_C)$, where $W \subseteq H^0(C, L)$ is a subspace and $S^{\vee}$ is the dual of a subbundle $S \subset M_L$. When the subbundle $S$ is a stable bundle, we show that the map $m_W$ is surjective. When $C$ is a Brill-Noether general curve, we use the surjectivity of $m_W$ to give another proof of the semistability of $M_L$; moreover, we fill a gap in an incomplete argument by Butler: with the surjectivity of $m_W$ we give conditions to determine the stability of $M_L$, and such conditions imply the well-known stability conditions for $M_L$ stated precisely by Butler. Finally we obtain the equivalence between the stability of $M_L$ and the linear stability of $(L, H^0(L))$ on $\gamma$-gonal curves.
|
arxiv:1705.06829
|
We investigate the magnetic field effect on the spin gap state in CeRu2Al10 by measuring the magnetization and electrical resistivity. We found that the magnetization curve for the magnetic field H // c shows a metamagnetic-like anomaly at H* ~ 4 T below T_0 = 27 K, but no anomaly for H // a and H // b. A shoulder of the electrical resistivity at Ts ~ 5 K for I // c is suppressed by applying a longitudinal magnetic field above 5 T. Many anomalies are also found in the magnetoresistance for H // c below ~ 5 K. The obtained magnetic phase diagram consists of at least two or three phases below T_0. These results strongly indicate the existence of a fine structure on the low energy side of a spin gap state with the excitation energy of 8 meV recently observed in inelastic neutron scattering experiments.
|
arxiv:1005.4587
|
The term defect tolerance (DT) is often used to rationalize the exceptional optoelectronic properties of halide perovskites (HaPs) and their devices. Even though DT lacked direct experimental evidence, it became a "fact" in the field. DT in semiconductors implies that structural defects do not translate into the electrical and optical effects (e.g., due to charge trapping) associated with such defects. We present the first direct experimental evidence for DT in Pb-HaPs by comparing the structural quality of 2-dimensional (2D), 2D-3D, and 3D Pb-iodide HaP crystals with their optoelectronic characteristics using high-sensitivity methods. Importantly, we get information from the materials' bulk, because we sample at least a few hundred nanometers, up to several micrometers, from the sample's surface, which allows for assessing intrinsic bulk (and not only surface) properties of HaPs. The results point to DT in 3D, 2D-3D, and 2D Pb-HaPs. Overall, our data provide an experimental basis to rationalize DT in Pb-HaPs. These experiments and findings can guide the search for, and design of, other materials with DT.
|
arxiv:2305.16017
|
We consider a generating set of reparametrization invariants that can be constructed from the couplings and masses entering the scalar potential of the general two-Higgs-doublet model (2HDM). Being independent of Higgs-basis rotations, they generate a polynomial ring of basis invariants that represent the physical content of the model. Ignoring for the moment gauge and Yukawa interactions, we derive six-loop renormalization group equations (RGE) for all the invariants entering the set. We do not compute a single Feynman diagram but rely heavily on the general RGE results for scalar theories. We use linear algebra together with techniques from invariant theory. The latter not only allow one to compute the number of linearly independent invariants entering beta functions at a certain loop order (via Hilbert series) but also provide a convenient tool for dealing with polynomial relations (so-called syzygies) between invariants from the generating set.
|
arxiv:2501.14087
|
We prove an atomic type decomposition for the noncommutative martingale Hardy space $\h_p$ for all $0 < p < 2$ by an explicit constructive method using algebraic atoms as building blocks. Using this elementary construction, we obtain a weak form of the atomic decomposition of $\h_p$ for all $0 < p < 1$, and provide a constructive proof of the atomic decomposition for $p = 1$. We also study $(p, \infty)_c$-atoms, and show that every $(p, 2)_c$-atom can be decomposed into a sum of $(p, \infty)_c$-atoms; consequently, for every $0 < p \le 1$, the $(p, q)_c$-atoms lead to the same atomic space for all $2 \le q \le \infty$. As applications, we obtain a characterization of the dual space of the noncommutative martingale Hardy space $\h_p$ ($0 < p < 1$) as a noncommutative Lipschitz space via the weak form of the atomic decomposition. Our constructive method can also be applied to prove some sharp martingale inequalities.
|
arxiv:2001.08775
|
Design structure matrices (DSMs) are useful for representing high-level system structure, modeling interactions between design entities. DSMs are used for many visualization and abstraction activities. In this work, we propose the use of an existing DSM clustering algorithm to recover software architecture module views. To make it suitable for this domain, optimization has proved necessary. It was achieved through performance analysis and parameter tuning on the original algorithm. Results show that DSM clustering can be an alternative to other clustering algorithms.
|
arxiv:1709.07538
|
The Sturm oscillation theorem for second order differential equations was generalized to systems and higher order equations with positive leading coefficient by several authors. What we propose here is a Sturm oscillation theorem for systems of even order having strongly indefinite leading coefficient.
|
arxiv:0705.3516
|
The goal of the paper is to prove the equivalence of distributional and synthetic Ricci curvature lower bounds for a weighted Riemannian manifold with continuous metric tensor having Christoffel symbols in $L^2_{{\rm loc}}$, and with weight in $C^0 \cap W^{1,2}_{{\rm loc}}$.
|
arxiv:2402.06486
|
We present a unified approach to the behavior of two random growth models (external DLA and internal DLA) on infinite graphs, the second being an internal counterpart of the first. Even though the two models look quite similar, their behavior is completely different: while external DLA tends to build irregularities and fractal-like structures, internal DLA tends to fill up gaps and to produce regular clusters. We will also consider the aforementioned models on fractal graphs like the Sierpinski gasket and carpet, and present some recent results and possible questions to investigate.
|
arxiv:1902.03800
|
This work analyzes and parallelizes LearnedSort, the novel algorithm that sorts using machine learning models based on the cumulative distribution function. LearnedSort is analyzed under the lens of algorithms with predictions, and it is argued that LearnedSort is a learning-augmented samplesort. A parallel LearnedSort algorithm is developed by combining LearnedSort with the state-of-the-art samplesort implementation, IPS4o. Benchmarks on synthetic and real-world datasets demonstrate improved parallel performance for parallel LearnedSort compared to IPS4o and other sorting algorithms.
|
arxiv:2307.08637
|
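The CDF-based sorting idea behind the abstract above can be sketched as a toy (this is not the LearnedSort or IPS4o implementation): approximate the empirical CDF from a sample, scatter keys into rank-ordered buckets using the model's predicted rank, then sort each small bucket.

```python
import numpy as np

def cdf_bucket_sort(arr, n_buckets=64, sample_size=256):
    """Toy learning-augmented sort: a sample-based CDF model assigns each
    key a predicted rank, keys are scattered into rank-ordered buckets,
    and each bucket is sorted locally."""
    a = np.asarray(arr, dtype=float)
    rng = np.random.default_rng(0)
    # "train" the CDF model: a sorted sample of the data
    sample = np.sort(rng.choice(a, min(sample_size, len(a)), replace=False))
    # predicted CDF value of each key via binary search against the sample
    ranks = np.searchsorted(sample, a) / len(sample)
    bucket_ids = np.minimum((ranks * n_buckets).astype(int), n_buckets - 1)
    out = []
    for b in range(n_buckets):          # buckets are already globally ordered
        out.extend(sorted(a[bucket_ids == b]))
    return out
```

Because `searchsorted` is monotone in the key, bucket ids are monotone too, so concatenating the sorted buckets yields a fully sorted output; a skewed CDF model only unbalances bucket sizes, never correctness.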
The magnetic properties of graphene on finite geometries are studied using a self-consistent mean-field theory of the Hubbard model. This approach is known to predict ferromagnetic edge states close to the zig-zag edges in single-layer graphene quantum dots and nanoribbons. In order to assess the accuracy of this method, we perform complementary exact diagonalization and quantum Monte Carlo simulations. We observe good quantitative agreement for all quantities investigated, provided that the Coulomb interaction is not too strong.
|
arxiv:0910.5360
|
We use the theory of resolutions for a given Hilbert function to investigate the multiplicity conjectures of Huneke and Srinivasan and of Herzog and Srinivasan. To prove the conjectures for all modules with a particular Hilbert function, we show that it is enough to prove the statements only for elements at the bottom of the partially ordered set of resolutions with that Hilbert function. This enables us to test the conjectured upper bound for the multiplicity efficiently with the computer algebra system Macaulay2, and we verify the upper bound for many Artinian modules in three variables with small socle degree. Moreover, with this approach, we show that though numerical techniques have been sufficient in several of the known special cases, they are insufficient to prove the conjectures in general. Finally, we apply a result of Herzog and Srinivasan on ideals with a quasipure resolution to prove the upper bound for Cohen-Macaulay quotients by ideals with generators in high degrees relative to the regularity.
|
arxiv:math/0506024
|
We compute the dominant, logarithmically enhanced radiative corrections to the electron spectrum in bound muon decay in the whole experimentally interesting range. The corrected spectrum agrees well with the TWIST results. The remaining theoretical error, dominated by the nuclear charge distribution, can be reduced in muon-electron conversion searches by measuring the spectrum slightly below the new physics signal window.
|
arxiv:1608.05447
|
The characteristics and performance of a dual position sensitive multi-wire proportional counter (DPS-MWPC), used to measure the scattering angle, the interaction position on the target and the velocity of reaction products detected in the VAMOS++ magnetic spectrometer, are reported. The detector consists of a pair of position sensitive low pressure MWPCs and provides fast timing signals along with the two-dimensional position coordinates required to define the trajectory of the reaction products. A time-of-flight resolution of 305(11) ps (FWHM) was measured. The measured resolutions (FWHM) were 2.5(3) mrad and 560(70) $\mu$m for the scattering angle and the interaction point at the target, respectively. The subsequent improvement of the Doppler correction of the energy of the gamma rays, detected in the gamma-ray tracking array AGATA in coincidence with isotopically identified ions in VAMOS++, is also discussed.
|
arxiv:1512.08881
|
The reach of a submanifold is a crucial regularity parameter for manifold learning and geometric inference from point clouds. This paper relates the reach of a submanifold to its convexity defect function. Using the stability properties of convexity defect functions, along with some new bounds and the recent submanifold estimator of Aamari and Levrard [Ann. Statist. 47, 177-204 (2019)], an estimator for the reach is given. A uniform expected loss bound over a C^k model is found. Lower bounds for the minimax rate for estimating the reach over these models are also provided. The estimator almost achieves these rates in the C^3 and C^4 cases, with a gap given by a logarithmic factor.
|
arxiv:2001.08006
|
The activity and stability of a platinum nanoparticle (NP) is not only affected by its size but additionally depends on its shape. To this end, simulations can identify structure-property relationships to make a priori decisions on the most promising structures. While activity is routinely probed by electronic structure calculations on simplified surface models, modeling the stability of NP model systems in electrochemical reactions is challenging due to the long timescale of relevant processes such as oxidation beyond the point of reversibility. In this work, a routine for simulating electrocatalyst stability is presented. The procedure is referred to as GREG after its main ingredients: a grand-canonical simulation approach using reactive force fields to model electrochemical reactions as a function of the galvanic cell potential. The GREG routine is applied to study the oxidation of 3 nm octahedral, cubic, dodecahedral, cuboctahedral, spherical, and tetrahexahedral platinum NPs. The oxidation process is analyzed using adsorption isobars as well as interaction energy heat maps that provide the basis for constructing electrochemical phase diagrams. Onset potentials for surface oxidation increase in the sequence cube ~= dodecahedron <= octahedron <= tetrahexahedron < sphere < cuboctahedron, establishing a relationship between oxidation behavior and surface facet structure. The electrochemical results are rationalized using structural and electronic analysis.
|
arxiv:2201.07605
|
Core-collapse supernova (SN) explosions mark the end of the tumultuous life of massive stars. Determining the nature of their progenitors is a crucial step towards understanding the properties of SNe. Until recently, no progenitor had been directly detected for SNe of Type Ibc, which are believed to come from massive stars that lose their hydrogen envelope through stellar winds and from binary systems where the companion has stripped the H envelope from the primary. Here we analyze recently reported observations of iPTF13bvn, which could possibly be the first detection of a SN Ib progenitor based on pre-explosion images. Very interestingly, the recently published Geneva models of single stars can reproduce the observed photometry of the progenitor candidate and its mass-loss rate, confirming a recently proposed scenario. We find that a single WR star with initial mass in the range 31-35 Msun fits the observed photometry of the progenitor of iPTF13bvn. The progenitor likely has a luminosity of log(L/Lsun) ~ 5.55, surface temperature ~ 45000 K, and mass of ~ 10.9 Msun at the time of explosion. Our non-rotating 32 Msun model overestimates the derived radius of the progenitor, although this could likely be reconciled with a fine-tuned model of a more massive (between 40 and 50 Msun), hotter, and more luminous progenitor. Our models indicate a very uncertain ejecta mass of ~ 8 Msun, which is higher than the average SN Ib ejecta mass derived from lightcurves (2-4 Msun). This possibly high ejecta mass could produce detectable effects in the iPTF13bvn lightcurve and spectrum. If the candidate is indeed confirmed to be the progenitor, our results suggest that stars with relatively high initial masses (> 30 Msun) can produce visible SN explosions at their deaths and do not collapse directly to a black hole.
|
arxiv:1307.8434
|
Data mining is a way of extracting data or uncovering hidden patterns of information from databases. There is thus a need to prevent inference rules from being disclosed, such that more secure data sets cannot be identified from non-sensitive attributes. This can be done by removing or adding certain item sets in the transactions (sanitization). The purpose is to hide the inference rules, so that the user may not be able to discover any valuable information from other non-sensitive data, and any organisation can release all samples of its data without the fear of knowledge discovery in databases; this can be achieved by investigating frequently occurring item sets and the rules that can be mined from them, with the objective of hiding them. Another way is to release only limited samples in the new database so that there is no information loss and the legitimate needs of the users are still satisfied. The major problem is uncovering hidden patterns, which causes a threat to database security. Sensitive data are inferred from non-sensitive data based on the semantics of the application the user has, commonly known as the inference problem. Two fundamental approaches to protect sensitive rules from disclosure are: preventing rules from being generated by hiding the frequent sets of data items, and reducing the importance of the rules by setting their confidence below a user-specified threshold.
|
arxiv:1308.6744
|
We calculate the momentum flux and pressure of ions measured by the Ion Composition Analyzer (ICA) on the Rosetta mission at comet 67P/Churyumov-Gerasimenko. The total momentum flux stays roughly constant over the mission, but the contributions of different ion populations change depending on heliocentric distance. The magnetic pressure, calculated from Rosetta magnetometer measurements, corresponds roughly to the cometary ion momentum flux. When the spacecraft enters the solar wind ion cavity, the solar wind fluxes drop drastically, while the cometary momentum flux becomes roughly ten times the solar wind fluxes outside of the ion cavity, indicating that pickup ions behave similarly to the solar wind ions in this region. We use electron density from the Langmuir probe to calculate the electron pressure, which is particularly important close to the comet nucleus, where the flow changes from antisunward to radially outward.
|
arxiv:2006.12836
|
Given a set of n data objects and their pairwise dissimilarities, the goal of the minimum quartet tree cost (MQTC) problem is to construct an optimal tree from the total number of possible combinations of quartet topologies on n objects, where optimality means that the sum of the dissimilarities of the embedded (or consistent) quartet topologies is minimal. We provide details and a formulation of this novel challenging problem, and the preliminaries of an exact algorithm under current development, which may be useful to improve the MQTC heuristics to date into more efficient hybrid approaches.
|
arxiv:1807.00566
|
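the cost structure behind the mqtc problem above can be made concrete: for every 4-set of objects there are three possible quartet topologies, and a topology's cost is the sum of the dissimilarities of its two paired objects. the sketch below (with an invented dissimilarity matrix) sums the cheapest topology over all quartets, which gives a lower bound on any tree's cost; the exact algorithm under development in the paper is of course more involved.

```python
from itertools import combinations

def topology_costs(d, a, b, c, e):
    """costs of the three quartet topologies ab|ce, ac|be, ae|bc."""
    return (d[a][b] + d[c][e], d[a][c] + d[b][e], d[a][e] + d[b][c])

def mqtc_lower_bound(d):
    """sum of the cheapest topology over all 4-sets: a lower bound on
    the cost of any tree, which embeds one topology per quartet."""
    n = len(d)
    return sum(min(topology_costs(d, *q)) for q in combinations(range(n), 4))

# symmetric toy dissimilarity matrix on 5 objects
D = [[0, 2, 4, 4, 5],
     [2, 0, 4, 4, 5],
     [4, 4, 0, 2, 5],
     [4, 4, 2, 0, 5],
     [5, 5, 5, 5, 0]]
print(mqtc_lower_bound(D))
```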
we present a spatially-resolved ( ~3 pc pix$^{-1}$ ) analysis of the distribution, kinematics, and excitation of warm h2 gas in the nuclear starburst region of m83. our jwst / miri ifu spectroscopy reveals a clumpy reservoir of warm h2 ( > 200 k ) with a mass of ~2.3 x 10$^{5}$ msun in the area covered by all four mrs channels. we additionally use the [ ne ii ] 12.8 ${\mu}$m and [ ne iii ] 15.5 ${\mu}$m lines as tracers of the star formation rate, ionizing radiation hardness, and kinematics of the ionized ism, finding tantalizing connections to the h2 properties and to the ages of the underlying stellar populations. finally, qualitative comparisons to the trove of public, high-spatial-resolution multiwavelength data available on m83 show that our mrs spectroscopy potentially traces all stages of the process of creating massive star clusters, from the embedded proto-cluster phase through the dispersion of ism from stellar feedback.
|
arxiv:2410.09020
|
we consider a dirichlet problem for the allen-cahn equation in a smooth, bounded or unbounded, domain $\omega \subset {\bf r}^n$. under suitable assumptions, we prove an existence result and a uniform exponential estimate for symmetric solutions. in dimension n = 2 an additional asymptotic result is obtained. these results are based on a pointwise estimate obtained for local minimizers of the allen-cahn energy.
|
arxiv:1405.1541
|
rheumatoid arthritis ( ra ) is a chronic autoimmune disease that primarily affects peripheral synovial joints, like fingers, wrist and feet. radiology plays a critical role in the diagnosis and monitoring of ra. limited by the current spatial resolution of radiographic imaging, joint space narrowing ( jsn ) progression in ra can be less than one pixel per year. insensitive monitoring of jsn can hinder the radiologist / rheumatologist from making a proper and timely clinical judgment. in this paper, we propose a novel and sensitive method, which we call partial image phase-only correlation, that aims to automatically quantify jsn progression in the early stages of ra. the majority of the current literature utilizes the mean error, root-mean-square deviation and standard deviation to report accuracy at the pixel level. our work measures jsn progression between a baseline and its follow-up finger joint images by using the phase spectrum in the frequency domain. with this method, the mean error can be reduced to 0.0130 mm when applied to phantom radiographs with ground truth, with a 0.0519 mm standard deviation for clinical radiography. with its sub-pixel accuracy far beyond manual measurement, we are optimistic that our work is promising for automatically quantifying jsn progression.
|
arxiv:2205.09315
|
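the core frequency-domain idea above — phase-only correlation — can be demonstrated on a toy 1-d signal: the normalized cross-power spectrum of two shifted signals is a pure phase ramp, and its inverse transform peaks at the shift. this integer-shift sketch uses a naive dft for self-containment; the paper's method refines the same principle to sub-pixel accuracy on radiographs.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

def idft(xs):
    n = len(xs)
    return [sum(xs[k] * cmath.exp(2j * cmath.pi * k * m / n) for k in range(n)) / n
            for m in range(n)]

def poc_shift(f, g):
    """integer circular shift of f relative to g, via the peak of the
    phase-only (magnitude-normalized) cross-power spectrum."""
    fs, gs = dft(f), dft(g)
    r = [a * b.conjugate() / (abs(a * b.conjugate()) or 1.0)
         for a, b in zip(fs, gs)]
    corr = [v.real for v in idft(r)]
    return max(range(len(corr)), key=corr.__getitem__)

f = [0, 1, 3, 1, 0, 0, 0, 0]
g = f[-2:] + f[:-2]            # f circularly shifted right by 2
print(poc_shift(g, f))
```

because only the phase is kept, the correlation peak is a sharp delta at the displacement, which is what makes sub-pixel interpolation of the peak feasible in the full method.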
modern hash table designs strive to minimize space while maximizing speed. the most important factor in speed is the number of cache lines accessed during updates and queries. this is especially important on pmem, which is slower than dram and in which writes are more expensive than reads. this paper proposes two stronger design objectives : stability and low associativity. a stable hash table doesn't move items around, and a hash table has low associativity if there are only a few locations where an item can be stored. low associativity ensures that queries need to examine only a few memory locations, and stability ensures that insertions write to very few cache lines. stability also simplifies scaling and crash safety. we present iceberght, a fast, crash-safe, concurrent, and space-efficient hash table for pmem based on the design principles of stability and low associativity. iceberght combines in-memory metadata with a new hashing technique, iceberg hashing, that is ( 1 ) space efficient, ( 2 ) stable, and ( 3 ) supports low associativity. in contrast, existing hash tables either modify numerous cache lines during insertions ( e. g. cuckoo hashing ), access numerous cache lines during queries ( e. g. linear probing ), or waste space ( e. g. chaining ). moreover, the combination of ( 1 ) - ( 3 ) yields several emergent benefits : iceberght scales better than other hash tables, supports crash safety, and has excellent performance on pmem ( where writes are particularly expensive ).
|
arxiv:2210.04068
|
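the two design objectives named above can be illustrated by a toy table in which every key may live in only a handful of slots of one bucket (low associativity) and, once stored, is never relocated (stability). this mimics the stated goals only; it is not the actual iceberght or iceberg-hashing layout.

```python
# toy stable, low-associativity table: a key's candidate slots are the
# `ways` cells of a single bucket, and an inserted entry never moves.
class StableTable:
    def __init__(self, buckets=8, ways=4):
        self.slots = [[None] * ways for _ in range(buckets)]

    def _bucket(self, key):
        return hash(key) % len(self.slots)

    def put(self, key, value):
        row = self.slots[self._bucket(key)]
        for i, cell in enumerate(row):
            if cell is None or cell[0] == key:
                row[i] = (key, value)       # touches one bucket only
                return True
        return False                        # bucket full: would spill

    def get(self, key):
        for cell in self.slots[self._bucket(key)]:
            if cell is not None and cell[0] == key:
                return cell[1]
        return None

t = StableTable()
t.put("a", 1)
t.put("b", 2)
print(t.get("a"), t.get("b"))
```

a query inspects at most `ways` cells of one bucket, and an insertion writes a single cell — the cache-line analogues of the low-associativity and stability properties the paper argues for.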
we present a data structure called a history graph that offers a practical basis for the analysis of genome evolution. it conceptually simplifies the study of parsimonious evolutionary histories by representing both substitutions and double cut and join ( dcj ) rearrangements in the presence of duplications. the problem of constructing parsimonious history graphs thus subsumes related maximum parsimony problems in the fields of phylogenetic reconstruction and genome rearrangement. we show that tractable functions can be used to define upper and lower bounds on the minimum number of substitutions and dcj rearrangements needed to explain any history graph. these bounds become tight for a special type of unambiguous history graph called an ancestral variation graph ( avg ), which constrains in its combinatorial structure the number of operations required. we finally demonstrate that for a given history graph $g$, a finite set of avgs describes all parsimonious interpretations of $g$, and this set can be explored with a few sampling moves.
|
arxiv:1303.2246
|
we present a correspondence between positive operator valued measures ( povms ) and sets of generalized coherent states. positive operator valued measures describe quantum observables and, similarly to quantum states, also quantum observables can be mixed. we show how the formalism of generalized coherent states leads to a useful characterization of extremal povms. we prove that covariant phase space observables related to squeezed states are extremal, while the ones related to number states are not extremal.
|
arxiv:1112.4280
|
because of its nonequilibrium character, active matter in a steady state can drive engines that autonomously deliver work against a constant mechanical force or torque. as a generic model for such an engine, we consider systems that contain one or several active components and a single passive one that is asymmetric in its geometrical shape or its interactions. generally, one expects that such an asymmetry leads to a persistent, directed current in the passive component, which can be used for the extraction of work. we validate this expectation for a minimal model consisting of an active and a passive particle on a one - dimensional lattice. it leads us to identify thermodynamically consistent measures for the efficiency of the conversion of isotropic activity to directed work. for systems with continuous degrees of freedom, work cannot be extracted using a one - dimensional geometry under quite general conditions. in contrast, we put forward two - dimensional shapes of a movable passive obstacle that are best suited for the extraction of work, which we compare with analytical results for an idealised work - extraction mechanism. for a setting with many noninteracting active particles, we use a mean - field approach to calculate the power and the efficiency, which we validate by simulations. surprisingly, this approach reveals that the interaction with the passive obstacle can mediate cooperativity between otherwise noninteracting active particles, which enhances the extracted power per active particle significantly.
|
arxiv:1905.00373
|
we prove that the stationary magnetic potential vector and the electrostatic potential entering the dynamic magnetic schrödinger equation can be lipschitz stably retrieved through finitely many local boundary measurements of the solution. the proof is by means of a specific global carleman estimate for the schrödinger equation, established in the first part of the paper.
|
arxiv:1805.10076
|
sustainability ( defined as ' the capacity to keep up ' ) encompasses a wide set of aims : ranging from energy efficient software products ( environmental sustainability ), reduction of software development and maintenance costs ( economic sustainability ), to employee and end - user wellbeing ( social sustainability ). in this report we explore the role that sustainability plays in software product line engineering ( spl ). the report is based on the ' sustainability in software product lines ' panel held at splc 2014.
|
arxiv:1505.03736
|
glyph - based visualization is one of the main techniques for visualizing complex multivariate data. with small glyphs, data variables are typically encoded with relatively low visual and perceptual precision. glyph designers have to contemplate the trade - offs in allocating visual channels when there is a large number of data variables. while there are many successful glyph designs in the literature, there is not yet a systematic method for assisting visualization designers to evaluate different design options that feature different types of trade - offs. in this paper, we present an evaluation scheme based on the multi - criteria decision analysis ( mcda ) methodology. the scheme provides designers with a structured way to consider their glyph designs from a range of perspectives, while rendering a semi - quantitative template for evaluating different design options. in addition, this work provides guideposts for future empirical research to obtain more quantitative measurements that can be used in mcda - aided glyph design processes.
|
arxiv:2303.08554
|
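the semi-quantitative template described above reduces, in its simplest form, to a weighted-sum mcda scoring of design options against criteria. the criteria, weights, and scores below are invented for the demonstration and are not taken from the paper.

```python
# minimal mcda-style weighted-sum scoring of glyph design options.
# weights sum to 1; scores are analyst ratings on a 1-5 scale.
criteria = {"perceptual accuracy": 0.4,
            "channel capacity": 0.35,
            "learnability": 0.25}

designs = {
    "star glyph":  {"perceptual accuracy": 3, "channel capacity": 5, "learnability": 2},
    "metro glyph": {"perceptual accuracy": 4, "channel capacity": 3, "learnability": 4},
}

def score(design):
    """weighted sum of the design's per-criterion ratings."""
    return sum(w * designs[design][c] for c, w in criteria.items())

ranked = sorted(designs, key=score, reverse=True)
print({d: round(score(d), 2) for d in ranked})
```

in a real mcda-aided process the weights themselves are elicited and stress-tested; the point of the template is to make the trade-offs between visual channels explicit rather than to produce a single "correct" ranking.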
we report the result from a search for charged-current coherent pion production induced by muon neutrinos with a mean energy of 1.3 gev. the data are collected with a fully active scintillator detector in the k2k long-baseline neutrino oscillation experiment. no evidence for coherent pion production is observed and an upper limit of $0.60 \times 10^{-2}$ is set on the cross section ratio of coherent pion production to the total charged-current interaction at 90 % confidence level. this is the first experimental limit for coherent charged pion production in the energy region of a few gev.
|
arxiv:hep-ex/0506008
|
in this paper we apply the self-consistent generalized langevin equation ( scgle ) theory of dynamic arrest for colloidal mixtures to predict the glass transition of a colloidal fluid permeating a porous matrix of randomly distributed obstacles. we obtained the transition diagrams for different size asymmetries, and thus give a description of recent simulation results [ k. kim, k. miyazaki, and s. saito, europhys. lett. 88, 36002 ( 2009 ) ] for quenched-annealed ( qa ) and equilibrated-mixture ( em ) systems, which reveal very different qualitative scenarios that are in apparent contradiction with theoretical predictions of mode coupling theory ( mct ) [ v. krakoviack, phys. rev. e 75, 031503 ( 2007 ) ]. we show that scgle theory predicts the existence of a reentrant region in em systems, as predicted by mct. however, opposite to the mct predictions, we show that it is practically impossible to distinguish a reentrant region in qa systems if it exists. qualitative comparisons are in good agreement with simulation results, and thus we propose scgle theory as a useful tool for the interpretation of the arrest transition in ideal porous systems.
|
arxiv:1108.6291
|
the discrepancies between the measurements of rare ( semi- ) leptonic $b$ decays and the corresponding standard model predictions point convincingly towards the existence of new physics, for which a heavy neutral gauge boson ( $z^\prime$ ) is a prime candidate. however, the effect of the mixing of the $z^\prime$ with the sm $z$, even though it cannot be avoided by any symmetry, is usually assumed to be small and thus neglected in phenomenological analyses. in this letter we point out that a mixing of the naturally expected size leads to lepton flavour universal contributions, providing a very good fit to $b$ data. furthermore, the global electroweak fit is affected by $z - z^\prime$ mixing, where the tension in the $w$ mass, recently confirmed and strengthened by the cdf measurement, prefers a non-zero value of it. we find that a $z^\prime$ boson with a mass between $\approx 1 - 5\,\rm{tev}$ can provide a unified explanation of the $b$ anomalies and the $w$ mass. this strongly suggests that the breaking of the new gauge symmetry giving rise to the $z^\prime$ boson is linked to electroweak symmetry breaking, with intriguing consequences for model building.
|
arxiv:2201.08170
|
mapping low dynamic range ( ldr ) images with different exposures to high dynamic range ( hdr ) remains nontrivial and challenging on dynamic scenes due to ghosting caused by object motion or camera jitter. with the success of deep neural networks ( dnns ), several dnn-based methods have been proposed to alleviate ghosting ; however, they cannot generate satisfactory results when motion and saturation occur. to generate visually pleasing hdr images in various cases, we propose a hybrid hdr deghosting network, called hyhdrnet, to learn the complicated relationship between reference and non-reference images. the proposed hyhdrnet consists of a content alignment subnetwork and a transformer-based fusion subnetwork. specifically, to effectively avoid ghosting from the source, the content alignment subnetwork uses patch aggregation and ghost attention to integrate similar content from other non-reference images at the patch level and suppress undesired components at the pixel level. to achieve mutual guidance between the patch level and the pixel level, we leverage a gating module to sufficiently exchange useful information in both ghosted and saturated regions. furthermore, to obtain a high-quality hdr image, the transformer-based fusion subnetwork uses a residual deformable transformer block ( rdtb ) to adaptively merge information from differently exposed regions. we examined the proposed method on four widely used public hdr image deghosting datasets. experiments demonstrate that hyhdrnet outperforms state-of-the-art methods both quantitatively and qualitatively, achieving appealing hdr visualization with unified textures and colors.
|
arxiv:2304.06943
|
we look for spectral-type differential equations for the generalized jacobi polynomials found by t. h. koornwinder in 1984 and for the sobolev-laguerre polynomials. we introduce a method which makes use of computer algebra packages like maple and mathematica, and we give some preliminary results.
|
arxiv:math/9908141
|
highlight the range of spectra that we find in the survey.
|
arxiv:1510.02106
|
universal force fields generalizable across the periodic table represent a new trend in computational materials science. however, the applications of universal force fields in material simulations are limited by their slow inference speed and the lack of first-principles accuracy. instead of building a single model simultaneously satisfying these characteristics, a strategy that quickly generates material-specific models from the universal model may be more feasible. here, we propose a new workflow pattern, pfd, which automatically generates machine-learning force fields for specific materials from a pre-trained universal model through fine-tuning and distillation. by fine-tuning the pre-trained model, our pfd workflow generates force fields with first-principles accuracy while requiring one to two orders of magnitude less training data compared to traditional methods. the inference speed of the generated force field is further improved through distillation, meeting the requirements of large-scale molecular simulations. comprehensive testing across diverse materials, including complex systems such as amorphous carbon, interfaces, etc., reveals marked enhancements in training efficiency, which suggests that the pfd workflow is a practical and reliable approach for force field generation in computational materials science.
|
arxiv:2502.20809
|
in a world increasingly awash with data, the need to extract meaningful insights from data has never been more crucial. functional data analysis ( fda ) goes beyond traditional data points, treating data as dynamic, continuous functions and capturing the nuances of ever-changing phenomena. this article introduces fda, merging statistics with real-world complexity, and is ideal for those with mathematical skills but no fda background.
|
arxiv:2404.16598
|
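the shift of perspective described above — from data points to whole functions — can be shown with the simplest functional statistic, the pointwise mean function. here each subject's samples are turned into a function by linear interpolation; the data and the grid are invented.

```python
# a first taste of fda: build a function from each subject's samples and
# compute the pointwise mean function, the functional analogue of the
# sample mean.
def interp(ts, ys):
    """return a callable that linearly interpolates the points (ts, ys)."""
    def f(t):
        for (t0, y0), (t1, y1) in zip(zip(ts, ys), zip(ts[1:], ys[1:])):
            if t0 <= t <= t1:
                return y0 + (y1 - y0) * (t - t0) / (t1 - t0)
        raise ValueError("t outside observation range")
    return f

grid = [0.0, 0.5, 1.0]
curves = [interp(grid, [0, 1, 0]),     # subject 1
          interp(grid, [2, 3, 2])]     # subject 2

def mean_function(t):
    """pointwise average of all subject curves at time t."""
    return sum(c(t) for c in curves) / len(curves)

print([mean_function(t) for t in (0.0, 0.25, 0.5, 1.0)])
```

real fda replaces the interpolation with smooth basis expansions (splines, fourier bases) and goes on to functional analogues of variance, pca, and regression, but the mean function already evaluates at times no subject was sampled at — something a pointwise data table cannot do.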
starting with newton ' s law of universal gravitation, we generalize it step - by - step to obtain einstein ' s geometric theory of gravity. newton ' s gravitational potential satisfies the poisson equation. we relate the potential to a component of the metric tensor by equating the nonrelativistic result of the principle of stationary proper time to the lagrangian for a classical gravitational field. in the poisson equation the laplacian of the component of the metric tensor is generalized to a cyclic linear combination of the second derivatives of the metric tensor. in local coordinates it is a single component of the ricci tensor. this component of the ricci tensor is proportional to the mass density that is related to a single component of the energy - momentum stress tensor and its trace. we thus obtain a single component of einstein ' s gravitational field equation in local coordinates. from the principle of general covariance applied to a single component, we obtain all tensor components of einstein ' s gravitational field equation.
|
arxiv:1309.4789
|
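the chain of generalizations sketched in the abstract above can be written out compactly. this is the standard weak-field, static route with conventional factors of $c$ and signs, assumed here rather than copied from the paper's own notation:

```latex
% newton: poisson equation, and the metric component obtained from the
% nonrelativistic limit of the principle of stationary proper time
\nabla^2 \phi = 4\pi G \rho ,
\qquad
g_{00} \simeq -\Bigl(1 + \frac{2\phi}{c^2}\Bigr) .

% the laplacian of g_{00} generalizes to a component of the ricci tensor:
R_{00} \simeq -\tfrac{1}{2}\,\nabla^2 g_{00}
        = \frac{1}{c^2}\,\nabla^2 \phi
        = \frac{4\pi G}{c^2}\,\rho .

% with the mass density expressed through T_{00} and the trace T,
% the 00 component of the field equation reads
R_{00} = \frac{8\pi G}{c^4}\Bigl(T_{00} - \tfrac{1}{2}\,g_{00}\,T\Bigr) ,

% and general covariance promotes this single component to the full
% tensor equation
R_{\mu\nu} = \frac{8\pi G}{c^4}\Bigl(T_{\mu\nu} - \tfrac{1}{2}\,g_{\mu\nu}\,T\Bigr) .
```

as a consistency check, for pressureless dust ( $t_{00} = \rho c^2$, $t \approx -\rho c^2$ ) the right-hand side of the 00 component reduces to $4\pi g \rho / c^2$, matching the poisson-equation line above term by term.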
in e - commerce, content quality of the product catalog plays a key role in delivering a satisfactory experience to the customers. in particular, visual content such as product images influences customers ' engagement and purchase decisions. with the rapid growth of e - commerce and the advent of artificial intelligence, traditional content management systems are giving way to automated scalable systems. in this paper, we present a machine learning driven visual content management system for extremely large e - commerce catalogs. for a given product, the system aggregates images from various suppliers, understands and analyzes them to produce a superior image set with optimal image count and quality, and arranges them in an order tailored to the demands of the customers. the system makes use of an array of technologies, ranging from deep learning to traditional computer vision, at different stages of analysis. in this paper, we outline how the system works and discuss the unique challenges related to applying machine learning techniques to real - world data from e - commerce domain. we emphasize how we tune state - of - the - art image classification techniques to develop solutions custom made for a massive, diverse, and constantly evolving product catalog. we also provide the details of how we measure the system ' s impact on various customer engagement metrics.
|
arxiv:1811.07996
|
we study systems of a finite number of neutrons in a harmonic trap at the unitary limit. two very different types of neutron-neutron interactions are applied, namely, the meson-theoretic cd-bonn potential and hard-core square-well interactions, all tuned to possess infinite scattering lengths, and with effective ranges comparable to or larger than the trap size. the potentials are renormalized to equivalent, scattering-length preserving low-momentum potentials, $v_{{\rm low}-k}$, with which the particle-particle hole-hole ring diagrams are summed to all orders to yield the ground-state energy $e_0$ of the finite neutron system. we find the ratio $e_0 / e_0^{\rm free}$ ( where $e_0^{\rm free}$ denotes the ground-state energy of the corresponding non-interacting system ) to be remarkably independent of variations in the harmonic trap parameter, the number of neutrons, the decimation momentum of $v_{{\rm low}-k}$, and the type and effective range of the unitarity potential. our results support a special virial linear scaling relation for $e_0$. certain properties of landau's quasi-particles for trapped neutrons at the unitary limit are also discussed.
|
arxiv:1809.00205
|
gsm networks are very expensive. the network design process requires too many decisions in a combinatorial explosion. for this reason, the larger the network, the harder it is to achieve a totally human-based optimized solution. the bsc ( base station controller ) nodes have to be geographically well allocated to reduce the transmission costs. there are decisions of association between bts and bsc that impact the correct dimensioning of these bsc. the choice of the quantity and model of bsc capable of carrying the accumulated traffic of its affiliated bts nodes in turn reflects on the total cost. in addition, the last component of the total cost is due to the transmission links from bsc nodes to the msc. these trunks have major significance, since the number of required e1 lines is larger than for the bts-to-bsc links. this work presents an integer programming model and a computational tool for designing gsm ( global system for mobile communications ) networks, regarding the bss ( base station subsystem ), with optimized cost.
|
arxiv:0909.1045
|
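the flavor of the optimization above can be seen in a tiny brute-force version of the bts-to-bsc assignment subproblem: pick a bsc for every bts so that each bsc's capacity is respected and total link cost is minimal. the paper formulates this as an integer program solved at scale; the traffic figures, capacities, and costs below are invented.

```python
# exhaustive search over all bts-to-bsc assignments (fine for toy sizes;
# the integer-programming model handles realistic instances).
from itertools import product

bts_traffic = [4, 3, 5, 2]        # load generated by each bts
bsc_capacity = [8, 8]             # two candidate bsc sites
link_cost = [[1, 4],              # link_cost[bts][bsc]
             [2, 2],
             [5, 1],
             [3, 2]]

def best_assignment():
    best = (float("inf"), None)
    for assign in product(range(len(bsc_capacity)), repeat=len(bts_traffic)):
        load = [0] * len(bsc_capacity)
        for b, s in enumerate(assign):
            load[s] += bts_traffic[b]
        if any(l > c for l, c in zip(load, bsc_capacity)):
            continue              # violates a bsc capacity constraint
        cost = sum(link_cost[b][s] for b, s in enumerate(assign))
        best = min(best, (cost, assign))
    return best

print(best_assignment())
```

note that the capacity constraint is what makes the problem combinatorial: the cheapest link for each bts in isolation ( bts 1 to bsc 1 here ) can overload a bsc, forcing a globally cheaper but locally suboptimal routing.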
we review the properties of low mass dense molecular cloud cores, including starless, prestellar, and class 0 protostellar cores, as derived from observations. in particular we discuss them in the context of the current debate surrounding the formation and evolution of cores. there exist several families of model scenarios to explain this evolution ( with many variations of each ) that can be thought of as a continuum of models lying between two extreme paradigms for the star and core formation process. at one extreme there is the dynamic, turbulent picture, while at the other extreme there is a slow, quasi - static vision of core evolution. in the latter view the magnetic field plays a dominant role, and it may also play some role in the former picture. polarization and zeeman measurements indicate that some, if not all, cores contain a significant magnetic field. wide - field surveys constrain the timescales of the core formation and evolution processes, as well as the statistical distribution of core masses. the former indicates that prestellar cores typically live for 2 - - 5 free - fall times, while the latter seems to determine the stellar initial mass function. in addition, multiple surveys allow one to compare core properties in different regions. from this it appears that aspects of different models may be relevant to different star - forming regions, depending on the environment. prestellar cores in cluster - forming regions are smaller in radius and have higher column densities, by up to an order of magnitude, than isolated prestellar cores. this is probably due to the fact that in cluster - forming regions the prestellar cores are formed by fragmentation of larger, more turbulent cluster - forming cores, which in turn form as a result of strong external compression.
|
arxiv:astro-ph/0603474
|
endo-pajitnov manifolds are generalizations to higher dimensions of the inoue surfaces $s^m$. we study the existence of complex submanifolds in endo-pajitnov manifolds. we identify a class of these manifolds that do contain compact complex submanifolds and establish an algebraic condition under which an endo-pajitnov manifold contains no compact complex curves.
|
arxiv:2502.19520
|
we examine the solar neutrino problem in the context of the realistic three neutrino mixing scenario including the sno charged current ( cc ) rate. the two independent mass squared differences $\delta m^2_{21}$ and $\delta m^2_{31} \approx \delta m^2_{32}$ are taken to be in the solar and atmospheric ranges respectively. we incorporate the constraints on $\delta m^2_{31}$ as obtained by the superkamiokande atmospheric neutrino data and determine the allowed values of $\delta m^2_{21}$, $\theta_{12}$ and $\theta_{13}$ from a combined analysis of solar and chooz data. our aim is to probe the changes in the values of the mass and mixing parameters with the inclusion of the sno data, as well as the changes in the two-generation parameter region obtained from the solar neutrino analysis with the inclusion of the third generation. we find that the inclusion of the sno cc rate in the combined solar + chooz analysis puts a more restrictive bound on $\theta_{13}$. since the allowed values of $\theta_{13}$ are constrained to very small values by the chooz experiment, there is no qualitative change over the two-generation allowed regions in the $\delta m^2_{21} - \tan^2\theta_{12}$ plane. the best fit comes in the lma region and no allowed area is obtained in the sma region at the 3$\sigma$ level from the combined solar and chooz analysis.
|
arxiv:hep-ph/0110307
|
we prove that the scaling limit of the weakly self-avoiding walk on a $d$-dimensional discrete torus is brownian motion on the continuum torus if the length of the rescaled walk is $o(v^{1/2})$, where $v$ is the volume ( number of points ) of the torus and $d > 4$. we also prove that the diffusion constant of the resulting torus brownian motion is the same as the diffusion constant of the scaling limit of the usual weakly self-avoiding walk on $\mathbb{z}^d$. this provides further manifestation of the fact that the weakly self-avoiding walk model on the torus does not feel that it is on the torus until it reaches about $v^{1/2}$ steps, which we believe is sharp.
|
arxiv:2203.07695
|
a method is proposed for the study of many-point boundary value problems for systems of nonlinear odes, by reducing them to special equivalent integral equations ; in contrast with the known method [ 1 ], it allows us to consider both boundary and initial value problems. here we avoid the mechanism of green's function, whose construction is quite nontrivial, especially in the case of many-point boundary value problems. the proposed algorithm [ 2, 3 ] makes it easier to write out the condition of unique solvability of such problems. key words : boundary-value problems, singular perturbation theory, spectrum, asymptotic behavior.
|
arxiv:1205.2133
|
planets less massive than about 10 mearth are expected to have no massive h-he atmosphere and a cometary composition ( 50 % rocks, 50 % water, by mass ) provided they formed beyond the snowline of protoplanetary disks. due to inward migration, such planets could be found at any distance between their formation site and the star. if migration stops within the habitable zone, this will produce a new kind of planet, called ocean-planets. ocean-planets typically consist of a silicate core, surrounded by a thick ice mantle, itself covered by a 100 km deep ocean. the existence of ocean-planets raises important astrobiological questions : can life originate on such a body, in the absence of continents and ocean-silicate interfaces? what would be the nature of the atmosphere and the geochemical cycles? in this work, we address the fate of hot ocean-planets produced when migration ends at a closer distance. in this case the liquid / gas interface can disappear, and the hot h2o envelope is made of a supercritical fluid. although we do not expect these bodies to harbor life, their detection and identification as water-rich planets would give us insight into the abundance of hot and, by extrapolation, cool ocean-planets.
|
arxiv:astro-ph/0701608
|
objective : the objectives encompassed ( 1 ) the creation of recuerdame, a digital app specifically designed for occupational therapists, aiming to support these professionals in the processes of planning, organizing, developing, and documenting reminiscence therapies for older people with dementia, and ( 2 ) the evaluation of the designed prototype through a participatory and user-centered design approach, exploring the perceptions of end-users. methods : this exploratory research used a mixed-methods design. the app was developed in two phases. in the first phase, the research team identified the requirements and designed a prototype. in the second phase, experienced occupational therapists evaluated the prototype. results : the research team determined the app's required functionalities, grouped into eight major themes : register related persons and caregivers ; record the patient's life story memories ; prepare a reminiscence therapy session ; conduct a session ; end a session ; assess the patient ; automatically generate a life story ; other requirements. the first phase ended with the development of a prototype. in the second phase, eight occupational therapists performed a series of tasks using all the application's functionalities. most of these tasks were rated as very easy ( single ease question ). the level of usability was considered excellent ( system usability scale ). participants believed that the app would save practitioners time, enrich therapy sessions and improve their effectiveness. the qualitative results were summarized in two broad themes : ( a ) acceptability of the app ; and ( b ) areas for improvement. conclusions : participating occupational therapists generally agreed that the co-designed app appears to be a versatile tool that empowers these professionals to manage reminiscence interventions.
|
arxiv:2410.13556
|
we perform a time - dependent ionization analysis to constrain plasma heating requirements during a fast partial halo coronal mass ejection ( cme ) observed on 2000 june 28 by the ultraviolet coronagraph spectrometer ( uvcs ) aboard the solar and heliospheric observatory ( soho ). we use two methods to derive densities from the uvcs measurements, including a density sensitive o v line ratio at 1213. 85 and 1218. 35 angstroms, and radiative pumping of the o vi 1032, 1038 doublet by chromospheric emission lines. the most strongly constrained feature shows cumulative plasma heating comparable to or greater than the kinetic energy, while features observed earlier during the event show cumulative plasma heating of order or less than the kinetic energy. soho michelson doppler imager ( mdi ) observations are used to estimate the active region magnetic energy. we consider candidate plasma heating mechanisms and provide constraints when possible. because this cme was associated with a relatively weak flare, the contribution by flare energy ( e. g., through thermal conduction or energetic particles ) is probably small ; however, the flare may have been partially behind the limb. wave heating by photospheric motions requires heating rates significantly larger than those previously inferred for coronal holes, but the eruption itself could drive waves which heat the plasma. heating by small - scale reconnection in the flux rope or by the cme current sheet is not significantly constrained. uvcs line widths suggest that turbulence must be replenished continually and dissipated on time scales shorter than the propagation time in order to be an intermediate step in cme heating.
|
arxiv:1104.2298
|
in multi-agent cooperative tasks, the presence of heterogeneous agents is common. compared to cooperation among homogeneous agents, such collaboration requires considering the sub-tasks best suited to each agent. however, the operation of multi-agent systems often involves a large amount of complex interaction information, making it more challenging to learn heterogeneous strategies. related multi-agent reinforcement learning methods sometimes use grouping mechanisms to form smaller cooperative groups or leverage prior domain knowledge to learn strategies for different roles. in contrast, agents should learn deeper role features without relying on additional information. therefore, we propose qtypemix, which divides the value decomposition process into homogeneous and heterogeneous stages. qtypemix learns to extract type features from local historical observations through the te loss. in addition, we introduce advanced network structures containing attention mechanisms and hypernets to enhance the representation capability and achieve the value decomposition process. the results of testing the proposed method on 14 maps from smac and smacv2 show that qtypemix achieves state-of-the-art performance in tasks of varying difficulty.
|
arxiv:2408.07098
|
we consider a random walk in a fixed z environment composed of two point types : ( q, 1 - q ) and ( p, 1 - p ) for 1 / 2 < q < p. we study the expected hitting time at n for a given number k of p - drifts in the interval [ 1, n - 1 ], and find that this time is minimized asymptotically by equally spaced p - drifts.
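A minimal Monte Carlo sketch of the setup above. Everything here is illustrative and not from the paper: the walk is reflected at 0 instead of living on all of Z, and the values of n, q, p, the drift positions, and the trial count are arbitrary choices. It only demonstrates the qualitative claim that equally spaced p-drifts give a smaller mean hitting time than clustered ones.

```python
import random

def mean_hitting_time(n, p_sites, q, p, trials=2000, seed=0):
    """Monte Carlo estimate of the expected hitting time at n for a
    nearest-neighbour walk started at 0. Site x steps right with
    probability p if x is a p-drift and q otherwise; the walk is
    reflected at 0 (a simplification of the Z environment)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, t = 0, 0
        while x < n:
            r = p if x in p_sites else q
            x = x + 1 if rng.random() < r else max(x - 1, 0)
            t += 1
        total += t
    return total / trials

# k = 3 p-drifts in [1, n-1]: equally spaced vs. clustered at the start
spread = mean_hitting_time(12, {3, 6, 9}, q=0.6, p=0.9)
clustered = mean_hitting_time(12, {1, 2, 3}, q=0.6, p=0.9)
print(spread < clustered)  # True: equal spacing gives the smaller mean here
```

A birth-death-chain calculation with these parameters gives mean hitting times of roughly 31 steps for the spread placement versus 38 for the clustered one, so the Monte Carlo comparison is well outside the noise.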
|
arxiv:1210.6846
|
faser $ \ nu $ at the cern large hadron collider ( lhc ) is designed to directly detect collider neutrinos for the first time and study their cross sections at tev energies, where no such measurements currently exist. in 2018, a pilot detector employing emulsion films was installed in the far - forward region of atlas, 480 m from the interaction point, and collected 12. 2 fb $ ^ { - 1 } $ of proton - proton collision data at a center - of - mass energy of 13 tev. we describe the analysis of this pilot run data and the observation of the first neutrino interaction candidates at the lhc. this milestone paves the way for high - energy neutrino measurements at current and future colliders.
|
arxiv:2105.06197
|
we show that the well - known hastings - mcleod solution to the second painlev \ ' { e } equation is pole - free in the region $ \ arg x \ in [ - \ frac { \ pi } { 3 }, \ frac { \ pi } { 3 } ] \ cup [ \ frac { 2 \ pi } { 3 }, \ frac { 4 \ pi } { 3 } ] $, which proves an important special case of a general conjecture concerning pole distributions of painlev \ ' { e } transcendents proposed by novokshenov. our strategy is to construct explicit quasi - solutions approximating the hastings - mcleod solution in different regions of the complex plane, and estimate the errors rigorously. the main idea is very similar to the one used to prove dubrovin ' s conjecture for the first painlev \ ' { e } equation, but there are various technical improvements.
|
arxiv:1410.3338
|
we develop a general numerical method aimed at studying particle production from vacuum states in a variety of settings. as a first example we look at particle production in a simple cosmological model. we apply the same approach to the dynamical casimir effect, with special focus on the case of an oscillating mirror. we confirm previous estimates and obtain long - time production rates and particle spectra for both resonant and off - resonant frequencies. finally, we simulate a system with space and time - dependent optical properties, analogous to a one - dimensional expanding dielectric bubble. we obtain simple expressions for the dependence of the final particle number on the expansion velocity and final dielectric constant.
|
arxiv:hep-ph/0310131
|
we study presentations, defined by sidki, resulting in groups $ y ( m, n ) $ that are conjectured to be finite orthogonal groups of dimension $ m + 1 $ in characteristic two. this conjecture, if true, shows an interesting pattern, possibly connected with bott periodicity. it would also give new presentations for a large family of finite orthogonal groups in characteristic two, with no generator having the same order as the cyclic group of the field. we generalise the presentation to an infinite version $ y ( m ) $ and explicitly relate this to previous work done by sidki. the original groups $ y ( m, n ) $ can be found as quotients over congruence subgroups of $ y ( m ) $. we give two representations of our group $ y ( m ) $. one into an orthogonal group of dimension $ m + 1 $ and the other, using clifford algebras, into the corresponding pin group, both defined over a ring in characteristic two. hence, this gives two different actions of the group. sidki ' s homomorphism into $ sl _ { 2 ^ { m - 2 } } ( r ) $ is recovered and extended as an action on a submodule of the clifford algebra.
|
arxiv:1410.1354
|
an evolution equation for a population of strings evolving under the genetic operators : selection, mutation and crossover is derived. the corresponding equation describing the evolution of schemata is found by performing an exact coarse graining of this equation. in particular exact expressions for schemata reconstruction are derived which allow for a critical appraisal of the ` ` building - block hypothesis ' ' of genetic algorithms. a further coarse - graining is made by considering the contribution of all length - l schemata to the evolution of population observables such as fitness growth. as a test function for investigating the emergence of structure in the evolution the increase per generation of the in - schemata fitness averaged over all schemata of length l, $ \ delta _ l $, is introduced. in finding solutions of the evolution equations we concentrate more on the effects of crossover, in particular we consider crossover in the context of kauffman nk models with k = 0, 2. for k = 0, with a random initial population, in the first step of evolution the contribution from schemata reconstruction is equal to that of schemata destruction, leading to a scale invariant situation where the contribution to fitness of schemata of size l is independent of l. this balance is broken in the next step of evolution, leading to a situation where schemata that are either much larger or much smaller than half the string size dominate over those with $ l \ approx n / 2 $. the balance between block destruction and reconstruction is also broken in a k > 0 landscape. it is conjectured that the effective degrees of freedom for such landscapes are landscape connective trees that break down into effectively fit smaller blocks, and not the blocks themselves. numerical simulations confirm this ` ` connective tree hypothesis ' ' by showing that correlations drop off with connective distance and not with intrachromosomal distance.
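The crossover operator discussed above can be sketched in a few lines. This is a generic one-point crossover on bit-strings, not the paper's evolution equation; the strings and cut point are illustrative. It shows the mechanism behind schema destruction and reconstruction: a schema whose fixed bits straddle the cut point can be broken up, while a compact schema survives.

```python
def one_point_crossover(a, b, point):
    """One-point crossover on two equal-length bit-strings: swap the
    tails after the cut point. Fixed-bit patterns (schemata) that span
    the cut can be destroyed, or reconstructed from two parents."""
    return a[:point] + b[point:], b[:point] + a[point:]

# two complementary parents; cutting at the midpoint reconstructs
# the all-ones and all-zeros schemata from their halves
c1, c2 = one_point_crossover("11110000", "00001111", 4)
print(c1, c2)  # 11111111 00000000
```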
|
arxiv:adap-org/9611005
|
classically, determining the gradient of a black - box function f : r ^ p - > r requires p + 1 evaluations. using the quantum fourier transform, two evaluations suffice. this is based on the approximate local periodicity of exp ( 2 * pi * i * f ( x ) ). it is shown that sufficiently precise machine arithmetic results in gradient estimates of any required accuracy.
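The classical p + 1 evaluation count quoted above is just forward differences: one evaluation at the base point plus one per coordinate. A minimal sketch (the quantum Fourier transform algorithm itself is not reproduced here; the test function is an arbitrary choice):

```python
def grad_forward(f, x, h=1e-6):
    """Classical forward-difference gradient of f: R^p -> R.
    Uses exactly p + 1 evaluations: f(x) once, then one shifted
    evaluation per coordinate."""
    fx = f(x)
    g = []
    for i in range(len(x)):
        xh = list(x)
        xh[i] += h
        g.append((f(xh) - fx) / h)
    return g

# example: f(x, y) = x^2 + 3y has gradient (2x, 3)
f = lambda v: v[0] ** 2 + 3 * v[1]
g = grad_forward(f, [1.0, 2.0])
print(g)  # approximately [2.0, 3.0]
```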
|
arxiv:quant-ph/0507109
|
this paper describes the npu system submitted to interspeech 2020 far - field speaker verification challenge ( ffsvc ). we particularly focus on far - field text - dependent sv from single ( task1 ) and multiple microphone arrays ( task3 ). the major challenges in such scenarios are short utterance and cross - channel and distance mismatch for enrollment and test. with the belief that better speaker embedding can alleviate the effects from short utterance, we introduce a new speaker embedding architecture - resnet - bam, which integrates a bottleneck attention module with resnet as a simple and efficient way to further improve the representation power of resnet. this contribution brings up to 1 % eer reduction. we further address the mismatch problem in three directions. first, domain adversarial training, which aims to learn domain - invariant features, can yield to 0. 8 % eer reduction. second, front - end signal processing, including wpe and beamforming, has no obvious contribution, but together with data selection and domain adversarial training, can further contribute to 0. 5 % eer reduction. finally, data augmentation, which works with a specifically - designed data selection strategy, can lead to 2 % eer reduction. together with the above contributions, in the middle challenge results, our single submission system ( without multi - system fusion ) achieves the first and second place on task 1 and task 3, respectively.
|
arxiv:2008.03521
|
we present several infinite families of potential modular data motivated by examples of drinfeld centers of quadratic categories. in each case, the input is a pair of involutive metric groups with gauss sums differing by a sign, along with some conditions on the fixed points of the involutions and the relative sizes of the groups. from this input we construct $ s $ and $ t $ matrices which satisfy the modular relations and whose verlinde coefficients are non - negative integers. we also check certain restrictions coming from frobenius - schur indicators. these families generalize evans and gannon ' s conjectures for the modular data associated to generalized haagerup and near - group categories for odd groups, and include the modular data of the drinfeld centers of almost all known quadratic categories. in addition to the subfamilies which are conjecturally realized by centers of quadratic categories, these families include many examples of potential modular data which do not correspond to known types of modular tensor categories.
|
arxiv:1906.07397
|
we introduce twenty four two - parameter families of advanced time series forecasting functions using a new and nonparametric approach. we also introduce the concept of powering and derive nonseasonal and seasonal models with examples in education, sales, finance and economy. we compare the performance of our twenty four models to both holt - - winters and arima models for both nonseasonal and seasonal time series. we show in particular that our models not only do not require a decomposition of a seasonal time series into trend, seasonal and random components, but also lead to a substantially lower sum of absolute errors and a higher number of closer forecasts than both holt - - winters and arima models. finally, we apply and compare the performance of our twenty four models using five - year stock market data of 467 companies of the s & p500.
|
arxiv:2207.04882
|
in this work we study the constraints on the anomalous $ tq \ gamma $ ( $ q = u $, $ c $ ) couplings by photon - produced leading single top production and single top jet associated production through the main reaction $ pp \ rightarrow p \ gamma p \ rightarrow pt \ rightarrow pw ( \ rightarrow \ ell \ nu _ \ ell ) b + x $ and $ pp \ rightarrow p \ gamma p \ rightarrow ptj \ rightarrow pw ( \ rightarrow \ ell \ nu _ \ ell ) bj + x $ assuming typical lhc multipurpose forward detectors in a model independent effective lagrangian approach. our results show that : for the typical detector acceptance $ 0. 0015 < \ xi _ 1 < 0. 5 $, $ 0. 1 < \ xi _ 2 < 0. 5 $ and $ 0. 0015 < \ xi _ 3 < 0. 15 $ with a luminosity of 2 $ \ rm { fb } ^ { - 1 } $, the lower bounds of $ \ kappa _ { tq \ gamma } $ through the leading single top channel ( single top jet channel ) are 0. 0096 ( 0. 0115 ), 0. 0162 ( 0. 0152 ) and 0. 0098 ( 0. 0122 ), respectively, corresponding to $ \ rm { br } ( t \ rightarrow q \ gamma ) \ sim 3 \ times 10 ^ { - 5 } $. with a luminosity of 200 $ \ rm { fb } ^ { - 1 } $, the lower bounds of $ \ kappa _ { tq \ gamma } $ are 0. 0031 ( 0. 0034 ), 0. 0051 ( 0. 0047 ) and 0. 0031 ( 0. 0038 ), respectively, corresponding to $ \ rm { br } ( t \ rightarrow q \ gamma ) \ sim 4 \ times 10 ^ { - 6 } $. we conclude that both channels can be used to detect such anomalous $ tq \ gamma $ couplings and the detection sensitivity on $ \ kappa _ { tq \ gamma } $ can be improved.
|
arxiv:1402.1817
|
we propose a new approach, based on the puncture method, to construct black hole initial data in the so - called trumpet geometry, i. e. on slices that asymptote to a limiting surface of non - zero areal radius. our approach is easy to implement numerically and, at least for non - spinning black holes, does not require any internal boundary conditions. we present numerical results, obtained with a uniform - grid finite - difference code, for boosted black holes and binary black holes. we also comment on generalizations of this method for spinning black holes.
|
arxiv:0908.0337
|
the monolithic approach to policy representation in markov decision processes ( mdps ) looks for a single policy that can be represented as a function from states to actions. for the monolithic approach to succeed ( and this is not always possible ), a complex feature representation is often necessary since the policy is a complex object that has to prescribe what actions to take all over the state space. this is especially true in large domains with complicated dynamics. it is also computationally inefficient to both learn and plan in mdps using a complex monolithic approach. we present a different approach where we restrict the policy space to policies that can be represented as combinations of simpler, parameterized skills - - - a type of temporally extended action, with a simple policy representation. we introduce learning skills via bootstrapping ( lsb ) that can use a broad family of reinforcement learning ( rl ) algorithms as a " black box " to iteratively learn parametrized skills. initially, the learned skills are short - sighted but each iteration of the algorithm allows the skills to bootstrap off one another, improving each skill in the process. we prove that this bootstrapping process returns a near - optimal policy. furthermore, our experiments demonstrate that lsb can solve mdps that, given the same representational power, could not be solved by a monolithic approach. thus, planning with learned skills results in better policies without requiring complex policy representations.
|
arxiv:1506.03624
|
we use a chern simons landau - ginzburg ( cslg ) framework related to hierarchies of composite bosons to describe 2d harmonically trapped fast rotating bose gases in fractional quantum hall effect ( fqhe ) states. the predicted values for $ \ nu $ ( ratio of particle to vortex numbers ) are $ \ nu $ $ = $ $ { { p } \ over { q } } $ ( $ p $, $ q $ are any integers ) with even product $ pq $, including numerically favored values previously found and predicting a richer set of values. we show that those values can be understood from a bosonic analog of the law of the corresponding states relevant to the electronic fqhe. a tentative global phase diagram for the bosonic system for $ \ nu $ $ < $ 1 is also proposed.
|
arxiv:cond-mat/0612479
|
in their proof of the positive energy theorem, schoen and yau showed that every asymptotically flat spacelike hypersurface m of a lorentzian manifold which is flat along m can be isometrically imbedded with its given second fundamental form into minkowski spacetime as the graph of a function from r ^ n to r ; in particular, m is diffeomorphic to r ^ n. in this short note, we give an alternative proof of this fact. the argument generalises to the asymptotically hyperbolic case, works in every dimension n, and does not need a spin structure.
|
arxiv:1004.5430
|
on sequences analogous to alphabetical order on words ; list of order topics, list of order theory topics ; order theory, study of various binary relations known as orders ; order topology, a topology of total order for totally ordered sets ; ordinal numbers, numbers assigned to sets based on their set - theoretic order ; partial order, often called just " order " in order theory texts, a transitive antisymmetric relation ; total order, a partial order that is also total, in that either the relation or its inverse holds between any unequal elements. = = statistics = = order statistics : first - order statistics, e. g., arithmetic mean, median, quantiles ; second - order statistics, e. g., correlation, power spectrum, variance ; higher - order statistics, e. g., bispectrum, kurtosis, skewness
|
https://en.wikipedia.org/wiki/Order_(mathematics)
|
comet 103p / hartley 2 made a close approach to the earth in october 2010. it was the target of an extensive observing campaign and was visited by the deep impact spacecraft ( mission epoxi ). we present observations of hcn and ch3oh emission lines conducted with the iram plateau de bure interferometer on 22 - 23, 28 october and 4, 5 november 2010 at 1. 1, 1. 9 and 3. 4 mm wavelengths. the thermal emission from the dust coma and nucleus is detected simultaneously. interferometric images with unprecedented spatial resolution are obtained. a sine - wave variation of the thermal continuum is observed in the 23 october data, that we associate with the nucleus thermal light curve. the nucleus contributes up to 30 - 55 % of the observed continuum. the large dust - to - gas ratio ( in the range 2 - 6 ) can be explained by the unusual activity of the comet for its size, which allows decimeter size particles and large boulders to be entrained by the gas. the rotational temperature of ch3oh is measured. we attribute the increase from 35 to 46 k with increasing beam size ( from 150 to 1500 km ) to radiative processes. the hcn production rate displays strong rotation - induced variations. the hcn production curve, as well as those of co2 and h2o measured by epoxi, are interpreted with a geometric model which takes into account the rotation and the shape of the comet. the hcn and h2o production curves are in phase, showing common sources. the 1. 7h delay, on average, of hcn and h2o with respect to the co2 production curve suggests that hcn and h2o are mainly produced by subliming icy grains. the scale length of production of hcn is determined to be on the order of 500 - 1000 km, implying a mean velocity of 100 - 200 m / s for the icy grains producing hcn. the modulation of the co2 production and of the velocity offset of the hcn lines are interpreted in terms of localized sources of gas on the nucleus surface.
|
arxiv:1310.2600
|
by means of first - principle flapw - gga calculations, we have investigated the electronic properties of the newly discovered layered quaternary systems srfeasf and cafeasf as parent phases for a new group of oxygen - free feas superconductors. the electronic bands, density of states, fermi surfaces, atomic charges, together with sommerfeld coefficients and molar pauli paramagnetic susceptibility have been evaluated and discussed in comparison with oxyarsenide lafeaso - a parent phase for a new class of high - temperature ( tc about 26 - 56k ) oxygen - containing feas superconductors. similarity of our data for srfeasf and cafeasf with the band structure of oxygen - containing feas superconducting materials may be considered as theoretical background specifying the possibility of superconductivity in these oxygen - free systems.
|
arxiv:0810.3498
|
we address the challenge of exploration in reinforcement learning ( rl ) when the agent operates in an unknown environment with sparse or no rewards. in this work, we study the maximum entropy exploration problem of two different types. the first type is visitation entropy maximization previously considered by hazan et al. ( 2019 ) in the discounted setting. for this type of exploration, we propose a game - theoretic algorithm that has $ \ widetilde { \ mathcal { o } } ( h ^ 3s ^ 2a / \ varepsilon ^ 2 ) $ sample complexity thus improving the $ \ varepsilon $ - dependence upon existing results, where $ s $ is a number of states, $ a $ is a number of actions, $ h $ is an episode length, and $ \ varepsilon $ is a desired accuracy. the second type of entropy we study is the trajectory entropy. this objective function is closely related to the entropy - regularized mdps, and we propose a simple algorithm that has a sample complexity of order $ \ widetilde { \ mathcal { o } } ( \ mathrm { poly } ( s, a, h ) / \ varepsilon ) $. interestingly, it is the first theoretical result in rl literature that establishes the potential statistical advantage of regularized mdps for exploration. finally, we apply developed regularization techniques to reduce sample complexity of visitation entropy maximization to $ \ widetilde { \ mathcal { o } } ( h ^ 2sa / \ varepsilon ^ 2 ) $, yielding a statistical separation between maximum entropy exploration and reward - free exploration.
|
arxiv:2303.08059
|
a set $ s $ of vertices in a graph $ g $ is a dominating set if every vertex of $ v ( g ) \ setminus s $ is adjacent to a vertex in $ s $. a coalition in $ g $ consists of two disjoint sets of vertices $ x $ and $ y $ of $ g $, neither of which is a dominating set but whose union $ x \ cup y $ is a dominating set of $ g $. such sets $ x $ and $ y $ form a coalition in $ g $. a coalition partition, abbreviated $ c $ - partition, in $ g $ is a partition $ \ mathcal { x } = \ { x _ 1, \ ldots, x _ k \ } $ of the vertex set $ v ( g ) $ of $ g $ such that for all $ i \ in [ k ] $, each set $ x _ i \ in \ mathcal { x } $ satisfies one of the following two conditions : ( 1 ) $ x _ i $ is a dominating set of $ g $ with a single vertex, or ( 2 ) $ x _ i $ forms a coalition with some other set $ x _ j \ in \ mathcal { x } $. the coalition number $ { c } ( g ) $ is the maximum cardinality of a $ c $ - partition of $ g $. let $ { \ cal a } = \ { a _ 1, \ ldots, a _ r \ } $ and $ { \ cal b } = \ { b _ 1, \ ldots, b _ s \ } $ be two partitions of $ v ( g ) $. partition $ { \ cal b } $ is a refinement of partition $ { \ cal a } $ if every set $ b _ i \ in { \ cal b } $ is either equal to, or a proper subset of, some set $ a _ j \ in { \ cal a } $. further if $ { \ cal a } \ ne { \ cal b } $, then $ { \ cal b } $ is a proper refinement of $ { \ cal a } $. partition $ { \ cal a } $ is a minimal $ c $ - partition if it is not a proper refinement of another $ c $ - partition. haynes et al. [ akce int. j. graphs combin. 17 ( 2020 ), no. 2, 653 - - 659
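The definitions above can be checked mechanically. A hedged sketch: graphs are plain adjacency dicts, and the path P4 used below is an illustrative example of ours, not one from the paper.

```python
def is_dominating(g, s):
    """s dominates g if every vertex is in s or has a neighbour in s."""
    return all(v in s or any(u in s for u in g[v]) for v in g)

def is_c_partition(g, parts):
    """Check the c-partition definition: every part is either (1) a
    singleton dominating set, or (2) forms a coalition (neither set
    dominates alone, their union does) with some other part."""
    for i, x in enumerate(parts):
        if len(x) == 1 and is_dominating(g, x):
            continue  # condition (1)
        if is_dominating(g, x):
            return False  # a dominating non-singleton fits neither case
        if not any(not is_dominating(g, y) and is_dominating(g, x | y)
                   for j, y in enumerate(parts) if j != i):
            return False  # no coalition partner found
    return True

# path P4 (0-1-2-3): the singleton partition is a c-partition,
# since {0} pairs with {2} and {1} pairs with {3}
p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
parts = [{0}, {1}, {2}, {3}]
print(is_c_partition(p4, parts))  # True
```

By contrast, `[{0, 1, 2}, {3}]` fails: the set {0, 1, 2} dominates P4 on its own but is not a singleton, so it satisfies neither condition.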
|
arxiv:2307.01222
|
we study special properties of solutions to the ivp associated to the camassa - holm equation on the line related to the regularity and the decay of solutions. the first aim is to show how the regularity on the initial data is transferred to the corresponding solution in a class containing the " peakon solutions ". in particular, we shall show that the local regularity is similar to that exhibited by the solution of the inviscid burgers ' equation with the same initial datum. the second goal is to prove that the decay results obtained in a paper of himonas, misio { \ l } ek, ponce, and zhou extend to the class of solutions considered here.
|
arxiv:1609.06212
|
the phase diagram of the kane - mele - heisenberg ( kmh ) model in the classical limit ~ \ cite { zare }, contains disordered regions in the coupling space, as a result of competition among different terms in the hamiltonian, leading to frustration in finding a unique ground state. in this work we explore the nature of these phases in the quantum limit, for $ s = 1 / 2 $. employing exact diagonalization ( ed ) in $ s _ z $ and nearest neighbor valence bond ( nnvb ) bases, and bond and plaquette valence bond mean field theories, we show that the disordered regions are divided into ordered quantum states in the form of plaquette valence bond crystal ( pvbc ) and staggered dimerized ( sd ) phases.
|
arxiv:1407.6807
|
a brownian harmonic oscillator, which dissipates energy either by friction or via emission of electromagnetic radiation, is considered. this brownian emitter is driven by the surrounding thermo - quantum fluctuations, which are theoretically described by the fluctuation - dissipation theorem. it is shown how the abraham - lorentz force leads to dependence of the half - width on the peak frequency of the oscillator amplitude spectral density. it is also found that for the case of a charged particle, moving in vacuum at zero temperature, its root - mean - square velocity fluctuation is a universal constant, equal to roughly 1 / 18 of the speed of light. the relevant klein - kramers and smoluchowski equations are derived as well.
|
arxiv:1602.00420
|
tensor fields depending on other tensor fields are considered. the concept of extended tensor fields is introduced and the theory of differentiation for such fields is developed.
|
arxiv:math/0503332
|
at the " domus galilaeana " in pisa, many original documents and records are kept, which belong to the scientific activity carried out by enrico fermi until 1938. i compared those documentary sources with the accepted evidence and personal recollections concerning the discovery that hydrogenated substances increase the radioactivity induced by neutrons : such a comparison leads to the conclusion that the discovery occurred on october 20th 1934, i. e., two days before the date reported by all the accounts supported so far. this suggests that any future historical study regarding the experiments carried out by fermi and his group on neutrons cannot neglect to analyse carefully the accounts regarding those experiments and to compare them with the archive records.
|
arxiv:physics/0309046
|
we study babai numbers and babai $ k $ - spectra of paths and cycles. we completely determine the babai numbers of paths $ p _ n $ for $ n > 1 $ and $ 1 \ leq k \ leq n - 1 $, and the babai $ k $ - spectra for $ p _ n $ when $ 1 \ leq k \ leq n / 2 $. we also completely determine babai numbers and babai $ k $ - spectra of all cycles $ c _ n $ for $ k \ in \ { 1, 2 \ } $ and $ n \ geq 3 $ if $ k = 1 $ and $ n > 3 $ if $ k = 2 $.
|
arxiv:2409.04869
|
we present a method of using classical wavelet based multiresolution analysis to separate scales in model and observations during data assimilation with the ensemble kalman filter. in many applications, the underlying physics of a phenomena involve the interaction of features at multiple scales. blending of observational and model error across scales can result in large forecast inaccuracies since large errors at one scale are interpreted as inexact data at all scales. our method uses a transformation of the observation operator in order to separate the information from different scales of the observations. this naturally induces a transformation of the observation covariance and we put forward several algorithms to efficiently compute the transformed covariance. another advantage of our multiresolution ensemble kalman filter is that scales can be weighted independently to adjust each scale ' s effect on the forecast. to demonstrate feasibility we present applications to a one dimensional kuramoto - sivashinsky ( k - s ) model with scale dependent observation noise and an application involving the forecasting of solar photospheric flux. the latter example demonstrates the multiresolution ensemble kalman filter ' s ability to account for scale dependent model error. modeling of photospheric magnetic flux transport is accomplished by the air force data assimilative photospheric transport ( adapt ) model.
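A toy illustration of the scale separation idea described above: one level of the Haar wavelet transform splits a signal into coarse averages (large scale) and details (small scale). This is a minimal stand-in of ours, not the paper's multiresolution ensemble Kalman filter or its transformed observation operator; the signal is an arbitrary example.

```python
def haar_step(x):
    """One Haar level: pairwise averages (coarse scale) and pairwise
    half-differences (fine-scale detail). len(x) must be even."""
    coarse = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return coarse, detail

def haar_inverse(coarse, detail):
    """Exact reconstruction: each pair is (c + d, c - d)."""
    out = []
    for c, d in zip(coarse, detail):
        out += [c + d, c - d]
    return out

# a piecewise-constant signal: all content lives at the large scale
signal = [10, 10, 20, 20, 30, 30, 40, 40]
coarse, detail = haar_step(signal)
print(coarse)  # [10.0, 20.0, 30.0, 40.0] — the large-scale part
print(detail)  # [0.0, 0.0, 0.0, 0.0] — no fine-scale content here
```

In a scheme like the paper's, the two bands could then be weighted independently before assimilation, since the transform is exactly invertible.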
|
arxiv:1511.01935
|
we review the standard lock and key ( lk ) model for binding small ligands to a larger adsorbent molecule. we discuss three levels of the traditional lk model for binding. within this model the binding constant or the gibbs energy of the binding process is related to the total interaction energy between the ligand and the binding site of the adsorbent molecules. when solvent molecules are present, which is the case in all binding processes in biochemistry, we find that a major part of the gibbs energy of binding could be due to interactions mediated through the solvent molecules. this finding could have major consequences for the applicability of the lk model in drug design, and perhaps require a shift in the prevailing paradigm in this field of research. keywords : lock and key model, binding constant, solvent effect on binding, hydrophilic effect, molecular recognition.
|
arxiv:1806.03499
|
interfacial mixing of elements is a well - known phenomenon found in thin film deposition. for thin - film magnetic heterostructures, interfacial compositional inhomogeneities can have drastic effects on the resulting functionalities. as such, care must be taken to characterize the compositional and magnetic properties of thin films intended for device use. recently, ferrimagnetic mn $ _ 4 $ n thin films have drawn considerable interest due to exhibiting perpendicular magnetic anisotropy, high domain - wall mobility, and good thermal stability. in this study, we employed x - ray photoelectron spectroscopy ( xps ) and polarized neutron reflectometry ( pnr ) measurements to investigate the interfaces of an epitaxially - grown mgo / mn $ _ 4 $ n / pt trilayer deposited at 450 $ ^ { \ circ } $ c. xps revealed the thickness of elemental mixing regions of near 5 nm at both interfaces. using pnr, we found that these interfaces exhibit essentially zero net magnetization at room temperature. despite the high - temperature deposition at 450 $ ^ { \ circ } $ c, the thickness of mixing regions is comparable to those observed in magnetic films deposited at room temperature. micromagnetic simulations show that this interfacial mixing should not deter the robust formation of small skyrmions, consistent with a recent experiment. the results obtained are encouraging in terms of the potential of integrating thermally stable mn $ _ 4 $ n into future spintronic devices.
|
arxiv:2208.02681
|
similarly to the ordinary bosonic liouville field theory, in its n = 1 supersymmetric version an infinite set of operator valued relations, the ` ` higher equations of motion ' ', holds. the equations are in one to one correspondence with the singular representations of the super virasoro algebra and enumerated by a couple of natural numbers $ ( m, n ) $. we demonstrate explicitly these equations in the classical case, where the equations of type $ ( 1, n ) $ survive and can be interpreted directly as relations for classical fields. the general form of the higher equations of motion is established in the quantum case, both for the neveu - schwarz and ramond series.
|
arxiv:hep-th/0610316
|
the metaverse has gained tremendous popularity in recent years, allowing the interconnection of users worldwide. however, current systems in metaverse scenarios, such as virtual reality glasses, offer a partial immersive experience. in this context, brain - computer interfaces ( bcis ) can introduce a revolution in the metaverse, although a study of the applicability and implications of bcis in these virtual scenarios is required. based on the absence of literature, this work reviews, for the first time, the applicability of bcis in the metaverse, analyzing the current status of this integration based on different categories related to virtual worlds and the evolution of bcis in these scenarios in the medium and long term. this work also proposes the design and implementation of a general framework that integrates bcis with different data sources from sensors and actuators ( e. g., vr glasses ) based on a modular design to be easily extended. this manuscript also validates the framework in a demonstrator consisting of driving a car within a metaverse, using a bci for neural data acquisition, a vr headset to provide realism, and a steering wheel and pedals. four use cases ( ucs ) are selected, focusing on cognitive and emotional assessment of the driver, detection of drowsiness, and driver authentication while using the vehicle. moreover, this manuscript offers an analysis of bci trends in the metaverse, also identifying future challenges that the intersection of these technologies will face. finally, it reviews the concerns that using bcis in virtual world applications could generate according to different categories : accessibility, user inclusion, privacy, cybersecurity, physical safety, and ethics.
|
arxiv:2212.03169
|
we present results of a detailed study of the rate of the accretion of planetesimals by a growing proto - jupiter in the core - accretion model. using a newly developed code, we accurately combine a detailed three - body trajectory calculation with gas drag experienced during the passage of planetesimals in the protoplanet ' s envelope. we find that the motion of planetesimals is excited to the extent that encounters with the proto - planetary envelope become so fast that ram pressure breaks up the planetesimals in most encounters. as a result, the accretion rate is largely independent of the planetesimal size and composition. for the case we explored of a planet forming at 5. 2 au from the sun in a disk with a solid surface density of 6 g / cm ^ 2 ( lozovsky et al. 2017 ) the accretion rate we compute differs in several respects from that assumed by those authors. we find that only 4 - 5 m _ earth is accreted in the first 1. 5x10 ^ 6 years before the onset of rapid gas accretion. most of the mass, some 10 m _ earth, is accreted simultaneously with this rapid gas accretion. in addition, we find that the mass accretion rate remains small, but non - zero for at least a million years after this point, and an additional 0. 3 - 0. 4 m _ earth is accreted during that time. this late accretion, together with a rapid infall of gas could lead to the accreted material being mixed throughout the outer regions, and may account for the enhancement of high - z material in jupiter ' s envelope.
|
arxiv:1911.12998
|
this paper deals with stability of a certain class of fractional order linear and nonlinear systems. the stability is investigated in the time domain and the frequency domain. the general stability conditions and several illustrative examples are presented as well.
|
arxiv:0811.4102