There is a large observational scatter toward low velocities in the stellar-mass Tully-Fisher relation if disturbed and compact objects are included. However, this scatter can be eliminated if one replaces rotation velocity with $S_{0.5}$, a quantity that includes a velocity dispersion term added in quadrature with the rotation velocity. In this work we use a large suite of hydrodynamic N-body galaxy merger simulations to explore a possible mechanism for creating the observed relations. Using mock observations of the simulations, we test for the presence of observational effects and explore the relationship between $S_{0.5}$ and intrinsic properties of the galaxies. We find that galaxy mergers can explain the scatter in the Tully-Fisher relation as well as the tight $S_{0.5}$-stellar mass relation. Furthermore, $S_{0.5}$ is correlated with the total central mass of a galaxy, including contributions due to dark matter.
arxiv:0902.0566
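For reference, the kinematic estimator $S_{0.5}$ used in the abstract above has a standard definition in the literature (e.g., Weiner et al. 2006; Kassin et al. 2007): rotation and dispersion are combined in quadrature with a weighting $K$, conventionally set to $0.5$. This is background, not a formula quoted from this particular paper:

$$ S_K = \sqrt{K\,V_{\rm rot}^2 + \sigma_g^2}\,, \qquad K = 0.5\,. $$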
We demonstrate experimental implementation of robust phase estimation (RPE) to learn the phases of X and Y rotations on a trapped $\mathrm{Yb}^+$ ion qubit. We estimate these phases with uncertainties less than $4 \cdot 10^{-4}$ radians using as few as 176 total experimental samples per phase, and our estimates exhibit Heisenberg scaling. Unlike standard phase estimation protocols, RPE neither assumes perfect state preparation and measurement, nor requires access to ancillae. We cross-validate the results of RPE with the more resource-intensive protocol of gate set tomography.
arxiv:1702.01763
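As background for the Heisenberg-scaling claim above: for $N$ total uses of the unknown rotation, the phase uncertainty of a protocol limited by the standard quantum limit versus one achieving Heisenberg scaling behaves as follows (a general quantum-metrology fact, not a derivation from this paper):

$$ \delta\hat\phi_{\rm SQL} \propto \frac{1}{\sqrt{N}}\,, \qquad \delta\hat\phi_{\rm Heisenberg} \propto \frac{1}{N}\,. $$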
very complex task to develop dry etch processes that balance chemical and physical etching, since there are many parameters to adjust. By changing the balance it is possible to influence the anisotropy of the etching: since the chemical part is isotropic and the physical part highly anisotropic, the combination can form sidewalls with shapes ranging from rounded to vertical. Deep reactive-ion etching (DRIE) is a special subclass of RIE that is growing in popularity. In this process, etch depths of hundreds of micrometers are achieved with almost vertical sidewalls. The primary technology is based on the so-called "Bosch process", named after the German company Robert Bosch, which filed the original patent, where two different gas compositions alternate in the reactor. Currently, there are two variations of DRIE. The first variation consists of three distinct steps (the original Bosch process) while the second variation only consists of two steps. In the first variation, the etch cycle is as follows: (i) SF6 isotropic etch; (ii) C4F8 passivation; (iii) SF6 anisotropic etch for floor cleaning. In the second variation, steps (i) and (iii) are combined. Both variations operate similarly. The C4F8 creates a polymer on the surface of the substrate, and the second gas composition (SF6 and O2) etches the substrate. The polymer is immediately sputtered away by the physical part of the etching, but only on the horizontal surfaces and not the sidewalls. Since the polymer only dissolves very slowly in the chemical part of the etching, it builds up on the sidewalls and protects them from etching. As a result, etching aspect ratios of 50 to 1 can be achieved. The process can easily be used to etch completely through a silicon substrate, and etch rates are 3-6 times higher than wet etching. After preparing a large number of MEMS devices on a silicon wafer, individual dies have to be separated, which is called die preparation in semiconductor technology. For some applications, the separation is preceded by wafer backgrinding in order to reduce the wafer thickness. Wafer dicing may then be performed either by sawing using a cooling liquid or a dry laser process called stealth dicing.

== Manufacturing technologies ==

Bulk micromachining is the oldest paradigm of silicon-based MEMS. The whole thickness of a silicon
https://en.wikipedia.org/wiki/MEMS
We consider Markov processes, which describe e.g. queueing network processes, in a random environment which influences the network by determining random breakdown of nodes, and the necessity of repair thereafter. Starting from an explicit steady-state distribution of product form available in the literature, we notice that this steady-state distribution does not provide information about the correlation structure in time and space (over nodes). We study this correlation structure via one-step correlations for the queueing-environment process. Although formulas for absolute values of these correlations are complicated, the differences of correlations of related networks are simple and have a nice structure. We therefore compare two networks in a random environment having the same invariant distribution, and focus on the time behaviour of the processes when in such a network the environment changes or the rules for traveling are perturbed. Evaluating the comparison formulas, we compare spectral gaps and asymptotic variances of related processes.
arxiv:1503.00153
We prove a uniqueness result for the broken ray transform acting on the sums of functions and $1$-forms on surfaces in the presence of an external force and a reflecting obstacle. We assume that the considered twisted geodesic flows have nonpositive curvature. The broken rays are generated from the twisted geodesic flows by the law of reflection on the boundary of a suitably convex obstacle. Our work generalizes recent results for the broken geodesic ray transform on surfaces to more general families of curves, including the magnetic flows and Gaussian thermostats.
arxiv:2306.17604
Let $(X_n \colon n \in \mathbb{Z})$ be a two-sided recurrent Markov chain with fixed initial state $X_0$ and let $\nu$ be a probability measure on its state space. We give a necessary and sufficient criterion for the existence of a non-randomized time $T$ such that $(X_{T+n} \colon n \in \mathbb{Z})$ has the law of the same Markov chain with initial distribution $\nu$. In the case when our criterion is satisfied, we give an explicit solution, which is also a stopping time, and study its moment properties. We show that this solution minimizes the expectation of $\psi(T)$ in the class of all non-negative solutions, simultaneously for all non-negative concave functions $\psi$.
arxiv:1407.4734
A new wireless communication system denoted multi-code multi-carrier CDMA (MC-MC CDMA), which is the combination of multi-code CDMA and multi-carrier CDMA, is analyzed in this paper. This system can support multi-rate services using multi-code schemes together with multi-carrier transmission for high data rates. The system is evaluated using a traveling wave tube amplifier (TWTA). This type of amplifier continues to offer the best microwave high-power amplifier (HPA) performance in terms of power efficiency, size and cost, but lags behind solid-state power amplifiers (SSPAs) in linearity. This paper presents a technique for improving TWTA linearity. The use of a predistorter (PD) linearization technique is described to provide TWTA performance comparable or superior to conventional SSPAs. The characteristics of the PD scheme are derived based on an extension of Saleh's model for HPAs.
arxiv:1105.4379
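The predistortion idea above builds on Saleh's TWTA model. Below is a minimal sketch of the AM/AM and AM/PM characteristics and a conceptual gain-inversion predistorter; the parameter values are commonly quoted illustrative defaults from Saleh's 1981 paper, not this paper's fitted values, and the simple inverse-interpolation predistorter is only a stand-in for the scheme actually derived in the paper.

```python
import numpy as np

# Saleh model: AM/AM amplitude compression and AM/PM phase rotation of a TWTA.
def saleh_am_am(r, alpha_a=2.1587, beta_a=1.1517):
    return alpha_a * r / (1.0 + beta_a * r**2)

def saleh_am_pm(r, alpha_p=4.0033, beta_p=9.1040):
    return alpha_p * r**2 / (1.0 + beta_p * r**2)

def twta(x):
    """Apply the Saleh nonlinearity to a complex baseband signal."""
    r, phi = np.abs(x), np.angle(x)
    return saleh_am_am(r) * np.exp(1j * (phi + saleh_am_pm(r)))

def predistort(x, n_grid=4096):
    """Numerically invert the AM/AM curve (and pre-rotate the phase) so that
    twta(predistort(x)) is approximately linear -- a conceptual sketch only."""
    r_in = np.linspace(0.0, 0.6, n_grid)   # stay below the saturation peak
    r_out = saleh_am_am(r_in)
    r = np.clip(np.abs(x), 0.0, r_out.max())
    r_pre = np.interp(r, r_out, r_in)      # inverse AM/AM by interpolation
    phi_pre = -saleh_am_pm(r_pre)          # cancel the AM/PM rotation
    return r_pre * np.exp(1j * (np.angle(x) + phi_pre))

x = 0.3 * np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 8))
print(np.round(twta(predistort(x)), 3))    # ~ x: linearized end to end
```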
One of the major barriers to using large language models (LLMs) in medicine is the perception that they use uninterpretable methods to make clinical decisions that are inherently different from the cognitive processes of clinicians. In this manuscript we develop novel diagnostic reasoning prompts to study whether LLMs can perform clinical reasoning to accurately form a diagnosis. We find that GPT-4 can be prompted to mimic the common clinical reasoning processes of clinicians without sacrificing diagnostic accuracy. This is significant because an LLM that can use clinical reasoning to provide an interpretable rationale offers physicians a means to evaluate whether LLMs can be trusted for patient care. Novel prompting methods have the potential to expose the black box of LLMs, bringing them one step closer to safe and effective use in medicine.
arxiv:2308.06834
The purpose of this research was to test the influence of a decrease in ventilation inhomogeneity (after bronchodilator (Berotec) inhalation (BI)) on the magnitude of the dynamic compliance of the lungs (Cdyn) in asthma patients with ventilation impairments. Methods and materials: 20 patients with second- and third-degree ventilation impairments (VC < 73%, FEV1 < 51%, MVV < 56%), without restrictive lung disease, suffering from bronchial asthma, were studied before and after BI by plotting volume and flow rate against the transpulmonary pressure. The change in ventilation inhomogeneity was assessed through the change after BI of Cdyn, of Cdyn immediately after flow interruption (Cdyn1), of tissue resistance at inhalation (Rti in) and exhalation (Rti ex), of ventilation parameters, and of general parameters of respiratory mechanics. Results: the ventilation parameters improved (p < 0.05). General parameters of respiratory mechanics also improved. Rti in and Rti ex were 0.48 ± 0.16 and 1.05 ± 0.25 kPa/l/s before BI and decreased to 0.09 ± 0.04 and 0.28 ± 0.09 kPa/l/s after BI (p < 0.05; p < 0.05). But Cdyn and Cdyn1 did not change after BI. Conclusions: 1. The decrease of ventilation inhomogeneity and tissue friction after BI does not influence the initially reduced dynamic compliance of the lungs in asthma patients without restrictive lung disease. 2. The failure of dynamic compliance to increase after BI is probably due to changes in the elastic component of the lung parenchyma, insensitive to Berotec.
arxiv:q-bio/0402025
We report on the temperature dependence of the quasiparticle density of states (DOS) in the simple binary compound MgB$_2$, directly measured using a scanning tunneling microscope (STM). To achieve high-quality tunneling conditions, a small crystal of MgB$_2$ is used as a tip in the STM experiment. The "sample" is chosen to be a 2H-NbSe$_2$ single crystal presenting an atomically flat surface. At low temperature the tunneling conductance spectra show a gap at the Fermi energy followed by two well-pronounced conductance peaks on each side. They appear at voltages $V_s \simeq \pm 3.8$ mV and $V_l \simeq \pm 7.8$ mV. With rising temperature both peaks disappear at the $T_c$ of the bulk MgB$_2$, a behavior consistent with the model of two-gap superconductivity. An explanation of the double-peak structure in terms of a particular proximity effect is also discussed.
arxiv:cond-mat/0105592
We summarize the 14th meeting of the International Astronomical Consortium for High Energy Calibration (IACHEC) held at Shonan Village (Kanagawa, Japan) in May 2019. Sixty scientists directly involved in the calibration of operational and future high-energy missions gathered during 3.5 days to discuss the status of the cross-calibration between the current international complement of X-ray observatories, and the possibilities to improve it. This summary consists of reports from the various working groups, with topics ranging from the identification and characterization of standard calibration sources, multi-observatory cross-calibration campaigns, appropriate and new statistical techniques, calibration of instruments and characterization of background, to communication and preservation of knowledge and results for the benefit of the astronomical community.
arxiv:2001.11117
We show that a point particle moving in space-time on entwined-pair paths generates Schroedinger's equation in a static potential in the appropriate continuum limit. This provides a new realist context for the Schroedinger equation within the domain of classical stochastic processes. It also suggests that self-quantizing systems may provide considerable insight into conventional quantum mechanics.
arxiv:quant-ph/0206095
We review the status of $\nu_\mu \to \nu_\tau$ flavor transitions of atmospheric neutrinos in the 92 kton-year data sample collected in the first phase of the Super-Kamiokande (SK) experiment, in combination with the recent spectral data from the KEK-to-Kamioka (K2K) accelerator experiment (including 29 single-ring muon events). We consider a theoretical framework which embeds flavor oscillations plus hypothetical decoherence effects, and where both standard oscillations and pure decoherence represent limiting cases. It is found that standard oscillations provide the best description of the SK+K2K data, and that the associated mass-mixing parameters are determined at $1\sigma$ (and d.o.f. = 1) as $\Delta m^2 = (2.6 \pm 0.4) \times 10^{-3}\ \mathrm{eV}^2$ and $\sin^2(2\theta) = 1.00^{+0.00}_{-0.05}$. As compared with standard oscillations, the case of pure decoherence is disfavored, although it cannot be ruled out yet. In the general case, additional decoherence effects in the $\nu_\mu \to \nu_\tau$ channel do not improve the fit to the SK and K2K data, and upper bounds can be placed on the associated decoherence parameter. Such indications, presently dominated by SK, could be strengthened by further K2K data, provided that the current spectral features are confirmed with higher statistics. A detailed description of the statistical analysis of SK and K2K data is also given, using the so-called "pull" approach to systematic uncertainties.
arxiv:hep-ph/0303064
We present an efficient MPI-parallel geometric multigrid library for quadtree (2D) or octree (3D) grids with adaptive refinement. Cartesian 2D/3D and cylindrical 2D geometries are supported, with second-order discretizations for the elliptic operators. Periodic, Dirichlet, and Neumann boundary conditions can be handled, as well as free-space boundary conditions for 3D Poisson problems, for which we use an FFT-based solver on the coarse grid. Scaling results up to 1792 cores are presented. The library can be used to extend adaptive mesh refinement frameworks with an elliptic solver, which we demonstrate by coupling it to MPI-AMRVAC. Several test cases are presented in which the multigrid routines are used to control the divergence of the magnetic field in magnetohydrodynamic simulations.
arxiv:1901.11370
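The MPI-parallel octree library itself is not reproduced here, but the core geometric-multigrid idea it implements can be illustrated with a minimal serial 1D Poisson V-cycle. Everything below (grid size, weighted-Jacobi smoother, full-weighting restriction, linear prolongation) is a textbook sketch, not code from the paper.

```python
import numpy as np

def smooth(u, f, h, iters=3):
    """Weighted Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(iters):
        u[1:-1] += 0.8 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    n = u.size - 1
    if n == 2:                                   # coarsest grid: direct solve
        u[1] = 0.5 * f[1] * h * h
        return u
    u = smooth(u, f, h)                          # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)                    # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
    e = np.zeros_like(u)                         # linear prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    return smooth(u, f, h)                       # post-smoothing

n = 128
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)                 # exact solution: sin(pi*x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, h)
print(np.abs(u - np.sin(np.pi * x)).max())       # ~ discretization error
```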
Category-level 6D pose estimation, aiming to predict the location and orientation of unseen object instances, is fundamental to many scenarios such as robotic manipulation and augmented reality, yet still remains unsolved. Precisely recovering the instance 3D model in the canonical space and accurately matching it with the observation is an essential point when estimating 6D pose for unseen objects. In this paper, we achieve accurate category-level 6D pose estimation via cascaded relation and recurrent reconstruction networks. Specifically, a novel cascaded relation network is dedicated to advanced representation learning to explore the complex and informative relations among the instance RGB image, instance point cloud and category shape prior. Furthermore, we design a recurrent reconstruction network for iterative residual refinement to progressively improve the reconstruction and correspondence estimations from coarse to fine. Finally, the instance 6D pose is obtained leveraging the estimated dense correspondences between the instance point cloud and the reconstructed 3D model in the canonical space. We have conducted extensive experiments on two well-acknowledged benchmarks of category-level 6D pose estimation, with significant performance improvement over existing approaches. On the representatively strict evaluation metrics of $3D_{75}$ and $5^{\circ}2\,\mathrm{cm}$, our method exceeds the latest state-of-the-art SPD by $4.9\%$ and $17.7\%$ on the CAMERA25 dataset, and by $2.7\%$ and $8.5\%$ on the REAL275 dataset. Codes are available at https://wangjiaze.cn/projects/6dposeestimation.html.
arxiv:2108.08755
In the spirit of making high-order discontinuous Galerkin (DG) methods more competitive, researchers have developed the hybridized DG methods, a class of discontinuous Galerkin methods that generalizes the hybridizable DG (HDG), the embedded DG (EDG) and the interior embedded DG (IEDG) methods. These methods are amenable to hybridization (static condensation) and thus to more computationally efficient implementations. Like other high-order DG methods, however, they may suffer from numerical stability issues in under-resolved fluid flow simulations. Motivated by this, we introduce the hybridized DG methods for the compressible Euler and Navier-Stokes equations in entropy variables. Under a suitable choice of the stabilization matrix, the scheme can be shown to be entropy stable and to satisfy the second law of thermodynamics in an integral sense. The performance and robustness of the proposed family of schemes are illustrated through a series of steady and unsteady flow problems in subsonic, transonic, and supersonic regimes. The hybridized DG methods in entropy variables show the optimal accuracy order given by the polynomial approximation space, and are significantly superior to their counterparts in conservation variables in terms of stability and robustness, particularly for under-resolved and shock flows.
arxiv:1808.05066
The dynamic behavior of cluster algorithms is analyzed in the classical mean-field limit. Rigorous analytical results below $T_c$ establish that the dynamic exponent has the value $z_{\rm SW} = 1$ for the Swendsen-Wang algorithm and $z_{\rm W} = 0$ for the Wolff algorithm. An efficient Monte Carlo implementation is introduced, adapted to using these algorithms on fully connected graphs. Extensive simulations both above and below $T_c$ demonstrate scaling and evaluate the finite-size scaling function by means of a rather impressive collapse of the data.
arxiv:cond-mat/9603134
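The paper's efficient fully-connected-graph implementation is not reproduced here, but a minimal serial sketch of one Wolff cluster update for the mean-field Ising model conveys the idea. The coupling normalization $J/N$ (so that $T_c = J$) is the standard mean-field convention assumed here, and this naive version costs O(N x cluster size) per update rather than whatever optimized bookkeeping the paper introduces.

```python
import numpy as np

rng = np.random.default_rng(0)

def wolff_update(s, beta, J=1.0):
    """One Wolff cluster flip for the fully connected (mean-field) Ising model
    H = -(J/N) * sum_{i<j} s_i s_j.  Naive O(N * |cluster|) version."""
    n = s.size
    p_add = 1.0 - np.exp(-2.0 * beta * J / n)    # bond probability for coupling J/N
    seed = rng.integers(n)
    in_cluster = np.zeros(n, dtype=bool)
    in_cluster[seed] = True
    stack = [seed]
    while stack:
        i = stack.pop()
        # on the complete graph every other site is a neighbour of i
        aligned = (s == s[i]) & ~in_cluster
        join = aligned & (rng.random(n) < p_add)
        for j in np.flatnonzero(join):
            in_cluster[j] = True
            stack.append(j)
    s[in_cluster] *= -1                          # flip the whole cluster
    return in_cluster.sum()

n, beta = 1024, 1.2                              # beta > beta_c = 1 in these units
s = rng.choice([-1, 1], size=n)
sizes = [wolff_update(s, beta) for _ in range(200)]
print(np.mean(sizes) / n, abs(s.sum()) / n)      # cluster fraction, |magnetization|
```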
Recently considerable excitement has arisen due to the experimental observation of a field-induced spin liquid phase in the compound $\alpha$-RuCl$_3$. However, the nature of this putative spin liquid phase and the relevant microscopic model Hamiltonian remain unclear. In this work, we address these questions by performing large-scale numerical simulations of a generalized Kitaev-Heisenberg model proposed to describe the physics of $\alpha$-RuCl$_3$. While there is no evidence for an intermediate phase for in-plane magnetic fields, our results strongly suggest that a stable intermediate spin liquid phase, sandwiched between a magnetically ordered phase at low fields and a high-field polarized phase, can be induced by out-of-plane magnetic fields. Moreover, we show that this field-induced spin liquid phase can be smoothly connected to a spin liquid possessing a spinon Fermi surface, as proposed recently for the Kitaev model. The relevance of our results to $\alpha$-RuCl$_3$ is also discussed.
arxiv:1901.09131
57710-9.
Chandrasekhar, Thomas (1 December 2006). Analog Communication (JNTU). Tata McGraw-Hill Education. ISBN 978-0-07-064770-1.
Chaturvedi, Pradeep (1997). Sustainable Energy Supply in Asia: Proceedings of the International Conference, Asia Energy Vision 2020, Organised by the Indian Member Committee, World Energy Council under the Institution of Engineers (India), during November 15-17, 1996 at New Delhi. Concept Publishing Company. ISBN 978-81-7022-631-4.
Dodds, Christopher; Kumar, Chandra; Veering, Bernadette (March 2014). Oxford Textbook of Anaesthesia for the Elderly Patient. Oxford University Press. ISBN 978-0-19-960499-9.
Fairman, Frederick Walker (11 June 1998). Linear Control Theory: The State Space Approach. John Wiley & Sons. ISBN 978-0-471-97489-5.
Fredlund, D. G.; Rahardjo, H.; Fredlund, M. D. (30 July 2012). Unsaturated Soil Mechanics in Engineering Practice. Wiley. ISBN 978-1-118-28050-8.
Grant, Malcolm Alister; Bixley, Paul F. (1 April 2011). Geothermal Reservoir Engineering. Academic Press. ISBN 978-0-12-383881-0.
Grigsby, Leonard L. (16 May 2012). Electric Power Generation, Transmission, and Distribution, Third Edition. CRC Press. ISBN 978-1-4398-5628-4.
Heertje, Arnold; Perlman, Mark (1990). Evolving Technology and Market Structure: Studies in Schumpeterian Economics. University of Michigan Press. ISBN 978-0-472-10192-4.
Huurdeman, Anton A. (31 July 2003). The Worldwide History of Telecommunications. John Wiley & Sons. ISBN 978-0-471-20505-0.
Iga, Kenichi; Kokubun, Yasuo (12 December 2010). Encyclopedic Handbook of Integrated Optics. CRC Press. ISBN 978-1-4200-2781-5.
Jalote, Pankaj (31 January 2006). An Integrated Approach to Software Engineering. Springer. ISBN 978-0-387-28132-2.
Khanna, Vin
https://en.wikipedia.org/wiki/Electrical_engineering
We classify GL$(2,\mathbb{R})$-invariant point markings over components of strata of abelian differentials. Such point markings exist only when the component is hyperelliptic and arise from marking Weierstrass points or two points exchanged by the hyperelliptic involution. We show that these point markings can be used to determine the holomorphic sections of the universal curve restricted to orbifold covers of subvarieties of the moduli space of Riemann surfaces that contain a Teichmüller disk. The finite blocking problem is also solved for translation surfaces with dense GL$(2,\mathbb{R})$ orbit.
arxiv:1601.07894
Data-driven transmission line fault location methods have the potential to locate faults more accurately by extracting fault information from available data. However, most of the data-driven fault location methods in the literature are not validated by field data, for the following reasons. On one hand, the available field data during faults are very limited for one specific transmission line, and using field data for training is close to impossible. On the other hand, if simulation data are utilized for training, the mismatch between the simulation system and the practical system will cause fault location errors. To this end, this paper proposes a physics-informed data-driven fault location method. The data from a practical fault event are first analyzed to extract the ranges of system and fault parameters such as equivalent source impedances, loading conditions, fault inception angles (FIA) and fault resistances. Afterwards, the simulation system is constructed with these ranges of parameters to generate data for training. This procedure bridges the gap between simulation and practical power systems, and at the same time considers the uncertainty of system and fault parameters in practice. The proposed data-driven method does not require system parameters; it only requires instantaneous voltage and current measurements at the local terminal, with a low sampling rate of several kHz and a short fault time window of half a cycle before and after the fault occurs. Numerical experiments and field data experiments clearly validate the advantages of the proposed method over existing data-driven methods.
arxiv:2307.09740
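The key step described above, building a simulation training set from parameter ranges extracted from a field event, can be sketched as follows. All parameter names and numeric ranges here are placeholders, and the simulator hook is a stand-in for whatever EMT tool is actually used; none of it is taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ranges inferred from the recorded field event (illustrative values only).
ranges = {
    "source_impedance_ohm": (4.0, 12.0),
    "load_factor":          (0.6, 1.0),
    "fault_inception_deg":  (0.0, 360.0),
    "fault_resistance_ohm": (0.0, 50.0),
    "fault_location_pu":    (0.0, 1.0),   # label: position along the line
}

def sample_case():
    """Draw one simulation scenario uniformly from the extracted ranges."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}

def simulate(case):
    """Placeholder for an EMT simulation returning half a cycle of local
    voltage/current samples before and after the fault (a few kHz rate)."""
    raise NotImplementedError("hook up your transient simulator here")

training_set = [sample_case() for _ in range(10_000)]
print(training_set[0])
```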
Although beam emittance is critical for the performance of high-brightness accelerators, optimization is often time limited as emittance calculations, commonly done via quadrupole scans, are typically slow. Such calculations are a type of multipoint query, i.e., each query requires multiple secondary measurements. Traditional black-box optimizers such as Bayesian optimization are slow and inefficient when dealing with such objectives, as they must acquire the full series of measurements, but return only the emittance, with each query. We propose a new information-theoretic algorithm, Multipoint-BAX, for black-box optimization on multipoint queries, which queries and models individual beam-size measurements using techniques from Bayesian algorithm execution (BAX). Our method avoids the slow multipoint query on the accelerator by acquiring points through a virtual objective, i.e., calculating the emittance objective from a fast learned model rather than directly from the accelerator. We use Multipoint-BAX to minimize emittance at the Linac Coherent Light Source (LCLS) and the Facility for Advanced Accelerator Experimental Tests II (FACET-II). In simulation, our method is 20$\times$ faster and more robust to noise compared to existing methods. In live tests, it matched the hand-tuned emittance at FACET-II and achieved a 24% lower emittance than hand-tuning at LCLS. Our method represents a conceptual shift for optimizing multipoint queries, and we anticipate that it can be readily adapted to similar problems in particle accelerators and other scientific instruments.
arxiv:2209.04587
Theorists of entropic (emergent) gravity put forward that what has been regarded as unobserved dark matter might instead be the product of quantum effects that can be looked at as emergent energy (EE). Here we describe a novel Schrödinger mechanism (SM). This SM uncovers the existence of new quantum gravitational states that could be associated with the above-mentioned EE. This is done on the basis of the microscopic Verlinde-like entropic force advanced in [Physica A 511 (2018) 139], which deviates from Newton's form at extremely short distances.
arxiv:1904.04014
A point of a parametric surface which is not regular is irregular. There are several kinds of irregular points. It may occur that an irregular point becomes regular if one changes the parametrization. This is the case of the poles in the parametrization of the unit sphere by Euler angles: it suffices to permute the role of the different coordinate axes for changing the poles. On the other hand, consider the circular cone of parametric equation

$$x = t\cos(u), \qquad y = t\sin(u), \qquad z = t.$$

The apex of the cone is the origin $(0, 0, 0)$, and is obtained for $t = 0$. It is an irregular point that remains irregular, whichever parametrization is chosen (otherwise, there would exist a unique tangent plane). Such an irregular point, where the tangent plane is undefined, is said to be singular. There is another kind of singular points: the self-crossing points, that is, the points where the surface crosses itself. In other words, these are the points which are obtained for (at least) two different values of the parameters.

=== Graph of a bivariate function ===

Let $z = f(x, y)$ be a function of two real variables, a bivariate function. This is a parametric surface, parametrized as

$$x = t, \qquad y = u, \qquad z = f(t, u).$$

Every point of this surface is regular, as the two first columns of the Jacobian matrix form the identity matrix of rank two.

=== Rational surface ===

A rational surface is a surface that may be parametrized by rational functions of two variables. That is, if $f_i(t, u)$ are, for $i = 0, 1, 2, 3$, polynomials in two indeterminates, then the parametric surface defined by $x = f_1(t, u)/f_0(t, u)$, $y = f_2(t, u)/f_0(t, u)$, $z = f_3(t,
https://en.wikipedia.org/wiki/Surface_(mathematics)
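The regular/irregular distinction above can be checked mechanically: a point of a parametrized surface is regular exactly when the Jacobian of the parametrization has rank 2 there. A small sympy sketch for the cone and bivariate-graph examples from the text (the symbol names are mine):

```python
import sympy as sp

t, u = sp.symbols("t u", real=True)

# Circular cone from the text: x = t*cos(u), y = t*sin(u), z = t.
cone = sp.Matrix([t * sp.cos(u), t * sp.sin(u), t])
J = cone.jacobian([t, u])

print(J.rank())                       # 2: generic points are regular
print(J.subs(t, 0).rank())            # 1 at the apex t = 0: irregular point

# Graph of a bivariate function z = f(x, y): regular everywhere.
f = sp.Function("f")(t, u)
graph = sp.Matrix([t, u, f])
print(graph.jacobian([t, u]).rank())  # 2, since the top 2x2 block is the identity
```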
In the minimal supersymmetric standard model with complex parameters (cMSSM) we calculate higher-order corrections to the Higgs boson sector in the Feynman-diagrammatic approach using the on-shell renormalization scheme. The application of this approach to the cMSSM, being complementary to existing approaches, is analyzed in detail. Numerical examples for the leading fermionic corrections, including the leading two-loop effects, are presented. Numerical agreement within 10% with other approaches is found for small and moderate mixing in the scalar top sector. The leading fermionic corrections, supplemented by the full logarithmic one-loop and the leading two-loop contributions, are implemented into the public Fortran code FeynHiggsFastC.
arxiv:hep-ph/0108059
A homogeneous set of an $n$-vertex graph is a set $X$ of vertices ($2 \le |X| \le n-1$) such that every vertex not in $X$ is either complete or anticomplete to $X$. A graph is called prime if it has no homogeneous set. A chain of length $t$ is a sequence of $t+1$ vertices such that for every vertex in the sequence except the first one, its immediate predecessor is its unique neighbor or its unique non-neighbor among all of its predecessors. We prove that for all $n$, there exists $N$ such that every prime graph with at least $N$ vertices contains one of the following graphs or their complements as an induced subgraph: (1) the graph obtained from $K_{1,n}$ by subdividing every edge once, (2) the line graph of $K_{2,n}$, (3) the line graph of the graph in (1), (4) the half-graph of height $n$, (5) a prime graph induced by a chain of length $n$, (6) two particular graphs obtained from the half-graph of height $n$ by making one side a clique and adding one vertex.
arxiv:1504.05322
We present a term rewrite system that formally models the Message Authenticator Algorithm (MAA), which was one of the first cryptographic functions for computing a message authentication code and was adopted, between 1987 and 2001, in international standards (ISO 8730 and ISO 8731-2) to ensure the authenticity and integrity of banking transactions. Our term rewrite system is large (13 sorts, 18 constructors, 644 non-constructors, and 684 rewrite rules), confluent, and terminating. Implementations in thirteen different languages have been automatically derived from this model and used to validate 200 official test vectors for the MAA.
arxiv:1703.06573
The complete 2-loop QED contributions to the muon lifetime have been calculated analytically in the Fermi theory. The exact result for the effects of virtual and real photons, virtual electrons, muons and hadrons, as well as $e^+e^-$ pair creation, is

$$\Delta\Gamma^{(2)} = \Gamma_0 \left(\frac{\alpha}{\pi}\right)^2 \left[\frac{156815}{5184} - \frac{1036}{27}\,\zeta(2) - \frac{895}{36}\,\zeta(3) + \frac{67}{8}\,\zeta(4) + 53\,\zeta(2)\ln 2 - (0.042 \pm 0.002)\right],$$

where $\Gamma_0$ is the tree-level width. This eliminates the theoretical error in the extracted value of the Fermi coupling constant, $G_F$, which was previously the source of the dominant uncertainty. The new value is $G_F = (1.16637 \pm 0.00001) \times 10^{-5}\ \mathrm{GeV}^{-2}$, with the error being entirely experimental. Several experiments are planned for the next generation of muon lifetime measurements, and these can proceed unhindered by theoretical uncertainties.
arxiv:hep-ph/9812323
Simulations predict enhanced star formation and active galactic nuclei (AGN) activity during galaxy mergers, which can lead to the formation of binary/dual AGN. AGN feedback can enhance or suppress star formation. We have carried out a pilot study of a sample of 10 dual-nuclei galaxies with AstroSat's Ultraviolet Imaging Telescope (UVIT). Here, we present the initial results for two sample galaxies (Mrk 739, ESO 509) and deep multi-wavelength data of another galaxy (Mrk 212). UVIT observations have revealed signatures of positive AGN feedback in Mrk 739 and Mrk 212, and negative feedback in ESO 509. Deeper UVIT observations have recently been approved; these will provide better constraints on star formation as well as AGN feedback in these systems.
arxiv:2001.02502
We prove under GRH that zeros of $L$-functions of modular forms of level $N$ and weight $k$ become uniformly distributed on the critical line when $N + k \to \infty$.
arxiv:1412.2990
science-fantasy. Francis Bacon's New Atlantis (1627), Johannes Kepler's Somnium (1634), Athanasius Kircher's Itinerarium Extaticum (1656), Cyrano de Bergerac's Comical History of the States and Empires of the Moon (1657) and The States and Empires of the Sun (1662), Margaret Cavendish's "The Blazing World" (1666), Jonathan Swift's Gulliver's Travels (1726), Ludvig Holberg's Nicolai Klimii Iter Subterraneum (1741) and Voltaire's Micromegas (1752). Isaac Asimov and Carl Sagan considered Johannes Kepler's Somnium the first science fiction story; it depicts a journey to the Moon and how the Earth's motion is seen from there. Kepler has been called the "father of science fiction". Following the 17th-century development of the novel as a literary form, Mary Shelley's Frankenstein (1818) and The Last Man (1826) helped define the form of the science fiction novel. Brian Aldiss has argued that Frankenstein was the first work of science fiction. Edgar Allan Poe wrote several stories considered to be science fiction, including "The Unparalleled Adventure of One Hans Pfaall" (1835), which featured a trip to the Moon. Jules Verne was noted for his attention to detail and scientific accuracy, especially in Twenty Thousand Leagues Under the Seas (1870). In 1887, the novel El Anacronopete by Spanish author Enrique Gaspar y Rimbau introduced the first time machine. An early French/Belgian science fiction writer was J.-H. Rosny aîné (1856-1940). Rosny's masterpiece is Les Navigateurs de l'Infini (The Navigators of Infinity) (1925), in which the word astronaut, "astronautique", was used for the first time. Many critics consider H. G. Wells one of science fiction's most important authors, or even "the Shakespeare of science fiction". His works include The Time Machine (1895), The Island of Doctor Moreau (1896), The Invisible Man (1897), and The War of the Worlds (1898). His science fiction imagined alien invasion, biological engineering, invisibility, and time travel. In his non-fiction futurologist works he predicted the advent of airplanes, military tanks, nuclear weapons, satellite television
https://en.wikipedia.org/wiki/Science_fiction
Cross-media retrieval is a research hotspot in the multimedia area, which aims to perform retrieval across different media types such as image and text. The performance of existing methods usually relies on labeled data for model training. However, cross-media data is very labor-consuming to collect and label, so how to transfer valuable knowledge from existing data to new data is a key problem towards application. To achieve this goal, this paper proposes the deep cross-media knowledge transfer (DCKT) approach, which transfers knowledge from a large-scale cross-media dataset to promote model training on another small-scale cross-media dataset. The main contributions of DCKT are: (1) A two-level transfer architecture is proposed to jointly minimize the media-level and correlation-level domain discrepancies, which allows two important and complementary aspects of knowledge to be transferred: intra-media semantic knowledge and inter-media correlation knowledge. It can enrich the training information and boost the retrieval accuracy. (2) A progressive transfer mechanism is proposed to iteratively select training samples with ascending transfer difficulties, via the metric of cross-media domain consistency with adaptive feedback. It can drive the transfer process to gradually reduce the vast cross-media domain discrepancy, so as to enhance the robustness of model training. For verifying the effectiveness of DCKT, we take the large-scale dataset XMediaNet as the source domain, and 3 widely-used datasets as target domains for cross-media retrieval. Experimental results show that DCKT achieves promising improvement in retrieval accuracy.
arxiv:1803.03777
We show an extension of Sanov's theorem on large deviations, controlling the tail probabilities of i.i.d. random variables with matching concentration and anti-concentration bounds. This result has a general scope, applies to samples of any size, and has a short information-theoretic proof using elementary techniques.
arxiv:2008.13293
We report on the first results of a long-term project to derive distances of galaxies at cosmological distances by applying the CO-line width-luminosity relation. We have obtained deep CO-line observations of galaxies at redshifts up to 29,000 km/s using the Nobeyama 45-m mm-wave telescope, and some supplementary data were obtained using the IRAM 30-m telescope. We have detected the CO line emission for several galaxies, and used their CO line widths to estimate the absolute luminosities via the line-width-luminosity relation. In order to obtain photometric data and inclination corrections, we also performed high-resolution optical imaging observations of the CO-detected galaxies using the CFHT 3.6-m telescope. The radio and optical data have been combined to derive the distance moduli and distances of the galaxies, and Hubble ratios were estimated for these galaxies. We propose that the CO line width-luminosity relation can be a powerful method to derive distances of galaxies out to redshift z = 0.1 and to derive the Hubble ratio in a significant volume of the universe. Key words: cosmology - galaxies: general - distance scale - CO line
arxiv:astro-ph/9607024
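The distance chain sketched above is the standard luminosity-distance ladder; schematically (with $a$ and $b$ the calibrated slope and zero point of the line-width relation, which are placeholders here rather than values from this paper):

$$ M = a \log_{10} W_{\rm CO} + b, \qquad \mu = m - M, \qquad d = 10^{\,1 + \mu/5}\ {\rm pc}, \qquad H_0 \simeq cz/d\,. $$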
We generalize the shadow codes of Cherubini and Micheli to include basic polynomials having arbitrary degree, and show that restricting basic polynomials to have degree one or less can result in improved lower bounds on the minimum distance of the code. However, even these improved lower bounds suggest that shadow codes have considerably inferior distance-rate characteristics compared with the concatenation of a Reed-Solomon outer code and a first-order Reed-Muller inner code.
arxiv:2408.09287
In this work, we study wave propagation in a recently proposed acoustic structure, the locally resonant granular crystal. This structure is composed of a one-dimensional granular crystal of hollow spherical particles in contact, containing linear resonators. The relevant model is presented and examined through a combination of analytical approximations (based on ODE and nonlinear map analysis) and of numerical results. The generic dynamics of the system involves a degradation of the well-known traveling pulse of the standard Hertzian chain of elastic beads. Nevertheless, the present system is richer, in that as the primary pulse decays, secondary ones emerge and eventually interfere with it, creating modulated wavetrains. Remarkably, upon suitable choices of parameters, this interference "distills" a weakly nonlocal solitary wave (a "nanopteron"). This motivates the consideration of such nonlinear structures through a separate Fourier space technique, whose results suggest the existence of such entities not only with a single-sided tail, but also with periodic tails on both ends. These tails are found to oscillate with the intrinsic oscillation frequency of the out-of-phase motion between the outer hollow bead and its internal linear attachment.
arxiv:1709.08629
Study of stability of nuclei, flow and multifragmentation in heavy-ion collisions.
arxiv:1111.1480
Allan variance (AVAR) was first introduced more than 40 years ago as an estimator of the stability of frequency standards, and it is now actively used for investigations of time series in astronomy, geodesy and geodynamics. This method allows one to effectively explore the noise characteristics of various data, such as variations of station and source coordinates, etc. Moreover, this technique can be used to investigate the spectral and fractal structure of the noise in measured data. To process unevenly weighted and multidimensional data, which are usual for many astronomy and geodesy applications, AVAR modifications are proposed by the author. In this paper, a brief overview is given of the use of classical and modified AVAR methods in astronomy and geodynamics.
arxiv:1105.3837
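For reference, the classical estimator discussed above has a very compact form for evenly spaced, equally weighted fractional-frequency data; the paper's modifications for uneven weights and multidimensional data are not reproduced in this minimal non-overlapping sketch.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency samples y
    at averaging factor m (i.e., tau = m * tau0)."""
    n_blocks = y.size // m
    block_means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(block_means) ** 2)

rng = np.random.default_rng(1)
white = rng.normal(size=100_000)
for m in (1, 10, 100):
    print(m, allan_variance(white, m))   # scales ~ 1/m for white frequency noise
```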
This work completes the classification of the cubic vertices for arbitrary-spin massless bosons in three dimensions, started in a previous companion paper, by constructing parity-odd vertices. Similarly to the parity-even case, there is a unique parity-odd vertex for any given triple $s_1 \geq s_2 \geq s_3 \geq 2$ of massless bosons if the triangle inequalities are satisfied ($s_1 < s_2 + s_3$) and none otherwise. These vertices involve two (three) derivatives for odd (even) values of the sum $s_1 + s_2 + s_3$. A non-trivial relation between parity-even and parity-odd vertices is found. Similarly to the parity-even case, scalar and Maxwell matter can couple to higher spins through current couplings with higher derivatives. We comment on possible lessons for 2d CFT. We also derive both parity-even and parity-odd vertices with Chern-Simons fields and comment on the analogous classification in two dimensions.
arxiv:1803.02737
We introduce here the idea of meta-learning for training EEG BCI decoders. Meta-learning is a way of training machine learning systems so they learn to learn. We apply meta-learning to a simple deep learning BCI architecture and compare it to transfer learning on the same architecture. Our meta-learning strategy operates by finding optimal parameters for the BCI decoder so that it can quickly generalise between different users and recording sessions, thereby also generalising to new users or new sessions quickly. We tested our algorithm on the PhysioNet EEG motor imagery dataset. Our approach raised motor imagery classification accuracy from 60% to 80%, outperforming other algorithms under the little-data condition. We believe that establishing the meta-learning or learning-to-learn approach will help neural engineering and human interfacing meet the challenge of quickly setting up decoders of neural signals, making them more suitable for daily life.
arxiv:2103.08664
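The abstract does not specify the exact meta-learning algorithm, so below is a minimal first-order sketch in the Reptile style (nudge the shared initialization toward weights adapted on each subject), which captures the learn-to-initialize idea but is not the authors' architecture, algorithm, or data pipeline; the toy decoder shape and synthetic "subject" data are assumptions.

```python
import copy
import torch
from torch import nn

def make_decoder():
    # Toy stand-in for an EEG decoder (64 channels x 128 samples, 4 classes).
    return nn.Sequential(nn.Flatten(), nn.Linear(64 * 128, 64), nn.ReLU(),
                         nn.Linear(64, 4))

def adapt(model, x, y, steps=5, lr=1e-2):
    """Fast adaptation of a copy of the meta-model on one subject's data."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
    return model

meta_model = make_decoder()
meta_lr = 0.1
for it in range(100):                      # meta-iterations over random "subjects"
    x = torch.randn(32, 64, 128)           # one subject's session (synthetic here)
    y = torch.randint(0, 4, (32,))
    fast = adapt(meta_model, x, y)
    with torch.no_grad():                  # Reptile step: theta += eps*(theta_fast - theta)
        for p, q in zip(meta_model.parameters(), fast.parameters()):
            p += meta_lr * (q - p)
```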
Politicisation of science is usually accomplished when scientific information is presented in a way that emphasises the uncertainty associated with the scientific evidence. Tactics such as shifting conversation, failing to acknowledge facts, and capitalising on doubt of scientific consensus have been used to gain more attention for views that have been undermined by scientific evidence. Examples of issues that have involved the politicisation of science include the global warming controversy, health effects of pesticides, and health effects of tobacco.

== See also ==

List of scientific occupations
List of years in science
Logology (science)
Science (Wikiversity)
Scientific integrity
https://en.wikipedia.org/wiki/Science
We study the off-equilibrium critical dynamics of the three-dimensional diluted Ising model. We compute the dynamical critical exponent $z$ and show that it is independent of the dilution only when we take into account the scaling corrections to the dynamics. Finally, we compare our results with the experimental data.
arxiv:cond-mat/9903095
Fully homomorphic encryption (FHE) allows an untrusted party to evaluate arithmetic circuits, i.e., perform additions and multiplications on encrypted data, without having the decryption key. One of the most efficient classes of FHE schemes includes the BGV and FV schemes, which are based on the hardness of the RLWE problem. They share some common features: ciphertext sizes grow after each homomorphic multiplication; multiplication is much more costly than addition; and the cost of homomorphic multiplication scales linearly with the input ciphertext sizes. Furthermore, there is a special relinearization operation that reduces the size of a ciphertext, and the cost of relinearization is on the same order of magnitude as homomorphic multiplication. This motivates us to define a discrete optimization problem: to decide where (and how much) in a given circuit to relinearize, in order to minimize the total computational cost. In this paper, we formally define the relinearize problem. We prove that the problem is NP-hard. In addition, in the special case where each vertex has at most one outgoing edge, we give a polynomial-time algorithm.
arxiv:1711.06319
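To make the optimization problem above concrete, here is a brute-force illustration: choose the subset of multiplication gates after which to relinearize so that total cost is minimized, under a toy cost model (multiplication cost linear in input ciphertext sizes, relinearization cost proportional to the size reduced). The tiny circuit, the size-growth rule, and all cost constants are made up for illustration and are not the paper's model or algorithm.

```python
from itertools import chain, combinations

# Toy circuit: leaves 'a'..'d' are fresh ciphertexts (size 2); each gate lists
# its operation and its two operand names.
circuit = {
    "m1": ("mul", "a", "b"),
    "m2": ("mul", "c", "d"),
    "m3": ("mul", "m1", "m2"),
    "m4": ("mul", "m3", "m3"),
    "out": ("add", "m4", "a"),
}
order = ["m1", "m2", "m3", "m4", "out"]     # topological order
C_MUL, C_RELIN = 1.0, 1.0                   # illustrative cost constants

def total_cost(relin_at):
    size = {v: 2 for v in "abcd"}           # fresh ciphertext size
    cost = 0.0
    for g in order:
        op, x, y = circuit[g]
        if op == "mul":
            cost += C_MUL * (size[x] + size[y])   # cost grows with input sizes
            size[g] = size[x] + size[y] - 1
        else:
            size[g] = max(size[x], size[y])
        if g in relin_at:
            cost += C_RELIN * (size[g] - 2)       # relinearize back to size 2
            size[g] = 2
    return cost

muls = [g for g in order if circuit[g][0] == "mul"]
subsets = chain.from_iterable(combinations(muls, k) for k in range(len(muls) + 1))
best = min(subsets, key=lambda s: total_cost(set(s)))
print(best, total_cost(set(best)))          # here: relinearize after m1, m2, m3
```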
In recent years, the combination of precise quantum Monte Carlo (QMC) methods with realistic nuclear interactions and consistent electroweak currents, in particular those constructed within effective field theories (EFTs), has led to new insights in light and medium-mass nuclei, neutron matter, and electroweak reactions. This compelling new body of work has been made possible both by advances in QMC methods for nuclear physics, which push the bounds of applicability to heavier nuclei and to asymmetric nuclear matter, and by the development of local chiral EFT interactions up to next-to-next-to-leading order and minimally nonlocal interactions including $\Delta$ degrees of freedom. In this review, we discuss these recent developments and give an overview of the exciting results for nuclei, neutron matter and neutron stars, and electroweak reactions.
arxiv:1901.04868
Traffic cameras remain the primary data source for surveillance activities such as congestion and incident monitoring. To date, state agencies continue to rely on manual effort to extract data from networked cameras due to limitations of current automatic vision systems, including requirements for complex camera calibration and the inability to generate high-resolution data. This study implements a three-stage video analytics framework for extracting high-resolution traffic data, such as vehicle counts, speed, and acceleration, from infrastructure-mounted CCTV cameras. The key components of the framework include object recognition, perspective transformation, and vehicle trajectory reconstruction for traffic data collection. First, a state-of-the-art vehicle recognition model is implemented to detect and classify vehicles. Next, to correct for camera distortion and reduce partial occlusion, an algorithm inspired by two-point linear perspective is utilized to extract the region of interest (ROI) automatically, while a 2D homography technique transforms the CCTV view to a bird's-eye view (BEV). Cameras are calibrated with a two-layer matrix system to enable the extraction of speed and acceleration by converting image coordinates to real-world measurements. Individual vehicle trajectories are constructed and compared in the BEV using two time-space-feature-based object trackers, namely Motpy and ByteTrack. The results of the current study showed about +/- 4.5% error rate for directional traffic counts and less than 10% MSE for speed bias between camera estimates and estimates from probe data sources. Extracting high-resolution data from traffic cameras has several implications, ranging from improvements in traffic management to identifying dangerous driving behavior, high-risk accident areas, and other safety concerns, enabling proactive measures to reduce accidents and fatalities.
arxiv:2401.07220
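The homography/BEV stage described above can be illustrated with standard OpenCV calls; the four point correspondences and the frame below are made-up placeholders for the ROI corners that the paper's algorithm extracts automatically.

```python
import cv2
import numpy as np

# Four ROI corners in the CCTV image (pixels) and their bird's-eye-view (BEV)
# targets; all coordinates here are placeholders for illustration.
src = np.float32([[420, 310], [860, 315], [1180, 700], [120, 690]])
dst = np.float32([[0, 0], [400, 0], [400, 800], [0, 800]])

H = cv2.getPerspectiveTransform(src, dst)        # 3x3 homography matrix

frame = np.zeros((720, 1280, 3), np.uint8)       # stand-in for a CCTV frame
bev = cv2.warpPerspective(frame, H, (400, 800))  # rectified top-down view

# Map one detected vehicle centroid into BEV coordinates; differencing such
# points over time yields speed, and differencing speeds yields acceleration.
pt = cv2.perspectiveTransform(np.float32([[[640, 500]]]), H)
print(pt.ravel())
```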
We introduce randomized algorithms to Clifford's geometric algebra, generalizing randomized linear algebra to hypercomplex vector spaces. This novel approach has many implications in machine learning, including training neural networks to global optimality via convex optimization. Additionally, we consider fine-tuning large language model (LLM) embeddings as a key application area, exploring the intersection of geometric algebra and modern AI techniques. In particular, we conduct a comparative analysis of the robustness of transfer learning via embeddings, such as OpenAI GPT models and BERT, using traditional methods versus our novel approach based on convex optimization. We test our convex optimization transfer learning method across a variety of case studies, employing different embeddings (GPT-4 and BERT embeddings) and different text classification datasets (IMDB, the Amazon Polarity dataset, and GLUE) with a range of hyperparameter settings. Our results demonstrate that convex optimization and geometric algebra not only enhance the performance of LLMs but also offer a more stable and reliable method of transfer learning via embeddings.
arxiv:2406.02806
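The convex transfer-learning baseline being compared can be as simple as a convex classifier head over frozen embeddings. A minimal sketch follows (logistic regression on precomputed embedding vectors; the random features stand in for real GPT/BERT embeddings, and the paper's geometric-algebra randomization is deliberately not reproduced):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-ins for precomputed LLM embeddings of labelled texts
# (e.g., GPT or BERT sentence vectors); random data for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 768))
y = (X[:, :10].sum(axis=1) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Logistic regression is a convex problem, so this head is fit to global
# optimality -- the property the comparison in the abstract relies on.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```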
A preference profile with $m$ alternatives and $n$ voters is $d$-Manhattan (resp. $d$-Euclidean) if both the alternatives and the voters can be placed into the $d$-dimensional space such that between each pair of alternatives, every voter prefers the one which has a shorter Manhattan (resp. Euclidean) distance to the voter. Following Bogomolnaia and Laslier [Journal of Mathematical Economics, 2007] and Chen and Grottke [Social Choice and Welfare, 2021], who look at $d$-Euclidean preference profiles, we study which preference profiles are $d$-Manhattan depending on the values $m$ and $n$. First, we show that each preference profile with $m$ alternatives and $n$ voters is $d$-Manhattan whenever $d \geq \min(n, m-1)$. Second, for $d = 2$, we show that the smallest non-$d$-Manhattan preference profile has either three voters and six alternatives, or four voters and five alternatives, or five voters and four alternatives. This is more complex than the case with $d$-Euclidean preferences (see [Bogomolnaia and Laslier, 2007] and [Bulteau and Chen, 2020]).
arxiv:2201.09691
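The defining condition above is easy to check operationally: an embedding of voters and alternatives realizes a profile as $d$-Manhattan exactly when each voter's distance-induced ranking matches their stated ranking. A small numpy sketch (all coordinates and the example profile are illustrative):

```python
import numpy as np

def induced_ranking(voter, alternatives):
    """Rank alternatives by increasing Manhattan (L1) distance to the voter."""
    dists = np.abs(alternatives - voter).sum(axis=1)
    return np.argsort(dists, kind="stable")

def is_manhattan_embedding(voters, alternatives, profile):
    """profile[i] is voter i's preference order as a list of alternative ids."""
    return all(
        list(induced_ranking(v, alternatives)) == list(p)
        for v, p in zip(voters, profile)
    )

voters = np.array([[0.0, 0.0], [3.0, 1.0]])
alts = np.array([[1.0, 0.0], [2.0, 2.0], [4.0, 1.0]])   # alternatives 0, 1, 2
profile = [[0, 1, 2], [2, 1, 0]]
print(is_manhattan_embedding(voters, alts, profile))    # True for this placement
```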
We present a method to recover and study the projected gravitational tidal forces from a galaxy survey containing little or no redshift information. The method and the physical interpretation of the recovered tidal maps as a tracer of the cosmic web are described in detail. We first apply the method to a simulated galaxy survey and study the accuracy with which the cosmic web can be recovered in the presence of different observational effects, showing that the projected tidal field can be estimated with reasonable precision over large regions of the sky. We then apply our method to the 2MASS survey and present a publicly available full-sky map of the projected tidal forces in the local universe. As an example of an application of these data, we further study the distribution of galaxy luminosities across the different elements of the cosmic web, finding that, while more luminous objects are found preferentially in the most dense environments, there is no further segregation by tidal environment.
arxiv:1512.03402
A statistical analysis of the impact of the diminishing number of operational shutters experienced by the JWST/NIRSpec Micro-Shutter Array since commissioning is presented. It is shown that the number of high-priority science targets that NIRSpec is able to observe simultaneously has so far decreased by 3.1%. Of greater concern, however, is NIRSpec's diminished ability to carry out autonomous MSATA target acquisition, which is more sensitive to the loss of shutters than is the multiplexing. In the flagship case of MSA observations of deep fields, the number of pointings at which it is not possible to reach the required minimum number of 5 valid reference stars has increased from 4.9% to 6.3% and is beginning to become noticeable. Similarly, the number of higher-risk target acquisitions that need to be carried out with fewer than the maximum allowed number of 8 reference stars has grown from 27% to 31%.
arxiv:2405.04530
The quantum vortices formed as a result of barrier-suppression ionization of a two-dimensional hydrogen atom by an ultrashort laser pulse are theoretically investigated. Using an analytical expression for the wave function of a photoelectron in the momentum representation, the probability flux density is investigated. In this case, both the standard definition of the flux and an alternative one are used. The latter, due to its sensitivity to the phase of the wave function, makes it possible to identify quantum vortices in momentum space.
arxiv:2310.05937
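For reference, the "standard definition" of the probability flux mentioned above is the usual quantum-mechanical probability current; the paper's alternative, phase-sensitive definition is not reproduced here:

$$ \mathbf{j}(\mathbf{r},t) = \frac{\hbar}{m}\,{\rm Im}\!\left[\psi^*(\mathbf{r},t)\,\nabla\psi(\mathbf{r},t)\right]. $$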
Multi-frame algorithms for single-microphone speech enhancement, e.g., the multi-frame minimum variance distortionless response (MFMVDR) filter, are able to exploit speech correlation across adjacent time frames in the short-time Fourier transform (STFT) domain. Provided that accurate estimates of the required speech interframe correlation vector and the noise correlation matrix are available, it has been shown that the MFMVDR filter yields substantial noise reduction while hardly introducing any speech distortion. Aiming at merging the speech enhancement potential of the MFMVDR filter and the estimation capability of temporal convolutional networks (TCNs), in this paper we propose to embed the MFMVDR filter within a deep learning framework. The TCNs are trained to map the noisy speech STFT coefficients to the required quantities by minimizing the scale-invariant signal-to-distortion ratio loss function at the MFMVDR filter output. Experimental results show that the proposed deep MFMVDR filter achieves a competitive speech enhancement performance on the Deep Noise Suppression Challenge dataset. In particular, the results show that estimating the parameters of an MFMVDR filter yields a higher performance in terms of PESQ and STOI than directly estimating the multi-frame filter or single-frame masks, and than Conv-TasNet.
arxiv:2011.10345
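For context, the multi-frame MVDR filter whose parameters the TCNs estimate has the classical closed form below, with $\mathbf{R}_n$ the noise correlation matrix and $\boldsymbol{\gamma}$ the speech interframe correlation vector named in the abstract (a standard result from the MFMVDR literature, not a formula quoted from this paper):

$$ \mathbf{w}_{\rm MFMVDR} = \frac{\mathbf{R}_n^{-1}\,\boldsymbol{\gamma}}{\boldsymbol{\gamma}^{H}\,\mathbf{R}_n^{-1}\,\boldsymbol{\gamma}}\,, \qquad \hat{S}(k,\ell) = \mathbf{w}^{H}\,\mathbf{y}(k,\ell)\,. $$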
We study both analytically and numerically the spectrum of inhomogeneous strings with $\mathcal{PT}$-symmetric density. We discuss an exactly solvable model of a $\mathcal{PT}$-symmetric string which is isospectral to the uniform string; for more general strings, we calculate exactly the sum rules $Z(p) \equiv \sum_{n=1}^\infty 1/E_n^p$, with $p = 1, 2, \dots$, and find explicit expressions which can be used to obtain bounds on the lowest eigenvalue. A detailed numerical calculation is carried out for two non-solvable models depending on a parameter, obtaining precise estimates of the critical values where pairs of real eigenvalues become complex.
arxiv:1306.1419
Mini-EUSO is the first mission of the JEM-EUSO program on board the International Space Station. It was launched in August 2019 and has been operating since October 2019, located in the Russian section (Zvezda module) of the station and viewing our planet from a nadir-facing UV-transparent window. The instrument is based on the concept of the original JEM-EUSO mission and consists of an optical system employing two Fresnel lenses of 25 cm each and a focal surface composed of 36 multi-anode photomultiplier tubes, 64 channels each, for a total of 2304 channels with single-photon-counting sensitivity and an overall field of view of $44^\circ \times 44^\circ$. Mini-EUSO can map the night-time Earth in the near-UV range (predominantly between 290 nm and 430 nm), with a spatial resolution of about 6.3 km and different temporal resolutions of 2.5 $\mu$s, 320 $\mu$s and 41 ms. Mini-EUSO observations are extremely important to better assess the potential of space-based detectors for studying ultra-high-energy cosmic rays (UHECRs), such as K-EUSO and POEMMA. In this contribution we focus on the results of the UV measurements and place them in the context of UHECR observations from space, namely the estimation of exposure.
arxiv:2308.13723
We present a drop model for integer and fractional quantum Hall effects (FQHE). We show that the two-dimensional electron gas breaks up into regions with filling factors $\nu = 1$ and $\nu = 0$ in disk geometry, and that the formation of drops with a finite number of electrons is possible. Sequences of filling fractions are constructed on the basis of experimental data. For all sequences there are initial FQHE states, which correspond to a drop with five electrons. The remaining FQHE states are composite states of a drop with five electrons and one or more pairs of electrons.
arxiv:2209.05601
We introduce a one-parameter family of random infinite quadrangulations of the half-plane, which we call the uniform infinite half-planar quadrangulations with skewness (UIHPQ$_p$ for short, with $p \in [0, 1/2]$ measuring the skewness). They interpolate between Kesten's tree, corresponding to $p = 0$, and the usual UIHPQ with a general boundary, corresponding to $p = 1/2$. As we make precise, these models arise as local limits of uniform quadrangulations with a boundary when their volume and perimeter grow in a properly fine-tuned way, and they represent all local limits of (sub)critical Boltzmann quadrangulations whose perimeter tends to infinity. Our main result shows that the family (UIHPQ$_p$)$_p$ approximates the Brownian half-planes BHP$_\theta$, $\theta \geq 0$, recently introduced in Baur, Miermont, and Ray (2016). For $p < 1/2$, we give a description of the UIHPQ$_p$ in terms of a looptree associated to a critical two-type Galton-Watson tree conditioned to survive.
arxiv:1612.08572
a method of delivering a monochromatic electron beam to the lhc interaction points is proposed. in this method, heavy ions are used as carriers of the projectile electrons. acceleration, storage and collision-stability aspects of such a hybrid beam are discussed, and a new beam-cooling method is presented. this discussion is followed by a proposal for the parasitic ion-electron collider at the lhc (pie@lhc). the pie@lhc provides an opportunity for the present lhc detectors to enlarge the scope of their research program by including electron-proton and electron-nucleus collisions with minor machine and detector investments.
arxiv:hep-ex/0405028
we search for novel two-dimensional materials that can be easily exfoliated from their parent compounds. starting from 108423 unique, experimentally known three-dimensional compounds, we identify a subset of 5619 that appear layered according to robust geometric and bonding criteria. high-throughput calculations using van-der-waals density-functional theory, validated against experimental structural data and calculated random-phase-approximation binding energies, allow us to identify 1825 compounds that are either easily or potentially exfoliable, including all that are commonly exfoliated experimentally. in particular, the subset of 1036 easily exfoliable cases (layered materials held together mostly by dispersion interactions and with binding energies up to $30$-$35$ mev$\cdot$å$^{-2}$) provides a wealth of novel structural prototypes and simple ternary compounds, and a large portfolio in which to search for materials with optimal properties. for the 258 compounds with up to 6 atoms per primitive cell, we comprehensively explore vibrational, electronic, magnetic, and topological properties, identifying in particular 56 ferromagnetic and antiferromagnetic systems, including half-metals and half-semiconductors.
arxiv:1611.05234
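a minimal sketch of the final screening step described above, assuming hypothetical per-compound binding energies (the formulas and numbers below are illustrative placeholders, not entries from the actual database); the 35 mev·å$^{-2}$ cut mirrors the quoted window for easy exfoliation, while the second cut is an invented stand-in for a "potentially exfoliable" boundary.

```python
# hypothetical (formula, binding energy in meV/A^2) records
compounds = [("mos2", 22.0), ("graphite", 18.5), ("fecl3", 48.0)]

def classify(e_b, easy_cut=35.0, potential_cut=130.0):
    """bucket a layered compound by interlayer binding energy;
    potential_cut is an invented placeholder threshold."""
    if e_b <= easy_cut:
        return "easily exfoliable"
    if e_b <= potential_cut:
        return "potentially exfoliable"
    return "not exfoliable"

for name, e_b in compounds:
    print(name, classify(e_b))
```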
this paper presents some recent trends in the research on grid-interactive inverters. particularly, this paper focuses on the stability, ancillary services, operation, and security of single and multiple inverters in the modern power grid. a grid-interactive inverter performs as a controllable interface between distributed energy sources and the power grid. high penetration of inverters in a power distribution system can create technical challenges for power quality, as well as for voltage and frequency control of the system. in particular, a weak grid can lead to voltage oscillation and consequently instability. moreover, the power grid is moving toward becoming a cyber-physical system in which smart inverters can exchange information for power marketing and economic dispatching. this puts the inverters at risk of insecure operation. hence, security enhancement has become another primary concern. finally, grid-interactive inverters must share power proportionally when operating together with many other inverters. recent research on coordinated operation is also discussed in this paper.
arxiv:2112.06787
in this paper, we address the long time behaviour of solutions of the stochastic schrodinger equation in $\mathbb{R}^d$. we prove the existence of an invariant measure and establish asymptotic compactness of solutions, implying in particular the existence of an ergodic measure.
arxiv:1605.02014
the axial strange form factor $f^s_a$ of the nucleon is assumed to be dominated at low momentum transfer by the isoscalar axial vector mesons $f_1(1285)$ and $f_1(1420)$. the importance of the $a_0 \pi n$-triangular vertex correction is demonstrated.
arxiv:hep-ph/9409293
porous magnetic silica beads are promising materials for biological and environmental applications due to their enhanced adsorption and ease of recovery. this work aims to develop a new, inexpensive and environmentally friendly approach based on the agglomeration of nanoparticles in aqueous droplets. the use of an emulsion as a geometrical constraint is expected to result in the formation of spherical beads with tunable composition depending on the aqueous phase content. magnetic silica beads are produced at room temperature by colloidal destabilization, induced by the addition of calcium chloride to a water-in-oil emulsion containing silica and iron oxide nanoparticles. the impact of the salt concentration, emulsification method, concentration of hydrophobic surfactant and silica content is presented in this paper. this method enables the production of spherical beads with diameters between 1 and 9 micrometers. the incorporation of magnetic nanoparticles inside the bead structure is confirmed using energy-dispersive x-ray spectrometry and scanning transmission electron microscopy, and results in the production of magnetically responsive beads with a preparation yield of up to 84 percent. by incorporating the surfactant span 80 in the oil phase, it is possible to tune the roughness and porosity of the beads.
arxiv:2005.08192
we prove menger-type results in which the obtained paths are pairwise non-adjacent, both for graphs of bounded maximum degree and, more generally, for graphs excluding a topological minor. we further show better bounds in the subcubic case, and in particular obtain a tight result for two paths using a computer-assisted proof.
arxiv:2309.07905
reconfigurable intelligent surfaces (ris) are considered a promising solution for next-generation wireless communication networks due to a variety of merits, e.g., customizing the communication environment. deploying multiple riss therefore helps overcome severe signal blocking between the base station (bs) and users, and is a practical and effective solution to achieve better service coverage. however, reaping the full benefits of a multi-ris aided communication system requires solving a non-convex, infinite-dimensional optimization problem, which motivates the use of learning-based methods to configure the optimal policy. this paper adopts a novel heterogeneous graph neural network (gnn) to effectively exploit the graph topology in the wireless communication optimization problem. first, we characterize all communication link features and interference relations in our system with a heterogeneous graph structure. then, we endeavor to maximize the weighted sum rate (wsr) of all users by jointly optimizing the active beamforming at the bs, the passive beamforming vector of the ris elements, and the ris association strategy. unlike most existing work, we consider a more general scenario where the cascaded link for each user is not fixed but dynamically selected by maximizing the wsr. simulation results show that our proposed heterogeneous gnn performs about 10 times better than other benchmarks, and a suitable ris association strategy is also validated to be effective in improving the quality of service of users by 30%.
arxiv:2302.04183
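to make the objective concrete, the sketch below evaluates the weighted sum rate $\sum_k w_k \log_2(1 + \mathrm{sinr}_k)$ for a toy multi-user downlink with random placeholder channels; a faithful ris model would compose cascaded bs-ris-user links, and the beamformers would be produced by the gnn rather than drawn at random.

```python
import numpy as np

# toy weighted-sum-rate evaluation; dimensions and channels are placeholders
rng = np.random.default_rng(0)
K, M = 4, 8                       # users, bs antennas (assumption)
H = rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))  # effective channels
W = rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))  # beamformers
weights = np.ones(K)
noise = 1.0

def wsr(H, W, weights, noise):
    G = np.abs(H @ W) ** 2        # G[k, j] = |h_k^T w_j|^2
    signal = np.diag(G)
    interference = G.sum(axis=1) - signal
    sinr = signal / (interference + noise)
    return float(weights @ np.log2(1.0 + sinr))

print(wsr(H, W, weights, noise))
```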
we study a networked control architecture for linear time-invariant plants in which an unreliable data-rate-limited network is placed between the controller and the plant input. to achieve robustness with respect to dropouts, the controller transmits data packets containing plant input predictions, which minimize a finite-horizon cost function. in our formulation, we design sparse packets for rate-limited networks by adopting an $\ell_0$ optimization, which can be effectively solved by an orthogonal matching pursuit method. our formulation ensures asymptotic stability of the control loop in the presence of bounded packet dropouts. simulation results indicate that the proposed controller provides sparse control packets, thereby giving bit-rate reductions for the case of memoryless scalar coding schemes when compared to the use of more common quadratic cost functions, as in linear quadratic (lq) control.
arxiv:1308.0002
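a minimal orthogonal matching pursuit sketch of the kind of greedy solver mentioned above, recovering a sparse vector from a toy linear model; the dimensions and the random matrix are placeholders, not the networked-control setup of the paper.

```python
import numpy as np

def omp(A, y, sparsity):
    """greedy omp: pick the column most correlated with the residual,
    re-fit by least squares on the chosen support, repeat."""
    m, n = A.shape
    support, residual = [], y.copy()
    x = np.zeros(n)
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# toy check: recover a 3-sparse input from random measurements
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 60))
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.5, -2.0, 0.7]
x_hat = omp(A, A @ x_true, sparsity=3)
print(np.nonzero(x_hat)[0])   # -> [ 5 17 42 ] with high probability
```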
state-space models (ssms) are central to describing time-varying complex systems in countless signal processing applications such as remote sensing, networks, biomedicine, and finance, to name a few. inference and prediction in ssms are possible when the model parameters are known, which is rarely the case. the estimation of these parameters is crucial, not only for performing statistical analysis, but also for uncovering the underlying structure of complex phenomena. in this paper, we focus on the linear-gaussian model, arguably the most celebrated ssm, and particularly on the challenging task of estimating the transition matrix that encodes the markovian dependencies in the evolution of the multivariate state. we introduce a novel perspective by relating this matrix to the adjacency matrix of a directed graph, also interpreted as the causal relationship among state dimensions in the granger-causality sense. under this perspective, we propose a new method called graphem, based on the well-founded expectation-maximization (em) methodology, for inferring the transition matrix jointly with the smoothing/filtering of the observed data. we propose an advanced convex optimization solver relying on a consensus-based implementation of a proximal splitting strategy for solving the m-step. this approach enables efficient and versatile processing of various sophisticated priors on the graph structure, such as parsimony constraints, while benefiting from convergence guarantees. we demonstrate the good performance and the interpretable results of graphem by means of two sets of numerical examples.
arxiv:2209.09969
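the following degenerate illustration shows only the regression at the heart of the m-step: with (smoothed) states in hand, the transition matrix solves a least-squares problem whose sparse solution encodes the graph. here the true states stand in for the kalman-smoothed ones, and a crude hard threshold replaces the paper's proximal-splitting solver.

```python
import numpy as np

# simulate a linear-gaussian state-space model with a sparse transition matrix
rng = np.random.default_rng(2)
d, T = 4, 500
A_true = np.diag([0.9, 0.8, 0.7, 0.6])
A_true[0, 2] = 0.3                      # one off-diagonal "edge" in the graph
x = np.zeros((T, d))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.normal(size=d)

# m-step surrogate: least-squares fit of x_t ~ A x_{t-1}, then hard threshold
X0, X1 = x[:-1], x[1:]
A_ls = np.linalg.solve(X0.T @ X0, X0.T @ X1).T
A_sparse = A_ls * (np.abs(A_ls) > 0.1)  # crude stand-in for a sparsity prior
print(np.round(A_sparse, 2))            # recovers the support of A_true
```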
around-device interaction techniques aim at extending the input space using various sensing modalities on mobile and wearable devices. in this paper, we present our work towards extending the input area of mobile devices using front-facing device-centered cameras that capture reflections in the human eye. as current-generation mobile devices lack high-resolution front-facing cameras, we study the feasibility of around-device interaction using corneal reflective imaging based on a high-resolution camera. we present a workflow, a technical prototype and an evaluation, including a migration path from high-resolution to low-resolution imagers. our study indicates that, under optimal conditions, a spatial sensing resolution of 5 cm in the vicinity of a mobile phone is possible.
arxiv:1709.00966
predicting reactants from a specified core product is a fundamental challenge in organic synthesis, termed retrosynthesis prediction. recently, semi-template-based methods and graph-edits-based methods have achieved good performance in terms of both interpretability and accuracy. however, due to their mechanisms, these methods cannot predict complex reactions, e.g., reactions with multiple reaction centers or those attaching the same leaving group to more than one atom. in this study we propose a semi-template-based method, the \textbf{retro}synthesis via \textbf{s}earch \textbf{i}n (hyper)\textbf{g}raph (retrosig) framework, to alleviate these limitations. in the proposed method, we turn the reaction center identification and leaving group completion tasks into tasks of searching in the product molecular graph and the leaving group hypergraph, respectively. as a semi-template-based method, retrosig has several advantages. first, retrosig is able to handle the complex reactions mentioned above through its novel search mechanism. second, retrosig naturally exploits the hypergraph to model the implicit dependencies between leaving groups. third, retrosig makes full use of the prior, i.e., the one-hop constraint; this reduces the search space and enhances overall performance. comprehensive experiments demonstrated that retrosig achieved competitive results. furthermore, we conducted experiments to show the capability of retrosig in predicting complex reactions. ablation experiments verified the efficacy of specific elements, such as the one-hop constraint and the leaving group hypergraph.
arxiv:2402.06772
we address the problem of safely solving complex bimanual robot manipulation tasks with sparse rewards. such challenging tasks can be decomposed into sub-tasks that are accomplishable by different robots concurrently or sequentially for better efficiency. while previous reinforcement learning approaches primarily focus on modeling the compositionality of sub-tasks, two fundamental issues are largely ignored, particularly when learning cooperative strategies for two robots: (i) domination, i.e., one robot may try to solve a task by itself and leave the other idle; (ii) conflict, i.e., one robot can intrude on another's workspace when executing different sub-tasks simultaneously, which leads to unsafe collisions. to tackle these two issues, we propose a novel technique called disentangled attention, which provides an intrinsic regularization for two robots to focus on separate sub-tasks and objects. we evaluate our method on five bimanual manipulation tasks. experimental results show that our proposed intrinsic regularization successfully avoids domination and reduces conflicts for the policies, which leads to significantly more efficient and safer cooperative strategies than all the baselines. our project page with videos is at https://mehooz.github.io/bimanual-attention.
arxiv:2106.05907
the dc conductance through a finite hubbard chain of size $N$ coupled to two noninteracting leads is studied at $T = 0$ in an electron-hole symmetric case. assuming that the perturbation expansion in $U$ is valid for small $N$ ($= 1, 2, 3, \dots$) owing to the presence of the noninteracting leads, we obtain the self-energy at $\omega = 0$ analytically in real space within second order in $U$. then, we calculate the inter-site green's function which connects the two boundaries of the chain, $G_{N1}$, by solving the dyson equation. the conductance can be obtained through $G_{N1}$, and the result shows an oscillatory behavior as a function of $N$. for odd $N$, perfect transmission occurs independent of $U$. this is due to the inversion and electron-hole symmetries, and is attributed to a kondo resonance appearing at the fermi level. on the other hand, for even $N$, the conductance is a decreasing function of $N$ and $U$.
arxiv:cond-mat/9903126
in multi-terminal networks, feedback increases the capacity region and helps communication devices coordinate. in this article, we deepen the relationship between coordination and feedback by considering a point-to-point scenario with an information source and a noisy channel. empirical coordination is achievable if the encoder and the decoder can implement sequences of symbols that are jointly typical for a target probability distribution. we investigate the impact of feedback when the encoder has strictly causal or causal observation of the source symbols. for both cases, we characterize the optimal information constraints and show that feedback improves coordination possibilities. surprisingly, feedback also reduces the number of auxiliary random variables and simplifies the information constraints. for empirical coordination with strictly causal encoding and feedback, the information constraint no longer involves an auxiliary random variable.
arxiv:1506.04814
the microlensing parallax campaign with the $spitzer$ space telescope aims to measure masses and distances of microlensing events seen towards the galactic bulge, with a focus on planetary microlensing events. the hope is to measure how the distribution of planets depends on position within the galaxy. in this paper, we compare 50 microlens parallax measurements from the 2015 $spitzer$ campaign to three different galactic models commonly used in microlensing analyses, and we find that $\geq 74\%$ of these events have microlensing parallax values higher than the medians predicted by the galactic models. the anderson-darling tests indicate probabilities of $p_{\rm ad} < 6.6 \times 10^{-5}$ for these three galactic models, while the binomial probability of such a large fraction of large microlensing parallax values is $< 4.6 \times 10^{-4}$. given that many $spitzer$ light curves show evidence of large correlated errors, we conclude that this discrepancy is probably due to systematic errors in the $spitzer$ photometry. we find formally acceptable probabilities of $p_{\rm ad} > 0.05$ for subsamples of events with bright source stars ($i_{\rm s} \leq 17.75$) or $spitzer$ coverage of the light curve peak. this indicates that the systematic errors have a more serious influence on faint events, especially when the light curve peak is not covered by $spitzer$. we find that multiplying the reported error bars on the $spitzer$ microlensing parallax measurements by an error bar renormalization factor of 2.2 provides reasonable agreement with all three galactic models. however, corrections to the uncertainties in the $spitzer$ photometry itself are a more effective way to address the systematic errors.
arxiv:1905.05794
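the binomial figure quoted above can be sanity-checked in a couple of lines: if each event were equally likely to fall above or below the model median, the chance that at least 37 of 50 (the quoted $\geq 74\%$) land above is the upper binomial tail.

```python
from scipy.stats import binom

# P(X >= 37) for X ~ Binomial(50, 0.5): the chance that at least 74% of
# 50 events exceed the model median if each does so with probability 1/2
print(binom.sf(36, 50, 0.5))   # ~ 4.7e-4, the order of the quoted bound
```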
we study multisymplectic structures taking values in vector bundles with connections from the viewpoint of hamiltonian symmetry. we introduce the notion of bundle-valued $n$-plectic structures and exhibit some of their properties. in addition, we define bundle-valued homotopy momentum sections for bundle-valued $n$-plectic manifolds with lie algebroids, to discuss momentum map theories in both the quaternionic kähler and hyper-kähler cases. furthermore, we generalize the marsden-weinstein-meyer reduction theorem for symplectic manifolds and construct two kinds of reductions of vector-valued 1-plectic manifolds.
arxiv:2312.02499
a theorem of steinhaus states that if $E \subset \mathbb{R}^d$ has positive lebesgue measure, then the difference set $E - E$ contains a neighborhood of $0$. similarly, if $E$ merely has hausdorff dimension $\dim_{\mathcal{H}}(E) > (d+1)/2$, a result of mattila and sjölin states that the distance set $\Delta(E) \subset \mathbb{R}$ contains an open interval. in this work, we study such results from a general viewpoint, replacing $E - E$ or $\Delta(E)$ with more general $\Phi$-configurations for a class of $\Phi : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^k$, and showing that, under suitable lower bounds on $\dim_{\mathcal{H}}(E)$ and a regularity assumption on the family of generalized radon transforms associated with $\Phi$, it follows that the set $\Delta_\Phi(E)$ of $\Phi$-configurations in $E$ has nonempty interior in $\mathbb{R}^k$. further extensions hold for $\Phi$-configurations generated by two sets, $E$ and $F$, in spaces of possibly different dimensions and with suitable lower bounds on $\dim_{\mathcal{H}}(E) + \dim_{\mathcal{H}}(F)$.
arxiv:1907.12513
the study of mixture models constitutes a large domain of research in statistics. in the first part of this work, we present $\phi$-divergences and the existing methods which produce robust estimators. we are particularly interested in the so-called dual formula of $\phi$-divergences. we build a new robust estimator based on this formula. we study its asymptotic properties and give a numerical comparison with existing methods on simulated data. we also introduce a proximal-point algorithm whose aim is to calculate divergence-based estimators. we give some of the convergence properties of this algorithm and illustrate them on theoretical and simulated examples. in the second part of this thesis, we build a new structure for two-component mixture models where one component is unknown. the new approach permits the incorporation of prior linear information about the unknown component, such as moment-type and l-moment constraints. we study the asymptotic properties of the proposed estimators. several experimental results on simulated data are illustrated, showing the advantage of the novel approach and the gain from using the prior information in comparison to existing methods which do not incorporate any prior information except for a symmetry assumption on the unknown component.
arxiv:1611.07247
persistence modules have a natural home in the setting of stratified spaces and constructible cosheaves. in this article, we first give explicit constructible cosheaves for common data-motivated persistence modules, namely, for modules that arise from zig-zag filtrations (including monotone filtrations), and for augmented persistence modules (which encode the data of instantaneous events). we then identify an equivalence of categories between a particular notion of zig-zag modules and the combinatorial entrance path category on stratified $\mathbb{R}$. finally, we compute the algebraic $K$-theory of generalized zig-zag modules and describe connections to both euler curves and $K_0$ of the monoid of persistence diagrams as described by bubenik and elchesen.
arxiv:2110.04591
the charge ordering in the bilayer manganite system la$_{2-2x}$sr$_{1+2x}$mn$_2$o$_7$ with $0.30 \le x \le 0.50$ has been studied by neutron diffraction. the charge order is characterized by a propagation vector parallel to the [1 0 0] direction (the mno$_2$ direction), but the correlation length is short-ranged and extremely anisotropic, being $\sim 0.02a^*$ and $\sim 0.2a^*$ parallel and perpendicular to the modulation direction, respectively. the observed charge order can be viewed as a quasi-bi-stripe order and accounts well for the $x$ dependence of the resistivity. the quasi-bi-stripe order is stable within the ferromagnetic (fm) mno$_2$ layers in the a-type antiferromagnetic order, but is destabilized by the 3-dimensional fm order.
arxiv:cond-mat/0005193
we review empirical and theoretical findings concerning white dwarfs in galactic globular clusters. since their detection is a critical issue, we describe in detail the various efforts to find white dwarfs in globular clusters. we then outline the advantages of using cluster white dwarfs to investigate the formation and evolution of white dwarfs, and concentrate on evolutionary channels that appear to be unique to globular clusters. we also discuss the usefulness of globular cluster white dwarfs in providing independent information on the distances and ages of globular clusters, information that is very important far beyond the immediate field of white dwarf research. finally, we mention possible future avenues concerning globular cluster white dwarfs, like the study of strange quark matter or plasma neutrinos.
arxiv:0806.4456
a pair of complementary algorithms are presented. one of the pair is a fast method for connecting graphs with an edge. the other is a fast method for removing edges from a graph. both algorithms employ the same tree-based graph representation and so, in concert, can arbitrarily modify any graph. since the clusters of a percolation model may be described as simple connected graphs, an efficient monte carlo scheme can be constructed that uses the algorithms to sweep the occupation probability back and forth between two turning points. this approach concentrates computational sampling time within a region of interest. a high-precision value of $p_c = 0.59274603(9)$ was thus obtained, using the mersenne twister, for the two-dimensional square site percolation threshold.
arxiv:0708.0600
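the sweep method itself is more elaborate, but the basic monte carlo ingredient (testing whether an occupied-site configuration spans the lattice, with clusters tracked by union-find) fits in a short sketch; the lattice size and trial counts below are arbitrary choices.

```python
import numpy as np

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

def spans(L, p, rng):
    """does an L x L site configuration at occupation p span top to bottom?"""
    occ = rng.random((L, L)) < p
    parent = list(range(L * L + 2))     # two virtual nodes: top, bottom
    top, bot = L * L, L * L + 1
    def union(a, b):
        parent[find(parent, a)] = find(parent, b)
    for i in range(L):
        for j in range(L):
            if not occ[i, j]:
                continue
            k = i * L + j
            if i == 0: union(k, top)
            if i == L - 1: union(k, bot)
            if i > 0 and occ[i - 1, j]: union(k, k - L)
            if j > 0 and occ[i, j - 1]: union(k, k - 1)
    return find(parent, top) == find(parent, bot)

rng = np.random.default_rng(3)
for p in (0.55, 0.5927, 0.65):
    hits = sum(spans(64, p, rng) for _ in range(200))
    print(p, hits / 200)   # spanning probability rises sharply near p_c
```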
a variety of tasks on dynamic graphs, including anomaly detection, community detection, compression, and graph understanding, have been formulated as problems of identifying constituent (near) bi-cliques (i.e., complete bipartite graphs). even when we restrict our attention to maximal ones, there can be exponentially many near bi-cliques, and thus finding all of them is practically impossible for large graphs. two questions then naturally arise: (q1) what is a "good" set of near bi-cliques? that is, given a set of near bi-cliques in the input dynamic graph, how should we evaluate its quality? (q2) given a large dynamic graph, how can we rapidly identify a high-quality set of near bi-cliques in it? regarding q1, we measure how concisely, precisely, and exhaustively a given set of near bi-cliques describes the input dynamic graph. we combine these three perspectives systematically based on the minimum description length principle. regarding q2, we propose cutnpeel, a fast search algorithm for a high-quality set of near bi-cliques. by adaptively re-partitioning the input graph, cutnpeel reduces the search space and at the same time improves the search quality. our experiments using six real-world dynamic graphs demonstrate that cutnpeel is (a) high-quality: providing near bi-cliques of up to 51.2% better quality than its state-of-the-art competitors, (b) fast: up to 68.8x faster than the next-best competitor, and (c) scalable: scaling to graphs with 134 million edges. we also show successful applications of cutnpeel to graph compression and pattern discovery.
arxiv:2110.14875
we exploit new techniques for generating vortices and controlling their interactions in an optical beam in a nonlinear atomic vapor. precise control of the vortex positions allows us to observe strong interactions leading to vortex dynamics involving annihilations. with this improved controlled nonlinear system, we get closer to the pure hydrodynamic regime than in previous experiments, while a wavefront sensor offers direct access to the fluid's density and velocity. finally, we developed a relative phase shift method which mimics a time evolution process without changing nonlinear parameters. these observations are an important step toward the experimental implementation of a 2d turbulent state.
arxiv:2203.04059
it is usually supposed that inflation is of the slow-roll variety and that the inflaton generates the primordial curvature perturbation. according to the curvaton hypothesis, inflation need not be slow-roll, and if it is, the inflaton generates a negligible curvature perturbation. we find that the construction of slow-roll inflation models becomes much easier under this hypothesis. also, thermal inflation followed by fast-roll becomes viable, with no slow-roll inflation at all.
arxiv:hep-ph/0209180
this paper presents a computational solution that enables continuous cardiac monitoring through cross-modality inference of the electrocardiogram (ecg). while some smartwatches now allow users to obtain a 30-second ecg test by tapping a built-in bio-sensor, these short-term ecg tests often miss intermittent and asymptomatic abnormalities of cardiac function. it is also infeasible to expect persistently active user participation for long-term continuous cardiac monitoring in order to capture these and other types of cardiac abnormalities. to alleviate the need for continuous user attention and active participation, we design a lightweight neural network that infers the ecg from the photoplethysmogram (ppg) signal sensed at the skin surface by a wearable optical sensor. we also develop a diagnosis-oriented training strategy to enable the neural network to capture the pathological features of the ecg, aiming to increase the utility of reconstructed ecg signals for screening cardiovascular diseases (cvds). we also leverage model interpretation to obtain insights from data-driven models, for example, to reveal some associations between cvds and the ecg/ppg and to demonstrate how the neural network copes with motion artifacts in the ambulatory application. the experimental results on three datasets demonstrate the feasibility of inferring the ecg from the ppg, achieving high-fidelity ecg reconstruction with only about 40k parameters.
arxiv:2012.04949
a quantum-mechanical stability analysis of metallic nanowires within the free-electron model is presented. the stability is determined by an interplay of electron-shell effects, the rayleigh instability due to surface tension, and the peierls instability. although the latter effect limits the maximum length also for wires with "magic radii", it is found that nanowires in the micrometer range can be stable at room temperature.
arxiv:cond-mat/0307279
centrality metrics have been widely applied to identify the nodes in a graph whose removal is effective in decomposing the graph into smaller sub-components. the node-removal process is generally used to test network robustness against failures. most of the available studies assume that the node removal task is always successful. yet, we argue that this assumption is unrealistic. indeed, the removal process should also take into account the strength of the targeted node itself, to simulate failure scenarios in a more effective and realistic fashion. unlike previous literature, herein a probabilistic node failure model is proposed, in which nodes may fail with a particular probability, considering two variants, namely: uniform (in which the nodes' survival-to-failure probability is fixed) and best connected (bc) (where the nodes' survival probability is proportional to their degree). to evaluate our method, we consider five popular centrality metrics, carrying out an experimental, comparative analysis to evaluate them in terms of effectiveness and coverage on four real-world graphs. by effectiveness and coverage we mean the ability to select nodes whose removal decreases graph connectivity the most. specifically, the graph spectral radius reduction works as a proxy indicator of effectiveness, and the reduction of the largest connected component (lcc) size is a parameter to assess coverage. the metric that caused the biggest drop was then compared with the benchmark analysis (i.e., the non-probabilistic degree centrality node removal process). the main finding is that significant differences emerged through this comparison, with a deviation range that varies from 2% up to 80% regardless of the dataset used, highlighting the gap between the common practice and a more realistic approach.
arxiv:2006.13551
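a sketch of the probabilistic removal process described above, using networkx and degree centrality: a targeted node survives with a fixed probability in the uniform variant and with a degree-proportional probability in the best-connected variant. the survival constants and the barabási-albert test graph are arbitrary choices, not the paper's datasets.

```python
import networkx as nx
import numpy as np

def attack(G, centrality, survival, rounds, rng):
    """repeatedly target the most central node; removal may fail."""
    G = G.copy()
    for _ in range(rounds):
        if G.number_of_nodes() == 0:
            break
        scores = centrality(G)
        target = max(scores, key=scores.get)
        if rng.random() >= survival(G, target):   # removal succeeds
            G.remove_node(target)
    return len(max(nx.connected_components(G), key=len))

rng = np.random.default_rng(4)
G = nx.barabasi_albert_graph(300, 3, seed=4)
uniform = lambda G, v: 0.3                                   # fixed survival
best_connected = lambda G, v: min(0.9, G.degree[v] / 50.0)   # degree-based

for name, surv in [("uniform", uniform), ("best-connected", best_connected)]:
    lcc = attack(G, nx.degree_centrality, surv, rounds=60, rng=rng)
    print(name, "lcc size after attack:", lcc)
```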
one of the uses of sensor arrays is spatial filtering, or beamforming. current digital signal processing methods facilitate complex-weighted beamforming, providing flexibility in array design. previous studies proposed the use of real-valued beamforming weights, which, although they reduce design flexibility, may provide a range of benefits, e.g., simplified beamformer implementation or efficient beamforming algorithms. this paper presents a new method for the design of arrays with real-valued weights that achieve maximum directivity, providing a closed-form solution for the array weights. the method is studied for linear and spherical arrays, where it is shown that rigid spherical arrays are particularly suitable for real-weight designs as they do not suffer from grating lobes, a dominant feature of linear arrays with real weights. a simulation study is presented for linear and spherical arrays, along with an experimental investigation, validating the theoretical developments.
arxiv:2401.02285
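for intuition, the sketch below computes the directivity of an 8-element half-wavelength line array under isotropic noise; the complex max-directivity weights $w \propto \sigma^{-1} d$ are the textbook design, and taking their real part is only a naive surrogate, not the closed-form real-weight optimum derived in the paper.

```python
import numpy as np

N, d = 8, 0.5                       # elements, spacing in wavelengths
theta0 = np.deg2rad(30.0)           # look direction from broadside
n = np.arange(N)
steer = np.exp(2j * np.pi * d * n * np.sin(theta0))

# isotropic-noise covariance of a line array: Sigma[m,k] = sinc(2 d (m-k))
Sigma = np.sinc(2 * d * (n[:, None] - n[None, :]))

def directivity(w):
    return np.abs(np.vdot(w, steer)) ** 2 / np.real(w.conj() @ Sigma @ w)

w_complex = np.linalg.solve(Sigma, steer)   # textbook max-directivity design
w_real = np.real(w_complex)                 # naive real-weight surrogate
print(directivity(w_complex), directivity(w_real))   # 8.0 vs 4.0 here
```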
let $\mathcal{G}$ be an algebraic quantum group. we introduce an equivariant algebraic $KK$-theory for $\mathcal{G}$-module algebras. we study an adjointness theorem related to the smash product and the trivial action. we also discuss a duality property.
arxiv:1408.1639
image datasets with high-quality pixel-level annotations are valuable for semantic segmentation: labelling every pixel in an image ensures that rare classes and small objects are annotated. however, full-image annotations are expensive, with experts spending up to 90 minutes per image. we propose block sub-image annotation as a replacement for full-image annotation. despite the attention cost of frequent task switching, we find that block annotations can be crowdsourced at higher quality than full-image annotations at equal monetary cost using existing annotation tools developed for full-image annotation. surprisingly, we find that annotating 50% of pixels with blocks allows semantic segmentation to achieve performance equivalent to annotating 100% of pixels. furthermore, annotating as little as 12% of pixels allows performance as high as 98% of that with dense annotation. in weakly-supervised settings, block annotation outperforms existing methods by 3-4% (absolute) given equivalent annotation time. to recover the necessary global structure for applications such as characterizing spatial context and affordance relationships, we propose an effective method to inpaint block-annotated images with high-quality labels without additional human effort. as such, fewer annotations can also be used for these applications compared to full-image annotation.
arxiv:2002.06626
werewolf is an incomplete-information game that poses several challenges when creating a computer agent as a player, given the lack of understanding of the situation and the individuality of utterances (e.g., computer agents are not capable of characterful utterances or situational lying). we propose a werewolf agent that solves some of those difficulties by combining a large language model (llm) and a rule-based algorithm. in particular, our agent uses a rule-based algorithm to select an output either from an llm or from a template prepared beforehand, based on the results of analyzing the conversation history using an llm. this allows the agent to refute in specific situations, identify when to end the conversation, and behave with a persona. this approach mitigated conversational inconsistencies and facilitated logical utterances as a result. we also conducted a qualitative evaluation, which resulted in our agent being perceived as more human-like compared to an unmodified llm. the agent is freely available to contribute to advancing research in the field of the werewolf game.
arxiv:2409.01575
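a toy sketch of the hybrid controller: a rule layer inspects an analyzed conversation state and decides whether to emit a canned persona line or defer to the llm. the rules, templates, and the keyword-matching "analysis" below are invented placeholders for the paper's llm-based analysis.

```python
# invented placeholder templates, not the paper's actual ones
TEMPLATES = {
    "accused": "i am not the werewolf - i was with the seer last night.",
    "vote_call": "i vote for the quietest player.",
}

def analyze(history):
    """stand-in for the paper's llm-based conversation analysis."""
    return {"accused": any("you are the werewolf" in u for u in history),
            "turns": len(history)}

def respond(history, llm):
    state = analyze(history)
    if state["accused"]:                 # refute with a fixed persona line
        return TEMPLATES["accused"]
    if state["turns"] > 20:              # wind the conversation down
        return TEMPLATES["vote_call"]
    return llm(history)                  # otherwise free llm generation

print(respond(["you are the werewolf!"], llm=lambda h: "(llm reply)"))
```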
we present general algorithms (fully implemented in maple) for calculating various quantities related to constrained directed walks with a general set of steps on the square lattice in two dimensions. as a special case, we rederive results of earlier works.
arxiv:cond-mat/0701674
the euler-gauss linear transformation formula for the hypergeometric function was extended by goursat for the case of logarithmic singularities. by replacing the perturbed bessel differential equation by a monodromic functional equation, and studying this equation separately from the differential equation by an appropriate laplace-borel technique, we associate with the latter equation another monodromic relation in the dual complex plane. this enables us to prove a duality theorem and to extend goursat's formula to much larger classes of functions.
arxiv:1203.5550
this study conducts a thorough evaluation of text augmentation techniques across a variety of datasets and natural language processing (nlp) tasks to address the lack of reliable, generalized evidence for these methods. it examines the effectiveness of these techniques in augmenting training sets to improve performance in tasks such as topic classification, sentiment analysis, and offensive language detection. the research emphasizes not only the augmentation methods, but also the strategic order in which real and augmented instances are introduced during training. a major contribution is the development and evaluation of modified cyclical curriculum learning (mccl) for augmented datasets, which represents a novel approach in the field. results show that specific augmentation methods, especially when integrated with mccl, significantly outperform traditional training approaches in nlp model performance. these results underscore the need for careful selection of augmentation techniques and sequencing strategies to optimize the balance between speed and quality improvement in various nlp tasks. the study concludes that the use of augmentation methods, especially in conjunction with mccl, leads to improved results in various classification tasks, providing a foundation for future advances in text augmentation strategies in nlp.
arxiv:2402.09141
the paper studies the properties of stochastic gradient methods with preconditioning. we focus on momentum-updated preconditioners with momentum coefficient $\beta$. seeking to explain the practical efficiency of scaled methods, we provide a convergence analysis in a norm associated with the preconditioner, and demonstrate that scaling allows one to eliminate the gradient lipschitz constant from the convergence rates. along the way, we emphasize the important role of $\beta$, undeservedly set to the constant $0.99\ldots9$ at the arbitrariness of various authors. finally, we propose explicit constructive formulas for adaptive $\beta$ and step size values.
arxiv:2210.11869
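as a concrete instance of a scaled method, the sketch below runs sgd with an rmsprop-style diagonal preconditioner whose second-moment estimate is momentum-updated with coefficient $\beta$; the toy quadratic, learning rate, and fixed $\beta$ are placeholders rather than the paper's adaptive formulas.

```python
import numpy as np

def scaled_sgd(grad_fn, x0, steps=500, lr=0.1, beta=0.99, eps=1e-8):
    """sgd with a momentum-updated diagonal preconditioner."""
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        g = grad_fn(x)
        v = beta * v + (1.0 - beta) * g * g      # second-moment estimate
        x = x - lr * g / (np.sqrt(v) + eps)      # preconditioned step
    return x

# toy quadratic with badly scaled coordinates, where scaling clearly helps
rng = np.random.default_rng(5)
scales = np.array([1.0, 100.0])
grad = lambda x: scales * x + 0.01 * rng.normal(size=2)   # noisy gradient
print(scaled_sgd(grad, np.array([5.0, 5.0])))             # -> near the origin
```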
the construction of an $n_t = 3$ cohomological gauge theory on a hyper-kähler eight-fold, whose group-theoretical description was given by blau and thompson, is performed explicitly.
arxiv:hep-th/0304096
tverberg's theorem asserts that every $(k-1)(d+1)+1$ points in $\mathbb{R}^d$ can be partitioned into $k$ parts, so that the convex hulls of the parts have a common intersection. calder and eckhoff asked whether there is a purely combinatorial deduction of tverberg's theorem from the special case $k = 2$. we dash the hopes of a purely combinatorial deduction, but show that the case $k = 2$ does imply that every set of $O(k^2 \log^2 k)$ points admits a tverberg partition into $k$ parts.
arxiv:1009.2384
we report on the fabrication and characterization of ultra-thin suspended single-crystalline flat silicon membranes with thicknesses down to 6 nm. we have developed a method to control the strain in the membranes by adding a strain-compensating frame on the silicon membrane perimeter to avoid buckling of the released membranes. we show that by changing the properties of the frame, the strain of the membrane can be tuned in a controlled manner. consequently, both the mechanical properties and the band structure can be engineered, and the resulting membranes provide a unique laboratory for studying low-dimensional electronic, photonic and phononic phenomena.
arxiv:1303.1658
the reversible pebble game is a combinatorial game played on rooted dags. this game was introduced by bennett (1989), motivated by applications in designing space-efficient reversible algorithms. recently, chan (2013) showed that the reversible pebble game number of any dag is the same as its dymond-tompa pebble number and raz-mckenzie pebble number. we show, as our main result, that for any rooted directed tree $t$, its reversible pebble game number is always just one more than the edge rank coloring number of the underlying undirected tree $u$ of $t$. it is known that, given a dag $g$ as input, determining its reversible pebble game number is pspace-hard. our result implies that the reversible pebble game number of trees can be computed in polynomial time. we also address the question of finding the number of steps required to optimally pebble various families of trees. it is known that trees can be pebbled in $n^{O(\log(n))}$ steps, where $n$ is the number of nodes in the tree. using the equivalence between the reversible pebble game and the dymond-tompa pebble game (chan, 2013), we show that complete binary trees can be pebbled in $n^{O(\log\log(n))}$ steps, a substantial improvement over the naive upper bound of $n^{O(\log(n))}$. it remains open whether complete binary trees can be pebbled in a polynomial (in $n$) number of steps. towards this end, we show that almost optimal (i.e., within a factor of $(1+\epsilon)$ for any constant $\epsilon > 0$) pebblings of complete binary trees can be done in a polynomial number of steps. we also show a time-space trade-off for reversible pebbling for families of bounded-degree trees by a divide-and-conquer approach: for any constant $\epsilon > 0$, such families can be pebbled using $O(n^\epsilon)$ pebbles in $O(n)$ steps. this generalizes an analogous result of kralovic (2001) for chains.
arxiv:1604.05510
the organic content of protoplanetary disks sets the initial compositions of planets and comets, thereby influencing the subsequent chemistry that is possible in nascent planetary systems. we present observations of the complex nitrile-bearing species ch3cn and hc3n towards the disks around the t tauri stars as 209, im lup, lkca 15, and v4046 sgr, as well as the herbig ae stars mwc 480 and hd 163296. hc3n is detected towards all disks except im lup, and ch3cn is detected towards v4046 sgr, mwc 480, and hd 163296. rotational temperatures derived for disks with multiple detected lines range from 29-73 k, indicating emission from the temperate molecular layer of the disk. the v4046 sgr and mwc 480 radial abundance profiles are constrained using a parametric model; the gas-phase ch3cn and hc3n abundances with respect to hcn are a few to tens of percent in the inner 100 au of the disk, signifying a rich nitrile chemistry at planet- and comet-forming disk radii. we find consistent relative abundances of ch3cn, hc3n, and hcn between our disk sample, protostellar envelopes, and solar system comets; this is suggestive of a robust nitrile chemistry with similar outcomes under a wide range of physical conditions.
arxiv:1803.04986
with the increasing amount of data in society, privacy concerns in data sharing have become widely recognized. in particular, protecting personal attribute information is essential for a wide range of aims, from crowdsourcing to realizing personalized medicine. although various differentially private methods based on randomized response have been proposed for single attribute information or specific analysis purposes such as frequency estimation, there is a lack of studies on mechanisms for sharing individuals' multiple categorical information itself. the existing randomized response for sharing multi-attribute data uses the kronecker product to perturb each attribute in turn according to its respective privacy level, but achieves only a weak privacy level for the entire dataset. therefore, in this study, we propose a privacy-optimized randomized response that guarantees the strongest privacy in sharing multi-attribute data. furthermore, we present an efficient heuristic algorithm for constructing a near-optimal mechanism. the time complexity of our algorithm is $O(k^2)$, where $k$ is the number of attributes, and it can be performed in about 1 second even for large datasets with $k = 1000$. the experimental results demonstrate that both of our methods provide significantly stronger privacy guarantees for the entire dataset than the existing method. in addition, we show an analysis example using genome statistics to confirm that our methods can achieve less than half the output error compared with the existing method. overall, this study is an important step toward trustworthy sharing and analysis of multi-attribute data. the python implementation of our experiments and supplemental results are available at https://github.com/ay0408/optimized-rr.
arxiv:2402.07584
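for context, the single-attribute building block (standard $k$-ary randomized response satisfying $\epsilon$-local differential privacy) looks as follows; the paper's contribution is optimizing the joint mechanism across multiple attributes, which this sketch does not attempt.

```python
import numpy as np

def randomized_response(value, k, epsilon, rng):
    """k-ary randomized response: keep the true category with probability
    p = e^eps / (e^eps + k - 1), else report a uniform other category."""
    p = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p:
        return value
    others = [c for c in range(k) if c != value]
    return int(rng.choice(others))

rng = np.random.default_rng(6)
reports = [randomized_response(2, k=4, epsilon=1.0, rng=rng)
           for _ in range(10000)]
keep_rate = np.mean(np.array(reports) == 2)
print(keep_rate, np.exp(1.0) / (np.exp(1.0) + 3))   # empirical vs nominal
```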
we explore the higgs sector in the supersymmetric economical 3-3-1 model and find new features in this sector. the charged higgs sector is revised, i.e., in contrast to previous work, the exact eigenvalues and states are obtained without any approximation. in this model, there are three higgs bosons having masses equal to those of the gauge bosons: the w and the extra x and y. there is one scalar boson with a mass of 91.4 gev, which is close to the $z$ boson mass and in good agreement with the present limit: 89.8 gev at 95% cl. the condition for eliminating the charged scalar tachyon leads to a splitting of the vevs at the first symmetry breaking, namely $w \simeq w^\prime$. the interactions among the standard model gauge bosons and scalar fields in the framework of the supersymmetric economical 3-3-1 model are presented. from these couplings, at some limit, almost all scalar higgs fields can be recognized in accordance with the standard model. the hadronic cross section for production of the bilepton charged higgs boson at the cern lhc in the effective vector boson approximation is calculated. numerical evaluation shows that the cross section can exceed 35.8 fb.
arxiv:0707.3712
in this paper we discuss and confront recent results on metallicity variations in the local interstellar medium, obtained from observations of hii regions and neutral clouds of the galactic thin disk, and compare them with recent high-quality metallicity determinations from other tracers of the chemical composition of the interstellar medium, such as b-type stars, classical cepheids and young clusters. we find that the metallicity variations obtained for these last kinds of objects are consistent with each other and with those obtained for hii regions, but significantly smaller than those obtained for neutral clouds. we also discuss the presence of a large population of low-metallicity clouds as a possible origin for large metallicity variations in the local galactic thin disk. we find that this hypothesis does not seem compatible with: (a) what is predicted by theoretical studies of gas mixing in galactic disks, and (b) the models and observations of the metallicity of high-velocity clouds and its evolution as they mix with the surrounding medium in their fall onto the galactic plane. we conclude that most of the evidence favors the chemical composition of the interstellar medium in the solar neighborhood being highly homogeneous.
arxiv:2204.12258