text (stringlengths 1–3.65k) | source (stringlengths 15–79)
---|---|
The National Electrical Manufacturers Association's (NEMA) NU 4-2008 standard specifies methodology for evaluating the performance of small-animal PET scanners. The standard's goal is to enable comparison of different PET scanners over the wide range of technologies and geometries used. In this work, we discuss whether the NEMA standard meets these goals, and we point out potential flaws in and improvements to the standard. For the evaluation of spatial resolution, the NEMA standard mandates the use of filtered backprojection reconstruction. This reconstruction method can introduce star-like artifacts for detectors with an anisotropic spatial resolution, usually caused by parallax error. These artifacts can then cause a strong dependence of the resulting spatial resolution on the size of the projection window in image space, whose size is not fully specified in the NEMA standard. We show that the standard's equations for the estimation of the random rate for PET systems with intrinsic radioactivity are circular and not satisfiable. We compare random rates determined by a method based on the standard to random rates obtained using a delayed coincidence window and two methods based on the singles rates. We discuss to what extent the NEMA standard's protocol for the evaluation of the sensitivity is unclear and ambiguous, which can lead to unclear interpretations of published results. The standard's definition of the recovery coefficients in the image-quality phantom includes the maximum activity in a region of interest, which causes a positive correlation of noise and recovery coefficients (a toy numerical illustration follows this entry). This leads to an unintended trade-off between desired uniformity, which is negatively correlated with variance (i.e. noise), and recovery. With this work, we want to start a discussion on possible improvements in a next version of the NEMA NU-4 standard.
|
arxiv:1910.12070
|
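The noise-recovery coupling flagged in this entry is easy to reproduce numerically. A minimal sketch, assuming a toy uniform ROI with Gaussian voxel noise; the ROI size and noise levels are illustrative and not taken from the standard:

```python
import numpy as np

rng = np.random.default_rng(0)
true_activity = 1.0

for sigma in (0.01, 0.05, 0.10, 0.20):          # relative image noise (assumed)
    roi = rng.normal(true_activity, sigma, size=(10, 10, 10))  # toy ROI voxels
    rc_max = roi.max() / true_activity           # max-based RC, as in NU 4
    rc_mean = roi.mean() / true_activity         # mean-based RC for comparison
    print(f"noise={sigma:.2f}  RC_max={rc_max:.3f}  RC_mean={rc_mean:.3f}")
```

Because the maximum of noisy voxels is biased upward, the max-based coefficient rises with noise while the mean-based one stays near 1, which is exactly the uniformity/recovery trade-off described above.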
Seven-manifolds of $G_2$ holonomy provide a bridge between M-theory and string theory, via Kaluza-Klein reduction to Calabi-Yau six-manifolds. We find first-order equations for a new family of $G_2$ metrics $D_7$, with $S^3 \times S^3$ principal orbits. These are related at weak string coupling to the resolved conifold, paralleling earlier examples $B_7$ that are related to the deformed conifold, allowing a deeper study of topology change and mirror symmetry in M-theory. The $D_7$ metrics' non-trivial parameter characterises the squashing of an $S^3$ bolt, which limits to $S^2$ at weak coupling. In general the $D_7$ metrics are asymptotically locally conical, with a nowhere-singular circle action.
|
arxiv:hep-th/0112098
|
Chaotic systems such as the gravitational N-body problem are ubiquitous in astronomy. Machine learning (ML) is increasingly deployed to predict the evolution of such systems, e.g. with the goal of speeding up simulations. Strategies such as active learning (AL) are a natural choice to optimize ML training. Here we showcase an AL failure when predicting the stability of the Sitnikov three-body problem, the simplest case of the N-body problem displaying chaotic behavior (a toy integration sketch follows this entry). We link this failure to the fractal nature of our classification problem's decision boundary. This is a potential pitfall in optimizing large sets of N-body simulations via AL in the context of star cluster physics, galactic dynamics, or cosmology.
|
arxiv:2311.18010
|
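For orientation, the Sitnikov configuration referenced above can be integrated in a few lines. A hedged sketch, assuming equal primaries of mass 1/2 on an orbit of unit semi-major axis and eccentricity 0.2 with G = 1, and a crude escape criterion as the stability label; all of these choices are illustrative and not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rho(t, e=0.2):
    """Primaries' distance from the barycenter (Kepler equation via Newton)."""
    E = t
    for _ in range(8):  # Newton iterations for E - e*sin(E) = t
        E -= (E - e * np.sin(E) - t) / (1.0 - e * np.cos(E))
    return 0.5 * (1.0 - e * np.cos(E))

def rhs(t, y, e=0.2):
    z, v = y
    r = rho(t, e)
    return [v, -z / (z * z + r * r) ** 1.5]   # vertical acceleration on the third body

def label(z0, t_max=200.0, z_escape=50.0, e=0.2):
    """Crude stability label: does |z| stay bounded up to t_max?"""
    sol = solve_ivp(rhs, (0.0, t_max), [z0, 0.0], args=(e,), rtol=1e-8, atol=1e-10)
    return "stable" if np.all(np.abs(sol.y[0]) < z_escape) else "escape"

for z0 in np.linspace(1.0, 2.2, 7):
    print(f"z0={z0:.2f}: {label(z0)}")
```

Scanning initial conditions near the stability boundary shows the erratic label flips that make such a fractal decision boundary hard for active learning.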
The equilibrium properties of polymer droplets on a soft deformable surface are investigated by molecular dynamics simulations of a bead-spring model. The surface consists of a polymer brush with linear homopolymer chains irreversibly end-tethered onto a flat solid substrate. We tune the softness of the surface by varying the grafting density. Droplets consist of bead-spring polymers of various chain lengths. First, both systems, brush and polymer liquid, are studied independently in order to determine their static and dynamic properties. In particular, using a numerical implementation of an AFM experiment, we measure the shear modulus of the brush surface and compare the results to theoretical predictions. Then, we study the wetting behavior of polymer droplets with different contact angles and on substrates that differ in softness. Density profiles reveal, under certain conditions, the formation of a wetting ridge beneath the three-phase contact line. Cap-shaped droplets and cylindrical droplets are also compared to estimate the effect of the line tension with respect to the droplet size. Finally, the results of the simulations are compared to a phenomenological free-energy calculation that accounts for the surface tensions and the compliance of the soft substrate. Depending on the surface/drop compatibility, surface softness and drop size, a transition between two regimes is observed: from one where the drop surface energy balances the adhesion with the surface, which is the classical Young-Dupré wetting regime, to another one where a coupling occurs between adhesion, droplet and surface elastic energies.
|
arxiv:1107.2898
|
these systems. The growing field of gene therapy also relies on bioprocessing techniques to produce viral vectors, which are used to deliver therapeutic genes to patients. This involves scaling up processes from laboratory to industrial scale while maintaining safety and regulatory compliance. As the demand for biopharmaceutical products increases, advancements in bioprocess engineering continue to enable more sustainable and cost-effective manufacturing methods.

== Education ==

Biochemical engineering is not a major offered by many universities and is instead an area of interest under chemical engineering. The following universities are known to offer degrees in biochemical engineering:

Brown University – Providence, RI
Christian Brothers University – Memphis, TN
Colorado School of Mines – Golden, CO
Rowan University – Glassboro, NJ
University of Colorado Boulder – Boulder, CO
University of Georgia – Athens, GA
University of California, Davis – Davis, CA
University College London – London, United Kingdom
University of Southern California – Los Angeles, CA
University of Western Ontario – Ontario, Canada
Indian Institute of Technology (BHU) Varanasi – Varanasi, UP
Indian Institute of Technology Delhi – Delhi
Institute of Technology Tijuana – Mexico
University of Baghdad, College of Engineering, Al-Khwarizmi Biochemical
Universidad Nacional de Río Negro – Río Negro, Argentina

== See also ==

Biochemical engineering, biofuel from algae, biological hydrogen production (algae), bioprocess, bioproducts engineering, bioproducts, bioreactor landfill, biosystems engineering, cell therapy, downstream (bioprocess), electrochemical energy conversion, food engineering, industrial biotechnology, microbiology, moss bioreactor, photobioreactor, physical chemistry, unit operations, upstream (bioprocess), use of biotechnology in pharmaceutical manufacturing

== References ==

Shukla, A. A., Thommes, J., & Hackl, M. (2012). Recent advances in downstream processing of therapeutic monoclonal antibodies. Biotechnology Advances, 30(3), 1548-1557.
Walsh, G. (2018). Biopharmaceuticals: Biochemistry and Biotechnology (3rd ed.). Wiley.
|
https://en.wikipedia.org/wiki/Biochemical_engineering
|
We view disentanglement learning as discovering an underlying structure that equivariantly reflects the factorized variations shown in data. Traditionally, such a structure is fixed to be a vector space with data variations represented by translations along individual latent dimensions. We argue this simple structure is suboptimal since it requires the model to learn to discard the properties (e.g. different scales of changes, different levels of abstractness) of data variations, which is extra work beyond equivariance learning. Instead, we propose to encode the data variations with groups, a structure that not only can equivariantly represent variations, but can also be adaptively optimized to preserve the properties of data variations. Since it is hard to conduct training on group structures, we focus on Lie groups and adopt a parameterization using the Lie algebra (a toy sketch of such a parameterization follows this entry). Based on the parameterization, some disentanglement learning constraints are naturally derived. A simple model named Commutative Lie Group VAE is introduced to realize group-based disentanglement learning. Experiments show that our model can effectively learn disentangled representations without supervision, and can achieve state-of-the-art performance without extra constraints.
|
arxiv:2106.03375
|
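The Lie-algebra parameterization mentioned above can be sketched as follows: latent coordinates weight a learnable basis of the algebra, and the matrix exponential maps the result to a group element acting on a representation. A toy sketch with a random basis; the paper's encoder, decoder, and commutativity constraints are not shown:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, k = 4, 3                                 # representation size, latent dimensions
basis = rng.normal(size=(k, d, d)) * 0.1    # learnable Lie-algebra basis (toy init)

def group_action(z):
    """Map latent coordinates z (shape (k,)) to the group element exp(sum_i z_i A_i)."""
    algebra_elem = np.tensordot(z, basis, axes=1)   # element of the Lie algebra, (d, d)
    return expm(algebra_elem)                       # corresponding group element

h = rng.normal(size=d)                # a content representation vector
z = np.array([0.5, -0.2, 0.1])        # latent traversal coordinates
print(group_action(z) @ h)            # representation transformed by the group element
```

Traversing one coordinate of z then corresponds to moving along a one-parameter subgroup rather than along a fixed latent axis.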
We characterize the limiting distributions of random variables of the form $P_n((X_i)_{i \ge 1})$, where: (i) $(P_n)_{n \ge 1}$ is a sequence of multivariate polynomials, each potentially involving countably many variables; (ii) there exists a constant $d \ge 1$ such that for all $n \ge 1$, the degree of $P_n$ is bounded above by $d$; (iii) $(X_i)_{i \ge 1}$ is a sequence of independent and identically distributed random variables, each with zero mean, unit variance, and finite moments of all orders. More specifically, we prove that the limiting distributions of these random variables can always be represented as the law of $P_\infty((X_i, G_i)_{i \ge 1})$, where $P_\infty$ is a polynomial of degree at most $d$ (potentially involving countably many variables), and $(G_i)_{i \ge 1}$ is a sequence of independent standard Gaussian random variables, which is independent of $(X_i)_{i \ge 1}$. We solve this problem in full generality, addressing both Gaussian and non-Gaussian inputs, and with no extra assumption on the coefficients of the polynomials. In the Gaussian case, our proof builds upon several original tools of independent interest, including a new criterion for central convergence based on the concept of maximal directional influence. Beyond asymptotic normality, this novel notion also enables us to derive quantitative bounds on the degree of the polynomial representing the limiting law. We further develop techniques regarding asymptotic independence and dimensional reduction. To conclude for polynomials with non-Gaussian inputs, we combine our findings in the Gaussian case with invariance principles.
|
arxiv:2412.06749
|
We consider folded spinning strings in $AdS_5 \times S^5$ (with one spin component $S$ in $AdS_5$ and $J$ in $S^5$) corresponding to the $\mathrm{Tr}(D^S Z^J)$ operators in the $sl(2)$ sector of the $\mathcal{N}=4$ SYM theory, in the special scaling limit in which both the string mass $M \sim \sqrt{\lambda} \ln S$ and $J$ are sent to infinity with their ratio fixed. Expanding in the parameter $\ell = J/M$, we compute the 2-loop string sigma model correction to the string energy and show that it agrees with the expression proposed by Alday and Maldacena in arXiv:0708.0672. We suggest that a resummation of the logarithmic $\ell^2 \ln^n \ell$ terms is necessary in order to establish an interpolation to the weakly coupled gauge theory results. In the process, we set up a general framework for the calculation of higher loop corrections to the energy of multi-spin string configurations. In particular, we find that in addition to the direct 2-loop term in the string energy there is a contribution from lower loop order due to a finite "renormalization" of the relation between the parameters of the classical solution and the fixed spins, i.e. the charges of the $SO(2,4) \times SO(6)$ symmetry.
|
arxiv:0712.2479
|
Context. Ultracompact X-ray binaries (UCXBs) typically consist of a white dwarf donor and a neutron star or black hole accretor. The evolution of UCXBs, and of very low mass ratio binaries in general, is poorly understood. Aims. We investigate the evolution of UCXBs in order to learn for which mass ratios and accretor types these systems can exist and, if they do, what their orbital and neutron star spin periods, mass transfer rates and evolutionary timescales are. Methods. For different assumptions concerning accretion disk behavior, we calculate for which system parameters dynamical instability, thermal-viscous disk instability or the propeller effect emerge. Results. At the onset of mass transfer, the survival of the UCXB is determined by how efficiently the accretor can eject matter in the case of a super-Eddington mass transfer rate. At later times, the evolution of systems strongly depends on the binary's capacity to return angular momentum from the disk to the orbit. We find that this feedback mechanism most likely remains effective. In the case of steady mass transfer, the propeller effect can stop accretion onto recycled neutron stars completely at a sufficiently low mass transfer rate, based on energy considerations. However, mass transfer will likely be non-steady because disk instability allows for accretion of some of the transferred matter. Together, the propeller effect and disk instability cause the low mass ratio UCXBs to be visible a small fraction of the time at most, thereby explaining the lack of observations of such systems. Conclusions. Most likely UCXBs avoid late-time dynamically unstable mass transfer and continue to evolve for as long as the age of the universe allows. This implies the existence of a large population of low mass ratio binaries with orbital periods of ~70-80 min, unless some other mechanism has destroyed these binaries.
|
arxiv:1111.5978
|
Widely popular transformer-based NLP models such as BERT and Turing-NLG have enormous capacity trending to billions of parameters. Current execution methods demand brute-force resources such as HBM devices and high-speed interconnectivity for data parallelism. In this paper, we introduce a new relay-style execution technique called L2L (layer-to-layer) where, at any given moment, the device memory is primarily populated only with the executing layer(s)' footprint. The model resides in the DRAM memory attached to either a CPU or an FPGA as an entity we call eager param-server (EPS). To overcome the bandwidth issues of shuttling parameters to and from EPS, the model is executed a layer at a time across many micro-batches instead of the conventional method of minibatches over the whole model (a skeletal sketch of this relay pattern follows this entry). L2L is implemented using 16 GB V100 devices for BERT-Large, running it with a device batch size of up to 256. Our results show a 45% reduction in memory and a 40% increase in throughput compared to the state-of-the-art baseline. L2L is also able to fit models of up to 50 billion parameters on a machine with a single 16 GB V100 and 512 GB of CPU memory, without requiring any model partitioning. L2L scales to arbitrary depth, allowing researchers to develop on affordable devices, which is a big step toward democratizing AI. By running the optimizer in the host EPS, we show a new form of mixed precision for faster throughput and convergence. In addition, the EPS enables dynamic neural architecture approaches by varying layers across iterations. Finally, we also propose and demonstrate a constant-memory variation of L2L, and we propose future enhancements. This work has been performed on GPUs first, but is also targeted towards all high TFLOPS/watt accelerators.
|
arxiv:2002.05645
|
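The relay pattern can be sketched independently of the paper's EPS implementation: keep all layers in host memory, move one layer at a time to the device, and run it over every micro-batch before loading the next layer. A minimal PyTorch sketch (inference only, no optimizer, no compute/transfer overlap; layer sizes are placeholders):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
layers = [nn.Linear(1024, 1024) for _ in range(8)]     # whole model stays in host RAM

@torch.no_grad()
def l2l_forward(micro_batches):
    """Run each layer over every micro-batch before loading the next layer."""
    acts = list(micro_batches)
    for layer in layers:
        layer.to(device)                               # only this layer on the device
        acts = [layer(a.to(device)).cpu() for a in acts]
        layer.to("cpu")                                # evict before the next layer
    return acts

outs = l2l_forward([torch.randn(32, 1024) for _ in range(4)])
print(outs[0].shape)
```

At any moment the device holds one layer plus one micro-batch of activations, which is the memory shape the abstract describes; the real system additionally pipelines transfers and runs the optimizer in the host.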
Infinite-dimensional Lie superalgebras, particularly Borcherds-Kac-Moody (BKM) superalgebras, play a fundamental role in mathematical physics, number theory, and representation theory. In this paper, we study the root multiplicities of BKM superalgebras via their denominator identities, deriving explicit combinatorial formulas in terms of graph invariants associated with marked (quasi-)Dynkin diagrams. We introduce partially commutative Lie superalgebras (PCLSAs) and provide a direct combinatorial proof of their denominator identity, where the generating set runs over the super heaps monoid. A key notion in our approach is marked multi-colorings and their associated polynomials, which generalize chromatic polynomials and offer a method for computing root multiplicities. As applications, we characterize the roots of PCLSAs and establish connections between their universal enveloping algebras and right-angled Coxeter groups, leading to explicit formulas for their Hilbert series. These results further deepen the interplay between Lie superalgebras, graph theory, and algebraic combinatorics.
|
arxiv:2503.11230
|
We consider Bernoulli percolation on transitive graphs of polynomial growth. In the subcritical regime ($p < p_c$), it is well known that the connection probabilities decay exponentially fast. In the present paper, we study the supercritical phase ($p > p_c$) and prove the exponential decay of the truncated connection probabilities (probabilities that two points are connected by an open path, but not to infinity). This sharpness result was established by [CCN87] on $\mathbb{Z}^d$ and uses the difficult slab result of Grimmett and Marstrand. However, the techniques used there are very specific to the hypercubic lattices and do not extend to more general geometries. In this paper, we develop new robust techniques based on recent progress in the theory of sharp thresholds and the sprinkling method of Benjamini and Tassion. On $\mathbb{Z}^d$, these methods can be used to produce a new proof of the slab result of Grimmett and Marstrand.
|
arxiv:2107.06326
|
In this paper, we investigate the computability of thermodynamic invariants at zero temperature for one-dimensional subshifts of finite type. In particular, we prove that the residual entropy (i.e., the joint ground state entropy) is an upper semi-computable function on the space of continuous potentials, but it is not computable. Next, we consider locally constant potentials for which the zero-temperature measure is known to exist. We characterize the computability of the zero-temperature measure and its entropy for potentials that are constant on cylinders of a given length $k$. In particular, we show the existence of an open and dense set of locally constant potentials for which the zero-temperature measure can be computationally identified as an elementary periodic point measure. Finally, we show that our methods do not generalize to treat the case when $k$ is not given.
|
arxiv:1809.00147
|
The production of electron-positron pairs from vacuum by counterpropagating laser beams of linear polarization is calculated. In contrast to the usual approximate approach, the spatial dependence and magnetic component of the laser field are taken into account. We show that the latter strongly affects the creation process at high laser frequency: the production probability is reduced, the kinematics is fundamentally modified, the resonant Rabi-oscillation pattern is distorted, and the resonance positions are shifted, multiplied and split.
|
arxiv:0810.4047
|
We discuss the timescales for alignment of black hole and accretion disc spins in the context of binary systems. We show that for black holes that are formed with substantial angular momentum, the alignment timescales are likely to be at least a substantial fraction of the systems' lifetimes. This result explains the observed misalignment of the disc and the jet in the microquasar GRO J1655-40 and in SAX J1819-2525 as being likely due to the Bardeen-Petterson effect. We discuss the implications of these results for the mass estimate of GRS 1915+105, which assumed the jet is perpendicular to the orbital plane of the system and may hence be an underestimate. We show that the timescales for spin alignment in Cygnus X-3 are consistent with the likely misalignment of disc and jet in that system, and that this is suggested by the observational data.
|
arxiv:astro-ph/0209105
|
The increasing inclusion of machine learning (ML) models in safety-critical systems like autonomous cars has led to the development of multiple model-based ML testing techniques. One common denominator of these testing techniques is their assumption that training programs are adequate and bug-free. These techniques only focus on assessing the performance of the constructed model using manually labeled data or automatically generated data. However, their assumptions about the training program are not always true, as training programs can contain inconsistencies and bugs. In this paper, we examine training issues in ML programs and propose a catalog of verification routines that can be used to detect the identified issues automatically (a toy illustration of such checks follows this entry). We implemented the routines in a TensorFlow-based library named TFCheck. Using TFCheck, practitioners can detect the aforementioned issues automatically. To assess the effectiveness of TFCheck, we conducted a case study with real-world, mutant, and synthetic training programs. Results show that TFCheck can successfully detect training issues in ML code implementations.
|
arxiv:1909.02562
|
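To illustrate the flavor of such verification routines, here is a hedged sketch of two checks, a NaN/exploding-loss check and a frozen-parameter check, written in plain NumPy; this is not TFCheck's actual API, and the thresholds are illustrative:

```python
import numpy as np

def check_loss(history, window=5, explode=1e3):
    """Flag NaN losses and losses that grow instead of shrinking."""
    issues = []
    if any(np.isnan(v) for v in history):
        issues.append("loss is NaN")
    if len(history) >= window and history[-1] > explode * min(history[:window]):
        issues.append("loss appears to be exploding")
    return issues

def check_weights_updated(before, after, tol=1e-12):
    """Flag parameters that did not move after an optimization step."""
    return [name for name in before
            if np.max(np.abs(before[name] - after[name])) < tol]

w0 = {"dense/kernel": np.ones((3, 3))}
w1 = {"dense/kernel": np.ones((3, 3))}   # unchanged after a step -> flagged
print(check_loss([2.0, 1.5, 1.2, np.nan]))
print(check_weights_updated(w0, w1))
```

A library like the one described would run such routines as callbacks during training and report the detected issues.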
We present the results of a massive variability search based on a photometric survey of a six square degree region along the Galactic plane at ($l = 305^\circ$, $b = -0.8^\circ$) and ($l = 330^\circ$, $b = -2.5^\circ$). This survey was performed in the framework of the EROS II (Expérience de Recherche d'Objets Sombres) microlensing program. The variable stars were found among 1,913,576 stars that were monitored between April and June 1998 in two passbands, with an average of 60 measurements. A new period-search technique is proposed which makes use of a statistical variable that characterizes the overall regularity of the flux versus phase diagram (a toy variant of such a statistic is sketched after this entry). This method is well suited when the photometric data are unevenly distributed in time, as is our case. 1,362 objects whose luminosity varies were selected. Among them we identified 9 Cepheids, 19 RR Lyrae, 34 Miras, 176 eclipsing binaries and 266 semi-regular stars. Most of them are newly identified objects. The cross-identification with known catalogues has been performed. The mean distance of the RR Lyrae is estimated to be $\sim 4.9 \pm 0.3$ kpc, undergoing an average absorption of $\sim 3.4 \pm 0.2$ magnitudes. This distance is in good agreement with that of the disc stars which contribute to the microlensing source star population. Our catalogue and light curves are available electronically from the CDS, Strasbourg, and from our web site http://eros.in2p3.fr.
|
arxiv:astro-ph/0204246
|
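The paper's regularity statistic is not spelled out in this abstract, but the family it belongs to is easy to demonstrate: fold the light curve at a trial period and score the regularity of the flux-versus-phase diagram, here by the summed flux jumps between phase-adjacent points (a string-length-style stand-in, not the authors' exact statistic):

```python
import numpy as np

def regularity(t, flux, period):
    """Smaller = more regular phase diagram (string-length-style score)."""
    phase = (t / period) % 1.0
    f = flux[np.argsort(phase)]
    return np.sum(np.abs(np.diff(f)))   # total flux jump along the phase curve

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 90, 60))     # ~60 unevenly spaced epochs, as in the survey
true_p = 0.6
flux = np.sin(2 * np.pi * t / true_p) + 0.1 * rng.normal(size=t.size)

trials = np.linspace(0.3, 1.2, 2000)
scores = [regularity(t, flux, p) for p in trials]
print("best trial period:", trials[np.argmin(scores)])
```

Since the score only uses the phase ordering, uneven time sampling poses no problem, which is the point the abstract makes.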
Many marine invertebrates have larval stages covered in linear arrays of beating cilia, which propel the animal while simultaneously entraining planktonic prey. These bands are strongly conserved across taxa spanning four major superphyla, and they are responsible for the unusual morphologies of many invertebrate larvae. However, few studies have investigated their underlying hydrodynamics. Here, we study the ciliary bands of starfish larvae, and discover a beautiful pattern of slowly-evolving vortices that surrounds the swimming animals. Closer inspection of the bands reveals unusual ciliary "tangles" analogous to topological defects that break up and re-form as the animal adjusts its swimming stroke. Quantitative experiments and modeling demonstrate that these vortices create a physical tradeoff between feeding and swimming in heterogeneous environments, which manifests as distinct flow patterns or "eigen-strokes" representing each behavior, potentially implicating neuronal control of cilia. This quantitative interplay between larval form and hydrodynamic function may generalize to other invertebrates with ciliary bands, and illustrates the potential effects of active boundary conditions in other biological and synthetic systems.
|
arxiv:1611.01173
|
We develop a fast and accurate calculation method for ionization degrees in protoplanetary and circumplanetary disks including dust grains, and apply it to calculate the ionization degree of circumplanetary disks. It is important to understand the structure and evolution of protoplanetary/circumplanetary disks since they are thought to be the sites of planet/satellite formation. The turbulence that causes gas accretion is supposed to be driven by magnetorotational instability (MRI), which occurs only when the ionization degree is high enough for the magnetic field to be coupled to the gas. We calculate the ionization degrees in circumplanetary disks to estimate the sizes of MRI-inactive regions. We properly include the effect of dust grains because they efficiently capture charged particles and lower the ionization degree. Inclusion of dust grains complicates the reaction equations and requires expensive computation. In order to accelerate the calculation of ionization reactions, we develop a semianalytic method based on the charge distribution model proposed previously. This method enables us to study the ionization state of disks for a wide range of model parameters. For a previous model of the circum-Jovian disk, we find that an MRI-inactive region covers almost the entire disk even without dust grains. This suggests that the gas accretion rates in circumplanetary disks are much smaller than previously thought.
|
arxiv:1106.3528
|
The group of affine transformations with rational coefficients, $\mathrm{Aff}(\mathbb{Q})$, acts naturally on the real line, but also on the $p$-adic fields. The aim of this note is to show that all these actions are necessary and sufficient to represent bounded $\mu$-harmonic functions for a probability measure $\mu$ on $\mathrm{Aff}(\mathbb{Q})$ that is supported by a finitely generated subgroup, that is, to describe the Poisson boundary.
|
arxiv:math/0403197
|
We consider the downlink of a two-layer heterogeneous network, comprising macro cells (MCs) and small cells (SCs). The existing literature generally assumes independent placements of the access points (APs) in different layers; in contrast, we analyze a dependent placement where SC APs are placed at locations with poor service from the MC layer. Our goal is to obtain an estimate of the number of SCs required to maintain a target outage rate. Such an analysis is trivial if the MCs are located according to a Poisson point process (PPP), which provides a lower bound on performance. Here, we consider MCs placed on a hexagonal grid, which complements the PPP model by providing an upper bound on performance. We first provide accurate bounds for the average interference within an MC when SCs are not used. Then, by obtaining the outage areas, we estimate the number of SCs required within an MC to overcome outage. If resource allocation amongst SCs is not used, we show that the problem of outage is not solved completely, and the residual outage area depends on whether co-channel or orthogonal SCs are used. Simulations show that a much smaller residual outage area is obtained with orthogonal SCs.
|
arxiv:1609.06395
|
We employ the hypercentral approach to study the masses and magnetic moments of baryons containing a single charm or beauty quark. The confinement potential is assumed, in the hypercentral coordinates, to be of the Coulomb-plus-power-potential form.
|
arxiv:0710.3828
|
Edge computing, which leverages cloud resources to the proximity of user devices, is seen as the future infrastructure for distributed applications. However, developing and deploying edge applications that rely on cellular networks is burdensome. Such network infrastructures are often based on proprietary components, each with unique programming abstractions and interfaces. To facilitate straightforward deployment of edge applications, we introduce an OSS-based RAN on OTA commercial spectrum with DevOps capabilities. OSS allows software modifications and integrations of the system components, e.g., the EPC and edge hosts running applications, required for new data pipelines and optimizations not addressed in standardization. Such an OSS infrastructure enables further research and prototyping of novel end-user applications in an environment familiar to software engineers without a telecommunications background. We evaluated the presented infrastructure with E2E OTA testing, resulting in 7.5 Mb/s throughput and a latency of 21 ms, which shows that the presented infrastructure provides low latency for edge applications.
|
arxiv:1905.03883
|
Recently, the mobile industry has experienced an extreme increase in the number of its users. The GSM network, with the greatest worldwide number of users, succumbs to several security vulnerabilities. Although some of its security problems are addressed in its later generations, there are still many operators using 2G systems. This paper briefly presents the most important security flaws of the GSM network and its transport channels. It also provides some practical solutions to improve the security of currently available 2G systems.
|
arxiv:1002.3175
|
When do mucus films plug lung airways? Using reduced-order simulations of a large ensemble of randomly perturbed films, we show that the answer is not determined by just the film's volume. While very thin films always stay open and very thick films always plug, we find a range of intermediate films for which plugging is uncertain. The fastest-growing linear mode of the Rayleigh-Plateau instability ensures that the film's volume is divided among multiple humps. However, the nonlinear growth of these humps can occur unevenly, due to spontaneous axial sliding: a lucky hump can sweep up a disproportionate share of the film's volume and so form a plug. This sliding-induced plugging is robust and prevails with or without gravitational and ciliary transport.
|
arxiv:2504.01656
|
Graph-based causal discovery methods aim to capture conditional independencies consistent with the observed data and differentiate causal relationships from indirect or induced ones. Successful construction of graphical models of data depends on the assumption of causal sufficiency: that is, that all confounding variables are measured. When this assumption is not met, learned graphical structures may become arbitrarily incorrect, and effects implied by such models may be wrongly attributed, carry the wrong magnitude, or misrepresent the direction of correlation. Wide application of graphical models to increasingly less curated "big data" draws renewed attention to the unobserved confounder problem. We present a novel method that aims to control for the latent space when estimating a DAG by iteratively deriving proxies for the latent space from the residuals of the inferred model (a toy linear-Gaussian illustration follows this entry). Under mild assumptions, our method improves structural inference of Gaussian graphical models and enhances identifiability of the causal effect. In addition, when the model is being used to predict outcomes, it un-confounds the coefficients on the parents of the outcomes and leads to improved predictive performance when the out-of-sample regime is very different from the training data. We show that any improvement in prediction of an outcome is intrinsically capped and cannot rise beyond a certain limit as compared to the confounded model. We extend our methodology beyond GGMs to ordinal variables and nonlinear cases. Our R package provides both PCA and autoencoder implementations of the methodology, suitable for GGMs with some guarantees and for better performance in general cases but without such guarantees.
|
arxiv:2101.02332
|
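The residual-proxy idea can be illustrated in the simplest linear-Gaussian setting: extract the leading principal component of the residual matrix of the currently inferred model and add it as a proxy covariate. A toy sketch with one latent confounder and ten of its observed children; the package's actual interface and iterations are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5000, 10
u = rng.normal(size=n)                          # unobserved confounder
x = 0.8 * u + rng.normal(size=n)
y = 0.5 * x + 0.8 * u + rng.normal(size=n)      # true causal effect of x is 0.5
Z = np.outer(u, rng.uniform(0.5, 1.5, m)) + rng.normal(size=(n, m))

def ols_slope(target, cols):
    X = np.column_stack([np.ones(n)] + cols)
    return np.linalg.lstsq(X, target, rcond=None)[0][1]

print("confounded estimate:", ols_slope(y, [x]))      # roughly 0.9, biased upward

# Residuals of the inferred model; the Z-nodes have no inferred parents here,
# so their residuals are simply the centered variables.
R = Z - Z.mean(axis=0)
proxy = np.linalg.svd(R, full_matrices=False)[0][:, 0]   # leading PC ~ confounder
print("with latent proxy:", ols_slope(y, [x, proxy]))    # moves back toward 0.5
```

Because the children share the confounder, its leading component is recoverable from their residuals, and conditioning on that proxy largely removes the bias on the x coefficient.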
- dropouts ).
|
arxiv:1007.5396
|
Robotic blimps, as lighter-than-air aerial systems, offer prolonged duration and enhanced safety in human-robot interactions due to their buoyant lift. However, robust flight against environmental airflow disturbances remains a significant challenge, limiting the broader application of these robots. Drawing inspiration from the flight mechanics of birds and their ability to perch against natural wind, this article introduces RGBlimp-Q, a robotic gliding blimp equipped with a bird-inspired continuum arm. This arm allows for flexible attitude adjustments through moving-mass control to enhance disturbance resilience, while also enabling object capture by using claws to counteract environmental disturbances, similar to a bird. This article presents the design, modeling, and prototyping of RGBlimp-Q, thus extending the advantages of robotic blimps to more complex environments. To the best of the authors' knowledge, this is the first interdisciplinary design integrating continuum mechanisms onto robotic blimps. Experimental results from both indoor and outdoor settings validate the improved flight robustness against environmental disturbances offered by this novel design.
|
arxiv:2406.10810
|
The nuclear K-shell electron-capture (EC) and positron ($\beta^+$) decay constants, $\lambda_{EC}$ and $\lambda_{\beta^+}$, of H-like $^{140}$Pr$^{58+}$ and He-like $^{140}$Pr$^{57+}$ ions, measured recently in the ESR ion storage ring at GSI, were calculated using standard weak interaction theory. The calculated ratios $R = \lambda_{EC}/\lambda_{\beta^+}$ of the decay constants agree with the experimental values to within an accuracy of better than 3%.
|
arxiv:0711.3184
|
A new magnetic, eclipsing cataclysmic variable is identified as the counterpart of the X-ray source RX J0719.2+6557 detected during the ROSAT all-sky survey. The relative phasing of photometric and spectroscopic periods indicates a self-eclipsing system. Doppler tomography points to the heated surface of the secondary as a strong source of emission and of diskless accretion. Near-infrared spectroscopy revealed two unusually strong emission features originating on the heated side of the secondary.
|
arxiv:astro-ph/9609166
|
Recently, the advancement of self-supervised learning techniques, like masked autoencoders (MAE), has greatly influenced visual representation learning for images and videos. Nevertheless, it is worth noting that the predominant approaches in existing masked image/video modeling rely excessively on resource-intensive vision transformers (ViTs) as the feature encoder. In this paper, we propose a new approach termed VideoMAC, which combines video masked autoencoders with resource-friendly ConvNets. Specifically, VideoMAC employs symmetric masking on randomly sampled pairs of video frames. To prevent the issue of mask pattern dissipation, we utilize ConvNets implemented with sparse convolutional operators as encoders. Simultaneously, we present a simple yet effective masked video modeling (MVM) approach, a dual-encoder architecture comprising an online encoder and an exponential moving average target encoder, aimed to facilitate inter-frame reconstruction consistency in videos (a skeletal sketch of this dual-encoder update follows this entry). Additionally, we demonstrate that VideoMAC, empowering classical (ResNet) / modern (ConvNeXt) convolutional encoders to harness the benefits of MVM, outperforms ViT-based approaches on downstream tasks, including video object segmentation (+5.2% / 6.4% $\mathcal{J}\&\mathcal{F}$), body part propagation (+6.3% / 3.1% mIoU), and human pose tracking (+10.2% / 11.1% PCK@0.1).
|
arxiv:2402.19082
|
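The dual-encoder consistency idea is generic enough to sketch: the online encoder is trained by gradient descent, the target encoder's weights are an exponential moving average (EMA) of the online weights, and the two reconstructions of a symmetrically masked frame pair are pulled together. A skeletal PyTorch sketch with placeholder architecture and loss, not VideoMAC's actual model:

```python
import copy
import torch
import torch.nn as nn

online = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 3, 3, padding=1))   # placeholder encoder/decoder
target = copy.deepcopy(online)
for p in target.parameters():
    p.requires_grad_(False)            # target is never trained directly

@torch.no_grad()
def ema_update(m=0.999):
    """Target weights track an exponential moving average of the online weights."""
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(m).add_(po, alpha=1 - m)

frame_a, frame_b = torch.randn(2, 4, 3, 64, 64)          # a sampled frame pair
mask = (torch.rand(4, 1, 64, 64) > 0.75).float()         # same mask for both frames
rec_a = online(frame_a * mask)
with torch.no_grad():
    rec_b = target(frame_b * mask)
loss = nn.functional.mse_loss(rec_a, rec_b)              # inter-frame consistency
loss.backward()
ema_update()
print(float(loss))
```

The EMA target changes slowly, so the consistency objective cannot collapse as easily as it would if both encoders were updated by gradients.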
We prove that for all $n \in \mathbb{N}$, there exists a constant $C_n$ such that for all $d \in \mathbb{N}$, for every row contraction $T$ consisting of $d$ commuting $n \times n$ matrices and every polynomial $p$, the following inequality holds: \[ \|p(T)\| \le C_n \sup_{z \in \mathbb{B}_d} |p(z)|. \] We apply this result and the considerations involved in the proof to several open problems from the pertinent literature. First, we show that Gleason's problem cannot be solved contractively in $H^\infty(\mathbb{B}_d)$ for $d \ge 2$. Second, we prove that the multiplier algebra $\operatorname{Mult}(\mathcal{D}_a(\mathbb{B}_d))$ of the weighted Dirichlet space $\mathcal{D}_a(\mathbb{B}_d)$ on the ball is not topologically subhomogeneous when $d \ge 2$ and $a \in (0, d)$. In fact, we determine all the bounded finite-dimensional representations of the norm-closed subalgebra $A(\mathcal{D}_a(\mathbb{B}_d))$ of $\operatorname{Mult}(\mathcal{D}_a(\mathbb{B}_d))$ generated by the polynomials. Lastly, we also show that there exists a uniformly bounded nc holomorphic function on the free commutative ball $\mathfrak{C}\mathfrak{B}_d$ that is levelwise uniformly continuous but not globally uniformly continuous.
|
arxiv:2109.08550
|
Decays of radionuclides throughout the Earth's interior produce geothermal heat, but they are also a source of antineutrinos. The (angle-integrated) geoneutrino flux places an integral constraint on the terrestrial radionuclide distribution. In this paper, we calculate the angular distribution of geoneutrinos, which opens a window on the differential radionuclide distribution. We develop the general formalism for the neutrino angular distribution, and we present the inverse transformation which recovers the terrestrial radioisotope distribution given a measurement of the neutrino angular distribution. Thus, geoneutrinos not only allow a means to image the Earth's interior, but also offer a direct measure of the radioactive Earth, both (1) revealing the Earth's inner structure as probed by radionuclides, and (2) allowing for a complete determination of the radioactive heat generation as a function of radius. We present the geoneutrino angular distribution for the favored Earth model which has been used to calculate the geoneutrino flux. In this model the neutrino generation is dominated by decays in the Earth's mantle and crust; this leads to a very "peripheral" angular distribution, in which 2/3 of the neutrinos come from angles > 60 degrees away from the downward vertical. We note the possibility that the Earth's core contains potassium; different geophysical predictions lead to strongly varying, and hence distinguishable, central intensities (< 30 degrees from the downward vertical). Other uncertainties in the models, and prospects for observation of the geoneutrino angular distribution, are briefly discussed. We conclude by urging the development and construction of antineutrino experiments with angular sensitivity. (Abstract abridged.)
|
arxiv:hep-ph/0406001
|
Propagating, directionally dependent, polarized spin currents are created in an anisotropic planar semiconductor microcavity, via Rayleigh scattering of optically injected polaritons in the optical spin Hall regime. The influence of anisotropy results in the suppression or enhancement of the pseudospin precession of polaritons scattered into different directions. This is exploited to create intense spin currents by excitation on top of localized defects. A theoretical model considering the influence of the total effective magnetic field on the polariton pseudospin quantitatively reproduces the experimental observations.
|
arxiv:0906.0746
|
We introduce a preliminary exploration of AniBalloons, a novel form of chat-balloon animation aimed at enriching nonverbal affective expression in text-based communications. AniBalloons were designed using motion patterns extracted from affective animations and mapped to six commonly communicated emotions. An evaluation study with 40 participants assessed their effectiveness in conveying intended emotions and their perceived emotional properties. The results showed that 80% of the animations effectively conveyed the intended emotions. AniBalloons covered a broad range of emotional parameters, comparable to frequently used emojis, offering potential for a wide array of affective expressions in daily communication. The findings suggest AniBalloons' promise for enhancing emotional expressiveness in text-based communication and provide early insights for future affective design.
|
arxiv:2307.11356
|
The stochastic heavy ball momentum (SHBM) method has gained considerable popularity as a scalable approach for solving large-scale optimization problems. However, one limitation of this method is its reliance on prior knowledge of certain problem parameters, such as the singular values of a matrix. In this paper, we propose an adaptive variant of the SHBM method for solving stochastic problems that are reformulated from linear systems using user-defined distributions. Our adaptive SHBM (ASHBM) method utilizes iterative information to update the parameters, addressing an open problem in the literature regarding the adaptive learning of momentum parameters (a sketch of the non-adaptive baseline iteration follows this entry). We prove that our method converges linearly in expectation, with a better convergence bound compared to the basic method. Notably, we demonstrate that the deterministic version of our ASHBM algorithm can be reformulated as a variant of the conjugate gradient (CG) method, inheriting many of its appealing properties, such as finite-time convergence. Consequently, the ASHBM method can be further generalized to develop a brand-new framework of the stochastic CG (SCG) method for solving linear systems. Our theoretical results are supported by numerical experiments.
|
arxiv:2305.05482
|
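For context, the non-adaptive SHBM baseline on a linear system can be sketched as randomized Kaczmarz-style steps plus a momentum term; the paper's contribution, adapting the step and momentum parameters from iterate information, is not reproduced here, and the constants below are hand-picked for this toy problem:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 200, 20
A = rng.normal(size=(m, d))
x_true = rng.normal(size=d)
b = A @ x_true                       # consistent linear system Ax = b

x_prev = x = np.zeros(d)
alpha, beta = 1.0, 0.4               # hand-picked; ASHBM adapts these online
for _ in range(5000):
    i = rng.integers(m)              # sample one equation uniformly
    a_i = A[i]
    step = (a_i @ x - b[i]) / (a_i @ a_i) * a_i       # stochastic (Kaczmarz) direction
    x, x_prev = x - alpha * step + beta * (x - x_prev), x   # heavy ball update

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Choosing alpha and beta well requires spectral information about A, which is precisely the prior knowledge the adaptive variant is designed to avoid.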
This paper describes the system developed by the USTC-NELSLIP team for SemEval-2023 Task 2, Multilingual Complex Named Entity Recognition (MultiCoNER II). A method named Statistical Construction and Dual Adaptation of Gazetteer (SCDAG) is proposed for multilingual complex NER. The method first utilizes a statistics-based approach to construct a gazetteer. Secondly, the representations of gazetteer networks and language models are adapted by minimizing the KL divergence between them at both the sentence level and the entity level. Finally, these two networks are integrated for supervised named entity recognition (NER) training. The proposed method is applied to XLM-R with a gazetteer built from Wikidata, and shows great generalization ability across different tracks. Experimental results and detailed analysis verify the effectiveness of the proposed method. The official results show that our system ranked 1st on one track (Hindi) in this task.
|
arxiv:2305.02517
|
A lattice in Euclidean space is standard if it has a basis consisting of vectors whose norms equal its successive minima. In this paper, it is shown that with the $l^2$ norm, all lattices of dimension $n$ are standard if and only if $n \leqslant 4$. It is also proved that with an arbitrary norm, every lattice of dimension 1 or 2 is standard. An example of a non-standard lattice of dimension $n \geqslant 3$ is given when the lattice is equipped with the $l^1$ norm.
|
arxiv:1703.08765
|
We give a self-contained and enriched review of topological properties in the rapidly growing field of topological states of matter (TSM). This review is mainly focused on the beautiful interplay between the mathematics of topology and condensed matter physics that gives rise to TSM. Fiber bundle theory is a powerful concept for describing the non-trivial topological properties underlying a physical system, so we briefly present some motivation for fiber bundle theory and, following that, introduce several effective topological methods for judging whether a fiber bundle is trivial or not. Next, we give some topological invariants that characterize non-trivial TSM in non-interacting systems in all dimensions, which is called topological band theory. Following that, we review and generalize the topological response using topological field theory, namely Chern-Simons effective theory. Finally, the classification of free-fermion systems is studied via loop spaces and K-theory.
|
arxiv:1309.2056
|
The statistical method used in the analyses of measurements of the neutrino oscillation mixing angle $\theta_{13}$ by the Daya Bay collaboration is based on variational minimization of a $\chi^2$ function defined in terms of quantities of interest and pull factors, which are introduced to deal with the effects of systematic uncertainties (a schematic toy of this pull-factor structure follows this entry). For both experiments, the number of parameters that need to be determined is greater than the number of available data points (20 vs 6 for the Daya Bay and 12 vs 2 for the RENO). While the results for the mixing angle and the normalization factor were reported, results for the other parameters (pull factors) were omitted in their publications. There exist multiple sets of parameters from the minimization of the $\chi^2$ function. We investigate the sensitivity of the extracted mixing angle to this non-uniqueness of minimization results for the Daya Bay data using two methods of minimization. We report results for all parameters, including those of physics interest and pull factors. The obtained results for the mixing angle and the normalization factor are in agreement with those reported by the Daya Bay collaboration. Furthermore, we present plots of confidence level contours in the space of the mixing angle and normalization factor. We also present results from fittings using a reduced $\chi^2$ function with fewer parameters than the one employed by the Daya Bay collaboration.
|
arxiv:1801.04051
|
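The structure of such a fit is easy to sketch: the χ² compares observed and predicted rates, while nuisance ("pull") parameters shift the prediction and are penalized by their assumed uncertainties. A schematic toy with one pull factor, not the Daya Bay χ² itself; all numbers are made up:

```python
import numpy as np
from scipy.optimize import minimize

obs = np.array([980.0, 1010.0, 950.0])     # toy observed counts per detector
pred0 = np.array([1000.0, 1000.0, 1000.0]) # toy no-oscillation prediction
stat = np.sqrt(obs)                        # statistical errors
sigma_eps = 0.02                           # assumed normalization uncertainty

def chi2(params):
    s22t, eps = params                     # mixing parameter and one pull factor
    pred = pred0 * (1.0 - 0.1 * s22t) * (1.0 + eps)   # toy suppression model
    return np.sum(((obs - pred) / stat) ** 2) + (eps / sigma_eps) ** 2

res = minimize(chi2, x0=[0.1, 0.0])
print("best fit (s22t, eps):", res.x, " chi2_min:", res.fun)
```

With more pull factors than data points, as in the parameter counts quoted above, the minimum is no longer unique, which is exactly the degeneracy the authors investigate.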
We calculate the static polarizability of multilayer graphene and study the effect of stacking arrangement, carrier density, and onsite energy difference on graphene screening properties. At low densities, the energy spectrum of multilayer graphene is described by a set of chiral two-dimensional electron systems, and the associated chiral nature determines the screening properties of multilayer graphene, showing very different behavior depending on whether the chirality index is even or odd. As density increases, the energy spectrum follows that of monolayer graphene and thus the polarizability approaches that of monolayer graphene. The qualitative dependence of graphene polarizability on chirality and layering indicates the possibility of distinct graphene quantum phases as a function of the chirality index.
|
arxiv:1202.2132
|
It is folklore that a power bounded operator on a sequentially complete locally convex space generates a uniformly continuous $C_0$-semigroup which is given by the corresponding power series representation. Recently, Domański asked if in this result the assumption of being power bounded can be relaxed. We employ conditions introduced by Żelazko to give a weaker but still sufficient condition for generation, and apply our results to operators on classical function and sequence spaces.
|
arxiv:1506.08451
|
Robust estimators for linear regression require non-convex objective functions to shield against the adverse effects of outliers. This non-convexity brings challenges, particularly when combined with penalization in high-dimensional settings. Selecting hyper-parameters for the penalty based on a finite sample is a critical task. In practice, cross-validation (CV) is the prevalent strategy with good performance for convex estimators. Applied with robust estimators, however, CV often gives sub-par results due to the interplay between multiple local minima and the penalty. The best local minimum attained on the full training data may not be the minimum with the desired statistical properties. Furthermore, there may be a mismatch between this minimum and the minima attained in the CV folds. This paper introduces a novel adaptive CV strategy that tracks multiple minima for each combination of hyper-parameters and subsets of the data. A matching scheme is presented for correctly evaluating minima computed on the full training data using the best-matching minima from the CV folds (a toy illustration of such matching follows this entry). It is shown that the proposed strategy reduces the variability of the estimated performance metric, leads to smoother CV curves, and therefore substantially increases the reliability and utility of robust penalized estimators.
|
arxiv:2409.12890
|
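The minima-matching idea can be illustrated on a deliberately multi-modal one-dimensional objective: collect local minima from several starts on the full data and on each fold's training portion, then evaluate each full-data minimum via its nearest counterpart from the folds. A toy sketch; the paper's estimators and matching scheme are more elaborate:

```python
import numpy as np
from scipy.optimize import minimize

def loss(theta, data):
    # Non-convex toy loss (redescending) with several local minima.
    return np.mean(np.minimum((data - theta) ** 2, 1.0))

def local_minima(data, starts=np.linspace(-5, 5, 9)):
    sols = [minimize(loss, s, args=(data,)).x[0] for s in starts]
    return np.unique(np.round(sols, 2))

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 0.5, 80), rng.normal(2, 0.5, 120)])
folds = np.array_split(rng.permutation(data), 5)

for m in local_minima(data):                     # minima on the full training data
    cv = []
    for i in range(5):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        match = min(local_minima(train), key=lambda c: abs(c - m))  # best match
        cv.append(loss(match, folds[i]))         # test the matched minimum
    print(f"minimum {m:+.2f}: matched CV loss {np.mean(cv):.3f}")
```

Matching by proximity keeps each full-data minimum paired with the analogous fold minimum, instead of comparing it against whichever minimum a fold's optimizer happened to find.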
We have measured the persistent current in individual normal metal rings over a wide range of magnetic fields. From these data, we extract the first six cumulants of the single-ring persistent current distribution (a sketch of cumulant estimation from samples follows this entry). Our results are consistent with the theoretical prediction that this distribution should be nearly Gaussian (i.e., that these cumulants should be nearly zero) for diffusive metallic rings. This measurement highlights the particular sensitivity of persistent currents to the mesoscopic fluctuations within a single coherent volume.
|
arxiv:1204.3821
|
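For reference, the first six cumulants can be estimated from raw sample moments via the standard moment-cumulant recursion. A plug-in sketch (biased estimates; a careful analysis would use unbiased k-statistics):

```python
import numpy as np
from math import comb

def cumulants(x, order=6):
    """Plug-in cumulant estimates kappa_1..kappa_order from raw sample moments."""
    m = [np.mean(x ** k) for k in range(order + 1)]   # raw moments m[0..order]
    kappa = [0.0] * (order + 1)
    for n in range(1, order + 1):
        # Moment-cumulant recursion: m_n = sum_k C(n-1,k-1) kappa_k m_{n-k}.
        kappa[n] = m[n] - sum(comb(n - 1, k - 1) * kappa[k] * m[n - k]
                              for k in range(1, n))
    return kappa[1:]

x = np.random.default_rng(0).normal(size=1_000_000)
print(np.round(cumulants(x), 3))   # approx [0, 1, 0, 0, 0, 0] for a Gaussian
```

For a Gaussian distribution all cumulants beyond the second vanish, which is why near-zero higher cumulants are the signature the experiment tests for.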
By using the abstract version of Struwe's monotonicity trick, we prove the existence of a positive solution to the problem \[ (-\Delta)^s u + k u = f(x, u) \ \text{in } \mathbb{R}^n, \qquad u \in H^s(\mathbb{R}^n), \quad k > 0, \] where $f(x, t) : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}$ is a Carathéodory function, 1-periodic in $x$, which does not satisfy the Ambrosetti-Rabinowitz condition.
|
arxiv:1601.06281
|
We analyze a new gravitational lens, OAC-GL J1223-1239, serendipitously found in a deep I-band image taken with the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS). The lens is an $L_*$, edge-on S0 galaxy at $z = 0.4656$. The gravitational arc has a radius of 0.42 arcsec. We have determined the total mass and the dark matter (DM) fraction within the Einstein radius as a function of the lensed source redshift, which is presently unknown. For $z \sim 1.3$, which is in the middle of the redshift range plausible for the source according to some external constraints, we find the central velocity dispersion to be ~180 km/s. With this value, close to that obtained by means of the Faber-Jackson relation at the lens redshift, we compute a 30% DM fraction within the Einstein radius (given the uncertainty in the source redshift, the allowed range for the DM fraction is 25-35% in our lensing model). When compared with galaxies in the local universe, the lensing galaxy OAC-GL J1223-1239 seems to fall in the transition regime between massive DM-dominated galaxies and lower-mass, DM-deficient systems.
|
arxiv:0809.4125
|
The rapidly developing research field of organic analogue sensors aims to replace traditional semiconductors with naturally occurring materials. Photosensors, or photodetectors, change their electrical properties in response to the light levels they are exposed to. Organic photosensors can be functionalised to respond to specific wavelengths, from ultra-violet to red light. Performing cyclic voltammetry on fungal mycelium and fruiting bodies under different lighting conditions shows no appreciable response to changes in lighting condition. However, functionalising the specimen using PEDOT:PSS yields a photosensor that produces large, instantaneous current spikes when the light conditions change. Future work will look at interfacing this organic photosensor with an appropriate digital back-end for interpreting and processing the response.
|
arxiv:2003.07825
|
The occurrence of a neutron resonance energy is a common feature of unconventional superconductors. In turn, the low-temperature incommensurate sharp peaks observed in the inelastic neutron scattering of La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) correspond to four rods symmetrically distributed around $[\pi, \pi]$. Here it is shown that, within the virtual-electron pair quantum liquid recently introduced, the neutron resonance energy and the LSCO low-temperature incommensurate sharp peaks are generated by simple and closely related spinon processes. Our results indicate that in LSCO the neutron resonance energy either does not occur or corresponds to a lower energy $\approx 17$ meV.
|
arxiv:1005.0601
|
Statistical parametric mapping (SPM) is an integrated set of methods for testing hypotheses about the brain's structure and function, using data from imaging devices. These methods are implemented in an open source software package, SPM, which has been in continuous development for more than 30 years by an international community of developers. This paper reports the release of SPM 25.01, a major new version of the software that incorporates novel analysis methods, optimisations of existing methods, as well as improved practices for open science and software development.
|
arxiv:2501.12081
|
The prompt efficiency of gamma-ray bursts (GRBs) is an important clue to the emission mechanism producing the $\gamma$-rays. Previous estimates of the kinetic energy of the blast waves, based on the X-ray afterglow luminosity $L_X$, suggested that this efficiency is large, with values above 90% in some cases. This poses a problem for emission mechanisms, and in particular for the internal shocks model. These estimates are based, however, on the assumption that the X-ray emitting electrons are fast cooling and that their inverse Compton (IC) losses are negligible. The observed correlations between $L_X$ (and hence the blast wave energy) and $E_{\gamma, \rm iso}$, the isotropic equivalent energy in the prompt emission, have been considered as observational evidence supporting this analysis. It is reasonable that the prompt gamma-ray energy and the blast wave kinetic energy are correlated, and the observed correlation corroborates, therefore, the notion that $L_X$ is indeed a valid proxy for the latter. Recent findings suggest that the magnetic field in the afterglow shocks is significantly weaker than was earlier thought and that its equipartition fraction, $\epsilon_B$, could be as low as $10^{-4}$ or even lower. Motivated by these findings we reconsider the problem, taking now IC cooling into account. We find that the observed $L_X$-$E_{\gamma, \rm iso}$ correlation is recovered also when IC losses are significant. For small $\epsilon_B$ values the blast wave must be more energetic, and we find that the corresponding prompt efficiency is significantly smaller than previously thought. For example, for $\epsilon_B \sim 10^{-4}$ we infer a typical prompt efficiency of $\sim 15\%$.
|
arxiv:1606.00311
|
We consider nonparametric functional regression when both predictors and responses are functions. More specifically, we let $(X_1, Y_1), \ldots, (X_n, Y_n)$ be random elements in $\mathcal{F} \times \mathcal{H}$, where $\mathcal{F}$ is a semi-metric space and $\mathcal{H}$ is a separable Hilbert space. Based on a recently introduced notion of weak dependence for functional data, we establish almost sure convergence rates of both the Nadaraya-Watson estimator and the nearest neighbor estimator, in a unified manner (a toy functional Nadaraya-Watson sketch follows this entry). Several factors, including the functional nature of the responses, the assumptions on the functional variables using the Orlicz norm, and the desired generality on weakly dependent data, make the theoretical investigations more challenging and interesting.
|
arxiv:1111.6230
|
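For orientation, the functional Nadaraya-Watson estimator averages response curves with kernel weights computed from a semi-metric between predictor curves. A minimal sketch using discretized curves, the sup-distance as the semi-metric, and a Gaussian kernel; all of these choices are illustrative:

```python
import numpy as np

def nw_predict(X_train, Y_train, x_new, h):
    """Functional Nadaraya-Watson: kernel-weighted mean of response curves."""
    d = np.array([np.max(np.abs(xi - x_new)) for xi in X_train])  # sup semi-metric
    w = np.exp(-(d / h) ** 2)                                     # Gaussian kernel
    return (w[:, None] * Y_train).sum(axis=0) / w.sum()

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)
a = rng.uniform(0.5, 2.0, 100)
X = np.sin(np.outer(a, grid) * np.pi)                         # predictor curves
Y = np.cos(np.outer(a, grid) * np.pi) + 0.05 * rng.normal(size=(100, 50))

y_hat = nw_predict(X, Y, np.sin(np.pi * 1.3 * grid), h=0.3)   # predict a full curve
print(y_hat[:5])
```

The output is itself a curve (an element of the Hilbert space), which is what distinguishes this setting from scalar-response regression.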
Pull requests (PRs) are the main method for code contributions from external contributors on GitHub. PR review is an essential part of open source software development for maintaining the quality of software. Matching a new PR to an appropriate integrator makes PR reviewing more effective. However, PR-integrator matching is currently organized manually on GitHub. To make this process more efficient, we propose a topic-based integrator matching algorithm (TIMA) to predict highly relevant collaborators (the core developers) as the integrators of incoming PRs. TIMA takes full advantage of the textual semantics of PRs. To define the relationships between topics and collaborators, TIMA builds a relation matrix between topics and collaborators. According to the relevance between topics and collaborators, TIMA matches suitable collaborators as the PR integrator (a toy sketch of this matching step follows this entry).
|
arxiv:1710.10421
|
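The matching step can be sketched as scoring collaborators by the PR's topic mixture through a topic-collaborator relation matrix. A toy sketch with made-up names and numbers; the paper's topic model and matrix construction are not reproduced:

```python
import numpy as np

topics = ["ui", "network", "docs"]              # hypothetical topics
collaborators = ["alice", "bob", "carol"]       # hypothetical core developers

# Relation matrix R[t, c]: affinity of collaborator c for topic t, e.g. derived
# from the topics of the PRs each collaborator has previously integrated.
R = np.array([[0.7, 0.1, 0.2],
              [0.2, 0.8, 0.1],
              [0.1, 0.1, 0.7]])

pr_topics = np.array([0.6, 0.3, 0.1])           # topic mixture inferred from PR text
scores = pr_topics @ R                          # relevance score per collaborator
ranking = [collaborators[i] for i in np.argsort(scores)[::-1]]
print(dict(zip(collaborators, scores.round(3))), "->", ranking)
```

The top-ranked collaborator is then suggested as the integrator for the incoming PR.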
The relationship between star formation and infrared emission in galaxies will be investigated. If galaxies were simple objects and young stars were completely covered with dust, then all the absorbed light of the young stars would be re-emitted in the infrared, and from the infrared emission of galaxies we would infer the star formation rate (SFR) in them accurately. To show the complexities involved in real galaxies, we will use late-type spiral galaxies as a case study. We will show that the heating of the dust is done mainly by the UV radiation of the young stars, and therefore the infrared emission reveals the SFR in them. With a realistic model and its application to a number of galaxies, tight correlations are derived between SFR and total far-infrared luminosity on one hand, and dust mass and 850 micron flux on the other. Other diagnostics of the SFR are examined and it is shown that there is consistency among them. Thus, the SFR has been determined for galaxies of all Hubble types as well as for interacting starburst galaxies. Combining different methods, the star-formation history of the universe has been determined and will be shown. Finally, some early results from the Spitzer Space Telescope will be presented.
|
arxiv:astro-ph/0501373
|
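the abstract above reports tight correlations between sfr and far infrared luminosity; for orientation, a widely used calibration of this general kind is quoted below as background, not as the paper's own fit.

```latex
% SFR in solar masses per year from total FIR luminosity in erg/s
% (Kennicutt 1998); the coefficients in the abstract's own model may differ
\mathrm{SFR} \;\approx\; 4.5 \times 10^{-44}\, L_{\mathrm{FIR}}
```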
aims. to derive the mass profiles of the different luminous and dark components in clusters. methods. the cluster mass profile is determined by using the jeans equation applied to the projected phase - space distribution of about 3000 galaxies that are members of 59 nearby clusters from the eso nearby abell cluster survey. the baryonic and subhalo mass components are determined from the galaxies ' luminosity - density profiles through scaling relations between luminosities and baryonic and dark halo masses. the baryonic mass component associated with the intra - cluster gas is determined using x - ray data from rosat. results. the baryon - to - total mass fraction decreases from a value of 0. 12 near the center, to 0. 08 at the distance of 0. 15 virial radii, then it increases again, to reach a value of 0. 14 at the virial radius. diffuse, cluster - scale, dark matter dominates at all radii, but its contribution to the total mass content decreases outwards to the virial radius, where the dark matter in subhaloes may contribute up to 23 %, and the baryons 14 %, of the total mass. the dark mass and diffuse dark mass profiles are well fit by both cuspy and cored models. the subhalo mass distribution is not fit by either model.
|
arxiv:astro-ph/0511309
|
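for reference, the jeans equation invoked in the methods above takes the following standard spherical form; this is textbook material, not a formula quoted from the paper.

```latex
% spherical Jeans equation: tracer density \nu, radial dispersion \sigma_r,
% velocity anisotropy \beta, enclosed mass M(r)
\frac{\mathrm{d}\,(\nu\,\sigma_r^2)}{\mathrm{d}r}
  + \frac{2\,\beta(r)}{r}\,\nu\,\sigma_r^2
  = -\,\nu(r)\,\frac{G\,M(r)}{r^2},
\qquad
\beta \equiv 1 - \frac{\sigma_\theta^2 + \sigma_\varphi^2}{2\,\sigma_r^2}
```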
transport chain, which is a series of four protein complexes that transfer electrons from one complex to another, thereby releasing energy from nadh and fadh2 that is coupled to the pumping of protons ( hydrogen ions ) across the inner mitochondrial membrane ( chemiosmosis ), which generates a proton motive force. energy from the proton motive force drives the enzyme atp synthase to synthesize more atps by phosphorylating adps. the transfer of electrons terminates with molecular oxygen being the final electron acceptor. if oxygen were not present, pyruvate would not be metabolized by cellular respiration but would instead undergo a process of fermentation. the pyruvate is not transported into the mitochondrion but remains in the cytoplasm, where it is converted to waste products that may be removed from the cell. this serves the purpose of oxidizing the electron carriers so that they can perform glycolysis again and removing the excess pyruvate. fermentation oxidizes nadh to nad + so it can be re - used in glycolysis. in the absence of oxygen, fermentation prevents the buildup of nadh in the cytoplasm and provides nad + for glycolysis. this waste product varies depending on the organism. in skeletal muscles, the waste product is lactic acid. this type of fermentation is called lactic acid fermentation. in strenuous exercise, when energy demands exceed energy supply, the respiratory chain cannot process all of the hydrogen atoms joined by nadh. during anaerobic glycolysis, nad + regenerates when pairs of hydrogen combine with pyruvate to form lactate. lactate formation is catalyzed by lactate dehydrogenase in a reversible reaction. lactate can also be used as an indirect precursor for liver glycogen. during recovery, when oxygen becomes available, nad + attaches to hydrogen from lactate to form atp. in yeast, the waste products are ethanol and carbon dioxide. this type of fermentation is known as alcoholic or ethanol fermentation. the atp generated in this process is made by substrate - level phosphorylation, which does not require oxygen. = = = photosynthesis = = = photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism ' s metabolic activities via cellular respiration. this chemical energy
|
https://en.wikipedia.org/wiki/Biology
|
spiking - neural - networks ( snns ) are promising for edge devices since the event - driven operations of snns provide significantly lower power compared to analog - neural - networks ( anns ). although it is difficult to efficiently train snns, many techniques to convert trained anns to snns have been developed. however, after the conversion, a trade - off relation between accuracy and latency exists in snns, causing considerable latency on large - scale datasets such as imagenet. we present a technique, named tcl, to alleviate the trade - off problem, enabling accuracies of 73. 87 % ( vgg - 16 ) and 70. 37 % ( resnet - 34 ) for imagenet with a moderate latency of 250 cycles in snns.
|
arxiv:2008.04509
|
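one common ingredient in ann - to - snn conversion is replacing relu by a clipped activation whose learned ceiling later sets the spiking threshold; the sketch below illustrates that generic idea, and its details ( names, initialization ) are assumptions rather than the paper's exact tcl.

```python
import torch
import torch.nn as nn

class TrainableClip(nn.Module):
    """Clipped ReLU with a learnable ceiling, a generic stand-in for a
    trainable clipping layer used when converting a trained ANN to an SNN."""
    def __init__(self, init_ceiling=4.0):
        super().__init__()
        self.ceiling = nn.Parameter(torch.tensor(init_ceiling))

    def forward(self, x):
        # min(ReLU(x), ceiling): gradients reach the ceiling where activations saturate
        return torch.minimum(torch.relu(x), self.ceiling)

# after training, the learned ceiling can be mapped to the firing threshold of
# the corresponding spiking layer, trading accuracy against inference latency
```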
in this paper we study the low dimensional cohomology groups of hom - lie algebras and their relation with derivations, abelian extensions and crossed modules. on one hand, we introduce the notion of $ \ alpha $ - abelian extensions and we obtain a five term exact sequence in cohomology. on the other hand, we introduce crossed modules of hom - lie algebras showing their equivalence with cat $ ^ 1 $ - hom - lie algebras, and we introduce $ \ alpha $ - crossed modules to have a better understanding of the third cohomology group.
|
arxiv:1802.04061
|
the cosmological constant, if considered as a fundamental constant, provides an information treatment for gravitation problems, both cosmological and of black holes. the efficiency of this approach is shown via gedanken experiments for the information behavior of the horizons of schwarzschild - de sitter and kerr - de sitter metrics. a notion of entropy regarding any observer and in all possible non - extreme black hole solutions is suggested, linked also to the bekenstein bound. the suggested information approach forbids the existence of naked singularities.
|
arxiv:2103.14555
|
we present the characterization of 8 - 12 ghz whispering gallery mode resonators machined in high - quality sapphire crystals elaborated with different growth techniques. these microwave resonators are intended to constitute the reference frequency of ultra - stable cryogenic sapphire oscillators. we conducted systematic tests near 4 k on these crystals to determine the unloaded q - factor and the turnover temperature for whispering gallery modes in the 8 - 12 ghz frequency range. these characterizations show that high quality sapphire crystals elaborated with the heat exchange or the kyropoulos growth technique are both suitable to meet a fractional frequency stability better than 1x10 ^ - 15 for 1 s to 10, 000 s integration times.
|
arxiv:1504.02711
|
we consider mass correction effects on the polar angular distribution of a baryon - - antibaryon pair created in the chain decay process $ e ^ - e ^ + \ to j / \ psi \ to b \ bar b $, generalizing a previous analysis of carimalo. we show the relevance of the features of the baryon distribution amplitudes and estimate the electromagnetic corrections to the qcd results.
|
arxiv:hep-ph/9412205
|
using tidal disruption of globular clusters by the galactic center, we put limits on the total mass ever enclosed within the orbits of observed globular clusters. under the assumption that the rate of mass loss from the galaxy is steady, we then deduce a bound on this rate. in particular this bound can be used to constrain the galactic gravitational wave luminosity.
|
arxiv:astro-ph/0405201
|
we consider $ p $ - blocks with abelian defect groups and in the first part prove a relationship between their loewy lengths and those for blocks of normal subgroups of index $ p $. using this, we show that if $ b $ is a $ 2 $ - block of a finite group with abelian defect group $ d \ cong c _ { 2 ^ { a _ 1 } } \ times \ cdots \ times c _ { 2 ^ { a _ r } } \ times ( c _ 2 ) ^ s $, where $ a _ i > 1 $ for all $ i $ and $ r \ geq 0 $, then $ d < ll ( b ) \ leq 2 ^ { a _ 1 } + \ cdots + 2 ^ { a _ r } + 2s - r + 1 $, where $ | d | = 2 ^ d $. when $ s = 1 $ the upper bound can be improved to $ 2 ^ { a _ 1 } + \ cdots + 2 ^ { a _ r } + 2 - r $. together these give sharp upper bounds for every isomorphism type of $ d $. a consequence is that when $ d $ is an abelian $ 2 $ - group the loewy length is bounded above by $ | d | $ except when $ d $ is a klein - four group and $ b $ is morita equivalent to the principal block of $ a _ 5 $. we conjecture similar bounds for arbitrary primes and give evidence that they hold for principal $ 3 $ - blocks.
|
arxiv:1607.08795
|
this paper presents the theoretical basis of the fireball / blast wave model, and some implications of recent results on grb source models and cosmic - ray production from grbs. batse observations of the prompt gamma - ray luminous phase, and beppo - sax and long wavelength afterglow observations of grbs are briefly summarized. derivation of spectral and temporal indices of an adiabatic blast wave decelerating in a uniform surrounding medium in the limiting case of a nonrelativistic reverse shock, both for spherical and collimated outflows, is presented as an example of the general theory. external shock model fits for the afterglow lead to the conclusion that grb outflows are jetted. the external shock model also explains the temporal duration distribution and clustering of peak energies in prompt spectra of long - duration grbs, from which the redshift dependence of the grb source rate density can be derived. source models are reviewed in light of the constant energy reservoir result of frail et al. that implies a total grb energy of a few times 10 ^ { 51 } ergs and an average beaming fraction of ~ 1 / 500 of full sky. paczynski ' s isotropic hypernova model is ruled out. the vietri - stella model two - step collapse process is preferred over a hypernova / collapsar model in view of the x - ray observations of grbs and the constant energy reservoir result. second - order processes in grb blast waves can accelerate particles to ultra - high energies. grbs may be the sources of uhecrs and cosmic rays with energies above the knee of the cosmic ray spectrum. high - energy neutrino and gamma - ray observations with glast and ground - based gamma - ray telescopes will be crucial to test grb source models.
|
arxiv:astro-ph/0202254
|
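for readers reconstructing the derivation mentioned above, the standard scalings for an adiabatic blast wave decelerating in a uniform medium are quoted below; these are the classic sari - piran - narayan results, given for orientation rather than copied from this paper.

```latex
% bulk Lorentz factor of an adiabatic blast wave in a uniform medium
\Gamma \propto t^{-3/8}
% synchrotron flux for slow cooling with \nu_m < \nu < \nu_c and electron
% index p: the temporal and spectral indices are linked
F_\nu \propto t^{-3(p-1)/4}\,\nu^{-(p-1)/2}
```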
we demonstrate that a first order isotropic - to - nematic phase transition in liquid crystals can be successfully modeled within the generalized landau - de gennes theory by selecting an appropriate combination of elastic constants. the numerical simulations of the model established in this paper qualitatively reproduce the experimentally observed configurations that include interfaces and topological defects in the nematic phase.
|
arxiv:1902.06342
|
this paper is mainly concerned with the solutions to both forward and backward mean - field stochastic partial differential equations and the corresponding optimal control problem for mean - field stochastic partial differential equations. we first prove the continuous dependence theorems of forward and backward mean - field stochastic partial differential equations and show the existence and uniqueness of solutions to them. then we establish necessary and sufficient optimality conditions of the control problem in the form of pontryagin ' s maximum principles. to illustrate the theoretical results, we apply stochastic maximum principles to study an example, an infinite - dimensional linear - quadratic control problem of mean - field type. further, an application to a cauchy problem for a controlled stochastic linear pde of mean - field type is studied.
|
arxiv:1610.02486
|
in mathematics, a submersion is a differentiable map between differentiable manifolds whose differential is everywhere surjective. it is a basic concept in differential topology, dual to that of an immersion. = = definition = = let m and n be differentiable manifolds, and let f : m → n { \ displaystyle f \ colon m \ to n } be a differentiable map between them. the map f is a submersion at a point p ∈ m { \ displaystyle p \ in m } if its differential d f p : t p m → t f ( p ) n { \ displaystyle df _ { p } \ colon t _ { p } m \ to t _ { f ( p ) } n } is a surjective linear map. in this case, p is called a regular point of the map f ; otherwise, p is a critical point. a point q ∈ n { \ displaystyle q \ in n } is a regular value of f if all points p in the preimage f ^ { - 1 } ( q ) { \ displaystyle f ^ { - 1 } ( q ) } are regular points. a differentiable map f that is a submersion at each point p ∈ m { \ displaystyle p \ in m } is called a submersion. equivalently, f is a submersion if its differential d f p { \ displaystyle df _ { p } } has constant rank equal to the dimension of n. some authors use the term critical point to describe a point where the rank of the jacobian matrix of f at p is not maximal ; indeed, this is the more useful notion in singularity theory. if the dimension of m is greater than or equal to the dimension of n, then these two notions of critical point coincide. however, if the dimension of m is less than the dimension of n, all points are critical according to the definition above ( the differential cannot be surjective ), but the rank of the jacobian may still be maximal ( if it is equal to dim m ). the definition given above is the more commonly used one, e. g., in the formulation of sard ' s theorem. = = submersion theorem = = given a submersion f : m → n { \ displaystyle f \ colon m \ to n } between smooth manifolds of dimensions m { \ displaystyle m } and n { \ displaystyle n }, for each x ∈ m
|
https://en.wikipedia.org/wiki/Submersion_(mathematics)
|
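a concrete example may help anchor the definition above; the coordinate projection is the canonical submersion, and this is standard material rather than part of the excerpt.

```latex
% canonical example: for m >= n, the projection onto the first n coordinates
\pi : \mathbb{R}^m \to \mathbb{R}^n, \qquad \pi(x_1,\dots,x_m) = (x_1,\dots,x_n)
% d\pi_p = (I_n \mid 0) is surjective at every p, so \pi is a submersion;
% every q is a regular value and \pi^{-1}(q) is a submanifold of dimension m - n
```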
we present a unified description of the vector meson and dilepton production in elementary and in heavy ion reactions. the production of vector mesons ( $ \ rho, \ omega $ ) is described via the excitation of nuclear resonances ( $ r $ ). the theoretical framework is an extended vector meson dominance model ( evmd ). the treatment of the resonance decays $ r \ longmapsto nv $ with arbitrary spin is covariant and kinematically complete. the evmd thereby includes excited vector meson states in the transition form factors. this ensures correct asymptotics and provides a unified description of photonic and mesonic decays. the resonance model is successfully applied to the $ \ omega $ production in $ p + p $ reactions. the same model is applied to the dilepton production in elementary reactions ( $ p + p, p + d $ ). corresponding data are well reproduced. however, when the model is applied to heavy ion reactions in the bevalac / sis energy range the experimental dilepton spectra measured by the dls collaboration are significantly underestimated at small invariant masses. as a possible solution of this problem the destruction of quantum interference in a dense medium is discussed. a decoherent emission through vector meson decays enhances the corresponding dilepton yield in heavy ion reactions. in the vicinity of the $ \ rho / \ omega $ - peak the reproduction of the data further requires a substantial collisional broadening of the $ \ rho $ and in particular of the $ \ omega $ meson.
|
arxiv:nucl-th/0305015
|
compared with contact - based fingerprint acquisition techniques, contactless acquisition has the advantages of less skin distortion, larger fingerprint area, and hygienic acquisition. however, perspective distortion is a challenge in contactless fingerprint recognition, which changes ridge orientation, frequency, and minutiae location, and thus causes degraded recognition accuracy. we propose a learning based shape from texture algorithm to reconstruct a 3d finger shape from a single image and unwarp the raw image to suppress perspective distortion. experimental results on contactless fingerprint databases show that the proposed method has high 3d reconstruction accuracy. matching experiments on contactless - contact and contactless - contactless matching prove that the proposed method improves matching accuracy.
|
arxiv:2205.00967
|
3d single object tracking is a key issue for robotics. in this paper, we propose a transformer module called point - track - transformer ( ptt ) for point cloud - based 3d single object tracking. the ptt module contains three blocks for feature embedding, position encoding, and self - attention feature computation. feature embedding aims to place features closer in the embedding space if they have similar semantic information. position encoding is used to encode the coordinates of point clouds into high - dimensional distinguishable features. self - attention generates refined attention features by computing attention weights. in addition, we embed the ptt module into the open - source state - of - the - art method p2b to construct ptt - net. experiments on the kitti dataset reveal that our ptt - net surpasses the state - of - the - art by a noticeable margin ( ~ 10 % ). additionally, ptt - net could achieve real - time performance ( ~ 40fps ) on an nvidia 1080ti gpu. our code is open - sourced for the robotics community at https : / / github. com / shanjiayao / ptt.
|
arxiv:2108.06455
|
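the three blocks named above map naturally onto a few lines of pytorch; the sketch below is an illustrative reading of that structure, with layer sizes and the mlp position encoder being assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PTTBlock(nn.Module):
    """Feature embedding + position encoding + self-attention over points."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.embed = nn.Linear(dim, dim)                 # feature embedding
        self.pos = nn.Sequential(nn.Linear(3, dim),      # encode xyz coordinates
                                 nn.ReLU(),
                                 nn.Linear(dim, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats, xyz):
        # feats: (B, N, dim) point features, xyz: (B, N, 3) coordinates
        x = self.embed(feats) + self.pos(xyz)
        refined, _ = self.attn(x, x, x)                  # self-attention refinement
        return refined
```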
we consider the geodesic flow for a rank one non - positive curvature closed manifold. we prove an asymptotic version of the central limit theorem for families of measures constructed from regular closed geodesics converging to the bowen - margulis - knieper measure of maximal entropy. the technique expands on ideas of denker, senti and zhang, who proved this type of asymptotic lindeberg central limit theorem on periodic orbits for expansive maps with the specification property. we extend these techniques from the uniform to the non - uniform setting, and from discrete - time to continuous - time. we consider h \ " older observables subject only to the lindeberg condition and a weak positive variance condition. if we assume a natural strengthened positive variance condition, the lindeberg condition is always satisfied. our results extend to dynamical arrays of h \ " older observables, and to weighted periodic orbit measures which converge to a unique equilibrium state.
|
arxiv:2008.08537
|
the dynamical nature of the josephson vortex ( jv ) system in bi $ _ 2 $ sr $ _ 2 $ cacu $ _ 2 $ o $ _ { 8 + \ delta } $ ( bi2212 ) has been investigated in the presence of the c - axis current with magnetic field alignments very close to the $ ab $ - plane. as a function of magnetic field, the c - axis jv flux flow resistance oscillates periodically in accordance with the proposed jv triangular structure. we observe that this oscillation period becomes doubled above a certain field, indicating a structure transition from a triangular to a square structure. this transition field becomes lower in junctions with smaller width perpendicular to the external field. we interpret this phenomenon as the effect of the edge deformation of the jv lattice due to the surface current of intrinsic josephson junctions, as pointed out by koshelev.
|
arxiv:cond-mat/0503498
|
in this note we describe a unique linear embedding of a prime fano 4 - fold f of genus 10 into the grassmannian g ( 3, 6 ). we use this to construct some moduli spaces of bundles on linear sections of f. in particular the moduli space of bundles with mukai vector ( 3, l, 3 ) on a generic polarized k3 surface ( s, l ) of genus 10 is constructed as a double cover of the projective plane branched over a smooth sextic.
|
arxiv:1005.5528
|
with its very wide energy band ( 0. 1 - 100 kev ), bepposax has played a fundamental role in the blazar field, allowing in particular a better study of the highest synchrotron peaked objects. here we summarize the results of an observational program performed with the aim to find and study more extreme bl lac sources. we discuss the seds of the observed objects and their impact on the " blazar sequence " scenario, and consider their relevance as possible tev emitting sources.
|
arxiv:astro-ph/0206482
|
we consider a decision aggregation problem with two experts who each make a binary recommendation after observing a private signal about an unknown binary world state. an agent, who does not know the joint information structure between signals and states, sees the experts ' recommendations and aims to match the action with the true state. in this setting, we study whether additionally supplying second - order information ( each expert ' s forecast of the other ' s recommendation ) could enable better aggregation. we adopt a minimax regret framework to evaluate the aggregator ' s performance, by comparing it to an omniscient benchmark that knows the joint information structure. with general information structures, we show that second - order information provides no benefit : no aggregator can improve over a trivial aggregator, which always follows the first expert ' s recommendation. however, positive results emerge when we assume experts ' signals are conditionally independent given the world state. first, when the aggregator is deterministic, we present a robust aggregator that leverages second - order information, which can significantly outperform counterparts without it. second, when two experts are homogeneous, by adding a non - degenerate assumption on the signals, we demonstrate that random aggregators using second - order information can surpass optimal ones without it. in the remaining settings, the second - order information is not beneficial. we also extend the above results to the setting where the aggregator ' s utility function is more general.
|
arxiv:2311.14094
|
we compute the leading order hadronic vacuum polarization ( lo - hvp ) contribution to the anomalous magnetic moment of the muon, $ ( g _ \ mu - 2 ) $, using lattice qcd. calculations are performed with four flavors of 4 - stout - improved staggered quarks, at physical quark masses and at six values of the lattice spacing down to 0. 064 ~ fm. all strong isospin breaking and electromagnetic effects are accounted for to leading order. the infinite - volume limit is taken thanks to simulations performed in volumes of sizes up to 11 ~ fm. our result for the lo - hvp contribution to $ ( g _ \ mu - 2 ) $ has a total uncertainty of 0. 8 \ %. compared to the result of the dispersive approach for this contribution, ours significantly reduces the tension between the standard model prediction for $ ( g _ \ mu - 2 ) $ and its measurement.
|
arxiv:2002.12347
|
galaxy morphologies in atomic hydrogen ( hi ) and in the ultraviolet ( uv ) are closely linked. this has motivated their combined use to quantify morphology over the full hi disk for both hi and uv imaging. we apply galaxy morphometrics : concentration, asymmetry, gini, m20 and multimode - intensity - deviation statistics to the first moment - 0 maps of the wallaby survey of galaxies in the hydra cluster center. taking advantage of this new hi survey, we apply the same morphometrics over the full hi extent to archival galex fuv and nuv data to explore how well hi truncation, extended ultraviolet ( xuv ) disks and other morphological phenomena can be captured using pipeline wallaby data products. extended hi and uv disks can be identified relatively straightforwardly from their respective concentrations. combined with wallaby hi, even the shallowest galex data is sufficient to identify xuv disks. our second goal is to isolate galaxies undergoing ram - pressure stripping in the hi morphometric space. we employ four different machine learning techniques : a decision tree, a k - nearest neighbour classifier, a support - vector machine, and a random forest. up to 80 % precision and recall are possible, with the random forest giving the most robust results.
|
arxiv:2302.07963
|
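a minimal version of the best - performing classifier mentioned above can be put together with scikit - learn; the feature table here is a random placeholder standing in for the measured hi morphometrics, so the numbers it prints are meaningless.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((60, 5))        # placeholder: concentration, asymmetry, gini, m20, ...
y = rng.integers(0, 2, 60)     # placeholder: 1 = ram-pressure stripped, 0 = undisturbed

clf = RandomForestClassifier(n_estimators=200, random_state=0)
precision = cross_val_score(clf, X, y, scoring="precision", cv=5)
recall = cross_val_score(clf, X, y, scoring="recall", cv=5)
print(f"precision {precision.mean():.2f}, recall {recall.mean():.2f}")
```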
in this work we construct a unified model of dark energy and dark matter. this is done with the following three elements : a gravitating scalar field phi with a non - conventional kinetic term, as in the string theory tachyon ; an arbitrary potential, v ( phi ) ; and two measures - - a metric measure ( sqrt { - g } ) and a non - metric measure ( phi ). the model has two interesting features : ( i ) for potentials which are unstable and would give rise to a tachyonic scalar field, this model can stabilize the scalar field. ( ii ) the form of the dark energy and dark matter that results from this model is fairly insensitive to the exact form of the scalar field potential.
|
arxiv:1205.1056
|
we derive a combinatorial sufficient condition for a partial correlation hypersurface in the parameter space of a directed gaussian graphical model to be nonsingular, and speculate on whether this condition can be used in algorithms for learning the graph. since the condition is fulfilled in the case of a complete dag on any number of vertices, the result implies an affirmative answer to a question raised by lin - uhler - sturmfels - b \ " uhlmann.
|
arxiv:1806.00320
|
we study certain top intersection products on the hilbert scheme of points on a nonsingular surface relative to an effective smooth divisor. we find a formula relating these numbers to the corresponding intersection numbers on the non - relative hilbert schemes. in particular, we obtain a relative version of the explicit formula found by carlsson - okounkov for the euler class of the twisted tangent bundle of the hilbert schemes.
|
arxiv:1504.01107
|
can we teach a robot to recognize and make predictions for activities that it has never seen before? we tackle this problem by learning models for video from text. this paper presents a hierarchical model that generalizes instructional knowledge from large - scale text corpora and transfers the knowledge to video. given a portion of an instructional video, our model recognizes and predicts coherent and plausible actions multiple steps into the future, all in rich natural language. to demonstrate the capabilities of our model, we introduce the \ emph { tasty videos dataset v2 }, a collection of 4022 recipes for zero - shot learning, recognition and anticipation. extensive experiments with various evaluation metrics demonstrate the potential of our method for generalization, given limited video data for training models.
|
arxiv:2106.03158
|
three isostructural cyano - bridged heptanuclear complexes, [ { cuii ( saldmen ) ( h2o ) } 6 { miii ( cn ) 6 } ] ( clo4 ) 3 $ \ cdotp $ 8h2o ( m = feiii 2 ; coiii, 3 ; criii 4 ), have been obtained by reacting the binuclear copper ( ii ) complex, [ cu2 ( saldmen ) 2 ( mu - h2o ) ( h2o ) 2 ] ( clo4 ) 2 $ \ cdotp $ 2h2o 1, with k3 [ co ( cn ) 6 ], k4 [ fe ( cn ) 6 ], and, respectively, k3 [ cr ( cn ) 6 ] ( hsaldmen is the schiff base resulting from the condensation of salicylaldehyde with n, n - dimethylethylenediamine ). a unique octameric water cluster, with a bicyclo [ 2, 2, 2 ] octane - like structure, is sandwiched between the heptanuclear cations in 2, 3 and 4. the cryomagnetic investigations of compounds 2 and 4 reveal ferromagnetic couplings of the central feiii or criii ions with the cuii ions ( jcufe = + 0. 87 cm - 1, jcucr = + 30. 4 cm - 1 ). the intramolecular cu - cu exchange interaction in 3, across the diamagnetic cobalt ( iii ) ion, is - 0. 3 cm - 1. the solid - state 1h - nmr spectra of compounds 2 and 3 have been investigated.
|
arxiv:1006.0389
|
careful first - principles density functional calculations reveal the importance of hexagonal versus cubic stacking of close - packed planes of pd as far as local magnetic properties are concerned. we find that, contrary to the stable face centered cubic phase, which is paramagnetic, the hexagonal close - packed phase of pd is ferromagnetic with a magnetic moment of 0. 35 $ \ mu _ { b } $ / atom. our results show that two - dimensional defects with local hcp stacking, like twin boundaries and stacking faults, in the otherwise fcc pd structure, increase the magnetic susceptibility. the ( 111 ) surface also increases the magnetic susceptibility and it becomes ferromagnetic in combination with an individual stacking fault or twin boundary close to it. on the contrary, we find that the ( 100 ) surface decreases the tendency to ferromagnetism. the results are consistent with the magnetic moment recently observed in small pd nanoparticles, with a large surface area and a high concentration of two - dimensional stacking defects.
|
arxiv:cond-mat/0601658
|
we propose an extension of the wenzel - kramers - brillouin ( wkb ) approximation for solving the schr \ " odinger equation. a set of coupled differential equations is obtained by considering an ansatz of the wave function with an auxiliary condition on gauging its first derivative. it is shown that the alternating perturbation method can decouple the set of differential equations, yielding the well - known bremmer series, and in addition, by virtue of the improvement on amplitudes, can refine the phase of the wave function in a sequence of recursive diagonalizations. we therefore find a general quantization formula in which geometric - optics - like physics is encoded. whenever the ratio of the differential reflection coefficient and the classical momentum remains constant, we show that our general quantization formula will reduce to the closed - form quantization condition that agrees with the result obtained by re - summing the perturbative wkb series to all orders.
|
arxiv:2207.00935
|
pervasive pre - riesz spaces are defined by means of vector lattice covers. to avoid the computation of a vector lattice cover, we give two distinct intrinsic characterizations of pervasive pre - riesz spaces. we introduce weakly pervasive pre - riesz spaces and observe that this property can be easily checked in examples. we relate weakly pervasive pre - riesz spaces to pre - riesz spaces with the riesz decomposition property.
|
arxiv:1803.07454
|
we study the possibility of probing the scale of left - right symmetry breaking in the context of left - right symmetric models ( lrsm ). in lrsm, the right handed fermions transform as doublets under a newly introduced $ su ( 2 ) _ r $ gauge symmetry. this, along with a discrete parity symmetry $ \ mathcal { p } $ ensuring identical gauge couplings of left and right sectors make the model left - right symmetric, providing a dynamical origin of parity violation in electroweak interactions via spontaneous symmetry breaking. the spontaneous breaking of $ \ mathcal { p } $ leads to the formation of domain walls in the early universe. these walls, if made unstable by introducing an explicit parity breaking term, generate gravitational waves ( gw ) with a spectrum characterized by the wall tension or the spontaneous $ \ mathcal { p } $ breaking scale, and the explicit $ \ mathcal { p } $ breaking term. considering explicit $ \ mathcal { p } $ breaking terms to originate from planck suppressed operators provides one - to - one correspondence between the scale of left - right symmetry and sensitivities of near future gw experiments. this is not only complementary to collider and low energy probes of tev scale lrsm but also to gw generated from first order phase transition in lrsm with different spectral shape, peak frequencies as well as symmetry breaking scales.
|
arxiv:2205.12220
|
the ( signed ) projective cubes, as a special class of graphs closely related to the hypercubes, are at the crossroads of geometry, algebra, discrete mathematics and linear algebra. defined as cayley graphs on binary groups, they represent basic linear dependencies. capturing the four - color theorem as a homomorphism target, they show how mappings of discrete objects, namely graphs, may relate to special mappings of the plane to projective spaces of higher dimensions. in this work, viewed as a signed graph, first we present a number of equivalent definitions, each of which leads to a different development. in particular, the new notion of the common product of signed graphs is introduced, which captures both the cartesian and tensor products of graphs. we then have a look at some of their homomorphism properties. we first introduce an inverse technique for the basic no - homomorphism lemma, using which we show that every signed projective cube is of circular chromatic number 4. then, observing that the 4 - color theorem is about mapping planar graphs into the signed projective cube of dimension 2, we study some conjectures in extension of the 4ct. toward a better understanding of these conjectures we present the notion of the extended double cover as a key operation in formulating the conjectures. with a deeper look into the connection between some of these graphs and algebraic geometry, we discover that the projective cube of dimension 4, widely known as the clebsch graph, but also known as the greenwood - gleason graph, is the intersection graph of the 16 straight lines of an algebraic surface known as the segre surface, which is a del pezzo surface of degree 4. we note that an algebraic surface known as the clebsch surface is one of the most symmetric presentations of a cubic surface. recall that each smooth cubic surface contains 27 lines. hence, hereafter, we believe a proper name for this graph should be the segre graph.
|
arxiv:2406.10814
|
digital imaging has been steadily improving over the past decades and we are moving towards a wide use of multi - and hyperspectral cameras. a key component of such imaging systems are color filter arrays, which define the spectrum of light detected by each camera pixel. hence, it is essential to develop a variable, robust and scalable way of controlling the transmission of light. nanostructured surfaces, also known as metasurfaces, offer a promising solution as their transmission spectra can be controlled by shaping the wavelength - dependent scattering properties of their constituting elements. here we present metasurfaces based on silicon nanodisks, which provide filter functions with transmission amplitudes reaching 70 - 90 %, well suited for rgb and cmy color filter arrays, the initial stage towards the further development of hyperspectral filters. we suggest and discuss possible ways to expand the color gamut and improve the color values of such optical filters.
|
arxiv:2004.06423
|
target - specific peptides, such as conotoxins, exhibit exceptional binding affinity and selectivity toward ion channels and receptors. however, their therapeutic potential remains underutilized due to the limited diversity of natural variants and the labor - intensive nature of traditional optimization strategies. here, we present creopep, a deep learning - based conditional generative framework that integrates masked language modeling with a progressive masking scheme to design high - affinity peptide mutants while uncovering novel structural motifs. creopep employs an integrative augmentation pipeline, combining foldx - based energy screening with temperature - controlled multinomial sampling, to generate structurally and functionally diverse peptides that retain key pharmacological properties. we validate this approach by designing conotoxin inhibitors targeting the $ \ alpha $ 7 nicotinic acetylcholine receptor, achieving submicromolar potency in electrophysiological assays. structural analysis reveals that creopep - generated variants engage in both conserved and novel binding modes, including disulfide - deficient forms, thus expanding beyond conventional design paradigms. overall, creopep offers a robust and generalizable platform that bridges computational peptide design with experimental validation, accelerating the discovery of next - generation peptide therapeutics.
|
arxiv:2505.02887
|
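one mechanical piece of the pipeline above, temperature - controlled multinomial sampling over a masked position's amino - acid logits, can be sketched directly; the logits and alphabet here are placeholders, and the paper's full generator involves much more ( progressive masking, foldx screening ).

```python
import numpy as np

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")

def sample_residue(logits, temperature=1.0, rng=None):
    """Draw one amino acid for a masked position from temperature-scaled logits."""
    rng = rng or np.random.default_rng()
    z = np.asarray(logits) / temperature
    p = np.exp(z - z.max())
    p /= p.sum()                   # softmax with numerical stabilization
    return AMINO_ACIDS[rng.choice(len(AMINO_ACIDS), p=p)]

# lower temperature -> greedier, more conservative mutants;
# higher temperature -> more diverse candidates for downstream energy screening
print(sample_residue(np.random.randn(20), temperature=0.8))
```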
we present a large scale exact diagonalization study of the one dimensional spin $ 1 / 2 $ heisenberg model in a random magnetic field. in order to access properties at varying energy densities across the entire spectrum for system sizes up to $ l = 22 $ spins, we use a spectral transformation which can be applied in a massively parallel fashion. our results allow for an energy - resolved interpretation of the many body localization transition including the existence of an extensive many - body mobility edge. the ergodic phase is well characterized by gaussian orthogonal ensemble statistics, volume - law entanglement, and a full delocalization in the hilbert space. conversely, the localized regime displays poisson statistics, area - law entanglement and non ergodicity in the hilbert space where a true localization never occurs. we perform finite size scaling to extract the critical edge and exponent of the localization length divergence.
|
arxiv:1411.0660
|
this paper has been withdrawn by the authors due to unlikely results.
|
arxiv:0807.4352
|
dataset licensing is currently an issue in the development of machine learning systems, which rely mostly on publicly available datasets. however, since the images in publicly available datasets are mainly obtained from the internet, some images are not available for commercial use. furthermore, developers of machine learning systems often do not pay attention to the license of a dataset when training machine learning models with it. in summary, the licensing of datasets for machine learning systems is incomplete in all aspects at this stage. our investigation of two collection datasets revealed that most of the current datasets lack licenses, and this lack makes it impossible to determine their commercial availability. therefore, we decided to take a more scientific and systematic approach to investigate the licensing of datasets and of the machine learning systems that use them, to make licensing easier and more compliant for future developers of machine learning systems.
|
arxiv:2303.13735
|
we study the thermodynamic behavior of static and spherically symmetric hairy black holes in massive gravity. in this case, the black hole is enclosed in a spherical cavity with a fixed temperature on the surface. it is observed that these black holes have a phase transition similar to the liquid - gas phase transition of a van der waals fluid. also, by treating the cosmological constant $ \ lambda $ as a thermodynamic pressure $ p $, we study the thermodynamic behavior of charged anti - de sitter black holes in an ensemble with the pressure $ p $ and the electric potential $ \ phi $ as the natural variables. a second order phase transition is observed to take place for all values of the electric potential $ \ phi $.
|
arxiv:1409.6839
|
the interstellar medium is observed in a hierarchical fractal structure over several orders of magnitude in scale. aiming to understand the origin of this structure, we carry out numerical simulations of molecular cloud fragmentation, taking into account self - gravity, dissipation and energy input. self - gravity is computed through a tree code, with fully or quasi periodic boundary conditions. energy dissipation is introduced through cloud - cloud inelastic collisions. several schemes are tested for the energy input. it appears that energy input from galactic shear allows the gas to reach a stationary clumped state, avoiding final collapse. when a stationary turbulent cascade is established, it is possible to carry out meaningful statistical studies of the data, such as the fractal dimension of the mass distribution.
|
arxiv:astro-ph/0007119
|
long - term pain conditions after surgery and patients ' responses to pain relief medications are not yet fully understood. while recent studies developed an index for the nociception level of patients under general anesthesia, based on multiple physiological parameters, it remains unclear whether and how the dynamics of these parameters indicate long - term post - operative pain ( pop ). to extract unbiased and interpretable descriptions of how physiological parameter dynamics change over time and across patients in response to surgical procedures, we employed a multivariate - temporal analysis. we demonstrate that the main features of intra - operative physiological responses can be used to predict long - term pop. we propose to use a complex higher - order svd method to accurately decompose the patients ' physiological responses into multivariate structures evolving in time. we used intra - operative vital signs of 175 patients from a mixed surgical cohort to extract three interconnected, low - dimensional complex - valued descriptions of patients ' physiological responses : multivariate factors, reflecting sub - physiological parameters ; temporal factors, reflecting common intra - surgery temporal dynamics ; and patient factors, describing patient - to - patient changes in physiological responses. adoption of complex - hosvd allowed us to clarify the dynamic correlation structure included in intra - operative physiological responses. instantaneous phases of the complex - valued physiological responses within the subspace of principal descriptors enabled us to discriminate between mild and severe levels of long - term pop. by abstracting patients into different surgical groups, we identified significant surgery - related principal descriptors : each of them potentially encodes different surgical stimulation. the dynamics of patients ' physiological responses to these surgical events are linked to long - term post - operative pain development.
|
arxiv:2109.00888
|
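the higher - order svd used above factorizes a patients x signals x time array into mode - wise orthonormal factors and a core tensor; below is a plain real - valued numpy sketch of that decomposition ( the paper's variant is complex - valued, and this is a generic illustration rather than its exact pipeline ).

```python
import numpy as np

def hosvd(T):
    """Higher-order SVD of an n-way array T: one factor matrix per mode
    (left singular vectors of each mode unfolding) plus the core tensor."""
    factors = []
    for mode in range(T.ndim):
        unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U)
    core = T
    for mode, U in enumerate(factors):
        # contract mode `mode` of the core with U^H (the mode-m product)
        core = np.tensordot(core, U.conj().T, axes=([mode], [1]))
        core = np.moveaxis(core, -1, mode)
    return core, factors

# e.g. T with shape (patients, physiological signals, time samples)
core, (U_patients, U_signals, U_time) = hosvd(np.random.rand(8, 5, 100))
```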
time series modeling for predictive purposes has been an active research area of machine learning for many years. however, no sufficiently comprehensive and at the same time substantive survey has been offered so far. this survey strives to meet this need. a unified presentation has been adopted for all parts of this compilation. a red thread guides the reader from time series preprocessing to forecasting. time series decomposition is a major preprocessing task, used to separate nonstationary effects ( the deterministic components ) from the remaining stochastic constituent, assumed to be stationary. the deterministic components are predictable and contribute to the prediction through estimation or extrapolation. fitting the most appropriate model to the remaining stochastic component aims at capturing the relationship between past and future values, to allow prediction. we cover a sufficiently broad spectrum of models while nonetheless offering substantial methodological developments. we describe three major linear parametric models, together with two nonlinear extensions, and present five categories of nonlinear parametric models. beyond conventional statistical models, we highlight six categories of deep neural networks appropriate for time series forecasting in a nonlinear framework. finally, we outline new avenues of research for time series modeling and forecasting. we also report software made publicly available for the models presented.
|
arxiv:2104.00164
|
the gamma distribution arises frequently in bayesian models, but there is not an easy - to - use conjugate prior for the shape parameter of a gamma. this inconvenience is usually dealt with by using either metropolis - hastings moves, rejection sampling methods, or numerical integration. however, in models with a large number of shape parameters, these existing methods are slower or more complicated than one would like, making them burdensome in practice. it turns out that the full conditional distribution of the gamma shape parameter is well approximated by a gamma distribution, even for small sample sizes, when the prior on the shape parameter is also a gamma distribution. this article introduces a quick and easy algorithm for finding a gamma distribution that approximates the full conditional distribution of the shape parameter. we empirically demonstrate the speed and accuracy of the approximation across a wide range of conditions. if exactness is required, the approximation can be used as a proposal distribution for metropolis - hastings.
|
arxiv:1802.01610
|
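to make the idea above concrete, here is a generic laplace - style recipe for matching a gamma distribution to the shape parameter's full conditional ( mode and curvature matching ); the article's own algorithm may differ in detail, and a prior shape a0 >= 1 is assumed so the curvature term stays positive.

```python
import numpy as np
from scipy.special import digamma, polygamma
from scipy.optimize import brentq

def gamma_shape_approx(x, a0=1.0, b0=1.0, beta=1.0):
    """Gamma(A, B) approximation to p(alpha | x) for gamma-distributed data
    with known rate beta and a Gamma(a0, b0) prior on the shape alpha."""
    n, S = len(x), np.sum(np.log(x))
    # score: d/d(alpha) of log p(alpha | x)
    score = lambda a: n * np.log(beta) + S - n * digamma(a) + (a0 - 1) / a - b0
    mode = brentq(score, 1e-8, 1e8)                     # posterior mode
    curv = n * polygamma(1, mode) + (a0 - 1) / mode**2  # -(d/d alpha)^2 log p
    A = 1.0 + curv * mode**2                            # match mode and curvature
    B = curv * mode
    return A, B

A, B = gamma_shape_approx(np.random.gamma(shape=2.5, size=200))
```

if exactness matters, draws from gamma ( a, b ) can then be used as metropolis - hastings proposals and accepted against the exact full conditional, as the abstract suggests.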
agriculture, vital for global sustenance, necessitates innovative solutions due to a lack of organized domain experts, particularly in developing countries where many farmers are impoverished and cannot afford expert consulting. initiatives like farmers helpline play a crucial role in such countries, yet challenges such as high operational costs persist. automating query resolution can alleviate the burden on traditional call centers, providing farmers with immediate and contextually relevant information. the integration of agriculture and artificial intelligence ( ai ) offers a transformative opportunity to empower farmers and bridge information gaps. language models like transformers, the rising stars of ai, possess remarkable language understanding capabilities, making them ideal for addressing information gaps in agriculture. this work explores and demonstrates the transformative potential of large language models ( llms ) in automating query resolution for agricultural farmers, leveraging their expertise in deciphering natural language and understanding context. using a subset of a vast dataset of real - world farmer queries collected in india, our study focuses on approximately 4 million queries from the state of tamil nadu, spanning various sectors, seasonal crops, and query types.
|
arxiv:2407.04721
|
learning from demonstration ( lfd ) algorithms enable humans to teach new skills to robots through demonstrations. the learned skills can be robustly reproduced from identical or nearby boundary conditions ( e. g., initial points ). however, when generalizing a learned skill over boundary conditions with higher variance, the similarity of the reproductions changes from one boundary condition to another, and a single lfd representation cannot preserve a consistent similarity across a generalization region. we propose a novel similarity - aware framework including multiple lfd representations and a similarity metric that can improve skill generalization by finding reproductions with the highest similarity values for a given boundary condition. given a demonstration of the skill, our framework constructs a similarity region around a point of interest ( e. g., the initial point ) by evaluating individual lfd representations using the similarity metric. any point within this volume corresponds to a representation that reproduces the skill with the greatest similarity. we validate our multi - representational framework in three simulated and four sets of real - world experiments using a physical 6 - dof robot. we also evaluate 11 different similarity metrics and categorize them according to their biases in 286 simulated experiments.
|
arxiv:2110.14817
|
this paper aims to study the functional renormalization group ( frg ) for quantum $ ( 2 + p ) $ - spin dynamics of an $ n $ - vector $ \ textbf { x } \ in \ mathbb { r } ^ n $. by fixing the gauge symmetry in the construction of the frg, which breaks the $ o ( n ) $ - symmetry, and deriving the corresponding non - trivial ward identity, we can : first, coarse grain and focus this study using a more tractable method such as the effective vertex expansion ; and second, explore this model beyond the symmetric phase. we show finite scale singularities due to the disorder, interpreted as a signal in perturbation theory. the unconventional renormalization group approach is based on coarse - graining over the eigenvalues of matrix - like disorder, viewed as an effective kinetic term, with an eigenvalue distribution following a deterministic law in the large $ n $ limit. as an illustration, the case $ p = 3 $ is scrutinized.
|
arxiv:2411.11089
|
in recent years modelling crowd and evacuation dynamics has become very important, with increasingly huge numbers of people gathering around the world for many reasons and events. the fact that our global population grows dramatically every year and that current public transport systems are able to transport large numbers of people heightens the risk of crowd panic or crushes. pedestrian models are based on macroscopic or microscopic behaviour. in this paper, we are interested in developing models that can be used for evacuation control strategies. this model will be based on microscopic pedestrian simulation models, and its evolution and design require a lot of information and data. the pedestrian stream will be simulated, based on mathematical models derived from empirical data about pedestrian flows. this model is developed from image databases, so - called empirical data, taken from a video camera or data obtained using human detectors. we consider the individuals as autonomous particles interacting through social and physical forces, which is one approach that has been used to simulate crowd behaviour.
|
arxiv:1501.06496
|
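the social - force picture sketched in the last sentence above has a compact numerical core; the following is a minimal helbing - style integration step with illustrative parameter values, not the specific model calibrated in the paper.

```python
import numpy as np

def social_force_step(pos, vel, goal, dt=0.05, tau=0.5, v0=1.3, A=2.0, B=0.3):
    """One Euler step for n pedestrians: a driving force relaxing each walker
    toward its desired velocity, plus pairwise exponential social repulsion."""
    e = goal - pos
    e /= np.linalg.norm(e, axis=1, keepdims=True)   # unit vectors toward the exit
    force = (v0 * e - vel) / tau                    # driving (relaxation) term
    for i in range(len(pos)):
        d = pos[i] - np.delete(pos, i, axis=0)      # vectors from neighbors to i
        r = np.linalg.norm(d, axis=1, keepdims=True)
        force[i] += (A * np.exp(-r / B) * d / r).sum(axis=0)  # social repulsion
    vel = vel + dt * force
    return pos + dt * vel, vel
```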