Crustal plateaus are Venusian highlands characterized by tectonized terrains. It is commonly interpreted that their topography is isostatically supported and that they represent fossils of an extinct tectonic regime. Using gravity and topography, we perform a comprehensive investigation of the lithospheric structure of six crustal plateaus. We computed the admittance (the wavelength-dependent ratio of gravity to topography) for each region and compared them to modeled admittances. Three compensation scenarios were tested: Airy isostasy, a surface-loading flexural model, and a flexural model with surface and subsurface loads. Our results show that the topography of most plateaus is supported by crustal thickening and that the addition of a mantle support component is not necessary at the investigated wavelengths. The elastic thickness was constrained to be less than 35 km, with a best-fitting average of 15 km, confirming that these regions are consistent with an isostatic regime. The average crustal thickness of the plateaus ranges from 15 to 34 km; if they are in Airy isostasy, this implies that the global average crustal thickness of Venus is about 20 km. Phoebe Regio is the sole exception of our analysis, in that crustal thicknesses compatible with the other plateaus are obtained only when a buoyant layer is included. Heat flow estimates computed from the elastic thickness indicate that the plateaus formed under higher heat flow conditions compared to the current global average, which could have caused localized melting. Present-day heat flow predictions suggest that eclogitization could occur where the crust is thickest.
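The Airy compensation scenario tested above can be illustrated with a short sketch. Under Airy isostasy, topography of height h is balanced by a crustal root of depth r = h·ρ_c/(ρ_m − ρ_c); the density values below are illustrative assumptions, not numbers from the study.

```python
def airy_root(h_km, rho_crust=2900.0, rho_mantle=3300.0):
    """Depth of the compensating crustal root under Airy isostasy.

    Pressure balance at the compensation depth gives
    rho_crust * h = (rho_mantle - rho_crust) * r.
    Densities (kg/m^3) are illustrative, not fitted values.
    """
    return h_km * rho_crust / (rho_mantle - rho_crust)

# A plateau standing 2 km above the surrounding plains would need a
# ~14.5 km deep root with these assumed densities.
print(airy_root(2.0))  # 14.5
```

The deep roots implied by this balance are what make crustal thickening, rather than mantle support, the natural interpretation at the investigated wavelengths.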
|
arxiv:2202.06971
|
The classical Chung-Feller theorem [2] tells us that the number of Dyck paths of length $n$ with $m$ flaws is the $n$-th Catalan number, independent of $m$. L. Shapiro [9] found the Chung-Feller properties for the Motzkin paths. Mohanty's book [5] devotes an entire section to exploring the Chung-Feller theorem. Many Chung-Feller theorems are consequences of the results in [5]. In this paper, we consider the $(n,m)$-lattice paths. We study two parameters for an $(n,m)$-lattice path: the non-positive length and the rightmost minimum length. We obtain the Chung-Feller theorems of the $(n,m)$-lattice path on these two parameters by bijection methods. We are more interested in the pointed $(n,m)$-lattice paths. We investigate two parameters for a pointed $(n,m)$-lattice path: the pointed non-positive length and the pointed rightmost minimum length. We generalize the results in [5]. Using the main results in this paper, we may find the Chung-Feller theorems of many different lattice paths.
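The classical statement can be checked by brute force for small $n$: among all $\binom{2n}{n}$ paths with $n$ up-steps and $n$ down-steps, the number with exactly $m$ flaws (here taken as up-steps starting below the $x$-axis, one common formulation) is the Catalan number $C_n$ for every $m = 0, \ldots, n$. A small enumeration sketch:

```python
from itertools import combinations
from math import comb

def flaw_counts(n):
    """Count paths with n up-steps and n down-steps by number of flaws,
    where a flaw is an up-step that starts at negative height."""
    counts = {m: 0 for m in range(n + 1)}
    for ups in combinations(range(2 * n), n):
        up_set = set(ups)
        height, flaws = 0, 0
        for i in range(2 * n):
            if i in up_set:
                if height < 0:
                    flaws += 1
                height += 1
            else:
                height -= 1
        counts[flaws] += 1
    return counts

def catalan(n):
    return comb(2 * n, n) // (n + 1)

# For n = 3 there are C(6,3) = 20 paths, and each flaw count
# m = 0, 1, 2, 3 occurs exactly C_3 = 5 times.
print(flaw_counts(3))
```

The uniformity over $m$ is exactly what the paper's bijection methods generalize to $(n,m)$-lattice paths.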
|
arxiv:0903.0705
|
The chiral magnetic effect (CME) is a collective quantum phenomenon that arises from the interplay between gauge field topology and the fermion chiral anomaly, encompassing a wide range of physical systems from semimetals to quark-gluon plasma. This review, with a focus on the CME and related effects in heavy ion collisions, aims to provide an introductory discussion of its conceptual foundation and measurement methodology, a timely update on the present status in terms of experimental findings and theoretical progress, as well as an outlook into the open problems and future developments.
|
arxiv:2405.05427
|
In this work, the standard kinetic theory assumption of instantaneous collisions is lifted. As a continuation of a previous paper by Kanzler, Schmeiser, and Tora [KRM, 2024], a model for higher order non-instantaneous alignment collisions is presented and studied in the asymptotic regime of short collision duration. A first order accurate approximative model is derived as a correction to the instantaneous limit. Rigorous results on its well-posedness and on the instantaneous limit are proven. The approximative model is a system of two equations. The possibility of finding an equally accurate scalar approximation is discussed.
|
arxiv:2503.05686
|
Class imbalance in graph data presents significant challenges for node classification. While existing methods, such as SMOTE-based approaches, partially mitigate this issue, they still exhibit limitations in constructing imbalanced graphs. Generative self-supervised learning (SSL) methods, exemplified by graph autoencoders (GAEs), offer a promising solution by directly generating minority nodes from the data itself, yet their potential remains underexplored. In this paper, we delve into the shortcomings of SMOTE-based approaches in the construction of imbalanced graphs. Furthermore, we introduce ViGraph, a simple yet effective generative SSL approach that relies on the variational GAE as the fundamental model. ViGraph strictly adheres to the concept of imbalance when constructing imbalanced graphs and innovatively leverages the variational inference (VI) ability of the variational GAE to generate nodes for minority classes. ViGraph introduces comprehensive training strategies, including cross-view contrastive learning at the decoding phase to capture semantic knowledge, adjacency matrix reconstruction to preserve graph structure, and an alignment strategy to ensure stable training. ViGraph can generate high-quality nodes directly usable for classification, eliminating the need to integrate the generated nodes back into the graph as well as the additional retraining found in SMOTE-based methods. We conduct extensive experiments, the results of which demonstrate the superiority and generality of our approach.
|
arxiv:2311.01191
|
We present a systematic study of the magnetization in YbRh$_{2}$Si$_{2}$ under slightly negative (6% Ir substitution) and positive (7% Co substitution) chemical pressure. We show how the critical field $H_{0}$, associated with the high-field Lifshitz transitions, is shifted to lower (higher) values with Co (Ir) substitution. The critical field $H_{\mathrm{N}}$, which identifies the boundary line of the antiferromagnetic (AFM) phase $T_{\mathrm{N}}(H)$, increases with positive pressure and approaches zero with 6% Ir substitution. On the other side, the crossover field $H^{*}$, associated with the energy scale $T^{*}(H)$ where a reconstruction of the Fermi surface has been observed, is not much influenced by the chemical substitution. Following the analysis proposed in Refs.\,\cite{paschen2004, gegenwart2007, friedemann2009, tokiwa2009a}, we have fitted the quantity $\tilde{M}(H) = M + (dM/dH)H$ with a crossover function to identify $H^{*}$. The $T^{*}(H)$ line follows an almost linear $H$-dependence at sufficiently high fields outside the AFM phase, but it deviates from linearity at $T \le T_{\mathrm{N}}(0)$, and in Yb(Rh$_{0.93}$Co$_{0.07}$)$_{2}$Si$_{2}$ it changes slope clearly inside the AFM phase. Moreover, the FWHM of the fit function depends linearly on temperature outside the phase, but remains constant inside, suggesting either that such an analysis is valid only for $T \ge T_{\mathrm{N}}(0)$ or that the Fermi surface changes continuously at $T = 0$ inside the AFM phase.
|
arxiv:1303.2224
|
Diffusion models have demonstrated significant promise in various generative tasks; however, they often struggle to satisfy challenging constraints. Our approach addresses this limitation by rethinking training-free loss-guided diffusion from an optimization perspective. We formulate a series of constrained optimizations throughout the inference process of a diffusion model. In each optimization, we allow the sample to take multiple steps along the gradient of the proxy constraint function until we can no longer trust the proxy, according to the variance at each diffusion level. Additionally, we estimate the state manifold of the diffusion model to allow for early termination when the sample starts to wander away from the state manifold at each diffusion step. Trust Sampling effectively balances between following the unconditional diffusion model and adhering to the loss guidance, enabling more flexible and accurate constrained generation. We demonstrate the efficacy of our method through extensive experiments on complex tasks, and in the drastically different domains of image and 3D motion generation, showing significant improvements over existing methods in terms of generation quality. Our implementation is available at https://github.com/will-s-h/trust-sampling.
|
arxiv:2411.10932
|
Above the upper critical dimension, the breakdown of hyperscaling is associated with dangerous irrelevant variables in the renormalization group formalism, at least for systems with periodic boundary conditions. While these have been extensively studied, there have been only a few analyses of finite-size scaling with free boundary conditions. The conventional expectation there is that, in contrast to periodic geometries, finite-size scaling is Gaussian, governed by a correlation length commensurate with the lattice extent. Here, detailed numerical studies of the five-dimensional Ising model indicate that this expectation is unsupported, both at the infinite-volume critical point and at the pseudocritical point where the finite-size susceptibility peaks. Instead, the evidence indicates that finite-size scaling at the pseudocritical point is similar to that in the periodic case. An analytic explanation is offered which allows hyperscaling to be extended beyond the upper critical dimension.
|
arxiv:1402.1657
|
Visual Teach and Repeat has shown that relative navigation is a robust and efficient solution for autonomous vision-based path following in difficult environments. Adding additional absolute sensors such as Global Navigation Satellite Systems (GNSS) has the potential to expand the domain of Visual Teach and Repeat to environments where the ability to visually localize is not guaranteed. Our method of lazy mapping and delaying estimation until a path-tracking error is needed avoids the need to estimate absolute states. As a result, map optimization is not required and paths can be driven immediately after being taught. We validate our approach on a real robot through an experiment in a joint indoor-outdoor environment comprising 3.5 km of autonomous route repeating across a variety of lighting conditions. We achieve smooth error signals throughout the runs despite large sections of dropout for each sensor.
|
arxiv:2101.05107
|
The electronic and structural properties of excitons and trions in monolayer transition metal dichalcogenides are investigated using both a multiband and a single-band model. In the multiband model, we construct the excitonic Hamiltonian in the product basis of the single-particle states at the conduction and valence band edges. We decouple the corresponding energy eigenvalue equation and solve the resulting differential equation self-consistently, using the finite element method (FEM), to determine the energy eigenvalues and the wave functions. As a comparison, we also consider the simple single-band model which is often used in numerical studies. We solve the energy eigenvalue equation using the FEM as well as with the stochastic variational method (SVM), in which a variational wave function is expanded in a basis of a large number of correlated Gaussians. We find good agreement between the results of both methods, as well as with other theoretical works for excitons, and we also compare with available experimental data. For trions, the agreement between both methods is not as good due to our neglect of angular correlations when using the FEM. Finally, when comparing the two models, we see that the presence of the valence bands in the multiband model leads to differences with the single-band model when (interband) interactions are strong.
|
arxiv:1707.07509
|
between clustering and local coexistence holds as $n \to \infty$.
|
arxiv:1209.1856
|
Vortices are ubiquitous in nature and can be observed in fluids, condensed matter, and even in the formation of galaxies. Light, too, can evolve like a vortex. Optical vortices are exploited in light-matter interaction, free-space communications, and imaging. Here, we introduce optical rotatum: a new degree of freedom of light in which an optical vortex experiences a quadratic chirp in its orbital angular momentum along the optical path. We show that such an adiabatic deformation of topology is associated with the accumulation of a Berry phase factor, which in turn perturbs the propagation constant (spatial frequency) of the beam. Remarkably, the spatial structure of optical rotatum follows a logarithmic spiral, a signature that is commonly seen in the pattern formation of seashells and galaxies. Our work expands previous literature on structured light, offers new modalities for light-matter interaction, communications, and sensing, and hints at analogous effects in condensed matter physics and Bose-Einstein condensates.
|
arxiv:2310.16317
|
The dynamics of fluctuations in a fast rotating spherical Couette flow experiment in the presence of a strong dipolar magnetic field is investigated in detail, through a thorough analysis of the experimental data as well as a numerical study. Fluctuations within the conducting fluid (liquid sodium) are characterized by the presence of several oscillation modes, identified as magneto-Coriolis (MC) modes, with definite symmetry and azimuthal number. A numerical simulation provides eigensolutions which exhibit oscillation frequencies and magnetic signatures comparable to the observations. The main characteristic of these hydromagnetic modes is that the magnetic contribution has a fundamental influence on the dynamical properties through the Lorentz forces, although its importance remains weak from an energetic point of view. Another specificity is that the Lorentz forces are confined near the inner sphere, where the dipolar magnetic field is the strongest, while the Coriolis forces are concentrated in the outer fluid volume close to the outer sphere.
|
arxiv:1601.05684
|
A major goal shared by neuroscience and collective behavior is to understand how dynamic interactions between individual elements give rise to behaviors in populations of neurons and animals, respectively. This goal has recently come within reach thanks to techniques providing access to the connectivity and activity of neuronal ensembles as well as to behaviors among animal collectives. The next challenge using these datasets is to unravel the network mechanisms generating population behaviors. This is aided by network theory, a field that studies structure-function relationships in interconnected systems. Here we review studies that have taken a network view of modern datasets to provide unique insights into individual and collective animal behaviors. Specifically, we focus on how analyzing the signal propagation, controllability, symmetry, and geometry of networks can tame the complexity of collective system dynamics. These studies illustrate the potential of network theory to accelerate our understanding of behavior across ethological scales.
|
arxiv:2112.02361
|
This paper examines a novel gradient boosting framework for regression. We regularize gradient boosted trees by introducing subsampling and employ a modified shrinkage algorithm so that at every boosting stage the estimate is given by an average of trees. The resulting algorithm, titled Boulevard, is shown to converge as the number of trees grows. We also demonstrate a central limit theorem for this limit, allowing a characterization of uncertainty for predictions. A simulation study and real-world examples provide support for both the predictive accuracy of the model and its limiting behavior.
|
arxiv:1806.09762
|
The usual development cycles are too slow for the development of vaccines, diagnostics and treatments in pandemics such as the ongoing SARS-CoV-2 pandemic. Given the pressure in such a situation, there is a risk that findings of early clinical trials are overinterpreted despite their limitations in terms of size and design. Motivated by a non-randomized open-label study investigating the efficacy of hydroxychloroquine in patients with COVID-19, we describe in a unified fashion various alternative approaches to the analysis of non-randomized studies. A widely used tool to reduce the impact of treatment-selection bias is the class of so-called propensity score (PS) methods. Conditioning on the propensity score allows one to replicate the design of a randomized controlled trial, conditional on observed covariates. Extensions include the g-computation approach, which is less frequently applied, in particular in clinical studies. Moreover, doubly robust estimators provide additional advantages. Here, we investigate the properties of propensity score based methods, including three variations of doubly robust estimators, in small sample settings typical for early trials, in a simulation study. R code for the simulations is provided.
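As a minimal illustration of the propensity-score idea (not the paper's simulation design, and in Python rather than the R code the authors provide), the sketch below simulates a confounded treatment assignment and removes the resulting bias with inverse-probability-of-treatment weighting, here using the known true propensity score for simplicity; in practice the score would be estimated from covariates.

```python
import math
import random

random.seed(0)
n, true_effect = 20000, 2.0

xs, ts, ys = [], [], []
for _ in range(n):
    x = random.uniform(-2.0, 2.0)         # observed confounder
    e = 1.0 / (1.0 + math.exp(-x))        # true propensity score P(T=1|x)
    t = 1 if random.random() < e else 0   # confounded treatment assignment
    y = true_effect * t + x + random.gauss(0.0, 1.0)
    xs.append(x); ts.append(t); ys.append(y)

# Naive difference in means is biased because x drives both t and y.
n_treated = sum(ts)
naive = (sum(y for t, y in zip(ts, ys) if t) / n_treated
         - sum(y for t, y in zip(ts, ys) if not t) / (n - n_treated))

# Inverse-probability-weighted (Horvitz-Thompson) estimate of the
# average treatment effect, weighting by the known propensity score.
ipw = sum(t * y / (1.0 / (1.0 + math.exp(-x)))
          - (1 - t) * y / (1.0 - 1.0 / (1.0 + math.exp(-x)))
          for x, t, y in zip(xs, ts, ys)) / n

print(round(naive, 2), round(ipw, 2))  # naive is biased upward; IPW is near 2
```

Doubly robust estimators combine such a weighting model with an outcome regression so that the estimate remains consistent if either model is correct; that refinement is what the paper studies in small samples.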
|
arxiv:2007.15991
|
This article presents a deep CNN, based on the DenseNet architecture jointly with a highly discriminating learning methodology, to classify seven kinds of skin lesions: melanoma, melanocytic nevus, basal cell carcinoma, actinic keratosis / Bowen's disease, benign keratosis, dermatofibroma, and vascular lesion. In particular, a 61-layer DenseNet, pre-trained on the ImageNet dataset, has been fine-tuned on the ISIC 2018 Task 3 challenge dataset exploiting a center loss function.
|
arxiv:1807.06416
|
Remote state preparation enables one to prepare and manipulate quantum states non-locally. As an essential quantum resource, the optical cat state is usually prepared locally by subtracting photons from a squeezed vacuum state. For remote quantum information processing, it is essential to prepare and manipulate optical cat states remotely based on Gaussian entanglement, which remains a challenge. Here, we present the experimental preparation of optical cat states based on a remotely distributed two-mode Gaussian entangled state in a lossy channel. By performing photon subtraction and homodyne projective measurement at Alice's station, an optical cat state is prepared remotely at Bob's station. Furthermore, the prepared cat state is rotated by changing Alice's measurement basis of homodyne detection, which demonstrates remote manipulation of it. By distributing the two modes of the two-mode Gaussian entangled state in lossy channels, we demonstrate that the remotely prepared cat state can tolerate much more loss in Alice's channel than in Bob's channel. We also show that cat states with amplitudes larger than 2 can be prepared by increasing the squeezing level and the number of subtracted photons. Our results make a crucial step toward remote hybrid quantum information processing involving discrete- and continuous-variable techniques.
|
arxiv:2304.08863
|
Recent observations of Jupiter and Saturn suggest that heavy elements may be diluted in the gaseous envelope, providing a compositional gradient that could stabilise ordinary convection and produce a stably-stratified layer near the core of these planets. This region could consist of semi-convective layers with a staircase-like density profile, which have multiple convective zones separated by thin stably-stratified interfaces, as a result of double-diffusive convection. These layers could have important effects on wave propagation and tidal dissipation that have not been fully explored. We analyse the effects of these layers on the propagation and transmission of internal waves within giant planets, extending prior work in a local Cartesian model. We adopt a simplified global Boussinesq planetary model in which we explore the internal waves in a non-rotating spherical body. We begin by studying the free modes of a region containing semi-convective layers. We then analyse the transmission of internal waves through such a region. The free modes depend strongly on the staircase properties and consist of modes with both internal and interfacial gravity wave-like behaviour. We determine the frequency shifts of these waves as a function of the number of steps to explore their potential to probe planetary internal structures. We also find that wave transmission is strongly affected by the presence of a staircase. Very large-wavelength waves are transmitted efficiently, but small-scale waves are only transmitted if they are resonant with one of the free modes. The effective size of the core is therefore larger for non-resonant modes.
|
arxiv:2003.02595
|
Tsunami waves induced by landslides are a threat to human activities and safety along coastal areas. In this paper, we characterize experimentally the waves generated by the gravity-driven collapse of a dry granular column into water. Three nonlinear wave regimes are identified depending on the Froude number $\mathrm{Fr}_f$, based on the ratio of the velocity of the advancing granular front to the velocity of linear gravity waves in shallow water: transient bores for large $\mathrm{Fr}_f$, solitary waves for intermediate values of $\mathrm{Fr}_f$, and nonlinear transition waves at small $\mathrm{Fr}_f$. The wave amplitude relative to the water depth increases with $\mathrm{Fr}_f$ in all three regimes but with different nonlinear scalings, and the relative wavelength is an increasing or decreasing function of $\mathrm{Fr}_f$. Two of these wave regimes are rationalized by considering that the advancing granular front acts as a vertical piston pushing the water, while the last one is found to be a transition from shallow to deep water conditions. The present modeling contributes to a better understanding of the rich hydrodynamics of the generated waves, with coastal risk assessment as a practical application.
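The control parameter described above is simply the front speed scaled by the linear shallow-water gravity wave speed $\sqrt{g h}$. A sketch with illustrative values (not the experimental ones):

```python
import math

def froude_front(v_front, depth, g=9.81):
    """Froude number of an advancing front: the front velocity divided
    by the linear shallow-water gravity wave speed sqrt(g * h)."""
    return v_front / math.sqrt(g * depth)

# Hypothetical laboratory-scale numbers: a 1 m/s granular front
# entering 0.1 m of water sits right around Fr_f ~ 1, i.e. near the
# boundary between wave regimes.
print(round(froude_front(1.0, 0.1), 2))  # 1.01
```

Which regime a given $\mathrm{Fr}_f$ falls into (bore, solitary, or transition wave) is determined experimentally in the paper, so no thresholds are hard-coded here.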
|
arxiv:2105.13991
|
The ad hoc teamwork problem describes situations where an agent has to cooperate with previously unseen agents to achieve a common goal. For an agent to be successful in these scenarios, it has to have suitable cooperative skills. One could implement cooperative skills into an agent by using domain knowledge to design the agent's behavior. However, in complex domains, domain knowledge might not be available. Therefore, it is worthwhile to explore how to directly learn cooperative skills from data. In this work, we apply the meta-reinforcement learning (meta-RL) formulation in the context of the ad hoc teamwork problem. Our empirical results show that such a method could produce robust cooperative agents in two cooperative environments with different cooperative circumstances: social compliance and language interpretation. (This is the full paper of the extended abstract version.)
|
arxiv:2111.03431
|
A systematic study of the effect of hole compensation on the magnetic properties of (Ga,Mn)As, (In,Mn)As and (Ga,Mn)P, controlled by defect compensation through ion irradiation, is presented in this work. In all materials, both the Curie temperature and the magnetization decrease upon increasing hole compensation, confirming the description of hole-mediated ferromagnetism according to the p-d Zener model. The material dependence of the Curie temperature and magnetization versus hole compensation reveals that the manipulation of magnetic properties in III-Mn-V dilute ferromagnetic semiconductors by ion irradiation is strongly influenced by the energy level location of the produced defect relative to the band edges of the semiconductor.
|
arxiv:1907.05160
|
The entanglement of pair cat states in the phase damping channel is studied by employing the relative entropy of entanglement. It is shown that pair cat states are always distillable in the phase damping channel. Furthermore, we analyze the fidelity of teleportation for pair cat states using joint measurements of the photon-number sum and phase difference.
|
arxiv:quant-ph/0506217
|
One of the main challenges in surrogate modeling is the limited availability of data due to resource constraints associated with computationally expensive simulations. Multi-fidelity methods provide a solution by chaining models in a hierarchy of increasing fidelity, associated with lower error but increasing cost. In this paper, we compare different multi-fidelity methods employed in constructing Gaussian process surrogates for regression. Non-linear autoregressive methods in the existing literature are primarily confined to two-fidelity models, and we extend these methods to handle more than two levels of fidelity. Additionally, we propose enhancements to an existing method incorporating delay terms by introducing a structured kernel. We demonstrate the performance of these methods across various academic and real-world scenarios. Our findings reveal that multi-fidelity methods generally have a smaller prediction error for the same computational cost as compared to the single-fidelity method, although their effectiveness varies across different scenarios.
|
arxiv:2404.11965
|
Bi-containing III-V semiconductors constitute an exciting class of metastable compounds with wide-ranging potential optoelectronic and electronic applications. However, the growth of III-V-Bi alloys requires group-III-rich growth conditions, which pose severe challenges for planar growth. In this work, we exploit the naturally Ga-rich environment present inside the metallic droplet of a self-catalyzed GaAs nanowire to synthesize metastable GaAs/GaAs$_{1-\text{x}}$Bi$_{\text{x}}$ axial nanowire heterostructures with high Bi contents. The axial GaAs$_{1-\text{x}}$Bi$_{\text{x}}$ segments are realized with molecular beam epitaxy by first enriching only the vapor-liquid-solid (VLS) Ga droplets with Bi, followed by exposing the resulting Ga-Bi droplets to As$_2$ at temperatures ranging from 270 to 380$\,^{\circ}$C to precipitate GaAs$_{1-\text{x}}$Bi$_{\text{x}}$ only under the nanowire droplets. Microstructural and elemental characterization reveals the presence of single-crystal zincblende GaAs$_{1-\text{x}}$Bi$_{\text{x}}$ axial nanowire segments with Bi contents up to (10 $\pm$ 2)$\%$. This work illustrates how the unique local growth environment present during VLS nanowire growth can be exploited to synthesize heterostructures with metastable compounds.
|
arxiv:1903.11039
|
A wealth of cosmological and astrophysical information is expected from many ongoing and upcoming large-scale surveys. It is crucial to prepare for these surveys now and develop tools that can efficiently extract most of the information. We present HIFlow: a fast generative model of neutral hydrogen (HI) maps that is conditioned only on cosmology ($\Omega_{m}$ and $\sigma_{8}$) and designed using a class of normalizing flow models, the masked autoregressive flow (MAF). HIFlow is trained on state-of-the-art simulations from the Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS) project. HIFlow has the ability to generate realistic, diverse maps without explicitly incorporating the expected 2D map structure into the flow as an inductive bias. We find that HIFlow is able to reproduce the CAMELS average and standard deviation of the HI power spectrum (Pk) within a factor of $\lesssim$ 2, scoring a very high $R^{2} > 90\%$. By inverting the flow, HIFlow provides a tractable high-dimensional likelihood for efficient parameter inference. We show that HIFlow, conditioned on cosmology, is successfully able to marginalize over astrophysics at the field level, regardless of the stellar and AGN feedback strengths. This new tool represents a first step toward more powerful parameter inference, maximizing the scientific return of future HI surveys, and opening a new avenue to minimize the loss of complex information due to data compression down to summary statistics.
|
arxiv:2110.02983
|
Apollonius' Conics is one of the best known and best preserved mathematical works from antiquity, and in it he derives many theorems concerning conic sections that would prove invaluable to later mathematicians and astronomers studying planetary motion, such as Isaac Newton. While neither Apollonius nor any other Greek mathematician made the leap to coordinate geometry, Apollonius' treatment of curves is in some ways similar to the modern treatment, and some of his work seems to anticipate the development of analytical geometry by Descartes some 1800 years later. Around the same time, Eratosthenes of Cyrene (c. 276–194 BC) devised the sieve of Eratosthenes for finding prime numbers. The 3rd century BC is generally regarded as the "golden age" of Greek mathematics, with advances in pure mathematics henceforth in relative decline. Nevertheless, in the centuries that followed, significant advances were made in applied mathematics, most notably trigonometry, largely to address the needs of astronomers. Hipparchus of Nicaea (c. 190–120 BC) is considered the founder of trigonometry for compiling the first known trigonometric table, and to him is also due the systematic use of the 360 degree circle. Heron of Alexandria (c. 10–70 AD) is credited with Heron's formula for finding the area of a scalene triangle and with being the first to recognize the possibility of negative numbers possessing square roots. Menelaus of Alexandria (c. 100 AD) pioneered spherical trigonometry through Menelaus' theorem. The most complete and influential trigonometric work of antiquity is the Almagest of Ptolemy (c. AD 90–168), a landmark astronomical treatise whose trigonometric tables would be used by astronomers for the next thousand years. Ptolemy is also credited with Ptolemy's theorem for deriving trigonometric quantities, and the most accurate value of π outside of China until the medieval period, 3.1416.
Following a period of stagnation after Ptolemy, the period between 250 and 350 AD is sometimes referred to as the "silver age" of Greek mathematics. During this period, Diophantus made significant advances in algebra, particularly indeterminate analysis, which is also known as "Diophantine analysis". The study of Diophantine equations and Diophantine approximations is a significant area of research to this day. His main work was the Arithmetica, a collection of 150 algebraic problems dealing with exact solutions to determinate and indeterminate equations.
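Two of the results named above translate directly into modern code: the sieve of Eratosthenes for enumerating primes, and Heron's formula, which gives a triangle's area from its side lengths via the semi-perimeter $s = (a+b+c)/2$ as $\sqrt{s(s-a)(s-b)(s-c)}$.

```python
import math

def sieve(limit):
    """Sieve of Eratosthenes: all primes up to and including `limit`."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross out every multiple of p, starting at p*p.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

def heron_area(a, b, c):
    """Heron's formula: triangle area from its three side lengths."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(sieve(30))            # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(heron_area(3, 4, 5))  # 6.0, the 3-4-5 right triangle
```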
|
https://en.wikipedia.org/wiki/History_of_mathematics
|
The long-form video question-answering task requires the comprehension and analysis of extended video content to respond accurately to questions by utilizing both temporal and contextual information. In this paper, we present MM-Screenplayer, an advanced video understanding system with multi-modal perception capabilities that can convert any video into textual screenplay representations. Unlike previous storytelling methods, we organize video content into scenes as the basic unit, rather than just visually continuous shots. Additionally, we developed a "look back" strategy to reassess and validate uncertain information, particularly targeting breakpoint mode. MM-Screenplayer achieved the highest score in the CVPR 2024 Long-form Video Understanding (LOVEU) Track 1 Challenge, with a global accuracy of 87.5% and a breakpoint accuracy of 68.8%.
|
arxiv:2406.17309
|
To extract approximate solutions of nonlinear fractional order differential equations with homogeneous and nonhomogeneous boundary conditions, the weighted residual method is employed here. We exploit three methods, namely Galerkin, least squares, and collocation, for the efficient numerical solution of nonlinear two-point boundary value problems. Some nonlinear cases are examined for observing the maximum absolute errors of the considered methods, demonstrating the accuracy and reliability of the present technique using the modified Legendre and modified Bernoulli polynomials as weight functions. The mathematical formulations and computational algorithms are straightforward and uncomplicated to understand. The absolute errors and the graphical representations reflect that our method is accurate and reliable.
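The collocation idea can be seen in miniature on a linear two-point BVP (the paper treats nonlinear fractional problems with modified Legendre and Bernoulli weight functions; the example below is a deliberately simple illustration, not the authors' method). For y'' = 2 with y(0) = y(1) = 0, take a trial function c·x(1−x) that already satisfies the boundary conditions and force the residual to vanish at a collocation point:

```python
def collocation_coeff():
    """One-point collocation for y'' = 2, y(0) = y(1) = 0.

    Trial: y(x) = c * phi(x) with phi(x) = x * (1 - x), which satisfies
    both boundary conditions; phi'' = -2, so the residual is
    R(c) = y'' - 2 = -2*c - 2, independent of the collocation point
    because the trial function is quadratic. Setting R = 0 gives c.
    """
    a, b = -2.0, -2.0   # R(c) = a*c + b
    return -b / a        # c = -1

c = collocation_coeff()
y = lambda x: c * x * (1 - x)
print(y(0.5))  # -0.25, matching the exact solution y = x^2 - x
```

With one basis function the method recovers the exact solution; for nonlinear or fractional problems the residual equations become nonlinear in the coefficients and must be solved numerically, which is where the three weighted residual variants differ.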
|
arxiv:2404.03338
|
we explore the possibility of resolving an image of a damped lyman alpha ( dla ) system in absorption against an extended, diffuse background x - ray source. typical columns of neutral hydrogen in dlas are high enough to block out up to ~ 30 % of the soft x - ray flux at an observed photon energy of 0. 5 kev, and we find that ~ 1 % of the area of extended x - ray sources at z > 1 have their 0. 5 kev flux reduced by at least 20 %. we discuss the observability of such absorption and find that < 2 arcsecond resolution, and > 300 photons per angular resolution element are required in the 0. 3 - 8 kev band for its detection, and in order to distinguish it from intrinsic surface brightness fluctuations. for the surface brightness of the currently known high - redshift extended x - ray sources, this requires an integration time of a few msec on chandra. the detection will be within the reach of a routine observation with a next generation x - ray telescope such as xeus or generation x.
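the quoted flux suppression follows from a simple optical-depth estimate, T = exp(-sigma(E) N_HI); the effective photoelectric cross-section value below (per hydrogen atom at an observed energy of 0.5 kev) is an assumed illustrative number, not one taken from the paper.

```python
import math

# Soft X-ray transmission through a damped Lyman-alpha absorber:
#   T = exp(-sigma(E) * N_HI)
# SIGMA_05KEV is an assumed effective cross-section per H atom at 0.5 keV.

SIGMA_05KEV = 1.2e-22  # cm^2 per H atom (illustrative assumption)

def transmission(n_hi, sigma=SIGMA_05KEV):
    """Fraction of 0.5 keV flux surviving a column density n_hi [cm^-2]."""
    return math.exp(-sigma * n_hi)

# from the DLA threshold column up to a strong absorber
for n in (2e20, 1e21, 3e21):
    print(f"N_HI = {n:.0e} cm^-2 -> blocked fraction = {1 - transmission(n):.0%}")
```

with these assumed numbers, a strong absorber with N_HI of a few times 10^21 cm^-2 blocks of order 30% of the 0.5 kev flux, consistent with the figure quoted above.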
|
arxiv:astro-ph/0409516
|
two-dimensional (2d) carbon allotropes have attracted growing interest for their structural versatility and potential in energy storage and nanoelectronics. we propose athos-graphene (ag), a novel 2d carbon allotrope inspired by the geometric patterns of brazilian artist athos bulcão. designed using density functional theory, ag features a periodic structure with high thermodynamic and thermal stability, as evidenced by a low cohesive energy of -7.96 ev/atom, the absence of imaginary phonon modes, and robust performance in ab initio molecular dynamics simulations up to 1000 k. it exhibits anisotropic mechanical properties, with young's modulus values of 585 gpa and 600 gpa along the x- and y-directions, and poisson's ratios of 0.19 and 0.17, respectively. electronic structure analyses confirm its metallic behavior, while optical studies reveal anisotropic absorption in the visible and uv regions. for lithium-ion storage, athos-graphene shows strong li adsorption (-2.3 to -1.0 ev), a high theoretical capacity of 836.78 mah/g, and a low average open-circuit voltage of 0.54 v. lithium diffusion barriers are as low as 0.3 ev on the surface and 0.66 ev between layers, with a high diffusion coefficient greater than 6x10^-6 cm^2/s. these features highlight ag as a promising anode material for high-performance lithium-ion batteries.
|
arxiv:2505.04810
|
orthogonal polarization modes (opm) have been reported observationally and are widely accepted by pulsar researchers. however, no accepted theory can explain the origin of the opm, which remains a mystery in the pulsar research field. here a possible way to solve this mystery is presented. we ask a question: does any real so-called opm exist in pulsar radiation? it is proposed in this paper that the 'observed opm' in individual pulses could be the result of depolarization of pulsar radiation together with observational uncertainties originating from the polarimeter. a possible way to check this idea is suggested. if the idea is verified, pulsar research would be influenced significantly in both theory and observation.
|
arxiv:astro-ph/0007056
|
the possibility of revealing non-classical behaviours in the dynamics of a trapped ion via measurements of the mean value of suitable operators is reported. in particular we focus on the manifestation known as the ``parity effect'', which may be observed by \emph{directly measuring} the expectation value of an appropriate correlation operator. the experimental feasibility of our proposal is discussed.
|
arxiv:quant-ph/0205153
|
quantum area tensor regge calculus is considered and some of its properties are discussed. the path integral quantisation is defined for the usual length-based regge calculus, considered as a particular case (a kind of a state) of the area tensor regge calculus. under natural physical assumptions the quantisation of interest is practically unique, up to an additional one-parametric local factor of the type of a power of $\det\|g_{\lambda\mu}\|$ in the measure. in particular, this factor can be adjusted so that in the continuum limit we would recover any of the measures usually discussed in continuum quantum gravity, namely the misner, dewitt or leutwyler measure. it is in the latter two cases that the discrete measure turns out to be well-defined at small lengths and leads to finite expectation values of the lengths.
|
arxiv:gr-qc/0304088
|
the composition of europa ' s trailing hemisphere reflects the combined influences of endogenous geologic resurfacing and exogenous sulfur radiolysis. using spatially resolved visible - wavelength spectra of europa obtained with the hubble space telescope, we map multiple spectral features across the trailing hemisphere and compare their geographies with the distributions of large - scale geology, magnetospheric bombardment, and surface color. based on such comparisons, we interpret some aspects of our spectra as indicative of purely exogenous sulfur radiolysis products and other aspects as indicative of radiolysis products formed from a mixture of endogenous material and magnetospheric sulfur. the spatial distributions of two of the absorptions seen in our spectra - - a widespread downturn toward the near - uv and a distinct feature at 530 nm - - appear consistent with sulfur allotropes previously suggested from ground - based spectrophotometry. however, the geographies of two additional features - - an absorption feature at 360 nm and the spectral slope at red wavelengths - - are more consistent with endogenous material that has been altered by sulfur radiolysis. we suggest irradiated sulfate salts as potential candidates for this material, but we are unable to identify particular species with the available data.
|
arxiv:2012.11737
|
el cvn-type eclipsing binaries are composed of a massive a-type main-sequence primary star and a hotter b-type secondary star. this paper presents the time-series photometric and asteroseismic results of the el cvn-type star 1swasp j024743.37-251549.2. well-defined eclipsing light curves were constructed by using the novel high-cadence $BV$ data and archival {\it tess} data, and the physical parameters of each binary component were derived by modeling the light curves. multiple frequency analysis was performed to investigate the pulsation properties of the binary components. a reliable signal could not be detected in the high-frequency region of 100--300 day$^{-1}$, unlike in the previous discovery of three frequencies around 200 day$^{-1}$. this indicates that the pulsation amplitudes of the pre-helium white dwarf secondary component decreased considerably. by contrast, 12 frequencies were detected in the range of 33 to 53 day$^{-1}$. most of them were classified as $\delta$ sct-type pulsations originating from the primary star. theoretical frequencies for the seismic analysis were obtained by adding the non-rotating model frequencies from gyre and their rotational shifts from the complete calculation approach. grid-based fitting was conducted for various stellar properties. the theoretical frequencies and stellar parameters of the best solution concurred well with the observations. the rotation rate was constrained to 1.50 $\pm$ 0.02 day$^{-1}$, indicating the synchronized rotation of the primary star. the results imply that the complete approach based on the polytropic model is applicable to the seismic analysis of fast-rotating $\delta$ sct stars.
|
arxiv:2109.02262
|
a data-driven parametric model order reduction (mor) method using a deep artificial neural network is proposed. the present network, the least-squares hierarchical variational autoencoder (lsh-vae), is capable of performing nonlinear mor for the parametric interpolation of a nonlinear dynamic system with a significant number of degrees of freedom. lsh-vae exploits two major changes to the existing networks: a hierarchical deep structure and a hybrid weighted, probabilistic loss function. the enhancements result in significantly improved accuracy and stability compared with the conventional nonlinear mor methods, the autoencoder and the variational autoencoder. upon lsh-vae, a parametric mor framework is presented based on the spherically linear interpolation of the latent manifold. the present framework is validated and evaluated on three nonlinear and multiphysics dynamic systems. first, the present framework is evaluated on the fluid-structure interaction benchmark problem to assess its efficiency and accuracy. then, a highly nonlinear aeroelastic phenomenon, limit cycle oscillation, is analyzed. finally, the present framework is applied to a three-dimensional fluid flow to demonstrate its capability of efficiently analyzing a significantly large number of degrees of freedom. the performance of lsh-vae is emphasized by comparing its results against those of the widely used nonlinear mor methods, the convolutional autoencoder and $\beta$-vae. the present framework exhibits significantly enhanced accuracy relative to the conventional methods while still exhibiting a large speed-up factor.
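the spherically linear interpolation (slerp) step on the latent manifold can be sketched as follows; this is a generic dependency-free implementation of slerp between two latent codes, independent of the lsh-vae network itself.

```python
import math

# Spherical linear interpolation (slerp) between two latent vectors:
# interpolation follows a great-circle arc rather than a straight chord,
# which keeps interpolants at a comparable norm on the latent manifold.

def slerp(z0, z1, t):
    """Interpolate between latent codes z0 and z1 at parameter t in [0, 1]."""
    dot = sum(a * b for a, b in zip(z0, z1))
    n0 = math.sqrt(sum(a * a for a in z0))
    n1 = math.sqrt(sum(b * b for b in z1))
    omega = math.acos(max(-1.0, min(1.0, dot / (n0 * n1))))  # angle between codes
    if omega < 1e-8:                       # nearly parallel: fall back to lerp
        return [(1 - t) * a + t * b for a, b in zip(z0, z1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(z0, z1)]

z_mid = slerp([1.0, 0.0], [0.0, 1.0], 0.5)
print(z_mid)   # midpoint stays on the unit circle: [0.7071..., 0.7071...]
```

a plain linear interpolation of the same two codes would pass through [0.5, 0.5], whose norm is only 0.71; slerp avoids this shrinkage toward the origin, which is one reason it is preferred for traversing latent spaces.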
|
arxiv:2307.06816
|
anomalous activities such as robbery, explosion, accidents, etc. need immediate action to prevent loss of human life and property in real-world surveillance systems. although recent automation in surveillance systems is capable of detecting anomalies, it still needs human effort for categorizing the anomalies and taking necessary preventive actions. this is due to the lack of a methodology performing both anomaly detection and classification for real-world scenarios. for a fully automated surveillance system capable of both detecting and classifying the anomalies that need immediate action, a joint anomaly detection and classification method is a pressing need. the task of joint detection and classification of anomalies becomes challenging due to the unavailability of densely annotated videos pertaining to anomalous classes, which is a crucial factor for training modern deep architectures; furthermore, producing such annotations through manual human effort seems impossible. thus, we propose a method that jointly handles anomaly detection and classification in a single framework by adopting a weakly-supervised learning paradigm, in which only video-level labels, instead of dense temporal annotations, are sufficient for learning. the proposed model is validated on the large-scale publicly available ucf-crime dataset, achieving state-of-the-art results.
|
arxiv:2108.08996
|
the experimental problem of converting a measured binomial quantity, the fraction of events in a sample that pass a cut, into a physical binomial quantity, the fraction of events originating from a signal source, is described as a system of linear equations. this linear system illustrates several familiar aspects of experimental data analysis. bayesian probability theory is used to find a solution to this binomial measurement problem that allows for the straightforward construction of confidence intervals. this solution is also shown to provide an unbiased formalism for evaluating the behavior of data sets under different choices of cuts, including a cut designed to increase the significance of a possible, albeit previously unseen, signal. several examples are used to illustrate the features of this method, including the discovery of the top quark and searches for new particles produced in association with $w^{\pm}$ bosons. it is also demonstrated how to use this method to make projections for the potential discovery of a standard model higgs boson at a tevatron run 2 experiment, as well as the utility of measuring the integrated luminosity through inclusive $p\bar{p} \to w^{\pm}$ production.
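the core inversion can be sketched as follows: the probability that an event passes the cut mixes an assumed signal efficiency with an assumed background efficiency, and a flat prior over the signal fraction gives a posterior on a grid. the efficiency values are made-up illustrations, and this grid-based calculation is a simplified stand-in for the paper's formalism, not its exact construction.

```python
import math

# Binomial measurement sketch: k of n events pass a cut, and the pass
# probability mixes a signal efficiency eps_s with a background efficiency
# eps_b:  p(f) = f*eps_s + (1 - f)*eps_b, where f is the signal fraction.
# A flat prior on f over [0, 1] yields a posterior and a credible interval.

def log_binom(k, n, p):
    """Log of the binomial likelihood for k passes out of n trials."""
    p = min(max(p, 1e-12), 1 - 1e-12)
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def posterior(k, n, eps_s, eps_b, grid=2001):
    fs = [i / (grid - 1) for i in range(grid)]
    logw = [log_binom(k, n, f * eps_s + (1 - f) * eps_b) for f in fs]
    m = max(logw)                       # subtract the max for numerical safety
    w = [math.exp(l - m) for l in logw]
    z = sum(w)
    return fs, [x / z for x in w]

def credible_interval(fs, post, level=0.68):
    # central interval from the cumulative posterior
    cdf, c = [], 0.0
    for p in post:
        c += p
        cdf.append(c)
    lo = next(f for f, cc in zip(fs, cdf) if cc >= (1 - level) / 2)
    hi = next(f for f, cc in zip(fs, cdf) if cc >= 1 - (1 - level) / 2)
    return lo, hi

# 300 of 1000 events pass; assumed efficiencies eps_s = 0.8, eps_b = 0.2
fs, post = posterior(k=300, n=1000, eps_s=0.8, eps_b=0.2)
print(credible_interval(fs, post))
```

with these assumed efficiencies the observed pass fraction of 0.30 maps to a signal fraction near 1/6, and the posterior interval quantifies its uncertainty directly, with no gaussian approximation needed.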
|
arxiv:hep-ex/9908044
|
we review a recent investigation of the effect of magnetic catalysis of mass generation in holographic yang-mills theories. we aim at a self-contained and pedagogical form of the review. we provide a brief field theory background and review the basics of holographic flavordynamics. the main part of the review investigates the influence of an external magnetic field on holographic gauge theories dual to the d3/d5- and d3/d7-brane intersections. among the observed phenomena are the spontaneous breaking of a global internal symmetry, zeeman splitting of the energy levels and the existence of pseudo-goldstone modes. an analytic derivation of the gell-mann--oakes--renner relation for the d3/d7 setup is reviewed. in the d3/d5 case the pseudo-goldstone modes satisfy a non-relativistic dispersion relation. the studies reviewed confirm the universal nature of the magnetic catalysis of mass generation.
|
arxiv:1010.0444
|
from a process development perspective, diamond growth via chemical vapor deposition has made significant strides. however, challenges persist in achieving high quality and large - area material production. these difficulties include controlling conditions to maintain uniform growth rates for the entire growth surface. as growth progresses, various factors or defect states emerge, altering the uniform conditions. these changes affect the growth rate and result in the formation of crystalline defects at the microscale. however, there is a distinct lack of methods to identify these defect states and their geometry using images taken during the growth process. this paper details seminal work on defect segmentation pipeline using in - situ optical images to identify features that indicate defective states that are visible at the macroscale. using a semantic segmentation approach as applied in our previous work, these defect states and corresponding derivative features are isolated and classified by their pixel masks. using an annotation focused human - in - the - loop software architecture to produce training datasets, with modules for selective data labeling using active learning, data augmentations, and model - assisted labeling, our approach achieves effective annotation accuracy and drastically reduces the time and cost of labeling by orders of magnitude. on the model development front, we found that deep learning - based algorithms are the most efficient. they can accurately learn complex representations from feature - rich datasets. our best - performing model, based on the yolov3 and deeplabv3plus architectures, achieved excellent accuracy for specific features of interest. specifically, it reached 93. 35 % accuracy for center defects, 92. 83 % for polycrystalline defects, and 91. 98 % for edge defects.
|
arxiv:2404.07306
|
the performance of current scene graph generation models is severely hampered by some hard-to-distinguish predicates, e.g., "woman-on/standing on/walking on-beach" or "woman-near/looking at/in front of-child". while general sgg models are prone to predict head predicates and existing re-balancing strategies prefer tail categories, none of them can appropriately handle these hard-to-distinguish predicates. to tackle this issue, inspired by fine-grained image classification, which focuses on differentiating among hard-to-distinguish object classes, we propose a method named fine-grained predicates learning (fgpl) which aims at differentiating among hard-to-distinguish predicates for the scene graph generation task. specifically, we first introduce a predicate lattice that helps sgg models figure out fine-grained predicate pairs. then, utilizing the predicate lattice, we propose a category discriminating loss and an entity discriminating loss, which both contribute to distinguishing fine-grained predicates while maintaining learned discriminatory power over recognizable ones. the proposed model-agnostic strategy significantly boosts the performance of three benchmark models (transformer, vctree, and motif) by 22.8%, 24.1% and 21.7% of mean recall (mr@100) on the predicate classification sub-task, respectively. our model also outperforms state-of-the-art methods by a large margin (i.e., 6.1%, 4.6%, and 3.2% of mean recall (mr@100)) on the visual genome dataset.
|
arxiv:2204.02597
|
continuous long - term monitoring of motor health is crucial for the early detection of abnormalities such as bearing faults ( up to 51 % of motor failures are attributed to bearing faults ). despite numerous methodologies proposed for bearing fault detection, most of them require normal ( healthy ) and abnormal ( faulty ) data for training. even with the recent deep learning ( dl ) methodologies trained on the labeled data from the same machine, the classification accuracy significantly deteriorates when one or few conditions are altered. furthermore, their performance suffers significantly or may entirely fail when they are tested on another machine with entirely different healthy and faulty signal patterns. to address this need, in this pilot study, we propose a zero - shot bearing fault detection method that can detect any fault on a new ( target ) machine regardless of the working conditions, sensor parameters, or fault characteristics. to accomplish this objective, a 1d operational generative adversarial network ( op - gan ) first characterizes the transition between normal and fault vibration signals of ( a ) source machine ( s ) under various conditions, sensor parameters, and fault types. then for a target machine, the potential faulty signals can be generated, and over its actual healthy and synthesized faulty signals, a compact, and lightweight 1d self - onn fault detector can then be trained to detect the real faulty condition in real time whenever it occurs. to validate the proposed approach, a new benchmark dataset is created using two different motors working under different conditions and sensor locations. experimental results demonstrate that this novel approach can accurately detect any bearing fault achieving an average recall rate of around 89 % and 95 % on two target machines regardless of its type, severity, and location.
|
arxiv:2212.06154
|
##ods, plants, and ultimately pond-dwelling organisms that are the food of the endangered amphibian. === geosciences === geosciences include environmental geology, environmental soil science, volcanic phenomena and evolution of the earth's crust. in some classification systems this can also include hydrology, including oceanography. as an example study of soil erosion, calculations would be made of surface runoff by soil scientists. fluvial geomorphologists would assist in examining sediment transport in overland flow. physicists would contribute by assessing the changes in light transmission in the receiving waters. biologists would analyze subsequent impacts to aquatic flora and fauna from increases in water turbidity. == regulations driving the studies == in the united states the national environmental policy act (nepa) of 1969 set forth requirements for analysis of federal government actions (such as highway construction projects and land management decisions) in terms of specific environmental criteria. numerous state laws have echoed these mandates, applying the principles to local-scale actions. the upshot has been an explosion of documentation and study of environmental consequences before the fact of development actions. one can examine the specifics of environmental science by reading examples of environmental impact statements prepared under nepa such as: wastewater treatment expansion options discharging into the san diego/tijuana estuary, expansion of the san francisco international airport, development of the houston metro transportation system, expansion of the metropolitan boston mbta transit system, and construction of interstate 66 through arlington, virginia. in england and wales the environment agency (ea), formed in 1996, is a public body for protecting and improving the environment and enforces the regulations listed on the communities and local government site (formerly the office of the deputy prime minister).
the agency was set up under the environment act 1995 as an independent body and works closely with the uk government to enforce the regulations. == see also == environmental engineering science, environmental informatics, environmental monitoring, environmental planning, environmental statistics, glossary of environmental science, list of environmental studies topics == references == == external links == glossary of environmental terms -- global development research center
|
https://en.wikipedia.org/wiki/Environmental_science
|
background: it is still an open research area to theoretically understand why deep neural networks (dnns), equipped with many more parameters than training data and trained by (stochastic) gradient-based methods, often achieve remarkably low generalization error. contribution: we study dnn training by fourier analysis. our theoretical framework explains: i) dnns trained with (stochastic) gradient-based methods often endow low-frequency components of the target function with a higher priority during the training; ii) small initialization leads to good generalization ability of a dnn while preserving the dnn's ability to fit any function. these results are further confirmed by experiments of dnns fitting the following datasets: natural images, one-dimensional functions and the mnist dataset.
|
arxiv:1808.04295
|
this paper presents a comprehensive evaluation of github copilot ' s deployment and impact on developer productivity at zoominfo, a leading go - to - market ( gtm ) intelligence platform. we describe our systematic four - phase approach to evaluating and deploying github copilot across our engineering organization, involving over 400 developers. our analysis combines both quantitative metrics, focusing on acceptance rates of suggestions given by github copilot and qualitative feedback given by developers through developer satisfaction surveys. the results show an average acceptance rate of 33 % for suggestions and 20 % for lines of code, with high developer satisfaction scores of 72 %. we also discuss language - specific performance variations, limitations, and lessons learned from this medium - scale enterprise deployment. our findings contribute to the growing body of knowledge about ai - assisted software development in enterprise settings.
|
arxiv:2501.13282
|
we investigate the asymptotic disconnection time of a large discrete cylinder $(\mathbb{Z}/n\mathbb{Z})^{d} \times \mathbb{Z}$, $d \geq 2$, by simple and biased random walks. for simple random walk, we derive a sharp asymptotic lower bound that matches the upper bound from [sznitman, ann. probab., 2009]. for biased walks, we obtain bounds that asymptotically match in the principal order when the bias is not too strong, which greatly improves non-matching bounds from [windisch, ann. appl. probab., 2008]. as a crucial tool in the proof, we also obtain a "very strong" coupling between the trace of random walk on the cylinder and random interlacements, which is of independent interest.
|
arxiv:2409.17900
|
global u(1) strings with cylindrical symmetry are studied in anti-de sitter spacetime. depending on the magnitude of the negative cosmological constant, they form regular global cosmic strings, extremal black cosmic strings or charged black cosmic strings, but no curvature singularity is involved. the relationship between the topological charge of a neutral global string and the black hole charge is clarified by a duality transformation. the physical relevance as a straight string is briefly discussed.
|
arxiv:gr-qc/9707011
|
the h - principle, which is the analogue, for cr manifolds, of the classical hartogs principle in several complex variables, is known to be valid in the small on a pseudoconcave cr manifold of any codimension. however it fails in the large, as has been shown by the counterexample found in [ hn1 ]. hence there is an underlying obstruction to the global h - principle on a pseudoconcave cr manifold. the purpose of this note is to take the first steps toward a deeper understanding of this obstruction.
|
arxiv:0710.5728
|
today, mobile operators are starting to deploy fifth - generation ( 5g ) networks to expand the coverage ubiquity of broadband wireless service. in contrast, in - flight connectivity remains limited and its quality of service does not always meet the expectations. embracing 5g new radio ( nr ) in air - to - ground ( a2g ) communication systems can help narrow the gap between airborne and ground connectivity. in this article, we focus on 5g nr based direct a2g communications. we first provide an overview of the existing a2g systems which are based on earlier generations of mobile technologies. then we confirm the feasibility of nr a2g systems with a performance study in a range of bands from below 7 ghz to millimeter wave frequencies. the results show that nr a2g systems can provide significantly improved data rates for in - flight connectivity. we also identify the major challenges associated with nr a2g communications, discuss enhancements to counteract the challenges, and point out fruitful avenues for future research.
|
arxiv:2003.06361
|
we develop a new approach for calculating the spin-independent 2-neutrino exchange potential (2-nep) between non-relativistic fermions which places emphasis on the neutrino vacuum state, an area of theoretical interest in recent years. the 2-nep is a natural probe of fundamental issues of neutrino physics such as neutrino masses, flavor mixing, the number of neutrino flavors, neutrino nature (dirac or majorana), $CP$-violation, and the neutrino vacuum state. we explore the dependence of the 2-nep on the mixing of neutrino mass states, assuming normal and inverted mass ordering for nucleon-nucleon, nucleon-lepton, and lepton-lepton interactions, and on the $CP$-violation phase in the neutrino mixing matrix.
|
arxiv:1901.05345
|
this paper is devoted to the study of the flatness property of linear time-invariant fractional systems. in the framework of polynomial matrices of the fractional derivative operator, we give a characterization of fractionally flat outputs and a simple algorithm to compute them. we also obtain a characterization of the so-called fractionally $0$-flat outputs. we then present an application to a two-dimensional heated metallic sheet, whose dynamics may be approximated by a fractional model of order 1/2. the trajectory planning of the temperature at a given point of the metallic sheet is obtained thanks to the fractional flatness property, without integrating the system equations. the pertinence of this approach is discussed on simulations.
|
arxiv:1311.3805
|
the 2df galaxy redshift survey (2dfgrs) is designed to measure redshifts for approximately 250000 galaxies. this paper describes the survey design, the spectroscopic observations, the redshift measurements and the survey database. the 2dfgrs uses the 2df multi-fibre spectrograph on the anglo-australian telescope, which is capable of observing 400 objects simultaneously over a 2-degree diameter field. the source catalogue for the survey is a revised and extended version of the apm galaxy catalogue, and the targets are galaxies with extinction-corrected magnitudes brighter than b_j = 19.45. the main survey regions are two declination strips, one in the southern galactic hemisphere spanning 80deg x 15deg around the sgp, and the other in the northern galactic hemisphere spanning 75deg x 10deg along the celestial equator; in addition, there are 99 fields spread over the southern galactic cap. the survey covers 2000 sq. deg and has a median depth of z = 0.11. adaptive tiling is used to give a highly uniform sampling rate of 93% over the whole survey region. redshifts are measured from spectra covering 3600a-8000a at a two-pixel resolution of 9.0a and a median s/n of 13 per pixel. all redshift identifications are visually checked and assigned a quality parameter q in the range 1-5; q >= 3 redshifts are 98.4% reliable and have an rms uncertainty of 85 km/s. the overall redshift completeness for q >= 3 redshifts is 91.8%, but this varies with magnitude from 99% for the brightest galaxies to 90% for objects at the survey limit. the 2dfgrs database is available on the www at http://www.mso.anu.edu.au/2dfgrs
|
arxiv:astro-ph/0106498
|
automatic anomaly detection is a major issue in various areas. beyond mere detection, the identification of the origin of the problem that produced the anomaly is also essential. this paper introduces a general methodology that can assist human operators who aim at classifying monitoring signals. the main idea is to leverage expert knowledge by generating a very large number of indicators. a feature selection method is used to keep only the most discriminant indicators, which are used as inputs of a naive bayes classifier. the parameters of the classifier have been optimized indirectly by the selection process. the approach is illustrated on simulated data designed to reproduce some of the anomaly types observed in real-world engines.
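the indicator-generation, selection, and classification pipeline can be sketched end to end as follows. the synthetic signals, the six example indicators, and the fisher-score selection criterion are all illustrative assumptions standing in for the paper's expert-designed indicators and engine data.

```python
import math, random

# Pipeline sketch: compute many candidate indicators from each monitoring
# signal, keep the most discriminant ones via a Fisher score, then feed the
# selected indicators to a Gaussian naive Bayes classifier.

random.seed(0)

def make_signal(anomalous):
    sig = [random.gauss(0.0, 1.0) for _ in range(200)]
    if anomalous:                       # anomaly: slow drift on top of the noise
        sig = [x + 0.02 * i for i, x in enumerate(sig)]
    return sig

def indicators(sig):
    n = len(sig)
    mean = sum(sig) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in sig) / n)
    rough = sum(abs(sig[i + 1] - sig[i]) for i in range(n - 1)) / n
    return [mean, std, max(sig), min(sig), sig[-1] - sig[0], rough]

def fisher(c0, c1):
    # discriminating power of one indicator: between-class over within-class spread
    m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)
    v0 = sum((x - m0) ** 2 for x in c0) / len(c0)
    v1 = sum((x - m1) ** 2 for x in c1) / len(c1)
    return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

labels = [0] * 100 + [1] * 100
feats = [indicators(make_signal(a)) for a in labels]

# feature selection: keep the two most discriminant indicators
nfeat = len(feats[0])
scores = [fisher([f[j] for f, a in zip(feats, labels) if a == 0],
                 [f[j] for f, a in zip(feats, labels) if a == 1])
          for j in range(nfeat)]
keep = sorted(range(nfeat), key=lambda j: -scores[j])[:2]

# Gaussian naive Bayes on the selected indicators (first 70 per class train)
train = [(f, a) for i, (f, a) in enumerate(zip(feats, labels)) if i % 100 < 70]
test = [(f, a) for i, (f, a) in enumerate(zip(feats, labels)) if i % 100 >= 70]

def fit(train_rows):
    params = {}
    for c in (0, 1):
        rows = [f for f, a in train_rows if a == c]
        stats = []
        for j in keep:
            col = [r[j] for r in rows]
            m = sum(col) / len(col)
            v = sum((x - m) ** 2 for x in col) / len(col) + 1e-9
            stats.append((m, v))
        params[c] = stats
    return params

def predict(f, params):
    ll = {c: sum(-0.5 * math.log(2 * math.pi * v) - (f[j] - m) ** 2 / (2 * v)
                 for j, (m, v) in zip(keep, params[c]))
          for c in params}
    return max(ll, key=ll.get)

params = fit(train)
acc = sum(predict(f, params) == a for f, a in test) / len(test)
print(f"selected indicators: {keep}, test accuracy: {acc:.2f}")
```

note how the selection step implicitly tunes the classifier, as the abstract describes: only indicators whose class-conditional distributions are well separated survive, so the naive bayes gaussians fitted afterwards are the ones most likely to discriminate.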
|
arxiv:1407.0880
|
motivated by sr$_2$ruo$_4$, edge quasiparticle states are analyzed based on the self-consistent solution of the bogolyubov-de gennes equations for a topological chiral $p$-wave superconductor. using a tight-binding model of a square lattice for the dominant $\gamma$-band we explore the non-trivial geometry and band structure dependence of the edge states and currents. as a peculiar finding we show that for high band fillings the currents flow in reversed directions when comparing straight and zigzag edges. we give a simple explanation in terms of the positions of the zero-energy bound states using a quasi-classical picture. we also show that a ginzburg-landau approach can reproduce these results. moreover, the band filling dependence of the most stable domain wall structure is discussed.
|
arxiv:1409.1516
|
building on recent work of a. harper (2012), and using various results of m. c. chang (2014) and h. iwaniec (1974) on the zero-free regions of $L$-functions $L(s,\chi)$ for characters $\chi$ with a smooth modulus $q$, we establish a conjecture of k. soundararajan (2008) on the distribution of smooth numbers over reduced residue classes for such moduli $q$. a crucial ingredient in our argument is that, for such $q$, there is at most one "problem character" for which $L(s,\chi)$ has a smaller zero-free region. similarly, using the "deuring-heilbronn" phenomenon on the repelling nature of zeros of $L$-functions close to one, we also show that soundararajan's conjecture holds for a family of moduli having siegel zeros.
|
arxiv:2009.06800
|
we demonstrate a lumped-element josephson parametric amplifier, using a single-ended design that includes an on-chip, high-bandwidth flux bias line. the amplifier can be pumped into its region of parametric gain through either the input port or the flux bias line. broadband amplification is achieved at a tunable frequency $\omega/2\pi$ between 5 and 7 ghz with quantum-limited noise performance, a gain-bandwidth product greater than 500 mhz, and an input saturation power in excess of -120 dbm. the bias line allows fast frequency tuning of the amplifier, with variations of hundreds of mhz over time scales shorter than 10 ns.
|
arxiv:1308.1376
|
a telecom photon is a suitable information carrier in a fiber-based quantum network due to its lower transmission loss in fiber. because of the paucity of suitable atomic systems, the photon connecting different memories is usually in the near-infrared band; therefore, frequency conversion of the photon into and out of the telecom band is required to realize the interface between the atom-based memory and the photon-based carrier. for that, two atomic or other systems capable of realizing the frequency conversion have to be employed and, in addition, one more atomic system is needed as a storage medium. the ability to store a photon in the telecom band is therefore an interesting and exciting topic. in this work, we give a first experimental proof of principle of storing light in the telecom band. the telecom light is directly stored and retrieved later through two nonlinear processes via an inverted-y configuration in a cold atomic ensemble; therefore the interface between the memory and the photon required in other proposals is not needed. we believe our work may open a new avenue for long-distance quantum communication.
|
arxiv:1210.3963
|
the difficulty in exploring potential energy surfaces, which are nonconvex, stems from the presence of many local minima, typically separated by high barriers and often disconnected in configurational space. we obtain the global minimum on model potential energy surfaces without sampling any minima a priori. instead, a different problem is derived, which is convex and hence easy to solve, but which is guaranteed either to have the same solution or to be a lower bound to the true solution. a systematic way of improving the latter solutions is also given. because many nonconvex problems are projections of higher-dimensional convex problems, parrilo has recently shown that a large class of nonconvex problems can be solved efficiently by obtaining a sum of squares decomposition of the original problem, which can subsequently be transformed into a semidefinite programme. the semidefinite duality formulation also provides a proof that the global minimum of the energy surface has either been found exactly or has been bounded from below. it additionally provides physical insight into the problem through a geometric interpretation. the sum of squares polynomial representation of the potential energy surface may further reveal information about the nature of the potential energy surface. we demonstrate the applicability of this approach to low-dimensional potential energy landscapes and discuss its merits and shortcomings.
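the sum-of-squares lower-bounding idea can be illustrated on a univariate toy surface. the polynomial below, the grid over the free gram-matrix parameter, and the bisection tolerance are our illustrative choices, not taken from the abstract: we search for the largest shift $\gamma$ such that $p(x)-\gamma$ admits a positive-semidefinite gram (sum-of-squares) certificate, which lower-bounds the global minimum.

```python
import numpy as np

def sos_lower_bound(tol=1e-4):
    """largest gamma with p(x) - gamma = [1, x, x^2] Q [1, x, x^2]^T, Q psd,
    for the toy surface p(x) = x^4 - 3 x^2 + 1 (an illustrative choice)."""
    def is_sos(gamma):
        # gram matrices of p - gamma form a one-parameter family in t:
        # Q = [[1 - gamma, 0, t], [0, -3 - 2 t, 0], [t, 0, 1]]
        for t in np.linspace(-5.0, 5.0, 801):
            q = np.array([[1.0 - gamma, 0.0, t],
                          [0.0, -3.0 - 2.0 * t, 0.0],
                          [t, 0.0, 1.0]])
            if np.linalg.eigvalsh(q)[0] >= -1e-9:   # psd check
                return True
        return False

    lo, hi = -10.0, 10.0            # bisect on the candidate bound gamma
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_sos(mid):
            lo = mid
        else:
            hi = mid
    return lo

bound = sos_lower_bound()
# the true global minimum of p is -1.25, attained at x^2 = 3/2, and for a
# univariate polynomial the sos bound is tight
```

here the convexity is in the gram-matrix variables, so certifying each candidate bound is easy even though p itself is nonconvex.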
|
arxiv:cond-mat/0508293
|
upcoming space-based surveys such as euclid and wfirst-afta plan to measure baryonic acoustic oscillations (baos) in order to study dark energy. these surveys will use ir slitless grism spectroscopy to measure redshifts of a large number of galaxies over a significant redshift range. in this paper, we use the wfc3 infrared spectroscopic parallel survey (wisp) to estimate the expected number of halpha (ha) emitters observable by these future surveys. wisp is an ongoing hst slitless spectroscopic survey, covering the 0.8-1.65 micron wavelength range and allowing the detection of ha emitters up to z ~ 1.5 and [oiii] emitters to z ~ 2.3. we derive the ha-[oiii] bivariate line luminosity function for wisp galaxies at z ~ 1 using a maximum likelihood estimator that properly accounts for uncertainties in line luminosity measurement, and demonstrate how it can be used to derive the ha luminosity function from exclusively fitting [oiii] data. using the z ~ 2 [oiii] line luminosity function, and assuming that the relation between ha and [oiii] luminosity does not change significantly over the redshift range, we predict the ha number counts at z ~ 2 - the upper end of the redshift range of interest for the future surveys. for the redshift range 0.7 < z < 2, we expect ~3000 galaxies/deg^2 for a flux limit of 3x10^{-16} erg/s/cm^2 (the proposed depth of the euclid galaxy redshift survey) and ~20,000 galaxies/deg^2 for a flux limit of ~10^{-16} erg/s/cm^2 (the baseline depth of the wfirst galaxy redshift survey).
|
arxiv:1505.07843
|
3d chern-simons gauge theory has a strong connection with 2d cft and link invariants in knot theory. we impose some constraints on the $D(2|1;\alpha)$ cs theory, in a context similar to the hamiltonian reduction of 2d superconformal algebras. the hilbert states of the $D(2|1;\alpha)$ cs theory are then partly identified with characters of the large n = 4 scft by their transformation properties.
|
arxiv:hep-th/9808094
|
nowadays, multivariate time series data are increasingly collected in various real-world systems, e.g., power plants, wearable devices, etc. anomaly detection and diagnosis in multivariate time series refer to identifying abnormal status in certain time steps and pinpointing the root causes. building such a system, however, is challenging since it requires not only capturing the temporal dependency in each time series, but also encoding the inter-correlations between different pairs of time series. in addition, the system should be robust to noise and provide operators with different levels of anomaly scores based upon the severity of different incidents. despite the fact that a number of unsupervised anomaly detection algorithms have been developed, few of them can jointly address these challenges. in this paper, we propose a multi-scale convolutional recurrent encoder-decoder (mscred) to perform anomaly detection and diagnosis in multivariate time series data. specifically, mscred first constructs multi-scale (resolution) signature matrices to characterize multiple levels of the system status at different time steps. subsequently, given the signature matrices, a convolutional encoder is employed to encode the inter-sensor (time series) correlations, and an attention-based convolutional long short-term memory (convlstm) network is developed to capture the temporal patterns. finally, based upon the feature maps which encode the inter-sensor correlations and temporal information, a convolutional decoder is used to reconstruct the input signature matrices, and the residual signature matrices are further utilized to detect and diagnose anomalies. extensive empirical studies based on a synthetic dataset and a real power plant dataset demonstrate that mscred can outperform state-of-the-art baseline methods.
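the signature-matrix construction can be sketched in a few lines; the window lengths and the toy data below are illustrative assumptions, not mscred's actual configuration.

```python
import numpy as np

def signature_matrices(x, windows=(5, 10, 20)):
    """for a multivariate series x of shape (T, n), return one n-by-n
    inter-sensor ('signature') matrix per scale w, computed from the most
    recent w steps as the scaled pairwise inner products X_w^T X_w / w."""
    mats = []
    for w in windows:
        seg = x[-w:]                  # last w time steps at this scale
        mats.append(seg.T @ seg / w)  # n-by-n inter-sensor correlations
    return mats

rng = np.random.default_rng(0)
series = rng.normal(size=(100, 4))    # toy: 100 steps, 4 sensors
sigs = signature_matrices(series)     # one matrix per scale
```

each matrix is symmetric by construction, and a stack of them over time is what the convolutional encoder would consume.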
|
arxiv:1811.08055
|
colmez, dospinescu and niziol have shown that the only $p$-adic representations of $\mathrm{Gal}(\bar{\mathbb{Q}}_p/\mathbb{Q}_p)$ appearing in the $p$-adic étale cohomology of the coverings of drinfeld's half-plane are the $2$-dimensional cuspidal representations (i.e. potentially semi-stable, whose associated weil-deligne representation is irreducible) with hodge-tate weights $0$ and $1$, and their multiplicities are given by the $p$-adic langlands correspondence. we generalise this result to arbitrary weights, by considering the $p$-adic étale cohomology with coefficients in the symmetric powers of the universal local system on drinfeld's tower. a novelty is the appearance of potentially semistable $2$-dimensional non-crystabelline representations, with the expected multiplicity. the key point is that the local systems we consider turn out to be particularly simple: they are "isotrivial opers" on a curve. we develop a recipe to compute the proétale cohomology of such a local system using the hyodo-kato cohomology of the curve and the de rham complex of the flat filtered bundle associated to the local system.
|
arxiv:2405.10048
|
the coupling of the high - lying dipole mode to the low - lying quadrupole modes for the case of deformed gamma - unstable nuclei is studied. results from the geometrical model are compared to those obtained within the dipole boson model. consistent results are obtained in both models. the dipole boson model is treated within the intrinsic frame, with subsequent projection onto the laboratory frame. as an application, calculations of photonuclear cross - sections in gamma - unstable nuclei are presented.
|
arxiv:nucl-th/0007052
|
we consider a bottom-up ads/qcd model with a conformal exponential deformation $e^{k z^2}$ on a lorentz-invariant ads background. in this model, we assume the conformal dimension associated with the operator that creates pions at the boundary to be $\Delta = 3$. regarding the infrared scale related to the photon field, $k_\gamma$, we analyze two cases: constant and depending on the transferred momentum $q$. in these two cases, we computed the electromagnetic pion form factor as well as the pion radius. we compare our results with experimental data as well as with other theoretical (holographic and non-holographic) models. in particular, for the momentum-dependent infrared scale, we find good agreement with the available experimental data as well as with non-holographic models.
|
arxiv:2104.04640
|
objective. as proton arc therapy ( pat ) approaches clinical implementation, optimizing treatment plans for this innovative delivery modality remains challenging, especially in addressing arc delivery time. existing algorithms for minimizing delivery time are either optimal but computationally demanding or fast but at the expense of sacrificing many degrees of freedom. in this study, we introduce a flexible method for pre - selecting energy layers ( el ) in pat treatment planning before the actual robust spot weight optimization. our el pre - selection method employs metaheuristics to minimize a bi - objective function, considering a dynamic delivery time proxy and tumor geometrical coverage penalized as a function of selected organs - at - risk crossing. it is capable of parallelizing multiple instances of the problem. we evaluate the method using three different treatment sites, providing a comprehensive dosimetric analysis benchmarked against dynamic proton arc plans generated with early energy layer selection and spot assignment ( elsa ) and impt plans in raystation tps. the algorithm efficiently generates pareto - optimal el pre - selections in approximately 5 minutes. subsequent pat treatment plans derived from these selections and optimized within the tps, demonstrate high - quality target coverage, achieving a high conformity index, and effective sparing of organs at risk. these plans meet clinical goals while achieving a 20 to 40 % reduction in delivery time compared to elsa plans. the proposed algorithm offers speed and efficiency, producing high - quality pat plans by placing proton arc sectors to efficiently reduce delivery time while maintaining good target coverage and healthy tissues sparing.
|
arxiv:2410.07716
|
geometric properties of the set of quantum entangled states are investigated. we propose an explicit method to compute the dimension of local orbits for any mixed state of the general k x m problem and characterize the set of effectively different states ( which cannot be related by local transformations ). thus we generalize earlier results obtained for the simplest 2 x 2 system, which lead to a stratification of the 6d set of n = 4 pure states. we define the concept of absolutely separable states, for which all globally equivalent states are separable.
|
arxiv:quant-ph/0006068
|
given a language model (lm), maximum probability is a poor decoding objective for open-ended generation, because it produces short and repetitive text. on the other hand, sampling can often produce incoherent text that drifts from the original topics. we propose contrastive decoding (cd), a reliable decoding approach that optimizes a contrastive objective subject to a plausibility constraint. the contrastive objective returns the difference between the likelihood under a large lm (called the expert, e.g. opt-13b) and a small lm (called the amateur, e.g. opt-125m), and the constraint ensures that the outputs are plausible. cd is inspired by the fact that the failures of larger lms (e.g., repetition, incoherence) are even more prevalent in smaller lms, and that this difference signals which texts should be preferred. cd requires zero additional training, and produces higher quality text than decoding from the larger lm alone. it also works across model scales (opt-13b and gpt2-1.5b) and significantly outperforms four strong decoding algorithms (e.g., nucleus, top-k) in automatic and human evaluations across wikipedia, news and story domains.
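the cd scoring rule is easy to sketch with toy next-token distributions. the plausibility constraint shown (keep only tokens whose expert probability is at least a fraction alpha of the expert maximum) follows the description above, while the vocabulary and the numbers are made up.

```python
import numpy as np

def contrastive_pick(p_expert, p_amateur, alpha=0.1):
    """return the token index maximizing log p_expert - log p_amateur
    among tokens passing the plausibility constraint
    p_expert >= alpha * max(p_expert)."""
    p_expert = np.asarray(p_expert, dtype=float)
    p_amateur = np.asarray(p_amateur, dtype=float)
    plausible = p_expert >= alpha * p_expert.max()
    scores = np.log(p_expert) - np.log(p_amateur)
    scores[~plausible] = -np.inf      # rule out implausible tokens
    return int(np.argmax(scores))

# toy vocabulary of 4 tokens
expert = [0.50, 0.30, 0.15, 0.05]
amateur = [0.45, 0.10, 0.30, 0.15]
best = contrastive_pick(expert, amateur)   # token 1: largest expert/amateur gap
greedy = int(np.argmax(expert))            # token 0 under plain max-probability
```

with a stricter constraint (e.g. alpha = 0.7) only the expert's top token survives, so cd falls back to greedy decoding — the constraint is what prevents the contrast from promoting rare, implausible tokens.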
|
arxiv:2210.15097
|
shed some light on attacking both of the conjectures when $\delta$ is large.
|
arxiv:2005.12909
|
causality among events is widely recognized as a most fundamental structure of spacetime, and causal sets have been proposed as discrete models of the latter in the context of quantum gravity theories, notably in the causal set programme. in the rather different context of what might be called the ' computational universe programme ' - - one which associates the complexity of physical phenomena to the emergent features of models such as cellular automata - - a choice problem arises with respect to the variety of formal systems that, in virtue of their computational universality ( turing - completeness ), qualify as equally good candidates for a computational, unified theory of physics. this paper proposes causal sets as the only objects of physical significance and relevance to be considered under the ' computational universe ' perspective, and as the appropriate abstraction for shielding the unessential details of the many different computationally universal candidate models. at the same time, we propose a fully deterministic, radical alternative to the probabilistic techniques currently considered in the causal set programme for growing discrete spacetimes. we investigate a number of computation models by grouping them into two broad classes, based on the support on which they operate ; in one case this is linear, like a tape or a string of symbols ; in the other, it is a two - dimensional grid or a planar graph. for each model we identify the causality relation among computation events, implement it, and conduct a possibly exhaustive exploration of the associated causal set space, while examining quantitative and qualitative features such as dimensionality, curvature, planarity, emergence of pseudo - randomness, causal set substructures and particles.
|
arxiv:1004.3128
|
modern enterprise servers are increasingly embracing tiered memory systems with a combination of low latency drams and large capacity but high latency non - volatile main memories ( nvmms ) such as intel ' s optane dc pmm. prior works have focused on efficient placement and migration of data on a tiered memory system, but have not studied the optimal placement of page tables. explicit and efficient placement of page tables is crucial for large memory footprint applications with high tlb miss rates because they incur dramatically higher page walk latency when page table pages are placed in nvmm. we show that ( i ) page table pages can end up on nvmm even when enough dram memory is available and ( ii ) page table pages that spill over to nvmm due to dram memory pressure are not migrated back later when memory is available in dram. we study the performance impact of page table placement in a tiered memory system and propose an efficient and transparent page table management technique that ( i ) applies different placement policies for data and page table pages, ( ii ) introduces a differentiating policy for page table pages by placing a small but critical part of the page table in dram, and ( iii ) dynamically and judiciously manages the rest of the page table by transparently migrating the page table pages between dram and nvmm. our implementation on a real system equipped with intel ' s optane nvmm running linux reduces the page table walk cycles by 12 % and total cycles by 20 % on an average. this improves the runtime by 20 % on an average for a set of synthetic and real - world large memory footprint applications when compared with various default linux kernel techniques.
|
arxiv:2103.10779
|
fake products are items that are marketed and sold as genuine, high-quality products but are counterfeit or low-quality knockoffs. these products are often designed to closely mimic the appearance and branding of the genuine product to deceive consumers into thinking they are purchasing the real thing. fake products can range from clothing and accessories to electronics and other goods, and can be found in a variety of settings, including online marketplaces and brick-and-mortar stores. blockchain technology can help detect fake products in a few different ways. one of the most common is the use of smart contracts, which are self-executing contracts with the terms of the agreement between buyer and seller written directly into lines of code; this allows a high level of transparency and traceability in supply chain transactions, making it easier to identify and prevent the sale of fake products. another is the use of unique product identifiers, such as serial numbers or qr codes, recorded on the blockchain; this allows consumers to easily verify the authenticity of a product by scanning the code and checking it against the information recorded on the blockchain. in this study, we use smart contracts to detect fake products and evaluate each implementation based on its gas cost and the ethers used.
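the identifier-verification idea can be sketched without any blockchain machinery. the class below is a plain in-memory stand-in, not a smart contract: there is no consensus and no gas accounting, and the serial numbers are hypothetical, but it shows the register-then-verify flow that a contract would implement on-chain.

```python
import hashlib

class ProductRegistry:
    """toy, in-memory stand-in for an on-chain product registry: the
    manufacturer records a hash of each genuine serial number, and a
    buyer verifies a scanned code against the recorded hashes."""
    def __init__(self):
        self._registered = set()

    @staticmethod
    def _digest(serial: str) -> str:
        # store hashes rather than raw serials, as one would on a
        # public ledger
        return hashlib.sha256(serial.encode()).hexdigest()

    def register(self, serial: str) -> None:
        self._registered.add(self._digest(serial))

    def verify(self, serial: str) -> bool:
        return self._digest(serial) in self._registered

registry = ProductRegistry()
registry.register("SN-0001")            # hypothetical genuine serial
genuine = registry.verify("SN-0001")    # recognized
fake = registry.verify("SN-9999")       # unknown serial fails verification
```

in a real deployment the registry state would live in contract storage and each `register`/`verify` call would be a transaction or view call, which is where the gas-cost comparison in the study comes in.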
|
arxiv:2308.04006
|
the predominant reason for the damaging power of high - energy radiation is multiple ionization of a molecule, either direct or via the decay of highly excited intermediates, as e. g., in the case of x - ray irradiation. consequently, the molecule is irreparably damaged by the subsequent fragmentation in a coulomb explosion. in an aqueous environment, however, it has been observed that irradiated molecules may be saved from fragmentation presumably by charge and energy dissipation mechanisms. here, we show that the protective effect of the environment sets in even earlier than hitherto expected, namely immediately after single inner - shell ionization. by combining coincidence measurements of the fragmentation of x - ray - irradiated microsolvated pyrimidine molecules with theoretical calculations, we identify direct intermolecular electronic decay as the protective mechanism, outrunning the usually dominant auger decay. our results demonstrate that such processes play a key role in charge delocalization and have to be considered in investigations and models on high - energy radiation damage in realistic environments.
|
arxiv:2106.08639
|
. m., which is on pi day at tau time. peter harremoes has used τ in a mathematical research article which was granted editor's award of the year. = = in programming languages and calculators = = the following table documents various programming languages that have implemented the circle constant for converting between turns and radians. all of the languages below support the name "tau" in some casing, but processing also supports "two_pi" and raku also supports the symbol "τ" for accessing the same value. the constant τ is made available in the google calculator, the desmos graphing calculator, and the iphone's convert angle option expresses the turn as τ. = = external links = = the tau manifesto
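for example, python exposes the circle constant directly in its standard library, where it equals exactly twice the stored value of pi:

```python
import math

# python has shipped math.tau in the math module since version 3.6
assert math.tau == 2 * math.pi

# converting an angle in radians to turns is a single division by tau
half_turn = math.pi / math.tau   # 0.5 turns
```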
|
https://en.wikipedia.org/wiki/Tau_(mathematics)
|
when dealing with non-stationary systems for which many time series are available, it is common to divide time into epochs, i.e. smaller time intervals, and to deal with short time series in the hope of having some form of approximate stationarity on that time scale. we can then study time evolution by looking at properties as a function of the epochs. this leads to singular correlation matrices and thus poor statistics. in the present paper, we propose an ensemble technique to deal with a large set of short time series without any consideration of non-stationarity. given a singular data matrix, we randomly select subsets of time series and thus create an ensemble of non-singular correlation matrices. as the number of possible selections is binomially large, we obtain good statistics for the eigenvalues of the correlation matrices, which are typically not independent. once we have defined the ensemble, we analyze its behavior for constant and block-diagonal correlations and compare numerics with analytic results for the corresponding correlated wishart ensembles. we discuss differences resulting from spurious correlations due to the repetitive use of time series. the usefulness of this technique should extend beyond the stationary case if, on the time scale of the epochs, we have quasi-stationarity at least for most epochs.
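the subset-ensemble construction can be sketched as follows; the numbers of series, epoch length, subset size, and draw count are illustrative choices. the key point is that each draw picks fewer series than there are time steps, so every correlation matrix in the ensemble is non-singular even though the full data matrix is.

```python
import numpy as np

def correlation_ensemble(data, k, draws=200, seed=1):
    """data: array of shape (N, T) holding N short series of length T.
    per draw, pick k of the N series at random (k <= T, so the k-by-k
    correlation matrix is generically non-singular) and collect its
    eigenvalue spectrum."""
    rng = np.random.default_rng(seed)
    n_series = data.shape[0]
    spectra = []
    for _ in range(draws):
        idx = rng.choice(n_series, size=k, replace=False)
        corr = np.corrcoef(data[idx])          # k-by-k correlation matrix
        spectra.append(np.linalg.eigvalsh(corr))
    return np.array(spectra)                   # shape (draws, k)

rng = np.random.default_rng(0)
series = rng.normal(size=(50, 12))   # 50 series over an epoch of 12 steps
spectra = correlation_ensemble(series, k=8)
```

because each correlation matrix has unit diagonal, every spectrum in the ensemble sums to k, which is a convenient sanity check.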
|
arxiv:1801.07790
|
general one-loop formulas for the loop-induced processes $\gamma\gamma \rightarrow \phi_i \phi_j$ with $\phi_i \phi_j = hh,\, hH,\, HH$ are presented in this paper. the analytic expressions evaluated in this work are valid for a class of higgs extensions of the standard model, e.g. inert doublet higgs models, two higgs doublet models, zee-babu models, as well as triplet higgs models, etc. analytic expressions for the one-loop form factors are written in terms of the basic scalar one-loop two-, three- and four-point functions following the output format of both the packages {\tt looptools} and {\tt collier}. physical results can hence be evaluated numerically by using either of the mentioned packages. the analytic results are tested by several checks, such as the ultraviolet finiteness and infrared finiteness of the one-loop amplitudes. furthermore, the amplitudes obey the ward identity due to the on-shell initial photons; this identity is also verified numerically in this work. as an application, we present phenomenological results for the zee-babu model as a typical example. the production cross-sections for the processes $\gamma\gamma \rightarrow hh$ are scanned over the parameter space of the zee-babu model.
|
arxiv:2410.06827
|
topics cover most, if not all, of the subdisciplines of manufacturing engineering. students then choose to specialize in one or more subdisciplines towards the end of their degree work. = = = syllabus = = = the foundational curriculum for a bachelor's degree in manufacturing engineering or production engineering includes the syllabus mentioned below. this syllabus is closely related to industrial engineering and mechanical engineering, but it differs by placing more emphasis on manufacturing science or production science. it includes the following areas: mathematics (calculus, differential equations, statistics and linear algebra); mechanics (statics & dynamics); solid mechanics; fluid mechanics; materials science; strength of materials; fluid dynamics; hydraulics; pneumatics; hvac (heating, ventilation & air conditioning); heat transfer; applied thermodynamics; energy conversion; instrumentation and measurement; engineering drawing (drafting) & engineering design; engineering graphics; mechanism design, including kinematics and dynamics; manufacturing processes; mechatronics; circuit analysis; lean manufacturing; automation; reverse engineering; quality control; cad (computer aided design); cam (computer aided manufacturing); project management. a degree in manufacturing engineering typically differs from mechanical engineering in only a few specialized classes. mechanical engineering degrees focus more on the product design process and on complex products, which requires more mathematical expertise. = = manufacturing engineering certification = = certification and licensure: in some countries, "professional engineer" is the term for registered or licensed engineers who are permitted to offer their professional services directly to the public. professional engineer, abbreviated pe (usa) or peng (canada), is the designation for licensure in north america.
to qualify for this license, a candidate needs a bachelor ' s degree from an abet - recognized university in the usa, a passing score on a state examination, and four years of work experience usually gained via a structured internship. in the usa, more recent graduates have the option of dividing this licensure process into two segments. the fundamentals of engineering ( fe ) exam is often taken immediately after graduation and the principles and practice of engineering exam is taken after four years of working in a chosen engineering field. society of manufacturing engineers ( sme ) certification ( usa ) : the sme administers qualifications specifically for the manufacturing industry. these are not degree level qualifications and are not recognized at the professional engineering level. the following discussion deals with qualifications in the usa only. qualified candidates for the certified manufacturing technologist certificate ( cmfgt ) must pass a three - hour, 130 - question multiple - choice exam. the exam covers math, manufacturing processes,
|
https://en.wikipedia.org/wiki/Manufacturing_engineering
|
accountability aims to provide explanations for why unwanted situations occurred, thus providing means to assign responsibility and liability. as such, accountability has slightly different meanings across the sciences. in computer science, our focus is on providing explanations for technical systems, in particular if they interact with their physical environment using sensors and actuators and may do serious harm. accountability is relevant when considering safety, security and privacy properties and we realize that all these incarnations are facets of the same core idea. hence, in this paper we motivate and propose a model for accountability infrastructures that is expressive enough to capture all of these domains. at its core, this model leverages formal causality models from the literature in order to provide a solid reasoning framework. we show how this model can be instantiated for several real - world use cases.
|
arxiv:1608.07882
|
our aim is to give a friendly introduction for students to systolic inequalities. we will stress the relationships between the classical formulation for riemannian metrics and more recent developments related to symplectic measurements and the viterbo conjecture. this will give us a perfect excuse to introduce the reader to some important ideas in riemannian and symplectic geometry.
|
arxiv:2103.09356
|
we study a natural discrete bochner - type inequality on graphs, and explore its merit as a notion of curvature in discrete spaces. an appealing feature of this discrete version seems to be that it is fairly straightforward to compute this notion of curvature parameter for several specific graphs of interest - particularly, abelian groups, slices of the hypercube, and the symmetric group under various sets of generators. we further develop this notion by deriving buser - type inequalities ( a la ledoux ), relating functional and isoperimetric constants associated with a graph. our derivations provide a tight bound on the cheeger constant ( i. e., the edge - isoperimetric constant ) in terms of the spectral gap, for graphs with nonnegative curvature, particularly, the class of abelian cayley graphs - a result of independent interest.
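the relationship between the cheeger constant and the spectral gap is easy to check numerically on a tiny graph. the cycle graph, the conductance-style normalization of the cheeger constant, and the use of the normalized laplacian below are our illustrative choices, not the paper's setting.

```python
import itertools
import numpy as np

def cheeger_and_gap(adj):
    """brute-force cheeger (conductance) constant and normalized-laplacian
    spectral gap of a small undirected graph given by its adjacency matrix."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    # normalized laplacian L = I - D^{-1/2} A D^{-1/2}
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    lap = np.eye(n) - d_inv_sqrt @ adj @ d_inv_sqrt
    gap = np.sort(np.linalg.eigvalsh(lap))[1]   # second-smallest eigenvalue
    vol_total = deg.sum()
    h = np.inf
    for r in range(1, n):                       # all proper nonempty subsets
        for subset in itertools.combinations(range(n), r):
            in_s = np.zeros(n, dtype=bool)
            in_s[list(subset)] = True
            cut = adj[in_s][:, ~in_s].sum()     # edges leaving the subset
            vol = min(deg[in_s].sum(), vol_total - deg[in_s].sum())
            h = min(h, cut / vol)
    return h, gap

# 8-cycle: every vertex has degree 2; the optimal cut is a half-arc
n = 8
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
h, gap = cheeger_and_gap(adj)
# cheeger's inequality: gap / 2 <= h <= sqrt(2 * gap)
```

the buser-type direction discussed above is the converse control: under a curvature lower bound, the spectral gap cannot be much smaller than (the square of) the cheeger constant.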
|
arxiv:1501.00516
|
small - angle neutron scattering ( sans ) is an experimental technique to detect material structures in the nanometer to micrometer range. the solution of the structural model constructed from sans strongly depends on the accuracy of the reduced data. the time - of - flight ( tof ) sans data are dependent on the wavelength of the pulsed neutron source. therefore, data reduction must be handled very carefully to transform measured neutron events into neutron scattering intensity. in this study, reduction algorithms for tof sans data are developed and optimized using simulated data from a virtual neutron experiment. each possible effect on the measured data is studied systematically, and suitable corrections are performed to obtain high - quality data. this work will facilitate scientific research and the instrument design at china spallation neutron source.
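the core wavelength dependence behind such a tof reduction is the de broglie relation between flight time and neutron wavelength. the sketch below shows only that conversion step; the flight-path length and timing values are made up, not csns instrument parameters.

```python
# de broglie: lambda = h * t / (m_n * L), with t the time of flight and
# L the source-to-detector flight-path length
H_PLANCK = 6.62607015e-34       # planck constant, J s
M_NEUTRON = 1.67492749804e-27   # neutron mass, kg

def tof_to_wavelength(t_s, path_m):
    """convert a neutron time of flight (s) over a flight path (m) to a
    wavelength in angstroms."""
    return H_PLANCK * t_s / (M_NEUTRON * path_m) * 1e10

# hypothetical event: 10 ms of flight over a 10 m path
lam = tof_to_wavelength(10e-3, 10.0)   # roughly 4 angstroms
```

a full reduction then bins events by the wavelength obtained this way and normalizes by the wavelength-dependent incident spectrum and detector efficiency before forming the scattering intensity.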
|
arxiv:1701.04153
|
a current flowing through a superconductor induces a spatial modulation in its superconducting order parameter, characterized by a wavevector $q$ related to the total momentum of a cooper pair. here we investigate this phenomenon in a $p$-wave topological superconductor, described by a one-dimensional kitaev model. we demonstrate that, by treating $q$ as an extra synthetic dimension, the current-carrying non-equilibrium steady state can be mapped into the ground state of a half-filled two-dimensional weyl semimetal, whose fermi surface exhibits lifshitz transitions when varying the model parameters. specifically, the transition from type-i to type-ii weyl phases corresponds to the emergence of a gapless $p$-wave superconductor, where cooper pairs coexist with unpaired electrons and holes. such a transition is signaled by the appearance of a sharp cusp in the $q$-dependence of the supercurrent, at a critical value $q^*$ that is robust to variations of the chemical potential $\mu$. we determine the maximal current that the system can sustain in the topological phase, and discuss possible implementations.
|
arxiv:2404.18131
|
a theoretical framework is presented which allows us to explain many experimental facts related to pinning and cross-flow effects between flux tubes in type-ii superconductors. it is shown that critical state principles, in the manner introduced by c. p. bean for parallel vortex lattices, may be used to describe the observed behavior. we formulate a least action principle, giving rise to a variational interpretation of the critical state. the coarse-grained electrodynamic response of the superconductor is solved by minimizing the magnetic field changes, for a current density vector constrained to belong to some bounded set delta. it is shown that the selection of delta determines the specific critical state model in use. meaningful choices of delta are discussed in view of the related physical mechanisms.
|
arxiv:cond-mat/0209494
|
the open cluster ngc 6633 was observed with corot in 2011 and simultaneous high - resolution spectroscopy was obtained with the sophie and harps spectrographs. one of the four targets was not found to be a cluster member. for all stars we provide estimates of the seismic and spectroscopic parameters.
|
arxiv:1409.2274
|
we propose new methodologies for stabilizing all-metal antiaromatic clusters such as al$_4$li$_4$. we demonstrate that these all-metal species can be stabilized by complexation with 3d transition metals, very similarly to their organic counterpart, c$_4$h$_4$. complexation with transition metal ions reduces the frontier orbital energies and introduces aromaticity. we consider a series of such complexes [$\eta^4$(al$_4$li$_4$)-fe(co)$_3$, $\eta^2\sigma^2$(al$_4$li$_4$)-ni and (al$_4$li$_4$)$_2$ni] and make a comparison between the all-metal species and the organometallic compounds to prove our theory conclusively. fragmentation energy analysis as well as nics support a similar mechanism of complexation-induced stability in these all-metal molecules.
|
arxiv:cond-mat/0409508
|
the superposition principle for wave functions in the multi-slit interference experiment has been widely accepted by many quantum mechanics textbooks; however, the expression $\psi_{ab} = \psi_a + \psi_b$ does not strictly hold. the non-classical paths in the feynman path integral provide an explanation of the multi-path interference effect, and can be used to quantitatively analyse the multi-slit interference effect. in this work, we apply quantities from bohmian mechanics to the measurement of multi-order interference by analysing the contribution of non-classical paths to the bohmian velocity in an electron multi-slit interferometer. the result shows that the non-classical paths effect causes a relative deviation of $10^{-3}$ in the electron's bohmian velocity, which is observable by a weak measurement similar to previous work.
|
arxiv:1610.00102
|
the entanglement asymmetry is an observable-independent tool to investigate the relaxation of quantum many-body systems through the restoration of an initially broken symmetry of the dynamics. in this paper we use it to investigate the effects of interactions on quantum relaxation in a paradigmatic integrable model. specifically, we study the dynamical restoration of the $U(1)$ symmetry corresponding to rotations about the $z$-axis in the xxz model quenched from a tilted ferromagnetic state. we find two distinct patterns of behaviour depending upon the interaction regime of the model. in the gapless regime, at roots of unity, we find that the symmetry restoration is predominantly carried out by bound states of spinons of maximal length. the velocity of these bound states is suppressed as the anisotropy is decreased towards the isotropic point, leading to slower symmetry restoration. by varying the initial tilt angle, one sees that symmetry restoration is slower for an initially smaller tilt angle, signifying the presence of the quantum mpemba effect. in the gapped regime, however, spin transport for non-maximally tilted states is dominated by smaller bound states, with longer bound states becoming frozen. this leads to much longer time scales for restoration compared to the gapless regime. in addition, the quantum mpemba effect is absent in the gapped regime.
|
arxiv:2409.08735
|
Predictive models for music are studied by researchers of algorithmic composition, the cognitive sciences and machine learning. They serve as base models for composition, can simulate human prediction and provide a multidisciplinary application domain for learning algorithms. A particularly well-established and constantly advanced subtask is the prediction of monophonic melodies. As melodies typically involve non-Markovian dependencies, their prediction requires a capable learning algorithm. In this thesis, I apply the recent feature discovery and learning method PULSE to the realm of symbolic music modeling. PULSE is comprised of a feature-generating operation and L1-regularized optimization. These are used to iteratively expand and cull the feature set, effectively exploring feature spaces that are too large for common feature selection approaches. I design a general Python framework for PULSE, propose task-optimized feature-generating operations and various music-theoretically motivated features that are evaluated on a standard corpus of monophonic folk and chorale melodies. The proposed method significantly outperforms comparable state-of-the-art models. I further discuss the free parameters of the learning algorithm and analyze the feature composition of the learned models. The models learned by PULSE afford easy inspection and are musicologically interpreted for the first time.
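The expand-and-cull idea can be sketched in miniature. The toy below alternates an L1-regularized fit (a tiny proximal-gradient/ISTA solver) with a feature-product expansion, so that an interaction feature the initial set cannot express is discovered automatically. All names, the optimizer, the target, and the expansion rule are illustrative assumptions, not PULSE's actual implementation:

```python
import random

random.seed(0)

# Toy data: the target depends on the *product* of two base features, a
# pattern the initial feature set cannot express.
xs = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
ys = [a * b for a, b in xs]

base = {"x1": lambda r: r[0], "x2": lambda r: r[1]}

def l1_fit(feats, xs, ys, lam=0.01, step=0.3, iters=1000):
    """Least squares with an L1 penalty via proximal gradient (ISTA)."""
    names = list(feats)
    cols = [[feats[nm](x) for x in xs] for nm in names]
    w = [0.0] * len(names)
    n = len(xs)
    for _ in range(iters):
        resid = [sum(w[j] * cols[j][i] for j in range(len(w))) - ys[i]
                 for i in range(n)]
        for j in range(len(w)):
            grad = sum(resid[i] * cols[j][i] for i in range(n)) / n
            wj = w[j] - step * grad
            # soft-thresholding implements the L1 (culling) pressure
            w[j] = max(abs(wj) - step * lam, 0.0) * (1.0 if wj > 0 else -1.0)
    return dict(zip(names, w))

feats = dict(base)
for _ in range(2):  # PULSE-style expand/cull rounds
    w = l1_fit(feats, xs, ys)
    active = {nm for nm, v in w.items() if abs(v) > 1e-3}
    feats = {nm: f for nm, f in feats.items() if nm in active or nm in base}
    # expand: conjoin each survivor with base features (canonical order
    # avoids duplicate products such as x1*x2 vs x2*x1)
    for nm, f in list(feats.items()):
        for bn, bf in base.items():
            if bn >= nm.split("*")[-1]:
                feats[nm + "*" + bn] = lambda r, f=f, bf=bf: f(r) * bf(r)

w = l1_fit(feats, xs, ys)
best = max(w, key=lambda nm: abs(w[nm]))
print(best, round(w[best], 2))
```

The first fit zeroes both base features (neither correlates with the product target), the expansion generates candidate products, and the second fit singles out the interaction feature with a weight near one, slightly shrunk by the L1 penalty.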
|
arxiv:1709.08842
|
If I know of a few persons of interest, how can a combination of human language technology and graph theory help me find other similarly interesting people? If I know of a few people committing a crime, how can I determine their co-conspirators? Given a set of actors deemed interesting, we seek other actors who are similarly interesting. We use a collection of communications encoded as an attributed graph, where vertices represent actors and edges connect pairs of actors that communicate. Attached to each edge is the set of documents wherein that pair of actors communicate, providing content in context: the communication topic in the context of who communicates with whom. In these documents, our identified interesting actors communicate amongst each other and with other actors whose interestingness is unknown. Our objective is to nominate the most likely interesting vertex from all vertices with unknown interestingness. As an illustrative example, the Enron email corpus consists of communications between actors, some of whom were allegedly committing fraud. Some of their fraudulent activity is captured in emails, along with many innocuous emails (both between the fraudsters and between the other employees of Enron); we are given the identities of a few fraudster vertices and asked to nominate other vertices in the graph as likely representing other actors committing fraud. Foundational theory and initial experimental results indicate that approaching this task with a joint model of content and context improves the performance (as measured by standard information retrieval measures) over either content or context alone.
|
arxiv:1201.4118
|
We analyze the transport properties of curved, three-dimensional graphite samples in strong magnetic fields. Focusing on a millimeter-scale graphite cylinder as a prototypical curved object, we perform longitudinal and Hall voltage measurements while applying quantizing magnetic fields. These measurements are investigated as a function of field strength and angle. Most importantly, we find that angle-dependent Shubnikov-de Haas oscillations are superimposed with angle-independent features. Reproducing the experimental observations, we introduce a network model that accounts for the effect of the cylindrical geometry by conceptualizing the cylinder as composed of strips of planar graphite in an effectively inhomogeneous magnetic field. Our work highlights how the interplay between geometric curvature and quantizing magnetic fields can be leveraged to engineer tunable spatial current densities within solid-state systems, and paves the way for understanding the transport properties of curved and bent three-dimensional samples more generally.
|
arxiv:2407.14263
|
We report the observation of a two-dimensional electron system (2DES) at the $(110)$ surface of the transparent bulk insulator SnO$_2$, and the tunability of its carrier density by means of temperature or Eu deposition. The 2DES is insensitive to surface reconstructions and, surprisingly, it survives even after exposure to ambient conditions, an extraordinary fact recalling the well-known catalytic properties of SnO$_2$. Our data show that surface oxygen vacancies are at the origin of such a 2DES, providing key information about the long-debated origin of $n$-type conductivity in SnO$_2$, which is at the basis of a wide range of applications. Furthermore, our study shows that the emergence of a 2DES in a given oxide depends on a delicate interplay between its crystal structure and the orbital character of its conduction band.
|
arxiv:2002.06681
|
Measurement incompatibility is a distinguishing property of quantum physics and an essential resource for many quantum information processing tasks. We introduce an approach to verifying the joint measurability of measurements based on phase-space quasiprobability distributions. Our results therefore establish a connection between two notions of non-classicality, namely the negativity of quasiprobability distributions and measurement incompatibility. We show how our approach can be applied to the study of incompatibility-breaking channels and derive sufficient incompatibility-breaking conditions for bosonic systems and Gaussian channels. In particular, these conditions provide useful tools for investigating the effects of errors and imperfections on the incompatibility of measurements in practice. To illustrate our method, we consider all classes of single-mode Gaussian channels. We show that pure lossy channels with 50% or more losses break the incompatibility of all measurements that can be represented by non-negative Wigner functions, which includes the set of Gaussian measurements.
|
arxiv:2012.06853
|
In this paper, we extend Chandrasekhar's method of calculating rotating black holes to $f(R)$ theory. We consider a constant Ricci scalar and derive the Kerr and Kerr-AdS metrics using an analytical mathematical method. Assuming that the spacetime is a 4-dimensional Riemannian manifold with a general stationary axisymmetric metric, we calculate Cartan's equations of structure and derive the Einstein tensor. In order to reduce the difficulty of solving, we fix the gauge freedom to transform the metric into a more symmetric form. We solve the field equations in the two cases of Ricci scalar $R = 0$ and $R \neq 0$. In the case of $R = 0$, Ernst's equations are derived. We give the elementary solution of Ernst's equations and show the way to obtain more solutions, including the Kerr metric. In the case of $R \neq 0$, we reasonably assume that the solution to the equations consists of two parts: the first is the Kerr part and the second is introduced by the Ricci scalar. Solving the second part and combining the two parts, we obtain the Kerr-AdS metric. The calculations are carried out in a general $f(R)$ theory, indicating that Kerr and Kerr-AdS black holes exist widely in general $f(R)$ models. Furthermore, the whole solving process can be treated as a standard calculation procedure for obtaining rotating black holes, which can be applied to other modified gravities.
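For reference, the endpoint of the $R \neq 0$ branch can be stated explicitly. Below is a common Boyer-Lindquist-type form of the Kerr-AdS metric, reproduced from the standard general relativity literature; the paper's own gauge and intermediate functions may differ:

```latex
ds^2 = -\frac{\Delta_r}{\Sigma}\left(dt - \frac{a\sin^2\theta}{\Xi}\,d\phi\right)^2
     + \frac{\Sigma}{\Delta_r}\,dr^2
     + \frac{\Sigma}{\Delta_\theta}\,d\theta^2
     + \frac{\Delta_\theta\sin^2\theta}{\Sigma}
       \left(a\,dt - \frac{r^2+a^2}{\Xi}\,d\phi\right)^2,
\qquad
\begin{aligned}
\Sigma &= r^2 + a^2\cos^2\theta, &
\Delta_r &= (r^2+a^2)\Bigl(1+\tfrac{r^2}{\ell^2}\Bigr) - 2Mr,\\
\Delta_\theta &= 1 - \tfrac{a^2}{\ell^2}\cos^2\theta, &
\Xi &= 1 - \tfrac{a^2}{\ell^2}.
\end{aligned}
```

The constant Ricci scalar is $R = -12/\ell^2$, and the limit $\ell \to \infty$ recovers the Kerr metric of the $R = 0$ branch.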
|
arxiv:2404.11975
|
This study proposes a nonhomogeneous birth-death model which captures the dynamics of a directly transmitted infectious disease. Our model accounts for an important aspect of observed epidemic data, in which only symptomatic infecteds are observed. The nonhomogeneous birth-death process depends on survival distributions of reproduction and removal, which jointly yield an estimate of the effective reproduction number $R(t)$ as a function of epidemic time. We employ the Burr distribution family for the survival functions and, as special cases, proportional-rate and accelerated event-time models are also employed in the parameter estimation procedure. As an example, our model is applied to an outbreak of avian influenza (H7N7) in the Netherlands in 2003, confirming that the conditional estimate of $R(t)$ declined below unity for the first time on day 23 after the detection of the index case.
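As a numeric illustration of how a Burr survival function drives a declining reproduction number, the toy below uses the simple product form $R(t) = R_0\,S(t)$ and scans for the first day $R(t)$ drops below unity. The parameter values and the product form are illustrative assumptions, not the paper's fitted model:

```python
def burr_survival(t, c=2.0, k=1.5, scale=14.0):
    """Burr XII survival function S(t) = (1 + (t/scale)^c)^(-k)."""
    return (1.0 + (t / scale) ** c) ** (-k)

def effective_R(t, R0=2.5):
    # Toy effective reproduction number decaying with the Burr survival
    return R0 * burr_survival(t)

# First day on which R(t) drops below unity
day = next(t for t in range(365) if effective_R(t) < 1.0)
print(day)
```

In the paper this threshold crossing is the quantity of epidemiological interest; there it is estimated conditionally from the outbreak data rather than from assumed parameters, giving day 23 for the H7N7 outbreak.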
|
arxiv:1009.4362
|
Magnetic susceptibility ($\chi$) and $^{51}$V NMR have been measured in (V$_{1-x}$Ti$_x$)$_2$O$_3$ near the phase boundary of the metal-insulator transition. It is established that the transition from the antiferromagnetic insulating (AFI) to the antiferromagnetic metallic phase near $x_{\rm c} \approx 0.05$ is not quantum critical but is discontinuous, with a jump of the transition temperature. In the AFI phase at 4.2 K, we observed a satellite in the zero-field $^{51}$V NMR spectrum around 181 MHz in addition to the "host" resonance around 203 MHz. The satellite is also observable in the paramagnetic metallic phase of the $x = 0.055$ sample. We associate the satellite with the V sites near Ti, which are in a V$^{3+}$-like oxidation state but have a different temperature dependence of the NMR shift from that of the host V site. The host d-spin susceptibility for $x = 0.055$ decreases below $\sim 60$ K but remains finite in the low-temperature limit.
|
arxiv:cond-mat/0111059
|
Crowd-sourcing models, which leverage the collective opinions/signals of users on online social networks (OSNs), are well accepted for fake-post detection; however, motivating the users to provide the crowd signals is challenging, even more so in the presence of adversarial users. We design a participation (mean-field) game where users of the OSN are lured by a reward-based scheme to provide binary (real/fake) signals such that the OSN achieves a $(\theta, \delta)$-level of actuality identification (AI): not more than a $\delta$ fraction of non-adversarial users incorrectly judge a real post, and at least a $\theta$ fraction of non-adversarial users identify a fake post as fake. An appropriate warning mechanism is proposed to influence the decision-making of the users such that the resultant game has at least one Nash equilibrium (NE) achieving AI. We also identify the conditions under which all NEs achieve AI. Further, we numerically illustrate that one can always design an AI game if the normalized difference in the innate identification capacities of the users is at least $1\%$, when the desired $\theta = 75\%$.
|
arxiv:2303.08484
|
Given positive integers $m, n, s, t$, let $z(m, n, s, t)$ be the maximum number of ones in a $(0,1)$ matrix of size $m$-by-$n$ that does not contain an all-ones submatrix of size $s$-by-$t$. We find a flexible upper bound on $z(m, n, s, t)$ that implies the known bounds of Kővári, Sós and Turán, and of Füredi. As a consequence, we find an upper bound on the spectral radius of a graph of order $n$ without a complete bipartite subgraph $K_{s,t}$.
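The quantity $z(m,n,s,t)$ can be computed by brute force for tiny cases, which is a useful sanity check against any claimed bound. The sketch below is purely illustrative (the paper proves general upper bounds; it does not enumerate matrices):

```python
from itertools import combinations, product

def zarankiewicz(m, n, s, t):
    """Max ones in an m-by-n (0,1) matrix with no all-ones s-by-t submatrix."""
    best = 0
    for bits in product((0, 1), repeat=m * n):
        M = [bits[i * n:(i + 1) * n] for i in range(m)]
        # every choice of s rows and t columns must miss at least one zero
        ok = all(
            any(M[r][c] == 0 for r in rows for c in cols)
            for rows in combinations(range(m), s)
            for cols in combinations(range(n), t)
        )
        if ok:
            best = max(best, sum(bits))
    return best

print(zarankiewicz(3, 3, 2, 2))  # matches the known value z(3,3;2,2) = 6
```

The $2^{mn}$ enumeration is only feasible for very small matrices, but it confirms, for instance, that the Kővári-Sós-Turán bound is tight at $z(3,3;2,2) = 6$.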
|
arxiv:0903.5350
|
It is appealing to stabilize dark matter by the same discrete symmetry that is used to explain the structure of the quark and lepton mass matrices. However, to generate the observed fermion mixing patterns, any flavor symmetry must necessarily be broken, rendering dark matter unstable. We study singlet, doublet and triplet SU(2) multiplets of both scalar and fermion dark matter candidates and enumerate the conditions under which no $d < 6$ dark matter decay operators are generated, even in the case where the flavor symmetry is broken to nothing. We show that the vevs of flavon scalars transforming as higher multiplets (e.g. triplets) of the flavor group must be at the electroweak scale. The most economical way to achieve that is to use the SM Higgs boson(s) as flavons. Such models can be tested by the LHC experiments. This scenario requires the existence of additional Froggatt-Nielsen scalars that generate hierarchies in the Yukawa couplings. We study the conditions under which large and small flavor-breaking parameters can coexist without destabilizing the dark matter.
|
arxiv:1111.1270
|
We introduce the concept of autoregressive moving average (ARMA) filters on a graph and show how they can be implemented in a distributed fashion. Our graph filter design philosophy is independent of the particular graph, meaning that the filter coefficients are derived irrespective of the graph. In contrast to finite impulse response (FIR) graph filters, ARMA graph filters are robust against changes in the signal and/or graph. In addition, when time-varying signals are considered, we prove that the proposed graph filters behave as ARMA filters in the graph domain and, depending on the implementation, as first- or higher-order ARMA filters in the time domain.
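The first-order case can be sketched as a distributed fixed-point iteration $y \leftarrow \psi M y + \phi x$, where $M$ is a graph shift operator: each step needs only neighbor values, and the fixed point $(I - \psi M)^{-1}\phi x$ realizes a rational (ARMA) response in the graph spectral domain. The 4-node path graph, the coefficients, and the normalization are illustrative assumptions, not the paper's design procedure:

```python
# adjacency list of an unweighted path graph 0-1-2-3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
n = len(adj)

# normalized shift M = A / d_max keeps the spectral radius <= d_max
d_max = max(len(v) for v in adj.values())
psi, phi = 0.5, 1.0          # |psi| * rho(M) < 1 guarantees convergence
x = [1.0, 0.0, 0.0, 0.0]     # input graph signal: an impulse at node 0

y = [0.0] * n
for _ in range(100):         # each step is one local exchange per node
    y = [psi * sum(y[j] for j in adj[i]) / d_max + phi * x[i]
         for i in range(n)]

# at the fixed point, y solves (I - psi * M) y = phi * x
residual = max(abs(y[i] - (psi * sum(y[j] for j in adj[i]) / d_max + phi * x[i]))
               for i in range(n))
print([round(v, 4) for v in y], residual)
```

Because the recursion forgets its initial state geometrically, the same local updates keep tracking the fixed point when $x$ or the graph changes slowly, which is the robustness property contrasted with FIR graph filters.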
|
arxiv:1508.05808
|
We investigate the high-pressure behaviour of beryllium, magnesium and calcium difluorides using ab initio random structure searching and density functional theory (DFT) calculations over the pressure range 0-70 GPa. Beryllium fluoride exhibits extensive polymorphism at low pressures, and we find two new phases for this compound, the silica moganite and CaCl$_2$ structures, which are stable over the wide pressure range 12-57 GPa. For magnesium fluoride, our searching results show that the orthorhombic 'O-I' TiO$_2$ structure ($Pbca$, $Z = 8$) is stable for this compound between 40 and 44 GPa. Our searches find no new phases at the static-lattice level for calcium difluoride between 0 and 70 GPa; however, a phase with $P\bar{6}2m$ symmetry is close to stability over this pressure range, and our calculations predict that this phase is stabilised at high temperature. The $P\bar{6}2m$ structure exhibits an unstable phonon mode at large volumes, which may signal a transition to a superionic state at high temperatures. The group-II difluorides are isoelectronic to a number of other AB$_2$-type compounds such as SiO$_2$ and TiO$_2$, and we discuss our results in light of these similarities.
|
arxiv:1702.00746
|