text | source
---|---|
thermoelectric ( te ) devices have been attracting increasing attention because of their ability to convert heat directly to electricity. to date, improving the te figure of merit remains the key challenge. the advent of the topological insulator and the emerging nanotechnology open a new way to design high - performance te devices. in this paper, we investigate the te transport properties of the bi2se3 thin film by first - principles calculations and the boltzmann transport theory. by comparing our calculations with the earlier experimental data, we demonstrate that, for the bi2se3 film of thickness larger than six quintuple layers, the relaxation time of the topological surface states in the bulk gap is about hundreds of femtoseconds, which is about two orders of magnitude larger than that of the bulk states. such a large relaxation - time difference causes the ratio of the electrical conductance to the thermal conductance to be much larger than the value predicted by the wiedemann - franz law, as well as a large magnitude of the seebeck coefficient, and consequently a large te figure of merit, when the fermi level is near the conduction band edge. we show that the te performance can be further improved by introducing defects in the middle layers of the thin film. the improvement is generally significant at room temperature and can be further enhanced at higher temperature.
|
arxiv:1608.00348
|
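as a rough illustration of the quantities named in the abstract above ( arxiv:1608.00348 ), the sketch below evaluates the dimensionless thermoelectric figure of merit zt = s^2 * sigma * t / kappa and the wiedemann - franz prediction kappa_e = l0 * sigma * t. it is a minimal sketch with placeholder numbers, not data or code from the paper.

```python
# Illustrative sketch only: the numbers below are placeholders, not data from the paper above.
# Dimensionless thermoelectric figure of merit: zT = S^2 * sigma * T / kappa
# Wiedemann-Franz law: kappa_e = L0 * sigma * T, with the Sommerfeld value L0.

L0 = 2.44e-8  # Lorenz number, W * Ohm / K^2

def figure_of_merit(S, sigma, kappa, T):
    """zT from Seebeck coefficient S [V/K], conductivity sigma [S/m],
    thermal conductivity kappa [W/(m K)] and temperature T [K]."""
    return S**2 * sigma * T / kappa

def wiedemann_franz_kappa(sigma, T):
    """Electronic thermal conductivity predicted by the Wiedemann-Franz law."""
    return L0 * sigma * T

if __name__ == "__main__":
    S, sigma, kappa, T = 200e-6, 1.0e5, 1.5, 300.0   # placeholder values
    print(f"zT ~ {figure_of_merit(S, sigma, kappa, T):.2f}")
    print(f"WF electronic kappa ~ {wiedemann_franz_kappa(sigma, T):.2f} W/(m K)")
    # A measured conductance ratio exceeding the WF value is the kind of violation
    # the abstract attributes to the long relaxation time of topological surface states.
```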
performance of vehicle - to - vehicle ( v2v ) communications depends highly on the employed scheduling approach. while centralized network schedulers offer high v2v communication reliability, their operation is conventionally restricted to areas with full cellular network coverage. in contrast, in out - of - cellular - coverage areas, comparatively inefficient distributed radio resource management is used. to exploit the benefits of the centralized approach for enhancing the reliability of v2v communications on roads lacking cellular coverage, we propose vrls ( vehicular reinforcement learning scheduler ), a centralized scheduler that proactively assigns resources for out - of - coverage v2v communications \ textit { before } vehicles leave the cellular network coverage. by training in simulated vehicular environments, vrls can learn a scheduling policy that is robust and adaptable to environmental changes, thus eliminating the need for targeted ( re - ) training in complex real - life environments. we evaluate the performance of vrls under varying mobility, network load, wireless channel, and resource configurations. vrls outperforms the state - of - the - art distributed scheduling algorithm in zones without cellular network coverage by reducing the packet error rate by half in highly loaded conditions and achieving near - maximum reliability in low - load scenarios.
|
arxiv:2207.06537
|
gas load and pumping determine the quality of vacuum systems. in particle accelerators, once leaks are excluded, outgassing of materials is an important source of gas together with degassing induced by particle beams. understanding, predicting, and measuring gas release from materials in vacuum are among the fundamental tasks of ultrahigh - vacuum experts. the knowledge of outgassing phenomena is essential for the choice of materials and their treatments so that the required gas density is achieved in such demanding and expensive scientific instruments. this note provides the background to understand outgassing in vacuum and gives references for further study.
|
arxiv:2006.07124
|
spatiotemporal variation of the thermal gradient in the melt pool inherited from different heat input patterns or other non - equilibrium transient effects during additive manufacturing can significantly affect the resulting subgrain microstructure evolution. to examine the impact of this variation, we approximate the thermal gradient by various isotherm patterns that move with constant velocity following directional solidification. we report the first three - dimensional phase - field simulations to investigate the effects of isotherm patterns on the cellular structures typically observed in solidified melt pools. results indicate that small variations in the isotherm can considerably impact the microstructural features. we use appropriate statistical characterizations of the solid fraction, solid percolation, and solute partitioning behavior to demonstrate the influence of isotherm patterns on the dendritic structures and semisolid mushy zones. consistent with experimental observations, we find that non - planar isotherms produce finer cells and reduced microsegregation compared to planar isotherms. also, we note that a tilt of the isotherm leads to a tilted state of the resulting cellular arrays. our findings will help in understanding the qualitative aspects of the influence of temperature gradient patterns on the evolution of solidification morphologies, mushy zones, and secondary phases, which are crucial for the macroscopic description of the solidified material.
|
arxiv:2411.18638
|
the use of unmanned aerial vehicles ( uavs ) is growing rapidly across many civil application domains including real - time monitoring, providing wireless coverage, remote sensing, search and rescue, delivery of goods, security and surveillance, precision agriculture, and civil infrastructure inspection. smart uavs are the next big revolution in uav technology, promising to provide new opportunities in different applications, especially in civil infrastructure, in terms of reduced risks and lower cost. civil infrastructure is expected to dominate the more than $ 45 billion market value of uav usage. in this survey, we present uav civil applications and their challenges. we also discuss current research trends and provide future insights for potential uav uses. furthermore, we present the key challenges for uav civil applications, including : charging challenges, collision avoidance and swarming challenges, and networking and security related challenges. based on our review of the recent literature, we discuss open research challenges and draw high - level insights on how these challenges might be approached.
|
arxiv:1805.00881
|
fisherian randomization inference is often dismissed as testing an uninteresting and implausible hypothesis : the sharp null of no effects whatsoever. we show that this view is overly narrow. many randomization tests are also valid under a more general " bounded " null hypothesis under which all effects are weakly negative ( or positive ), thus accommodating heterogeneous effects. by inverting such tests we can form one - sided confidence intervals for the maximum ( or minimum ) effect. these properties hold for all effect - increasing test statistics, which include both common statistics such as the mean difference and uncommon ones such as stephenson rank statistics. the latter ' s sensitivity to extreme effects permits detection of positive effects even when the average effect is negative. we argue that bounded nulls are often of substantive or theoretical interest, and illustrate with two applications : testing monotonicity in an iv analysis and inferring effect sizes in a small randomized experiment.
|
arxiv:1709.07339
|
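for the abstract above ( arxiv:1709.07339 ), a minimal sketch of a one - sided fisher randomization test under a simple completely randomized design, using the mean - difference statistic ( one of the effect - increasing statistics mentioned ). the bounded - null inversion and the stephenson rank statistic are not implemented here ; the data and names are illustrative only.

```python
import numpy as np

def permutation_p_value(y, z, n_perm=10_000, seed=0):
    """One-sided Monte Carlo randomization test of the sharp null of no effect.

    y : observed outcomes, z : 0/1 treatment indicators (completely randomized design).
    Returns P(T_perm >= T_obs) for the mean-difference statistic T = mean(y|z=1) - mean(y|z=0).
    Under the sharp null the outcomes are fixed and only z is re-randomized.
    """
    rng = np.random.default_rng(seed)
    y, z = np.asarray(y, float), np.asarray(z, int)

    def stat(z_):
        return y[z_ == 1].mean() - y[z_ == 0].mean()

    t_obs = stat(z)
    count = 0
    for _ in range(n_perm):
        z_perm = rng.permutation(z)        # re-randomize treatment labels
        if stat(z_perm) >= t_obs:
            count += 1
    return (count + 1) / (n_perm + 1)       # add-one correction keeps p > 0

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    z = np.repeat([0, 1], 20)
    y = rng.normal(0, 1, 40) + 0.8 * z      # synthetic data with a positive effect
    print("one-sided p-value:", permutation_p_value(y, z))
```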
this paper provides a short and transparent solution for the covering cost of white - grey trees which play a crucial role in the algorithm of bergeron { \ it et al. } \ to compute the rearrangement distance between two multichromosomal genomes in linear time ( { \ it theor. comput. sci. }, 410 : 5300 - 5316, 2009 ). in the process it introduces a new { \ em center } notion for trees, which seems to be interesting on its own.
|
arxiv:1004.2735
|
euphemism identification deciphers the true meaning of euphemisms, such as linking " weed " ( euphemism ) to " marijuana " ( target keyword ) in illicit texts, aiding content moderation and combating underground markets. while existing methods are primarily text - based, the rise of social media highlights the need for multimodal analysis, incorporating text, images, and audio. however, the lack of multimodal datasets for euphemisms limits further research. to address this, we regard euphemisms and their corresponding target keywords as keywords and first introduce a keyword - oriented multimodal corpus of euphemisms ( kom - euph ), involving three datasets ( drug, weapon, and sexuality ), including text, images, and speech. we further propose a keyword - oriented multimodal euphemism identification method ( kom - ei ), which uses cross - modal feature alignment and dynamic fusion modules to explicitly utilize the visual and audio features of the keywords for efficient euphemism identification. extensive experiments demonstrate that kom - ei outperforms state - of - the - art models and large language models, and show the importance of our multimodal datasets.
|
arxiv:2503.21504
|
beam - space mimo has recently been proposed as a promising solution to enable transmitting multiple data streams using a single rf chain and a single pattern - reconfigurable antenna. since in a beam - space mimo system the radiation pattern of the transmit antenna is exploited as an extra dimension for encoding information, near - field interaction of the transmit antenna with its surrounding objects affects the spatial multiplexing performance of the system. through numerical simulations in previous work, it has been concluded that under bpsk signaling beam - space mimo is not more vulnerable to near - field coupling than its conventional counterpart. in this work, we extend the study to the case of higher - order modulation schemes, where the presence of external perturbation also affects the data constellation points transmitted by a beam - space mimo antenna. to this aim, the error vector magnitude of the transmitted signal is evaluated when placing a qpsk beam - space mimo antenna in close proximity to a hand model of the user. the obtained results emphasize the importance of reconsidering the decoding approach for beam - space mimo systems in practical applications.
|
arxiv:1608.00606
|
providing explanations of chosen robotic actions can help to increase the transparency of robotic planning and improve users ' trust. social sciences suggest that the best explanations are contrastive, explaining not just why one action is taken, but why one action is taken instead of another. we formalize the notion of contrastive explanations for robotic planning policies based on markov decision processes, drawing on insights from the social sciences. we present methods for the automated generation of contrastive explanations with three key factors : selectiveness, constrictiveness, and responsibility. the results of a user study with 100 participants on the amazon mechanical turk platform show that our generated contrastive explanations can help to increase users ' understanding and trust of robotic planning policies while reducing users ' cognitive burden.
|
arxiv:2003.07425
|
we study a large deviation principle for a reflected stochastic partial differential equation on infinite spatial domain. a new sufficient condition for the weak convergence criterion proposed by matoussi, sabbagh and zhang ( { \ it appl. math. optim. } 83 : 849 - 879, 2021 ) plays an important role in the proof.
|
arxiv:2207.06697
|
the ability to imprint a given material property to another through proximity effect in layered two - dimensional materials has opened the way to the creation of designer materials. here, we use molecular - beam epitaxy ( mbe ) for a direct synthesis of a superconductor - magnet hybrid heterostructure by combining superconducting niobium diselenide ( nbse $ _ 2 $ ) with the monolayer ferromagnetic chromium tribromide ( crbr $ _ 3 $ ). using different characterization techniques and density - functional theory ( dft ) calculations, we have confirmed that the crbr $ _ 3 $ monolayer retains its ferromagnetic ordering with a magnetocrystalline anisotropy favoring an out - of - plane spin orientation. low - temperature scanning tunneling microscopy ( stm ) measurements show a slight reduction of the superconducting gap of nbse $ _ 2 $ and the formation of a vortex lattice on the crbr $ _ 3 $ layer in experiments under an external magnetic field. our results contribute to the broader framework of exploiting proximity effects to realize novel phenomena in 2d heterostructures.
|
arxiv:2009.13465
|
information bottleneck ( ib ) is a technique to extract information about one target random variable through another relevant random variable. this technique has garnered significant interest due to its broad applications in information theory and deep learning. hence, there is a strong motivation to develop efficient numerical methods with high precision and theoretical convergence guarantees. in this paper, we propose a semi - relaxed ib model, where the markov chain and transition probability condition are relaxed from the relevance - compression function. based on the proposed model, we develop an algorithm, which recovers the relaxed constraints and involves only closed - form iterations. specifically, the algorithm is obtained by analyzing the lagrangian of the relaxed model with alternating minimization in each direction. the convergence property of the proposed algorithm is theoretically guaranteed through descent estimation and pinsker ' s inequality. numerical experiments across classical and discrete distributions corroborate the analysis. moreover, our proposed algorithm demonstrates notable advantages in terms of computational efficiency, evidenced by significantly reduced run times compared to existing methods with comparable accuracy.
|
arxiv:2404.04862
|
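for context on the abstract above ( arxiv:2404.04862 ) : the sketch below is the classical blahut - arimoto - style information bottleneck iteration for discrete distributions, a baseline that relaxed formulations are often compared against. it is not the paper's algorithm ; variable names and the toy joint distribution are mine.

```python
import numpy as np

def classical_ib(p_xy, n_t, beta, n_iter=200, seed=0):
    """Classical iterative information bottleneck (Tishby et al.) for a discrete joint p(x, y).

    Alternates the standard self-consistent updates:
      p(t)   = sum_x p(x) p(t|x)
      p(y|t) = sum_x p(y|x) p(x|t)
      p(t|x) ∝ p(t) * exp(-beta * KL(p(y|x) || p(y|t)))
    Returns the encoder p(t|x). This is the textbook scheme, not the semi-relaxed algorithm above.
    """
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)
    p_y_given_x = p_xy / p_x[:, None]

    p_t_given_x = rng.random((p_xy.shape[0], n_t))
    p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

    eps = 1e-12
    for _ in range(n_iter):
        p_t = p_x @ p_t_given_x                                  # p(t)
        p_xt = p_t_given_x * p_x[:, None]                        # p(x, t)
        p_y_given_t = (p_xt.T @ p_y_given_x) / (p_t[:, None] + eps)

        # KL(p(y|x) || p(y|t)) for every pair (x, t)
        kl = np.einsum('xy,xty->xt',
                       p_y_given_x,
                       np.log((p_y_given_x[:, None, :] + eps) / (p_y_given_t[None, :, :] + eps)))
        logits = np.log(p_t + eps)[None, :] - beta * kl
        p_t_given_x = np.exp(logits - logits.max(axis=1, keepdims=True))
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)
    return p_t_given_x

if __name__ == "__main__":
    p_xy = np.array([[0.30, 0.05], [0.05, 0.30], [0.10, 0.20]])  # toy joint distribution
    print(np.round(classical_ib(p_xy, n_t=2, beta=5.0), 3))
```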
rare - earth nickelates are strongly correlated oxides displaying a metal - to - insulator transition at a temperature tunable by the rare - earth ionic radius. in prnio $ _ 3 $ and ndnio $ _ 3 $, the transition is very sharp and shows an hysteretic behavior akin to a first - order transition. both the temperature at which the transition occurs and the associated resistivity change are extremely sensitive to doping and therefore to off - stoichiometry issues that may arise during thin film growth. here we report that strong deviations in the transport properties of ndnio $ _ 3 $ films can arise in films grown consecutively under nominally identical conditions by pulsed laser deposition ; some samples show a well - developed transition with a resistivity change of up to five orders of magnitude while others are metallic down to low temperatures. through a detailed analysis of \ textit { in - situ } x - ray photoelectron spectroscopy data, we relate this behavior to large levels of cationic off - stoichiometry that also translate into changes in the ni valence and bandwidth. finally, we demonstrate that this lack of reproducibility can be remarkably alleviated by using single - phase ndnio $ _ 3 $ targets.
|
arxiv:1709.00240
|
we study veech surfaces of genus 2 arising from quadratic differentials that are not squares of abelian differentials. we prove that all such surfaces of type ( 2, 2 ) and ( 2, 1, 1 ) are arithmetic. in ( 1, 1, 1, 1 ) case, we reduce the question to abelian differentials of type ( 2, 2 ) on hyperelliptic genus 3 surfaces with singularities at weierstrass points, and we give an example of a non - arithmetic veech surface.
|
arxiv:math/0504180
|
electron - phonon coupling lifts t $ _ c $ in h $ _ 3 $ s vs d $ _ 3 $ s, but the primary origin of the nrt background in both h $ _ 3 $ s and d $ _ 3 $ s remains to be discovered.
|
arxiv:2002.12859
|
this paper presents a simple analytical framework for the dynamic response of cirrus to a local radiative flux convergence, expressible in terms of three independent modes of cloud evolution. horizontally narrow and tenuous clouds within a stable environment adjust to radiative heating by ascending gradually across isentropes while spreading sufficiently fast so as to keep isentropic surfaces nearly flat. more optically dense clouds experience very concentrated heating, and if they are also very broad, they develop a convecting mixed layer. spreading along isentropes still occurs, but in the form of turbulent density currents rather than laminar flows. a third adjustment mode relates to evaporation, which erodes cloudy air as it lofts. the dominant mode is determined from two dimensionless numbers, whose predictive power is shown in comparisons with high resolution numerical cloud simulations. the power and simplicity of the approach hints that fast, sub - grid scale radiative - dynamic atmospheric interactions might be efficiently parameterized within slower, coarse - grid climate models.
|
arxiv:1202.5050
|
in this work, we initiate the complexity study of biclique contraction and balanced biclique contraction. in these problems, given as input a graph g and an integer k, the objective is to determine whether one can contract at most k edges in g to obtain a biclique and a balanced biclique, respectively. we first prove that these problems are np - complete even when the input graph is bipartite. next, we study the parameterized complexity of these problems and show that they admit single exponential - time fpt algorithms when parameterized by the number k of edge contractions. then, we show that balanced biclique contraction admits a quadratic vertex kernel while biclique contraction does not admit any polynomial compression ( or kernel ) under standard complexity - theoretic assumptions.
|
arxiv:2307.10607
|
the remarkable advances in deep learning have led to the emergence of many off - the - shelf classifiers, e. g., large pre - trained models. however, since they are typically trained on clean data, they remain vulnerable to adversarial attacks. despite this vulnerability, their superior performance and transferability make off - the - shelf classifiers still valuable in practice, demanding further work to provide adversarial robustness for them in a post - hoc manner. a recently proposed method, denoised smoothing, leverages a denoiser model in front of the classifier to obtain provable robustness without additional training. however, the denoiser often creates hallucination, i. e., images that have lost the semantics of their originally assigned class, leading to a drop in robustness. furthermore, its noise - and - denoise procedure introduces a significant distribution shift from the original distribution, causing the denoised smoothing framework to achieve sub - optimal robustness. in this paper, we introduce fine - tuning with confidence - aware denoised image selection ( ft - cadis ), a novel fine - tuning scheme to enhance the certified robustness of off - the - shelf classifiers. ft - cadis is inspired by the observation that the confidence of off - the - shelf classifiers can effectively identify hallucinated images during denoised smoothing. based on this, we develop a confidence - aware training objective to handle such hallucinated images and improve the stability of fine - tuning from denoised images. in this way, the classifier can be fine - tuned using only images that are beneficial for adversarial robustness. we also find that such a fine - tuning can be done by updating a small fraction of parameters of the classifier. extensive experiments demonstrate that ft - cadis has established the state - of - the - art certified robustness among denoised smoothing methods across all $ \ ell _ 2 $ - adversary radius in various benchmarks.
|
arxiv:2411.08933
|
recently, the d0 collaboration reported a large cp violation in the same - sign dimuon charge asymmetry which has the $ 3. 2 \ sigma $ deviation from the value estimated in the standard model. in this paper, several new physics models are considered : the mssm, two higgs doublet model, the recent dodeca model, and a new $ z ' $ model. generally, it is hard to achieve such a large cp violation consistently with other experimental constraints. we find that a scheme with extra non - anomalous u ( 1 ) $ ' $ gauge symmetry is barely consistent. in general, the extra $ z ' $ gauge boson induces the flavor changing neutral current interactions at tree level, which is the basic reason allowing a large new physics cp violation. to preserve the u ( 1 ) $ ' $ symmetry at high energy, su ( 2 ) $ _ l $ singlet exotic heavy quarks of mass above 1 tev and the standard model gauge singlet scalars are introduced.
|
arxiv:1010.5123
|
efficient sampling of complex high - dimensional probability distributions is a central task in computational science. machine learning methods like autoregressive neural networks, used with markov chain monte carlo sampling, provide good approximations to such distributions, but suffer from either intrinsic bias or high variance. in this letter, we propose a way to make this approximation unbiased and with low variance. our method uses physical symmetries and variable - size cluster updates which utilize the structure of autoregressive factorization. we test our method for first - and second - order phase transitions of classical spin systems, showing its viability for critical systems and in the presence of metastable states.
|
arxiv:2105.05650
|
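for the abstract above ( arxiv:2105.05650 ) : a standard way to remove the bias of an imperfect autoregressive approximation q ( x ) is to use it as an independence proposal in a metropolis - hastings chain targeting the boltzmann distribution p ( x ) ∝ exp ( - beta e ( x ) ). the toy sketch below shows that accept / reject step for a 1d ising chain with a deliberately crude factorized q ; it illustrates the unbiasing mechanism only, not the paper's symmetry - aware cluster updates.

```python
import numpy as np

def energy(s, J=1.0):
    """1D Ising chain with periodic boundary: E = -J * sum_i s_i s_{i+1}."""
    return -J * np.sum(s * np.roll(s, 1))

def log_q(s, p_up):
    """Log-probability of a (deliberately crude) factorized proposal: independent spins."""
    return np.sum(np.where(s == 1, np.log(p_up), np.log(1.0 - p_up)))

def sample_q(rng, n, p_up):
    return np.where(rng.random(n) < p_up, 1, -1)

def neural_proposal_mcmc(n_spins=16, beta=0.6, p_up=0.5, n_steps=5000, seed=0):
    """Metropolis-Hastings with an independence proposal q.
    Acceptance prob: min(1, p(x') q(x) / (p(x) q(x'))) with p(x) ∝ exp(-beta * E(x)).
    Any imperfection of q is corrected by the accept/reject step, so averages are unbiased."""
    rng = np.random.default_rng(seed)
    s = sample_q(rng, n_spins, p_up)
    mags = []
    for _ in range(n_steps):
        s_new = sample_q(rng, n_spins, p_up)
        log_alpha = (-beta * energy(s_new) + log_q(s, p_up)) \
                    - (-beta * energy(s) + log_q(s_new, p_up))
        if np.log(rng.random()) < log_alpha:
            s = s_new
        mags.append(abs(s.mean()))
    return np.mean(mags)

if __name__ == "__main__":
    print("mean |magnetization| ~", round(neural_proposal_mcmc(), 3))
```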
in this article we prove the existence and uniqueness of a ( weak ) solution $u$ in $L_p\left((0,T), \Lambda_{\gamma+m}\right)$ to the cauchy problem \begin{align} \notag &\frac{\partial u}{\partial t}(t,x) = \psi(t,i\nabla)u(t,x) + f(t,x), \quad (t,x) \in (0,T) \times \mathbf{R}^d, \label{main eqn} \\ &u(0,x) = 0, \end{align} where $d \in \mathbb{N}$, $p \in (1,\infty]$, $\gamma, m \in (0,\infty)$, $\Lambda_{\gamma+m}$ is the lipschitz space on $\mathbf{R}^d$ whose order is $\gamma+m$, $f \in L_p\left((0,T), \Lambda_{\gamma}\right)$, and $\psi(t,i\nabla)$ is a time measurable pseudo - differential operator whose symbol is $\psi(t,\xi)$, i.e. $$\psi(t,i\nabla)u(t,x) = \mathcal{F}^{-1}\left[\psi(t,\xi)\mathcal{F}\left[u(t,\cdot)\right](\xi)\right](x),$$ with the assumptions \begin{align*} \Re[\psi(t,\xi)] \leq -\nu|\xi|^{\gamma}, \end{align*} and \begin{align*} |D_{\xi}^{\alpha}\psi(t,\xi)| \leq \nu^{-1}|\xi|^{\gamma-|\alpha|}. \end{align*} furthermore, we show \begin{align} \label{e 1028 1} \int_0^T \|u(t,\cdot)\|^p_{\Lambda_{
|
arxiv:1707.04694
|
by comparing photon diffusion time with gas outflow time, i argue that a large fraction of the energy carried by the jets during the grazing envelope evolution ( gee ) might end in radiation, hence leading to an intermediate luminosity optical transient ( ilot ). in the gee a companion orbiting near the outskirts of the larger primary star accretes mass through an accretion disk, and launches jets that efficiently remove the envelope gas from the vicinity of the secondary star. in cases of high mass accretion rates onto the stellar companion the energy carried by the jets surpasses the recombination energy from the ejected mass, and when the primary star is a giant this energy also surpasses the gravitational binding energy of the binary system. some future ilots of giant stars might be better explained by the gee than by merger and common envelope evolution without jets.
|
arxiv:1601.05913
|
miura surfaces are the solutions of a constrained nonlinear elliptic system of equations. this system is derived by homogenization from the miura fold, which is a type of origami fold with multiple applications in engineering. a previous inquiry gave suboptimal conditions for the existence of solutions and proposed an $ h ^ 2 $ - conformal finite element method to approximate them. in this paper, the existence of miura surfaces is studied using a gradient formulation. it is also proved that, under some hypotheses, the constraints propagate from the boundary to the interior of the domain. then, a numerical method based on a stabilized least - square formulation, conforming finite elements and a newton method is introduced to approximate miura surfaces. the numerical method is proved to converge and numerical tests are performed to demonstrate its robustness.
|
arxiv:2209.05567
|
we study the fluctuations in luminosity distances due to gravitational lensing by large scale ( > 35 mpc ) structures, specifically voids and sheets. we use a simplified " swiss cheese " model consisting of a \ lambda - cdm friedmann - robertson - walker background in which a number of randomly distributed non - overlapping spherical regions are replaced by mass compensating comoving voids, each with a uniform density interior and a thin shell of matter on the surface. we compute the distribution of magnitude shifts using a variant of the method of holz & wald ( 1998 ), which includes the effect of lensing shear. the standard deviation of this distribution is ~ 0. 027 magnitudes and the mean is ~ 0. 003 magnitudes for voids of radius 35 mpc, sources at redshift z _ s = 1. 0, with the voids chosen so that 90 % of the mass is on the shell today. the standard deviation varies from 0. 005 to 0. 06 magnitudes as we vary the void size, source redshift, and fraction of mass on the shells today. if the shell walls are given a finite thickness of ~ 1 mpc, the standard deviation is reduced to ~ 0. 013 magnitudes. this standard deviation due to voids is a factor ~ 3 smaller than that due to galaxy scale structures. we summarize our results in terms of a fitting formula that is accurate to ~ 20 %, and also build a simplified analytic model that reproduces our results to within ~ 30 %. our model also allows us to explore the domain of validity of weak lensing theory for voids. we find that for 35 mpc voids, corrections to the dispersion due to lens - lens coupling are of order ~ 4 %, and corrections due to shear are ~ 3 %. finally, we estimate the bias due to source - lens clustering in our model to be negligible.
|
arxiv:1109.1873
|
we reply to david ' s comment ( hep - lat / 9504017 ) on our paper phys. rev. lett. 74 ( 1995 ) 1920.
|
arxiv:hep-lat/9506003
|
we prove that for every planar differential system with a period annulus there exists an involution $ \ sigma $ such that the system is $ \ sigma $ - symmetric. we also prove that for every planar differential system with a period annulus there exist infinitely many involutions $ \ sigma $ such that the system is $ \ sigma $ - reversible.
|
arxiv:1504.04530
|
ultrasound contrast agents have been recently utilized in therapeutical implementations for targeted delivery of pharmaceutical substances. radial pulsations of the encapsulated microbubbles under the action of an ultrasound field are complex and highly nonlinear, particularly for drug and gene delivery applications with high acoustic pressure amplitudes. the dynamics of a polymer - shelled agent is inspected \ textit { in vivo } by applying the methods of chaos physics, while the effects of the outer medium compressibility and the shell are considered. the stability of the ultrasound contrast agent is examined by plotting the bifurcation diagrams over a wide range of variations of influential parameters. the results imply that the composition of the surrounding medium strongly alters the microbubble dynamics. furthermore, influences of various parameters which present a comprehensive view of the radial oscillations of the microbubble are quantitatively discussed with clear descriptions of the stable and unstable regions of the microbubble oscillations.
|
arxiv:1811.11289
|
we introduce the concept of derivate - based component - trees for images with an arbitrary number of channels. the approach is a natural extension of the classical component - tree devoted to gray - scale images. the similar structure enables the translation of many gray - level image processing techniques based on the component - tree to hyperspectral and color images. as an example application, we present an image segmentation approach that extracts maximally stable homogeneous regions ( mshr ). the approach is very similar to mser but can be applied to images with an arbitrary number of channels. as opposed to mser, our approach implicitly segments regions which are both lighter and darker than their background for gray - scale images and can be used in ocr applications where mser will fail. we introduce a local flooding - based immersion for the derivate - based component - tree construction which is linear in the number of pixels. in the experiments, we show that the runtime scales favorably with an increasing number of channels and may improve algorithms which build on mser.
|
arxiv:1705.01906
|
one of the puzzles associated with tidal disruption event candidates ( tdes ) is that there is a dichotomy between the color temperatures of $ { \ rm few } \ times 10 ^ 4 $ ~ k for tdes discovered with optical and uv telescopes, and the color temperatures of $ { \ rm few } \ times 10 ^ 5 - 10 ^ 6 $ ~ k for tdes discovered with x - ray satellites. here we propose that high - temperature tdes are produced when the tidal debris of a disrupted star self - intersects relatively close to the supermassive black hole, in contrast to the more distant self - intersection that leads to lower color temperatures. in particular, we note from simple ballistic considerations that greater apsidal precession in an orbit is the key to closer self - intersection. thus larger values of $ \ beta $, the ratio of the tidal radius to the pericenter distance of the initial orbit, are more likely to lead to higher temperatures of more compact disks which are super - eddington and geometrically and optically thick. for a given star and $ \ beta $, apsidal precession also increases for larger black hole masses, but larger black hole masses imply a lower temperature at the eddington luminosity. thus the expected dependence of the temperature on the mass of the black hole is non - monotonic. we find that in order to produce a soft x - ray temperature tde, a deep plunging stellar orbit with $ \ beta > 3 $ is needed and a black hole mass of $ \ lesssim 5 \ times 10 ^ 6 m _ \ odot $ is favored. although observations of tdes are comparatively scarce and are likely dominated by selection effects, it is encouraging that both expectations are consistent with current data.
|
arxiv:1507.04333
|
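for the abstract above ( arxiv:1507.04333 ), two textbook order - of - magnitude relations help fix ideas : the tidal radius r_t ≈ r_* ( m_bh / m_* ) ^ ( 1 / 3 ), the penetration factor beta = r_t / r_p, and the leading relativistic apsidal precession per orbit, delta_phi ≈ 6 pi g m_bh / ( c^2 r_p ( 1 + e ) ). the sketch below only evaluates these standard formulas for a sun - like star ; the numbers are not taken from the paper.

```python
import numpy as np

G     = 6.674e-11        # m^3 kg^-1 s^-2
c     = 2.998e8          # m/s
M_sun = 1.989e30         # kg
R_sun = 6.957e8          # m

def tidal_radius(M_bh, M_star=M_sun, R_star=R_sun):
    """Order-of-magnitude tidal disruption radius r_t ~ R_* (M_bh / M_*)^(1/3)."""
    return R_star * (M_bh / M_star) ** (1.0 / 3.0)

def apsidal_precession(M_bh, r_p, e=1.0):
    """Leading-order relativistic apsidal precession per orbit (radians):
    delta_phi = 6 pi G M / (c^2 r_p (1 + e)); e ~ 1 for a nearly parabolic TDE orbit."""
    return 6.0 * np.pi * G * M_bh / (c**2 * r_p * (1.0 + e))

if __name__ == "__main__":
    for M_bh in (1e6 * M_sun, 5e6 * M_sun, 1e7 * M_sun):
        r_t = tidal_radius(M_bh)
        for beta in (1.0, 3.0):              # beta = r_t / r_p, deeper plunge = larger beta
            r_p = r_t / beta
            dphi = np.degrees(apsidal_precession(M_bh, r_p))
            print(f"M_bh={M_bh/M_sun:.0e} Msun, beta={beta}: precession ~ {dphi:.1f} deg/orbit")
```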
we use the coupled cluster method implemented to high orders of approximation to investigate the frustrated spin - $ \ frac { 1 } { 2 } $ $ j _ { 1 } $ - - $ j _ { 2 } $ - - $ j _ { 3 } $ antiferromagnet on the honeycomb lattice with isotropic heisenberg interactions of strength $ j _ { 1 } > 0 $ between nearest - neighbor pairs, $ j _ { 2 } > 0 $ between next - nearest - neighbor pairs, and $ j _ { 3 } > 0 $ between next - next - nearest - neighbor pairs of spins. in particular, we study both the ground - state ( gs ) and lowest - lying triplet excited - state properties in the case $ j _ { 3 } = j _ { 2 } \ equiv \ kappa j _ { 1 } $, in the window $ 0 \ leq \ kappa \ leq 1 $ of the frustration parameter, which includes the ( tricritical ) point of maximum classical frustration at $ \ kappa _ { { \ rm cl } } = \ frac { 1 } { 2 } $. we present gs results for the spin stiffness, $ \ rho _ { s } $, and the zero - field uniform magnetic susceptibility, $ \ chi $, which complement our earlier results for the gs energy per spin, $ e / n $, and staggered magnetization, $ m $, to yield a complete set of accurate low - energy parameters for the model. our results all point towards a phase diagram containing two quasiclassical antiferromagnetic phases, one with n \ ' eel order for $ \ kappa < \ kappa _ { c _ { 1 } } $, and the other with collinear striped order for $ \ kappa > \ kappa _ { c _ { 2 } } $. the results for both $ \ chi $ and the spin gap $ \ delta $ provide compelling evidence for a quantum paramagnetic phase that is gapped over a considerable portion of the intermediate region $ \ kappa _ { c _ { 1 } } < \ kappa < \ kappa _ { c _ { 2 } } $, especially close to the two quantum critical points at $ \ kappa _ { c _ { 1 } } $ and $ \ kappa _ { c _ { 2 } } $. each of our fully independent sets of results for the low - energy parameters is consistent with the values $ \ kappa _ { c
|
arxiv:1504.02275
|
we develop a new formalism for the quantum master equation $ \ delta e ^ { s / \ hbar } = 0 $ and the category of $ { \ rm ibl } _ \ infty $ - algebras and simplify some homotopical algebra arising in the context of oriented surfaces with boundary. we introduce and study a category of mv - algebras, which, on the one hand, contains such important categories as those of $ { \ rm ibl } _ \ infty $ - algebras and $ { \ rm l } _ \ infty $ - algebras, and on the other hand, is homotopically trivial, in particular allowing for a simple solution of the quantum master equation. we also present a geometric interpretation of our results.
|
arxiv:1511.01591
|
in this report we present a study on the strength of rocks which are partially fractured beforehand. we have considered a two dimensional case of a rock in the form of a lattice structure. the fiber bundle model is used for modelling the $ 2 - d $ rock. each lattice site is considered to be a fiber which has a breaking threshold. fractures in this system take the form of clusters of sites, and the length of a fracture is defined as the number of sites belonging to a single cluster. we introduce fractures in the system initially and apply load until the rock breaks. the breaking of a rock is characterized by a horizontal fracture which connects the left side of the lattice to the right side. the length distribution and the strength of such systems have been measured.
|
arxiv:1503.08958
|
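for the abstract above ( arxiv:1503.08958 ) : the equal - load - sharing fiber bundle model it builds on is easy to sketch. the code below is a generic textbook version with uniformly distributed breaking thresholds and global load redistribution ; it omits the paper's 2d lattice geometry and pre - existing fractures.

```python
import numpy as np

def bundle_strength(n_fibers=10_000, seed=0):
    """Equal-load-sharing fiber bundle with thresholds ~ Uniform(0, 1).

    With thresholds sorted as x_(1) <= ... <= x_(n), the external load the bundle
    can still carry once the k weakest fibers have failed is x_(k+1) * (n - k).
    The bundle strength (critical load per fiber) is the maximum of that over k.
    """
    rng = np.random.default_rng(seed)
    thresholds = np.sort(rng.random(n_fibers))
    surviving = n_fibers - np.arange(n_fibers)    # fibers left when each threshold is reached
    total_load = thresholds * surviving
    k_star = int(np.argmax(total_load))
    return total_load[k_star] / n_fibers, thresholds[k_star]

if __name__ == "__main__":
    sigma_c, x_c = bundle_strength()
    # For Uniform(0,1) thresholds the analytic critical load per fiber is 1/4, reached at x = 1/2.
    print(f"critical load per fiber ~ {sigma_c:.3f} (theory: 0.250), at threshold ~ {x_c:.3f}")
```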
integrated engineering is a degree program ( and similar concept programs such as interdisciplinary and multidisciplinary engineering ) combining aspects from traditional engineering studies and liberal arts, meant to prepare graduates for multi - disciplinary and project - based workplaces. integrated engineers acquire background in core disciplines such as : materials, solid mechanics, fluid mechanics, and systems involving chemical, electro - mechanical, biological and environmental components. in the united states, an alliance of integrated - type programs has been formed called the alliance for integrated engineering ( a4ie ). = = academia and accreditation = = = = = institutions = = = currently, the following academic institutions are known to offer integrated engineering programs : canada : university of british columbia ; university of western ontario. uk : the new model institute for technology and engineering ( nmite ) ; university of bath ; university of cardiff ; university of liverpool ; university of nottingham ; anglia ruskin university ; university centre peterborough. united states : arizona state university ; florida international university ; lafayette college ; lehigh university ; southern utah university ; minnesota state university, mankato ( iron range engineering ) ; texas a & m university ; university of alabama at birmingham ; university of texas at el paso ( e - lead program ) ; university of san diego ; wake forest university ; washington and lee university. germany : baden - wuerttemberg cooperative state university ( dhbw ) ; south westphalia university of applied sciences. estonia : tallinn university of technology. korea : chung - ang university. trinidad and tobago : university of trinidad and tobago. thailand : chiang mai university. = = = canada = = = integrated engineering originated at the university of western ontario in ontario, canada, and in 2000 the applied science faculty of the university of british columbia also began a degree program for integrated engineering. in canada, the program has been fully accredited by the canadian engineering accreditation board and engineers are able to obtain a professional engineer ( p. eng ) certificate. = = = united kingdom = = = in 1988, the engineering council uk identified the need for routes to qualification for chartered ( professional ) engineers that : meet the identified needs of industry, increase access to engineering education by more students, provide a balanced curriculum combining the subjects that engineers use most often and directed towards the needs of the majority of engineers. this is the fundamental definition for integrated engineering. the qualities looked for by industry when recruiting graduates were identified as : flexibility and broad education, ability to understand non engineering functions, ability to solve problems, knowledge of the principles of engineering and ability to apply them in practical situations, information skills, experience of project work, especially cross linked projects, ability to work as a member of a team, presentation and communication skills. engineering council uk, 1988, an integrated engineering degree
|
https://en.wikipedia.org/wiki/Integrated_engineering
|
chronic kidney disease ( ckd ) is one of the widespread chronic diseases with no known ultimate cure and high morbidity. research demonstrates that progressive chronic kidney disease ( ckd ) is a heterogeneous disorder that significantly impacts kidney structure and functions, eventually leading to kidney failure. with the progression of time, chronic kidney disease has moved from a life - threatening disease affecting few people to a common disorder of varying severity. the goal of this research is to visualize dominating features, feature scores, and values exhibited for early prognosis and detection of ckd using ensemble learning and explainable ai. for that, an ai - driven predictive analytics approach is proposed to aid clinical practitioners in prescribing lifestyle modifications for individual patients to reduce the rate of progression of this disease. our dataset is collected on body vitals from individuals with ckd and healthy subjects to develop our proposed ai - driven solution accurately. in this regard, blood and urine test results are provided, and ensemble tree - based machine - learning models are applied to predict unseen cases of ckd. our research findings are validated after lengthy consultations with nephrologists. our experiments and interpretation results are compared with existing explainable ai applications in various healthcare domains, including ckd. the comparison shows that our developed ai models, particularly the random forest model, have identified more features as significant contributors than xgboost. interpretability ( i ), which measures the ratio of important to masked features, indicates that our xgboost model achieved a higher score, specifically a fidelity of 98 \ %, in this metric and naturally in the fii index compared to competing models.
|
arxiv:2406.06728
|
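for the abstract above ( arxiv:2406.06728 ) : a minimal, generic sketch of an ensemble - plus - feature - attribution pipeline of the kind described, using scikit - learn's randomforestclassifier and its impurity - based feature_importances_ on synthetic data. the paper's actual ckd dataset, models and interpretability index are not reproduced here ; feature names are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for blood/urine test features; not the CKD data used in the paper.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=5,
                           n_redundant=2, random_state=0)
feature_names = [f"vital_{i}" for i in range(X.shape[1])]   # hypothetical feature names

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)

print("test accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))

# Rank features by impurity-based importance, the simplest attribution the ensemble offers.
order = np.argsort(model.feature_importances_)[::-1]
for i in order[:5]:
    print(f"{feature_names[i]:>10s}  importance={model.feature_importances_[i]:.3f}")
```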
recently, automatic code comment generation is proposed to facilitate program comprehension. existing code comment generation techniques focus on describing the functionality of the source code. however, there are other aspects such as insights about quality or issues of the code, which are overlooked by earlier approaches. in this paper, we describe a mining approach that recommends insightful comments about the quality, deficiencies or scopes for further improvement of the source code. first, we conduct an exploratory study that motivates crowdsourced knowledge from stack overflow discussions as a potential resource for source code comment recommendation. second, based on the findings from the exploratory study, we propose a heuristic - based technique for mining insightful comments from stack overflow q & a site for source code comment recommendation. experiments with 292 stack overflow code segments and 5, 039 discussion comments show that our approach has a promising recall of 85. 42 %. we also conducted a complementary user study which confirms the accuracy and usefulness of the recommended comments.
|
arxiv:1807.02278
|
thin sheets deposited on a substrate and interfaces of correlated materials offer a plethora of routes towards the realization of exotic phases of matter. in these systems, inversion symmetry is broken which strongly affects the properties of possible instabilities - - in particular in the superconducting channel. by combining symmetry and energetic arguments, we derive general and experimentally accessible selection rules for cooper instabilities in noncentrosymmetric systems which yield necessary and sufficient conditions for spontaneous time - reversal - symmetry breaking at the superconducting transition and constrain the orientation of the triplet vector. we discuss in detail the implications for various different materials. for instance, we conclude that the pairing state in thin layers of sr $ _ 2 $ ruo $ _ 4 $ must, as opposed to its bulk superconducting state, preserve time - reversal symmetry with its triplet vector being parallel to the plane of the system. all pairing states of this system allowed by the selection rules are predicted to display topological majorana modes at dislocations or at the edge of the system. applying our results to the laalo $ _ 3 $ / srtio $ _ 3 $ heterostructures, we find that while the condensates of the ( 001 ) and ( 110 ) oriented interfaces must be time - reversal symmetric, spontaneous time - reversal - symmetry breaking can only occur for the less studied ( 111 ) interface. we also discuss the consequences for thin layers of uru $ _ 2 $ si $ _ 2 $ and upt $ _ 3 $ as well as for single - layer fese. on a more general level, our considerations might serve as a design principle in the search for time - reversal - symmetry - breaking superconductivity in the absence of external magnetic fields.
|
arxiv:1503.03646
|
using mainly $ \ vec { p } $ p $ \ to $ p $ { \ pi ^ + } x $ and $ \ vec { p } $ p $ \ to $ p $ _ { f } $ ~ p $ _ { s } $ x reactions, narrow baryonic structures were observed in the mass range 950 $ \ le $ m $ \ le $ 1800 mev.
|
arxiv:nucl-ex/0207004
|
the pervasiveness of graphs in today ' s real life systems is quite evident, where the system either explicitly exists as a graph or can be readily modelled as one. such a graphical structure is thus a storehouse of rich information. this has various implications depending on whether we are interested in a node or the graph as a whole. in this paper, we are primarily concerned with the latter, that is, the inference that the structure of the graph influences the property of the real life system it represents. a model of such structural influence would be useful in inferring useful properties of complex and large systems, like vlsi circuits, through their structural properties. however, before we can apply some machine learning ( ml ) based technique to model such a relationship, an effective representation of the graph is imperative. in this paper, we propose a graph representation which is lossless, linear - sized in terms of number of vertices and gives a 1 - d representation of the graph. our representation is based on prufer encoding for trees. moreover, our method is based on a novel technique, called $ \ mathcal { gt } $ - enhancement, whereby we first transform the graph such that it can be represented by a single tree. the encoding also provides scope to include additional graph properties and improve the interpretability of the code.
|
arxiv:2209.01596
|
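for the abstract above ( arxiv:2209.01596 ) : the prufer encoding it builds on maps a labeled tree on n vertices to a sequence of n - 2 labels, which is lossless and linear - sized as the abstract says. a standard implementation is sketched below ; the paper's gt - enhancement step that turns a general graph into a tree is not shown.

```python
import heapq
from collections import defaultdict

def prufer_encode(edges, n):
    """Prufer sequence of a labeled tree on vertices 0..n-1 given as a list of edges.

    Repeatedly remove the smallest-labelled leaf and record its unique neighbour;
    stop when two vertices remain. The sequence has length n - 2 and uniquely
    determines the tree, so the encoding is lossless and linear in n.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    leaves = [v for v in range(n) if len(adj[v]) == 1]
    heapq.heapify(leaves)
    seq = []
    for _ in range(n - 2):
        leaf = heapq.heappop(leaves)
        parent = next(iter(adj[leaf]))   # a leaf has exactly one neighbour
        seq.append(parent)
        adj[parent].discard(leaf)
        adj[leaf].clear()
        if len(adj[parent]) == 1:        # parent became a leaf
            heapq.heappush(leaves, parent)
    return seq

if __name__ == "__main__":
    # a small tree: edges 3-0, 3-1, 3-2, 3-4, 4-5; its Prufer sequence is [3, 3, 3, 4]
    print(prufer_encode([(3, 0), (3, 1), (3, 2), (3, 4), (4, 5)], n=6))
```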
the visibility of the quantum interference " dip " seen in the hong - ou - mandel experiment is optimized when a symmetric 50 / 50 beamsplitter is used in the interferometer. here we show that the reduction in visibility caused by an asymmetric beamsplitter can be compensated by manipulating the polarization states of the two input photons. we experimentally demonstrate this by using a highly asymmetric 10 / 90 beamsplitter, and converting an initial dip visibility of 22 % to a compensated value of 99 %.
|
arxiv:0907.3887
|
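for the abstract above ( arxiv:0907.3887 ) : with identical photons and a lossless beamsplitter of reflectance r and transmittance t, the standard hong - ou - mandel dip visibility is v = 2 r t / ( r^2 + t^2 ) ; for a 10 / 90 splitter this gives about 22 %, matching the uncompensated value quoted in the abstract. a worked line :

```python
# Hong-Ou-Mandel dip visibility for an asymmetric beamsplitter with identical photons:
#   V = 2 R T / (R^2 + T^2)
R, T = 0.10, 0.90
V = 2 * R * T / (R**2 + T**2)
print(f"V = {V:.2%}")   # -> about 22%, the uncompensated visibility quoted in the abstract
```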
we propose a systematic procedure for extracting gauge invariant and gauge fixed actions for various higher - spin gauge field theories from covariant bosonic open string field theory. by identifying minimal gauge invariant part for the original free string field theory action, we explicitly construct a class of covariantly gauge fixed actions with brst and anti - brst invariance. by expanding the actions with respect to the level n of string states, the actions for various massive fields including higher - spin fields are systematically obtained. as illustrating examples, we explicitly investigate the level n < = 3 part and obtain the consistent actions for massive graviton field, massive 3rd rank symmetric tensor field, or antisymmetric field. we also investigate the tensionless limit of the actions and explicitly derive the gauge invariant and gauge fixed actions for general rank n symmetric and anti - symmetric tensor fields.
|
arxiv:1209.3921
|
the study of gene regulation and expression is often discussed in quantitative terms. in particular, the expression of genes is regularly characterized with respect to how much, how fast, when and where. whether discussing the level of gene expression in a bacterium or its precise location within a developing embryo, the natural language for these experiments is that of numbers. such quantitative data demands quantitative models. we review a class of models ( " thermodynamic models " ) which exploit statistical mechanics to compute the probability that rna polymerase is at the appropriate promoter. this provides a mathematically precise elaboration of the idea that activators are agents of recruitment which increase the probability that rna polymerase will be found at the promoter of interest. we discuss a framework which describes the interactions of repressors, activators, helper molecules and rna polymerase using the concept of effective concentrations, expressed in terms of a function we call the " regulation factor ". this analysis culminates in an expression for the probability of rna polymerase binding at the promoter of interest as a function of the number of regulatory proteins in the cell. in a companion paper [ 1 ], these ideas are applied to several case studies which illustrate the use of the general formalism.
|
arxiv:q-bio/0412010
|
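for the abstract above ( arxiv:q-bio/0412010 ) : in such thermodynamic models the probability that rna polymerase occupies the promoter is commonly written p_bound = w / ( 1 + w ) with w = ( p / n_ns ) exp ( - delta_eps / kt ) * f_reg, where p is the number of polymerases, n_ns the number of non - specific genomic sites, delta_eps the specific binding energy and f_reg the " regulation factor " mentioned in the abstract. the sketch below evaluates this with illustrative numbers only.

```python
import numpy as np

def p_bound(P, N_NS, delta_eps_kT, F_reg=1.0):
    """Probability that RNA polymerase occupies the promoter in the thermodynamic model.

    P            : number of RNA polymerase molecules
    N_NS         : number of non-specific genomic binding sites
    delta_eps_kT : promoter binding energy relative to non-specific DNA, in kT (negative = favourable)
    F_reg        : regulation factor encoding activators/repressors (F_reg > 1 activates, < 1 represses)
    """
    w = (P / N_NS) * np.exp(-delta_eps_kT) * F_reg   # statistical weight of the bound state
    return w / (1.0 + w)

if __name__ == "__main__":
    # Illustrative, E.-coli-like numbers: ~1000 polymerases, ~5e6 non-specific sites.
    for F in (1.0, 20.0, 0.05):                      # bare promoter, activated, repressed
        print(f"F_reg={F:>6}: p_bound = {p_bound(P=1000, N_NS=5e6, delta_eps_kT=-5.0, F_reg=F):.3f}")
```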
we show that in the perturbative regime defined by the coupling constant, the theta - exact seiberg - witten map applied to noncommutative u ( n ) yang - mills - - with or without supersymmetry - - gives an ordinary gauge theory which is, at the quantum level, dual to the former. we do so by using the on - shell dewitt effective action and dimensional regularization. we explicitly compute the one - loop two - point function contribution to the on - shell dewitt effective action of the ordinary u ( 1 ) theory furnished by the theta - exact seiberg - witten map. we find that the non - local uv divergences found in the propagator in the feynman gauge all but disappear, so that they are not physically relevant. we also show that the quadratic noncommutative ir divergences are gauge - fixing independent and go away in the supersymmetric version of the u ( 1 ) theory.
|
arxiv:1607.01541
|
recent works in self - supervised learning have shown impressive results on single - object images, but they struggle to perform well on complex multi - object images as evidenced by their poor visual grounding. to demonstrate this concretely, we propose visual difference attention ( vda ) to compute visual attention maps in an unsupervised fashion by comparing an image with its salient - regions - masked - out version. we use vda to derive attention maps for state - of - the - art ssl methods and show they do not highlight all salient regions in an image accurately, suggesting their inability to learn strong representations for downstream tasks like segmentation. motivated by these limitations, we cast vda as a differentiable operation and propose a new learning objective, differentiable difference attention ( dida ) loss, which leads to substantial improvements in an ssl model ' s visual grounding to an image ' s salient regions.
|
arxiv:2306.14603
|
cloud stacks must isolate application components, while permitting efficient data sharing between components deployed on the same physical host. traditionally, the mmu enforces isolation and permits sharing at page granularity. mmu approaches, however, lead to cloud stacks with large tcbs in kernel space, and page granularity requires inefficient os interfaces for data sharing. forthcoming cpus with hardware support for memory capabilities offer new opportunities to implement isolation and sharing at a finer granularity. we describe cvms, a new vm - like abstraction that uses memory capabilities to isolate application components while supporting efficient data sharing, all without mandating application code to be capability - aware. cvms share a single virtual address space safely, each having only capabilities to access its own memory. a cvm may include a library os, thus minimizing its dependency on the cloud environment. cvms efficiently exchange data through two capability - based primitives assisted by a small trusted monitor : ( i ) an asynchronous read - write interface to buffers shared between cvms ; and ( ii ) a call interface to transfer control between cvms. using these two primitives, we build more expressive mechanisms for efficient cross - cvm communication. our prototype implementation using cheri risc - v capabilities shows that cvms isolate services ( redis and python ) with low overhead while improving data sharing.
|
arxiv:2202.05732
|
an effective ranking model usually requires a large amount of training data to learn the relevance between documents and queries. user clicks are often used as training data since they can indicate relevance and are cheap to collect, but they contain substantial bias and noise. there has been some work on mitigating various types of bias in simulated user clicks to train effective learning - to - rank models based on multiple features. however, how to effectively use such methods on large - scale pre - trained models with real - world click data is unknown. to alleviate the data bias in the real world, we incorporate heuristic - based features, refine the ranking objective, add random negatives, and calibrate the propensity calculation in the pre - training stage. then we fine - tune several pre - trained models and train an ensemble model to aggregate all the predictions from various pre - trained models with human - annotation data in the fine - tuning stage. our approaches won 3rd place in the " pre - training for web search " task in wsdm cup 2023 and are 22. 6 % better than the 4th - ranked team.
|
arxiv:2302.09340
|
definition of the partition function of u ( 1 ) gauge theory is extended to a class of four - manifolds containing all compact spaces and certain asymptotically locally flat ( alf ) ones including the multi - taub - - nut spaces. the partition function is calculated via zeta - function regularization with special attention to its modular properties. in the compact case, compared with the purely topological result of witten, we find a non - trivial curvature correction to the modular weights of the partition function. but s - duality can be restored by adding gravitational counter terms to the lagrangian in the usual way. in the alf case however we encounter non - trivial difficulties stemming from original non - compact alf phenomena. fortunately our careful definition of the partition function makes it possible to circumnavigate them and conclude that the partition function has the same modular properties as in the compact case.
|
arxiv:1005.5639
|
real - time 3d reconstruction of surgical scenes plays a vital role in computer - assisted surgery, holding a promise to enhance surgeons ' visibility. recent advancements in 3d gaussian splatting ( 3dgs ) have shown great potential for real - time novel view synthesis of general scenes, which relies on accurate poses and point clouds generated by structure - from - motion ( sfm ) for initialization. however, 3dgs with sfm fails to recover accurate camera poses and geometry in surgical scenes due to the challenges of minimal textures and photometric inconsistencies. to tackle this problem, in this paper, we propose the first sfm - free 3dgs - based method for surgical scene reconstruction by jointly optimizing the camera poses and scene representation. based on the video continuity, the key of our method is to exploit the immediate optical flow priors to guide the projection flow derived from 3d gaussians. unlike most previous methods relying on photometric loss only, we formulate the pose estimation problem as minimizing the flow loss between the projection flow and optical flow. a consistency check is further introduced to filter the flow outliers by detecting the rigid and reliable points that satisfy the epipolar geometry. during 3d gaussian optimization, we randomly sample frames to optimize the scene representations to grow the 3d gaussian progressively. experiments on the scared dataset demonstrate our superior performance over existing methods in novel view synthesis and pose estimation with high efficiency. code is available at https://github.com/wrld/free-surgs.
|
arxiv:2407.02918
|
there is a pressing need to interconnect physical systems such as power grid and vehicles for efficient management and safe operations. owing to the diverse features of physical systems, there is hardly a one - size - fits - all networking solution for developing cyber - physical systems. network slicing is a promising technology that allows network operators to create multiple virtual networks on top of a shared network infrastructure. these virtual networks can be tailored to meet the requirements of different cyber - physical systems. however, it is challenging to design secure network slicing solutions that can efficiently create end - to - end network slices for diverse cyber - physical systems. in this article, we discuss the challenges and security issues of network slicing, study learning - assisted network slicing solutions, and analyze their performance under the denial - of - service attack. we also present a design and implementation of a small - scale testbed for evaluating the network slicing solutions.
|
arxiv:1910.13537
|
soft compliant microrobots have the potential to deliver significant societal impact when deployed in applications such as search and rescue. in this research we present mclari, a body compliant quadrupedal microrobot of 20mm neutral body length and 0. 97g, improving on its larger predecessor, clari. this robot has four independently actuated leg modules with 2 degrees of freedom, each driven by piezoelectric actuators. the legs are interconnected in a closed kinematic chain via passive body joints, enabling passive body compliance for shape adaptation to external constraints. despite scaling its larger predecessor down to 60 percent in length and 38 percent in mass, mclari maintains 80 percent of the actuation power to achieve high agility. additionally, we demonstrate the new capability of passively shape - morphing mclari - omnidirectional laterally confined locomotion - and experimentally quantify its running performance, achieving a new unconstrained top speed of 3 body lengths / s ( 60 mm / s ). leveraging passive body compliance, mclari can navigate through narrow spaces with a body compression ratio of up to 1. 5x the neutral body shape.
|
arxiv:2310.04538
|
efficiently modeling spatio - temporal ( st ) physical processes and observations presents a challenging problem for the deep learning community. many recent studies have concentrated on meticulously reconciling various advantages, leading to designed models that are neither simple nor practical. to address this issue, this paper presents a systematic study on existing shortcomings faced by off - the - shelf models, including lack of local fidelity, poor prediction performance over long time - steps, low scalability, and inefficiency. to systematically address the aforementioned problems, we propose earthfarseer, a concise framework that combines parallel local convolutions and global fourier - based transformer architectures, enabling it to dynamically capture the local - global spatial interactions and dependencies. earthfarseer also incorporates multi - scale fully convolutional and fourier architectures to efficiently and effectively capture the temporal evolution. our proposal demonstrates strong adaptability across various tasks and datasets, with fast convergence and better local fidelity in long time - steps predictions. extensive experiments and visualizations over eight human society physical and natural physical datasets demonstrate the state - of - the - art performance of earthfarseer. we release our code at https://github.com/easylearningscores/earthfarseer.
|
arxiv:2312.08403
|
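as a loose illustration of the local - plus - global design described in the entry above ( arxiv:2312.08403 ), the sketch below fuses a small moving - average convolution with a crude low - pass fourier mixer on a 2d field; it is a hypothetical toy in numpy, not the earthfarseer architecture, and the kernel size, number of kept modes, and 50 / 50 fusion are invented placeholders.

```python
import numpy as np

def local_global_block(field, kernel_size=3, keep_modes=8):
    """toy fusion of a local convolution branch and a global fourier branch."""
    # local branch: moving-average convolution with periodic padding
    k = np.ones((kernel_size, kernel_size)) / kernel_size**2
    pad = kernel_size // 2
    padded = np.pad(field, pad, mode="wrap")
    h, w = field.shape
    local = np.empty_like(field)
    for i in range(h):
        for j in range(w):
            local[i, j] = np.sum(padded[i:i + kernel_size, j:j + kernel_size] * k)

    # global branch: keep only the lowest fourier modes (stand-in for a fourier transformer)
    spec = np.fft.fft2(field)
    mask = np.zeros_like(spec)
    mask[:keep_modes, :keep_modes] = 1.0
    global_mix = np.real(np.fft.ifft2(spec * mask))

    # fuse the two branches
    return 0.5 * local + 0.5 * global_mix

field = np.random.rand(32, 32)             # one spatial snapshot of a physical field
print(local_global_block(field).shape)     # (32, 32)
```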
in recent years, the expectation that new businesses and economic value can be created by combining / exchanging data from different fields has risen. however, value creation by data exchange involves not only data, but also technologies and a variety of stakeholders that are integrated and in competition with one another. this makes the data exchange ecosystem a challenging subject to study. in this paper, we propose a model describing the stakeholder - centric value chain ( svc ) of data by focusing on the relationships among stakeholders in data businesses and discussing creative ways to use them. the svc model enables the analysis and understanding of the structural characteristics of the data exchange ecosystem. we identified stakeholders who carry potential risk, those who play central roles in the ecosystem, and the distribution of profit among them using business models collected by the svc.
|
arxiv:2005.11005
|
we report on a possible cloud - cloud collision in the dr 21 region, which we found through molecular observations with the nobeyama 45 - m telescope. we mapped an area of 8 ' x12 ' around the region with twenty molecular lines including the 12co ( j = 1 - 0 ) and 13co ( j = 1 - 0 ) emission lines, and sixteen of them were significantly detected. based on the 12co and 13co data, we found five distinct velocity components in the observed region, and we call molecular gas associated with these components - 42, - 22, - 3, 9, and 17 km / s clouds taking after their typical radial velocities. the - 3 km / s cloud is the main filamentary cloud ( 31, 000 mo ) associated with young massive stars such as dr21 and dr21 ( oh ), and the 9 km / s cloud is a smaller cloud ( 3, 400 mo ) which may be an extension of the w75 region in the north. the other clouds are much smaller. we found a clear anticorrelation in the distributions of the - 3 and 9 km / s clouds, and detected faint 12co emission having intermediate velocities bridging the two clouds at their intersection. these facts strongly indicate that the two clouds are colliding against each other. in addition, we found that dr21 and dr21 ( oh ) are located in the periphery of the densest part of the 9 km / s cloud, which is consistent with results of recent numerical simulations of cloud - cloud collisions. we therefore suggest that the - 3 and 9 km / s clouds are colliding, and that the collision induced the massive star formation in the dr21 cloud. the interaction of the - 3 and 9 km / s clouds was previously suggested by dickel et al. ( 1978 ), and our results strongly support their hypothesis of the interaction.
|
arxiv:1905.07467
|
entanglement is an advantageous but at the same time costly resource utilized in various quantum tasks. for an efficient usage and deployment of entanglement, we envisage the scenario where a pair of spatially separated observers, charu and debu, want to share entanglement without interacting with each other. as a way out, their systems can separately and locally interact with those of alice and bob, respectively, who already share an entangled state. we ask if it is possible to transfer entanglement from the alice - bob pair to multiple charu - debu pairs, where the alice - bob pair only possesses a limited amount of pre - shared entanglement. we find joint unitaries which, when applied by alice and one of the charus, and by bob and the corresponding debu, allow a nonzero amount of the entanglement shared between alice and bob to be sequentially transferred to an indefinite number of pairs of charus and debus. we discuss the amount of entanglement that can be transferred to a fixed number of pairs using these unitaries. also, we determine to how many pairs a fixed amount of entanglement can be transferred. moreover, by optimizing over all possible local unitaries, we analyze the maximum number of pairs to which entanglement can be transferred in such a way that each pair gets at least a fixed amount of entanglement.
|
arxiv:2307.10961
|
the two - loop electroweak radiative corrections to the muon ' s anomalous magnetic moment, $ a _ \ mu \ equiv ( g _ \ mu - 2 ) / 2 $, are presented. we obtain an overall 22. 6 \ % reduction in the electroweak contribution $ a _ \ mu ^ { \ rm ew } $ from $ 195 \ times 10 ^ { - 11 } $ to $ 151 ( 4 ) \ times 10 ^ { - 11 } $. implications for the full standard model prediction and an upcoming high precision measurement of $ a _ \ mu $ are briefly discussed. some aspects of the calculations are discussed in detail.
|
arxiv:hep-ph/9606393
|
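the 22. 6 % figure quoted in the entry above ( arxiv:hep-ph/9606393 ) can be checked directly from the two central values it reports:

$$ \frac{ 195 \times 10^{-11} - 151 \times 10^{-11} }{ 195 \times 10^{-11} } = \frac{44}{195} \approx 0.226 = 22.6\,\%. $$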
we obtain estimates on nonlocal quantities appearing in the volume preserving mean curvature flow ( vpmcf ) in the closed, euclidean setting. as a result we demonstrate that blowups of finite time singularities of vpmcf are ancient solutions to mean curvature flow ( mcf ), prove that monotonicity methods may always be applied at finite times and obtain information on the asymptotics of the flow.
|
arxiv:2207.01123
|
the modification of the wetting properties of marble surfaces upon multi - scale texturing induced by ultrafast laser processing ( 340 fs pulse duration, 1030 nm wavelength ) has been investigated with the aim of evaluating its potential for surface protection. the contact angle ( ca ) of a water drop placed on the surface was used to assess the wettability of the processed areas. although the surfaces are initially hydrophilic upon laser treatment, after a few days they develop a strong hydrophobic behavior. marble surfaces have been irradiated with different scan line separations to elucidate the relative roles of multi - scale roughness ( nano - and micro - texture ) and chemical changes at the surface. the time evolution of the contact angle has been then monitored up to 11 months after treatment. a short and a long - term evolution, associated to the combined effect of multi - scale roughness and the attachment of chemical species at the surface over the time, have been observed. xps and atr measurements are consistent with the progressive hydroxylation of the laser treated surfaces although the additional contribution of hydrocarbon adsorbates to the wettability evolution cannot be ruled - out. the robustness of the results has been tested by ca measurements after cleaning in different conditions with very positive results.
|
arxiv:2108.07875
|
we apply terahertz ( thz ) near - field streaking in a nanofocusing geometry to investigate plasmon polariton propagation on the shaft of a conical nanotip. by evaluating the delay between a streaking spectrogram for plasmon - induced photoemission with a measurement for direct apex excitation, we obtain an average plasmon group velocity, which is in agreement with numerical simulations. combining plasmon - induced photoemission with thz near - field streaking facilitates extensive control over localized photoelectron sources for time - resolved imaging and diffraction.
|
arxiv:2008.11189
|
in this article, we are interested in the case where $ \ mu $ is not concentrated on $ \ mathrm { sl } _ d ( \ mathbb { z } ) \ ltimes \ mathbb { q } ^ d / \ mathbb { z } ^ d $ and we prove that, under assumptions on the group spanned by the support of $ \ mu $, lebesgue ' s measure $ \ nu $ on the torus is the only stationary probability measure and that for any h \ " older - continuous function $ f $ on the torus, $ \ mathbb { e } _ x f ( x _ n ) $ converges exponentially fast to $ \ int f \ mathrm { d } \ nu $. then, we use this to prove the law of large numbers, a non - concentration inequality, the functional central limit theorem and its almost - sure version for the sequence $ ( f ( x _ n ) ) $. in the appendix, we state a non - concentration inequality for products of random matrices without any irreducibility assumption.
|
arxiv:1702.08387
|
we study the mahler measures of the polynomial family $ q _ k ( x, y ) = x ^ 3 + y ^ 3 + 1 - kxy $ using the method previously developed by the authors. an algorithm is implemented to search for cm points with class numbers $ \ leqslant 3 $, we employ these points to derive interesting formulas that link the mahler measures of $ q _ k ( x, y ) $ to $ l $ - values of modular forms. as by - products, some conjectural identities of samart are confirmed, one of them involves the modified mahler measure $ \ tilde { n } ( k ) $ introduced by samart recently. for $ k = \ sqrt [ 3 ] { 729 \ pm405 \ sqrt { 3 } } $, we also prove an equality that expresses a $ 2 \ times 2 $ determinant with entries the mahler measures of $ q _ k ( x, y ) $ as some multiple of the $ l $ - value of two isogenous elliptic curves over $ \ mathbb { q } ( \ sqrt { 3 } ) $.
|
arxiv:2310.12510
|
the dependence of the fractal behaviors of the pomeron induced system in deep inelastic lepton - nucleon scattering upon the diffractive kinematic variables is found rather robust and not sensitive to the distinct parameterization of the pomeron flux factor and structure function. a feasible experimental test of the phenomenological pomeron - exchanged model based on the fractal measurement in desy $ ep $ collider hera is proposed.
|
arxiv:hep-ph/9802269
|
in this work, we study the charmonium spectrum within an unquenched quark model including coupled - channel effects. in the coupled - channel calculations, we include all of the opened charmed meson channels with the once - subtracted method, and meanwhile adopt a suppression factor to soften the hard vertices given by the $ ^ 3p _ 0 $ model in the high momentum region. we obtain a good description of both the masses and widths for the well - established states in the charmonium spectrum. furthermore, we give predictions for the higher $ s $ -, $ p $ - and $ d $ - wave charmonium states up to the mass region of $ \ sim 5. 0 $ gev. the magnitude of mass shifts due to the coupled - channel effects is estimated to be about $ 10s $ mev. although many decay channels are opened for the higher charmonium states, they are relatively narrow states. their widths scatter in the range of $ \ sim 10s - 100 $ mev. many charmonium - like states, such as $ \ chi _ { c1 } ( 3872 ) $, $ \ chi _ { c1 } ( 4274 ) $, $ \ chi _ { c0 } ( 3915 ) $, $ \ chi _ { c0 } ( 4500 ) $, $ \ chi _ { c0 } ( 4700 ) $, $ x ( 4160 ) $, $ x ( 4350 ) $, $ y ( 4500 ) $, and $ \ psi ( 4660 ) $ / $ y ( 4710 ) $, can be accommodated by the charmonium spectrum when the unquenched coupled - channel effects are carefully considered.
|
arxiv:2312.10296
|
in a recent paper, regev and roichman introduced the < _ l order and the l - descent number statistic, des _ l, on the group of colored permutations, c _ a \ wr s _ n. here we define the l - reverse major index statistic, rmaj _ l, on the same group and study the distribution of des _ l and the bi - statistic ( des _ l, rmaj _ l ). we obtain new wreath - product analogues of the eulerian and q - euler - mahonian polynomials, and a generalization of carlitz ' s identity.
|
arxiv:math/0412091
|
we present an integrated magneto - photonic device for all - optical switching of non - volatile multi - bit spintronic memory. the bits are based on stand - alone magneto - tunnel junctions which are perpendicularly magnetized with all - optically switchable free layers, coupled onto photonic crystal nanobeam cavities on an indium phosphide based platform. this device enables switching of the magnetization state of the bits by locally increasing the power absorption of light at resonance with the cavity. we design an add / drop network of cavities to grant random access to multiple bits via a wavelength - division multiplexing scheme. based on a three - dimensional finite - difference time - domain method, we numerically illustrate a compact device capable of switching and accessing 8 bits in different cavities with a 5 nm wavelength spacing in the conventional ( c ) telecommunication band. our multi - bit device holds promise as a new paradigm for developing an ultrafast photonically - addressable spintronic memory and may also empower novel opportunities for photonically - driven spintronic - based neuromorphic computing.
|
arxiv:2402.02485
|
privacy preservation is addressed for decentralized optimization, where $ n $ agents cooperatively minimize the sum of $ n $ convex functions private to these individual agents. in most existing decentralized optimization approaches, participating agents exchange and disclose states explicitly, which may not be desirable when the states contain sensitive information of individual agents. the problem is more acute when adversaries exist which try to steal information from other participating agents. to address this issue, we propose a privacy - preserving decentralized optimization approach based on admm and partially homomorphic cryptography. to our knowledge, this is the first time that cryptographic techniques are incorporated in a fully decentralized setting to enable privacy preservation in decentralized optimization in the absence of any third party or aggregator. to facilitate the incorporation of encryption in a fully decentralized manner, we introduce a new admm which allows time - varying penalty matrices and rigorously prove that it has a convergence rate of $ o ( 1 / t ) $. numerical and experimental results confirm the effectiveness and low computational complexity of the proposed approach.
|
arxiv:1707.04338
|
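the entry above ( arxiv:1707.04338 ) builds on a partially homomorphic cryptosystem ( paillier ) inside admm; the snippet below only demonstrates the additive - homomorphic primitive involved, using the third - party python package `phe`. the toy values and the averaging step are illustrative assumptions, not the paper's decentralized protocol.

```python
from phe import paillier  # pip install phe

# one agent generates a keypair; its neighbours only ever see ciphertexts
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# neighbours encrypt their local states before exchanging them
local_states = [0.7, -1.2, 3.4]
ciphertexts = [public_key.encrypt(x) for x in local_states]

# additions and scaling by plaintext constants act directly on ciphertexts,
# which is the property a privacy-preserving decentralized update can exploit
encrypted_sum = ciphertexts[0] + ciphertexts[1] + ciphertexts[2]
encrypted_avg = encrypted_sum * (1.0 / len(local_states))

# only the key holder recovers the aggregated value
print(private_key.decrypt(encrypted_avg))  # ~0.9667
```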
the rise of ethereum has led to a flourishing decentralized marketplace that has, unfortunately, fallen victim to frontrunning and maximal extractable value ( mev ) activities, where savvy participants game transaction orderings within a block for profit. one popular solution to address such behavior is flashbots, a private pool with infrastructure and design goals aimed at eliminating the negative externalities associated with mev. while flashbots has established laudable goals to address mev behavior, no evidence has been provided to show that these goals are achieved in practice. in this paper, we measure the popularity of flashbots and evaluate if it is meeting its chartered goals. we find that ( 1 ) flashbots miners account for over 99. 9 % of the hashing power in the ethereum network, ( 2 ) powerful miners are making more than $ 2 \ times $ what they were making prior to using flashbots, while non - miners ' slice of the pie has shrunk commensurately, ( 3 ) mining is just as centralized as it was prior to flashbots with more than 90 % of flashbots blocks coming from just two miners, and ( 4 ) while more than 80 % of mev extraction in ethereum is happening through flashbots, 13. 2 % is coming from other private pools.
|
arxiv:2206.04185
|
we prove that for certain families of semi - algebraic convex bodies in 3 dimensions, the convex hull of $ n $ disjoint bodies has $ o ( n \ lambda _ s ( n ) ) $ features, where $ s $ is a constant depending on the family : $ \ lambda _ s ( n ) $ is the maximum length of order - $ s $ davenport - schinzel sequences with $ n $ letters. the argument is based on an apparently new idea of ` compact families ' of convex bodies or discs, and of ` crossing content ' of disc intersections.
|
arxiv:1311.6331
|
in the article, we investigate localization problem for spinor fields within the 6d standing wave braneworld with the bulk real scalar field, introduced earlier in [ 30 ], and explicitly show that there is no normalizable fermion field zero mode trapped on the brane.
|
arxiv:2402.17869
|
statistical modeling and data - driven learning are two vital fields that attract much attention. statistical models intend to capture and interpret the relationships among variables, while data - based learning attempts to extract information directly from the data without pre - processing through complex models. given the extensive studies in both fields, a subtle issue is how to properly integrate data - based methods with existing knowledge or models. in this paper, based on time series data, we propose two different directions to integrate the two : a decomposition - based method and a method exploiting the statistical extraction of data features. the first one decomposes the data into linear stable, nonlinear stable and unstable parts, where suitable statistical models are used for the linear stable and nonlinear stable parts while appropriate machine learning tools are used for the unstable parts. the second one applies statistical models to extract statistical features of the data and feeds them as additional inputs into the machine learning platform for training. the most critical and challenging task is how to determine and extract the valuable information from mathematical or statistical models to boost the performance of machine learning algorithms. we evaluate the proposal using time series data with varying degrees of stability. performance results show that both methods can outperform existing schemes that use models and learning separately, and the improvements can be over 60 %. both our proposed methods are promising in bridging the gap between model - based and data - driven schemes and integrating the two to provide an overall higher learning performance.
|
arxiv:2110.00082
|
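as a loose sketch of the decomposition idea in the entry above ( arxiv:2110.00082 ), the code below lets a simple statistical model absorb the linear ( stable ) component and hands the residual to a generic machine - learning regressor; the two - way split and the particular models are simplifying assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
t = np.arange(300, dtype=float)
series = 0.05 * t + np.sin(0.2 * t) + rng.normal(0.0, 0.3, t.size)   # toy time series

# statistical part: the linear (stable) component via a regression on time
lin = LinearRegression().fit(t.reshape(-1, 1), series)
residual = series - lin.predict(t.reshape(-1, 1))

# data-driven part: a machine-learning model on lagged residuals
lags = 5
X = np.column_stack([residual[i:len(residual) - lags + i] for i in range(lags)])
y = residual[lags:]
ml = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# one-step-ahead forecast = statistical trend + learned residual correction
forecast = lin.predict([[t[-1] + 1.0]])[0] + ml.predict(residual[-lags:].reshape(1, -1))[0]
print(forecast)
```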
we make explicit the statement that minimal supersymmetric su ( 5 ) has been excluded by the super - kamiokande search for the process $ p \ to k ^ { + } \ overline { \ nu } $. this exclusion is made by first placing limits on the colored higgs triplet mass, by forcing the gauge couplings to unify. we also show that taking the superpartners of the first two generations to be very heavy in order to avoid flavor changing neutral currents, the so - called ` ` decoupling ' ' idea, is insufficient to resurrect the minimal susy su ( 5 ). we comment on various mechanisms to further suppress proton decay in susy su ( 5 ). finally, we address the contributions to proton decay from gauge boson exchange in the minimal susy su ( 5 ) and flipped su ( 5 ) models.
|
arxiv:hep-ph/0108104
|
consider the energy critical focusing wave equation on the euclidian space. a blow - up type ii solution of this equation is a solution which has finite time of existence but stays bounded in the energy space. the aim of this work is to exhibit universal properties of such solutions. let w be the unique radial positive stationary solution of the equation. our main result is that in dimension 3, under an appropriate smallness assumption, any type ii blow - up radial solution is essentially the sum of a rescaled w concentrating at the origin and a small remainder which is continuous with respect to the time variable in the energy space. this is coherent with the solutions constructed by krieger, schlag and tataru. one ingredient of our proof is that the unique radial solution which is compact up to scaling is equal to w up to symmetries.
|
arxiv:0910.2594
|
we compute the uncertainty of xivo, a monocular visual - inertial odometry system based on the extended kalman filter, in the presence of gaussian noise, drift, and attribution errors in the feature tracks in addition to gaussian noise and drift in the imu. uncertainty is computed using monte - carlo simulations of a sufficiently exciting trajectory in the midst of a point cloud that bypass the typical image processing and feature tracking steps. we find that attribution errors have the largest detrimental effect on performance. even with just small amounts of gaussian noise and / or drift, however, the probability that xivo ' s performance resembles the mean performance when noise and / or drift is artificially high is greater than 1 in 100.
|
arxiv:2303.16386
|
molecular conformation generation, a critical aspect of computational chemistry, involves producing the three - dimensional conformer geometry for a given molecule. generating molecular conformation via diffusion requires learning to reverse a noising process. diffusion on inter - atomic distances instead of conformation preserves se ( 3 ) - equivalence and shows superior performance compared to alternative techniques, whereas related generative models are predominantly based upon heuristic assumptions. in response to this, we propose a novel molecular conformation generation approach driven by the observation that the disintegration of a molecule can be viewed as casting increasing force fields onto its constituent atoms, such that the distribution of the change of inter - atomic distance shifts from a gaussian to a maxwell - boltzmann distribution. the corresponding generative modeling ensures a feasible inter - atomic distance geometry and exhibits time reversibility. experimental results on molecular datasets demonstrate the advantages of the proposed shifting distribution compared to the state - of - the - art.
|
arxiv:2309.09985
|
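the entry above ( arxiv:2309.09985 ) hinges on the change of inter - atomic distances shifting from a gaussian to a maxwell - boltzmann distribution; the snippet below only illustrates that distributional statement ( a maxwell - boltzmann variable arises as the norm of an isotropic 3d gaussian vector ) and is not the paper's generative model.

```python
import numpy as np

rng = np.random.default_rng(42)
n, sigma = 100_000, 1.0

# gaussian perturbation of a scalar distance
gauss = rng.normal(0.0, sigma, n)

# maxwell-boltzmann: the norm of an isotropic 3d gaussian vector
mb = np.linalg.norm(rng.normal(0.0, sigma, (n, 3)), axis=1)

print(gauss.mean(), gauss.std())   # ~0.0, ~1.0
print(mb.mean())                   # ~2*sigma*sqrt(2/pi) ~ 1.60
```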
we examine the luminosity function ( lf ) of [ oii ] emission - line galaxies in the high - resolution cosmological simulation massiveblack - ii ( mbii ). from the spectral energy distribution of each galaxy, we select a sub - sample of star - forming galaxies at $ 0. 06 \ le z \ le 3. 0 $ using the [ oii ] emission line luminosity l ( [ oii ] ). we confirm that the specific star formation rate matches that in the gama survey. we show that the [ oii ] lf at z = 1. 0 from the mbii shows a good agreement with the lfs from several surveys below l ( [ oii ] ) = $ 10 ^ { 43. 0 } $ erg / s while the low redshifts ( $ z \ le 0. 3 $ ) show an excess in the prediction of bright [ oii ] galaxies, but still displaying a good match with observations below l ( [ oii ] ) = $ 10 ^ { 41. 6 } $ erg / s. based on the validity in reproducing the properties of [ oii ] galaxies at low redshift ( $ z \ le 1 $ ), we forecast the evolution of the [ oii ] lf at high redshift ( $ z \ le 3 $ ), which can be tested by upcoming surveys such as the hetdex and desi. the slopes of the lfs at bright and faint ends range from - 3 to - 2 showing minima at z = 2. the slope of the bright end evolves approximately as 1 / ( z + 1 ) at z = 2 while the faint end evolves as ~ 3 / ( z + 1 ) at $ 0. 6 \ le z \ le 2 $. in addition, a similar analysis is applied for the evolution of [ oiii ] lfs, which is to be explored in the forthcoming survey wfirst - afta. finally, we show that the auto - correlation function of [ oii ] and [ oiii ] emitting galaxies shows a rapid evolution from z = 2 to 1.
|
arxiv:1508.05106
|
core hole resonance is used in x - ray spectroscopy to incisively probe the local electronic states of many - body systems. here, resonant inelastic x - ray scattering ( rixs ) is studied as a function of incident photon energy on mott insulators srcuo2 and nio to examine how resonance states decay into different excitation symmetries at the transition metal m -, l - and k - edges. quantum interference patterns characteristic of the two major rixs mechanisms are identified within the data, and used to distinguish the attosecond scale scattering dynamics by which fundamental excitations of a many - body system are created. a function is proposed to experimentally evaluate whether a particular excitation has constructive or destructive interference in the rixs cross - section, and corroborates other evidence that an anomalous excitation is present at the leading edge of the mott gap in quasi - one dimensional srcuo2.
|
arxiv:1612.01019
|
in this paper, we study multivariate approximation defined over weighted anisotropic sobolev spaces which depend on two sequences $ { \ bf a } = \ { a _ j \ } _ { j \ geq1 } $ and $ { \ bf b } = \ { b _ j \ } _ { j \ geq1 } $ of positive numbers. we obtain strong equivalences of the approximation numbers, and necessary and sufficient conditions on $ { \ bf a } $, $ { \ bf b } $ to achieve various notions of tractability of the weighted anisotropic sobolev embeddings.
|
arxiv:1907.00589
|
this paper considers the problem of resource - constrained and noise - limited localization and estimation of dynamic targets that are sparsely distributed over a large area. we generalize an existing framework [ bashan et al, 2008 ] for adaptive allocation of sensing resources to the dynamic case, accounting for time - varying target behavior such as transitions to neighboring cells and varying amplitudes over a potentially long time horizon. the proposed adaptive sensing policy is driven by minimization of a modified version of the previously introduced arap objective function, which is a surrogate function for mean squared error within locations containing targets. we provide theoretical upper bounds on the performance of adaptive sensing policies by analyzing solutions with oracle knowledge of target locations, gaining insight into the effect of target motion and amplitude variation as well as sparsity. exact minimization of the multi - stage objective function is infeasible, but myopic optimization yields a closed - form solution. we propose a simple non - myopic extension, the dynamic adaptive resource allocation policy ( d - arap ), that allocates a fraction of resources for exploring all locations rather than solely exploiting the current belief state. our numerical studies indicate that d - arap has the following advantages : ( a ) it is more robust than the myopic policy to noise, missing data, and model mismatch ; ( b ) it performs comparably to well - known approximate dynamic programming solutions but at significantly lower computational complexity ; and ( c ) it improves greatly upon non - adaptive uniform resource allocation in terms of estimation error and probability of detection.
|
arxiv:1404.2201
|
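the d - arap policy in the entry above ( arxiv:1404.2201 ) reserves a fraction of the sensing resources for exploring all locations and spends the remainder according to the current belief; the rule below is one simplified reading of that split, with the exploration fraction and the belief - proportional weighting chosen as assumptions for illustration.

```python
import numpy as np

def allocate_energy(belief, total_energy=1.0, explore_frac=0.2):
    """belief: per-cell probabilities that a target is present.
    returns per-cell sensing energy: a uniform exploration share plus a belief-weighted share."""
    belief = np.asarray(belief, dtype=float)
    explore = explore_frac * total_energy / belief.size * np.ones_like(belief)
    weights = belief / belief.sum() if belief.sum() > 0 else np.ones_like(belief) / belief.size
    exploit = (1.0 - explore_frac) * total_energy * weights
    return explore + exploit

alloc = allocate_energy([0.01, 0.05, 0.80, 0.02], total_energy=4.0)
print(alloc, alloc.sum())   # per-cell allocation; the sum equals the total budget
```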
the blue main sequence ( bms ) of $ \ omega $ cen implies a ratio of helium to metal enrichment $ \ delta y / \ delta z \ approx 70 $, which is a major enigma. we show that rotating models of low metallicity stars, which account for the anomalous abundance ratios of extremely metal poor stars, are also useful for understanding the very high $ \ delta y / \ delta z $ ratio in $ \ omega $ cen. models of massive stars with moderate initial rotation velocities produce stellar winds with large he - - and n - - excesses, but without the large c - - ( and o - - ) excesses made by very fast rotation, in agreement with the observed chemical abundance ratios in $ \ omega $ cen. it is still uncertain whether the abundance peculiarities of $ \ omega $ cen result from the fact that the high velocity contributions of supernovae escaped the globular cluster, usually considered as a tidally stripped core of a dwarf galaxy. another possibility is a general dominance of wind ejecta at very low $ z $, due to the formation of black holes. some abundance and isotopic ratios like $ mg / al $, $ na / mg $, $ ne / n $, $ ^ { 12 } c / ^ { 13 } c $, $ ^ { 16 } o / ^ { 18 } o $ and $ ^ { 17 } o / ^ { 18 } o $ may allow us to further discriminate between these scenarios and between the agb and massive star contributions.
|
arxiv:astro-ph/0601425
|
we present a new scheme to compensate for the small - scale approximations resulting from particle - mesh ( pm ) schemes for cosmological n - body simulations. these simulations are fast, low - computational - cost realizations of the large - scale structure, but lack resolution on small scales. to improve their accuracy, we introduce an additional effective force within the differential equations of the simulation, parameterized by a fourier - space neural network acting on the pm - estimated gravitational potential. we compare the resulting matter power spectrum to the one obtained with the pgd scheme ( potential gradient descent scheme ). we notice a similar improvement in terms of the power spectrum, but we find that our approach outperforms pgd for the cross - correlation coefficients, and is more robust to changes in simulation settings ( different resolutions, different cosmologies ).
|
arxiv:2207.05509
|
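a minimal sketch of the kind of fourier - space correction described in the entry above ( arxiv:2207.05509 ): an isotropic filter applied to the pm - estimated potential before forces are derived. the filter shape and its parameters are invented placeholders standing in for the trained neural network.

```python
import numpy as np

def correct_potential(potential, a=0.3, k0=0.5, n=2.0):
    """apply a simple radial fourier-space filter 1 + a*(k/k0)^n / (1 + (k/k0)^n)
    to a 3d pm potential grid (placeholder for a learned filter)."""
    grid = potential.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(grid)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kk = np.sqrt(kx**2 + ky**2 + kz**2)
    ratio = (kk / k0) ** n
    filt = 1.0 + a * ratio / (1.0 + ratio)   # boosts small scales (large k)
    return np.real(np.fft.ifftn(np.fft.fftn(potential) * filt))

phi = np.random.rand(32, 32, 32)             # stand-in for a pm potential grid
print(correct_potential(phi).shape)          # (32, 32, 32)
```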
passphrase, and if the ssid is hidden or not, so users can follow links from qr codes, for instance, to join networks without having to manually enter the data. a mecard - like format is supported by android and ios 11 +. common format : wifi : s : < ssid > ; t : < wep | wpa | blank > ; p : < password > ; h : < true | false | blank > ; sample wifi : s : myssid ; t : wpa ; p : mypassw0rd ; ; = = = data security risks = = = wi - fi access points typically default to an encryption - free ( open ) mode. novice users benefit from a zero - configuration device that works out - of - the - box, but this default does not enable any wireless security, providing open wireless access to a lan. to turn security on requires the user to configure the device, usually via a software graphical user interface ( gui ). on unencrypted wi - fi networks connecting devices can monitor and record data ( including personal information ). such networks can only be secured by using other means of protection, such as a vpn, or hypertext transfer protocol over transport layer security ( https ). the older wireless - encryption standard, wired equivalent privacy ( wep ), has been shown easily breakable even when correctly configured. wi - fi protected access ( wpa ) encryption, which became available in devices in 2003, aimed to solve this problem. wi - fi protected access 2 ( wpa2 ) ratified in 2004 is considered secure, provided a strong passphrase is used. the 2003 version of wpa has not been considered secure since it was superseded by wpa2 in 2004. in 2018, wpa3 was announced as a replacement for wpa2, increasing security ; it rolled out on 26 june. = = = piggybacking = = = piggybacking refers to access to a wireless internet connection by bringing one ' s computer within the range of another ' s wireless connection, and using that service without the subscriber ' s explicit permission or knowledge. during the early popular adoption of 802. 11, providing open access points for anyone within range to use was encouraged to cultivate wireless community networks, particularly since people on average use only a fraction of their downstream bandwidth at any given time. recreational logging and mapping of other people ' s access points have become known as wardriving. indeed, many access
|
https://en.wikipedia.org/wiki/Wi-Fi
|
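the wi - fi entry above quotes the mecard - like join string; the helper below assembles that exact format and optionally renders it with the third - party `qrcode` package ( escaping of special characters in the ssid or password is omitted for brevity ).

```python
def wifi_qr_payload(ssid, password, auth="WPA", hidden=False):
    """build the WIFI:S:<ssid>;T:<auth>;P:<password>;H:<true|blank>;; join string."""
    hidden_field = "true" if hidden else ""
    return f"WIFI:S:{ssid};T:{auth};P:{password};H:{hidden_field};;"

payload = wifi_qr_payload("MySSID", "mypassw0rd", auth="WPA")
print(payload)  # WIFI:S:MySSID;T:WPA;P:mypassw0rd;H:;;

# optional: render a scannable image (pip install qrcode[pil])
import qrcode
qrcode.make(payload).save("wifi.png")
```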
was held in lyon in 1969. the second congress was in exeter in 1972, and after that, it has been held every four years. midway through the twentieth century, the cultural impact of the " electronic age " ( mcluhan ) was also taken up by educational theory and the teaching of mathematics. while previous approach focused on " working with specialized ' problems ' in arithmetic ", the emerging structural approach to knowledge had " small children meditating about number theory and ' sets '. " since the 1980s, there have been a number of efforts to reform the traditional curriculum, which focuses on continuous mathematics and relegates even some basic discrete concepts to advanced study, to better balance coverage of the continuous and discrete sides of the subject : in the 1980s and early 1990s, there was a push to make discrete mathematics more available at the post - secondary level ; from the late 1980s into the new millennium, countries like the us began to identify and standardize sets of discrete mathematics topics for primary and secondary education ; concurrently, academics began compiling practical advice on introducing discrete math topics into the classroom ; researchers continued arguing the urgency of making the transition throughout the 2000s ; and in parallel, some textbook authors began working on materials explicitly designed to provide more balance. similar efforts are also underway to shift more focus to mathematical modeling as well as its relationship to discrete math. = = objectives = = at different times and in different cultures and countries, mathematics education has attempted to achieve a variety of different objectives. these objectives have included : the teaching and learning of basic numeracy skills to all students the teaching of practical mathematics ( arithmetic, elementary algebra, plane and solid geometry, trigonometry, probability, statistics ) to most students, to equip them to follow a trade or craft and to understand mathematics commonly used in news and internet ( such as percentages, charts, probability, and statistics ) the teaching of abstract mathematical concepts ( such as set and function ) at an early age the teaching of selected areas of mathematics ( such as euclidean geometry ) as an example of an axiomatic system and a model of deductive reasoning the teaching of selected areas of mathematics ( such as calculus ) as an example of the intellectual achievements of the modern world the teaching of advanced mathematics to those students who wish to follow a career in science, technology, engineering, and mathematics ( stem ) fields the teaching of heuristics and other problem - solving strategies to solve non - routine problems the teaching of mathematics in social sciences and actuarial sciences, as well as in some selected arts
|
https://en.wikipedia.org/wiki/Mathematics_education
|
we generalise the concepts introduced by baez and dolan to define opetopes constructed from symmetric operads with a category, rather than a set, of objects. we describe the category of 1 - level generalised multicategories, a special case of the concept introduced by hermida, makkai and power, and exhibit a full embedding of this category in the category of symmetric operads with a category of objects. as an analogy to the baez - dolan slice construction, we exhibit a certain multicategory of function replacement as a slice construction in the multitopic setting, and use it to construct multitopes. we give an explicit description of the relationship between opetopes and multitopes.
|
arxiv:math/0304277
|
we study existence and uniqueness of solutions to a class of nonlinear degenerate parabolic equations, in bounded domains. we show that there exists a unique solution which satisfies possibly inhomogeneous dirichlet boundary conditions. to this purpose some barrier functions are properly introduced and used.
|
arxiv:1412.2068
|
uncovering latent values and opinions embedded in large language models ( llms ) can help identify biases and mitigate potential harm. recently, this has been approached by prompting llms with survey questions and quantifying the stances in the outputs towards morally and politically charged statements. however, the stances generated by llms can vary greatly depending on how they are prompted, and there are many ways to argue for or against a given position. in this work, we propose to address this by analysing a large and robust dataset of 156k llm responses to the 62 propositions of the political compass test ( pct ) generated by 6 llms using 420 prompt variations. we perform coarse - grained analysis of their generated stances and fine - grained analysis of the plain text justifications for those stances. for fine - grained analysis, we propose to identify tropes in the responses : semantically similar phrases that are recurrent and consistent across different prompts, revealing natural patterns in the text that a given llm is prone to produce. we find that demographic features added to prompts significantly affect outcomes on the pct, reflecting bias, as well as disparities between the results of tests when eliciting closed - form vs. open domain responses. additionally, patterns in the plain text rationales via tropes show that similar justifications are repeatedly generated across models and prompts even with disparate stances.
|
arxiv:2406.19238
|
future applications of spin - orbit torque will require new mechanisms to improve the efficiency for switching nanoscale magnetic tunnel junctions ( mtjs ), while also controlling the magnetic dynamics to achieve fast, nanosecond scale performance with low write error rates. here we demonstrate a strategy to simultaneously enhance the interfacial magnetic anisotropy energy and suppress interfacial spin memory loss by introducing sub - atomic and monatomic layers of hf at the top and bottom interfaces of the ferromagnetic free layer of an in - plane magnetized three - terminal mtj device. when combined with a beta - w spin hall channel that generates spin - orbit torque, the cumulative effect is a switching current density of 5. 4 x 10 ^ 6 a / cm ^ 2, more than a factor of 3 lower than demonstrated in any other spin - orbit - torque magnetic memory device at room temperature, and highly reliable switching with current pulses only 2 ns long.
|
arxiv:1710.06391
|
we analyze the real - time dynamics of the large $ n $ vector model, focusing on heavy states with energies of the order $ n $. in this regime, we demonstrate that interactions become sufficiently strong to produce non - zero condensate of the hubbard - stratonovich field $ \ sigma $, which, in turn, induces particle production. this process leads to a significant transformation of the initial state and potential thermalization. for homogeneous perturbations, our results show that the equations become integrable, yet can still lead to thermalization in the continuum limit. furthermore, we calculate the energies of these heavy states and their contributions to the thermal free energy, thereby determining the free energy of the critical $ o ( n ) $ model by operator counting.
|
arxiv:2411.02258
|
despite the promising performance of current 3d human pose estimation techniques, understanding and enhancing their generalization on challenging in - the - wild videos remain an open problem. in this work, we focus on the robustness of 2d - to - 3d pose lifters. to this end, we develop two benchmark datasets, namely human3. 6m - c and humaneva - i - c, to examine the robustness of video - based 3d pose lifters to a wide range of common video corruptions including temporary occlusion, motion blur, and pixel - level noise. we observe the poor generalization of state - of - the - art 3d pose lifters in the presence of corruption and establish two techniques to tackle this issue. first, we introduce temporal additive gaussian noise ( tagn ) as a simple yet effective 2d input pose data augmentation. additionally, to incorporate the confidence scores output by the 2d pose detectors, we design a confidence - aware convolution ( ca - conv ) block. extensively tested on corrupted videos, the proposed strategies consistently boost the robustness of 3d pose lifters and serve as new baselines for future research.
|
arxiv:2312.06797
|
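the entry above ( arxiv:2312.06797 ) introduces temporal additive gaussian noise ( tagn ) on the 2d input poses; the function below is a plausible minimal reading of that augmentation ( independent gaussian jitter on every keypoint of every frame ), with the tensor layout and noise scale chosen purely for illustration.

```python
import numpy as np

def temporal_additive_gaussian_noise(pose_2d, sigma=0.01, rng=None):
    """pose_2d: array of shape (frames, joints, 2) with normalized 2d keypoints.
    adds independent gaussian noise to every joint in every frame."""
    rng = rng if rng is not None else np.random.default_rng()
    return pose_2d + rng.normal(0.0, sigma, size=pose_2d.shape)

clip = np.random.rand(243, 17, 2)            # e.g. a 243-frame clip with 17 joints
augmented = temporal_additive_gaussian_noise(clip, sigma=0.02)
print(np.abs(augmented - clip).mean())       # average perturbation magnitude
```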
the first - - principles density functional molecular dynamics simulations have been carried out to investigate the geometric, the electronic, and the finite temperature properties of pure li clusters ( li $ _ { 10 } $, li $ _ { 12 } $ ) and al - - doped li clusters ( li $ _ { 10 } $ al, li $ _ { 10 } $ al $ _ 2 $ ). we find that addition of two al impurities in li $ _ { 10 } $ results in a substantial structural change, while the addition of one al impurity causes a rearrangement of atoms. introduction of al - - impurities in li $ _ { 10 } $ establishes a polar bond between li and nearby al atom ( s ), leading to a multicentered bonding, which weakens the li - - li metallic bonds in the system. these weakened li - - li bonds lead to a premelting feature to occur at lower temperatures in al - - doped clusters. in li $ _ { 10 } $ al $ _ 2 $, al atoms also form a weak covalent bond, resulting into their dimer like behavior. this causes al atoms not to ` melt ' till 800 k, in contrast to the li atoms which show a complete diffusive behavior above 400 k. thus, although one al impurity in li $ _ { 10 } $ cluster does not change its melting characteristics significantly, two impurities results in ` surface melting ' of li atoms whose motions are confined around al dimer.
|
arxiv:cond-mat/0609215
|
generative seq2seq dialogue systems are trained to predict the next word in dialogues that have already occurred. they can learn from large unlabeled conversation datasets, build a deep understanding of conversational context, and generate a wide variety of responses. this flexibility comes at the cost of control. undesirable responses in the training data will be reproduced by the model at inference time, and longer generations often don ' t make sense. instead of generating responses one word at a time, we train a classifier to choose from a predefined list of full responses. the classifier is trained on ( conversation context, response class ) pairs, where each response class is a noisily labeled group of interchangeable responses. at inference, we generate the exemplar response associated with the predicted response class. experts can edit and improve these exemplar responses over time without retraining the classifier or invalidating old training data. human evaluation of 775 unseen doctor / patient conversations shows that this tradeoff improves responses. only 12 % of our discriminative approach ' s responses are worse than the doctor ' s response in the same conversational context, compared to 18 % for the generative model. a discriminative model trained without any manual labeling of response classes achieves equal performance to the generative model.
|
arxiv:1910.03476
|
in this paper, we use a group - theoretic approach to give a concrete description of the geometric structure of the supersingular locus of unitary shimura varieties with exotic good reduction. this approach also is a more uniform way to prove results of this form obtained previously by, for example, vollaard - wedhorn and rapoport - terstiege - wilson.
|
arxiv:1609.08775
|
we explore a scheme for guiding cold atoms through a hollow bessel beam generated by a single axicon and a lens from a 2d magneto - optical trap toward a science chamber. we compare the bessel beam profiles measured along the optical axis to a numerical propagation of the beam ' s wavefront, and we show how it is affected by diffraction during the passage through a long narrow funnel serving as a differential pumping tube between the chambers. we derive an approximate analytic expression for the intensity distribution of the bessel beam and the dipolar optical force acting on the atoms. by a monte - carlo simulation based on a stochastic runge - kutta algorithm of the motion of atoms initially prepared at a given temperature we show that a considerable enhancement of the transfer efficiency can be expected in the presence of a sufficiently intense bessel beam.
|
arxiv:2010.09792
|
the analytical expressions and the numerical values of the renormalisation constants of dimension 3 static - light currents are given at one - loop order of perturbation theory in the framework of heavy quark effective theory and with an improved gauge action : the static quark is described by the hyp - smeared action and the light quark is of wilson kind. this completes a work started few years ago and is actually an intermediate step in the measurement of the decay constants $ f _ { b } $ and $ f _ { b _ { s } } $ by the european twisted mass collaboration [ arxiv : 1107. 1441 [ hep - lat ] ].
|
arxiv:1106.2132
|
continual object detection is essential for enabling intelligent agents to interact proactively with humans in real - world settings. while parameter - isolation strategies have been extensively explored in the context of continual learning for classification, they have yet to be fully harnessed for incremental object detection scenarios. drawing inspiration from prior research that focused on mining individual neuron responses and integrating insights from recent developments in neural pruning, we proposed efficient ways to identify which layers are the most important for a network to maintain the performance of a detector across sequential updates. the presented findings highlight the substantial advantages of layer - level parameter isolation in facilitating incremental learning within object detection models, offering promising avenues for future research and application in real - world scenarios.
|
arxiv:2402.12624
|
in this study, the dynamics of a dissipationless incompressible hall magnetohydrodynamic ( hmhd ) medium are formulated as geodesics on a direct product of two volume - preserving diffeomorphism groups. formulations are given for the geodesic and jacobi equations based on a linear connection with physically desirable properties, which agrees with the levi - civita connection. derivations of the explicit normal - mode expressions for the riemannian metric, levi - civita connection, and related formulae and equations are also provided using the generalized els \ " asser variables ( gevs ). examinations of the stabilities of the hydrodynamic ( hd, $ \ alpha = 0 $ ) and magnetohydrodynamic ( mhd, $ \ alpha \ to0 $ ) motions and the $ o ( \ alpha ) $ hall - term effect in terms of the jacobi equation and the riemannian sectional curvature tensor are presented, where $ \ alpha $ represents the hall - term strength parameter. it is very interesting that the sectional curvatures of the mhd and hmhd systems between two gev modes were found to take both the positive ( stable ) and negative ( unstable ) values, while that of the hd system between two complex helical waves was observed to be negative definite. moreover, for the mhd case, negative sectional curvatures were found to occur only when mode interaction was " local, " i. e., the wavenumber moduli of the main flow ( say $ p $ ) and perturbation ( say $ k $ ) were close to each other ( $ k \ approx p $ ). however, in the nonlocal limit ( $ k \ gg p $ or $ k \ ll p $ ), the sectional curvatures were always positive. this result leads to the conjecture that the mhd interactions mainly excite wavy or non - growing motions ; however, some local interactions cause dynamical instability that leads to chaotic or turbulent plasma motions. additionally, it was found that the tendencies of the $ o ( \ alpha ) $ effects are opposite between the ion cyclotron and whistler modes. comparison with energy - casimir method is discussed.
|
arxiv:1608.05154
|
we present new, sensitive hi observations of the sculptor group galaxy ngc 55 with the australia telescope compact array. we achieve a 5 sigma hi column density sensitivity of 10 ^ 19 cm ^ - 2 over a spectral channel width of 8 km / s for emission filling the 158 " x 84 " synthesised beam. our observations reveal for the first time the full extent of the hi disc of ngc 55 at this sensitivity and at a moderately high spatial resolution of about 1 kpc. the hi disc of ngc 55 appears to be distorted on all scales. there is a strong east - west asymmetry in the column density distribution along the major axis, suggesting that the disc is under the influence of ram - pressure forces. we also find evidence of streaming motions of the gas along the bar of ngc 55. the fitting of tilted rings to the velocity field reveals a strong warping of the outer gas disc which could be the result of tidal interaction with either ngc 300 or a smaller satellite galaxy. finally, we find a large number of distinct clumps and spurs across the entire disc, indicating that internal or external processes, such as satellite accretion or gas outflows, have stirred up the gas disc. we also detect several isolated hi clouds within about 20 kpc projected distance from ngc 55. their dynamical properties and apparent concentration around ngc 55 suggest that most of the clouds are forming a circum - galactic population similar to the high - velocity clouds of the milky way and m31, although two of the clouds could be foreground objects and part of the magellanic stream. while it is difficult to determine the origin of these clouds, our data seem to favour either tidal stripping or gas outflows as the source of the gas.
|
arxiv:1307.2962
|
we report our ( hpqcd ) progress on the calculation of the hadronic vacuum polarisation contribution to the anomalous magnetic moment of muon. in this article we discuss the calculations for the light ( up / down ) quark connected contribution using our method described in phys. rev. d89 ( 2014 ) 11, 114501 and give an estimate for the disconnected contribution. our calculation has been carried out on milc collaboration ' s $ n _ f = 2 + 1 + 1 $ hisq ensembles at multiple values of the lattice spacing, multiple volumes and multiple light sea quark masses ( including physical pion mass configurations ).
|
arxiv:1511.05870
|
pks 2155 - 304 is one of the brightest blazars in the southern hemisphere, monitored with h. e. s. s. since the first light of the experiment. here we report multiwavelength monitoring observations collected during the period of 2015 - 2016 with h. e. s. s., fermi - lat, swift - xrt, swift - uvot, and atom. two years of multiwavelength data with very good temporal coverage allowed us to characterize the broadband emission observed from the region of pks 2155 - 304 and to study potential multifrequency correlations. during the monitoring period, pks 2155 - 304 revealed complex multiwavelength variability with two outbursts characterized by completely different multiband properties. the 2015 activity of the blazar is characterized by a flare observed at all wavelengths studied. the broadband emission observed during the outburst is well correlated without any time lags. contrary to 2015, in 2016 only an orphan outburst at optical and ultraviolet wavelengths was observed. such orphan activity is reported for the first time for the blazar pks 2155 - 304.
|
arxiv:1908.01232
|
we introduce a novel training principle for probabilistic models that is an alternative to maximum likelihood. the proposed generative stochastic networks ( gsn ) framework is based on learning the transition operator of a markov chain whose stationary distribution estimates the data distribution. because the transition distribution is a conditional distribution generally involving a small move, it has fewer dominant modes, being unimodal in the limit of small moves. thus, it is easier to learn, more like learning to perform supervised function approximation, with gradients that can be obtained by back - propagation. the theorems provided here generalize recent work on the probabilistic interpretation of denoising auto - encoders and provide an interesting justification for dependency networks and generalized pseudolikelihood ( along with defining an appropriate joint distribution and sampling mechanism, even when the conditionals are not consistent ). we study how gsns can be used with missing inputs and can be used to sample subsets of variables given the rest. successful experiments are conducted, validating these theoretical results, on two image datasets and with a particular architecture that mimics the deep boltzmann machine gibbs sampler but allows training to proceed with backprop, without the need for layerwise pretraining.
|
arxiv:1503.05571
|
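the gsn entry above ( arxiv:1503.05571 ) learns the transition operator of a markov chain whose stationary distribution estimates the data distribution; the toy chain below replaces the learned operator with a hand - made corrupt - then - denoise step ( nudging toward the data mean ) purely to illustrate sampling by iterating a transition operator.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, size=5000)       # toy 1-d "data distribution"

def transition(x, noise=0.5, step=0.3):
    """one corrupt-and-denoise step; a trained gsn would denoise with a neural net,
    here we simply nudge toward the empirical data mean."""
    corrupted = x + rng.normal(0.0, noise, size=x.shape)
    return corrupted + step * (data.mean() - corrupted)

samples = rng.normal(0.0, 1.0, size=1000)    # arbitrary initial state
for _ in range(200):                         # iterate the chain toward its stationary distribution
    samples = transition(samples)

print(samples.mean())                        # drifts toward the data mean (~3.0)
```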
we study supermassive black holes ( bhs ) in merging galaxies, using a suite of hydrodynamical simulations with very high spatial ( ~ 10 pc ) and temporal ( ~ 1 myr ) resolution, where we vary the initial mass ratio, the orbital configuration, and the gas fraction. ( i ) we address the question of when and why, during a merger, increased bh accretion occurs, quantifying gas inflows and bh accretion rates. ( ii ) we also quantify the relative effectiveness in inducing agn activity of merger - related versus secular - related causes, by studying different stages of the encounter : the stochastic ( or early ) stage, the ( proper ) merger stage, and the remnant ( or late ) stage. ( iii ) we assess which galaxy mergers preferentially enhance bh accretion, finding that the initial mass ratio is the most important factor. ( iv ) we study the evolution of the bh masses, finding that the bh mass contrast tends to decrease in minor mergers and to increase in major mergers. this effect hints at the existence of a preferential range of mass ratios for bhs in the final pairing stages. ( v ) in both merging and dynamically quiescent galaxies, the gas accreted by the bh is not necessarily the gas with $ low $ angular momentum, but the gas that $ loses $ angular momentum.
|
arxiv:1409.0004
|
g } }, m _ { \ tilde { t } } \ gtrsim 1 $ tev. in addition, with nugm the lsp neutralino can coannihilate with gluino and / or stop for $ m _ { \ tilde { g } }, m _ { \ tilde { t } } \ approx m _ { \ tilde { \ chi } _ { 1 } ^ { 0 } } \ in [ 0. 9 - 1. 5 ] $ tev. the 100 tev fcc collider can probe the gluino masses up to about 6 tev with $ 36. 1 ~ fb ^ { - 1 } $ integrated luminosity. we also find that the decay $ \ tilde { g } \ rightarrow \ tilde { t } t $ can indirectly probe the stop mass up to about 4 tev.
|
arxiv:1910.01457
|