We investigate signatures of the minimal supersymmetric inverse seesaw model at the Large Hadron Collider (LHC) with three isolated leptons and large missing energy ($3\ell + \met$ or $2\ell + 1\tau + \met$, with $\ell = e, \mu$) in the final state. This signal originates in the decay of a chargino-neutralino ($\tilde\chi^\pm_1 \tilde\chi^0_2$) pair produced in $pp$ collisions. The two-body decay of the lighter chargino into a charged lepton and a singlet sneutrino has a characteristic decay pattern which is correlated with the observed large atmospheric neutrino mixing angle. This correlation is potentially observable at the LHC by looking at the ratios of cross sections of the trilepton + $\met$ channels in certain flavour-specific modes. We show that even after considering the leading Standard Model backgrounds, these final states can lead to a reasonable discovery significance at the LHC at both 7 TeV and 14 TeV center-of-mass energy.
arxiv:1201.1556
Maxwell electrodynamics, considered as a source of the classical Einstein field equations, leads to the singular isotropic Friedmann solutions. We show that this singular behavior does not occur for a class of nonlinear generalizations of the electromagnetic theory. A mathematical toy model is proposed for which the analytical nonsingular extension of FRW solutions is obtained.
arxiv:gr-qc/9806076
It is proved that the distributions of scaling limits of continuous time random walks (CTRWs) solve integro-differential equations akin to Fokker-Planck equations for diffusion processes. In contrast to previous such results, it is not assumed that the underlying process has absolutely continuous laws. Moreover, governing equations in the backward variables are derived. Three examples of anomalous diffusion processes illustrate the theory.
arxiv:1501.00533
Atomically thin boron nitride (BN) nanosheets have many properties desirable for surface-enhanced Raman spectroscopy (SERS). BN nanosheets have a strong surface adsorption capability towards airborne hydrocarbon and aromatic molecules. To maximize the adsorption area, and hence the SERS sensitivity, atomically thin BN nanosheet-covered gold nanoparticles have been prepared for the first time. When placed on top of metal nanoparticles, atomically thin BN nanosheets closely follow their contours so that the plasmonic hot spots are retained. Electrically insulating BN nanosheets also act as a barrier layer to eliminate metal-induced disturbance in SERS. Moreover, SERS substrates veiled by BN nanosheets show outstanding long-term reusability. As a result, the sensitivity, reproducibility and reusability of SERS substrates can be greatly improved. We also demonstrate that large BN nanosheets produced by chemical vapor deposition can be used to scale up the proposed SERS substrate for practical applications.
arxiv:1606.07183
We combine the collisional picture for open system dynamics and the control of the rate of decoherence provided by the quantum (anti-)Zeno effect to illustrate, and control, the temporal unfolding of the redundant encoding of information into a multipartite environment that is at the basis of quantum Darwinism. The rate at which such encoding occurs can be enhanced or suppressed by tuning the dynamical conditions of the system-environment interaction in a suitable and remarkably simple manner. This would help the design of a new generation of quantum experiments addressing the elusive phenomenology of quantum Darwinism and thus its characterization.
arxiv:1907.13157
We discuss the proper definition of the chiral crossover at finite temperature, based on Goldstone's theorem. Differently from the commonly used maximum change of the chiral condensate, we propose to define the crossover temperature by the Mott transition of pseudo-Goldstone bosons, which, by definition, guarantees Goldstone's theorem. We demonstrate this property analytically and numerically in the frame of a Pauli-Villars-regularized NJL model. In an external magnetic field, we find that the Mott transition temperature shows an inverse magnetic catalysis effect.
arxiv:1908.02851
We study the mechanism by which gravitational actions reproduce the trace anomalies of the holographically related conformal field theories. Two universal features emerge: a) the ratios of type B trace anomalies in any even dimension are independent of the gravitational action, being uniquely determined by the underlying algebraic structure; b) the normalization of the type A anomaly and the overall normalization of the type B anomalies are given by action-dependent expressions with the dimension dependence completely fixed.
arxiv:hep-th/0309064
The energy diffusion coefficients $D_n(E)$ ($n = 1, 2$) for a system of equal-mass particles moving self-consistently in an $N$-body realisation of a King model are computed from the probability per unit time, $P(E, \Delta E)$, that a star with initial energy $E$ will undergo an energy change $\Delta E$. In turn, $P$ is computed from the number of times during the simulation that a particle in a state of given energy undergoes a transition to another state. These particle states are defined directly from the time evolution of $E$ by identifying them with the events occurring between two local maxima in the $E(t)$ curve. If one assumes next that energy changes are uncorrelated between different states, one can use diffusion theory to compute $D_n(E)$. The simulations employ $N = 512, 2048, \ldots, 32768$ particles and are performed using an implementation of Aarseth's direct integrator NBODY1 on a massively parallel computer. The more than seven million transitions measured in the largest-$N$ simulation provide excellent statistics. The numerically determined $D(E)$'s are compared against their theoretical counterparts, which are computed from phase-space-averaged rates of energy change due to independent binary encounters. The overall agreement between them is impressive over most of the energy range, notwithstanding the very different types of approximation involved, giving considerable support to the valid usage of these theoretical expressions to simulate dynamical evolution in Fokker-Planck-type calculations.
arxiv:astro-ph/9511027
General partners (GPs) are sometimes paid on a deal-by-deal basis and other times on a whole-portfolio basis. When is one method of payment better than the other? I show that when assets (projects or firms) are highly correlated or when GPs have low reputation, whole-portfolio contracting is superior to deal-by-deal contracting. In this case, by bundling payouts together, whole-portfolio contracting enhances incentives for GPs to exert effort. Therefore, it is better suited to alleviate the moral hazard problem, which is stronger than the adverse selection problem in the case of high correlation of assets or low reputation of GPs. In contrast, for low correlation of assets or high reputation of GPs, information asymmetry concerns dominate and deal-by-deal contracts become optimal, as they can efficiently weed out bad projects one by one. These results shed light on recent empirical findings on the relationship between investors and venture capitalists.
arxiv:2104.07049
We give a simpler proof of a result of Hodkinson in the context of a blow-up-and-blur construction, arguing that the idea at its heart is similar to that adopted by Andréka et al. \cite{sayed}. The idea is to blow up a finite structure, replacing each 'colour or atom' by infinitely many, using blurs to represent the resulting term algebra; but the blurs are not enough to blur the structure of the finite structure in the complex algebra. A reverse of this process exists in the literature: it builds algebras with infinite blurs converging to one with finite blurs. This idea, due to Hirsch and Hodkinson, uses probabilistic methods of Erdős to construct a sequence of graphs with infinite chromatic number converging to one that is 2-colourable. This construction, which works for both relation and cylindric algebras, further shows that the class of strongly representable atom structures is not elementary. We generalize this result to any class of algebras between diagonal-free algebras and polyadic algebras with and without equality, and then further discuss possibilities for the infinite-dimensional case. Finally, we suggest a very plausible equivalence, namely: if $n > 2$ is finite and $\A \in \CA_n$ is countable and atomic, then $\Cm\At\A$ is representable if and only if $\A \in \Nr_n\CA_{\omega}$. We could prove one direction.
arxiv:1305.4532
We present a new strategy for multipulse control over decoherence. When a two-level system interacts with a reservoir characterized by a specific frequency, we find that the decoherence is effectively suppressed by synchronizing the pulse-train application with the dynamical motion of the reservoir.
arxiv:quant-ph/0303144
This note is a continuation of the work \cite{caoxiangyan2014}. We study the following quasilinear elliptic equations \[ -\Delta_{p} u - \frac{\mu}{|x|^{p}} |u|^{p-2} u = Q(x)\, |u|^{\frac{Np}{N-p}-2} u, \quad x \in \mathbb{R}^{N}, \] where $1 < p < N$, $0 \leq \mu < \left((N-p)/p\right)^{p}$ and $Q \in L^{\infty}(\mathbb{R}^{N})$. Optimal asymptotic estimates on the gradient of solutions are obtained both at the origin and at infinity.
arxiv:1502.03968
The origin and the implications of higher-dimensional effective operators in 4-dimensional theories are discussed in the non-supersymmetric and supersymmetric cases. Particular attention is paid to the role of general, derivative-dependent field redefinitions which one can employ to obtain a simpler form of the effective Lagrangian. An application is provided for the minimal supersymmetric standard model extended with dimension-five R-parity-conserving operators, to identify the minimal irreducible set of such operators after supersymmetry breaking. Among the physical consequences of this set of operators are the presence of corrections to the MSSM Higgs sector and the generation of "wrong"-Higgs Yukawa couplings and fermion-fermion-scalar-scalar interactions. These couplings have implications for supersymmetry searches at the LHC.
arxiv:0809.4598
We discuss the consequences of the possibility that Vassiliev invariants do not detect knot invertibility, as well as the fact that quantum Lie group invariants are known not to do so. On the other hand, finite group invariants, such as the set of homomorphisms from the knot group to $M_{11}$, can detect knot invertibility. For many natural classes of knot invariants, including Vassiliev invariants and quantum Lie group invariants, we can conclude that the invariants either distinguish all oriented knots, or there exist prime, unoriented knots which they do not distinguish.
arxiv:q-alg/9712048
Hamiltonian Monte Carlo is a prominent Markov chain Monte Carlo algorithm which employs symplectic integrators to sample from high-dimensional target distributions in many applications, such as statistical mechanics, Bayesian statistics and generative models. However, such distributions tend to have thin high-density regions, posing a significant challenge for symplectic integrators to maintain the small energy errors needed for a high acceptance probability. Instead, we propose a variant called Conservative Hamiltonian Monte Carlo, using $R$-reversible energy-preserving integrators to retain a high acceptance probability. We show our algorithm can achieve approximate stationarity with an error determined by the Jacobian approximation of the energy-preserving proposal map. Numerical evidence shows improved convergence and robustness over integration parameters on target distributions with thin high-density regions and in high dimensions. Moreover, a version of our algorithm can also be applied to target distributions without gradient information.
arxiv:2206.06901
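For contrast with the conservative variant described above, here is a minimal numpy sketch of standard Hamiltonian Monte Carlo with a symplectic leapfrog integrator, the baseline the abstract discusses. The 2-D Gaussian target and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def leapfrog(q, p, grad_U, step, n_steps):
    # Symplectic leapfrog integration of Hamiltonian dynamics.
    p = p - 0.5 * step * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + step * p
        p = p - step * grad_U(q)
    q = q + step * p
    p = p - 0.5 * step * grad_U(q)
    return q, p

def hmc(U, grad_U, q0, n_samples=2000, step=0.2, n_steps=20, seed=0):
    rng = np.random.default_rng(seed)
    q, samples = q0, []
    for _ in range(n_samples):
        p = rng.standard_normal(q.shape)      # resample momentum
        q_new, p_new = leapfrog(q, p, grad_U, step, n_steps)
        # Metropolis accept/reject on the total energy error dH.
        dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
        if np.log(rng.uniform()) < -dH:
            q = q_new
        samples.append(q)
    return np.array(samples)

# Illustrative target: standard 2-D Gaussian, U(q) = |q|^2 / 2.
U = lambda q: 0.5 * q @ q
grad_U = lambda q: q
samples = hmc(U, grad_U, np.zeros(2))
```

On a well-conditioned target like this, leapfrog keeps the energy error small and nearly all proposals are accepted; the paper's point is that this breaks down on thin high-density regions.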
Unlike most satellite galaxies in the Local Group, which have long lost their gaseous disks, the Magellanic Clouds are gas-rich dwarf galaxies most likely on their first pericentric passage, allowing us to study disk evolution on the smallest scales. The Magellanic Clouds show both disk destruction and (re-)creation. The Large Magellanic Cloud has a very extended stellar disk reaching to at least 15 kpc (10 radial scale lengths), while its gaseous disk is truncated at ~5 kpc, mainly due to its interaction with the hot gaseous halo of the Milky Way. The stellar disk of the Small Magellanic Cloud, on the other hand, has essentially been destroyed. The old stellar populations show no sign of rotation (being pressure supported) and have an irregular and elongated shape. The SMC has been severely disturbed by its close encounters with the LMC (the most recent only 200 Myr ago), which have also stripped out large quantities of gas, creating much of the Magellanic Stream and the Magellanic Bridge. Amazingly, the SMC has an intact, rotating HI disk, indicating either that the inner HI was preserved from destruction or, more likely, that the HI disk reformed quickly after the last close encounter with the LMC.
arxiv:1310.6742
The nature of dark matter and the properties of neutrinos are among the most pressing issues in contemporary particle physics. The dual-phase xenon time-projection chamber is the leading technology to cover the available parameter space for weakly interacting massive particles (WIMPs), while featuring extensive sensitivity to many alternative dark matter candidates. These detectors can also study neutrinos through neutrinoless double-beta decay and through a variety of astrophysical sources. A next-generation xenon-based detector will therefore be a true multi-purpose observatory to significantly advance particle physics, nuclear physics, astrophysics, solar physics, and cosmology. This review article presents the science cases for such a detector.
arxiv:2203.02309
Rhodonea curves are classical planar curves in the unit disk with the characteristic shape of a rose. In this work, we use point samples along such rose curves as node sets for a novel spectral interpolation scheme on the disk. By deriving a discrete orthogonality structure on these rhodonea nodes, we show that the spectral interpolation problem is unisolvent. The underlying interpolation space is generated by a parity-modified Chebyshev-Fourier basis on the disk. This allows us to compute the spectral interpolant in an efficient way. Properties such as continuity, convergence and numerical condition of the scheme depend on the spectral structure of the interpolation space. For rectangular spectral index sets, we show that the interpolant is continuous at the center, that the Lebesgue constant grows logarithmically, and that the scheme converges fast if the function under consideration is smooth. Finally, we derive a Clenshaw-Curtis quadrature rule using function evaluations at the rhodonea nodes and conduct some numerical experiments to compare different parameters of the scheme.
arxiv:1812.00437
Using an atomic force microscope, we have created nanotube junctions such as buckles and crossings within individual single-wall metallic carbon nanotubes connected to metallic electrodes. The electronic transport properties of these manipulated structures show that they form electronic tunnel junctions. The conductance shows power-law behavior as a function of bias voltage and temperature, which can be well modeled by a Luttinger liquid model for tunneling between two nanotube segments separated by the manipulated junction.
arxiv:cond-mat/0009055
Classical rich-get-richer models have found much success in being able to broadly reproduce the statistics and dynamics of diverse real complex systems. These rich-get-richer models are based on classical urn models and unfold step by step in discrete time. Here, we consider a natural variation acting on a temporal continuum in the form of a partial differential equation (PDE). We first show that the continuum version of Herbert Simon's canonical preferential attachment model exhibits an identical size distribution. Relaxing Simon's assumption of a linear growth mechanism, we consider the case of an arbitrary growth kernel and find the general solution to the resultant PDE. We then extend the PDE to multiple spatial dimensions, again determining the general solution. Finally, we apply the model to the size and wealth distributions of firms. We obtain power-law scaling for both, concordant with simulations as well as observational data.
arxiv:1710.07580
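The discrete-time urn process underlying Simon's canonical model can be simulated in a few lines. This is an illustrative sketch of the classical step-by-step version that the abstract takes as its starting point, not the paper's continuum PDE; the parameter values are assumptions.

```python
import numpy as np

def simon_model(n_steps=20000, alpha=0.1, seed=1):
    """Simon's rich-get-richer urn process.

    Each step: with probability alpha start a new group of size 1;
    otherwise enlarge an existing group chosen with probability
    proportional to its current size.
    """
    rng = np.random.default_rng(seed)
    group_of = [0]   # group_of[i] = group label of element i
    n_groups = 1
    for _ in range(n_steps):
        if rng.uniform() < alpha:
            group_of.append(n_groups)   # innovation: new group
            n_groups += 1
        else:
            # Copying the group of a uniformly random past element is
            # equivalent to choosing a group with probability ∝ size.
            group_of.append(group_of[rng.integers(len(group_of))])
    return np.bincount(group_of)        # group sizes

sizes = simon_model()
```

The resulting size distribution is heavy-tailed (a Yule-Simon power law with exponent controlled by alpha), which is what the continuum formulation reproduces.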
Model counting, or counting the satisfying assignments of a Boolean formula, is a fundamental problem with diverse applications. Given the #P-hardness of the problem, developing algorithms for approximate counting is an important research area. Building on the practical success of SAT solvers, the focus has recently shifted from theory to practical implementations of approximate counting algorithms. This has brought into focus new challenges, such as the design of auditable approximate counters that not only provide an approximation of the model count, but also a certificate that a verifier with limited computational power can use to check whether the count is indeed within the promised bounds of approximation. Towards generating certificates, we start by examining the best-known deterministic approximate counting algorithm, which uses polynomially many calls to a $\Sigma_2^p$ oracle. We show that this can be audited via a $\Sigma_2^p$ oracle with the query constructed over $n^2 \log^2 n$ variables, where the original formula has $n$ variables. Since $n$ is often large, we ask whether the number of variables in the certificate can be reduced, a crucial question for potential implementation. We show that this is indeed possible, with a tradeoff in the counting algorithm's complexity. Specifically, we develop new deterministic approximate counting algorithms that invoke a $\Sigma_3^p$ oracle, but can be certified using a $\Sigma_2^p$ oracle with certificates on far fewer variables: our final algorithm uses only $n \log n$ variables. Our study demonstrates that one can simplify auditing significantly if the counting algorithm is allowed access to a slightly more powerful oracle. This shows for the first time how audit complexity can be traded for the complexity of approximate counting.
arxiv:2312.12362
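As a baseline for what the approximate algorithms above avoid, here is a brute-force exact model counter by enumeration. It is exponential in the number of variables and therefore only usable on tiny formulas; the DIMACS-style clause encoding is an assumption for illustration.

```python
from itertools import product

def model_count(cnf, n_vars):
    """Count satisfying assignments of a CNF formula by enumeration.

    `cnf` is a list of clauses; each clause is a list of non-zero
    integers, where k denotes variable k and -k its negation.
    Runs in O(2^n_vars) time -- the #P-hard count that approximate
    counters estimate with oracle calls instead.
    """
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in cnf):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3) over variables x1..x3:
count = model_count([[1, 2], [-1, 3]], 3)
```

Here the formula has 4 models: x1 true forces x3 (x2 free), and x1 false forces x2 (x3 free).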
A clear understanding of where humans move in a scenario, their usual paths and speeds, and where they stop is very important for different applications, such as mobility studies in urban areas or robot navigation tasks within human-populated environments. In this article, we propose a neural architecture based on Vision Transformers (ViTs) to provide this information. This solution can arguably capture spatial correlations more effectively than convolutional neural networks (CNNs). In the paper, we describe the methodology and the proposed neural architecture and present experimental results on a standard dataset. We show that the proposed ViT architecture improves the metrics compared to a CNN-based method.
arxiv:2501.18543
Although music is typically multi-label, many works have studied hierarchical music tagging in simplified settings such as single-label data. Moreover, a framework to describe the various joint training methods under the multi-label setting has been lacking. To discuss these topics, we introduce the hierarchical multi-label music instrument classification task. The task provides a realistic setting where multi-instrument real music data is assumed. Various hierarchical methods that jointly train a DNN are summarized and explored in the context of the fusion of deep learning and conventional techniques. For effective joint training in the multi-label setting, we propose two methods to model the connection between fine- and coarse-level tags: one uses rule-based grouped max-pooling, and the other uses an attention mechanism obtained in a data-driven manner. Our evaluation reveals that the proposed methods have advantages over the method without joint training. In addition, the decision procedure within the proposed methods can be interpreted by visualizing attention maps or referring to the fixed rules.
arxiv:2302.08136
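The rule-based grouped max-pooling connection between fine- and coarse-level tags can be sketched as follows. The instrument taxonomy, group names, and probabilities here are invented for illustration and are not the paper's taxonomy.

```python
import numpy as np

# Hypothetical fine-level instrument tags and a rule-based grouping
# into coarse instrument families.
FINE = ["violin", "cello", "flute", "clarinet", "piano"]
GROUPS = {"strings": ["violin", "cello"],
          "winds": ["flute", "clarinet"],
          "keyboards": ["piano"]}

def coarse_from_fine(fine_probs):
    """Rule-based grouped max-pooling: a coarse (family) tag is as
    likely as its most likely member instrument. Works per clip in
    the multi-label setting, where several tags can be active."""
    idx = {name: i for i, name in enumerate(FINE)}
    return {family: max(fine_probs[idx[m]] for m in members)
            for family, members in GROUPS.items()}

# A clip with confident violin and piano predictions:
probs = np.array([0.9, 0.2, 0.1, 0.05, 0.8])
coarse = coarse_from_fine(probs)
```

The attention-based alternative mentioned in the abstract would replace the fixed max over each group with learned weights, trading interpretability of the rule for flexibility.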
We consider the Lugiato-Lefever (LL) model of optical fibers. We construct a two-parameter family of steady state solutions, i.e. Kerr frequency combs, for small pumping parameter $h > 0$ and the correspondingly (and necessarily) small detuning parameter $\alpha > 0$. These are $O(1)$ waves, as they are constructed as bifurcations from the standard cnoidal solutions of the cubic NLS. We identify the spectrally stable ones, and more precisely, we show that the spectrum of the linearized operator contains the eigenvalues $0, -2\alpha$, while the rest of it is a subset of $\{\mu : \Re\mu = -\alpha\}$. This is in line with the expectations for effectively damped Hamiltonian systems, such as the LL model.
arxiv:1806.04821
High-fidelity quantum gate operations are essential for achieving scalable quantum circuits. In spin qubit quantum computing systems, the metallic gates and antennas which are necessary for qubit operation, initialization, and readout also cause detriments by enhancing fluctuations of electromagnetic fields. Therefore, evanescent-wave Johnson noise (EWJN) caused by thermal and vacuum fluctuations becomes an important unmitigated noise, which induces the decay of spin qubits and limits the quantum gate operation fidelity. Here, we first develop a quantum electrodynamics theory of EWJN. We then propose a numerical technique based on volume integral equations to quantify the EWJN strength in the vicinity of nanofabricated metallic gates with arbitrary geometry. We study the limits to two-spin-qubit gate fidelity from EWJN-induced relaxation processes in two experimentally relevant quantum computing platforms: (a) the silicon quantum dot system and (b) NV centers in diamond. Finally, we introduce the Lindbladian engineering method to optimize the control pulse sequence design and show its enhanced performance over Hamiltonian engineering in mitigating the influence of thermal and vacuum fluctuations. Our work leverages advances in computational electromagnetics, fluctuational electrodynamics and open quantum systems to suppress the effects of thermal and vacuum fluctuations and reach the limits of two-spin-qubit gate fidelity.
arxiv:2207.09441
In the last three decades, several constructions of quantum error-correcting codes have been presented in the literature. Among these codes are the asymmetric ones, i.e., quantum codes whose $Z$-distance $d_z$ is different from their $X$-distance $d_x$. The topological quantum codes form an important class of quantum codes, of which the toric code, introduced by Kitaev, was the first family. After Kitaev's toric code, several authors focused attention on investigating its structure and on the construction of new families of topological quantum codes over Euclidean and hyperbolic surfaces. As a consequence of establishing the existence and construction of asymmetric topological quantum codes in Theorem \ref{main}, the main result of this paper, we introduce the class of hyperbolic asymmetric codes. Hence, families of Euclidean and hyperbolic asymmetric topological quantum codes are presented. An analysis of the asymptotic behavior of their distances $d_x$ and $d_z$ and encoding rates $k/n$ versus the genus of the compact orientable surface is provided, motivated by the significant difference between the asymmetric distances $d_x$ and $d_z$ when compared with the corresponding parameters of topological codes generated by other tessellations. This inherent unequal error protection is associated with the nontrivial homological cycle of the $\{p, q\}$ tessellation and its dual, where $p \neq q$ and $(p-2)(q-2) \ge 4$, and may be appropriately explored depending on the application. Three families of codes derived from the $\{7,3\}$, $\{5,4\}$, and $\{10,5\}$ tessellations are highlighted.
arxiv:2105.01144
Person re-identification aims to establish the correct identity correspondences of a person moving through a non-overlapping multi-camera installation. Recent advances based on deep learning models for this task mainly focus on supervised learning scenarios, where accurate annotations are assumed to be available for each setup. Annotating large-scale datasets for person re-identification is demanding and burdensome, which renders the deployment of such supervised approaches to real-world applications infeasible. Therefore, it is necessary to train models without explicit supervision in an autonomous manner. In this paper, we propose an elegant and practical clustering approach for unsupervised person re-identification based on cluster validity considerations. Concretely, we explore a fundamental concept in statistics, namely \emph{dispersion}, to achieve a robust clustering criterion. Dispersion reflects the compactness of a cluster when employed at the intra-cluster level and reveals the separation when measured at the inter-cluster level. With this insight, we design a novel dispersion-based clustering (DBC) approach which can discover the underlying patterns in data. This approach considers a wider context of sample-level pairwise relationships to achieve a robust cluster affinity assessment which handles the complications that may arise due to prevalent imbalanced data distributions. Additionally, our solution can automatically prioritize standalone data points and prevent inferior clustering. Our extensive experimental analysis on image and video re-identification benchmarks demonstrates that our method outperforms the state-of-the-art unsupervised methods by a significant margin. Code is available at https://github.com/gddingcs/dispersion-based-clustering.git.
arxiv:1906.01308
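The intra- versus inter-cluster dispersion idea can be sketched with plain numpy. This is an illustrative assumption of how pairwise-distance dispersion might be measured on feature vectors, not the paper's exact DBC algorithm; the synthetic clusters and thresholds are invented.

```python
import numpy as np

def dispersion(points):
    """Intra-cluster dispersion: mean pairwise Euclidean distance
    between members; small values indicate a compact cluster."""
    if len(points) < 2:
        return 0.0
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    n = len(points)
    return d.sum() / (n * (n - 1))

def inter_dispersion(a, b):
    """Inter-cluster dispersion: mean distance between members of two
    clusters; large values indicate well-separated clusters."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).mean()

rng = np.random.default_rng(0)
tight = rng.normal(0.0, 0.1, size=(20, 8))   # one compact cluster
far = rng.normal(5.0, 0.1, size=(20, 8))     # a distant compact cluster
```

A merge criterion along these lines would only accept a merge when the resulting intra-cluster dispersion stays small relative to the inter-cluster dispersion between the candidates.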
Obus-Wewers and Pop recently resolved a long-standing conjecture of Oort that says: every cyclic cover of a curve in characteristic $p > 0$ lifts to characteristic zero. Saïdi further asks whether these covers are also "liftable in towers". We prove that the answer to the equal-characteristic version of this question is affirmative. Our proof employs the Hurwitz tree technique and the tools developed by Obus-Wewers.
arxiv:2010.13614
Video descriptions are crucial for blind and low vision (BLV) users to access visual content. However, current artificial intelligence models for generating descriptions often fall short due to limitations in the quality of human annotations within training datasets, resulting in descriptions that do not fully meet BLV users' needs. To address this gap, we introduce VideoA11y, an approach that leverages multimodal large language models (MLLMs) and video accessibility guidelines to generate descriptions tailored for BLV individuals. Using this method, we have curated VideoA11y-40K, the largest and most comprehensive dataset of 40,000 videos described for BLV users. Rigorous experiments across 15 video categories, involving 347 sighted participants, 40 BLV participants, and seven professional describers, showed that VideoA11y descriptions outperform novice human annotations and are comparable to trained human annotations in clarity, accuracy, objectivity, descriptiveness, and user satisfaction. We evaluated models on VideoA11y-40K using both standard and custom metrics, demonstrating that MLLMs fine-tuned on this dataset produce high-quality accessible descriptions. Code and dataset are available at https://people-robots.github.io/videoa11y.
arxiv:2502.20480
Information retrieval (IR) is the task of obtaining pieces of data (such as documents) that are relevant to a particular query or need from a large repository of information. IR is a valuable component of several downstream natural language processing (NLP) tasks. Practically, IR is at the heart of many widely used technologies like search engines. While probabilistic ranking functions like the Okapi BM25 function have been utilized in IR systems since the 1970s, modern neural approaches pose certain advantages compared to their classical counterparts. In particular, the release of BERT (Bidirectional Encoder Representations from Transformers) has had a significant impact on the NLP community by demonstrating how the use of a masked language model trained on a large corpus of data can improve a variety of downstream NLP tasks, including sentence classification and passage re-ranking. IR systems are also important in the biomedical and clinical domains. Given the increasing amount of scientific literature in the biomedical domain, the ability to find answers to specific clinical queries from a repository of millions of articles is a matter of practical value to medical professionals. Moreover, there are domain-specific challenges, including handling clinical jargon and evaluating the similarity or relatedness of various medical symptoms when determining the relevance between a query and a sentence. This work presents contributions to several aspects of the biomedical semantic information retrieval domain. First, it introduces multi-perspective sentence relevance, a novel methodology for utilizing BERT-based models for contextual IR. The system is evaluated using the BioASQ biomedical IR challenge. Finally, practical contributions in the form of a live IR system for medics and a proposed challenge on the living systematic review clinical task are provided.
arxiv:2008.01526
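The Okapi BM25 ranking function mentioned above can be written compactly, using the usual term-frequency saturation parameter k1 and length-normalization parameter b. The tiny corpus is invented for illustration; real IR systems add tokenization, stemming and inverted indexes on top of this scoring core.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 scores of each whitespace-tokenized doc for a query."""
    N = len(docs)
    toks = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in toks) / N       # average doc length
    df = Counter()                              # document frequencies
    for t in toks:
        df.update(set(t))
    scores = []
    for t in toks:
        tf = Counter(t)
        s = 0.0
        for w in query.lower().split():
            if df[w] == 0:
                continue
            idf = math.log(1 + (N - df[w] + 0.5) / (df[w] + 0.5))
            norm = tf[w] + k1 * (1 - b + b * len(t) / avgdl)
            s += idf * tf[w] * (k1 + 1) / norm
        scores.append(s)
    return scores

docs = ["aspirin reduces fever and pain",
        "insulin regulates blood glucose",
        "fever in children often resolves without treatment"]
scores = bm25_scores("fever treatment", docs)
```

The third document matches both query terms and ranks first; the second shares no terms with the query and scores zero.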
Deep neural networks are known to be data-driven, and label noise can have a marked impact on model performance. Recent studies have shown great robustness in classic image recognition even under a high noise rate. In medical applications, learning from datasets with label noise is more challenging, since medical imaging datasets tend to have asymmetric (class-dependent) noise and suffer from high observer variability. In this paper, we systematically discuss and define the two common types of label noise in medical images: disagreement label noise from inconsistent expert opinions and single-target label noise from wrong diagnosis records. We then propose an uncertainty estimation-based framework to handle these two types of label noise in the medical image classification task. We design a dual uncertainty estimation approach to measure disagreement label noise and single-target label noise via direct uncertainty prediction and Monte Carlo dropout. A boosting-based curriculum training procedure is later introduced for robust learning. We demonstrate the effectiveness of our method by conducting extensive experiments on three different diseases: skin lesions, prostate cancer, and retinal diseases. We also release a large re-engineered database that consists of annotations from more than ten ophthalmologists, with an unbiased gold-standard dataset for evaluation and benchmarking.
arxiv:2103.00528
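Monte Carlo dropout, one of the two uncertainty signals mentioned above, can be sketched with plain numpy: keep dropout active at prediction time and use the spread across stochastic forward passes as a per-sample uncertainty estimate. The toy network, weights, and sizes here are illustrative assumptions, not the paper's model.

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, p_drop=0.5, T=200, seed=0):
    """Average T stochastic forward passes of a dropout network.

    Returns (mean prediction, standard deviation); the std across
    passes serves as a rough per-sample uncertainty estimate.
    """
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)                  # ReLU hidden layer
        mask = rng.uniform(size=h.shape) > p_drop    # random dropout mask
        h = h * mask / (1.0 - p_drop)                # inverted-dropout scaling
        z = h @ W2
        preds.append(1.0 / (1.0 + np.exp(-z)))       # sigmoid output score
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(42)
W1 = rng.standard_normal((4, 16)) * 0.5              # toy random weights
W2 = rng.standard_normal((16, 1)) * 0.5
mean, std = mc_dropout_predict(rng.standard_normal((3, 4)), W1, W2)
```

Samples whose std is large are the candidates a curriculum scheme would down-weight or defer as potentially noisily labeled.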
The exciting possibility of direct observation of QCD instantons in heavy-ion collisions has recently been proposed by Kharzeev. The underlying phenomenon, known as the chiral magnetic effect, may have been observed recently at RHIC, and a first-principles calculation is needed to confirm and understand the results. The chiral magnetic effect is thought to be visible in the symmetric phase, at temperatures above the QCD critical temperature, and in the presence of an external magnetic field. We report on first 2+1 flavor, domain wall fermion, QCD+QED dynamical simulations above the critical temperature, in fixed topological sectors, which are used to study the electric charge separation produced by the effect.
arxiv:0911.1348
the increased throughput brought by mimo technology relies on the knowledge of channel state information ( csi ) acquired in the base station ( bs ). to make the csi feedback overhead affordable for the evolution of mimo technology ( e. g., massive mimo and ultra - massive mimo ), deep learning ( dl ) is introduced to deal with the csi compression task. based on the separation principle in existing communication systems, dl based csi compression is used as source coding. however, this separate source - channel coding ( sscc ) scheme is inferior to the joint source - channel coding ( jscc ) scheme in the finite blocklength regime. in this paper, we propose a deep joint source - channel coding ( djscc ) based framework for the csi feedback task. in particular, the proposed method can simultaneously learn from the csi source and the wireless channel. instead of truncating csi via fourier transform in the delay domain in existing methods, we apply non - linear transform networks to compress the csi. furthermore, we adopt an snr adaption mechanism to deal with the wireless channel variations. the extensive experiments demonstrate the validity, adaptability, and generality of the proposed framework.
arxiv:2203.16005
nowadays, supervised deep learning techniques yield the best state-of-the-art prediction performance for a wide variety of computer vision tasks. however, such supervised techniques generally require a large amount of manually labeled training data. in the context of autonomous vehicle perception, this requirement is critical, as the distribution of sensor data can continuously change and include several unexpected variations. it turns out that a category of learning techniques, referred to as self-supervised learning (ssl), consists of replacing the manual labeling effort by an automatic labeling process. thanks to their ability to learn at application time and in varying environments, state-of-the-art ssl techniques provide a valid alternative to supervised learning for a variety of different tasks, including long-range traversable area segmentation, moving obstacle instance segmentation, long-term moving obstacle tracking, and depth map prediction. in this tutorial-style article, we present an overview and a general formalization of the concept of self-supervised learning (ssl) for autonomous vehicle perception. this formalization provides helpful guidelines for developing novel frameworks based on generic ssl principles. moreover, it enables us to point out significant challenges in the design of future ssl systems.
arxiv:1910.01636
with the rise of self-driving cars and connected vehicles, cars are equipped with various devices to assist drivers or support self-driving systems. undoubtedly, cars have become more intelligent as we can deploy more and more devices and software on them. accordingly, the security of assistance and self-driving systems in cars becomes a life-threatening issue, as smart cars can be invaded by malicious attacks that cause traffic accidents. currently, canonical machine learning and deep learning methods are extensively employed in car hacking detection. however, machine learning and deep learning methods can easily be overconfident and defeated by carefully designed adversarial examples. moreover, those methods cannot provide explanations for security engineers for further analysis. in this work, we investigated deep bayesian learning models to detect and analyze car hacking behaviors. bayesian learning methods can capture the uncertainty of the data and avoid overconfidence issues. moreover, bayesian models can provide more information to support the prediction results, which can help security engineers further identify the attacks. we have compared our model with deep learning models, and the results show the advantages of our proposed model. the code of this work is publicly available.
arxiv:2112.09333
we present a convolutional neural network for the classification of correlation responses obtained by correlation filters. the proposed approach can improve the accuracy of classification, as well as achieve invariance to the image classes and parameters.
arxiv:2004.09430
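as a minimal illustration of what the proposed network would classify, a correlation response can be computed by applying a correlation filter via the fft; the random template, image size, and offsets below are arbitrary choices for the sketch.

```python
import numpy as np

def correlation_response(image, template):
    """cross-correlation response of a filter over an image, computed via fft."""
    F = np.fft.fft2(image)
    H = np.fft.fft2(template, s=image.shape)   # zero-pad filter to image size
    return np.real(np.fft.ifft2(F * np.conj(H)))

# a matched template produces a response peak at the object's location
rng = np.random.default_rng(1)
template = rng.normal(size=(4, 4))
image = np.zeros((32, 32))
image[10:14, 5:9] = template                   # object placed at offset (10, 5)
resp = correlation_response(image, template)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)  # (10, 5)
```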
we show that for stochastic measures with freely independent increments, the partition - dependent stochastic measures of math. oa / 9903084 can be expressed purely in terms of the higher stochastic measures and the higher diagonal measures of the original.
arxiv:math/0102062
it seems natural to ask why the universe exists at all. modern physics suggests that the universe can exist all by itself as a self - contained system, without anything external to create or sustain it. but there might not be an absolute answer to why it exists. i argue that any attempt to account for the existence of something rather than nothing must ultimately bottom out in a set of brute facts ; the universe simply is, without ultimate cause or explanation.
arxiv:1802.02231
the widespread availability of consumer-grade 3d printers and learning resources online enables novices to self-train in remote settings. while troubleshooting plays an essential part in 3d printing, the process remains challenging for many remote novices even with the help of well-developed online sources, such as online troubleshooting archives and online community help. we conducted a formative study with 76 active 3d printing users to learn how remote novices leverage online resources in troubleshooting and what challenges they face. we found that remote novices cannot fully utilize online resources. for example, the online archives statically provide general information, making it hard to search and relate their unique cases to existing descriptions. online communities can potentially ease their struggles by providing more targeted suggestions, but helpers who can provide custom help are rather scarce, making it hard to obtain timely assistance. we propose 3dpfix, an interactive 3d troubleshooting system powered by a human-ai collaboration pipeline, designed to improve novices' 3d printing experiences and thus help them easily accumulate domain knowledge. 3dpfix supports automated diagnosis and solution-seeking, and was built upon shared dialogues about failure cases from q&a discourse accumulated in online communities. we leverage social annotations (i.e., comments) to build an annotated failure-image dataset for ai classifiers and to extract a solution pool. our summative study revealed that using 3dpfix helped participants spend significantly less effort diagnosing failures and find more accurate solutions than relying on their common practice. we also found that 3dpfix users learn 3d-printing domain-specific knowledge. we discuss the implications of leveraging community-driven data in developing future human-ai collaboration designs.
arxiv:2401.15877
three - dimensional spin models of the ising and xy universality classes are studied by a combination of high - temperature expansions and monte carlo simulations applied to improved hamiltonians. the critical exponents and the critical equation of state are determined to very high precision.
arxiv:hep-th/0012120
we outline the proof of a theorem of m. baker and a. tamagawa that gives a complete description of the torsion points on a modular curve embedded in its jacobian using the notion of an ` almost rational torsion point. '
arxiv:math/0305281
given a transverse knot $K$ in a three-dimensional contact manifold $(Y, \alpha)$, in [13] colin, ghiggini, honda and hutchings define a hat version of embedded contact homology for $K$, which we call $\widehat{ECK}(K, Y, \alpha)$, and conjecture that it is isomorphic to the knot floer homology $\widehat{HFK}(K, Y)$. we define here a full version $ECK(K, Y, \alpha)$ and generalise the definitions to the case of links. we then prove that, if $Y = S^3$, $ECK$ and $\widehat{ECK}$ categorify the (multivariable) alexander polynomial of knots and links, obtaining expressions analogous to those for knot and link floer homology in the plus and hat versions, respectively.
arxiv:1410.5081
the explorations of physical degrees of freedom with infinite dimensionalities, such as orbital angular momentum and frequency of light, have profoundly reshaped the landscape of modern optics with representative photonic functional devices including optical vortex emitters and frequency combs. in nanophotonics, whispering gallery mode microresonators naturally support orbital angular momentum of light and have been demonstrated as on - chip emitters of monochromatic optical vortices. on the other hand, whispering gallery mode microresonators serve as a highly efficient nonlinear optical platform for producing light at different frequencies - i. e., microcombs. here, we interlace the optical vortices and microcombs by demonstrating an optical vortex comb on an iii - v integrated nonlinear microresonator. the angular - grating - dressed nonlinear microring simultaneously emits spatiotemporal light springs consisting of 50 orbital angular momentum modes that are each spectrally addressed to the frequency components ( longitudinal whispering gallery modes ) of the generated microcomb. we further experimentally generate optical pulses with time - varying orbital angular momenta by carefully introducing a specific intermodal phase relation to spatiotemporal light springs. this work may immediately boost the development of integrated nonlinear / quantum photonics for exploring fundamental optical physics and advancing photonic quantum technology.
arxiv:2212.07641
the square of a graph is obtained by adding an edge joining every pair of vertices at distance two in the original graph. in particular, if $c$ is a hamiltonian cycle of a graph $g$, then the square of $c$ is called a hamiltonian square of $g$. in this paper, we characterize all possible forbidden pairs that imply the containment of a hamiltonian square in a 4-connected graph. the connectivity condition is necessary since, except for $k_3$ and $k_4$, the square of a cycle is always 4-connected.
arxiv:1412.0130
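the squaring operation itself is straightforward to compute; a small sketch, with an illustrative check that the square of the 5-cycle $c_5$ is the complete graph $k_5$:

```python
from itertools import combinations

def graph_square(n, edges):
    """square of a graph on vertices 0..n-1: join every pair at distance two."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    sq = {frozenset(e) for e in edges}
    for u, v in combinations(range(n), 2):
        if v not in adj[u] and adj[u] & adj[v]:   # distance exactly two
            sq.add(frozenset((u, v)))
    return {tuple(sorted(e)) for e in sq}

# the square of the 5-cycle c5 is the complete graph k5
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(len(graph_square(5, c5)))  # 10 edges = k5
```

note that antipodal pairs at distance three stay non-adjacent: the square of the 6-cycle has 12 edges, not 15.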
the collimation $c$ of a hadronic event in $e^+e^-$ annihilation is defined as the average of $\cos\theta$, $c = \langle \cos\theta \rangle$, where $\theta$ is the angle of each hadron measured from the thrust axis, and the average is over all the hadrons produced in an event. it is an infrared-stable event-shape parameter. $1 - \bar{c}$, the difference between unity and the average collimation at a given energy, is proportional to the anomalous dimension of the hadron multiplicity at leading order in mlla. its next-to-leading-order corrections are calculated.
arxiv:hep-ph/9705238
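numerically, the collimation of an event is a one-liner given the hadron momenta and the thrust axis. the sketch below assumes $\theta$ is taken to the nearer end of the (unoriented) thrust axis, hence $|\cos\theta|$, so that a perfectly pencil-like event gives $c = 1$; this convention is an assumption of the sketch.

```python
import numpy as np

def collimation(momenta, thrust_axis):
    """event collimation c = <|cos(theta)|>, theta measured from the thrust axis."""
    n = np.asarray(thrust_axis, float)
    n = n / np.linalg.norm(n)                        # unit thrust axis
    p = np.asarray(momenta, float)
    cos_theta = np.abs(p @ n) / np.linalg.norm(p, axis=1)
    return cos_theta.mean()

# two back-to-back hadrons along the axis: perfectly collimated, c = 1
print(collimation([[0, 0, 5.0], [0, 0, -3.0]], [0, 0, 1]))  # 1.0
```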
we study hyperbolic systems with multiplicities and smooth coefficients. in the case of non-analytic, smooth coefficients, we prove well-posedness in any gevrey class, and when the coefficients are analytic, we prove $c^\infty$ well-posedness. the proof is based on a transformation to block sylvester form, introduced by d'ancona and spagnolo in ref. 9, which increases the system size but does not change the eigenvalues. this reduction introduces lower-order terms for which appropriate levi-type conditions are found. these then translate into conditions on the original coefficient matrix. this paper can be considered a generalisation of ref. 12, where weakly hyperbolic higher-order equations with lower-order terms were considered.
arxiv:1603.03602
the properties of josephson devices are strongly affected by geometrical effects. a loop-shaped superconducting electrode tightly couples a long josephson tunnel junction with the surrounding electromagnetic field. due to fluxoid conservation, any change of the magnetic flux linked to the loop results in a variation of the shielding current circulating around the loop, which, in turn, affects the critical current of the josephson junction. this method allows the realization of a novel family of robust superconducting devices (not based on quantum interference) which can function as general-purpose magnetic sensors. the best performance is accomplished, without compromising the noise performance, by employing an in-line-type junction a few times longer than its josephson penetration length. the linear (rather than periodic) response to magnetic flux changes over a wide range is just one of its several advantages compared to the most sensitive magnetic detectors currently available, namely the superconducting quantum interference devices (squids). we also comment on the drawbacks of the proposed system and speculate on its noise properties.
arxiv:1203.6091
compositional learning, mastering the ability to combine basic concepts and construct more intricate ones, is crucial for human cognition, especially in human language comprehension and visual perception. this notion is tightly connected to generalization over unobserved situations. despite its integral role in intelligence, there is a lack of systematic theoretical and experimental research methodologies, making it difficult to analyze the compositional learning abilities of computational models. in this paper, we survey the literature on compositional learning of ai models and the connections made to cognitive studies. we identify abstract concepts of compositionality in cognitive and linguistic studies and connect these to the computational challenges faced by language and vision models in compositional reasoning. we overview the formal definitions, tasks, evaluation benchmarks, various computational models, and theoretical findings. our primary focus is on linguistic benchmarks and combining language and vision, though there is a large amount of research on compositional concept learning in the computer vision community alone. we cover modern studies on large language models to provide a deeper understanding of the cutting - edge compositional capabilities exhibited by state - of - the - art ai models and pinpoint important directions for future research.
arxiv:2406.08787
recent in-ide ai coding assistant tools (acats) like github copilot have significantly impacted developers' coding habits. while some studies have examined their effectiveness, there is a lack of in-depth investigation into the actual assistance process. to bridge this gap, we simulate real development scenarios encompassing three typical types of software development tasks and recruit 27 computer science students to investigate their behavior with three popular acats. our goal is to comprehensively assess the effectiveness of acats, explore characteristics of the recommended code, identify reasons for modifications, and understand users' challenges and expectations. to facilitate the study, we develop an experimental platform that includes a data collection plugin for the vscode ide and provides functions for screen recording, code evaluation, and automatic generation of personalized interview and survey questions. through analysis of the collected data, we find that acats generally enhance task completion rates, reduce time, improve code quality, and increase self-perceived productivity. however, the improvement is influenced by both the nature of the coding tasks and users' experience level. notably, for experienced participants, the use of acats may even increase completion time. we observe that "edited line completion" is the most frequently recommended mode, while "comments completion" and "string completion" have the lowest acceptance rates. the primary reasons for modifying recommended code are disparities between output formats and requirements, flawed logic, and inconsistent code styles. in terms of challenges and expectations, beyond functionality and performance, participants are also concerned with optimization of service access and help documentation. our study provides valuable insights into the effectiveness and usability of acats, informing further improvements in their design and implementation.
arxiv:2404.12000
semiconductor point contacts can be a useful tool for producing spin - polarized currents in the presence of spin - orbit ( so ) interaction. neither magnetic fields nor magnetic materials are required. by numerical studies, we show that ( i ) the conductance is quantized in units of 2e ^ 2 / h unless the so interaction is too strong, ( ii ) the current is spin - polarized in the transverse direction, and ( iii ) a spin polarization of more than 50 % can be realized with experimentally accessible values of the so interaction strength. the spin - polarization ratio is determined by the adiabaticity of the transition between subbands of different spins during the transport through the point contacts.
arxiv:cond-mat/0504105
by employing gauss decomposition, we establish a direct and explicit isomorphism between the twisted $ q $ - yangians ( in r - matrix presentation ) and affine $ \ imath $ quantum groups ( in current presentation ) associated to symmetric pair of type ai introduced by molev - ragoucy - sorba and lu - wang, respectively. as a corollary, we obtain a pbw type basis for affine $ \ imath $ quantum groups of type ai.
arxiv:2308.12484
we uncover a kawai - lewellen - tye ( klt ) - type factorization of closed string amplitudes into open string amplitudes for closed string states carrying winding and momentum in toroidal compactifications. the winding and momentum closed string quantum numbers map respectively to the integer and fractional winding quantum numbers of open strings ending on a d - brane array localized in the compactified directions. the closed string amplitudes factorize into products of open string scattering amplitudes with the open strings ending on a d - brane configuration determined by closed string data.
arxiv:2103.05013
a universal secondary relaxation process, known as the johari - goldstein ( jg ) $ \ beta $ - relaxation process, appears in glass formers. it involves all parts of the molecule and is particularly important in glassy systems because of its very close relationship with the $ \ alpha $ - relaxation process. however, the absence of a j - g $ \ beta $ - relaxation mode in colloidal glasses raises questions regarding its universality. in the present work, we study the microscopic relaxation processes in laponite suspensions, a model soft glassy material, by dynamic light scattering ( dls ) experiments. $ \ alpha $ and $ \ beta $ - relaxation timescales are estimated from the autocorrelation functions obtained by dls measurements for laponite suspensions with different concentrations, salt concentrations and temperatures. our experimental results suggest that the $ \ beta $ - relaxation process in laponite suspensions involves all parts of the constituent laponite particle. the ergodicity breaking time is also seen to be correlated with the characteristic time of the $ \ beta $ - relaxation process for all laponite concentrations, salt concentrations and temperatures. the width of the primary relaxation process is observed to be correlated with the secondary relaxation time. the secondary relaxation time is also very sensitive to the concentration of laponite. we measure primitive relaxation timescales from the $ \ alpha $ - relaxation time and the stretching exponent ( $ \ beta $ ) by applying the coupling model for highly correlated systems. the order of magnitude of the primitive relaxation time is very close to the secondary relaxation time. these observations indicate the presence of a j - g $ \ beta $ - relaxation mode for soft colloidal suspensions of laponite.
arxiv:1508.05758
we consider distributed control of double - integrator networks, where agents are subject to stochastic disturbances. we study performance of such networks in terms of coherence, defined through an h2 norm metric that represents the variance of nodal state fluctuations. specifically, we address known performance limitations of the standard consensus protocol, which cause this variance to scale unboundedly with network size for a large class of networks. we propose distributed proportional integral ( pi ) and proportional derivative ( pd ) controllers that relax these limitations and achieve bounded variance, in cases where agents can access an absolute measurement of one of their states. this case applies to, for example, frequency control of power networks and vehicular formation control with limited sensing. we discuss optimal tuning of the controllers with respect to network coherence and demonstrate our results in simulations.
arxiv:1703.03691
dynamic input - output models are standard tools for understanding inter - industry dependencies and how economies respond to shocks like disasters and pandemics. however, traditional approaches often assume fixed prices, limiting their ability to capture realistic economic behavior. here, we introduce an adaptive extension to dynamic input - output recovery models where producers respond to shocks through simultaneous price and quantity adjustments. our framework preserves the economic constraints of the leontief input - output model while converging towards equilibrium configurations based on sector - specific behavioral parameters. when applied to input - output data, the model allows us to compute behavioral metrics indicating whether specific sectors predominantly favor price or quantity adjustments. using the world input - output database, we identify strong, consistent regional and sector - specific behavioral patterns. these findings provide insights into how different regions employ distinct strategies to manage shocks, thereby influencing economic resilience and recovery dynamics.
arxiv:2505.10146
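the leontief balance underlying such models can be sketched in a few lines; the 3-sector technical-coefficient matrix and final-demand vector below are made-up numbers for illustration, not data from the world input-output database.

```python
import numpy as np

# hypothetical technical-coefficient matrix a, where a[i, j] is the input
# from sector i needed per unit of sector j's output
a = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.2],
              [0.0, 0.2, 0.1]])

# hypothetical final-demand vector d
d = np.array([10.0, 5.0, 8.0])

# gross output x solving the leontief balance x = a x + d,
# i.e. x = (i - a)^{-1} d
x = np.linalg.solve(np.eye(3) - a, d)
```

the adaptive extension described above then lets prices and quantities adjust dynamically around this equilibrium rather than holding prices fixed.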
a laser source delivering ultrashort pulses ( 50 - 100 fs ) tunable from 820 nm to 1200 nm has been developed. it is based on the filtering of a continuum in the fourier plane of a zero dispersion line without a phase compensator. we have also numerically investigated the impact of the residual spectral phase in order to guarantee ultrashort pulses.
arxiv:1807.06292
engaging all content providers, including newcomers and minority demographic groups, is crucial for online platforms to keep growing and working. hence, while building recommendation services, the interests of those providers should be valued. in this paper, we consider providers as grouped based on a common characteristic, in settings in which certain provider groups have low representation of items in the catalog and, thus, in the user interactions. we then envision a scenario wherein platform owners seek to control the degree of exposure given to such groups in the recommendation process. to support this scenario, we rely on disparate exposure measures that characterize the gap between the share of recommendations given to groups and the target level of exposure pursued by the platform owners. we then propose a re-ranking procedure that ensures desired levels of exposure are met. experiments show that, while supporting certain groups of providers by granting them the target exposure, beyond-accuracy objectives experience significant gains with negligible impact on recommendation utility.
arxiv:2204.11243
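a minimal greedy variant of such a re-ranking procedure (a sketch, not the authors' exact algorithm) can be written as follows: at each rank, it picks from the protected group whenever that group's running share falls below the target, and otherwise takes the most relevant remaining item.

```python
def rerank(items, target_share, k):
    """greedily build a top-k list whose 'minority' share tracks target_share.

    items: iterable of (score, group) pairs, group in {'minority', 'majority'}.
    """
    pools = {g: sorted((it for it in items if it[1] == g), reverse=True)
             for g in ('minority', 'majority')}
    ranked = []
    for i in range(1, k + 1):
        # how far the minority share at rank i would lag behind the target
        deficit = target_share * i - sum(g == 'minority' for _, g in ranked)
        g = 'minority' if deficit >= 0.5 else 'majority'
        if not pools[g]:                       # fall back if a pool runs out
            g = 'minority' if g == 'majority' else 'majority'
        ranked.append(pools[g].pop(0))
    return ranked
```

with a target share of 0.5 and four slots, two minority items end up in the list even when their relevance scores are the lowest in the candidate pool.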
we give necessary and sufficient conditions for post - critically finite polynomials to have persistent bad reduction at a given prime. we also answer in the negative a pair of questions posed by silverman about conservative polynomials. our proofs rely on conservative dynamical belyi polynomials as exemplars of pcf ( resp. conservative ) maps.
arxiv:2109.03339
an explicit upper bound on the tail probabilities for the normalized rademacher sums is given. this bound, which is best possible in a certain sense, is asymptotically equivalent to the corresponding tail probability of the standard normal distribution, thus affirming a longstanding conjecture by efron. applications to sums of general centered uniformly bounded independent random variables and to the student test are presented.
arxiv:1007.2137
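the closeness of the normalized rademacher tail to the standard normal tail is easy to probe numerically; a crude monte carlo sketch, with sample sizes chosen for speed rather than accuracy:

```python
import math
import random

def rademacher_tail(n, x, trials=20_000, seed=7):
    """monte carlo estimate of p(s_n / sqrt(n) > x) for rademacher signs."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n))
        hits += s / math.sqrt(n) > x
    return hits / trials

def normal_tail(x):
    """standard normal tail p(z > x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

print(rademacher_tail(50, 1.0), normal_tail(1.0))  # both near 0.16
```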
it has been suggested that adiabatic energy losses are not effective in stationary jets, where the jet expansion is not associated with net work. here, we study jet solutions without them, assuming that adiabatic losses are balanced by electron reacceleration. the absence of effective adiabatic losses makes electron advection along the jet an important process, and we solve the electron kinetic equation including that process. we find analytical solutions for the case of conical jets with advection and synchrotron losses. we show that accounting for adiabatic losses in the case of sources showing soft partially self - absorbed spectra with the spectral index of $ \ alpha < 0 $ in the radio - to - ir regime requires deposition of large amounts of energy at large distances in the jet. on the other hand, such spectra can be accounted for by advection of electrons in the jet. we compare our results to the quiescent spectrum of the blazar mrk 421. we find its soft radio - ir spectrum can be fitted either by a model without adiabatic losses and advection of electrons or by one with adiabatic losses, but the latter requires injection of a very large power at large distances.
arxiv:1812.11410
given a quasi-compact, quasi-separated scheme $x$, a bijection between the tensor localizing subcategories of finite type in qcoh($x$) and the set of all subsets $y \subseteq x$ of the form $y = \bigcup_{i \in \omega} y_i$, with $x \setminus y_i$ quasi-compact and open for all $i \in \omega$, is established. as an application, an isomorphism of ringed spaces $(x, \mathcal{o}_x) \to (\mathrm{spec}(\mathrm{qcoh}(x)), \mathcal{o}_{\mathrm{qcoh}(x)})$ is constructed, where $(\mathrm{spec}(\mathrm{qcoh}(x)), \mathcal{o}_{\mathrm{qcoh}(x)})$ is a ringed space associated to the lattice of tensor localizing subcategories of finite type. also, a bijective correspondence between the tensor thick subcategories of perfect complexes $\mathrm{perf}(x)$ and the tensor localizing subcategories of finite type in qcoh($x$) is established.
arxiv:0708.1622
traditional training of deep classifiers yields overconfident models that are not reliable under dataset shift. we propose a bayesian framework to obtain reliable uncertainty estimates for deep classifiers. our approach consists of a plug - in " generator " used to augment the data with an additional class of points that lie on the boundary of the training data, followed by bayesian inference on top of features that are trained to distinguish these " out - of - distribution " points.
arxiv:2007.06096
gas clouds under the influence of gravitation in thermodynamic equilibrium cannot be isothermal due to the dufour effect, the energy flux induced by density gradients. in galaxy clusters this effect may be responsible for most of the " cooling flows " instead of radiative cooling of the gas. observations from xmm - newton and chandra satellites agree well with model calculations, assuming equilibrium between energy flux induced by temperature and density gradients, but without radiation loss.
arxiv:astro-ph/0401064
we argue that the recent result of da rocha and rodrigues that, in two-dimensional spacetime, the lagrangian of tetrad gravity is an exact differential [1], despite the authors' claim, neither proves the jackiw conjecture [2] nor contradicts the conclusion of [3]. this demonstrates that the tetrad formulation is different from the metric formulation of the einstein-hilbert action.
arxiv:hep-th/0602042
large language models (llms) like gpt-4, deepseek-r1, and reasonflux have shown significant improvements in various reasoning tasks. however, smaller llms still struggle with complex mathematical reasoning because they fail to effectively identify and correct reasoning errors. recent reflection-based methods aim to address these issues by enabling self-reflection and self-correction, but they still face challenges in independently detecting errors in their reasoning steps. to overcome these limitations, we propose supercorrect, a novel two-stage framework that uses a large teacher model to supervise and correct both the reasoning and reflection processes of a smaller student model. in the first stage, we extract hierarchical high-level and detailed thought templates from the teacher model to guide the student model in eliciting more fine-grained reasoning thoughts. in the second stage, we introduce cross-model collaborative direct preference optimization (dpo) to enhance the self-correction abilities of the student model by following the teacher's correction traces during training. this cross-model dpo approach teaches the student model to effectively locate and resolve erroneous thoughts with error-driven insights from the teacher model, breaking the bottleneck of its thoughts and acquiring new skills and knowledge to tackle challenging problems. extensive experiments consistently demonstrate our superiority over previous methods. notably, our supercorrect-7b model significantly surpasses the powerful deepseekmath-7b by 7.8%/5.3% and qwen2.5-math-7b by 15.1%/6.3% on the math/gsm8k benchmarks, achieving new sota performance among all 7b models. code: https://github.com/yangling0818/supercorrect-llm
arxiv:2410.09008
with the exponential increase of the number of devices in the communication ecosystem toward the upcoming sixth generation ( 6g ) of wireless networks, more enabling technologies and potential wireless architectures are necessary to fulfill the networking requirements of high throughput, massive connectivity, ultra reliability, and heterogeneous quality of service ( qos ). in this work, we consider an uplink network consisting of a primary user ( pu ) and a secondary user ( su ) and, by integrating the concept of cognitive radio and multiple access, two protocols based on rate - splitting multiple access and non - orthogonal multiple access with successive interference cancellation are investigated in terms of ergodic rate. the considered protocols aim to serve the su in a resource block which is originally allocated solely for the pu without negatively affecting the qos of the pu. we extract the ergodic rate of the su considering a specific qos for the pu for the two protocols. in the numerical results, we validate the theoretical analysis and illustrate the superiority of the considered protocols over two benchmark schemes.
arxiv:2204.05825
we present the motivation, design, outline, and lessons learned from an online course in scientific integrity, research ethics, and information ethics provided to over 2000 doctoral and engineering students in stem fields, first at the university paris-saclay, and now expanded to an online mooc available to students across the world, in english. unlike a course in a scientific domain, meant to provide students with methods, tools, and concepts they can apply in their future career, the goal of such training is not so much to equip them as to make them aware of the impact of their work on society, care about the responsibilities that befall them, and realize that not everyone shares the same opinion on how technology should imprint society. while we provide conceptual tools, this is more to sustain interest and engage students. we want them to debate concrete ethical issues and realize the difficulty of reconciling positions on contemporary dilemmas such as dematerialized intellectual property, freedom of expression online and its counterparts, the protection of our digital selves, the management of algorithmic decisions, the control of autonomous systems, and the resolution of the digital divide. as a bold shortcut, our course is about introducing and motivating hegelian dialectics in stem curricula, which are usually more bent toward an aristotelian perspective.
arxiv:2204.02728
the maximum entropy method is applied to dynamical fermion simulations of the ( 2 + 1 ) - dimensional nambu - jona - lasinio model. this model is particularly interesting because at t = 0 it has a broken phase with a rich spectrum of mesonic bound states and a symmetric phase where there are resonances, and hence the simple pole assumption of traditional fitting procedures breaks down. we present results extracted from simulations on large lattices for the spectral functions of the elementary fermion, the pion, the sigma, the massive pseudoscalar meson and the symmetric phase resonances.
arxiv:hep-lat/0110136
ongoing spectroscopic galaxy surveys like desi ( arxiv : 1611. 00037, arxiv : 1611. 00036 ) and euclid ( arxiv : 2405. 13491 ) will cover unprecedented volumes with a number of objects large enough to effectively probe clustering anisotropies through higher - order statistics. in this work, we present an original and efficient implementation of both a model for the multipole moments of the anisotropic 3 - point correlation function ( 3pcf ) and for their estimator. to assess the adequacy of our model, we have predicted the anisotropic 3pcf for a set of dark matter halos at redshift $ z = 1 $ and compared our prediction with the 3pcf measurement obtained by applying our estimator to a suite of 298 n - body + 3000 approximate mock halo catalogs at the same redshift. the use of the anisotropic component of the 3pcf effectively breaks the degeneracy between the growth rate $ f $ and the linear bias $ b _ 1 $, allowing us to obtain unbiased estimates for both parameters from a euclid - like survey. a joint, full anisotropic, 2 and 3 - point correlation analysis would also allow us to measure the clustering amplitude $ \ sigma _ 8 $ along with $ f $ and $ b _ 1 $ with a precision of approximately 17 %, and to effectively constrain all galaxy biasing parameters of the model. however, these results largely exploit information from the isotropic part of the 3pcf signal. the addition of the anisotropic multipoles tightens parameter estimates by just 5 % at most. we argue that this modest improvement likely reflects the use of a simplified cosmological model with relatively few free parameters. the 3pcf anisotropic multipoles will prove useful in reducing the impact of projection effects that affect high - dimensional cosmological analyses. to support this conjecture, we provide an example in which an additional source of anisotropy, the alcock - paczynski effect, is added to the system.
arxiv:2408.03036
in this paper, we introduce two types of real - valued sums known as complex conjugate pair sums ( ccpss ) denoted as ccps $ ^ { ( 1 ) } $ and ccps $ ^ { ( 2 ) } $, and discuss a few of their properties. using each type of ccpss and their circular shifts, we construct two non - orthogonal nested periodic matrices ( npms ). as npms are non - singular, this introduces two non - orthogonal transforms known as complex conjugate periodic transforms ( ccpts ) denoted as ccpt $ ^ { ( 1 ) } $ and ccpt $ ^ { ( 2 ) } $. we propose another npm, which uses both types of ccpss such that its columns are mutually orthogonal, this transform is known as orthogonal ccpt ( occpt ). after a brief study of a few occpt properties like periodicity, circular shift, etc., we present two different interpretations of it. further, we propose a decimation - in - time ( dit ) based fast computation algorithm for occpt ( termed as foccpt ), whenever the length of the signal is equal to $ 2 ^ v, \ v { \ in } \ mathbb { n } $. the proposed sums and transforms are inspired by ramanujan sums and ramanujan period transform ( rpt ). finally, we show that the period ( both divisor and non - divisor ) and frequency information of a signal can be estimated using the proposed transforms with a significant reduction in the computational complexity over discrete fourier transform ( dft ).
arxiv:2107.06173
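the transforms above are inspired by ramanujan sums, which are themselves real-valued sums over complex conjugate exponential pairs. a direct o(q) evaluation, as a generic illustration of that building block (not the paper's ccps construction):

```python
import math

def ramanujan_sum(q, n):
    """ramanujan sum c_q(n) = sum over 1 <= a <= q with gcd(a, q) = 1 of
    cos(2*pi*a*n/q). the imaginary parts of the exponentials cancel in
    conjugate pairs, so the sum is real; rounding removes float noise."""
    return round(sum(math.cos(2 * math.pi * a * n / q)
                     for a in range(1, q + 1) if math.gcd(a, q) == 1))
```

note c_q(0) equals euler's totient of q, since every admissible term is cos(0) = 1.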
obtaining high - quality data for collaborative training of machine learning models can be a challenging task due to a ) regulatory concerns and b ) a lack of data owner incentives to participate. the first issue can be addressed through the combination of distributed machine learning techniques ( e. g. federated learning ) and privacy enhancing technologies ( pet ), such as the differentially private ( dp ) model training. the second challenge can be addressed by rewarding the participants for giving access to data which is beneficial to the training model, which is of particular importance in federated settings, where the data is unevenly distributed. however, dp noise can adversely affect the underrepresented and the atypical ( yet often informative ) data samples, making it difficult to assess their usefulness. in this work, we investigate how to leverage gradient information to permit the participants of private training settings to select the data most beneficial for the jointly trained model. we assess two such methods, namely variance of gradients ( vog ) and the privacy loss - input susceptibility score ( plis ). we show that these techniques can provide the federated clients with tools for principled data selection even in stricter privacy settings.
arxiv:2305.02942
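the variance-of-gradients (vog) idea described above can be sketched as follows: record a per-sample gradient statistic at several training checkpoints and score each sample by its variance across checkpoints. this is a generic reading of the vog score, not the paper's exact pipeline:

```python
def variance_of_gradients(grad_snapshots):
    """vog-style score: per-sample variance of a scalar gradient statistic
    recorded at several checkpoints. `grad_snapshots` is a list of
    equal-length lists, one inner list per checkpoint. higher variance
    flags samples the model finds unstable or atypical."""
    n_ckpt = len(grad_snapshots)
    n_samples = len(grad_snapshots[0])
    scores = []
    for i in range(n_samples):
        vals = [grad_snapshots[t][i] for t in range(n_ckpt)]
        mean = sum(vals) / n_ckpt
        scores.append(sum((v - mean) ** 2 for v in vals) / n_ckpt)
    return scores
```

clients could rank their local samples by this score before contributing them to the joint model; how the ranking interacts with dp noise is the question the paper studies.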
aggregate risk analysis is a computationally intensive and a data intensive problem, thereby making the application of high - performance computing techniques interesting. in this paper, the design and implementation of a parallel aggregate risk analysis algorithm on multi - core cpu and many - core gpu platforms are explored. the efficient computation of key risk measures, including probable maximum loss ( pml ) and the tail value - at - risk ( tvar ) in the presence of both primary and secondary uncertainty for a portfolio of property catastrophe insurance treaties is considered. primary uncertainty is the uncertainty associated with whether a catastrophe event occurs or not in a simulated year, while secondary uncertainty is the uncertainty in the amount of loss when the event occurs. a number of statistical algorithms are investigated for computing secondary uncertainty. numerous challenges such as loading large data onto hardware with limited memory and organising it are addressed. the results obtained from experimental studies are encouraging. consider, for example, an aggregate risk analysis involving 800, 000 trials, with 1, 000 catastrophic events per trial, a million locations, and a complex contract structure taking into account secondary uncertainty. the analysis can be performed in just 41 seconds on a gpu, which is 24x faster than the sequential counterpart on a fast multi - core cpu. the results indicate that gpus can be used to efficiently accelerate aggregate risk analysis even in the presence of secondary uncertainty.
arxiv:1310.2274
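the two risk measures above, and the two layers of uncertainty, can be sketched in a miniature sequential version of the simulation (bernoulli occurrence as primary uncertainty, exponential severity as secondary uncertainty; all parameters are illustrative, and the gpu work parallelizes exactly this trial loop):

```python
import random

def simulate_year_losses(trials=10_000, events_per_trial=100, p_occur=0.1,
                         mean_loss=1.0, seed=7):
    """toy aggregate risk simulation: each trial is a simulated year.
    primary uncertainty = whether each catastrophe event occurs (bernoulli);
    secondary uncertainty = loss size given occurrence (exponential)."""
    random.seed(seed)
    losses = []
    for _ in range(trials):
        total = 0.0
        for _ in range(events_per_trial):
            if random.random() < p_occur:                   # primary
                total += random.expovariate(1.0 / mean_loss)  # secondary
        losses.append(total)
    return sorted(losses)

def pml(losses, q=0.99):
    """probable maximum loss: the q-quantile of the sorted annual losses."""
    return losses[int(q * len(losses)) - 1]

def tvar(losses, q=0.99):
    """tail value-at-risk: mean loss over the worst (1 - q) fraction of years."""
    tail = losses[int(q * len(losses)):]
    return sum(tail) / len(tail)
```

by construction tvar is an average over years at least as bad as the pml year, so tvar >= pml always holds.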
this paper studies a dynamical system that models the free recall dynamics of working memory. this model is a modular neural network with n modules, named hypercolumns, and each module consists of m minicolumns. under mild conditions on the connection weights between minicolumns, we investigate the long - term evolution behavior of the model, namely the existence and stability of equilibria and limit cycles. we also give a critical value at which hopf bifurcation happens. finally, we give a sufficient condition under which this model has a globally asymptotically stable equilibrium with synchronized minicolumn states in each hypercolumn, which implies that in this case recalling is impossible. numerical simulations are provided to illustrate our theoretical results. a numerical example we give suggests that patterns can be stored not only in equilibria and limit cycles, but also in strange attractors ( or chaos ).
arxiv:2209.11014
we study a multiagent learning problem where agents can either learn via repeated interactions, or can follow the advice of a mediator who suggests possible actions to take. we present an algorithm that each agent can use so that, with high probability, they can verify whether or not the mediator ' s advice is useful. in particular, if the mediator ' s advice is useful then agents will reach a correlated equilibrium, but if the mediator ' s advice is not useful, then agents are not harmed by using our test, and can fall back to their original learning algorithm. we then generalize our algorithm and show that in the limit it always correctly verifies the mediator ' s advice.
arxiv:1206.3261
image denoising enhances image quality, serving as a foundational technique across various computational photography applications. the obstacle to clean image acquisition in real scenarios necessitates the development of self - supervised image denoising methods only depending on noisy images, especially a single noisy image. existing self - supervised image denoising paradigms ( noise2noise and noise2void ) rely heavily on information - lossy operations, such as downsampling and masking, culminating in low quality denoising performance. in this paper, we propose a novel self - supervised single image denoising paradigm, positive2negative, to break the information - lossy barrier. our paradigm involves two key steps : renoised data construction ( rdc ) and denoised consistency supervision ( dcs ). rdc renoises the predicted denoised image by the predicted noise to construct multiple noisy images, preserving all the information of the original image. dcs ensures consistency across the multiple denoised images, supervising the network to learn robust denoising. our positive2negative paradigm achieves state - of - the - art performance in self - supervised single image denoising with significant speed improvements. the code is released to the public at https : / / github. com / li - tong - 621 / p2n.
arxiv:2412.16460
compositional zero - shot learning aims to recognize unseen compositions of seen visual primitives of object classes and their states. while all primitives ( states and objects ) are observable during training in some combination, their complex interaction makes this task especially hard. for example, wet changes the visual appearance of a dog very differently from a bicycle. furthermore, we argue that relationships between compositions go beyond shared states or objects. a cluttered office can contain a busy table ; even though these compositions don ' t share a state or object, the presence of a busy table can guide the presence of a cluttered office. we propose a novel method called compositional attention propagated embedding ( cape ) as a solution. the key intuition to our method is that a rich dependency structure exists between compositions arising from complex interactions of primitives in addition to other dependencies between compositions. cape learns to identify this structure and propagates knowledge between them to learn class embedding for all seen and unseen compositions. in the challenging generalized compositional zero - shot setting, we show that our method outperforms previous baselines to set a new state - of - the - art on three publicly available benchmarks.
arxiv:2210.11557
in this article we improve the known uniform bound for subgroup growth of chevalley groups over $ \ mathbf { g } ( \ mathbb { f } _ p [ [ t ] ] ) $. we introduce a new parameter, the ridgeline number $ v ( \ mathbf { g } ) $, and give new bounds for the subgroup growth of $ \ mathbf { g } ( \ mathbb { f } _ p [ [ t ] ] ) $ expressed through $ v ( \ mathbf { g } ) $. we achieve this by deriving a new estimate for the codimension of $ [ u, v ] $ where $ u $ and $ v $ are vector subspaces in the lie algebra of $ \ mathbf { g } $.
arxiv:1511.04333
first results of the study of the process e + e - \ to 4 \ pi by the cmd - 2 collaboration at vepp - 2m are presented for the energy range 1. 05 - - 1. 38 gev. using an integrated luminosity of 5. 8 pb ^ { - 1 }, energy dependence of the processes e + e - \ to \ pi ^ + \ pi ^ - 2 \ pi ^ 0 and e + e - \ to 2 \ pi ^ + 2 \ pi ^ - has been measured. analysis of the differential distributions demonstrates the dominance of the a _ 1 \ pi and \ omega \ pi intermediate states. upper limits for the contributions of other alternative mechanisms are also placed.
arxiv:hep-ex/9904024
saunders mac lane in mathematics, form and function summarizes the basics of several areas of mathematics, emphasizing their inter - connectedness, and observes : the development of mathematics provides a tightly connected network of formal rules, concepts, and systems. nodes of this network are closely bound to procedures useful in human activities and to questions arising in science. the transition from activities to the formal mathematical systems is guided by a variety of general insights and ideas. another approach for defining mathematics is to use its methods. for example, an area of study is often qualified as mathematics as soon as one can prove theorems — assertions whose validity relies on a proof, that is, a purely - logical deduction. = = = rigor = = = mathematical reasoning requires rigor. this means that the definitions must be absolutely unambiguous and the proofs must be reducible to a succession of applications of inference rules, without any use of empirical evidence and intuition. rigorous reasoning is not specific to mathematics, but, in mathematics, the standard of rigor is much higher than elsewhere. despite mathematics ' concision, rigorous proofs can require hundreds of pages to express, such as the 255 - page feit – thompson theorem. the emergence of computer - assisted proofs has allowed proof lengths to further expand. the result of this trend is a philosophy of the quasi - empiricist proof that can not be considered infallible, but has a probability attached to it. the concept of rigor in mathematics dates back to ancient greece, where their society encouraged logical, deductive reasoning. however, this rigorous approach would tend to discourage exploration of new approaches, such as irrational numbers and concepts of infinity. the method of demonstrating rigorous proof was enhanced in the sixteenth century through the use of symbolic notation. 
in the 18th century, social transition led to mathematicians earning their keep through teaching, which led to more careful thinking about the underlying concepts of mathematics. this produced more rigorous approaches, while transitioning from geometric methods to algebraic and then arithmetic proofs. at the end of the 19th century, it appeared that the definitions of the basic concepts of mathematics were not accurate enough to avoid paradoxes ( non - euclidean geometries and weierstrass function ) and contradictions ( russell ' s paradox ). this was solved by the inclusion of axioms with the apodictic inference rules of mathematical theories ; the re - introduction of the axiomatic method pioneered by the ancient greeks. as a result, " rigor " is no longer a relevant concept in mathematics,
https://en.wikipedia.org/wiki/Mathematics
in this paper we show that an affine space is determined by the abstract group structure of its group of regular automorphisms in the category of connected affine varieties. to prove this we study commutative subgroups of the group of automorphisms of affine varieties.
arxiv:1912.01567
recently, apparent nonphysical implications of non - hermitian quantum mechanics ( nhqm ) have been discussed in the literature. in particular, the apparent violation of the no - signaling theorem, discrimination of nonorthogonal states, and the increase of quantum entanglement by local operations were reported, and therefore nhqm was not considered as a fundamental theory. here we show that these and other no - go principles ( including the no - cloning and no - deleting theorems ) of conventional quantum mechanics still hold in finite - dimensional non - hermitian quantum systems, including parity - time symmetric and pseudo - hermitian cases, if its formalism is properly applied. we have developed a modified formulation of nhqm based on the geometry of hilbert spaces which is consistent with the conventional quantum mechanics for hermitian systems. using this formulation the validity of these principles can be shown in a simple and uniform approach.
arxiv:1906.08071
twitter is one of the most popular social media platforms in italy, but pre - pandemic vaccination debate has been shown to be polarized and siloed into echo chambers. it is thus imperative to understand the nature of this discourse, with a specific focus on the vaccination hesitant individuals, whose healthcare decisions may affect their communities and the country at large. in this study we ask, how has the italian discussion around vaccination changed during the covid - 19 pandemic, and have the unprecedented events of 2020 - 2021 been able to break the echo chamber around this topic? we use a twitter dataset spanning september 2019 - november 2021 to examine the state of polarization around vaccination. we propose a hierarchical clustering approach to find the largest communities in the endorsement networks of different time periods, and manually illustrate that it produces communities of users sharing a stance. examining the structure of these networks, as well as textual content of their interactions, we find the stark division between supporters and hesitant individuals to continue throughout the vaccination campaign. however, we find an increasing commonality in the topical focus of the vaccine supporters and vaccine hesitant, pointing to a possible common set of facts the two sides may agree on. still, we discover a series of concerns voiced by the hesitant community, ranging from unfounded conspiracies ( microchips in vaccines ) to public health policy discussion ( vaccine passport limitations ). we recommend an ongoing surveillance of this debate, especially to uncover concerns around vaccination before the public health decisions and official messaging are made public.
arxiv:2204.12943
century bce followed by the middle east ( about 1000 bce ) and india and china. the assyrians are credited with the introduction of horse cavalry in warfare and the extensive use of iron weapons by 1100 bce. assyrians were also the first to use iron - tipped arrows. = = = post - classical technology = = = the wujing zongyao ( essentials of the military arts ), written by zeng gongliang, ding du, and others at the order of emperor renzong around 1043 during the song dynasty, illustrates the era ' s focus on advancing intellectual issues and military technology due to the significance of warfare between the song and the liao, jin, and yuan to their north. the book covers topics of military strategy, training, and the production and employment of advanced weaponry. advances in military technology aided the song dynasty in its defense against hostile neighbors to the north. the flamethrower found its origins in byzantine - era greece, employing greek fire ( a chemically complex, highly flammable petrol fluid ) in a device with a siphon hose by the 7th century. : 77 the earliest reference to greek fire in china was made in 917, written by wu renchen in his spring and autumn annals of the ten kingdoms. : 80 in 919, the siphon projector - pump was used to spread the ' fierce fire oil ' that could not be doused with water, as recorded by lin yu in his wuyue beishi, hence the first credible chinese reference to the flamethrower employing the chemical solution of greek fire ( see also pen huo qi ). : 81 lin yu mentioned also that the ' fierce fire oil ' derived ultimately from one of china ' s maritime contacts in the ' southern seas ', arabia dashiguo. : 82 in the battle of langshan jiang in 919, the naval fleet of the wenmu king from wuyue defeated a huainan army from the wu state ; wenmu ' s success was facilitated by the use of ' fire oil ' ( ' huoyou ' ) to burn their fleet, signifying the first chinese use of gunpowder in a battle. 
: 81 – 83 the chinese applied the use of double - piston bellows to pump petrol out of a single cylinder ( with an upstroke and downstroke ), lit at the end by a slow - burning gunpowder match to fire a continuous stream of flame. : 82 this device was featured in description and illustration of the wujing zongyao military manuscript of 1044
https://en.wikipedia.org/wiki/Military_technology
this research conducted a systematic review of the literature on machine learning ( ml ) - based methods in the context of continuous integration ( ci ) over the past 22 years. the study aimed to identify and describe the techniques used in ml - based solutions for ci and analyzed various aspects such as data engineering, feature engineering, hyper - parameter tuning, ml models, evaluation methods, and metrics. in this paper, we have depicted the phases of ci testing, the connection between them, and the employed techniques in training the ml method phases. we presented nine types of data sources and four taken steps in the selected studies for preparing the data. also, we identified four feature types and nine subsets of data features through thematic analysis of the selected studies. besides, five methods for selecting and tuning the hyper - parameters are shown. in addition, we summarised the evaluation methods used in the literature and identified fifteen different metrics. the most commonly used metrics were found to be precision, recall, and f1 - score, and we have also identified five methods for evaluating the performance of trained ml models. finally, we have presented the relationship between ml model types, performance measurements, and ci phases. the study provides valuable insights for researchers and practitioners interested in ml - based methods in ci and emphasizes the need for further research in this area.
arxiv:2305.12695
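the three metrics the survey found most common are simple functions of the confusion counts; a minimal sketch (function name and signature are illustrative):

```python
def prf1(tp, fp, fn):
    """precision, recall, and f1 from confusion counts (true positives,
    false positives, false negatives) -- the metrics most commonly
    reported in the surveyed ml-for-ci studies. guards against division
    by zero when a class is never predicted or never present."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

f1 is the harmonic mean of precision and recall, so it equals both when they coincide.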
context. the sao 206462 ( hd 135344b ) disk is one of the few known transitional disks showing asymmetric features in scattered light and thermal emission. near - infrared scattered - light images revealed two bright outer spiral arms and an inner cavity depleted in dust. giant protoplanets have been proposed to account for the disk morphology. aims. we aim to search for giant planets responsible for the disk features and, in the case of non - detection, to constrain recent planet predictions using the data detection limits. methods. we obtained new high - contrast and high - resolution total intensity images of the target spanning the y to the k bands ( 0. 95 - 2. 3 mic ) using the vlt / sphere near - infrared camera and integral field spectrometer. results. the spiral arms and the outer cavity edge are revealed at high resolutions and sensitivities without the need for image post - processing techniques, which introduce photometric biases. we do not detect any close - in companions. for the derivation of the detection limits on putative giant planets embedded in the disk, we show that the knowledge of the disk aspect ratio and viscosity is critical for the estimation of the attenuation of a planet signal by the protoplanetary dust because of the gaps that these putative planets may open. given assumptions on these parameters, the mass limits can vary from ~ 2 - 5 to ~ 4 - 7 jupiter masses at separations beyond the disk spiral arms. the sphere detection limits are more stringent than those derived from archival naco / l ' data and provide new constraints on a few recent predictions of massive planets ( 4 - 15 mj ) based on the spiral density wave theory. the sphere and alma data do not favor the hypotheses on massive giant planets in the outer disk ( beyond 0. 6 ). there could still be low - mass planets in the outer disk and / or planets inside the cavity.
arxiv:1702.05108
we investigate the linear stability of a shocked accretion flow onto a black hole in the adiabatic limit. our linear analyses and numerical calculations show that, despite the post - shock deceleration, the shock is generally unstable to non - axisymmetric perturbations. the simulation results of molteni, tóth & kuznetsov can be well explained by our linear eigenmodes. the mechanism of this instability is confirmed to be based on the cycle of acoustic waves between the corotation radius and the shock. we obtain an analytical formula to calculate the oscillation period from the physical parameters of the flow. we argue that the quasi - periodic oscillation should be a common phenomenon in accretion flows with angular momentum.
arxiv:astro-ph/0511211
identifying overpotential components of electrochemical systems enables quantitative analysis of polarization contributions of kinetic processes under practical operating conditions. however, the inherently coupled kinetic processes lead to an enormous challenge in measuring individual overpotentials, particularly in composite electrodes of lithium - ion batteries. herein, the full decomposition of electrode overpotential is realized by the collaboration of single - layer structured particle electrode ( slpe ) constructions and time - resolved potential measurements, explicitly revealing the evolution of kinetic processes. perfect prediction of the discharging profiles is achieved via potential measurements on slpes, even in extreme polarization conditions. by decoupling overpotentials in different electrode / cell structures and material systems, the dominant limiting processes of battery rate performance are uncovered, based on which the optimization of electrochemical kinetics can be conducted. our study not only sheds light on decoupling complex kinetics in electrochemical systems, but also provides vitally significant guidance for the rational design of high - performance batteries.
arxiv:2207.02413
we propose xscale - nvs for high - fidelity cross - scale novel view synthesis of real - world large - scale scenes. existing representations based on explicit surface suffer from discretization resolution or uv distortion, while implicit volumetric representations lack scalability for large scenes due to the dispersed weight distribution and surface ambiguity. in light of the above challenges, we introduce hash featurized manifold, a novel hash - based featurization coupled with a deferred neural rendering framework. this approach fully unlocks the expressivity of the representation by explicitly concentrating the hash entries on the 2d manifold, thus effectively representing highly detailed contents independent of the discretization resolution. we also introduce a novel dataset, namely giganvs, to benchmark cross - scale, high - resolution novel view synthesis of real - world large - scale scenes. our method significantly outperforms competing baselines on various real - world scenes, yielding an average lpips that is 40 % lower than prior state - of - the - art on the challenging giganvs benchmark. please see our project page at : xscalenvs. github. io.
arxiv:2403.19517
a new kind of contactless pumping mechanism is realized in a layer of ferrofluid via a spatio - temporally modulated magnetic field. the resulting pressure gradient leads to a liquid ramp, which is measured by means of x - rays. the transport mechanism works best if a resonance of the surface waves with the driving is achieved. the behavior can be understood semi - quantitatively by considering the magnetically influenced dispersion relation of the fluid.
arxiv:1004.4151
here we present the first optical photometric monitoring results of a sample of twelve newly discovered blazars from the icrf - gaia crf astrometric link. the observations were performed from april 2013 until august 2019 using eight telescopes located in europe. for a robust test of the brightness and colour variability, we use the abbe criterion and an f - test. moreover, linear fittings are performed to investigate the relation in the colour - magnitude variations of the blazars. variability was confirmed for 10 sources ; two sources, 1429 + 249 and 1556 + 335, seem to be possibly variable. three sources ( 1034 + 574, 1722 + 119, and 1741 + 597 ) have displayed large amplitude brightness changes of more than one magnitude. we found that seven sources displayed bluer - when - brighter variations, and one source showed redder - when - brighter variations. we briefly explain the various agn emission models which can explain our results.
arxiv:2304.03664
we study massless geodesics near the photon - spheres of a large family of solutions of einstein - maxwell theory in five dimensions, including bhs, naked singularities and smooth horizon - less jmart geometries obtained as six - dimensional uplifts of the five - dimensional solution. we find that a light ring of unstable photon orbits surrounding the mass center is always present, independently of the existence of a horizon or singularity. we compute the lyapunov exponent, characterizing the chaotic behaviour of geodesics near the ` photon - sphere ' and the time decay of ring - down modes dominating the response of the geometry to perturbations at late times. we show that, for geometries free of naked singularities, the lyapunov exponent is always bounded by its value for a schwarzschild bh of the same mass.
arxiv:2011.04344
we study the effect of galactic foregrounds with spatially varying spectral indices on analysis of simulated data from the planck satellite. we also briefly mention the effect the extra - galactic point sources have on the data analysis and summarise the most recent constraints on galactic emission at ghz frequencies.
arxiv:astro-ph/9810235
we study continuity properties of stochastic game problems with respect to various topologies on information structures, defined as probability measures characterizing a game. we will establish continuity properties of the value function under total variation, setwise, and weak convergence of information structures. our analysis reveals that the value function for a bounded game is continuous under total variation convergence of information structures in both zero - sum games and team problems. continuity may fail to hold under setwise or weak convergence of information structures, however, the value function exhibits upper semicontinuity properties under weak and setwise convergence of information structures for team problems, and upper or lower semicontinuity properties hold for zero - sum games when such convergence is through a blackwell - garbled sequence of information structures. if the individual channels are independent, fixed, and satisfy a total variation continuity condition, then the value functions are continuous under weak convergence of priors. we finally show that value functions for players may not be continuous even under total variation convergence of information structures in general non - zero - sum games.
arxiv:2109.11035
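the notions of convergence compared in the abstract above differ in strength; for finite information structures, total variation convergence is the strongest and reduces to a simple formula. a minimal sketch for discrete distributions given as dicts (the representation is an illustrative assumption):

```python
def total_variation(p, q):
    """total variation distance between two finite distributions given as
    dicts mapping outcomes to probabilities:
    tv(p, q) = sup_a |p(a) - q(a)| = 0.5 * sum_k |p(k) - q(k)|."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)
```

convergence in total variation implies both setwise and weak convergence, which is why continuity of the value function under it is the easiest of the three results to obtain.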
in this paper, we prove that there exists a zariski dense open subset $ u $ in the parameter space of all elementary $ p $ - covers of the projective line that ramified at exactly one point, defined over the rationals, such that for every curve $ x $ in $ u ( \ overline { q } ) $ and for any prime $ p $ large enough, the reduction of $ x $ at all primes lying over $ p $ achieves its generic newton slopes.
arxiv:2403.05453
vortex motions are frequently observed on the solar photosphere. these motions may play a key role in the transport of energy and momentum from the lower atmosphere into the upper solar atmosphere, contributing to coronal heating. the lower solar atmosphere also consists of complex networks of flux tubes that expand and merge throughout the chromosphere and upper atmosphere. we perform numerical simulations to investigate the behaviour of vortex driven waves propagating in a pair of such flux tubes in a non - force - free equilibrium with a realistically modelled solar atmosphere. the two flux tubes are independently perturbed at their footpoints by counter - rotating vortex motions. when the flux tubes merge, the vortex motions interact both linearly and nonlinearly. the linear interactions generate many small - scale transient magnetic substructures due to the magnetic stress imposed by the vortex motions. thus, an initially monolithic tube is separated into a complex multi - threaded tube due to the photospheric vortex motions. the wave interactions also drive a superposition that increases in amplitude until it exceeds the local mach number and produces shocks that propagate upwards with speeds of approximately $ 50 $ km s $ ^ { - 1 } $. the shocks act as conduits transporting momentum and energy upwards, and heating the local plasma by more than an order of magnitude, with peak temperature approximately $ 60, 000 $ k. therefore, we present a new mechanism for the generation of magnetic waveguides from the lower solar atmosphere to the solar corona. this wave guide appears as the result of interacting perturbations in neighbouring flux tubes. thus, the interaction of photospheric vortex motions is a potentially significant mechanism for energy transfer from the lower to upper solar atmosphere.
arxiv:1803.06112
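the shock formation described above hinges on the driven flow becoming supersonic relative to the local plasma. a minimal sketch of that criterion, using illustrative chromospheric values (the pressure and density below are assumptions for illustration, not figures taken from the paper):

```python
import math

def sound_speed(gamma, pressure, density):
    """adiabatic sound speed c_s = sqrt(gamma * p / rho), SI units."""
    return math.sqrt(gamma * pressure / density)

def mach_number(flow_speed, c_s):
    """ratio of flow speed to local sound speed; M > 1 implies shock formation."""
    return flow_speed / c_s

# illustrative chromospheric values (assumed): p ~ 0.1 Pa, rho ~ 1e-9 kg/m^3,
# monatomic gamma = 5/3
c_s = sound_speed(5.0 / 3.0, 0.1, 1e-9)   # ~13 km/s
M = mach_number(50e3, c_s)                # 50 km/s upward shock speed from the abstract
print(f"c_s = {c_s / 1e3:.1f} km/s, Mach = {M:.1f}")
```

with these assumed values the $50$ km s$^{-1}$ upward propagation quoted in the abstract corresponds to a mach number of several, consistent with shocked rather than linear wave propagation.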
we perform a detailed analysis of the irreversible magnetization data of salem-sugui et al. and babis et al. of underdoped and optimally doped yba2cu3o7+x single crystals. near the zero-field transition temperature we observe extended consistency with the properties of the 3d-xy universality class, even though the attained critical regime is limited by an inhomogeneity-induced finite-size effect. nevertheless, as $t_c$ falls from 93.5 to 41.5 k, the critical amplitude of the in-plane correlation length $\xi_{ab0}$, the anisotropy $\gamma = \xi_{ab0} / \xi_{c0}$ and the critical amplitude of the in-plane penetration depth $\lambda_{ab0}$ increase substantially, while the critical amplitude of the c-axis correlation length $\xi_{c0}$ does not change much. as a consequence, the correlation volume $v_{corr}$ increases and the critical amplitude of the specific heat singularity $a$ decreases dramatically, while the rise of $\xi_{ab0}$ reflects the behavior of its zero-temperature counterpart. conversely, although $\xi_{ab0}$ and $\lambda_{ab0}$ increase with reduced $t_c$, the ginzburg-landau parameter decreases substantially and yba2cu3o7+x crosses over from an extreme to a weak type-ii superconductor.
arxiv:cond-mat/0610289
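the crossover claimed at the end of the abstract follows from the standard definition of the ginzburg-landau parameter $\kappa = \lambda / \xi$: if both amplitudes grow but $\lambda_{ab0}$ grows more slowly than $\xi_{ab0}$, $\kappa$ falls. a minimal numerical sketch, with illustrative amplitudes in nm (assumed for illustration, not the fitted values from the paper):

```python
import math

def gl_parameter(lambda0, xi0):
    """ginzburg-landau parameter kappa = lambda / xi (same length units)."""
    return lambda0 / xi0

def superconductor_type(kappa):
    """type-ii when kappa > 1/sqrt(2), type-i otherwise."""
    return "type-ii" if kappa > 1.0 / math.sqrt(2.0) else "type-i"

# assumed illustrative amplitudes in nm:
kappa_optimal = gl_parameter(130.0, 1.2)  # optimally doped: small xi_ab0, large kappa
kappa_under = gl_parameter(300.0, 4.0)    # underdoped: both grow, but xi grows faster
print(kappa_optimal, superconductor_type(kappa_optimal))
print(kappa_under, superconductor_type(kappa_under))
```

both cases remain type-ii ($\kappa > 1/\sqrt{2}$), but $\kappa$ drops substantially with underdoping, which is the sense in which the material moves from an extreme to a weak type-ii superconductor.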
x-ray computed tomography (ct) based 3d imaging is widely used in airports for aviation security screening, whilst prior work on prohibited item detection focuses primarily on 2d x-ray imagery. in this paper, we aim to evaluate the possibility of extending automatic prohibited item detection from 2d x-ray imagery to volumetric 3d ct baggage security screening imagery. to this end, we take advantage of 3d convolutional neural networks (cnn) and popular object detection frameworks such as retinanet and faster r-cnn. as the first attempt to use 3d cnn for volumetric 3d ct baggage security screening, we first evaluate different cnn architectures on the classification of isolated prohibited item volumes and compare against traditional methods that use hand-crafted features. subsequently, we evaluate the object detection performance of different architectures on volumetric 3d ct baggage images. the results of our experiments on bottle and handgun datasets demonstrate that 3d cnn models can achieve performance comparable to traditional methods (98% true positive rate and 1.5% false positive rate) while requiring significantly less time for inference (0.014 s per volume). furthermore, the extended 3d object detection models achieve promising performance in detecting prohibited items within volumetric 3d ct baggage imagery, with 76% map for bottles and 88% map for handguns, which shows both the challenge and the promise of such threat detection within 3d ct x-ray security imagery.
arxiv:2003.12625
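the core primitive behind the 3d cnns discussed above is a convolution that slides a small kernel through all three spatial axes of a volume rather than two. a minimal numpy sketch of that operation (a toy stand-in, not the paper's architecture; real models would use an optimized library implementation):

```python
import numpy as np

def conv3d(volume, kernel):
    """valid-mode 3d convolution (cross-correlation, as in cnn layers)."""
    d, h, w = kernel.shape
    D, H, W = volume.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(volume[z:z + d, y:y + h, x:x + w] * kernel)
    return out

rng = np.random.default_rng(0)
ct_volume = rng.random((16, 16, 16))        # toy stand-in for a cropped ct item volume
edge_kernel = np.zeros((3, 3, 3))
edge_kernel[0], edge_kernel[2] = -1.0, 1.0  # crude gradient filter along the depth axis
features = conv3d(ct_volume, edge_kernel)
print(features.shape)  # (14, 14, 14)
```

stacking many such learned kernels, interleaved with nonlinearities and pooling, is what lets the classification and detection networks in the abstract exploit the full volumetric structure of ct baggage scans.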
quasiparticle interference (qpi) provides a wealth of information relating to the electronic structure of a material. however, it is often assumed that this information is constrained to two-dimensional electronic states. here, we show that this is not necessarily the case. for fese, a system dominated by surface defects, we show that it is actually all electronic states with negligible group velocity along the $z$ axis that are contained within the experimental data. by using a three-dimensional tight-binding model of fese, fit to photoemission measurements, we directly reproduce the experimental qpi scattering dispersion within a t-matrix formalism by including both $k_z = 0$ and $k_z = \pi$ electronic states. this result unifies both tunnelling- and photoemission-based experiments on fese and highlights the importance of $k_z$ within surface-sensitive measurements of qpi.
arxiv:1906.02150
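the t-matrix formalism used above has a compact numerical core: for a point-like scatterer of strength $v$, the impurity-induced modulation of the local density of states is $\delta\rho(x,\omega) = -\frac{1}{\pi}\,\mathrm{im}\!\left[g_0(x)\,t(\omega)\,g_0(-x)\right]$ with $t = v / (1 - v g_0(0))$. a minimal one-dimensional tight-binding sketch (all parameter values are illustrative, not the fese model from the paper):

```python
import numpy as np

# single point-like impurity on a 1d tight-binding ring, t-matrix formalism
N, t, V, omega, eta = 256, 1.0, 2.0, 0.5, 0.05
k = 2 * np.pi * np.arange(N) / N
eps_k = -2 * t * np.cos(k)                        # band dispersion
Gk = 1.0 / (omega + 1j * eta - eps_k)             # momentum-space bare green's function
Gx = np.fft.ifft(Gk)                              # real-space G0(x, omega)
g0 = Gx[0]                                        # local green's function at the impurity site
T = V / (1.0 - V * g0)                            # exact t-matrix for a delta-function scatterer
x = np.arange(N)
delta_rho = -np.imag(Gx[x] * T * Gx[-x]) / np.pi  # impurity-induced ldos modulation
print(delta_rho[:5])
```

fourier transforming `delta_rho` recovers the qpi scattering dispersion; the paper's three-dimensional calculation follows the same structure but sums green's functions over $k_z$, which is how the $k_z = 0$ and $k_z = \pi$ states both enter the measured signal.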
we construct gromov - witten invariants of general symplectic manifolds.
arxiv:alg-geom/9608032
android and ios are the two mobile platforms present in almost all smartphones built in recent years. developing an application that targets both platforms is a challenge. a traditional approach is to build two different apps: one in java for android, the other in objective-c for ios. xamarin is a framework for developing android and ios apps that allows developers to share most of the application code across multiple implementations of the app, each for a specific platform. in this paper, we present xamforumdb, a database that stores discussions, questions and answers extracted from the xamarin forum. we envision that the research community could use it to study, for instance, the problems of developing such applications.
arxiv:1703.03631
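a forum dump like xamforumdb naturally maps onto a small relational schema of threads and posts. the sketch below shows one plausible shape for such a database and a query over it; the table and column names here are hypothetical illustrations, not the actual schema of the released dataset:

```python
import sqlite3

# hypothetical minimal schema for a forum-dump database in the spirit of
# xamforumdb; the real dataset's tables and columns may differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE threads (id INTEGER PRIMARY KEY, title TEXT, created_at TEXT);
CREATE TABLE posts (
    id INTEGER PRIMARY KEY,
    thread_id INTEGER REFERENCES threads(id),
    author TEXT,
    body TEXT,
    is_answer INTEGER DEFAULT 0
);
""")
conn.execute("INSERT INTO threads VALUES (1, 'Sharing code between Android and iOS', '2016-01-01')")
conn.execute("INSERT INTO posts VALUES (1, 1, 'dev1', 'How do I share view models?', 0)")
conn.execute("INSERT INTO posts VALUES (2, 1, 'dev2', 'Use a shared PCL project.', 1)")
answers = conn.execute(
    "SELECT p.body FROM posts p JOIN threads t ON p.thread_id = t.id WHERE p.is_answer = 1"
).fetchall()
print(answers)  # [('Use a shared PCL project.',)]
```

queries of this kind (joining accepted answers back to their question threads) are the sort of analysis the authors envision the research community running over the dataset.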