text
|
source
|
---|---|
I use two-dimensional axisymmetric numerical simulations of star-disk magnetospheric interaction to construct a three-dimensional model of hot spots on the star created by infalling accretion columns. The intensity of the emitted radiation, as seen by observers at infinity from different viewing angles, is computed with minimal assumptions. Illustrative examples show how the intensity curve changes with the geometry of the model and with modifications of the physical parameters in the simulation.
|
arxiv:1811.07652
|
Erasure codes have become an integral part of distributed storage systems as a tool for providing data reliability and durability under the constant threat of device failures. In such systems, an $[n, k]$ code over a finite field $\mathbb{F}_q$ encodes $k$ message symbols into $n$ codeword symbols from $\mathbb{F}_q$, which are then stored on $n$ different nodes in the system. Recent work has shown that significant savings in storage space can be obtained by tuning $n$ and $k$ to variations in device failure rates. Such tuning necessitates code conversion: the process of converting already encoded data under an initial $[n^i, k^i]$ code to its equivalent under a final $[n^f, k^f]$ code. The default approach to conversion is to re-encode the data, which places a significant burden on system resources. Convertible codes are a recently proposed class of codes for enabling resource-efficient conversions. Existing work on convertible codes has focused on minimizing access cost, i.e., the number of code symbols accessed during conversion. Bandwidth, which corresponds to the amount of data read and transferred, is another important resource to optimize. In this paper, we initiate the study of the fundamental limits on the bandwidth used during code conversion and present constructions for bandwidth-optimal convertible codes. First, we model the code conversion problem using network information flow graphs with variable-capacity edges. Second, focusing on MDS codes and an important parameter regime called the merge regime, we derive tight lower bounds on the bandwidth cost of conversion. The derived bounds show that bandwidth cost can be significantly reduced even in regimes where access cost cannot be reduced compared to the default approach. Third, we present a new construction for MDS convertible codes which matches the proposed lower bound and is thus bandwidth-optimal during conversion.
|
arxiv:2008.12707
|
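The conversion cost discussed in the abstract above can be made concrete with a toy example. The sketch below uses a systematic single-parity $[k+1, k]$ code over GF(2), an illustrative assumption only (the paper's constructions concern MDS codes over larger fields), and shows the default re-encoding approach, whose access cost, namely all data symbols of every initial codeword, is the baseline that convertible codes improve on.

```python
# Toy illustration of code conversion by re-encoding (the "default approach"
# the abstract compares against). A systematic [k+1, k] single-parity code
# over GF(2) stands in for a real MDS code.

def encode(message):
    """Systematic [k+1, k] single-parity encoding over GF(2)."""
    parity = 0
    for bit in message:
        parity ^= bit
    return message + [parity]

def reencode_conversion(codewords):
    """Default conversion: merge several [k+1, k] codewords into one
    [K+1, K] codeword by reading ALL data symbols and re-encoding.
    Returns the new codeword and the number of symbols accessed."""
    data = []
    for cw in codewords:
        data.extend(cw[:-1])      # read every systematic symbol
    accessed = len(data)          # access cost of the default approach
    return encode(data), accessed
```

Merging two $[5, 4]$ codewords this way accesses all 8 data symbols; the point of convertible codes is to provably beat this baseline in access or bandwidth cost.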
We present a new algorithm named the dynamical functional particle method (DFPM). It is based on the idea of formulating a finite-dimensional damped dynamical system whose stationary points are the solutions to the original equations. The resulting Hamiltonian dynamical system makes it possible to apply efficient symplectic integrators. Other attractive properties of DFPM are that it has an exponential convergence rate, automatically admits a sparse formulation, and in many cases can solve nonlinear problems without any special treatment. We study the convergence and convergence rate of DFPM. It is shown that for discretized symmetric eigenvalue problems the computational complexity is $\mathcal{O}(n^{(d+1)/d})$, where $d$ is the dimension of the problem and $n$ is the vector size. An illustrative example is given for the two-dimensional Schr\"odinger equation. Comparisons are made with the standard numerical libraries ARPACK and LAPACK; the conjugate gradient method and the shifted power method are tested as well. It is concluded that DFPM is both versatile and efficient.
|
arxiv:1303.5317
|
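The core idea of the abstract above, embedding the equations $F(x) = 0$ in a damped second-order system $\ddot{x} + \eta\dot{x} = -F(x)$ whose stationary points solve the original problem, can be sketched in a few lines. The damping and step-size values below are illustrative choices, not the tuned parameters from the paper.

```python
# Minimal DFPM-style sketch: integrate the damped dynamics with a
# semi-implicit (symplectic) Euler step until the system settles at a
# stationary point, which solves F(x) = 0.

def dfpm_solve(F, x0, eta=1.5, dt=0.1, steps=2000):
    x = list(x0)
    v = [0.0] * len(x0)
    for _ in range(steps):
        f = F(x)
        for i in range(len(x)):
            v[i] += dt * (-f[i] - eta * v[i])  # update velocity first
            x[i] += dt * v[i]                  # then position (symplectic)
    return x

# Example: solve the SPD linear system A x = b via F(x) = A x - b.
A = [[2.0, 1.0], [1.0, 2.0]]
b = [3.0, 3.0]

def F(x):
    return [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]

x = dfpm_solve(F, [0.0, 0.0])   # converges toward the solution [1, 1]
```

Because the damped oscillation contracts every eigenmode of the system, the iterate decays exponentially toward the stationary point, which is the exponential convergence rate the abstract refers to.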
Prospects for studying hyperon-nuclei/nucleon interactions at BESIII and similar $e^+e^-$ colliders are proposed in this work. Utilizing the large quantity of hyperons produced by the decay of 10 billion $J/\psi$ and 2.7 billion $\psi(3686)$ events collected at BESIII, the cross sections of several specific elastic or inelastic hyperon-nuclei reactions can be measured via the scattering between the hyperons and the nuclei in the dense components of the BESIII detector. Subsequently, the cross sections of the corresponding hyperon-nucleon interactions can be extracted with further phenomenological calculations. Furthermore, the interactions between antihyperons and nuclei/nucleons, including scattering and annihilation, can also be studied via the method proposed in this paper. The results will greatly benefit the precise probing of hyperon-nuclei/nucleon interactions and provide constraints for studies of the potential of the strong interaction, the origin of color confinement, unified models for baryon-baryon interactions, and the internal structure of neutron stars. In addition, the prospects for corresponding studies at the future Super Tau-Charm Factory (STCF) are discussed and estimated.
|
arxiv:2209.12601
|
In this paper, we analyze a Hegselmann-Krause opinion formation model with a time-varying time delay and prove that, if the influence function is always positive, then there is exponential convergence to consensus without requiring any smallness assumptions on the time delay function. The analysis is then extended to a model with distributed time delay.
|
arxiv:2206.12151
|
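The consensus dynamics in the abstract above can be sketched with a discrete-time, undelayed Hegselmann-Krause-type update, a simplified stand-in for the delayed continuous-time model actually analyzed. The strictly positive influence function `psi` below is an illustrative choice; with it, every agent influences every other and the opinion diameter contracts geometrically.

```python
# Discrete Hegselmann-Krause-type step with a strictly positive influence
# function: each agent moves to a weighted average of all opinions, with
# weights decaying in opinion distance but never vanishing.

def hk_step(opinions, psi):
    new = []
    for oi in opinions:
        weights = [psi(abs(oi - oj)) for oj in opinions]
        total = sum(weights)
        new.append(sum(w * oj for w, oj in zip(weights, opinions)) / total)
    return new

def diameter(opinions):
    return max(opinions) - min(opinions)

psi = lambda d: 1.0 / (1.0 + d * d)   # always positive -> consensus
ops = [0.0, 0.2, 0.5, 0.9, 1.0]
for _ in range(200):
    ops = hk_step(ops, psi)           # diameter shrinks toward zero
```

Positivity of `psi` is what rules out the cluster formation seen in the classical bounded-confidence model, mirroring the positivity assumption in the theorem.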
With the advent of the big data era and the development of artificial intelligence and other technologies, data security and privacy protection have become increasingly important. Recommendation systems have many applications in society, but their construction is often inseparable from users' data. Especially for deep learning-based recommendation systems, due to the complexity of the model and the characteristics of deep learning itself, the training process not only requires long training times and abundant computational resources but also needs a large amount of user data, which poses a considerable challenge for data security and privacy protection. How to train a distributed recommendation system while ensuring data security has become an urgent problem. In this paper, we implement two schemes, horizontal federated learning and secure distributed training, based on Intel SGX (Software Guard Extensions), an implementation of a trusted execution environment, and the TensorFlow framework, to achieve secure, distributed recommendation-system learning in different scenarios. We experiment on the classical Deep Learning Recommendation Model (DLRM), a neural-network-based machine learning model designed for personalization and recommendation, and the results show that our implementation introduces approximately no loss in model performance, while the training speed remains within acceptable limits.
|
arxiv:2207.05079
|
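The horizontal federated learning scheme mentioned above rests on an aggregation step that combines locally trained parameters without sharing raw data. A minimal FedAvg-style sketch of that step is below; the SGX enclave protection and the DLRM model itself are out of scope here, and plain lists stand in for model parameter vectors.

```python
# FedAvg-style aggregation: average per-client parameter vectors,
# weighted by the number of local training samples each client holds.

def federated_average(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

In a trusted-execution setting, this aggregation would run inside the enclave so that individual client updates are never exposed in the clear.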
We numerically calculated angular momentum transfer processes in a dense particulate disk within the Roche limit by global $N$-body simulations, up to $N = 10^5$, for parameters corresponding to a protolunar disk generated by a giant impact on the proto-Earth. The simulations include both self-gravity and inelastic physical collisions. We first formalized expressions for the angular momentum transfer rate including self-gravity and calculated the transfer rate from the results of our $N$-body simulations. Spiral structure forms within the Roche limit through self-gravity and the energy dissipation of inelastic collisions, and angular momentum is effectively transferred outward. Angular momentum transfer is dominated by both the gravitational torque due to the spiral structure and the particles' collective motion associated with the structure. Since the formation and evolution of the spiral structure is regulated by the disk surface density, the angular momentum transfer rate depends on surface density but not on particle size or number, so that the evolution timescale of a particulate disk is independent of the number of particles ($N$) used to represent the disk, provided $N$ is large enough to resolve the spiral structure. With $N = 10^5$ the detailed spiral structure is resolved, while it is only poorly resolved with $N = 10^3$; however, we found that the calculated angular momentum transfer does not change as long as $N \gtrsim 10^3$.
|
arxiv:astro-ph/0108133
|
Three simple ideas about transverse spin observables are presented for the purpose of stimulating discussion. The manuscript is based on a presentation at the "Transversity 2014" workshop in Torre Chia, Sardinia, Italy, on June 9-13, 2014, where approximately sixty experts on transverse spin physics had gathered to share recent results in an atmosphere of sun-drenched intensity.
|
arxiv:1409.2386
|
Determinantal point processes (DPPs) are a family of probabilistic models that exhibit repulsive behavior and lend themselves naturally to many tasks in machine learning where returning a diverse set of objects is important. While there are fast algorithms for sampling, marginalization, and conditioning, much less is known about learning the parameters of a DPP. Our contribution is twofold: (i) we establish the optimal sample complexity achievable for this problem and show that it is governed by a natural parameter, which we call the \emph{cycle sparsity}; (ii) we propose a provably fast combinatorial algorithm that implements the method of moments efficiently and achieves optimal sample complexity. Finally, we give experimental results that confirm our theoretical findings.
|
arxiv:1703.00539
|
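The marginalization property underlying the learning problem above is compact enough to state in code: for a DPP with marginal kernel $K$, the probability that a set $S$ is contained in the random sample equals the principal minor $\det(K_S)$. The sketch below is a pure-Python illustration of this standard fact, not the paper's method-of-moments algorithm; the toy kernel is an assumption.

```python
# Inclusion probabilities of a DPP: P(S subset of sample) = det(K_S).
# Determinant via Gaussian elimination with partial pivoting.

def det(M):
    n = len(M)
    M = [row[:] for row in M]
    d = 1.0
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[piv][c]) < 1e-12:
            return 0.0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

def inclusion_probability(K, S):
    sub = [[K[i][j] for j in S] for i in S]
    return det(sub)

K = [[0.5, 0.25], [0.25, 0.5]]
p_both = inclusion_probability(K, [0, 1])   # 0.5*0.5 - 0.25*0.25 = 0.1875
```

Note the repulsion: `p_both` (0.1875) is strictly below the product of the singleton marginals (0.25), and it is exactly such joint marginals that moment-based learning of $K$ works from.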
Physical adversarial attacks threaten to fool object detection systems, but reproducible research on the real-world effectiveness of physical patches, and on how to defend against them, requires a publicly available benchmark dataset. We present APRICOT, a collection of over 1,000 annotated photographs of printed adversarial patches in public locations. The patches target several object categories for three COCO-trained detection models, and the photos represent natural variation in position, distance, lighting conditions, and viewing angle. Our analysis suggests that maintaining adversarial robustness in uncontrolled settings is highly challenging, but it is still possible to produce targeted detections under white-box and sometimes black-box settings. We establish baselines for defending against adversarial patches through several methods, including a detector supervised with synthetic data and unsupervised methods such as kernel density estimation, Bayesian uncertainty, and reconstruction error. Our results suggest that adversarial patches can be effectively flagged, both in a high-knowledge, attack-specific scenario and in an unsupervised setting where patches are detected as anomalies in natural images. This dataset and the described experiments provide a benchmark for future research on the effectiveness of, and defenses against, physical adversarial objects in the wild.
|
arxiv:1912.08166
|
Vision Transformers show great superiority in medical image segmentation due to their ability to learn long-range dependencies. For medical image segmentation from 3D data, such as computed tomography (CT), existing methods can be broadly classified into 2D-based and 3D-based methods. One key limitation of 2D-based methods is that the intra-slice information is ignored, while the limitation of 3D-based methods is their high computation cost and memory consumption, resulting in limited feature representation for inner-slice information. During clinical examination, radiologists primarily use the axial plane and then routinely review both axial and coronal planes to form a 3D understanding of the anatomy. Motivated by this fact, our key insight is to design a hybrid model which first learns fine-grained inner-slice information and then generates a 3D understanding of the anatomy by incorporating 3D information. We present a novel \textbf{H}ybrid \textbf{Res}idual Trans\textbf{former} \textbf{(HResFormer)} for 3D medical image segmentation. Building upon standard 2D and 3D Transformer backbones, HResFormer involves two novel key designs: \textbf{(1)} a \textbf{H}ybrid \textbf{L}ocal-\textbf{G}lobal fusion \textbf{M}odule \textbf{(HLGM)} to effectively and adaptively fuse inner-slice information from the 2D Transformer and intra-slice information from 3D volumes for the 3D Transformer, with local fine-grained and global long-range representations; \textbf{(2)} residual learning of the hybrid model, which can effectively leverage inner-slice and intra-slice information for a better 3D understanding of the anatomy. Experiments show that our HResFormer outperforms prior art on widely used medical image segmentation benchmarks. This paper sheds light on an important but neglected way to design Transformers for 3D medical image segmentation.
|
arxiv:2412.11458
|
This paper proposes a novel meta-learning approach to optimize a robust portfolio ensemble. The method uses a deep generative model to generate diverse and high-quality sub-portfolios that are combined to form the ensemble portfolio. The generative model consists of a convolutional layer, a stateful LSTM module, and a dense network. During training, the model takes a randomly sampled batch of Gaussian noise and outputs a population of solutions, which are then evaluated using the objective function of the problem. The weights of the model are updated using a gradient-based optimizer. The convolutional layer transforms the noise into a desired distribution in latent space, while the LSTM module adds dependence between generations. The dense network decodes the population of solutions. The proposed method balances maximizing the performance of the sub-portfolios with minimizing their maximum correlation, resulting in an ensemble portfolio that is robust against systematic shocks. The approach was effective in experiments where stochastic rewards were present. Moreover, the results (Fig. 1) demonstrated that the ensemble portfolio obtained by averaging the generated sub-portfolio weights was robust and generalized well. The proposed method can be applied to problems where diversity is desired among co-optimized solutions for a robust ensemble. The source code and the dataset are in the supplementary material.
|
arxiv:2307.07811
|
We propose a generalization of the Bjorken in-out ansatz for fluid trajectories which, when applied to the (1+1)-dimensional hydrodynamic equations, generates a one-parameter family of analytic solutions interpolating between the boost-invariant Bjorken picture and the non-boost-invariant one by Landau. This parameter characterizes the proper-time scale at which the fluid velocities approach the in-out ansatz. We discuss the resulting rapidity distribution of entropy for various freeze-out conditions and compare it with the original Bjorken and Landau results.
|
arxiv:0706.2108
|
We show that a method presented in [S. L. Trubatch and A. Franco, Canonical procedures for population dynamics, J. Theor. Biol. 48 (1974), 299-324] and later in [G. H. Paine, The development of Lagrangians for biological models, Bull. Math. Biol. 44 (1982), 749-760] for finding Lagrangians of classic models in biology is actually based on finding the Jacobi last multiplier of such models. Using known properties of the Jacobi last multiplier, we show how to obtain linear Lagrangians of those first-order systems and nonlinear Lagrangians of the corresponding single second-order equations that can be derived from them, even in cases where those authors failed, such as the host-parasite model.
|
arxiv:1108.2301
|
We study the correlation energy, the effective anisotropy parameter, and quantum fluctuations of the pseudospin magnetization in bilayer quantum Hall systems at total filling factor $\nu = 1$ by means of exact diagonalizations of the Hamiltonian in the spherical geometry. We compare exact diagonalization results for the ground-state energy with finite-size Hartree-Fock values. In the ordered ground-state phase at small layer separations, the Hartree-Fock data compare reasonably with the exact results. Above the critical layer separation, however, the Hartree-Fock findings still predict an increase in the ground-state energy, while the exact ground-state energy is in this regime independent of the layer separation, indicating the decoupling of the layers and the loss of spontaneous phase coherence between them. We also find accurate values for the pseudospin anisotropy constant, whose dependence on the layer separation provides another very clear indication of the strong interlayer correlations in the ordered phase and shows an inflection point at the phase boundary. Finally, we discuss the possibility of interlayer correlations in biased systems even above the phase boundary for the balanced case. Certain features of our data for the pseudospin anisotropy constant, as well as for the quantum fluctuations of the pseudospin magnetization, are not inconsistent with the occurrence of this effect; however, it appears to be rather weak, at least in the limit of vanishing tunneling amplitude.
|
arxiv:cond-mat/0209349
|
We study ($p$-harmonic) singular functions, defined by means of upper gradients, in bounded domains in metric measure spaces. It is shown that singular functions exist if and only if the complement of the domain has positive capacity, and that they satisfy very precise capacitary identities for superlevel sets. Suitably normalized singular functions are called Green functions. Uniqueness of Green functions is largely an open problem beyond unweighted $\mathbf{R}^n$, but we show that all Green functions (in a given domain and with the same singularity) are comparable. As a consequence, for $p$-harmonic functions with a given pole we obtain a similar comparison result near the pole. Various characterizations of singular functions are also given. Our results hold in complete metric spaces with a doubling measure supporting a $p$-Poincar\'e inequality, or under similar local assumptions.
|
arxiv:1906.09863
|
Predicting drug-drug interactions (DDI) plays an important role in pharmacology and healthcare for identifying potential adverse interactions and beneficial combination therapies between drug pairs. Recently, a flurry of graph learning methods has been introduced to predict drug-drug interactions. However, the evaluation of existing methods has several limitations, such as the absence of a unified comparison framework for DDI prediction methods, a lack of assessment in meaningful real-world scenarios, and insufficient exploration of side-information usage. In order to address these unresolved limitations in the literature, we propose a DDI prediction benchmark on graph learning. We first conduct a unified evaluation comparison among existing methods. To reflect realistic scenarios, we further evaluate the performance of different methods in settings with new drugs involved and examine the performance across different DDI types. Component analysis is conducted on the biomedical network to better utilize side information. Through this work, we hope to provide more insights into the problem of DDI prediction. Our implementation and data are open-sourced at https://anonymous.4open.science/r/ddi-benchmark-acd9/.
|
arxiv:2410.18583
|
Computational thinking has been a recent focus of education research within the sciences. However, there is a dearth of scholarly literature on how best to teach and assess this topic, especially in disciplinary science courses. Physics classes with computation integrated into the curriculum are a fitting setting for investigating computational thinking. In this paper, we lay the foundation for exploring computational thinking in introductory physics courses. First, we review relevant literature to synthesize a set of potential learning goals that students could engage with when working with computation. The computational thinking framework that we have developed features 14 practices contained within 6 categories. We use in-class video data as existence proofs of the computational thinking practices proposed in our framework. In doing this work, we hope to provide ways for teachers to assess their students' development of computational thinking, while also giving physics education researchers some guidance on how to study this topic in greater depth.
|
arxiv:2105.07981
|
Using electronic structure calculations based on density functional theory, we predict and study the structural, mechanical, electronic, magnetic, and transport properties of a new full Heusler chalcogenide, namely Fe$_2$CrTe, in both bulk and heterostructure form. The system shows ferromagnetic and half-metallic (HM)-like behavior, with a very high (about 95%) spin polarization at the Fermi level, in its cubic phase. Interestingly, under tetragonal distortion a clear minimum (with almost the same energy as the cubic phase) has also been found, at a $c/a$ value of 1.26, which, however, shows a ferrimagnetic and fully metallic nature. The compound is found to be dynamically stable against lattice vibrations in both phases. The elastic properties indicate that the compound is mechanically stable in both phases, following the stability criteria of the cubic and tetragonal phases, and the elastic parameters reveal the mechanically anisotropic and ductile nature of the alloy system. Due to the HM-like behavior of the cubic phase, and keeping practical aspects in mind, we probe the effect of strain as well as of the substrate on various physical properties of this alloy. The transmission profile of the Fe$_2$CrTe/MgO/Fe$_2$CrTe heterojunction has been calculated to probe it as a magnetic tunnel junction (MTJ) material in both the cubic and tetragonal phases. A considerably large tunneling magnetoresistance (TMR) ratio of 1000% is observed for the tetragonal phase, one order of magnitude larger than that of the cubic phase.
|
arxiv:2301.09843
|
We study in an explicit manner the partial sums of the multiplicative inverse of the Riemann zeta function and its derivative.
|
arxiv:2404.15520
|
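The classical Dirichlet series $1/\zeta(s) = \sum_{n\ge 1} \mu(n)/n^s$ links partial sums of the multiplicative inverse of the zeta function to the Möbius function $\mu$. As a small sketch of the objects involved (the paper's precise partial sums may be defined differently), the code below sieves $\mu$ up to $N$ and forms the partial sum at $s = 2$, which approaches $1/\zeta(2) = 6/\pi^2$.

```python
import math

# Sieve the Moebius function mu(n) for n <= N, then form the partial sum
# of the Dirichlet series for 1/zeta(s) at s = 2.

def mobius_sieve(N):
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(p, N + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] = -mu[m]          # flip sign for each prime factor
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0               # kill non-squarefree n
    return mu

def partial_sum_inverse_zeta(s, N):
    mu = mobius_sieve(N)
    return sum(mu[n] / n ** s for n in range(1, N + 1))

approx = partial_sum_inverse_zeta(2.0, 10000)
target = 6.0 / math.pi ** 2             # 1 / zeta(2)
```

With $N = 10^4$ the partial sum already agrees with $6/\pi^2 \approx 0.6079$ to roughly the size of the tail, $O(1/N)$.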
Accurate prediction of fracture toughness under complex loading conditions, such as mixed mode I/II, is essential for reliable failure assessment. This paper develops a machine learning framework for predicting fracture toughness and crack initiation angles by directly utilizing stress, strain, or displacement distributions, represented by selected nodes, as input features. Validation is conducted using experimental data across various mode mixities and specimen geometries for brittle materials. Among the stress, strain, and displacement fields, it is shown that stress-based features, when paired with multilayer perceptron models, achieve high predictive accuracy, with $R^2$ scores exceeding 0.86 for fracture load predictions and 0.94 for angle predictions. A comparison with the theory of critical distances (generalized maximum tangential stress) demonstrates the high accuracy of the framework. Furthermore, the impact of input parameter selection is studied, and it is demonstrated that advanced feature selection algorithms enable the framework to handle different ranges and densities of the representing field. The framework's performance is further validated on datasets with a limited number of data points and restricted mode mixities, where it maintains high accuracy. The proposed framework is computationally efficient and practical, and it operates without any supplementary post-processing steps, such as stress intensity factor calculations.
|
arxiv:2503.00689
|
Heavy-ion collisions provide a unique opportunity for studying the properties of exotic hadrons with two charm quarks. The production of $T_{cc}^+$ is significantly enhanced in nuclear collisions compared to proton-proton collisions due to the creation of multiple charm pairs. In this study, we employ the Langevin equation in combination with the instantaneous coalescence model (LICM) to investigate the production of $T_{cc}^+$ and $\Xi_{cc}^{++}$, which consist of two charm quarks. We consider $T_{cc}^+$ as a molecular state composed of $D$ and $D^*$ mesons. The Langevin equation is used to calculate the energy loss of charm quarks and $D$ mesons in the hot medium. The hadronization process, in which charm quarks transform into the $D$ states that constitute $T_{cc}^+$, is described using the coalescence model. The coalescence probability between $D$ and $D^*$ is determined by the Wigner function, which encodes the information of the $T_{cc}^+$ wave function. Our results show that $T_{cc}^+$ production varies by approximately one order of magnitude when different widths in the Wigner function, representing distinct binding energies of $T_{cc}^+$, are considered. This variation offers valuable insights into the nature of $T_{cc}^+$ through the analysis of its wave function. The $\Xi_{cc}^{++}$ is treated as a hadronic state produced at the hadronization of the deconfined matter; its production is also calculated as a comparison with the molecular state $T_{cc}^+$.
|
arxiv:2309.02987
|
Inevitability properties in branching temporal logics are of the form $\forall\Diamond\phi$ ("along all paths, eventually $\phi$"), where $\phi$ is an arbitrary (timed) CTL formula. In the sense that "good things will happen," they parallel the "liveness" properties of linear temporal logics. Such inevitability properties in dense-time logics can be analyzed with greatest-fixpoint calculation. We present algorithms to model-check inevitability properties both with and without the requirement of non-Zeno computations. We discuss a technique for early decision on greatest fixpoints in these temporal logics, and experiment with the effect of non-Zeno computations on the evaluation of greatest fixpoints. We also discuss the TCTL subclass with only universal path quantifiers, which allows for safe abstraction analysis of inevitability properties. Finally, we report on our implementation and experiments to show the plausibility of our ideas.
|
arxiv:cs/0304003
|
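The greatest-fixpoint calculation mentioned in the abstract above can be sketched on a finite transition system: iterate a monotone operator on state sets downward from the full state space until it stabilizes. The toy system and the invariance property below (the $\nu$-calculus formula $\nu Z.\, \mathit{safe} \wedge \mathrm{pre}(Z)$) are illustrative assumptions, not full TCTL semantics.

```python
# Greatest fixpoint by downward iteration: start from the full state set
# and repeatedly apply the monotone operator until nothing changes.

def greatest_fixpoint(states, op):
    current = set(states)
    while True:
        nxt = op(current)
        if nxt == current:
            return current
        current = nxt

# Toy example: states from which every successor stays in a "safe" region.
succ = {0: {1}, 1: {0}, 2: {0, 3}, 3: {3}}
safe = {0, 1, 2}

def op(Z):
    # safe /\ pre(Z): safe states all of whose successors remain in Z
    return {s for s in safe if succ[s] <= Z}

invariant = greatest_fixpoint(succ.keys(), op)   # {0, 1}
```

State 2 drops out after one iteration because it can step to the unsafe state 3; the surviving set $\{0, 1\}$ is the largest set that is safe and closed under transitions, exactly the greatest-fixpoint semantics.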
We study the discovery potential of the minimal universal extra dimension (MUED) model and improve it utilizing the multijet + lepton mode at the LHC. Since the MUED has a nearly degenerate spectrum, most events have only soft jets and small missing transverse energy, making the signature challenging to search for. We apply $M_{T2}$ for the event selection and set the invisible-particle mass of $M_{T2}$ (the test mass) to zero. The test mass is much smaller than the invisible-particle mass of the MUED. In that case, $M_{T2}$ of the signal can be large depending on the upstream radiation (USR), which includes initial-state radiation (ISR). On the other hand, $M_{T2}$ of the background lies mainly below the top quark mass; hence, the signal can be extracted from the background in the high-$M_{T2}$ region. Since we use the leading jets for $M_{T2}$, there is a combinatorics effect; we found that this effect also enhances the signal-to-background ratio at high $M_{T2}$. We perform a detailed simulation with matrix element corrections to the QCD radiation. The discovery potential of the MUED is improved by the $M_{T2}$ cut, and the improvement is especially significant for the most degenerate parameter point we consider, $\Lambda R = 10$.
|
arxiv:1107.3369
|
Low-cost and low-power processors play a vital role in the field of digital signal processing (DSP). The OMAP-L138 development kit offers low cost, low power consumption, ease of use, and speed, with a wide variety of applications including digital signal processing, image processing, and video processing. This paper presents a basic introduction to the OMAP-L138 processor and quick procedural steps for real-time and non-real-time implementations, with a set of programs. The real-time experiments are based on audio, in the applications of audio loopback, delay, and echo, whereas the non-real-time experiments are the generation of a sine wave and low-pass and high-pass filters.
|
arxiv:2001.10094
|
This paper describes precise measurements of the thermal neutron flux in the LSM underground laboratory in proximity to the EDELWEISS-II dark matter search experiment, together with short measurements at various other locations. Monitoring of the thermal neutron flux is accomplished using a mobile detection system with a low-background proportional counter filled with $^3$He. On average, 75 neutrons per day are detected with a background level below 1 count per day (cpd). This provides a unique possibility of a day-by-day study of variations of the neutron field at a deep underground site. The measured average $4\pi$ neutron flux per cm$^2$ in the proximity of EDELWEISS-II is $\phi_{mb} = 3.57 \pm 0.05^{\mathrm{stat}} \pm 0.27^{\mathrm{syst}} \times 10^{-6}$ neutrons/sec. We report the first experimental observation that the point-to-point thermal neutron flux at LSM varies by more than a factor of two.
|
arxiv:1001.4383
|
The purpose of this paper is to analyze the mechanisms behind the interplay of deterministic and stochastic models for contagious diseases. Deterministic models for contagious diseases are prone to predict global stability. Small natural birth and death rates, in comparison to disease parameters like the contact rate and the removal rate, ensure that the globally stable endemic equilibrium corresponds to a tiny average proportion of infected individuals. Asymptotic equilibrium levels corresponding to low numbers of individuals invalidate the deterministic results. Diffusion effects force the frequency functions of the stochastic model to possess stability properties similar to those of the deterministic model. Particular simulations of the stochastic model, however, predict oscillatory patterns. Small and isolated populations show longer periods, more violent oscillations, and larger probabilities of extinction. We prove that evolution maximizes the infectiousness of the disease, as measured by the ability to increase the proportion of infected individuals, provided the stochastic oscillations are moderate enough to keep the proportion of susceptible individuals near a deterministic equilibrium. We close our paper with a discussion of the herd-immunity concept and stress its close relation to vaccination programs.
|
arxiv:2104.03254
|
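The deterministic/stochastic interplay described above can be sketched with a minimal discrete-time stochastic SIS chain; the transmission rule and the parameter values below are illustrative assumptions, not the models analyzed in the paper. In such a chain, fluctuations around the deterministic endemic level can carry a small population all the way to extinction, which the deterministic model never predicts.

```python
import random

# One step of a discrete-time stochastic SIS chain: each susceptible is
# infected independently, each infected recovers independently.

def sis_step(S, I, beta, gamma, rng):
    N = S + I
    p_inf = 1.0 - (1.0 - beta / N) ** I   # per-susceptible infection prob.
    new_inf = sum(rng.random() < p_inf for _ in range(S))
    recov = sum(rng.random() < gamma for _ in range(I))
    return S - new_inf + recov, I + new_inf - recov

rng = random.Random(0)                    # seeded for reproducibility
S, I = 95, 5
traj = []
for _ in range(300):
    S, I = sis_step(S, I, beta=0.3, gamma=0.1, rng=rng)
    traj.append(I)
```

The corresponding deterministic SIS model with these rates has endemic level $N(1 - \gamma/\beta)$; individual stochastic runs oscillate around such a level, and in small populations the oscillations make extinction events far more likely.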
Change detection is the study of detecting changes between two different images of a scene taken at different times. From the detected change areas alone, however, a human cannot understand how the two images differ. Therefore, semantic understanding is required in change detection research, for example in disaster investigation. This paper proposes the concept of semantic change detection, which involves intuitively inserting semantic meaning into detected change areas. We mainly focus on the novel semantic segmentation, in addition to a conventional change detection approach. In order to solve this problem and obtain a high level of performance, we propose an improvement to the hypercolumns representation, hereafter known as hypermaps, which effectively uses convolutional maps obtained from convolutional neural networks (CNNs). We also employ multi-scale feature representation captured by different image patches. We applied our method to the Tsunami panoramic change detection dataset and re-annotated the changed areas of the dataset with semantic classes. The results show that our multi-scale hypermaps provide outstanding performance on the re-annotated Tsunami dataset.
|
arxiv:1604.07513
|
attributions are a common local explanation technique for deep learning models on single samples as they are easily extractable and demonstrate the relevance of input values. in many cases, heatmaps visualize such attributions for samples, for instance, on images. however, heatmaps are not always the ideal visualization to explain certain model decisions for other data types. in this review, we focus on attribution visualizations for time series. we collect attribution heatmap visualizations and some alternatives, discuss the advantages as well as disadvantages and give a short position towards future opportunities for attributions and explanations for time series.
|
arxiv:2109.12935
|
we give a combinatorial proof of a recent geometric result of farkas and lian on linear series on curves with prescribed incidence conditions. the result states that the expected number of degree - $ d $ morphisms from a general genus $ g $, $ n $ - marked curve $ c $ to $ \ mathbb { p } ^ r $, sending the marked points on $ c $ to specified general points in $ \ mathbb { p } ^ r $, is equal to $ ( r + 1 ) ^ g $ for sufficiently large $ d $. this computation may be rephrased as an intersection problem on grassmannians, which has a natural combinatorial interpretation in terms of young tableaux by the classical littlewood - richardson rule. we give a bijection, generalizing the well - known rsk correspondence, between the tableaux in question and the $ ( r + 1 ) $ - ary sequences of length $ g $, and we explore our bijection ' s combinatorial properties. we also apply similar methods to give a combinatorial interpretation and proof of the fact that, in the modified setting in which $ r = 1 $ and several marked points map to the same point in $ \ mathbb { p } ^ 1 $, the number of morphisms is still $ 2 ^ g $ for sufficiently large $ d $.
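the count $ ( r + 1 ) ^ g $ can be checked by brute - force enumeration of the $ ( r + 1 ) $ - ary sequences of length $ g $ that the bijection targets ( a toy check of the closed form, not the paper's proof ):

```python
from itertools import product

def count_sequences(r, g):
    """Enumerate the (r+1)-ary sequences of length g and count them."""
    return sum(1 for _ in product(range(r + 1), repeat=g))

# matches the closed form (r+1)**g
assert count_sequences(1, 5) == 2 ** 5   # the r = 1 case: 2^g morphisms
assert count_sequences(2, 4) == 3 ** 4
```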
|
arxiv:2201.00416
|
a recurrence scheme is presented to decompose an $ n $ - qubit unitary gate into the product of no more than $ N ( N - 1 ) / 2 $ single qubit gates with a small number of controls, where $ N = 2 ^ n $. a detailed description of the recurrence steps and formulas for the number of $ k $ - controlled single qubit gates in the decomposition are given. a comparison of the result to a previous scheme is presented, and future research directions are discussed.
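a quick sketch of the gate - count bound quoted above ( evaluating the closed form only, not the paper's recurrence; the function name is ours ):

```python
def max_single_qubit_gates(n):
    """Upper bound on the number of single-qubit gates (with controls)
    in the decomposition of an n-qubit unitary, where N = 2**n."""
    N = 2 ** n
    return N * (N - 1) // 2

# e.g. a 2-qubit unitary needs at most 6 controlled single-qubit gates,
# a 3-qubit unitary at most 28
print(max_single_qubit_gates(2))  # → 6
print(max_single_qubit_gates(3))  # → 28
```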
|
arxiv:1311.3599
|
we study the procedure of the reconstruction of phantom - scalar field potentials in two - field cosmological models. it is shown that while in the one - field case the chosen cosmological evolution defines uniquely the form of the scalar potential, in the two - field case one has an infinite number of possibilities. the classification of a large class of possible potentials is presented and the dependence of cosmological dynamics on the choice of initial conditions is investigated qualitatively and numerically for two particular models.
|
arxiv:0711.4300
|
we describe a scheme for the formation of globular cluster systems in early - type galaxies using a semi - analytic model of galaxy formation. operating within a lambda cdm cosmology, we assume that metal - poor globular clusters are formed at high - redshift in pre - galactic fragments, and that the subsequent gas - rich merging of these fragments leads to the formation of metal - rich clusters. we compare our results with contemporary data, and look at the particular case of the globular cluster and stellar metallicity distribution function of the nearby elliptical galaxy centaurus a.
|
arxiv:astro-ph/0207155
|
we study topological order in a toric code in three spatial dimensions, or a 3 + 1d z _ 2 gauge theory, at finite temperature. we compute exactly the topological entropy of the system, and show that it drops, for any infinitesimal temperature, to half its value at zero temperature. the remaining half of the entropy stays constant up to a critical temperature tc, dropping to zero above tc. these results show that topologically ordered phases exist at finite temperatures, and we give a simple interpretation of the order in terms of fluctuating strings and membranes, and how thermally induced point defects affect these extended structures. finally, we discuss the nature of the topological order at finite temperature, and its quantum and classical aspects.
|
arxiv:0804.3591
|
personalized federated learning ( pfl ) focuses on tailoring models to individual iiot clients in federated learning by addressing data heterogeneity and diverse user needs. although existing studies have proposed effective pfl solutions from various perspectives, they overlook the issue of forgetting both historical personalized knowledge and global generalized knowledge during local training on clients. therefore, this study proposes a novel pfl method, federated progressive self - distillation ( fedpsd ), based on logits calibration and progressive self - distillation. we analyze the impact mechanism of client data distribution characteristics on personalized and global knowledge forgetting. to address the issue of global knowledge forgetting, we propose a logits calibration approach for the local training loss and design a progressive self - distillation strategy to facilitate the gradual inheritance of global knowledge, where the model outputs from the previous epoch serve as virtual teachers to guide the training of subsequent epochs. moreover, to address personalized knowledge forgetting, we construct calibrated fusion labels by integrating historical personalized model outputs, which are then used as teacher model outputs to guide the initial epoch of local self - distillation, enabling rapid recall of personalized knowledge. extensive experiments under various data heterogeneity scenarios demonstrate the effectiveness and superiority of the proposed fedpsd method.
|
arxiv:2412.00410
|
application of the adiabatic model of quantum computation requires efficient encoding of the solution to computational problems into the lowest eigenstate of a hamiltonian that supports universal adiabatic quantum computation. experimental systems are typically limited to restricted forms of 2 - body interactions. therefore, universal adiabatic quantum computation requires a method for approximating quantum many - body hamiltonians up to arbitrary spectral error using at most 2 - body interactions. hamiltonian gadgets, introduced around a decade ago, offer the only current means to address this requirement. although the applications of hamiltonian gadgets have steadily grown since their introduction, little progress has been made in overcoming the limitations of the gadgets themselves. in this experimentally motivated theoretical study, we introduce several gadgets which require significantly more realistic control parameters than similar gadgets in the literature. we employ analytical techniques which result in a reduction of the resource scaling as a function of spectral error for the commonly used subdivision, 3 - to 2 - body and $ k $ - body gadgets. accordingly, our improvements reduce the resource requirements of all proofs and experimental proposals making use of these common gadgets. next, we numerically optimize these new gadgets to illustrate the tightness of our analytical bounds. finally, we introduce a new gadget that simulates a $ yy $ interaction term using hamiltonians containing only $ \ { x, z, xx, zz \ } $ terms. apart from possible implications in a theoretical context, this work could also be useful for a first experimental implementation of these key building blocks by requiring less control precision without introducing extra ancillary qubits.
|
arxiv:1311.2555
|
this work is aimed at quantifying the uncertainties in the 3d reconstruction of the location of coronal mass ejections ( cmes ) obtained with the polarization ratio technique. the method takes advantage of the different distributions along the line of sight ( los ) of total ( tb ) and polarized ( pb ) brightnesses to estimate the average location of the emitting plasma. to this end, we assumed two simple electron density distributions along the los ( a constant density and gaussian density profiles ) for a plasma blob and synthesized the expected tb and pb for different distances $ z $ of the blob from the plane of the sky ( pos ) and different projected altitudes $ \ rho $. reconstructed locations of the blob along the los were thus compared with the real ones, allowing a precise determination of uncertainties in the method. independently of the analytical density profile, when the blob is centered at a small distance from the pos ( i. e. for limb cmes ) the distance from the pos starts to be significantly overestimated. the polarization ratio technique provides the los position of the center of mass of what we call the folded density distribution, given by reflecting and summing in front of the pos the fraction of the density profile located behind that plane. on the other hand, when the blob is far from the pos, but with very small projected altitudes ( i. e. for halo cmes, $ \ rho < 1. 4 $ r $ _ \ odot $ ), the inferred distance from that plane is significantly underestimated. better determination of the real blob position along the los is given for intermediate locations, and in particular when the blob is centered at an angle of $ 20 ^ \ circ $ from the pos. these results have important consequences not only for future 3d reconstruction of cmes with the polarization ratio technique, but also for the design of future coronagraphs aimed at providing continuous monitoring of halo - cmes for space weather prediction purposes.
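the "folded density" center of mass described above can be sketched numerically; the gaussian blob, grid, and parameters below are illustrative assumptions, not the authors' setup:

```python
import math

def folded_center_of_mass(z0, sigma, zmax=50.0, n=20001):
    """Fold a Gaussian LOS density profile about the plane of the sky
    (z = 0) and return the center of mass of the folded profile, i.e.
    the idealized distance inferred from the polarization ratio."""
    dz = 2 * zmax / (n - 1)
    num = den = 0.0
    for i in range(n):
        z = -zmax + i * dz
        rho = math.exp(-0.5 * ((z - z0) / sigma) ** 2)
        num += abs(z) * rho * dz   # folding maps z -> |z|
        den += rho * dz
    return num / den

# near the plane of the sky, the inferred distance overestimates the true one
assert folded_center_of_mass(0.5, 2.0) > 0.5
# far from the plane of the sky, the folding bias disappears
assert abs(folded_center_of_mass(10.0, 2.0) - 10.0) < 1e-3
```

note that this toy model only reproduces the overestimation for blobs near the plane of the sky; the underestimation for halo cmes in the abstract involves the projected - altitude geometry, which is not modelled here.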
|
arxiv:1503.00314
|
the charged lepton mass formula can be explained when the masses are proportional to the squared vacuum expectation values ( vevs ) of scalar fields. we introduce u ( 3 ) flavor symmetry and its nonet scalar field $ \ phi $, whose vev structure plays an essential role in generating the fermion mass spectrum. we can naturally obtain the bilinear form of the yukawa coupling $ y _ { ij } \ propto \ sum _ k < \ phi _ { ik } > < \ phi _ { kj } > $ without non - renormalizable interactions, when the flavor symmetry is broken only through the yukawa coupling and tadpole terms. we also speculate on the possible vev structure of $ < \ phi > $.
|
arxiv:0708.3913
|
the double - slit experiment has become a classic thought experiment, for its clarity in expressing the central puzzle of quantum mechanics - - wave - particle complementarity. such wave - particle duality continues to be challenged and investigated in a broad range of entities with electrons, neutrons, helium atoms, c $ _ { 60 } $ fullerenes, bose - einstein condensates and biological molecules. all existing experiments are performed at scales larger than an angstrom. in this article, we present a double - slit scenario at the fermi scale with new entities - - coherent photon products in heavy - ion collisions. virtual photons from the electromagnetic fields of relativistic heavy ions can fluctuate to quark - antiquark pairs, scatter off a target nucleus and emerge as vector mesons. the two colliding nuclei can take turns to act as targets, forming a double - slit interference pattern. furthermore, the ` which - way ' information can be partially resolved by the violent strong interactions in the proposed scenario, which demonstrates a key concept of quantum mechanics - - the complementarity principle.
|
arxiv:1810.10694
|
we investigate the higgs mechanism for gravity, which has been recently put forward by ' t hooft, when the polyakov - type action for scalar fields is added to the original action. we find that from the polyakov - type action, it is very natural to derive an ' alternative ' metric tensor composed of the scalar fields. the positivity condition on the determinant can be also derived easily by requiring that this term does not change the dynamics at all and becomes a topological number, that is, the wrapping number. it turns out that the gauge conditions adopted by ' t hooft are nothing but the restriction on a sector with unit wrapping number.
|
arxiv:0709.2419
|
we present a single neural network architecture composed of task - agnostic components ( vits, convolutions, and lstms ) that achieves state - of - art results on both the imagenav ( " go to location in < this picture > " ) and objectnav ( " find a chair " ) tasks without any task - specific modules like object detection, segmentation, mapping, or planning modules. such general - purpose methods offer advantages of simplicity in design, positive scaling with available compute, and versatile applicability to multiple tasks. our work builds upon the recent success of self - supervised learning ( ssl ) for pre - training vision transformers ( vit ). however, while the training recipes for convolutional networks are mature and robust, the recipes for vits are contingent and brittle, and in the case of vits for visual navigation, yet to be fully discovered. specifically, we find that vanilla vits do not outperform resnets on visual navigation. we propose the use of a compression layer operating over vit patch representations to preserve spatial information along with policy training improvements. these improvements allow us to demonstrate positive scaling laws for the first time in visual navigation tasks. consequently, our model advances state - of - the - art performance on imagenav from 54. 2 % to 82. 0 % success and performs competitively against concurrent state - of - art on objectnav with success rate of 64. 0 % vs. 65. 0 %. overall, this work does not present a fundamentally new approach, but rather recommendations for training a general - purpose architecture that achieves state - of - art performance today and could serve as a strong baseline for future methods.
|
arxiv:2303.07798
|
context. gamma doradus ( hereafter $ \ gamma $ ~ dor ) stars are gravity - mode pulsators whose periods carry information about the internal structure of the star. these periods are especially sensitive to the internal rotation and chemical mixing, two processes that are currently not well constrained in the theory of stellar evolution. aims. we aim to identify the pulsation modes and deduce the internal rotation and buoyancy travel time for 106 $ \ gamma $ dor stars observed by the transiting exoplanet survey satellite ( tess ) mission in its southern continuous viewing zone ( hereafter s - cvz ). we rely on 140 previously detected period - spacing patterns, that is, series of ( near - ) consecutive pulsation mode periods. methods. we used the asymptotic expression to compute gravity - mode frequencies for ranges of the rotation rate and buoyancy travel time that cover the physical range in $ \ gamma $ ~ dor stars. those frequencies were fitted to the observed period - spacing patterns by minimising a custom cost function. the effects of rotation were evaluated using the traditional approximation of rotation, using the stellar pulsation code gyre. results. we obtained the pulsation mode identification, internal rotation and buoyancy travel time for 60 tess $ \ gamma $ ~ dor stars. for the remaining 46 targets, the detected patterns are either too short or contained too many missing modes for unambiguous mode identification, and longer light curves are required. for the successfully analysed stars, we found that period - spacing patterns from 1 - yr long tess light curves can constrain the internal rotation and buoyancy travel time to a precision of $ \ rm 0. 03 ~ d ^ { - 1 } $ and 400s, respectively, which is about half as precise as literature results based on 4 - yr kepler light curves of $ \ gamma $ ~ dor stars.
|
arxiv:2210.09526
|
$ g $ must be contained in $ h $, and whenever $ h _ 1 $ and $ h _ 2 $ are both in $ h $, then so are $ h _ 1 \ cdot h _ 2 $ and $ h _ 1 ^ { - 1 } $, so the elements of $ h $, equipped with the group operation on $ g $ restricted to $ h $, indeed form a group. in this case, the inclusion map $ h \ to g $ is a homomorphism. in the example of symmetries of a square, the identity and the rotations constitute a subgroup $ r = \ { \ mathrm { id }, r _ 1, r _ 2, r _ 3 \ } $, highlighted in red in the cayley table of the example : any two rotations composed are still a rotation, and a rotation can be undone by ( i. e., is inverse to ) the complementary rotations 270° for 90°, 180° for 180°, and 90° for 270°. the subgroup test provides a necessary and sufficient condition for a nonempty subset $ h $ of a group $ g $ to be a subgroup : it is sufficient to check that $ g ^ { - 1 } \ cdot h \ in h $ for all elements $ g $ and $ h $ in $ h $. knowing a group ' s subgroups is important in understanding the group as a whole. given any subset $ s $ of a group $ g $, the subgroup generated by $ s $ consists of all products of elements of $ s $ and their inverses. it is the smallest subgroup of $ g $ containing $ s $. in the example of symmetries of a square, the subgroup generated by $ r _ 2 $ and $ f _ v $
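the subgroup test above can be checked mechanically; below, rotations of the square are modelled as quarter - turn counts mod 4 ( a simplification that ignores the reflections of the full symmetry group ):

```python
def is_subgroup(h, op, inv):
    """One-step subgroup test: a nonempty subset h is a subgroup iff
    op(inv(g), k) stays in h for all g, k in h."""
    return bool(h) and all(op(inv(g), k) in h for g in h for k in h)

# rotations of the square: id, r1, r2, r3 as 0, 1, 2, 3 quarter turns mod 4
rot = lambda a, b: (a + b) % 4
inv = lambda a: (-a) % 4

assert is_subgroup({0, 1, 2, 3}, rot, inv)   # all rotations form a subgroup
assert is_subgroup({0, 2}, rot, inv)         # the subgroup generated by r2
assert not is_subgroup({1, 3}, rot, inv)     # lacks the identity: fails
```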
|
https://en.wikipedia.org/wiki/Group_(mathematics)
|
experimental evidence from measurements of the a. c. and d. c. susceptibility, and heat capacity data show that the pyrochlore structure oxide, gd _ 2ti _ 2o _ 7, exhibits short range order that starts developing at 30k, as well as long range magnetic order at $ t \ sim 1 $ k. the curie - weiss temperature, $ \ theta _ { cw } $ = - 9. 6k, is largely due to exchange interactions. deviations from the curie - weiss law occur below $ \ sim $ 10k while magnetic heat capacity contributions are found at temperatures above 20k. a sharp maximum in the heat capacity at $ t _ c = 0. 97 $ k signals a transition to a long range ordered state, with the magnetic specific heat accounting for only $ \ sim $ 50 % of the magnetic entropy. the heat capacity above the phase transition can be modeled by assuming that a distribution of random fields acts on the $ ^ 8s _ { 7 / 2 } $ ground state for gd $ ^ { 3 + } $. there is no frequency dependence to the a. c. susceptibility in either the short range or long range ordered regimes, hence suggesting the absence of any spin - glassy behavior. mean field theoretical calculations show that no long range ordered ground state exists for the conditions of nearest - neighbor antiferromagnetic exchange and long range dipolar couplings. at the mean - field level, long range order at various commensurate or incommensurate wave vectors is found only upon inclusion of exchange interactions beyond nearest - neighbor exchange and dipolar coupling. the properties of gd _ 2ti _ 2o _ 7 are compared with other geometrically frustrated antiferromagnets such as the gd _ 3ga _ 5o _ { 12 } gadolinium gallium garnet, re _ 2ti _ 2o _ 7 pyrochlores where re = tb, ho and tm, and heisenberg - type pyrochlores such as y _ 2mo _ 2o _ 7, tb _ 2mo _ 2o _ 7, and spinels such as znfe _ 2o _ 4.
|
arxiv:cond-mat/9906043
|
a functional representation of free l \ ' evy processes is established via an ensemble of unitarily invariant hermitian matrix - valued l \ ' evy processes. this is accomplished by proving functional asymptotics of their empirical spectral processes towards the law of a free l \ ' evy process. this result recovers a functional version of wigner ' s theorem and introduces a functional version of marchenko - pastur ' s theorem, providing the free poisson process as the noncommutative limit process.
|
arxiv:1511.03362
|
this paper provides a comprehensive discussion of neutralino dark matter within classes of extended supersymmetric models referred to as the ussm containing one additional sm singlet higgs plus an extra $ z ' $, together with their superpartners the singlino and bino '. these extra states of the ussm can significantly modify the nature and properties of neutralino dark matter relative to that of the minimal ( or even next - to - minimal ) supersymmetric standard models. we derive the feynman rules for the ussm and calculate the dark matter relic abundance and direct detection rates for elastic scattering in the ussm for interesting regions of parameter space where the largest differences are expected.
|
arxiv:0811.2204
|
despite their success, large vision - language models ( lvlms ) remain vulnerable to hallucinations. while existing studies attribute the cause of hallucinations to insufficient visual attention to image tokens, our findings indicate that hallucinations also arise from interference from instruction tokens during decoding. intuitively, certain instruction tokens continuously distort lvlms ' visual perception during decoding, hijacking their visual attention toward less discriminative visual regions. this distortion prevents them integrating broader contextual information from images, ultimately leading to hallucinations. we term this phenomenon ' attention hijacking ', where disruptive instruction tokens act as ' attention hijackers '. to address this, we propose a novel, training - free strategy namely attention hijackers detection and disentanglement ( aid ), designed to isolate the influence of hijackers, enabling lvlms to rely on their context - aware intrinsic attention map. specifically, aid consists of three components : first, attention hijackers detection identifies attention hijackers by calculating instruction - driven visual salience. next, attention disentanglement mechanism is proposed to mask the visual attention of these identified hijackers, and thereby mitigate their disruptive influence on subsequent tokens. finally, re - disentanglement recalculates the balance between instruction - driven and image - driven visual salience to avoid over - masking effects. extensive experiments demonstrate that aid significantly reduces hallucination across various lvlms on several benchmarks.
|
arxiv:2503.08216
|
almost a century on from the culmination of the first revolution in quantum physics, we are poised for another. even as we engage in the creation of impactful quantum technologies, it is imperative for us to face the challenges in understanding the phenomenology of various emergent forms of quantum matter. this will involve building on decades of progress in quantum condensed matter physics, and going beyond the well - established ginzburg - landau - wilson paradigm for quantum matter. we outline and discuss several outstanding challenges, including the need to explore and identify the organisational principles that can guide the development of theories, key experimental phenomenologies that continue to confound, and the formulation of methods that enable progress. these efforts will enable the prediction of new quantum materials whose properties facilitate the creation of next generation technologies.
|
arxiv:2501.00447
|
recently, the five - hundred - meter aperture spherical radio telescope ( fast ) measured the three - dimensional velocity of psr j0538 + 2817 in its associated supernova remnant s147 and found a possible spin - velocity alignment in this pulsar. here we show that the high velocity and the spin - velocity alignment in this pulsar can be explained by the so - called electromagnetic rocket mechanism. in this framework, the pulsar is kicked in the direction of the spin axis, which naturally explains the spin - velocity alignment. we scrutinize the evolution of this pulsar and show that the pulsar kick can create a highly relativistic jet at the opposite direction of the kick velocity. the lifetime and energetics of the jet is estimated. it is argued that the jet can generate a gamma - ray burst ( grb ). the long term dynamical evolution of the jet is calculated. it is found that the shock radius of the jet should expand to about 32 pc at present, which is well consistent with the observed radius of the supernova remnant s147 ( $ 32. 1 \ pm4. 8 $ pc ). additionally, our calculations indicate that the current velocity of the grb remnant should be about 440 km s $ ^ { - 1 } $, which is also consistent with the observed blast wave velocity of the remnant of s147 ( 500 km s $ ^ { - 1 } $ ).
|
arxiv:2109.11485
|
magnetic confinement fusion reactors suffer severely from heat and particle losses through turbulent transport, which has inspired the construction of ever larger and more expensive reactors. numerical simulations are vital to their design and operation, but particle collisions are too infrequent for fluid descriptions to be valid. instead, strongly magnetised fusion plasmas are described by the gyrokinetic equations, a nonlinear integro - differential system for evolving the particle distribution functions in a five - dimensional position and velocity space, and the consequent electromagnetic field. due to the high dimensionality, simulations of small reactor sections require hundreds of thousands of cpu hours on high performance computing platforms. we develop a hankel - hermite spectral representation for velocity space that exploits structural features of the gyrokinetic system. the representation exactly conserves discrete free energy in the absence of explicit dissipation, while our hermite hypercollision operator captures landau damping with few variables. calculation of the electromagnetic fields becomes purely local, eliminating inter - processor communication in, and vastly accelerating, searches for linear instabilities. we implement these ideas in spectrogk, an efficient parallel code. turbulent fusion plasmas may dissipate free energy through linear phase mixing to fine scales in velocity space, as in landau damping, or through a nonlinear cascade to fine scales in physical space, as in hydrodynamic turbulence. using spectrogk to study saturated electrostatic drift - kinetic turbulence, we find that the nonlinear cascade suppresses linear phase mixing at energetically - dominant scales, so the turbulence is fluid - like. we use this observation to derive fourier - hermite spectra for the electrostatic potential and distribution function, and confirm these spectra with simulations.
|
arxiv:1603.04727
|
in traditional neural networks for image processing, the inputs must all have the same size, such as 224 * 224 * 3. but how can we train a neural network with inputs of different sizes? a common approach is image deformation, which is accompanied by information loss ( e. g. image crop or warp ). sequence models ( rnn, lstm, etc. ) can accept inputs of different sizes, such as text and audio. but one disadvantage of sequence models is that earlier information becomes more fragmentary as it is transferred across time steps, which makes the network hard to train, especially on long sequential data. in this paper we propose a new network structure called attention incorporate network ( ain ). it solves the problem of different input sizes, including images, text, and audio, and extracts the key features of the inputs by an attention mechanism, paying different attention depending on the importance of the features rather than on the data size. experimentally, ain achieves higher accuracy and better convergence compared to other network structures of the same size.
|
arxiv:1806.03961
|
prior research on out - of - distribution detection ( oodd ) has primarily focused on single - modality models. recently, with the advent of large - scale pretrained vision - language models such as clip, oodd methods utilizing such multi - modal representations through zero - shot and prompt learning strategies have emerged. however, these methods typically involve either freezing the pretrained weights or only partially tuning them, which can be suboptimal for downstream datasets. in this paper, we highlight that multi - modal fine - tuning ( mmft ) can achieve notable oodd performance. despite some recent works demonstrating the impact of fine - tuning methods for oodd, there remains significant potential for performance improvement. we investigate the limitation of na \ " ive fine - tuning methods, examining why they fail to fully leverage the pretrained knowledge. our empirical analysis suggests that this issue could stem from the modality gap within in - distribution ( id ) embeddings. to address this, we propose a training objective that enhances cross - modal alignment by regularizing the distances between image and text embeddings of id data. this adjustment helps in better utilizing pretrained textual information by aligning similar semantics from different modalities ( i. e., text and image ) more closely in the hyperspherical representation space. we theoretically demonstrate that the proposed regularization corresponds to the maximum likelihood estimation of an energy - based model on a hypersphere. utilizing imagenet - 1k ood benchmark datasets, we show that our method, combined with post - hoc oodd approaches leveraging pretrained knowledge ( e. g., neglabel ), significantly outperforms existing methods, achieving state - of - the - art oodd performance and leading id accuracy.
|
arxiv:2503.18817
|
in this article we study the convex hull spanned by the union of trajectories of a standard planar brownian motion, and an independent standard planar brownian bridge. we find exact values of the expectation of perimeter and area of such a convex hull. as an auxiliary result, that is of interest in its own right, we provide an explicit shape of the probability density function of a random variable that represents the time when combined maximum of a standard one - dimensional brownian motion, and an independent standard one - dimensional brownian bridge is attained. at the end, we generalize our results to the case of multiple independent standard planar brownian motions and brownian bridges.
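a monte carlo sketch of the object studied above ( illustrative only: it checks qualitative properties of the convex hull of a planar brownian motion together with an independent bridge, not the paper's exact expectations ):

```python
import math, random

def brownian_motion(n):
    """Standard planar Brownian motion sampled at n steps on [0, 1]."""
    dt = 1.0 / n
    x = y = 0.0
    pts = [(0.0, 0.0)]
    for _ in range(n):
        x += random.gauss(0, math.sqrt(dt))
        y += random.gauss(0, math.sqrt(dt))
        pts.append((x, y))
    return pts

def to_bridge(pts):
    """Turn a BM path into a bridge by subtracting t * (endpoint)."""
    n = len(pts) - 1
    xe, ye = pts[-1]
    return [(x - i / n * xe, y - i / n * ye) for i, (x, y) in enumerate(pts)]

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def area_perimeter(hull):
    """Shoelace area and edge-length sum of a convex polygon."""
    a = p = 0.0
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        a += x1 * y2 - x2 * y1
        p += math.hypot(x2 - x1, y2 - y1)
    return abs(a) / 2, p

random.seed(0)
w = brownian_motion(2000)                 # standard planar BM
b = to_bridge(brownian_motion(2000))      # independent planar bridge
area, perim = area_perimeter(convex_hull(w + b))
assert area > 0 and perim > 0
# the union's hull is at least as large as either path's own hull
assert area >= area_perimeter(convex_hull(w))[0]
```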
|
arxiv:2406.07079
|
we study the regularity of the entropy spectrum of the lyapunov exponents for hyperbolic maps on surfaces. it is well - known that the entropy spectrum is a concave upper semi - continuous function which is analytic on the interior of the set of lyapunov exponents. in this paper we construct a family of horseshoes with a discontinuous entropy spectrum at the boundary of the set of lyapunov exponents.
|
arxiv:1910.05837
|
the \ textit { gaia } - sausage / enceladus ( gs / e ) structure is an accretion remnant which comprises a large fraction of the milky way ' s stellar halo. we study gs / e using high - purity samples of kinematically selected stars from apogee dr16 and \ textit { gaia }. employing a novel framework to account for kinematic selection biases using distribution functions, we fit density profiles to these gs / e samples and measure their masses. we find that gs / e has a shallow density profile in the inner galaxy, with a break between 15 - - 25 ~ kpc beyond which the profile steepens. we also find that gs / e is triaxial, with axis ratios 1 : 0. 55 : 0. 45 ( nearly prolate ), and the major axis is oriented about 80 ~ degrees from the sun - - galactic centre line and 16 degrees above the plane. we measure a stellar mass for gs / e of $ 1. 45 \, ^ { + 0. 92 } _ { - 0. 51 } \, \ mathrm { ( stat. ) } \, ^ { + 0. 13 } _ { - 0. 37 } \ mathrm { ( sys. ) } \ \ times10 ^ { 8 } $ ~ \ msun. our mass estimate is lower than others in the literature, a finding we attribute to the excellent purity of the samples we work with. we also fit a density profile to the entire milky way stellar halo, finding a mass in the range of $ 6. 7 - 8. 4 \ times 10 ^ { 8 } $ ~ \ msun, and implying that gs / e could make up as little as 15 - 25 ~ per ~ cent of the mass of the milky way stellar halo. our lower stellar mass combined with standard stellar - mass - to - halo mass relations implies that gs / e constituted a minor 1 : 8 mass - ratio merger at the time of its accretion.
|
arxiv:2306.03084
|
it is proved that a straight projective - metric space has an open set of centers, if and only if it is either the hyperbolic or a minkowskian geometry. it is also shown that if a straight projective - metric space has some finitely many well - placed centers, then it is either the hyperbolic or a minkowskian geometry.
|
arxiv:1812.09312
|
in this work, the many - spin interactions taking place in mn12 large - spin clusters are extensively studied using the 8 - spin model hamiltonian, for which we determine the possible parameters based on experimental data. account of the many - spin excitations satisfactorily explains positions of the neutron scattering peaks, results of epr measurements and the temperature dependence of the magnetic susceptibility. in particular, strong dzyaloshinsky - moriya interactions are found to be important for the description of neutron scattering data. the role of these interactions for the relaxation of the magnetization is qualitatively discussed.
|
arxiv:cond-mat/9807176
|
diversity is a commonly known principle in the design of recommender systems, but also ambiguous in its conceptualization. through semi - structured interviews we explore how practitioners at three different public service media organizations in the netherlands conceptualize diversity within the scope of their recommender systems. we provide an overview of the goals that they have with diversity in their systems, which aspects are relevant, and how recommendations should be diversified. we show that even within this limited domain, conceptualization of diversity greatly varies, and argue that it is unlikely that a standardized conceptualization will be achieved. instead, we should focus on effective communication of what diversity in this particular system means, thus allowing for operationalizations of diversity that are capable of expressing the nuances and requirements of that particular domain.
|
arxiv:2405.02026
|
we present a novel formulation for removing reflections from polarized images in the wild. we first identify the misalignment issues of existing reflection removal datasets where the collected reflection - free images are not perfectly aligned with input mixed images due to glass refraction. then we build a new dataset with more than 100 types of glass in which the obtained transmission images are perfectly aligned with input mixed images. second, capitalizing on the special relationship between reflection and polarized light, we propose a polarized reflection removal model with a two - stage architecture. in addition, we design a novel perceptual ncc loss that can improve the performance of reflection removal and general image decomposition tasks. we conduct extensive experiments, and results suggest that our model outperforms state - of - the - art methods on reflection removal.
|
arxiv:2003.12789
|
we give elementary proofs of two theorems concerning bounds on the maximum argument of the eigenvalues of a product of two unitary matrices: one by childs \emph{et al.} [j. mod. phys., \textbf{47}, 155 (2000)] and the other one by chau [arxiv:1006.3614]. our proofs have the advantages that the necessary and sufficient conditions for equalities are apparent and that they can be readily generalized to the case of infinite - dimensional unitary operators.
|
arxiv:1006.3978
|
we experimentally demonstrate the generation of customized perfect laguerre - gaussian ( plg ) beams whose intensity maxima are localized around any desired curves. the principle is to apply appropriate algebraic functions to the angular spectra of plg beams. we characterize the propagation properties of these beams and compare them with non - diffraction caustic beams possessing the same intensity profiles. the results manifest that the customized plg beams can maintain their profiles during propagation and suffer less energy loss than the non - diffraction caustic beams, and hence are able to propagate a longer distance. this new structured beam would have potential applications in areas such as optical communication, soliton routing and steering, optical tweezing and trapping, atom optics, etc.
|
arxiv:2202.10692
|
in the context of right - censored data, we study the problem of predicting the restricted time to event based on a set of covariates. under a quadratic loss, this problem is equivalent to estimating the conditional restricted mean survival time ( rmst ). to that aim, we propose a flexible and easy - to - use ensemble algorithm that combines pseudo - observations and super learner. the classical theoretical results of the super learner are extended to right - censored data, using a new definition of pseudo - observations, the so - called split pseudo - observations. simulation studies indicate that the split pseudo - observations and the standard pseudo - observations are similar even for small sample sizes. the method is applied to maintenance and colon cancer datasets, showing the interest of the method in practice, as compared to other prediction methods. we complement the predictions obtained from our method with our rmst - adapted risk measure, prediction intervals and variable importance measures developed in a previous work.
|
arxiv:2404.17211
|
we discuss a string model where a conformal four - dimensional n = 2 gauge theory receives corrections to its gauge kinetic functions from " stringy " instantons. these contributions are explicitly evaluated by exploiting the localization properties of the integral over the stringy instanton moduli space. the model we consider corresponds to a setup with d7 / d3 - branes in type i ' theory compactified on t4 / z2 x t2, and possesses a perturbatively computable heterotic dual. on the heterotic side the corrections to the quadratic gauge couplings are provided by a 1 - loop threshold computation and, under the duality map, match precisely the first few stringy instanton effects in the type i ' setup. this agreement represents a very non - trivial test of our approach to the exotic instanton calculus.
|
arxiv:1002.4322
|
we present a new approach for redirected walking in static and dynamic scenes that uses techniques from robot motion planning to compute the redirection gains that steer the user on collision - free paths in the physical space. our first contribution is a mathematical framework for redirected walking using concepts from motion planning and configuration spaces. this framework highlights various geometric and perceptual constraints that tend to make collision - free redirected walking difficult. we use our framework to propose an efficient solution to the redirection problem that uses the notion of visibility polygons to compute the free spaces in the physical environment and the virtual environment. the visibility polygon provides a concise representation of the entire space that is visible, and therefore walkable, to the user from their position within an environment. using this representation of walkable space, we apply redirected walking to steer the user to regions of the visibility polygon in the physical environment that closely match the region that the user occupies in the visibility polygon in the virtual environment. we show that our algorithm is able to steer the user along paths that result in significantly fewer resets than existing state - of - the - art algorithms in both static and dynamic scenes. our project website is available at https://gamma.umd.edu/vis_poly/.
|
arxiv:2106.06807
|
in this paper, the well - posedness for one - dimensional path dependent mckean - vlasov sdes with $\alpha$ ($\alpha \geq \frac{1}{2}$)-h\"older continuous diffusion is investigated. moreover, the associated quantitative propagation of chaos in the sense of wasserstein distance, total variation distance as well as relative entropy is studied.
|
arxiv:2207.04274
|
deep learning ( dl ) has seen great success in the computer vision ( cv ) field, and related techniques have been used in security, healthcare, remote sensing, and many other fields. as a parallel development, visual data has become universal in daily life, easily generated by ubiquitous low - cost cameras. therefore, exploring dl - based cv may yield useful information about objects, such as their number, locations, distribution, motion, etc. intuitively, dl - based cv can also facilitate and improve the designs of wireless communications, especially in dynamic network scenarios. however, so far, such work is rare in the literature. the primary purpose of this article, then, is to introduce ideas about applying dl - based cv in wireless communications to bring some novel degrees of freedom to both theoretical research and engineering applications. to illustrate how dl - based cv can be applied in wireless communications, an example of using a dl - based cv with a millimeter - wave ( mmwave ) system is given to realize optimal mmwave multiple - input and multiple - output ( mimo ) beamforming in mobile scenarios. in this example, we propose a framework to predict future beam indices from previously observed beam indices and images of street views using resnet, 3 - dimensional resnext, and a long short - term memory network. the experimental results show that our frameworks achieve much higher accuracy than the baseline method, and that visual data can significantly improve the performance of the mimo beamforming system. finally, we discuss the opportunities and challenges of applying dl - based cv in wireless communications.
|
arxiv:2006.05782
|
let $\Sigma$ be a surface of constant mean curvature in ${\mathbb R}^3$ with multiple delaunay ends. assuming that $\Sigma$ is non degenerate, in this paper we construct new solutions to the cahn - hilliard equation $\varepsilon \Delta u + \varepsilon^{-1} u(1 - u^2) = \ell_\varepsilon$ in ${\mathbb R}^3$ such that as $\varepsilon \to 0$ the zero level set of $u_\varepsilon$ approaches $\Sigma$. moreover, on compacts of the connected components of ${\mathbb R}^3 \setminus \Sigma$ we have $1 - |u_\varepsilon| \to 0$ uniformly.
|
arxiv:1810.01494
|
combining sum factorization, weighted quadrature, and row - based assembly enables efficient higher - order computations for tensor product splines. we aim to transfer these concepts to immersed boundary methods, which perform simulations on a regular background mesh cut by a boundary representation that defines the domain of interest. therefore, we present a novel concept to divide the support of cut basis functions to obtain regular parts suited for sum factorization. these regions require special discontinuous weighted quadrature rules, while gauss - like quadrature rules integrate the remaining support. two linear elasticity benchmark problems confirm the derived estimate for the computational costs of the different integration routines and their combination. although the presence of cut elements reduces the speed - up, its contribution to the overall computation time declines with h - refinement.
|
arxiv:2308.15034
|
we consider variation of the energy of a light - like particle in riemann space - time, and find the lagrangian, canonical momenta and forces. equations of the critical curve are obtained by variation of the nonzero energy integral in accordance with the principles of the calculus of variations in mechanics. this method is shown not to lead to violation of conformity of the varied curve to the null path, in contradistinction to the interval variation. though the found equations differ from the standard form of the geodesic equations, for the schwarzschild space - time their solutions coincide with each other to within the parameter of differentiation.
|
arxiv:0806.3350
|
in this paper, we introduce the concept of a virtual machine with graph - organised memory as a versatile backend for both explicit - state and abstraction - driven verification of software. our virtual machine uses the llvm ir as its instruction set, enriched with a small set of hypercalls. we show that the provided hypercalls are sufficient to implement a small operating system, which can then be linked with applications to provide a posix - compatible verification environment. finally, we demonstrate the viability of the approach through a comparison with a more traditionally - designed llvm model checker.
|
arxiv:1703.05341
|
we present high - resolution hydrodynamical simulations of isolated dwarf galaxies including self - gravity, non - equilibrium cooling and chemistry, interstellar radiation fields ( isrf ) and shielding, star formation, and stellar feedback. this includes spatially and temporally varying photoelectric ( pe ) heating, photoionization, resolved supernova ( sn ) blast waves and metal enrichment. a new flexible method to sample the stellar initial mass function allows us to follow the contribution to the isrf, the metal output and the sn delay times of individual massive stars. we find that sne play the dominant role in regulating the global star formation rate, shaping the multi - phase interstellar medium ( ism ) and driving galactic outflows. outflow rates ( with mass - loading factors of a few ) and hot gas fractions of the ism increase with the number of sne exploding in low - density environments where radiative energy losses are low. while pe heating alone can suppress star formation slightly more ( a factor of a few ) than sne alone can do, it is unable to drive outflows and reproduce the multi - phase ism that emerges naturally when sne are included. these results are in conflict with recent results of forbes et al. who concluded that pe heating is the dominant process suppressing star formation in dwarfs, about an order of magnitude more efficient than sne. potential origins for this discrepancy are discussed. in the absence of sne and photoionization ( mechanisms to disperse dense clouds ), the impact of pe heating is highly overestimated owing to the ( unrealistic ) proximity of dense gas to the radiation sources. this leads to a substantial boost of the infrared continuum emission from the uv - irradiated dust and a far infrared line - to - continuum ratio too low compared to observations. though sub - dominant in regulating star formation, the isrf controls the abundance of molecular hydrogen via photodissociation.
|
arxiv:1701.08779
|
blenderproc is a modular procedural pipeline, which helps in generating real looking images for the training of convolutional neural networks. these can be used in a variety of use cases including segmentation, depth, normal and pose estimation and many others. a key feature of our extension of blender is the simple to use modular pipeline, which was designed to be easily extendable. by offering standard modules, which cover a variety of scenarios, we provide a starting point on which new modules can be created.
|
arxiv:1911.01911
|
let $q \in (1, 2)$. a $q$-expansion of a number $x$ in $[0, \frac{1}{q-1}]$ is a sequence $(\delta_i)_{i=1}^\infty \in \{0, 1\}^{\mathbb{N}}$ satisfying $$x = \sum_{i=1}^\infty \frac{\delta_i}{q^i}.$$ let $\mathcal{B}_{\aleph_0}$ denote the set of $q$ for which there exists $x$ with a countable number of $q$-expansions, and let $\mathcal{B}_{1,\aleph_0}$ denote the set of $q$ for which $1$ has a countable number of $q$-expansions. in \cite{sidorov6} it was shown that $\min \mathcal{B}_{\aleph_0} = \min \mathcal{B}_{1,\aleph_0} = \frac{1+\sqrt{5}}{2},$ and in \cite{baker} it was shown that $\mathcal{B}_{\aleph_0} \cap (\frac{1+\sqrt{5}}{2}, q_1] = \{q_1\}$, where $q_1 (\approx 1.64541)$ is the positive root of $x^6 - x^4 - x^3 - 2x^2 - x - 1 = 0$. in this paper we show that the second smallest point of $\mathcal{B}_{1,\aleph_0}$ is $q_3 (\approx 1.68042)$, the positive root of $x^5 - x^4 - x^3 - x + 1 = 0$. en route to proving this result we show that $\mathcal{B}_{\aleph_0} \cap (q_1, q_3] = \{q_2, q_3\}$, where $q_2 (\approx 1.65462)$
|
arxiv:1502.07212
|
existing systems dealing with the increasing volume of data series cannot guarantee interactive response times, even for fundamental tasks such as similarity search. therefore, it is necessary to develop analytic approaches that support exploration and decision making by providing progressive results, before the final and exact ones have been computed. prior works lack both efficiency and accuracy when applied to large - scale data series collections. we present and experimentally evaluate pros, a new probabilistic learning - based method that provides quality guarantees for progressive nearest neighbor ( nn ) query answering. we develop our method for k - nn queries and demonstrate how it can be applied with the two most popular distance measures, namely, euclidean and dynamic time warping ( dtw ). we provide both initial and progressive estimates of the final answer that are getting better during the similarity search, as well as suitable stopping criteria for the progressive queries. moreover, we describe how this method can be used in order to develop a progressive algorithm for data series classification ( based on a k - nn classifier ), and we additionally propose a method designed specifically for the classification task. experiments with several and diverse synthetic and real datasets demonstrate that our prediction methods constitute the first practical solutions to the problem, significantly outperforming competing approaches. this paper was published in the vldb journal ( 2022 ).
|
arxiv:2212.13310
|
photo response non - uniformity ( prnu ) noise has proven to be a very effective tool in camera based forensics. it helps to match a photo to the device that clicked it. in today ' s scenario, where millions and millions of images are uploaded every hour, it is very easy to compute this unique prnu pattern from a couple of shared images on social profiles. this endangers the privacy of the camera owner and becomes a cause of major concern for the privacy - aware society. we propose the sss - prnu scheme that enables forensic investigators to carry out their crime investigation without breaching the privacy of the people, thus maintaining a balance between the two. to preserve privacy, extraction of the camera fingerprint and prnu noise for a suspicious image is computed in a trusted execution environment such as arm trustzone. after extraction, the sensitive information of camera fingerprint and prnu noise is distributed into multiple obfuscated shares using the shamir secret sharing ( sss ) scheme. these shares are information - theoretically secure and leak no information about the underlying content. the encrypted information is distributed to multiple third - party servers where correlation is computed on a share basis between the camera fingerprint and the prnu noise. these partial correlation values are combined together to obtain the final correlation value that becomes the basis for a match decision. transforming the computation of the correlation value in the encrypted domain and making it well suited for a distributed environment is the main contribution of the paper. experimental results validate the feasibility of the proposed scheme, which provides a secure framework for prnu based source camera attribution. the security analysis and evaluation of computational and storage overheads are performed to assess the practical feasibility of the scheme.
|
arxiv:2106.07029
|
the effect of spontaneous breaking of the initial so ( 3 ) symmetry is shown to be possible for an h - like atom in the ground state, when it is confined in a spherical box under general boundary conditions of " not going out " through the box surface ( i. e. third kind or robin ' s ones ), for a wide range of physically reasonable values of the system parameters. the reason is that such boundary conditions could yield a large magnitude of the electronic wavefunction in some sector of the box boundary, which in turn promotes atomic displacement from the box center towards this part of the boundary, and so the underlying so ( 3 ) symmetry spontaneously breaks. the emerging goldstone modes, coinciding with rotations around the box center, restore the symmetry by spreading the atom over a spherical shell localized at some distance from the box center. atomic confinement inside the cavity proceeds dynamically : due to the boundary condition, the deformation of the electronic wavefunction near the boundary works as a spring that returns the atomic nucleus back into the box volume.
|
arxiv:1607.02706
|
the dynamics of several light filaments ( spatial optical solitons ) propagating in an optically nonlinear and non - local random medium is investigated using the paradigms of the physics of complexity. cluster formation is interpreted as a dynamic phase transition. a connection with the random matrices approach for explaining the vibrational spectra of an ensemble of solitons is pointed out. general arguments based on a brownian dynamics model are validated by the numerical simulation of a stochastic partial differential equation system. the results are also relevant for bose condensed gases and plasma physics.
|
arxiv:physics/0412051
|
we show that an eternal solution to a complete, locally conformally flat yamabe flow, $\frac{\partial}{\partial t} g = -Rg$, with uniformly bounded scalar curvature and positive ricci curvature at $t = 0$, where the scalar curvature assumes its maximum, is a gradient steady soliton. as an application of that, we study the blow up behavior of $g(t)$ at the maximal time of existence, $T < \infty$. we assume that $(M, g(\cdot, t))$ satisfies (i) the injectivity radius bound {\bf or} (ii) the schouten tensor is positive at time $t = 0$ and the scalar curvature is bounded at each time - slice. we show that the singularity the flow develops at time $T$ is always of type i.
|
arxiv:0705.3667
|
a popular analogue used in the space domain is that of historical building projects, notably cathedrals that took decades and in some cases centuries to complete. cathedrals are often taken as archetypes for long - term projects. in this article, i will explore the cathedral from the point of view of project management and systems architecting and draw implications for long - term projects in the space domain, notably developing a starship. i will show that the popular image of a cathedral as a continuous long - term project is in contradiction to the current state of research. more specifically, i will examine the following propositions : the cathedrals were built based on an initial detailed master plan ; building was a continuous process that adhered to the master plan ; investments were continuously provided for the building process. although initial plans might have existed, the construction process often took place in multiple campaigns, sometimes separated by decades. such interruptions made knowledge - preservation very challenging. the long stretches of inactivity were mostly due to a lack of funding. hence, the availability of funding coincided with construction activity. these findings paint a much more relevant picture of cathedral building for long - duration projects today : how can a project be completed despite a range of uncertainties regarding loss in skills, shortage in funding, and interruptions? it is concluded that long - term projects such as an interstellar exploration program can take inspiration from cathedrals by developing a modular architecture, allowing for extensibility and flexibility, thinking about value delivery at an early point, and establishing mechanisms and an organization for stable funding.
|
arxiv:2007.03654
|
online tools are used for individual at - home learning, such as : educational videos, learning management systems, interactive tools, and other web - based resources. some advantages of flipped learning include improved learning performance, enhanced student satisfaction and engagement, flexibility in learning, and increased interaction opportunities between students and instructors. on the other hand, the disadvantages of flipped learning involve challenges related to student motivation, internet accessibility, quality of videos, and increased workload for teachers. == technologies == numerous types of physical technology are currently used : digital cameras, video cameras, interactive whiteboard tools, document cameras, electronic media, and lcd projectors. combinations of these techniques include blogs, collaborative software, eportfolios, and virtual classrooms. the current design of this type of application includes evaluation through tools of cognitive analysis that allow one to identify which elements optimize the use of these platforms. === audio and video === video technology has included vhs tapes and dvds, as well as on - demand and synchronous methods with digital video via server or web - based options such as streamed video and webcams. videotelephony can connect with speakers and other experts. interactive digital video games are being used at k - 12 and higher education institutions. screencasting allows users to share their screens directly from their browser and make the video available online so that other viewers can stream the video directly. webcams and webcasting have enabled the creation of virtual classrooms and virtual learning environments. webcams are also being used to counter plagiarism and other forms of academic dishonesty that might occur in an e - learning environment. === computers, tablets, and mobile devices === computers and tablets enable learners and educators to access websites as well as applications.
many mobile devices support m - learning. mobile devices such as clickers and smartphones can be used for interactive audience response feedback. mobile learning can provide performance support for checking the time, setting reminders, retrieving worksheets, and instruction manuals. such devices as ipads are used for helping disabled ( visually impaired or with multiple disabilities ) children in communication development as well as in improving physiological activity, according to the stimulation practice report. studies in pre - school ( early learning ), primary and secondary education have explored how digital devices are used to enable effective learning outcomes, and create systems that can support teachers. digital technology can improve teaching and learning by motivating students with engaging, interactive, and fun learning environments.
|
https://en.wikipedia.org/wiki/Educational_technology
|
suzaku observation of an ultraluminous x-ray source, ngc 2403 source 3, performed on 2006 march 16--17, is reported. the suzaku xis spectrum of source 3 was described with a multi-color black-body-like emission from an optically thick accretion disk. the innermost temperature and radius of the accretion disk were measured to be $t_{\rm in} = 1.08_{-0.03}^{+0.02}$ kev and $r_{\rm in} = 122.1_{-6.8}^{+7.7}\,\alpha^{1/2}$ km, respectively, where $\alpha = (\cos 60^\circ / \cos i)$ with $i$ being the disk inclination. the bolometric luminosity of the source was estimated to be $l_{\rm bol} = 1.82 \times 10^{39}\,\alpha$ ergs s$^{-1}$. archival chandra and xmm-newton data of the source were analyzed for long-term spectral variations. in almost all observations, the source showed multi-color black-body-like x-ray spectra with parameters similar to those in the suzaku observation. in only one chandra observation, however, source 3 was found to exhibit a power-law-like spectrum, with a photon index of $\gamma = 2.37 \pm 0.08$, when it was fainter by about $\sim 15\%$ than in the suzaku observation. the spectral behavior is naturally explained in terms of a transition between the slim disk state and the "very high" state, both found in galactic black hole binaries when their luminosities approach the eddington limit. these results are utilized to argue that ultraluminous x-ray sources generally have significantly higher black-hole masses than ordinary stellar-mass black holes.
|
arxiv:0810.5188
|
variability is a property shared by practically all agn. this makes variability selection a possible technique for identifying agn. given that variability selection makes no prior assumption about spectral properties, it is a powerful technique for detecting both low - luminosity agn in which the host galaxy emission is dominating and agn with unusual spectral properties. in this paper, we will discuss and test different statistical methods for the detection of variability in sparsely sampled data that allow full control over the false positive rates. we will apply these methods to the goods north and south fields and present a catalog of variable sources in the z band in both goods fields. out of 11931 objects checked, we find 155 variable sources at a significance level of 99. 9 %, corresponding to about 1. 3 % of all objects. after rejection of stars and supernovae, 139 variability selected agn remain. their magnitudes reach down as faint as 25. 5 mag in z. spectroscopic redshifts are available for 22 of the variability selected agn, ranging from 0. 046 to 3. 7. the absolute magnitudes in the rest - frame z - band range from ~ - 18 to - 24, reaching substantially fainter than the typical luminosities probed by traditional x - ray and spectroscopic agn selection in these fields. therefore, this is a powerful technique for future exploration of the evolution of the faint end of the agn luminosity function up to high redshifts.
|
arxiv:1008.3384
|
cosmological simulations predict that galaxies are embedded into triaxial dark matter haloes, which appear approximately elliptical in projection. weak gravitational lensing allows us to constrain these halo shapes and thereby test the nature of dark matter. weak lensing has already provided robust detections of the signature of halo flattening at the mass scales of groups and clusters, whereas results for galaxies have been somewhat inconclusive. here we combine data from five surveys (ngvslens, kids/kv450, cfhtlens, cs82, and rcslens) in order to tighten observational constraints on galaxy-scale halo ellipticity for photometrically selected lens samples. we constrain $f_{\rm h}$, the average ratio between the aligned component of the halo ellipticity and the ellipticity of the light distribution, finding $f_{\rm h} = 0.303^{+0.080}_{-0.079}$ for red lenses and $f_{\rm h} = 0.217^{+0.160}_{-0.159}$ for blue lenses when assuming elliptical nfw density profiles and a linear scaling between halo ellipticity and galaxy ellipticity. our constraints for red galaxies constitute the currently most significant ($3.8\sigma$) systematics-corrected detection of the signature of halo flattening at the mass scale of galaxies. our results are in good agreement with expectations from the millennium simulation that apply the same analysis scheme and incorporate models for galaxy-halo misalignment. assuming these misalignment models and the analysis assumptions stated above are correct, our measurements imply an average dark matter halo ellipticity for the studied red galaxy samples of $\langle |\epsilon_{\rm h}| \rangle = 0.174 \pm 0.046$, where $|\epsilon_{\rm h}| = (1-q)/(1+q)$ relates to the ratio $q = b/a$ of the minor and major axes of the projected mass distribution.
similar measurements based on larger upcoming weak lensing data sets can help to calibrate models for intrinsic galaxy alignments. [ abridged ]
|
arxiv:2010.00311
|
the streaming instability is one of the most promising pathways to the formation of planetesimals from pebbles. understanding how this instability operates under realistic conditions expected in protoplanetary disks is therefore crucial to assess the efficiency of planet formation. contemporary models of protoplanetary disks show that magnetic fields are key to driving gas accretion through large - scale, laminar magnetic stresses. however, the effect of such magnetic fields on the streaming instability has not been examined in detail. to this end, we study the stability of dusty, magnetized gas in a protoplanetary disk. we find the streaming instability can be enhanced by passive magnetic torques and even persist in the absence of a global radial pressure gradient. in this case, instability is attributed to the azimuthal drift between dust and gas, unlike the classical streaming instability, which is driven by radial drift. this suggests that the streaming instability can remain effective inside dust - trapping pressure bumps in accreting disks. when a live vertical field is considered, we find the magneto - rotational instability can be damped by dust feedback, while the classic streaming instability can be stabilized by magnetic perturbations. we also find that alfvén waves can be destabilized by dust - gas drift, but this instability requires nearly ideal conditions. we discuss the possible implications of these results for dust dynamics and planetesimal formation in protoplanetary disks.
|
arxiv:2111.10381
|
analytical equations like richardson-dushman's or shockley's provided a general, if simplified, conceptual background, which was widely accepted in conventional electronics and made a fundamental contribution to advances in the field. in the attempt to develop a (highly desirable, but so far missing) counterpart for molecular electronics, in this work we deduce a general analytical formula for the tunneling current through molecular junctions mediated by a single level that is valid for any bias voltage and temperature. starting from this expression, which is exact and obviates cumbersome numerical integration, in the low and high temperature limits we also provide analytical formulas expressing the current in terms of elementary functions. they are accurate for broad model parameter ranges relevant for real molecular junctions. within this theoretical framework we show that: (i) by varying the temperature, the tunneling current can vary by several orders of magnitude, thus debunking the myth that a strong temperature dependence of the current is evidence for a hopping mechanism, (ii) real molecular junctions can undergo a gradual (sommerfeld-arrhenius) transition from a weakly temperature dependent to a strongly (``exponential'') temperature dependent current that can be tuned by the applied bias, and (iii) important insight into large area molecular junctions with eutectic gallium indium alloy (egain) top electrodes can be gained. e.g., merely based on transport data, we estimate that the current carrying molecules represent only a fraction of $f \approx 4 \times 10^{-4}$ out of the total number of molecules in a large area au-s-(ch$_2$)$_{13}$-ch$_3$/egain junction.
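For orientation, the single-level tunneling picture this abstract builds on can be sketched numerically. The snippet below integrates the standard Landauer expression for a Lorentzian-broadened level with symmetric bias; all parameter values are illustrative, and this numerical toy is exactly the kind of integration the paper's closed-form result avoids, not the paper's formula itself. It nonetheless reproduces the strong temperature dependence of off-resonant tunneling noted in point (i):

```python
import math

def fermi(E, mu, kT):
    """Fermi-Dirac occupation (energies and kT in eV)."""
    return 1.0 / (1.0 + math.exp((E - mu) / kT))

def landauer_current(V, eps0=0.6, gamma=0.01, kT=0.025):
    """Rectangle-rule integration of the standard single-level Landauer
    current (in units of 2e/h times eV): Lorentzian transmission of width
    gamma centred on level eps0, leads biased symmetrically at +/- V/2."""
    n = 40001
    dE = 4.0 / (n - 1)
    total = 0.0
    for i in range(n):
        E = -2.0 + i * dE
        T = gamma ** 2 / ((E - eps0) ** 2 + gamma ** 2)
        total += T * (fermi(E, +V / 2, kT) - fermi(E, -V / 2, kT))
    return total * dE
```

Raising kT at fixed low bias lets the Fermi tails reach the level, so the off-resonant current grows by orders of magnitude without any hopping mechanism, illustrating point (i).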
|
arxiv:2311.14415
|
we present the observation of strain induced sign reversal of anisotropic magnetoresistance ( amr ) in lcmo ultrathin films ( thickness 4 nm ) deposited on sto ( 001 ) substrates. we have also observed unusually large amr in lcmo / sto thin films with thickness of 6 nm below but close to its curie temperature ( tc ), which decreases as the film thickness increases. the sign reversal of amr ( with a maximum value of - 6 ) with magnetic field or temperature for the 4 nm thin film may be attributed to the increase in tensile strain in the plane of the thin film, which in turn facilitates the rotation of the magnetization easy axis.
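The sign convention behind a "negative amr" can be pinned down with a one-liner. This uses one common definition; the paper may normalize differently (e.g. by the average resistivity), so treat it as an illustrative convention:

```python
def amr_percent(rho_parallel, rho_perp):
    """Anisotropic magnetoresistance in percent, using the common convention
    amr = (rho_par - rho_perp) / rho_perp * 100. A negative value means the
    resistivity is lower with the current parallel to the magnetization."""
    return (rho_parallel - rho_perp) / rho_perp * 100.0
```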
|
arxiv:1409.7250
|
proficient with different types of recording media, such as analog tape, digital multi-track recorders and workstations, plug-ins and computer knowledge. with the advent of the digital age, it is increasingly important for the audio engineer to understand software and hardware integration, from synchronization to analog-to-digital transfers. in their daily work, audio engineers use many tools, including: tape machines, analog-to-digital converters, digital-to-analog converters, digital audio workstations ( daws ), audio plug-ins, dynamic range compressors, audio data compressors, equalizers, music sequencers, signal processors, headphones, microphones, preamplifiers, mixing consoles, amplifiers, and loudspeakers.
|
https://en.wikipedia.org/wiki/Audio_engineer
|
although many fields have witnessed the superior performance brought about by deep learning, the robustness of neural networks remains an open issue. specifically, a small adversarial perturbation on the input may cause the model to produce a completely different output. such poor robustness implies many potential hazards, especially in security - critical applications, e. g., autonomous driving and mobile robotics. this work studies what information the adversarially trained model focuses on. empirically, we notice that the differences between the clean and adversarial data are mainly distributed in the low - frequency region. we then find that an adversarially - trained model is more robust than its naturally - trained counterpart due to the reason that the former pays more attention to learning the dominant information in low - frequency components. in addition, we consider two common ways to improve model robustness, namely, by data augmentation and by using stronger network architectures, and understand these techniques from a frequency - domain perspective. we are hopeful this work can shed light on the design of more robust neural networks.
|
arxiv:2203.08739
|
purpose : although recent deep energy-based generative models ( ebms ) have shown encouraging results in many image generation tasks, it remains an open question how to take advantage of the self-adversarial cogitation in deep ebms to boost the performance of magnetic resonance imaging ( mri ) reconstruction. methods : with the successful application of deep learning in a wide range of mri reconstruction, a line of emerging research involves formulating an optimization-based reconstruction method in the space of a generative model. leveraging this, a novel regularization strategy is introduced in this article which takes advantage of the self-adversarial cogitation of the deep energy-based model. more precisely, we advocate alternately learning a more powerful energy-based model with maximum likelihood estimation to obtain the deep energy-based information, represented as an image prior. simultaneously, implicit inference with langevin dynamics is a unique property of the reconstruction. in contrast to other generative models for reconstruction, the proposed method utilizes deep energy-based information as the image prior in reconstruction to improve the quality of the image. results : experimental results imply that the proposed technique can achieve remarkable performance in terms of high reconstruction accuracy, competitive with state-of-the-art methods, and does not suffer from mode collapse. conclusion : algorithmically, an iterative approach was presented to strengthen ebm training with the gradient of the energy network. the robustness and the reproducibility of the algorithm were also experimentally validated. more importantly, the proposed reconstruction framework can be generalized to most mri reconstruction scenarios.
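The langevin-dynamics inference step mentioned above can be sketched on a toy quadratic energy standing in for the learned energy network. The step size, energy function, and seed below are illustrative assumptions, not the paper's settings:

```python
import math
import random

def grad_energy(x):
    """Gradient of a toy quadratic energy E(x) = x**2 / 2, a placeholder
    for the gradient of the learned deep energy network."""
    return x

def langevin_chain(x0, n_steps, eta=0.1, seed=0):
    """Unadjusted Langevin dynamics:
        x <- x - (eta / 2) * dE/dx + sqrt(eta) * standard normal noise.
    For E(x) = x**2 / 2 the chain equilibrates near a unit Gaussian,
    i.e. it samples from the density proportional to exp(-E(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x = x - 0.5 * eta * grad_energy(x) + math.sqrt(eta) * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples
```

In the reconstruction setting the same update runs in image space, with the data-consistency term added to the energy gradient.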
|
arxiv:2109.03237
|
the sources of theoretical uncertainties in the prediction of the two-neutron drip line are analyzed in the framework of covariant density functional theory. we concentrate on single-particle and pairing properties as potential sources of these uncertainties. the major source of these uncertainties can be traced back to the differences in the underlying single-particle structure of the various covariant energy density functionals ( cedfs ). it is found that the uncertainties in the description of single-particle energies at the two-neutron drip line are dominated by those existing already in known nuclei. only approximately one third of these uncertainties are due to the uncertainties in the isovector channel of the cedfs. thus, improving the cedf description of single-particle energies in known nuclei will also reduce the uncertainties in the prediction of the position of the two-neutron drip line. the predictions of pairing properties in neutron rich nuclei depend on the cedf. although pairing properties moderately affect the position of the two-neutron drip line, they represent only a secondary source of uncertainty in its position.
|
arxiv:1501.04151
|
cerium oxide nanoparticles ( cnps ) have recently gained increasing interest as redox enzyme-mimetics to scavenge the intracellular excess of reactive oxygen species, including hydrogen peroxide ( h2o2 ). despite the extensive exploration of cnp scavenging activity, there remains a notable knowledge gap regarding the fundamental mechanism underlying the cnp catalyzed disproportionation of h2o2. in this letter, we present evidence demonstrating that h2o2 adsorption at the cnp surface triggers the formation of stable intermediates known as cerium-peroxo complexes ( ce-o$_2^{2-}$ ). the cerium-peroxo complexes can be resolved by raman scattering and uv-visible spectroscopy. we further demonstrate that the catalytic reactivity of cnps in the h2o2 disproportionation reaction increases with the ce ( iii ) fraction. the developed approach using uv-visible spectroscopy for the characterization of ce-o$_2^{2-}$ complexes can potentially serve as a foundation for determining the catalytic reactivity of cnps in the disproportionation of h2o2.
|
arxiv:2309.13981
|
manipulation of antiferromagnetic ( afm ) spins by electrical means is in great demand to develop afm spintronics with low power consumption. despite the electrical modulation of insulating afms through coupling between their intrinsic ferroelectricity and antiferromagnetism, direct electrical control of afm metals remains challenging due to the screening effect by the surface charge, and the manipulation is confined to a limited depth of atomic dimensions, which is insufficient to form a stable afm exchange spring. in the present letter we report a reversible electrical control of the exchange spring in afm metals, using an ionic liquid to exert a substantial electric-field effect. the exchange spring can transfer the force to the ferromagnet / antiferromagnet interface, enabling a deeper modulation depth in afm metals. the manipulation of afm moments by gate voltage is demonstrated in a [ co / pt ] / irmn model system and a single irmn layer with the irmn thickness up to 5 nm. besides the fundamental significance of modulating the spin structures in metallic afms in an all-electrical fashion, the present finding would advance the development of low-power-consumption afm spintronics.
|
arxiv:1504.01193
|
we study the two - point correlator of an o ( n ) scalar field with quartic self - coupling in de sitter space. for light fields in units of the expansion rate, perturbation theory is plagued by large logarithmic terms for superhorizon momenta. we show that a proper treatment of the infinite series of self - energy insertions through the schwinger - dyson equations resums these infrared logarithms into well defined power laws. we provide an exact analytical solution of the schwinger - dyson equations for infrared momenta when the self - energy is computed at two - loop order. the obtained correlator exhibits a rich structure with a superposition of free - field - like power laws. we extract mass and field - strength renormalization factors from the asymptotic infrared behavior. the latter are nonperturbative in the coupling in the case of a vanishing tree - level mass.
|
arxiv:1305.5705
|
we construct parseval wavelet frames in $L^2(M)$ for a general riemannian manifold $M$ and we show the existence of wavelet unconditional frames in $L^p(M)$ for $1 < p < \infty$. this is made possible thanks to a smooth orthogonal projection decomposition of the identity operator on $L^2(M)$, which was recently proven by the authors in arXiv:1803.03634. we also show a characterization of triebel-lizorkin $\mathbf{F}_{p,q}^s(M)$ and besov $\mathbf{B}_{p,q}^s(M)$ spaces on compact manifolds in terms of magnitudes of coefficients of parseval wavelet frames. we achieve this by showing that hestenes operators are bounded on manifolds $M$ with bounded geometry.
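For reference, a family $\{\psi_j\}$ is a parseval frame for $L^2(M)$ when it satisfies the standard identity

```latex
\sum_{j} |\langle f, \psi_j \rangle|^2 = \|f\|_{L^2(M)}^2
\qquad \text{for all } f \in L^2(M),
```

which yields the reconstruction $f = \sum_j \langle f, \psi_j \rangle \, \psi_j$ without requiring the $\psi_j$ to be orthogonal or normalized; this is the textbook definition, recalled here for context.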
|
arxiv:2011.13037
|
a blockchain faces two fundamental challenges. it must motivate users to maintain the system while preventing a minority of these users from colluding and gaining disproportionate control. many popular public blockchains use monetary incentives to encourage users to behave appropriately. but these same incentive schemes create more problems than they solve. mining rewards cause centralization in " proof of work " chains such as bitcoin. validator rewards and punishments invite attacks in " proof of stake " chains. this paper argues why these incentive schemes are detrimental to blockchain. it considers a range of other systems - - - some of which incorporate monetary incentives, some of which do not - - - to confirm that monetary incentives are neither necessary nor sufficient for good user behavior.
|
arxiv:1905.04792
|
we give a detailed discussion of the quantum interference effect transistor ( quiet ), a proposed device which exploits interference between electron paths through aromatic molecules to modulate current flow. in the off state, perfect destructive interference stemming from the molecular symmetry blocks current, while in the on state, current is allowed to flow by locally introducing either decoherence or elastic scattering. details of a model calculation demonstrating the efficacy of the quiet are presented, and various fabrication scenarios are proposed, including the possibility of using conducting polymers to connect the quiet with multiple leads.
|
arxiv:0705.1193
|
the lagrangian of n = 2, d = 6 supergravity coupled to $E_7 \times SU(2)$ vector- and hyper-multiplets is derived. for this purpose the coset manifold $E_8 / (E_7 \times SU(2))$, parametrized by the scalars of the hypermultiplet, is constructed. a difference from the case of $Sp(n)$ matter is pointed out. this model can be considered as an intermediate step in the compactification of d = 10 supergravity coupled to $E_8 \times E_8$ matter to a four-dimensional model of $E_6$ unification.
|
arxiv:hep-th/9707193
|
we show that, given a metric space $(Y, d)$ of curvature bounded from above in the sense of alexandrov, and a positive radon measure $\mu$ on $Y$ giving finite mass to bounded sets, the resulting metric measure space $(Y, d, \mu)$ is infinitesimally hilbertian, i.e. the sobolev space $W^{1,2}(Y, d, \mu)$ is a hilbert space. the result is obtained by constructing an isometric embedding of the `abstract and analytical' space of derivations into the `concrete and geometrical' bundle whose fibre at $x \in Y$ is the tangent cone at $x$ of $Y$. the conclusion then follows from the fact that for every $x \in Y$ such a cone is a cat(0) space and, as such, has a hilbert-like structure.
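For context, infinitesimal hilbertianity is usually phrased via the cheeger energy: $W^{1,2}(Y, d, \mu)$ is a hilbert space precisely when the 2-cheeger energy $\mathrm{Ch}$ satisfies the parallelogram identity

```latex
\mathrm{Ch}(f + g) + \mathrm{Ch}(f - g) = 2\,\mathrm{Ch}(f) + 2\,\mathrm{Ch}(g)
\qquad \text{for all } f, g \in W^{1,2}(Y, d, \mu),
```

i.e. the energy is a quadratic form; this standard reformulation is recalled here to make the statement of the theorem concrete.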
|
arxiv:1812.02086
|
the na48/2 collaboration at cern has accumulated and analysed unprecedented statistics of rare kaon decays in the $K_{e4}$ modes: $K_{e4}(+-)$ ($K^\pm \to \pi^+ \pi^- e^\pm \nu$) and $K_{e4}(00)$ ($K^\pm \to \pi^0 \pi^0 e^\pm \nu$), with nearly one percent background contamination. this leads to improved measurements of branching fractions and detailed form factor studies. new final results from the analysis of 381 $K^\pm \to \pi^\pm \gamma \gamma$ rare decay candidates collected by the na48/2 and na62 experiments at cern are presented. the results include a decay rate measurement and fits to the chiral perturbation theory ( chpt ) description.
|
arxiv:1408.0585
|
we present a general procedure for constructing exact black hole ( bh ) solutions with magnetic charge in the context of nonlinear electrodynamics ( ned ) theory as well as in the coherent state approach to noncommutative geometry ( ncg ). in this framework, the lagrangian density for the noncommutative hayward bh is obtained and the weak energy condition ( wec ) is satisfied. the noncommutative hayward solution depends on two kinds of charges; when both vanish, it reduces to the schwarzschild solution. moreover, in order to find a link between bh evaporation and uncertainty relations, we calculate the hawking temperature and find the effect of the lagrangian density of bhs on the hawking radiation. therefore, a generalized uncertainty principle ( gup ) emerges from the modified hawking temperature in einstein-ned theory. the origin of this gup is the combined influence of a nonlinear magnetic source and an intrinsic property of the manifold associated with a fictitious charge. finally, we find that there is an upper bound on the lagrangian uncertainty of the bhs which are sourced by the ned field and / or the fictitious charge.
|
arxiv:1809.03841
|