text (string, lengths 1 to 3.65k) | source (string, lengths 15 to 79)
---|---|
The aim of this paper is to determine all irreducible spherical functions of the pair $(G,K) = (\mathrm{SU}(n+1), \mathrm{U}(n))$ whose $K$-types have highest weight of the form $(m+\ell, \dots, m+\ell, m, \dots, m)$. Instead of looking at a spherical function $\Phi$ of type $\pi$, we look at a matrix-valued function $H$ defined on a section of the $K$-orbits in an affine subvariety of $P_n(\mathbb{C})$. The function $H$ diagonalizes, hence it can be identified with a column vector-valued function. The irreducible spherical functions of type $\pi$ turn out to be parameterized by $S = \{(w,r) \in \mathbb{Z} \times \mathbb{Z} : 0 \leq w,\ 0 \leq r \leq \ell,\ 0 \leq m+w+r\}$. A key result in characterizing the associated function $H_{w,r}$ is the existence of a matrix-valued polynomial function $\Psi$ of degree $\ell$ such that $F_{w,r}(t) = \Psi(t)^{-1} H_{w,r}(t)$ becomes an eigenfunction of a matrix hypergeometric operator with eigenvalue $\lambda(w,r)$, explicitly given. In the last section we assume that $m \geq 0$ and define the matrix polynomial $P_w$ as the $(\ell+1) \times (\ell+1)$ matrix whose $r$-th row is the polynomial $F_{w,r}$. This leads to interesting families of matrix-valued orthogonal Jacobi polynomials $P_w^{\alpha,\beta}$ for $\alpha, \beta > -1$.
|
arxiv:1209.4500
|
Time series are presented for the Class II methanol maser source G12.89+0.49, which has been monitored for nine years at the Hartebeesthoek Radio Astronomy Observatory. The 12.2 and 6.7 GHz methanol masers were seen to exhibit rapid, correlated variations on timescales of less than a month. Daily monitoring has revealed that the variations have a periodic component with a period of 29.5 days. The period seems to be stable over the 110 cycles spanned by the time series. There are variations from cycle to cycle, with the peak of the flare occurring anywhere within an eleven-day window, but the minima occur at the same phase of the cycle. Time delays of up to 5.7 days are seen between spectral features at 6.7 GHz, and a delay of 1.1 days is seen between the dominant 12.2 GHz spectral feature and its 6.7 GHz counterpart.
|
arxiv:0906.0295
|
Optically-pumped color centers in semiconductor powders can potentially induce high levels of nuclear spin polarization in surrounding solids or fluids at or near ambient conditions, but complications stemming from the random orientation of the particles and the presence of unpolarized paramagnetic defects hinder the flow of polarization beyond the defect's host material. Here, we theoretically study the spin dynamics of interacting nitrogen-vacancy (NV) and substitutional nitrogen (P1) centers in diamond to show that outside protons spin-polarize efficiently upon a magnetic field sweep across the NV-P1 level anti-crossing. The process can be interpreted in terms of an NV-P1 spin ratchet, whose handedness, and hence the sign of the resulting nuclear polarization, depends on the relative timing of the optical excitation pulse. Further, we find that the polarization transfer mechanism is robust to NV misalignment relative to the external magnetic field, and efficient over a broad range of electron-electron and electron-nuclear spin couplings, even if proxy spins feature short coherence or spin-lattice relaxation times. Therefore, these results pave the way towards the dynamic nuclear polarization of arbitrary spin targets brought in proximity with a diamond powder under ambient conditions.
|
arxiv:1904.08563
|
This paper introduces a novel technique called counterfactual knowledge distillation (CFKD) to detect and remove reliance on confounders in deep learning models with the help of human expert feedback. Confounders are spurious features that models tend to rely on, which can result in unexpected errors in regulated or safety-critical domains. The paper highlights the benefit of CFKD in such domains and shows some advantages of counterfactual explanations over other types of explanations. We propose an experiment scheme to quantitatively evaluate the success of CFKD and different teachers that can give feedback to the model. We also introduce a new metric that is better correlated with true test performance than validation accuracy. The paper demonstrates the effectiveness of CFKD on synthetically augmented datasets and on real-world histopathological datasets.
|
arxiv:2310.01011
|
Developments in chemical engineering before and after World War II were mainly incited by the petrochemical industry; however, advances in other fields were made as well. Advancements in biochemical engineering in the 1940s, for example, found application in the pharmaceutical industry, and allowed for the mass production of various antibiotics, including penicillin and streptomycin. Meanwhile, progress in polymer science in the 1950s paved the way for the "age of plastics". === Safety and hazard developments === Concerns regarding large-scale chemical manufacturing facilities' safety and environmental impact were also raised during this period. Silent Spring, published in 1962, alerted its readers to the harmful effects of DDT, a potent insecticide. The 1974 Flixborough disaster in the United Kingdom resulted in 28 deaths, as well as damage to a chemical plant and three nearby villages. The 1984 Bhopal disaster in India resulted in almost 4,000 deaths. These incidents, along with other incidents, affected the reputation of the trade as industrial safety and environmental protection were given more focus. In response, the IChemE required safety to be part of every degree course that it accredited after 1982. By the 1970s, legislation and monitoring agencies were instituted in various countries, such as France, Germany, and the United States. In time, the systematic application of safety principles to chemical and other process plants began to be considered a specific discipline, known as process safety. === Recent progress === Advancements in computer science found applications for designing and managing plants, simplifying calculations and drawings that previously had to be done manually. The completion of the Human Genome Project is also seen as a major development, not only advancing chemical engineering but genetic engineering and genomics as well. Chemical engineering principles were used to produce DNA sequences in large quantities. == Concepts == Chemical engineering involves the application of several principles. Key concepts are presented below. === Plant design and construction === Chemical engineering design concerns the creation of plans, specifications, and economic analyses for pilot plants, new plants, or plant modifications. Design engineers often work in a consulting role, designing plants to meet clients' needs. Design is limited by several factors, including funding, government regulations, and safety standards. These constraints dictate a plant's choice of process, materials, and equipment. Plant construction is coordinated by project engineers and project managers, depending on the size of the investment. A chemical engineer may do the job of project engineer full-time or part of the time, which requires
|
https://en.wikipedia.org/wiki/Chemical_engineering
|
We present a trajectory concept for a small mission to the four inner large satellites of Saturn. Leveraging the high efficiency of electric propulsion, the concept enables orbit insertion around each of the moons, for arbitrarily long close observation periods. The mission starts with an EVVES interplanetary segment, where a combination of multiple gravity assists and deep space low thrust enables reduced relative arrival velocity at Saturn, followed by an unpowered capture via a sequence of resonant flybys with Titan. The transfers between moons use a low-thrust control law that connects unstable and stable branches of the invariant manifolds of planar Lyapunov orbits from the circular restricted three-body problem of each moon and Saturn. The exploration of the moons relies on homoclinic and heteroclinic connections of the Lyapunov orbits around the $L_1$ and $L_2$ equilibrium points. These science orbits can be extended for arbitrary lengths of time with negligible propellant usage. The strategy enables a comprehensive scientific exploration of the inner large moons, located deep inside the gravitational well of Saturn, which is unfeasible with conventional impulsive maneuvers due to excessive fuel consumption.
|
arxiv:2305.17548
|
In this paper, we study asynchronous stochastic approximation algorithms without communication delays. Our main contribution is a stability proof for these algorithms that extends a method of Borkar and Meyn by accommodating more general noise conditions. We also derive convergence results from this stability result and discuss their application in important average-reward reinforcement learning problems.
|
arxiv:2312.15091
|
Transport-based density estimation methods are receiving growing interest because of their ability to efficiently generate samples from the approximated density. We further investigate the sequential transport maps framework proposed in arXiv:2106.04170 and arXiv:2303.02554, which builds on a sequence of composed Knothe-Rosenblatt (KR) maps. Each of those maps is built by first estimating an intermediate density of moderate complexity, and then by computing the exact KR map from a reference density to the precomputed approximate density. In our work, we explore the use of sum-of-squares (SoS) densities and $\alpha$-divergences for approximating the intermediate densities. Combining SoS densities with $\alpha$-divergences interestingly yields convex optimization problems which can be efficiently solved using semidefinite programming. The main advantage of $\alpha$-divergences is to enable working with unnormalized densities, which provides benefits both numerically and theoretically. In particular, we provide a new convergence analysis of the sequential transport maps based on information-geometric properties of $\alpha$-divergences. The choice of intermediate densities is also crucial for the efficiency of the method. While tempered (or annealed) densities are the state of the art, we introduce diffusion-based intermediate densities which make it possible to approximate densities known from samples only. Such intermediate densities are well established in machine learning for generative modeling. Finally, we propose low-dimensional maps (or lazy maps) for dealing with high-dimensional problems and numerically demonstrate our methods on Bayesian inference problems and unsupervised learning tasks.
|
arxiv:2402.17943
|
We present an algorithm for the efficient simulation of the half-filled spinless $t$-$V$ model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless $t$-$V$ model on the honeycomb lattice and observe a suppression of the critical temperature of the charge density wave phase in the vicinity of a fermionic quantum critical point.
|
arxiv:1602.02095
|
We present a space-efficient implementation of the quantum verification of matrix products (QVMP) algorithm and demonstrate its functionality by running it on the Aer simulator with two simulation methods: statevector and matrix product state (MPS). We report circuit metrics (gate count, qubit count, circuit depth), transpilation time, simulation time, and a proof of Grover oracle correctness. Our study concludes that while QVMP can be simulated on moderately sized inputs, it cannot scale to a degree where we can observe any quantum advantage on current quantum hardware due to circuit depth and qubit count constraints. Further, the choice of simulation method has a noticeable impact on the size of the transpiled circuit, which slows down development.
|
arxiv:2208.09914
|
Visual instance search involves retrieving from a collection of images the ones that contain an instance of a visual query. Systems designed for visual instance search face the major challenge of scalability: a collection of a few million images used for instance search typically creates a few billion features that must be indexed. Furthermore, as real image collections grow rapidly, systems must also provide dynamicity, i.e., be able to handle on-line insertions while concurrently serving retrieval operations. Durability, which is the ability to recover correctly from software and hardware crashes, is the natural complement of dynamicity. Durability, however, has rarely been integrated within scalable and dynamic high-dimensional indexing solutions. This article addresses the issue of dynamicity and durability for scalable indexing of very large and rapidly growing collections of local features for instance retrieval. By extending the NV-tree, a scalable disk-based high-dimensional index, we show how to implement the ACID properties of transactions which ensure both dynamicity and durability. We present a detailed performance evaluation of the transactional NV-tree: (i) we show that the insertion throughput is excellent despite the overhead for enforcing the ACID properties; (ii) we also show that this transactional index is truly scalable using a standard image benchmark embedded in collections of up to 28.5 billion high-dimensional vectors, the largest single-server evaluations reported in the literature.
|
arxiv:1805.10942
|
Recently, graph neural network (GNN)-based vulnerability detection systems have achieved remarkable success. However, the lack of explainability poses a critical challenge to deploying black-box models in security-related domains. For this reason, several approaches have been proposed to explain the decision logic of the detection model by providing a set of crucial statements positively contributing to its predictions. Unfortunately, due to weakly-robust detection models and suboptimal explanation strategies, they run the risk of revealing spurious correlations and suffer from redundancy issues. In this paper, we propose Coca, a general framework aiming to 1) enhance the robustness of existing GNN-based vulnerability detection models to avoid spurious explanations; and 2) provide both concise and effective explanations to reason about the detected vulnerabilities. Coca consists of two core parts referred to as Trainer and Explainer. The former aims to train a detection model which is robust to random perturbation based on combinatorial contrastive learning, while the latter builds an explainer to derive crucial code statements that are most decisive to the detected vulnerability via dual-view causal inference as explanations. We apply Coca over three typical GNN-based vulnerability detectors. Experimental results show that Coca can effectively mitigate the spurious correlation issue and provide more useful high-quality explanations.
|
arxiv:2401.14886
|
We study the ultraviolet-complete non-relativistic theory recently proposed by Horava. After introducing a Lifshitz scalar for a general background, we analyze the cosmology of the model in Lorentzian and Euclidean signature. Vacuum solutions are found, and the existence of non-singular bouncing profiles is argued. We find a general qualitative agreement with both the picture of causal dynamical triangulations and quantum Einstein gravity. However, inflation driven by a Lifshitz scalar field on a classical background might not produce a scale-invariant spectrum when the principle of detailed balance is assumed.
|
arxiv:0904.0829
|
This work introduces a method for preprocessing measurements of electrical impedance tomography to considerably reduce the effect uncertainties in the electrode contacts have on the reconstruction quality, without a need to explicitly estimate the contacts. The idea is to compute the Jacobian matrix of the forward map with respect to the contact strengths and project the electrode measurements and the forward map onto the orthogonal complement of the range of this Jacobian. Using the smoothened complete electrode model as the forward model, it is demonstrated that inverting the resulting projected equation with respect to only the internal conductivity of the examined body results in good quality reconstructions, both when resorting to a single-step linearization with a smoothness prior and when combining lagged diffusivity iteration with total variation regularization. The quality of the reconstructions is further improved if the range of the employed projection is also orthogonal to that of the Jacobian with respect to the electrode positions. These results hold even if the projections are formed at internal and contact conductivities that significantly differ from the true ones; it is numerically demonstrated that the orthogonal complement of the range of the contact Jacobian is almost independent of the conductivity parameters at which it is evaluated. In particular, our observations introduce a numerical technique for inferring whether a change in the electrode measurements is caused by a change in the internal conductivity or by alterations in the electrode contacts, which has potential applications, e.g., in bedside monitoring of stroke patients. The ideas are tested both on simulated data and on real-world water tank measurements with adjustable contact resistances.
|
arxiv:2412.15009
|
This paper deals with the micro-macro derivation of virus models coupled with reaction-diffusion models that generate the dynamics in space of the virus particles. The first part of the presentation focuses, starting from [5, 6], on a survey and a critical analysis of some phenomenological models known in the literature. The second part shows how methods of the kinetic theory can be used to model the dynamics of the system treated in our paper. The third part deals with the derivation of macroscopic models from the underlying description, delivered within a general framework of the kinetic theory.
|
arxiv:2112.07302
|
In this short note we study homogenization of symmetric $d$-dimensional Lévy processes. Homogenization of one-dimensional pure-jump Markov processes was investigated by Tanaka et al. in 1992; their motivation was the work by Bensoussan et al. from 1975 on the homogenization of diffusion processes in $\mathbb{R}^d$. We investigate a similar problem for a class of symmetric pure-jump Lévy processes on $\mathbb{R}^d$ and we identify, using Mosco convergence, the limit process.
|
arxiv:1808.01667
|
Thanks to asteroseismology, constraints on the core rotation rate are available for hundreds of low- and intermediate-mass stars in evolved phases. Current physical processes tested in stellar evolution models cannot reproduce the evolution of these core rotation rates. We investigate the efficiency of the internal angular momentum redistribution in red giants during the hydrogen-shell and core-helium burning phases based on the asteroseismic determinations of their core rotation rates. We compute stellar evolution models with rotation and model the transport of angular momentum by the action of a sole dominant diffusive process parametrized by an additional viscosity. We constrain the values of this viscosity to match the mean core rotation rates of red giants and their behaviour with mass and evolution along the red giant branch and in the red clump. For red giants in the hydrogen shell-burning phase the transport of angular momentum must be more efficient in more massive stars. The additional viscosity is found to vary by approximately two orders of magnitude in the mass range $M \sim 1$-$2.5\,M_{\odot}$. As stars evolve along the red giant branch, the efficiency of the internal transport of angular momentum must increase for low-mass stars ($M \lesssim 2\,M_{\odot}$) and remain approximately constant for slightly higher masses ($2.0\,M_{\odot} \lesssim M \lesssim 2.5\,M_{\odot}$). In red-clump stars, the additional viscosities must be an order of magnitude higher than in younger red giants of similar mass during the hydrogen shell-burning phase. In combination with previous efforts, we obtain a clear picture of how the physical processes acting in stellar interiors should redistribute angular momentum from the end of the main sequence until the core-helium burning phase for low- and intermediate-mass stars to satisfy the asteroseismic constraints.
|
arxiv:2205.03490
|
Knowledge tracing is the process of tracking the mastery level of different skills of students for a given learning domain. It is one of the key components for building adaptive learning systems and has been investigated for decades. In parallel with the success of deep neural networks in other fields, we have seen researchers take similar approaches in the learning science community. However, most existing deep learning based knowledge tracing models either: (1) only use the correct/incorrect response (ignoring useful information from other modalities) or (2) design their network architectures through domain expertise via trial and error. In this paper, we propose a sequential model-based optimization approach that combines multimodal fusion and neural architecture search within one framework. The commonly used neural architecture search technique can be considered a special case of our proposed approach when there is only one modality involved. We further propose to use a new metric called time-weighted area under the curve (weighted AUC) to measure how a sequence model performs over time. We evaluate our methods on two public real datasets, showing that the discovered model is able to achieve superior performance. Unlike most existing works, we conduct McNemar's test on the model predictions, and the results are statistically significant.
|
arxiv:2111.04497
|
Quantum violation of Bell inequalities is now used in many quantum information applications, and it is important to analyze it both quantitatively and conceptually. In the present paper, we analyze violation of multipartite Bell inequalities via the local probability model, the LqHV (local quasi hidden variable) model [Loubenets, J. Math. Phys. 53, 022201 (2012)], incorporating the LHV model only as a particular case and correctly reproducing the probabilistic description of every quantum correlation scenario, more generally, every nonsignaling scenario. The LqHV probability framework allows us to construct nonsignaling analogs of Bell inequalities and to specify parameters quantifying violation of Bell inequalities, Bell's nonlocality, in a general nonsignaling case. For quantum correlation scenarios on an N-qudit state, we evaluate these nonlocality parameters analytically in terms of dilation characteristics of an N-qudit state and also numerically, in $d$ and $N$. In view of our rigorous mathematical description of Bell's nonlocality in a general nonsignaling case via the local probability model, we argue that violation of Bell inequalities in a quantum case is not due to violation of the Einstein-Podolsky-Rosen (EPR) locality conjectured by Bell but due to the improper HV modelling of "quantum realism".
|
arxiv:1612.06064
|
Data augmentation is a widely used technique to address the problem of text classification when there is a limited amount of training data. Recent work often tackles this problem using large language models (LLMs) like GPT3 that can generate new examples given already available ones. In this work, we propose a method to generate more helpful augmented data by utilizing the LLM's abilities to follow instructions and perform few-shot classifications. Our specific PromptMix method consists of two steps: 1) generate challenging text augmentations near class boundaries; however, generating borderline examples increases the risk of false positives in the dataset, so we 2) relabel the text augmentations using a prompting-based LLM classifier to enhance the correctness of labels in the generated data. We evaluate the proposed method in challenging 2-shot and zero-shot settings on four text classification datasets: Banking77, TREC6, Subjectivity (SUBJ), and Twitter Complaints. Our experiments show that generating and, crucially, relabeling borderline examples facilitates the transfer of knowledge of a massive LLM like GPT3.5-turbo into smaller and cheaper classifiers like DistilBERT$_{base}$ and BERT$_{base}$. Furthermore, 2-shot PromptMix outperforms multiple 5-shot data augmentation methods on the four datasets. Our code is available at https://github.com/servicenow/promptmix-emnlp-2023.
|
arxiv:2310.14192
|
We present element-to-element abundance ratios measured from high-dispersion spectra for 150 field subdwarfs and early subgiants with accurate Hipparcos parallaxes (errors < 20%). For 50 stars new spectra were obtained with UVES on Kueyen (VLT UT2), the McDonald 2.7m telescope, and SARG at TNG. Additionally, literature equivalent widths were taken from the works by Nissen & Schuster, Fulbright, and Prochaska et al. to complement our data. The whole sample includes both thick disk and halo stars (and a few thin disk stars); most stars have metallicities in the range -2 < [Fe/H] < -0.6. We found our data, that of Nissen & Schuster, and that of Prochaska to be of comparable quality; results from Fulbright scatter a bit more, but they are still of very good quality and are extremely useful due to the large size of his sample. The results of the present analysis will be used in forthcoming papers to discuss the chemical properties of the dissipational collapse and accretion components of our Galaxy.
|
arxiv:astro-ph/0303653
|
In this paper, we propose a monotone block coordinate descent method for solving the absolute value equation (AVE). Under appropriate conditions, we analyze the global convergence of the algorithm and conduct numerical experiments to demonstrate its feasibility and effectiveness.
|
arxiv:2412.11833
|
We calculate the leading isospin-conserving few-nucleon contributions to pion scattering on $^2$H, $^3$He, and $^4$He. We demonstrate that the strong contributions to the pion-nucleus scattering lengths can be controlled theoretically to an accuracy of a few percent for isoscalar nuclei and of 10% for isovector nuclei. In particular, we find the $\pi$-$^3$He scattering length to be $(62 \pm 4 \pm 7) \times 10^{-3} m_{\pi}^{-1}$, where the uncertainties are due to ambiguities in the $\pi$-N scattering lengths and few-nucleon effects, respectively. To establish this accuracy we need to identify a suitable power counting for pion-nucleus scattering. For this purpose we study the dependence of the two-nucleon contributions to the scattering length on the binding energy of $^2$H. Furthermore, we investigate the relative size of the leading two-, three-, and four-nucleon contributions. For the numerical evaluation of the pertinent integrals, a Monte Carlo method suitable for momentum space is devised. Our results show that in general the power counting suggested by Weinberg is capable of properly predicting the relative importance of $n$-nucleon operators; however, it fails to capture the relative strength of $n$- and $(n+1)$-nucleon operators, where we find a suppression by a factor of 5 compared to the predicted factor of 50. The relevance for the extraction of the isoscalar $\pi$-N scattering length from pionic $^2$H and $^4$He is discussed. As a side result, we show that the calculation of the $\pi$-$^2$H scattering length is already beyond the range of applicability of heavy pion effective field theory.
|
arxiv:1003.3826
|
Molecular adsorption on surfaces plays an important part in catalysis, corrosion, desalination, and various other processes that are relevant to industry and in nature. As a complement to experiments, accurate adsorption energies can be obtained using various sophisticated electronic structure methods that can now be applied to periodic systems. The adsorption energy of water on boron nitride substrates, going from zero to two-dimensional periodicity, is particularly interesting as it calls for an accurate treatment of polarizable electrostatics and dispersion interactions, as well as posing a practical challenge to experiments and electronic structure methods. Here, we present reference adsorption energies, static polarizabilities, and dynamic polarizabilities, for water on BN substrates of varying size and dimension. Adsorption energies are computed with coupled cluster theory, fixed-node quantum Monte Carlo (FNQMC), the random phase approximation (RPA), and second-order Møller-Plesset (MP2) theory. These explicitly correlated methods are found to agree in molecular as well as periodic systems. The best estimate of the water/h-BN adsorption energy is $-107 \pm 7$ meV from FNQMC. In addition, the water adsorption energy on the BN substrates could be expected to grow monotonically with the size of the substrate due to increased dispersion interactions, but interestingly, this is not the case here. This peculiar finding is explained using the static polarizabilities and molecular dispersion coefficients of the systems, as computed from time-dependent density functional theory (DFT). Dynamic as well as static polarizabilities are found to be highly anisotropic in these systems. In addition, the many-body dispersion method in DFT emerges as a particularly useful estimate of finite-size effects for other expensive, many-body wavefunction-based methods.
|
arxiv:1705.10705
|
In a bottom-up approach we investigate lepton-flavour violating processes $\tau \to 3\ell$ that are mediated by new physics encoded in effective-theory operators of dimension six. While the opportunity to scrutinize the underlying operator structure has been investigated before, we explore the benefits of utilising the polarization direction of the initial $\tau$ lepton and the angular distribution of the decay. Given the rarity of these events (if observed at all), we focus on integrated observables rather than spectra, such as partial rates and asymmetries. In an effort to estimate the number of events required to extract the coupling coefficients to the effective operators, we perform a phenomenological study with virtual experiments.
|
arxiv:1506.07786
|
We address combinatorial problems that can be formulated as minimization of a partially separable function of discrete variables (energy minimization in graphical models, weighted constraint satisfaction, pseudo-Boolean optimization, 0-1 polynomial programming). For polyhedral relaxations of such problems it is generally not true that variables that are integer in the relaxed solution will retain the same values in the optimal discrete solution. Those which do are called persistent. Such persistent variables define a part of a globally optimal solution. Once identified, they can be excluded from the problem, reducing its size. To any polyhedral relaxation we associate a sufficient condition proving persistency of a subset of variables. We set up a specially constructed linear program which determines the set of persistent variables that is maximal with respect to the relaxation. The condition improves as the relaxation is tightened and possesses all its invariances. The proposed framework explains a variety of existing methods originating from different areas of research and based on different principles. A theoretical comparison is established that relates these methods to the standard linear relaxation and proves that the proposed technique identifies the same or a larger set of persistent variables.
|
arxiv:1505.00571
|
As social issues related to gender bias attract closer scrutiny, accurate tools to determine the gender profile of large groups become essential. When explicit data is unavailable, gender is often inferred from names. Current methods follow a strategy whereby individuals of the group, one by one, are assigned a gender label or probability based on gender-name correlations observed in the population at large. We show that this strategy is logically inconsistent and has practical shortcomings, the most notable of which is the systematic underestimation of gender bias. We introduce a global inference strategy that estimates gender composition according to the context of the full list of names. The tool suffers from no intrinsic methodological effects, is robust against errors, easily implemented, and computationally light.
|
arxiv:2305.07587
|
We study the stability at blow-up and deformations of a class of Hermitian metrics whose fundamental two-form $\omega$ satisfies the condition $\partial\bar\partial\omega^k = 0$ for any $k$ between 1 and $n-1$ (where $n$ is the complex dimension of the manifold). We are motivated by the existence of compact complex manifolds supporting such metrics.
|
arxiv:2411.02567
|
The reversible part of the evolution equations of physical systems is often generated by a Poisson bracket. We discuss geometric means of construction of Poisson brackets and their mutual coupling (direct, semidirect and matched-pair products) as well as projections of Poisson brackets to less detailed Poisson brackets. In this way the Hamiltonian coupling between transport of mixtures and electrodynamics is elucidated.
|
arxiv:1607.02023
|
Let $G$ be an abelian group of order $n$ and let $R$ be a commutative ring which admits a homomorphism ${\Bbb Z}[\zeta_{n}] \to R$, where $\zeta_{n}$ is a (complex) primitive $n$-th root of unity. Given a finite $R[G]$-module $M$, we derive a formula relating the order of $M$ to the product of the orders of the various isotypic components $M^{\chi}$ of $M$, where $\chi$ ranges over the group of $R$-valued characters of $G$. We then give conditions under which the order of $M$ is exactly equal to the product of the orders of the $M^{\chi}$. To derive these conditions, we build on work of E. Aljadeff and obtain, as a by-product of our considerations, a new criterion for cohomological triviality which improves the well-known criterion of T. Nakayama. We also give applications to abelian varieties and to class groups of abelian fields, obtaining in particular some new class number formulas. Our results also have applications to "non-semisimple" Iwasawa theory, but we do not develop these here. In general, the results of this paper can be used to strengthen a variety of known results involving finite $R[G]$-modules whose hypotheses include (an equivalent form of) the following assumption: "the order of $G$ is invertible in $R$".
|
arxiv:math/0204210
|
In this paper we look at which Alexander and Markov theories can be defined for generalized knot theories.
|
arxiv:1902.04263
|
Modelling of stellar radiative intensities in various spectral pass-bands plays an important role in stellar physics. At the same time, directly calculating the high-resolution spectrum and then integrating it over the given spectral pass-band is computationally demanding due to the vast number of atomic and molecular lines. This is particularly so when employing three-dimensional (3D) models of stellar atmospheres. To accelerate the calculations, one can employ approximate methods, e.g., the use of opacity distribution functions (ODFs). Generally, ODFs provide a good approximation of traditional spectral synthesis, i.e., computation of intensities through filters with strictly rectangular transmission functions. However, their performance strongly deteriorates when the filter transmission noticeably changes within its pass-band, which is the case for almost all filters routinely used in stellar physics. In this context, the aims of this paper are a) to generalize the ODFs method for calculating intensities through filters with arbitrary transmission functions; and b) to study the performance of the standard and generalized ODFs methods for calculating intensities emergent from 3D models of stellar atmospheres. For this purpose we use the newly-developed MPS-ATLAS radiative transfer code to compute intensities emergent from 3D cubes simulated with the radiative magnetohydrodynamics code MURaM. The calculations are performed in the 1.5D regime, i.e., along many parallel rays passing through the simulated cube. We demonstrate that the generalized ODFs method allows accurate and fast syntheses of spectral intensities and their centre-to-limb variations.
|
arxiv:2104.13661
|
This article explores the structural properties of molecular beam epitaxy grown {CdO/MgO} superlattices on sapphire substrates of different crystallographic orientations (a-, c-, r-, and m-plane). The investigations involve a comprehensive analysis using X-ray diffraction and Raman spectroscopy. High-resolution X-ray diffraction studies unveil a significant influence of surface symmetry on both the substrates and the epitaxial layers, particularly with respect to the occurrence of twins in the superlattices. Remarkably, no twins are observed on r-oriented sapphire substrates, resulting in improved interface and crystallographic quality. The results of the studies demonstrated in this work show that the growth rate of CdO sublayers within {CdO/MgO} superlattices is intricately dependent on the substrate orientation. Notably, the c-plane and m-plane sapphire substrates yielded thicker CdO sublayers, indicating comparable growth rates for these crystallographic orientations. Conversely, the a-plane and r-plane orientations seemed to favor a slower growth rate of CdO sublayers.
|
arxiv:2502.11754
|
Theoretical models are vital for exploring the galaxy merger process, which plays a crucial role in the evolution of galaxies. Recent advances in modelling have placed tight constraints on the buildup of stellar material in galaxies across cosmic time. Despite these successes, extracting the merger rates from observable data remains a challenge. Differences in modelling techniques, combined with limited observational data, drive conflicting conclusions on the merging timescales of close pairs. We employ an empirical model for galaxy formation that links galaxy properties to the growth of simulated dark matter halos, along with mock lightcone galaxy catalogues, to probe the dependencies of pair merging probabilities and merging timescales. In this work, we demonstrate that the pair merging probabilities are best described by a logistic function and that mean merging timescales can be well approximated by a linear relation in the projected separation and line-of-sight velocity difference in observed pairs. Together, our fitting formulae can accurately predict merger rates from galaxy pairs to at least $z \sim 4$ under a wide variety of pair selection criteria. Additionally, we show that some commonly used pair selection criteria may not represent a suitable sample of galaxies to reproduce underlying merger rates. Finally, we conclude from our analysis that observation timescales are primarily driven by dynamics and are not strongly impacted by the star formation properties of the component galaxies.
|
arxiv:2011.05341
|
The hydrogen deficiency in extremely hot post-AGB stars of spectral class PG1159 is probably caused by a (very) late helium-shell flash or an AGB final thermal pulse that consumes the hydrogen envelope, exposing the usually-hidden intershell region. Thus, the photospheric element abundances of these stars allow us to draw conclusions about details of nuclear burning and mixing processes in the precursor AGB stars. We compare predicted element abundances to those determined by quantitative spectral analyses performed with advanced non-LTE model atmospheres. A good qualitative and quantitative agreement is found for many species (He, C, N, O, Ne, F, Si), but discrepancies for others (P, S, Fe) point at shortcomings in stellar evolution models for AGB stars.
|
arxiv:astro-ph/0603225
|
We study ZZ instanton corrections in the $(2,4k)$ $\mathcal{N}=1$ minimal superstring theory with the type 0B GSO projection, which becomes the type 0B $\mathcal{N}=1$ super-JT gravity in the $k \to \infty$ limit. Each member of the $(2,4k)$ family of theories has two phases distinguished by the sign of the Liouville bulk cosmological constant. The worldsheet method for computing the one-loop normalization constant multiplying the instanton corrections gives an ill-defined answer in both phases. We fix these divergences using insights from string field theory and find finite, unambiguous results. Each member of the $(2,4k)$ family of theories is dual to a double-scaled one-matrix integral, where the double-scaling limit can be obtained starting either from a unitary matrix integral with a leading one-cut saddle point, or from a Hermitian matrix integral with a leading two-cut saddle point. The matrix integral exhibits a gap-closing transition, which is the same as the double-scaled Gross-Witten-Wadia transition when $k = 1$. We also compute instanton corrections in the double-scaled matrix integral for all $k$ and in both phases, and find perfect agreement with the string theory results.
|
arxiv:2406.16867
|
In this paper, we extend the energy-Casimir stability method for deterministic Lie-Poisson Hamiltonian systems to provide sufficient conditions for the stability in probability of stochastic dynamical systems with symmetries and multiplicative noise. We illustrate this theory with classical examples of coadjoint motion, including the rigid body, the heavy top and the compressible Euler equation in two dimensions. The main result of this extension is that stable deterministic equilibria remain stable in probability up to a certain stopping time which depends on the amplitude of the noise for finite-dimensional systems and on the amplitude of the spatial derivative of the noise for infinite-dimensional systems.
|
arxiv:1702.03899
|
We present the contribution from potential interactions to the dynamics of non-spinning binaries to fourth post-Minkowskian (4PM) order. This is achieved by computing the scattering angle to ${\cal O}(G^4)$ using the effective field theory approach and deriving the bound radial action through analytic continuation. We reconstruct the Hamiltonian and center-of-mass momentum in an isotropic gauge. The (three-loop) integrals involved in our calculation are computed via differential equations, including a sector yielding elliptic integrals. Using the universal link between potential and tail terms, we also report: 1) the instantaneous energy flux at ${\cal O}(G^3)$; 2) the contribution to the 4PM unbound/bound radial action(s) depending on logarithms of the binding energy; 3) the (scheme-independent) logarithmic contribution to the 4PM non-local tail Hamiltonian for circular orbits. Our results in the potential region are in agreement with the recent derivation from scattering amplitudes. We also find perfect agreement in the overlap with the state of the art in post-Newtonian theory.
|
arxiv:2106.08276
|
Matrix product codes are generalizations of some well-known constructions of codes, such as Reed-Muller codes, the $[u+v, u-v]$-construction, etc. Recently, a bound for the symbol-pair distance of a matrix product code was given in \cite{lel}, and new families of MDS symbol-pair codes were constructed by using this bound. In this paper, we generalize this bound to the $b$-symbol distance of a matrix product code and determine all minimum $b$-symbol distances of Reed-Muller codes. We also give a bound for the minimum $b$-symbol distance of codes obtained from the $[u+v, u-v]$-construction, and use this bound to construct some $[2n, 2n-2]_q$-linear $b$-symbol almost MDS codes with arbitrary length. All the minimum $b$-symbol distances of $[n, n-1]_q$-linear codes and $[n, n-2]_q$-linear codes for $1 \leq b \leq n$ are determined. Some examples are presented to illustrate these results.
|
arxiv:2309.08920
|
We compute the Hadwiger-Nelson numbers $\chi(E^2)$ for certain number fields $E$, that is, the smallest number of colors required to color the points in the plane with coordinates in $E$ so that no two points at distance $1$ from one another have the same color. Specifically, we show that $\chi(\mathbb{Q}(\sqrt{2})^2) = 2$, that $\chi(\mathbb{Q}(\sqrt{3})^2) = 3$, that $\chi(\mathbb{Q}(\sqrt{7})^2) = 3$ despite the fact that the graph $\Gamma(\mathbb{Q}(\sqrt{7})^2)$ is triangle-free, and that $4 \leq \chi(\mathbb{Q}(\sqrt{3}, \sqrt{11})^2) \leq 5$. We also discuss some results over other fields, in particular other quadratic fields. We conclude with some comments on the use of the axiom of choice.
|
arxiv:1509.07023
|
The sources of IceCube neutrinos are as yet unknown. The multi-messenger observation of their emission in $\gamma$-rays can be a guide to their identification, as exemplified by the case of TXS 0506+056. We suggest a new method of searching for $\gamma$-rays with imaging air Cherenkov telescopes from sources in coincidence with possible astrophysical neutrinos. We propose that searches for $\gamma$-rays are extended, from the current practice of only a few days, to up to one month from a neutrino alert. We test this strategy on simulated TXS 0506+056-like sources, emitting neutrinos and $\gamma$-rays via photohadronic interactions: the $\gamma$-rays are subsequently reprocessed in the VHE range. Using MAGIC as a benchmark example, we show that current Cherenkov telescopes should be able to detect $\gamma$-ray counterparts to neutrino alerts at a rate of approximately one per year. It has been proposed that the high-energy diffuse neutrino flux can be explained by $\sim 5\%$ of all blazars flaring in neutrinos once every 10 years, with a neutrino luminosity similar to that of TXS 0506+056 during the 2014-2015 neutrino flare. The implementation of our strategy could lead, over a timescale of one or a few years, either to the detection of this subclass of blazars contributing to the diffuse neutrino flux, or to a constraint on this model.
|
arxiv:2105.14043
|
Crowd counting, i.e., estimating the number of pedestrians in crowd images, is emerging as an important research problem with public security applications. A key component of crowd counting systems is the construction of counting models which are robust to various scenarios under factors such as camera perspective and physical barriers. In this paper, we present an adaptive scenario discovery framework for crowd counting. The system is structured with two parallel pathways that are trained with different sizes of the receptive field to represent different scales and crowd densities. After ensuring that these components are present in the proper geometric configuration, a third branch is designed to adaptively recalibrate the pathway-wise responses by discovering and modeling the dynamic scenarios implicitly. Our system is able to represent highly variable crowd images and achieves state-of-the-art results on two challenging benchmarks.
|
arxiv:1812.02393
|
Federated learning (FL) is a framework which enables distributed model training using a large corpus of decentralized training data. Existing methods aggregate models disregarding their internal representations, which are crucial for training models in vision tasks. System and statistical heterogeneity (e.g., highly imbalanced and non-i.i.d. data) further harm model training. To this end, we introduce a method, called FedProto, which computes client deviations using margins of prototypical representations learned on distributed data, and applies them to drive federated optimization via an attention mechanism. In addition, we propose three methods to analyse statistical properties of feature representations learned in FL, in order to elucidate the relationship between accuracy, margins and feature discrepancy of FL models. In experimental analyses, FedProto demonstrates state-of-the-art accuracy and convergence rate across image classification and semantic segmentation benchmarks by enabling maximum-margin training of FL models. Moreover, FedProto reduces the uncertainty of predictions of FL models compared to the baseline. To our knowledge, this is the first work evaluating FL models in dense prediction tasks, such as semantic segmentation.
|
arxiv:2105.08982
|
Deshpande and Rankin (DR1999, DR2001) claim that the frequency of the very narrow feature in the spectrum of radio flux variations of PSR B0943+10 is an alias of its actual value. They also claim to have detected an amplitude modulation on the above phase modulation. This paper argues that both these claims are unjustified.
|
arxiv:astro-ph/0305013
|
If $X$ is a geodesic metric space and $x_1, x_2, x_3 \in X$, a geodesic triangle $T = \{x_1, x_2, x_3\}$ is the union of the three geodesics $[x_1 x_2]$, $[x_2 x_3]$ and $[x_3 x_1]$ in $X$. The space $X$ is $\delta$-hyperbolic (in the Gromov sense) if any side of $T$ is contained in a $\delta$-neighborhood of the union of the two other sides, for every geodesic triangle $T$ in $X$. If $X$ is hyperbolic, we denote by $\delta(X)$ the sharp hyperbolicity constant of $X$, i.e., $\delta(X) = \inf\{\delta \ge 0 : X \text{ is } \delta\text{-hyperbolic}\}$. Some previous works characterize the hyperbolic product graphs (for the Cartesian, strong, join, corona and lexicographic products) in terms of properties of the factor graphs. However, the problem with the direct product is more complicated. In this paper, we prove that if the direct product $G_1 \times G_2$ is hyperbolic, then one factor is hyperbolic and the other one is bounded. Also, we prove that this necessary condition is, in fact, a characterization in many cases. In other cases, we find characterizations which are not so simple. Furthermore, we obtain formulae or good bounds for the hyperbolicity constant of the direct product of some important graphs.
|
arxiv:1611.04372
|
Mental health counseling is an enterprise with profound societal importance where conversations play a primary role. In order to acquire the conversational skills needed to face a challenging range of situations, mental health counselors must rely on training and on continued experience with actual clients. However, in the absence of large-scale longitudinal studies, the nature and significance of this developmental process remain unclear. For example, prior literature suggests that experience might not translate into consequential changes in counselor behavior. This has led some to even argue that counseling is a profession without expertise. In this work, we develop a computational framework to quantify the extent to which individuals change their linguistic behavior with experience and to study the nature of this evolution. We use our framework to conduct a large longitudinal study of mental health counseling conversations, tracking over 3,400 counselors across their tenure. We reveal that overall, counselors do indeed change their conversational behavior to become more diverse across interactions, developing an individual voice that distinguishes them from other counselors. Furthermore, a finer-grained investigation shows that the rate and nature of this diversification vary across functionally different conversational components.
|
arxiv:1906.07194
|
We have investigated the outburst properties of low-mass X-ray binary transients (LMXBTs) based on a comprehensive study of the outbursts observed in the past few decades. The outburst rates were estimated based on the X-ray monitoring data from Swift/BAT, RXTE/ASM and MAXI, and previous reports in the literature. We found that almost all LMXBTs with orbital periods below $\sim$12 hr showed only one outburst in these observations. There is a systematic difference in the outburst rate between long-period ($P_{\rm orb} \gtrsim 12$ hr) and short-period ($P_{\rm orb} \lesssim 12$ hr) systems. We infer that the mass transfer rate is responsible for the systematic difference, since the disk instability model (DIM) suggests that the mass transfer rate is a key factor affecting the quiescence time. The difference in outburst rate between long-period and short-period LMXBTs is probably due to the different mass transfer mechanisms at different evolutionary stages of the donors. Based on the evolutionary tracks of single stars, we derived the critical orbital period for X-ray binaries that harbor a subgiant donor at various metallicities. The critical orbital period ($P_{\rm orb,crit} = 12.4$ hr) is consistent with the above orbital period boundary obtained from the statistics of outburst rates. Furthermore, we found a negative correlation between the outburst rate and the orbital period in the samples for which the luminosity class of the donor star is III/IV. The best-fitting power-law index for the black hole subsamples is roughly consistent with the theoretical prediction for those systems with a donor star evolved off the main sequence.
|
arxiv:1901.00239
|
We turn back to the well-known problem of the interpretation of the Schrödinger operator with the pseudopotential being the first derivative of the Dirac function. We show that the problem in its conventional formulation contains hidden parameters and the choice of the proper selfadjoint operator is ambiguously determined. We study the asymptotic behavior of spectra and eigenvectors of the Hamiltonians with increasing smooth potentials perturbed by short-range potentials. Appropriate solvable models are constructed and the corresponding approximation theorems are proved. We introduce the concepts of the resonance set and the coupling function, which are spectral characteristics of the shape of squeezed potentials. The selfadjoint operators in the solvable models are determined by means of the resonance set and the coupling function.
|
arxiv:0909.1034
|
Understanding the emergence of collective organizational phenomena is a major goal in many fields of physics, from condensed matter to cosmology. Using a recently introduced many-body perturbation formalism for fermions, we propose a mechanism for the emergence of collective behavior, specifically superfluidity, driven by quantum statistics and the enforcement of the Pauli principle through the selection of normal modes. The method, which is called symmetry invariant perturbation theory (SPT), uses group theory and graphical techniques to solve the many-body Schrödinger equation through first order exactly. The solution at first order defines collective coordinates in terms of five N-body normal modes, identified as breathing, center of mass, single-particle angular excitation, single-particle radial excitation and phonon. A correspondence is established "on paper" that enforces the Pauli principle through the assignment of specific normal mode quantum numbers. Applied in the unitary regime, this normal mode assignment yields occupation only in an extremely low-frequency N-body phonon mode at ultralow temperatures. A single-particle radial excitation mode at a much higher frequency creates a gap that stabilizes the superfluidity at low temperatures. Coupled with the corresponding values for the frequencies at unitarity obtained by this many-body calculation, we obtain good agreement with experimental thermodynamic results, including the lambda transition in the specific heat. Our results suggest that the emergence of collective behavior in macroscopic systems is driven by the Pauli principle and its selection of the correct collective coordinates in the form of N-body normal modes.
|
arxiv:1803.05977
|
we study the axisymmetric response of a complete spherical shell under homogeneous compressive pressure $ p $ to an additional point force. for a pressure $ p $ below the classical critical buckling pressure $ p _ c $, indentation by a point force does not lead to spontaneous buckling but an energy barrier has to be overcome. the states at the maximum of the energy barrier represent a subcritical branch of unstable stationary points, which are the transition states to a snap - through buckled state. starting from nonlinear shallow shell theory we obtain a closed analytical expression for the energy barrier height, which facilitates its effective numerical evaluation as a function of pressure by continuation techniques. we find a clear crossover between two regimes : for $ p / p _ c \ ll 1 $ the post - buckling barrier state is a mirror - inverted pogorelov dimple, and for $ ( 1 - p / p _ c ) \ ll 1 $ the barrier state is a shallow dimple with indentations smaller than shell thickness and exhibits extended oscillations, which are well described by linear response. we find systematic expansions of the nonlinear shallow shell equations about the pogorelov mirror - inverted dimple for $ p / p _ c \ ll 1 $ and the linear response state for $ ( 1 - p / p _ c ) \ ll 1 $, which enable us to derive asymptotic analytical results for the energy barrier landscape in both regimes. upon approaching the buckling bifurcation at $ p _ c $ from below, we find a softening of an ideal spherical shell. the stiffness for the linear response to point forces vanishes $ \ propto ( 1 - p / p _ c ) ^ { 1 / 2 } $ ; the buckling energy barrier vanishes $ \ propto ( 1 - p / p _ c ) ^ { 3 / 2 } $. in the pogorelov limit, the energy barrier maximum diverges $ \ propto ( p / p _ c ) ^ { - 3 } $ and the corresponding indentation diverges $ \ propto ( p / p _ c ) ^ { - 2 } $. numerical prefactors for proportionalities both in the softening and the pogorelov regime are calculated analytically.
|
arxiv:1808.10183
|
a generalized 1 - in - 3sat problem is defined and found to be in complexity class p when restricted to a certain subset of cnf expressions. in particular, 1 - in - ksat with no restrictions on the number of literals per clause can be decided in polynomial time when restricted to exact read - 3 formulas with equal number of clauses ( m ) and variables ( n ), and no pure literals. also individual instances can be checked for easiness with respect to a given sat problem. by identifying whole classes of formulas as being solvable efficiently the approach might be of interest also in the complementary search for hard instances.
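as an illustration of the restricted class described above, the sketch below checks whether a cnf instance is an exact read - 3 formula with equally many clauses and variables and no pure literals; the clause encoding ( tuples of signed integers ) and the function name are assumptions made for this example, not part of the paper.

```python
from collections import Counter

def in_easy_class(clauses):
    """Return True if the CNF instance (list of clauses, each a tuple of
    nonzero ints, negative = negated literal) is an exact read-3 formula
    with as many clauses as variables and no pure literals."""
    lits = [l for clause in clauses for l in clause]
    occurrences = Counter(abs(l) for l in lits)      # total occurrences per variable
    polarities = {}                                  # variable -> set of polarities seen
    for l in lits:
        polarities.setdefault(abs(l), set()).add(l > 0)
    read3 = all(c == 3 for c in occurrences.values())
    equal_m_n = len(clauses) == len(occurrences)
    no_pure = all(len(p) == 2 for p in polarities.values())
    return read3 and equal_m_n and no_pure

# toy instance: 3 clauses over 3 variables, every variable read 3 times, no pure literals
print(in_easy_class([(1, -2, 3), (-1, 2, -3), (1, 2, 3)]))   # True
```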
|
arxiv:1707.00118
|
plasmonics is based on surface plasmon polariton ( spp ) modes which can be laterally confined below the diffraction limit, thereby enabling ultracompact optical components. in order to exploit this potential, the fundamental bottleneck of poor light - spp coupling must be overcome. in established spp sources ( using prism, grating or nanodefect coupling ) incident light is a source of noise for the spp, unless the illumination occurs away from the region of interest, increasing the system size and weakening the spp intensity. back - side illumination of subwavelength apertures in optically thick metal films eliminates this problem but does not ensure a unique propagation direction for the spp. we propose a novel back - side slit - illumination method based on drilling a periodic array of indentations at one side of the slit. we demonstrate that the spp running in the array direction can be suppressed, and the one propagating in the opposite direction enhanced, providing localized unidirectional spp launching.
|
arxiv:cond-mat/0703407
|
ultralight axions and dark photons are well - motivated dark matter candidates. inside the plasma, once the mass of ultralight dark matter candidates equals the plasma frequency, they can resonantly convert into electromagnetic waves, due to the coupling between the ultralight dark matter particles and the standard model photons. the converted electromagnetic waves are monochromatic. in this article, we review the development of using radio detectors to search for ultralight dark matter conversions in the solar corona and solar wind plasma.
|
arxiv:2304.01056
|
we present an approach that generalizes in a natural way the perturbative qcd formalism developed by brodsky and lepage for the study of exclusive hadronic processes to the case of $ l \ neq 0 $ mesons. as an application of our approach we consider here the production of meson pairs, involving tensor and pseudotensor mesons, in photon - photon collisions.
|
arxiv:hep-ph/9706349
|
science in science fiction is the study of how science is portrayed in works of science fiction, including novels, stories, and films. it covers a large range of topics. hard science fiction is based on engineering or the " hard " sciences ( for example, physics, astronomy, or chemistry ). soft science fiction is based on the " soft " sciences, and especially the social sciences ( anthropology, sociology, psychology, or political science ). the accuracy of the science portrayed spans a wide range - sometimes it is an extrapolation of existing technology, sometimes it is a realistic or plausible portrayal of a technology that does not exist, but which is plausible from a scientific perspective ; and sometimes it is simply a plot device that looks scientific, but has no basis in science. examples are : realistic case : in 1944, the science fiction story deadline by cleve cartmill depicted the atomic bomb. this technology was real, though unknown to the author. extrapolation : arthur c. clarke wrote about space elevators, basically a long cable extending from the earth ' s surface to geosynchronous orbit. while we cannot build one today, it violates no physical principles. plot device : the classic example of an unsupported plot device is faster - than - light drive, often called a " warp drive ". it is unsupported by physics as we know it, but needed for galaxy - wide plots with human lifespans. criticism and commentary on how science is portrayed in science fiction is done by academics from science, literature, film studies, and other disciplines ; by literary critics and film critics ; and by science fiction writers and sci fi fans and bloggers. = = hard science in science fiction = = planets in science fiction time travel in science fiction weapons in science fiction materials science in science fiction genetics in fiction = = social science in science fiction = = sex and sexuality in speculative fiction women in science fiction gender in speculative fiction reproduction and pregnancy in speculative fiction = = see also = = category fiction about physics physics and star wars = = references = = = = bibliography = = the science in science fiction by brian stableford, david langford, & peter nicholls ( 1982 ) science fiction with good astronomy & physics
|
https://en.wikipedia.org/wiki/Science_in_science_fiction
|
modern artificial intelligence ( ai ) applications are increasingly utilizing multi - tenant deep neural networks ( dnns ), which lead to a significant rise in computing complexity and the need for computing parallelism. reram - based processing - in - memory ( pim ) computing, with its high density and low power consumption characteristics, holds promising potential for supporting the deployment of multi - tenant dnns. however, direct deployment of complex multi - tenant dnns on existing reram - based pim designs poses challenges. resource contention among different tenants can result in severe under - utilization of on - chip computing resources. moreover, area - intensive operators and computation - intensive operators require excessively large on - chip areas and long processing times, leading to high overall latency during parallel computing. to address these challenges, we propose a novel reram - based in - memory computing framework that enables efficient deployment of multi - tenant dnns on reram - based pim designs. our approach tackles the resource contention problems by iteratively partitioning the pim hardware at tenant level. in addition, we construct a fine - grained reconstructed processing pipeline at the operator level to handle area - intensive operators. compared to the direct deployments on traditional reram - based pim designs, our proposed pim computing framework achieves significant improvements in speed ( ranging from 1. 75x to 60. 43x ) and energy ( up to 1. 89x ).
|
arxiv:2408.04812
|
quantum work capacitances and maximal asymptotic work / energy ratios are figures of merit characterizing the robustness against noise of work extraction processes in quantum batteries formed by collections of quantum systems. in this paper we establish a direct connection between these functionals and, exploiting this result, we analyze different types of noise models mimicking self - discharging, thermalization and dephasing effects. in this context we show that input quantum coherence can significantly improve the storage performance of noisy quantum batteries and that the maximum output ergotropy is not always achieved by the maximum available input energy.
|
arxiv:2305.16803
|
pulse self - compression is a simple and economical method for improving the peak power of ultra - intense laser pulses. by solving a modified nonlinear schrodinger equation considering the fifth - order susceptibility, we found that self - compression appeared even in normally dispersive medium owing to the negative fifth - order susceptibility inducing a mass of negative frequency chirp. furthermore, negatively pre - chirped pulses allow for self - compression at lower intensity, avoiding medium damage. we numerically analyze the optimal choice of pre - chirp, input intensity, and medium length. a proof - of - principle experiment successfully proves the above theoretical findings. it is expected that petawatt or even exawatt laser pulses with 25 fs / 15 fs transform limited pulse duration can be self - compressed to about 9. 9 fs / 7. 6 fs in normally dispersive medium, such as fused silica glass plate.
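for orientation, a generic modified nonlinear schrodinger equation with cubic and quintic ( fifth - order susceptibility ) terms has the schematic form below; the exact normalization, higher - order dispersion terms and sign conventions used by the authors may differ, so this should be read only as a sketch.

$$ i\,\partial_z A = \frac{\beta_2}{2}\,\partial_T^2 A - \gamma_3 |A|^2 A - \gamma_5 |A|^4 A , $$

where a negative fifth - order susceptibility corresponds to $ \gamma_5 < 0 $ and is the source of the negative frequency chirp mentioned above.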
|
arxiv:2306.12743
|
multimodal generative models that can understand and generate across multiple modalities are dominated by autoregressive ( ar ) approaches, which process tokens sequentially from left to right, or top to bottom. these models jointly handle images, text, video, and audio for various tasks such as image captioning, question answering, and image generation. in this work, we explore discrete diffusion models as a unified generative formulation in the joint text and image domain, building upon their recent success in text generation. discrete diffusion models offer several advantages over ar models, including improved control over quality versus diversity of generated samples, the ability to perform joint multimodal inpainting ( across both text and image domains ), and greater controllability in generation through guidance. leveraging these benefits, we present the first unified multimodal discrete diffusion ( unidisc ) model which is capable of jointly understanding and generating text and images for a variety of downstream tasks. we compare unidisc to multimodal ar models, performing a scaling analysis and demonstrating that unidisc outperforms them in terms of both performance and inference - time compute, enhanced controllability, editability, inpainting, and flexible trade - off between inference time and generation quality. code and additional visualizations are available at https : / / unidisc. github. io.
|
arxiv:2503.20853
|
we prove an irreducibility criterion for polynomials with power series coefficients generalizing previous known results concerning quasi - ordinary polynomials.
|
arxiv:1605.05577
|
we show how a single flux quantum can be effectively manipulated in a superconducting film with a matrix of blind holes. such a sample can serve as a basic memory element, where the position of the vortex in a [ k x l ] matrix of pinning sites defines the desired combination of n bits of information ( 2 ^ n = k * l ). vortex placement is achieved by strategically applied current and the resulting position is read - out via generated voltage between metallic contacts on the sample. such a device can also act as a controllable source of a nanoengineered local magnetic field for e. g. spintronics applications.
|
arxiv:1005.1790
|
this paper introduces the multilingual librispeech ( mls ) dataset, a large multilingual corpus suitable for speech research. the dataset is derived from read audiobooks from librivox and consists of 8 languages, including about 44. 5k hours of english and a total of about 6k hours for other languages. additionally, we provide language models ( lm ) and baseline automatic speech recognition ( asr ) models for all the languages in our dataset. we believe such a large transcribed dataset will open new avenues in asr and text - to - speech ( tts ) research. the dataset will be made freely available for anyone at http : / / www. openslr. org.
|
arxiv:2012.03411
|
we prove the global existence of the non - negative unique mild solution for the cauchy problem of the cutoff boltzmann equation for soft potential model $ - 1 \ leq \ gamma < 0 $ with the small initial data in three dimensional space. thus our result fixes the gap for the case $ \ gamma = - 1 $ in three dimensional space in the authors ' previous work where the estimate for the loss term was improperly used. the other gap there for the case $ \ gamma = 0 $ in two dimensional space is recently fixed by chen, denlinger and pavlovi \ ' { c }. the initial data $ f _ { 0 } $ is non - negative, small in weighted $ l ^ { 3 } _ { x, v } $ and finite in weighted $ l ^ { 15 / 8 } _ { x, v } $. we also show that the solution scatters with respect to the kinetic transport operator. the novel contribution of this work lies in the exploration of the symmetric property of the gain term in terms of weighted estimate. it is the key ingredient for solving the model $ - 1 < \ gamma < 0 $ when applying the strichartz estimates.
|
arxiv:2203.10756
|
we study three types of generalized partial fractional operators. an extension of green ' s theorem, by considering partial fractional derivatives with more general kernels, is proved. new results are obtained, even in the particular case when the generalized operators are reduced to the standard partial fractional derivatives and fractional integrals in the sense of riemann - liouville or caputo.
|
arxiv:1205.4851
|
by the introduction of a generalized evans function defined by an appropriate 2 - modified fredholm determinant, we give a simple proof of convergence in location and multiplicity of hill ' s method for numerical approximation of spectra of periodic - coefficient ordinary differential operators. our results apply to operators of nondegenerate type, under the condition that the principal coefficient matrix be symmetric positive definite ( automatically satisfied in the scalar case ). notably, this includes a large class of nonselfadjoint operators, which were previously not treated. the case of general coefficients depends on an interesting operator - theoretic question regarding properties of toeplitz matrices.
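as a minimal numerical illustration of hill's method ( the scalar schrodinger case mentioned above, not the general matrix - coefficient setting treated in the paper ), the sketch below approximates the spectrum of $ - d ^ 2 / d x ^ 2 + q ( x ) $ with $ 2 \pi $ - periodic $ q $ by truncating the fourier ( hill ) matrix at each floquet exponent; the truncation size and the mathieu - type potential are arbitrary choices for the example.

```python
import numpy as np

def hill_matrix(q_hat, mu, N):
    """Truncated Hill (Fourier) matrix for -u'' + q(x) u with 2*pi-periodic q,
    at Floquet exponent mu.  q_hat maps the integer m to the Fourier
    coefficient of q on exp(i m x); missing entries are treated as zero."""
    ks = np.arange(-N, N + 1)
    H = np.diag((mu + ks) ** 2).astype(complex)
    for i in range(len(ks)):
        for j in range(len(ks)):
            H[i, j] += q_hat.get(ks[i] - ks[j], 0.0)
    return H

# Mathieu-type example: q(x) = 2 cos(x), i.e. q_hat[1] = q_hat[-1] = 1
q_hat = {1: 1.0, -1: 1.0}
lowest = []
for mu in np.linspace(-0.5, 0.5, 41):                 # sample the Floquet exponent
    evals = np.sort(np.linalg.eigvals(hill_matrix(q_hat, mu, N=16)).real)
    lowest.append(evals[0])
print(min(lowest), max(lowest))                       # approximate edges of the lowest band
```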
|
arxiv:1009.3908
|
an expression for the stress tensor near an external boundary of a discrete mechanical system is derived explicitly in terms of the constituents ' degrees of freedom and interaction forces. the starting point is the exact and general coarse graining formulation presented by goldhirsch in [ i. goldhirsch, gran. mat., 12 ( 3 ) : 239 - 252, 2010 ], which is consistent with the continuum equations everywhere but does not account for boundaries. our extension accounts for the boundary interaction forces in a self - consistent way and thus allows the construction of continuous stress fields that obey the macroscopic conservation laws even within one coarse - graining width of the boundary. the resolution and shape of the coarse - graining function used in the formulation can be chosen freely, such that both microscopic and macroscopic effects can be studied. the method does not require temporal averaging and thus can be used to investigate time - dependent flows as well as static and steady situations. finally, the aforementioned continuous field can be used to define ' fuzzy ' ( highly rough ) boundaries. two discrete particle method simulations are presented in which the novel boundary treatment is exemplified, including a chute flow over a base with roughness greater than a particle diameter.
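for context, the bulk coarse - grained stress in the goldhirsch formulation is usually written as a kinetic plus a contact contribution of roughly the form below ( sign conventions and the pair - sum bookkeeping vary between references, and the boundary terms that are the subject of this paper are not shown ) :

$$ \sigma_{\alpha\beta}(\mathbf r,t) = -\sum_i m_i\, v'_{i\alpha} v'_{i\beta}\, \phi(\mathbf r-\mathbf r_i) \;-\; \tfrac12 \sum_{i\neq j} f_{ij\alpha}\, r_{ij\beta} \int_0^1 \phi(\mathbf r-\mathbf r_i + s\,\mathbf r_{ij})\, ds , $$

where $ \phi $ is the coarse - graining function, $ \mathbf v'_i $ the fluctuation velocity, $ \mathbf f_{ij} $ the interaction force and $ \mathbf r_{ij} = \mathbf r_i - \mathbf r_j $.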
|
arxiv:1108.5032
|
the grad - cam algorithm provides a way to identify what parts of an image contribute most to the output of a classifier deep network. the algorithm is simple and widely used for localization of objects in an image, although some researchers have pointed out its limitations and proposed various alternatives. one of them is grad - cam + +, which according to its authors can provide better visual explanations for network predictions, and does a better job at locating objects even for occurrences of multiple object instances in a single image. here we show that grad - cam + + is practically equivalent to a very simple variation of grad - cam in which gradients are replaced with positive gradients.
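the variation referred to above is easy to state in code : standard grad - cam averages the gradients of the class score over space to weight the activation channels, and the simple variation replaces those gradients by their positive part before averaging. the sketch below uses plain numpy arrays standing in for a real network's activations and gradients; the function name and tensor layout are assumptions for the example, not the authors' code.

```python
import numpy as np

def cam_map(activations, gradients, positive_only=False):
    """Heat map from conv-layer activations A (C, H, W) and the gradients of
    the class score w.r.t. A.  positive_only=False: standard Grad-CAM weights
    (spatial mean of the gradients).  positive_only=True: the simple variation
    in which only the positive part of the gradients is averaged."""
    g = np.maximum(gradients, 0.0) if positive_only else gradients
    weights = g.mean(axis=(1, 2))                      # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)   # weighted sum over channels
    return np.maximum(cam, 0.0)                        # final ReLU, as in Grad-CAM

# toy tensors standing in for real network outputs
A = np.random.rand(8, 7, 7)
dYdA = np.random.randn(8, 7, 7)
print(cam_map(A, dYdA).shape, cam_map(A, dYdA, positive_only=True).shape)   # (7, 7) (7, 7)
```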
|
arxiv:2205.10838
|
we present an exploratory study of the electromagnetic corrections to meson masses and the hadronic vacuum polarization using $ n _ f = 2 + 1 $ domain wall fermions. these corrections are estimated with two different approaches, a stochastic approach using $ u ( 1 ) $ gauge configurations for the photon fields, and a perturbative approach through a qed perturbative expansion of the qcd + qed path integral. we compare results and statistical errors from both methods.
|
arxiv:1612.05962
|
in some cases, an important example being at finite temperature, extreme infrared, collinear, or light - cone behaviour may cause the usual loop expansion to break down. for some of these cases higher order ladder graphs can become important. in an earlier paper it was shown that, given a particular relation between a vertex and a self - energy function, the resummation of the ladder graphs simplifies significantly when other types of graphs are included in a consistent effective expansion. in this paper we show that this assumed relation is valid for a large class of vertex and self - energy functions at finite temperature.
|
arxiv:hep-ph/9712280
|
in the spring of 2018 the northern california / nevada section of the american association of physics teachers was alerted to a local high school ' s plans to eliminate physics for the following school year. as part of the campaign to support the school ' s efforts to sustain physics in the following year, the physics offerings from the surrounding schools in that district were compiled. it appeared that the demographics of the student population in the district played a role in the number of different physics courses offered within that district, particularly the percentage of hispanic students ( % hispanic ) and percentage of socioeconomically disadvantaged ( % sed ) students at each school. concerned that this trend was more widespread, physics course offerings were reviewed for northern california public high schools to determine if there were correlations between the number of different physics course offerings and these populations. it was found that % hispanic and % sed are strongly correlated in california public schools, and along with the number of students, could be used as statistically significant predictors of a school ' s physics offerings.
|
arxiv:2010.08476
|
in this paper, using the recent method proposed by ono, ishihara and asada ( oia ) who extend the idea of gibbons and werner to the stationary and axisymmetric case, we apply the gauss - bonnet theorem to the optical metric of the non - rotating and rotating damour - solodukhin wormholes spacetimes to study the weak gravitational lensing by these objects. furthermore, we study the strong gravitational lensing by the non - rotating damour - solodukhin wormholes using the bozza ' s method to see the differences between the weak lensing and the strong lensing. we demonstrate the relation between the strong deflection angle and quasinormal modes of the damour - solodukhin wormholes. interestingly it is found that the wormhole parameter $ \ lambda $, affects the deflection of light in strong and weak limits compared to the previous studies of gravitational lensing by schwarzschild black holes. hence, the results provide a unique tool to shed light on the possible existence of wormholes.
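schematically, the gibbons - werner approach obtains the weak - field deflection angle from the gauss - bonnet theorem applied to the optical metric, and the oia extension adds a term involving the geodesic curvature of the light ray in the stationary ( randers - finsler ) optical geometry; in the static limit the deflection angle reduces to the familiar expression below ( conventions as commonly used in this literature; the paper's exact setup may differ in details ).

$$ \hat\alpha = - \iint_{D_\infty} K \, dS , $$

where $ K $ is the gaussian curvature of the optical metric and $ D_\infty $ is the region bounded by the light ray and a circular arc at infinity.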
|
arxiv:1805.06296
|
in the article, we will report on the recovery of a melloni ' s optical bench built at the end of 1800 by the " macchinista " filippo caliri in the " belle \ ' epoque " of palermo. a scientific instrument of particular historical and didactic interest belonging to the collection of liceo classico statale " umberto i " of palermo. in the article, we will discuss the technical aspects of the interventions carried out. in questo articolo discuteremo del recupero di uno strumento scientifico di particolare interesse storico - didattico appartenente alla collezione del liceo classico statale " umberto i " di palermo : un raro banco ottico del melloni costruito alla fine del 1800 nella palermo della " belle \ ' epoque " dal " macchinista " filippo caliri. nell ' articolo discuteremo gli aspetti tecnici degli interventi conservativi effettuati.
|
arxiv:1701.01802
|
a hacker is a person skilled in information technology who achieves goals by non - standard means. the term has become associated in popular culture with a security hacker – someone with knowledge of bugs or exploits to break into computer systems and access data which would otherwise be inaccessible to them. in a positive connotation, though, hacking can also be utilized by legitimate figures in legal situations. for example, law enforcement agencies sometimes use hacking techniques to collect evidence on criminals and other malicious actors. this could include using anonymity tools ( such as a vpn or the dark web ) to mask their identities online and pose as criminals. hacking can also have a broader sense of any roundabout solution to a problem, or programming and hardware development in general, and hacker culture has spread the term ' s broader usage to the general public even outside the profession or hobby of electronics ( see life hack ). = = definitions = = reflecting the two types of hackers, there are two definitions of the word " hacker " : originally, hacker simply meant advanced computer technology enthusiast ( both hardware and software ) and adherent of programming subculture ; see hacker culture. someone who is able to subvert computer security. if doing so for malicious purposes, the person can also be called a cracker. mainstream usage of " hacker " mostly refers to computer criminals, due to the mass media usage of the word since the 1990s. this includes what hacker jargon calls script kiddies, less skilled criminals who rely on tools written by others with very little knowledge about the way they work. this usage has become so predominant that the general public is largely unaware that different meanings exist. though the self - designation of hobbyists as hackers is generally acknowledged and accepted by computer security hackers, people from the programming subculture consider the computer intrusion related usage incorrect, and emphasize the difference between the two by calling security breakers " crackers " ( analogous to a safecracker ). the controversy is usually based on the assertion that the term originally meant someone messing about with something in a positive sense, that is, using playful cleverness to achieve a goal. but then, it is supposed, the meaning of the term shifted over the decades and came to refer to computer criminals. as the security - related usage has spread more widely, the original meaning has become less known. in popular usage and in the media, " computer intruders " or " computer criminals " is the exclusive meaning of the word. in computer enthusiast and hacker culture, the primary meaning is a complimentary description for a particularly brilliant
|
https://en.wikipedia.org/wiki/Hacker
|
bl lac objects undergo strong flux variations involving considerable changes in their spectral shapes. we specifically investigate the x - ray spectral evolution of mrk 421 over a time span of about nine years. we aim at statistically describing and physically understanding the large spectral changes in x rays observed in mrk 421 over this time span. we perform a homogeneous spectral analysis of a wide data set including archived observations with asca, bepposax, rxte, as well as published and unpublished xmm - newton data. the presence of uncertainties is taken into account in our correlation analysis. the significance of the correlations found and possible spurious effects are studied with monte carlo simulations. we find that the mrk 421 spectral energy distribution ( sed ) has a lower peak at energies that vary in the range 0. 1 - 10 kev, while its x - ray spectrum is definitely curved. parameterizing the x - ray spectra with a log - parabolic model, we find a positive correlation between the position and the height of the sed peak. in addition, we find a negative trend of the spectral curvature parameter vs. the sed peak energy. we show that these relations between the spectral parameters are consistent with statistical or stochastic acceleration of the emitting particles, and provide insight into the physical processes occurring in bl lac nuclei.
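for reference, a commonly used log - parabolic parameterization of the x - ray spectrum ( the paper's exact convention may differ in the choice of pivot energy and logarithm base ) is

$$ F(E) = K\,\left(\frac{E}{E_1}\right)^{-\left(a + b\,\log_{10}(E/E_1)\right)} , \qquad E_{\rm p} = E_1\,10^{\,(2-a)/(2b)} , $$

where $ b $ is the curvature parameter and $ E_{\rm p} $ the energy at which the sed, $ E^2 F(E) $, peaks.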
|
arxiv:astro-ph/0702151
|
personalized text - to - image generation aims to create images tailored to user - defined concepts and textual descriptions. balancing the fidelity of the learned concept with its ability for generation in various contexts presents a significant challenge. existing methods often address this through diverse fine - tuning parameterizations and improved sampling strategies that integrate superclass trajectories during the diffusion process. while improved sampling offers a cost - effective, training - free solution for enhancing fine - tuned models, systematic analyses of these methods remain limited. current approaches typically tie sampling strategies with fixed fine - tuning configurations, making it difficult to isolate their impact on generation outcomes. to address this issue, we systematically analyze sampling strategies beyond fine - tuning, exploring the impact of concept and superclass trajectories on the results. building on this analysis, we propose a decision framework evaluating text alignment, computational constraints, and fidelity objectives to guide strategy selection. it integrates with diverse architectures and training approaches, systematically optimizing concept preservation, prompt adherence, and resource efficiency. the source code can be found at https : / / github. com / controlgenai / persongensampler.
|
arxiv:2502.05895
|
let $ \ phi : x \ times \ mathbb { r } \ rightarrow x $ be a continuous flow on a compact metric space $ ( x, d ) $. in this article we constructively prove the existence of a continuous lyapunov function for $ \ phi $ which is strictly decreasing outside $ \ mathcal { scr } _ d ( \ phi ) $. such a result generalizes conley ' s fundamental theorem of dynamical systems for the strong chain recurrent set.
|
arxiv:2011.09830
|
a qcd multiquark cluster system is studied in the relativistic harmonic oscillator potential model ( rhopm ), and the electromagnetic form factors of the pion, proton and deuteron in the rhopm are predicted. the calculated theoretical results are then compared with existing experimental data, finding very good agreement between the theoretical predictions and experimental data for these three target particles. we claim that this model can be applied to study qcd hadronic properties, particularly neutron properties, and to find six - quark cluster and / or nine - quark cluster probabilities in light nuclei such as helium $ ^ { 3 } he $ and tritium $ ^ { 3 } h $. this is a problem of particular importance and interest in quark nuclear physics.
|
arxiv:1410.6871
|
in this publication, we assess the ability of a novel reinforcement learning - based solution to the problem of neural architecture search, where a reinforcement learning ( rl ) agent learns to search for good architectures, rather than to return a single optimal architecture. we consider both the nas - bench - 101 and nas - bench - 301 settings, and compare against various known strong baselines, such as local search and random search. we conclude that our reinforcement learning agent displays strong scalability with regards to the size of the search space, but limited robustness to hyperparameter changes.
|
arxiv:2410.01431
|
profile hacks have included the abduction of caltech ' s cannon, reconstructing a wright flyer atop the great dome, and adorning the john harvard statue with the master chief ' s mjolnir helmet. = = = athletics = = = mit sponsors 31 varsity sports and has one of the three broadest ncaa division iii athletic programs. mit participates in the ncaa ' s division iii, and the new england women ' s and men ' s athletic conference. it also participates in ncaa ' s division i patriot league for women ' s crew, and the collegiate water polo association ( cwpa ) for men ' s water polo. men ' s crew competes outside the ncaa in the eastern association of rowing colleges ( earc ). mit ' s intercollegiate sports teams, called the engineers, won 22 team national championships and 42 individual national championships. mit is the all - time division iii leader in producing academic all - americas ( 302 ) and ranks second across all ncaa divisions, behind only the university of nebraska. mit athletes won 13 elite 90 awards and ranks first among ncaa division iii programs, and third among all divisions. in april 2009, budget cuts led to mit eliminating eight of its 41 sports, including the mixed men ' s and women ' s teams in alpine skiing and pistol ; separate teams for men and women in ice hockey and gymnastics ; and men ' s programs in golf and wrestling. = = people = = = = = students = = = mit enrolled 4, 602 undergraduates and 6, 972 graduate students in 2018 – 2019. undergraduate and graduate students came from all 50 us states as well as from 115 foreign countries. mit received 33, 240 applications for admission to the undergraduate class of 2025 : it admitted 1, 365 ( 4. 1 percent ). in 2019, 29, 114 applications were received for graduate and advanced degree programs across all departments ; 3, 670 were admitted ( 12. 6 percent ) and 2, 312 enrolled ( 63 percent ). in august 2024, after the u. s. supreme court overruled race - based affirmative action in students for fair admissions v. harvard ( 2023 ), the university reported that for the class of 2028, black and latino student enrollment decreased from previous averages to 5 and 11 percent, respectively, while asian american enrollment increased to 47 percent. undergraduate tuition and fees for 2019 – 2020 was $ 53, 790 for nine months. 59 % of students were awarded a need - based mit scholarship. graduate tuition and fees for 2019 – 2020 was also $
|
https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology
|
we study the effect of persistence of engagement on learning in a stochastic multi - armed bandit setting. in advertising and recommendation systems, the repetition effect includes a wear - in period, where the user ' s propensity to reward the platform via a click or purchase depends on how frequently they see the recommendation in the recent past. it also includes a counteracting wear - out period, where the user ' s propensity to respond positively is dampened if the recommendation was shown too many times recently. the priming effect can be naturally modelled as a temporal constraint on the strategy space, since the reward for the current action depends on historical actions taken by the platform. we provide novel algorithms that achieve sublinear regret in time and the relevant wear - in / wear - out parameters. the effect of priming on the regret upper bound is also additive, and we get back a guarantee that matches popular algorithms such as the ucb1 and thompson sampling when there is no priming effect. our work complements recent work on modeling time varying rewards, delays and corruptions in bandits, and extends the usage of rich behavior models in sequential decision making settings.
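for readers unfamiliar with the baselines mentioned above, the sketch below shows plain ucb1 on a toy bernoulli bandit; the paper's algorithms additionally constrain how often an arm may be repeated in a recent window to model wear - in / wear - out, and that priming - aware logic is specific to the paper and not reproduced here.

```python
import math, random

def ucb1(pull, n_arms, horizon):
    """Plain UCB1: play each arm once, then pick the arm with the largest
    empirical mean plus confidence bonus.  No priming / wear-in / wear-out
    handling is included here."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1
        else:
            arm = max(range(n_arms),
                      key=lambda a: means[a] + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return means

# toy Bernoulli bandit with three arms
probs = [0.2, 0.5, 0.7]
print(ucb1(lambda a: float(random.random() < probs[a]), n_arms=3, horizon=2000))
```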
|
arxiv:2006.10356
|
we have witnessed the unprecedented success of diffusion - based video generation over the past year. recently proposed models from the community have wielded the power to generate cinematic and high - resolution videos with smooth motions from arbitrary input prompts. however, as a supertask of image generation, video generation models require more computation and are thus hosted mostly on cloud servers, limiting broader adoption among content creators. in this work, we propose a comprehensive acceleration framework to bring the power of the large - scale video diffusion model to the hands of edge users. from the network architecture scope, we initialize from a compact image backbone and search out the design and arrangement of temporal layers to maximize hardware efficiency. in addition, we propose a dedicated adversarial fine - tuning algorithm for our efficient model and reduce the denoising steps to 4. our model, with only 0. 6b parameters, can generate a 5 - second video on an iphone 16 pm within 5 seconds. compared to server - side models that take minutes on powerful gpus to generate a single video, we accelerate the generation by orders of magnitude while delivering on - par quality.
|
arxiv:2412.10494
|
we study taylor expansions of jacobi forms of lattice index. as the main result, we give an embedding from certain space of such forms, whether scalar - valued or vector - valued, integral - weight or half - integral - weight, of any level, with any character, into a product of finitely many spaces of modular forms. as an application, we investigate linear relations among jacobi theta series of lattice index. many linear relations among the second powers of such theta series associated with the $ d _ 4 $ lattice and $ a _ 3 $ lattice are obtained, along with relations among the third powers of series associated with the $ a _ 2 $ lattice. we present the complete sagemath code for the $ d _ 4 $ lattice.
|
arxiv:2204.08262
|
let k be an algebraically closed field endowed with a complete non - archimedean norm. let f : y - > x be a map of k - affinoid varieties. we prove that for each point x in x, either f is flat at x, or there exists, at least locally around x, a maximal locally closed analytic subvariety z in x containing x, such that the base change f ^ { - 1 } ( z ) - > z is flat at x, and, moreover, g ^ { - 1 } ( z ) has again this property in any point of the fibre of x after base change over an arbitrary map g : x ' - > x of affinoid varieties. if we take the local blowing up \ pi : x - tilde - > x with this centre z, then the fibre with respect to the strict transform f - tilde of f under \ pi, of any point of x - tilde lying above x, has grown strictly smaller. among the corollaries to these results we quote, that flatness in rigid analytic geometry is local in the source ; that flatness over a reduced quasi - compact rigid analytic variety can be tested by surjective families ; that an inclusion of affinoid domains is flat in a point, if it is unramified in that point.
|
arxiv:math/9702230
|
exceptional points ( eps ) are non - hermitian singularities associated with the coalescence of individual eigenvectors accompanied by the degeneracy of their complex energies. here, we report the discovery of a generalization to the concept of ep called exceptional deficiency ( ed ), which features the complete coalescence of two eigenspaces with identical but arbitrarily large dimensions and the coincidence of entire spectral continua. the characteristics of the ed are studied using one - way coupled hermitian and non - hermitian lattices. the ed can induce an anomalous absence and presence of non - hermitian skin effect ( nhse ) that transcends the topological bulk - edge correspondence of nhse, resulting in unexpected synergistic skin - propagative dynamics. the conditions of the ed are also explored for unprecedented control of localization and propagation in non - hermitian systems. these effects are experimentally observed using active mechanical lattices. the discovery of ed opens multiple new frontiers in non - hermitian physics and can potentially resolve long - standing challenges in related applications.
|
arxiv:2504.12238
|
diabetic retinopathy ( dr ) caused by diabetes occurs as a result of changes in the retinal vessels and causes visual impairment. microaneurysms ( mas ) are the early clinical signs of dr, whose timely diagnosis can help detect dr in the early stages of its development. it has been observed that mas are more common in the inner retinal layers compared to the outer retinal layers in eyes suffering from dr. optical coherence tomography ( oct ) is a noninvasive imaging technique that provides a cross - sectional view of the retina and it has been used in recent years to diagnose many eye diseases. as a result, this paper attempts to identify areas with ma from normal areas of the retina using oct images. this work is done using the dataset collected from fa and oct images of 20 patients with dr. in this regard, firstly fluorescein angiography ( fa ) and oct images were registered. then the ma and normal areas were separated and the features of each of these areas were extracted using the bag of features ( bof ) approach with speeded - up robust feature ( surf ) descriptor. finally, the classification process was performed using a multilayer perceptron network. for each of the criteria of accuracy, sensitivity, specificity, and precision, the obtained results were 96. 33 %, 97. 33 %, 95. 4 %, and 95. 28 %, respectively. utilizing oct images to detect mas automatically is a new idea and the results obtained as preliminary research in this field are promising.
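a rough, illustrative sketch of the kind of bag - of - features + mlp pipeline described above is given below; it uses orb as a freely available stand - in for surf ( which requires opencv - contrib non - free builds ), and the patch handling, vocabulary size and network size are assumptions for the example rather than the authors' settings.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def bof_histogram(patch, detector, kmeans):
    """Bag-of-features histogram for a single grayscale patch."""
    _, desc = detector.detectAndCompute(patch, None)
    if desc is None:
        return np.zeros(kmeans.n_clusters)
    words = kmeans.predict(desc.astype(np.float64))
    hist, _ = np.histogram(words, bins=np.arange(kmeans.n_clusters + 1))
    return hist / max(hist.sum(), 1)

def train_pipeline(patches, labels, vocab_size=100):
    """patches: list of grayscale uint8 arrays; labels: 1 = MA region, 0 = normal."""
    detector = cv2.ORB_create()                 # stand-in for SURF
    all_desc = []
    for p in patches:
        _, d = detector.detectAndCompute(p, None)
        if d is not None:
            all_desc.append(d)
    vocab = np.vstack(all_desc).astype(np.float64)
    kmeans = KMeans(n_clusters=vocab_size, n_init=10).fit(vocab)
    X = np.array([bof_histogram(p, detector, kmeans) for p in patches])
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, labels)
    return detector, kmeans, clf
```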
|
arxiv:2205.04695
|
muscovite mica, kal $ _ 2 $ ( si $ _ 3 $ al ) o $ _ { 10 } $ ( oh ) $ _ 2 $, is a common layered phyllosilicate with perfect cleavage planes. the atomically flat surfaces obtained through cleaving lend themselves to scanning probe techniques with atomic resolution and are ideal to model minerals and clays. despite the importance of the cleaved mica surfaces, several questions remain unresolved. it is established that k $ ^ + $ ions decorate the cleaved surface, but their intrinsic ordering - - unaffected by the interaction with the environment - - is not known. this work presents clear images of the k $ ^ + $ distribution of cleaved mica obtained with low - temperature non - contact atomic force microscopy ( afm ) under ultra - high vacuum ( uhv ) conditions. the data unveil the presence of short - range ordering, contrasting previous assumptions of random or fully ordered distributions. density functional theory ( dft ) calculations and monte carlo simulations show that the substitutional subsurface al $ ^ { 3 + } $ ions have an important role for the surface k $ ^ + $ ion arrangement.
|
arxiv:2308.14055
|
lexical semantics continues to play an important role in driving research directions in nlp, with the recognition and understanding of context becoming increasingly important in delivering successful outcomes in nlp tasks. besides traditional processing areas such as word sense and named entity disambiguation, the creation and maintenance of dictionaries, annotated corpora and resources have become cornerstones of lexical semantics research and produced a wealth of contextual information that nlp processes can exploit. new efforts both to link and construct from scratch such information - as linked open data or by way of formal tools coming from logic, ontologies and automated reasoning - have increased the interoperability and accessibility of resources for lexical and computational semantics, even in those languages for which they have previously been limited. lexsem + logics 2016 combines the 1st workshop on lexical semantics for lesser - resourced languages and the 3rd workshop on logics and ontologies. the accepted papers in our program covered topics across these two areas, including : the encoding of plurals in wordnets, the creation of a thesaurus from multiple sources based on semantic similarity metrics, and the use of cross - lingual treebanks and annotations for universal part - of - speech tagging. we also welcomed talks from two distinguished speakers : on portuguese lexical knowledge bases ( different approaches, results and their application in nlp tasks ) and on new strategies for open information extraction ( the capture of verb - based propositions from massive text corpora ).
|
arxiv:1608.04767
|
we analyze the influence of decaying sterile neutrinos with the masses in the range 1 - 140 mev on the primordial helium - 4 abundance, explicitly solving the boltzmann equations for all particle species, taking into account neutrino flavour oscillations, and paying special attention to systematic uncertainties. we show that the helium abundance depends only on the sterile neutrino lifetime and not on the way the active - sterile mixing is distributed between flavours, and derive an upper bound on the lifetime. we also demonstrate that the recent results of izotov & thuan [ arxiv : 1001. 4440 ], who find 2sigma higher than predicted by the standard primordial nucleosynthesis value of helium - 4 abundance, are consistent with the presence in the plasma of sterile neutrinos with the lifetime 0. 01 - 2 seconds. the decay of these particles perturbs the spectra of ( decoupled ) neutrinos and heats photons, changing the ratio of neutrino to photon energy density, that can be interpreted as extra neutrino species at the recombination epoch.
|
arxiv:1202.2841
|
we show that transverse - magnetic ( tm ) leaky modes can propagate further than transverse electric ( te ) modes in real - index dielectric waveguides. we compute the density of states and find that while the te spectrum contains only overlapping resonances, the tm spectrum typically contains several isolated peaks. by transforming the tm equation into a schr \ " { o } dinger - type equation, we show that these isolated peaks arise due to $ \ delta $ - function barriers at the core - cladding interface. our theory is useful for a range of applications, including filtering tm modes from initially unpolarized light and transferring information between distant waveguides.
|
arxiv:1801.09959
|
it is well - known that liquid and saturated vapor, separated by a flat interface in an unbounded space, are in equilibrium. one would similarly expect a liquid drop, sitting on a flat substrate, to be in equilibrium with the vapor surrounding it. yet, it is not : as shown in this work, the drop evaporates. mathematically, this conclusion is deduced using the diffuse - interface model, but it can also be reformulated in terms of the maximum - entropy principle, suggesting model independence. physically, evaporation of drops is due to the so - called kelvin effect, which gives rise to a liquid - to - vapor mass flux in all cases where the boundary of the liquid phase is convex.
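in its standard textbook form, the kelvin effect referred to above raises the equilibrium vapour pressure over a convex liquid surface of radius $ r $ above the flat - interface value,

$$ p_{\rm eq}(r) = p_{\rm sat}\,\exp\!\left(\frac{2\gamma\, v_l}{r\,k_B T}\right) , $$

where $ \gamma $ is the surface tension and $ v_l $ the molecular volume of the liquid ; a drop surrounded by vapour that is saturated only with respect to a flat interface therefore loses mass, consistent with the conclusion stated above.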
|
arxiv:2106.14221
|
the effect of dephasing on electron transport through a benzene molecule is carefully examined using a phenomenological model introduced by b \ " { u } ttiker. within a tight - binding framework all the calculations are performed based on the green ' s function formalism. we investigate the influence of dephasing on transmission probability and current - voltage characteristics for three different configurations ( { \ em ortho }, { \ em meta } and { \ em para } ) of the molecular system depending on the locations of two contacting leads. the presence of dephasing provides a significant change in the spectral properties of the molecule and exhibits several interesting patterns that have so far remain unexplored.
|
arxiv:1011.2033
|
the condition for stationary engines to attain the carnot efficiency in and beyond the linear response regime is investigated. we find that this condition for finite - size engines is significantly different from that for macroscopic engines in the thermodynamic limit. for the case of finite - size engines, the tight - coupling condition in the linear response regime directly implies the attainability of the carnot efficiency beyond the linear response regime. contrary to this, for the case of macroscopic engines in the thermodynamic limit, there are three types of mechanisms to attain the carnot efficiency. one mechanism allows engines to attain the carnot efficiency only in the linear response limit, while the other two mechanisms enable engines to attain the carnot efficiency beyond the linear response regime. these three mechanisms are classified by introducing a tight - coupling window.
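in the standard notation of linear irreversible thermodynamics ( which the paper may refine ), the fluxes are written as linear combinations of the thermodynamic forces, and tight coupling means the onsager matrix is degenerate so that the two fluxes are proportional :

$$ J_1 = L_{11} X_1 + L_{12} X_2, \qquad J_2 = L_{21} X_1 + L_{22} X_2, \qquad \det L = L_{11}L_{22} - L_{12}L_{21} = 0 . $$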
|
arxiv:1703.03621
|
we study the chiral phase transition of qcd at finite temperature and density by numerically solving schwinger - dyson equation for the quark propagator with the improved ladder approximation in the landau gauge. using the solution we calculate a pion decay constant from a generalized version of pagels - stokar formula. chiral phase transition point is determined by analyzing an effective potential for the quark propagator. we find solutions for which chiral symmetry is broken while the value of the effective potential is larger than that for the chiral symmetric vacuum. these solutions correspond to meta - stable states, and the chiral symmetric vacuum is energetically favored. we present a phase diagram on the general temperature - - chemical potential plane, and show that phase transitions are of first order in wide range.
|
arxiv:hep-ph/9807408
|
it is shown in this study that deviations from the einstein - hilbert action at the quadratic level, using a proper analysis and suitable dynamical variables, lead to a tiny modification to the post - newtonian equations of motion, and non - gr - like behavior at very short length scales.
|
arxiv:2111.08305
|
t ) } $ for all $ t > 0 $. we show that under these assumptions the operator $ l _ 0 $ is unitarily equivalent to the minimal schr \ " { o } dinger operator $ s _ 0 = - d ^ 2 + q $ in $ { l _ 2 ( 0, \ infty ) } $ with a smooth real - valued potential $ q $, which is in the limit point case at infinity. it is also proved that $ s _ 0 $ provides a canonical wave model of $ l _ 0 $.
|
arxiv:2311.01612
|
sarah is a mathematica package for studying supersymmetric models. it calculates for a given model the masses, tadpole equations and all vertices at tree - level. this information can be used by sarah to write model files for calchep / comphep or feynarts / formcalc. in addition, the second version of sarah can derive the renormalization group equations for the gauge couplings, parameters of the superpotential and soft - breaking parameters at one and two - loop level. furthermore, it calculates the one - loop self energies and the one - loop corrections to the tadpoles. sarah can handle all n = 1 susy models whose gauge sector is a direct product of su ( n ) and u ( 1 ) gauge groups. the particle content of the model can be an arbitrary number of chiral superfields transforming as any irreducible representation with respect to the gauge groups. to implement a new model, the user just has to define the gauge sector, the particle content, the superpotential and the field rotations to mass eigenstates.
|
arxiv:1002.0840
|
we investigate the minimal conditions under which the creation of our universe might arise due to a " bounce " from a previous collapse, rather than an explosion from a big - bang singularity. such a bounce is sometimes referred to as a tolman wormhole. we subject the bounce to a general model - independent analysis along the lines of that applied to the morris - thorne traversable wormholes, and show that there is always an open temporal region surrounding the bounce over which the strong energy condition ( sec ) must be violated. on the other hand, all the other energy conditions can easily be satisfied. in particular, we exhibit an inflation - inspired model in which a big bounce is " natural ".
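the need for sec violation can be illustrated in the simplest flrw setting ( the analysis above is model - independent and more general ) : the acceleration equation reads

$$ \frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) , $$

and a bounce requires $ \dot a = 0 $ with $ \ddot a > 0 $ at the turning point, hence $ \rho + 3p < 0 $ there, which is exactly a violation of the strong energy condition, while the null, weak and dominant energy conditions impose no such obstruction.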
|
arxiv:gr-qc/9810023
|
we find necessary and sufficient conditions for the free additive infinite divisibility of some free multiplicative convolutions with the wigner, the arcsine, the free poisson and other distributions, including explicit examples.
|
arxiv:0910.1199
|
image outpainting technology generates visually plausible content regardless of authenticity, making it unreliable to apply in practice. thus, we propose a reliable image outpainting task, introducing the sparse depth from lidars to extrapolate authentic rgb scenes. the large field of view of lidars allows them to serve for data enhancement and further multimodal tasks. concretely, we propose a depth - guided outpainting network to model different feature representations of two modalities and learn the structure - aware cross - modal fusion. two components are designed : 1 ) the multimodal learning module produces unique depth and rgb feature representations from the perspectives of different modal characteristics. 2 ) the depth guidance fusion module leverages the complete depth modality to guide the establishment of rgb contents by progressive multimodal feature fusion. furthermore, we specially design an additional constraint strategy consisting of cross - modal loss and edge loss to enhance ambiguous contours and expedite reliable content generation. extensive experiments on kitti and waymo datasets demonstrate our superiority over the state - of - the - art method, quantitatively and qualitatively.
|
arxiv:2204.05543
|
we study the efficiency of grain alignment by radiative torques ( rats ) for an ensemble of irregular grains. the grains are modeled as ensembles of oblate and prolate spheroids, deformed as gaussian random ellipsoids, and their scattering interactions are solved using numerically exact methods. we define the fraction of the grains that both rotate fast and demonstrate perfect alignment with grain long axes perpendicular to the magnetic field. we demonstrate that for typical interstellar conditions the degree of alignment arising from the rat mechanism is significantly larger than that arising from the davis - greenstein ( dg ) mechanism based on paramagnetic relaxation. we quantify a factor related to the efficacy of alignment and show that it is related to a $ q _ \ mathrm { max } $ - factor of analytical model ( amo ) of the rat theory. our results indicate that the rat alignment can potentially be sufficiently strong and to explain observations even if grains do not have magnetic inclusions.
|
arxiv:2006.16563
|