Over the past few decades, industrial control systems (ICSs) have been targeted by cyberattacks and are becoming increasingly vulnerable as more ICSs are connected to the internet. Using machine learning (ML) for intrusion detection systems (IDS) is a promising approach for ICS cyber protection, but the lack of suitable datasets for evaluating ML algorithms is a challenge. Although there are a few commonly used datasets, they may not reflect realistic ICS network data, lack necessary features for effective anomaly detection, or be outdated. This paper presents the 'ICS-Flow' dataset, which offers network data and process state variable logs for supervised and unsupervised ML-based IDS assessment. The network data includes normal and anomalous network packets and flows captured from simulated ICS components and emulated networks. The anomalies were injected into the system through various attack techniques commonly used by hackers to modify network traffic and compromise ICSs. We also propose an open-source tool, 'ICSFlowGenerator', for generating network flow parameters from raw network packets. The final dataset comprises over 25,000,000 raw network packets, network flow records, and process variable logs. The paper describes the methodology used to collect and label the dataset and provides a detailed data analysis. Finally, we implement several ML models, including a decision tree, random forest, and artificial neural network, to detect anomalies and attacks, demonstrating that our dataset can be used effectively for training intrusion detection ML models.
arxiv:2305.09678
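As a rough illustration of the supervised baselines mentioned in the ICS-Flow abstract above (arXiv:2305.09678), the sketch below trains a scikit-learn random forest on labeled flow records; the file name and column names are assumptions for illustration, not the dataset's documented schema.

```python
# Hypothetical sketch: training a random-forest intrusion detector on labeled
# network-flow records. File name and column names are illustrative only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

flows = pd.read_csv("ics_flow_records.csv")           # assumed CSV export of flow records
feature_cols = [c for c in flows.columns if c not in ("label",)]
X, y = flows[feature_cols], flows["label"]            # label: normal vs. attack class

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```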
We consider M-estimation problems, where the target value is determined using a minimizer of an expected functional of a Lévy process. With discrete observations from the Lévy process, we can produce a "quasi-path" by shuffling increments of the Lévy process, which we call a quasi-process. Under a suitable sampling scheme, a quasi-process can converge weakly to the true process according to the properties of the stationary and independent increments. Using this resampling technique, we can estimate objective functionals similar to those estimated using Monte Carlo simulations, and it is available as a contrast function. The M-estimator based on these quasi-processes can be consistent and asymptotically normal.
arxiv:2112.08199
The possible sources of systematic uncertainties in the hyperon global polarization measurement are discussed. The equation with detector acceptance effects taken into account is provided. The contribution of the hyperon directed flow to the hyperon global polarization measurement is shown. The systematic uncertainties of the Lambda hyperon global polarization measurement in Au+Au collisions with the STAR detector at RHIC are calculated.
arxiv:nucl-ex/0608034
Wound assessment is a critical aspect of wound treatment, as the healing progress of a wound determines the optimal approach to care. However, the heterogeneity of burn wounds often complicates wound assessment, causing inaccurate wound evaluation and ineffective treatment. Traditional wound assessment methods such as gross area reduction (GAR) and percentage area reduction (PAR) are prone to misinterpretation due to irregular results. Inaccurate wound assessment leads to higher rates of death and life-long physical and psychological morbidities in burn patients, especially in low-income communities that lack specialty care and medical resources. Therefore, I propose a novel approach to wound assessment: wound healing from the biophysical perspective of collective cell migration, analyzed through cell packing behavior. This approach was modeled through Voronoi tessellation simulations and applied to a wound healing system, where changes in the cell morphology parameters of aspect ratio and shape index were plotted over time to numerically evaluate the geometry of different cell migration packing patterns. Experimental results demonstrate the effectiveness of measuring aspect ratio, as a reduction in aspect ratio indicates that cell shapes become increasingly rounded throughout wound closure. This is further supported by physical principles in wound healing and changes in cell elongation. By placing a microscope objective on a phone camera, it is possible to directly examine any wound, with the calculations done on the phone as well. This efficient and accurate mechanism can be especially useful in low-resource communities, as it is accessible regardless of technical or medical background.
arxiv:2208.13812
In comparison to traditional footage, 360° videos can convey engaging, immersive experiences and even be utilized to create interactive virtual environments. Like regular recordings, these videos need to consider the privacy of recorded people and could be targets for video manipulations. However, due to their properties like enhanced presence, the effects on users might differ from traditional, non-immersive content. Therefore, we are interested in how changes of real-world footage, like adding privacy protection or applying video manipulations, could mitigate or introduce harm in the resulting immersive media.
arxiv:2405.05924
We present a polarization-resolved study of the low-energy band structure in the optimally doped iron pnictide superconductor Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$ ($T_c$ = 37 K) using angle-resolved photoemission spectroscopy. Polarization-contrasted measurements are used to identify and trace all three low-energy hole-like bands predicted by local density approximation (LDA) calculations. The photoemitted electrons reveal an inconsistency with LDA-predicted symmetries along the $\Gamma$-X high-symmetry momentum axis, due to unexpectedly strong rotational anisotropy in electron kinetics. We evaluate many-body effects such as Mott-Hubbard interactions that are likely to underlie the anomaly, and discuss how the observed deviations from the LDA band structure affect the energetics of iron pnictide Cooper pairing in the hole-doped regime.
arxiv:1207.2221
Real-world vector embeddings are usually associated with extra labels, such as attributes and keywords. Many applications require nearest neighbor search restricted to specific labels, such as searching for product image embeddings restricted to a particular brand. A straightforward approach is to materialize all possible indices according to the complete query label workload. However, this leads to an exponential increase in both index space and processing time, which significantly limits scalability and efficiency. In this paper, we leverage the inclusion relationships among query label sets to construct partial indexes, enabling index sharing across queries for improved construction efficiency. We introduce \textit{elastic factor} bounds to guarantee search performance and use a greedy algorithm to select indices that meet the bounds, achieving a tradeoff between efficiency and space. Meanwhile, we also design an algorithm to achieve the best elastic factor under a given space limitation. Experimental results on multiple real datasets demonstrate that our algorithm can achieve near-optimal search performance, with up to a 10x-500x search efficiency speedup over state-of-the-art approaches. Our algorithm is highly versatile, since it is not constrained by index type and can seamlessly integrate with existing optimized libraries.
arxiv:2505.03212
Using the IRAM 30m, SEST 15m, and Nançay radiotelescopes, we have gathered the 1 mm continuum emission, spectra of the J = 1-0 CO line and of the HI line at 21 cm for two samples of IRAS galaxies. Using these data, we have estimated the HI masses, the H$_2$ masses and the dust masses. We find that for far-infrared selected galaxies: (1) The median value of M$_{H2}$/M$_{HI}$ is 0.5, meaning that the atomic phase dominates in these galaxies. The fraction of gas in molecular form increases with increasing FIR luminosity but does not show any obvious trend with other galaxy properties. (2) The H$_2$ surface density (s.d.) is better correlated with the cold dust s.d. than the HI s.d., but the correlation of HI with dust is not negligible. Globally in these galaxies the cold dust emission is associated with both the molecular and atomic phases. (3) The FIR surface brightness increases as the third power of the S60um/S100um ratio; it shows a tight correlation with both the H$_2$ and dust surface densities and a weaker one with the HI surface density: a large part of the far-infrared emission of these galaxies originates in the molecular medium. (4) The gas-to-dust ratio ranges between 100 and 1000 and its average value is 230, close to the Galactic value. There is indeed a clear trend: this ratio decreases as the FIR surface density increases, a result which can be explained in the framework of an enhancement of metallicity in galaxy discs having a higher star formation rate.
arxiv:astro-ph/9501014
We construct a coherent state path integral formalism for the one-dimensional Bloch particle within the single band model. The transition amplitude between two coherent states is a sum of transition amplitudes with different winding numbers on the two-dimensional phase space, which has the same topology as that of the cylinder. The appearance of the winding number is due to the periodicity of the quasi-momentum of the Bloch particle. Our formalism is successfully applied to a semiclassical motion of the Bloch particle under a uniform electric field. The wave packet exhibits not only the Bloch oscillation but also a similar breathing to the one for the squeezed state of a harmonic oscillator.
arxiv:cond-mat/0105453
We fabricated NiFe2O4 thin films on MgAl2O4(001) substrates by reactive DC magnetron co-sputtering in a pure oxygen atmosphere at different substrate temperatures. The film properties were investigated by various techniques with a focus on their structure, surface topography, magnetic characteristics, and transport properties. Structural analysis revealed good crystallization with epitaxial growth, low roughness, and a quality similar to that of films grown by pulsed laser deposition. Electrical conductivity measurements showed a high room-temperature resistivity (12 Ohm·m) but a low activation energy, indicating an extrinsic transport mechanism. A band gap of about 1.55 eV was found by optical spectroscopy. Detailed x-ray spectroscopy studies confirmed the samples to be ferrimagnetic with fully compensated Fe moments. By comparison with multiplet calculations of the spectra we found that the cation valencies are to a large extent Ni2+ and Fe3+.
arxiv:1312.1086
We present a time-dependent quantum calculation of the scattering of a few-photon pulse on a single atom. The photon wave packet is assumed to propagate in a transversely strongly confined geometry, which ensures strong atom-light coupling and allows a quasi-1D treatment. The amplitude and phase of the transmitted, reflected and transversely scattered parts of the wave packet strongly depend on the pulse length (bandwidth) and energy. For a transverse mode size of the order of $\lambda^2$, we find nonlinear behavior for a few photons already, or even for a single photon. In a second step we study the collision of two such wave packets at the atomic site and find striking differences between Fock state and coherent state wave packets of the same photon number.
arxiv:quant-ph/0202005
Causal discovery aims to recover the causal structures generating the observational data. Despite its success in certain problems, in many real-world scenarios the observed variables are not the target variables of interest but imperfect measures of the target variables. Causal discovery under measurement error aims to recover the causal graph among unobserved target variables from observations made with measurement error. We consider a specific formulation of the problem, where the unobserved target variables follow a linear non-Gaussian acyclic model and the measurement process follows the random measurement error model. Existing methods on this formulation rely on non-scalable overcomplete independent component analysis (OICA). In this work, we propose the Transformed Independent Noise (TIN) condition, which checks for independence between a specific linear transformation of some measured variables and certain other measured variables. By leveraging the non-Gaussianity and higher-order statistics of data, TIN is informative about the graph structure among the unobserved target variables. By utilizing TIN, the ordered group decomposition of the causal model is identifiable. In other words, we can achieve what once required OICA by only conducting independence tests. Experimental results on both synthetic and real-world data demonstrate the effectiveness and reliability of our method.
arxiv:2210.11021
Meteoroid streams can be complex structures shaped by the processes of their formation and subsequent orbital evolution. The first step toward understanding them is mapping their current stage. We used precise data from the European Fireball Network to disentangle the situation with meteor showers active in August and having radiants in the Cygnus-Draco area. In total, 179 fireballs observed between 2016 and 2024 were analyzed. We confirmed that two showers, $\kappa$ Cygnids and August Draconids, are present. The meteoroid swarm producing $\kappa$ Cygnids is locked in the 5:3 mean-motion resonance with Jupiter with an orbital period of 7.12 years and has a limited extent of $\leq 90$ degrees in mean anomaly. The shower is therefore markedly active only once or twice during each seven-year period. The orbits have a wide range of inclinations, 28-44 degrees. There is a correlation between inclination, perihelion distance, and argument of perihelion due to observational selection effects. The radiant area is almost 30 degrees long in declination. August Draconids have an even more extended radiant and can be divided into three branches depending on the position of the perihelion relative to the ecliptic plane. Neither of the showers can be described by a single set of orbital elements. We provide sets of representative orbits and identifications with showers previously reported in the literature. Physical properties of meteoroids and possible parent bodies are also discussed.
arxiv:2502.02178
An interaction term is added to the QCD Lagrangian involving hadron and gluon fields. An effective vertex is calculated for such interactions through exchanges of reggeized gluons. This gives rise to an effective coupling for hadron-gluon elastic scattering in the t-channel, which is used in an inclusive hadron-hadron interaction from which the pomeron intercept alpha(0) is calculated.
arxiv:hep-ph/0106238
In recent years it has been shown that Lotka-Volterra mappings constitute a valuable tool from both the theoretical and the applied points of view, with developments in very diverse fields such as physics, population dynamics, chemistry and economics. The purpose of this work is to demonstrate that many of the most important ideas and algebraic methods that constitute the basis of the quasipolynomial formalism (originally conceived for the analysis of ordinary differential equations) can be extended into the mapping domain. The extension of the formalism into the discrete-time context is remarkable, given that the quasipolynomial methodology had never been shown to be applicable beyond the differential case. It will be demonstrated that Lotka-Volterra mappings play a central role in the quasipolynomial formalism for the discrete-time case. Moreover, the extension of the formalism into the discrete-time domain allows a significant generalization of Lotka-Volterra mappings as well as a whole transfer of algebraic methods into the discrete-time context. The result is a novel and more general conceptual framework for the understanding of Lotka-Volterra mappings, as well as a new range of possibilities that becomes open not only for the theoretical analysis of Lotka-Volterra mappings and their generalizations, but also for the development of new applications.
arxiv:1910.00951
The $\sigma$ meson may be considered the Higgs boson of the strong interaction. While the observation of the electroweak Higgs boson is the primary goal in ongoing experiments at the LHC, the $\sigma$ meson is by now well studied both as an on-shell particle and as a virtual particle while being part of the constituent quark. This makes it timely to give an overview of the present status of the Higgs sector of the strong interaction, which includes the scalar mesons $\sigma(600)$, $\kappa(800)$, $f_0(980)$ and $a_0(980)$ together with the pseudo-Goldstone bosons $\pi$, $K$ and $\eta$.
arxiv:1107.4226
We present the first in-depth dynamical analysis of the archetypal wide-angle tailed (WAT) cluster Abell 562. We have combined Gemini observations with archival data from the literature to form a sample of 76 cluster members and derived a mean redshift of $0.1088 \pm 0.0004$ and a velocity dispersion of $919 \pm 116$ km s$^{-1}$. This relatively large velocity dispersion suggests either a very massive cluster ($M_{dyn} > 6.9 \times 10^{14}\ M_{sun}$) and/or a merger system. The merger model is supported by a non-Gaussian galaxy velocity distribution, an elongated spatial distribution of likely cluster members, and elongated X-ray emitting gas. This scenario would generate the bulk flow motion of the intra-cluster medium that can exert enough ram pressure to bend the radio jets. Thus, our observations support the model in which a recent off-axis merger event produced the cluster-wide conditions needed to shape the WAT in Abell 562.
arxiv:2009.04265
Pointwise-supported generalized wavelets are introduced, based on Dirac, doublet and further derivatives of delta. A generalized biorthogonal analysis leads to standard Taylor series and new dual-Taylor series that may be interpreted as Laurent Schwartz distributions. A Parseval-like identity is also derived for Taylor series, showing that Taylor series support an energy theorem. New representations for signals, called derivagrams, are introduced, which are similar to spectrograms. This approach corroborates the impact of wavelets in modern signal analysis.
arxiv:1502.01570
Let $K$ be a compact convex domain in the Euclidean plane. The mixed area $A(K,-K)$ of $K$ and $-K$ can be bounded from above by $\frac{1}{6\sqrt{3}} L(K)^2$, where $L(K)$ is the perimeter of $K$. This was proved by Ulrich Betke and Wolfgang Weil (1991). They also showed that if $K$ is a polygon, then equality holds if and only if $K$ is a regular triangle. We prove that among all convex domains, equality holds only in this case, as conjectured by Betke and Weil. This is achieved by establishing a stronger stability result for the geometric inequality $6\sqrt{3}\, A(K,-K) \le L(K)^2$.
arxiv:2103.11672
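As a quick sanity check of the equality case discussed above (arXiv:2103.11672), one can verify directly that a regular triangle attains the bound. The short computation below uses the standard expansion of the difference body together with the Rogers-Shephard equality for triangles; it is only an illustrative check, not the paper's argument.

```latex
% For any planar convex body: |K + (-K)| = 2|K| + 2A(K,-K).
% For a triangle, Rogers-Shephard gives |K + (-K)| = 6|K|, hence A(K,-K) = 2|K|.
% Regular triangle of side a:
\[
  |K| = \frac{\sqrt{3}}{4}a^2
  \quad\Longrightarrow\quad
  A(K,-K) = 2|K| = \frac{\sqrt{3}}{2}a^2,
\]
\[
  \frac{L(K)^2}{6\sqrt{3}} = \frac{(3a)^2}{6\sqrt{3}} = \frac{\sqrt{3}}{2}a^2
  = A(K,-K),
\]
% so the regular triangle attains equality in 6*sqrt(3) A(K,-K) <= L(K)^2.
```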
Large language models (LLMs) have achieved remarkable progress in complex reasoning tasks, yet they remain fundamentally limited by their reliance on static internal knowledge and text-only reasoning. Real-world problem solving often demands dynamic, multi-step reasoning, adaptive decision making, and the ability to interact with external tools and environments. In this work, we introduce ARTIST (Agentic Reasoning and Tool Integration in Self-improving Transformers), a unified framework that tightly couples agentic reasoning, reinforcement learning, and tool integration for LLMs. ARTIST enables models to autonomously decide when, how, and which tools to invoke within multi-turn reasoning chains, leveraging outcome-based RL to learn robust strategies for tool use and environment interaction without requiring step-level supervision. Extensive experiments on mathematical reasoning and multi-turn function calling benchmarks show that ARTIST consistently outperforms state-of-the-art baselines, with up to 22% absolute improvement over base models and strong gains on the most challenging tasks. Detailed studies and metric analyses reveal that agentic RL training leads to deeper reasoning, more effective tool use, and higher-quality solutions. Our results establish agentic RL with tool integration as a powerful new frontier for robust, interpretable, and generalizable problem-solving in LLMs.
arxiv:2505.01441
In this paper, we consider an inverse conductivity problem on a bounded domain $\Omega \subset \mathbb{R}^n$, $n \geq 2$, also known as electrical impedance tomography (EIT), for the case where unknown impenetrable obstacles are embedded in $\Omega$. We show that a piecewise-constant conductivity function and embedded obstacles can be simultaneously recovered in terms of the local Dirichlet-to-Neumann map defined on an arbitrarily small open subset of the boundary of the domain $\Omega$. The method depends on the well-posedness of a coupled PDE system constructed for the conductivity equations in the $H^1$-space and some elementary a priori estimates for harmonic functions.
arxiv:2104.13552
We study the entanglement entropy and the modular Hamiltonian of slightly excited states reduced to a ball-shaped region in generic conformal field theories. We set up a formal expansion in the one-point functions of the state in which all orders are explicitly given in terms of integrals of multi-point functions along the vacuum modular flow, without the need for replica index analytic continuation. We show that the quadratic-order contributions in this expansion can be calculated in a way expected from holography, namely via the bulk canonical energy for the entanglement entropy, and its variation for the modular Hamiltonian. The bulk fields contributing to the canonical energy are defined via the HKLL procedure. In terms of CFT variables, the contribution of each such bulk field to the modular Hamiltonian is given by the OPE block corresponding to the dual operator integrated along the vacuum modular flow. These results do not rely on assuming large $N$ or other special properties of the CFT and are therefore purely kinematic.
arxiv:1705.01486
Recent advances in 3D Gaussian splatting (3D-GS) have shown remarkable success in representing 3D scenes and generating high-quality, novel views in real time. However, 3D-GS and its variants assume that input images are captured based on pinhole imaging and are fully in focus. This assumption limits their applicability, as real-world images often feature shallow depth of field (DoF). In this paper, we introduce DoF-Gaussian, a controllable depth-of-field method for 3D-GS. We develop a lens-based imaging model based on geometric optics principles to control DoF effects. To ensure accurate scene geometry, we incorporate depth priors adjusted per scene, and we apply defocus-to-focus adaptation to minimize the gap in the circle of confusion. We also introduce a synthetic dataset to assess refocusing capabilities and the model's ability to learn precise lens parameters. Our framework is customizable and supports various interactive applications. Extensive experiments confirm the effectiveness of our method. Our project is available at https://dof-gaussian.github.io.
arxiv:2503.00746
Compressive sensing (CS) is a data acquisition technique that measures sparse or compressible signals at a sampling rate lower than their Nyquist rate. Results show that sparse signals can be reconstructed using greedy algorithms, often requiring prior knowledge such as the signal sparsity or the noise level. As a substitute for prior knowledge, cross validation (CV), a statistical method that examines whether a model overfits its data, has been proposed to determine the stopping condition of greedy algorithms. This paper first analyzes cross validation in a general compressive sensing framework and develops general cross validation techniques that can be used to understand CV-based sparse recovery algorithms. Furthermore, we provide theoretical analysis for OMP-CV, a cross validation modification of orthogonal matching pursuit, which has very good sparse recovery performance. Finally, numerical experiments are given to validate our theoretical results and investigate the behavior of OMP-CV.
arxiv:1602.06373
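To make the OMP-CV idea above (arXiv:1602.06373) concrete, here is a minimal NumPy sketch (not the authors' code): part of the measurements is held out, OMP is run greedily on the rest, and the iterate with the smallest held-out residual is kept in place of a sparsity- or noise-based stopping rule.

```python
# Illustrative OMP with a cross-validation model-selection rule.
import numpy as np

def omp_cv(A, y, cv_fraction=0.2, max_iter=None, seed=0):
    m, n = A.shape
    rng = np.random.default_rng(seed)
    idx = rng.permutation(m)
    n_cv = int(cv_fraction * m)
    cv, est = idx[:n_cv], idx[n_cv:]
    A_est, y_est = A[est], y[est]          # estimation (reconstruction) set
    A_cv, y_cv = A[cv], y[cv]              # held-out cross-validation set

    support, x_best = [], np.zeros(n)
    best_cv_err = np.linalg.norm(y_cv)     # CV error of the all-zero solution
    residual = y_est.copy()
    max_iter = max_iter or min(len(est), n)

    for _ in range(max_iter):
        j = int(np.argmax(np.abs(A_est.T @ residual)))   # greedy atom selection
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A_est[:, support], y_est, rcond=None)
        residual = y_est - A_est[:, support] @ x_s
        cv_err = np.linalg.norm(y_cv - A_cv[:, support] @ x_s)
        if cv_err < best_cv_err:                          # keep best CV iterate
            best_cv_err = cv_err
            x_best = np.zeros(n)
            x_best[support] = x_s
    return x_best
```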
In this paper, the problem of load balancing in a network intrusion detection system is considered. A load balancing method is proposed that is based on the work of several components of the network intrusion detection system and on the analysis of the multifractal properties of incoming traffic. The proposed method takes into account the degree of multifractality in the calculation of the deep packet inspection time, on the basis of which the time necessary for comparing a packet with the signatures is calculated. Load balancing rules are generated using the estimated average deep packet inspection time and the multifractality parameters of the incoming load. Comparative analysis of the proposed load balancing method with the standard one showed that the proposed method improves the quality of service parameters and the percentage of packets that are not analyzed.
arxiv:1904.05926
We establish three identities involving Dyck paths and alternating Motzkin paths, whose proofs are based on variants of the same bijection. We interpret these identities in terms of closed random walks on the half-line. We explain how these identities arise from combinatorial interpretations of certain properties of the $\beta$-Hermite and $\beta$-Laguerre ensembles of random matrix theory. We conclude by presenting two other identities obtained in the same way, for which finding combinatorial proofs is an open problem.
arxiv:math/0307252
Recent advancements in spoken dialogue models, exemplified by systems like GPT-4o, have captured significant attention in the speech domain. Compared to traditional three-tier cascaded spoken dialogue models that comprise speech recognition (ASR), large language models (LLMs), and text-to-speech (TTS), modern spoken dialogue models exhibit greater intelligence. These advanced spoken dialogue models not only comprehend audio, music, and other speech-related features, but also capture stylistic and timbral characteristics in speech. Moreover, they generate high-quality, multi-turn speech responses with low latency, enabling real-time interaction through simultaneous listening and speaking capability. Despite the progress in spoken dialogue systems, there is a lack of comprehensive surveys that systematically organize and analyze these systems and the underlying technologies. To address this, we have first compiled existing spoken dialogue systems in chronological order and categorized them into the cascaded and end-to-end paradigms. We then provide an in-depth overview of the core technologies in spoken dialogue models, covering aspects such as speech representation, training paradigm, streaming, duplex, and interaction capabilities. Each section discusses the limitations of these technologies and outlines considerations for future research. Additionally, we present a thorough review of relevant datasets, evaluation metrics, and benchmarks from the perspectives of training and evaluating spoken dialogue systems. We hope this survey will contribute to advancing both academic research and industrial applications in the field of spoken dialogue systems. The related material is available at https://github.com/jishengpeng/wavchat.
arxiv:2411.13577
In the first part of the talk, I discussed results on the determination of the ratios of the light quark masses from large-$N_c$ chiral perturbation theory, to be described elsewhere. The following notes contain material from the second part of the talk, which concerns the implications of large $N_c$ for resonance dominance estimates of the low energy coupling constants in chiral perturbation theory.
arxiv:hep-ph/0502065
The non-unitarity of the leptonic mixing matrix is a generic signal of new physics aiming at the generation of the observed neutrino masses. We discuss the minimal unitarity violation (MUV) scheme, an effective field theory framework which represents the class of extensions of the Standard Model (SM) by heavy neutral leptons, and discuss the present bounds on the non-unitarity parameters as well as estimates for the sensitivity of the Circular Electron Positron Collider (CEPC), based on the performance parameters from the preCDR.
arxiv:1604.00208
High-order quantum coherence reveals the statistical correlation of quantum particles. Manipulation of the quantum coherence of light in the temporal domain enables the production of single-photon sources, which have become one of the most important quantum resources. High-order quantum coherence in the spatial domain plays a crucial role in a variety of applications, such as quantum imaging, holography and microscopy. However, the active control of high-order spatial quantum coherence remains a challenging task. Here we predict theoretically and demonstrate experimentally the first active manipulation of high-order spatial quantum coherence by mapping the entanglement of spatially structured photons. Our results not only inject new strength into current applications, but also open up possibilities for much wider applications of high-order quantum coherence.
arxiv:2306.00772
Constraints on primordial black holes in the range $10^{-18} M_{\odot}$ to $10^{3} M_{\odot}$ are reevaluated for a general class of extended mass functions. Whereas previous work has assumed that PBHs are produced with one single mass, a range of masses is instead expected even in the case of production from a single mechanism; the constraints therefore change from the previous literature. Although tightly constrained in the majority of cases, it is shown that, even under conservative assumptions, primordial black holes in the mass range $10^{-10} M_{\odot}$ to $10^{-8} M_{\odot}$ could still constitute the entirety of the dark matter. This stresses both the importance of a comprehensive reevaluation of all respective constraints that have previously been evaluated only for a monochromatic mass function, and the need to obtain more constraints in the allowed mass range.
arxiv:1701.07223
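For context on how monochromatic bounds are usually carried over to an extended mass function in this literature, the commonly used recipe is the integral condition below; it is quoted as the generic approach, not necessarily the exact procedure of the paper above (arXiv:1701.07223).

```latex
% Given a monochromatic bound f_max(M) and a mass function psi(M) normalized so
% that \int \psi(M)\,dM = f_PBH (the total PBH dark-matter fraction), the
% combined constraint is commonly written as
\[
  \int \frac{\psi(M)}{f_{\max}(M)}\,\mathrm{d}M \;\le\; 1 .
\]
```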
In this paper we are interested in solving the Fermat-type equations $x^5 + y^5 = dz^p$, where $d$ is a positive integer and $p$ a prime number $\ge 7$. We describe a new method based on modularity theorems which allows us to improve all the results of a previous paper of the first author. We finally discuss the present limitations of the method by looking at the case $d = 3$.
arxiv:0802.1217
Dementia is the fifth leading cause of death worldwide, with 10 million new cases every year. Healthcare applications using machine learning techniques have almost reached the physical limits, while more data is becoming available as a result of the increasing rate of diagnosis. Recent research in quantum machine learning (QML) techniques has found different approaches that may be useful to accelerate the training process of existing machine learning models and provide an alternative to learn more complex patterns. This work aims to report a real-world application of a quantum machine learning algorithm. In particular, we found that using the implemented version of variational quantum classification (VQC) in IBM's framework Qiskit allows predicting dementia in elderly patients; this approach proves to provide more consistent results when compared with a classical support vector machine (SVM) with a linear kernel using different numbers of features.
arxiv:2007.08653
Let (M, g) be a compact Riemannian manifold with boundary. We consider the problem (first studied by Escobar in 1992) of finding a conformal metric with constant scalar curvature in the interior and zero mean curvature on the boundary. Using a local test function construction, we are able to settle most cases left open by Escobar's work. Moreover, we reduce the remaining cases to the positive mass theorem.
arxiv:0908.4327
We study the effects of elastic anisotropy on the Landau-de Gennes critical points for nematic liquid crystals in a square domain. The elastic anisotropy is captured by a parameter, $L_2$, and the critical points are described by three degrees of freedom. We analytically construct a symmetric critical point for all admissible values of $L_2$, which is necessarily globally stable for small domains, i.e., when the square edge length, $\lambda$, is small enough. We perform asymptotic analyses and numerical studies to discover at least $5$ classes of these symmetric critical points - the $WORS$, $Ring^{\pm}$, $Constant$ and $pWORS$ solutions, of which the $WORS$, $Ring^+$ and $Constant$ solutions can be stable. Furthermore, we demonstrate that the novel $Constant$ solution is energetically preferable for large $\lambda$ and large $L_2$, and prove associated stability results that corroborate the stabilising effects of $L_2$ for reduced Landau-de Gennes critical points. We complement our analysis with numerically computed bifurcation diagrams for different values of $L_2$, which illustrate the interplay of elastic anisotropy and geometry for nematic solution landscapes at low temperatures.
arxiv:2105.10253
Using a tight-binding approach we study theoretically the nature of surface states in Pb0.4Sn0.6Te - the newly discovered topological crystalline insulator. Apart from the previously studied (001) surface states, two other surface families, {011} and {111}, in which the mirror symmetry of the crystal's rock-salt structure plays the same role in topological protection, are considered. Our calculations show that while in the (111) surface states of (Pb,Sn)Te four single topologically protected Dirac cones should appear, for the (110) surface states the protection is lifted for two L points. In this case, instead of the Dirac points, energy gaps occur in the surface states due to the interaction between the two L valleys. In all studied cases a chiral spin texture is obtained.
arxiv:1303.7119
Using the momentum sum rule for the evolution equations for double parton distribution functions (DPDFs) in the leading logarithmic approximation, we find that the double gluon distribution function can be uniquely constrained via the single gluon distribution function. We also study numerically its evolution with a hard scale and show that an approximately factorized ansatz into the product of two single gluon distributions performs quite well at small values of $x$ but is always violated for larger values, as expected.
arxiv:1606.01679
We construct a quantum Dolbeault double complex $\oplus_{p,q}\Omega^{p,q}$ on the quantum plane $\Bbb C_q^2$. This solves the long-standing problem that the standard differential calculus on the quantum plane is not a $*$-calculus, by embedding it as the holomorphic part of a $*$-calculus. We show in general that any Nichols-Woronowicz algebra or braided plane $B_+(V)$, where $V$ is an object in an abelian $\Bbb C$-linear braided bar category of real type, is a quantum complex space in this sense with a factorisable Dolbeault double complex. We combine the Chern construction on $\Omega^{1,0}$ in such a Dolbeault complex for an algebra $A$ with its conjugate to construct a canonical metric compatible connection on $\Omega^1$ associated to a class of quantum metrics, and apply this to the quantum plane. We also apply this to finite groups $G$ with Cayley graph generators split into two halves related by inversion, constructing such a Dolbeault complex $\Omega(G)$ in this case, recovering the quantum Levi-Civita connection for any edge-symmetric metric on the integer lattice with $\Omega(\Bbb Z)$ now viewed as a quantum complex structure. We also show how to build natural quantum metrics on $\Omega^{1,0}$ and $\Omega^{0,1}$ separately, where the inner product in the case of the quantum plane, in order to descend to $\otimes_A$, is taken with values in an $A$-bimodule.
arxiv:2409.05253
Recent transformer-based solutions have been introduced to estimate 3D human pose from a 2D keypoint sequence by considering body joints among all frames globally to learn spatio-temporal correlation. We observe that the motions of different joints differ significantly. However, the previous methods cannot efficiently model the solid inter-frame correspondence of each joint, leading to insufficient learning of spatial-temporal correlation. We propose MixSTE (Mixed Spatio-Temporal Encoder), which has a temporal transformer block to separately model the temporal motion of each joint and a spatial transformer block to learn inter-joint spatial correlation. These two blocks are utilized alternately to obtain better spatio-temporal feature encoding. In addition, the network output is extended from the central frame to the entire frames of the input video, thereby improving the coherence between the input and output sequences. Extensive experiments are conducted on three benchmarks (Human3.6M, MPI-INF-3DHP, and HumanEva). The results show that our model outperforms the state-of-the-art approach by 10.9% P-MPJPE and 7.6% MPJPE. The code is available at https://github.com/jinluzhang1126/mixste.
arxiv:2203.00859
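The alternation of spatial and temporal attention described in the MixSTE abstract above (arXiv:2203.00859) can be sketched in a few lines of PyTorch; the layer sizes and block count below are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of alternating spatial / temporal transformer blocks over a
# (frames, joints, channels) sequence.
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.spatial = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.temporal = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, x):                    # x: (batch, frames, joints, dim)
        b, f, j, c = x.shape
        # spatial attention: tokens are the joints within each frame
        x = self.spatial(x.reshape(b * f, j, c)).reshape(b, f, j, c)
        # temporal attention: tokens are the frames of each joint trajectory
        x = x.permute(0, 2, 1, 3).reshape(b * j, f, c)
        x = self.temporal(x).reshape(b, j, f, c).permute(0, 2, 1, 3)
        return x

model = nn.Sequential(*[SpatioTemporalBlock() for _ in range(4)])
poses2d = torch.randn(2, 81, 17, 64)         # embedded 2D keypoints (toy shapes)
print(model(poses2d).shape)                  # (2, 81, 17, 64); a 3D regression head would follow
```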
In this study, we propose a new approach for natural language processing using Bayesian networks to predict and analyze the context, and we show how this approach can be applied to the community question answering domain. We discuss how Bayesian networks can detect semantic relationships and dependencies between entities, and how this is connected to different score-based approaches to structure learning. We compared Bayesian networks with different score metrics, such as BIC, BDeu, K2 and Chow-Liu trees. Our proposed approach outperforms the baseline model on the precision metric. We also discuss the influence of penalty terms on the structure of Bayesian networks and how they can be used to analyze the relationships between entities. In addition, we examine the visualization of directed acyclic graphs to analyze semantic relationships. The article further identifies issues with detecting certain semantic classes that are separated in the structure of directed acyclic graphs. Finally, we evaluate potential improvements for the Bayesian network approach.
arxiv:2302.13253
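To illustrate the score-based structure learning mentioned above (arXiv:2302.13253), the sketch below computes a BIC score for a candidate DAG over discrete variables; a hill-climbing or K2 search would then compare such scores across candidate graphs. The variable names and toy data are hypothetical.

```python
# Hedged sketch: BIC score of a candidate DAG over discrete variables.
import numpy as np
import pandas as pd

def bic_score(data: pd.DataFrame, parents: dict) -> float:
    """parents maps each column name to a list of its parent column names."""
    n = len(data)
    score = 0.0
    for var, pa in parents.items():
        r = data[var].nunique()                       # number of states of the child
        groups = data.groupby(pa) if pa else [(None, data)]
        q = 0
        for _, g in groups:                           # one observed parent configuration at a time
            q += 1
            counts = g[var].value_counts().to_numpy()
            score += np.sum(counts * np.log(counts / counts.sum()))  # log-likelihood term
        # BIC complexity penalty (q counts observed parent configurations here,
        # which slightly understates the full parameter count)
        score -= 0.5 * np.log(n) * q * (r - 1)
    return score

# toy usage with a hypothetical 3-variable dataset and DAG A -> B, {A,B} -> C
df = pd.DataFrame({"A": [0, 0, 1, 1], "B": [0, 1, 1, 1], "C": [1, 0, 1, 0]})
print(bic_score(df, {"A": [], "B": ["A"], "C": ["A", "B"]}))
```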
Let $C$ be an algebraic curve of genus $g \geq 2$ and $M_L$ be the moduli space of rank 2 stable vector bundles on $C$ whose determinants are isomorphic to a fixed line bundle $L$ of degree 1 on $C$. S. del Bano studied motives of moduli spaces of rank 2 vector bundles on $C$ and computed the motive of $M_L$. In this note, we prove that his result gives an interesting decomposition of the motive of $M_L$. This motivic decomposition is compatible with a conjecture of M. S. Narasimhan which predicts a semi-orthogonal decomposition of the derived category of the moduli space.
arxiv:1806.11101
Let $(M, g)$ be a time-oriented Lorentzian manifold and $d$ the Lorentzian distance on $M$. The function $\tau(q) := \sup_{p<q} d(p,q)$ is the cosmological time function of $M$, where as usual $p<q$ means that $p$ is in the causal past of $q$. This function is called regular iff $\tau(q) < \infty$ for all $q$ and also $\tau \to 0$ along every past inextendible causal curve. If the cosmological time function $\tau$ of a space-time $(M, g)$ is regular it has several pleasant consequences: (1) it forces $(M, g)$ to be globally hyperbolic, (2) every point of $(M, g)$ can be connected to the initial singularity by a rest curve (i.e., a timelike geodesic ray that maximizes the distance to the singularity), (3) the function $\tau$ is a time function in the usual sense, in particular (4) $\tau$ is continuous, in fact locally Lipschitz, and the second derivatives of $\tau$ exist almost everywhere.
arxiv:gr-qc/9709084
Effective user representations are pivotal in personalized advertising. However, stringent constraints on training throughput, serving latency, and memory often limit the complexity and input feature set of online ads ranking models. This challenge is magnified in extensive systems like Meta's, which encompass hundreds of models with diverse specifications, rendering the tailoring of user representation learning for each model impractical. To address these challenges, we present Scaling User Modeling (SUM), a framework widely deployed in Meta's ads ranking system, designed to facilitate efficient and scalable sharing of online user representation across hundreds of ads models. SUM leverages a few designated upstream user models to synthesize user embeddings from massive amounts of user features with advanced modeling techniques. These embeddings then serve as inputs to downstream online ads ranking models, promoting efficient representation sharing. To adapt to the dynamic nature of user features and ensure embedding freshness, we designed the SUM Online Asynchronous Platform (SOAP), a latency-free online serving system complemented with model freshness and embedding stabilization, which enables frequent user model updates and online inference of user embeddings upon each user request. We share our hands-on deployment experiences for the SUM framework and validate its superiority through comprehensive experiments. To date, SUM has been launched to hundreds of ads ranking models in Meta, processing hundreds of billions of user requests daily, yielding significant online metric gains and improved infrastructure efficiency.
arxiv:2311.09544
Using the volume averaging technique of Jackson (1997), we derive a set of two-fluid equations that describe the dynamics of a mono-disperse non-Brownian colloidal suspension in the semi-dilute regime. The equations are tensorial and can be applied in arbitrary geometries. Closure models are developed that represent the stress surrounding each particle as a sum of stresses due to fluid movement through a fixed bed of particles and those due to interactions between particles. Emphasising pragmatism, the developed closure models are consistent with current knowledge of particle interactions in these systems but employ parameters that can be tuned to represent the microstructure of specific particle suspensions. Within the interaction model, a model for the particle distribution around each particle is used that depends on the strain rate field, allowing anisotropy of the microstructure (and hence normal suspension stresses) to develop within the suspension in response to arbitrary strain fields. Force moments acting on particles during particle interactions are calculated by summing hydrodynamic contributions between particle pairs, but adjusted to recognise that multi-particle interactions can increase the effective stress generated during each interaction. Finally, an order of magnitude analysis is performed on the derived momentum equations to determine which terms are significant, leaving only terms in the final equations that are required to predict behaviour during laminar flow within the targeted semi-dilute regime. In a companion paper (referred to as Paper II, Noori et al., 2024) we chose sets of microstructure parameters that predict experimentally measured bulk suspension behaviour, creating a link between particle properties (i.e. roughness), suspension microstructure and shear-induced particle migration in arbitrary flow fields.
arxiv:2501.01742
Predicting the phase diagram of interacting quantum many-body systems is a central problem in condensed matter physics and related fields. A variety of quantum many-body systems, ranging from unconventional superconductors to spin liquids, exhibit complex competing phases whose theoretical description has been the focus of intense efforts. Here, we show that neural network quantum states can be combined with a Lee-Yang theory of quantum phase transitions to predict the critical points of strongly-correlated spin lattices. Specifically, we implement our approach for quantum phase transitions in the transverse-field Ising model on different lattice geometries in one, two, and three dimensions. We show that the Lee-Yang theory combined with neural network quantum states yields predictions of the critical field which are consistent with large-scale quantum many-body methods. As such, our results provide a starting point for determining the phase diagram of more complex quantum many-body systems, including frustrated Heisenberg and Hubbard models.
arxiv:2301.09923
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance. Sample re-weighting methods are popularly used to alleviate this data bias issue. Most current methods, however, require manually pre-specifying the weighting schemes as well as their additional hyper-parameters, relying on the characteristics of the investigated problem and training data. This makes them fairly hard to apply generally in practical scenarios, due to their significant complexities and the inter-class variations of data bias situations. To address this issue, we propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data. Specifically, by seeing each training class as a separate learning task, our method aims to extract an explicit weighting function with sample loss and task/class feature as input, and sample weight as output, expecting to impose adaptively varying weighting schemes on different sample classes based on their own intrinsic bias characteristics. Synthetic and real data experiments substantiate the capability of our method in achieving proper weighting schemes in various data bias cases, like class imbalance, feature-independent and dependent label noise scenarios, and more complicated bias scenarios beyond conventional cases. Besides, the task-transferability of the learned weighting scheme is also substantiated, by readily deploying the weighting function learned on the relatively smaller-scale CIFAR-10 dataset on the much larger-scale full WebVision dataset. A performance gain can be readily achieved compared with previous state-of-the-art methods without additional hyper-parameter tuning and meta gradient descent steps. The general applicability of our method to multiple robust deep learning issues, including partial-label learning, semi-supervised learning and selective classification, has also been validated.
arxiv:2202.05613
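A minimal sketch of the kind of class-aware weighting function described above (arXiv:2202.05613) is given below: a small MLP maps a per-sample loss and a class embedding to a sample weight. The architecture and sizes are assumptions, and the bi-level meta-update of the weighting network is omitted.

```python
# Hedged sketch of a class-aware sample-weighting network; not the authors' design.
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    def __init__(self, num_classes, class_dim=8, hidden=32):
        super().__init__()
        self.class_embed = nn.Embedding(num_classes, class_dim)
        self.mlp = nn.Sequential(
            nn.Linear(1 + class_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())          # weight in (0, 1)

    def forward(self, per_sample_loss, labels):
        feats = torch.cat([per_sample_loss.unsqueeze(1),
                           self.class_embed(labels)], dim=1)
        return self.mlp(feats).squeeze(1)

# usage inside a training step (classifier update and meta-update omitted)
weight_net = WeightNet(num_classes=10)
logits, labels = torch.randn(16, 10), torch.randint(0, 10, (16,))
losses = nn.functional.cross_entropy(logits, labels, reduction="none")
weights = weight_net(losses.detach(), labels)             # adaptive per-sample weights
weighted_loss = (weights * losses).mean()
```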
The phase noise and frequency stability measurements of 1 GHz, 100 MHz, and 10 MHz signals are presented, which have been synthesized from microwave cryogenic sapphire oscillators using ultra-low-vibration pulse-tube cryocooler technology. We present the measured data using independent cryogenic oscillators for the 100 MHz and 10 MHz synthesized signals, whereas previously we only estimated the expected results based on residual phase noise measurements, when only one cryogenic oscillator was available. In addition we present the design of a 1 GHz synthesizer using a Crystek voltage controlled oscillator phase-locked to a 1 GHz output derived from a cryogenic sapphire oscillator.
arxiv:1302.0283
3D human dance motion is a cooperative and elegant social movement. Unlike regular simple locomotion, it is challenging to synthesize artistic dance motions due to their irregularity, kinematic complexity and diversity. The synthesized dance needs to be realistic, diverse and controllable. In this paper, we propose a novel generative motion model based on temporal convolution and LSTM, TC-LSTM, to synthesize realistic and diverse dance motion. We introduce a unique control signal, the dance melody line, to heighten controllability. Hence, our model, and its switch for control signals, promote a variety of applications: random dance synthesis, music-to-dance, user control, and more. Our experiments demonstrate that our model can synthesize artistic dance motion in various dance types. Compared with existing methods, our method achieves state-of-the-art results.
arxiv:2006.05743
The problem of malicious software (malware) detection and classification is a complex task, and there is no perfect approach; there is still a lot of work to be done. Unlike most other research areas, standard benchmarks are difficult to find for malware detection. This paper aims to investigate recent advances in malware detection on macOS, Windows, iOS, Android, and Linux using deep learning (DL), by examining DL in text and image classification, the use of pre-trained and multi-task learning models for malware detection to obtain high accuracy, and which approach is best if a standard benchmark dataset is available. We discuss the issues and challenges in malware detection using DL classifiers by reviewing the effectiveness of these classifiers and their inability to explain their decisions and actions to DL developers, presenting the need to use explainable machine learning (XAI) or interpretable machine learning (IML) programs. Additionally, we discuss the impact of adversarial attacks on deep learning models, which negatively affect their generalization capabilities and result in poor performance on unseen data. We believe there is a need to train and test the effectiveness and efficiency of the current state-of-the-art deep learning models on different malware datasets. We examine eight popular DL approaches on various datasets. This survey will help researchers develop a general understanding of malware recognition using deep learning.
arxiv:2407.19153
Gaddum-type results for pendant tree-connectivity.
arxiv:1508.07149
Velocity dispersion measurements of recently discovered Milky Way satellites with $M_V \gtrsim -7$ imply that they possess high mass-to-light ratios. The expected velocity dispersions due to their baryonic mass are $\sim 0.2$ km s$^{-1}$, but values $\gtrsim 3$ km s$^{-1}$ are measured. We perform Monte Carlo simulations of mock radial velocity measurements of these systems, assuming they have mass-to-light ratios similar to globular clusters and possess an unidentified binary star population, to determine if these stars could boost the velocity dispersion to the observed values. We find that this hypothesis is unlikely to produce dispersions much in excess of $\sim 4.5$ km s$^{-1}$, in agreement with previous work. However, for the systems with potentially the smallest velocity dispersions, values consistent with observations are produced in 5-40% of our simulations for binary fractions in excess of $f_{bin}(P \le 10\,$yrs$) \sim 5\%$. This sample includes the dwarf galaxy candidates that lie closest to classical globular clusters in $M_V - r_h$ space. Considered as a population, it is unlikely that all of these dwarf galaxy candidates have mass-to-light ratios typical of globular clusters, but boosting of the observed dispersion by binaries from near-zero values cannot be ruled out at high confidence for several individual dwarf galaxy candidates. Given the importance of obtaining accurate velocity dispersions and dynamical masses for the faintest satellites, it is clearly desirable to exclude directly the possible effect of binaries on these systems. This requires multi-epoch radial velocity measurements with individual uncertainties of $\lesssim 1$ km s$^{-1}$ to identify spectroscopic binaries with orbital velocities of the order of the observed velocity dispersion.
arxiv:1009.4205
There are two known general results on the finite model property (FMP) of commutators [L, L'] (bimodal logics with commuting and confluent modalities). If L is finitely axiomatisable by modal formulas having universal Horn first-order correspondents, then both [L, K] and [L, S5] are determined by classes of frames that admit filtration, and so have the FMP. On the negative side, if both L and L' are determined by transitive frames and have frames of arbitrarily large depth, then [L, L'] does not have the FMP. In this paper we show that commutators with a 'weakly connected' component often lack the FMP. Our results imply that the above positive result does not generalise to universally axiomatisable component logics, and even commutators without 'transitive' components such as [K.3, K] can lack the FMP. We also generalise the above negative result to cases where one of the component logics has frames of depth one only, such as [S4.3, S5] and the decidable product logic S4.3xS5. We also show cases where already half of commutativity is enough to force infinite frames.
arxiv:1502.05834
Unconventional superconductivity frequently emerges as the transition temperature of a magnetic phase, typically antiferromagnetic, is suppressed continuously toward zero temperature. Here, we report contrary behavior in pressurized CeRhGe3, a non-centrosymmetric heavy fermion compound. We find that its pressure-tuned antiferromagnetic transition temperature (TN) appears to avoid a continuous decrease to zero temperature by terminating abruptly above a dome of pressure-induced superconductivity. Near 21.5 GPa, evidence for TN suddenly vanishes, the electrical resistance becomes linear in temperature, and the superconducting transition temperature reaches a maximum. In light of x-ray absorption spectroscopy measurements, these characteristics appear to be related to a pressure-induced Ce valence instability, which manifests as a sharp increase in the rate of change of Ce valence with applied pressure.
arxiv:1711.00688
We consider the electromagnetic production of a positron in collisions of heavy nuclei, with the simultaneously produced electron captured by one of the nuclei. This cross-section exceeds by about four orders of magnitude the cross-section of $e^+e^-$ production.
arxiv:1603.06061
Bounding chains are a technique that offers three benefits to Markov chain practitioners: a theoretical bound on the mixing time of the chain under restricted conditions, experimental bounds on the mixing time of the chain that are provably accurate, and construction of perfect sampling algorithms when used in conjunction with protocols such as coupling from the past. Perfect sampling algorithms generate variates exactly from the target distribution without the need to know the mixing time of a Markov chain at all. We present here the basic theory and use of bounding chains for several chains from the literature, analyzing the running time when possible. We present bounding chains for the transposition chain on permutations, the hard core gas model, proper colorings of a graph, the antiferromagnetic Potts model and sink-free orientations of a graph.
arxiv:math/0405284
In data from the New Horizons encounter with Pluto in 2015, attention was called to a crater named Kiladze and its surroundings because of the water ice spectral properties, which contrast with the primarily methane ice regional surface composition. The water ice carries the spectral signature of an ammoniated compound, similar to that seen at two other sites on Pluto where cryovolcanism has been identified. The faulted structure of Kiladze, including shaping by numerous collapse pits and the distorted shape of the crater, is compatible with the surroundings in Hayabusa Terra, east of Sputnik Planitia. They are further compatible with an interpretation as a resurgent caldera formed during a past period of active cryovolcanism that appears to be significantly more recent than the overall age of the planet's surface, possibly in the last several million years. In view of the size of the caldera and the large scale of the surrounding distribution of water ice, we propose that Kiladze is a "supervolcano" in which one or more explosive events has scattered more than ~1000 km$^3$ of icy cryomagma erupted from the interior onto the surface.
arxiv:2310.10904
htks is a game - like cognitive assessment method, designed for children between four and eight years of age. during the htks assessment, a child responds to a sequence of requests, such as " touch your head " or " touch your toes ". the cognitive challenge stems from the fact that the children are instructed to interpret these requests not literally, but by touching a different body part than the one stated. in prior work, we have developed the cognilearn system, which captures data from subjects performing the htks game and analyzes the motion of the subjects. in this paper we propose some specific improvements that make the motion analysis module more accurate. as a result of these improvements, the accuracy in recognizing cases where subjects touch their toes has gone from 76. 46 % in our previous work to 97. 19 % in this paper.
arxiv:1703.08697
we present next - to - next - to leading order ( nnlo ) quantum electrodynamics ( qed ) corrections to the production of the higgs boson in bottom quark annihilation at the large hadron collider ( lhc ) in the five flavor scheme. we have systematically included the nnlo corrections resulting from the interference of quantum chromodynamics ( qcd ) and qed interactions. we have investigated the infrared ( ir ) structure of the bottom quark form factor up to the two - loop level in qed and in qcd $ \ times $ qed using the k + g equation. we find that the ir poles in the form factor are controlled by the universal cusp, collinear and soft anomalous dimensions. in addition, we derive the qed as well as qcd $ \ times $ qed contributions to the soft distribution function as well as to the ultraviolet renormalization constant of the bottom yukawa coupling up to second order in the strong coupling and the fine structure constant. finally, we report our findings on the numerical impact of the nnlo results from qed and qcd $ \ times $ qed at the lhc energies taking into account the dominant nnlo qcd corrections.
arxiv:1906.09028
we construct the commutative poisson algebra of classical hamiltonians in field theory. we pose the problem of quantization of this poisson algebra. we also make some interesting computations in the known quadratic part of the quantum algebra.
arxiv:1008.3333
in addition to electronic polarization or charge redistribution, the shape of neutral conjugated molecules yields position - dependent ionization potentials and electron affinities in organic thin films. self - consistent i ( n ) and a ( n ) are computed in each layer n of 10 - layer films of prototypical organics on a metal. the depth dependence of i ( n ) is discussed at surfaces of anthracene, c60 and ptcda. the shape contribution can be substantial, up to 0. 5 ev, and comes primarily from charge - quadrupole interactions.
arxiv:1005.1554
electroweak baryogenesis in a two - higgs doublet model is a well - motivated and testable scenario for physics beyond the standard model. an attractive way of providing $ cp $ violation is through flavor - changing higgs couplings, where the top - charm coupling is hardly constrained. this minimal scenario can be tested by searching for heavy charged and neutral higgs bosons at the lhc. while the charged higgs signature requires a dedicated analysis, the neutral higgs signature will be covered by a general search for same - sign top pairs. together, they provide a conclusive test of this kind of baryogenesis.
arxiv:2012.03572
data from direct numerical simulations of disperse bubbly flows in a vertical channel are used to study the effect of the bubbles on the carrier - phase turbulence. a new method is developed, based on the barycentric map approach, that allows one to quantify the anisotropy and componentiality of the flow at any scale. using this method, the bubbles are found to significantly enhance flow anisotropy at all scales compared with the unladen case, and for some bubble cases, very strong anisotropy persists down to the smallest flow scales. the strongest anisotropy observed was for the cases involving small bubbles. concerning the inter - scale energy transfer, our results indicate that for the bubble - laden cases, the energy transfer is from large to small scales, just as for the unladen case. however, there is evidence of an upscale transfer when considering the transfer of energy associated with particular components of the velocity field. although the direction of the energy transfer is the same with and without the bubbles, the transfer is much stronger for the bubble - laden cases, suggesting that the bubbles play a strong role in enhancing the activity of the nonlinear term in the flow. the normalized forms of the fourth and sixth - order structure functions are also considered, and reveal that the introduction of bubbles into the flow strongly enhances intermittency in the dissipation range, but suppresses it at larger scales. this strong enhancement of the dissipation scale intermittency has significant implications for understanding how the bubbles might modify the mixing properties of turbulent flows.
arxiv:2104.00449
in order to discuss the spin - gap formation in a multiorbital system, we analyze an e _ g - orbital hubbard model on a geometrically frustrated zigzag chain by using a density - matrix renormalization group method. due to the appearance of a ferro - orbital arrangement, the system is regarded as a one - orbital system, while the degree of spin frustration is controlled by the spatial anisotropy of the orbital. in the region of strong spin frustration, we observe a finite energy gap between ground and first - excited states, which should be called a spin - orbital gap. the physical meaning is clarified by an effective heisenberg spin model including correctly the effect of the orbital arrangement influenced by the spin excitation.
arxiv:cond-mat/0604076
we describe simudo, a free poisson / drift - diffusion steady state device model for semiconductor and intermediate band materials, including self - consistent optical absorption and generation. simudo is the first freely available device model that can treat intermediate band materials. simudo uses the finite element method ( fem ) to solve the coupled nonlinear partial differential equations in two dimensions, which is different from the standard choice of the finite volume method in essentially all commercial semiconductor device models. we present the continuous equations that simudo solves, show the fem formulations we have developed, and demonstrate how they allow robust convergence with double - precision floating point arithmetic. with a benchmark semiconductor pn - junction device, we show that simudo has a higher rate of convergence than synopsys sentaurus, converging to high accuracy with a considerably smaller mesh. simudo includes many semiconductor phenomena and parameters and is designed for extensibility by the user to include many physical processes.
arxiv:1905.11303
we consider several topologically twisted chern - simons - matter theories and propose boundary voas whose module categories should model the category of line operators of the 3d bulk. our main examples come from the topological $ a $ and $ b $ twists of the exotic $ \ mathcal { n } = 4 $ chern - simons - matter theories of gaiotto - witten, but we show that there is a topological " $ a $ - twist " for a much larger class of $ \ mathcal { n } \ neq4 $ theories. we illustrate a particular example of this new class of theories that admits the $ p = 2 $ singlet voa $ \ mathfrak { m } ( 2 ) $ on its boundary and comment on its relation to the $ \ psi \ to \ infty $ limit of the gaiotto - rap { \ v c } { \ ' a } k corner voa $ y _ { 1, 1, 0 } [ \ psi ] $.
arxiv:2204.02991
recent progress in the automated driving system ( ads ) and advanced driver assistant system ( adas ) has shown that the combined use of 3d light detection and ranging ( lidar ) and the camera is essential for an intelligent vehicle to perceive and understand its surroundings. lidar - camera fusion requires precise intrinsic and extrinsic calibrations between the sensors. however, due to the limitations of the calibration equipment and susceptibility to noise, algorithms in existing methods tend to fail in finding lidar - camera correspondences at long range. in this paper, we introduce an interactive lidar - to - camera calibration toolbox to estimate the intrinsic and extrinsic transforms. this toolbox automatically detects the corner of a planar board from a sequence of lidar frames and provides a convenient user interface for annotating the corresponding pixels on camera frames. since the toolbox only detects the top corner of the board, there is no need to prepare a precise polygon planar board or a checkerboard with different reflectivity areas as in the existing methods. furthermore, the toolbox uses genetic algorithms to estimate the transforms and supports multiple camera models such as the pinhole camera model and the fisheye camera model. experiments using a velodyne vlp - 16 lidar and a point grey chameleon 3 camera show robust results.
arxiv:1903.02122
this paper introduces and explores a new programming paradigm, model - based programming, designed to address the challenges inherent in applying deep learning models to real - world applications. despite recent significant successes of deep learning models across a range of tasks, their deployment in real business scenarios remains fraught with difficulties, such as complex model training, large computational resource requirements, and integration issues with existing programming languages. to ameliorate these challenges, we propose the concept of ' model - based programming ' and present a novel programming language - m language, tailored to a prospective model - centered programming paradigm. m language treats models as basic computational units, enabling developers to concentrate more on crucial tasks such as model loading, fine - tuning, evaluation, and deployment, thereby enhancing the efficiency of creating deep learning applications. we posit that this innovative programming paradigm will stimulate the extensive application and advancement of deep learning technology and provide a robust foundation for a model - driven future.
arxiv:2305.07341
we analyze the aquarius simulations to characterize the shape of dark matter halos with peak circular velocity in the range 8 < vmax < 200 km / s, and perform a convergence study using the various aquarius resolution levels. for the converged objects, we determine the principal axes ( a < b < c ) of the normalized inertia tensor as a function of radius. we find that the triaxiality of field halos is an increasing function of halo mass, so that the smallest halos in our sample are ~ 40 - 50 % rounder than milky way - like objects at the radius where the circular velocity peaks, rmax. we find that the distribution of subhalo axis ratios is consistent with that of field halos of comparable vmax. inner and outer contours within each object are well aligned, with the major axis preferentially pointing in the radial direction for subhalos closest to the center of their host halo. we also analyze the dynamical structure of subhalos likely to host luminous satellites comparable to the classical dwarf spheroidals in the local group. these halos have axis ratios that increase with radius, and which are mildly triaxial with < b / a > ~ 0. 75 and < c / a > ~ 0. 60 at r ~ 1 kpc. their velocity ellipsoids become strongly tangentially biased in the outskirts as a consequence of tidal stripping.
arxiv:1402.0903
in a first step, we explore the discovery and analysis potentials of the hera collider, with and without polarized beams, in the search for electron - quark compositeness in the neutral current channel. then we study the parity violating effects for jet production in polarized $ pp $ collisions at rhic, which could be due to the presence of quark subconstituents or new massive gauge bosons. we emphasize that the measurement of spin asymmetries in such a polarized context could give some crucial information on the chiral structure of these hypothetical new interactions.
arxiv:hep-ph/9707470
models where the accelerated expansion of our universe is caused by a quintessence scalar field are reviewed. in the framework of high energy physics, the physical nature of this field is discussed and its interaction with ordinary matter is studied and explicitly calculated. it is shown that this coupling is generically too strong to be compatible with local tests of gravity. a possible way out, the chameleon effect, is also briefly investigated.
arxiv:0803.4076
in the framework of the one - boson - exchange model, we explore whether the intermediate and short - range forces from $ \ sigma / \ omega $ - exchange can be strong enough to bind heavy molecular states. the $ \ lambda _ cd ( \ bar { d } ) $ and $ \ lambda _ c \ lambda _ c ( \ bar { \ lambda } _ c ) $ systems have been studied and compared. we find that the force from $ \ sigma $ - exchange is attractive and dominant, whereas the $ \ omega $ - exchange force is not. as a consequence, the s - wave $ \ lambda _ cd $, $ \ lambda _ c \ lambda _ c $, and $ \ lambda _ c \ bar { \ lambda } _ c $ can be possible molecular candidates. we further indicate that a hadron - hadron system with more light quarks $ ( u, d ) $ can more easily form a bound state. as a byproduct, by studying the heavy - quark mass dependence for the $ \ lambda _ cd ( \ bar { d } ) $ - like and $ \ lambda _ c \ lambda _ c ( \ bar { \ lambda } _ c ) $ - like systems, we find that the charm / bottom sector can easily accommodate molecular states. finally, the $ \ lambda _ cn ( \ bar { n } ) $ and $ \ lambda _ bn ( \ bar { n } ) $ systems are investigated. our results indicate that they are also likely to form bound states. since one - $ \ pi $ - exchange forces provide additional attraction when coupled channels are included, we expect many molecular states in the heavy quark sectors.
arxiv:1707.08306
recently, linear computed tomography ( lct ) systems have attracted considerable attention. to mitigate projection truncation and image the region of interest ( roi ) for lct, the backprojection filtration ( bpf ) algorithm is an effective solution. however, in bpf for lct, it is difficult to achieve stable interior reconstruction, and for differentiated backprojection ( dbp ) images of lct, the repeated rotation, finite inversion of the hilbert transform ( hilbert filtering ), and inverse rotation operations blur the image. to handle multiple reconstruction scenarios for lct, including the interior roi, the complete object, and the exterior region beyond the field - of - view ( fov ), and to avoid the rotation operations of hilbert filtering, we propose two types of reconstruction architectures. the first overlays multiple dbp images to obtain a complete dbp image, then uses a network to learn the hilbert filtering function for the overlaid image, referred to as the overlay - single network ( osnet ). the second uses multiple networks to train different directional hilbert filtering models for the dbp images of multiple linear scans, respectively, and then overlays the reconstructed results, i. e., multiple networks overlaying ( mneto ). in both architectures, we introduce a swin transformer ( st ) block into the generator of pix2pixgan to extract both local and global features from dbp images at the same time. we evaluate the two architectures across different networks, fov sizes, pixel sizes, numbers of projections, geometric magnifications, and processing times. experimental results show that both architectures can recover images. osnet outperforms bpf in various scenarios. for the different networks, st - pix2pixgan is superior to pix2pixgan and cyclegan. mneto exhibits a few artifacts due to the differences among the multiple models, but any one of its models is suitable for imaging the exterior edge in a certain direction.
arxiv:2309.11858
we investigate the response to modular transformations and the fractional statistics of abelian multi - component fractional quantum hall ( fqh ) states. in particular, we analytically derive the modular matrices encoding the statistics of anyonic excitations for general halperin states using the conformal field theories ( cfts ). we validate our theory by several microscopic examples, including the spin - singlet state using the anyon condensation picture and the halperin ( 221 ) state in a topological flat - band lattice model using numerical calculations. our results, uncovering that the modular matrices and associated fractional statistics are solely determined by the $ k $ - matrix, further strengthen the correspondence between the 2d cfts and ( 2 + 1 ) d topological orders for multi - component fqh states.
arxiv:2301.06427
in ordinary solids, material disorder is known to increase the size of the process zone in which stress concentrates at the crack tip, causing a transition from localized to diffuse failure. here, we report experiments on disordered 2d lattices, derived from frictional particle packings, in which the mean coordination number $ \ langle z \ rangle $ of the underlying network provides a similar control. our experiments show that tuning the connectivity of the network provides access to a range of behaviors from brittle to ductile failure. we elucidate the cooperative origins of this transition using a frictional pebble game algorithm on the original, intact lattices. we find that the transition corresponds to the isostatic value $ \ langle z \ rangle = 3 $ in the large - friction limit, with brittle failure occurring for structures vertically spanned by a rigid cluster, and ductile failure for floppy networks containing nonspanning rigid clusters. furthermore, we find that individual failure events typically occur within the floppy regions separated by the rigid clusters.
arxiv:1812.07466
recent advancements in high - fidelity dynamic scene reconstruction have leveraged dynamic 3d gaussians and 4d gaussian splatting for realistic scene representation. however, to make these methods viable for real - time applications such as ar / vr, gaming, and rendering on low - power devices, substantial reductions in memory usage and improvements in rendering efficiency are required. while many state - of - the - art methods prioritize lightweight implementations, they struggle in handling scenes with complex motions or long sequences. in this work, we introduce temporally compressed 3d gaussian splatting ( tc3dgs ), a novel technique designed specifically to effectively compress dynamic 3d gaussian representations. tc3dgs selectively prunes gaussians based on their temporal relevance and employs gradient - aware mixed - precision quantization to dynamically compress gaussian parameters. it additionally relies on a variation of the ramer - douglas - peucker algorithm in a post - processing step to further reduce storage by interpolating gaussian trajectories across frames. our experiments across multiple datasets demonstrate that tc3dgs achieves up to 67 $ \ times $ compression with minimal or no degradation in visual quality.
arxiv:2412.05700
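the post - processing step mentioned above is a variation of the ramer - douglas - peucker algorithm; a plain, unmodified version applied to a per - gaussian centre trajectory might look like the sketch below. the trajectory, the dimensionality and the tolerance are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rdp(points, epsilon):
    # ramer-douglas-peucker: keep the endpoints, and recursively keep the point
    # farthest from the chord whenever that distance exceeds epsilon
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    chord = b - a
    norm = np.linalg.norm(chord)
    rel = points - a
    if norm == 0.0:
        dists = np.linalg.norm(rel, axis=1)
    else:
        d = chord / norm
        dists = np.linalg.norm(rel - np.outer(rel @ d, d), axis=1)
    i = int(np.argmax(dists))
    if dists[i] > epsilon:
        left = rdp(points[: i + 1], epsilon)
        right = rdp(points[i:], epsilon)
        return np.vstack([left[:-1], right])    # drop the duplicated split point
    return np.vstack([a, b])

# e.g. simplify one gaussian centre trajectory sampled over 300 frames
trajectory = np.cumsum(np.random.randn(300, 3) * 0.01, axis=0)
keyframes = rdp(trajectory, epsilon=0.05)       # interpolate between these frames
```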
in recent years, the impressive performance of deep learning technology has been recognized in synthetic aperture radar ( sar ) automatic target recognition ( atr ). since a large amount of annotated data is required in this technique, obtaining a high recognition rate with less labeled data poses a serious challenge. to overcome this problem, inspired by contrastive learning, we propose a novel framework named batch instance discrimination and feature clustering ( bidfc ). in this framework, unlike the objective of general contrastive learning methods, the embedding distance between samples should be moderate because of the high similarity between samples in sar images. consequently, our flexible framework is equipped with an adjustable distance between embeddings, which we term weakly contrastive learning. technically, instance labels are assigned to the unlabeled data in each batch, and random augmentation and training are performed a few times on these augmented data. meanwhile, a novel dynamic - weighted variance loss ( dwv loss ) function is also proposed to cluster the embeddings of the enhanced versions of each sample. experimental results on the moving and stationary target acquisition and recognition ( mstar ) database indicate a 91. 25 % classification accuracy of our method fine - tuned on only 3. 13 % of the training data. even though a linear evaluation is performed on the same training data, the accuracy can still reach 90. 13 %. we also verified the effectiveness of bidfc on the opensarship database, indicating that our method can be generalized to other datasets. our code is available at : https://github.com/wenlve-zhou/bidfc-master.
arxiv:2408.03627
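as a rough illustration of clustering the embeddings of the enhanced versions of each sample, the sketch below penalises the spread of a sample's augmented - view embeddings around their mean; the dynamic weighting of the actual dwv loss is not reproduced here, and the optional per - sample weights are only a stand - in.

```python
import torch
import torch.nn.functional as F

def view_variance_loss(emb, weights=None):
    # emb: (batch, n_views, dim) embeddings of augmented versions of each sample
    emb = F.normalize(emb, dim=-1)
    centre = emb.mean(dim=1, keepdim=True)                 # per-sample mean embedding
    var = ((emb - centre) ** 2).sum(dim=-1).mean(dim=1)    # per-sample view variance
    if weights is not None:                                # stand-in for dynamic weighting
        var = var * weights
    return var.mean()
```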
the holographic charged fluid with anomalous current in einstein - maxwell gravity has been generalized from the infinite boundary to the finite cutoff surface by using the gravity / fluid correspondence. after perturbing the boosted reissner - nordstrom ( rn ) - ads black brane solution of the einstein - maxwell gravity with the chern - simons term, we obtain the first order perturbative gravitational and maxwell solutions, and calculate the stress tensor and charged current of the dual fluid at finite cutoff surfaces, which contain undetermined parameters after demanding the regularity condition at the future horizon. we adopt the dirichlet boundary condition and impose the landau frame to fix these parameters, and finally obtain the dependence of the transport coefficients in the dual stress tensor and charged current on the arbitrary radial cutoff $ r _ c $. we find that the dual fluid is not conformal, but it has vanishing bulk viscosity, and the shear viscosity to entropy density ratio is universally $ 1 / 4 \ pi $. other transport coefficients of the dual current turn out to be cutoff - dependent. in particular, the chiral vortical conductivity expressed in terms of thermodynamic quantities takes the same form as that of the dual fluid at the asymptotic ads boundary, and the chiral magnetic conductivity receives a cutoff - dependent correction which vanishes at the infinite boundary.
arxiv:1207.5309
we present green bank telescope ( gbt ) observations of the 3 ( 12 ) - 3 ( 13 ) ( 29 ghz ) and 4 ( 13 ) - 4 ( 14 ) ( 48 ghz ) transitions of the h2co molecule toward a sample of 23 well - studied star - forming regions. analysis of the relative intensities of these transitions can be used to reliably measure the densities of molecular cores. adopting kinetic temperatures from the literature, we have employed a large velocity gradient ( lvg ) model to derive the average hydrogen number density [ n ( h2 ) ] within a 16 arcsecond beam toward each source. densities in the range of 10 ^ { 5. 5 } - - 10 ^ { 6. 5 } cm ^ { - 3 } and ortho - formaldehyde column densities per unit line width between 10 ^ { 13. 5 } and 10 ^ { 14. 5 } cm ^ { - 2 } ( km s ^ { - 1 } ) ^ { - 1 } are found for most objects, in general agreement with existing measurements. a detailed analysis of the advantages and limitations to this densitometry technique is also presented. we find that h2co 3 ( 12 ) - 3 ( 13 ) / 4 ( 13 ) - 4 ( 14 ) densitometry proves to be best suited to objects with t _ k > ~ 100 k, above which the h2co lvg models become relatively independent of kinetic temperature. this study represents the first detection of these h2co k - doublet transitions in all but one object in our sample. the ease with which these transitions were detected, coupled with their unique sensitivity to spatial density, make them excellent monitors of density in molecular clouds for future experiments. we also report the detection of the 9 _ 2 - - 8 _ 1 a ^ - ( 29 ghz ) transition of ch3oh toward 6 sources.
arxiv:1108.3719
the energy spectrum of two 0 - branes for fixed angular momentum in 2 + 1 dimensions is calculated by the rayleigh - ritz method. the basis used for each angular momentum consists of 80 eigenstates of the harmonic oscillator problem on the corresponding space. it is seen that the spectrum exhibits a definite linear regge trajectory behavior. it is argued that this behavior supports the picture by which the bound - states of quarks and qcd - strings are governed by the quantum mechanics of matrix coordinates.
arxiv:1506.02961
in this letter we propose an experiment to measure the kondo effect for magnetic atoms adsorbed on the surface of a metallic nanowire. in addition to the traditional sp - d hybridization, by introducing the strong electromagnetic field of the localized surface plasmon on the nanowire, we show that it is possible to observe additional sp - d electron transfer processes assisted by surface plasmons. due to the good surface - to - volume ratio of the nanowire, the kondo resonances here would be revealed as multiple anti - resonances in the differential conductance versus bias voltage curve.
arxiv:cond-mat/0608239
the part - per - million measurement of the positive muon lifetime and determination of the fermi constant by the mulan experiment at the paul scherrer institute is reviewed. the experiment used an innovative, time - structured, surface muon beam and a near - 4pi, finely - segmented, plastic scintillator positron detector. two in - vacuum muon stopping targets were used : a ferromagnetic foil with a large internal magnetic field, and a quartz crystal in a moderate external magnetic field. the experiment obtained a muon lifetime of 2 196 980. 3 ( 2. 2 ) ps ( 1. 0 ppm ) and a fermi constant of 1. 166 378 7 ( 6 ) x 10 ^ - 5 gev ^ - 2 ( 0. 5 ppm ). the thirty - fold improvement in the precision of the muon lifetime has proven valuable for precision measurements in nuclear muon capture, and the commensurate improvement in the fermi constant has proven valuable for precision tests of the standard model.
arxiv:2108.09182
planning at execution time has been shown to dramatically improve performance for agents in both single - agent and multi - agent settings. a well - known family of approaches to planning at execution time is alphazero and its variants, which use monte carlo tree search together with a neural network that guides the search by predicting state values and action probabilities. alphazero trains these networks by minimizing a planning loss that makes the value prediction match the episode return, and the policy prediction at the root of the search tree match the output of the full tree expansion. alphazero has been applied to both single - agent environments ( such as sokoban ) and multi - agent environments ( such as chess and go ) with great success. in this paper, we explore an intriguing question : in single - agent environments, can we outperform alphazero by directly maximizing the episode score instead of minimizing this planning loss, while leaving the mcts algorithm and neural architecture unchanged? to directly maximize the episode score, we use evolution strategies, a family of algorithms for zeroth - order blackbox optimization. our experiments indicate that, across multiple environments, directly maximizing the episode score outperforms minimizing the planning loss.
arxiv:2406.08687
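the "directly maximize the episode score" idea can be sketched with a standard antithetic evolution - strategies update; here score_fn is assumed to run mcts - guided episodes with the perturbed network weights and return the episode score, and all hyperparameters are illustrative.

```python
import numpy as np

def es_step(theta, score_fn, sigma=0.1, alpha=0.01, pop=64, rng=None):
    # one antithetic evolution-strategies update on a flat parameter vector theta
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((pop, theta.size))
    eps = np.vstack([eps, -eps])                          # antithetic pairs
    scores = np.array([score_fn(theta + sigma * e) for e in eps])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)
    grad = eps.T @ scores / (len(eps) * sigma)            # score-weighted noise average
    return theta + alpha * grad
```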
autonomous radar has been an integral part of advanced driver assistance systems due to its robustness to adverse weather and various lighting conditions. conventional automotive radars use digital signal processing ( dsp ) algorithms to process raw data into sparse radar pins that do not provide information regarding the size and orientation of the objects. in this paper, we propose a deep - learning based algorithm for radar object detection. the algorithm takes in radar data in its raw tensor representation and places probabilistic oriented bounding boxes around the detected objects in bird ' s - eye - view space. we created a new multimodal dataset with 102544 frames of raw radar and synchronized lidar data. to reduce human annotation effort we developed a scalable pipeline to automatically annotate ground truth using lidar as reference. based on this dataset we developed a vehicle detection pipeline using raw radar data as the only input. our best performing radar detection model achieves 77. 28 \ % ap under oriented iou of 0. 3. to the best of our knowledge, this is the first attempt to investigate object detection with raw radar data for conventional corner automotive radars.
arxiv:2004.05310
using the formalism of relativistic electrodynamics of continuous media and the main principles of relativistic quantum field theory, the covariant lagrangian of the electromagnetic field interaction with polarizable spin - 1 / 2 particles has been obtained. this lagrangian allows us to determine the canonical and metric energy - momentum tensors as well as the low - energy compton scattering amplitude. the application of this lagrangian to the calculation of the radiative correction to the imaginary part of double virtual compton scattering is demonstrated.
arxiv:0707.2395
by using the dimension - free harnack inequality and the integration by parts formula for the associated diffusion semigroup, we prove the central limit theorem, the moderate deviation principle, and the logarithmic iteration law for the sample entropy production rate of stochastic differential equations with lipschitz continuous and dissipative drifts.
arxiv:1510.01881
let $ x _ 1, \ ldots, x _ n $ be independent identically distributed random vectors in $ \ mathbb { r } ^ d $. we consider upper bounds on $ \ max _ x \ mathbb { p } ( a _ 1x _ 1 + \ cdots + a _ nx _ n = x ) $ under various restrictions on $ x _ i $ and the weights $ a _ i $. when $ \ mathbb { p } ( x _ i = \ pm 1 ) = \ frac { 1 } { 2 } $, this corresponds to the classical littlewood - offord problem. we prove that in general for identically distributed random vectors and even values of $ n $ the optimal choice for $ ( a _ i ) $ is $ a _ i = 1 $ for $ i \ leq \ frac { n } { 2 } $ and $ a _ i = - 1 $ for $ i > \ frac { n } 2 $, regardless of the distribution of $ x _ 1 $. applying these results for bernoulli random variables answers a recent question of fox, kwan and sauermann. finally, we provide sharp bounds for concentration probabilities of sums of random vectors under the condition $ \ sup _ { x } \ mathbb { p } ( x _ i = x ) \ leq \ alpha $, where it turns out that the worst case scenario is provided by distributions on an arithmetic progression that are in some sense as close to the uniform distribution as possible. an important feature of this work is that unlike much of the literature on the subject we use neither methods of harmonic analysis nor those from extremal combinatorics.
arxiv:1912.08770
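for context, the classical erdős bound for the ±1 case mentioned above can be written out; with all weights equal to one ( which, for rademacher signs, gives the same distribution of the sum as the split choice described in the abstract ), the bound is attained at x = 0 for even n:

$$
\max_{x\in\mathbb{R}} \; \mathbb{P}\Big(\sum_{i=1}^{n} a_i X_i = x\Big) \;\le\; \frac{1}{2^{n}}\binom{n}{\lfloor n/2\rfloor} = O\!\big(n^{-1/2}\big), \qquad a_i \neq 0,\ \ \mathbb{P}(X_i=\pm 1)=\tfrac12,
$$

and for $a_1=\dots=a_n=1$ with $n$ even, $\mathbb{P}\big(\sum_i X_i = 0\big) = \binom{n}{n/2}2^{-n}$, so the bound is sharp.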
the majorana nature of massive neutrinos will be crucially probed in the next - generation experiments of the neutrinoless double - beta ( $ 0 \ nu 2 \ beta $ ) decay. the effective mass term of this process, $ \ langle m \ rangle ^ { } _ { ee } $, may be contaminated by new physics. so how to interpret a discovery or null result of the $ 0 \ nu 2 \ beta $ decay in the foreseeable future is highly nontrivial. in this paper we introduce a novel three - dimensional description of $ | \ langle m \ rangle _ { ee } ^ { } | $, which allows us to see its sensitivity to the lightest neutrino mass and two majorana phases in a transparent way. we examine to what extent the free parameters of $ | \ langle m \ rangle _ { ee } ^ { } | $ can be constrained, provided a signal of the $ 0 \ nu 2 \ beta $ decay is observed someday. to fully explore lepton number violation, all the six effective majorana mass terms $ \ langle m \ rangle _ { \ alpha \ beta } ^ { } $ ( for $ \ alpha, \ beta = e, \ mu, \ tau $ ) are calculated and their lower bounds are illustrated with two - dimensional contour figures. the effect of possible new physics on the $ 0 \ nu 2 \ beta $ decay is also discussed in a model - independent way. we find that the result of $ | \ langle m \ rangle _ { ee } ^ { } | $ in the normal ( or inverted ) neutrino mass ordering case modified by the new physics effect may somewhat mimic that in the inverted ( or normal ) mass ordering case in the standard three - flavor scheme. hence a proper interpretation of a discovery or null result of the $ 0 \ nu 2 \ beta $ decay may demand extra information from some other measurements.
arxiv:1504.05820
palantir was considering an ipo in the first half of 2019 following a $ 41 billion valuation. in july 2020, it was revealed the company had filed for an ipo. it ultimately went public on the new york stock exchange through a direct public offering on september 30, 2020 under the ticker symbol " pltr ". on september 6, 2024, s & p global announced that the company would be added to the s & p 500 index. palantir ’ s share price rose 14 % the next trading day. on november 14, 2024, palantir technologies inc. announced its transfer of stock listing from the new york stock exchange ( nyse ) to the nasdaq global select market, effective november 26, 2024. the company ' s class a common stock will continue to trade under the ticker symbol " pltr. " = = = investments = = = the company has invested over $ 400 million into nearly two dozen special - purpose acquisition company ( spac ) targets according to investment bank rbc capital markets, while bringing those companies along as customers. = = products = = = = = palantir gotham = = = released in 2008, palantir gotham is palantir ' s defense and intelligence offering. it is an evolution of palantir ' s longstanding work in the united states intelligence community, and is used by intelligence and defense agencies. among other things, the software supports alerts, geospatial analysis, and prediction. foreign customers include the ukrainian military. palantir gotham has also been used as a predictive policing system, which has elicited some controversy over racism in their ai analytics. = = = palantir foundry = = = palantir foundry is a software platform offered for use in commercial and civil government sectors. it was popularized for use in the health sector by its use within the national covid cohort collaborative, a secure enclave of electronic health records from across the united states that produced hundreds of scientific manuscripts and won the nih / faseb dataworks grand prize. foundry was also used by nhs england in dealing with the covid - 19 pandemic in england to analyze the operation of the vaccination program. a campaign was started against the company in june 2021 by foxglove, a tech - justice nonprofit, because " their background has generally been in contracts where people are harmed, not healed. " clive lewis mp, supporting the campaign, said palantir had an " appalling track record. " as
https://en.wikipedia.org/wiki/Palantir_Technologies
the luminescence properties of the colloidal hybrid si - ni nanoparticle system fabricated in pure water by pulsed laser ablation are considered. the photoluminescence of this system, red - shifted because of the stark effect in the coulomb field of the charged ni nanoparticles, has been registered in the blue range of the spectrum.
arxiv:1508.07757
we present a comprehensive mean - field calculation of the schiff moment of the nucleus 225ra, the quantity which determines the static electric dipole moment of the corresponding atom if time - reversal ( t ) invariance is violated in the nucleus. the calculation breaks all possible intrinsic symmetries of the nuclear mean field and includes, in particular, both exchange and direct terms from the full finite - range t - violating nucleon - nucleon interaction, and the effects of short - range correlations. the resulting schiff moment, which depends on three unknown t - violating pion - nucleon coupling constants, is much larger than in 199hg, the isotope with the best current experimental limit on its atomic electric - dipole moment.
arxiv:nucl-th/0503057
the capacity of a modern deep learning system to determine if a sample falls within its realm of knowledge is fundamental and important. in this paper, we offer insights into and analyses of a recent state - of - the - art out - of - distribution ( ood ) detection method, extremely simple activation shaping ( ash ). we demonstrate that activation pruning has a detrimental effect on ood detection, while activation scaling enhances it. moreover, we propose scale, a simple yet effective post - hoc network enhancement method for ood detection, which attains state - of - the - art ood detection performance without compromising in - distribution ( id ) accuracy. by integrating scaling concepts into the training process to capture a sample ' s id characteristics, we propose intermediate tensor shaping ( ish ), a lightweight method for training - time ood detection enhancement. we achieve auroc gains of + 1. 85 % for near - ood and + 0. 74 % for far - ood datasets on the openood v1. 5 imagenet - 1k benchmark. our code and models are available at https://github.com/kai422/scale.
arxiv:2310.00227
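as a rough sketch of the "scale rather than prune" idea, one can rescale the penultimate activation by a concentration statistic and then score the sample with the usual energy score; the exact scaling rule used by scale may differ, and w, b and the percentile below are illustrative assumptions.

```python
import numpy as np

def scaled_energy_score(z, W, b, pct=90):
    # z: post-relu penultimate feature vector; W, b: classifier weights (assumed given)
    z = np.maximum(z, 0.0)
    thresh = np.percentile(z, pct)
    s_all = z.sum()
    s_top = z[z >= thresh].sum() + 1e-12
    r = s_all / s_top                     # >= 1, larger when activations are diffuse
    logits = W @ (z * r) + b              # scale every activation, prune nothing
    m = logits.max()
    energy = -(m + np.log(np.exp(logits - m).sum()))
    return energy                         # lower energy => more id-like
```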
we propose a scheme to entangle two mechanical nanocantilevers through indirect interactions mediated by a gas of ultra cold atoms. we envisage a system of nanocantilevers magnetically coupled to a bose - einstein condensate of atoms and focus on studying the dark states of the system. these dark states are entangled states of the two nanocantilevers, with no coupling to the atomic condensate. in the absence of dissipation, the degree of entanglement is found to oscillate with time, while if dissipation is included, the system is found to relax to a statistical mixture of dark states which remains time independent until the inevitable thermal dephasing destroys the nanocantilever coherence. this opens up the possibility of achieving long - lived entangled nanocantilever states.
arxiv:1006.4036
we investigate the phase diagram of a quantum spin - 1 chain whose hamiltonian is invariant under a global onsite $ a _ 4 $, translation and lattice inversion symmetries. we detect different gapped phases characterized by spt order and symmetry breaking using matrix product state order parameters. we observe a rich variety of phases of matter characterized by a combination of symmetry breaking and symmetry fractionalization and also the interplay between the onsite and spatial symmetries. examples of continuous phase transitions directly between topologically nontrivial spt phases are also observed.
arxiv:1604.00037
we introduce a new numerical method to approximate the solution of a finite horizon deterministic optimal control problem. we exploit two hamilton - jacobi - bellman pde, arising by considering the dynamics in forward and backward time. this allows us to compute a neighborhood of the set of optimal trajectories, in order to reduce the search space. the solutions of both pde are successively approximated by max - plus linear combinations of appropriate basis functions, using a hierarchy of finer and finer grids. we show that the sequence of approximate value functions obtained in this way does converge to the viscosity solution of the hjb equation in a neighborhood of optimal trajectories. then, under certain regularity assumptions, we show that the number of arithmetic operations needed to compute an approximate optimal solution of a $ d $ - dimensional problem, up to a precision $ \ varepsilon $, is bounded by $ o ( c ^ d ( 1 / \ varepsilon ) ) $, for some constant $ c > 1 $, whereas ordinary grid - based methods have a complexity in $ o ( 1 / \ varepsilon ^ { ad } ) $ for some constant $ a > 0 $.
arxiv:2304.10342
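the "max - plus linear combinations of basis functions" ingredient can be illustrated in isolation: the standard max - plus projection below computes coefficients so that the combination under - approximates the sampled value function. the test function, the basis and the grid are illustrative; the forward - backward coupling and the hierarchy of grids from the abstract are not shown.

```python
import numpy as np

def maxplus_fit(V, Phi):
    # V: value function at m sample points, shape (m,)
    # Phi: basis functions at the same points, shape (k, m)
    # max-plus projection: a_i = min_x [ V(x) - phi_i(x) ], which guarantees
    # max_i (a_i + phi_i(x)) <= V(x) at every sample point
    return (V[None, :] - Phi).min(axis=1)

def maxplus_eval(a, phi_at_x):
    # evaluate the max-plus-linear combination max_i (a_i + phi_i(x))
    return (a + phi_at_x).max()

x = np.linspace(-1.0, 1.0, 201)
V = 1.0 - x ** 2                                     # concave test value function
centers = np.linspace(-1.0, 1.0, 11)
Phi = -2.0 * (x[None, :] - centers[:, None]) ** 2    # one concave basis per center
a = maxplus_fit(V, Phi)
V_approx = np.array([maxplus_eval(a, Phi[:, j]) for j in range(x.size)])
assert np.all(V_approx <= V + 1e-9)                  # under-approximation property
```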
stereoscopic video technologies have been introduced to the consumer market in the past few years. a key factor in designing a 3d system is to understand how different visual cues and distortions affect the perceptual quality of stereoscopic video. the ultimate way to assess 3d video quality is through subjective tests. however, subjective evaluation is time consuming, expensive, and in some cases not possible. the other solution is developing objective quality metrics, which attempt to model the human visual system ( hvs ) in order to assess perceptual quality. although several 2d quality metrics have been proposed for still images and videos, in the case of 3d, efforts are only at the initial stages. in this paper, we propose a new full - reference quality metric for 3d content. our method mimics the hvs by fusing information from both the left and right views to construct the cyclopean view, as well as taking into account the sensitivity of the hvs to contrast and the disparity of the views. in addition, a temporal pooling strategy is utilized to address the effect of temporal variations of the quality in the video. performance evaluations showed that our 3d quality metric quantifies quality degradation caused by several representative types of distortions very accurately, with a pearson correlation coefficient of 90. 8 %, a competitive performance compared to state - of - the - art 3d quality metrics.
arxiv:1803.04832
hybridoma technology is a method for producing large quantities of monoclonal antibodies by fusing antibody producing b cells with myeloma cells ( cancerous b cells ). this creates hybrid cells, hybridomas, that produce the antibody from their parent b cell whilst maintaining the properties of the parental myeloma cell line of being immortal ( endlessly reproducing ) and having desirable properties for cell culture. the b cells to be used are generally gathered from animals that have been immunized with an antigen against which an antibody targeting it is desired. after forming hybridomas, any non - hybrid cells are killed before screening and monoclonalization to create hybridoma lines that are derived from one parental cell and thus produce the same antibody against the desired target. the production of monoclonal antibodies was invented by cesar milstein and georges j. f. kohler in 1975. they shared the 1984 nobel prize in physiology or medicine with niels kaj jerne, who made other contributions to immunology. the term hybridoma was coined by leonard herzenberg during his sabbatical in milstein ' s laboratory in 1976 – 1977. = = method = = laboratory animals ( mammals, e. g. mice ) are first exposed to the antigen against which an antibody is to be generated. usually this is done by a series of injections of the antigen in question, over the course of several weeks. these injections are typically followed by the use of in vivo electroporation, which significantly enhances the immune response. once splenocytes are isolated from the mammal ' s spleen, the b cells are fused with immortalised myeloma cells. the fusion of the b cells with myeloma cells can be done using electrofusion. electrofusion causes the b cells and myeloma cells to align and fuse with the application of an electric field. alternatively, the b - cells and myelomas can be made to fuse by chemical protocols, most often using polyethylene glycol. the myeloma cells are selected beforehand to ensure they are not secreting antibody themselves and that they lack the hypoxanthine - guanine phosphoribosyltransferase ( hgprt ) gene, making them sensitive ( or vulnerable ) to the hat medium ( see below ). fused cells are incubated in hat medium ( hypoxanthine - aminopterin - thymidine medium ) for roughly 10 to 14 days. aminopterin blocks the pathway that allows
https://en.wikipedia.org/wiki/Hybridoma_technology
under endoscopic assumptions about $ l $ - packets of unitary groups, we prove the local gan - gross - prasad conjecture for tempered representations of unitary groups over $ p $ - adic fields. roughly, this conjecture says that branching laws for $ u ( n - 1 ) \ subset u ( n ) $ can be computed using epsilon factors.
arxiv:1212.0951
high in content ingaas quantum wells ( in $ \ geq $ 75 % ) are potentially useful for topological quantum computing and spintronics applications. in high mobility ingaas quantum wells, alloy disorder scattering is a limiting factor. in this report, we demonstrate that by growing the ingaas quantum wells as a digital alloy, or a short period superlattice, we can reduce the alloy disorder scattering within the quantum well and increase the peak 2 k electron mobility to 545, 000 cm ^ 2 / v s, which is the highest reported mobility for high in content ingaas quantum wells to the best of the authors ' knowledge. our results demonstrate that the digital alloy approach can be used to increase the mobility of quantum wells in random alloy ternary materials.
arxiv:2403.17166
multichroic polarization sensitive detectors enable increased sensitivity and spectral coverage for observations of the cosmic microwave background ( cmb ). an array optimized for dual frequency detectors can provide a 1. 7 times gain in sensitivity compared to a single frequency array. we present the design and measurements of horn coupled multichroic polarimeters encompassing the 90 and 150 ghz frequency bands and discuss our plans to field an array of these detectors as part of the actpol project.
arxiv:1401.8029
for self - supervised speaker verification, the quality of the pseudo labels determines the upper bound of system performance, owing to the large number of unreliable labels. in this work, we propose dynamic loss - gate and label correction ( dlg - lc ) to alleviate the performance degradation caused by unreliable estimated labels. in dlg, we adopt a gaussian mixture model ( gmm ) to dynamically model the loss distribution and use the estimated gmm to distinguish the reliable and unreliable labels automatically. besides, to better utilize the unreliable data instead of dropping them directly, we correct the unreliable labels with model predictions. moreover, we apply the negative - pairs - free dino framework in our experiments for further improvement. compared to the best - known speaker verification system with self - supervised learning, our proposed dlg - lc converges faster and achieves 11. 45 %, 18. 35 % and 15. 16 % relative improvements on the vox - o, vox - e and vox - h trials of the voxceleb1 evaluation dataset.
arxiv:2208.01928
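the dynamic loss gate can be sketched with an off - the - shelf gaussian mixture fit over per - sample losses; the two - component split, the hard assignment and the simple "replace with the model prediction" correction below are simplifications of the dlg - lc procedure, and are only meant as an illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def dynamic_loss_gate(losses, pseudo_labels, predictions):
    # fit a 2-component GMM to the per-sample loss values; the component with
    # the smaller mean is treated as the "reliable" (clean-label) component
    losses = np.asarray(losses, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    clean = int(np.argmin(gmm.means_.ravel()))
    reliable = gmm.predict(losses) == clean
    # label correction: keep the pseudo label where reliable, otherwise fall back
    # to the current model prediction instead of discarding the sample
    corrected = np.where(reliable, pseudo_labels, predictions)
    return reliable, corrected
```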