text | source
---|---
Categorical data are common in educational and social science research; however, methods for their analysis are generally not covered in introductory statistics courses. This chapter overviews fundamental concepts and methods in categorical data analysis. It describes and illustrates the analysis of contingency tables under different sampling processes and distributions, the estimation of probabilities, hypothesis testing, measures of association, and tests of no association for nominal variables, as well as the test of linear association for ordinal variables. Three data sets illustrate the methods: fatal police shootings in the United States, clinical trials of the Moderna vaccine, and responses to General Social Survey questions.
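As a concrete illustration of one of the methods the chapter covers, a test of no association between two nominal variables on a contingency table can be sketched with Pearson's chi-square statistic. The table below is invented for illustration and is not one of the chapter's data sets:

```python
# Pearson's chi-square statistic for a test of no association (independence)
# on a 2x2 contingency table. The counts are hypothetical, for illustration only.
def chi_square_statistic(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

table = [[30, 70],   # e.g. group A: outcome yes / no
         [45, 55]]   # e.g. group B: outcome yes / no
stat = chi_square_statistic(table)  # approximately 4.8 for this table
```

With 1 degree of freedom, the 5% critical value of the chi-square distribution is about 3.84, so this hypothetical table would lead to rejecting the hypothesis of no association at the 5% level.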
|
arxiv:2409.02942
|
Quantum correlations: entanglement and quantumness of correlations are the main resources for quantum information theory. This chapter presents scenarios in which quantumness of correlations plays an interesting role in the entanglement distillation protocol. By means of the Koashi-Winter relation, it is shown that quantumness of correlations is related to the irreversibility of the entanglement distillation protocol. The activation protocol is introduced, and it is proved that quantumness of correlations can create distillable entanglement between the system and the measurement apparatus during a local measurement process.
|
arxiv:1708.00740
|
Popular and news media often portray teenagers with sensationalism, as both a risk to society and at risk from society. As AI begins to absorb some of the epistemic functions of traditional media, we study how teenagers in two countries speaking two languages: 1) are depicted by AI, and 2) how they would prefer to be depicted. Specifically, we study the biases about teenagers learned by static word embeddings (SWEs) and generative language models (GLMs), comparing these with the perspectives of adolescents living in the U.S. and Nepal. We find English-language SWEs associate teenagers with societal problems, and more than 50% of the 1,000 words most associated with teenagers in the pretrained GloVe SWE reflect such problems. Given prompts about teenagers, 30% of outputs from GPT2-XL and 29% from Llama-2-7B GLMs discuss societal problems, most commonly violence, but also drug use, mental illness, and sexual taboo. Nepali models, while not free of such associations, are less dominated by social problems. Data from workshops with N=13 U.S. adolescents and N=18 Nepalese adolescents show that AI presentations are disconnected from teenage life, which revolves around activities like school and friendship. Participant ratings of how well 20 trait words describe teens are decorrelated from SWE associations, with Pearson's r = .02 (n.s.) in English fastText and r = .06 (n.s.) in GloVe, and r = .06 (n.s.) in Nepali fastText and r = -.23 (n.s.) in GloVe. U.S. participants suggested AI could fairly present teens by highlighting diversity, while Nepalese participants centered positivity. Participants were optimistic that, if it learned from adolescents rather than media sources, AI could help mitigate stereotypes. Our work offers an understanding of the ways SWEs and GLMs misrepresent a developmentally vulnerable group and provides a template for less sensationalized characterization.
|
arxiv:2408.01961
|
In this paper we study the nonabelian, gauge-invariant and asymptotically free quantum gauge theory with a mass parameter introduced in hep-th/0605050. We develop the Feynman diagram technique, calculate the mass and coupling constant renormalizations and the effective action at the one-loop order. Using the BRST technique we also prove that the theory is renormalizable within the dimensional regularization framework.
|
arxiv:hep-th/0605051
|
We report the successful manufacture and characterization of a microwave resonant cylindrical cavity made of bulk MgB2 superconductor (Tc = 38.5 K), produced by the reactive liquid Mg infiltration technique. The quality factor of the cavity for the TE011 mode, resonating at 9.79 GHz, has been measured as a function of temperature. At T = 4.2 K, the unloaded quality factor is 2.2x10^5; it remains of the order of 10^5 up to T ~ 30 K. We discuss the potential performance improvements of microwave cavities built from bulk MgB2 materials produced by reactive liquid Mg infiltration.
|
arxiv:cond-mat/0612159
|
having to collaborate with less able and motivated students. The CMP program does not take into account the needs of students with minor learning disabilities or other disabilities who might be integrated into general education classrooms but still need extra help and associated or modified learning materials. The publishers and creators of CMP have stated that reassuring results from a variety of research projects blunted concerns about basic skill mastery, missing knowledge, and student misconceptions resulting from use of CMP and other reform curricula. However, many teachers and parents remain wary.

== References ==

== External links ==
Connected Mathematics Project: http://connectedmath.msu.edu/
Pearson: http://www.connectedmathematics3.com
Common Core State Standards: http://www.corestandards.org/math
|
https://en.wikipedia.org/wiki/Connected_Mathematics
|
Population III galaxies are predicted to exist at high redshifts and may be rendered sufficiently bright for detection with current telescopes when gravitationally lensed by a foreground galaxy cluster. Population III galaxies that exhibit strong Lyman-alpha emission should furthermore be identifiable from broadband photometry because of their unusual colors. Here, we report on a search for such objects at z > 6 in the imaging data from the Cluster Lensing And Supernova survey with Hubble (CLASH), covering 25 galaxy clusters in 16 filters. Our selection algorithm returns five singly-imaged candidates with Lyman-alpha-like color signatures, for which ground-based spectroscopy with current 8-10 m class telescopes should be able to test the predicted strength of the Lyman-alpha line. None of these five objects has been included in previous CLASH compilations of high-redshift galaxy candidates. However, when large grids of spectral synthesis models are applied to the study of these objects, we find that only two of these candidates are significantly better fitted by Population III models than by more mundane, low-metallicity stellar populations.
|
arxiv:1411.5691
|
A search for the decay of the Standard Model Higgs boson into a $b\bar{b}$ pair when produced in association with a $W$ or $Z$ boson is performed with the ATLAS detector. The analysed data, corresponding to an integrated luminosity of 36.1 fb$^{-1}$, were collected in proton-proton collisions in Run 2 of the Large Hadron Collider at a centre-of-mass energy of 13 TeV. Final states containing zero, one and two charged leptons (electrons or muons) are considered, targeting the decays $Z \to \nu\nu$, $W \to \ell\nu$ and $Z \to \ell\ell$. For a Higgs boson mass of 125 GeV, an excess of events over the expected background from other Standard Model processes is found with an observed significance of 3.5 standard deviations, compared to an expectation of 3.0 standard deviations. This excess provides evidence for the Higgs boson decay into $b$-quarks and for its production in association with a vector boson. The combination of this result with that of the Run 1 analysis yields a ratio of the measured signal events to the Standard Model expectation equal to $0.90 \pm 0.18\,\mathrm{(stat.)}\,^{+0.21}_{-0.19}\,\mathrm{(syst.)}$. Assuming the Standard Model production cross-section, the results are consistent with the value of the Yukawa coupling to $b$-quarks in the Standard Model.
|
arxiv:1708.03299
|
Current evolutionary biology models usually assume that a phenotype undergoes gradual change. This is in stark contrast to biological intuition, which indicates that change can also be punctuated: the phenotype can jump. Such a jump could especially occur at speciation, i.e. dramatic change occurs that drives the species apart. Here we derive a central limit theorem for punctuated equilibrium. We show that, if adaptation is fast, for weak convergence to normality to hold, the variability in the occurrence of change has to disappear with time.
|
arxiv:1602.05189
|
We analyse the dependence of the peak position of the thrust distribution on the cutoff value in the Nagy-Soper dipole shower. We compare the outcome of the parton shower simulations to an analytic computation of this dependence, derived within soft-collinear effective theory. We show that the results of the parton shower simulations and the analytic computation are in good agreement.
|
arxiv:2004.01657
|
We discuss the potential of suspensions of gold nanoparticles with variable refractive index for the possible physical realization of an in-relief virtual dynamic display of plane images. A reasoned approach for a vision system to display volumetric moving images in real time is proposed, based on well-known properties of optical media, namely the anomalous dispersion of light in certain transparent media and the virtual image formed by a refracting transparent surface. The system relies on creating mechanisms to modify the refractive index of in-relief virtual dynamic display (IVDD) bulbs, each of which would ideally contain a suspension of gold nanoparticles and which might be ordered in an array filling up a whole screen.
|
arxiv:physics/0106050
|
Given an integer $k \ge 3$ and a group $G$ of odd order, if there exists a $2$-$(v,k,1)$-design and if $v$ is sufficiently large, then there is such a design whose automorphism group has a subgroup isomorphic to $G$. A weaker result is proved when $|G|$ is even and $(k,|G|)=1$.
|
arxiv:1909.10126
|
Quantum phase transitions are one of the main interests in the field of condensed matter physics, while the geometric phase is a fundamental concept that has attracted considerable interest in the field of quantum mechanics. However, no relevant relation between the two was recognized before recent work. In this paper, we present a review of the connection recently established between these two interesting fields: investigations of the geometric phase of many-body systems have revealed the so-called "criticality of geometric phase", in which the geometric phase associated with the many-body ground state exhibits universality, or scaling behavior, in the vicinity of the critical point. In addition, we address recent advances on the connection between some other geometric quantities and quantum phase transitions. The close relation recently recognized between quantum phase transitions and some geometric quantities may open attractive avenues and fruitful dialog between different scientific communities.
|
arxiv:0803.1914
|
Pre-trained models (PTMs) are widely adopted across various downstream tasks in the machine learning supply chain. Adopting untrustworthy PTMs introduces significant security risks, where adversaries can poison the model supply chain by embedding hidden malicious behaviors (backdoors) into PTMs. However, existing backdoor attacks on PTMs are only partially task-agnostic, and the embedded backdoors are easily erased during the fine-tuning process. This makes it challenging for the backdoors to persist and propagate through the supply chain. In this paper, we propose a novel and more severe backdoor attack, TransTroj, which enables the backdoors embedded in PTMs to efficiently transfer in the model supply chain. In particular, we first formalize this attack as an indistinguishability problem between poisoned and clean samples in the embedding space. We decompose embedding indistinguishability into pre- and post-indistinguishability, representing the similarity of the poisoned and reference embeddings before and after the attack. Then, we propose a two-stage optimization that separately optimizes triggers and victim PTMs to achieve embedding indistinguishability. We evaluate TransTroj on four PTMs and six downstream tasks. Experimental results show that our method significantly outperforms SOTA task-agnostic backdoor attacks, achieving nearly 100% attack success rate on most downstream tasks, and demonstrates robustness under various system settings. Our findings underscore the urgent need to secure the model supply chain against such transferable backdoor attacks. The code is available at https://github.com/haowang-cqu/TransTroj.
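The embedding-indistinguishability idea can be pictured as driving the similarity between a poisoned-sample embedding and a reference embedding toward its maximum; a toy sketch using cosine similarity (the function, variable names, and vectors are illustrative assumptions, not TransTroj's actual code):

```python
# Cosine similarity between a poisoned-sample embedding and a reference
# embedding: an optimization like the one described would push this toward 1.
def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

poisoned = [0.6, 0.8, 0.0]    # hypothetical embedding after trigger optimization
reference = [0.6, 0.8, 0.0]   # hypothetical target-class reference embedding
aligned = cosine_similarity(poisoned, reference)                   # identical vectors
orthogonal = cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])   # unrelated vectors
```

A similarity near 1 for poisoned samples, measured both before and after fine-tuning, corresponds to the pre- and post-indistinguishability the abstract decomposes the attack into.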
|
arxiv:2401.15883
|
Estimation of the Value-at-Risk (VaR) of a large portfolio of assets is an important task for financial institutions. As the joint log-returns of asset prices can often be projected onto a latent space of much smaller dimension, the use of a variational autoencoder (VAE) for estimating the VaR is a natural suggestion. To ensure the bottleneck structure of autoencoders when learning sequential data, we use a temporal VAE (TempVAE) that avoids an autoregressive structure for the observation variables. However, the low signal-to-noise ratio of financial data in combination with the auto-pruning property of a VAE typically makes the use of a VAE prone to posterior collapse. Therefore, we propose to use annealing of the regularization to mitigate this effect. As a result, the auto-pruning of the TempVAE works properly, which also yields excellent VaR estimation results, beating classical GARCH-type and historical simulation approaches when applied to real data.
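Annealing of the regularization in VAE training generically means ramping the weight on the KL term from 0 to 1 so the model does not prune its latent channels before the reconstruction has been learned. A minimal sketch of such a schedule (the linear form and the parameter names are illustrative assumptions, not necessarily the paper's exact scheme):

```python
# Linear KL-annealing schedule: the weight beta on the KL regularizer
# ramps from 0 to 1 over `warmup_steps` training steps, then stays at 1.
def kl_weight(step, warmup_steps):
    return min(1.0, step / warmup_steps)

# The per-step loss would then be: reconstruction + kl_weight(step, W) * kl_divergence
w_early = kl_weight(0, 1000)     # start of training: no regularization
w_mid = kl_weight(500, 1000)     # halfway through warmup
w_late = kl_weight(2000, 1000)   # after warmup: full regularization
```

Starting with a small KL weight lets the decoder use all latent dimensions early on, counteracting the auto-pruning that leads to posterior collapse.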
|
arxiv:2112.01896
|
Four-dimensional gravity with a U(1) gauge field, coupled to various fields in asymptotically anti-de Sitter spacetime, provides a rich arena for the holographic study of the strongly coupled (2+1)-dimensional dynamics of finite density matter charged under a global U(1). As a first step in furthering the study of the properties of fractionalized and partially fractionalized degrees of freedom in the strongly coupled theory, we construct electron star solutions at zero temperature in the presence of a background magnetic field. We work in Einstein-Maxwell-dilaton theory. In all cases we construct, the magnetic source is cloaked by an event horizon. A key ingredient of our solutions is our observation that, starting with the standard Landau level structure for the density of states, the electron star limits reduce the charge density and energy density to that of the free fermion result. Using this result we construct three types of solution: one has a star in the infrared with an electrically neutral horizon, another has a star that begins at an electrically charged event horizon, and another has the star begin a finite distance from an electrically charged horizon.
|
arxiv:1207.1677
|
This article investigates the properties of order-divisor graphs associated with finite groups. An order-divisor graph of a finite group is an undirected graph in which the set of vertices includes all elements of the group, and two distinct vertices with different orders are adjacent if the order of one vertex divides the order of the other. We prove some beautiful results on order-divisor graphs of finite groups. The primary focus is on examining the girth, degree of vertices, and size of the order-divisor graph. In particular, we provide a comprehensive description of these parameters for the order-divisor graphs of finite cyclic groups and dihedral groups.
|
arxiv:2408.14104
|
We apply a Fourier spectral numerical method to 3D incompressible MHD turbulence with a magnetic Prandtl number $Pr \geq 1$. We examine the processes by which an initially weak, large-scale seed magnetic field and an initially weak, small-scale, impulse-like seed magnetic field are amplified. We find that in both cases the magnetic energy spectrum grows at all scales. The growth rates at different amplification stages are analyzed. For a large-scale seed magnetic field, the magnetic energy density grows as $\sim t^2$ for the first few turbulence eddy turnover times, followed by a dynamic growth stage, where nonlinear interactions between different scales of the turbulence contribute to an exponential growth rate that is largely determined by the turbulence eddy turnover time. For a seed magnetic field that is initially set up at a small scale in the turbulence, during the kinematic development stage, the growth rate of magnetic energy is $\propto 1/\tau_{max}$, where $\tau_{max}$ is the eddy turnover time of the smallest eddies of the turbulence. The kinematic growth stage is followed by a dynamic growth stage, where nonlinearity plays an important role. During this dynamic growth stage, the growth rate of the total magnetic energy is determined by both the magnetic energy amplification within the turbulence inertial range and that within the turbulence dissipation range.
|
arxiv:astro-ph/0012447
|
Concepts and formalism from acoustics are often used to exemplify quantum mechanics. Conversely, quantum mechanics could be used to achieve a new perspective on acoustics, as shown by Gabor's studies. Here, we focus in particular on the study of the human voice, considered as a probe to investigate the world of sounds. We present a theoretical framework based on observables of vocal production, and on some measurement apparatuses that can be used both for analysis and synthesis. In analogy with the description of the spin states of a particle, the quantum-mechanical formalism is used to describe the relations between the fundamental states associated with phonetic labels such as phonation, turbulence, and supraglottal myoelastic vibrations. The intermingling of these states, and their temporal evolution, can still be interpreted in the Fourier/Gabor plane, and effective extractors can be implemented. The bases for a quantum vocal theory of sound, with implications for sound analysis and design, are presented.
|
arxiv:2003.09632
|
A proper vertex coloring of a graph is equitable if the sizes of the color classes differ by at most 1. The equitable chromatic number of a graph $G$, denoted by $\chi_=(G)$, is the minimum $k$ such that $G$ is equitably $k$-colorable. The equitable chromatic threshold of a graph $G$, denoted by $\chi_=^*(G)$, is the minimum $t$ such that $G$ is equitably $k$-colorable for every $k \ge t$. In this paper, we give the exact values of $\chi_=(K_{m_1,\ldots,m_r} \times K_n)$ and $\chi_=^*(K_{m_1,\ldots,m_r} \times K_n)$ for $\sum_{i=1}^r m_i \leq n$.
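For concreteness, the two defining conditions (proper and equitable) can be checked mechanically; this small sketch uses an illustrative 4-cycle, not one of the graph products studied in the paper:

```python
# Check that a vertex coloring is proper (no edge joins same-colored vertices)
# and equitable (color class sizes differ by at most 1).
from collections import Counter

def is_equitable_coloring(edges, coloring):
    if any(coloring[u] == coloring[v] for u, v in edges):
        return False  # not a proper coloring
    sizes = Counter(coloring.values()).values()
    return max(sizes) - min(sizes) <= 1

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]                       # the 4-cycle C4
good = is_equitable_coloring(edges, {0: 0, 1: 1, 2: 0, 3: 1})  # classes of size 2 and 2
bad = is_equitable_coloring(edges, {0: 0, 1: 1, 2: 1, 3: 1})   # edge (1, 2) is monochromatic
```

The 2-coloring of C4 is both proper and balanced, so C4 is equitably 2-colorable; the second coloring fails the properness check.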
|
arxiv:1210.0188
|
Long linear polymers in dilute solutions are known to undergo a collapse transition from a random coil (expanded) to a compact ball (folded up) when the temperature is lowered or the solvent quality deteriorates. A natural model for this phenomenon is a (1+1)-dimensional self-interacting and partially directed self-avoiding walk. In this paper, we develop a new method to study the partition function of this model, from which we derive a variational formula for the free energy. This variational formula allows us to prove the existence of the collapse transition and to identify the critical temperature in a simple way. We also prove that the order of the collapse transition is 3/2.
|
arxiv:1211.0925
|
\lambda \to \infty; \] where $T_1, T_2, T_3 : X \to X$ are commuting invertible and measure-preserving transformations of a $\sigma$-finite measure space $(X,\nu)$, for any function $f \in L^p(X)$ with $p > \frac{11-4c}{11-7c}$. Finally, we will study the equidistribution problem corresponding to the spheres $\mathbf{S}_c^3(\lambda)$.
|
arxiv:2106.12015
|
Motivated by the weak gravity conjecture, we uncover an intricate interplay between black holes, BPS particle counting, and Calabi-Yau geometry in five dimensions. In particular, we point out that extremal BPS black holes exist only in certain directions in the charge lattice, and we argue that these directions fill out a cone that is dual to the cone of effective divisors of the Calabi-Yau threefold. The tower and sublattice versions of the weak gravity conjecture require an infinite tower of BPS particles in these directions, and therefore imply purely geometric conjectures requiring the existence of infinite towers of holomorphic curves in every direction within the dual of the cone of effective divisors. We verify these geometric conjectures in a number of examples by computing Gopakumar-Vafa invariants.
|
arxiv:2108.08309
|
Primordial black holes (PBHs) might have formed in the early universe as a consequence of the collapse of density fluctuations with an amplitude above a critical value $\delta_c$: the formation threshold. Although for a radiation-dominated universe $\delta_c$ remains constant, if the universe experiences some dust-like phases (e.g. phase transitions), $\delta_c$ might decrease, improving the chances of PBH formation. We studied the evolution of $\delta_c$ during the QCD phase transition epoch within three different models: the bag model (BM), the lattice fit model (LFM), and the crossover model (CM). We found that the reduction of the background value of $\delta_c$ can be as high as $77\%$ (BM), which might imply a $\sim 10^{-10}$ probability of PBHs forming at the QCD epoch.
|
arxiv:1609.01205
|
Experimental data on electromagnetic and weak form factors of the nucleon are analyzed in a two-component model with a quark-like intrinsic structure surrounded by a meson cloud. The contribution from strange quarks is discussed and compared with recent data from the G0 collaboration.
|
arxiv:nucl-th/0511004
|
Defects that appear in heterostructure junctions involving topological insulators are sources of gapless modes governing the low-energy properties of the systems, as recently elucidated by Teo and Kane [Physical Review B 82, 115120 (2010)]. A standard approach for the calculation of topological invariants associated with defects is to deal with the spatial inhomogeneity raised by defects within a semiclassical approximation. In this paper, we propose a full quantum formulation for the topological invariants characterizing line defects in three-dimensional insulators with no symmetry by using the Green's function method. On the basis of the full quantum treatment, we demonstrate the existence of a nontrivial topological invariant in topological insulator-ferromagnet tri-junction systems, for which a semiclassical approximation fails to describe the topological phase. Our approach also enables us to study the effects of electron-electron interactions and impurity scattering on topological insulators with spatial inhomogeneity, which gives rise to axion electrodynamics responses.
|
arxiv:1111.1685
|
Multi-controlled gates are fundamental components in the design of quantum algorithms, where efficient decompositions of these operators can enhance algorithm performance. The best asymptotic decomposition of an n-controlled X gate with one borrowed ancilla into single-qubit and CNOT gates produces circuits with degree-3 polylogarithmic depth and employs a divide-and-conquer strategy. In this paper, we reduce the number of recursive calls in the divide-and-conquer algorithm and decrease the depth of the n-controlled X gate decomposition to degree-2.799 polylogarithmic depth. With this optimized decomposition, we also reduce the depth of n-controlled SU(2) gates and approximate n-controlled U(2) gates. The decompositions described in this work achieve the lowest asymptotic depth reported in the literature. We also perform an optimization in the base of the recursive approach. Starting at 52 control qubits, the proposed n-controlled X gate with one borrowed ancilla has the shortest circuit depth in the literature. One can reproduce all the results with the freely available open-source code provided in a public repository.
|
arxiv:2407.05162
|
In recent years, with the rapid advancements in large language models (LLMs), achieving excellent empathetic response capability has become a crucial prerequisite. Consequently, managing and understanding large-scale video datasets has gained increasing importance. However, empathetic models are typically trained without any data quality selection, leading to inefficient data usage and wasted computational resources. Additionally, using raw data can result in low performance in empathetic dialogues. In this work, we present Efficient-Empathy, a sensibility- and rationality-score-based data selection algorithm that automatically selects sensibility and rationality data while discarding low-quality data. With only the sensibility data (59% of the full dataset), our trained sensibility model efficiently achieves state-of-the-art (SOTA) performance. Furthermore, with multiple data selection hyperparameters, the sensibility model demonstrates SOTA performance, showcasing the robustness of our method. By integrating sensibility and rationality data with an MoE structure, we achieve even higher performance, demonstrating the effectiveness of our Efficient-Empathy algorithm.
|
arxiv:2407.01937
|
A plus-contact representation of a planar graph $G$ is called $c$-balanced if for every plus shape $+_v$, the number of other plus shapes incident to each arm of $+_v$ is at most $c\Delta + O(1)$, where $\Delta$ is the maximum degree of $G$. Although small values of $c$ have been achieved for a few subclasses of planar graphs (e.g., $2$- and $3$-trees), it is unknown whether $c$-balanced representations with $c < 1$ exist for arbitrary planar graphs. In this paper we compute $(1/2)$-balanced plus-contact representations for all planar graphs that admit a rectangular dual. Our result implies that any graph with a rectangular dual has a 1-bend box-orthogonal drawing such that for each vertex $v$, the box representing $v$ is a square of side length $\frac{\deg(v)}{2} + O(1)$.
|
arxiv:1708.09560
|
The extra-dimensional origin of dark matter is a fascinating and nowadays often discussed possibility. Here, we present the gamma-ray signatures that are expected from the self-annihilation of Kaluza-Klein dark matter particles. For comparison, we contrast this with the case of supersymmetry, where the neutralino annihilation spectra take a very different form. In both cases we find pronounced spectral signatures that could in principle be used to distinguish between these two types of dark matter candidates already with today's detector resolutions.
|
arxiv:astro-ph/0609510
|
Keypoint detection serves as the basis for many computer vision and robotics applications. Despite the fact that colored point clouds can be readily obtained, most existing keypoint detectors extract only geometry-salient keypoints, which can impede the overall performance of systems that intend to (or have the potential to) leverage color information. To promote advances in such systems, we propose an efficient multi-modal keypoint detector that can extract both geometry-salient and color-salient keypoints in colored point clouds. The proposed Centroid Distance (CED) keypoint detector comprises an intuitive and effective saliency measure, the centroid distance, that can be used in both 3D space and color space, and a multi-modal non-maximum suppression algorithm that can select keypoints with high saliency in two or more modalities. The proposed saliency measure leverages directly the distribution of points in a local neighborhood and does not require normal estimation or eigenvalue decomposition. We evaluate the proposed method in terms of repeatability and computational efficiency (i.e. running time) against state-of-the-art keypoint detectors on both synthetic and real-world datasets. Results demonstrate that our proposed CED keypoint detector requires minimal computational time while attaining high repeatability. To showcase one of the potential applications of the proposed method, we further investigate the task of colored point cloud registration. Results suggest that our proposed CED detector outperforms state-of-the-art handcrafted and learning-based keypoint detectors in the evaluated scenes. The C++ implementation of the proposed method is made publicly available at https://github.com/ucr-robotics/ced_detector.
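The centroid-distance saliency measure, as described, needs only the points of a local neighborhood and no normal estimation or eigenvalue decomposition; a minimal sketch (neighborhood search, thresholds, and the multi-modal suppression step are omitted, and the coordinates are invented):

```python
# Centroid-distance saliency: the distance from a point to the centroid
# of its local neighborhood. The same measure applies to 3D coordinates
# or to color channels, since it only averages coordinate tuples.
def centroid_distance(point, neighbors):
    dim = len(point)
    centroid = [sum(n[i] for n in neighbors) / len(neighbors) for i in range(dim)]
    return sum((point[i] - centroid[i]) ** 2 for i in range(dim)) ** 0.5

p = (0.0, 0.0, 0.0)
nbrs = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
saliency = centroid_distance(p, nbrs)  # neighborhood centroid is (0, 2/3, 0)
```

A point whose neighborhood centroid is far from the point itself (e.g. on an edge or a color boundary) gets a high saliency, which is the intuition the detector builds on.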
|
arxiv:2210.01298
|
We define a relaxed version $H_f^{\textrm{fine}}$ of the distortion number $H_f$ that is used to define quasiconformal mappings. Then we show that for a BV function $f \in BV(\mathbb{R}^n;\mathbb{R}^n)$, for $|Df|$-a.e. $x \in \mathbb{R}^n$ it holds that $H_{f^*}^{\textrm{fine}}(x) < \infty$ if and only if $\tfrac{dDf}{d|Df|}(x)$ has full rank.
|
arxiv:2406.05824
|
Arthur Stanley Eddington was one of the leading astronomers and theorists of his generation and a prominent proponent of the general theory of relativity. Yet when his former assistant Georges Lemaître sent him a paper in 1927 suggesting that the well-known redshifts of the spiral nebulae might be a manifestation of a cosmic expansion predicted by relativity, Eddington paid no attention for three years. In this paper, we consider the reasons for this oversight. We find that conventional explanations (such as Lemaître's status as a relatively junior researcher and his decision to publish in a lesser-known Belgian journal) do not convince. We propose an alternative explanation that has not been considered in the literature, namely that the observational data cited by Lemaître in support of his model were of a preliminary nature and would not have been sufficiently convincing for Eddington and others to consider non-static cosmologies.
|
arxiv:1907.12297
|
Quantum gases of doubly polar molecules represent appealing frameworks for a variety of cross-disciplinary applications, encompassing quantum simulation and computation, controlled quantum chemistry, and precision measurements. Through a joint experimental and theoretical study, here we explore a novel class of ultracold paramagnetic polar molecules combining lithium alkali and chromium transition metal elements. Focusing on the bosonic isotopologue $^{6}$Li$^{53}$Cr, leveraging the Fermi statistics of the parent atomic mixture and suitable recently discovered Feshbach resonances, we produce up to $50\times10^3$ ultracold LiCr molecules at peak phase-space densities exceeding 0.1, prepared within the least-bound rotationless level of the LiCr electronic sextet ground state $X^6\Sigma^+$. We thoroughly characterize the molecular gas, demonstrating the paramagnetic nature of LiCr dimers and the precise control of their quantum state. We investigate their stability against inelastic processes and identify a parameter region where pure LiCr samples exhibit lifetimes exceeding 0.2 s. In parallel, we employ state-of-the-art quantum-chemical calculations to predict the properties of LiCr ground and excited electronic states. We identify efficient paths to coherently transfer weakly bound LiCr dimers to their absolute ground state, to deliver ultracold gases of doubly polar molecules with significant electric (3.3 D) and magnetic ($5\,\mu_\text{B}$) dipole moments.
|
arxiv:2402.08337
|
We present simulation results for impulsively generated linear and non-linear Alfvén waves in weakly curved coronal magnetic flux-tubes (coronal funnels) and discuss their implications for coronal heating and solar wind acceleration. We solve numerically the time-dependent magnetohydrodynamic (MHD) equations to obtain the temporal signatures of small-amplitude (linear) and large-amplitude (non-linear) Alfvén waves in the model atmosphere of an expanding open magnetic field configuration (e.g., coronal funnels), considering a realistic temperature distribution. We compute the maximum transversal velocity of both linear and non-linear Alfvén waves at different heights in the coronal funnel, and study their response in the solar corona during the time of their propagation. We infer that the pulse-driven non-linear Alfvén waves may carry sufficient wave energy flux to heat the coronal funnels and also to power the solar wind that originates in these funnels. Our study of linear Alfvén waves shows that they can contribute only to the plasma dynamics and heating of the funnel-like magnetic flux-tubes associated with the polar coronal holes.
|
arxiv:1401.2329
|
we introduce a fast and scalable method for solving quadratic programs with conditional value-at-risk (cvar) constraints. while these problems can be formulated as standard quadratic programs, the number of variables and constraints grows linearly with the number of scenarios, making general-purpose solvers impractical for large-scale problems. our method combines operator splitting with a specialized $o(m \log m)$ algorithm for projecting onto cvar constraints, where $m$ is the number of scenarios. the method alternates between solving a linear system and performing parallel projections: onto cvar constraints using our specialized algorithm and onto box constraints with a closed-form solution. numerical examples from several application domains demonstrate that our method outperforms general-purpose solvers by several orders of magnitude on problems with up to millions of scenarios. our method is implemented in an open-source package called cvqp.
|
arxiv:2504.10814
|
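A sketch of the quantity being constrained may help here. The empirical conditional value-at-risk of a set of scenario losses is simply the mean of the worst alpha-fraction of them, and evaluating it needs one sort, the same $o(m \log m)$ cost the abstract quotes for its projection step. This is only an illustration of the constraint function, not the paper's projection algorithm, and the function name is ours:

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Empirical CVaR at level alpha: the mean of the worst
    alpha-fraction of scenario losses. The sort is the O(m log m)
    step; everything after it is linear in m."""
    m = len(losses)
    k = max(1, int(np.ceil(alpha * m)))   # number of tail scenarios
    tail = np.sort(losses)[-k:]           # the k worst losses
    return tail.mean()

# 10 scenarios; at alpha = 0.2 the tail is the worst 2 losses
losses = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
print(empirical_cvar(losses, 0.2))  # → 9.5
```

A CVaR constraint then reads `empirical_cvar(losses(x), alpha) <= b` for a decision variable `x`; projecting onto that set efficiently is the harder step the paper addresses.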
an abelian threefold $A_{/\mathbb{Q}}$ of prime conductor $N$ is favorable if its 2-division field $F$ is an $\mathcal{S}_7$-extension over $\mathbb{Q}$ with ramification index 7 over $\mathbb{Q}_2$. let $A$ be favorable and let $B$ be a semistable abelian variety of dimension $3d$ and conductor $N^d$ with $B[2]$ filtered by copies of $A[2]$. we give a sufficient and computable class field theoretic criterion on $F$ to guarantee that $B$ is isogenous to $A^d$.
|
arxiv:2002.00510
|
we present the results of detailed surface photometry of ngc 3808b and ngc 6286, two spiral galaxies with possibly forming ring-like structures rotating around the major axes of the galaxies. the formation of rings in ngc 3808b and ngc 6286, accompanied by accretion of matter onto the galactic disks, results in several interesting gas-dynamical and stellar-dynamical effects in these galaxies: for instance, the peculiar rotation curve of the ngc 3808b gaseous disk; strong infrared and h-alpha emission from the galaxies; and bending and flaring stellar disks in both galaxies. our observations clearly illustrate the possibility that polar-ring galaxies may be formed as a result of matter accretion from one galaxy to another.
|
arxiv:astro-ph/9608100
|
we describe laboratory experiments to generate x-ray photoionized plasmas of relevance to accretion-powered x-ray sources such as neutron star binaries and quasars, with significant improvements over previous work. a key quantity is referenced, namely the photoionization parameter. this is normally meaningful in an astrophysical steady-state context, but is also commonly used in the literature as a figure of merit for laboratory experiments that are, of necessity, time-dependent. we demonstrate emission-weighted values of $\xi > 50$ erg cm/s using laser-plasma x-ray sources, with higher results at the centre of the plasma which are in the regime of interest for several astrophysical scenarios. comparisons of laboratory experiments with astrophysical codes are always limited, principally by the many orders of magnitude differences in time and spatial scales, but also by other plasma parameters. however, useful checks on performance can often be made for a limited range of parameters. for example, we show that our use of a kev line source, rather than the quasi-blackbody radiation fields normally employed in such experiments, has allowed the generation of the ratio of inner-shell to outer-shell photoionization expected from a blackbody source with ~kev spectral temperature. we compare calculations from our in-house plasma modelling code with those from cloudy and find moderately good agreement for the time evolution of both electron temperature and average ionisation. however, a comparison of code predictions for a k-beta argon x-ray spectrum with experimental data reveals that our cloudy simulation overestimates the intensities of more highly ionised argon species. this is not totally surprising as the cloudy model was generated for a single set of plasma conditions, while the experimental data are spatially integrated.
|
arxiv:2309.07267
|
we report the discovery of a new molecular phase of carbon dioxide at high pressure and high temperature. using x-ray diffraction, we identify this phase as the theoretically predicted high-temperature cmca phase [bonev et al., phys. rev. lett. 91, 065501 (2003)]. its relation with phase iii, on one hand, and its relative stability with respect to phase iv, on the other hand, are discussed based on spectroscopic and melting data. the existence of this strictly molecular phase challenges the interpretation of phases iv and ii as intermediate phases between the molecular and covalent-bonded forms of co2.
|
arxiv:cond-mat/0608529
|
charged particles in a magnetosphere are spontaneously attracted to a planet while increasing their kinetic energy via an inward diffusion process. a constraint on the particles' micro-scale adiabatic invariants restricts the class of motions available to the system, giving rise to a proper frame on which particle diffusion occurs. we investigate the inward diffusion process by numerical simulation of particles on the constrained phase space. the results reveal the emergence of an inhomogeneous density gradient and anisotropic heating, which is consistent with spacecraft observations, experimental observations, and the recently formulated diffusion model on the constrained phase space.
|
arxiv:1609.02373
|
policy makers, urban planners, architects, sociologists, and economists are interested in creating urban areas that are both lively and safe. but are the safety and liveliness of neighborhoods independent characteristics? or are they just two sides of the same coin? in a world where people avoid unsafe looking places, neighborhoods that look unsafe will be less lively, and will fail to harness the natural surveillance of human activity. but in a world where the preference for safe looking neighborhoods is small, the connection between the perception of safety and liveliness will be either weak or nonexistent. in this paper we explore the connection between the levels of activity and the perception of safety of neighborhoods in two major italian cities by combining mobile phone data (as a proxy for activity or liveliness) with scores of perceived safety estimated using a convolutional neural network trained on a dataset of google street view images scored using a crowdsourced visual perception survey. we find that: (i) safer looking neighborhoods are more active than what is expected from their population density, employee density, and distance to the city centre; and (ii) that the correlation between appearance of safety and activity is positive, strong, and significant, for females and people over 50, but negative for people under 30, suggesting that the behavioral impact of perception depends on the demographic of the population. finally, we use occlusion techniques to identify the urban features that contribute to the appearance of safety, finding that greenery and street facing windows contribute to a positive appearance of safety (in agreement with oscar newman's defensible space theory). these results suggest that urban appearance modulates levels of human activity and, consequently, a neighborhood's rate of natural surveillance.
|
arxiv:1608.00462
|
we introduce state-independent, non-perturbative hamiltonian quantum speed limits for population leakage and fidelity loss, for a gapped open system interacting with a reservoir. these results hold in the presence of initial correlations between the system and the reservoir, under the sole assumption that their interaction and its commutator with the reservoir hamiltonian are norm-bounded. the reservoir need not be thermal and can be time-dependent. we study the significance of energy mismatch between the system and the local degrees of freedom of the reservoir which directly interact with the system. we demonstrate that, in general, by increasing the system gap we may reduce this energy mismatch and consequently drive the system and the reservoir into resonance, which can accelerate fidelity loss, irrespective of the thermal properties or state of the reservoir. this implies that quantum error suppression strategies based on increasing the gap are not uniformly beneficial. our speed limits also yield an elementary lower bound on the relaxation time of spin systems.
|
arxiv:1505.07850
|
jedi (joint efficient dark-energy investigation) is a candidate implementation of the nasa-doe joint dark energy mission (jdem). jedi will probe dark energy using three independent methods: (1) type ia supernovae, (2) baryon acoustic oscillations, and (3) weak gravitational lensing. in an accompanying paper, an overall summary of the jedi mission is given. in this paper, we present further details of the supernova component of jedi. to derive model-independent constraints on dark energy, it is important to precisely measure the cosmic expansion history, h(z), in continuous redshift bins from z ~ 0-2 (the redshift range in which dark energy is important). sne ia at z > 1 are not readily accessible from the ground because the bulk of their light has shifted into the near-infrared where the sky background is overwhelming; hence a space mission is required to probe dark energy using sne. because of its unique near-infrared wavelength coverage (0.8-4.2 microns), jedi has the advantage of observing sne ia in the rest-frame j band for the entire redshift range of 0 < z < 2, where they are less affected by dust and appear to be nearly perfect standard candles. during the first year of jedi operations, spectra and light curves will be obtained for ~4,000 sne ia at z < 2. the resulting constraints on dark energy are discussed, with special emphasis on the improved precision afforded by the rest-frame near-infrared data.
|
arxiv:astro-ph/0606691
|
in any low energy effective supergravity theory, general formulae exist which allow one to discuss fermion masses, the scalar potential and breaking of symmetries in a model independent set-up. a particular role in this discussion is played by killing vectors and killing prepotentials. we outline these relations in general and then specify them in the context of n = 1 and n = 2 supergravities in four dimensions. useful relations of the gauged quaternionic geometry underlying hypermultiplet dynamics are discussed.
|
arxiv:hep-th/0103153
|
$(\mathsf{L}_\mathsf{T}, \mathsf{T}, \mathsf{R}_\mathsf{T}; \mathsf{R}_\mathsf{B}, \mathsf{B}, \mathsf{L}_\mathsf{B})$ such that $d(\mathsf{R}_\mathsf{B}) + d(\mathsf{B}) + d(\mathsf{L}_\mathsf{B}) - d(\mathsf{L}_\mathsf{T}) - d(\mathsf{T}) - d(\mathsf{R}_\mathsf{T}) - \vert\mathsf{L}_\mathsf{T}\vert_1 \vert\mathsf{T}\vert_0 - \vert\mathsf{T}\vert_1 \vert\mathsf{R}_\mathsf{T}\vert_0 - \vert\mathsf{R}_\mathsf{B}\vert_0 \vert\mathsf{L}_\mathsf{B}\vert_1 = 0, 1$. to be more precise, in the first case they are enumerated by littlewood-richardson coefficients and in the second case their number is expressed in terms of littlewood-richardson coefficients.
|
arxiv:1408.6131
|
we evoke situations where large fluctuations in the entropy are induced, our main example being a spacetime containing a potential black hole whose formation depends on the outcome of a quantum mechanical event. we argue that the teleological character of the event horizon implies that the consequent entropy fluctuations must be taken seriously in any interpretation of the quantal formalism. we then indicate how the entropy can be well defined despite the teleological character of the horizon, and we argue that this is possible only in the context of a spacetime or "histories" formulation of quantum gravity, as opposed to a canonical one, concluding that only a spacetime formulation has the potential to compute, from first principles and in the general case, the entropy of a black hole. from the entropy fluctuations in a related example, we also derive a condition governing the form taken by the entropy, when it is expressed as a function of the quantal density-operator.
|
arxiv:gr-qc/9902051
|
the second-order dirac equation (de) and its velocity operator for graphene electrons in an electromagnetic field are obtained using the tight-binding k.p method. with extra terms included, they describe the motion of graphene electrons more completely, through a more complete ehrenfest theorem, and reveal finer properties of graphene electrons. the eigen-energy given by the second-order de for field-free graphene indicates that the extra terms may affect the trembling motion of graphene electrons. for graphene in a magnetic field, the eigen-energy given by the second-order de suggests that graphene electrons have a new, bosonic kind of spin, distinct from the true electronic spin and the pseudo-spin of dirac particles, which may modify graphene properties such as the optical spectra.
|
arxiv:1303.7290
|
noise is the major problem while working with wireless lan. in this paper we analyze the noise by using an active receiving antenna and also propose a detection mechanism based on rf energy duration. the standard backoff mechanism of 802.11 wireless lan (wlan) increases the contention window when a transmission failure occurs in order to alleviate contention in a wlan. in addition, many proposed schemes for 802.11 wlan behave adaptively to transmission failures. transmission failures in wlans occur mostly from two causes: collision and channel noise. however, in 802.11 wlan, a station cannot know the cause of a transmission failure, so the adaptive schemes assume the ideal situation in which all transmission failures occur by only one of the two causes. for this reason, they may behave erroneously in a real world where transmission failures occur by both causes. in this paper, we propose a novel scheme to detect collision, which utilizes transmission time information and rf energy duration on the channel. by detecting collisions, a station can differentiate the causes of transmission failures and the adaptive schemes can operate correctly by using the detection information.
|
arxiv:1110.2270
|
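The detection rule sketched in the abstract, comparing the station's own transmission time with the duration of RF energy sensed on the channel, can be illustrated as follows. This is our own simplification: the function name, tolerance, and labels are assumptions, not the paper's exact mechanism:

```python
def classify_failure(tx_duration, rf_energy_duration, tol=0.05):
    """If RF energy on the channel outlasts our own frame by more than
    the tolerance, another station's frame overlapped ours, so the
    failure is attributed to collision rather than channel noise."""
    if rf_energy_duration > tx_duration * (1.0 + tol):
        return "collision"
    return "noise"

# our 1.0 ms frame failed while channel energy persisted for 1.4 ms
print(classify_failure(1.0, 1.4))  # → collision
```

A station using such a rule would grow its contention window only on "collision", leaving the backoff state untouched when noise corrupted the frame.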
telecom services are at the core of today's societies' everyday needs. the availability of numerous online forums and discussion platforms enables telecom providers to improve their services by exploring the views of their customers to learn about common issues that the customers face. natural language processing (nlp) tools can be used to process the free text collected. one way of working with such data is to represent text as numerical vectors using one of many word embedding models based on neural networks. this research uses a novel dataset of telecom customers' reviews to perform an extensive study showing how different word embedding algorithms can affect the text classification process. several state-of-the-art word embedding techniques are considered, including bert, word2vec and doc2vec, coupled with several classification algorithms. the important issue of feature engineering and dimensionality reduction is addressed and several pca-based approaches are explored. moreover, the energy consumption of the different word embeddings is investigated. the findings show that some word embedding models can lead to consistently better text classifiers in terms of precision, recall and f1-score. in particular, for the more challenging classification tasks, bert combined with pca stood out with the highest performance metrics. moreover, our proposed pca approach of combining word vectors using the first principal component shows clear advantages in performance over the traditional approach of taking the average.
|
arxiv:2504.13653
|
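The final claim above, combining a document's word vectors via the first principal component rather than their average, can be sketched with a plain SVD. This is our hedged reading of the approach; the function name and the sign convention are ours:

```python
import numpy as np

def combine_pc1(word_vectors):
    """Represent a document by the first principal axis of its
    (centered) word vectors instead of their mean."""
    X = np.asarray(word_vectors, dtype=float)
    Xc = X - X.mean(axis=0)                    # center the vectors
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pc1 = Vt[0]                                # first principal axis
    # fix the sign so the representation is deterministic
    if pc1[np.argmax(np.abs(pc1))] < 0:
        pc1 = -pc1
    return pc1

# toy 2-d word vectors: variance lies almost entirely along axis 0
doc_vec = combine_pc1([[1.0, 0.1], [2.0, -0.1], [3.0, 0.0]])
```

Unlike the mean, the result is unit-length and captures the direction of maximum spread of the word vectors, which is one plausible reason it behaves differently as a classifier feature.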
we consider a recursive device that is based on a mach-zehnder interferometer and linear optical elements which allow self-feedback through multiple internal reflection of radiation between two parallel arrays of opposite-faced mirrors. by a carefully chosen experimental arrangement and for certain input states it is possible to observe at the open ends of the device time-generated coherent superpositions in perpetuum.
|
arxiv:1907.05641
|
deep networks for monocular depth estimation (mde) have achieved promising performance recently, and it is of great importance to further understand the interpretability of these networks. existing methods attempt to provide post-hoc explanations by investigating visual cues, which may not explore the internal representations learned by deep networks. in this paper, we find that some hidden units of the network are selective to certain ranges of depth, and thus such behavior can serve as a way to interpret the internal representations. based on our observations, we quantify the interpretability of a deep mde network by the depth selectivity of its hidden units. moreover, we propose a method to train interpretable mde deep networks without changing their original architectures, by assigning a depth range for each unit to select. experimental results demonstrate that our method is able to enhance the interpretability of deep mde networks by largely improving the depth selectivity of their units, while not harming or even improving the depth estimation accuracy. we further provide a comprehensive analysis to show the reliability of selective units, the applicability of our method on different layers, models, and datasets, and a demonstration on analysis of model error. source code and models are available at https://github.com/youzunzhi/interpretablemde.
|
arxiv:2108.05312
|
we propose a gauge theory / gravity duality involving conformal theories based on u(n+k|k) gauge groups. we show that to all orders in 1/n these non-unitary theories based on supergroups are indistinguishable from the corresponding unitary theories where the gauge group is replaced by u(n). this leads to non-unitary gravity duals which to all orders in 1/n are indistinguishable from their unitary cousins. they are distinguished by operators whose correlation functions differ by o(exp(-an)). the celebrated type iib on ads^5 x s^5 and m-theory on ads^4 x s^7 fall in this class and thus seem to also admit non-unitary non-perturbative completions. it is tempting to conjecture that this setup may provide a non-unitary model for black hole evaporation.
|
arxiv:1409.1603
|
we demonstrate the existence of gravitational critical phenomena in higher dimensional electrovac bubble spacetimes. to this end, we study linear fluctuations about families of static, homogeneous spherically symmetric bubble spacetimes in kaluza-klein theories coupled to a maxwell field. we prove that these solutions are linearly unstable and possess a unique unstable mode with a growth rate that is universal in the sense that it is independent of the family considered. furthermore, by a double analytic continuation this mode can be seen to correspond to marginally stable stationary modes of perturbed black strings whose periods are integer multiples of the gregory-laflamme critical length. this allows us to rederive recent results about the behavior of the critical mass for large dimensions and to generalize them to the charged black string case.
|
arxiv:hep-th/0407265
|
the mergers of binary compact objects such as neutron stars and black holes are of central interest to several areas of astrophysics, including as the progenitors of gamma-ray bursts (grbs), sources of high-frequency gravitational waves and likely production sites for heavy element nucleosynthesis via rapid neutron capture (the r-process). these heavy elements include some of great geophysical, biological and cultural importance, such as thorium, iodine and gold. here we present observations of the exceptionally bright gamma-ray burst grb 230307a. we show that grb 230307a belongs to the class of long-duration gamma-ray bursts associated with compact object mergers, and contains a kilonova similar to at2017gfo, associated with the gravitational-wave merger gw170817. we obtained james webb space telescope mid-infrared (mid-ir) imaging and spectroscopy 29 and 61 days after the burst. the spectroscopy shows an emission line at 2.15 microns, which we interpret as tellurium (atomic mass a = 130), and a very red source, emitting most of its light in the mid-ir due to the production of lanthanides. these observations demonstrate that nucleosynthesis in grbs can create r-process elements across a broad atomic mass range and play a central role in heavy element nucleosynthesis across the universe.
|
arxiv:2307.02098
|
odometry forms an important component of many manned and autonomous systems. in the rail industry in particular, having precise and robust odometry is crucial for the correct operation of the automatic train protection systems that ensure the safety of high-speed trains in operation around the world. two problems commonly encountered in such odometry systems are miscalibration of the wheel encoders and slippage of the wheels under acceleration and braking, resulting in incorrect velocity estimates. this paper introduces an odometry system that addresses these problems. it comprises an extended kalman filter that tracks the calibration of the wheel encoders as state variables, and a measurement pre-processing stage called sensor consensus analysis (sca) that scales the uncertainty of a measurement based on how consistent it is with the measurements from the other sensors. sca uses the statistical z-test to determine when an individual measurement is inconsistent with the other measurements, and scales the uncertainty until the z-test passes. this system is demonstrated on data from german intercity-express high-speed trains and is shown to successfully deal with errors due to miscalibration and wheel slip.
|
arxiv:1803.02237
|
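The SCA idea can be illustrated in a few lines: run a z-test of one sensor's reading against the consensus of the others, and inflate its reported uncertainty until the test passes. The geometric inflation factor, function name, and example numbers below are our assumptions, not details from the paper:

```python
def sca_scale(z_meas, sigma, others, z_crit=1.96):
    """Inflate the std-dev of a suspect measurement until a z-test
    against the mean of the remaining sensors passes at z_crit."""
    consensus = sum(others) / len(others)
    while abs(z_meas - consensus) / sigma > z_crit:
        sigma *= 1.1   # grow the uncertainty geometrically
    return sigma

# a slipping wheel encoder reads 12 m/s; other sensors agree on ~10 m/s
inflated = sca_scale(12.0, 0.2, [10.0, 10.1, 9.9])
```

Downstream, the extended kalman filter then weighs the slipping encoder far less, without discarding its measurement outright.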
we consider the problem of minimizing an objective function that is the sum of a convex function and a group sparsity-inducing regularizer. problems that integrate such regularizers arise in modern machine learning applications, often for the purpose of obtaining models that are easier to interpret and that have higher predictive accuracy. we present a new method for solving such problems that utilizes subspace acceleration, domain decomposition, and support identification. our analysis shows, under common assumptions, that the iterate sequence generated by our framework is globally convergent, converges to an $\epsilon$-approximate solution in at most $o(\epsilon^{-(1+p)})$ (respectively, $o(\epsilon^{-(2+p)})$) iterations for all $\epsilon$ bounded above and large enough (respectively, all $\epsilon$ bounded above), where $p > 0$ is an algorithm parameter, and exhibits superlinear local convergence. preliminary numerical results for the task of binary classification based on regularized logistic regression show that our approach is efficient and robust, with the ability to outperform a state-of-the-art method.
|
arxiv:2007.14951
|
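The key property of group sparsity-inducing regularizers, whole groups of variables being zeroed at once, which is what support identification exploits, is visible in the standard proximal operator of $\lambda \sum_g \|v_g\|_2$ (block soft-thresholding). This is a textbook operator shown only to illustrate the regularizer's behavior, not the paper's full framework:

```python
import numpy as np

def prox_group_l2(v, groups, lam):
    """Block soft-thresholding: shrink each group toward zero and
    zero it out entirely when its norm falls below lam."""
    out = np.zeros_like(v, dtype=float)
    for g in groups:
        ng = np.linalg.norm(v[g])
        if ng > lam:
            out[g] = (1.0 - lam / ng) * v[g]   # shrink the group
    return out

v = np.array([3.0, 4.0, 0.1, -0.1])
x = prox_group_l2(v, [[0, 1], [2, 3]], lam=1.0)  # second group is zeroed
```

The first group has norm 5, so it is shrunk by the factor 1 - 1/5 = 0.8; the second has norm below lam and is eliminated, identifying the support.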
a non-standard teleportation scheme is proposed, wherein probabilistic teleportation is achieved in conventionally non-teleporting channels. we make use of entanglement monogamy to incorporate an unknown state into a multipartite entangled channel, such that the receiver partially gets disentangled from the network. subsequently, the sender performs a local measurement based teleportation protocol in an appropriate measurement basis, which leaves the receiver in possession of an unknown state, connected by a local unitary transformation with the state to be teleported. this procedure succeeds in a number of cases, like that of w and other non-maximally entangled four qubit states, where the conventional measurement based approach has failed. it is also found that in certain four particle channels, the present procedure does not succeed, although the conventional one works well.
|
arxiv:1108.0080
|
we relate the minimax game of generative adversarial networks (gans) to finding the saddle points of the lagrangian function for a convex optimization problem, where the discriminator outputs and the distribution of generator outputs play the roles of primal variables and dual variables, respectively. this formulation shows the connection between the standard gan training process and the primal-dual subgradient methods for convex optimization. the inherent connection not only provides a theoretical convergence proof for training gans in the function space, but also inspires a novel objective function for training. the modified objective function forces the distribution of generator outputs to be updated along the direction given by the primal-dual subgradient methods. a toy example shows that the proposed method is able to resolve mode collapse, which in this case cannot be avoided by the standard gan or wasserstein gan. experiments on both gaussian mixture synthetic data and real-world image datasets demonstrate the performance of the proposed method in generating diverse samples.
|
arxiv:1802.01765
|
let $X$ be one of the $28$ atkin-lehner quotients of a curve $X_0(N)$ such that $X$ has genus $2$ and its jacobian variety $J$ is absolutely simple. we show that the shafarevich-tate group of $J/\mathbb{Q}$ is trivial. this verifies the strong bsd conjecture for $J$.
|
arxiv:2107.00325
|
statistical modeling of physical laws connects experiments with mathematical descriptions of natural phenomena. the modeling is based on the probability density of measured variables expressed by experimental data via a kernel estimator. as an objective kernel, the scattering function determined by calibration of the instrument is introduced. this function provides for a new definition of experimental information and redundancy of experimentation in terms of information entropy. the redundancy increases with the number of experiments, while the experimental information converges to a value that describes the complexity of the data. the difference between the redundancy and the experimental information is proposed as the model cost function. from its minimum, a proper number of data in the model is estimated. as an optimal, nonparametric estimator of the relation between measured variables, the conditional average extracted from the kernel estimator is proposed. the modeling is demonstrated on noisy chaotic data.
|
arxiv:cs/0612027
|
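For a Gaussian product kernel, the conditional average the abstract proposes reduces to a locally weighted mean of the measured responses (the Nadaraya-Watson form). The sketch below uses an assumed fixed bandwidth h as a stand-in for the calibrated scattering function:

```python
import numpy as np

def conditional_average(x_data, y_data, x, h):
    """Conditional average E[y|x] from a Gaussian kernel estimate of
    the joint density: a weighted mean with weights that decay with
    distance from the query point x."""
    w = np.exp(-0.5 * ((x - x_data) / h) ** 2)
    return np.sum(w * y_data) / np.sum(w)

x_data = np.array([0.0, 1.0, 2.0, 3.0])
y_data = 2.0 * x_data               # a noiseless linear relation
estimate = conditional_average(x_data, y_data, 1.5, 0.5)  # ≈ 3.0 by symmetry
```

The bandwidth plays the role the abstract assigns to the instrument's scattering function: it sets how far neighboring measurements influence the estimate at a given point.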
protein complexes conserved across species indicate processes that are core to cellular machinery (e.g. cell-cycle or dna damage-repair complexes conserved across human and yeast). while numerous computational methods have been devised to identify complexes from the protein interaction (ppi) networks of individual species, these are severely limited by noise and errors (false positives) in currently available datasets. our analysis using human and yeast ppi networks revealed that these methods missed several important complexes, including those conserved between the two species (e.g. the mlh1-msh2-pms2-pcna mismatch-repair complex). here, we note that much of the functionality of yeast complexes has been conserved in human complexes not only through sequence conservation of proteins but also of critical functional domains. therefore, integrating information on domain conservation might throw further light on conservation patterns between yeast and human complexes.
|
arxiv:1307.3856
|
human ratings have become a crucial resource for training and evaluating machine learning systems. however, traditional elicitation methods for absolute and comparative rating suffer from issues with consistency and often do not distinguish between uncertainty due to disagreement between annotators and ambiguity inherent to the item being rated. in this work, we present goldilocks, a novel crowd rating elicitation technique for collecting calibrated scalar annotations that also distinguishes inherent ambiguity from inter-annotator disagreement. we introduce two main ideas: grounding absolute rating scales with examples and using a two-step bounding process to establish a range for an item's placement. we test our designs in three domains: judging toxicity of online comments, estimating satiety of food depicted in images, and estimating age based on portraits. we show that (1) goldilocks can improve consistency in domains where interpretation of the scale is not universal, and that (2) representing items with ranges lets us simultaneously capture different sources of uncertainty, leading to better estimates of pairwise relationship distributions.
|
arxiv:2108.01799
|
a quasislit is the image of a vertical line segment $[0, iy]$, $y > 0$, under a quasiconformal homeomorphism of the upper half-plane fixing infinity. quasislits correspond precisely to curves generated by the loewner equation with a driving function in the lip-1/2 class. it is known that a quasislit is contained in a cone depending only on the lip-1/2 seminorm $s$ of its loewner driving function. in this note we use the loewner equation to give quantitative estimates on the opening angle of this cone in the full range $s < 4$. the estimate is shown to be sharp for small $s$. as consequences, we derive explicit hölder exponents for $s < 4$ as well as estimates on winding rates. we also relate quantitatively the lip-1/2 seminorm with the quasiconformal dilatation and discuss the optimal regularity of quasislits achievable through reparametrization.
|
arxiv:1910.03303
|
we derive a symmetry property for the fourier transform of the collisionless sound modes of bose condensates in anisotropic traps, connected with a somewhat hidden conservation law. we discuss its possible observation by dispersive light scattering.
|
arxiv:cond-mat/9805338
|
we study our non-perturbative formalism to describe scalar gauge-invariant metric fluctuations by extending the ponce de leon metric.
|
arxiv:0710.1640
|
large language models (llms) have rapidly become integral to numerous applications in critical domains where reliability is paramount. despite significant advances in safety frameworks and guardrails, current protective measures exhibit crucial vulnerabilities, particularly in multilingual contexts. existing safety systems remain susceptible to adversarial attacks in low-resource languages and through code-switching techniques, primarily due to their english-centric design. furthermore, the development of effective multilingual guardrails is constrained by the scarcity of diverse cross-lingual training data. even recent solutions like llama guard-3, while offering multilingual support, lack transparency in their decision-making processes. we address these challenges by introducing x-guard agent, a transparent multilingual safety agent designed to provide content moderation across diverse linguistic contexts. x-guard effectively defends against both conventional low-resource language attacks and sophisticated code-switching attacks. our approach includes: curating and enhancing multiple open-source safety datasets with explicit evaluation rationales; employing a jury of judges methodology to mitigate individual judge llm provider biases; creating a comprehensive multilingual safety dataset spanning 132 languages with 5 million data points; and developing a two-stage architecture combining a custom-finetuned mbart-50 translation module with an evaluation x-guard 3b model trained through supervised finetuning and grpo training. our empirical evaluations demonstrate x-guard's effectiveness in detecting unsafe content across multiple languages while maintaining transparency throughout the safety evaluation process. our work represents a significant advancement in creating robust, transparent, and linguistically inclusive safety systems for llms and its integrated systems.
|
arxiv:2504.08848
|
( x, \ mathcal { a }, \ mu ) = \ { f \ in \ mathcal { m } ( x, \ mathcal { a } ) : ~ \ forall g \ in \ mathcal { m } ( x, \ mathcal { a } ), f. g \ in l ^ \ infty ( x, \ mathcal { a }, \ mu ) \ } $. it is established that an ideal $ i $ in $ \ mathcal { m } ( x, \ mathcal { a } ) $ is dense in the $ u _ \ mu $ - topology if and only if it is dense in the $ m _ \ mu $ - topology and this happens when and only when there exists $ z \ in z [ i ] $ such that $ \ mu ( z ) = 0 $. furthermore, it is proved that $ i $ is closed in $ \ mathcal { m } ( x, \ mathcal { a } ) $ in the $ m _ \ mu $ - topology if and only if it is a $ z _ \ mu $ - ideal in the sense that if $ f \ equiv g $ almost everywhere on $ x $ with $ f \ in i $ and $ g \ in \ mathcal { m } ( x, \ mathcal { a } ) $, then $ g \ in i $.
|
arxiv:2306.03768
|
according to theoretical studies, a deformed object ( e. g., a spinning non - axisymmetric pulsar star ) will radiate a gravitational wave ( gw ) signal during an acceleration process, as targeted by the ligo science project. these types of disturbance sources with a large bump or dimple on the equator would survive and be identifiable as gw sources. in this work, we aim to provide a method for exploring gw radiation from isolated neutron stars ( nss ) in a deformed state using some observational results, which can be confirmed by the next ligo project. in combination with properties from observational results ( e. g., psr j1748 - 2446, psr 1828 - 11 and cygnus x - 1 ), and based on a binary population synthesis ( bps ) approach, we give a numerical gw radiation under the assumption that the ns is non - axisymmetric, and we give the resulting energy spectrum. we find that the gw luminosity $ l _ { gw } $ can range from about $ 10 ^ { 40 } \ rm erg / s $ to $ 10 ^ { 55 } \ rm erg / s $.
|
arxiv:1304.4455
|
we apply various conventional tests of integrability to the supersymmetric nonlinear schr \ " odinger equation. we find that a matrix lax pair exists and that the system has the painlev \ ' e property only for a particular choice of the free parameters of the theory. we also show that the second hamiltonian structure generalizes to superspace only for these values of the parameters. we are unable to construct a zero curvature formulation of the equations based on osp ( 2 $ | $ 1 ). however, this attempt yields a nonsupersymmetric fermionic generalization of the nonlinear schr \ " odinger equation which appears to possess the painlev \ ' e property.
|
arxiv:hep-th/9403019
|
in this paper, we investigate the probabilistic variants of the strategy logics atl and atl * under imperfect information. specifically, we present novel decidability and complexity results when the model transitions are stochastic and agents play uniform strategies. that is, the semantics of the logics are based on multi - agent, stochastic transition systems with imperfect information, which combine two sources of uncertainty, namely, the partial observability agents have on the environment, and the likelihood of transitions to occur from a system state. since the model checking problem is undecidable in general in this setting, we restrict our attention to agents with memoryless ( positional ) strategies. the resulting setting captures the situation in which agents have qualitative uncertainty of the local state and quantitative uncertainty about the occurrence of future events. we illustrate the usefulness of this setting with meaningful examples.
|
arxiv:2310.17240
|
situation calculus has been applied widely in artificial intelligence to model and reason about actions and changes in dynamic systems. since actions carried out by agents will cause constant changes of the agents ' beliefs, how to manage these changes is a very important issue. shapiro et al. [ 22 ] is one of the studies that considered this issue. however, in this framework, the problem of noisy sensing, which often arises in real - world applications, is not considered. as a consequence, noisy sensing actions in this framework will lead to an agent facing an inconsistent situation, and subsequently the agent cannot proceed further. in this paper, we investigate how noisy sensing actions can be handled in iterated belief change within the situation calculus formalism. we extend the framework proposed in [ 22 ] with the capability of managing noisy sensing. we demonstrate that an agent can still detect the actual situation when the ratio of noisy sensing actions to accurate sensing actions is limited. we prove that our framework subsumes the iterated belief change strategy in [ 22 ] when all sensing actions are accurate. furthermore, we prove that our framework can adequately handle belief introspection, mistaken beliefs, belief revision and belief update even with noisy sensing, as done in [ 22 ] with accurate sensing actions only.
|
arxiv:1202.3743
|
we prove gaussian approximation theorems for specific $ k $ - dimensional marginals of convex bodies which possess certain symmetries. in particular, we treat bodies which possess a 1 - unconditional basis, as well as simplices. our results extend recent results for 1 - dimensional marginals due to e. meckes and the author.
|
arxiv:math/0606073
|
the creation of a quantum computer is an outstanding fundamental and practical problem. a quantum computer could be used to execute very complicated tasks which are not solvable with classical computers. the first prototype of a solid state quantum computer was created in 2009 with superconducting qubits. however, it suffers from decoherence processes, and it is desirable to find a more practical encoding of qubits with long - lived coherence. candidates could be single impurity or vacancy centers in solids, but their interaction with electromagnetic radiation is rather weak. so ensembles of atoms were proposed for qubit encoding, using the dipole blockade mechanism in order to turn multilevel systems into two - level ones. but the dipole - dipole based blockade introduces an additional decoherence that limits its practical significance. recently, a collective blockade mechanism has been proposed for a system of three - level atoms, using different frequency shifts for the raman transitions between collective atomic states characterized by different numbers of excited atoms. here, we propose a two - qubit gate using another collective blockade mechanism in a system of two - level atoms, based on exchange interaction via virtual photons between multi - atomic ensembles in a resonator. we also demonstrate the possibility of a three - qubit gate ( controlled swap gate ) using suppression of the swap process between two multi - atomic ensembles due to a dynamical shift of the atomic levels controlled by the states of the photon - encoded qubit.
|
arxiv:1103.3098
|
we describe the behavior of a perturbed 5 - dimensional black string subject to the gregory - laflamme instability. we show that the horizon evolves in a self - similar manner, where at any moment in the late - time development of the instability the horizon can be described as a sequence of 3 - dimensional spherical black holes of varying size, joined by black string segments of similar radius. as with the initial black string, each local string segment is itself unstable, and this fuels the self - similar cascade to ( classically ) arbitrarily small scales ; in the process the horizon develops a fractal structure. in finite asymptotic time, the remaining string segments shrink to zero - size, yielding a naked singularity. since no fine - tuning is required to excite the instability, this constitutes a generic violation of cosmic censorship. we further discuss how this behavior is related to satellite formation in low - viscosity fluid streams subject to the rayleigh - plateau instability, and estimate the fractal dimension of the horizon prior to formation of the naked singularity.
|
arxiv:1106.5184
|
the goal of computational logic is to allow us to model computation as well as to reason about it. we argue that a computational logic must be able to model interactive computation. we show that first - order logic cannot model interactive computation due to the incompleteness of interaction. we show that interactive computation is necessarily paraconsistent, able to model both a fact and its negation, due to the role of the world ( environment ) in determining the course of the computation. we conclude that paraconsistency is a necessary property for a logic that can model interactive computation.
|
arxiv:cs/0207074
|
a deep neural network ( dnn ) that can reliably model muscle responses from corresponding brain stimulation has the potential to increase knowledge of coordinated motor control for numerous basic science and applied use cases. such cases include the understanding of abnormal movement patterns due to neurological injury from stroke, and stimulation based interventions for neurological recovery such as paired associative stimulation. in this work, potential dnn models are explored and the one with the minimum squared errors is recommended for the optimal performance of the m2m - net, a network that maps transcranial magnetic stimulation of the motor cortex to corresponding muscle responses, using : a finite element simulation, an empirical neural response profile, a convolutional autoencoder, a separate deep network mapper, and recordings of multi - muscle activation. we discuss the rationale behind the different modeling approaches and architectures, and contrast their results. additionally, to obtain a comparative insight of the trade - off between complexity and performance analysis, we explore different techniques, including the extension of two classical information criteria for m2m - net. finally, we find that the model analogous to mapping the motor cortex stimulation to a combination of direct and synergistic connection to the muscles performs the best, when the neural response profile is used at the input.
|
arxiv:2002.06250
|
robust principal component analysis ( rpca ) and its associated non - convex relaxation methods constitute a significant component of matrix completion problems, wherein matrix factorization strategies effectively reduce dimensionality and enhance computational speed. however, some non - convex factorization forms lack theoretical guarantees. this paper proposes a novel strategy in non - convex quasi - norm representation, introducing a method to obtain weighted matrix quasi - norm factorization forms. especially, explicit bilinear factor matrix factorization formulations for the weighted logarithmic norm and weighted schatten - $ q $ quasi norms with $ q = 1, 1 / 2, 2 / 3 $ are provided, along with the establishment of corresponding matrix completion models. an alternating direction method of multipliers ( admm ) framework algorithm is employed for solving, and convergence results of the algorithm are presented.
|
arxiv:2403.18400
|
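for weighted nuclear - norm style surrogates such as those above, the core admm subproblem reduces to shrinking each singular value by its weight ( the proximal step ). a minimal sketch of that weighted shrinkage, not the paper ' s exact algorithm ; the function name and the example weights are our own :

```python
def weighted_soft_threshold(singular_values, weights, tau):
    """Proximal (shrinkage) step for a weighted nuclear-norm surrogate:
    each singular value sigma_i is shrunk by tau * w_i and clipped at zero.
    Larger weights penalize (shrink) a component more, mimicking the
    reweighting behaviour of non-convex quasi-norm relaxations."""
    return [max(s - tau * w, 0.0) for s, w in zip(singular_values, weights)]

# shrink a spectrum with weights that grow for smaller singular values
sigmas = [5.0, 2.0, 0.5]
weights = [0.2, 1.0, 4.0]
shrunk = weighted_soft_threshold(sigmas, weights, tau=0.5)
```

inside a full admm iteration this step would be applied to the singular values of an svd of the current iterate, followed by the usual primal and dual updates.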
this paper presents a knowledge graph to assist in reasoning over signals for intelligence purposes. we highlight limitations of existing knowledge graphs and reasoning systems for this purpose, using inference of an attack from combined data from microphones, cameras and social media as an example. rather than acting directly on the received signal, our approach considers attacker behaviour, signal emission, receiver characteristics, and how signals are summarised to support inferring the underlying cause of the signal.
|
arxiv:2206.12111
|
we analyze the changes in the training and educational efforts of the scinet hpc consortium, a canadian academic high performance computing center, in the areas of scientific computing and high - performance computing, over the last six years. initially, scinet offered isolated training events on how to use hpc systems and write parallel code, but the training program now consists of a broad range of workshops and courses that users can take toward certificates in scientific computing, data science, or high - performance computing. using data on enrollment, attendance, and certificate numbers from scinet ' s education website, used by almost 1800 users so far, we extract trends on the growth, demand, and breadth of scinet ' s training program. among the results are a steady overall growth, a sharp and steady increase in the demand for data science training, and a wider participation of ' non - traditional ' computing disciplines, which has motivated an increasingly broad spectrum of training offerings. of interest is also that many of the training initiatives have evolved into courses that can be taken as part of the graduate curriculum at the university of toronto.
|
arxiv:1901.05520
|
recent x - ray observations have established that collisions between subclusters of galaxies are rather common phenomena. prompted by such observations, we have performed n - body simulations of two equal - mass subclusters of galaxies, which are going to merge. we first have confirmed that only a part of the kinetic energy associated with the relative motion of the two subclusters is converted to internal energy of each subcluster at the first encounter. we performed simulations for a variety of dark - matter ( dm ) distributions, and find that the time scale for washing out the double peak structures after the first encounter strongly depends on the density and velocity distributions of the dm. it takes longer when the dm is spatially extended. according to our calculation it takes more than $ 4 \ times 10 ^ 9 $ yr after the first encounter until the density contour shows only a single peak. to explain the high fraction of clusters with substructures among nearby clusters, richstone, loeb & turner ( 1992 ; hereafter rlt ) required recent ( $ \ lesssim 10 ^ 9 h ^ { - 1 } $ yr ) cluster formation. however, our results show that the timescale for subcluster merging is still uncertain and possibly much longer than the time scale assumed by rlt. caution should be exercised when concluding that the density of the universe is high by using rlt ' s method.
|
arxiv:astro-ph/9505004
|
first order policy optimization has been widely used in reinforcement learning. it guarantees to find the optimal policy for the state - feedback linear quadratic regulator ( lqr ). however, the performance of policy optimization remains unclear for the linear quadratic gaussian ( lqg ) control where the lqg cost has spurious suboptimal stationary points. in this paper, we introduce a novel perturbed policy gradient ( pgd ) method to escape a large class of bad stationary points ( including high - order saddles ). in particular, based on the specific structure of lqg, we introduce a novel reparameterization procedure which converts the iterate from a high - order saddle to a strict saddle, from which standard random perturbations in pgd can escape efficiently. we further characterize the high - order saddles that can be escaped by our algorithm.
|
arxiv:2204.00912
|
the self - force acting on a ( scalar or electric ) charge held in place outside a massive body contains information about the body ' s composition, and can therefore be used as a probe of internal structure. we explore this theme by computing the ( scalar or electromagnetic ) self - force when the body is a spherical ball of perfect fluid in hydrostatic equilibrium, under the assumption that its rest - mass density and pressure are related by a polytropic equation of state. the body is strongly self - gravitating, and all computations are performed in exact general relativity. the dependence on internal structure is best revealed by expanding the self - force in powers of 1 / r, with r denoting the radial position of the charge outside the body. to the leading order, the self - force scales as 1 / r ^ 3 and depends only on the square of the charge and the body ' s mass ; the leading self - force is universal. the dependence on internal structure is seen at the next order, 1 / r ^ 5, through a structure factor that depends on the equation of state. we compute this structure factor for relativistic polytropes, and show that for a fixed mass, it increases linearly with the body ' s radius in the case of the scalar self - force, and quadratically with the body ' s radius in the case of the electromagnetic self - force. in both cases we find that for a fixed mass and radius, the self - force is smaller if the body is more centrally dense, and larger if the mass density is more uniformly distributed.
|
arxiv:1205.1236
|
a perfect cuboid is a rectangular parallelepiped with integer edges and integer face diagonals whose space diagonal is also integer. the existence of such cuboids is neither proved, nor disproved. a rational perfect cuboid is a natural companion of a perfect cuboid absolutely equivalent to the latter one. its edges and face diagonals are rational numbers, while its space diagonal is equal to unity. recently, based on a symmetry reduction, it was shown that edges of a rational perfect cuboid are roots of a certain cubic equation with rational coefficients depending on two rational parameters. face diagonals of this cuboid are roots of another cubic equation whose coefficients are rational numbers depending on the same two rational parameters. in the present paper these two cubic equations are studied for reducibility. six special cases of their reducibility over the field of rational numbers are found.
|
arxiv:1208.0308
|
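the defining conditions of a perfect cuboid can be checked directly with integer arithmetic ; a minimal sketch ( our own helper names ), using the classical euler brick ( 44, 117, 240 ), which has integer face diagonals but a non - integer space diagonal :

```python
from math import isqrt

def is_perfect_square(n):
    """True if the non-negative integer n is a perfect square."""
    r = isqrt(n)
    return r * r == n

def cuboid_diagonals(a, b, c):
    """For an a x b x c rectangular parallelepiped with integer edges,
    return (all face diagonals integer?, space diagonal integer?).
    A perfect cuboid is one where both are True."""
    faces = [a * a + b * b, a * a + c * c, b * b + c * c]
    space = a * a + b * b + c * c
    return all(is_perfect_square(f) for f in faces), is_perfect_square(space)

# euler brick: face diagonals 125, 244, 267 are integers,
# but sqrt(44^2 + 117^2 + 240^2) = sqrt(73225) is not an integer
faces_ok, space_ok = cuboid_diagonals(44, 117, 240)
```

no integer triple with both conditions true is known, which is exactly the open existence question the abstract refers to.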
a highly accurate, multi - domain spectral code is used in order to construct sequences of general relativistic, differentially rotating neutron stars in axisymmetry and stationarity. for bodies with a spheroidal topology and a homogeneous or an n = 1 polytropic equation of state, we investigate the solution space corresponding to broad ranges of degree of differential rotation and stellar densities. in particular, starting from static and spherical configurations, we analyse the changes of the corresponding surface shapes as the rate of rotation is increased. for a sufficiently weak degree of differential rotation, the sequences terminate at a mass - shedding limit, while for moderate and strong rates of differential rotation, they exhibit a continuous parametric transition to a regime of toroidal fluid bodies. in this article, we concentrate on the appearance of this transition, analyse in detail its occurrence and show its relevance for the calculation of astrophysical sequences. moreover, we find that the solution space contains various types of spheroidal configurations, which were not considered in previous work, mainly due to numerical limitations.
|
arxiv:0812.3347
|
we discuss the characteristic interference features of soft radiation in the threshold production of heavy unstable particles : soft gluon radiation in $ e ^ + e ^ - \ to t \ bar { t } $ and soft photon radiation in $ e ^ + e ^ - \ to w ^ + w ^ - $. we show that the heavy particle decay width controls the interference between the emission off the final state particles. as a result, the radiation pattern may provide a way of measuring the decay width of the heavy particles.
|
arxiv:hep-ph/9302250
|
we develop the formalism for determining the quasinormal modes of general relativistic multi - fluid compact stars in such a way that the impact of superfluid gap data can be assessed. our results represent the first attempt to study true multi - layer dynamics, an important step towards considering realistic superfluid / superconducting compact stars. we combine a relativistic model for entrainment with model equations of state that explicitly incorporate the symmetry energy. our analysis emphasises the many different parameters that are required for this kind of modelling, and the fact that standard tabulated equations of state are grossly incomplete in this respect. to make progress, future equations of state need to provide the energy density as a function of the various nucleon number densities, the temperature ( i. e. entropy ), and the entrainment among the various components.
|
arxiv:0709.0660
|
in this work, we propose a novel approach that predicts the relationships between various entities in an image in a weakly supervised manner by relying on image captions and object bounding box annotations as the sole source of supervision. our proposed approach uses a top - down attention mechanism to align entities in captions to objects in the image, and then leverages the syntactic structure of the captions to align the relations. we use these alignments to train a relation classification network, thereby obtaining both grounded captions and dense relationships. we demonstrate the effectiveness of our model on the visual genome dataset by achieving a recall @ 50 of 15 % and recall @ 100 of 25 % on the relationships present in the image. we also show that the model successfully predicts relations that are not present in the corresponding captions.
|
arxiv:1912.00311
|
we report measurements from which we determine the spatial structure of the lunar contribution to night sky brightness, taken at the lsst site on cerro pachon in chile. we use an array of six photodiodes with filters that approximate the large synoptic survey telescope ' s { \ it u, g, r, i, z, } and { \ it y } bands. we use the sun as a proxy for the moon, and measure sky brightness as a function of zenith angle of the point on sky, zenith angle of the sun, and angular distance between the sun and the point on sky. we make a correction for the difference between the illumination spectrum of the sun and the moon. since scattered sunlight totally dominates the daytime sky brightness, this technique allows us to cleanly determine the contribution to the ( cloudless ) night sky from backscattered moonlight, without contamination from other sources of night sky brightness. we estimate our uncertainty in the relative lunar night sky brightness vs. zenith and lunar angle to be 10 \, \ %. this information is useful in planning the optimal execution of the lsst survey, and perhaps for other astronomical observations as well. although our primary objective is to map out the angular structure and spectrum of the scattered light from the atmosphere and particulates, we also make an estimate of the expected number of scattered lunar photons per pixel per second in lsst, and find values that are in overall agreement with previous estimates.
|
arxiv:1510.07574
|
ml - as - a - service continues to grow, and so does the need for very strong privacy guarantees. secure inference has emerged as a potential solution, wherein cryptographic primitives allow inference without revealing users ' inputs to a model provider or model ' s weights to a user. for instance, the model provider could be a diagnostics company that has trained a state - of - the - art densenet - 121 model for interpreting a chest x - ray and the user could be a patient at a hospital. while secure inference is in principle feasible for this setting, there are no existing techniques that make it practical at scale. the cryptflow2 framework provides a potential solution with its ability to automatically and correctly translate clear - text inference to secure inference for arbitrary models. however, the resultant secure inference from cryptflow2 is impractically expensive : almost 3tb of communication is required to interpret a single x - ray on densenet - 121. in this paper, we address this outstanding challenge of inefficiency of secure inference with three contributions. first, we show that the primary bottlenecks in secure inference are large linear layers which can be optimized with the choice of network backbone and the use of operators developed for efficient clear - text inference. this finding and emphasis deviates from many recent works which focus on optimizing non - linear activation layers when performing secure inference of smaller networks. second, based on analysis of a bottle - necked convolution layer, we design an x - operator which is a more efficient drop - in replacement. third, we show that the fast winograd convolution algorithm further improves efficiency of secure inference. in combination, these three optimizations prove to be highly effective for the problem of x - ray interpretation trained on the chexpert dataset.
|
arxiv:2209.00411
|
tracking has traditionally been the art of following interest points through space and time. this changed with the rise of powerful deep networks. nowadays, tracking is dominated by pipelines that perform object detection followed by temporal association, also known as tracking - by - detection. in this paper, we present a simultaneous detection and tracking algorithm that is simpler, faster, and more accurate than the state of the art. our tracker, centertrack, applies a detection model to a pair of images and detections from the prior frame. given this minimal input, centertrack localizes objects and predicts their associations with the previous frame. that ' s it. centertrack is simple, online ( no peeking into the future ), and real - time. it achieves 67. 3 % mota on the mot17 challenge at 22 fps and 89. 4 % mota on the kitti tracking benchmark at 15 fps, setting a new state of the art on both datasets. centertrack is easily extended to monocular 3d tracking by regressing additional 3d attributes. using monocular video input, it achieves 28. 3 % amota @ 0. 2 on the newly released nuscenes 3d tracking benchmark, substantially outperforming the monocular baseline on this benchmark while running at 28 fps.
|
arxiv:2004.01177
|
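the temporal association step that centertrack simplifies can be illustrated as greedy nearest - center matching between frames ; a simplified sketch of that association idea only ( our own function and thresholds, not the actual implementation, and without the learned offset prediction ) :

```python
def greedy_center_match(prev_centers, curr_centers, max_dist=50.0):
    """Greedily associate current-frame detections to previous-frame tracks
    by closest center distance, the core of tracking-by-detection
    association. Returns {current index: previous index or None}."""
    matches = {}
    used = set()
    for i, (cx, cy) in enumerate(curr_centers):
        best, best_d2 = None, max_dist * max_dist
        for j, (px, py) in enumerate(prev_centers):
            if j in used:
                continue
            d2 = (cx - px) ** 2 + (cy - py) ** 2
            if d2 < best_d2:
                best, best_d2 = j, d2
        if best is not None:
            used.add(best)  # each track is claimed at most once
        matches[i] = best   # None means a newly spawned track
    return matches

prev = [(10.0, 10.0), (100.0, 100.0)]
curr = [(12.0, 11.0), (300.0, 300.0)]
assignment = greedy_center_match(prev, curr)
```

in centertrack the distance is computed after applying a predicted per - object offset, which is what makes this greedy scheme work well in practice.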
in urban streets, the intrusion of pedestrians presents significant safety challenges. modelling mixed pedestrian - vehicle traffic is complex due to the distinct motion characteristics and spatial dimensions of pedestrians and vehicles, making unified modelling difficult, with few studies addressing these issues. this paper employs a multi - grid cellular automata model to bridge the gap between vehicle and pedestrian models. an improved kerner - klenov - wolf ( ikkw ) model and a pedestrian motion model that incorporates time - to - collision ( ttc ) are introduced. both models update the spatial motions of vehicles and pedestrians uniformly. empirical analysis indicates that the model achieves high simulation accuracy. this model effectively illustrates the impact of pedestrian intrusion within a mixed traffic scenario. the fundamental diagram of heterogeneous traffic reveals substantial differences, highlighting the effects of pedestrian intrusion on traffic flow states and identifying six phase regions in mixed traffic. additionally, this paper examines conflicts between pedestrians and vehicles under varying speed limits and sidewalk widths, demonstrating that lower speeds and broader sidewalks significantly reduce the frequency of pedestrian - vehicle conflicts. notably, the frequency of peak conflicts at a vehicle speed limit of 60. 48 km / h is more than three times higher than at 30. 24 km / h. this model offers a potential approach to studying mixed traffic flows and exhibits substantial scalability.
|
arxiv:2405.06282
|
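the ikkw model itself is involved, but the basic cellular - automaton traffic update it builds on can be illustrated with the classic nagel - schreckenberg rules ( accelerate, brake to the gap, randomize, move ). this is a generic ca sketch for illustration, not the paper ' s model :

```python
import random

def nasch_step(positions, speeds, v_max, road_len, p_slow, rng):
    """One parallel update of the nagel-schreckenberg cellular automaton
    on a ring road: accelerate, brake to the gap ahead, random slowdown,
    then advance. Positions are cell indices; at most one car per cell."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_pos, new_spd = list(positions), list(speeds)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (positions[ahead] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, v_max, gap)   # accelerate, then brake
        if v > 0 and rng.random() < p_slow:  # stochastic slowdown
            v -= 1
        new_spd[i] = v
        new_pos[i] = (positions[i] + v) % road_len
    return new_pos, new_spd

rng = random.Random(0)
pos, spd = [0, 5, 10], [0, 0, 0]
for _ in range(20):
    pos, spd = nasch_step(pos, spd, v_max=5, road_len=30, p_slow=0.3, rng=rng)
```

multi - grid models like the one above refine this scheme with finer cells and extra rules ( e. g., ttc - based pedestrian yielding ), but the update cycle has the same shape.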
landau predicted that transverse sound propagates in a fermi liquid with sufficiently strong fermi liquid interactions, unlike a classical fluid which cannot support shear oscillations. previous attempts to observe this unique collective mode yielded inconclusive results due to contributions from single particle excitations. here, we have microfabricated acoustic cavities with a micron - scale path length that is suitable for direct detection of this sound mode. the interference fringes of these acoustic fabry - perot cavities can be used to determine both the real and imaginary parts of the acoustic impedance. we report a null - result in this search as no clear interference fringe has been observed in the fermi liquid, indicating the attenuation of transverse zero sound ( tzs ) is likely above 2000 cm ^ - 1. we provide theoretical justification for why the sound mode may yet exist but not be directly detectable due to high attenuation.
|
arxiv:2410.10795
|
we find some sufficient conditions for a system of partial derivatives of an entire function to be complete in the space $ h ( \ mathbb { c } ^ d ) $ of all entire functions of $ d $ variables. as an application of this result we describe new classes of frequently hypercyclic operators on $ h ( \ mathbb { c } ^ d ) $.
|
arxiv:1310.7133
|
we consider relativistic quantum field theory in the presence of an external electric potential in a general curved space - time geometry. we utilise fermi coordinates adapted to the time - like geodesic to describe the low - energy physics in the laboratory and calculate the leading correction due to the curvature of the space - time geometry to the schr \ " odinger equation. we then compute the non - vanishing probability of excitation for a hydrogen atom that falls in or is scattered by a general schwarzschild black hole. the photon that is emitted from the excited state by spontaneous emission extracts energy from the black hole, increases the decay rate of the black hole and adds to the information paradox.
|
arxiv:2105.13896
|
a new model for a spin 1 / 2 ladder system with two legs is introduced. it is demonstrated that this model is solvable via the bethe ansatz method for arbitrary values of the rung coupling j. this is achieved by a suitable mapping from the hubbard model with appropriate twisted boundary conditions. we determine that a phase transition between gapped and gapless spin excitations occurs at the critical value j _ c = 1 / 2 of the rung coupling.
|
arxiv:cond-mat/9911096
|
in natural and social science research, a protocol is most commonly a predefined procedural method in the design and implementation of an experiment. protocols are written whenever it is desirable to standardize a laboratory method to ensure successful replication of results by others in the same laboratory or by other laboratories. additionally, and by extension, protocols have the advantage of facilitating the assessment of experimental results through peer review. in addition to detailed procedures, equipment, and instruments, protocols will also contain study objectives, reasoning for experimental design, reasoning for chosen sample sizes, safety precautions, and how results were calculated and reported, including statistical analysis and any rules for predefining and documenting excluded data to avoid bias. similarly, a protocol may refer to the procedural methods of health organizations, commercial laboratories, manufacturing plants, etc. to ensure their activities ( e. g., blood testing at a hospital, testing of certified reference materials at a calibration laboratory, and manufacturing of transmission gears at a facility ) are consistent to a specific standard, encouraging safe use and accurate results. finally, in the field of social science, a protocol may also refer to a " descriptive record " of observed events or a " sequence of behavior " of one or more organisms, recorded during or immediately after an activity ( e. g., how an infant reacts to certain stimuli or how gorillas behave in natural habitat ) to better identify " consistent patterns and cause - effect relationships. " these protocols may take the form of hand - written journals or electronically documented media, including video and audio capture. = = experiment and study protocol = = various fields of science, such as environmental science and clinical research, require the coordinated, standardized work of many participants. 
additionally, any associated laboratory testing and experiment must be done in a way that is both ethically sound and such that results can be replicated by others using the same methods and equipment. as such, rigorous and vetted testing and experimental protocols are required. in fact, such predefined protocols are an essential component of good laboratory practice ( glp ) and good clinical practice ( gcp ) regulations. protocols written for use by a specific laboratory may incorporate or reference standard operating procedures ( sop ) governing general practices required by the laboratory. a protocol may also reference laws and regulations that apply to the procedures described. formal protocols typically require approval by one or more individuals — including, for example, a laboratory director, study director, and / or independent ethics committee — before they are implemented for general use. clearly defined protocols are also required by research
|
https://en.wikipedia.org/wiki/Protocol_(science)
|
in this paper, by calculating the dual code of the schur square of the standard twisted reed - solomon code, we give a necessary and sufficient condition for the generalized twisted reed - solomon code with $ h + t \le k - 1 $ to be self - orthogonal, where $ k $ is the dimension, $ h $ is the hook, and $ t $ is the twist. we then show that there is no self - orthogonal generalized twisted reed - solomon code under certain conditions. furthermore, several classes of self - orthogonal generalized twisted reed - solomon codes are constructed, and some of these codes are non - grs self - orthogonal mds codes or nmds codes.
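the self-orthogonality condition discussed above can be sketched numerically: a linear code with generator matrix g over gf(p) is self-orthogonal exactly when every pair of rows of g has zero inner product mod p, i.e. g g^t = 0. the snippet below is an illustrative sketch of that definition using hypothetical toy codes over gf(2), not the paper's twisted reed-solomon construction.

```python
def is_self_orthogonal(G, p):
    """Check whether the code generated by the rows of G is
    self-orthogonal over GF(p), i.e. G @ G^T == 0 (mod p)."""
    k = len(G)
    for i in range(k):
        for j in range(i, k):  # inner products are symmetric
            if sum(a * b for a, b in zip(G[i], G[j])) % p != 0:
                return False
    return True

# toy examples over GF(2): a single all-ones row of even weight is
# self-orthogonal, while the identity code is not.
print(is_self_orthogonal([[1, 1, 1, 1]], 2))    # True
print(is_self_orthogonal([[1, 0], [0, 1]], 2))  # False
```

for real coding-theory work one would use a computer algebra system rather than this hand-rolled check, but the check makes the defining condition concrete.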
|
arxiv:2201.02758
|
we demonstrate a considerable suppression of the low - field leakage through a y2o3 topgate insulator on graphene by applying high - pressure o2 at 100 atm during post - deposition annealing ( hp - pda ). consequently, the quantum capacitance measurement for the monolayer graphene reveals the largest fermi energy modulation ( ef ≈ 0.52 ev, i. e., a carrier density of ~ 2 × 10^13 cm^-2 ) reported so far for solid - state topgate insulators. hp - pda is thus a robust method to improve the electrical quality of high - k insulators on graphene.
|
arxiv:1402.6060
|
the right to be forgotten, as stated in most data regulations, poses an underexplored challenge in federated learning ( fl ), leading to the development of federated unlearning ( fu ). however, current fu approaches often face trade - offs between efficiency, model performance, forgetting efficacy, and privacy preservation. in this paper, we delve into the paradigm of federated client unlearning ( fcu ), which guarantees a client the right to erase its contribution or influence, introducing the first fu framework in medical imaging. in the unlearning process of a client, the proposed model - contrastive unlearning marks a pioneering step towards feature - level unlearning, and frequency - guided memory preservation ensures smooth forgetting of local knowledge while maintaining the generalizability of the trained global model, thus avoiding performance compromises and guaranteeing rapid post - training. we evaluated our fcu framework on two public medical image datasets, covering intracranial hemorrhage diagnosis and skin lesion diagnosis, demonstrating that our framework outperformed other state - of - the - art fu frameworks, with an expected speed - up of 10 - 15 times compared with retraining from scratch. the code and the organized datasets can be found at : https://github.com/dzp2095/fcu.
|
arxiv:2407.02356
|