text | source
---|---
fluctuations of the atomic positions are at the core of a large class of unusual material properties ranging from quantum paraelectricity to high temperature superconductivity. their measurement in solids is the subject of an intense scientific debate focused on seeking a methodology capable of establishing a direct link between the variance of the atomic displacements and experimentally measurable observables. here we address this issue by means of non - equilibrium optical experiments performed in the shot - noise limited regime. the variance of the time dependent atomic positions and momenta is directly mapped into the quantum fluctuations of the photon number of the scattered probing light. a fully quantum description of the non - linear interaction between photonic and phononic fields is benchmarked by unveiling the squeezing of thermal phonons in $\alpha$-quartz.
|
arxiv:1507.04148
|
a nonlinear langmuir wave in the kinetic regime $k \lambda_d \gtrsim 0.2$ may have a filamentation instability, where $k$ is the wavenumber and $\lambda_d$ is the debye length. the nonlinear stage of that instability develops into the filamentation of langmuir waves which in turn leads to the saturation of the stimulated raman scattering in laser - plasma interaction experiments. here we study the linear stage of the filamentation instability of the particular family \cite{roserussellpop2001} of bernstein - greene - kruskal ( bgk ) modes \cite{bernsteingreenekruskal1957} that is a bifurcation of the linear langmuir wave. performing direct $2+2d$ vlasov - poisson simulations of collisionless plasma, we find the growth rates of oblique modes of the electric field as a function of the bgk ' s amplitude, wavenumber, and the angle of the oblique mode ' s wavevector relative to the bgk ' s wavevector. simulation results are compared to theoretical predictions.
|
arxiv:1610.06137
|
the difference between the updated experimental result on the muon anomalous magnetic dipole moment and the corresponding standard model prediction is about $4.2$ standard deviations. in this work, we calculate the muon anomalous mdm at the two - loop level in the supersymmetric $b-l$ extension of the standard model. considering the experimental constraints on the lightest higgs boson mass, the higgs boson decay modes $h \rightarrow \gamma\gamma,\; ww,\; zz,\; b\bar{b},\; \tau\bar{\tau}$, the rare b decay $\bar{b} \rightarrow x_s \gamma$, and the transition magnetic moments of majorana neutrinos, we analyze the theoretical predictions of the muon anomalous magnetic dipole moment in the $b-l$ supersymmetric model. the numerical analyses indicate that the tension between the experimental measurement and the standard model prediction is remedied in the $b-l$ supersymmetric model.
|
arxiv:2104.03542
|
graph games are fundamental in strategic reasoning of multi - agent systems and their environments. we study a new family of graph games which combine stochastic environmental uncertainties and auction - based interactions among the agents, formalized as bidding games on ( finite ) markov decision processes ( mdp ). normally, on mdps, a single decision - maker chooses a sequence of actions, producing a probability distribution over infinite paths. in bidding games on mdps, two players - - called the reachability and safety players - - bid for the privilege of choosing the next action at each step. the reachability player ' s goal is to maximize the probability of reaching a target vertex, whereas the safety player ' s goal is to minimize it. these games generalize traditional bidding games on graphs, and the existing analysis techniques do not extend. for instance, the central property of traditional bidding games is the existence of a threshold budget, which is a necessary and sufficient budget to guarantee winning for the reachability player. for mdps, the threshold becomes a relation between the budgets and probabilities of reaching the target. we devise value - iteration algorithms that approximate thresholds and optimal policies for general mdps, compute exact solutions for acyclic mdps, and show that finding thresholds is at least as hard as solving simple stochastic games.
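the paper's bidding - game algorithms are not reproduced here; as background, the following is a minimal sketch of classical value iteration for the maximal probability of reaching a target set in a plain mdp, the quantity the reachability player seeks to maximize. the transition format and function names are illustrative assumptions, not from the paper.

```python
import numpy as np

def reachability_value_iteration(p, target, tol=1e-10):
    """p: list of (s x s) row-stochastic matrices, one per action;
    target: iterable of target state indices. returns, per state, the
    maximal probability of eventually reaching the target set."""
    n = p[0].shape[0]
    target = list(target)
    v = np.zeros(n)
    v[target] = 1.0
    while True:
        # one bellman backup: the single controller picks the best action
        v_new = np.max(np.stack([m @ v for m in p]), axis=0)
        v_new[target] = 1.0  # target states are absorbing wins
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
```

as the abstract notes, iteration of this kind only approximates the thresholds in the bidding setting in general, with exact solutions available for acyclic mdps.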
|
arxiv:2412.19609
|
the present paper continues the series [ v. v. vereshagin, true self - energy function and reducibility in effective scalar theories, phys. rev. d 89, 125022 ( 2014 ) ; a. vereshagin and v. vereshagin, resultant parameters of effective theory, phys. rev. d 69, 025002 ( 2004 ) ; k. semenov - tian - shansky, a. vereshagin, and v. vereshagin, s - matrix renormalization in effective theories, phys. rev. d 73, 025020 ( 2006 ) ] devoted to the systematic study of effective scattering theories. we consider matrix elements of the effective lagrangian monomials ( in the interaction picture ) of arbitrary high dimension d and show that the full set of corresponding coupling constants contains parameters of both kinds : essential and redundant. since it would be pointless to formulate renormalization prescriptions for redundant parameters, it is necessary to select the full set of the essential ones. this is done in the present paper for the case of the single scalar field.
|
arxiv:1704.01976
|
i discuss a family of statistical - mechanics models in which ( some classes of ) elements of a finite group $g$ occupy the ( directed ) edges of a lattice ; the product around any plaquette is constrained to be the group identity $e$. such a model may possess topological order, i. e. its equilibrium ensemble has distinct, symmetry - related thermodynamic components that cannot be distinguished by any local order parameter. in particular, if $g$ is a non - abelian group, the topological order may be non - abelian. criteria are given for the viability of particular models, and in particular for monte carlo updates.
|
arxiv:0910.4574
|
general criteria on spectral extremal problems for hypergraphs were developed by keevash, lenz, and mubayi in their seminal work ( siam j. discrete math., 2014 ), in which extremal results on the $\alpha$-spectral radius of hypergraphs for $\alpha > 1$ may be deduced from the corresponding hypergraph turán problem which has the stability property and whose extremal construction satisfies some continuity assumptions. using this criterion, we give two general spectral turán results for hypergraphs with bipartite or multipartite pattern, transforming the corresponding spectral turán problems into pure combinatorial problems with respect to the degree - stability of a nondegenerate k - graph family. as an application, we determine the maximum $\alpha$-spectral radius for some classes of hypergraphs and characterize the corresponding extremal hypergraphs, such as the expansion of complete graphs, the generalized fans, the cancellative hypergraphs, the generalized triangles, and a special book hypergraph.
|
arxiv:2409.17679
|
many well - known and effective anomaly detection methods assume that a reasonable decision boundary has a hypersphere shape, which however is difficult to obtain in practice and is not sufficiently compact, especially when the data are in high - dimensional spaces. in this paper, we first propose a novel deep anomaly detection model that improves the original hypersphere learning through an orthogonal projection layer, which ensures that the training data distribution is consistent with the hypersphere hypothesis, thereby increasing the true positive rate and decreasing the false negative rate. moreover, we propose a bi - hypersphere compression method to obtain a hyperspherical shell that yields a more compact decision region than a hyperball, which is demonstrated theoretically and numerically. the proposed methods are not confined to common datasets such as image and tabular data, but are also extended to a more challenging but promising scenario, graph - level anomaly detection, which learns graph representation with maximum mutual information between the substructure and global structure features while exploring orthogonal single - or bi - hypersphere anomaly decision boundaries. the numerical and visualization results on benchmark datasets demonstrate the superiority of our methods in comparison to many baselines and state - of - the - art methods.
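as a hedged illustration of the shell - shaped decision region described above, here is a minimal numpy sketch of a bi - hypersphere scoring rule; the centre, radii, and sign convention are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def shell_anomaly_score(z, c, r_min, r_max):
    """score embeddings z against a hyperspherical shell centred at c:
    points whose distance to c lies in [r_min, r_max] are treated as
    normal (non-positive score); anything too close to the centre or
    too far away is anomalous (positive score)."""
    d = np.linalg.norm(z - c, axis=-1)
    return np.maximum(r_min - d, d - r_max)
```

the compactness argument in the abstract corresponds to the shell excluding the low - density region near the centre that a plain hyperball would wrongly accept.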
|
arxiv:2302.06430
|
recent advancements have shown tensions between observations and our current understanding of the universe. such observations may include the $h_0$ tension and massive galaxies at high redshifts that are older than what traditional galaxy formation models predicted. since these observations are based on the redshift as the primary distance indicator, a bias in the redshift may explain these tensions. while the redshift follows an established model, when applied to astronomy it is based on the assumption that the rotational velocity of the milky way galaxy relative to the observed galaxies has a negligible effect on the redshift. but given the mysterious nature of the physics of galaxy rotation, that assumption should be tested. the test is done by comparing the redshift of galaxies rotating in the same direction relative to the milky way to the redshift of galaxies rotating in the opposite direction relative to the milky way. the results show that the mean redshift of galaxies that rotate in the same direction relative to the milky way is higher than the mean redshift of galaxies that rotate in the opposite direction. additionally, the redshift difference becomes larger as the redshift gets higher. the consistency of the analysis was verified by comparing data collected by three different telescopes, annotated using four different methods, released by three different research teams, and covering both the northern and southern ends of the galactic pole. all datasets are in excellent agreement with each other, showing consistency in the observed redshift bias. given the " reproducibility crisis " in science, all datasets used in this study are publicly available, and the results can be easily reproduced. the observation could be a first direct empirical reproducible observation for zwicky ' s " tired - light " model.
|
arxiv:2407.20487
|
the effects of edge irregularity and mixed edge shapes on the characteristics of graphene nanoribbon transistors are examined by self - consistent atomistic simulations based on the non - equilibrium green ' s function formalism. the minimal leakage current increases due to the localized states induced in the band gap, and the on - current decreases due to smaller quantum transmission and the self - consistent electrostatic effect in general. although the ratio between the on - current and minimal leakage current decreases, the transistor still switches even in the presence of edge roughness. the variation between devices, however, can be large, especially for a short channel length.
|
arxiv:0712.3928
|
in terms of dilatations, we prove a series of criteria for the continuous and homeomorphic extension to the boundary of mappings with finite distortion between regular domains on riemann surfaces.
|
arxiv:1604.00280
|
graph neural networks ( gnn ) are deep learning architectures for graphs. essentially, a gnn is a distributed message passing algorithm, which is controlled by parameters learned from data. it operates on the vertices of a graph : in each iteration, vertices receive a message on each incoming edge, aggregate these messages, and then update their state based on their current state and the aggregated messages. the expressivity of gnns can be characterised in terms of certain fragments of first - order logic with counting and the weisfeiler - lehman algorithm. the core gnn architecture comes in two different versions. in the first version, a message only depends on the state of the source vertex, whereas in the second version it depends on the states of the source and target vertices. in practice, both of these versions are used, but the theory of gnns so far mostly focused on the first one. on the logical side, the two versions correspond to two fragments of first - order logic with counting that we call modal and guarded. the question whether the two versions differ in their expressivity has been mostly overlooked in the gnn literature and has only been asked recently ( grohe, lics ' 23 ). we answer this question here. it turns out that the answer is not as straightforward as one might expect. by proving that the modal and guarded fragment of first - order logic with counting have the same expressivity over labelled undirected graphs, we show that in a non - uniform setting the two gnn versions have the same expressivity. however, we also prove that in a uniform setting the second version is strictly more expressive.
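to make the two message - passing versions concrete, here is a minimal numpy sketch of one gnn iteration on an undirected graph; the aggregation and update functions are illustrative placeholders, not a specific architecture from the paper.

```python
import numpy as np

def gnn_layer(x, edges, msg):
    """x: (n, d) array of vertex states; edges: (u, v) pairs, listed in
    both directions; msg: message function of (source, target) states."""
    n, d = x.shape
    inbox = [[] for _ in range(n)]
    for u, v in edges:
        inbox[v].append(msg(x[u], x[v]))          # message along u -> v
    agg = np.stack([np.sum(m, axis=0) if m else np.zeros(d) for m in inbox])
    return np.tanh(x + agg)                       # toy state update

# first version ("modal"): a message depends only on the source state.
modal_msg = lambda src, tgt: src
# second version ("guarded"): a message depends on source and target states.
guarded_msg = lambda src, tgt: src * tgt
```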
|
arxiv:2403.06817
|
let $k$ denote a field, and let $v$ denote a vector space over $k$ with finite positive dimension. we consider a pair of linear transformations $a : v \to v$ and $a^* : v \to v$ that satisfy ( i ), ( ii ) below : ( i ) there exists a basis for $v$ with respect to which the matrix representing $a$ is irreducible tridiagonal and the matrix representing $a^*$ is diagonal. ( ii ) there exists a basis for $v$ with respect to which the matrix representing $a^*$ is irreducible tridiagonal and the matrix representing $a$ is diagonal. we call such a pair a {\em leonard pair} on $v$. in this paper we investigate the commutator $aa^* - a^*a$. our results are as follows. first assume the dimension of $v$ is even. we show $aa^* - a^*a$ is invertible and display several attractive formulae for the determinant. next assume the dimension of $v$ is odd. we show that the null space of $aa^* - a^*a$ has dimension 1. we display a nonzero vector in this null space. we express this vector as a sum of eigenvectors for $a$ and as a sum of eigenvectors for $a^*$.
|
arxiv:math/0511641
|
spontaneous breaking of global supersymmetries results in goldstino fields which provide a nonlinear realisation of the supersymmetry algebra. a finite supersymmetry transformation of a goldstino field can be used to generate a superfield whose components provide a linear realisation of the supersymmetry algebra. this construction also automatically determines the action of the algebra of supercovariant derivatives on goldstino superfields, essential to the efficient computation of manifestly supersymmetric component actions for the goldstinos, including coupling to matter fields. in this paper, we extend known constructions of goldstino superfields resulting from spontaneous breaking of supersymmetry in flat four - dimensional n = 1 superspace to spontaneous breaking of n = 1 and n = 2 supersymmetry in ads$_4$.
|
arxiv:1301.4842
|
this article supplements recent work of the authors. ( 1 ) a criterion for failure of covariant finiteness of a full subcategory of $\lambda\text{-mod}$ is given, where $\lambda$ is a finite dimensional algebra. the criterion is applied to the category ${\cal p}^{\infty}(\lambda\text{-mod})$ of all finitely generated $\lambda$-modules of finite projective dimension, yielding a negative answer to the question whether ${\cal p}^{\infty}(\lambda\text{-mod})$ is always covariantly finite in $\lambda\text{-mod}$. part ( 2 ) concerns contravariant finiteness of ${\cal p}^{\infty}(\lambda\text{-mod})$. an example is given where this condition fails, the failure being, however, curable via a sequence of one - point extensions. in particular, this example demonstrates that curing failure of contravariant finiteness of ${\cal p}^{\infty}(\lambda\text{-mod})$ usually involves a tradeoff with respect to other desirable qualities of the algebra.
|
arxiv:1407.2300
|
biology is the scientific study of life and living organisms. it is a broad natural science that encompasses a wide range of fields and unifying principles that explain the structure, function, growth, origin, evolution, and distribution of life. central to biology are five fundamental themes : the cell as the basic unit of life, genes and heredity as the basis of inheritance, evolution as the driver of biological diversity, energy transformation for sustaining life processes, and the maintenance of internal stability ( homeostasis ). biology examines life across multiple levels of organization, from molecules and cells to organisms, populations, and ecosystems. subdisciplines include molecular biology, physiology, ecology, evolutionary biology, developmental biology, and systematics, among others. each of these fields applies a range of methods to investigate biological phenomena, including observation, experimentation, and mathematical modeling. modern biology is grounded in the theory of evolution by natural selection, first articulated by charles darwin, and in the molecular understanding of genes encoded in dna. the discovery of the structure of dna and advances in molecular genetics have transformed many areas of biology, leading to applications in medicine, agriculture, biotechnology, and environmental science. life on earth is believed to have originated over 3.7 billion years ago. today, it includes a vast diversity of organisms – from single - celled archaea and bacteria to complex multicellular plants, fungi, and animals. biologists classify organisms based on shared characteristics and evolutionary relationships, using taxonomic and phylogenetic frameworks. these organisms interact with each other and with their environments in ecosystems, where they play roles in energy flow and nutrient cycling. as a constantly evolving field, biology incorporates new discoveries and technologies that enhance the understanding of life and its processes, while contributing to solutions for challenges such as disease, climate change, and biodiversity loss. = = history = = the earliest roots of science, which included medicine, can be traced to ancient egypt and mesopotamia in around 3000 to 1200 bce. their contributions shaped ancient greek natural philosophy. ancient greek philosophers such as aristotle ( 384–322 bce ) contributed extensively to the development of biological knowledge. he explored biological causation and the diversity of life. his successor, theophrastus, began the scientific study of plants. scholars of the medieval islamic world who wrote on biology included al - jahiz ( 781–869 ), al - dinawari ( 828–896 ), who wrote on botany, and rhazes ( 865–925 ) who wrote on anatomy and physiology. medicine was especially well
|
https://en.wikipedia.org/wiki/Biology
|
we study heuristic algorithms for job shop scheduling problems. we compare classical approaches, such as the shifting bottleneck heuristic with novel strategies using decision diagrams. balas ' local refinement is used to improve feasible solutions. heuristic approaches are combined with mixed integer programming and constraint programming approaches. we discuss our results via computational experiments.
|
arxiv:2407.18111
|
this article describes some collaborative activities of the authors, aimed at improving science education in elementary schools. these include curriculum enhancement, development of new apparatus ( a wind tunnel ), science - education web site contributions and production of a film about the physics of flight. the output of these projects is intended to be generally accessible or reproducible.
|
arxiv:physics/0207051
|
we prove a global li - yau inequality for a general markov semigroup under a curvature - dimension condition. this inequality is stronger than all classical li - yau type inequalities known to us. on a riemannian manifold, it is equivalent to a new parabolic harnack inequality, both in negative and positive curvature, giving new bounds on the heat kernel of the semigroup. under positive curvature we moreover reach ultracontractive bounds by a direct and robust method.
|
arxiv:1412.5165
|
the responses of different common alkali halide crystals to alpha - rays and gamma - rays are tested in our research. it is found that only csi ( na ) crystals have significantly different waveforms between alpha and gamma scintillations, while the others do not show this phenomenon. it is suggested that the fast light of csi ( na ) crystals arises from the recombination of free electrons with self - trapped holes of the host crystal csi. self - absorption limits the emission of fast light of csi ( tl ) and nai ( tl ) crystals.
|
arxiv:1103.6105
|
pre - trained language models have achieved huge success on a wide range of nlp tasks. however, contextual representations from pre - trained models contain entangled semantic and syntactic information, and therefore cannot be directly used to derive useful semantic sentence embeddings for some tasks. paraphrase pairs offer an effective way of learning the distinction between semantics and syntax, as they naturally share semantics and often vary in syntax. in this work, we present parabart, a semantic sentence embedding model that learns to disentangle semantics and syntax in sentence embeddings obtained by pre - trained language models. parabart is trained to perform syntax - guided paraphrasing, based on a source sentence that shares semantics with the target paraphrase, and a parse tree that specifies the target syntax. in this way, parabart learns disentangled semantic and syntactic representations from their respective inputs with separate encoders. experiments in english show that parabart outperforms state - of - the - art sentence embedding models on unsupervised semantic similarity tasks. additionally, we show that our approach can effectively remove syntactic information from semantic sentence embeddings, leading to better robustness against syntactic variation on downstream semantic tasks.
|
arxiv:2104.05115
|
proposals for quantum computing devices are many and varied. they each have unique noise processes that make none of them fully reliable at this time. there are several error correction / avoidance techniques which are valuable for reducing or eliminating errors, but not one, alone, will serve as a panacea. one must therefore take advantage of the strength of each of these techniques so that we may extend the coherence times of the quantum systems and create more reliable computing devices. to this end we give a general strategy for using dynamical decoupling operations on encoded subspaces. these encodings may be of any form ; of particular importance are decoherence - free subspaces and quantum error correction codes. we then give means for empirically determining an appropriate set of dynamical decoupling operations for a given experiment. using these techniques, we then propose a comprehensive encoding solution to many of the problems of quantum computing proposals which use exchange - type interactions. this uses a decoherence - free subspace and an efficient set of dynamical decoupling operations. it also addresses the problems of controllability in solid state quantum dot devices.
|
arxiv:quant-ph/0210072
|
topcat is a desktop gui tool for working with tabular data such as source catalogues. among other capabilities it provides a rich set of visualisation options suitable for interactive exploration of large datasets. the latest release introduces a corner plot window which displays a grid of linked scatter - plot - like and histogram - like plots for all pair and single combinations from a supplied list of coordinates.
|
arxiv:2401.01156
|
we consider a newly - born millisecond magnetar, focusing on its interaction with the dense stellar plasma in which it is initially embedded. we argue that the confining pressure and inertia of the surrounding plasma acts to collimate the magnetar ' s poynting - flux - dominated outflow into tightly beamed jets and increases its magnetic luminosity. we propose this process as an essential ingredient in the magnetar model for gamma - ray burst and asymmetric supernova central engines. we introduce the " pulsar - in - a - cavity " as an important model problem representing a magnetized rotating neutron star inside a collapsing star. we describe its essential properties and derive simple estimates for the evolution of the magnetic field and the resulting spin - down power. we find that the infalling stellar mantle confines the magnetosphere, enabling a gradual build - up of the toroidal magnetic field due to continuous twisting. the growing magnetic pressure eventually becomes dominant, resulting in a magnetically - driven explosion. the initial phase of the explosion is quasi - isotropic, potentially exposing a sufficient amount of material to $^{56}$ni - producing temperatures to result in a bright supernova. however, if significant expansion of the star occurs prior to the explosion, then very little $^{56}$ni is produced and no supernova is expected. in either case, hoop stress subsequently collimates the magnetically - dominated outflow, leading to the formation of a magnetic tower. after the star explodes, the decrease in bounding pressure causes the magnetic outflow to become less beamed. however, episodes of late fallback can reform the beamed outflow, which may be responsible for late x - ray flares.
|
arxiv:astro-ph/0609047
|
in this short note we show that ( i ) in a qcd - like theory with four ( rather than two ) degenerate flavors $u d u' d'$, the $\pi\pi'$ scattering length is positive ( attractive ) ; and ( ii ) in qcd with only two ( u, d ) degenerate flavors the i = 2 ( say, $\pi^+\pi^+$ hadronic ) scattering length is, in the large $n_c$ limit, repulsive. $\pi ( \pi' )$ are the lowest physical states coupling to $j^p = \bar{u}(x)\gamma_5 d(x)$ and $j^{p'} = \bar{u}'(x)\gamma_5 d'(x)$, respectively.
|
arxiv:hep-th/0010250
|
in 3 - d the average projected area of a convex solid is 1 / 4 the surface area, as cauchy showed in the 19th century. in general, the ratio in n dimensions may be obtained from cauchy ' s surface area formula, which is in turn a special case of kubota ' s theorem. however, while these latter results are well - known to those working in integral geometry or the theory of convex bodies, the results are largely unknown to the physics community - - - so much so that even the 3 - d result is sometimes said to have first been proven by an astronomer in the early 20th century! this is likely because the standard proofs in the mathematical literature are, by and large, couched in terms of concepts that may not be familiar to many physicists. therefore, in this work, we present a simple geometrical method of calculating the ratio of average projected area to surface area for convex bodies in arbitrary dimensions. we focus on a pedagogical, physically intuitive treatment that it is hoped will be useful to those in the physics community. we do discuss the mathematical background of the theorem as well, pointing those who may be interested to sources that offer the proofs that are standard in the fields of integral geometry and the theory of convex bodies. we also provide discussion of the applications of the theorem, especially noting that higher - dimensional ratios may be of use for constructing observational tests of string theory. finally, we examine the limiting behavior of the ratio with the goal of offering intuition on its behavior by pointing out a suggestive connection with a well - known fact in statistics.
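for reference, one standard statement of cauchy's surface area formula in $n$ dimensions, written with the volume $\kappa_n$ of the unit $n$-ball, is:

```latex
% mean (n-1)-volume of the orthogonal projections of a convex body k in r^n,
% relative to its surface area s(k):
\[
  \frac{\langle A_{\mathrm{proj}} \rangle}{S(K)}
  = \frac{\kappa_{n-1}}{n\,\kappa_{n}},
  \qquad
  \kappa_{n} = \frac{\pi^{n/2}}{\Gamma\left(\frac{n}{2}+1\right)}.
\]
```

for $n = 3$ the prefactor is $\kappa_2/(3\kappa_3) = \pi/(4\pi) = 1/4$, recovering the classical ratio quoted above.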
|
arxiv:1109.0595
|
most of the work on the structural nested model and g - estimation for causal inference in longitudinal data assumes a discrete - time underlying data generating process. however, in some observational studies, it is more reasonable to assume that the data are generated from a continuous - time process and are only observable at discrete time points. when these circumstances arise, the sequential randomization assumption in the observed discrete - time data, which is essential in justifying discrete - time g - estimation, may not be reasonable. under a deterministic model, we discuss other useful assumptions that guarantee the consistency of discrete - time g - estimation. in more general cases, when those assumptions are violated, we propose a controlling - the - future method that performs at least as well as g - estimation in most scenarios and which provides consistent estimation in some cases where g - estimation is severely inconsistent. we apply the methods discussed in this paper to simulated data, as well as to a data set collected following a massive flood in bangladesh, estimating the effect of diarrhea on children ' s height. results from different methods are compared in both simulation and the real application.
|
arxiv:1103.1472
|
as large - scale language models become the standard for text generation, there is a greater need to tailor the generations to be more or less concise, targeted, and informative, depending on the audience / application. existing control approaches primarily adjust the semantic ( e. g., emotion, topics ), structural ( e. g., syntax tree, parts - of - speech ), and lexical ( e. g., keyword / phrase inclusion ) properties of text, but are insufficient to accomplish complex objectives such as pacing, which controls the complexity and readability of the text. in this paper, we introduce cev - lm, a lightweight, semi - autoregressive language model that utilizes constrained edit vectors to control three complementary metrics ( speed, volume, and circuitousness ) that quantify the shape of text ( e. g., pacing of content ). we study an extensive set of state - of - the - art ctg models and find that cev - lm provides significantly more targeted and precise control of these three metrics while preserving semantic content, using less training data, and containing fewer parameters.
|
arxiv:2402.14290
|
recently, a complete set of differential equations for the effective neutrino masses and mixing parameters in matter have been derived to characterize their evolution with respect to the ordinary matter term $a \equiv 2\sqrt{2} g_{\rm f} n_e e$, in analogy with the renormalization - group equations ( rges ) for running parameters. via series expansion in terms of the small ratio $\alpha_{\rm c} \equiv \delta_{21}/\delta_{\rm c}$, we obtain approximate analytical solutions to the rges of the effective neutrino parameters and make several interesting observations. first, at the leading order, $\widetilde{\theta}_{12}$ and $\widetilde{\theta}_{13}$ are given by the simple formulas in the two - flavor mixing limit, while $\widetilde{\theta}_{23} \approx \theta_{23}$ and $\widetilde{\delta} \approx \delta$ are not changed by matter effects. second, the ratio of the matter - corrected jarlskog invariant $\widetilde{\cal j}$ to its counterpart in vacuum ${\cal j}$ approximates to $\widetilde{\cal j}/{\cal j} \approx 1/(\widehat{c}_{12} \widehat{c}_{13})$, where $\widehat{c}_{12} \equiv \sqrt{1 - 2 a_* \cos 2\theta_{12} + a_*^2}$ with $a_* \equiv a/\delta_{21}$ and $\widehat{c}_{13} \equiv \sqrt{1 - 2 a_{\rm c} \cos 2\theta_{13} + a_{\rm c}^2}$ with $a_{\rm c} \equiv a/\delta_{\rm c}$.
|
arxiv:1901.10882
|
contrastive learning applied to self - supervised representation learning has seen a resurgence in recent years, leading to state of the art performance in the unsupervised training of deep image models. modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as triplet, max - margin and the n - pairs loss. in this work, we extend the self - supervised batch contrastive approach to the fully - supervised setting, allowing us to effectively leverage label information. clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. we analyze two possible versions of the supervised contrastive ( supcon ) loss, identifying the best - performing formulation of the loss. on resnet - 200, we achieve top - 1 accuracy of 81.4% on the imagenet dataset, which is 0.8% above the best number reported for this architecture. we show consistent outperformance over cross - entropy on other datasets and two resnet variants. the loss shows benefits for robustness to natural corruptions and is more stable to hyperparameter settings such as optimizers and data augmentations. our loss function is simple to implement, and reference tensorflow code is released at https://t.ly/supcon.
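a minimal numpy sketch of one common formulation of the supervised contrastive loss described above; the released tensorflow code is the reference implementation, so this is only an illustration.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """z: (n, d) embeddings; labels: (n,) integer array of class labels."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # l2-normalise
    logits = z @ z.T / tau                              # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    eye = np.eye(len(labels), dtype=bool)
    denom = (np.exp(logits) * ~eye).sum(axis=1, keepdims=True)
    log_prob = logits - np.log(denom)                   # log-softmax over non-self pairs
    pos = (labels[:, None] == labels[None, :]) & ~eye   # same-class positives
    per_anchor = (log_prob * pos).sum(1) / np.maximum(pos.sum(1), 1)
    return -per_anchor.mean()                           # pull positives, push the rest
```

anchors without any positive in the batch contribute zero here; how to handle them is a detail a practical implementation must decide.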
|
arxiv:2004.11362
|
in this report we review the microscopic formulation of the five dimensional black hole of type iib string theory in terms of the d1 - d5 brane system. the emphasis here is more on the brane dynamics than on supergravity solutions. we show how the low energy brane dynamics, combined with crucial inputs from ads / cft correspondence, leads to a derivation of black hole thermodynamics and the rate of hawking radiation. our approach requires a detailed exposition of the gauge theory and conformal field theory of the d1 - d5 system. we also discuss some applications of the ads / cft correspondence in the context of black hole formation in three dimensions by thermal transition and by collision of point particles.
|
arxiv:hep-th/0203048
|
recent breakthroughs in vision - language models ( vlms ) start a new page in the vision community. the vlms provide stronger and more generalizable feature embeddings compared to those from imagenet - pretrained models, thanks to the training on the large - scale internet image - text pairs. however, despite the amazing achievement from the vlms, vanilla vision transformers ( vits ) remain the default choice for the image encoder. although pure transformer proves its effectiveness in the text encoding area, it remains questionable whether it is also the case for image encoding, especially considering that various types of networks are proposed on the imagenet benchmark, which, unfortunately, are rarely studied in vlms. due to small data / model scale, the original conclusions of model design on imagenet can be limited and biased. in this paper, we aim at building an evaluation protocol of vision models in the vision - language era under the contrastive language - image pretraining ( clip ) framework. we provide a comprehensive way to benchmark different vision models, covering their zero - shot performance and scalability in both model and training data sizes. to this end, we introduce vitamin, a new vision model tailored for vlms. vitamin - l significantly outperforms vit - l by 2.0% imagenet zero - shot accuracy, when using the same publicly available datacomp - 1b dataset and the same openclip training scheme. vitamin - l presents promising results on 60 diverse benchmarks, including classification, retrieval, open - vocabulary detection and segmentation, and large multi - modal models. when further scaling up the model size, our vitamin - xl with only 436m parameters attains 82.9% imagenet zero - shot accuracy, surpassing 82.0% achieved by eva - e that has ten times more parameters ( 4.4b ).
|
arxiv:2404.02132
|
here, we propose a conceptual approach for design of an ultracompact nanoscale passive optical circulator based on the excitation of plasmonic resonances. we study a three - port y - junction with a deep subwavelength plasmonic nanorod structure integrated into its core. we show theoretically that such a structure immersed in the magneto - optical media may function as magnetically tunable scatterer tilting and rotating its near - field distribution and corresponding radiation. we demonstrate, using numerical simulations, that such a rotation of the near - field radiation yields a break in the symmetry of the coupling between the junction arms and the structure in such a way that the signal launched from any of the three ports is mostly transmitted into the next port in the circular order, while the other port is essentially isolated, thus providing the functionality of an optical circulator with subwavelength dimensions.
|
arxiv:1302.5300
|
starting from antonov ' s discovery that there is no maximum to the entropy of a gravitating system of point particles at fixed energy in a spherical box if the density contrast between centre and edge exceeds 709, we review progress in the understanding of gravitational thermodynamics. we pinpoint the error in the proof that all systems have positive specific heat and say when it can occur. we discuss the development of the thermal runaway in both the gravothermal catastrophe and its inverse. the energy range over which microcanonical ensembles have negative heat capacity is replaced by a first order phase transition in the corresponding canonical ensembles. we conjecture that all first order phase transitions may be viewed as caused by negative heat capacities of units within them. we find such units in the theory of ionisation, chemical dissociation and in the van der waals gas so these concepts are applicable outside the realm of stars, star clusters and black holes.
|
arxiv:cond-mat/9812172
|
this chapter first presents a rather personal view of some different aspects of predictability, going in crescendo from simple linear systems to high - dimensional nonlinear systems with stochastic forcing, which exhibit emergent properties such as phase transitions and regime shifts. then, a detailed correspondence between the phenomenology of earthquakes, financial crashes and epileptic seizures is offered. the presented statistical evidence provides the substance of a general phase diagram for understanding the many facets of the spatio - temporal organization of these systems. a key insight is to organize the evidence and mechanisms in terms of two summarizing measures : ( i ) amplitude of disorder or heterogeneity in the system and ( ii ) level of coupling or interaction strength among the system ' s components. on the basis of the recently identified remarkable correspondence between earthquakes and seizures, we present detailed information on a class of stochastic point processes that has been found to be particularly powerful in describing earthquake phenomenology and which, we think, has a promising future in epileptology. the so - called self - exciting hawkes point processes capture parsimoniously the idea that events can trigger other events, and their cascades of interactions and mutual influence are essential to understand the behavior of these systems.
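as a hedged illustration of the self - exciting mechanism, the following sketch simulates a hawkes process with an exponential kernel via ogata's thinning method; the parameter names are illustrative and this is not code from the chapter.

```python
import numpy as np

def intensity(t, events, mu, alpha, beta):
    """conditional intensity mu + sum_i alpha*beta*exp(-beta*(t - t_i))."""
    ev = np.asarray(events)
    return mu + (alpha * beta * np.exp(-beta * (t - ev)).sum() if ev.size else 0.0)

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """ogata thinning; requires branching ratio alpha < 1 for stability."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        # valid upper bound: the intensity only decays until the next event
        lam_bar = intensity(t, events, mu, alpha, beta)
        t += rng.exponential(1.0 / lam_bar)
        if t > t_max:
            return events
        if rng.uniform() < intensity(t, events, mu, alpha, beta) / lam_bar:
            events.append(t)  # accepted point; raises the intensity afterwards
```

each accepted event raises the intensity for a while, so events trigger further events, the cascading behaviour the chapter argues is common to earthquakes, crashes and seizures.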
|
arxiv:1007.2420
|
a multi - objective optimization problem is $c^r$ weakly simplicial if there exists a $c^r$ surjection from a simplex onto the pareto set / front such that the image of each subsimplex is the pareto set / front of a subproblem, where $0 \leq r \leq \infty$. this property is helpful to compute a parametric - surface approximation of the entire pareto set and pareto front. it is known that all unconstrained strongly convex $c^r$ problems are $c^{r-1}$ weakly simplicial for $1 \leq r \leq \infty$. in this paper, we show that all unconstrained strongly convex problems are $c^0$ weakly simplicial. the usefulness of this theorem is demonstrated in a sparse modeling application : we reformulate the elastic net as a non - differentiable multi - objective strongly convex problem and approximate its pareto set ( the set of all trained models with different hyper - parameters ) and pareto front ( the set of performance metrics of the trained models ) by using a bézier simplex fitting method, which accelerates hyper - parameter search.
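the bézier simplex fitting method itself is not reproduced here; as a hedged illustration of the multi - objective view of the elastic net, the following sketch computes individual pareto - optimal points by weighted - sum scalarisation of the three objectives ( squared loss, l1 norm, squared l2 norm ), each solved with ista ( proximal gradient descent ). all names are assumptions for illustration.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def elastic_net_point(a, b, w, n_iter=500):
    """one pareto-optimal point for simplex weights w = (w_loss, w_l1, w_l2)
    on the objectives (0.5*||a x - b||^2, ||x||_1, ||x||_2^2)."""
    w_loss, w_l1, w_l2 = w
    x = np.zeros(a.shape[1])
    lip = w_loss * np.linalg.norm(a, 2) ** 2 + 2 * w_l2  # gradient lipschitz constant
    step = 1.0 / max(lip, 1e-12)
    for _ in range(n_iter):
        grad = w_loss * a.T @ (a @ x - b) + 2 * w_l2 * x  # smooth part
        x = soft_threshold(x - step * grad, step * w_l1)  # prox of the l1 part
    return x
```

sweeping w over a simplex traces an approximation of the pareto set, the kind of object a bézier simplex is then fitted to.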
|
arxiv:2106.12704
|
acceptance criteria – including " what the part should look like if you ' ve made it correctly. " the service of this goal is what creates a drawing that one even could scale and get an accurate dimension thereby. and thus the great temptation to do so, when a dimension is wanted but was not labeled. the second principle – that even though scaling the drawing will usually work, one should nevertheless never do it – serves several goals, such as enforcing total clarity regarding who has authority to discern design intent, and preventing erroneous scaling of a drawing that was never drawn to scale to begin with ( which is typically labeled " drawing not to scale " or " scale : nts " ). when a user is forbidden from scaling the drawing, they must turn instead to the engineer ( for the answers that the scaling would seek ), and they will never erroneously scale something that is inherently unable to be accurately scaled. but in some ways, the advent of the cad and mbd era challenges these assumptions that were formed many decades ago. when part definition is defined mathematically via a solid model, the assertion that one cannot interrogate the model – the direct analog of " scaling the drawing " – becomes ridiculous ; because when part definition is defined this way, it is not possible for a drawing or model to be " not to scale ". a 2d pencil drawing can be inaccurately foreshortened and skewed ( and thus not to scale ), yet still be a completely valid part definition as long as the labeled dimensions are the only dimensions used, and no scaling of the drawing by the user occurs. this is because what the drawing and labels convey is in reality a symbol of what is wanted, rather than a true replica of it. ( for example, a sketch of a hole that is clearly not round still accurately defines the part as having a true round hole, as long as the label says " 10mm dia ", because the " dia " implicitly but objectively tells the user that the skewed drawn circle is a symbol representing a perfect circle. ) but if a mathematical model – essentially, a vector graphic – is declared to be the official definition of the part, then any amount of " scaling the drawing " can make sense ; there may still be an error in the model, in the sense that what was intended is not depicted ( modeled ) ; but there can be no error of the " not to scale " type – because the mathematical vectors and curves are replicas, not symbols, of the part features. even in
|
https://en.wikipedia.org/wiki/Engineering_drawing
|
fese0.5te0.5 thin films were grown by pulsed laser deposition on caf2, laalo3 and mgo substrates and structurally and electro - magnetically characterized in order to study the influence of the substrate on their transport properties. the in - plane lattice mismatch between fese0.5te0.5 bulk and the substrates shows no influence on the lattice parameters of the films, whereas the type of substrate affects the crystalline quality of the films and, therefore, the superconducting properties. the film on mgo showed an extra peak in the angular dependence of the critical current density $j_c(\theta)$ at $\theta = 180^\circ$ ( $h \parallel c$ ), which arises from c - axis defects as confirmed by transmission electron microscopy. in contrast, no $j_c(\theta)$ peaks for $h \parallel c$ were observed in films on caf2 and laalo3. $j_c(\theta)$ can be scaled successfully for both films without c - axis correlated defects by the anisotropic ginzburg - landau ( agl ) approach with an appropriate anisotropy ratio $\gamma_j$. the scaling parameter $\gamma_j$ decreases with decreasing temperature, which is different from what we observed in fese0.5te0.5 films on fe - buffered mgo substrates.
|
arxiv:1504.04004
|
increasing ellipticity usually suppresses the recollision probability drastically. in contrast, we report on a recollision channel with large return energy and a substantial probability, regardless of the ellipticity. the laser envelope plays a dominant role in the energy gained by the electron, and in the conditions under which the electron comes back to the core. we show that this recollision channel efficiently triggers multiple ionization with an elliptically polarized pulse.
|
arxiv:1905.05989
|
in this paper, we define the core entropy for postcritically - finite newton maps and study its continuity within this family. we show that the entropy function is not continuous in this family, which is different from the polynomial case studied by thurston, gao, dudko - schleicher, tiozzo [ th +, gt, ds, ti2 ], and describe completely the continuity of the entropy function at generic parameters.
|
arxiv:1906.01523
|
this paper has been withdrawn.
|
arxiv:math/0512426
|
topological photonics harnesses the physics of topological insulators to control the behavior of light. photonic modes robust against material imperfections are an example of such control. in this work, we propose a soft - matter platform based on nematic liquid crystals that supports photonic topological insulators. the orientation of liquid crystal molecules introduces an extra geometric degree of freedom which in conjunction with suitably designed structural properties, leads to the creation of topologically protected states of light. the use of soft building blocks potentially allows for reconfigurable systems that exploit the interplay between light and the soft responsive medium.
|
arxiv:2005.02476
|
astrophysical black hole candidates, although long thought to have a horizon, could be horizonless ultra - compact objects. this intriguing possibility is motivated by the black hole information paradox and a plausible fundamental connection with quantum gravity. asymptotically free quadratic gravity is considered here as the uv completion of general relativity. a classical theory that captures its main features is used to search for solutions as sourced by matter. we find that sufficiently dense matter produces a novel horizonless configuration, the 2 - 2 - hole, which closely matches the exterior schwarzschild solution down to about a planck proper length of the would - be horizon. the 2 - 2 - hole is characterized by an interior with a shrinking volume and a seemingly innocuous timelike curvature singularity. the interior also has a novel scaling behavior with respect to the physical mass of the 2 - 2 - hole. this leads to an extremely deep gravitational potential in which particles get efficiently trapped via collisions. as a generic static solution, the 2 - 2 - hole may then be the nearly black endpoint of gravitational collapse. there is a considerable time delay for external probes of the 2 - 2 - hole interior, and this determines the spacing of echoes in a post - merger gravitational wave signal.
|
arxiv:1612.04889
|
. for example, the normal subgroups that are so important in group theory are those subgroups that are stable under the inner automorphisms of the ambient group. in linear algebra, if a linear transformation t has an eigenvector v, then the line through 0 and v is an invariant set under t, in which case the eigenvectors span an invariant subspace which is stable under t. when t is a screw displacement, the screw axis is an invariant line, though if the pitch is non - zero, t has no fixed points. in probability theory and ergodic theory, invariant sets are usually defined via the stronger property $x \in s \leftrightarrow t(x) \in s$. when the map $t$ is measurable, invariant sets form a sigma - algebra, the invariant sigma - algebra. = = formal statement = = the notion of invariance is formalized in three different ways in mathematics : via group actions, presentations, and deformation. = = = unchanged under group action = = = firstly, if one has a group g acting on a mathematical object ( or set of objects ) x, then one may ask which points x are unchanged, " invariant " under the group action, or under an element g of the group. frequently one will have a group acting on a set x, which leaves one to determine which objects in an associated set f ( x ) are invariant. for example, rotation in the plane about a point leaves the point about which it rotates invariant, while translation in the plane does not leave any points invariant, but does leave all lines parallel to the direction of translation invariant as lines. formally, define the set of lines in the plane p as l ( p ) ; then a rigid motion of the plane takes lines to lines – the group of rigid motions acts on the set of lines – and one may ask which lines are unchanged by an action. more importantly, one may define a function on a set, such as " radius of a circle in the plane ", and then ask if this function is invariant under a group action, such as rigid motions. dual to the notion of invariants are coinvariants, also known as orbits, which formalizes the notion of congruence : objects which can be taken to each other by a group action. for example, under the group of rigid motions of the plane, the perimeter of a triangle
|
https://en.wikipedia.org/wiki/Invariant_(mathematics)
|
multiple p - and s - polarized compound surface plasmon - polariton ( spp ) waves at a fixed frequency can be guided by a structure consisting of a metal layer sandwiched between a homogeneous isotropic dielectric ( hid ) material and a periodic multilayered isotropic dielectric ( pmlid ) material. for any thickness of the metal layer, at least one compound spp wave must exist. it possesses the p - polarization state, is strongly bound to the metal / hid interface when the metal thickness is large but to both metal / dielectric interfaces when the metal thickness is small. when the metal layer vanishes, this compound spp wave transmutes into a tamm wave. additional compound spp waves exist, depending on the thickness of the metal layer, the relative permittivity of the hid material, and the period and the composition of the pmlid material. some of these are p polarized, the others being s polarized. all of them differ in phase speed, attenuation rate, and field profile, even though all are excitable at the same frequency. the multiplicity and the dependence of the number of compound spp waves on the relative permittivity of the hid material when the metal layer is thin could be useful for optical sensing applications.
|
arxiv:1506.08753
|
we show that, under the assumption of the existence of $m_1^{\#}$, there exists a model on which the restricted nonstationary ideal $\hbox{ns} \upharpoonright a$ is $\aleph_2$ - saturated, for $a$ a stationary co - stationary subset of $\omega_1$, while the full nonstationary ideal $\hbox{ns}$ can be made $\delta_1$ definable with $k_{\omega_1}$ as a parameter. further we show, again under the assumption of the existence of $m_1^{\#}$, that there is a model of set theory such that $\hbox{ns}$ is $\aleph_2$ - saturated and such that there is a lightface $\sigma^1_4$ - definable well - order on the reals. this result is optimal in the presence of a measurable cardinal.
|
arxiv:1610.04039
|
we continue our study of the distribution of the maximal number $x^{\ast}_k$ of offsprings amongst all individuals in a critical galton - watson process started with $k$ ancestors, treating the case when the reproduction law has a regularly varying tail $\bar{f}$ with index $-\alpha$ for $\alpha > 2$ ( and hence finite variance ). we show that $x^{\ast}_k$, suitably normalized, converges in distribution to a fréchet law with shape parameter $\alpha/2$ ; this contrasts sharply with the case $1 < \alpha < 2$ when the variance is infinite. more generally, we obtain a weak limit theorem for the offspring sequence ranked in decreasing order, in terms of atoms of a certain doubly stochastic poisson measure.
|
arxiv:1209.3854
|
using data from the focus ( e831 ) experiment at fermilab, we present a model independent partial - wave analysis of the $k^-\pi^+$ s - wave amplitude from the decay $d^+ \to k^-\pi^+\pi^+$. the s - wave is a generic complex function to be determined directly from the data fit. the p - and d - waves are parameterized by a sum of breit - wigner amplitudes. the measurement of the s - wave amplitude covers the whole elastic range of the $k^-\pi^+$ system.
|
arxiv:0905.4846
|
dna as a data storage medium has several advantages, including far greater data density compared to electronic media. we propose that schemes for data storage in the dna of living organisms may benefit from studying the reconstruction problem, which is applicable whenever multiple reads of noisy data are available. this strategy is uniquely suited to the medium, which inherently replicates stored data in multiple distinct ways, caused by mutations. we consider noise introduced solely by uniform tandem - duplication, and utilize the relation to constant - weight integer codes in the manhattan metric. by bounding the intersection of the cross - polytope with hyperplanes, we prove the existence of reconstruction codes with greater capacity than known error - correcting codes, which we can determine analytically for any set of parameters.
|
arxiv:1801.06022
|
we experimentally and theoretically investigate the influence of the magnetic component of an electromagnetic field on high - order above - threshold ionization of xenon atoms driven by ultrashort femtosecond laser pulses. the nondipole shift of the electron momentum distribution along the light - propagation direction for high energy electrons beyond the classical cutoff is found to be vastly different from that below the cutoff. a v - shape structure in the momentum dependence of the nondipole shift above the cutoff is identified for the first time. with the help of classical and quantum - orbit analysis, we show that large - angle rescattering of the electrons strongly alters the partitioning of the photon momentum between electron and ion. the sensitivity of the observed nondipole shift to the electronic structure of the target atom is confirmed by three - dimensional time - dependent schrödinger equation simulations for different model potentials.
|
arxiv:2110.08601
|
experiments on a sufficiently disordered two - dimensional ( 2d ) electron system in silicon reveal a new and unexpected kind of metallic behavior, where the conductivity decreases as $\sigma(n_s, t) = \sigma(n_s, t=0) + a(n_s)\, t^2$ ( $n_s$ - carrier density ) to a non - zero value as temperature $t \to 0$. in 2d, the existence of a metal with $d\sigma/dt > 0$ is very surprising. in addition, a novel type of a metal - insulator transition obtains, which is unlike any known quantum phase transition in 2d.
|
arxiv:cond-mat/9903236
|
earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecosystems are affected by humans. glaciology is the study of the cryosphere, including glaciers and coverage of the earth by ice and snow. concerns of glaciology include access to glacial freshwater, mitigation of glacial hazards, obtaining resources that exist beneath frozen land, and addressing the effects of climate change on the cryosphere. = = ecology = = ecology is the study of the biosphere. this includes the study of nature and of how living things interact with the earth and one another and the consequences of that. it considers how living things use resources such as oxygen, water, and nutrients from the earth to sustain themselves. it also considers how humans and other living creatures cause changes to nature. = = physical geography = = physical geography is the study of earth ' s systems and how they interact with one another as part of a single self - contained system. it incorporates astronomy, mathematical geography, meteorology, climatology, geology, geomorphology, biology, biogeography, pedology, and soils geography. physical geography is distinct from human geography, which studies the human populations on earth, though it does include human effects on the environment. = = methodology = = methodologies vary depending on the nature of the subjects being studied. studies typically fall into one of three categories : observational, experimental, or theoretical. earth scientists often conduct sophisticated computer analysis or visit an interesting location to study earth phenomena (
|
https://en.wikipedia.org/wiki/Earth_science
|
the level set estimation problem seeks to find all points in a domain ${\cal X}$ where the value of an unknown function $f : {\cal X} \rightarrow \mathbb{R}$ exceeds a threshold $\alpha$. the estimation is based on noisy function evaluations that may be acquired at sequentially and adaptively chosen locations in ${\cal X}$. the threshold value $\alpha$ can either be \emph{explicit} and provided a priori, or \emph{implicit} and defined relative to the optimal function value, i.e. $\alpha = (1-\epsilon)f(x_\ast)$ for a given $\epsilon > 0$, where $f(x_\ast)$ is the maximal function value and is unknown. in this work we provide a new approach to the level set estimation problem by relating it to recent adaptive experimental design methods for linear bandits in the reproducing kernel hilbert space ( rkhs ) setting. we assume that $f$ can be approximated by a function in the rkhs up to an unknown misspecification, and provide novel algorithms for both the implicit and explicit cases in this setting with strong theoretical guarantees. moreover, in the linear ( kernel ) setting, we show that our bounds are nearly optimal, namely, our upper bounds match existing lower bounds for threshold linear bandits. to our knowledge this work provides the first instance - dependent, non - asymptotic upper bounds on sample complexity of level - set estimation that match information theoretic lower bounds.
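a minimal sketch of the explicit-threshold variant on a finite candidate set, assuming sub-gaussian noise; the successive-elimination rule and confidence radius below are generic illustrations, not the paper's rkhs-based algorithm:

```python
import numpy as np

def level_set_points(f_noisy, xs, alpha, n_rounds=50, delta=0.05):
    """Sample each still-ambiguous point and classify it as above/below
    the threshold alpha once its confidence interval clears alpha."""
    n = len(xs)
    sums, counts = np.zeros(n), np.zeros(n)
    above, below, active = set(), set(), set(range(n))
    for t in range(1, n_rounds + 1):
        for i in list(active):
            sums[i] += f_noisy(xs[i])
            counts[i] += 1
            mean = sums[i] / counts[i]
            # anytime Hoeffding-style radius; assumes 1-sub-Gaussian noise
            rad = np.sqrt(2 * np.log(2 * n * t ** 2 / delta) / counts[i])
            if mean - rad > alpha:
                above.add(i); active.discard(i)
            elif mean + rad < alpha:
                below.add(i); active.discard(i)
    return above, below, active  # active = still undecided
```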
|
arxiv:2111.01768
|
general for degree 5 and higher. in the quadratic formula, changing the sign ( permuting the resulting two solutions ) can be viewed as a ( very simple ) group operation. analogous galois groups act on the solutions of higher - degree polynomial equations and are closely related to the existence of formulas for their solution. abstract properties of these groups ( in particular their solvability ) give a criterion for the ability to express the solutions of these polynomials using solely addition, multiplication, and roots similar to the formula above. modern galois theory generalizes the above type of galois groups by shifting to field theory and considering field extensions formed as the splitting field of a polynomial. this theory establishes, via the fundamental theorem of galois theory, a precise relationship between fields and groups, underlining once again the ubiquity of groups in mathematics. = = finite groups = = a group is called finite if it has a finite number of elements. the number of elements is called the order of the group. an important class is the symmetric groups $\mathrm{S}_n$, the groups of permutations of $n$ objects. for example, the symmetric group on 3 letters $\mathrm{S}_3$ is the group of all possible reorderings of the objects. the three letters abc can be reordered into abc, acb, bac, bca, cab, cba, forming in total 6 ( factorial of 3 ) elements. the group operation is composition of these reorderings, and the identity element is the reordering operation that leaves the order unchanged. this class is fundamental insofar as any finite group can be expressed as a subgroup of a symmetric group $\mathrm{S}_n$ for a suitable integer $n$, according to cayley ' s theorem. parallel to the group of symmetries of the square above, $\mathrm{S}_3$ can also be interpreted as the group of symmetries of an equilateral triangle. the order of an element $a$ in a group $g$ is the least positive integer $n$ such that $a^n = e$, where $a$
|
https://en.wikipedia.org/wiki/Group_(mathematics)
|
we calculate the maximum lyapunov exponent of the motion in the separatrix map ' s chaotic layer, along with calculation of its width, as functions of the adiabaticity parameter $\lambda$. the separatrix map is set in natural variables, and the case of the layer ' s least perturbed border is considered, i.e., the winding number of the layer ' s border ( the last invariant curve ) is the golden mean. although these two dependences ( for the lyapunov exponent and the layer width ) are strongly non - monotonous and evade any simple analytical description, the calculated dynamical entropy $h$ turns out to be a close - to - linear function of $\lambda$. in other words, if normalized by $\lambda$, the entropy is a quasi - constant. we discuss whether the function $h(\lambda)$ can be in fact exactly linear, $h \propto \lambda$. the function $h(\lambda)$ forms a basis for calculating the dynamical entropy for any perturbed nonlinear resonance in the first fundamental model, as soon as the corresponding melnikov - arnold integral is estimated.
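for intuition, a sketch of how a maximum lyapunov exponent of a separatrix-type map can be estimated by tangent-vector renormalization; the map form below is the standard chirikov separatrix map, which may differ in detail from the natural-variable form used in the paper:

```python
import numpy as np

def lyapunov_separatrix_map(lam, n_iter=10**6, y0=1e-3, x0=1.0):
    """Estimate the maximum Lyapunov exponent of the separatrix map
        y' = y + sin(x),   x' = x + lam * ln|y'|   (mod 2*pi)
    by evolving a tangent vector with the Jacobian and renormalizing."""
    y, x = y0, x0
    v = np.array([1.0, 0.0])
    log_sum = 0.0
    for _ in range(n_iter):
        y1 = y + np.sin(x)
        # Jacobian of the map at (y, x), state ordered as (y, x)
        J = np.array([[1.0, np.cos(x)],
                      [lam / y1, 1.0 + (lam / y1) * np.cos(x)]])
        x = (x + lam * np.log(abs(y1))) % (2.0 * np.pi)
        y = y1
        v = J @ v
        norm = np.linalg.norm(v)
        log_sum += np.log(norm)
        v /= norm
    return log_sum / n_iter  # exponent per map iteration
```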
|
arxiv:2503.13667
|
we calibrate an effective - one - body ( eob ) model to numerical - relativity simulations of mass ratios 1, 2, 3, 4, and 6, by maximizing phase and amplitude agreement of the leading ( 2, 2 ) mode and of the subleading modes ( 2, 1 ), ( 3, 3 ), ( 4, 4 ) and ( 5, 5 ). aligning the calibrated eob waveforms and the numerical waveforms at low frequency, the phase difference of the ( 2, 2 ) mode between model and numerical simulation remains below 0. 1 rad throughout the evolution for all mass ratios considered. the fractional amplitude difference at peak amplitude of the ( 2, 2 ) mode is 2 % and grows to 12 % during the ringdown. using the advanced ligo noise curve we study the effectualness and measurement accuracy of the eob model, and stress the relevance of modeling the higher - order modes for parameter estimation. we find that the effectualness, measured by the mismatch, between the eob and numerical - relativity polarizations which include only the ( 2, 2 ) mode is smaller than 0. 2 % for binaries with total mass 20 - 200 msun and mass ratios 1, 2, 3, 4, and 6. when numerical - relativity polarizations contain the strongest seven modes, and stellar - mass black holes with masses less than 50msun are considered, the mismatch for mass ratio 6 ( 1 ) can be as high as 5 % ( 0. 2 % ) when only the eob ( 2, 2 ) mode is included, and an upper bound of the mismatch is 0. 5 % ( 0. 07 % ) when all the four subleading eob modes calibrated in this paper are taken into account. for binaries with intermediate - mass black holes with masses greater than 50msun the mismatches are larger. we also determine for which signal - to - noise ratios the eob model developed here can be used to measure binary parameters with systematic biases smaller than statistical errors due to detector noise.
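as an illustration of the mismatch figure of merit used here, a sketch of a noise-weighted overlap between two frequency-domain waveforms; only a constant relative phase is maximized over (time-shift maximization via an inverse fft is omitted for brevity):

```python
import numpy as np

def mismatch(h1, h2, psd, df):
    """1 - overlap of two frequency-domain waveforms on the same grid.
    psd: one-sided detector noise power spectral density."""
    def norm2(a):
        return 4.0 * df * np.real(np.sum(a * np.conj(a) / psd))
    z = 4.0 * df * np.sum(h1 * np.conj(h2) / psd)   # complex overlap
    return 1.0 - np.abs(z) / np.sqrt(norm2(h1) * norm2(h2))
```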
|
arxiv:1106.1021
|
alzheimer ' s disease is a common cognitive disorder in the elderly. early and accurate diagnosis of alzheimer ' s disease ( ad ) has a major impact on the progress of research on dementia. at present, researchers have used machine learning methods to detect alzheimer ' s disease from the speech of participants. however, the recognition accuracy of current methods is unsatisfactory, and most of them focus on using low - dimensional handcrafted features to extract relevant information from audio. this paper proposes an alzheimer ' s disease detection system based on the pre - trained framework wav2vec 2. 0 ( wav2vec2 ). in addition, by replacing the loss function with a soft - weighted cross - entropy loss function, we achieve 85. 45 % recognition accuracy on the same test dataset.
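the exact form of the soft-weighted cross-entropy loss is not given in this summary; a common class-weighted variant in pytorch looks like the following, with hypothetical weights:

```python
import torch
import torch.nn.functional as F

def weighted_cross_entropy(logits, targets, class_weights):
    """Cross-entropy with per-class weights, e.g. to emphasize the
    minority class in an AD / control classification task."""
    return F.cross_entropy(logits, targets, weight=class_weights)

# toy usage: batch of 4 utterances, 2 classes (0 = control, 1 = AD)
logits = torch.randn(4, 2)
targets = torch.tensor([0, 1, 1, 0])
loss = weighted_cross_entropy(logits, targets, torch.tensor([0.7, 1.3]))
```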
|
arxiv:2402.11931
|
the $\Lambda$cdm cosmological framework predicts the existence of thousands of subhalos in our own galaxy that are not massive enough to retain baryons and become visible. yet, some of them may shine in gamma rays, provided that the dark matter ( dm ) is made of weakly interacting massive particles ( wimps ), which would self - annihilate and would appear as unidentified gamma - ray sources ( unids ) in gamma - ray catalogs. indeed, unids have proven to be competitive targets for dm searches with gamma rays. in this work, we focus on the three high - latitude ( $|b| \geq 10^\circ$ ) sources present in the 2hwc catalog of the high altitude water cherenkov ( hawc ) observatory with no associations at other wavelengths. indeed, only one of these sources, 2hwc j1040 + 308, is found to be above the hawc detection threshold when considering 760 days of data, a factor 1. 5 more exposure time than in the original 2hwc catalog. other instruments such as fermi - lat or veritas at lower energies do not detect this source. also, this unid is reported as spatially extended, making it even more interesting in a dm search context. while waiting for more data that may shed further light on the nature of this source, we set competitive upper limits on the annihilation cross section by comparing this hawc unid to expectations based on state - of - the - art n - body cosmological simulations of the galactic subhalo population. we find these constraints to be particularly competitive for heavy wimps, i. e., masses above $\sim 25$ ( 40 ) tev in the case of the $b\bar{b}$ ( $\tau^+\tau^-$ ) annihilation channel, reaching velocity - averaged cross section values of $2\cdot10^{-25}$ ( $5\cdot10^{-25}$ ) cm$^3$s$^{-1}$. although far from the thermal relic cross section value, the obtained limits are independent and nicely complementary to those from radically different dm analyses and targets, demonstrating again the high potential of this dm search approach.
|
arxiv:2001.02536
|
the leading - order nucleon - nucleon ( nn ) potential derived from chiral perturbation theory consists of one - pion exchange plus a short - distance contact interaction. we show that in the 1s0 and 3s1 - 3d1 channels renormalization of the lippmann - schwinger equation for this potential can be achieved by performing one subtraction. this subtraction requires as its only input knowledge of the nn scattering lengths. this procedure leads to a set of integral equations for the partial - wave nn t - matrix which give cutoff - independent results for the corresponding nn phase shifts. this reformulation of the nn scattering equation offers practical advantages, because only observable quantities appear in the integral equation. the scattering equation may then be analytically continued to negative energies, where information on bound - state energies and wave functions can be extracted.
|
arxiv:0706.1242
|
we have recently introduced a new and very simple action for three - dimensional massive gravity. this action is written in a first order formulation where the triad and the connection play a manifestly symmetric role, but where internal lorentz gauge symmetry is broken. the absence of lorentz invariance, which in this model is the mechanism underlying the propagation of a massive graviton, does however prevent from writing a purely metric non - linear action for the theory. nonetheless, in this letter, we explain how to disentangle, at the non - linear level, the metric and non - metric degrees of freedom in the equations of motion. focusing on the metric part, we show that it satisfies modified einstein equations with higher derivative terms. as a particular case, these equations reproduce a well - studied model known as minimal massive gravity. in the general case, we obtain new metric field equations for massive gravity in three dimensions starting from the simple first order action. these field equations are consistent through a mechanism known as " third way consistency ", which our theory therefore provides a new example of.
|
arxiv:1905.04390
|
it has recently been demonstrated that cortical activity can track the time courses of phrases and sentences during speech listening. here, we propose a plausible neural processing framework to explain this phenomenon. it is argued that the brain maintains the neural representation of a linguistic unit, i. e., a word or a phrase, in a processing buffer until the unit is integrated into a higher - level structure. after being integrated, the unit is removed from the buffer and becomes activated long - term memory. in this model, the duration each unit is maintained in the processing buffer depends on the linguistic structure of the speech input. it is shown that the number of items retained in the processing buffer follows the time courses of phrases and sentences, in line with neurophysiological data, whether the syntactic structure of a sentence is mentally parsed using a bottom - up or top - down predictive model. this model generates a range of testable predictions about the link between linguistic structures, their dynamic psychological representations and their neural underpinnings.
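a toy simulation of this buffer account, assuming a known parse tree (nested tuples standing in for phrases) and bottom-up integration:

```python
def parse_events(tree, ev=None):
    """Turn a nested-tuple parse tree into shift/reduce events: each
    leaf word is shifted into the buffer; each completed phrase merges
    its constituents into a single unit."""
    if ev is None:
        ev = []
    if isinstance(tree, str):
        ev.append(("shift", tree))
    else:
        for child in tree:
            parse_events(child, ev)
        ev.append(("reduce", len(tree)))
    return ev

def buffer_trace(tree):
    """Buffer size after each word, once that word's integrations apply."""
    ev = parse_events(tree)
    size, word, trace = 0, None, []
    for i, (kind, arg) in enumerate(ev):
        if kind == "shift":
            size, word = size + 1, arg
        else:
            size -= arg - 1          # n constituents collapse into 1 unit
        if i + 1 == len(ev) or ev[i + 1][0] == "shift":
            trace.append((word, size))
    return trace

tree = (("the", "dog"), ("chased", ("the", "cat")))
# buffer_trace(tree) -> [('the', 1), ('dog', 1), ('chased', 2), ('the', 3), ('cat', 1)]
```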
|
arxiv:2002.11870
|
czech is a very specific language due to its large differences between the formal and the colloquial form of speech. while the formal ( written ) form is used mainly in official documents, literature, and public speeches, the colloquial ( spoken ) form is used widely among people in casual speeches. this gap introduces serious problems for asr systems, especially when training or evaluating asr models on datasets containing a lot of colloquial speech, such as the malach project. in this paper, we are addressing this problem in the light of a new paradigm in end - to - end asr systems - - recently introduced self - supervised audio transformers. specifically, we are investigating the influence of colloquial speech on the performance of wav2vec 2. 0 models and their ability to transcribe colloquial speech directly into formal transcripts. we are presenting results with both formal and colloquial forms in the training transcripts, language models, and evaluation transcripts.
|
arxiv:2206.07666
|
in the same base setup as sakharov ' s induced gravity, we investigate the emergence of gravity in effective quantum field theories ( qft ), with particular emphasis on the gauge sector, in which gauge bosons acquire anomalous masses proportional to the ultraviolet cutoff $\Lambda_\wp$. drawing on the fact that $\Lambda_\wp^2$ corrections explicitly break the gauge and poincare symmetries, we find that it is possible to map $\Lambda_\wp^2$ to spacetime curvature as a covariance relation, and that this map erases the anomalous gauge boson masses. the resulting framework describes gravity by general relativity ( gr ) and matter by the qft itself with $\log \Lambda_\wp$ corrections ( dimensional regularization ). this qft - gr concord predicts the existence of new physics beyond the standard model, such that the new physics can be a weakly - interacting or even a non - interacting sector comprising the dark matter, dark energy and possibly more. the concord has consequential implications for collider, astrophysical and cosmological phenomena.
|
arxiv:2101.12391
|
the mechanism leading to an auger transition is based on the residual coulomb interaction between the valence electron and the core electrons. on the assumption that the wave field is switched on adiabatically, the probability of the auger effect of the inner electrons of the atom is determined.
|
arxiv:1706.09228
|
$\leftrightarrow (x + 3c/4)^2 + y^2 = 9c^2/16$. the locus of the vertex c is a circle with center $(-3c/4, 0)$ and radius $3c/4$. = = = third example = = = a locus can also be defined by two associated curves depending on one common parameter. if the parameter varies, the intersection points of the associated curves describe the locus. in the figure, the points k and l are fixed points on a given line m. the line k is a variable line through k. the line l through l is perpendicular to k. the angle $\alpha$ between k and m is the parameter. k and l are associated lines depending on the common parameter. the variable intersection point s of k and l describes a circle. this circle is the locus of the intersection point of the two associated lines. = = = fourth example = = = a locus of points need not be one - dimensional ( as a circle, line, etc. ). for example, the locus of the inequality $2x + 3y - 6 < 0$ is the portion of the plane that is below the line of equation $2x + 3y - 6 = 0$. = = see also = = algebraic variety curve line ( geometry ) set - builder notation shape ( geometry ) = = references = =
|
https://en.wikipedia.org/wiki/Locus_(mathematics)
|
basket designs are prospective clinical trials that are devised with the hypothesis that the presence of selected molecular features determines a patient ' s subsequent response to a particular " targeted " treatment strategy. basket trials are designed to enroll multiple clinical subpopulations to which it is assumed that the therapy in question offers beneficial efficacy in the presence of the targeted molecular profile. the treatment, however, may not offer acceptable efficacy to all subpopulations enrolled. moreover, for rare disease settings, such as oncology wherein these trials have become popular, marginal measures of statistical evidence are difficult to interpret for sparsely enrolled subpopulations. consequently, basket trials pose challenges to the traditional paradigm for trial design, which assumes inter - patient exchangeability. the r - package \pkg{basket} facilitates the analysis of basket trials by implementing multi - source exchangeability models. by evaluating all possible pairwise exchangeability relationships, this hierarchical modeling framework facilitates bayesian posterior shrinkage among a collection of discrete and pre - specified subpopulations. analysis functions are provided to implement posterior inference of the response rates and all possible exchangeability relationships between subpopulations. in addition, the package can identify " poolable " subsets of subpopulations and report their response characteristics. the functionality of the package is demonstrated using data from an oncology study with subpopulations defined by tumor histology.
|
arxiv:1908.00618
|
recently, knowledge tracing models have been applied in educational data mining such as the self - attention knowledge tracing model ( sakt ), which models the relationship between exercises and knowledge concepts ( kcs ). however, relation modeling in traditional knowledge tracing models only considers the static question - knowledge relationship and knowledge - knowledge relationship and treats these relationships with equal importance. this kind of relation modeling is difficult to avoid the influence of subjective labeling and considers the relationship between exercises and kcs, or kcs and kcs separately. in this work, a novel knowledge tracing model, named knowledge relation rank enhanced heterogeneous learning interaction modeling for neural graph forgetting knowledge tracing ( ngfkt ), is proposed to reduce the impact of the subjective labeling by calibrating the skill relation matrix and the q - matrix and apply the graph convolutional network ( gcn ) to model the heterogeneous interactions between students, exercises, and skills. specifically, the skill relation matrix and q - matrix are generated by the knowledge relation importance rank calibration method ( krirc ). then the calibrated skill relation matrix, q - matrix, and the heterogeneous interactions are treated as the input of the gcn to generate the exercise embedding and skill embedding. next, the exercise embedding, skill embedding, item difficulty, and contingency table are incorporated to generate an exercise relation matrix as the inputs of the position - relation - forgetting attention mechanism. finally, the position - relation - forgetting attention mechanism is applied to make the predictions. experiments are conducted on the two public educational datasets and results indicate that the ngfkt model outperforms all baseline models in terms of auc, acc, and performance stability ( ps ).
|
arxiv:2304.03945
|
fluid flow in pipes with discontinuous cross section or with kinks is described through balance laws with a non conservative product in the source. at jump discontinuities in the pipes ' geometry, the physics of the problem suggests how to single out a solution. on this basis, we present a definition of solution for a general bv geometry and prove an existence result, consistent with a limiting procedure from piecewise constant geometries. in the case of a smoothly curved pipe we thus justify the appearance of the curvature in the source term of the linear momentum equation. these results are obtained as consequences of a general existence result devoted to abstract balance laws with non conservative source terms.
|
arxiv:2104.05548
|
in order to find out if magnetic impurities can mediate interactions between quasiparticles in metals, we have measured the effect of a magnetic field b on the energy distribution function f ( e ) of quasiparticles in two silver wires driven out - of - equilibrium by a bias voltage u. in a sample showing sharp distributions at b = 0, no magnetic field effect is found, whereas in the other sample, rounded distributions at low magnetic field get sharper as b is increased, with a characteristic field proportional to u. comparison is made with recent calculations of the effect of magnetic - impurities - mediated interactions taking into account kondo physics.
|
arxiv:cond-mat/0301070
|
we give the full description of all degenerations of complex five dimensional noncommutative heisenberg algebras. as a corollary, we have the full description of all degenerations of four dimensional anticommutative $ 3 $ - ary algebras.
|
arxiv:2406.07557
|
active galactic nuclei ( agn ) can be probed at different regions of the electromagnetic spectrum : e. g., radio observations reveal the nature of their relativistic jets and their magnetic fields, and complementarily, x - ray observations give insight into the changes in the accretion disk flows. here we present an overview of agn research and results from an ongoing multi - band campaign on the active galaxy ngc1052. beyond these studies, we address the latest technical developments and their impact on the agn field : the square kilometre array ( ska ), a new radio interferometer planned for the next decade, and the upcoming x - ray and gamma - ray missions.
|
arxiv:astro-ph/0611530
|
we propose a novel explanation method that explains the decisions of a deep neural network by investigating how the intermediate representations at each layer of the deep network were refined during the training process. this way we can a ) find the most influential training examples during training and b ) analyze which classes contributed most to the final representation. our method is general : it can be wrapped around any iterative optimization procedure and covers a variety of neural network architectures, including feed - forward networks and convolutional neural networks. we first propose a method for stochastic training with single training instances and then derive a variant for the common mini - batch training. in experimental evaluations, we show that our method identifies highly representative training instances that can be used as an explanation. additionally, we propose a visualization that provides explanations in the form of aggregated statistics over the whole training process.
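this summary does not give the paper's estimator; as a crude proxy for the single-instance case, one can attribute shifts in an intermediate representation of a fixed probe input to the training instance used at each sgd step (an nn.Sequential model is assumed):

```python
import torch

def representation_influence(model, layer_idx, probe_x, stream, loss_fn, opt):
    """Attribute changes of an intermediate representation of a fixed
    probe input to the single training instance used at each SGD step.
    model: torch.nn.Sequential; stream yields (example_id, x, y)."""
    def rep():
        with torch.no_grad():
            h = probe_x
            for module in list(model)[:layer_idx + 1]:
                h = module(h)
            return h.flatten()
    influence, prev = {}, rep()
    for ex_id, x, y in stream:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        cur = rep()
        influence[ex_id] = influence.get(ex_id, 0.0) + (cur - prev).norm().item()
        prev = cur
    return influence  # larger value = more representation refinement caused
```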
|
arxiv:2109.05880
|
we have conducted a large - field simultaneous survey of $^{12}$CO, $^{13}$CO, and C$^{18}$O $J = 1-0$ emission toward the cassiopeia a ( cas a ) supernova remnant ( snr ), which covers a sky area of $3.5^\circ \times 3.1^\circ$. the cas giant molecular cloud ( gmc ) mainly consists of three individual clouds with masses on the order of $10^4 - 10^5\,M_\odot$. the total mass derived from the $^{13}$CO emission of the gmc is $2.1\times10^5\,M_\odot$ and is $9.5\times10^5\,M_\odot$ from the $^{12}$CO emission. two regions with broadened ( $6-7$ km s$^{-1}$ ) or asymmetric $^{12}$CO line profiles are found in the vicinity ( within a $10'\times10'$ region ) of the cas a snr, indicating possible interactions between the snr and the gmc. using the gaussclumps algorithm, 547 $^{13}$CO clumps are identified in the gmc, 54% of which are supercritical ( i.e. $\alpha_{\rm vir} < 2$ ). the mass spectrum of the molecular clumps follows a power - law distribution with an exponent of $-2.20$. the pixel - by - pixel column density of the gmc can be fitted with a log - normal probability distribution function ( n - pdf ). the median column density of molecular hydrogen in the gmc is $1.6\times10^{21}$ cm$^{-2}$ and half the mass of the gmc is contained in regions with H$_2$ column density lower than $3\times10^{21}$ cm$^{-2}$, which is well below the threshold of star formation. the distribution of the yso candidates in the region shows no agglomeration.
|
arxiv:1905.10193
|
a recent study by one of the authors has demonstrated the importance of profile vectors in dna - based data storage. we provide exact values and lower bounds on the number of profile vectors for finite values of alphabet size $q$, read length $\ell$, and word length $n$. consequently, we demonstrate that for $q \ge 2$ and $n \le q^{\ell/2-1}$, the number of profile vectors is at least $q^{\kappa n}$ with $\kappa$ very close to one. in addition to enumeration results, we provide a set of efficient encoding and decoding algorithms for each of two particular families of profile vectors.
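for concreteness, a profile vector here can be read as the vector of $\ell$-mer occurrence counts of a $q$-ary word; a direct computation (suitable only for small alphabets and read lengths):

```python
from itertools import product

def profile_vector(word, q, ell):
    """Count occurrences of every q-ary substring of length ell in `word`
    (word given as a string over the digits 0..q-1)."""
    kmers = ["".join(map(str, p)) for p in product(range(q), repeat=ell)]
    counts = {k: 0 for k in kmers}
    for i in range(len(word) - ell + 1):
        counts[word[i:i + ell]] += 1
    return [counts[k] for k in kmers]

# q = 2, ell = 2: profile_vector("010110", 2, 2) -> [0, 2, 2, 1]
```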
|
arxiv:1607.02279
|
we provide a systematic approach for constructing approximate quantum many - body scars ( qmbs ) starting from two - layer floquet automaton circuits that exhibit trivial many - body revivals. we do so by applying successively more restrictions that force local gates of the automaton circuit to commute concomitantly more accurately when acting on select scar states. with these rules in place, an effective local, floquet hamiltonian is seen to capture dynamics of the automaton over a long prethermal window. we provide numerical evidence for such a picture and use our construction to derive several qmbs models, including the celebrated pxp model.
|
arxiv:2112.12153
|
distributionally robust optimization ( dro ) incorporates robustness against uncertainty in the specification of probabilistic models. this paper focuses on mitigating the curse of dimensionality in data - driven dro problems with optimal transport ambiguity sets. by exploiting independence across lower - dimensional components of the uncertainty, we construct structured ambiguity sets that exhibit a faster shrinkage as the number of collected samples increases. this narrows down the plausible models of the data - generating distribution and mitigates the conservativeness that the decisions of dro problems over such ambiguity sets may face. we establish statistical guarantees for these structured ambiguity sets and provide dual reformulations of their associated dro problems for a wide range of objective functions. the benefits of the approach are demonstrated in a numerical example.
|
arxiv:2310.20657
|
public art shapes our shared spaces. public art should speak to community and context, and yet, recent work has demonstrated numerous instances of art in prominent institutions favoring outdated cultural norms and legacy communities. motivated by this, we develop a novel recommender system to curate public art exhibits with built - in equity objectives and a local value - based allocation of constrained resources. we develop a cost matrix by drawing on schelling ' s model of segregation. using the cost matrix as an input, the scoring function is optimized via a projected gradient descent to obtain a soft assignment matrix. our optimization program allocates artwork to public spaces in a way that de - prioritizes " in - group " preferences, by satisfying minimum representation and exposure criteria. we draw on existing literature to develop a fairness metric for our algorithmic output, and we assess the effectiveness of our approach and discuss its potential pitfalls from both a curatorial and equity standpoint.
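a minimal sketch of the allocation step, assuming a precomputed cost matrix and a plain linear objective; the paper's scoring function additionally encodes the representation and exposure criteria, which are omitted here:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

def soft_assignment(cost, n_iter=500, lr=0.1):
    """Projected gradient descent on a soft assignment matrix X:
    row i is a distribution of artwork i over candidate spaces, and
    the (simplified) objective is the linear cost <cost, X>."""
    n, m = cost.shape
    X = np.full((n, m), 1.0 / m)
    for _ in range(n_iter):
        X -= lr * cost                       # gradient of <cost, X>
        X = np.apply_along_axis(project_simplex, 1, X)
    return X
```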
|
arxiv:2207.14367
|
the nature of the superconducting pairing state in the pristine phase of a compressed kagome metal CsV$_3$Sb$_5$ under pressure is studied by the migdal - eliashberg formalism and density - functional theory calculations. we find that the superconducting gap distribution driven by electron - phonon coupling is nodeless and anisotropic. it is revealed that the hybridized V 3$d$ and Sb 5$p$ orbitals are strongly coupled to the V-V bond - stretching and V-Sb bond - bending phonon modes, giving rise to a wide spread of superconducting gap depending on its associated fermi - surface sheets and momentum. specifically, the superconducting gaps associated with V 3$d_{xy, x^2-y^2, z^2}$ and 3$d_{xz, yz}$ orbitals are larger in their average magnitude and more widely spread compared to that associated with the Sb 5$p_z$ orbital. our findings demonstrate that the superconductivity of compressed CsV$_3$Sb$_5$ can be explained by the anisotropic multiband pairing mechanism with conventional phonon - mediated $s$-wave symmetry, evidenced by recent experimental observations under pressure as well as at ambient pressure.
|
arxiv:2303.10080
|
we consider complex ginzburg - landau equations with a polynomial nonlinearity on the real line. we use splitting methods to prove well - posedness for a subset of almost periodic spaces. specifically, we prove that if the initial condition has multiples of an irrational phase, then the solution of the equation maintains those same phases.
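to illustrate the splitting idea numerically (on a periodic grid rather than the almost periodic setting of the paper), one lie splitting step for a cubic complex ginzburg - landau equation, with the nonlinear flow approximated by freezing $|u|^2$ over the step:

```python
import numpy as np

def cgl_split_step(u, dt, alpha, beta, k):
    """One Lie splitting step for u_t = u + (1+i*alpha)u_xx - (1+i*beta)|u|^2 u:
    exact linear step in Fourier space, then a pointwise nonlinear step."""
    u_hat = np.fft.fft(u)
    u_hat *= np.exp(dt * (1.0 - (1.0 + 1j * alpha) * k ** 2))
    u = np.fft.ifft(u_hat)
    return u * np.exp(-dt * (1.0 + 1j * beta) * np.abs(u) ** 2)

N, L = 256, 50.0
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
u = 0.1 * np.exp(2j * np.pi * x / L)        # single-phase initial datum
for _ in range(1000):
    u = cgl_split_step(u, 1e-2, alpha=1.5, beta=-1.2, k=k)
```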
|
arxiv:2211.03746
|
for a braided vector space $(V, \sigma)$ with braiding $\sigma$ of hecke type, we introduce three associative algebra structures on the space $\oplus_{p=0}^{m}\mathrm{End}\,S_\sigma^p(V)$ of graded endomorphisms of the quantum symmetric algebra $S_\sigma(V)$. we use the second product to construct a new trace. this trace is an algebra morphism with respect to the third product. in particular, when $V$ is the fundamental representation of $\mathcal{U}_q\mathfrak{sl}_{n+1}$ and $\sigma$ is the action of the $R$-matrix, this trace is a scalar multiple of the quantum trace of type $A$.
|
arxiv:0907.0257
|
we study the ginzburg - landau energy of superconductors with a term $a_\epsilon$ modelling the pinning of vortices by impurities in the limit of a large ginzburg - landau parameter $\kappa = 1/\epsilon$. the function $a_\epsilon$ is oscillating between 1 / 2 and 1 with a scale which may tend to 0 as $\kappa$ tends to infinity. our aim is to understand that in the large $\kappa$ limit, stable configurations should correspond to vortices pinned at the minimum of $a_\epsilon$ and to derive the limiting homogenized free - boundary problem which arises for the magnetic field in replacement of the london equation. the method and techniques that we use are inspired from those of sandier - serfaty ( in which the case $a_\epsilon \equiv 1$ was treated ) and based on energy estimates, convergence of measures and construction of approximate solutions. because of the term $a_\epsilon(x)$ in the equations, we also need homogenization theory to describe the fact that the impurities, hence the vortices, form a homogenized medium in the material.
|
arxiv:cond-mat/0004177
|
we prove that the derived direct image of the constant sheaf with field coefficients under any proper map with smooth source contains a canonical summand. this summand, which we call the geometric extension, only depends on the generic fibre. for resolutions we get a canonical extension of the constant sheaf. when our coefficients are of characteristic zero, this summand is the intersection cohomology sheaf. when our coefficients are finite we obtain a new object, which provides interesting topological invariants of singularities and topological obstructions to the existence of morphisms. the geometric extension is a generalization of a parity sheaf. our proof is formal, and also works with coefficients in modules over suitably finite ring spectra.
|
arxiv:2309.11780
|
accord to explore mutual recognition for experienced engineering technologists and to remove artificial barriers to the free movement and practice of engineering technologists amongst their countries. etmf can be compared to the engineers mobility forum ( emf ) for engineers. graduates acquiring an associate degree, or lower, typically find careers as engineering technicians. according to the united states bureau of labor statistics : " many four - year colleges offer bachelor ' s degrees in engineering technology and graduates of these programs are hired to work as entry - level engineers or applied engineers, but not technicians. " engineering technicians typically have a two - year associate degree, while engineering technologists have a bachelor ' s degree. = = = canada = = = in canada, the new occupational category of " technologist " was established in the 1960s, in conjunction with an emerging system of community colleges and technical institutes. it was designed to effectively bridge the gap between the increasingly theoretical nature of engineering degrees and the predominantly practical approach of technician and trades programs. provincial associations may certify individuals as a professional technologist ( p. tech. ), certified engineering technologist ( c. e. t. ), registered engineering technologist ( r. e. t. ), applied science technologist ( asct ), or technologue professionnel ( t. p. ). these provincial associations are constituent members of technology professionals canada ( tpc ), which accredits technology programs across canada through its technology accreditation canada ( tac ). nationally accredited engineering technology programs range from two to three years in length, depending on the province, and often require as many classroom hours as a 4 - year degree program. = = = united states = = = in the united states, the u. s. department of education or the council for higher education accreditation ( chea ) are at the top of the educational accreditation hierarchy. the u. s. department of education acknowledges regional and national accreditation and chea recognizes specialty accreditation. one technology accreditation is currently recognized by chea : the association of technology, management and applied engineering ( atmae ). chea recognizes atmae for accrediting associate, baccalaureate, and master ' s degree programs in technology, applied technology, engineering technology, and technology - related disciplines delivered by national or regional accredited institutions in the united states. as of march 2019, abet withdrew from chea recognition. the national institute for certification in engineering technologies ( nicet ) awards certification at two levels, depending on work experience : the associate engineering technologist ( at ) and the certified engineering technologist (
|
https://en.wikipedia.org/wiki/Engineering_technologist
|
magnetic reconnection regions in space and astrophysics are known as active particle acceleration sites. there is ample evidence showing that energetic particles can take a substantial amount of converted energy during magnetic reconnection. however, there has been a lack of studies understanding the backreaction of energetic particles at magnetohydrodynamical scales in magnetic reconnection. to address this, we have developed a new computational method to explore the feedback by non - thermal energetic particles. this approach considers the backreaction from these energetic particles by incorporating their pressure into magnetohydrodynamics ( mhd ) equations. the pressure of the energetic particles is evaluated from their distribution evolved through parker ' s transport equation, solved using stochastic differential equations ( sde ), so we coin the name mhd - sde. applying this method to low - beta magnetic reconnection simulations, we find that reconnection is capable of accelerating a large fraction of energetic particles that contain a substantial amount of energy. when the feedback from these particles is included, their pressure suppresses the compression structures generated by magnetic reconnection, thereby mediating particle energization. consequently, the feedback from energetic particles results in a steeper power - law energy spectrum. these findings suggest that feedback from non - thermal energetic particles plays a crucial role in magnetic reconnection and particle acceleration.
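the sde side of the mhd-sde scheme can be pictured in one dimension: parker's transport equation maps to pseudo-particle updates, sketched here as a bare euler-maruyama step that ignores gradients of the diffusion coefficient:

```python
import numpy as np

def parker_sde_step(x, p, dt, u, div_u, kappa, rng):
    """One Euler-Maruyama step for pseudo-particles solving a 1D Parker
    transport equation: advection + diffusion in space, adiabatic
    momentum change from flow compression (kappa gradients ignored)."""
    x_new = x + u(x) * dt + np.sqrt(2.0 * kappa * dt) * rng.standard_normal(x.shape)
    p_new = p * (1.0 - (div_u(x) / 3.0) * dt)   # dp/p = -(div u)/3 dt
    return x_new, p_new
```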
|
arxiv:2404.12276
|
we are developing a high - current cyclotron as a driver for the isodar neutrino experiment. it accelerates 5 ma h2 + to 60 mev / amu, after which the electron is removed to produce a 10 ma, 60 mev proton beam. the enabling innovations that offset space - charge effects occur at injection and in the first few turns, allowing one to construct cyclotrons with energies ranging from below 5 mev up to 60 mev / amu, or possibly higher, with the same performance for accelerated ions with q / a = 0. 5 ( h2 +, d +, he + +,... ). in this paper, we discuss the possible uses of such cyclotrons for isotope production, including production of long - lived generator parents ( 68ga, 44ti, 82sr,... ), as well as intense fast neutron beams from deuteron breakup for ( n, 2n ) production of isotopes like 225ac.
|
arxiv:2310.19160
|
the measurement of the biological tissue ' s electrical impedance is an active research field that has attracted a lot of attention during the last decades. bio - impedances are closely related to a large variety of physiological conditions ; therefore, they are useful for diagnosis and monitoring in many medical applications. measuring living tissues, however, is a challenging task that poses countless technical and practical problems, in particular if the tissues need to be measured under the skin. this paper presents a bio - impedance sensor asic targeting a battery - free, miniature size, implantable device, which performs accurate 4 - point complex impedance extraction in the frequency range from 2 khz to 2 mhz. the asic is fabricated in 150 nm cmos, has a size of 1. 22 mm x 1. 22 mm and consumes 165 µa from a 1. 8 v power supply. the asic is embedded in a prototype which communicates with, and is powered by, an external reader device through inductive coupling. the prototype is validated by measuring the impedances of different combinations of discrete components, measuring the electrochemical impedance of physiological solution, and performing ex vivo measurements on animal organs. the proposed asic is able to extract complex impedances with around 1 ohm resolution, therefore enabling accurate wireless tissue measurements.
|
arxiv:1507.03388
|
we use the holographic method to investigate an rg flow and ir physics of a two - dimensional conformal field theory ( cft ) deformed by a relevant scalar operator. on the dual gravity side, a renormalization group ( rg ) flow from a uv to ir cft can be described by rolling a scalar field from an unstable to a stable equilibrium point. after considering a simple scalar potential allowing several local equilibrium points, we study the change of a coupling constant and ground state from the momentum - space and real - space rg flow viewpoints. for the real - space rg flow, we calculate the entanglement entropy as a function of a coupling constant and then explicitly show that the entanglement entropy diverges logarithmically at fixed points due to the restoration of conformal symmetry. we further study how the change of a ground state affects the two - point function and conformal dimension of a local operator numerically and analytically in the probe limit.
|
arxiv:2406.17221
|
we examine the interaction and possible resonances between supernova neutrinos and electron plasma waves. the neutrino phase space distribution and its boundary regions are analyzed in detail. it is shown that the boundary regions are too wide to produce non - linear resonant effects. the growth or damping rates induced by neutrinos are always proportional to the neutrino flux and $G_{\rm F}^2$.
|
arxiv:hep-ph/0101054
|
clean technology, also called cleantech or climate tech, is any process, product, or service that reduces negative environmental impacts through significant energy efficiency improvements, the sustainable use of resources, or environmental protection activities. clean technology includes a broad range of technologies related to recycling, renewable energy, information technology, green transportation, electric motors, green chemistry, lighting, grey water, and more. environmental finance is a method by which new clean technology projects can obtain financing through the generation of carbon credits. a project that is developed with concern for climate change mitigation is also known as a carbon project. clean edge, a clean technology research firm, describes clean technology as " a diverse range of products, services, and processes that harness renewable materials and energy sources, dramatically reduce the use of natural resources, and cut or eliminate emissions and wastes. " clean edge notes that, " clean technologies are competitive with, if not superior to, their conventional counterparts. many also offer significant additional benefits, notably their ability to improve the lives of those in both developed and developing countries. " investments in clean technology have grown considerably since coming into the spotlight around 2000. according to the united nations environment program, wind, solar, and biofuel companies received a record $ 148 billion in new funding in 2007, as rising oil prices and climate change policies encouraged investment in renewable energy. $ 50 billion of that funding went to wind power. overall, investment in clean - energy and energy - efficiency industries rose 60 percent from 2006 to 2007. in 2009, clean edge forecasted that the three main clean technology sectors ( solar photovoltaics, wind power, and biofuels ) would have revenues of $ 325. 1 billion by 2018. according to an mit energy initiative working paper published in july 2016, about half of over $ 25 billion in funding provided by venture capital to cleantech from 2006 to 2011 was never recovered. the report cited cleantech ' s dismal risk / return profiles and the inability of companies developing new materials, chemistries, or processes to achieve manufacturing scale as contributing factors to its flop. clean technology has also emerged as an essential topic among businesses and companies. it can reduce pollutants and dirty fuels for every company, regardless of which industry they are in, and using clean technology has become a competitive advantage. through building their corporate social responsibility ( csr ) goals, they participate in using clean technology and other means by promoting sustainability. fortune global 500 firms spent around $ 20 billion a year on csr activities in 2018. silicon valley, tel aviv
|
https://en.wikipedia.org/wiki/Clean_technology
|
we present shap - med, a text - to - 3d object generative model specialized in the biomedical domain. the objective of this study is to develop an assistant that facilitates the 3d modeling of medical objects, thereby reducing development time. 3d modeling in medicine has various applications, including surgical procedure simulation and planning, the design of personalized prosthetic implants, medical education, the creation of anatomical models, and the development of research prototypes. to achieve this, we leverage shap - e, an open - source text - to - 3d generative model developed by openai, and fine - tune it using a dataset of biomedical objects. our model achieved a mean squared error ( mse ) of 0. 089 in latent generation on the evaluation set, compared to shap - e ' s mse of 0. 147. additionally, we conducted a qualitative evaluation, comparing our model with others in the generation of biomedical objects. our results indicate that shap - med demonstrates higher structural accuracy in biomedical object generation.
|
arxiv:2503.15562
|
sub - arcsecond images of the rotational line emission of cs and so have been obtained toward the class i protostar iras 04365 $ + $ 2535 in tmc - 1a with alma. a compact component around the protostar is clearly detected in the cs and so emission. the velocity structure of the compact component of cs reveals infalling - rotating motion conserving the angular momentum. it is well explained by a ballistic model of an infalling - rotating envelope with the radius of the centrifugal barrier ( a half of the centrifugal radius ) of 50 au, although the distribution of the infalling gas is asymmetric around the protostar. the distribution of so is mostly concentrated around the radius of the centrifugal barrier of the simple model. thus a drastic change in chemical composition of the gas infalling onto the protostar is found to occur at a 50 au scale probably due to accretion shocks, demonstrating that the infalling material is significantly processed before being delivered into the disk.
|
arxiv:1603.08608
|
in this paper, an error - controlled hybrid adaptive fast solver that combines both o ( n ) and o ( n log n ) schemes is proposed. for a given accuracy, the adaptive solver is used in the context of regularized vortex methods to optimize the speed of the velocity and vortex stretching calculations. this is accomplished by introducing three critical numbers in order to limit the depth of the tree division and to balance the near - field and far - field calculations for any hardware architecture. the adaptive solver is analyzed in terms of speed and accuracy.
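the balancing of near-field and far-field work can be pictured with a standard treecode opening criterion; `distance` and `far_field_kernel` below are assumed helpers, and the single angle `theta` stands in for the paper's three critical numbers:

```python
def velocity_at(target, cell, theta=0.5):
    """Recursive treecode walk: use the cell's aggregate expansion
    (far field) when the opening criterion holds, otherwise descend;
    leaves are evaluated directly (near field)."""
    d = distance(target, cell.center)            # assumed helper
    if cell.is_leaf or cell.size / d < theta:
        return far_field_kernel(target, cell)    # assumed helper
    return sum(velocity_at(target, child, theta) for child in cell.children)
```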
|
arxiv:2008.06673
|
firstly, by establishing a prediction model for global sea - level rise and calculating with maple, it is shown that the global sea - level rise rate in 2009 is 2. 68 mm / a. the height and rate of global sea - level rise will be about 9. 11 cm and 3. 22 mm / a in 2020. based on the study and the actual land subsidence in shanghai lingang new city, the rate of relative sea - level rise near lingang new city is calculated to be 12. 68 mm / a in 2009. then, through setting up an extrapolation prediction model with a linear trend term and a significant tidal cycle, the rise rate of the average sea level near lingang new city is predicted to be 0. 33 mm / a in 2020.
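a sketch of the extrapolation model described here, fitting a linear trend plus one significant tidal cycle by least squares (the tidal period is assumed known in advance):

```python
import numpy as np

def fit_sea_level(t, h, period):
    """Least-squares fit of h(t) ~ a + b*t + A*sin(2*pi*t/period)
    + B*cos(2*pi*t/period); b is the linear rise rate."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(X, h, rcond=None)
    return coef  # [a, b, A, B]

# extrapolation: evaluate the same design matrix at future times t_future
# and take X_future @ coef
```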
|
arxiv:2408.06387
|
the ability to accurately detect and localize objects is recognized as being the most important for the perception of self - driving cars. from 2d to 3d object detection, the most difficult task is to determine the distance from the ego - vehicle to objects. expensive technology like lidar can provide precise and accurate depth information, so most studies have tended to focus on this sensor, showing a performance gap between lidar - based methods and camera - based methods. although many authors have investigated how to fuse lidar with rgb cameras, as far as we know there are no studies that fuse lidar and stereo in a deep neural network for the 3d object detection task. this paper presents sls - fusion, a new approach to fuse data from 4 - beam lidar and a stereo camera via a neural network for depth estimation to achieve better dense depth maps and thereby improve 3d object detection performance. since 4 - beam lidar is cheaper than the well - known 64 - beam lidar, this approach is also classified as a low - cost sensors - based method. through evaluation on the kitti benchmark, it is shown that the proposed method significantly improves depth estimation performance compared to a baseline method. also, when applying it to 3d object detection, a new state of the art among low - cost sensor - based methods is achieved.
|
arxiv:2103.03977
|
we study the properties of pasta structures and their influence on the neutron star observables employing the effective relativistic mean - field ( e - rmf ) model. the compressible liquid drop model is used to incorporate the finite size effects, considering the possibility of nonspherical structures in the inner crust. unified equations of state are constructed for several e - rmf parameters to study various properties such as pasta mass and thickness in the neutron star ' s crust. the majority of the pasta properties are sensitive to the symmetry energy in the subsaturation density region. using the results from monte carlo simulations, we estimate the shear modulus of the crust in the context of quasiperiodic oscillations from soft gamma - ray repeaters and calculate the frequency of the fundamental torsional oscillation mode in the inner crust. global properties of the neutron star such as the mass - radius profile, the moment of inertia, crustal mass, crustal thickness, and fractional crustal moment of inertia are worked out. the results are consistent with various observational and theoretical constraints.
|
arxiv:2203.16827
|
luminous compact galaxies ( lcgs ) are enigmatic sources in many respects. they can reach the luminosity of the milky way within a radius of only a few kpc. they also represent one of the most rapidly evolving populations of galaxies, since they represent up to 1 / 5 of the luminous galaxies at redshift z = 0. 7 while being almost absent in the local universe. the measurement of their dynamics is crucial to our understanding of lcgs, since it has the potential of telling us which physical process ( es ) drive them, and ultimately of linking them to the existing present - day galaxies. here we derive the 3 - dimensional velocity fields and velocity dispersion ( sigma ) maps of 17 luminous compact galaxies selected from the canada france redshift survey and the hubble deep field south, with redshifts ranging from z = 0. 4 to z = 0. 75. we find that only 18 % of them show rotational velocity fields typical of rotating disks, the others showing more complex kinematics. assuming that lcgs are not too far from equilibrium, about half of lcgs then appear to be either non - relaxed objects, or objects that are not supported by velocity dispersion alone. this supports the view that an important fraction of lcgs are probably mergers. it brings additional support to the " spiral rebuilding scenario " in which lcgs correspond to a previous or post - merger phase before the disk re - building.
|
arxiv:astro-ph/0603562
|
using the formalism of the spherical infall model, the structure of collapsed and virialized dark halos is calculated for a variety of scale - free initial conditions. in spite of the scale - free cosmological nature of the problem, the collapse of individual objects is not self - similar. unlike most previous calculations, the dynamics used here relies only on adiabatic invariants and not on self - similarity. the paper focuses on the structure of the innermost part of the collapsed halos and addresses the problem of central density cusps. the slopes of density profiles at 1 % of the virial radius are calculated for a variety of cosmological models and are found to vary with the mass of the halos and the power spectrum of the initial conditions. the inner slopes range between $r^{-2.3}$ and $r^{-2}$, with the limiting case of $r^{-2}$ reached for the largest masses. the steep cusps found here correspond to the limiting case where all particles move on radial orbits. the introduction of angular momentum will make the density profile shallower. we expect this to resolve the discrepancy found between the calculated profiles and the ones found in high - resolution n - body simulations, where the exponent ranges from $-0.5$ to $-1.5$. the robust prediction here is that collisionless gravitational collapse in an expanding universe is expected to form density cusps and not halos with a core structure.
|
arxiv:astro-ph/0005566
|
universality is one of the most important ideas in computability theory. there are various criteria of simplicity for universal turing machines. probably the most popular one is to count the number of states / symbols. this criterion is more complex than it may appear at first glance. in this note we review recent results in algorithmic information theory and propose three new criteria of simplicity for universal prefix - free turing machines. these criteria refer to the possibility of proving various natural properties of such a machine ( its universality, for example ) in a formal theory, pa or zfc. in all cases some, but not all, machines are simple.
|
arxiv:0906.3235
|
a new amorphous alloy system $(TiZrNbCu)_{1-x}Co_x$ covering a broad composition range from the high - entropy ( hea ) to co - rich alloys ( $x \leqslant 0.43$ ) has been fabricated, characterized and investigated. a comprehensive study of the chemical compositions, homogeneity, thermal stability, electronic structure and magnetic and mechanical properties has been performed. all properties change their variations with $x$ within the hea range. in particular, the average atomic volume deviates from vegard ' s law for $x \ge 0.2$, where also the average atomic packing fraction suddenly changes. the valence band structure, studied with ultraviolet photoemission spectroscopy, shows a split - band shape with 3d - states of co approaching the fermi level on increasing $x$. due to the onset of magnetic correlations, the magnetic susceptibility rapidly increases for $x \ge 0.25$. the very high microhardness increases rapidly with $x$. the results are compared with those for similar binary and quinary metallic glasses and with those for cantor - type crystalline alloys.
|
arxiv:2107.08239
|
a 5 - mev rfq designed for a proton current up to 100 - ma cw is now under construction as part of the high intensity proton injector project ( iphi ). its computed transmission is greater than 99 %. the main goals of the project are to verify the accuracy of the design codes, to gain the know - how on fabrication, tuning procedures and operations, to measure the output beam characteristics in order to optimise the higher energy part of the linac, and to reach a high availability with minimum beam trips. a cold model has been built to develop the tuning procedure. the present status of the iphi rfq is presented.
|
arxiv:physics/0008145
|