The augmented space formalism, coupled with the recursion method and density functional theory based tight-binding linear muffin-tin orbitals, has been applied for a first-principles calculation of the surface electronic and magnetic properties of body-centered cubic Fe(001) and face-centered cubic Co(001) and Ni(001). Nine atomic layers have been studied to see the trend of change in these properties from the surface into the bulk. The surface magnetic moment has been found to be higher than that of the bulk, and in the layers below, the magnetic moments show Friedel oscillations, in agreement with other studies. The work functions of these systems have been found to agree with experimental values. We propose this real-space technique as suitable for the study of localized physical properties such as surface layers; it is also suitable for the study of rough surfaces and interfaces.
arxiv:1410.3185
Let $n$ and $k$ be two positive integers with $k \leq n$ and $C$ an $n \times n$ matrix with nonnegative entries. In this paper, the rank-$k$ numerical range in the max algebra setting is introduced and studied. The related notions of the max joint $k$-numerical range and the max joint $C$-numerical range of an entry-wise nonnegative matrix and an $m$-tuple of nonnegative matrices are also introduced. Some interesting algebraic properties of these concepts are investigated.
arxiv:2412.10375
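For orientation on the abstract above, a minimal sketch of the standard max (max-times) algebra operations on nonnegative reals that underlie such definitions; the precise definition of the rank-$k$ numerical range itself is in the paper:
\[
a \oplus b = \max(a,b), \qquad a \otimes b = ab, \qquad (A \otimes x)_i = \max_{1 \le j \le n} a_{ij} x_j .
\]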
Deep neural networks (DNNs) have shown huge superiority over humans in image recognition, speech processing, autonomous vehicles and medical diagnosis. However, recent studies indicate that DNNs are vulnerable to adversarial examples (AEs), which are designed by attackers to fool deep learning models. Unlike real examples, AEs can mislead a model into predicting incorrect outputs while being hardly distinguishable by human eyes, and therefore threaten security-critical deep-learning applications. In recent years, the generation of and defense against AEs have become a research hotspot in the field of artificial intelligence (AI) security. This article reviews the latest research progress on AEs. First, we introduce the concept, causes, characteristics and evaluation metrics of AEs, then give a survey of state-of-the-art AE generation methods with a discussion of their advantages and disadvantages. After that, we review existing defenses and discuss their limitations. Finally, future research opportunities and challenges on AEs are discussed.
arxiv:1809.04790
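As a concrete illustration of the kind of AE generation method surveyed in the abstract above, here is a minimal PyTorch sketch of the fast gradient sign method (FGSM), one of the classic attacks; the model, loss, and epsilon value are illustrative assumptions, not specifics from this survey.

```python
import torch

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method (illustrative sketch): perturb x along
    the sign of the loss gradient to craft an adversarial example."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step, then clamp back to the valid image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```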
We study the scattering of photons by a two-level system ultrastrongly coupled to a one-dimensional waveguide. Using a combination of the polaron transformation with scattering theory we can compute the one-photon scattering properties of the qubit for a broad range of coupling strengths, estimating resonance frequencies, lineshapes and linewidths. We validate numerically and analytically the accuracy of this technique up to $\alpha = 0.3$, close to the Toulouse point $\alpha = 1/2$, where inelastic scattering becomes relevant. These methods model recent experiments with superconducting circuits [P. Forn-Díaz et al., Nat. Phys. (2016)].
arxiv:1701.04709
oligonucleotides containing optimal codons and beneficial mutations can be included. ==== In vivo homologous recombination ==== Cloning performed in yeast involves PCR-dependent reassembly of fragmented expression vectors. These reassembled vectors are then introduced to, and cloned in, yeast. Using yeast to clone the vector avoids the toxicity and counter-selection that would be introduced by ligation and propagation in E. coli. ==== Mutagenic organized recombination process by homologous in vivo grouping (MORPHING) ==== This method introduces mutations into specific regions of genes while leaving other parts intact by utilizing the high frequency of homologous recombination in yeast. ==== Phage-assisted continuous evolution (PACE) ==== This method utilizes a bacteriophage with a modified life cycle to transfer evolving genes from host to host. The phage's life cycle is designed in such a way that the transfer is correlated with the activity of interest from the enzyme. This method is advantageous because it requires minimal human intervention for the continuous evolution of the gene. === In vitro non-homologous recombination methods === These methods are based upon the fact that proteins can exhibit similar structural identity while lacking sequence homology. === Exon shuffling === Exon shuffling is the combination of exons from different proteins by recombination events occurring at introns. Orthologous exon shuffling involves combining exons from orthologous genes from different species. Orthologous domain shuffling involves shuffling of entire protein domains from orthologous genes from different species. Paralogous exon shuffling involves shuffling of exons from different genes from the same species. Paralogous domain shuffling involves shuffling of entire protein domains from paralogous proteins from the same species. Functional homolog shuffling involves shuffling of non-homologous domains which are functionally related. All of these processes begin with amplification of the desired exons from different genes using chimeric synthetic oligonucleotides. These amplification products are then reassembled into full-length genes using primer-less PCR. During these PCR cycles the fragments act as templates and primers. This results in chimeric full-length genes, which are then subjected to screening. ==== Incremental truncation for the creation of hybrid enzymes (ITCHY)
https://en.wikipedia.org/wiki/Protein_engineering
Speech synthesis (text to speech, TTS) and recognition (automatic speech recognition, ASR) are important speech tasks, and require a large amount of text and speech pairs for model training. However, there are more than 6,000 languages in the world and most languages lack speech training data, which poses significant challenges when building TTS and ASR systems for extremely low-resource languages. In this paper, we develop LRSpeech, a TTS and ASR system for the extremely low-resource setting, which can support rare languages with low data cost. LRSpeech consists of three key techniques: 1) pre-training on rich-resource languages and fine-tuning on low-resource languages; 2) dual transformation between TTS and ASR to iteratively boost the accuracy of each other; 3) knowledge distillation to customize the TTS model on a high-quality target-speaker voice and improve the ASR model on multiple voices. We conduct experiments on an experimental language (English) and a truly low-resource language (Lithuanian) to verify the effectiveness of LRSpeech. Experimental results show that LRSpeech 1) achieves high quality for TTS in terms of both intelligibility (more than 98% intelligibility rate) and naturalness (above 3.5 mean opinion score (MOS)) of the synthesized speech, which satisfies the requirements for industrial deployment, 2) achieves promising recognition accuracy for ASR, and 3) last but not least, uses extremely low-resource training data. We also conduct comprehensive analyses on LRSpeech with different amounts of data resources, and provide valuable insights and guidance for industrial deployment. We are currently deploying LRSpeech into a commercialized cloud speech service to support TTS for more rare languages.
arxiv:2008.03687
We present a new loss function called distribution-balanced loss for multi-label recognition problems that exhibit long-tailed class distributions. Compared to the conventional single-label classification problem, multi-label recognition problems are often more challenging due to two significant issues, namely the co-occurrence of labels and the dominance of negative labels (when treated as multiple binary classification problems). The distribution-balanced loss tackles these issues through two key modifications to the standard binary cross-entropy loss: 1) a new way to re-balance the weights that takes into account the impact caused by label co-occurrence, and 2) a negative-tolerant regularization to mitigate the over-suppression of negative labels. Experiments on both Pascal VOC and COCO show that the models trained with this new loss function achieve significant performance gains over existing methods. Code and models are available at: https://github.com/wutong16/DistributionBalancedLoss.
arxiv:2007.09654
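To make the two modifications in the abstract above concrete, here is a schematic PyTorch sketch of a re-weighted, negative-tolerant binary cross-entropy; the weighting scheme and the `lambda_neg`/`margin` parameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def db_style_loss(logits, targets, class_weights, lambda_neg=0.1, margin=1.0):
    """Schematic distribution-balanced-style loss (illustrative only):
    per-class re-balancing weights plus a negative-tolerant term that
    scales and shifts the logits of negative labels."""
    # Negative-tolerant regularization: down-scale and shift logits for
    # negative labels so easy negatives are not over-suppressed.
    neg_logits = lambda_neg * (logits - margin)
    z = torch.where(targets.bool(), logits, neg_logits)
    bce = F.binary_cross_entropy_with_logits(z, targets, reduction="none")
    # Re-balance with per-class weights that would account for label
    # co-occurrence (how to derive them is the paper's first contribution).
    return (class_weights * bce).mean()
```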
Extragalactic nuclear activity is best explored with observations at high energies, where the most extreme flux and spectral variations are expected to occur, witnessing changes in the accretion flow or in the kinematics of the plasma. In active galactic nuclei of the blazar type, these variations are the most dramatic. By following blazar outbursts from their onset and by correlating the observed variations at many different wavelengths we can reconstruct the behavior of the plasma and map out the development of the flare within the jet. The advent of the Fermi satellite has allowed the start of a systematic and intensive monitoring program of blazars. Blazar outbursts are very effectively detected by the LAT instrument in the MeV-GeV domain, and these can be promptly followed up with other facilities. Based on a Fermi LAT detection of a high MeV-GeV state, we observed the blazar PKS 1502+106 with the INTEGRAL satellite between 9 and 11 August 2008. Simultaneous Swift observations were also accomplished, as well as optical follow-up at the Nordic Optical Telescope. The IBIS instrument onboard INTEGRAL detected a source at a position inconsistent with the optical coordinates of PKS 1502+106, but consistent with those of the Seyfert 1 galaxy Mkn 841, located 6.8 arcmin south-west of the blazar, which is therefore responsible for all the hard X-ray flux detected by IBIS. At the location of the blazar, IBIS sets an upper limit of ~10^-11 erg/s/cm^2 on the 15-60 keV flux, which turns out to be consistent with a model of inverse Compton scattering accounting for the soft X-ray and gamma-ray spectra measured by Swift XRT and Fermi LAT, respectively. The gamma-ray spectrum during the outburst indicates substantial variability of the characteristic energy of the inverse Compton component in this blazar. (Abridged)
arxiv:1011.3224
We prove that for given integers b and c, the Diophantine equation x^2 + bx + c = y^2 has finitely many integer solutions (i.e., pairs in Z×Z), in fact an even number of such solutions (including the case of zero or no solutions). We also offer an explicit description of the solution set. Such a description depends on the form of the integer b^2 - 4c. Some corollaries follow. Furthermore, we show that the said equation has exactly two integer solutions precisely when b^2 - 4c = 1, 4, 16, -4, or -16. In each case we list the two solutions in terms of the coefficients b and c.
arxiv:0803.3956
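The finiteness claim in the abstract above can be seen from a completing-the-square step; a minimal sketch of the standard reduction (not necessarily the paper's exact argument):
\[
x^2 + bx + c = y^2
\;\Longleftrightarrow\;
(2x+b)^2 - (2y)^2 = b^2 - 4c
\;\Longleftrightarrow\;
(2x+b-2y)(2x+b+2y) = b^2 - 4c ,
\]
so when $b^2 - 4c \neq 0$, each integer solution corresponds to a factorization of the fixed integer $b^2 - 4c$ into two factors of the same parity, and there are only finitely many such factorizations.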
In the traditional random-conformational-search model, various hypotheses with a series of meta-stable intermediate states were often proposed to resolve the Levinthal paradox. Here we introduce a quantum strategy that formulates protein folding as a quantum walk on a definite graph, which provides a general framework without making such hypotheses. Evaluating it by the mean first passage time, we find that the folding time via our quantum approach is much shorter than the one obtained via classical random walks. This idea is expected to evoke more insights for future studies.
arxiv:1906.09184
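For reference, the classical baseline quantity in the abstract above is straightforward to compute: a minimal sketch (with an assumed toy transition matrix) of the mean first passage time of a classical random walk to a target node, obtained by solving a linear system.

```python
import numpy as np

def mean_first_passage_time(P, target):
    """Mean first passage time to `target` for a Markov chain with
    transition matrix P: solve (I - Q) m = 1 over the non-target states."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]            # chain restricted to non-target states
    m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    out = np.zeros(n)
    out[keep] = m                        # MFPT from the target itself is 0
    return out

# Toy example: unbiased walk on a 4-cycle, target node 0 (assumed for illustration).
P = np.array([[0, .5, 0, .5], [.5, 0, .5, 0], [0, .5, 0, .5], [.5, 0, .5, 0]])
print(mean_first_passage_time(P, target=0))  # [0, 3, 4, 3]
```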
The mean square atomic displacement in hcp-phase solid He-4 has been measured in crystals with a molar volume of 21.3 cm^3. It is temperature independent from 1 K to 140 mK, with no evidence for an anomaly in the vicinity of the proposed supersolid transition. The mean square displacement is different for in-plane motions (0.122 +/- 0.001 Å^2) and out-of-plane motions (0.150 +/- 0.001 Å^2).
arxiv:cond-mat/0702537
For a graph class $\mathcal{G}$ and a graph $H$, the four $\mathcal{G}$-covering numbers of $H$, namely global ${\rm cn}_{g}^{\mathcal{G}}(H)$, union ${\rm cn}_{u}^{\mathcal{G}}(H)$, local ${\rm cn}_{l}^{\mathcal{G}}(H)$, and folded ${\rm cn}_{f}^{\mathcal{G}}(H)$, each measure in a slightly different way how well $H$ can be covered with graphs from $\mathcal{G}$. For every $\mathcal{G}$ and $H$ it holds that \[ {\rm cn}_{g}^{\mathcal{G}}(H) \geq {\rm cn}_{u}^{\mathcal{G}}(H) \geq {\rm cn}_{l}^{\mathcal{G}}(H) \geq {\rm cn}_{f}^{\mathcal{G}}(H) \] and in general each inequality can be arbitrarily far apart. We investigate structural properties of graph classes $\mathcal{G}$ and $\mathcal{H}$ such that for all graphs $H \in \mathcal{H}$, a larger $\mathcal{G}$-covering number of $H$ can be bounded in terms of a smaller $\mathcal{G}$-covering number of $H$. For example, we prove that if $\mathcal{G}$ is hereditary and the chromatic number of graphs in $\mathcal{H}$ is bounded, then there exists a function $f$ (called a binding function) such that for all $H \in \mathcal{H}$ it holds that ${\rm cn}_{u}^{\mathcal{G}}(H) \leq f({\rm cn}_{g}^{\mathcal{G}}(H))$. For $\mathcal{G}$ we consider graph classes that are component-closed, hereditary, monotone, sparse, or of bounded chromatic number. For $\mathcal{H}$ we
arxiv:2504.17458
We report on a theoretical study of the hidden-charm $N^*_{c\bar{c}}$ states in the $\gamma p \to \bar{D}^{*0} \Lambda^+_c$ reaction near threshold within an effective Lagrangian approach. In addition to the contributions from the $s$-channel nucleon pole, the $t$-channel $D^0$ exchange, the $u$-channel $\Lambda^+_c$ exchange and the contact term, we study the contributions from the $N^*_{c\bar{c}}$ states with spin-parity $J^P = 1/2^-$ and $3/2^-$. The total and differential cross sections of the $\gamma p \to \bar{D}^{*0} \Lambda^+_c$ reaction are predicted. It is found that the contributions of these $N^*_{c\bar{c}}$ states give clear peak structures in the total cross sections. Thus, this reaction is another new platform to study the hidden-charm states. It is expected that our model calculation may be tested by future experiments.
arxiv:1604.05969
Many graph algorithms can be viewed as sets of rules that are iteratively applied, with the number of iterations dependent on the size and complexity of the input graph. Existing machine learning architectures often struggle to represent these algorithmic decisions as discrete state transitions. Therefore, we propose a novel framework: GraphFSA (Graph Finite State Automaton). GraphFSA is designed to learn a finite state automaton that runs on each node of a given graph. We test GraphFSA on cellular automata problems, showcasing its abilities in a straightforward algorithmic setting. For a comprehensive empirical evaluation of our framework, we create a diverse range of synthetic problems. As our main application, we then focus on learning more elaborate graph algorithms. Our findings suggest that GraphFSA exhibits strong generalization and extrapolation abilities, presenting an alternative approach to represent these algorithms.
arxiv:2408.11042
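To fix ideas about the execution model described in the abstract above: every node holds a discrete state, and a shared transition function maps (own state, aggregated neighbor states) to the next state. A minimal sketch follows; the transition table is an illustrative assumption, not GraphFSA's learned automaton.

```python
def run_graph_fsa(adj, states, delta, steps):
    """Synchronously run a finite state automaton on every node of a graph.
    adj:    {node: [neighbors]}
    states: {node: initial discrete state}
    delta:  transition function (state, sorted tuple of neighbor states) -> state
    """
    for _ in range(steps):
        states = {
            v: delta(states[v], tuple(sorted(states[u] for u in adj[v])))
            for v in adj
        }
    return states

# Toy automaton (assumed for illustration): a node becomes 1 once any neighbor is 1.
delta = lambda s, nbrs: 1 if s == 1 or 1 in nbrs else 0
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(run_graph_fsa(adj, {0: 1, 1: 0, 2: 0, 3: 0}, delta, steps=3))
# {0: 1, 1: 1, 2: 1, 3: 1}
```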
Language representations are efficient tools used across NLP applications, but they are rife with encoded societal biases. These biases have been studied extensively, but with a primary focus on English language representations and biases common in the context of Western society. In this work, we investigate biases present in Hindi language representations, focusing on caste- and religion-associated biases. We demonstrate how biases are unique to specific language representations based on the history and culture of the region they are widely spoken in, and how the same societal bias (such as binary gender-associated biases) is encoded by different words and text spans across languages. The findings of our work highlight the necessity of cultural awareness and attention to linguistic artifacts when modeling language representations, in order to better understand the encoded biases.
arxiv:2110.07871
We report on the possibilities of using the method of normal fundamental systems for solving some problems of oscillation theory. Large elastic dynamical systems with continuous and discrete parameters are considered, which have many different engineering applications. Intensive oscillations in such systems are possible, but not desirable. Therefore, it is very important to obtain conditions under which oscillations do or do not take place. Mathematically, one needs to search for the solutions of partial differential equations satisfying both boundary and conjugation conditions. In this paper we overview the methodology of normal fundamental systems for the study of such oscillation problems, which provides an efficient and reliable computational method. The obtained results make it possible to analyze the influence of different system parameters on oscillations, as well as to compute the optimal feedback parameters for the active vibration control of the systems.
arxiv:math/0607203
Our traditional notion of a cell is changing dramatically given the increasing degree of heterogeneity in 4G and emerging 5G systems. Rather than belonging to a specific cell, a device would choose the most suitable connection from the plethora of connections available. In such a setting, given that transmission powers differ significantly between downlink (DL) and uplink (UL), a wireless device that sees multiple base stations (BSs) may access the infrastructure in such a way that it receives DL traffic from one BS and sends UL traffic through another BS. This situation is referred to as downlink and uplink decoupling (DUDe). In this paper, the capacity and throughput gains brought by decoupling are rigorously derived using stochastic geometry. Theoretical findings are then corroborated by means of simulation results. A further contribution of this paper is the verification of the theoretically derived results by means of a real-world system simulation platform. Despite the theoretical assumptions differing from the very detailed system simulator, the trends in the association probabilities and capacity gains are similar. Based on the promising results, we then outline the architectural changes needed to facilitate the decoupling of DL and UL.
arxiv:1410.7270
We investigate theoretically the phase diagram of a classical Heisenberg antiferromagnet on the pyrochlore lattice perturbed by a weak second-neighbor interaction J_2. The huge ground state degeneracy of the nearest-neighbor Heisenberg spins is lifted by J_2, and a magnetically ordered ground state sets in upon approaching zero temperature. We have found a new, partially ordered phase with collinear spins at finite temperatures for a ferromagnetic J_2. In addition to a large nematic order parameter, this intermediate phase also exhibits a layered structure and a bond order that breaks the sublattice symmetry. Thermodynamic phase boundaries separating it from the fully disordered and magnetically ordered states scale as 1.87 J_2 S^2 and 0.26 J_2 S^2 in the limit of small J_2. The phase transitions are discontinuous. We analytically examine the local stability of the collinear state and obtain a boundary T ~ J_2^2 / J_1, in agreement with Monte Carlo simulations.
arxiv:0803.2332
The global activity fields of a nuclear core can be reconstructed using data assimilation. Data assimilation makes it possible to combine measurements from instruments with information from a model, to evaluate the best possible activity within the core. We present and apply a specific procedure which evaluates this influence by adding or removing instruments in a given measurement network (possibly empty). The study of various network configurations of instruments in the nuclear core establishes that the influence of the instruments depends both on the individual instrument locations and on the chosen network.
arxiv:1006.0819
The widespread collection and sharing of location data, even in aggregated form, raises major privacy concerns. Previous studies used meta-classifier-based membership inference attacks (MIAs) with multi-layer perceptrons (MLPs) to estimate privacy risks in location data, including when protected by differential privacy (DP). In this work, however, we show that a significant gap exists between the expected attack accuracy given by DP and the empirical attack accuracy even with informed attackers (also known as DP attackers), indicating a potential underestimation of the privacy risk. To explore the potential causes for the observed gap, we first propose two new metric-based MIAs: the one-threshold attack and the two-threshold attack. We evaluate their performance on real-world location data and find that different data distributions require different attack strategies for optimal performance: the one-threshold attack is more effective with Gaussian DP noise, while the two-threshold attack performs better with Laplace DP noise. Comparing their performance with one of the MLP-based attack models in previous works shows that the MLP only learns the one-threshold rule, leading to suboptimal performance under Laplace DP noise and an underestimation of the privacy risk. Second, we theoretically prove that MLPs can encode complex rules (e.g., the two-threshold attack rule), which can be learned when given a substantial amount of training data. We conclude by discussing the implications of our findings in practice, including broader applications extending beyond location aggregates to any differentially private datasets containing multiple observations per individual, and how techniques such as synthetic data generation and pre-training might enable MLPs to learn more complex optimal rules.
arxiv:2412.20456
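A minimal sketch of what the metric-based threshold attacks in the abstract above look like; the score convention (higher means more member-like) and the threshold values are illustrative assumptions, not the paper's calibrated attacks.

```python
import numpy as np

def one_threshold_attack(scores, t):
    """Predict 'member' when the per-record score exceeds a single threshold."""
    return scores > t

def two_threshold_attack(scores, t_low, t_high):
    """Predict 'member' when the score falls outside a central interval;
    a two-sided rule of this kind can outperform a single threshold when
    the noise is Laplace-like rather than Gaussian."""
    return (scores < t_low) | (scores > t_high)

# Toy usage with assumed scores and thresholds.
scores = np.array([-2.1, 0.3, 1.7, 4.2])
print(one_threshold_attack(scores, t=1.0))                   # [False False  True  True]
print(two_threshold_attack(scores, t_low=-1.0, t_high=2.0))  # [ True False False  True]
```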
Machine learning models for graphs in real-world applications are prone to two primary types of uncertainty: (1) those that arise from incomplete and noisy data and (2) those that arise from uncertainty of the model in its output. These sources of uncertainty are not mutually exclusive. Additionally, models are susceptible to targeted adversarial attacks, which exacerbate both of these uncertainties. In this work, we introduce Radius Enhanced Graph Embeddings (REGE), an approach that measures and incorporates uncertainty in data to produce graph embeddings with radius values that represent the uncertainty of the model's output. REGE employs curriculum learning to incorporate data uncertainty and conformal learning to address the uncertainty in the model's output. In our experiments, we show that REGE's graph embeddings perform better under adversarial attacks by an average of 1.5% (accuracy) against state-of-the-art methods.
arxiv:2412.05735
A combination of spin-orbit coupling and electron-electron interaction gives rise to a new type of collective spin modes, which correspond to oscillations of magnetization even in the absence of an external magnetic field. We review recent progress in the theoretical understanding and experimental observation of such modes, focusing on three examples of real-life systems: a two-dimensional electron gas with Rashba and/or Dresselhaus spin-orbit coupling, graphene with proximity-induced spin-orbit coupling, and the Dirac state on the surface of a three-dimensional topological insulator. This paper is dedicated to the 95th birthday of Professor Emmanuel I. Rashba.
arxiv:2208.05123
Motivated by the ongoing discussion about a seeming asymmetry in the performance of fermionic and bosonic replicas, we present an exact, nonperturbative approach to zero-dimensional replica field theories belonging to the broadly interpreted "beta = 2" Dyson symmetry class. We then utilise the formalism developed to demonstrate that the bosonic replicas do correctly reproduce the microscopic spectral density in the QCD-inspired chiral Gaussian unitary ensemble. This disproves the myth that the bosonic replica field theories are intrinsically faulty.
arxiv:0704.2968
} $ erg. If the bubble is approximated by an adiabatic spherical shock wave, the age is estimated to be $t \sim 2/5\, r_{\rm bub}/v_{\rm exp} \sim 7.2 \times 10^4$ yr. Neither non-thermal radio structures nor thermal radio emission indicative of an HII region are found in the archival data from MeerKAT. We suggest that the molecular bubble is a dark supernova remnant buried in the Brick, which therefore experienced massive-star formation with a supernova explosion in the past ($\sim 0.1$ Myr ago).
arxiv:2404.11892
The high-luminosity LHC is expected to provide $3\,\mathrm{ab}^{-1}$ of integrated luminosity. As a result, billions of events containing top quarks will be detected at the CMS and ATLAS experiments, allowing for precise measurements of the top quark properties. The experimental challenges that will be faced in a high-luminosity environment, with a special focus on top-quark-related observables, are examined. We discuss prospects for measuring top quark anomalous couplings at the HL-LHC. Projections for detecting flavor-changing neutral currents involving top quarks are also reviewed.
arxiv:1512.04807
We show that a particular "universal" form for the soft-breaking couplings in a softly broken $N = 1$ supersymmetric gauge theory is renormalisation-group invariant through two loops, provided we impose one simple condition on the dimensionless couplings. The universal form for the trilinear couplings and mass terms is identical to that found in popular derivations of the soft-breaking terms from strings or supergravity.
arxiv:hep-ph/9501395
Energy efficiency of electronic digital processors is primarily limited by the energy consumption of electronic communication and interconnects. The industry is almost unanimously pushing towards replacing both long-haul and local chip interconnects with optics to drastically increase efficiency. In this paper, we explore what comes after the successful migration to optical interconnects, as with this inefficiency solved, the main sources of energy consumption will be electronic digital computing, memory and electro-optical conversion. Our approach attempts to address all these issues by introducing efficient all-optical digital computing and memory, which in turn eliminates the need for electro-optical conversions. Here, we demonstrate for the first time a scheme to enable general-purpose digital data processing in an integrated form and present our photonic integrated circuit (PIC) implementation. For this demonstration we implemented a URISC architecture capable of running any classical piece of software all-optically, and we present a comprehensive architectural framework for all-optical computing that goes beyond it.
arxiv:2403.00045
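For readers unfamiliar with the URISC (ultimate RISC) architecture mentioned in the abstract above: such machines execute a single instruction type, so any program compiles to one opcode. Below is a minimal Python sketch of the `subleq` one-instruction computer, a common stand-in for URISC-style computing; the paper's actual instruction and photonic implementation may differ.

```python
def run_subleq(mem, pc=0):
    """One-instruction computer: subleq a, b, c means
    mem[b] -= mem[a]; if mem[b] <= 0, jump to c. Halt on out-of-range pc."""
    while 0 <= pc < len(mem):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Toy program (assumed layout): adds mem[9] into mem[10] via two subtractions
# using mem[11] as a scratch cell, then halts by jumping to -1.
mem = [9, 11, 3,    # scratch -= x   (scratch = -x)
       11, 10, 6,   # y -= scratch   (y += x)
       11, 11, -1,  # clear scratch, halt
       5, 7, 0]     # data: x = 5, y = 7, scratch = 0
print(run_subleq(mem)[10])  # 12
```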
An ultragraph gives rise to a labelled graph with some particular properties. In this paper we describe the algebras associated to such labelled graphs as groupoid algebras. More precisely, we show that the known groupoid algebra realization of ultragraph C*-algebras is only valid for ultragraphs for which the range of each edge is finite, and we extend this realization to arbitrary ultragraphs (including ultragraphs with sinks). Using our machinery, we characterize the shift space associated to an ultragraph as the tight spectrum of the inverse semigroup associated to the ultragraph (viewed as a labelled graph). Furthermore, in the purely algebraic setting, we show that the algebraic partial action used to describe an ultragraph Leavitt path algebra as a partial skew group ring is equivalent to the dual of a topological partial action, and we use this to describe ultragraph Leavitt path algebras as Steinberg algebras. Finally, we prove generalized uniqueness theorems for both ultragraph C*-algebras and ultragraph Leavitt path algebras and characterize their abelian core subalgebras.
arxiv:2009.01357
We show that the known expressions for the force on a point-like dipole are incompatible with the relativistic transformation of force, and in this respect we apply the Lagrangian approach to the derivation of the correct equation for the force on a small electric/magnetic dipole. The obtained expression for the generalized momentum of a moving dipole predicts two novel quantum effects with non-topological and non-dynamic phases, when an electric dipole is moving in an electric field, and when a magnetic dipole is moving in a magnetic field, correspondingly. The implications of the obtained results are discussed.
arxiv:1511.04341
Design science research (DSR) is a research paradigm focusing on the development and validation of prescriptive knowledge in information science. Herbert Simon distinguished the natural sciences, concerned with explaining how things are, from design sciences, which are concerned with how things ought to be, that is, with devising artifacts to attain goals. Design science research methodology (DSRM) refers to the research methodologies associated with this paradigm. It spans the methodologies of several research disciplines, for example information technology, which offers specific guidelines for evaluation and iteration within research projects. DSR focuses on the development and performance of (designed) artifacts with the explicit intention of improving the functional performance of the artifact. DSRM is typically applied to categories of artifacts including algorithms, human/computer interfaces, design methodologies (including process models) and languages. Its application is most notable in the engineering and computer science disciplines, though it is not restricted to these and can be found in many disciplines and fields. DSR, or constructive research, in contrast to explanatory science research, has academic research objectives generally of a more pragmatic nature. Research in these disciplines can be seen as a quest for understanding and improving human performance. Such renowned research institutions as the MIT Media Lab, Stanford University's Center for Design Research, Carnegie Mellon University's Software Engineering Institute, Xerox's PARC, and Brunel University London's Organisation and System Design Centre use the DSR approach. Design science is a valid research methodology to develop solutions for practical engineering problems, and it is particularly suitable for wicked problems. == Objectives == The main goal of DSR is to develop knowledge that professionals of the discipline in question can use to design solutions for their field problems. Design sciences focus on the process of making choices about what is possible and useful for the creation of possible futures, rather than on what currently exists. This mission can be compared to that of the 'explanatory sciences', like the natural sciences and sociology, whose mission is to develop knowledge to describe, explain and predict. Hevner states that the main purpose of DSR is achieving knowledge and understanding of a problem domain by the building and application of a designed artifact. == Evolution and applications == Since the first days of computer science, computer scientists have been doing DSR without naming it. They have developed new architectures for computers, new programming languages, new compilers, new algorithms, new data and file structures, new data models, new database management systems, and so on. Much
https://en.wikipedia.org/wiki/Design_science_(methodology)
help of wires, cables or any other forms of electrical conductors. Wireless operations permit services, such as long-range communications, that are impossible or impractical to implement with the use of wires. The term is commonly used in the telecommunications industry to refer to telecommunications systems (e.g. radio transmitters and receivers, remote controls, etc.) which use some form of energy (e.g. radio waves, acoustic energy, etc.) to transfer information without the use of wires. Information is transferred in this manner over both short and long distances. == Roles == === Telecom equipment engineer === A telecom equipment engineer is an electronics engineer who designs equipment such as routers, switches, multiplexers, and other specialized computer/electronics equipment designed to be used in the telecommunication network infrastructure. === Network engineer === A network engineer is a computer engineer who is in charge of designing, deploying and maintaining computer networks. In addition, they oversee network operations from a network operations center, design backbone infrastructure, or supervise interconnections in a data center. === Central-office engineer === A central-office engineer is responsible for designing and overseeing the implementation of telecommunications equipment in a central office (CO for short), also referred to as a wire center or telephone exchange. A CO engineer is responsible for integrating new technology into the existing network, assigning the equipment's location in the wire center, and providing power, clocking (for digital equipment), and alarm monitoring facilities for the new equipment. The CO engineer is also responsible for providing more power, clocking, and alarm monitoring facilities if there are currently not enough available to support the new equipment being installed. Finally, the CO engineer is responsible for designing how the massive amounts of cable will be distributed to various equipment and wiring frames throughout the wire center and overseeing the installation and turn-up of all new equipment. ==== Sub-roles ==== As structural engineers, CO engineers are responsible for the structural design and placement of racking and bays for the equipment to be installed in, as well as for the plant to be placed on. As electrical engineers, CO engineers are responsible for the resistance, capacitance, and inductance (RCL) design of all new plant to ensure telephone service is clear and crisp and data service is clean as well as reliable. Attenuation, or gradual loss in intensity, and loop loss calculations are required to determine the cable length and size required to provide the service called for.
https://en.wikipedia.org/wiki/Telecommunications_engineering
The cellular Potts model (CPM) is a powerful computational method for simulating collective spatiotemporal dynamics of biological cells. To drive the dynamics, CPMs rely on physics-inspired Hamiltonians. However, as first principles remain elusive in biology, these Hamiltonians only approximate the full complexity of real multicellular systems. To address this limitation, we propose NeuralCPM, a more expressive cellular Potts model that can be trained directly on observational data. At the core of NeuralCPM lies the neural Hamiltonian, a neural network architecture that respects universal symmetries in collective cellular dynamics. Moreover, this approach enables seamless integration of domain knowledge by combining known biological mechanisms and the expressive neural Hamiltonian into a hybrid model. Our evaluation with synthetic and real-world multicellular systems demonstrates that NeuralCPM is able to model cellular dynamics that cannot be accounted for by traditional analytical Hamiltonians.
arxiv:2502.02129
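For reference, the kind of analytical Hamiltonian that the neural Hamiltonian above generalizes: the classic CPM energy combines boundary adhesion with a cell-area constraint. The notation below is the standard textbook form, not necessarily the exact baseline used in this paper:
\[
H \;=\; \sum_{\langle i,j \rangle} J\bigl(\tau(\sigma_i), \tau(\sigma_j)\bigr)\,\bigl(1 - \delta_{\sigma_i,\sigma_j}\bigr)
\;+\; \lambda \sum_{\sigma} \bigl(a_\sigma - A_\sigma\bigr)^2 ,
\]
where $\sigma_i$ is the cell index at lattice site $i$, $\tau$ the cell type, $J$ the contact energy between types, and $a_\sigma$, $A_\sigma$ the actual and target cell areas; the dynamics proceed by Metropolis-style spin-copy attempts that lower $H$ on average.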
In our high-mobility p-type AlGaAs/GaAs two-dimensional hole samples, we observe for the first time B-periodic oscillations induced by microwaves (MW) in photovoltage (PV) measurements. In the frequency range of our measurements (5-40 GHz), the period (ΔB) is inversely proportional to the microwave frequency (f). The distinct oscillations come from the edge magnetoplasmon (EMP) in the high-quality heavy-hole system. In our hole sample with a very large effective mass, the observation of the EMP oscillations is in neither the low-frequency limit nor the high-frequency limit, and the damping of the EMP oscillations is very weak under high magnetic fields. Simultaneously, we observe giant plasmon resonance signals in our measurements on the shallow two-dimensional hole system (2DHS).
arxiv:1606.09356
We study a generalized Fermat-Torricelli (S.FT) problem for infinitesimal geodesic triangles on a C^2 complete surface M with variable Gaussian curvature a < K < b, for a, b in R, such that the intersection point (generalized Fermat-Torricelli point) of the three geodesics acquires a positive real number (subconscious). The solution of the S.FT problem is a generalized Fermat-Torricelli tree with one node that has acquired a subconscious. This solution is based on a new variational method of the length of a geodesic arc with respect to arc length, which coincides with the first variational formula for geodesics on a surface with K < 0, or 0 < K < c. The 'plasticity' solution of the inverse S.FT problem gives a connection of the absolute value of the Gaussian curvature K(F) at the generalized Fermat-Torricelli point F with the absolute value of the Aleksandrov curvature of the geodesic triangle, with both of them acquiring the subconscious of the g.FT point.
arxiv:2004.04246
Efficient approximation of geodesics is crucial for practical algorithms on manifolds. Here we introduce a class of retractions on submanifolds, induced by a foliation of the ambient manifold. They match the projective retraction to the third order and thus match the exponential map to the second order. In particular, we show that Newton retraction (NR) is always more stable than the popular approach known as oblique projection or orthographic retraction: per Kantorovich-type convergence theorems, the superlinear convergence regions of NR include those of the latter. We also show that NR always has a lower computational cost. The preferable properties of NR make it useful for optimization, sampling, and many other statistical problems on manifolds.
arxiv:2006.14751
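A minimal numerical sketch of what a Newton-style retraction does for a level-set submanifold {x : c(x) = 0}: starting from an off-manifold point x + v, iterate Newton steps along the constraint gradient until the constraint is satisfied. The sphere example and tolerances are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def newton_retraction(c, grad_c, y, tol=1e-12, max_iter=50):
    """Retract the (generally off-manifold) point y onto {x : c(x) = 0}
    by Newton iterations along the constraint gradient direction."""
    for _ in range(max_iter):
        r = c(y)
        if abs(r) < tol:
            break
        g = grad_c(y)
        # Newton step for t in c(y + t*g) = 0, linearized: r + t*(g.g) = 0.
        y = y - (r / np.dot(g, g)) * g
    return y

# Example: unit sphere c(x) = |x|^2 - 1 (assumed for illustration).
c = lambda x: np.dot(x, x) - 1.0
grad_c = lambda x: 2.0 * x
x, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.4, 0.0])
print(newton_retraction(c, grad_c, x + v))  # a unit-norm point near x + v
```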
The goal of scene graph generation is to predict a graph from an input image, where nodes correspond to identified and localized objects and edges to their corresponding interaction predicates. Existing methods are trained in a fully supervised manner and focus on message passing mechanisms, loss functions, and/or bias mitigation. In this work we introduce a simple yet effective self-supervised relational alignment regularization designed to improve the scene graph generation performance. The proposed alignment is general and can be combined with any existing scene graph generation framework, where it is trained alongside the original model's objective. The alignment is achieved through distillation, where an auxiliary relation prediction branch, which mirrors and shares parameters with the supervised counterpart, is designed. In the auxiliary branch, relational input features are partially masked prior to message passing and predicate prediction. The predictions for masked relations are then aligned with the supervised counterparts after the message passing. We illustrate the effectiveness of this self-supervised relational alignment in conjunction with two scene graph generation architectures, SGTR and Neural Motifs, and show that in both cases we achieve significantly improved performance.
arxiv:2302.01403
In this paper, we present a novel workflow consisting of algebraic algorithms and data structures for fast and topologically accurate conversion of vector data models such as boundary representations into voxels (topological voxelization); spatially indexing them; constructing connectivity graphs from voxels; and constructing a coherent set of multivariate differential and integral operators from these graphs. Topological voxelization is revisited and presented in the paper as a reversible mapping of geometric models from $\mathbb{R}^3$ to $\mathbb{Z}^3$ to $\mathbb{N}^3$ and eventually to an index space created by Morton codes in $\mathbb{N}$, while ensuring the topological validity of the voxel models, namely their topological thinness and their geometrical consistency. In addition, we present algorithms for constructing graph and hypergraph connectivity models on voxel data for graph traversal and field interpolations, and utilize them algebraically in elegantly discretizing differential and integral operators for geometric, graphical, or spatial analyses and digital simulations. The multivariate differential and integral operators presented in this paper can be used particularly in the formulation of partial differential equations for physics simulations.
arxiv:2309.15472
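As an illustration of the last mapping step in the abstract above (into $\mathbb{N}$ via Morton codes), here is a minimal sketch of 3D Morton encoding by bit interleaving; the 21-bit-per-axis layout is an illustrative assumption, not the paper's exact indexing scheme.

```python
def part1by2(n: int) -> int:
    """Spread the low 21 bits of n so that two zero bits separate each bit."""
    n &= (1 << 21) - 1
    n = (n | (n << 32)) & 0x1F00000000FFFF
    n = (n | (n << 16)) & 0x1F0000FF0000FF
    n = (n | (n << 8))  & 0x100F00F00F00F00F
    n = (n | (n << 4))  & 0x10C30C30C30C30C3
    n = (n | (n << 2))  & 0x1249249249249249
    return n

def morton3d(x: int, y: int, z: int) -> int:
    """Interleave the bits of voxel coordinates (x, y, z) into one Morton
    index, preserving spatial locality for indexing and traversal."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

print(morton3d(1, 0, 0), morton3d(0, 1, 0), morton3d(0, 0, 1))  # 1 2 4
print(morton3d(3, 3, 3))  # 63 (all six low bits set)
```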
Rotational temperatures T_rot derived from lines of the same OH band are an important method to study the mesopause region near 87 km. To measure realistic temperatures, the rotational level populations have to be in local thermodynamic equilibrium (LTE). However, this might not be fulfilled, especially at high emission altitudes. In order to quantify possible non-LTE contributions to the OH T_rot as a function of the upper vibrational level v', we studied a sample of 343 echelle spectra taken with the X-shooter spectrograph at the Very Large Telescope at Cerro Paranal in Chile. These data allowed us to analyse 25 OH bands in each spectrum. Moreover, we could measure lines of O2b(0-1), which peaks at 94 to 95 km, and O2a(0-0), with an emission peak at about 90 km. Since the radiative lifetimes are relatively long, the derived O2 T_rot are not significantly affected by non-LTE contributions. For a comparison with OH, the differences in the emission profiles were corrected by using OH emission, O2a(0-0) emission, and CO2-based temperature profile data from the multi-channel radiometer SABER on the TIMED satellite. For a reference profile at 90 km, we found good agreement of the O2 with the SABER-related temperatures, whereas the OH temperatures, especially for the high and even v', showed significant excesses with a maximum of more than 10 K for v' = 8. We also found a nocturnal trend towards higher non-LTE effects, particularly for high v'.
arxiv:1604.03961
We study higher uniformity properties of the von Mangoldt function $\Lambda$, the Möbius function $\mu$, and the divisor functions $d_k$ on short intervals $(x, x+h]$ for almost all $x \in [X, 2X]$. Let $\Lambda^\sharp$ and $d_k^\sharp$ be suitable approximants of $\Lambda$ and $d_k$, $G/\Gamma$ a filtered nilmanifold, and $F \colon G/\Gamma \to \mathbb{C}$ a Lipschitz function. Then our results imply for instance that when $X^{1/3+\varepsilon} \leq h \leq X$ we have, for almost all $x \in [X, 2X]$, \[ \sup_{g \in \mathrm{Poly}(\mathbb{Z} \to G)} \left| \sum_{x < n \leq x+h} (\Lambda(n) - \Lambda^\sharp(n)) \overline{F}(g(n)\Gamma) \right| \ll h \log^{-A} X \] for any fixed $A > 0$, and that when $X^{\varepsilon} \leq h \leq X$ we have, for almost all $x \in [X, 2X]$, \[ \sup_{g \in \mathrm{Poly}(\mathbb{Z} \to G)} \left| \sum_{x < n \leq x+h} (d_k(n) - d_k^\sharp(n)) \overline{F}(g(n)\Gamma) \right| = o(h \log^{k-1} X). \] As a consequence, we show that the short interval Gowers norms $\|\Lambda - \Lambda^\sharp\|_{U^s(x, x+h]}$ and $\|d_k - d_k^\sharp\|_{U^s(x, x+h]}$ are also asymptotically small for any fixed $s$ in the same ranges
arxiv:2411.05770
The detection of abnormal areas from urban data is a significant research problem. However, to the best of our knowledge, previous methods designed for spatio-temporal anomalies are road-based or grid-based, which usually causes a data sparsity problem and affects the detection results. In this paper, we propose a dynamic region partition method to address the above issues. Besides, we propose an unsupervised regional anomaly detection framework (READ) to detect abnormal regions with arbitrary shapes by jointly considering spatial and temporal properties. Specifically, the proposed framework first generates regions via the dynamic region partition method. It ensures that observations in the same region have adjacent locations and similar non-spatial attribute readings, and it alleviates data sparsity and heterogeneity compared with grid-based approaches. Then, an anomaly metric is calculated for each region by a regional divergence calculation method. The abnormal regions can finally be detected by a weighted approach or a wavy approach depending on the scenario. Experiments on both a simulated dataset and real-world applications demonstrate the effectiveness and practicability of the proposed framework.
arxiv:2007.06794
years ahead of its time. === Modern === Francis Bacon (no direct relation to Roger Bacon, who lived 300 years earlier) was a seminal figure in philosophy of science at the time of the Scientific Revolution. In his work Novum Organum (1620), an allusion to Aristotle's Organon, Bacon outlined a new system of logic to improve upon the old philosophical process of syllogism. Bacon's method relied on experimental histories to eliminate alternative theories. In 1637, René Descartes established a new framework for grounding scientific knowledge in his treatise, Discourse on Method, advocating the central role of reason as opposed to sensory experience. By contrast, in 1713, the 2nd edition of Isaac Newton's Philosophiae Naturalis Principia Mathematica argued that "... hypotheses... have no place in experimental philosophy. In this philosophy[,] propositions are deduced from the phenomena and rendered general by induction." This passage influenced a "later generation of philosophically-inclined readers to pronounce a ban on causal hypotheses in natural philosophy". In particular, later in the 18th century, David Hume would famously articulate skepticism about the ability of science to determine causality and gave a definitive formulation of the problem of induction, though both theses would be contested by the end of the 18th century by Immanuel Kant in his Critique of Pure Reason and Metaphysical Foundations of Natural Science. In the 19th century, Auguste Comte made a major contribution to the theory of science. The 19th-century writings of John Stuart Mill are also considered important in the formation of current conceptions of the scientific method, as well as anticipating later accounts of scientific explanation. === Logical positivism === Instrumentalism became popular among physicists around the turn of the 20th century, after which logical positivism defined the field for several decades. Logical positivism accepts only testable statements as meaningful, rejects metaphysical interpretations, and embraces verificationism (a set of theories of knowledge that combines logicism, empiricism, and linguistics to ground philosophy on a basis consistent with examples from the empirical sciences). Seeking to overhaul all of philosophy and convert it to a new scientific philosophy, the Berlin Circle and the Vienna Circle propounded logical positivism in the late 1920s. Interpreting Ludwig Wittgenstein's early philosophy of language, logical positivists identified a verifiability principle or criterion of cognitive meaningfulness. From Bertrand Russell
https://en.wikipedia.org/wiki/Philosophy_of_science
The Fe electronic structure and magnetism in (i) monoclinic Ca$_2$FeReO$_6$, with a metal-insulator transition at $T_{MI} \sim 140$ K, and (ii) quasi-cubic half-metallic Ba$_2$FeReO$_6$ ceramic double perovskites are probed by soft X-ray absorption spectroscopy (XAS) and magnetic circular dichroism (XMCD). These materials show distinct Fe $L_{2,3}$ XAS and XMCD spectra, which are primarily associated with their different average Fe oxidation states (close to Fe$^{3+}$ for Ca$_2$FeReO$_6$ and intermediate between Fe$^{2+}$ and Fe$^{3+}$ for Ba$_2$FeReO$_6$) despite being related by an isoelectronic (Ca$^{2+}$/Ba$^{2+}$) substitution. For Ca$_2$FeReO$_6$, the powder-averaged Fe spin moment along the field direction ($B = 5$ T), as probed by the XMCD experiment, is strongly reduced in comparison with the spontaneous Fe moment previously obtained by neutron diffraction, consistent with a scenario where the magnetic moments are constrained to remain within an easy plane. For $B = 1$ T, the unsaturated XMCD signal is reduced below $T_{MI}$, consistent with a magnetic transition to an easy-axis state that further reduces the powder-averaged magnetization in the field direction. For Ba$_2$FeReO$_6$, the field-aligned Fe spins are larger than for Ca$_2$FeReO$_6$ ($B = 5$ T) and the temperature dependence of the Fe magnetic moment is consistent with the magnetic ordering transition at $T_C^{Ba} = 305$ K. Our results illustrate the dramatic influence of the specific spin-orbital configuration of Re $5d$ electrons on the Fe $3d$ local magnetism of these Fe/Re double perovskites.
arxiv:1905.04988
DNA autoionization is a fundamental process wherein UV-photoexcited nucleobases dissipate energy by charge transfer to the environment without undergoing chemical damage. Here, single-wall carbon nanotubes (SWNTs) are explored as a photoluminescent reporter for studying the mechanism and rates of DNA autoionization. Two-color photoluminescence spectroscopy allows separate photoexcitation of the DNA and the SWNTs in the UV and visible range, respectively. A strong SWNT photoluminescence quenching is observed when the UV pump is resonant with the DNA absorption, consistent with charge transfer from the excited states of the DNA to the SWNT. Semiempirical calculations of the DNA-SWNT electronic structure, combined with a Green's function theory for charge transfer, show a 20 fs autoionization rate, dominated by hole transfer. Rate-equation analysis of the spectroscopy data confirms that the quenching rate is limited by the thermalization of the free charge carriers transferred to the nanotube reservoir. The developed approach has great potential for monitoring DNA excitation, autoionization, and chemical damage both in vivo and in vitro.
arxiv:1512.00710
In this paper, we provide an introduction to derivative-free optimization algorithms which can potentially be applied to train deep learning models. Existing deep learning model training is mostly based on the back-propagation algorithm, which updates the model variables layer by layer with the gradient descent algorithm or its variants. However, the objective functions of deep learning models to be optimized are usually non-convex, and gradient descent algorithms based on the first-order derivative can easily get stuck in local optima. To resolve this problem, various local and global optimization algorithms have been proposed, which can greatly improve the training of deep learning models. Representative examples include Bayesian methods, the Shubert-Piyavskii algorithm, DIRECT, LIPO, MCS, GA, SCE, DE, PSO, ES, CMA-ES, hill climbing and simulated annealing, etc. This is a follow-up paper to [18]; here we introduce the population-based optimization algorithms, e.g., GA, SCE, DE, PSO, ES and CMA-ES, and the random search algorithms, e.g., hill climbing and simulated annealing. For an introduction to the other derivative-free optimization algorithms, please refer to [18] for more information.
arxiv:1904.09368
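As a flavor of the random search family covered in the abstract above, a minimal simulated annealing sketch for a generic objective; the Gaussian proposal, geometric cooling schedule and parameter values are illustrative assumptions.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    """Minimize f by random perturbations, accepting uphill moves with
    probability exp(-delta/T) under a geometrically cooling temperature."""
    x, fx, t = x0, f(x0), t0
    for _ in range(iters):
        cand = [xi + random.gauss(0.0, step) for xi in x]
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= cooling  # cool down: fewer uphill moves accepted over time
    return x, fx

# Toy usage: minimize a shifted quadratic (assumed objective).
f = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2
print(simulated_annealing(f, [0.0, 0.0]))  # approximately ([3, -1], ~0)
```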
Bertrand's theorem can be formulated as the solution of an inverse problem for a classical unidimensional motion. We show that the solutions of these problems, if restricted to a given class, can be obtained by solving a numerical equation. This permits a particularly compact and elegant proof of Bertrand's theorem.
arxiv:math-ph/0612009
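For context, the standard statement being proved in the abstract above: among central potentials, only two produce closed bounded orbits for all initial conditions,
\[
V(r) = -\frac{k}{r} \quad \text{(Kepler)} \qquad \text{and} \qquad V(r) = \tfrac{1}{2} k r^2 \quad \text{(isotropic harmonic oscillator)}, \qquad k > 0 .
\]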
Distributed systems can be found in various applications, e.g., in robotics or autonomous driving, to achieve higher flexibility and robustness. Data-flow-centric applications such as deep neural network (DNN) inference benefit from partitioning the workload over multiple compute nodes in terms of performance and energy efficiency. However, mapping large models onto distributed embedded systems is a complex task, due to low latency and high throughput requirements combined with strict energy and memory constraints. In this paper, we present a novel approach for hardware-aware layer scheduling of DNN inference in distributed embedded systems. Our proposed framework uses a graph-based algorithm to automatically find beneficial partitioning points in a given DNN. Each of these is evaluated based on several essential system metrics such as accuracy and memory utilization, while considering the respective system constraints. We demonstrate our approach in terms of the impact of inference partitioning on various performance metrics of six different DNNs. As an example, we can achieve a 47.5% throughput increase for EfficientNet-B0 inference partitioned onto two platforms while maintaining high energy efficiency.
arxiv:2406.19913
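A minimal sketch of the kind of search such a framework performs, simplified to a linear layer graph: enumerate cut points and score each against per-device memory budgets and an assumed link cost. The layer sizes, budgets, and cost model are illustrative assumptions, not the paper's metrics.

```python
def best_partition(layer_mem, layer_out_bytes, mem_budget, link_cost_per_byte):
    """Enumerate cut points in a linear layer graph (device A runs layers
    [0:k), device B runs [k:)) and pick the feasible cut with the lowest
    bottleneck cost, including the transfer of the cut tensor."""
    n = len(layer_mem)
    best = None
    for k in range(1, n):
        mem_a, mem_b = sum(layer_mem[:k]), sum(layer_mem[k:])
        if mem_a > mem_budget[0] or mem_b > mem_budget[1]:
            continue  # violates a device memory constraint
        transfer = layer_out_bytes[k - 1] * link_cost_per_byte
        bottleneck = max(mem_a, mem_b) + transfer  # crude pipeline proxy
        if best is None or bottleneck < best[1]:
            best = (k, bottleneck)
    return best

# Toy numbers (assumed): 5 layers, two identical devices.
print(best_partition([4, 3, 5, 2, 1], [8, 6, 4, 2, 1],
                     mem_budget=(9, 9), link_cost_per_byte=0.5))  # (2, 11.0)
```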
residential facilities to the students, research scholars and faculty. The students live in hostels (sometimes referred to as halls) throughout their stay in the IIT. Students in all IITs must choose among the National Cadet Corps (NCC), National Service Scheme (NSS) and National Sports Organisation (NSO) in their first years. All the IITs have sports grounds for basketball, cricket, football (soccer), hockey, volleyball, lawn tennis, badminton and athletics, and swimming pools for aquatic events. Usually, the hostels also have their own sports grounds. Moreover, an Inter-IIT Sports Meet is organised annually, where participants from all 23 IITs contest for the general championship trophy in 13 different sports; along with the Inter-IIT Cultural Meet and Tech Meet, it generally happens on various dates in the month of December every year. === Technical and cultural festivals === All IITs organize annual technical festivals, typically lasting three or four days. The technical festivals are Shaastra (IIT Madras), Advitiya (IIT Ropar), Kshitij (IIT Kharagpur), Techfest (IIT Bombay), Technex (IIT-BHU Varanasi), Cognizance (IIT Roorkee), Concetto (IIT-ISM Dhanbad), Tirutsava (IIT Tirupati), Nvision (IIT Hyderabad), Meraz (IIT Bhilai), Amalthea (IIT Gandhinagar), Techkriti (IIT Kanpur), Tryst (IIT Delhi), Techniche (IIT Guwahati), Wissenaire (IIT Bhubaneswar), Technunctus (IIT Jammu), Xpecto (IIT Mandi), Fluxus (IIT Indore), Celesta (IIT Patna), Ignus (IIT Jodhpur) and Petrichor (IIT Palakkad). Most of them are organized in January or March. Techfest (IIT Bombay) is also one of the most popular and largest technical festivals in Asia in terms of participants and prize money involved. It has been granted patronage from the United Nations Educational, Scientific and Cultural Organisation (UNESCO) for providing a platform for students to showcase their talent in science and technology. Shaastra holds the distinction of being the first student-managed event in the world to implement
https://en.wikipedia.org/wiki/Indian_Institutes_of_Technology
In this paper we discuss the index problem for geometric differential operators (spin Dirac operator, Gauß-Bonnet operator, signature operator) on manifolds with metric horns. On singular manifolds these operators in general do not have unique closed extensions, but there always exist two extremal extensions $D_{min}$ and $D_{max}$. We describe the quotient ${\cal D}(D_{max})/{\cal D}(D_{min})$ explicitly in geometric resp. topological terms of the base manifolds of the metric horns. We derive index formulas for the spin Dirac and Gauß-Bonnet operators. For the signature operator we present a partial result. The first version of this paper was completed in August 1995 at the University of Augsburg.
arxiv:dg-ga/9609009
blind image deblurring ( bid ) has been extensively studied in computer vision and adjacent fields. modern methods for bid can be grouped into two categories : single - instance methods that deal with individual instances using statistical inference and numerical optimization, and data - driven methods that train deep - learning models to deblur future instances directly. data - driven methods can be free from the difficulty in deriving accurate blur models, but are fundamentally limited by the diversity and quality of the training data - - collecting sufficiently expressive and realistic training data is a standing challenge. in this paper, we focus on single - instance methods that remain competitive and indispensable. however, most such methods do not prescribe how to deal with unknown kernel size and substantial noise, precluding practical deployment. indeed, we show that several state - of - the - art ( sota ) single - instance methods are unstable when the kernel size is overspecified, and / or the noise level is high. on the positive side, we propose a practical bid method that is stable against both, the first of its kind. our method builds on the recent ideas of solving inverse problems by integrating the physical models and structured deep neural networks, without extra training data. we introduce several crucial modifications to achieve the desired stability. extensive empirical tests on standard synthetic datasets, as well as real - world ntire2020 and realblur datasets, show the superior effectiveness and practicality of our bid method compared to sota single - instance as well as data - driven methods. the code of our method is available at : \ url { https://github.com/sun-umn/blind-image-deblurring }.
arxiv:2208.09483
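a minimal sketch of the physical blur model that single - instance bid methods invert: y = k * x + n, a sharp image convolved with a kernel plus noise. the image, the uniform 7x7 kernel, and the noise level are toy assumptions; the paper's structured - network priors and optimization are not shown.

```python
# forward model y = k * x + n: blur a stand-in "sharp" image and add noise;
# a single-instance method searches for (x, k) minimizing a data-fidelity
# term like the one below, plus image/kernel priors.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
x = rng.random((64, 64))                      # stand-in sharp image
k = np.ones((7, 7)) / 49.0                    # toy blur kernel
y = convolve2d(x, k, mode="same") + 0.01 * rng.standard_normal((64, 64))

def data_fidelity(x_est, k_est):
    """least-squares fit of a candidate (image, kernel) pair to y."""
    r = convolve2d(x_est, k_est, mode="same") - y
    return float((r ** 2).sum())

print("fidelity at ground truth:", data_fidelity(x, k))
```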
blazars display strong variability on multiple timescales and in multiple radiation bands. their variability is often characterized by power spectral densities ( psds ) and time lags plotted as functions of the fourier frequency. we develop a new theoretical model based on the analysis of the electron transport ( continuity ) equation, carried out in the fourier domain. the continuity equation includes electron cooling and escape, and a derivation of the emission properties includes light travel time effects associated with a radiating blob in a relativistic jet. the model successfully reproduces the general shapes of the observed psds and predicts specific psd and time lag behaviors associated with variability in the synchrotron, synchrotron self - compton ( ssc ), and external compton ( ec ) emission components, from sub - mm to gamma - rays. we discuss applications to bl lacertae objects and to flat - spectrum radio quasars ( fsrqs ), where there are hints that some of the predicted features have already been observed. we also find that fsrqs should have steeper psd power - law indices than bl lac objects at fourier frequencies < 10 ^ { - 4 } hz, in qualitative agreement with previously reported observations by the fermi large area telescope.
arxiv:1406.2333
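for context, a hedged numerical companion: the observable such models are fitted to is a power spectral density estimated from a light curve. the sketch below computes a periodogram of a synthetic, evenly sampled series with numpy; the red - noise toy process, cadence, and normalization are assumptions, not the paper's transport model.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 100.0                                    # sampling interval (seconds)
flux = np.cumsum(rng.standard_normal(4096))   # toy red-noise light curve

freq = np.fft.rfftfreq(flux.size, d=dt)[1:]   # positive fourier frequencies
amp = np.fft.rfft(flux - flux.mean())[1:]
psd = (np.abs(amp) ** 2) * 2.0 * dt / flux.size

# a power-law fit of the psd slope, as used to characterize variability:
slope, _ = np.polyfit(np.log10(freq), np.log10(psd), 1)
print(f"estimated psd power-law index: {slope:.2f}")  # ~ -2 for a random walk
```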
the robotics community is increasingly interested in autonomous aerial transportation. unmanned aerial vehicles with suspended payloads have advantages over other systems, including mechanical simplicity and agility, but pose great challenges in planning and control. to realize fully autonomous aerial transportation, this paper presents a systematic solution to address these difficulties. first, we present a real - time planning method that generates smooth trajectories considering the time - varying shape and non - linear dynamics of the system, ensuring whole - body safety and dynamic feasibility. additionally, an adaptive nmpc with a hierarchical disturbance compensation strategy is designed to overcome unknown external perturbations and inaccurate model parameters. extensive experiments show that our method is capable of generating high - quality trajectories online, even in highly constrained environments, and tracking aggressive flight trajectories accurately, even under significant uncertainty. we plan to release our code to benefit the community.
arxiv:2310.15050
long - range entanglement - - the backbone of topologically ordered states - - cannot be created in finite time using local unitary circuits, or equivalently, adiabatic state preparation. recently it has come to light that single - site measurements provide a loophole, allowing for finite - time state preparation in certain cases. here we show how this observation imposes a complexity hierarchy on long - range entangled states based on the minimal number of measurement layers required to create the state, which we call " shots ". first, similar to abelian stabilizer states, we construct single - shot protocols for creating any non - abelian quantum double of a group with nilpotency class two ( such as $ d _ 4 $ or $ q _ 8 $ ). we show that after the measurement, the wavefunction always collapses into the desired non - abelian topological order, conditional on recording the measurement outcome. moreover, the clean quantum double ground state can be deterministically prepared via feedforward - - gates which depend on the measurement outcomes. second, we provide the first constructive proof that a finite number of shots can implement the kramers - wannier duality transformation ( i. e., the gauging map ) for any solvable symmetry group. as a special case, this gives an explicit protocol to prepare twisted quantum double for all solvable groups. third, we argue that certain topological orders, such as non - solvable quantum doubles or fibonacci anyons, define non - trivial phases of matter under the equivalence class of finite - depth unitaries and measurement, which cannot be prepared by any finite number of shots. moreover, we explore the consequences of allowing gates to have exponentially small tails, which enables, for example, the preparation of any abelian anyon theory, including chiral ones. this hierarchy paints a new picture of the landscape of long - range entangled states, with practical implications for quantum simulators.
arxiv:2209.06202
we prove conditional asymptotic normality of a class of quadratic u - statistics that are dominated by their degenerate second order part and have kernels that change with the number of observations. these statistics arise in the construction of estimators in high - dimensional semi - and non - parametric models, and in the construction of nonparametric confidence sets. this is illustrated by estimation of the integral of a square of a density or regression function, and estimation of the mean response with missing data. we show that estimators are asymptotically normal even in the case that the rate is slower than the square root of the number of observations.
arxiv:1512.02280
we analyse the velocity - dependent potentials seen by d0 and d4 - brane probes moving in type i ' background for head - on scattering off the fixed planes. we find that at short distances ( compared to string length ) the d0 - brane probe has a nontrivial moduli space metric, in agreement with the prediction of type i ' matrix model ; however, at large distances it is modified by massive open strings to a flat metric, which is consistent with the spacetime equations of motion of type i ' theory. we discuss the implication of this result for the matrix model proposal for m - theory. we also find that the nontrivial metric at short distances in the moduli space action of the d0 - brane probe is reflected in the coefficient of the higher dimensional v ^ 4 term in the d4 - brane probe action.
arxiv:hep-th/9707132
neural radiance fields ( nerf ) have shown promise in generating realistic novel views from sparse scene images. however, existing nerf approaches often encounter challenges due to the lack of explicit 3d supervision and imprecise camera poses, resulting in suboptimal outcomes. to tackle these issues, we propose altnerf - - a novel framework designed to create resilient nerf representations using self - supervised monocular depth estimation ( smde ) from monocular videos, without relying on known camera poses. smde in altnerf masterfully learns depth and pose priors to regulate nerf training. the depth prior enriches nerf ' s capacity for precise scene geometry depiction, while the pose prior provides a robust starting point for subsequent pose refinement. moreover, we introduce an alternating algorithm that harmoniously melds nerf outputs into smde through a consistence - driven mechanism, thus enhancing the integrity of depth priors. this alternation empowers altnerf to progressively refine nerf representations, yielding the synthesis of realistic novel views. extensive experiments showcase the compelling capabilities of altnerf in generating high - fidelity and robust novel views that closely resemble reality.
arxiv:2308.10001
we study perturbative behavior of free energies on a d - dimensional sphere s ^ d for theories with marginal interactions. the free energies are interpreted as the " dilaton effective action " with the dilaton having a nontrivial background vacuum expectation value. we compute the dependence of the free energies on the radius of the sphere by using dimensional regularization. it is shown that the first ( second ) derivative of the free energies in odd ( even ) dimensions with respect to the radius of the sphere is proportional to the square of the beta functions of coupling constants. the result is consistent with the c, f and a - theorems in two, three, four and six dimensions. the result is also used to rule out a large class of scale invariant theories which are not conformally invariant.
arxiv:1212.3028
segmentation of nasopharyngeal carcinoma ( npc ) from magnetic resonance images ( mri ) is a crucial prerequisite for npc radiotherapy. however, manual segmentation of npc is time - consuming and labor - intensive. additionally, single - modality mri generally cannot provide enough information for its accurate delineation. therefore, a multi - modality mri fusion network ( mmfnet ) based on three modalities of mri ( t1, t2 and contrast - enhanced t1 ) is proposed to complete accurate segmentation of npc. the backbone of mmfnet is designed as a multi - encoder - based network, consisting of several encoders to capture modality - specific features and one single decoder to fuse them and obtain high - level features for npc segmentation. a fusion block is presented to effectively fuse features from multi - modality mri. it firstly recalibrates low - level features captured from modality - specific encoders to highlight both informative features and regions of interest, then fuses the weighted features by a residual fusion block to keep a balance between the fused features and the high - level features from the decoder. moreover, a training strategy named self - transfer, which utilizes pre - trained modality - specific encoders to initialize the multi - encoder - based network, is proposed to fully mine information from different modalities of mri. the proposed method based on multi - modality mri can effectively segment npc and its advantages are validated by extensive experiments.
arxiv:1812.10033
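a minimal pytorch sketch of the multi - encoder idea: one small encoder per mri modality ( t1, t2, contrast - enhanced t1 ) and a single decoder over fused features. channel sizes and plain concatenation are assumptions; the paper's fusion block additionally recalibrates low - level features and uses residual fusion.

```python
import torch
import torch.nn as nn

class MultiEncoderSeg(nn.Module):
    def __init__(self, n_modalities=3, feat=16):
        super().__init__()
        # one modality-specific encoder per input MRI sequence
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU())
            for _ in range(n_modalities)
        )
        # single decoder over the fused (concatenated) features
        self.decoder = nn.Sequential(
            nn.Conv2d(n_modalities * feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 1, 1),                 # per-pixel tumour logit
        )

    def forward(self, mods):                       # mods: list of (B,1,H,W)
        fused = torch.cat([enc(m) for enc, m in zip(self.encoders, mods)], 1)
        return self.decoder(fused)

model = MultiEncoderSeg()
t1, t2, ce = (torch.randn(2, 1, 32, 32) for _ in range(3))
print(model([t1, t2, ce]).shape)                   # torch.Size([2, 1, 32, 32])
```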
we develop a model of dust evolution in a multiphase, inhomogeneous ism including dust growth and destruction processes. the physical conditions for grain evolution are taken from hydrodynamical simulations of giant molecular clouds in a milky way - like spiral galaxy. we improve the treatment of dust growth by accretion in the ism to investigate the role of the temperature - dependent sticking coefficient and ion - grain interactions. from detailed observational data on the gas - phase si abundances [ si / h ] _ { gas } measured in the local galaxy, we derive a relation between the average [ si / h ] _ { gas } and the local gas density n ( h ) which we use as a critical constraint for the models. this relation requires a sticking coefficient that decreases with the gas temperature. the synthetic relation constructed from the spatial dust distribution reproduces the slope of - 0. 5 of the observed relation in cold clouds. this slope is steeper than that for the warm medium and is explained by the dust growth. we find that it occurs for all adopted values of the minimum grain size a _ { min } from 1 to 5nm. for the classical cut - off of a _ { min } = 5 nm, the ion - grain interactions result in longer growth timescales and higher [ si / h ] _ { gas } than the observed values. for a _ { min } below 3 nm, the ion - grain interactions enhance the growth rates, steepen the slope of [ si / h ] _ { gas } - n ( h ) relation and provide a better match to observations. the rates of dust re - formation in the ism by far exceed the rates of dust production by stellar sources as expected from simple evolution models. after the cycle of matter in and out of dust reaches a steady state, the dust growth balances the destruction operating on similar timescales of 350 myr.
arxiv:1608.04781
three - dimensional electron microscopy ( 3dem ) is an essential technique to investigate volumetric tissue ultra - structure. due to technical limitations and high imaging costs, samples are often imaged anisotropically, where resolution in the axial direction ( $ z $ ) is lower than in the lateral directions $ ( x, y ) $. this anisotropy in 3dem can hamper subsequent analysis and visualization tasks. to overcome this limitation, we propose a novel deep - learning ( dl ) - based self - supervised super - resolution approach that computationally reconstructs isotropic 3dem from the anisotropic acquisition. the proposed dl - based framework is built upon the u - shape architecture incorporating vision - transformer ( vit ) blocks, enabling high - capability learning of local and global multi - scale image dependencies. to train the tailored network, we employ a self - supervised approach. specifically, we generate pairs of anisotropic and isotropic training datasets from the given anisotropic 3dem data. by feeding the given anisotropic 3dem dataset into the trained network through our proposed framework, the isotropic 3dem is obtained. importantly, this isotropic reconstruction approach relies solely on the given anisotropic 3dem dataset and does not require pairs of co - registered anisotropic and isotropic 3dem training datasets. to evaluate the effectiveness of the proposed method, we conducted experiments using three 3dem datasets acquired from the brain. the experimental results demonstrated that our proposed framework could successfully reconstruct isotropic 3dem from the anisotropic acquisition.
arxiv:2309.10646
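a sketch of the self - supervised pair construction described above: laterally high - resolution slices of the given anisotropic volume are subsampled along one axis to mimic the coarse z spacing, yielding ( degraded, target ) training pairs without any external data. the factor - of - 4 anisotropy is an assumption.

```python
import numpy as np

def make_training_pairs(volume, factor=4):
    """volume: (Z, Y, X) anisotropic stack; returns (input, target) 2-d pairs."""
    pairs = []
    for z in range(volume.shape[0]):
        target = volume[z]                     # laterally high-res slice
        degraded = target[::factor, :]         # subsample to mimic z spacing
        upsampled = np.repeat(degraded, factor, axis=0)[: target.shape[0]]
        pairs.append((upsampled, target))      # network learns degraded -> target
    return pairs

vol = np.random.rand(8, 64, 64)
inp, tgt = make_training_pairs(vol)[0]
print(inp.shape, tgt.shape)                    # (64, 64) (64, 64)
```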
reentrant superconductivity is a phenomenon in which the destructive effects of magnetic field on superconductivity are mitigated, allowing a zero - resistance state to survive under conditions that would otherwise destroy it. typically, the reentrant superconducting region derives from a zero - field parent superconductor. here, we show that in specifically - prepared ute $ _ 2 $ crystals, extremely large magnetic field gives rise to an unprecedented high field superconductor that lacks a zero - field parent phase. this orphan superconductivity exists at fields between 37 t and 52 t, over a smaller angular range than observed in superconducting ute $ _ 2 $. the stability of field - induced orphan superconductivity is a challenge to existing theoretical explanations, and underscores the likelihood of a field - induced modification of the electronic structure of ute $ _ 2 $.
arxiv:2304.12392
we study the stationary and nonstationary measurement of a classical force driving a mechanical oscillator coupled to an electromagnetic cavity under two - tone driving. for this purpose, we develop a theoretical framework based on the signal - to - noise ratio to quantify the sensitivity of linear spectral measurements. then, we consider stationary force sensing and study the necessary conditions to minimise the added force noise. we find that imprecision noise and back - action noise can be arbitrarily suppressed by manipulating the amplitudes of the input coherent fields, however, the force noise power spectral density cannot be reduced below the level of thermal fluctuations. therefore, we consider a nonstationary protocol that involves non - thermal dissipative state preparation followed by a finite time measurement, which allows one to perform measurements with a signal - to - noise much greater than the maximum possible in a stationary measurement scenario. we analyse two different measurement schemes in the nonstationary transient regime, a back - action evading measurement, which implies modifying the drive asymmetry configuration upon arrival of the force, and a nonstationary measurement that leaves the drive asymmetry configuration unchanged. conditions for optimal force noise sensitivity are determined, and the corresponding force noise power spectral densities are calculated.
arxiv:2007.13051
inspired by the success of transformers in computer vision, transformers have been widely investigated for medical imaging segmentation. however, most such architectures use recent transformers as the encoder or as a parallel encoder alongside the cnn encoder. in this paper, we introduce a novel hybrid cnn - transformer segmentation architecture ( pag - transynet ) designed for efficiently building a strong cnn - transformer encoder. our approach exploits attention gates within a dual pyramid hybrid encoder. the contributions of this methodology can be summarized into three key aspects : ( i ) the utilization of pyramid input for highlighting the prominent features at different scales, ( ii ) the incorporation of a pvt transformer to capture long - range dependencies across various resolutions, and ( iii ) the implementation of a dual - attention gate mechanism for effectively fusing prominent features from both cnn and transformer branches. through comprehensive evaluation across different segmentation tasks, including abdominal multi - organ segmentation, infection segmentation ( covid - 19 and bone metastasis ), and microscopic tissue segmentation ( gland and nucleus ), the proposed approach demonstrates state - of - the - art performance and exhibits remarkable generalization capabilities. this research represents a significant advancement towards addressing the pressing need for efficient and adaptable segmentation solutions in medical imaging applications.
arxiv:2404.18199
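a minimal pytorch sketch of an additive attention gate of the kind used to reweight skip features before fusion. channel sizes are illustrative, and the paper's dual - gate wiring and pvt backbone are not reproduced.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, ch_skip, ch_gate, ch_mid=16):
        super().__init__()
        self.w_skip = nn.Conv2d(ch_skip, ch_mid, 1)
        self.w_gate = nn.Conv2d(ch_gate, ch_mid, 1)
        self.psi = nn.Conv2d(ch_mid, 1, 1)

    def forward(self, skip, gate):
        # additive attention: per-pixel weight alpha in (0, 1)
        alpha = torch.sigmoid(self.psi(torch.relu(self.w_skip(skip) +
                                                  self.w_gate(gate))))
        return skip * alpha                    # suppresses irrelevant regions

gate = AttentionGate(ch_skip=32, ch_gate=64)
out = gate(torch.randn(1, 32, 28, 28), torch.randn(1, 64, 28, 28))
print(out.shape)                               # torch.Size([1, 32, 28, 28])
```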
we prove a local - global principle for torsors under the prosolvable geometric fundamental group of an affine curve over a number field.
arxiv:2109.14887
this paper presents the cyber - physical model of a computer - mediated control system that is a seamless, fully synergistic integration of the physical system and the cyber system, which provides a systematic framework for synthesis of cyber - physical systems ( cpss ). in our proposed framework, we establish a lyapunov stability theory for synthesis of cpss and apply it to sampled - data control systems, which are typically synonymous with computer - mediated control systems. by our cps approach, we not only develop stability criteria for sampled - data control systems but also reveal the equivalence and inherent relationship between the two main design methods ( viz. controller emulation and discrete - time approximation ) in the literature. as an application of our established theory, we study feedback stabilization of linear sampled - data stochastic systems and propose a control design method. illustrative examples show that our proposed method improves on existing results. our established theory of synthetic cpss lays a theoretical foundation for computer - mediated control systems and provokes many open and interesting problems for future work.
arxiv:2303.01851
there are many biomolecules such as proteins and nucleic acids. in addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially distinct units. these organelles include the cell nucleus, which contains most of the cell ' s dna, or mitochondria, which generate adenosine triphosphate ( atp ) to power cellular processes. other organelles such as endoplasmic reticulum and golgi apparatus play a role in the synthesis and packaging of proteins, respectively. biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and breakdown of plant seeds. eukaryotic cells also have a cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its organelles. in terms of their structural composition, the microtubules are made up of tubulin ( e. g., α - tubulin and β - tubulin ) whereas intermediate filaments are made up of fibrous proteins. microfilaments are made up of actin molecules that interact with other strands of proteins. = = = metabolism = = = all cells require energy to sustain cellular processes. metabolism is the set of chemical reactions in an organism. the three main purposes of metabolism are : the conversion of food to energy to run cellular processes ; the conversion of food / fuel to monomer building blocks ; and the elimination of metabolic wastes. these enzyme - catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. metabolic reactions may be categorized as catabolic - the breaking down of compounds ( for example, the breaking down of glucose to pyruvate by cellular respiration ) ; or anabolic - the building up ( synthesis ) of compounds ( such as proteins, carbohydrates, lipids, and nucleic acids ). usually, catabolism releases energy, and anabolism consumes energy. the chemical reactions of metabolism are organized into metabolic pathways, in which
https://en.wikipedia.org/wiki/Biology
france, known as georgia tech europe ( gte ). opened in october 1990, it offers master ' s - level courses in electrical and computer engineering, computer science and mechanical engineering and ph. d. coursework in electrical and computer engineering and mechanical engineering. georgia tech europe was the defendant in a lawsuit pertaining to the language used in advertisements, which was a violation of the toubon law. georgia tech and tianjin university cooperatively operated a campus in shenzhen, guangdong, china - georgia tech shenzhen institute, tianjin university. launched in 2014, the institute offered undergraduate and graduate programs in electrical and computer engineering, analytics, computer science, environmental engineering, and industrial design. admission and degree requirements at the institute are the same as those in atlanta. in september 2024, georgia tech announced that it was ending its partnership with tianjin university following u. s. congressional scrutiny of potential ties to the people ' s liberation army. the college of design ( formerly college of architecture ) maintains a small permanent presence in paris in affiliation with the ecole d ' architecture de paris - la villette and the college of computing has a similar program with the barcelona school of informatics at the polytechnic university of catalonia in barcelona, spain. there are additional programs in athlone, ireland, shanghai, china, and singapore. georgia tech was supposed to have set up two campuses for research and graduate education in the cities of visakhapatnam and hyderabad, telangana, india by 2010, but it appeared the plans had been set on hold as of 2011. = = = campus services = = = georgia tech cable network, or gtcn, is the college ' s branded cable source. most non - original programming is obtained from dish network. gtcn currently has 100 standard - definition channels and 23 high - definition channels. the office of information technology, or oit, manages most of the institute ' s computing resources ( and some related services such as campus telephones ). with the exception of a few computer labs maintained by individual colleges, oit is responsible for most of the computing facilities on campus. student, faculty, and staff e - mail accounts are among its services. georgia tech ' s resnet provides free technical support to all students and guests living in georgia tech ' s on - campus housing ( excluding fraternities and sororities ). resnet is responsible for network, telephone, and television service, and most support is provided by part - time student employees. = = organization and administration = = georgia tech ' s undergraduate and graduate programs are divided
https://en.wikipedia.org/wiki/Georgia_Tech
we present new measurements of the values of the hubble constant, matter density, dark energy density, and dark energy density equation - of - state parameters from a full strong lensing analysis of the observed positions of 89 multiple images and 4 measured time delays of sn refsdal multiple images in the hubble frontier fields galaxy cluster macs j1149. 5 + 2223. by strictly following the identical modelling methodology as in our previous work, which was done before the time delays were available, our cosmographic measurements here are essentially blind based on the frozen procedure. without using any priors from other cosmological experiments, in an open $ w $ cdm cosmological model, through our reference cluster mass model, we measure the following values : $ h _ 0 = 65. 1 ^ { + 3. 5 } _ { - 3. 4 } $ km s $ ^ { - 1 } $ mpc $ ^ { - 1 } $, $ \ omega _ { \ rm de } = 0. 76 ^ { + 0. 15 } _ { - 0. 10 } $, and $ w = - 0. 92 ^ { + 0. 15 } _ { - 0. 21 } $ ( at the 68. 3 % confidence level ). no other single cosmological probe is able to measure simultaneously all these parameters. remarkably, our estimated values of the cosmological parameters, particularly $ h _ 0 $, are very robust and do not depend significantly on the assumed cosmological model and the cluster mass modelling details. the latter introduce systematic uncertainties on the values of $ h _ 0 $ and $ w $ which are found largely subdominant compared to the statistical errors. the results of this study show that time delays in lens galaxy clusters, combined with extensive photometric and spectroscopic information, offer a novel and competitive cosmological tool.
arxiv:2401.10980
heterogeneous graphs ( hgs ) are composed of multiple types of nodes and edges, making them more effective in capturing the complex relational structures inherent in the real world. however, in real - world scenarios, labeled data is often difficult to obtain, which limits the applicability of semi - supervised approaches. self - supervised learning aims to enable models to automatically learn useful features from data, effectively addressing the challenge of limited labeled data. in this paper, we propose a novel contrastive learning framework for heterogeneous graphs ( ashgcl ), which incorporates three distinct views, focusing on node attributes, high - order structural information, and low - order structural information, respectively, to effectively capture attribute information, high - order structures, and low - order structures for node representation learning. furthermore, we introduce an attribute - enhanced positive sample selection strategy that combines both structural information and attribute information, effectively addressing the issue of sampling bias. extensive experiments on four real - world datasets show that ashgcl outperforms state - of - the - art unsupervised baselines and even surpasses some supervised benchmarks.
arxiv:2503.13911
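a sketch of the contrastive objective that frameworks of this kind typically build on: an infonce loss pulling two views of the same node together and pushing other nodes apart. the embedding sizes and temperature are assumptions; ashgcl's three views and attribute - enhanced positive sampling are not reproduced.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """z1, z2: (N, d) embeddings of the same N nodes under two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                 # (N, N) similarity matrix
    labels = torch.arange(z1.size(0))          # positives on the diagonal
    return F.cross_entropy(logits, labels)

z_view1, z_view2 = torch.randn(128, 64), torch.randn(128, 64)
print(float(info_nce(z_view1, z_view2)))       # loss for random embeddings
```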
lainey et al. ( 2012 ), by re - analyzing long - baseline astrometry of saturn ' s moons, have found that the moons ' tidal evolution is much faster than previously thought, implying an order of magnitude stronger tidal dissipation within saturn. this result is controversial and implies recent formation of at least some of the mid - sized icy moons of saturn. here we show that this more intensive tidal dissipation is in full agreement with the evolved state of the titan - hyperion resonance. this resonance was previously thought to be non - tidal in origin, as the amount of tidal evolution required for its assembly is beyond what is possible in models that assume that all the major moons are primordial. we find that the survival of the titan - hyperion resonance is in agreement with a past titan - iapetus 5 : 1 resonance, but not with unbroken tidal evolution of rhea from the rings to its current distance.
arxiv:1311.6780
visual learning analytics ( vla ) is becoming increasingly adopted in educational technologies and learning analytics dashboards to convey critical insights to students and educators. yet many students experienced difficulties in comprehending complex vla due to their limited data visualisation literacy. while conventional scaffolding approaches like data storytelling have shown effectiveness in enhancing students ' comprehension of vla, these approaches remain difficult to scale and adapt to individual learning needs. generative ai ( genai ) technologies, especially conversational agents, offer potential solutions by providing personalised and dynamic support to enhance students ' comprehension of vla. this study investigates the effectiveness of genai agents, particularly when integrated with scaffolding techniques, in improving students ' comprehension of vla. a randomised controlled trial was conducted with 117 higher education students to compare the effects of two types of genai agents : passive agents, which respond to student queries, and proactive agents, which utilise scaffolding questions, against standalone scaffolding in a vla comprehension task. the results show that passive agents yield comparable improvements to standalone scaffolding both during and after the intervention. notably, proactive genai agents significantly enhance students ' vla comprehension compared to both passive agents and standalone scaffolding, with these benefits persisting beyond the intervention. these findings suggest that integrating genai agents with scaffolding can have lasting positive effects on students ' comprehension skills and support genuine learning.
arxiv:2409.11645
the cauchy relations distinguish between rari - and multi - constant linear elasticity theories. these relations are treated in this paper in a form that is invariant under two groups of transformations : indices permutation and general linear transformations of the basis. the irreducible decomposition induced by the permutation group is outlined. the cauchy relations are then formulated as a requirement of nullification of an invariant subspace. a successive decomposition under the rotation group allows one to define the partial cauchy relations and two types of elastic materials. we explore several applications of the full and partial cauchy relations in the physics of materials. the structure ' s deviation from the basic physical assumptions of cauchy ' s model is defined in an invariant form. the cauchy and non - cauchy contributions to hooke ' s law and the elasticity energy are explained. we identify wave velocities and polarization vectors that are independent of the non - cauchy part for acoustic wave propagation. several bounds are derived for the elasticity invariant parameters.
arxiv:2304.09579
we devise a way to calculate the dimensions of symmetry sectors appearing in the particle entanglement spectrum ( pes ) and real space entanglement spectrum ( rses ) of multi - particle systems from their real space wave functions. we first note that these ranks in the entanglement spectra equal the dimensions of spaces of wave functions with a number of particles fixed. this also yields equality of the multiplicities in the pes and the rses. our technique allows numerical calculations for much larger systems than were previously feasible. for somewhat smaller systems, we can find approximate entanglement energies as well as multiplicities. we illustrate the method with results on the rses and pes multiplicities for integer quantum hall states, laughlin and jain composite fermion states and for the moore - read state at filling $ \ nu = 5 / 2 $, for system sizes up to 70 particles.
arxiv:1111.3634
many diffusion processes in nature and society were found to be anomalous, in the sense of being fundamentally different from conventional brownian motion. an important example is the migration of biological cells, which exhibits non - trivial temporal decay of velocity autocorrelation functions. this means that the corresponding dynamics is characterized by memory effects that slowly decay in time. motivated by this we construct non - markovian lattice - gas cellular automata models for moving agents with memory. for this purpose the reorientation probabilities are derived from velocity autocorrelation functions that are given a priori ; in that respect our approach is ` data - driven '. particular examples we consider are velocity correlations that decay exponentially or as power laws, where the latter functions generate anomalous diffusion. the computational efficiency of cellular automata combined with our analytical results paves the way to explore the relevance of memory and anomalous diffusion for the dynamics of interacting cell populations, like confluent cell monolayers and cell clustering.
arxiv:1802.04201
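a simplified, hedged illustration of the data - driven idea: for a one - dimensional two - velocity walker, an exponentially decaying velocity autocorrelation c(t) = exp(-t/tau) is realized by keeping the current direction with probability p = (1 + exp(-1/tau)) / 2 at each step. this is the memoryless special case; the paper's non - markovian construction ( needed, e. g., for power - law decay ) uses history - dependent reorientation probabilities instead.

```python
import numpy as np

tau = 5.0                                    # target correlation time (steps)
p_keep = 0.5 * (1.0 + np.exp(-1.0 / tau))    # keep-direction probability

rng = np.random.default_rng(2)
v = np.empty(200_000)
v[0] = 1.0
for t in range(1, v.size):                   # persistent random walk
    v[t] = v[t - 1] if rng.random() < p_keep else -v[t - 1]

lags = np.arange(20)
vacf = np.array([np.mean(v[: v.size - L] * v[L:]) for L in lags])
print(np.abs(vacf - np.exp(-lags / tau)).max())  # small: matches the target
```

the identity follows from the two-state markov chain, whose autocorrelation is (2 p_keep - 1)^t; setting 2 p_keep - 1 = exp(-1/tau) gives the formula above.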
the long term evolution ( lte ), as a mobile broadband technology, supports a wide domain of communication services with different requirements. therefore, scheduling of all flows from various applications in overload states, in which the requested amount of bandwidth exceeds the limited available spectrum resources, is a challenging issue. accordingly, in this paper, a greedy algorithm is presented to evaluate user candidates which are waiting for scheduling and select an optimal set of the users to maximize system performance, without exceeding available bandwidth capacity. the greedy - knapsack algorithm is defined as an optimal solution to the resource allocation problem, formulated based on the fractional knapsack problem. a compromise between throughput and qos provisioning is obtained by proposing a class - based ranking function, which is a combination of throughput and qos related parameters defined for each application. the simulation results show that the proposed method provides high performance in terms of throughput, loss and delay for different qos classes compared with existing methods, especially under overload traffic.
arxiv:1601.03461
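a hedged sketch of the greedy allocation described above: users are ranked by a class - based score per requested resource block and admitted greedily until the spectrum budget is exhausted, with the last admitted user possibly receiving a fractional share, as in the fractional knapsack problem. the scores and bandwidth figures are invented for illustration.

```python
users = [
    # (user_id, requested_resource_blocks, class_based_rank_score)
    ("voip-1", 4, 9.0), ("video-1", 12, 7.5), ("web-1", 6, 3.0),
    ("video-2", 10, 8.0), ("ftp-1", 20, 1.5),
]

def allocate(users, capacity_rbs=24):
    # greedy by score per unit of demand, as in fractional knapsack
    order = sorted(users, key=lambda u: u[2] / u[1], reverse=True)
    grants, left = {}, capacity_rbs
    for uid, demand, _ in order:
        if left == 0:
            break
        granted = min(demand, left)          # last grant may be fractional
        grants[uid] = granted
        left -= granted
    return grants

print(allocate(users))   # {'voip-1': 4, 'video-2': 10, 'video-1': 10}
```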
this paper considers a one - dimensional generalized allen - cahn equation of the form \ [ u _ t = \ varepsilon ^ 2 ( d ( u ) u _ x ) _ x - f ( u ), \ ] where $ \ varepsilon > 0 $ is constant, $ d = d ( u ) $ is a positive, uniformly bounded below diffusivity coefficient that depends on the phase field $ u $ and $ f ( u ) $ is a reaction function that can be derived from a double - well potential with minima at two pure phases $ u = \ alpha $ and $ u = \ beta $. it is shown that interface layers ( namely, solutions that are equal to $ \ alpha $ or $ \ beta $ except at a finite number of thin transitions of width $ \ varepsilon $ ) persist for an exponentially long time proportional to $ \ exp ( c / \ varepsilon ) $, where $ c > 0 $ is a constant. in other words, the emergence and persistence of \ emph { metastable patterns } for this class of equations is established. for that purpose, we prove energy bounds for a renormalized effective energy potential of ginzburg - landau type. numerical simulations, which confirm the analytical results, are also provided.
arxiv:1911.06926
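a numerical sketch of the equation above with explicit finite differences, assuming the illustrative choices d(u) = 1 + u^2 ( positive and bounded below ) and f(u) = u^3 - u, whose double - well potential has pure phases at -1 and +1; the time step is chosen well inside the explicit stability limit.

```python
import numpy as np

eps, dx, dt, nx = 0.05, 0.01, 1e-5, 200
x = np.linspace(0, nx * dx, nx)
u = np.sign(np.sin(3 * np.pi * x / x[-1]))   # multi-interface initial data

def step(u):
    d = 1.0 + u ** 2                          # diffusivity D(u) >= 1
    d_face = 0.5 * (d[1:] + d[:-1])           # D at cell faces
    flux = d_face * np.diff(u) / dx           # D(u) u_x at faces
    div = np.zeros_like(u)
    div[1:-1] = np.diff(flux) / dx            # (D(u) u_x)_x in the interior
    return u + dt * (eps ** 2 * div - (u ** 3 - u))

for _ in range(20000):
    u = step(u)
print("phases reached:", u.min().round(2), u.max().round(2))  # ~ -1 and 1
```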
text stemming is a natural language processing technique that is used to reduce words to their base form, also known as the root form. the use of stemming in ir has been shown to often improve the effectiveness of keyword - matching models such as bm25. however, traditional stemming methods, focusing solely on individual terms, overlook the richness of contextual information. recognizing this gap, in this paper, we investigate the promising idea of using large language models ( llms ) to stem words by leveraging their capability of context understanding. in this respect, we identify three avenues, each characterised by different trade - offs in terms of computational cost, effectiveness and robustness : ( 1 ) use llms to stem the vocabulary for a collection, i. e., the set of unique words that appear in the collection ( vocabulary stemming ), ( 2 ) use llms to stem each document separately ( contextual stemming ), and ( 3 ) use llms to extract from each document entities that should not be stemmed, then use vocabulary stemming to stem the rest of the terms ( entity - based contextual stemming ). through a series of empirical experiments, we compare the use of llms for stemming with that of traditional lexical stemmers such as porter and krovetz for english text. we find that while vocabulary stemming and contextual stemming fail to achieve higher effectiveness than traditional stemmers, entity - based contextual stemming can achieve a higher effectiveness than using the porter stemmer alone, under specific conditions.
arxiv:2402.11757
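a sketch of vocabulary stemming with an llm against a porter baseline: the collection vocabulary is stemmed once, offline, so llm cost scales with vocabulary size rather than collection size. `llm_stem` below is a hypothetical placeholder for a real llm call; the prompt wording and any api are assumptions.

```python
from nltk.stem import PorterStemmer

vocabulary = ["running", "ran", "better", "universities", "apple"]

porter = PorterStemmer()
porter_map = {w: porter.stem(w) for w in vocabulary}

def llm_stem(word: str) -> str:
    """placeholder: send 'reduce this word to its root form: {word}' to an
    llm and return its answer; here it just echoes the word back."""
    return word

llm_map = {w: llm_stem(w) for w in vocabulary}
print(porter_map)   # e.g. 'universities' -> 'univers' under porter
```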
in this paper, high - dimensional data analysis methods are proposed to deal with a random matrix composed of real data from a power network before and after a fault. the mean spectral radius ( msr ) of non - hermitian random matrices is defined as a statistical indicator for fault detection. by analyzing the characteristics of random matrices and observing the changes in the spectral radius of random matrices, grid failure detection can be achieved. this paper describes the basic mathematical theory of this big data method, and real - world data from a power grid in china is used to verify the methods.
arxiv:1503.08445
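a sketch of the statistic described above: form a rescaled non - hermitian data matrix, take its eigenvalues, and track the mean spectral radius ( mean modulus of the eigenvalues ). the toy matrices below stand in for real grid measurements; under fluctuation - only data the msr sits near 2/3 by the circular law, while strong cross - correlation from a disturbance pulls it down.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_spectral_radius(data):
    """mean modulus of the eigenvalues of data / sqrt(n)."""
    n = data.shape[0]
    eigs = np.linalg.eigvals(data / np.sqrt(n))
    return float(np.abs(eigs).mean())

normal = rng.standard_normal((200, 200))      # fluctuation-only regime
common = rng.standard_normal(200)             # disturbance shared across rows
fault = 0.3 * rng.standard_normal((200, 200)) + np.outer(np.ones(200), common)

print(mean_spectral_radius(normal))           # ~ 2/3 under the circular law
print(mean_spectral_radius(fault))            # drops when data are correlated
```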
the goal of this research is the study of the thermomagnetic consequences in isotropic type - ii superconductors, subjected to multi - component magnetic fields $ \ mathbf { h } _ a = h _ { ay } \ hat { y } + h _ { az } \ hat { z } $, because the instability field $ \ mathbf { h } _ { fi } $ is closely related with a flux jump occurrence. in the critical - state model framework, once the lorentz $ \ mathbf { f } _ l $ and pinning forces $ \ mathbf { f } _ p $ are at equilibrium, the current density reaches a critical value $ j _ c $ and a stationary magnetic induction distribution $ \ mathbf { b } $ is established. the equilibrium of forces is analytically solved considering that the pinning force is mainly affected by temperature increments ; the energy dissipation is incorporated through the heat equation in the adiabatic regime. the theory is able to obtain the instability field according to the thermal bath and applied field values ; moreover, it provides instability field branches comprising both partial and full penetration states. with this information it is possible to construct a field - temperature map. the results are compared with already published experimental data, finding a qualitative agreement between them. this theoretical study works with a first order perturbation, so the perturbation presents a periodic behavior along the thickness direction ; in this setting, the magnetic induction distributions, which resemble flexible cantilever structures, are constructed.
arxiv:1809.07900
in this paper, we introduce a novel approach for generating random elements of a finite group given a set of its generators. our method draws upon combinatorial group theory and automata theory to achieve this objective. furthermore, we explore the application of this method in generating random elements of a particularly significant group, namely the symmetric group ( or group of permutations on a set ). through rigorous analysis, we demonstrate that our proposed method requires fewer average swaps to generate permutations compared to existing approaches. however, recognizing the need for practical applications, we propose a hardware - based implementation based on our theoretical approach, and provide a comprehensive comparison with previous methods. our evaluation reveals that our method outperforms existing approaches in certain scenarios. although our primary proposed method only aims to speed up the shuffling and does not decrease its time complexity, we also extend our method to improve the time complexity.
arxiv:2311.16347
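for context, a sketch of the standard baseline such generators are compared against: the fisher - yates shuffle, which produces a uniformly random permutation of n elements with n - 1 swaps. the paper's automata - based generator is not reproduced here; its claim is a lower average number of swaps than this.

```python
import random

def fisher_yates(n: int) -> list[int]:
    perm = list(range(n))
    for i in range(n - 1, 0, -1):        # one swap per position, n - 1 total
        j = random.randint(0, i)         # uniform over positions 0..i
        perm[i], perm[j] = perm[j], perm[i]
    return perm

print(fisher_yates(10))                  # a uniformly random permutation
```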
we construct a class of codimension - 2 solutions in supergravity that realize t - folds with arbitrary $ o ( 2, 2, \ mathbb { z } ) $ monodromy and we develop a geometric point of view in which the monodromy is identified with a product of dehn twists of an auxiliary surface $ \ sigma $ fibered on a base $ \ mathcal { b } $. these defects, that we call t - fects, are identified by the monodromy of the mapping torus obtained by fibering $ \ sigma $ over the boundary of a small disk encircling a degeneration. we determine all possible local geometries by solving the corresponding cauchy - riemann equations, that imply the equations of motion for a semi - flat metric ansatz. we discuss the relation with the f - theoretic approach and we consider a generalization to the t - duality group of the heterotic theory with a wilson line.
arxiv:1508.01193
circular atomtronics is known to exhibit a uniform ground state, unlike elliptical atomtronics. in elliptical atomtronics, the matter wave tends to accumulate along the semimajor edges during its time dynamics, which we depict by the survival function. consequently, the dynamical time scales become coupled to the eccentricity, making the dynamics nontrivial for applications. we report that an appropriate dispersion management can decouple the time scales from the eccentricity. one can choose the suitable dispersion coefficient from the overlap function involving the corresponding ground state. we focus on producing distinct fractional matter waves inside an elliptical waveguide to achieve efficient atom interferometry. the said dispersion engineering can recover fractional revivals in the elliptical waveguide, analogous to the circular case. we demonstrate atom interferometry for the engineered elliptical atomtronics, where matter wave interference is mediated by an external harmonic trap for controlled interference patterns.
arxiv:2404.08904
preliminary test results on microscope investigation and besiii - type rpc aging performance have revealed interesting aging phenomena that had not been seen before in the linseed oil coated italian - type rpc. we report here on the aging performance of the besiii - type rpc and its variants, and on the microscopic surface characterization of besiii - type bakelite electrodes.
arxiv:1006.1061
in this work, we present a reward - driven automated curriculum reinforcement learning approach for interaction - aware self - driving at unsignalized intersections, taking into account the uncertainties associated with surrounding vehicles ( svs ). these uncertainties encompass the uncertainty of svs ' driving intention and also the quantity of svs. to deal with this problem, the curriculum set is specifically designed to accommodate a progressively increasing number of svs. by implementing an automated curriculum selection mechanism, the importance weights are rationally allocated across various curricula, thereby facilitating improved sample efficiency and training outcomes. furthermore, the reward function is meticulously designed to guide the agent towards effective policy exploration. thus the proposed framework could proactively address the above uncertainties at unsignalized intersections by employing the automated curriculum learning technique that progressively increases task difficulty, and this ensures safe self - driving through effective interaction with svs. comparative experiments are conducted in $ highway \ _ env $, and the results indicate that our approach achieves the highest task success rate, attains strong robustness to initialization parameters of the curriculum selection module, and exhibits superior adaptability to diverse situational configurations at unsignalized intersections. furthermore, the effectiveness of the proposed method is validated using the high - fidelity carla simulator.
arxiv:2403.13674
computational guidance is an emerging and accelerating trend in aerospace guidance and control. combining machine learning and convex optimization, this paper presents a real - time computational guidance method for the 6 - degrees - of - freedom powered landing guidance problem. the powered landing guidance problem is formulated as an optimal control problem, which is then transformed into a convex optimization problem. instead of directly using neural networks as the controller, we use neural networks to improve the state - of - the - art sequential convex programming ( scp ) algorithm. based on a deep neural network, an initial trajectory generator is designed to provide a satisfactory initial guess for the scp algorithm. by designing the initial trajectory generator as a sequence model predictor, the proposed data - driven scp architecture is capable of improving the performance of any state - of - the - art scp algorithm in various applications, not just powered landing guidance. the simulation results show that the proposed method can precisely guide the vehicle to the landing site. moreover, through monte carlo tests, the proposed method saves 40. 8 % of the computation time on average compared with the scp method, while ensuring higher terminal state accuracy. the proposed computational guidance scheme is suitable for real - time applications.
arxiv:2210.07480
preregistration is the practice of registering the hypotheses, methods, or analyses of a scientific study before it is conducted. clinical trial registration is similar, although it may not require the registration of a study ' s analysis protocol. finally, registered reports include the peer review and in principle acceptance of a study protocol prior to data collection. preregistration can have a number of different goals, including ( a ) facilitating and documenting research plans, ( b ) identifying and reducing questionable research practices and researcher biases, ( c ) distinguishing between confirmatory and exploratory analyses, ( d ) transparently evaluating the severity of hypothesis tests, and, in the case of registered reports, ( e ) facilitating results - blind peer review, and ( f ) reducing publication bias. a number of research practices such as p - hacking, publication bias, data dredging, inappropriate forms of post hoc analysis, and harking may increase the probability of incorrect claims. although the idea of preregistration is old, the practice of preregistering studies has gained prominence to mitigate to some of the issues that are thought to underlie the replication crisis. = = types = = = = = standard preregistration = = = in the standard preregistration format, researchers prepare a research protocol document prior to conducting their research. ideally, this document indicates the research hypotheses, sampling procedure, sample size, research design, testing conditions, stimuli, measures, data coding and aggregation method, criteria for data exclusions, and statistical analyses, including potential variations on those analyses. this preregistration document is then posted on a publicly available website such as the open science framework or aspredicted. the preregistered study is then conducted, and a report of the study and its results are submitted for publication together with access to the preregistration document. this preregistration approach allows peer reviewers and subsequent readers to cross - reference the preregistration document with the published research article in order to identify the presence of any undisclosed deviations of the preregistration. deviations from the preregistration are possible and common in practice, but they should be transparently reported, and the consequences for the severity of the test should be evaluated. = = = registered reports = = = the registered report format requires authors to submit a description of the study methods and analyses prior to data collection. once the theoretical introduction, method, and analysis plan has been peer reviewed ( stage 1 peer
https://en.wikipedia.org/wiki/Preregistration_(science)
the levy stability analysis is carried out for e + e - collisions at z ^ 0 mass using monte carlo method. the levy index \ mu is found to be \ mu = 1. 701 + - 0. 043. the self - similar generalized dimensions d ( q ) and multi - fractal spectrum f ( \ alpha ) are presented. the renyi dimension d ( q ) decreases with increasing q. the self - similar multi - fractal spectrum is a convex curve with a maximum at q = 0, \ alpha = 1. 169 + - 0. 011. the right - side part of the spectrum, corresponding to negative values of q is obtained through analytical continuation.
arxiv:hep-ph/0204148
the read - only memory ( rom ) model is a classical model of computation to study time - space tradeoffs of algorithms. one of the classical results on the rom model is that any sorting algorithm that uses o ( s ) words of extra space requires $ \ omega ( n ^ 2 / s ) $ comparisons for $ \ lg n \ leq s \ leq n / \ lg n $ and the bound has also been recently matched by an algorithm. however, if we relax the model ( from rom ), we do have sorting algorithms ( say heapsort ) that can sort using $ o ( n \ lg n ) $ comparisons using $ o ( \ lg n ) $ bits of extra space, even keeping a permutation of the given input sequence at any point of time during the algorithm. we address similar questions for graph algorithms. we show that a simple natural relaxation of the rom model allows us to implement fundamental graph search methods like bfs and dfs more space efficiently than in rom. by simply allowing elements in the adjacency list of a vertex to be permuted, we show that, on an undirected or directed connected graph $ g $ having $ n $ vertices and $ m $ edges, the vertices of $ g $ can be output in a dfs or bfs order using $ o ( \ lg n ) $ bits of extra space and $ o ( n ^ 3 \ lg n ) $ time. thus we obtain similar bounds for reachability and shortest path distance ( both for undirected and directed graphs ). with a little more ( but still polynomial ) time, we can also output vertices in the lex - dfs order. as reachability in directed graphs and shortest path distance are nl - complete, and lex - dfs is p - complete, our results show that our model is more powerful than rom if l $ \ neq $ p. en route, we also introduce and develop algorithms for another relaxation of rom where the adjacency lists of the vertices are circular lists and we can modify only the heads of the lists. all our algorithms are simple but quite subtle, and we believe that these models are practical enough to spur interest for other graph problems in these models.
arxiv:1711.09859
extending the recently - developed bond - order - length - strength ( bols ) correlation mechanism [ sun cq, prog solid state chem 2007, 35, 1 - 159 ] to the pressure domain has led to atomistic insight into the phase stability of nanostructures under the varied stimuli of pressure and solid size. it turns out that the competition between the pressure - induced overheating ( tc elevation ) and the size - induced undercooling ( tc depression ) dominates the measured size trends of the pressure - induced phase transition. reproduction of the measured size and pressure dependence of the phase stability for cdse, fe2o3, and sno2 nanocrystals evidences the validity of the solution derived from the perspective of atomic cohesive energy and its response to the external stimulus.
arxiv:0801.0468
the classic enumerative functions for counting colorings of a graph $ g $, such as the chromatic polynomial $ p ( g, k ) $, do so under the assumption that the given graph is labeled. in 1985, hanlon defined and studied the chromatic polynomial for an unlabeled graph $ \ mathcal { g } $, $ p ( \ mathcal { g }, k ) $. determining $ p ( \ mathcal { g }, k ) $ amounts to counting colorings under the action of automorphisms of $ \ mathcal { g } $. in this paper, we consider the problem of counting list colorings of unlabeled graphs. list coloring of graphs is a widely studied generalization of classic coloring that was introduced by vizing and by erd \ h { o } s, rubin, and taylor in the 1970s. in 1990, kostochka and sidorenko introduced the list color function $ p _ \ ell ( g, k ) $ which is the guaranteed number of list colorings of a labeled graph $ g $ over all $ k $ - list assignments of $ g $. in this paper, we extend hanlon ' s definition to the list context and define the unlabeled list color function, $ p _ \ ell ( \ mathcal { g }, k ) $, of an unlabeled graph $ \ mathcal { g } $. in this context, we pursue a fundamental question whose analogues have driven much of the research on counting list colorings and its generalizations : for a given unlabeled graph $ \ mathcal { g } $, does $ p _ \ ell ( \ mathcal { g }, k ) = p ( \ mathcal { g }, k ) $ when $ k $ is large enough? we show the answer to this question is yes for a large class of unlabeled graphs that include point - determining graphs ( also known as irreducible graphs and as mating graphs ).
arxiv:2409.06063
we introduce cogvlm, a powerful open - source visual language foundation model. different from the popular shallow alignment method which maps image features into the input space of the language model, cogvlm bridges the gap between the frozen pretrained language model and image encoder by a trainable visual expert module in the attention and ffn layers. as a result, cogvlm enables deep fusion of vision language features without sacrificing any performance on nlp tasks. cogvlm - 17b achieves state - of - the - art performance on 10 classic cross - modal benchmarks, including nocaps, flicker30k captioning, refcoco, refcoco +, refcocog, visual7w, gqa, scienceqa, vizwiz vqa and tdiuc, and ranks the 2nd on vqav2, okvqa, textvqa, coco captioning, etc., surpassing or matching pali - x 55b. codes and checkpoints are available at https://github.com/thudm/cogvlm.
arxiv:2311.03079
we introduce the spinor representations for osp ( m | 2n ). these generalize the spinors for so ( m ) and the symplectic spinors for sp ( 2n ) and correspond to representations of the supergroup with supergroup pair ( spin ( m ) x mp ( 2n ), osp ( m | 2n ) ). we prove that these spinor spaces are uniquely characterized as the completely pointed osp ( m | 2n ) - modules. then the tensor product of this representation with irreducible finite dimensional osp ( m | 2n ) - modules is studied. therefore we derive a criterion for complete reducibility of tensor product representations. we calculate the decomposition into irreducible osp ( m | 2n ) - representations of the tensor product of the super spinor space with an extensive class of such representations and also obtain cases where the tensor product is not completely reducible.
arxiv:1205.0119
the paper proposes a two player game based strategy for resource allocation in service computing domains such as cloud, grid, etc. the players are modeled as demands / workflows for the resource and represent multiple types of qualitative and quantitative factors. the proposed strategy classifies them into two classes. the proposed system would forecast outcomes using a priori information available and measure / estimate existing parameters such as utilization and delay in an optimal load - balanced paradigm. keywords : load balancing ; service computing ; logistic regression ; probabilistic estimation
arxiv:1503.07038
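a hedged sketch of the classification step mentioned above: workflows described by quantitative factors are split into two classes with logistic regression. the feature names and synthetic labels are assumptions; the game - theoretic payoff modelling is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
# hypothetical features per workflow: (requested_cpu, expected_delay, priority)
X = rng.random((200, 3))
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] > 0.3).astype(int)

clf = LogisticRegression().fit(X, y)
new_flows = np.array([[0.9, 0.1, 1.0], [0.2, 0.8, 0.0]])
print(clf.predict(new_flows))              # class per incoming workflow
print(clf.predict_proba(new_flows)[:, 1])  # admission probability estimates
```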
since the $ n ^ * ( 1535 ) $ resonance was found to have large coupling to the strangeness due to its possible large $ s \ bar s $ component, we investigate the possible contribution of the t - channel $ n ^ * ( 1535 ) $ exchange for the $ p \ bar p \ to \ phi \ phi $ reaction. our calculation indicates that the new mechanism gives very significant contribution for the energies above 2. 25 gev and may be an important source for evading the okubo - zweig - iizuka rule in the $ \ phi $ production from $ n \ bar { n } $ annihilation.
arxiv:1008.1223
we investigate a possibility to explain the discrepancy between the standard model predictions and the observed value of the anomalous magnetic moment of the muon within unorthodox susy scenarios in which neutralinos are unstable. we start by reviewing the muon g - 2 calculations in the mssm and confront them with the most up - to - date experimental constraints. we find that the next generation of direct detection dm experiments combined with the current lhc results will ultimately test the parameter region that can explain the $ ( g - 2 ) _ \ mu $ anomaly in the mssm. next, we study r - parity violating and gauge - mediated susy - breaking scenarios with unstable neutralinos. these models do not provide a viable dm candidate, which allows them to evade the dm constraints. we find that in rpv and gmsb with slepton nlsp the lhc constraints are weaker, and a large region of parameter space can explain the observed anomaly and evade experimental limits.
arxiv:2205.04378
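for scale ( a standard rough one - loop estimate quoted in reviews, not a result of this paper ), the susy contribution to the muon anomalous moment behaves as

$$
\Delta a_\mu^{\mathrm{SUSY}} \simeq 13 \times 10^{-10} \left( \frac{100\,\mathrm{GeV}}{M_{\mathrm{SUSY}}} \right)^{2} \tan\beta \, \mathrm{sgn}(\mu),
$$

so accommodating an anomaly of order $ 25 \times 10 ^ { - 10 } $ points to superpartner masses of a few hundred gev for moderate $ \tan\beta $, which is why the lhc and dark matter limits discussed above are so constraining.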
the creation of stable 1d and 2d localized modes in lossy nonlinear media is a fundamental problem in optics and plasmonics. this article gives a short review of theoretical methods elaborated for this purpose, using localized gain applied at one or several " hot spots " ( hss ). the introduction surveys a broad class of models for which this approach was developed. other sections focus in some detail on basic 1d continuous and discrete systems, where the results can be obtained, partly or fully, in analytical form ( verified by comparison with numerical results ), which provides deeper insight into the nonlinear dynamics of optical beams in dissipative nonlinear media. in particular, we consider the single and double hss in the usual waveguide with self - focusing ( sf ) or self - defocusing ( sdf ) kerr nonlinearity, which give rise to rather sophisticated results in spite of the apparent simplicity of the model ; solitons attached to a pt - symmetric dipole embedded in the sf or sdf medium ; gap solitons pinned to an hs in a bragg grating ; and discrete solitons in a 1d lattice with a " hot site ".
arxiv:1408.3579
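a representative single - hot - spot model of the class reviewed above ( written here schematically ; normalizations and signs vary across the cited works ) is the nonlinear schrodinger equation with uniform loss and delta - localized gain,

$$
i u_z = -\tfrac{1}{2} u_{xx} - \sigma |u|^2 u - i \gamma u + i \Gamma \delta(x) u,
$$

where $ z $ is the propagation distance, $ \sigma = +1 $ ( sf ) or $ \sigma = -1 $ ( sdf ) selects the kerr nonlinearity, $ \gamma > 0 $ is the background loss, and $ \Gamma > 0 $ is the gain strength at the hot spot $ x = 0 $ ; the pinned soliton's amplitude is selected by the balance between the localized gain and the bulk loss.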
prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including paul allen, jeff hawkins, john holland, jaron lanier, steven pinker, theodore modis, gordon moore, and roger penrose. one claim made was that artificial intelligence growth is likely to run into decreasing returns instead of accelerating ones, as was observed in previously developed human technologies. = = intelligence explosion = = although technological progress has been accelerating in most areas, it has been limited by the basic intelligence of the human brain, which has not, according to paul r. ehrlich, changed significantly for millennia. however, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is significantly more intelligent than humans. if a superhuman intelligence were to be invented, either through the amplification of human intelligence or through artificial intelligence, it would, in theory, vastly improve over human problem - solving and inventive skills. such an ai is referred to as seed ai because if an ai were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware to design an even more capable machine, which could repeat the process in turn. this recursive self - improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. it is speculated that over many iterations, such an ai would far surpass human cognitive abilities. i. j. good speculated that superhuman intelligence might bring about an intelligence explosion : let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines ; there would then unquestionably be an ' intelligence explosion ', and the intelligence of man would be left far behind. thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. one version of intelligence explosion is where computing power approaches infinity in a finite amount of time. in this version, once ais are performing the research to improve themselves, speed doubles, e. g., after 2 years, then 1 year, then 6 months, then 3 months, then 1. 5 months, etc., where the infinite sum of the doubling periods is four years.
https://en.wikipedia.org/wiki/Technological_singularity
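the closing arithmetic is a geometric series : if the first doubling takes 2 years and each subsequent doubling takes half as long, the total time to infinitely many doublings is

$$
\sum_{k=0}^{\infty} 2 \cdot 2^{-k} = 2 + 1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots = 4 \ \text{years},
$$

which is why this version places the singularity at a finite date.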
we determine the minimum degree sum of two adjacent vertices that ensures a perfect matching in a 3 - graph without isolated vertices. more precisely, suppose that $ h $ is a 3 - uniform hypergraph whose order $ n $ is sufficiently large and divisible by $ 3 $. if $ h $ contains no isolated vertex and $ deg ( u ) + deg ( v ) > \ frac { 2 } { 3 } n ^ 2 - \ frac { 8 } { 3 } n + 2 $ for any two vertices $ u $ and $ v $ that are contained in some common edge of $ h $, then $ h $ contains a perfect matching. this bound is tight.
arxiv:1710.04752
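the theorem above is asymptotic, but the objects involved are easy to illustrate : a brute - force perfect - matching test for small 3 - graphs ( illustrative code, not the proof technique of the paper ).

```python
from itertools import combinations

def has_perfect_matching(n: int, edges: set) -> bool:
    """edges: frozenset triples on vertices 0..n-1, with n divisible by 3."""
    def cover(remaining: frozenset) -> bool:
        if not remaining:
            return True
        v = min(remaining)  # the lowest uncovered vertex must lie in some edge
        return any(cover(remaining - e)
                   for e in edges if v in e and e <= remaining)
    return cover(frozenset(range(n)))

# the complete 3-graph on 6 vertices clearly contains a perfect matching
k6 = {frozenset(t) for t in combinations(range(6), 3)}
print(has_perfect_matching(6, k6))   # True
```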
the second survey of molecular clouds in 12co ( j = 1 - 0 ) was carried out in the large magellanic cloud by nanten. the sensitivity of this survey is twice as high as that of the previous nanten survey, leading to the detection of molecular clouds with m _ co > 2 x 10 ^ 4 m _ sun. we identified 272 molecular clouds, 230 of which are detected at three or more observed positions. we derived the physical properties, such as size, line width, and virial mass, of the 164 gmcs whose extent exceeds the nanten beam size along both the major and minor axes. the co luminosity and virial mass of the clouds show a good correlation, m _ vir propto l _ co ^ { 1. 1 + - 0. 1 }, with a spearman rank correlation coefficient of 0. 8, suggesting that the clouds are in nearly virial equilibrium. assuming the clouds are in virial equilibrium, we derived an x _ co factor of ~ 7 x 10 ^ 20 cm ^ - 2 ( k km s ^ - 1 ) ^ - 1. the mass spectrum of the clouds is fitted well by a power law, n _ cloud ( > m _ co ) proportional to m _ co ^ { - 0. 75 + - 0. 06 }, above the completeness limit of 5 x 10 ^ 4 m _ sun. the slope of the mass spectrum becomes steeper if we fit only the massive clouds ; e. g., n _ cloud ( > m _ co ) is proportional to m _ co ^ { - 1. 2 + - 0. 2 } for m _ co > 3 x 10 ^ 5 m _ sun.
arxiv:0804.1458
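a back - of - envelope sketch of the virial analysis described above. the virial - mass coefficient assumes the standard $ \rho \propto r^{-1} $ density profile, helium is neglected, and the numerical inputs are illustrative choices, not values from the survey.

```python
# Virial mass for rho ~ 1/r: M_vir [M_sun] = 190 * (dv [km/s])^2 * (R [pc]).
M_SUN_G = 1.989e33        # solar mass in grams
PC_CM = 3.086e18          # parsec in centimeters
M_H2_G = 2 * 1.6726e-24   # H2 molecular mass in grams (helium neglected)

def virial_mass_msun(dv_kms: float, r_pc: float) -> float:
    return 190.0 * dv_kms**2 * r_pc

def x_co_factor(mvir_msun: float, l_co_k_kms_pc2: float) -> float:
    """X_CO in cm^-2 (K km/s)^-1, equating the virial mass with H2 mass."""
    n_h2 = mvir_msun * M_SUN_G / M_H2_G        # total number of H2 molecules
    return n_h2 / (l_co_k_kms_pc2 * PC_CM**2)  # column per unit CO intensity

mvir = virial_mass_msun(dv_kms=5.0, r_pc=30.0)          # ~1.4e5 M_sun
print(f"M_vir ~ {mvir:.2e} M_sun")
print(f"X_CO ~ {x_co_factor(mvir, l_co_k_kms_pc2=1.3e4):.2e} cm^-2 (K km/s)^-1")
```

with these made - up inputs the estimate lands near the ~ 7 x 10 ^ 20 value quoted above, but that agreement is by construction of the example.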
we report the experimental verification of noise - enhanced logic behaviour in an electronic analog of a synthetic genetic network composed of two repressors and two constitutive promoters. we observe good agreement between circuit measurements and numerical predictions, with the circuit allowing for robust logic operations in an optimal window of noise. namely, the input - output characteristics of a logic gate are reproduced faithfully under moderate noise, a manifestation of the phenomenon known as logical stochastic resonance. the two dynamical variables in the system yield complementary logic behaviour simultaneously, and the system is easily morphed from and / nand to or / nor logic.
arxiv:1212.4470
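a minimal numerical sketch of logical stochastic resonance, the phenomenon invoked above : a noisy overdamped bistable element driven by the sum of two logic inputs settles in the well encoding their or ( the sign of the bias switches the gate ). parameter values are illustrative, not those of the electronic circuit in the paper.

```python
import numpy as np

def lsr_gate(i1: int, i2: int, noise: float = 0.3, bias: float = 0.25,
             seed: int = 1) -> int:
    """Integrate dx/dt = x - x^3 + drive + bias + noise; read off the well sign."""
    rng = np.random.default_rng(seed)
    drive = 0.5 * ((i1 - 0.5) + (i2 - 0.5))   # logic 0/1 -> -0.25/+0.25 each
    x, dt, tail = 0.0, 0.01, []
    for step in range(40_000):
        x += ((x - x**3 + drive + bias) * dt
              + noise * np.sqrt(dt) * rng.standard_normal())
        if step >= 30_000:
            tail.append(x)                     # average over the settled tail
    return int(np.mean(tail) > 0)              # positive well encodes logic 1

# truth table: with a positive bias the gate realizes or
for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, "->", lsr_gate(*pair))
```

the logic is reliable only in a moderate - noise window : too little noise and the element can stay trapped in the wrong well, too much and it hops randomly between wells.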
gallium oxide epitaxial layers grown on native substrates and basal - plane sapphire were characterized by x - ray photoelectron and optical reflectance spectroscopies. the xps electronic structure mapping was coupled to density functional theory calculations.
arxiv:1801.09451