Supernova 1987A (SN 1987A) in the neighbouring Large Magellanic Cloud offers a superb opportunity to follow the evolution of a supernova and its remnant in unprecedented detail. Recently, far-infrared (far-IR) and sub-mm emission was detected from the direction of SN 1987A, which was interpreted as emission from dust, possibly freshly synthesized in the SN ejecta. To better constrain the location, and hence the origin, of the far-IR and sub-mm emission in SN 1987A, we have attempted to resolve the object in that part of the electromagnetic spectrum. We observed SN 1987A during July-September 2011 with the Atacama Pathfinder EXperiment (APEX), at a wavelength of 350 micron with the Submillimetre APEX Bolometer Camera (SABOCA) and at 870 micron with the Large APEX Bolometer Camera (LABOCA). The 350-micron image has superior angular resolution (8") over that of the Herschel Space Observatory 350-micron image (25"). The 870-micron observation (at 20" resolution) is a repetition of a similar observation made in 2007. In both images, at 350 and 870 micron, emission is detected from SN 1987A, and the source is unresolved. The flux densities in the new (2011) measurements are consistent with those measured before with Herschel at 350 micron (in 2010) and with APEX at 870 micron (in 2007). A higher dust temperature (approximately 33 K) and a lower dust mass might be possible than previously thought. The new measurements, at the highest angular resolution achieved so far at far-IR and sub-mm wavelengths, strengthen the constraints on the location of the emission, which is thought to be close to the site of SN 1987A and its circumstellar ring structures. These measurements set the stage for upcoming observations at even higher angular resolution with the Atacama Large Millimeter Array (ALMA).
arxiv:1203.4975
The perturbative QCD static potential and ultrasoft contributions, which together give the static energy, have been calculated to three- and four-loop order respectively, by several authors. Using the renormalization group and Padé approximants, we estimate the four-loop corrections to the static energy. We also employ the optimal renormalization method and resum the logarithms of the perturbative series in order to reduce the sensitivity to the renormalization scale in momentum space. This is the first application of the method to results at these orders. The convergence behaviour of the perturbative series is also improved in position space using the restricted Fourier transform scheme. Using optimal renormalization, we have extracted the value of $\Lambda^{\overline{\textrm{MS}}}_{\textrm{QCD}}$ at different scales for two active flavours by matching to the static energy from lattice QCD simulations.
arxiv:2007.10775
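The idea behind a Padé estimate of the next, uncomputed order can be shown on a toy truncated series. This is a generic [1/1] illustration of the technique, not the actual four-loop QCD computation:

```python
# [1/1] Pade approximant of a truncated series c0 + c1*x + c2*x**2:
# P(x) = (a0 + a1*x) / (1 + b1*x) with b1 = -c2/c1, a0 = c0, a1 = c1 + c0*b1.
# Expanding P(x) one order further predicts the next series coefficient:
#   c3 ~ b1**2 * c1 = c2**2 / c1.

def pade_next_coefficient(c1: float, c2: float) -> float:
    """Estimate the cubic coefficient of a series from its [1/1] Pade approximant."""
    return c2 ** 2 / c1

# For the geometric series 1/(1-x) = 1 + x + x^2 + ... the estimate is exact;
# for exp(x) it gives 0.25 against the true 1/6: a rough but cheap guess.
```

The same matching idea, applied to the known three-loop coefficients, is what yields an estimate of the four-loop term in analyses like the one above.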
Braid combing is a procedure defined by Emil Artin to solve the word problem in braid groups for the first time. It is well known to have exponential complexity. In this paper, we use the theory of straight-line programs to give a polynomial algorithm which performs braid combing. This procedure can be applied to braids on surfaces, providing the first algorithm (to our knowledge) which solves the word problem for braid groups on surfaces with boundary in polynomial time and space. In the case of surfaces without boundary, braid combing needs to use a section from the fundamental group of the surface to the braid group. Such a section was shown to exist by Gonçalves and Guaschi, who also gave a geometric description. We propose an algebraically simpler section, which we describe explicitly in terms of generators of the braid group, and we show why the above procedure to comb braids in polynomial time does not work in this case.
arxiv:1712.01552
We review the nucleon's twist-3 polarized structure functions from the viewpoint of gauge-invariant, nonlocal light-cone operators in QCD. We discuss a systematic treatment of the polarized structure functions and the corresponding parton distribution functions. We emphasize unique features of higher-twist distributions, and the role of the QCD equations of motion in deriving their anomalous dimensions for $Q^2$-evolution.
arxiv:hep-ph/9909300
We introduce a constructive method, applicable to a large number of description logics (DLs), for establishing the concept-based Beth definability property (CBP) based on sequent systems. Using the highly expressive DL RIQ as a case study, we introduce novel sequent calculi for RIQ-ontologies and show how certain interpolants can be computed from sequent calculus proofs, which permits the extraction of explicit definitions of implicitly definable concepts. To the best of our knowledge, this is the first sequent-based approach to computing interpolants and definitions within the context of DLs, as well as the first proof that RIQ enjoys the CBP. Moreover, due to the modularity of our sequent systems, our results hold for restrictions of RIQ, and are applicable to other DLs by suitable modifications.
arxiv:2404.15840
We propose a surface-edge state theory for the half-quantized Hall conductance of surface states in topological insulators. The gap opening of a single Dirac cone for the surface states in a weak magnetic field is demonstrated. We find that a new surface state resides on the surface edges and carries chiral edge current, resulting in a half-quantized Hall conductance in a four-terminal setup. We also give a physical interpretation of the half-quantized conductance by showing that this state is the product of the splitting of a boundary bound state of massive Dirac fermions, which carries a conductance quantum.
arxiv:1007.0497
Inter-city highway transportation is significant for urban life. As one of the key functions in intelligent transportation systems (ITS), traffic evaluation plays a significant role, yet daily traffic flow prediction still faces challenges at network-wide toll stations. On the one hand, data imbalance among various locations deteriorates prediction performance in practice. On the other hand, complex correlated spatio-temporal factors cannot be comprehensively employed over long-term durations. In this paper, a prediction method is proposed for daily traffic flow in the highway domain through spatio-temporal deep learning. In our method, a data normalization strategy is used to deal with data imbalance, owing to the long-tail distribution of traffic flow at network-wide toll stations. Then, based on graph convolutional networks, we construct networks with distinct semantics to capture spatio-temporal features. Besides that, meteorology and calendar features are used by our model in the fully connected stage to extract external characteristics of traffic flow. Through extensive experiments and case studies on one Chinese provincial highway, our method shows clear improvements in predictive accuracy over baselines, as well as practical benefits in business.
arxiv:2308.05601
As part of the first far-IR line survey towards Orion KL, we present the detection of seven new rotationally excited OH Lambda-doublets (at 48, 65, 71, 79, 98 and 115 um). Observations were performed with the Long Wavelength Spectrometer (LWS) Fabry-Perots on board the Infrared Space Observatory (ISO). In total, more than 20 resolved OH rotational lines, with upper energy levels up to 620 K, have been detected at angular and velocity resolutions of 80" and 33 km s^-1, respectively. OH line profiles show a complex behavior, evolving from pure absorption, through P-Cygni type, to pure emission. We also present a large-scale 6' declination raster in the OH $^2\Pi_{3/2}\ J = 5/2^+ - 3/2^-$ and $^2\Pi_{3/2}\ J = 7/2^- - 5/2^+$ lines (at 119.441 and 84.597 um), revealing the decrease of excitation outside the core of the cloud. From the observed profiles, mean intrinsic line widths, and velocity offsets between emission and absorption line peaks, we conclude that most of the excited OH arises from the Orion outflow(s), i.e. the "plateau" component. We determine an average OH abundance relative to H_2 of x(OH) = (0.5-1.0) x 10^-6, a kinetic temperature of 100 K, and a density of n(H_2) = 5 x 10^5 cm^-3. Even under these conditions, the OH excitation is heavily coupled to the strong dust continuum emission from the inner hot core regions and from the expanding flow itself.
arxiv:astro-ph/0603077
In this paper we show that for any affine complete rational surface singularity there is a correspondence between the dual graph of the minimal resolution and the quiver of the endomorphism ring of the special CM modules. We thus call such an algebra the reconstruction algebra. As a consequence, the derived category of the minimal resolution is equivalent to the derived category of an algebra whose quiver is determined by the dual graph. Also, for any finite subgroup G of GL(2, C), it means that the endomorphism ring of the special CM C[[x, y]]^G modules can be used to build the dual graph of the minimal resolution of C^2/G, extending McKay's observation for finite subgroups of SL(2, C) to all finite subgroups of GL(2, C).
arxiv:0809.1973
A quantum dot coupled to an optical cavity has recently proven to be an excellent source of on-demand single photons. Typically, applications require simultaneously high efficiency of the source and quantum indistinguishability of the extracted photons. While much progress has been made both in suppressing background sources of decoherence and in utilizing cavity quantum electrodynamics to overcome fundamental limitations set by the intrinsic exciton-phonon scattering inherent in the solid-state platform, the role of the excitation pulse has often been neglected. We investigate quantitatively the factors associated with pulsed excitation that can limit simultaneous efficiency and indistinguishability, including excitation of multiple excitons, multi-photon emission, and pump-induced dephasing, and find for realistic single-photon sources that these effects degrade the source figures of merit by an amount comparable to that of phonon scattering. We also develop rigorous open-quantum-system polaron master equation models of quantum dot dynamics under a time-dependent drive, which incorporate non-Markovian effects of both photon and phonon reservoirs, and explicitly show how coupling to a high-Q-factor cavity suppresses multi-photon emission in a way not predicted by commonly employed models. We then use our findings to summarize the criteria that can be used for single-photon source optimization.
arxiv:1805.08823
those involved in the treatment of eye diseases such as cataracts. Ibn Sina (known as Avicenna in the West, c. 980–1037) was a prolific Persian medical encyclopedist who wrote extensively on medicine, his two most notable medical works being the Kitab al-Shifa' ("Book of Healing") and The Canon of Medicine, both of which were used as standard medicinal texts in both the Muslim world and in Europe well into the 17th century. Amongst his many contributions are the discovery of the contagious nature of infectious diseases and the introduction of clinical pharmacology. The institutionalization of medicine was another important achievement in the Islamic world. Although hospitals as an institution for the sick emerged in the Byzantine Empire, the model of institutionalized medicine for all social classes was extensive in the Islamic empire and was scattered throughout it. In addition to treating patients, physicians could teach apprentice physicians, as well as write and do research. The discovery of the pulmonary transit of blood in the human body by Ibn al-Nafis occurred in a hospital setting.

==== Decline ====

Islamic science began its decline in the 12th–13th century, before the Renaissance in Europe, due in part to the Christian reconquest of Spain and the Mongol conquests in the East in the 11th–13th century. The Mongols sacked Baghdad, capital of the Abbasid caliphate, in 1258, which ended the Abbasid empire. Nevertheless, many of the conquerors became patrons of the sciences. Hulagu Khan, for example, who led the siege of Baghdad, became a patron of the Maragheh observatory. Islamic astronomy continued to flourish into the 16th century.

=== Western Europe ===

By the eleventh century, most of Europe had become Christian; stronger monarchies emerged; borders were restored; technological developments and agricultural innovations were made, increasing the food supply and population.
Classical Greek texts were translated from Arabic and Greek into Latin, stimulating scientific discussion in Western Europe. In classical antiquity, Greek and Roman taboos had meant that dissection was usually banned, but in the Middle Ages medical teachers and students at Bologna began to open human bodies, and Mondino de Luzzi (c. 1275–1326) produced the first known anatomy textbook based on human dissection. As a result of the Pax Mongolica, Europeans, such as Marco Polo, began to venture further and further east. The written accounts of Polo and his fellow travelers inspired other Western
https://en.wikipedia.org/wiki/History_of_science
Let $E$ be an elliptic curve defined over $\mathbb{Q}$ and let $N$ be a positive integer. Here, $M_E(N)$ counts the number of primes $p$ such that the group $E_p(\mathbb{F}_p)$ is of order $N$. In an earlier joint work with Balasubramanian, we showed that $M_E(N)$ follows a Poisson distribution when an average is taken over a family of elliptic curves with parameters $A$ and $B$, where $A, B \ge N^{\frac{\ell}{2}}(\log N)^{1+\gamma}$ and $AB > N^{\frac{3\ell}{2}}(\log N)^{2+\gamma}$ for a fixed integer $\ell$ and any $\gamma > 0$. In this paper, we show that for sufficiently large $N$, the same result holds even if we take $A$ and $B$ in the range $\exp(N^{\frac{\epsilon^2}{20\ell}}) \ge A, B > N^\epsilon$ and $AB > N^{\frac{3\ell}{2}}(\log N)^{6+\gamma}$ for any $\epsilon > 0$.
arxiv:1609.08549
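The counting function $M_E(N)$ can be made concrete with a brute-force sketch. The curve coefficients and prime bound below are small illustrative choices; serious computations would use Schoof-type point counting instead:

```python
# Brute-force computation of |E_p(F_p)| for E: y^2 = x^3 + a*x + b, and the
# counting function M_E(N) = #{p <= X : |E_p(F_p)| = N} from the abstract.

def group_order(a: int, b: int, p: int) -> int:
    """Number of points on y^2 = x^3 + a*x + b over F_p, plus infinity."""
    sqrt_count = {}
    for y in range(p):
        sqrt_count[y * y % p] = sqrt_count.get(y * y % p, 0) + 1
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        count += sqrt_count.get(rhs, 0)  # number of y with y^2 = rhs (mod p)
    return count

def M_E(a: int, b: int, N: int, bound: int) -> int:
    """Count odd primes p <= bound of good reduction with |E_p(F_p)| = N."""
    def is_prime(n: int) -> bool:
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    disc = -16 * (4 * a ** 3 + 27 * b ** 2)
    return sum(1 for p in range(3, bound + 1)
               if is_prime(p) and disc % p and group_order(a, b, p) == N)

# Example: y^2 = x^3 + x + 1 has 9 points over F_5 and 5 points over F_7.
```

By Hasse's bound each order lies in $[p+1-2\sqrt{p},\, p+1+2\sqrt{p}]$, so only primes of size comparable to $N$ can contribute to $M_E(N)$.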
The ever-growing processing power of supercomputers in recent decades enables us to explore increasingly complex scientific problems. Effectively scheduling these jobs is crucial for individual job performance and for system efficiency. The traditional job schedulers in high-performance computing (HPC) are simple and concentrate on improving CPU utilization. The emergence of new hardware resources and novel hardware structures imposes severe challenges on traditional schedulers. Increasingly diverse workloads, including compute-intensive and data-intensive applications, require more efficient schedulers. Even worse, these two factors interplay with each other, which makes the scheduling problem even more challenging. In recent years, much research has discussed new scheduling methods to combat the problems brought by rapid system changes. In this study, we investigate the challenges faced by HPC scheduling and the state-of-the-art scheduling methods for overcoming them. Furthermore, we propose an intelligent scheduling framework to alleviate the problems encountered in modern job scheduling.
arxiv:2109.09269
We study the CP asymmetries of the rare top-quark decays $t \to c\gamma$ and $t \to cg$ in the aligned two-Higgs-doublet model (A2HDM), which is generically characterized by new sources of CP violation beyond the Standard Model (SM). Specifically, the branching ratios and CP asymmetries of these rare top-quark decays are explicitly formulated, with an emphasis on the origins of weak and strong phases in the A2HDM. Taking into account the most relevant constraints on this model, we evaluate the variations of these observables with respect to the model parameters. It is found that the branching ratios of the $t \to c\gamma$ and $t \to cg$ decays can maximally reach up to $1.47 \times 10^{-10}$ and $4.86 \times 10^{-9}$ respectively, which are about four and three orders of magnitude higher than the corresponding SM predictions. While the branching ratios are almost independent of the relative phase $\varphi$ between the two alignment parameters $\varsigma_u$ and $\varsigma_d$ within the allowed parameter space, the CP asymmetries are found to be very sensitive to $\varphi$. When the two alignment parameters are complex, with a non-zero $\varphi$ varied within the range $[50^\circ, 150^\circ]$, the magnitudes of the CP asymmetries can be significantly enhanced relative to both the SM and the real case. In particular, the maximum absolute values of the CP asymmetries can even reach $\mathcal{O}(1)$ for these two decay modes in the range $\varphi \in [70^\circ, 100^\circ]$. These observations could be utilized to discriminate between the SM and the different scenarios of the A2HDM.
arxiv:2409.04179
First-order linear temporal logic (FOLTL) is a flexible and expressive formalism capable of naturally describing complex behaviors and properties. Although the logic is in general highly undecidable, the idea of using it as a specification language for the verification of complex infinite-state systems is appealing. However, a missing piece, which has proved to be an invaluable tool in dealing with other temporal logics, is an automaton model capable of capturing the logic. In this paper we address this issue by defining and studying such a model, which we call the first-order automaton. We define this very general class of automata and the corresponding notion of regular first-order language, showing their closure under the most common language-theoretic operations. We show how they can capture any FOLTL formula over any signature and theory, and provide sufficient conditions for the semi-decidability of their non-emptiness problem. Then, to show the usefulness of the formalism, we prove the decidability of monodic FOLTL, a classic result known in the literature, with a simpler and more direct proof.
arxiv:2405.20057
We present a completely perturbative model that displays behavior similar to that of walking technicolor. In one phase of the model, RG trajectories run towards an IR fixed point, but approximate scale invariance is spontaneously broken before the fixed point is reached. The trajectories then run away from it, and a light dilaton appears in the spectrum. The mass of the dilaton is controlled by the "distance" of the theory from the critical surface, and can be adjusted to be arbitrarily small without turning off the interactions. There is a second phase with no spontaneous symmetry breaking, and hence no dilaton, in which RG trajectories do terminate at the IR fixed point.
arxiv:1105.2370
civilizations. This includes the ancient and medieval kingdoms and empires of the Middle East and Near East, ancient Iran, ancient Egypt, ancient Nubia, and Anatolia in present-day Turkey, ancient Nok, Carthage, the Celts, Greeks and Romans of ancient Europe, medieval Europe, ancient and medieval China, ancient and medieval India, ancient and medieval Japan, amongst others. A 16th-century book by Georg Agricola, De re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. Agricola has been described as the "father of metallurgy".

== Extraction ==

Extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. In order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. Extractive metallurgists are interested in three primary streams: feed, concentrate (metal oxide/sulphide), and tailings (waste). After mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough that each particle is either mostly valuable or mostly waste. Concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. Mining may not be necessary if the ore body and physical environment are conducive to leaching. Leaching dissolves minerals in an ore body and results in an enriched solution. The solution is collected and processed to extract valuable metals. Ore bodies often contain more than one valuable metal. Tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. Additionally, a concentrate may contain more than one valuable metal. That concentrate would then be processed to separate the valuable metals into individual constituents.
== Metal and its alloys ==

Much effort has been placed on understanding the iron–carbon alloy system, which includes steels and cast irons. Plain carbon steels (those that contain essentially only carbon as an alloying element) are used in low-cost, high-strength applications where neither weight nor corrosion is a major concern. Cast irons, including ductile iron, are also part of the iron–carbon system. Iron–manganese–chromium alloys (Hadfield-type steels) are also used in non-magnetic applications such as directional drilling. Other engineering metals
https://en.wikipedia.org/wiki/Metallurgy
The accurate localization of gamma-ray bursts remains a crucial task. Historically, improved localizations led to the discovery of afterglow emission and the realization of the cosmological distribution of GRBs via redshift measurements; a more recent requirement comes with the potential of studying the kilonovae of neutron star mergers. Gravitational wave detectors are expected to provide locations no better than 10 square degrees over the next decade. With their increasing horizon for merger detections, the intensity of the gamma-ray and kilonova emission also drops, making their identification in large error boxes a challenge. Thus, localization via the gamma-ray emission seems to be the best chance of mitigating this problem. Here we propose to equip some of the second-generation Galileo satellites with dedicated GRB detectors. This saves the cost of launches and satellites for a dedicated GRB network, the large orbital radius is beneficial for triangulation, and perfect positional and timing accuracy come for free. We present simulations of the triangulation accuracy, demonstrating that short GRBs as faint as GRB 170817A can be localized to a 1-degree radius (1 sigma).
arxiv:2205.08637
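The basic geometry behind such timing triangulation can be sketched as follows: a delay dt between two detectors separated by a baseline D confines the source to an annulus at angle theta = arccos(c·dt/D) from the baseline, and a timing uncertainty sigma_t maps to an angular half-width of roughly c·sigma_t/(D·sin theta). The baseline and timing numbers below are rough illustrative assumptions, not the paper's actual simulation:

```python
# Two-detector timing annulus for GRB triangulation.
import math

C_KM_S = 299_792.458  # speed of light in km/s

def annulus_angle(dt: float, baseline_km: float) -> float:
    """Source-to-baseline angle (rad) from the measured time delay dt (s)."""
    return math.acos(C_KM_S * dt / baseline_km)

def angular_uncertainty(theta: float, sigma_t: float, baseline_km: float) -> float:
    """Approximate 1-sigma annulus half-width (rad) from timing error sigma_t."""
    return C_KM_S * sigma_t / (baseline_km * math.sin(theta))

# For a ~50,000 km Galileo-scale baseline and a 1 ms timing error, a source
# perpendicular to the baseline is constrained to a few tenths of a degree.
```

With more than two satellites, intersecting several such annuli pins down the position, which is roughly consistent with the ~1-degree localization quoted in the abstract for millisecond-level timing over baselines of tens of thousands of km.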
We review several geometric aspects and properties of the orbifold T6/Z2 x Z6' x ΩR with discrete torsion that are crucial with respect to global model building and the search for discrete gauge symmetries in the context of intersecting D6-brane models. A global six-stack Pati-Salam model is used for illustration, and various characteristics of its effective field theory are discussed.
arxiv:1303.6845
The motion of a container along a planar circular trajectory at a constant angular velocity, i.e. orbital shaking, is of interest in several industrial applications, e.g. for fermentation processes or in the cultivation of stem cells, where good mixing and efficient gas exchange are the main targets. Under these external forcing conditions, the free surface typically exhibits a primary steady-state motion through a single-crest dynamics, whose wave amplitude, as a function of the external forcing parameters, shows a Duffing-like behaviour. However, previous experiments in lab-scale cylindrical containers have unveiled that, owing to the excitation of super-harmonics, diverse dynamics are observable in certain driving-frequency ranges. Among these super-harmonics, the double-crest dynamics is particularly relevant, as it displays a notably large amplitude response that is strongly favored by the spatial structure of the external forcing. In the inviscid limit, and for circular cylindrical containers, we formalize here a weakly nonlinear analysis of the full hydrodynamic sloshing system via the multiple-timescale method, leading to an amplitude equation suitable for describing such super-harmonic dynamics and the resulting single-to-double crest wave transition. The weakly nonlinear prediction is shown to be in fairly good agreement with previous experiments described in the literature. Lastly, we discuss how an analogous amplitude equation can be derived by solving asymptotically for the first super-harmonic of the forced Helmholtz-Duffing equation with small nonlinearities.
arxiv:2203.11267
A non-relativistic theory of inertia based on Mach's principle is presented, as envisaged but not achieved by Ernst Mach in 1872. Its central feature is a space-dependent, anisotropic, symmetric inert mass tensor.
arxiv:1808.06687
Seymour conjectured that every oriented simple graph contains a vertex whose second neighborhood is at least as large as its first. Seymour's conjecture has been verified in several special cases, most notably for tournaments by Fisher. One extension of the conjecture that has been used by several researchers is to consider vertex-weighted digraphs. In this paper we introduce a version of the conjecture for arc-weighted digraphs. We prove the conjecture in the special case of arc-weighted tournaments, strengthening Fisher's theorem. Our proof does not rely on Fisher's result, and thus can be seen as an alternative proof of said theorem.
arxiv:1212.1883
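For a concrete feel of the unweighted conjecture, the following sketch searches a small digraph for a vertex whose second out-neighborhood is at least as large as its first; the example digraphs are invented for illustration:

```python
# Second-neighborhood check on a small oriented graph: N^+(v) is the set of
# out-neighbours of v, and N^++(v) the vertices reachable in exactly two
# steps but not one. Seymour's conjecture asserts that some vertex v has
# |N^++(v)| >= |N^+(v)|.

def second_neighborhood_vertex(adj):
    """Return a vertex v with |N^++(v)| >= |N^+(v)|, or None."""
    for v, first in adj.items():
        second = set()
        for u in first:
            second |= adj[u]
        second -= first   # reachable in two steps but not in one
        second.discard(v)
        if len(second) >= len(first):
            return v
    return None

# A directed 3-cycle: every vertex has |N^+| = |N^++| = 1.
cycle3 = {0: {1}, 1: {2}, 2: {0}}
# A transitive tournament on 3 vertices: only the sink satisfies the property.
trans3 = {0: {1, 2}, 1: {2}, 2: set()}
```

In the transitive tournament the sink trivially satisfies the property (both neighborhoods are empty), illustrating why the conjecture is easy for acyclic orientations and interesting for dense ones.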
This paper is devoted to the investigation of finite-dimensional commutative nilpotent (associative) algebras N over an arbitrary base field of characteristic zero. Due to the lack of a general structure theory for algebras of this type (as opposed to the semi-simple case), we associate with every N various objects which encode the algebra structure. Our main results concern the subclass of algebras having a 1-dimensional annihilator, that is, those that are maximal ideals of Gorenstein algebras of finite vector space dimension > 1. Associated structural objects are then, for instance, a class of mutually affinely equivalent algebraic hypersurfaces S in N, and a class of so-called nil-polynomials P, whose degree is the nil-index of N. Then N can be reconstructed from S, and even from the quadratic plus cubic part of P. If the algebra N is graded, the hypersurface S is affinely homogeneous. The paper closes with an example of an N of dimension 23 and nil-index 5 for which S is not affinely homogeneous.
arxiv:1101.3088
A non-monotonic energy dependence of the $K^+/\pi^+$ ratio, with a sharp maximum close to 30 A GeV, is observed in central Pb+Pb collisions. Within a statistical model of the early stage, this is interpreted as a sign of the phase transition to a QGP, which causes a sharp change in the energy dependence of the strangeness-to-entropy ratio. This observation naturally motivates us to study the production of multistrange hyperons ($\Xi$, $\Omega$) as a function of the beam energy. Furthermore, it was suggested that the kinematic freeze-out of the $\Omega$ takes place directly at QGP hadronization. If this is indeed the case, the transverse momentum spectra of the $\Omega$ directly reflect the transverse expansion velocity of a hadronizing QGP. In this report we show preliminary NA49 results on $\Omega^-$ and $\bar{\Omega}^+$ production in central Pb+Pb collisions at 40 and 158 A GeV and compare them to measurements of $\Xi^-$ and $\bar{\Xi}^+$ production in central Pb+Pb collisions at 30, 40, 80 and 158 A GeV.
arxiv:nucl-ex/0312022
We develop a Lazard correspondence between post-Lie rings and skew braces that satisfy a natural completeness condition. This is done through a thorough study of how the Lazard correspondence behaves on semi-direct sums of Lie rings. In particular, for a prime $p$ and $k < p$, we obtain a correspondence between skew braces of order $p^k$ and left-nilpotent post-Lie rings of order $p^k$ on a nilpotent Lie ring. This therefore extends results by Smoktunowicz.
arxiv:2406.02475
This article advances the prerequisite network as a means to visualize the hidden structure in an academic curriculum. Network technologies have been used for some time now in social analyses and, more recently, in biology in the areas of genomics and systems biology. Here I treat the curriculum as a complex system, with nodes representing courses and links between nodes the course prerequisites, as readily obtained from a course catalogue. The resulting curriculum prerequisite network can be rendered as a directed acyclic graph, which has certain desirable analytical features. The curriculum is seen to be partitioned into numerous isolated course groupings, the size of the groups varying considerably. Individual courses are seen serving very different roles in the overall organization, such as information sources, hubs, and bridges. This network represents the intrinsic, hard-wired constraints on the flow of information in a curriculum, and is the organizational context within which learning occurs.
arxiv:1408.5340
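The prerequisite-network construction can be sketched with Python's standard library; the course names and edges below are invented for illustration, and Kahn's algorithm stands in for verifying that the network is a directed acyclic graph:

```python
# A toy curriculum prerequisite network: an edge u -> v means that course u
# is a prerequisite of course v. Course names are hypothetical.
from collections import deque

prereq = {
    "Calc I": ["Calc II"],
    "Calc II": ["Diff Eq"],
    "Intro Physics": ["Diff Eq", "Mechanics"],
    "Diff Eq": [],
    "Mechanics": [],
}

def topological_order(graph):
    """Kahn's algorithm: a valid course ordering, or an error on a cycle."""
    indegree = {v: 0 for v in graph}
    for targets in graph.values():
        for v in targets:
            indegree[v] += 1
    queue = deque(v for v, d in indegree.items() if d == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    if len(order) != len(graph):
        raise ValueError("cycle detected: not a valid prerequisite network")
    return order
```

Courses with indegree zero are the "information sources" of the article; high-degree nodes play the hub and bridge roles it describes.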
Learned image compression (LIC) techniques have achieved remarkable progress; however, effectively integrating high-level semantic information remains challenging. In this work, we present a Semantic-Enhanced Learned Image Compression framework, termed SELIC, which leverages high-level textual guidance to improve rate-distortion performance. Specifically, SELIC employs a text encoder to extract rich semantic descriptions from the input image. These textual features are transformed into fixed-dimension tensors and seamlessly fused with the image-derived latent representation. By embedding the semantic tensor directly into the compression pipeline, our approach enriches the bitstream without requiring additional inputs at the decoder, thereby maintaining fast and efficient decoding. Extensive experiments on benchmark datasets (e.g., Kodak) demonstrate that integrating semantic information substantially enhances compression quality. Our SELIC-guided method outperforms a baseline LIC model without semantic integration by approximately 0.1-0.15 dB in PSNR across a wide range of bit rates, and achieves a 4.9% BD-rate improvement over VVC. Moreover, this improvement comes with minimal computational overhead, making the proposed SELIC framework a practical solution for advanced image compression applications.
arxiv:2504.01279
To handle AI tasks that combine perception and logical reasoning, recent work introduces neurosymbolic deep neural networks (NS-DNNs), which contain, in addition to traditional neural layers, symbolic layers: symbolic expressions (e.g., SAT formulas, logic programs) that are evaluated by symbolic solvers during inference. We identify and formalize an intuitive, high-level principle that can guide the design and analysis of NS-DNNs: symbol correctness, the correctness of the intermediate symbols predicted by the neural layers with respect to a (generally unknown) ground-truth symbolic representation of the input data. We demonstrate that symbol correctness is a necessary property for NS-DNN explainability and transfer learning (despite being in general impossible to train for). Moreover, we show that the framework of symbol correctness provides a precise way to reason and communicate about model behavior at neural-symbolic boundaries, and gives insight into the fundamental tradeoffs faced by NS-DNN training algorithms. In doing so, we both identify significant points of ambiguity in prior work and provide a framework to support further NS-DNN developments.
arxiv:2402.03663
Black and white holes play remarkably contrasting roles in general relativity versus observational astrophysics. While there is overwhelming observational evidence for the existence of compact objects that are "cold, dark, and heavy", which thereby are natural candidates for black holes, the theoretically viable time-reversed variants, the "white holes", have nowhere near the same level of observational support. Herein we shall explore the possibility that the connection between black and white holes is much more intimate than commonly appreciated. We shall first construct "horizon penetrating" coordinate systems that differ from the standard curvature coordinates only in a small near-horizon region, thereby emphasizing that ultimately the distinction between black and white horizons depends only on near-horizon physics. We shall then construct an explicit model for a "black-to-white transition" where all of the nontrivial physics is confined to a compact region of spacetime: a finite-duration, finite-thickness (in principle arbitrarily small) region straddling the naive horizon. Moreover, we shall show that it is possible to arrange the "black-to-white transition" to have zero action, so that it will not be subject to destructive interference in the Feynman path integral. This then raises the very intriguing possibility that astrophysical black holes might be interpretable in terms of a quantum superposition of black and white horizons.
arxiv:2304.10692
We find an explicit combinatorial interpretation of the coefficients of Kerov character polynomials, which express the value of normalized irreducible characters of the symmetric groups S(n) in terms of the free cumulants R_2, R_3, ... of the corresponding Young diagram. Our interpretation is based on counting certain factorizations of a given permutation.
arxiv:0810.3209
In this paper, we consider a secondary wireless powered communication network (WPCN) underlaid to a primary point-to-point communication link. The WPCN consists of a multi-antenna hybrid access point (HAP) that transfers wireless energy to a cluster of low-power wireless devices (WDs) and receives sensing data from them. To tackle the inherent severe user unfairness problem in WPCN, we consider a cluster-based cooperation where a WD acts as the cluster head that relays the information of the other WDs. Besides, we apply an energy beamforming technique to balance the dissimilar energy consumptions of the WDs to further improve fairness. However, the use of energy beamforming and cluster-based cooperation may introduce more severe interference to the primary system than when the WDs transmit independently. To guarantee the performance of the primary system, we consider an interference-temperature constraint on the primary system and derive the throughput performance of each WD under the peak interference-temperature constraint. To achieve maximum throughput fairness, we jointly optimize the energy beamforming design, the transmit time allocation among the HAP and the WDs, and the transmit power allocation of each WD to maximize the minimum data rate achievable among the WDs (the max-min throughput). We show that the non-convex joint optimization problem can be transformed into a convex one and then be efficiently solved using off-the-shelf convex algorithms. Moreover, we simulate under practical network setups and show that the proposed method can effectively improve the throughput fairness of the secondary WPCN while guaranteeing the communication quality of the primary network.
arxiv:1904.09863
Molecular dynamics triggered by interaction with light often involve the excitation of several electronic, vibrational, and rotational states. Characterizing the resulting coupled electronic and nuclear wave-packet motion represents a severe challenge, even for small polyatomic systems. In this Letter, we demonstrate how the interplay between vibrational, rotational, and electronic degrees of freedom governs the evolution of molecular wave packets in the low-lying states of strong-field-ionized sulfur dioxide. Using time-resolved Coulomb explosion imaging (CEI) in combination with quantum mechanical wave-packet simulations, we directly map bending vibrations of the molecule, show how the vibrational wave packet is influenced by molecular alignment, and elucidate the role of the coupling between the two lowest electronic states of the cation. A conical intersection between these states couples the bending and asymmetric stretching coordinates, which is clearly reflected in the correlated fragment momenta. Our results suggest that multi-coincident CEI represents an efficient experimental tool for characterizing coupled electronic and nuclear motion in polyatomic molecules.
arxiv:2408.07958
We investigate the absorption and emission features in observations of GX 301-2 detected with Insight-HXMT/LE in 2017-2019. At different orbital phases, we found prominent Fe Kα, Kβ and Ni Kα lines, as well as Compton shoulders and Fe K-shell absorption edges. These features are due to X-ray reprocessing caused by the interaction between the radiation from the source and the surrounding accretion material. According to the ratio of the iron Kα and Kβ lines, we infer that the accretion material is in a low ionisation state. We find an orbital-dependent local absorption column density, which has a large value and strong variability around periastron. We explain this variability as a result of inhomogeneities in the accretion environment and/or instabilities of the accretion processes. In addition, the variable local column density is correlated with the equivalent width of the iron Kα lines throughout the orbit, which suggests that the accretion material near the neutron star is spherically distributed.
arxiv:2012.02556
The coexistence of ferromagnetism and metallic conduction in doped manganites has long been explained by a double-exchange model in which the ferromagnetic exchange arises from carrier hopping. We evaluate the zero-temperature spin stiffness D(0) and the Curie temperature T_c on the basis of the double-exchange model using the measured values of the bare bandwidth W and the Hund's-rule coupling J_H. The calculated D(0) and T_c values are too small compared with the observed ones, even in the absence of interactions. A realistic onsite interorbital Coulomb repulsion can reduce D(0) substantially in the case of a two-orbital model. Furthermore, experiment shows that D(0) is simply proportional to x in the La_{1-x}Sr_xMnO_3 system, independent of whether the ground state is a ferromagnetic insulator or metal. These results strongly suggest that the ferromagnetism in manganites does not originate from the double-exchange interaction. On the other hand, an alternative model based on the d-p exchange can semi-quantitatively explain the ferromagnetism of doped manganites at low temperatures.
arxiv:cond-mat/0001390
The understanding and modeling of complex physical phenomena through dynamical systems has historically driven scientific progress, as it provides the tools for predicting the behavior of different systems under diverse conditions through time. The discovery of dynamical systems has been indispensable in engineering, as it allows for the analysis and prediction of complex behaviors for computational modeling, diagnostics, prognostics, and control of engineered systems. Joining recent efforts that harness the power of symbolic regression in this domain, we propose a novel framework for the end-to-end discovery of ordinary differential equations (ODEs), termed the Grammar-based ODE Discovery Engine (GODE). The proposed methodology combines formal grammars with dimensionality reduction and stochastic search to efficiently navigate high-dimensional combinatorial spaces. Grammars allow us to seed domain knowledge and structure, both constraining and exploring the space of candidate expressions. GODE proves to be more sample- and parameter-efficient than state-of-the-art transformer-based models, and discovers more accurate and parsimonious ODE expressions than both genetic-programming-based and other grammar-based methods on more complex inference tasks, such as the discovery of structural dynamics. We thus introduce a tool that could play a catalytic role in dynamics discovery tasks, including modeling, system identification, and monitoring.
arxiv:2504.02630
Molecular adsorption at the surface of a 2D material poses numerous questions regarding the modification of the band structure and interfacial states, which of course deserve full attention. In line with this, first-principles density functional theory is employed on the graphene/ammonia system. We identify the effects on the band structure due to strain, charge transfer and the presence of molecular orbitals (MOs) of NH3 for six adsorption configurations. The strain induced upon ammonia adsorption opens the bandgap (E_g) of graphene due to the breaking of translational symmetry and shifts the equilibrium Fermi energy (E_F). The E_g and E_F values and the charge density distribution depend on the adsorption configuration, where the MO structure of NH3 plays a crucial role. The presence of N- or H-originated MOs pushes the unoccupied states of graphene towards E_F. NH3 forms an interfacial occupied state, originating from N 2p, within 1.6-2.2 eV below E_F for all configurations. These findings enhance the fundamental understanding of the graphene/NH3 system.
arxiv:2003.11800
We investigate a method for reconstructing f(R) gravity models from both background evolution observations and large-scale structure measurements. In the absence of first principles, one must rely on observations to build $f(R)$ gravity models. We show that general $f(R)$ models can be specified if large-scale structure formation observations at the 5\% accuracy level become available in the near future.
arxiv:1710.11581
Optimization for robot control tasks spans various methodologies, including model predictive control (MPC). However, system complexity, such as non-convex and non-differentiable cost functions and prolonged planning horizons, often drastically increases the computation time, limiting MPC's real-world applicability. Prior work on speeding up the optimization is limited in its ability to optimize MPC running time directly and to generalize to held-out domains. To overcome this challenge, we develop a novel framework aimed at expediting the optimization process directly. In our framework, we combine offline self-supervised learning and online fine-tuning to improve control performance and reduce optimization time. We demonstrate the success of our method on a novel and challenging Formula 1 track driving task. Compared to single-phase training, our approach achieves a 19.4\% reduction in optimization time and a 6.3\% improvement in tracking accuracy on zero-shot tracks.
arxiv:2408.03394
We study the generation of wakefields by means of the high-energy radiation of pulsars. The problem is considered in the framework of a one-dimensional approach. We linearize the set of governing equations, consisting of the momentum equation, continuity equation and Poisson equation, and show that a wavelike structure will inevitably arise relatively close to the pulsar.
arxiv:1605.00905
I further study the manner by which a pair of opposite jets shaped the keyhole morphological structure of the core-collapse supernova (CCSN) SN 1987A, now the CCSN remnant (CCSNR) 1987A. In doing so, I strengthen the claim that the jittering-jet explosion mechanism (JJEM) accounts for most, likely all, CCSNe. The keyhole structure comprises a northern low-intensity zone closed with a bright rim on its front and an elongated low-intensity nozzle in the south. This rim-nozzle asymmetry is observed in some cooling-flow clusters and planetary nebulae that are observed to be shaped by jets. I build a toy model that uses the planar jittering-jets pattern, where consecutive pairs of jets tend to jitter in a common plane, implying that the gas accreted onto the newly born neutron star at the late explosion phase flows perpendicular to that plane. This allows for a long-lived jet-launching episode, which launches more mass into the jets that can inflate larger pairs of ears or bubbles, forming the main jets' axis of the CCSNR, an axis not necessarily related to any pre-collapse core rotation. I discuss the relation of the main jets' axis to the neutron star's natal kick velocity.
arxiv:2404.07455
We present the theory and numerical evaluations of the backscattering rate determined by acceptor holes or Mn spins in HgTe and (Hg,Mn)Te quantum wells in the quantum spin Hall regime. The roles of anisotropic s-p and sp-d exchange interactions, Kondo coupling, Luttinger-liquid effects, precessional dephasing, and bound magnetic polarons are quantified. The determined magnitude and temperature dependence of the conductance are in accord with experimental results for HgTe and (Hg,Mn)Te quantum wells.
arxiv:2209.03283
High-throughput methods for yielding the set of connections in a neural system, the connectome, are now being developed. This tutorial describes ways to analyze the topological and spatial organization of the connectome at the macroscopic level of connectivity between brain regions as well as the microscopic level of connectivity between neurons. We describe topological features at three different levels: the local scale of individual nodes, the regional scale of sets of nodes, and the global scale of the complete set of nodes in a network. Such features can be used to characterize components of a network and to compare different networks, e.g. the connectomes of patients and control subjects in clinical studies. At the global scale, different types of networks can be distinguished, and we describe Erdős-Rényi random, scale-free, small-world, modular, and hierarchical archetypes of networks. Finally, the connectome also has a spatial organization, and we describe methods for analyzing the wiring lengths of neural systems. As an introduction for new researchers in the field of connectome analysis, we discuss the benefits and limitations of each analysis approach.
arxiv:1105.4705
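The local- and global-scale measures mentioned in the tutorial abstract above (degree, clustering coefficient, shortest path length) can be sketched on a toy network with plain BFS. The five-node graph below is invented for the demo.

```python
from collections import deque

adj = {  # undirected toy network of five "regions"
    'A': {'B', 'C'}, 'B': {'A', 'C', 'D'},
    'C': {'A', 'B'}, 'D': {'B', 'E'}, 'E': {'D'},
}

def degree(v):
    """Local scale: number of links attached to a node."""
    return len(adj[v])

def clustering(v):
    """Local scale: fraction of neighbour pairs that are themselves linked."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
    return links / (k * (k - 1) / 2)

def shortest_path_len(s, t):
    """Global scale: hop distance between two nodes, via BFS."""
    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        if v == t:
            return dist[v]
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return None  # disconnected

print(degree('B'), round(clustering('B'), 3), shortest_path_len('A', 'E'))
# 3 0.333 3
```

Averaging the clustering coefficient and path length over all nodes gives the global quantities used to distinguish small-world networks from random ones.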
Jacobi diffusion is a representative diffusion process whose solution is bounded in a domain under certain conditions on its drift and diffusion coefficients. However, the process without such conditions has been far less investigated. We explore a Jacobi diffusion whose drift coefficient is modulated by another process, which causes the process to hit the boundary of the domain in finite time. The Kolmogorov equation (a degenerate elliptic partial differential equation) for evaluating the boundary hitting of the proposed Jacobi diffusion is then presented and mathematically analyzed. We also investigate a related mean field game arising in tourism management, where the drift depends on the index for sensor boundary hitting, thereby confining the process to the domain with a higher probability. In this case, the Kolmogorov equation becomes nonlinear. We propose a finite difference method applicable to both the linear and nonlinear Kolmogorov equations that yields a unique numerical solution owing to discrete ellipticity. The accuracy of the finite difference method depends critically on the regularity of the boundary condition, and the use of a high-order discretization method is not always effective. Finally, we computationally investigate the mean field effect.
arxiv:2501.02729
A new analysis of heavy, neutral MSSM Higgs boson H and A production at the photon collider is presented for M_A = 200, 250, 300 and 350 GeV in the parameter range corresponding to the so-called "LHC wedge" and beyond. The expected precision of the cross-section measurement for the process gamma gamma -> A, H -> b bbar and the "discovery reach" of the photon collider are compared for different MSSM scenarios. The analysis takes into account all relevant theoretical and experimental issues which could affect the measurement. For the MSSM Higgs bosons A and H, with M_A = 200-350 GeV and tan(beta) = 7, the statistical precision of the cross-section determination is estimated to be 8-34%, after one year of photon collider running, for the four considered MSSM parameter sets. As heavy neutral Higgs bosons in this scenario may not be discovered at the LHC or at the first stage of the e+e- collider, the opportunity for the photon collider to serve as a discovery machine is also studied.
arxiv:hep-ph/0507006
I present a parametric, bijective transformation to generate heavy-tail versions Y of arbitrary RVs X ~ F. The tail behavior of the so-called "heavy-tail Lambert W x F" RV Y depends on a tail parameter delta >= 0: for delta = 0, Y = X; for delta > 0, Y has heavier tails than X. For X Gaussian, this meta-family of heavy-tailed distributions reduces to Tukey's h distribution. Lambert's W function provides an explicit inverse transformation, which can be estimated by maximum likelihood. This inverse can remove heavy tails from data, and also provides analytical expressions for the cumulative distribution function (CDF) and probability density function (PDF). As a special case, these yield explicit formulas for Tukey's h PDF and CDF, to the author's knowledge for the first time in the literature. Simulations and applications to S&P 500 log-returns and solar flare data demonstrate the usefulness of the introduced methodology. The R package "LambertW" (cran.r-project.org/web/packages/LambertW), implementing the presented methodology, is publicly available on CRAN.
arxiv:1010.2265
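In the Gaussian/Tukey's-h special case, the transform above takes the concrete form y = u * exp(delta/2 * u^2) for a standardized input u, and Lambert's W function inverts it. The sketch below implements that forward/inverse pair, with W computed by a plain Newton iteration; it is an illustration, not the LambertW package's actual implementation.

```python
import math

def lambert_w(z, tol=1e-12):
    """Principal branch W(z) for z >= 0, via Newton's method on w*e^w = z."""
    w = math.log1p(z)  # reasonable starting point for z >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

def heavy_tail(u, delta):
    """Forward transform: inflate the tails of u by delta >= 0."""
    return u * math.exp(0.5 * delta * u * u)

def inverse(y, delta):
    """Remove the heavy tails: recover u from y (bijective for delta >= 0)."""
    if delta == 0 or y == 0:
        return y
    return math.copysign(math.sqrt(lambert_w(delta * y * y) / delta), y)

y = heavy_tail(2.0, 0.2)
print(round(y, 6), round(inverse(y, 0.2), 6))  # 2.983649 2.0
```

Note the identity behind the inverse: y^2 = u^2 exp(delta u^2), so delta*y^2 = (delta u^2) e^(delta u^2) and W(delta*y^2) = delta*u^2 exactly, which is why the round trip is numerically clean.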
In a recent paper, we proposed an unsupervised algorithm for audio signal segmentation entirely based on Bayesian methods. In its first implementation, however, the method showed poor computational performance. In this paper we address this question by describing a fast parallel implementation using the Cython library for Python; we use open GSL methods for standard mathematical functions, and the OpenMP framework for parallelization. We also offer a detailed analysis of the sensitivity of the algorithm to its different parameters, and show its application to real-life subaquatic signals obtained off the Brazilian south coast. Our code and data are freely available on GitHub.
arxiv:1803.01801
We use observed transmission line outage data to construct a Markov influence graph that describes the probabilities of transitions between generations of cascading line outages, where each generation of a cascade consists of a single line outage or multiple line outages. The new influence graph defines a Markov chain and generalizes previous influence graphs by including multiple line outages as Markov chain states. The generalized influence graph can reproduce the distribution of cascade size in the utility data; in particular, it can estimate the probabilities of small, medium and large cascades. The influence graph has the key advantage of allowing the effect of mitigations to be analyzed and readily tested, which is not available from the observed data. We exploit the asymptotic properties of the Markov chain to find the lines most involved in large cascades and show how upgrades to these critical lines can reduce the probability of large cascades.
arxiv:1902.00686
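A minimal version of such an influence graph can be sketched with made-up transition probabilities between generation states, here simply the number of outages in a generation, with 0 as the absorbing "cascade stopped" state. The paper estimates these probabilities from observed utility data and uses richer states (which lines are out), so everything below is illustrative.

```python
import random

P = {  # P[i][j]: probability the next generation has j outages given i now
    1: {0: 0.70, 1: 0.20, 2: 0.10},
    2: {0: 0.50, 1: 0.30, 2: 0.20},
}

def simulate_cascade(rng):
    """Total number of line outages in one cascade seeded by one outage."""
    state, total = 1, 1
    while state != 0:
        r, cum = rng.random(), 0.0
        for nxt, p in sorted(P[state].items()):
            cum += p
            if r < cum:
                state = nxt
                break
        total += state
    return total

rng = random.Random(42)
sizes = [simulate_cascade(rng) for _ in range(20000)]
p_large = sum(s >= 5 for s in sizes) / len(sizes)
print(min(sizes))  # 1: with probability 0.7 the cascade stops immediately
```

With the transition matrix in hand, a mitigation (e.g. hardening a critical line) is modelled by editing P and re-estimating the cascade-size distribution, which is exactly the kind of what-if analysis the observed data alone cannot provide.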
The classical diagrams of radio loudness and jet power as a function of the mass and accretion rate of the central spacetime singularity in active galactic nuclei are reanalyzed by including the data of the recently discovered powerful relativistic jets in narrow-line Seyfert 1 galaxies. The results are studied in the light of the known theories of relativistic jets, indicating that while the Blandford-Znajek mechanism is sufficient to explain the power radiated by BL Lac objects, it fails to completely account for the power from quasars and narrow-line Seyfert 1 galaxies. This favors the scenario outlined by Cavaliere & D'Elia of a composite jet, with a magnetospheric core plus a hydromagnetic component emerging as the accretion power increases and the disc becomes radiation-pressure dominated. A comparison with Galactic compact objects is also performed, finding some striking similarities and indicating that, as neutron stars are the low-mass jet-system analogues of black holes, the narrow-line Seyfert 1 galaxies are the low-mass counterparts of the blazars.
arxiv:1106.5532
We obtain Gaussian upper and lower bounds on the transition density q_t(x,y) of the continuous-time simple random walk on a supercritical percolation cluster C_\infty in the Euclidean lattice. The bounds, analogous to Aronson's bounds for uniformly elliptic divergence-form diffusions, hold with constants c_i depending only on p (the percolation probability) and d. The irregular nature of the medium means that the bound for q_t(x,\cdot) holds only for t \ge S_x(\omega), where the constant S_x(\omega) depends on the percolation configuration \omega.
arxiv:math/0302004
ISO 7200 specifies the data fields used in title blocks. It standardizes eight mandatory data fields:
- title (hence the name "title block")
- created by (name of drafter)
- approved by
- legal owner (name of company or organization)
- document type
- drawing number (same for every sheet of this document, unique for each technical document of the organization)
- sheet number and number of sheets (for example, "sheet 5/7")
- date of issue (when the drawing was made)
Traditional locations for the title block are the bottom right (most commonly) or the top right or center.
=== Revisions block ===
The revisions block (rev block) is a tabulated list of the revisions (versions) of the drawing, documenting the revision control. Traditional locations for the revisions block are the top right (most commonly) or adjoining the title block in some way.
=== Next assembly ===
The next assembly block, often also referred to as "where used" or sometimes "effectivity block", is a list of higher assemblies where the product on the current drawing is used. This block is commonly found adjacent to the title block.
=== Notes list ===
The notes list provides notes to the user of the drawing, conveying any information that the callouts within the field of the drawing did not. It may include general notes, flagnotes, or a mixture of both. Traditional locations for the notes list are anywhere along the edges of the field of the drawing.
==== General notes ====
General notes (G/N, GN) apply generally to the contents of the drawing, as opposed to applying only to certain part numbers or certain surfaces or features.
==== Flagnotes ====
Flagnotes or flag notes (FL, F/N) are notes that apply only where a flagged callout points, such as to particular surfaces, features, or part numbers. Typically the callout includes a flag icon. Some companies call such notes "delta notes", and the note number is enclosed inside a triangular symbol (similar to the capital letter delta, Δ). "FL5" (flagnote 5) and "D5" (delta note 5) are typical ways to abbreviate in ASCII-only contexts.
=== Field of the drawing ===
The field of the drawing (F/D, FD) is the main body or main area of the drawing, excluding the title block, rev block, p/
https://en.wikipedia.org/wiki/Engineering_drawing
Beam energy scan programs in heavy-ion collisions aim to explore the QCD phase structure at high baryon density. Sensitive observables are applied to probe signatures of the QCD phase transition and critical point in heavy-ion collisions at RHIC and the SPS. Intriguing structures, such as dips, peaks and oscillations, have been observed in the energy dependence of various observables. In this paper, an overview is given and the corresponding physics implications are discussed for the experimental highlights from the beam energy scan programs at the STAR, PHENIX and NA61/SHINE experiments. Furthermore, the Beam Energy Scan Phase II at RHIC (2019-2020) and other future experimental facilities for studying the physics at low energies are also discussed.
arxiv:1512.09215
Topic models are popular statistical tools for detecting latent semantic topics in a text corpus. They have been utilized in various applications across different fields. However, traditional topic models have some limitations, including insensitivity to user guidance, sensitivity to the amount and quality of data, and the inability to adapt learned topics from one corpus to another. To address these challenges, this paper proposes a neural topic model, TopicAdapt, that can adapt relevant topics from a related source corpus and also discover new topics in a target corpus that are absent from the source corpus. The proposed model offers a promising approach to improving topic modeling performance in practical scenarios. Experiments over multiple datasets from diverse domains show the superiority of the proposed model over state-of-the-art topic models.
arxiv:2310.04978
We discuss the correspondence point between a string state and a black hole in a pp-wave background, and find that the answer is considerably different from that in a flat spacetime background.
arxiv:hep-th/0205043
When analyzing temporal networks, a fundamental task is the identification of dense structures (i.e., groups of vertices that exhibit a large number of links), together with their temporal span (i.e., the period of time for which the high density holds). We tackle this task by introducing a notion of temporal core decomposition where each core is associated with its span: we call such cores span-cores. As the total number of time intervals is quadratic in the size of the temporal domain $T$ under analysis, the total number of span-cores is quadratic in $|T|$ as well. Our first contribution is an algorithm that, by exploiting containment properties among span-cores, computes all the span-cores efficiently. Then, we focus on the problem of finding only the maximal span-cores, i.e., span-cores that are not dominated by any other span-core in both the coreness property and the span. We devise a very efficient algorithm that exploits theoretical findings on the maximality condition to compute the maximal span-cores directly, without computing all span-cores. Experimentation on several real-world temporal networks confirms the efficiency and scalability of our methods. Applications to temporal networks gathered by a proximity-sensing infrastructure recording face-to-face interactions in schools highlight the relevance of the notion of (maximal) span-core in analyzing social dynamics and detecting/correcting anomalies in the data.
arxiv:1808.09376
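One building block of span-core computation can be sketched directly: fix a time interval, keep only the edges present at every timestamp of the interval, and run standard k-core peeling on that snapshot. The temporal edges below are invented, and the paper's algorithms avoid this brute-force per-interval recomputation by exploiting containment among span-cores.

```python
# temporal edges: (u, v, set of timestamps at which the edge is present)
temporal_edges = [
    ('a', 'b', {0, 1, 2}), ('b', 'c', {0, 1, 2}), ('a', 'c', {1, 2}),
    ('c', 'd', {0, 1, 2}), ('d', 'e', {2}),
]

def snapshot(interval):
    """Edges present at every timestamp of the query interval."""
    return [(u, v) for u, v, ts in temporal_edges if interval <= ts]

def coreness(edges):
    """Classic k-core peeling; returns vertex -> core number."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    core, k = {}, 0
    while adj:
        k = max(k, min(len(n) for n in adj.values()))
        peel = [v for v, n in adj.items() if len(n) <= k]
        for v in peel:
            core[v] = k
            for u in adj[v]:
                adj[u].discard(v)
            del adj[v]
    return core

print(coreness(snapshot({1, 2})))  # triangle a, b, c forms the 2-core
```

For the interval {1, 2}, the edge d-e drops out (it exists only at time 2), vertex d is peeled at k = 1, and the triangle a-b-c survives as the 2-core; the (2, [1, 2]) span-core of this toy network.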
Language workbenches offer language designers an expressive environment in which to create their DSLs. Similarly, research into mechanised meta-theory has shown how dependently typed languages provide expressive environments in which to formalise and study DSLs and their meta-theoretical properties. But can we claim that dependently typed languages qualify as language workbenches? We argue yes! We have developed an exemplar DSL called Velo that showcases not only dependently typed techniques to realise and manipulate IRs, but also that dependently typed languages make fine language workbenches. Velo is a simple verified language with well-typed holes and comes with a complete compiler pipeline: parser, elaborator, REPL, evaluator, and compiler passes. Specifically, we describe our design choices for a well-typed IR that includes support for well-typed holes, how CSE is achieved in a well-typed setting, and how the mechanised type-soundness proof for Velo is the source of the evaluator.
arxiv:2301.12852
We compute holographic entanglement entropy for subregions of a BCFT thermal state living on a nongravitating black hole background. The system we consider is doubly holographic and dual to an eternal black string with an embedded Karch-Randall brane that is parameterized by its angle. Entanglement islands are conventionally expected to emerge at late times to preserve unitarity at finite temperature, but recent calculations at zero temperature have shown that such islands do not exist when the brane lies below a critical angle. Working at finite temperature in the context of a black string, we find that islands exist even when the brane lies below the critical angle. We note that although these islands exist when they are needed to preserve unitarity, they are restricted to a finite connected region on the brane which we call the atoll. Depending on two parameters, the size of the subregion and the brane angle, the entanglement entropy either remains constant in time or follows a Page curve. We discuss this rich phase structure in the context of bulk reconstruction.
arxiv:2112.09132
Equilibration of N/Z in the binary breakup of an excited and transiently deformed projectile-like fragment (PLF*), produced in peripheral collisions of 64Zn + 27Al, 64Zn, 209Bi at E/A = 45 MeV, is examined. The composition of emitted light fragments (3 <= Z <= 6) changes with the decay angle of the PLF*. The most neutron-rich fragments observed are associated with a small rotation angle. A clear target dependence is observed, with the largest initial N/Z correlated with the heavy, neutron-rich target. Using the rotation angle as a clock, we deduce that N/Z equilibration persists for times as long as 3-4 zs (1 zs = 1 x 10^-21 s = 300 fm/c). The rate of N/Z equilibration is found to depend on the initial neutron gradient within the PLF*.
arxiv:1305.1320
This paper presents an alternative approach to p-values in regression settings. This approach, whose origins can be traced to machine learning, is based on the leave-one-out bootstrap for prediction error, called the out-of-bag (OOB) error in machine learning. To obtain the OOB error for a model, one draws a bootstrap sample and fits the model to the in-sample data. The out-of-sample prediction error for the model is obtained by calculating the prediction error for the model using the out-of-sample data. Repeating and averaging yields the OOB error, which represents a robust cross-validated estimate of the accuracy of the underlying model. By a simple modification to the bootstrap data involving "noising up" a variable, the OOB method yields a variable importance (VIMP) index, which directly measures how much a specific variable contributes to the prediction precision of a model. VIMP provides a scientifically interpretable measure of the effect size of a variable, which we call the "predictive effect size", that holds whether the researcher's model is correct or not, unlike the p-value, whose calculation is based on the assumed correctness of the model. We also discuss a marginal VIMP index, also easily calculated, which measures the marginal effect of a variable, or what we call "the discovery effect". The OOB procedure can be applied to both parametric and nonparametric regression models and requires only that the researcher can repeatedly fit their model to bootstrap and modified bootstrap data. We illustrate this approach on a survival data set involving patients with systolic heart failure and on a simulated survival data set where the model is incorrectly specified, to illustrate robustness to model misspecification.
arxiv:1701.04944
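the out-of-bag ( oob ) loop and the " noising up " step described above can be sketched in a few lines. this is a minimal illustration, not the paper's implementation: it assumes a plain least-squares linear model as the researcher's model and uses permutation of a column as the noising scheme; all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: y depends on x0 only; x1 is pure noise
n = 200
X = rng.normal(size=(n, 2))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)

def fit_predict(X_tr, y_tr, X_te):
    # least-squares linear model as a stand-in for "the researcher's model"
    coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ coef

def oob_error(X, y, n_boot=50, noise_col=None):
    errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap (in-sample) rows
        oob = np.setdiff1d(np.arange(n), idx)     # out-of-bag rows
        Xb = X[idx].copy()
        if noise_col is not None:                 # "noise up" one variable
            Xb[:, noise_col] = rng.permutation(Xb[:, noise_col])
        pred = fit_predict(Xb, y[idx], X[oob])
        errs.append(np.mean((y[oob] - pred) ** 2))
    return float(np.mean(errs))

base = oob_error(X, y)
vimp_x0 = oob_error(X, y, noise_col=0) - base   # predictive effect size of x0
vimp_x1 = oob_error(X, y, noise_col=1) - base   # near zero: x1 is noise
```

here vimp directly measures how much each variable contributes to prediction precision: noising the informative column inflates the oob error, while noising the irrelevant column barely changes it.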
layered van der waals two - dimensional ( 2d ) magnets are a cornerstone of ultrathin spintronic and magnonic devices. the recent discovery of a 2d multiferroic with strong magnetoelectric coupling in nii $ _ 2 $ offers a promising platform for the electrical control of spin - wave transport. in this work, using ab initio calculations, we investigate how the magnonic properties of monolayer nii $ _ 2 $ can be controlled using an external electric field. we show that the emergence of a ferroelectric polarization leads to an energy splitting in the magnon spectrum, thus establishing a way to detect the electric polarization experimentally. we also show the modulation of the magnon splitting and the energy position of the singularities in magnon dos by an electric field due to the strong magnetoelectric coupling. our results highlight the interplay between ferroelectricity and magnons in van der waals multiferroics and pave the way to design electrically tunable magnetic devices at the 2d limit.
arxiv:2407.21645
the exotic structures in the 2s _ { 1 / 2 } states of five pairs of mirror nuclei ^ { 17 } o - ^ { 17 } f, ^ { 26 } na - ^ { 26 } p, ^ { 27 } mg - ^ { 27 } p, ^ { 28 } al - ^ { 28 } p and ^ { 29 } si - ^ { 29 } p are investigated with the relativistic mean - field ( rmf ) theory and the single - particle model ( spm ) to explore the role of the coulomb effects on the proton halo formation. the present rmf calculations show that the exotic structure of the valence proton is more obvious than that of the valence neutron of its mirror nucleus, the difference of exotic size between each mirror nuclei becomes smaller with the increase of mass number a of the mirror nuclei and the ratios of the valence proton and valence neutron root - mean - square ( rms ) radius to the matter radius in each pair of mirror nuclei all decrease linearly with the increase of a. in order to interpret these results, we analyze two opposite effects of coulomb interaction on the exotic structure formation with spm and find that the contribution of the energy level shift is more important than that of the coulomb barrier for light nuclei. however, the hindrance of the coulomb barrier becomes more obvious with the increase of a. when a is larger than 34, coulomb effects on the exotic structure formation will almost become zero because its two effects counteract with each other.
arxiv:0708.0071
a recently developed spectral - element adaptive refinement incompressible magnetohydrodynamic ( mhd ) code [ rosenberg, fournier, fischer, pouquet, j. comp. phys. 215, 59 - 80 ( 2006 ) ] is applied to simulate the problem of mhd island coalescence instability ( mici ) in two dimensions. mici is a fundamental mhd process that can produce sharp current layers and subsequent reconnection and heating in a high - lundquist number plasma such as the solar corona [ ng and bhattacharjee, phys. plasmas, 5, 4028 ( 1998 ) ]. due to the formation of thin current layers, it is highly desirable to use adaptively or statically refined grids to resolve them, and to maintain accuracy at the same time. the output of the spectral - element static adaptive refinement simulations are compared with simulations using a finite difference method on the same refinement grids, and both methods are compared to pseudo - spectral simulations with uniform grids as baselines. it is shown that with the statically refined grids roughly scaling linearly with effective resolution, spectral element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.
arxiv:0711.3868
gravitational waves deliver information in exquisite detail about astrophysical phenomena, among them the collision of two black holes, a system completely invisible to the eyes of electromagnetic telescopes. models that predict gravitational wave signals from likely sources are crucial for the success of this endeavor. modeling binary black hole sources of gravitational radiation requires solving the einstein equations of general relativity using powerful computer hardware and sophisticated numerical algorithms. this proceeding presents the current state of our understanding of ground - based gravitational waves resulting from the merger of black holes and the implications of these sources for the advent of gravitational - wave astronomy.
arxiv:1503.02674
collaborative filtering models based on matrix factorization and learned similarities using artificial neural networks ( anns ) have gained significant attention in recent years. this is, in part, because anns have demonstrated good results in a wide variety of recommendation tasks. the introduction of anns within the recommendation ecosystem has been recently questioned, raising several comparisons in terms of efficiency and effectiveness. one aspect most of these comparisons have in common is their focus on accuracy, neglecting other evaluation dimensions important for the recommendation, such as novelty, diversity, or accounting for biases. we replicate experiments from three papers that compare neural collaborative filtering ( ncf ) and matrix factorization ( mf ), to extend the analysis to other evaluation dimensions. our contribution shows that the experiments are entirely reproducible, and we extend the study including other accuracy metrics and two statistical hypothesis tests. we investigated the diversity and novelty of the recommendations, showing that mf provides a better accuracy also on the long tail, although ncf provides a better item coverage and more diversified recommendations. we discuss the bias effect generated by the tested methods. they show a relatively small bias, but other recommendation baselines, with competitive accuracy performance, consistently show to be less affected by this issue. this is the first work, to the best of our knowledge, where several evaluation dimensions have been explored for an array of sota algorithms covering recent adaptations of anns and mf. hence, we show the potential these techniques may have on beyond - accuracy evaluation while analyzing the effect on reproducibility these complementary dimensions may spark. available at github. com / sisinflab / reenvisioning - the - comparison - between - neural - collaborative - filtering - and - matrix - factorization
arxiv:2107.13472
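the matrix factorization ( mf ) baseline compared against ncf above can be sketched compactly. this is an illustrative toy, not the replicated papers' setup: a rank - 2 synthetic rating matrix, plain sgd on squared error with l2 regularization, and held - out cells as the accuracy check.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy rating matrix with an exact rank-2 structure
n_users, n_items, k = 30, 20, 2
R = rng.normal(size=(n_users, k)) @ rng.normal(size=(n_items, k)).T
mask = rng.random((n_users, n_items)) < 0.8     # observed cells

# plain MF trained by SGD on squared error with L2 regularization
P = 0.1 * rng.normal(size=(n_users, k))
Q = 0.1 * rng.normal(size=(n_items, k))
lr, reg = 0.05, 0.01
obs = np.argwhere(mask)
for epoch in range(100):
    rng.shuffle(obs)
    for u, i in obs:
        p = P[u].copy()
        err = R[u, i] - p @ Q[i]                # prediction error on one cell
        P[u] += lr * (err * Q[i] - reg * p)
        Q[i] += lr * (err * p - reg * Q[i])

# accuracy on held-out (unobserved) cells
rmse = float(np.sqrt(np.mean((R[~mask] - (P @ Q.T)[~mask]) ** 2)))
```

beyond - accuracy dimensions such as novelty, diversity, and item coverage, which the paper emphasizes, would then be computed from the ranked lists induced by p @ q.t rather than from rmse alone.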
we show that the qcd van der waals attractive potential is strong enough to bind a $ \ phi ^ { 0 } $ meson onto a nucleon inside a nucleus to form a bound state. the direct experimental signature for such an exotic state is proposed in the case of subthreshold $ \ phi ^ { 0 } $ meson photoproduction from nuclear targets. the production rate is estimated and such an experiment is found to be feasible at the jefferson laboratory.
arxiv:nucl-th/0010042
we derive stringent constraints on the persistent source associated with frb 121102 : size $ 10 ^ { 17 } $ cm $ < r < 10 ^ { 18 } $ cm, age $ < 300 $ yr, characteristic electron energy $ \ varepsilon _ e \ sim0. 3 $ gev, total energy $ \ sim10 ^ { 49 } $ erg. the hot radiating plasma is confined by a cold plasma of mass $ m _ c < 0. 03 ( r / 10 ^ { 17. 5 } { \ rm cm } ) ^ 4 m _ \ odot $. the source is nearly resolved, and may be resolved by 10 ghz observations. the fact that $ \ varepsilon _ e \ sim m _ p c ^ 2 $ suggests that the hot plasma was created by the ejection of a mildly relativistic, $ m \ sim10 ^ { - 5 } m _ \ odot $ shell, which propagated into an extended ambient medium or collided with a pre - ejected shell of mass $ m _ c $. the inferred plasma properties are inconsistent with typical " magnetar wind nebulae " model predictions. we suggest a physical mechanism for the generation of frbs ( independent of the persistent source model ) : ejection from an underlying compact object, $ r _ s \ sim10 ^ { 6 } $ cm, of highly relativistic shells, with energy $ e _ s = 10 ^ { 41 } $ erg and lorentz factor $ \ gamma _ s $ ~ $ 10 ^ 3 $, into a surrounding e - p plasma with density $ n \ sim0. 1 / cm ^ 3 $ ( consistent with that inferred for the plasma producing the persistent emission associated with frb 121102 ). such shell ejections with energy typical for frb events lead to plasma conditions appropriate for strong synchrotron maser emission at the ghz range, $ \ nu _ { coh. } \ sim0. 5 ( e / 10 ^ { 41 } erg ) ^ { 1 / 4 } $ ghz. in this model, a significant fraction of the deposited energy is converted to an frb with duration $ r _ s / c $, accompanied by ~ 10 mev photons carrying less energy than the frb. the inferred energy and mass associated with the source are low compared to those of typical supernova ejecta. this may suggest some type of a " weak stellar
arxiv:1703.06723
we consider a compressive hyperspectral imaging reconstruction problem, where three - dimensional spatio - spectral information about a scene is sensed by a coded aperture snapshot spectral imager ( cassi ). the cassi imaging process can be modeled as suppressing three - dimensional coded and shifted voxels and projecting these onto a two - dimensional plane, such that the number of acquired measurements is greatly reduced. on the other hand, because the measurements are highly compressive, the reconstruction process becomes challenging. we previously proposed a compressive imaging reconstruction algorithm that is applied to two - dimensional images based on the approximate message passing ( amp ) framework. amp is an iterative algorithm that can be used in signal and image reconstruction by performing denoising at each iteration. we employed an adaptive wiener filter as the image denoiser, and called our algorithm " amp - wiener. " in this paper, we extend amp - wiener to three - dimensional hyperspectral image reconstruction, and call it " amp - 3d - wiener. " applying the amp framework to the cassi system is challenging, because the matrix that models the cassi system is highly sparse, and such a matrix is not suitable to amp and makes it difficult for amp to converge. therefore, we modify the adaptive wiener filter and employ a technique called damping to solve for the divergence issue of amp. our approach is applied in nature, and the numerical experiments show that amp - 3d - wiener outperforms existing widely - used algorithms such as gradient projection for sparse reconstruction ( gpsr ) and two - step iterative shrinkage / thresholding ( twist ) given a similar amount of runtime. moreover, in contrast to gpsr and twist, amp - 3d - wiener need not tune any parameters, which simplifies the reconstruction process.
arxiv:1507.01248
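the damping technique mentioned above has a simple generic form: instead of jumping to the new iterate, move only part of the way, x <- d * f ( x ) + ( 1 - d ) * x with 0 < d < 1. the following schematic shows it stabilizing a diverging fixed - point iteration on a tiny linear system; it is not the cassi / amp - 3d - wiener system itself, just the mechanism.

```python
import numpy as np

# toy fixed-point iteration x <- x + step*(b - A x) for solving A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_exact = np.linalg.solve(A, b)

def iterate(damping, step=0.6, iters=200):
    x = np.zeros(2)
    for _ in range(iters):
        x_new = x + step * (b - A @ x)
        # damping: move only part of the way toward the new estimate
        x = damping * x_new + (1 - damping) * x
    return x

# undamped iteration diverges here: step * lambda_max(A) > 2
err_plain = np.linalg.norm(iterate(damping=1.0) - x_exact)
# damping halves the effective step and rescues convergence
err_damped = np.linalg.norm(iterate(damping=0.5) - x_exact)
```

in amp the same partial - update idea is applied to the message - passing iterates when the measurement matrix ( as with the sparse cassi matrix ) violates the assumptions under which amp is guaranteed to converge.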
we study the c - axis transport of stacked, intrinsic junctions in bi2sr2cacu2o8 + y single crystals, fabricated by the double - sided ion beam processing technique from single crystal whiskers. measurements of the i - v characteristics of these samples allow us to obtain the temperature and voltage dependence of the quasiparticle c - axis conductivity in the superconducting state, the josephson critical current, and the superconducting gap. we show that the bcs d - wave model in the clean limit for resonant impurity scattering with a significant contribution from coherent interlayer tunneling, describes satisfactorily the low temperature and low energy c - axis transport of both quasiparticles and cooper pairs.
arxiv:cond-mat/9903256
in this paper we discuss an in - progress work on the development of a speech corpus for four low - resource indo - aryan languages - - awadhi, bhojpuri, braj and magahi using the field methods of linguistic data collection. the total size of the corpus currently stands at approximately 18 hours ( approx. 4 - 5 hours each language ) and it is transcribed and annotated with grammatical information such as part - of - speech tags, morphological features and universal dependency relationships. we discuss our methodology for data collection in these languages, most of which was done in the middle of the covid - 19 pandemic, with one of the aims being to generate some additional income for low - income groups speaking these languages. in the paper, we also discuss the results of the baseline experiments for automatic speech recognition system in these languages.
arxiv:2206.12931
considerable interest is now focused on the detection of terrestrial mass planets around m dwarfs, and radial velocity surveys with high - resolution spectrographs in the near infrared are expected to be able to discover such planets. we explore the possibility of using commercially available molecular absorption gas cells as a wavelength reference standard for high - resolution fiber - fed spectrographs in the near - infrared. we consider the relative merits and disadvantages of using such cells compared to thorium - argon emission lamps and conclude that in the astronomical h band they are a viable method of simultaneous calibration, yielding an acceptable wavelength calibration error for most applications. four well - characterized and commercially available standard gas cells of hcn, 12c $ _ 2 $ h $ _ 2 $, 12co, and 13co can together span over 120nm of the h band, making them suitable for use in astronomical spectrographs. the use of isotopologues of these molecules can increase line densities and wavelength coverage, extending their application to different wavelength regions.
arxiv:0810.5342
the task of " unlearning " certain concepts in large language models ( llms ) has attracted immense attention recently, due to its importance in mitigating undesirable model behaviours, such as the generation of harmful, private, or incorrect information. current protocols to evaluate unlearning methods largely rely on behavioral tests, without monitoring the presence of unlearned knowledge within the model ' s parameters. this residual knowledge can be adversarially exploited to recover the erased information post - unlearning. we argue that unlearning should also be evaluated internally, by considering changes in the parametric knowledge traces of the unlearned concepts. to this end, we propose a general evaluation methodology that leverages vocabulary projections to inspect concepts encoded in model parameters. we use this approach to localize " concept vectors " - parameter vectors that encode concrete concepts - and construct conceptvectors, a benchmark dataset containing hundreds of common concepts and their parametric knowledge traces within two open - source llms. evaluation on conceptvectors shows that existing unlearning methods minimally impact concept vectors and mostly suppress them during inference, while directly ablating these vectors demonstrably removes the associated knowledge and significantly reduces the model ' s susceptibility to adversarial manipulation. our results highlight limitations in behavioral - based unlearning evaluations and call for future work to include parameter - based evaluations. to support this, we release our code and benchmark at https : / / github. com / yihuaihong / conceptvectors.
arxiv:2406.11614
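the vocabulary - projection idea used above to localize concept vectors can be sketched as a logit - lens - style readout: multiply a parameter vector by the unembedding matrix and inspect the top - scoring tokens. everything below is synthetic ( a random toy unembedding and a hand - built " concept vector " ), not the two open - source llms from the benchmark.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy "unembedding" matrix: one row (direction) per vocabulary token
vocab = ["paris", "france", "eiffel", "banana", "tensor", "opera"]
d_model = 256
W_U = rng.normal(size=(len(vocab), d_model))

# hypothetical "concept vector": a parameter vector aligned with the
# directions of a few semantically related tokens
concept = W_U[0] + W_U[1] + W_U[2]

def project_to_vocab(v, W_U, vocab, top=3):
    logits = W_U @ v                    # vocabulary projection
    order = np.argsort(logits)[::-1]    # tokens by descending score
    return [vocab[i] for i in order[:top]]

top_tokens = project_to_vocab(concept, W_U, vocab)
```

in the paper's terms, an unlearning method that merely suppresses behaviour would leave such a projection largely unchanged, whereas directly ablating the vector would remove the related tokens from the top of the readout.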
two atomic models of non - stoichiometric metal diborides m $ _ { 1 - x } $ b $ _ 2 $ are now assumed : ( i ) the presence of cation vacancies and ( ii ) the presence of ' super - stoichiometric ' boron which is placed in cation vacancy site. we have performed first principle total energy calculations using the vasp - paw method with the generalized gradient approximation ( gga ) for the exchange - correlation potential in a perspective to reveal the trends of m $ _ { 1 - x } $ b $ _ 2 $ possible stable atomic configurations depending on the type of m cations ( m = mg, al, zr or nb ) and the type of the defects ( metal vacancies versus metal vacancies occupied by ' super - stoichiometric ' boron in forms of single atoms, dimers b $ _ 2 $ or trimers b $ _ 3 $ ). besides we have estimated the stability of these non - stoichiometric states ( on the example of magnesium - boron system ) as depending on the possible synthetic routes, namely via solid state reaction method, as well as in reactions between solid boron and mg vapor ; and between these reagents in gaseous phase. we demonstrate that the non - stoichiometric states such as b $ _ 2 $ and b $ _ 3 $ placed in metal sites may be stabilized, while the occupation of vacancy sites by single boron atoms is the most unfavorable.
arxiv:0804.0894
we report the detection of co ( 6 - 5 ) and co ( 7 - 6 ) and their underlying continua from the host galaxy of quasar j100758. 264 + 211529. 207 ( p \ = oniu \ = a ' ena ) at z = 7. 5419, obtained with the northern extended millimeter array ( noema ). p \ = oniu \ = a ' ena belongs to the hyperluminous quasars at the epoch of reionization ( hyperion ) sample of 17 $ z > 6 $ quasars selected to be powered by supermassive black holes ( smbh ) which experienced the fastest mass growth in the first gyr of the universe. the one reported here is the highest - redshift measurement of the cold and dense molecular gas to date. the host galaxy is unresolved and the line luminosity implies a molecular reservoir of $ \ rm m ( h _ 2 ) = ( 2. 2 \ pm0. 2 ) \ times 10 ^ { 10 } $ $ \ rm m _ \ odot $, assuming a co spectral line energy distribution typical of high - redshift quasars and a conversion factor $ \ alpha = 0. 8 $ $ \ rm m _ { \ odot } ( k \, km \, s ^ { - 1 } \, pc ^ { 2 } ) ^ { - 1 } $. we model the cold dust spectral energy distribution ( sed ) to derive a dust mass of m $ _ { \ rm dust } = ( 2. 1 \ pm 0. 7 ) \ times 10 ^ 8 $ $ \ rm m _ \ odot $, and thus a gas to dust ratio $ \ sim100 $. both the gas and dust mass are not dissimilar from the reservoir found for luminous quasars at $ z \ sim6 $. we use the co detection to derive an estimate of the cosmic mass density of $ \ rm h _ 2 $, $ \ omega _ { h _ 2 } \ simeq 1. 31 \ times 10 ^ { - 5 } $. this value is in line with the general trend suggested by literature estimates at $ z < 7 $ and agrees fairly well with the latest theoretical expectations of non - equilibrium molecular - chemistry cosmological simulations of cold gas at early times.
arxiv:2304.09129
we investigate a limit theorem on traversable length inside a semi - cylinder in the 2 - dimensional supercritical bernoulli bond percolation, which gives an extension of theorem 2 in grimmett ( 1981 ). this type of limit theorem was originally studied for the extinction time of the 1 - dimensional contact process on a finite interval in wagner and anantharam ( 2005 ). in fact, our main result, theorem 2. 1, is stated under a rather general 2 - dimensional bond percolation setting.
arxiv:math/0610744
improving the scalability of gnns is critical for large graphs. existing methods leverage three sampling paradigms, including node - wise, layer - wise and subgraph sampling, then design unbiased estimators for scalability. however, high variance still severely hinders gnns ' performance. because previous studies either lack variance analysis or focus only on a particular sampling paradigm, we first propose a unified node sampling variance analysis framework and analyze the core challenge of " circular dependency " in deriving the minimum - variance sampler, i. e., the sampling probability depends on node embeddings, while node embeddings cannot be calculated until sampling is finished. existing studies either ignore the node embeddings or introduce external parameters, resulting in the lack of a variance reduction method that is both efficient and effective. therefore, we propose the \ textbf { h } ierarchical \ textbf { e } stimation based \ textbf { s } ampling gnn ( he - sgnn ), with the first level estimating the node embeddings in the sampling probability to break the circular dependency, and the second level employing a sampling gnn operator to estimate the nodes ' representations on the entire graph. considering the technical differences, we propose different first - level estimators, i. e., a time series simulation for layer - wise sampling and a feature - based simulation for subgraph sampling. the experimental results on seven representative datasets demonstrate the effectiveness and efficiency of our method.
arxiv:2211.09813
polycyclic aromatic hydrocarbons ( pahs ) are carbon - based molecules that are ubiquitous in a variety of astrophysical objects and environments. in this work, we use jwst / miri mrs spectroscopy of three seyferts to compare their nuclear pah emission with that of star - forming regions. this study represents the first of its kind using sub - arcsecond angular resolution data of local luminous seyferts ( lbol > 10 ^ 44. 46 erg / s ) on a wide wavelength coverage ( 4. 9 - 28. 1 micron ). we present an analysis of their nuclear pah properties by comparing the observed ratios with pah diagnostic model grids, derived from theoretical spectra. our results show that a suite of pah features is present in the innermost parts ( ~ 0. 45 arcsec at 12 micron ; in the inner ~ 142 - 245 pc ) of luminous seyfert galaxies. we find that the nuclear regions of agn lie at different positions of the pah diagnostic diagrams, whereas the sf regions are concentrated around the average values of sf galaxies. in particular, we find that the nuclear pah emission mainly originates in neutral pahs. in contrast, pah emission originating in the sf regions favours ionised pah grains. the observed pah ratios in the nuclear region of agn - dominated galaxy ngc 6552 indicate the presence of larger - sized pah molecules compared with those of the sf regions. therefore, our results provide evidence that the agn have a significant impact on the ionization state ( and probably the size ) of the pah grains on scales of ~ 142 - 245 pc.
arxiv:2208.11620
extremal compact hyperbolic surfaces contain a packing of discs of the largest possible radius permitted by the topology of the surface. it is well known that arithmetic conditions on the uniformizing group are necessary for the existence of a second extremal packing in the same surface, but constructing explicit examples of this phenomenon is a complicated task. we present a brute force computational procedure that can be used to produce examples in all cases.
arxiv:1905.13297
we prove a representation theorem for the choquet integral model. the preference relation is defined on a two - dimensional heterogeneous product set $ x = x _ 1 \ times x _ 2 $ where elements of $ x _ 1 $ and $ x _ 2 $ are not necessarily comparable with each other. however, making such comparisons in a meaningful way is necessary for the construction of the choquet integral ( and any rank - dependent model ). we construct the representation, study its uniqueness properties, and look at applications in multicriteria decision analysis, state - dependent utility theory, and social choice. previous axiomatizations of this model, developed for decision making under uncertainty, relied heavily on the notion of comonotonicity and that of a " constant act ". however, that requires $ x $ to have a special structure, namely, all factors of this set must be identical. our characterization does not assume commensurateness of criteria a priori, so defining comonotonicity becomes impossible.
arxiv:1507.04167
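the discrete choquet integral itself is directly computable: sort the criterion values, and weight each increment by the capacity of the set of criteria still "above" it. this sketch uses the standard textbook definition for nonnegative scores on a finite index set, not the paper's two - dimensional heterogeneous construction; the capacity values below are made up for illustration.

```python
# Choquet integral of nonnegative scores x w.r.t. a capacity mu
# (mu maps frozensets of criterion indices to [0,1], is monotone,
#  and satisfies mu(empty) = 0, mu(all) = 1)
def choquet(x, mu):
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])   # ascending values
    total, prev = 0.0, 0.0
    remaining = set(order)                         # criteria still "above"
    for i in order:
        total += (x[i] - prev) * mu[frozenset(remaining)]
        prev = x[i]
        remaining.discard(i)
    return total

# two criteria with a sub-additive (redundant) capacity
mu = {frozenset(): 0.0,
      frozenset({0}): 0.4,
      frozenset({1}): 0.4,
      frozenset({0, 1}): 1.0}

val = choquet([0.2, 0.8], mu)   # 0.2*1.0 + (0.8-0.2)*0.4 = 0.44
```

with an additive capacity the formula collapses to an ordinary weighted mean, which is a quick sanity check on any implementation.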
understanding the origin of unintentional doping in ga2o3 is key to increasing breakdown voltages of ga2o3 based power devices. therefore, transport and capacitance spectroscopy studies have been performed to better understand the origin of unintentional doping in ga2o3. previously unobserved unintentional donors in commercially available ( - 201 ) ga2o3 substrates have been electrically characterized via temperature dependent hall effect measurements up to 1000 k and found to have a donor energy of 110 mev. the existence of the unintentional donor is confirmed by temperature dependent admittance spectroscopy, with an activation energy of 131 mev determined via that technique, in agreement with hall effect measurements. with the concentration of this donor determined to be in the mid to high 10 ^ 16 cm ^ - 3 range, elimination of this donor from the drift layer of ga2o3 power electronics devices will be key to pushing the limits of device performance. indeed, analytical assessment of the specific on - resistance ( ronsp ) and breakdown voltage of schottky diodes containing the 110 mev donor indicates that incomplete ionization increases ronsp and decreases breakdown voltage as compared to ga2o3 schottky diodes containing only the shallow donor. the reduced performance due to incomplete ionization occurs in addition to the usual tradeoff between ronsp and breakdown voltage. to achieve 10 kv operation in ga2o3 schottky diode devices, analysis indicates that the concentration of 110 mev donors must be reduced below 5x10 ^ 14 cm ^ - 3 to limit the increase in ronsp to one percent.
arxiv:1706.09960
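the effect of incomplete ionization described above can be illustrated with the standard nondegenerate single - donor - level charge - neutrality relation, n = n _ d / ( 1 + g ( n / n _ c ) exp ( e _ d / kt ) ), solved by bisection. the degeneracy factor g = 2 and the effective conduction - band density of states n _ c = 3. 7e18 cm ^ - 3 are assumed values for illustration, not figures taken from the paper.

```python
import math

# fraction of donors ionized for a single donor level E_D (eV) below E_C,
# from the nondegenerate charge-neutrality relation
# n = N_D / (1 + g * (n / N_C) * exp(E_D / kT))
def ionized_fraction(N_D, E_D_eV, T=300.0, g=2.0, N_C=3.7e18):
    kT = 8.617e-5 * T                       # Boltzmann constant in eV/K
    def resid(n):
        return n - N_D / (1.0 + g * (n / N_C) * math.exp(E_D_eV / kT))
    lo, hi = 0.0, N_D                       # resid(0) < 0, resid(N_D) > 0
    for _ in range(200):                    # bisection on the monotone residual
        mid = 0.5 * (lo + hi)
        if resid(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return lo / N_D

# a 110 meV donor ionizes far less completely at high doping
f_low = ionized_fraction(5e14, 0.110)       # ~5e14 cm^-3: mostly ionized
f_high = ionized_fraction(5e17, 0.110)      # ~5e17 cm^-3: strongly frozen out
```

this captures why the abstract's target of keeping the 110 mev donor below ~5e14 cm ^ - 3 matters: at that level the deep donor is nearly fully ionized and contributes little to ronsp.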
we generalize several inequalities involving powers of the numerical radius for products of two operators acting on a hilbert space. for any $ a, b, x \ in \ mathbb { b } ( \ mathscr { h } ) $ such that $ a, b $ are positive, we establish some numerical radius inequalities for $ a ^ \ alpha xb ^ \ alpha $ and $ a ^ \ alpha x b ^ { 1 - \ alpha } \, \, ( 0 \ leq \ alpha \ leq 1 ) $ and heinz means under mild conditions.
arxiv:1409.0321
we report on the observation of spontaneously drifting coupled spin and quadrupolar density waves in the ground state of laser driven rubidium atoms. these laser - cooled atomic ensembles exhibit spontaneous magnetism via light mediated interactions when submitted to optical feedback by a retro - reflecting mirror. drift direction and chirality of the waves arise from spontaneous symmetry breaking. the observations demonstrate a novel transport process in out - of - equilibrium magnetic systems.
arxiv:2310.17305
we study the wigner crystallization on partially filled topological flat bands. we identify the wigner crystals by analyzing the cartesian and angular fourier transform of the pair correlation density of the many - body ground state obtained using exact diagonalization. the crystallization strength, measured by the magnitude of the fourier peaks, increases with decreasing particle density. the shape of the resulting wigner crystals is determined by the boundary conditions of the chosen plaquette and is to a large extent independent of the underlying lattice, including its topology, and follows the behavior of classical point particles.
arxiv:1712.06007
batteryless iot devices, powered by energy harvesting, face significant challenges in maintaining operational efficiency and reliability due to intermittent power availability. traditional checkpointing mechanisms, while essential for preserving computational state, introduce considerable energy and time overheads. this paper introduces approxify, an automated framework that significantly enhances the sustainability and performance of batteryless iot networks by reducing energy consumption by approximately 40 % through intelligent approximation techniques. approxify balances energy efficiency with computational accuracy, ensuring reliable operation without compromising essential functionalities. our evaluation of applications, susan and link quality indicator ( lqi ), demonstrates significant reductions in checkpoint frequency and energy usage while maintaining acceptable error bounds.
arxiv:2410.07202
purveyor of " junk science ". cameron ' s research has been heavily criticized for unscientific methods and distortions which attempt to link homosexuality with pedophilia. in one instance, cameron claimed that lesbians are 300 times more likely to get into car accidents. the splc states his work has been continually cited in some sections of the media despite being discredited. cameron was expelled from the american psychological association in 1983. = = combatting junk science = = in 1995, the union of concerned scientists launched the sound science initiative, a national network of scientists committed to debunking junk science through media outreach, lobbying, and developing joint strategies to participate in town meetings or public hearings. in its newsletter on science and technology in congress, the american association for the advancement of science also recognized the need for increased understanding between scientists and lawmakers : " although most individuals would agree that sound science is preferable to junk science, fewer recognize what makes a scientific study ' good ' or ' bad '. " the american dietetic association, criticizing marketing claims made for food products, has created a list of " ten red flags of junk science ". = = see also = = = = references = = = = further reading = = agin, dan ( 2006 ). junk science – how politicians, corporations, and other hucksters betray us. st. martin ' s griffin. isbn 978 - 0312374808. archived from the original on 2023 - 11 - 04. retrieved 2016 - 10 - 18. huber, peter w. ( 1993 ). galileo ' s revenge : junk science in the courtroom. basic books. isbn 978 - 0465026241. mooney, chris ( 2005 ). the republican war on science. basic books. isbn 978 - 0465046751. kiss sarnoff, susan ( 2001 ). sanctified snake oil : the effect of junk science on public policy. bloomsbury academic. isbn 978 - 0275968458. = = external links = = project on scientific knowledge and public policy ( skapp ) defendingscience. org
michaels, david ( june 2005 ). " doubt is their product ". scientific american. 292 ( 6 ) : 96 – 101. bibcode : 2005sciam. 292f.. 96m. doi : 10. 1038 / scientificamerican0605 - 96. pmid 15934658. archived from the original on 2007 - 09 - 27. retrieved 2008
https://en.wikipedia.org/wiki/Junk_science
the coronavirus disease 2019 ( covid - 19 ) has become a public health emergency of international concern affecting 201 countries and territories around the globe. as of april 4, 2020, it has caused a pandemic outbreak with more than 1, 116, 643 confirmed infections and more than 59, 170 reported deaths worldwide. the main focus of this paper is two - fold : ( a ) generating short term ( real - time ) forecasts of the future covid - 19 cases for multiple countries ; ( b ) risk assessment ( in terms of case fatality rate ) of the novel covid - 19 for some profoundly affected countries by finding various important demographic characteristics of the countries along with some disease characteristics. to solve the first problem, we presented a hybrid approach based on autoregressive integrated moving average model and wavelet - based forecasting model that can generate short - term ( ten days ahead ) forecasts of the number of daily confirmed cases for canada, france, india, south korea, and the uk. the predictions of the future outbreak for different countries will be useful for the effective allocation of health care resources and will act as an early - warning system for government policymakers. in the second problem, we applied an optimal regression tree algorithm to find essential causal variables that significantly affect the case fatality rates for different countries. this data - driven analysis will necessarily provide deep insights into the study of early risk assessments for 50 immensely affected countries.
arxiv:2004.09996
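the autoregressive half of the hybrid forecaster above can be sketched with a bare least - squares ar ( p ) fit and a recursive ten - day - ahead rollout. this is only a schematic on synthetic epidemic - like data: the paper uses a full arima model combined with a wavelet decomposition of the residuals, which in practice would come from libraries such as statsmodels and pywt rather than the toy fit below.

```python
import numpy as np

rng = np.random.default_rng(4)

# toy epidemic-like daily-case series: smooth exponential growth plus noise
t = np.arange(120)
series = np.exp(0.03 * t) * (1 + 0.05 * rng.normal(size=t.size))

def ar_forecast(y, p=3, horizon=10):
    # fit AR(p) with intercept by least squares on lagged values
    rows = [y[i - p:i][::-1] for i in range(p, len(y))]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    # roll the fitted recursion forward, feeding forecasts back in
    hist = list(y[-p:])
    out = []
    for _ in range(horizon):
        nxt = coef[0] + coef[1:] @ np.array(hist[-p:][::-1])
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

fc = ar_forecast(np.log(series))   # log scale linearizes the growth
```

the wavelet component of the hybrid would then model the structure the ar part leaves in the residuals, and the two forecasts are summed.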
goyeneche et al. \ [ phys. \ rev. \ a \ textbf { 97 }, 062326 ( 2018 ) ] introduced several classes of quantum combinatorial designs, namely quantum latin squares, quantum latin cubes, and the notion of orthogonality on them. they also showed that mutually orthogonal quantum latin arrangements can be entangled in the same way in which quantum states are entangled. moreover, they established a relationship between quantum combinatorial designs and a remarkable class of entangled states called $ k $ - uniform states, i. e., multipartite pure states such that every reduction to $ k $ parties is maximally mixed. in this article, we put forward the notions of incomplete quantum latin squares and orthogonality on them and present construction methods for mutually orthogonal quantum latin squares and mutually orthogonal quantum latin cubes. furthermore, we introduce the notions of generalized mutually orthogonal quantum latin squares and generalized mutually orthogonal quantum latin cubes, which are equivalent to quantum orthogonal arrays of size $ d ^ 2 $ and $ d ^ 3 $, respectively, and thus naturally provide $ 2 $ - and $ 3 $ - uniform states.
arxiv:2111.04055
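As a classical warm-up for the quantum designs in the abstract above: two Latin squares are mutually orthogonal when superimposing them yields every ordered symbol pair exactly once; the quantum generalization replaces the symbols with (possibly entangled) quantum states. The order-3 squares below are standard textbook examples, not taken from the paper.

```python
# Check Latinity and mutual orthogonality for two order-3 Latin squares.

L1 = [[0, 1, 2],
      [1, 2, 0],
      [2, 0, 1]]
L2 = [[0, 1, 2],
      [2, 0, 1],
      [1, 2, 0]]

def is_latin(sq):
    """Every row and every column is a permutation of 0..n-1."""
    n = len(sq)
    rows_ok = all(sorted(row) == list(range(n)) for row in sq)
    cols_ok = all(sorted(col) == list(range(n)) for col in zip(*sq))
    return rows_ok and cols_ok

def are_orthogonal(a, b):
    """Superimposing a and b must produce all n^2 ordered pairs."""
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

print(is_latin(L1), is_latin(L2), are_orthogonal(L1, L2))   # True True True
```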
We study scalar radiation spectra from a particle in circular orbit in the background of the Janis-Newman-Winicour (JNW) naked singularity. The differences in the nature of the spectra from what one obtains with a Schwarzschild black hole are established. We also compute the angular distribution of the spectra.
arxiv:1303.6824
We attempt to measure the proper motions of two magnetars, the soft gamma-ray repeater SGR 1900+14 and the anomalous X-ray pulsar 1E 2259+586, using two epochs of Chandra observations separated by ~5 yr. We perform extensive tests using these data, archival data, and simulations to verify the accuracy of our measurements and understand their limitations. We find 90% upper limits on the proper motions of 54 mas/yr (SGR 1900+14) and 65 mas/yr (1E 2259+586), with the limits largely determined by the accuracy with which we could register the two epochs of data and by the inherent uncertainties of two-point proper motions. We translate the proper-motion limits into limits on the transverse velocity using distances, and find v_perp < 1300 km/s (SGR 1900+14, for a distance of 5 kpc) and v_perp < 930 km/s (1E 2259+586, for a distance of 3 kpc) at 90% confidence; the range of possible distances for these objects makes a wide range of velocities possible, but it seems that the magnetars do not have uniformly high space velocities of > 3000 km/s. Unfortunately, our proper motions also cannot significantly constrain the previously proposed origins of these objects in nearby supernova remnants or star clusters, limited as much by our ignorance of the ages as by our proper motions.
arxiv:0810.4184
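The transverse-velocity limits quoted in the abstract above follow from the standard conversion v_perp [km/s] = 4.74 * mu [arcsec/yr] * d [pc], which reproduces the stated numbers:

```python
# Transverse velocity from proper motion and distance.
# 4.74 km/s is the velocity of 1 au/yr, hence the conversion factor.

def v_perp_km_s(mu_mas_per_yr, d_kpc):
    return 4.74 * (mu_mas_per_yr / 1000.0) * (d_kpc * 1000.0)

print(round(v_perp_km_s(54, 5)))   # SGR 1900+14: ~1280, quoted as < 1300
print(round(v_perp_km_s(65, 3)))   # 1E 2259+586: ~924, quoted as < 930
```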
The modern deep learning method based on backpropagation has surged in popularity and has been used in multiple domains and application areas. At the same time, there are other, less-known machine learning algorithms with a mature and solid theoretical foundation whose performance remains unexplored. One such example is the brain-like Bayesian Confidence Propagation Neural Network (BCPNN). In this paper, we introduce StreamBrain, a framework that allows neural networks based on BCPNN to be practically deployed in high-performance computing systems. StreamBrain is a domain-specific language (DSL), similar in concept to existing machine learning (ML) frameworks, and supports backends for CPUs, GPUs, and even FPGAs. We empirically demonstrate that StreamBrain can train on the well-known ML benchmark dataset MNIST within seconds, and we are the first to demonstrate BCPNN on STL-10-size networks. We also show how StreamBrain can be used to train with custom floating-point formats and illustrate the impact of using different bfloat variations on BCPNN using FPGAs.
arxiv:2106.05373
The $l_1/l_2$ norm ratio arose as a sparseness measure and has attracted a considerable amount of attention due to three merits: (i) it gives sharper approximations of $l_0$ than $l_1$ does; (ii) it is parameter-free and scale-invariant; (iii) it is more attractive than $l_1$ under highly coherent matrices. In this paper, we first establish the partial smoothness of $l_1$-over-$l_2$ minimization relative to an active manifold ${\cal M}$ and also demonstrate its prox-regularity. Second, we reveal that ADMM$_p$ (or ADMM$^+_p$) can identify the active manifold within finitely many iterations. This discovery contributes to a deeper understanding of the optimization landscape associated with $l_1$-over-$l_2$ minimization. Third, we propose a novel heuristic algorithm framework that combines ADMM$_p$ (or ADMM$^+_p$) with a globalized semismooth Newton method tailored to the active manifold ${\cal M}$. This hybrid approach leverages the strengths of both methods to enhance convergence. Finally, through extensive numerical simulations, we showcase the superiority of our heuristic algorithm over existing state-of-the-art methods for sparse recovery.
arxiv:2401.15405
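Merit (ii) from the abstract above is easy to see numerically: the ratio is unchanged under rescaling and attains its minimum value 1 on 1-sparse vectors, growing toward sqrt(n) for maximally flat ones.

```python
# The l1/l2 ratio as a sparseness surrogate: scale-invariant,
# minimized by 1-sparse vectors, maximized (= sqrt(n)) by flat vectors.

import math

def l1_over_l2(x):
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    return l1 / l2

sparse = [5.0, 0.0, 0.0, 0.0]
dense = [2.5, 2.5, 2.5, 2.5]
print(l1_over_l2(sparse))                  # 1.0
print(l1_over_l2(dense))                   # 2.0 = sqrt(4)
print(l1_over_l2([50.0, 0.0, 0.0, 0.0]))   # still 1.0: scale-invariant
```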
The nature of sub-proton-scale fluctuations in the solar wind is an open question, partly because two similar types of electromagnetic turbulence can occur: kinetic Alfven turbulence and whistler turbulence. These two possibilities, however, have one key qualitative difference: whistler turbulence, unlike kinetic Alfven turbulence, has negligible power in density fluctuations. In this Letter, we present new observational data, as well as analytical and numerical results, to investigate this difference. The results show, for the first time, that the fluctuations well below the proton scale are predominantly kinetic Alfven turbulence and that, if present at all, whistler fluctuations make up only a small fraction of the total energy.
arxiv:1305.2950
Several works have shown that the regularization mechanisms underlying deep neural networks' generalization performance are still poorly understood. In this paper, we hypothesize that deep neural networks are regularized through their ability to extract meaningful clusters among the samples of a class. This constitutes an implicit form of regularization, as no explicit training mechanisms or supervision target such behaviour. To support our hypothesis, we design four different measures of intraclass clustering, based on the neuron- and layer-level representations of the training data. We then show that these measures constitute accurate predictors of generalization performance across variations of a large set of hyperparameters (learning rate, batch size, optimizer, weight decay, dropout rate, data augmentation, network depth and width).
arxiv:2103.06733
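A toy probe in the spirit of the intraclass-clustering measures above: for one class's hidden representations, compare the mean pairwise distance inside candidate subclusters to the class-wide mean; a ratio well below 1 signals tight subcluster structure. The 2-D vectors and the split into two subclusters below are invented for illustration, not the paper's measures.

```python
# Within-subcluster vs class-wide mean pairwise distance for a toy class.

import math
from itertools import combinations

def mean_pairwise(points):
    dists = [math.dist(p, q) for p, q in combinations(points, 2)]
    return sum(dists) / len(dists)

# hypothetical hidden representations of one class, forming two tight blobs
subcluster_a = [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05)]
subcluster_b = [(3.0, 3.1), (3.1, 3.0), (3.05, 3.05)]

within = (mean_pairwise(subcluster_a) + mean_pairwise(subcluster_b)) / 2
overall = mean_pairwise(subcluster_a + subcluster_b)
print(within / overall)   # well below 1: the class splits into tight clusters
```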
We employ the formalism developed in \cite{mentasti:2023gmg} and \cite{bartolo_2022} to study the prospect of detecting an anisotropic stochastic gravitational wave background (SGWB) with the Laser Interferometer Space Antenna (LISA) alone and combined with the proposed space-based interferometer Taiji. Previous analyses have been performed in the frequency domain only. Here, we study the detectability of the individual coefficients of the expansion of the SGWB in spherical harmonics, taking into account the specific motion of the satellites. This requires the use of time-dependent response functions, which we include in our analysis to obtain an optimal estimate of the anisotropic signal. We focus on two applications. The first is the reconstruction of the anisotropic Galactic signal without assuming any prior knowledge of its spatial distribution: we find that neither LISA alone nor LISA combined with Taiji can put tight constraints on the harmonic coefficients for realistic models of the Galactic SGWB. We then focus on the discrimination between a Galactic signal of known morphology but unknown overall amplitude and an isotropic extragalactic SGWB component of astrophysical origin. In this case, we find that the two surveys can confirm, at a confidence level $\gtrsim 3\sigma$, the existence of both the Galactic and extragalactic backgrounds if both have amplitudes as predicted in standard models. We also find that, in the LISA-only case, the analysis in the frequency domain (under the assumption of a time average of data taken homogeneously across the year) provides a nearly identical determination of the two amplitudes as compared to the optimal analysis.
arxiv:2312.10792
In 3+1 numerical simulations of dynamic black-hole spacetimes, it is useful to be able to find the apparent horizon(s) (AH) in each slice of a time evolution. A number of AH finders are available, but they often take many minutes to run, so they are too slow to be practically usable at each time step. Here I present a new AH finder, _AHFinderDirect_, which is very fast and accurate: at typical resolutions it takes only a few seconds to find an AH to $\sim 10^{-5} m$ accuracy on a GHz-class processor. I assume that an AH to be searched for is a Strahlk\"orper (star-shaped region) with respect to some local origin, and so parameterize the AH shape by $r = h(\text{angle})$ for some single-valued function $h: S^2 \to \Re^+$. The AH equation then becomes a nonlinear elliptic PDE in $h$ on $S^2$, whose coefficients are algebraic functions of $g_{ij}$, $K_{ij}$, and the Cartesian-coordinate spatial derivatives of $g_{ij}$. I discretize $S^2$ using 6 angular patches (one each in the neighborhood of the $\pm x$, $\pm y$, and $\pm z$ axes) to avoid coordinate singularities, and finite difference the AH equation in the angular coordinates using 4th-order finite differencing. I solve the resulting system of nonlinear algebraic equations (for $h$ at the angular grid points) by Newton's method, using a "symbolic differentiation" technique to compute the Jacobian matrix. _AHFinderDirect_ is implemented as a thorn in the _Cactus_ computational toolkit and is freely available by anonymous CVS checkout.
arxiv:gr-qc/0306056
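The core numerics of the finder, Newton iteration on a discretized nonlinear system, can be shown in miniature. The sketch below solves a toy 2-variable system F(h) = 0 with a finite-difference Jacobian; the real code assembles a much larger Jacobian by symbolic differentiation for the horizon shape function h on S^2.

```python
# Newton's method for a small nonlinear system F(h) = 0.

def newton(F, h, tol=1e-12, max_iter=50):
    n = len(h)
    for _ in range(max_iter):
        f = F(h)
        if max(abs(v) for v in f) < tol:
            return h
        # finite-difference Jacobian (the paper uses symbolic differentiation)
        eps = 1e-7
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):
            hp = list(h); hp[j] += eps
            fp = F(hp)
            for i in range(n):
                J[i][j] = (fp[i] - f[i]) / eps
        # solve the 2x2 Newton system J * dh = -f by Cramer's rule
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dh0 = (-f[0] * J[1][1] + f[1] * J[0][1]) / det
        dh1 = (-f[1] * J[0][0] + f[0] * J[1][0]) / det
        h = [h[0] + dh0, h[1] + dh1]
    return h

# toy system: a circle of radius 2 intersected with the diagonal
F = lambda h: [h[0] ** 2 + h[1] ** 2 - 4.0, h[0] - h[1]]
root = newton(F, [1.0, 2.0])
print(root)   # both components converge to sqrt(2)
```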
In recent years there has been considerable interest in developing photonic temperature sensors, such as fiber Bragg gratings (FBGs), as an alternative to resistance thermometry. In this study we examine the thermal response of FBGs over the temperature range of 233 K to 393 K. We demonstrate, in a hermetically sealed dry-argon environment, that FBG devices show a quadratic dependence on temperature, with expanded uncertainties (k = 2) of ~500 mK. Our measurements indicate that the combined measurement uncertainty is dominated by the uncertainty of the peak-center fit and by thermal aging of polyimide-coated fibers.
arxiv:1603.07688
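The quadratic dependence reported above implies a simple calibration: fit lam(T) = a + b*T + c*T^2 to the Bragg wavelength, then invert the quadratic to read temperature from a measured wavelength. The coefficients below are invented for illustration, not the paper's values.

```python
# Quadratic FBG calibration and its inversion (hypothetical coefficients).

import math

a, b, c = 1540.0, 0.010, 1.5e-5   # nm, nm/K, nm/K^2 (invented)

def wavelength(T):
    return a + b * T + c * T ** 2

def temperature(lam):
    # positive root of c*T^2 + b*T + (a - lam) = 0
    return (-b + math.sqrt(b * b - 4 * c * (a - lam))) / (2 * c)

T = 300.0
print(round(temperature(wavelength(T)), 6))   # recovers 300.0
```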
In the traditional simple step-stress partial accelerated life test (SSSPALT), the items are put on normal operating conditions up to a certain time, after which the stress is increased to obtain failure-time information early. However, increasing the stress incurs an additional cost that raises the cost of the life test. In this context, an adaptive SSSPALT is considered, where the stress is increased after a certain time only if the number of failures up to that point is less than a pre-specified number. We consider the determination of Bayesian reliability acceptance sampling plans (BSP) through adaptive SSSPALT conducted under Type-I censoring. The BSP under adaptive SSSPALT is called BSPAA. The Bayes decision function and Bayes risk are obtained for a general loss function. Optimal BSPAAs are obtained for the quadratic loss function by minimizing the Bayes risk. An algorithm is provided for computation of the optimum BSPAA. Comparisons between the proposed BSPAA and the conventional BSP through a non-accelerated life test (CBSP) and the conventional BSP through SSSPALT (CBSPA) are carried out.
arxiv:2408.00734
In this paper we prove that a uniformly distributed random circular automaton $\mathcal{A}_n$ of order $n$ synchronizes with high probability (whp). More precisely, we prove that $$\mathbb{P}\left[\mathcal{A}_n \text{ synchronizes}\right] = 1 - O\left(\frac{1}{n}\right).$$ The main idea of the proof is to translate the synchronization problem into properties of a random matrix; these properties are then handled with tools of the probabilistic method. Additionally, we provide an upper bound for the probability of synchronization of circular automata in terms of chromatic polynomials of circulant graphs.
arxiv:1906.02602
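The property the theorem above concerns can be checked by brute force for small machines: a DFA synchronizes iff some word maps every state to a single state, which reduces to a reachability search over subsets of states. The automata below are toy examples, not the random circular automata of the paper.

```python
# Brute-force synchronization check via subset reachability.

def synchronizes(delta):
    """delta[state][letter] -> state; True iff a reset word exists."""
    n, k = len(delta), len(delta[0])
    start = frozenset(range(n))
    seen, frontier = {start}, [start]
    while frontier:
        cur = frontier.pop()
        if len(cur) == 1:
            return True
        for a in range(k):
            nxt = frozenset(delta[s][a] for s in cur)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# letter 0 is a cyclic shift; letter 1 funnels every state toward state 3
funnel4 = [(1, 1), (2, 2), (3, 3), (0, 3)]
print(synchronizes(funnel4))    # True

# a pure rotation never collapses the state set
rotation3 = [(1,), (2,), (0,)]
print(synchronizes(rotation3))  # False
```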
We propose a novel approach for answering and explaining multiple-choice science questions by reasoning over grounding and abstract inference chains. This paper frames question answering as an abductive reasoning problem, constructing plausible explanations for each choice and then selecting the candidate with the best explanation as the final answer. Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer and extracting the facts that satisfy certain structural and semantic constraints. To extract the explanations, we employ a linear programming formalism designed to select the optimal subgraph. The graph's weighting function is composed of a set of parameters, which we fine-tune to optimize answer-selection performance. We carry out our experiments on the WorldTree and ARC-Challenge corpora to empirically demonstrate the following conclusions: (1) grounding-abstract inference chains provide the semantic control needed to perform explainable abductive reasoning; (2) the approach learns efficiently and robustly with fewer parameters, outperforming contemporary explainable and transformer-based approaches in a similar setting; (3) it generalizes, outperforming state-of-the-art explainable approaches on general science question sets.
arxiv:2010.13128
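A toy version of the explanation-selection step: score every subset of candidate facts by total relevance minus a size penalty and keep the argmax. The paper optimizes a richer objective with linear programming; exhaustive search over a handful of invented facts illustrates the same idea.

```python
# Exhaustive-search stand-in for LP-based explanation subgraph selection.

from itertools import combinations

# hypothetical candidate facts with relevance weights (invented)
relevance = {
    "gravity pulls objects toward the earth": 0.9,
    "dropped objects fall to the ground": 0.7,
    "plants need sunlight to grow": 0.1,
}

def score(subset, lam=0.4):
    # lam penalizes subset size, encouraging concise explanations
    return sum(relevance[f] for f in subset) - lam * len(subset)

subsets = (c for r in range(1, len(relevance) + 1)
           for c in combinations(sorted(relevance), r))
best = max(subsets, key=score)
print(best)   # the two relevant facts; the sunlight fact is dropped
```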
This paper is a brief report on our submission to the VIPriors object detection challenge. Object detection has attracted many researchers' attention for its wide range of applications, but it remains a challenging task. In this paper, we analyze the characteristics of the data and propose an effective data-augmentation method. We carefully choose a model that is well suited to training from scratch, and we benefit substantially from the skillful use of Soft-NMS and model fusion.
arxiv:2007.08170
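Soft-NMS, which the report credits, replaces hard suppression with a score decay proportional to overlap: instead of deleting boxes that overlap a higher-scoring detection, their scores are reduced. Below is a minimal linear variant; the box layout and thresholds are illustrative choices, not the authors'.

```python
# Minimal linear Soft-NMS. Boxes are (x1, y1, x2, y2, score).

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def soft_nms(boxes, iou_thresh=0.3, score_thresh=0.1):
    boxes = sorted(boxes, key=lambda b: -b[4])
    keep = []
    while boxes:
        best = boxes.pop(0)
        keep.append(best)
        rescored = []
        for b in boxes:
            o = iou(best, b)
            # linear decay instead of hard suppression
            s = b[4] * (1 - o) if o > iou_thresh else b[4]
            if s > score_thresh:
                rescored.append((*b[:4], s))
        boxes = sorted(rescored, key=lambda b: -b[4])
    return keep

dets = [(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.8), (50, 50, 60, 60, 0.7)]
print([round(b[4], 3) for b in soft_nms(dets)])
```

The heavily overlapping second box survives with a decayed score rather than being discarded outright, which is the behaviour that tends to help recall in crowded scenes.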
, we find that over a broad redshift range, the outflow strength strongly depends on the main - sequence offset at the respective redshifts rather than simply the sfr.
arxiv:2010.10540
To constrain the equation of state of super-nuclear-density matter and probe the interior composition of the X-ray pulsar in SAX J1808.4-3658, our estimation considers both its persistent 2.49 ms X-ray pulsations, discovered by Wijnands and van der Klis using the Rossi X-ray Timing Explorer and interpreted as coming from an accretion-powered millisecond X-ray pulsar in a low-mass X-ray binary, and the corresponding mass-radius data obtained by Leahy et al. from analyzing the light curves of SAX J1808.4-3658 during its 1998 and 2005 outbursts, assuming a hot-spot model in which the X-rays originate from the surface of the neutron star.
arxiv:0801.4123