text (stringlengths 1–3.65k) | source (stringlengths 15–79)
---|---
An approach is proposed for bounding the number of zeros that solutions of linear differential systems with polynomial coefficients may have. A bound is obtained in a special case which improves upon currently existing bounds.
|
arxiv:math/0003030
|
We study the null geodesics extending from the near-horizon region out to the far region in the backgrounds of the Schwarzschild and singly-spinning Myers-Perry black holes in the large-dimension limit. We find that in this limit the radial integrals of these geodesics can be obtained by the method of matched asymptotic expansions. If the motion of the photon is confined to the equatorial plane, then all geodesic equations are solvable analytically. The study in this paper may provide a toy model for analyzing observables relevant to the electromagnetic phenomena occurring near black holes.
|
arxiv:1911.08814
|
Realizing the theoretical promise of quantum computers will require overcoming decoherence. Here we demonstrate numerically that high-fidelity quantum gates are possible within a framework of quantum dynamical decoupling. Orders-of-magnitude improvements in the fidelities of a universal set of quantum gates, relative to unprotected evolution, are achieved over a broad range of system-environment coupling strengths, using recursively constructed (concatenated) dynamical decoupling pulse sequences.
|
arxiv:1012.3433
|
Rapid solidification in additively manufactured (AM) metallic materials results in the development of significant microscale internal stresses, which are attributed to printing-induced dislocation substructures. The resulting backstress due to the geometrically necessary dislocations (GNDs) is responsible for the observed tension-compression (T-C) asymmetry. We propose a combined phase field (PF)-strain gradient $J_2$ plasticity (SGP) framework to investigate the T-C asymmetry in such microstructures. The proposed PF model is an extension of Kobayashi's dendritic growth framework, modified to account for orientation-based anisotropy and multi-grain interaction effects. The SGP model accounts for anisotropic temperature-dependent elasticity, dislocation strengthening, and solid-solution strengthening, along with GND-induced directional backstress. This model is employed to predict the solute segregation, dislocation substructure, and backstress development during solidification, as well as the post-solidification anisotropic mechanical properties in terms of the T-C asymmetry of rapidly solidified Fe-Cr alloys. It is observed that higher thermal gradients (and hence cooling rates) lead to higher magnitudes of solute segregation, GND density, and backstress. This also correlates with a corresponding increase in the predicted T-C asymmetry. The results presented in this study point to the microstructural factors, such as dislocation substructure and solute segregation, and mechanistic factors, such as backstress, which may contribute to the development of T-C asymmetry in rapidly solidified microstructures.
|
arxiv:2403.11080
|
We study and compare different examples of stellar evolutionary synthesis input parameters used to produce photoionisation model grids with the MAPPINGS V modelling code. The aim of this study is to (a) explore the systematic effects of various stellar evolutionary synthesis model parameters on the interpretation of emission lines in optical strong-line diagnostic diagrams, (b) characterise the combination of parameters able to reproduce the spread of local galaxies located in the star-forming region of the Sloan Digital Sky Survey, and (c) investigate the emission from extremely metal-poor galaxies using photoionisation models. We explore and compare the stellar input ionising spectrum (stellar population synthesis code [Starburst99, SLUG, BPASS], stellar evolutionary tracks, stellar atmospheres, star-formation history, sampling of the initial mass function) as well as parameters intrinsic to the H II region (metallicity, ionisation parameter, pressure, H II region boundedness). We also perform a comparison of the photoionisation codes MAPPINGS and CLOUDY. For variations in the ionising-spectrum model parameters, we find that the differences in strong emission-line ratios between varying models for a given input model parameter are small, on average ~0.1 dex. An average difference of ~0.1 dex in emission-line ratio is also found between models produced with MAPPINGS and CLOUDY. Large differences between the emission-line ratios are found when comparing intrinsic H II region parameters. We find that low-metallicity galaxies are better explained by a density-bounded H II region and that higher pressures better encompass the spread of galaxies at high redshift.
|
arxiv:1905.09528
|
This work is part of a systematic re-analysis program of all the data on gamma-ray burst (GRB) X-ray afterglows observed so far, in order to constrain GRB models. We present here a systematic analysis of the afterglows observed by XMM-Newton between January 2000 and March 2004. This dataset includes GRB 011211 and GRB 030329. We have obtained spectra, light curves, and colors for these afterglows. In this paper we focus on the continuum spectral and temporal behavior. We compare these values with the theoretical ones expected from the fireball model. We derive constraints on the burst environment (absorption, density profile) and on the beaming angles of the bursts.
|
arxiv:astro-ph/0412302
|
This work addresses techniques to solve convection-diffusion problems based on Hermite interpolation. We extend to these equations a Hermite finite element method providing flux continuity across inter-element boundaries, shown to be a well-adapted tool for simulating pure diffusion phenomena (cf. V. Ruas, J. Comput. Appl. Math. 246, pp. 234-242, 2013). We consider two methods that can be viewed as nontrivial improved versions of the lowest-order Raviart-Thomas mixed method, corresponding to its extensions to convection-diffusion problems proposed by Douglas and Roberts (cf. Computational and Applied Mathematics 1, pp. 91-103, 1982). A detailed convergence study is carried out for one of the methods, and numerical results illustrate the performance of both, as compared to each other and to the corresponding mixed methods.
|
arxiv:1512.07642
|
Almost 80 years have passed since Trumpler's analysis of the Galactic open cluster system laid one of the main foundations for understanding the nature and structure of the Milky Way. Since then, the open cluster system has been recognised as a key source of information for addressing a wide range of questions about the structure and evolution of our Galaxy. Over the last decade, surveys and individual observations from the ground and space have led to an explosion of astrometric, kinematic, and multiwavelength photometric and spectroscopic open cluster data. In addition, a growing fraction of these data is often time-resolved. Together with increasing computing power and developments in classification techniques, the open cluster system reveals an increasingly clearer and more complete picture of our Galaxy. In this contribution, I review the observational properties of the Milky Way's open cluster system. I discuss what they can and cannot teach us, now and in the near future, about several topics such as the Galaxy's spiral structure and dynamics, chemical evolution, large-scale star formation, stellar populations, and more.
|
arxiv:0911.1459
|
We study two species of (i.e., spin-1/2) fermions with short-range intra-species repulsion in the presence of opposite (effective) magnetic fields, each at Landau level filling factor 1/3. In the absence of inter-species interaction, the ground state is simply two copies of the 1/3 Laughlin state with opposite chirality, representing the fractional topological insulator (FTI) phase. We show this phase is stable against moderate inter-species interactions. However, strong enough inter-species repulsion leads to phase separation, while strong enough inter-species attraction drives the system into a superfluid phase. We obtain the phase diagram through exact diagonalization calculations. The FTI-superfluid phase transition is shown to be in the (2+1)d XY universality class, using an appropriate Chern-Simons-Ginzburg-Landau effective field theory.
|
arxiv:1112.4872
|
In this paper we discuss several constructions that lead to new examples of nil-clean, clean, and exchange rings. A characterization of the idempotents in the algebra defined by a 2-cocycle is given and used to prove some of the algebra's properties (the infinitesimal deformation case). From infinitesimal deformations we go to full deformations and prove that any formal deformation of a clean (exchange) ring is itself clean (exchange). Examples of nil-clean, clean, and exchange rings arising from poset algebras are also discussed.
|
arxiv:1404.2662
|
Animals thrive in a constantly changing environment and leverage the temporal structure to learn well-factorized causal representations. In contrast, traditional neural networks suffer from forgetting in changing environments, and many methods have been proposed to limit forgetting, with different trade-offs. Inspired by the brain's thalamocortical circuit, we introduce a simple algorithm that uses optimization at inference time to generate internal representations of the current task dynamically. The algorithm alternates between updating the model weights and a latent task embedding, allowing the agent to parse the stream of temporal experience into discrete events and organize learning about them. On a continual learning benchmark, it achieves competitive end-average accuracy by mitigating forgetting, but importantly, by requiring the model to adapt through latent updates, it organizes knowledge into flexible structures with a cognitive interface to control them. Tasks later in the sequence can be solved through knowledge transfer as they become reachable within the well-factorized latent space. The algorithm meets many of the desiderata of an ideal continually learning agent in open-ended environments, and its simplicity suggests fundamental computations in circuits with abundant feedback control loops, such as the thalamocortical circuits in the brain.
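The alternation between slow weight updates and fast, inference-time latent updates can be illustrated with a deliberately tiny toy model. This is our own sketch, not the paper's code: the model is $y = w x + z$, where $w$ is shared knowledge and $z$ is a per-task latent that is re-inferred for each task; the two toy tasks and all learning rates are illustrative assumptions.

```python
import numpy as np

# Sketch of alternating updates: a fast latent task embedding z is
# optimized at inference time alongside slow shared weights w.
def fit_task(w, z, data, epochs=300, lr_w=0.05, lr_z=0.2):
    for _ in range(epochs):
        for x, y in data:
            err = w * x + z - y
            z -= lr_z * err          # fast: latent (inference-time) update
            w -= lr_w * err * x      # slow: shared weight update
    return w, z

xs = np.linspace(-1.0, 1.0, 9)
task_a = [(x, 2.0 * x + 1.0) for x in xs]   # same rule, offset +1
task_b = [(x, 2.0 * x - 1.0) for x in xs]   # same rule, offset -1

w = 0.0
w, z_a = fit_task(w, 0.0, task_a)
w, z_b = fit_task(w, 0.0, task_b)
print(round(w, 2), round(z_a, 2), round(z_b, 2))
```

Because the two tasks share the slope but differ in offset, the shared weight converges once while each task receives its own latent, hinting at how a well-factorized latent space can absorb task-specific structure without overwriting shared knowledge.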
|
arxiv:2205.11713
|
The amplified spontaneous emission from a superluminescent diode was frequency doubled in a periodically poled lithium niobate waveguide crystal. The temporally incoherent radiation of such a superluminescent diode is characterized by a relatively broad spectral bandwidth and thermal-like photon statistics, as the measured degree of second-order coherence, $g^{(2)}(0) = 1.9 \pm 0.1$, indicates. Despite the non-optimized scenario in the spectral domain, we achieve six orders of magnitude higher conversion efficiency than previously reported with truly incoherent light. This is possible by using single-spatial-mode radiation and quasi-phase-matched material with a waveguide architecture. This work is a principal step towards efficient frequency conversion of temporally incoherent radiation in one spatial mode to access wavelengths where no radiation from superluminescent diodes is available, especially with tailored quasi-phase-matched crystals. The frequency-doubled light might find use in applications and quantum optics experiments.
|
arxiv:1704.01096
|
We study "nanoptera", which are non-localized solitary waves with exponentially small but non-decaying oscillations, in two singularly perturbed Hertzian chains with precompression. These two systems are woodpile chains (which we model as systems of Hertzian particles and springs) and diatomic Hertzian chains with alternating masses. We demonstrate that nanoptera arise from Stokes phenomena and appear when special curves, called Stokes curves, are crossed in the complex plane. We use techniques from exponential asymptotics to obtain approximations of the oscillation amplitudes. Our analysis demonstrates that traveling waves in a singularly perturbed woodpile chain have a single Stokes curve, across which oscillations appear. Comparing these asymptotic predictions with numerical simulations reveals that they accurately describe the non-decaying oscillatory behavior in a woodpile chain. We perform a similar analysis of a diatomic Hertzian chain and find that the nanopteron solution has two distinct exponentially small oscillatory contributions. We demonstrate that there exists a set of mass ratios for which these two contributions cancel to produce localized solitary waves. This result builds on prior experimental and numerical observations that there exist mass ratios that support localized solitary waves in diatomic Hertzian chains without precompression. Comparing asymptotic and numerical results in a diatomic Hertzian chain with precompression reveals that our exponential asymptotic approach accurately predicts the oscillation amplitude for a wide range of system parameters, but it fails to identify several values of the mass ratio that correspond to localized solitary-wave solutions.
|
arxiv:2102.07322
|
Fish stock assessment often involves manual fish counting by taxonomy specialists, which is both time-consuming and costly. We propose FishNet, an automated computer vision system for both taxonomic classification and fish size estimation from images captured with a low-cost digital camera. The system first performs object detection and segmentation using a Mask R-CNN to identify individual fish from images containing multiple fish, possibly of different species. Each fish is then classified by species and its length predicted using separate machine learning models. To develop the model, we use a dataset of 300,000 hand-labeled images containing 1.2M fish of 163 different species, ranging in length from 10 cm to 250 cm, with additional annotations and quality control methods used to curate high-quality training data. On held-out test data, our system achieves 92% intersection over union on the fish segmentation task, 89% top-1 classification accuracy on single-fish species classification, and a 2.3 cm mean absolute error on the fish length estimation task.
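To make the size-estimation stage concrete, here is a minimal sketch (our own illustration, not FishNet's implementation) of turning a segmentation mask into a length estimate: take the farthest pair of mask pixels as the body axis and convert pixels to centimetres with an assumed camera scale. The helper name and the cm-per-pixel calibration are hypothetical.

```python
import numpy as np

# Estimate a fish's length from a binary segmentation mask, assuming a
# known camera scale in cm per pixel (a simplification of real pipelines).
def mask_length_cm(mask: np.ndarray, cm_per_px: float) -> float:
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    # Brute-force diameter of the pixel set; fine for small masks.
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    return float(np.sqrt(d2.max()) * cm_per_px)

# A toy 3x10 rectangular "fish" mask at 1 cm per pixel: the diagonal
# from (0, 0) to (9, 2) gives sqrt(81 + 4) ~ 9.22 cm.
mask = np.ones((3, 10), dtype=bool)
print(round(mask_length_cm(mask, 1.0), 2))
```

A production system would regress length with a learned model, as the abstract describes; the geometric estimate above only shows where the pixel-to-centimetre conversion enters.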
|
arxiv:2403.10916
|
We present a brief review of the present-day situation in studies of high-temperature superconductivity in iron pnictides and chalcogenides. The recent discovery of superconductivity with $T_c > 30$ K in $A_x$Fe$_{2-x/2}$Se$_2$ (A = K, Cs, Tl, ...) represents a major new step in the development of new concepts in the physics of Fe-based high-temperature superconductors. We compare LDA and ARPES data on the band structure and Fermi surfaces of the novel superconductors with those of the previously studied FeAs superconductors, especially isostructural 122 superconductors like BaFe$_2$As$_2$. It appears that the electronic structure of the new superconductors is rather different from that of the FeAs 122 systems. In particular, no nesting of electron- and hole-like Fermi surfaces is observed, casting doubt on the most popular theoretical schemes of Cooper pairing for these systems. The discovery of Fe-vacancy ordering and antiferromagnetic (AFM) ordering at rather high temperatures ($T_N > 500$ K), much exceeding the superconducting $T_c$, makes these systems unique antiferromagnetic superconductors with the highest $T_N$ observed up to now. We discuss the role of both vacancies and AFM ordering in transformations of the band structure and Fermi surfaces, as well as their importance for superconductivity. In particular, we show that the system remains metallic, with unfolded Fermi surfaces quite similar to those in the paramagnetic state. The superconducting transition temperature $T_c$ of the new superconductors is discussed within the general picture of superconductivity in multiple-band systems. It is demonstrated that both in FeAs superconductors and in the new FeSe systems the value of $T_c$ correlates with the total density of states (DOS) at the Fermi level.
|
arxiv:1106.3707
|
In the process of translation, ribosomes read the genetic code on an mRNA and assemble the corresponding polypeptide chain. The ribosomes perform discrete directed motion which is well modeled by a totally asymmetric simple exclusion process (TASEP) with open boundaries. Using Monte Carlo simulations and a simple mean-field theory, we discuss the effect of one or two "bottlenecks" (i.e., slow codons) on the production rate of the final protein. Confirming and extending previous work by Chou and Lakatos, we find that the location and spacing of the slow codons can affect the production rate quite dramatically. In particular, we observe a novel "edge" effect, i.e., an interaction of a single slow codon with the system boundary. We focus in detail on ribosome density profiles and provide a simple explanation for the length scale which controls the range of these interactions.
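The bottleneck effect on the production rate can be reproduced with a very small Monte Carlo simulation of an open-boundary TASEP. This is a generic sketch with illustrative rates, not the paper's parameters: particles (ribosomes) enter on the left with rate `alpha`, hop rightward when the next site is empty (with reduced rate at one slow site), and exit on the right with rate `beta`; the exit count per step estimates the current.

```python
import random

# Open-boundary TASEP with an optional slow site ("bottleneck"); the
# measured current models the protein production rate.
def tasep_current(L=50, slow_site=None, p_slow=0.2,
                  alpha=0.8, beta=0.8, steps=200_000, seed=1):
    random.seed(seed)
    lattice = [0] * L
    exits = 0
    for _ in range(steps):
        i = random.randrange(-1, L)        # pick entry, a bulk site, or exit
        if i == -1:                        # injection at the left boundary
            if lattice[0] == 0 and random.random() < alpha:
                lattice[0] = 1
        elif i == L - 1:                   # exit at the right boundary
            if lattice[i] == 1 and random.random() < beta:
                lattice[i] = 0
                exits += 1
        elif lattice[i] == 1 and lattice[i + 1] == 0:
            hop = p_slow if i == slow_site else 1.0
            if random.random() < hop:      # reduced hop rate at the bottleneck
                lattice[i], lattice[i + 1] = 0, 1
    return exits / steps

j_uniform = tasep_current()
j_bottleneck = tasep_current(slow_site=25)
print(j_bottleneck < j_uniform)
```

A single slow site in the bulk caps the current well below the homogeneous maximal-current value, which is the basic effect the paper quantifies as a function of bottleneck location and spacing.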
|
arxiv:q-bio/0602024
|
In this paper we introduce an abstract nonsmooth optimization problem and prove the existence and uniqueness of its solution. We present a numerical scheme to approximate this solution. The theory is then applied to a sample static contact problem describing an elastic body in frictional contact with a foundation. This contact is governed by a nonmonotone friction law depending on the normal and tangential components of the displacement. Finally, computational simulations are performed to illustrate the obtained results.
|
arxiv:1903.04241
|
This paper presents a fast and modular framework for multi-object tracking (MOT) based on the Markov decision process (MDP) tracking-by-detection paradigm. It is designed to allow its various functional components to be replaced by custom-designed alternatives to suit a given application. An interactive GUI with integrated object detection, segmentation, MOT, and semi-automated labeling is also provided to make it easier to get started with this framework. Though not breaking new ground in terms of performance, Deep MDP has a large code base that should be useful for the community to try out new ideas, or simply to have an easy-to-use and easy-to-adapt system for any MOT application. Deep MDP is available at https://github.com/abhineet123/deep_mdp.
|
arxiv:2310.14294
|
We study the problem of existence of preduals of locally convex Hausdorff spaces. We derive necessary and sufficient conditions for the existence of a predual with certain properties of a bornological locally convex Hausdorff space $X$. Then we turn to the case that $X = \mathcal{F}(\Omega)$ is a space of scalar-valued functions on a non-empty set $\Omega$ and characterise those among them which admit a special predual, namely a strong linearisation, i.e. there are a locally convex Hausdorff space $Y$, a map $\delta \colon \Omega \to Y$ and a topological isomorphism $T \colon \mathcal{F}(\Omega) \to Y_{b}'$ such that $T(f) \circ \delta = f$ for all $f \in \mathcal{F}(\Omega)$.
|
arxiv:2402.12615
|
This paper proposes two approaches for overcoming access points' phase misalignment effects in the downlink of cell-free massive MIMO (CF-mMIMO) systems. The first approach is based on the differential space-time block coding technique, while the second is based on the use of differential modulation schemes. Both approaches are shown to perform exceptionally well and to restore system performance in CF-mMIMO systems where phase alignment at the access points for downlink joint coherent transmission cannot be achieved.
|
arxiv:2503.04935
|
There is a growing trend of applying machine learning methods to medical datasets in order to predict patients' future status. Although some of these methods achieve high performance, challenges remain in comparing and evaluating different models through their interpretable information. Such analytics can help clinicians improve evidence-based medical decision making. In this work, we develop a visual analytics system that compares multiple models' prediction criteria and evaluates their consistency. With our system, users can generate knowledge on different models' inner criteria and on how confidently we can rely on each model's prediction for a certain patient. Through a case study of a publicly available clinical dataset, we demonstrate the effectiveness of our visual analytics system in assisting clinicians and researchers in comparing and quantitatively evaluating different machine learning methods.
|
arxiv:2002.10998
|
We study the problem of allocating bailouts (stimulus, subsidy allocations) to people participating in a financial network subject to income shocks. We build on the financial clearing framework of Eisenberg and Noe, which allows the incorporation of a bailout policy based on discrete bailouts, motivated by the types of stimulus checks people received around the world as part of COVID-19 economic relief plans. We show that optimally allocating such bailouts on a financial network in order to maximize a variety of social welfare objectives of this form is a computationally intractable problem. We develop approximation algorithms to optimize these objectives and establish guarantees for their approximation ratios. Then, we incorporate multiple fairness constraints in the optimization problems and establish relative bounds on the solutions with versus without these constraints. Finally, we apply our methodology to a variety of data, both in the context of a system of large financial institutions with real-world data, and in a realistic societal context with financial interactions between people and businesses, for which we use semi-artificial data derived from mobility patterns. Our results suggest that the algorithms we develop and study give reasonable results in practice and outperform other network-based heuristics. We argue that the presented problem, viewed through a societal-level lens, could assist policymakers in making informed decisions on issuing subsidies.
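The underlying clearing mechanism can be sketched compactly. The following is our own simplification, not the paper's algorithm: it computes the Eisenberg-Noe clearing payment vector by fixed-point iteration, then brute-forces the allocation of a small number of discrete bailout checks to maximize the number of solvent nodes (feasible only for tiny networks, where the intractability result motivates the paper's approximation algorithms). The liabilities matrix and check size are illustrative.

```python
import itertools
import numpy as np

# Eisenberg-Noe clearing: p_i = min(pbar_i, e_i + incoming payments).
def clearing(Lmat, e, iters=100):
    pbar = Lmat.sum(axis=1)                       # total obligations of each node
    Pi = np.divide(Lmat, pbar[:, None], out=np.zeros_like(Lmat),
                   where=pbar[:, None] > 0)       # relative liabilities
    p = pbar.copy()
    for _ in range(iters):                        # monotone fixed-point iteration
        p = np.minimum(pbar, e + Pi.T @ p)
    return p

# Brute-force search over discrete bailout checks (small n only).
def best_allocation(Lmat, e, budget, check=1.0):
    n = len(e)
    pbar = Lmat.sum(axis=1)
    best, best_solvent = None, -1
    for combo in itertools.combinations_with_replacement(range(n), budget):
        bail = np.bincount(combo, minlength=n) * check
        p = clearing(Lmat, e + bail)
        solvent = int((p >= pbar - 1e-9).sum())   # nodes meeting all obligations
        if solvent > best_solvent:
            best, best_solvent = bail, solvent
    return best, best_solvent

# Toy chain: bank 0 owes 2 to bank 1, which owes 2 to bank 2.
L3 = np.array([[0.0, 2.0, 0.0], [0.0, 0.0, 2.0], [0.0, 0.0, 0.0]])
e0 = np.array([1.0, 0.5, 0.0])
bail, solvent = best_allocation(L3, e0, budget=2)
print(bail, solvent)
```

In this toy chain, directing both checks upstream makes all three banks solvent, illustrating how the network structure, not just individual shortfalls, determines a good allocation.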
|
arxiv:2106.07560
|
I give a brief overview of experimental studies of the spectrum and the structure of the excited states of the nucleon and what we learn about their internal structure. The focus is on the effort to obtain a more complete picture of the light-quark baryon excitation spectrum employing electromagnetic beams, and on the study of the transition form factors and helicity amplitudes and their dependence on the magnitude of the photon virtuality $Q^2$, especially for some of the most prominent resonances. The results were obtained in pion and eta electroproduction experiments off proton targets. They strengthen the connection between experiment and new results from modeling sQCD in DSE and light-cone SR approaches. They also point to the nature of these states as 3-quark excitations at the core.
|
arxiv:1801.10480
|
Despite global connectivity, societies seem to be increasingly polarized and fragmented. This phenomenon is rooted in the underlying complex structure and dynamics of social systems. Far from homogeneously mixing or adopting conforming views, individuals self-organize into groups at multiple scales, ranging from families up to cities and cultures. In this paper, we study the fragmented structure of American society using mobility and communication networks obtained from geo-located social media data. We find self-organized patches with clear geographical borders that are consistent between physical and virtual spaces. The patches have a multi-scale structure ranging from parts of a city up to the entire nation. Their significance is reflected in distinct patterns of collective interests and conversations. Finally, we explain the patch emergence by a model of network growth that combines mechanisms of geographical distance gravity, preferential attachment, and spatial growth. Our observations are consistent with the emergence of social groups whose separated association and communication reinforce distinct identities. Rather than eliminating borders, the virtual space reproduces them as people mirror their offline lives online. Understanding the mechanisms driving the emergence of fragmentation in hyper-connected social systems is imperative in the age of the internet and globalization.
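A growth model combining gravity, preferential attachment, and spatial growth can be sketched in a few lines. This is a generic toy version under our own assumptions (uniform node placement, parameters `m` and `alpha` chosen for illustration), not the paper's calibrated model: each arriving node attaches to existing nodes with probability proportional to degree divided by geographic distance raised to a power.

```python
import numpy as np

# Spatial network growth: attachment weight = degree / distance^alpha,
# combining preferential attachment with a gravity-style distance penalty.
def grow_network(n=200, m=2, alpha=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, 1, size=(n, 2))    # random geographic positions
    deg = np.zeros(n, dtype=int)
    edges = [(0, 1)]                        # seed: two connected nodes
    deg[0] = deg[1] = 1
    for t in range(2, n):
        d = np.linalg.norm(pos[:t] - pos[t], axis=1) + 1e-6
        w = deg[:t] / d ** alpha            # gravity * preferential attachment
        p = w / w.sum()
        targets = rng.choice(t, size=min(m, t), replace=False, p=p)
        for u in targets:
            edges.append((int(u), t))
            deg[u] += 1
            deg[t] += 1
    return pos, deg, edges

pos, deg, edges = grow_network()
print(len(edges), int(deg.max()))
```

Raising `alpha` strengthens the distance penalty, making links increasingly local; this is the knob through which such models produce geographically bordered patches rather than a single well-mixed hub structure.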
|
arxiv:1809.07676
|
Recently, deep learning based video object detection has attracted more and more attention. Compared with object detection in static images, video object detection is more challenging due to the motion of objects, while providing rich temporal information. RNN-based algorithms are an effective way to enhance detection performance in videos with temporal information. However, most studies in this area focus only on accuracy, ignoring the computational cost and the number of parameters. In this paper, we propose an efficient method that combines a channel-reduced convolutional GRU (squeezed GRU) and an information entropy map for video object detection (SGE-Net). The experimental results validate the accuracy improvement and computational savings of the squeezed GRU, and the superiority of the information entropy attention mechanism for classification performance. The mAP increased by 3.7 compared with the baseline, and the number of parameters decreased from 6.33 million to 0.67 million compared with the standard GRU.
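The parameter saving from channel reduction is easy to see by counting gate weights. The sketch below is our own back-of-the-envelope illustration (dimensions are arbitrary, not SGE-Net's): squeezing the input to fewer channels before the three GRU gates shrinks every input-to-gate weight matrix.

```python
# Parameter count of a plain GRU layer: 3 gates (update, reset, candidate),
# each with input weights, hidden weights, and a bias.
def gru_params(d_in, d_h):
    return 3 * (d_in * d_h + d_h * d_h + d_h)

# "Squeezed" variant: a channel-reduction projection in front of the gates.
def squeezed_gru_params(d_in, d_sq, d_h):
    squeeze = d_in * d_sq               # 1x1 channel-reduction projection
    return squeeze + gru_params(d_sq, d_h)

full = gru_params(256, 256)
squeezed = squeezed_gru_params(256, 64, 256)
print(full, squeezed, squeezed < full)
```

With these illustrative dimensions the squeezed variant already cuts roughly a third of the parameters; in a convolutional GRU the same reduction applies per kernel element, which is how the paper's order-of-magnitude shrinkage (6.33M to 0.67M) becomes plausible.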
|
arxiv:2106.07224
|
therefore advocated active policy responses by the public sector, including monetary policy actions by the central bank and fiscal policy actions by the government to stabilise output over the business cycle. Thus, a central conclusion of Keynesian economics is that, in some situations, no strong automatic mechanism moves output and employment towards full employment levels. John Hicks's IS/LM model has been the most influential interpretation of The General Theory. Over the years, understanding of the business cycle has branched into various research programmes, mostly related to or distinct from Keynesianism. The neoclassical synthesis refers to the reconciliation of Keynesian economics with classical economics, stating that Keynesianism is correct in the short run but qualified by classical-like considerations in the intermediate and long run. New classical macroeconomics, as distinct from the Keynesian view of the business cycle, posits market clearing with imperfect information. It includes Friedman's permanent income hypothesis on consumption and "rational expectations" theory, led by Robert Lucas, and real business cycle theory. In contrast, the new Keynesian approach retains the rational expectations assumption; however, it assumes a variety of market failures. In particular, new Keynesians assume prices and wages are "sticky", which means they do not adjust instantaneously to changes in economic conditions. Thus, the new classicals assume that prices and wages adjust automatically to attain full employment, whereas the new Keynesians see full employment as being automatically achieved only in the long run, and hence government and central-bank policies are needed because the "long run" may be very long.

=== Unemployment ===

The amount of unemployment in an economy is measured by the unemployment rate, the percentage of workers without jobs in the labour force. The labour force only includes workers actively looking for jobs. People who are retired, pursuing education, or discouraged from seeking work by a lack of job prospects are excluded from the labour force. Unemployment can generally be broken down into several types related to different causes. Classical unemployment occurs when wages are too high for employers to be willing to hire more workers. Consistent with classical unemployment, frictional unemployment occurs when appropriate job vacancies exist for a worker, but the length of time needed to search for and find the job leads to a period of unemployment. Structural unemployment covers a variety of possible causes of unemployment, including a mismatch between workers' skills and the skills required for open jobs. Large amounts of structural unemployment can occur when an economy is transitioning industries and workers find their previous set of skills are no longer in
|
https://en.wikipedia.org/wiki/Economics
|
We study self-interacting dark matter coupled to the Standard Model via the Higgs portal. We consider a scenario where dark matter is a thermal relic with strong enough self-interactions that can alleviate the problems of collisionless cold dark matter. We study constraints from direct detection searches, the LHC, and Big Bang nucleosynthesis. We show that the tension between these constraints and the need for sufficiently strong self-interactions with light mediators can be alleviated by coupling the mediator to either active or sterile neutrinos. Future direct detection data offer great potential and can be used to find evidence of a light mediator and verify that dark matter scatters via long-range self-interactions.
|
arxiv:1411.3730
|
Let $A$ be a rational function of degree $n \geq 2$. Let us denote by $G(A)$ the group of Möbius transformations $\sigma$ such that $A \circ \sigma = \nu_{\sigma} \circ A$ for some Möbius transformation $\nu_{\sigma}$, and by $\Sigma(A)$ and $\mathrm{Aut}(A)$ the subgroups of $G(A)$ consisting of $\sigma$ such that $A \circ \sigma = A$ and $A \circ \sigma = \sigma \circ A$, correspondingly. In this paper, we study sequences of the above groups arising from iterating $A$. In particular, we show that if $A$ is not conjugate to $z^{\pm n}$, then the orders of the groups $G(A^{\circ k})$, $k \geq 2$, are finite and uniformly bounded in terms of $n$ only. We also prove a number of results about the groups $\Sigma_{\infty}(A) = \cup_{k=1}^{\infty} \Sigma(A^{\circ k})$ and $\mathrm{Aut}_{\infty}(A) = \cup_{k=1}^{\infty} \mathrm{Aut}(A^{\circ k})$, which are especially interesting from the dynamical perspective.
|
arxiv:2006.08154
|
The purpose of this paper is to study property (RD) for locally compact Hecke pairs. We discuss length functions on Hecke pairs and the growth of Hecke pairs. We establish an equivalence between property (RD) of locally compact groups and property (RD) of certain locally compact Hecke pairs. This allows us to transfer several important results concerning property (RD) of locally compact groups into our setting, and consequently to identify many classes of examples of locally compact Hecke pairs with property (RD). We also show that a reduced discrete Hecke pair $(G, H)$ has (RD) if and only if its Schlichting completion $\bar{G}$ has (RD). It then follows that relative unimodularity is a necessary condition for a discrete Hecke pair to possess property (RD).
|
arxiv:1412.1208
|
We prove a refinement of the global Gan-Gross-Prasad conjecture proposed by Ichino-Ikeda and N. Harris for unitary groups under some local conditions. We need to assume some expected properties of L-packets and some part of the local Gan-Gross-Prasad conjecture.
|
arxiv:1208.6280
|
A new cryptographic tool, the anonymous quantum key technique, is introduced that leads to unconditionally secure key distribution and encryption schemes that can be readily implemented experimentally in a realistic environment. If quantum memory is available, the technique would have many features of public-key cryptography; an identification protocol that does not require a shared secret key is provided as an illustration. The possibility is also indicated of obtaining unconditionally secure quantum bit commitment protocols with this technique.
|
arxiv:quant-ph/0009113
|
deep learning - based image stitching pipelines are typically divided into three cascading stages : registration, fusion, and rectangling. each stage requires its own network training and is tightly coupled to the others, leading to error propagation and posing significant challenges to parameter tuning and system stability. this paper proposes the simple and robust stitcher ( srstitcher ), which revolutionizes the image stitching pipeline by simplifying the fusion and rectangling stages into a unified inpainting model, requiring no model training or fine - tuning. we reformulate the problem definitions of the fusion and rectangling stages and demonstrate that they can be effectively integrated into an inpainting task. furthermore, we design the weighted masks to guide the reverse process in a pre - trained large - scale diffusion model, implementing this integrated inpainting task in a single inference. through extensive experimentation, we verify the interpretability and generalization capabilities of this unified model, demonstrating that srstitcher outperforms state - of - the - art methods in both performance and stability. code : https : / / github. com / yayoyo66 / srstitcher
|
arxiv:2404.14951
|
we demonstrate the use of dual - comb spectroscopy for isotope ratio measurements. we show that the analysis spectral range of a free - running near - infrared dual - comb spectrometer can be extended to the mid - infrared by difference frequency generation to target specific spectral regions suitable for such measurements, and especially the relative isotopic ratio $\delta^{13}$C. the measurements performed present a very good repeatability over several days with a standard deviation below 2‰ for a recording time of a few tens of seconds, and the results are compatible with measurements obtained using an isotope ratio mass spectrometer. our setup also shows the possibility to target several chemical species without any major modification, which can be used to measure other isotopic ratios. further improvements could decrease the uncertainties of the measurements, and the spectrometer could thus compete with isotope ratio spectrometers currently available on the market.
|
arxiv:2202.01977
|
the modern era always looks into advancements in technology. design and topology of interconnection networks play a mutual role in development of technology. analysing the topological properties and characteristics of an interconnection network is not an easy task. graph theory helps in solving this task analytically and efficiently through the use of numerical parameters known as distance based topological descriptors. these descriptors have considerable applications in various fields of computer science, chemistry, biology, etc. this paper deals with the evaluation of topological descriptors for an n - dimensional multistage interconnection network, the benes network, bb ( n ). also, a new variant of interconnection network is derived from the benes network, named as augmented benes network and denoted as bb ^ * ( n ). the topological descriptors for the benes derived network are also determined in this work. further, the benes network and augmented benes network undergo a comparative analysis based on a few network parameters, which helps to understand the efficiency of the newly derived benes network. a broadcasting algorithm for the augmented benes network is also provided.
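As a minimal illustration of the distance-based descriptors discussed above, the sketch below computes the Wiener index (the sum of shortest-path distances over all vertex pairs) by breadth-first search. The 4-cycle used here is a toy graph for illustration, not an actual Benes topology, and the paper's descriptors go beyond this single invariant.

```python
from collections import deque

def bfs_distances(adj, src):
    # Unweighted shortest-path distances from src via breadth-first search.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def wiener_index(adj):
    # Sum of shortest-path distances over all unordered vertex pairs.
    total = sum(sum(bfs_distances(adj, u).values()) for u in adj)
    return total // 2  # each pair was counted twice

# Tiny illustrative graph: a 4-cycle.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(wiener_index(c4))  # → 8
```

The same BFS routine yields other distance-based descriptors (eccentricity, diameter) with minor changes to the aggregation step.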
|
arxiv:2411.04135
|
the proliferation of data and text documents such as articles, web pages, books, social network posts, etc. on the internet has created a fundamental challenge in various fields of text processing under the title of " automatic text summarisation ". manual processing and summarisation of large volumes of textual data is a very difficult, expensive and time - consuming process, and practically impossible for human users. text summarisation systems are divided into extractive and abstractive categories. in the extractive summarisation method, the final summary of a text document is extracted from the important sentences of the same document without any modification. in this method, sentences may be repeated and pronoun references may remain unresolved. however, in the abstractive summarisation method, the final summary of a textual document is generated from the meaning and significance of the sentences and words of the same document or other documents. many of the works carried out have used extractive or abstractive methods to summarise collections of web documents, each of which has advantages and disadvantages in the results obtained in terms of similarity or size. in this work, a crawler has been developed to extract popular text posts from the instagram social network with appropriate preprocessing, and a set of extractive and abstractive algorithms have been combined to show how each of them can be used. observations made on 820 popular text posts on the social network instagram show the accuracy ( 80 % ) of the proposed system.
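A minimal sketch of the extractive side of such a pipeline: rank sentences by the average corpus frequency of their words and keep the top-k in document order. This is a deliberately simple frequency-based heuristic for illustration, far simpler than the combined extractive/abstractive system described above.

```python
import re
from collections import Counter

def extractive_summary(text, k=2):
    # Rank sentences by the average frequency of their words in the whole
    # text, then return the top-k sentences in their original order.
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(i):
        ws = re.findall(r"\w+", sentences[i].lower())
        return sum(freq[w] for w in ws) / max(1, len(ws))

    ranked = sorted(range(len(sentences)), key=score, reverse=True)
    keep = sorted(ranked[:k])  # restore document order
    return ". ".join(sentences[i] for i in keep) + "."

text = ("cats like fish. cats like naps. "
        "dogs bark loudly. the weather is nice today.")
print(extractive_summary(text, 2))  # → cats like fish. cats like naps.
```

Averaging by sentence length avoids the bias toward long sentences that a raw frequency sum would introduce.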
|
arxiv:2303.07957
|
motor skill acquisition in fields like surgery, robotics, and sports involves learning complex task sequences through extensive training. traditional performance metrics, like execution time and error rates, offer limited insight as they fail to capture the neural mechanisms underlying skill learning and retention. this study introduces directed functional connectivity ( dfc ), derived from electroencephalography ( eeg ), as a novel brain - based biomarker for assessing motor skill learning and retention. for the first time, dfc is applied as a biomarker to map the stages of the fitts and posner motor learning model, offering new insights into the neural mechanisms underlying skill acquisition and retention. unlike traditional measures, it captures both the strength and direction of neural information flow, providing a comprehensive understanding of neural adaptations across different learning stages. the analysis demonstrates that dfc can effectively identify and track the progression through various stages of the fitts and posner model. furthermore, its stability over a six - week washout period highlights its utility in monitoring long - term retention. no significant changes in dfc were observed in a control group, confirming that the observed neural adaptations were specific to training and not due to external factors. by offering a granular view of the learning process at the group and individual levels, dfc facilitates the development of personalized, targeted training protocols aimed at enhancing outcomes in fields where precision and long - term retention are critical, such as surgical education. these findings underscore the value of dfc as a robust biomarker that complements traditional performance metrics, providing a deeper understanding of motor skill learning and retention.
|
arxiv:2502.14731
|
we show that degrees of the real fields of definition of arithmetic kleinian reflection groups are bounded by 35.
|
arxiv:0710.5108
|
clinical decision support systems are software tools that help clinicians to make medical decisions. however, their acceptance by clinicians is usually rather low. a known problem is that they often require clinicians to manually enter lots of patient data, which is long and tedious. existing solutions, such as the automatic data extraction from electronic health records, are not fully satisfying, because of low data quality and availability. in practice, many systems still include long questionnaires for data entry. in this paper, we propose an original solution to simplify patient data entry, using an adaptive questionnaire, i. e. a questionnaire that evolves during user interaction, showing or hiding questions dynamically. considering a rule - based decision support system, we designed methods for translating the system ' s clinical rules into display rules that determine the items to show in the questionnaire, and methods for determining the optimal order of priority among the items in the questionnaire. we applied this approach to a decision support system implementing stopp / start v2, a guideline for managing polypharmacy. we show that it permits reducing by about two thirds the number of clinical conditions displayed in the questionnaire. presented to clinicians during focus group sessions, the adaptive questionnaire was found " pretty easy to use ". in the future, this approach could be applied to other guidelines, and adapted for data entry by patients.
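The display-rule idea can be sketched as predicates evaluated over the answers collected so far: an item is shown only when its rule fires and it has not been answered yet. The items and rules below are hypothetical illustrations, not the actual STOPP/START v2 rules.

```python
# Hypothetical clinical items with display rules: each rule is a predicate
# over the answers collected so far (not the paper's actual rule set).
ITEMS = [
    ("age", lambda a: True),                              # always shown first
    ("on_diuretic", lambda a: a.get("age", 0) >= 65),     # only for elderly patients
    ("serum_potassium", lambda a: a.get("on_diuretic") is True),
]

def next_visible_items(answers):
    # Items to display now: the rule fires and no answer is recorded yet.
    return [name for name, rule in ITEMS
            if name not in answers and rule(answers)]

print(next_visible_items({}))                                 # → ['age']
print(next_visible_items({"age": 70}))                        # → ['on_diuretic']
print(next_visible_items({"age": 70, "on_diuretic": True}))   # → ['serum_potassium']
```

Re-evaluating the rules after every answer is what makes the questionnaire adaptive: items irrelevant to the current patient are never displayed.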
|
arxiv:2309.10398
|
integrity and reliability of a national power grid system are essential to society ' s development and security. among the power grid components, transmission lines are critical due to exposure and vulnerability to severe external conditions, including high winds, ice, and extreme temperatures. the combined effects of external agents with high electrical load and presence of damage precursors greatly affects the conducting material ' s properties due to a thermal runaway cycle that accelerates the aging process. in this paper, we develop a thermo - electro - mechanical model for long - term failure analysis of overhead transmission lines. a phase - field model of damage and fatigue, coupled with electrical and thermal modules, provides a detailed description of the conductor ' s temperature evolution. we define a limit state function based on maximum operating temperature to avoid excessive overheating and sagging. we study four representative scenarios deterministically, and propose the probabilistic collocation method ( pcm ) as a tool to understand the stochastic behavior of the system. we use pcm in forward parametric uncertainty quantification, global sensitivity analysis, and computation of failure probability curves in a straightforward and computationally efficient fashion, and we quantify the most influential parameters that affect the failure predictability from a physics - based perspective.
|
arxiv:2406.18860
|
one of the challenges in evaluating multi - object video detection, tracking and classification systems is having publicly available data sets with which to compare different systems. however, the measures of performance for tracking and classification are different. data sets that are suitable for evaluating tracking systems may not be appropriate for classification. tracking video data sets typically only have ground truth track ids, while classification video data sets only have ground truth class - label ids. the former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. this paper describes an advancement of the ground truth meta - data for the darpa neovision2 tower data set to allow both the evaluation of tracking and classification. the ground truth data sets presented in this paper contain unique object ids across 5 different classes of object ( car, bus, truck, person, cyclist ) for 24 videos of 871 image frames each. in addition to the object ids and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un - annotated objects were present. the unique ids are maintained during occlusions between multiple objects or when objects re - enter the field of view. this will provide : a solid foundation for evaluating the performance of multi - object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard multi object tracking ( mot ) framework, and classification performance using the neovision2 metrics. these data have been hosted publicly.
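MOT-style evaluation against such bounding-box ground truth relies on matching predicted boxes to annotated ones, conventionally by intersection-over-union. The function below is the standard textbook IoU criterion, not code from the data set's own tooling.

```python
def iou(box_a, box_b):
    # Intersection-over-union of two (x1, y1, x2, y2) axis-aligned boxes,
    # the standard overlap criterion for matching detections to ground truth.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # → 0.142857... (1 / 7)
```

A detection is typically counted as a match when its IoU with a ground-truth box exceeds a threshold such as 0.5.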
|
arxiv:1704.06378
|
temporal graph neural networks, a new and trending area of machine learning, suffer from a lack of formal analysis. in this paper, information theory is used as the primary tool to provide a framework for the analysis of temporal gnns. for this reason, the concept of information bottleneck is used and adjusted to be suitable for a temporal analysis of such networks. to this end, a new definition for mutual information rate is provided, and the potential use of this new metric in the analysis of temporal gnns is studied.
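The information-bottleneck analysis builds on mutual information. As a minimal refresher, the sketch below computes I(X;Y) from a discrete joint distribution with the textbook formula; it illustrates the quantity being adapted, not the paper's mutual information *rate* definition.

```python
import math

def mutual_information(joint):
    # I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ), in nats,
    # for a joint distribution given as a row-major probability table.
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pxy in enumerate(row):
            if pxy > 0:
                mi += pxy * math.log(pxy / (px[i] * py[j]))
    return mi

# Independent variables carry zero mutual information...
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # → 0.0
# ...while a deterministic relation carries H(X) = log 2 nats.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # → 0.693...
```

The rate version considered in the paper normalises such quantities over time, which is what makes them suitable for temporal sequences.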
|
arxiv:2408.05624
|
since experiencing domain shifts during test - time is inevitable in practice, test - time adaptation ( tta ) continues to adapt the model after deployment. recently, the area of continual and gradual test - time adaptation ( tta ) emerged. in contrast to standard tta, continual tta considers not only a single domain shift, but a sequence of shifts. gradual tta further exploits the property that some shifts evolve gradually over time. since in both settings long test sequences are present, error accumulation needs to be addressed for methods relying on self - training. in this work, we propose and show that in the setting of tta, the symmetric cross - entropy is better suited as a consistency loss for mean teachers compared to the commonly used cross - entropy. this is justified by our analysis with respect to the ( symmetric ) cross - entropy ' s gradient properties. to pull the test feature space closer to the source domain, where the pre - trained model is well posed, contrastive learning is leveraged. since applications differ in their requirements, we address several settings, including having source data available and the more challenging source - free setting. we demonstrate the effectiveness of our proposed method ' robust mean teacher ' ( rmt ) on the continual and gradual corruption benchmarks cifar10c, cifar100c, and imagenet - c. we further consider imagenet - r and propose a new continual domainnet - 126 benchmark. state - of - the - art results are achieved on all benchmarks.
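The symmetric cross-entropy consistency loss can be sketched on plain probability vectors as follows; this is a minimal stand-in for the paper's batched mean-teacher implementation, with a clipping epsilon added as an assumption to keep the logarithm finite.

```python
import math

def symmetric_cross_entropy(p, q, eps=1e-7):
    # SCE(p, q) = CE(p, q) + CE(q, p). Unlike plain cross-entropy it is
    # symmetric in teacher and student predictions, which the paper argues
    # gives better-behaved gradients for mean-teacher consistency training.
    ce = lambda a, b: -sum(ai * math.log(max(bi, eps)) for ai, bi in zip(a, b))
    return ce(p, q) + ce(q, p)

teacher = [0.9, 0.1]
student = [0.6, 0.4]
print(round(symmetric_cross_entropy(teacher, student), 4))
```

Swapping the arguments leaves the loss unchanged, whereas plain cross-entropy treats the target distribution as fixed and yields a different gradient for the student.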
|
arxiv:2211.13081
|
trust models are widely used in various computer science disciplines. the main purpose of a trust model is to continuously measure trustworthiness of a set of entities based on their behaviors. in this article, the novel notion of " rational trust modeling " is introduced by bridging trust management and game theory. note that trust models / reputation systems have been used in game theory ( e. g., repeated games ) for a long time, however, game theory has not been utilized in the process of trust model construction ; this is where the novelty of our approach comes from. in our proposed setting, the designer of a trust model assumes that the players who intend to utilize the model are rational / selfish, i. e., they decide to become trustworthy or untrustworthy based on the utility that they can gain. in other words, the players are incentivized ( or penalized ) by the model itself to act properly. the problem of trust management can be then approached by game theoretical analyses and solution concepts such as nash equilibrium. although rationality might be built - in in some existing trust models, we intend to formalize the notion of rational trust modeling from the designer ' s perspective. this approach will result in two fascinating outcomes. first of all, the designer of a trust model can incentivise trustworthiness in the first place by incorporating proper parameters into the trust function, which can be later utilized among selfish players in strategic trust - based interactions ( e. g., e - commerce scenarios ). furthermore, using a rational trust model, we can prevent many well - known attacks on trust models. these two prominent properties also help us to predict behavior of the players in subsequent steps by game theoretical analyses.
|
arxiv:1706.09861
|
a dynamical system defined by a metriplectic structure is a dissipative model characterized by a specific pair of tensors, which defines the leibniz brackets. generally, these tensors are poisson brackets tensor and a symmetric metric tensor that models purely dissipative dynamics. in this paper, the metriplectic system describing a simplified two - photon absorption by a two - level atom is disclosed. the hamiltonian component describes the free electromagnetic radiation. the metric component encodes the radiation - matter coupling, driving the system to an asymptotically stable state in which the excited level of the atom is populated due to absorption. this work is intended as a first result to pave the way to apply the metriplectic formalism to many other irreversible processes in nonlinear optics.
|
arxiv:1804.00526
|
fault tolerance in multi - core architecture has attracted attention of the research community for the past 20 years. rapid improvements in cmos technology resulted in exponential growth of transistor density. this resulted in increased challenges for designing resilient multi - core architecture at the same pace. the article presents a survey of fault tolerant methods like fault detection, recovery, re - configurability and repair techniques for multi - core architectures. salvaging at micro - architectural and architectural level are also discussed. the gamut of fault tolerant approaches discussed in this article has tangible improvements on the reliability of multi - core architectures. every concept in the seminal articles is examined with respect to relevant metrics like performance cost, area overhead, fault coverage, level of protection, detection latency and mean time to failure. the existing literature is critically examined. new research directions in the form of new fault tolerant design alternatives for both homogeneous and heterogeneous multi - core architectures are presented. a brief analytical fault tolerance model for modern intel - and amd - based homogeneous multi - core architectures is also presented, to enhance the readers ' understanding of the architecture with respect to performance degradation, memory access time and execution time.
|
arxiv:2112.14952
|
deep learning has gained immense popularity in the earth sciences as it enables us to formulate purely data - driven models of complex earth system processes. deep learning - based weather prediction ( dlwp ) models have made significant progress in the last few years, achieving forecast skills comparable to established numerical weather prediction models with comparatively lesser computational costs. in order to train accurate, reliable, and tractable dlwp models with several millions of parameters, the model design needs to incorporate suitable inductive biases that encode structural assumptions about the data and the modelled processes. when chosen appropriately, these biases enable faster learning and better generalisation to unseen data. although inductive biases play a crucial role in successful dlwp models, they are often not stated explicitly and their contribution to model performance remains unclear. here, we review and analyse the inductive biases of state - of - the - art dlwp models with respect to five key design elements : data selection, learning objective, loss function, architecture, and optimisation method. we identify the most important inductive biases and highlight potential avenues towards more efficient and probabilistic dlwp models.
|
arxiv:2304.04664
|
the efficiency of the future devices for quantum information processing will be limited mostly by the finite decoherence rates of the individual qubits and quantum gates. recently, substantial progress was achieved in enhancing the time within which a solid - state qubit demonstrates coherent dynamics. this progress is based mostly on a successful isolation of the qubits from external decoherence sources obtained by clever engineering. under these conditions, the material - inherent sources of noise start to play a crucial role. in most cases, quantum devices are affected by noise decreasing with frequency, f, approximately as 1 / f. according to the present point of view, such noise is due to material - and device - specific microscopic degrees of freedom interacting with quantum variables of the nanodevice. the simplest picture is that the environment that destroys the phase coherence of the device can be thought of as a system of two - state fluctuators, which experience random hops between their states. if the hopping times are distributed in a exponentially broad domain, the resulting fluctuations have a spectrum close to 1 / f in a large frequency range. in this paper we review the current state of the theory of decoherence due to degrees of freedom producing 1 / f noise. we discuss basic mechanisms of such noises in various nanodevices and then review several models describing the interaction of the noise sources with quantum devices. the main focus of the review is to analyze how the 1 / f noise destroys their coherent operation. we start from individual qubits concentrating mostly on the devices based on superconductor circuits, and then discuss some special issues related to more complicated architectures. finally, we consider several strategies for minimizing the noise - induced decoherence.
|
arxiv:1304.7925
|
dense 3d shape acquisition of swimming humans or live fish is an important research topic for sports, biological science and so on. for this purpose, an active stereo sensor is usually used in the air ; however, it cannot be applied to the underwater environment because of refraction, strong light attenuation and severe interference of bubbles. passive stereo is a simple solution for capturing dynamic scenes in an underwater environment ; however, shapes with textureless surfaces or irregular reflections cannot be recovered. recently, a stereo camera pair with a pattern projector for adding artificial textures on the objects was proposed. however, to use the system in an underwater environment, several problems should be compensated for, i. e., disturbance by fluctuation and bubbles. a simple solution is to use a convolutional neural network for stereo to cancel the effects of bubbles and / or water fluctuation. since it is not easy to train a cnn with a small database with large variation, we develop a special bubble generation device to efficiently create a real bubble database of multiple sizes and densities. in addition, we propose a transfer learning technique for a multi - scale cnn to effectively remove bubbles and projected patterns on the object. further, we develop a real system and actually capture a live swimming human, which has not been done before. experiments are conducted to show the effectiveness of our method compared with the state of the art techniques.
|
arxiv:1811.09675
|
collisions of identical nuclei at finite impact parameter have an unequal number of participating nucleons from each nucleus due to fluctuations. the event - by - event fluctuations have been estimated by measuring the difference of energy in the zero - degree calorimeters on either side of interaction vertex. the fluctuations affect the global variables such as the rapidity distributions, and the effect has been correlated with a measure of these fluctuations.
|
arxiv:1512.08177
|
a tower in venice cannot be substantiated ; since he was around 65 years old at the time. mariner ' s astrolabe the earliest recorded uses of the astrolabe for navigational purposes are by the portuguese explorers diogo de azambuja ( 1481 ), bartholomew diaz ( 1487 / 88 ) and vasco da gama ( 1497 – 98 ) during their sea voyages around africa. dry dock while dry docks were already known in hellenistic shipbuilding, these facilities were reintroduced in 1495 / 96, when henry vii of england ordered one to be built at the portsmouth navy base. = = = 16th century = = = floating dock the earliest known description of a floating dock comes from a small italian book printed in venice in 1560, titled descrittione dell ' artifitiosa machina. in the booklet, an unknown author asks for the privilege of using a new method for the salvaging of a grounded ship and then proceeds to describe and illustrate his approach. the included woodcut shows a ship flanked by two large floating trestles, forming a roof above the vessel. the ship is pulled in an upright position by a number of ropes attached to the superstructure. lifting tower a lifting tower was used to great effect by domenico fontana to relocate the monolithic vatican obelisk in rome. its weight of 361 t was far greater than any of the blocks the romans are known to have lifted by cranes. mining, machinery and chemistry a standard reference for the state of mechanical arts during the renaissance is given in the mining engineering treatise de re metallica ( 1556 ), which also contains sections on geology, mining and chemistry. de re metallica was the standard chemistry reference for the next 180 years. = = = early 17th century = = = newspaper the newspaper is an application of the printing press from which the press derives its name. the 16th century sees a rising demand for up - to - date information which can not be covered effectively by the circulating hand - written newssheets. 
for " gaining time " from the slow copying process, johann carolus of strassburg is the first to publish his german - language relation by using a printing press ( 1605 ). in rapid succession, further german newspapers are established in wolfenbuttel ( avisa relation oder zeitung ), basel, frankfurt and berlin. from 1618 onwards, enterprising dutch printers take up the practice and begin to provide the english and french market with translated news. by the mid - 17th century it is estimated
|
https://en.wikipedia.org/wiki/Renaissance_technology
|
the relaxation dynamics of correlated electron transport ( et ) along molecular chains is studied based on a substantially improved numerically exact path integral monte carlo ( pimc ) approach. as archetypical model we consider a hubbard chain containing two interacting electrons coupled to a bosonic bath. for this generalization of the ubiquitous spin - boson model, the intricate interdependence of correlations and dissipation leads to non - boltzmann thermal equilibrium distributions for many - body states. by mapping the multi - particle dynamics onto an isomorphic single particle motion this phenomenon is shown to be sensitive to the particle statistics and due to its robustness allows for new control schemes in designed quantum aggregates.
|
arxiv:cond-mat/0508098
|
in this paper, we will try to present a general formalism for the construction of { \ it deformed photon - added nonlinear coherent states } ( dpancss ) $ | \ alpha, f, m > $, which in special case lead to the well - known photon - added coherent state ( pacs ) $ | \ alpha, m > $. some algebraic structures of the introduced dpancss are studied and particularly the resolution of the identity, as the most important property of generalized coherent states, is investigated. meanwhile, it will be demonstrated that, the introduced states can also be classified in the $ f $ - deformed coherent states, with a special nonlinearity function. next, we will show that, these states can be produced through a simple theoretical scheme. a discussion on the dpancss with negative values of $ m $, i. e., $ | \ alpha, f, - m > $, is then presented. our approach, has the potentiality to be used for the construction of a variety of new classes of dpancss, corresponding to any nonlinear oscillator with known nonlinearity function, as well as arbitrary solvable quantum system with known discrete, nondegenerate spectrum. finally, after applying the formalism to a particular physical system known as P\"oschl-Teller ( P-T ) potential and the nonlinear coherent states corresponding to a specific nonlinearity function $ f ( n ) = \ sqrt n $, some of the nonclassical properties such as mandel parameter, second order correlation function, in addition to first and second - order squeezing of the corresponding states will be investigated, numerically.
|
arxiv:1108.5503
|
we propose a novel approach for optimizing the graph ratio - cut by modeling the binary assignments as random variables. we provide an upper bound on the expected ratio - cut, as well as an unbiased estimate of its gradient, to learn the parameters of the assignment variables in an online setting. the clustering resulting from our probabilistic approach ( prcut ) outperforms the rayleigh quotient relaxation of the combinatorial problem, its online learning extensions, and several widely used methods. we demonstrate that the prcut clustering closely aligns with the similarity measure and can perform as well as a supervised classifier when label - based similarities are provided. this novel approach can leverage out - of - the - box self - supervised representations to achieve competitive performance and serve as an evaluation method for the quality of these representations.
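The random-assignment view of the ratio cut can be sketched as follows. The Monte Carlo estimator here is an illustrative stand-in; the paper instead derives an analytic upper bound on the expected ratio-cut and an unbiased gradient estimate for online learning.

```python
import random

def ratio_cut(adj, labels):
    # Two-way ratio cut: cut(A, B) * (1/|A| + 1/|B|) for a 0/1 labelling.
    n = len(labels)
    cut = sum(1 for u in range(n) for v in adj[u]
              if u < v and labels[u] != labels[v])
    a = sum(labels)
    b = n - a
    if a == 0 or b == 0:
        return float("inf")  # degenerate one-sided partition
    return cut * (1.0 / a + 1.0 / b)

def expected_ratio_cut(adj, probs, samples=2000, seed=0):
    # Monte Carlo estimate of E[ratio-cut] when node i joins cluster 1
    # independently with probability probs[i].
    rng = random.Random(seed)
    vals = []
    for _ in range(samples):
        labels = [1 if rng.random() < p else 0 for p in probs]
        v = ratio_cut(adj, labels)
        if v != float("inf"):
            vals.append(v)
    return sum(vals) / len(vals)

# Two triangles joined by a single edge: the natural split cuts one edge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(ratio_cut(adj, [0, 0, 0, 1, 1, 1]))  # → 0.666... (1 edge, |A| = |B| = 3)
```

Relaxing the hard labels to probabilities is what makes the objective amenable to gradient-based, online optimisation.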
|
arxiv:2502.03405
|
currently, there is a growing interest in studying the coherent interaction between magnetic systems and electromagnetic radiation in a cavity, prompted partly by possible applications in hybrid quantum systems. we propose a multimode cavity optomagnonic system based on antiferromagnetic insulators, where optical photons couple coherently to the two homogeneous magnon modes of the antiferromagnet. these have frequencies typically in the thz range, a regime so far mostly unexplored in the realm of coherent interactions, and which makes antiferromagnets attractive for quantum transduction from thz to optical frequencies. we derive the theoretical model for the coupled system, and show that it presents unique characteristics. in particular, if the antiferromagnet presents hard - axis magnetic anisotropy, the optomagnonic coupling can be tuned by a magnetic field applied along the easy axis. this allows to bring a selected magnon mode into and out of a dark mode, providing an alternative for a quantum memory protocol. the dynamical features of the driven system present unusual behavior due to optically induced magnon - magnon interactions, including regions of magnon heating for a red detuned driving laser. the multimode character of the system is evident in a substructure of the optomagnonically induced transparency window.
|
arxiv:1908.06110
|
let $ p \ geq 3 $ be a prime. let $ e / \ mathbb { q } $ and $ e ' / \ mathbb { q } $ be elliptic curves with isomorphic $ p $ - torsion modules $ e [ p ] $ and $ e ' [ p ] $. assume further that either ( i ) every $ g _ \ mathbb { q } $ - modules isomorphism $ \ phi : e [ p ] \ to e ' [ p ] $ admits a multiple $ \ lambda \ cdot \ phi $ with $ \ lambda \ in \ mathbb { f } _ p ^ \ times $ preserving the weil pairing ; or ( ii ) no $ g _ \ mathbb { q } $ - isomorphism $ \ phi : e [ p ] \ to e ' [ p ] $ preserves the weil pairing. this paper considers the problem of deciding if we are in case ( i ) or ( ii ). our approach is to consider the problem locally at a prime $ \ ell \ neq p $. firstly, we determine the primes $ \ ell $ for which the local curves $ e / \ mathbb { q } _ \ ell $ and $ e ' / \ mathbb { q } _ \ ell $ contain enough information to decide between ( i ) or ( ii ). secondly, we establish a collection of criteria, in terms of the standard invariants associated to minimal weierstrass models of $ e / \ mathbb { q } _ \ ell $ and $ e ' / \ mathbb { q } _ \ ell $, to decide between ( i ) and ( ii ). we show that our results give a complete solution to the problem by local methods away from $ p $. we apply our methods to show the non - existence of rational points on certain hyperelliptic curves of the form $ y ^ 2 = x ^ p - \ ell $ and $ y ^ 2 = x ^ p - 2 \ ell $ where $ \ ell $ is a prime ; we also give incremental results on the fermat equation $ x ^ 2 + y ^ 3 = z ^ p $. as a different application, we discuss variants of a question raised by mazur concerning the existence of symplectic isomorphisms between the $ p $ - torsion of two non - isogenous elliptic curves defined over $ \ mathbb { q } $.
|
arxiv:1607.01218
|
we demonstrate optical performance monitoring of in - band optical signal to noise ratio ( osnr ) and residual dispersion, at bit rates of 40gb / s, 160gb / s and 640gb / s, using slow - light enhanced optical third harmonic generation ( thg ) in a compact ( 80 micron ) dispersion engineered 2d silicon photonic crystal waveguide. we show that there is no intrinsic degradation in the enhancement of the signal processing at 640 gb / s relative to that at 40gb / s, and that this device should operate well above 1tb / s. this work represents a record 16 - fold increase in processing speed for a silicon device, and opens the door for slow light to play a key role in ultra - high bandwidth telecommunications systems.
|
arxiv:1505.03224
|
this paper presents an algorithm to automatically design two - level fat - tree networks, such as those widely used in large - scale data centres and cluster supercomputers. the two levels may each use a different type of switches from a design database to achieve an optimal network structure. links between layers can run in bundles to simplify cabling. several sample network designs are examined and their technical and economic characteristics are discussed. the characteristic feature of our approach is that real life equipment prices and values of technical characteristics are used. this allows us to select an optimal combination of hardware to build the network ( including semi - populated configurations of modular switches ) and accurately estimate the cost of this network. we also show how technical characteristics of the network can be derived from its per - port metrics and suggest heuristics for equipment placement. the algorithm is useful as a part of a bigger design procedure that selects the optimal hardware of a cluster supercomputer as a whole. therefore the article is focused on the use of fat - trees for high - performance computing, although the results are valid for any type of data centres.
|
arxiv:1301.6179
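a minimal sketch of the sizing arithmetic behind such a design procedure; the switch radices and prices below are hypothetical inputs for illustration, not values from the paper's design database, and the no-bundling, full-bisection layout is one simplifying assumption among the configurations the algorithm would explore:

```python
def two_level_fat_tree(radix_edge, radix_core, price_edge, price_core):
    """Size a full-bisection two-level fat-tree (simplified sketch).

    Each edge switch devotes half its ports to hosts and half to
    uplinks; with one link (no bundling) from every edge switch to
    every core switch, the core radix bounds the edge-switch count.
    """
    down = radix_edge // 2            # host-facing ports per edge switch
    n_core = radix_edge - down        # one core switch per uplink port
    n_edge = radix_core               # each core port serves one edge switch
    hosts = n_edge * down
    cost = n_edge * price_edge + n_core * price_core
    return {"hosts": hosts, "edge": n_edge, "core": n_core, "cost": cost}
```

for example, 48-port edge switches with 36-port core switches give 36 edge and 24 core switches and 864 host ports; dividing total cost by hosts yields the per-port metric the paper optimizes over.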
|
we introduce an elementary argument to bound the $ \ textrm { bmo } $ seminorm of fourier series with gaps giving in particular a sufficient condition for them to be in this space. using finer techniques we carry out a detailed study of the series $ \ sum n ^ { - 1 } e ^ { 2 \ pi i n ^ 2 x } $ providing some insight into how much this $ \ text { bmo } $ fourier series differs from defining an $ l ^ \ infty $ function.
|
arxiv:1812.08747
|
there is an increasing need for silicon - compatible high bandwidth extended - short wave infrared ( e - swir ) photodetectors ( pds ) to implement cost - effective and scalable optoelectronic devices. these systems are essential to address several technological bottlenecks in detection and ranging, surveillance, ultrafast spectroscopy, and imaging. in fact, current e - swir high bandwidth pds are predominantly made of iii - v compound semiconductors and thus are costly and suffer from limited integration on silicon, besides a low responsivity at wavelengths exceeding $ 2. 3 \, \ mu $ m. to circumvent these challenges, ge $ _ { 1 - x } $ sn $ _ { x } $ semiconductors have been proposed as building blocks for silicon - integrated high - speed e - swir devices. herein, this study demonstrates vertical all - gesn pin pds consisting of p - ge $ _ { 0. 92 } $ sn $ _ { 0. 08 } $ / i - ge $ _ { 0. 91 } $ sn $ _ { 0. 09 } $ / n - ge $ _ { 0. 89 } $ sn $ _ { 0. 11 } $ and p - ge $ _ { 0. 91 } $ sn $ _ { 0. 09 } $ / i - ge $ _ { 0. 88 } $ sn $ _ { 0. 12 } $ / n - ge $ _ { 0. 87 } $ sn $ _ { 0. 13 } $ heterostructures grown on silicon following a step - graded temperature - controlled epitaxy protocol. the performance of these pds was investigated as a function of the device diameter in the $ 10 - 30 \, \ mu $ m range. the developed pd devices yield a high bandwidth of 12. 4 ghz at a bias of 5 v for a device diameter of $ 10 \, \ mu $ m. moreover, these devices show a high responsivity of 0. 24 a / w, a low noise, and a $ 2. 8 \, \ mu $ m cutoff wavelength, thus covering the whole e - swir range.
|
arxiv:2401.02629
|
spectral clustering has become a popular choice for data clustering owing to its ability to uncover clusters of different shapes. however, it is not always preferred over other clustering methods due to its computational demands. one of the effective ways to bypass these computational demands is to perform spectral clustering on a subset of points ( data representatives ) and then generalize the clustering outcome ; this is known as approximate spectral clustering ( asc ). asc uses sampling or quantization to select data representatives. this makes it vulnerable to 1 ) performance inconsistency ( since these methods have a random step either in initialization or training ), and 2 ) local statistics loss ( because the pairwise similarities are extracted from data representatives instead of data points ). we propose a refined version of the $ k $ - nearest neighbor graph, in which we keep all data points and aggressively reduce the number of edges for computational efficiency. local statistics are exploited to keep the edges that do not violate the intra - cluster distances and to nullify all other edges in the $ k $ - nearest neighbor graph. we also introduce an optional step to automatically select the number of clusters $ c $. the proposed method was tested on synthetic and real datasets. compared to asc methods, the proposed method delivered a consistent performance despite the significant reduction of edges.
|
arxiv:2302.11296
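a sketch of the edge-filtering idea, assuming a simple local scale (mean plus one standard deviation of each point's k-NN distances) as the intra-cluster distance proxy; the paper's actual criterion may differ:

```python
import numpy as np

def refined_knn_graph(X, k=5, alpha=1.0):
    """Build a k-NN affinity graph, then nullify edges longer than each
    point's local distance scale (mean + alpha * std of its k-NN
    distances) -- a stand-in for the paper's intra-cluster edge filter."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    idx = np.argsort(D, axis=1)[:, :k]          # k nearest neighbours
    knn_d = np.take_along_axis(D, idx, axis=1)  # their distances
    scale = knn_d.mean(axis=1) + alpha * knn_d.std(axis=1)
    A = np.zeros((n, n))
    for i in range(n):
        for j in idx[i]:
            if D[i, j] <= scale[i]:             # keep only "local" edges
                A[i, j] = A[j, i] = np.exp(-D[i, j] ** 2)
    return A
```

on two well-separated clusters this keeps the tight intra-cluster edges and leaves no cross-cluster edge, so the usual spectral step can then be run on the sparse affinity matrix over all points.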
|
exclusive two - photon reactions such as compton scattering at large angles, deeply virtual compton scattering, and hadron production in photon - photon collisions provide important tests of qcd at the amplitude level, particularly as measures of hadron distribution amplitudes and skewed parton distributions.
|
arxiv:hep-ph/0010176
|
a fan - theobald - von neumann system is a triple $ ( v, w, \ lambda ) $, where $ v $ and $ w $ are real inner product spaces and $ \ lambda : v \ to w $ is a norm - preserving map satisfying a fan - theobald - von neumann type inequality together with a condition for equality. examples include euclidean jordan algebras, systems induced by certain hyperbolic polynomials, and normal decomposition systems ( eaton triples ). the present article is a continuation of an earlier paper, where the concepts of commutativity, automorphisms, majorization, and reduction were introduced and elaborated. here, we describe some transfer principles and present fenchel conjugate and subdifferential formulas.
|
arxiv:2307.08478
|
we investigate the interplay between gravity and the quantum coherence present in the state of a pulse of light propagating in curved spacetime. we first introduce an operational way to distinguish between the overall shift in the pulse wavepacket and its genuine deformation after propagation. we then apply our technique to quantum states of photons that are coherent in the frequency degree of freedom, as well as to states of completely incoherent light. we focus on gaussian profiles and frequency combs and find that the quantum coherence initially present can enhance the deformation induced by propagation in a curved background. these results further support the claim that genuine quantum features, such as quantum coherence, can be used to probe the gravitational properties of physical systems. we specialize our techniques to earth - to - satellite communication setups, where the effects of gravity are weak but can be tested with current satellite technologies.
|
arxiv:2106.12424
|
by imposing system - observer symmetry on the von neumann description of measurement, it is shown that the quantum measurement problem is structurally equivalent to a familiar reverse - engineering problem : that of describing the behavior of an arbitrary physical device as algorithm instantiation. it is suggested that this problem can at best be given a relational solution.
|
arxiv:1308.1383
|
we establish one - to - one correspondences between ( i ) binary even lcd codes and certain simple graphs, and ( ii ) ternary lcd codes and certain two - graphs.
|
arxiv:2407.07689
|
in this work, an efficient and robust isogeometric three - dimensional solid - beam finite element is developed for large deformations and finite rotations with merely displacements as degrees of freedom. the finite strain theory and hyperelastic constitutive models are considered and b - spline and nurbs are employed for the finite element discretization. similar to finite elements based on lagrange polynomials, nurbs - based formulations are also affected by the non - physical phenomenon of locking, which constrains the field variables and negatively impacts the solution accuracy and deteriorates convergence behavior. to avoid this problem within the context of a solid - beam formulation, the assumed natural strain ( ans ) method is applied to alleviate membrane and transversal shear locking and the enhanced assumed strain ( eas ) method against poisson thickness locking. furthermore, the mixed integration point ( mip ) method is employed to make the formulation more efficient and robust. the proposed novel isogeometric solid - beam element is tested on several single - patch and multi - patch benchmark problems, and it is validated against classical solid finite elements and isoparametric solid - beam elements. the results show that the proposed formulation can alleviate the locking effects and significantly improve the performance of the isogeometric solid - beam element. with the developed element, efficient and accurate predictions of mechanical properties of lattice - based structured materials can be achieved. the proposed solid - beam element inherits both the merits of solid elements, e. g. flexible boundary conditions, and of beam elements, i. e. higher computational efficiency.
|
arxiv:2312.07124
|
given a morphism between smooth projective varieties $ f : w \ to x $, we study whether $ f $ - relatively free rational curves imply the existence of $ f $ - relatively very free rational curves. the answer is shown to be positive when the fibers of the map $ f $ have picard number 1 and a further smoothness assumption is imposed. the main application is when $ x \ subset \ mathbb { p } ^ n $ is a smooth complete intersection of type $ ( d _ 1,..., d _ c ) $ and $ \ sum d _ i ^ 2 \ leq n $. in this case, we take $ w $ to be the space of pointed lines contained in $ x $ and the positive answer to the question implies that $ x $ contains very twisting ruled surfaces and is strongly rationally simply connected. if the fibers of a smooth family of varieties over a 2 - dimensional base satisfy these conditions and the brauer obstruction vanishes, then the family has a rational section ( see \ cite { djhs } ).
|
arxiv:1005.1250
|
the intrinsic magnetic moment ( imm ) and intrinsic angular momentum ( iam ) of a chiral superconductor with $ p $ - wave symmetry on a two - dimensional square lattice are discussed on the basis of the bogoliubov - de gennes equation. the imm and iam are shown to be on the order of $ \ mu _ { \ rm b } n $ and $ \ hbar n $, respectively, $ n $ being the total number of particles, without an extra factor $ ( t _ { \ rm c } / t _ { \ rm f } ) ^ { \ gamma } $ ( $ \ gamma = 1, 2 $ ), and parallel to the pair angular momentum. they arise from the current in the surface layer with a width on the order of the coherence length $ \ xi _ { 0 } $, the size of cooper pairs. however, in a single - band model, they are considerably canceled by the contribution from the meissner surface current in a layer with the width of the penetration depth $ \ lambda $, making it difficult to observe them experimentally. in the case of multi - band metals with both electron - like and hole - like bands, however, considerable cancellation still occurs for the imm but not for the iam, making it possible to observe the iam selectively because the effect of the meissner current becomes less important. as an example of a multi - band metal, the case of the spin - triplet chiral superconductor sr $ _ 2 $ ruo $ _ 4 $ is discussed and experiments for observing the iam are proposed.
|
arxiv:1508.06702
|
in a recent paper ( arxiv : 2301. 09744 ), erickson and hunziker consider partitions in which the arm - leg difference is an arbitrary constant $ m $. in previous works, these partitions are called $ ( - m ) $ - asymmetric partitions. regarding these partitions and their conjugates as the highest weights, they prove an identity yielding an infinite family of dimension equalities between $ \ mathfrak { gl } _ n $ and $ \ mathfrak { gl } _ { n + m } $ modules. their proof proceeds by the manipulations of the hook content formula. we give a simple bijective proof of their result.
|
arxiv:2305.07806
|
a significant part of the united nations world heritage sites ( whss ) is located in developing countries. these sites attract an increasing number of tourists and increasing income to these countries. unfortunately, many of these whss are in a poor condition due to climatic and environmental impacts, war and tourism pressure, requiring urgent restoration and preservation ( tuan & navrud, 2007 ). in this study, we characterise shiraz city residents ' ( visitors and non - visitors ) willingness to invest in the management of the heritage sites through models for the preservation of heritage and development of tourism as a local resource. the research looks at different categories of heritage sites within shiraz city, iran. the measurement instrument is a stated preference referendum task administered state - wide to a sample of 489 respondents, with the payment mechanism defined as a purpose - specific incremental levy of a fixed amount over a set period of years. a latent class binary logit model, using parametric constraints, is used innovatively to deal with any strategic voting such as yea - sayers and nay - sayers, as well as revealing the latent heterogeneity among sample members. results indicate that almost 14 % of the sampled population is unwilling to be levied any amount ( nay - sayers ) to preserve any heritage sites. not recognizing the presence of nay - sayers in the data, or recognizing them but eliminating them from the estimation, will result in biased willingness to pay ( wtp ) results and, consequently, biased policy propositions by authorities. moreover, it is found that the type of heritage site is a driver of wtp. the results from this study provide insights into the wtp of heritage site visitors and non - visitors with respect to avoiding the impacts of future erosion and destruction and contributing to heritage management and maintenance policies.
|
arxiv:1902.02418
|
the eigenvalues of the matrix structure $ x + x ^ { ( 0 ) } $, where $ x $ is a random gaussian hermitian matrix and $ x ^ { ( 0 ) } $ is non - random or random independent of $ x $, are closely related to dyson brownian motion. previous works have shown how an infinite hierarchy of equations satisfied by the dynamical correlations becomes triangular in the infinite density limit, and gives rise to the complex burgers equation for the green ' s function of the corresponding one - point density function. we show how this and analogous partial differential equations, for chiral, circular and jacobi versions of dyson brownian motion, follow from a macroscopic hydrodynamical description involving the current density and continuity equation. the method of characteristics gives a systematic approach to solving the pdes, and in the chiral case we show how this efficiently reclaims the characterisation of the global eigenvalue density for non - central wishart matrices due to dozier and silverstein. collective variables provide another approach to deriving the complex burgers equation in the gaussian case, and we show that this approach applies equally as well to chiral matrices. we relate both the gaussian and chiral cases to the asymptotics of matrix integrals.
|
arxiv:1507.07274
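as a hedged sketch of the hydrodynamic route mentioned above (standard conventions; signs and normalisations vary between authors), the eigenvalue density $\rho(x,t)$ obeys a continuity equation whose Stieltjes transform closes into the complex Burgers equation:

```latex
% continuity equation, with velocity field from eigenvalue repulsion
\frac{\partial \rho}{\partial t}
  + \frac{\partial}{\partial x}\bigl(\rho\, v\bigr) = 0,
\qquad
v(x,t) = \mathrm{P.V.}\!\int \frac{\rho(y,t)}{x-y}\,\mathrm{d}y .
% the Green's function (Stieltjes transform of the density)
g(z,t) = \int \frac{\rho(x,t)}{z-x}\,\mathrm{d}x
% then satisfies the inviscid complex Burgers equation
\frac{\partial g}{\partial t} + g\,\frac{\partial g}{\partial z} = 0,
% solved implicitly by the method of characteristics:
g(z,t) = g_0(\zeta), \qquad z = \zeta + t\, g_0(\zeta).
```

the implicit characteristic solution is the systematic device referred to in the abstract: given the initial Green's function $g_0$, the density at time $t$ follows from the boundary values of $g$ on the real axis.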
|
effects of anisotropic gap structures on a diamagnetic response are investigated in order to demonstrate that the field - angle - resolved magnetization ( $ m _ l ( \ chi ) $ ) measurement can be used as a spectroscopic method to detect gap structures. our microscopic calculation based on the quasiclassical eilenberger formalism reveals that $ m _ l ( \ chi ) $ in a superconductor with four - fold gap displays a four - fold oscillation reflecting the gap and fermi surface anisotropies, and the sign of this oscillation changes at a field between $ h _ { c1 } $ and $ h _ { c2 } $. as a prototype of unconventional superconductors, magnetization data for borocarbides are also discussed.
|
arxiv:cond-mat/0411290
|
this is the theory summary of the strangeness in quark matter 2019 conference. results include the state - of - the - art updates to the quantum chromodynamics ( qcd ) phase diagram, with contributions both from heavy - ion collisions and nuclear astrophysics, studies on the qcd freeze - out lines, and several aspects regarding small systems, including collectivity, heavy flavor dynamics, strangeness, and hard probes.
|
arxiv:1911.01328
|
ai algorithms that identify maneuvers from trajectory data could play an important role in improving flight safety and pilot training. ai challenges allow diverse teams to work together to solve hard problems and are an effective tool for developing ai solutions. ai challenges are also a key driver of ai computational requirements. the maneuver identification challenge hosted at maneuver - id. mit. edu provides thousands of trajectories collected from pilots practicing in flight simulators, descriptions of maneuvers, and examples of these maneuvers performed by experienced pilots. each trajectory consists of positions, velocities, and aircraft orientations normalized to a common coordinate system. construction of the data set required significant data architecture to transform flight simulator logs into ai ready data, which included using a supercomputer for deduplication and data conditioning. there are three proposed challenges. the first challenge is separating physically plausible ( good ) trajectories from unfeasible ( bad ) trajectories. human labeled good and bad trajectories are provided to aid in this task. subsequent challenges are to label trajectories with their intended maneuvers and to assess the quality of those maneuvers.
|
arxiv:2108.11503
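a toy illustration of the first challenge's good/bad separation; the plausibility test below, comparing logged velocities to finite-difference velocities, is an assumed criterion for illustration, not the challenge's official one:

```python
def plausible(positions, velocities, dt, tol=1.0):
    """Flag a trajectory as physically plausible if the logged velocity
    agrees with the finite-difference velocity from consecutive
    positions on every axis, within tol (same units as velocity)."""
    for t in range(len(positions) - 1):
        for axis in range(3):
            finite_diff = (positions[t + 1][axis] - positions[t][axis]) / dt
            if abs(finite_diff - velocities[t][axis]) > tol:
                return False   # velocity log inconsistent with motion
    return True
```

a real filter would likely add checks on aircraft orientation and acceleration limits, but even this one-line physics constraint separates obviously corrupted simulator logs from feasible flight paths.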
|
we report on fors2 optical spectroscopy of the black hole x - ray binary v404 cygni, performed at the very beginning of its 2015 outburst decay, complemented by quasi - simultaneous $ swift $ x - ray and ultra - violet as well as rem near - infrared observations. its peculiar spectrum is dominated by a wealth of emission signatures of hi, hei, and higher ionisation species, in particular feii. the spectral features are divided between broad red - shifted and narrow stationary varieties, the latter being emitted in the outer regions. continuum and line variability at short time scale is high and we find baldwin effect - like anti - correlations between the full - widths at half - maximum and equivalent widths of the broad lines with their local continua. the balmer decrement h $ \ alpha $ / h $ \ beta $ is also abnormally large at $ 4. 61 \ pm 0. 62 $. we argue that these properties hint at the broad lines being optically thick and arising within a circumbinary component in which shocks between faster optically thick and slower optically thin regions may occur. we associate it with a nova - like nebula formed by the cooling remnant of strong accretion disc winds that turned off when the mass - accretion rate dropped following the last major flare. the feii lines likely arise from the overlap region between this nebula and the companion star winds, whereas we favour the shocks within the nebula as responsible for the optical continuum via self - absorbed optically thin bremsstrahlung. the presence of a near - infrared excess also points towards the contribution of a strongly variable compact jet or a dusty component.
|
arxiv:1611.02278
|
the squeezed cross - bispectrum between the gravitational lensing in the cosmic microwave background and the 1d ly $ \ alpha $ forest power spectrum can constrain bias parameters and break degeneracies between $ \ sigma _ 8 $ and other cosmological parameters. we detect this cross - bispectrum with $ 4. 8 \ sigma $ significance at an effective redshift $ z _ \ mathrm { eff } = 2. 4 $ using the planck pr3 lensing map and over 280, 000 quasar spectra from the dark energy spectroscopic instrument ' s first - year data. we test our measurement against metal contamination and foregrounds such as galactic extinction and clusters of galaxies by deprojecting the thermal sunyaev - zeldovich effect. we compare our results to a tree - level perturbation theory calculation and find reasonable agreement between the model and measurement.
|
arxiv:2405.14988
|
video generation, by leveraging a dynamic visual generation method, pushes the boundaries of artificial intelligence generated content ( aigc ). video generation presents unique challenges beyond static image generation, requiring both high - quality individual frames and temporal coherence to maintain consistency across the spatiotemporal sequence. recent works have aimed at addressing the spatiotemporal consistency issue in video generation, while few literature reviews have been organized from this perspective. this gap hinders a deeper understanding of the underlying mechanisms for high - quality video generation. in this survey, we systematically review the recent advances in video generation, covering five key aspects : foundation models, information representations, generation schemes, post - processing techniques, and evaluation metrics. we particularly focus on their contributions to maintaining spatiotemporal consistency. finally, we discuss the future directions and challenges in this field, hoping to inspire further efforts to advance the development of video generation.
|
arxiv:2502.17863
|
forecasts by the european centre for medium - range weather forecasts ( ecmwf ; ec for short ) can provide a basis for the establishment of maritime - disaster warning systems, but they contain some systematic biases. the fifth - generation ec atmospheric reanalysis ( era5 ) data have high accuracy, but are delayed by about 5 days. to overcome this issue, a spatiotemporal deep - learning method could be used for nonlinear mapping between ec and era5 data, which would improve the quality of ec wind forecast data in real time. in this study, we developed the multi - task - double encoder trajectory gated recurrent unit ( mt - detrajgru ) model, which uses an improved double - encoder forecaster architecture to model the spatiotemporal sequence of the u and v components of the wind field ; we designed a multi - task learning loss function to correct wind speed and wind direction simultaneously using only one model. the study area was the western north pacific ( wnp ), and real - time rolling bias corrections were made for 10 - day wind - field forecasts released by the ec between december 2020 and november 2021, divided into four seasons. compared with the original ec forecasts, after correction using the mt - detrajgru model the wind speed and wind direction biases in the four seasons were reduced by 8 - 11 % and 9 - 14 %, respectively. in addition, the proposed method modelled the data uniformly under different weather conditions. the correction performance under normal and typhoon conditions was comparable, indicating that the data - driven mode constructed here is robust and generalizable.
|
arxiv:2212.14160
|
we consider continuous, translation - commuting transformations of compact, translation - invariant families of mappings from finitely generated groups into finite alphabets. it is well - known that such transformations and spaces can be described " locally " via families of patterns and finitary functions ; such descriptions can be re - used on groups larger than the original, usually defining non - isomorphic structures. we show how some of the properties of the " induced " entities can be deduced from those of the original ones, and vice versa ; then, we show how to " simulate " the smaller structure into the larger one, and obtain a characterization in terms of group actions for the dynamical systems admitting presentations via such structures. special attention is given to the class of sofic shifts.
|
arxiv:0711.3841
|
entanglement and information are powerful lenses to probe phase transitions in many - body systems. motivated by recent cold atom experiments, which are now able to measure the corresponding information - theoretic quantities, we study the mott transition in the half - filled two - dimensional hubbard model using cellular dynamical mean - field theory, and focus on two key measures of quantum correlations : entanglement entropy and mutual information. we show that they detect the first - order nature of the transition, the universality class of the endpoint, and the crossover emanating from the endpoint.
|
arxiv:1807.10408
|
natural beings undergo a morphological development process of their bodies while they are learning and adapting to the environments they face from infancy to adulthood. in fact, this is the period where the most important learning processes, those that will support learning as adults, will take place. however, in artificial systems, this interaction between morphological development and learning, and its possible advantages, have seldom been considered. in this line, this paper seeks to provide some insights into how morphological development can be harnessed in order to facilitate learning in embodied systems facing tasks or domains that are hard to learn. in particular, here we will concentrate on whether morphological development can really provide any advantage when learning complex tasks and whether its relevance towards learning increases as tasks become harder. to this end, we present the results of some initial experiments on the application of morphological development to learning to walk in three cases : that of a quadruped, that of a hexapod and that of an octopod. these results seem to confirm that as task learning difficulty increases the application of morphological development to learning becomes more advantageous.
|
arxiv:2003.05817
|
in this paper, we focus on the subgroups that control $ p $ - fusion, and we improve theorem b of [ 4 ] for odd primes. for an odd prime $ p $, we prove that elementary abelian subgroups of rank at least 2 can control $ p $ - fusion ( see our theorem b ).
|
arxiv:2412.11173
|
the problem of constructing a perfect cuboid is related to a certain class of univariate polynomials with three integer parameters $ a $, $ b $, and $ u $. their irreducibility over the ring of integers under certain restrictions for $ a $, $ b $, and $ u $ would mean the non - existence of perfect cuboids. this irreducibility is conjectured and then verified numerically for approximately 10000 instances of $ a $, $ b $, and $ u $.
|
arxiv:1108.5348
|
we have numerically investigated the long term dynamical behavior of known centaurs. this class of objects is thought to constitute the transitional population between the kuiper belt and the jupiter - family comets ( jfcs ). in our study, we find that over their dynamical lifetimes, these objects diffuse into the jfcs and other sinks, and also make excursions into the scattered disk, but ( not surprisingly ) do not diffuse into the parameter space representing the main kuiper belt. these centaurs spend most of their dynamical lifetimes in orbits of eccentricity 0. 2 - to - 0. 6 and perihelion distance 12 - to - 30 au. their orbital evolution is characterized by frequent close encounters with the giant planets. most of these centaurs will escape from the solar system ( or enter the oort cloud ), while a fraction will enter the jfc population and a few percent will impact a giant planet. their median dynamical lifetime is 9 myr, although there is a wide dispersion in lifetimes, ranging from less than 1 myr to more than 100 myr. we find the dynamical evolution of this sample of centaurs to be less orderly than the planet - to - planet " hand - off " described in previous investigations. we discuss the implications of our study for the spatial distribution of the centaurs as a whole.
|
arxiv:astro-ph/0211076
|
we use the method of free energy minimization based on the first law of thermodynamics to derive static meniscus shapes for crystal ribbon growth systems. to account for the possibility of multivalued curves as solutions to the minimization problem, we choose a parametric representation of the meniscus geometry. using weierstrass ' form of the euler - lagrange equation we derive analytical solutions that provide explicit knowledge on the behaviour of the meniscus shapes. young ' s contact angle and gibbs pinning conditions are also analyzed and are shown to be a consequence of the energy minimization problem with variable end - points. for a given ribbon growth configuration, we find that there can exist multiple static menisci that satisfy the boundary conditions. the stability of these solutions is analyzed using second order variations and they are found to exhibit saddle - node bifurcations. we show that the arc length is a natural representation of a meniscus geometry and provides the complete solution space, not accessible through the classical variational formulation. we provide a range of operating conditions for hydrostatically feasible menisci and illustrate the transition from a stable to a spill - over configuration using a simple proof of concept experiment.
|
arxiv:2005.08020
|
towards the goal of quantum computing for lattice quantum chromodynamics, we present a loop - string - hadron ( lsh ) framework in 1 + 1 dimensions for describing the dynamics of su ( 3 ) gauge fields coupled to staggered fermions. this novel framework was previously developed for an su ( 2 ) lattice gauge theory in $ d \ leq 3 $ spatial dimensions and its advantages for classical and quantum algorithms have thus far been demonstrated in $ d = 1 $. the lsh approach uses gauge invariant degrees of freedom such as loop segments, string ends, and on - site hadrons, it is free of all nonabelian gauge redundancy, and it is described by a hamiltonian containing only local interactions. in this work, the su ( 3 ) lsh framework is systematically derived from the reformulation of hamiltonian lattice gauge theory in terms of irreducible schwinger bosons, including the addition of staggered quarks. furthermore, the superselection rules governing the lsh dynamics are identified directly from the form of the hamiltonian. the su ( 3 ) lsh hamiltonian with open boundary conditions has been numerically confirmed to agree with the completely gauge - fixed hamiltonian, which contains long - range interactions and does not generalize to either periodic boundary conditions or to $ d > 1 $.
|
arxiv:2212.04490
|
we analyze features of the fe k spectrum of the high - mass x - ray binary cygnus x - 3. the spectrum was obtained with the chandra high energy transmission grating spectrometer in the third diffraction order. the increased energy resolution of the third order enables us to fully resolve the fe xxv he $ \ alpha $ complex and the fe xxvi ly $ \ alpha $ lines. the emission - line spectrum shows the expected features of photoionization equilibrium, excited in the dense stellar wind of the companion star. we detect discrete emission from inner - shell transitions, in addition to absorption likely due to multiple unresolved transitions in lower ionization states. the emission - line intensity ratios observed in the range of the spectrum occupied by the fe xxv n = 1 - 2 forbidden and intercombination lines suggest that there is a substantial contribution from resonantly scattered inner - shell emission from the li - and be - like ionization states. the fe xxv forbidden and intercombination lines arise in the ionization zone closest to the compact object, and since they are not subject to radiative - transfer effects, we can use them in principle to constrain the radial velocity amplitude of the compact object. we infer that the results indicate a compact object mass of the order of the mass of the wolf - rayet companion star, but we note that the presence of resonantly scattered radiation from li - like ions may complicate the interpretation of the he - like emission spectrum, and specifically of the radial velocity curve of the fe xxv forbidden line.
|
arxiv:2212.04165
|
several techniques for domain adaptation have been proposed to account for differences in the distribution of the data used for training and testing. the majority of this work focuses on a binary domain label. similar problems occur in a scientific context where there may be a continuous family of plausible data generation processes associated with the presence of systematic uncertainties. robust inference is possible if it is based on a pivot - - a quantity whose distribution does not depend on the unknown values of the nuisance parameters that parametrize this family of data generation processes. in this work, we introduce and derive theoretical results for a training procedure based on adversarial networks for enforcing the pivotal property ( or, equivalently, fairness with respect to continuous attributes ) on a predictive model. the method includes a hyperparameter to control the trade - off between accuracy and robustness. we demonstrate the effectiveness of this approach with a toy example and examples from particle physics.
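a minimal numpy sketch of the adversarial objective described above. all names and the toy binary losses are illustrative assumptions, not the paper's implementation : the classifier is trained to minimize its own loss minus $ \ lambda $ times the adversary's loss, while the adversary tries to recover the nuisance parameter from the classifier output.

```python
import numpy as np

def binary_xent(p, y):
    """Mean binary cross-entropy between predicted probabilities p and labels y."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

def pivotal_objective(clf_probs, y, adv_probs, z, lam):
    # The classifier minimizes this quantity; the adversary maximizes it by
    # predicting the (here binarized) nuisance z from the classifier output.
    # lam is the hyperparameter trading accuracy against robustness (pivotality).
    return binary_xent(clf_probs, y) - lam * binary_xent(adv_probs, z)
```

when the adversary predicts the nuisance exactly as well as the classifier predicts the label, the two terms cancel at $ \ lambda = 1 $, which is the regime in which the classifier output carries no more information about the nuisance than about the label.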
|
arxiv:1611.01046
|
we enumerate hurwitz orbits of shortest reflection factorizations of an arbitrary element in the infinite family $ g ( m, p, n ) $ of complex reflection groups. as a consequence, we characterize the elements for which the action is transitive and give a simple criterion to tell when two shortest reflection factorizations belong to the same hurwitz orbit. we also characterize the quasi - coxeter elements ( those with a shortest reflection factorization that generates the whole group ) in $ g ( m, p, n ) $.
|
arxiv:2105.08104
|
in the early 1960s, research progressed into the manufacture of nuclear weapons. with the unexpected deaths of then prime minister nehru in 1964 and bhabha in 1966, the programme slowed down. the incoming prime minister lal bahadur shastri appointed physicist vikram sarabhai as the head of the nuclear programme and the direction of the programme changed towards using nuclear energy for peaceful purposes rather than military development. = = = development of nuclear bomb and first test ( 1966 – 1972 ) = = = after shastri ' s death in 1966, indira gandhi became the prime minister and work on the nuclear programme resumed. the design work on the bomb proceeded under physicist raja ramanna, who continued the nuclear weapons technology research after bhabha ' s death in 1966. the project employed 75 scientists and progressed in secrecy. during the indo - pakistani war, the us government sent a carrier battle group into the bay of bengal in an attempt to intimidate india, which was aided by the soviet union ; the soviets responded by sending a submarine armed with nuclear missiles. the soviet response underlined the deterrent value and significance of nuclear weapons to india. after india gained military and political initiative over pakistan in the war, the work on building a nuclear device continued. the hardware began to be built in early 1972 and the prime minister authorised the development of a nuclear test device in september 1972. on 18 may 1974, india tested an implosion - type fission device at the indian army ' s pokhran test range under the code name smiling buddha. the test was described as a peaceful nuclear explosion ( pne ) and the yield was estimated to be between 6 and 10 kilotons. = = = aftermath of nuclear tests ( 1973 – 1988 ) = = = while india continued to state that the test was for peaceful purposes, it encountered opposition from many countries.
the nuclear suppliers group ( nsg ) was formed in reaction to the indian tests to check international nuclear proliferation. the technological embargo and sanctions affected the development of india ' s nuclear programme. it was crippled by the lack of indigenous resources and dependence on imported technology in certain areas. though india declared to the international atomic energy agency ( iaea ) that india ' s nuclear program was intended only for peaceful purposes, preliminary work on a fusion bomb was initiated. in the aftermath of the state emergency in 1975 that resulted in the collapse of the second indira gandhi ministry, the programme continued under m. r. srinivasan, but made slow progress. though the nuclear programme did not receive much attention from incoming prime minister mora
|
https://en.wikipedia.org/wiki/Pokhran-II
|
we consider an integral transform introduced by prabhakar, involving generalised multi - parameter mittag - leffler functions, which can be used to introduce and investigate several different models of fractional calculus. we derive a new series expression for this transform, in terms of classical riemann - liouville fractional integrals, and use it to obtain or verify series formulae in various specific cases corresponding to different fractional - calculus models. we demonstrate the power of our result by applying the series formula to derive analogues of the product and chain rules in more general fractional contexts. we also discuss how the prabhakar model can be used to explore the idea of fractional iteration in connection with semigroup properties.
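for concreteness, the standard definitions involved here are the three - parameter ( prabhakar ) mittag - leffler function and the associated integral operator ; expanding the kernel term by term gives a series of riemann - liouville integrals, which is the kind of series formula the abstract refers to. the notation below follows common usage and is stated as an assumption, not as the paper's exact conventions.

```latex
E_{\alpha,\beta}^{\gamma}(z) \;=\; \sum_{k=0}^{\infty} \frac{(\gamma)_k}{k!\,\Gamma(\alpha k+\beta)}\, z^k ,
\qquad
\bigl(\mathbf{E}^{\gamma}_{\alpha,\beta,\omega;a^+} f\bigr)(t)
\;=\; \int_a^t (t-s)^{\beta-1}\, E^{\gamma}_{\alpha,\beta}\!\bigl(\omega (t-s)^{\alpha}\bigr)\, f(s)\, ds ,
```

and, interchanging sum and integral,

```latex
\bigl(\mathbf{E}^{\gamma}_{\alpha,\beta,\omega;a^+} f\bigr)(t)
\;=\; \sum_{k=0}^{\infty} \frac{(\gamma)_k\,\omega^k}{k!}\,
\bigl(I^{\alpha k+\beta}_{a^+} f\bigr)(t) ,
```

where $ I^{\mu}_{a^+} $ denotes the riemann - liouville fractional integral of order $ \mu $.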
|
arxiv:1807.10101
|
storage - efficient privacy - preserving learning is crucial due to increasing amounts of sensitive user data required for modern learning tasks. we propose a framework for reducing the storage cost of user data while at the same time providing privacy guarantees, without essential loss in the utility of the data for learning. our method comprises noise injection followed by lossy compression. we show that, when appropriately matching the lossy compression to the distribution of the added noise, the compressed examples converge, in distribution, to that of the noise - free training data as the sample size of the training data ( or the dimension of the training data ) increases. in this sense, the utility of the data for learning is essentially maintained, while reducing storage and privacy leakage by quantifiable amounts. we present experimental results on the celeba dataset for gender classification and find that our suggested pipeline delivers in practice on the promise of the theory : the individuals in the images are unrecognizable ( or less recognizable, depending on the noise level ), overall storage of the data is substantially reduced, with no essential loss ( and in some cases a slight boost ) to the classification accuracy. as an added bonus, our experiments suggest that our method yields a substantial boost to robustness in the face of adversarial test data.
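a minimal numpy sketch of the two - stage pipeline described above : noise injection followed by lossy compression. the uniform quantizer here is a generic stand - in ; the paper matches the compressor to the noise distribution, which this sketch does not attempt.

```python
import numpy as np

def privatize_then_compress(x, noise_std, step, seed=0):
    """Inject Gaussian noise (privacy), then uniformly quantize (lossy compression).

    A hypothetical stand-in for the pipeline in the abstract; the quantization
    step is not matched to the noise distribution here.
    """
    rng = np.random.default_rng(seed)
    noisy = x + rng.normal(0.0, noise_std, size=x.shape)
    # snap every entry to a coarse grid of spacing `step`
    return np.round(noisy / step) * step
```

after quantization every entry lies on a grid with spacing `step`, so each value can be stored with far fewer bits than the original float, which is where the storage saving comes from.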
|
arxiv:2202.02892
|
this article investigates the effect of prices and socio - demographic variables on farmers ' decisions to purchase agricultural insurance. a survey was conducted with 200 farmers, most of whom are engaged in diversified income - generating activities. the logistic estimation results suggest that education and household income from farming activities positively affect the likelihood of purchasing insurance. the demand for insurance is negatively correlated with the premium paid per insured value, suggesting that insurance is a normal good. farmers are willing to pay ( wtp ) increasingly higher premiums for contracts with a higher coverage ratio. according to the valuation model, the wtp declines sharply for coverage ratios under 70 %.
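a minimal numpy sketch of the kind of logistic estimation described above, fit by gradient descent on synthetic data. the covariates, coefficients, and sample size are all illustrative assumptions, not the survey data from the article.

```python
import numpy as np

def fit_logit(X, y, lr=0.1, steps=2000):
    """Fit a logistic regression by full-batch gradient descent (convex problem)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
n = 500
educ = rng.normal(0, 1, n)     # standardized education (hypothetical covariate)
premium = rng.normal(0, 1, n)  # standardized premium per insured value (hypothetical)
# synthetic purchase decisions: education raises, premium lowers the odds
logit = 1.0 * educ - 1.0 * premium
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
X = np.column_stack([np.ones(n), educ, premium])
w = fit_logit(X, y)
```

with this data-generating process the fitted coefficients recover the signs reported in the abstract : positive on education, negative on the premium.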
|
arxiv:2004.11279
|
an algorithm, based on fourier decomposition of light curves, made it possible to define a sample of 388 contact binaries with well observed light curves, periods shorter than one day and with available v - i colors ( the r - sample ), from among 933 eclipsing binary systems in the ogle variable - star catalog for 9 fields of baade ' s window. the sample of such systems which was visually classified by the ogle project as ew - type binaries ( the o - sample ) is by 55 % larger and consists of 604 stars. the algorithm prevents inclusion of rr lyr and sx phe stars which in visual classification might be mistakenly taken for contact binaries with periods equal to twice their pulsation periods. determinations of distances for the contact systems, utilizing the m _ i = m _ i ( log p, v - i ) absolute - magnitude calibration and the map of reddening and extinction of stanek ( 1996 ), indicate an approximately uniform distribution of contact binaries almost all the way to the galactic bulge, implying heights up to z = 420 - 450 pc. this distribution, as well as a tendency for the colors to be concentrated in the region normally occupied by old turn - off - point stars, confirm the currently held opinion that contact binary systems belong to the old stellar population of the galaxy. the apparent frequency estimated in the volume - limited sense, relative to nearby ms dwarfs to the distances of 2 and 3 kpc, is one contact system per about 250 - 300 stars ; it is one contact system per 400 ms stars if m - dwarfs are included. the apparent density of contact systems is 7 - 10x10 ^ - 5 pc ^ - 3.
|
arxiv:astro-ph/9607009
|
successive cancellation decoders have come a long way since the implementation of traditional sc decoders, but there is still potential for improvement. the main struggle over the years has been to find an optimal algorithm to implement them. most of the proposed algorithms are not practical enough to be implemented in real life. in this research, we aim to introduce the efficiency of stochastic neural networks as an sc decoder and find possible ways of improving its performance and practicality. in this paper, after a brief introduction to stochastic neurons and snns, we introduce methods to realize stochastic nns on both deterministic and stochastic platforms.
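a minimal numpy sketch of the stochastic neurons mentioned above : a bernoulli unit that fires with probability given by a sigmoid of its weighted input. this is an illustration of the neuron model only, not the decoder architecture from the paper.

```python
import numpy as np

def stochastic_neuron(x, w, rng):
    """Fire (output 1) with probability sigmoid(w . x) -- a Bernoulli neuron.

    Averaging many samples of the output approximates the underlying
    probability, which is how stochastic hardware represents analog values
    as bit-streams.
    """
    p = 1.0 / (1.0 + np.exp(-float(np.dot(w, x))))
    return int(rng.random() < p), p
```

over repeated trials the empirical firing rate converges to the sigmoid activation, so downstream logic can operate on the bit-stream instead of a real-valued activation.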
|
arxiv:2011.06427
|
in conventional, phonon - mediated superconductors, the transition temperature $ t _ c $ and normal - state scattering rate $ 1 / \ tau $ - deduced from the linear - in - temperature resistivity $ \ rho ( t ) $ - are linked through the electron - phonon coupling strength $ \ lambda _ { \ rm ph } $. in cuprate high - $ t _ c $ superconductors, no equivalent $ \ lambda $ has yet been identified, despite the fact that at high doping, $ \ alpha $ - the low - $ t $ $ t $ - linear coefficient of $ \ rho ( t ) $ - also scales with $ t _ c $. here, we use dc resistivity and high - field magnetoresistance to extract $ \ tau ^ { - 1 } $ in electron - doped la $ _ { 2 - x } $ ce $ _ x $ cuo $ _ 4 $ ( lcco ) as a function of $ x $ from optimal doping to beyond the superconducting dome. a highly anisotropic inelastic component to $ \ tau ^ { - 1 } $ is revealed whose magnitude diminishes markedly across the doping series. using known fermi surface parameters and subsequent modelling of the hall coefficient, we demonstrate that the form of $ \ tau ^ { - 1 } $ in lcco is consistent with scattering off commensurate antiferromagnetic spin fluctuations of variable strength $ \ lambda _ { \ rm sf } $. the clear correlation between $ \ alpha $, $ \ lambda _ { \ rm sf } $ and $ t _ c $ then identifies low - energy spin - fluctuations as the primary pairing glue in electron - doped cuprates. the contrasting magnetotransport behaviour in hole - doped cuprates suggests that the higher $ t _ c $ in the latter cannot be attributed solely to an increase in $ \ lambda _ { \ rm sf } $. indeed, the success in modelling lcco serves to reinforce the notion that resolving the origin of high - temperature superconductivity in hole - doped cuprates may require more than a simple extension of bcs theory.
|
arxiv:2502.13612
|
in this paper, we give geometric realizations of lusztig ' s symmetries. we also give projective resolutions of a kind of standard modules. by using the geometric realizations and the projective resolutions, we obtain the categorification of the formulas of lusztig ' s symmetries.
|
arxiv:1501.01778
|
we investigate the q ^ 2 evolution of parton distributions at small x values, obtained in the case of flat initial conditions. the contributions of twist - two and ( renormalon - type ) higher - twist operators of the wilson operator product expansion are taken into account. the results are in excellent agreement with deep inelastic scattering experimental data from hera.
|
arxiv:hep-ph/0012299
|
we theoretically investigate the thermal boundary conductance across metal - nonmetal interfaces in the presence of electron - phonon coupling not only within the metal but also at the interface. the thermal energy can be transferred from metal to nonmetal via three channels : ( 1 ) the phonon - phonon coupling at the interface ; ( 2 ) the electron - phonon coupling at the interface ; and ( 3 ) the electron - phonon coupling within the metal followed by the phonon - phonon coupling at the interface. we find that these three channels can be described by an equivalent series - parallel thermal resistor network, based on which we derive the analytic expression for the thermal boundary conductance. we then exemplify the different contributions from each channel to the thermal boundary conductance in three typical interfaces : pb - diamond, ti - diamond, and tin - mgo. our results reveal that the competition among the above channels determines the thermal boundary conductance.
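a small sketch of the series - parallel bookkeeping such a network implies. the combination below is one plausible reading ( hypothetical, not the paper's exact network ) : the direct electron - phonon interface channel ( 2 ) in parallel with channel ( 3 ), which is the in - metal electron - phonon conductance in series with the interfacial phonon - phonon conductance. channel ( 1 ) would attach to the same metal - phonon node ; how the channels share that node is precisely what the equivalent network in the paper encodes.

```python
def g_series(g1, g2):
    """Two thermal conductances in series: resistances (1/g) add."""
    return g1 * g2 / (g1 + g2)

def total_tbc(g_pp, g_ep_interface, g_ep_metal):
    """One plausible series-parallel combination of two of the three channels
    (a hypothetical illustration, not the paper's derived expression):
    direct e-p interface conductance in parallel with
    (e-p conductance in the metal in series with p-p interface conductance)."""
    return g_ep_interface + g_series(g_ep_metal, g_pp)
```

the series rule makes the weaker coupling the bottleneck of channel ( 3 ), which is why the competition among channels, rather than any single coupling, sets the total conductance.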
|
arxiv:1412.2465
|
spherically symmetric ( 1d ) black - hole spacetimes are considered as a test for numerical relativity. a finite difference code, based on the hyperbolic structure of einstein ' s equations with the harmonic slicing condition, is presented. significant errors in the mass function are shown to arise from the steep gradient zone behind the black hole horizon, which challenges the computational fluid dynamics numerical methods used in the code. the formalism is extended to moving numerical grids, which are adapted to follow horizon motion. the black hole exterior region can then be modeled with higher accuracy.
|
arxiv:gr-qc/9412070
|