text
| source
|
---|---|
JSTOR 3101655, S2CID 111425847. Wikander, Charlotte (2000), "Canals", in Wikander, Örjan (ed.), Handbook of Ancient Water Technology, Technology and Change in History, vol. 2, Leiden: Brill, pp. 321–330, ISBN 90-04-11123-9. Wolf, Hans-Jürgen (1974), Geschichte der Druckpressen (1st ed.), Frankfurt/Main: Interprint.
|
https://en.wikipedia.org/wiki/Renaissance_technology
|
We present results of period analysis of ASAS, MACHO and OGLE light curves of 79 symbiotic stars classified as S- and D'-type. The light curves of 58 objects show variations with the orbital period. In the case of 34 objects, orbital periods are estimated for the first time, which increases the number of symbiotic stars with known orbital periods by about 64%. The light curves of 46 objects show, in addition to the long-term and/or orbital variations, short-term variations with time scales of 50–200 days, most likely due to stellar pulsations of the cool giant component. We also report eclipse-like minima and outbursts present in many of the light curves.
|
arxiv:1312.6063
|
Federated learning (FL) enables multiple clients to collaboratively train a global model without sharing their local data. Recent studies have highlighted the vulnerability of FL to Byzantine attacks, where malicious clients send poisoned updates to degrade model performance. Notably, many attacks have been developed targeting specific aggregation rules, whereas various defense mechanisms have been designed for dedicated threat models. This paper studies the resilience of an attack-agnostic FL scenario, where the server lacks prior knowledge of both the attackers' strategies and the number of malicious clients involved. We first introduce a hybrid defense against state-of-the-art attacks. Our goal is to identify a general-purpose aggregation rule that performs well on average while also avoiding worst-case vulnerabilities. By adaptively selecting from available defenses, we demonstrate that the server remains robust even when confronted with a substantial proportion of poisoned updates. To better understand this resilience, we then assess the attackers' capability using a proxy called client heterogeneity. We also emphasize that existing FL defenses should not be regarded as secure, as demonstrated through the newly proposed TrapSetter attack. The proposed attack outperforms other state-of-the-art attacks by further reducing the model test accuracy by 8–10%. Our findings highlight the ongoing need for the development of Byzantine-resilient aggregation algorithms in FL.
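As a concrete illustration of the kind of robust aggregation rules a hybrid defense could adaptively select among (these two are standard examples from the Byzantine-robust FL literature, not the paper's specific defense), here is a minimal NumPy sketch:

```python
import numpy as np

def trimmed_mean(updates, trim_ratio=0.2):
    # Sort each coordinate across clients, drop the k most extreme
    # values on both sides, and average the rest.
    u = np.sort(np.stack(updates), axis=0)
    k = int(len(updates) * trim_ratio)
    return u[k:len(updates) - k].mean(axis=0)

def coordinate_median(updates):
    # Coordinate-wise median tolerates a large fraction of outliers.
    return np.median(np.stack(updates), axis=0)

# Example: 10 honest updates near 1.0 plus 2 poisoned updates at -100.
updates = [np.ones(5) + 0.01 * i for i in range(10)] + [-100 * np.ones(5)] * 2
print(trimmed_mean(updates))       # stays close to 1.0 despite the attackers
print(coordinate_median(updates))
```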
|
arxiv:2409.06474
|
Physiological and behavioral traits are employed to develop biometric authentication systems. The proposed work deals with the authentication of iris and signature based on a minimum-variance criterion. The iris patterns are preprocessed based on the area of the connected components. The segmented image used for authentication consists of the region with large variations in the gray-level values. The image region is split into quadtree components, and the components with minimum variance are determined from the training samples. Hu moments are applied to the components, and the summation of moment values corresponding to the minimum-variance components is provided as the input vector to k-means and fuzzy k-means classifiers. The best performance was obtained for the MMU database consisting of 45 subjects: the number of subjects with zero false rejection rate (FRR) was 44, and the number of subjects with zero false acceptance rate (FAR) was 45. This paper also addresses computational load reduction in off-line signature verification based on minimal features using k-means, fuzzy k-means, k-NN, fuzzy k-NN and a novel average-max approach. An FRR of 8.13% and an FAR of 10% were achieved using the k-NN classifier. The signature is a biometric for which variation between genuine instances is a natural expectation: certain parts of a genuine signature vary from one instance to another. The system aims to be simple, fast and robust while using fewer features than state-of-the-art works.
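A rough sketch of the feature pipeline described above, with the quadtree decomposition simplified to a fixed grid and the number of retained blocks chosen arbitrarily (both are illustrative assumptions, not the paper's exact procedure):

```python
import cv2
import numpy as np

def min_variance_hu_features(img, grid=4, keep=4):
    # Split the image into grid x grid blocks (a simplified stand-in
    # for the paper's quadtree decomposition).
    h, w = img.shape
    blocks = [img[i * h // grid:(i + 1) * h // grid,
                  j * w // grid:(j + 1) * w // grid]
              for i in range(grid) for j in range(grid)]
    # Keep the lowest-variance blocks, per the minimum-variance criterion.
    blocks.sort(key=lambda b: float(b.var()))
    feats = [cv2.HuMoments(cv2.moments(b.astype(np.float32))).ravel()
             for b in blocks[:keep]]
    # The summed Hu-moment vector is the classifier input.
    return np.sum(feats, axis=0)
```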
|
arxiv:1006.1187
|
The propagator of a gauge boson, like the massless photon or the massive vector bosons $W^\pm$ and $Z$ of the electroweak theory, can be derived in two different ways, namely via Green's functions (semi-classical approach) or via the vacuum expectation value of the time-ordered product of the field operators (field-theoretical approach). Comparing the semi-classical with the field-theoretical approach, the central tensorial object can be defined as the gauge boson projector, directly related to the completeness relation for the complete set of polarisation four-vectors. In this paper we explain the relation of this projector to different cases of the $R_\xi$ gauge and explain why the unitary gauge is the default gauge for massive gauge bosons.
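For reference, the completeness relation behind this projector takes its standard textbook form for a massive vector boson of mass $m$ and momentum $p$, and it is exactly the tensor structure appearing in the unitary-gauge propagator:

$$\sum_{\lambda=1}^{3}\varepsilon^{(\lambda)}_\mu(p)\,\varepsilon^{(\lambda)*}_\nu(p) = -g_{\mu\nu}+\frac{p_\mu p_\nu}{m^2},\qquad D_{\mu\nu}(p)=\frac{-i\left(g_{\mu\nu}-p_\mu p_\nu/m^2\right)}{p^2-m^2+i\epsilon}.$$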
|
arxiv:2001.04106
|
Ultrahigh-energy cosmic rays are almost exclusively detected through extensive air showers, which they initiate upon interaction with the atmosphere. The longitudinal development of these air showers can be directly observed using fluorescence detector telescopes, such as those employed at the Pierre Auger Observatory or the Telescope Array. In this article, we discuss the properties of the Greisen function, which was initially derived as an approximate solution to the electromagnetic cascade equations, and its ability to describe the longitudinal shower profiles. We demonstrate that the Greisen function can be used to describe longitudinal air-shower profiles, even for hadronic air showers. Furthermore, we discuss the possibility to discriminate between hadrons and photons from the shape of air-shower profiles using the Greisen function.
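For context, the Greisen parametrization of the electron number at depth $t$ (in radiation lengths) is usually written as follows, with $y=\ln(E_0/E_c)$ the logarithm of the primary-to-critical energy ratio and $s$ the shower age; this is the standard form from the cascade-theory literature:

$$N(t)=\frac{0.31}{\sqrt{y}}\,\exp\!\left[t\left(1-\tfrac{3}{2}\ln s\right)\right],\qquad s=\frac{3t}{t+2y}.$$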
|
arxiv:2303.16670
|
In engineering, a foundation is the element of a structure which connects it to the ground or, more rarely, water (as with floating structures), transferring loads from the structure to the ground. Foundations are generally considered either shallow or deep. Foundation engineering is the application of soil mechanics and rock mechanics (geotechnical engineering) in the design of foundation elements of structures.

== Purpose ==
Foundations provide the structure's stability from the ground:
- to distribute the weight of the structure over a large area in order to avoid overloading the underlying soil (possibly causing unequal settlement);
- to anchor the structure against natural forces including earthquakes, floods, droughts, frost heaves, tornadoes and wind;
- to provide a level surface for construction;
- to anchor the structure deeply into the ground, increasing its stability and preventing overloading;
- to prevent lateral movements of the supported structure (in some cases).

== Requirements of a good foundation ==
The design and construction of a well-performing foundation must satisfy some basic requirements. The foundation must sustain and transmit the dead and imposed loads to the soil without any settlement that could cause stability issues for the structure. Differential settlement can be avoided by having a rigid base for the foundation; these issues are more pronounced in areas where the superimposed loads are not uniform. Depending on the soil and the area, a deeper foundation is recommended so that it can guard against damage or distress, mainly caused by shrinkage and swelling due to temperature changes. The location chosen for the foundation must be an area that is not affected or influenced by future works or factors.

== Historic types ==

=== Earthfast or post in ground construction ===
Buildings and structures have a long history of being built with wood in contact with the ground. Post in ground construction may technically have no foundation. Timber pilings were used on soft or wet ground even below stone or masonry walls. In marine construction and bridge building, a crisscross of timbers or steel beams in concrete is called grillage.

=== Padstones ===
Perhaps the simplest foundation is the padstone, a single stone which both spreads the weight on the ground and raises the timber off the ground. Staddle stones are a specific type of padstone.

=== Stone foundations ===
Dry stone and stones laid in mortar to build foundations are common in many
|
https://en.wikipedia.org/wiki/Foundation_(engineering)
|
Transit timing variations (TTVs) are observed for exoplanets at a range of amplitudes and periods, yielding an ostensibly degenerate forest of possible explanations. We offer some clarity in this forest, showing that systems with a distant perturbing planet preferentially show TTVs with a dominant period equal to either the perturbing planet's period or half the perturbing planet's period. We demonstrate that planet-induced TTVs are not expected with TTV periods below this exoplanet edge (lower period limit), and that systems with TTVs falling below this limit likely contain additional mass in the system. We present an explanation for both of these periods, showing that aliasing of the conjunction-induced synodic period, the near-$1{:}2$ resonance super-period, and tidal effects all induce TTVs at periods equal to either the perturber's orbit or half-orbit. We provide three examples of known systems for which the recovered TTV period induced by a distant perturbing planet is equal to the perturber's orbital period or half its orbital period. We then investigate $\textit{Kepler}$ two-planet systems with TTVs and identify 13 two-planet systems with TTVs below this TTV period lower limit, thus potentially uncovering the gravitational influence of new planets and/or moons. We conclude by discussing how the exoplanet edge effects can be used to predict the presence of distant companion planets in situations where TTVs are detected and where nearby companions can be ruled out by additional observations, such as radial velocity data.
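For two planets with periods $P_{\rm in} < P_{\rm out}$, the two characteristic timescales referred to here are the synodic period of conjunctions and the near-2:1 resonance super-period; these standard expressions (from the general TTV literature, not specific to this paper) read:

$$P_{\rm syn}=\left|\frac{1}{P_{\rm in}}-\frac{1}{P_{\rm out}}\right|^{-1},\qquad P_{\rm super}=\left|\frac{2}{P_{\rm out}}-\frac{1}{P_{\rm in}}\right|^{-1}.$$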
|
arxiv:2411.09752
|
Data mining deals with the automatic extraction of previously unknown patterns from large amounts of data. Organizations all over the world handle large amounts of data and are dependent on mining gigantic data sets for the expansion of their enterprises. These data sets typically contain sensitive individual information, which consequently gets exposed to other parties. Though we cannot deny the benefits of knowledge discovery that come through data mining, we should also ensure that data privacy is maintained in the event of data mining. Privacy-preserving data mining is a specialized activity in which data privacy is ensured during data mining. Data privacy is as important as the extracted knowledge, and efforts that guarantee data privacy during data mining are encouraged. In this paper we propose a strategy that protects data privacy during decision tree analysis of the data mining process. We propose to add specific noise to the numeric attributes after exploring the decision tree of the original data. The obfuscated data is then presented to the second party for decision tree analysis. The decision trees obtained on the original data and the obfuscated data are similar, but with our method the underlying data are not revealed to the second party during the mining process, and hence privacy is preserved.
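A generic sketch of noise-based obfuscation of numeric attributes; note that the paper tunes the noise after inspecting the decision tree of the original data, and that tree-guided choice is not reproduced here:

```python
import numpy as np

def obfuscate(data, numeric_cols, scale=0.1, seed=0):
    # Add zero-mean Gaussian noise to each numeric attribute, with the
    # noise magnitude tied to the attribute's spread.
    rng = np.random.default_rng(seed)
    out = data.copy()
    for col in numeric_cols:
        sd = scale * data[:, col].std()
        out[:, col] = data[:, col] + rng.normal(0.0, sd, size=len(data))
    return out

# Example: obfuscate columns 0 and 2 of a small numeric table.
X = np.array([[5.1, 3.5, 1.4], [4.9, 3.0, 1.4], [6.2, 3.4, 5.4]])
print(obfuscate(X, numeric_cols=[0, 2]))
```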
|
arxiv:1001.3504
|
Reinforcement learning (RL) is a popular data-driven method that has demonstrated great success in robotics. Previous works usually focus on learning an end-to-end (direct) policy that directly outputs joint torques. While the direct policy seems convenient, the resultant performance may not meet our expectations. To improve its performance, more sophisticated reward functions or more structured policies can be utilized. This paper focuses on the latter, because a structured policy is more intuitive and can inherit insights from previous model-based controllers. It is unsurprising that structure, such as a better choice of the action space and constraints on the motion trajectory, may benefit the training process and the final performance of the policy at the cost of generality, but the quantitative effect is still unclear. To analyze the effect of structure quantitatively, this paper investigates three policies with different levels of structure in learning quadruped locomotion: a direct policy, a structured policy, and a highly structured policy. The structured policy is trained to learn a task-space impedance controller, and the highly structured policy learns a controller tailored for trot running, which we adopt from previous work. To evaluate the trained policies, we design a simulation experiment to track different desired velocities under force disturbances. Simulation results show that the structured policy and the highly structured policy require 1/3 and 3/4 fewer training steps, respectively, than the direct policy to achieve a similar level of cumulative reward, and seem more robust and efficient than the direct policy. We highlight that the structure embedded in the policies significantly affects the overall performance of learning a complicated task when complex dynamics are involved, such as legged locomotion.
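The general form of a task-space impedance law, of the kind such a structured policy can parameterize, maps a virtual spring-damper wrench to joint torques through the Jacobian transpose. This is an illustrative textbook form, with made-up gains; the paper's exact controller may differ:

```python
import numpy as np

def impedance_torques(J, x, x_dot, x_des, K, D):
    # Virtual spring-damper wrench in task space, mapped to joint
    # torques: tau = J^T (K (x_des - x) - D x_dot).
    f = K @ (x_des - x) - D @ x_dot
    return J.T @ f

# Toy 2-DOF example with diagonal gains (all numbers illustrative).
J = np.array([[1.0, 0.5], [0.0, 1.0]])
tau = impedance_torques(J, x=np.zeros(2), x_dot=np.zeros(2),
                        x_des=np.array([0.1, 0.0]),
                        K=np.diag([100.0, 100.0]), D=np.diag([10.0, 10.0]))
print(tau)
```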
|
arxiv:2008.12970
|
Detection of periodic patterns of interest within noisy time series data plays a critical role in various tasks, spanning from health monitoring to behavior analysis. Existing learning techniques often rely on labels or clean versions of signals for detecting the periodicity, and those employing self-supervised learning methods are required to apply proper augmentations, which is already challenging for time series and can result in collapse, where all representations collapse to a single point due to strong augmentations. In this work, we propose a novel method to detect periodicity in time series without the need for any labels or tailored positive or negative data generation mechanisms with specific augmentations. We mitigate the collapse issue by ensuring the learned representations retain information from the original samples without imposing any random variance constraints on the batch. Our experiments in three time series tasks against state-of-the-art learning methods show that the proposed approach consistently outperforms prior works, achieving performance improvements of more than 45–50%, showing its effectiveness. Code: https://github.com/eth-siplab/unsupervised_periodicity_detection
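As a point of comparison for what label-free periodicity detection means in the simplest setting, a classical autocorrelation baseline (not the representation-learning method proposed above) can be sketched in a few lines:

```python
import numpy as np

def dominant_period(x, min_lag=2):
    # Normalize, autocorrelate, and return the lag of the highest
    # autocorrelation peak within (min_lag, len(x) // 2).
    x = (x - x.mean()) / (x.std() + 1e-12)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]
    return min_lag + int(np.argmax(ac[min_lag:len(x) // 2]))

t = np.arange(500)
sig = np.sin(2 * np.pi * t / 37) + 0.3 * np.random.default_rng(0).normal(size=500)
print(dominant_period(sig))  # recovers a period close to 37
```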
|
arxiv:2406.00566
|
We present results for light hadrons composed of both degenerate and non-degenerate quarks in quenched lattice QCD. We calculate masses and decay constants using 60 gauge configurations with an $O(a)$-improved fermion action at $\beta = 6.2$. Using the $\rho$ mass to set the scale, we find hadron masses within two to three standard deviations of the experimental values (given in parentheses): $m_{K^*} = 868^{+9}_{-8}$ MeV (892 MeV), $m_\phi = 970^{+20}_{-10}$ MeV (1020 MeV), $m_N = 820^{+90}_{-60}$ MeV (938 MeV), $m_\Delta = 1300^{+100}_{-100}$ MeV (1232 MeV) and $m_\Omega = 1650^{+70}_{-50}$ MeV (1672 MeV). Direct comparison with experiment for decay constants is obscured by uncertainty in current renormalisations. However, for ratios of decay constants we obtain $f_K/f_\pi = 1.20^{+0.03}_{-0.02}$ (1.22) and $f_\phi/f_\rho = 1.13^{+0.02}_{-0.03}$ (1.22).
|
arxiv:hep-lat/9309002
|
Very metal-poor stars that have $[\text{Fe}/\text{H}] < -2$ and that are enhanced in C relative to Fe ($[\text{C}/\text{Fe}] > +0.7$) but have no enhancement of heavy elements ($[\text{Ba}/\text{Fe}] < 0$) are known as carbon-enhanced metal-poor (CEMP-no) stars. These stars are thought to be produced from gas that was polluted by the supernova (SN) ejecta of the very first generation (Pop III) of massive stars. The very high enrichment of C ($A(\text{C}) \gtrsim 6$) observed in many of the CEMP-no stars is difficult to explain by current models of SN explosions from massive Pop III stars when a reasonable dilution of the SN ejecta, consistent with detailed simulations of metal mixing in minihaloes, is adopted. We explore rapidly rotating Pop III stars that undergo efficient mixing and reach a quasi-chemically homogeneous (QCH) state. We find that QCH stars can eject large amounts of C in the wind and that the resulting dilution of the wind ejecta in the interstellar medium can lead to a C enrichment of $A(\text{C}) \lesssim 7.75$. The core of QCH stars can produce up to an order of magnitude more C than non-rotating progenitors of similar mass, and the resulting SN can lead to a C enrichment of $A(\text{C}) \lesssim 7$. Our rapidly rotating massive Pop III stars cover almost the entire range of $A(\text{C})$ observed in CEMP-no stars and are a promising site for explaining the high C enhancement in the early galaxy. Our work indicates that a substantial fraction of Pop III stars were likely rapid rotators.
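For readers unfamiliar with the abundance notation used above, the bracket and $A(\text{C})$ scales are defined in the standard way:

$$[\text{X}/\text{Y}] = \log_{10}\!\left(\frac{N_\text{X}}{N_\text{Y}}\right)_\star - \log_{10}\!\left(\frac{N_\text{X}}{N_\text{Y}}\right)_\odot,\qquad A(\text{C}) = \log_{10}\!\left(\frac{N_\text{C}}{N_\text{H}}\right) + 12.$$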
|
arxiv:2306.06433
|
during disasters, contests, and experimentation. Radio amateurs must hold an amateur radio license and are given a unique callsign that must be used as an identifier in transmissions. Amateur radio is restricted to small frequency bands, the amateur radio bands, spaced throughout the radio spectrum starting at 136 kHz. Within these bands, amateurs are allowed the freedom to transmit on any frequency using a wide variety of voice modulation methods, along with other forms of communication, such as slow-scan television (SSTV) and radioteletype (RTTY). Additionally, amateurs are among the only radio operators still using Morse code radiotelegraphy.

==== One-way voice communication ====
One-way, unidirectional radio transmission is called simplex.
- Baby monitor – a crib-side appliance for parents of infants that transmits the baby's sounds to a receiver carried by the parent, so they can monitor the baby while they are in other parts of the house. The wavebands used vary by region, but analog baby monitors generally transmit with low power in the 16, 9.3–49.9 or 900 MHz wavebands, and digital systems in the 2.4 GHz waveband. Many baby monitors have duplex channels so the parent can talk to the baby, and cameras to show video of the baby.
- Wireless microphone – a battery-powered microphone with a short-range transmitter that is handheld or worn on a person's body, which transmits its sound by radio to a nearby receiver unit connected to a sound system. Wireless microphones are used by public speakers, performers, and television personalities so they can move freely without trailing a microphone cord. Traditionally, analog models transmit in FM on unused portions of the television broadcast frequencies in the VHF and UHF bands. Some models transmit on two frequency channels for diversity reception to prevent nulls from interrupting transmission as the performer moves around. Some models use digital modulation to prevent unauthorized reception by scanner radio receivers; these operate in the 900 MHz, 2.4 GHz or 6 GHz ISM bands. European standards also support wireless multichannel audio systems (WMAS) that can better support the use of large numbers of wireless microphones at a single event or venue. As of 2021, U.S. regulators were considering adopting rules for WMAS.

=== Data communication ===
- Wireless networking – automated radio links which transmit digital data between computers and other wireless devices using radio waves, linking the devices together transparently in a computer network. Computer networks
|
https://en.wikipedia.org/wiki/Radio
|
The demand for interactive narratives is growing with the increasing popularity of VR and video gaming. This presents an opportunity to create interactive storytelling experiences that allow players to engage with a narrative from a first-person perspective, both immersively in VR and in 3D on a computer. However, for artists and storytellers without programming experience, authoring such experiences is a particularly complex task, as it involves coding a series of story events (character animation, movements, time control, dialogues, etc.) to be connected and triggered by a variety of player behaviors. In this work, we present ConnectVR, a trigger-action interface that enables non-technical creators to design agent-based narrative experiences. Our no-code authoring method specifically focuses on the design of narratives driven by a series of cause-effect relationships triggered by the player's actions. We asked 15 participants to use ConnectVR in a preliminary workshop study, and two artists to use our system extensively to create VR narrative projects in a three-week in-depth study. Our findings shed light on the creative opportunities facilitated by ConnectVR's trigger-action approach, particularly its capability to establish chained behavioral effects between virtual characters and objects. The results of both studies underscore the positive feedback from participants regarding our system's capacity not only to support creativity but also to simplify the creation of interactive narrative experiences. Results indicate compatibility with non-technical narrative creators' workflows, showcasing its potential to enhance the overall creative process in the realm of VR narrative design.
|
arxiv:2406.15889
|
The goal of this work is to study an infectious disease spreading in a medium-size population occupying a confined environment. For this purpose, we consider a kinetic theory approach to model crowd dynamics in bounded domains and couple it to a kinetic equation to model contagion. The interactions of a person with other pedestrians and the environment are modeled by using tools of game theory. The pedestrian dynamics model allows one to weight between two competing behaviors: the search for less congested areas and the tendency to follow the stream unconsciously in a panic situation. Each person in the system has a contagion level that is affected by their neighborhood. For the numerical solution of the coupled problem, we propose an algorithm that at every time step solves one crowd dynamics problem and one contagion problem, i.e., with no subiterations between the two. We test our coupled model on a problem involving a small crowd walking through a corridor.
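A skeleton of the staggered time stepping described above; the crowd and contagion updates here are toy stand-ins for the kinetic solvers, and only the structure (one crowd solve, then one contagion solve, no subiterations) mirrors the proposed algorithm:

```python
import numpy as np

def step(pos, vel, c, dt, radius=1.0, beta=0.5):
    # 1) Crowd update (toy: straight-line motion).
    pos = pos + dt * vel
    # 2) Contagion update (toy: level grows with infected neighbors).
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    neighbors = (d < radius) & (d > 0)
    c = c + dt * beta * (neighbors * c[None, :]).mean(axis=1)
    return pos, vel, np.clip(c, 0.0, 1.0)

rng = np.random.default_rng(1)
pos, vel = rng.random((20, 2)), 0.1 * rng.normal(size=(20, 2))
c = np.zeros(20); c[0] = 1.0          # one initially infected walker
pos, vel, c = step(pos, vel, c, dt=0.05)
```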
|
arxiv:2003.08357
|
Virtual bidding plays an important role in two-settlement electric power markets, as it can reduce discrepancies between day-ahead and real-time markets. Renewable energy penetration increases volatility in electricity prices, making accurate forecasting critical for virtual bidders, reducing uncertainty and maximizing profits. This study presents a transformer-based deep learning model to forecast the price spread between real-time and day-ahead electricity prices in the ERCOT (Electric Reliability Council of Texas) market. The proposed model leverages various time-series features, including load forecasts, solar and wind generation forecasts, and temporal attributes. The model is trained under realistic constraints and validated using a walk-forward approach, with the model updated every week. Based on the price spread prediction results, several trading strategies are proposed, and the most effective strategy for maximizing cumulative profit under realistic market conditions is identified through backtesting. The results show that the strategy of trading only at the peak hour with a precision score of over 50% produces nearly consistent profit over the test period. The proposed method underscores the importance of an accurate electricity price forecasting model and introduces a new way of evaluating price forecast models from a virtual bidder's perspective, providing valuable insights for future research.
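The weekly walk-forward validation scheme mentioned above can be sketched as follows; `make_model` stands for any fit/predict factory (a scikit-learn-style interface is assumed here, and the hourly granularity is an illustrative choice):

```python
import numpy as np

def walk_forward(make_model, X, y, initial_train, step=7 * 24):
    # Refit on all data seen so far, predict the next week of hourly
    # spreads, then slide the window forward by one week.
    preds = []
    t = initial_train
    while t + step <= len(X):
        model = make_model().fit(X[:t], y[:t])
        preds.append(model.predict(X[t:t + step]))
        t += step
    return np.concatenate(preds)

# Usage with any scikit-learn-style regressor, e.g.:
# from sklearn.linear_model import Ridge
# spread_hat = walk_forward(Ridge, X_features, y_spread, initial_train=365 * 24)
```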
|
arxiv:2412.00062
|
We describe the practical implementation of an average polynomial-time algorithm for counting points on superelliptic curves defined over $\mathbb{Q}$ that is substantially faster than previous approaches. Our algorithm takes as input a superelliptic curve $y^m = f(x)$ with $m \ge 2$ and $f \in \mathbb{Z}[x]$ any squarefree polynomial of degree $d \ge 3$, along with a positive integer $N$. It can compute $\#X(\mathbb{F}_p)$ for all $p \le N$ not dividing $m\,\mathrm{lc}(f)\,\mathrm{disc}(f)$ in time $O(m d^3 N \log^3 N \log\log N)$. It achieves this by computing the trace of the Cartier–Manin matrix of reductions of $X$. We can also compute the Cartier–Manin matrix itself, which determines the $p$-rank of the Jacobian of $X$ and the numerator of its zeta function modulo $p$.
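For small primes, the affine point counts such an algorithm produces can be checked against brute force; this naive counter (exponentially slower than the Cartier–Manin approach above) is a handy sanity check:

```python
def count_affine_points(m, f_coeffs, p):
    # Brute-force count of affine points on y^m = f(x) over F_p.
    # f_coeffs lists coefficients from the constant term upward.
    f = lambda x: sum(c * pow(x, i, p) for i, c in enumerate(f_coeffs)) % p
    counts = {}
    for y in range(p):
        v = pow(y, m, p)
        counts[v] = counts.get(v, 0) + 1  # how many y hit each m-th power
    return sum(counts.get(f(x), 0) for x in range(p))

# Example: y^2 = x^3 + x + 1 over F_11.
print(count_affine_points(2, [1, 1, 0, 1], 11))
```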
|
arxiv:2004.10189
|
We show that the action of the Kauffman bracket skein algebra of a surface $\Sigma$ on the skein module of the handlebody bounded by $\Sigma$ is faithful if and only if the quantum parameter is not a root of unity.
|
arxiv:2103.11532
|
The anchor words algorithm performs provably efficient topic model inference by finding an approximate convex hull in a high-dimensional word co-occurrence space. However, the existing greedy algorithm often selects poor anchor words, reducing topic quality and interpretability. Rather than finding an approximate convex hull in a high-dimensional space, we propose to find an exact convex hull in a visualizable 2- or 3-dimensional space. Such low-dimensional embeddings both improve topics and clearly show users why the algorithm selects certain words.
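A minimal sketch of the idea, using plain PCA as a placeholder projection (the paper studies which low-dimensional embeddings work best, so the projection choice here is an assumption):

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.decomposition import PCA

def anchor_candidates(cooccurrence, dim=2):
    # Project word co-occurrence rows to a low-dimensional space and
    # take the exact convex hull there; hull vertices are the anchor
    # word candidates.
    low = PCA(n_components=dim).fit_transform(cooccurrence)
    hull = ConvexHull(low)
    return hull.vertices  # indices of words on the exact hull

rng = np.random.default_rng(0)
Q = rng.random((200, 1000))     # toy word-word co-occurrence matrix
print(anchor_candidates(Q))
```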
|
arxiv:1711.06826
|
A new simple method for first-order phase transition kinetics is suggested. The metastable phase consumption can be described within the monodisperse approximation for the distribution of droplet sizes. In all situations of metastable phase decay, this approximation leads to negligible errors in the total number of droplets appearing in the system. An evident advantage of the presented method is the possibility to investigate metastable phase decay on several sorts of heterogeneous centers.
|
arxiv:physics/0001072
|
In this paper, we present a multi-level mixed element scheme for the Helmholtz transmission eigenvalue problem on polygonal domains that cannot necessarily be covered by rectangular grids. We first construct an equivalent linear mixed formulation of the transmission eigenvalue problem and then discretize it with Lagrangian finite elements of low regularity. The proposed scheme admits a natural nested discretization, based on which we construct a multi-level scheme. Optimal convergence rate and optimal computational cost can be obtained with the scheme.
|
arxiv:1707.00567
|
We examine a two-dimensional system of sterically repulsive interacting disks where each particle runs in a random direction. This system is equivalent to a run-and-tumble dynamics system in the limit where the run time is infinite. At low densities, we find a strongly fluctuating state composed of transient clusters. Above a critical density that is well below the density at which non-active particles would crystallize, the system can organize into a drifting quiescent or frozen state where the fluctuations are lost and large crystallites form, surrounded by a small density of individual particles. Although all the particles are still moving, their paths form closed orbits. The average transient time to organize into the quiescent state diverges as a power law upon approaching the critical density from above. We compare our results to the random organization observed for periodically sheared systems that can undergo an absorbing transition from a fluctuating state to a dynamical non-fluctuating state. In the random organization studies, the system organizes to a state in which the particles no longer interact; in contrast, we find that the randomly running active matter organizes to a strongly interacting dynamically jammed state. We show that the transition to the frozen state is robust against a certain range of stochastic fluctuations. We also examine the effects of adding a small number of pinned particles to the system and find that the transition to the frozen state shifts to significantly lower densities and arises via the nucleation of faceted crystals centered at the obstacles.
|
arxiv:1406.3383
|
Successful modern generalized gradient approximations (GGAs) are biased toward atomic energies. Restoration of the first-principles gradient expansion for the exchange energy over a wide range of density gradients eliminates this bias. We introduce PBEsol, a revised Perdew-Burke-Ernzerhof GGA that improves equilibrium properties for many densely packed solids and their surfaces.
|
arxiv:0707.2088
|
Clubmosses, ferns and seed plants (gymnosperms and angiosperms) generally have aerial and subterranean subsystems. The shoots consist of stems bearing green photosynthesising leaves and reproductive structures. The underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. Non-vascular plants, the liverworts, hornworts and mosses, do not produce ground-penetrating vascular roots, and most of the plant participates in photosynthesis. The sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. The root system and the shoot system are interdependent: the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. Cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. Stolons and tubers are examples of shoots that can grow roots. Roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. In the event that one of the systems is lost, the other can often regrow it. In fact, it is possible to grow an entire plant from a single leaf, as is the case with plants in Streptocarpus sect. Saintpaulia, or even a single cell, which can dedifferentiate into a callus (a mass of unspecialised cells) that can grow into a new plant. In vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. Roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. Stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. Leaves gather sunlight and carry out photosynthesis. Large, flat, flexible, green leaves are called foliage leaves. Gymnosperms, such as conifers, cycads, ginkgo, and gnetophytes, are seed-producing plants with open seeds. Angiosperms are seed-producing plants that produce flowers and have enclosed
|
https://en.wikipedia.org/wiki/Botany
|
For years, it has been believed that the main LHC detectors can play only a restricted role as lifetime frontier experiments exploring the parameter space of long-lived particles (LLPs), hypothetical particles with tiny couplings to the Standard Model. This paper demonstrates that the LHCb experiment may become a powerful lifetime frontier experiment if it uses the new Downstream algorithm, which reconstructs tracks that do not leave hits in the LHCb vertex tracker. In particular, for many LLP models, including heavy neutral leptons, dark scalars, dark photons, and axion-like particles, LHCb may be as sensitive as the experiments proposed beyond the main LHC detectors.
|
arxiv:2312.14016
|
Image reconstruction in low-count PET is particularly challenging because gammas from natural radioactivity in Lu-based crystals cause high random fractions that lower the measurement signal-to-noise ratio (SNR). In model-based image reconstruction (MBIR), using more iterations of an unregularized method may increase the noise, so incorporating regularization into the image reconstruction is desirable to control the noise. New regularization methods based on learned convolutional operators are emerging in MBIR. We modify the architecture of an iterative neural network, BCD-Net, for PET MBIR, and demonstrate the efficacy of the trained BCD-Net using XCAT phantom data that simulates the low true coincidence count-rates with high random fractions typical for Y-90 PET patient imaging after Y-90 microsphere radioembolization. Numerical results show that the proposed BCD-Net significantly improves CNR and RMSE of the reconstructed images compared to MBIR methods using non-trained regularizers, total variation (TV) and non-local means (NLM). Moreover, BCD-Net successfully generalizes to test data that differs from the training data. Improvements were also demonstrated for the clinically relevant phantom measurement data, where we used training and testing datasets having very different activity distributions and count levels.
|
arxiv:1906.02327
|
Gauge theories broken by a single Higgs field are known to have first-order phase transitions in temperature if $\lambda/g^2 \ll 1$, where $g$ is the gauge coupling and $\lambda$ the Higgs self-coupling. If the theory is extended from one to $N$ Higgs doublets, with U($N$) flavor symmetry, the transition is known to be second order for $\lambda/g^2 \gtrsim 1$ in the $N \to \infty$ limit. We show that one can in principle compute the tricritical value of $\lambda/g^2$, separating first- from second-order transitions, to any order in $1/N$. In particular, scalar fluctuations at the transition damp away the usual problems with the infrared behavior of high-temperature non-Abelian gauge theories. We explicitly compute the tricritical value of $\lambda/g^2$ for U(1) and SU(2) gauge theory to next-to-leading order in $1/N$.
|
arxiv:hep-ph/9610226
|
Aims. One of the main probes for systematic errors in the cosmic shear signal is the division of the shear field into E- and B-mode shear, where gravitational lensing only produces the former. As shown in a recent note, all currently used E-/B-mode separation methods for the shear correlation functions $\xi_\pm$ require them to be measured to arbitrarily small and/or large separations, which is of course not feasible in practice. Methods. We derive second-order shear statistics which provide a clean separation into E- and B-modes from measurements of $\xi_\pm(\theta)$ over a finite interval only. We call these new statistics the circle and ring statistics, respectively; the latter is obtained by an integral over the former. The mathematical properties of these new shear statistics are obtained, as well as specific expressions for applying them to observed data. Results. It is shown that an E-/B-mode separation can be performed on measurements of $\xi_\pm$ over a finite interval in angular separation, using the ring statistics. We furthermore generalize this result to derive the most general class of second-order shear statistics which provide a separation of E- and B-mode shear on a given angular interval $\theta_{\rm min} \le \theta \le \theta_{\rm max}$. Our results will be of practical use particularly for future cosmic shear surveys, where highly precise measurements of the shear will become available and where control of systematics will be mandatory.
|
arxiv:astro-ph/0605084
|
We generalize classical results in spectral graph theory, and in linear algebra more broadly, from the case where the underlying matrix is Hermitian to the case where it is non-Hermitian. New admissibility conditions are introduced to replace the Hermiticity condition. We prove new variational estimates of the Rayleigh quotient for non-Hermitian matrices. As an application, a new Delsarte-Hoffman-type bound on the size of the largest independent set in a directed graph is developed. Our techniques consist in quantifying the impact of breaking the Hermitian symmetry of a matrix and are broadly applicable.
|
arxiv:1812.04737
|
This paper proposes a method for generating images of customized objects specified by users. The method is based on a general framework that bypasses the lengthy optimization required by previous approaches, which often employ a per-object optimization paradigm. Our framework adopts an encoder to capture high-level identifiable semantics of objects, producing an object-specific embedding with only a single feed-forward pass. The acquired object embedding is then passed to a text-to-image synthesis model for subsequent generation. To effectively blend an object-aware embedding space into a well-developed text-to-image model under the same generation context, we investigate different network designs and training strategies, and propose a simple yet effective regularized joint training scheme with an object identity preservation loss. Additionally, we propose a caption generation scheme that becomes a critical piece in ensuring that the object-specific embedding is faithfully reflected in the generation process, while keeping control and editing abilities. Once trained, the network is able to produce diverse content and styles, conditioned on both texts and objects. We demonstrate through experiments that our proposed method is able to synthesize images with compelling output quality, appearance diversity, and object fidelity, without the need for test-time optimization. Systematic studies are also conducted to analyze our models, providing insights for future work.
|
arxiv:2304.02642
|
Due to the computational complexity of self-attention (SA), prevalent techniques for image deblurring often resort to either adopting localized SA or employing coarse-grained global SA methods, both of which exhibit drawbacks such as compromising global modeling or lacking fine-grained correlation. In order to address this issue by effectively modeling long-range dependencies without sacrificing fine-grained details, we introduce a novel approach termed Local Frequency Transformer (LoFormer). Within each unit of LoFormer, we incorporate a local channel-wise SA in the frequency domain (Freq-LC) to simultaneously capture cross-covariance within low- and high-frequency local windows. These operations offer the advantages of (1) ensuring equitable learning opportunities for both coarse-grained structures and fine-grained details, and (2) exploring a broader range of representational properties compared to coarse-grained global SA methods. Additionally, we introduce an MLP gating mechanism complementary to Freq-LC, which serves to filter out irrelevant features while enhancing global learning capabilities. Our experiments demonstrate that LoFormer significantly improves performance in the image deblurring task, achieving a PSNR of 34.09 dB on the GoPro dataset with 126G FLOPs. Code: https://github.com/deepmed-lab-ecnu/single-image-deblur
|
arxiv:2407.16993
|
Recent years have witnessed much progress in the computation of supersymmetric partition functions of SCFTs on curved manifolds via localization. The twisted partition function on product manifolds of the form $S^1 \times \Sigma_g$, where $\Sigma_g$ is a two-dimensional Riemann surface, is of particular relevance due to its role in the microstate counting for magnetic static AdS$_4$ black holes realizing the topological twist. We review here supergravity solutions having as conformal boundary more general 3d manifolds. We first focus on solutions (AdS-Taub-NUT and AdS-Taub-Bolt) having as boundary a circle bundle over $\Sigma_g$, showing the matching of their on-shell action with the large-$N$ limit of the partition function of the dual CFT. We then discuss some recent results for a challenging example, which involves the refinement by angular momentum. The gravitational backgrounds in this case are rotating supersymmetric AdS$_4$ black holes. We review the salient features of two different classes of such solutions in theories of supergravity with uplift in M-theory, and comment on the current status of their entropy counting in the dual CFT.
|
arxiv:2003.14409
|
We describe a Casimir apparatus based on a differential force measurement between an Au-coated sphere and a planar slab divided into two regions, one of which is made of high-resistivity (dielectric) Si, and the other of Au. The crucial feature of the setup is a semi-transparent plane-parallel conducting over-layer covering both regions. The setup offers two important advantages over existing Casimir setups. On one hand, it leads to a large amplification of the difference between the Drude and the plasma prescriptions that are currently used to compute the thermal Casimir force. On the other hand, thanks to the screening power of the over-layer, it is in principle immune from electrostatic forces caused by potential patches on the plate surfaces, which plague present large-distance Casimir experiments. If a semi-transparent conductive over-layer with identical patch structure over the Au-Si regions of the plate can be manufactured, similar to the opaque over-layers used in recent searches for non-Newtonian gravitational forces based on the isoelectronic technique, the way will be paved for a clear observation of the thermal Casimir force up to separations of several microns, and an unambiguous discrimination between the Drude and the plasma prescriptions.
|
arxiv:1410.4476
|
We investigate the in-spiraling timescales of globular clusters in dwarf spheroidal (dSph) and dwarf elliptical (dE) galaxies due to dynamical friction. We address the problem of these timescales having been variously estimated in the literature as much shorter than a Hubble time. Using self-consistent two-component (dark matter and stars) models, we explore mechanisms which may yield extended dynamical friction timescales in such systems, in order to explain why dwarf galaxies often show globular cluster systems. As a general rule, dark matter and stars both give a comparable contribution to the dynamical drag. By exploring various possibilities for their gravitational make-up, it is shown that these studies help constrain the parameters of the dark matter haloes in these galaxies, as well as test alternatives to dark matter. Under the assumption of dark haloes having a constant-density core, dynamical friction timescales are naturally extended upwards of a Hubble time. Cuspy dark haloes yield timescales $\lesssim$ 4.5 Gyr for any dark halo parameters in accordance with observations of stellar line-of-sight velocity dispersion in dwarf spheroidal galaxies. We find that under the hypothesis of MOND dynamics, due to the enhanced dynamical drag of the stars, the dynamical friction timescales would be extremely short. Taking the well-measured structural parameters of the Fornax dSph and its globular cluster system as a case study, we conclude that requiring dynamical friction timescales comparable to the Hubble time strongly favours dark haloes with a central core.
|
arxiv:astro-ph/0601490
|
Recent studies have highlighted significant fairness issues in graph transformer (GT) models, particularly against subgroups defined by sensitive features. Additionally, GTs are computationally intensive and memory-demanding, limiting their application to large-scale graphs. Our experiments demonstrate that graph partitioning can enhance the fairness of GT models while reducing computational complexity. To understand this improvement, we conducted a theoretical investigation into the root causes of fairness issues in GT models. We found that the sensitive features of higher-order nodes disproportionately influence lower-order nodes, resulting in sensitive feature bias. We propose Fairness-aware scalable GT based on Graph Partitioning (FairGP), which partitions the graph to minimize the negative impact of higher-order nodes. By optimizing attention mechanisms, FairGP mitigates the bias introduced by global attention, thereby enhancing fairness. Extensive empirical evaluations on six real-world datasets validate the superior performance of FairGP in achieving fairness compared to state-of-the-art methods. The code is available at https://github.com/luorenqiang/fairgp.
|
arxiv:2412.10669
|
We develop a new method to isolate localized defects from extended vibrational modes in disordered solids. This method augments particle interactions with an artificial potential that acts as a high-pass filter: it preserves small-scale structures while pushing extended vibrational modes to higher frequencies. The low-frequency modes that remain are "bare" defects; they are exponentially localized, without the quadrupolar tails associated with elastic interactions. We demonstrate that these localized excitations are excellent predictors of plastic rearrangements in the solid. We characterize several of the properties of these defects that appear in mesoscopic theories of plasticity, including their distribution of energy barriers, number density, and size, which is a first step in testing and revising continuum models for plasticity in disordered solids.
|
arxiv:1502.00685
|
As an important type of dynamics on complex networks, spreading is widely used to model many real processes, such as epidemic contagion and information propagation. One of the most significant research questions in spreading is how to rank the spreading ability of nodes in the network. To this end, substantial effort has been made and a variety of effective methods have been proposed. These methods usually define the spreading ability of a node as the number of finally infected nodes, given that the spreading is initialized from that node. However, in many real cases, such as advertising and medical science, the spreading only aims to cover a specific group of nodes. Therefore, it is necessary to study the spreading ability of nodes towards localized targets in complex networks. In this paper, we propose a reversed local path algorithm for this problem. Simulation results show that our method outperforms the existing methods in identifying the influential nodes with respect to these localized targets. Moreover, the influential spreaders identified by our method can effectively avoid infecting the non-target nodes in the spreading process.
|
arxiv:1512.05612
|
Here I present an introduction to the results that have recently been obtained in constraint optimization of random problems using statistical mechanics techniques. After presenting the general results, in order to simplify the presentation, I describe in detail the problems related to the coloring of a random graph.
|
arxiv:cond-mat/0602350
|
The success of various applications, including robotics, digital content creation, and visualization, demands a structured and abstract representation of the 3D world from limited sensor data. Inspired by the nature of human perception of 3D shapes as a collection of simple parts, we explore such an abstract shape representation based on primitives. Given a single depth image of an object, we present 3D-PRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives. Our generative model encodes symmetry characteristics of common man-made objects, preserves long-range structural coherence, and describes objects of varying complexity with a compact representation. We also propose a method based on Gaussian fields to generate a large-scale dataset of primitive-based shape representations to train our network. We evaluate our approach on a wide range of examples and show that it outperforms nearest-neighbor-based shape retrieval methods and is on par with voxel-based generative models while using a significantly reduced parameter space.
|
arxiv:1708.01648
|
We show that the dynamics of particles in a one-dimensional harmonic trap with hard-core interactions can be solvable for certain arrangements of unequal masses. For any number of particles, there exist two families of unequal-mass particles that have integrable dynamics, and there are additional exceptional cases for three, four and five particles. The integrable mass families are classified by Coxeter reflection groups, and the corresponding solutions are Bethe-ansatz-like superpositions of hyperspherical harmonics in the relative hyperangular coordinates that are then restricted to sectors of fixed particle order. We also provide evidence for superintegrability of these Coxeter mass families and conjecture maximal superintegrability.
|
arxiv:1704.01433
|
In this paper, we investigate the problem of verifying the finite-time safety of continuous-time perturbed deterministic systems represented by ordinary differential equations in the presence of measurable disturbances. Given a finite time horizon, the system is safe if, starting from a compact initial set, it remains within an open and bounded safe region throughout the specified time horizon, regardless of the disturbances. The main contribution of this work is to show that there exists a time-dependent barrier certificate if and only if the system is safe. This barrier certificate satisfies the following conditions: negativity over the initial set at the initial time instant, non-negativity over the boundary of the safe set, and non-increasing behavior along the system dynamics over the specified finite time horizon. The existence problem is explored using a Hamilton-Jacobi differential equation, which has a unique Lipschitz viscosity solution.
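One minimal way to write the three stated conditions, for dynamics $\dot{x}=f(x,d)$ with initial set $X_0$, safe set $S$, horizon $[0,T]$, and a differentiable certificate $B(x,t)$ (this notation is ours, not necessarily the paper's):

$$B(x,0)<0 \;\;\forall x\in X_0;\qquad B(x,t)\ge 0 \;\;\forall x\in\partial S,\ t\in[0,T];\qquad \frac{\partial B}{\partial t}+\nabla_x B\cdot f(x,d)\le 0 \;\;\forall x\in S,\ t\in[0,T],\ \forall d.$$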
|
arxiv:2402.17167
|
In this study, we investigate the effects of pre-germinative and post-germinative plasma treatments, applied separately or in combination, to improve maize germination and early seedling development. The pre-germinative treatment consists of priming the seeds with a dry atmospheric plasma (DAP) generated by a dielectric barrier discharge (DBD) device, characterized by minimal radiative emission, low electrical power (4 W) and high emissions of O, OH and NO radicals. The post-germinative treatment, known as plasma-activated water (PAW), uses a single-pin electrode device (SPED) to generate a DC discharge that features a power of 126 W and produces large amounts of OH radicals. The resulting PAW, after 5 minutes of SPED treatment, shows a slight acidification and increased concentrations of nitrate ions (from 24 to 250 mg/L), nitrite ions (from less than 0.1 to 56.1 mg/L) and hydrogen peroxide (from 0.3 to 18.5 mg/L). Results indicate that DAP applied to maize seeds for 20 min boosts their germination rate up to 90% (versus only 65% for untreated seeds) while reducing the median germination time by 37.5%. Seedling growth is then monitored for the control, DAP, PAW and DAP+PAW groups to assess stem length, hypocotyl length, leaf count, collar diameter and fresh/dry mass. The DAP+PAW group shows the most robust growth, demonstrating a synergistic effect of the combined treatments, particularly with significantly longer stem lengths. Additionally, physiological analyses of seedling leaves indicate a decrease in chlorophyll content despite enhanced growth, while fluorescence microscopy reveals a reduction in stomatal density in leaves treated with DAP and PAW, especially in the combined treatment group, potentially impacting photosynthetic efficiency and water regulation.
|
arxiv:2412.09759
|
We extend Langdon Winner's idea that artifacts have politics into the realm of mathematics. To do so, we first provide a list of examples showing the existence of mathematical artifacts that have politics. In the second step, we provide an argument that shows that all mathematical artifacts have politics. We conclude by showing the implications for embedding ethics into mathematical curricula. We show how acknowledging that mathematical artifacts have politics can help mathematicians design better exercises for their mathematics students.
|
arxiv:2308.04871
|
During the last decades, quantum dots within the Coulomb blockade regime of transport have been proposed as essential building blocks for a wide variety of nanomachines. This includes thermoelectric devices, quantum shuttles, quantum pumps, and even quantum motors. However, in this regime, the role of quantum mechanics is commonly limited to providing energy quantization, while the working principle of the devices is ultimately the same as that of their classical counterparts. Here, we study quantum-dot-based nanomachines in the Coulomb blockade regime, but in a configuration where the coherent superposition of the dots' states plays a crucial role. We show that the studied system can be used as the basis for different forms of "true" quantum machines that should only work in the presence of these coherent superpositions. We analyze the efficiency of these machines against different nonequilibrium sources (bias voltage, temperature gradient, and external driving) and the factors that limit it, including decoherence and the role of the different orders appearing in the adiabatic expansion of the charge/heat currents.
|
arxiv:2102.04408
|
evidence as possible to be collected and analyzed according to standard protocols that are then disseminated to healthcare providers. The Cochrane Collaboration leads this movement. A 2001 review of 160 Cochrane systematic reviews revealed that, according to two readers, 21.3% of the reviews concluded insufficient evidence, 20% concluded evidence of no effect, and 22.5% concluded positive effect.

== Quality, efficiency, and access ==
Evidence-based medicine, prevention of medical error (and other "iatrogenesis"), and avoidance of unnecessary health care are a priority in modern medical systems. These topics generate significant political and public policy attention, particularly in the United States, where healthcare is regarded as excessively costly but population health metrics lag behind those of similar nations. Globally, many developing countries lack access to care and access to medicines. As of 2015, most wealthy developed countries provide health care to all citizens, with a few exceptions such as the United States, where lack of health insurance coverage may limit access.
|
https://en.wikipedia.org/wiki/Medicine
|
We find a new class of (2,0)-supersymmetric two-dimensional sigma models with torsion and target spaces almost complex manifolds, extending similar results for models with (2,2) supersymmetry. These models are invariant under a new symmetry which is generated by a Noether charge of Lorentz weight one and is associated to the Nijenhuis tensor of the almost complex structure of the sigma model target manifold. We compute the Poisson bracket algebra of charges of the above (2,0)- and (2,2)-supersymmetric sigma models and show that it closes but is not isomorphic to the standard (2,0) and (2,2) supersymmetry algebra, respectively. Examples of such (2,0)- and (2,2)-supersymmetric sigma models with target spaces group manifolds are also given. In addition, we study the quantisation of the (2,0)-supersymmetric sigma models, compute the anomalies of their classical symmetries and examine their cancellation. Furthermore, we examine the massive extension of (2,0)-supersymmetric sigma models with target spaces almost complex manifolds, and study the topological twist of the new supersymmetry algebras.
|
arxiv:hep-th/9503063
|
A recent result of Griffin, Ono, Rolen and Zagier on Jensen polynomials related to the Riemann zeta function is improved.
|
arxiv:2105.05386
|
The Large Array Survey Telescope (LAST) is a wide-field telescope designed to explore the variable and transient sky with a high cadence and to be a test-bed for cost-effective telescope design. A LAST node is composed of 48 (32 already deployed) 28-cm f/2.2 telescopes. A single telescope has a 7.4 deg^2 field of view and reaches a 5-sigma limiting magnitude of 19.6 (21.0) in 20 s (20x20 s) (filter-less), while the entire system provides a 355 deg^2 field of view. The basic strategy of LAST is to obtain multiple 20-s consecutive exposures of each field (a visit). Each telescope carries a 61 Mpix camera, and the system produces, on average, about 2.2 Gbit/s. This high data rate is analyzed in near real-time at the observatory site, using limited computing resources (about 700 cores). Given this high data rate, we have developed a new, efficient data reduction and analysis pipeline. The data pipeline includes two major parts: (i) processing and calibration of single images, followed by a coaddition of the visit's exposures; and (ii) building the reference images and performing image subtraction and transient detection. Here we describe in detail the first part of the pipeline. Among the products of this pipeline are photometrically and astrometrically calibrated single and coadded images, 32-bit mask images marking a wide variety of problems and states of each pixel, source catalogs built from individual and coadded images, point spread function (PSF) photometry, merged source catalogs, proper motion and variability indicators, minor planet detections, calibrated light curves, and matching with external catalogs. The entire pipeline code is made public. Finally, we demonstrate the pipeline performance on real data taken by LAST.
|
arxiv:2310.13063
|
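as a rough illustration of the coaddition step in the pipeline described above, the sketch below stacks a visit's exposures with inverse-variance weighting after scaling each frame to a common photometric zero point. this is a minimal stand-in, not the published pipeline; the function name, zero-point handling and array shapes are our own assumptions.

```python
import numpy as np

def coadd_visit(exposures, zero_points, var_maps):
    """Coadd a visit's exposures (stack of 2-D images): scale each frame to
    a common photometric zero point, then average with inverse-variance
    weights. exposures and var_maps have shape (n_exp, ny, nx)."""
    scale = 10.0 ** (-0.4 * (zero_points - zero_points.max()))
    scaled = exposures / scale[:, None, None]       # frames on one flux scale
    weights = scale[:, None, None] ** 2 / var_maps  # inverse variance of scaled frames
    return (scaled * weights).sum(axis=0) / weights.sum(axis=0)

# a visit of 20 simulated 20-s exposures of a constant sky plus noise
rng = np.random.default_rng(0)
exps = 100.0 + rng.normal(0.0, 5.0, size=(20, 64, 64))
zps = np.full(20, 25.0)
coadded = coadd_visit(exps, zps, np.full_like(exps, 25.0))
print(coadded.std(), "vs single-frame noise", exps[0].std())  # ~5/sqrt(20)
```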
using $(448.1\pm2.9)\times10^6$ $\psi(3686)$ events collected with the besiii detector at the bepcii collider, the decay $\psi(3686)\to\Sigma^-\bar\Sigma^+$ is observed for the first time with a branching fraction of $(2.82\pm0.04_{\rm stat.}\pm0.08_{\rm syst.})\times10^{-4}$, and the angular parameter $\alpha_{\Sigma^-}$ is measured to be $0.96\pm0.09_{\rm stat.}\pm0.03_{\rm syst.}$.
|
arxiv:2209.14564
|
we study the problem of sequential prediction and online minimax regret with stochastically generated features under a general loss function. we introduce a notion of expected worst case minimax regret that generalizes and encompasses prior known minimax regrets. for such minimax regrets we establish tight upper bounds via a novel concept of stochastic global sequential covering. we show that for a hypothesis class of vc - dimension $\mathsf{VC}$ and i.i.d. generated features of length $T$, the cardinality of the stochastic global sequential covering can be upper bounded with high probability ( whp ) by $e^{O(\mathsf{VC}\cdot\log^2 T)}$. we then improve this bound by introducing a new complexity measure called the star - littlestone dimension, and show that classes with star - littlestone dimension $\mathsf{SL}$ admit a stochastic global sequential covering of order $e^{O(\mathsf{SL}\cdot\log T)}$. we further establish upper bounds for real valued classes with finite fat - shattering numbers. finally, by applying information - theoretic tools of the fixed design minimax regrets, we provide lower bounds for the expected worst case minimax regret. we demonstrate the effectiveness of our approach by establishing tight bounds on the expected worst case minimax regrets for logarithmic loss and general mixable losses.
|
arxiv:2209.04417
|
we prove strichartz estimates in similarity coordinates for the radial wave equation with a self similar potential in dimensions $d\geq 3$. as an application of these, we establish the asymptotic stability of the ode blowup profile of the energy critical radial nonlinear wave equation for $3\leq d\leq 6$.
|
arxiv:2204.03388
|
$\mathcal{PT}$-symmetric systems have attracted extensive attention in recent years because of their unique properties and applications. how to simulate a $\mathcal{PT}$-symmetric system in a traditional quantum mechanical system has not only fundamental theoretical significance but also practical value. we propose a dynamics simulation scheme for an arbitrary time - dependent $\mathcal{PT}$-symmetric system based on density operators, and the results are compatible with previous methods based on pure - state vectors. based on the above, we are able to study the influence of quantum noises on the simulation results with the technique of vectorization of density operators and matrixization of superoperators ( vdms ), and we show the depolarizing ( dep ) noise is the most fatal and should be avoided as much as possible. meanwhile, we also give a numerical analysis. we find that the problem of the chronological product usually has to be solved not only in the numerical calculation, but even in the experiment, because the dilated higher - dimensional hamiltonian is usually time - dependent. through theoretical analysis and numerical calculation, we find that, on the premise of meeting the goal of calculation accuracy and saving computing resources, the time step of the calculation and the cut - off term of the magnus series have to be carefully balanced.
|
arxiv:2203.08776
|
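the vectorization-of-density-operators technique mentioned above is easy to illustrate: with row-major vectorization, vec(AXB) = (A ⊗ Bᵀ) vec(X), so any superoperator becomes an ordinary matrix that can be exponentiated. the sketch below applies this to a plain hermitian hamiltonian as a minimal example; it is not the paper's $\mathcal{PT}$-symmetric construction.

```python
import numpy as np
from scipy.linalg import expm

def vec(rho):
    """Row-major vectorization of a density matrix."""
    return rho.reshape(-1)

def unvec(v, d):
    return v.reshape(d, d)

def liouvillian(H):
    """Superoperator matrix of rho -> -i[H, rho] under row-major vec,
    using vec(A @ X @ B) = kron(A, B.T) @ vec(X)."""
    d = H.shape[0]
    I = np.eye(d)
    return -1j * (np.kron(H, I) - np.kron(I, H.T))

# evolve a qubit under H = sigma_x by exponentiating the superoperator
sx = np.array([[0, 1], [1, 0]], dtype=complex)
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
rho_t = unvec(expm(liouvillian(sx) * 1.0) @ vec(rho0), 2)
print(np.real(np.trace(rho_t)), np.real(rho_t[1, 1]))  # trace 1, population sin^2(1)
```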
we provide, under minimal continuity assumptions, a description of \textsl{additive partition entropies}. they are real functions $I$ on the set of finite partitions that are additive on stochastically independent partitions in a given probability space.
|
arxiv:1202.4591
|
human mortality data sets can be expressed as multiway data arrays, the dimensions of which correspond to categories by which mortality rates are reported, such as age, sex, country and year. regression models for such data typically assume an independent error distribution or an error model that allows for dependence along at most one or two dimensions of the data array. however, failing to account for other dependencies can lead to inefficient estimates of regression parameters, inaccurate standard errors and poor predictions. an alternative to assuming independent errors is to allow for dependence along each dimension of the array using a separable covariance model. however, the number of parameters in this model increases rapidly with the dimensions of the array and, for many arrays, maximum likelihood estimates of the covariance parameters do not exist. in this paper, we propose a submodel of the separable covariance model that estimates the covariance matrix for each dimension as having factor analytic structure. this model can be viewed as an extension of factor analysis to array - valued data, as it uses a factor model to estimate the covariance along each dimension of the array. we discuss properties of this model as they relate to ordinary factor analysis, describe maximum likelihood and bayesian estimation methods, and provide a likelihood ratio testing procedure for selecting the factor model ranks. we apply this methodology to the analysis of data from the human mortality database, and show in a cross - validation experiment how it outperforms simpler methods. additionally, we use this model to impute mortality rates for countries that have no mortality data for several years. unlike other approaches, our methodology is able to estimate similarities between the mortality rates of countries, time periods and sexes, and use this information to assist with the imputations.
|
arxiv:1211.3813
|
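a minimal sketch of the separable-covariance idea discussed above: an array-normal error is drawn by multiplying iid noise along each mode by the cholesky factor of that mode's covariance, each of which here has factor-analytic (low rank plus diagonal) structure. dimensions and ranks are illustrative, not those of the mortality data.

```python
import numpy as np

rng = np.random.default_rng(1)

def factor_cov(k, r):
    """Factor-analytic covariance: low-rank loadings plus diagonal noise."""
    lam = rng.normal(size=(k, r))
    return lam @ lam.T + np.diag(rng.uniform(0.5, 1.0, k))

# separable covariance along three array dimensions (e.g. age x sex x year)
covs = [factor_cov(4, 1), factor_cov(2, 1), factor_cov(10, 2)]
chols = [np.linalg.cholesky(c) for c in covs]

# draw one array-normal error: multiply iid noise along each mode
z = rng.normal(size=(4, 2, 10))
e = np.einsum('ia,jb,kc,abc->ijk', *chols, z)
print(e.shape)  # (4, 2, 10) array with covariance kron(covs[0], covs[1], covs[2])
```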
in this paper, numerical and solitonic solutions of the korteweg - de vries ( kdv ) and korteweg - de vries - burgers ( kdvb ) equations with initial and boundary conditions are calculated by the sinc - collocation method, whose basis functions are sinc functions. first, the time derivative of the kdv and kdvb equations is discretized using a classic finite difference formula and the space derivatives by a $\theta$-weighted scheme between two successive time levels; then sinc functions are used to solve these two equations. mathematica programming is used to solve the matrix representation of these equations. the kdv equation, a third - order nonlinear partial differential equation ( pde ), describes the behavior of traveling waves. maximum absolute errors are given in tables. the figures show approximate solutions of these two equations. three conservation laws for the kdv equation are obtained.
|
arxiv:1209.1782
|
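to make the kdv setting concrete, here is a small time-stepping sketch. it substitutes periodic central differences for the paper's sinc-collocation basis and rk4 for its $\theta$-weighted scheme, so it is only a stand-in that shares the discretize-in-time / solve-in-space structure; the one-soliton test is standard.

```python
import numpy as np

def rhs(u, dx):
    """Spatial discretization of -(6 u u_x + u_xxx) with periodic
    central differences (a simple stand-in for sinc collocation)."""
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxxx = (np.roll(u, -2) - 2 * np.roll(u, -1)
            + 2 * np.roll(u, 1) - np.roll(u, 2)) / (2 * dx ** 3)
    return -(6 * u * ux + uxxx)

def rk4_step(u, dx, dt):
    k1 = rhs(u, dx)
    k2 = rhs(u + 0.5 * dt * k1, dx)
    k3 = rhs(u + 0.5 * dt * k2, dx)
    k4 = rhs(u + dt * k3, dx)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# one-soliton test: u(x,0) = (c/2) sech^2(sqrt(c)/2 x) travels at speed c
c, N, L = 1.0, 256, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx, dt = x[1] - x[0], 5e-4
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * x) ** 2
for _ in range(int(1.0 / dt)):        # integrate to t = 1
    u = rk4_step(u, dx, dt)
print("peak moved to x =", x[np.argmax(u)])  # should be near x = c * t = 1
```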
in this paper, we introduce libriheavy, a large - scale asr corpus consisting of 50, 000 hours of read english speech derived from librivox. to the best of our knowledge, libriheavy is the largest freely - available corpus of speech with supervisions. different from other open - sourced datasets that only provide normalized transcriptions, libriheavy contains richer information such as punctuation, casing and text context, which brings more flexibility for system building. specifically, we propose a general and efficient pipeline to locate, align and segment the audios in the previously published librilight to their corresponding texts. the same as librilight, libriheavy also has three training subsets, small, medium and large, of sizes 500h, 5000h and 50000h respectively. we also extract the dev and test evaluation sets from the aligned audios and guarantee there are no overlapping speakers or books in the training sets. baseline systems are built on the popular ctc - attention and transducer models. additionally, we open - source our dataset creation pipeline, which can also be applied to other audio alignment tasks.
|
arxiv:2309.08105
|
the increasing recognition of the association between adverse human health conditions and many environmental substances as well as processes has led to the need to monitor them. an important problem that arises in environmental statistics is the design of the locations of the monitoring stations for those environmental processes of interest. one particular design criterion for monitoring networks that tries to reduce the uncertainty about predictions of unseen processes is called the maximum - entropy design. however, this design criterion involves a hard optimization problem that is computationally intractable for large data sets. previous work of wang et al. ( 2017 ) examined a probabilistic model that can be implemented efficiently to approximate the underlying optimization problem. in this paper, we attempt to establish statistically sound tools for assessing the quality of the approximations.
|
arxiv:2002.01019
|
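the maximum-entropy design criterion mentioned above is commonly approximated greedily: repeatedly add the site with the largest conditional variance given the sites already chosen, which greedily maximizes the log-determinant of the selected covariance block. the sketch below implements that baseline greedy heuristic, not the probabilistic model of wang et al. ( 2017 ).

```python
import numpy as np

def greedy_max_entropy(K, m):
    """Greedy approximation to the maximum-entropy design: at each step add
    the site with the largest conditional variance given the chosen set
    (equivalently, greedily maximize log det K[S, S])."""
    chosen = []
    cond_var = np.diag(K).copy()
    for _ in range(m):
        j = int(np.argmax(cond_var))
        chosen.append(j)
        K_SS_inv = np.linalg.inv(K[np.ix_(chosen, chosen)])
        cross = K[:, chosen]
        # conditional variance of every site given the chosen set
        cond_var = np.diag(K) - np.einsum('ij,jk,ik->i', cross, K_SS_inv, cross)
        cond_var[chosen] = -np.inf   # never pick a site twice
    return chosen

# squared-exponential covariance over 100 candidate sites on a line
xs = np.linspace(0, 10, 100)
K = np.exp(-0.5 * (xs[:, None] - xs[None, :]) ** 2) + 1e-8 * np.eye(100)
print(sorted(xs[greedy_max_entropy(K, 5)]))  # roughly evenly spread sites
```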
the persistent current in a clean mesoscopic ring with ballistic electron motion is calculated. the particle dynamics inside a ring is assumed to be chaotic due to scattering at the surface irregularities of atomic size. this allows one to use the so - called "ballistic" supersymmetric $\sigma$ model for calculation of the two - level correlation function in the presence of a nonzero magnetic flux.
|
arxiv:cond-mat/9905249
|
deep learning aided codes have been shown to improve code performance in feedback codes in high noise regimes due to the ability to leverage non - linearity in code design. in the additive white gaussian broadcast channel ( awgn - bc ), the addition of feedback may allow the capacity region to extend far beyond the capacity region of the channel without feedback, enabling higher data rates. on the other hand, there are limited deep - learning aided implementations of broadcast codes. in this work, we extend two classes of deep - learning assisted feedback codes to the awgn - bc channel ; the first being an rnn - based architecture and the second being a lightweight mlp - based architecture. both codes are trained using a global model, and then they are trained using a more realistic vertical federated learning based framework. we first show that in most cases, using an awgn - bc code outperforms a linear - based concatenated scheme. second, we show in some regimes, the lightweight architecture far exceeds the rnn - based code, but in especially unreliable conditions, the rnn - based code dominates. the results show the promise of deep - learning aided broadcast codes in unreliable channels, and future research directions are discussed.
|
arxiv:2410.17404
|
the spectral and photometric imaging receiver ( spire ) on herschel has been carrying out deep extragalactic surveys, one of whose aims is to establish spectral energy distributions ( seds ) of individual galaxies spanning the infrared / submillimeter ( ir / smm ) wavelength region. we report observations of the ir / smm emission from the lockman north field ( ln ) and the great observatories origins deep survey field north ( goods - n ). because galaxy images in the wavelength range covered by herschel generally represent a blend with contributions from neighboring galaxies, we present sets of galaxies in each field especially free of blending at 250, 350, and 500 microns. we identify the cumulative emission of these galaxies and the fraction of the far infrared cosmic background radiation they contribute. our surveys reveal a number of highly luminous galaxies at redshift $z \lesssim 3$ and a novel relationship between infrared and visible emission that shows a dependence on luminosity and redshift.
|
arxiv:1009.1371
|
we study non - abelian vortex strings in ${\mathcal N}=2$ supersymmetric qcd with the gauge group U$(N)$ deformed by the mass $\mu$ of the adjoint matter. this deformation breaks ${\mathcal N}=2$ supersymmetry down to ${\mathcal N}=1$ and in the limit of large $\mu$ the theory flows to ${\mathcal N}=1$ qcd. non - abelian strings in addition to translational zero modes have orientation moduli. in the ${\mathcal N}=2$ limit of small $\mu$ the dynamics of orientational moduli is described by the two dimensional $CP(N-1)$ model for qcd with $N_f=N$ flavors of quark hypermultiplets. for the case of $N_f>N$ the non - abelian string becomes semilocal developing additional size moduli which modify the effective two dimensional $\sigma$-model on the string making its target space non - compact. in this paper we consider the $\mu$-deformed theory with $N_f>N$ eventually making $\mu$ large. we show that size moduli develop a potential that forces the string transverse size to shrink. eventually in the large $\mu$ limit size moduli decouple and the effective theory on the string reduces to the $CP(N-1)$ model. we also comment on physics of confined monopoles.
|
arxiv:1810.07149
|
in this paper, we perform a full next - to - leading order ( nlo ) qcd calculation of neutralino scattering on protons or neutrons in the mssm. we match the results of the nlo qcd calculation to the scalar and axial - vector operators in the effective field theory approach. these govern the spin - independent and spin - dependent detection rates, respectively. the calculations have been performed for general bino, wino and higgsino decompositions of neutralino dark matter and required a novel tensor reduction method of loop integrals with vanishing relative velocities and gram determinants. numerically, the nlo qcd effects are shown to be at least of similar size and sometimes larger than the currently estimated nuclear uncertainties. we also demonstrate the interplay of the direct detection rate with the relic density when consistently analyzed with the program $\mathtt{DM@NLO}$.
|
arxiv:1607.06396
|
in the vertex planarization problem one asks to delete the minimum possible number of vertices from an input graph to obtain a planar graph. the parameterized complexity of this problem, parameterized by the solution size ( the number of deleted vertices ), has recently attracted significant attention. the state - of - the - art algorithm of jansen, lokshtanov, and saurabh [ soda 2014 ] runs in time $2^{O(k\log k)}\cdot n$ on an $n$-vertex graph with a solution of size $k$. it remains open if one can obtain a single - exponential dependency on $k$ in the running time bound. one of the core technical contributions of the work of jansen, lokshtanov, and saurabh is an algorithm that solves a weighted variant of vertex planarization in time $2^{O(w\log w)}\cdot n$ on graphs of treewidth $w$. in this short note we prove that the running time of this routine is tight under the exponential time hypothesis, even in unweighted graphs and when parameterizing by treedepth. consequently, it is unlikely that a potential single - exponential algorithm for vertex planarization parameterized by the solution size can be obtained by merely improving upon the aforementioned bounded treewidth subroutine.
|
arxiv:1511.08283
|
in certain analytically - tractable quantum chaotic systems, the calculation of out - of - time - order correlation functions, entanglement entropies after a quench, and other related dynamical observables, reduces to an effective theory of an "entanglement membrane" in spacetime. these tractable systems involve an average over random local unitaries defining the dynamical evolution. we show here how to make sense of this membrane in more realistic models, which do not involve an average over random unitaries. our approach relies on introducing effective pairing degrees of freedom in spacetime, describing a pairing of forward and backward feynman trajectories, inspired by the structure emerging in random unitary circuits. this provides a framework for applying ideas of coarse - graining to dynamical quantities in chaotic systems. we apply the approach to some translationally invariant floquet spin chains studied in the literature. we show that a consistent line tension may be defined for the entanglement membrane, and that there are qualitative differences in this tension between generic models and "dual - unitary" circuits. these results allow scaling pictures for out - of - time - order correlators and for entanglement to be taken over from random circuits to non - random floquet models. we also provide an efficient numerical algorithm for determining the entanglement line tension in 1 + 1d.
|
arxiv:1912.12311
|
iota tangle is a distributed ledger technology ( dlt ), primarily designed for internet - of - things ( iot ) networks and applications. iota tangle utilizes a directed acyclic graph ( dag ) structure for the ledger, with its protocol offering features attractive to the iot domain, over most blockchain alternatives, such as feeless transactions, higher achievable transactions per second ( tps ), and lower energy consumption. the original iota implementation relied on a bootstrap centralized coordinator solution for consensus, which limited its degree of decentralization and scalability. this concern, alongside other limitations to its adoption such as the lack of smart contracts, is being addressed with the release of iota 2. 0. this update brings with it significant changes in order to remove the coordinator and achieve a scalable decentralized solution. to this end, this paper provides a technical overview of the key features of iota 2. 0 while discussing their relevance and benefits for the wider iot ecosystem. the paper also provides performance insights and future research directions for iota 2. 0.
|
arxiv:2209.04959
|
we study the propagation, observation and control properties of the 1 - d wave equation on a bounded interval discretized in space using the quadratic classical finite element approximation. a careful fourier analysis of the discrete wave dynamics reveals two different branches in the spectrum : the acoustic one, of physical nature, and the optic one, related to the perturbations that this second - order finite element approximation introduces with respect to the linear one. on both modes there are high frequencies with vanishing group velocity as the mesh size tends to zero. this shows that the classical property of continuous waves of being observable from the boundary fails to be uniform for this discretization scheme. as a consequence of this, the controls of the discrete waves may blow - up as the mesh size tends to zero. to remedy these high frequency pathologies, we design filtering mechanisms based on a bi - grid algorithm for which one can recover the uniformity of the observability constant in a finite time and, consequently, the possibility to control with uniformly bounded $L^2$-controls appropriate projections of the solutions. this also allows showing that, by relaxing the control requirement, the controls are uniformly bounded and converge to the continuous ones as the mesh size tends to zero.
|
arxiv:1112.4297
|
contraction properties of transport maps between probability measures play an important role in the theory of functional inequalities. the actual construction of such maps, however, is a non - trivial task and, so far, relies mostly on the theory of optimal transport. in this work, we take advantage of the infinite - dimensional nature of the gaussian measure and construct a new transport map, based on the föllmer process, which pushes forward the wiener measure onto probability measures on euclidean spaces. utilizing the tools of the malliavin and stochastic calculus in wiener space, we show that this brownian transport map is a contraction in various settings where the analogous questions for optimal transport maps are open. the contraction properties of the brownian transport map enable us to prove functional inequalities in euclidean spaces, which are either completely new or improve on current results. further and related applications of our contraction results are the existence of stein kernels with desirable properties ( which lead to new central limit theorems ), as well as new insights into the kannan - lovász - simonovits conjecture. we go beyond the euclidean setting and address the problem of contractions on the wiener space itself. we show that optimal transport maps and causal optimal transport maps ( which are related to brownian transport maps ) between the wiener measure and other target measures on wiener space exhibit very different behaviors.
|
arxiv:2111.11521
|
cell - free massive mimo ( cf - mmimo ) networks have recently emerged as a promising solution to tackle the challenges arising from next - generation massive machine - type communications. in this paper, a fully grant - free deep learning ( dl ) - based method for user activity detection in cf - mmimo networks is proposed. initially, the known non - orthogonal pilot sequences are used to estimate the channel coefficients between each user and the access points. then, a deep convolutional neural network is used to estimate the activity status of the users. the proposed method is "blind", i. e., it is fully data - driven and does not require prior estimation of the large - scale fading coefficients. numerical results show how the proposed dl - based algorithm is able to merge the information gathered by the distributed antennas to estimate the user activity status, while outperforming a state - of - the - art covariance - based method.
|
arxiv:2408.02359
|
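a minimal sketch of the detection stage described above: channel estimates for one pilot are arranged as a 2-channel (real / imaginary) image over access points and antennas, and a small convolutional network outputs an activity probability. the layer sizes and input shapes are our own illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ActivityCNN(nn.Module):
    """Toy activity detector: input is a map of estimated channel
    coefficients per (access point, antenna) for one pilot sequence;
    output is the probability that the corresponding user is active."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),   # 2 = real/imag parts
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, h):                  # h: (batch, 2, n_ap, n_ant)
        return torch.sigmoid(self.net(h)).squeeze(-1)

model = ActivityCNN()
h_hat = torch.randn(4, 2, 16, 8)           # fake channel estimates, 16 aps x 8 antennas
print(model(h_hat))                         # activity probabilities per user
```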
crops. = = = medicine = = = in medicine, modern biotechnology has many applications in areas such as pharmaceutical drug discoveries and production, pharmacogenomics, and genetic testing ( or genetic screening ). in 2021, nearly 40 % of the total company value of pharmaceutical biotech companies worldwide were active in oncology with neurology and rare diseases being the other two big applications. pharmacogenomics ( a combination of pharmacology and genomics ) is the technology that analyses how genetic makeup affects an individual ' s response to drugs. researchers in the field investigate the influence of genetic variation on drug responses in patients by correlating gene expression or single - nucleotide polymorphisms with a drug ' s efficacy or toxicity. the purpose of pharmacogenomics is to develop rational means to optimize drug therapy, with respect to the patients ' genotype, to ensure maximum efficacy with minimal adverse effects. such approaches promise the advent of " personalized medicine " ; in which drugs and drug combinations are optimized for each individual ' s unique genetic makeup. biotechnology has contributed to the discovery and manufacturing of traditional small molecule pharmaceutical drugs as well as drugs that are the product of biotechnology - biopharmaceutics. modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. the first genetically engineered products were medicines designed to treat human diseases. to cite one example, in 1978 genentech developed synthetic humanized insulin by joining its gene with a plasmid vector inserted into the bacterium escherichia coli. insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas of abattoir animals ( cattle or pigs ). the genetically engineered bacteria are able to produce large quantities of synthetic human insulin at relatively low cost. biotechnology has also enabled emerging therapeutics like gene therapy. the application of biotechnology to basic science ( for example through the human genome project ) has also dramatically improved our understanding of biology and as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well. genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child ' s parentage ( genetic mother and father ) or in general a person ' s ancestry. in addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk
|
https://en.wikipedia.org/wiki/Biotechnology
|
we tackle the task of stylizing video objects in an intuitive and semantic manner following a user - specified text prompt. this is a challenging task as the resulting video must satisfy multiple properties : ( 1 ) it has to be temporally consistent and avoid jittering or similar artifacts, ( 2 ) the resulting stylization must preserve both the global semantics of the object and its fine - grained details, and ( 3 ) it must adhere to the user - specified text prompt. to this end, our method stylizes an object in a video according to two target texts. the first target text prompt describes the global semantics and the second target text prompt describes the local semantics. to modify the style of an object, we harness the representational power of clip to get a similarity score between ( 1 ) the local target text and a set of local stylized views, and ( 2 ) a global target text and a set of stylized global views. we use a pretrained atlas decomposition network to propagate the edits in a temporally consistent manner. we demonstrate that our method can generate consistent style changes over time for a variety of objects and videos, that adhere to the specification of the target texts. we also show how varying the specificity of the target texts and augmenting the texts with a set of prefixes results in stylizations with different levels of detail. full results are given on our project webpage : https://sloeschcke.github.io/text-driven-stylization-of-video-objects/
|
arxiv:2206.12396
|
node representation learning for signed directed networks has received considerable attention in many real - world applications such as link sign prediction, node classification and node recommendation. the challenge lies in how to adequately encode the complex topological information of the networks. recent studies mainly focus on preserving the first - order network topology which indicates the closeness relationships of nodes. however, these methods generally fail to capture the high - order topology which indicates the local structures of nodes and serves as an essential characteristic of the network topology. in addition, for the first - order topology, the additional value of non - existent links is largely ignored. in this paper, we propose to learn more representative node embeddings by simultaneously capturing the first - order and high - order topology in signed directed networks. in particular, we reformulate the representation learning problem on signed directed networks from a variational auto - encoding perspective and further develop a decoupled variational embedding ( dve ) method. dve leverages a specially designed auto - encoder structure to capture both the first - order and high - order topology of signed directed networks, and thus learns more representative node embeddings. extensive experiments are conducted on three widely used real - world datasets. comprehensive results on both the link sign prediction and node recommendation tasks demonstrate the effectiveness of dve. qualitative results and analysis are also given to provide a better understanding of dve.
|
arxiv:2008.12450
|
using data obtained from first - principles calculations, we show that the position of the morphotropic phase boundary ( mpb ) and the transition temperature at the mpb in ferroelectric perovskite solutions can be predicted with quantitative accuracy from the properties of the constituent cations. we find that the mole fraction of pbtio$_3$ at the mpb in pb(b$'$b$''$)o$_3$-pbtio$_3$, bibo$_3$-pbtio$_3$ and bi(b$'$b$''$)o$_3$-pbtio$_3$ exhibits a linear dependence on the ionic size ( tolerance factor ) and the ionic displacements of the b - cations as found by density functional theory calculations. this dependence is due to competition between the local repulsion and a - cation displacement alignment interactions. inclusion of first - principles displacement data also allows accurate prediction of transition temperatures at the mpb. the obtained structure - property correlations are used to predict morphotropic phase boundaries and transition temperatures in as yet unsynthesized solid solutions.
|
arxiv:cond-mat/0509424
|
in this work, the leader - follower consensus objective is addressed by synthesizing an event - based controller utilizing sliding mode robust control. the scheme is partitioned into two parts : a finite time consensus problem and an event - triggered control mechanism. a nonlinear multi agent system with non - identical dynamics is put forward to illustrate the robust capabilities of the proposed control. the first part incorporates matching the states of the followers with those of the leader via a consensus tracking algorithm. in the subsequent part, an event - triggered rule is devised to save computational power and restrict periodic updating of the controller involved while ensuring the desired closed loop performance of the system. switching of the event - based controller is achieved via sliding mode control. an advantage of using a switched controller like sliding mode is that it retains its inherent robustness, while the event - triggering approach aids in saving energy expenditure. efficacy of the proposed scheme is confirmed via numerical simulations.
|
arxiv:1712.01152
|
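the interplay of event triggering and sliding mode control described above can be seen in a toy single-integrator tracking loop: the control u = xd' - k sign(s) is recomputed only when the sliding variable has drifted past a threshold since the last event. the dynamics, gains and triggering rule here are simplified assumptions, not the paper's multi-agent scheme.

```python
import numpy as np

def simulate(trigger_threshold=0.05, k=2.0, dt=1e-3, T=5.0):
    """Toy event-triggered sliding-mode tracking: a follower x chases a
    leader x_d; the control is updated only at triggering events."""
    n = int(T / dt)
    x, u, s_last, events = 0.0, 0.0, np.inf, 0
    for i in range(n):
        t = i * dt
        xd, xd_dot = np.sin(t), np.cos(t)        # leader trajectory
        s = x - xd                                # sliding variable
        if abs(s - s_last) > trigger_threshold:   # event-triggering rule
            u = xd_dot - k * np.sign(s)           # update control only at events
            s_last, events = s, events + 1
        x += u * dt                               # follower dynamics x' = u
    return events, n, abs(x - np.sin(T))

events, steps, err = simulate()
print(f"controller updates: {events}/{steps}, final tracking error: {err:.3f}")
```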
we numerically study the structure and dynamics of single files composed of active particles, as well as active - passive binary mixtures. our simulation results show that when the persistence length of the self - propelled particles is much larger than the average inter - particle separation and the self - propulsion velocity is larger than the thermal velocity, particles in the file exist as clusters of various sizes. the average cluster size and the structure of the file are very sensitive to the self - propulsion properties, thermal fluctuations and composition of the mixture. in addition to varying the file composition, our study considers two sorts of mixture configurations. one corresponds to a uniform distribution of active and passive particles throughout the single file. in the other configuration, active particles are on one side of the file. for both configurations, even a small fraction of active particles produces a large impact on the structure and dynamics of the file.
|
arxiv:2303.07843
|
we review our recent works on the supersymmetrization of the leading string correction ( the $R^4$ term ) to n = 1, 2 supergravity theories in four dimensions. we show that, in the " old minimal " formulations of these theories, when going on - shell in the presence of this correction, the auxiliary fields which come from multiplets with physical fields cannot be eliminated, but those that come from compensating multiplets without any physical fields can be eliminated. we conjecture similar results for other versions of these theories.
|
arxiv:hep-th/0306285
|
network theory provides a principled abstraction of the human brain : reducing a complex system into a simpler representation from which to investigate brain organisation. recent advances in the neuroimaging field are towards representing brain connectivity as a dynamic process in order to gain a deeper understanding of the interplay between functional modules for efficient information transport. in this work, we employ heat kernels to model the process of energy diffusion in functional networks. we extract node - based, multi - scale features which describe the propagation of heat over ' time ', which not only inform the importance of a node in the graph, but also incorporate local and global information about the underlying geometry of the network. as a proof - of - concept, we test the efficacy of two heat kernel features for discriminating between motor and working memory functional networks from the human connectome project. for comparison, we also classified task networks using traditional network metrics which similarly provide rankings of node importance. in addition, a variant of the smooth incremental graphical lasso estimation algorithm was used to estimate non - sparse precision matrices to account for non - stationarity in the time series. we illustrate differences in heat kernel features between tasks, and also between regions of the brain. using a random forest classifier, we showed heat kernel metrics to capture intrinsic properties of functional networks that serve well as features for task classification.
|
arxiv:1604.08912
|
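a minimal version of the heat-kernel features described above: compute exp(-tL) for the graph laplacian L at several diffusion times and keep each node's diagonal entry (the heat it retains), which mixes local structure at small t with global structure at large t. the toy two-module graph below stands in for a functional brain network; the feature choice is one simple possibility, not necessarily the paper's exact definition.

```python
import numpy as np
from scipy.linalg import expm

def heat_kernel_features(A, times=(0.1, 1.0, 10.0)):
    """Multi-scale node features from the heat kernel exp(-t L) of a graph:
    the diagonal ('heat retained' at each node) at several diffusion times."""
    L = np.diag(A.sum(axis=1)) - A            # combinatorial graph Laplacian
    return np.column_stack([np.diag(expm(-t * L)) for t in times])

# toy network: two dense modules joined by one bridge edge
A = np.zeros((8, 8))
A[:4, :4] = 1
A[4:, 4:] = 1
np.fill_diagonal(A, 0)
A[3, 4] = A[4, 3] = 1
print(np.round(heat_kernel_features(A), 3))  # bridge nodes 3 and 4 stand out
```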
we consider a novel application of inverse reinforcement learning with behavioral economics constraints to model, learn and predict the commenting behavior of youtube viewers. each group of users is modeled as a rationally inattentive bayesian agent which solves a contextual bandit problem. our methodology integrates three key components. first, to identify distinct commenting patterns, we use deep embedded clustering to estimate framing information ( essential extrinsic features ) that clusters users into distinct groups. second, we present an inverse reinforcement learning algorithm that uses bayesian revealed preferences to test for rationality : does there exist a utility function that rationalizes the given data, and if yes, can it be used to predict commenting behavior? finally, we impose behavioral economics constraints stemming from rational inattention to characterize the attention span of groups of users. the test imposes a rényi mutual information cost constraint which impacts how the agent can select attention strategies to maximize their expected utility. after a careful analysis of a massive youtube dataset, our surprising result is that in most youtube user groups, the commenting behavior is consistent with optimizing a bayesian utility with rationally inattentive constraints. the paper also highlights how the rational inattention model can accurately predict commenting behavior. the massive youtube dataset and analysis used in this paper are available on github and completely reproducible.
|
arxiv:1910.11703
|
we prove that a conjecture of fujita on semi - ampleness is true in the case of a rank one direct summand, though it is wrong in the higher rank case by a result of catanese and dettweiler.
|
arxiv:2012.00300
|
various aspects of modern statistical physics and meteorology can be tied together. critical comments have to be made. however, the historical importance of the university of wroclaw in the field of meteorology should first be pointed out. next, some basic differences in time and space scales between meteorology and climatology are outlined. the nature and role of clouds, both from a geometric and a thermal point of view, are recalled. recent studies of scaling laws for atmospheric variables are mentioned, such as studies on cirrus ice content, brightness temperature, liquid water path fluctuations and cloud base height fluctuations. technical time series analysis approaches based on modern statistical physics considerations are outlined.
|
arxiv:cond-mat/0402649
|
the concept of configuration was first introduced to give a characterization of the amenability of groups. then the concept of two - sided configuration was suggested to provide normality, so as to study group structures more efficiently. it has been an interesting question for which groups two - sided configuration equivalence implies isomorphism. we introduce a class of groups, containing polycyclic and fc groups, for which the notions of two - sided configuration equivalence and isomorphism coincide.
|
arxiv:1512.03021
|
a field - interaction scheme is introduced for describing the aharonov - bohm effect, fully consistent with the principle of relativity. our theory is based on the fact that local field interactions are present even when a particle moves only in a field - free region. the interaction lagrangian between a charge and a flux is uniquely constructed from three principles : lorentz covariance, linearity in the interaction strength, and a correct stationary limit of the charge. our result resolves fundamental questions raised about the standard interpretation of the aharonov - bohm effect, concerning its duality with the aharonov - casher effect and the equivalence between the potential and the field - interaction models for describing the electromagnetic interaction. most of all, the potential is eliminated in our theory, and all kinds of force - free aharonov - bohm effects are understood in a unified framework of the lorentz - covariant local interaction of electromagnetic fields.
|
arxiv:1308.2093
|
a two - stage procedure for simultaneously detecting multiple thresholds and achieving model selection in the segmented accelerated failure time ( aft ) model is developed in this paper. in the first stage, we formulate the threshold problem as a group model selection problem so that a concave 2 - norm group selection method can be applied. in the second stage, the thresholds are finalized via a refining method. we establish the strong consistency of the threshold estimates and regression coefficient estimates under some mild technical conditions. the proposed procedure performs satisfactorily in our extensive simulation studies. its real world applicability is demonstrated via analyzing a follicular lymphoma data set.
|
arxiv:1512.03500
|
we introduce and study an xy - type model of thermal and quantum phase fluctuations in a two - dimensional correlated lattice d - wave superconductor based on the qed3 effective theory of high temperature superconductors. general features of and selected results obtained within this model were reported earlier in an abbreviated format ( z. tesanovic, cond - mat / 0405235 ). the model is geared toward describing not only the long distance but also the intermediate lengthscale physics of underdoped cuprates. in particular, we elucidate the dynamical origin and investigate specific features of the cooper pair charge - density - wave ( cpcdw ), which we argue is the state behind the periodic charge density modulation discovered in recent stm experiments. we illustrate how mott - hubbard correlations near half - filling suppress superfluid density and favor an incompressible state which breaks translational symmetry of the atomic lattice. we show how the formation of the cpcdw in such a strongly quantum fluctuating superconductor can be understood as an abrikosov - hofstadter problem in a type - ii dual superconductor, with the role of the dual magnetic field played by the electron density. the resulting abrikosov lattice of dual vortices translates into the periodic modulation of the bogoliubov - degennes gap function and the electronic density. we compute detailed signatures of various abrikosov - hofstadter dual vortex arrays in the single - particle local tunneling density of states. a 4x4 checkerboard - type modulation pattern naturally arises as an energetically favored ground state at and near the x = 1 / 8 doping and produces good agreement with experimental observations.
|
arxiv:cond-mat/0408344
|
essentially all known quantum gates rely on a weak - coupling approximation resulting in linear dynamics. with the explicit example of trapped ions, we show how high - fidelity quantum gates can be achieved outside such an approximation, and we derive readily implementable driving fields to realize gates with extremely high fidelities for ions well outside the lamb - dicke regime with motional temperatures achievable by only doppler cooling.
|
arxiv:2003.11718
|
understanding collective mobility patterns is crucial to plan the restart of production and economic activities, which are currently on stand - by to fight the diffusion of the epidemics. in this report, we use mobile phone data to infer the movements of people between italian provinces and municipalities, and we analyze the incoming, outgoing and internal mobility flows before and during the national lockdown ( march 9th, 2020 ) and after the closure of non - necessary productive and economic activities ( march 23rd, 2020 ). the population flow across provinces and municipalities enables the modelling of a risk index tailored to the mobility of each municipality or province. such an index would be a useful indicator to drive counter - measures in reaction to a sudden reactivation of the epidemics. mobile phone data, even when aggregated to preserve the privacy of individuals, are a useful data source to track the evolution in time of human mobility, hence allowing for monitoring the effectiveness of control measures such as physical distancing. we address the following analytical questions : how does the mobility structure of a territory change? do incoming and outgoing flows become more predictable during the lockdown, and what are the differences between weekdays and weekends? can we detect proper local job markets based on human mobility flows, to eventually shape the borders of a local outbreak?
|
arxiv:2004.11278
|
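a toy version of the flow analysis above, assuming a simple origin-destination matrix. the risk index shown is our own illustrative definition (incoming flow weighted by origin prevalence), not the index developed in the report.

```python
import numpy as np

# hypothetical origin-destination matrix: od[i, j] = trips from province i to j
rng = np.random.default_rng(3)
od = rng.poisson(50, size=(5, 5))
np.fill_diagonal(od, rng.poisson(500, size=5))   # internal mobility on the diagonal

outgoing = od.sum(axis=1) - np.diag(od)
incoming = od.sum(axis=0) - np.diag(od)

# toy mobility-based risk index: expected imported cases, i.e. incoming
# flow into each province weighted by infection prevalence at the origin
prevalence = np.array([0.01, 0.002, 0.03, 0.001, 0.005])
risk = (od * prevalence[:, None]).sum(axis=0) - np.diag(od) * prevalence
print("incoming:", incoming)
print("risk index:", np.round(risk, 2))
```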
a self - training scheme geared at inducing students to improve their skills through independent homework is presented. the motivation is to identify an inexpensive, yet effective tool for raising the competence level of students in the fundamental sciences ( in particular physics ). since globally existing financial restrictions do not allow for extensive supervised work, a scheme is devised where the additional personal training is rewarded through bonuses in the grade, while safeguarding against the danger of cheating. overburdening the instructors is avoided through the use of computer - based grading of homework, while a carefully chosen bonus plan, weighted by the grades obtained in supervised tests, counters the effects of potential cheating.
|
arxiv:1809.04968
|
we realise bistability in the spinor of polariton condensates under non - resonant optical excitation and in the absence of biasing external fields. numerical modelling of the system using the ginzburg - landau equation with an internal josephson coupling between the two spin components of the condensate qualitatively describes the experimental observations. we demonstrate that polariton spin bistability persists for sweep times in the range of $[10\,\mu\mathrm{sec}, 1\,\mathrm{sec}]$ offering a promising route to spin switches and spin memory elements.
|
arxiv:1709.07351
|
after a derivation of the low - energy limit of qcd, this being a non - local nambu - jona - lasinio model, we are able to show that confinement emerges as a two - loop correction to the gluon propagator. one - gluon exchange is not enough, as recently shown in the literature on studies of the gluon propagator in the landau gauge.
|
arxiv:1208.3756
|
the usual dictionary between geometry and commutative algebra is not appropriate for arithmetic geometry because addition is a singular operation at the " real prime ". we replace rings, with addition and multiplication, by props ( = strict symmetric monoidal categories generated by one object ), or by bioperads ( = two closed symmetric operads acting on each other ) : to a ring we associate the prop of all matrices over it, with matrix multiplication and block direct sums as the basic operations, or the bioperad consisting of all row and column vectors over it. we define the " commutative " props and bioperads, and using them we develop a generalized algebraic geometry, following grothendieck's footsteps closely. this new geometry is appropriate for arithmetic ( and potentially also for physics ).
|
arxiv:2402.04456
|
qbf solvers implementing the qcdcl paradigm are powerful algorithms that successfully tackle many computationally complex applications. however, our theoretical understanding of the strength and limitations of these qcdcl solvers is very limited. in this paper we suggest to formally model qcdcl solvers as proof systems. we define different policies that can be used for decision heuristics and unit propagation and give rise to a number of sound and complete qbf proof systems ( and hence new qcdcl algorithms ). with respect to the standard policies used in practical qcdcl solving, we show that the corresponding qcdcl proof system is incomparable ( via exponential separations ) to q - resolution, the classical qbf resolution system used in the literature. this is in stark contrast to the propositional setting where cdcl and resolution are known to be p - equivalent. this raises the question of what formulas are hard for standard qcdcl, since q - resolution lower bounds do not necessarily apply to qcdcl, as we show here. in answer to this question we prove several lower bounds for qcdcl, including exponential lower bounds for a large class of random qbfs. we also introduce a strengthening of the decision heuristic used in classical qcdcl, which does not necessarily decide variables in order of the prefix, but still allows learning asserting clauses. we show that with this decision policy, qcdcl can be exponentially faster on some formulas. we further exhibit a qcdcl proof system that is p - equivalent to q - resolution. in comparison to classical qcdcl, this new qcdcl version adapts both decision and unit propagation policies.
|
arxiv:2109.04862
|
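for readers unfamiliar with the propagation machinery that the qcdcl policies above restrict or relax, here is plain propositional unit propagation, the core loop that cdcl and qcdcl solvers build on; quantifier handling and clause learning are omitted.

```python
def unit_propagate(clauses, assignment):
    """Propositional unit propagation: repeatedly assign the remaining
    literal of any clause whose other literals are all false. Literals are
    nonzero ints, negation is sign flip; returns None on conflict."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                          # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None                       # conflict: clause falsified
            if len(unassigned) == 1:              # unit clause found
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
cnf = [[1, 2], [-1, 3], [-2, -3]]
print(unit_propagate(cnf, {1: False}))  # forces x2=True, then x3=False
```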
we study the generation of intergalactic magnetic fields in two models for first - order phase transitions in the early universe that have been studied previously in connection with the generation of gravitational waves ( gws ) : the standard model supplemented by an $|H|^6$ operator ( sm + $H^6$ ) and a classically scale - invariant model with an extra gauged u(1)$_{B-L}$ symmetry ( sm$_{B-L}$ ). we consider contributions to magnetic fields generated by bubble collisions and by turbulence in the primordial plasma, and we consider the hypotheses that helicity is seeded in the gauge field or kinetically. we study the conditions under which the intergalactic magnetic fields generated may be larger than the lower bounds from blazar observations, and correlate them with the observability of gws and possible collider signatures. in the sm + $H^6$ model bubble collisions alone cannot yield large enough magnetic fields, whereas turbulence may do so. in the sm$_{B-L}$ model bubble collisions and turbulence may both yield magnetic fields above the blazar bound unless the $B-L$ gauge boson is very heavy. in both models there may be observable gw and collider signatures if sufficiently large magnetic fields are generated.
|
arxiv:1907.04315
|
fermat ' s last theorem is proved by using the philosophical and mathematical knowledge of 1637 when the french mathematician pierre de fermat claimed to have a truly marvelous proof of his conjecture. our approach consists of setting three variables of fermat ' s equation as integers and then evaluating whether the remaining variable can be an integer as well. pythagorean triples play a fundamental role in claiming that at least an irrational number is needed to satisfy fermat ' s equation. as a result, we confirm that fermat ' s last theorem is valid.
|
arxiv:2106.11775
|
teachable agents are computer agents based on the pedagogical concept of learning - by - teaching. during the tutoring process, where students take on the role of the tutor to teach a computer agent tutee, learners have been observed to gain a deeper understanding of the subject matter. teachable agents are commonly used in the areas of science and mathematics learning, where learners are able to learn complex concepts and deep reasoning by teaching the teachable agent through graphic representations such as concept maps. literature review on teachable agents, as well as observations during field studies conducted by the researcher, have shown that many current teachable agents lack the interaction abilities required to keep learners engaged in learning tasks. the result of this is learners deviating from the teaching process, and thus the learners are unable to benefit fully from learning with the teachable agent. the applications of teachable agents are restricted to the learning of academic subjects such as mathematics and science. in this book, we have proposed the persuasive teachable agent ( pta ), a teachable agent based on the theoretical framework of persuasion and computational, goal - oriented agent modelling. we argue that the pta, an autonomous agent capable of encouraging attitude and behavioural change, can offer more meaningful and engaging learning experiences for learners from different age groups. based on the findings from our research, we argue that persuasive feedback actions generated by the pta provide significant influence over learners ' decisions to participate in intergenerational learning. the pta plays a crucial role in the development of future persuasive technologies in artificially intelligent agents.
|
arxiv:1601.07264
|
residual neural networks are state - of - the - art deep learning models. their continuous - depth analog, neural ordinary differential equations ( odes ), are also widely used. despite their success, the link between the discrete and continuous models still lacks a solid mathematical foundation. in this article, we take a step in this direction by establishing an implicit regularization of deep residual networks towards neural odes, for nonlinear networks trained with gradient flow. we prove that if the network is initialized as a discretization of a neural ode, then such a discretization holds throughout training. our results are valid for a finite training time, and also as the training time tends to infinity provided that the network satisfies a polyak - lojasiewicz condition. importantly, this condition holds for a family of residual networks where the residuals are two - layer perceptrons with an overparameterization in width that is only linear, and implies the convergence of gradient flow to a global minimum. numerical experiments illustrate our results.
|
arxiv:2309.01213
|
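the discrete-to-continuous link discussed above is easy to see numerically: a residual network with residuals scaled by 1/depth is exactly the explicit euler discretization of x'(t) = f(x(t)) on [0, 1], and its output stabilizes as depth grows. the sketch below uses a fixed two-layer perceptron residual shared across depth for simplicity; in the paper the weights vary with depth.

```python
import numpy as np

rng = np.random.default_rng(0)

# two-layer perceptron residual f(x) = W2 tanh(W1 x)
d, width = 4, 16
W1 = rng.normal(scale=0.3, size=(width, d))
W2 = rng.normal(scale=0.3, size=(d, width))
f = lambda x: W2 @ np.tanh(W1 @ x)

def resnet(x, depth):
    """Residual network with 1/depth scaling: each layer is one explicit
    Euler step of the ODE x'(t) = f(x(t)) on the interval t in [0, 1]."""
    for _ in range(depth):
        x = x + f(x) / depth
    return x

x0 = rng.normal(size=d)
for depth in (8, 64, 512):
    print(depth, resnet(x0, depth))  # outputs converge to the ODE flow as depth grows
```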
aims : we investigate the formation of flux ropes in a flux emergence region and their rise into the outer atmosphere of the sun. methods : we perform 3d numerical experiments solving the time - dependent and resistive mhd equations. results : a sub - photospheric twisted flux tube rises from the solar interior and expands into the corona. a flux rope is formed within the expanding field, due to shearing and reconnection of field lines at low atmospheric heights. if the tube emerges into a non - magnetized atmosphere, the flux rope rises, but remains confined inside the expanding magnetized volume. on the contrary, if the expanding tube is allowed to reconnect with a preexisting coronal field, the flux rope experiences a full eruption with a rise profile which is in qualitative agreement with erupting filaments and coronal mass ejections.
|
arxiv:0811.1134
|
the desire to extend the hubble diagram to higher redshifts than the range of current type ia supernovae observations has prompted investigation into spectral correlations in gamma ray bursts, in the hope that standard candle - like properties can be identified. in this paper we discuss the potential of these new ' cosmic rulers ' and highlight their limitations by investigating the constraints that current data can place on an alternative cosmological model in the form of conformal gravity. by fitting current type ia supernovae and gamma ray burst ( grb ) data to the predicted luminosity distance redshift relation of both the standard concordance model and conformal gravity, we show that currently \emph{neither} model is strongly favoured at high redshift. the scatter in the current grb data testifies to the further work required if grbs are to cement their place as effective probes of the cosmological distance scale.
|
arxiv:astro-ph/0612089
|
in causal mediation analysis, the natural direct and indirect effects ( natural effects ) are nonparametrically unidentifiable in the presence of treatment - induced confounding, which motivated the development of randomized interventional analogues ( rias ) of the natural effects. the rias are easier to identify and widely used in practice. applied researchers often interpret ria estimates as if they were the natural effects, even though the rias could be poor proxies for the natural effects. this calls for practical and theoretical guidance on when the rias differ from or coincide with the natural effects, which this paper aims to address. we develop a novel empirical test for the divergence between the rias and the natural effects under the weak assumptions sufficient for identifying the rias and illustrate the test using the moving to opportunity study. we also provide new theoretical insights on the relationship between the rias and the natural effects from a covariance perspective and a structural equation perspective. additionally, we discuss previously undocumented connections between the natural effects, the rias, and estimands in instrumental variable analysis and wilcoxon - mann - whitney tests.
|
arxiv:2407.02671
|
cospas - sarsat distress frequency of 406 mhz. the satellites calculate the geographic location of the beacon within 2 km by measuring the doppler frequency shift of the radio waves due to the relative motion of the transmitter and the satellite, and quickly transmit the information to the appropriate local first responder organizations, which perform the search and rescue. radio direction finding ( rdf ) - this is a general technique, used since the early 1900s, of using specialized radio receivers with directional antennas ( rdf receivers ) to determine the exact bearing of a radio signal, to determine the location of the transmitter. the location of a terrestrial transmitter can be determined by simple triangulation from bearings taken by two rdf stations separated geographically, as the point where the two bearing lines cross ; this is called a " fix ". military forces use rdf to locate enemy forces by their tactical radio transmissions, counterintelligence services use it to locate clandestine transmitters used by espionage agents, and governments use it to locate unlicensed transmitters or interference sources. older rdf receivers used rotatable loop antennas ; the antenna is rotated until the radio signal strength is weakest, indicating the transmitter is in one of the antenna ' s two nulls. the nulls are used since they are sharper than the antenna ' s lobes ( maxima ). more modern receivers use phased array antennas which have a much greater angular resolution. animal migration tracking - a widely used technique in wildlife biology, conservation biology, and wildlife management in which small battery - powered radio transmitters are attached to wild animals so their movements can be tracked with a directional rdf receiver. sometimes the transmitter is implanted in the animal. the vhf band is typically used since antennas in this band are fairly compact. the receiver has a directional antenna ( typically a small yagi ) which is rotated until the received signal is strongest ; at this point the antenna is pointing in the direction of the animal. sophisticated systems used in recent years use satellites to track the animal, or geolocation tags with gps receivers which record and transmit a log of the animal ' s location. = = = = remote control = = = = radio remote control is the use of electronic control signals sent by radio waves from a transmitter to control the actions of a device at a remote location. remote control systems may also include telemetry channels in the other direction, used to transmit real - time information on the state of the device back to the control station. uncrewed spacecraft are an example of remote - controlled machines, controlled by commands transmitted by satellite ground stations
|
https://en.wikipedia.org/wiki/Radio
|
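the triangulation " fix " described in the excerpt above reduces to intersecting two bearing rays. a small sketch on a flat east/north plane, ignoring earth curvature; the station positions and bearings are made-up test values.

```python
import math

def fix_from_bearings(p1, b1, p2, b2):
    """Locate a transmitter from two RDF bearings: intersect the ray from
    station p1 at bearing b1 (degrees clockwise from north) with the ray
    from station p2 at bearing b2, in (east, north) coordinates."""
    d1 = (math.sin(math.radians(b1)), math.cos(math.radians(b1)))
    d2 = (math.sin(math.radians(b2)), math.cos(math.radians(b2)))
    # solve p1 + t1*d1 = p2 + t2*d2 for t1 by 2x2 elimination
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique fix")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# transmitter at (3, 4): station A at the origin, station B at (10, 0)
bA = math.degrees(math.atan2(3, 4))        # bearing from A
bB = math.degrees(math.atan2(3 - 10, 4))   # bearing from B
print(fix_from_bearings((0, 0), bA, (10, 0), bB))  # ~ (3.0, 4.0)
```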
we recommend using a model - centric, boolean satisfiability ( sat ) formalism to obtain useful explanations of trained model behavior, different and complementary to what can be gleaned from lime and shap, popular data - centric explanation tools in artificial intelligence ( ai ). we compare and contrast these methods, and show that data - centric methods may yield brittle explanations of limited practical utility. the model - centric framework, however, can offer actionable insights into risks of using ai models in practice. for critical applications of ai, split - second decision making is best informed by robust explanations that are invariant to properties of data, a capability offered by model - centric frameworks.
|
arxiv:2110.13937
|