Columns: text (string, lengths 1 to 3.65k characters) · source (string, lengths 15 to 79 characters)
Initially, TCP was designed with the notion that wired networks are generally reliable and that any segment loss is due to congestion rather than an unreliable medium (the assumption is that packet loss caused by damage is much less than 1 percent). This notion does not hold in the wireless parts of a network: wireless links are highly unreliable and lose segments all the time due to a number of factors. Very few papers use TCP for MANETs. In this paper, an attempt has been made to justify the use of the TCP variants Tahoe and Reno under packet loss caused by random noise introduced in the MANET. For the present analysis, simulations of the two variants were carried out with 0, 10, 20 and 30 percent of data packets lost to noise in the transmission link, and the effect on throughput and congestion window was examined. During the simulations we observed that throughput decreases when multiple segments are dropped. We further observed that Reno's throughput is better at 1 percent loss (Figure 5), which corresponds to a network with short error bursts and a low BER, causing only one segment to be lost. When multiple segments are lost due to the error-prone nature of the link, Tahoe performs better than Reno (Figure 13), giving a significant saving of time (64.28 percent) in comparison with Reno (Table 4). Several simulations were run with the ns-2 simulator in order to acquire a better understanding of these TCP variants and the way they perform. We conclude with a discussion of whether these TCP versions can be used in mobile ad hoc networks.
arxiv:1002.2403
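The Tahoe/Reno contrast in the abstract above comes down to how each variant reacts to segment loss. Below is a minimal Python sketch of those reactions, illustrating the standard textbook behavior rather than the paper's ns-2 setup:

```python
# Minimal sketch (not the paper's simulator): how Tahoe and Reno react to loss.
# Both grow cwnd via slow start / congestion avoidance; they differ on a loss
# event detected via triple duplicate ACKs.

def on_triple_dup_ack(variant, cwnd, ssthresh):
    """Return (cwnd, ssthresh) after a fast-retransmit event."""
    ssthresh = max(cwnd / 2, 2)
    if variant == "tahoe":
        cwnd = 1          # Tahoe: restart slow start from cwnd = 1
    elif variant == "reno":
        cwnd = ssthresh   # Reno: fast recovery, only halve the window
    return cwnd, ssthresh

def on_timeout(cwnd, ssthresh):
    # Both variants fall back to slow start on a retransmission timeout,
    # which is what tends to happen when multiple segments are lost at once.
    return 1, max(cwnd / 2, 2)
```

Reno's fast recovery wins for isolated losses; several losses in one window often force Reno into a timeout anyway, which is consistent with Tahoe's advantage under heavy noise reported above.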
The objective of this paper is to study the existence of the generalized Drazin inverse of the sum $a+b$ in a Banach algebra and to present explicit expressions for the generalized Drazin inverse of this sum, under new conditions.
arxiv:1803.01083
Graph convolutions have been a pivotal element in learning graph representations. However, recursively aggregating neighboring information with graph convolutions leads to indistinguishable node features in deep layers, which is known as the over-smoothing issue. The performance of graph neural networks decays fast as the number of stacked layers increases, and the Dirichlet energy associated with the graph decreases to zero as well. In this work, we introduce a framelet system into the analysis of Dirichlet energy and take a multi-scale perspective to leverage the Dirichlet energy and alleviate the over-smoothing issue. Specifically, we develop a framelet augmentation strategy by adjusting the update rules with positive and negative increments for the low-pass and high-pass components, respectively. Based on that, we design the Energy Enhanced Convolution (EEConv), an effective and practical operation that is proven to strictly enhance Dirichlet energy. From a message-passing perspective, EEConv inherits the multi-hop aggregation property of the framelet transform and takes into account all hops in the multi-scale representation, which benefits node classification over heterophilous graphs. Experiments show that deep GNNs with EEConv achieve state-of-the-art performance on various node classification datasets, especially heterophilous graphs, while also lifting the Dirichlet energy as the network goes deeper.
arxiv:2311.05767
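The over-smoothing claim above can be checked directly: the Dirichlet energy trace(X^T L X) of node features collapses under repeated plain graph convolutions. A small self-contained numpy illustration on a toy 4-node graph (not a framelet system):

```python
import numpy as np

# Dirichlet energy E(X) = trace(X^T L X), with L the normalized graph Laplacian.
# Repeated averaging-style graph convolutions drive it toward zero (over-smoothing).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
A_hat = D_inv_sqrt @ A @ D_inv_sqrt      # symmetric normalized adjacency
L = np.eye(4) - A_hat                    # normalized Laplacian

X = np.random.default_rng(0).normal(size=(4, 2))
for layer in range(10):
    X = A_hat @ X                        # a bare (weightless) graph convolution
    energy = np.trace(X.T @ L @ X)
    print(f"layer {layer + 1}: Dirichlet energy = {energy:.2e}")
```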
In this paper, we present a novel distributed state estimation approach in networked DC microgrids to detect false data injection in the microgrid control network. Each microgrid, monitored by a distributed state estimator, detects whether manipulated data has been received from neighboring microgrids for control purposes. A dynamic model supporting dynamic state estimation is constructed for the networked microgrids. An optimal distributed state estimator, robust to load disturbances but sensitive to false data injected from neighboring microgrids, is presented. To demonstrate the effectiveness of the proposed approach, we simulate a 12 kV three-bus networked DC microgrid in MATLAB/Simulink. Residual information corresponding to the false data injected from neighbors validates the efficacy of the proposed approach in detecting compromised agents of neighboring microgrids.
arxiv:1907.03139
Discovering the mass of neutrinos is a principal goal in high energy physics and cosmology. In addition to cosmological measurements based on two-point statistics, the neutrino mass can also be estimated from observations of neutrino wakes resulting from the relative motion between dark matter and neutrinos. Such a detection relies on an accurate reconstruction of the dark matter-neutrino relative velocity, which is affected by non-linear structure growth and galaxy bias. We investigate our ability to reconstruct this relative velocity using large N-body simulations in which we evolve neutrinos as distinct particles alongside the dark matter. We find that the dark matter velocity power spectrum is overpredicted by linear theory, whereas the neutrino velocity power spectrum is underpredicted. The magnitude of the relative velocity observed in the simulations is lower than predicted by linear theory. Since neither the dark matter nor the neutrino velocity fields are directly observable from galaxy or 21 cm surveys, we test the accuracy of a reconstruction algorithm based on halo density fields and linear theory. Assuming prior knowledge of the halo bias, we find that the reconstructed relative velocities are highly correlated with the simulated ones, with correlation coefficients of 0.94, 0.93, 0.91 and 0.88 for neutrinos of mass 0.05, 0.1, 0.2 and 0.4 eV. We confirm that the relative velocity field reconstructed from large scale structure observations such as galaxy or 21 cm surveys can be accurate in direction and, with appropriate scaling, magnitude.
arxiv:1503.07480
A Doctor of Science (Latin: scientiae doctor; most commonly abbreviated DSc or ScD) is a science doctorate awarded in a number of countries throughout the world. == Africa == === Algeria and Morocco === In Algeria, Morocco, Libya and Tunisia, all universities accredited by the state award a "doctorate" in all fields of science and humanities, equivalent to a PhD in the United Kingdom or United States. Some universities in these four North African countries award a "doctorate of the state" in some fields of study and science. A "doctorate of the state" is slightly higher in esteem than a regular doctorate, and is awarded after performing additional in-depth post-doctorate research or achievement. == Asia == === Japan === Similarly to the US and most of Europe, Japanese universities offer both the PhD and the ScD as initial doctorates in science. === India === In India only a few prestigious universities offer the ScD/DSc in science, which is obtained in graduate school after satisfactory evaluation of knowledge, research accomplishment, and a doctoral defence. The oldest institute to award a DSc degree in India is Rajabazar Science College, University of Calcutta. === Thailand === Higher education institutes in Thailand generally grant the PhD as a doctoral research degree; some universities, including Chulalongkorn University, award the DSc. As an exception, Mahidol University can grant both the PhD and the DSc. Doctoral students in the Faculty of Science are always awarded the PhD, but some other programs award the DSc. === Uzbekistan === DSc or PhD degrees are awarded after a dissertation and fulfilling the required number of publications. In order to qualify for the DSc, one is required to have attained a PhD. The higher education institutes in Uzbekistan also grant DSc degrees; as an example, the National University of Uzbekistan and the Uzbekistan Academy of Sciences offer the DSc in various fields. == Europe == === Austria, Germany, and Switzerland === In Germany, Austria, and the German-speaking region of Switzerland, common doctoral degrees in science are the following: Dr. techn.: doctor technicae, awarded by Austrian technical universities. In German: "Doktor der technischen Wissenschaften", which translates to Doctor of Engineering Sciences, Doctor of Science, Doctor of Technical Sciences, or Doctor of Technology. The Dr. techn. title is also awarded in Denmark. Dr. rer. nat.: doctor rerum naturalium …
https://en.wikipedia.org/wiki/Doctor_of_Science
We developed a minimum-gradient-based method to track ridge features in 2D image plots, which are a typical data representation in many momentum resolved spectroscopy experiments. Through both analytic formulation and numerical simulation, we compare this new method with existing DC (distribution curve) based and higher-order-derivative based analyses. We find that the new method has good noise resilience and enhanced contrast, especially for weak intensity features, while preserving the quantitative local maxima information from the raw image. An algorithm is proposed to extract the 1D ridge dispersion from the 2D image plot, and its quantitative application to angle-resolved photoemission spectroscopy measurements on high temperature superconductors is demonstrated.
arxiv:1612.07880
We report the discovery of TeV gamma-ray emission coincident with the shell-type radio supernova remnant (SNR) CTA 1 using the VERITAS gamma-ray observatory. The source, VER J0006+729, was detected as a 6.5 standard deviation excess over background and shows an extended morphology, approximated by a two-dimensional Gaussian of semi-major (semi-minor) axis 0.30 degree (0.24 degree) and a centroid 5' from the Fermi gamma-ray pulsar PSR J0007+7303 and its X-ray pulsar wind nebula (PWN). The photon spectrum is well described by a power law $dN/dE = N_0 (E/3\,\mathrm{TeV})^{-\Gamma}$, with a differential spectral index of $\Gamma = 2.2 \pm 0.2_{\rm stat} \pm 0.3_{\rm sys}$ and normalization $N_0 = (9.1 \pm 1.3_{\rm stat} \pm 1.7_{\rm sys}) \times 10^{-14}\ \mathrm{cm^{-2}\,s^{-1}\,TeV^{-1}}$. The integral flux, $F_\gamma = 4.0 \times 10^{-12}\ \mathrm{erg\,cm^{-2}\,s^{-1}}$ above 1 TeV, corresponds to 0.2% of the pulsar spin-down power at 1.4 kpc. The energetics, co-location with the SNR, and the relatively small extent of the TeV emission strongly argue for a PWN origin of the TeV photons. We consider the origin of the TeV emission in CTA 1.
arxiv:1212.4739
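The energetics statement above is a short arithmetic step: convert the integral flux into a luminosity at the quoted distance and compare with the pulsar spin-down power. A sketch, where the spin-down power of PSR J0007+7303 (about 4.5e35 erg/s) is taken from the literature rather than from this abstract:

```python
import math

# Arithmetic check of the quoted energetics: TeV luminosity vs. spin-down power.
F_GAMMA = 4.0e-12            # erg cm^-2 s^-1, integral flux above 1 TeV
D_CM = 1.4 * 3.086e21        # 1.4 kpc in cm
EDOT = 4.5e35                # erg/s, spin-down power of PSR J0007+7303 (literature value)

L_gamma = 4.0 * math.pi * D_CM**2 * F_GAMMA        # isotropic luminosity
print(f"L_gamma = {L_gamma:.1e} erg/s")            # ~9e32 erg/s
print(f"fraction of spin-down power: {L_gamma / EDOT:.1%}")  # ~0.2%, as quoted
```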
In this article, we study the étale cohomology of the compactification of Deligne-Lusztig varieties associated to a Coxeter element. We prove a result with integral coefficients in the case of the general linear group $GL_d$, and we conjecture that a similar result holds for general reductive groups.
arxiv:1310.7259
De revolutionibus, by the astronomer Nicolaus Copernicus, was first printed. The period culminated with the publication of the Philosophiæ Naturalis Principia Mathematica in 1687 by Isaac Newton, representative of the unprecedented growth of scientific publications throughout Europe. Other significant scientific advances were made during this time by Galileo Galilei, Johannes Kepler, Edmond Halley, William Harvey, Pierre Fermat, Robert Hooke, Christiaan Huygens, Tycho Brahe, Marin Mersenne, Gottfried Leibniz, Isaac Newton, and Blaise Pascal. In philosophy, major contributions were made by Francis Bacon, Sir Thomas Browne, René Descartes, Baruch Spinoza, Pierre Gassendi, Robert Boyle, and Thomas Hobbes. Christiaan Huygens derived the centripetal and centrifugal forces and was the first to transfer mathematical inquiry to describe unobservable physical phenomena. William Gilbert did some of the earliest experiments with electricity and magnetism, establishing that the Earth itself is magnetic. ==== Heliocentrism ==== The heliocentric astronomical model of the universe was refined by Nicolaus Copernicus. Copernicus proposed the idea that the Earth and all heavenly spheres, containing the planets and other objects in the cosmos, rotated around the Sun. His heliocentric model also proposed that all stars were fixed and did not rotate on an axis, nor in any motion at all. His theory proposed the yearly rotation of the Earth and the other heavenly spheres around the Sun and was able to calculate the distances of planets using deferents and epicycles. Although these calculations were not completely accurate, Copernicus was able to understand the distance order of each heavenly sphere. The Copernican heliocentric system was a revival of the hypotheses of Aristarchus of Samos and Seleucus of Seleucia. Aristarchus of Samos did propose that the Earth rotated around the Sun but did not mention anything about the other heavenly spheres' order, motion, or rotation. Seleucus of Seleucia also proposed the rotation of the Earth around the Sun but did not mention anything about the other heavenly spheres. In addition, Seleucus of Seleucia understood that the Moon rotated around the Earth and could be used to explain the tides of the oceans, thus further proving his understanding of the heliocentric idea. == Age of Enlightenment ==
https://en.wikipedia.org/wiki/History_of_science
First steps towards developing a new perturbation theory for molecular liquids are taken. By choosing a new form of splitting of the site-site potential functions between molecules, we obtain a set of atomic fluids as the reference system with known structure and thermodynamics. The perturbative part of the potential function is then expanded up to two terms. The excess Helmholtz free energy of the system is obtained as three computable contributions. The derivation shows that the excess Helmholtz free energy has nothing to do with intra-atomic potentials; all contributions come merely from inter-atomic potentials. The theory is then applied to compute the thermodynamics of two systems: hard-sphere chain and carbon dioxide molecular fluids. The results, compared with computer simulation data, show that the theory works well at low densities.
arxiv:1711.06723
We use the SmallGroups library to find the finite subgroups of U(3) of order smaller than 512 which possess a faithful three-dimensional irreducible representation. From the resulting list of groups we extract those groups that cannot be written as direct products with cyclic groups. These groups are the basic building blocks for models based on finite subgroups of U(3). All resulting finite subgroups of SU(3) can be identified using the well-known list of finite subgroups of SU(3) derived by Miller, Blichfeldt and Dickson at the beginning of the 20th century. Furthermore, we prove a theorem which allows one to construct infinite series of finite subgroups of U(3) from a special type of finite subgroup of U(3). This theorem is used to construct some new series of finite subgroups of U(3); the first members of these series can be found in the derived list of finite subgroups of U(3) of order smaller than 512. In the last part of this work we analyse some interesting finite subgroups of U(3), especially the group $S_4(2) \cong A_4 \rtimes Z_4$, which is closely related to the important SU(3) subgroup $S_4$.
arxiv:1006.1479
Evolutionary deep intelligence has recently shown great promise for producing small, powerful deep neural network models via the organic synthesis of increasingly efficient architectures over successive generations. Existing evolutionary synthesis processes, however, have allowed the mating of parent networks independent of architectural alignment, resulting in a mismatch of network structures. We present a preliminary study into the effects of architectural alignment during evolutionary synthesis using a gene tagging system. Surprisingly, the network architectures synthesized using the gene tagging approach resulted in slower decreases in performance accuracy and storage size; however, the resultant networks were comparable in size and performance accuracy to the non-gene-tagging networks. Furthermore, we speculate that there is a noticeable decrease in network variability for networks synthesized with gene tagging, indicating that enforcing a like-with-like mating policy potentially restricts the exploration of the search space of possible network architectures.
arxiv:1811.07966
The ErbB receptor family, including EGFR and HER2, plays a crucial role in cell growth and survival and is associated with the progression of various cancers such as breast and lung cancer. In this study, we developed a deep learning model to predict the binding affinity of ErbB inhibitors using molecular fingerprints derived from SMILES representations. The SMILES representations for each ErbB inhibitor were obtained from the ChEMBL database. We first generated Morgan fingerprints from the SMILES strings and applied AutoDock Vina docking to calculate the binding affinity values. After filtering the dataset based on binding affinity, we trained a deep neural network (DNN) model to predict binding affinity values from the molecular fingerprints. The model achieved significant performance, with a mean squared error (MSE) of 0.2591, mean absolute error (MAE) of 0.3658, and an R-squared value of 0.9389 on the training set. Although performance decreased slightly on the test set (R-squared = 0.7731), the model still demonstrated robust generalization capabilities. These results indicate that the deep learning approach is highly effective for predicting the binding affinity of ErbB inhibitors, offering a valuable tool for virtual screening and drug discovery.
arxiv:2501.05607
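The featurization step described above (SMILES string to Morgan fingerprint) is standard RDKit usage; a hedged sketch with an illustrative molecule, not the authors' code:

```python
# Requires rdkit (pip install rdkit). Function names here are illustrative.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def morgan_fingerprint(smiles: str, radius: int = 2, n_bits: int = 2048):
    """SMILES -> fixed-length Morgan (ECFP-like) bit vector, or None if unparsable."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(list(fp), dtype=np.float32)

# Example input: gefitinib, a well-known EGFR inhibitor (SMILES for illustration only).
x = morgan_fingerprint("COc1cc2ncnc(Nc3ccc(F)c(Cl)c3)c2cc1OCCCN1CCOCC1")
print(x.shape)   # (2048,) -> input vector for the regression network
```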
Collaborative virtual environments (CVEs) are used for collaboration and interaction among possibly many participants that may be spread over large distances. Both commercial and freely available CVEs exist today, and they are already used in a variety of fields: gaming, business, education, social communication, and cooperative development. In this paper, a general framework is proposed for the development of a cooperative environment able to exploit a multi-protocol network infrastructure. The framework offers support for concerns such as communication security and inter-protocol interoperability and lets software engineers focus on the specific business of the CVE under development. To show the framework's effectiveness we consider, as a case study, the design of a reusable software layer for the development of distributed card games built on top of it. This layer is, in turn, used for the implementation of a specific card game.
arxiv:1412.3260
The large-scale CMB B-mode polarization is the direct probe of the low frequency primordial gravitational wave signal. However, unambiguous measurement of this signal requires a precise understanding of the possible contamination. One such potential contamination arises from the patchiness in the spatial distribution of free electrons during the epoch of reionization. We estimate the B-mode power spectrum due to patchy reionization using a combination of \emph{photon-conserving} semi-numerical simulation and analytical calculation, and compare its amplitude with the primordial B-mode signal. For a reionization history which is in agreement with several latest observations, we find that a stronger secondary B-mode polarization signal is produced when the reionization is driven by the sources in massive halos, and its amplitude can be comparable to the recombination bump for a tensor-to-scalar ratio $(r) \lesssim 5 \times 10^{-4}$. If contamination from patchy reionization is neglected in the analysis of B-mode polarization data, then for the models of reionization considered in this analysis, we find a maximum bias of about $30\%$ in the value of $r = 10^{-3}$ when spatial modes between $\ell \in [50, 200]$ are used with a delensing efficiency of $50\%$. The inferred bias from patchy reionization is not a severe issue for the upcoming ground-based CMB experiment Simons Observatory, but can be a potential source of confusion for proposed CMB experiments which target detection of $r < 10^{-3}$. However, this obstacle can be removed by utilizing the difference in the shape of the power spectrum from the primordial signal.
arxiv:1903.01994
For every fixed integer $k \geq 1$, we prove that $k$-edge colouring is fixed-parameter tractable when parameterized by the number of vertices of maximum degree.
arxiv:1901.01861
Cloud-scale surveys of molecular gas reveal the link between molecular cloud properties and star formation (SF) across a range of galactic environments. Cloud populations in galaxy disks are considered to be representative of 'normal' SF. At high resolution, however, clouds with exceptional gas properties and SF activity may also be observed in normal disk environments. In this paper, we study the brightest cloud traced in CO emission in the disk of NGC 628. The cloud is spatially coincident with an extremely bright HII region. We characterize its molecular gas properties and investigate how feedback and large-scale processes influence the properties of the molecular gas. High resolution CO ALMA observations are used to characterize its mass and dynamical state, which are compared to other clouds in NGC 628. An LVG analysis is used to constrain the beam-diluted density and temperature of the molecular gas. We analyze the MUSE spectrum using Starburst99 to characterize the young stellar population associated with the HII region. The cloud is massive ($1-2 \times 10^7$ M$_{\odot}$), with a beam-diluted density of $n_{\rm H_2} = 5 \times 10^4$ cm$^{-3}$. It has a low virial parameter, suggesting that its CO emission may be overluminous due to heating by the HII region. A young ($2-4$ Myr), massive ($3 \times 10^{5}$ M$_{\odot}$) stellar population is associated with it. We argue that the cloud is currently being destroyed by feedback from young massive stars. Due to the cloud's large mass, this phase of the cloud's evolution is long enough for the impact of feedback on the excitation of the gas to be observed. Its high mass may be related to its location at a spiral co-rotation radius, where gas experiences reduced galactic shear compared to other regions of the disk and receives a sustained inflow of gas that can promote the cloud's mass growth.
arxiv:1910.14311
Turbulent boundary layers exhibit a universal structure which nevertheless is rather complex, being composed of a viscous sub-layer, a buffer zone, and a turbulent log-law region. In this Letter we present a simple analytic model of turbulent boundary layers which culminates in explicit formulae for the profiles of the mean velocity, the kinetic energy and the Reynolds stress as a function of the distance from the wall. The resulting profiles are in close quantitative agreement with measurements over the entire structure of the boundary layer, without any need of re-fitting in the different zones.
arxiv:nlin/0606035
We study the structure of nucleon pairs within a simple model consisting of a square well in three dimensions and a delta-function residual interaction between two weakly-bound particles at the Fermi surface. We include the continuum by enclosing the entire system in a large spherical box. To a good approximation, the continuum can be replaced by a small set of optimally-determined resonance states, suggesting that in many nuclei far from stability it may be possible to incorporate continuum effects within traditional shell-model based approximations.
arxiv:nucl-th/9509028
In quasi-persistent neutron star transients, long outbursts cause the neutron star crust to be heated out of thermal equilibrium with the rest of the star. During quiescence, the crust then cools back down. Such crustal cooling has been observed in two quasi-persistent sources: KS 1731-260 and MXB 1659-29. Here we present an additional Chandra observation of MXB 1659-29 in quiescence, which extends the baseline of monitoring to 6.6 yr after the end of the outburst. This new observation strongly suggests that the crust has thermally relaxed, with the temperature remaining consistent over 1000 days. Fitting the temperature cooling curve with an exponential plus constant model, we determine an e-folding timescale of 465 ± 25 days, with the crust cooling to a constant surface temperature of kT = 54 ± 2 eV (assuming d = 10 kpc). From this, we infer a core temperature in the range (3.5-8.3) × 10^7 K (assuming d = 10 kpc), with the uncertainty due to the surface composition. Importantly, we tested two neutron star atmosphere models as well as a blackbody model, and found that the thermal relaxation time of the crust is independent of the chosen model and the assumed distance.
arxiv:0806.1166
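The "exponential plus constant" cooling model above is straightforward to fit with standard tools. A sketch with made-up placeholder temperatures, not the actual Chandra measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Cooling model from the abstract: kT(t) = A * exp(-t / tau) + kT_inf.
def cooling(t, A, tau, kT_inf):
    return A * np.exp(-t / tau) + kT_inf

t_days = np.array([50, 200, 500, 1000, 1500, 2400], dtype=float)  # days since outburst end
kT_ev = np.array([110, 90, 70, 59, 55, 54], dtype=float)          # placeholder temperatures, eV

(A, tau, kT_inf), _ = curve_fit(cooling, t_days, kT_ev, p0=(60, 400, 55))
print(f"e-folding timescale tau = {tau:.0f} d, asymptotic kT = {kT_inf:.1f} eV")
```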
In this study, we investigate the emergent complexity of two-phase flow (air/water) in a heterogeneous soil, where the porous medium is assumed to be non-deformable and subject to a time-dependent gas pressure. After obtaining the governing equations and specifying the capillary pressure-saturation and permeability functions, the evolution of the model's unknown parameters was obtained. Using COMSOL (FEMLAB) and its fluid flow/script module, the role of heterogeneity in the intrinsic permeability was analysed. The evolution of the relative permeability of the wetting and non-wetting fluids, the capillary pressure and other parameters was also elicited. In the last part, a complex network approach is employed to analyse the emerging patterns.
arxiv:0909.5583
This study explores the applications of the Prouhet-Thue-Morse (PTM) sequence in quantum computing, highlighting its mathematical elegance and practical relevance. We demonstrate the critical role of the PTM sequence in quantum error correction, in noise-resistant quantum memories, and in providing insights into quantum chaos. Notably, we demonstrate how the PTM sequence naturally appears in Ising X-X interacting systems, leading to a proposed robust encoding of quantum memories in such systems. Furthermore, connections to number theory, including the Riemann zeta function, bridge quantum computing with pure mathematics. Our findings emphasize the PTM sequence's importance in understanding the mathematical structure of quantum computing systems and in developing the full potential of quantum technologies, and they invite further interdisciplinary research.
arxiv:2501.09610
This study explores the excitation of transverse laser modes through spatial gain shaping, focusing on the boundary between selective single-mode and multi-mode lasing. By deliberately reducing the similarity between the intensity distributions of the pump and the laser mode, we study whether and which other modes are excited besides the target mode, and how the modes compete for the spatially distributed gain. Analysis of the usually unwanted multi-mode lasing revealed characteristic properties of pump distributions adapted to Hermite-Gaussian $\text{HG}_{m,0}$ modes: a center-heavy pump distribution at first distinctly excites the target mode and eventually low-order modes, whereas an eccentric pump distribution reduces the lasing threshold at the expense of distinction from the neighboring modes. By understanding why certain gain distributions do not excite a single mode, we infer guidelines for the design of pump patterns in spatial gain shaping approaches.
arxiv:2503.02363
Training deep models for lane detection is challenging due to the very subtle and sparse supervisory signals inherent in lane annotations. Without learning from much richer context, these models often fail in challenging scenarios, e.g., severe occlusion, ambiguous lanes, and poor lighting conditions. In this paper, we present a novel knowledge distillation approach, Self Attention Distillation (SAD), which allows a model to learn from itself and gain substantial improvement without any additional supervision or labels. Specifically, we observe that attention maps extracted from a model trained to a reasonable level encode rich contextual information. This valuable contextual information can be used as a form of 'free' supervision for further representation learning through top-down and layer-wise attention distillation within the network itself. SAD can be easily incorporated in any feedforward convolutional neural network (CNN) and does not increase the inference time. We validate SAD on three popular lane detection benchmarks (TuSimple, CULane and BDD100K) using lightweight models such as ENet, ResNet-18 and ResNet-34. The lightest model, ENet-SAD, performs comparatively or even surpasses existing algorithms. Notably, ENet-SAD has 20x fewer parameters and runs 10x faster than the state-of-the-art SCNN, while still achieving compelling performance in all benchmarks. Our code is available at https://github.com/cardwing/codes-for-lane-detection.
arxiv:1908.00821
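A minimal sketch of what layer-wise attention distillation can look like, assuming activation-based attention maps; this illustrates the idea described above, not the released implementation:

```python
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """B x C x H x W features -> B x (H*W) spatially normalized attention map."""
    amap = feat.pow(2).mean(dim=1)                 # collapse channels: B x H x W
    amap = amap.flatten(1)
    return F.normalize(amap, p=2, dim=1)

def sad_loss(shallow_feat: torch.Tensor, deep_feat: torch.Tensor) -> torch.Tensor:
    """Distill the deeper block's attention map into the shallower block."""
    target = attention_map(deep_feat).detach()     # deeper map acts as 'free' label
    if shallow_feat.shape[-2:] != deep_feat.shape[-2:]:
        shallow_feat = F.interpolate(shallow_feat, size=deep_feat.shape[-2:],
                                     mode="bilinear", align_corners=False)
    return F.mse_loss(attention_map(shallow_feat), target)
```

The distillation term is simply added to the task loss during later training epochs, which is why no extra labels or inference-time cost are involved.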
Comments are an integral part of software development; they are natural language descriptions associated with source code elements. Understanding explicit associations can be useful in improving code comprehensibility and maintaining the consistency between code and comments. As an initial step towards this larger goal, we address the task of associating entities in Javadoc comments with elements in Java source code. We propose an approach for automatically extracting supervised data using revision histories of open source projects and present a manually annotated evaluation dataset for this task. We develop a binary classifier and a sequence labeling model by crafting a rich feature set which encompasses various aspects of code, comments, and the relationships between them. Experiments show that our systems outperform several baselines learning from the proposed supervision.
arxiv:1912.06728
This study investigates the efficacy of low-rank adaptation (LoRA) in fine-tuning Earth observation (EO) foundation models for flood segmentation. We hypothesize that LoRA, a parameter-efficient technique, can significantly accelerate the adaptation of large-scale EO models to this critical task while maintaining high performance. We apply LoRA to fine-tune a state-of-the-art EO foundation model pre-trained on diverse satellite imagery, using a curated dataset of flood events. Our results demonstrate that LoRA-based fine-tuning (r = 256) improves the F1 score by 6.66 points and the IoU by 0.11 compared to a frozen encoder baseline, while significantly reducing computational costs. Notably, LoRA outperforms full fine-tuning, which proves computationally infeasible on our hardware. We further assess generalization through out-of-distribution (OOD) testing on a geographically distinct flood event, where LoRA configurations again show improved performance over the baseline. This work contributes to research on efficient adaptation of foundation models for specialized EO tasks, with implications for rapid response systems in disaster management. Our findings demonstrate LoRA's potential for enabling faster deployment of accurate flood segmentation models in resource-constrained, time-critical scenarios.
arxiv:2409.09907
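For readers unfamiliar with LoRA, a compact PyTorch sketch of the adapter idea (illustrative, not the study's code; r = 256 mirrors the configuration mentioned above):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer W plus a trainable low-rank update (B @ A) * (alpha / r)."""
    def __init__(self, base: nn.Linear, r: int = 256, alpha: int = 256):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                   # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```

Only A and B (a small fraction of the encoder's parameters) receive gradients, which is what makes fine-tuning feasible where full fine-tuning is not.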
Dilatation, i.e. scale, symmetry in the presence of the dilaton in Minkowski space is derived from diffeomorphism symmetry in curved spacetime, incorporating the volume-preserving diffeomorphisms. The conditions for scale invariance are derived and their relation to conformal invariance is examined. In the presence of the dilaton, scale invariance automatically guarantees conformal invariance due to diffeomorphism symmetry. Low energy scale-invariant phenomenological Lagrangians are derived in terms of dilaton-dressed fields, which are identified as the fields satisfying the usual scaling properties. The notion of spontaneous scale symmetry breaking is defined in the presence of the dilaton. In this context, possible phenomenological implications are advocated, and by computing the dilaton mass the idea of PCDC (partially conserved dilatation current) is further explored.
arxiv:hep-th/9608148
We propose a complementary point of view on the topological invariants of two-dimensional tight-binding models restricted to half-spaces. The transfer operators for such systems are $J$-unitary on an infinite dimensional Krein space $(\mathcal{K}, J)$ and, for energies in the bulk gap, only have discrete spectrum on the unit circle. These eigenvalues have Krein inertia which can be used to define topological invariants determining the nature of the surface modes and allowing one to distinguish different topological phases. This is illustrated by numerical results.
arxiv:1306.1816
3D delineation of anatomical structures is a cardinal goal in medical imaging analysis. Prior to deep learning, statistical shape models (SSMs) that imposed anatomical constraints and produced high quality surfaces were a core technology. Today fully-convolutional networks (FCNs), while dominant, do not offer these capabilities. We present deep implicit statistical shape models (DISSMs), a new approach to delineation that marries the representation power of convolutional neural networks (CNNs) with the robustness of SSMs. DISSMs use a deep implicit surface representation to produce a compact and descriptive shape latent space that permits statistical models of anatomical variance. To reliably fit anatomically plausible shapes to an image, we introduce a novel rigid and non-rigid pose estimation pipeline that is modelled as a Markov decision process (MDP). We outline a training regime that includes inverted episodic training and a deep realization of marginal space learning (MSL). Intra-dataset experiments on the task of pathological liver segmentation demonstrate that DISSMs can perform more robustly than three leading FCN models, including nnU-Net: reducing the mean Hausdorff distance (HD) by 7.7-14.3 mm and improving the worst case Dice-Sorensen coefficient (DSC) by 1.2-2.3%. More critically, cross-dataset experiments on a dataset directly reflecting clinical deployment scenarios demonstrate that DISSMs improve the mean DSC and HD by 3.5-5.9% and 12.3-24.5 mm, respectively, and the worst-case DSC by 5.4-7.3%. These improvements are over and above any benefits from representing delineations with high-quality surfaces.
arxiv:2104.02847
Isobaric multiplets can be used to provide reliable mass predictions through the isobaric multiplet mass equation (IMME). Isobaric analogue states (IAS) for isospin multiplets from $T = 1/2$ to $T = 3$ have been studied within the 2012 Atomic Mass Evaluation (AME2012). Each IAS established from published experimental reaction data has been expressed in the form of a primary reaction $Q$-value and, if necessary, has been recalibrated. The evaluated IAS masses are provided here along with the associated IMME coefficients. Quadratic and higher order forms of the IMME have been considered, and global trends have been extracted. Particular nuclides requiring experimental investigation have been identified and discussed. This dataset is the most precise and extensive set of evaluated IAS to date.
arxiv:1312.1521
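The quadratic IMME referenced above is just a polynomial in the isospin projection, $ME(T_z) = a + b\,T_z + c\,T_z^2$, so extracting its coefficients from a multiplet is a plain polynomial fit. A toy illustration with placeholder mass excesses, not AME2012 values:

```python
import numpy as np

# Quadratic IMME: ME(T_z) = a + b*T_z + c*T_z^2, fit to one T = 1 triplet.
T_z = np.array([-1.0, 0.0, 1.0])             # isospin projections (made-up multiplet)
ME = np.array([-2200.0, -2437.0, -2863.0])   # mass excesses in keV (placeholders)

c, b, a = np.polyfit(T_z, ME, deg=2)         # numpy returns highest power first
print(f"a = {a:.1f} keV, b = {b:.1f} keV, c = {c:.1f} keV")
# Higher-order forms add a d*T_z^3 term, testable for multiplets with T >= 3/2,
# where a significant d would signal a breakdown of the quadratic IMME.
```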
We introduce a Whitney polynomial for hypermaps and use it to generalize the results connecting the circuit partition polynomial to the Martin polynomial and the results on several graph invariants.
arxiv:2311.06662
The exact delay-zero calibration in an attosecond pump-probe experiment is important for the correct interpretation of experimental data. In attosecond transient absorption spectroscopy the determination of the delay-zero exclusively from the experimental results is not straightforward and may introduce significant errors. Here, we report the observation of quarter-laser-cycle ($4\omega$) oscillations in a transient absorption experiment in helium using an attosecond pulse train overlapped with a precisely synchronized, moderately strong infrared pulse. We demonstrate how to extract and calibrate the delay-zero with the help of the highly nonlinear $4\omega$ signal. A comparison with the solution of the time-dependent Schrödinger equation is used to confirm the accuracy and validity of the approach. Moreover, we study the mechanisms behind the quarter-laser-cycle and the better-known half-laser-cycle oscillations as a function of experimental parameters. This investigation yields an indication of the robustness of our delay-zero calibration approach.
arxiv:1406.3137
A $^{238}$U projectile beam was used to create cadmium isotopes via abrasion-fission at 410 MeV/u in a beryllium target at the entrance of the in-flight separator FRS at GSI. The fission fragments were separated with the FRS and injected into the isochronous storage ring ESR for mass measurements. Isochronous mass spectrometry (IMS) was performed under two different experimental conditions, with and without B$\rho$-tagging at the dispersive central focal plane of the FRS. In the experiment with B$\rho$-tagging, the magnetic rigidity of the injected fragments was determined with an accuracy of $2 \times 10^{-4}$. A new method of data analysis, using a correlation matrix for the combined data set from both experiments, has provided mass values for 25 different isotopes for the first time. The high selectivity and sensitivity of the experiment and analysis has given access even to rare isotopes detected at a few atoms per week. In this Letter we present directly measured mass values for the $^{129,130,131}$Cd isotopes for the first time. The Cd results clearly show a very pronounced shell effect at $N = 82$, which is in agreement with the conclusion from $\gamma$-ray spectroscopy of $^{130}$Cd and confirms the assumptions of modern shell-model calculations.
arxiv:1507.04470
Currently, approximately 30% of epileptic patients treated with antiepileptic drugs (AEDs) remain resistant to treatment (known as refractory patients). This project seeks to understand the underlying similarities among refractory patients vs. other epileptic patients, identify features contributing to drug resistance across underlying phenotypes of refractory patients, and develop predictive models for drug resistance in epileptic patients. In this study, epileptic patient data was examined to observe discernible similarities or differences between refractory patients (cases) and other non-refractory patients (controls) in order to map underlying mechanisms of causality. For the first part of the study, unsupervised algorithms such as k-means, spectral clustering, and Gaussian mixture models were used to examine patient features projected into a lower dimensional space. Results from this study showed a high degree of non-linearity in the underlying feature space. For the second part of this study, classification algorithms such as logistic regression, gradient boosted decision trees, and SVMs were tested on the reduced-dimensionality features, with accuracy results of 0.83 (±0.3) using 7-fold cross validation. Observations of the test results indicate that using a radial basis function kernel PCA to reduce the features ingested by a gradient boosted decision tree ensemble leads to improved accuracy in mapping a binary decision to highly non-linear features collected from epileptic patients.
arxiv:1704.08361
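The best-performing combination described above maps directly onto standard scikit-learn components. A hedged sketch, where X and y stand in for the (non-public) patient feature matrix and case/control labels, and n_components is a guess:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def evaluate(X, y):
    # RBF kernel PCA for non-linear dimensionality reduction, then a
    # gradient-boosted decision tree ensemble, scored with 7-fold CV.
    model = make_pipeline(
        StandardScaler(),
        KernelPCA(n_components=20, kernel="rbf"),
        GradientBoostingClassifier(),
    )
    scores = cross_val_score(model, X, y, cv=7)
    print(f"accuracy: {scores.mean():.2f} (+/- {scores.std():.2f})")
```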
Electron vortex beams were only recently discovered, and their potential as a probe for magnetism in materials was shown. Here we demonstrate a new method to produce electron vortex beams with a diameter of less than 1.2 Å. This unique way to prepare free electrons in a state resembling atomic orbitals is fascinating from a fundamental physics point of view and opens the road for magnetic mapping with atomic resolution in an electron microscope.
arxiv:1405.7247
The vector boson fusion (VBF) topology at the Large Hadron Collider at 14 TeV provides an opportunity to search for new physics. A feasibility study for the search for sleptons in a compressed mass spectrum scenario is presented in the final state of two jets, one or two low-$p_T$ non-resonant leptons, and missing energy. The presence of the VBF tagged jets and missing energy is effective in reducing Standard Model backgrounds. Using smuon production with a mass difference between $\tilde{l}_L$ and $\tilde{\chi}_1^0$ of 5-15 GeV, the significance of observing the signal events is found to be $\sim 3$-$6\sigma$ for $m_{\tilde{l}} = 115$-$135$ GeV, considering an integrated luminosity of 3000 fb$^{-1}$.
arxiv:1411.6043
We prove that certain Galois-isotypic parts of the completed cohomology group for U(2) can be written as a completed tensor product of a representation coming from the p-adic Langlands correspondence for $GL_2(\mathbb{Q}_p)$ and a representation arising via the local Langlands correspondence in families of Emerton and Helm.
arxiv:1406.1828
We study the quantum mechanics of self-gravitating thin shell collapse by solving the polymerized Wheeler-DeWitt equation. We obtain the energy spectrum and solve the time dependent equation numerically. In contradistinction to the continuum theory, we are able to consistently quantize the theory for super-Planckian black holes, and find two choices of boundary conditions which conserve energy and probability, as opposed to one in the continuum theory. Another feature unique to the polymer theory is the existence of negative energy stationary states that disappear from the spectrum as the polymer scale goes to zero. In both theories the probability density is positive semi-definite only for the space of positive energy stationary states. Dynamically, we find that an initial Gaussian probability density develops regions of negative probability as the wavepacket approaches $r = 0$ and bounces. This implies that the bouncing state is a sum of both positive and negative eigenstates.
arxiv:1609.06665
There are many fundamental aspects of galactic structure and evolution which can be studied best or exclusively with high quality three dimensional kinematics. Amongst these we note as examples the determination of the orientation of the stellar velocity ellipsoid and the detection of structure in velocity-position phase space. The first of these is at present the primary limitation to reliable and accurate measurement of the galactic gravitational potential. The second is a critical test of current standard models of galactic formation and evolution.
arxiv:astro-ph/9411068
As large language models (LLMs) grow increasingly powerful, ensuring their safety and alignment with human values remains a critical challenge. Ideally, LLMs should provide informative responses while avoiding the disclosure of harmful or sensitive information. However, current alignment approaches, which rely heavily on refusal strategies such as training models to completely reject harmful prompts or applying coarse filters, are limited by their binary nature. These methods either fully deny access to information or grant it without sufficient nuance, leading to overly cautious responses or failures to detect subtle harmful content. For example, LLMs may refuse to provide basic, public information about medication due to misuse concerns. Moreover, these refusal-based methods struggle to handle mixed-content scenarios and lack the ability to adapt to context-dependent sensitivities, which can result in over-censorship of benign content. To overcome these challenges, we introduce HiddenGuard, a novel framework for fine-grained, safe generation in LLMs. HiddenGuard incorporates PRISM (representation router for in-stream moderation), which operates alongside the LLM to enable real-time, token-level detection and redaction of harmful content by leveraging intermediate hidden states. This fine-grained approach allows for more nuanced, context-aware moderation, enabling the model to generate informative responses while selectively redacting or replacing sensitive information rather than refusing outright. We also contribute a comprehensive dataset with token-level fine-grained annotations of potentially harmful information across diverse contexts. Our experiments demonstrate that HiddenGuard achieves over 90% F1 score in detecting and redacting harmful content while preserving the overall utility and informativeness of the model's responses.
arxiv:2410.02684
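To make the token-level mechanism concrete, here is a toy illustration of redaction driven by intermediate hidden states. This is only the general shape of the idea, not the PRISM architecture; all names are hypothetical:

```python
import torch
import torch.nn as nn

class TokenModerationHead(nn.Module):
    """Lightweight head scoring per-token risk from a middle layer's hidden states."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: B x T x H -> B x T risk scores in [0, 1]
        return torch.sigmoid(self.scorer(hidden_states)).squeeze(-1)

def redact(tokens, risk, threshold=0.5, mask="[REDACTED]"):
    """Replace only the flagged tokens, keeping the rest of the response intact."""
    return [mask if r > threshold else t for t, r in zip(tokens, risk.tolist())]
```

The contrast with refusal-based alignment is that the generation continues; only the spans the head flags are replaced.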
Uncertainty quantification in deep-learning (DL) based image reconstruction models is critical for reliable clinical decision making based on the reconstructed images. We introduce "NPB-REC", a non-parametric fully Bayesian framework for uncertainty assessment in MRI reconstruction from undersampled "k-space" data. We use stochastic gradient Langevin dynamics (SGLD) during the training phase to characterize the posterior distribution of the network weights. We demonstrated the added value of our approach on the multi-coil brain MRI dataset from the fastMRI challenge, in comparison to the baseline E2E-VarNet with and without inference-time dropout. Our experiments show that NPB-REC outperforms the baseline in terms of reconstruction accuracy (PSNR and SSIM of $34.55$, $0.908$ vs. $33.08$, $0.897$, $p < 0.01$) at high acceleration rates ($R = 8$). This is also measured in regions of clinical annotations. More significantly, it provides a more accurate estimate of the uncertainty that correlates with the reconstruction error, compared to the Monte-Carlo inference time dropout method (Pearson correlation coefficient of $r = 0.94$ vs. $r = 0.91$). The proposed approach has the potential to facilitate safe utilization of DL based methods for MRI reconstruction from undersampled data. Code and trained models are available at \url{https://github.com/samahkh/NPB-REC}.
arxiv:2208.03966
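The SGLD ingredient above is essentially SGD with injected Gaussian noise whose scale is tied to the learning rate. An illustrative update step (a sketch of the technique, not the NPB-REC training code):

```python
import torch

def sgld_step(params, lr):
    """One SGLD update; call after loss.backward().
    theta <- theta - lr * grad + N(0, 2 * lr) noise."""
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            noise = torch.randn_like(p) * (2.0 * lr) ** 0.5
            p.add_(-lr * p.grad + noise)
            p.grad.zero_()

# Keeping the last K weight iterates gives approximate posterior samples; the
# spread of their reconstructions serves as the uncertainty estimate.
```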
Some results of a calculation of electroweak radiative corrections to $W$ and $Z$ boson production in hadronic collisions are presented.
arxiv:hep-ph/9609315
In this paper, a fuzzy inference system (FIS) is implemented on a riderless bicycle. The fuzzy inference system is based on a rule base inherited from human experience of bicycle riding. Steady turning motion and roll-angle tracking control for the riderless bicycle were achieved using fuzzy concepts. A collection of sensors, an actuator, a micro-controller and electrical circuits was employed to build a new prototype autonomous bicycle. The effectiveness of the control scheme was proved by experimental tests, and the stabilization and roll-angle tracking of the real bicycle are illustrated by the results.
arxiv:1709.09014
The $\alpha$-bosonic properties, such as single-$\alpha$ orbits and occupation numbers, in the $J^\pi = 0^+$, $2^+$, $1^-$ and $3^-$ states of $^{12}$C around the $3\alpha$ threshold are investigated with the semi-microscopic $3\alpha$ cluster model. As in other studies, we find that the $0^+_2$ ($2^+_2$) state has a dilute-$3\alpha$-condensate-like structure in which the $\alpha$ particle occupies the single $s$ ($d$) orbit with about 70% (80%) probability. The radial behaviors of the single-$\alpha$ orbits as well as the occupation numbers are discussed in detail in comparison with those for the $0^+_1$ and $2^+_1$ states, together with the $1^-_1$ and $3^-_1$ states.
arxiv:nucl-th/0506048
Some time ago it was shown that the operatorial approach to classical mechanics, pioneered in the 1930s by Koopman and von Neumann, can have a functional version. In this talk we extend this functional approach to the case of classical field theories, and in particular to Yang-Mills theories. We show that the issues of gauge-fixing and the Faddeev-Popov determinant arise also in this classical formalism.
arxiv:hep-th/0107077
We study complete noncompact long time solutions $(M, g(t))$ to the Kähler-Ricci flow with uniformly bounded nonnegative holomorphic bisectional curvature. We show that when the Ricci curvature is positive and uniformly pinched, i.e. $R_{i\bar{j}} \ge c R g_{i\bar{j}}$ at $(p, t)$ for all $t$ for some $c > 0$, then there always exists a local gradient Kähler-Ricci soliton limit around $p$ after possibly rescaling $g(t)$ along some sequence $t_i \to \infty$. We show as an immediate corollary that the injectivity radius of $g(t)$ along $t_i$ is uniformly bounded from below, and thus $M$ must in fact be simply connected. Additional results concerning the uniformization of $M$ and fixed points of the holomorphic isometry group will also be established. We then consider removing the condition of positive Ricci curvature for $(M, g(t))$. Combining our results with Cao's splitting for Kähler-Ricci flow \cite{cao04} and techniques of Ni-Tam \cite{nitam03}, we show that when the positive eigenvalues of the Ricci curvature are uniformly pinched at some point $p \in M$, then $M$ has a special holomorphic fiber bundle structure. We also treat special cases: complete Kähler manifolds with non-negative holomorphic bisectional curvature and average quadratic curvature decay, as well as the case of steady gradient Kähler-Ricci solitons.
arxiv:0806.2457
Extracting reliable indicators of chaos from a single experimental time series is a challenging task, in particular for systems with many degrees of freedom. The techniques available for this purpose often require unachievably long time series. In this paper, we explore a new method of discriminating chaotic from multi-periodic integrable motion in many-particle systems. The applicability of this method is supported by our numerical simulations of the dynamics of classical spin lattices at high temperatures. We compared chaotic and nonchaotic regimes of these lattices and investigated the transition between the two. The method is based on analyzing higher-order time derivatives of the time series of a macroscopic observable: the total magnetization of the spin lattice. We exploit the fact that power spectra of the magnetization time series generated by chaotic spin lattices exhibit exponential high-frequency tails, while, for the integrable spin lattices, the power spectra are terminated in a non-exponential way. We have also demonstrated the applicability limits of the above method by investigating the high-frequency tails of the power spectra generated by quantum spin lattices and by the classical Toda lattice.
arxiv:1105.4575
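The exponential-tail criterion above translates into a simple spectral diagnostic: on log-linear axes, an exponential tail is a straight line. An illustrative sketch for a generic scalar time series (stand-in data, not the spin-lattice simulations):

```python
import numpy as np

def high_frequency_tail_slope(series, dt):
    """Fit log(power) vs. frequency over the upper half of the band.
    A roughly constant, markedly negative slope indicates an exponential tail."""
    power = np.abs(np.fft.rfft(series - np.mean(series))) ** 2
    freq = np.fft.rfftfreq(len(series), d=dt)
    tail = slice(len(freq) // 2, None)          # upper half of the frequency band
    slope = np.polyfit(freq[tail], np.log(power[tail] + 1e-300), 1)[0]
    return slope
```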
We use the perturbation method to calculate the masses and widths of 27-plet baryons with spin 3/2 from chiral soliton models. According to the masses and quantum numbers, we find all the candidates for non-exotic members of the 27-plet. The calculation of the widths shows that these candidates manifest an approximate symmetry of the 27 representation of the SU(3) group, and the quantum numbers of $\Xi(1950)$ seem to be $I(J^P) = \frac{1}{2}(\frac{3}{2}^+)$. Up to leading order in the strange quark mass, we find that the exotic members have widths much larger than those of the anti-decuplet members.
arxiv:hep-ph/0312041
In this review, we outline important results on the resistivity encountered by an electron in magnetically ordered materials. The mechanism of the collision between the electron and the lattice spins is shown. Experiments on the spin resistivity in various magnetic materials, as well as the theoretical background, are recalled. We focus on our work over the past 15 years, principally using Monte Carlo simulations. In these works, we have studied the spin resistivity in various kinds of magnetic systems, ranging from ferromagnets and antiferromagnets to frustrated spin systems. It is found that the spin resistivity shows a broad peak at the transition temperature in systems with a second-order phase transition, while it undergoes a discontinuous jump at the transition temperature of a first-order transition. New results on the hexagonal-close-packed (HCP) antiferromagnet are also shown in extended detail for the Ising case in both the frustrated and non-frustrated parameter regions.
arxiv:2301.02689
In this article, we study eigenvalue functions of varying transition probability matrices on finite, vertex transitive graphs. We prove that the eigenvalue function of an eigenvalue of fixed higher multiplicity has a critical point if and only if the corresponding spectral representation is equilateral. We also show how the geometric realisation of a finite Coxeter group as a reflection group can be used to obtain an explicit orthogonal system of eigenfunctions. Combining both results, we describe the behaviour of the spectral representations of the second highest eigenvalue function under change of the transition probabilities in the case of Archimedean solids.
arxiv:1106.2509
Deep-learning models for language generation tasks tend to produce repetitive output. Various methods have been proposed to encourage lexical diversity during decoding, but this often comes at a cost to the perceived fluency and adequacy of the output. In this work, we propose to ameliorate this cost by using an imitation learning approach to explore the level of diversity that a language generation model can reliably produce. Specifically, we augment the decoding process with a meta-classifier trained to distinguish which words at any given timestep will lead to high-quality output. We focus our experiments on concept-to-text generation, where models are sensitive to the inclusion of irrelevant words due to the strict relation between input and output. Our analysis shows that previous methods for diversity underperform in this setting, while human evaluation suggests that our proposed method achieves a high level of diversity with minimal effect on the output's fluency and adequacy.
arxiv:2004.14364
High dynamic range (HDR) imaging is a highly challenging task since a large amount of information is lost due to the limitations of camera sensors. For HDR imaging, some methods capture multiple low dynamic range (LDR) images with alternating exposures to aggregate more information. However, these approaches introduce ghosting artifacts when significant inter-frame motions are present. Moreover, even when multi-exposure images are given, we have little information in severely over-exposed areas. Most existing methods focus on motion compensation, i.e., alignment of multiple LDR shots to reduce the ghosting artifacts, but they still produce unsatisfying results. These methods also rather overlook the need to restore the saturated areas. In this paper, we generate well-aligned multi-exposure features by reformulating the motion alignment problem into a simple brightness adjustment problem. In addition, we propose a coarse-to-fine merging strategy with explicit saturation compensation. The saturated areas are reconstructed with similar well-exposed content using adaptive contextual attention. We demonstrate that our method outperforms state-of-the-art methods in qualitative and quantitative evaluations.
arxiv:2308.11140
This paper studies a variant of the minimum-cost flow problem in a graph with convex cost functions, where the demands at the vertices are functions of a one-dimensional parameter $\lambda$. We devise two algorithmic approaches for the approximate computation of parametric solutions for this problem. The first approach transforms an instance of the parametric problem into an instance with piecewise quadratic cost functions by interpolating the marginal cost functions; the new instance can be solved exactly with an algorithm we developed in prior work. In the second approach, we compute a fixed number of non-parametric solutions and interpolate the resulting flows, yielding an approximate solution for the original, parametric problem. For both methods we formulate explicit bounds on the step sizes used in the respective interpolations that guarantee relative and absolute error margins. Finally, we test our approaches on real-world traffic and gas instances in an empirical study.
arxiv:2203.13146
$f(x)$ has the said property. this notion is used, for example, in the study of hardy fields, which are fields made up of real functions, each of which has certain properties eventually. == examples == "all primes greater than 2 are odd" can be written as "eventually, all primes are odd." eventually, all primes are congruent to ±1 modulo 6. the square of a prime is eventually congruent to 1 mod 24 (specifically, this is true for all primes greater than 3). the factorial of a natural number eventually ends in the digit 0 (specifically, this is true for all natural numbers greater than 4). == other uses in mathematics == a 3-manifold is called sufficiently large if it contains a properly embedded 2-sided incompressible surface. this property is the main requirement for a 3-manifold to be called a haken manifold. temporal logic introduces an operator that can be used to express statements interpretable as: a certain property will eventually hold in a future moment in time. == see also == almost all, big o notation, mathematical jargon, number theory
https://en.wikipedia.org/wiki/Eventually_(mathematics)
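the "eventually" claims in the excerpt above are all finitely checkable on an initial range; a minimal python sketch (the sieve helper and the tested ranges are our own choices, not part of the source):

```python
from math import factorial

def primes_below(n):
    """simple sieve of eratosthenes."""
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_below(10_000)
# eventually (for p > 2): all primes are odd
assert all(p % 2 == 1 for p in ps if p > 2)
# eventually (for p > 3): every prime is congruent to +1 or -1 modulo 6
assert all(p % 6 in (1, 5) for p in ps if p > 3)
# eventually (for p > 3): p**2 is congruent to 1 mod 24
assert all(p * p % 24 == 1 for p in ps if p > 3)
# eventually (for n > 4): n! ends in the digit 0
assert all(factorial(n) % 10 == 0 for n in range(5, 60))
```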
the recognition of medical entities from natural language is a ubiquitous problem in the medical field, with applications ranging from medical act coding to the analysis of electronic health data for public health. it is however a complex task usually requiring human expert intervention, thus making it expensive and time consuming. the recent advances in artificial intelligence, specifically the rise of deep learning methods, have enabled computers to make efficient decisions on a number of complex problems, with the notable example of neural sequence models and their powerful applications in natural language processing. they however require a considerable amount of data to learn from, which is typically their main limiting factor. however, the cépidc stores an exhaustive database of death certificates at the french national scale, amounting to several millions of natural language examples provided with their associated human-coded medical entities, available to the machine learning practitioner. this article investigates the application of deep neural sequence models to the problem of medical entity recognition from natural language.
arxiv:2004.13839
developing therapeutics is a lengthy and expensive process that requires the satisfaction of many different criteria, and ai models capable of expediting the process would be invaluable. however, the majority of current ai approaches address only a narrowly defined set of tasks, often circumscribed within a particular domain. to bridge this gap, we introduce tx-llm, a generalist large language model (llm) fine-tuned from palm-2 which encodes knowledge about diverse therapeutic modalities. tx-llm is trained using a collection of 709 datasets that target 66 tasks spanning various stages of the drug discovery pipeline. using a single set of weights, tx-llm simultaneously processes a wide variety of chemical or biological entities (small molecules, proteins, nucleic acids, cell lines, diseases) interleaved with free text, allowing it to predict a broad range of associated properties, achieving performance competitive with state-of-the-art (sota) on 43 out of 66 tasks and exceeding sota on 22. among these, tx-llm is particularly powerful and exceeds best-in-class performance on average for tasks combining molecular smiles representations with text such as cell line names or disease names, likely due to context learned during pretraining. we observe evidence of positive transfer between tasks with diverse drug types (e.g., tasks involving small molecules and tasks involving proteins), and we study the impact of model size, domain finetuning, and prompting strategies on performance. we believe tx-llm represents an important step towards llms encoding biochemical knowledge and could have a future role as an end-to-end tool across the drug discovery development pipeline.
arxiv:2406.06316
in the paper we throw the first light on systematically studying the local entropy theory for a countable discrete amenable group action. for such an action, we introduce entropy tuples in both topological and measure-theoretic settings and build the variational relation between these two kinds of entropy tuples by establishing a local variational principle for a given finite open cover. moreover, based on the idea of topological entropy pairs, we introduce and study two special classes of such actions: those with uniformly positive entropy and those with completely positive entropy. note that in building the local variational principle, following romagnoli's ideas, two kinds of measure-theoretic entropy are introduced for finite borel covers. these two kinds of entropy turn out to be the same, where danilenko's orbital approach becomes an inevitable tool.
arxiv:1005.1335
an analysis of the astrophysical $s$ factor of the proton-proton weak capture ($\mathrm{p} + \mathrm{p} \rightarrow {}^2\mathrm{H} + \mathrm{e}^+ + \nu_{\mathrm{e}}$) is performed on a large energy range covering solar-core and early-universe temperatures. the measurement of $s$ being physically unachievable, its value relies on the theoretical calculation of the matrix element $\lambda$. surprisingly, $\lambda$ reaches a maximum near $0.13~\mathrm{MeV}$ that has been unexplained until now. a model-independent parametrization of $\lambda$ valid up to about $5~\mathrm{MeV}$ is established on the basis of recent effective-range functions. it provides an insight into the relationship between the maximum of $\lambda$ and the proton-proton resonance pole at $(-140 - 467\,\mathrm{i})~\mathrm{keV}$ from analytic continuation. in addition, this parametrization leads to an accurate evaluation of the derivatives of $\lambda$, and hence of $s$, in the limit of zero energy.
arxiv:1902.02324
in this phd thesis we investigate the geometry of random fields on compact riemannian manifolds, in particular the two-dimensional sphere. in the first part, we characterize isotropic gaussian fields on homogeneous spaces of a compact group and then we prove the non-existence of p. lévy's brownian field on the group so(3), which moreover allows us to extend the same kind of result to so(n), su(n) for n greater than 3. in the second part, we investigate the high-energy behavior of random eigenfunctions on the (hyper)sphere and on the torus, proving quantitative clt results for some geometric functionals of those eigenfunctions. a nice non-central and non-universal result is shown for the nodal length distribution in the case of arithmetic random waves. finally, we extend representation formulas obtained in the first part to the case of spin random fields on the sphere, introducing a new approach, i.e., the pullback random field, which easily allows us to study random sections of homogeneous vector bundles.
arxiv:1603.07575
in this paper, we are concerned with the well-posedness and large-time behavior of the cauchy problem for the 3d incompressible navier-stokes-cahn-hilliard equations. first, using the banach fixed point theorem, we establish the local well-posedness of solutions. second, assuming $\|(u_0, \nabla\phi_0)\|_{\dot{H}^{1/2}}$ is sufficiently small, we obtain the global well-posedness of solutions. moreover, the optimal decay rates of the higher-order spatial derivatives of the solution are also obtained.
arxiv:1910.07904
the generating functional of heavy baryon chiral perturbation theory at order $o(q^2)$ in the mean field approximation (with a pseudoscalar source coupling which is consistent with the pcac ward identities on the current quark level) has been exploited to derive migdal's in-medium pion propagator. it is shown that the prediction for the density dependence of the quark condensate obtained on the composite hadron level by embedding pcac within the framework of migdal's approach to finite fermi systems is identical to that resulting from qcd.
arxiv:nucl-th/9603017
the analysis of natural disasters such as floods in a timely manner often suffers from limited data due to coarsely distributed sensors or sensor failures. at the same time, a plethora of information is buried in an abundance of images of the event posted on social media platforms such as twitter. these images could be used to document and rapidly assess the situation and derive proxy data not available from sensors, e.g., the degree of water pollution. however, not all images posted online are suitable or informative enough for this purpose. therefore, we propose an automatic filtering approach using machine learning techniques for finding twitter images that are relevant for one of the following information objectives: assessing the flooded area, the inundation depth, and the degree of water pollution. instead of relying on textual information present in the tweet, the filter analyzes the image contents directly. we evaluate the performance of two different approaches and various features on a case study of two major flooding events. our image-based filter is able to enhance the quality of the results substantially compared with a keyword-based filter, improving the mean average precision from 23% to 53% on average.
arxiv:2011.05756
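the headline metric above, mean average precision, is compact enough to sketch; a minimal python version (function names and the per-objective averaging are our own framing of the evaluation, not the paper's code):

```python
import numpy as np

def average_precision(scores, labels):
    """ap for one objective: mean precision@k over the ranks k of relevant images."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    rel = np.asarray(labels)[order].astype(bool)
    if not rel.any():
        return 0.0
    prec_at_k = np.cumsum(rel) / (np.arange(rel.size) + 1)
    return float(prec_at_k[rel].mean())

def mean_average_precision(runs):
    """runs: iterable of (scores, labels) pairs, one per information objective."""
    return float(np.mean([average_precision(s, y) for s, y in runs]))

# toy check: a filter that ranks the two relevant images first gets ap = 1.0
assert average_precision([0.9, 0.8, 0.1], [1, 1, 0]) == 1.0
```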
the current autotuning approaches for quantum dot (qd) devices, while showing some success, lack an assessment of data reliability. this leads to unexpected failures when noisy or otherwise low-quality data is processed by an autonomous system. in this work, we propose a framework for robust autotuning of qd devices that combines a machine learning (ml) state classifier with a data quality control module. the data quality control module acts as a "gatekeeper" system, ensuring that only reliable data are processed by the state classifier. lower data quality results in either device recalibration or termination. to train both ml systems, we enhance the qd simulation by incorporating synthetic noise typical of qd experiments. we confirm that the inclusion of synthetic noise in the training of the state classifier significantly improves the performance, resulting in an accuracy of 95.0(9)% when tested on experimental data. we then validate the functionality of the data quality control module by showing that the state classifier performance deteriorates with decreasing data quality, as expected. our results establish a robust and flexible ml framework for autonomous tuning of noisy qd devices.
arxiv:2108.00043
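the two-stage control flow described above (quality gate first, then state classifier, with recalibration or termination fallbacks) can be sketched independently of the trained models; everything below — thresholds, labels, callables — is a placeholder of our own, not the paper's implementation:

```python
def autotune_step(scan, quality_model, state_classifier,
                  q_accept=0.8, q_recalibrate=0.4):
    """one step of a gatekeeper-style tuning loop.

    quality_model:    maps a measurement to a reliability score in [0, 1]
    state_classifier: maps a measurement to a device-state label
    """
    q = quality_model(scan)
    if q >= q_accept:
        return "proceed", state_classifier(scan)   # reliable data: classify state
    if q >= q_recalibrate:
        return "recalibrate", None                 # retake data before acting
    return "terminate", None                       # too noisy to tune safely
```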
this paper is a simple, quick guide to kloe, the flagship experiment of infn's $\phi$ factory dafne at frascati. kloe's design principles, properties, physics accomplishments and its impact on "flavor physics" are described in terms comprehensible to non-specialists.
arxiv:hep-ex/0702016
we introduce and investigate reroutable flows, a robust version of network flows in which link failures can be mitigated by rerouting the affected flow. given a capacitated network, a path flow is reroutable if, after failure of an arbitrary arc, we can reroute the interrupted flow from the tail of that arc to the sink without modifying the flow that is not affected by the failure. similar types of restoration, which are often termed "local", were previously investigated in the context of network design, such as min-cost capacity planning. in this paper, our interest is in computing maximum flows under this robustness assumption. an important new feature of our model, distinguishing it from existing max robust flow models, is that no flow can get lost in the network. we also study a tightening of reroutable flows, called strictly reroutable flows, which makes more restrictive assumptions on the capacities available for rerouting. for both variants, we devise a reroutable-flow equivalent of an s-t-cut and show that the corresponding max-flow/min-cut gap is bounded by 2. it turns out that a strictly reroutable flow of maximum value can be found using a compact lp formulation, whereas the problem of finding a maximum reroutable flow is np-hard, even when all capacities are in $\{1, 2\}$. however, the tightening can be used to get a 2-approximation for reroutable flows. this ratio is tight in general networks, but we show that in the case of unit capacities, every reroutable flow can be transformed into a strictly reroutable flow of the same value. while it is np-hard to compute a maximal integral flow even for unit capacities, we devise a surprisingly simple combinatorial algorithm that finds a half-integral strictly reroutable flow of value 1, or certifies that no such solution exists. finally, we also give a hardness result for the case of multiple arc failures.
arxiv:1704.07067
the determination of the mean first-passage time (mfpt) for random walks in networks is a theoretical challenge, and is a topic of considerable recent interest within the physics community. in this paper, according to the known connections between the mfpt, effective resistance, and the eigenvalues of the graph laplacian, we first study analytically the mfpt between all node pairs of a class of growing treelike networks, which we term deterministic uniform recursive trees (durts), since one of its particular cases is a deterministic version of the famous uniform recursive tree. the interesting quantity is determined exactly through the recursive relation of the laplacian spectra obtained from the special construction of durts. the analytical result shows that the mfpt between all couples of nodes in durts varies as $n \ln n$ for large networks with node number $n$. second, we study trapping on a particular network of durts, focusing on a special case with the immobile trap positioned at a node having the largest degree. we determine exactly the average trapping time (att), which is defined as the average of the fpt from all nodes to the trap. in contrast to the scaling of the mfpt, the leading behavior of the att is a linear function of $n$. interestingly, we show that the behavior of the att for the trapping problem is related to the trapping location, in contrast with the phenomenon of trapping on the fractal t-graph, although both networks exhibit a tree structure. finally, we believe that these methods could open the way to exactly calculating the mfpt and att in a wide range of deterministic media.
arxiv:0907.1695
quantum++ is a modern general-purpose multi-threaded quantum computing library written in c++11 and composed solely of header files. the library is not restricted to qubit systems or specific quantum information processing tasks, being capable of simulating arbitrary quantum processes. the main design factors taken into consideration were ease of use, portability, and performance. the library's simulation capabilities are only restricted by the amount of available physical memory. on a typical machine (intel i5, 8gb ram) quantum++ can successfully simulate the evolution of 25 qubits in a pure state or of 12 qubits in a mixed state reasonably fast. the library also includes support for classical reversible logic, being able to simulate classical reversible operations on billions of bits. this latter feature may be useful in testing quantum circuits composed solely of toffoli gates, such as certain arithmetic circuits.
arxiv:1412.4704
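the quoted capacities (25 qubits pure, 12 qubits mixed) are memory statements: a pure state holds $2^n$ complex amplitudes and a density matrix $4^n$. a numpy sketch of that arithmetic and of the axis-reshaping trick any state-vector simulator, quantum++ included, relies on — this is illustrative python, not the library's own c++ api:

```python
import numpy as np

n = 25
print(f"pure state, {n} qubits: {2**n * 16 / 2**30:.2f} GiB")   # ~0.50 GiB
print(f"mixed state, 12 qubits: {4**12 * 16 / 2**20:.0f} MiB")  # ~256 MiB

def apply_1q(state, gate, k, n):
    """apply a 2x2 gate to qubit k of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [k]))  # contract against axis k
    return np.moveaxis(psi, 0, k).reshape(-1)       # put the new axis back at k

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = np.zeros(2**3, dtype=complex); psi[0] = 1.0   # |000>
psi = apply_1q(psi, H, 0, 3)                        # (|000> + |100>)/sqrt(2)
```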
in 1632, galileo galilei wrote a book called \textit{dialogue concerning the two chief world systems}, which compared the new copernican model of the universe with the old ptolemaic model. his book took the form of a dialogue between three philosophers: salviati, a proponent of the copernican model; simplicio, a proponent of the ptolemaic model; and sagredo, who was initially open-minded and neutral. in this paper, i am going to use galileo's idea to present a dialogue between three modern philosophers: mr. spock, a proponent of the view that $\mathsf{P} \neq \mathsf{NP}$, professor simpson, a proponent of the view that $\mathsf{P} = \mathsf{NP}$, and judge wapner, who is initially open-minded and neutral.
arxiv:1605.08639
this paper delves into the intricate relationship between the formation of air alliances and the shifts in airport and economic gravity centers across european countries during the period spanning 1990 to 2019. employing descriptive analysis and the weighted mean center methodology, it explores the interplay between air passenger numbers and economic indicators, revealing a close and interdependent correlation between these factors. the study sheds light on the dynamic landscape of economic gravity centers, which experienced discernible shifts over time. however, it observes even more pronounced transitions in airport gravity centers. statistical t-tests underscore significant differences in standard deviations when comparing pre- and post-air-alliance periods for airport gravity centers. these disparities serve as a testament to the profound impact of airline alliances on the distribution of air traffic. these findings underscore the pivotal role of air alliances in reshaping the aviation landscape, and they beckon further investigation into their influence on broader economic transformations. notably, this study pioneers the use of geographical means and standard deviations for rigorous statistical testing of economic hypotheses, signifying a significant contribution to the field.
arxiv:2401.02980
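the core quantity in the study, the weighted mean center, has a one-line definition; a minimal python sketch in planar coordinates (the paper works with geographic data, so any projection step — omitted here — is our assumption):

```python
import numpy as np

def weighted_mean_center(x, y, w):
    """mean center of points (e.g. airports) weighted by w (e.g. passengers)."""
    w = np.asarray(w, dtype=float)
    return np.average(x, weights=w), np.average(y, weights=w)

def weighted_standard_distance(x, y, w):
    """weighted spread around the mean center, the dispersion the t-tests compare."""
    cx, cy = weighted_mean_center(x, y, w)
    d2 = (np.asarray(x) - cx) ** 2 + (np.asarray(y) - cy) ** 2
    return float(np.sqrt(np.average(d2, weights=np.asarray(w, dtype=float))))
```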
the stochastic trajectories of molecules in living cells, as well as the dynamics in many other complex systems, often exhibit memory in their path over long periods of time. in addition, these systems can show dynamic heterogeneities, due to which the motion changes along the trajectories. such effects manifest themselves as spatiotemporal correlations. despite the broad occurrence of heterogeneous complex systems in nature, their analysis is still quite poorly understood and tools to model them are largely missing. we contribute to tackling this problem by employing an integral representation of mandelbrot's fractional brownian motion that is compliant with varying motion parameters while maintaining long memory. two types of switching fractional brownian motion are analysed, with transitions arising from a markovian stochastic process and from scale-free intermittent processes. we obtain simple formulas for classical statistics of the processes, namely the mean squared displacement and the power spectral density. further, a method to identify switching fractional brownian motion based on the distribution of displacements is described. a validation of the model is given for experimental measurements of the motion of quantum dots in the cytoplasm of live mammalian cells that were obtained by single-particle tracking.
arxiv:2307.12919
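for intuition, a naive python sketch of "switching" fractional brownian motion: exact fractional gaussian noise per segment (cholesky of its covariance), concatenated and integrated. note the caveat in the comments — unlike the paper's integral representation, this toy version does not carry memory across switch points:

```python
import numpy as np

def fgn(n, hurst, rng):
    """exact fractional gaussian noise via cholesky of its toeplitz covariance."""
    k = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
    cov = 0.5 * ((k + 1) ** (2 * hurst) - 2 * k ** (2 * hurst)
                 + np.abs(k - 1) ** (2 * hurst))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def naive_switching_fbm(segments, seed=0):
    """concatenate fgn segments with different hurst exponents and integrate.
    caveat: memory is *not* maintained across switches here, which is exactly
    what the paper's integral representation is designed to fix."""
    rng = np.random.default_rng(seed)
    return np.cumsum(np.concatenate([fgn(n, h, rng) for n, h in segments]))

path = naive_switching_fbm([(500, 0.3), (500, 0.7)])  # sub- then superdiffusive
```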
we derive novel low-energy theorems for single pion production off nucleons through the isovector axial current. we find that the $k^2$-dependence of the multipole $l_{0+}^{(+)}$ at threshold is given by the nucleon scalar form factor, namely $\sigma(k^2 - m_\pi^2)/(3\pi m_\pi f_\pi)$. the relation to pcac results for soft pions including electroweak form factors is also clarified.
arxiv:hep-ph/9312307
we consider skew extensions of expanding maps by compact lie groups. for a class of natural invariant measures, we prove an explicit lower bound on the rate of (exponential) mixing involving topological pressure. the proof uses representation theory and some elliptic pde estimates.
arxiv:1104.1874
the "jackiw-nair" non-relativistic limit of the relativistic anyon equations provides us with infinite-component wave equations of the dirac-majorana-lévy-leblond type for the "exotic" particle associated with the two-fold central extension of the planar galilei group. an infinite-dimensional representation of the galilei group is found. the velocity operator is studied, and the observable coordinates describing a noncommutative plane are identified.
arxiv:hep-th/0404137
we develop jacobson's refinement of engel's theorem for leibniz algebras. we then note some consequences of the result.
arxiv:1101.2438
in this paper, we propose a multi-target image tracking algorithm based on the continuously adaptive mean-shift (camshift) algorithm and the unscented kalman filter. we improved the single-lamp tracking algorithm proposed in our previous work to multi-target tracking, and achieved better robustness in the case of occlusion, real-time performance in completing each positioning fix, and relatively high accuracy by dynamically adjusting the weights of the multi-target motion states. our previous algorithm was limited to the analysis of tracking error. in this paper, the results of the tracking algorithm are evaluated with the tracking error we define. then, combined with the double-lamp positioning algorithm, the real position of the terminal is calculated and evaluated with the positioning error we define. experiments show that the defined tracking error is 0.61 cm and the defined positioning error for 3-d positioning is 3.29 cm, with an average processing time of 91.63 ms per frame. even if nearly half of the led area is occluded, the tracking error remains at 5.25 cm. all of this shows that the proposed visible light positioning (vlp) method can track multiple targets for positioning at the same time with good robustness, real-time performance and accuracy. in addition, the definition and analysis of tracking errors and positioning errors indicates the direction for future efforts to reduce errors.
arxiv:2103.01053
we present a method to automatically extract ("carve") parameterized unit tests from system executions. the unit tests execute the same functions as the system tests they are carved from, but can do so much faster as they call functions directly; furthermore, being parameterized, they can execute the functions with a large variety of randomly selected input values. if a unit-level test fails, we lift it to the system level to ensure the failure can be reproduced there. our method thus makes it possible to focus testing efforts on selected modules while still avoiding false alarms: in our experiments, running parameterized unit tests for individual functions was, on average, 30 times faster than running the system tests they were carved from.
arxiv:1812.07932
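the record-then-replay idea behind carving can be shown with a toy decorator; the real tool instruments full system executions and lifts failing unit tests back to the system level, so treat this as an illustration only (all names below are ours):

```python
import functools
import random

RECORDED = {}  # function -> list of (args, kwargs) observed during a system run

def carve(fn):
    """record every concrete call to fn."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        RECORDED.setdefault(fn, []).append((args, kwargs))
        return fn(*args, **kwargs)
    return wrapper

def replay_parameterized(fn, fuzz_args, trials=100):
    """carved, parameterized unit test: call fn directly with random variations
    of recorded inputs; the (weak) oracle here is 'raises no exception'."""
    for args, _kwargs in list(RECORDED.get(fn, ())):  # snapshot: fn re-records
        for _ in range(trials):
            fn(*fuzz_args(args))

@carve
def parse_age(text):
    return int(text.strip())

parse_age(" 42 ")  # the "system test" populates RECORDED
replay_parameterized(parse_age, lambda a: (f" {random.randint(0, 120)} ",))
```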
we study algebraic, combinatorial and geometric aspects of weighted pbw-type degenerations of (partial) flag varieties in type $a$. these degenerations are labeled by degree functions lying in an explicitly defined polyhedral cone, which can be identified with a maximal cone in the tropical flag variety. varying the degree function in the cone, we recover, for example, the classical flag variety, its abelian pbw degeneration, some of its linear degenerations and a particular toric degeneration.
arxiv:1711.00751
we use omnès representations of the form factors $f_+$ and $f_0$ for exclusive semileptonic $b \to \pi$ decays, paying special attention to the treatment of the $b^*$ pole and its effect on $f_+$. we apply them to combine experimental partial branching fraction information with theoretical calculations of both form factors to extract $|v_{ub}|$. the precision we achieve is competitive with the inclusive determination, and we do not find a significant discrepancy between our result, $|v_{ub}| = (3.90 \pm 0.32 \pm 0.18) \times 10^{-3}$, and the inclusive world average value, $(4.45 \pm 0.20 \pm 0.26) \times 10^{-3}$.
arxiv:hep-ph/0703284
we discuss the application of wavelet transforms to a critical interface model, which is known to provide a good description of barkhausen noise in soft ferromagnets. the two-dimensional version of the model (one-dimensional interface) is considered, mainly in the adiabatic limit of very slow driving. on length scales shorter than a crossover length (which grows with the strength of the surface tension), the effective interface roughness exponent $\zeta$ is $\simeq 1.20$, close to the expected value for the universality class of the quenched edwards-wilkinson model. we find that the waiting times between avalanches are fully uncorrelated, as the wavelet transform of their autocorrelations scales as white noise. similarly, detrended size-size correlations give a white-noise wavelet transform. consideration of finite driving rates, still deep within the intermittent regime, shows the wavelet transform of correlations scaling as $1/f^{1.5}$ for intermediate frequencies. this behavior is ascribed to intra-avalanche correlations.
arxiv:0706.1574
the observed spectrum of galactic cosmic rays has several exciting features, such as the rise in the positron fraction above ~10 gev of energy and the spectral hardening of protons and helium at ~300 gev/nucleon of energy. the atic-2 experiment has recently reported an unexpected spectral upturn in the elemental ratios involving iron, such as the c/fe or o/fe ratios, at energies above 50 gev per nucleon. it is recognized that the observed positron excess can be explained by pion production processes during diffusive shock acceleration of cosmic-ray hadrons in nearby sources. recently, it was suggested that a scenario with a nearby source dominating the gev-tev spectrum may be connected with the change of slope observed in protons and nuclei, which would be interpreted as a flux transition between the local component and the large-scale distribution of galactic sources. here i show that, under a two-component scenario with a nearby source, the shape of the spectral transition is expected to be slightly different for heavy nuclei, such as iron, because their propagation range is spatially limited by inelastic collisions with the interstellar matter. this enables a prediction for the primary/primary ratios between light and heavy nuclei. from this effect, a spectral upturn is predicted in the c/fe and o/fe ratios, in good accordance with the atic-2 data.
arxiv:1509.05774
with the advancement of image-to-image diffusion models guided by text, significant progress has been made in image editing. however, a persistent challenge remains in seamlessly incorporating objects into images based on textual instructions, without relying on extra user-provided guidance. text and images are inherently distinct modalities, bringing out difficulties in fully capturing the semantic intent conveyed through language and accurately translating that into the desired visual modifications. therefore, text-guided image editing models often produce generations with residual object attributes that do not fully align with human expectations. to address this challenge, the models should comprehend the image content effectively, avoiding a disconnect between the provided textual editing prompts and the actual modifications made to the image. in our paper, we propose a novel method called locate and forget (laf), which effectively locates potential target concepts in the image for modification by comparing the syntactic trees of the target prompt and the scene descriptions of the input image, intending to forget their existence clues in the generated image. compared to the baselines, our method demonstrates its superiority in text-guided image editing tasks both qualitatively and quantitatively.
arxiv:2405.19708
while much attention in neural network methods is devoted to high-dimensional pde problems, in this work we consider methods designed to work for elliptic problems on domains $\omega \subset \mathbb{R}^d$, $d = 1, 2, 3$, in association with more standard finite elements. we suggest connecting finite elements and neural network approximations through training, i.e., using finite element spaces to compute the integrals appearing in the loss functionals. this approach retains the simplicity of classical neural network methods for pdes, uses well-established finite element tools (and software) to compute the integrals involved, and gains in efficiency and accuracy. we demonstrate that the proposed methods are stable and, furthermore, we establish that the resulting approximations converge to the solutions of the pde. numerical results indicating the efficiency and robustness of the proposed algorithms are presented.
arxiv:2409.08362
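a minimal pytorch sketch of the training idea — integrate the loss on a fixed finite element mesh rather than at random collocation points — for the 1d poisson problem $-u'' = f$, $u(0) = u(1) = 0$. the midpoint rule per element and the ansatz $u = x(1-x)\,\mathrm{net}(x)$ (to hard-code the boundary conditions) are our simplifications, not the paper's full finite element coupling:

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
n_el, h = 64, 1.0 / 64
mid = ((torch.arange(n_el, dtype=torch.float32) + 0.5) * h).reshape(-1, 1)

def ritz_energy():
    """J(u) = integral of (u'^2/2 - f u), midpoint quadrature on the fe mesh."""
    x = mid.clone().requires_grad_(True)
    u = x * (1 - x) * net(x)                     # boundary conditions built in
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    f = torch.pi ** 2 * torch.sin(torch.pi * x)  # so the exact u is sin(pi x)
    return ((0.5 * du ** 2 - f * u) * h).sum()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = ritz_energy()
    loss.backward()
    opt.step()
```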
let $t_1, \ldots, t_n \in \mathbb{R}^d$ and consider the location recovery problem: given a subset of pairwise direction observations $\{(t_i - t_j)/\|t_i - t_j\|_2\}_{i < j \in [n] \times [n]}$, where a constant fraction of these observations are arbitrarily corrupted, find $\{t_i\}_{i=1}^n$ up to a global translation and scale. we propose a novel algorithm for the location recovery problem, which consists of a simple convex program over $dn$ real variables. we prove that this program recovers a set of $n$ i.i.d. gaussian locations exactly and with high probability if the observations are given by an erdős–rényi graph, $d$ is large enough, and provided that at most a constant fraction of observations involving any particular location are adversarially corrupted. we also prove that the program exactly recovers gaussian locations for $d = 3$ if the fraction of corrupted observations at each location is, up to poly-logarithmic factors, at most a constant. both of these recovery theorems are based on a set of deterministic conditions that we prove are sufficient for exact recovery.
arxiv:1506.01437
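our reading of the paper's convex program, sketched with cvxpy: minimize the total mass of each $t_i - t_j$ orthogonal to its observed direction, with two affine normalizations removing the translation and scale ambiguities. treat the exact constraint choices as our reconstruction rather than a verbatim transcription:

```python
import cvxpy as cp
import numpy as np

def recover_locations(n, d, edges, dirs):
    """edges: list of (i, j); dirs: matching unit vectors v_ij in R^d."""
    t = cp.Variable((n, d))
    resid, scale_terms = [], []
    for (i, j), v in zip(edges, dirs):
        diff = t[i] - t[j]
        proj = diff @ v                       # component along the observation
        resid.append(cp.norm(diff - proj * v, 2))
        scale_terms.append(proj)
    constraints = [cp.sum(t, axis=0) == 0,    # fix the global translation
                   sum(scale_terms) == 1]     # fix the global scale
    cp.Problem(cp.Minimize(sum(resid)), constraints).solve()
    return t.value
```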
this work deals with defect structures in models described by scalar fields. the investigations focus on generalized models, with the kinetic term modified to allow for a diversity of possibilities. we develop a new framework in which we search for first-order differential equations that solve the equations of motion. the main issue concerns the introduction of a new function, which works like the superpotential usually considered in the standard situation. we investigate the problem in the general case, with an arbitrary number of fields, and we present several explicit examples in the case of a single real scalar field.
arxiv:0807.0213
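for orientation, the first-order machinery of the standard situation that the paper's new function generalizes (textbook material, not the generalized models themselves): for a single field with standard kinetic term and a superpotential $w(\phi)$,

```latex
V(\phi) = \tfrac{1}{2} W_\phi^2, \qquad
\frac{d\phi}{dx} = \pm W_\phi \;\Longrightarrow\; \frac{d^2\phi}{dx^2} = V_\phi,
\qquad E = \bigl| W(\phi(+\infty)) - W(\phi(-\infty)) \bigr|,
```

so solutions of the first-order equation automatically solve the static equation of motion and saturate the energy bound.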
we apply the bootstrap kernel within time-dependent density functional theory to study the one-dimensional chain of the organic polymer poly-phenylene-vinylene and molecular crystals of picene and pentacene. the behaviour of this kernel in the presence and absence of local-field effects is studied. the absorption spectrum of poly-phenylene-vinylene has a bound excitonic peak which is well reproduced by the bootstrap kernel. pentacene and picene, electronically similar materials, have remarkably different excitonic physics, which is also captured properly by the bootstrap kernel. inclusion of local-field effects dramatically changes the spectra for both picene and pentacene. we highlight the reason behind this change. this also sheds light on the reasons behind the discrepancy in results between two different previous bethe-salpeter calculations.
arxiv:1311.2004
this paper introduces a quadrature-free, data-driven approach to balanced truncation for both continuous-time and discrete-time systems. the method non-intrusively constructs reduced-order models using available transfer function samples from the right half of the $s$-plane. it is highlighted that the proposed data-driven balanced truncation and existing quadrature-based balanced truncation algorithms share a common feature: both compress their respective data quadruplets to derive reduced-order models. additionally, it is demonstrated that by using different compression strategies, these quadruplets can be utilized to develop three data-driven formulations of irka for both continuous-time and discrete-time systems. these formulations non-intrusively generate reduced models using transfer function samples from the $j\omega$-axis or the right half of the $s$-plane, or impulse response samples. notably, these irka formulations eliminate the necessity of computing new transfer function samples as irka iteratively updates the interpolation points. the efficacy of the proposed algorithms is validated through numerical examples, which show that the proposed data-driven approaches perform comparably to their intrusive counterparts.
arxiv:2501.16683
we study a dirac fermion model with a random vector field, paying particular attention to the strong disorder regime. applying bosonization techniques, we first derive an equivalent sine-gordon model, and next average over the random vector field using the replica trick. the operator product expansion based on the replica action leads to scaling equations of the coupling constants ("fugacities") with nonlinear terms, if we take into account the fusion of the vertex operators. these equations are converted into a nonlinear diffusion equation known as the kpp equation. by the use of the asymptotic solution of the equation, we calculate the density of states, the generalized inverse participation ratios, and their spatial correlations. we show that results known so far are all derived in a unified way from the point of view of the renormalization group. remarkably, it turns out that the scaling exponent obtained in this paper reproduces the recent numerical calculations of the density correlation function. this implies that the freezing transition has actually revealed itself in such calculations.
arxiv:cond-mat/0209461
this work continues an ongoing effort to compare non-smooth optimization problems in abs-normal form to mathematical programs with complementarity constraints (mpccs). we study general nonlinear programs with equality and inequality constraints in abs-normal form, so-called abs-normal nlps, and their relation to equivalent mpcc reformulations. we introduce the concepts of abadie's and guignard's kink qualification and prove relations to mpcc-acq and mpcc-gcq for the counterpart mpcc formulations. due to non-uniqueness of a specific slack reformulation suggested in [10], the relations are non-trivial. it turns out that constraint qualifications of abadie type are preserved. we also prove the weaker result that equivalence of guignard's (and abadie's) constraint qualifications holds for all branch problems, while the question of gcq preservation remains open. finally, we introduce m-stationarity and b-stationarity concepts for abs-normal nlps and prove first-order optimality conditions corresponding to the mpcc counterpart formulations.
arxiv:2007.14653
in this paper we present results of direct numerical simulations of lean hydrogen/air flames freely propagating in a planar narrow channel with varying flow rate, using detailed chemistry and transport and including heat losses through the channel walls. our simulations show that two solutions, symmetric and non-symmetric, can coexist for a given set of parameters. the symmetric solutions are calculated by imposing symmetric boundary conditions in the channel mid-plane, and when this restriction is relaxed non-symmetric solutions can develop. this indicates that the symmetric solutions are unstable to non-symmetric perturbations, as predicted before within the context of a thermo-diffusive model and simplified chemistry. it is also found that for lean hydrogen/air mixtures an increase in heat losses leads to a discontinuity of the steady-state response curve, with flames extinguishing inside a finite interval of the flow rate. non-symmetric flames burn more intensely and in consequence are much more robust to flame quenching by heat losses to the walls. the inclusion of the non-symmetric solutions therefore extends the parametric range for which flames can propagate in the channel. this analysis seems to have received no attention in the literature, even though it can have important safety implications for micro-scale combustion devices burning hydrogen in a lean premixed way.
arxiv:1802.05108
we study the min-max optimization problem where each function contributing to the max operation is strongly convex and smooth with bounded gradient in the search domain. by smoothing the max operator, we show the ability to achieve an arbitrarily small positive optimality gap of $\delta$ in $\tilde{O}(1/\sqrt{\delta})$ computational complexity (up to logarithmic factors), as opposed to the state-of-the-art strong-convexity computational requirement of $O(1/\delta)$. we apply this important result to the well-known minimal bounding sphere problem and demonstrate that we can achieve a $(1 + \varepsilon)$-approximation of the minimal bounding sphere, i.e., identify a hypersphere enclosing a total of $n$ given points in the $d$-dimensional unbounded space $\mathbb{R}^d$ with a radius at most $(1 + \varepsilon)$ times the actual minimal bounding sphere radius for an arbitrarily small positive $\varepsilon$, in $\tilde{O}(nd/\sqrt{\varepsilon})$ computational time, as opposed to the state-of-the-art core-set methodology, which needs $O(nd/\varepsilon)$ computational time.
arxiv:1905.12733
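the smoothing step above can be illustrated with the log-sum-exp surrogate, which overestimates the max by at most $\mu \log n$; the plain gradient descent below (step size and $\mu$ picked by hand) shows the mechanics only, not the paper's accelerated scheme or its complexity guarantee:

```python
import numpy as np

def bounding_sphere_smooth(x, mu=0.05, lr=0.01, iters=2000):
    """minimize mu * log(sum_i exp(||x_i - c||^2 / mu)) over the center c."""
    c = x.mean(axis=0)
    for _ in range(iters):
        d2 = ((x - c) ** 2).sum(axis=1)
        w = np.exp((d2 - d2.max()) / mu)     # numerically stable softmax weights
        w /= w.sum()
        grad = -2.0 * (w[:, None] * (x - c)).sum(axis=0)   # gradient of the surrogate
        c = c - lr * grad
    return c, float(np.sqrt(((x - c) ** 2).sum(axis=1).max()))

rng = np.random.default_rng(0)
center, radius = bounding_sphere_smooth(rng.standard_normal((200, 3)))
```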
... $\kappa_{c_1} = 0.45 \pm 0.02$ and $\kappa_{c_2} = 0.60 \pm 0.02$, and with the transition at $\kappa_{c_1}$ being of continuous (and probably of the deconfined) type and that at $\kappa_{c_2}$ being of first-order type.
arxiv:1504.02275
the present collider data put severe constraints on any type of new strongly interacting particle coupling to the higgs boson. we analyze the phenomenological limits on exotic quarks belonging to non-triplet $SU(3)_c$ representations and their implications for higgs searches. the discovery of the standard model higgs, in the experimentally allowed mass range, would exclude the presence of exotic quarks coupling to it. thus, such qcd particles could only exist provided that their masses do not originate in the sm higgs mechanism.
arxiv:1202.3420
we study the variation of the dielectric response of a dielectric liquid (e.g., water) when a salt is added to the solution. employing field-theoretical methods, we expand the gibbs free energy to first order in a loop expansion and calculate the dielectric constant self-consistently. we predict analytically the dielectric decrement, which depends on the ionic strength in a complex way. furthermore, a qualitative description of the hydration shell is found, characterized by a single length scale. our prediction fits rather well a large range of concentrations for different salts, using only one fit parameter related to the size of the ions and dipoles.
arxiv:1201.6081
hamiltonian monte carlo (hmc) is a popular method in sampling. while there are quite a few works studying this method from various aspects, an interesting question is how to choose its integration time to achieve acceleration. in this work, we consider accelerating sampling from a distribution $\pi(x) \propto \exp(-f(x))$ via hmc with time-varying integration time. when the potential $f$ is $L$-smooth and $m$-strongly convex, i.e., for sampling from a log-smooth and strongly log-concave target distribution $\pi$, it is known that under a constant integration time, the number of iterations that ideal hmc takes to get within $\epsilon$ wasserstein-2 distance of the target $\pi$ is $O(\kappa \log \frac{1}{\epsilon})$, where $\kappa := \frac{L}{m}$ is the condition number. we propose a scheme of time-varying integration time based on the roots of chebyshev polynomials. we show that in the case of a quadratic potential $f$, i.e., when the target $\pi$ is a gaussian distribution, ideal hmc with this choice of integration time only takes $O(\sqrt{\kappa} \log \frac{1}{\epsilon})$ iterations to reach a wasserstein-2 distance less than $\epsilon$; this improvement in the dependence on the condition number is akin to acceleration in optimization. the design and analysis of hmc with the proposed integration time are built on the tools of chebyshev polynomials. experiments find an advantage in adopting our scheme of time-varying integration time even for sampling from distributions with smooth, strongly convex potentials that are not quadratic.
arxiv:2207.02189
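a sketch of the scheme as we read it: integration times $t_k = 1/\sqrt{\lambda_k}$, with $\lambda_k$ the chebyshev nodes mapped to the hessian's eigenvalue range $[m, L]$, run with the exact hamiltonian flow that is available for gaussian targets. the mapping of roots to times is our paraphrase of the construction, not a verbatim transcription:

```python
import numpy as np

def chebyshev_integration_times(m, L, K):
    k = np.arange(1, K + 1)
    roots = np.cos((2 * k - 1) * np.pi / (2 * K))   # roots of T_K on [-1, 1]
    lam = 0.5 * (L + m) + 0.5 * (L - m) * roots     # mapped to [m, L]
    return 1.0 / np.sqrt(lam)

def ideal_hmc_gaussian(A, x0, times, seed=0):
    """for f(x) = x^T A x / 2 the hamiltonian flow is exact:
    x(T) = cos(sqrt(A) T) x0 + sqrt(A)^{-1} sin(sqrt(A) T) p0."""
    rng = np.random.default_rng(seed)
    w, V = np.linalg.eigh(A)
    x = x0
    for T in times:
        p = rng.standard_normal(x.shape)            # full momentum refresh
        cos_T = V @ np.diag(np.cos(np.sqrt(w) * T)) @ V.T
        sin_T = V @ np.diag(np.sin(np.sqrt(w) * T) / np.sqrt(w)) @ V.T
        x = cos_T @ x + sin_T @ p
    return x
```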
two-dimensional transition metal dichalcogenides (tmds) and organic semiconductors (oscs) have emerged as promising material platforms for next-generation optoelectronic devices. the combination of both is predicted to yield emergent properties while retaining the advantages of their individual components. in oscs the optoelectronic response is typically dominated by localized frenkel-type excitons, whereas tmds host delocalized wannier-type excitons. however, much less is known about the spatial and electronic characteristics of excitons at hybrid tmd/osc interfaces, which ultimately determine the possible energy and charge transfer mechanisms across the 2d-organic interface. here, we use ultrafast momentum microscopy and many-body perturbation theory to elucidate a hybrid exciton at a tmd/osc interface that forms via an ultrafast resonant förster energy transfer process. we show that this hybrid exciton has both frenkel- and wannier-type contributions: concomitant intra- and interlayer electron-hole transitions within the osc layer and across the tmd/osc interface, respectively, give rise to an exciton wavefunction with mixed frenkel-wannier character. by combining theory and experiment, our work provides previously inaccessible insights into the nature of hybrid excitons at tmd/osc interfaces. it thus paves the way to a fundamental understanding of charge and energy transfer processes across 2d-organic heterostructures.
arxiv:2411.14993
we present a theory of the quantum vacuum radiation that is generated by a fast modulation of the vacuum rabi frequency of a single two-level system strongly coupled to a single cavity mode. the dissipative dynamics of the jaynes-cummings model in the presence of anti-rotating-wave terms is described by a generalized master equation including non-markovian terms. peculiar spectral properties and significant extracavity quantum vacuum radiation output are predicted for state-of-the-art circuit cavity quantum electrodynamics systems with superconducting qubits.
arxiv:0906.2706
large-scale vision-and-language (v+l) pre-training for representation learning has proven to be effective in boosting various downstream v+l tasks. however, when it comes to the fashion domain, existing v+l methods are inadequate as they overlook the unique characteristics of both fashion v+l data and downstream tasks. in this work, we propose a novel fashion-focused v+l representation learning framework, dubbed fashionvil. it contains two novel fashion-specific pre-training tasks designed particularly to exploit two intrinsic attributes of fashion v+l data. first, in contrast to other domains where a v+l data point contains only a single image-text pair, there could be multiple images in the fashion domain. we thus propose a multi-view contrastive learning task for pulling closer the visual representation of one image to the compositional multimodal representation of another image + text. second, fashion text (e.g., product descriptions) often contains rich fine-grained concepts (attributes/noun phrases). to exploit this, a pseudo-attribute classification task is introduced to encourage the learned unimodal (visual/textual) representations of the same concept to be adjacent. further, fashion v+l tasks uniquely include ones that do not conform to the common one-stream or two-stream architectures (e.g., text-guided image retrieval). we thus propose a flexible, versatile v+l model architecture consisting of a modality-agnostic transformer so that it can be flexibly adapted to any downstream tasks. extensive experiments show that our fashionvil achieves a new state of the art across five downstream tasks. code is available at https://github.com/brandonhanx/mmf.
arxiv:2207.08150
galactic winds are a key physical mechanism for understanding galaxy formation and evolution, yet empirical and theoretical constraints on the character of winds are limited and discrepant. recent empirical models find that local star-forming galaxies have a deficit of oxygen that scales with galaxy stellar mass. the oxygen deficit provides unique empirical constraints on the magnitude of mass loss, the composition of outflowing material, and metal reaccretion onto galaxies. we formulate the oxygen deficit constraints so they may be easily implemented into theoretical models of galaxy evolution. we parameterize an effective metal loading factor which combines the uncertainties of metal outflows and metal reaccretion into a single function of galaxy virial velocity. we determine the effective metal loading factor by forward-fitting the oxygen deficit. the effective metal loading factor we derive has important implications for the implementation of mass loss in models of galaxy evolution.
arxiv:1307.5909
substantial research on deep learning-based emergent communication uses the referential game framework, specifically the lewis signaling game; however, we argue that successful communication in this game typically needs only one or two symbols for target image classification because of a sampling pitfall in the training data. to address this issue, we provide a theoretical analysis and introduce a combinatorial algorithm, solveminsym (sms), to solve for the symbolic complexity of classification, i.e., the minimum number of symbols in the message required for successful communication. we use the sms algorithm to create datasets with different symbolic complexity and empirically show that data with higher symbolic complexity increases the number of effective symbols in the emergent language.
arxiv:2410.18806
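the quantity being computed above — the minimum number of message slots that still distinguishes every candidate object — admits a brute-force version of the idea behind solveminsym; the attribute-tuple encoding of objects is our own simplification:

```python
from itertools import combinations

def min_symbols_for_classification(objects):
    """smallest k such that some k attribute slots jointly separate all objects."""
    n_attrs = len(objects[0])
    for k in range(1, n_attrs + 1):
        for slots in combinations(range(n_attrs), k):
            keys = {tuple(obj[i] for i in slots) for obj in objects}
            if len(keys) == len(objects):   # every object uniquely identified
                return k
    return None  # duplicate objects: no subset of attributes separates them

# toy candidate set (color, shape, size): color alone suffices, so k = 1
objs = [("red", "box", 1), ("blue", "box", 1), ("green", "ball", 2)]
assert min_symbols_for_classification(objs) == 1
```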