Primordial black holes (PBHs) are theorized objects that may make up some, or all, of the dark matter in the universe. At the lowest allowed masses, Hawking radiation (in the form of photons or electrons and positrons) is the primary tool to search for PBHs. This paper is part of an ongoing series in which we aim to calculate the $O(\alpha)$ corrections to Hawking radiation from asteroid-mass primordial black holes, based on a perturbative quantum electrodynamics (QED) calculation on a Schwarzschild background. Silva et al. (2023) divided the corrections into dissipative and conservative parts; this work focuses on the numerical computation of the dissipative $O(\alpha)$ corrections to the photon spectrum. We generate spectra for primordial black holes of mass $M = 1$-$8 \times 10^{21} M_{\rm Planck}$. This calculation confirms the expectation that at low energies, the inner bremsstrahlung radiation is the dominant contribution to the Hawking radiation spectrum. At high energies, the main $O(\alpha)$ effect is a suppression of the photon spectrum due to pair production (emitted $\gamma \rightarrow e^+ e^-$), but this is small compared to the overall spectrum. We compare the low-energy tail in our curved-spacetime QED calculation to several approximation schemes in the literature, and find deviations that could have important implications for constraints from Hawking radiation on primordial black holes as dark matter.
arxiv:2408.17423
The Internet of Things (IoT) has grown significantly in popularity, accompanied by increased capacity and lower cost of communications, and rapid development of technologies. At the same time, big data and real-time data analysis have taken on great importance, accompanied by unprecedented interest in sharing data among citizens, public administrations and other organizations, giving rise to what is known as the collaborative Internet of Things. This growth in data and infrastructure must be accompanied by a software architecture that allows its exploitation. Although there are various proposals focused on exploiting the IoT at the edge, fog and/or cloud levels, it is not easy to find a software solution that exploits the three tiers together, taking maximum advantage not only of the analysis of contextual and situational data at each tier, but also of two-way communications between adjacent ones. In this paper, we propose an architecture that addresses these deficiencies with technologies appropriate for managing the resources of each tier: edge, fog and cloud. In addition, allowing two-way communications along the three tiers of the architecture considerably enriches the contextual and situational information in each layer, and substantially assists real-time decision making. The paper illustrates the proposed software architecture through a case study of respiratory disease surveillance in hospitals. As a result, the proposed architecture permits efficient communications between the different tiers, responding to the needs of these types of IoT scenarios.
arxiv:2401.14968
Interatomic potentials have been widely used in atomistic simulations such as molecular dynamics. Recently, frameworks to construct accurate interatomic potentials that combine a systematic set of density functional theory (DFT) calculations with machine learning techniques have been proposed. One of these methods is to use compressed sensing to derive a sparse representation for the interatomic potential. This facilitates the control of the accuracy of interatomic potentials. In this study, we demonstrate the applicability of compressed sensing to deriving the interatomic potentials of ten elemental metals, namely Ag, Al, Au, Ca, Cu, Ga, In, K, Li and Zn. For each elemental metal, the interatomic potential is obtained from DFT calculations using elastic net regression. The interatomic potentials are found to have prediction errors of less than 3.5 meV/atom, 0.03 eV/\AA{} and 0.15 GPa for the energy, force and stress tensor, respectively, which enable the accurate prediction of physical properties such as lattice constants and the phonon dispersion relationship.
arxiv:1505.03994
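The elastic net step described in the abstract above can be sketched on synthetic data. Everything below (the descriptor matrix, the sparse ground-truth coefficients, the noise level) is an invented stand-in, not the paper's actual DFT training set; it only illustrates how an L1+L2 regularized regression recovers a sparse representation.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)

# Synthetic stand-in for a DFT training set: each row holds structural
# descriptors for one configuration; the per-atom energy truly depends
# on only a few of them (a sparse ground truth).
n_structures, n_descriptors = 200, 50
X = rng.normal(size=(n_structures, n_descriptors))
true_coef = np.zeros(n_descriptors)
true_coef[:5] = [1.2, -0.8, 0.5, 0.3, -0.2]
y = X @ true_coef + 0.01 * rng.normal(size=n_structures)

# Elastic net = least squares + L1 (sparsity) + L2 (stability) penalties.
model = ElasticNet(alpha=0.01, l1_ratio=0.9, max_iter=10000)
model.fit(X, y)

n_nonzero = int(np.count_nonzero(np.abs(model.coef_) > 1e-6))
rmse = float(np.sqrt(np.mean((model.predict(X) - y) ** 2)))
print(n_nonzero, round(rmse, 3))
```

Tightening `alpha` trades fit accuracy for a sparser (cheaper to evaluate) potential, which is the control over accuracy the abstract refers to.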
We consider operator-valued differential Lyapunov and Riccati equations, where the operators $B$ and $C$ may be relatively unbounded with respect to $A$ (in the standard notation). In this setting, we prove that the singular values of the solutions decay fast under certain conditions. In fact, the decay is exponential in the negative square root if $A$ generates an analytic semigroup and the range of $C$ has finite dimension. This extends previous similar results for algebraic equations to the differential case. When the initial condition is zero, we also show that the singular values converge to zero as time goes to zero, with a certain rate that depends on the degree of unboundedness of $C$. A fast decay of the singular values corresponds to a low numerical rank, which is a critical feature in large-scale applications. The results reported here provide a theoretical foundation for the observation that, in practice, a low-rank factorization usually exists.
arxiv:1804.02197
Polar codes are a class of linear block codes that provably achieve channel capacity, and have been selected as a coding scheme for $5^{\rm th}$ generation wireless communication standards. Successive-cancellation (SC) decoding of polar codes has mediocre error-correction performance at short to moderate codeword lengths: the SC-flip decoding algorithm is one of the solutions proposed to overcome this issue. On the other hand, SC-flip has a higher implementation complexity than SC due to the required log-likelihood ratio (LLR) selection and sorting process. Moreover, it requires a high number of iterations to reach good error-correction performance. In this work, we propose two techniques to improve the SC-flip decoding algorithm for low-rate codes, based on the observation of channel-induced error distributions. The first is a fixed index selection (FIS) scheme that avoids the substantial implementation cost of LLR selection and sorting at no cost in error-correction performance. The second is an enhanced index selection (EIS) criterion that improves the error-correction performance of SC-flip decoding. A reduction of $24.6\%$ in the implementation cost of logic elements is estimated with the FIS approach, while simulation results show that EIS yields an error-correction performance improvement of up to $0.42$ dB at a target FER of $10^{-4}$.
arxiv:1711.11096
The outstanding performance of Transformer-based language models on a great variety of NLP and NLU tasks has stimulated interest in exploring their inner workings. Recent research has focused primarily on higher-level and complex linguistic phenomena such as syntax, semantics, world knowledge, and common sense. The majority of the studies are Anglocentric, and little remains known regarding other languages, particularly their morphosyntactic properties. To this end, our work presents Morph Call, a suite of 46 probing tasks for four Indo-European languages of different morphology: English, French, German and Russian. We propose a new type of probing task based on the detection of guided sentence perturbations. We use a combination of neuron-, layer- and representation-level introspection techniques to analyze the morphosyntactic content of four multilingual Transformers, including their less explored distilled versions. Besides, we examine how fine-tuning for POS tagging affects the model's knowledge. The results show that fine-tuning can both improve and degrade the probing performance, and can change how morphosyntactic knowledge is distributed across the model. The code and data are publicly available, and we hope to fill the gaps in this less studied aspect of Transformers.
arxiv:2104.12847
This research addresses the multiprocessor scheduling problem of hard real-time systems, and it especially focuses on optimal and global schedulers when practical constraints are taken into account. First, we propose an improvement of the optimal algorithm BF. We formally prove that our adaptation is (i) optimal, i.e., it always generates a feasible schedule as long as such a schedule exists, and (ii) valid, i.e., it complies with all the requirements. We also show that it outperforms BF by providing a computational complexity of O(n), where n is the number of tasks to be scheduled. Next, we propose a schedulability analysis which indicates a priori whether the real-time application can be scheduled by our improvement of BF without missing any deadline. This analysis is, to the best of our knowledge, the first such test for multiprocessors that takes into account all the main overheads generated by the operating system.
arxiv:1001.4115
G. Galbraith, expert in cell migration and super-resolution microscopy
N. Gregory Hamilton, psychiatrist
Matthew Keeslar, physician assistant instructor of urology, School of Medicine; former professional actor (Waiting for Guffman, Scream 3, Frank Herbert's Dune)
Lena Kenin, OB/GYN, psychiatrist
John Kitzhaber, physician, longest-serving governor in Oregon's history
Muriel Lezak, American neuropsychologist and author
Owen McCarty, chair of the Department of Biomedical Engineering
Bita Moghaddam, Ruth Matarazzo Professor of Behavioral Neuroscience, author
Bud Pierce, physician and politician
Lendon Smith, OB/GYN, pediatrician, author, and television personality
Albert Starr, first surgeon to implant a heart valve successfully
Kent L. Thornburg, scientist, researcher, professor
Shoshana R. Ungerleider, internal medicine physician, film producer
Melissa Wong, cancer stem cell biologist
D. George Wyse, expert in cardiac arrhythmias

== See also ==
Art Collection of Oregon Health & Science University
Marquam Nature Park

== External links ==
Media related to Oregon Health & Science University at Wikimedia Commons
Official website
https://en.wikipedia.org/wiki/Oregon_Health_&_Science_University
Various charge pairings in strongly correlated electron systems are interpreted as quantum entanglement of a composite system. Particles in the intermediate phase have a tendency to form a coherent superposition of the localized state and the itinerant state, which induces the entanglement of both particles in the bipartite subsystems, thereby increasing the entropy of the system. The correction to the entropic Coulomb force becomes an immediate cause of charge pairing.
arxiv:1203.0896
Feature selection is an essential process in machine learning, especially when dealing with high-dimensional datasets. It helps reduce the complexity of machine learning models, improve performance, mitigate overfitting, and decrease computation time. This paper presents a novel feature selection framework, shap-select. The framework conducts a linear or logistic regression of the target on the Shapley values of the features, on the validation set, and uses the signs and significance levels of the regression coefficients to implement an efficient heuristic for feature selection in tabular regression and classification tasks. We evaluate shap-select on the Kaggle credit card fraud dataset, demonstrating its effectiveness compared to established methods such as recursive feature elimination (RFE), HISEL (a mutual-information-based feature selection method), Boruta, and a simpler Shapley-value-based method. Our findings show that shap-select combines interpretability, computational efficiency, and performance, offering a robust solution for feature selection.
arxiv:2410.06815
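The selection heuristic described in the shap-select abstract can be sketched as follows. The SHAP-value matrix and target here are invented stand-ins (in the real framework the Shapley values come from a fitted model via the shap library), and a plain t-statistic threshold stands in for the significance-level test:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical validation-set SHAP values: one column per feature.
# Features 0 and 1 genuinely drive the target; the rest are noise.
n, p = 500, 6
shap_vals = rng.normal(size=(n, p))
y = 2.0 * shap_vals[:, 0] + 1.5 * shap_vals[:, 1] + 0.1 * rng.normal(size=n)

# OLS regression of the target on the SHAP values (with an intercept);
# keep features whose coefficient is positive and clearly significant.
X = np.column_stack([np.ones(n), shap_vals])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = n - X.shape[1]
cov = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
tstats = beta / np.sqrt(np.diag(cov))
selected = [j for j in range(p) if beta[j + 1] > 0 and tstats[j + 1] > 3.0]
print(selected)
```

The sign condition encodes the intuition that a feature whose Shapley contributions correlate negatively with the target is not helping the model.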
We test the disk/corona model of Janiuk et al. (2000) against the composite mean optical/UV/soft X-ray spectrum of radio-loud and radio-quiet quasars from Laor et al. (1997), which applies to faint quasars. The model represents the optical/UV continuum well if the hardening of the locally emitted spectrum is included, with the color-to-effective-temperature ratio $f \sim 2$ in the inner 20-30 Schwarzschild radii. Comptonization seen in the soft X-ray band is well explained by the adopted corona model in the case of radio-loud objects. However, this Comptonization is much stronger in radio-quiet objects, and an additional, ad hoc Comptonizing medium must be assumed. We speculate that perhaps in both types of quasars there is a strong outflow of hot plasma; this plasma is collimated in radio-loud objects but not in radio-quiet objects, so in the latter it is present along the line of sight, serving as the required Comptonizing medium. The hard X-ray power law is not explained by the model; it may come from a non-thermal component of the electron plasma, as is the case for Galactic black holes in their soft states.
arxiv:astro-ph/0103128
Let $A$ be a real $m \times n$ measurement matrix and $b \in \mathbb{R}^m$ an observation vector. The affine feasibility problem with sparsity and nonnegativity ($AFP_{SN}$ for short) is to find a sparse and nonnegative vector $x \in \mathbb{R}^n$ with $Ax = b$, if such an $x$ exists. In this paper, we focus on establishing an optimization approach to solving the $AFP_{SN}$. By discussing the tangent cone and normal cone of the sparsity constraint, we give the first-order necessary optimality conditions, $\alpha$-stability, T-stability and N-stability, and the second-order necessary and sufficient optimality conditions for the related minimization problems of the $AFP_{SN}$. By adopting an Armijo-type stepsize rule, we present a gradient support projection algorithm framework for the $AFP_{SN}$ and prove its full convergence when the matrix $A$ is $s$-regular. Numerical experiments show the excellent performance of the new algorithm for the $AFP_{SN}$ both with and without noise.
arxiv:1406.7178
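A minimal sketch of the gradient-projection idea behind the abstract above, on an invented toy instance: alternate a gradient step on $\|Ax - b\|^2$ with a projection onto the sparse nonnegative set. A constant stepsize replaces the paper's Armijo-type rule, so this is an illustration of the projection structure, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy AFP_SN instance: recover a sparse, nonnegative x with A x = b.
m, n, s = 50, 80, 3
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.uniform(0.5, 2.0, size=s)
b = A @ x_true

def project(v, s):
    """Project onto {x >= 0, ||x||_0 <= s}: clip negative entries,
    then keep only the s largest remaining entries."""
    v = np.maximum(v, 0.0)
    out = np.zeros_like(v)
    keep = np.argsort(v)[-s:]
    out[keep] = v[keep]
    return out

# Projected gradient iteration with a conservative constant stepsize.
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(1000):
    x = project(x - step * A.T @ (A @ x - b), s)

print(np.linalg.norm(A @ x - b))
```

For a well-conditioned Gaussian $A$ and sufficiently sparse $x$, the iteration recovers the feasible point to machine precision; the paper's $s$-regularity condition plays the analogous role in its convergence proof.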
Brownian motion with known positive drift is sampled in stages until it crosses a positive boundary $a$. A family of multistage samplers that control the expected overshoot over the boundary by varying the stage size at each stage is shown to be optimal for large $a$, minimizing a linear combination of overshoot and number of stages. Applications to hypothesis testing are discussed.
arxiv:1105.2322
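The overshoot/stage-count trade-off the abstract refers to can be seen in a simple simulation with a fixed stage size (the parameters and the fixed-size sampler below are illustrative choices, not the paper's optimal family):

```python
import numpy as np

rng = np.random.default_rng(4)

# Drifted Brownian motion sampled in stages of size h until it first
# exceeds the boundary a; record the overshoot and the stage count.
mu, a = 1.0, 50.0

def run(h):
    x, stages = 0.0, 0
    while x < a:
        x += mu * h + np.sqrt(h) * rng.normal()  # one stage of length h
        stages += 1
    return x - a, stages

# With a fixed stage size, a smaller h means less expected overshoot but
# more stages; the multistage samplers in the paper optimize this
# trade-off by varying h from stage to stage.
res = [run(1.0) for _ in range(2000)]
overshoots = np.array([o for o, _ in res])
print(round(float(overshoots.mean()), 2))
```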
An interpretation of the formation of halos in accelerators, based on a quantum-like theory via a diffraction model, is given in terms of the transverse beam motion. Physical implications of the longitudinal dynamics are also examined.
arxiv:physics/9910026
Recent work by Nesterov and Stich showed that momentum can be used to accelerate the rate of convergence for block Gauss-Seidel in the setting where a fixed partitioning of the coordinates is chosen ahead of time. We show that this setting is too restrictive, constructing instances where breaking locality by running non-accelerated Gauss-Seidel with randomly sampled coordinates substantially outperforms accelerated Gauss-Seidel with any fixed partitioning. Motivated by this finding, we analyze the accelerated block Gauss-Seidel algorithm in the random coordinate sampling setting. Our analysis captures the benefit of acceleration with a new data-dependent parameter which is well behaved when the matrix sub-blocks are well conditioned. Empirically, we show that accelerated Gauss-Seidel with random coordinate sampling provides speedups for large-scale machine learning tasks when compared to non-accelerated Gauss-Seidel and the classical conjugate gradient algorithm.
arxiv:1701.03863
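The non-accelerated, randomly sampled variant discussed above can be sketched on a small symmetric positive definite system (an invented toy problem, and without the momentum term the paper analyzes): each iteration draws a random coordinate block and solves the local subproblem exactly.

```python
import numpy as np

rng = np.random.default_rng(3)

# Small SPD system A x = b standing in for a quadratic objective.
n, blk = 30, 5
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)          # well-conditioned, SPD
x_star = rng.normal(size=n)
b = A @ x_star

# Block Gauss-Seidel with random coordinate sampling: pick a random
# block S, then solve A[S,S] delta = r[S] for the exact block update.
x = np.zeros(n)
for _ in range(1500):
    S = rng.choice(n, size=blk, replace=False)
    r = b - A @ x                    # current residual
    x[S] += np.linalg.solve(A[np.ix_(S, S)], r[S])

print(np.linalg.norm(x - x_star))
```

Because each update minimizes the quadratic exactly over the sampled block, the error contracts in expectation at a rate governed by the conditioning of the sub-blocks, which is the quantity the paper's data-dependent parameter refines.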
It is shown that the joint spectral radius $\rho(M)$ of a precompact family $M$ of operators on a Banach space $X$ is equal to the maximum of two numbers: the joint spectral radius $\rho_{e}(M)$ of the image of $M$ in the Calkin algebra, and the Berger-Wang radius $r(M)$ defined by the formula \[ r(M) = \limsup_{n \to \infty} \left( \sup \left\{ \rho(a) : a \in M^{n} \right\} \right)^{1/n}. \] Some more general Banach-algebraic results of this kind are also proved. The proofs are based on the study of special radicals on the class of Banach algebras.
arxiv:0805.0209
We continue investigating the interaction between flatness and $\mathfrak{a}$-adic completion for infinitely generated modules over a commutative ring $A$. We introduce the concept of $\mathfrak{a}$-adic flatness, which is weaker than flatness. We prove that $\mathfrak{a}$-adic flatness is preserved under completion when the ideal $\mathfrak{a}$ is weakly proregular. We also prove that when $A$ is noetherian, $\mathfrak{a}$-adic flatness coincides with flatness (for complete modules). An example is worked out of a non-noetherian ring $A$, with a weakly proregular ideal $\mathfrak{a}$, for which the completion $\hat{A}$ is not flat. We also study $\mathfrak{a}$-adic systems, and prove that if the ideal $\mathfrak{a}$ is finitely generated, then the limit of any $\mathfrak{a}$-adic system is a complete module.
arxiv:1606.01832
We derive the character of neutrino oscillations that results from a model of equivalence principle violation suggested recently by Damour and Polyakov as a plausible consequence of string theory. In this model, neutrino oscillations take place through interaction with a long-range scalar field of gravitational origin even if the neutrinos are degenerate in mass. The energy dependence of the oscillation length is identical to that in the conventional mass-mixing mechanism. This possibility further highlights the independence of, and need for, more exacting direct neutrino mass measurements, together with a next generation of neutrinoless double beta decay experiments.
arxiv:hep-ph/9707407
The aim of this paper is to present a tool used to show that certain Banach spaces can be endowed with $C^k$ smooth equivalent norms. The hypothesis uses particular countable decompositions of certain subsets of $B_{X^*}$, namely boundaries. Of interest is that the main result unifies two quite well known results. In the final section, some new corollaries are given.
arxiv:1311.1408
We show that flow anisotropies in relativistic heavy-ion collisions can be analyzed using a technique of shape analysis of excursion sets recently proposed by us for CMBR fluctuations to investigate the anisotropic expansion history of the universe. The technique analyzes the shapes (sizes) of patches above (below) a certain threshold value of transverse energy/particle number (the excursion sets) as a function of the azimuthal angle and rapidity. Modeling flow by imparting extra anisotropic momentum to the momentum distribution of particles from HIJING, we compare the resulting distributions for excursion sets at two different azimuthal angles. The angles with the maximum difference between the two distributions identify the event plane, and the magnitude of the difference between the two distributions relates to the magnitude of the momentum anisotropy, i.e. the elliptic flow.
arxiv:1112.1177
Five cryogenic resonant gravitational antennas are now in operation. This is the first time that such a large number of highly sensitive antennas are taking data, and an agreement on data exchange has been signed by the responsible groups. The data exchanged will consist essentially of lists of "candidate events". In this paper, the procedure used by the Rome group to obtain "candidate events" is presented. Some methods of analyzing the data of the "network" of the five antennas are shown.
arxiv:gr-qc/0002008
We study the superfluidity of single-component dipolar Fermi gases in three dimensions within a pairing fluctuation theory. The transition temperature $T_{c}$ for the dominant $p_z$-wave superfluidity exhibits a remarkable re-entrant behavior as a function of the pairing strength induced by the dipole-dipole interaction (DDI), which leads to an anisotropic pair dispersion. The anisotropy and the long-range nature of the DDI cause $T_c$ to vanish for a narrow range of intermediate interaction strengths, where a pair density wave state emerges as the ground state. The superfluid density and thermodynamics below $T_{c}$, along with the density profiles in a harmonic trap, are investigated as well, throughout the BCS-BEC crossover. Implications for experiments are discussed.
arxiv:1503.04453
High-density microfluidics is becoming an important experimental platform for studying complex biological systems such as synthetic gene regulatory networks, molecular biocomputing in engineered cells, distributed rapid point-of-care diagnosis, and monitoring of pathological environments. Imaging the transient biochemical reactions happening in these systems at the single-particle or cellular level requires precise time-dependent control of sample reaction and imaging conditions at the desired fluidic momentum. In this study, we present our novel miniaturized and programmable electronics-based pneumatic system to meet this requirement. We demonstrate its capability to control reaction parameters such as concentrations and injection rates in a liposome production system.
arxiv:2107.09447
Observational evidence shows that coronal jets can hit prominences and set them in motion. The impact leads to large-amplitude oscillations (LAOs) of the prominence. In this paper we attempt to understand this process via 2.5D MHD numerical experiments. In our model, the jets are generated in a sheared magnetic arcade above a parasitic bipolar region located in one of the footpoints of the filament channel (FC) supporting the prominence. The shear is imposed with velocities not far above observed photospheric values; it leads to a multiple reconnection process, as in previous jet models. Both a fast Alfv\'enic perturbation and a slower supersonic front preceding a plasma jet are issued from the reconnection site; in the later phase, a more violent (eruptive) jet is produced. The perturbation and jets run along the FC; they are partially reflected at the prominence and partially transmitted through it. The result is a pattern of counter-streaming flows along the FC and oscillations of the prominence. The oscillations are LAOs (with amplitude above $10~\mathrm{km\,s^{-1}}$) in parts of the prominence, in both the longitudinal and transverse directions. In some field lines, the impact is so strong that the prominence mass is brought out of the dip and down to the chromosphere along the FC. Two cases are studied with different heights of the arcade above the parasitic bipolar region, leading to different heights for the region of the prominence perturbed by the jets. The obtained oscillation amplitudes and periods are in general agreement with the observations.
arxiv:2103.02661
Deep convolutional neural networks (DCNNs) are used extensively in medical image segmentation, and hence in 3D navigation for robot-assisted minimally invasive surgeries (MISs). However, current DCNNs usually use downsampling layers to increase the receptive field and gain abstract semantic information. These downsampling layers decrease the spatial dimension of feature maps, which can be detrimental to image segmentation. Atrous convolution is an alternative to the downsampling layer: it increases the receptive field whilst maintaining the spatial dimension of feature maps. In this paper, a method for effective atrous rate setting is proposed to achieve the largest and fully covered receptive field with a minimum number of atrous convolutional layers. Furthermore, a new full-resolution DCNN, the atrous convolutional neural network (ACNN), which incorporates cascaded atrous II-blocks, residual learning and instance normalization (IN), is proposed. Application of the proposed ACNN to magnetic resonance imaging (MRI) and computed tomography (CT) image segmentation demonstrates that the proposed ACNN can achieve higher segmentation intersections over union (IoUs) than U-Net and Deeplabv3+, but with fewer trainable parameters.
arxiv:1901.09203
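As a toy illustration of why atrous rate setting matters, the receptive field of a stack of stride-1 dilated convolutions grows with the sum of the dilation rates, so fewer layers can cover a large field. The rates below are an arbitrary example, not the rate-setting rule the paper proposes:

```python
def receptive_field(rates, k=3):
    """Receptive field of stacked stride-1 dilated convolutions:
    each layer with kernel size k and dilation rate r adds (k - 1) * r."""
    rf = 1
    for r in rates:
        rf += (k - 1) * r
    return rf

print(receptive_field([1, 2, 4, 8]))  # 1 + 2 * (1 + 2 + 4 + 8) = 31
```

Choosing the rates also has to avoid gaps ("gridding") in the field, which is the "fully covered" condition mentioned in the abstract.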
Recent advances in the field of attosecond science hold the promise of tracking electronic processes at the shortest space and time scales. Imaging methods that combine attosecond temporal with nanometer spatial resolution are currently out of reach. Coherent diffractive imaging is based on the diffraction by a sample of a quasi-monochromatic illumination with a coherence time that exceeds the duration of an attosecond pulse. Due to the extremely broad nature of attosecond spectra, novel imaging techniques are required. Here, we present an approach that enables coherent diffractive imaging with a broadband isolated attosecond source. The method is based on a numerical monochromatisation of the broadband diffraction pattern by the regularised inversion of a matrix which depends only on the spectrum of the diffracted radiation. Experimental validations in the visible and hard X-rays show the applicability of the method. Because of its generality and ease of implementation for single attosecond pulses, we expect this method to find widespread applications in future attosecond technologies such as petahertz electronics, attosecond nanomagnetism and attosecond energy transfer.
arxiv:1909.11345
The ZX-calculus is a graphical calculus for reasoning about pure-state qubit quantum mechanics. It is complete for pure qubit stabilizer quantum mechanics, meaning any equality involving only stabilizer operations that can be derived using matrices can also be derived pictorially. Stabilizer operations include the unitary Clifford group, as well as preparation of qubits in the state $|0\rangle$, and measurements in the computational basis. For general pure-state qubit quantum mechanics, the ZX-calculus is incomplete: there exist equalities involving non-stabilizer unitary operations on single qubits which cannot be derived from the current rule set of the ZX-calculus. Here, we show that the ZX-calculus for single qubits remains complete upon adding the operator T to the single-qubit stabilizer operations. This is particularly interesting, as the resulting single-qubit Clifford+T group is approximately universal, i.e. any unitary single-qubit operator can be approximated to arbitrary accuracy using only Clifford operators and T.
arxiv:1412.8553
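The gates involved are easy to write down concretely. A quick numerical check (standard matrix conventions, not the paper's graphical rules) shows the relation between T and the Clifford phase gate S, and that Clifford+T products remain unitary:

```python
import numpy as np

# Single-qubit gates as matrices: S (phase gate) is Clifford; T is the
# non-Clifford gate whose addition yields the Clifford+T group.
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# T is a square root of S: applying T twice gives S.
print(np.allclose(T @ T, S))  # True

# Alternating products such as H T H T generate rotations whose angles
# are irrational multiples of pi, which underlies the approximate
# universality of Clifford+T mentioned in the abstract.
U = H @ T @ H @ T
print(np.allclose(U.conj().T @ U, np.eye(2)))  # still unitary: True
```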
In this work, we use the Sternberg phase space (which may be considered the classical phase space of particles in gauge fields) to explore the dynamics of such particles in the context of Hamilton-Dirac systems and their associated Hamilton-Pontryagin variational principles. For this, we develop an analogue of the Pontryagin bundle in the case of the Sternberg phase space. Moreover, we show the link of this new bundle to the so-called magnetized Tulczyjew triple, which is an analogue of the link between the Pontryagin bundle and the usual Tulczyjew triple. Taking advantage of the symplectic nature of the Sternberg space, we induce a Dirac structure on the Sternberg-Pontryagin bundle which leads to the Hamilton-Dirac structure that we are looking for. We also analyze the intrinsic and variational nature of the equations of motion of particles in gauge fields with regard to the newly defined geometry. Lastly, we illustrate our theory through the case of a $U(1)$ gauge group, leading to the paradigmatic example of an electrically charged particle in an electromagnetic field.
arxiv:1410.3249
Recent advances in image and video generation raise hopes that these models possess world modeling capabilities, i.e. the ability to generate realistic, physically plausible videos. This could revolutionize applications in robotics, autonomous driving, and scientific simulation. However, before treating these models as world models, we must ask: do they adhere to physical conservation laws? To answer this, we introduce Morpheus, a benchmark for evaluating video generation models on physical reasoning. It features 80 real-world videos capturing physical phenomena, guided by conservation laws. Since artificial generations lack ground truth, we assess physical plausibility using physics-informed metrics evaluated with respect to infallible conservation laws known per physical setting, leveraging advances in physics-informed neural networks and vision-language foundation models. Our findings reveal that even with advanced prompting and video conditioning, current models struggle to encode physical principles despite generating aesthetically pleasing videos. All data, the leaderboard, and code are open-sourced at our project page.
arxiv:2504.02918
, one example being the method of parameter variation. As stated by Fung et al. in the revision to the classic engineering text Foundations of Solid Mechanics: Engineering is quite different from science. Scientists try to understand nature. Engineers try to make things that do not exist in nature. Engineers stress innovation and invention. To embody an invention the engineer must put his idea in concrete terms, and design something that people can use. That something can be a complex system, a device, a gadget, a material, a method, a computing program, an innovative experiment, a new solution to a problem, or an improvement on what already exists. Since a design has to be realistic and functional, it must have its geometry, dimensions, and characteristics data defined. In the past, engineers working on new designs found that they did not have all the required information to make design decisions. Most often, they were limited by insufficient scientific knowledge. Thus they studied mathematics, physics, chemistry, biology and mechanics. Often they had to add to the sciences relevant to their profession. Thus engineering sciences were born. Although engineering solutions make use of scientific principles, engineers must also take into account safety, efficiency, economy, reliability, and constructability or ease of fabrication, as well as the environment, and ethical and legal considerations such as patent infringement or liability in the case of failure of the solution.

=== Medicine and biology ===

The study of the human body, albeit from different directions and for different purposes, is an important common link between medicine and some engineering disciplines. Medicine aims to sustain, repair, enhance and even replace functions of the human body, if necessary, through the use of technology. Modern medicine can replace several of the body's functions through the use of artificial organs, and can significantly alter the function of the human body through artificial devices such as, for example, brain implants and pacemakers. The fields of bionics and medical bionics are dedicated to the study of synthetic implants pertaining to natural systems. Conversely, some engineering disciplines view the human body as a biological machine worth studying and are dedicated to emulating many of its functions by replacing biology with technology. This has led to fields such as artificial intelligence, neural networks, fuzzy logic, and robotics. There are also substantial interdisciplinary interactions between engineering and medicine. Both fields provide solutions to real-world problems. This often requires moving forward before phenomena are completely understood in a more rigorous scientific sense, and therefore experimentation and empirical knowledge are an integral part of both. Medicine, in part, studies the function of the
https://en.wikipedia.org/wiki/Engineering
recent controversy regarding the existence of massive (log(m/msun) > 11) galaxies at z > 6 poses a challenge for galaxy formation theories. hence, it is of critical importance to understand the effects of sed fitting methods on stellar mass estimates of epoch of reionization galaxies. in this work, we perform a case study on the agn host galaxy candidate cos-87259 with spectroscopic redshift z = 6.853, that is claimed to have an extremely high stellar mass of log(m/msun) ~ 11.2. we test a suite of different sed fitting algorithms and stellar population models on our independently measured photometry in 17 broad bands for this source. between five different code setups, the stellar mass estimates for cos-87259 span log(m/msun) = 10.24-11.00, whilst the reduced chi-squared values of the fits are all close to unity within dchi2 = 1.2, so that the quality of the sed fits is basically indistinguishable. only when we adopt a nonparametric star formation history model within prospector do we retrieve a stellar mass exceeding log(m/msun) = 11. although the derived stellar masses change when using previously reported photometry for this source, the nonparametric sed-fitting method always yields the highest values. as these models are becoming increasingly popular for james webb space telescope high-redshift science, we stress the absolute importance of testing various sed fitting routines, particularly on apparently very massive galaxies at such high redshifts.
arxiv:2212.04511
we have witnessed remarkable progress in foundation models for vision tasks. recently, several works have utilized the segment anything model (sam) to boost segmentation performance in medical images, where most of them focus on training an adaptor for fine-tuning on a large amount of pixel-wise annotated medical images in a fully supervised manner. in this paper, to reduce the labeling cost, we investigate a novel weakly-supervised sam-based segmentation model, namely weakmedsam. specifically, our proposed weakmedsam contains two modules: 1) to mitigate severe co-occurrence in medical images, a sub-class exploration module is introduced to learn accurate feature representations. 2) to improve the quality of the class activation maps, our prompt affinity mining module utilizes the prompt capability of sam to obtain an affinity map for random-walk refinement. our method can be applied to any sam-like backbone, and we conduct experiments with samus and efficientsam. the experimental results on three widely used benchmark datasets, i.e., brats 2019, abdomenct-1k, and the msd cardiac dataset, show the promising results of our proposed weakmedsam. our code is available at https://github.com/wanghr64/weakmedsam.
arxiv:2503.04106
the spectrum of glueballs with quantum numbers $j^{\mathsf{pc}} = 0^{\pm+}, 2^{\pm+}, 3^{\pm+}, 4^{\pm+}$ is calculated in quenched quantum chromodynamics from bound state equations. the input is taken from a parameter-free calculation of two- and three-point functions. our results agree well with lattice results where available and contain also some additional states. for the scalar glueball, we present first results for the effects of additional diagrams, which turn out to be strongly suppressed.
arxiv:2201.05163
for a finitely generated free group f_n of rank at least 2, any finite subgroup of out(f_n) can be realized as a group of automorphisms of a graph with fundamental group f_n. this result, known as out(f_n) realization, was proved by zimmermann, culler and khramtsov. this theorem is comparable to nielsen realization as proved by kerckhoff: for a closed surface with negative euler characteristic, any finite subgroup of the mapping class group can be realized as a group of isometries of a hyperbolic surface. both of these theorems have restatements in terms of fixed points of actions on spaces naturally associated to them. for a nonnegative integer n, we define a class of groups gvp(n) and prove a similar statement for their outer automorphism groups.
arxiv:math/0502248
a study is presented in which a contrastive learning approach is used to extract low - dimensional representations of the acoustic environment from single - channel, reverberant speech signals. convolution of room impulse responses ( rirs ) with anechoic source signals is leveraged as a data augmentation technique that offers considerable flexibility in the design of the upstream task. we evaluate the embeddings across three different downstream tasks, which include the regression of acoustic parameters reverberation time rt60 and clarity index c50, and the classification into small and large rooms. we demonstrate that the learned representations generalize well to unseen data and perform similarly to a fully - supervised baseline.
arxiv:2302.11205
photosynthetic light harvesting provides a natural blueprint for bioengineered and biomimetic solar energy and light detection technologies. recent evidence suggests some individual light harvesting protein complexes ( lhcs ) and lhc subunits efficiently transfer excitons towards chemical reaction centers ( rcs ) via an interplay between excitonic quantum coherence, resonant protein vibrations, and thermal decoherence. the role of coherence in vivo is unclear however, where excitons are transferred through multi - lhc / rc aggregates over distances typically large compared with intra - lhc scales. here we assess the possibility of long - range coherent transfer in a simple chromophore network with disordered site and transfer coupling energies. through renormalization we find that, surprisingly, decoherence is diminished at larger scales, and long - range coherence is facilitated by chromophoric clustering. conversely, static disorder in the site energies grows with length scale, forcing localization. our results suggest sustained coherent exciton transfer may be possible over distances large compared with nearest - neighbour ( n - n ) chromophore separations, at physiological temperatures, in a clustered network with small static disorder. this may support findings suggesting long - range coherence in algal chloroplasts, and provides a framework for engineering large chromophore or quantum dot high - temperature exciton transfer networks.
arxiv:1206.3300
we consider the formation of low-mass x-ray binaries containing accreting neutron stars via the helium-star supernova channel. the predicted relative number of short-period transients provides a sensitive test of the input physics in this process. we investigate the effect of varying mean kick velocities, orbital angular momentum loss efficiencies, and common envelope ejection efficiencies on the subpopulation of short-period systems, both transient and persistent. guided by the thermal-viscous disk instability model in irradiation-dominated disks, we posit that short-period transients have donors close to the end of core-hydrogen burning. we find that with increasing mean kick velocity the overall short-period fraction, s, grows, while the fraction, r, of systems with evolved donors among short-period systems drops. this effect, acting in opposite directions on these two fractions, allows us to constrain models of lmxb formation through comparison with observational estimates of s and r. without fine tuning or extreme assumptions about evolutionary parameters, consistency between models and current observations is achieved for a regime of intermediate average kick magnitudes of about 100-200 km/s, provided that (i) orbital braking for systems with donor masses in the range 1-1.5 solar masses is weak, i.e., much less effective than a simple extrapolation of standard magnetic braking beyond 1.0 solar mass would suggest, and (ii) the efficiency of common envelope ejection is low.
arxiv:astro-ph/9803288
this paper introduces a new llm based agent framework for simulating electric vehicle ( ev ) charging behavior, integrating user preferences, psychological characteristics, and environmental factors to optimize the charging process. the framework comprises several modules, enabling sophisticated, adaptive simulations. dynamic decision making is supported by continuous reflection and memory updates, ensuring alignment with user expectations and enhanced efficiency. the framework ' s ability to generate personalized user profiles and real - time decisions offers significant advancements for urban ev charging management. future work could focus on incorporating more intricate scenarios and expanding data sources to enhance predictive accuracy and practical utility.
arxiv:2408.05233
a ( very ) brief review of topological defects and their possible cosmic roles. i emphasize in particular that superheavy defects needed for structure formation can peacefully coexist with inflation, despite the claims to the contrary which are often made in the literature.
arxiv:astro-ph/9610125
we use 106 pb$^{-1}$ of data collected with the collider detector at fermilab to search for narrow-width, vector particles decaying to a top and an anti-top quark. model independent upper limits on the cross section for narrow, vector resonances decaying to $t\bar{t}$ are presented. at the 95% confidence level, we exclude the existence of a leptophobic $z'$ boson in a model of topcolor-assisted technicolor with mass $m_{z'} < 480$ gev for natural width $\gamma = 0.012\, m_{z'}$, and $m_{z'} < 780$ gev for $\gamma = 0.04\, m_{z'}$.
arxiv:hep-ex/0003005
in this paper, a proof-of-concept study of a 1-bit wideband reconfigurable intelligent surface (ris) comprising planar tightly coupled dipoles (ptcd) is presented. the developed ris operates at subthz frequencies, and a 3-db gain bandwidth of 27.4% with the center frequency at 102 ghz is shown to be obtainable via full-wave electromagnetic simulations. the binary phase shift offered by each ris unit element is enabled by changing the polarization of the reflected wave by $180^\circ$. the proposed ptcd-based ris has a planar configuration with one dielectric layer bonded to a ground plane, and hence it can be fabricated using cost-effective printed circuit board (pcb) technology. we analytically calculate the response of the entire designed ris and show that good agreement is obtained between that result and equivalent full-wave simulations. to efficiently compute the 1-bit ris response for different pointing directions, and thus design a directive beam codebook, we devise a fast approximate beamforming optimization approach, which is compared with time-consuming full-wave simulations. finally, to prove our concept, we present several passive prototypes with frozen beams for the proposed 1-bit wideband ris.
arxiv:2402.08445
we investigate the role of the delta isobar in the reaction $\pi d \to \pi d$ at threshold in chiral effective field theory. we discuss the corresponding power counting and argue that this calculation completes the evaluation of diagrams up to order $(m_\pi/m_n)^{3/2}$, with $m_\pi$ ($m_n$) the pion (nucleon) mass. the net effect of all delta contributions at this order to the pion-deuteron scattering length is $(2.4 \pm 0.4) \times 10^{-3}\, m_\pi^{-1}$.
arxiv:0706.4023
we report a new approach to controllable thermal stimulation of a single living cell and its compartments. the technique is based on the use of a single polycrystalline diamond particle containing silicon-vacancy (siv) color centers. due to the presence of amorphous carbon at its intercrystalline boundaries, such a particle is an efficient light absorber and becomes a local heat source when illuminated by a laser. furthermore, the temperature of such a local heater is tracked by the spectral shift of the zero-phonon line of the siv centers [2]. thus, the diamond particle acts simultaneously as a heater and a thermometer. in the current work, we demonstrate the ability of such a diamond heater-thermometer (dht) to locally alter the temperature, one of the numerous parameters that play a decisive role for living organisms at the nanoscale. in particular, we show that local heating of 11-12 °c relative to the ambient temperature (22 °c) next to individual hela cells and neurons, isolated from the mouse hippocampus, leads to a change in the intracellular distribution of the concentration of free calcium ions. for individual hela cells, a long-term (about 30 s) increase in the integral intensity of fluo-4 nw fluorescence by about three times is observed, which characterizes an increase in the [ca2+]cyt concentration of free calcium in the cytoplasm. heating near mouse hippocampal neurons also caused a calcium surge: an increase in the intensity of fluo-4 nw fluorescence by 30% with a duration of ~0.4 ms.
arxiv:2206.14890
by systematically studying the proton selectivity of free - standing graphene membranes in aqueous solutions we demonstrate that protons are transported by passing through defects. we study the current - voltage characteristics of single - layer graphene grown by chemical vapour deposition ( cvd ) when a concentration gradient of hcl exists across it. our measurements can unambiguously determine that h + ions are responsible for the selective part of the ionic current. by comparing the observed reversal potentials with positive and negative controls we demonstrate that the as - grown graphene is only weakly selective for protons. we use atomic layer deposition to block most of the defects in our cvd graphene. our results show that a reduction in defect size decreases the ionic current but increases proton selectivity.
arxiv:1508.06494
the main aim of this paper is to obtain analytic relativistic anisotropic spherical solutions in the f(r, $\mathcal{t}$) scenario. to do so, we use the modified durgapal-fuloria metric potential, and the isotropic condition is imposed in order to obtain the effective anisotropy factor $\tilde{\delta}$. besides, a notable and viable choice of the f(r, $\mathcal{t}$) gravity formulation is taken, specifically $f(r, \mathcal{t}) = r + 2\chi\mathcal{t}$, where $r$ is the ricci scalar, $\mathcal{t}$ the trace of the energy-momentum tensor and $\chi$ a dimensionless parameter. this choice of $f(r, \mathcal{t})$ function modifies the matter sector only, adding new ingredients to the physical parameters that characterize the model, such as the density and the radial and tangential pressures. moreover, other important quantities are affected, such as the subliminal speeds of the pressure waves in both the radial and transverse directions, and observational parameters, for example the surface redshift, which is related to the total mass $m$ and the radius $r_s$ of the compact object. likewise, the equilibrium of the system via the generalized tolman-oppenheimer-volkoff equation and its stability are also affected. we analyze all the physical and mathematical general requirements of the configuration, taking $m = 1.04\, m_\odot$ and varying $\chi$ from $-0.1$ to $0.1$. it is shown by a graphical procedure that $\chi < 0$ yields a more compact object than $\chi \geq 0$ (where $\chi = 0.0$ corresponds to general relativity) and increases the value of the surface redshift. however, negative values of $\chi$ introduce an attractive (inward) anisotropic force into the system, and the configuration is completely unstable (corroborated employing abreu's criterion). furthermore, the model in einstein gravity presents cracking, while for $\chi > 0$ the system is fully stable.
arxiv:1906.11756
as a particle moves through a fluid, it may generate a laminar wake behind it. in the gauge - string duality, we show that such a diffusion wake is created by a heavy quark moving through a thermal plasma and that it has a universal strength when compared to the total drag force exerted on the quark by the plasma. the universality extends over all asymptotically anti - de sitter supergravity constructions with arbitrary scalar matter. we discuss how these results relate to the linearized hydrodynamic approximation and how they bear on our understanding of di - hadron correlators in heavy ion collisions.
arxiv:0709.1089
theoretical study of heavy ion acceleration from ultrathin (< 200 nm) gold foils irradiated by a short pulse laser is presented. using two-dimensional particle-in-cell simulations, the time history of the laser bullet is examined in order to gain insight into the laser energy deposition and ion acceleration process. for laser pulses with intensity, duration 32 fs, focal spot size 5 μm and energy 27 joules, the calculated reflection, transmission and coupling coefficients from a 20 nm foil are 80%, 5% and 15%, respectively. the conversion efficiency into gold ions is 8%. two highly collimated counter-propagating ion beams have been identified. the forward accelerated gold ions have average and maximum charge-to-mass ratios of 0.25 and 0.3, respectively, maximum normalized energy 25 mev/nucleon and flux. an analytical model was used to determine a range of foil thicknesses suitable for acceleration of gold ions in the radiation pressure acceleration regime and the onset of the target normal sheath acceleration regime. the numerical simulations and analytical model point to at least four technical challenges hindering heavy ion acceleration: low charge-to-mass ratio, limited number of ions amenable to acceleration, delayed acceleration and high reflectivity of the plasma. finally, a regime suitable for heavy ion acceleration has been identified in an alternative approach by analyzing the energy absorption and distribution among participating species and the scaling of conversion efficiency, maximum energy and flux with laser intensity.
arxiv:1511.07518
within the scientific research community, memory information in the brain is commonly believed to be stored in the synapse, a hypothesis famously attributed to psychologist donald hebb. however, there is a growing minority who postulate that memory is stored inside the neuron at the molecular (rna or dna) level, an alternative postulation known as the cell-intrinsic hypothesis, coined by psychologist randy gallistel. in this paper, we review a selection of key experimental evidence from both sides of the argument. we begin with eric kandel's studies on sea slugs, which provided the first evidence in support of the synaptic hypothesis. next, we touch on experiments in mice by john o'keefe (declarative memory and the hippocampus) and joseph ledoux (procedural fear memory and the amygdala). then, we introduce the synapse as the basic building block of today's artificial intelligence neural networks. after that, we describe david glanzman's study on dissociating memory storage and synaptic change in sea slugs, and susumu tonegawa's experiment on reactivating retrograde amnesia in mice using laser. from there, we highlight germund hesslow's experiment on conditioned pauses in ferrets, and beatrice gelber's experiment on conditioning in single-celled organisms without synapses (paramecium aurelia). this is followed by a description of david glanzman's experiment on transplanting memory between sea slugs using rna. finally, we provide an overview of brian dias and kerry ressler's experiment on dna transfer of fear in mice from parents to offspring. we conclude with some potential implications for the wider field of psychology.
arxiv:2112.05362
we analyze the structure of the standard model coupled to gravity with spatial dimensions compactified on a three - torus. we find that there are no stable one - dimensional vacua at zero temperature, although there does exist an unstable vacuum for a particular set of dirac neutrino masses.
arxiv:1106.0890
the phase shift in a neutron interferometer caused by the gravitational field and the rotation of the earth is derived in a unified way from the standpoint of general relativity. general relativistic quantum interference effects in the slowly rotating braneworld, such as the sagnac effect and the phase shift of an interfering particle in a neutron interferometer, are considered. it was found that in the case of the sagnac effect the influence of the brane parameter becomes important due to the fact that the angular velocity of the locally non-rotating observer must be larger than the one in the kerr space-time. in the case of neutron interferometry it is found that, due to the presence of the parameter $q^*$, an additional term in the phase shift of the interfering particle emerges; from the results of recent experiments we have obtained an upper limit for the tidal charge of $q^* \lesssim 10^7\ \mathrm{cm}^2$. finally, as an example, we apply the obtained results to the calculation of the energy level modification of ultra-cold neutrons in the braneworld.
arxiv:0906.5067
we analyze a specific class of random systems that are driven by a symmetric lévy stable noise. in view of the lévy noise sensitivity to the confining "potential landscape" where jumps take place (in other words, to environmental inhomogeneities), the pertinent random motion asymptotically settles at a boltzmann-type equilibrium, represented by a probability density function (pdf) $\rho_*(x) \sim \exp[-\phi(x)]$. since there is no langevin representation of the dynamics in question, our main goal here is to establish the appropriate path-wise description of the underlying jump-type process and then infer the $\rho(x,t)$ dynamics directly from the random path statistics. the a priori given data are the jump transition rates entering the master equation for $\rho(x,t)$ and its target pdf $\rho_*(x)$. we use numerical methods and construct a suitable modification of the gillespie algorithm, originally invented in the chemical kinetics context. the generated sample trajectories show a qualitative typicality, e.g. they display structural features of jumping paths (predominance of small vs large jumps) specific to particular stability indices $\mu \in (0, 2)$.
arxiv:1209.5882
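the "suitable modification of the gillespie algorithm" in the abstract above is not spelled out there; as a rough illustration of the general idea, here is a minimal, hypothetical gillespie-style sampler for a pure-jump process with state-dependent transition rates. the function names and the quadratic confining potential are illustrative assumptions, not the authors' construction:

```python
import numpy as np

def sample_jump_process(x0, rates_fn, targets_fn, t_max, rng):
    """gillespie-style sampling of a pure-jump process.

    rates_fn(x) returns the jump rates toward the candidate states
    targets_fn(x); waiting times are exponential with the total exit
    rate, and the jump target is drawn proportionally to its rate."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_max:
        rates = np.asarray(rates_fn(x), dtype=float)
        total = rates.sum()
        if total == 0.0:  # absorbing state, nothing more can happen
            break
        t += rng.exponential(1.0 / total)
        x = targets_fn(x)[rng.choice(len(rates), p=rates / total)]
        times.append(t)
        states.append(x)
    return times, states

# illustrative example (our assumption): nearest-neighbour jumps on the
# integers with a quadratic "potential landscape" phi(x) = x^2 biasing
# the rates, so the walk is pulled back toward the origin.
phi = lambda x: float(x * x)
targets = lambda x: [x - 1, x + 1]
rates = lambda x: [np.exp(-(phi(y) - phi(x)) / 2.0) for y in targets(x)]

rng = np.random.default_rng(0)
times, states = sample_jump_process(0, rates, targets, t_max=50.0, rng=rng)
```

with these detailed-balance-style rates, a boltzmann-type density proportional to $\exp[-\phi(x)]$ is the natural equilibrium candidate, mirroring the target pdf $\rho_*(x) \sim \exp[-\phi(x)]$ quoted in the abstract.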
we investigate diffusion equations with time - fractional derivatives of space - dependent variable order. we examine the well - posedness issue and prove that the space - dependent variable order coefficient is uniquely determined among other coefficients of these equations, by the knowledge of a suitable time - sequence of partial dirichlet - to - neumann maps.
arxiv:1701.04046
a lattice regularized lax operator for the nonultralocal modified korteweg-de vries (mkdv) equation is proposed at the quantum level, with the basic operators satisfying a $q$-deformed braided algebra. finding further the associated quantum $r$ and $z$-matrices, the exact integrability of the model is proved through the braided quantum yang--baxter equation, a suitably generalized equation for nonultralocal models. using the algebraic bethe ansatz, the eigenvalue problem of the quantum mkdv model is exactly solved and its connection with the spin-$\frac{1}{2}$ xxz chain is established, facilitating the investigation of the corresponding conformal properties.
arxiv:hep-th/9510131
we investigate non-linear scaling relations for two-dimensional gravitational collapse in an expanding background using a 2d treepm code and study the strongly non-linear regime ($\bar\xi \leq 200$) for power law models. evolution of these models is found to be scale-invariant in all our simulations. we find that the stable clustering limit is not reached, but there is a model independent non-linear scaling relation in the asymptotic regime. this confirms results from an earlier study which only probed the mildly non-linear regime ($\bar\xi \leq 40$). the correlation function in the extremely non-linear regime is a less steep function of scale than reported in earlier studies. we show that this is due to coherent transverse motions in massive haloes. we also study density profiles and find that the scatter in the inner and outer slopes is large and that there is no single universal profile that fits all cases. we find that the difference in typical density profiles for different models is smaller than expected from similarity solutions for halo profiles, and transverse motions induced by substructure are a likely reason for this difference being small.
arxiv:astro-ph/0410041
we present a new method for identifying the latent categorization of items based on their rankings. complementing a recent work that uses a dirichlet prior on preference vectors and variational inference, we show that this problem can be effectively dealt with using existing community detection algorithms, with the communities corresponding to item categories. in particular, we convert the bipartite ranking data to a unipartite graph of item affinities and apply community detection algorithms. in this context we modify an existing algorithm, namely the label propagation algorithm, to a variant that uses the distance between the nodes for weighting the label propagation, in order to identify the categories. we propose and analyze a synthetic ordinal ranking model and show its relation to the recently much-studied stochastic block model. we test our algorithms on synthetic data and compare performance with several popular community detection algorithms. we also test the method on real data sets of movie categorization from the movielens database. in all of the cases our algorithm is able to identify the categories for a suitable choice of tuning parameter.
arxiv:1609.09544
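the distance-weighted label propagation variant above is described only at a high level; a minimal sketch of weight-aware label propagation on an item-affinity matrix might look as follows. the function name and tie-breaking are our assumptions, and the paper's exact ranking-distance weighting is not reproduced here:

```python
import numpy as np

def weighted_label_propagation(affinity, rng, n_sweeps=50):
    """label propagation on a weighted affinity matrix.

    every node starts in its own community; on each sweep, nodes
    (visited in random order) adopt the label carrying the largest
    total incident edge weight, so heavier edges dominate the vote."""
    n = affinity.shape[0]
    labels = np.arange(n)
    for _ in range(n_sweeps):
        changed = False
        for i in rng.permutation(n):
            votes = {}
            for j in np.flatnonzero(affinity[i]):
                votes[labels[j]] = votes.get(labels[j], 0.0) + affinity[i, j]
            if votes:
                best = max(votes, key=votes.get)
                if best != labels[i]:
                    labels[i], changed = best, True
        if not changed:  # no label moved in a full sweep: converged
            break
    return labels

# illustrative affinity graph (our toy example): two 4-cliques with
# no cross edges, so two communities should be recovered.
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
np.fill_diagonal(A, 0.0)
labels = weighted_label_propagation(A, np.random.default_rng(1))
```

on this toy graph each block collapses to a single label; in the paper's setting the affinity weights would instead be derived from distances between the items' rankings.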
a set of n boxes, located on the vertices of a hypergraph g, contain known but different rewards. a searcher opens all the boxes in some hyperedge of g with the objective of collecting the maximum possible total reward. some of the boxes, however, are booby trapped. if the searcher opens a booby trapped box, the search ends and she loses all her collected rewards. we assume the number k of booby traps is known, and we model the problem as a zero - sum game between the maximizing searcher and a minimizing hider, where the hider chooses k boxes to booby trap and the searcher opens all the boxes in some hyperedge. the payoff is the total reward collected by the searcher. this model could reflect a military operation in which a drone gathers intelligence from guarded locations, and a booby trapped box being opened corresponds to the drone being destroyed or incapacitated. it could also model a machine scheduling problem, in which rewards are obtained from successfully processing jobs but the machine may crash. we solve the game when g is a 1 - uniform hypergraph ( the hyperedges are all singletons ), so the searcher can open just 1 box. when g is the complete hypergraph ( containing all possible hyperedges ), we solve the game in a few cases : ( 1 ) same reward in each box, ( 2 ) k = 1, and ( 3 ) n = 4 and k = 2. the solutions to these few cases indicate that a general simple, closed form solution to the game appears unlikely.
arxiv:1903.12231
the paper extends in two directions the work of \cite{plackett77}, who studied how, in a $2 \times 2$ table, the likelihood of the column totals depends on the odds ratio. first, we study the marginal likelihood of a single $r \times c$ frequency table when only the marginal frequencies are observed, and then consider a collection of, say, $s$ $r \times c$ tables, where only the row and column totals can be observed, which is the basic framework in applications of ecological inference. in the simpler context, we derive the likelihood equations and show that the likelihood has a collection of local maxima which, after a suitable rearrangement of the row and column categories, exhibit the strongest positive association compatible with the marginals, a kind of paradox, considering that the available data are so poor. next, we derive the likelihood equations for the marginal likelihood of a collection of two-way tables, under the assumption that they share the same row conditional distributions, and derive a necessary condition for the information matrix to be well defined. we also describe a fisher-scoring algorithm for maximizing the marginal likelihood which, however, can be used only if the number of available replications reaches a given threshold.
arxiv:2502.20177
when the individual studies assembled for a meta-analysis report means ($\mu_c$, $\mu_t$) for their treatment (t) and control (c) arms, but those data are on different scales or come from different instruments, the customary measure of effect is the standardized mean difference (smd). the smd is defined as the difference between the means in the treatment and control arms, standardized by the assumed common standard deviation, $\sigma$. however, if the variances in the two arms differ, there is no consensus on a definition of smd. thus, we propose a new effect measure, the difference of standardized means (dsm), defined as $\delta = \mu_t/\sigma_t - \mu_c/\sigma_c$. the estimated dsm can easily be used as an effect measure in standard meta-analysis. for random-effects meta-analysis of dsm, we introduce new point and interval estimators of the between-studies variance ($\tau^2$) based on the $q$ statistic with effective-sample-size weights, $q_f$. we study, by simulation, bias and coverage of these new estimators of $\tau^2$ and related estimators of $\delta$. for comparison, we also study bias and coverage of well-known estimators based on the $q$ statistic with inverse-variance weights, $q_{iv}$, such as the mandel-paule, dersimonian-laird, and restricted-maximum-likelihood estimators.
arxiv:2304.07385
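the dsm defined above is a one-line formula; as a small numerical illustration (the function names are ours, and the pooled-sd smd shown for contrast is the textbook version, not anything specific to this paper), one can compare the two measures directly:

```python
import math

def smd(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """classic standardized mean difference using the pooled sd,
    which implicitly assumes a common standard deviation."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / sd_pooled

def dsm(mean_t, mean_c, sd_t, sd_c):
    """difference of standardized means:
    delta = mu_t / sigma_t - mu_c / sigma_c."""
    return mean_t / sd_t - mean_c / sd_c

# with equal arm sds the two measures coincide ...
equal = (smd(12.0, 10.0, 2.0, 2.0, 50, 50), dsm(12.0, 10.0, 2.0, 2.0))
# ... but with unequal sds they can even disagree in sign:
unequal = (smd(12.0, 10.0, 4.0, 2.0, 50, 50), dsm(12.0, 10.0, 4.0, 2.0))
```

the sign disagreement in the unequal-sd case (here dsm = 12/4 − 10/2 = −2 while the pooled-sd smd stays positive) illustrates why a separate definition matters when the two arms have different spreads.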
we define the notion of a formal connection for a smooth family of star products with fixed underlying symplectic structure. such a formal connection allows one to relate star products at different points in the family. this generalizes the formal hitchin connection introduced by the first author. we establish a necessary and sufficient condition that guarantees the existence of a formal connection, and we describe the space of formal connections for a family as an affine space modelled by the derivations of the star products. moreover we show that if the parameter space has trivial first cohomology group any two flat formal connections are related by an automorphism of the family of star products.
arxiv:1410.1641
the effect of coordination on transport is investigated theoretically using random networks of springs as model systems. an effective medium approximation is made to compute the density of states of the vibrational modes, their energy diffusivity (a spectral measure of transport) and their spatial correlations as the network coordination $z$ is varied. critical behaviors are obtained as $z \to z_c$ where these networks lose rigidity. a sharp cross-over from a regime where modes are plane-wave-like toward a regime of extended but strongly-scattered modes occurs at some frequency $\omega^* \sim z - z_c$, which does not correspond to the ioffe-regel criterion. above $\omega^*$ both the density of states and the diffusivity are nearly constant. these results agree remarkably with recent numerical observations of repulsive particles near the jamming threshold \cite{ning}. the analysis further predicts that the length scale characterizing the correlation of displacements of the scattered modes decays as $1/\sqrt{\omega}$ with frequency, whereas for $\omega \ll \omega^*$ rayleigh scattering is found with a scattering length $l_s \sim (z - z_c)^3/\omega^4$. it is argued that this description applies to silica glass, where it compares well with thermal conductivity data, and to transverse ultrasound propagation in granular matter.
arxiv:0909.3030
we construct an example of a continuous centered random process whose finite-dimensional distributions have light tails but whose maximum distribution has a (relatively) heavy tail. the apparatus for comparing the tails consists of embedding results for orlicz and grand lebesgue spaces (gls).
arxiv:1508.05646
one and two loop self - energies are worked out explicitly for a heavy scalar field interacting weakly with a light self - interacting scalar field at finite temperature. the ring / daisy diagrams and a set of necklace diagrams can be summed simultaneously. this simple model serves to illustrate the connection between multi - loop self - energy diagrams and multiple scattering in a medium.
arxiv:hep-th/0103065
when the agent ' s observations or interactions are delayed, classic reinforcement learning tools usually fail. in this paper, we propose a simple yet new and efficient solution to this problem. we assume that, in the undelayed environment, an efficient policy is known or can be easily learned, but the task may suffer from delays in practice and we thus want to take them into account. we present a novel algorithm, delayed imitation with dataset aggregation ( dida ), which builds upon imitation learning methods to learn how to act in a delayed environment from undelayed demonstrations. we provide a theoretical analysis of the approach that will guide the practical design of dida. these results are also of general interest in the delayed reinforcement learning literature by providing bounds on the performance between delayed and undelayed tasks, under smoothness conditions. we show empirically that dida obtains high performances with a remarkable sample efficiency on a variety of tasks, including robotic locomotion, classic control, and trading.
arxiv:2205.05569
red and blue galaxies are traditionally classified using specific cuts in colour or other galaxy properties, supported by empirical arguments. the vagueness associated with such cuts is likely to introduce significant contamination into these samples. fuzzy sets are vague-boundary sets which can efficiently capture the classification uncertainty in the absence of any precise boundary. we propose a method for classifying galaxies according to their colours using fuzzy set theory. we use data from the sdss to construct a fuzzy set for red galaxies, with its members having different degrees of 'redness'. we show that the fuzzy sets for the blue and green galaxies can be obtained from it using different fuzzy operations. we also explore the possibility of using fuzzy relations to study the relationship between different galaxy properties, and discuss the strengths and limitations of the approach.
arxiv:2005.11678
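the fuzzy operations mentioned in the abstract above can be sketched concretely. this is an illustrative toy, not the authors' code: the sigmoid membership function, the u-r colour threshold of 2.2, and the steepness are assumed values chosen only to show how a "red" set, its complement ("blue"), and an intersection-based "green valley" set fit together.

```python
import math

def mu_red(u_r, threshold=2.2, steepness=5.0):
    """degree of membership in the fuzzy set of red galaxies, in [0, 1].
    threshold and steepness are illustrative assumptions, not sdss-fitted values."""
    return 1.0 / (1.0 + math.exp(-steepness * (u_r - threshold)))

def mu_blue(u_r):
    """standard fuzzy complement: blue = not red."""
    return 1.0 - mu_red(u_r)

def mu_green(u_r):
    """a 'green valley' set via fuzzy intersection (min) of red and blue,
    rescaled so the valley peaks near 1 at the colour threshold."""
    return 2.0 * min(mu_red(u_r), mu_blue(u_r))
```

by construction every galaxy has a redness degree rather than a hard label, and the three memberships are mutually consistent (red + blue = 1 everywhere).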
3d dense captioning is a task to localize objects in a 3d scene and generate descriptive sentences for each object. recent approaches in 3d dense captioning have adopted transformer encoder - decoder frameworks from object detection to build an end - to - end pipeline without hand - crafted components. however, these approaches struggle with contradicting objectives where a single query attention has to simultaneously view both the tightly localized object regions and contextual environment. to overcome this challenge, we introduce sia ( see - it - all ), a transformer pipeline that engages in 3d dense captioning with a novel paradigm called late aggregation. sia simultaneously decodes two sets of queries - context query and instance query. the instance query focuses on localization and object attribute descriptions, while the context query versatilely captures the region - of - interest of relationships between multiple objects or with the global scene, then aggregated afterwards ( i. e., late aggregation ) via simple distance - based measures. to further enhance the quality of contextualized caption generation, we design a novel aggregator to generate a fully informed caption based on the surrounding context, the global environment, and object instances. extensive experiments on two of the most widely - used 3d dense captioning datasets demonstrate that our proposed method achieves a significant improvement over prior methods.
arxiv:2408.07648
the colombeau algebra of generalized functions allows products of distributions to be carried out without restriction. we analyze this operation from a microlocal point of view, deriving a general inclusion relation for wave front sets of products in the algebra. furthermore, we give explicit examples showing that the given result is optimal, i.e. its assumptions cannot be weakened. finally, we discuss the interrelation of these results with the concept of pullback under smooth maps.
arxiv:math/0008206
the interaction between tannin and bovine serum albumin (bsa) was examined by fluorescence quenching. the quenching of bsa by tannin was found to be static, with a binding coefficient of one. the interaction force between tannin and bsa was hydrophobic.
arxiv:1610.03208
a black hole can emit radiation called hawking radiation. such radiation seen by an observer outside the black hole differs from the original radiation near the horizon by the so-called "greybody factor". in this paper, bounds on the greybody factors for reissner-nordström black holes are obtained. these bounds can be derived by using 2 x 2 transfer matrices. it is found that the charges of black holes act as good barriers.
arxiv:1301.7527
in this paper, we combine tools from pluripotential theory and commutative algebra to study singularity invariants of plurisubharmonic functions. we establish several relationships between the singularity invariants of plurisubharmonic functions and those of holomorphic functions. these results yield a sharp lower bound for the log canonical threshold of a plurisubharmonic function. our bound simultaneously improves upon the main result of demailly and pham ( acta math. 212 : 1 - - 9, 2014 ), the classical result of skoda ( bull. soc. math. france 100 : 353 - - 408, 1972 ), and the lower estimate of t. de fernex, l. ein and m. musta \ c { t } \ v { a } ( math. res. lett. 10 : 219 - - 236, 2003 ), which has played a crucial role in recent developments in birational geometry. finally, we present some applications of our results to the study of singularity invariants of complex spaces.
arxiv:2304.02238
we study the euler--poisson equation system in the case of cylindrical symmetry with a von neumann--sedov--taylor type self-similar ansatz and present scaling solutions. we analyze the scenario governed by chaplygin's equation of state, which has historically been studied as a unifying dark-fluid framework for dark matter and dark energy.
arxiv:2503.19552
we construct a classification model that predicts whether an earthquake with magnitude above a threshold will take place at a given location in a time range of 30-180 days from a given moment. a common approach is to use expert forecasts based on features like region-time-length (rtl) characteristics. the proposed approach uses machine learning on top of multiple rtl features to take into account effects at various scales and to improve prediction accuracy. for historical data on japan earthquakes in 1992-2005, with predictions at the locations given in this database, the best model achieves precision up to ~0.95 and recall up to ~0.98.
arxiv:1905.10805
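the shape of the pipeline above (rtl-style features at several scales feeding a binary classifier scored by precision and recall) can be sketched as follows. the rtl weighting form and the magnitude-length scaling constants are assumptions made for illustration; the study's actual feature definitions and data are not reproduced here.

```python
import math

def rtl_feature(events, site, t_now, r0, t0):
    """region-time-length style premonitory feature at one scale (r0, t0):
    sum over past events of exp(-r/r0) * exp(-(t_now - t)/t0) * rupture_length,
    with rupture length growing with magnitude (assumed empirical-style scaling).
    events: iterable of (x, y, t, magnitude)."""
    f = 0.0
    for (x, y, t, mag) in events:
        r = math.hypot(x - site[0], y - site[1])
        length = 10 ** (0.5 * mag - 1.8)  # illustrative scaling constant
        f += math.exp(-r / r0) * math.exp(-(t_now - t) / t0) * length
    return f

def precision_recall(y_true, y_pred):
    """the two metrics the abstract reports for the binary alarm."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fp), tp / (tp + fn)
```

in the paper's setting, rtl_feature would be evaluated for several (r0, t0) pairs to form the multi-scale feature vector passed to the learned classifier.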
the use of an independent affine connection is what initially distinguishes metric-affine gravity from einstein's general relativity, producing a theory with 10 + 64 unknowns. we write down the yang-mills action for the affine connection and produce the yang-mills equation and the so-called complementary yang-mills equation by independently varying with respect to the connection and the metric, respectively. we call this theory the yang-mielke theory of gravity. we construct explicit spacetimes with a pp-metric and purely axial torsion, and show that they represent a solution of yang-mills theory. finally, we compare these spacetimes to existing solutions of metric-affine gravity and present future research possibilities.
arxiv:1509.07536
as a generalized uncertainty principle (gup) leads to effects of a minimal length of the order of the planck scale and to uv/ir mixing, some significant physical concepts and quantities are modified or corrected correspondingly. on the one hand, we derive the maximally localized states, i.e. the physical states displaying the minimal length uncertainty associated with a new gup proposed in our previous work. on the other hand, in the framework of this new gup we calculate quantum corrections to the thermodynamic quantities of the schwarzschild black hole, such as the hawking temperature, the entropy, and the heat capacity, and give a remnant mass of the black hole at the end of the evaporation process. moreover, we compare our results with those obtained in the frameworks of several other gups. in particular, we observe a significant difference between the situations with and without the uv/ir mixing effect in the quantum corrections to the evaporation rate and the decay time. that is, the decay time can be greatly prolonged in the former case, which implies that the quantum correction from the uv/ir mixing effect may give rise to a radical rather than a tiny influence on the hawking radiation.
arxiv:1410.4115
this paper presents a new open logistics interconnection ( noli ) reference model for a physical internet, inspired by the open systems interconnection ( osi ) reference model for data networks. this noli model is compared to the osi model, and to the transmission control protocol / internet protocol ( tcp / ip ) model of internet. it is also compared to the oli model for a physical internet proposed by montreuil. the main differences between the presented noli model and all the other models named above are in the appearance of definitions of physical objects in different layers and not just the lowest one. also, the noli model we present locates the containerization and de - containerization operations in the topmost layer, and not in the layer below as does the oli model. finally, the noli model is closer to the tcp / ip and osi models than the oli model, keeping the integrity of the link layer that the oli model divides in two layers, and keeping separate the session and transport osi layers that the oli model unites in just one layer.
arxiv:1904.05069
the sedimentation of a heavy stokes particle in a laminar plane or axisymmetric flow is investigated by means of asymptotic methods. we focus on the occurrence of stommel ' s retention zones, and on the splitting of their separatrices. the goal of this paper is to analyze under which conditions these retention zones can form, and under which conditions they can break and induce chaotic particle settling. the terminal velocity of the particle in still fluid is of the order of the typical velocity of the flow, and the particle response time is much smaller than the typical flow time - scale. it is observed that if the flow is steady and has an upward streamline where the vertical velocity has a strict local maximum, then inertialess particle trajectories can take locally the form of elliptic stommel cells, provided the particle terminal velocity is close enough to the local peak flow velocity. these structures only depend on the local flow topology and do not require the flow to have closed streamlines or stagnation points. if, in addition, the flow is submitted to a weak time - periodic perturbation, classical expansions enable one to write the particle dynamics as a hamiltonian system with one degree of freedom, plus a perturbation containing both the dissipative terms of the particle motion equation and the flow unsteadiness. melnikov ' s method therefore provides accurate criteria to predict the splitting of the separatrices of the elliptic cell mentioned above, leading to chaotic particle trapping and chaotic settling. the effect of particle inertia and flow unsteadiness on the occurrence of simple zeros in melnikov ' s function is discussed. particle motion in a plane cellular flow and in a vertical pipe is then investigated to illustrate these results.
arxiv:1003.4157
in this paper we prove that any full perazzo algebra $a_f$, whose macaulay dual generator is a perazzo form $f \in k[x_0, \dots, x_n, u_1, \dots, u_m]_d$ with $n + 1 = \binom{d+m-2}{m-1}$, is the doubling of a 0-dimensional scheme in $\mathbb{P}^{n+m}$, and we compute the graded betti numbers of a minimal free resolution of $a_f$.
arxiv:2501.12303
recent advances in solving ordinary differential equations (odes) with neural networks have been remarkable. neural networks excel at serving as trial functions and approximating solutions within functional spaces, aided by gradient backpropagation algorithms. however, challenges remain in solving complex odes, including high-order and nonlinear cases, emphasizing the need for improved efficiency and effectiveness. traditional methods have typically relied on integrating established knowledge to improve problem-solving efficiency. in contrast, this study takes a different approach by introducing a new neural network architecture for constructing trial functions, known as the ratio net. this architecture draws inspiration from rational-fraction polynomial approximation, specifically the padé approximant. empirical trials demonstrate that the proposed method exhibits higher efficiency compared to existing approaches, including polynomial-based and multilayer perceptron (mlp) neural network-based methods. the ratio net holds promise for advancing the efficiency and effectiveness of solving differential equations.
arxiv:2105.11309
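the rational-fraction idea behind the ratio net can be illustrated with the textbook example it draws on rather than the trained network itself: the classical [1/1] padé approximant of exp(x) about 0 is (1 + x/2)/(1 - x/2), and near the origin it beats the taylor polynomial with the same number of coefficients. this is a standalone numerical check, not the paper's architecture.

```python
import math

def pade_exp_11(x):
    """[1/1] pade approximant of exp(x) about 0: a ratio of two linear polynomials."""
    return (1.0 + x / 2.0) / (1.0 - x / 2.0)

def taylor_exp_1(x):
    """degree-1 taylor polynomial of exp(x), same number of free coefficients."""
    return 1.0 + x

# compare approximation errors at a sample point near the origin
x = 0.5
err_pade = abs(pade_exp_11(x) - math.exp(x))
err_taylor = abs(taylor_exp_1(x) - math.exp(x))
```

the ratio-of-polynomials form captures curvature that a polynomial of the same coefficient budget cannot, which is the intuition the ratio net generalizes to learnable trial functions.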
human label variation ( hlv ) challenges the standard assumption that a labelled instance has a single ground truth, instead embracing the natural variation in human annotation to train and evaluate models. while various training methods and metrics for hlv have been proposed, it is still unclear which methods and metrics perform best in what settings. we propose new evaluation metrics for hlv leveraging fuzzy set theory. since these new proposed metrics are differentiable, we then in turn experiment with employing these metrics as training objectives. we conduct an extensive study over 6 hlv datasets testing 14 training methods and 6 evaluation metrics. we find that training on either disaggregated annotations or soft labels performs best across metrics, outperforming training using the proposed training objectives with differentiable metrics. we also show that our proposed soft metric is more interpretable and correlates best with human preference.
arxiv:2502.01891
we present a detection of a bright burst from frb 20201124a, one of the most active repeating frbs, based on s-band observations with the 64-m radio telescope at the usuda deep space center / jaxa. this is the first frb observed using a japanese facility. our detection at 2 ghz in february 2022 is at the highest frequency for this frb, and the fluence of $>$189 jy ms makes it one of the brightest bursts from this frb source. we place an upper limit on the spectral index $\alpha = -2.14$ from the detection in the s band and the simultaneous non-detection in the x band. we compare the event rate of the detected burst with those of previous research, and suggest that the power law of the luminosity function might be broken at lower fluence, and that the fluences of bright frbs follow a power-law distribution in frequency up to above 2 ghz. in addition, we show that the energy density of the burst detected in this work is comparable to the bright population of one-off frbs. we propose that repeating frbs can be as bright as one-off frbs, and that since only their brightest bursts could be detected, some repeating frbs might intrinsically have been classified as one-off frbs.
arxiv:2211.13835
we classify bions in the grassmann $ gr _ { n _ { \ rm f }, n _ { \ rm c } } $ sigma model ( including the $ { \ mathbb c } p ^ { n _ { \ rm f } - 1 } $ model ) on $ { \ mathbb r } ^ { 1 } \ times s ^ { 1 } $ with twisted boundary conditions. we formulate these models as $ u ( n _ { \ rm c } ) $ gauge theories with $ n _ { \ rm f } $ flavors in the fundamental representations. these theories can be promoted to supersymmetric gauge theories and further can be embedded into d - brane configurations in type ii superstring theories. we focus on specific configurations composed of multiple fractional instantons, termed neutral bions and charged bions, which are identified as perturbative infrared renormalons by \ " { u } nsal and his collaborators. we show that d - brane configurations as well as the moduli matrix offer a very useful tool to classify all possible bion configurations in these models. contrary to the $ { \ mathbb c } p ^ { n _ { \ rm f } - 1 } $ model, there exist bogomol ' nyi - prasad - sommerfield ( bps ) fractional instantons with topological charge greater than unity ( of order $ n _ { \ rm c } $ ) that cannot be reduced to a composite of an instanton and fractional instantons. as a consequence, we find that the grassmann sigma model admits neutral bions made of bps and anti - bps fractional instantons each of which has topological charge greater ( less ) than one ( minus one ), that are not decomposable into instanton anti - instanton and the rests. the $ { \ mathbb c } p ^ { n _ { \ rm f } - 1 } $ model is found to have no charged bions. in contrast, we find that the grassmann sigma model admits charged bions, for which we construct exact non - bps solutions of the field equations.
arxiv:1409.3444
using llms (large language models) in conjunction with external documents has made rag (retrieval-augmented generation) an essential technology. numerous techniques and modules for rag are being researched, but their performance can vary across different datasets, and finding rag modules that perform well on a specific dataset is challenging. in this paper, we propose the autorag framework, which automatically identifies suitable rag modules for a given dataset. autorag explores and approximates the optimal combination of rag modules for the dataset. additionally, we share the results of optimizing a dataset using autorag. all experimental results and data are publicly available through our github repository https://github.com/marker-inc-korea/autorag_aragog_paper.
arxiv:2410.20878
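the combinatorial search the abstract describes can be sketched in its simplest exhaustive form: enumerate the candidate modules per pipeline stage, score each full combination on the dataset, and keep the best. the stage names, module names, and scoring function below are placeholders for illustration, not autorag's actual module registry or search strategy.

```python
from itertools import product

def best_pipeline(stages, evaluate):
    """exhaustively search module combinations.
    stages: dict mapping stage name -> list of candidate modules.
    evaluate: callable taking a {stage: module} config, returning a score
    (higher is better, e.g. retrieval or answer quality on the dataset)."""
    best_cfg, best_score = None, float("-inf")
    for combo in product(*stages.values()):
        cfg = dict(zip(stages.keys(), combo))
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

since the number of combinations grows multiplicatively with stages, a practical framework would approximate this search (e.g. stage-by-stage greedy selection) rather than enumerate it, which is the "explores and approximates" aspect mentioned above.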
we study the diametric problem ( i. e., optimal anticodes ) in the space of permutations under the ulam distance. that is, let $ s _ n $ denote the set of permutations on $ n $ symbols, and for each $ \ sigma, \ tau \ in s _ n $, define their ulam distance as the number of distinct symbols that must be deleted from each until they are equal. we obtain a near - optimal upper bound on the size of the intersection of two balls in this space, and as a corollary, we prove that a set of diameter at most $ k $ has size at most $ 2 ^ { k + c k ^ { 2 / 3 } } n! / ( n - k )! $, compared to the best known construction of size $ n! / ( n - k )! $. we also prove that sets of diameter $ 1 $ have at most $ n $ elements.
arxiv:2403.02276
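the ulam distance defined in the abstract above can be computed exactly: for permutations it equals n minus the length of their longest common subsequence, and the lcs of two permutations reduces to a longest increasing subsequence after relabelling one by the positions in the other (patience sorting, O(n log n)). this is the standard computation, shown here as a self-contained sketch.

```python
from bisect import bisect_left

def ulam_distance(sigma, tau):
    """number of symbols that must be deleted from each of the two
    permutations (of the same symbol set) until they are equal."""
    pos = {v: i for i, v in enumerate(sigma)}
    seq = [pos[v] for v in tau]          # tau relabelled by sigma's positions
    piles = []                           # patience sorting: piles[i] = smallest
    for x in seq:                        # tail of an increasing subseq of len i+1
        i = bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(sigma) - len(piles)       # n - LIS = n - LCS
```

for example, sigma = (1,2,3) and tau = (3,1,2) agree after deleting the single symbol 3 from each, so their ulam distance is 1.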
the mathematical model of orthodox quantum mechanics has been critically examined and some of its deficiencies have been summarized. a model based on an extended hilbert space and free of these shortcomings has been proposed; parameters until now denoted as "hidden" have been involved. some earlier arguments against hidden-variable theories have been shown to be false, too. in the well-known einstein-bohr controversy, einstein has been shown to be right. the extended model also seems to be strongly supported by the polarization experiments performed by us ten years ago.
arxiv:quant-ph/0501111
astrophysical sources of relativistic jets or outflows, such as gamma - ray bursts ( grbs ), active galactic nuclei ( agn ) or micro - quasars, often show strong time variability. despite such impulsive behavior, most models of these sources assume a steady state for simplicity. here i consider a time - dependent outflow that is initially highly magnetized and divided into many well - separated sub - shells, as it experiences impulsive magnetic acceleration and interacts with the external medium. in agn the deceleration by the external medium is usually unimportant and most of the initial magnetic energy is naturally converted into kinetic energy, leading to efficient dissipation in internal shocks as the sub - shells collide. such efficient low - magnetization internal shocks can also naturally occur in grbs, where the deceleration by the external medium can be important. a strong low - magnetization reverse shock can develop, and the initial division into sub - shells allows it to be relativistic and its emission to peak on the timescale of the prompt grb duration ( which is not possible for a single shell ). sub - shells also enable the outflow to reach much higher lorentz factors that help satisfy existing constraints on grbs from intrinsic pair opacity and from the afterglow onset time.
arxiv:1109.5315
human papillomavirus infection is the most common sexually transmitted infection, and causes serious complications such as cervical cancer in vulnerable female populations in regions such as east africa. due to the scarcity of empirical data about sexual relationships in varying demographics, computationally modelling the underlying sexual contact networks is important to understand human papillomavirus infection dynamics and prevention strategies. in this work we present seconet, a heterosexual contact network growth model for human papillomavirus disease simulation. the growth model consists of three mechanisms that closely imitate real - world relationship forming and discontinuation processes in sexual contact networks. we demonstrate that the networks grown from this model are scale - free, as are the real world sexual contact networks, and we demonstrate that the model can be calibrated to fit different demographic contexts by using a range of parameters. we also undertake disease dynamics analysis of human papillomavirus infection using a compartmental epidemic model on the grown networks. the presented seconet growth model is useful to computational epidemiologists who study sexually transmitted infections in general and human papillomavirus infection in particular.
arxiv:2310.08868
neural architecture search ( nas ) methods have been proposed to release human experts from tedious architecture engineering. however, most current methods are constrained in small - scale search due to the issue of computational resources. meanwhile, directly applying architectures searched on small datasets to large datasets often bears no performance guarantee. this limitation impedes the wide use of nas on large - scale tasks. to overcome this obstacle, we propose an elastic architecture transfer mechanism for accelerating large - scale neural architecture search ( eat - nas ). in our implementations, architectures are first searched on a small dataset, e. g., cifar - 10. the best one is chosen as the basic architecture. the search process on the large dataset, e. g., imagenet, is initialized with the basic architecture as the seed. the large - scale search process is accelerated with the help of the basic architecture. what we propose is not only a nas method but a mechanism for architecture - level transfer. in our experiments, we obtain two final models eatnet - a and eatnet - b that achieve competitive accuracies, 74. 7 % and 74. 2 % on imagenet, respectively, which also surpass the models searched from scratch on imagenet under the same settings. for the computational cost, eat - nas takes only less than 5 days on 8 titan x gpus, which is significantly less than the computational consumption of the state - of - the - art large - scale nas methods.
arxiv:1901.05884
within the standard v - a theory of weak interactions, quantum electrodynamics ( qed ) and the linear sigma - model ( lsm ) of strong low - energy hadronic interactions we analyse gauge and infrared properties of hadronic structure of the neutron and proton in the neutron beta decay to leading order in the large nucleon mass expansion. we show that the complete set of feynman diagrams describing radiative corrections of order o ( \ alpha / \ pi ), induced by hadronic structure of the nucleon, to the rate of the neutron beta decay is gauge non - invariant and unrenormalisable. we show that a gauge non - invariant contribution does not depend on the electron energy in agreement with sirlin ' s analysis of contributions of strong low - energy interactions ( phys. rev. 164, 1767 ( 1967 ) ). we show that infrared divergent and dependent on the electron energy contributions from the neutron radiative beta decay and neutron beta decay, caused by hadronic structure of the nucleon, are cancelled in the neutron lifetime. nevertheless, we find that divergent contributions of virtual photon exchanges to the neutron lifetime, induced by hadronic structure of the nucleon, are unrenormalisable even formally. such an unrenormalizability can be explained by the fact that the effective v - a vertex of hadron - lepton current - current interactions is not a vertex of the combined quantum field theory including qed and lsm, which are renormalizable theories.
arxiv:1806.08699
we present a deperturbation analysis of the spin - orbit coupled $ \ rm a ^ 1 \ sigma ^ + $ and $ \ rm b ^ 3 \ pi _ { 0 ^ + } $ states of lirb based on the rovibrational energy levels observed previously by photoassociation spectroscopy in bosonic $ ^ 7 $ li $ ^ { 85 } $ rb molecule. using the genetic algorithm, we fit the potential energy curves of the $ \ rm a ^ 1 \ sigma ^ + $ state and the $ \ rm b ^ 3 \ pi $ state into point - wise form. we then fit these point - wise potentials along with the spin - orbit coupling into expanded morse oscillator functional form and optimise analytical parameters based on the experimental data. from the fitted results, we calculate the transition dipole moment matrix elements for transitions from the rovibrational levels of the coupled $ \ rm a ^ 1 \ sigma ^ + $ - $ \ rm b ^ 3 \ pi _ { 0 ^ + } $ state to the feshbach state and the absolute rovibrational ground state for fermionic $ ^ 6 $ li $ ^ { 87 } $ rb molecule. based on the calculated transition dipole moment matrix elements, several levels of the coupled $ \ rm a ^ 1 \ sigma ^ + $ - $ \ rm b ^ 3 \ pi _ { 0 ^ + } $ state are predicted to be suitable as the intermediate state for stimulated raman adiabatic passage transfer from the feshbach state to the absolute rovibrational ground state. in addition, we also provide a similar estimation for $ { \ rm b } ^ 1 \ pi $ - $ { \ rm c } ^ 3 \ sigma _ 1 ^ + $ - $ { \ rm b } ^ 3 \ pi _ 1 $ state based on available $ ab \ initio $ interaction potentials.
arxiv:2406.13157
effective field theory ( eft ) provides a model - independent framework for interpreting the results of dark matter ( dm ) direct detection experiments. in this study, we demonstrate that the two fermionic dm - quark tensor operators, $ ( \ bar { \ chi } i \ sigma ^ { \ mu \ nu } \ gamma ^ 5 \ chi ) ( \ bar { q } \ sigma _ { \ mu \ nu } q ) $ and $ ( \ bar { \ chi } \ sigma ^ { \ mu \ nu } \ chi ) ( \ bar { q } \ sigma _ { \ mu \ nu } q ) $, can contribute to the dm electric and magnetic dipole moments via nonperturbative qcd effects, in addition to the well - studied contact dm - nucleon operators. we then investigate the constraints on these two operators by considering both the contact and the dipole contributions using the xenon1t nuclear recoil and migdal effect data. we also recast other existing bounds on the dm dipole operators, derived from electron and nuclear recoil measurements in various direct detection experiments, as constraints on the two tensor operators. for $ m _ \ chi \ lesssim 1 \, \ rm gev $, our results significantly extend the reach of constraints on the dm - quark tensor operators to masses as low as $ 5 \, \ rm mev $, with the bound exceeding that obtained by the migdal effect with only contact interactions by an order of magnitude or so. in particular, for the operator $ ( \ bar { \ chi } \ sigma ^ { \ mu \ nu } i \ gamma _ 5 \ chi ) ( \ bar { q } \ sigma _ { \ mu \ nu } q ) $ with dm mass $ m _ \ chi \ gtrsim 10 \, \ rm gev $, the latest pandax constraint on the dm electric dipole moment puts more stringent bounds than the previous direct detection limit.
arxiv:2401.05005
in this paper a class of discrete optimization problems with uncertain costs is discussed. the uncertainty is modeled by introducing a scenario set containing a finite number of cost scenarios. a probability distribution in the scenario set is available. in order to choose a solution the weighted owa criterion ( wowa ) is applied. this criterion allows decision makers to take into account both probabilities for scenarios and the degree of pessimism / optimism. in this paper the complexity of the considered class of discrete optimization problems is described and some exact and approximation algorithms for solving it are proposed. an application to a selection problem, together with results of computational tests are shown.
arxiv:1504.07863
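the wowa criterion applied in the abstract above can be sketched concretely. one standard formulation stretches the owa weights over the probability axis via a piecewise-linear quantifier w* interpolating the cumulative owa weights; scenario costs are then aggregated in descending order, so the owa weight vector encodes the degree of pessimism (weight on the worst scenarios). the convention of sorting costs descending and the toy numbers are assumptions for illustration.

```python
def wowa(costs, p, w):
    """weighted owa of scenario costs.
    p: scenario probabilities (sum to 1); w: owa weights (sum to 1),
    w[0] weighting the worst outcome (pessimism) under descending sort."""
    n = len(costs)
    # piecewise-linear quantifier through the cumulative owa weights
    knots = [(i / n, sum(w[:i])) for i in range(n + 1)]
    def w_star(t):
        for (t0, y0), (t1, y1) in zip(knots, knots[1:]):
            if t <= t1:
                return y0 + (y1 - y0) * (t - t0) / (t1 - t0)
        return knots[-1][1]
    order = sorted(range(n), key=lambda i: -costs[i])  # worst cost first
    total, acc = 0.0, 0.0
    for i in order:
        lo, acc = acc, acc + p[i]
        total += (w_star(acc) - w_star(lo)) * costs[i]
    return total
```

with uniform owa weights the criterion reduces to expected cost; w = (1, 0, ..., 0) recovers the pessimistic max, and w = (0, ..., 0, 1) the optimistic min, which is the pessimism/optimism dial the abstract refers to.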
a large number of matrix optimization problems are described by orthogonally invariant norms. this paper is devoted to the variational analysis of the orthogonally invariant norm cone of symmetric matrices. for a general orthogonally invariant norm cone of symmetric matrices, formulas for the tangent cone, normal cone and second-order tangent set are established. the differentiability properties of the projection operator onto the orthogonally invariant norm cone are developed, including formulas for the directional derivative and the b-subdifferential. importantly, the directional derivative is characterized by the second-order derivative of the corresponding symmetric function, which is convenient for computation. all these results are specialized to the schatten $p$-norm cone, and especially to the second-order cone of symmetric matrices.
arxiv:2302.05560
we design numerical schemes for a class of slow - fast systems of stochastic differential equations, where the fast component is an ornstein - uhlenbeck process and the slow component is driven by a fractional brownian motion with hurst index $ h > 1 / 2 $. we establish the asymptotic preserving property of the proposed scheme : when the time - scale parameter goes to $ 0 $, a limiting scheme which is consistent with the averaged equation is obtained. with this numerical analysis point of view, we thus illustrate the recently proved averaging result for the considered sde systems and the main differences with the standard wiener case.
arxiv:2104.14198
hybrid petri nets have been extended to include general transitions that fire after a randomly distributed amount of time. with a single general one-shot transition, the state space and evolution over time can be represented either as a parametric location tree or as a stochastic time diagram. recent work has shown that both representations can be combined and then allow multiple stochastic firings. this work presents an algorithm for building the parametric location tree with multiple general transition firings and shows how its transient probability distribution can be computed using multi-dimensional integration. we discuss the (dis)advantages of an interval arithmetic and a geometric approach to compute the areas of integration. furthermore, we provide details on how to perform a monte carlo integration either directly on these intervals or convex polytopes, or after transformation to standard simplices. a case study on a battery-backup system shows the feasibility of the approach and discusses the performance of the different integration approaches.
arxiv:2010.11056
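The "transformation to standard simplices" step can be illustrated with a minimal sketch: uniform samples on the standard simplex are obtained from normalized exponential spacings, and a Monte Carlo estimate scales the sample mean by the simplex volume 1/d!. This is a generic illustration of simplex Monte Carlo, not the paper's implementation.

```python
import math
import numpy as np

rng = np.random.default_rng(42)

def sample_simplex(d, n):
    """Uniform samples from the standard simplex {x >= 0, sum(x) <= 1} in R^d,
    via normalized exponential spacings (equivalently Dirichlet(1,...,1))."""
    e = rng.exponential(size=(n, d + 1))
    return e[:, :d] / e.sum(axis=1, keepdims=True)

def mc_integrate_simplex(f, d, n=200_000):
    """Monte Carlo estimate of the integral of f over the standard d-simplex:
    volume times the sample mean of f."""
    vol = 1.0 / math.factorial(d)   # volume of the standard d-simplex
    x = sample_simplex(d, n)
    return vol * f(x).mean()
```

For instance, integrating the constant 1 recovers the simplex volume exactly, and integrating a coordinate x_1 over the 3-simplex should approach 1/24.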
distributed llm serving is costly and often underutilizes hardware accelerators due to three key challenges: bubbles in pipeline-parallel deployments caused by the bimodal latency of prompt and token processing, gpu memory overprovisioning, and long recovery times in case of failures. in this paper, we propose déjàvu, a system to address all these challenges using a versatile and efficient kv cache streaming library (déjàvulib). using déjàvulib, we propose and implement efficient prompt-token disaggregation to reduce pipeline bubbles, microbatch swapping for efficient gpu memory management, and state replication for fault-tolerance. we highlight the efficacy of these solutions on a range of large models across cloud deployments.
arxiv:2403.01876
we study numerically statistical distributions of sums of orbit coordinates, viewed as independent random variables in the spirit of the central limit theorem, in weakly chaotic regimes associated with the excitation of the first ($k = 1$) and last ($k = n$) linear normal modes of the fermi-pasta-ulam-$\alpha$ system under fixed boundary conditions. we show that at low energies ($e = 0.19$), when the $k = 1$ linear mode is excited, chaotic diffusion occurs characterized by distributions that are well approximated for long times ($t > 10^9$) by a $q$-gaussian quasi-stationary state (qss) with $q \approx 1.4$. on the other hand, when the $k = n$ mode is excited at the same energy, diffusive phenomena are \textit{absent} and the motion is quasi-periodic. in fact, as the energy increases to $e = 0.3$, the distributions in the former case pass through \textit{shorter} $q$-gaussian states and tend rapidly to a gaussian (i.e. $q \rightarrow 1$) where equipartition sets in, while in the latter we need to reach $e = 4$ to see a \textit{sudden transition} to gaussian statistics, without any passage through an intermediate qss. this may be explained by different energy localization properties and recurrence phenomena in the two cases, supporting the view that when the energy is placed in the first mode, weak chaos and "sticky" dynamics lead to a more gradual process of energy sharing, while strong chaos and equipartition appear abruptly when only the last mode is initially excited.
arxiv:1105.1465
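The $q$-gaussian profile that such sum distributions are fitted against has a simple closed form built on the Tsallis $q$-exponential, with the ordinary Gaussian recovered in the limit $q \rightarrow 1$. A minimal sketch (unnormalized, with $\beta$ a free fit parameter):

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1-q) x]_+^{1/(1-q)};
    e_1(x) = exp(x) in the q -> 1 limit."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

def q_gaussian(x, q, beta=1.0):
    """Unnormalized q-Gaussian profile, the shape fitted to the
    quasi-stationary sum distributions; q > 1 gives heavier tails."""
    return q_exponential(-beta * x**2, q)
```

For $q \approx 1.4$ the tails decay as a power law rather than exponentially, which is why the qss is distinguishable from the gaussian that eventually sets in at equipartition.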
we report the magnetic, magnetocaloric and magnetoresistance results obtained in tb(ni1-xfex)2 compounds with x = 0, 0.025 and 0.05. fe substitution leads to an increase in the ordering temperature from 36 k for x = 0 to 124 k for x = 0.05. contrary to the single sharp mce peak seen in tbni2, the mce peaks of the fe-substituted compounds are quite broad. we attribute the anomalous mce behavior to the randomization of the tb moments brought about by the fe substitution. magnetic and magnetoresistance results seem to corroborate this proposition. the present study also shows that the anomalous magnetocaloric and magnetoresistance behavior seen in the present compounds is similar to that of ho(ni,fe)2 compounds.
arxiv:cond-mat/0609400
we report a first measurement of inclusive b -> x_s eta decays, where x_s is a charmless state with unit strangeness. the measurement is based on a pseudo-inclusive reconstruction technique and uses a sample of 657 x 10^6 bb-bar pairs accumulated with the belle detector at the kekb e^+ e^- collider. for m_{x_s} < 2.6 gev/c^2, we measure a branching fraction of (26.1 +/- 3.0 (stat) +1.9 -2.1 (syst) +4.0 -7.1 (model)) x 10^-5 and a direct cp asymmetry of a_{cp} = -0.13 +/- 0.04 +0.02 -0.03. over half of the signal occurs in the range m_{x_s} > 1.8 gev/c^2.
arxiv:0910.4751
due to strong capabilities in conducting fluent, multi-turn conversations with users, large language models (llms) have the potential to further improve the performance of conversational recommender systems (crs). unlike the aimless chit-chat that llms excel at, a crs has a clear target, so it is imperative to control the dialogue flow of the llm to successfully recommend appropriate items to the users. furthermore, user feedback in crs can assist the system in better modeling user preferences, which has been ignored by existing studies. however, simply prompting an llm to conduct conversational recommendation cannot address these two key challenges. in this paper, we propose the multi-agent conversational recommender system (macrs), which contains two essential modules. first, we design a multi-agent act planning framework, which controls the dialogue flow based on four llm-based agents. this cooperative multi-agent framework generates various candidate responses based on different dialogue acts and then chooses the most appropriate one as the system response, which helps macrs plan suitable dialogue acts. second, we propose a user feedback-aware reflection mechanism, which leverages user feedback to reason about errors made in previous turns, adjust the dialogue act planning, and extract higher-level user information from implicit semantics. we conduct extensive experiments based on a user simulator to demonstrate the effectiveness of macrs in recommendation and user preference collection. experimental results illustrate that macrs improves the user interaction experience compared to directly using llms.
arxiv:2402.01135
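The act-planning idea (several act-specific agents propose candidates, a planner picks one as the system response) can be sketched with plain stubs. Everything below is hypothetical: the agent names, the scoring rule, and the responses are illustrative stand-ins for what would be llm calls in macrs.

```python
# toy skeleton: each "agent" proposes a response for one dialogue act,
# and a planner picks the best-scored candidate as the system response.

def asking_agent(context):
    return ("ask", "which genre do you prefer?")

def chatting_agent(context):
    return ("chat", "that movie has a great soundtrack!")

def recommending_agent(context):
    return ("recommend", "you might enjoy 'inception'.")

def score(act, context):
    # stand-in for the llm-based planner: prefer asking early, and
    # recommending once enough preference signals have been collected
    prefs = context.get("known_preferences", 0)
    return {"ask": 1.0 - 0.3 * prefs, "chat": 0.2, "recommend": 0.4 * prefs}[act]

def plan_turn(context, agents=(asking_agent, chatting_agent, recommending_agent)):
    """Collect one candidate per dialogue act and return the top-scored one."""
    candidates = [agent(context) for agent in agents]
    return max(candidates, key=lambda c: score(c[0], context))
```

With no known preferences the planner asks a question; after several turns of feedback it switches to recommending, which is the kind of dialogue-flow control the abstract argues plain prompting cannot provide.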
combustion kinetic modeling is an integral part of combustion simulation, and extensive studies have been devoted to developing models that are both high-fidelity and computationally affordable. despite these efforts, modeling combustion kinetics is still challenging due to the demand for expert knowledge and optimization against experiments, as well as the lack of understanding of the associated uncertainties. therefore, data-driven approaches that enable efficient discovery and calibration of kinetic models have received much attention in recent years, the core of which is optimization based on big data. differentiable programming is a promising approach for learning kinetic models from data by efficiently computing the gradient of objective functions with respect to model parameters. however, it is often challenging to implement differentiable programming in practice, and it is therefore still not available in widely utilized combustion simulation packages such as chemkin and cantera. here, we present a differentiable combustion simulation package leveraging the ecosystem in julia, including differentialequations.jl for solving differential equations, forwarddiff.jl for auto-differentiation, and flux.jl for incorporating neural network models into combustion simulations and optimizing them using state-of-the-art deep learning optimizers. we demonstrate the benefits of differentiable programming in efficient and accurate gradient computations, with applications in uncertainty quantification, kinetic model reduction, data assimilation, and model discovery.
arxiv:2107.06172
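The core operation, differentiating an ode solution with respect to a rate constant, can be sketched without any autodiff framework via forward sensitivity equations. The toy reaction A -> B below (dx/dt = -k x, so x = e^{-kt} and dx/dk = -t e^{-kt}) is my own minimal example, not the paper's julia implementation.

```python
import math

def sensitivity_euler(k, t_end=1.0, n=20_000):
    """Forward-sensitivity integration for the toy reaction A -> B with
    rate constant k: integrate dx/dt = -k*x together with the sensitivity
    s = dx/dk, which satisfies ds/dt = -x - k*s (differentiate the rhs)."""
    dt = t_end / n
    x, s = 1.0, 0.0   # x(0) = 1, and x(0) does not depend on k
    for _ in range(n):
        # explicit Euler; both right-hand sides use the pre-step (x, s)
        x, s = x + dt * (-k * x), s + dt * (-x - k * s)
    return x, s
```

This exact-gradient route is what differentiable solvers automate for full kinetic mechanisms, where hand-deriving the sensitivity system for every parameter is impractical.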
the financial market and turbulence have been broadly compared on account of the same quantitative methods and several common stylized facts they share. in this paper, the she-leveque (sl) hierarchy, proposed to explain the anomalous scaling exponents that deviate from kolmogorov's monofractal scaling of velocity fluctuations in fluid turbulence, is applied to study and quantify the hierarchical structure of stock price fluctuations in financial markets. we observe several interesting results: (i) the hierarchical structure related to multifractal scaling is generally present in all the stock price fluctuations we investigated. (ii) the statistical parameters that describe the sl hierarchy differ distinctly between developed financial markets and emerging ones. (iii) for high-frequency stock price fluctuations, the hierarchical structure varies across different time periods. all these results provide a novel analogy between turbulence and financial market dynamics and an insight into a deeper understanding of the multifractality in financial markets.
arxiv:1209.4175
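For reference, the she-leveque hierarchy predicts the closed-form structure-function exponents zeta_p = p/9 + 2(1 - (2/3)^{p/3}), versus the kolmogorov monofractal prediction zeta_p = p/3; the deviation between them for p != 3 is the multifractal signature the paper transfers to price fluctuations. A short sketch:

```python
def sl_zeta(p):
    """She-Leveque scaling exponent: zeta_p = p/9 + 2*(1 - (2/3)**(p/3))."""
    return p / 9.0 + 2.0 * (1.0 - (2.0 / 3.0) ** (p / 3.0))

def k41_zeta(p):
    """Kolmogorov 1941 monofractal prediction: zeta_p = p/3."""
    return p / 3.0
```

Both formulas give zeta_3 = 1 exactly (the exact 4/5-law constraint), while for higher orders the sl exponents fall below p/3, the concave behavior characteristic of multifractal scaling.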