text (stringlengths 1–3.65k) | source (stringlengths 15–79)
---|---|
Accurate modeling of tidal interactions is crucial for interpreting recent JWST observations of the thermal emission of TRAPPIST-1 b and c and for characterizing the surface conditions and potential habitability of the other planets in the system. Indeed, the rotation state of the planets, driven by tidal forces, significantly influences the heat redistribution regime. Due to their proximity to their host star and the estimated age of the system, the TRAPPIST-1 planets are commonly assumed to be in a synchronized state. In this work, we present the recent implementation of the co-planar tidal torque and force equations within the formalism of Kaula in the N-body code Posidonius. This enables us to explore the hypothesis of synchronization using a tidal model well suited to rocky planets. We studied the rotational state of each planet by taking into account their multi-layer internal structure computed with the code BurnMan. Simulations show that the TRAPPIST-1 planets are not perfectly synchronized but oscillate around the synchronization state. Planet-planet interactions lead to strong variations in the mean motion, and tides fail to keep the spin synchronized with the mean motion. As a result, the sub-stellar point of each planet experiences short oscillations and long-timescale drifts that lead the planets to achieve a synodic day with periods varying from $55$ to $290$ years depending on the planet.
|
arxiv:2409.12065
|
In this paper, the use of third-generation machine learning, also known as spiking neural network architecture, for continuous learning was investigated and compared to conventional models. The experimentation was divided into three separate phases. The first phase focused on training the conventional models via transfer learning. The second phase trained a model built with the Nengo library. Lastly, each conventional model was converted into a spiking neural network and trained. Initial results from phase 1 are in line with known knowledge about continuous learning within the current machine learning literature. All models were able to correctly identify the current classes, but they would immediately see a sharp performance drop in previous classes due to catastrophic forgetting. However, the SNN models were able to retain some information about previous classes. Although many of the previous classes were still identified as the current trained classes, the output probabilities showed a higher than normal value for the actual class. This indicates that the SNN models do have the potential to overcome catastrophic forgetting, but much work is still needed.
|
arxiv:2310.05343
|
A novel method to identify trampoline skills using a single video camera is proposed herein. Conventional computer vision techniques are used for identification, estimation, and tracking of the gymnast's body in a video recording of the routine. For each frame, an open-source convolutional neural network is used to estimate the pose of the athlete's body. Body orientation and joint angle estimates are extracted from these pose estimates. The trajectories of these angle estimates over time are compared with those of labelled reference skills. A nearest neighbour classifier utilising a mean squared error distance metric is used to identify the skill performed. A dataset containing 714 skill examples with 20 distinct skills performed by adult male and female gymnasts was recorded and used for evaluation of the system. The system was found to achieve a skill identification accuracy of 80.7% for the dataset.
|
arxiv:1709.03399
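The classification step described above, a nearest-neighbour classifier with a mean-squared-error distance over joint-angle trajectories, can be sketched as follows. This is a minimal illustration with assumed array shapes and interface, not the paper's code:

```python
import numpy as np

def identify_skill(angles, references):
    """Nearest-neighbour skill identification with an MSE distance.

    angles:     (T, J) array of joint-angle estimates over T frames.
    references: dict mapping skill label -> (T, J) reference trajectory.
    (Hypothetical interface; the paper's exact feature set differs.)
    """
    best_label, best_mse = None, np.inf
    for label, ref in references.items():
        mse = np.mean((angles - ref) ** 2)  # distance to this reference skill
        if mse < best_mse:
            best_label, best_mse = label, mse
    return best_label
```

In practice the trajectories would first be time-aligned and resampled to a common length before the MSE comparison.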
|
We use the Hamiltonian formalism to study the asymptotic structure of three-dimensional gravity with a negative cosmological constant. We start by defining very general fall-off conditions for the canonical variables and study the implied Poisson structure of the boundary gravitons. From the allowed differentiable gauge transformations, we can extract all the possible boundary conditions on the Lagrange multipliers and the associated boundary Hamiltonians. In the last section, we use this general framework to describe some of the previously known boundary conditions.
|
arxiv:1507.01580
|
We analyze a single-epoch Global mm-VLBI Array (GMVA) observation of the blazar BL Lacertae (BL Lac) at 86 GHz from April 2021. The participation of the upgraded, phased Northern Extended Millimetre Array (NOEMA) adds additional sensitivity to the GMVA, which has facilitated the imaging of BL Lac during an unprecedentedly strong $\gamma$-ray flare. We aim to explore the nature of the inner subparsec jet of BL Lac and the impact of the NOEMA participation in the observation. For the data reduction, we employed two advanced automatic pipelines: rPICARD for the flux density calibration as well as the model-agnostic signal stabilization, and GPCAL for the antenna leakage calibration. Conventional hybrid imaging (CLEAN plus amplitude and phase self-calibration) was applied to the calibrated visibilities to generate final VLBI images. We performed a ridge-line analysis and Gaussian model-fits on the final jet image to derive the jet parameters. In our data, the presence of NOEMA improves the image sensitivity by a factor of 2.5. The jet shows a clear wiggling structure within 0.4 mas from the core. Our ridge-line analysis suggests the presence of a helical jet structure (i.e., a sinusoidal pattern). Six circular Gaussian components were fitted to the inner jet region. We estimated an apparent brightness temperature of $\sim 3 \times 10^{12}$ K in the two innermost components; they are likely to be highly boosted by the relativistic beaming effect. We find four significant polarized knots in the jet; interestingly, two of them are located in the core region. Finally, we suggest a number of physical scenarios to interpret our results.
|
arxiv:2312.05191
|
This article is the written version of the closing talk presented at the conference 'A Century of Cosmology', held at San Servolo, Italy, in August 2007. I focus on the prospects of constraining fundamental physics from cosmological observations, using the search for gravitational waves from inflation and constraints on the equation of state of dark energy as topical examples. I argue that it is important to strike a balance between the importance of a scientific discovery and the likelihood of making the discovery in the first place. Astronomers should be wary of embarking on large observational projects with narrow and speculative scientific goals. We should maintain a diverse range of research programmes as we move into a second century of cosmology. If we do so, discoveries that will reshape fundamental physics will surely come.
|
arxiv:0712.1513
|
The use of the absolute measure of local chirality is championed, since it has a uniform distribution for randomly reshuffled chiral components, so that any deviations from uniformity in the associated "X-distribution" are directly attributable to QCD-induced dynamics. We observe a transition in the qualitative behavior of this absolute X-distribution of low-lying eigenmodes which, we propose, defines a chiral polarization scale of the QCD vacuum.
|
arxiv:1010.5474
|
94 Ceti is a triple star system with a circumprimary gas giant planet and far-infrared excess. Such excesses around main sequence stars are likely due to debris discs, are considered signposts of planetary systems, and therefore provide important insights into the configuration and evolution of the planetary system. Consequently, in order to learn more about the 94 Ceti system, we aim to precisely model the dust emission to fit its observed SED and to simulate its orbital dynamics. We interpret our APEX bolometric observations and complement them with archived Spitzer and Herschel bolometric data to explore the stellar excess and to map out background sources in the fields. Dynamical simulations and 3D radiative transfer calculations were used to constrain the debris disc configurations and model the dust emission. The best-fit dust disc model for 94 Ceti implies a circumbinary disc around the secondary pair, limited by dynamics to radii smaller than 40 AU and with a grain-size power-law distribution of ~a^-3.5. This model exhibits a dust-to-star luminosity ratio of 4.6 ± 0.4 × 10^-6. The system is dynamically stable, and N-body symplectic simulation results are consistent with semi-analytical equations that describe orbits in binary systems. In the observations we also find tentative evidence of a circumtertiary ring that could be seen edge-on.
|
arxiv:1607.03038
|
Anisotropic flows ($v_1$, $v_2$, $v_3$ and $v_4$) of light fragments up to mass number 4 have been studied as a function of rapidity for 25 MeV/nucleon $^{40}$Ca + $^{40}$Ca at large impact parameters with a quantum molecular dynamics model. A phenomenological scaling behavior of the rapidity-dependent flow parameters $v_n$ (n = 1, 2, 3 and 4) has been found as a function of mass number plus a constant term, which may arise from the interplay of collective and random motions. In addition, $v_4/v_2^2$ remains almost independent of rapidity and stays roughly constant at 1/2 for all light fragments.
|
arxiv:0711.0127
|
UVES spectra of the very young (~10^7 years) peculiar B-type star HR 6000 were analyzed in the near-UV and visual spectral regions (3050-9460 A) with the aim of extending to other spectral ranges the study made previously in the UV using IUE spectra. Stellar parameters Teff = 12850 K, log g = 4.10, and xi = 0 km/s, as determined from the H beta, H gamma, and H delta Balmer profiles and from the Fe I/Fe II ionization equilibrium, were used to compute an individual-abundances ATLAS12 model. We identified spectral peculiarities and obtained final stellar abundances by comparing observed and computed equivalent widths and line profiles. The adopted model fails to reproduce the (b-y) and c color indices. The spectral analysis revealed: the presence of emission lines of Mn II, Cr II, and Fe II; isotopic anomalies for Hg and Ca; the presence of interstellar lines of Na I at lambda lambda 3302.3, 3302.9, 5890, and 5896 A, and of K I at 7665 and 7699 A; and the presence of a huge quantity of unidentified lines, which we presume to be mostly due to Fe II transitions owing to the large Fe overabundance, amounting to [+0.7]. The main chemical peculiarities are an extreme overabundance of Xe, followed by those of Hg, P, Y, Mn, Fe, Be, and Ti. The most underabundant element is Si, followed by C, N, Al, S, Mg, V, Sr, Co, Cl, Sc, and Ni. The silicon underabundance [-2.9] is the lowest value for Si ever observed in any HgMn star. The observed lines of He I cannot be reproduced by a single value of the He abundance, but require values ranging from [-0.8] to [-1.6]. Furthermore, when the observed and computed wings of the He I lines are fitted, the observed line cores are much weaker than the computed ones. From the present analysis we infer the presence of vertical abundance stratification for He, Mn, and possibly also P.
|
arxiv:0710.0005
|
We derive a general dimensionless form for granular locomotion, which is validated in experiments and discrete element method (DEM) simulations. The form instructs how to scale size, mass, and driving parameters in order to relate dynamic behaviors of different locomotors in the same granular media. The scaling can be derived by assuming intrusion forces arise from resistive force theory (RFT), or equivalently by assuming the granular material behaves as a continuum obeying a frictional yield criterion. The scalings are experimentally confirmed using pairs of wheels of various shapes and sizes under many driving conditions in a common sand bed. We discuss why the two models provide such a robust set of scaling laws even though they neglect a number of the complexities of granular rheology. Motivated by potential extra-planetary applications, the dimensionless form also implies a way to predict wheel performance in one ambient gravity based on tests in a different ambient gravity. We confirm this using DEM simulations, which show that the scaling relations are satisfied over an array of driving modes even when gravity differs between scaled tests.
|
arxiv:1604.02490
|
We calculate the Nernst signal in disordered conductors with the chemical potential near the mobility edge. The Nernst effect originates from interference of itinerant and localised-carrier contributions to the thermomagnetic transport. It reveals a strong temperature and magnetic field dependence, which describes quantitatively the anomalous Nernst signal in high-Tc cuprates.
|
arxiv:cond-mat/0401272
|
A novel concept has been recently proposed for explaining the temporal coincidence of some gamma-ray bursts (GRBs) with an associated supernova (SN) in terms of the gravitational collapse of a neutron star (NS) to a black hole (BH), induced by a type Ib/c SN explosion. We apply these considerations to the exceptional case of GRB 090618, for which there is evidence of a SN $\sim 10$ days after the GRB occurrence. We calculate the accretion rate and total accreted mass onto a NS from a SN Ib/c originating from a companion evolved star. It is shown that the NS reaches the critical mass in a few seconds and undergoes gravitational collapse to a BH, leading to the emission of a GRB. We find for the mass of the NS companion, $M_{\rm NS}$, and for the SN core progenitor, $M_{\rm core}$, the following mass ranges: $1.8 \lesssim M_{\rm NS}/M_\odot \lesssim 2.1$ and $3 \leq M_{\rm core}/M_\odot \leq 8$. Finally, we discuss the complementarity of these considerations to alternative processes explaining long and short GRBs.
|
arxiv:1206.2887
|
Robust obstacle avoidance is one of the critical steps for successful goal-driven indoor navigation tasks. Because obstacles can be missing from the visual image, or present but missed by the detector, visual-image-based obstacle avoidance techniques still suffer from unsatisfactory robustness. To mitigate this, in this paper we propose a novel implicit obstacle map-driven indoor navigation framework for robust obstacle avoidance, where an implicit obstacle map is learned from historical trial-and-error experience rather than from the visual image. In order to further improve navigation efficiency, a non-local target memory aggregation module is designed to leverage a non-local network to model the intrinsic relationship between the target semantics and the target orientation clues during the navigation process, so as to mine the most target-correlated object clues for the navigation decision. Extensive experimental results on the AI2-THOR and RoboTHOR benchmarks verify the excellent obstacle avoidance and navigation efficiency of our proposed method. The core source code is available at https://github.com/xwaiyy123/object-navigation.
|
arxiv:2308.12845
|
On the Large Phased Array (LPA) of the Lebedev Physical Institute (LPI), a search for pulsars outside the Galactic plane was carried out in a 300 sq. deg area. The search, with a sensitivity 5-10 times better than that of previously conducted surveys, was done at a frequency of 111 MHz. The search was carried out in the summed power spectra. With an accumulation equivalent to 100 hours of continuous observations for each point of the area, 5 known pulsars were detected with a signal-to-noise ratio (S/N) from 20 to 1300 in the first harmonic of the spectrum. Average profiles were obtained for the detected pulsars. Estimates of the peak and integral flux densities of the found pulsars are given for individual sessions and for the power spectra summed over 5.5 years, obtained using the developed method based on measurements of the height of harmonics in the power spectrum. No new pulsars have been detected in the area. Apparently, when searching for pulsars in the area, we have approached the lower limit of the luminosity of second-period pulsars. The completeness of the survey is at the level of 0.5 mJy.
|
arxiv:2305.01409
|
Compatibility conditions of quantum channels featuring symmetry through covariance are studied. Compatibility here means the possibility of obtaining two or more channels through partial trace out of a broadcasting channel. We see that covariance conditions can be used to simplify compatibility conditions, as the broadcasting channel can be assumed to be covariant in a particular way. A particular emphasis is on Weyl covariance and on determining compatibility conditions for Weyl-covariant channels. The concrete examples studied include the case of a non-compact continuous phase space and the case of a finite phase space.
|
arxiv:1901.06113
|
We report the experimental realization of a non-galvanic, primary thermometer capable of measuring the electron temperature of a two-dimensional electron gas with negligible thermal load. Such a thermometer consists of a quantum dot whose temperature-dependent, single-electron transitions are detected by means of a quantum-point-contact electrometer. Its operating principle is demonstrated for a wide range of electron temperatures from 40 to 800 mK. This noninvasive thermometry can find application in experiments addressing the thermal properties of micrometer-scale mesoscopic electron systems, where heating or cooling electrons requires relatively low thermal budgets.
|
arxiv:1309.2176
|
is followed by template switching and annealing of the short fragments obtained from the earlier primer extension to other single-stranded DNA fragments. This process is repeated until full-length single-stranded DNA is obtained.

==== Sequence homology-independent protein recombination (SHIPREC) ====

This method generates recombination between genes with little to no sequence homology. These chimeras are fused via a linker sequence containing several restriction sites. This construct is then digested using DNase I. The resulting fragments are made blunt-ended using S1 nuclease. These blunt-end fragments are joined into a circular sequence by ligation. This circular construct is then linearized using restriction enzymes whose restriction sites are present in the linker region. This results in a library of chimeric genes in which the contribution of genes to the 5' and 3' ends is reversed compared to the starting construct.

==== Sequence-independent site-directed chimeragenesis (SISDC) ====

This method results in a library of genes with multiple crossovers from several parental genes. It does not require sequence identity among the parental genes, but it does require one or two conserved amino acids at every crossover position. It begins with alignment of the parental sequences and identification of consensus regions which serve as crossover sites. This is followed by the incorporation of specific tags containing restriction sites, followed by the removal of the tags by digestion with BacI, resulting in genes with cohesive ends. These gene fragments are mixed and ligated in an appropriate order to form chimeric libraries.

==== Degenerate homo-duplex recombination (DHR) ====

This method begins with alignment of homologous genes, followed by identification of regions of polymorphism. Next, the top strand of the gene is divided into small degenerate oligonucleotides. The bottom strand is also digested into oligonucleotides to serve as scaffolds. These fragments are combined in solution, and top-strand oligonucleotides are assembled onto bottom-strand oligonucleotides. Gaps between these fragments are filled with polymerase and ligated.

==== Random multi-recombinant PCR (RM-PCR) ====

This method involves the shuffling of plural DNA fragments without homology in a single PCR. This results in the reconstruction of complete proteins by assembly of modules encoding different structural units.

===
|
https://en.wikipedia.org/wiki/Protein_engineering
|
We study irregular states of rank two and three in Liouville theory, based on an ansatz proposed by D. Gaiotto and J. Teschner. Using these irregular states, we evaluate asymptotic expansions of irregular conformal blocks corresponding to the partition functions of $(A_1, A_3)$ and $(A_1, D_4)$ Argyres-Douglas theories for general $\Omega$-background parameters. In the limit of vanishing Liouville charge, our result reproduces strong coupling expansions of the partition functions recently obtained via the Painlevé/gauge correspondence. This suggests that the irregular conformal block for one irregular singularity of rank 3 on the sphere is also related to Painlevé II. We also find that our partition functions are invariant under the action of the Weyl group of the flavor symmetries once four- and two-dimensional parameters are correctly identified. We finally propose a generalization of this parameter identification to general irregular states of integer rank.
|
arxiv:1905.03795
|
This paper uses the relationship between graph conductance and spectral clustering to study (i) the failures of spectral clustering and (ii) the benefits of regularization. The explanation is simple. Sparse and stochastic graphs create a lot of small trees that are connected to the core of the graph by only one edge. Graph conductance is sensitive to these noisy 'dangling sets'. Spectral clustering inherits this sensitivity. The second part of the paper starts from a previously proposed form of regularized spectral clustering and shows that it is related to the graph conductance on a 'regularized graph'. We call the conductance on the regularized graph CoreCut. Based upon previous arguments that relate graph conductance to spectral clustering (e.g. the Cheeger inequality), minimizing CoreCut relaxes to regularized spectral clustering. Simple inspection of CoreCut reveals why it is less sensitive to small cuts in the graph. Together, these results show that unbalanced partitions from spectral clustering can be understood as overfitting to noise in the periphery of a sparse and stochastic graph. Regularization fixes this overfitting. In addition to this statistical benefit, these results also demonstrate how regularization can improve the computational speed of spectral clustering. We provide simulations and data examples to illustrate these results.
|
arxiv:1806.01468
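The quantity described above, conductance evaluated on a regularized graph, can be sketched as follows. This is a hedged illustration of the idea only: the function name, the uniform tau/n regularization, and the variable names are assumptions based on the abstract, not the authors' code:

```python
import numpy as np

def corecut(A, S, tau):
    """Conductance of node set S on the regularized graph A + (tau/n) * J.

    A:   (n, n) symmetric adjacency matrix.
    S:   boolean mask selecting a node subset.
    tau: regularization strength (tau = 0 gives plain conductance).
    """
    n = A.shape[0]
    Ar = A + tau / n                     # add tau/n to every pair of nodes
    cut = Ar[S][:, ~S].sum()             # regularized edge weight leaving S
    vol_S, vol_Sc = Ar[S].sum(), Ar[~S].sum()
    return cut / min(vol_S, vol_Sc)
```

On a graph with a dense core and a dangling tree, the dangling set has small plain conductance but a much larger regularized conductance, which is the paper's explanation for why regularization discourages unbalanced cuts.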
|
The spin dimer system $\mathrm{Ba}_{3-x}\mathrm{Sr}_x\mathrm{Cr_2O_8}$ is a solid solution of the triplon Bose-Einstein condensation candidates $\mathrm{Ba_3Cr_2O_8}$ and $\mathrm{Sr_3Cr_2O_8}$. The magnetic intradimer interaction constant $J_0$ in this spin system can be tuned by varying the Sr content $x$. Very interestingly, this variation of $J_0$ with $x$ is highly nonlinear. In the present study, we show that this peculiar behavior of $J_0$ can be only partly explained by the changes in the average crystal structure alone. We report on neutron powder diffraction experiments to probe the corresponding structural details. Performing extended Hückel tight-binding calculations based on those structural details obtained at liquid helium temperatures, we found that the change of the magnetic interaction constant can be well reproduced by taking into account the presence of a structural transition due to the Jahn-Teller active Cr$^{5+}$ ions. This transition, lifting the orbital degeneracy and thereby the magnetic frustration in the system, is heavily influenced by disorder in the system arising from partially exchanging Ba with Sr.
|
arxiv:1404.7375
|
The exchange coupling underlies ferroic magnetic coupling and is thus the key element that governs the statics and dynamics of magnetic systems. This fundamental interaction comes in two flavors: symmetric and antisymmetric coupling. While symmetric coupling leads to ferro- and antiferromagnetism, antisymmetric coupling has attracted significant interest owing to its major role in promoting topologically non-trivial spin textures that promise high-speed and energy-efficient devices. So far, only an antisymmetric exchange coupling that is rather short-ranged and limited to a single magnetic layer has been demonstrated, whereas symmetric coupling also leads to long-range interlayer exchange coupling. Here, we report the missing component: long-range antisymmetric interlayer exchange coupling in perpendicularly magnetized synthetic antiferromagnets with parallel and antiparallel magnetization alignments. Asymmetric hysteresis loops under an in-plane field unambiguously reveal the unidirectional and chiral nature of this novel interaction, which cannot be accounted for by existing coupling mechanisms and which results in canted magnetization alignments. This can be explained by spin-orbit coupling combined with reduced symmetry in multilayers. This new class of chiral interaction provides an additional degree of freedom for engineering magnetic structures and promises to enable a new class of three-dimensional topological structures.
|
arxiv:1809.01080
|
Monte Carlo (MC) techniques are often used to estimate integrals of a multivariate function using randomly generated samples of the function. In light of the increasing interest in uncertainty quantification and robust design applications in aerospace engineering, the calculation of expected values of such functions (e.g. performance measures) becomes important. However, MC techniques often suffer from high variance and slow convergence as the number of samples increases. In this paper we present Stacked Monte Carlo (StackMC), a new method for post-processing an existing set of MC samples to improve the associated integral estimate. StackMC is based on the supervised learning techniques of fitting functions and cross-validation. It should reduce the variance of any type of Monte Carlo integral estimate (simple sampling, importance sampling, quasi-Monte Carlo, MCMC, etc.) without adding bias. We report on an extensive set of experiments confirming that the StackMC estimate of an integral is more accurate than both the associated unprocessed Monte Carlo estimate and an estimate based on a functional fit to the MC samples. These experiments run over a wide variety of integration spaces, numbers of sample points, dimensions, and fitting functions. In particular, we apply StackMC to estimating the expected value of the fuel burn metric of future commercial aircraft and to estimating sonic boom loudness measures. We compare the efficiency of StackMC with that of more standard methods and show that significant increases in accuracy are gained for negligible additional computational cost.
|
arxiv:1108.4879
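The idea of post-processing MC samples with a fitted function can be illustrated with a linear control variate on the unit hypercube. This is a simplified sketch of the approach, not the paper's method: StackMC additionally uses cross-validation to weight the fit and supports richer fitting functions:

```python
import numpy as np

def stackmc_linear(f, xs):
    """Post-process MC samples of f on [0,1]^d with a linear fit.

    Fits g(x) = a + b.x by least squares, integrates g exactly over the
    unit hypercube (a + 0.5 * sum(b)), and corrects with the MC mean of
    the residual f - g, so the estimate stays unbiased.
    """
    ys = np.array([f(x) for x in xs])
    X = np.hstack([np.ones((len(xs), 1)), xs])   # design matrix [1, x]
    coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
    exact_g = coef[0] + 0.5 * coef[1:].sum()     # exact integral of g
    residual = ys - X @ coef                     # what the fit missed
    return exact_g + residual.mean()
```

When f is well approximated by the fit, the residual carries most of no variance and the estimate is far more accurate than the plain sample mean of the same points.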
|
A method is proposed for the determination of the unitarity angle alpha through tree-penguin interference. The modes needed would be of the form B0/B0-bar -> rho0 M and B0/B0-bar -> omega M, where M is a spin-0 uu-bar/dd-bar meson, for instance M = pi0, eta, eta', a0 or f0. An analogous method can also determine gamma using M = K_S or K_L. The validity of the theoretical approximations used may be tested by overdetermining alpha with several modes. If two or more modes are used, the determination has a four-fold ambiguity, but additional information from pure penguin decays or theoretical estimates may be used to reduce the ambiguity to alpha, alpha + pi. The method as applied to determining gamma is probably less promising.
|
arxiv:hep-ph/0106250
|
We combine HST imaging from the GEMS survey with photometric redshifts from COMBO-17 to explore the evolution of disk-dominated galaxies since z < 1.1. The sample is comprised of all GEMS galaxies with Sersic indices n < 2.5, derived from fits to the galaxy images. We account fully for selection effects through careful analysis of image simulations; we are limited by the depth of the redshift and HST data to the study of galaxies with absolute magnitudes M(V) < -20, or equivalently stellar masses log(M) > 10. We find strong evolution in the magnitude-size scaling relation for galaxies with M(V) < -20, corresponding to a brightening of 1 mag per square arcsecond in rest-frame V-band by z = 1. Yet, disks at a given absolute magnitude are bluer and have lower stellar mass-to-light ratios at z = 1 than at the present day. As a result, our findings indicate weak or no evolution in the relation between stellar mass and effective disk size for galaxies with log(M) > 10 over the same time interval. This is strongly inconsistent with the most naive theoretical expectation, in which disk size scales in proportion to the halo virial radius, which would predict that disks are a factor of two denser at fixed mass at z = 1. The lack of evolution in the stellar mass-size relation is consistent with an "inside-out" growth of galaxy disks on average (galaxies increasing in size as they grow more massive), although we cannot rule out more complex evolutionary scenarios.
|
arxiv:astro-ph/0502416
|
We analyze the behavior of the outer envelope of a massive star during and after the collapse of its iron core into a protoneutron star (PNS) in terms of the equations of one-dimensional spherically symmetric ideal hydrodynamics. The profiles obtained in studies of the evolution of massive stars up to the final stages of their existence, immediately before a supernova explosion (Boyes et al. 1999), are used as the initial data for the distribution of thermodynamic quantities in the envelope. We use a complex equation of state for matter with allowances made for arbitrary electron degeneracy and relativity, the appearance of electron-positron pairs, the presence of radiation, and the possibility of iron nuclei dissociating into free nucleons and helium nuclei. We performed calculations with the help of a numerical scheme based on Godunov's method. These calculations allowed us to ascertain whether the emersion of the outer envelope of a massive star is possible through the following two mechanisms: first, the decrease in the gravitational mass of the central PNS through neutrino-signal emission and, second, the effect of hot nucleon bubbles, which are most likely formed in the PNS corona, on the envelope emersion. We show that the second mechanism is highly efficient in the range of acceptable masses of the nucleon bubbles ($\leq 0.01 M_\odot$) simulated in our hydrodynamic calculations in a rough, spherically symmetric approximation.
|
arxiv:astro-ph/0402153
|
Following the classical approach of Birkhoff, we suggest an enriched version of universal algebra. Given a suitable base of enrichment $\mathcal V$, we define a language $\mathbb L$ to be a collection of $(x, y)$-ary function symbols whose arities are taken among the objects of $\mathcal V$. The class of $\mathbb L$-terms is constructed recursively from the symbols of $\mathbb L$ and the morphisms in $\mathcal V$, and by incorporating the monoidal structure of $\mathcal V$. Then, $\mathbb L$-structures and interpretations of terms are defined, leading to enriched equational theories. In this framework we characterize algebras for finitary monads on $\mathcal V$ as models of enriched equational theories.
|
arxiv:2310.11972
|
We report on the study of cleaved-edge-overgrown line junctions with a serendipitously created narrow opening in an otherwise thin, precise line barrier. Two sets of zero-bias anomalies are observed, with an enhanced conductance for filling factors $\nu > 1$ and a strongly suppressed conductance for $\nu < 1$. A transition between the two behaviors is found near $\nu \approx 1$. The zero-bias anomaly (ZBA) line shapes find explanation in Luttinger liquid models of tunneling between quantum Hall edge states. The ZBA for $\nu < 1$ occurs from strong backscattering induced by suppression of quasiparticle tunneling between the edge channels for the $n = 0$ Landau levels. The ZBA for $\nu > 1$ arises from weak tunneling of quasiparticles between the $n = 1$ edge channels.
|
arxiv:1006.3107
|
currently used wireless capsule endoscopy ( wce ) is limited in terms of inspection time and flexibility since the capsule is passively moved by peristalsis and cannot be accurately positioned. different methods have been proposed to facilitate active locomotion of wce based on simultaneous magnetic actuation and localization technologies. in this work, we investigate the trajectory following problem of a robotic capsule under rotating magnetic actuation in a tubular environment, in order to realize safe, efficient and accurate inspection of the intestine at given points using wireless capsule endoscopes. specifically, four trajectory following strategies are developed based on the pd controller, adaptive controller, model predictive controller and robust multi - stage model predictive controller. moreover, our method takes into account the uncertainty in the intestinal environment by modeling the intestinal peristalsis and friction during the controller design. we validate our methods in simulation as well as in real - world experiments in various tubular environments, including plastic phantoms with different shapes and an ex - vivo pig colon. the results show that our approach can effectively actuate a reciprocally rotating capsule to follow a desired trajectory in complex tubular environments, thereby having the potential to enable accurate and repeatable inspection of the intestine for high - quality diagnosis.
|
arxiv:2108.11620
|
the recent development of the charged lepton mass formula m _ e + m _ { \ mu } + m _ { \ tau } = { 2 / 3 } ( \ sqrt { m _ e } + \ sqrt { m _ \ mu } + \ sqrt { m _ { \ tau } } ) ^ 2 is reviewed. an s _ 3 or a _ 4 model will be promising for the mass relation.
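this relation (often attributed to koide) can be checked numerically. the sketch below uses approximate charged-lepton masses in mev quoted for illustration; the values are standard reference figures, not taken from this abstract:

```python
import math

# Approximate charged lepton masses in MeV (illustrative reference values)
m_e, m_mu, m_tau = 0.5110, 105.6584, 1776.86

# Left- and right-hand sides of the mass relation
lhs = m_e + m_mu + m_tau
root_sum = math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)
rhs = (2.0 / 3.0) * root_sum ** 2

# Koide's ratio Q = (sum of masses) / (sum of sqrt-masses)^2 should be near 2/3
q = lhs / root_sum ** 2
print(q)  # very close to 2/3 = 0.6666...
```

with these inputs the relation holds to a relative accuracy of better than $10^{-4}$, which is what makes the formula phenomenologically striking.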
|
arxiv:0706.2534
|
the quotient - cusp singularities are isolated complex surface singularities that are double - covered by cusp singularities. we show that the universal abelian cover of such a singularity, branched only at the singular point, is a complete intersection cusp singularity of embedding dimension 4. this supports a general conjecture that we make about the universal abelian cover of a $ \ q $ - gorenstein singularity.
|
arxiv:math/0101251
|
partially observable markov decision processes ( pomdps ) are rich environments often used in machine learning. but the issue of information and causal structures in pomdps has been relatively little studied. this paper presents the concepts of equivalent and counterfactually equivalent pomdps, where agents cannot distinguish which environment they are in through any observations and actions. it shows that any pomdp is counterfactually equivalent, for any finite number of turns, to a deterministic pomdp with all uncertainty concentrated into the initial state. this allows a better understanding of pomdp uncertainty, information, and learning.
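the reduction described above, concentrating all randomness into the initial state, can be illustrated with a toy example. nothing below is taken from the paper's formalism; the environment, policy, and names are all hypothetical:

```python
import random

# Toy 2-state POMDP: a stochastic transition flips the hidden state with
# probability p. Pre-drawing all coin flips into an initial-state "tape"
# yields a deterministic POMDP with identical observation sequences.

def run_stochastic(policy, horizon, seed, p=0.3):
    rng = random.Random(seed)
    s, obs = 0, []
    for _ in range(horizon):
        a = policy(obs)
        if rng.random() < p:   # stochastic transition at each step
            s = 1 - s
        s ^= a                 # the action deterministically toggles the state
        obs.append(s)          # fully revealing observation, for simplicity
    return obs

def run_deterministic(policy, horizon, seed, p=0.3):
    rng = random.Random(seed)
    tape = [rng.random() < p for _ in range(horizon)]  # all uncertainty up front
    s, obs = 0, []
    for t in range(horizon):
        a = policy(obs)
        if tape[t]:            # deterministic lookup into the pre-drawn tape
            s = 1 - s
        s ^= a
        obs.append(s)
    return obs

policy = lambda obs: len(obs) % 2  # an arbitrary deterministic policy
print(run_stochastic(policy, 8, seed=42) == run_deterministic(policy, 8, seed=42))  # True
```

because the tape is drawn from the same generator in the same order, both runs produce identical trajectories for any deterministic policy, which is the finite-horizon equivalence the abstract refers to.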
|
arxiv:1801.03737
|
for primes $ q \ equiv 7 \ mod 16 $, the present manuscript shows that elementary methods enable one to prove surprisingly strong results about the iwasawa theory of the gross family of elliptic curves with complex multiplication by the ring of integers of the field $ k = \ mathbb { q } ( \ sqrt { - q } ) $, which are in perfect accord with the predictions of the conjecture of birch and swinnerton - dyer. we also prove some interesting phenomena related to a classical conjecture of greenberg, and give a new proof of an old theorem of hasse.
|
arxiv:2008.10310
|
galaxies in the universe are distributed in a web - like structure characterised by different large - scale environments : dense clusters, elongated filaments, sheetlike walls, and under - dense regions, called voids. the low density in voids is expected to affect the properties of their galaxies. indeed, previous studies have shown that galaxies in voids are on average bluer and less massive, and have later morphologies and higher current star formation rates than galaxies in denser large - scale environments. however, it has never been observationally proved that the star formation histories ( sfhs ) in void galaxies are substantially different from those in filaments, walls, and clusters. here we show that void galaxies have had, on average, slower sfhs than galaxies in denser large - scale environments. we also find two main sfh types present in all the environments : ' short - timescale ' galaxies are not affected by their large - scale environment at early times but only later in their lives ; ' long - timescale ' galaxies have been continuously affected by their environment and stellar mass. both types have evolved slower in voids than in filaments, walls, and clusters.
|
arxiv:2306.16818
|
the " lost " information of a black hole through the hawking radiation was discovered to be stored in the correlation among the non - thermally radiated particles [ phys. rev. lett 85, 5042 ( 2000 ), phys. lett. b 675, 1 ( 2009 ) ]. this correlation information, which has not yet been proved locally observable in principle, is named dark information. in this paper, we systematically study the influences of dark energy on black hole radiation, especially on the dark information. calculating the radiation spectrum in the presence of dark energy by the approach of canonical typicality, which is reconfirmed by the quantum tunneling method, we find that the dark energy will effectively lower the hawking temperature, and thus gives the black hole a longer lifetime. it is also discovered that the non - thermal effect of the black hole radiation is enhanced by dark energy so that the dark information of the radiation is increased. our observation shows that, besides the mechanical effect ( e. g., gravitational lensing effect ), the dark energy raises the stored dark information, which could be probed by a non - local coincidence measurement similar to the coincidence counting of the hanbury - brown - twiss experiment in quantum optics.
|
arxiv:1802.01118
|
we propose a setup comprising an arbitrarily large array of static qubits ( sqs ), which interact with a flying qubit ( fq ). the sqs work as a quantum register, which can be written or read - out by means of the fq through quantum state transfer ( qst ). the entire system, including the fq ' s motional degrees of freedom, behaves quantum mechanically. we demonstrate a strategy allowing for selective qst between the fq and a single sq chosen from the register. this is achieved through a perfect mirror located beyond the sqs and suitable modulation of the inter - sq distances.
|
arxiv:1212.4430
|
using data samples of 102 million upsilon ( 1s ) events and 158 million upsilon ( 2s ) events collected by the belle detector at the kekb asymmetric - energy $ e ^ + e ^ - $ collider, we search for [ udsccbar ] pentaquark states decaying to jpsi lambda. using the first observations of upsilon ( 1s, 2s ) inclusive decays to jpsi lambda, we find evidence of the p _ ccbars ( 4459 ) 0 state with a significance of 3. 3 standard deviations, including statistical and systematic uncertainties. we measure the mass and width of the p _ ccbars ( 4459 ) 0 to be ( 4471. 7 + - 4. 8 + - 0. 6 ) mev / c2 and ( 21. 9 + - 13. 1 + - 2. 7 ) mev, respectively. the branching fractions for p _ ccbars ( 4459 ) 0 production are measured to be b [ upsilon ( 1s ) - > p _ ccbars ( 4459 ) 0 / pbar _ ccbars ( 4459 ) 0 + anything ] = ( 3. 5 + - 2. 0 + - 0. 2 ) * 10 - 6 and b [ upsilon ( 2s ) - > p _ ccbars ( 4459 ) 0 / pbar _ ccbars ( 4459 ) 0 + anything ] = ( 2. 9 + - 1. 7 + - 0. 4 ) * 10 - 6. the inclusive branching fractions of upsilon ( 1s, 2s ) - > jpsi lambda / lambdabar are measured to be b [ upsilon ( 1s ) - > jpsi lambda / lambdabar + anything ] = ( 36. 9 + - 5. 3 + - 2. 4 ) * 10 - 6 and b [ upsilon ( 2s ) - > jpsi lambda / lambdabar + anything ] = ( 22. 3 + - 5. 7 + - 3. 1 ) * 10 - 6. we measure the visible cross section $ \ sigma ( e ^ + e ^ - \ to j / psi \ lambda / \ bar \ lambda $ + anything ) = ( 90 + - 14 + - 6 ) fb for the continuum production at $ \ sqrt { s } = 10. 52 $ gev. in all cases, the first uncertainties are statistical and the second are systematic.
|
arxiv:2502.09951
|
diffusion and flow matching models have achieved remarkable success in text - to - image generation. however, these models typically rely on the predetermined denoising schedules for all prompts. the multi - step reverse diffusion process can be regarded as a kind of chain - of - thought for generating high - quality images step by step. therefore, diffusion models should reason for each instance to adaptively determine the optimal noise schedule, achieving high generation quality with sampling efficiency. in this paper, we introduce the time prediction diffusion model ( tpdm ) for this. tpdm employs a plug - and - play time prediction module ( tpm ) that predicts the next noise level based on current latent features at each denoising step. we train the tpm using reinforcement learning to maximize a reward that encourages high final image quality while penalizing excessive denoising steps. with such an adaptive scheduler, tpdm not only generates high - quality images that are aligned closely with human preferences but also adjusts diffusion time and the number of denoising steps on the fly, enhancing both performance and efficiency. with stable diffusion 3 medium architecture, tpdm achieves an aesthetic score of 5. 44 and a human preference score ( hps ) of 29. 59, while using around 50 % fewer denoising steps to achieve better performance.
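a minimal sketch of such an adaptive denoising loop is given below. the callables, parameter names, and toy schedule are all hypothetical stand-ins; the actual tpdm predicts noise levels from latent features with a learned, rl-trained module, none of which is reproduced here:

```python
# Hypothetical adaptive denoising loop: a time-prediction callable chooses the
# next noise level from the current latent, so the step count varies per instance.
def adaptive_denoise(x, predict_next_t, step, t=1.0, t_min=1e-3, max_steps=50):
    n_steps = 0
    while t > t_min and n_steps < max_steps:
        t_next = predict_next_t(x, t)              # predict next noise level
        t_next = max(min(t_next, t * 0.999), 0.0)  # enforce a monotone decrease
        x = step(x, t, t_next)                     # one reverse diffusion step
        t = t_next
        n_steps += 1
    return x, n_steps

# Toy stand-ins: a geometric (halving) schedule and a no-op reverse step
x_final, n = adaptive_denoise(0.0,
                              predict_next_t=lambda x, t: 0.5 * t,
                              step=lambda x, t, t_next: x)
print(n)  # halving from t=1.0 down past t_min=1e-3 takes 10 steps
```

the point of the structure is that `predict_next_t` may take large steps for easy prompts and small ones for hard prompts, which is how the model trades denoising steps for quality per instance.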
|
arxiv:2412.01243
|
we have analyzed archival hst / acs images in sloan g and z of the globular cluster ( gc ) systems of 53 ellipticals in the virgo cluster, spanning massive galaxies to des. among the new results are : ( 1 ) in the ges m87 and ngc 4649, there is a correlation between luminosity and color for individual metal - poor gcs, such that more massive gcs are more metal - rich. a plausible interpretation of this result is self - enrichment, and may suggest that these gcs once possessed dark matter halos. ( 2 ) many des have metal - rich gc subpopulations. we also confirm the gc color - - galaxy luminosity relations found previously for both metal - poor and metal - rich gc subpopulations. ( 3 ) there are large differences in gc specific frequency among des. over - 15 < m _ b < - 18, there is little correlation between specific frequency ( s _ n ) and m _ b. but we do find evidence for two separate s _ n classes of des : those with b - band s _ n ~ 2, and others with populous gc systems that have s _ n ranging from ~ 5 - 20 with median s _ n ~ 10. together, these points suggest multiple formation channels for des in virgo. ( 4 ) the peak of the gc luminosity function ( gclf ) is the same for both ges and des. this is contrary to expectations of dynamical friction on massive gcs, unless the primordial gclf varies between ges and des. among ges the gclf turnover varies by a surprisingly small 0. 05 mag, an encouraging result for its use as an accurate standard candle. ( 5 ) de, ns appear bimodal in their nuclear properties : there are small bright red nuclei consistent with formation by dynamical friction of gcs, and larger faint blue nuclei which appear to have formed by a dissipative process with little contribution from gcs. the role of dynamical evolution in shaping the present - day properties of de gc systems and their nuclei remains ambiguous. ( abridged )
|
arxiv:astro-ph/0508001
|
in these notes, we develop a path integral approach for the partial differential equations with random initial conditions. then, we apply it to the dynamics of the spiked tensor model and show that the large - $ n $ saddle point equations are dominated by the melonic type diagrams.
|
arxiv:2208.12586
|
mobile edge caching is a promising technology for the next - generation mobile networks to effectively offer service environment and cloud - storage capabilities at the edge of networks. by exploiting the storage and computing resources at the network edge, mobile edge caching can significantly reduce service latency, decrease network load, and improve user experience. on the other hand, edge caching is subject to a number of threats regarding privacy violation and security breach. in this article, we first introduce the architecture of mobile edge caching, and address the key problems regarding why, where, what, and how to cache. then, we examine the potential cyber threats, including cache poisoning attacks, cache pollution attacks, cache side - channel attacks, and cache deception attacks, which result in huge concerns about privacy, security, and trust in content placement, content delivery, and content usage for mobile users, respectively. after that, we propose a service - oriented and location - based efficient key distribution protocol ( solek ) as an example in response to efficient and secure content delivery in mobile edge caching. finally, we discuss the potential techniques for privacy - preserving content placement, efficient and secure content delivery, and trustful content usage, which are expected to draw more attention and efforts into secure edge caching.
|
arxiv:2012.03165
|
the technological singularity, or simply the singularity, is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. according to the most popular version of the singularity hypothesis, i. j. good ' s intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of successive self - improvement cycles ; more intelligent generations would appear more and more rapidly, causing a rapid increase ( " explosion " ) in intelligence which would culminate in a powerful superintelligence, far surpassing all human intelligence. the hungarian - american mathematician john von neumann ( 1903 - 1957 ) became the first known person to use the concept of a " singularity " in the technological context. alan turing, often regarded as the father of modern computer science, laid a crucial foundation for contemporary discourse on the technological singularity. his pivotal 1950 paper, " computing machinery and intelligence ", introduced the idea of a machine ' s ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human. stanislaw ulam reported in 1958 an earlier discussion with von neumann " centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue ". subsequent authors have echoed this viewpoint.
the concept and the term " singularity " were popularized by vernor vinge : first in 1983, in an article that claimed that, once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to " the knotted space - time at the center of a black hole " ; and later in his 1993 essay " the coming technological singularity ", in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate, and he would be surprised if it occurred before 2005 or after 2030. another significant contribution to wider circulation of the notion was ray kurzweil ' s 2005 book the singularity is near, predicting singularity by 2045. some scientists, including stephen hawking, have expressed concerns that artificial superintelligence ( asi ) could result in human extinction. the consequences of a technological singularity and its potential benefit or harm to the human race have been intensely debated
|
https://en.wikipedia.org/wiki/Technological_singularity
|
we present a brief overview of the transport of quantum light across a one - dimensional waveguide which is integrated with a periodic string of quantum - scale dipoles. we demonstrate a scheme to implement transparency by suitably tuning the atomic frequencies without applying a coupling field and bring out the pronounced non - reciprocity of this optical device. the fiber - mediated interaction between integrated dipoles allows one to achieve both dispersive and dissipative couplings, level repulsion and attraction, and enhanced sensing capabilities. all these ideas can be translated to a wide variety of experimental setups of topical interest such as resonators on a transmission line, cold atoms near a fiber and quantum dots coupled to plasmonic excitations in a nanowire or photonic crystal waveguides.
|
arxiv:2111.03200
|
a simple model of quantum ratchet transport that can generate unbounded linear acceleration of the quantum ratchet current is proposed, with the underlying classical dynamics fully chaotic. the results demonstrate that generic acceleration of quantum ratchet transport can occur with any type of classical phase space structure. the quantum ratchet transport with full classical chaos is also shown to be very robust to noise due to the large linear acceleration afforded by the quantum dynamics. one possible experiment allowing observation of these predictions is suggested.
|
arxiv:quant-ph/0609036
|
coset constructions in the framework of chern - simons topological gauge theories are studied. two examples are considered : models of the types $ { u ( 1 ) _ p \ times u ( 1 ) _ q \ over u ( 1 ) _ { p + q } } \ cong u ( 1 ) _ { pq ( p + q ) } $ with $ p $ and $ q $ coprime integers, and $ { su ( 2 ) _ m \ times su ( 2 ) _ 1 \ over su ( 2 ) _ { m + 1 } } $. in the latter case it is shown that the chern - simons wave functionals can be identified with the characters of the minimal unitary models, and an explicit representation of the knot ( verlinde ) operators acting on the space of $ c < 1 $ characters is obtained.
|
arxiv:hep-th/9201027
|
let $ \ mathcal { a } _ 1 $ and $ \ mathcal { a } _ 2 $ be standard operator algebras on complex banach spaces $ x _ 1 $ and $ x _ 2 $, respectively. for $ k \ geq2 $, let $ ( i _ 1,..., i _ m ) $ be a sequence with terms chosen from $ \ { 1, \ ldots, k \ } $, and assume that at least one of the terms in $ ( i _ 1, \ ldots, i _ m ) $ appears exactly once. define the generalized product $ t _ 1 * t _ 2 * \ cdots * t _ k = t _ { i _ 1 } t _ { i _ 2 } \ cdots t _ { i _ m } $ on elements in $ \ mathcal { a } _ i $. let $ \ phi : \ mathcal { a } _ 1 \ rightarrow \ mathcal { a } _ 2 $ be a map with the range containing all operators of rank at most two. we show that $ \ phi $ satisfies that $ \ sigma _ \ pi ( \ phi ( a _ 1 ) * \ cdots * \ phi ( a _ k ) ) = \ sigma _ \ pi ( a _ 1 * \ cdots * a _ k ) $ for all $ a _ 1, \ ldots, a _ k $, where $ \ sigma _ \ pi ( a ) $ stands for the peripheral spectrum of $ a $, if and only if $ \ phi $ is an isomorphism or an anti - isomorphism multiplied by an $ m $ th root of unity, and the latter case occurs only if the generalized product is quasi - semi jordan. if $ x _ 1 = h $ and $ x _ 2 = k $ are complex hilbert spaces, we characterize also maps preserving the peripheral spectrum of the skew generalized products, and prove that such maps are of the form $ a \ mapsto cuau ^ * $ or $ a \ mapsto cua ^ tu ^ * $, where $ u \ in \ mathcal { b } ( h, k ) $ is a unitary operator, $ c \ in \ { 1, - 1 \ } $.
|
arxiv:1305.7100
|
cp - violation in the higgs sector remains a possible source of the baryon asymmetry of the universe. recent differential measurements of signed angular distributions in higgs boson production provide a general experimental probe of the cp structure of higgs boson interactions. we interpret these measurements using the standard model effective field theory and show that they do not distinguish the various cp - violating operators that couple the higgs and gauge fields. however, the constraints can be sharpened by measuring additional cp - sensitive observables and exploiting phase - space - dependent effects. using these observables, we demonstrate that perturbatively meaningful constraints on cp - violating operators can be obtained at the lhc with luminosities of $ { \ cal { o } } $ ( 100 / fb ). our results provide a roadmap to a global higgs boson coupling analysis that includes cp - violating effects.
|
arxiv:1808.06577
|
many novel methods have been proposed to mitigate stellar activity for exoplanet detection as the presence of stellar activity in radial velocity ( rv ) measurements is the current major limitation. unlike traditional methods that model stellar activity in the rv domain, more methods are moving in the direction of disentangling stellar activity at the spectral level. the goal of this paper is to present a novel convolutional neural network - based algorithm that efficiently models stellar activity signals at the spectral level, enhancing the detection of earth - like planets. we trained a convolutional neural network to build the correlation between the change in the spectral line profile and the corresponding rv, full width at half maximum ( fwhm ) and bisector span ( bis ) values derived from the classical cross - correlation function. this algorithm has been tested on three intensively observed stars : alpha centauri b ( hd128621 ), tau ceti ( hd10700 ), and the sun. by injecting simulated planetary signals at the spectral level, we demonstrate that our machine learning algorithm can achieve, for hd128621 and hd10700, a detection threshold of 0. 5 m / s in semi - amplitude for planets with periods ranging from 10 to 300 days. this threshold would correspond to the detection of a $ \ sim $ 4 $ \ mathrm { m } _ { \ oplus } $ in the habitable zone of those stars. on the harps - n solar dataset, our algorithm is even more efficient at mitigating stellar activity signals and can reach a threshold of 0. 2 m / s, which would correspond to a 2. 2 $ \ mathrm { m } _ { \ oplus } $ planet on the orbit of the earth. to the best of our knowledge, it is the first time that such low detection thresholds are reported for the sun, but also for other stars, and therefore this highlights the efficiency of our convolutional neural network - based algorithm at mitigating stellar activity in rv measurements.
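the quoted detection thresholds can be sanity-checked against the standard radial-velocity semi-amplitude formula. the formula and constants below are textbook values, not taken from the paper; the comparison simply confirms that a 2.2 earth-mass planet on an earth-like orbit of a sun-like star induces roughly the 0.2 m/s signal mentioned above:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
YEAR = 3.156e7       # s

def rv_semi_amplitude(m_planet, m_star, period, e=0.0, sin_i=1.0):
    """Standard RV semi-amplitude K in m/s for a Keplerian orbit."""
    return ((2 * math.pi * G / period) ** (1 / 3)
            * m_planet * sin_i
            / (m_star + m_planet) ** (2 / 3)
            / math.sqrt(1 - e ** 2))

k_earth = rv_semi_amplitude(M_EARTH, M_SUN, YEAR)
print(round(k_earth, 3))        # ~0.089 m/s for the Earth-Sun system
print(round(2.2 * k_earth, 2))  # ~0.2 m/s, matching the quoted solar threshold
```

since K scales linearly with planet mass at fixed period, a 0.2 m/s threshold on a solar twin corresponds to about 2.2 earth masses at a one-year orbit, consistent with the abstract.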
|
arxiv:2405.13247
|
the problem of decentralized sequential change detection is considered, where an abrupt change occurs in an area monitored by a number of sensors ; the sensors transmit their data to a fusion center, subject to bandwidth and energy constraints, and the fusion center is responsible for detecting the change as soon as possible. a novel sequential detection rule is proposed that requires communication from the sensors at random times and transmission of only low - bit messages, on which the fusion center runs in parallel a cusum test. the second - order asymptotic optimality of the proposed scheme is established both in discrete and in continuous time. specifically, it is shown that the inflicted performance loss ( with respect to the optimal detection rule that uses the complete sensor observations ) is asymptotically bounded as the rate of false alarms goes to 0, for any fixed rate of communication. when the rate of communication from the sensors is asymptotically low, the proposed scheme remains first - order asymptotically optimal. finally, simulation experiments illustrate its efficiency and its superiority over a decentralized detection rule that relies on communication at deterministic times.
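the cusum recursion that the fusion center runs can be sketched in a few lines. this is the classical single-stream test for a known gaussian mean shift, shown for orientation only; the paper's contribution is the decentralized variant driven by low-bit sensor messages at random communication times, which is not reproduced here:

```python
# Minimal single-stream CUSUM sketch for a known mean shift mu0 -> mu1 in
# Gaussian data with standard deviation sigma. Parameters are illustrative.
def cusum_detect(xs, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0):
    """Return the first index at which the CUSUM statistic crosses the threshold."""
    s = 0.0
    for n, x in enumerate(xs):
        # log-likelihood ratio of one observation under mu1 versus mu0
        llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma ** 2
        s = max(0.0, s + llr)  # CUSUM recursion: clip at zero
        if s >= threshold:
            return n           # alarm
    return None                # no alarm

xs = [0.0] * 100 + [1.0] * 100  # deterministic toy stream: mean jumps at index 100
print(cusum_detect(xs))  # 115: each post-change sample adds 0.5, crossing 8.0 after 16 samples
```

on noisy data the statistic hovers near zero before the change (each pre-change sample contributes a negative-mean increment) and drifts upward afterwards, which is why the detection delay is controlled by the threshold.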
|
arxiv:1210.2029
|
recent advancements in language models ( lms ) have demonstrated strong capabilities in semantic understanding and contextual modeling, which have flourished in generative speech enhancement ( se ). however, many lm - based se approaches primarily focus on semantic information, often neglecting the critical role of acoustic information, which leads to acoustic inconsistency after enhancement and limited generalization across diverse se tasks. in this paper, we introduce llase - g1, a llama - based language model that incentivizes generalization capabilities for speech enhancement. llase - g1 offers the following key contributions : first, to mitigate acoustic inconsistency, llase - g1 employs continuous representations from wavlm as input and predicts speech tokens from x - codec2, maximizing acoustic preservation. second, to promote generalization capability, llase - g1 introduces dual - channel inputs and outputs, unifying multiple se tasks without requiring task - specific ids. third, llase - g1 outperforms prior task - specific discriminative and generative se models, demonstrating scaling effects at test time and emerging capabilities for unseen se tasks. additionally, we release our code and models to support further research in this area.
|
arxiv:2503.00493
|
we present a deep neural network ( dnn ) that uses both sensor data ( gyroscope ) and image content ( optical flow ) to stabilize videos through unsupervised learning. the network fuses optical flow with real / virtual camera pose histories into a joint motion representation. next, the lstm block infers the new virtual camera pose, and this virtual pose is used to generate a warping grid that stabilizes the frame. a novel relative motion representation as well as a multi - stage training process are presented to optimize our model without any supervision. to the best of our knowledge, this is the first dnn solution that adopts both sensor data and image for stabilization. we validate the proposed framework through ablation studies and demonstrate that the proposed method outperforms the state - of - the - art alternative solutions via quantitative evaluations and a user study.
|
arxiv:2102.01279
|
multimodal large language models ( mllms ) have demonstrated promising results in a variety of tasks that combine vision and language. as these models become more integral to research and applications, conducting comprehensive evaluations of their capabilities has grown increasingly important. however, most existing benchmarks fail to consider that, in certain situations, images need to be interpreted within a broader context. in this work, we introduce a new benchmark, named as codis, designed to assess the ability of models to use context provided in free - form text to enhance visual comprehension. our findings indicate that mllms consistently fall short of human performance on this benchmark. further analysis confirms that these models struggle to effectively extract and utilize contextual information to improve their understanding of images. this underscores the pressing need to enhance the ability of mllms to comprehend visuals in a context - dependent manner. view our project website at https : / / thunlp - mt. github. io / codis.
|
arxiv:2402.13607
|
this article is the third of four that completely characterize a solution space $ \ mathcal { s } _ n $ for a homogeneous system of $ 2n + 3 $ linear partial differential equations ( pdes ) in $ 2n $ variables that arises in conformal field theory ( cft ) and multiple schramm - lowner evolution ( sle ). the system comprises $ 2n $ null - state equations and three conformal ward identities that govern cft correlation functions of $ 2n $ one - leg boundary operators. in the previous two articles ( parts i and ii ), we use methods of analysis and linear algebra to prove that $ \ dim \ mathcal { s } _ n \ leq c _ n $, with $ c _ n $ the $ n $ th catalan number. extending these results, we prove in this article that $ \ dim \ mathcal { s } _ n = c _ n $ and $ \ mathcal { s } _ n $ entirely consists of ( real - valued ) solutions constructed with the cft coulomb gas ( contour integral ) formalism. in order to prove this claim, we show that a certain set of $ c _ n $ such solutions is linearly independent. because the formulas for these solutions are complicated, we prove linear independence indirectly. we use the linear injective map of lemma 15 in part i to send each solution of the mentioned set to a vector in $ \ mathbb { r } ^ { c _ n } $, whose components we find as inner products of elements in a temperley - lieb algebra. we gather these vectors together as columns of a symmetric $ c _ n $ by $ c _ n $ matrix, with the form of a meander matrix. if the determinant of this matrix does not vanish, then the set of $ c _ n $ coulomb gas solutions is linearly independent. and if this determinant does vanish, then we construct an alternative set of $ c _ n $ coulomb gas solutions and follow a similar procedure to show that this set is linearly independent. the latter situation is closely related to cft minimal models.
|
arxiv:1303.7182
|
the star experiment at the relativistic heavy ion collider rhic studies the new state of matter produced in relativistic heavy ion collisions and the spin structure of the nucleon in collisions of polarized protons. in order to improve the capabilities for heavy flavor measurements and the reconstruction of charged vector bosons an upgrade of the tracking system both in the central and the forward region is pursued. the integrated system providing high resolution tracking and secondary vertex reconstruction capabilities will use silicon pixel, strip and gem technology.
|
arxiv:physics/0608199
|
we present results from the analysis of 88 carbon stars selected from hamburg / eso ( hes ) survey using low - resolution spectra ( r $ \ sim $ 1330 \ & 2190 ). the spectra were obtained with the himalayan faint object spectrograph camera ( hfosc ) attached to the 2 - m himalayan chandra telescope ( hct ). using a well - defined spectral criteria based on the strength of carbon molecular bands, the stars are classified into different groups. in our sample, we have identified 53 ch stars, four c - r stars, and two c - n type stars. twenty - nine stars could not be classified due to the absence of prominent c $ _ { 2 } $ molecular bands in their spectra. we could derive the atmospheric parameters for 36 stars. the surface temperature is determined using photometric calibrations and synthesis of the h - alpha line profile. the surface gravity log g estimates are obtained using parallax estimates from the gaia dr3 database whenever possible. microturbulent velocity ( $ \ zeta $ ) is derived using calibration equation of log g \ & $ { \ zeta } $. we could determine metallicity for 48 objects from near - infrared ca ii triplet features using calibration equations. the derived metallicity ranges from $ - $ 0. 43 $ \ leq $ [ fe / h ] $ \ leq $ $ - $ 3. 49. nineteen objects are found to be metal - poor ( [ fe / h ] $ \ leq $ $ - $ 1 ), 14 very metal - poor ( [ fe / h ] $ \ leq $ $ - $ 2 ), and five extremely metal - poor ( [ fe / h ] $ \ leq $ $ - $ 3. 0 ) stars. eleven objects are found to have a metallicity in the range $ - $ 0. 43 $ \ leq $ [ fe / h ] $ \ leq $ $ - $ 0. 97. we could derive the carbon abundance for 25 objects using the spectrum synthesis calculation of the c $ _ { 2 } $ band around 5165 \ aa. the most metal - poor objects found will make important targets for follow - up detailed chemical composition studies based on high - resolution spectroscopy, that are likely to provide insight into the galactic chemical evolution.
|
arxiv:2401.04955
|
we introduce the space of mixed - volume forms endowed with an $ l ^ 2 $ metric on a balanced manifold. a geodesic equation can be derived in this space that has an interesting structure and extends the equation of donaldson \ cite { donaldson10 } and chen - he \ cite { ch11 } in the space of volume forms on a riemannian manifold. this nonlinear pde is studied in detail and the existence of a weak solution is shown for the dirichlet problem, under a positivity assumption. later we study the calabi - yau equation for balanced metrics and introduce a geometric criterion for prescribing volume forms that is closely related to the positivity assumption above. we call this assumption the sub - astheno - k \ " ahler condition. by deriving $ c ^ 0 $ a priori estimates, we show that the existence of solutions can be established on all sub - astheno - k \ " ahler manifolds.
|
arxiv:2406.00995
|
in this paper we consider the matrix structure of arithmetic processors based on distributed arithmetic in multi-row codes. the intended scope of application is the development of supercomputers.
|
arxiv:1602.08391
|
in this paper we study general nonlinear stochastic differential equations, where the usual brownian motion is replaced by a l\'evy process. we also suppose that the coefficient multiplying the increments of this process is merely lipschitz continuous and not necessarily linear in the time-marginals of the solution, as is the case in the classical mckean-vlasov model. we first study existence, uniqueness and particle approximations for these stochastic differential equations. when the driving process is a pure jump l\'evy process with a smooth but unbounded l\'evy measure, we develop a stochastic calculus of variations to prove that the time-marginals of the solutions are absolutely continuous with respect to the lebesgue measure. in the case of a symmetric stable driving process, we deduce the existence of a function solution to a nonlinear integro-differential equation involving the fractional laplacian.
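as a hedged illustration of the particle approximation discussed above, the sketch below integrates a toy mean-field sde whose drift depends on the empirical mean of the particle system and whose driver is a compound poisson process (a simple pure-jump l\'evy process). every coefficient, rate and jump law here is an illustrative assumption, not the paper's model.

```python
import math, random

def simulate_particles(n=200, steps=100, dt=0.01, rate=2.0, seed=1):
    """euler scheme for dX = b(X, mean) dt + dL, where L is a compound
    poisson process and the lipschitz drift b depends on the empirical
    mean of the particle system (a toy mckean-vlasov interaction)."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        m = sum(x) / n                       # empirical measure enters via its mean
        new = []
        for xi in x:
            drift = -(xi - m)                # mean-reverting, lipschitz in (x, m)
            # sample the jump count on [t, t+dt] from poisson(rate*dt)
            k, p = 0, math.exp(-rate * dt)
            u, cum = rng.random(), math.exp(-rate * dt)
            while u > cum:
                k += 1
                p *= rate * dt / k
                cum += p
            jump = sum(rng.gauss(0.0, 0.5) for _ in range(k))  # jump sizes
            new.append(xi + drift * dt + jump)
        x = new
    return x
```

with mean-zero jumps the empirical mean stays near its initial value while individual particles contract toward it, which is the qualitative behaviour the drift is meant to show.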
|
arxiv:0707.2723
|
high performance quantum information processing requires efficient control of undesired decohering effects, which are present in realistic quantum dynamics. to deal with this issue, a powerful strategy is to employ transitionless quantum driving (tqd), where additional fields are added to speed up the evolution of the quantum system, achieving a desired state in a short time in comparison with the natural decoherence time scales. in this paper, we provide an experimental investigation of the performance of a generalized approach for tqd to implement shortcuts to adiabaticity in nuclear magnetic resonance (nmr). as a first discussion, we consider a single nuclear spin-$\frac{1}{2}$ system in a time-dependent rotating magnetic field. while the adiabatic dynamics is violated at a resonance situation, the tqd hamiltonian is shown to be robust against resonance, allowing us to mimic the adiabatic behavior in a fast evolution even under the resonant configurations of the original (adiabatic) hamiltonian. moreover, we show that the generalized tqd theory requires fewer energy resources, with the strength of the magnetic field less than that required by standard tqd. as a second discussion, we analyze the experimental implementation of shortcuts to single-qubit adiabatic gates. by adopting generalized tqd, we can provide feasible time-independent driving hamiltonians, which are optimized in terms of the number of pulses used to implement the quantum dynamics. the robustness of adiabatic and generalized tqd evolutions against typical decoherence processes in nmr is also analyzed.
|
arxiv:1906.08065
|
we present a proof of qualitative stochastic homogenization for a nonconvex hamilton-jacobi equation. the new idea is to introduce a family of "sub-equations" and to control solutions of the original equation by the maximal subsolutions of the latter, which have deterministic limits by the subadditive ergodic theorem and maximality.
|
arxiv:1311.2029
|
we solve two long-standing open problems on word equations. firstly, we prove that a one-variable word equation with constants has either at most three or an infinite number of solutions. the existence of such a bound had been conjectured, and the bound three is optimal. secondly, we consider independent systems of three-variable word equations without constants. if such a system has a nonperiodic solution, then the system consists of at most 17 equations. although probably not optimal, this is the first finite bound found. however, the conjecture that the bound is actually two remains open.
|
arxiv:1805.09535
|
we address the problem of learning in an online, bandit setting where the learner must repeatedly select among $k$ actions, but only receives partial feedback based on its choices. we establish two new facts: first, using a new algorithm called exp4.p, we show that it is possible to compete with the best in a set of $n$ experts with probability $1-\delta$ while incurring regret at most $o(\sqrt{kt \ln(n/\delta)})$ over $t$ time steps. the new algorithm is tested empirically in a large-scale, real-world dataset. second, we give a new algorithm called ve that competes with a possibly infinite set of policies of vc-dimension $d$ while incurring regret at most $o(\sqrt{t(d \ln(t) + \ln(1/\delta))})$ with probability $1-\delta$. these guarantees improve on those of all previous algorithms, whether in a stochastic or adversarial environment, and bring us closer to providing supervised learning type guarantees for the contextual bandit setting.
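the exponential-weights idea behind exp4.p can be illustrated with a minimal sketch. the code below is plain exp3 over $k$ arms, not the paper's exp4.p (which additionally handles expert advice and high-probability bounds); all parameter values and the toy reward function are illustrative assumptions.

```python
import math, random

def exp3(reward_fn, k, horizon, gamma=0.1, seed=0):
    """exp3: exponential weights with uniform exploration for
    adversarial k-armed bandits; returns the total reward collected."""
    rng = random.Random(seed)
    w = [1.0] * k
    total = 0.0
    for t in range(horizon):
        s = sum(w)
        probs = [(1 - gamma) * wi / s + gamma / k for wi in w]
        u, acc, arm = rng.random(), 0.0, k - 1
        for i, p in enumerate(probs):        # sample an arm from probs
            acc += p
            if u <= acc:
                arm = i
                break
        r = reward_fn(arm, t)                # rewards assumed in [0, 1]
        total += r
        # importance-weighted estimate: only the pulled arm is updated
        w[arm] *= math.exp(gamma * (r / probs[arm]) / k)
        m = max(w)                           # renormalize to avoid overflow
        w = [wi / m for wi in w]
    return total
```

on a toy instance with one clearly best arm, the weights concentrate on it and the per-round reward approaches that of the best arm, up to the forced exploration.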
|
arxiv:1002.4058
|
we tackle the few-shot open-set recognition (fsosr) problem, i.e. classifying instances among a set of classes for which we only have a few labeled samples, while simultaneously detecting instances that do not belong to any known class. we explore the popular transductive setting, which leverages the unlabelled query instances at inference. motivated by the observation that existing transductive methods perform poorly in open-set scenarios, we propose a generalization of the maximum likelihood principle, in which latent scores down-weighing the influence of potential outliers are introduced alongside the usual parametric model. our formulation embeds supervision constraints from the support set and additional penalties discouraging overconfident predictions on the query set. we proceed with a block-coordinate descent, with the latent scores and parametric model co-optimized alternately, thereby benefiting from each other. we call our resulting formulation \textit{open-set likelihood optimization} (oslo). oslo is interpretable and fully modular; it can be applied on top of any pre-trained model seamlessly. through extensive experiments, we show that our method surpasses existing inductive and transductive methods on both aspects of open-set recognition, namely inlier classification and outlier detection.
|
arxiv:2301.08390
|
dit university (erstwhile dehradun institute of technology) is a private university in dehradun, uttarakhand, india. dit university has been accredited by the national assessment and accreditation council with grade a. = = campus = = dit university's campus is located in dehradun, in the foothills of mussoorie. dehradun is 240 kilometres northeast of delhi. the area of the campus is 25 acres, of which 23 acres are developed; the prominent buildings are vedanta, chanakya and the civil block. a two-acre ground is available for students, and parking and other facilities are also available at dit. the campus has classrooms equipped with ict facilities, including projectors, screens, and other technological tools. = = academics = = = = = academic programmes = = = dit university has programs in engineering, architecture, pharmacy, management studies, and computing. = = rankings = = the national institutional ranking framework (nirf) ranked the university between 201-300 in the engineering rankings in 2024. = = student life = = = = = events = = = = = = = youthopia = = = = youthopia is the annual cultural and technical inter-college festival of ditu. the prominent events include battle of bands, robowars, codehunt and perceptrix. = = = = sphurti = = = = sphurti is the annual sports competition at the ditu campus. ditu invites colleges throughout india to participate in events including cricket, basketball, football, volleyball, track and field, badminton, and table tennis. since the first sphurti, more than 69 colleges have come to participate. = = = = vision 2k35 = = = = aiming to promote dr. a. p. j. abdul kalam's vision of an era when the youth of india would enrich the world with their social, technical and academic brilliance, vision 2k35 is a dit university initiative to reach out to young india.
vision 2k35 is a national-level youth summit wherein students explore and evaluate the potential of renewable and energy-conserving infrastructure for the nation by implementing the most innovative technical ideas, energy auditing & audit presentation. the theme of the summit is what role the youth can play in bringing india into the league of superpowers by 2035. = = references = = = = external links = = official website
|
https://en.wikipedia.org/wiki/DIT_University
|
whether the goal is to estimate the number of people that live in a congressional district, to estimate the number of individuals that have died in an armed conflict, or to disambiguate individual authors using bibliographic data, all these applications have a common theme: integrating information from multiple sources. before such questions can be answered, databases must be cleaned and integrated in a systematic and accurate way, a process commonly known as record linkage, de-duplication, or entity resolution. in this article, we review motivational applications and seminal papers that have led to the growth of this area. specifically, we review the foundational work, beginning in the 1940s and 50s, that has led to modern probabilistic record linkage. we review clustering approaches to entity resolution, semi- and fully supervised methods, and canonicalization, which are being used throughout industry and academia in applications such as human rights, official statistics, medicine, and citation networks, among others. finally, we discuss current research topics of practical importance.
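the probabilistic record linkage tradition reviewed above scores candidate pairs by log-likelihood ratios of field agreement (the fellegi-sunter idea). below is a minimal sketch; the m/u probabilities and the records are made-up illustrative values, not estimates from any real data.

```python
import math

def match_weight(record_a, record_b, m, u):
    """fellegi-sunter style score: sum over fields of log(m/u) when the
    field agrees and log((1-m)/(1-u)) when it disagrees, where
    m = P(agree | true match) and u = P(agree | non-match) per field."""
    score = 0.0
    for field in m:
        if record_a.get(field) == record_b.get(field):
            score += math.log(m[field] / u[field])
        else:
            score += math.log((1 - m[field]) / (1 - u[field]))
    return score

# illustrative per-field parameters (assumptions, not fitted values)
m = {"name": 0.95, "dob": 0.9, "zip": 0.85}
u = {"name": 0.01, "dob": 0.05, "zip": 0.10}
a = {"name": "ana silva", "dob": "1980-02-01", "zip": "10001"}
b = {"name": "ana silva", "dob": "1980-02-01", "zip": "10001"}
c = {"name": "ana silva", "dob": "1979-11-30", "zip": "94110"}
```

pairs scoring above a chosen threshold are declared links; in practice the m/u parameters are estimated (e.g. by em) rather than fixed by hand.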
|
arxiv:2008.04443
|
we consider mesons composed of light and heavy quarks and discuss the construction of the corresponding meson wave functions in soft-wall ads/qcd. we specifically take care that constraints imposed by chiral symmetry breaking and by the heavy quark limit are fulfilled. the main results are: i) the wave functions of light mesons have a nontrivial dependence on the current quark mass, which gives rise to a mass spectrum consistent with the one including explicit breaking of chiral symmetry; ii) the wave functions of heavy-light mesons generate their correct mass spectrum, the mass splittings of vector and pseudoscalar states, and the correct scaling of leptonic decay constants $f(q\bar q) \sim 1/\sqrt{m_q}$; iii) the wave functions of heavy quarkonia produce their correct mass spectrum and lead to a scaling behaviour of the leptonic decay constants $f(q\bar q) \sim \sqrt{m_q}$ and $f(c\bar b) \sim m_c/\sqrt{m_b}$ at $m_c \ll m_b$, consistent with potential models and qcd sum rules.
|
arxiv:1212.5196
|
= rehabilitation engineering and assistive technology society of north america rehabilitation act of 1973 disability discrimination act 1995 medical engineering prosthetics mind controlled wheelchair = = references = =
|
https://en.wikipedia.org/wiki/Rehabilitation_engineering
|
interferometric synthetic aperture radar (insar) imaging methods are usually based on algorithms of match-filtering type that do not consider the scene's characteristics, which limits imaging quality. besides, post-processing steps are inevitable, such as image registration, flat-earth phase removal and phase noise filtering. to solve these problems, we propose a new insar imaging method. first, to enhance the imaging quality, we propose a new imaging framework based on 2d sparse regularization, where the characteristics of the scene are embedded. second, to avoid the post-processing steps, we establish a new forward observation process, where the back-projection imaging method is embedded. third, a forward-and-backward iterative solution method is proposed based on the proximal gradient descent algorithm. experiments on simulated and measured data reveal the effectiveness of the proposed method. compared with the conventional method, a higher-quality interferogram can be obtained directly from raw echoes without post-processing. the method is also applicable in the under-sampling situation.
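the sparse-regularization step can be illustrated with the classical proximal gradient (ista) iteration on a 1d toy problem; the paper's actual method embeds a 2d back-projection forward operator, which is not reproduced here, and the operator, penalty weight and step size below are illustrative assumptions.

```python
def soft(v, t):
    """soft-thresholding: the proximal operator of t*|.|_1."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista(A, y, lam=0.1, step=0.1, iters=500):
    """proximal gradient (ista) for min 0.5*||Ax - y||^2 + lam*||x||_1,
    a 1d stand-in for the 2d sparse imaging formulation."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - y, gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by the l1 proximal map
        x = [soft(xj - step * gj, step * lam) for xj, gj in zip(x, g)]
    return x
```

for the identity operator the minimizer is the componentwise soft-threshold of y, which makes the iteration easy to check by hand.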
|
arxiv:2209.10417
|
this paper evaluates algorithms for classification and outlier detection accuracies in temporal data. we focus on algorithms that train and classify rapidly and can be used for systems that need to incorporate new data regularly. hence, we compare the accuracy of six fast algorithms using a range of well-known time-series datasets. the analyses demonstrate that the choice of algorithm is task- and data-specific, but that we can derive heuristics for choosing. gradient boosting machines are generally best for classification; there is no single winner for outlier detection, though gradient boosting machines (again) and random forests perform better. hence, we recommend running evaluations of a number of algorithms using our heuristics.
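a minimal version of such an evaluation harness, here with a single fast classifier (1-nearest-neighbour) on made-up synthetic series; the dataset generator and the classifier are illustrative assumptions, not the paper's six algorithms or its benchmark datasets.

```python
import math, random

def one_nn_classify(train, query):
    """1-nearest-neighbour with euclidean distance on the raw series."""
    best = min(train, key=lambda item: math.dist(item[0], query))
    return best[1]

def evaluate(classifier, train, test):
    """accuracy of `classifier(train, x)` over labelled test pairs."""
    correct = sum(1 for x, y in test if classifier(train, x) == y)
    return correct / len(test)

rng = random.Random(0)

def make_series(label):
    # class 0: flat noise; class 1: upward ramp plus noise (toy data)
    return ([rng.gauss(0.1 * i * label, 0.2) for i in range(20)], label)

train = [make_series(i % 2) for i in range(40)]
test = [make_series(i % 2) for i in range(40)]
acc = evaluate(one_nn_classify, train, test)
```

swapping in other fast classifiers and real datasets turns this into the kind of comparison the paper reports.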
|
arxiv:1805.00811
|
hypo - elastoplasticity is a framework suitable for modeling the mechanics of many hard materials that have small elastic deformation and large plastic deformation. in most laboratory tests for these materials the cauchy stress is in quasi - static equilibrium. rycroft et al. discovered a mathematical correspondence between this physical system and the incompressible navier - stokes equations, and developed a projection method similar to chorin ' s projection method ( 1968 ) for incompressible newtonian fluids. here, we improve the original projection method to simulate quasi - static hypo - elastoplasticity, by making three improvements. first, drawing inspiration from the second - order projection method for incompressible newtonian fluids, we formulate a second - order in time numerical scheme for quasi - static hypo - elastoplasticity. second, we implement a finite element method for solving the elliptic equations in the projection step, which provides both numerical benefits and flexibility. third, we develop an adaptive global time - stepping scheme, which can compute accurate solutions in fewer timesteps. our numerical tests use an example physical model of a bulk metallic glass based on the shear transformation zone theory, but the numerical methods can be applied to any elastoplastic material.
|
arxiv:2404.10863
|
video surveillance is an essential component of the public security system. a security video surveillance system is a powerful means of preventing violence and crime, and it is closely coupled with the construction of smart cities. a post-project evaluation is an evaluation of a project's actions and outcomes after its completion. post-project evaluation can scientifically and objectively evaluate the construction effectiveness of a video surveillance system at a certain stage. it can identify the causes of success or failure and thereby inform recommendations for the construction of a security video surveillance system in the next stage. therefore, we propose a fuzzy post-project evaluation approach for the security video surveillance system of a real-world community. the fuzzy theory and fuzzy multi-level evaluation method are applied. the evaluation result demonstrates that the proposed approach is practically applicable to real-world security video surveillance systems.
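the fuzzy comprehensive evaluation step can be sketched as a weighted aggregation of membership degrees over quality grades; the criteria, weights and membership values below are illustrative assumptions, not the paper's indicator system.

```python
def fuzzy_evaluate(weights, membership):
    """weighted-average fuzzy comprehensive evaluation: combine each
    criterion's membership vector over grades with the criterion
    weights (the M(.,+) operator)."""
    grades = len(membership[0])
    result = [0.0] * grades
    for w, row in zip(weights, membership):
        for j in range(grades):
            result[j] += w * row[j]
    return result

# criterion weights (sum to 1) and per-criterion memberships over the
# grades (excellent, good, fair, poor); all numbers are illustrative
weights = [0.4, 0.35, 0.25]
membership = [
    [0.5, 0.3, 0.15, 0.05],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.3, 0.4, 0.2],
]
scores = fuzzy_evaluate(weights, membership)
```

the grade with the largest aggregated membership is taken as the overall evaluation (here "good"); multi-level schemes apply the same operator hierarchically.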
|
arxiv:2106.15316
|
in this paper, we present an exponential modification for the action of an ads black hole in the absence of a matter field. an approximated black hole solution is obtained up to the third order of perturbation coefficient. a thermodynamic investigation in canonical ensemble shows that the behavior of a van der waals fluid is not seen in this model. nevertheless, the study of thermodynamic potentials and other related quantities suggests that the thermodynamic phase transitions of the first and second types can occur in this model. the forms of the phase transitions are more similar to the hawking - page phase transitions.
|
arxiv:2407.04339
|
the recent breakthroughs in large language models (llms) are poised to transform many areas of software. database technologies in particular have an important entanglement with llms, as efficient and intuitive database interactions are paramount. in this paper, we present db-gpt, a revolutionary and production-ready project that integrates llms with traditional database systems to enhance user experience and accessibility. db-gpt is designed to understand natural language queries, provide context-aware responses, and generate complex sql queries with high accuracy, making it an indispensable tool for users ranging from novice to expert. the core innovation in db-gpt lies in its private llm technology, which is fine-tuned on domain-specific corpora to maintain user privacy and ensure data security while offering the benefits of state-of-the-art llms. we detail the architecture of db-gpt, which includes a novel retrieval augmented generation (rag) knowledge system, an adaptive learning mechanism to continuously improve performance based on user feedback, and a service-oriented multi-model framework (smmf) with powerful data-driven agents. our extensive experiments and user studies confirm that db-gpt represents a paradigm shift in database interactions, offering a more natural, efficient, and secure way to engage with data repositories. the paper concludes with a discussion of the implications of the db-gpt framework on the future of human-database interaction and outlines potential avenues for further enhancements and applications in the field. the project code is available at https://github.com/eosphoros-ai/db-gpt. experience db-gpt for yourself by installing it with the instructions at https://github.com/eosphoros-ai/db-gpt#install and view a concise 10-minute video at https://www.youtube.com/watch?v=kys4ntdzehk.
|
arxiv:2312.17449
|
if f is a nontrivial automorphism of a thick building delta of purely infinite type, we prove that there is no bound on the distance that f moves a chamber. this has the following group - theoretic consequence : if g is a group of automorphisms of delta with bounded quotient, then the center of g is trivial.
|
arxiv:0710.1426
|
kolmogorov-arnold network (kan) is a novel multi-layer neuromorphic network. many groups worldwide have studied this network for image processing, time series analysis, solving physical problems, and practical applications such as medical use. therefore, we propose an adaptive variational quantum kolmogorov-arnold network (vqkan) that takes advantage of kan for variational quantum algorithms in an adaptive manner. the adaptive vqkan uses an adaptive ansatz and grows the ansatz iteratively, just like the adaptive variational quantum eigensolver (vqe). this scheme, inspired by adaptive vqe, promises to raise the accuracy of vqkan to a practical level. as a result, adaptive vqkan is shown to solve the fitting problem more accurately and faster than quantum neural networks, with far fewer parametric gates.
|
arxiv:2503.21336
|
this paper presents realistic system-level modelling and simulation of effective noise sources in coupled resonating mems sensors. a governing set of differential equations is used to build a numerical model of a mechanical noise source in a coupled-resonator sensor. an effective thermomechanical noise is then quantified through system-level simulation in simulink. similarly, various noise sources in the electronic readout are identified, and the contribution of each is quantified to determine the effective noise that stems from the electronic readout. a comparison between the effective mechanical and electronic noise aids in identifying the dominant noise source in the sensor system. a method to optimize the system noise floor for an amplitude-based readout is presented. the proposed models cover a variety of operating conditions, such as finite quality factor, varying coupled electric spring strength, and operation with in-phase and out-of-phase modes. the proposed models aim to determine the impact of fundamental noise processes and thus quantify the ultimate detection limit of a coupled resonating system used for various sensing applications.
|
arxiv:2109.04800
|
alma hco+ observations of the infrared dark cloud g0.253+0.016 located in the central molecular zone of the galaxy are presented. the 89 ghz emission is area-filling, optically thick, and sub-thermally excited. two types of filaments are seen in absorption against the hco+ emission. broad-line absorption filaments (blas) have widths of less than a few arcseconds (0.07-0.14 pc), lengths of 30 to 50 arcseconds (1.2-1.8 pc), and absorption profiles extending over a velocity range larger than 20 km/s. the blas are nearly parallel to the nearby g0.18 non-thermal filaments and may trace hco+ molecules gyrating about highly ordered magnetic fields located in front of g0.253+0.016, or edge-on sheets formed behind supersonic shocks propagating orthogonal to our line-of-sight in the foreground. narrow-line absorption filaments (nlas) have line-widths less than 20 km/s. some nlas are also seen in absorption in other species with high optical depth such as hcn, and occasionally in emission where the background is faint. the nlas, which also trace low-density, sub-thermally excited hco+ molecules, are mostly seen on the blueshifted side of the emission from g0.253+0.016. if associated with the surface of g0.253+0.016, the kinematics of the nlas indicate that the cloud surface is expanding. the decompression of entrained, milli-gauss magnetic fields may be responsible for the re-expansion of the surface layers of g0.253+0.016 as it recedes from the galactic center following a close encounter with sgr a.
|
arxiv:1409.3640
|
we propose a composite grand unified theory to study the anomalies in the semileptonic $b$ decays. we show a simple group containing the custodial and standard model gauge symmetries that can deliver a set of composite pseudo nambu-goldstone bosons: the higgs, a colorless su(2)$_l$-fourplet and three leptoquarks: a triplet and two doublets. we give a description in terms of an effective theory of resonances. by assuming anarchic partial compositeness of the standard model fermions, we find representations for the composite fermions that allow us to obtain the higgs yukawa couplings, as well as leptoquark interactions explaining the deviations in $r^{\mu e}_{k^{(*)}}$. we calculate the one-loop potential, show that it can trigger electroweak symmetry breaking, and find a region of the parameter space that can reproduce the standard model spectrum. the model predicts leptoquark masses of order $0.4-1.3$ tev, corrections to some electroweak observables, with $zb_l\bar b_l$ saturating the current bounds, and a very rich phenomenology at the lhc. we also study the possibility of explaining $r^{\tau\ell}_{d^{(*)}}$.
|
arxiv:1812.08678
|
we propose a generalization of the adaptive n-intertwined mean-field approximation (animfa) model studied in achterberg and sensi (2023) to a heterogeneous network of communities. in particular, the multigroup animfa model describes the impact of both local and global disease awareness on the spread of a disease in a network. we obtain results on the existence and stability of the equilibria of the system, in terms of the basic reproduction number $r_0$. assuming individuals have no reason to decrease their contacts in the absence of disease, we show that the basic reproduction number $r_0$ is equivalent to the basic reproduction number of the nimfa model on static networks. based on numerical simulations, we demonstrate that with just two communities periodic behaviour can occur, which contrasts with the single-community case, in which periodicity was ruled out analytically. we also find that breaking connections between communities is more fruitful than breaking connections within communities for reducing the disease outbreak on dense networks, but both strategies are viable for networks with fewer links. finally, we emphasize that our method of modelling adaptivity is not limited to sis models, but has huge potential to be applied in other compartmental models in epidemiology.
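a minimal sketch of the plain (non-adaptive, single-community) nimfa sis equations referenced above, showing the threshold behaviour around the epidemic threshold; the parameters and the complete-graph example are illustrative assumptions, not the paper's multigroup adaptive model.

```python
def nimfa_sis(adj, beta, delta, p0=0.1, dt=0.01, steps=20000):
    """euler integration of the nimfa sis equations
    dp_i/dt = -delta*p_i + beta*(1 - p_i)*sum_j a_ij*p_j,
    where p_i is the infection probability of node i."""
    n = len(adj)
    p = [p0] * n
    for _ in range(steps):
        new = []
        for i in range(n):
            pressure = sum(adj[i][j] * p[j] for j in range(n))
            new.append(p[i] + dt * (-delta * p[i] + beta * (1 - p[i]) * pressure))
        p = new
    return p
```

on the complete graph with 5 nodes the nimfa threshold is beta/delta = 1/4 (the reciprocal of the adjacency spectral radius): below it the infection dies out, above it the system settles at an endemic state.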
|
arxiv:2407.17639
|
as end - to - end automatic speech recognition ( asr ) models reach promising performance, various downstream tasks rely on good confidence estimators for these systems. recent research has shown that model - based confidence estimators have a significant advantage over using the output softmax probabilities. if the input data to the speech recogniser is from mismatched acoustic and linguistic conditions, the asr performance and the corresponding confidence estimators may exhibit severe degradation. since confidence models are often trained on the same in - domain data as the asr, generalising to out - of - domain ( ood ) scenarios is challenging. by keeping the asr model untouched, this paper proposes two approaches to improve the model - based confidence estimators on ood data : using pseudo transcriptions and an additional ood language model. with an asr model trained on librispeech, experiments show that the proposed methods can greatly improve the confidence metrics on ted - lium and switchboard datasets while preserving in - domain performance. furthermore, the improved confidence estimators are better calibrated on ood data and can provide a much more reliable criterion for data selection.
|
arxiv:2110.03327
|
had created. marxian economics was further developed by karl kautsky (1854–1938)'s the economic doctrines of karl marx and the class struggle (erfurt program), rudolf hilferding's (1877–1941) finance capital, vladimir lenin (1870–1924)'s the development of capitalism in russia and imperialism, the highest stage of capitalism, and rosa luxemburg (1871–1919)'s the accumulation of capital. = = = neoclassical economics = = = at its inception as a social science, economics was defined and discussed at length as the study of production, distribution, and consumption of wealth by jean-baptiste say in his treatise on political economy or, the production, distribution, and consumption of wealth (1803). these three items were considered only in relation to the increase or diminution of wealth, and not in reference to their processes of execution. say's definition has survived in part up to the present, modified by substituting the word "wealth" for "goods and services", meaning that wealth may include non-material objects as well. one hundred and thirty years later, lionel robbins noticed that this definition no longer sufficed, because many economists were making theoretical and philosophical inroads in other areas of human activity. in his essay on the nature and significance of economic science, he proposed a definition of economics as a study of human behaviour, subject to and constrained by scarcity, which forces people to choose, allocate scarce resources to competing ends, and economise (seeking the greatest welfare while avoiding the wasting of scarce resources). according to robbins: "economics is the science which studies human behavior as a relationship between ends and scarce means which have alternative uses". robbins' definition eventually became widely accepted by mainstream economists, and found its way into current textbooks.
although far from unanimous, most mainstream economists would accept some version of robbins ' definition, even though many have raised serious objections to the scope and method of economics, emanating from that definition. a body of theory later termed " neoclassical economics " formed from about 1870 to 1910. the term " economics " was popularised by such neoclassical economists as alfred marshall and mary paley marshall as a concise synonym for " economic science " and a substitute for the earlier " political economy ". this corresponded to the influence on the subject of mathematical methods used in the natural sciences. neoclassical economics systematically integrated supply and demand as joint determinants of both price and quantity in market equilibrium, influencing the allocation of output and
|
https://en.wikipedia.org/wiki/Economics
|
let n be the normalizer of a maximal torus t in a split reductive group over f_q, and let w be an involution in the weyl group n / t. we construct a section of w satisfying the braid relations, such that the image of the lift n of w under the frobenius map is equal to the inverse of n.
|
arxiv:2107.06794
|
this extends a theorem of davenport and erd\"os on sequences of rational integers to sequences of integral ideals in arbitrary number fields $k$. more precisely, we introduce a logarithmic density for sets of integral ideals in $k$ and provide a formula for the logarithmic density of the set of so-called $\mathscr a$-free ideals, i.e. integral ideals that are not multiples of any ideal from a fixed set $\mathscr a$.
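in the rational-integer special case, taking $\mathscr a$ to be the set of squares of primes makes the $\mathscr a$-free integers exactly the squarefree numbers, whose density is $1/\zeta(2) = 6/\pi^2$; a quick numeric check of that classical value:

```python
import math

def squarefree_density(n):
    """fraction of squarefree integers in [1, n], computed by sieving
    out multiples of d^2 for every d >= 2."""
    squarefree = [True] * (n + 1)
    d = 2
    while d * d <= n:
        for multiple in range(d * d, n + 1, d * d):
            squarefree[multiple] = False
        d += 1
    return sum(squarefree[1:]) / n

density = squarefree_density(100000)
```

the empirical density up to 10^5 already agrees with 6/pi^2 ~ 0.6079 to about four decimal places, consistent with the O(sqrt(x)) error term in the classical count.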
|
arxiv:1712.03015
|
the rise in energy demand highlights the importance of suitable subsurface storage, requiring detailed and accurate subsurface characterization often reliant on high - quality borehole well log data. however, obtaining complete well - log data is costly and time - consuming, with missing data being common due to borehole conditions or tool errors. while machine learning and deep learning algorithms have been implemented to address these issues, they often fail to capture the intricate, nonlinear relationships and long - term dependencies in complex well log sequences. additionally, prior ai - driven models typically require retraining when introduced to new datasets and are constrained to deployment in the same basin. in this study, we explored and evaluated the potential of a time - series foundation model leveraging transformer architecture and a generative pre - trained approach for predicting and detecting anomalies in borehole well log data. specifically, we fine - tuned and adopted the timegpt architecture to forecast key log responses and detect anomalies with high accuracy. our proposed model demonstrated excellent performance, achieving r2 of up to 87 % and a mean absolute percentage error ( mape ) as low as 1. 95 %. additionally, the model ' s zero - shot capability successfully identified subtle yet critical anomalies, such as drilling hazards or unexpected geological formations, with an overall accuracy of 93 %. the model represents a significant advancement in predictive accuracy and computational efficiency, enabling zero - shot inference through fine - tuning. its application in well - log prediction enhances operational decision - making while reducing risks associated with subsurface exploration. these findings demonstrate the model ' s potential to transform well - log data analysis, particularly in complex geological settings.
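as a classical point of comparison for the zero-shot anomaly detection described above, a rolling z-score detector on a synthetic log; this is a baseline sketch, not the timegpt-based model, and the series with its injected spike is made up.

```python
import math, statistics

def rolling_zscore_anomalies(series, window=20, threshold=3.0):
    """flag indices whose value deviates from the trailing-window mean
    by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.fmean(hist)
        sd = statistics.stdev(hist)
        if sd > 0 and abs(series[i] - mu) / sd > threshold:
            flagged.append(i)
    return flagged

# smooth synthetic "well log" with one injected spike at index 60
well_log = [math.sin(0.1 * i) for i in range(100)]
well_log[60] += 5.0
anomalies = rolling_zscore_anomalies(well_log)
```

such simple detectors make a useful sanity baseline when judging whether a learned model's anomaly flags add value.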
|
arxiv:2412.05681
|
in recent years, image compression for high - level vision tasks has attracted considerable attention from researchers. given that object information in images plays a far more crucial role in downstream tasks than background information, some studies have proposed semantically structuring the bitstream to selectively transmit and reconstruct only the information required by these tasks. however, such methods structure the bitstream after encoding, meaning that the coding process still relies on the entire image, even though much of the encoded information will not be transmitted. this leads to redundant computations. traditional image compression methods require a two - dimensional image as input, and even if the unimportant regions of the image are set to zero by applying a semantic mask, these regions still participate in subsequent computations as part of the image. to address such limitations, we propose an image compression method based on a position - indexed self - attention mechanism that encodes and decodes only the visible parts of the masked image. compared to existing semantic - structured compression methods, our approach can significantly reduce computational costs.
|
arxiv:2504.12923
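the core saving the abstract claims, encoding only visible patches indexed by position, can be sketched as attention over a gathered subset of tokens. the projection weights, patch dimensions and mask below are placeholders, not the paper's trained codec :

```python
import numpy as np

def masked_self_attention(patches, pos_emb, visible_idx):
    """Self-attention computed only over visible patches.

    A sketch of the position-indexed idea: instead of zeroing masked regions
    and processing the full grid, gather the visible patches, tag each with its
    positional embedding, and attend among those alone. The projection weights
    here are random placeholders, not trained parameters.
    """
    x = patches[visible_idx] + pos_emb[visible_idx]   # (n_vis, d)
    d = x.shape[1]
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)                     # attention over visible only
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ v                                   # (n_vis, d)

patches = np.random.default_rng(1).standard_normal((64, 32))   # 8x8 patch grid
pos_emb = np.random.default_rng(2).standard_normal((64, 32))
visible = np.arange(0, 64, 4)                                  # keep 16 of 64 patches
out = masked_self_attention(patches, pos_emb, visible)
print(out.shape)
```

the quadratic attention cost then scales with the number of visible patches (16 here) rather than the full grid (64), which is the source of the claimed computational saving.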
|
from flocking birds to schooling fish, organisms interact to form collective dynamics across the natural world. self - organization is present at smaller scales as well : cells interact and move during development to produce patterns in fish skin, and wound healing relies on cell migration. across these examples, scientists are interested in shedding light on the individual behaviors informing spatial group dynamics and in predicting the patterns that will emerge under altered agent interactions. one challenge to these goals is that images of self - organization - - whether empirical or generated by models - - are qualitative. to get around this, there are many methods for transforming qualitative pattern data into quantitative information. in this tutorial chapter, i survey some methods for quantifying self - organization, including order parameters, pair correlation functions, and techniques from topological data analysis. i also discuss some places that i see as especially promising for quantitative data, modeling, and data - driven approaches to continue meeting in the future.
|
arxiv:2407.10832
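one of the simplest quantifications surveyed above, the polar order parameter, can be computed in a few lines : it is near 1 for aligned agents and near 0 for disordered ones. the velocity samples below are synthetic.

```python
import numpy as np

def polarization(velocities):
    """Polar order parameter: norm of the mean unit heading.

    Equals 1 for perfectly aligned agents, ~0 for a disordered group.
    """
    headings = velocities / np.linalg.norm(velocities, axis=1, keepdims=True)
    return np.linalg.norm(headings.mean(axis=0))

# Two synthetic groups: one flocking, one with uniformly random headings.
aligned = np.tile([1.0, 0.0], (100, 1))
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 100)
disordered = np.column_stack([np.cos(angles), np.sin(angles)])
print(polarization(aligned), polarization(disordered))
```

pair correlation functions and the topological summaries mentioned in the chapter follow the same pattern : a scalar or curve computed from agent positions that turns a qualitative image into comparable numbers.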
|
we give a strongly polynomial - time algorithm for integer linear programs defined by integer coefficient matrices whose subdeterminants are bounded by a constant and that contain at most two nonzero entries in each row. the core of our approach is the first polynomial - time algorithm for the weighted stable set problem on graphs that do not contain more than $ k $ vertex - disjoint odd cycles, where $ k $ is any constant. previously, polynomial - time algorithms were only known for $ k = 0 $ ( bipartite graphs ) and for $ k = 1 $. we observe that integer linear programs defined by coefficient matrices with bounded subdeterminants and two nonzeros per column can be also solved in strongly polynomial - time, using a reduction to $ b $ - matching.
|
arxiv:2106.05947
|
in this paper, we examine atiyah ' s hermitian axiom for two - dimensional complex topological quantum field theories. building on the correspondence between 2d tqfts and frobenius algebras, we find the algebraic objects corresponding to hermitian and unitary tqfts respectively and prove structure theorems about them. we then clarify a few older results on unitary tqfts using our structure theorems.
|
arxiv:2206.07193
|
as the spread of false information on the internet has increased dramatically in recent years, more and more attention is being paid to automated fake news detection. some fake news detection methods are already quite successful. nevertheless, there are still many vulnerabilities in the detection algorithms. the reason for this is that fake news publishers can structure and formulate their texts in such a way that a detection algorithm does not expose this text as fake news. this paper shows that it is possible to automatically attack state - of - the - art models that have been trained to detect fake news, making them vulnerable. for this purpose, corresponding models were first trained based on a dataset. then, using textattack, an attempt was made to manipulate the trained models in such a way that previously correctly identified fake news was classified as true news. the results show that it is possible to automatically bypass fake news detection mechanisms, leading to implications concerning existing policy initiatives.
|
arxiv:2107.07970
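a minimal sketch of the greedy substitution idea behind such attacks : the keyword - count " detector ", trigger words and synonym table below are invented stand - ins for the trained models and textattack recipes used in the paper.

```python
# Toy illustration of the attack idea: greedily swap words until a simple
# keyword-count "detector" flips its label. The real experiments use trained
# models and the TextAttack framework; the classifier, synonym table, and
# threshold below are invented for this sketch.
TRIGGER_WORDS = {"shocking", "miracle", "exposed"}      # hypothetical fake cues
SYNONYMS = {"shocking": "surprising", "miracle": "remarkable", "exposed": "reported"}

def is_fake(text, threshold=2):
    return sum(w in TRIGGER_WORDS for w in text.split()) >= threshold

def attack(text):
    words = text.split()
    for i, w in enumerate(words):
        if not is_fake(" ".join(words)):
            break                           # label already flipped
        if w in SYNONYMS:
            words[i] = SYNONYMS[w]          # swap one cue word at a time
    return " ".join(words)

headline = "shocking miracle cure exposed by insiders"
print(is_fake(headline), is_fake(attack(headline)))
```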
|
believe in pseudoscience more than scientific evidence. some people believe the prevalence of pseudoscientific beliefs is due to widespread scientific illiteracy. individuals lacking scientific literacy are more susceptible to wishful thinking, since they are likely to turn to immediate gratification powered by system 1, our default operating system which requires little to no effort. this system encourages one to accept the conclusions they believe, and reject the ones they do not. further analysis of complex pseudoscientific phenomena requires system 2, which follows rules, compares objects along multiple dimensions and weighs options. these two systems have several other differences which are further discussed in the dual - process theory. the scientific and secular systems of morality and meaning are generally unsatisfying to most people. humans are, by nature, a forward - minded species pursuing greater avenues of happiness and satisfaction, but we are all too frequently willing to grasp at unrealistic promises of a better life. psychology has much to discuss about pseudoscience thinking, as it is the illusory perceptions of causality and effectiveness of numerous individuals that need to be illuminated. research suggests that illusory thinking arises in most people when they are exposed to certain circumstances ; experiences such as reading a book, an advertisement or the testimony of others can form the basis of pseudoscience beliefs. it is assumed that illusions are not unusual, and given the right conditions, illusions are able to occur systematically even in normal emotional situations. one of the things pseudoscience believers quibble most about is that academic science usually treats them as fools. minimizing these illusions in the real world is not simple. to this aim, designing evidence - based educational programs can be effective in helping people identify and reduce their own illusions.
= = boundaries with science = = = = = classification = = = philosophers classify types of knowledge. in english, the word science is used to indicate specifically the natural sciences and related fields, which are called the social sciences. different philosophers of science may disagree on the exact limits – for example, is mathematics a formal science that is closer to the empirical ones, or is pure mathematics closer to the philosophical study of logic and therefore not a science? – but all agree that all of the ideas that are not scientific are non - scientific. the large category of non - science includes all matters outside the natural and social sciences, such as the study of history, metaphysics, religion, art, and the humanities. dividing the category again, unscientific claims are a subset of the large category of non - scientific claims. this category specifically
|
https://en.wikipedia.org/wiki/Pseudoscience
|
we introduce a new algorithm for online linear - quadratic control in a known system subject to adversarial disturbances. existing regret bounds for this setting scale as $ \ sqrt { t } $ unless strong stochastic assumptions are imposed on the disturbance process. we give the first algorithm with logarithmic regret for arbitrary adversarial disturbance sequences, provided the state and control costs are given by known quadratic functions. our algorithm and analysis use a characterization for the optimal offline control law to reduce the online control problem to ( delayed ) online learning with approximate advantage functions. compared to previous techniques, our approach does not need to control movement costs for the iterates, leading to logarithmic regret.
|
arxiv:2003.00189
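for a known system with quadratic costs, the optimal offline control law the abstract builds on is the lqr feedback u = - k x. a sketch computing k by riccati fixed - point iteration follows ; the system matrices are illustrative, and this shows only the offline law, not the paper's online algorithm.

```python
import numpy as np

# Known system x_{t+1} = A x_t + B u_t + w_t with quadratic costs x'Qx + u'Ru.
# The optimal control law is u = -K x, with K obtained from the discrete
# algebraic Riccati equation, solved here by fixed-point iteration.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator (illustrative)
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.eye(1)

P = Q.copy()
for _ in range(500):                      # P <- Q + A'P(A - BK), K = (R+B'PB)^-1 B'PA
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
rho = max(abs(np.linalg.eigvals(A - B @ K)))   # closed-loop spectral radius
print(K, rho)
```

the closed loop A - BK is stable (spectral radius below 1), which is what makes the reduction to online learning with approximate advantage functions well posed.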
|
in this article, we show that the riemann hypothesis for an $ l $ - function $ f $ belonging to the selberg class implies that all the derivatives of $ f $ can have at most finitely many zeros on the left of the critical line with imaginary part greater than a certain constant. this was shown for the riemann zeta function by levinson and montgomery in 1974.
|
arxiv:2202.12126
|
the increasing integration of large language model ( llm ) based search engines has transformed the landscape of information retrieval. however, these systems are vulnerable to adversarial attacks, especially ranking manipulation attacks, where attackers craft webpage content to manipulate the llm ' s ranking and promote specific content, gaining an unfair advantage over competitors. in this paper, we study the dynamics of ranking manipulation attacks. we frame this problem as an infinitely repeated prisoners ' dilemma, where multiple players strategically decide whether to cooperate or attack. we analyze the conditions under which cooperation can be sustained, identifying key factors such as attack costs, discount rates, attack success rates, and trigger strategies that influence player behavior. we identify tipping points in the system dynamics, demonstrating that cooperation is more likely to be sustained when players are forward - looking. however, from a defense perspective, we find that simply reducing attack success probabilities can, paradoxically, incentivize attacks under certain conditions. furthermore, defensive measures to cap the upper bound of attack success rates may prove futile in some scenarios. these insights highlight the complexity of securing llm - based systems. our work provides a theoretical foundation and practical insights for understanding and mitigating their vulnerabilities, while emphasizing the importance of adaptive security strategies and thoughtful ecosystem design.
|
arxiv:2501.00745
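the sustainability condition for cooperation under a grim - trigger strategy can be made concrete with standard prisoners' - dilemma payoffs t > r > p > s ; the payoff numbers below are illustrative and not taken from the paper.

```python
# Grim-trigger condition for the repeated prisoners' dilemma framing: with
# per-round payoffs T > R > P > S (temptation, reward, punishment, sucker),
# a one-shot deviation pays T once and P forever after, while cooperating
# pays R forever. Cooperation is sustainable when R/(1-d) >= T + d*P/(1-d),
# i.e. when the discount factor d >= (T - R) / (T - P).
def cooperation_threshold(T, R, P):
    return (T - R) / (T - P)

def discounted(stream, d, horizon=10_000):
    """Truncated discounted sum of a per-round payoff stream."""
    return sum(stream(t) * d**t for t in range(horizon))

T_, R_, P_ = 5.0, 3.0, 1.0
d_star = cooperation_threshold(T_, R_, P_)
d = 0.6                                   # forward-looking enough: d > d_star = 0.5
coop = discounted(lambda t: R_, d)
defect = discounted(lambda t: T_ if t == 0 else P_, d)
print(d_star, coop > defect)
```

this is the sense in which "cooperation is more likely to be sustained when players are forward - looking" : raising d above the threshold makes the cooperative stream worth more than the deviation stream.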
|
we present the angular distribution of electrons knocked out from an atom in a fast charged particle collision at small momentum transfer. it is determined not only by dipole but also by quadrupole transitions, the contribution of which can be considerably enhanced as compared to the case of photoionization. there the non - dipole parameters are suppressed as compared to the dipole ones by the parameter \ omega r / c < < 1, where \ omega is the photon energy, r is the ionized shell radius and c is the speed of light. this suppression in fast electron - atom collisions can be considerably reduced : the corresponding expansion parameter \ omega r / \ nu < < 1 is much bigger than in photoionization, since the speed \ nu of the incoming electron is much smaller than c. in formation of the angular distribution it is decisively important that the ionizing field in the collision process is longitudinal, while in photoionization it is transverse. we illustrate the general formulas by concrete results for outer s -, p -, and some nd - subshells of multi - electron noble gas atoms ar, kr and xe, at several transferred momentum values : q = 0. 0, 0. 1, 1. 1, 2. 1. even for very small transferred momentum q, i. e. in the so - called optical limit, the deviations from the photoionization case are prominent.
|
arxiv:1111.4062
|
we proved convergence rates of linear sampling recovery of functions in an abstract bochner space satisfying some weighted $ \ ell _ 2 $ - summability of their generalized polynomial chaos expansion coefficients, by extended least squares methods. as applications to a problem in computational uncertainty quantification, we derived convergence rates of linear collocation approximation of solutions to parametric elliptic pdes with log - normal random inputs, and of relevant infinite dimensional holomorphic functions on $ \ mathbb r ^ \ infty $. these convergence rates significantly improve the known results. from the general results we also obtained the same convergence rates of linear collocation approximation of solutions to parametric elliptic pdes with affine random inputs.
|
arxiv:2409.05050
|
we report in this survey some new results concerning noncommutative chern characters : construction and the cases when they are exactly computed. the major result indicates a clear relation between these noncommutative objects and their commutative counterparts. this survey can be considered as the second part of the previous survey ( j. of lie theory, vol. 3 ( 1993 ), 149 - 176 ).
|
arxiv:math/0003108
|
common algorithms for sentence and word - alignment allow the automatic identification of word translations from parallel texts. this study suggests that the identification of word translations should also be possible with non - parallel and even unrelated texts. the method proposed is based on the assumption that there is a correlation between the patterns of word co - occurrences in texts of different languages.
|
arxiv:cmp-lg/9505037
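a toy rendering of the co - occurrence assumption : if two words are translations, their co - occurrence signatures over a small seed lexicon should correlate, even without parallel text. the corpora, candidate pair and seed translations below are invented.

```python
import numpy as np

def cooc_vector(corpus, word, context_words, window=2):
    """Count co-occurrences of `word` with each context word within a window."""
    vec = np.zeros(len(context_words))
    for sent in corpus:
        toks = sent.split()
        for i, t in enumerate(toks):
            if t != word:
                continue
            for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                if toks[j] in context_words:
                    vec[context_words.index(toks[j])] += 1
    return vec

# Two tiny monolingual (non-parallel in general) corpora, invented for the sketch.
en = ["the dog barks loudly", "the cat sleeps", "the dog chases the cat"]
de = ["der hund bellt laut", "die katze schlaeft", "der hund jagt die katze"]
seed_en = ["barks", "cat", "sleeps"]
seed_de = ["bellt", "katze", "schlaeft"]   # seed translations, index-aligned

v_en = cooc_vector(en, "dog", seed_en)
v_de = cooc_vector(de, "hund", seed_de)
r = np.corrcoef(v_en, v_de)[0, 1]          # high r suggests dog <-> hund
print(v_en, v_de, r)
```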
|
we present a unified description of heat flow in two - terminal hybrid quantum systems. using simple models, we analytically study nonlinear aspects of heat transfer between various reservoirs : metals, solids, and spin baths, mediated by the excitation / relaxation of a central ( subsystem ) mode. we demonstrate rich nonlinear current - temperature characteristics, originating from either the molecular anharmonicity, or the reservoirs ( complex ) energy spectra. in particular, we establish sufficient conditions for thermal rectification in two - terminal junctions. we identify two classes of rectifiers. in type - a rectifiers the density of states of the reservoirs are dissimilar. in type - b rectifiers the baths are identical, but include particles whose statistics differ from that of the subsystem, to which they asymmetrically couple. nonlinear heat flow, and specifically thermal rectification, are thus ubiquitous effects that could be observed in a variety of systems, phononic, electronic, and photonic.
|
arxiv:0905.4015
|
we present a simple proof for bounding the smallest eigenvalue of the empirical covariance in a causal gaussian process. along the way, we establish a one - sided tail inequality for gaussian quadratic forms using a causal decomposition. our proof only uses elementary facts about the gaussian distribution and the union bound. we conclude with an example in which we provide a performance guarantee for least squares identification of a vector autoregression.
|
arxiv:2212.09508
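the two quantities in the abstract, the smallest eigenvalue of the empirical covariance and the least squares estimate of a vector autoregression, can be checked numerically on a simulated stable var ( 1 ) ; the system below is illustrative, not from the paper.

```python
import numpy as np

# Least squares identification of a stable VAR(1): x_{t+1} = A x_t + w_t.
# The performance guarantee hinges on the empirical covariance sum x_t x_t'
# being well conditioned; we inspect its smallest eigenvalue directly.
rng = np.random.default_rng(0)
d, T = 3, 5000
A = 0.5 * np.eye(d)                        # stable ground-truth dynamics
X = np.zeros((T + 1, d))
for t in range(T):
    X[t + 1] = A @ X[t] + rng.standard_normal(d)

past, future = X[:-1], X[1:]
A_hat = np.linalg.lstsq(past, future, rcond=None)[0].T   # OLS estimate of A
cov = past.T @ past / T                    # normalized empirical covariance
lam_min = np.linalg.eigvalsh(cov).min()
err = np.linalg.norm(A_hat - A)
print(lam_min, err)
```

with lam_min bounded away from zero, the least squares error shrinks at the familiar 1 / sqrt ( t ) rate, which is what the paper's bound formalizes for the causal gaussian case.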
|
we have modified the iterative procedure introduced by lin et al. ( 2016 ), to systematically combine the submm images taken from ground based ( e. g., cso, jcmt, apex ) and space ( e. g., herschel, planck ) telescopes. we applied the updated procedure to observations of three well studied infrared dark clouds ( irdcs ) : g11. 11 - 0. 12, g14. 225 - 0. 506 and g28. 34 + 0. 06, and then performed single - component, modified black - body fits to derive $ \ sim $ 10 $ " $ resolution dust temperature and column density maps. the derived column density maps show that these three irdcs exhibit complex filamentary structures embedded with rich clumps / cores. we compared the column density probability distribution functions ( n - pdfs ) and two - point correlation ( 2pt ) functions of the column density field for these irdcs with those of several ob cluster - forming regions. based on the observed correlations and measurements, and on complementary hydrodynamical simulations for a 10 $ ^ { 4 } $ $ \ rm m _ { \ odot } $ molecular cloud, we hypothesize that cloud evolution can be better characterized by the evolution of the ( column ) density distribution function and the relative power of dense structures as a function of spatial scales, rather than merely based on the presence of star - forming activity. based on the small analyzed sample, we propose four evolutionary stages, namely : { \ it cloud integration, stellar assembly, cloud pre - dispersal and dispersed - cloud. } the initial { \ it cloud integration } stage and the final { \ it dispersed cloud } stage may be distinguished from the two intermediate stages by a steeper than $ - $ 4 power - law index of the n - pdf. the { \ it cloud integration } stage and the subsequent { \ it stellar assembly } stage are further distinguished from each other by the larger luminosity - to - mass ratio ( $ > $ 40 $ \ rm l _ { \ odot } / m _ { \ odot } $ ) of the latter.
|
arxiv:1704.06448
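the n - pdf diagnostic used above can be sketched on a synthetic lognormal column - density field ; the field parameters are invented and no power - law tail is modeled here.

```python
import numpy as np

# Column-density PDF (N-PDF) sketch: histogram of log column density for a
# synthetic lognormal field, the diagnostic used to separate evolutionary
# stages (e.g. a tail steeper than -4 in the early and late stages).
rng = np.random.default_rng(0)
N = np.exp(rng.normal(loc=21 * np.log(10), scale=0.5, size=512 * 512))  # cm^-2

eta = np.log(N / np.median(N))            # dimensionless log column density
counts, edges = np.histogram(eta, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
peak = centers[counts.argmax()]           # lognormal field peaks near eta = 0
print(peak)
```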
|