columns: text (string, length 1 to 3.65k) and source (string, length 15 to 79)
this paper is dedicated to the memory of vilen mitrofanovich strutinsky who would have been 80 this year. his achievements in theoretical nuclear physics are briefly summarized. i discuss in more detail the most successful and far - reaching of them, namely ( 1 ) the shell - correction method and ( 2 ) the extension of gutzwiller ' s semiclassical theory of shell structure and its application to finite fermionic systems, and mention some applications in other domains of physics.
arxiv:1006.5539
we suggest a new possible high dimensional analogue to metric distortion. we then show a possible method for providing lower bounds to this distortion and use this method to prove a " bourgain - type " distortion theorem for linial - meshulam random complexes.
arxiv:1412.7142
direct evaluation of the rate - distortion function has rarely been achieved when it is strictly greater than its shannon lower bound. in this paper, we consider the rate - distortion function for the distortion measure defined by an epsilon - insensitive loss function. we first present the shannon lower bound applicable to any source distribution with finite differential entropy. then, focusing on the laplacian and gaussian sources, we prove that the rate - distortion functions of these sources are strictly greater than their shannon lower bounds and obtain analytically evaluable upper bounds for the rate - distortion functions. small distortion limit and numerical evaluation of the bounds suggest that the shannon lower bound provides a good approximation to the rate - distortion function for the epsilon - insensitive distortion measure.
arxiv:1302.6315
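For reference alongside the abstract above, the Shannon lower bound for a difference distortion measure $d(x,y)=\rho(x-y)$ has the following standard textbook form; this is a generic statement, not the paper's specific epsilon-insensitive derivation, and the symbols ($X$, $Z$, $\rho$, $D$) are generic.

```latex
% Shannon lower bound for a difference distortion measure d(x, y) = \rho(x - y):
% the rate-distortion function is bounded below by the source differential
% entropy minus the largest entropy of a "noise" variable meeting the budget D.
R(D) \;\ge\; h(X) \;-\; \sup_{Z \,:\, \mathbb{E}[\rho(Z)] \le D} h(Z)
% The paper shows this inequality is strict for Laplacian and Gaussian sources
% under the epsilon-insensitive loss, and complements it with upper bounds.
```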
a method is proposed to employ entangled and squeezed light for determining the position of a party and for synchronizing distant clocks. an accuracy gain over analogous protocols that employ classical resources is demonstrated and a quantum - cryptographic positioning application is given, which allows only trusted parties to learn the position of whatever must be localized. the presence of a lossy channel and imperfect photodetection is considered. the advantages of using partially entangled states are discussed.
arxiv:quant-ph/0107140
existing methods utilizing spatial information for sound source separation require prior knowledge of the direction of arrival ( doa ) of the source or utilize estimated but imprecise localization results, which impairs the separation performance, especially when the sound sources are moving. in fact, sound source localization and separation are interconnected problems, that is, sound source localization facilitates sound separation while sound separation contributes to refined source localization. this paper proposes a method utilizing the mutual facilitation mechanism between sound source localization and separation for moving sources. the proposed method comprises three stages. the first stage is initial tracking, which tracks each sound source from the audio mixture based on the source signal envelope estimation. these tracking results may lack sufficient accuracy. the second stage involves mutual facilitation : sound separation is conducted using preliminary sound source tracking results. subsequently, sound source tracking is performed on the separated signals, thereby refining the tracking precision. the refined trajectories further improve separation performance. this mutual facilitation process can be iterated multiple times. in the third stage, a neural beamformer estimates precise single - channel separation results based on the refined tracking trajectories and multi - channel separation outputs. simulation experiments conducted under reverberant conditions and with moving sound sources demonstrate that the proposed method can achieve more accurate separation based on refined tracking results.
arxiv:2409.04843
within self - consistent field theory we study the phase behaviour of a symmetric binary ab polymer blend confined into a thin film. the film surfaces interact with the monomers via short range potentials. one surface attracts the a component and the corresponding semi - infinite system exhibits a first order wetting transition. the surface interaction of the opposite surface is varied so as to study the crossover from capillary condensation for symmetric surface fields to the interface localisation / delocalisation transition for antisymmetric surface fields. in the former case the phase diagram has a single critical point close to the bulk critical point. in the latter case the phase diagram exhibits two critical points which correspond to the prewetting critical points of the semi - infinite system. the crossover between these qualitatively different limiting behaviours occurs gradually; however, the critical temperature and the critical composition exhibit a non - monotonic dependence on the surface field.
arxiv:cond-mat/0005059
blockchain consensus mechanisms must balance security, decentralization, and efficiency while ensuring fair participation. proof of team sprint ( pots ) is a cooperative consensus mechanism designed to address the energy inefficiencies and centralization tendencies of traditional proof of work ( pow ). unlike pow, where rewards disproportionately favor high - performance nodes, pots encourages collaboration by forming teams and distributing rewards more equitably among participants. in this study, we evaluate the fairness properties of pots by analyzing reward distribution under varying computational power distributions. through extensive simulations, we compare equal - share allocation and proportional reward allocation, highlighting their impact on decentralization and participation. our results demonstrate that pots significantly reduces reward disparity between high - performance and low - performance nodes, fostering a more inclusive ecosystem. additionally, we observe that as team sizes increase, the influence of individual computational power is mitigated, allowing lower - performance nodes to contribute meaningfully. moreover, our findings reveal that the marginal benefit of investing in extremely high - performance hardware diminishes, which discourages centralization and aligns incentives toward sustainable participation. we also discuss the economic implications of pots, particularly its potential to reshape blockchain mining strategies by balancing fairness with computational efficiency. these insights contribute to the broader discussion on blockchain fairness and provide a foundation for further research into cooperative consensus mechanisms.
arxiv:2503.19301
volatile electrical energy prices are a challenge and an opportunity for small and medium - size companies in energy - intensive industries. by using electrical energy storage and / or an adaptation of production processes, companies can significantly profit from time - dependent energy prices and reduce their energy costs. we consider a time - discrete optimal control problem to reach a desired final state of the energy storage at a certain time step. thereby, the energy input is discrete since only multiples of 100 kwh can be purchased at the epex spot market. we use available price estimations to minimize the total energy cost by a rounding based dynamic programming approach. with our model non - linear energy loss functions of the storage can be considered and we obtain a significant speed - up compared to the integer ( linear ) programming formulation.
arxiv:2209.05316
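The abstract above describes a rounding-based dynamic program over discrete 100 kWh purchases with a non-linear storage loss. Below is a minimal sketch of that idea under stated assumptions: the price vector, the 2% loss function, the state grid resolution, and the per-step purchase limit are illustrative placeholders, not the paper's actual model.

```python
# Sketch of a rounding-based dynamic program for buying energy in 100 kWh blocks
# so that a storage target is reached at minimum cost under a price forecast.
# The prices, the 2% non-linear loss, the state grid, and the per-step purchase
# limit are illustrative assumptions, not the model from the paper.

def storage_loss(level_kwh):
    # hypothetical non-linear self-discharge per time step
    return 0.02 * level_kwh

def min_cost_to_target(prices_eur_per_kwh, target_kwh, max_blocks_per_step,
                       block_kwh=100, grid_step_kwh=10):
    INF = float("inf")
    levels = range(0, target_kwh + block_kwh, grid_step_kwh)    # rounded state grid
    best = {lvl: (0.0 if lvl == 0 else INF) for lvl in levels}  # cost to reach level

    for price in prices_eur_per_kwh:
        nxt = {lvl: INF for lvl in levels}
        for lvl, cost in best.items():
            if cost == INF:
                continue
            for blocks in range(max_blocks_per_step + 1):       # discrete purchases
                level = lvl + blocks * block_kwh - storage_loss(lvl)
                rounded = int(round(level / grid_step_kwh)) * grid_step_kwh
                rounded = min(max(rounded, 0), max(levels))     # snap back onto grid
                nxt[rounded] = min(nxt[rounded], cost + blocks * block_kwh * price)
        best = nxt

    return best.get(target_kwh, INF)

if __name__ == "__main__":
    prices = [0.30, 0.12, 0.08, 0.25, 0.10]   # EUR/kWh forecast, made up
    print("minimum cost:", min_cost_to_target(prices, target_kwh=300,
                                              max_blocks_per_step=2))
```

The rounding of the storage level onto a coarse grid is what keeps the state space small enough for a fast dynamic program while still accommodating a non-linear loss function.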
we consider the stochastic burgers equation $ \partial_t \psi(t, r) = \Delta \psi(t, r) + \nabla \psi^2(t, r) + \sqrt{\gamma \psi(t, r)} \, \eta(t, r) $ with periodic boundary conditions, where $ t \ge 0 $, $ r \in [0, 1] $, and $ \eta $ is some space - time white noise. a certain markov jump process is constructed to approximate a solution of this equation.
arxiv:math/0408323
recently the belle collaboration has discovered a narrow $ s = - 3 $ baryon, the $ \ omega ( 2012 ) $. we explore the possibility that the $ \ omega ( 2012 ) $ is a $ \ xi ( 1530 ) \, \ bar k $ molecule, where the binding mechanism is the coupled channel dynamics with the $ \ omega \, \ eta $ channel. the characteristic signature of a molecular $ \ omega ( 2012 ) $ will be its decay into the three body channel $ \ xi \ pi \ bar { k } $, for which we expect a partial decay width of $ 2 - 3 \, { \ rm mev } $. the partial decay width into the $ \ xi \ bar { k } $ channel should lie in the range of $ 1 - 11 \, { \ rm mev } $, a figure compatible with experiment and which we have deduced from the assumption that the coupling involved in this decay is of natural size. for comparison purposes the decay of a purely compact $ \ omega ( 2012 ) $ into the $ \ xi \ bar { k } $ and $ \ xi \ pi \ bar { k } $ channels is of the same order of magnitude as and one order of magnitude smaller than in the molecular scenario, respectively. this comparison indicates that the current experimental information is insufficient to distinguish between a compact and a molecular $ \ omega ( 2012 ) $ and further experiments will be required to determine its nature. a molecular $ \ omega ( 2012 ) $ will also imply the existence of two - and three - body molecular partners. the two - body partners comprise two $ \ lambda $ hyperons located at $ 1740 $ and $ 1950 \, { \ rm mev } $ respectively, the first of which might correspond to the $ \ lambda ( 1800 ) $ while the second to the $ \ lambda ( 2000 ) $ or the $ \ lambda ( 2050 ) $. the three - body partners include a $ \ xi ( 1530 ) k \ bar { k } $ and a $ \ xi ( 1530 ) \ eta \ bar { k } $ molecule, with masses of $ m = 2385 - 2445 \, { \ rm mev } $ and $ m = 2434 - 2503 \, { \ rm mev } $ respectively. we might be tempted to identify the first with the $ \ xi ( 2370 ) $ and the latter with the $ \ omega ( 2470 ) $ listed in the pdg.
arxiv:1807.00718
fooji inc. is a social media engagement platform that has created a proprietary " just - in - time " delivery network to provide prizes to social media marketing campaign participants in real - time. in this paper, we prove the efficacy of the " just - in - time " delivery network through a cluster analysis that extracts and presents the underlying drivers of campaign engagement. we utilize a machine learning methodology with a principal component analysis to organize fooji campaigns across these principal components. the arrangement of data across the principal component space allows us to expose underlying trends using a $ k $ - means clustering technique. the most important of these trends is the demonstration of how the " just - in - time " delivery network improves social media engagement.
arxiv:2212.12285
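A minimal sketch of the analysis pipeline the abstract describes: project campaign feature vectors onto principal components, then cluster in that space with $k$-means. The random feature matrix and the choices of two components and three clusters are placeholders, not the Fooji data or settings.

```python
# Minimal sketch: PCA projection of campaign features followed by k-means
# clustering in the principal-component space. The synthetic feature matrix and
# the 2-component / 3-cluster choice are illustrative placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
campaign_features = rng.normal(size=(200, 12))        # stand-in for real campaign metrics

components = PCA(n_components=2).fit_transform(campaign_features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(components)

for cluster_id in range(3):
    print(f"cluster {cluster_id}: {np.sum(labels == cluster_id)} campaigns")
```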
this work proposes and analyses the application of a robotic platform as a digital skills assistant. we analyse the ethical issues relating to the decision making process in the use case of online food shopping in order to inform a co - design session on what, and how, the digital skills assistant should make decisions.
arxiv:2304.01886
( 1 ) $ amortized time, for $ t \ geq \ log \ log \ delta \ log { n } $.
arxiv:2102.07221
few - shot learning ( fsl ), which aims to classify unseen classes with few samples, is challenging due to data scarcity. although various generative methods have been explored for fsl, the entangled generation process of these methods exacerbates the distribution shift in fsl, thus greatly limiting the quality of generated samples. to address these challenges, we propose a novel information bottleneck ( ib ) based disentangled generation framework for fsl, termed as disgenib, that can simultaneously guarantee the discrimination and diversity of generated samples. specifically, we formulate a novel framework with information bottleneck that applies to both disentangled representation learning and sample generation. different from existing ib - based methods that can hardly exploit priors, we demonstrate our disgenib can effectively utilize priors to further facilitate disentanglement. we further prove in theory that some previous generative and disentanglement methods are special cases of our disgenib, which demonstrates the generality of the proposed disgenib. extensive experiments on challenging fsl benchmarks confirm the effectiveness and superiority of disgenib, together with the validity of our theoretical analyses. our codes will be open - source upon acceptance.
arxiv:2211.16185
we consider a multidimensional stochastic differential equation with a gaussian noise and a drift vector having a jump discontinuity along a hyperplane. the large time behavior of the distance between two solutions starting from different points is studied.
arxiv:1912.12457
orbital phase - dependent variations in thermal emission and reflected stellar energy spectra can provide meaningful constraints on the climate states of terrestrial extrasolar planets orbiting m dwarf stars. spatial distributions of water vapor, clouds, and surface ice are controlled by climate. in turn, water, in each of its thermodynamic phases, imposes significant modulations to thermal and reflected planetary spectra. here we explore these characteristic spectral signals, based on 3d climate simulations of earth - sized aquaplanets orbiting m dwarf stars near the habitable zone. by using 3d models, we can self - consistently predict surface temperatures and the location of water vapor, clouds, and surface ice in the climate system. habitable zone planets in m dwarf systems are expected to be in synchronous rotation with their host star and thus present distinct differences in emitted and reflected energy fluxes depending on the observed hemisphere. here we illustrate that icy, temperate, and incipient runaway greenhouse climate states exhibit phase - dependent spectral signals that enable their characterization.
arxiv:1906.02697
we show a close connection between structural hardness for $ k $ - partite graphs and tight inapproximability results for scheduling problems with precedence constraints. assuming a natural but nontrivial generalisation of the bipartite structural hardness result of bansal and khot, we obtain a hardness of $ 2 - \ epsilon $ for the problem of minimising the makespan for scheduling precedence - constrained jobs with preemption on identical parallel machines. this matches the best approximation guarantee for this problem. assuming the same hypothesis, we also obtain a super constant inapproximability result for the problem of scheduling precedence - constrained jobs on related parallel machines, making progress towards settling an open question in both lists of ten open questions by williamson and shmoys, and by schuurman and woeginger. the study of structural hardness of $ k $ - partite graphs is of independent interest, as it captures the intrinsic hardness for a large family of scheduling problems. other than the ones already mentioned, this generalisation also implies tight inapproximability to the problem of minimising the weighted completion time for precedence - constrained jobs on a single machine, and the problem of minimising the makespan of precedence - constrained jobs on identical parallel machines, hence unifying the results of bansal and khot, and svensson, respectively.
arxiv:1507.01906
we present branching fraction measurements of charged and neutral b decays to dpi -, d * pi - and d * * pi - with a missing mass method, based on a sample of 231 million y ( 4s ) - - > bbar pairs collected by the babar detector at the pep - ii e + e - collider. one of the b mesons is fully reconstructed and the other one decays to a reconstructed charged pion and a companion charmed meson identified by its recoil mass, inferred by kinematics. here d * * refers to the sum of all the non - strange charm meson states with masses in the range 2. 2 - 2. 8 gev / c2.
arxiv:hep-ex/0609033
in this article, we obtain a super - exponential rate of convergence in total variation between the traces of the first $ m $ powers of an $ n \ times n $ random unitary matrix and a $ 2m $ - dimensional gaussian random variable. this generalizes previous results in the scalar case to the multivariate setting, and we also give the precise dependence on the dimensions $ m $ and $ n $ in the estimate with explicit constants. we are especially interested in the regime where $ m $ grows with $ n $ and our main result basically states that if $ m \ ll \ sqrt { n } $, then the rate of convergence in the gaussian approximation is $ \ gamma ( \ frac nm + 1 ) ^ { - 1 } $ times a correction. we also show that the gaussian approximation remains valid for all $ m \ ll n ^ { 2 / 3 } $ without a fast rate of convergence.
arxiv:2002.01879
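The classical phenomenon behind the abstract above (for a Haar-random $n \times n$ unitary $U$, the trace of $U^j$ is approximately complex Gaussian with variance $j$ when $j$ is much smaller than $n$) can be checked numerically with a short script. This is only an illustration of the limiting behaviour, not of the paper's convergence-rate bounds; the sample sizes are arbitrary.

```python
# Numerical check: traces of powers of Haar-random unitaries are approximately
# complex Gaussian with variance j (Re and Im parts each with variance j/2).
import numpy as np

def haar_unitary(n, rng):
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix phases so Q is Haar-distributed

rng = np.random.default_rng(0)
n, j, samples = 200, 3, 2000
traces = np.array([np.trace(np.linalg.matrix_power(haar_unitary(n, rng), j))
                   for _ in range(samples)])

print("empirical var(Re):", np.var(traces.real), " expected:", j / 2)
print("empirical var(Im):", np.var(traces.imag), " expected:", j / 2)
```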
a boolean expression is in read - once form if each of its variables appears exactly once. when the variables denote independent events in a probability space, the probability of the event denoted by the whole expression in read - once form can be computed in polynomial time ( whereas the general problem for arbitrary expressions is # p - complete ). known approaches to checking read - once property seem to require putting these expressions in disjunctive normal form. in this paper, we tell a better story for a large subclass of boolean event expressions : those that are generated by conjunctive queries without self - joins and on tuple - independent probabilistic databases. we first show that given a tuple - independent representation and the provenance graph of an spj query plan without self - joins, we can, without using the dnf of a result event expression, efficiently compute its co - occurrence graph. from this, the read - once form can already, if it exists, be computed efficiently using existing techniques. our second and key contribution is a complete, efficient, and simple to implement algorithm for computing the read - once forms ( whenever they exist ) directly, using a new concept, that of co - table graph, which can be significantly smaller than the co - occurrence graph.
arxiv:1012.0335
usual coset construction $ \ su { k } \ times \ su { l } / \ su { k + l } $ of wess - - zumino conformal field theory is presented as a coset construction of minimal models. this new coset construction can be defined rigorously and allows one to calculate easily correlation functions of a number of primary fields.
arxiv:hep-th/9304116
the united states national airspace system ( nas ). efforts began in 2007 with a goal to deliver major modernization components by 2025. the modernization effort intends to increase the safety, efficiency, capacity, access, flexibility, predictability, and resilience of the nas while reducing the environmental impact of aviation. the aviation systems division of nasa ames operates the joint nasa / faa north texas research station. the station supports all phases of nextgen research, from concept development to prototype system field evaluation. this facility has already transitioned advanced nextgen concepts and technologies to use through technology transfers to the faa. nasa contributions also include development of advanced automation concepts and tools that provide air traffic controllers, pilots, and other airspace users with more accurate real - time information about the nation ' s traffic flow, weather, and routing. ames ' advanced airspace modeling and simulation tools have been used extensively to model the flow of air traffic flow across the us, and to evaluate new concepts in airspace design, traffic flow management, and optimization. = = = technology research = = = = = = = nuclear in - space power and propulsion ( ongoing ) = = = = nasa has made use of technologies such as the multi - mission radioisotope thermoelectric generator ( mmrtg ), which is a type of radioisotope thermoelectric generator used to power spacecraft. shortages of the required plutonium - 238 have curtailed deep space missions since the turn of the millennium. an example of a spacecraft that was not developed because of a shortage of this material was new horizons 2. in july 2021, nasa announced contract awards for development of nuclear thermal propulsion reactors. three contractors will develop individual designs over 12 months for later evaluation by nasa and the us department of energy. nasa ' s space nuclear technologies portfolio are led and funded by its space technology mission directorate. in january 2023, nasa announced a partnership with defense advanced research projects agency ( darpa ) on the demonstration rocket for agile cislunar operations ( draco ) program to demonstrate a ntr engine in space, an enabling capability for nasa missions to mars. in july 2023, nasa and darpa jointly announced the award of $ 499 million to lockheed martin to design and build an experimental ntr rocket to be launched in 2027. = = = = other initiatives = = = = free space optics. nasa contracted a third party to study the probability of using free space optics ( fso ) to communicate with optical ( laser ) stations on the ground ( ogs ) called laser - com
https://en.wikipedia.org/wiki/NASA
we present an architecture for voice trigger detection for virtual assistants. the main idea in this work is to exploit information in words that immediately follow the trigger phrase. we first demonstrate that by including more audio context after a detected trigger phrase, we can indeed get a more accurate decision. however, waiting to listen to more audio each time incurs a latency increase. progressive voice trigger detection allows us to trade - off latency and accuracy by accepting clear trigger candidates quickly, but waiting for more context to decide whether to accept more marginal examples. using a two - stage architecture, we show that by delaying the decision for just 3 % of detected true triggers in the test set, we are able to obtain a relative improvement of 66 % in false rejection rate, while incurring only a negligible increase in latency.
arxiv:2010.15446
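The abstract describes a two-stage progressive decision: accept clear trigger candidates immediately from a first-stage score, and defer only marginal candidates until extra audio context can be scored. The sketch below illustrates that decision logic only; the thresholds and the lazily evaluated second-stage scorer are illustrative assumptions, and the acoustic models themselves are stand-ins.

```python
# Sketch of progressive voice trigger detection: low-latency acceptance of clear
# candidates, deferred second-stage scoring for marginal ones. Thresholds and
# score sources are placeholders, not the paper's trained models.

def progressive_decision(stage1_score, stage2_score_fn,
                         accept_now=0.95, reject_now=0.20, final_threshold=0.5):
    """stage1_score    -- detector score on the trigger phrase alone
    stage2_score_fn -- callable that (lazily) scores the phrase plus the audio
                       that follows; only invoked for marginal candidates"""
    if stage1_score >= accept_now:      # clear trigger: accept with low latency
        return True
    if stage1_score <= reject_now:      # clear non-trigger: reject immediately
        return False
    # marginal candidate: pay the latency cost of listening to more audio
    return stage2_score_fn() >= final_threshold

if __name__ == "__main__":
    # illustrative calls; scores would come from acoustic models in practice
    print(progressive_decision(0.97, lambda: 0.10))   # accepted immediately
    print(progressive_decision(0.60, lambda: 0.80))   # deferred, then accepted
```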
recent results on cross sections sensitive to the parton distribution functions ( pdfs ) within the proton from the atlas and cms collaborations are presented. the potential impact of the inclusion of these data in fits to the pdfs is discussed. recent results from fits including the data from jet, or vector boson production from the atlas and cms experiments are discussed.
arxiv:1406.1606
we give a simple proof of dorronsoro ' s theorem and use similar ideas to establish an equivalence for embeddings of vector fields.
arxiv:1506.06383
in this paper we analyze metastability and nucleation in the context of a local version of the kawasaki dynamics for the two - dimensional strongly anisotropic ising lattice gas at very low temperature. let $ \ lambda \ subset \ mathbb { z } ^ 2 $ be a finite box. particles perform simple exclusion on $ \ lambda $, but when they occupy neighboring sites they feel a binding energy $ - u _ 1 < 0 $ in the horizontal direction and $ - u _ 2 < 0 $ in the vertical one. thus the kawasaki dynamics is conservative inside the volume $ \ lambda $. along each bond touching the boundary of $ \ lambda $ from the outside to the inside, particles are created with rate $ \ rho = e ^ { - \ delta \ beta } $, while along each bond from the inside to the outside, particles are annihilated with rate $ 1 $, where $ \ beta $ is the inverse temperature and $ \ delta > 0 $ is an activity parameter. thus, the boundary of $ \ lambda $ plays the role of an infinite gas reservoir with density $ \ rho $. we consider the parameter regime $ u _ 1 > 2u _ 2 $ also known as the strongly anisotropic regime. we take $ \ delta \ in { ( u _ 1, u _ 1 + u _ 2 ) } $ and we prove that the empty ( respectively full ) configuration is a metastable ( respectively stable ) configuration. we consider the asymptotic regime corresponding to finite volume in the limit of large inverse temperature $ \ beta $. we investigate how the transition from empty to full takes place. in particular, we estimate in probability, expectation and distribution the asymptotic transition time from the metastable configuration to the stable configuration. moreover, we identify the size of the \ emph { critical droplets }, as well as some of their properties. we observe very different behavior in the weakly and strongly anisotropic regimes. we find that the \ emph { wulff shape }, i. e., the shape minimizing the energy of a droplet at fixed volume, is not relevant for the nucleation pattern.
arxiv:2007.04063
full waveform inversion ( fwi ) is an iterative identification process that serves to minimize the misfit of model - based simulated and experimentally measured wave field data, with the goal of identifying a field of parameters for a given physical object. the inverse optimization process of fwi is based on forward and backward solutions of the ( elastic or acoustic ) wave equation. in a previous paper [ 1 ], we explored opportunities of using the finite cell method ( fcm ) as the wave field solver to incorporate highly complex geometric models. furthermore, we demonstrated that the identification of the model ' s density outperforms that of the velocity - - particularly in cases where unknown voids characterized by homogeneous neumann boundary conditions need to be detected. the paper at hand extends this previous study : the isogeometric finite cell analysis ( iga - fcm ) - - a combination of isogeometric analysis ( iga ) and fcm - - is applied for the wave field solver, with the advantage that the polynomial degree and subsequently also the sampling frequency of the wave field can be increased quite easily. since the inversion efficiency strongly depends on the accuracy of the forward and backward wave field solution and of the gradient of the functional, consistent and lumped mass matrix discretizations are compared. the resolution of the grid describing the unknown material density is then decoupled from the knot span grid. finally, we propose an adaptive multi - resolution algorithm that refines the material grid only locally using an image processing - based refinement indicator. the developed inversion framework allows fast and memory - efficient wave simulation and object identification. while we study the general behavior of the proposed approach on 2d benchmark problems, a final 3d problem shows that it can also be used to identify voids in geometrically complex spatial structures.
arxiv:2305.19699
large language models ( llms ) can be improved by aligning with human preferences through fine - tuning - - the so - called reinforcement learning from human feedback ( rlhf ). however, the cost of fine - tuning an llm is prohibitive for many users. due to their ability to bypass llm fine - tuning, prediction - time tokenwise reward - guided text generation ( rgtg ) methods have recently been proposed. they use a reward model trained on full sequences to score partial sequences during decoding in a bid to steer the generation towards sequences with high rewards. however, these methods have so far been only heuristically motivated and poorly analyzed. in this work, we show that reward models trained on full sequences are not compatible with scoring partial sequences. to alleviate this issue, we propose to train a bradley - terry reward model on partial sequences explicitly, and autoregressively sample from the implied tokenwise policy during decoding time. we study the properties of this reward model and the resulting policy : we show that this policy is proportional to the ratio of two distinct rlhf policies. our simple approach outperforms previous rgtg methods and performs similarly to strong offline baselines without large - scale llm finetuning.
arxiv:2406.07780
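The abstract above is about reward-guided decoding with a reward model trained on partial sequences. The sketch below shows one generic instantiation of tokenwise reward-guided sampling: reweight the base model's next-token distribution by the exponentiated partial-sequence reward. The toy vocabulary, base model, reward model, and temperature are placeholders; this is not the exact tokenwise policy derived in the paper.

```python
# Generic sketch of tokenwise reward-guided decoding: reweight the base model's
# next-token probabilities by exp(beta * reward(prefix + token)) and sample.
# Vocabulary, base model, and reward model below are stand-ins.
import numpy as np

VOCAB = ["the", "cat", "sat", "mat", "<eos>"]

def base_next_token_probs(prefix):
    # stand-in for an LLM's next-token distribution
    rng = np.random.default_rng(abs(hash(tuple(prefix))) % (2 ** 32))
    logits = rng.normal(size=len(VOCAB))
    return np.exp(logits) / np.exp(logits).sum()

def partial_sequence_reward(tokens):
    # stand-in for a reward model trained to score partial sequences
    return 1.0 if "cat" in tokens else 0.0

def guided_sample(prefix, beta=2.0, max_len=8, seed=0):
    rng = np.random.default_rng(seed)
    tokens = list(prefix)
    while len(tokens) < max_len and tokens[-1] != "<eos>":
        p = base_next_token_probs(tokens)
        rewards = np.array([partial_sequence_reward(tokens + [v]) for v in VOCAB])
        w = p * np.exp(beta * rewards)          # tokenwise reward reweighting
        tokens.append(rng.choice(VOCAB, p=w / w.sum()))
    return tokens

print(guided_sample(["the"]))
```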
we construct real polarizable hodge structures on the reduced leafwise cohomology of k \ " ahler - riemann foliations by complex manifolds. as in the classical case one obtains a hard lefschetz theorem for this cohomology. serre ' s k \ " ahlerian analogue of the weil conjectures carries over as well. generalizing a construction of looijenga and lunts one obtains possibly infinite dimensional lie algebras attached to k \ " ahler - riemann foliations. finally using $ ( \ mathfrak { g }, k ) $ - cohomology we discuss a class of examples obtained by dividing a product of symmetric spaces by a cocompact lattice and considering the foliations coming from the factors.
arxiv:math/0204111
it is known that vilenkin ' s phenomenological equation of state for static straight cosmic strings is inconsistent with brans - dicke theory. we will prove that, in the presence of a cosmological constant, this equation of state is consistent with brans - dicke theory. the general solution of the full nonlinear field equations, representing the interior of a cosmic string with a cosmological constant is also presented.
arxiv:gr-qc/0609016
expanding thurston maps form a class of branched covering maps on the topological $ 2 $ - sphere $ s ^ { 2 } $, which are topological models of some non - uniformly expanding rational maps without any smoothness or holomorphicity assumption initially investigated by w. p. thurston, m. bonk, d. meyer, p. ha \ " issinsky, and k. m. pilgrim. the measures of maximal entropy and the absolutely continuous invariant measures for these maps have been studied by these authors, and equilibrium states by the first - named author. in this paper, we initiate the investigation on two new classes of invariant measures, namely, the maximizing measures and ground states, and establish the liv \ v { s } ic theorem, a local anosov closing lemma, and give a positive answer to the typically periodic optimization conjecture from ergodic optimization for these maps. as an application, we establish these results for misiurewicz - - thurston rational maps ( i. e., postcritically - finite rational maps without periodic critical points ) on the riemann sphere including the latt \ ` es maps with respect to the spherical metric. our strategy relies on the visual metrics developed by the above authors. in particular, we verify, in a first non - uniformly expanding setting, the typically periodic optimization conjecture, establishing that for a generic h \ " { o } lder continuous potential, there exists a unique maximizing measure, moreover, this measure is supported on a periodic orbit, it satisfies the locking property, and it is the unique ground state. the expanding thurston maps we consider include those that are not topologically conjugate to rational maps ; in particular, they can have periodic critical points.
arxiv:2303.00514
thermodynamic principles governing energy and information are important tools for a deeper understanding and better control of quantum systems. in this work, we experimentally investigate the interplay of the thermodynamic costs and information flow in a quantum system undergoing iterative quantum measurement and feedback. our study employs a state stabilization protocol involving repeated measurement and feedback on an electronic spin qubit associated with a silicon - vacancy center in diamond, which is strongly coupled to a diamond nanocavity. this setup allows us to verify the fundamental laws of nonequilibrium quantum thermodynamics, including the second law and the fluctuation theorem, both of which incorporate measures of quantum information flow induced by iterative measurement and feedback. we further assess the reducible entropy based on the feedback ' s causal structure and quantitatively demonstrate the thermodynamic advantages of non - markovian feedback over markovian feedback. for that purpose, we extend the theoretical framework of quantum thermodynamics to include the causal structure of the applied feedback protocol. our work lays the foundation for investigating the entropic and energetic costs of real - time quantum control in various quantum systems.
arxiv:2411.06709
we present the first results on the new black hole candidate, maxi j1305 - 704, observed by maxi / gsc. the new x - ray transient, named maxi j1305 - 704, was first detected by the maxi - gsc all - sky survey on 2012 april 9 in the direction to the outer galactic bulge at ( l, b ) = ( 304. 2deg, - 7. 6deg ). the swift / xrt follow - up observation confirmed the uncatalogued point source and localized to the position at ( 13h06m56s. 44, - 70d27 ' 4 ". 91 ). the source continued the activity for about five months until 2012 august. the maxi / gsc light curve in the 2 - - 10 kev band and the variation of the hardness ratio of the 4 - 10 kev to the 2 - 4 kev flux revealed the hard - to - soft state transition on the sixth day ( april 15 ) in the brightening phase and the soft - to - hard transition on the ~ 60th day ( june 15 ) in the decay phase. the luminosity at the initial hard - to - soft transition was significantly higher than that at the soft - to - hard transition in the decay phase. the x - ray spectra in the hard state are represented by a single power - law model with a photon index of ~ 2. 0, while those in the soft state need such an additional soft component as represented by a multi - color disk blackbody emission with an inner disk temperature ~ 0. 5 - - 1. 2 kev. all the obtained features support the source identification of a galactic black - hole binary located in the galactic bulge.
arxiv:1307.2514
coreference resolution - - which is a crucial task for understanding discourse and language at large - - has yet to witness widespread benefits from large language models ( llms ). moreover, coreference resolution systems largely rely on supervised labels, which are highly expensive and difficult to annotate, thus making it ripe for prompt engineering. in this paper, we introduce a qa - based prompt - engineering method and discern \ textit { generative }, pre - trained llms ' abilities and limitations toward the task of coreference resolution. our experiments show that gpt - 2 and gpt - neo can return valid answers, but that their capabilities to identify coreferent mentions are limited and prompt - sensitive, leading to inconsistent results.
arxiv:2205.07407
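In the spirit of the abstract above, a QA-style coreference probe with an off-the-shelf generative model can be sketched as follows. The prompt template, the example passage, and the greedy decoding settings are illustrative assumptions, not the paper's exact prompts or evaluation protocol; running it requires the `transformers` package and downloads the gpt2 checkpoint.

```python
# Sketch of a QA-style prompt probe for coreference with a generative model.
# Prompt wording and scoring are illustrative, not the paper's protocol.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

passage = "The trophy didn't fit in the suitcase because it was too big."
prompt = (f"{passage}\n"
          f"Question: In the passage above, what does 'it' refer to?\n"
          f"Answer:")

output = generator(prompt, max_new_tokens=8, do_sample=False)[0]["generated_text"]
print(output[len(prompt):].strip())
```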
the explicit double sum for the associated laguerre polynomials is derived combinatorially. the moments are described using certain statistics on permutations and permutation tableaux. another derivation of the double sum is provided using only the moment generating function.
arxiv:1501.03880
energy - based latent variable models ( eblvms ) are more expressive than conventional energy - based models. however, their potential on visual tasks is limited by their training process based on maximum likelihood estimation, which requires sampling from two intractable distributions. in this paper, we propose bi - level doubly variational learning ( bidvl ), which is based on a new bi - level optimization framework and two tractable variational distributions to facilitate learning eblvms. particularly, we lead a decoupled eblvm consisting of a marginal energy - based distribution and a structural posterior to handle the difficulties when learning deep eblvms on images. by choosing a symmetric kl divergence in the lower level of our framework, a compact bidvl for visual tasks can be obtained. our model achieves impressive image generation performance over related works. it also demonstrates the significant capacity of testing image reconstruction and out - of - distribution detection.
arxiv:2203.14702
in this paper, we study the gravitational lensing by some black hole classes within the non - linear electrodynamics in weak field limits. first, we calculate an optical geometry of the non - linear electrodynamics black hole then we use the gauss - bonnet theorem for finding deflection angle in weak field limits. the effect of non - linear electrodynamics on the deflection angle in leading order terms is studied. furthermore, we discuss the effects of the plasma medium on the weak deflection angle.
arxiv:2008.06711
estimating historical evapotranspiration ( et ) is essential for understanding the effects of climate change and human activities on the water cycle. this study used historical weather station data to reconstruct et trends over the past 300 years with machine learning. a random forest model, trained on fluxnet2015 flux stations ' monthly data using precipitation, temperature, aridity index, and rooting depth as predictors, achieved an r2 of 0. 66 and a kge of 0. 76 through 10 - fold cross - validation. applied to 5267 weather stations, the model produced monthly et data showing a general increase in global et from 1700 to the present, with a notable acceleration after 1900 due to warming. regional differences were observed, with higher et increases in mid - to - high latitudes of the northern hemisphere and decreases in some mid - to - low latitudes and the southern hemisphere. in drylands, et and temperature were weakly correlated, while in humid areas, the correlation was much higher. the correlation between et and precipitation has remained stable over the centuries. this study extends the et data time span, providing valuable insights into long - term historical et trends and their drivers, aiding in reassessing the impact of historical climate change and human activities on the water cycle and supporting future climate adaptation strategies.
arxiv:2407.16265
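The modelling step described above (a random forest on four predictors, evaluated with 10-fold cross-validation via R^2 and the Kling-Gupta efficiency) can be sketched as below. The synthetic data stands in for the FLUXNET2015 monthly records, and the hyperparameters are illustrative; only the KGE formula is the standard published definition.

```python
# Sketch: random forest regression with 10-fold CV, scored by R^2 and KGE.
# Synthetic predictors/targets stand in for the FLUXNET2015 monthly data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def kge(obs, sim):
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = np.std(sim) / np.std(obs)       # variability ratio
    beta = np.mean(sim) / np.mean(obs)      # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

rng = np.random.default_rng(0)
n = 600
X = np.column_stack([
    rng.gamma(2.0, 50.0, n),      # precipitation (mm), synthetic
    rng.normal(15.0, 8.0, n),     # temperature (deg C), synthetic
    rng.uniform(0.2, 2.0, n),     # aridity index, synthetic
    rng.uniform(0.3, 3.0, n),     # rooting depth (m), synthetic
])
y = 0.4 * X[:, 0] + 3.0 * X[:, 1] + 20 * X[:, 2] + rng.normal(0, 10, n)  # fake ET

preds = np.empty(n)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], y[train])
    preds[test] = model.predict(X[test])

r2 = 1 - np.sum((y - preds) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"10-fold R^2 = {r2:.2f}, KGE = {kge(y, preds):.2f}")
```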
official environmental justice movement was started by a black community in north carolina in 1982. two years later, the toxic methyl isocyanate gas was released to the public from a power plant disaster in bhopal, india, harming hundreds of thousands of people living near the disaster site, the effects of which are still felt today. in a groundbreaking discovery in 1985, a british team of researchers studying antarctica found evidence of a hole in the ozone layer, inspiring global agreements banning the use of chlorofluorocarbons ( cfcs ), which were previously used in nearly all aerosols and refrigerants. notably, in 1986, the meltdown at the chernobyl nuclear power plant in ukraine released radioactive waste to the public, leading to international studies on the ramifications of environmental disasters. over the next couple of years, the brundtland commission ( previously known as the world commission on environment and development ) published a report titled our common future and the montreal protocol formed the international panel on climate change ( ipcc ) as international communication focused on finding solutions for climate change and degradation. in the late 1980s, the exxon valdez company was fined for spilling large quantities of crude oil off the coast of alaska and the resulting cleanup, involving the work of environmental scientists. after hundreds of oil wells were burned in combat in 1991, warfare between iraq and kuwait polluted the surrounding atmosphere just below the air quality threshold environmental scientists believed was life - threatening. = = = = 21st century = = = = many niche disciplines of environmental science have emerged over the years, although climatology is one of the most known topics. since the 2000s, environmental scientists have focused on modeling the effects of climate change and encouraging global cooperation to minimize potential damages. in 2002, the society for the environment as well as the institute of air quality management were founded to share knowledge and develop solutions around the world. later, in 2008, the united kingdom became the first country to pass legislation ( the climate change act ) that aims to reduce carbon dioxide output to a specified threshold. in 2016 the kyoto protocol became the paris agreement, which sets concrete goals to reduce greenhouse gas emissions and restricts earth ' s rise in temperature to a 2 degrees celsius maximum. the agreement is one of the most expansive international efforts to limit the effects of global warming to date. most environmental disasters in this time period involve crude oil pollution or the effects of rising temperatures. in 2010, bp was responsible for the largest american oil spill in the gulf of mexico, known as the
https://en.wikipedia.org/wiki/Environmental_science
the black hole information loss paradox has long been one of the most studied and fascinating aspects of black hole physics. in its latest incarnation, it takes the form of the firewall paradox. in this paper, we first give a conceptually oriented presentation of the paradox, based on the notion of causal structure. we then suggest a possible strategy for its resolution and see that the core idea behind it is that there are connections that are non - local for semiclassical physics, which have to be taken into account when studying black holes. we see how to concretely implement this strategy in some physical models connected to the er = epr conjecture.
arxiv:2108.01939
many recent advances in metal halide perovskite solar cell ( psc ) performance are attributed to surface treatments which passivate interfacial trap states, minimise charge recombination and boost photovoltages. surprisingly, these photovoltages exceed the cells ' built - in potentials, often with large energetic offsets reported between the perovskite and transport layer semiconductor band edges - contradicting standard photovoltaic design principles. here we show that this tolerance to energetic offsets results from mixed ionic / electronic conduction in the perovskite layer. combining drift - diffusion simulations with experiments probing the current - voltage performance of pscs as a function of ion distribution, we demonstrate that electrostatic redistribution of ionic charge reduces surface recombination currents at steady - state, increasing the photovoltage by tens to hundreds of millivolts. thus, mobile ions can reduce the sensitivity of photovoltage to energetic misalignments at perovskite / transport layer interfaces, benefitting overall efficiency. building on these insights, we show how photovoltaic design principles are modified to account for mobile ions.
arxiv:2407.04523
or reporting their concerns. this question is of great importance since much research suggests that it is very difficult for people to act or come forward when they see unacceptable behavior, unless they have help from their organizations. a " user - friendly guide " and the existence of a confidential organizational ombudsman may help people who are uncertain about what to do, or afraid of bad consequences for their speaking up. = = = responsibility of journals = = = journals are responsible for safeguarding the research record and hence have a critical role in dealing with suspected misconduct. this is recognised by the committee on publication ethics ( cope ), which has issued clear guidelines on the form ( e. g. retraction ) that concerns over the research record should take. the cope guidelines state that journal editors should consider retracting a publication if they have clear evidence that the findings are unreliable, either as a result of misconduct ( e. g. data fabrication ) or honest error ( e. g. miscalculation or experimental error ). retraction is also appropriate in cases of redundant publication, plagiarism and unethical research. journal editors should consider issuing an expression of concern if they receive inconclusive evidence of research or publication misconduct by the authors, there is evidence that the findings are unreliable but the authors ' institution will not investigate the case, they believe that an investigation into alleged misconduct related to the publication either has not been, or would not be, fair and impartial or conclusive, or an investigation is underway but a judgement will not be available for a considerable time. journal editors should consider issuing a correction if a small portion of an otherwise reliable publication proves to be misleading ( especially because of honest error ), or the author / contributor list is incorrect ( i. e. a deserving author has been omitted or somebody who does not meet authorship criteria has been included ). evidence emerged in 2012 that journals learning of cases where there is strong evidence of possible misconduct, with issues potentially affecting a large portion of the findings, frequently fail to issue an expression of concern or correspond with the host institution so that an investigation can be undertaken. in one case, nature allowed a corrigendum to be published despite clear evidence of image fraud. subsequent retraction of the paper required the actions of an independent whistleblower. the cases of joachim boldt and yoshitaka fujii in anaesthesiology focussed attention on the role that journals play in perpetuating scientific fraud as well as how they can deal with it.
https://en.wikipedia.org/wiki/Scientific_misconduct
the hubble law, widely considered the first observational basis for the expansion of the universe, may in the future be known as the hubble - lema \ ^ itre law. this is what the general assembly of the international astronomical union recommended at its recent meeting in vienna. however, the resolution in favour of a renamed law is problematic in so far as concerns its arguments based on the history of cosmology in the relevant period from about 1927 to the early 1930s. a critical examination of the resolution reveals flaws of a non - trivial nature. the purpose of this note is to highlight these problems and to provide a better historically informed background for the voting among the union ' s members, which in a few months ' time will result in either a confirmation or a rejection of the decision made by the general assembly.
arxiv:1809.02557
it \ ^ { o } processes are the most common form of continuous semimartingales, and include diffusion processes. this paper is concerned with the nonparametric regression relationship between two such it \ ^ { o } processes. we are interested in the quadratic variation ( integrated volatility ) of the residual in this regression, over a unit of time ( such as a day ). a main conceptual finding is that this quadratic variation can be estimated almost as if the residual process were observed, the difference being that there is also a bias which is of the same asymptotic order as the mixed normal error term. the proposed methodology, ` ` anova for diffusions and it \ ^ { o } processes, ' ' can be used to measure the statistical quality of a parametric model and, nonparametrically, the appropriateness of a one - regressor model in general. on the other hand, it also helps quantify and characterize the trading ( hedging ) error in the case of financial applications.
arxiv:math/0611274
the spin chain formulation of the operator spectrum of the n = 4 super yang - mills theory is haunted by the problem of ` ` wrapping ' ', i. e. the inapplicability of the formalism for short spin chain length at high loop - order. the first instance of wrapping concerns the fourth anomalous dimension of the konishi operator. while we do not obtain this number yet, we lay out an operational scheme for its calculation. the approach passes through a five - and six - loop sector. we show that all but one of the feynman integrals from this sector are related to five master graphs which ought to be calculable by the method of partial integration. the remaining supergraph is argued to be vanishing or finite ; a numerical treatment should be possible. the number of numerator terms remains small even if a further four - loop sector is included. there is no need for infrared rearrangements.
arxiv:0712.3513
[ abridged ] the chemical composition of two stars in wlm has been determined from high quality uves data obtained at the vlt ut2 ( program 65. n - 0375 ). the model atmospheres analysis shows that they have the same metallicity, [ fe / h ] = - 0. 38 + / - 0. 20, and [ mg / fe ] = - 0. 24 + / - 0. 16. this result suggests that the [ alpha ( mg ) / fe ] ratio in wlm may be suppressed relative to solar abundances ( also supported by differential abundances relative to similar stars in ngc6822 and the smc ). the absolute mg abundance, [ mg / h ] = - 0. 62 is high relative to what is expected from the nebulae though, where two independent spectroscopic analyses of the hii regions in wlm yield [ o / h ] = - 0. 89. intriguingly, the oxygen abundance determined from the oi 6158 feature in one wlm star is [ o / h ] = - 0. 21 + / - 0. 10, corresponding to five times higher than the nebular oxygen abundance. this is the first time that a significant difference between young stellar and nebular oxygen abundances has been found, and presently, there is no simple explanation for this difference. if the stellar abundances reflect the true composition of wlm, then this galaxy lies well above the metallicity - luminosity relationship for dwarf irregular galaxies. it also suggests that wlm is more chemically evolved than currently interpreted from its color - magnitude diagram.
arxiv:astro-ph/0306160
in 1999, molodtsov initiated the theory of soft sets as a new mathematical tool for dealing with uncertainties in many fields of applied sciences. in 2011, shabir and naz introduced and studied the notion of soft topological spaces, also defining and investigating many new soft properties as generalization of the classical ones. in this paper, we introduce the notions of soft separation between soft points and soft closed sets in order to obtain a generalization of the well - known embedding lemma to the class of soft topological spaces.
arxiv:1905.13050
under certain mild conditions, limit theorems for additive functionals of some $ d $ - dimensional self - similar gaussian processes are obtained. these limit theorems work for general gaussian processes including fractional brownian motions, sub - fractional brownian motions and bi - fractional brownian motions. to prove these results, we use the method of moments and an enhanced chaining argument. the gaussian processes under consideration are required to satisfy a certain strong local nondeterminism property. a tractable sufficient condition for the strong local nondeterminism property is given and it only relies on the covariance functions of the gaussian processes. moreover, we give a sufficient condition for the distribution function of a random vector to be determined by its moments.
arxiv:2305.13146
we report new results on the lmc globular cluster ngc 1866 obtained by analyzing f555w and f814w images from wfpc2 @ hst. on the basis of the cmd we derive information on the cluster distance and constraints on stellar evolution theory. evidence of mass segregation is found in the cluster core.
arxiv:astro-ph/0209626
security has become a main concern for the smart grid to move from research and development to industry. the concept of security has usually referred to resistance to threats by an active or passive attacker. however, since smart meters ( sms ) are often placed in unprotected areas, physical security has become one of the important security goals in the smart grid. physical unclonable functions ( pufs ) have been largely utilized for ensuring physical security in recent years, though their reliability has remained a major problem to be practically used in cryptographic applications. although fuzzy extractors have been considered as a solution to solve the reliability problem of pufs, they impose a considerable computational cost on the resource - constrained sms. to that end, we first propose an on - chip - error - correcting ( ocec ) puf that efficiently generates stable digits for the authentication process. afterward, we introduce a lightweight authentication protocol between the sms and neighborhood gateway ( ng ) based on the proposed puf. the provable security analysis shows that the proposed protocol not only remains secure in the canetti - krawczyk ( ck ) adversary model but also provides additional security features. also, the performance evaluation demonstrates the significant improvement of the proposed scheme in comparison with the state - of - the - art.
arxiv:2307.12374
clustering bipartite graphs is a fundamental task in network analysis. in the high - dimensional regime where the number of rows $ n _ 1 $ and the number of columns $ n _ 2 $ of the associated adjacency matrix are of different order, existing methods derived from the ones used for symmetric graphs can come with sub - optimal guarantees. due to increasing number of applications for bipartite graphs in the high dimensional regime, it is of fundamental importance to design optimal algorithms for this setting. the recent work of ndaoud et al. ( 2022 ) improves the existing upper - bound for the misclustering rate in the special case where the columns ( resp. rows ) can be partitioned into $ l = 2 $ ( resp. $ k = 2 $ ) communities. unfortunately, their algorithm cannot be extended to the more general setting where $ k \ neq l \ geq 2 $. we overcome this limitation by introducing a new algorithm based on the power method. we derive conditions for exact recovery in the general setting where $ k \ neq l \ geq 2 $, and show that it recovers the result in ndaoud et al. ( 2022 ). we also derive a minimax lower bound on the misclustering error when $ k = l $ under a symmetric version of our model, which matches the corresponding upper bound up to a factor depending on $ k $.
arxiv:2205.12104
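The abstract above concerns a power-method algorithm for clustering bipartite graphs with unequal numbers of row and column communities. The sketch below shows the general flavour of such approaches (power iterations on the biadjacency matrix to embed rows and columns, then separate k-means steps); it is not the specific algorithm or the exact-recovery procedure analysed in the paper, and the planted block model is a toy.

```python
# Generic sketch: power iterations on a bipartite biadjacency matrix give row
# and column embeddings, which are then clustered separately with k-means.
import numpy as np
from sklearn.cluster import KMeans

def power_embedding(B, dim, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    V = rng.normal(size=(B.shape[1], dim))
    for _ in range(iters):
        U, _ = np.linalg.qr(B @ V)      # orthonormalise row-side factors
        V, _ = np.linalg.qr(B.T @ U)    # orthonormalise column-side factors
    return B @ V, V                     # row embedding, column embedding

# toy planted bipartite block model: K = 3 row groups, L = 2 column groups
rng = np.random.default_rng(1)
row_lab = rng.integers(0, 3, 300)
col_lab = rng.integers(0, 2, 40)
P = np.array([[0.8, 0.1], [0.1, 0.8], [0.5, 0.5]])
B = (rng.random((300, 40)) < P[row_lab][:, col_lab]).astype(float)

row_emb, col_emb = power_embedding(B, dim=2)
row_clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(row_emb)
col_clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(col_emb)
print(np.bincount(row_clusters), np.bincount(col_clusters))
```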
we optimize chiral interactions at next - to - next - to leading order to observables in two - and three - nucleon systems, and compute gamow - teller transitions in carbon - 14, oxygen - 22 and oxygen - 24 using consistent two - body currents. we compute spectra of the daughter nuclei nitrogen - 14, fluorine - 22 and fluorine - 24 via an isospin - breaking coupled - cluster technique, with several predictions. the two - body currents reduce the ikeda sum rule, corresponding to a quenching factor q ^ 2 ~ 0. 84 - 0. 92 of the axial - vector coupling. the half life of carbon - 14 depends on the energy of the first excited 1 + state, the three - nucleon force, and the two - body current.
arxiv:1406.4696
phenalenyl ( c $ _ { 13 } $ h $ _ 9 $ ) is an open - shell spin - $ 1 / 2 $ nanographene. using scanning tunneling microscopy ( stm ) inelastic electron tunneling spectroscopy ( iets ), covalently - bonded phenalenyl dimers have been shown to feature conductance steps associated with singlet - triplet excitations of a spin - $ 1 / 2 $ dimer with antiferromagnetic exchange. here, we address the possibility of tuning the magnitude of the exchange interactions by varying the dihedral angle between the two molecules within a dimer. theoretical methods, ranging from density functional theory calculations to many - body model hamiltonians solved within different levels of approximation, are used to explain stm - iets measurements of twisted phenalenyl dimers on a h - bn / rh ( 111 ) surface. by means of first - principles calculations, we also propose strategies to induce sizable twist angles in surface - adsorbed phenalenyl dimers via functional groups, including a photoswitchable scheme. this work paves the way toward tuning magnetic couplings in carbon - based spin chains and two - dimensional lattices.
arxiv:2407.11506
we develop a covariant method for studying the effects of a reheating phase on the primordial adiabatic and isocurvature perturbations in two - field models of inflation. to model the decay of the scalar fields into radiation at the end of inflation, we introduce a prescription in which radiation is treated as an additional effective scalar field, requiring us to extend the two - field setup into a three - field system. in this prescription, the coupling between radiation and the scalars can be interpreted covariantly in terms of geometrical quantities that parametrize the evolution of a background trajectory in a three - field space. in order to obtain concrete results, we consider two scenarios characterized by having unsuppressed isocurvature fluctuations at the end of inflation : ( 1 ) canonical two - field inflation with the product exponential potential, which sources a large negative amount of non - gaussianity, and ( 2 ) two - field inflation with an ultra - light field, a model in which the isocurvature mode becomes approximately massless, and its interaction with the curvature perturbation persists during the entire period of inflation. in both cases we discuss how their predictions are modified by the coupling of the scalar fields to the radiation fluid.
arxiv:1805.10360
this paper studies the input queued switch operating under the maxweight algorithm when the arrivals are according to a markovian process. we exactly characterize the heavy - traffic scaled mean sum queue length in the heavy - traffic limit, and show that it is within a factor of less than $ 2 $ from a universal lower bound. moreover, we obtain lower and upper bounds that are applicable in all traffic regimes and become tight in the heavy - traffic regime. the paper obtains these results by generalizing the drift method recently developed for the case of i. i. d. arrivals, to the case of markovian arrivals. the paper illustrates this generalization by first obtaining the heavy - traffic mean queue length and its distribution in a single server queue under markovian arrivals and then applying it to the case of the input queued switch. the key idea is to exploit the geometric mixing of finite - state markov chains, and to work with a time horizon that is picked so that the error due to mixing depends on the heavy - traffic parameter.
arxiv:2006.06150
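As a companion to the abstract above, here is a minimal simulation sketch (not the paper's drift analysis) of a discrete-time single-server queue with Markov-modulated arrivals; it only illustrates how one can measure the mean queue length as the load approaches capacity. The two-state arrival chain and its rates are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov-modulated Bernoulli arrivals: state 0 is "slow", state 1 is "bursty".
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])           # transition matrix of the modulating chain
arrival_prob = np.array([0.3, 0.9])  # per-slot arrival probability in each state
service_prob = 0.7                   # per-slot service completion probability

# stationary distribution of P -> mean arrival rate and load
pi = np.linalg.solve(np.vstack([(P.T - np.eye(2))[:-1], np.ones(2)]), np.array([0.0, 1.0]))
lam = pi @ arrival_prob
print(f"load rho = {lam / service_prob:.3f}")

T = 200_000
q, state, total = 0, 0, 0.0
for t in range(T):
    state = rng.choice(2, p=P[state])
    q += rng.random() < arrival_prob[state]     # arrival in this slot
    if q > 0 and rng.random() < service_prob:   # service completion
        q -= 1
    total += q
print(f"empirical mean queue length: {total / T:.2f}")
```

Pushing `service_prob` toward `lam` reproduces, empirically, the heavy-traffic growth of the mean queue length that the paper characterizes analytically.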
the controversial problem of an isolated system with an internal adiabatic wall is investigated with the use of a simple microscopic model and the boltzmann equation. in the case of two infinite volume one - dimensional ideal fluids separated by a piston whose mass is equal to the mass of the fluid particles we obtain a rigorous explicit stationary non - equilibrium solution of the boltzmann equation. it is shown that at equal pressures on both sides of the piston, the temperature difference induces a non - zero average velocity, oriented toward the region of higher temperature. it thus turns out that despite the absence of macroscopic forces the asymmetry of fluctuations results in a systematic macroscopic motion. this remarkable effect is analogous to the dynamics of stochastic ratchets, where fluctuations conspire with spatial anisotropy to generate direct motion. however, a different mechanism is involved here. the relevance of the discovered motion to the adiabatic piston problem is discussed.
arxiv:cond-mat/9810196
experimental results from the cdf experiment at the tevatron in $ p \ bar { p } $ collisions at $ \ sqrt { s } $ = 1. 96 tev are presented on the diffractive structure function at different values of the exchanged momentum transfer squared in the range $ 0 < q ^ 2 < 10, 000 $ gev $ ^ 2 $, on the four - momentum transfer $ | t | $ distribution in the region $ 0 < | t | < 1 $ gev $ ^ 2 $ for both soft and hard diffractive events up to $ q ^ 2 \ approx 4, 500 $ gev $ ^ 2 $, and on the first experimental evidence of exclusive production in both dijet and diphoton events. a novel technique to align the roman pot detectors is also presented.
arxiv:hep-ex/0606024
for two meromorphic functions $ f $ and $ g $, the equation $ f ^ m + g ^ m = 1 $ can be regarded as fermat - type equations. using nevanlinna theory for meromorphic functions in several complex variables, the main purpose of this paper is to investigate the properties of the transcendental entire solutions of fermat - type difference and partial differential - difference equations in $ \ mathbb { c } ^ n $. in addition, we find the precise form of the transcendental entire solutions in $ \ mathbb { c } ^ 2 $ with finite order of the fermat - type partial differential - difference equation $ $ \ left ( \ frac { \ partial f ( z _ 1, z _ 2 ) } { \ partial z _ 1 } \ right ) ^ 2 + ( f ( z _ 1 + c _ 1, z _ 2 + c _ 2 ) - f ( z _ 1, z _ 2 ) ) ^ 2 = 1 $ $ and $ $ f ^ 2 ( z _ 1, z _ 2 ) + p ^ 2 ( z _ 1, z _ 2 ) \ left ( \ frac { \ partial f ( z _ 1 + c _ 1, z _ 2 + c _ 2 ) } { \ partial z _ 1 } - \ frac { \ partial f ( z _ 1, z _ 2 ) } { \ partial z _ 1 } \ right ) ^ 2 = 1, $ $ where $ p ( z _ 1, z _ 2 ) $ is a polynomial in $ \ mathbb { c } ^ 2 $. moreover, one of the main results of the paper significantly improved the result of xu and cao [ mediterr. j. math. ( 2018 ) 15 : 227, 1 - 14 and mediterr. j. math. ( 2020 ) 17 : 8, 1 - 4 ].
arxiv:2201.10513
data clustering is the process of arranging similar data into groups. a clustering algorithm partitions a data set into several groups such that the similarity within a group is higher than the similarity between groups. in this paper a hybrid clustering algorithm based on k - means and k - harmonic means ( khm ) is described. the proposed algorithm is tested on five different datasets. the research is focused on fast and accurate clustering. its performance is compared with the traditional k - means and khm algorithms. the results obtained from the proposed hybrid algorithm are much better than those of the traditional k - means and khm algorithms.
arxiv:1205.5353
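The abstract above does not spell out the hybrid, so the following is only a plausible minimal sketch: initialize centers with ordinary k-means, then refine them with k-harmonic-means-style soft updates. The exponent `p`, the iteration count, and the toy data are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# toy data standing in for the five benchmark datasets mentioned in the abstract
X, _ = make_blobs(n_samples=500, centers=4, cluster_std=1.2, random_state=1)

# stage 1: ordinary k-means provides the initial centers
centers = KMeans(n_clusters=4, n_init=10, random_state=1).fit(X).cluster_centers_

# stage 2: k-harmonic-means style refinement (soft memberships, p > 2)
p = 3.5
for _ in range(50):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-8  # (n, k) distances
    m = d ** (-p - 2)
    m /= m.sum(axis=1, keepdims=True)                                # soft memberships
    w = (d ** (-p - 2)).sum(axis=1) / (d ** (-p)).sum(axis=1) ** 2   # per-point weights
    mw = m * w[:, None]
    centers = (mw[:, :, None] * X[:, None, :]).sum(axis=0) / mw.sum(axis=0)[:, None]

labels = np.argmin(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=1)
print(centers)
```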
we define stacky lie groups to be group objects in the 2 - category of differentiable stacks. we show that every connected and etale stacky lie group is equivalent to a crossed module of the form ( h, g ) where h is the fundamental group of the given stacky lie group and g is the connected and simply connected lie group integrating the lie algebra of the stacky group. our result is closely related to a strictification result of baez and lauda.
arxiv:1006.1262
we perform a non - perturbative study of the scale - dependent renormalization factors of a multiplicatively renormalizable basis of $ \ delta { b } = 2 $ parity - odd four - fermion operators in quenched lattice qcd. heavy quarks are treated in the static approximation with various lattice discretizations of the static action. light quarks are described by non - perturbatively $ { \ rm o } ( a ) $ improved wilson - type fermions. the renormalization group running is computed for a family of schroedinger functional ( sf ) schemes through finite volume techniques in the continuum limit. we compute non - perturbatively the relation between the renormalization group invariant operators and their counterparts renormalized in the sf at a low energy scale. furthermore, we provide non - perturbative estimates for the matching between the lattice regularized theory and all the sf schemes considered.
arxiv:0706.4153
it is known that the usual schur $ s $ - and $ p $ - polynomials can be described via the gysin homomorphisms for flag bundles in the ordinary cohomology theory. recently, p. pragacz generalized these gysin formulas to the hall - littlewood polynomials. in this paper, we introduce a { \ it universal } analogue of the hall - littlewood polynomials, which we call the { \ it universal hall - littlewood functions }, and give gysin formulas for various flag bundles in the complex cobordism theory. furthermore, we give two kinds of the { \ it universal } analogue of the schur polynomials, and some gysin formulas for these functions are established.
arxiv:1604.00451
we present a comprehensive analysis of the presence of very massive stars ( vms > $ 100 m _ { \ odot } $ ) in the integrated spectra of 13 uv - bright star - forming galaxies at $ 2. 2 \ lesssim z \ lesssim 3. 6 $ taken with the gran telescopio canarias ( gtc ). these galaxies have very high uv absolute magnitudes ( $ m _ { \ rm uv } \ simeq - 24 $ ), intense star formation ( sfr $ \ simeq 100 - 1000 $ $ m _ { \ odot } $ yr $ ^ { - 1 } $ ), and metallicities in the range of 12 + log ( o / h ) $ \ simeq8. 10 - 8. 50 $ inferred from strong rest - optical lines. the gtc rest - uv spectra reveal spectral features indicative of very young stellar populations with vms, such as strong p - cygni line profiles in the wind lines n ~ { \ sc v } $ \ lambda 1240 $ and c ~ { \ sc iv } $ \ lambda 1550 $ along with intense and broad he ~ { \ sc ii } $ \ lambda 1640 $ emission with $ ew _ { 0 } $ $ \ simeq 1. 40 - 4. 60 $ \ aa, and fwhm $ \ simeq 1150 - 3170 $ $ km \ s ^ { - 1 } $. a comparison with known vms - dominated sources and typical galaxies without vms reveals that some uv - bright galaxies closely resemble vms - dominated clusters ( e. g., r136 cluster ). the presence of vms is further supported by a quantitative comparison of the observed strength of the he ~ { \ sc ii } emission with population synthesis models with and without vms, where models with vms are clearly preferred. employing an empirical threshold for $ ew _ { 0 } $ ( \ heii ) $ \ geq 3. 0 $ \ aa, along with the detection of other vms - related spectral profiles ( n ~ { \ sc iv } $ \ lambda 1486, 1719 $ ), we classify nine out of 13 uv - bright galaxies as vms - dominated sources. this high incidence of vms - dominated sources in the uv - bright galaxy population ( $ \ approx 70 \ % $ ) contrasts significantly with the negligible presence of vms in typical $ l _ { \ rm uv } ^ {
arxiv:2401.16165
we study linear operators $ t $ on banach spaces for which there exists a $ c _ 0 $ - semigroup $ ( t ( t ) ) _ { t \ geq 0 } $ such that $ t = t ( 1 ) $. we present a necessary condition in terms of the spectral value 0 and give classes of examples where this can or cannot be achieved.
arxiv:0808.2419
in this paper we investigate lott - sturm - villani ' s synthetic lower ricci curvature bound on riemannian manifolds with boundary. we prove several measure rigidity results for some important functional and geometric inequalities, which completely characterize $ { \ rm cd } ( k, \ infty ) $ condition and non - collapsed $ { \ rm cd } ( k, n ) $ condition on riemannian manifolds with boundary. in particular, using $ l ^ 1 $ - optimal transportation theory, we prove that $ { \ rm cd } ( k, \ infty ) $ condition implies geodesical convexity.
arxiv:1902.00942
quantum optimal control has enjoyed wide success for a variety of theoretical and experimental objectives. these favorable results have been attributed to advantageous properties of the corresponding control landscapes, which are free from local optima if three conditions are met : ( 1 ) the quantum system is controllable, ( 2 ) the jacobian of the map from the control field to the evolution operator is full rank, and ( 3 ) the control field is not constrained. this paper explores how gradient searches for globally optimal control fields are affected by deviations from assumption ( 2 ). in some quantum control problems, so - called singular critical points, at which the jacobian is rank - deficient, may exist on the landscape. using optimal control simulations, we show that search failure is only observed when a singular critical point is also a second - order trap, which occurs if the control problem meets additional conditions involving the system hamiltonian and / or the control objective. all known second - order traps occur at constant control fields, and we also show that they only affect searches that originate very close to them. as a result, even when such traps exist on the control landscape, they are unlikely to affect well - designed gradient optimizations under realistic searching conditions.
arxiv:1405.0204
super - resolution methods form high - resolution images from low - resolution images. in this paper, we develop a new bayesian nonparametric model for super - resolution. our method uses a beta - bernoulli process to learn a set of recurring visual patterns, called dictionary elements, from the data. because it is nonparametric, the number of elements found is also determined from the data. we test the results on both benchmark and natural images, comparing with several other models from the research literature. we perform large - scale human evaluation experiments to assess the visual quality of the results. in a first implementation, we use gibbs sampling to approximate the posterior. however, this algorithm is not feasible for large - scale data. to circumvent this, we then develop an online variational bayes ( vb ) algorithm. this algorithm finds high quality dictionaries in a fraction of the time needed by the gibbs sampler.
arxiv:1209.5019
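To make the beta-Bernoulli idea in the abstract above concrete, here is a minimal numpy sketch of the generative step only: each image patch switches on a sparse subset of dictionary elements drawn from per-element beta probabilities. It is not the Gibbs or online-VB inference the paper develops, and the truncation level, patch size, and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

K, P, N = 64, 8 * 8, 200       # truncation level, patch dimension, number of patches
a = 4.0                        # sparsity parameter (illustrative)

pi = rng.beta(a / K, 1.0, size=K)               # per-element usage probabilities
D = rng.normal(0.0, 1.0 / np.sqrt(P), (P, K))   # dictionary elements as columns

Z = rng.random((N, K)) < pi                     # which elements each patch uses
S = rng.normal(0.0, 1.0, (N, K))                # weights for the used elements
patches = (Z * S) @ D.T + rng.normal(0.0, 0.01, (N, P))  # noisy synthesized patches

print("average number of dictionary elements per patch:", Z.sum(axis=1).mean())
```

Because the number of active elements per patch is governed by the beta draws rather than fixed in advance, the effective dictionary size is itself inferred from data in the full model.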
the dissemination of hateful memes online has adverse effects on social media platforms and the real world. detecting hateful memes is challenging, one of the reasons being the evolutionary nature of memes ; new hateful memes can emerge by fusing hateful connotations with other cultural ideas or symbols. in this paper, we propose a framework that leverages multimodal contrastive learning models, in particular openai ' s clip, to identify targets of hateful content and systematically investigate the evolution of hateful memes. we find that semantic regularities exist in clip - generated embeddings that describe semantic relationships within the same modality ( images ) or across modalities ( images and text ). leveraging this property, we study how hateful memes are created by combining visual elements from multiple images or fusing textual information with a hateful image. we demonstrate the capabilities of our framework for analyzing the evolution of hateful memes by focusing on antisemitic memes, particularly the happy merchant meme. using our framework on a dataset extracted from 4chan, we find 3. 3k variants of the happy merchant meme, with some linked to specific countries, persons, or organizations. we envision that our framework can be used to aid human moderators by flagging new variants of hateful memes so that moderators can manually verify them and mitigate the problem of hateful content online.
arxiv:2212.06573
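The framework in the abstract above is richer than a nearest-neighbour search, but the core retrieval step can be illustrated as cosine similarity in an embedding space. In this sketch the `embed_images` function is a placeholder standing in for a CLIP image encoder, and the file names, corpus size, and threshold are hypothetical.

```python
import numpy as np

def embed_images(paths):
    """Placeholder for a CLIP-style image encoder; returns random unit vectors
    so the sketch runs without any model weights."""
    rng = np.random.default_rng(0)
    v = rng.normal(size=(len(paths), 512))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

meme_paths = [f"meme_{i:05d}.png" for i in range(10_000)]   # hypothetical corpus
seed_paths = ["happy_merchant_seed.png"]                    # hypothetical seed image

corpus = embed_images(meme_paths)
seed = embed_images(seed_paths)[0]

# on unit-normalized embeddings, cosine similarity is just a dot product
sims = corpus @ seed
threshold = 0.80                     # similarity cut-off, would be tuned and validated
variant_idx = np.where(sims >= threshold)[0]
for i in variant_idx[np.argsort(-sims[variant_idx])][:20]:
    print(meme_paths[i], f"{sims[i]:.3f}")
```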
the destabilization mechanism of the collisional microtearing mode driven by an electron temperature gradient is studied using theoretical analyses and gyrokinetic simulations including a comprehensive collision model, in magnetized slab plasmas. the essential destabilization mechanism of the microtearing mode is the lag of the parallel inductive electric field behind the magnetic field owing to the time - dependent thermal force and inertia force induced by the velocity - dependent electron - - ion collisions. quantitative measurements of the collision effects enable us to identify the unstable regime against collisionality and reveal the relevance of the collisional microtearing mode with existing toroidal experiments. a nonlinear simulation demonstrates that the microtearing mode does not drive magnetic reconnection with the explosive release and conversion of the magnetic energy.
arxiv:2212.09283
we present a clustering analysis of qsos over the redshift range z = 0. 3 - 2. 9. we use a sample of 10558 qsos taken from the preliminary catalogue of the 2df qso redshift survey ( 2qz ). the two - point redshift - space correlation function of qsos is shown to follow a power law on scales $ s \sim 1 - 35 \, h ^ { - 1 } $ mpc. fitting a power law to qso clustering averaged over the redshift interval 0. 3 < z < 2. 9 we find $ s _ 0 = 3. 99 ^ { + 0. 28 } _ { - 0. 34 } \, h ^ { - 1 } $ mpc and $ \gamma = 1. 58 ^ { + 0. 10 } _ { - 0. 09 } $ for an einstein - de sitter cosmology ( eds ). with omega _ 0 = 0. 3 and lambda _ 0 = 0. 7 the power law extends to $ s \sim 60 \, h ^ { - 1 } $ mpc with a best fit of $ s _ 0 = 5. 69 ^ { + 0. 42 } _ { - 0. 50 } \, h ^ { - 1 } $ mpc and $ \gamma = 1. 56 ^ { + 0. 10 } _ { - 0. 09 } $. these values, measured at a mean redshift of z = 1. 49, are comparable to the clustering of local optically selected galaxies. we measure the evolution of qso clustering as a function of redshift. for an eds cosmology there is no evolution in comoving coordinates over the redshift range of the 2qz. for omega _ 0 = 0. 3 and lambda _ 0 = 0. 7 qso clustering shows a marginal increase at high redshift. although the clustering of qsos is measured on large scales where linear theory should apply, the evolution of qso clustering does not follow the linear theory predictions for growth via gravitational instability ( rejected at the > 99 per cent confidence level ). a redshift dependent bias is required to reconcile qso clustering observations with theory. a simple biasing model, in which qsos have cosmologically long lifetimes ( or alternatively form in peaks above a constant threshold in the density field ), is acceptable in an eds cosmology, but is only marginally acceptable if omega _ 0 = 0. 3 and lambda _ 0 = 0. 7. biasing models which assume qsos form over a range in redshift, based on the press - schechter formalism, are approximately consistent with qso clustering evolution ( abridged ).
arxiv:astro-ph/0012375
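For readers unfamiliar with the fitting step described above, here is a minimal scipy sketch of fitting the power law $\xi(s) = (s/s_0)^{-\gamma}$ to a measured two-point correlation function. The data points and error bars below are invented placeholders, not the 2QZ measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def xi_model(s, s0, gamma):
    """Power-law two-point correlation function xi(s) = (s / s0) ** (-gamma)."""
    return (s / s0) ** (-gamma)

# illustrative separations (h^-1 Mpc) and correlation estimates with errors;
# real values would come from the survey's pair counts
s = np.array([1.5, 2.5, 4.0, 6.5, 10.0, 16.0, 25.0, 35.0])
xi = xi_model(s, 4.0, 1.6) * (1 + 0.1 * np.random.default_rng(2).normal(size=s.size))
err = 0.1 * xi + 0.02

(p_s0, p_gamma), cov = curve_fit(xi_model, s, xi, p0=[4.0, 1.5],
                                 sigma=err, absolute_sigma=True)
print(f"s0 = {p_s0:.2f} h^-1 Mpc, gamma = {p_gamma:.2f}")
```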
##computability. the theory of semantics of programming languages is related to model theory, as is program verification ( in particular, model checking ). the curry – howard correspondence between proofs and programs relates to proof theory, especially intuitionistic logic. formal calculi such as the lambda calculus and combinatory logic are now studied as idealized programming languages. computer science also contributes to mathematics by developing techniques for the automatic checking or even finding of proofs, such as automated theorem proving and logic programming. descriptive complexity theory relates logics to computational complexity. the first significant result in this area, fagin ' s theorem ( 1974 ) established that np is precisely the set of languages expressible by sentences of existential second - order logic. = = foundations of mathematics = = in the 19th century, mathematicians became aware of logical gaps and inconsistencies in their field. it was shown that euclid ' s axioms for geometry, which had been taught for centuries as an example of the axiomatic method, were incomplete. the use of infinitesimals, and the very definition of function, came into question in analysis, as pathological examples such as weierstrass ' nowhere - differentiable continuous function were discovered. cantor ' s study of arbitrary infinite sets also drew criticism. leopold kronecker famously stated " god made the integers ; all else is the work of man, " endorsing a return to the study of finite, concrete objects in mathematics. although kronecker ' s argument was carried forward by constructivists in the 20th century, the mathematical community as a whole rejected them. david hilbert argued in favor of the study of the infinite, saying " no one shall expel us from the paradise that cantor has created. " mathematicians began to search for axiom systems that could be used to formalize large parts of mathematics. in addition to removing ambiguity from previously naive terms such as function, it was hoped that this axiomatization would allow for consistency proofs. in the 19th century, the main method of proving the consistency of a set of axioms was to provide a model for it. thus, for example, non - euclidean geometry can be proved consistent by defining point to mean a point on a fixed sphere and line to mean a great circle on the sphere. the resulting structure, a model of elliptic geometry, satisfies the axioms of plane geometry except the parallel postulate. with the development of formal logic, hilbert asked whether it would be possible to prove that an axiom system is consistent
https://en.wikipedia.org/wiki/Mathematical_logic
the second h. weyl curvature invariant of a riemannian manifold, denoted $ h _ 4 $, is the second curvature invariant which appears in the well known tube formula of h. weyl. it coincides with the gauss - bonnet integrand in dimension 4. a crucial property of $ h _ 4 $ is that it is nonnegative for einstein manifolds, hence it provides a geometric obstruction to the existence of einstein metrics in dimensions $ \ geq 4 $, independently from the sign of the einstein constant. this motivates our study of the positivity of this invariant. here in this paper, we prove many constructions of metrics with positive second h. weyl curvature invariant, generalizing similar well known results for the scalar curvature.
arxiv:math/0403286
simplicial lattices provide an elegant framework for discrete spacetimes. the inherent orthogonality between a simplicial lattice and its circumcentric dual yields an austere representation of spacetime which provides a conceptually simple form of einstein ' s geometric theory of gravitation. a sufficient understanding of simplicial spacetimes has been demonstrated in the literature for spacetimes devoid of all non - gravitational sources. however, this understanding has not been adequately extended to non - vacuum spacetime models. consequently, a deep understanding of the diffeomorphic structure of the discrete theory is lacking. conservation laws and symmetry properties are attractive starting points for coupling matter with the lattice. we present a simplicial form of the contracted bianchi identity which is based on the e. cartan moment of rotation operator. this identity manifests itself in the conceptually - simple form of a kirchhoff - like conservation law. this conservation law enables one to extend regge calculus to non - vacuum spacetimes and provides a deeper understanding of the simplicial diffeomorphism group.
arxiv:0807.3041
we introduce the quantum dimer - pentamer model ( qdpm ) on the square lattice. this model is a generalization of the square lattice quantum dimer model as its configuration space comprises fully - packed hard - core dimer coverings as well as dimer configurations containing pentamers, where four dimers touch a vertex. thus in the qdpm, the fully - packed, hard - core constraint of the quantum dimer model is relaxed such that the local dimer number at each vertex is fixed modulo 3, resulting in an exact local $ z _ 3 $ gauge symmetry. we construct a local hamiltonian for which the rokhsar - kivelson ( rk ) equal superposition state is the exact ground state and has a 9 - fold topological degeneracy on the torus. using monte carlo calculations, we find no spontaneous symmetry breaking in the rk wavefunction and that its dimer - dimer correlation function decays exponentially. by doping the qdpm rk state with a pair of monomers, we demonstrate that $ z _ 3 $ electric charges are deconfined. additionally, we introduce a $ z _ 3 $ magnetic string operator whose correlations we find decay exponentially, showing no signatures of magnetic vortex condensation. these numerical results suggest that the ground state of the qdpm is a dimer liquid with $ z _ 3 $ topological order.
arxiv:1704.02063
detecting mechanical fatigue of metallic components is always a challenge in industries. in this work, we proposed to monitor the low - cycle fatigue of a 6061 aluminum alloy based on internal friction ( if ) measurement, which is realized by a quantitative electromechanical impedance ( q - emi ) method using a small piezoelectric wafer bonded on the specimen. large strain amplitude ( 3. 3 * 10 ^ - 3 ) was employed thus the fatigue life can always be below 10 ^ 5 cycles. it was found that except for the initial testing stage, the if always increases steadily with the increasing fatigue cycles. before the fatigue failure, the if can reach 2. 5 to 3. 4 times of the initial value, which is thought to be caused by the micro - cracks forming and growing. in comparison, the resonance frequency of the specimen just drops less than 2 % compared with the initial value. finally, a general fatigue criterion based on if measurement is suggested for all the metallic materials.
arxiv:2109.12759
we consider a hydrodynamic model of self - organized evolution of agents, with singular interaction kernel $ \ phi _ \ alpha ( x ) = 1 / | x | ^ { 1 + \ alpha } $ ( $ 0 < \ alpha < 2 $ ), in the presence of an additional external force. well - posedness results are already available for the unforced system in classical regularity spaces. we define a notion of solution in larger function spaces, in particular in $ l ^ \ infty $ ( " weak solutions " ) and in $ w ^ { 1, \ infty } $ ( " strong solutions " ), and we discuss existence and uniqueness of these solutions. furthermore, we show that several important properties of classical solutions carry over to these less regular ones. in particular, we give onsager - type criteria for the validity of the natural energy law for weak solutions of the system, and we show that fast alignment ( weak and strong solutions ) and flocking ( strong solutions ) still occur in the forceless case.
arxiv:1803.01101
quantum dots inserted inside semiconductor nanowires are extremely promising candidates as building blocks for solid - state based quantum computation and communication. they provide very high crystalline and optical properties and offer a convenient geometry for electrical contacting. having a complete determination and full control of their emission properties is one of the key goals of nanoscience researchers. here we use strain as a tool to create in a single magnetic nanowire quantum dot a light - hole exciton, an optically active quasiparticle formed from a single electron bound to a single light hole. in this frame, we provide a general description of the mixing within the hole quadruplet induced by strain or confinement. a multi - instrumental combination of cathodoluminescence, polarisation - resolved fourier imaging and magneto - optical spectroscopy, allow us to fully characterize the hole ground state, including its valence band mixing with heavy hole states.
arxiv:1612.01563
we give a review, in the style of an essay, of the author ' s 1998 matter - gravity entanglement hypothesis which, unlike the standard approach to entropy based on coarse - graining, offers a definition for the entropy of a closed system as a real and objective quantity. we explain how this approach offers an explanation for the second law of thermodynamics in general and a non - paradoxical understanding of information loss during black hole formation and evaporation in particular. it also involves a radically different from usual description of black hole equilibrium states in which the total state of a black hole in a box together with its atmosphere is a pure state - - entangled in just such a way that the reduced state of the black hole and of its atmosphere are each separately approximately thermal. we also briefly recall some recent work of the author which involves a reworking of the string - theory understanding of black hole entropy consistent with this alternative description of black hole equilibrium states and point out that this is free from some unsatisfactory features of the usual string theory understanding. we also recall the author ' s recent arguments based on this alternative description which suggest that the ads / cft correspondence is a bijection between the boundary cft and just the matter degrees of freedom of the bulk theory.
arxiv:1504.00882
in this paper we consider the slightly $ l ^ 2 $ - supercritical gkdv equations $ \ partial _ t u + ( u _ { xx } + u | u | ^ { p - 1 } ) _ x = 0 $, with the nonlinearity $ 5 < p < 5 + \ varepsilon $ and $ 0 < \ varepsilon \ ll 1 $. we will prove the existence and stability of a blow - up dynamic with self - similar blow - up rate in the energy space $ h ^ 1 $ and give a specific description of the formation of the singularity near the blow - up time.
arxiv:1503.02712
dynamical equations for fermion masses are derived using high scale universal mass generation and consequent mass evolution due to $ su ( 3 ) $, $ su ( 2 ) $, and $ u ( 1 ) $ gauge interactions. assuming mass generation at the gut scale $ m = 10 ^ { 14 } $ gev, one obtains a hierarchy and a large spread in fermion masses with roughly correct values of $ m _ \ nu, m _ \ tau, m _ t, m _ b $ in the third generation. the smallness of the neutrino mass, $ m _ { \ nu _ 3 } \ sim 10 ^ { - 12 } m _ { t } $, naturally arises in the solution.
arxiv:1106.3872
in this article we study combinatorial non - positive curvature aspects of various simplicial complexes with natural $ \ widetilde a _ n $ shaped simplices, including euclidean buildings of type $ \ widetilde a _ n $ and cayley graphs of garside groups and their quotients by the garside elements. all these examples fit into the more general setting of lattices with order - increasing $ \ mathbb z $ - actions and the associated lattice quotients proposed in a previous work by the first named author. we show that both the lattice quotients and the lattices themselves give rise to weakly modular graphs, which is a form of combinatorial non - positive curvature. we also show that several other complexes fit into this setting of lattices / lattice quotients, hence our result applies, including artin complexes of artin - tits groups of type $ \ widetilde a _ n $, a class of arc complexes and weak garside groups arising from a categorical garside structure in the sense of bessis. along the way, we also clarify the relationship between categorical garside structure, lattices with $ \ mathbb z $ action and different classes of complexes studied in this article. we use this point of view to describe the first examples of garside groups with exotic properties, like non - linearity or rigidity results.
arxiv:2211.03257
we consider a general two - higgs - doublet model with cp violation in the scalar sector, that leads, at the one - loop level of the perturbation expansion, to cp - violation in the process $ e ^ + e ^ - \ to t \ bar { t } \ to l ^ \ pm... $ and $ e ^ + e ^ - \ to t \ bar { t } \ to b / \ bar { b }... $. the goal of this study is to include { \ it consistently } cp - violating effects in distributions of top - quark decay products ( $ l ^ \ pm $ or $ b / \ bar { b } $ ) that emerge { \ it both } from $ t \ bar { t } $ production { \ it and } from $ t $ or $ \ bar { t } $ decay processes.
arxiv:hep-ph/0104011
in this paper, we consider the inverse submonoids $ am _ n $ of monotone transformations and $ ao _ n $ of order - preserving transformations of the alternating inverse monoid $ ai _ n $ on a chain with $ n $ elements. we compute the cardinalities, describe the green ' s structures and the congruences, and calculate the ranks of these two submonoids of $ ai _ n $.
arxiv:2503.00820
energy is no doubt an intuitive concept. following a previous analysis on the nature of elementary particles and associated elementary quantum fields, the peculiar status and role of energy is scrutinised further at elementary and larger scales. energy physical characterisation shows that it is a primordial component of reality highlighting the quantum fields natural tendencies to interact, the elementary particles natural tendency to constitute complex bodies and every material thing natural tendency to actualise and be active. energy therefore is a primordial notion in need of a proper assessment.
arxiv:2412.17858
we study the quantum group deformation of the lorentzian eprl spin - foam model. the construction uses the harmonic analysis on the quantum lorentz group. we show that the quantum group spin - foam model so defined is free of the infra - red divergence, thus gives a finite partition function on a fixed triangulation. we expect this quantum group spin - foam model is a spin - foam quantization of discrete gravity with a cosmological constant.
arxiv:1012.4216
the riken data on the coulomb dissociation ( cd ) of 8b were shown to be in good agreement with the direct capture ( dc ) data on the 7be ( p, g ) 8b reaction ( that were known at that time ) of filippone { \ em et al. } yet recently it was claimed that the riken2 cd data must be corrected in order to be reconciled with the slope of dc data. considering the ( correct ) so called scale independent b - slope parameter of the riken2 cd data, the resultant corrected b - slope parameter suggested by esbensen, bertsch and snover is shown to be considerably smaller than the so called average b - slope parameter of dc data. the suggested corrections of the b - slope parameter lead to a large disagreement with dc data, in sharp contrast to the claim. the slope corrections are only significant for the riken2 cd data. for the gsi kinematics, where in fact one may observe slope different than for dc ( at least for the gsi1 data ), they find a fortuitous cancellation that leads to a vanishingly small slope correction. hence the validity of these correction based on the observed slopes can not be substantiated.
arxiv:nucl-ex/0504008
dreaming is a fundamental but not fully understood part of human experience that can shed light on our thought patterns. traditional dream analysis practices, while popular and aided by over 130 unique scales and rating systems, have limitations. mostly based on retrospective surveys or lab studies, they struggle to be applied on a large scale or to show the importance and connections between different dream themes. to overcome these issues, we developed a new, data - driven mixed - method approach for identifying topics in free - form dream reports through natural language processing. we tested this method on 44, 213 dream reports from reddit ' s r / dreams subreddit, where we found 217 topics, grouped into 22 larger themes : the most extensive collection of dream topics to date. we validated our topics by comparing it to the widely - used hall and van de castle scale. going beyond traditional scales, our method can find unique patterns in different dream types ( like nightmares or recurring dreams ), understand topic importance and connections, and observe changes in collective dream experiences over time and around major events, like the covid - 19 pandemic and the recent russo - ukrainian war. we envision that the applications of our method will provide valuable insights into the intricate nature of dreaming.
arxiv:2307.04167
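The mixed-method pipeline in the abstract above involves more than off-the-shelf topic modeling, but the generic topic-extraction step can be sketched with scikit-learn. Here NMF on TF-IDF features stands in for the authors' method; the handful of made-up reports, the number of topics, and the stop-word handling are illustrative assumptions only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# a few invented reports standing in for the r/Dreams corpus
reports = [
    "i was flying over my old school and could not land",
    "being chased through a dark forest and my legs would not move",
    "my teeth kept falling out during an exam i had not studied for",
    "flying above the ocean with my dog, it felt peaceful",
    "chased by a shadow, i kept falling and waking up",
]

vec = TfidfVectorizer(stop_words="english", min_df=1)
X = vec.fit_transform(reports)

n_topics = 3                       # the paper reports 217 topics on ~44k real reports
nmf = NMF(n_components=n_topics, init="nndsvda", random_state=0, max_iter=500)
W = nmf.fit_transform(X)           # report-topic weights
H = nmf.components_                # topic-term weights

terms = vec.get_feature_names_out()
for k, row in enumerate(H):
    top = terms[row.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```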
we investigate the effects of stellar limb - darkening and photospheric perturbations for the onset of wind structure arising from the strong, intrinsic line - deshadowing instability ( ldi ) of a line - driven stellar wind. a linear perturbation analysis shows that including limb - darkening reduces the stabilizing effect of the diffuse radiation, leading to a net instability growth rate even at the wind base. numerical radiation - hydrodynamics simulations of the non - linear evolution of this instability then show that, in comparison with previous models assuming a uniformly bright star without base perturbations, wind structure now develops much closer ( $ r \ la 1. 1 r _ \ star $ ) to the photosphere. this is in much better agreement with observations of o - type stars, which typically indicate the presence of strong clumping quite near the wind base.
arxiv:1210.1861
computed tomography ( ct ) is a prominent example of imaging inverse problem highlighting the unrivaled performances of data - driven methods in degraded measurements setups like sparse x - ray projections. although a significant proportion of deep learning approaches benefit from large supervised datasets, they cannot generalize to new experimental setups. in contrast, fully unsupervised techniques, most notably using score - based generative models, have recently demonstrated similar or better performances compared to supervised approaches while being flexible at test time. however, their use cases are limited as they need considerable amounts of training data to have good generalization properties. another unsupervised approach taking advantage of the implicit natural bias of deep convolutional networks, deep image prior, has recently been adapted to solve sparse ct by reparameterizing the reconstruction problem. although this methodology does not require any training dataset, it enforces a weaker prior on the reconstructions when compared to data - driven methods. to fill the gap between these two strategies, we propose an unsupervised conditional approach to the generative latent optimization framework ( cglo ). similarly to dip, without any training dataset, cglo benefits from the structural bias of a decoder network. however, the prior is further reinforced as the effect of a likelihood objective shared between multiple slices being reconstructed simultaneously through the same decoder network. in addition, the parameters of the decoder may be initialized on an unsupervised, and eventually very small, training dataset to enhance the reconstruction. the resulting approach is tested on full - dose sparse - view ct using multiple training dataset sizes and varying numbers of viewing angles.
arxiv:2307.16670
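The following torch sketch only illustrates the shared-decoder idea mentioned in the abstract above: each slice gets its own latent code, one decoder maps all codes to images, and a common data-fidelity loss couples them through the forward operator. The random matrix standing in for a sparse-view projection operator, the network sizes, and the training schedule are placeholders, not the authors' architecture or forward model.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_slices, n_pix, n_meas, latent_dim = 8, 64 * 64, 600, 32

A = torch.randn(n_meas, n_pix) / n_meas ** 0.5            # stand-in for a sparse-view operator
x_true = torch.rand(n_slices, n_pix)
y = x_true @ A.T + 0.01 * torch.randn(n_slices, n_meas)   # noisy measurements per slice

decoder = nn.Sequential(                                   # one decoder shared by all slices
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 1024), nn.ReLU(),
    nn.Linear(1024, n_pix), nn.Sigmoid(),
)
z = nn.Parameter(torch.randn(n_slices, latent_dim))        # one latent code per slice

opt = torch.optim.Adam(list(decoder.parameters()) + [z], lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    x_hat = decoder(z)                                     # (n_slices, n_pix) reconstructions
    loss = ((x_hat @ A.T - y) ** 2).mean()                 # likelihood term shared across slices
    loss.backward()
    opt.step()
print("final data-fidelity loss:", float(loss))
```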
daring predictions of the proximate future can establish shared discursive frameworks, mobilize capital, and steer complex processes. among the prophetic visions that encouraged and accompanied the development of new communication technologies was the " digital earth, " described in a 1998 speech by al gore as a high - resolution representation of the planet to share and analyze detailed information about its state. this article traces a genealogy of the digital earth as a techno - scientific myth, locating it in a constellation of media futures, arguing that a common subtext of these envisionments consists of a dream of wholeness, an afflatus to overcome perceived fragmentation among humans, and between humans and the earth.
arxiv:1412.2078
responsible for noise reduction, speech recognition or synthesis, encoding or decoding digital media, wirelessly transmitting or receiving data, triangulating positions using gps, and other kinds of image processing, video processing, audio processing, and speech processing. = = = instrumentation = = = instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow, and temperature. the design of such instruments requires a good understanding of physics that often extends beyond electromagnetic theory. for example, flight instruments measure variables such as wind speed and altitude to enable pilots the control of aircraft analytically. similarly, thermocouples use the peltier - seebeck effect to measure the temperature difference between two points. often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. for example, a thermocouple might be used to help ensure a furnace ' s temperature remains constant. for this reason, instrumentation engineering is often viewed as the counterpart of control. = = = computers = = = computer engineering deals with the design of computers and computer systems. this may involve the design of new hardware. computer engineers may also work on a system ' s software. however, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline. desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer - like architectures are now found in a range of embedded devices including video game consoles and dvd players. computer engineers are involved in many hardware and software aspects of computing. robots are one of the applications of computer engineering. = = = photonics and optics = = = photonics and optics deals with the generation, transmission, amplification, modulation, detection, and analysis of electromagnetic radiation. the application of optics deals with design of optical instruments such as lenses, microscopes, telescopes, and other equipment that uses the properties of electromagnetic radiation. other prominent applications of optics include electro - optical sensors and measurement systems, lasers, fiber - optic communication systems, and optical disc systems ( e. g. cd and dvd ). photonics builds heavily on optical technology, supplemented with modern developments such as optoelectronics ( mostly involving semiconductors ), laser systems, optical amplifiers and novel materials ( e. g. metamaterials ). = = related disciplines = = mechatronics is an engineering discipline that deals with the convergence of electrical and mechanical systems. such combined systems are known as electromechanical systems and have widespread adoption.
https://en.wikipedia.org/wiki/Electrical_engineering
in - app advertisements have become a major revenue source for app developers in the mobile app ecosystem. ad libraries play an integral part in this ecosystem as app developers integrate these libraries into their apps to display ads. in this paper, we study ad library integration practices by analyzing 35, 459 updates of 1, 837 top free - to - download apps of the google play store. we observe that ad libraries ( e. g., google admob ) are not always used for serving ads - - 22. 5 % of the apps that integrate google admob do not display ads. they instead depend on google admob for analytical purposes. among the apps that display ads, we observe that 57. 9 % of them integrate multiple ad libraries. we observe that such integration of multiple ad libraries occurs commonly in apps with a large number of downloads and ones in app categories with a high proportion of ad - displaying apps. we manually analyze a sample of apps and derive a set of rules to automatically identify four common strategies for integrating multiple ad libraries. our analysis of the apps across the identified strategies shows that app developers prefer to manage their own integrations instead of using off - the - shelf features of ad libraries for integrating multiple ad libraries. our findings are valuable for ad library developers who wish to learn first hand about the challenges of integrating ad libraries.
arxiv:2104.00182
we propose a unifying framework for the pricing of debt securities under general time - inhomogeneous short - rate diffusion processes. the pricing of bonds, bond options, callable / putable bonds, and convertible bonds ( cbs ) is covered. using continuous - time markov chain ( ctmc ) approximations, we obtain closed - form matrix expressions to approximate the price of bonds and bond options under general one - dimensional short - rate processes. a simple and efficient algorithm is also developed to price callable / putable debt. the availability of a closed - form expression for the price of zero - coupon bonds allows for the perfect fit of the approximated model to the current market term structure of interest rates, regardless of the complexity of the underlying diffusion process selected. we further consider the pricing of cbs under general bi - dimensional time - inhomogeneous diffusion processes to model equity and short - rate dynamics. credit risk is also incorporated into the model using the approach of tsiveriotis and fernandes ( 1998 ). based on a two - layer ctmc method, an efficient algorithm is developed to approximate the price of convertible bonds. when conversion is only allowed at maturity, a closed - form matrix expression is obtained. numerical experiments show the accuracy and efficiency of the method across a wide range of model parameters and short - rate models.
arxiv:2403.06303
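To make the CTMC idea in the abstract above concrete, here is a minimal sketch for a zero-coupon bond under a one-dimensional short-rate diffusion: discretize the rate onto a grid, build a birth-death generator matching the drift and variance, and price via a matrix exponential. The Vasicek parameters and grid are illustrative, and this is a generic textbook-style construction rather than the paper's exact scheme; the analytic Vasicek price is printed only as a sanity check.

```python
import numpy as np
from scipy.linalg import expm

# Vasicek short rate dr = kappa * (theta - r) dt + sigma dW (illustrative parameters)
kappa, theta, sigma = 0.8, 0.03, 0.01
r0, T = 0.02, 5.0

# uniform CTMC grid for the short rate
m, r_min, r_max = 201, -0.02, 0.10
grid = np.linspace(r_min, r_max, m)
h = grid[1] - grid[0]

mu = kappa * (theta - grid)
G = np.zeros((m, m))
for i in range(m):
    up = sigma**2 / (2 * h**2) + max(mu[i], 0.0) / h     # rate of moving one node up
    dn = sigma**2 / (2 * h**2) + max(-mu[i], 0.0) / h    # rate of moving one node down
    if i + 1 < m: G[i, i + 1] = up
    if i - 1 >= 0: G[i, i - 1] = dn
    G[i, i] = -G[i].sum()

# P(0, T) = E[exp(-int_0^T r dt)] is approximated by expm(T * (G - diag(r))) acting on ones
prices = expm(T * (G - np.diag(grid))) @ np.ones(m)
i0 = int(np.argmin(np.abs(grid - r0)))
print(f"CTMC zero-coupon bond price: {prices[i0]:.6f}")

# closed-form Vasicek price for comparison
B = (1 - np.exp(-kappa * T)) / kappa
A_fac = np.exp((theta - sigma**2 / (2 * kappa**2)) * (B - T) - sigma**2 * B**2 / (4 * kappa))
print(f"analytic Vasicek price:      {A_fac * np.exp(-B * r0):.6f}")
```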
we have surveyed andromeda vi, a dwarf spheroidal galaxy companion to m31, for variable stars using f450w and f555w observations obtained with the hubble space telescope. a total of 118 variables were found, with 111 being rr lyrae, 6 anomalous cepheids, and 1 variable we were unable to classify. we find that the andromeda vi anomalous cepheids have properties consistent with those of anomalous cepheids in other dwarf spheroidal galaxies. we revise the existing period - luminosity relations for these variables. further, using these and other available data, we show that there is no clear difference between fundamental and first - overtone anomalous cepheids in a period - amplitude diagram at shorter periods, unlike the rr lyrae. for the andromeda vi rr lyrae, we find that they lie close to the oosterhoff type i galactic globular clusters in the period - amplitude diagram, although the mean period of the rrab stars, < p _ ab > = 0. 588 d, is slightly longer than the typical oosterhoff type i cluster. the mean v magnitude of the rr lyrae in andromeda vi is 25. 29 + / - 0. 03, resulting in a distance 815 + / - 25 kpc on the lee, demarque, & zinn distance scale. this is consistent with the distance derived from the i magnitude of the tip of the red giant branch. similarly, the properties of the rr lyrae indicate a mean abundance for andromeda vi which is consistent with that derived from the mean red giant branch color.
arxiv:astro-ph/0205361
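As a back-of-the-envelope check of the numbers quoted above (not the paper's calibration), the distance modulus relation links the mean RR Lyrae magnitude to the 815 kpc distance; the ~0.7 mag residual is then the combined adopted absolute magnitude and foreground extinction term, which is plausible for metal-poor RR Lyrae on the Lee, Demarque & Zinn scale:

```latex
\mu = 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
    = 5\log_{10}\!\left(\frac{815\,000\,\mathrm{pc}}{10\,\mathrm{pc}}\right)
    \simeq 24.56\ \mathrm{mag},
\qquad
\langle V_{\mathrm{RR}}\rangle - \mu \simeq 25.29 - 24.56 \approx 0.73\ \mathrm{mag}.
```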
an analog hadron calorimeter ( ahcal ) prototype of 5. 3 nuclear interaction lengths thickness has been constructed by members of the calice collaboration. the ahcal prototype consists of a 38 - layer sandwich structure of steel plates and highly - segmented scintillator tiles that are read out by wavelength - shifting fibers coupled to sipms. the signal is amplified and shaped with a custom - designed asic. a calibration / monitoring system based on led light was developed to monitor the sipm gain and to measure the full sipm response curve in order to correct for non - linearity. ultimately, the physics goals are the study of hadron shower shapes and testing the concept of particle flow. the technical goal consists of measuring the performance and reliability of 7608 sipms. the ahcal was commissioned in test beams at desy and cern. the entire prototype was completed in 2007 and recorded hadron showers, electron showers and muons at different energies and incident angles in test beams at cern and fermilab.
arxiv:1003.2662
we propose a holographic image restoration method using an autoencoder, which is an artificial neural network. because holographic reconstructed images are often contaminated by direct light, conjugate light, and speckle noise, the discrimination of reconstructed images may be difficult. in this paper, we demonstrate the restoration of reconstructed images from holograms that record page data in holographic memory and qr codes by using the proposed method.
arxiv:1612.03959
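Here is a minimal torch sketch of the idea in the abstract above: train a small dense autoencoder to map corrupted reconstructions back to clean page data. The network size, the synthetic corruption model, and the random binary "page data" are placeholders, not the paper's optical setup or architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 2048, 32 * 32                       # samples, flattened image size

clean = (torch.rand(n, d) > 0.5).float()   # random binary page-data patterns
noisy = (clean + 0.4 * torch.randn(n, d)).clamp(0, 1)  # crude stand-in for DC/conjugate/speckle terms

model = nn.Sequential(
    nn.Linear(d, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),         # bottleneck
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, d), nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for epoch in range(20):
    perm = torch.randperm(n)
    for i in range(0, n, 128):
        idx = perm[i:i + 128]
        opt.zero_grad()
        loss = loss_fn(model(noisy[idx]), clean[idx])
        loss.backward()
        opt.step()

restored = (model(noisy[:8]) > 0.5).float()    # thresholded restored page data
print("bit error rate on a few samples:", float((restored != clean[:8]).float().mean()))
```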
recent experiments in various cell types have shown that two - dimensional tissues often display local nematic order, with evidence of extensile stresses manifest in the dynamics of topological defects. using a mesoscopic model where tissue flow is generated by fluctuating traction forces coupled to the nematic order parameter, we show that the resulting tissue dynamics can spontaneously produce local nematic order and an extensile internal stress. a key element of the model is the assumption that in the presence of local nematic alignment, cells preferentially crawl along the nematic axis, resulting in anisotropy of fluctuations. our work shows that activity can drive either extensile or contractile stresses in tissue, depending on the relative strength of the contractility of the cortical cytoskeleton and tractions by cells on the extracellular matrix.
arxiv:2011.01981
we calculate the scattering cross section between two $ 0 ^ { + + } $ glueballs in $ su ( 2 ) $ yang - mills theory on lattice at $ \ beta = 2. 1, 2. 2, 2. 3, 2. 4 $, and 2. 5 using the indirect ( hal qcd ) method. we employ the cluster - decomposition error reduction technique and use all space - time symmetries to improve the signal. in the use of the hal qcd method, the centrifugal force was subtracted to remove the systematic effect due to nonzero angular momenta of lattice discretization. from the extracted interglueball potential we determine the low energy glueball effective theory by matching with the one - glueball exchange process. we then calculate the scattering phase shift, and derive the relation between the interglueball cross section and the scale parameter $ \ lambda $ as $ \ sigma _ { \ phi \ phi } = ( 2 - 51 ) \ lambda ^ { - 2 } $ ( stat. + sys. ). from the observational constraints of galactic collisions, we obtain the lower bound of the scale parameter, as $ \ lambda > 60 $ mev. we also discuss the naturalness of the yang - mills theory as the theory explaining dark matter.
arxiv:1910.07756
when developers use different keywords such as todo and fixme in source code comments to describe self - admitted technical debt ( satd ), we refer to it as keyword - labeled satd ( kl - satd ). we study kl - satd from 33 software repositories with 13, 588 kl - satd comments. we find that the median percentage of kl - satd comments among all comments is only 1. 52 %. we find that kl - satd comment contents include words expressing code changes and uncertainty, such as remove, fix, maybe and probably. this makes them different from other comments. kl - satd comment contents are similar to manually labeled satd comments of prior work. our machine learning classifier using logistic lasso regression has good performance in detecting kl - satd comments ( auc - roc 0. 88 ). finally, we demonstrate that using machine learning we can identify comments that currently lack a satd keyword but should have one. automating satd identification of comments that lack satd keywords can save time and effort by replacing manual identification of comments. using kl - satd offers the potential to bootstrap a complete satd detector.
arxiv:2008.05159
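The abstract above reports an AUC-ROC of 0.88 with logistic lasso regression; the sketch below shows one standard way such a classifier could be set up in scikit-learn (L1-penalized logistic regression on bag-of-words features). The toy comments and labels are invented stand-ins for the labeled KL-SATD corpus, and the exact features used in the paper may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

# invented examples; the real corpus would be the 13,588 labeled KL-SATD comments
comments = [
    "todo remove this hack once the api is fixed",
    "fixme this probably breaks on empty input",
    "maybe refactor later, not sure this is correct",
    "returns the user id for the given session",
    "iterate over all items and compute the total",
    "constructor for the configuration object",
]
labels = [1, 1, 1, 0, 0, 0]

X_tr, X_te, y_tr, y_te = train_test_split(
    comments, labels, test_size=0.33, random_state=0, stratify=labels)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(penalty="l1", C=1.0, solver="liblinear"),  # "logistic lasso"
)
clf.fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print("AUC-ROC on the toy split:", roc_auc_score(y_te, scores))
```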
the rheological properties of cells and tissues are central to embryonic development and homoeostasis in adult tissues and organs and are closely related to their physiological activities. in this work, we present our study of rheological experiments on cell monolayer under serum starvation compared to that of healthy cell monolayer with full serum. the normal functioning of cells depends on the micronutrient supply provided by the serum in the growth media. serum starvation is one of the most widely used procedures in cell biology. serum deficiency may lead to genomic instability, variation in protein expression, chronic diseases, and some specific types of cancers. however, the effect of deprivation of serum concentration on the material properties of cells is still unknown. therefore, we performed the macro - rheology experiments to investigate the effect of serum starvation on a fully confluent madin darby canine kidney ( mdck ) cell monolayer. the material properties such as storage modulus ( g ' ) and loss modulus ( g ' ' ), of the monolayer, were measured using oscillatory shear experiments under serum - free ( 0 % fbs ) and full serum ( 10 % fbs ) conditions. additionally, the step strain experiments were performed to gain more insights into the viscoelastic properties of the cell monolayer. our results indicate that without serum, the loss and storage moduli decrease and do not recover fully even after small deformation. this is because of the lack of nutrients, which may result in many permanent physiological changes. whereas, the healthy cell monolayer under full serum condition, remains strong & flexible, and can fully recover even from a large deformation at higher strain.
arxiv:2103.09294