text | source
---|---
Understanding the evolution of scholarly impact is essential for many real-life decision-making processes in academia, such as research planning, frontier exploration, and award selection. Popular platforms like Google Scholar and Web of Science rely on numerical indicators that are too abstract to convey the context and content of scientific impact, while most existing visualization approaches on mapping science do not consider the presentation of individual scholars' impact evolution using curated self-citation data. This paper builds on our previous work and proposes an integrated pipeline to visualize a scholar's impact evolution from multiple topic facets. A novel 3D prism-shaped visual metaphor is introduced as the overview of a scholar's impact, whilst their scientific evolution on each topic is displayed in a more structured manner. Additional designs, including a topic chord diagram, a streamgraph visualization, and an inter-topic flow map, optimized by an elaborate layout algorithm, assist in perceiving the scholar's scientific evolution across topics. A new six-degree-impact glyph metaphor highlights key interdisciplinary works driving the evolution. The proposed visualization methods are evaluated through case studies analyzing the careers of prestigious Turing Award laureates and a major visualization venue.
|
arxiv:2408.08912
|
A search for massive resonances decaying to a Z boson and a photon is performed in events with a hadronically decaying Z boson candidate, separately in light-quark and b quark decay modes, identified using jet substructure and advanced b tagging techniques. Results are based on samples of proton-proton collisions collected with the CMS detector at the LHC at center-of-mass energies of 8 and 13 TeV, corresponding to integrated luminosities of 19.7 and 2.7 inverse femtobarns, respectively. The results of the search are combined with those of a similar search in the leptonic decay modes of the Z boson, based on the same data sets. Spin-0 resonances with various widths and with masses in a range between 0.2 and 3.0 TeV are considered. No significant excess is observed either in the individual analyses or the combination. The results are presented in terms of upper limits on the production cross section of such resonances and constitute the most stringent limits to date for a wide range of masses.
|
arxiv:1612.09516
|
A naive (or idiot) Bayes network is a network with a single hypothesis node and several observations that are conditionally independent given the hypothesis. We recently surveyed a number of members of the UAI community and discovered a general lack of understanding of the implications of the naive Bayes assumption on the kinds of problems that can be solved by these networks. It has long been recognized [Minsky 61] that if observations are binary, the decision surfaces in these networks are hyperplanes. We extend this result (hyperplane separability) to naive Bayes networks with m-ary observations. In addition, we illustrate the effect of observation-observation dependencies on decision surfaces. Finally, we discuss the implications of these results on knowledge acquisition and research in learning.
|
arxiv:1302.3594
|
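The hyperplane result cited above for binary observations follows in one line from the log-odds; the standard derivation (not the paper's m-ary extension) reads:

```latex
% Log-odds under the naive Bayes conditional-independence assumption:
\[
\log\frac{P(H \mid o_1,\dots,o_n)}{P(\bar{H} \mid o_1,\dots,o_n)}
  = \log\frac{P(H)}{P(\bar{H})}
  + \sum_{i=1}^{n} \log\frac{P(o_i \mid H)}{P(o_i \mid \bar{H})}.
\]
% For binary o_i in {0,1}, each summand is linear in o_i, with slope
\[
w_i = \log\frac{P(o_i{=}1 \mid H)\, P(o_i{=}0 \mid \bar{H})}
               {P(o_i{=}0 \mid H)\, P(o_i{=}1 \mid \bar{H})},
\]
% so the zero-log-odds decision surface \sum_i w_i o_i + c = 0 is a hyperplane.
```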
Tree frogs are able to take advantage of an interconnected network of epithelial cells in their toe pads to modulate their adhesion to surfaces under dry, wet, and flooded environments. It has been hypothesized that these interconnected drainage channels reduce the hydrodynamic repulsion to facilitate contact under a completely submerged environment (flooded conditions). Using a custom-built apparatus, we investigate the interplay between surface structure and loading conditions on the peeling force. By combining a normal approach and detachment by peeling, we can isolate the effects of surface structure from the loading conditions. We investigate three surfaces: two rigid structured surfaces that consist of arrays of cylindrical posts, and a flat surface as a control. We observe three regimes in the work required to separate the structured surface that depend on the fluid film thickness prior to pull-out. These three regimes are based on hydrodynamics, and our experimental results agree with a simple scaling argument that relates the surface features to the different regimes observed. Overall, we find that the work of separation of a structured surface is always less than or equal to that for a smooth surface when considering purely viscous contributions.
|
arxiv:1502.02152
|
Gerrymandering, the deliberate manipulation of electoral district boundaries for political advantage, is a persistent issue in U.S. redistricting cycles. This paper introduces and analyzes a new phenomenon, 'votemandering', a strategic blend of gerrymandering and targeted political campaigning devised to gain more seats by circumventing fairness measures. It leverages accurate demographic and socio-political data to influence voter decisions, bolstered by advancements in technology and data analytics, and executes better-informed redistricting. Votemandering is formulated as a mixed integer program (MIP) that performs fairness-constrained gerrymandering over multiple election rounds, via unit-specific variables for campaigns. To combat votemandering, we present a computationally efficient heuristic for creating and testing district maps that more robustly preserve voter preferences. We analyze the influence of various redistricting constraints and parameters on votemandering efficacy. We explore the interconnectedness of gerrymandering, substantial campaign budgets, and strategic campaigning, illustrating their collective potential to generate biased electoral maps. A Wisconsin State Senate redistricting case study substantiates our findings on real data, demonstrating how major parties can secure additional seats through votemandering. Our findings underscore the practical implications of these manipulations, stressing the need for informed policy and regulation to safeguard democratic processes.
|
arxiv:2308.07414
|
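To make the MIP framing above concrete, here is a toy fairness-constrained districting program. The unit data, the population-balance tolerance, and the crude seat cap standing in for a fairness measure are all invented for illustration; this is not the paper's votemandering formulation, which additionally models campaign spending across election rounds.

```python
# Toy sketch of fairness-constrained districting as a MIP (PuLP/CBC).
import pulp

pop     = [10, 12, 8, 11, 9, 10, 12, 8, 10, 11, 9, 10]     # unit populations
share_a = [.62, .41, .70, .45, .58, .39, .66, .44, .55, .43, .61, .48]
U, D = range(len(pop)), range(3)                            # units, districts
M = sum(pop)                                                # big-M constant

prob = pulp.LpProblem("toy_districting", pulp.LpMaximize)
x = pulp.LpVariable.dicts("assign", (U, D), cat="Binary")   # unit -> district
y = pulp.LpVariable.dicts("win", D, cat="Binary")           # district won by A

prob += pulp.lpSum(y[d] for d in D)                         # maximize A's seats

for u in U:                                                 # each unit placed once
    prob += pulp.lpSum(x[u][d] for d in D) == 1
avg = sum(pop) / len(D)
for d in D:
    dpop = pulp.lpSum(pop[u] * x[u][d] for u in U)
    prob += dpop >= 0.8 * avg                               # population balance
    prob += dpop <= 1.2 * avg
    votes_a = pulp.lpSum(share_a[u] * pop[u] * x[u][d] for u in U)
    prob += votes_a >= 0.5 * dpop - M * (1 - y[d])          # y=1 only if A carries d
prob += pulp.lpSum(y[d] for d in D) <= 2                    # stand-in fairness cap

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("districts won by A:", [d for d in D if y[d].value() == 1])
```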
We show that the open unit ball of the space of operators from a finite-dimensional Hilbert space into a separable Hilbert space (we call it the "operator ball") has a restricted form of normal structure if we endow it with a hyperbolic metric (which is an analogue of the standard hyperbolic metric on the unit disc in the complex plane). We use this result to get a fixed point theorem for groups of biholomorphic automorphisms of the operator ball. The fixed point theorem is used to show that a bounded representation in a separable Hilbert space which has an invariant indefinite quadratic form with finitely many negative squares is unitarizable (equivalent to a unitary representation). We apply this result to find dual pairs of invariant subspaces in Pontryagin spaces. In the appendix we present results of Itai Shafrir about hyperbolic metrics on the operator ball.
|
arxiv:0811.1759
|
While training can mostly be accelerated by reducing the time needed to propagate neural gradients back throughout the model, most previous works focus on the quantization/pruning of weights and activations. These methods are often not applicable to neural gradients, which have very different statistical properties. Distinguished from weights and activations, we find that the distribution of neural gradients is approximately lognormal. Considering this, we suggest two closed-form analytical methods to reduce the computational and memory burdens of neural gradients. The first method optimizes the floating-point format and scale of the gradients. The second method accurately sets sparsity thresholds for gradient pruning. Each method achieves state-of-the-art results on ImageNet. To the best of our knowledge, this paper is the first to (1) quantize the gradients to 6-bit floating-point formats, or (2) achieve up to 85% gradient sparsity, in each case without accuracy degradation. A reference implementation accompanies the paper.
|
arxiv:2006.08173
|
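A minimal sketch of the second method described above, under the paper's lognormal observation: if |gradient| is lognormal with fitted parameters (mu, sigma), a pruning threshold hitting a target sparsity follows in closed form from the inverse CDF rather than from sorting. The synthetic gradients and the 85% target are illustrative, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm

g = np.random.lognormal(mean=-9.0, sigma=2.0, size=1_000_000)  # stand-in |grads|
mu, sigma = np.log(g).mean(), np.log(g).std()                  # fit lognormal

target_sparsity = 0.85
# P(|g| < t) = Phi((ln t - mu) / sigma)  =>  t = exp(mu + sigma * Phi^{-1}(s))
t = np.exp(mu + sigma * norm.ppf(target_sparsity))

pruned = np.where(g < t, 0.0, g)                               # zero small grads
print("achieved sparsity:", (pruned == 0).mean())              # ~0.85
```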
Detection and control of Andreev bound states (ABSs) localized at semiconductor-superconductor interfaces are essential for their use in quantum applications. Here we investigate the impact of ABSs on the supercurrent through a Josephson junction containing a quantum dot (QD). Additional normal-metal tunneling probes on both sides of the junction unveil the ABSs residing at the semiconductor-superconductor interfaces. Such knowledge provides an ingredient missing in previous studies, improving the connection between theory and experimental data. By varying the ABS energies using electrostatic gates, we show control of the switching current, with the ability to alter it by more than an order of magnitude. Finally, the large degree of ABS tunability allows us to realize a three-site ABS-QD-ABS molecule (Andreev trimer) in which the central QD is screened by both ABSs. This system is studied simultaneously using both supercurrent and spectroscopy.
|
arxiv:2402.19284
|
In this paper, we perform a cosmological model-independent test of the cosmic distance-duality relation (CDDR) in terms of the ratio of angular diameter distances (ADD) $D = D_{\rm A}^{\rm sl}/D_{\rm A}^{\rm s}$ from strong gravitational lensing (SGL) and the ratio of luminosity distances (LD) $D^\ast = D_{\rm L}^{\rm l}/D_{\rm L}^{\rm s}$ obtained from the joint data of the Type Ia supernovae (SNIa) Union2.1 compilation and the latest gamma-ray burst (GRB) data, where the superscripts s and l correspond to the redshifts $z_{\rm s}$ and $z_{\rm l}$ at the source and lens from SGL samples. The purpose of combining GRB data with the SNIa compilation is to test the CDDR in a wider redshift range. The LD associated with the redshifts of the observed ADD is obtained through two cosmological model-independent methods, namely, method A: binning the SNIa + GRB data, and method B: reconstructing the function of $D_{\rm L}$ by combining the crossing statistic with the smoothing method. We find that the CDDR is compatible with the observations at the $1\sigma$ confidence level for the power-law model, which is assumed to describe the mass distribution of the lensing systems, with method B in a wider redshift range.
|
arxiv:1702.03626
|
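For reference, the relation under test is the Etherington reciprocity (distance-duality) relation; analyses of this kind typically constrain a violation parameter $\eta$ (a standard statement, consistent with but not copied from the paper):

```latex
% Etherington reciprocity (cosmic distance-duality) relation:
\[
D_{\rm L}(z) = (1+z)^{2}\, D_{\rm A}(z).
\]
% Violations are commonly parametrized as
\[
\eta(z) \equiv \frac{D_{\rm L}(z)}{(1+z)^{2} D_{\rm A}(z)},
\qquad \eta(z) = 1 \;\Leftrightarrow\; \text{CDDR holds.}
\]
```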
Context: The utility of prediction models in empirical software engineering (ESE) is heavily reliant on the quality of the data used in building those models. Several data quality challenges such as noise, incompleteness, outliers and duplicate data points may be relevant in this regard. Objective: We investigate the reporting of three potentially influential elements of data quality in ESE studies: data collection, data pre-processing, and the identification of data quality issues. This enables us to establish how researchers view the topic of data quality and the mechanisms that are being used to address it. Greater awareness of data quality should inform both the sound conduct of ESE research and the robust practice of ESE data collection and processing. Method: We performed a targeted literature review of empirical software engineering studies covering the period January 2007 to September 2012. A total of 221 relevant studies met our inclusion criteria and were characterized in terms of their consideration and treatment of data quality. Results: We obtained useful insights as to how the ESE community considers these three elements of data quality. Only 23 of these 221 studies reported on all three elements of data quality considered in this paper. Conclusion: The reporting of data collection procedures is not documented consistently in ESE studies. It will be useful if data collection challenges are reported in order to improve our understanding of why there are problems with software engineering data sets and the models developed from them. More generally, data quality should be given far greater attention by the community. The improvement of data sets through enhanced data collection, pre-processing and quality assessment should lead to more reliable prediction models, thus improving the practice of software engineering.
|
arxiv:2105.10895
|
Over the past several decades, the proliferation of global classical communication networks has transformed various facets of human society. Concurrently, quantum networking has emerged as a dynamic field of research, driven by its potential applications in distributed quantum computing, quantum sensor networks, and secure communications. This prompts a fundamental question: rather than constructing quantum networks from scratch, can we harness the widely available classical fiber-optic infrastructure to establish hybrid quantum-classical networks? This paper aims to provide a comprehensive review of ongoing research endeavors aimed at integrating quantum communication protocols, such as quantum key distribution, into existing lightwave networks. This approach offers the substantial advantage of reducing implementation costs by allowing classical and quantum communication protocols to share optical fibers, communication hardware, and other network control resources, arguably the most pragmatic solution in the near term. In the long run, classical communication will also reap the rewards of innovative quantum communication technologies, such as quantum memories and repeaters. Accordingly, our vision for the future of the Internet is that of heterogeneous communication networks thoughtfully designed for the seamless support of both classical and quantum communications.
|
arxiv:2502.07298
|
A generalized Einstein relation between the mobility and diffusion in conductors with a large built-in field near thermodynamic equilibrium has been derived.
|
arxiv:1010.0859
|
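For context, the classical relation being generalized is the field-free, non-degenerate Einstein relation (the paper's generalized form is not reproduced here):

```latex
% Classical Einstein relation between diffusion coefficient D and mobility \mu
% (non-degenerate carriers, zero field):
\[
\frac{D}{\mu} = \frac{k_{\mathrm{B}} T}{q}.
\]
```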
Two new numbers, $\nu$ and $\zeta$, inspired by particle-hole symmetry, are introduced. These numbers have extreme values at a closed shell and vanish mid-shell. A combination of even powers of these numbers has been used to model experimentally measured quantities such as $R_{4/2} = E(4^+_1)/E(2^+_1)$ and the "microscopic" contribution to binding energies. A binding energy fit consisting of a total of six fit coefficients, including one new shell term, reproduces the experimental binding energies of 2353 nuclei with an r.m.s. standard deviation of 1.55 MeV. The difference between the experimental and fit values of observables, specifically the $R_{4/2}$, provides an indication of where shell closure features are less pronounced and where sub-shell closures occur.
|
arxiv:1409.3800
|
We present searches for charged Higgs production in decays of top quarks and also pair production of a doubly charged Higgs boson decaying to di-tau, di-muon, and muon+tau final states. The searches are performed in proton-antiproton collisions at $\sqrt{s} = 1.96$ TeV using an integrated luminosity of up to 7 fb$^{-1}$ collected by the CDF and D0 experiments at the Fermilab Tevatron collider. We find no evidence for charged Higgs production and set limits on the production cross-section for a variety of theoretical models. This represents the first search for pair production of doubly-charged Higgs bosons decaying into tau leptons at a hadron collider.
|
arxiv:1110.1935
|
We consider the massive wave equation on asymptotically AdS spaces. We show that the timelike scri behaves like a finite timelike boundary, on which one may impose the equivalent of Dirichlet, Neumann or Robin conditions for a range of (negative) mass parameter which includes the conformally coupled case. We demonstrate well-posedness for the associated initial-boundary value problems at the $H^1$ level of regularity. We also prove that higher regularity may be obtained, together with an asymptotic expansion for the field near scri. The proofs rely on energy methods, tailored to the modified energy introduced by Breitenlohner and Freedman. We do not assume the spacetime is stationary, nor that the wave equation separates.
|
arxiv:1202.3445
|
Quantization can accelerate large language model (LLM) inference. Going beyond INT8 quantization, the research community is actively exploring even lower precision, such as INT4. Nonetheless, state-of-the-art INT4 quantization techniques only accelerate low-batch, edge LLM inference, failing to deliver performance gains in large-batch, cloud-based LLM serving. We uncover a critical issue: existing INT4 quantization methods suffer from significant runtime overhead (20-90%) when dequantizing either weights or partial sums on GPUs. To address this challenge, we introduce QoQ, a W4A8KV4 quantization algorithm with 4-bit weight, 8-bit activation, and 4-bit KV cache. QoQ stands for quattuor-octo-quattuor, which represents 4-8-4 in Latin. QoQ is implemented by the QServe inference library that achieves measured speedup. The key insight driving QServe is that the efficiency of LLM serving on GPUs is critically influenced by operations on low-throughput CUDA cores. Building upon this insight, in the QoQ algorithm, we introduce progressive quantization that can allow low dequantization overhead in W4A8 GEMM. Additionally, we develop SmoothAttention to effectively mitigate the accuracy degradation incurred by 4-bit KV quantization. In the QServe system, we perform compute-aware weight reordering and take advantage of register-level parallelism to reduce dequantization latency. We also make fused attention memory-bound, harnessing the performance gain brought by KV4 quantization. As a result, QServe improves the maximum achievable serving throughput of Llama-3-8B by 1.2x on A100, 1.4x on L40S; and Qwen1.5-72B by 2.4x on A100, 3.5x on L40S, compared to TensorRT-LLM. Remarkably, QServe on the L40S GPU can achieve even higher throughput than TensorRT-LLM on A100. Thus, QServe effectively reduces the dollar cost of LLM serving by 3x. Code is available at https://github.com/mit-han-lab/omniserve.
|
arxiv:2405.04532
|
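A hedged numpy sketch of what progressive (two-level) quantization can mean for the W4A8 path above: weights are first quantized to int8 per output channel, then to 4 bits per group on top of the int8 grid, so expanding 4-bit values back to int8 inside the GEMM is cheap integer arithmetic. The group size and zero-point handling are assumptions for illustration, not QServe's kernels.

```python
import numpy as np

def progressive_quant(w, group=32):
    # level 1: symmetric int8, one scale per output channel (row of w)
    s8 = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q8 = np.clip(np.round(w / s8), -127, 127).astype(np.int8)
    # level 2: asymmetric uint4 per group, defined on the int8 grid
    g = q8.reshape(q8.shape[0], -1, group).astype(np.float32)
    lo, hi = g.min(axis=2, keepdims=True), g.max(axis=2, keepdims=True)
    s4 = np.maximum((hi - lo) / 15.0, 1e-8)
    z4 = np.round(-lo / s4)
    q4 = np.clip(np.round(g / s4 + z4), 0, 15).astype(np.uint8)
    return q4, s4, z4, s8

def dequant(q4, s4, z4, s8):
    # uint4 -> int8 grid -> float; on a GPU the first hop is integer-only
    q8 = (q4.astype(np.float32) - z4) * s4
    return q8.reshape(q8.shape[0], -1) * s8

w = np.random.randn(64, 256).astype(np.float32)
q4, s4, z4, s8 = progressive_quant(w)
print("reconstruction rmse:", np.sqrt(((w - dequant(q4, s4, z4, s8)) ** 2).mean()))
```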
The forward-backward asymmetry in top pair production at the Tevatron has long been in tension with the standard model prediction. One of the only viable new physics scenarios capable of explaining this anomaly is an s-channel axigluon-like resonance, with the quantum numbers of the gluon but with significant axial couplings to quarks. While such a resonance can lead to a clear bump or excess in the ttbar or dijet mass spectra, it may also simply be too broad to cleanly observe. Here, we point out that broad ttbar resonances generally lead to net top and antitop polarizations transverse to the production plane. This polarization is consistent with all discrete spacetime symmetries, and, analogous to the forward-backward asymmetry itself, is absent in QCD at leading order. Within the parameter space consistent with the asymmetry measurements, the induced polarization can be sizable, and might be observable at the Tevatron or the LHC.
|
arxiv:1303.1200
|
The purpose of this paper is to introduce a new family of semigroups, the free projection-generated regular $*$-semigroups, and initiate their systematic study. Such a semigroup $PG(P)$ is constructed from a projection algebra $P$, using the recent groupoid approach to regular $*$-semigroups. The assignment $P \mapsto PG(P)$ is a left adjoint to the forgetful functor that maps a regular $*$-semigroup $S$ to its projection algebra $P(S)$. In fact, the category of projection algebras is coreflective in the category of regular $*$-semigroups. The algebra $P(S)$ uniquely determines the biordered structure of the idempotents $E(S)$, up to isomorphism, and this leads to a category equivalence between projection algebras and regular $*$-biordered sets. As a consequence, $PG(P)$ can be viewed as a quotient of the classical free idempotent-generated (regular) semigroups $IG(E)$ and $RIG(E)$, where $E = E(PG(P))$; this is witnessed by a number of presentations in terms of generators and defining relations. The semigroup $PG(P)$ can also be interpreted topologically, through a natural link to the fundamental groupoid of a simplicial complex explicitly constructed from $P$. The theory is then illustrated on a number of examples. In one direction, the free construction applied to the projection algebras of adjacency semigroups yields a new family of graph-based path semigroups. In another, it turns out that, remarkably, the Temperley-Lieb monoid $TL_n$ is the free regular $*$-semigroup over its own projection algebra $P(TL_n)$.
|
arxiv:2406.09109
|
Let $k \geq 1$ be a cube-free integer with $k \equiv 1 \pmod{9}$ and $\gcd(k, 7 \cdot 571) = 1$. In this paper, we prove the existence of infinitely many triples of imaginary quadratic fields $\mathbb{Q}(\sqrt{d})$, $\mathbb{Q}(\sqrt{d+1})$ and $\mathbb{Q}(\sqrt{d+k^2})$ with $d \in \mathbb{Z}$ such that the class number of each of them is divisible by $3$. This affirmatively answers a weaker version of a conjecture of Iizuka \cite{iizuka-jnt}.
|
arxiv:1907.12097
|
Wall-crossing phenomena are ubiquitous in many problems of algebraic geometry and theoretical physics. Various ways to encode the relevant information and the need to track the changes under the variation of parameters lead to rather complicated transformation rules and non-trivial combinatorial problems. In this paper we propose a framework, reminiscent of collections and plethysms in the theory of operads, that conceptualizes those transformation rules. As an application we obtain new streamlined proofs of some existing wall-crossing formulas as well as some new formulas related to attractor invariants.
|
arxiv:2101.07636
|
Introduction. Hope is a positive attitude oriented toward a possible (yet uncertain), desired outcome. Though hope is a virtue, hopelessness is widespread and seems related not only to current events but also to information about current events. This paper examines how hope can be sparked through information. Method. This study uses the philosophical methods of conceptual analysis and design to advance a theoretical argument. Analysis. First, a conceptualization of hope is offered, drawing on work primarily in virtue ethics. Then, four types of information sources for hope are theorized, building on and synthesizing work from philosophy and psychology. Results. Four categories of information source conducive to hopefulness are identified: information for forming beliefs about the past or future; information for engaging the moral imagination regarding possibilities for the future; information for sparking desire for particular moral outcomes; and information for metacognition, or about how we become informed with respect to hope. Conclusions. Hope is, in many cases, responsive to information. This suggests a moral opportunity for information professionals and scholars to work toward connecting people with information for hope, particularly in difficult times. Avenues for further research, particularly in information behavior and practices, are suggested.
|
arxiv:2206.03311
|
Sensitivity analysis plays an important role in the development of computer models/simulators through identifying the contribution of each (uncertain) input factor to the model output variability. This report investigates different aspects of variance-based global sensitivity analysis in the context of complex black-box computer codes. The analysis is mainly conducted using two R packages, namely sensobol (Puy et al., 2021) and sensitivity (Iooss et al., 2021). While the package sensitivity is equipped with a rich set of methods to conduct sensitivity analysis, especially in the case of models with dependent inputs, the package sensobol offers a number of user-friendly tools for visualisation purposes. Several illustrative examples are supplied that allow the user to learn both packages easily and benefit from their features.
|
arxiv:2206.11348
|
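Both R packages estimate variance-based (Sobol') indices; a minimal pick-and-freeze estimator for first-order indices, here in Python on the Ishigami test function, shows the quantity involved (sample size is illustrative):

```python
import numpy as np

def model(x):  # Ishigami 3-input test function, a standard benchmark
    return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2 \
        + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "freeze" column i from B
    Si = np.mean(fB * (model(ABi) - fA)) / var   # Saltelli first-order estimator
    print(f"S_{i + 1} ~ {Si:.3f}")       # known values: ~0.31, ~0.44, ~0.00
```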
Statistical parameters of the ISM driven by thermal energy injections from supernova explosions have been obtained from 3D, nonlinear, magnetohydrodynamic, shearing-box simulations for spiral arm and interarm regions. The density scale height obtained for the interarm regions is 50% larger than within the spiral arms because of the higher gas temperature. The filling factor of the hot gas is also significantly larger between the arms and depends sensitively on magnetic field strength.
|
arxiv:astro-ph/0212260
|
Dynamic tasks like table tennis are relatively easy to learn for humans but pose significant challenges to robots. Such tasks require accurate control of fast movements and precise timing in the presence of imprecise state estimation of the flying ball and the robot. Reinforcement learning (RL) has shown promise in learning complex control tasks from data. However, applying step-based RL to dynamic tasks on real systems is safety-critical, as RL requires exploring and failing safely for millions of time steps in high-speed regimes. In this paper, we demonstrate that safe learning of table tennis using model-free reinforcement learning can be achieved by using robot arms driven by pneumatic artificial muscles (PAMs). The softness and back-drivability properties of PAMs prevent the system from leaving the safe region of its state space. In this manner, RL empowers the robot to return and smash real balls at 5 m/s and 12 m/s on average to a desired landing point. Our setup allows the agent to learn this safety-critical task (i) without safety constraints in the algorithm, (ii) while maximizing the speed of returned balls directly in the reward function, (iii) using a stochastic policy that acts directly on the low-level controls of the real system, (iv) training for thousands of trials, and (v) from scratch without any prior knowledge. Additionally, we present HYSR, a practical hybrid sim-and-real training that avoids playing real balls during training by randomly replaying recorded ball trajectories in simulation and applying actions to the real robot. This work is the first to (a) achieve fail-safe learning of a safety-critical dynamic task using anthropomorphic robot arms, (b) learn a precision-demanding problem with a PAM-driven system despite the control challenges, and (c) train robots to play table tennis without real balls. Videos and datasets are available at musculartt.embodied.ml.
|
arxiv:2006.05935
|
In this note, we fill in a gap in the literature by proving that the Teichmüller modular groups (mapping class groups) are not Poincaré duality groups and the complexes of curves of surfaces have infinite homotopy type (i.e., are not homotopy equivalent to a finite CW-complex).
|
arxiv:0707.4322
|
We use ACE/SWICS elemental composition data to compare the variations in solar wind fractionation as measured by SWICS during the last solar maximum (1999-2001), the solar minimum (2006-2009) and the period in which the Genesis spacecraft was collecting solar wind (late 2001 - early 2004). We differentiate our analysis in terms of solar wind regimes (i.e., originating from interstream or coronal hole flows, or coronal mass ejecta). Abundances are normalized to the low-FIP ion magnesium to uncover correlations that are not apparent when normalizing to high-FIP ions. We find that relative to magnesium, the other low-FIP elements are measurably fractionated, but the degree of fractionation does not vary significantly over the solar cycle. For the high-FIP ions, variation in fractionation over the solar cycle is significant: greatest for Ne/Mg and C/Mg, less so for O/Mg, and the least for He/Mg. When abundance ratios are examined as a function of solar wind speed, we find a strong correlation, with the remarkable observation that the degree of fractionation follows a mass-dependent trend. We discuss the implications for correcting the Genesis sample return results to photospheric abundances.
|
arxiv:1508.04566
|
We present a Pontryagin-guided direct policy optimization (PG-DPO) method for constrained dynamic portfolio choice, incorporating consumption and multi-asset investment, that scales to thousands of risky assets. By combining neural-network controls with Pontryagin's maximum principle (PMP), it circumvents the curse of dimensionality that renders dynamic programming (DP) grids intractable beyond a handful of assets. Unlike value-based PDE or BSDE approaches, PG-DPO enforces PMP conditions at each gradient step, naturally accommodating no-short-selling or borrowing constraints and optional consumption bounds. A "one-shot" variant rapidly computes Pontryagin-optimal controls after a brief warm-up, leading to substantially higher accuracy than naive baselines. On modern GPUs, near-optimal solutions often emerge within just one or two minutes of training. Numerical experiments confirm that, for up to 1,000 assets, PG-DPO accurately recovers the known closed-form solution in the unconstrained case and remains tractable under constraints, far exceeding the longstanding DP-based limit of around seven assets.
|
arxiv:2501.12600
|
The concepts of linkage, building blocks, and problem decomposition have long existed in the genetic algorithm (GA) field and have guided the development of model-based GAs for decades. However, their definitions are usually vague, making it difficult to develop theoretical support. This paper provides an algorithm-independent definition to describe the concept of linkage. With this definition, the paper proves that any problems with a bounded degree of linkage are decomposable and that proper problem decomposition is possible via linkage learning. The way of decomposition given in this paper also offers a new perspective on nearly decomposable problems with bounded difficulty and building blocks from the theoretical aspect. Finally, this paper relates problem decomposition to PAC learning and proves that the global optima of these problems and the minimum decomposition blocks are PAC learnable under certain conditions.
|
arxiv:2501.10777
|
In this paper, we consider the Randers change of some special $(\alpha, \beta)$-metrics. First we find the fundamental metric tensor and Cartan tensor of these Randers-changed $(\alpha, \beta)$-metrics. Next, we establish a general formula for the inverse of the fundamental metric tensors of these metrics. Finally, we find the necessary and sufficient conditions under which the Randers change of these $(\alpha, \beta)$-metrics is projectively and locally dually flat.
|
arxiv:1712.07865
|
KIC8462852 is a completely ordinary F3 main sequence star, except that the light curve from Kepler shows episodes of unique and inexplicable day-long dips with up to 20% dimming. Here, I provide a light curve of 1338 Johnson B-band magnitudes from 1890 to 1989 taken from archival photographic plates at Harvard. KIC8462852 displays a secular dimming at an average rate of 0.164 ± 0.013 magnitudes per century. From the early 1890s to the late 1980s, KIC8462852 faded by 0.193 ± 0.030 mag. The decline is not an artifact, because nearby check stars have closely flat light curves. This century-long dimming is unprecedented for any F-type main sequence star. Thus the Harvard light curve provides the first confirmation (past the several dips seen in the Kepler light curve alone) that KIC8462852 has anything unusual. The century-long dimming and the day-long dips are both just extreme ends of a spectrum of timescales for unique dimming events. By Ockham's razor, two such unique and similar effects are very likely produced by one physical mechanism. This one mechanism does not appear as any isolated catastrophic event in the last century, but rather must be some ongoing process with continuous effects. Within the context of dust-occultation models, the century-long dimming trend requires 10^4 to 10^7 times as much dust as for the deepest Kepler dip. Within the context of the comet-family idea, the century-long dimming trend requires an estimated 648,000 giant comets (each with 200 km diameter), all orchestrated to pass in front of the star within the last century.
|
arxiv:1601.03256
|
In this paper, we investigate the domain transfer learning problem with multi-instance data. We assume we already have a well-trained multi-instance dictionary and its corresponding classifier from the source domain, which can be used to represent and classify the bags. But they cannot be directly applied to the target domain. Thus we propose to adapt them to the target domain by adding an adaptive term to the source domain classifier. The adaptive function is a linear function based on a domain transfer multi-instance dictionary. Given a target domain bag, we first map it to a bag-level feature space using the domain transfer dictionary, and then apply the linear adaptive function to its bag-level feature vector. To learn the domain transfer dictionary and the adaptive function parameter, we simultaneously minimize the average classification error of the target domain classifier over the target domain training set, and the complexities of both the adaptive function parameter and the domain transfer dictionary. The minimization problem is solved by an iterative algorithm which updates the dictionary and the function parameter alternately. Experiments over several benchmark data sets show the advantage of the proposed method over existing state-of-the-art domain transfer multi-instance learning methods.
|
arxiv:1605.08397
|
We present a calculation of the perturbative quark-to-quark transverse parton distribution function at next-to-next-to-leading order based on a gauge-invariant operator definition. We demonstrate for the first time that such a definition works beyond the first non-trivial order. We extract from our calculation the coefficient functions relevant for a next-to-next-to-next-to-leading logarithmic $q_T$ resummation in a large class of processes at hadron colliders.
|
arxiv:1209.0682
|
Maintaining the roadway infrastructure is one of the essential factors in enabling a safe, economic, and sustainable transportation system. Manual roadway damage data collection is laborious and unsafe for humans to perform. This area is poised to benefit from the rapid advance and diffusion of artificial intelligence technologies. Specifically, deep learning advancements enable the detection of road damage automatically from collected road images. This work proposes to collect and label road damage data using Google Street View and use YOLOv7 (You Only Look Once version 7) together with coordinate attention and related accuracy fine-tuning techniques, such as label smoothing and the ensemble method, to train deep learning models for automatic road damage detection and classification. The proposed approaches are applied to the Crowdsensing-based Road Damage Detection Challenge (CRDDC2022), IEEE BigData 2022. The results show that the data collection from Google Street View is efficient, and the proposed deep learning approach results in F1 scores of 81.7% on the road damage data collected from the United States using Google Street View and 74.1% on all test images of this dataset.
|
arxiv:2211.00091
|
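Of the fine-tuning techniques listed above, label smoothing has a compact generic form; a sketch follows (the smoothing factor eps = 0.1 is a common default, not necessarily the paper's setting):

```python
import numpy as np

def smoothed_cross_entropy(logits, target, eps=0.1):
    """Cross-entropy against a label-smoothed target distribution."""
    n = logits.shape[-1]
    q = np.full(n, eps / n)            # eps spread uniformly over classes
    q[target] += 1.0 - eps             # remaining mass on the true class
    log_p = logits - np.max(logits)
    log_p -= np.log(np.sum(np.exp(log_p)))   # numerically stable log-softmax
    return -np.sum(q * log_p)

print(smoothed_cross_entropy(np.array([2.0, 0.5, -1.0]), target=0))
```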
While powerful methods have been developed for high-dimensional hypothesis testing assuming orthogonal parameters, current approaches struggle to generalize to the more common non-orthogonal case. We propose stable distillation (SD), a simple paradigm for iteratively extracting independent pieces of information from observed data, assuming a parametric model. When applied to hypothesis testing for large regression models, SD orthogonalizes the effect estimates of non-orthogonal predictors by judiciously introducing noise into the observed outcomes vector, yielding mutually independent p-values across predictors. Generic regression and gene-testing simulations show that SD yields a scalable approach for non-orthogonal designs that exceeds or matches the power of existing methods against sparse alternatives. While we only present explicit SD algorithms for hypothesis testing in ordinary least squares and logistic regression, we provide general guidance for deriving and improving the power of SD procedures.
|
arxiv:2212.12539
|
We distinguish between faint, weak, strong and strict localizations of categories at morphism families and show that this framework captures the different types of derived functors that are considered in the literature. More precisely, we show that Kan and faint derived functors coincide when we use the classical Kan homotopy category, and when we use the Quillen homotopy category, Kan and strong derived functors coincide. Our comparison results are based on the fact that the Kan homotopy category is a weak localization and that the Quillen homotopy category is a strict localization.
|
arxiv:2109.12392
|
We establish the existence of a deformation of the usual Carter constant which is conserved along the motion in a fixed Kerr background of a spinning test body possessing the spin-induced quadrupole coupling of a black hole. The conservation holds perturbatively up to second order in the test body's spin. This constant of motion is obtained through the explicit resolution of the conservation constraint equations, employing covariant algebraic and differential relations amongst covariant building blocks of the Kerr background. For generic spin-induced quadrupole couplings, which describe compact objects such as neutron stars, we obtain a no-go result on the existence of such a conserved quantity.
|
arxiv:2302.14549
|
Achieving optimal balance in games is essential to their success, yet reliant on extensive manual work and playtesting. To facilitate this process, the procedural content generation via reinforcement learning (PCGRL) framework has recently been effectively used to improve the balance of existing game levels. This approach, however, only assesses balance heuristically, neglecting actual human perception. For this reason, this work presents a survey to empirically evaluate the created content paired with human playtesting. Participants in four different scenarios are asked about their perception of changes made to the level both before and after balancing, and vice versa. Based on descriptive and statistical analysis, our findings indicate that the PCGRL-based balancing positively influences players' perceived balance for most scenarios, albeit with differences in aspects of the balancing between scenarios.
|
arxiv:2407.11396
|
We characterize observable sets for 1-dim Schrödinger equations in $\mathbb{R}$: $i\partial_t u = (-\partial_x^2 + x^{2m})u$ (with $m \in \mathbb{N} := \{0, 1, \dots\}$). More precisely, we obtain what follows: first, when $m = 0$, $E \subset \mathbb{R}$ is an observable set at some time if and only if it is thick, namely, there is $\gamma > 0$ and $L > 0$ so that $$\left|E \bigcap [x, x+L]\right| \geq \gamma L \;\; \mbox{for each} \;\; x \in \mathbb{R};$$ second, when $m = 1$ ($m \geq 2$ resp.), $E$ is an observable set at some time (at any time resp.) if and only if it is weakly thick, namely $$\varliminf_{x \rightarrow +\infty} \frac{|E \bigcap [-x, x]|}{x} > 0.$$ From these, we see how potentials $x^{2m}$ affect the observability (including the geometric structures of observable sets and the minimal observable time). Besides, we obtain several supplemental theorems for the above results; in particular, we find that a half line is an observable set at time $T > 0$ for the above equation with $m = 1$ if and only if $T > \frac{\pi}{2}$.
|
arxiv:2003.11263
|
We study the behavior of a soliton which, while moving in a non-dissipative medium, encounters a barrier with dissipation. The modelling included the case of a finite dissipative layer as well as a wave passing from a dissipative layer into a non-dissipative one and vice versa. New effects are presented in the case of a numerically finite barrier on the soliton path: first, if the dissipation distribution has the form of a frozen soliton, the wave that leaves the dissipative barrier becomes a bi-soliton, and a reflection wave arises as a comparatively small and quasi-harmonic oscillation. Second, if the dissipation is negative (the wave, instead of losing energy, is pumped with it), the passed wave is a soliton of greater amplitude and velocity. Third, when the travelling wave solution of the KdV-Burgers equation (a shock wave in a dissipative region) enters a non-dissipative layer, this shock transforms into a quasi-harmonic oscillation known for the KdV.
|
arxiv:1907.10489
|
Accurately predicting the dynamics of robotic systems is crucial for model-based control and reinforcement learning. The most common way to estimate dynamics is by fitting a one-step ahead prediction model and using it to recursively propagate the predicted state distribution over long horizons. Unfortunately, this approach is known to compound even small prediction errors, making long-term predictions inaccurate. In this paper, we propose a new parametrization to supervised learning on state-action data to stably predict at longer horizons, which we call a trajectory-based model. This trajectory-based model takes an initial state, a future time index, and control parameters as inputs, and directly predicts the state at the future time index. Experimental results in simulated and real-world robotic tasks show that trajectory-based models yield significantly more accurate long-term predictions, improved sample efficiency, and the ability to predict task reward. With these improved prediction properties, we conclude with a demonstration of methods for using the trajectory-based model for control.
|
arxiv:2012.09156
|
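A sketch contrasting the two parametrizations discussed above: a one-step model rolled out recursively versus a trajectory-based model mapping (initial state, time index, control parameters) directly to the future state. The sklearn regressor and the toy scalar system are stand-ins, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def rollout_one_step(f, s, theta, horizon):
    """Recursive propagation with a one-step model: errors compound."""
    for _ in range(horizon):
        s = f.predict(np.concatenate([s, theta])[None])[0]
    return s

class TrajectoryModel:
    """Maps (initial state, time index, control params) -> state at that time."""
    def __init__(self):
        self.net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    def fit(self, s0, t, theta, s_t):
        self.net.fit(np.hstack([s0, t[:, None], theta]), s_t)
    def predict(self, s0, t, theta):
        return self.net.predict(np.hstack([s0, [[t]], theta]))

# toy usage: scalar system s_{k+1} = theta * s_k, so s_t = theta^t * s0
N = 2000
s0 = np.random.rand(N, 1)
theta = np.random.uniform(0.8, 1.0, (N, 1))
t = np.random.randint(1, 50, N).astype(float)
s_t = (s0 * theta ** t[:, None]).ravel()
tm = TrajectoryModel()
tm.fit(s0, t, theta, s_t)
print(tm.predict(np.array([[1.0]]), 25.0, np.array([[0.9]])))  # ~0.9**25 = 0.072
```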
The magnetic theory for the production of jets by accreting objects is reviewed with emphasis on outstanding problem areas. An effort is made to show the connections behind the occasionally diverging nomenclature in the literature, to contrast the different points of view about basic mechanisms, and to highlight concepts for interpreting the results of numerical simulations. The role of dissipation of magnetic energy in accelerating the flow is discussed, and its importance for explaining high Lorentz factors. The collimation of jets to the observed narrow angles is discussed, including a critical discussion of the role of 'hoop stress'. The transition between disk and outflow is one of the least understood parts of the magnetic theory; its role in setting the mass flux in the wind, in possible modulations of the mass flux, and the uncertainties in treating it realistically are discussed. Current views on most of these problems are still strongly influenced by the restriction to 2 dimensions (axisymmetry) in previous analytical and numerical work; 3-D effects likely to be important are suggested. An interesting problem area is the nature and origin of the strong, preferably highly ordered magnetic fields known to work best for jet production. The observational evidence for such fields and their behavior in numerical simulations is discussed. I argue that the presence or absence of such fields may well be the 'second parameter' governing not only the presence of jets but also the X-ray spectra and timing behavior of X-ray binaries.
|
arxiv:0804.3096
|
Searches for WIMP dark matter will in the near future be sensitive to solar neutrinos. Directional detection offers a method to reject solar neutrinos and improve WIMP searches, but reaching that sensitivity with existing directional detectors poses challenges. We propose a combined atomic/particle physics approach using a large-volume diamond detector. WIMP candidate events trigger a particle detector, after which spectroscopy of nitrogen vacancy centers reads out the direction of the incoming particle. We discuss the current state of technologies required to realize directional detection in diamond and present a path towards a detector with sensitivity below the neutrino floor.
|
arxiv:2009.01028
|
In temporal action localization, given an input video, the goal is to predict which actions it contains, where they begin, and where they end. Training and testing current state-of-the-art deep learning models requires access to large amounts of data and computational power. However, gathering such data is challenging and computational resources might be limited. This work explores and measures how current deep temporal action localization models perform in settings constrained by the amount of data or computational power. We measure data efficiency by training each model on a subset of the training set. We find that TemporalMaxer outperforms other models in data-limited settings. Furthermore, we recommend TriDet when training time is limited. To test the efficiency of the models during inference, we pass videos of different lengths through each model. We find that TemporalMaxer requires the least computational resources, likely due to its simple architecture.
|
arxiv:2308.13082
|
Temporal information extraction (TIE) has attracted a great deal of interest over the last two decades, leading to the development of a significant number of datasets. Despite its benefits, having access to a large volume of corpora makes it difficult when it comes to benchmarking TIE systems. On the one hand, different datasets have different annotation schemes, thus hindering the comparison between competitors across different corpora. On the other hand, the fact that each corpus is commonly disseminated in a different format requires a considerable engineering effort for a researcher/practitioner to develop parsers for all of them. This constraint forces researchers to select a limited amount of datasets to evaluate their systems, which consequently limits the comparability of the systems. Yet another obstacle that hinders the comparability of TIE systems is the evaluation metric employed. While most research works adopt traditional metrics such as precision, recall, and $F_1$, a few others prefer temporal awareness, a metric tailored to be more comprehensive for the evaluation of temporal systems. Although the reason for the absence of temporal awareness in the evaluation of most systems is not clear, one of the factors that certainly weighs in this decision is the necessity to implement the temporal closure algorithm in order to compute temporal awareness, which is not straightforward to implement, nor is it currently easily available. All in all, these problems have limited the fair comparison between approaches and, consequently, the development of temporal extraction systems. To mitigate these problems, we have developed tieval, a Python library that provides a concise interface for importing different corpora and facilitates system evaluation. In this paper, we present the first public release of tieval and highlight its most relevant features.
|
arxiv:2301.04643
|
Measures of tree balance play an important role in the analysis of phylogenetic trees. One of the oldest and most popular indices in this regard is the Colless index for rooted bifurcating trees, introduced by Colless (1982). While many of its statistical properties under different probabilistic models for phylogenetic trees have already been established, little is known about its minimum value and the trees that achieve it. In this manuscript, we fill this gap in the literature. To begin with, we derive both recursive and closed expressions for the minimum Colless index of a tree with $n$ leaves. Surprisingly, these expressions show a connection between the minimum Colless index and the so-called blancmange curve, a fractal curve. We then fully characterize the tree shapes that achieve this minimum value, and we introduce both an algorithm to generate them and a recurrence to count them. After focusing on two extremal classes of trees with minimum Colless index (the maximally balanced trees and the greedy-from-the-bottom trees), we conclude by showing that all trees with minimum Colless index also have minimum Sackin index, another popular balance index.
|
arxiv:1907.05064
|
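The recursive expression for the minimum Colless index admits a direct dynamic-programming rendering: at the root, split the $n$ leaves as $a + (n - a)$, pay the local imbalance $|n - 2a|$, and recurse on both subtrees. The brute-force sketch below follows that recursion; the paper's closed form via the blancmange curve is not reproduced here.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_colless(n: int) -> int:
    """Minimum Colless index over rooted bifurcating trees with n leaves."""
    if n <= 2:
        return 0                        # a leaf or a cherry is balanced
    return min((n - 2 * a) + min_colless(a) + min_colless(n - a)
               for a in range(1, n // 2 + 1))

print([min_colless(n) for n in range(1, 17)])
# the sequence's self-similar growth reflects the blancmange-curve connection
```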
Four diabatic states are used to construct a simple model for double proton transfer in hydrogen-bonded complexes. Key parameters in the model are the proton donor-acceptor separation $R$ and the ratio, $D_1/D_2$, between the proton affinity of a donor with one and two protons. Depending on the values of these two parameters, the model describes four qualitatively different ground state potential energy surfaces, having zero, one, two, or four saddle points. In the limit $D_2 = D_1$ the model reduces to two decoupled hydrogen bonds. As $R$ decreases, a transition can occur from a concerted to a sequential mechanism for double proton transfer.
|
arxiv:1407.3536
|
We report the detection of ADFS-27, a dusty, starbursting major merger at a redshift of z = 5.655, using the Atacama Large Millimeter/submillimeter Array (ALMA). ADFS-27 was selected from Herschel/SPIRE and APEX/LABOCA data as an extremely red "870 micron riser" (i.e., S_250 < S_350 < S_500 < S_870), demonstrating the utility of this technique to identify some of the highest-redshift dusty galaxies. A scan of the 3 mm atmospheric window with ALMA yields detections of CO(5-4) and CO(6-5) emission, and a tentative detection of H2O(211-202) emission, which provides an unambiguous redshift measurement. The strength of the CO lines implies a large molecular gas reservoir with a mass of M_gas = 2.5x10^11 (alpha_CO/0.8) (0.39/r_51) Msun, sufficient to maintain its ~2400 Msun/yr starburst for at least ~100 Myr. The 870 micron dust continuum emission is resolved into two components, 1.8 and 2.1 kpc in diameter, separated by 9.0 kpc, with comparable dust luminosities, suggesting an ongoing major merger. The infrared luminosity of L_IR ~= 2.4x10^13 Lsun implies that this system represents a binary hyper-luminous infrared galaxy, the most distant of its kind presently known. This also implies star formation rate surface densities of Sigma_SFR = 730 and 750 Msun/yr/kpc^2, consistent with a binary "maximum starburst". The discovery of this rare system is consistent with a significantly higher space density than previously thought for the most luminous dusty starbursts within the first billion years of cosmic time, easing tensions regarding the space densities of z ~ 6 quasars and massive quiescent galaxies at z >~ 3.
|
arxiv:1705.09660
|
Learning dynamics from repeated observation of the time evolution of an open quantum system, namely, the problem of quantum process tomography, is an important task. This task is difficult in general, but, with some additional constraints, could be tractable. This motivates us to look at the problem of Lindblad operator discovery from observations. We point out that for moderate-size Hilbert spaces, low Kraus rank of the channel, and short time steps, the eigenvalues of the Choi matrix corresponding to the channel have a special structure. We use the least-squares method for the estimation of a channel where, for fixed inputs, we estimate the outputs by classical shadows. The resultant noisy estimate of the channel can then be denoised by diagonalizing the nominal Choi matrix, truncating some eigenvalues, and altering it to a genuine Choi matrix. This processed Choi matrix is then compared to the original one. We see that as the number of samples increases, our reconstruction becomes more accurate. We also use tools from random matrix theory to understand the effect of estimation noise in the eigenspectrum of the estimated Choi matrix.
|
arxiv:2309.12631
|
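A sketch of the denoising step described above: Hermitize the least-squares Choi estimate, clip negative eigenvalues (optionally truncating to a known low Kraus rank), and renormalize the trace. Exact trace-preservation would need a further projection; fixing only the total trace is a simplification made here, not the paper's full procedure.

```python
import numpy as np

def denoise_choi(J_hat, d, kraus_rank=None):
    """Project a noisy Choi estimate to a (nearly) valid Choi matrix."""
    J = 0.5 * (J_hat + J_hat.conj().T)           # Hermitian part
    vals, vecs = np.linalg.eigh(J)
    vals = np.clip(vals, 0.0, None)              # enforce complete positivity
    if kraus_rank is not None:                   # keep only the top eigenvalues
        vals[np.argsort(vals)[:-kraus_rank]] = 0.0
    J = (vecs * vals) @ vecs.conj().T
    return J * (d / np.trace(J).real)            # Choi trace of a CPTP map is d

# toy usage: qubit identity channel plus estimation noise
d = 2
omega = np.eye(d).reshape(-1, 1) / np.sqrt(d)    # maximally entangled vector
J_true = d * (omega @ omega.conj().T)            # rank-1 Choi matrix
J_hat = J_true + 0.05 * np.random.randn(d * d, d * d)
print(np.round(denoise_choi(J_hat, d, kraus_rank=1).real, 2))
```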
We study a twisted generalization of Novikov superalgebras, called Hom-Novikov superalgebras. It is shown that two classes of Hom-Novikov superalgebras can be constructed from Hom-supercommutative algebras together with derivations and Hom-Novikov superalgebras with Rota-Baxter operators, respectively. We show that quadratic Hom-Novikov superalgebras are Hom-associative superalgebras and the sub-adjacent Hom-Lie superalgebras of Hom-Novikov superalgebras are 2-step nilpotent. Moreover, we develop the 1-parameter formal deformation theory of Hom-Novikov superalgebras.
|
arxiv:1501.00229
|
to the corresponding affine surface by setting to one some coordinate or indeterminate of the defining polynomials (usually the last one). Conversely, one passes from an affine surface to its associated projective surface (called projective completion) by homogenizing the defining polynomial (in case of surfaces in a space of dimension three), or by homogenizing all polynomials of the defining ideal (for surfaces in a space of higher dimension). === In higher dimensional spaces === One cannot define the concept of an algebraic surface in a space of dimension higher than three without a general definition of an algebraic variety and of the dimension of an algebraic variety. In fact, an algebraic surface is an algebraic variety of dimension two. More precisely, an algebraic surface in a space of dimension n is the set of the common zeros of at least n – 2 polynomials, but these polynomials must satisfy further conditions that may be not immediate to verify. Firstly, the polynomials must not define a variety or an algebraic set of higher dimension, which is typically the case if one of the polynomials is in the ideal generated by the others. Generally, n – 2 polynomials define an algebraic set of dimension two or higher. If the dimension is two, the algebraic set may have several irreducible components. If there is only one component, the n – 2 polynomials define a surface, which is a complete intersection. If there are several components, then one needs further polynomials for selecting a specific component. Most authors consider as an algebraic surface only algebraic varieties of dimension two, but some also consider as surfaces all algebraic sets whose irreducible components have the dimension two. In the case of surfaces in a space of dimension three, every surface is a complete intersection, and a surface is defined by a single polynomial, which is irreducible or not, depending on whether non-irreducible algebraic sets of dimension two are considered as surfaces or not. == Topological surface == In topology, a surface is generally defined as a manifold of dimension two. This means that a topological surface is a topological space such that every point has a neighborhood that is homeomorphic to an open subset of a Euclidean plane. Every topological surface is homeomorphic to a polyhedral surface such that all facets are triangles. The combinatorial study of such arrangements of triangles (or, more generally, of higher-dimensional simplexes) is the starting object of algebraic topology. This allows the characterization of the properties of surfaces in terms of purely algebraic invariants, such as the genus and homology groups
|
https://en.wikipedia.org/wiki/Surface_(mathematics)
|
We present radiation transfer models that demonstrate that reflected light levels from three-dimensional (3D) exoplanetary atmospheres can be more than 50% lower than those predicted by models of homogeneous or smooth atmospheres. Compared to smooth models, 3D atmospheres enable starlight to penetrate to larger depths, resulting in a decreased probability for the photons to scatter back out of the atmosphere before being absorbed. The increased depth of penetration of starlight in a 3D medium is a well-known result from theoretical studies of molecular clouds and planetary atmospheres. For the first time we study the reflectivity of 3D atmospheres as a possible explanation for the apparent low geometric albedos inferred for extrasolar planetary atmospheres. Our models indicate that 3D atmospheric structure may be an important contributing factor to the non-detections of scattered light from exoplanetary atmospheres. We investigate the self-shadowing radiation transfer effects of patchy cloud cover in 3D scattered light simulations of the atmosphere of HD209458b. We find that, for a generic planet, geometric albedos can be as high as 0.45 in some limited situations, but that in general the geometric albedo is much lower. We conclude with some explanations of why extrasolar planets are likely dark at optical wavelengths.
|
arxiv:0807.1561
|
members of the cak(fe$_{1-x}$mn$_{x}$)$_{4}$as$_{4}$ series have been synthesized in single crystalline form and characterized by elemental analysis, thermodynamic and transport measurements. these measurements show that the superconducting transition temperature decreases monotonically and is finally suppressed below 1.8 k. for $x$-values greater than 0.016, signatures of a magnetic transition can be detected in both thermodynamic and transport measurements, in which kink-like features allow for the determination of the transition temperature, $t^*$, which increases as mn substitution increases. a temperature-composition ( $t$-$x$ ) phase diagram is constructed, revealing a half-dome of superconductivity with the magnetic transition temperature, $t^*$, appearing near 26 k for $x \sim 0.017$ and rising slowly up to 33 k for $x \sim 0.036$. specific heat data are used to track the jump in specific heat at $t_c$; the cak(fe$_{1-x}$mn$_x$)$_4$as$_4$ data do not follow the scaling of $\delta c_{p}$ with $t_{c}^3$ as many of the other fe-based superconducting systems do. elastoresistivity coefficients, $2m_{66}$ and $m_{11}-m_{12}$, as a function of temperature are also measured. $2m_{66}$ and $m_{11}-m_{12}$ are qualitatively similar to those of cak(fe$_{1-x}$ni$_x$)$_4$as$_4$. this may indicate that the magnetic order in the mn-substituted system may still be the same as in cak(fe$_{1-x}$ni$_x$)$_4$as$_4$. a clear change in $h^\prime_{c2}(t)/t_c$, where $h^\prime_{c2}(t)$ is d$h_{c2}(t)$/d$t
|
arxiv:2204.10925
|
quantum non-gaussian states represent an important class of highly non-classical states whose preparation requires quantum operations or measurements beyond the class of gaussian operations and statistical mixing. here we derive criteria for certification of quantum non-gaussianity based on the probability of vacuum in the original quantum state and in a state transmitted through a lossy channel with transmittance t. we prove that the criteria hold for arbitrary multimode states, which is important for their applicability in experiments with broadband sources and single-photon detectors. interestingly, our approach allows quantum non-gaussianity to be detected using only one photodetector instead of complex multiplexed photon detection schemes, at the cost of increased experimental time. we also formulate a quantum non-gaussianity criterion based on the vacuum probability and mean photon number of the state, and we show that this criterion is closely related to the criteria based on pairs of vacuum probabilities. we illustrate the performance of the obtained criteria on the example of realistic imperfect single-photon states modeled as a mixture of vacuum and single-photon states with background poissonian noise.
|
arxiv:2107.09380
|
monte carlo ( mc ) simulations of lattice models are a widely used way to compute thermodynamic properties of substitutional alloys. a limitation to their more widespread use is the difficulty of driving a mc simulation in order to obtain the desired quantities. to address this problem, we have devised a variety of high - level algorithms that serve as an interface between the user and a traditional mc code. the user specifies the goals sought in a high - level form that our algorithms convert into elementary tasks to be performed by a standard mc code. for instance, our algorithms permit the determination of the free energy of an alloy phase over its entire region of stability within a specified accuracy, without requiring any user intervention during the calculations. our algorithms also enable the direct determination of composition - temperature phase boundaries without requiring the calculation of the whole free energy surface of the alloy system.
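as a sketch of what such a high-level driver looks like, the snippet below performs thermodynamic integration of $\beta f$ over inverse temperature, treating the mc engine as a black box that returns mean energies. the two-level-system " engine " and its reference free energy are toy stand-ins for a real alloy mc code; they are assumptions for illustration, not the authors ' interface.

```python
import numpy as np

def mc_average_energy(beta, n_steps=20_000, rng=np.random.default_rng(1)):
    """Stand-in for a call into a standard lattice MC code: Metropolis
    sampling of a toy two-level system, returning the mean energy <E>.
    A real driver would call the alloy MC engine here instead."""
    state, samples = 0, []          # states have energies 0 and 1
    for _ in range(n_steps):
        new = 1 - state
        if rng.random() < np.exp(-beta * (new - state)):
            state = new
        samples.append(state)
    return float(np.mean(samples))

def free_energy(beta_grid):
    """Thermodynamic integration: beta*F(beta) = beta0*F(beta0)
    + integral of <E> d(beta), starting from a reference where F is known
    (here beta0 -> 0, where beta*F -> -ln 2 for the two-level toy)."""
    energies = np.array([mc_average_energy(b) for b in beta_grid])
    increments = 0.5 * (energies[1:] + energies[:-1]) * np.diff(beta_grid)
    beta_f = -np.log(2.0) + np.concatenate(([0.0], np.cumsum(increments)))
    return beta_f / beta_grid       # F(beta) along the grid

betas = np.linspace(0.01, 3.0, 30)
print(free_energy(betas)[-1])       # F at beta = 3.0 for the toy model
```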
|
arxiv:cond-mat/0201473
|
we describe a simulation method for the accurate study of the equilibrium freezing properties of polydisperse fluids under the experimentally relevant condition of fixed polydispersity. the approach is based on the phase switch monte carlo method of wilding and bruce [ phys. rev. lett. 85, 5138 ( 2000 ) ]. this we have generalized to deal with particle size polydispersity by incorporating updates which alter the diameter $\sigma$ of a particle, under the control of a distribution of chemical potential differences $\tilde\mu(\sigma)$. within the resulting isobaric semi-grand canonical ensemble, we detail how to adapt $\tilde\mu(\sigma)$ and the applied pressure such as to study coexistence, whilst ensuring that the ensemble averaged density distribution $\rho(\sigma)$ matches a fixed functional form. results are presented for the effects of small degrees of polydispersity on the solid-liquid transition of soft spheres.
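a minimal sketch of the diameter update described above, assuming a metropolis rule in a semi-grand canonical ensemble with weight $\exp(-\beta u + \beta \sum_i \tilde\mu(\sigma_i))$; the quadratic form of $\tilde\mu(\sigma)$ and the non-interacting energy function are placeholders, not the paper ' s choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def mu_tilde(sigma, mu0=0.0, width=0.05, sigma0=1.0):
    """Hypothetical imposed chemical-potential difference distribution:
    a quadratic well favouring diameters near sigma0."""
    return mu0 - ((sigma - sigma0) / width) ** 2

def diameter_move(sigmas, i, energy_fn, beta=1.0, dmax=0.02):
    """One semi-grand-canonical update: perturb the diameter of particle i
    and accept with a Metropolis rule that includes mu_tilde(sigma)."""
    old = sigmas[i]
    new = old + rng.uniform(-dmax, dmax)
    if new <= 0.0:
        return False
    d_energy = energy_fn(sigmas, i, new) - energy_fn(sigmas, i, old)
    d_mu = mu_tilde(new) - mu_tilde(old)
    if rng.random() < np.exp(-beta * d_energy + beta * d_mu):
        sigmas[i] = new
        return True
    return False

# toy usage with non-interacting particles, so acceptance is set by mu_tilde
sigmas = np.full(100, 1.0)
ideal = lambda s, i, sig: 0.0        # placeholder for the real pair energy
for _ in range(5000):
    diameter_move(sigmas, rng.integers(len(sigmas)), ideal)
print(sigmas.mean(), sigmas.std())   # diameters concentrate near sigma0
```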
|
arxiv:0810.3801
|
in the era of multimedia and the internet, people are eager to bring information from offline to online. quick response ( qr ) codes and digital watermarks help us access information quickly. however, qr codes look ugly, and invisible watermarks can be easily broken in physical photographs. therefore, this paper proposes a novel method to embed hyperlinks into natural images, making the hyperlinks invisible to human eyes but detectable by mobile devices. our method is an end-to-end neural network with an encoder to hide information and a decoder to recover it. from original images to physical photographs, the camera imaging process introduces a series of distortions such as noise, blur, and lighting changes. to train a decoder robust to the physical distortion of the real world, a distortion network based on 3d rendering is inserted between the encoder and the decoder to simulate the camera imaging process. besides, in order to maintain the visual attraction of the image with hyperlinks, we propose a loss function based on the just noticeable difference ( jnd ) to supervise the training of the encoder. experimental results show that our approach outperforms the previous method in both simulated and real situations.
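a compact pytorch sketch of this pipeline: the encoder hides bits as a small image residual, a differentiable noise-plus-blur layer stands in for the paper ' s 3d-rendering distortion network, and the decoder is trained through it, with an mse term standing in for the jnd-based loss. all architectures and constants are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Hides an n-bit message in a 3xHxW image as a small residual."""
    def __init__(self, n_bits=32):
        super().__init__()
        self.fc = nn.Linear(n_bits, 16 * 16)
        self.conv = nn.Conv2d(4, 3, 3, padding=1)
    def forward(self, img, bits):
        b, _, h, w = img.shape
        msg = self.fc(bits).view(b, 1, 16, 16)
        msg = F.interpolate(msg, size=(h, w))          # broadcast message plane
        return img + 0.01 * self.conv(torch.cat([img, msg], dim=1))

class Decoder(nn.Module):
    def __init__(self, n_bits=32):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                                 nn.Linear(8 * 64, n_bits))
    def forward(self, img):
        return self.net(img)

def distort(img):
    """Differentiable stand-in for the camera-imaging distortion network:
    additive noise plus a crude blur (the paper uses a 3D-rendering model)."""
    img = img + 0.02 * torch.randn_like(img)
    return F.avg_pool2d(img, 3, stride=1, padding=1)

enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
img = torch.rand(4, 3, 64, 64)
bits = torch.randint(0, 2, (4, 32)).float()
stego = enc(img, bits)
loss = (F.binary_cross_entropy_with_logits(dec(distort(stego)), bits)
        + 10.0 * F.mse_loss(stego, img))   # MSE as a stand-in for the JND loss
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```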
|
arxiv:1912.01224
|
given the widespread attention to individual thermal comfort, coupled with significant energy - saving potential inherent in energy management systems for optimizing indoor environments, this paper aims to introduce advanced " humans - in - the - building " control techniques to redefine the paradigm of indoor temperature design. firstly, we innovatively redefine the role of individuals in the control loop, establishing a model for users ' thermal comfort and constructing discomfort signals based on individual preferences. unlike traditional temperature - centric approaches, " thermal comfort control " prioritizes personalized comfort. then, considering the diversity among users, we propose a novel method to determine the optimal indoor temperature range, thus minimizing discomfort for various users and reducing building energy consumption. finally, the efficacy of the " thermal comfort control " approach is substantiated through simulations conducted using matlab.
|
arxiv:2403.07453
|
in this research we present a novel algorithm for background subtraction using a moving camera. our algorithm is based purely on visual information obtained from a camera mounted on an electric bus operating in downtown reno, and it automatically detects moving objects of interest with a view to enabling a fully autonomous vehicle. in our approach we exploit the optical flow vectors generated by the motion of the camera while keeping parameter assumptions to a minimum. at first, we estimate the focus of expansion, which is used to model and simulate 3d points given the intrinsic parameters of the camera, and perform multiple linear regression to estimate the regression equation parameters, applying the model to the real data of every frame to identify moving objects. we validated our algorithm using data taken from a common bus route.
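the regression step can be sketched as follows, under the simplifying assumption ( ours, not necessarily the paper ' s ) that background flow is roughly linear in pixel coordinates for ego-motion toward a focus of expansion: fit a multiple linear regression of the flow field on pixel position and flag large residuals as independently moving objects.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic static scene: radial optical flow diverging from a focus of
# expansion (FOE), as produced by pure forward camera translation
foe = np.array([320.0, 240.0])
pts = rng.uniform([0, 0], [640, 480], size=(500, 2))
flow = 0.02 * (pts - foe) + rng.normal(0, 0.2, (500, 2))  # static background
flow[:25] += rng.normal(0, 4.0, (25, 2))                  # independently moving objects

# regress each flow component on pixel coordinates (multiple linear regression)
X = np.column_stack([np.ones(len(pts)), pts])             # [1, x, y] design matrix
coef, *_ = np.linalg.lstsq(X, flow, rcond=None)
residual = np.linalg.norm(flow - X @ coef, axis=1)

# pixels whose flow disagrees with the ego-motion model are flagged as moving
moving = residual > residual.mean() + 2 * residual.std()
print(f"flagged {moving.sum()} candidate moving points (25 injected)")
```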
|
arxiv:1811.06660
|
the flow of viscoelastic wormlike micellar solutions ( wlms ) in porous media is encountered in many practical applications like enhanced oil recovery or groundwater remediation. to understand the flow dynamics of these complex fluids in porous media, a model porous medium consisting of a straight microchannel with micropore throats is very often used. in this study, we perform an extensive numerical investigation of the flow dynamics of wormlike micellar solutions, based on the two-species vasquez-cook-mckinley ( vcm ) model for micelles, through such a model porous medium. we find the existence of an elastic instability once the weissenberg number exceeds a critical value, as seen in many prior experimental and numerical studies dealing with polymer solutions. however, for the present case of a wlm solution, we observe that this elastic instability is greatly influenced by the breakage and reformation mechanisms of the wormlike micelles. in particular, we notice that the intensity of this instability ( characterized by the fluctuating flow fields ) increases as the weissenberg number increases; however, beyond a critical value, the elastic instability and/or the flow field fluctuation is suppressed because of the breakage of long micelles. this is in contrast to polymer solutions, for which the flow field gradually transitions to a more chaotic and turbulent-like state ( the so-called elastic turbulence state ) as the weissenberg number increases. additionally, we observe that the flow dynamics of these wlm solutions are strongly dependent on the type of micropore throat, the number of pore throats, and the spacing between two consecutive pore throats. an extensive discussion of the pressure drop and apparent viscosity is also presented.
|
arxiv:2107.09453
|
servo lag errors in adaptive optics lead to inaccurate compensation of wavefront distortions. to reduce these errors, an attempt has been made to predict future wavefronts by data mining the wavefronts of the immediate past. monte carlo simulations were performed on experimentally obtained data that closely follows kolmogorov phase characteristics. an improvement of 6 % in wavefront correction is reported after data mining is performed. data mining is performed in three steps: ( a ) data cube segmentation, ( b ) polynomial interpolation, and ( c ) wavefront estimation. it is important to optimize the segment size that gives the best prediction results. optimizing how far into the future the wavefront can be reliably predicted helps in selecting a suitable exposure time.
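steps ( b ) and ( c ) can be sketched as a per-pixel polynomial fit over the most recent frames, extrapolated one frame ahead; the cube shape, polynomial order, and toy phase screen below are assumptions for illustration.

```python
import numpy as np

def predict_wavefront(cube, order=2):
    """Predict the next wavefront from a data cube of the k most recent
    wavefronts (shape k x n x m): fit a polynomial in time to every pixel
    and extrapolate one frame ahead."""
    k, n, m = cube.shape
    t = np.arange(k)
    coeffs = np.polyfit(t, cube.reshape(k, -1), deg=order)  # vectorised fit
    nxt = np.vander([float(k)], order + 1)[0] @ coeffs      # evaluate at t = k
    return nxt.reshape(n, m)

# toy usage: a smoothly drifting phase screen
k, n = 6, 32
t = np.arange(k)[:, None, None]
ramp = np.linspace(0, np.pi, n)
cube = np.sin(0.3 * t + ramp[None, :, None] + ramp[None, None, :])
print(predict_wavefront(cube).shape)   # (32, 32)
```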
|
arxiv:0911.0822
|
we introduce a novel resource analysis for typed term rewrite systems based on a potential - based type system. this type system gives rise to polynomial bounds on the innermost runtime complexity. we relate the thus obtained amortised resource analysis to polynomial interpretations and obtain the perhaps surprising result that whenever a rewrite system r can be well - typed, then there exists a polynomial interpretation that orients r. for this we adequately adapt the standard notion of polynomial interpretations to the typed setting.
|
arxiv:1402.1922
|
we present an experimental demonstration of passive, dynamic thermal regulation in a solid - state system with temperature - dependent thermal emissivity switching. we achieve this effect using a multilayered device, comprised of a vanadium dioxide ( vo2 ) thin film on a silicon substrate with a gold back reflector. we experimentally characterize the optical properties of the vo2 film and use the results to optimize device design. using a calibrated, transient calorimetry experiment we directly measure the temperature fluctuations arising from a time - varying heat load. under laboratory conditions, we find that the device regulates temperature better than a constant emissivity sample. we use the experimental results to validate our thermal model, which can be used to predict device performance under the conditions of outer space. in this limit, thermal fluctuations are halved with reference to a constant - emissivity sample.
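the thermal model referred to above is, at heart, a lumped-capacitance energy balance with temperature-dependent emissivity, $C\,\mathrm{d}T/\mathrm{d}t = Q(t) - \epsilon(T)\,\sigma A\,(T^4 - T_{\mathrm{env}}^4)$; the sketch below integrates it for a square-wave heat load with a smooth emissivity switch. all numbers ( switch temperature, emissivities, heat capacity, environment ) are invented for illustration, not the measured vo2 film values.

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def emissivity(T, T_switch=330.0, e_low=0.2, e_high=0.8, width=5.0):
    """Smooth stand-in for the VO2 metal-insulator emissivity switch
    (illustrative values, not fitted to the measured film)."""
    return e_low + (e_high - e_low) / (1.0 + np.exp(-(T - T_switch) / width))

def simulate(q_load, dt=1.0, T0=300.0, C=50.0, A=1e-3, T_env=200.0, switching=True):
    """Lumped-capacitance balance: C dT/dt = Q(t) - eps(T)*sigma*A*(T^4 - T_env^4),
    integrated with forward Euler (dt is far below the thermal time constant)."""
    T = np.empty(len(q_load))
    T[0] = T0
    for i in range(1, len(q_load)):
        eps = emissivity(T[i - 1]) if switching else 0.5
        dT = (q_load[i] - eps * SIGMA * A * (T[i - 1] ** 4 - T_env ** 4)) / C
        T[i] = T[i - 1] + dt * dT
    return T

t = np.arange(40_000)
q = 0.3 + 0.25 * np.sign(np.sin(2 * np.pi * t / 10_000.0))  # square-wave heat load, W
T_var = simulate(q, switching=True)
T_fix = simulate(q, switching=False)
print(f"temperature swing: switching {np.ptp(T_var[20_000:]):.1f} K, "
      f"constant-emissivity {np.ptp(T_fix[20_000:]):.1f} K")
```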
|
arxiv:2003.00031
|
a significant fraction of planetary nebulae ( pne ) exhibit collimated outflows, distinct narrow kinematical components with notable velocity shifts with respect to the main nebular shells typically associated with low - ionization compact knots and linear or precessing jet - like features. we present here a spatio - kinematical investigation of a sample of twelve pne with morphologies in emission lines of low - ionization species suggestive of collimated outflows. using archival narrow - band images and our own high - dispersion long - slit echelle spectra, we confirm the presence of collimated outflows in hen 2 - 429, j 320, m 1 - 66, m 2 - 40, m 3 - 1, and ngc 6210 and possibly in ngc 6741, for which the spatio - kinematical data can also be interpreted as a pair of bipolar lobes. the presence of collimated outflows is rejected in hen 2 - 47, hen 2 - 115, m 1 - 26, and m 1 - 37, but their morphology and kinematics are indicative of the action of supersonic outflows that have not been able to pierce through the nebular envelope. in this sense, m 1 - 66 appears to have experienced a similar interaction between the outflow and nebular envelope, but, as opposed to these four pne, the outflow has been able to break through the nebular envelope. it is suggested that the pne without collimated outflows in our sample are younger or descend from lower mass progenitors than those that exhibit unambiguous collimated outflows.
|
arxiv:1911.11325
|
let $g = \langle x, t \mid w \rangle$ be a one-relator group, where $w$ is a word in $x, t$. if $w$ is a product of conjugates of $x$ then, associated with $w$, there is a polynomial $a_w(x)$ over the integers, which in the case when $g$ is a knot group, is the alexander polynomial of the knot. we prove, subject to certain restrictions on $w$, that if all roots of $a_w(x)$ are real and positive then $g$ is bi-orderable, and that if $g$ is bi-orderable then at least one root is real and positive. this sheds light on the bi-orderability of certain knot groups and on a question of clay and rolfsen. one of the results relies on an extension of work of g. baumslag on adjunction of roots to groups, and this may have independent interest.
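the criterion is easy to test numerically when the alexander polynomial is known; a small sketch using two standard knots ( textbook examples, not computations from the paper ):

```python
import numpy as np

def roots_real_positive(coeffs):
    """Check whether all roots of a polynomial (coefficients in decreasing
    degree) are real and positive, up to numerical tolerance."""
    r = np.roots(coeffs)
    return bool(np.all(np.abs(r.imag) < 1e-9) and np.all(r.real > 0))

# figure-eight knot: delta(t) = t^2 - 3t + 1 has roots (3 +- sqrt(5))/2, real
# and positive, consistent with its knot group being bi-orderable
print(roots_real_positive([1, -3, 1]))   # True

# trefoil knot: delta(t) = t^2 - t + 1 has complex roots, so even the
# necessary condition (at least one real positive root) fails
print(roots_real_positive([1, -1, 1]))   # False
```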
|
arxiv:1405.0994
|
we calculate the two-body triton photodisintegration cross section as a function of photon energy to next-to-next-to-leading order ( nnlo ) in pionless effective field theory ( eft($\pi\!\!/$) ) and show good agreement with experiment. in addition we calculate the polarization asymmetry $r_c = -0.441(15)$ in cold neutron-deuteron capture to nnlo in eft($\pi\!\!/$), in agreement with the experimental value of $r_c = -0.42 \pm 0.03$ [ m. w. konijnenberg et al., phys. lett. b 205, 215 ( 1988 ) ]. we also assess the dependence of $r_c$ on different fits of the two-nucleon magnetic currents. finally, we consider the impact of wigner-su(4) symmetry and demonstrate that starting from the wigner-su(4) symmetric limit and including perturbative corrections to the breaking of wigner-su(4) symmetry does a good job of describing two-body triton photodisintegration.
|
arxiv:2408.14602
|
this paper focuses on the performance of equalizer zero-determinant ( zd ) strategies in discounted repeated asymmetric stackelberg games. in the leader-follower adversarial scenario, the strong stackelberg equilibrium ( sse ), derived from the opponents ' best response ( br ), is technically the optimal strategy for the leader. however, computing an sse strategy may be difficult since it requires solving a mixed-integer program and has exponential complexity in the number of states. to this end, we propose to adopt an equalizer zd strategy, which can unilaterally restrict the opponent ' s expected utility. we first study the existence of an equalizer zd strategy in one-to-one situations, and analyze an upper bound on its performance relative to the baseline sse strategy. then we turn to multi-player models, where one player adopts an equalizer zd strategy. we give bounds on the sum of opponents ' utilities, and compare it with the sse strategy. finally, we give simulations on unmanned aerial vehicles ( uavs ) and moving target defense ( mtd ) to verify the effectiveness of our approach.
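for background, the equalizer zd construction is easiest to see in the iterated prisoner ' s dilemma, where press and dyson introduced it. the sketch below builds a memory-one equalizer strategy that pins the opponent ' s long-run payoff to a chosen value and verifies this against random opponents via the stationary distribution of the induced markov chain; the payoff values and target are standard textbook choices, not the paper ' s stackelberg setting.

```python
import numpy as np

# iterated prisoner's dilemma payoffs (R, S, T, P) and the opponent's payoff
# in the states (cc, cd, dc, dd), labelled from player X's point of view
R, S, T, P = 3, 0, 5, 1
S_Y = np.array([R, T, S, P], dtype=float)

def equalizer(W, alpha=-1/3):
    """Press-Dyson equalizer: memory-one strategy p for X that pins the
    opponent's long-run payoff to W, via (p1-1, p2-1, p3, p4) = alpha*(S_Y - W)."""
    p = alpha * (S_Y - W) + np.array([1.0, 1.0, 0.0, 0.0])
    assert np.all((0 <= p) & (p <= 1)), "W/alpha outside the feasible range"
    return p

def long_run_payoff(p, q):
    """Opponent's stationary payoff when X plays p and Y plays memory-one q."""
    qx = np.array([q[0], q[2], q[1], q[3]])   # Y's strategy in X's state labels
    M = np.zeros((4, 4))
    for s in range(4):
        for s2, (x_coop, y_coop) in enumerate([(1, 1), (1, 0), (0, 1), (0, 0)]):
            px = p[s] if x_coop else 1.0 - p[s]
            py = qx[s] if y_coop else 1.0 - qx[s]
            M[s, s2] = px * py
    w, v = np.linalg.eig(M.T)                 # stationary distribution: vM = v
    v = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    v /= v.sum()
    return float(v @ S_Y)

p = equalizer(W=2.0)
rng = np.random.default_rng(6)
for _ in range(3):
    q = rng.random(4)                         # arbitrary opponent strategy
    print(round(long_run_payoff(p, q), 3))    # always ~2.0
```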
|
arxiv:2310.03441
|
we consider a charged quantum mechanical particle with spin $\frac{1}{2}$ and gyromagnetic ratio $g \ne 2$ in the field of a magnetic string. whereas the interaction of the charge with the string is the well known aharonov-bohm effect, and the contribution of the magnetic moment associated with the spin in the case $g = 2$ is known to yield additional scattering and zero modes ( one for each flux quantum ), an anomaly of the magnetic moment ( i.e. $g > 2$ ) leads to bound states. we consider two methods for treating the case $g > 2$. the first is the method of self-adjoint extension of the corresponding hamilton operator. it yields one bound state as well as additional scattering. in the second we consider three exactly solvable models for finite flux tubes and take the limit of shrinking the tube radius to zero. for finite radius, there are $n + 1$ bound states ( $n$ is the number of flux quanta in the tube ). for $r \to 0$ the bound state energies tend to infinity, so that this limit is not physical unless $g \to 2$ along with $r \to 0$. thereby only for fluxes less than unity are the results of the method of self-adjoint extension reproduced, whereas for larger fluxes $n$ bound states exist, and we conclude that this method is not applicable. we discuss the physically interesting case of small but finite radius, whereby the natural scale is given by the anomaly of the magnetic moment of the electron, $a_e = (g-2)/2 \approx 10^{-3}$.
|
arxiv:hep-th/9304017
|
production logistics has an important role as a chain that connects the components of the production system. the most important goal of production logistics planning is to keep the production system flowing smoothly. however, compared to the production system, the level of planning, management, and digitalization of the production logistics system is not high enough, so it is difficult to respond flexibly when unexpected situations occur. optimization and heuristic algorithms have been proposed to solve this problem, but due to their inflexible nature, they can only achieve the desired solution in a limited environment. in this paper, the relationship between the production and production logistics systems is analyzed, and stochastic variables are introduced by modifying the pickup and delivery problem with time windows ( pdptw ) optimization model to establish a flexible production logistics plan. accounting for stochastic variables gives the scheduler a new perspective and new insights grounded in the mathematical model. however, since the optimization model is still insufficient to respond to a dynamic environment, future research will cover how to derive meaningful results even in a dynamic environment, for example by using a machine learning model.
|
arxiv:2203.17033
|
we studied the variation of $e^+$ and $\bar p$ top-of-the-atmosphere spectra due to uncertainties in the parameters of the milky way geometry, the propagation models, and the cross sections. we used the b/c data and the galprop code for the propagation analysis. we also derived uncertainty bands for the subfe/fe ratio, h, and he. finally, we considered a neutralino-induced component in the antiproton flux in the msugra framework. pamela expectations for positrons and antiprotons are calculated. we studied in detail the possibility of disentangling an eventual signal component in the antiproton spectra in a clumpy halo scenario: minimal values of the clumpiness factors necessary to disentangle the signal from the background without degrading the quality of the antiproton data fit are found. examples of total spectra in comparison with existing experimental data are also given, together with an example of a pamela prediction for the total spectra. the main result of this work is that, for the diffusion and convection background model, pamela will be able to disentangle an eventual supersymmetric signal even for small clumpiness factors.
|
arxiv:astro-ph/0502406
|
we present an effective method to generate second harmonic ( sh ) waves using nonlinear metamaterial composed of coupled split ring resonators ( csrrs ) with varactor ( variable capacitance ) diodes. the csrr structure has two resonant modes : a symmetric mode that resonates at the fundamental frequency and an anti - symmetric mode that resonates at the sh frequency. resonant fundamental waves in the symmetric mode generate resonant sh waves in the anti - symmetric mode. the double resonance contributes to effective sh radiation. in the experiment, we observe 19. 6 db enhancement in the sh radiation in comparison with the nonlinear metamaterial that resonates only for the fundamental waves.
|
arxiv:1201.5196
|
we prove by means of elementary methods that phase retrieval of complex polynomials p of degree less than n is possible with 4n - 4 phaseless fourier measurements of p and p '. in addition, we provide an associated algorithm and prove that it recovers p up to global phase.
|
arxiv:1403.4769
|
up to now, only limited research has been conducted on cross-modal retrieval of suitable music for a specified video or vice versa. moreover, much of the existing research relies on metadata such as keywords, tags, or associated descriptions that must be individually produced and attached afterwards. this paper introduces a new content-based, cross-modal retrieval method for video and music that is implemented through deep neural networks. we train the network via an inter-modal ranking loss such that videos and music with similar semantics end up close together in the embedding space. however, if only the inter-modal ranking constraint is used for embedding, modality-specific characteristics can be lost. to address this problem, we propose a novel soft intra-modal structure loss that leverages the relative distance relationships between intra-modal samples before embedding. we also introduce reasonable quantitative and qualitative experimental protocols to address the lack of standard protocols for these less-mature video-music tasks. finally, we construct a large-scale 200k video-music pair benchmark. all the datasets and source code can be found in our online repository ( https://github.com/csehong/vm-net ).
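a minimal pytorch sketch of the inter-modal ranking loss ( batch-negative triplet form; the margin and embedding size are assumptions, and the soft intra-modal structure loss is omitted here ):

```python
import torch
import torch.nn.functional as F

def inter_modal_ranking_loss(video_emb, music_emb, margin=0.2):
    """Triplet-style ranking loss over a batch of matched video/music pairs:
    the i-th video should be closer to the i-th music clip than to any other
    clip in the batch (and vice versa), by at least `margin`."""
    v = F.normalize(video_emb, dim=1)
    m = F.normalize(music_emb, dim=1)
    sim = v @ m.t()                        # cosine similarities, (B, B)
    pos = sim.diag().unsqueeze(1)          # matched-pair similarity
    off = 1.0 - torch.eye(len(sim), device=sim.device)
    loss_v2m = (F.relu(margin + sim - pos) * off).sum(dim=1)
    loss_m2v = (F.relu(margin + sim - pos.t()) * off).sum(dim=0)
    return (loss_v2m + loss_m2v).mean()

video = torch.randn(8, 128, requires_grad=True)   # stand-ins for network outputs
music = torch.randn(8, 128, requires_grad=True)
loss = inter_modal_ranking_loss(video, music)
loss.backward()
print(float(loss))
```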
|
arxiv:1704.06761
|
we investigate the interaction between scalar fields and radiation in the framework of warm inflationary models by using the irreversible thermodynamics of open systems with matter creation / annihilation. we consider the scalar fields and radiation as an interacting two component cosmological fluid in a homogeneous, spatially flat and isotropic friedmann - robertson - walker ( frw ) universe. the thermodynamics of open systems as applied together with the gravitational field equations to the two component cosmological fluid leads to a generalization of the elementary scalar field - radiation interaction model, which is the theoretical basis of warm inflationary models, with the decay ( creation ) pressures explicitly considered as parts of the cosmological fluid energy - momentum tensor. specific models describing coherently oscillating scalar waves, scalar fields with a constant potential, and scalar fields with a higgs type potential are considered in detail. for each case exact and numerical solutions of the gravitational field equations with scalar field - radiation interaction are obtained, and they show the transition from an accelerating inflationary phase to a decelerating one. the theoretical predictions of the warm inflationary scenario with irreversible matter creation are also compared in detail with the planck 2018 observational data, and constraints on the free parameters of the model are obtained.
|
arxiv:2003.02257
|
the rapid progress in quantum-optical experiments, especially in the field of cavity quantum electrodynamics and nanoplasmonics, makes it possible to substantially modify and control chemical and physical properties of atoms, molecules, and solids by strong coupling to the quantized field. alongside such experimental advances has been the recent development of ab-initio approaches such as quantum electrodynamical density-functional theory ( qedft ) that are capable of describing these strongly coupled systems from first principles. to investigate response properties of relatively large systems coupled to a wide range of photon modes, ab-initio methods that scale well with system size become relevant. in light of this, we extend the linear-response sternheimer approach within the framework of qedft to efficiently compute excited-state properties of strongly coupled light-matter systems. using this method, we capture features of strong light-matter coupling both in the dispersion and absorption properties of a molecular system strongly coupled to the modes of a cavity. we exemplify the efficiency of the sternheimer approach by coupling the matter system to the continuum of an electromagnetic field. we observe changes in the spectral features of the coupled system as lorentzian line shapes turn into fano resonances when the molecule interacts strongly with the continuum of modes. this work provides an alternative approach for efficiently computing excited-state properties of large molecular systems interacting with the quantized electromagnetic field.
|
arxiv:2201.08734
|
of developing genetic disorders. genetic testing identifies changes in chromosomes, genes, or proteins. most of the time, testing is used to find changes that are associated with inherited disorders. the results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person ' s chance of developing or passing on a genetic disorder. as of 2011 several hundred genetic tests were in use. since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling. = = = agriculture = = = genetically modified crops ( " gm crops ", or " biotech crops " ) are plants used in agriculture, the dna of which has been modified with genetic engineering techniques. in most cases, the main aim is to introduce a new trait that does not occur naturally in the species. biotechnology firms can contribute to future food security by improving the nutrition and viability of urban agriculture. furthermore, the protection of intellectual property rights encourages private sector investment in agrobiotechnology. examples in food crops include resistance to certain pests, diseases, stressful environmental conditions, resistance to chemical treatments ( e. g. resistance to a herbicide ), reduction of spoilage, or improving the nutrient profile of the crop. examples in non - food crops include production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation. farmers have widely adopted gm technology. between 1996 and 2011, the total surface area of land cultivated with gm crops had increased by a factor of 94, from 17, 000 to 1, 600, 000 square kilometers ( 4, 200, 000 to 395, 400, 000 acres ). 10 % of the world ' s crop lands were planted with gm crops in 2010. as of 2011, 11 different transgenic crops were grown commercially on 395 million acres ( 160 million hectares ) in 29 countries such as the us, brazil, argentina, india, canada, china, paraguay, pakistan, south africa, uruguay, bolivia, australia, philippines, myanmar, burkina faso, mexico and spain. genetically modified foods are foods produced from organisms that have had specific changes introduced into their dna with the methods of genetic engineering. these techniques have allowed for the introduction of new crop traits as well as a far greater control over a food ' s genetic structure than previously afforded by methods such as selective breeding and mutation breeding. commercial sale of genetically modified foods began in 1994, when calgene first marketed its flavr savr delayed ripening tomato. to date most genetic modification of foods
|
https://en.wikipedia.org/wiki/Biotechnology
|
aiming at separating the cartoon and texture layers from an image, cartoon-texture decomposition approaches resort to image priors to model cartoon and texture respectively. in recent years, patch recurrence has emerged as a powerful prior for image recovery. however, the existing strategies of using patch recurrence are ineffective for cartoon-texture decomposition, as both cartoon contours and texture patterns exhibit strong patch recurrence in images. to address this issue, we introduce the isotropy prior of patch recurrence, namely that the spatial configuration of similar patches in texture exhibits an isotropic structure different from that in cartoon, to model the texture component. based on the isotropic patch recurrence, we construct a nonlocal sparsification system which can effectively distinguish well-patterned features from contour edges. incorporating the constructed nonlocal system into morphology component analysis, we develop an effective method for both noiseless and noisy cartoon-texture decomposition. the experimental results demonstrate the superior performance of the proposed method over existing ones, as well as the effectiveness of the isotropic patch recurrence prior.
|
arxiv:1811.04208
|
we show how text from news articles can be used to predict intraday price movements of financial assets using support vector machines. multiple kernel learning is used to combine equity returns with text as predictive features to increase classification performance and we develop an analytic center cutting plane method to solve the kernel learning problem efficiently. we observe that while the direction of returns is not predictable using either text or returns, their size is, with text features producing significantly better performance than historical returns alone.
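the kernel-combination idea can be sketched with scikit-learn ' s precomputed-kernel svm. sweeping a fixed convex weight over a grid is a simplification of the analytic-center cutting-plane mkl solver in the paper, and all data below are synthetic stand-ins:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

rng = np.random.default_rng(4)

# stand-ins: bag-of-words counts for news text, recent returns per asset-day
X_text = rng.poisson(0.1, size=(300, 500)).astype(float)
X_ret = rng.normal(0, 1, size=(300, 5))
# toy label: did the size of recent moves exceed a threshold?
y = np.where(np.abs(X_ret).sum(axis=1) > 4.0, 1, -1)

train, test = np.arange(200), np.arange(200, 300)

def combined_kernel(a_idx, b_idx, w):
    """Convex combination of a text kernel and a returns kernel; proper MKL
    would learn w by optimization instead of fixing it."""
    k_text = linear_kernel(X_text[a_idx], X_text[b_idx])
    k_ret = rbf_kernel(X_ret[a_idx], X_ret[b_idx])
    return w * k_text + (1 - w) * k_ret

for w in (0.0, 0.5, 1.0):
    clf = SVC(kernel="precomputed").fit(combined_kernel(train, train, w), y[train])
    acc = clf.score(combined_kernel(test, train, w), y[test])
    print(f"w={w:.1f}  test accuracy={acc:.2f}")
```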
|
arxiv:0809.2792
|
we investigate the behavior of drainage displacements in heterogeneous porous media, finding a transition from viscous fingering to a foam-like regime. a pore network model incorporating the formation of blobs is adopted to study this phenomenon. by imposing a pressure difference between the inlet and outlet, we observe that the displacement pattern undergoes a significant transition from a continuous front of growing viscous fingers to the emergence of foam, which develops and propagates until breakthrough. this transition occurs at a specific distance from the inlet, which we measure and analyze as a function of the viscosity ratio and the capillary number, demonstrating that it follows a non-trivial power-law decay with both parameters. moreover, we discuss the relationship between the evolution of the total flow rate and the local pressure drop, showing that the foam developed reduces global mobility. we observe that foam is formed from the fragmentation of viscous fingers beneath the front, and this instability mechanism is connected with fluctuations of the local flow rate, which we analyze both in the viscous fingering region and in the foam region.
|
arxiv:2307.13451
|
misinformation is one of the key challenges facing society today. user - centered misinformation interventions as digital countermeasures that exert a direct influence on users represent a promising means to deal with the large amounts of information available. while an extensive body of research on this topic exists, researchers are confronted with a diverse research landscape spanning multiple disciplines. this review systematizes the landscape of user - centered misinformation interventions to facilitate knowledge transfer, identify trends, and enable informed decision - making. over 5, 700 scholarly publications were screened and a systematic literature review ( n = 163 ) was conducted. a taxonomy was derived regarding intervention design ( e. g., ( binary ) label ), user interaction ( active or passive ), and timing ( e. g., post exposure to misinformation ). we provide a structured overview of approaches across multiple disciplines, and derive six overarching challenges for future research.
|
arxiv:2301.06517
|
we show the existence of a polynomial-size extended formulation for the base polytope of a $(k, \ell)$-sparsity matroid. for an undirected graph $g = (v, e)$, the size of the formulation is $o(|v||e|)$ when $k \geq \ell$ and $o(|v|^2|e|)$ when $k \leq \ell$. to this end, we employ the technique recently developed by faenza et al. that uses a randomized communication protocol.
|
arxiv:1403.7272
|
this article reviews two currently available analytic models of the dielectric function of a plasma consisting of quantum particles interacting via coulomb forces, namely the random phase approximation ( rpa ) and the standard ( simple ) plasmon pole approximation ( sppa ). it is shown that these models describe different non-overlapping plasma regimes. the rpa describes a weakly coupled plasma whose dynamics are an admixture of independent particle modes at short wavelengths and collective plasma modes at long wavelengths. the sppa describes a system dominated by collective resonances at all wavelengths. a new model of a multicomponent plasma, the generalized plasmon pole approximation ( gppa ), is formulated and shown to provide a description of a range of plasma regimes encompassing both those described by the rpa and the sppa. therefore, as well as providing a good representation of the rpa in both quantum and classical regimes, this new model also provides a promising basis for modelling non-ideal quantum plasmas, which are plasmas in regimes of significantly stronger coupling than are addressed by the rpa.
|
arxiv:1508.05606
|
we describe and analyze a cellular nonlinear network based on magnetic nanostructures for image processing. the network consists of magneto-electric cells integrated onto a common ferromagnetic film, the spin wave bus. the magneto-electric cell is an artificial two-phase multiferroic structure comprising piezoelectric and ferromagnetic materials. a bit of information is assigned to the cell ' s magnetic polarization, which can be controlled by the applied voltage. the information exchange among the cells is via the spin waves propagating in the spin wave bus. each cell changes its state as a combined effect of two factors: the magneto-electric coupling and the interaction with the spin waves. the distinct feature of the network with a spin wave bus is the ability to control the inter-cell communication by an external global parameter, the magnetic field. the latter makes it possible to realize different image processing functions on the same template without rewiring or reconfiguration. we present the results of numerical simulations illustrating image filtering, erosion, dilation, horizontal and vertical line detection, inversion, and edge detection accomplished on one template by the proper choice of the strength and direction of the external magnetic field. we also present numerical estimates of the major network parameters such as cell density, power dissipation, and functional throughput, and compare them with the parameters projected for other nano-architectures such as cmol-crossnet, quantum dot cellular automata, and the quantum dot image processor. potentially, the utilization of spin wave phenomena at the nanometer scale may provide a route to low-power-consuming and functional logic circuits for special-task data processing.
|
arxiv:0907.5453
|
this paper has been withdrawn by the authors; a better version is available as hep-ph/0505139.
|
arxiv:hep-ph/0501219
|
in this work, we propose a new gaussian process regression ( gpr ) method: physics information aided kriging ( phik ). in standard data-driven kriging, the unknown function of interest is usually treated as a gaussian process with an assumed stationary covariance whose hyperparameters are estimated from data. in phik, we compute the mean and covariance function from realizations of available stochastic models, e. g., from solutions of governing stochastic partial differential equations. the gaussian process so constructed is generally non-stationary and does not assume a specific form of the covariance function. our approach avoids the optimization step of data-driven gpr methods for identifying the hyperparameters. more importantly, we prove that physical constraints in the form of a deterministic linear operator are guaranteed in the resulting prediction. we also provide an error estimate on preserving the physical constraints when errors are included in the stochastic model realizations. to reduce the computational cost of obtaining stochastic model realizations, we propose a multilevel monte carlo estimate of the mean and covariance functions. further, we present an active learning algorithm that guides the selection of additional observation locations. the efficiency and accuracy of phik are demonstrated for reconstructing a partially known modified branin function, studying a three-dimensional heat transfer problem, and learning a conservative tracer distribution from sparse concentration measurements.
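the heart of phik, replacing a fitted stationary kernel with ensemble statistics of model realizations, fits in a few lines; the sketch below conditions on observations in the usual gp way, with a toy random-amplitude sine field standing in for realizations of a governing stochastic model:

```python
import numpy as np

def phik_predict(realizations, obs_idx, obs_vals, nugget=1e-8):
    """Physics-informed kriging sketch: mean and covariance come from an
    ensemble of stochastic-model realizations (n_real x n_points) instead of
    a fitted stationary kernel, then standard GP conditioning is applied."""
    mu = realizations.mean(axis=0)
    C = np.cov(realizations, rowvar=False)          # generally non-stationary
    C_oo = C[np.ix_(obs_idx, obs_idx)] + nugget * np.eye(len(obs_idx))
    gain = C[:, obs_idx] @ np.linalg.inv(C_oo)
    post_mean = mu + gain @ (obs_vals - mu[obs_idx])
    post_cov = C - gain @ C[obs_idx, :]
    return post_mean, post_cov

# toy "stochastic model": random-amplitude sine fields on a 1D grid
x = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(5)
ens = rng.normal(1.0, 0.3, (200, 1)) * np.sin(2 * np.pi * x)[None, :]
obs_idx = np.array([5, 20, 40])
truth = 1.2 * np.sin(2 * np.pi * x)
mean, _ = phik_predict(ens, obs_idx, truth[obs_idx])
print(np.max(np.abs(mean - truth)))   # small: the prediction keeps the sine shape
```

because every realization satisfies the toy model ' s structure, the conditioned mean inherits it, which is the sense in which physical constraints carry over to the prediction.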
|
arxiv:1809.03461
|
we present a general formalism for investigating the second-order optical response of solids to an electric field in weakly disordered crystals with arbitrarily complicated band structures, based on density-matrix equations of motion, on a born approximation treatment of disorder, and on an expansion in scattering rate to leading non-trivial order. one of the principal aims of our work is to enable extensive transport theory applications that account fully for the interplay between electric-field-induced interband and intraband coherence and bloch-state scattering. the quasiparticle bands are treated in a completely general manner that allows for arbitrary forms of the intrinsic spin-orbit coupling ( soc ) and could be extended to extrinsic soc. according to previous results, in the presence of the disorder potential the interband response in conductors includes, in addition to an intrinsic contribution due to the entire fermi sea that captures, among other effects, the berry curvature contribution to wave-packet dynamics, an anomalous contribution caused by scattering that is sensitive to the presence of the fermi surface. to demonstrate the rich physics captured by our theory, relaxation time matrices at different orders in the disorder strength are considered, and at the same time we explicitly solve for some electric-field response properties of a simple disordered rashba model that are known to be dominated by interband coherence contributions. the expressions we present are amenable to numerical calculations, and we demonstrate this by performing a full band-structure calculation of the interband contribution, even in metals.
|
arxiv:2207.00331
|
sharpness - aware minimization ( sam ) is a highly effective regularization technique for improving the generalization of deep neural networks for various settings. however, the underlying working of sam remains elusive because of various intriguing approximations in the theoretical characterizations. sam intends to penalize a notion of sharpness of the model but implements a computationally efficient variant ; moreover, a third notion of sharpness was used for proving generalization guarantees. the subtle differences in these notions of sharpness can indeed lead to significantly different empirical results. this paper rigorously nails down the exact sharpness notion that sam regularizes and clarifies the underlying mechanism. we also show that the two steps of approximations in the original motivation of sam individually lead to inaccurate local conclusions, but their combination accidentally reveals the correct effect, when full - batch gradients are applied. furthermore, we also prove that the stochastic version of sam in fact regularizes the third notion of sharpness mentioned above, which is most likely to be the preferred notion for practical performance. the key mechanism behind this intriguing phenomenon is the alignment between the gradient and the top eigenvector of hessian when sam is applied.
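for reference, the computationally efficient sam variant analyzed here takes an ascent step to the worst-case weights within an $\ell_2$ ball of radius $\rho$, evaluates the gradient there, and applies it at the original weights; a minimal pytorch sketch ( hyperparameters are illustrative ):

```python
import torch

def sam_step(model, loss_fn, data, target, optimizer, rho=0.05):
    """One step of sharpness-aware minimization: ascend to the worst-case
    weights within an L2 ball of radius rho, take the gradient there, then
    update the original weights (the efficient first-order SAM variant)."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(data), target)
    loss.backward()
    norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params))
    eps = [rho * p.grad / (norm + 1e-12) for p in params]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)                           # ascend to the worst-case point
    optimizer.zero_grad()
    loss_fn(model(data), target).backward()     # sharpness-aware gradient
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                           # restore the original weights
    optimizer.step()
    optimizer.zero_grad()
    return float(loss)

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
print(sam_step(model, torch.nn.functional.cross_entropy, x, y, opt))
```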
|
arxiv:2211.05729
|
in a previous work ( callegari and yokoyama 2007, celest. mech. dyn. astr. vol. 98 ), the main features of the motion of the pair enceladus-dione were analyzed in the frozen regime, i.e., without considering the tidal evolution. here, the results of a large set of numerical simulations of a pair of satellites similar to enceladus and dione crossing the 2:1 mean-motion resonance are shown. the resonance crossing is modeled with a linear tidal theory, considering a two-degrees-of-freedom model written in the framework of the general planar three-body problem. the main regimes of motion of the system during the passage through resonance are studied in detail. we discuss our results by comparing them with classical scenarios of tidal evolution of the system. we show new scenarios of evolution of the enceladus-dione system through resonance not seen in previous approaches to the problem.
|
arxiv:0803.2264
|
in this paper, we apply borcea--voisin ' s construction and give new examples of calabi--yau fourfolds $y$, which admit an elliptic fibration onto a smooth threefold $v$, whose singular fibers of type $i_5$ lie above a del pezzo surface $dp \subset v$. these are relevant models for f-theory according to papers by c. beasley, j. j. heckman, c. vafa. moreover, at the end of the paper we will give the explicit equations of some of these calabi--yau fourfolds and their fibrations.
|
arxiv:1706.01689
|
we study in this paper the impact of communication latency on the classical work stealing load balancing algorithm. our approach considers existing performance models and the underlying algorithms. we introduce a latency parameter in the model and study its overall impact through careful observation of simulation results. using this method we are able to derive a new expression for the expected running time of divisible load applications. this expression enables us to predict under which conditions a given run will yield acceptable performance. for instance, we can easily calibrate the maximal number of processors one should use for a given combination of workload and platform. we also consider the impact of several algorithmic variants such as simultaneous transfers of work or thresholds for avoiding useless transfers. all our results are validated through simulation on a wide range of parameters.
|
arxiv:1805.01768
|
we develop and study fpga implementations of algorithms for charged particle tracking based on graph neural networks. the two complementary fpga designs are based on opencl, a framework for writing programs that execute across heterogeneous platforms, and hls4ml, a high - level - synthesis - based compiler for neural network to firmware conversion. we evaluate and compare the resource usage, latency, and tracking performance of our implementations based on a benchmark dataset. we find a considerable speedup over cpu - based execution is possible, potentially enabling such algorithms to be used effectively in future computing workflows and the fpga - based level - 1 trigger at the cern large hadron collider.
|
arxiv:2012.01563
|
in this paper we use the theory of framed correspondences to construct milnor-witt transfers on homotopy modules. as a consequence we identify the zeroth stable $\mathbb{A}^1$-homotopy sheaves of smooth varieties with the zeroth homology of the corresponding mw-motivic complexes, and prove that the hearts of the homotopy $t$-structures on the stable $\mathbb{A}^1$-derived category and the category of milnor-witt motives are equivalent.
|
arxiv:1710.07412
|
geodesics are used in a wide array of applications in cosmology and astrophysics. however, it is not a trivial task to efficiently calculate exact geodesic distances in an arbitrary spacetime. we show that in spatially flat ( 3 + 1 ) - dimensional friedmann - lemaitre - robertson - walker ( flrw ) spacetimes, it is possible to integrate the second - order geodesic differential equations, and derive a general method for finding both timelike and spacelike distances given initial - value or boundary - value constraints. in flat spacetimes with either dark energy or matter, whether dust, radiation, or a stiff fluid, we find an exact closed - form solution for geodesic distances. in spacetimes with a mixture of dark energy and matter, including spacetimes used to model our physical universe, there exists no closed - form solution, but we provide a fast numerical method to compute geodesics. a general method is also described for determining the geodesic connectedness of an flrw manifold, provided only its scale factor.
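for the mixed matter plus dark energy case, where the paper notes no closed form exists, the radial comoving distance reduces to the standard one-dimensional quadrature $\chi(z) = \int_0^z \mathrm{d}z'/H(z')$; a short sketch with illustrative flat lambda-cdm parameters ( this shows only the distance integral, not the paper ' s general timelike/spacelike geodesic solver ):

```python
import numpy as np
from scipy.integrate import quad

# flat FLRW with matter + dark energy (no closed form for distances here)
H0 = 67.7 / 299792.458        # Hubble rate over c, in units of 1/Mpc
om_m, om_l = 0.31, 0.69

def comoving_distance(z):
    """Radial comoving distance to redshift z, chi = int_0^z dz'/H(z'),
    obtained by numerical quadrature since the matter + Lambda integrand
    has no elementary antiderivative."""
    integrand = lambda zp: 1.0 / (H0 * np.sqrt(om_m * (1 + zp) ** 3 + om_l))
    chi, _ = quad(integrand, 0.0, z)
    return chi                # in Mpc

for z in (0.5, 1.0, 2.0):
    print(f"z = {z}:  chi = {comoving_distance(z):7.1f} Mpc")
```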
|
arxiv:1705.00730
|
this study investigates the effects of incorporating 11b4c interlayers into fe/si multilayers, with a focus on interface quality, reflectivity, polarization, and magnetic properties for polarized neutron optics. it is found that the introduction of 1 Å and 2 Å 11b4c interlayers significantly improves the interface sharpness, reducing interface width and preventing excessive si diffusion into the fe layers. x-ray reflectivity and polarized neutron reflectivity measurements show enhanced reflectivity and polarization, with a notable increase in polarization for 30 Å period multilayers. the inclusion of interlayers also helps prevent the formation of iron-silicides, improving both the magnetic properties and neutron optical performance. however, the impact of interlayers is less pronounced in thicker-period multilayers ( 100 Å ), primarily due to the ratio between layer and interface widths. these results suggest that 11b4c interlayers offer a promising route for optimizing fe/si multilayer performance in polarized neutron mirrors.
|
arxiv:2502.07507
|
we present quenched results for b meson decay constants using nrqcd b quarks and o(a) tadpole-improved clover light quarks. for the first time, one-loop matching factors between lattice and continuum currents are incorporated through o($\alpha/m$), taking operator mixing fully into account. this includes an important o($\alpha a$) discretization correction to the heavy-light axial vector current. we find $f_b = 0.147(11)(^{+8}_{-12})(9)(6)$ mev and $f_{b_s}/f_b = 1.20(4)(^{+4}_{-0})$. pacs numbers: 12.38.gc, 12.39.hg, 13.20.he, 14.40.nd
|
arxiv:hep-lat/9801038
|
we address the problem of automatic american sign language fingerspelling recognition from video. prior work has largely relied on frame - level labels, hand - crafted features, or other constraints, and has been hampered by the scarcity of data for this task. we introduce a model for fingerspelling recognition that addresses these issues. the model consists of an auto - encoder - based feature extractor and an attention - based neural encoder - decoder, which are trained jointly. the model receives a sequence of image frames and outputs the fingerspelled word, without relying on any frame - level training labels or hand - crafted features. in addition, the auto - encoder subcomponent makes it possible to leverage unlabeled data to improve the feature learning. the model achieves 11. 6 % and 4. 4 % absolute letter accuracy improvement respectively in signer - independent and signer - adapted fingerspelling recognition over previous approaches that required frame - level training labels.
|
arxiv:1710.03255
|
satellite imagery and remote sensing provide explanatory variables at relatively high resolutions for modeling geospatial phenomena, yet regional summaries are often desirable for analysis and actionable insight. in this paper, we propose a novel method of inducing spatial aggregations as a component of the machine learning process, yielding regional model features whose construction is driven by model prediction performance rather than prior assumptions. our results demonstrate that genetic programming is particularly well suited to this type of feature construction because it can automatically synthesize appropriate aggregations, as well as better incorporate them into predictive models compared to other regression methods we tested. in our experiments we consider a specific problem instance and real - world dataset relevant to predicting snow properties in high - mountain asia.
|
arxiv:1706.07888
|
a flow line function is proposed to describe the material deformation in ecae for a 120° die. this new analytical approach is incorporated into a viscoplastic self-consistent polycrystal code to simulate the texture evolution in route a of copper and compared to experimental textures as well as to those corresponding to simple shear.
|
arxiv:1205.2745
|
children develop narrative skills by understanding and actively building connections between elements, image - text matching and consequences. however, it is challenging for children to clearly grasp these multi - level links only through explanations of text or facilitator ' s speech. to address this, we developed colin, an interactive storytelling tool that supports children ' s multi - level narrative skills through both voice and visual modalities. in the generation stage, colin supports facilitator to define and review generated text and image content freely. in the understanding stage, a question - feedback model helps children understand multi - level connections while co - creating stories with colin. in the building phase, colin actively encourages children to create connections between elements through drawing and speaking. a user study with 20 participants evaluated colin by measuring children ' s engagement, understanding of cause - and - effect relationships, and the quality of their new story creations. our results demonstrated that colin significantly enhances the development of children ' s narrative skills across multiple levels.
|
arxiv:2405.06495
|
we present a new " integral = series " type identity of multiple zeta values, and show that this is equivalent in a suitable sense to the fundamental theorem of regularization. we conjecture that this identity is enough to describe all linear relations of multiple zeta values over q. we also establish the regularization theorem for multiple zeta - star values, which too is equivalent to our new identity. a connection to kawashima ' s relation is discussed as well.
|
arxiv:1605.03117
|
knowledge distillation is a popular paradigm for learning portable neural networks by transferring the knowledge from a large model into a smaller one. most existing approaches enhance the student model by utilizing instance-level similarity information between categories provided by the teacher model. however, these works ignore the similarity correlation between different instances, which plays an important role in confidence prediction. to tackle this issue, we propose a novel method in this paper, called similarity transfer for knowledge distillation ( stkd ), which aims to fully utilize the similarities between categories of multiple samples. furthermore, we propose to better capture the similarity correlation between different instances by the mixup technique, which creates virtual samples by weighted linear interpolation. note that our distillation loss can fully utilize the incorrect-class similarities via the mixed labels. the proposed approach promotes the performance of the student model, as the virtual sample created from multiple images produces a similar probability distribution in the teacher and student networks. experiments and ablation studies on several public classification datasets including cifar-10, cifar-100, cinic-10 and tiny-imagenet verify that this light-weight method can effectively boost the performance of the compact student model. the results show that stkd substantially outperforms vanilla knowledge distillation and achieves superior accuracy over the state-of-the-art knowledge distillation methods.
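the virtual-sample idea can be sketched in pytorch: mix two images with a beta-distributed weight and distill the teacher ' s softened prediction on the blend into the student. the temperature, the beta parameter, and the linear models below are illustrative assumptions rather than the paper ' s setup:

```python
import torch
import torch.nn.functional as F

def stkd_like_loss(student, teacher, x, temperature=4.0, alpha=0.4):
    """Distillation on mixup samples, in the spirit of STKD: a virtual sample
    blends two images, and the student matches the teacher's softened output
    on the blend, which encodes similarities across several instances."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[idx]       # virtual sample
    with torch.no_grad():
        t_logits = teacher(x_mix)
    s_logits = student(x_mix)
    return F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                    F.softmax(t_logits / temperature, dim=1),
                    reduction="batchmean") * temperature ** 2

teacher = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
student = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.randn(16, 3, 32, 32)
loss = stkd_like_loss(student, teacher, x)
loss.backward()
print(float(loss))
```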
|
arxiv:2103.10047
|