In the Black-Scholes context we consider the probability distribution function (PDF) of financial returns implied by the volatility smile, and we study the relation between the decay of its tails and the fitting parameters of the smile. We show that, using a scaling law derived from data, it is possible to obtain a new fitting procedure for the volatility smile that also accounts for the exponential decay of the real PDF of returns observed in financial markets. Our study finds application in risk management, where characterizing the tails of the returns PDF plays a central role in risk estimation.
arxiv:1010.2184
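As context for the abstract above: the smile-implied return PDF it refers to is conventionally extracted via the Breeden-Litzenberger relation, q(K) = e^{rT} ∂²C/∂K², where C is the call price. A minimal numerical sketch follows; this is not the paper's fitting procedure, and the quadratic smile function and all parameter values are invented for illustration.

```python
import math

def bs_call(S, K, T, r, sigma):
    # Black-Scholes call price; standard normal CDF via math.erf
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_pdf(S, T, r, smile, K, h=1e-3):
    # Breeden-Litzenberger: q(K) = e^{rT} * d2C/dK2, with the implied
    # volatility at each strike supplied by smile(K); the second
    # derivative is approximated by a central finite difference
    c = lambda k: bs_call(S, k, T, r, smile(k))
    return math.exp(r * T) * (c(K + h) - 2.0 * c(K) + c(K - h)) / h**2

# hypothetical smile: vol rises quadratically in log-moneyness
smile = lambda K: 0.2 + 0.5 * (math.log(K / 100.0))**2
q_atm = implied_pdf(S=100.0, T=0.5, r=0.01, smile=smile, K=100.0)
```

Scanning `implied_pdf` over a strike grid yields the full implied density, whose tail decay can then be compared against the smile's fitting parameters as the abstract describes.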
The reward function is essential in reinforcement learning (RL), serving as the guiding signal that incentivizes agents to solve given tasks; however, it is also notoriously difficult to design. In many cases, only imperfect rewards are available, which inflicts substantial performance loss on RL agents. In this study, we propose a unified offline policy optimization approach, \textit{RGM (Reward Gap Minimization)}, which can smartly handle diverse types of imperfect rewards. RGM is formulated as a bi-level optimization problem: the upper layer optimizes a reward correction term that performs visitation distribution matching w.r.t. some expert data; the lower layer solves a pessimistic RL problem with the corrected rewards. By exploiting the duality of the lower layer, we derive a tractable algorithm that enables sample-based learning without any online interactions. Comprehensive experiments demonstrate that RGM achieves superior performance to existing methods under diverse settings of imperfect rewards. Further, RGM can effectively correct wrong or inconsistent rewards against expert preference and retrieve useful information from biased rewards.
arxiv:2302.01667
This paper has been withdrawn by the author(s). The material contained in the paper will be published in a substantially reorganized form; part of it is now included in math.QA/0510174.
arxiv:math/0405332
a remarkable distinction in arguments and techniques to achieve our main results compared to that of the euclidean case.
arxiv:2211.14618
Modeling environmental ecosystems is essential for effective resource management, sustainable development, and understanding complex ecological processes. However, traditional data-driven methods face challenges in capturing inherently complex and interconnected processes and are further constrained by limited observational data in many environmental applications. Foundation models, which leverage large-scale pre-training and universal representations of complex and heterogeneous data, offer transformative opportunities for capturing spatiotemporal dynamics and dependencies in environmental processes and facilitate adaptation to a broad range of applications. This survey presents a comprehensive overview of foundation model applications in environmental science, highlighting advancements in common environmental use cases including forward prediction, data generation, data assimilation, downscaling, inverse modeling, model ensembling, and decision-making across domains. We also detail the process of developing these models, covering data collection, architecture design, training, tuning, and evaluation. Through discussions of these emerging methods and their future opportunities, we aim to promote interdisciplinary collaboration that accelerates advances in machine learning for driving scientific discovery in addressing critical environmental challenges.
arxiv:2504.04280
This paper examines the problem of estimating the states, including state of charge, of battery cells connected in parallel. Previous research highlights the importance of this problem and presents multiple approaches for solving it. Algorithm scalability and observability analysis can both be challenging, particularly because the underlying pack dynamics are governed by differential algebraic equations. Our work addresses these challenges from a novel perspective that begins by inverting the causality of parallel pack dynamics, which breaks the pack model's underlying algebraic loop. This simplifies observability analysis and observer design significantly, leading to three novel contributions. First, the paper derives mathematical conditions for state observability that apply regardless of the number of battery cells and the order of their individual dynamics. Second, the paper presents an approach for grouping battery cells such that their lumped dynamics are observable. Finally, the paper presents a novel pack state estimator that achieves computational tractability by employing inverse dynamic modeling. We conclude by presenting a Monte Carlo simulation study of this estimator using experimentally parameterized models of two battery chemistries. The simulation results highlight the computational benefits of both the clustering strategy and the inverse-dynamics approach for state estimation.
arxiv:2409.19189
Social science (often rendered in the plural as the social sciences) is one of the branches of science, devoted to the study of societies and the relationships among members within those societies. The term was formerly used to refer to the field of sociology, the original "science of society", established in the 18th century. It now encompasses a wide array of additional academic disciplines, including anthropology, archaeology, economics, geography, history, linguistics, management, communication studies, psychology, culturology, and political science. The majority of positivist social scientists use methods resembling those used in the natural sciences as tools for understanding societies, and so define science in its stricter modern sense. Speculative social scientists, otherwise known as interpretivist scientists, by contrast, may use social critique or symbolic interpretation rather than constructing empirically falsifiable theories, and thus treat science in its broader sense. In modern academic practice, researchers are often eclectic, using multiple methodologies (combining both quantitative and qualitative research). To gain a deeper understanding of complex human behavior in digital environments, social science disciplines have increasingly integrated interdisciplinary approaches, big data, and computational tools. The term social research has also acquired a degree of autonomy as practitioners from various disciplines share similar goals and methods.

== History ==

The history of the social sciences began in the Age of Enlightenment after 1651, which saw a revolution within natural philosophy, changing the basic framework by which individuals understood what was scientific. Social sciences came forth from the moral philosophy of the time and were influenced by the Age of Revolutions, such as the Industrial Revolution and the French Revolution. The social sciences developed from the sciences (experimental and applied), or the systematic knowledge bases or prescriptive practices, relating to the social improvement of a group of interacting entities. The beginnings of the social sciences in the 18th century are reflected in the grand Encyclopédie of Diderot, with articles from Jean-Jacques Rousseau and other pioneers. The growth of the social sciences is also reflected in other specialized encyclopedias. The term "social science" was coined in French by Mirabeau in 1767, before becoming a distinct conceptual field in the nineteenth century. Social science was influenced by positivism, focusing on knowledge based on actual positive sense experience and avoiding the negative; metaphysical speculation was avoided. Auguste Comte used the term science sociale to describe the field, taken from the ideas of Charles Fourier; Comte also referred to the field as social physics. Following this period, five paths of development sprang forth in the social sciences, influenced by Comte in
https://en.wikipedia.org/wiki/Social_science
We investigate the propagation of a pulse field in an optomechanical system. We examine the question of advance of the pulse under the conditions of electromagnetically induced transparency in the mechanical system contained in a high-quality cavity. We show that the group delay can be controlled by the power of the coupling field. The time delay is negative, which corresponds to superluminal light, when there is strong coupling between the nano-oscillator and the cavity.
arxiv:1210.6213
We present a theory of the cavity quantum electrodynamics of the graphene cyclotron resonance. By employing a canonical transformation, we derive an effective Hamiltonian for the system comprised of two neighboring Landau levels dressed by the cavity electromagnetic field (integer quantum Hall polaritons). This generalized Dicke Hamiltonian, which contains terms that are quadratic in the electromagnetic field and respects gauge invariance, is then used to calculate thermodynamic properties of the quantum Hall polariton system. Finally, we demonstrate that the generalized Dicke description fails when the graphene sheet is heavily doped, i.e. when the Landau level spectrum of 2D massless Dirac fermions is approximately harmonic. In this case we 'integrate out' the Landau levels in the valence band and obtain an effective Hamiltonian for the entire stack of Landau levels in the conduction band, as dressed by strong light-matter interactions.
arxiv:1402.2270
We survey the highly ionized circumgalactic media (CGM) of 29 blindly selected galaxies at 0.49 < z_gal < 1.44, based on high-S/N ultraviolet spectra of z > 1 QSOs and the galaxy database from the COS Absorption Survey of Baryon Harbors (CASBaH). We detect the Ne VIII doublet in nine of the galaxies, and for gas with N(Ne VIII) > 10^13.3 cm^-2 (> 10^13.5 cm^-2), we derive a Ne VIII covering fraction f_c = 75 +15/-25 % (44 +22/-20 %) within impact parameter (rho) < 200 kpc of M_* = 10^(9.5-11.5) Msol galaxies and f_c = 70 +16/-22 % (f_c = 42 +20/-17 %) within rho < 1.5 virial radii. We estimate the mass in Ne VIII-traced gas to be M_gas(Ne VIII) > 10^9.5 Msol (Z/Zsol)^-1, or 6-20 % of the expected baryonic mass if the Ne VIII absorbers have solar metallicity. Ionizing Ne VII to Ne VIII requires 207 eV, and photons with this energy are scarce in the CGM. However, for the median halo mass and redshift of our sample, the virial temperature is close to the peak temperature for the Ne VIII ion, and the Ne VIII-bearing gas is plausibly collisionally ionized near this temperature. Moreover, we find that photoionized Ne VIII requires cool and low-density clouds that would be highly underpressured (by approximately two orders of magnitude) relative to the putative, ambient virialized medium, complicating scenarios where such clouds could survive. Thus, more complex (e.g., non-equilibrium) models may be required; this first statistical sample of Ne VIII absorber/galaxy systems will provide stringent constraints for future CGM studies.
arxiv:1810.06560
. P and Q will be the points of intersection of these two circles. Point Q is then the reflection of point P through line AB.

== Properties ==

The matrix for a reflection is orthogonal with determinant −1 and eigenvalues −1, 1, 1, ..., 1. The product of two such matrices is a special orthogonal matrix that represents a rotation. Every rotation is the result of reflecting in an even number of reflections in hyperplanes through the origin, and every improper rotation is the result of reflecting in an odd number. Thus reflections generate the orthogonal group, and this result is known as the Cartan–Dieudonné theorem. Similarly the Euclidean group, which consists of all isometries of Euclidean space, is generated by reflections in affine hyperplanes. In general, a group generated by reflections in affine hyperplanes is known as a reflection group. The finite groups generated in this way are examples of Coxeter groups.

== Reflection across a line in the plane ==

Reflection across an arbitrary line through the origin in two dimensions can be described by the following formula

Ref_l(v) = 2 (v · l)/(l · l) l − v,

where v denotes the vector being reflected, l denotes any vector in the line across which the reflection is performed, and v · l denotes the dot product of v with l. Note the formula above can also be written as

Ref_l(v) = 2 Proj_l(v) − v,

saying that a reflection of v across l is equal to 2 times the projection of v on l, minus the vector v. Reflections in a line have the eigenvalues 1 and −1.

== Reflection through a hyperplane in n dimensions ==

Given a vector v in Euclidean
https://en.wikipedia.org/wiki/Reflection_(mathematics)
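The reflection formula Ref_l(v) = 2 (v · l)/(l · l) l − v from the excerpt above translates directly into code. A minimal sketch (function name and test vectors are our own):

```python
def reflect(v, l):
    # reflect vector v across the line through the origin spanned by l:
    # Ref_l(v) = 2 (v . l)/(l . l) l - v
    dot_vl = sum(a * b for a, b in zip(v, l))
    dot_ll = sum(a * a for a in l)
    s = 2.0 * dot_vl / dot_ll   # twice the projection coefficient
    return tuple(s * a - b for a, b in zip(l, v))

# reflecting across the x-axis flips the y-coordinate
assert reflect((3.0, 4.0), (1.0, 0.0)) == (3.0, -4.0)
```

Applying `reflect` twice with the same line returns the original vector, consistent with the eigenvalues 1 and −1 stated in the excerpt.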
In this paper, we study the problem of inferring time-varying Markov random fields (MRF), where the underlying graphical model is both sparse and changes sparsely over time. Most of the existing methods for the inference of time-varying MRFs rely on regularized maximum likelihood estimation (MLE), which typically suffers from weak statistical guarantees and high computational time. Instead, we introduce a new class of constrained optimization problems for the inference of sparsely-changing MRFs. The proposed optimization problem is formulated based on the exact $\ell_0$ regularization, and can be solved in near-linear time and memory. Moreover, we show that the proposed estimator enjoys a provably small estimation error. As a special case, we derive sharp statistical guarantees for the inference of sparsely-changing Gaussian MRFs (GMRF) in the high-dimensional regime, showing that such problems can be learned with as few as one sample per time. Our proposed method is extremely efficient in practice: it can accurately estimate sparsely-changing graphical models with more than 500 million variables in less than one hour.
arxiv:2102.03585
The \emph{sparse Johnson-Lindenstrauss transform} of Kane and Nelson (SODA 2012) provides a linear dimensionality-reducing map $A \in \mathbb{R}^{m \times u}$ in $\ell_2$ that preserves distances up to distortion $1 + \varepsilon$ with probability $1 - \delta$, where $m = O(\varepsilon^{-2} \log 1/\delta)$ and each column of $A$ has $O(\varepsilon m)$ non-zero entries. The previous analyses of the sparse Johnson-Lindenstrauss transform all assumed access to an $\Omega(\log 1/\delta)$-wise independent hash function. The main contribution of this paper is a more general analysis of the sparse Johnson-Lindenstrauss transform with fewer assumptions on the hash function. We also show that the \emph{mixed tabulation hash function} of Dahlgaard, Knudsen, Rotenberg, and Thorup (FOCS 2015) satisfies the conditions of our analysis, thus giving the first analysis of a sparse Johnson-Lindenstrauss transform that works with a practical hash function.
arxiv:2305.03110
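The sparse JL map described in the abstract above can be sketched concretely: each column carries a few nonzero entries of value ±1/√s. The sketch below replaces the hash-function machinery (the actual subject of the paper) with explicit randomness, and the sparsity parameter `s` and all sizes are illustrative choices, not the paper's settings.

```python
import math
import random

def sparse_jl(u, m, s, seed=0):
    # build an m x u sparse JL matrix: each column gets exactly s nonzero
    # entries of value +-1/sqrt(s), placed at s distinct random rows
    rng = random.Random(seed)
    A = [[0.0] * u for _ in range(m)]
    for col in range(u):
        for row in rng.sample(range(m), s):
            A[row][col] = rng.choice((-1.0, 1.0)) / math.sqrt(s)
    return A

def apply_map(A, x):
    # y = A x, the dimensionality-reduced vector
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]
```

In the construction analyzed by the paper, the row positions and signs come from a hash function rather than stored randomness; the point of the result is how weak that hash function is allowed to be.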
A good neural sequence-to-sequence summarization model should have a strong encoder that can distill and memorize the important information from long input texts so that the decoder can generate salient summaries based on the encoder's memory. In this paper, we aim to improve the memorization capabilities of the encoder of a pointer-generator model by adding an additional 'closed-book' decoder without attention and pointer mechanisms. Such a decoder forces the encoder to be more selective about the information encoded in its memory state, because the decoder can't rely on the extra information provided by the attention and possibly copy modules, and hence improves the entire model. On the CNN/Daily Mail dataset, our 2-decoder model outperforms the baseline significantly in terms of ROUGE and METEOR metrics, for both cross-entropy and reinforced setups (and on human evaluation). Moreover, our model also achieves higher scores in a test-only DUC-2002 generalizability setup. We further present a memory ability test, two saliency metrics, as well as several sanity-check ablations (based on fixed-encoder, gradient-flow cut, and model capacity) to prove that the encoder of our 2-decoder model does in fact learn stronger memory representations than the baseline encoder.
arxiv:1809.04585
State-of-the-art image captioners can generate accurate sentences to describe images in a sequence-to-sequence manner without considering controllability and interpretability. This, however, is far from making image captioning widely usable, as an image can be interpreted in infinite ways depending on the target and the context at hand. Achieving controllability is important especially when the image captioner is used by different people with different ways of interpreting the images. In this paper, we introduce a novel framework for image captioning which can generate diverse descriptions by capturing the co-dependence between part-of-speech tags and semantics. Our model decouples direct dependence between successive variables. In this way, it allows the decoder to exhaustively search through the latent part-of-speech choices, while keeping decoding speed proportional to the size of the POS vocabulary. Given a control signal in the form of a sequence of part-of-speech tags, we propose a method to generate captions through a Transformer network, which predicts words based on the input part-of-speech tag sequences. Experiments on publicly available datasets show that our model significantly outperforms state-of-the-art methods on generating diverse image captions of high quality.
arxiv:2204.13324
The B anomalies, by their distinctive flavour structure, i.e. $U(2)^5$, bring a new piece to the long-standing flavour puzzle. The three-site flavour non-universal Pati-Salam (PS) model, which unifies quarks and leptons, provides, through the ratios of vacuum expectation values (VEV) acquired at different scales, a combined explanation of the charged- and neutral-current B anomalies as well as of the mass hierarchies of the Standard Model (SM). The mixings, in new, flavour non-universal gauge interactions, as well as in the Yukawa couplings, are obtained through suppressed nearest-neighbour interactions. In this three-site model context, the inverse seesaw mechanism is realised, with the minimal addition of three fermion singlets $S_L^{(i)}$ and where $U(1)_F$ fermion number is broken dynamically via new singlet scalars $\phi_i$, yielding an anarchic light neutrino mass matrix in consistency with data, despite the $U(2)^5$ flavour symmetry observed in the Yukawa matrices. A prediction of this model is Pontecorvo-Maki-Nakagawa-Sakata (PMNS) unitarity violation with a $33$ entry close to experimental bounds. The full model finds a natural 5D interpretation with three (almost equidistant) defects in a warped extra dimension, where the exponential hierarchies in VEV ratios of the 4D Lagrangian arise from O(1) differences in the 5D field bulk masses. This proceeding is based on arXiv:2012.10492.
arxiv:2109.13150
To develop a minimal model for a cell moving in a crowded environment such as tissue, we investigate the response of a liquid drop of active matter moving on a flat rigid substrate to forces applied at its boundaries. We consider two different self-propulsion mechanisms, active stresses and treadmilling polymerisation, and we investigate how the active drop motion is altered by these surface forces. We find a highly non-linear response to forces that we characterise using drop velocity, drop shape, and the traction between the drop and the substrate. Each self-propulsion mechanism gives rise to two main modes of motion: a long thin drop with zero traction in the bulk, mostly occurring under strong stretching forces, and a parabolic drop with finite traction in the bulk, mostly occurring under strong squeezing forces. In each case there is a sharp transition between parabolic and long thin drops as a function of the applied forces, and indications of drop break-up where large forces stretch the drop.
arxiv:2107.14556
The decay of any unstable quantum state can be inhibited or enhanced by carefully tailored measurements, known as the quantum Zeno effect (QZE) or anti-Zeno effect (QAZE). To date, studies of QZE (QAZE) transitions have expanded to various system-environment couplings, in which the time evolution can be suppressed (enhanced) not only by projective measurement but also through dissipation processes. However, a general criterion that could extend to arbitrary dissipation strength and periodicity is still lacking. In this Letter, we present a general framework to unify QZE-QAZE effects and parity-time (PT) symmetry breaking transitions, in which the dissipative Hamiltonian associated with the measurement effect is mapped onto a PT-symmetric non-Hermitian Hamiltonian, thus applying the PT symmetry transitions to distinguish QZE (QAZE) and their crossover behavior. As a concrete example, we show that, in a two-level system periodically coupled to a dissipative environment, QZE starts at an exceptional point (EP), which separates the PT-symmetric (PTS) phase and the PT-symmetry broken (PTB) phase, and ends at the resonance point (RP) of maximum PT-symmetry breaking, while QAZE extends over the rest of the PTB phase and the whole PTS phase. These findings reveal a hidden relation between QZE-QAZE and PTS-PTB phases in non-Hermitian quantum dynamics.
arxiv:2004.01364
We study the asymptotic properties of the number of open paths of length $n$ in an oriented $\rho$-percolation model. We show that this number is $e^{n\alpha(\rho)(1+o(1))}$ as $n \to \infty$. The exponent $\alpha$ is deterministic, it can be expressed in terms of the free energy of a polymer model, and it can be explicitly computed in some range of the parameters. Moreover, in a restricted range of the parameters, we even show that the number of such paths is $n^{-1/2} W e^{n\alpha(\rho)}(1+o(1))$ for some nondegenerate random variable $W$. We build on connections with the model of directed polymers in random environment, and we use techniques and results developed in this context.
arxiv:0707.0818
Observational and theoretical arguments increasingly suggest that the initial mass function (IMF) of stars may depend systematically on environment, yet most galaxy formation models to date assume a universal IMF. Here we investigate simulations of the formation of Milky Way analogues run with an empirically derived metallicity-dependent IMF and the moving-mesh code AREPO, in order to characterize the associated uncertainties. In particular, we compare a constant Chabrier IMF and a varying metallicity-dependent IMF in cosmological, magneto-hydrodynamical zoom-in simulations of Milky Way-sized halos. We find that the non-linear effects due to IMF variations typically have a limited impact on the morphology and the star formation histories of the formed galaxies. Our results support the view that constraints on stellar-to-halo mass ratios, feedback strength, metallicity evolution, and metallicity distributions are in part degenerate with the effects of a non-universal, metallicity-dependent IMF. Interestingly, the empirical relation we use between metallicity and the high-mass slope of the IMF does not aid the quenching process; it actually produces up to a factor of 2-3 more stellar mass if feedback is kept constant. Additionally, the enrichment history and the z = 0 metallicity distribution are significantly affected. In particular, the alpha-enhancement pattern shows a steeper dependence on iron abundance in the metallicity-dependent model, in better agreement with observational constraints.
arxiv:1710.04222
The unitary description of beam splitters (BSs) and optical parametric amplifiers (OPAs) in terms of the dynamical Lie groups $SU(2)$ and $SU(1,1)$ has a long history. Recently, an inherent duality has been proposed that relates the unitaries of both optical devices. At the physical level, this duality relates the linear nature of a lossless BS to the nonlinear parametric down-conversion (PDC) process exhibited by an OPA. Here, we argue that the duality between BS and PDC can instead be naturally interpreted by analyzing the geometrical properties of both Lie groups, an approach that explicitly connects the dynamical group description of the optical devices with the aforementioned duality. Furthermore, we show that the BS-PDC duality can be represented through tensor network diagrams, enabling the implementation of a PDC as a circuit on a standard quantum computing platform. Thus, it is feasible to simulate nonlinear processes by using single-qubit unitaries that can be implemented on currently available digital quantum processors.
arxiv:2310.20416
We present a compactified version of the 3-dimensional black hole recently found by considering extra identifications, and determine the analytical continuation of the solution beyond its coordinate singularity by extending the identifications to the extended region of the spacetime. In the extended region of the spacetime, we find a topology change and non-trivial closed timelike curves both in the ordinary 3-dimensional black hole and in the compactified one. In particular, in the case of the compactified 3-dimensional black hole, we show an example of topology change from one double torus to eight spheres with three punctures.
arxiv:gr-qc/9312011
Recent advancements in large language models empower them to follow freeform instructions, including imitating generic or specific demographic personas in conversations. We define generic personas to represent demographic groups, such as "an Asian person", whereas specific personas may take the form of specific popular Asian names like "Yumi". While the adoption of personas enriches user experiences by making dialogue systems more engaging and approachable, it also casts a shadow of potential risk by exacerbating social biases within model responses, thereby causing societal harm through interactions with users. In this paper, we systematically study "persona biases", which we define to be the sensitivity of dialogue models' harmful behaviors contingent upon the personas they adopt. We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: offensiveness, toxic continuation, regard, stereotype agreement, and toxic agreement. Additionally, we propose to investigate persona biases by experimenting with UniversalPersona, a systematically constructed persona dataset encompassing various types of both generic and specific model personas. Through benchmarking on four different models -- including Blender, ChatGPT, Alpaca, and Vicuna -- our study uncovers significant persona biases in dialogue systems. Our findings also underscore the pressing need to revisit the use of personas in dialogue agents to ensure safe application.
arxiv:2310.05280
We present the results of cosmological hydrodynamic simulations with zoom-in initial conditions, and investigate the formation of the first galaxies and their evolution towards observable galaxies at $z \sim 6$. We focus on three different galaxies which end up in halos with masses $M_h = 2.4 \times 10^{10}~h^{-1}\,M_\odot$ (Halo-10), $1.6 \times 10^{11}~h^{-1}\,M_\odot$ (Halo-11) and $0.7 \times 10^{12}~h^{-1}\,M_\odot$ (Halo-12) at z = 6. Our simulations also probe the impact of different sub-grid assumptions, i.e., SF efficiency and cosmic reionization, on SF histories in the first galaxies. We find that star formation occurs intermittently due to supernova (SN) feedback at z > 10, and then proceeds more smoothly as the halo mass grows at lower redshifts. Galactic disks are destroyed by SN feedback, while galaxies in simulations with no-feedback or lower SF efficiency models can sustain a galactic disk for long periods > 10 Myr. The expulsion of gas at the galactic center also affects the inner dark matter density profile. However, SN feedback does not seem to keep the shallow profile of dark matter for a long period. Our simulated galaxies in Halo-11 and Halo-12 reproduce the star formation rates (SFR) and stellar masses of observed Lyman-$\alpha$ emitters (LAEs) at z = 7-8 fairly well, given observational uncertainties. In addition, we investigate the effect of UV background radiation on star formation as an external feedback source, and find that earlier reionization extends the quenching time of star formation due to photo-ionization heating, but does not affect the stellar mass at z = 6.
arxiv:1704.03117
This paper addresses fault tolerant control (FTC) of large power systems (LPS) subject to sensor failure. Hiding the fault from the controller allows the nominal controller to remain in the loop. We assume specific faults that violate the observability of a subsystem, so that the faulty subsystems cannot be relied upon when estimating states. We use a new method for reconfiguration control of these faults that lead to unobservability of subsystems. The method proposes augmenting a faulty subsystem with other subsystem(s) until a new subsystem is achieved that is observable. Next, finding the best subsystems among available candidates is considered, and using structural analysis methods and the Gramian definition, a complete algorithm is proposed for FTC of LPS. The proposed approach is applied to the IEEE 14-bus test case, and interactions are considered in nonlinear form. Simulation results show that the proposed approach works as intended.
arxiv:1509.00091
This paper gives a review of the recent progress in the study of Fourier bases and Fourier frames on self-affine measures. In particular, we emphasize the new matrix analysis approach for checking the completeness of a mutually orthogonal set. This method helps us settle a long-standing conjecture that Hadamard triples generate self-affine spectral measures. It also gives us non-trivial examples of fractal measures with Fourier frames. Furthermore, a new avenue is open to investigate whether the middle-third Cantor measure admits Fourier frames.
arxiv:1602.04750
An alternative explanation of 1/f noise in manganites is suggested and discussed.
arxiv:1202.3805
The nonlocal models of peridynamics have successfully predicted fractures and deformations for a variety of materials. In contrast to local mechanics, peridynamic boundary conditions must be defined on a finite volume region outside the body. Therefore, theoretical and numerical challenges arise in order to properly formulate Dirichlet-type nonlocal boundary conditions while connecting them to their local counterparts. While a careless imposition of local boundary conditions leads to a smaller effective material stiffness close to the boundary and an artificial softening of the material, several strategies have been proposed to avoid this unphysical surface effect. In this work, we study convergence of solutions of the nonlocal state-based linear elastic model to their local counterparts as the interaction horizon vanishes, under different formulations and smoothness assumptions for nonlocal Dirichlet-type boundary conditions. Our results provide explicit rates of convergence that are sensitive to the compatibility of the nonlocal boundary data and the extension of the solution for the local model. In particular, under appropriate assumptions, constant extensions yield $\frac{1}{2}$ order convergence rates and linear extensions yield $\frac{3}{2}$ order convergence rates. With smooth extensions, these rates are improved to quadratic convergence. We illustrate the theory for any dimension $d \geq 2$ and numerically verify the convergence rates with a number of two-dimensional benchmarks, including linear patch tests, manufactured solutions, and domains with curvilinear surfaces. Numerical results show first-order convergence for constant extensions and second-order convergence for linear extensions, which suggests possible room for improvement in future convergence analysis.
arxiv:2106.13878
This paper provides a survey of methods and tools for automated code-reuse exploit generation. Such exploits use code that is already contained in a vulnerable program. The code-reuse approach allows one to exploit vulnerabilities in the presence of operating system protections that prohibit data memory execution. This paper contains a description of various code-reuse methods: return-to-libc attacks, return-oriented programming, jump-oriented programming, and others. We define fundamental terms: gadget, gadget frame, and gadget catalog. Moreover, we show that, in fact, a gadget is an instruction, and a set of gadgets defines a virtual machine. We can reduce the exploit creation problem to code generation for this virtual machine; each particular executable file defines a virtual machine instruction set. We provide a survey of methods for searching for gadgets and determining their semantics (creating a gadget catalog). These methods allow one to obtain the virtual machine instruction set. If a set of gadgets is Turing-complete, then a compiler can use a gadget catalog as a target architecture. However, some instructions may be absent, so we discuss several approaches to replacing missing instructions with multiple gadgets. An exploit generation tool can chain gadgets by pattern searching (regular expressions) or by considering gadget semantics. Furthermore, some chaining methods use genetic algorithms, while others use SMT solvers. We compare existing open-source tools and propose a testing system, rop-benchmark, that can be used to verify whether a generated chain successfully opens a shell.
arxiv:2011.07862
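As a toy illustration of the pattern-searching step the survey describes (not taken from any of the surveyed tools; the helper name and the restriction to one-byte `pop reg; ret` sequences are our simplifications), one can scan a raw x86-64 byte buffer for gadgets with a regular expression over bytes:

```python
import re

# Opcodes 0x58-0x5F are the one-byte "pop rax" ... "pop rdi" instructions;
# 0xC3 is "ret". A byte-level regex finds every "pop reg; ret" gadget.
POP_NAMES = ["rax", "rcx", "rdx", "rbx", "rsp", "rbp", "rsi", "rdi"]

def find_pop_ret_gadgets(code: bytes):
    """Return (offset, mnemonic) pairs for every 'pop reg; ret' gadget."""
    gadgets = []
    for m in re.finditer(rb"[\x58-\x5f]\xc3", code):
        reg = POP_NAMES[code[m.start()] - 0x58]
        gadgets.append((m.start(), f"pop {reg}; ret"))
    return gadgets

# Toy "executable" buffer: two NOPs, pop rdi; ret, a NOP, pop rsi; ret.
blob = b"\x90\x90\x5f\xc3\x90\x5e\xc3"
print(find_pop_ret_gadgets(blob))  # [(2, 'pop rdi; ret'), (5, 'pop rsi; ret')]
```

Real gadget finders additionally decode multi-byte instructions and search backwards from every `ret` to enumerate overlapping gadget starts; the sketch only shows the pattern-matching idea.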
Memorization in large language models (LLMs) is a growing concern. LLMs have been shown to easily reproduce parts of their training data, including copyrighted work. This is an important problem to solve, as it may violate existing copyright laws as well as the European AI Act. In this work, we propose a systematic analysis to quantify the extent of potential copyright infringements in LLMs, using European law as an example. Unlike previous work, we evaluate instruction-finetuned models in a realistic end-user scenario. Our analysis builds on a proposed threshold of 160 characters, which we borrow from the German Copyright Service Provider Act, and a fuzzy text matching algorithm to identify potentially copyright-infringing textual reproductions. The specificity of countermeasures against copyright infringement is analyzed by comparing model behavior on copyrighted and public domain data. We investigate what behaviors models show instead of producing protected text (such as refusal or hallucination) and provide a first legal assessment of these behaviors. We find that there are large differences in copyright compliance, specificity, and appropriate refusal among popular LLMs. Alpaca, GPT-4, GPT-3.5, and Luminous perform best in our comparison, with OpenGPT-X, Alpaca, and Luminous producing a particularly low absolute number of potential copyright violations. Code can be found at https://github.com/felixbmuller/llms-memorization-copyright.
arxiv:2405.18492
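A minimal sketch of how such a character-threshold reproduction check might look (our illustration, not the paper's released pipeline; the function names are invented, and the paper's fuzzy matcher is more elaborate than a longest-common-substring test):

```python
from difflib import SequenceMatcher

# Flag a model output as a potential reproduction when it shares a
# near-verbatim run of at least 160 characters with the source text
# (threshold borrowed from the abstract above).
THRESHOLD = 160

def longest_shared_run(source: str, output: str) -> int:
    """Length of the longest matching block between the two strings."""
    matcher = SequenceMatcher(None, source, output, autojunk=False)
    return matcher.find_longest_match(0, len(source), 0, len(output)).size

def potentially_infringing(source: str, output: str) -> bool:
    return longest_shared_run(source, output) >= THRESHOLD

copyrighted = "x" * 200
print(potentially_infringing(copyrighted, "intro " + "x" * 170))  # True
print(potentially_infringing(copyrighted, "x" * 40))              # False
```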
We present spatially resolved mid-infrared spectroscopy of the Class I/flat-spectrum protostellar binary system SVS20 in the Serpens cloud core. The spectra were obtained with the mid-infrared instrument T-ReCS on Gemini South. SVS20-South, the more luminous of the two sources, exhibits a mid-infrared emission spectrum peaking near 11.3 \micron, while SVS20-North exhibits a shallow amorphous silicate absorption spectrum with a peak optical depth of $\tau \sim 0.3$. After removal of the line-of-sight extinction by the common molecular envelope, the "protostar-only" spectra are found to be dominated by strong amorphous olivine emission peaking near 10 \micron. We also find evidence for emission from crystalline forsterite and enstatite associated with both SVS20-S and SVS20-N. The presence of crystalline silicates in such a young binary system indicates that the grain processing found in more evolved HAeBe and T Tauri pre-main-sequence stars likely begins at a relatively young evolutionary stage, while mass accretion is still ongoing.
arxiv:astro-ph/0504665
The solar atmosphere shows anomalous variation in temperature, from the 5500 K photosphere to the million-degree-kelvin corona. The corona itself expands into the interstellar medium as the free-streaming solar wind, which modulates and impacts near-Earth space weather. The precise source regions of different structures in the solar wind, their formation height, and the heating of the solar atmosphere are inextricably linked and unsolved problems in astrophysics. Observations suggest correlations between coronal holes (CHs), which are cool, intensity-deficit structures in the solar corona, and structures in the solar wind. Observations also suggest local plasma heating in the corona through power-law-distributed impulsive events. In this thesis, we use narrowband photometric, spectroscopic, and disc-integrated emission of the solar atmosphere, ranging from the near-ultraviolet to X-rays, along with in-situ solar wind measurements, to understand (i) the source regions of the solar wind, (ii) the underlying mechanism of solar coronal heating, and (iii) the differentiation in dynamics of CHs from the background quiet Sun (QS) regions, which do not show any significant signature of the solar wind. We leverage machine learning and numerical modeling tools to develop solar wind forecasting codes using interpretable AI, and inversion codes to infer the properties of impulsive events and to understand the differences in the thermodynamics of CH and QS regions. We finally present a unified scenario of solar wind emergence and heating in the solar atmosphere and discuss the implications of the inferences from this thesis.
arxiv:2304.01553
In this paper, we propose a new architecture for real-time anomaly detection in video data, inspired by human behavior, combining spatial and temporal analyses. This approach uses two distinct models: (i) for temporal analysis, a recurrent convolutional network (CNN + RNN) is employed, associating VGG19 and a GRU to process video sequences; (ii) spatial analysis is performed using YOLOv7 to analyze individual images. These two analyses can be carried out either in parallel, with a final prediction that combines the results of both, or in series, where the spatial analysis enriches the data before the temporal analysis. Experiments were conducted to compare these two architectural configurations with each other and to evaluate the effectiveness of our hybrid approach to video anomaly detection.
arxiv:2410.15909
Research challenges encountered across science, engineering, and economics can frequently be formulated as optimization tasks. In chemistry and materials science, recent growth in laboratory digitization and automation has sparked interest in optimization-guided autonomous discovery and closed-loop experimentation. Experiment planning strategies based on off-the-shelf optimization algorithms can be employed in fully autonomous research platforms to achieve desired experimentation goals with the minimum number of trials. However, the experiment planning strategy that is most suitable for a scientific discovery task is a priori unknown, while rigorous comparisons of different strategies are highly time- and resource-demanding. As optimization algorithms are typically benchmarked on low-dimensional synthetic functions, it is unclear how their performance would translate to noisy, higher-dimensional experimental tasks encountered in chemistry and materials science. We introduce Olympus, a software package that provides a consistent and easy-to-use framework for benchmarking optimization algorithms against realistic experiments emulated via probabilistic deep-learning models. Olympus includes a collection of experimentally derived benchmark sets from chemistry and materials science and a suite of experiment planning strategies that can be easily accessed via a user-friendly Python interface. Furthermore, Olympus facilitates the integration, testing, and sharing of custom algorithms and user-defined datasets. In brief, Olympus mitigates the barriers associated with benchmarking optimization algorithms on realistic experimental scenarios, promoting data sharing and the creation of a standard framework for evaluating the performance of experiment planning strategies.
arxiv:2010.04153
" in the second stanza; for, if the orientation was meant to be the same in the two layers, it would either not be mentioned at all or be mentioned only in the first stanza. All these inferences are made by the officiant as he recalls the formula from his memory.

== The written tradition: prose commentary ==

With the increasing complexity of mathematics and other exact sciences, both writing and computation were required. Consequently, many mathematical works began to be written down in manuscripts that were then copied and re-copied from generation to generation. India today is estimated to have about thirty million manuscripts, the largest body of handwritten reading material anywhere in the world. The literate culture of Indian science goes back to at least the fifth century BC ... as is shown by the elements of Mesopotamian omen literature and astronomy that entered India at that time and (were) definitely not ... preserved orally.

The earliest mathematical prose commentary was that on the Aryabhatiya (written 499 CE), a work on astronomy and mathematics. The mathematical portion of the Aryabhatiya was composed of 33 sutras (in verse form) consisting of mathematical statements or rules, but without any proofs. However, according to Hayashi, "this does not necessarily mean that their authors did not prove them. It was probably a matter of style of exposition." From the time of Bhaskara I (600 CE onwards), prose commentaries increasingly began to include some derivations (upapatti). Bhaskara I's commentary on the Aryabhatiya had the following structure:

Rule ('sutra') in verse by Aryabhata
Commentary by Bhaskara I, consisting of:
  Elucidation of rule (derivations were still rare then, but became more common later)
  Example (uddesaka), usually in verse
  Setting (nyasa/sthapana) of the numerical data
  Working (karana) of the solution
  Verification (pratyayakarana, literally "to make conviction") of the answer

These became rare by the 13th century, derivations or proofs being favoured by then. Typically, for any mathematical topic, students in ancient India first memorised the sutras, which, as explained earlier, were "deliberately inadequate" in explanatory details (in order to pithily convey the bare-bone mathematical rules). The students then worked through the topics of the prose commentary by writing
https://en.wikipedia.org/wiki/Indian_mathematics
We compare three notions of effectiveness on uncountable structures. The first notion is that of a $\real$-computable structure, based on a model of computation proposed by Blum, Shub, and Smale, which uses full-precision real arithmetic. The second notion is that of an $f$-parameterizable structure, defined by Morozov and based on Mal'tsev's notion of a constructive structure. The third is $\Sigma$-definability over $HF(\real)$, defined by Ershov as a generalization of the observation that the computably enumerable sets are exactly those $\Sigma_1$-definable in $HF(\mathbb{N})$. We show that every $\real$-computable structure has an $f$-parameterization, but that the expansion of the real field by the exponential function is $f$-parameterizable but not $\real$-computable. We also show that the structures with $\real$-computable copies are exactly the structures with copies $\Sigma$-definable over $HF(\real)$. One consequence of this equivalence is a method of approximating certain $\real$-computable structures by Turing computable structures.
arxiv:0803.3073
The D4M tool is used by hundreds of researchers to perform complex analytics on unstructured data. Over the past few years, the D4M toolbox has evolved to support connectivity with a variety of database engines, graph analytics in the Apache Accumulo database, and an implementation in the Julia programming language. In this article, we describe some of our latest additions to the D4M toolbox and our upcoming D4M 3.0 release.
arxiv:1702.03253
In the Split to Block Vertex Deletion and Split to Threshold Vertex Deletion problems, the input is a split graph $G$ and an integer $k$, and the goal is to decide whether there is a set $S$ of at most $k$ vertices such that $G - S$ is a block graph or a threshold graph, respectively. In this paper we give algorithms for these problems whose running times are $O^*(2.076^k)$ and $O^*(2.733^k)$, respectively.
arxiv:1906.10012
We show that the proof nets introduced in [Hughes & van Glabbeek 2003, 2005] for MALL (multiplicative additive linear logic, without units) identify cut-free proofs modulo rule commutation: two cut-free proofs translate to the same proof net if and only if one can be obtained from the other by a succession of rule commutations. This result holds with and without the mix rule, and we extend it to the case with cut.
arxiv:1609.04693
The current leading paradigm for temporal information extraction from text consists of three phases: (1) recognition of events and temporal expressions, (2) recognition of temporal relations among them, and (3) timeline construction from the temporal relations. In contrast to the first two phases, the last phase, timeline construction, has received little attention and is the focus of this work. In this paper, we propose a new method to construct a linear timeline from a set of (extracted) temporal relations. More importantly, we propose a novel paradigm in which we directly predict start and end points for events from the text, constituting a timeline without going through the intermediate step of predicting temporal relations as in earlier work. Within this paradigm, we propose two models that predict in linear complexity and a new training loss using TimeML-style annotations, yielding promising results.
arxiv:1808.09401
We study the parallel transport of modular Hamiltonians encoding the entanglement properties of a state. In the case of 2d CFT, we consider a change of state through the action of a suitable diffeomorphism on the circle: one that diagonalizes the adjoint action of the modular Hamiltonian. These vector fields exhibit kinks at the interval boundary; thus, together with their central extension, they differ from the usual elements of the Virasoro algebra. The Berry curvature associated with state-changing parallel transport is the Kirillov-Kostant symplectic form on an associated coadjoint orbit, one which differs appreciably from known Virasoro orbits. We find that the boundary parallel transport process computes a bulk symplectic form for a Euclidean geometry obtained from the backreaction of a cosmic brane, with Dirichlet boundary conditions at the location of the brane. We propose that this gives a reasonable definition for the symplectic form on an entanglement wedge.
arxiv:2111.05345
In this paper, we refine the framework of Arveson's version of the Gauss-Bonnet-Chern formula by proving that a submodule of the Drury-Arveson module being locally algebraic is equivalent to Arveson's version of the Gauss-Bonnet-Chern formula holding true for the associated quotient module. Moreover, we establish the asymptotic Arveson curvature invariant and the asymptotic Euler characteristic for contractive Hilbert modules over polynomial rings in infinitely many variables, and obtain the infinitely-many-variables analogue of Arveson's version of the Gauss-Bonnet-Chern formula. Finally, we solve the finite defect problem for submodules of the Drury-Arveson module $H^2$ in infinitely many variables by proving that $H^2$ has no nontrivial submodules of finite rank.
arxiv:2501.04919
We show that, contrary to a claim made in arXiv:1011.0645, the von Neumann-Wigner bound states that lie in the continuum of the scattering states are fundamentally different from Naimark's spectral singularities.
arxiv:1207.2278
Copulas are used to construct joint distributions in many areas. In some problems, it is necessary to deal with correlation structures that are more complicated than those of the commonly known copulas. A finite-order multivariate Hermite polynomial expansion, as an approximation of a joint density function, can handle complex correlation structures. However, it does not construct copulas, because the density function can take negative values. In this study, we propose a method to construct a copula based on a finite sum of multivariate Hermite polynomial expansions by applying corrections to the joint density function. Furthermore, we apply this copula to estimate the volatility smile of cross currency pairs in the foreign exchange option market. This method can easily reproduce the volatility smile of cross currency pairs by appropriately adjusting the parameters, following the daily volatility fluctuations even when the higher-order parameters are fixed. In the numerical experiments, we compare the estimation results for the volatility smile of EUR-JPY with those of USD-JPY and EUR-USD for the proposed and other copulas, and show the validity of the proposed copula.
arxiv:2301.10044
A selection of results from the H1 and ZEUS experiments at HERA is reviewed, particularly in the areas of deep inelastic scattering and diffraction. Quantum chromodynamics gives a good explanation of these data down to surprisingly low values of the four-momentum transfer, $Q^2$. Data at smaller $Q^2$ can be described by Regge models as well as by dipole models including parton-saturation effects. The latter can also give a unified description of many features of the diffractive data.
arxiv:hep-ex/0211016
Intelligent automation supports us against cyclones, droughts, and seismic events through recent technology advancements. Algorithmic learning has advanced fields like neuroscience, genetics, and human-computer interaction, and time-series data boosts this progress. Challenges persist in adopting these approaches in traditional fields, and neural networks face comprehension and bias issues. AI's expansion across scientific areas is due to adaptable descriptors and combinatorial argumentation. This article focuses on modeling forest loss using the Vanya model, incorporating prey-predator dynamics. Vanya predicts forest cover, demonstrated on Amazon rainforest data against other forecasters such as Long Short-Term Memory, N-BEATS, and RCN.
arxiv:2308.06471
The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9-billion-parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches to mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations.
arxiv:2305.16264
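The qualitative shape of such a repeated-data law can be sketched as follows. The functional form and the saturation constant `r_star` below are illustrative assumptions, not the fitted values from the paper; the only property the sketch encodes is the abstract's finding that early epochs of repetition are nearly as valuable as unique data while later ones add almost nothing:

```python
import math

# Repeated epochs contribute a decaying number of "effective" unique tokens:
# the contribution of repetitions saturates exponentially at r_star extra
# passes' worth of data.
def effective_tokens(unique_tokens: float, epochs: float, r_star: float = 15.0) -> float:
    """Effective unique-data equivalent of `epochs` passes over `unique_tokens`."""
    repetitions = epochs - 1.0  # passes beyond the first
    return unique_tokens * (1.0 + r_star * (1.0 - math.exp(-repetitions / r_star)))

print(effective_tokens(100e9, 1) / 1e9)   # 1 epoch: exactly the unique data
print(effective_tokens(100e9, 4) / 1e9)   # 4 epochs: still close to 4x
print(effective_tokens(100e9, 40) / 1e9)  # 40 epochs: far below 40x
```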
The photon transfer curve (PTC, variance vs. signal level) is a commonly used and effective tool for characterizing CCD performance. It is theoretically linear in the range where photon shot noise dominates, and its slope is used to derive the gain of the CCD. However, recent research on different CCDs has revealed that the variance progressively drops at high signal levels, while the linearity shown by signal versus exposure time remains excellent and unaffected. On the other hand, bright stars are found to exhibit a fatter point spread function (PSF). Both the nonlinear PTC and the brighter-fatter effect are regarded as the result of the spreading of charges between pixels, an interaction process that increases with signal level. In this work we investigate the nonlinear PTC based on images taken with an STA1600FT CCD camera, whose PTC starts to become nonlinear at about 1/3 full well. To explain the phenomenon, we present a model to characterize the charge-sharing PSF. This signal-dependent PSF can be derived from flat-field frames and allows us to quantify the effects on photometry and on the measured shapes of stars. This effect is critical for projects requiring accurate photometry and shape parameters.
arxiv:1407.8280
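The standard PTC gain estimate mentioned above (in the shot-noise-limited regime, variance in ADU equals signal in ADU divided by the gain, so the inverse slope of the PTC gives the gain) can be illustrated with a small simulation. The numbers are arbitrary and this is not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
true_gain = 2.5  # e- per ADU, an assumed value for the simulation

signal_adu, variance_adu = [], []
for mean_electrons in [1e3, 5e3, 1e4, 5e4, 1e5]:
    # Poisson photon shot noise in electrons, converted to ADU.
    frames = rng.poisson(mean_electrons, size=100_000) / true_gain
    signal_adu.append(frames.mean())
    variance_adu.append(frames.var())

# Linear fit of variance vs. signal: slope = 1/gain.
slope = np.polyfit(signal_adu, variance_adu, 1)[0]
print(f"estimated gain = {1.0 / slope:.2f} e-/ADU")  # close to 2.5
```

The measured drop of variance at high signal levels described in the abstract would appear here as the variance points falling below this fitted line near full well.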
We report on La0.67Sr0.33MnO3 single-crystal manganite thin films in interaction with a gold capping layer. With respect to uncoated manganite layers of the same thickness, Au-capped 4 nm-thick manganite films reveal a dramatic reduction (about 185 K) of the Curie temperature Tc and a lower saturation low-temperature magnetization M0. A sizeable Tc reduction (about 60 K) is observed even when an inert SrTiO3 layer is inserted between the gold film and the 4 nm-thick manganite layer, suggesting that this effect might have an electrostatic origin.
arxiv:0706.2688
Model-independent arguments following from the covariant entropy principle imply that causal diamonds in the very early universe were entirely filled with a single equilibrated system with finite entropy. A universe where this condition persists forever has no localized excitations. Our own universe appears to be headed toward such a state: within a few hundred times its current age it will approach a state where our Local Group of galaxies sits in empty de Sitter space. Eventually, the Local Group will collapse into a black hole, which evaporates. Localized excitations in de Sitter space are low-entropy constrained states of the vacuum ensemble. The origin of these constraints must lie in the early universe: the apparent horizon must expand after some initial period, in a constrained state that is the origin of all localized excitations in the universe. We argue that in global FRW coordinates this corresponds to slow-roll inflation that ends in a dilute gas of tiny black holes, with mass determined by the inflationary scale. We then review arguments that these black holes can account for the hot big bang, baryogenesis, a distinctive pattern of CMB fluctuations, and possibly primordial black hole dark matter consisting of larger black holes that survive until the matter-dominated era. The more complicated question of whether these small black holes can evolve in a way that is consistent with all observational constraints requires computer simulations that have not yet been done.
arxiv:2109.05571
We present an algorithm that decides whether a finitely generated linear group over an infinite field is solvable-by-finite: a computationally effective version of the Tits alternative. We also give algorithms to decide whether the group is nilpotent-by-finite, abelian-by-finite, or central-by-finite. Our algorithms have been implemented in Magma and are publicly available.
arxiv:1905.05234
We begin by summarizing the relevance and importance of inductive analytics based on the geometry and topology of data and information. Contemporary issues are then discussed, including how sampling data for representativity is increasingly to be questioned. While we can always avail of analytics from a "bag of tools and techniques" in the application of machine learning and predictive analytics, we nonetheless present the case for a Bourdieu- and Benzécri-based science of data, as follows: to construct bridges between data sources, position-taking, and decision-making. There is a summary presentation of a few case studies, illustrating and exemplifying application domains.
arxiv:1705.08503
Spatio-temporal control of ultrafast plasmon resonances has gained research interest in recent years because of its tremendous implications for nonlinear optics and ultrafast quantum technology. In particular, the lifetime of ultrashort plasmon oscillations has become a debated subject in recent experimental and theoretical studies, in view of future challenges concerning their effective employment in the vast applications of the plasmonic industry. Here, we examined the temporal properties of nonlinear plasmonic modes in metal nanostructures by coupling them with quantum objects in the weak-coupling regime, in order to distinguish them from the fundamental plasmonic mode. First, we present an analytical description of the nonlinear ultrafast dynamics of localized surface plasmon resonances when the second-harmonic plasmon mode interacts with a long-lived dark mode or quantum emitter. The coupled plasmonic system is then realized in two different ways to control the lifetime of the second-harmonic mode, by coupling (i) the driven mode to the dark mode (or a long-lifetime quantum emitter), or (ii) the mode itself to the dark mode (or a long-lifetime quantum emitter). The driven-dissipative dynamics are solved through a numerical technique governing the spatial and temporal changes in the second-harmonic plasmonic response supported by the AuNP. Finally, the lifetime enhancement of the nonlinear plasmon mode is demonstrated by performing FDTD simulations for a nonlinear plasmonic system of Au nanoparticles coupled with a long-lifetime quantum emitter.
arxiv:2108.00251
A generalized fractional derivative (GFD) definition is proposed in this work. For a differentiable function that can be expanded in a Taylor series, we show that $D^{\alpha} D^{\beta} f(t) = D^{\alpha+\beta} f(t)$. The GFD is applied to some functions, for which we find that the GFD coincides with the results of the Caputo and Riemann-Liouville fractional derivatives. The solutions of the Riccati fractional differential equation are simply obtained via the GFD. A comparison with other definitions is also discussed. The results show that the definition proposed in this work gives better accuracy than the commonly known conformable derivative definition. Therefore, the GFD has some advantages in comparison with other definitions, providing a new path toward simple analytical solutions of many problems in the context of fractional calculus.
arxiv:2108.06354
time.
arxiv:1404.4506
We present a derivation and experimental implementation of a dimension-dependent contextuality inequality to certify both the quantumness and the dimensionality of a given system. Existing methods for certifying the dimension of a quantum system can be cheated by using larger classical systems, creating a potential loophole in these benchmarks. Our approach uses contextuality inequalities that cannot be violated by classical systems, thus closing this loophole. We validate the framework experimentally with photons, observing violations of a CHSH-based contextuality inequality and surpassing the qutrit bound of the CGLMP4-based contextuality inequality. These results show that contextuality can be used for noise-robust tests of the number of qubits.
arxiv:2412.09659
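As background to the CHSH-based inequality mentioned above, a brute-force enumeration confirms the classical (non-contextual) bound of 2 for deterministic strategies with outcomes in {-1, +1}, while quantum mechanics allows values up to $2\sqrt{2} \approx 2.83$. This is a generic illustration, not the paper's experimental analysis:

```python
from itertools import product

def chsh_value(a0: int, a1: int, b0: int, b1: int) -> int:
    """CHSH combination <A0B0> + <A0B1> + <A1B0> - <A1B1> for fixed +-1 outcomes."""
    return a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1

# Enumerate all 16 deterministic assignments of +-1 outcomes.
best = max(
    chsh_value(a0, a1, b0, b1)
    for a0, a1, b0, b1 in product([-1, 1], repeat=4)
)
print(best)  # 2 -- the classical bound; any observed value above 2 rules out
             # a classical (non-contextual) model
```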
Based on first-principles calculations, we investigated the topological transport properties of Mn$_3$GaN with coplanar noncollinear magnetic structures. The intrinsic anomalous Hall conductivity (IAHC) displays a significant dependence on the in-plane magnetization direction between the $\Gamma_{5g}$ and $\Gamma_{4g}$ magnetic configurations, where a large anomalous Nernst effect (ANE) can be induced by tailoring the magnetization direction. Moreover, we observed a strong piezospintronic effect in Mn$_3$GaN, where a large IAHC can be induced by moderate epitaxial strain. Symmetry analysis reveals that in both cases the nonzero IAHC originates from the spin-orbit coupling rather than from the noncollinear magnetic configurations.
arxiv:1905.11798
Participatory design (PD) in human-robot interaction (HRI) typically remains limited to the early phases of development, with subsequent robot behaviours then being hardcoded by engineers or utilised in Wizard-of-Oz (WoZ) systems that rarely achieve autonomy. We present LEADOR (Led-by-Experts Automation and Design of Robots), an end-to-end PD methodology for domain-expert co-design, automation, and evaluation of social robots. LEADOR starts with typical PD to co-design the interaction specifications and the state and action space of the robot. It then replaces traditional offline programming or WoZ with an in-situ, online teaching phase where the domain expert can live-program or teach the robot how to behave while being embedded in the interaction context. We believe that this live teaching can be best achieved by adding a learning component to a WoZ setup, to capture experts' implicit knowledge as they intuitively respond to the dynamics of the situation. The robot progressively learns an appropriate, expert-approved policy, ultimately leading to full autonomy, even in sensitive and/or ill-defined environments. However, LEADOR is agnostic to the exact technical approach used to facilitate this learning process. The extensive inclusion of the domain expert(s) in robot design represents established responsible innovation practice, lending credibility to the system both during the teaching phase and when operating autonomously. The combination of this expert inclusion with the focus on in-situ development also means that LEADOR supports a mutual-shaping approach to social robotics. We draw on two previously published foundational works from which this (generalisable) methodology has been derived in order to demonstrate the feasibility and worth of this approach, provide concrete examples of its application, and identify limitations and opportunities when applying this framework in new environments.
arxiv:2105.01910
In this article, we propose a concise theoretical framework based on mixed field susceptibilities to describe the decay of magnetic dipoles induced by non-magnetic nanostructures. This approach is first illustrated in simple cases in which analytical expressions for the decay rate can be obtained. We then show that a more refined numerical implementation of this formalism, involving a volume discretization and the computation of a generalized propagator, can predict the dynamics of magnetic dipoles in the vicinity of nanostructures of arbitrary geometries. We finally demonstrate the versatility of this numerical method by coupling it to an evolutionary optimization algorithm. In this way we predict a structure geometry that maximally promotes the decay of magnetic transitions with respect to electric emitters.
arxiv:1707.07006
The traditional image captioning task uses generic reference captions to provide textual information about images. Different user populations, however, will care about different visual aspects of images. In this paper, we propose a new task, Captioning with a Purpose (CapWAP). Our goal is to develop systems that can be tailored to be useful for the information needs of an intended population, rather than merely providing generic information about an image. In this task, we use question-answer (QA) pairs, a natural expression of information need, from users, instead of reference captions, for both training and post-inference evaluation. We show that it is possible to use reinforcement learning to directly optimize for the intended information need, by rewarding outputs that allow a question answering model to provide correct answers to sampled user questions. We convert several visual question answering datasets into CapWAP datasets, and demonstrate that under a variety of scenarios our purposeful captioning system learns to anticipate and fulfill specific information needs better than its generic counterparts, as measured by QA performance on user questions from unseen images when using the caption alone as context.
arxiv:2011.04264
we study approximations of evolving probability measures by an interacting particle system. the particle system dynamics is a combination of independent markov chain moves and importance sampling / resampling steps. under global regularity conditions, we derive non - asymptotic error bounds for the particle system approximation. in a few simple examples, including high dimensional product measures, bounds with explicit constants of feasible size are obtained. our main motivation is applications to sequential mcmc methods for monte carlo integral estimation.
arxiv:1010.1696
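the particle dynamics described in the abstract, alternating importance weighting, resampling, and independent mcmc moves, can be sketched in a few lines. this is an illustrative toy (the target, kernel, and particle counts below are assumptions for the example, not from the paper):

```python
import math
import random

def evolve_particles(particles, log_weight_fn, mcmc_kernel, n_mcmc=5):
    # importance sampling: weight each particle by the (log) target ratio
    logw = [log_weight_fn(x) for x in particles]
    m = max(logw)
    w = [math.exp(l - m) for l in logw]
    # resampling: draw a new population proportionally to the weights
    resampled = random.choices(particles, weights=w, k=len(particles))
    # independent markov chain moves rejuvenate the resampled population
    return [mcmc_kernel(x, n_mcmc) for x in resampled]

def metropolis_std_normal(x, n_steps, step=1.0):
    # random-walk metropolis kernel targeting a standard normal
    for _ in range(n_steps):
        y = x + random.gauss(0.0, step)
        if random.random() < math.exp(min(0.0, (x * x - y * y) / 2.0)):
            x = y
    return x

random.seed(0)
# start from n(0, 9) and evolve toward n(0, 1); the log-weight is the
# log ratio of the two gaussian densities (up to a constant)
pop = [random.gauss(0.0, 3.0) for _ in range(4000)]
pop = evolve_particles(pop, lambda x: -x * x / 2.0 + x * x / 18.0,
                       metropolis_std_normal)
empirical_var = sum(x * x for x in pop) / len(pop)  # should be near 1
```

the empirical variance of the evolved population lands close to the target value of 1, illustrating one reweight-resample-move step of the scheme.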
in domain generalization, multiple labeled non - independent and non - identically distributed source domains are available during training while neither the data nor the labels of target domains are. currently, learning so - called domain invariant representations ( dirs ) is the prevalent approach to domain generalization. in this work, we define dirs employed by existing works in probabilistic terms and show that by learning dirs, overly strict requirements are imposed concerning the invariance. particularly, dirs aim to perfectly align representations of different domains, i. e. their input distributions. this is, however, not necessary for good generalization to a target domain and may even dispose of valuable classification information. we propose to learn so - called hypothesis invariant representations ( hirs ), which relax the invariance assumptions by merely aligning posteriors, instead of aligning representations. we report experimental results on public domain generalization datasets to show that learning hirs is more effective than learning dirs. in fact, our approach can even compete with approaches using prior knowledge about domains.
arxiv:2010.07591
we investigate the possibility to detect primordial non - gaussianity by analysing the bulk of the probability distribution function ( pdf ) of late - time cosmic density fluctuations. for this purpose we devise a new method to predict the impact of general non - gaussian initial conditions on the late - time density pdf. at redshift $ z = 1 $ and for a smoothing scale of 30mpc / $ h $ our predictions agree with the high - resolution quijote n - body simulations to $ \ sim 0. 2 \ % $ precision. this is within cosmic variance of a $ \ sim 100 ( \ mathrm { gpc } / h ) ^ 3 $ survey volume. when restricting to this 30mpc / $ h $ smoothing scale and to mildly non - linear densities ( $ \ delta [ 30 \ mathrm { mpc } / h ] \ in [ - 0. 3, 0. 4 ] $ ) and also marginalizing over potential ignorance of the amplitude of the non - linear power spectrum an analysis of the pdf for such a survey volume can still measure the amplitude of different primordial bispectrum shapes to an accuracy of \ smash { $ \ delta f _ { \ mathrm { nl } } ^ { \ mathrm { loc } } = \ pm 7. 4 \, \ \ delta f _ { \ mathrm { nl } } ^ { \ mathrm { equi } } = \ pm 22. 0 \, \ \ delta f _ { \ mathrm { nl } } ^ { \ mathrm { ortho } } = \ pm 46. 0 $ }. when pushing to smaller scales and assuming a joint analysis of the pdf with smoothing radii of 30mpc / $ h $ and 15mpc / $ h $ ( $ \ delta [ 15 \ mathrm { mpc } / h ] \ in [ - 0. 4, 0. 5 ] $ ) this improves to \ smash { $ \ delta f _ { \ mathrm { nl } } ^ { \ mathrm { loc } } = \ pm 3. 3 \, \ \ delta f _ { \ mathrm { nl } } ^ { \ mathrm { equi } } = \ pm 11. 0 \, \ \ delta f _ { \ mathrm { nl } } ^ { \ mathrm { ortho } } = \ pm 17. 0 \ $ } - even when marginalizing over the non
arxiv:1912.06621
how and where cosmic rays are produced, and how they diffuse through various turbulent media, represent fundamental problems in astrophysics with far reaching implications, both in terms of our theoretical understanding of high - energy processes in the milky way and beyond, and the successful interpretation of space - based and ground - based gev and tev observations. for example, recent and ongoing detections, e. g., by fermi ( in space ) and hess ( in namibia ), of $ \ gamma $ - rays produced in regions of dense molecular gas hold important clues for both processes. in this paper, we carry out a comprehensive numerical investigation of relativistic particle acceleration and transport through turbulent magnetized environments in order to derive broadly useful scaling laws for the energy diffusion coefficients.
arxiv:1402.5469
in this note we discuss trees similar to the calkin - wilf tree, a binary tree that enumerates all positive rational numbers in a simple way. the original construction of calkin and wilf is reformulated in a more algebraic language, and an elementary application of methods from analytic number theory gives restrictions on possible analogues.
arxiv:1201.1851
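the calkin - wilf tree referenced above has a compact standard construction; the sketch below shows that construction (not the note's algebraic reformulation): the root is 1/1, and the node a/b has children a/(a+b) and (a+b)/b, so a breadth-first traversal lists every positive rational exactly once.

```python
from collections import deque
from fractions import Fraction

def calkin_wilf_bfs(n):
    # first n nodes of the calkin-wilf tree in breadth-first order;
    # every positive rational appears exactly once, in lowest terms
    out, queue = [], deque([Fraction(1, 1)])
    while len(out) < n:
        q = queue.popleft()
        out.append(q)
        a, b = q.numerator, q.denominator
        queue.append(Fraction(a, a + b))   # left child a/(a+b)
        queue.append(Fraction(a + b, b))   # right child (a+b)/b
    return out

seq = calkin_wilf_bfs(7)  # 1, 1/2, 2, 1/3, 3/2, 2/3, 3
```

using `Fraction` keeps every entry automatically in lowest terms, which is part of what makes the enumeration work.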
surface acoustic waves on piezoelectric substrates can be used to investigate the dynamic conductivity of thin films in a non - contact and very sensitive way, especially at low conductivities. here, we report on such surface acoustic wave studies to characterize thin manganite films like la0. 67ca0. 33mno3, exhibiting a jahn - teller effect with a strong electron - phonon interaction and a metal - insulator transition at high temperatures. we report on the deposition of la0. 67ca0. 33mno3 on piezoelectric substrates ( linbo3 in different crystal cuts ) employing a pulsed laser deposition technique. the structural qualities of the thin films are examined by x - ray diffraction, scanning electron microscope and energy dispersive x - ray spectroscopy. for the electrical characterization, we employ the surface acoustic wave technique, accompanied by conventional dc - resistance measurements for comparison.
arxiv:cond-mat/0505727
reproducibility is a key aspect for scientific advancement across disciplines, and reducing barriers for open science is a focus area for the theme of interspeech 2023. availability of source code is one of the indicators that facilitates reproducibility. however, less is known about the rates of reproducibility at interspeech conferences in comparison to other conferences in the field. in order to fill this gap, we have surveyed 27,717 papers at seven conferences across speech and language processing disciplines. we find that despite having a close number of accepted papers to the other conferences, interspeech has up to 40 % less source code availability. in addition to reporting the difficulties we have encountered during our research, we also provide recommendations and possible directions to increase reproducibility for further studies.
arxiv:2306.10033
photoswitchable molecules display two or more isomeric forms that may be accessed using light. separating the electronic absorption bands of these isomers is key to selectively addressing a specific isomer and achieving high photostationary states, whilst overall red - shifting the absorption bands serves to limit material damage due to uv - exposure and increases penetration depth in photopharmacological applications. engineering these properties into a system through synthetic design, however, remains a challenge. here, we present a data - driven discovery pipeline for molecular photoswitches underpinned by dataset curation and multitask learning with gaussian processes. in the prediction of electronic transition wavelengths, we demonstrate that a multioutput gaussian process ( mogp ) trained using labels from four photoswitch transition wavelengths yields the strongest predictive performance relative to single - task models as well as operationally outperforming time - dependent density functional theory ( td - dft ) in terms of the wall - clock time for prediction. we validate our proposed approach experimentally by screening a library of commercially available photoswitchable molecules. through this screen, we identified several motifs that displayed separated electronic absorption bands of their isomers, exhibited red - shifted absorptions, and are suited for information transfer and photopharmacological applications. our curated dataset, code, as well as all models are made available at https : / / github. com / ryan - rhys / the - photoswitch - dataset
arxiv:2008.03226
the data from ground based gravitational - wave detectors such as advanced ligo and virgo must be calibrated to convert the digital output of photodetectors into a relative displacement of the test masses in the detectors, producing the quantity of interest for inference of astrophysical gravitational wave sources. both statistical uncertainties and systematic errors are associated with the calibration process, which would in turn affect the analysis of detected sources, if not accounted for. currently, source characterization algorithms either entirely neglect the possibility of calibration uncertainties or account for them in a way that does not use knowledge of the calibration process itself. we present physical, a new approach to account for calibration errors during the source characterization step, which directly uses all the information available about the instrument calibration process. rather than modeling the overall detector ' s response function, we consider the individual components that contribute to the response. we implement this method and apply it to the compact binaries detected by ligo and virgo during the second observation run, as well as to simulated binary neutron stars for which the sky position and distance are known exactly. we find that the physical model performs as well as the method currently used within the ligo - virgo collaboration, but additionally it enables improving the measurement of specific components of the instrument control through astrophysical calibration.
arxiv:2009.10192
we summarize previous work on $ \ bar b \ bar bud $ four - quark systems in the born - oppenheimer approximation and discuss first steps towards an extension to the theoretically more challenging $ b \ bar b u \ bar d $ system. strategies to identify a possibly existing $ b \ bar b u \ bar d $ bound state are discussed and first numerical results are presented.
arxiv:1709.03306
garcia and campuzano claim to have found a previously overlooked family of stationary and axisymmetric conformally flat spacetimes, contradicting an old theorem of collinson. in both these papers it is tacitly assumed that the isometry group is orthogonally transitive. under the same assumption, we point out here that collinson ' s result still holds if one demands the existence of an axis of symmetry on which the axial killing vector vanishes. on the other hand if the assumption of orthogonal transitivity is dropped, a wider class of metrics is allowed and it is possible to find explicit counterexamples to collinson ' s result.
arxiv:gr-qc/0305091
this work proposes the autohvsr algorithm that allows for fully - automated processing of horizontal - to - vertical spectral ratio ( hvsr ) measurements, including those with zero, one, or multiple clear resonances. the autohvsr algorithm integrates robust signal processing and computational methods with state - of - the - art machine - learning models trained using a diverse dataset of 1109 hvsr measurements. the autohvsr algorithm demonstrates excellent performance by correctly determining the number of hvsr resonances for 1099 of the 1109 hvsr measurements ( > 99 % ) and predicting the mean resonant frequency of the correctly identified resonances with a root mean square error ( rmse ) of 0. 05 hz. furthermore, the autohvsr algorithm was able to produce these predictions in 13 minutes ( including hvsr processing time ) compared to the 30 hours required for traditional processing ( a speed up of 138 ). the autohvsr algorithm is further demonstrated on a challenging dataset from canterbury, new zealand that included many hvsr curves with multiple and / or ambiguous resonances. the autohvsr algorithm was capable of correctly determining the number of hvsr resonances for 113 of the 129 hvsr measurements ( > 87 % ) and predicting the mean resonant frequency of the correctly identified resonances with a rmse of 0. 10 hz. the autohvsr algorithm produced these predictions under 2 minutes ( including hvsr processing time ) compared to the 4 hours required for traditional processing ( a speed up of 120 ). finally, while the autohvsr algorithm was developed using microtremor measurements where the horizontal components were combined using the geometric mean, it is shown to extend without modification to microtremor hvsr measurements where the two horizontal components are rotated azimuthally and to hvsr measurements from earthquake recordings. the autohvsr algorithm has been made publicly available in v0. 3. 0 of hvsrweb.
arxiv:2304.05559
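the core hvsr quantity named in the abstract, the geometric - mean combination of the two horizontal spectra divided by the vertical, is simple to sketch. this is only the basic curve computation with a naive peak pick; the actual autohvsr algorithm layers signal processing and learned models on top of it, and the synthetic spectra here are illustrative assumptions:

```python
import math

def hvsr_curve(freqs, h1, h2, v):
    # combine the two horizontal spectra by geometric mean, then
    # divide by the vertical spectrum to obtain the hvsr curve
    curve = [math.sqrt(a * b) / c for a, b, c in zip(h1, h2, v)]
    # naive single-peak pick; autohvsr instead decides how many
    # resonances (zero, one, or several) are actually present
    f0 = max(zip(curve, freqs))[1]
    return curve, f0

# synthetic example: horizontals resonate near 2.0 hz, vertical is flat
freqs = [0.1 * k for k in range(1, 101)]
h1 = [1.0 + 4.0 * math.exp(-((f - 2.0) ** 2)) for f in freqs]
h2 = h1[:]
v = [1.0] * len(freqs)
curve, f0 = hvsr_curve(freqs, h1, h2, v)  # f0 is close to 2.0 hz
```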
if tan ( beta ) is large, down - type quark mass matrices and yukawa couplings cannot be simultaneously diagonalized, and flavour violating couplings of the neutral higgs bosons are induced at the 1 - loop level. these couplings lead to higgs - mediated contributions to the decays b _ s - > mu + mu - and b _ d - > tau + tau -, at a level that might be of interest for the current tevatron run, or possibly, at b - factories. we evaluate the branching ratios for these decays within the framework of minimal gravity -, gauge - and anomaly - mediated susy breaking models, and also in su ( 5 ) supergravity models with non - universal gaugino mass parameters at the gut scale. we find that the contribution from gluino loops, which seems to have been left out in recent phenomenological analyses, is significant. we explore how the branching fraction varies in these models, emphasizing parameter regions consistent with other observations.
arxiv:hep-ph/0208078
this paper investigates asynchronous multiple - input multiple - output ( mimo ) massive unsourced random access ( ura ) in an orthogonal frequency division multiplexing ( ofdm ) system over frequency - selective fading channels, with the presence of both timing and carrier frequency offsets ( to and cfo ) and non - negligible codeword collisions. the proposed coding framework segregates the data into two components, namely, preamble and coding parts, with the former being tree - coded and the latter ldpc - coded. by leveraging the dual sparsity of the equivalent channel across both codeword and delay domains ( cd and dd ), we develop a message - passing - based sparse bayesian learning algorithm, combined with belief propagation and mean field, to iteratively estimate dd channel responses, to, and delay profiles. furthermore, by jointly leveraging the observations among multiple slots, we establish a novel graph - based algorithm to iteratively separate the superimposed channels and compensate for the phase rotations. additionally, the proposed algorithm is applied to the flat fading scenario to estimate both to and cfo, where the channel and offset estimation is enhanced by leveraging the geometric characteristics of the signal constellation. extensive simulations reveal that the proposed algorithm achieves superior performance and substantial complexity reduction in both channel and offset estimation compared to the codebook enlarging - based counterparts, and enhanced data recovery performances compared to state - of - the - art ura schemes.
arxiv:2405.11883
in arxiv : 1108. 5413v1, lee and lee use several comparisons to argue that the physics of the three - band model found ( phys. rev. lett. 106 036401 ( 2011 ) ) can be explained in the one - band model ' s framework ( phys. rev. lett. 91 057001 ( 2003 ) ). while superficial similarities exist between the two sets of results, for reasons discussed in this reply, we disagree that they describe the same physics.
arxiv:1109.2087
magnetization isotherms of the 5f - electron ferromagnets urhga, ucoga and uco0. 98ru0. 02al were measured at temperatures in the vicinity of their curie temperature in order to investigate the critical behavior near the ferromagnetic phase transition. these compounds adopt the layered hexagonal zrnial - type structure and exhibit huge uniaxial magnetocrystalline anisotropy. the critical $ \ beta $, $ \ gamma $ and $ \ delta $ exponents were determined by analyzing arrott - noakes plots, kouvel - fisher plots, critical isotherms, scaling theory and widom scaling relations. the values obtained for urhga and ucoga can be explained by the results of the renormalization group theory for a 2d ising system with long - range interactions similar to urhal reported by other investigators. on the other hand, the critical exponents determined for uco0. 98ru0. 02al are characteristic of a 3d ising ferromagnet with short - range interactions suggested in previous studies also for the itinerant 5f - electron paramagnet ucoal situated near a ferromagnetic transition. the change from the 2d to the 3d ising system is related to the gradual delocalization of 5f electrons in the series of the urhga, urhal, ucoga to uco0. 98ru0. 02al and ucoal compounds and appears close to the strongly itinerant nonmagnetic limit. this indicates possible new phenomena that may be induced by the change of dimensionality in the vicinity of the quantum critical point.
arxiv:2008.12061
in this short note, we report a curious appearance of the recently discovered 4d - 5d connection of extremal black holes in the topological string b - model. the holomorphic anomaly equations in the schrodinger - weil representation are written { \ it formally } in terms of m2 charges. in the phase space the 4d - 5d charges are related by a non - linear canonical transformation. the black hole partition function factors into m2 - anti - m2 contributions in leading approximation.
arxiv:hep-th/0701027
we developed a countermeasure against blinding attacks on low - noise detectors with a background noise cancellation scheme in quantum key distribution ( qkd ) systems. background noise cancellation includes self - differencing and balanced avalanche photodiode ( apd ) schemes and is considered a promising solution for low - noise apds, which are critical components in high - performance qkd systems. however, its vulnerability to blinding attacks has been recently reported. in this work, we propose a new countermeasure that prevents this potential security loophole from being used in detector blinding attacks. an experimental qkd setup is implemented and various tests are conducted to verify the feasibility and performance of the proposed method. the obtained measurement results show that the proposed scheme successfully detects occurring blinding - attack - based hacking attempts.
arxiv:1611.04267
plenty of quantum information protocols are enabled by manipulation and detection of photonic spectro - temporal degrees of freedom via light - matter interfaces. while present implementations are well suited for high - bandwidth photon sources such as quantum dots, they lack the high resolution required for intrinsically narrow - band light - atom interactions. here, we demonstrate far - field temporal imaging based on ac - stark spatial spin - wave phase manipulation in a multimode gradient echo memory. we achieve spectral resolution of 20 khz with mhz - level bandwidth and ultra - low noise equivalent to 0. 023 photons, enabling operation in the single - quantum regime.
arxiv:1911.03995
we present numerical simulations modeling the orbital evolution of very wide binaries, pairs of stars separated by over ~ 1000 au. due to perturbations from other passing stars and the milky way ' s tide, the orbits of very wide binary stars occasionally become extremely eccentric, which forces close encounters between the companion stars ( kaib et al. 2013 ). we show that this process causes a stellar collision between very wide binary companion stars once every 1000 - 7500 years on average in the milky way. one of the main uncertainties in this collision rate is the amount of energy dissipated by dynamic tides during close ( but not collisional ) periastron passages. this dissipation presents a dynamical barrier to stellar collisions and can instead transform very wide binaries into close or contact binaries. however, for any plausible tidal dissipation model, very wide binary stars are an unrealized, and potentially the dominant, source of stellar collisions in our galaxy. such collisions should occur throughout the thin disk of the milky way. stellar collisions within very wide binaries should yield a small population of single, li - depleted, rapidly rotating massive stars.
arxiv:1309.3272
combining three mechanisms, we reanalyze the processes $ b \ to \ eta ^ { \ prime } k ( k ^ * ), \ eta k ( k ^ * ) $ and calculate their branching ratios. the results are compared with other mechanisms in the literature. the striking feature of the gluon fusion mechanism is emphasized and its experimental test is discussed.
arxiv:hep-ph/9805451
in this article we prove two versions of the liapunov center theorem for symmetric potentials. we consider a second - order autonomous system $ \ ddot q ( t ) = - \ nabla u ( q ( t ) ) $ in the presence of symmetries of a compact lie group $ \ gamma $ acting linearly on $ \ mathbb { r } ^ n. $ we look for non - stationary periodic solutions of this system in a neighborhood of an orbit of critical points of the potential $ u. $
arxiv:1810.09293
estimates of the integrated ( sfr ) and specific ( ssfr ) rates of star formation are given for 181 galaxies of later sc, scd, and sd types seen almost face - on. their sfrs were determined from fuv fluxes in the galex survey. the median values of the ssfr are : - 10. 66 dex for sc, - 10. 44 dex for scd, and - 10. 40 dex for sd types in units of yr^-1. the average value of the ssfr for these galaxies falls off smoothly from low - mass to giant disks. after accounting for photometric errors, the specific star formation rate has a small cosmic variation of 0. 16 dex. in order to reproduce the observed stellar mass over a cosmic time of 13. 8 gyr, the galaxies without bulges viewed face - on must have had an sfr two to three times higher in the past than observed now.
arxiv:2005.12555
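the specific star formation rate quoted above in dex is just sfr divided by stellar mass, in log10 units. a one-line sketch; the galaxy values below are hypothetical illustrations, not survey data:

```python
import math

def log_ssfr_dex(sfr_msun_per_yr, stellar_mass_msun):
    # specific star formation rate ssfr = sfr / m*, returned as
    # log10 of a rate in yr^-1, i.e. the "dex" convention above
    return math.log10(sfr_msun_per_yr / stellar_mass_msun)

# hypothetical late-type disk: sfr = 1 msun/yr, m* = 3e10 msun
val = log_ssfr_dex(1.0, 3.0e10)  # about -10.48 dex, in the sc-sd range quoted
```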
3d generative shape modeling is a fundamental research area in computer vision and interactive computer graphics, with many real - world applications. this paper investigates the novel problem of generating 3d shape point cloud geometry from a symbolic part tree representation. in order to learn such a conditional shape generation procedure in an end - to - end fashion, we propose a conditional gan " part tree " - to - " point cloud " model ( pt2pc ) that disentangles the structural and geometric factors. the proposed model incorporates the part tree condition into the architecture design by passing messages top - down and bottom - up along the part tree hierarchy. experimental results and user study demonstrate the strengths of our method in generating perceptually plausible and diverse 3d point clouds, given the part tree condition. we also propose a novel structural measure for evaluating if the generated shape point clouds satisfy the part tree conditions.
arxiv:2003.08624
we propose a model with a larger spatial size of feature maps and evaluate it on an object detection task. with the goal of choosing the best feature extraction network for our model, we compare several popular lightweight networks. after that we conduct a set of experiments with channel reduction algorithms in order to accelerate execution. our vehicle detection models are accurate and fast and therefore suit embedded visual applications. with only 1. 5 gflops our best model gives 93. 39 ap on the validation subset of the challenging detrac dataset. the smallest of our models is the first to achieve real - time inference speed on cpu with a reasonable accuracy drop to 91. 43 ap.
arxiv:1707.01395
reinforcement learning problems are often described through rewards that indicate if an agent has completed some task. this specification can yield desirable behavior, however many problems are difficult to specify in this manner, as one often needs to know the proper configuration for the agent. when humans are learning to solve tasks, we often learn from visual instructions composed of images or videos. such representations motivate our development of perceptual reward functions, which provide a mechanism for creating visual task descriptions. we show that this approach allows an agent to learn from rewards that are based on raw pixels rather than internal parameters.
arxiv:1608.03824
in the classical balls - and - bins paradigm, where $ n $ balls are placed independently and uniformly in $ n $ bins, typically the number of bins with at least two balls in them is $ \ theta ( n ) $ and the maximum number of balls in a bin is $ \ theta ( \ frac { \ log n } { \ log \ log n } ) $. it is well known that when each round offers $ k $ independent uniform options for bins, it is possible to typically achieve a constant maximal load if and only if $ k = \ omega ( \ log n ) $. moreover, it is possible w. h. p. to avoid any collisions between $ n / 2 $ balls if $ k > \ log _ 2n $. in this work, we extend this into the setting where only $ m $ bits of memory are available. we establish a tradeoff between the number of choices $ k $ and the memory $ m $, dictated by the quantity $ km / n $. roughly put, we show that for $ km \ gg n $ one can achieve a constant maximal load, while for $ km \ ll n $ no substantial improvement can be gained over the case $ k = 1 $ ( i. e., a random allocation ). for any $ k = \ omega ( \ log n ) $ and $ m = \ omega ( \ log ^ 2n ) $, one can achieve a constant load w. h. p. if $ km = \ omega ( n ) $, yet the load is unbounded if $ km = o ( n ) $. similarly, if $ km > cn $ then $ n / 2 $ balls can be allocated without any collisions w. h. p., whereas for $ km < \ epsilon n $ there are typically $ \ omega ( n ) $ collisions. furthermore, we show that the load is w. h. p. at least $ \ frac { \ log ( n / m ) } { \ log k + \ log \ log ( n / m ) } $. in particular, for $ k \ leq \ operatorname { polylog } ( n ) $, if $ m = n ^ { 1 - \ delta } $ the optimal maximal load is $ \ theta ( \ frac { \ log n } { \ log \ log n } ) $ ( the same as in the case $ k = 1 $ ), while $ m = 2n $ suffices
arxiv:0901.4056
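the k-choice allocation scheme analyzed above is easy to simulate (without the memory constraint that is the paper's actual subject); a small monte carlo sketch with illustrative n and k:

```python
import random

def max_load(n_balls, n_bins, k, seed=0):
    # each ball draws k independent uniform bins (with replacement)
    # and goes into the least loaded of them; k = 1 recovers the
    # classical balls-and-bins allocation
    rng = random.Random(seed)
    bins = [0] * n_bins
    for _ in range(n_balls):
        choice = min(rng.choices(range(n_bins), k=k), key=bins.__getitem__)
        bins[choice] += 1
    return max(bins)

one_choice = max_load(10000, 10000, k=1)    # typically ~ log n / log log n
eight_choices = max_load(10000, 10000, k=8)  # typically a small constant
```

with multiple choices the maximal load collapses to a small constant, in line with the classical results the abstract builds on.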
the golden ( british columbia, canada ) meteorite fall occurred on oct 4, 2021 at 0534 ut with the first recovered fragment ( 1. 3 kg ) landing on an occupied bed. the meteorite is an unbrecciated, low - shock ( s2 ) ordinary chondrite of intermediate composition, typed as an l / ll5. from noble gas measurements the cosmic ray exposure age is 25 ma while gas retention ages are all > 2 ga. short - lived radionuclides and noble gas measurements of the pre - atmospheric size overlap with estimates from infrasound and lightcurve modelling producing a preferred pre - atmospheric mass of 70 - 200 kg. the orbit of golden has a high inclination ( 23. 5 degs ) and is consistent with delivery from the inner main belt. the highest probability ( 60 % ) of an origin is from the hungaria group. we propose that golden may originate among the background s - type asteroids found interspersed in the hungaria region. the current collection of 18 l and ll chondrite orbits shows a strong preference for origins in the inner main belt, suggesting multiple parent bodies may be required to explain the diversity in cre ages and shock states.
arxiv:2310.17822
we investigate neutrino mass generation scenarios where the lepton number breaking new physics couples only to the standard model ( sm ) right - handed charged lepton chirality. the lowest - order lepton number violating effective operator which describes this framework is a unique dimension nine operator involving sm gauge fields, $ \ mathcal { o } _ 9 $. we find that there are two possible classes of new physics scenarios giving rise to this $ \ mathcal { o } _ 9 $ operator. in these scenarios neutrino masses are induced radiatively via dark matter interactions, linking the dark matter to a natural explanation for the smallness of neutrino masses compared to the electroweak scale. we discuss the phenomenology and existing constraints in the different neutrino mass models within each class. in particular, we analyze the important interplay between neutrino mixing and neutrinoless double $ \ beta $ - decay in order to predict characteristic signatures and disfavour certain scenarios.
arxiv:2006.13564
in this article, we propose new bayesian methods for selecting and estimating a sparse coefficient vector for skewed heteroscedastic response. our novel bayesian procedures effectively estimate the median and other quantile functions, accommodate non - local priors for regression effects without compromising ease of implementation via sampling based tools, and asymptotically select the true set of predictors even when the number of covariates increases at the same order as the sample size. we also extend our method to deal with some observations with very large errors. via simulation studies and a re - analysis of a medical cost study with large number of potential predictors, we illustrate the ease of implementation and other practical advantages of our approach compared to existing methods for such studies.
arxiv:1602.09100
as typically implemented, single photon sources cannot be made to produce single photons with high probability, while simultaneously suppressing the probability of yielding two or more photons. because of this, single photon sources cannot really produce single photons on demand. we describe a multiplexed system that allows the probabilities of producing one and more photons to be adjusted independently, enabling a much better approximation of a source of single photons on demand.
arxiv:quant-ph/0205140
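the claim that multiplexing lets the one - photon and multi - photon probabilities be tuned independently can be illustrated with a toy calculation. this assumes idealized lossless heralded sources with thermal photon statistics, a deliberate simplification of the actual scheme:

```python
def thermal_pn(n, mu):
    # photon-number distribution of one heralded (thermal) source
    # with mean photon number mu
    return mu ** n / (1.0 + mu) ** (n + 1)

def multiplexed_probs(mu, m):
    # idealized multiplexing: route out the first of m sources whose
    # herald fires. mu sets the one-vs-many photon ratio of the fired
    # source, while m independently raises the overall success rate.
    p_herald = 1.0 - thermal_pn(0, mu)
    p_any = 1.0 - (1.0 - p_herald) ** m
    p_one_given_herald = thermal_pn(1, mu) / p_herald
    return p_any * p_one_given_herald, p_any * (1.0 - p_one_given_herald)

p_one, p_multi = multiplexed_probs(mu=0.1, m=20)
# a single source with mu = 0.1 emits exactly one photon with
# probability ~0.083; twenty multiplexed sources raise this to ~0.77
# while the one-to-multi photon ratio stays fixed by mu alone
```

lowering mu suppresses multi - photon events, and raising m restores the overall emission probability, which is the independence the abstract describes.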
tabular data is difficult to analyze and to search through, calling for new tools and interfaces that would allow even non tech - savvy users to gain insights from open datasets without resorting to specialized data analysis tools or even without having to fully understand the dataset structure. the goal of our demonstration is to showcase answering natural language questions from tabular data, and to discuss related system configuration and model training aspects. our prototype is publicly available and open - sourced ( see https : / / svakulenko. ai. wu. ac. at / tableqa ).
arxiv:1705.06504
the model of the homogeneous and isotropic universe is considered in which the coordinate system of reference is not defined by the matter but is a priori specified. the scale factor of the universe changes following a linear law. the scale of mass changes proportionally to the scale factor of the universe. the model under consideration avoids the flatness and horizon problems. the predictions of the model are fitted to the observational constraints : hubble parameter, age of the universe and cmb data.
arxiv:astro-ph/9809243
applying the phenomenon of neutrino lasing in the solar interior, we show how the rate for the generic neutrino decay process ` \ nu - > fermion + boson ', can in principle be enhanced by many orders of magnitude over its normal decay rate. such a large enhancement could be of import to neutrino - decay models invoked in response to the apparent deficit of electron neutrinos observed from the sun. the significance of this result to such models depends on the specific form of the neutrino decay, and the particle model within which it is embedded.
arxiv:hep-ph/9312331
the knowledge base ( kb ) used for real - world applications, such as booking a movie or restaurant reservation, keeps changing over time. end - to - end neural networks trained for these task - oriented dialogs are expected to be immune to any changes in the kb. however, existing approaches break down when asked to handle such changes. we propose an encoder - decoder architecture ( bossnet ) with a novel bag - of - sequences ( boss ) memory, which facilitates the disentangled learning of the response ' s language model and its knowledge incorporation. consequently, the kb can be modified with new knowledge without a drop in interpretability. we find that bossnet outperforms state - of - the - art models, with considerable improvements ( > 10 \ % ) on babi oov test sets and other human - human datasets. we also systematically modify existing datasets to measure disentanglement and show bossnet to be robust to kb modifications.
arxiv:1805.01216
Symmetric ensembles of neutral atoms interacting via the Rydberg blockade are well described by the Jaynes-Cummings Hamiltonian. We use this framework to study the problem of generating arbitrary superpositions of Dicke states of hyperfine qubits in such ensembles. The combination of the symmetric Rydberg blockade and microwaves that drive the qubits with a time-dependent phase is sufficient to make these ensembles completely controllable, in the sense that one can generate an arbitrary unitary transformation on the system. We apply this to the problem of state mapping. With currently feasible parameters, it is possible to generate arbitrary symmetric states of ~10 hyperfine qubits with high fidelity in ~1 microsecond, assuming fast microwave phase switching times. To reduce the requirements on phase switching, we propose a "dressed ground control" scheme, in which the control task is simplified by restricting the system's dynamics to the dressed ground subspace.
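As a minimal numerical sketch of the Jaynes-Cummings framework invoked here, one can write down the textbook Hamiltonian for a single two-level system coupled to one bosonic mode. The cutoff `N` and the parameters `wc`, `wq`, `g` below are illustrative choices, not values from the paper, which concerns symmetric Rydberg-blockaded ensembles rather than a single cavity qubit:

```python
import numpy as np

N = 10  # photon-number cutoff (illustrative)
# Annihilation operator on the truncated Fock space: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
# Pauli-z and ladder operators for the two-level (qubit) part
sz = np.diag([1.0, -1.0])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # sigma_+
sm = sp.T                                # sigma_-

wc, wq, g = 1.0, 1.0, 0.1  # mode freq, qubit freq, coupling (arbitrary units)
I2, IN = np.eye(2), np.eye(N)

# Jaynes-Cummings Hamiltonian in the rotating-wave approximation:
# H = wc a'a + (wq/2) sz + g (a sigma_+ + a' sigma_-)
H = (wc * np.kron(a.conj().T @ a, I2)
     + 0.5 * wq * np.kron(IN, sz)
     + g * (np.kron(a, sp) + np.kron(a.conj().T, sm)))

assert np.allclose(H, H.conj().T)  # Hermitian, as a Hamiltonian must be
```

The interaction term exchanges one excitation between the mode and the qubit, which is the mechanism that makes the driven system controllable within each excitation sector.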
arxiv:1607.03169
This article grew out of the application part of my master's thesis at the Faculty of Mathematics and Information Science at Ruprecht-Karls-Universität Heidelberg under the supervision of PD Dr. Andreas Ott. In the context of time series analyses of RNA virus datasets with persistent homology, this article introduces a new method for reducing two-dimensional persistence to one-dimensional persistence by transforming time information into distances.
arxiv:2203.00616
This paper considers a joint multi-graph inference and clustering problem for the simultaneous inference of node centrality and the association of graph signals with their graphs. We study a mixture model of filtered low-pass graph signals with possibly non-white and low-rank excitation. While the mixture model is motivated by practical scenarios, it presents significant challenges to prior graph learning methods. As a remedy, we consider an inference problem focusing on the node centrality of graphs. We design an expectation-maximization (EM) algorithm with a unique low-rank-plus-sparse prior derived from the low-pass signal property. We propose a novel online EM algorithm for inference from streaming data. As an example, we extend the online algorithm to detect whether the signals are generated from an abnormal graph. We show that the proposed algorithms converge to a stationary point of the maximum a posteriori (MAP) problem. Numerical experiments support our analysis.
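The low-pass property that the prior exploits can be sketched numerically. The filter choice $H(L) = (I + \alpha L)^{-1}$ and every parameter below are illustrative assumptions (with white excitation for simplicity, whereas the paper allows non-white, low-rank excitation): filtering concentrates signal energy along the low graph frequencies, here the constant eigenvector of the Laplacian.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 500  # nodes, number of signal samples (illustrative)

# Random undirected graph and its combinatorial Laplacian
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(1)) - A

# One common low-pass graph filter: H(L) = (I + alpha L)^{-1}
alpha = 2.0
Hf = np.linalg.inv(np.eye(n) + alpha * L)

# Filtered signals with white excitation: y_i = H(L) x_i
X = rng.standard_normal((n, m))
Y = Hf @ X
C = Y @ Y.T / m  # sample covariance of the observed signals

# Low-pass property: the constant vector (graph frequency 0) carries
# far more energy than an average direction.
v = np.ones(n) / np.sqrt(n)
assert v @ C @ v > np.trace(C) / n
```

With non-white, low-rank excitation the covariance picks up extra structure, which is what makes the low-rank-plus-sparse prior in the EM algorithm useful.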
arxiv:2207.14019
This paper focuses on the isogeometric vibration analysis of curvilinearly stiffened composite panels. The stiffness and mass matrices are derived using first-order shear deformation theory (FSDT). The present method models the plate and the stiffener separately, which allows the stiffener element nodes to not coincide with the plate shell-element nodes. The stiffness and mass matrices of a stiffener are transformed to those of the plate through displacement compatibility conditions at the plate/stiffener interface by interpolation using NURBS basis functions. Cutouts are modeled using a single NURBS patch generated by creating a ruled surface between two curves. The proposed formulation is first validated against results available in the literature. The effects of the width-to-thickness ratio, fiber orientation, ply layups, shape and size of the cutouts, and the boundary conditions on the response of stiffened composite plates are then analyzed, and the numerical results are used to draw useful conclusions.
arxiv:2104.12856
In this paper, we evaluate the algebraic $K$-groups of a planar cuspidal curve over a perfect $\mathbb{F}_p$-algebra relative to the cusp point. A conditional calculation of these groups was given earlier by Hesselholt, assuming a conjecture on the structure of certain polytopes. Our calculation here, however, is unconditional and illustrates the advantage of the new setup for topological cyclic homology by Nikolaus-Scholze, which is used throughout. The only input necessary for our calculation is the evaluation, by the Buenos Aires Cyclic Homology Group and by Larsen, of the structure of the Hochschild complex of the coordinate ring as a mixed complex, that is, as an object of the infinity-category of chain complexes with circle action.
arxiv:1903.08295