Given a double complex $X$ there are spectral sequences with the $E_2$ terms being either $H_I(H_{II}(X))$ or $H_{II}(H_I(X))$. But if $H_I(X) = H_{II}(X) = 0$, both spectral sequences have all their terms 0. This can happen even though there is nonzero (co)homology of interest associated with $X$. This is frequently the case when dealing with Tate (co)homology. So in this situation the spectral sequences may not give any information about the (co)homology of interest. In this article we give a different way of constructing homology groups of $X$ when $H_I(X) = H_{II}(X) = 0$. With this result we give a new and elementary proof of balance of Tate homology and cohomology.
arxiv:1108.1100
and all operations stemming therefrom are only meaningful when restricted to certain matrices, since the sum featuring in the above definition of the matrix product will contain an infinity of summands. An easy way to circumvent this issue is to restrict to matrices all of whose rows (or columns) contain only finitely many nonzero terms. As in the finite case (see above), where matrices describe linear maps, infinite matrices can be used to describe operators on Hilbert spaces, where convergence and continuity questions arise. However, the explicit point of view of matrices tends to obfuscate the matter, and the abstract and more powerful tools of functional analysis are used instead, by relating matrices to linear maps (as in the finite case above), but imposing additional convergence and continuity constraints. === Empty matrix === An empty matrix is a matrix in which the number of rows or columns (or both) is zero. Empty matrices help to deal with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space V to itself, while BA is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1, as follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite-dimensional space to itself has determinant 1, a fact that is often used as a part of the characterization of determinants. == Applications == There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix.
For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of strategies the players choose. Text mining and automated thesaurus compilation make use of document-term matrices such as tf-idf to track frequencies of certain words in several documents. Complex numbers can be represented by particular real 2-by-2 matrices via $a + ib \leftrightarrow \begin{bmatrix} a & -b \\ b & a \end{bmatrix}$.
https://en.wikipedia.org/wiki/Matrix_(mathematics)
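The empty-matrix facts and the complex-number representation above are easy to check numerically. A minimal sketch using NumPy (the library choice is an assumption here, not part of the article):

```python
import numpy as np

# A complex number a+ib as the real 2-by-2 matrix [[a, -b], [b, a]].
def as_matrix(z: complex) -> np.ndarray:
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

# Matrix multiplication reproduces complex multiplication.
z, w = 1 + 2j, 3 - 1j
assert np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w))

# Empty matrices: a 3-by-0 times a 0-by-3 matrix is the 3-by-3 zero matrix.
A, B = np.empty((3, 0)), np.empty((0, 3))
assert (A @ B).shape == (3, 3) and np.all(A @ B == 0)

# The determinant of the 0-by-0 matrix is the empty product, i.e. 1.
assert np.linalg.det(np.empty((0, 0))) == 1.0
```

NumPy follows the convention described in the text: it accepts empty matrices in products and returns 1.0 for the 0-by-0 determinant.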
In contrast to the well known Fermi liquid theory of three dimensions, interacting one-dimensional and quasi-one-dimensional systems of fermions are described at low energy by an effective theory known as Luttinger liquid theory. This theory is expressed in terms of collective many-body excitations that show exotic behavior such as spin-charge separation. Luttinger liquid theory is commonly applied on the premise that "low energy" describes both the spin and charge sectors. However, when the interactions in the system are very strong, as they typically are at low particle densities, the ratio of spin to charge energy may become exponentially small. It is then possible at very low temperatures for the energy to be low compared to the characteristic charge energy, but still high compared to the characteristic spin energy. This energy window of near ground-state charge degrees of freedom, but highly thermally excited spin degrees of freedom, is called a spin-incoherent Luttinger liquid. The spin-incoherent Luttinger liquid exhibits a higher degree of universality than the Luttinger liquid, and its properties are qualitatively distinct. In this Colloquium I detail some of the recent theoretical developments in the field and describe experimental indications of such a regime in gated semiconductor quantum wires.
arxiv:cond-mat/0611597
The (single) black hole solutions of Bañados, Teitelboim and Zanelli (BTZ) in 2+1 dimensional anti-de Sitter space are generalized to an arbitrary number $N$ of such black holes. The resulting multi-black-hole (MBH) spacetime is locally isometric to anti-de Sitter space, and globally it is obtained from the latter as a quotient space by means of suitable identifications. The MBH spacetime has $N$ asymptotically anti-de Sitter exterior regions, each of which has the geometry of a single BTZ black hole. These exterior regions are separated by $N$ horizons from a common interior region. This interior region can be described as a "closed" universe containing $N$ black holes. Similar configurations in 3+1 dimensions, with horizons of toroidal and higher genus topologies, are also presented.
arxiv:gr-qc/9608010
Off-stoichiometric alloys exhibit partial disorder, in the sense that only some of the sublattices of the stoichiometric ordered alloy become disordered. This paper puts forward a generalization of the augmented space recursion (ASR) (introduced earlier by one of us (Mookerjee et al 1997 (*))) for systems with many atoms per unit cell. In order to justify the convergence properties of ASR we have studied the convergence of various moments of the local density of states and other physical quantities like the Fermi energy and band energy. We have also looked at the convergence of the magnetic moment of Ni, which is very sensitive to numerical approximations, towards the k-space value 0.6 $\mu_B$ with the number of recursion steps prior to termination.
arxiv:cond-mat/0107245
$m/n))$ quantum relative entropy net on the spectraplex. $\bullet$ Matrix discrepancy for Schatten norms. We generalize our discrepancy bound for matrix Spencer to Schatten norms $2 \le p \le q$. Given $\|A_i\|_{S_p} \le 1$ and $\mathrm{rank}(A_i) \le r$, we can efficiently find a partial coloring $x \in [-1, 1]^n$ with $|\{i : |x_i| = 1\}| \ge n/2$ and $\|\sum_{i=1}^n x_i A_i\|_{S_q} \lesssim \sqrt{n \min(p, \log(rk))} \cdot k^{1/p - 1/q}$, where $k := \min(1, m/n)$.
arxiv:2111.03171
In this paper, we improve Polyak's local convexity result for quadratic transformations. Extensions and open problems are also presented.
arxiv:1405.6042
Electric vehicles (EVs) are considered as sustainable alternatives to conventional vehicles, as they reduce emissions and fossil fuel dependency. A recent study has proposed a charging infrastructure planning tool to support intercity trips for the estimated EV market share (6 percent) in Michigan for 2030. The main goal of this study is to estimate the emission reduction associated with this electrification rate and infrastructure investment for light duty vehicles. To this end, a state-of-the-art emission estimation framework is proposed to be applied to the state-wide intercity travels. The main contributions of the proposed framework include: 1) incorporating a micro emission estimation model for simulated vehicle trajectories of the intercity network of Michigan, 2) adjusting the micro emission model results considering impacts of monthly travel demand and temperature variations, and heterogeneity of vehicles based on their make, model, and age. The emission estimation framework is then compared with the traditional VMT analysis method as a benchmark. Finally, five different scenarios are explored for EV adoption to assess potential emission savings from the given electrification rate for each scenario. The results suggest annual CO2 emission savings of 0.58-0.92 million tons. The CO2 social cost savings may justify the investment in the network electrification. Note that only 3.7 to 8.6 percent of the total EV energy requirements must be provided via the DC fast charger network proposed by the charging infrastructure planning tool. This requires an annual energy consumption of 22.15 to 51.76 BWh for the estimated EV market share in Michigan for 2030.
arxiv:2012.04773
For an infinite type surface $\Sigma$, we consider the space of (marked) convex hyperbolic structures on $\Sigma$, denoted $H(\Sigma)$, with the Fenchel-Nielsen topology. The (big) mapping class group acts faithfully on this space, allowing us to investigate a number of mapping class group invariant subspaces of $H(\Sigma)$ which arise from various geometric properties (e.g. geodesic or metric completeness, ergodicity of the geodesic flow, lower systole bound, discrete length spectrum) of the hyperbolic structure. In particular, we show that the space of geodesically complete convex hyperbolic structures in $H(\Sigma)$ is locally path connected, connected, and decomposes naturally into Teichmüller subspaces. The big mapping class group of $\Sigma$ acts faithfully on this space, allowing us to classify mapping classes into three types ({\it always quasiconformal, sometimes quasiconformal, and never quasiconformal}) in terms of their dynamics on the Teichmüller subspaces. Moreover, each type contains infinitely many mapping classes, and the type is relative to the underlying subspace of $H(\Sigma)$ that is being considered. As an application of our work, we show that if the mapping class group of a general topological surface $\Sigma$ is algebraically isomorphic to the modular group of a Riemann surface $X$, then $\Sigma$ is of finite topological type and $X$ is homeomorphic to it. Moreover, a big mapping class group cannot act on any Teichmüller space with orbits equivalent to modular group orbits.
arxiv:2410.05606
In this study we coded, for individual student participation on each question, the video of twenty-seven groups interacting in the group phase of a variety of two-phase exams. We found that maximum group participation occurred on questions where at least one person in the group had answered that question incorrectly during the solo phase of the exam. We also observed that those students who were correct on a question during the solo phase have higher participation than those who were incorrect. Finally, we observed that, from a participation standpoint, the strongest (weakest) students seem to benefit the most (least) from heterogeneous groups, while homogeneous groups do not seem to favor students of any particular performance level.
arxiv:1607.03960
We study the impact that uncertainties on assumed relations between galaxy bias parameters have on constraints of the local PNG $f_{\rm NL}$ parameter. We focus on the relation between the linear density galaxy bias $b_1$ and local PNG bias $b_\phi$ in an idealized forecast setup with multitracer galaxy power spectrum and bispectrum data. We consider two parametrizations of galaxy bias: 1) one inspired by the universality relation where $b_\phi = 2\delta_c(b_1 - p)$ and $p$ is a free parameter; and 2) another in which the product of bias parameters and $f_{\rm NL}$, like $f_{\rm NL} b_\phi$, is directly fitted for. The constraints on the $f_{\rm NL} - p$ plane are markedly bimodal, and both the central value and width of marginalized constraints on $f_{\rm NL}$ depend sensitively on the priors on $p$. Assuming fixed $p = 1$ in the constraints with a fiducial value of $p = 0.55$ can bias the inferred $f_{\rm NL}$ by $0.5\sigma$ to $1\sigma$; priors $\Delta p \approx 0.5$ around this fiducial value are however sufficient in our setup to return unbiased constraints. In power spectrum analyses, parametrization 2, which makes no assumptions on $b_\phi$, can distinguish $f_{\rm NL} \neq 0$ with the same significance as parametrization 1 assuming perfect knowledge of $b_\phi$ (the value of $f_{\rm NL}$ is however left unknown). A drawback of parametrization 2 is that the addition of the bispectrum information is not as beneficial as in parametrization 1. Our results strongly motivate the incorporation of mitigation strategies for bias uncertainties in PNG constraint analyses, as well as further theoretical studies on the relations between bias parameters to better inform those strategies.
arxiv:2009.06622
We extract the long-distance asymptotic behaviour of two-point correlation functions in massless quantum integrable models containing multi-species excitations. For such a purpose, we extend to these models the method of a large-distance regime re-summation of the form factor expansion of correlation functions. The key feature of our analysis is a technical hypothesis on the large-volume behaviour of the form factors of local operators in such models. We check the validity of this hypothesis on the example of the $SU(3)$-invariant XXX magnet by means of the determinant representations for the form factors of local operators in this model. Our approach confirms the structure of the critical exponents obtained previously for numerous models solvable by the nested Bethe ansatz.
arxiv:1601.04475
Since the pioneering works by Landau, Zener, Stückelberg, and Majorana (LZSM), it has been known that driving a quantum two-level system results in tunneling between its states. Even though the interference between these transitions is known to be important, it is only recently that it became accessible, controllable, and useful for engineering quantum systems. Here, we study systematically various aspects of LZSM physics and review the relevant literature, significantly expanding the review article [Shevchenko, S. N., S. Ashhab, and F. Nori (2010), "Landau-Zener-Stückelberg interferometry," Phys. Rep. 492, 1].
arxiv:2203.16348
Inspired by our earlier work on automatic repeat request (ARQ) secrecy, we propose a simple, yet efficient, security overlay protocol to existing 802.11 networks. Our work targets networks secured by the Wired Equivalent Privacy (WEP) protocol because of its widespread use and vulnerability to a multitude of security threats. By exploiting the existing ARQ protocol in the 802.11 standard, our proposed opportunistic secrecy scheme is shown to defend against all known passive WEP attacks. Moreover, our implementation on the madwifi-ng driver is used to establish the achievability of a vanishing secrecy outage probability in several realistic scenarios.
arxiv:0908.2328
We verify the QED Ward identity for the two- and three-point functions at non-equilibrium in the HTL limit. We use the Keldysh formalism of real time finite temperature field theory. We obtain an identity of the same form as the Ward identity for a set of one-loop self-energy and one-loop three-point vertex diagrams which are constructed from HTL effective propagators and vertices.
arxiv:hep-th/9801103
We present a new approach to a technique known as compiling control, whose aim is to compile away special mechanisms for non-standard atom selection in logic programs. It has previously been conjectured that compiling control could be implemented as an instance of the first Futamura projection, in which an interpreter is specialized for an input program. However, the exact nature of such an interpreter and of the required technique for specialization were never specified. In this work, we propose a Prolog meta-interpreter which applies the desired non-standard selection rule and which is amenable to specialization using offline partial deduction. After the initial analysis phase of compiling control, we collect annotations to specialize the interpreter using the Logen system for offline partial deduction. We also show that the result of the specialization is equivalent to the program obtained using the traditional approach to compiling control. In this way, we simplify the synthesis step.
arxiv:1808.05360
We propose that the non-observation of WIMPs may be explained by dark matter primarily annihilating into a darker, concealed sector while coupling to the Standard Model with only minimal strength. To demonstrate this scenario, we focus on the WIMP dark matter candidate from a $U(1)_X$ hidden sector, which couples more strongly to another concealed $U(1)_C$ sector than to the Standard Model. We explore two possible cases for the evolution of dark particles among hidden sectors: (1) the WIMP annihilates efficiently and achieves the observed relic density with the assistance of the concealed sector; (2) the WIMP transforms into another type of dark matter within the concealed sector and attains the observed relic density. Annihilation into the darker sector explains why WIMPs have remained undetected, while all WIMP models continue to hold interest.
arxiv:2409.17217
The thesis is devoted to abstract, geometric, and symmetric aspects of modern elementary particle theories. A new direction in constructing supersymmetric and superstring models is proposed, based on the consistent and thorough incorporation of semigroups, ideals, and noninvertible properties into their mathematical structure. A theory of semisupermanifolds (Chapter I) and noninvertible generalizations of superconformal (Chapters II-III) and hyperbolic geometries (Chapter V) are introduced. New continuous supermatrix representations of semigroups (Chapter IV) are obtained. The investigations carried out will allow us to formulate a theoretical model of elementary particles based on supersymmetry in terms of more general categories and new structures, as a theory of abstract and supermatrix semigroups which includes previous theories as a particular invertible case in the abstract sense.
arxiv:math-ph/9910045
Deep neural networks (DNNs) are powerful models for many pattern recognition tasks, yet their high computational complexity and memory requirements limit them to applications on high-performance computing platforms. In this paper, we propose a new method to evaluate DNNs trained with 32-bit floating point (float32) accuracy using only low precision integer arithmetic in combination with binary shift and clipping operations. Because hardware implementation of these operations is much simpler than high precision floating point calculation, our method can be used for efficient DNN inference on dedicated hardware. In experiments on MNIST, we demonstrate that DNNs trained with float32 can be evaluated using a combination of 2-bit integer arithmetic and a few float32 calculations in each layer, or only 3-bit integer arithmetic in combination with binary shift and clipping, without significant performance degradation.
arxiv:1810.09854
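The idea of replacing float32 inference with integer arithmetic plus binary shift and clipping can be sketched roughly as follows. This is a generic uniform-quantization toy, not the paper's exact scheme; the power-of-two scale and the layer sizes are illustrative assumptions:

```python
import numpy as np

def quantize(x, bits, shift):
    # Scale by a power of two (a binary shift), round to integer,
    # then clip to the signed range representable in `bits` bits.
    q = np.round(x * (1 << shift)).astype(np.int64)
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return np.clip(q, lo, hi)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.3, size=(4, 8)).astype(np.float32)  # "trained" float32 weights
x = rng.normal(scale=0.3, size=8).astype(np.float32)       # input activations

Wq, xq = quantize(W, 3, 2), quantize(x, 3, 2)
y_int = (Wq @ xq) >> 4   # integer matmul; undo both 2^2 scales with a binary shift
y_ref = W @ x            # float32 reference for comparison
```

At 3 bits the integer path tracks the float32 reference only coarsely; the point is that the inner loop needs nothing beyond integer multiply-accumulate, shifts, and clips.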
With the growing attention and investment in recent AI approaches such as large language models, the narrative that the larger the AI system, the more valuable, powerful, and interesting it is, is increasingly seen as common sense. But what is this assumption based on, and how are we measuring value, power, and performance? And what are the collateral consequences of this race to ever-increasing scale? Here, we scrutinize the current scaling trends and trade-offs across multiple axes and refute two common assumptions underlying the 'bigger-is-better' AI paradigm: 1) that performance improvements are driven by increased scale, and 2) that all interesting problems addressed by AI require large-scale models. Rather, we argue that this approach is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as, despite efficiency improvements, its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate. Finally, it exacerbates a concentration of power, which centralizes decision-making in the hands of a few actors while threatening to disempower others in the context of shaping both AI research and its applications throughout society.
arxiv:2409.14160
Apache Kafka addresses the general problem of delivering extremely high volumes of event data to diverse consumers via a publish-subscribe messaging system. It uses partitions to scale a topic across many brokers for producers to write data in parallel, and also to facilitate parallel reading by consumers. Even though Apache Kafka provides some out-of-the-box optimizations, it does not strictly define how each topic shall be efficiently distributed into partitions. The well-formulated fine-tuning that is needed in order to improve an Apache Kafka cluster's performance is still an open research problem. In this paper, we first model the Apache Kafka topic partitioning process for a given topic. Then, given the set of brokers, constraints, and application requirements on throughput, OS load, replication latency, and unavailability, we formulate the optimization problem of finding how many partitions are needed and show that it is computationally intractable, being an integer program. Furthermore, we propose two simple, yet efficient, heuristics to solve the problem: the first tries to minimize and the second to maximize the number of brokers used in the cluster. Finally, we evaluate their performance via large-scale simulations, considering as benchmarks some Apache Kafka cluster configuration recommendations provided by Microsoft and Confluent. We demonstrate that, unlike the recommendations, the proposed heuristics respect the hard constraints on replication latency and perform better w.r.t. unavailability time and OS load, using the system resources in a more prudent way.
arxiv:2205.09415
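As a rough illustration of the kind of sizing trade-off the paper formalizes, a toy partition-count heuristic might look like this. The throughput model, parameter names, and constraint are assumptions for illustration only; they are not the paper's integer program or its two broker heuristics:

```python
import math

def partitions_needed(target_mb_s, partition_mb_s, replication_factor,
                      brokers, per_broker_limit):
    """Illustrative sizing sketch: take enough partitions to meet the
    consumer throughput target, then check that all replicas fit the
    per-broker partition budget (a stand-in for the paper's hard
    constraints on latency, OS load, and unavailability)."""
    n = math.ceil(target_mb_s / partition_mb_s)
    if n * replication_factor > brokers * per_broker_limit:
        raise ValueError("infeasible: add brokers or raise the per-broker limit")
    return n

# 1000 MB/s target, 50 MB/s per partition, replication factor 3,
# 6 brokers with room for 100 partition replicas each.
print(partitions_needed(1000, 50, 3, 6, 100))  # -> 20
```

Even this toy version shows why the real problem is combinatorial: the partition count interacts with replica placement, and the feasible region shrinks as hard constraints are added.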
Conformal prediction is a powerful tool to generate uncertainty sets with guaranteed coverage using any predictive model, under the assumption that the training and test data are i.i.d. Recently, it has been shown that adversarial examples are able to manipulate conformal methods into constructing prediction sets with invalid coverage rates, as the i.i.d. assumption is violated. To address this issue, a recent work, randomized smoothed conformal prediction (RSCP), was first proposed to certify the robustness of conformal prediction methods to adversarial noise. However, RSCP has two major limitations: (i) its robustness guarantee is flawed when used in practice, and (ii) it tends to produce large uncertainty sets. To address these limitations, we first propose a novel framework called RSCP+ to provide a provable robustness guarantee in evaluation, which fixes the issues in the original RSCP method. Next, we propose two novel methods, post-training transformation (PTT) and robust conformal training (RCT), to effectively reduce prediction set size with little computational overhead. Experimental results on CIFAR10, CIFAR100, and ImageNet suggest the baseline method only yields trivial predictions including the full label set, while our methods could boost the efficiency by up to $4.36\times$, $5.46\times$, and $16.9\times$ respectively and provide a practical robustness guarantee. Our code is available at https://github.com/trustworthy-ml-lab/provably-robust-conformal-prediction.
arxiv:2404.19651
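For context, the vanilla split conformal procedure that RSCP builds on fits in a few lines. This is the generic textbook method, not RSCP+ or the paper's PTT/RCT methods; the score choice (one minus the true-class probability) is one common convention:

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    # Nonconformity score: 1 minus the model's probability of the true class.
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    # The prediction set keeps every label whose score clears the threshold.
    return [np.flatnonzero(1.0 - p <= q) for p in test_probs]

# Tiny example: a confident, well-calibrated 3-class model yields singleton sets.
cal_probs = np.full((100, 3), 0.05)
cal_labels = np.arange(100) % 3
cal_probs[np.arange(100), cal_labels] = 0.9
test_probs = np.array([[0.9, 0.05, 0.05]])
print(conformal_sets(cal_probs, cal_labels, test_probs))  # [array([0])]
```

Under the i.i.d. assumption the resulting sets cover the true label with probability at least $1-\alpha$; the adversarial setting the paper addresses is precisely where this assumption breaks.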
We give a real-analytic section for the Teichmüller projection onto the VMO-Teichmüller space by using the variant of the Beurling-Ahlfors extension by heat kernel introduced by Fefferman, Kenig and Pipher in 1991. Based on this result, we prove that the VMO-Teichmüller space can be endowed with a real Banach manifold structure that is real-analytically equivalent to its complex Banach manifold structure. We also obtain that the VMO-Teichmüller space admits a real-analytic contraction mapping.
arxiv:2112.08962
Navigating unseen environments based on natural language instructions remains difficult for egocentric agents in vision-and-language navigation (VLN). Existing approaches primarily rely on RGB images for environmental representation, underutilizing latent textual semantic and spatial cues and leaving the modality gap between instructions and scarce environmental representations unresolved. Intuitively, humans inherently ground semantic knowledge within spatial layouts during indoor navigation. Inspired by this, we propose a versatile semantic understanding and spatial awareness (SUSA) architecture to encourage agents to ground the environment from diverse perspectives. SUSA includes a textual semantic understanding (TSU) module, which narrows the modality gap between instructions and environments by generating and associating descriptions of environmental landmarks in the agent's immediate surroundings. Additionally, a depth-enhanced spatial perception (DSP) module incrementally constructs a depth exploration map, enabling a more nuanced comprehension of environmental layouts. Experiments demonstrate that SUSA's hybrid semantic-spatial representations effectively enhance navigation performance, setting new state-of-the-art performance across three VLN benchmarks (REVERIE, R2R, and SOON). The source code will be publicly available.
arxiv:2412.06465
Snow is a highly complex medium composed of ice crystals of various shapes and sizes. Knowledge of its intrinsic optical properties, such as the scattering and absorption coefficients, is essential to radiative transfer models in climate research. The absorption coefficient, in particular, allows us to access information about light-absorbing particles contained in the snow. In contrast to snow's apparent properties like the albedo, measuring the intrinsic properties is challenging. Here, we present a simple apparatus that can measure the bulk optical properties of snow using readily available components and a smartphone camera, together with a robust diffuse-optical framework for data analysis. We demonstrate the instrument both on scattering phantoms with known scattering and absorption coefficients and in the field. Its low cost, simplicity, and portability uniquely qualify this setup for large-scale field work, undergraduate education, and citizen science.
arxiv:2204.08432
The dynamics of a standing shock front in a Poynting-flux dominated relativistic flow is investigated by using a one-dimensional, relativistic, two-fluid simulation. An upstream flow containing a circularly polarized, sinusoidal magnetic shear wave is considered, mimicking a wave driven by an obliquely rotating pulsar. It is demonstrated that this wave is converted into large amplitude electromagnetic waves with superluminal phase speeds by interacting with the shock when the shock-frame frequency of the wave exceeds the proper plasma frequency. The superluminal waves propagate in the upstream, modify the shock structure substantially, and form a well-developed precursor region ahead of a subshock. Dissipation of Poynting flux occurs in the precursor as well as in the downstream region through a parametric instability driven by the superluminal waves. The Poynting flux remaining in the downstream region is carried entirely by the superluminal waves. The downstream plasma is therefore an essentially unmagnetized, relativistically hot plasma with a non-relativistic flow speed, as suggested by observations of pulsar wind nebulae.
arxiv:1303.2702
Nominal unification is an extension of first-order unification that takes into account the $\alpha$-equivalence relation generated by binding operators, following the nominal approach. We propose a sound and complete procedure for nominal unification with commutative operators, or nominal C-unification for short, which has been formalised in Coq. The procedure transforms nominal C-unification problems into simpler (finite families of) fixpoint problems, whose solutions can be generated by algebraic techniques on the combinatorics of permutations.
arxiv:1709.05384
As large language models (LLMs) are increasingly deployed in real-world settings, understanding the knowledge they implicitly use when making decisions is critical. One way to capture this knowledge is in the form of Bayesian prior distributions. We develop a prompt-based workflow for eliciting prior distributions from LLMs. Our approach is based on iterated learning, a Markov chain Monte Carlo method in which successive inferences are chained in a way that supports sampling from the prior distribution. We validated our method in settings where iterated learning has previously been used to estimate the priors of human participants: causal learning, proportion estimation, and predicting everyday quantities. We found that priors elicited from GPT-4 qualitatively align with human priors in these settings. We then used the same method to elicit priors from GPT-4 for a variety of speculative events, such as the timing of the development of superhuman AI.
arxiv:2406.01860
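The iterated-learning mechanism itself is easy to demonstrate with a simulated Bayesian agent standing in for the LLM: each generation observes data generated from the previous generation's hypothesis and answers with a posterior sample, and the chain's stationary distribution is the agent's prior. The Beta-Binomial model below is a toy assumption for illustration, not the paper's prompting setup:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, n_trials = 2.0, 5.0, 10  # the agent's Beta(2, 5) prior over a proportion

def agent_infer(successes):
    # The agent's response: a sample from its posterior given the observed data.
    return rng.beta(a + successes, b + n_trials - successes)

# Iterated learning: each generation's hypothesis generates data for the next.
theta, samples = 0.5, []
for step in range(20000):
    data = rng.binomial(n_trials, theta)   # generate data from the current hypothesis
    theta = agent_infer(data)              # the next generation infers from that data
    if step > 1000:                        # discard burn-in
        samples.append(theta)

print(np.mean(samples))  # close to the prior mean 2/7 ~ 0.286
```

The alternation between sampling data given theta and theta given data is exactly a Gibbs sampler on the joint distribution, which is why the marginal over hypotheses converges to the prior regardless of the starting point.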
Model-based systems engineering (MBSE) has been widely utilized to formalize system artifacts and facilitate their development throughout the entire lifecycle. During complex system development, MBSE models need to be frequently exchanged across stakeholders. Concerns about data security and tampering under traditional data exchange approaches obstruct the construction of a reliable marketplace for digital assets. The emerging distributed ledger technology (DLT), represented by blockchain, provides a novel solution for this purpose owing to its unique advantages such as tamper resistance and decentralization. In this paper, we integrate MBSE approaches with DLT, aiming to create a decentralized marketplace that facilitates the exchange of digital engineering assets (DEAs). We first define DEAs from the perspectives of digital engineering objects, development processes, and system architectures. Based on this definition, the graph-object-property-point-role-relationship (GOPPRR) approach is used to formalize the DEAs. Then we propose a framework for a decentralized DEA marketplace and specify its requirements, based on which we select a directed acyclic graph (DAG) structured DLT solution. As a proof of concept, a prototype of the proposed DEA marketplace is developed and a case study is conducted to verify its feasibility. The experimental results demonstrate that the proposed marketplace facilitates free DEA exchange with a high level of security, efficiency, and decentralization.
arxiv:2005.05415
Variational quantum algorithms constitute one of the most widespread methods for using current noisy quantum computers. However, it is unknown whether these heuristic algorithms provide any quantum-computational speedup, although we cannot simulate them classically at intermediate sizes. Since entanglement lies at the core of quantum computing power, we investigate its role in these heuristic methods for solving optimization problems. In particular, we use matrix product states to simulate the quantum approximate optimization algorithm with reduced bond dimension $d$, a parameter bounding the system's entanglement. Moreover, we restrict the simulation further by deterministically sampling solutions. We conclude that entanglement plays a minor role in the MaxCut and Exact Cover 3 problems studied here, since the analysis of the simulated algorithm, with up to $60$ qubits and $p = 100$ algorithm layers, shows that it provides solutions for bond dimension $d \approx 10$ and depth $p \approx 30$. Additionally, we study the classical optimization loop in the approximated algorithm simulation with $12$ qubits and depth up to $p = 4$, and show that the approximated optimal parameters with low entanglement approach the exact ones.
arxiv:2207.03404
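the bond - dimension restriction described in the abstract above can be illustrated with a minimal sketch : after a two - site update, the two - site tensor is split by an svd and only the $ d $ largest singular values are kept, which bounds the entanglement the simulation can represent. the tensor shapes and names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def truncate_bond(theta, d):
    """Split a two-site tensor theta (shape: left, phys1, phys2, right)
    into two MPS tensors, keeping at most d singular values.
    Returns the two tensors and the discarded singular-value weight."""
    left, p1, p2, right = theta.shape
    m = theta.reshape(left * p1, p2 * right)
    u, s, vh = np.linalg.svd(m, full_matrices=False)
    keep = min(d, len(s))
    err = np.sqrt(np.sum(s[keep:] ** 2))  # truncation error
    u, s, vh = u[:, :keep], s[:keep], vh[:keep, :]
    a = u.reshape(left, p1, keep)
    b = (np.diag(s) @ vh).reshape(keep, p2, right)
    return a, b, err

# toy example: a random two-qubit tensor truncated to bond dimension 2
rng = np.random.default_rng(0)
theta = rng.normal(size=(1, 2, 2, 1))
a, b, err = truncate_bond(theta, 2)
```

with $ d = 2 $ no information is lost for a two - qubit tensor ( its schmidt rank is at most 2 ), while calling the same routine with $ d = 1 $ deterministically discards the smaller singular value, which is the kind of approximation the paper controls with its bond dimension.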
we perform the first model independent analysis of experimental data using deep neural networks to determine the nature of an exotic hadron. specifically, we study the line shape of the $ p _ c ( 4312 ) $ signal reported by the lhcb collaboration and we find that its most likely interpretation is that of a virtual state. this method can be applied to other near - threshold resonance candidates.
arxiv:2110.13742
an inhomogeneity in a conductive matrix deforms the flow pattern of an applied electric current. a usual current cloak can be defined as a permanent modification of the matrix properties around the inhomogeneity guaranteeing that the current flow pattern is similar before and after passing the modified zone, so it implies the " electrical invisibility " of the inhomogeneous region. here we introduce the concept of a current cloak that can be tuned - - switched on and off, for example - - by means of an external field. we demonstrate analytically and using finite element simulations that a current cloak can be constructed and manipulated by an external magnetic field for a concrete system consisting of a magneto - resistive matrix with a stainless steel inclusion.
arxiv:2011.07629
advancements in light modulator technology have been driving discoveries and progress across various fields. the problem of large - scale coherent optical control of atomic quantum systems - including cold atoms, ions, and solid - state color centers - presents some of the most stringent requirements. this motivates a new generation of high - speed large - scale modulator technology with the following requirements : ( r1 ) operation at a design wavelength of choice in the visible ( vis ) to near - infrared ( nir ) spectrum, ( r2 ) a scalable technology with a high channel density ( > 100mm - 2 ), ( r3 ) a high modulation speed ( > 100mhz ), and ( r4 ) a high extinction ratio ( > 20 db ). to fulfill these requirements, we introduce a modulator technology based on piezoelectrically actuated silicon nitride resonant waveguide gratings fabricated on 200mm diameter silicon wafers with cmos compatible processes. we present a proof - of - concept device with 4 x 4 individually addressable 50 $ \ mu $ m x 50 $ \ mu $ m pixels or channels, each containing a resonant waveguide grating with a ~ 780 nm design wavelength, supporting > 100mhz modulation speeds, and a spectral response with > 20 db extinction.
arxiv:2410.19058
we derive presentations of the interval groups related to all quasi - coxeter elements in the coxeter group of type $ d _ n $. type $ d _ n $ is the only infinite family of finite coxeter groups that admits proper quasi - coxeter elements. the presentations we obtain are over a set of generators in bijection with what we call a carter generating set, and the relations are those defined by the related carter diagram together with a twisted or a cycle commutator relator, depending on whether the quasi - coxeter element is a coxeter element or not. the proof is based on the description of two combinatorial techniques related to the intervals of quasi - coxeter elements. in a subsequent work [ 4 ], we complete our analysis to cover all the exceptional cases of finite coxeter groups, and establish that almost all the interval groups related to proper quasi - coxeter elements are not isomorphic to the related artin groups, hence establishing a new family of interval groups with nice presentations. alongside the proof of the main results, we establish important properties related to the dual approach to coxeter and artin groups.
arxiv:2103.06570
there exist various defect - brane backgrounds in supergravity theories which arise as the low energy limit of string theories. these backgrounds typically have non - trivial monodromies, and if we move a charged probe around the center of a defect, its charge will be changed by the action of the monodromy. during the process, the charge conservation law seems to be violated. in this paper, to resolve this puzzle, we examine a dynamics of the charge changing process and show that the missing charge of the probe is transferred to the background. we then explicitly construct the resultant background after the charge transfer process by utilizing dualities. this background has the same monodromy as the original defect brane, but has an additional charge which does not have any localized source. in the literature, such a charge without localized source is known to appear in the presence of alice strings. we argue that defect branes can in fact be regarded as a realization of alice strings in string theory and examine the charge transfer process from that perspective.
arxiv:1411.1043
i make some basic observations about hard takeoff, value alignment, and coherent extrapolated volition, concepts which have been central in analyses of superintelligent ai systems.
arxiv:1704.00783
a hyperplane arrangement is said to satisfy the ` ` riemann hypothesis ' ' if all roots of its characteristic polynomial have the same real part. this property was conjectured by postnikov and stanley for certain families of arrangements which are defined for any irreducible root system and was proved for the root system $ a _ { n - 1 } $. the proof is based on an explicit formula for the characteristic polynomial, which is of independent combinatorial significance. here our previous derivation of this formula is simplified and extended to similar formulae for all but the exceptional root systems. the conjecture follows in these cases.
arxiv:math/9705223
we have recently introduced a new model for the distribution of dark matter ( dm ) in galaxies based on a self - gravitating system of massive fermions at finite temperatures, the ruffini - arg \ " uelles - rueda ( rar ) model. we show that this model, for fermion masses in the kev range, explains the dm halo of the galaxy and predicts the existence of a denser quantum core at the center. we demonstrate here that the introduction of a cutoff in the fermion phase - space distribution, necessary to account for the finite galaxy size, defines a new solution with a central core which represents an alternative to the black hole ( bh ) scenario for sgra *. for a fermion mass in the range $ mc ^ 2 = 48 $ - - $ 345 $ ~ kev, the dm halo distribution is in agreement with the milky way rotation curve data, while harboring a dense quantum core of about $ 4 \ times10 ^ 6 m _ \ odot $ within the s2 - star pericenter.
arxiv:1606.07040
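the energy cutoff mentioned in the abstract above is commonly implemented as a fermi - dirac analogue of the king distribution ; one standard form ( stated here as background, not quoted from this paper ) is

```latex
f(\epsilon) \;=\;
\begin{cases}
\dfrac{1 - e^{(\epsilon - \epsilon_c)/kT}}{e^{(\epsilon - \mu)/kT} + 1}, & \epsilon \le \epsilon_c,\\[2mm]
0, & \epsilon > \epsilon_c,
\end{cases}
```

where $ \epsilon_c $ is the cutoff energy, $ \mu $ the chemical potential and $ t $ the temperature ; the ordinary fermi - dirac distribution is recovered in the limit $ \epsilon_c \to \infty $.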
existing research has made impressive strides in reconstructing human facial shapes and textures from images with well - illuminated faces and minimal external occlusions. nevertheless, it remains challenging to recover accurate facial textures from scenarios with complicated illumination affected by external occlusions, e. g. a face that is partially obscured by items such as a hat. existing works based on the assumption of single and uniform illumination cannot correctly process these data. in this work, we introduce a novel approach to model 3d facial textures under such unnatural illumination. instead of assuming single illumination, our framework learns to imitate the unnatural illumination as a composition of multiple separate light conditions combined with learned neural representations, named light decoupling. according to experiments on both single images and video sequences, we demonstrate the effectiveness of our approach in modeling facial textures under challenging illumination affected by occlusions. please check https : / / tianxinhuang. github. io / projects / deface for our videos and codes.
arxiv:2412.08524
we study dependence of jet quenching on matter density, using " tomography " of the fireball provided by rhic data on azimuthal anisotropy $ v _ 2 $ of high $ p _ t $ hadron yield at different centralities. slicing the fireball into shells with constant ( entropy ) density, we derive a " layer - wise geometrical limit " $ v _ 2 ^ { max } $ which is indeed above the data $ v _ 2 < v _ 2 ^ { max } $. interestingly, the limit is reached only if quenching is dominated by shells with the entropy density exactly in the near - $ t _ c $ region. we show two models that simultaneously describe the high $ p _ t $ $ v _ 2 $ and $ r _ { aa } $ data and conclude that such a description can be achieved only if the jet quenching is few times stronger in the near - $ t _ c $ region relative to qgp at $ t > t _ c $. one possible reason for that may be recent indications that the near - $ t _ c $ region is a magnetic plasma of relatively light color - magnetic monopoles.
arxiv:0810.4116
dark photons ( dp ) are interesting as potential mediators between the dark matter ( dm ) sector and the fields of the standard model ( sm ). the interaction of the dp, described by a broken $ u ( 1 ) _ d $ gauge symmetry, with the sm is usually generated at the one - loop level via kinetic mixing through the existence of portal matter ( pm ), here assumed to be fermionic, which carries both a dark charge as well as a sm $ u ( 1 ) _ y $ hypercharge. for theoretical consistency, as well as for many phenomenological reasons, this pm must be vector - like with respect to the sm and dark gauge groups and, in particular, is shown to be allowed only to transform as vector - like copies of the usual sm fields. the dark higgs that is responsible for the breaking of $ u ( 1 ) _ d $ can then generate a mixing between the pm and sm fields with the same electric charge thus altering the dp interactions with ( at least some of ) the sm fields and also providing a path for the pm fields to decay. in this paper we briefly explore the phenomenology of some specific simple models of this pm including, for the case where the pm is leptonic in nature, their potential impact on experiments probing low energy parity - violation and the g - 2 of the muon. in the case of color - triplet, bottom quark - like pm, their direct pair - and single - production at the lhc is shown to be observable in final states that include missing $ e _ t $ and / or very highly boosted lepton - jets together with pairs of high $ p _ t $ b - jets that can be used to trigger on such events. these signatures are quite distinct from those usually employed in the search for vector - like quarks at the lhc and, furthermore, we demonstrate that the conventional signal channels for vector - like quarks involving the sm higgs and gauge fields are essentially closed in the case of pm.
arxiv:1810.07531
in today ' s scenario, imagining a world without negativity is very unrealistic, as bad news spreads more virally than good news. though it seems impractical in real life, such filtering can be implemented by building a system that uses machine learning and natural language processing techniques to identify news items with a negative shade and pass only the news with a positive shade ( good news ) to the end user. in this work, around two lakh news items have been trained and tested using a combination of rule - based and data - driven approaches. vader along with a filtration method has been used as an annotating tool, followed by a statistical machine learning approach that uses a document - term matrix ( representation ) and a support vector machine ( classification ). deep learning algorithms were then introduced to make the system more reliable ( doc2vec ), ending with a convolutional neural network ( cnn ) that yielded better results than the other experimented modules. it showed a training accuracy of 96 %, while a test accuracy above 85 % was obtained on internal and external news data.
arxiv:1804.03673
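the representation step described above ( a document - term matrix ) can be sketched with the standard library alone ; the mini - corpus and the nearest - neighbour stand - in for the svm classifier are hypothetical illustrations, not the paper ' s data or model.

```python
import math

def build_dtm(docs):
    """Build a document-term matrix (list of count rows) and a shared vocabulary."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    rows = []
    for d in docs:
        row = [0] * len(vocab)
        for w in d.lower().split():
            row[index[w]] += 1
        rows.append(row)
    return rows, vocab

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# hypothetical labelled mini-corpus; nearest labelled document stands in
# for the SVM classifier used in the paper
train = [("rescue effort saves flood victims", "good"),
         ("team wins charity award", "good"),
         ("violent crime rate rises", "bad"),
         ("fraud scandal hits bank", "bad")]
test_doc = "charity team saves victims"
rows, vocab = build_dtm([d for d, _ in train] + [test_doc])
sims = [(cosine(rows[-1], rows[i]), lab) for i, (_, lab) in enumerate(train)]
predicted = max(sims)[1]
```

the same document - term rows could be fed to any vector - space classifier ; the point of the sketch is only the representation, where each row counts the vocabulary terms present in one document.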
we prove a dynamical shafarevich theorem on the finiteness of the set of isomorphism classes of rational maps with fixed degeneracies. more precisely, fix an integer d at least 2 and let k be either a number field or the function field of a curve x over a field k, where k is of characteristic zero or p > 2d - 2 that is either algebraically closed or finite. let s be a finite set of places of k. we prove the finiteness of the set of isomorphism classes of rational maps over k with a natural kind of good reduction outside of s. we also prove auxiliary results on finiteness of reduced effective divisors in $ \ mathbb { p } ^ 1 _ k $ with good reduction outside of s and on the existence of global models for rational maps.
arxiv:1705.05489
we prove the generalized margulis lemma with a uniform index bound on an alexandrov $ n $ - space $ x $ with curvature bounded below, i. e., small loops at $ p \ in x $ generate a subgroup of the fundamental group of the unit ball $ b _ 1 ( p ) $ that contains a nilpotent subgroup of index $ \ le w ( n ) $, where $ w ( n ) $ is a constant depending only on the dimension $ n $. the proof is based on the main ideas of v. ~ kapovitch, a. ~ petrunin, and w. ~ tuschmann, and the following results : ( 1 ) we prove that any regular almost lipschitz submersion constructed by yamaguchi on a collapsed alexandrov space with curvature bounded below is a hurewicz fibration. we also prove that such a fibration is uniquely determined up to a homotopy equivalence. ( 2 ) we give a detailed proof of the gradient push, improving the universal pushing time bound given by v. ~ kapovitch, a. ~ petrunin, and w. ~ tuschmann, and justifying in a specific way that the gradient push between regular points can always keep away from extremal subsets.
arxiv:1902.10973
the generalized hierarchies of compound wki - sp ( wadati - konno - ichikawa and short pulse ) equations are presented. the proposed integrable nonlinear equations include the wki - type equations, the sp - type equations and the compound generalized wki - sp equations. a chain of hodograph transformations is established to relate the compound wki - sp equations with the mkdv - sg ( modified korteweg - de vries and sine - gordon ) equations. as applications, the multiloop soliton solutions of one compound wki - sp equation are obtained. we emphasize showing the abundant solitonic behaviors of two - loop solitons. the role each parameter plays in the movement of the two - loop soliton is shown in detail in a table.
arxiv:2305.02532
extracting work from a physical system is one of the cornerstones of quantum thermodynamics. the extractable work, as quantified by ergotropy, necessitates a complete description of the quantum system. this is significantly more challenging when the state of the underlying system is unknown, as quantum tomography is extremely inefficient. in this article, we analyze the number of samples of the unknown state required to extract work. with only a single copy of an unknown state, we prove that extracting any work is nearly impossible. in contrast, when multiple copies are available, we quantify the sample complexity required to estimate extractable work, establishing a scaling relationship that balances the desired accuracy with success probability. our work develops a sample - efficient protocol to assess the utility of unknown states as quantum batteries and opens avenues for estimating thermodynamic quantities using near - term quantum computers.
arxiv:2412.02673
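for reference, ergotropy ( the quantity the abstract uses to measure extractable work ) is commonly defined as the gap between the mean energy of a state and that of its passive state, optimized over unitaries :

```latex
\mathcal{W}(\rho) \;=\; \operatorname{tr}(\rho H) \;-\; \min_{U}\,\operatorname{tr}\!\left(U \rho U^{\dagger} H\right)
\;=\; \sum_{j,k} r_{j}\,\varepsilon_{k}\left( \left|\langle \varepsilon_{k} | r_{j} \rangle\right|^{2} - \delta_{jk} \right),
```

where $ \rho = \sum_j r_j | r_j \rangle \langle r_j | $ with $ r_1 \ge r_2 \ge \dots $ and $ h = \sum_k \varepsilon_k | \varepsilon_k \rangle \langle \varepsilon_k | $ with $ \varepsilon_1 \le \varepsilon_2 \le \dots $ ; the minimum is attained by the passive state, in which the largest populations occupy the lowest energy levels. estimating this quantity requires knowledge of $ \rho $, which is why the sample complexity of learning the unknown state enters the paper ' s analysis.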
measurements of galaxy clustering are now becoming possible over a range of redshifts out to z = 3. we use a semi - analytic model of galaxy formation to compute the expected evolution of the galaxy correlation function with redshift. we illustrate how the degree of clustering evolution is sensitive to the details of the sample selection. for a fixed apparent magnitude limit, galaxies selected at higher redshifts are located in progressively rarer dark matter haloes, compared with the general population of galaxies in place at each redshift. as a result these galaxies are highly biased tracers of the underlying dark matter distribution and exhibit stronger clustering than the dark matter. in general, the correlation length, measured in comoving units, decreases at first with increasing redshift, before increasing again at higher redshift. we show that the epsilon - model often used to interpret the angular correlation function of faint galaxies gives an inadequate description of the evolution of clustering, and offers no physical insight into the clustering process. we compare our predictions with those of a simple, popular model in which a one - to - one correspondence between galaxies and dark halos is assumed. qualitatively, this model reproduces the correct evolutionary behaviour at high redshift, but the quantitative results can be significantly in error. our theoretical expectations are in good agreement with the high redshift clustering data of carlberg et al. and postman et al. but are higher than the measurements of le fevre et al.
arxiv:astro-ph/9811222
( tmttf ) 2asf6 undergoes two phase transitions upon cooling from 300 k. at tco = 103 k a charge - ordering ( co ) occurs, and at tsp ( b = 9 t ) = 11 k the material undergoes a spin - peierls ( sp ) transition. within the intermediate, co phase, the charge disproportionation ratio is found to be at least 3 : 1 from carbon - 13 nmr 1 / t1 measurements on spin - labeled samples. above tsp, up to about 3tsp, 1 / t1 is independent of temperature, indicative of low - dimensional magnetic correlations. with the application of about 0. 15 gpa pressure, tsp increases substantially, while tco is rapidly suppressed, demonstrating that the two orders are competing. the experiments are compared to results obtained from calculations on the 1d extended peierls - hubbard model.
arxiv:cond-mat/0205026
we model raman processes in silicene and germanene involving scattering of quasiparticles by, either, two phonons, or, one phonon and one point defect. we compute the resonance raman intensities and lifetimes for laser excitations between 1 and 3 $ \, $ ev using a newly developed third - nearest neighbour tight - binding model parametrized from first principles density functional theory. we identify features in the raman spectra that are unique to the studied materials or the defects therein. we find that in silicene, a new raman resonance arises from the $ 2. 77 \, \ rm $ ev $ \ pi - \ sigma $ plasmon at the m point, measurably higher than the raman resonance originating from the $ 2. 12 \, \ rm $ ev $ \ pi $ plasmon energy. we show that in germanene, the lifetimes of charge carriers, and thereby the linewidths of the raman peaks, are influenced by spin - orbit splittings within the electronic structure. we use our model to predict scattering cross sections for defect induced raman scattering involving adatoms, substitutional impurities, stone - wales pairs, and vacancies, and argue that the presence of each of these defects in silicene and germanene can be qualitatively matched to specific features in the raman response.
arxiv:1808.01354
error - correcting codes for quantum computing are crucial to address the fundamental problem of communication in the presence of noise and imperfections. audoux used khovanov homology to define families of quantum error - correcting codes with desirable properties. we explore khovanov homology and some of its many extensions, namely reduced, annular, and $ \ mathfrak { sl } _ 3 $ homology, to generate new families of quantum codes and to establish several properties about codes that arise in this way, such as behavior of distance under reidemeister moves or connected sums.
arxiv:2410.11252
we show that certain free energy functionals that are not convex with respect to the usual convex structure on their domain of definition, are strictly convex in the sense of displacement convexity under a natural change of variables. we use this to show that in certain cases, the only critical points of these functionals are minimizers. this approach based on displacement convexity permits us to treat multicomponent systems as well as single component systems. the developments produce new examples of displacement convex functionals, and, in the multi - component setting, jointly displacement convex functionals.
arxiv:0706.0133
shared mobility - on - demand services are expanding rapidly in cities around the world. as a prominent example, app - based ridesourcing is becoming an integral part of many urban transportation ecosystems. despite this centrality, limited public availability of detailed temporal and spatial data on ridesourcing trips has limited research on how new services interact with traditional mobility options and how they impact travel in cities. improving data - sharing agreements are opening unprecedented opportunities for research in this area. this study examines emerging patterns of mobility using recently released city of chicago public ridesourcing data. the detailed spatio - temporal ridesourcing data are matched with weather, transit, and taxi data to gain a deeper understanding of ridesourcing ' s role in chicago ' s mobility system. the goal is to investigate the systematic variations in the patronage of ride - hailing. k - prototypes is utilized to detect user segments owing to its ability to accept mixed variable data types. an extension of the k - means algorithm, its output is a classification of the data into several clusters called prototypes. six ridesourcing prototypes are identified and discussed based on significant differences in relation to adverse weather conditions, competition with alternative modes, location and timing of use, and tendency for ridesplitting. the paper discusses implications of the identified clusters related to affordability, equity and competition with transit.
arxiv:2006.13924
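the k - prototypes assignment step mentioned above can be sketched as follows : it mixes squared euclidean distance on numeric features with a weighted count of categorical mismatches. the trip features, prototype values and the weight gamma below are hypothetical, chosen only to illustrate the dissimilarity measure.

```python
def kproto_distance(x_num, x_cat, c_num, c_cat, gamma):
    """k-prototypes dissimilarity: squared euclidean distance on numeric
    features plus gamma times the number of categorical mismatches."""
    num = sum((a - b) ** 2 for a, b in zip(x_num, c_num))
    cat = sum(1 for a, b in zip(x_cat, c_cat) if a != b)
    return num + gamma * cat

def assign(points, prototypes, gamma):
    """Assign each (numeric, categorical) point to its nearest prototype."""
    labels = []
    for x_num, x_cat in points:
        d = [kproto_distance(x_num, x_cat, c_num, c_cat, gamma)
             for c_num, c_cat in prototypes]
        labels.append(d.index(min(d)))
    return labels

# hypothetical trips: (scaled fare, scaled distance), (weather, sharing mode)
trips = [((0.1, 0.2), ("rain", "solo")),
         ((0.9, 0.8), ("clear", "shared")),
         ((0.2, 0.1), ("rain", "solo"))]
prototypes = [((0.15, 0.15), ("rain", "solo")),
              ((0.9, 0.8), ("clear", "shared"))]
labels = assign(trips, prototypes, gamma=0.5)
```

a full k - prototypes run would alternate this assignment step with an update step ( means for numeric features, modes for categorical ones ) until the labels stop changing.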
the rho - rho - n and rho - rho - delta three - body systems have been studied within the framework of the fixed center approximation of faddeev equation. the rho - rho interaction in isospin i = 0, spin s = 2 is strongly attractive, and so are the n - rho, delta - rho interactions. this leads to bound states of both rho - rho - n and rho - rho - delta. we find peaks of the modulus squared of the scattering matrix around 2227 mev for rho - rho - n, and 2372 mev for rho - rho - delta. yet, the strength of the peak for the rho - rho - n amplitude is much smaller than for rho - rho - delta, weakening the case for a rho - rho - n bound state, or a dominant rho - rho - n component. a discussion is made on how these states can be searched for in present programs looking for multimeson final states in different reactions.
arxiv:1107.0209
bulk metallic glass forms when liquid metal alloys solidify without crystallization. in the search for iron - based bulk glass - forming alloys of the metal - metalloid type ( fe - b - and fe - c - based ), crystals based on the structural prototype c6cr23 often preempt the amorphous phase. destabilizing this competing crystal structure could enhance glass - formability. we carry out first - principles total energy calculations of the enthalpy of formation to identify third elements that can effectively destabilize c6cr23. yttrium appears optimal among the transition metals, and rare earths are also suitable. atomic size is the dominant factor.
arxiv:cond-mat/0407633
this paper studies the bearing - based simultaneous localization and affine formation tracking ( slaft ) control problem for fixed - wing unmanned aerial vehicles ( uavs ). in the considered problem, only a small set of uavs, named leaders, can obtain their global positions, and the other uavs only have access to bearing information relative to their neighbors. to address the problem, we propose novel schemes by integrating the distributed bearing - based self - localization algorithm and the observer - based affine formation tracking controller. the designed localization algorithm estimates the global position by using inter - uav bearing measurements, and the observer - based controller tracks the desired formation with the estimated positions. a key distinction of our approach is extending the slaft control scheme to the bearing - based coordination of nonholonomic uav systems, where the desired inter - uav bearings can be time - varying, instead of constant ones assumed in most of the existing results. two control schemes with different convergence rates are designed to meet desired task requirements under different conditions. the stability analysis of the two schemes for slaft control is proved, and numerous simulations are carried out to validate the theoretical analysis.
arxiv:2306.10749
we present exact solutions of the incompressible navier - stokes equations in a background linear shear flow. the method of construction is based on kelvin ' s investigations into linearized disturbances in an unbounded couette flow. we obtain explicit formulae for all three components of a kelvin mode in terms of elementary functions. we then prove that kelvin modes with parallel ( though time - dependent ) wave vectors can be superposed to construct the most general plane transverse shearing wave. an explicit solution is given, with any specified initial orientation, profile and polarization structure, with either unbounded or shear - periodic boundary conditions.
arxiv:1101.5507
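for context, a kelvin ( shearing - wave ) mode in an unbounded couette flow $ \mathbf{u} = ( s y, 0, 0 ) $ takes the textbook form below ; this is the classical construction the abstract builds on, not a formula quoted from the paper :

```latex
\mathbf{u}'(\mathbf{x}, t) \;=\; \hat{\mathbf{u}}(t)\, e^{i \mathbf{k}(t)\cdot\mathbf{x}}, \qquad
k_x = \text{const}, \quad k_y(t) = k_y(0) - s\, k_x\, t, \quad k_z = \text{const},
```

so the wave vector is tilted over by the background shear, and incompressibility requires $ \mathbf{k}(t) \cdot \hat{\mathbf{u}}(t) = 0 $ at all times. superposing such modes with parallel, time - dependent wave vectors is what yields the general plane shearing waves described in the abstract.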
we briefly review the main aspects of leptogenesis, describing both the unflavoured and the flavoured versions of the $ n _ 2 $ - dominated scenario. a study of the success rates of both classes of models has been carried out. we comment on these results and discuss corrective effects to this simplest scenario. focusing on the flavoured case, we consider the conditions required by strong thermal leptogenesis, where the final asymmetry is fully independent of the initial conditions. barring strong cancellations in the seesaw formula and in the flavoured decay parameters, we show that strong thermal leptogenesis favours a lightest neutrino mass $ m _ 1 \ gtrsim10 \, \ mbox { mev } $ for normal ordering ( no ) and $ m _ 1 \ gtrsim 3 \, \ mbox { mev } $ for inverted ordering ( io ). finally, we briefly comment on the power of absolute neutrino mass scale experiments to either support or severely corner strong thermal leptogenesis.
arxiv:1405.2318
upcoming gravitational wave ( gw ) detectors might detect a stochastic background of gws potentially arising from many possible sources, including bubble collisions from a strongly first - order electroweak phase transition. we investigate whether it is possible to connect, via a semi - analytical approximation to the tunneling rate of scalar fields with quartic potentials, the gw signal through detonations with the parameters entering the potential that drives the electroweak phase transition. to this end, we consider a finite temperature effective potential similar in form to the higgs potential in the standard model ( sm ). in the context of a semi - analytic approximation to the three dimensional euclidean action, we derive a general approximate form for the tunneling temperature and the relevant gw parameters. we explore the gw signal across the parameter space describing the potential which drives the phase transition. we comment on the potential detectability of a gw signal with future experiments, and physical relevance of the associated potential parameters in the context of theories which have effective potentials similar in form to that of the sm. in particular we consider singlet, triplet, higher dimensional operators, and top - flavor extensions to the higgs sector of the sm. we find that the addition of a temperature independent cubic term in the potential, arising from a gauge singlet for instance, can greatly enhance the gw power. the other parameters have milder, but potentially noticeable, effects.
arxiv:0911.0687
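the kind of quartic finite - temperature effective potential referred to above is commonly written in the generic form

```latex
V(\phi, T) \;=\; D\left(T^{2} - T_{0}^{2}\right)\phi^{2} \;-\; E\, T\, \phi^{3} \;+\; \frac{\lambda_{T}}{4}\,\phi^{4},
```

where $ d $, $ e $, $ \lambda_t $ and $ t_0 $ are model - dependent parameters. the thermally induced cubic term $ - e t \phi^3 $ is what creates the barrier making the transition first order, and the temperature - independent cubic contribution mentioned in the abstract ( e. g. from a gauge singlet ) would add a further $ \phi^3 $ term with a $ t $ - independent coefficient, deepening the barrier and enhancing the gravitational wave signal.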
electric bikes ( e - bikes ), including lightweight e - bikes with pedals and e - bikes in scooter form, are gaining popularity around the world because of their convenience and affordability. at the same time, e - bike - related accidents are also on the rise and many policymakers and practitioners are debating the feasibility of building e - bike lanes in their communities. by collecting e - bike and bike data in shanghai, the study first recalibrates the capacity of the conventional bike lane based on the traffic movement characteristics of the mixed bike flow. then, the study evaluates the traffic safety performance of the mixed bike flow in the conventional bike lane using the observed passing events. finally, this study proposes a comprehensive model for evaluating the feasibility of building an e - bike lane by integrating the analytic hierarchy process and fuzzy mathematics, considering three objectives : capacity, safety, and budget constraints. the proposed model, one of the first of its kind, can be used to ( i ) evaluate the existing road capacity and safety performance of a mixed bike flow with e - bikes and human - powered bikes by analyzing the mixed bike flow arrival rate and passing maneuvers, and ( ii ) quantify the changes to the road capacity and safety performance if a new e - bike lane is constructed. numerical experiments are performed to calibrate the proposed model and evaluate its performance using non - motorized vehicles ' trajectories in shanghai, china. the numerical experiment results suggest that the proposed model can be used by policymakers and practitioners to evaluate the feasibility of building e - bike lanes.
arxiv:2307.13628
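the analytic hierarchy process component of the proposed model can be illustrated with the common geometric - mean approximation for deriving priority weights from a pairwise comparison matrix ; the criteria and comparison values below are hypothetical, not taken from the paper.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise comparison matrix
    using the geometric-mean (logarithmic least squares) method."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(gm)
    return [g / total for g in gm]  # normalize to sum to 1

# hypothetical 3x3 comparison of the three objectives: capacity, safety, budget
# (entry [i][j] says how strongly criterion i dominates criterion j)
pairwise = [[1.0, 2.0, 4.0],
            [0.5, 1.0, 2.0],
            [0.25, 0.5, 1.0]]
weights = ahp_weights(pairwise)
```

for a perfectly consistent matrix like this one the geometric - mean method reproduces the exact eigenvector weights ; in practice a consistency ratio check would accompany this step before the weights are combined with the fuzzy evaluation.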
by applying a coupled - bloch - mode approach, we have derived a simple expression for the transmission properties of photonic crystal ( phc ) line - defect waveguides with a complex refractive index perturbation. we have provided physical insights on the coupling mechanism by analyzing the frequency dependence and relative strength of the coupling coefficients. we have shown the impact of the perturbation on the waveguide dispersion relation and how the gain - induced distributed feedback limits the maximum attainable slow - light enhancement of the gain itself. we have then applied our approach to analyze the threshold behaviour of various phc laser cavities and proved the significant impact of coherent distributed feedback effects in these lasers. importantly, our approach also reveals that a structure simply consisting of an active region with zero back reflections from the passive output waveguides can achieve lasing oscillation with reasonable threshold gain.
arxiv:1906.04058
transfer learning is a widely used strategy in medical image analysis. instead of only training a network with a limited amount of data from the target task of interest, we can first train the network with other, potentially larger source datasets, creating a more robust model. the source datasets do not have to be related to the target task. for a classification task in lung ct images, we could use both head ct images, or images of cats, as the source. while head ct images appear more similar to lung ct images, the number and diversity of cat images might lead to a better model overall. in this survey we review a number of papers that have performed similar comparisons. although the answer to which strategy is best seems to be " it depends ", we discuss a number of research directions we need to take as a community, to gain more understanding of this topic.
arxiv:1810.05444
the development of methods and algorithms to solve the $ n $ - body problem for classical, collisionless, non - relativistic particles has made it possible to follow the growth and evolution of cosmic dark matter structures over most of the universe ' s history. in the best studied case $ - $ the cold dark matter or cdm model $ - $ the dark matter is assumed to consist of elementary particles that had negligible thermal velocities at early times. progress over the past three decades has led to a nearly complete description of the assembly, structure and spatial distribution of dark matter haloes, and their substructure in this model, over almost the entire mass range of astronomical objects. on scales of galaxies and above, predictions from this standard cdm model have been shown to provide a remarkably good match to a wide variety of astronomical data over a large range of epochs, from the temperature structure of the cosmic background radiation to the large - scale distribution of galaxies. the frontier in this field has shifted to the relatively unexplored subgalactic scales, the domain of the central regions of massive haloes, and that of low - mass haloes and subhaloes, where potentially fundamental questions remain. answering them may require : ( i ) the effect of known but uncertain baryonic processes ( involving gas and stars ), and / or ( ii ) alternative models with new dark matter physics. here we present a review of the field, focusing on our current understanding of dark matter structure from $ n $ - body simulations and on the challenges ahead.
arxiv:1907.11775
in this paper, we test the hypothesis that interesting events in unstructured videos are inherently audiovisual. we combine deep image representations for object recognition and scene understanding with representations from an audiovisual affect recognition model. to this set, we include content agnostic audio - visual synchrony representations and mel - frequency cepstral coefficients to capture other intrinsic properties of audio. these features are used in a modular supervised model. we present results from two experiments : efficacy study of single features on the task, and an ablation study where we leave one feature out at a time. for the video summarization task, our results indicate that the visual features carry most information, and including audiovisual features improves over visual - only information. to better study the task of highlight detection, we run a pilot experiment with highlights annotations for a small subset of video clips and fine - tune our best model on it. results indicate that we can transfer knowledge from the video summarization task to a model trained specifically for the task of highlight detection.
arxiv:2102.05811
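the leave - one - out ablation described in the abstract above can be sketched generically. note that the feature names, their contribution values, and the additive toy score below are hypothetical stand - ins for illustration, not the authors ' actual model or features.

```python
# Generic leave-one-out feature-ablation harness. The feature names and
# the additive toy score are illustrative assumptions, not the paper's
# actual features or classifier.

def ablation_study(features, score_fn):
    """Score the full feature set, then re-score with each feature left
    out; larger drops mark more informative features."""
    full = score_fn(features)
    return {f: full - score_fn([g for g in features if g != f])
            for f in features}

# Hypothetical per-feature contributions to the summarization score.
CONTRIB = {"visual": 0.50, "affect": 0.10, "sync": 0.05, "mfcc": 0.05}

def toy_score(feats):
    return sum(CONTRIB[f] for f in feats)

drops = ablation_study(list(CONTRIB), toy_score)
best = max(drops, key=drops.get)  # largest drop -> most informative feature
```

with these assumed contributions, removing the visual feature causes the largest score drop, mirroring the abstract ' s finding that visual features carry most of the information.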
hypersemitoric system, including the existence of a unique flap and two parabolic orbits. furthermore, we study the transitions between these stages. we also come up with new explicit semitoric systems on all hirzebruch surfaces which, together with the previous systems and the systems already contained in the literature, give an explicit model for every type of strictly minimal system. moreover, we show how to obtain every strictly minimal system by applying sequences of alternating toric type blowups and blowdowns to simple explicit systems. in particular, we obtain that every strictly minimal semitoric polygon can be obtained from a semitoric system which is part of a family $ ( m, \ omega, f _ t = ( j, h _ t ) ) $ which is semitoric for all but a finite number of values of $ t $, called a semitoric family.
arxiv:2307.10670
many activity classification methods segment data into fixed - size windows for feature extraction and classification. however, animal behaviors have various durations that do not match the predetermined window size. dense labeling and dense prediction methods address this limitation by predicting labels for every point. thus, by tracing the starting and ending points, we can determine the time location and duration of all occurring activities. still, dense predictions can be noisy, with misalignment problems. we modified the u - net and the conditional generative adversarial network ( cgan ) with customized loss functions as a training strategy to reduce fragmentation and other misalignments. in a cgan, the discriminator and generator are trained against each other in an adversarial competition. the generator produces dense predictions. the discriminator works as a high - level consistency check, in our case pushing the generator to predict activities with reasonable durations. the model trained with cgan shows better or comparable performance on the cow, pig, and uci hapt datasets. the cgan - trained modified u - net improved from 92. 17 % to 94. 66 % on the uci hapt dataset and from 90. 85 % to 93. 18 % on the pig data compared to previous dense prediction work.
arxiv:2209.03758
efficient generation and manipulation of spin signals in a given material without invoking external magnetism remain one of the challenges in spintronics. the spin hall effect ( she ) and rashba - edelstein effect ( ree ) are well - known mechanisms to electrically generate spin accumulation in materials with strong spin - orbit coupling ( soc ), but the exact role of the strength and type of soc, especially in crystals with low symmetry, has yet to be explained. in this study, we investigate ree in two different families of non - magnetic chiral materials, elemental semiconductors ( te and se ) and semimetallic disilicides ( tasi $ _ 2 $ and nbsi $ _ 2 $ ), using an approach based on density functional theory ( dft ). by analyzing spin textures across the full brillouin zones and comparing them with ree magnitudes calculated as a function of chemical potential, we link specific features in the electronic structure with the efficiency of the induced spin accumulation. our findings show that magnitudes of ree can be increased by : ( i ) the presence of purely radial ( weyl - type ) spin texture manifesting as the parallel spin - momentum locking, ( ii ) high spin polarization of bands along one specific crystallographic direction, ( iii ) low band velocities. by comparing materials possessing the same crystal structures, but different strengths of soc, we conclude that larger soc may indirectly contribute to the enhancement of ree. it yields greater spin - splitting of bands along specific crystallographic directions, which prevents canceling the contributions from the oppositely spin - polarized bands over wider energy regions and helps maintain larger ree magnitudes. we believe that these results will be useful for designing spintronics devices and may aid further computational studies searching for efficient ree in materials with different symmetries and soc strengths.
arxiv:2304.05287
the 5 - point tensors have the property that after insertion of the metric tensor $ g ^ { \ mu \ nu } $ in terms of external momenta, all $ g ^ { \ mu \ nu } $ - contributions in the tensor decomposition cancel. if furthermore the tensors are contracted with external momenta, the inverse 5 - point gram determinant $ ( ) _ 5 $ cancels automatically. if the remaining 4 - point sub - gram determinant $ { s \ choose s } _ 5 $ is not small then this approach appears to be particularly efficient in numerical calculations. we also indicate how to deal with small $ { s \ choose s } _ 5 $. explicit formulae for tensors of degree 2 and 3 are given for large and small ( sub - ) gram determinants.
arxiv:1111.4153
emotions have been shown to play a role in argument convincingness, yet this aspect is underexplored in the natural language processing ( nlp ) community. unlike prior studies that use static analyses, focus on a single text domain or language, or treat emotion as just one of many factors, we introduce a dynamic framework inspired by manipulation checks commonly used in psychology and social science ; leveraging llm - based manipulation checks, this framework examines the extent to which perceived emotional intensity influences perceived convincingness. through human evaluation of arguments across different languages, text domains, and topics, we find that in over half of cases, judgments of convincingness remain unchanged despite variations in perceived emotional intensity ; when emotions do have an impact, they more often enhance rather than weaken convincingness. we further analyze how 11 llms behave in the same scenario, finding that while llms generally mirror human patterns, they struggle to capture nuanced emotional effects in individual judgments.
arxiv:2503.00024
the diffractive program of the cdf collaboration at the fermilab tevatron pbar - p collider is reviewed with emphasis on recent results from run - ii and future prospects.
arxiv:hep-ex/0507072
unitary representations of the fundamental group of a kahler manifold correspond to polystable vector bundles ( with vanishing chern classes ). semisimple linear representations correspond to polystable higgs bundles. in this paper we find the objects corresponding to affine representations : the linear part gives a higgs bundle and the translation part corresponds to an element of a generalized de rham cohomology.
arxiv:math/9912043
the random - force ( larkin ) model of a directed elastic string subject to quenched random forces in the transverse directions has been a paradigm in the statistical physics of disordered systems. in this brief note, we investigate a modified version of the above model where the total transverse force along the polymer contour and the related total torque, in each realization of disorder, vanish. we discuss the merits of adding these constraints and show that they leave the qualitative behavior in the strong stretching regime unchanged, but they reduce the effects of the random force by significant numerical prefactors. we also show that a transverse random force effectively makes the filament softer to compression by inducing undulations. we calculate the related linear compression coefficient in both the usual and the constrained random force model.
arxiv:1107.3328
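the constraint of vanishing total transverse force and total torque introduced in the abstract above can be illustrated numerically by projecting an unconstrained random force onto the constraint subspace. this is a sketch under the assumption of a contour discretized at unit spacing, not the paper ' s calculation.

```python
import random

def constrained_random_force(n, sigma=1.0, seed=0):
    """Draw n i.i.d. Gaussian transverse forces along a discretized
    filament contour s = 0..n-1, then project out the total force and
    total torque, as in the constrained random-force (Larkin) model."""
    rng = random.Random(seed)
    s = list(range(n))
    f = [rng.gauss(0.0, sigma) for _ in range(n)]
    # Orthogonal basis of the constrained directions:
    # v1 = (1,...,1) (total force) and v2 = s - mean(s) (total torque).
    mean_s = sum(s) / n
    v2 = [si - mean_s for si in s]
    c1 = sum(f) / n                                    # component along v1
    c2 = sum(fi * vi for fi, vi in zip(f, v2)) / sum(vi * vi for vi in v2)
    return [fi - c1 - c2 * vi for fi, vi in zip(f, v2)]

f = constrained_random_force(100)
total_force = sum(f)                                   # ~0 by construction
total_torque = sum(i * fi for i, fi in enumerate(f))   # ~0 by construction
</imports>```

since the torque arm s decomposes as v2 plus a constant, removing the components along v1 and v2 kills both constraints simultaneously, up to floating - point error.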
aims. we aim to develop a chemical model that contains a consistent description of spin - state chemistry in reactions involving chemical species with multiple deuterons. we apply the model to the specific case of deuterated ammonia, to derive values for the various spin - state ratios. methods. we apply symmetry rules in the complete scrambling assumption to calculate branching ratio tables for reactions between chemical species that include multiple protons and / or deuterons. reaction sets for both gas - phase and grain - surface chemistry are generated using an automated routine that forms all possible spin - state variants of any given reaction with up to six h / d atoms. single - point and modified bonnor - ebert models are used to study the density and temperature dependence of ammonia and its isotopologs, and the associated spin - state ratios. results. we find that the spin - state ratios of the ammonia isotopologs are, at late times, very different from their statistical values. the ratios are rather insensitive to variations in the density, but present strong temperature dependence. we derive high peak values ( $ \ sim $ 0. 1 ) for the deuterium fraction in ammonia, in agreement with previous ( gas - phase ) models. the deuterium fractionation is strongest at high density, corresponding to a high degree of depletion, and also presents temperature dependence. we find that in the temperature range 5 to 20 k, the deuterium fractionation peaks at $ \ sim $ 15 k while most of the ortho / para ( and meta / para for $ \ rm nd _ 3 $ ) ratios present a minimum at 10 k ( ortho / para $ \ rm nh _ 2d $ has instead a maximum at this temperature ). conclusions. owing to the density and temperature dependence found in the abundances and spin - state ratios of ammonia and its isotopologs, it is evident that observations of ammonia and its deuterated forms can provide important constraints on the physical structure of molecular clouds.
arxiv:1507.02856
interactions among sensors can provide, in addition to entanglement, an important resource for boosting the precision in quantum estimation protocols. dephasing noise, however, remains a leading source of decoherence in state - of - the - art quantum sensing platforms. we analyze the impact of classical { \ em collective dephasing with arbitrary temporal correlations } on the performance of generalized ramsey interferometry protocols with \ emph { quadratic } encoding of a target frequency parameter. the optimal asymptotic precision bounds are derived for both product coherent spin states and for a class of experimentally relevant entangled spin - squeezed states of $ n $ qubit sensors. while, as in linear metrology, entanglement offers no advantage if the noise is markovian, a precision scaling of $ n ^ { - 1 } $ is reachable with classical input states in the quadratic setting, which is improved to $ n ^ { - 5 / 4 } $ when temporal correlations are present and the zeno regime is accessible. the use of nonclassical spin - squeezed states and a nonlinear readout further allows for an $ n ^ { - 3 / 2 } $ precision scaling, which we prove is asymptotically optimal. we also show how to counter { \ em noise - induced bias } by introducing a simple ratio estimator which relies on detecting two suitable system observables, and show that it remains asymptotically unbiased in the presence of dephasing, without detriment to the achievable precision.
arxiv:2501.00189
this article provides a cartoon of the quantization of general relativity using the ideas of effective field theory. these ideas underpin the use of general relativity as a theory from which precise predictions are possible, since they show why quantum corrections to standard classical calculations are small. quantum corrections can be computed controllably provided they are made for the weakly - curved geometries associated with precision tests of general relativity, such as within the solar system or for binary pulsars. they also bring gravity back into the mainstream of physics, by showing that its quantization ( at low energies ) exactly parallels the quantization of other, better understood, non - renormalizable field theories which arise elsewhere in physics. of course effective field theory techniques do not solve the fundamental problems of quantum gravity discussed elsewhere in these pages, but they do helpfully show that these problems are specific to applications on very small distance scales. they also show why we may safely reject any proposals to modify gravity at long distances if these involve low - energy problems ( like ghosts or instabilities ), since such problems are unlikely to be removed by the details of the ultimate understanding of gravity at microscopic scales.
arxiv:gr-qc/0606108
tukia and vaisala showed that every quasi - conformal map of $ \ r ^ n $ extends to a quasi - conformal self - map of $ \ r ^ { n + 1 } $. the restriction of the extended map to the upper half - space $ \ r ^ n \ times \ r ^ + $ is, in fact, bi - lipschitz with respect to the hyperbolic metric. more generally, every homogeneous negatively curved manifold decomposes as $ m = n \ rtimes \ r ^ + $ where $ n $ is a nilpotent group with a metric on which $ \ r ^ + $ acts by dilations. we show that under some assumptions on $ n $, every quasi - symmetry of $ n $ extends to a bi - lipschitz map of $ m $. the result applies to a wide class of manifolds $ m $ including non - compact rank one symmetric spaces and certain manifolds that do not admit co - compact group actions. although $ m $ must be gromov hyperbolic, its curvature need not be strictly negative.
arxiv:1112.2684
ordinary differential equations ( odes ) are foundational in modeling intricate dynamics across a gamut of scientific disciplines. yet, the possibility of representing a single phenomenon through multiple ode models, driven by different understandings of nuances in internal mechanisms or abstraction levels, presents a model selection challenge. this study introduces a testing - based approach for ode model selection amidst statistical noise. rooted in the model misspecification framework, we adapt foundational insights from classical statistical paradigms ( vuong and hotelling ) to the ode context, allowing for the comparison and ranking of diverse causal explanations without the constraints of nested models. our simulation studies validate the theoretical robustness of our proposed test, revealing its consistent size and power. real - world data examples further underscore the algorithm ' s applicability in practice. to foster accessibility and encourage real - world applications, we provide a user - friendly python implementation of our model selection algorithm, bridging theoretical advancements with hands - on tools for the scientific community.
arxiv:2308.16438
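as a rough illustration of the vuong - style comparison adapted in the abstract above, the classical statistic can be computed from pointwise log - likelihoods of two candidate models. this is a minimal sketch of the textbook non - nested test statistic, not the paper ' s full selection algorithm.

```python
import math

def vuong_statistic(loglik_a, loglik_b):
    """Classical Vuong-style statistic from pointwise log-likelihoods of
    two (possibly non-nested) models fit to the same n observations.
    Large positive values favor model A; large negative values favor B.
    Under the null of equal fit it is asymptotically standard normal."""
    n = len(loglik_a)
    d = [a - b for a, b in zip(loglik_a, loglik_b)]
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return math.sqrt(n) * mean / math.sqrt(var)
```

for example, if model a ' s pointwise log - likelihoods uniformly dominate model b ' s, the statistic is large and positive, and swapping the arguments flips its sign.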
the masses of central massive black holes in bl lac objects are estimated from their host galaxy absolute magnitude at r - band by using the empirical relation between absolute magnitude of host galaxy and black hole mass. only a small fraction of bl lac objects exhibit weak broad - line emission, and we derive the sizes of the broad - line regions ( blrs ) in these bl lac objects from the widths of their broad emission lines on the assumption of the clouds being virialized in blrs. it is found that the sizes of the blrs in these sources are usually 2 - 3 orders of magnitude larger than those expected from the empirical correlation between blr size and optical luminosity defined by a sample of seyfert galaxies and quasars. we discuss a variety of possibilities and suggest it may probably be attributed to anisotropic motion of the blr clouds in these bl lac objects. if the blr geometry of these sources is disk - like, the viewing angles between the axis and the line of sight are in the range of 2 - 12 degrees, which is consistent with the unification schemes.
arxiv:astro-ph/0403298
axions are a well - motivated dark matter candidate particle. haloscopes aim to detect axions in the galactic halo by measuring the photon signal resulting from axions interacting with a strong magnetic field. existing haloscopes are primarily targeting axion masses which produce microwave - range photons and rely on microwave resonators to enhance the signal power. only a limited subset of resonator modes are useful for this process, and current cylindrical - style cavities suffer from mode mixing and crowding from other fundamental modes. the majority of these modes can be eliminated by using photonic band gap ( pbg ) resonators. the band gap behavior of these structures allows for a resonator with mode selectivity based on frequency. we present results from the first tunable pbg resonator, a proof - of - concept design with a footprint compatible with axion haloscopes. we have thoroughly characterized the tuning range of two versions of the structure and report the successful confinement of the operating tm $ _ { 010 } $ mode and the elimination of all te modes within the tuning range.
arxiv:2408.03861
we discuss the problem of designing unambiguous programmable discriminators for any $ n $ unknown quantum states in an $ m $ - dimensional hilbert space. the discriminator is a fixed measurement which has two kinds of input registers : the program registers and the data register. the program registers consist of the $ n $ states, while the data register is prepared among them. the task of the discriminator is to tell us which state stored in the program registers is equivalent to that in the data register. first, we give a necessary and sufficient condition for judging an unambiguous programmable discriminator. then, if $ m = n $, we present an optimal unambiguous programmable discriminator for them, in the sense of maximizing the worst - case probability of success. finally, we propose a universal unambiguous programmable discriminator for arbitrary $ n $ quantum states. we also show how to use this universal discriminator to unambiguously discriminate mixed states.
arxiv:quant-ph/0606189
understanding the writing frame of news articles is vital for addressing social issues, and thus has attracted notable attention in the fields of communication studies. yet, assessing such news article frames remains a challenge due to the absence of a concrete and unified standard dataset that considers the comprehensive nuances within news content. to address this gap, we introduce an extended version of a large labeled news article dataset with 16, 687 new labeled pairs. leveraging the pairwise comparison of news articles, our method frees us from the manual identification of frame classes required in traditional news frame analysis studies. overall we introduce the most extensive cross - lingual news article similarity dataset available to date with 26, 555 labeled news article pairs across 10 languages. each data point has been meticulously annotated according to a codebook detailing eight critical aspects of news content, under a human - in - the - loop framework. application examples demonstrate its potential in unearthing country communities within global news coverage, exposing media bias among news outlets, and quantifying the factors related to news creation. we envision that this news similarity dataset will broaden our understanding of the media ecosystem in terms of news coverage of events and perspectives across countries, locations, languages, and other social constructs. by doing so, it can catalyze advancements in social science research and applied methodologies, thereby exerting a profound impact on our society.
arxiv:2405.13272
word - level auto - completion ( wlac ) plays a crucial role in computer - assisted translation. it aims at providing word - level auto - completion suggestions for human translators. while previous studies have primarily focused on designing complex model architectures, this paper takes a different perspective by rethinking the fundamental question : what kind of words are good auto - completions? we introduce a measurable criterion to answer this question and discover that existing wlac models often fail to meet this criterion. building upon this observation, we propose an effective approach to enhance wlac performance by promoting adherence to the criterion. notably, the proposed approach is general and can be applied to various encoder - based architectures. through extensive experiments, we demonstrate that our approach outperforms the top - performing system submitted to the wlac shared tasks in wmt2022, while utilizing significantly smaller model sizes.
arxiv:2310.14523
the ir finite one - loop box scalar integral with massless internal lines has been recalculated. the result is very compact, simple and valid for arbitrary values of the relevant kinematic variables. it is given in terms of only two dilogarithms and a few logarithms, all of very simple arguments.
arxiv:hep-ph/0201306
we have calculated the composite ( pseudo ) scalar contributions to the anomalous magnetic moment of muons in models of walking technicolor. by the axial or scale anomaly the light scalars such as techni - dilaton, techni - pions or techni - eta have anomalous couplings to two photons, which make them natural candidates for the recent 750 gev resonance excess, observed at lhc. due to the anomalous couplings, their contributions to muon ( g - 2 ) are less suppressed and might explain the current deviation in muon ( g - 2 ) measurements from theory.
arxiv:1602.06628
open source software ( oss ) development challenges traditional software engineering practices. in particular, oss projects are managed by a large number of volunteers, working freely on the tasks they choose to undertake. oss projects also rarely rely on explicit system - level design, or on project plans or schedules. moreover, oss developers work in arbitrary locations and collaborate almost exclusively over the internet, using simple tools such as email and software code tracking databases ( e. g. cvs ). all the characteristics above make oss development akin to weaving a tapestry of heterogeneous components. the oss design process relies on various types of actors : people with prescribed roles, but also elements coming from a variety of information spaces ( such as email and software code ). the objective of our research is to understand the specific hybrid weaving accomplished by the actors of this distributed, collective design process. this, in turn, challenges traditional methodologies used to understand distributed software engineering : oss development is simply too " fibrous " to lend itself well to analysis under a single methodological lens. in this paper, we describe the methodological framework we articulated to analyze collaborative design in the open source world. our framework focuses on the links between the heterogeneous components of a project ' s hybrid network. we combine ethnography, text mining, and socio - technical network analysis and visualization to understand oss development in its totality. this way, we are able to simultaneously consider the social, technical, and cognitive aspects of oss development. we describe our methodology in detail, and discuss its implications for future research on distributed collective practices.
arxiv:cs/0703009
let $ \ mathcal { x } _ { \ gamma } g : = \ mathrm { hom } ( \ gamma, g ) / \! / g $ be the $ g $ - character variety of $ \ gamma $, where $ g $ is a complex reductive group and $ \ gamma $ a finitely presented group. we introduce new techniques for computing hodge - deligne and serre polynomials of $ \ mathcal { x } _ { \ gamma } g $, and present some applications, focusing on the cases when $ \ gamma $ is a free or free abelian group. detailed constructions and proofs of the main results will appear elsewhere.
arxiv:2006.14520
motivated by the stellar wind ejected from the upper atmosphere ( corona ) of a star, we explore a boundary problem of the two - species nonlinear relativistic vlasov - poisson systems in the 3d half space in the presence of a constant vertical magnetic field and strong background gravity. we allow species to have different mass and charge ( as proton and electron, for example ). as the main result, we construct stationary solutions and establish their nonlinear dynamical asymptotic stability in time and space.
arxiv:2310.09865
in repeated games, strategies are often evaluated by their ability to guarantee the performance of the single best action that is selected in hindsight, a property referred to as \ emph { hannan consistency }, or \ emph { no - regret }. however, the effectiveness of the single best action as a yardstick to evaluate strategies is limited, as any static action may perform poorly in common dynamic settings. our work therefore turns to a more ambitious notion of \ emph { dynamic benchmark consistency }, which guarantees the performance of the best \ emph { dynamic } sequence of actions, selected in hindsight subject to a constraint on the allowable number of action changes. our main result establishes that for any joint empirical distribution of play that may arise when all players deploy no - regret strategies, there exist dynamic benchmark consistent strategies such that if all players deploy these strategies the same empirical distribution emerges when the horizon is large enough. this result demonstrates that although dynamic benchmark consistent strategies have a different algorithmic structure and provide significantly enhanced individual assurances, they lead to the same equilibrium set as no - regret strategies. moreover, the proof of our main result uncovers the capacity of independent algorithms with strong individual guarantees to foster a strong form of coordination.
arxiv:2212.03152
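the dynamic benchmark in the abstract above ( the best hindsight action sequence with a bounded number of action changes ) can be computed by a simple dynamic program over rounds, current action, and switches used. the payoff format below is an assumption for illustration.

```python
def dynamic_benchmark(payoffs, max_switches):
    """Best hindsight total payoff of a dynamic action sequence changing
    action at most `max_switches` times. `payoffs[t][a]` is the payoff of
    action a in round t (assumes at least two actions)."""
    T, A = len(payoffs), len(payoffs[0])
    NEG = float("-inf")
    # dp[a][j]: best total so far, currently playing a, j switches used.
    dp = [[payoffs[0][a] if j == 0 else NEG for j in range(max_switches + 1)]
          for a in range(A)]
    for t in range(1, T):
        new = [[NEG] * (max_switches + 1) for _ in range(A)]
        for a in range(A):
            for j in range(max_switches + 1):
                stay = dp[a][j]
                switch = (max((dp[b][j - 1] for b in range(A) if b != a),
                              default=NEG) if j > 0 else NEG)
                best = max(stay, switch)
                if best > NEG:
                    new[a][j] = best + payoffs[t][a]
        dp = new
    return max(dp[a][j] for a in range(A) for j in range(max_switches + 1))
```

for instance, if action 0 pays 1 in the first half of the horizon and action 1 pays 1 in the second half, the static ( zero - switch ) benchmark earns only half the total, while a single allowed switch captures everything, which is why the dynamic benchmark is strictly more demanding than no - regret.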
using arakelov geometry, we compute the partition function of the noncompact free boson at genus two. we begin by compiling a list of modular invariants which appear in the arakelov theory of riemann surfaces. using these quantities, we express the genus two partition function as a product of modular forms, as in the well - known genus one case. we check that our result has the expected obstruction to holomorphic factorization and behavior under degeneration.
arxiv:1902.02420
anomaly detection is the process of identifying cases, or groups of cases, that are in some way unusual and do not fit the general patterns present in the dataset. numerous algorithms use discretization of numerical data in their detection processes. this study investigates the effect of the discretization method on the unsupervised detection of each of the six anomaly types acknowledged in a recent typology of data anomalies. to this end, experiments are conducted with various datasets and secoda, a general - purpose algorithm for unsupervised non - parametric anomaly detection in datasets with numerical and categorical attributes. this algorithm employs discretization of continuous attributes, exponentially increasing weights and discretization cut points, and a pruning heuristic to detect anomalies with an optimal number of iterations. the results demonstrate that standard secoda can detect all six types, but that different discretization methods favor the discovery of certain anomaly types. the main findings also hold for other detection techniques using discretization.
arxiv:2008.12330
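to illustrate why the choice of discretization method matters for which anomalies are found, here is a toy comparison of equal - width and equal - frequency binning combined with a naive frequency - based score. this is only a sketch of the general idea; secoda ' s actual procedure is iterative and considerably more involved.

```python
from collections import Counter

def equal_width_bins(values, k):
    """Assign each value to one of k equal-width bins."""
    lo, hi = min(values), max(values)
    w = (hi - lo) / k or 1.0          # guard against a constant attribute
    return [min(int((v - lo) / w), k - 1) for v in values]

def equal_frequency_bins(values, k):
    """Assign each value to one of k (roughly) equal-frequency bins."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, i in enumerate(order):
        bins[i] = min(rank * k // len(values), k - 1)
    return bins

def frequency_scores(bins):
    """Naive anomaly score: the inverse of the bin's frequency, so values
    falling in rare bins score high."""
    counts = Counter(bins)
    return [1.0 / counts[b] for b in bins]
```

on data like [1, 2, 3, 4, 5, 100], equal - width binning isolates the extreme value in its own bin and the score flags it, while equal - frequency binning groups it with its neighbors and the anomaly disappears, illustrating how cut points favor certain anomaly types.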
in topologically - protected quantum computation, quantum gates can be carried out by adiabatically braiding two - dimensional quasiparticles, reminiscent of entangled world lines. bonesteel et al. [ phys. rev. lett. 95, 140503 ( 2005 ) ], as well as leijnse and flensberg [ phys. rev. b 86, 104511 ( 2012 ) ] recently provided schemes for computing quantum gates from quasiparticle braids. mathematically, the problem of executing a gate becomes that of finding a product of the generators ( matrices ) in that set that approximates the gate best, up to an error. to date, efficient methods to compute these gates only strive to optimize for accuracy. we explore the possibility of using a generic approach applicable to a variety of braiding problems based on evolutionary ( genetic ) algorithms. the method efficiently finds optimal braids while allowing the user to optimize for the relative utilities of accuracy and / or length. furthermore, when optimizing for error only, the method can quickly produce efficient braids.
arxiv:1211.7359
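a toy version of the evolutionary search can be sketched as follows: a mutation - only genetic algorithm looks for a fixed - length product of generator matrices close to a target gate, with a fitness that can weight accuracy against word length. the phase - gate generators used in the example are illustrative assumptions, not braid generators for actual anyons.

```python
import cmath
import random

def mat_mul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def word_to_matrix(gens, word):
    """Multiply out a word (sequence of generator indices)."""
    m = [[1, 0], [0, 1]]
    for g in word:
        m = mat_mul(m, gens[g])
    return m

def distance(a, b):
    """Entrywise L1 distance between two 2x2 matrices."""
    return sum(abs(a[i][j] - b[i][j]) for i in range(2) for j in range(2))

def evolve(gens, target, length, pop=60, steps=300, w_len=0.0, seed=1):
    """Mutation-only GA over fixed-length generator words. The fitness
    combines distance to the target gate with an optional length penalty,
    mimicking the accuracy/length trade-off in the abstract."""
    rng = random.Random(seed)
    def fitness(word):
        return distance(word_to_matrix(gens, word), target) + w_len * len(word)
    population = [[rng.randrange(len(gens)) for _ in range(length)]
                  for _ in range(pop)]
    for _ in range(steps):
        population.sort(key=fitness)
        survivors = population[: pop // 2]           # keep the fitter half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] = rng.randrange(len(gens))  # mutate
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)
```

with two opposite pi / 8 phase gates as generators and the s gate diag ( 1, i ) as target, a length - 6 word with five " + pi / 8 " steps and one " - pi / 8 " step reproduces the target exactly, and the ga finds it quickly because the search space is tiny.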
we introduce a family of hamiltonian systems for measurement - based quantum computation with continuous variables. the hamiltonians ( i ) are quadratic, and therefore two body, ( ii ) are of short range, ( iii ) are frustration - free, and ( iv ) possess a constant energy gap proportional to the squared inverse of the squeezing. their ground states are the celebrated gaussian graph states, which are universal resources for quantum computation in the limit of infinite squeezing. these hamiltonians constitute the basic ingredient for the adiabatic preparation of graph states and thus open new avenues for the physical realization of continuous - variable quantum computing beyond the standard optical approaches. we characterize the correlations in these systems at thermal equilibrium. in particular, we prove that the correlations across any multipartition are contained exactly in its boundary, automatically yielding a correlation area law.
arxiv:1007.0951
in this work, physics - informed neural networks are applied to incompressible two - phase flow problems. we investigate the forward problem, where the governing equations are solved from initial and boundary conditions, as well as the inverse problem, where continuous velocity and pressure fields are inferred from scattered - time data on the interface position. we employ a volume of fluid approach, i. e. the auxiliary variable here is the volume fraction of the fluids within each phase. for the forward problem, we solve the two - phase couette and poiseuille flow. for the inverse problem, three classical test cases for two - phase modeling are investigated : ( i ) drop in a shear flow, ( ii ) oscillating drop and ( iii ) rising bubble. data of the interface position over time is generated by numerical simulation. an effective way to distribute spatial training points to fit the interface, i. e. the volume fraction field, and the residual points is proposed. furthermore, we show that appropriate weighting of losses associated with the residual of the partial differential equations is crucial for successful training. the benefit of using adaptive activation functions is evaluated for both the forward and inverse problem.
arxiv:2101.09833
the performance of large language models ( llms ) relies heavily on the quality of prompts, which are often manually engineered and task - specific, making them costly and non - scalable. we propose a novel approach, supervisory prompt training ( spt ). spt automates the generation of highly effective prompts using a dual llm system. in this system, one llm, the generator, performs a task while the other, the corrector, provides feedback and generates improved prompts. in contrast to earlier techniques, both the generator and corrector collaboratively and continuously improve their prompts over time. we also introduce the concept of \ textit { impact scores } to measure the sentence - level effectiveness of the prompts. our method was tested on four benchmarks that measure the level of hallucinations in llms. notably, we were able to increase the accuracy of gpt - 4 on gsm8k from 65. 8 \ % to 94. 1 \ % ( 28. 3 \ % increase ). spt advances llms by refining prompts to enhance performance and reduce hallucinations, offering an efficient and scalable alternative to traditional model fine - tuning.
arxiv:2403.18051
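The generator/corrector loop can be sketched as follows. This is a toy stand-in, not the paper's implementation: both LLMs are replaced by deterministic functions, and `impact` here is a simplified prompt-level accuracy delta rather than the paper's sentence-level impact score.

```python
def generator(prompt, question):
    # toy "generator LLM": answers correctly only when the prompt
    # contains step-by-step guidance
    return question * 2 if "step by step" in prompt else question

def corrector(prompt, failures):
    # toy "corrector LLM": revises the prompt when failures were observed
    return prompt + " Think step by step." if failures else prompt

def accuracy(prompt, dataset):
    return sum(generator(prompt, q) == a for q, a in dataset) / len(dataset)

dataset = [(1, 2), (3, 6), (5, 10)]   # (question, expected answer) pairs
prompt = "Answer the question."
for _ in range(3):                    # iterative refinement rounds
    acc = accuracy(prompt, dataset)
    failures = [q for q, a in dataset if generator(prompt, q) != a]
    new_prompt = corrector(prompt, failures)
    impact = accuracy(new_prompt, dataset) - acc  # simplified impact score
    if impact >= 0:                   # keep revisions that do not hurt
        prompt = new_prompt
final_acc = accuracy(prompt, dataset)
```

The loop accepts a revised prompt only when its measured impact is non-negative, which captures the collaborative, feedback-driven refinement described in the abstract.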
We consider the geometric transition and compute the all-genus topological string amplitudes expressed in terms of Hopf link invariants and topological vertices of Chern-Simons gauge theory. We introduce an operator technique of 2-dimensional CFT which greatly simplifies the computations. In particular, we show that in the case of local Calabi-Yau manifolds described by toric geometry, basic amplitudes are written as vacuum expectation values of a product of vertex operators and thus appear quite similar to the Veneziano amplitudes of the old dual resonance models. Topological string amplitudes can be easily evaluated using the vertex operator algebra.
arxiv:hep-th/0312234
In this paper, we study optimal radar deployment for intrusion detection, with a focus on network coverage. In contrast to the disk-based sensing model in a traditional sensor network, the detection range of a bistatic radar depends on the locations of both the radar transmitter and the radar receiver, and is characterized by Cassini ovals. Furthermore, in a network with multiple radar transmitters and receivers, since any pair of transmitter and receiver can potentially form a bistatic radar, the detection ranges of different bistatic radars are coupled, and the corresponding network coverage is intimately related to the locations of all transmitters and receivers, making the optimal deployment design highly non-trivial. Clearly, the detectability of an intruder depends on the highest SNR received over all possible bistatic radars. We focus on the worst-case intrusion detectability, i.e., the minimum possible detectability over all possible intrusion paths. Although it is plausible to deploy radars on a shortest line segment across the field, this is not always optimal in general, which we illustrate via counterexamples. We then present a sufficient condition on the field geometry under which the optimality of shortest-line deployment holds. Further, we quantify the local structure of detectability corresponding to a given deployment order and spacings of radar transmitters and receivers, building on which we characterize the optimal deployment that maximizes the worst-case intrusion detectability. Our results show that the optimal deployment locations exhibit a balanced structure. We also develop a polynomial-time approximation algorithm for characterizing the worst-case intrusion path for any given locations of radars under random deployment.
arxiv:1206.1355
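The bistatic sensing model underlying the Cassini-oval geometry can be sketched numerically. In the standard bistatic radar equation the received SNR scales as $1/(d_t^2 d_r^2)$, where $d_t$ and $d_r$ are the distances from a point to the transmitter and receiver, so constant-SNR contours satisfy $d_t d_r = \text{const}$ (Cassini ovals). The constant `k`, the radar positions, and the discretized path below are illustrative assumptions, not values from the paper.

```python
import math

def bistatic_snr(p, tx, rx, k=1.0):
    # bistatic radar equation up to a constant: SNR ~ k / (d_t^2 * d_r^2)
    dt = math.dist(p, tx)
    dr = math.dist(p, rx)
    return k / (dt**2 * dr**2)

def path_detectability(path, radars):
    # an intruder on a path is detected at its most exposed point, by the
    # best transmitter/receiver pair, hence the max over points and pairs
    return max(bistatic_snr(p, tx, rx) for p in path for tx, rx in radars)

tx, rx = (0.0, 0.0), (4.0, 0.0)
# vertical intrusion path crossing the TX-RX baseline at x = 2
path = [(2.0, -2.0 + 0.5 * i) for i in range(9)]
d = path_detectability(path, [(tx, rx)])
```

As expected, the maximum SNR along this path occurs where it crosses the baseline midpoint ($d_t = d_r = 2$, giving $1/16$); the worst-case intrusion path studied in the paper is then the path minimizing this quantity over all admissible paths.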
A Cramér-Rao bound (CRB) optimization framework for near-field sensing (NISE) with continuous-aperture arrays (CAPAs) is proposed. In contrast to conventional spatially discrete arrays (SPDAs), CAPAs emit electromagnetic (EM) probing signals through continuous source currents for target sensing, thereby exploiting the full spatial degrees of freedom (DoFs). A maximum likelihood estimation (MLE) method for estimating target locations in the near-field region is developed. To evaluate the NISE performance with CAPAs, the CRB for estimating target locations is derived based on the continuous transmit and receive array responses of CAPAs. Subsequently, a CRB minimization problem is formulated to optimize the continuous source current of CAPAs. This results in a non-convex, integral-based functional optimization problem. To address this challenge, the optimal structure of the source current is derived and proven to be spanned by a series of basis functions determined by the system geometry. To solve the CRB minimization problem, a low-complexity subspace manifold gradient descent (SMGD) method is proposed, leveraging the derived optimal structure of the source current. Our simulation results validate the effectiveness of the proposed SMGD method and further demonstrate that (i) the proposed SMGD method can effectively solve the CRB minimization problem with reduced computational complexity, and (ii) CAPAs achieve a tenfold improvement in sensing performance compared to their SPDA counterparts, due to full exploitation of spatial DoFs.
arxiv:2412.15007
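The CRB that the framework above minimizes is, in its simplest scalar form, the reciprocal of the Fisher information. The following sketch illustrates only that generic bound, not the paper's continuous-aperture derivation: for $n$ Gaussian samples with known variance $\sigma^2$, the Fisher information for the mean is $n/\sigma^2$, so the CRB is $\sigma^2/n$, and it is attained by the sample mean (the MLE). All numerical values are illustrative.

```python
import random

random.seed(1)
n, sigma, mu = 50, 2.0, 3.0
crb = sigma**2 / n  # CRB = 1 / Fisher information = sigma^2 / n

# Monte Carlo check: the MSE of the MLE (sample mean) matches the CRB
trials = 2000
errs = []
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    mle = sum(sample) / n          # sample mean is the MLE of mu
    errs.append((mle - mu)**2)
mse = sum(errs) / trials
```

In the paper this scalar picture is replaced by a Fisher information functional of the continuous source current, which is what makes the minimization an integral-based functional optimization problem.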
We argue against Foreman's proposal to settle the continuum hypothesis and other classical independent questions via the adoption of generic large cardinal axioms.
arxiv:1901.02074
In a high-dimensional regression setting in which the number of variables ($p$) is much larger than the sample size ($n$), the number of possible two-way interactions between the variables is immense. If the number of variables is of the order of one million, which is usually the case in, e.g., genetics, the number of two-way interactions is of the order of one million squared. In the pursuit of detecting two-way interactions, testing all pairs for interactions one by one is computationally infeasible, and the multiple testing correction will be severe. In this paper we describe a two-stage testing procedure consisting of a screening and an evaluation stage. It is proven that, under some assumptions, the test statistics in the two stages are asymptotically independent. As a result, multiplicity correction in the second stage is only needed for the number of statistical tests that are actually performed in that stage. This increases the power of the testing procedure. Also, since the testing procedure in the first stage is computationally simple, the computational burden is lowered. Simulations have been performed for multiple settings and regression models (generalized linear models and the Cox PH model) to study the performance of the two-stage testing procedure. The results show type I error control and an increase in power compared to the procedure in which the pairs are tested one by one.
arxiv:2406.17466
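The structure of such a two-stage procedure can be sketched generically. This is a toy illustration of the screening-then-testing idea only: the screening statistic, p-values, threshold, and the use of Bonferroni in stage two are illustrative assumptions, not the paper's actual test statistics or proofs.

```python
def two_stage(pairs, screen_stat, test_pvalue, screen_cut=1.0, alpha=0.05):
    # stage 1: cheap screening; pairs below the cutoff are discarded
    survivors = [p for p in pairs if abs(screen_stat(p)) > screen_cut]
    # stage 2: multiplicity correction (Bonferroni here) over only the
    # tests actually performed -- justified when the two stages'
    # statistics are asymptotically independent
    m = max(len(survivors), 1)
    return [p for p in survivors if test_pvalue(p) < alpha / m]

pairs = [("g1", "g2"), ("g1", "g3"), ("g2", "g3"), ("g3", "g4")]
# toy screening z-scores and stage-two p-values per variable pair
screen = {("g1", "g2"): 2.5, ("g1", "g3"): 0.2,
          ("g2", "g3"): 1.8, ("g3", "g4"): 0.4}
pvals = {("g1", "g2"): 0.001, ("g1", "g3"): 0.5,
         ("g2", "g3"): 0.2, ("g3", "g4"): 0.9}
hits = two_stage(pairs, screen.get, pvals.get)
```

Because only two of the four pairs survive screening, the stage-two threshold is $\alpha/2$ instead of $\alpha/4$, which is exactly the power gain the abstract describes.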
PRNU-based image processing is a key asset in digital multimedia forensics. It allows for reliable device identification and effective detection and localization of image forgeries, in very general conditions. However, performance degrades significantly in challenging conditions involving low quality and quantity of data. These include working on compressed and cropped images, or estimating the camera PRNU pattern based on only a few images. To boost the performance of PRNU-based analyses in such conditions, we propose to leverage the image noiseprint, a recently proposed camera-model fingerprint that has proved effective for several forensic tasks. Numerical experiments on datasets widely used for source identification prove that the proposed method ensures a significant performance improvement in a wide range of challenging situations.
arxiv:2001.06440
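The core of PRNU-based source identification is correlating a noise residual extracted from a query image with a camera's estimated fingerprint. The sketch below is a simplified stand-in: real pipelines extract residuals with wavelet denoising and use detectors such as peak-to-correlation energy, whereas here the fingerprint and residuals are synthetic 1D signals and plain normalized cross-correlation is used.

```python
import math
import random

def ncc(a, b):
    # normalized cross-correlation of two equal-length signals
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma)**2 for x in a)
                    * sum((y - mb)**2 for y in b))
    return num / den

random.seed(0)
n = 512
fingerprint = [random.gauss(0, 1) for _ in range(n)]           # camera PRNU
residual_same = [k + random.gauss(0, 1) for k in fingerprint]  # same camera
residual_other = [random.gauss(0, 1.5) for _ in range(n)]      # other camera
rho_same = ncc(fingerprint, residual_same)
rho_other = ncc(fingerprint, residual_other)
```

A residual from the fingerprint's own camera correlates strongly while a foreign residual does not; the paper's contribution is keeping this gap usable when cropping, compression, or few fingerprint images would otherwise erode it.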
The structural characterization of hetero-aggregates in 3D is of great interest, e.g., for deriving process-structure or structure-property relationships. However, since 3D imaging techniques are often difficult to perform as well as time- and cost-intensive, a characterization of hetero-aggregates based on 2D image data is desirable, but often non-trivial. To overcome the issues of characterizing 3D structures from 2D measurements, a method is presented that relies on machine learning combined with methods of spatial stochastic modeling, where the latter are utilized for the generation of synthetic training data. This kind of training data has the advantage that time-consuming experiments for the synthesis of differently structured materials followed by their 3D imaging can be avoided. More precisely, a parametric stochastic 3D model is presented, from which a wide spectrum of virtual hetero-aggregates can be generated. Additionally, the virtual structures are passed to a physics-based simulation tool in order to generate virtual scanning transmission electron microscopy (STEM) images. The preset parameters of the 3D model together with the simulated STEM images serve as a database for the training of convolutional neural networks, which can be used to determine the parameters of the underlying 3D model and, consequently, to predict 3D structures of hetero-aggregates from 2D STEM images. Furthermore, an error analysis is performed to evaluate the prediction power of the trained neural networks with respect to structural descriptors, e.g., the hetero-coordination number.
arxiv:2310.18523
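The synthetic-training-data pipeline can be sketched at a very high level: sample parameters of a stochastic 3D model, realize a virtual structure, "image" it in 2D, and store (2D descriptor, 3D parameter) pairs as training data. Everything below is a toy stand-in: random sphere centers replace the paper's parametric aggregate model, dropping the z coordinate replaces the physics-based STEM simulation, and a particle count replaces the CNN's learned descriptors.

```python
import random

def sample_structure(n_particles, rng):
    # toy stochastic 3D model: random particle centers in the unit cube
    return [(rng.random(), rng.random(), rng.random())
            for _ in range(n_particles)]

def project_2d(structure):
    # toy "imaging": orthogonal projection, standing in for the
    # physics-based STEM simulation
    return [(x, y) for x, y, _ in structure]

rng = random.Random(42)
training_data = []
for _ in range(100):
    n = rng.randint(5, 50)                   # preset 3D model parameter
    img = project_2d(sample_structure(n, rng))
    descriptor = len(img)                    # toy 2D image descriptor
    training_data.append((descriptor, n))    # (input, target) pair
```

In the paper, such (simulated image, preset parameter) pairs are what the convolutional networks are trained on, so that the 3D model parameters can later be predicted from real 2D STEM images.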