The copula is a powerful tool for modelling multivariate data. We propose modelling intraday financial returns of multiple assets through copulas. The problem originates in the asynchronous nature of intraday financial data. We propose a consistent estimator of the correlation coefficient in the case of elliptical copulas and show that the plug-in copula estimator is uniformly convergent. For non-elliptical copulas, we capture the dependence through Kendall's tau. We demonstrate underestimation of the copula parameter and use a quadratic model to propose an improved estimator. In simulations, the proposed estimator reduces the bias significantly for a general class of copulas. We apply the proposed methods to real data on several stock prices.
arxiv:1904.10182
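For the elliptical case mentioned in the abstract above, a standard identity links Kendall's tau to the copula correlation: rho = sin(pi * tau / 2). The sketch below (our illustration, not the paper's asynchronous-data estimator) computes a sample tau and plugs it into this identity. The data values are made up.

```python
# Sketch (not the paper's estimator): for elliptical copulas the
# correlation rho and Kendall's tau satisfy tau = (2/pi) * arcsin(rho),
# so a rank-based tau estimate yields a plug-in estimate of rho.
import math
from itertools import combinations

def kendall_tau(xs, ys):
    """Sample Kendall's tau from paired observations (ties ignored)."""
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(xs) * (len(xs) - 1) // 2
    return (concordant - discordant) / n_pairs

def rho_from_tau(tau):
    """Invert tau = (2/pi) * arcsin(rho), valid for elliptical copulas."""
    return math.sin(math.pi * tau / 2.0)

# Toy paired returns (hypothetical numbers, for illustration only)
xs = [0.1, 0.4, 0.2, 0.9, 0.7]
ys = [0.2, 0.5, 0.1, 0.8, 0.9]
tau = kendall_tau(xs, ys)
rho = rho_from_tau(tau)
```

Note that this ignores the asynchronicity problem the paper addresses; it only illustrates the tau-to-rho mapping.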
In this article, the author provides full details of the proof of the concordance/isotopy problem. The first published proof, [5], accomplished this task only partially, since there was an error (see the erratum [6]) which damaged the main argument of [5, Theorem 2.9] and, consequently, the proof of [5, Theorem A].
arxiv:1604.07466
The ensemble of light-cone Fock wavefunctions $\{\psi_{n/H}(x_i, \vec k_{\perp i}, \lambda_i)\}$ provides a conceptual basis for representing physical hadrons and nuclei in terms of their fundamental quark and gluon degrees of freedom. A number of applications of the light-cone formalism to QCD phenomenology are briefly reviewed, such as the origin of Regge behavior of polarized structure functions, the high momentum transfer behavior of exclusive reactions, the color transparency properties of diffractive vector meson photoproduction, and the behavior of quark distributions at large $x_{Bj}$. The light-cone formalism illuminates novel features of hadron physics, such as the intrinsic gluon and heavy quark distributions, the quark-antiquark asymmetry of the intrinsic heavy quark sea, and the importance of rearrangement mechanisms in heavy quarkonium decay. I also discuss the potential for measuring the shape of the valence light-cone Fock wavefunction of hadrons and photons in nuclear diffractive multi-jet production.
arxiv:hep-ph/9706236
We study vortex patterns in a prototype nonlinear optical system: counterpropagating laser beams in a photorefractive crystal, with or without a background photonic lattice. The vortices are effectively planar and described by the winding number and the "flavor" index, stemming from the fact that we have two parallel beams propagating in opposite directions. The problem is amenable to the methods of statistical field theory and generalizes the Berezinsky-Kosterlitz-Thouless transition of the XY model to the "two-flavor" case. In addition to the familiar conductor and insulator phases, we also have the perfect conductor (vortex proliferation in both beams/"flavors") and the frustrated insulator (energy costs of vortex proliferation and vortex annihilation balance each other). In the presence of disorder in the background lattice, a novel phase appears which shows long-range correlations and absence of long-range order, thus being analogous to spin glasses. An important benefit of this approach is that the qualitative behavior of patterns can be known without intensive numerical work over large areas of the parameter space. More generally, we would like to draw attention to connections between the (classical) pattern-forming systems in photorefractive optics and the methods of (quantum) condensed matter and field theory: on one hand, we use field-theoretical methods (renormalization group, replica formalism) to analyze the patterns; on the other hand, the observed phases are analogous to those seen in magnetic systems, making photorefractive optics a fruitful testing ground for condensed matter systems. As an example, we map our system to a doped $O(3)$ antiferromagnet with $\mathbb{Z}_2$ defects, which has the same structure of the phase diagram.
arxiv:1701.03451
The triangular-lattice Heisenberg antiferromagnet (HAF) is known to carry topological $Z_2$ vortex excitations which form a gas at finite temperatures. Here we show that the spin-orbit interaction, introduced via a Kitaev term in the exchange Hamiltonian, condenses these vortices into a triangular $Z_2$ vortex crystal at zero temperature. The cores of the $Z_2$ vortices show abrupt, soliton-like magnetization modulations and arise by a special intertwining of three honeycomb superstructures of ferromagnetic domains, one for each of the three sublattices of the 120-degree state of the pure HAF. This is a new example of a nucleation transition, analogous to the spontaneous formation of magnetic domains, Abrikosov vortices in type-II superconductors, blue phases in cholesteric liquid crystals, and skyrmions in chiral helimagnets. As the mechanism relies on the interplay of geometric frustration and spin-orbital anisotropies, such vortex mesophases can materialize as a ground-state property in spin-orbit coupled correlated systems with nearly hexagonal topology, as in triangular or strongly frustrated honeycomb iridates.
arxiv:1209.5895
In this paper, we propose a high-order energy-conserving semi-Lagrangian discontinuous Galerkin (ECSLDG) method for the Vlasov-Ampere system. The method employs a semi-Lagrangian discontinuous Galerkin scheme for spatial discretization of the Vlasov equation, achieving high-order accuracy while removing the Courant-Friedrichs-Lewy (CFL) constraint. To ensure energy conservation and eliminate the need to resolve the plasma period, we adopt an energy-conserving time discretization introduced by Liu et al. [J. Comput. Phys., 492 (2023), 112412]. Temporal accuracy is further enhanced through a high-order operator splitting strategy, yielding a method that is high-order accurate in both space and time. The resulting ECSLDG scheme is unconditionally stable and conserves both mass and energy at the fully discrete level, regardless of spatial or temporal resolution. Numerical experiments demonstrate the accuracy, stability, and conservation properties of the proposed method. In particular, the method achieves more accurate enforcement of Gauss's law and improved numerical fidelity over low-order schemes, especially when using a large CFL number.
arxiv:2504.20813
We propose a coin-flip protocol which yields a string of strong, random coins and is fully simulatable against poly-sized quantum adversaries on both sides. It can be implemented with quantum-computational security without any set-up assumptions, since our construction only assumes mixed commitment schemes, which we show how to construct in the given setting. We then show that the interactive generation of random coins at the beginning of or during outer protocols allows for quantum-secure realizations of classical schemes, again without any set-up assumptions. As example applications we discuss quantum zero-knowledge proofs of knowledge and quantum-secure two-party function evaluation. Both applications assume only fully simulatable coin-flipping and mixed commitments. Since our framework allows us to construct fully simulatable coin-flipping from mixed commitments, this in particular shows that mixed commitments are complete for quantum-secure two-party function evaluation. This seems to be the first completeness result for quantum-secure two-party function evaluation from a generic assumption.
arxiv:1102.0887
Using high spatial and temporal resolution data from the \emph{Solar Dynamics Observatory} (\emph{SDO}) and the \emph{Interface Region Imaging Spectrograph} (\emph{IRIS}), several observational signatures of magnetic reconnection in the course of magnetic flux cancellation are presented, including two loop-loop interaction processes, multiple plasma blob ejections, and a sheet-like structure that appeared above the flux cancellation sites with a Y-shaped end and an inverted-Y-shaped end. The \emph{IRIS} 1400 \AA\ observations show that the plasma blobs were ejected from the tip of the Y-shaped ends of the sheet-like structure. Obvious photospheric magnetic flux cancellation occurred after the first loop-loop interaction and continued until the end of the observation. Complemented by a nonlinear force-free field extrapolation, we found that two sets of magnetic field lines, which reveal an X-shaped configuration, align well with the interacting coronal loops. Moreover, a magnetic null point is found to be situated at about $0.9$ Mm height right above the flux cancellation sites, located between the two sets of magnetic field lines. These results suggest that the flux cancellation might be a result of the submergence of magnetic field lines following magnetic reconnection that occurs in the lower atmosphere of the Sun, and the ejected plasma blobs should be plasmoids created in the sheet-like structure due to the tearing-mode instability. This observation reveals the detailed magnetic field structure and dynamic process above the flux cancellation sites and will help us to understand magnetic reconnection in the lower atmosphere of the Sun.
arxiv:1806.04857
Residential segregation is a widespread phenomenon that can be observed in almost every major city. In these urban areas, residents with different racial or socioeconomic backgrounds tend to form homogeneous clusters. Schelling's famous agent-based model for residential segregation explains how such clusters can form even if all agents are tolerant, i.e., if they agree to live in mixed neighborhoods. For segregation to occur, all it needs is a slight bias towards agents preferring similar neighbors. Very recently, Schelling's model has been investigated from a game-theoretic point of view, with selfish agents that strategically select their residential location. In these games, agents can improve on their current location by performing a location swap with another agent who is willing to swap. We significantly deepen these investigations by studying the influence of the underlying topology modeling the residential area on the existence of equilibria, the price of anarchy, and the dynamic properties of the resulting strategic multi-agent system. Moreover, as a new conceptual contribution, we also consider the influence of locality, i.e., of restricting the location swaps to swaps between neighboring agents. We give improved, almost tight bounds on the price of anarchy for arbitrary underlying graphs and present (almost) tight bounds for regular graphs, paths, and cycles. Moreover, we give almost tight bounds for grids, which are commonly used in empirical studies. For grids we also show that locality has a severe impact on the game dynamics.
arxiv:2005.02752
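The swap dynamics described in the abstract above can be sketched in a few lines. The toy below is our own construction (not the paper's model or its graph classes): two agent types on a cycle, an agent's utility is the fraction of its two neighbors sharing its type, and a swap is performed only if it strictly improves both participants. Repeating until no improving swap exists reaches a swap equilibrium.

```python
# Toy swap Schelling game on a cycle (illustrative sketch, not the paper's
# setting). Utility of an agent = fraction of its two cycle neighbors that
# share its type; a swap happens only if both agents strictly improve.
def utility(colors, i):
    n = len(colors)
    nbrs = [colors[(i - 1) % n], colors[(i + 1) % n]]
    return nbrs.count(colors[i]) / 2.0

def improving_swap(colors):
    """Return the configuration after the first strictly improving swap, or None."""
    n = len(colors)
    for i in range(n):
        for j in range(i + 1, n):
            if colors[i] == colors[j]:
                continue  # swapping equal types changes nothing
            swapped = colors[:]
            swapped[i], swapped[j] = swapped[j], swapped[i]
            # agent from i now sits at j, and vice versa
            if (utility(swapped, j) > utility(colors, i)
                    and utility(swapped, i) > utility(colors, j)):
                return swapped
    return None

colors = list("ABABAB")  # alternating start: every agent has utility 0
steps = 0
while True:
    nxt = improving_swap(colors)
    if nxt is None:
        break
    colors, steps = nxt, steps + 1
```

On this instance the dynamics segregate the cycle into one block per type; on general graphs, the paper's questions (existence of equilibria, price of anarchy, convergence) are exactly about how this process behaves.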
In the near future there will be demand for very large liquid xenon (LXe) detectors for dark matter (DM) searches in the 50-ton range. To avoid the impractically long single drift space of a dual-phase detector, it seems beneficial to use the single-phase technique. Since electrons can then drift in any direction, we can segment the homogeneous medium and thus avoid an excessive maximum drift path of order 4 m. The shorter detector length has several benefits, e.g., requiring a lower cathode voltage for the same drift field. We can easily split the TPC into two regions, with the cathode in the center and two anodes at the top and bottom. One can also use multiple TPCs stacked on top of each other in the same liquid volume to reduce the maximum drift length even further. A further division of the drift space by installing an additional anode in the center would require S2 photons to traverse the liquid for several times the Rayleigh scattering length in LXe, which is only 30-40 cm. This seems to be excessive for good x-y localization. We therefore suggest a geometry of two independent TPCs with two drift spaces each. Despite earlier publications, concerns persisted about the effect of shadowing. A detailed FEM model of the anode regions shows that, with an aligned wire arrangement, the drifting electrons impinge sideways on the anode in a narrow angular range of width 15$^{\circ}$-20$^{\circ}$. Most S2 photons are emitted in full view of the close-by PMT array. About 37% of the S2 photons are shadowed by the anode wire, out of which 30% will be reflected back again by the gold plating of the wires. Thus we can observe 74% of the total S2 light. Compared to a dual-phase detector, however, we do not suffer from the extraction efficiency, sometimes reported as low as 50%.
arxiv:2107.07798
has evolved from the Generation I, proof-of-concept reactors of the 1950s and 1960s to Generation II, Generation III, and Generation IV concepts
- thermal hydraulics and heat transfer: in a typical nuclear power plant, heat generates steam that drives a steam turbine and a generator that produces electricity
- materials science as it relates to nuclear power applications
- managing the nuclear fuel cycle, in which fissile material is obtained, formed into fuel, removed when depleted, and safely stored or reprocessed
- nuclear propulsion, mainly for military naval vessels, but there have been concepts for aircraft and missiles; nuclear power has been used in space since the 1960s
- plasma physics, which is integral to the development of fusion power
- weapons development and management
- generation of radionuclides, which have applications in industry, medicine, and many other areas
- nuclear waste management
- health physics
- nuclear medicine and medical physics
- health and safety
- instrumentation and control engineering
- process engineering
- project management
- quality engineering
- reactor operations
- nuclear security (detection of clandestine nuclear materials)
Nuclear engineering even has a role in criminal investigation and agriculture. Many chemical, electrical, mechanical, and other types of engineers also work in the nuclear industry, as do many scientists and support staff. In the U.S., nearly 100,000 people work directly in the nuclear industry. Including secondary-sector jobs, the number of people supported by the U.S. nuclear industry is 475,000.
== Employment ==
In the United States, nuclear engineers are employed as follows:
- electric power generation: 25%
- federal government: 18%
- scientific research and development: 15%
- engineering services: 5%
- manufacturing: 10%
- other areas: 27%
Worldwide, job prospects for nuclear engineers are likely best in those countries that are active in or exploring nuclear technologies.

== Education ==
Organizations that provide study and training in nuclear engineering include the following:

== Organizations ==
- American Nuclear Society
- Asian Network for Education in Nuclear Technology (ANENT), https://www.iaea.org/services/networks/anent
- Canadian Nuclear Association
- Chinese Nuclear Society
- International Atomic Energy Agency
- International Energy Agency (IEA)
- Japan Atomic Industrial Forum (JAIF)
- Korea Nuclear Energy Agency (KNEA)
- Latin American Network for Education in Nuclear Technology (LANENT), https://www.iaea.org/services/networks/lanent
- Minerals Council of Australia
- Nucleareurope
- Nuclear Institute
- Nuclear Energy Institute (NEI)
- Nuclear Industry Association of South Africa (NIASA)
- Nuclear Technology Education Consortium, https://www.ntec.ac.uk/
- OECD Nuclear Energy Agency (NEA)
https://en.wikipedia.org/wiki/Nuclear_engineering
We consider a class of monotone operators which are appropriate for symbolic representation and manipulation within a computer algebra system. Various structural properties of the class (e.g., closure under taking inverses, resolvents) are investigated, as well as the role played by maximal monotonicity within the class. In particular, we show that there is a natural correspondence between our class of monotone operators and the subdifferentials of convex functions belonging to a class of convex functions deemed suitable for symbolic computation of Fenchel conjugates, which were previously studied by Bauschke & von Mohrenschildt and by Borwein & Hamilton. A number of illustrative examples utilizing the introduced class of operators are provided, including computation of proximity operators, recovery of a convex penalty function associated with the hard thresholding operator, and computation of superexpectations, superdistributions, and superquantiles with specialization to risk measures.
arxiv:1703.05946
High dynamic range (HDR) imaging is vital for capturing the full range of light tones in scenes, essential for computer vision tasks such as autonomous driving. Standard commercial imaging systems face limitations in well-depth capacity and quantization precision, hindering their HDR capabilities. Modulo imaging, based on unlimited sampling (US) theory, addresses these limitations by using a modulo analog-to-digital approach that resets signals upon saturation, enabling estimation of pixel resets through neighboring pixel intensities. Despite the effectiveness of US algorithms on one-dimensional signals, their optimization problem for two-dimensional signals remains unclear. This work formulates the US framework as an autoregressive $\ell_2$ phase unwrapping problem, providing computationally efficient solutions in the discrete cosine domain, jointly with a stride removal algorithm also based on spatial differences. By leveraging higher-order finite differences for two-dimensional images, our approach enhances HDR image reconstruction from modulo images, demonstrating its efficacy in improving object detection in autonomous driving scenes without retraining.
arxiv:2504.04228
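The core idea behind the abstract above, recovering a signal from modulo-folded samples via finite differences, can be illustrated in 1D. The sketch below is a toy version under simplifying assumptions (signal starts in [0, lam) and consecutive samples change by less than lam/2); it is not the paper's autoregressive $\ell_2$ phase-unwrapping method, and the sample values are made up.

```python
# Minimal 1D sketch of modulo ("unlimited") sampling and recovery by
# unwrapping first differences. Assumptions: the true signal starts inside
# [0, lam) and consecutive samples differ by less than lam/2.
import math

def fold(x, lam):
    """Modulo ADC model: reset the value into [0, lam) on saturation."""
    return x - lam * math.floor(x / lam)

def unwrap(folded, lam):
    """Recover the signal by wrapping first differences into (-lam/2, lam/2]."""
    out = [folded[0]]
    for k in range(1, len(folded)):
        d = folded[k] - folded[k - 1]
        d -= lam * round(d / lam)   # wrapped difference equals the true difference
        out.append(out[-1] + d)
    return out

lam = 1.0
true = [0.1, 0.5, 0.9, 1.3, 1.7, 2.1, 1.8, 1.5]   # exceeds the [0, 1) range
folded = [fold(v, lam) for v in true]             # what the modulo sensor sees
recovered = unwrap(folded, lam)                   # matches `true`
```

The paper's contribution is, in effect, the 2D generalization of this step: choosing the difference order and solving the resulting unwrapping problem efficiently for images.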
According to the Hamiltonian formalism, nonrelativistic phase space may be considered as an arena of physics, with momentum and position treated as independent variables. Invariance of $x^2 + p^2$ then constitutes a natural generalization of ordinary rotational invariance. We consider a Dirac-like linearization of this form, with position and momentum satisfying standard commutation relations. This leads to the identification of a quantum-level structure from which some phase space properties might emerge. Genuine rotations and reflections in phase space are tied to the existence of new quantum numbers, unrelated to ordinary 3D space. Their properties allow their identification with the internal quantum numbers characterising the structure of a single quark-lepton generation in the Standard Model. In particular, the algebraic structure of the Harari-Shupe preon model of fundamental particles is reproduced exactly and without invoking any subparticles. Analysis of the Clifford algebra of nonrelativistic phase space singles out an element which might be associated with the concept of lepton mass. This element is transformed into a corresponding element for a single coloured quark, leading to a generalization of the concept of mass and a different starting point for the discussion of quark unobservability.
arxiv:0901.2896
We study the composition of a bivariate L\'evy process with a bivariate inverse subordinator. Explicit expressions for its dispersion and autocorrelation matrices are obtained. Also, time-changed two-parameter L\'evy processes with rectangular increments are studied. We introduce some time-changed variants of the Poisson random field in the plane, with and without drift, and derive the associated fractional differential equations for their distributions. Later, we consider some time-changed L\'evy processes where the time-changing components are two-parameter Poisson random fields with drifts. Moreover, two-parameter coordinatewise semigroup operators associated with some of the introduced processes are discussed.
arxiv:2503.04166
The $\lambda$-exponential family generalizes the standard exponential family via a generalized convex duality motivated by optimal transport. It is the constant-curvature analogue of the exponential family from the information-geometric point of view, but the development of computational methodologies is still at an early stage. In this paper, we propose a fixed point iteration for maximum likelihood estimation under i.i.d. sampling, and prove using the duality that the likelihood is monotone along the iterations. We illustrate the algorithm with the $q$-Gaussian distribution and the Dirichlet perturbation.
arxiv:2505.03582
We present results from the RXTE observations of Cygnus X-1 in its high state. In the energy range of 2-200 keV, the observed X-ray spectrum can be described by a model consisting of a soft blackbody component and a broken power law with a high energy cutoff. The low energy spectrum (below about 11 keV) varies significantly from observation to observation, while the high energy portion changes little. The X-ray flux varies on all timescales down to milliseconds. The power density spectrum (PDS) can be characterized by excess red noise ("1/f") at low frequencies and a white noise component that extends to 1-3 Hz before being cut off. At higher frequencies, the PDS becomes power-law again, with a slope of roughly -2 (i.e., "1/f^2"). Broad peaks in the range of 3-9 Hz are present, and might be due to quasi-periodic oscillations. The PDS shows an interesting spectral dependence: the 1/f component becomes more prominent when the low-energy spectrum becomes softer. The difference in the observed spectral and timing properties between the low and high states is qualitatively consistent with a simple "fluctuating corona" model.
arxiv:astro-ph/9610071
Active matter systems are inherently out of equilibrium and break detailed balance (DB) at the microscopic scale, exhibiting vital collective phenomena such as motility-induced phase separation (MIPS). Here, we introduce a coarse-grained mapping method to probe DB breaking in the density-energy phase space, which allows us to reveal the dynamic and thermodynamic origins of MIPS based on nonequilibrium potential and flux landscape theory. Hallmarks of nonequilibrium properties are manifested by identifying the visible probability flux in the coarse-grained phase space. Remarkably, for a system with activity below the MIPS threshold, the flux tends to "tear up" the single potential well of the uniform-density phase to create two wells of phases with different densities, showing directly that the nonequilibrium flux is the dynamic origin of MIPS. Moreover, we find that the entropy production rate (EPR) of the system undergoes a transition from being nearly independent of activity to increasing proportionally with activity once the single well is torn up. This transition in the EPR's scaling behavior might provide a hint of the thermodynamic origin of MIPS in the coarse-grained space. Our findings propose a new route to explore the nonequilibrium nature of active systems and provide new insights into the dynamic and thermodynamic properties of MIPS.
arxiv:2211.16084
Deep learning approaches to 3D shape segmentation are typically formulated as a multi-class labeling problem. Existing models are trained for a fixed set of labels, which greatly limits their flexibility and adaptivity. We opt for top-down recursive decomposition and develop the first deep learning model for hierarchical segmentation of 3D shapes, based on recursive neural networks. Starting from a full shape represented as a point cloud, our model performs recursive binary decomposition, where the decomposition networks at all nodes in the hierarchy share weights. At each node, a node classifier is trained to determine the type (adjacency or symmetry) and stopping criteria of its decomposition. The features extracted at higher-level nodes are recursively propagated to lower-level ones. Thus, the meaningful decompositions at higher levels provide strong contextual cues constraining the segmentations at lower levels. Meanwhile, to increase the segmentation accuracy at each node, we enhance the recursive contextual feature with the shape feature extracted for the corresponding part. Our method segments a 3D shape represented as a point cloud into an unfixed number of parts, depending on the shape complexity, showing strong generality and flexibility. It achieves state-of-the-art performance, both for fine-grained and semantic segmentation, on the public benchmark and a new benchmark of fine-grained segmentation proposed in this work. We also demonstrate its application to fine-grained part refinement in image-to-shape reconstruction.
arxiv:1903.00709
Let $\mathbb{F}$ be a field and $f : \mathfrak{S}_n \rightarrow \mathbb{F} \setminus \{0\}$ be an arbitrary map. The Schur matrix functional associated to $f$ is defined as $M \in \text{M}_n(\mathbb{F}) \mapsto \widetilde{f}(M) := \sum_{\sigma \in \mathfrak{S}_n} f(\sigma) \prod_{j=1}^n m_{\sigma(j),j}$. Typical examples of such functionals are the determinant (where $f$ is the signature morphism) and the permanent (where $f$ is constant with value $1$). Given two such maps $f$ and $g$, we study the endomorphisms $u$ of the vector space $\text{M}_n(\mathbb{F})$ that satisfy $\widetilde{g}(u(M)) = \widetilde{f}(M)$ for all $M \in \text{M}_n(\mathbb{F})$. In particular, we give a closed form for the linear preservers of the functional $\widetilde{f}$ when $f$ is central, and as a special case we extend to an arbitrary field Botta's characterization of the linear preservers of the permanent.
arxiv:1807.06264
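The Schur matrix functional in the abstract above can be evaluated directly from its definition, $\widetilde{f}(M) = \sum_{\sigma} f(\sigma) \prod_j m_{\sigma(j),j}$. The sketch below does exactly that over the symmetric group (feasible only for small $n$, since it sums $n!$ terms) and checks the two named special cases, the determinant and the permanent.

```python
# Direct evaluation of the Schur matrix functional:
#   f~(M) = sum over permutations sigma of f(sigma) * prod_j M[sigma(j)][j].
# f = signature gives the determinant; f = constant 1 gives the permanent.
from itertools import permutations

def prod(values):
    out = 1
    for v in values:
        out *= v
    return out

def sign(perm):
    """Signature of a permutation given as a tuple of 0-based images."""
    s, seen = 1, set()
    for start in range(len(perm)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:        # walk the cycle containing `start`
            seen.add(j)
            j = perm[j]
            length += 1
        s *= (-1) ** (length - 1)   # a k-cycle contributes (-1)^(k-1)
    return s

def schur_functional(f, M):
    n = len(M)
    return sum(
        f(p) * prod(M[p[j]][j] for j in range(n))
        for p in permutations(range(n))
    )

M = [[1, 2], [3, 4]]
det = schur_functional(sign, M)           # determinant: 1*4 - 2*3 = -2
perm = schur_functional(lambda p: 1, M)   # permanent:   1*4 + 2*3 = 10
```

This brute-force form is only a definition check; the paper's question is which linear maps on matrices preserve such a functional.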
We present theoretical and experimental data on the magneto-optical contributions to the complex refractive index in the extreme ultraviolet (XUV) range covering the 3p resonances of Fe, Co, and Ni. Comparing the spectra from density functional theory with magnetic circular dichroism measurements, we find that many-body corrections and local field effects are of crucial importance for an accurate description of the spectra. Our results are relevant for the application of static XUV spectroscopy to multi-element magnetic systems as well as for the investigation of ultrafast magnetization dynamics.
arxiv:1812.06703
The simplest decomposition of a Toffoli gate acting on three qubits requires {\em five} two-qubit gates. If we restrict ourselves to controlled-sign (or controlled-NOT) gates, this number climbs to six. We show that the number of controlled-sign gates required to implement a Toffoli gate can be reduced to just {\em three} if one of the three quantum systems has a third state that is accessible during the computation, i.e., is actually a qutrit. Such a requirement is not unreasonable or even atypical, since we often artificially enforce a qubit structure on multilevel quantum systems (e.g., atoms, photonic polarization and spatial modes). We explore the implementation of these techniques in optical quantum processing and show that linear optical circuits could operate with much higher probabilities of success.
arxiv:0806.0654
Early dark energy (EDE) relies on scalar field dynamics to resolve the Hubble tension, by boosting the pre-recombination length scales and thereby raising the CMB-inferred value of the Hubble constant into agreement with late-universe probes. However, the collateral effect of scalar field microphysics on the linear perturbation spectra appears to preclude a fully satisfactory solution: $H_0$ is not raised without the inclusion of a late-universe prior, and the "$S_8$ tension", a discrepancy between early- and late-universe measurements of the structure growth parameter, is exacerbated. What if EDE is not a scalar field? Here, we investigate whether different microphysics, encoded in the constitutive relationships between pressure and energy density fluctuations, can relieve these tensions. We show that EDE with an anisotropic sound speed can soften both the $H_0$ and $S_8$ tensions while still providing a quality fit to CMB data. Future observations from the CMB-S4 experiment may be able to distinguish the underlying microphysics at the $4\sigma$ level, and thereby test whether a scalar field or some richer physics is at work.
arxiv:2202.08291
To guide the design of better iterative optimisation heuristics, it is imperative to understand how inherent structural biases within algorithm components affect performance on a wide variety of search landscapes. This study explores the impact of structural bias in the modular covariance matrix adaptation evolution strategy (modCMA), focusing on the roles of the various modules within the algorithm. Through an extensive investigation involving 435,456 configurations of modCMA, we identified key modules that significantly influence structural bias of various classes. Our analysis utilized the Deep-BIAS toolbox for structural bias detection and classification, complemented by SHAP analysis for quantifying module contributions. The performance of these configurations was tested on a sequence of affine-recombined functions, maintaining fixed optimum locations while gradually varying the landscape features. Our results demonstrate an interplay between module-induced structural bias and algorithm performance across different landscape characteristics.
arxiv:2404.17323
We determine the limiting distribution of the normalized Euler factors of an abelian surface A defined over a number field k when A is isogenous to the square of an elliptic curve defined over k with complex multiplication. As an application, we prove the Sato-Tate conjecture for Jacobians of Q-twists of the curves y^2 = x^5 - x and y^2 = x^6 + 1, which give rise to 18 of the 34 possibilities for the Sato-Tate group of an abelian surface defined over Q. With twists of these two curves one encounters, in fact, all of the 18 possibilities for the Sato-Tate group of an abelian surface that is isogenous to the square of an elliptic curve with complex multiplication. Key to these results is the twisting Sato-Tate group of a curve, which we introduce in order to study the effect of twisting on the Sato-Tate group of its Jacobian.
arxiv:1203.1476
In this paper we propose a new method of estimation for discrete choice demand models when individual-level data are available. The method employs a two-step procedure. Step 1 predicts the choice probabilities as functions of the observed individual-level characteristics. Step 2 estimates the structural parameters of the model using the estimated choice probabilities at a particular point of interest and the moment restrictions. In essence, the method uses Nonparametric Approximation (followed by) Moment Estimation, hence the name: NAME. We use simulations to compare the performance of NAME with the standard methodology. We find that our method improves precision as well as convergence time. We supplement the analysis by providing the large sample properties of the proposed estimator.
arxiv:2010.08016
For the fractional diffusion-wave equation with the Caputo-Dzhrbashyan fractional derivative of order $\alpha \in (1, 2)$ with respect to the time variable, we prove an analog of the principle of limiting amplitude (well known for the wave equation and some other hyperbolic equations) and a pointwise stabilization property of solutions (similar to a well-known property of the heat equation and some other parabolic equations).
arxiv:1404.7612
For the majority of the machine learning community, the expensive nature of collecting high-quality human-annotated data and the inability to efficiently finetune very large state-of-the-art pretrained models on limited compute are major bottlenecks for building models for new tasks. We propose a simple zero-shot approach for one such task, video moment retrieval (VMR), that does not perform any additional finetuning and simply repurposes off-the-shelf models trained on other tasks. Our three-step approach consists of moment proposal, moment-query matching, and postprocessing, all using only off-the-shelf models. On the QVHighlights benchmark for VMR, we vastly improve performance of previous zero-shot approaches by at least 2.5x on all metrics and reduce the gap between zero-shot and state-of-the-art supervised by over 74%. Further, we also show that our zero-shot approach beats non-pretrained supervised models on the recall metrics and comes very close on mAP metrics, and that it also performs better than the best pretrained supervised model on shorter moments. Finally, we ablate and analyze our results and propose interesting future directions.
arxiv:2211.02178
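The moment-query matching step above can be illustrated with plain cosine similarity between embeddings produced by any off-the-shelf encoder. The toy vectors below stand in for real video/text features and are purely hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_moments(query_emb, moment_embs):
    """Rank proposed moments by similarity to the query embedding (best first)."""
    scores = [cosine(query_emb, m) for m in moment_embs]
    return sorted(range(len(scores)), key=lambda i: -scores[i])

# hypothetical embeddings: moment 1 points in nearly the same direction as the query
query = [1.0, 0.2, 0.0]
moments = [[0.0, 1.0, 0.3], [0.9, 0.1, 0.0], [0.2, 0.2, 1.0]]
ranking = rank_moments(query, moments)
```

In the zero-shot setting, a postprocessing step (e.g., merging adjacent high-scoring proposals) would follow this ranking.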
let $I = (x_1^{\alpha_1}, \dots, x_n^{\alpha_n}) \subset K[x_1, \dots, x_n]$ be a monomial ideal, where the $\alpha_i$ are positive integers and $K$ is a field, and let $J$ be the integral closure of $I$. it is a challenging problem to translate the question of the normality of $J$ into a question about the exponent set $\Gamma(J)$ and the newton polyhedron $NP(J)$. a relaxed version of this problem is to give necessary or sufficient conditions on $\alpha_1, \dots, \alpha_n$ for the normality of $J$. we show that if $\alpha_i \in \{s, l\}$ with $s$ and $l$ arbitrary positive integers, then $J$ is normal.
arxiv:1009.0786
a warped space model with a constant boundary superpotential has been an efficient model both to break supersymmetry and to stabilize the radius, when the hypermultiplet, compensator and radion multiplet are taken into account. in such a model of radius stabilization, the radion and moduli masses, the gravitino mass and the induced soft masses are studied. we find that a lighter physical mode composed of the radion and the moduli can have a mass of the order of a tev and that the gravitino mass can be of the order of $10^7$ gev. it is also shown that the soft mass induced by the anomaly mediation can be of the order of 100 gev and can be dominant compared to that mediated by bulk fields. localized f terms and d terms are discussed as candidates for cancelling the cosmological constant. we find that there is no flavor changing neutral current problem in a wide range of parameters.
arxiv:hep-th/0612071
we introduce a new number-theoretic spin chain and explore its thermodynamics and connections with number theory. the energy of each spin configuration is defined in a translation-invariant manner in terms of the farey fractions, and is also expressed using pauli matrices. we prove that the free energy exists and exhibits a unique phase transition at inverse temperature $\beta = 2$. the free energy is the same as that of a related, non-translation-invariant number-theoretic spin chain. using a number-theoretic argument, the low-temperature ($\beta > 3$) state is shown to be completely magnetized for long chains. the number of states of energy $e = \log(n)$ summed over chain length is expressed in terms of a restricted divisor problem. we conjecture that its asymptotic form is $n \log n$, consistent with the phase transition at $\beta = 2$, and suggesting a possible connection with the riemann zeta function. the spin interaction coefficients include all even many-body terms and are translation invariant. computer results indicate that all the interaction coefficients, except the constant term, are ferromagnetic.
arxiv:cond-mat/9808182
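As a generic illustration of how Farey fractions enter such constructions (not the paper's specific translation-invariant Hamiltonian), one can enumerate the Farey sequence $F_n$ and check its size against Euler's totient; assigning each reduced fraction $p/q$ an "energy" $\log q$ is a common convention in number-theoretic chains.

```python
import math
from math import gcd

def farey(n):
    """All reduced fractions p/q in [0, 1] with q <= n, in increasing order."""
    fracs = [(p, q) for q in range(1, n + 1)
                    for p in range(q + 1) if gcd(p, q) == 1]
    return sorted(fracs, key=lambda t: t[0] / t[1])

F7 = farey(7)
# |F_n| = 1 + sum_{k<=n} phi(k); for n = 7 this gives 19 fractions
n_states = len(F7)
# illustrative energy assignment: E(p/q) = log q
energies = [math.log(q) for _, q in F7]
```

Counting configurations whose energy equals $\log(n)$ then amounts to counting fractions with denominator $n$, which is where totient/divisor sums enter the thermodynamics.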
we provide necessary and sufficient conditions for hypercontractivity of the minima of nonnegative, i.i.d. random variables and of both the maxima of minima and the minima of maxima for such r.v.'s. it turns out that the idea of hypercontractivity for minima is closely related to small ball probabilities and gaussian correlation inequalities.
arxiv:math/9607209
solutions are investigated for 1d linear counter-current spontaneous imbibition (cousi). the diffusion problem is scaled to depend only on a normalized coefficient $\lambda_n(s_n)$ with mean 1 and no other parameters. a dataset of 5500 functions $\lambda_n$ was generated using combinations of (mixed-wet and strongly water-wet) relative permeabilities, capillary pressure and mobility ratios. since the possible variation in $\lambda_n$ appears limited (mean 1, positive, zero at $s_n = 0$, one maximum), the generated functions span most relevant cases. the scaled diffusion equation was solved for all 5500 cases and recovery profiles were analyzed in terms of time scales and early- and late-time behavior. scaled recovery falls exactly on the square-root curve $rf = t_n^{0.5}$ at early time. the scaled time $t_n = t/(\tau\, t_{ch})$ accounts for system length $l$ and magnitude $d$ of the unscaled diffusion coefficient via $\tau = l^2/d$, while $t_{ch}$ accounts for $\lambda_n$. scaled recovery was characterized by $rf_{tr}$ (the highest recovery reached as $t_n^{0.5}$) and $lr$, a parameter controlling the decline in imbibition rate afterwards. this correlation described the 5500 recovery curves with mean $r^2 = 0.9989$. $rf_{tr}$ was 0.05 to 0.2 units higher than recovery when water reached the no-flow boundary. the shape of $\lambda_n$ was quantified by three fractions $z_{(a,b)}$. the parameters describing $\lambda_n$ and recovery were correlated, which made it possible to (1) accurately predict full recovery profiles (without solving the diffusion equation); (2) predict diffusion coefficients explaining experimental recovery; (3) explain the combined impact of interactions between wettability/saturation functions, viscosities and other input on early- and late-time recovery behavior.
arxiv:2211.07571
numerical simulation of superconducting devices is a powerful tool for understanding the principles of their work and improving their design. we present a new pseudospectral method for two - dimensional magnetization and transport current superconducting strip problems with an arbitrary current - voltage relation, spatially inhomogeneous strips, and strips in a nonuniform applied field. the method is based on the bivariate expansions in chebyshev polynomials and hermite functions. it can be used for numerical modeling magnetic flux pumps of different types and investigating ac losses in coated conductors with local defects. using a realistic two - dimensional version of the superconducting dynamo benchmark problem as an example, we showed that our new method is a competitive alternative to finite element methods.
arxiv:2108.10654
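A basic ingredient of such Chebyshev-based pseudospectral methods is the collocation differentiation matrix. The sketch below uses the standard Trefethen-style construction on Gauss-Lobatto points (not the paper's full bivariate Chebyshev/Hermite solver) and differentiates exp(x) to near machine precision.

```python
import math

def cheb(N):
    """Chebyshev differentiation matrix on the points x_j = cos(j*pi/N)."""
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    c = [2.0] + [1.0] * (N - 1) + [2.0]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(N + 1):           # "negative sum trick" for the diagonal
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return D, x

D, x = cheb(16)
f = [math.exp(xi) for xi in x]
df = [sum(D[i][j] * f[j] for j in range(17)) for i in range(17)]
err = max(abs(df[i] - f[i]) for i in range(17))   # spectral accuracy
```

The exponential convergence of this operator with the number of collocation points is what makes pseudospectral discretizations competitive with finite elements for smooth strip problems.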
the deployment of generative ai (genai) models raises significant fairness concerns, addressed in this paper through novel characterization and enforcement techniques specific to genai. unlike standard ai performing specific tasks, genai's broad functionality requires "conditional fairness" tailored to the context being generated, such as demographic fairness in generating images of poor people versus successful business leaders. we define two fairness levels: the first evaluates fairness in generated outputs, independent of prompts and models; the second assesses inherent fairness with neutral prompts. given the complexity of genai and challenges in fairness specifications, we focus on bounding the worst case, considering a genai system unfair if the distance between appearances of a specific group exceeds preset thresholds. we also explore combinatorial testing for assessing relative completeness in intersectional fairness. by bounding the worst case, we develop a prompt injection scheme within an agent-based framework to enforce conditional fairness with minimal intervention, validated on state-of-the-art genai systems.
arxiv:2404.16663
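One plausible reading of the worst-case bound above is a simple check on the spread of group frequencies in a batch of generated outputs; the group counts and threshold below are hypothetical.

```python
def within_fairness_bound(group_counts, threshold):
    """Unfair if the gap between the most and least frequent group exceeds the preset threshold."""
    total = sum(group_counts.values())
    freqs = [c / total for c in group_counts.values()]
    return (max(freqs) - min(freqs)) <= threshold

# hypothetical audit of 100 generated images, two demographic groups
balanced = within_fairness_bound({"group_a": 48, "group_b": 52}, threshold=0.10)
skewed = within_fairness_bound({"group_a": 20, "group_b": 80}, threshold=0.10)
```

An enforcement layer such as the paper's prompt-injection scheme would intervene only when a check of this kind fails, keeping intervention minimal.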
the direct collapse model of supermassive black hole seed formation provides an attractive solution to the origin of the quasars now routinely observed at $z \gtrsim 6$. we use the adaptive mesh refinement code enzo to simulate the collapse of gas at high redshift, including a nine-species chemical model of h, he, and h$_2$. the direct collapse model requires that the gas cools predominantly via atomic hydrogen. to this end we simulate the effect of an anisotropic radiation source on the collapse of a halo at high redshift. the radiation source is placed at a distance of 3 kpc (physical) from the collapsing object. the source is set to emit monochromatically in the center of the lyman-werner (lw) band only, at $12.8\ \mathrm{eV}$. the lw radiation emitted from the high-redshift source is followed self-consistently using ray-tracing techniques. we find that, due to self-shielding, a small amount of h$_2$ is able to form at the very center of the collapsing halo even under very strong lw radiation. furthermore, we find that a radiation source emitting $>10^{54}$ ($\sim 10^3\ \mathrm{J_{21}}$) photons per second is required to cause the collapse of a clump of $m \sim 10^5\ \mathrm{m_\odot}$. the resulting accretion rate onto the collapsing object is $\sim 0.25\ \mathrm{m_\odot\ yr^{-1}}$. our results display significant differences, compared to the isotropic radiation field case, in terms of h$_2$ fraction at an equivalent radius. these differences will significantly affect the dynamics of the collapse. with the inclusion of a strong anisotropic radiation source, the final mass of the collapsing object is found to be $m \sim 10^5\ \mathrm{m_\odot}$. this is consistent with predictions for the formation of a supermassive star or quasi-star leading to a supermassive black hole.
arxiv:1407.4472
in this article, we construct the scalar-diquark-scalar-diquark-antiquark type current to study the ground-state triply charmed pentaquark states with the qcd sum rules. we separate the contributions of the negative-parity and positive-parity triply charmed pentaquark states explicitly, and take the energy scale formula $\mu = \sqrt{M_P^2 - (3\mathbb{M}_c)^2}$ to determine the optimal energy scales of the qcd spectral densities. the predicted pentaquark masses can be confronted with the experimental data in the future.
arxiv:1801.08419
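The energy-scale formula is a one-line computation. The pentaquark mass and effective charm mass used below are illustrative placeholders, not values from the paper.

```python
import math

def energy_scale(m_pentaquark, m_c_eff):
    """mu = sqrt(M_P^2 - (3 M_c)^2), all masses in GeV."""
    return math.sqrt(m_pentaquark ** 2 - (3.0 * m_c_eff) ** 2)

# hypothetical inputs: pentaquark mass 5.61 GeV, effective charm mass 1.82 GeV
mu = energy_scale(5.61, 1.82)
```

The formula is only meaningful when the hadron mass exceeds three times the effective charm mass, so the square root stays real.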
in this article, we use deformation theory of galois representations valued in the symplectic group of degree four to prove a freeness result for the cohomology of certain quaternionic unitary shimura variety over the universal deformation ring for certain type of residual representation satisfying a property called rigidity. this result plays an important role in the proof of the arithmetic level raising theorem for the symplectic similitude group of degree four over the field of rational numbers by the author.
arxiv:2204.07807
materials.
arxiv:1908.07194
reverberation mapping of nearby active galactic nuclei has led to estimates of broad-line-region (blr) sizes and central-object masses for some 37 objects to date. however, successful reverberation mapping has yet to be performed for quasars of either high luminosity (above $l_{\rm opt} \sim 10^{46}$ erg/s) or high redshift ($z > 0.3$). over the past six years, we have carried out, at the hobby-eberly telescope, rest-frame-ultraviolet spectrophotometric monitoring of a sample of six quasars at redshifts $z = 2.2$--$3.2$, with luminosities of $l_{\rm opt} \sim 10^{46.4}$--$10^{47.6}$ erg/s, an order of magnitude greater than those of previously mapped quasars. the six quasars, together with an additional five having similar redshift and luminosity properties, were monitored photometrically at the wise observatory during the past decade. all 11 quasars monitored show significant continuum variations of order 10%--70%. this is about a factor of two smaller variability than for lower-luminosity quasars monitored over the same rest-frame period. in the six objects which have been spectrophotometrically monitored, significant variability is detected in the civ $\lambda$1550 broad emission line. in several cases the variations track the continuum variations in the same quasar, with amplitudes comparable to, or even greater than, those of the corresponding continua. in contrast, no significant ly$\alpha$ variability is detected in any of the four objects in which it was observed. thus, uv lines may have different variability trends in high-luminosity and low-luminosity agns. for one quasar, s5 0836+71 at $z = 2.172$, we measure a tentative delay of 595 days between civ and uv-continuum variations, corresponding to a rest-frame delay of 188 days and a central black-hole mass of $2.6\times10^9\ m_\odot$.
arxiv:astro-ph/0612722
we report measurements of the critical temperature of ybco / co-doped ybco superconductor-normal bilayer films. depending on the morphology of the s-n interface, the coupling between the s and n layers can be turned on to depress the critical temperature of s by tens of degrees, or turned down so the layers appear almost totally decoupled. this novel effect can be explained by the mechanism of quasiparticle transmission into an anisotropic superconductor.
arxiv:cond-mat/9803361
in this paper, we study the combinatorics of congruence subgroups of the modular group by generalizing results obtained in the non - modular case. for this, we define a notion of irreducible solutions from which we can build all the solutions. in particular, we give a particular solution, irreducible for any $ n $, and the list of irreducible solutions for $ n \ leq 6 $.
arxiv:2006.01470
regular satellites of giant planets are formed by accretion of solid bodies in circumplanetary disks. planetesimals that are moving on heliocentric orbits and are sufficiently large to be decoupled from the flow of the protoplanetary gas disk can be captured by gas drag from the circumplanetary disk. in the present work, we examine the distribution of captured planetesimals in circumplanetary disks using orbital integrations. we find that the number of captured planetesimals reaches an equilibrium state as a balance between continuous capture and orbital decay into the planet. the number of planetesimals captured into retrograde orbits is much smaller than those on prograde orbits, because the former ones experience strong headwind and spiral into the planet rapidly. we find that the surface number density of planetesimals at the current radial location of regular satellites can be significantly enhanced by gas drag capture, depending on the velocity dispersions of planetesimals and the width of the gap in the protoplanetary disk. using a simple model, we also examine the ratio of the surface densities of dust and captured planetesimals in the circumplanetary disk, and find that solid material at the current location of regular satellites can be dominated by captured planetesimals when the velocity dispersion of planetesimals is rather small and a wide gap is not formed in the protoplanetary disk. in this case, captured planetesimals in such a region can grow by mutual collision before spiraling into the planet, and would contribute to the growth of regular satellites.
arxiv:1703.07917
we combine two recently established methods, the extended coupled-ladder approximation (ecla) [phys. rev. b 95, 035122 (2017)] and a dynamic keldysh functional renormalization group (frg) approach for inhomogeneous systems [phys. rev. lett. 119, 196401 (2017)], to tackle the problem of finite-ranged interactions in quantum point contacts (qpcs) at finite temperature. working in the keldysh formalism, we develop an ecla framework, proceeding from a static to a fully dynamic description. finally, we apply our new keldysh ecla method to a qpc model with finite-ranged interactions and show evidence that an interaction range comparable to the length of the qpc might be an essential ingredient for the development of a pronounced 0.7-shoulder in the linear conductance. we also discuss problems arising from a violation of a ward identity in second-order frg.
arxiv:1912.02700
tabular reinforcement learning methods cannot operate directly on continuous state spaces. one solution for this problem is to partition the state space. a good partitioning enables generalization during learning and more efficient exploitation of prior experiences. consequently, the learning process becomes faster and produces more reliable policies. however, partitioning introduces approximation, which is particularly harmful in the presence of nonlinear relations between state components. an ideal partition should be as coarse as possible, while capturing the key structure of the state space for the given problem. this work extracts partitions from the environment dynamics by symbolic execution. we show that symbolic partitioning improves state space coverage with respect to environmental behavior and allows reinforcement learning to perform better for sparse rewards. we evaluate symbolic state space partitioning with respect to precision, scalability, learning agent performance and state space coverage for the learnt policies.
arxiv:2409.16791
in this paper we propose a neural message passing approach to augment an input 3d indoor scene with new objects matching their surroundings. given an input, potentially incomplete, 3d scene and a query location, our method predicts a probability distribution over object types that fit well in that location. our distribution is predicted through passing learned messages in a dense graph whose nodes represent objects in the input scene and edges represent spatial and structural relationships. by weighting messages through an attention mechanism, our method learns to focus on the most relevant surrounding scene context to predict new scene objects. we found that our method significantly outperforms state-of-the-art approaches in terms of correctly predicting objects missing in a scene based on our experiments in the suncg dataset. we also demonstrate other applications of our method, including context-based 3d object recognition and iterative scene generation.
arxiv:1907.11308
predicting loan eligibility with high accuracy remains a significant challenge in the finance sector. accurate predictions enable financial institutions to make informed decisions, mitigate risks, and effectively adapt services to meet customer needs. however, the complexity and the high - dimensional nature of financial data have always posed significant challenges to achieving this level of precision. to overcome these issues, we propose a novel approach that employs quantum machine learning ( qml ) for loan eligibility prediction using quantum neural networks ( lep - qnn ). our innovative approach achieves an accuracy of 98 % in predicting loan eligibility from a single, comprehensive dataset. this performance boost is attributed to the strategic implementation of a dropout mechanism within the quantum circuit, aimed at minimizing overfitting and thereby improving the model ' s predictive reliability. in addition, our exploration of various optimizers leads to identifying the most efficient setup for our lep - qnn framework, optimizing its performance. we also rigorously evaluate the resilience of lep - qnn under different quantum noise scenarios, ensuring its robustness and dependability for quantum computing environments. this research showcases the potential of qml in financial predictions and establishes a foundational guide for advancing qml technologies, marking a step towards developing advanced, quantum - driven financial decision - making tools.
arxiv:2412.03158
trajectory optimization is an essential tool for generating efficient and dynamically consistent gaits in legged locomotion. this paper explores the indirect method of trajectory optimization, emphasizing its application in creating optimal periodic gaits for legged systems and contrasting it with the more commonly used direct method. while the direct method provides considerable flexibility in its implementation, it is limited by its input space parameterization. in contrast, the indirect method improves accuracy by defining control inputs as functions of the system ' s states and costates. we tackle the convergence challenges associated with indirect shooting methods, particularly through the systematic development of gait libraries by utilizing numerical continuation methods. our contributions include : ( 1 ) the formalization of a general periodic trajectory optimization problem that extends existing first - order necessary conditions for a broader range of cost functions and operating conditions ; ( 2 ) a methodology for efficiently generating libraries of optimal trajectories ( gaits ) utilizing a single shooting approach combined with numerical continuation methods, including a novel approach for reconstructing lagrange multipliers and costates from passive gaits ; and ( 3 ) a comparative analysis of the indirect and direct shooting methods using a compass - gait walker as a case study, demonstrating the former ' s superior accuracy in generating optimal gaits. the findings underscore the potential of the indirect method for generating families of optimal gaits, thereby advancing the field of trajectory optimization in legged robotics.
arxiv:2410.09512
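The indirect-method machinery can be seen on the simplest possible optimal control problem: minimize $\int_0^1 u^2/2\,dt$ subject to $\dot x = u$, $x(0) = 0$, $x(1) = 1$. Stationarity of the Hamiltonian gives $u = -p$ and the costate is constant, leaving one unknown, the initial costate, to be found by single shooting. The secant iteration below is a hedged sketch of that idea, not the paper's gait solver.

```python
def shoot(p0, n=1000):
    """Integrate state and costate forward from a guessed initial costate; return x(1)."""
    x, p, dt = 0.0, p0, 1.0 / n
    for _ in range(n):
        u = -p        # stationarity: dH/du = p + u = 0
        x += u * dt   # state dynamics: x' = u
        # costate dynamics: p' = -dH/dx = 0, so p stays constant
    return x

# single shooting: find p0 such that x(1) = 1, via a secant iteration
a, b = 0.0, -2.0
for _ in range(50):
    fa, fb = shoot(a) - 1.0, shoot(b) - 1.0
    if abs(fb) < 1e-12:
        break
    a, b = b, b - fb * (b - a) / (fb - fa)
p0_star = b
```

The exact solution is $p^* = -1$, i.e. constant control $u^* = 1$. For harder problems the same shooting residual is tracked along a family of problems by numerical continuation, which is how gait libraries are built.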
the existence of galaxy intrinsic clustering severely hampers the weak lensing reconstruction from cosmic magnification. in paper i \citep{yang2011}, we proposed a minimal variance estimator to overcome this problem. by utilizing the different dependences of cosmic magnification and galaxy intrinsic clustering on galaxy flux, we demonstrated that the otherwise overwhelming galaxy intrinsic clustering can be significantly suppressed such that lensing maps can be reconstructed with promising accuracy. this procedure relies heavily on the accuracy of determining the galaxy bias from the same data. paper i adopts an iterative approach, which degrades toward high redshift. the current paper presents an alternative method, improving over paper i. we prove that the measured galaxy clustering between flux bins allows for simultaneous determination of the lensing power spectrum and the flux dependence of galaxy bias, at this redshift bin. compared to paper i, the new approach is not only more straightforward, but also more robust. it identifies an ambiguity in determining the galaxy bias and further discovers a mathematically robust way to suppress this ambiguity to a negligible level ($\sim 0.1\%$). the accurately determined galaxy bias can then be applied to the minimal variance estimator proposed in paper i to improve the lensing map-making. the gain at high redshift is significant. these maps can be used to measure other statistics, such as cluster finding and peak statistics. furthermore, by including galaxy clustering measurement between different redshift bins, we can also determine the lensing cross power spectrum between these bins, up to a small and correctable multiplicative factor.
arxiv:1309.2474
in this research work, security concepts are formalized in steganography, and the common paradigms based on information theory are replaced by new ones inspired by cryptography, which are more practicable and closer to what is usually done in other cryptographic domains. these preliminaries lead to a first proof of a cryptographically secure information hiding scheme.
arxiv:1706.08752
speaker counting is the task of estimating the number of people that are simultaneously speaking in an audio recording. for several audio processing tasks such as speaker diarization, separation, localization and tracking, knowing the number of speakers at each timestep is a prerequisite, or at least it can be a strong advantage, in addition to enabling a low latency processing. in a previous work, we addressed the speaker counting problem with a multichannel convolutional recurrent neural network which produces an estimation at a short - term frame resolution. in this work, we show that, for a given frame, there is an optimal position in the input sequence for best prediction accuracy. we empirically demonstrate the link between that optimal position, the length of the input sequence and the size of the convolutional filters.
arxiv:2101.01977
iterating on creating pixel art character sprite sheets is essential to the game development process. however, it can take a lot of effort until the final versions containing different poses and animation clips are achieved. this paper investigates using conditional generative adversarial networks to aid the designers in creating such sprite sheets. we propose an architecture based on pix2pix to generate images of characters facing a target side ( e. g., right ) given sprites of them in a source pose ( e. g., front ). experiments with small pixel art datasets yielded promising results, resulting in models with varying degrees of generalization, sometimes capable of generating images very close to the ground truth. we analyze the results through visual inspection and quantitatively with fid.
arxiv:2208.06413
in this paper, we study the leader-following consensus problem of multiple euler-lagrange systems subject to an uncertain leader system. we first establish an adaptive distributed observer for a neutrally stable linear leader system whose system matrix is not known exactly. under standard assumptions, this adaptive distributed observer can estimate and pass the leader's state to each follower through the communication network of the system without knowing the leader's system matrix exactly. under the additional assumption that the leader's state is persistently exciting, this adaptive distributed observer can also asymptotically learn the parameters of the leader's system matrix. on the basis of this adaptive distributed observer, we further synthesize an adaptive distributed control law to solve our problem via the certainty equivalence principle. our result allows the leader-following consensus problem of multiple euler-lagrange systems to be solved even if none of the followers knows the system matrix of the leader system exactly.
arxiv:1909.07851
much previous work has been done in attempting to identify humor in text. in this paper we extend that capability by proposing a new task: assessing whether or not a joke is humorous. we present a novel way of approaching this problem by building a model that learns to identify humorous jokes based on ratings gleaned from reddit pages, consisting of almost 16,000 labeled instances. using these ratings to determine the level of humor, we then employ a transformer architecture for its advantages in learning from sentence context. we demonstrate the effectiveness of this approach and show results that are comparable to human performance. we further demonstrate our model's increased capabilities on humor identification problems, such as the previously created datasets for short jokes and puns. these experiments show that this method outperforms all previous work done on these tasks, with an f-measure of 93.1% for the puns dataset and 98.6% on the short jokes dataset.
arxiv:1909.00252
we investigate the equation of state (eos) of classical systems having 300 and 512 particles confined in a box with periodic boundary conditions. we show that such a system, independently of the number of particles investigated, has a critical density of about 1/3 the ground-state density and a critical temperature of about $2.5~\mathrm{MeV}$. the mass distribution at the critical point exhibits a power law with $\tau = 2.23$. making use of the grand partition function of fisher's droplet model, we obtain an analytical eos around the critical point in good agreement with the one extracted from the numerical simulations.
arxiv:nucl-th/9512019
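Extracting the Fisher exponent from a mass distribution is a log-log slope fit. The synthetic cluster counts below follow an exact power law $n(A) \propto A^{-2.23}$, so the fit must recover $\tau$; real simulation data would scatter around such a line near the critical point.

```python
import math

tau = 2.23
sizes = list(range(1, 51))
counts = [a ** (-tau) for a in sizes]          # synthetic n(A) ~ A^(-tau)

# least-squares slope in log-log coordinates
lx = [math.log(a) for a in sizes]
ly = [math.log(c) for c in counts]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
tau_fit = -slope
```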
work-related transportation incidents significantly impact urban mobility and productivity. these incidents include traffic crashes, collisions between vehicles, and falls that occurred during commuting or work-related transportation (e.g., falling while getting off a bus during the morning commute or while riding a bicycle for work). this study analyzes a decade of work-related transportation incident data (2012--2021) in santiago, chile, using records from a major workers' insurance company. using negative binomial regression, we assess the impact of a 2018 urban speed limit reduction law on incident injury severity. we also explore broader temporal, spatial, and demographic patterns in these incidents in urban and rural areas. the urban speed limit reduction is associated with a decrease of 4.26 days in prescribed medical leave for incidents in urban areas, suggesting that lower speed limits contribute to reduced injury severity. our broader analysis reveals distinct incident patterns across different groups. workers traveling by motorcycle and bicycle experience more severe injuries when involved in traffic incidents, with marginal effects of 26.94 and 13.06 additional days of medical leave, respectively, compared to motorized vehicles. women workers tend to have less severe injuries, with an average of 7.57 fewer days of medical leave. age is also a significant factor, with older workers experiencing more severe injuries: each additional year of age is associated with 0.57 more days of medical leave. our results provide insights for urban planning, transportation policy, and workplace safety initiatives.
arxiv:2408.00687
we have used sensitive archival data from the infrared space observatory (iso) to make maps of the edge-on low-sfr galaxy ngc 5907 in 6 different mir bands: lw2, lw5, lw6, lw7, lw8, and lw10, covering the spectrum from 6.5 to 15.0 microns and including several narrow bands that isolate the infrared aromatic spectral features commonly referred to as pahs. most of the mir emission is dominated by pahs and it is likely that emission from vsgs contributes only negligibly except in the broad iras-equivalent band. the flux ratios are typical of galaxies with low sfrs or quiescent regions within galaxies (e.g. m83) and a very high pah/continuum ratio is observed. the pah emission follows the co distribution and also shows some correlation within the disk with the $\lambda$850 micron distribution. however, the pah emission also reaches larger galactocentric radii than the co, and other correlations suggest that the pahs are also more widespread. a significant new discovery is the presence of pahs in the halo of the galaxy. in the narrow bands that isolate single pah features, the emission shows structure similar to high-latitude features seen in other galaxies in other tracers. the features extend as far as 6.5 kpc from the plane, but scale heights of 3.5 kpc are more typical. the ($\lambda$11.3/$\lambda$7.7) ratio also appears to increase with distance from the major axis. to our knowledge, this is the first time pahs have been seen in the halo of an external galaxy. just as significantly, they are seen in a low-sfr galaxy, suggesting that strong sne and winds are not necessary for these large molecules to reach high latitudes.
arxiv:astro-ph/0509726
agn masses can be estimated by "single epoch" spectral measurements through a mass-luminosity-linewidth relation calibrated by echo mapping measurements of a reference sample of low-redshift ($z < 0.3$) and low-luminosity ($m_B > -26$) agns. to analyze the possible dependence of this relation on luminosity, we selected a sample of bright, intermediate-redshift ($z \sim 1$) objects and started a spectrophotometric monitoring campaign with a typical sampling time of about one month. variability observations of lines with shorter wavelength than h$_\beta$ will also provide new information on the structure of the broad line region. cross-correlation analysis of continuum and line variations will require years of monitoring. we present a preliminary analysis of the data collected during the first year of observations and discuss the adequacy of the spectrophotometric accuracy attained and future prospects of this project.
arxiv:astro-ph/0408075
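A measured rest-frame delay converts to a black-hole mass through the virial relation $M = f\, c\, \tau\, \Delta v^2 / G$. Taking a virial factor $f = 1$ and a hypothetical line velocity width of 8400 km/s (a delay of 188 rest-frame days is the kind of value such campaigns target; the width here is purely illustrative) gives a mass of order $2.6\times10^9\ m_\odot$.

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

tau_rest = 188 * 86400.0        # rest-frame delay in seconds
dv = 8.4e6                      # hypothetical line velocity width, m/s
f = 1.0                         # virial factor, assumed

# virial mass: BLR size (c * tau) times velocity width squared, over G
M_bh = f * c * tau_rest * dv ** 2 / G / M_sun   # in solar masses
```

The dominant uncertainties in practice are the virial factor and the choice of velocity-width measure, both of which rescale the result multiplicatively.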
implicit neural representation ( inr ), in combination with geometric rendering, has recently been employed in real - time dense rgb - d slam. despite active research endeavors being made, there lacks a unified protocol for fair evaluation, impeding the evolution of this area. in this work, we establish, to our knowledge, the first open - source benchmark framework to evaluate the performance of a wide spectrum of commonly used inrs and rendering functions for mapping and localization. the goal of our benchmark is to 1 ) gain an intuition of how different inrs and rendering functions impact mapping and localization and 2 ) establish a unified evaluation protocol w. r. t. the design choices that may impact the mapping and localization. with the framework, we conduct a large suite of experiments, offering various insights in choosing the inrs and geometric rendering functions : for example, the dense feature grid outperforms other inrs ( e. g. tri - plane and hash grid ), even when geometric and color features are jointly encoded for memory efficiency. to extend the findings into the practical scenario, a hybrid encoding strategy is proposed to bring the best of the accuracy and completion from the grid - based and decomposition - based inrs. we further propose explicit hybrid encoding for high - fidelity dense grid mapping to comply with the rgb - d slam system that puts the premise on robustness and computation efficiency.
arxiv:2403.19473
Chameleon fields are quantum fields whose mass increases with the matter density of the environment. Recently, chameleon fields have been exploited to solve the cosmological constant problem in the modified Fujii's model (MFM) [Phys. Rev. D 82 (2010) 044006]. However, gravity was treated essentially at a semiclassical level in that paper. In this article the stringy origin of the MFM is further discussed: as we will see, the MFM can be obtained from heterotic M-theory. Consequently, a quantum description of gravity is obtained and the theory is finite, because we choose the string mass as our UV cut-off. This stringy origin of the MFM puts our solution to the cosmological constant problem on stronger theoretical grounds. In our analysis, time is compactified on an $S^1/Z_2$ orbifold, and this peculiar compactification of time has a number of consequences. For example, as we will see, quantum gravity and a quantum gauge theory are actually the same theory, in the sense that gravity is the time evolution of a gauge theory. This might be the key to obtaining a non-approximated stabilizing potential for the dilaton in the string frame. In this paper we further discuss the non-equivalence of different conformal frames at the quantum level. As we will see, our approach uses essentially a unique conformal frame: the frame where the masses of particles are field dependent. A word of caution is necessary: we do not take into account massive string states and IR divergences.
arxiv:1607.04195
Being able to reduce the size of a rheometer down to the micron scale is a unique opportunity to explore the mechanical response of expensive and/or confined liquids and gels. To this aim, we synthesize micron-size wires with magnetic properties and examine the possibility of using them as microrheology probes. In this work, we exploit the technique of rotational magnetic spectroscopy by placing a wire in a rotating magnetic field and monitoring its temporal evolution by time-lapse microscopy. The wire-based microrheology technique is tested on wormlike micellar surfactant solutions showing very different relaxation dynamics and viscosities. A model for the wire rotation is also developed and used to predict the wire behavior. It is shown that the rheological parameters of the surfactant solutions, including the static shear viscosity, the entangled micellar network relaxation time, and the elastic modulus, are in good agreement with those of conventional rheometry.
arxiv:1606.06667
We study a simple stochastic differential equation that models the dispersion of close heavy particles moving in a turbulent flow. In one and two dimensions, the model is closely related to the one-dimensional stationary Schrödinger equation in a random delta-correlated potential. The ergodic properties of the dispersion process are investigated by proving that its generator is hypoelliptic and using control theory.
arxiv:1009.0782
We present a new approach for stably evolving general relativistic magnetohydrodynamic (GRMHD) simulations in regions where the magnetization $\sigma = b^2/\rho c^2$ becomes large. GRMHD codes typically struggle to evolve plasma above $\sigma \approx 100$ in simulations of black hole accretion. To ensure stability, GRMHD codes inject mass density artificially into the simulation as necessary to keep the magnetization below a ceiling value $\sigma_{\rm max}$. We propose an alternative approach in which the simulation transitions to solving the equations of general relativistic force-free electrodynamics (GRFFE) above a magnetization $\sigma_{\rm trans}$. We augment the GRFFE equations in the highly magnetized region with approximate equations to evolve the decoupled field-parallel velocity, plasma energy density, and plasma mass density. Our hybrid scheme is explicit and easily added to the framework of standard finite-volume GRMHD codes. We present a variety of tests of our method, implemented in the GRMHD code KORAL, and we show first results from a 3D hybrid GRMHD+GRFFE simulation of a magnetically arrested disc (MAD) around a spinning black hole. Our hybrid MAD simulation closely matches the average properties of a standard GRMHD MAD simulation with the same initial conditions in low-magnetization regions, but it achieves a magnetization $\sigma \approx 10^6$ in the evacuated jet funnel. We present simulated horizon-scale images of both simulations at 230 GHz with the black hole mass and accretion rate matched to M87*. Images from the hybrid simulation are less affected by the choice of magnetization cutoff $\sigma_{\rm cut}$ imposed in radiative transfer than images from the standard GRMHD simulation.
arxiv:2404.01471
LMNtal is a programming and modeling language based on hierarchical graph rewriting that uses logical variables to represent connectivity and membranes to represent hierarchy. On the theoretical side, it allows logical interpretation based on intuitionistic linear logic; on the practical side, its full-fledged implementation supports a graph-based parallel model checker and has been used to model diverse applications, including various computational models. This paper discusses how we extend LMNtal to QLMNtal (LMNtal with Quantification) to further enhance the usefulness of hierarchical graph rewriting for high-level modeling by introducing quantifiers into rewriting as well as matching. These quantifiers allow us to express universal quantification, cardinality, and non-existence in an integrated manner. Unlike other attempts to introduce quantifiers into graph rewriting, QLMNtal has term-based syntax whose semantics is smoothly integrated into the small-step semantics of the base language LMNtal. The proposed constructs allow combined and nested use of quantifiers within individual rewrite rules.
arxiv:2409.11015
The carbon footprint associated with large language models (LLMs) is a significant concern, encompassing emissions from their training, inference, experimentation, and storage processes, including operational and embodied carbon emissions. An essential aspect is accurately estimating the carbon impact of emerging LLMs even before their training, which heavily relies on GPU usage. Existing studies have reported the carbon footprint of LLM training, but only one tool, mlco2, can predict the carbon footprint of new neural networks prior to physical training. However, mlco2 has several serious limitations: it cannot extend its estimation to dense or mixture-of-experts (MoE) LLMs, disregards critical architectural parameters, focuses solely on GPUs, and cannot model embodied carbon footprints. Addressing these gaps, we introduce \textit{\carb}, an end-to-end carbon footprint projection model designed for both dense and MoE LLMs. Compared to mlco2, \carb~significantly enhances the accuracy of carbon footprint estimations for various LLMs. The source code is released at \url{https://github.com/sotarokaneda/mlcarbon}.
arxiv:2309.14393
Observations with the Extreme Ultraviolet Explorer satellite are purported to show extreme ultraviolet (EUV) and soft X-ray excesses in several clusters of galaxies (Bonamente, Lieu & Mittaz 2001). If interpreted as thermal emission, this would imply the presence of warm ($T \sim 10^6$ K) gas in these clusters with a mass comparable to that of gas at coronal temperatures. If true, this would have profound implications for our understanding of galaxy clusters and the distribution of baryons in the universe. Here we show that, because of the large ionizing photon emissivities of gas at such low temperatures, the ionizing photon fluxes seen by disk galaxies in the observed clusters can be very large, resulting in minimum emission measures from neutral gas in such disks as high as 100 cm^{-6} pc. This result is essentially independent of the mechanism actually responsible for producing the alleged EUV excesses. The predicted emission measures in Abell 1795 ($z = 0.063$) are about an order of magnitude larger than seen in the Reynolds layer of the Galaxy, providing a straightforward observational test of the reality of the EUV excess. New tunable-filter H alpha images and WFPC images from the Hubble Space Telescope archive do not support the existence of the claimed EUV excess.
arxiv:astro-ph/0104422
For $n > 2$, we show that the group Aut(NS(M)) of simplicial automorphisms of the complex NS(M) of non-separating embedded spheres in the manifold M, the connected sum of n copies of S^2 x S^1, is isomorphic to the group Out(F_n) of outer automorphisms of the free group $F_n$, where $F_n$ is identified with the fundamental group of M up to conjugacy of the base point in M.
arxiv:1204.0338
We propose a muon-proton collider with asymmetrical multi-TeV beam energies and integrated luminosities of $0.1-1$ ab$^{-1}$. With its large center-of-mass energies and yet small Standard Model background, such a machine can not only improve electroweak precision measurements but also probe new physics beyond the Standard Model to an unprecedented level. We study its potential in measuring the Higgs properties, probing R-parity-violating supersymmetry, and testing heavy new physics in the muon $g-2$ anomaly. We find that for these physics cases the muon-proton collider can perform better than both the ongoing and future high-energy collider experiments.
arxiv:2101.10476
We report on the time-dependent solutions of the $q$-generalized Schrödinger equation proposed by Nobre et al. [Phys. Rev. Lett. 106, 140601 (2011)]. Here we investigate the case of two free particles and also the case where two particles are subjected to a Moshinsky-like potential with time-dependent coefficients. We work out analytical and numerical solutions for different values of the parameter $q$ and also show that the usual Schrödinger equation is recovered in the limit $q \rightarrow 1$. An intriguing behavior is observed for $q = 2$, where the wave function displays a ring-like shape, indicating a bound behavior of the particles. Differently from the results previously reported for the case of one particle, frozen states appear only for special combinations of the wave function parameters in the case of $q = 3$.
arxiv:1502.03381
Backdoor attacks compromise the integrity and reliability of machine learning models by embedding a hidden trigger during the training process, which can later be activated to cause unintended misbehavior. We propose a novel backdoor mitigation approach via machine unlearning to counter such backdoor attacks. The proposed method utilizes model activations on domain-equivalent unseen data to guide the editing of the model's weights. Unlike previous unlearning-based mitigation methods, ours is computationally inexpensive and achieves state-of-the-art performance while requiring only a handful of unseen samples for unlearning. In addition, we point out that unlearning the backdoor may cause the whole targeted class to be unlearned, and thus introduce an additional repair step to preserve the model's utility after editing. Experimental results show that the proposed method is effective in unlearning the backdoor on different datasets and trigger patterns.
arxiv:2407.07662
Schwarzschild black holes with quantum corrections are studied under scalar field and electromagnetic field perturbations to analyze the effect of the correction term on the potential function and the quasinormal modes (QNMs). In classical general relativity, spacetime is continuous and there is no minimal length. Introducing the correction terms of the generalized uncertainty principle (GUP), through the parameter $\beta$, can change the singularity structure of the black hole metric and may lead to discretization in time and space. We apply the sixth-order WKB method to approximate the QNMs of Schwarzschild black holes with quantum corrections and perform a numerical analysis of the results. We also find that the effective potential and the QNM frequencies in scalar fields are larger than those in electromagnetic fields.
arxiv:2204.11262
In geometry processing, numerical optimization methods often involve solving sparse linear systems of equations. These linear systems have a structure that strongly resembles the adjacency graph of the underlying mesh. We observe how classic linear solvers behave on this specific type of problem. For the sake of simplicity, we minimize either the squared gradient or the squared Laplacian, evaluated by finite differences on a regular 1D or 2D grid. We observe the evolution of the solution for both energies, in 1D and 2D, and with different solvers: Jacobi, Gauss-Seidel, SSOR (symmetric successive over-relaxation), and CG (conjugate gradient [She94]). Plotting results at different iterations gives an intuition of the behavior of these classic solvers.
arxiv:1510.01118
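The solver comparison described in the abstract above can be illustrated on the simplest of its test problems, the squared-gradient energy on a 1D grid. The sketch below is our own minimal reconstruction (function names, grid size, and iteration count are arbitrary choices, not the paper's setup); it contrasts Jacobi and Gauss-Seidel on a discrete Laplace equation with fixed endpoints:

```python
import numpy as np

def jacobi_step(u, f):
    # One Jacobi sweep for u'' = f with fixed endpoints (all updates use old values).
    v = u.copy()
    v[1:-1] = 0.5 * (u[:-2] + u[2:] - f[1:-1])
    return v

def gauss_seidel_step(u, f):
    # One Gauss-Seidel sweep (each update uses the freshest neighbour values).
    v = u.copy()
    for i in range(1, len(v) - 1):
        v[i] = 0.5 * (v[i - 1] + v[i + 1] - f[i])
    return v

def max_error(step, n=64, iters=2000):
    # Solve with boundary data u(0)=0, u(1)=1 and f=0; the exact solution is linear.
    f, u = np.zeros(n), np.zeros(n)
    u[-1] = 1.0
    for _ in range(iters):
        u = step(u, f)
    return np.abs(u - np.linspace(0.0, 1.0, n)).max()

err_jacobi = max_error(jacobi_step)
err_gs = max_error(gauss_seidel_step)
print(f"Jacobi: {err_jacobi:.2e}  Gauss-Seidel: {err_gs:.2e}")
```

Plotting the intermediate iterates, as the abstract suggests, shows the characteristic behavior: high-frequency error vanishes quickly while smooth, low-frequency modes linger, with Gauss-Seidel decaying roughly twice as fast as Jacobi.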
Recent Swift observations suggest that the traditional long vs. short GRB classification scheme does not always associate GRBs with the two physically motivated model types, i.e., Type II (massive star origin) vs. Type I (compact star origin). We propose a new phenomenological classification method for GRBs by introducing a new parameter $\epsilon = E_{\gamma,\mathrm{iso},52}/E_{p,z,2}^{5/3}$, where $E_{\gamma,\mathrm{iso}}$ is the isotropic gamma-ray energy (in units of $10^{52}$ erg) and $E_{p,z}$ is the cosmic rest-frame spectral peak energy (in units of 100 keV). For those short GRBs with "extended emission", both quantities are defined for the short/hard spike only. With the current complete sample of GRBs with redshift and $E_p$ measurements, the $\epsilon$ parameter shows a clear bimodal distribution with a separation at $\epsilon \sim 0.03$. The high-$\epsilon$ region encloses the typical long GRBs with high luminosity, some high-$z$ "rest-frame-short" GRBs (such as GRB 090423 and GRB 080913), as well as some high-$z$ short GRBs (such as GRB 090426). All these GRBs have been claimed to be of Type II origin based on other observational properties in the literature. All the GRBs argued to be of Type I origin are found to cluster in the low-$\epsilon$ region. They can be separated from some nearby low-luminosity long GRBs (at 3$\sigma$) by an additional $T_{90}$ criterion, i.e., $T_{90,z} \lesssim 5$ s in the Swift/BAT band. We suggest that this new classification scheme can better match the physically motivated Type II/I classification scheme.
arxiv:1001.0598
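The $\epsilon$ parameter above is a simple ratio, so the proposed classifier reduces to a few lines. The sketch below is our own illustration; the threshold 0.03 comes from the abstract, but the energy values in the usage examples are made up, not measurements from the paper:

```python
def epsilon(E_iso_erg, E_p_rest_keV):
    # epsilon = E_{gamma,iso,52} / E_{p,z,2}^{5/3}: isotropic energy in units of
    # 1e52 erg over rest-frame peak energy in units of 100 keV, raised to 5/3.
    return (E_iso_erg / 1e52) / (E_p_rest_keV / 100.0) ** (5.0 / 3.0)

def classify(E_iso_erg, E_p_rest_keV, threshold=0.03):
    # The paper reports a bimodal epsilon distribution separating near 0.03.
    if epsilon(E_iso_erg, E_p_rest_keV) > threshold:
        return "high-epsilon (Type II-like)"
    return "low-epsilon (Type I-like)"

# Illustrative (fabricated) values only:
print(classify(1e53, 300.0))  # energetic long-burst-like numbers
print(classify(1e50, 500.0))  # short-burst-like numbers
```

A full implementation of the scheme would also apply the $T_{90,z} \lesssim 5$ s criterion mentioned in the abstract to separate Type I candidates from nearby low-luminosity long GRBs.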
Prior information often takes the form of parameter constraints. Bayesian methods include such information through prior distributions having constrained support. By using posterior sampling algorithms, one can quantify uncertainty without relying on asymptotic approximations. However, sharply constrained priors are (a) not necessary in some settings and (b) tend to limit modeling scope to a narrow set of distributions that are tractable computationally. Inspired by the vast literature that replaces the spike-and-slab prior with a continuous approximation, we propose to replace the sharp indicator function of the constraint with an exponential kernel, thereby creating a close-to-constrained neighborhood within the Euclidean space in which the constrained subspace is embedded. This kernel decays with distance from the constrained space at a rate depending on a relaxation hyperparameter. By avoiding the sharp constraint, we enable the use of off-the-shelf posterior sampling algorithms, such as Hamiltonian Monte Carlo, facilitating automatic computation in broad models. We study the constrained and relaxed distributions under multiple settings, and theoretically quantify their differences. We illustrate the method through multiple novel modeling examples.
arxiv:1801.01525
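The exponential-kernel relaxation described above can be sketched in one dimension. Below, a sharp positivity constraint on a Gaussian log-density is replaced by a kernel decaying with Euclidean distance to the constrained set; the rate 1/tau plays the role of the relaxation hyperparameter. The base density and parameter values are our own illustration, not the paper's examples:

```python
import numpy as np

def log_density_relaxed(theta, tau=0.05):
    # Unnormalized log-density: an N(1, 1) base term with the sharp indicator
    # 1{theta >= 0} replaced by exp(-d(theta, C) / tau), where d is the
    # Euclidean distance to the constrained set C = [0, inf).
    dist_to_C = np.maximum(-theta, 0.0)
    return -0.5 * (theta - 1.0) ** 2 - dist_to_C / tau

# Inside C the relaxation changes nothing; outside C, mass decays fast as tau -> 0.
inside = log_density_relaxed(0.5)
outside = log_density_relaxed(-0.5)
print(inside, outside)
```

Unlike the indicator, the relaxed log-density is finite and continuous everywhere, which is what lets off-the-shelf gradient-based samplers such as Hamiltonian Monte Carlo be applied directly.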
Mkn 3 is a Seyfert 2 galaxy that is widely regarded as an exemplary Compton-thick AGN. We study the Suzaku X-ray spectrum using models of the X-ray reprocessor that self-consistently account for the Fe K$\alpha$ fluorescent emission line and the associated Compton-scattered, or reflection, continuum. We find a solution in which the average global column density, $0.234^{+0.012}_{-0.010} \times 10^{24}\ \rm cm^{-2}$, is very different from the line-of-sight column density, $0.902^{+0.012}_{-0.013} \times 10^{24}\ \rm cm^{-2}$. The global column density is $\sim 5$ times smaller than that required for the matter distribution to be Compton-thick. Our model accounts for the profiles of the Fe K$\alpha$ and Fe K$\beta$ lines and the Fe K edge remarkably well, with a solar abundance of Fe. The matter distribution could consist of a clumpy medium with a line-of-sight column density higher than the global average. A uniform, spherically symmetric distribution alone cannot simultaneously produce the correct fluorescent line spectrum and reflection continuum. Previous works on Mkn 3, and other AGN, that assumed a reflection continuum from matter with an infinite column density could therefore lead to erroneous or "puzzling" conclusions if the matter out of the line of sight is really Compton-thin. Whereas studies of samples of AGN have generally only probed the line-of-sight column density with simplistic, one-dimensional models, it is important now to establish the global column densities in AGN. It is the global properties that affect the energy budget in terms of reprocessing of X-rays into infrared emission, and that constrain population synthesis models of the cosmic X-ray background.
arxiv:1508.07685
Recent observations have indicated the existence of dust in high-redshift galaxies; however, the dust properties in them are still unknown. Here we present theoretical constraints on dust properties in Lyman break galaxies (LBGs) at $z = 3$ by post-processing a cosmological smoothed particle hydrodynamics simulation with radiative transfer calculations. We calculate the dust extinction in 2800 dark matter halos using the metallicity information of individual gas particles in our simulation. We use only bright galaxies with rest-frame UV magnitude $M_{1700} < -20$ mag, and study the dust size, dust-to-metal mass ratio, and dust composition. From the comparison of the calculated color excess between B and V bands (i.e., E(B-V)) with the observations, we constrain the typical dust size and show that the best-fitting dust grain size is ~0.05 micron, which is consistent with the results of theoretical dust models for Type II supernovae. Our simulation with the dust extinction effect can naturally reproduce the observed rest-frame UV luminosity function of LBGs at $z = 3$ without assuming an ad hoc constant extinction value. In addition, in order to reproduce the observed mean E(B-V), we find that the dust-to-metal mass ratio needs to be similar to that of local galaxies, and that graphite dust is dominant or at least occupies half of the dust mass.
arxiv:1309.7389
The special issue contains a monograph by M. Popel, in which the methodical foundations of the formation of professional competences of mathematics teachers in institutions of higher education of Ukraine are considered; the place of the cloud service CoCalc in the system of teaching mathematical disciplines is specified; the features of CoCalc use in teaching mathematical disciplines are discovered, and a model of the use of the cloud service CoCalc as a means of forming the professional competences of mathematics teachers is developed; a method of using CoCalc as a means of forming the professional competencies of mathematics teachers is designed. For scientists, postgraduates, teachers of mathematical disciplines, and students of pedagogical educational institutions, and all who are interested in the application of cloud-oriented systems in education.
arxiv:1902.10507
Strokes are a leading cause of disability, with many survivors experiencing difficulty in recovering arm movement, particularly hand function and grasping ability. There is currently no objective measure of movement quality, and without it, rehabilitative interventions remain at best estimations of the underlying neural structures' response to produce movement. In this paper, we utilize a novel modification of the Procrustean distance to quantify curve dissimilarity and propose the Reach Severity and Dissimilarity Index (RSDI) as an objective measure of motor deficits. All experiments took place at the MedStar National Rehabilitation Hospital; persons with stroke were recruited from the hospital patient population. Using Fugl-Meyer (FM) scores and reach capacities, stroke survivors were placed in mild or severe impairment groups. Individuals completed sets of reach-to-target tasks to extrapolate kinematic metrics describing motor performance. The Procrustes method of statistical shape analysis was modified to identify reaching sub-movements that were congruous to able-bodied sub-movements. Movement initiation proceeds comparably to the reference curve in two- and three-dimensional representations of mild-impairment movement. There were significant effects of the location of congruent segments between subject and reference curves, mean velocities, peak roll angle, and target error. These metrics were used to calculate a preliminary RSDI score with severity and dissimilarity sub-scores, and subjects were reclassified in terms of rehabilitation goals as speed emphasis, strength emphasis, and combined emphasis. The modified Procrustes method shows promise in identifying disruptions in movement and monitoring recovery without adding to patient burden. The proposed RSDI score, while limited in scope, can be adapted and expanded to other functional movements and used as an objective clinical tool.
arxiv:2305.13524
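The classical Procrustes statistic that the paper modifies can be sketched with NumPy. This is the ordinary (unmodified) disparity, which removes translation, scale, and rotation/reflection before comparing curve shapes; the paper's modification for identifying congruent sub-movements is not reproduced here, and the reach paths below are toy curves:

```python
import numpy as np

def procrustes_disparity(X, Y):
    # Ordinary Procrustes distance between two curves sampled as (n, d) arrays:
    # centre each, normalise to unit Frobenius norm, then remove the optimal
    # orthogonal alignment via the SVD. A value of 0 means identical shape.
    X0 = X - X.mean(axis=0)
    Y0 = Y - Y.mean(axis=0)
    X0 = X0 / np.linalg.norm(X0)
    Y0 = Y0 / np.linalg.norm(Y0)
    s = np.linalg.svd(X0.T @ Y0, compute_uv=False)
    return 1.0 - s.sum() ** 2

t = np.linspace(0.0, 1.0, 50)
reach = np.column_stack([t, np.sin(np.pi * t)])            # toy reference reach path
c, sn = np.cos(0.3), np.sin(0.3)
moved = 2.0 * reach @ np.array([[c, -sn], [sn, c]]) + 5.0  # rotated, scaled, shifted copy
print(procrustes_disparity(reach, moved))                  # ~0: same shape
```

Because the statistic is invariant to where, how large, and at what angle a reach is performed, it isolates the shape of the trajectory, which is what makes it a candidate basis for a movement-quality measure.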
In support of the main conjecture.
arxiv:0908.3667
An accurate force calculation with the Poisson-Boltzmann equation is challenging, as it requires the electric field on the molecular surface. Here, we present a calculation of the electric field on the solute-solvent interface that is exact for piecewise-linear variations of the potential, and analyze four different alternatives to compute the force using a boundary element method. We performed a verification exercise for two cases: an isolated molecule and two interacting molecules. Our results suggest that the boundary element method outperforms the finite difference method, as the latter needs a much finer mesh than in solvation energy calculations to achieve acceptable accuracy in the force, whereas the same surface mesh as a standard energy calculation is appropriate for the boundary element method. Among the four evaluated alternatives of force calculation, the most accurate one is based on the Maxwell stress tensor. However, for a realistic application, like the barnase-barstar complex, the approach based on variations of the energy functional, which is less accurate, gives equivalent results. This analysis is useful towards using the Poisson-Boltzmann equation for force calculations in applications where high accuracy is key, for example, to feed molecular dynamics models or to enable the study of the interaction between large molecular structures, like viruses adsorbed onto substrates.
arxiv:2301.05019
We perform a detailed study of linear perturbations of the JMaRT family of non-BPS smooth horizonless solutions of type IIB supergravity beyond the near-decoupling limit. In addition to the unstable quasinormal modes (QNMs) responsible for the ergo-region instability, already studied in the literature, we find a new class of "charged" unstable modes with positive imaginary part, which can be interpreted in terms of the emission of charged (scalar) quanta with non-zero KK momentum. We use both matched asymptotic expansions and numerical integration methods. Moreover, we exploit the recently discovered correspondence between JMaRT perturbation theory, governed by a reduced confluent Heun equation, and the quantum Seiberg-Witten (SW) curve of $\mathcal{N} = 2$ SYM theory with gauge group SU(2) and $N_f = (0, 2)$ flavours.
arxiv:2305.00865
Illusions of causality occur when people develop the belief that there is a causal connection between two variables with no supporting evidence. This cognitive bias has been proposed to underlie many societal problems, including social prejudice, stereotype formation, misinformation, and superstitious thinking. In this research we investigate whether large language models develop the illusion of causality in real-world settings. We evaluated and compared news headlines generated by GPT-4o-mini, Claude-3.5-Sonnet, and Gemini-1.5-Pro to determine whether the models incorrectly framed correlations as causal relationships. In order to also measure sycophantic behavior, which occurs when a model aligns with a user's beliefs in order to look favorable even if it is not objectively correct, we additionally incorporated the bias into the prompts, observing whether this manipulation increases the likelihood of the models exhibiting the illusion of causality. We found that Claude-3.5-Sonnet is the model that presents the lowest degree of causal illusion, in line with experiments on correlation-to-causation exaggeration in human-written press releases. On the other hand, our findings suggest that while mimicry sycophancy increases the likelihood of causal illusions in these models, especially in GPT-4o-mini, Claude-3.5-Sonnet remains the most robust against this cognitive bias.
arxiv:2410.11684
One-shot action recognition allows the recognition of human-performed actions with only a single training example. This can influence human-robot interaction positively by enabling the robot to react to previously unseen behaviour. We formulate the one-shot action recognition problem as a deep metric learning problem and propose a novel image-based skeleton representation that performs well in a metric learning setting. We therefore train a model that projects the image representations into an embedding space. In the embedding space, similar actions have a low Euclidean distance while dissimilar actions have a higher distance. The one-shot action recognition problem then becomes a nearest-neighbor search in a set of activity reference samples. We evaluate the performance of our proposed representation against a variety of other skeleton-based image representations. In addition, we present an ablation study that shows the influence of different embedding vector sizes, losses, and augmentation. Our approach lifts the state of the art by 3.3% for the one-shot action recognition protocol on the NTU RGB+D 120 dataset under a comparable training setup. With additional augmentation, our result improves by over 7.7%.
arxiv:2012.13823
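The nearest-neighbour step of the above formulation is straightforward to sketch. The embeddings and labels below are toy stand-ins for the output of the trained encoder, which this snippet does not attempt to reproduce:

```python
import numpy as np

def one_shot_classify(query_emb, ref_embs, ref_labels):
    # One-shot recognition as nearest-neighbour search: one reference
    # embedding per action class, compared by Euclidean distance.
    dists = np.linalg.norm(ref_embs - query_emb, axis=1)
    return ref_labels[int(np.argmin(dists))]

ref_embs = np.array([[1.0, 0.0], [0.0, 1.0]])   # toy embeddings, one per action
ref_labels = ["waving", "sitting down"]
print(one_shot_classify(np.array([0.9, 0.2]), ref_embs, ref_labels))  # -> waving
```

Adding a new action class needs no retraining: a single reference sample is embedded once and appended to `ref_embs`, which is the practical appeal of the metric learning formulation.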
The ability to predict the long-term impact of a scientific article soon after its publication is of great value towards accurate assessment of research performance. In this work we test the hypothesis that good predictions of long-term citation counts can be obtained through a combination of a publication's early citations and the impact factor of the hosting journal. The test is performed on a corpus of 123,128 WoS publications authored by Italian scientists, using linear regression models. The average accuracy of the prediction is good for citation time windows above two years, decreases for lowly cited publications, and varies across disciplines. As expected, the role of the impact factor in the combination becomes negligible after only two years from publication.
arxiv:1909.08907
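The regression above can be sketched on synthetic data. Everything below is fabricated for illustration; only the model form (long-term citations regressed on early citations and journal impact factor) follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
early = rng.poisson(5, n).astype(float)    # synthetic early citation counts
jif = rng.gamma(2.0, 2.0, n)               # synthetic journal impact factors
# Synthetic "true" long-term counts with known coefficients and noise:
long_term = 3.0 * early + 1.5 * jif + rng.normal(0.0, 1.0, n)

# Ordinary least squares fit of long_term ~ early + jif + intercept.
X = np.column_stack([early, jif, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, long_term, rcond=None)
pred = X @ coef
r2 = 1 - ((long_term - pred) ** 2).sum() / ((long_term - long_term.mean()) ** 2).sum()
print(coef, r2)
```

On real data, the abstract's finding corresponds to the fitted impact-factor coefficient shrinking toward zero as the early-citation window grows beyond two years.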
Accurate left atrium (LA) segmentation from pre-operative scans is crucial for diagnosing atrial fibrillation, treatment planning, and supporting surgical interventions. While deep learning models are key in medical image segmentation, they often require extensive manually annotated data. Foundation models trained on larger datasets have reduced this dependency, enhancing generalizability and robustness through transfer learning. We explore DINOv2, a self-supervised learning vision transformer trained on natural images, for LA segmentation using MRI. The LA's complex anatomy, thin boundaries, and limited annotated data make accurate segmentation difficult before and during the image-guided intervention. We demonstrate DINOv2's ability to provide accurate and consistent segmentation, achieving a mean Dice score of 0.871 and a Jaccard index of 0.792 for end-to-end fine-tuning. Through few-shot learning across various data sizes and patient counts, DINOv2 consistently outperforms baseline models. These results suggest that DINOv2 effectively adapts to MRI with limited data, highlighting its potential as a competitive tool for segmentation and encouraging broader use in medical imaging.
arxiv:2411.09598
Chern-Simons theory on a closed contact three-manifold is studied when the Lie group for gauge transformations is compact, connected, and abelian. A rigorous definition of an abelian Chern-Simons partition function is derived using the Faddeev-Popov gauge fixing method. A symplectic abelian Chern-Simons partition function is also derived using the technique of non-abelian localization. This physically identifies the symplectic abelian partition function with the abelian Chern-Simons partition function as rigorous topological three-manifold invariants. This study leads to a natural identification of the abelian Reidemeister-Ray-Singer torsion as a specific multiple of the natural unit symplectic volume form on the moduli space of flat abelian connections for the class of Sasakian three-manifolds. The torsion part of the abelian Chern-Simons partition function is computed explicitly in terms of Seifert data for a given Sasakian three-manifold.
arxiv:1208.1724
While the body of research focusing on intelligent environments (IEs) programming by adults is steadily growing, informed insights about children as programmers of such environments are limited. Previous work has already established that young children can learn programming basics. Yet there is still a need to investigate whether this capability can be transferred to the context of IEs, since encouraging children to participate in the management of their intelligent surroundings can enhance responsibility, independence, and the spirit of cooperation. We performed a user study (N = 15) with children aged 7-12, using a block-based, gamified AR spatial coding prototype that allows manipulating smart artifacts in an intelligent living room. Our results validated that children understand and can indeed program IEs. Based on our findings, we contribute preliminary implications regarding the use of specific technologies and paradigms (e.g., AR, trigger-action programming) to inspire future systems that enable children to create enriching experiences in IEs.
arxiv:2105.04904
In this paper, we present a non-parametric structured latent variable model for image generation, called NP-DRAW, which sequentially draws on a latent canvas in a part-by-part fashion and then decodes the image from the canvas. Our key contributions are as follows. 1) We propose a non-parametric prior distribution over the appearance of image parts, so that the latent variable "what-to-draw" per step becomes a categorical random variable. This improves the expressiveness and greatly eases the learning compared to the Gaussians used in the literature. 2) We model the sequential dependency structure of parts via a transformer, which is more powerful and easier to train than the RNNs used in the literature. 3) We propose an effective heuristic parsing algorithm to pre-train the prior. Experiments on MNIST, Omniglot, CIFAR-10, and CelebA show that our method significantly outperforms previous structured image models like DRAW and AIR and is competitive with other generic generative models. Moreover, we show that our model's inherent compositionality and interpretability bring significant benefits in the low-data learning regime and in latent space editing. Code is available at https://github.com/zengxh/npdraw.
arxiv:2106.13435
we study the natural extended-variable formulation for the disjunction of $n+1$ polytopes in $\mathbb{R}^d$. we demonstrate that the convex hull $D$ in the natural extended-variable space $\mathbb{R}^{d+n}$ is given by full optimal big-m lifting (i) when $d \leq 2$ (and that this is not generally true for $d \geq 3$), and also (ii) under some technical conditions, when the polytopes have a common facet-describing constraint matrix, for arbitrary $d \geq 1$ and $n \geq 1$. we give a broad family of examples with $d \geq 3$ and $n = 1$ where the convex hull is not described after employing all full optimal big-m lifting inequalities, but is described after one round of mir inequalities. additionally, we give some general results on the polyhedral structure of $D$, and we demonstrate that all facets of $D$ can be enumerated in polynomial time when $d$ is fixed.
arxiv:2407.15244
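as background for the big-m lifting discussed above, a textbook big-m model of the disjunction $x \in P_0 \cup \cdots \cup P_n$, with $P_i = \{x \in \mathbb{R}^d : A_i x \le b_i\}$, can be sketched as follows; the constants $M_i$ are illustrative, and "full optimal" lifting chooses each entry of $M_i$ as small as validity allows:

```latex
% big-M model of the disjunction: the binary vector z selects one polytope.
% eliminating z_0 = 1 - \sum_{i=1}^{n} z_i gives a formulation in the
% natural extended-variable space \mathbb{R}^{d+n} of (x, z_1, \dots, z_n).
\begin{align*}
  A_i x &\le b_i + M_i (1 - z_i), && i = 0, 1, \dots, n, \\
  \sum_{i=0}^{n} z_i &= 1, \qquad z_i \in \{0, 1\}.
\end{align*}
```

when $z_i = 1$ the $i$-th block of constraints is active and the others are relaxed by their big-m terms, so feasible $x$ lie in $P_i$.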
we address the problem of how to "obfuscate" texts by removing stylistic clues which can identify authorship, whilst preserving (as much as possible) the content of the text. in this paper we combine ideas from "generalised differential privacy" and machine learning techniques for text processing to model privacy for text documents. we define a privacy mechanism that operates at the level of text documents represented as "bags-of-words" - these representations are typical in machine learning and contain sufficient information to carry out many kinds of classification tasks including topic identification and authorship attribution (of the original documents). we show that our mechanism satisfies privacy with respect to a metric for semantic similarity, thereby providing a balance between utility, defined by the semantic content of texts, and the obfuscation of stylistic clues. we demonstrate our implementation on a "fan fiction" dataset, confirming that it is indeed possible to disguise writing style effectively whilst preserving enough information and variation for accurate content classification tasks.
arxiv:1811.10256
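a minimal sketch of the word-level idea: perturb a word's embedding with noise scaled by a privacy parameter, then release the nearest vocabulary word. the tiny 2-d embeddings, the vocabulary, and the use of independent laplace noise are all illustrative simplifications; the paper's mechanism calibrates noise to a semantic similarity metric.

```python
import numpy as np

# illustrative word-level privacy mechanism: perturb a word's embedding,
# then release the vocabulary word nearest to the noisy point.
# toy 2-d embeddings; independent laplace noise per coordinate is a
# simplification of metric-calibrated noise.
rng = np.random.default_rng(0)
vocab = {
    "good": np.array([1.0, 0.0]),
    "great": np.array([0.9, 0.2]),
    "terrible": np.array([-1.0, 0.0]),
}

def privatize(word, epsilon):
    # smaller epsilon -> heavier noise -> more privacy, less utility
    noisy = vocab[word] + rng.laplace(scale=1.0 / epsilon, size=2)
    return min(vocab, key=lambda w: np.linalg.norm(vocab[w] - noisy))

print(privatize("good", epsilon=1e6))   # tiny noise: almost surely "good"
print(privatize("good", epsilon=1.0))   # heavier noise: may map to a neighbor
```

repeated over a whole document, nearby words (here "good"/"great") become statistically interchangeable, obscuring stylistic word choice while roughly preserving topic-level content.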
focusing on the bipartite stable marriage problem, we investigate different robustness measures related to stable matchings. we analyze the computational complexity of computing them and study their behavior in extensive experiments on synthetic instances. for instance, we examine whether a stable matching is guaranteed to remain stable if a given number of adversarial swaps in the agents' preferences are performed, and the probability that stability is preserved when swaps are applied uniformly at random. our results reveal that stable matchings in our synthetic data are highly unrobust to adversarial swaps, whereas the average-case view presents a more nuanced and informative picture.
arxiv:2408.09160
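to make the adversarial-swap notion concrete, here is a small self-contained sketch (not the paper's code): deferred acceptance produces a stable matching, a blocking-pair check certifies stability, and a single adjacent swap in one agent's preference list can already break it.

```python
from itertools import product

def gale_shapley(m_prefs, w_prefs):
    """proposer-side deferred acceptance; prefs are lists of partner indices."""
    n = len(m_prefs)
    nxt = [0] * n                      # next woman each man will propose to
    wife, husband = [None] * n, [None] * n
    rank = [{m: r for r, m in enumerate(p)} for p in w_prefs]
    free = list(range(n))
    while free:
        m = free.pop()
        w = m_prefs[m][nxt[m]]
        nxt[m] += 1
        if husband[w] is None:
            husband[w], wife[m] = m, w
        elif rank[w][m] < rank[w][husband[w]]:
            free.append(husband[w])    # w trades up; her old partner is freed
            wife[husband[w]] = None
            husband[w], wife[m] = m, w
        else:
            free.append(m)
    return wife

def is_stable(wife, m_prefs, w_prefs):
    """true iff no pair (m, w) prefers each other to their assigned partners."""
    n = len(m_prefs)
    m_rank = [{w: r for r, w in enumerate(p)} for p in m_prefs]
    w_rank = [{m: r for r, m in enumerate(p)} for p in w_prefs]
    husband = {w: m for m, w in enumerate(wife)}
    for m, w in product(range(n), range(n)):
        if w != wife[m] and m_rank[m][w] < m_rank[m][wife[m]] \
                and w_rank[w][m] < w_rank[w][husband[w]]:
            return False               # (m, w) is a blocking pair
    return True

m_prefs = [[0, 1], [0, 1]]
w_prefs = [[0, 1], [0, 1]]
wife = gale_shapley(m_prefs, w_prefs)
print(is_stable(wife, m_prefs, w_prefs))   # True
w_prefs[0] = [1, 0]                        # one adversarial adjacent swap
print(is_stable(wife, m_prefs, w_prefs))   # False: the matching broke
```

counting, over many random swaps, how often `is_stable` still returns true is exactly the average-case probability-of-stability measure studied above.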
nanofluidics is pivotal in fundamental research and diverse applications, from water desalination to energy harvesting and biological analysis. dynamically manipulating nanofluidic properties, such as diffusion and friction, presents an avenue for advancement in this field. twisted bilayer graphene, particularly at the magic angle, has garnered attention for its unconventional superconductivity and correlated insulator behavior due to strong electronic correlations. however, the impact of the electronic properties of moiré patterns in twisted bilayer graphene on the structural and dynamic properties of water remains largely unexplored. computational challenges, stemming from simulating large unit cells with density functional theory, have hindered progress. this study addresses this gap by investigating water behavior on twisted bilayer graphene, employing a deep neural network potential (dp) model trained on a dataset from ab initio molecular dynamics simulations. it is found that as the twist angle approaches the magic angle, interfacial water friction increases, leading to reduced water diffusion. notably, the analysis shows that at smaller twist angles with larger moiré patterns, water is more likely to reside in aa stacking regions than in ab (or ba) stacking regions, a distinction that diminishes with smaller moiré patterns. this exploration illustrates the potential for leveraging the distinctive properties of twisted bilayer graphene to effectively control and optimize nanofluidic behavior.
arxiv:2312.06830
diffusion models can be used as learned priors for solving various inverse problems. however, most existing approaches are restricted to linear inverse problems, limiting their applicability to more general cases. in this paper, we build upon denoising diffusion restoration models (ddrm) and propose a method for solving some non-linear inverse problems. we leverage the pseudo-inverse operator used in ddrm and generalize this concept for other measurement operators, which allows us to use pre-trained unconditional diffusion models for applications such as jpeg artifact correction. we empirically demonstrate the effectiveness of our approach across various quality factors, attaining performance levels that are on par with state-of-the-art methods trained specifically for the jpeg restoration task.
arxiv:2209.11888
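the linear-algebra identity that the pseudo-inverse construction leans on can be illustrated without any diffusion model (the toy downsampling operator below is our choice, not the paper's): applying the pseudo-inverse to the measurements recovers the component of the signal in the row space of the measurement operator, while the null-space component is left for the generative prior to fill in.

```python
import numpy as np

# toy linear measurement: keep the first two coordinates, drop the third.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
x = np.array([2.0, -1.0, 5.0])     # unknown signal
y = H @ x                          # observed measurements

H_pinv = np.linalg.pinv(H)         # moore-penrose pseudo-inverse
x_range = H_pinv @ y               # row-space component of x, recovered exactly
print(x_range)                     # [ 2. -1.  0.]: the third coordinate is
                                   # unobserved and must come from the prior
```

ddrm exploits this split in a diffusion-guided way for linear operators; the paper above generalizes the pseudo-inverse idea to non-linear operators such as jpeg compression.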
the biomedical field relies heavily on concept linking in various areas such as literature mining, graph alignment, information retrieval, question-answering, and data and knowledge integration. although large language models (llms) have made significant strides in many natural language processing tasks, their effectiveness in biomedical concept mapping is yet to be fully explored. this research investigates a method that exploits the in-context learning (icl) capabilities of large models for biomedical concept linking. the proposed approach adopts a two-stage retrieve-and-rank framework. initially, biomedical concepts are embedded using language models, and embedding similarity is then utilized to retrieve the top candidates. these candidates' contextual information is subsequently incorporated into the prompt and processed by a large language model to re-rank the concepts. this approach achieved an accuracy of 90.% in bc5cdr disease entity normalization and 94.7% in chemical entity normalization, exhibiting competitive performance relative to supervised learning methods. further, it showed a significant improvement, with an over 20-point absolute increase in f1 score, on an oncology matching dataset. extensive qualitative assessments were conducted, and the benefits and potential shortcomings of using large language models within the biomedical domain were discussed.
arxiv:2307.01137
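the first, retrieval stage of the retrieve-and-rank framework can be sketched as cosine-similarity search over concept embeddings. the toy vectors and concept names below are illustrative; a real system would embed mentions and concepts with a language model and then re-rank the returned candidates via an llm prompt.

```python
import numpy as np

def top_k(query_vec, concept_vecs, names, k=2):
    """return the k concepts most cosine-similar to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    c = concept_vecs / np.linalg.norm(concept_vecs, axis=1, keepdims=True)
    sims = c @ q                       # cosine similarity to each concept
    order = np.argsort(-sims)[:k]
    return [(names[i], float(sims[i])) for i in order]

names = ["lung carcinoma", "breast carcinoma", "hypertension"]
vecs = np.array([[0.9, 0.1, 0.0],
                 [0.7, 0.6, 0.0],
                 [0.0, 0.1, 0.9]])
query = np.array([0.95, 0.05, 0.0])    # toy embedding of the mention "lung cancer"
print(top_k(query, vecs, names))       # "lung carcinoma" ranks first
```

only the short candidate list (with contextual information) then needs to fit into the llm prompt for re-ranking, which keeps the second stage cheap.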
the concepts of the perfect system and degeneracy are introduced. a special symmetry is found which is related to the entropy invariant. the inversion relation of the system is obtained, which is used to give the opposite direction of time to the classical second law of thermodynamics. the nature of time is discussed together with the causality relation. a new understanding of quantum mechanics is put forward which describes a new picture of the world.
arxiv:quant-ph/9605019
the largest eigenvalue of the hessian, or sharpness, of neural networks is a key quantity to understand their optimization dynamics. in this paper, we study the sharpness of deep linear networks for univariate regression. minimizers can have arbitrarily large sharpness, but not an arbitrarily small one. indeed, we show a lower bound on the sharpness of minimizers, which grows linearly with depth. we then study the properties of the minimizer found by gradient flow, which is the limit of gradient descent with vanishing learning rate. we show an implicit regularization towards flat minima: the sharpness of the minimizer is no more than a constant times the lower bound. the constant depends on the condition number of the data covariance matrix, but not on width or depth. this result is proven both for a small-scale initialization and a residual initialization. results of independent interest are shown in both cases. for small-scale initialization, we show that the learned weight matrices are approximately rank-one and that their singular vectors align. for residual initialization, convergence of the gradient flow for a gaussian initialization of the residual network is proven. numerical experiments illustrate our results and connect them to gradient descent with non-vanishing learning rate.
arxiv:2405.13456
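the depth-linear growth of sharpness can be seen in an extreme toy case (one data point and scalar weights; this simplification is ours, not the paper's setting): for the loss $(w_1 \cdots w_L - 1)^2$, at the balanced minimizer $w_i = 1$ the hessian equals twice the all-ones matrix, so the sharpness is exactly $2L$.

```python
import numpy as np

def sharpness_at_balanced_minimizer(depth):
    """largest hessian eigenvalue of (w_1 * ... * w_L - 1)^2 at w_i = 1."""
    w = np.ones(depth)
    # at a zero-loss minimizer the hessian reduces to 2 * g g^T, where
    # g_i = d(prod w)/dw_i = product of the other weights (= 1 here);
    # the second-derivative term vanishes because the residual is zero.
    g = np.array([np.prod(np.delete(w, i)) for i in range(depth)])
    H = 2.0 * np.outer(g, g)
    return float(np.linalg.eigvalsh(H).max())

for L in (1, 2, 4, 8):
    print(L, sharpness_at_balanced_minimizer(L))   # 2, 4, 8, 16: linear in depth
```

the rank-one hessian $2 g g^\top$ has top eigenvalue $2\|g\|^2 = 2L$, matching the linear-in-depth lower bound discussed above in miniature.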
we prove that spacetime is locally inertial at points of shock wave collision in general relativity. the result applies for collisions between shock waves coming from different characteristic families, in spherically symmetric spacetimes. we give a constructive proof that there exist coordinate transformations which raise the regularity of the gravitational metric tensor from $C^{0,1}$ to $C^{1,1}$ in a neighborhood of such points of shock wave interaction, and a $C^{1,1}$ metric regularity suffices for locally inertial frames to exist. this result corrects an error in our earlier rspa-publication, which led us to the wrong conclusion that such coordinate transformations, which smooth the metric to $C^{1,1}$, cannot exist. our result here proves that regularity singularities (a type of mild singularity introduced in our rspa-publication) do not exist at points of interacting shock waves from different families in spherically symmetric spacetimes, and this generalizes israel's famous 1966 result to the case of such shock wave interactions. the strategy of proof here is an extension of the strategy outlined in our rspa-paper, but differs fundamentally from the method used by israel. the question whether regularity singularities exist in more complicated shock wave solutions of the einstein-euler equations still remains open.
arxiv:1409.5060
we show that a canonical, minimally coupled scalar field which is non-self-interacting and massless is equivalent to a null dust fluid (whether it is a test or a gravitating field), in a spacetime region in which its gradient is null. under similar conditions, the gravitating and nonminimally coupled brans-dicke-like scalar of scalar-tensor gravity, instead, cannot be represented as a null dust unless its gradient is also a killing vector field.
arxiv:1812.06457
in a broad class of scenarios, inflation is followed by an extended era of matter-dominated expansion during which the inflaton condensate is nonrelativistic on subhorizon scales. during this phase density perturbations grow to the point of nonlinearity and collapse into bound structures. this epoch strongly resembles structure formation with ultra-light axion-like particles. this parallel permits us to adapt results from studies of cosmological structure formation to describe the nonlinear dynamics of this post-inflationary epoch. we show that the inflaton condensate fragments into "inflaton clusters", analogues of axion dark matter halos in present-day cosmology. moreover, solitonic objects or "inflaton stars" can form inside these clusters, leading to density contrasts as large as $10^6$ in the post-inflationary universe.
arxiv:1911.01661
we propose and analyze an algorithmic framework for "bias bounties": events in which external participants are invited to propose improvements to a trained model, akin to bug bounty events in software and security. our framework allows participants to submit arbitrary subgroup improvements, which are then algorithmically incorporated into an updated model. our algorithm has the property that there is no tension between overall and subgroup accuracies, nor between different subgroup accuracies, and it enjoys provable convergence to either the bayes optimal model or a state in which no further improvements can be found by the participants. we provide formal analyses of our framework, experimental evaluation, and findings from a preliminary bias bounty event.
arxiv:2201.10408
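the pointwise-update idea at the heart of such a framework can be sketched as follows: a participant submits a subgroup indicator g and a candidate model h, and the submission is accepted only if h beats the current model on that subgroup; the updated model defers to h on the subgroup and is unchanged elsewhere, so other subgroups cannot be hurt. the function names and toy data are illustrative, not the paper's code.

```python
def error(model, xs, ys):
    """misclassification rate of model on the labeled points (xs, ys)."""
    return sum(model(x) != y for x, y in zip(xs, ys)) / len(xs)

def try_update(f, g, h, xs, ys):
    """accept (g, h) iff h improves on f over the subgroup selected by g."""
    sub = [(x, y) for x, y in zip(xs, ys) if g(x)]
    if not sub:
        return f, False
    sx, sy = zip(*sub)
    if error(h, sx, sy) < error(f, sx, sy):
        # patch f pointwise: use h on the subgroup, keep f everywhere else
        return (lambda x, f=f, g=g, h=h: h(x) if g(x) else f(x)), True
    return f, False

# toy bounty: the current model always predicts 0, but negatives are labeled 1.
xs = [-2, -1, 0, 1, 2]
ys = [1, 1, 0, 0, 0]
f = lambda x: 0
g = lambda x: x < 0          # submitted subgroup: negative inputs
h = lambda x: 1              # submitted improvement on that subgroup
f2, accepted = try_update(f, g, h, xs, ys)
print(accepted, [f2(x) for x in xs])   # True [1, 1, 0, 0, 0]
```

because each accepted patch strictly lowers error on its subgroup and leaves predictions outside it untouched, repeated rounds can only improve overall accuracy, which is the mechanism behind the convergence guarantee described above.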