We present new VLA observations at 1.4 GHz confirming the presence of a radio halo at the centre of the cluster A3562, in the core of the Shapley Concentration. We also report a detailed multifrequency radio study of the head-tail galaxy J1333--3141, which is completely embedded in the halo emission. The radio halo has an irregular shape and a largest linear size of $\sim 620$ kpc, which is among the smallest found in the literature. The source has a steep spectrum, i.e. $\alpha_{843\,\rm MHz}^{1.4\,\rm GHz} \sim 2$, and its total radio power, P$_{1.4\,\rm GHz} \sim 2\times10^{23}$ W Hz$^{-1}$, is the lowest known to date. The radio power of the halo and the X-ray parameters of the cluster, such as L$_X$ and kT, nicely fit the correlations found in the literature for the other halo clusters, extending them to low radio powers. We found that the total number of electrons injected into the cluster environment by the head-tail source is enough to feed the halo, if we assume that the galaxy has been radio active over a large fraction of its crossing time. We discuss possible origins of the radio halo in the light of the two-phase model (Brunetti et al. 2001) and propose that the observed scenario is the result of a young source at the beginning of the reacceleration phase.
arxiv:astro-ph/0302080
In this paper, we study the interaction between two two-level atoms and two coupled modes of a quantized radiation field, in the form of a parametric frequency converter injected into an optical cavity enclosed by a medium with Kerr nonlinearity. It is demonstrated that, by applying the Bogoliubov-Valatin canonical transformation, the introduced model is reduced to a well-known form of the generalized Jaynes-Cummings model. Then, under particular initial conditions which may be prepared for the atoms (a coherent superposition of the ground and upper states) and the fields (a standard coherent state), the time evolution of the state vector of the entire system is analytically evaluated. In order to quantify the degree of entanglement between subsystems (atom-field and atom-atom), the dynamics of entanglement is evaluated through different measures, namely von Neumann reduced entropy, concurrence and negativity. In each case, the effects of the Kerr nonlinearity and the detuning parameter on the above criteria are numerically analyzed in detail. It is illustrated that the degree of entanglement can be tuned by choosing the involved parameters appropriately.
arxiv:1407.8302
We experiment with adding a dynamical gauge field to Kaplan (defect) fermions. In the case of U(1) gauge theory we use an inhomogeneous Higgs mechanism to restrict the 3d gauge dynamics to a planar 2d defect. In our simulations the 3d theory produces the correct 2d gauge dynamics. We measure fermion propagators with dynamical gauge fields. They possess the correct chiral structure. The fermions at the boundary of the support of the gauge field (waveguide) are non-chiral, and have a mass two times heavier than the chiral modes. Moreover, these modes cannot be excited by a source at the defect, implying that they are dynamically decoupled. We have also checked that the anomaly relation is fulfilled for the case of a smooth external gauge field. This is a uuencoded PS file; use 'uudecode hepchiral.ps.z' and 'uncompress hepchiral.ps.z' to produce the PS file.
arxiv:hep-lat/9312045
We study the monodromy representation of the system $E_C$ of differential equations annihilating Lauricella's hypergeometric function $F_C$ of $m$ variables. Our representation space is the twisted homology group associated with an integral representation of $F_C$. We find generators of the fundamental group of the complement of the singular locus of $E_C$, and give some relations for these generators. We express the circuit transformations along these generators by using the intersection forms defined on the twisted homology group and its dual.
arxiv:1403.1654
The theory of dynamical systems is a very complex subject which has brought several surprises in the recent past, in connection with the theory of chaos and fractals. The application of the tools of dynamical systems in cosmological settings is less well known, in spite of the number of published scientific papers on this subject. In this paper a mostly pedagogical introduction to the application in cosmology of the basic tools of dynamical systems theory is presented. It is shown that, in spite of their amazing simplicity, these tools allow one to extract essential information on the asymptotic dynamics of a wide variety of cosmological models. The power of these tools is illustrated within the context of the so-called $\Lambda$CDM and scalar field models of dark energy. This paper is suitable for teachers, undergraduate and postgraduate students in physics and mathematics.
arxiv:1501.04851
It is known, but perhaps not well known, that when mortality is assumed to be of Gompertz-Makeham type, the expected remaining life-length and the commutation functions used for calculating the expected values of various types of life insurance can be expressed with an incomplete gamma function with a negative shape parameter. This is not of much use if one's software cannot calculate these values. The aim of this note is to show that one can express the commutation functions using only the exponential function, the (ordinary) gamma function and the gamma distribution function, which are all implemented in common statistical and spreadsheet software. This eliminates the need to evaluate the commutation functions and the expected remaining life-length with numerical integration.
arxiv:0902.4855
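As a point of comparison for the closed-form expressions the note discusses, the following sketch computes the expected remaining life-length under a Gompertz-Makeham hazard by direct numerical integration of the survival function, i.e. the baseline the note aims to eliminate. The parameter values a, b, c and the integration horizon are illustrative placeholders, not taken from the note.

```python
import math

def gm_survival(t, x, a=0.0002, b=0.00003, c=1.1):
    # Survival function of the remaining lifetime at age x under the
    # Gompertz-Makeham hazard mu(x) = a + b*c**x:
    #   S(t) = exp(-a*t - b*(c**(x+t) - c**x) / ln(c))
    return math.exp(-a * t - b * (c ** (x + t) - c ** x) / math.log(c))

def expected_remaining_life(x, horizon=120.0, steps=12000):
    # e(x) = integral of the survival function, here by trapezoidal quadrature
    h = horizon / steps
    total = 0.5 * (gm_survival(0.0, x) + gm_survival(horizon, x))
    for i in range(1, steps):
        total += gm_survival(i * h, x)
    return h * total
```

The note's point is that this quadrature can be replaced by evaluations of the gamma function and the gamma distribution function available in spreadsheet software.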
To make predictions for the existence of "dark galaxies", it is necessary to understand what determines whether a gas cloud will form stars. Star formation thresholds are generally explained in terms of the Toomre criterion for gravitational instability. I contrast this theory with the thermo-gravitational instability hypothesis of Schaye (2004), in which star formation is triggered by the formation of a cold gas phase and which predicts a nearly constant surface density threshold. I argue that although the Toomre analysis is useful for the global stability of disc galaxies, it relies on assumptions that break down in the outer regions, where star formation thresholds are observed. The thermo-gravitational instability hypothesis can account for a number of observed phenomena, some of which were thought to be unrelated to star formation thresholds.
arxiv:0708.3366
In this paper it is shown that a measurement of the relative luminosity changes at the LHC may be obtained by analysing the currents drawn from the high-voltage power supplies of the electromagnetic section of the forward calorimeter of the ATLAS detector. The method was verified with a reproduction of a small section of the ATLAS forward calorimeter using proton beams of known beam energies and variable intensities at the U-70 accelerator at IHEP in Protvino, Russia. The experimental setup and the data taking during a test beam run in April 2008 are described in detail. A comparison of the measured high-voltage currents with reference measurements from beam intensity monitors shows a linear dependence on the beam intensity. The non-linearities are measured to be less than 0.5%, combining statistical and systematic uncertainties.
arxiv:1005.1784
The problem of constructing an exact solution of singular integro-differential equations related to problems of adhesive interaction between a thin, semi-infinite, homogeneous elastic patch and an elastic plate is investigated. For the patch loaded with horizontal forces, the usual model of the uniaxial stress state is valid. Using the methods of the theory of analytic functions and integral transformations, the singular integro-differential equation is reduced to the Riemann boundary value problem of the theory of analytic functions. The exact solution of this problem and asymptotic estimates of the tangential contact stresses are obtained.
arxiv:2405.16572
The design of data markets has gained importance as firms increasingly use machine learning models fueled by externally acquired training data. A key consideration is the externalities firms face when data, though inherently freely replicable, is allocated to competing firms. In this setting, we demonstrate that a data seller's optimal revenue increases as firms can pay to prevent allocations to others. To do so, we first reduce the combinatorial problem of allocating and pricing multiple datasets to the auction of a single digital good by modeling utility for data through the increase in prediction accuracy it provides. We then derive welfare- and revenue-maximizing mechanisms, highlighting how the form of firms' private information - whether the externalities one exerts on others are known, or vice versa - affects the resulting structures. In all cases, under appropriate assumptions, the optimal allocation rule is a single threshold per firm, where either all data is allocated or none is.
arxiv:2003.08345
We revisit the theory of null shells in general relativity, with a particular emphasis on null shells placed at horizons of black holes. We study in detail the considerable freedom that is available in the case that one solders two metrics together across null hypersurfaces (such as Killing horizons) for which the induced metric is invariant under translations along the null generators. In this case the group of soldering transformations turns out to be infinite dimensional, and these solderings create non-trivial horizon shells containing both massless matter and impulsive gravitational wave components. We also rephrase this result in the language of Carrollian symmetry groups. To illustrate this phenomenon we discuss in detail the example of shells on the horizon of the Schwarzschild black hole (with equal interior and exterior mass), uncovering a rich classical structure at the horizon and deriving an explicit expression for the general horizon shell energy-momentum tensor. In the special case of BMS-like soldering supertranslations we find a conserved shell energy that is strikingly similar to the standard expression for asymptotic BMS supertranslation charges, suggesting a direct relation between the physical properties of these horizon shells and the recently proposed BMS supertranslation hair of a black hole.
arxiv:1512.02858
The dipole formalism provides a powerful framework from which parton showers can be constructed. In a recent paper, we proposed a dipole shower with improved colour accuracy, and in this paper we show how it can be further improved. After an explicit check at $\mathcal{O}(\alpha_{\mathrm{s}}^{2})$ we confirm that our original shower performs as it was designed to, i.e. inheriting its handling of angular-ordered radiation from a coherent branching algorithm. We also show how other dipole shower algorithms fail to achieve this. Nevertheless, there is an $\mathcal{O}(\alpha_{\mathrm{s}}^{2})$ topology where it differs at sub-leading $N_{\mathrm{c}}$ from a coherent branching algorithm. This erroneous topology can contribute a leading logarithm to some observables and corresponds to emissions that are ordered in $k_t$ but not in angle. We propose a simple, computationally efficient way to correct this and assign colour factors in accordance with the coherence properties of QCD to all orders in $\alpha_{\mathrm{s}}$.
arxiv:2011.15087
We give a sufficient condition for an open 3-manifold to admit a decomposition along properly embedded open annuli and tori, generalizing the toric splitting of Jaco-Shalen and Johannson.
arxiv:0802.1447
We analyzed the temporal and spectral properties, focusing on the short bursts, of three anomalous X-ray pulsars (AXPs) and soft gamma repeaters (SGRs): SGR 1806-20, 1E 1048-5937 and SGR 0501+4516. Using data from XMM-Newton, we located the short bursts with a Bayesian blocks algorithm. The duration distributions of the short bursts for the three sources were fitted by two lognormal functions. For SGR 0501+4516, the spectra of the shorter bursts ($< 0.2~\rm s$) and longer bursts ($\geq 0.2~\rm s$) can be well fitted by a two-blackbody model or an optically thin thermal bremsstrahlung model. We also found a positive correlation between the burst luminosity and the persistent luminosity, with a power-law index $\Gamma = 1.23 \pm 0.18$. The ratio of the energy of the persistent emission to that of the time-averaged short bursts is in the range $10-10^3$, comparable to the case of type I X-ray bursts.
arxiv:1403.6244
Nowadays, metadata information is often given by the authors themselves upon submission. However, a significant part of already existing research papers have missing or incomplete metadata information. German scientific papers come in a large variety of layouts, which makes the extraction of metadata a non-trivial task that requires a precise way to classify the metadata extracted from the documents. In this paper, we propose a multimodal deep learning approach for metadata extraction from scientific papers in the German language. We consider multiple types of input data by combining natural language processing and image vision processing. This model aims to increase the overall accuracy of metadata extraction compared to other state-of-the-art approaches. It enables the utilization of both spatial and contextual features in order to achieve a more reliable extraction. Our model for this approach was trained on a dataset consisting of around 8800 documents and is able to obtain an overall F1-score of 0.923.
arxiv:2111.05736
The filtered Lie splitting scheme is an established method for the numerical integration of the periodic nonlinear Schrödinger equation at low regularity. Its temporal convergence was recently analyzed in a framework of discrete Bourgain spaces in one and two space dimensions for initial data in $H^s$ with $0 < s \leq 2$. Here, this analysis is extended to dimensions $d = 3, 4, 5$ for data satisfying $d/2 - 1 < s \leq 2$. In this setting, convergence of order $s/2$ in $L^2$ is proven. Numerical examples illustrate these convergence results.
arxiv:2312.11071
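The scheme described above can be illustrated with a minimal split-step sketch for a cubic NLS on the torus: one Lie step applies the exact nonlinear phase flow, then the linear flow in Fourier space, with a frequency filter of width $\tau^{-1/2}$ applied to the initial data. The sign convention, initial datum, and filter placement are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def filtered_lie_step(u, tau, k):
    # Lie splitting for i u_t = -u_xx + |u|^2 u on the torus:
    # (1) exact nonlinear substep (pointwise phase rotation),
    # (2) exact linear substep as a Fourier multiplier.
    u = np.exp(-1j * tau * np.abs(u) ** 2) * u
    return np.fft.ifft(np.exp(-1j * tau * k ** 2) * np.fft.fft(u))

n = 256
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)           # integer Fourier modes
tau = 1e-3
u = np.cos(x) + 0.5j * np.sin(2 * x)       # smooth initial datum (illustrative)

# frequency filter: keep only modes |k| <= tau**(-1/2) in the initial data
u = np.fft.ifft(np.where(np.abs(k) <= tau ** -0.5, np.fft.fft(u), 0.0))

norm0 = np.linalg.norm(u)
for _ in range(100):
    u = filtered_lie_step(u, tau, k)
```

Both substeps are $L^2$-isometries (unit-modulus multipliers in physical and Fourier space), so the discrete norm is conserved to rounding error, a quick sanity check on an implementation.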
In this paper, we show a mean convergence theorem for a mapping with an attractive point in a Hilbert space by using a quasinonexpansive extension of the mapping and a mean convergence theorem for a quasinonexpansive mapping.
arxiv:2205.11045
By a classical result of Jordan, each finite subgroup G of a complex linear group GL_n(C) has an abelian subgroup whose index in G is bounded by a constant depending only on n. We consider the problem of whether this remains true for finite subgroups G of the diffeomorphism group of a smooth manifold, and show that it is true for all compact 3-manifolds as well as for Euclidean spaces of dimension n < 7. The question remains open at present, e.g., for odd-dimensional spheres of dimension greater than or equal to five, and for Euclidean spaces of dimension greater than or equal to seven.
arxiv:1402.1612
A large volume of research has considered the creation of predictive models for clinical data; however, much existing literature reports results using only a single source of data. In this work, we evaluate the performance of models trained on the publicly available eICU Collaborative Research Database. We show that cross-validation using many distinct centers provides a reasonable estimate of model performance in new centers. We further show that a single model trained across centers transfers well to distinct hospitals, even compared to a model retrained using hospital-specific data. Our results motivate the use of multi-center datasets for model development and highlight the need for data sharing among hospitals to maximize model performance.
arxiv:1812.02275
Let $\chi$ be an irreducible character of a group $G$. We denote the sum of the codegrees of the irreducible characters of $G$ by $S_c(G) = \sum_{\chi \in {\rm Irr}(G)} {\rm cod}(\chi)$. We consider the question of whether $S_c(G) \leq S_c(C_n)$ is true for any finite group $G$, where $n = |G|$ and $C_n$ is a cyclic group of order $n$. We show this inequality holds for many classes of groups. In particular, we provide an affirmative answer for any finite group whose order is divisible by up to 99 primes. However, we show that the question does not hold true in all cases, by exhibiting a counterexample.
arxiv:2402.12628
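For the cyclic comparison group above, $S_c(C_n)$ has a simple closed form: every irreducible character of $C_n$ is linear, a character of order $d$ has kernel of index $d$ (so codegree $d$), and there are $\varphi(d)$ characters of each order $d \mid n$, giving $S_c(C_n) = \sum_{d \mid n} d\,\varphi(d)$. The sketch below evaluates this sum from standard facts; it is not code from the paper.

```python
from math import gcd

def euler_phi(d):
    # Euler's totient by direct count (fine for small d)
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

def codegree_sum_cyclic(n):
    # Irr(C_n) consists of n linear characters; a character of order d
    # has codegree d, and there are phi(d) characters of each order d | n.
    return sum(d * euler_phi(d) for d in range(1, n + 1) if n % d == 0)
```

For a prime $p$ this reduces to $1 + p(p-1)$, e.g. $S_c(C_5) = 21$.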
PSR B1259-63/LS 2883 is a gamma-ray binary system composed of an O9.5Ve main sequence star, LS 2883, and a 47.8 ms spinning neutron star in a highly eccentric 3.4 yr orbit (eccentricity e = 0.87). PSR B1259-63/LS 2883 is so far the only gamma-ray binary in which the compact object has been firmly identified. H.E.S.S. observed this system around its periastron passages in 2004, 2007, 2011 and 2014. For this latest event, a detailed campaign was organised making use of the new capabilities of H.E.S.S. II, in particular its improved sensitivity and lower energy threshold. This campaign covered for the first time the time of periastron and parts of the orbit so far unexplored at VHE energies, and included observations during the GeV flare observed contemporaneously with the Fermi-LAT. The analysis of the H.E.S.S. II data indicates a relatively high TeV flux during this GeV flare and also at orbital phases preceding the first neutron star crossing of the circumstellar disk. These results will be summarised and discussed in the context of previous models attempting to explain the complex gamma-ray emission from this source.
arxiv:1708.00895
We present an adaptive multilevel Monte Carlo algorithm for solving the stochastic drift-diffusion-Poisson system with non-zero recombination rate. The a posteriori error is estimated to enable goal-oriented adaptive mesh refinement in the spatial dimensions, while the a priori error is estimated to guarantee linear convergence of the $H^1$ error. In the adaptive mesh refinement, efficient estimation of the error indicator gives rise to better error control. For the stochastic dimensions, we use the multilevel Monte Carlo method to solve this system of stochastic partial differential equations. Finally, the advantage of the technique developed here compared to uniform mesh refinement is discussed using a realistic numerical example.
arxiv:1904.05851
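The multilevel Monte Carlo idea used above for the stochastic dimensions can be shown on a toy problem: the sketch below estimates E[S_T] for a geometric Brownian motion via the standard MLMC telescoping sum, with coarse and fine Euler-Maruyama paths coupled through shared Brownian increments. The SDE, parameters, and fixed per-level sample sizes are illustrative stand-ins for the drift-diffusion-Poisson setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlmc_estimate(L=5, n_paths=20000, s0=1.0, r=0.1, sigma=0.2, T=1.0):
    # MLMC telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    # with the level-l fine path (2**l Euler steps) and the level-(l-1)
    # coarse path driven by the SAME Brownian increments.
    est = 0.0
    for l in range(L + 1):
        nf = 2 ** l
        dt = T / nf
        sf = np.full(n_paths, s0)
        sc = np.full(n_paths, s0)
        dws = np.sqrt(dt) * rng.standard_normal((nf, n_paths))
        for i in range(nf):                 # fine path, step dt
            sf = sf + r * sf * dt + sigma * sf * dws[i]
        if l == 0:
            est += sf.mean()                # base level: plain MC
        else:
            for i in range(0, nf, 2):       # coarse path, step 2*dt
                sc = sc + r * sc * 2 * dt + sigma * sc * (dws[i] + dws[i + 1])
            est += (sf - sc).mean()         # correction term
    return est

est = mlmc_estimate()
```

Because the coupled correction terms have small variance, most samples can be spent on the cheap coarse levels; for GBM the exact value is $s_0 e^{rT} \approx 1.105$ here.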
Twin-field quantum key distribution (TF-QKD) promises ultra-long-distance secure key distribution which surpasses the rate-distance limit and can reduce the number of trusted nodes in a long-haul quantum network. Tremendous efforts have been made towards the implementation of TF-QKD; among them, secure keys with finite-size analysis have been distributed over more than 500 km in the lab and in the field. Here, we demonstrate sending-or-not-sending TF-QKD experimentally, achieving secure key distribution with finite-size analysis over 658 km of ultra-low-loss optical fiber, improving the secure distance record by around 100 km. Meanwhile, in a TF-QKD system, any phase fluctuation due to temperature and ambient variation along the channel must be recorded and compensated, and all this phase information can then be utilized to sense channel vibration perturbations. With our QKD system, we recovered external vibrational perturbations on the fiber generated by an artificial vibroseis and successfully located the perturbation position with a resolution better than 1 km. Our results not only set a new distance record for QKD, but also demonstrate that the redundant information of TF-QKD can be used for remote sensing of channel vibrations, which can find applications in earthquake detection and landslide monitoring besides secure communication.
arxiv:2110.11671
Highly supercritical accretion discs are probable sources of dense, optically thick, axisymmetric winds. We introduce a new approach based on diffusion-approximation radiative transfer in a funnel geometry and obtain an analytical solution for the energy density distribution inside the wind, assuming that all the mass, momentum and energy are injected well inside the spherization radius. This allows us to derive the spectrum of emergent emission for various inclination angles. We show that self-irradiation effects play an important role, altering the temperature of the outcoming radiation by about 20% and the apparent X-ray luminosity by a factor of 2-3. The model has been successfully applied to two ULXs. The basic properties of the high-ionization HII regions found around some ULXs are also easily reproduced under our assumptions.
arxiv:0809.0917
We consider incorporating topic information into the sequence-to-sequence framework to generate informative and interesting responses for chatbots. To this end, we propose a topic-aware sequence-to-sequence (TA-Seq2Seq) model. The model utilizes topics to simulate the prior human knowledge that guides people to form informative and interesting responses in conversation, and leverages the topic information in generation through a joint attention mechanism and a biased generation probability. The joint attention mechanism summarizes the hidden vectors of an input message as context vectors by message attention, synthesizes topic vectors by topic attention from the topic words of the message obtained from a pre-trained LDA model, and lets these vectors jointly affect the generation of words in decoding. To increase the possibility of topic words appearing in responses, the model modifies the generation probability of topic words by adding an extra probability item that biases the overall distribution. Empirical study on both automatic evaluation metrics and human annotations shows that TA-Seq2Seq can generate more informative and interesting responses, and significantly outperforms state-of-the-art response generation models.
arxiv:1606.08340
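The biased generation probability described above can be sketched in a few lines: after the usual softmax over the vocabulary, extra probability mass is added to the topic-word entries and the result is renormalized. The exact functional form of the bias in TA-Seq2Seq may differ; this is an illustrative stand-in with a made-up bias constant.

```python
import numpy as np

def biased_generation_probs(logits, topic_word_ids, topic_bias=0.05):
    # standard softmax over the vocabulary (numerically stabilized)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    # add an extra probability item on topic words, then renormalize
    bias = np.zeros_like(p)
    bias[topic_word_ids] = topic_bias
    p = p + bias
    return p / p.sum()

logits = np.array([2.0, 1.0, 0.5, 0.1])          # toy decoder logits
plain = np.exp(logits - logits.max()); plain /= plain.sum()
biased = biased_generation_probs(logits, topic_word_ids=[2])
```

The effect is exactly the one the abstract describes: topic words (index 2 here) gain probability at the expense of all other words, while the output remains a valid distribution.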
Finite-size corrections to scaling laws in the centers of Landau levels are studied systematically by numerical calculations. The corrections can account for the apparent non-universality of the localization length exponent $\nu$. In the second lowest Landau level the irrelevant scaling index is $y_{\mathrm{irr}} = -0.38 \pm 0.04$. At the center of the lowest Landau level an additional periodic potential is found to be irrelevant, with the same scaling index. These results suggest that the localization length exponent $\nu$ is universal with respect to the Landau level index and an additional periodic potential.
arxiv:cond-mat/9402048
\emph{A root frame} for $\mathbb{R}^d$ is a finite frame whose vectors form a root system. In this note we establish some elementary properties of this class of frames and prove that root frames constitute a subclass of scalable frames. In addition, we show that root frames are examples of a larger class of frames called \emph{eigenframes}.
arxiv:2204.08576
In this work, we show that under specific choices of the copula, the lasso, elastic net, and $g$-prior are particular cases of the 'copula prior' for regularization and variable selection. We present the 'lasso with Gauss copula prior' and the 'lasso with t-copula prior'. Simulation studies and real-world data for regression, classification, and large time-series data show that the copula prior often outperforms the lasso and elastic net while having a comparable sparsity of representation. The copula prior also encourages a grouping effect: strongly correlated predictors tend to be in or out of the model collectively under the copula prior. The copula prior is a generic method which can be used to define new prior distributions. The application of copulas to modeling prior distributions in Bayesian methodology has not been explored much. We also present a resampling-based optimization procedure to handle big data with the copula prior.
arxiv:1709.05514
Let $G$ be a Wheeler graph and $r$ be the number of runs in a Burrows-Wheeler Transform of $G$, and suppose $G$ can be decomposed into $\upsilon$ edge-disjoint directed paths whose internal vertices each have in- and out-degree exactly 1. We show how to store $G$ in $O(r + \upsilon)$ space such that later, given a pattern $P$, in $O(|P| \log \log |G|)$ time we can count the vertices of $G$ reachable by directed paths labelled $P$, and then report those vertices in $O(\log \log |G|)$ time per vertex.
arxiv:2101.12341
This paper proposes a new easy-to-implement parameter-free gradient-based optimizer: DoWG (Distance over Weighted Gradients). We prove that DoWG is efficient, matching the convergence rate of optimally tuned gradient descent in convex optimization up to a logarithmic factor without tuning any parameters, and universal, automatically adapting to both smooth and nonsmooth problems. While popular algorithms following the AdaGrad framework compute a running average of the squared gradients to use for normalization, DoWG maintains a new distance-based weighted version of the running average, which is crucial to achieving the desired properties. To complement our theory, we also show empirically that DoWG trains at the edge of stability, and validate its effectiveness on practical machine learning tasks.
arxiv:2305.16284
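A minimal sketch of a DoWG-style update, following the description above: keep a running maximum distance from the initial point and a distance-weighted running sum of squared gradient norms, and step with their ratio. The initial distance estimate r_eps and the test objective are illustrative choices, and this sketch may differ in details (constants, iterate averaging) from the paper's exact algorithm.

```python
import numpy as np

def dowg(grad, x0, steps=500, r_eps=1e-2):
    # DoWG-style parameter-free gradient descent (sketch):
    #   r_t = max(r_{t-1}, ||x_t - x0||)          (distance estimate)
    #   v_t = v_{t-1} + r_t^2 * ||g_t||^2         (weighted gradient sum)
    #   x_{t+1} = x_t - (r_t^2 / sqrt(v_t)) * g_t
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    r = r_eps
    v = 0.0
    for _ in range(steps):
        g = grad(x)
        r = max(r, float(np.linalg.norm(x - x0)))
        v += r * r * float(g @ g)
        x = x - (r * r / np.sqrt(v)) * g
    return x

# convex quadratic f(x) = 0.5 * ||x - target||^2, with gradient x - target
target = np.array([3.0, -2.0])
x_final = dowg(lambda x: x - target, x0=np.zeros(2))
```

Note there is no step-size hyperparameter: the effective step grows automatically as the iterates move away from x0, which is the "parameter-free" behaviour the abstract highlights.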
We propose a new algorithmic framework for sequential hypothesis testing with i.i.d. data, which includes A/B testing, nonparametric two-sample testing, and independence testing as special cases. It is novel in several ways: (a) it takes linear time and constant space to compute on the fly; (b) it has the same power guarantee as a non-sequential version of the test with the same computational constraints, up to a small factor; and (c) it accesses only as many samples as are required: its stopping time adapts to the unknown difficulty of the problem. All our test statistics are constructed to be zero-mean martingales under the null hypothesis, and the rejection threshold is governed by a uniform non-asymptotic law of the iterated logarithm (LIL). For the case of nonparametric two-sample mean testing, we also provide a finite-sample power analysis, and the first non-asymptotic stopping time calculations for this class of problems. We verify our predictions for type I and II errors and stopping times using simulations.
arxiv:1506.03486
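The LIL-governed rejection rule can be sketched for the simplest instance, a one-sample test of H0: E[X] = 0 with unit-variance observations: the running sum is a zero-mean martingale under the null, and we reject the first time it crosses a boundary growing like sqrt(t log log t). The boundary constants below are illustrative, not the calibrated finite-sample threshold from the paper.

```python
import math
import random

def lil_boundary(t, alpha=0.05):
    # stylized uniform LIL-type boundary ~ sqrt(2 t (log log t + log(2/alpha)));
    # constants are illustrative and not tuned for exact type I error control
    t = max(t, 3)
    return math.sqrt(2.0 * t * (math.log(math.log(t)) + math.log(2.0 / alpha)))

def sequential_mean_test(stream, alpha=0.05, max_n=100000):
    # reject H0: E[X] = 0 the first time |S_t| crosses the boundary;
    # constant space: only the running sum S_t is stored
    s = 0.0
    for t, x in enumerate(stream, start=1):
        s += x
        if t >= 3 and abs(s) > lil_boundary(t, alpha):
            return t          # stopping time (rejection)
        if t >= max_n:
            return None       # never rejected within the horizon
    return None

random.seed(1)
# under the alternative (true mean 0.5), the test should stop early
tau = sequential_mean_test(random.gauss(0.5, 1.0) for _ in range(100000))
```

This illustrates property (c) of the abstract: with a strong signal the sum drifts linearly while the boundary grows only like sqrt(t log log t), so the stopping time is small; a weaker signal would simply take longer to cross.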
The production of renewable and sustainable energy is one of the most important challenges currently facing mankind. Wind has made an increasing contribution to the world's energy supply mix, but still remains a long way from reaching its full potential. In this paper, we investigate the use of artificial evolution to design vertical-axis wind turbine prototypes that are physically instantiated and evaluated under fan-generated wind conditions. Initially a conventional evolutionary algorithm is used to explore the design space of a single wind turbine, and later a cooperative coevolutionary algorithm is used to explore the design space of an array of wind turbines. Artificial neural networks are used throughout as surrogate models to assist learning, and are found to reduce the number of fabrications required to reach a higher aerodynamic efficiency. Unlike in other approaches, such as computational fluid dynamics simulations, no mathematical formulations are used and no model assumptions are made.
arxiv:1308.3136
This white paper is the outcome of the Würzburg seminar on "Crowdsourced Network and QoE Measurements", which took place from 25-26 September 2019 in Würzburg, Germany. International experts were invited from industry and academia. They are well known in their communities, having different backgrounds in crowdsourcing, mobile networks, network measurements, network performance, quality of service (QoS), and quality of experience (QoE). The discussions in the seminar focused on how crowdsourcing will support vendors, operators, and regulators in determining the quality of experience in new 5G networks that enable various new applications and network architectures. As a result of the discussions, the need for a white paper manifested, with the goal of providing a scientific discussion of the terms "crowdsourced network measurements" and "crowdsourced QoE measurements", describing relevant use cases for such crowdsourced data, and its underlying challenges. During the seminar, these main topics were identified, intensively discussed in break-out groups, and brought back into the plenum several times. The outcome of the seminar is this white paper, which is, to our knowledge, the first one covering the topic of crowdsourced network and QoE measurements.
arxiv:2006.16896
Bit-plane complexity segmentation (BPCS) digital picture steganography is a technique to hide data inside an image file. BPCS achieves high embedding rates with low distortion, based on the theory that noise-like regions in an image's bit-planes can be replaced with noise-like secret data without significant loss in image quality. In this framework we propose a collaborative approach for frame selection for hiding data within MPEG video using bit-plane complexity segmentation. This approach achieves highly secure data hiding using selected frames from an MPEG video, and we also establish the strengths of the approach; in this review we answer the question of why selected-frame steganography is used. In addition to the security issues, we use digital video as a cover for the hidden data. The reason for choosing a video cover in this approach is the large number of single frame images per second, which in turn overcomes the problem of limited data-hiding capacity. The experimental results show the success of hiding data within selected frames and extracting data from the frame sequence, without affecting the quality of the video.
arxiv:0912.3986
Let R be a ring with identity, (M, \leq) a commutative positive strictly ordered monoid, and w_m an automorphism of R for each m \in M. The skew generalized power series ring R[[M, w]] is a common generalization of (skew) polynomial rings, (skew) power series rings, (skew) Laurent polynomial rings, (skew) group rings, and Malcev-Neumann Laurent series rings. If S \subset R is a multiplicative set, then R is called right S-noetherian if, for each ideal I of R, there exist s \in S and a finitely generated right ideal J such that Is \subseteq J \subseteq I. Unifying and generalizing a number of known results, we study transfer of the S-noetherian property to the ring R[[M, w]]. We also show that the ring R[[M, w]] is left noetherian if and only if R is left noetherian and M is finitely generated. Generalizing a result of Anderson and Dumitrescu, we show that, when S \subset R is an a-anti-Archimedean multiplicative set with a an automorphism of R, then R is right S-noetherian if and only if the skew polynomial ring R[x, a] is right S-noetherian.
arxiv:1605.09132
We develop a general method to study the Fisher information distance in the central limit theorem for nonlinear statistics. We first construct completely new representations for the score function. We then use these representations to derive quantitative estimates for the Fisher information distance. To illustrate the applicability of our approach, explicit rates of Fisher information convergence for quadratic forms and functions of sample means are provided. For sums of independent random variables, we obtain Fisher information bounds without requiring finiteness of the Poincaré constant. Our method can also be used to bound the Fisher information distance in non-central limit theorems.
arxiv:2205.14446
machine learning ( ml ) models have significantly impacted various domains in our everyday lives. while large language models ( llms ) offer intuitive interfaces and versatility, task - specific ml models remain valuable for their efficiency and focused performance in specialized tasks. however, developing these models requires technical expertise, making it particularly challenging for non - expert users to customize them for their unique needs. although interactive machine learning ( iml ) aims to democratize ml development through user - friendly interfaces, users struggle to translate their requirements into appropriate ml tasks. we propose human - llm collaborative ml as a new paradigm bridging human - driven iml and machine - driven llm approaches. to realize this vision, we introduce duetml, a framework that integrates multimodal llms ( mllms ) as interactive agents collaborating with users throughout the ml process. our system carefully balances mllm capabilities with user agency by implementing both reactive and proactive interactions between users and mllm agents. through a comparative user study, we demonstrate that duetml enables non - expert users to define training data that better aligns with target tasks without increasing cognitive load, while offering opportunities for deeper engagement with ml task formulation.
arxiv:2411.18908
structural, electrical and magnetic measurements of 115 - family single crystals of prcoin $ _ 5 $ are reported. it has a tetragonal structure and a slightly smaller cell volume than its isostructural counterpart cecoin $ _ 5 $. the resistivity saturates for t \geq 10 k. analysis of the resistivity for 10 k < t < 60 k indicates regular fermi - liquid behavior. it does not exhibit superconductivity down to t \sim 1 k. the magnetic susceptibility analysis yielded a moment of 4.00 \mu_b, indicating that the magnetism of prcoin $ _ 5 $ is dominated by pr ^ { 3 + } free ions with some admixture of the magnetic moment of the co sublattice. the paramagnetic curie temperature is \theta \sim - 40 k. at low temperatures the susceptibility exhibits a broad maximum around t _ n \sim 14.5 k and increases as the temperature is lowered further. the disappearance of superconductivity for t > 1 k is attributed to chemical pressure effects and magnetic pair breaking.
arxiv:0905.4536
explaining the output of a deep network remains a challenge. in the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. a starting point for this strategy is the gradient of the class score function with respect to the input image. this gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. this paper makes two contributions : it introduces smoothgrad, a simple method that can help visually sharpen gradient - based sensitivity maps, and it discusses lessons in the visualization of these maps. we publish the code for our experiments and a website with our results.
arxiv:1706.03825
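the smoothgrad idea is compact : average the gradient - based sensitivity map over several noisy copies of the input. a minimal numpy sketch, using a toy analytic score function in place of a real network ( the function and parameter names are assumptions for illustration, not from the paper ) :

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=50, sigma=0.15, seed=0):
    """SmoothGrad: average the sensitivity map over noisy copies of the input.

    grad_fn : function returning the gradient of the class score w.r.t. x
    sigma   : standard deviation of the Gaussian noise added to the input
    """
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(x)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        acc += grad_fn(noisy)          # accumulate per-sample gradients
    return acc / n_samples

# toy "class score" f(x) = -||x||^2 with analytic gradient -2x
grad_fn = lambda x: -2.0 * x
x = np.array([1.0, -0.5])
sg = smoothgrad(grad_fn, x)
# for this smooth score the averaged map stays close to the raw gradient;
# for a real network the averaging suppresses high-frequency noise
print(np.round(sg, 2))
```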
in statistical inference, it is rarely realistic that the hypothesized statistical model is well - specified, and consequently it is important to understand the effects of misspecification on inferential procedures. when the hypothesized statistical model is misspecified, the natural target of inference is a projection of the data generating distribution onto the model. we present a general method for constructing valid confidence sets for such projections, under weak regularity conditions, despite possible model misspecification. our method builds upon the universal inference method of wasserman et al. ( 2020 ) and is based on inverting a family of split - sample tests of relative fit. we study settings in which our methods yield either exact or approximate, finite - sample valid confidence sets for various projection distributions. we study rates at which the resulting confidence sets shrink around the target of inference and complement these results with a simulation study and a causal discovery application using a linear causal model on the causaleffectpairs dataset.
arxiv:2307.04034
an optical switch based on liquid - crystal tunable long - range metal stripe waveguides is proposed and theoretically investigated. a nematic liquid crystal layer placed between a vertical configuration consisting of two gold stripes is shown to allow for the extensive electro - optic tuning of the coupler ' s waveguiding characteristics. rigorous liquid - crystal switching studies are coupled with the investigation of the optical properties of the proposed plasmonic structure, taking into account different excitation conditions and the impact of lc - scattering losses. a directional coupler optical switch is demonstrated, which combines low power consumption, low cross - talk, short coupling lengths, along with sufficiently reduced insertion losses.
arxiv:1211.6071
complex network theory is being widely used to study many real - life systems. one of the fields that can benefit from the complex network theory approach is transportation networks. in this paper, we briefly review how complex network theory methods have been assimilated into transportation network research and the analyses they provide. it is irrefutable that complex network theory is capable of explaining the structure, dynamics, node significance, and performance, as well as the evolution, of transportation networks.
arxiv:2308.04636
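a basic example of the node - significance analysis mentioned above is degree centrality on a transport graph. a small sketch with a hypothetical metro network ( stations and links invented for illustration ) :

```python
from collections import defaultdict

def degree_centrality(edges):
    """Degree centrality for an undirected transport network.

    edges : list of (u, v) links, e.g. routes between stations.
    Returns degree / (n - 1) per node, a basic node-significance measure.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# hypothetical metro: hub station 'A' connected to three branch stations
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]
print(degree_centrality(edges))
```

the hub 'A' scores 1.0 ( connected to every other station ), flagging it as the structurally most significant node; richer measures such as betweenness follow the same pattern of computing a per - node score from the graph structure.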
this systematic review undertakes a comprehensive analysis of current research on data - to - text generation, identifying gaps, challenges, and future directions within the field. relevant literature in this field on datasets, evaluation metrics, application areas, multilingualism, language models, and hallucination mitigation methods is reviewed. various methods for producing high - quality text are explored, addressing the challenge of hallucinations in data - to - text generation. these methods include re - ranking, traditional and neural pipeline architecture, planning architectures, data cleaning, controlled generation, and modification of models and training techniques. their effectiveness and limitations are assessed, highlighting the need for universally applicable strategies to mitigate hallucinations. the review also examines the usage, popularity, and impact of datasets, alongside evaluation metrics, with an emphasis on both automatic and human assessment. additionally, the evolution of data - to - text models, particularly the widespread adoption of transformer models, is discussed. despite advancements in text quality, the review emphasizes the importance of research in low - resourced languages and the engineering of datasets in these languages to promote inclusivity. finally, several application domains of data - to - text are highlighted, emphasizing their relevance in such domains. overall, this review serves as a guiding framework for fostering innovation and advancing data - to - text generation.
arxiv:2402.08496
active galactic nuclei ( agn ) are powered by the accretion of disks of gas onto supermassive black holes ( smbhs ). stars and stellar remnants orbiting the smbh in the nuclear star cluster ( nsc ) will interact with the agn disk. orbiters plunging through the disk experience a drag force and, through repeated passage, can have their orbits captured by the disk. a population of embedded objects in agn disks may be a significant source of binary black hole mergers, supernovae, tidal disruption events and embedded gamma - ray bursts. for two representative agn disk models we use geometric drag and bondi - hoyle - littleton drag to determine the time to capture for stars and stellar remnants. we assume a range of initial inclination angles and semi - major axes for circular keplerian prograde orbiters. capture time strongly depends on the density and aspect ratio of the chosen disk model, the relative velocity of the stellar object with respect to the disk, and the agn lifetime. we expect that for an agn disk density $ \ rho \ gtrsim 10 ^ { - 11 } \ rm g / cm ^ 3 $ and disk lifetime $ \ geq 1 $ myr, there is a significant population of embedded stellar objects, which can fuel mergers detectable in gravitational waves with ligo - virgo and lisa.
arxiv:2006.11229
galactic - scale outflows regulate the stellar mass growth and chemical enrichment of galaxies, yet key outflow properties such as the chemical composition and mass loss rate remain largely unknown. we address these properties with keck / esi echellete spectra of nine gravitationally lensed z = 2 - 3 star forming galaxies, probing a range of absorption transitions. interstellar absorption in our sample is dominated by outflowing material, with typical velocities - 150 km / s. approximately 80 % of the total column density is associated with a net outflow. mass loss rates in the low ionization phase are comparable to or in excess of the star formation rate, with total outflow rates likely higher when accounting for ionized gas. of order half of the heavy element yield from star formation is ejected in the low ionization phase, confirming that outflows play a critical role in regulating galaxy chemical evolution. covering fractions vary and are in general non - uniform, with most galaxies having incomplete covering by the low ions across all velocities. low ion abundance patterns show remarkably little scatter, revealing a distinct " chemical fingerprint " of outflows. gas phase si / fe abundances are significantly super - solar ( [ si / fe ] $ \ gtrsim $ 0. 4 ) indicating a combination of $ \ alpha $ - enhancement and dust depletion. derived properties are comparable to the most kinematically broad, metal - rich, and depleted intergalactic absorption systems at similar redshifts, suggesting that these extreme systems are associated with galactic outflows at impact parameters conservatively within a few tens of kpc. we discuss implications of the abundance patterns in z = 2 - 3 galaxies and the role of outflows at this epoch.
arxiv:1805.01484
presence of the trait or the value 0 in the absence of the trait. quantitative linguistics is an area of linguistics that relies on quantification. for example, indices of grammaticalization of morphemes, such as phonological shortness, dependence on surroundings, and fusion with the verb, have been developed and found to be significantly correlated across languages with stage of evolution of function of the morpheme. = = hard versus soft science = = the ease of quantification is one of the features used to distinguish hard and soft sciences from each other. scientists often consider hard sciences to be more scientific or rigorous, but this is disputed by social scientists who maintain that appropriate rigor includes the qualitative evaluation of the broader contexts of qualitative data. in some social sciences such as sociology, quantitative data are difficult to obtain, either because laboratory conditions are not present or because the issues involved are conceptual but not directly quantifiable. thus in these cases qualitative methods are preferred. = = see also = = calibration internal standard isotope dilution physical quantity quantitative analysis ( chemistry ) standard addition = = references = = = = further reading = = crosby, alfred w. ( 1996 ) the measure of reality : quantification and western society, 1250 – 1600. cambridge university press. wiese, heike, 2003. numbers, language, and the human mind. cambridge university press. isbn 0 - 521 - 83182 - 2.
https://en.wikipedia.org/wiki/Quantification_(science)
multiscale and multiphysics problems need novel numerical methods in order for them to be solved correctly and predictively. to that end, we develop a wavelet based technique to solve a coupled system of nonlinear partial differential equations ( pdes ) while resolving features on a wide range of spatial and temporal scales. the algorithm exploits the multiresolution nature of wavelet basis functions to solve initial - boundary value problems on finite domains with a sparse multiresolution spatial discretization. by leveraging wavelet theory and embedding a predictor - corrector procedure within the time advancement loop, we dynamically adapt the computational grid and maintain accuracy of the solutions of the pdes as they evolve. consequently, our method provides high fidelity simulations with significant data compression. we present verification of the algorithm and demonstrate its capabilities by modeling high - strain rate damage nucleation and propagation in nonlinear solids using a novel eulerian - lagrangian continuum framework.
arxiv:2209.12380
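the sparsity the method exploits can be illustrated with a single level of the haar wavelet transform : smooth regions produce near - zero detail coefficients that can be dropped, giving compression with little loss. a minimal sketch of that underlying idea ( not the paper's full multiresolution algorithm ) :

```python
import numpy as np

def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform."""
    s = np.asarray(signal, dtype=float)
    avg = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # approximation coefficients
    det = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # detail coefficients
    return avg, det

def inverse_haar_step(avg, det):
    """Exact inverse of one Haar transform level."""
    s = np.empty(2 * len(avg))
    s[0::2] = (avg + det) / np.sqrt(2.0)
    s[1::2] = (avg - det) / np.sqrt(2.0)
    return s

x = np.array([4.0, 4.0, 4.0, 4.0, 8.0, 8.0, 2.0, 2.0])
avg, det = haar_step(x)
# smooth (piecewise-constant) regions give zero detail coefficients,
# so dropping tiny details "compresses" without losing information here
det[np.abs(det) < 1e-12] = 0.0
x_rec = inverse_haar_step(avg, det)
print(np.allclose(x, x_rec))  # True
```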
the baryon self - energies are expressed in terms of the qcd condensates of the lowest dimension in symmetric and asymmetric nuclear matter within the qcd sum - rule approach. the self - energies are shown to satisfy the gell - mann - - okubo relations in the linear su ( 3 ) breaking approximation. the results are in qualitative agreement with those obtained by the standard nuclear physics methods.
arxiv:1107.5955
the contact goniometer is a commonly used tool in lithic and zooarchaeological analysis, despite suffering from a number of shortcomings due to the physical interaction between the measuring implement, the object being measured, and the individual taking the measurements. however, lacking a simple and efficient alternative, researchers in a variety of fields continue to use the contact goniometer to this day. in this paper, we present a new goniometric method that we call the virtual goniometer, which takes angle measurements virtually on a 3d model of an object. the virtual goniometer allows for rapid data collection, and for the measurement of many angles that cannot be physically accessed by a manual goniometer. we compare the intra - observer variability of the manual and virtual goniometers, and find that the virtual goniometer is far more consistent and reliable. furthermore, the virtual goniometer allows for precise replication of angle measurements, even among multiple users, which is important for reproducibility of goniometric - based research. the virtual goniometer is available as a plug - in in the open source mesh processing packages meshlab and blender, making it easily accessible to researchers exploring the potential for goniometry to improve archaeological methods and address anthropological questions.
arxiv:2011.04898
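the core measurement a virtual goniometer performs can be sketched as the angle between best - fit planes of two surface patches on a 3d model ( the plane - fit approach here is a generic illustration, not necessarily the plug - in's exact algorithm ) :

```python
import numpy as np

def plane_normal(points):
    """Unit normal of the best-fit plane through a set of 3D points (via SVD)."""
    centered = points - points.mean(axis=0)
    # the right singular vector with the smallest singular value is the normal
    return np.linalg.svd(centered)[2][-1]

def virtual_angle(patch_a, patch_b):
    """Angle (degrees) between the best-fit planes of two mesh patches."""
    na, nb = plane_normal(patch_a), plane_normal(patch_b)
    cosang = np.clip(abs(np.dot(na, nb)), 0.0, 1.0)
    return np.degrees(np.arccos(cosang))

# two synthetic patches meeting at 90 degrees: points in the xy- and xz-planes
rng = np.random.default_rng(1)
xy = np.c_[rng.random((20, 2)), np.zeros(20)]              # z = 0 plane
xz = np.c_[rng.random(20), np.zeros(20), rng.random(20)]   # y = 0 plane
print(round(virtual_angle(xy, xz), 1))  # 90.0
```

unlike a contact goniometer, this measurement is exactly repeatable and works on surfaces a physical instrument could not reach.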
we present the deep picard iteration ( dpi ) method, a new deep learning approach for solving high - dimensional partial differential equations ( pdes ). the core innovation of dpi lies in its use of picard iteration to reformulate the typically complex training objectives of neural network - based pde solutions into much simpler, standard regression tasks based on function values and gradients. this design not only greatly simplifies the optimization process but also offers the potential for further scalability through parallel data generation. crucially, to fully realize the benefits of regressing on both function values and gradients in the dpi method, we address the issue of infinite variance in the estimators of gradients by incorporating a control variate, supported by our theoretical analysis. our experiments on problems up to 100 dimensions demonstrate that dpi consistently outperforms existing state - of - the - art methods, with greater robustness to hyperparameters, particularly in challenging scenarios with long time horizons and strong nonlinearity.
arxiv:2409.08526
the use of lightweight formal methods ( lfm ) for the development of industrial applications has become a major trend. although the term " lightweight formal methods " has been used for over ten years now, there seems to be no common agreement on what " lightweight " actually means, and different communities apply the term in all kinds of ways. in this paper, we explore the recent trends in the use of lfm, and establish our opinion that cost - effectiveness is the driving force to deploy lfm. further, we propose a simple framework that should help to classify different lfm approaches and to estimate which of them are most cost - effective for a certain software engineering project. we demonstrate our framework using some examples.
arxiv:1807.01923
we report on eso - vlt adaptive optics imaging of one radio - loud quasar at z $ \sim $ 3. in spite of the large distance of the object we are able to detect its surrounding extended nebulosity, the properties of which are consistent with an underlying massive galaxy of m $ _ k $ $ \sim $ - 27 and effective radius r $ _ e $ = 7 kpc. as far as we know this is the clearest detection of a radio - loud quasar host at high redshift. the host luminosity is indicative of the existence of massive spheroids even at this early cosmic epoch. the host luminosity is about 1 magnitude fainter than the expected value based on the average trend of the host galaxies of radio - loud quasars at lower redshift. the result, which however is based on a single object, suggests that at z $ \sim $ 3 there is a deviation from a luminosity - redshift dependence regulated only by passive evolution.
arxiv:astro-ph/0512328
evolutionary algorithms ( eas ) play a crucial role in the architectural configuration and training of artificial deep neural networks ( dnns ), a process known as neuroevolution. however, neuroevolution is hindered by its inherent computational expense, requiring multiple generations, a large population, and numerous epochs. the most computationally intensive aspect lies in evaluating the fitness function of a single candidate solution. to address this challenge, we employ surrogate - assisted eas ( saeas ). while a few saeas approaches have been proposed in neuroevolution, none have been applied to truly large dnns due to issues like intractable information usage. in this work, drawing inspiration from genetic programming semantics, we use phenotypic distance vectors, outputted from dnns, alongside kriging partial least squares ( kpls ), an approach that is effective in handling these large vectors, making them suitable for search. our proposed approach, named neuro - linear genetic programming surrogate model ( neurolgp - sm ), efficiently and accurately estimates dnn fitness without the need for complete evaluations. neurolgp - sm demonstrates competitive or superior results compared to 12 other methods, including neurolgp without sm, convolutional neural networks, support vector machines, and autoencoders. additionally, it is worth noting that neurolgp - sm is 25 % more energy - efficient than its neurolgp counterpart. this efficiency advantage adds to the overall appeal of our proposed neurolgp - sm in optimising the configuration of large dnns.
arxiv:2404.08786
we study systematically the lax description of the kdv hierarchy in terms of an operator which is the geometrical recursion operator. we formulate the lax equation for the $ n $ - th flow, construct the hamiltonians which lead to commuting flows. in this formulation, the recursion relation between the conserved quantities follows naturally. we give a simple and compact definition of all the hamiltonian structures of the theory which are related through a power law.
arxiv:hep-th/9501095
we study the asymptotic distribution of the resonances near the landau levels $ \lambda_q = ( 2q + 1 ) b $, $ q \in \mathbb { n } $, of the dirichlet ( resp. neumann, resp. robin ) realization in the exterior of a compact domain of $ \mathbb { r } ^ 3 $ of the 3d schr\"{o}dinger operator with constant magnetic field of scalar intensity $ b > 0 $. we investigate the corresponding resonance counting function and obtain the main asymptotic term. in particular, we prove the accumulation of resonances at the landau levels and the existence of resonance - free sectors. in some cases, it provides the discreteness of the set of embedded eigenvalues near the landau levels.
arxiv:1505.06026
by generalizing the well known results for reflection and refraction of plane waves at the vacuum - medium interface to gaussian light beams, we obtain analytic formulas for reflection and refraction of the tm and te laser light pulses. this enables us to give a possible explanation why no reflection was observed in light pulse photographs in some vicinity of the air - resin interface, given in l. gao, j. liang, c. li, and l. v. wang, nature 516 ( 2014 ) 74 - 77. we suggest how to modify the experimental setup so as to observe the reflected pulse.
arxiv:1511.08709
new generation ground and space - based cmb experiments have ushered in discoveries of massive galaxy clusters via the sunyaev - zeldovich ( sz ) effect, providing a new window for studying cluster astrophysics and cosmology. many of the newly discovered, sz - selected clusters contain hot intracluster plasma ( kte > 10 kev ) and exhibit disturbed morphology, indicative of frequent mergers with large peculiar velocity ( v > 1000 km s ^ { - 1 } ). it is well - known that for the interpretation of the sz signal from hot, moving galaxy clusters, relativistic corrections must be taken into account, and in this work, we present a fast and accurate method for computing these effects. our approach is based on an alternative derivation of the boltzmann collision term which provides new physical insight into the sources of different kinematic corrections in the scattering problem. this allows us to obtain a clean separation of kinematic and scattering terms which differs from previous works. we briefly mention additional complications connected with kinematic effects that should be considered when interpreting future sz data for individual clusters. one of the main outcomes of this work is szpack, a numerical library which allows very fast and precise ( < ~ 0. 001 % at frequencies h nu < ~ 20kt _ g ) computation of the sz signals up to high electron temperature ( kt _ e ~ 25 kev ) and large peculiar velocity ( v / c ~ 0. 01 ). the accuracy is well beyond the current and future precision of sz observations and practically eliminates uncertainties related to more expensive numerical evaluation of the boltzmann collision term. our new approach should therefore be useful for analyzing future high - resolution, multi - frequency sz observations as well as computing the predicted sz effect signals from numerical simulations.
arxiv:1205.5778
we propose a scheme to generate and manipulate nonreciprocal photon blockade effect in an asymmetrical fabry - p \ ' { e } rot cavity, which consists of a single two - level atom and a second - order nonlinear medium. by utilizing the intrinsic spatial asymmetry of cavity and applying a parametric amplification pumping laser to the nonlinear medium, we can realize direction - dependent single - photon and two - photon blockade effects. for nonreciprocal single - photon blockade, our proposal is robust across a wide range of parameters, such as the cavity or atomic detuning, coupling strength, and atomic decay. within similar parameter ranges, nonreciprocal two - photon blockade can be achieved and modulated by finely adjusting the parametric amplification pumping. our project offers a feasible access to generating high - quality and tunable nonreciprocal single / two - photon source and paves a new avenue for investigating the nonreciprocity of photon quantum statistical properties.
arxiv:2504.14974
comprehensive results on the production of unidentified charged particles, $ \ pi ^ { \ pm } $, $ \ rm { k } ^ { \ pm } $, $ \ rm { k } ^ { 0 } _ { s } $, $ \ rm { k } $ * ( 892 ) $ ^ { 0 } $, $ \ rm { p } $, $ \ overline { \ rm { p } } $, $ \ phi $ ( 1020 ), $ \ lambda $, $ \ overline { \ lambda } $, $ \ xi ^ { - } $, $ \ overline { \ xi } ^ { + } $, $ \ omega ^ { - } $ and $ \ overline { \ omega } ^ { + } $ hadrons in proton - proton ( pp ) collisions at $ \ sqrt { s } $ = 7 tev at midrapidity ( $ | y | < 0. 5 $ ) as a function of charged - particle multiplicity density are presented. in order to avoid auto - correlation biases, the actual transverse momentum ( $ p _ { \ rm { t } } $ ) spectra of the particles under study and the event activity are measured in different rapidity windows. in the highest multiplicity class, the charged - particle density reaches about 3. 5 times the value measured in inelastic collisions. while the yield of protons normalized to pions remains approximately constant as a function of multiplicity, the corresponding ratios of strange hadrons to pions show a significant enhancement that increases with increasing strangeness content. furthermore, all identified particle to pion ratios are shown to depend solely on charged - particle multiplicity density, regardless of system type and collision energy. the evolution of the spectral shapes with multiplicity and hadron mass shows patterns that are similar to those observed in p - pb and pb - pb collisions at lhc energies. the obtained $ p _ { \ rm { t } } $ distributions and yields are compared to expectations from qcd - based pp event generators as well as to predictions from thermal and hydrodynamic models. these comparisons indicate that traces of a collective, equilibrated system are already present in high - multiplicity pp collisions.
arxiv:1807.11321
automating the radiotherapy treatment planning process is a technically challenging problem. the majority of automated approaches have focused on customizing and inferring dose volume objectives to be used in plan optimization. in this work we outline a multi - patient atlas - based dose prediction approach that learns to predict the dose - per - voxel for a novel patient directly from the computed tomography ( ct ) planning scan, without the requirement of specifying any objectives. our method learns to automatically select the most effective atlases for a novel patient, and then maps the dose from those atlases onto the novel patient. we extend our previous work to include a conditional random field for the optimization of a joint distribution prior that matches the complementary goals of an accurately spatially distributed dose distribution while still adhering to the desired dose volume histograms. the resulting distribution can then be used for inverse - planning with a new spatial dose objective, or to create typical dose volume objectives for the canonical optimization pipeline. we investigated six treatment sites ( 633 patients for training and 113 patients for testing ) and evaluated the mean absolute difference ( mad ) in all dvhs for the clinical and predicted dose distributions. the results on average are favorable in comparison to our previous approach ( 1.91 vs 2.57 ). comparing our method with and without atlas selection further validates that atlas selection improved dose prediction on average in whole breast ( 0.64 vs 1.59 ), prostate ( 2.13 vs 4.07 ), and rectum ( 1.46 vs 3.29 ), while it is less important in breast cavity ( 0.79 vs 0.92 ) and lung ( 1.33 vs 1.27 ), for which there is high conformity and minimal dose shaping. in cns brain, atlas selection has the potential to be impactful ( 3.65 vs 5.09 ), but selecting the ideal atlas is the most challenging.
arxiv:1608.04330
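the evaluation metric used above, the mean absolute difference between clinical and predicted dose - volume histograms, can be sketched as follows ( the toy dose arrays and bin choice are illustrative only, not from the study ) :

```python
import numpy as np

def dvh(dose_per_voxel, dose_bins):
    """Cumulative dose-volume histogram: fraction of volume receiving >= d."""
    dose = np.asarray(dose_per_voxel, dtype=float)
    return np.array([(dose >= d).mean() for d in dose_bins])

def dvh_mad(clinical_dose, predicted_dose, dose_bins):
    """Mean absolute difference between clinical and predicted DVHs (in %)."""
    return 100.0 * np.mean(np.abs(dvh(clinical_dose, dose_bins)
                                  - dvh(predicted_dose, dose_bins)))

bins = np.linspace(0, 60, 61)                  # 0-60 Gy in 1 Gy steps
clinical = np.array([10.0, 20.0, 30.0, 40.0])  # toy 4-voxel structure
predicted = np.array([10.0, 20.0, 30.0, 40.0])
print(dvh_mad(clinical, predicted, bins))  # 0.0 for identical distributions
```

a lower mad means the predicted dose distribution reproduces the clinical dvh more closely, which is how the per - site numbers in the abstract should be read.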
in the former part of this paper, we summarize our previous results on infinite series involving the hyperbolic sine function, especially, with a focus on the hyperbolic sine analogue of eisenstein series. those are based on the classical results given by cauchy, mellin and kronecker. in the latter part, we give new formulas for some infinite series involving the hyperbolic cosine function.
arxiv:1409.0198
information cocoons are frequently cited in the literature on whether and how social media might lead to ideological segregation and political polarization. from the behavioural and communication perspectives, this paper first examines why algorithm - based social media, as opposed to its traditional counterpart, is more likely to produce information cocoons. we then explore populism and short - termism in voting, bias and noise in decision - making, and prerequisite capital for innovation, demonstrating the importance of information diversity for a sustainable information environment. finally, this study argues for libertarian paternalism by evaluating the criteria and trade - offs involved in regulating algorithms and proposes to employ nudges to address the core issues while preserving freedom of choice.
arxiv:2404.15630
this paper presents a novel approach for deep visualization via a generative network, offering an improvement over existing methods. our model simplifies the architecture by reducing the number of networks used, requiring only a generator and a discriminator, as opposed to the multiple networks traditionally involved. additionally, our model requires less prior training knowledge and uses a non - adversarial training process, where the discriminator acts as a guide rather than a competitor to the generator. the core contribution of this work is its ability to generate detailed visualization images that align with specific class labels. our model incorporates a unique skip - connection - inspired block design, which enhances label - directed image generation by propagating class information across multiple layers. furthermore, we explore how these generated visualizations can be utilized as adversarial examples, effectively fooling classification networks with minimal perceptible modifications to the original images. experimental results demonstrate that our method outperforms traditional adversarial example generation techniques in both targeted and non - targeted attacks, achieving up to a 94. 5 % fooling rate with minimal perturbation. this work bridges the gap between visualization methods and adversarial examples, proposing that fooling rate could serve as a quantitative measure for evaluating visualization quality. the insights from this study provide a new perspective on the interpretability of neural networks and their vulnerabilities to adversarial attacks.
arxiv:2409.13559
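the fooling rate proposed above as a quantitative measure is simply the fraction of inputs whose predicted label changes after perturbation. a minimal sketch for the non - targeted case ( labels invented for illustration ) :

```python
def fooling_rate(original_preds, adversarial_preds):
    """Fraction of inputs whose predicted label changed after perturbation.

    This is the non-targeted variant; for targeted attacks one would
    instead count matches with the attacker's target label.
    """
    assert len(original_preds) == len(adversarial_preds)
    flipped = sum(o != a for o, a in zip(original_preds, adversarial_preds))
    return flipped / len(original_preds)

# hypothetical labels before and after adding the generated perturbation
before = [0, 1, 2, 2, 3, 1, 0, 0]
after  = [0, 2, 2, 1, 3, 0, 1, 0]
print(fooling_rate(before, after))  # 0.5
```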
the main purpose of this paper is to establish a noncommutative analogue of the efron - - stein inequality, which bounds the variance of a general function of some independent random variables. moreover, we state an operator version including random matrices, which extends a result of d. paulin et al. [ ann. probab. 44 ( 2016 ), no. 5, 3431 - - 3473 ]. further, we state a steele type inequality in the framework of noncommutative probability spaces.
arxiv:1811.00489
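for reference, the classical ( commutative ) efron - stein inequality that the paper generalizes can be stated as : for independent random variables $x_1, \dots, x_n$, an independent copy $x_i'$ of $x_i$, and $z = f(x_1, \dots, x_n)$,

```latex
% classical Efron--Stein inequality
\operatorname{Var}(Z) \le \frac{1}{2} \sum_{i=1}^{n}
  \mathbb{E}\!\left[ \bigl( f(X_1,\dots,X_i,\dots,X_n)
                          - f(X_1,\dots,X_i',\dots,X_n) \bigr)^{2} \right]
```

the paper's noncommutative analogue replaces the random variables by elements of a noncommutative probability space, with an operator version covering random matrices.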
currently, spray - painting robot trajectory planning technology aimed at spray painting quality mainly applies to single - color spraying. conventional methods of optimizing the spray gun trajectory based on simulated thickness can only qualitatively reflect the color distribution and cannot simulate the color effect of spray painting at the pixel level. therefore, it is not possible to accurately control the area covered by the color and the gradation of the edges of that area, and it is also difficult to handle situations where multiple colors of paint are sprayed in combination. to solve the above problems, this paper takes inspiration from the kubelka - munk model and combines a 3d machine vision method with an artificial neural network to propose a spray painting color effect prediction method. the method can predict the execution effect of the spray gun trajectory with pixel - level accuracy in terms of the surface color of the workpiece after spray painting. on this basis, the method can replace the traditional thickness simulation method to establish the objective function of the spray gun trajectory optimization problem, and thus solve the difficult problem of spray gun trajectory optimization for multi - color combination spraying. in this paper, the mathematical model of the spray painting color effect prediction problem is first determined through analysis of the kubelka - munk paint film color rendering model, and a spray painting color effect dataset is established with the help of a depth camera and point cloud processing algorithms. after that, a multilayer perceptron model was improved with the help of gating and residual structures and used for the color prediction task. to verify...
arxiv:2409.04558
we demonstrate einstein - podolsky - rosen ( epr ) entanglement by detecting purely spatial quantum correlations in the near and far fields of spontaneous parametric down - conversion generated in a type - 2 beta barium borate crystal. full - field imaging is performed in the photon - counting regime with an electron - multiplying ccd camera. the data are used without any postselection, and we obtain a violation of heisenberg inequalities with inferred quantities taking into account all the biphoton pairs in both the near and far fields by integration on the entire two - dimensional transverse planes. this ensures a rigorous demonstration of the epr paradox in its original position momentum form.
arxiv:1204.0990
gait as a biometric property for person identification plays a key role in video surveillance and security applications. in gait recognition, normally, a gait feature such as the gait energy image ( gei ) is extracted from one full gait cycle. however, in many circumstances such a full gait cycle might not be available due to occlusion. thus, the gei is not complete, giving rise to a degradation in the gait - based person identification rate. in this paper, we address this issue by proposing a novel method to identify individuals from gait features when only a few frames ( or even a single frame ) are available. to do so, we propose a deep learning - based approach to transform an incomplete gei to the corresponding complete gei obtained from a full gait cycle. more precisely, this transformation is done gradually by training several autoencoders independently and then combining these into a uniform model. experimental results on two public gait datasets, namely oulp and casia - b, demonstrate the validity of the proposed method in dealing with very incomplete gait cycles.
arxiv:1804.08506
grasp detection requires flexibility to handle objects of various shapes without relying on prior knowledge of the object, while also offering intuitive, user - guided control. this paper introduces graspsam, an innovative extension of the segment anything model ( sam ), designed for prompt - driven and category - agnostic grasp detection. unlike previous methods, which are often limited by small - scale training data, graspsam leverages the large - scale training and prompt - based segmentation capabilities of sam to efficiently support both target - object and category - agnostic grasping. by utilizing adapters, learnable token embeddings, and a lightweight modified decoder, graspsam requires minimal fine - tuning to integrate object segmentation and grasp prediction into a unified framework. the model achieves state - of - the - art ( sota ) performance across multiple datasets, including jacquard, grasp - anything, and grasp - anything + +. extensive experiments demonstrate the flexibility of graspsam in handling different types of prompts ( such as points, boxes, and language ), highlighting its robustness and effectiveness in real - world robotic applications.
arxiv:2409.12521
environment changes, and development of disease and injury. = = = long fiber generation = = = in 2013, a group from the university of tokyo developed cell laden fibers up to a meter in length and on the order of 100 μm in size. these fibers were created using a microfluidic device that forms a double coaxial laminar flow, in which each ' layer ' of the device carries a different stream ( cells seeded in ecm, a hydrogel sheath, and finally a calcium chloride solution ). the seeded cells culture within the hydrogel sheath for several days, and then the sheath is removed, leaving viable cell fibers. various cell types were inserted into the ecm core, including myocytes, endothelial cells, nerve cell fibers, and epithelial cell fibers. this group then showed that these fibers can be woven together to fabricate tissues or organs in a mechanism similar to textile weaving. fibrous morphologies are advantageous in that they provide an alternative to traditional scaffold design, and many organs ( such as muscle ) are composed of fibrous cells. = = = bioartificial organs = = = an artificial organ is an engineered device that can be extracorporeal or implanted to support impaired or failing organ systems. bioartificial organs are typically created with the intent to restore critical biological functions, as in the replacement of diseased hearts and lungs, or to provide drastic quality of life improvements, as in the use of engineered skin on burn victims. while some examples of bioartificial organs are still in the research stage of development due to the limitations involved with creating functional organs, others are currently being used in clinical settings experimentally and commercially. = = = = lung = = = = extracorporeal membrane oxygenation ( ecmo ) machines, otherwise known as heart and lung machines, are an adaptation of cardiopulmonary bypass techniques that provide heart and lung support.
it is used primarily to support the lungs for a prolonged but still temporary timeframe ( 1 – 30 days ) and allow for recovery from reversible diseases. robert bartlett is known as the father of ecmo and performed the first treatment of a newborn using an ecmo machine in 1975. = = = = skin = = = = tissue - engineered skin is a type of bioartificial organ that is often used to treat burns, diabetic foot ulcers, or other large wounds that cannot heal well on their own. artificial skin can be made from autografts, allografts, and xenogra
https://en.wikipedia.org/wiki/Tissue_engineering
minimax designs provide a uniform coverage of a design space $ \ mathcal { x } \ subseteq \ mathbb { r } ^ p $ by minimizing the maximum distance from any point in this space to its nearest design point. although minimax designs have many useful applications, e. g., for optimal sensor allocation or as space - filling designs for computer experiments, there has been little work in developing algorithms for generating these designs, due to its computational complexity. in this paper, a new hybrid algorithm combining particle swarm optimization and clustering is proposed for generating minimax designs on any convex and bounded design space. the computation time of this algorithm scales linearly in dimension $ p $, meaning our method can generate minimax designs efficiently for high - dimensional regions. simulation studies and a real - world example show that the proposed algorithm provides improved minimax performance over existing methods on a variety of design spaces. finally, we introduce a new type of experimental design called a minimax projection design, and show that this proposed design provides better minimax performance on projected subspaces of $ \ mathcal { x } $ compared to existing designs. an efficient implementation of these algorithms can be found in the r package minimaxdesign.
arxiv:1602.03938
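the minimax criterion is straightforward to evaluate numerically on a finite candidate set approximating the design space : compute each candidate ' s distance to its nearest design point and take the maximum. a minimal numpy sketch ( the two toy designs and the grid below are illustrative, not from the paper ) :

```python
import numpy as np

def minimax_distance(design, candidates):
    # maximum over candidates of the distance to the nearest design point --
    # the quantity a minimax design minimizes
    d = np.linalg.norm(candidates[:, None, :] - design[None, :, :], axis=-1)
    return d.min(axis=1).max()

# candidate grid approximating the unit square [0, 1]^2
grid = np.linspace(0.0, 1.0, 50)
cand = np.array([[x, y] for x in grid for y in grid])

# a 4-point design at the corners vs. one centered in the four quadrants
corners = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
centered = np.array([[0.25, 0.25], [0.25, 0.75], [0.75, 0.25], [0.75, 0.75]])

# the centered design covers the square more uniformly
assert minimax_distance(centered, cand) < minimax_distance(corners, cand)
```

the paper's contribution is an efficient search for the design minimizing this quantity in high dimensions ; the sketch only shows the objective being minimized.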
retailers use a variety of mechanisms to enable sales and delivery. a relatively new offering by companies is curbside pickup, where customers purchase goods online, schedule a pickup time, and come to a pickup facility to receive their orders. to model this new service structure, we consider a queuing system where each arriving job has a preferred service completion time. unlike most queuing systems, we make a strategic decision for when to serve each job based on their requested times and the associated costs. we assume that all jobs must be served before or on their requested time period, and jobs are outsourced when capacity is insufficient. costs are incurred for jobs that are outsourced or served early. for small systems, we show that optimal capacity allocation policies are of threshold type. for general systems, we devise heuristic policies based on similar threshold structures. our numerical study investigates the performance of the heuristics developed and shows their robustness with respect to several service parameters. our results provide insights on how the optimal long - run average costs change based on the capacity of the system, the length of the planning horizon, cost parameters and the order pattern.
arxiv:2005.12499
in this paper, we use the riemann - liouville fractional integrals to establish some new integral inequalities of ostrowski - gr \ " uss type. from our results, the classical ostrowski - gr \ " uss type inequalities can be deduced as some special cases.
arxiv:1203.3074
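for reference, the classical inequalities are recovered as the special case $ \alpha = 1 $ of the left - sided riemann - liouville fractional integral, whose standard definition is

```latex
J^{\alpha}_{a^{+}} f(x) \;=\; \frac{1}{\Gamma(\alpha)} \int_{a}^{x} (x - t)^{\alpha - 1} f(t)\, dt ,
\qquad \alpha > 0, \quad x > a .
```

for $ \alpha = 1 $ the kernel $ ( x - t )^{\alpha - 1} $ reduces to 1 and the operator is the ordinary integral $ \int_a^x f(t)\, dt $, which is how the classical ostrowski - gruss results follow from the fractional ones.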
lightweight bearing and structural component. missile nose - cones : shielding the missile internals from heat. space shuttle tiles space - debris ballistic shields : ceramic fiber woven shields offer better protection to hypervelocity ( ~ 7 km / s ) particles than aluminum shields of equal weight. rocket nozzles : focusing high - temperature exhaust gases from the rocket booster. unmanned air vehicles : ceramic engine utilization in aeronautical applications ( such as unmanned air vehicles ) may result in enhanced performance characteristics and less operational costs. = = = biomedical = = = artificial bone ; dentistry applications, teeth. biodegradable splints ; reinforcing bones recovering from osteoporosis implant material = = = electronics = = = capacitors integrated circuit packages transducers insulators = = = optical = = = optical fibers, guided light wave transmission switches laser amplifiers lenses infrared heat - seeking devices = = = automotive = = = heat shield exhaust heat management = = biomaterials = = silicification is quite common in the biological world and occurs in bacteria, single - celled organisms, plants, and animals ( invertebrates and vertebrates ). crystalline minerals formed in such environment often show exceptional physical properties ( e. g. strength, hardness, fracture toughness ) and tend to form hierarchical structures that exhibit microstructural order over a range of length or spatial scales. the minerals are crystallized from an environment that is undersaturated with respect to silicon, and under conditions of neutral ph and low temperature ( 0 – 40 °c ). formation of the mineral may occur either within or outside of the cell wall of an organism, and specific biochemical reactions for mineral deposition exist that include lipids, proteins and carbohydrates. 
most natural ( or biological ) materials are complex composites whose mechanical properties are often outstanding, considering the weak constituents from which they are assembled. these complex structures, which have risen from hundreds of million years of evolution, are inspiring the design of novel materials with exceptional physical properties for high performance in adverse conditions. their defining characteristics such as hierarchy, multifunctionality, and the capacity for self - healing, are currently being investigated. the basic building blocks begin with the 20 amino acids and proceed to polypeptides, polysaccharides, and polypeptides – saccharides. these, in turn, compose the basic proteins, which are the primary constituents of the ' soft tissues ' common to most biominerals. with well over 1000 proteins possible, current research
https://en.wikipedia.org/wiki/Ceramic_engineering
we experimentally study the effects of the anisotropic rydberg - interaction on $ d $ - state rydberg polaritons slowly propagating through a cold atomic sample. in addition to the few - photon nonlinearity known from similar experiments with rydberg $ s $ - states, we observe the interaction - induced dephasing of rydberg polaritons at very low photon input rates into the medium. we develop a model combining the propagation of the two - photon wavefunction through our system with nonperturbative calculations of the anisotropic rydberg - interaction to show that the observed effect can be attributed to pairwise interaction of individual rydberg polaritons.
arxiv:1505.03723
we study complexity of short sentences in presburger arithmetic ( short - pa ). here by " short " we mean sentences with a bounded number of variables, quantifiers, inequalities and boolean operations ; the input consists only of the integers involved in the inequalities. we prove that assuming kannan ' s partition can be found in polynomial time, the satisfiability of short - pa sentences can be decided in polynomial time. furthermore, under the same assumption, we show that the numbers of satisfying assignments of short presburger sentences can also be computed in polynomial time.
arxiv:1704.00249
deep neural networks are highly over - parameterized and the size of a neural network can be reduced significantly after training without any decrease in performance. one can clearly see this phenomenon in a wide range of architectures trained for various problems. weight / channel pruning, distillation, quantization and matrix factorization are some of the main methods one can use to remove the redundancy and come up with smaller and faster models. this work starts with a short informative chapter, where we motivate the pruning idea and provide the necessary notation. in the second chapter, we compare various saliency scores in the context of parameter pruning. using the insights obtained from this comparison, and stating the problems it brings, we motivate why pruning units instead of individual parameters might be a better idea. we propose a set of definitions to quantify and analyze units that don ' t learn and don ' t create any useful information. we propose an efficient way of detecting dead units and use it to select which units to prune. we get a 5x model size reduction through unit - wise pruning on mnist.
arxiv:1806.06068
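the unit - wise pruning step can be sketched in a few lines : score each unit by a saliency ( here the l2 norm of its incoming weights, one of several possible scores ), then zero out whole columns rather than individual parameters. the layer sizes and keep ratio below are illustrative, not the settings of the work :

```python
import numpy as np

rng = np.random.default_rng(0)

# weights of a toy dense layer: 100 inputs -> 32 units (columns)
W = rng.normal(size=(100, 32))

# unit-wise saliency: l2 norm of each unit's incoming weight vector
saliency = np.linalg.norm(W, axis=0)

# keep the top half of the units, pruning the rest as whole columns
k = W.shape[1] // 2
mask = np.zeros(W.shape[1], dtype=bool)
mask[np.argsort(saliency)[-k:]] = True
W_pruned = W * mask  # broadcasting zeroes out the pruned columns
```

pruning whole units, unlike parameter - wise pruning, shrinks the layer's actual dimensions, so the reduction translates directly into fewer operations.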
the second edition of " semantic relations between nominals " by vivi nastase, stan szpakowicz, preslav nakov and diarmuid \ ' o s \ ' eaghdha has been published in april 2021 by morgan & claypool ( www. morganclaypoolpublishers. com / catalog _ orig / product _ info. php? products _ id = 1627 ). a new chapter 5 of the book, by vivi nastase and stan szpakowicz, discusses relation classification / extraction in the deep - learning paradigm which arose after the first edition appeared. this is chapter 5, made public by the kind permission of morgan & claypool.
arxiv:2009.05426
we describe an approximation to the widely - used poisson - likelihood chi - square using a linear combination of neyman ' s and pearson ' s chi - squares, namely " combined neyman - pearson chi - square " ( $ \ chi ^ 2 _ { \ mathrm { cnp } } $ ). through analytical derivations and toy model simulations, we show that $ \ chi ^ 2 _ \ mathrm { cnp } $ leads to a significantly smaller bias on the best - fit model parameters compared to those using either neyman ' s or pearson ' s chi - square. when the computational cost of using the poisson - likelihood chi - square is high, $ \ chi ^ 2 _ \ mathrm { cnp } $ provides a good alternative given its natural connection to the covariance matrix formalism.
arxiv:1903.07185
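the statistic is a fixed convex combination of neyman ' s chi - square ( squared residuals weighted by observed counts ) and pearson ' s ( weighted by expected counts ). a minimal sketch ; the 1 / 3 - 2 / 3 weighting used here is the commonly quoted combination and should be treated as an assumption rather than read off the abstract :

```python
import numpy as np

def chi2_cnp(observed, expected):
    # combined neyman-pearson chi-square:
    # (1/3) * neyman + (2/3) * pearson  (weighting assumed, see text)
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    resid2 = (observed - expected) ** 2
    neyman = np.sum(resid2 / observed)    # residuals scaled by the data
    pearson = np.sum(resid2 / expected)   # residuals scaled by the model
    return neyman / 3.0 + 2.0 * pearson / 3.0
```

like its two ingredients, the combined statistic vanishes when the model matches the data exactly ; note that the neyman term inherits the usual caveat of being undefined for empty bins.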
scene text image super - resolution ( stisr ), aiming to improve image quality while boosting downstream scene text recognition accuracy, has recently achieved great success. however, most existing methods treat the foreground ( character regions ) and background ( non - character regions ) equally in the forward process, and neglect the disturbance from the complex background, thus limiting the performance. to address these issues, in this paper, we propose a novel method lemma that explicitly models character regions to produce high - level text - specific guidance for super - resolution. to model the location of characters effectively, we propose the location enhancement module to extract character region features based on the attention map sequence. besides, we propose the multi - modal alignment module to perform bidirectional visual - semantic alignment to generate high - quality prior guidance, which is then incorporated into the super - resolution branch in an adaptive manner using the proposed adaptive fusion module. experiments on textzoom and four scene text recognition benchmarks demonstrate the superiority of our method over other state - of - the - art methods. code is available at https : / / github. com / csguoh / lemma.
arxiv:2307.09749
in this paper, we provide a model - independent extension of the paradigm of dynamic hedging of derivative claims. we relate model - independent replication strategies to local martingales having a closed form which we can characterise via solutions of coupled pdes. we provide a general framework and then apply it to a market with no traded claims, a market with an underlying asset and a convex claim and a market with an underlying asset and a set of co - maturing call options. the results encompass known examples of model - independent identities and provide a methodology for deriving new identities.
arxiv:1809.00149
it is shown in " siam j. sci. comput. 39 ( 2017 ) : b424 - b441 " that free - form curves used in computer aided geometric design can usually be represented as the solutions of linear differential systems, and points and derivatives on the curves can be evaluated dynamically by solving the differential systems numerically. in this paper we present an even more robust and efficient algorithm for dynamic evaluation of exponential polynomial curves and surfaces. based on the properties that spaces spanned by general exponential polynomials are translation invariant and polynomial spaces are invariant with respect to a linear transform of the parameter, the transformation matrices between bases with or without translated or linearly transformed parameters are explicitly computed. points on curves or surfaces with equal or changing parameter steps can then be evaluated dynamically from a start point using a pre - computed matrix. like former dynamic evaluation algorithms, the newly proposed approach needs only arithmetic operations for evaluating exponential polynomial curves and surfaces. unlike conventional numerical methods that solve a linear differential system, the new method can give robust and accurate evaluation results for any chosen parameter steps. the basis transformation technique also enables dynamic evaluation of polynomial curves with changing parameter steps using a constant matrix, which reduces time costs significantly compared with computing each point individually by classical algorithms.
arxiv:1904.10205
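the core of the dynamic - evaluation idea is that an exponential polynomial curve solves a linear differential system $ x ' ( t ) = a x ( t ) $, so points at a fixed parameter step $ h $ follow from one precomputed matrix $ m = e ^ { ah } $ applied repeatedly. a minimal numpy sketch on a circular arc ( the system matrix and step are illustrative, not from the paper ) :

```python
import numpy as np

# for A = [[0, -1], [1, 0]] the curve through (1, 0) is (cos t, sin t),
# and exp(A h) is simply the rotation matrix by angle h
h = 0.1
M = np.array([[np.cos(h), -np.sin(h)],
              [np.sin(h),  np.cos(h)]])  # M = exp(A h), precomputed once

x = np.array([1.0, 0.0])       # start point c(0)
points = [x]
for _ in range(10):
    x = M @ x                  # one matrix-vector product per new point
    points.append(x)
points = np.array(points)      # samples of the arc at t = 0, h, ..., 10h
```

each new point costs only one small matrix - vector product, which is why the scheme needs nothing beyond arithmetic operations once $ m $ is precomputed.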
beck et al. characterized the grid graphs whose perfect matching polytopes are gorenstein, and they also showed that for some parameters, perfect matching polytopes of torus graphs are gorenstein. in this paper, we complement their result ; that is, we characterize the torus graphs whose perfect matching polytopes are gorenstein. beck et al. also gave a method to construct an infinite family of gorenstein polytopes. in this paper, we introduce a new class of polytopes obtained from graphs and we extend their method to construct many more gorenstein polytopes.
arxiv:0803.1033
we calculate the baryon asymmetry of the universe which would arise during a first order electroweak phase transition due to minimal standard model processes. it agrees in sign and magnitude with the observed baryonic excess, for reasonable km parameters and m $ _ t $ in the expected range, and plausible values of bubble velocity and other high temperature effects. a detailed version of this work ( 77pp ) is being simultaneously submitted to the net. a shortened version of this recently appeared in phys. rev. lett. { \ bf 70 }, 2833, 1993.
arxiv:hep-ph/9305274
we demonstrate several new aspects of exceptional points of degeneracy ( epd ) pertaining to propagation in two uniform coupled transmission - line structures. we describe an epd using two different approaches - by solving an eigenvalue problem based on the system matrix, and as a singular point from bifurcation theory, and the link between these two disparate viewpoints. cast as an eigenvalue problem, we show that eigenvalue degeneracies are always coincident with eigenvector degeneracies, so that all eigenvalue degeneracies are implicitly epds in two uniform coupled transmission lines. furthermore, we discuss in some detail the fact that epds define branch points ( bps ) in the complex - frequency plane ; we provide simple formulas for these points, and show that parity - time ( pt ) symmetry leads to real - valued epds occurring on the real - frequency axis. we discuss the connection of the linear algebra approach to previous waveguide analysis based on singular points from bifurcation theory, which provides a complementary viewpoint of epd phenomena, showing that epds are singular points of the dispersion function associated with the fold bifurcation. this provides an important connection of various modal interaction phenomena known in guided - wave structures with recent interesting effects observed in quantum mechanics, photonics, and metamaterials systems described in terms of the epd formalism.
arxiv:1804.03214
we construct the four - point correlation functions containing the top component of the supermultiplet in the neveu - schwarz sector of the n = 1 susy liouville field theory. the construction is based on the recursive representation for the ns conformal blocks. we test our results in the case where one of the fields is degenerate with a singular vector on the level 3 / 2. in this case, the correlation function satisfies a third - order ordinary differential equation, which we derive. we numerically verify the crossing symmetry relations for the constructed correlation functions in the nondegenerate case.
arxiv:0705.1983
in this paper we study the continuous coagulation and multiple fragmentation equation for the mean - field description of a system of particles, taking into account the combined effect of the coagulation and fragmentation processes, in which particles grow by successive mergers to form bigger ones while larger particles split into a finite number of smaller pieces. we demonstrate the global existence of mass - conserving weak solutions for a wide class of coagulation rates, selection rates and breakage functions. here, both the breakage function and the coagulation rate may have algebraic singularities on both coordinate axes. the proof of the existence result is based on a weak l ^ 1 compactness method for two different suitable approximations to the original problem, i. e. the conservative and non - conservative approximations. moreover, the mass - conservation property of solutions is established for both approximations.
arxiv:1811.06161
we characterize four types of agentive knowledge using a stit semantics over branching discrete - time structures. these are \ emph { ex ante } knowledge, \ emph { ex interim } knowledge, \ emph { ex post } knowledge, and know - how. the first three are notions that arose from game - theoretical analyses on the stages of information disclosure across the decision making process, and the fourth has gained prominence both in logics of action and in deontic logic as a means to formalize ability. in recent years, logicians in ai have argued that any comprehensive study of responsibility attribution and blameworthiness should include proper treatment of these kinds of knowledge. this paper intends to clarify previous attempts to formalize them in stit logic and to propose alternative interpretations that in our opinion are more akin to the study of responsibility in the stit tradition. the logic we present uses an extension with knowledge operators of the xstit language, and formulas are evaluated with respect to branching discrete - time models. we also present an axiomatic system for this logic, and address its soundness and completeness.
arxiv:1911.11086
we show the rationality of the taylor coefficients of the inverse of the schwarz triangle functions for a triangle group about any vertex of the fundamental domain.
arxiv:math/0702422
this study delves into the enhancement of two - photon absorption ( 2pa ) properties in diazaacene - bithiophene derivatives through a synergistic approach combining theoretical analysis and experimental validation. by investigating the structural modifications and their impact on 2pa cross sections, we identify key factors that significantly influence the 2pa efficiency. for all molecular systems studied, our state - of - the - art quantum chemical calculations show a very high involvement of the first excited singlet state ( s1 ) in the 2pa processes into higher excited states, even if this state itself has only a small 2pa cross section for symmetry reasons. consequently, both the oscillator strength of s1 and the transition dipole moments between s1 and other excited states are of importance, underscoring the role of electronic polarizability in facilitating effective two - photon interactions. the investigated compounds exhibit large 2pa cross sections over a wide near - infrared spectral range reaching giant values of 42000 gm. the introduction of diazine and diazaacene moieties into bithiophene derivatives not only induces charge transfer but also opens up pathways for the creation of materials with tailored nonlinear optical responses, suggesting potential applications in nonlinear optics.
arxiv:2404.09325
unsupervised learning of time series data, also known as temporal clustering, is a challenging problem in machine learning. here we propose a novel algorithm, deep temporal clustering ( dtc ), to naturally integrate dimensionality reduction and temporal clustering into a single end - to - end learning framework, fully unsupervised. the algorithm utilizes an autoencoder for temporal dimensionality reduction and a novel temporal clustering layer for cluster assignment. then it jointly optimizes the clustering objective and the dimensionality reduction objective. based on requirement and application, the temporal clustering layer can be customized with any temporal similarity metric. several similarity metrics and state - of - the - art algorithms are considered and compared. to gain insight into temporal features that the network has learned for its clustering, we apply a visualization method that generates a region of interest heatmap for the time series. the viability of the algorithm is demonstrated using time series data from diverse domains, ranging from earthquakes to spacecraft sensor data. in each case, we show that the proposed algorithm outperforms traditional methods. the superior performance is attributed to the fully integrated temporal dimensionality reduction and clustering criterion.
arxiv:1802.01059
a stream attention framework has been applied to the posterior probabilities of the deep neural network ( dnn ) to improve the far - field automatic speech recognition ( asr ) performance in the multi - microphone configuration. the stream attention scheme has been realized through an attention vector, which is derived by predicting the asr performance from the phoneme posterior distribution of each individual microphone stream, focusing the recognizer ' s attention on more reliable microphones. an investigation of various asr performance measures has been carried out using a real recorded dataset. experimental results show that the proposed framework has yielded substantial improvements in word error rate ( wer ).
arxiv:1711.11141
we present an analysis of chandra x - ray observations of a compact group of galaxies, hcg 80 ( z = 0. 03 ). the system is a spiral - only group composed of four late - type galaxies, and has a high - velocity dispersion of 309 km / s. with high - sensitivity chandra observations, we searched for diffuse x - ray emission from the intragroup medium ( igm ) ; however, no significant emission was detected. we place a severe upper limit on the luminosity of the diffuse gas as lx < 6e40 erg / s. on the other hand, significant emission from three of the four members were detected. in particular, we discovered huge halo emission from hcg 80a that extends on a scale of ~ 30 kpc perpendicular to the galactic disk, whose x - ray temperature and luminosity were measured to be ~ 0. 6 kev and ~ 4e40 erg / s in the 0. 5 - 2 kev band, respectively. it is most likely to be an outflow powered by intense starburst activity. based on the results, we discuss possible reasons for the absence of diffuse x - ray emission in the hcg 80 group, suggesting that the system is subject to galaxy interactions, and is possibly at an early stage of igm evolution.
arxiv:astro-ph/0408541
we present simple and computationally efficient nonparametric estimators of r \ ' enyi entropy and mutual information based on an i. i. d. sample drawn from an unknown, absolutely continuous distribution over $ \ mathbb { r } ^ d $. the estimators are calculated as the sum of $ p $ - th powers of the euclidean lengths of the edges of the ` generalized nearest - neighbor ' graph of the sample and the empirical copula of the sample respectively. for the first time, we prove the almost sure consistency of these estimators and upper bounds on their rates of convergence, the latter of which under the assumption that the density underlying the sample is lipschitz continuous. experiments demonstrate their usefulness in independent subspace analysis.
arxiv:1003.1954
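the edge statistic underlying the first estimator is easy to compute : sum the $ p $ - th powers of each sample point ' s nearest - neighbor distance. a brute - force numpy sketch ( a k - d tree is the scalable choice ) ; the full estimator additionally rescales this sum by a constant depending on the dimension and $ p $ and by a power of the sample size, which is omitted here :

```python
import numpy as np

def nn_power_sum(sample, p):
    # brute-force pairwise distances; fine for small n
    d = np.linalg.norm(sample[:, None, :] - sample[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # exclude each point itself
    # sum of p-th powers of nearest-neighbor edge lengths
    return np.sum(d.min(axis=1) ** p)

rng = np.random.default_rng(1)
sample = rng.uniform(size=(500, 2))    # i.i.d. uniform sample, d = 2
s = nn_power_sum(sample, p=1.0)
```

with the appropriate normalization, statistics of this form converge to a functional of the underlying density from which the r \ ' enyi entropy is read off.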
the formulae for d ^ 0 - \ bar { d } ^ 0 or b ^ 0 - \ bar { b } ^ 0 mixing and cp violation at the \ tau - charm or b - meson factories are derived, for the case that only the decay - time distribution of one d or b meson is to be measured. in particular, we point out a new possibility to determine the d ^ 0 - \ bar { d } ^ 0 mixing rate in semileptonic d decays at the \ psi ( 4. 14 ) resonance ; and show that both direct and indirect cp asymmetries can be measured at the \ upsilon ( 4s ) resonance without ordering the decay times of two b _ d mesons or measuring their difference.
arxiv:hep-ph/9907454
the tex facility is commissioned for high power testing to characterize accelerating structures and validate them for operation on future particle accelerators for medical, industrial and research applications. to this aim, tex is directly involved in the lnf leading project eupraxia @ sparc _ lab. a brief description of the facility, its status and prospects will be provided.
arxiv:2308.03053
diffusion models have recently become the de - facto approach for generative modeling in the 2d domain. however, extending diffusion models to 3d is challenging due to the difficulties in acquiring 3d ground truth data for training. on the other hand, 3d gans that integrate implicit 3d representations into gans have shown remarkable 3d - aware generation when trained only on single - view image datasets. however, 3d gans do not provide straightforward ways to precisely control image synthesis. to address these challenges, we present control3diff, a 3d diffusion model that combines the strengths of diffusion models and 3d gans for versatile, controllable 3d - aware image synthesis for single - view datasets. control3diff explicitly models the underlying latent distribution ( optionally conditioned on external inputs ), thus enabling direct control during the diffusion process. moreover, our approach is general and applicable to any type of controlling input, allowing us to train it with the same diffusion objective without any auxiliary supervision. we validate the efficacy of control3diff on standard image generation benchmarks, including ffhq, afhq, and shapenet, using various conditioning inputs such as images, sketches, and text prompts. please see the project website ( \ url { https : / / jiataogu. me / control3diff } ) for video comparisons.
arxiv:2304.06700
forward modeling of wave scattering and radar imaging mechanisms is the key to information extraction from synthetic aperture radar ( sar ) images. like inverse graphics in optical domain, an inherently - integrated forward - inverse approach would be promising for sar advanced information retrieval and target reconstruction. this paper presents such an attempt to the inverse graphics for sar imagery. a differentiable sar renderer ( dsr ) is developed which reformulates the mapping and projection algorithm of sar imaging mechanism in the differentiable form of probability maps. first - order gradients of the proposed dsr are then analytically derived which can be back - propagated from rendered image / silhouette to the target geometry and scattering attributes. a 3d inverse target reconstruction algorithm from sar images is devised. several simulation and reconstruction experiments are conducted, including targets with and without background, using both synthesized data or real measured inverse sar ( isar ) data by ground radar. results demonstrate the efficacy of the proposed dsr and its inverse approach.
arxiv:2205.07099
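The differentiable-rendering principle described above (express the rendering as smooth probability maps so that analytic gradients flow from the rendered image back to the geometry) can be illustrated in one dimension. The Gaussian splat model and all parameter values below are our own toy assumptions, not the paper's DSR:

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 200)   # 1D "image" pixel centers
sigma = 0.1                         # splat width (our choice)

def render(x):
    """Soft probability map: a Gaussian splat at scatterer position x."""
    return np.exp(-0.5 * ((grid - x) / sigma) ** 2)

def loss_and_grad(x, target):
    """MSE loss against a target image, with the analytic gradient w.r.t. x."""
    img = render(x)
    r = img - target
    # d img / d x = img * (grid - x) / sigma^2  (derivative of the Gaussian)
    dimg_dx = img * (grid - x) / sigma ** 2
    return float(np.mean(r ** 2)), float(np.mean(2.0 * r * dimg_dx))

target = render(0.7)   # "observed" image of a scatterer at position 0.7
x = 0.5                # initial guess for the geometry
for _ in range(500):
    loss, grad = loss_and_grad(x, target)
    x -= 5e-3 * grad   # gradient descent on the scatterer position
# x converges toward 0.7: the geometry is recovered from the rendered image
```

The same mechanism, with scattering attributes as additional differentiable parameters and 2D SAR probability maps instead of 1D splats, is what makes inverse target reconstruction possible.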
The main purpose of this paper is to find conditions for Hölder calmness of the solution mapping, viewed as a function of the boundary data, of a hemivariational inequality governed by the Navier-Stokes operator. To this end, a more abstract model is studied first: a class of parametric equilibrium problems defined by trifunctions. The presence of trifunctions allows the extension of the monotonicity notions and of the duality principle in the theory of equilibrium problems.
arxiv:2009.08817
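For readers unfamiliar with the stability notion named above, Hölder calmness of a solution mapping $S$ at a reference pair is usually stated as follows (a standard textbook formulation, not the paper's exact definition):

```latex
% S : P \rightrightarrows X a solution mapping, with \bar{x} \in S(\bar{p}).
% S is Holder calm at (\bar{p}, \bar{x}) with exponent \theta \in (0,1] if there
% exist \kappa > 0 and neighbourhoods U of \bar{p}, V of \bar{x} such that
\operatorname{dist}\bigl(x, S(\bar{p})\bigr)
  \le \kappa \,\| p - \bar{p} \|^{\theta}
\quad \text{for all } p \in U \text{ and } x \in S(p) \cap V .
```

With $\theta = 1$ this reduces to ordinary calmness; the paper's conditions bound how strongly solutions of the hemivariational inequality can react to perturbations of the boundary data.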
When speaking in the presence of background noise, humans reflexively change their way of speaking in order to improve the intelligibility of their speech. This reflex is known as the Lombard effect. Collecting speech in Lombard conditions is usually hard and costly. For this reason, speech enhancement systems are generally trained and evaluated on speech recorded in quiet to which noise is artificially added. Since these systems are often used in situations where Lombard speech occurs, in this work we analyse the impact that the Lombard effect has on audio, visual, and audio-visual speech enhancement, focusing on deep-learning-based systems, since they represent the current state of the art in the field. We conduct several experiments using an audio-visual Lombard speech corpus consisting of utterances spoken by 54 different talkers. The results show that training deep-learning-based models with Lombard speech is beneficial in terms of both estimated speech quality and estimated speech intelligibility at low signal-to-noise ratios, where the visual modality can play an important role in acoustically challenging situations. We also find that a performance difference between genders exists due to the distinct Lombard speech exhibited by males and females, and we analyse it in relation to acoustic and visual features. Furthermore, listening tests conducted with audio-visual stimuli show that the speech quality of signals processed with systems trained on Lombard speech is statistically significantly better than that obtained with systems trained on non-Lombard speech at a signal-to-noise ratio of -5 dB. Regarding speech intelligibility, we find a general tendency of a benefit in training the systems with Lombard speech.
arxiv:1905.12605
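The training and evaluation protocol described above (adding noise to clean recordings at a fixed signal-to-noise ratio, such as the -5 dB condition) relies on scaling the noise before mixing. A minimal sketch of such mixing; the function name and the toy signals are our own illustration, not the paper's corpus or models:

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that speech + scaled noise has the requested SNR in dB."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # SNR_dB = 10 * log10(p_speech / (g^2 * p_noise))  =>  solve for the gain g
    g = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + g * noise

# Toy "speech" (a tone) and noise at 16 kHz, mixed at the paper's -5 dB condition.
speech = np.sin(2 * np.pi * 220 * np.linspace(0.0, 1.0, 16000))
noise = rng.standard_normal(16000)
noisy = mix_at_snr(speech, noise, -5.0)
```

An enhancement model would then be trained to map `noisy` (plus, in the audio-visual case, the talker's video) back to `speech`; the paper's finding is that using Lombard rather than quiet speech as the clean target helps at these low SNRs.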
We consider directed polymer models involving multiple non-intersecting random walks moving through a space-time disordered environment in one spatial dimension. For a single random walk, Alberts, Khanin and Quastel proved that under intermediate disorder scaling (in which time and space are scaled diffusively, and the strength of the environment is scaled to zero in a critical manner) the polymer partition function converges to the solution of the stochastic heat equation with multiplicative white noise. In this paper we prove the analogous result for multiple non-intersecting random walks started and ended grouped together. The limiting object is now the multi-layer extension of the stochastic heat equation introduced by O'Connell and Warren.
arxiv:1603.08168
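The single-walk partition function discussed above has a simple discrete form: a sum over nearest-neighbour paths of the product of Boltzmann weights collected from the environment along the path, computable by a transfer-matrix recursion. A small sketch under our own conventions (i.i.d. Gaussian site disorder, simple random walk steps of plus or minus one); the multi-walk, non-intersecting case of the paper is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

T, W = 20, 41            # number of time steps and lattice width (toy sizes)
beta = 0.5               # inverse temperature: strength of the disorder
env = rng.standard_normal((T + 1, W))  # space-time i.i.d. environment omega(t, x)

# Z[x] holds the partition function of paths from the origin to site x at the
# current time, built by Z_{t+1}(x) = e^{beta*omega(t+1,x)} * (Z_t(x-1)+Z_t(x+1))/2.
Z = np.zeros(W)
Z[W // 2] = 1.0          # all paths start at the centre of the lattice
for t in range(1, T + 1):
    left = np.roll(Z, 1)   # left[x]  = Z[x-1]
    right = np.roll(Z, -1)  # right[x] = Z[x+1]
    left[0] = right[-1] = 0.0   # kill the wrap-around: paths stay on the lattice
    Z = np.exp(beta * env[t]) * 0.5 * (left + right)

total = Z.sum()   # point-to-line partition function after T steps
```

Intermediate disorder scaling corresponds to taking beta proportional to T to the power -1/4 as T grows, under which this quantity (suitably normalised) converges to the stochastic heat equation solution.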