text: string, lengths 1 to 3.65k
source: string, lengths 15 to 79
This paper proposes a theory for understanding perceptual learning processes within the general framework of laws of nature. Neural networks are regarded as systems whose connections are Lagrangian variables, namely functions depending on time. They are used to minimize the cognitive action, an appropriate functional index that measures the agent's interactions with the environment. The cognitive action contains a potential and a kinetic term that nicely resemble the classic formulation of regularization in machine learning. A special choice of the functional index, which leads to fourth-order differential equations, the Cognitive Action Laws (CAL), exhibits a structure that mirrors the classic formulation of machine learning. In particular, unlike the action of mechanics, the stationarity condition corresponds to the global minimum. Moreover, it is proven that typical asymptotic learning conditions on the weights can coexist with the initialization, provided that the system dynamics is driven under a policy referred to as information overloading control. Finally, the theory is tested on the problem of feature extraction in computer vision.
arxiv:1808.09162
In this paper we study long-distance modifications of gravity obtained by considering actions that are singular in the limit of vanishing curvature. In particular, we showed in a previous publication that models including inverse powers of curvature invariants that diverge for $R \to 0$ in the Schwarzschild geometry recover an acceptable weak-field limit at short distances from sources. We then study the linearisation of generic actions of the form $L = f[R, P, Q]$, where $P = R_{ab}R^{ab}$ and $Q = R_{abcd}R^{abcd}$. We show that for the case in which $f[R, P, Q] = f[R, Q - 4P]$, the theory is ghost free. Assuming this is the case, in the models that can explain the acceleration of the universe without recourse to dark energy there is still an extra scalar field in the spectrum besides the massless spin-two graviton. The mass of this extra excitation is of the order of the Hubble scale in vacuum. We nevertheless recover Einstein gravity at short distances because the mass of this scalar field depends on the background in such a way that it effectively decouples when one gets close to any source. Remarkably, for the values of the parameters necessary to explain the cosmic acceleration, the induced modifications of gravity are suppressed at the solar system level but can be important for systems like a galaxy.
arxiv:gr-qc/0511045
We present a novel approach for parallel computation in the context of machine learning that we call "tell me something new" (TMSN). This approach involves a set of independent workers that use broadcast to update each other when they observe "something new". TMSN does not require synchronization or a head node and is highly resilient against failing machines or laggards. We demonstrate the utility of TMSN by applying it to learning boosted trees. We show that our implementation is 10 times faster than XGBoost and LightGBM on the splice-site prediction problem.
arxiv:1805.07483
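The abstract above does not specify the protocol in detail; the following is a minimal single-process sketch of the broadcast-when-improved idea, with the shared broadcast log, the local "training progress" draw, and all names being illustrative assumptions rather than the paper's actual implementation.

```python
import random

def tmsn_simulation(num_workers=4, rounds=50, seed=0):
    """Toy simulation of 'tell me something new': each worker searches
    locally for a better score and broadcasts only when it beats the best
    result it has heard so far (no head node, no synchronization barrier;
    broadcast is modelled here as a shared append-only log)."""
    rng = random.Random(seed)
    broadcast_log = []                       # messages visible to all workers
    best_heard = [float("-inf")] * num_workers
    for _ in range(rounds):
        for w in range(num_workers):
            # incorporate everything broadcast so far
            for score in broadcast_log:
                best_heard[w] = max(best_heard[w], score)
            candidate = rng.random()         # stand-in for local training progress
            if candidate > best_heard[w]:    # "something new" -> broadcast it
                best_heard[w] = candidate
                broadcast_log.append(candidate)
    return max(best_heard), len(broadcast_log)

best, messages = tmsn_simulation()
print(best, messages)
```

Because a worker only speaks when it improves on everything it has heard, the message volume stays far below one message per worker per round, which is the property that makes the scheme cheap without a coordinating head node.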
Deep phenotyping has become an emerging field for understanding gene function and the structure of biological networks. For the living animal C. elegans, recent advances in genome-editing tools, microfluidic devices, and phenotypic analyses allow for a deeper understanding of the genotype-to-phenotype pathway. In this article, I review the evolution of deep phenotyping studies of cell development, neuron activity, and the behavior of intact animals.
arxiv:1905.02335
In this paper we analyze weighted essentially non-oscillatory (WENO) schemes in the finite volume framework by examining the first step of the explicit third-order total variation diminishing Runge-Kutta method. The rationale for the improved performance of the finite volume WENO-M, WENO-Z, and WENO-ZR schemes over WENO-JS in the first time step is that the nonlinear weights corresponding to large errors are adjusted to increase the accuracy of numerical solutions. Based on this analysis, we propose novel Z-type nonlinear weights for the finite volume WENO scheme for hyperbolic conservation laws. Instead of taking the difference of the smoothness indicators for the global smoothness indicator, we employ a logarithmic function with tuners to ensure that the numerical dissipation is reduced around discontinuities while the essentially non-oscillatory property is preserved. The proposed scheme does not necessitate substantial extra computational expense. Numerical examples are presented to demonstrate the shock-capturing capability of the proposed WENO scheme.
arxiv:2310.05679
Embedding representation learning via neural networks is at the core foundation of modern similarity-based search. While much effort has been put into developing algorithms for learning binary Hamming code representations for search efficiency, this still requires a linear scan of the entire dataset per query and trades off search accuracy through binarization. To this end, we consider the problem of directly learning, end-to-end, a quantizable embedding representation and a sparse binary hash code that can be used to construct an efficient hash table, not only providing a significant reduction in the number of data points searched but also achieving state-of-the-art search accuracy, outperforming previous state-of-the-art deep metric learning methods. We also show that the optimal sparse binary hash code in a mini-batch can be computed exactly in polynomial time by solving a minimum cost flow problem. Our results on the CIFAR-100 and ImageNet datasets show state-of-the-art search accuracy in precision@k and NMI metrics while providing up to 98x and 478x search speedup, respectively, over exhaustive linear search. The source code is available at https://github.com/maestrojeong/deep-hash-table-icml18
arxiv:1805.05809
Stochastic background gravitational waves have not yet been detected by ground-based laser interferometric detectors, but recent improvements in detector sensitivity have raised considerable expectations for their eventual detection. Previous studies have introduced methods for exploring anisotropic background gravitational waves using Bayesian statistics. These studies represent a groundbreaking approach by offering physically motivated anisotropy mapping that is distinct from the singular value decomposition regularization of the Fisher information matrix. However, they are limited by the use of a single model, which can introduce potential bias when dealing with complex data that may consist of a mixture of multiple models. Here, we demonstrate the bias introduced by a single-component model approach in the parametric interpretation of anisotropic stochastic gravitational-wave backgrounds, and we confirm that using multiple-component models can mitigate this bias.
arxiv:2411.19761
Let i.i.d. symmetric Bernoulli random variables be associated to the edges of a binary tree having $n$ levels. To any leaf of the tree, we associate the sum of the variables along the path connecting the leaf with the tree root. Let $M_n$ denote the maximum of all such sums. We prove that, as $n$ grows, the distributions of $M_n$ approach a certain helix in the space of distributions. Each element of this helix is an accumulation point for the shifts of the distributions of $M_n$.
arxiv:1212.0189
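The random variable $M_n$ is fully specified by the definitions in the abstract, so it can be sampled directly; this is a Monte Carlo sketch (the level-by-level recursion is my own implementation choice, costing $O(2^n)$ per sample):

```python
import random

def max_path_sum(n, rng):
    """One sample of M_n: assign independent +/-1 values to each edge of a
    binary tree with n levels and take the maximum over leaves of the sum
    along the root-to-leaf path, built level by level."""
    sums = [0]
    for _ in range(n):
        # each node spawns two children, each via an independent +/-1 edge;
        # rng.choice is evaluated once per child, so edges are independent
        sums = [s + rng.choice((-1, 1)) for s in sums for _ in range(2)]
    return max(sums)

rng = random.Random(1)
m = max_path_sum(10, rng)
print(m)
```

Sampling many replicates of $M_n$ for growing $n$ is a cheap way to visualize the shifted distributions whose accumulation points form the helix described above.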
We present an analysis of a data cube of the central region of M104, the Sombrero galaxy, obtained with the GMOS-IFU of the Gemini-South telescope, and report the discovery of collimation and scattering of the active galactic nucleus (AGN) emission in the circumnuclear region of this galaxy. Analysis with PCA tomography and spectral synthesis revealed the existence of collimation and scattering of the AGN featureless continuum and also of a broad component of the H$\alpha$ emission line. The collimation and scattering of this broad H$\alpha$ component was also revealed by fitting the [NII]$\lambda\lambda$6548, 6583 and H$\alpha$ emission lines as a sum of Gaussian functions. The spectral synthesis, together with a V-I image obtained with the Hubble Space Telescope, showed the existence of circumnuclear dust, which may cause the light scattering. We also identify a dusty feature that may be interpreted as a torus/disk structure. The existence of two opposite regions with featureless continuum (P.A. $= -18^\circ \pm 13^\circ$ and P.A. $= 162^\circ \pm 13^\circ$) along a direction perpendicular to the torus/disk (P.A. $= 72^\circ \pm 14^\circ$) suggests that this structure is approximately edge-on and collimates the AGN emission. The edge-on torus/disk also hides the broad-line region. The proposed scenario is compatible with the unified model and explains why only a weak broad component of the H$\alpha$ emission line is visible and also why many previous studies detected no broad H$\alpha$. The technique used here proved to be an efficient method not only for detecting scattered light but also for testing the unified model in low-luminosity AGNs.
arxiv:1302.6649
The operation of resistive and phase-change memory (RRAM and PCM) is controlled by highly localized self-heating effects, yet detailed studies of their temperature are rare due to the challenges of nanoscale thermometry. Here we show that the combination of Raman thermometry and scanning thermal microscopy (SThM) can enable such measurements with high spatial resolution. We report temperature-dependent Raman spectra of HfO$_2$, TiO$_2$ and Ge$_2$Sb$_2$Te$_5$ (GST) films, and demonstrate direct measurements of temperature profiles in lateral PCM devices. Our measurements reveal that electrical and thermal interfaces dominate the operation of such devices, uncovering a thermal boundary resistance of 30 m$^2$ K GW$^{-1}$ at GST-SiO$_2$ interfaces and an effective thermopower of 350 $\mu$V/K at GST-Pt interfaces. We also discuss possible pathways to apply Raman thermometry and SThM techniques to nanoscale and vertical resistive memory devices.
arxiv:1706.02318
In this paper, a class of quantum dialogue (QD) protocols without information leakage, assisted by quantum operations, is proposed. The participant in charge of preparation can automatically know the collapsed states after the quantum operation is performed on the prepared quantum states. The other participant is able to learn the collapsed states derived from the prepared quantum states through quantum measurement. The information leakage problem is avoided by imposing an auxiliary quantum operation on the prepared quantum states.
arxiv:2205.03221
The problem of estimating the covariance matrix $\Sigma$ of a $p$-variate distribution based on its $n$ observations arises in many data analysis contexts. While for $n > p$ the classical sample covariance matrix $\hat{\Sigma}_n$ is a good estimator for $\Sigma$, it fails in the high-dimensional setting when $n \ll p$. In this scenario one requires prior knowledge about the structure of the covariance matrix in order to construct reasonable estimators. Under the common assumption that $\Sigma$ is sparse, a refined estimator is given by $M \cdot \hat{\Sigma}_n$, where $M$ is a suitable symmetric mask matrix indicating the nonzero entries of $\Sigma$ and $\cdot$ denotes the entrywise product of matrices. In the present work we assume that $\Sigma$ has Toeplitz structure, corresponding to stationary signals. This suggests averaging the sample covariance $\hat{\Sigma}_n$ over its diagonals in order to obtain an estimator $\tilde{\Sigma}_n$ of Toeplitz structure. Assuming in addition that $\Sigma$ is sparse suggests studying estimators of the form $M \cdot \tilde{\Sigma}_n$. For Gaussian random vectors and, more generally, random vectors satisfying the convex concentration property, our main result bounds the estimation error in terms of $n$ and $p$ and shows that accurate estimation is indeed possible when $n \ll p$. The new bound significantly generalizes previous results by Cai, Ren and Zhou and provides an alternative proof. Our analysis exploits the connection between the spectral norm of a Toeplitz matrix and the supremum norm of the corresponding spectral density function.
arxiv:1709.09377
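The estimator $M \cdot \tilde{\Sigma}_n$ is concrete enough to sketch directly from the abstract: average the sample covariance over its diagonals, then apply an entrywise mask. The banded choice of mask below is only one illustrative instance of $M$; the paper treats general sparsity masks.

```python
import numpy as np

def toeplitz_masked_estimator(X, mask_bandwidth):
    """Sketch of M . Sigma~_n: average the sample covariance of the rows
    of X (shape n x p) over its diagonals to enforce Toeplitz structure,
    then keep only the first `mask_bandwidth` diagonals as a simple
    banded sparsity mask M."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                                  # sample covariance
    # average over diagonals -> Toeplitz estimate Sigma~_n
    t = np.array([np.mean(np.diagonal(S, k)) for k in range(p)])
    Sigma_toep = t[np.abs(np.subtract.outer(np.arange(p), np.arange(p)))]
    # entrywise mask: zero out diagonals beyond the bandwidth
    mask = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) < mask_bandwidth
    return Sigma_toep * mask

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))          # n = 50 observations, p = 20
est = toeplitz_masked_estimator(X, mask_bandwidth=3)
```

Averaging over diagonals reduces the variance of each estimated entry from that of a single sample-covariance entry to roughly that of an average over up to $p$ entries, which is the mechanism behind accurate estimation even when $n \ll p$.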
Learning the structure of Markov random fields (MRFs) plays an important role in multivariate analysis. Its importance has been growing with the recent rise of statistical relational models, since the MRF serves as a building block of models such as Markov logic networks. There are two fundamental ways to learn the structure of an MRF: methods based on parameter learning and those based on independence tests. The former more or less assume certain forms of distribution, so they can perform poorly when the assumption is not satisfied. The latter can learn an MRF structure without a strong distributional assumption, but it is sometimes unclear what objective function these methods maximize or minimize. In this paper, we follow the latter approach, but we explicitly define the optimization problem of MRF structure learning as maximum pseudolikelihood estimation (MPLE) with respect to the edge set. As a result, the proposed solution successfully deals with the {\em symmetricity} in MRFs, whereas such symmetricity is not taken into account in most existing independence test techniques. In our experiments, the proposed method achieved higher accuracy than previous methods when asymmetric dependencies were present.
arxiv:1807.00944
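To make the MPLE objective concrete, here is the log-pseudolikelihood of a pairwise binary MRF for a given candidate edge set; the Ising-style parametrization with a fixed unit coupling is a deliberately simplified assumption of mine (the paper optimizes this kind of objective over the edge set itself, with estimated parameters):

```python
import math

def log_pseudolikelihood(samples, edges, J=1.0):
    """Log-pseudolikelihood of +/-1 samples under a pairwise MRF with
    coupling J on the given edge set: the sum over variables of
    log p(x_i | x_neighbours).  MPLE-based structure learning compares
    this score across candidate edge sets."""
    nbrs = {}
    for a, b in edges:
        nbrs.setdefault(a, []).append(b)
        nbrs.setdefault(b, []).append(a)
    total = 0.0
    for x in samples:
        for i in range(len(x)):
            field = J * sum(x[j] for j in nbrs.get(i, []))
            # p(x_i | rest) = sigmoid(2 * x_i * field); log via -log1p(exp(-z))
            total += -math.log1p(math.exp(-2.0 * x[i] * field))
    return total

# perfectly correlated pair: the edge (0, 1) should raise the score
samples = [(1, 1)] * 5 + [(-1, -1)] * 5
good = log_pseudolikelihood(samples, [(0, 1)])
bad = log_pseudolikelihood(samples, [])
```

With no edges every conditional is $1/2$, so the empty edge set scores $-20\log 2$ here, while the correct edge strictly improves the score; scoring edge sets this way sidesteps the need for a normalizing constant, which is what makes pseudolikelihood attractive for structure learning.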
We show how to construct the exact factorized S-matrices of 1+1 dimensional quantum field theories whose symmetry charges generate a quantum affine algebra. Quantum affine Toda theories are examples of such theories. We take into account that the Lorentz spins of the symmetry charges determine the gradation of the quantum affine algebras. This gives the S-matrices a non-rigid pole structure. It depends on a kind of ``quantum'' dual Coxeter number which will therefore also determine the quantum mass ratios in these theories. As an example we explicitly construct S-matrices with $U_q(C_n^{(1)})$ symmetry.
arxiv:hep-th/9503079
Future instruments like NIRCam and MIRI on JWST or METIS at the ELT will be able to image exoplanets that are too faint for current direct imaging instruments. Evolutionary models predicting the planetary intrinsic luminosity as a function of time have traditionally concentrated on gas-dominated giant planets. We extend these cooling curves to Saturnian and Neptunian planets. We simulate the cooling of isolated core-dominated and gas giant planets with masses of 5 Earth masses to 2 Jupiter masses. The luminosity includes the contribution from the cooling and contraction of the core and of the H/He envelope, as well as radiogenic decay. For the atmosphere we use grey, AMES-Cond, petitCODE, and HELIOS models. We consider solar and non-solar metallicities as well as cloud-free and cloudy atmospheres. The most important initial conditions, namely the core-to-envelope ratio and the initial luminosity, are taken from planet formation simulations based on the core accretion paradigm. We first compare our cooling curves for Uranus, Neptune, Jupiter, Saturn, GJ 436b, and a 5 Earth-mass planet with a 1% H/He envelope with other evolutionary models. We then present the temporal evolution of planets with masses between 5 Earth masses and 2 Jupiter masses in terms of their luminosity, effective temperature, radius, and entropy. We discuss the impact of different post-formation entropies. Magnitudes in various filter bands between 0.9 and 30 micrometre wavelength are provided for the different atmosphere types and initial conditions. Using black body fluxes and non-grey spectra, we estimate the detectability of such planets with JWST. It is found that a 20 (100) Earth-mass planet can be detected with JWST in the background limit up to an age of about 10 (100) Myr with NIRCam and MIRI, respectively.
arxiv:1812.02027
The problem of the maximum rate achievable with analog network coding for a unicast communication over a layered wireless relay network with directed links is considered. A relay node performing analog network coding scales and forwards the signals received at its input. Recently this problem has been considered under two assumptions: (a) each relay node scales its received signal to the upper bound of its transmit power constraint, and (b) the relay nodes in specific subsets of the network operate in the high-SNR regime. We establish that assumption (a), in general, leads to a suboptimal end-to-end rate. We also characterize the performance of analog network coding in a class of symmetric layered networks without assumption (b). The key contribution of this work is a lemma stating that a globally optimal set of scaling factors for the nodes in a layered relay network that maximizes the end-to-end rate can be computed layer by layer. Specifically, a rate-optimal set of scaling factors for the nodes in a layer is the one that maximizes the sum-rate of the nodes in the next layer. This critical insight allows us to characterize analog network coding performance in network scenarios beyond those that can be analyzed using existing approaches. We illustrate this by computing the maximum rate achievable with analog network coding in one particular layered network, in various communication scenarios.
arxiv:1202.0372
Normalised generalised gamma processes are random probability measures that induce nonparametric prior distributions widely used in Bayesian statistics, particularly for mixture modelling. We construct a class of dependent normalised generalised gamma priors induced by a stationary population model of Moran type, which exploits a generalised P\'olya urn scheme associated with the prior. We study the asymptotic scaling of the dynamics of the number of clusters in the sample, which in turn provides a dynamic measure of diversity in the underlying population. The limit is formalised as a positive nonstationary diffusion process which falls outside well-known families, with unbounded drift and an entrance boundary at the origin. We also introduce a new class of stationary positive diffusions, whose invariant measures are explicit and have power-law tails, which weakly approximate the scaling limit.
arxiv:1608.00733
The cytoskeleton is an inhomogeneous network of semi-flexible filaments, which are involved in a wide variety of active biological processes. Although cytoskeletal filaments can be very stiff and embedded in a dense and cross-linked network, it has been shown that, in cells, they typically exhibit significant bending on all length scales. In this work we propose a model of a semi-flexible filament deformed by different types of cross-linkers, for which one can compute and investigate the bending spectrum. Our model allows us to couple the evolution of the deformation of the semi-flexible polymer with the stochastic dynamics of linkers which exert transversal forces on the filament. We observe a $q^{-2}$ dependence of the bending spectrum for some biologically relevant parameters and in a certain range of wavenumbers $q$. However, generically, the spatially localized forcing and the non-thermal dynamics both introduce deviations from the thermal-like $q^{-2}$ spectrum.
arxiv:1609.00557
We use a unified elementary approach to prove the second part of the classical, mixed, super, and mixed super Schur-Weyl dualities for general linear groups and supergroups over an infinite ground field of arbitrary characteristic. These dualities describe the endomorphism algebras of the tensor space and mixed tensor space, respectively, over the group algebra of the symmetric group and the walled Brauer algebra, respectively. Our main new results are the second part of the mixed Schur-Weyl dualities and mixed super Schur-Weyl dualities over an infinite ground field of positive characteristic.
arxiv:2307.15622
Supernova remnants (SNRs) are important objects for investigating the links among supernova (SN) explosion mechanisms, progenitor stars, and cosmic-ray acceleration. Non-thermal emission from SNRs is an effective and promising tool for probing their surrounding circumstellar media (CSM) and, in turn, the stellar evolution and mass-loss mechanisms of massive stars. In this work, we calculate the time evolution of broadband non-thermal emission from Type Ib/c SNRs whose CSM structures are derived from the mass-loss history of their progenitors. Our results predict that Type Ib/c SNRs undergo a transition in brightness in the radio and $\gamma$-ray bands, from an undetectably dark phase lasting for a certain period to a re-brightening phase. This transition originates from their inhomogeneous CSM structures, in which the SNRs are embedded within a low-density wind cavity surrounded by a high-density wind shell and the ambient interstellar medium (ISM). The "resurrection" in non-thermal luminosity happens at an age of ~1,000 yr for a Wolf-Rayet star progenitor evolving within a typical ISM density. Combined with the results on Type II SNR evolution recently reported by Yasuda et al. (2021), this result sheds light on a comprehensive understanding of non-thermal emission from SNRs with different SN progenitor types and ages, made possible for the first time by the incorporation of realistic mass-loss histories of the progenitors.
arxiv:2111.09534
As artificial intelligence (AI) becomes more pervasive in various aspects of life, AI literacy is becoming a fundamental competency that enables individuals to move safely and competently in an AI-pervaded world. There is a growing need to measure this competency, e.g., to develop targeted educational interventions. Although several measurement tools already exist, many have limitations regarding subjective data collection methods, target group differentiation, validity, and integration of current developments such as generative AI literacy. This study develops and validates the AI Competency Objective Scale (AICOS) for measuring AI literacy objectively. The presented scale addresses these weaknesses and offers a robust measurement approach that considers established competency and measurement models, captures central sub-competencies of AI literacy, and integrates the dimension of generative AI literacy. The AICOS provides a sound and comprehensive measure of AI literacy, and initial analyses show potential for a modular structure. Furthermore, a first edition of a short version of the AICOS is developed. Due to its methodological foundation, extensive validation, and integration of recent developments, the test represents a valuable resource for scientific research and practice in educational institutions and professional contexts. The AICOS significantly contributes to the development of standardized measurement instruments and enables the targeted assessment and development of AI skills in different target groups.
arxiv:2503.12921
We observe an optical filtering effect in a four-wave mixing (FWM) process based on a cold atomic gas. Side peaks appear at the edges of the pulse of the generated optical field, and they propagate through the atomic medium without absorption. Theoretical analysis shows that these side peaks correspond to the high-frequency part of the pulse of the generated signal, which means the atoms cannot respond in time to rapid changes of the electromagnetic field. In contrast, the low-frequency components of the generated signal are absorbed during transmission through the atoms. In addition, we experimentally demonstrate that the backward side peak can be stored using a Raman transition in the atomic ensemble and retrieved later.
arxiv:1410.7931
We design a deterministic subexponential time algorithm that takes as input a multivariate polynomial $f$ computed by a constant-depth circuit over the rational numbers, and outputs a list $L$ of circuits (of unbounded depth and possibly with division gates) that contains all irreducible factors of $f$ computable by constant-depth circuits. This list $L$ might also include circuits that are spurious: they either do not correspond to factors of $f$ or are not even well-defined, e.g. the input to a division gate is a sub-circuit that computes the identically zero polynomial. The key technical ingredient of our algorithm is a notion of the pseudo-resultant of $f$ and a factor $g$, which serves as a proxy for the resultant of $g$ and $f/g$, with the advantage that the circuit complexity of the pseudo-resultant is comparable to that of $f$ and $g$. This notion, which might be of independent interest, together with recent results of Limaye, Srinivasan and Tavenas, helps us derandomize one key step of multivariate polynomial factorization algorithms: that of deterministically finding a good starting point for Newton iteration in the case when the input polynomial as well as the irreducible factor of interest have small constant-depth circuits.
arxiv:2403.01965
We propose a method for determining the most likely cause, in terms of conventional generator outages and renewable fluctuations, of power system frequency reaching a predetermined level that is deemed unacceptable to the system operator. Our parsimonious model of system frequency incorporates primary and secondary control mechanisms, and supposes that conventional outages occur according to a Poisson process and renewable fluctuations follow a diffusion process. We utilize an approach based on large deviations theory that outputs the most likely cause of a large excursion of frequency from its desired level. These results yield the insight that current levels of renewable power generation do not significantly increase system vulnerability in terms of frequency deviations relative to conventional failures. However, for a large range of model parameters, such vulnerabilities may arise as renewable penetration increases.
arxiv:2002.12671
In heavy-fermion superconductors, it is widely believed that the superconducting gap function has sign reversal due to strong electron correlation. However, the recently discovered fully gapped s-wave superconductivity in CeCu$_2$Si$_2$ has clarified that a strong attractive pairing interaction can appear even in heavy-fermion systems. To understand the origin of the attractive force, we develop a multipole fluctuation theory by focusing on the inter-multipole many-body interactions called vertex corrections. By analyzing the periodic Anderson model for CeCu$_2$Si$_2$, we find that hexadecapole fluctuations mediate a strong attractive pairing interaction. Therefore, fully gapped s-wave superconductivity is driven by pure on-site Coulomb repulsion, without introducing electron-phonon interactions. The present theory of superconductivity will be useful for understanding the rich variety of superconducting states in heavy-fermion systems.
arxiv:1902.10968
The frequency content of seismic data changes with propagation depth due to intrinsic absorption. This implies that the higher frequencies are highly attenuated, leading to a loss of resolution in the seismic image. In addition, absorption anomalies, for example those caused by gas sands, will further dim the seismic reconstruction. It is possible to correct for such absorption effects by employing so-called inverse Q filtering (IQF). This is a filtering technique that tries to restore the loss of the higher frequencies due to propagation. Newer developments within IQF can be regarded as a migration type of algorithm, and such classes of techniques are studied in this paper. As seismic waves travel through the Earth, the visco-elasticity of the Earth's medium causes energy dissipation and waveform distortion. This phenomenon is referred to as seismic absorption. In explaining the propagation of a seismic wave in a given medium, we explore the relationship between the pressure and displacement stresses. By introducing an absorption function into the stress-strain relationship, we derive a non-linear wave equation. We then employ a layered Earth model to solve this non-linear wave equation.
arxiv:2308.08350
Large language models (LLMs) are one of the most promising developments in the field of artificial intelligence, and the software engineering community has readily noticed their potential role in the software development life-cycle. Developers routinely ask LLMs to generate code snippets, increasing productivity but also potentially introducing ownership, privacy, correctness, and security issues. Previous work has highlighted how code generated by mainstream commercial LLMs is often not safe, containing vulnerabilities, bugs, and code smells. In this paper, we present a framework that leverages testing and static analysis to assess the quality, and guide the self-improvement, of code generated by general-purpose, open-source LLMs. First, we ask LLMs to generate C code to solve a number of programming tasks. Then we employ ground-truth tests to assess the (in)correctness of the generated code, and a static analysis tool to detect potential safety vulnerabilities. Next, we assess the models' ability to evaluate the generated code, by asking them to detect errors and vulnerabilities. Finally, we test the models' ability to fix the generated code, providing the reports produced during the static analysis and incorrectness evaluation phases as feedback. Our results show that models often produce incorrect code, and that the generated code can include safety issues. Moreover, they perform very poorly at detecting either issue. On the positive side, we observe a substantial ability to fix flawed code when provided with information about failed tests or potential vulnerabilities, indicating a promising avenue for improving the safety of LLM-based code generation tools.
arxiv:2412.14841
Joint communication and sensing is expected to be one of the features introduced by sixth-generation (6G) wireless systems. This will enable a huge variety of new applications; hence, it is important to find suitable approaches to secure the exchanged information. Conventional security mechanisms may not be able to meet the stringent delay, power, and complexity requirements, which opens the challenge of finding new lightweight security solutions. A promising approach coming from the physical layer is secret key generation (SKG) from channel fading. While SKG has been investigated for several decades, practical implementations of its full protocol are still scarce. The aim of this chapter is to evaluate SKG rates in real-life setups under a set of different scenarios. We consider a typical radar waveform and present a full implementation of the SKG protocol. Each step is evaluated to demonstrate that generating keys from the physical layer can be a viable solution for future networks. However, we show that there is no single solution that can be generalized to all cases; instead, parameters should be chosen according to the context.
arxiv:2310.14624
We consider cosmological dynamics in a theory of gravity with a scalar field possessing a nonminimal kinetic coupling to gravity, $\kappa G_{\mu\nu}\phi^{\mu}\phi^{\nu}$, and a power-law potential $V(\phi) = V_0\phi^n$. Using the dynamical system method, we analyze all possible asymptotic regimes of the model under investigation and show that for sloping potentials with $0 < n < 2$ there exists a quasi-de Sitter asymptotic $H = 1/\sqrt{9\kappa}$ corresponding to an early inflationary universe. In contrast to the standard inflationary scenario, the kinetic coupling inflation does not depend on the scalar field potential and is determined only by the coupling parameter $\kappa$. We find that there exist two different late-time asymptotic regimes. The first leads to the usual power-law cosmological evolution with $H = 1/3t$, while the second represents a late-time inflationary universe with $H = 1/\sqrt{3\kappa}$. This secondary inflationary phase depends only on $\kappa$ and is a specific feature of the model with nonminimal kinetic coupling. Additionally, an asymptotic analysis shows that for the quadratic potential with $n = 2$ the asymptotic regimes remain qualitatively the same, while the kinetic coupling inflation is impossible for steep potentials with $n > 2$. Using numerical analysis, we also construct exact cosmological solutions and find initial conditions leading to the initial kinetic coupling inflation followed either by a "graceful" oscillatory exit or by the secondary inflation.
arxiv:1306.5090
the central region of the galaxy has been observed at 580, 620 and 1010 mhz with the giant metrewave radio telescope (gmrt). we detect emission from sgr a*, the compact object at the dynamical centre of the galaxy, and estimate its flux density at 620 mhz to be 0.5 +/- 0.1 jy. this is the first detection of sgr a* below 1 ghz (roy & rao 2002, 2003), which along with a possible detection at 330 mhz (nord et al. 2004) provides its spectrum below 1 ghz. comparison of the 620 mhz map with maps made at other frequencies indicates that most parts of the sgr a west hii region have optical depth 2. however, sgr a*, which is seen in the same region in projection, shows a slightly inverted spectral index between 1010 mhz and 620 mhz. this is consistent with its high-frequency spectral index, indicates that sgr a* is located in front of the sgr a west complex, and rules out any low-frequency turnover around 1 ghz, as suggested by davies et al. (1976).
arxiv:astro-ph/0402052
we present timber, the first white - box poisoning attack targeting decision trees. timber is based on a greedy attack strategy that leverages sub - tree retraining to efficiently estimate the damage caused by poisoning a given training instance. the attack relies on a tree annotation procedure, which enables the sorting of training instances so that they are processed in increasing order of the computational cost of sub - tree retraining. this sorting yields a variant of timber that supports an early stopping criterion, designed to make poisoning attacks more efficient and feasible on larger datasets. we also discuss an extension of timber to traditional random forest models, which is valuable since decision trees are typically combined into ensembles to improve their predictive power. our experimental evaluation on public datasets demonstrates that our attacks outperform existing baselines in terms of effectiveness, efficiency, or both. moreover, we show that two representative defenses can mitigate the effect of our attacks, but fail to effectively thwart them.
arxiv:2410.00862
we investigate the effects of quantum fluctuations in a parity-time ($\mathcal{PT}$) symmetric two-species bose-einstein condensate (bec). it is found that the $\mathcal{PT}$ symmetry, though preserved by the macroscopic condensate, can be spontaneously broken by its bogoliubov quasi-particles under quantum fluctuations. the associated $\mathcal{PT}$-breaking transitions in the bogoliubov spectrum can be conveniently tuned by the interaction anisotropy in spin channels and the strength of the $\mathcal{PT}$ potential. in the $\mathcal{PT}$-unbroken regime, the real bogoliubov modes are generally gapped, in contrast to the gapless phonon mode in the hermitian case. moreover, the presence of the $\mathcal{PT}$ potential is found to enhance the mean-field collapse and thereby trigger droplet formation once the repulsive force from quantum fluctuations is incorporated. these remarkable interplay effects of $\mathcal{PT}$ symmetry and interaction can be directly probed in cold-atom experiments, shedding light on related quantum phenomena in general $\mathcal{PT}$-symmetric systems.
arxiv:2108.04403
science and non-science are often distinguished by the criterion of falsifiability. the criterion was first proposed by philosopher of science karl popper. to popper, science does not rely on induction; instead, scientific investigations are inherently attempts to falsify existing theories through novel tests. if a single test fails, then the theory is falsified. therefore, any test of a scientific theory must prohibit certain results that falsify the theory, and expect other specific results consistent with the theory. using this criterion of falsifiability, astrology is a pseudoscience. astrology was popper's most frequent example of pseudoscience. popper regarded astrology as "pseudo-empirical" in that "it appeals to observation and experiment", but "nevertheless does not come up to scientific standards". in contrast to scientific disciplines, astrology does not respond to falsification through experiment. according to professor of neurology terence hines, this is a hallmark of pseudoscience. = = = " no puzzles to solve " = = = in contrast to popper, the philosopher thomas kuhn argued that it was not lack of falsifiability that makes astrology unscientific, but rather that the process and concepts of astrology are non-empirical. to kuhn, although astrologers had, historically, made predictions that "categorically failed", this in itself does not make astrology unscientific, nor do the attempts by astrologers to explain away the failures by claiming that the creation of a horoscope is very difficult (through subsuming, after the fact, a more general horoscope that leads to a different prediction).
rather, in kuhn's eyes, astrology is not science because it was always more akin to medieval medicine: its practitioners followed a sequence of rules and guidelines for a seemingly necessary field with known shortcomings, but they did no research because the fields are not amenable to research, and so "they had no puzzles to solve and therefore no science to practise." while an astronomer could correct for failure, an astrologer could not. an astrologer could only explain away failure but could not revise the astrological hypothesis in a meaningful way. as such, to kuhn, even if the stars could influence the path of humans through life, astrology is not scientific.
https://en.wikipedia.org/wiki/Astrology_and_science
we study a simple two higgs doublet model which reflects, in a phenomenological way, the idea of compositeness for the higgs sector. it is relatively predictive. in one scenario, it allows for a " hidden " usual higgs particle in the 100 gev region and a possible dark matter candidate.
arxiv:0805.0293
the emission spectra for spin-1 photon fields are computed when the spacetime is a $(4+n)$-dimensional schwarzschild phase. for the case of bulk emission we compute the spectra for the vector mode and the scalar mode separately. although the emissivities for the scalar mode are larger than those for the vector mode when $n$ is small, the emissivities for the vector mode photon rapidly become dominant with increasing $n$. for the case of brane emission the spectra are computed numerically by making use of the complex potential method. comparison of the total bulk emissivities with the total brane emissivities indicates that the effect of the field spin makes the bulk emission rapidly dominant with increasing $n$. however, the bulk-to-brane relative emissivity per degree of freedom always remains smaller than unity. the relevance to the spin-2 graviton emission problem is discussed.
arxiv:hep-th/0610089
until the mid-1970s. membrane separation processes differ based on separation mechanisms and the size of the separated particles. the widely used membrane processes include microfiltration, ultrafiltration, nanofiltration, reverse osmosis, electrolysis, dialysis, electrodialysis, gas separation, vapor permeation, pervaporation, membrane distillation, and membrane contactors. all processes except pervaporation involve no phase change. all processes except electrodialysis are pressure driven. microfiltration and ultrafiltration are widely used in food and beverage processing (beer microfiltration, apple juice ultrafiltration), biotechnological applications and the pharmaceutical industry (antibiotic production, protein purification), water purification and wastewater treatment, the microelectronics industry, and others. nanofiltration and reverse osmosis membranes are mainly used for water purification purposes. dense membranes are utilized for gas separations (removal of co2 from natural gas, separating n2 from air, organic vapor removal from air or a nitrogen stream) and sometimes in membrane distillation. the latter process helps in the separation of azeotropic compositions, reducing the costs of distillation processes. = = pore size and selectivity = = the pore sizes of technical membranes are specified differently depending on the manufacturer. one common distinction is by nominal pore size. it describes the maximum pore size distribution and gives only vague information about the retention capacity of a membrane. the exclusion limit or "cut-off" of the membrane is usually specified in the form of nmwc (nominal molecular weight cut-off, or mwco, molecular weight cut off, with units in dalton). it is defined as the minimum molecular weight of a globular molecule that is retained to 90% by the membrane. the cut-off, depending on the method, can be converted to the so-called d90, which is then expressed in a metric unit.
in practice the mwco of the membrane should be at least 20% lower than the molecular weight of the molecule that is to be separated. using track-etched mica membranes, beck and schultz demonstrated that hindered diffusion of molecules in pores can be described by the renkin equation. filter membranes are divided into four classes according to pore size: the form and shape of the membrane pores are highly dependent on the manufacturing process and are often difficult to specify. therefore, for characterization, test filtrations are carried out and the pore diameter refers to the diameter of the smallest particles which could not pass through the membrane. the
https://en.wikipedia.org/wiki/Membrane_technology
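the 20%-lower rule of thumb for choosing a membrane cut-off described above can be expressed as a small helper. this is our own illustrative encoding of the rule; the function names and the default margin parameter are not from the source text beyond the stated 20%.

```python
def max_mwco_for(target_molecular_weight_da, margin=0.20):
    """rule of thumb from the text: the membrane mwco should be at least
    `margin` (default 20%) below the molecular weight of the solute
    that is to be retained. weights are in dalton (da)."""
    return (1.0 - margin) * target_molecular_weight_da

def retains(mwco_da, target_molecular_weight_da, margin=0.20):
    """does a membrane with this mwco satisfy the rule for this solute?"""
    return mwco_da <= max_mwco_for(target_molecular_weight_da, margin)

# e.g. to retain a 10 kda protein, pick a membrane with mwco <= 8 kda
limit = max_mwco_for(10_000)
```

note that mwco is itself only a nominal 90%-retention figure, so this rule gives a selection margin, not a guarantee of complete retention.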
electron - hole systems on a haldane sphere are studied by exact numerical diagonalization. low lying states contain one or more types of bound charged excitonic complexes xk -, interacting through appropriate pseudopotentials. incompressible ground states of such multi - component plasmas are found. a generalized multi - component laughlin wavefunction and composite fermion picture are shown to predict the low lying states of an electron - hole gas at any value of the magnetic field.
arxiv:cond-mat/9904395
the critical behaviour of the one-dimensional q-state potts model with long-range interactions decaying with distance $r$ as $r^{-(1+\sigma)}$ has been studied in a wide range of parameters $0 < \sigma \le 1$ and $\frac{1}{16} \le q \le 64$. a transfer matrix has been constructed for a truncated range of interactions for integer and continuous q, and finite-range scaling has been applied. results for the phase diagram and the correlation length critical exponent are presented.
arxiv:hep-lat/9303016
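for the shortest possible truncation (nearest-neighbour interactions only), the transfer matrix mentioned above is small enough to write down directly. the sketch below is an illustrative reconstruction, not the authors' code: it builds the matrix for integer q and checks its leading eigenvalue against the known closed form for the nearest-neighbour 1d potts chain.

```python
import numpy as np

def potts_transfer_matrix(q, K):
    """nearest-neighbour transfer matrix t[s, s'] = exp(K * delta(s, s'))
    for the 1d q-state potts model (the r = 1 truncation of the range),
    with K the dimensionless coupling."""
    T = np.full((q, q), 1.0)        # exp(0) for unequal neighbour states
    np.fill_diagonal(T, np.exp(K))  # exp(K) for equal neighbour states
    return T

q, K = 3, 1.0
T = potts_transfer_matrix(q, K)
lam_max = np.linalg.eigvalsh(T).max()

# exact largest eigenvalue for nearest-neighbour coupling: e^K + q - 1,
# since T = (e^K - 1) I + J with J the all-ones matrix
exact = np.exp(K) + q - 1
```

the free energy per site follows from the logarithm of the largest eigenvalue; longer truncated ranges enlarge the matrix to q^r states per column but the procedure is the same.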
this article is the third in a series of three articles, the aim of which is to study various correspondences between four enumerative theories associated to a surface $s$: gromov-witten theory of $s^{[n]}$, orbifold gromov-witten theory of $[s^{(n)}]$, relative gromov-witten theory of $s \times c$ for a nodal curve $c$ and relative donaldson-thomas theory of $s \times c$. in this article, we introduce a one-parameter stability condition, termed $\epsilon$-admissibility, for relative maps from nodal curves to $x \times c$. if $x$ is a point, $\epsilon$-admissibility interpolates between moduli spaces of stable maps to $c$ relative to some fixed points and moduli spaces of admissible covers with arbitrary ramifications over the same fixed points and simple ramifications elsewhere on $c$. using zhou's calibrated tails, we prove wall-crossing formulas relating invariants for different values of $\epsilon$. if $x$ is a surface $s$, we use this wall-crossing in conjunction with the author's quasimap wall-crossing to show that the relative pandharipande-thomas/gromov-witten correspondence of $s \times c$ and ruan's extended crepant resolution conjecture for the pair $s^{[n]}$ and $[s^{(n)}]$ are equivalent up to explicit wall-crossings. we thereby prove the crepant resolution conjecture for 3-point genus-0 invariants in all classes when $s$ is a toric del pezzo surface.
arxiv:2208.00889
a prescription to incorporate the effects of nuclear flow on the process of multifragmentation of hot nuclei is proposed in an analytically solvable canonical model. flow is simulated by the action of an effective negative external pressure. it favors sharpening the signatures of liquid - gas phase transition in finite nuclei with increased multiplicity and a lowered phase transition temperature.
arxiv:nucl-th/0405018
stars form out of molecular gas and supply dust grains during their last evolutionary stages; in turn, hydrogen molecules (h2) are produced more efficiently on dust grains. therefore, dust can drastically accelerate h2 formation, leading to an enhancement of star formation activity. in order to examine the first formation of stars and dust in galaxies, we model the evolution of galaxies in the redshift range 5 < z < 20. in particular, we focus on the interplay between dust formation in type ii supernova ejecta and h2 production on dust grains. this interplay causes an enhancement of the star formation rate by an order of magnitude on a timescale (~3-5 galactic dynamical times) shorter than the hubble timescale. we also find that about half of the radiative energy from stars is reprocessed by dust grains and is finally radiated in the far infrared (fir). typical star formation rates and luminosities (fir, uv and metal-line luminosities) are calculated for a large set of (m_vir, z_vir). using these results and the press-schechter formalism, we calculate galaxy number counts and integrated light from high-redshift (z > 5) galaxies in sub-millimetre and near-infrared bands. we find that: i) alma can detect dust emission from several thousand galaxies per square degree, and ii) ngst can detect the stellar emission from 10^6 galaxies per square degree. further observational checks of our predictions include the integrated flux of metal (oxygen and carbon) lines. we finally discuss possible color selection strategies for high-redshift galaxy searches.
arxiv:astro-ph/0209034
the use of synthetic speech as data augmentation is gaining increasing popularity in fields such as automatic speech recognition and speech classification tasks. despite novel text-to-speech systems with voice cloning capabilities, which allow the use of a larger number of voices based on short audio segments, it is known that these systems tend to hallucinate and oftentimes produce bad data that will most likely have a negative impact on the downstream task. in the present work, we conduct a set of experiments around zero-shot learning with synthetic speech data for the specific task of speech commands classification. our results on the google speech commands dataset show that a simple asr-based filtering method can have a big impact on the quality of the generated data, translating into better performance. furthermore, despite the good quality of the generated speech data, we also show that synthetic and real speech can still be easily distinguished when using self-supervised (wavlm) features, an aspect further explored with a cyclegan to bridge the gap between the two types of speech material.
arxiv:2409.12745
band convergence is considered a clear benefit to thermoelectric performance because it increases the charge carrier concentration for a given fermi level, which typically enhances charge conductivity while preserving the seebeck coefficient. however, this advantage hinges on the assumption that interband scattering of carriers is weak or insignificant. with a first-principles treatment of electron-phonon scattering in the camg$_2$sb$_2$-cazn$_2$sb$_2$ zintl system and the full heusler sr$_2$sbau, we demonstrate that the benefit of band convergence can be intrinsically negated by interband scattering, depending on the manner in which bands converge. in the zintl alloy, band convergence does not improve weighted mobility or the density-of-states effective mass. we trace the underlying reason to the fact that the bands converge at one k-point, which induces strong interband scattering of both the deformation-potential and polar-optical kinds. the case contrasts with band convergence at distant k-points (as in the full heusler), which better preserves the single-band scattering behavior, thereby successfully leading to improved performance. therefore, we suggest that band convergence as a thermoelectric design principle is best suited to cases in which it occurs at distant k-points.
arxiv:2012.02272
in this work we study the phase transition of the charged ads black hole surrounded by quintessence via an alternative extended phase space defined by the square of the charge $q^2$ and its conjugate $\psi$, a quantity proportional to the inverse of the horizon radius, while the cosmological constant is kept fixed. the equation of state is derived in the form $q^2 = q^2(t, \psi)$ and the critical behavior of such a black hole is analyzed. in addition, we explore the connection between the microscopic structure and ruppeiner geothermodynamics. we also find that, at certain points of the phase space, the ruppeiner curvature exhibits singularities that are interpreted as phase transitions.
arxiv:1704.07720
positioning of the midcell division plane within the bacterium e. coli is controlled by the min system of proteins: minc, mind and mine. these proteins coherently oscillate from end to end of the bacterium. we present a reaction-diffusion model describing the diffusion of min proteins along the bacterium and their transfer between the cytoplasmic membrane and cytoplasm. our model spontaneously generates protein oscillations in good agreement with experiments. we explore the oscillation stability, frequency and wavelength as a function of protein concentration and bacterial length.
arxiv:cond-mat/0112125
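the numerical backbone of such reaction-diffusion models is the discretized diffusion of a protein density along the bacterium. the sketch below is a minimal one-dimensional diffusion step of our own, not the paper's full mincde dynamics: an explicit euler update with reflecting (no-flux) ends, which conserves total protein, as the closed membrane of the cell requires.

```python
import numpy as np

def diffuse(u, D, dx, dt, steps):
    """explicit euler integration of du/dt = D d^2u/dx^2 with no-flux
    boundaries; stable for D*dt/dx**2 < 0.5."""
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0] = (u[1] - u[0]) / dx**2      # no-flux left end
        lap[-1] = (u[-2] - u[-1]) / dx**2   # no-flux right end
        u = u + dt * D * lap
    return u

u0 = np.zeros(50)
u0[:10] = 1.0                               # protein piled at one pole
u1 = diffuse(u0, D=1.0, dx=0.1, dt=0.001, steps=2000)
```

a full min model adds reaction terms for membrane binding and unbinding on top of this diffusion step; it is the interplay of those nonlinear exchange terms with diffusion that produces the pole-to-pole oscillations.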
as these attacks become more and more difficult to detect, the need for advanced models that can identify them is undeniable. this paper examines and compares various machine learning and deep learning models to choose the most suitable ones for detecting and fighting cybersecurity risks. two datasets are used in the study to assess models such as naive bayes, svm, random forest, and deep learning architectures, i.e., vgg16, in terms of accuracy, precision, recall, and f1-score. the analysis shows that random forest and extra trees do better in terms of accuracy, though in different respects depending on the dataset characteristics and types of threat. this research not only emphasizes the strengths and weaknesses of each predictive model but also addresses the difficulties associated with deploying such technologies in real-world environments, such as data dependency and computational demands. the findings are targeted at cybersecurity professionals, to help them select appropriate predictive models and configure them to strengthen security measures against cyber threats.
arxiv:2407.06014
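the four evaluation metrics named above all derive from one confusion matrix. the sketch below (a generic illustration, not the paper's evaluation code) computes them from scratch for a toy set of intrusion labels, where 1 marks an attack and 0 benign traffic.

```python
def confusion_counts(y_true, y_pred):
    """true/false positive and negative counts for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def scores(y_true, y_pred):
    """accuracy, precision, recall and f1-score from the confusion counts."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# toy labels: 1 = attack, 0 = benign
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
acc, prec, rec, f1 = scores(y_true, y_pred)
```

reporting all four together matters in this setting because attack traffic is usually rare: a model can score high accuracy while its recall on the attack class, the number defenders actually care about, stays poor.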
it is known that many constructions arising in the classical gaussian infinite dimensional analysis can be extended to the case of more general measures. one such extension can be obtained through biorthogonal systems of appell polynomials and generalized functions. in this paper, we consider linear continuous operators from a nuclear frechet space of test functions to itself in this more general setting. we construct an isometric integral transform ( biorthogonal cs - transform ) of those operators into the space of germs of holomorphic functions on a locally convex infinite dimensional nuclear space. using such transform, we provide characterization theorems and give biorthogonal chaos expansion for operators.
arxiv:math/0403025
we consider the asymptotics of the discrete heat kernel on isoradial graphs for the case where the time and the edge lengths tend to zero simultaneously. depending on the asymptotic ratio between time and edge lengths, we show that two different regimes arise : ( i ) a gaussian regime and ( ii ) a poissonian regime, which resemble the short - time asymptotics of the heat kernel on ( i ) euclidean spaces and ( ii ) graphs, respectively.
arxiv:2301.06852
the su - schrieffer - heeger model of polyacetylene is a paradigmatic hamiltonian exhibiting non - trivial edge states. by using floquet theory we study how the spectrum of this one - dimensional topological insulator is affected by a time - dependent potential. in particular, we evidence the competition among different photon - assisted processes and the native topology of the unperturbed hamiltonian to settle the resulting topology at different driving frequencies. while some regions of the quasienergy spectrum develop new gaps hosting floquet edge states, the native gap can be dramatically reduced and the original edge states may be destroyed or replaced by new floquet edge states. our study is complemented by an analysis of zak phase applied to the floquet bands. besides serving as a simple example for understanding the physics of driven topological phases, our results could find a promising test - ground in cold matter experiments.
arxiv:1506.03067
we introduce a new multiple type i error criterion for clinical trials with multiple populations. such trials are of interest in precision medicine, where the goal is to develop treatments that are targeted to specific sub-populations defined by genetic and/or clinical biomarkers. the new criterion is based on the observation that not all type i errors are relevant to all patients in the overall population. if disjoint sub-populations are considered, no multiplicity adjustment appears necessary, since a claim in one sub-population does not affect patients in the other ones. for intersecting sub-populations we suggest controlling the average multiple type i error rate, i.e. the probability that a randomly selected patient will be exposed to an inefficient treatment. we call this the population-wise error rate, exemplify it with a number of examples and illustrate how to control it with an adjustment of critical boundaries or adjusted p-values. we furthermore define corresponding simultaneous confidence intervals. we finally illustrate the power gain achieved by passing from family-wise to population-wise error rate control with two simple examples and a recently suggested multiple testing approach for umbrella trials.
arxiv:2011.04766
some aspects of the construction of sw floer homology for manifolds with non-trivial rational homology are analyzed. in particular, we consider the case of manifolds obtained as zero-surgery on a knot in a homology sphere, and the case of torsion spinc structures. we discuss relative invariants in the case of torsion spinc structures.
arxiv:math/0009159
we study transport through a coulomb blockaded topologically nontrivial superconducting wire ( with majorana end states ) contacted by metallic leads. an exact formula for the current through this interacting majorana single - charge transistor is derived in terms of wire spectral functions. a comprehensive picture follows from three different approaches. we find coulomb oscillations with universal halving of the finite - temperature peak conductance under strong blockade conditions, where the valley conductance mainly comes from elastic cotunneling. the nonlinear conductance exhibits finite - voltage sidebands due to anomalous tunneling involving cooper pair splitting.
arxiv:1206.3912
we prove the ergodic closing lemma for nonsingular endomorphisms.
arxiv:0906.2031
to provide physical insight into the recently observed photoluminescence saturation behaviors in single - walled carbon nanotubes implying the existence of an upper limit of exciton densities, we have performed a time - dependent theoretical study of diffusion - limited exciton - exciton annihilation in the general context of reaction - diffusion processes, for which exact treatments exist. by including the radiative recombination decay as a poissonian process in the exactly - solvable problem of one - dimensional diffusion - driven two - particle annihilation, we were able to correctly model the dynamics of excitons as a function of time with different initial densities, which in turn allowed us to reproduce the experimentally observed photoluminescence saturation behavior at high exciton densities. we also performed monte carlo simulations of the purely stochastic, brownian diffusive motion of one - dimensional excitons, which validated our analytical results. finally, we consider the temperature - dependence of this diffusion - limited exciton - exciton annihilation and point out that high excitonic densities in swnts could be achieved at low temperature in an external magnetic field.
arxiv:0810.5748
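the monte carlo ingredient mentioned above can be sketched in a few lines. this is our own minimal version, not the authors' simulation: walkers hop on a one-dimensional ring and any pair meeting on a site annihilates, while radiative decay (the poissonian loss channel of the paper) is omitted for brevity.

```python
import random
from collections import Counter

def annihilation_walk(n_excitons, length, steps, seed=0):
    """brownian walkers on a 1d ring; pairs on the same site annihilate.
    returns the number of surviving excitons after `steps` hops."""
    rng = random.Random(seed)
    positions = rng.sample(range(length), n_excitons)  # distinct start sites
    for _ in range(steps):
        positions = [(x + rng.choice((-1, 1))) % length for x in positions]
        counts = Counter(positions)
        # pairs annihilate; an odd walker on a site survives
        positions = [x for x, c in counts.items() for _ in range(c % 2)]
    return len(positions)

survivors = annihilation_walk(n_excitons=40, length=100, steps=500)
```

averaging such runs over many seeds and initial densities reproduces the density-dependent decay that, with radiative recombination added, yields the photoluminescence saturation at high exciton densities.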
we study quantum-mechanical transport on two-dimensional graphs by means of continuous-time quantum walks and analyse the effect of different boundary conditions (bcs). for periodic bcs in both directions, i.e., for tori, the problem can be treated largely analytically. some of these results carry over to graphs which obey open boundary conditions (obcs), such as cylinders or rectangles. under obcs the long-time transition probabilities (lps) also display asymmetries for certain graphs, as a function of their particular sizes. interestingly, these effects do not show up in the marginal distributions, obtained by summing the lps along one direction.
arxiv:quant-ph/0610212
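the basic object behind such studies is the transition probability of a continuous-time quantum walk, obtained by exponentiating the graph hamiltonian. the sketch below (an illustrative reconstruction, using the adjacency matrix as hamiltonian, which is one common convention) computes these probabilities on an n-cycle, i.e., the one-dimensional periodic case from which the torus results are built.

```python
import numpy as np

def ctqw_probabilities(n, t):
    """transition probabilities of a continuous-time quantum walk on an
    n-cycle: with hamiltonian h = adjacency matrix a, the probability to
    go from node k to node j in time t is |(e^{-i a t})_{jk}|^2."""
    A = np.zeros((n, n))
    for j in range(n):
        A[j, (j + 1) % n] = A[j, (j - 1) % n] = 1.0
    w, V = np.linalg.eigh(A)                      # spectral decomposition
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return np.abs(U) ** 2

P = ctqw_probabilities(8, 1.5)
```

unitarity guarantees that each row of P sums to one, and the translation symmetry of the cycle makes P symmetric; on a torus the hamiltonian is a sum of two such cycle terms and the probabilities factorize accordingly.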
phenotype segmentation is pivotal in analysing visual features of living organisms, enhancing our understanding of their characteristics. in the context of oysters, meat quality assessment is paramount, focusing on shell, meat, gonad, and muscle components. traditional manual inspection methods are time - consuming and subjective, prompting the adoption of machine vision technology for efficient and objective evaluation. we explore machine vision ' s capacity for segmenting oyster components, leading to the development of a multi - network ensemble approach with a global - local hierarchical attention mechanism. this approach integrates predictions from diverse models and addresses challenges posed by varying scales, ensuring robust instance segmentation across components. finally, we provide a comprehensive evaluation of the proposed method ' s performance using different real - world datasets, highlighting its efficacy and robustness in enhancing oyster phenotype segmentation.
arxiv:2501.11203
we present in this paper a new benchmark for evaluating the performance of data warehouses. benchmarking is useful either to system users, for comparing the performance of different systems, or to system engineers, for testing the effect of various design choices. while the tpc (transaction processing performance council) standard benchmarks address the first point, they are not tuneable enough to address the second one. our data warehouse engineering benchmark (dweb) allows various ad-hoc synthetic data warehouses and workloads to be generated. dweb is fully parameterized. however, two levels of parameterization keep it easy to tune. since dweb mainly meets engineering benchmarking needs, it is complementary to the tpc standard benchmarks, not a competitor. finally, dweb is implemented as free java software that can be interfaced with most existing relational database management systems.
arxiv:0704.3501
fashion is the way we present ourselves to the world and has become one of the world ' s largest industries. fashion, mainly conveyed by vision, has thus attracted much attention from computer vision researchers in recent years. given the rapid development, this paper provides a comprehensive survey of more than 200 major fashion - related works covering four main aspects for enabling intelligent fashion : ( 1 ) fashion detection includes landmark detection, fashion parsing, and item retrieval, ( 2 ) fashion analysis contains attribute recognition, style learning, and popularity prediction, ( 3 ) fashion synthesis involves style transfer, pose transformation, and physical simulation, and ( 4 ) fashion recommendation comprises fashion compatibility, outfit matching, and hairstyle suggestion. for each task, the benchmark datasets and the evaluation protocols are summarized. furthermore, we highlight promising directions for future research.
arxiv:2003.13988
without the mass-energy equivalence available on minkowski spacetime $\mathbb{M}$, it is not possible on 4-dimensional non-relativistic galilei/newton spacetime $\mathbb{G}$ to combine 3-momentum and total mass-energy in a single tensor object. however, given a fiducial frame, it is possible to combine 3-momentum and kinetic energy into a linear form (particle) or $(1,1)$ tensor (continuum) in a manner that exhibits increased unity of classical mechanics on the flat relativistic and non-relativistic spacetimes $\mathbb{M}$ and $\mathbb{G}$. as on $\mathbb{M}$, for a material continuum on $\mathbb{G}$, the first law of thermodynamics can be considered a consequence of a unified dynamical law for energy-momentum rather than an independent postulate.
arxiv:1909.06171
we consider two types of nonlinear fast diffusion equations in $r^n$: (1) an external drift type equation with a general external potential. it is a natural extension of the harmonic potential case, which has been studied in many papers. in this paper we prove the large-time asymptotic convergence to the stationary state by using entropy methods. (2) a mean-field type equation with a convolution term. the stationary solution is the minimizer of the free energy functional, which is directly related to reverse hardy-littlewood-sobolev inequalities. in this paper, we prove that in some special cases the large-time asymptotic convergence to the stationary state also holds.
arxiv:2011.02343
lexical resources are crucial for cross - linguistic analysis and can provide new insights into computational models for natural language learning. here, we present an advanced database for comparative studies of words with multiple meanings, a phenomenon known as colexification. the new version includes improvements in the handling, selection and presentation of the data. we compare the new database with previous versions and find that our improvements provide a more balanced sample covering more language families worldwide, with an enhanced data quality, given that all word forms are provided in phonetic transcription. we conclude that the new database of cross - linguistic colexifications has the potential to inspire exciting new studies that link cross - linguistic data to open questions in linguistic typology, historical linguistics, psycholinguistics, and computational linguistics.
arxiv:2503.11377
we present in this paper the algebra of fused permutations and its deformation, the fused hecke algebra. the first is defined on a set of combinatorial objects that we call fused permutations, and its deformation is defined on a set of topological objects that we call fused braids. we use these algebras to prove a schur-weyl duality theorem for any tensor product of any symmetrised powers of the natural representation of $u_q(gl_n)$. then we proceed to the study of the fused hecke algebras and, in particular, we describe explicitly the irreducible representations and the branching rules. finally, we aim at an algebraic description of the centralisers of the tensor products of $u_q(gl_n)$-representations under consideration. we exhibit a simple explicit element that we conjecture generates the kernel of the map from the fused hecke algebra to the centraliser. we prove this conjecture in some cases and, in particular, we obtain a description of the centraliser of any tensor product of any finite-dimensional representations of $u_q(sl_2)$.
arxiv:2001.11372
a notion of branch - width, which generalizes the one known for graphs, can be defined for matroids. we first give a proof of the polynomial time model - checking of monadic second - order formulas on representable matroids of bounded branch - width, by reduction to monadic second - order formulas on trees. this proof is much simpler than the one previously known. we also provide a link between our logical approach and a grammar that allows one to build matroids of bounded branch - width. finally, we introduce a new class of not necessarily representable matroids, described by a grammar and on which monadic second - order formulas can be checked in linear time.
arxiv:0908.4499
the financial markets are understood as complex dynamical systems whose dynamics is analysed mostly using nonstationary and brief data sets that usually come from stock markets. for such data sets, a reliable method of analysis is based on recurrence plots and recurrence networks, constructed from the data sets over the period of study. in this study, we do a comprehensive analysis of the complexity of the underlying dynamics of 26 markets around the globe using recurrence based measures. we also examine trends in the nature of transitions as revealed from these measures by the sliding window analysis along the time series during the global financial crisis of 2008 and compare that with changes during the most recent pandemic - related lockdown. we show that the measures derived from recurrence patterns can be used to capture the nature of transitions in stock market dynamics. our study reveals that the changes around 2008 indicate stochasticity driven transition, which is different from the transition during the pandemic.
arxiv:2208.03456
the symmetry. for example, r_1 sends a point to its rotation 90° clockwise around the square's center, and f_h sends a point to its reflection across the square's vertical middle line. composing two of these symmetries gives another symmetry. these symmetries determine a group called the dihedral group of degree four, denoted d_4. the underlying set of the group is the above set of symmetries, and the group operation is function composition. two symmetries are combined by composing them as functions, that is, applying the first one to the square, and the second one to the result of the first application. the result of performing first a and then b is written symbolically from right to left as b ∘ a ( " apply the symmetry b after performing the symmetry a " ). this is the usual notation for composition of functions. a cayley table lists the results of all such compositions possible. for example, rotating by 270° clockwise ( r_3 ) and then reflecting horizontally ( f_h ) is the same as performing a reflection along the diagonal ( f_d ). using the above symbols, highlighted in blue in the cayley table : f_h ∘ r_3 = f_d. given this set of symmetries and the described operation, the group axioms can be understood as follows. binary operation : composition is a binary operation. that is, a ∘ b is a symmetry for any two symmetries a and b. for example, r_3 ∘ f_h = f_c.
https://en.wikipedia.org/wiki/Group_(mathematics)
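the two compositions quoted from the cayley table can be checked mechanically. a minimal python sketch, representing each symmetry by its action on points; the symmetry names follow the text, while the coordinate representation is our own choice:

```python
# d4, the symmetries of a square centered at the origin, as maps on points.

def r1(p):                    # rotate 90 degrees clockwise about the center
    x, y = p
    return (y, -x)

def compose(f, g):            # (f o g)(p) = f(g(p)): apply g first, then f
    return lambda p: f(g(p))

r2 = compose(r1, r1)          # rotate 180 degrees
r3 = compose(r2, r1)          # rotate 270 degrees clockwise
fh = lambda p: (-p[0], p[1])  # reflect across the vertical middle line
fd = lambda p: (p[1], p[0])   # reflect across the main diagonal
fc = lambda p: (-p[1], -p[0]) # reflect across the counter-diagonal

# check the compositions highlighted in the cayley-table discussion
corners = [(1, 1), (1, -1), (-1, -1), (-1, 1)]
assert all(compose(fh, r3)(p) == fd(p) for p in corners)  # fh o r3 = fd
assert all(compose(r3, fh)(p) == fc(p) for p in corners)  # r3 o fh = fc
```

note that the two asserts differ only in the order of composition, which is exactly the non-commutativity the cayley table records.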
in this paper, we consider the cauchy problem for the degenerate parabolic equations on the heisenberg groups with power law non - linearities. we obtain fujita - type critical exponents, which depend on the homogeneous dimension of the heisenberg groups. the analysis includes the case of porous medium equations. our proof approach is based on methods of nonlinear capacity estimates specifically adapted to the nature of the heisenberg groups. we also use the kaplan eigenfunctions method in combination with the hopf - type lemma on the heisenberg groups.
arxiv:2303.06509
patient triage plays a crucial role in healthcare, ensuring timely and appropriate care based on the urgency of patient conditions. traditional triage methods heavily rely on human judgment, which can be subjective and prone to errors. recently, there has been growing interest in leveraging artificial intelligence ( ai ) to develop algorithms for triaging patients. this paper presents the development of a novel algorithm for triaging patients. it is based on the analysis of patient data to produce decisions regarding their prioritization. the algorithm was trained on a comprehensive data set containing relevant patient information, such as vital signs, symptoms, and medical history. the algorithm was designed to accurately classify patients into triage categories through rigorous preprocessing and feature engineering. experimental results demonstrate that our algorithm achieved high accuracy and performance, outperforming traditional triage methods. by incorporating computer science into the triage process, healthcare professionals can benefit from improved efficiency, accuracy, and consistency, prioritizing patients effectively and optimizing resource allocation. although further research is needed to address challenges such as biases in training data and model interpretability, the development of ai - based algorithms for triaging patients shows great promise in enhancing healthcare delivery and patient outcomes.
arxiv:2310.05996
hom - lie algebras defined on central extensions of a given quadratic lie algebra that in turn admit an invariant metric, are studied. it is shown how some of these algebras are naturally equipped with other symmetric, bilinear forms that satisfy an invariant condition for their twisted multiplication maps. the twisted invariant bilinear forms so obtained resemble the cartan - killing forms defined on ordinary lie algebras. this fact allows one to reproduce on the hom - lie algebras hereby studied, some results that are classically associated to the ordinary cartan - killing form.
arxiv:2010.06057
the shock reflection problem is one of the most important problems in mathematical fluid dynamics, since this problem not only arises in many important physical situations but also is fundamental for the theory of multidimensional conservation laws. however, most of the fundamental issues for shock reflection have not been understood. therefore, it is important to establish the regularity of solutions to shock reflection in order to understand fully the phenomena of shock reflection. on the other hand, for a regular reflection configuration, the potential flow governs the exact behavior of the solution in $ c ^ { 1, 1 } $ across the pseudo - sonic circle even starting from the full euler flow, that is, both of the nonlinear systems are actually the same in a physically significant region near the pseudo - sonic circle ; thus, it becomes essential to understand the optimal regularity of solutions for the potential flow across the pseudo - sonic circle and at the point where the pseudo - sonic circle meets the reflected shock. in this paper, we study the regularity of solutions to regular shock reflection for potential flow. in particular, we prove that the $ c ^ { 1, 1 } $ - regularity is optimal for the solution across the pseudo - sonic circle and at the point where the pseudo - sonic circle meets the reflected shock. we also obtain the $ c ^ { 2, \ alpha } $ regularity of the solution up to the pseudo - sonic circle in the pseudo - subsonic region. the problem involves two types of transonic flow : one is a continuous transition through the pseudo - sonic circle from the pseudo - supersonic region to the pseudo - subsonic region ; the other is a jump transition through the transonic shock as a free boundary from another pseudo - supersonic region to the pseudo - subsonic region.
arxiv:0804.2500
the laser interferometer space antenna ( lisa ) will observe black hole binaries of stellar origin during their gravitational wave inspiral, months to years before coalescence. due to the long duration of the signal in the lisa band, a faithful waveform is necessary in order to keep track of the binary phase. this is crucial to extract the signal from the data and to perform an unbiased estimation of the source parameters. we consider post - newtonian ( pn ) waveforms, and analyze the pn order needed to keep the bias caused by the pn approximation negligible relative to the statistical parameter estimation error, as a function of the source parameters. by considering realistic population models, we conclude that for $ \ sim 90 \ % $ of the stellar black hole binaries detectable by lisa, waveforms at low pn order ( pn $ \ le 2 $ ) are sufficiently accurate for an unbiased recovery of the source parameters. our results provide a first estimate of the trade - off between waveform accuracy and information recovery for this class of lisa sources.
arxiv:1811.01805
we present the learned ranking function ( lrf ), a system that takes short - term user - item behavior predictions as input and outputs a slate of recommendations that directly optimizes for long - term user satisfaction. most previous work is based on optimizing the hyperparameters of a heuristic function. we propose to model the problem directly as a slate optimization problem with the objective of maximizing long - term user satisfaction. we also develop a novel constraint optimization algorithm that stabilizes objective trade - offs for multi - objective optimization. we evaluate our approach with live experiments and describe its deployment on youtube.
arxiv:2408.06512
the " time - and - band limiting " commutative property was found and exploited by d. slepian, h. landau and h. pollak at bell labs in the 1960 ' s, and independently by m. mehta and later by c. tracy and h. widom in random matrix theory. the property in question is the existence of local operators with simple spectrum that commute with naturally appearing global ones. here we give a general result that ensures the existence of a commuting differential operator for a given family of exceptional orthogonal polynomials satisfying the " bispectral property ". as a main tool we go beyond bispectrality and make use of the notion of fourier algebras associated to the given sequence of exceptional polynomials. we illustrate this result with two examples, of hermite and laguerre type, also exhibiting a nice perline ' s form for the commuting differential operator.
arxiv:2406.12854
we propose the use of bright matter - wave solitons formed from bose - einstein condensates with attractive interactions to probe and study quantum reflection from a solid surface at normal incidence. we demonstrate that the presence of attractive interatomic interactions leads to a number of advantages for the study of quantum reflection. the absence of dispersion as the soliton propagates allows precise control of the velocity normal to the surface and for much lower velocities to be achieved. numerical modelling shows that the robust, self - trapped nature of bright solitons leads to a clean reflection from the surface, limiting the disruption of the density profile and permitting accurate measurements of the reflection probability.
arxiv:0802.4362
universal thermodynamics for the frw model of the universe bounded by the apparent / event horizon has been considered for massive gravity theory. assuming hawking temperature and using the unified first law of thermodynamics on the horizon, modified entropy on the horizon has been determined. for a simple perfect fluid with constant equation of state, the generalized second law of thermodynamics and thermodynamical equilibrium have been examined on both the horizons.
arxiv:1702.05372
although inner star - forming rings are common in optical images of barred spiral galaxies, observational evidence for the accompanying molecular gas has been scarce. in this paper we present images of molecular inner rings, traced using the co ( 1 - 0 ) emission line, from the berkeley - illinois - maryland - association survey of nearby galaxies ( bima song ). we detect inner ring co emission from all five song barred galaxies classified as inner ring ( type ( r ) ). we also examine the seven song barred galaxies classified as inner spiral ( type ( s ) ) ; in one of these, ngc 3627, we find morphological and kinematic evidence for a molecular inner ring. inner ring galaxies have been classified as such based on optical images, which emphasize recent star formation. we consider the possibility that there may exist inner rings in which star formation efficiency is not enhanced. however, we find that in ngc 3627 the inner ring star formation efficiency is enhanced relative to most other regions in that galaxy. we note that the song ( r ) galaxies have a paucity of co and h alpha emission interior to the inner ring ( except near the nucleus ), while ngc 3627 has relatively bright bar co and h alpha emission ; we suggest that galaxies with inner rings such as ngc 3627 may be misclassified if there are significant amounts of gas and star formation in the bar.
arxiv:astro-ph/0204435
humans can quickly associate stimuli to solve problems in novel contexts. our novel neural network model learns state representations of facts that can be composed to perform such associative inference. to this end, we augment the lstm model with an associative memory, dubbed fast weight memory ( fwm ). through differentiable operations at every step of a given input sequence, the lstm updates and maintains compositional associations stored in the rapidly changing fwm weights. our model is trained end - to - end by gradient descent and yields excellent performance on compositional language reasoning problems, meta - reinforcement - learning for pomdps, and small - scale word - level language modelling.
arxiv:2011.07831
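the fast-weight idea behind an associative memory like the fwm described above can be sketched in a few lines: associations are written into a rapidly changing weight matrix as outer products and read back by matrix-vector multiplication. the decay rate `lam` and write strength `eta` below are our own illustrative parameters, not the paper's exact update rule:

```python
import numpy as np

def write(W, key, value, lam=0.95, eta=1.0):
    # hebbian-style fast-weight update: decay old associations, add new one
    return lam * W + eta * np.outer(value, key)

def read(W, key):
    # associative retrieval by matrix-vector product
    return W @ key

d = 8
rng = np.random.default_rng(0)
key = rng.normal(size=d)
key /= np.linalg.norm(key)          # unit-norm key
value = rng.normal(size=d)

W = np.zeros((d, d))
W = write(W, key, value, lam=0.0)   # store one association into empty memory
assert np.allclose(read(W, key), value)   # retrieval with the same key
```

in the actual model, keys, values, and update gates would be produced by the lstm at every step, so the memory changes much faster than the slow network weights trained by gradient descent.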
this talk outlines some recent theoretical developments in lorentz and cpt violation.
arxiv:hep-ph/0104227
large language models benefit from training with a large amount of unlabeled text, which gives them increasingly fluent and diverse generation capabilities. however, using these models for text generation that takes into account target attributes, such as sentiment polarity or specific topics, remains a challenge. we propose a simple and flexible method for controlling text generation by aligning disentangled attribute representations. in contrast to recent efforts on training a discriminator to perturb the token level distribution for an attribute, we use the same data to learn an alignment function to guide the pre - trained, non - controlled language model to generate texts with the target attribute without changing the original language model parameters. we evaluate our method on sentiment - and topic - controlled generation, and show large performance gains over previous methods while retaining fluency and diversity.
arxiv:2103.11070
revised analysis of sigma beam asymmetry for eta photoproduction off the free proton from graal is presented. the new analysis reveals a narrow structure near w ~ 1. 685 gev. we describe this structure by the contribution of a narrow resonance with quantum numbers p _ { 11 }, or p _ { 13 }, or d _ { 13 }. being considered together with the recent observations of a bump - like structure at w ~ 1. 68 gev in the quasi - free eta photoproduction off the neutron, this result provides evidence for a narrow ( gamma < 25 mev ) n * ( 1685 ) resonance. properties of this possible new nucleon state, namely the mass, the narrow width, and the much stronger photocoupling to the neutron, are similar to those predicted for the non - strange member of the anti - decuplet of exotic baryons.
arxiv:0807.2316
we find aspects of electrically confining large $ n $ yang - mills theories on $ t ^ 2 \ times r ^ { d - 2 } $ which are consistent with a $ gl ( 2, z ) $ duality. the modular parameter associated with this $ gl ( 2, z ) $ is given by $ { m \ over n } + i \ lambda ^ 2 a $, where $ a $ is the area of the torus, $ m $ is the ' t hooft twist on the torus, and $ \ lambda ^ 2 $ is the string tension. $ n $ is taken to infinity keeping $ m \ over n $ and $ g ^ 2n $ fixed. this duality may be interpreted as t - duality of the qcd string if one identifies the magnetic flux with a two - form background in the string theory. our arguments make no use of supersymmetry. while we are not able to show that this is an exact self duality of conventional qcd, we conjecture that it may be applicable within the universality class of qcd. we discuss the status of the conjecture for the soluble case of pure two dimensional euclidean qcd on $ t ^ 2 $, which is almost but not exactly self dual. for higher dimensional theories, we discuss qualitative features consistent with duality. for $ m = 0 $, such a duality would lead to an equivalence between pure qcd on $ r ^ 4 $ and qcd on $ r ^ 2 $ with two adjoint scalars. when $ \ lambda ^ 2 a < < m ^ 2 / n ^ 2 $, the proposed duality includes exchanges of rank with twist. this exchange bears some resemblance to, but is not equivalent to, nahm duality. a proposal for an explicit perturbative map which implements duality in this limit is discussed.
arxiv:hep-th/9804057
circular rydberg states with $ n = 70 $ have been prepared in helium using a modified version of the crossed - fields method. this approach to the preparation of high - $ n $ circular rydberg states overcomes limitations of the standard crossed - fields method which arise at this, and higher, values of $ n $. the experiments were performed with atoms traveling in pulsed supersonic beams that were initially laser photoexcited from the metastable 1s2s $ \, ^ 3 $ s $ _ 1 $ level to the 1s73s $ \, ^ 3 $ s $ _ 1 $ level by resonance - enhanced two - color two - photon excitation in a magnetic field of 16. 154 g. these excited atoms were then polarized using a perpendicular electric field of 0. 844 ~ v / cm, and transferred by a pulse of microwave radiation to the state that, when adiabatically depolarized, evolves into the $ n = 70 $ circular state in zero electric field. the excited atoms were detected by state - selective electric field ionization. each step of the circular state preparation process was validated by comparison with the calculated atomic energy level structure in the perpendicular electric and magnetic fields used. of the atoms initially excited to the 1s73s $ \, ^ 3 $ s $ _ 1 $ level, $ \ sim80 $ \ % were transferred to the $ n = 70 $ circular state. at these high values of $ n $, $ \ delta n = 1 $ circular - to - circular rydberg state transitions occur at frequencies below 20 ghz. consequently, atoms in these states, and the circular state preparation process presented here, are well suited to hybrid cavity qed experiments with rydberg atoms and superconducting microwave circuits.
arxiv:1810.10851
existing research highlights the myriad benefits realized when technology is sufficiently democratized and made accessible to non - technical or novice users. however, democratizing complex technologies such as artificial intelligence ( ai ) remains hard. in this work, we draw on theoretical underpinnings from the democratization of innovation, in exploring the design of maker kits that help introduce novice users to complex technologies. we report on our work designing tjbot : an open source cardboard robot that can be programmed using pre - built ai services. we highlight principles we adopted in this process ( approachable design, simplicity, extensibility and accessibility ), insights we learned from showing the kit at workshops ( 66 participants ) and how users interacted with the project on github over a 12 - month period ( nov 2016 - nov 2017 ). we find that the project succeeds in attracting novice users ( 40 % of users who forked the project are new to github ) and a variety of demographics are interested in prototyping use cases such as home automation, task delegation, teaching and learning.
arxiv:1805.10723
in this paper we investigate partial spreads of $ h ( 2n - 1, q ^ 2 ) $ through the related notion of partial spread sets of hermitian matrices, and the more general notion of constant rank - distance sets. we prove a tight upper bound on the maximum size of a linear constant rank - distance set of hermitian matrices over finite fields, and as a consequence prove the maximality of extensions of symplectic semifield spreads as partial spreads of $ h ( 2n - 1, q ^ 2 ) $. we prove upper bounds for constant rank - distance sets for even rank, construct large examples of these, and construct maximal partial spreads of $ h ( 3, q ^ 2 ) $ for a range of sizes.
arxiv:1208.1903
nonlinear optical responses provide a powerful way to understand the microscopic interactions between laser fields and matter. they are critical for many applications, such as in lasers, integrated photonic circuits, biosensing and medical tools. however, most materials exhibit weak optical nonlinearities or long response times when they interact with intense optical fields. here, we strongly couple the exciton of organic molecules to an optical mode of a fabry - perot cavity, and achieve an enhancement of the nonlinear complex refractive index by two orders of magnitude compared with that of the uncoupled condition. moreover, the coupled system shows an ultrafast response of ~ 120 fs that we extract from optical cross - correlation measurements. the ultrafast and large enhancement of the nonlinear coefficients in this work paves the way for exploring strong coupling effects on various third - order nonlinear optical phenomena and for technological applications.
arxiv:2005.13325
the low - frequency magneto - optical properties of bilayer bernal graphene are studied by the tight - binding model with the four most important interlayer interactions taken into account. since the main features of the wave functions are well depicted, the landau levels can be divided into two groups based on the characteristics of the wave functions. these landau levels lead to four categories of absorption peaks in the optical absorption spectra. such absorption peaks obey complex optical selection rules, and these rules can be reasonably explained by the characteristics of the wave functions. in addition, twin - peak structures, regular frequency - dependent absorption rates and complex field - dependent frequencies are also obtained in this work. the main features of the absorption peaks are very different from those in monolayer graphene and have their origin in the interlayer interactions.
arxiv:1004.2800
in this paper, we continue with the results in \ cite { pg } and compute the group of quasi - isometries for a subclass of split solvable unimodular lie groups. consequently, we show that any finitely generated group quasi - isometric to a member of the subclass has to be polycyclic, and is virtually a lattice in an abelian - by - abelian solvable lie group. we also give an example of a unimodular solvable lie group that is not quasi - isometric to any finitely generated group, as well as deduce some quasi - isometric rigidity results.
arxiv:1002.4451
individual investors are now massively using online brokers to trade stocks with convenient interfaces and low fees, albeit losing the advice and personalization traditionally provided by full - service brokers. we frame the problem faced by online brokers of replicating this level of service in a low - cost and automated manner for a very large number of users. because of the care required in recommending financial products, we focus on a risk - management approach tailored to each user ' s portfolio and risk profile. we show that our hybrid approach, based on modern portfolio theory and collaborative filtering, provides a sound and effective solution. the method is applicable to stocks as well as other financial assets, and can be easily combined with various financial forecasting models. we validate our proposal by comparing it with several baselines in a domain expert - based study.
arxiv:2103.07768
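the modern-portfolio-theory ingredient mentioned above can be sketched concretely: unconstrained mean-variance weights are proportional to the inverse covariance matrix applied to expected returns, normalized to sum to one. the returns data below are synthetic, and this is only the mpt half, not the paper's hybrid combination with collaborative filtering:

```python
import numpy as np

def mean_variance_weights(returns):
    # returns: (n_days, n_assets) matrix of asset returns
    mu = returns.mean(axis=0)              # expected return per asset
    sigma = np.cov(returns, rowvar=False)  # covariance of asset returns
    raw = np.linalg.solve(sigma, mu)       # sigma^{-1} mu (unconstrained optimum)
    return raw / raw.sum()                 # normalize weights to sum to one

rng = np.random.default_rng(1)
returns = rng.normal(0.001, 0.02, size=(250, 4))  # 250 days, 4 assets (toy)
w = mean_variance_weights(returns)
assert np.isclose(w.sum(), 1.0)
```

a per-user system would additionally shrink `mu` and `sigma` toward estimates pooled from similar users, which is where the collaborative-filtering component would enter.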
we study spaces obtained from a complete finite volume complex hyperbolic n - manifold m by removing a compact totally geodesic complex ( n - 1 ) - submanifold. the main result is that the fundamental group of m - s is relatively hyperbolic, relative to fundamental groups of the ends of m - s, and m - s admits a complete finite volume a - regular riemannian metric of negative sectional curvature. it follows that for n > 1 the fundamental group of m - s satisfies mostow - type rigidity, has finite asymptotic dimension and rapid decay property, satisfies borel and baum - connes conjectures, is co - hopf and residually hyperbolic, has no nontrivial subgroups with property ( t ), and has finite outer automorphism group. furthermore, if m is compact, then the fundamental group of m - s is biautomatic and satisfies strong tits alternative.
arxiv:0711.5001
the february 2021 texas winter power outage has led to hundreds of deaths and billions of dollars in economic losses, largely due to the generation failure and record - breaking electric demand. in this paper, we study the scaling - up of demand flexibility as a means to avoid load shedding during such an extreme weather event. the three mechanisms considered are interruptible load, residential load rationing, and incentive - based demand response. by simulating on a synthetic but realistic large - scale texas grid model along with demand flexibility modeling and electricity outage data, we identify portfolios of mixing mechanisms that exactly avoid outages, which a single mechanism may fail to achieve due to decaying marginal effects. we also reveal a complementary relationship between interruptible load and residential load rationing and find nonlinear impacts of incentive - based demand response on the efficacy of other mechanisms.
arxiv:2206.00184
we present x - ray observations of the recent outburst of 2022 from the neutron star low mass x - ray binary ( lmxb ) source 1a 1744 - 361. spectral properties of the source have been analyzed using joint nustar and nicer observations. during our observations, the source happens to be in the banana state ( soft state ) of the hardness intensity diagram ( hid ). in addition to a power - law with a high energy cutoff, the spectrum is found to exhibit broad iron $ k _ { \ alpha } $ emission along with distinct absorption features. a prominent absorption feature observed at 6. 92 kev may be interpreted as $ k _ { \ alpha } $ absorption line from hydrogen - like iron. the absorption feature observed at 7. 98 kev may be interpreted as a blend of fe xxv and ni xxvii transitions. we have summarized the evidence of variability of the spectral features observed in the x - ray continuum by time - resolved spectroscopy.
arxiv:2309.11817
the general aim of the recommender system is to provide personalized suggestions to users, which is opposed to suggesting popular items. however, the normal training paradigm, i. e., fitting a recommender model to recover the user behavior data with pointwise or pairwise loss, makes the model biased towards popular items. this results in the terrible matthew effect, making popular items be more frequently recommended and become even more popular. existing work addresses this issue with inverse propensity weighting ( ipw ), which decreases the impact of popular items on the training and increases the impact of long - tail items. although theoretically sound, ipw methods are highly sensitive to the weighting strategy, which is notoriously difficult to tune. in this work, we explore the popularity bias issue from a novel and fundamental perspective - - cause - effect. we identify that popularity bias lies in the direct effect from the item node to the ranking score, such that an item ' s intrinsic property is the cause of mistakenly assigning it a higher ranking score. to eliminate popularity bias, it is essential to answer the counterfactual question of what the ranking score would be if the model only uses item property. to this end, we formulate a causal graph to describe the important cause - effect relations in the recommendation process. during training, we perform multi - task learning to estimate the contribution of each cause ; during testing, we perform counterfactual inference to remove the effect of item popularity. remarkably, our solution amends the learning process of recommendation which is agnostic to a wide range of models - - it can be easily implemented in existing methods. we demonstrate it on matrix factorization ( mf ) and lightgcn [ 20 ]. experiments on five real - world datasets demonstrate the effectiveness of our method.
arxiv:2010.15363
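the ipw baseline discussed above can be sketched in a few lines: each training interaction is re-weighted by an inverse power of its item's popularity, so popular items contribute less to the loss. the exponent `gamma` below stands for the weighting strategy that the abstract notes is hard to tune; the counts and losses are toy values:

```python
import numpy as np

def ipw_weights(item_counts, gamma=0.5):
    # treat empirical popularity as the propensity of observing an item,
    # and weight examples by its inverse power
    propensity = item_counts / item_counts.sum()
    return propensity ** (-gamma)

counts = np.array([1000.0, 100.0, 10.0])   # popular -> long-tail items
w = ipw_weights(counts)
assert w[0] < w[1] < w[2]                  # long-tail items get larger weights

# weighted pointwise loss over per-example losses and their item indices
losses = np.array([0.2, 0.5, 0.9])
items = np.array([0, 1, 2])
weighted_loss = np.mean(w[items] * losses)
```

the sensitivity the abstract points out is visible here: changing `gamma` rescales the whole trade-off between head and tail items, with no principled way to pick it from the data alone.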
organ segmentation of medical images is a key step in virtual imaging trials. however, organ segmentation datasets are limited in terms of quality ( because labels cover only a few organs ) and quantity ( since case numbers are limited ). in this study, we explored the tradeoffs between quality and quantity. our goal is to create a unified approach for multi - organ segmentation of body ct, which will facilitate the creation of large numbers of accurate virtual phantoms. initially, we compared two segmentation architectures, 3d - unet and densevnet, which were trained using xcat data that is fully labeled with 22 organs, and chose the 3d - unet as the better performing model. we used the xcat - trained model to generate pseudo - labels for the ct - org dataset that has only 7 organs segmented. we performed two experiments : first, we trained 3d - unet model on the xcat dataset, representing quality data, and tested it on both xcat and ct - org datasets. second, we trained 3d - unet after including the ct - org dataset into the training set to have more quantity. performance improved for segmentation in the organs where we have true labels in both datasets and degraded when relying on pseudo - labels. when organs were labeled in both datasets, exp - 2 improved average dsc in xcat and ct - org by 1. this demonstrates that quality data is the key to improving the model ' s performance.
arxiv:2203.01934
this is a comment on a recently published paper in langmuir : mandal, n. s. ; sen, a. relative diffusivities of bound and unbound protein can control chemotactic directionality. langmuir 2021, pmid : 34647749 [ arxiv : 2103. 13469 ]. in this study, mandal and sen claim to propose a new kinetic model to analyze the directional movement of enzyme molecules in response to a gradient of their substrate, with the supposedly new prediction that net movement occurs up the substrate gradient when the diffusivity of the substrate - bound enzyme is lower than that of the unbound enzyme, and movement down the substrate gradient when the diffusivity of the substrate - bound enzyme is higher than that of the unbound enzyme. they develop this theoretical scheme to present an alternative to our previously published theoretical framework [ agudo - canalejo, j. ; illien, p. ; golestanian, r. phoresis and enhanced diffusion compete in enzyme chemotaxis. nano lett. 2018, 18, 2711 - 2717 ; arxiv : 2104. 02394 ]. we demonstrate that despite their claim, what they present is exactly the same theory as our work, and therefore, we conclude that their claim of novelty is unsubstantiated.
arxiv:2110.12797
understanding the interplay of different traits in a co - infection system with multiple strains has many applications in ecology and epidemiology. because of high dimensionality and complex feedbacks between traits manifested in infection and co - infection, the study of such systems remains a challenge. in the case where strains are similar ( quasi - neutrality assumption ), we can model trait variation as perturbations in parameters, which simplifies analysis. here, we apply singular perturbation theory to many strain parameters simultaneously, and advance analytically to obtain their explicit collective dynamics. we consider and study such a quasi - neutral model of susceptible - infected - susceptible ( sis ) dynamics among n strains which vary in 5 fitness dimensions : transmissibility, clearance rate of single - and co - infection, transmission probability from mixed coinfection, and co - colonization vulnerability factors encompassing cooperation and competition. this quasi - neutral system is analyzed with a singular perturbation method through an appropriate slow - fast decomposition. the fast dynamics correspond to the embedded neutral system, while the slow dynamics are governed by an n - dimensional replicator equation, describing the time evolution of strain frequencies. the coefficients of this replicator system are pairwise invasion fitnesses between strains, which, in our model, are an explicit weighted sum of pairwise asymmetries along all trait dimensions. remarkably these weights depend only on the parameters of the neutral system. such model reduction highlights the centrality of the neutral system for dynamics at the edge of neutrality, and exposes critical features for maintenance of diversity.
arxiv:2104.07289
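The slow dynamics described in the abstract above reduce to an N-dimensional replicator equation whose payoff entries are pairwise invasion fitnesses between strains. A minimal numerical sketch of such a system follows; the 3-strain invasion-fitness matrix is an illustrative placeholder, not taken from the paper.

```python
import numpy as np

def replicator_step(z, Lambda, dt):
    """One Euler step of the replicator equation
        dz_i/dt = z_i * ((Lambda z)_i - z^T Lambda z),
    where Lambda[i, j] plays the role of the pairwise invasion
    fitness of strain i against strain j."""
    fitness = Lambda @ z            # per-strain invasion fitness
    mean_fitness = z @ fitness      # population mean fitness
    z = z + dt * z * (fitness - mean_fitness)
    return z / z.sum()              # keep frequencies on the simplex

# Hypothetical antisymmetric invasion-fitness matrix for 3 strains
Lambda = np.array([[ 0.0,  0.2, -0.1],
                   [-0.2,  0.0,  0.3],
                   [ 0.1, -0.3,  0.0]])

z = np.array([1/3, 1/3, 1/3])       # initial strain frequencies
for _ in range(2000):
    z = replicator_step(z, Lambda, dt=0.01)
```

With an antisymmetric matrix like this one, the frequencies can cycle rather than converge, which is one of the behaviors such replicator reductions are used to expose.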
Context. The orientation of the spin axis of a comet is defined by the values of its equatorial obliquity and its cometocentric longitude of the Sun. These parameters can be computed from the components of the nongravitational force caused by outgassing, if the cometary activity is well characterized. The trajectories of known interstellar bodies passing through the solar system show nongravitational accelerations. Aims. The spin-axis orientation of 1I/2017 U1 ('Oumuamua) remains to be determined; for 2I/Borisov, the already released results are mutually exclusive. Here, we investigate the orientation of the spin axes of 'Oumuamua and 2I/Borisov using public orbit determinations that consider outgassing. Methods. We applied a Monte Carlo simulation using the covariance matrix method together with Monte Carlo random search techniques to compute the distributions of equatorial obliquities and cometocentric longitudes of the Sun at perihelion for 'Oumuamua and 2I/Borisov from the values of the nongravitational parameters. Results. We find that the equatorial obliquity of 'Oumuamua could be about 93 deg if it has a very prolate (fusiform) shape, or close to 16 deg if it is very oblate (disk-like). Different orbit determinations of 2I/Borisov gave obliquity values of 59 deg and 90 deg. The distributions of cometocentric longitudes were in general multimodal. Conclusions. The most probable spin-axis direction of 'Oumuamua in equatorial coordinates is (280 deg, +46 deg) if very prolate or (312 deg, -50 deg) if very oblate. For the orbit determinations of 2I/Borisov used here, we find the most probable poles pointing near (275 deg, +65 deg) and (231 deg, +30 deg), respectively. Although our analysis favors an oblate shape for 2I/Borisov, a prolate one cannot be ruled out.
arxiv:2009.08423
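The covariance matrix method mentioned in the Methods section above amounts to drawing Monte Carlo clones of the fitted parameters from a multivariate normal distribution and propagating each clone to the derived quantity. A minimal sketch follows; the nominal nongravitational parameters, their covariance matrix, and the angle computed from them are all hypothetical stand-ins for the paper's model-specific obliquity/longitude calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nominal nongravitational parameters (A1, A2, A3)
# and a hypothetical covariance matrix from an orbit determination.
mean = np.array([2.5e-7, -1.0e-8, 5.0e-9])
cov = np.diag([1e-16, 4e-18, 1e-18])

# Draw Monte Carlo clones consistent with the covariance matrix.
samples = rng.multivariate_normal(mean, cov, size=10000)

# Map each clone to a derived angle; here simply the angle implied
# by the A3 and A2 components, as a stand-in for the actual
# spin-axis computation.
longitudes = np.degrees(np.arctan2(samples[:, 2], samples[:, 1])) % 360.0

# Summarize the resulting distribution, e.g. a 68% interval.
lo, hi = np.percentile(longitudes, [16, 84])
```

Multimodal distributions, as reported in the abstract, would show up at this last step as well-separated clusters in the sampled angles rather than a single peak.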
We discuss static particle-like solitons in the 2+1 dimensional CP(1) model with a small mass deformation $m$ preserving a $U(1)\times Z_2$ symmetry in the Lagrangian. Due to the breaking of scale invariance, the energy function becomes a strictly increasing function of the soliton size $\rho$, and therefore no classical finite-size solution exists in this model. To remedy this, we employ a well-known technique of introducing a fourth-order derivative term in the Lagrangian to force the soliton action to diverge at small values of $\rho$. With this additional term, the action exhibits a stable minimum at a fixed size $\rho$.
arxiv:1303.0856
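The stabilization mechanism in the abstract above can be summarized by a schematic Derrick-type scaling argument; the coefficients $c_1$ and $c_2$ are illustrative placeholders, not values from the paper. The mass deformation contributes an energy that grows with the soliton size $\rho$, while the fourth-order derivative term diverges as $\rho \to 0$, so their sum has a stable minimum:

```latex
E(\rho) \;\simeq\; E_0 \;+\; c_1\, m^2 \rho^2 \;+\; \frac{c_2}{\rho^2},
\qquad
\frac{dE}{d\rho} = 2 c_1 m^2 \rho - \frac{2 c_2}{\rho^3} = 0
\;\Longrightarrow\;
\rho_\ast = \left(\frac{c_2}{c_1\, m^2}\right)^{1/4}.
```

Without the $c_2/\rho^2$ term, $E(\rho)$ is strictly increasing and the soliton shrinks to zero size, which is exactly the obstruction the abstract describes.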
Self-attention networks have proven to be of profound value for their strength in capturing global dependencies. In this work, we propose to model localness for self-attention networks, which enhances their ability to capture useful local context. We cast localness modeling as a learnable Gaussian bias, which indicates the center and scope of the local region to receive more attention. The bias is then incorporated into the original attention distribution to form a revised distribution. To maintain the strength of capturing long-distance dependencies while enhancing the ability to capture short-range dependencies, we apply localness modeling only to the lower layers of self-attention networks. Quantitative and qualitative analyses on Chinese-English and English-German translation tasks demonstrate the effectiveness and universality of the proposed approach.
arxiv:1810.10182
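The Gaussian bias described in the abstract above can be sketched as an additive term on the attention logits before the softmax. In the paper the center and scope are learned; in this minimal illustration they are fixed by hand, and the function names and shapes are assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def local_attention(scores, centers, sigma):
    """Revise attention with a Gaussian locality bias.

    scores  : (L, L) raw query-key logits
    centers : (L,) predicted center position for each query
              (learnable in the paper; fixed here for illustration)
    sigma   : scope (std-dev) of the local window
    """
    positions = np.arange(scores.shape[1])
    # Gaussian bias -(j - center_i)^2 / (2 sigma^2), added to logits
    # so nearby positions receive more attention mass.
    bias = -((positions[None, :] - centers[:, None]) ** 2) / (2 * sigma**2)
    return softmax(scores + bias, axis=-1)

L = 6
scores = np.zeros((L, L))               # uniform logits for clarity
centers = np.arange(L, dtype=float)     # each query centered on itself
attn = local_attention(scores, centers, sigma=1.0)
```

With uniform logits, the revised distribution for each query peaks at its own position, which is the "local context" effect the bias is meant to produce; with real logits the bias merely reshapes, rather than replaces, the learned attention.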
Refinement types are types equipped with predicates that specify preconditions and postconditions of underlying functional languages. We propose a general semantic construction of dependent refinement type systems from underlying type systems and predicate logic, that is, a construction of liftings of closed comprehension categories from given (underlying) closed comprehension categories and posetal fibrations for predicate logic. We give sufficient conditions to lift structures such as dependent products, dependent sums, computational effects, and recursion from the underlying type systems to refinement type systems. We demonstrate the usage of our construction by giving semantics to a refinement type system and proving soundness.
arxiv:2010.08280
Many automatic skin lesion diagnosis systems use segmentation as a preprocessing step to diagnose skin conditions because skin lesion shape, border irregularity, and size can influence the likelihood of malignancy. This paper presents, examines, and compares two different approaches to skin lesion segmentation. The first approach uses U-Nets and introduces a histogram-equalization-based preprocessing step. The second approach is a C-means clustering approach that is much simpler to implement and faster to execute. The Jaccard index between the algorithm output and images hand-segmented by dermatologists is used to evaluate the proposed algorithms. While many recently proposed deep neural networks for segmenting skin lesions require a significant amount of computational power for training (i.e., a computer with GPUs), the main objective of this paper is to present methods that can be used with only a CPU. This severely limits, for example, the number of training instances that can be presented to the U-Net. Comparing the two proposed algorithms, U-Nets achieved a significantly higher Jaccard index than the clustering approach. Moreover, using histogram equalization as a preprocessing step significantly improved the U-Net segmentation results.
arxiv:1710.01248
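The evaluation metric used in the abstract above, the Jaccard index (intersection over union) between a predicted mask and a hand-segmented reference, is straightforward to compute. A minimal sketch on toy binary masks (the masks themselves are made up for illustration):

```python
import numpy as np

def jaccard_index(pred, truth):
    """Jaccard index (IoU) between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:           # both masks empty: treat as perfect overlap
        return 1.0
    inter = np.logical_and(pred, truth).sum()
    return inter / union

# Toy 4x4 example: predicted lesion mask vs. hand-segmented mask.
pred  = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
score = jaccard_index(pred, truth)   # intersection 3, union 4 -> 0.75
```

Because the metric penalizes both missed lesion pixels and spurious ones, it is a common single-number summary for comparing segmentation methods such as the two in this paper.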
We present a description of the function space and the smoothness class associated with a convolutional network using the machinery of reproducing kernel Hilbert spaces. We show that the mapping associated with a convolutional network expands into a sum involving elementary functions akin to spherical harmonics. This functional decomposition can be related to the functional ANOVA decomposition in nonparametric statistics. Building on our functional characterization of convolutional networks, we obtain statistical bounds highlighting an interesting trade-off between the approximation error and the estimation error.
arxiv:2003.12756