text | source
---|---
non-equilibrium processes which convert chemical energy into mechanical motion enable the motility of organisms. bundles of inextensible filaments driven by energy transduction of molecular motors form essential components of micron-scale motility engines like cilia and flagella. the mimicry of cilia-like motion in recent experiments on synthetic active filaments supports the idea that generic physical mechanisms may be sufficient to generate such motion. here we show, theoretically, that the competition between the destabilising effect of hydrodynamic interactions induced by force-free and torque-free chemomechanically active flows, and the stabilising effect of nonlinear elasticity, provides a generic route to spontaneous oscillations in active filaments. these oscillations, reminiscent of prokaryotic and eukaryotic flagellar motion, are obtained without having to invoke structural complexity or biochemical regulation. this minimality implies that biomimetic oscillations, previously observed only in complex bundles of active filaments, can be replicated in simple chains of generic chemomechanically active beads.
|
arxiv:1211.5368
|
static detection technologies based on signature-based approaches are widely used on the android platform to detect malicious applications. they can accurately detect malware by extracting signatures from test data and then comparing the test data with signature samples of known viruses and benign applications. however, this method is generally unable to detect unknown malware. moreover, the machine code can sometimes be converted into assembly code, which can be easily read and understood by humans; the attacker can then make sense of the assembly instructions and understand the functioning of the program. therefore we focus on observing the behaviour of the malicious software while it is actually running on a host system. the dynamic behaviour of an application is ultimately expressed through its system call sequences. hence, we observe the system call log of each application, use these logs to construct our dataset, and finally use this dataset to classify an unknown application as malicious or benign.
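a minimal sketch of the pipeline this abstract describes, assuming space-separated system-call traces and binary labels; the toy traces and model choice below are illustrative assumptions, not the paper's dataset or classifier:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

# toy traces: each string is the space-joined system-call sequence of one run
traces = [
    "open read write close",          # benign-looking
    "open read read write close",     # benign-looking
    "fork ptrace mmap socket send",   # malicious-looking
    "fork ptrace socket send send",   # malicious-looking
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = malicious (assumed labeling)

# n-grams of consecutive system calls as features
vec = CountVectorizer(ngram_range=(1, 3), token_pattern=r"\S+")
X = vec.fit_transform(traces)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# classify an unseen trace
unseen = vec.transform(["fork ptrace mmap send send"])
print(clf.predict(unseen))  # expected: [1]
```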
|
arxiv:1709.08805
|
we solve a variety of sign problems for models in lattice field theory using the hamiltonian formulation, including yukawa models and simple lattice gauge theories. the solutions emerge naturally in continuous time and use the dual representation for the bosonic fields. these solutions allow us to construct quantum monte carlo methods for these problems. the methods could provide an alternative approach to understanding non-perturbative dynamics of some lattice field theories.
|
arxiv:1611.01680
|
a classical theorem of balcar, pelant, and simon says that there is a base matrix of height h, where h is the distributivity number of p(omega)/fin. we show that if the continuum c is regular, then there is a base matrix of height c, and that there are base matrices of any regular uncountable height less than or equal to c in the cohen and random models. this answers questions of fischer, koelbing, and wohofsky.
|
arxiv:2202.00897
|
in deductive verification and software model checking, dealing with certain specification language constructs can be problematic when the back-end solver is not sufficiently powerful or lacks the required theories. one way to deal with this is to transform, for verification purposes, the program to an equivalent one not using the problematic constructs, and to reason about its correctness instead. in this paper, we propose instrumentation as a unifying verification paradigm that subsumes various existing ad-hoc approaches, has a clear formal correctness criterion, can be applied automatically, and can transfer back witnesses and counterexamples. we illustrate our approach on the automated verification of programs that involve quantification and aggregation operations over arrays, such as the maximum value or sum of the elements in a given segment of the array, which are known to be difficult to reason about automatically. we formalise array aggregation operations as monoid homomorphisms. we implement our approach in the monocera tool, which is tailored to the verification of programs with aggregation, and evaluate it on example programs, including sv-comp programs.
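the view of aggregations as monoid homomorphisms can be made concrete with a short sketch; the `Monoid` class and examples below are illustrative assumptions, not monocera's api:

```python
from dataclasses import dataclass
from typing import Callable, TypeVar, List

T = TypeVar("T")

@dataclass
class Monoid:
    unit: T
    op: Callable[[T, T], T]

# aggregation over a segment = fold of the segment under the monoid; the map
# from arrays to aggregate values is then a monoid homomorphism with respect
# to array concatenation.
def aggregate(m: Monoid, xs: List[T]) -> T:
    acc = m.unit
    for x in xs:
        acc = m.op(acc, x)
    return acc

SUM = Monoid(0, lambda a, b: a + b)
MAX = Monoid(float("-inf"), lambda a, b: max(a, b))

a = [3, 1, 4, 1, 5, 9, 2, 6]
# homomorphism property: aggregate(xs + ys) == op(aggregate(xs), aggregate(ys))
assert aggregate(SUM, a) == SUM.op(aggregate(SUM, a[:4]), aggregate(SUM, a[4:]))
print(aggregate(MAX, a[2:6]))  # max of the segment a[2..5] -> 9
```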
|
arxiv:2306.00004
|
within the framework of the boltzmann transport equation with a bhatnagar-gross-krook (bgk) collisional kernel, we study the wake potential induced by fast partons traveling through a high-temperature qcd plasma which is anisotropic in momentum space. we calculate the dielectric response function of a collisional anisotropic quark-gluon plasma (aqgp) in the small-$\xi$ (anisotropy parameter) limit. using this, the wake potential for various combinations of the anisotropy parameter ($\xi$) and the collision rate ($\nu$) is evaluated both for parallel and perpendicular directions of motion of the fast parton. it is seen that the inclusion of collisions modifies the wake potential, and both the magnitude and the nature of the potential depend on the combination of $\xi$ and $\nu$.
|
arxiv:1310.4660
|
the orthogonal generalized autoregressive conditional heteroskedasticity (ogarch) model is widely used in the finance industry to produce volatility and correlation forecasts. we show that the classic ogarch model, nevertheless, tends to be too slow in reflecting sudden changes in market conditions due to excessive persistence of the integral univariate garch processes. to obtain more flexibility to accommodate abrupt market changes, e.g. a financial crisis, we extend the classic ogarch model by incorporating a two-state markov regime-switching garch process. this novel construction allows us to capture recurrent systemic regime shifts. empirical results show that this generalization resolves the problem of excessive persistency effectively and greatly enhances ogarch's ability to adapt to sudden market breaks while preserving ogarch's most attractive features such as dimension reduction and multi-step-ahead forecasting. by constructing a global minimum variance portfolio (gmvp), we are able to demonstrate significant outperformance of the extended model over the classic ogarch model and the commonly used exponentially weighted moving average (ewma) model. in addition, we show that the extended model is superior to ogarch and ewma in terms of predictive accuracy.
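for illustration, a minimal simulation of a two-state markov regime-switching garch(1,1) return process; all parameter values here are made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters: state 0 = calm regime, state 1 = crisis regime
omega = np.array([0.01, 0.20])   # garch constant per regime
alpha = np.array([0.05, 0.15])   # arch coefficient per regime
beta  = np.array([0.90, 0.60])   # garch coefficient per regime
P = np.array([[0.99, 0.01],      # markov transition matrix between regimes
              [0.05, 0.95]])

T, s, h, r = 1000, 0, 0.5, np.zeros(1000)
for t in range(T):
    s = rng.choice(2, p=P[s])                            # regime switch
    h = omega[s] + alpha[s] * r[t - 1] ** 2 + beta[s] * h  # conditional variance
    r[t] = np.sqrt(h) * rng.standard_normal()            # return draw

print(r.std())  # overall volatility; r shows regime-dependent vol clustering
```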
|
arxiv:1909.10108
|
four-dimensional image-type data can quickly become prohibitively large, and it may not be feasible to directly apply methods, such as persistent homology or convolutional neural networks, to determine the topological characteristics of these data because they can encounter complexity issues. this study aims to determine the betti numbers of large four-dimensional image-type data. the experiments use synthetic data, and demonstrate that it is possible to circumvent these issues by applying downscaling methods to the data prior to training a convolutional neural network, even when persistent homology software indicates that downscaling can significantly alter the homology of the training data. when provided with downscaled test data, the neural network can estimate the betti numbers of the original samples with reasonable accuracy.
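a small sketch of the downscaling step, assuming scipy and a toy 4d array; the cnn itself is only indicated in comments:

```python
import numpy as np
from scipy.ndimage import zoom

# toy 4d binary volume; real samples would be far larger
rng = np.random.default_rng(0)
x = (rng.random((32, 32, 32, 32)) > 0.7).astype(np.float32)

# downscale every axis by 4x before training/inference; order=1 = linear interp
x_small = zoom(x, 0.25, order=1)
print(x.shape, "->", x_small.shape)   # (32, 32, 32, 32) -> (8, 8, 8, 8)
# a 4d cnn (or 3d convolutions applied over one axis) would then regress the
# betti numbers b_0..b_3 of the original-resolution sample from x_small
```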
|
arxiv:2306.14442
|
this is a survey of recent works on topological extensions of the tutte polynomial.
|
arxiv:1708.08132
|
given a function from $\mathbb{Z}_n$ to itself, one can determine its polynomial representability by using the kempner function. in this paper we present an alternative characterization of polynomial functions over $\mathbb{Z}_n$ by constructing a generating set for the $\mathbb{Z}_n$-module of polynomial functions. this characterization results in an algorithm that is faster on average in deciding polynomial representability. we also extend the characterization to functions in several variables.
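the kempner-function approach can be illustrated with a brute-force check, feasible only for tiny moduli; it relies on the standard fact that every polynomial function on $\mathbb{Z}_n$ is induced by a polynomial of degree less than $\mu(n)$, the least $k$ with $n \mid k!$ (the function and modulus below are arbitrary examples):

```python
from math import factorial
from itertools import product

def kempner(n: int) -> int:
    """smallest k with n | k! (kempner function)."""
    k = 1
    while factorial(k) % n:
        k += 1
    return k

def is_polynomial_function(f, n: int) -> bool:
    """brute-force check: every polynomial function on Z_n equals one of
    degree < kempner(n), so enumerate those (feasible only for tiny n)."""
    mu = kempner(n)
    target = tuple(f(x) % n for x in range(n))
    for coeffs in product(range(n), repeat=mu):  # c_0 + c_1 x + ... + c_{mu-1} x^{mu-1}
        vals = tuple(sum(c * x**i for i, c in enumerate(coeffs)) % n
                     for x in range(n))
        if vals == target:
            return True
    return False

print(is_polynomial_function(lambda x: x * x + 3, 8))       # True
# the indicator of {0} violates f(0) = f(4) (mod 4), so it is not polynomial:
print(is_polynomial_function(lambda x: 1 if x == 0 else 0, 8))  # False
```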
|
arxiv:1402.5789
|
an extension of the marcinkiewicz interpolation theorem, allowing intermediate spaces of orlicz type, is proved. this generalization yields a necessary and sufficient condition so that every quasilinear operator $T$, which maps the set $S(X,\mu)$ of all $\mu$-measurable simple functions on a $\sigma$-finite measure space $(X,\mu)$ into $M(Y,\nu)$, the class of $\nu$-measurable functions on a $\sigma$-finite measure space $(Y,\nu)$, and satisfies endpoint estimates of the type, for $1 < p < \infty$, $1 \leq r < \infty$,
\begin{equation*}
\lambda\,\nu\left(\left\lbrace y \in Y : |(Tf)(y)| > \lambda \right\rbrace\right)^{\frac{1}{p}} \leq c_{p,r} \left( \int_{\mathbb{R}_+} \mu\left(\left\lbrace x \in X : |f(x)| > t \right\rbrace\right)^{\frac{r}{p}} t^{r-1}\, dt \right)^{\frac{1}{r}},
\end{equation*}
for all $f \in S(X,\mu)$ and $\lambda \in \mathbb{R}_+$, is bounded from an orlicz space into another.
|
arxiv:1711.09278
|
prescriptive process monitoring methods seek to optimize the performance of business processes by triggering interventions at runtime, thereby increasing the probability of positive case outcomes. these interventions are triggered according to an intervention policy. reinforcement learning has been put forward as an approach to learning intervention policies through trial and error. existing approaches in this space assume that the number of resources available to perform interventions in a process is unlimited, an unrealistic assumption in practice. this paper argues that, in the presence of resource constraints, a key dilemma in the field of prescriptive process monitoring is to trigger interventions based not only on predictions of their necessity, timeliness, or effect but also on the uncertainty of these predictions and the level of resource utilization. indeed, committing scarce resources to an intervention when the necessity or effects of this intervention are highly uncertain may intuitively lead to suboptimal intervention effects. accordingly, the paper proposes a reinforcement learning approach for prescriptive process monitoring that leverages conformal prediction techniques to consider the uncertainty of the predictions upon which an intervention decision is based. an evaluation using real-life datasets demonstrates that explicitly modeling uncertainty using conformal predictions helps reinforcement learning agents converge towards policies with higher net intervention gain.
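a simplified sketch of the gating idea, combining a split-conformal error bound on a predicted intervention gain with resource availability; this is a stand-in illustration (synthetic data, made-up threshold rule), not the paper's learned rl policy:

```python
import numpy as np

rng = np.random.default_rng(1)

# calibration set: model-predicted intervention gains vs. realized gains
pred_cal = rng.normal(1.0, 1.0, 500)
true_cal = pred_cal + rng.normal(0.0, 0.5, 500)

alpha = 0.1
scores = np.abs(true_cal - pred_cal)               # nonconformity scores
level = np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores)
q = np.quantile(scores, level)                     # split-conformal margin

def should_intervene(pred_gain: float, free_resources: int) -> bool:
    """intervene only if the conformal lower bound on the gain is positive
    and a resource is free."""
    return free_resources > 0 and (pred_gain - q) > 0

print(should_intervene(pred_gain=2.0, free_resources=3))  # True: clear gain
print(should_intervene(pred_gain=0.3, free_resources=3))  # False: too uncertain
```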
|
arxiv:2307.06564
|
the challenges of learning a robust 6d pose function lie in 1) severe occlusion and 2) systematic noise in depth images. inspired by the success of point-pair features, the goal of this paper is to recover the 6d pose of an object instance segmented from rgb-d images by locally matching pairs of oriented points between the model and camera space. to this end, we propose a novel bi-directional correspondence mapping network (bico-net) to first generate point clouds guided by a typical pose regression, which can thus incorporate pose-sensitive information to optimize the generation of local coordinates and their normal vectors. as pose predictions via geometric computation only rely on one single pair of local oriented points, our bico-net can achieve robustness against sparse and occluded point clouds. an ensemble of redundant pose predictions from local matching and direct pose regression further refines the final pose output against noisy observations. experimental results on three popular benchmark datasets verify that our method achieves state-of-the-art performance, especially for the more challenging severely occluded scenes. source code is available at https://github.com/gorilla-lab-scut/bico-net.
|
arxiv:2205.03536
|
in this paper, we propose a mixed-binary convex quadratic programming reformulation for the box-constrained nonconvex quadratic integer program and then use ibm ilog cplex 12.6 to solve the new model. computational results demonstrate that our approach clearly outperforms very recent state-of-the-art solvers.
|
arxiv:1401.5881
|
spiking neural networks (snns) have received widespread attention as an ultra-low-power computing paradigm. recent studies have shown that snns suffer from severe overfitting, which limits their generalization performance. in this paper, we propose a simple yet effective temporal reversal regularization (trr) to mitigate overfitting during training and facilitate generalization of snns. we exploit the inherent temporal properties of snns to perform input/feature temporal reversal perturbations, prompting the snn to produce original-reversed consistent outputs and learn perturbation-invariant representations. to further enhance generalization, we utilize the lightweight "star operation" (hadamard product) for feature hybridization of original and temporally reversed spike firing rates, which expands the implicit dimensionality and acts as a spatio-temporal regularizer. we show theoretically that our method is able to tighten the upper bound of the generalization error, and extensive experiments on static/neuromorphic recognition as well as 3d point cloud classification tasks demonstrate its effectiveness, versatility, and adversarial robustness. in particular, our regularization significantly improves the recognition accuracy of low-latency snns for neuromorphic objects, contributing to the real-world deployment of neuromorphic computational software-hardware integration.
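a toy sketch of the trr objective, assuming a model that maps (batch, time, features) tensors to per-timestep outputs; the spiking dynamics are abstracted away, so this only illustrates the reversal-consistency and hadamard-hybridization terms:

```python
import torch
import torch.nn.functional as F

class ToySNN(torch.nn.Module):
    """stand-in for an snn returning per-timestep outputs (B, T, classes)."""
    def __init__(self, d_in=16, n_cls=10):
        super().__init__()
        self.fc = torch.nn.Linear(d_in, n_cls)
    def forward(self, x):                 # x: (B, T, d_in) spike/feature tensor
        return self.fc(x)

model = ToySNN()
x = torch.rand(8, 4, 16)                  # batch of 4-timestep inputs
y = torch.randint(0, 10, (8,))

out = model(x)                            # original temporal order
out_rev = model(torch.flip(x, dims=[1]))  # temporally reversed input

rate, rate_rev = out.mean(1), out_rev.mean(1)  # firing rates averaged over time
hybrid = rate * rate_rev                  # "star operation": hadamard mix

loss = (F.cross_entropy(rate, y)
        + F.mse_loss(rate, rate_rev)      # original-reversed consistency
        + F.cross_entropy(hybrid, y))     # regularize via hybrid features
loss.backward()
```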
|
arxiv:2408.09108
|
randomized experiments have become the standard method for companies to evaluate the performance of new products or services. in addition to augmenting managers' decision-making, experimentation mitigates risk by limiting the proportion of customers exposed to innovation. since many experiments are on customers arriving sequentially, a potential solution is to allow managers to "peek" at the results when new data becomes available and stop the test if the results are statistically significant. unfortunately, peeking invalidates the statistical guarantees for standard statistical analysis and leads to uncontrolled type-1 error. our paper provides valid design-based confidence sequences, sequences of confidence intervals with uniform type-1 error guarantees over time, for various sequential experiments in an assumption-light manner. in particular, we focus on finite-sample estimands defined on the study participants as a direct measure of the risks incurred by companies. our proposed confidence sequences are valid for a large class of experiments, including multi-arm bandits, time series, and panel experiments. we further provide a variance reduction technique incorporating modeling assumptions and covariates. finally, we demonstrate the effectiveness of our proposed approach through a simulation study and three real-world applications from netflix. our results show that by using our confidence sequences, harmful experiments could be stopped after observing only a handful of units; for instance, an experiment that netflix ran on its sign-up page on 30,000 potential customers would have been stopped by our method on the first day, before 100 observations.
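for intuition, here is a generic (and deliberately conservative) confidence sequence for bounded observations, built by union-bounding fixed-time hoeffding intervals over all peeking times; the paper's design-based construction is tighter, and the data and threshold below are synthetic:

```python
import numpy as np

def hoeffding_cs(xs, alpha=0.05):
    """anytime-valid confidence sequence for the running mean of [0, 1]
    observations: per-time hoeffding intervals at levels
    alpha_t = 6*alpha/(pi^2 t^2), which sum to alpha over all t."""
    xs = np.asarray(xs, dtype=float)
    t = np.arange(1, len(xs) + 1)
    mean = np.cumsum(xs) / t
    alpha_t = alpha * 6.0 / (np.pi**2 * t**2)
    radius = np.sqrt(np.log(2.0 / alpha_t) / (2.0 * t))
    return mean - radius, mean + radius

rng = np.random.default_rng(0)
lo, hi = hoeffding_cs(rng.random(10_000) * 0.4)   # true mean 0.2
# stop the experiment the first time the interval excludes a harm threshold
stop = int(np.argmax(hi < 0.25)) if (hi < 0.25).any() else None
print(stop, lo[-1], hi[-1])
```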
|
arxiv:2210.08639
|
we examine whether parameters related to the higgs sector of the minimal supersymmetric standard model can be determined by detailed study of the production cross section and decay branching ratios of the higgs boson. assuming that only the light higgs boson will be observed at a future $e^+e^-$ linear collider with $\sqrt{s} = 300\sim500$ gev, we show that values of $m_{\rm susy}$ and $\tan\beta$ are restricted within a narrow region in the $m_{\rm susy}$ versus $\tan\beta$ plane by the combined analysis of the light higgs properties. it is also pointed out that, in some cases, $\tan\beta$ may be restricted to a relatively small value, $\tan\beta = 1\sim5$.
|
arxiv:hep-ph/9809353
|
short text clustering has far-reaching effects on semantic analysis, showing its importance for multiple applications such as corpus summarization and information retrieval. however, it inevitably encounters the severe sparsity of short text representations, making previous clustering approaches still far from satisfactory. in this paper, we present a novel attentive representation learning model for short text clustering, wherein cluster-level attention is proposed to capture the correlations between text representations and cluster representations. relying on this, the representation learning and clustering for short texts are seamlessly integrated into a unified model. to further ensure robust model training for short texts, we apply adversarial training to the unsupervised clustering setting, by injecting perturbations into the cluster representations. the model parameters and perturbations are optimized alternately through a minimax game. extensive experiments on four real-world short text datasets demonstrate the superiority of the proposed model over several strong competitors, verifying that robust adversarial training yields substantial performance gains.
|
arxiv:1912.03720
|
we consider a class of economic growth models that includes the classical ramsey-cass-koopmans capital accumulation model and verify that, under several assumptions, the value function of the model is the unique viscosity solution to the hamilton-jacobi-bellman equation. moreover, we discuss a solution method for these models using differential inclusion, where the subdifferential of the value function plays an important role. next, we present an assumption under which the value function is a classical solution to the hamilton-jacobi-bellman equation, and show that many economic models satisfy this assumption. in particular, our result still holds in an economic growth model in which the government follows a non-smooth keynesian policy rule.
|
arxiv:2405.16643
|
the optimal solution to an optimization problem depends on the problem's objective function, constraints, and size. while deep neural networks (dnns) have proven effective in solving optimization problems, changes in the problem's size, objectives, or constraints often require adjustments to the dnn architecture to maintain effectiveness, or even retraining a new dnn from scratch. given the dynamic nature of wireless networks, which involve multiple and diverse objectives that can have conflicting requirements and constraints, we propose a multi-task learning (mtl) framework to enable a single dnn to jointly solve a range of diverse optimization problems. in this framework, optimization problems with varying dimensionality, objectives, and constraints are treated as distinct tasks. to jointly address these tasks, we propose a conditional-computation-based mtl approach with routing. the multi-task dnn consists of two components: the base dnn (bdnn), which is the single dnn used to extract the solutions for all considered optimization problems, and the routing dnn (rdnn), which manages which nodes and layers of the bdnn are used during the forward propagation of each task. the output of the rdnn is a binary vector which is multiplied with all of the bdnn's weights during the forward propagation, creating a unique computational path through the bdnn for each task. this setup allows the tasks to either share parameters or use independent ones, with the decision controlled by the rdnn. the proposed framework supports both supervised and unsupervised learning scenarios. numerical results demonstrate the efficiency of the proposed mtl approach in solving diverse optimization problems. in contrast, benchmark dnns lacking the rdnn mechanism were unable to achieve similar levels of performance, highlighting the effectiveness of the proposed architecture.
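a minimal sketch of the routing idea: a per-task binary mask gates a shared base network, with a straight-through estimator so the router stays trainable; gating hidden units here rather than individual weights, and using an embedding as the rdnn, are simplifications of the described architecture:

```python
import torch

class RoutedMTL(torch.nn.Module):
    """per-task binary mask from a routing net gates the hidden units of a
    shared base net, creating a task-specific computational path."""
    def __init__(self, d_in=8, d_hidden=32, d_out=4, n_tasks=3):
        super().__init__()
        self.base_in = torch.nn.Linear(d_in, d_hidden)
        self.base_out = torch.nn.Linear(d_hidden, d_out)
        self.router = torch.nn.Embedding(n_tasks, d_hidden)  # rdnn stand-in

    def forward(self, x, task_id):
        logits = self.router(task_id)                # (B, d_hidden)
        hard = (logits > 0).float()
        # straight-through estimator: binary forward pass, soft backward pass
        mask = hard + torch.sigmoid(logits) - torch.sigmoid(logits).detach()
        h = torch.relu(self.base_in(x)) * mask       # task-specific path
        return self.base_out(h)

model = RoutedMTL()
x = torch.randn(5, 8)
print(model(x, torch.zeros(5, dtype=torch.long)).shape)  # task 0 -> (5, 4)
```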
|
arxiv:2502.10027
|
the present paper presents and proves a proposition concerning the time complexity of finite languages. it is shown herein that for any finite language (a language for which the set of words composing it is finite) there is a turing machine that computes the language in such a way that for any input of length k the machine stops in at most k + 1 steps.
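the k+1-step bound can be made concrete with a trie-based recognizer: one step per input symbol plus one final acceptance check (a sketch of the proof idea, not the paper's turing machine construction):

```python
def build_trie(words):
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = True           # end-of-word marker
    return root

def accepts(trie, w):
    node, steps = trie, 0
    for ch in w:                   # one step per input symbol ...
        steps += 1
        if ch not in node:
            return False, steps    # early rejection
        node = node[ch]
    return "$" in node, steps + 1  # ... plus one final acceptance check

L = build_trie(["ab", "abc", "b"])
print(accepts(L, "abc"))  # (True, 4): |w| + 1 = 4 steps
print(accepts(L, "ad"))   # (False, 2): rejected early
```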
|
arxiv:cs/0501009
|
we show that deep networks are better than shallow networks at approximating functions that can be expressed as a composition of functions described by a directed acyclic graph, because the deep networks can be designed to have the same compositional structure, while a shallow network cannot exploit this knowledge. thus, the blessing of compositionality mitigates the curse of dimensionality. on the other hand, a theorem called good propagation of errors allows one to 'lift' theorems about shallow networks to those about deep networks with an appropriate choice of norms, smoothness, etc. we illustrate this in three contexts where each channel in the deep network calculates a spherical polynomial, a non-smooth relu network, or another zonal function network related closely to the relu network.
|
arxiv:1905.12882
|
we study nuclear embeddings for function spaces of generalised smoothness defined on a bounded lipschitz domain $\Omega \subset \mathbb{R}^d$. this covers, in particular, the well-known situation for besov and triebel-lizorkin spaces defined on bounded domains, as well as some first results for function spaces of logarithmic smoothness. in addition, we provide a new, more general approach to compact embeddings for such function spaces, which also unifies earlier results in different settings, including the study of their entropy numbers. again we rely on suitable wavelet decomposition techniques and the famous result of tong (1969) about nuclear diagonal operators acting in $\ell_r$ spaces, which we have recently extended to the vector-valued setting needed here.
|
arxiv:2212.12222
|
we address the problem of extracting key steps from unlabeled procedural videos, motivated by the potential of augmented reality (ar) headsets to revolutionize job training and performance. we decompose the problem into two steps: representation learning and key step extraction. we propose a training objective, the bootstrapped multi-cue contrastive (bmc2) loss, to learn discriminative representations for various steps without any labels. different from prior works, we develop techniques to train a lightweight temporal module which uses off-the-shelf features for self-supervision. our approach can seamlessly leverage information from multiple cues like optical flow, depth, or gaze to learn discriminative features for key steps, making it amenable for ar applications. we finally extract key steps via a tunable algorithm that clusters the representations and samples them. we show significant improvements over prior works for the task of key step localization and phase classification. qualitative results demonstrate that the extracted key steps are meaningful and succinctly represent various steps of the procedural tasks.
|
arxiv:2301.00794
|
we propose masked siamese networks (msn), a self-supervised learning framework for learning image representations. our approach matches the representation of an image view containing randomly masked patches to the representation of the original unmasked image. this self-supervised pre-training strategy is particularly scalable when applied to vision transformers since only the unmasked patches are processed by the network. as a result, msns improve the scalability of joint-embedding architectures, while producing representations of a high semantic level that perform competitively on low-shot image classification. for instance, on imagenet-1k, with only 5,000 annotated images, our base msn model achieves 72.4% top-1 accuracy, and with 1% of imagenet-1k labels, we achieve 75.7% top-1 accuracy, setting a new state-of-the-art for self-supervised learning on this benchmark. our code is publicly available.
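a schematic of the msn-style objective, with a linear layer standing in for the vision transformer and without the momentum target encoder and additional regularizers used in practice; all sizes and temperatures below are illustrative:

```python
import torch
import torch.nn.functional as F

enc = torch.nn.Linear(32, 64)                 # stand-in patch-set encoder
prototypes = torch.nn.Parameter(torch.randn(16, 64))

patches = torch.randn(8, 49, 32)              # (batch, patches, patch_dim)
keep = torch.rand(8, 49).argsort(1)[:, :15]   # indices of unmasked patches
# anchor view: only the unmasked patches are processed
anchor = enc(torch.gather(patches, 1, keep[..., None].expand(-1, -1, 32))).mean(1)
with torch.no_grad():                         # target view: all patches, no grad
    target = enc(patches).mean(1)
    p_target = F.softmax(target @ prototypes.T / 0.025, dim=1)  # sharpened

p_anchor = F.softmax(anchor @ prototypes.T / 0.1, dim=1)
# cross-entropy between target and anchor prototype assignments
loss = -(p_target * p_anchor.clamp_min(1e-9).log()).sum(1).mean()
loss.backward()
```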
|
arxiv:2204.07141
|
this paper details the design of an autonomous alignment and tracking platform to mechanically steer directional horn antennas in a sliding correlator channel sounder setup for 28 ghz v2x propagation modeling. a pan-and-tilt subsystem facilitates uninhibited rotational mobility along the yaw and pitch axes, driven by open-loop servo units and orchestrated via inertial motion controllers. a geo-positioning subsystem augmented in accuracy by real-time kinematics enables navigation events to be shared between a transmitter and receiver over an apache kafka messaging middleware framework with fault tolerance. herein, our system demonstrates a 3d geo-positioning accuracy of 17 cm, an average principal-axes positioning accuracy of 1.1 degrees, and an average tracking response time of 27.8 ms. crucially, fully autonomous antenna alignment and tracking facilitates continuous series of measurements, a unique yet critical necessity for millimeter wave channel modeling in vehicular networks. the power-delay profiles, collected along routes spanning urban and suburban neighborhoods on the nsf powder testbed, are used in pathloss evaluations involving the 3gpp tr38.901 and itu-r m.2135 standards. empirically, we demonstrate that these models fail to accurately capture the 28 ghz pathloss behavior in urban foliage and suburban radio environments. in addition to rms direction-spread analyses for angles-of-arrival via the sage algorithm, we perform signal decoherence studies wherein we derive exponential models for the spatial/angular autocorrelation coefficient under distance and alignment effects.
|
arxiv:2302.08584
|
major depressive disorder is a prevalent and serious mental health condition that negatively impacts one's emotions, thoughts, actions, and overall perception of the world. it is complicated to determine whether a person is depressed because the symptoms of depression are often not apparent. however, a person's voice can be one of the factors from which we can recognize signs of depression. people who are depressed may express discomfort and sadness, speak slowly and tremulously, and lose emotion in their voices. in this study, we propose the dynamic convolutional block attention module (dynamic-cbam), utilized within an attention-gru network, to classify emotions by analyzing the audio signals of humans. based on the results, we can identify which patients are depressed or prone to depression so that treatment and prevention can be started as soon as possible. the research delves into the intricate computational steps involved in implementing an attention-gru deep learning architecture. through experimentation, the model achieved an unweighted accuracy (ua) of 0.87, a weighted accuracy (wa) of 0.86, and an f1 score of 0.87 on the vnemos dataset. training code is released at https://github.com/fiyud/emotional-vietnamese-speech-based-depression-diagnosis-using-dynamic-attention-mechanism
|
arxiv:2412.08683
|
in computed tomography (ct), metal implants increase the inconsistencies between the measured data and the linear attenuation assumption made by analytic ct reconstruction algorithms. the inconsistencies give rise to dark and bright bands and streaks in the reconstructed image, collectively called metal artifacts. these artifacts make it difficult for radiologists to render correct diagnostic decisions. we describe a data-driven metal artifact reduction (mar) algorithm for image-guided spine surgery that applies to scenarios in which a prior ct scan of the patient is available. we tested the proposed method with two clinical datasets that were both obtained during spine surgery. using the proposed method, we were not only able to remove the dark and bright streaks caused by the implanted screws but also to recover the anatomical structures hidden by these artifacts. this results in an improved capability of surgeons to confirm the correctness of the implanted pedicle screw placements.
|
arxiv:1808.01853
|
the n2hdm is based on the cp-conserving 2hdm extended by a real scalar singlet field. its enlarged parameter space and its fewer symmetry conditions as compared to supersymmetric models allow for an interesting phenomenology compatible with current experimental constraints, while adding to the 2hdm sector the possibility of higgs-to-higgs decays with three different higgs bosons. in this paper the n2hdm is subjected to detailed scrutiny. regarding the theoretical constraints, we implement tests of tree-level perturbativity and vacuum stability. moreover, we present, for the first time, a thorough analysis of the global minimum of the n2hdm. the model and the theoretical constraints have been implemented in scanners, and we provide n2hdecay, a code based on hdecay, for the computation of the n2hdm branching ratios and total widths including the state-of-the-art higher-order qcd corrections and off-shell decays. we then perform an extensive parameter scan in the n2hdm parameter space, with all theoretical and experimental constraints applied, and analyse its allowed regions. we find that large singlet admixtures are still compatible with the higgs data and investigate which observables will allow us to restrict the singlet nature most effectively in the next runs of the lhc. similarly to the 2hdm, the n2hdm exhibits a wrong-sign parameter regime, which will be constrained by future higgs precision measurements.
|
arxiv:1612.01309
|
we classify elementary abelian 2-subgroups of compact simple lie groups of adjoint type. this finishes the classification of elementary abelian $p$-subgroups of compact (or linear algebraic) simple groups of adjoint type.
|
arxiv:1108.2398
|
gamma-ray bursts (grbs) are fascinating events due to their panchromatic nature. we study optical plateaus in grb afterglows via an extended search into archival data. we comprehensively analyze all published grbs with known redshifts and optical plateaus observed by many ground-based telescopes (e.g., the subaru telescope, ratir) around the world and several space-based observatories such as the neil gehrels swift observatory. we fit 500 optical light curves (lcs), showing the existence of the plateau in 179 cases. this sample is 75% larger than the previous one (arxiv:2105.10717), and it is the largest compilation so far of optical plateaus. we discover the 3d fundamental plane relation at optical wavelengths using this sample. this correlation is between the rest-frame time at the end of the plateau emission, $t^{*}_{\rm opt}$, its optical luminosity, $l_{\rm opt}$, and the peak in the optical prompt emission, $l_{\rm peak,opt}$, thus resembling the three-dimensional (3d) x-ray fundamental plane relation (arxiv:1604.06840). we correct our sample for redshift evolution and selection effects, discovering that this correlation is indeed intrinsic to grb physics. we investigate the rest-frame end time distributions in x-rays and optical ($t^{*}_{\rm opt}$, $t^{*}_{\rm x}$), and conclude that the plateau is achromatic only when selection biases are not considered. we also investigate whether the 3d optical correlation may be a new discriminant between optical grb classes and find that there is no significant separation between the classes compared to the gold sample plane after correcting for evolution.
|
arxiv:2203.12908
|
we propose a novel method to constrain turbulence and bulk motions in massive galaxies, groups and clusters, exploring both simulations and observations. as emerged in the recent picture of top-down multiphase condensation, the hot gaseous halos are tightly linked to all other phases in terms of cospatiality and thermodynamics. while hot halos (10^7 k) are perturbed by subsonic turbulence, warm (10^4 k) ionized and neutral filaments condense out of the turbulent eddies. the peaks condense into cold molecular clouds (< 100 k) raining in the core via chaotic cold accretion (cca). we show all phases are tightly linked via the ensemble (wide-aperture) velocity dispersion along the line of sight. the correlation arises in complementary long-term agn feedback simulations and high-resolution cca runs, and is corroborated by the combined hitomi and new ifu measurements in the perseus cluster. the ensemble multiphase gas distributions are characterized by substantial spectral line broadening (100-200 km/s) with mild line shift. on the other hand, pencil-beam detections sample the small-scale clouds, displaying smaller broadening and significant line shift up to several 100 km/s, with increased scatter due to the turbulence intermittency. we present new ensemble sigma_v measurements of the warm halpha+[nii] gas in 72 observed cluster/group cores: the constraints are consistent with the simulations and can be used as robust proxies for the turbulent velocities, in particular for the challenging hot plasma (otherwise requiring extremely long x-ray exposures). we show the physically motivated criterion c = t_cool/t_eddy ~ 1 best traces the extent of the condensation region and the presence of multiphase gas in observed clusters/groups. the ensemble method can be applied to many available datasets and can substantially advance our understanding of multiphase halos in light of the next-generation multiwavelength missions.
|
arxiv:1709.06564
|
fluidic channels with physical dimensions approaching molecular sizes are crucial for novel desalination, chemical separation, and sensing technologies. however, fabrication of precisely controlled fluidic channels of angstrom size is extremely challenging. this, along with our limited understanding of nanofluidic transport, hinders practical applications. here, we fabricated high-quality salt-intercalated vermiculite membranes with channel sizes of 3-5 angstrom, highly dependent on the intercalant. unlike pristine samples, the salt-intercalated membranes are highly stable in water. we tested several such membranes, of which 0.6 micron thick membranes showed dye rejection efficiencies greater than 98 percent with exceptionally high water permeance of 5400 l m^-2 h^-1 bar^-1 at a differential pressure of 0.9 bar. interestingly, the same membrane also rejected nacl ions, with efficiencies of 95 percent. our highly confined channels exhibit sub-linear ionic conductance related to hydration sizes, steric exclusion, k+ mobility enhancement, and conductance saturation at concentrations less than or equal to 10 mm. this makes highly confined channels interesting for both fundamental science and applications.
|
arxiv:2303.12463
|
in this paper, we introduce a gauss-newton method for solving the complex phase retrieval problem. in contrast to the real-valued setting, the gauss-newton matrix for complex-valued signals is rank-deficient and, thus, non-invertible. to address this, we utilize a gauss-newton step that moves orthogonally to certain trivial directions. we establish that this modified gauss-newton step has a closed-form solution, which corresponds precisely to the minimal-norm solution of the associated least squares problem. additionally, using the leave-one-out technique, we demonstrate that $m \ge O(n \log^3 n)$ independent complex gaussian random measurements ensure that the entire trajectory of the gauss-newton iterations remains confined within a specific region of incoherence and contraction with high probability. this finding allows us to establish the asymptotic quadratic convergence rate of the gauss-newton method without the need for sample splitting.
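a numpy sketch of an intensity-based gauss-newton iteration for complex phase retrieval, where `np.linalg.lstsq` returns exactly the minimal-norm least-squares step; the warm start below stands in for the spectral initialization, and the intensity residual is an assumed parametrization rather than the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 200
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # ground truth
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
y2 = np.abs(A @ x) ** 2                                     # intensity data

z = x + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # warm start
for it in range(10):
    c = A @ z
    F = np.abs(c) ** 2 - y2                         # residual F_i = |a_i z|^2 - y_i^2
    G = np.conj(c)[:, None] * A
    J = np.hstack([2 * G.real, -2 * G.imag])        # (m, 2n) real jacobian in (Re z, Im z)
    step, *_ = np.linalg.lstsq(J, -F, rcond=None)   # minimal-norm LS solution
    z = z + step[:n] + 1j * step[n:]

phase = np.exp(1j * np.angle(np.vdot(x, z)))        # mod out the global phase
print(np.linalg.norm(z - phase * x))                # small once converged
```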
|
arxiv:2406.09903
|
we analyze quasi-topological solitons winding around a mexican-hat potential in two space-time dimensions. they are prototypes for a large number of physical excitations, including skyrmions of the higgs sector of the standard electroweak model, magnetic bubbles in thin ferromagnetic films, and strings in certain non-trivial backgrounds. we present explicit solutions, derive the conditions for classical stability, and show that, contrary to naive expectation, these can be satisfied in the weak-coupling limit. in this limit we can calculate the soliton properties reliably, and estimate their lifetime semiclassically. we explain why gauge interactions destabilize these solitons, unless the scalar sector is extended.
|
arxiv:hep-th/9403034
|
with the ubiquity of iot devices, there is a growing demand for confidentiality and integrity of data. solutions based on reconfigurable logic (cpld or fpga) have certain advantages over asic and mcu/soc alternatives. programmable logic devices are ideal for both confidentiality and upgradability purposes. in this context the hardware security aspects of cpld/fpga devices are paramount. this paper presents a preliminary evaluation of hardware security in intel max 10 devices. these fpgas are among the most suitable candidates for applications demanding extensive features and a high level of security. their strong and weak security aspects are revealed, and some recommendations are suggested to counter possible security vulnerabilities in real designs. this is a feasibility study paper. its purpose is to highlight the areas most vulnerable to attacks aimed at data extraction and reverse engineering, so that further investigations can be performed on specific areas of concern.
|
arxiv:1910.05086
|
in this paper, we aim at interference mitigation in 5g millimeter-wave (mm-wave) communications by employing beamforming and non-orthogonal multiple access (noma) techniques with the aim of improving the network's aggregate rate. despite the potential capacity gains of mm-wave and noma, many technical challenges might hinder that performance gain. in particular, the performance of successive interference cancellation (sic) diminishes rapidly as the number of users per beam increases, which leads to higher intra-beam interference. furthermore, intersection regions between adjacent cells give rise to inter-beam inter-cell interference. to mitigate both interference levels, optimal selection of the number of beams in addition to the best allocation of users to those beams is essential. in this paper, we address the problem of joint user-cell association and selection of the number of beams for the purpose of maximizing the aggregate network capacity. we propose three machine-learning-based algorithms: transfer q-learning (tql), q-learning, and best sinr association with density-based spatial clustering of applications with noise (bsdc), and compare their performance under different scenarios. under mobility, tql and q-learning demonstrate 12% rate improvement over bsdc at the highest offered traffic load. for stationary scenarios, q-learning and bsdc outperform tql; however, tql achieves about 29% convergence speedup compared to q-learning.
|
arxiv:2012.04840
|
state capacity may shape whether natural resources generate prosperity, as it determines whether windfalls are effectively turned into useful projects or wasted. we test this hypothesis by studying the 2004-2011 mining boom in peru, where mines' profits are redistributed as windfall transfers to local governments. our empirical strategy combines geological data with the central government's mining windfall allocation formula to identify the windfalls' effects on household incomes and other measures of economic development. proxying local state capacity with the ability to tax and relying on a triple-difference strategy, we uncover significant variation in treatment response, with positive effects of windfalls limited to high-state-capacity localities. we find suggestive evidence that only localities with high state capacity succeed at transforming windfalls into infrastructure stocks, which in turn contributes to structural transformation and market integration. lastly, social unrest increases in low-state-capacity localities that receive windfalls but fail to perceive their benefits. our findings underscore important complementarities between investments in extractive industries and in state capacity.
|
arxiv:2411.09586
|
this paper explains in layperson's terms how an agent-based model was used to investigate the hypothesis that culture evolves more effectively when novelty-generating creative processes are tempered by imitation processes that preserve proven successful ideas. using evoc, an agent-based model of cultural evolution, we found that (1) the optimal ratio of inventing to imitating ranged from 1:1 to 2:1 depending on the fitness function, (2) there was a trade-off between the proportion of creators to conformers and how creative the creators were, and (3) when agents increased or decreased their creativity depending on the success of their latest creative efforts, they segregated into creators and conformers, and the mean fitness of ideas across the society was higher. it is tentatively suggested that through the unconscious use of social cues, members of a society self-organize to achieve a balanced mix of creators and conformers.
|
arxiv:1502.02076
|
segregation at surfaces of metal-covalent binary liquids is often non-classical, and in extreme cases such as ausi, the surface crystallizes above the melting point. in this study, we employ atomic-scale computational frameworks to study the surface crystallization of ausi films and droplets as a function of composition, temperature and size. for temperatures in the range $t_s^\ast = 765-780$ k above the melting point $(t_s^\ast \approx 1.3\,t_m)$, both thin film and droplet surfaces undergo a first-order transition, from a 2d au$_2$si crystalline phase to a laterally disordered yet stratified layer. the thin film surfaces exhibit an effective surface tension that increases with temperature and decreases with si concentration. on the other hand, for droplets in the size range $10-30$ nm, the bulk laplace pressure alters the surface segregation as it occurs with respect to a strained bulk. above $t_s^\ast$ the size effect on the surface tension is small, while for $t < t_s^\ast$ the surface layer is strained and composed of 2d crystallites separated by extended grain boundary scars that lead to large fluctuations in its energetics. as a specific application, all-atom simulations of ausi droplets on a si(111) substrate subject to si surface flux show that the supersaturation-dependent surface tension destabilizes the contact line via formation of a precursor wetting film on the solid-vapor interface, with ramifications for size selection during vls-based routes for nanowire growth. our study sheds light on the interplay between stability and energetics of surfaces in this unique class of binary alloys and offers pathways for exploiting their surface structure for varied applications such as catalytic nanocrystal growth, dealloying, and polymer crystallization.
|
arxiv:2002.12542
|
in this paper, we consider the sparse phase retrieval problem, recovering an $s$-sparse signal $\bm{x}^{\natural} \in \mathbb{R}^n$ from $m$ phaseless samples $y_i = |\langle \bm{x}^{\natural}, \bm{a}_i \rangle|$ for $i = 1, \ldots, m$. existing sparse phase retrieval algorithms are usually first-order and hence converge at most linearly. inspired by the hard thresholding pursuit (htp) algorithm in compressed sensing, we propose an efficient second-order algorithm for sparse phase retrieval. our proposed algorithm is theoretically guaranteed to give an exact sparse signal recovery in finite (in particular, at most $O(\log m + \log(\|\bm{x}^{\natural}\|_2 / |x_{\min}^{\natural}|))$) steps, when $\{\bm{a}_i\}_{i=1}^{m}$ are i.i.d. standard gaussian random vectors with $m \sim O(s \log(n/s))$ and the initialization is in a neighborhood of the underlying sparse signal. together with a spectral initialization, our algorithm is guaranteed to achieve an exact recovery from $O(s^2 \log n)$ samples. since the computational cost per iteration of our proposed algorithm is of the same order as that of popular first-order algorithms, our algorithm is extremely efficient. experimental results show that our algorithm can be several times faster than existing sparse phase retrieval algorithms.
|
arxiv:2005.08777
|
surface fogging is a common phenomenon that can have significant and detrimental effects on surface transparency and visibility. it affects performance in a wide range of applications including windows, windshields, electronic displays, cameras, mirrors, and eyewear. a host of ongoing research is aimed at combating this problem by understanding and developing stable and effective anti-fogging coatings that are capable of handling a wide range of environmental challenges "passively", without consumption of electrical energy. here we introduce an alternative approach employing sunlight to go beyond state-of-the-art techniques, such as superhydrophilic and superhydrophobic coatings, by rationally engineering solar-absorbing metasurfaces that maintain transparency while, upon illumination, inducing localized heating to significantly delay the onset of surface fogging or decrease defogging time. for the same environmental conditions, we demonstrate that our metasurfaces are able to reduce defogging time by up to fourfold and, under supersaturated conditions, inhibit the nucleation of condensate, outperforming conventional state-of-the-art approaches in terms of visibility retention. our research illustrates a durable and environmentally sustainable approach to passive anti-fogging and defogging for transparent surfaces. this work opens up the opportunity for large-scale manufacturing that can be applied to a range of materials, including polymers and other flexible substrates.
|
arxiv:1904.02534
|
it is argued that the penetration depth and the correlation length at the critical point of the 3d superconductor diverge with the same critical exponents, as follows from general scaling arguments and from independent calculations in both the ginzburg-landau theory and its dual theory. the incorrect prediction of kiometzis, kleinert and schakel (kks) that this is not so is the result of a faulty treatment of the dual theory, in which two, instead of one, coupling constants are tuned to reach the critical point. the recent paper by the present author criticized by kks in cond-mat/9702159 differs on this point from kks, and consequently finds the expected relation between the divergences of the two lengths in the critical region.
|
arxiv:cond-mat/9702167
|
we develop a method to compute the fermion contribution to the vacuum polarization energy of string-like configurations in a non-abelian gauge theory. this calculation has previously been hampered by a number of technical obstacles. we use gauge invariance of the energy and separation of length scales in the energy density to overcome these obstacles. we present a proof-of-principle investigation which shows that this energy is small in the ms-bar renormalization scheme. the generalization to other schemes is straightforward.
|
arxiv:0912.3463
|
we show that it is $\mathsf{NP}$-hard to approximate the hyperspherical radius of a triangulated manifold up to an almost-polynomial factor.
|
arxiv:1908.02824
|
industrial engineering (ie) is concerned with the design, improvement and installation of integrated systems of people, materials, information, equipment and energy. it draws upon specialized knowledge and skill in the mathematical, physical, and social sciences together with the principles and methods of engineering analysis and design, to specify, predict, and evaluate the results to be obtained from such systems. industrial engineering is a branch of engineering that focuses on optimizing complex processes, systems, and organizations by improving efficiency, productivity, and quality. it combines principles from engineering, mathematics, and business to design, analyze, and manage systems that involve people, materials, information, equipment, and energy. industrial engineers aim to reduce waste, streamline operations, and enhance overall performance across various industries, including manufacturing, healthcare, logistics, and service sectors. industrial engineers are employed in numerous industries, such as automobile manufacturing, aerospace, healthcare, forestry, finance, leisure, and education. industrial engineering combines the physical and social sciences together with engineering principles to improve processes and systems. several industrial engineering principles are followed to ensure the effective flow of systems, processes, and operations. industrial engineers work to improve quality and productivity while simultaneously cutting waste. they use principles such as lean manufacturing, six sigma, information systems, process capability, and more. these principles allow the creation of new systems, processes or situations for the useful coordination of labor, materials and machines. depending on the subspecialties involved, industrial engineering may also overlap with operations research, systems engineering, manufacturing engineering, production engineering, supply chain engineering, management science, engineering management, financial engineering, ergonomics or human factors engineering, safety engineering, logistics engineering, quality engineering or other related capabilities or fields.

== history ==

=== origins ===

==== industrial engineering ====

the origins of industrial engineering are generally traced back to the industrial revolution with the rise of factory systems and mass production. the fundamental concepts began to emerge through ideas like adam smith's division of labor and the implementation of interchangeable parts by eli whitney. the term "industrial engineer" is credited to james gunn, who proposed the need for such an engineer focused on production and cost analysis in 1901. however, frederick taylor is widely credited as the "father of industrial engineering" for his focus on scientific management, emphasizing time studies and standardized work methods, with his principles being published in 1911. notably, taylor established the first department dedicated to industrial engineering work, called "elementary rate fixing," in 1885 with the goal of process improvement and
|
https://en.wikipedia.org/wiki/Industrial_engineering
|
i will discuss the presence of massive star clusters in starburst galaxies with an emphasis on low mass galaxies outside the local group. i will show that such galaxies, with respect to their mass and luminosity, may be very rich in young luminous clusters.
|
arxiv:astro-ph/0003149
|
we examine the thermopower q of a mesoscopic normal-metal (n) wire in contact with superconducting (s) segments and show that even with electron-hole symmetry, q may become finite due to the presence of supercurrents. moreover, we show how the dominant part of q can be directly related to the equilibrium supercurrents in the structure. in general, a finite thermopower appears both between the n reservoirs and the superconductors, and between the n reservoirs themselves. the latter, however, strongly depends on the geometrical symmetry of the structure.
|
arxiv:cond-mat/0401008
|
we show that a system of bosons in a $t = 0$ quantum field theory can present metastable ground states with spontaneous symmetry breaking, even in the absence of an imaginary mass term. this gives a natural explanation of the davis-shellard background field $\exp(-i\omega_0 t)$ and adds a new degree of freedom in boson systems, with possible applications in cosmology, condensed matter and high energy physics.
|
arxiv:hep-th/9707263
|
a review of the hadron electromagnetic form factors obtained in a light-front constituent quark model, based on the eigenfunctions of a mass operator, is presented. the relevance of different components in the q-q interaction for the description of experimental hadron form factors is analysed.
|
arxiv:nucl-th/9909025
|
this work considers numerical methods for the time-dependent schr\"{o}dinger equation of incommensurate systems. by using a plane wave method for spatial discretization, the incommensurate problem is lifted to a higher dimension, resulting in semidiscrete differential equations with extremely demanding computational cost. we propose several fully discrete time-stepping schemes based on the idea of "layer-splitting", which decomposes the semidiscrete problem into sub-problems, each corresponding to one of the periodic layers. these schemes then handle only some periodic systems in the original lower dimension at each time step, which reduces the computational cost significantly and makes it natural to involve stochastic methods and parallel computing. both theoretical analysis and numerical experiments are provided to support the reliability and efficiency of the algorithms.
|
arxiv:2103.14897
|
in this paper we present an algorithm for efficiently counting fixed points in a finite monoid $m$ under a conjugacy-like action. we then prove a formula for the character table of $m$ in terms of fixed points and radical, which allows for the effective computation of the character table of $m$ over a field of null characteristic, as well as its cartan matrix, using a formula from [thi\'ery '12], again in terms of fixed points. we discuss the implementation details of the resulting algorithms and provide benchmarks of their performance.
|
arxiv:2303.13647
|
this paper presents a new virtualization method for the downlink of a multi-cell multiple-input multiple-output (mimo) network, to achieve service isolation among multiple service providers (sps) that share the base station resources of an infrastructure provider (inp). each sp designs a virtual precoder for its users in each cell, as its service demand to the inp, without the need to be aware of the existence of the other sps or to know the channel state information (csi) outside the cell. the inp performs network virtualization to meet the sps' service demands while managing both the inter-sp and inter-cell interference. we consider coordinated multi-cell precoding at the inp and formulate an optimization problem to minimize a weighted sum of signal leakage and precoding deviation, with per-cell transmit power constraints. we propose a fully distributed semi-closed-form solution at each cell, without any csi exchange across cells. we further propose a low-complexity scheme to allocate the virtual transmit power, for the inp to regulate between interference elimination and virtual demand maximization. simulation results demonstrate that our precoding solution for network virtualization substantially outperforms the traditional spectrum isolation alternative. it can approach the performance of fully cooperative precoding when the number of antennas is large.
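the flavor of the per-cell step can be sketched as a regularized least-squares problem with a closed-form solution; the exact objective, notation, weighting, and power normalization below are simplifying assumptions rather than the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, K_local, K_out = 8, 4, 6        # antennas, in-cell users, out-of-cell users
H = rng.standard_normal((K_local, Nt)) + 1j * rng.standard_normal((K_local, Nt))
G = rng.standard_normal((K_out, Nt)) + 1j * rng.standard_normal((K_out, Nt))
W_virt = rng.standard_normal((Nt, K_local)) + 1j * rng.standard_normal((Nt, K_local))

rho, P_max = 1.0, 10.0
D = H @ W_virt                        # received signal demanded by the sps

# closed-form minimizer of rho*||G W||_F^2 + ||H W - D||_F^2
W = np.linalg.solve(rho * G.conj().T @ G + H.conj().T @ H, H.conj().T @ D)
W *= np.sqrt(min(1.0, P_max / np.linalg.norm(W) ** 2))   # per-cell power cap

print("leakage power:", np.linalg.norm(G @ W) ** 2)
print("demand deviation:", np.linalg.norm(H @ W - D) ** 2)
```

the weight rho plays the role of the regulation knob between interference elimination (large rho) and virtual demand maximization (small rho).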
|
arxiv:2104.04615
|
let $\mathcal{A}_g$ denote the moduli stack of principally polarized abelian varieties of dimension $g$. the arithmetic height, or arithmetic volume, of $\overline{\mathcal{A}}_g$ is defined to be the arithmetic degree of the metrized hodge bundle $\overline{\omega}_g$ on $\overline{\mathcal{A}}_g$. in 1999, k\"uhn proved a formula for the arithmetic volume of $\overline{\mathcal{A}}_1$ in terms of special values of the riemann zeta function. in this article, we generalize his result to the case $g = 2$.
|
arxiv:2205.11864
|
we present a review of the current state of the art of cosmological dark matter simulations, with particular emphasis on the implications for dark matter detection efforts and studies of dark energy. this review is intended both for particle physicists, who may find the cosmological simulation literature opaque or confusing, and for astrophysicists, who may not be familiar with the role of simulations for observational and experimental probes of dark matter and dark energy. our work is complementary to the contribution by m. baldi in this issue, which focuses on the treatment of dark energy and cosmic acceleration in dedicated n-body simulations. truly massive dark-matter-only simulations are being conducted at national supercomputing centers, employing from several billion to over half a trillion particles to simulate the formation and evolution of cosmologically representative volumes (cosmic scale) or to zoom in on individual halos (cluster and galactic scale). these simulations cost millions of core-hours, require tens to hundreds of terabytes of memory, and use up to petabytes of disk storage. the field is quite internationally diverse, with top simulations having been run in china, france, germany, korea, spain, and the usa. predictions from such simulations touch on almost every aspect of dark matter and dark energy studies, and we give a comprehensive overview of this connection. we also discuss the limitations of the cold and collisionless dm-only approach, and describe in some detail efforts to include different particle physics as well as baryonic physics in cosmological galaxy formation simulations, including a discussion of recent results highlighting how the distribution of dark matter in halos may be altered. we end with an outlook for the next decade, presenting our view of how the field can be expected to progress. (abridged)
|
arxiv:1209.5745
|
bayesian optimization ( bo ) and its batch extensions are successful for optimizing expensive black - box functions. however, these traditional bo approaches are not yet ideal for optimizing less expensive functions when the computational cost of bo can dominate the cost of evaluating the black - box function. examples of these less expensive functions are cheap machine learning models, inexpensive physical experiments run through simulators, and acquisition function optimization in bayesian optimization. in this paper, we consider a batch bo setting for situations where function evaluations are less expensive. our model is based on a new exploration strategy using geometric distance that provides an alternative way for exploration, selecting a point far from the observed locations. using that intuition, we propose to use a sobol sequence to guide exploration, which avoids the multiple global optimization steps used in previous works. based on the proposed distance exploration, we present an efficient batch bo approach. we demonstrate that our approach outperforms other baselines and global optimization methods when the function evaluations are less expensive.
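The distance-driven exploration step is easy to make concrete. Below is a minimal Python sketch of the idea as we read it from the abstract — draw quasi-random Sobol candidates and keep the one farthest from the observed data. It is not the authors' implementation; the function name and candidate count are our choices.

```python
import numpy as np
from scipy.stats import qmc

def select_exploration_point(X_observed, bounds, n_candidates=256, seed=0):
    """Pick the Sobol candidate farthest from all observed locations.

    A sketch of distance-based exploration: instead of running a global
    optimizer on an acquisition function, score quasi-random candidates by
    their distance to the data and keep the most remote one.
    """
    dim = len(bounds)
    sobol = qmc.Sobol(d=dim, scramble=True, seed=seed)
    candidates = qmc.scale(sobol.random(n_candidates),
                           [lo for lo, hi in bounds],
                           [hi for lo, hi in bounds])
    # Distance from each candidate to its nearest observed point.
    dists = np.linalg.norm(
        candidates[:, None, :] - X_observed[None, :, :], axis=-1).min(axis=1)
    return candidates[np.argmax(dists)]

X = np.random.rand(10, 2)                         # observed locations
x_next = select_exploration_point(X, [(0.0, 1.0), (0.0, 1.0)])
```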
|
arxiv:1811.01466
|
social determinants of health ( sdoh ) play a crucial role in patient health outcomes, yet their integration into biomedical knowledge graphs remains underexplored. this study addresses this gap by constructing an sdoh - enriched knowledge graph using the mimic - iii dataset and primekg. we introduce a novel fairness formulation for graph embeddings, focusing on invariance with respect to sensitive sdoh information. by employing a heterogeneous - gcn model for drug - disease link prediction, we detect biases related to various sdoh factors. to mitigate these biases, we propose a post - processing method that strategically reweights edges connected to sdohs, balancing their influence on graph representations. this approach represents one of the first comprehensive investigations into fairness issues within biomedical knowledge graphs incorporating sdoh. our work not only highlights the importance of considering sdoh in medical informatics but also provides a concrete method for reducing sdoh - related biases in link prediction tasks, paving the way for more equitable healthcare recommendations. our code is available at \ url { https : / / github. com / hwq0726 / sdoh - kg }.
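The edge-reweighting post-processing admits a compact sketch. Everything here (the multiplicative penalty `alpha`, the array layout, the function name) is an assumption for illustration; the paper's released code should be consulted for the actual reweighting rule.

```python
import numpy as np

def reweight_sdoh_edges(edge_index, edge_weight, sdoh_nodes, alpha=0.5):
    """Down-weight edges touching SDoH nodes to curb their influence.

    edge_index : (2, E) int array of (source, target) node ids
    edge_weight: (E,) float array
    sdoh_nodes : set of node ids carrying sensitive SDoH information
    alpha      : multiplicative penalty in (0, 1]; alpha=1 leaves weights as-is
    """
    edge_weight = edge_weight.copy()
    src, dst = edge_index
    touches_sdoh = np.array([s in sdoh_nodes or t in sdoh_nodes
                             for s, t in zip(src, dst)])
    edge_weight[touches_sdoh] *= alpha
    return edge_weight
```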
|
arxiv:2412.00245
|
we numerically investigate crystalline order on negative gaussian curvature capillary bridges. in agreement with the experimental results in [ w. irvine et al., nature, " pleats in crystals on curved surfaces ", 2010, ( 468 ), 947 ] we observe for decreasing integrated gaussian curvature a sequence of transitions, from no defects to isolated dislocations, pleats, scars and isolated sevenfold disclinations. we especially focus on the dependency of the detached topological charge on the integrated gaussian curvature, for which we observe, again in agreement with the experimental results, no net disclination for an integrated curvature down to - 10, and a linear behaviour from there on until the disclinations match the integrated curvature of - 12. the results are obtained using a phase field crystal approach on catenoid - like surfaces and are highly sensitive to the initialization.
|
arxiv:1401.7783
|
we study a subclass of congruent elliptic curves $ e ^ { ( n ) } : y ^ 2 = x ^ 3 - n ^ 2x $, where $ n $ is a positive integer congruent to $ 1 \ pmod 8 $ with all prime factors congruent to $ 1 \ pmod 4 $. we characterize such $ e ^ { ( n ) } $ with mordell - weil rank zero and $ 2 $ - primary part of shafarevich - tate group isomorphic to $ \ big ( \ mathbb z / 2 \ mathbb z \ big ) ^ 2 $. we also discuss such $ e ^ { ( n ) } $ with 2 - primary part of shafarevich - tate group isomorphic to $ \ big ( \ mathbb z / 2 \ mathbb z \ big ) ^ { 2k } $ with $ k \ ge2 $.
|
arxiv:1511.03810
|
the paper consists of four parts. part i presents a brief survey of the nielsen fixed point theory. part ii deals with dynamical zeta functions connected with nielsen fixed point theory. part iii is concerned with congruences for the reidemeister and nielsen numbers. part iv deals with the reidemeister torsion. in chapter 2 we prove that the reidemeister zeta function of a group endomorphism is a rational function with functional equation in the following cases : the group is finitely generated and an endomorphism is eventually commutative ; the group is finite ; the group is a direct sum of a finite group and a finitely generated free abelian group ; the group is finitely generated, nilpotent and torsion free. in chapter 3 we show that the nielsen zeta function has a positive radius of convergence which admits a sharp estimate in terms of the topological entropy of the map. for a periodic map of a compact polyhedron we prove that the nielsen zeta function is a radical of a rational function. in sections 3. 4 and 3. 5 we give sufficient conditions under which the nielsen zeta function coincides with the reidemeister zeta function and is a rational function with functional equation. in chapter 5 we prove an analog of the dold congruences for reidemeister and nielsen numbers. in section 6. 2 we establish a connection between the reidemeister torsion and the reidemeister zeta function. in section 6. 3 we establish a connection between the reidemeister torsion of a mapping torus, the eta - invariant, the rochlin invariant and the multipliers of the theta function. in section 6. 4 we describe with the help of the reidemeister torsion the connection between the topology of the attraction domain of an attractor and the dynamics of the system on the attractor.
|
arxiv:chao-dyn/9603017
|
the paper presents a solution of " hello world! an instructive case for the transformation tool contest " using the viatra2 model transformation tool.
|
arxiv:1111.4758
|
grounding human - machine conversation in a document is an effective way to improve the performance of retrieval - based chatbots. however, only a part of the document content may be relevant to help select the appropriate response at a round. it is thus crucial to select the part of document content relevant to the current conversation context. in this paper, we propose a document content selection network ( csn ) to perform explicit selection of relevant document contents, and filter out the irrelevant parts. we show in experiments on two public document - grounded conversation datasets that csn can effectively help select the relevant document contents to the conversation context, and it produces better results than the state - of - the - art approaches. our code and datasets are available at https : / / github. com / daod / csn.
|
arxiv:2101.08426
|
in evolutionary multiobjective optimization, effectiveness refers to how an evolutionary algorithm performs in terms of converging its solutions into the pareto front and also diversifying them over the front. this is not an easy job, particularly for optimization problems with more than three objectives, dubbed many - objective optimization problems. in such problems, classic pareto - based algorithms fail to provide sufficient selection pressure towards the pareto front, whilst recently developed algorithms, such as decomposition - based ones, may struggle to maintain a set of well - distributed solutions on certain problems ( e. g., those with irregular pareto fronts ). another issue with some many - objective optimizers, such as hypervolume - based algorithms and shift - based density estimation ( sde ) methods, is a computational requirement that increases rapidly with the number of objectives. in this paper, we aim to address this problem and develop an effective and efficient evolutionary algorithm ( e3a ) that can handle various many - objective problems. in e3a, inspired by sde, a novel population maintenance method is proposed to select high - quality solutions in the environmental selection procedure. we conduct extensive experiments and show that e3a performs better than 11 state - of - the - art many - objective evolutionary algorithms in quickly finding a set of well - converged and well - diversified solutions.
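Shift-based density estimation, which inspires the population maintenance step, has a standard closed form that a few lines of Python make concrete (assuming minimization and the usual SDE shift; this is the generic SDE formula, not necessarily E3A's exact selection rule):

```python
import numpy as np

def sde_fitness(F):
    """Shift-based density estimation for a population of objective vectors.

    For each individual i, every other individual j is 'shifted' so that it
    is no better than i in any objective (assuming minimization); i's density
    is then its distance to the nearest shifted neighbour.  Larger values
    indicate better-converged, better-spread solutions.
    F : (N, M) array of objective values.
    """
    N = F.shape[0]
    fitness = np.empty(N)
    for i in range(N):
        shifted = np.maximum(F, F[i])          # shift all points towards i
        d = np.linalg.norm(shifted - F[i], axis=1)
        d[i] = np.inf                          # ignore self-distance
        fitness[i] = d.min()
    return fitness
```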
|
arxiv:2205.15884
|
we analyze momentarily static initial data sets of the gravitational field produced by two point sources in five - dimensional kaluza - klein spacetimes. these initial data sets are characterized by the mass, the separation of sources and the size of an extra dimension. using these initial data sets, we discuss the condition for black hole formation, and propose a new conjecture which is a hybrid of the four - dimensional hoop conjecture and the five - dimensional hyperhoop conjecture. by using the new conjecture, we estimate the cross section of black hole formation due to collisions of particles in kaluza - klein spacetimes. we show that the mass dependence of the cross section gives us information about the size and the number of the compactified extra dimensions.
|
arxiv:0906.0689
|
we demonstrate coherent anti - stokes raman scattering ( cars ) microscopy in a wide - field setup with nanosecond laser pulse excitation. in contrast to confocal setups, the image of a sample can be recorded with a single pair of excitation pulses. for this purpose the excitation geometry is specially designed in order to satisfy the phase matching condition over the whole sample area. the spectral, temporal and spatial sensitivity of the method is demonstrated by imaging test samples, i. e. oil vesicles in sunflower seeds, on a nanosecond timescale. the method provides snapshot imaging in 3 nanoseconds with a spectral resolution of 25 wavenumbers ( cm $ ^ { - 1 } $ ).
|
arxiv:physics/0512226
|
we study the mean field schr \ " odinger problem ( mfsp ), that is, the problem of finding the most likely evolution of a cloud of interacting brownian particles conditionally on the observation of their initial and final configuration. its rigorous formulation is in terms of an optimization problem with marginal constraints whose objective function is the large deviation rate function associated with a system of weakly dependent brownian particles. we undertake a fine study of the dynamics of its solutions, including quantitative energy dissipation estimates yielding the exponential convergence to equilibrium as the time between observations grows larger and larger, as well as a novel class of functional inequalities involving the mean field entropic cost ( i. e. the optimal value in ( mfsp ) ). our strategy unveils an interesting connection between forward backward stochastic differential equations and the riemannian calculus on the space of probability measures introduced by otto, which is of independent interest.
|
arxiv:1905.02393
|
in this paper we present the design and implementation of flow, a fast and precise type checker for javascript that is used by thousands of developers on millions of lines of code at facebook every day. flow uses sophisticated type inference to understand common javascript idioms precisely. this helps it find non - trivial bugs in code and provide code intelligence to editors without requiring significant rewriting or annotations from the developer. we formalize an important fragment of flow ' s analysis and prove its soundness. furthermore, flow uses aggressive parallelization and incrementalization to deliver near - instantaneous response times. this helps it avoid introducing any latency in the usual edit - refresh cycle of rapid javascript development. we describe the algorithms and systems infrastructure that we built to scale flow ' s analysis.
|
arxiv:1708.08021
|
we show that if the atiyah jones conjecture holds for a surface $ x, $ then it also holds for the blow - up of $ x $ at a point. since the conjecture is known to hold for $ { \ mathbb p } ^ 2 $ and for ruled surfaces, it follows that the conjecture is true for all rational surfaces.
|
arxiv:math/0403138
|
cement is the most widely used manufacturing material in the world and improving its toughness would allow for the design of slender infrastructure, requiring less material. to this end, we investigate by means of molecular dynamics simulations the fracture of calcium - silicate - hydrate ( c - s - h ), the binding phase of cement, responsible for its mechanical properties. for the first time, we report values of the fracture toughness, critical energy release rate, and surface energy of c - s - h grains. this allows us to discuss the brittleness of the material at the atomic scale. we show that, at this scale, c - s - h breaks in a ductile way, which prevents the use of methods based on linear elastic fracture mechanics. knowledge of the fracture properties of c - s - h at the nanoscale opens the way for an upscaling approach to the design of tougher cement.
|
arxiv:1410.2915
|
we study the long time behaviour of a nonlinear oscillator subject to a random multiplicative noise with a spectral density ( or power - spectrum ) that decays as a power law at high frequencies. when the dissipation is negligible, physical observables, such as the amplitude, the velocity and the energy of the oscillator grow as power - laws with time. we calculate the associated scaling exponents and we show that their values depend on the asymptotic behaviour of the external potential and on the high frequencies of the noise. our results are generalized to include dissipative effects and additive noise.
|
arxiv:0710.4063
|
selection in a time - periodic environment is modeled via the two - player replicator dynamics. for sufficiently fast environmental changes, this is reduced to a multi - player replicator dynamics in a constant environment. the two - player terms correspond to the time - averaged payoffs, while the three and four - player terms arise from the adaptation of the morphs to their varying environment. such multi - player ( adaptive ) terms can induce a stable polymorphism. the establishment of the polymorphism in partnership games [ genetic selection ] is accompanied by decreasing mean fitness of the population.
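For two morphs, the time-averaged two-player part of these dynamics is a standard replicator ODE that can be integrated directly. The payoff matrix below is an arbitrary stand-in, and the sketch deliberately omits the three- and four-player adaptive terms the abstract derives:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.0, 3.0],       # assumed time-averaged payoff matrix
              [2.0, 1.0]])

def replicator(t, x):
    """Two-player replicator dynamics: x_i' = x_i * ((A x)_i - x . A x)."""
    f = A @ x
    return x * (f - x @ f)

sol = solve_ivp(replicator, (0.0, 50.0), [0.9, 0.1], dense_output=True)
print(sol.y[:, -1])   # converges to the interior fixed point (2/3, 1/3)
```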
|
arxiv:0905.3297
|
thermodynamics of solar wind bulk plasma have been routinely measured and quantified, unlike those of solar energetic particles ( seps ), whose thermodynamic properties have remained elusive until recently. the thermodynamic kappa ( \ ( \ kappa _ { \ rm ep } \ ) ) that parameterizes the statistical distribution of sep kinetic energy contains information regarding the population ' s level of correlation and effective degrees of freedom ( \ ( { \ rm d _ { eff } } \ ) ). at the same time, the intermittent kappa ( \ ( \ kappa _ { \ delta b } \ ) ) that parameterizes the statistical distribution of magnetic field increments contains information about the correlation and \ ( { \ rm d _ { eff } } \ ) involved in magnetic field fluctuations. correlations between particles can be affected by magnetic field fluctuations, leading to a relationship between \ ( \ kappa _ { \ rm ep } \ ) and \ ( \ kappa _ { \ delta b } \ ). in this paper, we examine the relationship of \ ( { \ rm d _ { eff } } \ ) and entropy between energetic particles and the magnetic field via the spatial variation of their corresponding parameter kappa values. we compare directly the values of \ ( \ kappa _ { \ rm ep } \ ) and \ ( \ kappa _ { \ delta b } \ ) using parker solar probe is \ ( \ odot \ ) is and fields measurements during an sep event associated with an interplanetary coronal mass ejection ( icme ). remarkably, we find that \ ( \ kappa _ { \ rm ep } \ ) and \ ( \ kappa _ { \ delta b } \ ) are anti - correlated via a linear relationship throughout the passing of the icme, indicating a proportional exchange of \ ( { \ rm d _ { eff } } \ ) from the magnetic field to energetic particles, i. e., \ ( \ kappa _ { \ delta b } \ sim ( - 0. 15 \ pm 0. 03 ) \ kappa _ { \ rm ep } \ ), interpreted as an effective coupling ratio. this finding is crucial for improving our understanding of icmes and suggests that they help to produce an environment that enables the transfer of entropy from the magnetic field to energetic particles due to changes in intermittency of the magnetic field.
|
arxiv:2504.10697
|
cities can be seen as the epitome of complex systems. they arise from a set of interactions and components so diverse that it is almost impossible to describe them exhaustively. amid this diversity, we chose an object which orchestrates the development and use of an urban area : the road network. following the established work on space syntax, we represent road networks as graphs. from this symbolic representation we can build a geographical object called the way. the way is defined by local rules independently from the direction in which the network is read. this complex object, and several indicators based upon it, allow us to carry out deep analysis of spatial networks, independent from their borders. with this methodology, we demonstrate how different road graphs, from various places in the world, show similar properties. we show how such analysis, based on the topological and topographical properties of their road networks, allows us to trace back some aspects of the historical and geographical contexts of city formation. we define a model of temporal differentiation, where the structural changes through time in the network are highlighted. this methodology is designed to be generic so that it can be used with several kinds of spatial networks, broadening the range of research applications and future work. key words : road network, graph theory, spatial analysis, complex system, urban modelling.
|
arxiv:1512.01268
|
compound random measures ( corm ' s ) are a flexible and tractable framework for vectors of completely random measures. in this paper, we provide conditions to guarantee the existence of a corm. furthermore, we prove some interesting properties of corm ' s when exponential scores and regularly varying l \ ' evy intensities are considered.
|
arxiv:1707.06768
|
modelling the translation errors by suitable mathematical operators in the crystal basis model of the genetic code and requiring that codons prone to be misread encode the same amino - acid, the main features of the organisation in multiplets of the genetic code are described.
|
arxiv:math-ph/0111006
|
this paper presents a novel feature of the kernel - based system identification method. we prove that the regularized kernel - based approach for the estimation of a finite impulse response is equivalent to a robust least - squares problem with a particular uncertainty set defined in terms of the kernel matrix, and thus, it is called kernel - based uncertainty set. we provide a theoretical foundation for the robustness of the kernel - based approach to input disturbances. based on robust and regularized least - squares methods, different formulations of system identification are considered, where the kernel - based uncertainty set is employed in some of them. we apply these methods to a case where the input measurements are subject to disturbances. subsequently, we perform extensive numerical experiments and compare the results to examine the impact of utilizing kernel - based uncertainty sets in the identification procedure. the numerical experiments confirm that the robust least square identification approach with the kernel - based uncertainty set improves the robustness of the estimation to the input disturbances.
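The regularized estimator at the heart of this comparison is standard and short to write down. The sketch below uses the common TC (tuned/correlated) kernel and the textbook regularized least-squares formula; the paper's kernel-based uncertainty set and robust reformulation are not reproduced here.

```python
import numpy as np

def tc_kernel(n, c=1.0, lam=0.9):
    """Tuned/Correlated (TC) kernel: K[i, j] = c * lam**max(i, j)."""
    idx = np.arange(n)
    return c * lam ** np.maximum.outer(idx, idx)

def fir_estimate(u, y, n=30, sigma2=0.1):
    """Regularized FIR estimate g = (Phi'Phi + sigma2 * K^-1)^-1 Phi'y."""
    N = len(y)
    # Toeplitz-style regressor of past inputs (zero initial conditions).
    Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)]
                    for t in range(N)])
    K = tc_kernel(n)
    return np.linalg.solve(Phi.T @ Phi + sigma2 * np.linalg.inv(K), Phi.T @ y)

rng = np.random.default_rng(0)
u = rng.normal(size=200)
g_true = 0.8 ** np.arange(30)                      # a stable decaying FIR
y = np.convolve(u, g_true)[:200] + 0.1 * rng.normal(size=200)
g_hat = fir_estimate(u, y)
```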
|
arxiv:2105.12516
|
an algebra $ a $ with multiplication $ a \ times a \ to a, ( a, b ) \ mapsto a \ circ b $, is called right - symmetric, if $ a \ circ ( b \ circ c ) - ( a \ circ b ) \ circ c = a \ circ ( c \ circ b ) - ( a \ circ c ) \ circ b, $ for any $ a, b, c \ in a $. the multiplication of right - symmetric witt algebras $ w _ n = \ { u \ der _ i : u \ in u \ }, $ where $ u = { \ cal k } [ x _ 1 ^ { \ pm 1 },..., x _ n ^ { \ pm 1 } ] $ or $ u = { \ cal k } [ x _ 1,..., x _ n ], i = 1,..., n, p = 0, $ or $ w _ n ( { \ bf m } ) = \ { u \ der _ i : u \ in u, u = o _ n ( { \ bf m } ) \ } $, is given by $ u \ der _ i \ circ v \ der _ j = v \ der _ j ( u ) \ der _ i. $ an analogue of the amitsur - levitzki theorem for right - symmetric witt algebras is established. right - symmetric witt algebras satisfy the standard right - symmetric identity of degree $ 2n + 1 : $ $ \ sum _ { \ sigma \ in sym _ { 2n } } sign ( \ sigma ) a _ { \ sigma ( 1 ) } \ circ ( a _ { \ sigma ( 2 ) } \ circ... ( a _ { \ sigma ( 2n ) } \ circ a _ { 2n + 1 } )... ) = 0. $ the minimal degree of left polynomial identities of $ w _ n ^ { rsym } $ and $ w _ n ^ { + rsym }, p = 0, $ is $ 2n + 1, $ and the minimal degree of a multilinear left polynomial identity of $ w _ n ( { \ bf m } ) ^ { rsym } $ is also $ 2n + 1. $ all left polynomial ( also multilinear, if $ p > 0 $ ) identities of right - symmetric witt algebras of minimal degree are linear combinations of left polynomials obtained from standard ones by permutations of arguments.
|
arxiv:math/9809082
|
minimum model calculations on the co - action of hole vanishing lifshitz transitions and correlation effects in ferropnictides are presented. the calculations predict non - fermi - liquid behaviour and huge mass enhancements of the charge carriers at the fermi level. the findings are compared with recent arpes experiments and with measurements of transport and thermal properties of ferropnictides. the results from the calculation can be also applied to other unconventional superconductors and question the traditional view of quantum critical points.
|
arxiv:1601.06516
|
in recent years, large language models ( llms ) have achieved remarkable performances in various nlp tasks. they can generate texts that are indistinguishable from those written by humans. such remarkable performance of llms increases their risk of being used for malicious purposes, such as generating fake news articles. therefore, it is necessary to develop methods for distinguishing texts written by llms from those written by humans. watermarking is one of the most powerful methods for achieving this. although existing watermarking methods have successfully detected texts generated by llms, they significantly degrade the quality of the generated texts. in this study, we propose the necessary and sufficient watermark ( ns - watermark ) for inserting watermarks into generated texts without degrading the text quality. more specifically, we derive the minimum constraints that must be imposed on the generated texts to distinguish whether llms or humans write the texts. then, we formulate the ns - watermark as a constrained optimization problem and propose an efficient algorithm to solve it. through the experiments, we demonstrate that the ns - watermark can generate more natural texts than existing watermarking methods and distinguish more accurately between texts written by llms and those written by humans. especially in machine translation tasks, the ns - watermark can outperform the existing watermarking method by up to 30 bleu points.
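As background, watermark detection is typically a hypothesis test on token statistics. The sketch below shows a generic "green list" detector in the style of Kirchenbauer et al. — explicitly not the NS-Watermark, whose constrained-optimization formulation the abstract only outlines:

```python
import math

def greenlist_zscore(token_ids, is_green, gamma=0.5):
    """z-score test for a generic 'green list' watermark.

    Under no watermark, each token lands in the green list with probability
    gamma, so an excess of green tokens signals watermarking.  This
    illustrates watermark *detection* in general, not the NS-Watermark.
    """
    T = len(token_ids)
    greens = sum(1 for t in token_ids if is_green(t))
    return (greens - gamma * T) / math.sqrt(T * gamma * (1 - gamma))

# Example: flag text as watermarked when z exceeds a threshold, say 4.
z = greenlist_zscore(list(range(100)), is_green=lambda t: t % 2 == 0)
```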
|
arxiv:2310.00833
|
the qcd phase diagram in the $ t - \ mu $ plane and the equation of state for pure gluon, 2 - flavor, 2 + 1 - flavor, and 2 + 1 + 1 - flavor systems have been investigated using the einstein - maxwell - dilaton ( emd ) framework at finite temperature and chemical potential. by inputting lattice qcd data for the equation of state and baryon number susceptibility at zero chemical potential into the holographic model, all the parameters can be determined with the aid of machine learning algorithms. our findings indicate that the deconfinement phase transition is of first order for the pure gluon system with critical temperature $ t _ c = 0. 265 $ gev at vanishing chemical potential. the phase transitions for the 2 - flavor, 2 + 1 - flavor, and 2 + 1 + 1 - flavor systems are crossover at vanishing chemical potential and first - order at high chemical potential, and the critical endpoint ( cep ) in the $ t - \ mu $ plane is located at ( $ \ mu _ b ^ c $ = 0. 46 gev, $ t ^ c $ = 0. 147 gev ), ( $ \ mu _ b ^ c $ = 0. 74 gev, $ t ^ c $ = 0. 094 gev ), and ( $ \ mu _ b ^ c $ = 0. 87 gev, $ t ^ c $ = 0. 108 gev ), respectively. additionally, the thermodynamic quantities of the system for different flavors at finite chemical potential are presented in this paper. it is observed that the difference between the 2 + 1 - flavor and 2 + 1 + 1 - flavor systems is invisible at vanishing chemical potential and low temperature. the location of the cep for the 2 + 1 + 1 - flavor system deviates markedly from that of the 2 + 1 - flavor system with the increase of chemical potential. both 2 + 1 - flavor and 2 + 1 + 1 - flavor systems differ significantly from the 2 - flavor system. moreover, at zero temperature, the critical chemical potential is found to be $ \ mu _ b $ = 1. 1 gev, 1. 6 gev, 1. 9 gev for the 2 - flavor, 2 + 1 - flavor and 2 + 1 + 1 - flavor systems, respectively.
|
arxiv:2405.06179
|
we show the existence of neutralizations of various completions of the quantic weyl algebra specialized at a primitive root of unity of prime order p.
|
arxiv:1204.3602
|
the interplay between vortex guiding and the hall effect in superconducting nb films with periodically arranged nanogrooves is studied via four - probe measurements in standard and hall configurations and accompanying theoretical modeling. the nanogrooves are milled by focused ion beam and induce a symmetric pinning potential of the washboard type. the resistivity tensor of the films is determined in the limit of small current densities at temperatures close to the critical temperature for the fundamental matching configuration of the vortex lattice with respect to the pinning nanolandscape. the angle between the current direction with respect to the grooves is set at seven fixed values between $ 0 ^ \ circ $ and $ 90 ^ \ circ $. a sign change is observed in the temperature dependence of the hall resistivity $ \ rho _ \ perp ^ - $ of as - grown films in a narrow temperature range near $ t _ c $. by contrast, for all nanopatterned films $ \ rho _ \ perp ^ - $ is nonzero in a broader temperature range below $ t _ c $, allowing us to discriminate between two contributions in $ \ rho _ \ perp ^ - $, namely one contribution originating from the guided vortex motion and the other one caused by the hall anomaly just as in as - grown nb films. all four measured resistivity components are successfully fitted to analytical expressions derived within the framework of a stochastic model of competing isotropic and anisotropic pinning. this provides evidence of the model validity for the description of the resistive response of superconductor thin films with washboard pinning nanolandscapes.
|
arxiv:1604.01161
|
distributive skew lattices satisfying $ x \ wedge ( y \ vee z ) \ wedge x = ( x \ wedge y \ wedge x ) \ vee ( x \ wedge z \ wedge x ) $ and its dual are studied, along with the larger class of linearly distributive skew lattices, whose totally preordered subalgebras are distributive. linear distributivity is characterized in terms of the behavior of the natural partial order between comparable $ \ dd $ - classes. this leads to a second characterization in terms of strictly categorical skew lattices. criteria are given for both types of skew lattices to be distributive.
|
arxiv:1306.5598
|
we study the problem of distributed task allocation inspired by the behavior of social insects, which perform task allocation in a setting of limited capabilities and noisy environment feedback. we assume that each task has a demand that should be satisfied but not exceeded, i. e., there is an optimal number of ants that should be working on this task at a given time. the goal is to assign a near - optimal number of workers to each task in a distributed manner and without explicit access to the values of the demands or the number of ants working on the task. we seek to answer the question of how the quality of task allocation depends on the accuracy of assessing whether too many ( overload ) or not enough ( lack ) ants are currently working on a given task. concretely, we address the open question of solving task allocation in the model where each ant receives feedback that depends on the deficit defined as the ( possibly negative ) difference between the optimal demand and the current number of workers in the task. the feedback is modeled as a random variable that takes value lack or overload with probability given by a sigmoid of the deficit. each ant receives the feedback independently, but the higher the overload or lack of workers for a task, the more likely it is that all the ants will receive the same, correct feedback from this task ; the closer the deficit is to zero, the less reliable the feedback becomes. we measure the performance of task allocation algorithms using the notion of regret, defined as the absolute value of the deficit summed over all tasks and summed over time. we propose a simple, constant - memory, self - stabilizing, distributed algorithm that quickly converges from any initial distribution to a near - optimal assignment. we also show that our algorithm works not only under stochastic noise but also in an adversarial noise setting.
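The feedback model is fully specified by the abstract and is straightforward to simulate. In the sketch below the sigmoid feedback follows the paper's description, while the reallocation rule (leave a task on 'overload', rejoin at random) is a deliberately naive placeholder, not the proposed self-stabilizing algorithm:

```python
import random, math

def feedback(deficit, beta=1.0):
    """Return 'lack' with probability sigmoid(beta * deficit), else 'overload'."""
    p_lack = 1.0 / (1.0 + math.exp(-beta * deficit))
    return 'lack' if random.random() < p_lack else 'overload'

def simulate(demands, n_ants, steps=1000, beta=1.0):
    """Accumulate regret = sum over time of |demand - workers| over tasks."""
    k = len(demands)
    assignment = [random.randrange(k) for _ in range(n_ants)]
    regret = 0
    for _ in range(steps):
        counts = [assignment.count(j) for j in range(k)]
        regret += sum(abs(d - c) for d, c in zip(demands, counts))
        for i in range(n_ants):          # each ant gets independent feedback
            j = assignment[i]
            if feedback(demands[j] - counts[j], beta) == 'overload':
                assignment[i] = random.randrange(k)   # naive: try another task
    return regret

print(simulate(demands=[30, 20, 10], n_ants=60))
```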
|
arxiv:1805.03691
|
relative auslander algebras were introduced and studied by beligiannis. in this paper, we apply intermediate extension functors associated to certain recollements of functor categories to study them. in particular, we study the existence of tilting - cotilting modules over such algebras. as a consequence, it will be shown that two gorenstein algebras of g - dimension 1 that are of finite cohen - macaulay type are morita equivalent if and only if their cohen - macaulay auslander algebras are morita equivalent.
|
arxiv:1711.07043
|
this paper summarizes our work on experimentally characterizing, mitigating, and recovering data retention errors in multi - level cell ( mlc ) nand flash memory, which was published in hpca 2015, and examines the work ' s significance and future potential. retention errors, caused by charge leakage over time, are the dominant source of flash memory errors. understanding, characterizing, and reducing retention errors can significantly improve nand flash memory reliability and endurance. in this work, we first characterize, with real 2y - nm mlc nand flash chips, how the threshold voltage distribution of flash memory changes with different retention ages - - the length of time since a flash cell was programmed. we observe from our characterization results that 1 ) the optimal read reference voltage of a flash cell, using which the data can be read with the lowest raw bit error rate ( rber ), systematically changes with its retention age, and 2 ) different regions of flash memory can have different retention ages, and hence different optimal read reference voltages. based on our findings, we propose two new techniques. first, retention optimized reading ( ror ) adaptively learns and applies the optimal read reference voltage for each flash memory block online. the key idea of ror is to periodically learn a tight upper bound of the optimal read reference voltage, and from there approach the optimal read reference voltage. our evaluations show that ror can extend flash memory lifetime by 64 % and reduce average error correction latency by 10. 1 %. second, retention failure recovery ( rfr ) recovers data with uncorrectable errors offline by identifying and probabilistically correcting flash cells with retention errors. our evaluation shows that rfr essentially doubles the error correction capability.
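The flavor of ROR's voltage learning can be conveyed with a toy search that approaches the optimum from a safe upper bound. The step size, stopping rule, and the synthetic V-shaped RBER model are all our assumptions, not the calibrated procedure from the paper:

```python
def learn_read_voltage(rber_at, v_high, v_step=0.01, patience=3):
    """Approach the optimal read reference voltage from above.

    A toy rendition of the idea behind Retention Optimized Reading: start
    from a known-safe upper bound and step the read voltage down while the
    raw bit error rate (RBER) keeps improving.  `rber_at` is a callable
    modelling the roughly V-shaped RBER as a function of read voltage.
    """
    v = v_high
    best_rber, best_v, worse = rber_at(v_high), v_high, 0
    while worse < patience:
        v -= v_step
        r = rber_at(v)
        if r < best_rber:
            best_rber, best_v, worse = r, v, 0
        else:
            worse += 1
    return best_v

v_opt = learn_read_voltage(lambda v: (v - 2.3) ** 2, v_high=3.0)  # toy RBER
```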
|
arxiv:1805.02819
|
analysis of the dynamics of the cavity radiation of a coherently pumped correlated emission laser is presented. the phase fluctuation and dephasing are found to affect the time evolution of the two - mode squeezing and intensity of the cavity radiation significantly. the intensity and degree of the two - mode squeezing increase at early stages of the process with time, but this trend changes rapidly afterwards. it is also shown that they increase with phase fluctuation and dephasing in the strong driving limit, however the situation appears to be opposite in the weak driving limit. this essentially suggests that the phase fluctuation and dephasing weaken the coherence induced by a strong driving mechanism so that the spontaneous emission gets a chance. the other important aspect of the phase fluctuation, in this regard, is the relaxation of the time at which the maximum squeezing is manifested as well as the time in which the radiation remains in a squeezed state.
|
arxiv:1011.3673
|
in this paper we give sufficient conditions for random splitting systems to have a positive top lyapunov exponent. we verify these conditions for random splittings of two fluid models : the conservative lorenz - 96 equations and galerkin approximations of the 2d euler equations on the torus. in doing so, we highlight particular structures in these equations such as shearing. since a positive top lyapunov exponent is an indicator of chaos which in turn is a feature of turbulence, our results show these randomly split fluid models have important characteristics of turbulent flow.
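A positive top Lyapunov exponent is usually estimated numerically by the classic Benettin-style norm-growth method, sketched here for a generic deterministic ODE (the randomly split systems of the paper would additionally resample the splitting at random times):

```python
import numpy as np
from scipy.integrate import solve_ivp

def top_lyapunov(f, jac, x0, dt=0.02, steps=5000):
    """Estimate the top Lyapunov exponent by tangent-vector norm growth.

    Integrates the state x together with a tangent vector v (driven by the
    Jacobian along the trajectory), renormalizing v each step and
    accumulating the log growth factors.
    """
    n = len(x0)
    v = np.random.default_rng(0).normal(size=n)
    v /= np.linalg.norm(v)
    x, total = np.array(x0, float), 0.0
    for _ in range(steps):
        def rhs(t, z):
            xx, vv = z[:n], z[n:]
            return np.concatenate([f(xx), jac(xx) @ vv])
        z = solve_ivp(rhs, (0, dt), np.concatenate([x, v]),
                      rtol=1e-8, atol=1e-8).y[:, -1]
        x, v = z[:n], z[n:]
        growth = np.linalg.norm(v)
        total += np.log(growth)
        v /= growth
    return total / (steps * dt)

# Example: classic Lorenz-63 system (top exponent is roughly 0.9).
f = lambda x: np.array([10.0 * (x[1] - x[0]),
                        x[0] * (28.0 - x[2]) - x[1],
                        x[0] * x[1] - (8.0 / 3.0) * x[2]])
jac = lambda x: np.array([[-10.0, 10.0, 0.0],
                          [28.0 - x[2], -1.0, -x[0]],
                          [x[1], x[0], -8.0 / 3.0]])
print(top_lyapunov(f, jac, x0=[1.0, 1.0, 1.0]))
```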
|
arxiv:2210.02958
|
inflation is the currently accepted paradigm for the beginnings of the universe. to explain the observed almost scale invariant spectrum of density perturbations with only a slight spectral tilt, inflation must have been " slow roll ", that is with a potential with sufficiently small slope. while the origin of inflationary structure is intrinsically quantum mechanical, gravity gets treated semiclassically within inflationary models. recent work, in terms of the so - called de - sitter swampland conjecture, has called into question whether slow roll inflation is consistent with a complete theory of quantum gravity in the presence of a positive vacuum energy density, which is a key ingredient in the inflationary paradigm. in this work, we show that, in fact, if we understand this conjecture correctly and bring in another swampland conjecture, the so - called distance conjecture, we get a potential mechanism for slow roll inflation, and we argue that fine - tuning is not a technical problem here.
|
arxiv:1910.14047
|
we study the computational complexity of fundamental problems over the $ p $ - adic numbers $ { \ mathbb q } _ p $ and the $ p $ - adic integers $ { \ mathbb z } _ p $. gu \ ' epin, haase, and worrell proved that checking satisfiability of systems of linear equations combined with valuation constraints of the form $ v _ p ( x ) = c $ for $ p \ geq 5 $ is np - complete ( both over $ { \ mathbb z } _ p $ and over $ { \ mathbb q } _ p $ ), and left the cases $ p = 2 $ and $ p = 3 $ open. we solve their problem by showing that the problem is np - complete for $ { \ mathbb z } _ 3 $ and for $ { \ mathbb q } _ 3 $, but that it is in p for $ { \ mathbb z } _ 2 $ and for $ { \ mathbb q } _ 2 $. we also present different polynomial - time algorithms for solvability of systems of linear equations in $ { \ mathbb q } _ p $ with either constraints of the form $ v _ p ( x ) \ leq c $ or of the form $ v _ p ( x ) \ geq c $ for $ c \ in { \ mathbb z } $. finally, we show how our algorithms can be used to decide in polynomial time the satisfiability of systems of ( strict and non - strict ) linear inequalities over $ { \ mathbb q } $ together with valuation constraints $ v _ p ( x ) \ geq c $ for several different prime numbers $ p $ simultaneously.
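As a reminder, $v_p(x)$ is the exponent of $p$ in the factorization of $x$, e.g. $v_2(12) = 2$ since $12 = 2^2 \cdot 3$. A tiny helper makes the constraint types concrete (illustration only; the complexity results concern symbolic linear systems, not brute-force checks):

```python
def vp(x: int, p: int) -> int:
    """p-adic valuation of a nonzero integer: the exponent of p in x."""
    assert x != 0
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

assert vp(12, 2) == 2          # 12 = 2^2 * 3
assert vp(12, 3) == 1
# A constraint of the form v_p(x) >= c is a divisibility condition:
assert vp(12, 2) >= 1          # i.e. 2 divides 12
```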
|
arxiv:2504.13536
|
self - correcting quantum memories demonstrate robust properties that can be exploited to improve active quantum error - correction protocols. here we propose a cellular automaton decoder for a variation of the color code where the bases of the physical qubits are locally rotated, which we call the xyz color code. the local transformation means our decoder demonstrates key properties of a two - dimensional fractal code if the noise acting on the system is infinitely biased towards dephasing, namely, no string - like logical operators. as such, in the high - bias limit, our local decoder reproduces the behavior of a partially self - correcting memory. at low error rates, our simulations show that the memory time diverges polynomially with system size without intervention from a global decoder, up to some critical system size that grows as the error rate is lowered. furthermore, although we find that we cannot reproduce partially self - correcting behavior at finite bias, our numerics demonstrate improved memory times at realistic noise biases. our results therefore motivate the design of tailored cellular automaton decoders that help to reduce the bandwidth demands of global decoding for realistic noise models.
|
arxiv:2203.16534
|
in this paper, we address the problem of tensor data completion. tensor - train decomposition is adopted because of its powerful representation ability and linear scalability to tensor order. we propose an algorithm named sparse tensor - train optimization ( stto ) which considers incomplete data as a sparse tensor and uses a first - order optimization method to find the factors of tensor - train decomposition. our algorithm is shown to perform well in simulation experiments in both low - order and high - order cases. we also employ a tensorization method to transform data to a higher - order form to enhance the performance of our algorithm. the results of image recovery experiments in various cases manifest that our method outperforms other completion algorithms. especially when the missing rate is very high, e. g., 90 \ % to 99 \ %, our method is significantly better than the state - of - the - art methods.
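The first-order update at the core of such a method can be written out explicitly: each observed entry contributes a rank-one gradient to every TT core via left/right partial products. This is our generic sketch of TT completion by SGD, not the authors' exact STTO updates:

```python
import numpy as np

def tt_entry(cores, idx):
    """Evaluate one entry from TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    v = cores[0][:, idx[0], :]                 # (1, r_1)
    for G, i in zip(cores[1:], idx[1:]):
        v = v @ G[:, i, :]
    return v[0, 0]

def tt_completion_sgd(shape, ranks, entries, lr=0.02, epochs=200, seed=0):
    """SGD on TT cores fitting observed entries.

    entries: list of (index_tuple, value) pairs of observed data.
    ranks  : TT ranks [1, r_1, ..., r_{d-1}, 1].
    """
    rng = np.random.default_rng(seed)
    cores = [rng.normal(scale=0.5, size=(ranks[k], shape[k], ranks[k + 1]))
             for k in range(len(shape))]
    for _ in range(epochs):
        for idx, val in entries:
            err = tt_entry(cores, idx) - val
            # Left partial products lefts[k] = G_1[i_1] ... G_{k-1}[i_{k-1}].
            lefts = [np.ones((1, 1))]
            for G, i in zip(cores[:-1], idx[:-1]):
                lefts.append(lefts[-1] @ G[:, i, :])
            right = np.ones((1, 1))            # product G_{k+1} ... G_d
            for k in range(len(cores) - 1, -1, -1):
                # d(entry)/dG_k[i_k] is the outer product lefts[k]^T right^T.
                grad = err * lefts[k].T @ right.T
                new_right = cores[k][:, idx[k], :] @ right
                cores[k][:, idx[k], :] -= lr * grad
                right = new_right
    return cores

# Tiny usage: fit a 5x5x5 tensor with TT ranks (1, 2, 2, 1).
rng = np.random.default_rng(1)
obs = [((rng.integers(5), rng.integers(5), rng.integers(5)), rng.normal())
       for _ in range(60)]
cores = tt_completion_sgd((5, 5, 5), [1, 2, 2, 1], obs)
```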
|
arxiv:1711.02271
|
we propose to train neural networks ( nns ) using a novel variant of the " additively preconditioned trust - region strategy " ( apts ). the proposed method is based on a parallelizable additive domain decomposition approach applied to the neural network ' s parameters. built upon the tr framework, the apts method ensures global convergence towards a minimizer. moreover, it eliminates the need for computationally expensive hyper - parameter tuning, as the tr algorithm automatically determines the step size in each iteration. we demonstrate the capabilities, strengths, and limitations of the proposed apts training method by performing a series of numerical experiments. the presented numerical study includes a comparison with widely used training methods such as sgd, adam, lbfgs, and the standard tr method.
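The automatic step-size control that a trust-region method provides comes from its acceptance test. Below is the canonical TR loop with a linear model — a generic illustration, not APTS's additively preconditioned, domain-decomposed variant:

```python
import numpy as np

def tr_step(loss, grad, w, radius, eta=0.1):
    """One generic trust-region iteration with a linear model of the loss.

    Proposes a step of length `radius` along the negative gradient, accepts
    it only if the actual loss reduction is a sufficient fraction of the
    predicted one, and grows or shrinks the radius accordingly.
    """
    g = grad(w)
    step = -radius * g / (np.linalg.norm(g) + 1e-12)
    predicted = -g @ step                      # reduction promised by the model
    actual = loss(w) - loss(w + step)
    rho = actual / (predicted + 1e-12)
    if rho > eta:                              # accept and possibly expand
        return w + step, min(2.0 * radius, 1.0)
    return w, 0.5 * radius                     # reject and contract

w, radius = np.array([3.0, -2.0]), 0.5
for _ in range(100):
    w, radius = tr_step(lambda x: (x ** 2).sum(), lambda x: 2 * x, w, radius)
```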
|
arxiv:2312.13677
|
remnant radio galaxies represent the dying phase of radio - loud active galactic nuclei ( agn ). large samples of remnant radio galaxies are important for quantifying the radio galaxy life cycle. the remnants of radio - loud agn can be identified in radio sky surveys based on their spectral index, or, complementary, through visual inspection based on their radio morphology. however, this is extremely time - consuming when applied to the new large and sensitive radio surveys. here we aim to reduce the amount of visual inspection required to find agn remnants based on their morphology, through supervised machine learning trained on an existing sample of remnant candidates. for a dataset of 4107 radio sources, with angular sizes larger than 60 arcsec, from the low frequency array ( lofar ) two - metre sky survey second data release ( lotss - dr2 ), we started with 151 radio sources that were visually classified as ' agn remnant candidate '. we derived a wide range of morphological features for all radio sources from their corresponding stokes - i images : from simple source catalogue - derived properties, to clustered haralick - features, and self - organising map ( som ) derived morphological features. we trained a random forest classifier to separate the ' agn remnant candidates ' from the not yet inspected sources. the som - derived features and the total to peak flux ratio of a source are shown to be most salient to the classifier. we estimate that $ 31 \ pm5 \ % $ of sources with positive predictions from our classifier will be labelled ' agn remnant candidates ' upon visual inspection, while we estimate the upper bound of the $ 95 \ % $ confidence interval for ' agn remnant candidates ' in the negative predictions at $ 8 \ % $. visual inspection of just the positive predictions reduces the number of radio sources requiring visual inspection by $ 73 \ % $.
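The supervised stage itself is standard; a minimal scikit-learn rendition follows. The feature matrix here is a random placeholder (the paper's Haralick and SOM-derived features, and its survey-specific thresholds, are not reproduced):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_score

# X: morphological features per radio source; y: 1 = 'AGN remnant candidate'.
X = np.random.rand(4107, 20)                       # placeholder feature matrix
y = (np.random.rand(4107) < 151 / 4107).astype(int)

clf = RandomForestClassifier(n_estimators=500, class_weight='balanced',
                             random_state=0)
pred = cross_val_predict(clf, X, y, cv=5)
# Fraction of positive predictions expected to survive visual inspection
# (~31% in the paper's setting; meaningless on this random placeholder data).
print(precision_score(y, pred, zero_division=0))
```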
|
arxiv:2304.05813
|
we observe, with angle - resolved photoemission, a dramatic change in the electronic structure of two c60 monolayers, deposited respectively on ag ( 111 ) and ( 100 ) substrates, and similarly doped with potassium to half - filling of the c60 lowest unoccupied molecular orbital. the fermi surface symmetry, the bandwidth, and the curvature of the dispersion at gamma point are different. orientations of the c60 molecules on the two substrates are known to be the main structural difference between the two monolayers, and we present new band - structure calculations for some of these orientations. we conclude that orientations play a key role in the electronic structure of fullerides.
|
arxiv:cond-mat/0410196
|
this paper presents a simple model that mimics quantum mechanics ( qm ) results in terms of probability fields of free particles subject to self - interference, without using the schroedinger equation or complex wavefunctions. unlike the standard qm picture, the proposed model only uses integer - valued quantities and arithmetic operations. in particular, it assumes a discrete spacetime under the form of an euclidean lattice. the proposed approach describes individual particle trajectories as random walks. transition probabilities are simple functions of a few quantities that are either randomly associated to the particles during their preparation, or stored in the lattice sites they visit during the walk. non - relativistic qm predictions, particularly self - interference, are retrieved as probability distributions of similarly - prepared ensembles of particles. extension to interacting particles is discussed but not detailed in this paper.
|
arxiv:1506.00817
|
to mitigate the severe inter - tier interference and enhance limited cooperative gains resulting from the constrained and non - ideal transmissions between adjacent base stations in heterogeneous networks ( hetnets ), heterogeneous cloud radio access networks ( h - crans ) are proposed as cost - efficient potential solutions through incorporating the cloud computing into hetnets. in this article, state - of - the - art research achievements and challenges on h - crans are surveyed. in particular, we discuss issues of system architectures, spectral and energy efficiency performances, and promising key techniques. a great emphasis is given towards promising key techniques in h - crans to improve both spectral and energy efficiencies, including cloud computing based coordinated multi - point transmission and reception, large - scale cooperative multiple antenna, cloud computing based cooperative radio resource management, and cloud computing based self - organizing network in the cloud converging scenarios. the major challenges and open issues in terms of theoretical performance with stochastic geometry, fronthaul constrained resource allocation, and standard development that may block the promotion of h - crans are discussed as well.
|
arxiv:1410.3028
|
this paper describes a building blocks approach to the design of scientific workflow systems. we discuss radical - cybertools as one implementation of the building blocks concept, showing how they are designed and developed in accordance with this approach. this paper offers three main contributions : ( i ) showing the relevance of the design principles underlying the building blocks approach to support scientific workflows on high performance computing platforms ; ( ii ) illustrating a set of building blocks that enable multiple points of integration, " unifying " conceptual reasoning across otherwise very different tools and systems ; and ( iii ) case studies discussing how radical - cybertools are integrated with existing workflow, workload, and general purpose computing systems and used to develop domain - specific workflow systems.
|
arxiv:1903.10057
|
we discuss a formalism for solving ( 2 + 1 ) ads gravity on riemann surfaces. in the torus case the equations of motion are solved by two functions f and g, solutions of two independent o ( 2, 1 ) sigma models, which are distinct because their first integrals contain a different time dependent phase factor. we then show that with the gauge choice $ k = \ sqrt { \ lambda } / tg ( 2 \ sqrt { \ lambda } t ) $ the same couple of first integrals indeed solves exactly the einstein equations for every riemann surface. the $ x ^ a = x ^ a ( x ^ \ mu ) $ polydromic mapping which extends the standard immersion of a constant curvature three - dimensional surface in a flat four - dimensional space to the case of external point sources or topology, is calculable with a simple algebraic formula in terms only of the two sigma model solutions f and g. a trivial time translation of this formalism allows us to introduce a new method which is suitable to study the scattering of black holes in ( 2 + 1 ) ads gravity.
|
arxiv:hep-th/9907174
|
we propose a scheme for measuring the squeezing, purity, and entanglement of gaussian states of light that does not require homodyne detection. the suggested setup only needs beam splitters and single - photon detectors. two - mode entanglement can be detected from coincidences between photodetectors placed on the two beams.
|
arxiv:quant-ph/0311119
|