text | source
---|---
Remarkable progress has been achieved in synthesizing photo-realistic images with generative adversarial networks (GANs). Recently, GANs have been utilized as training sample generators when obtaining or storing real training data is expensive or even infeasible. However, images generated by traditional GANs are not as informative as real training samples when used to train deep neural networks. In this paper, we propose a novel method to synthesize informative training samples with GANs (IT-GAN). Specifically, we freeze a pre-trained GAN model and learn the informative latent vectors that correspond to informative training samples. The synthesized images are required to preserve information for training deep neural networks rather than visual reality or fidelity. Experiments verify that deep neural networks can learn faster and achieve better performance when trained with our IT-GAN generated images. We also show that our method is a promising solution to the dataset condensation problem.
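The core mechanic the abstract describes (freeze the generator, optimize only the latent vectors toward a training-oriented objective) can be sketched in a toy form. Here the "generator" is a fixed random linear map and the objective is squared error against a stand-in target sample; the real IT-GAN objective is an information-preservation loss for network training, which is not reproduced here.

```python
import random

# Toy sketch: gradient descent on the latent vector z of a FROZEN generator.
# G(z) = W @ z with W fixed; only z is updated.
random.seed(0)
DIM_Z, DIM_X = 4, 6

# Frozen generator weights (never updated below)
W = [[random.uniform(-1, 1) for _ in range(DIM_Z)] for _ in range(DIM_X)]

def generate(z):
    return [sum(W[i][j] * z[j] for j in range(DIM_Z)) for i in range(DIM_X)]

def loss(z, target):
    x = generate(z)
    return sum((xi - ti) ** 2 for xi, ti in zip(x, target))

def grad(z, target):
    # Analytic gradient of the squared error w.r.t. z: 2 * W^T (G(z) - x*)
    x = generate(z)
    r = [xi - ti for xi, ti in zip(x, target)]
    return [2 * sum(W[i][j] * r[i] for i in range(DIM_X)) for j in range(DIM_Z)]

target = [1.0] * DIM_X                  # stand-in for an "informative" sample
z = [random.uniform(-1, 1) for _ in range(DIM_Z)]

initial = loss(z, target)
for _ in range(200):                    # descend on z only; W stays frozen
    g = grad(z, target)
    z = [zj - 0.01 * gj for zj, gj in zip(z, g)]
final = loss(z, target)
```

The point of the sketch is the parameter split: the optimizer state is the latent vector, not the generator weights, mirroring how IT-GAN searches the latent space of a pre-trained model.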
|
arxiv:2204.07513
|
We report on the theory of a Luneburg lens for forward-volume magnetostatic spin waves, and verify its operation via micromagnetic modelling. The lens converts a plane wave to a point source (and vice versa) by a designed graded index, realised here by modulating either the thickness or the saturation magnetization in a circular region. We find that the lens enhances the wave amplitude by 5 times at the lens focus, and 47% of the incident energy arrives in the focus region. Furthermore, small deviations in the profile can still result in good focusing, if the lens index is graded smoothly.
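The classical Luneburg graded-index profile, n(r) = sqrt(2 - (r/R)^2) inside the lens radius R and n = 1 outside, is the design target such a lens approximates (here via thickness or saturation-magnetization modulation). A minimal sketch of that profile:

```python
import math

def luneburg_index(r, R=1.0):
    """Refractive index of an ideal Luneburg lens at radius r from its center."""
    if r > R:
        return 1.0  # outside the lens the medium is unmodified
    return math.sqrt(2.0 - (r / R) ** 2)

# Index falls smoothly from sqrt(2) at the center to 1 at the rim,
# so a plane wave is bent to a focus on the opposite edge.
profile = [luneburg_index(0.1 * i) for i in range(12)]
```

The smooth roll-off to n = 1 at the rim is what makes the lens reflectionless, and is consistent with the abstract's observation that a smoothly graded index tolerates small profile deviations.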
|
arxiv:1807.06705
|
We present an analysis of broad emission lines observed in moderate-luminosity active galactic nuclei (AGNs), typical of those found in X-ray surveys of deep fields, with the aim to test the validity of single-epoch virial black hole mass estimates. We have acquired near-infrared (NIR) spectra of AGNs up to z ~ 1.8 in the COSMOS and Extended Chandra Deep Field-South Survey, with the Fiber Multi-Object Spectrograph (FMOS) mounted on the Subaru Telescope. These low-resolution NIR spectra provide a significant detection of the broad Halpha line, which has been shown to be a reliable probe of black hole mass at low redshift. Our sample has existing optical spectroscopy which provides a detection of MgII, a broad emission line typically used for black hole mass estimation at z > 1. We carry out a spectral-line fitting procedure using both Halpha and MgII to determine the virial velocity of gas in the broad line region, the monochromatic continuum luminosity at 3000 A, and the total Halpha line luminosity. With a sample of 43 AGNs spanning a range of two decades in luminosity (i.e., L ~ 10^44-46 erg/s), we find a tight correlation between the continuum and line luminosity, with a distribution characterized by <log(L_3000/L_Halpha)> = 1.52 and a dispersion sigma = 0.16. There is also a close one-to-one relationship between the FWHM of Halpha and of MgII up to 10000 km/s, with a dispersion of 0.14 in the distribution of the logarithm of their ratios. Both of these then lead to very good agreement between Halpha- and MgII-based masses over a wide range in black hole mass (i.e., M_BH ~ 10^7-9 M_sun). We do find a small offset in MgII-based masses, relative to those based on Halpha, of +0.17 dex and a dispersion sigma = 0.32. In general, these results demonstrate that local scaling relations, using MgII or Halpha, are applicable for AGN at moderate luminosities and up to z ~ 2.
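The single-epoch estimate has the generic virial form M_BH ∝ L^0.5 × FWHM^2, and the abstract's mean ratio <log(L_3000/L_Halpha)> = 1.52 lets one convert a measured Halpha line luminosity into a 3000 A continuum luminosity. The sketch below uses that 1.52 dex offset from the abstract; the mass zero point (7.0) is a placeholder for illustration, not the calibration used in the paper.

```python
import math

def l3000_from_lhalpha(l_ha):
    # Mean offset <log(L_3000 / L_Halpha)> = 1.52 dex reported in the abstract
    return l_ha * 10 ** 1.52

def virial_mass(l_3000, fwhm_kms, zero_point=7.0):
    # Generic single-epoch virial form:
    # M_BH/M_sun = 10^zp * (L_3000 / 1e44 erg/s)^0.5 * (FWHM / 1000 km/s)^2
    # zero_point here is illustrative, not the paper's calibration.
    return 10 ** zero_point * (l_3000 / 1e44) ** 0.5 * (fwhm_kms / 1000.0) ** 2

l_ha = 10 ** 43.0                   # example Halpha luminosity, erg/s
l_3000 = l3000_from_lhalpha(l_ha)   # ~10^44.52 erg/s
mass = virial_mass(l_3000, fwhm_kms=4000.0)
```

With these example inputs the estimate lands in the M_BH ~ 10^7-9 M_sun range the sample spans, which is the regime where the abstract finds Halpha- and MgII-based masses to agree.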
|
arxiv:1301.2332
|
We discuss how several of the questions that remain unclear on the physics of active galactic nuclei may find elements of answers when using, in the coming years, the extraordinary set of instruments that will be flying simultaneously to observe in all bands of the electromagnetic spectrum. The choice of questions mentioned here is personal and not exhaustive.
|
arxiv:astro-ph/0011090
|
The stylization of 3D scenes is an increasingly attractive topic in 3D vision. Although image style transfer has been extensively researched with promising results, directly applying 2D style transfer methods to 3D scenes often fails to preserve the structural and multi-view properties of 3D environments, resulting in unpleasant distortions in images from different viewpoints. To address these issues, we leverage the remarkable generative prior of diffusion-based models and propose a novel style transfer method, OSDiffST, based on a pre-trained one-step diffusion model (i.e., SD-Turbo) for rendering diverse styles in multi-view images of 3D scenes. To efficiently adapt the pre-trained model for multi-view style transfer on small datasets, we introduce a vision condition module to extract style information from the reference style image, which serves as conditional input for the diffusion model, and employ LoRA in the diffusion model for adaptation. Additionally, we consider color distribution alignment and structural similarity between the stylized and content images using two specific loss functions. As a result, our method effectively preserves the structural information and multi-view consistency in stylized images without any 3D information. Experiments show that our method surpasses other promising style transfer methods in synthesizing various styles for multi-view images of 3D scenes. Stylized images from different viewpoints generated by our method achieve superior visual quality, with better structural integrity and less distortion. The source code is available at https://github.com/yushenzuo/osdiffst.
|
arxiv:2411.10130
|
The character table of a finite group $G$ determines whether $|P:P'| = p^2$ and whether $|P:Z(P)| = p^2$, where $P$ is a Sylow $p$-subgroup of $G$. To prove the latter, we give a detailed classification of those groups in terms of the generalized Fitting subgroup.
|
arxiv:2204.04407
|
Two main types of models have been suggested to explain the long durations and multiple peaks of gamma-ray bursts (GRBs). In one, there is a very quick release of energy at a central site, resulting in a single relativistic shell that produces peaks in the time history through its interactions with the ambient material. In the other, the central site sporadically releases energy over hundreds of seconds, forming a peak with each burst of energy. We present three kinematic arguments against a single relativistic shell. These arguments are based only on symmetry. We show that the average envelope of emission of GRBs is a linear function rather than the power law expected for a single relativistic shell. We show that the presence of gaps in GRBs is the strongest argument against a single relativistic shell. We estimate that the fraction of a single shell that can produce gamma-rays in a GRB with multiple peaks is about 0.001, implying that single relativistic shells require 1000 times more energy than previously thought. We conclude that either the central site of a GRB must produce $10^{51}$ erg/s for hundreds of seconds, or the relativistic shell must have structure on scales of the order of $\sqrt{\epsilon}/\gamma$, where $\gamma$ is the bulk Lorentz factor and $\epsilon$ is the efficiency.
|
arxiv:astro-ph/9712303
|
We develop a theory of universal central extensions for Hom-Lie antialgebras. It is proved that a Hom-Lie antialgebra admits a universal central extension if and only if it is perfect. Moreover, we show that the kernel of the universal central extension is equal to the second homology group with trivial coefficients.
|
arxiv:1907.12886
|
In this paper, we propose estimators for the parameters of a statistical model based on the Kullback-Leibler divergence of the survival function in the continuous setting. We prove that the proposed estimators are a subclass of "generalized estimating equations" estimators. The asymptotic properties of the estimators, such as consistency, asymptotic normality, asymptotic confidence intervals and asymptotic hypothesis testing, are investigated.
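A commonly used survival-function analogue of Kullback-Leibler divergence, CKL(F, G) = ∫ [S_F ln(S_F/S_G) - (S_F - S_G)] dx, can be checked numerically. The sketch below evaluates it for two exponential lifetimes as a worked example only; it assumes this standard cumulative form and does not reproduce the paper's estimating equations.

```python
import math

def survival_kl(lam_f, lam_g, upper=50.0, steps=100000):
    """Numerically integrate S_F*ln(S_F/S_G) - (S_F - S_G) for exponential
    survival functions S(x) = exp(-lam * x) over [0, upper]."""
    h = upper / steps
    total = 0.0
    for i in range(1, steps):
        x = i * h
        sf = math.exp(-lam_f * x)
        sg = math.exp(-lam_g * x)
        total += (sf * math.log(sf / sg) - (sf - sg)) * h
    return total

# Closed form for exponentials: (lam_g - lam_f)/lam_f^2 - (1/lam_f - 1/lam_g)
# With lam_f=1, lam_g=2 this gives 1 - 0.5 = 0.5.
d = survival_kl(1.0, 2.0)
```

As with ordinary KL divergence, the quantity is non-negative and vanishes only when the two survival functions coincide, which is what makes it usable as a minimum-divergence estimation criterion.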
|
arxiv:1606.09288
|
We present Hubble Space Telescope (HST) near-infrared photometry of the coldest known brown dwarf, WISE J085510.83$-$071442.5 (WISE 0855$-$0714). WISE 0855$-$0714 was observed with the Wide Field Camera 3 (WFC3) aboard HST using the F105W, F125W, and F160W filters, which approximate the $Y$, $J$, and $H$ near-infrared bands. WISE 0855$-$0714 is undetected at F105W with a corresponding 2$\sigma$ magnitude limit of $\sim$26.9. We marginally detect WISE 0855$-$0714 in the F125W images (S/N $\sim$ 4), with a measured magnitude of 26.41 $\pm$ 0.27, more than a magnitude fainter than the $J$-band magnitude reported by Faherty and coworkers. WISE 0855$-$0714 is clearly detected in the F160W band, with a magnitude of 23.90 $\pm$ 0.02, the first secure detection of WISE 0855$-$0714 in the near-infrared. Based on these data, we find that WISE 0855$-$0714 has extremely red F105W$-$F125W and F125W$-$F160W colors relative to other known Y dwarfs. We find that, when compared to the models of Saumon et al. and Morley et al., the F105W$-$F125W and F125W$-$F160W colors of WISE 0855$-$0714 cannot be accounted for simultaneously. These colors likely indicate that we are seeing the collapse of flux on the Wien tail for this extremely cold object.
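The quoted magnitudes translate directly into a color and a flux ratio via the standard relation that a magnitude difference m1 - m2 corresponds to a flux ratio of 10^(0.4*(m1 - m2)):

```python
# Magnitudes quoted in the abstract
F125W = 26.41   # marginal detection, +/- 0.27 mag
F160W = 23.90   # secure detection, +/- 0.02 mag

color = F125W - F160W                 # F125W - F160W = 2.51 mag
flux_ratio = 10 ** (0.4 * color)      # source is ~10x brighter in F160W
```

A 2.51 mag color means the source is roughly ten times brighter in F160W than in F125W, which is what makes this color so extreme relative to other known Y dwarfs.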
|
arxiv:1605.05618
|
We study concentrated colloidal suspensions, a model system which has a glass transition. Samples in the glassy state show aging, in that the motion of the colloidal particles slows as the sample ages from an initial state. We study the relationship between the static structure and the slowing dynamics, using confocal microscopy to follow the three-dimensional motion of the particles. The structure is quantified by considering tetrahedra formed by quadruplets of neighboring particles. We find that while the sample clearly slows down during aging, the static properties as measured by tetrahedral quantities do not vary. However, a weak correlation between tetrahedron shape and mobility is observed, suggesting that the structure facilitates the motion responsible for the sample aging.
|
arxiv:cond-mat/0512698
|
We construct a model of quintessence in string theory based on the idea of axion monodromy, as discussed by McAllister, Silverstein and Westphal (arXiv:0808.0706). In the model, the quintessence field is an axion whose shift symmetry is broken by the presence of 5-branes which are placed in highly warped throats. This gives rise to a potential for the axion field which is slowly varying, even after incorporating the effects of moduli stabilization and supersymmetry breaking. We find that the resulting time dependence in the equation of state of dark energy is potentially detectable, depending on the initial conditions. The model has many very light extra particles which live in the highly warped throats, but these are hard to detect. A signal in the rotation of the CMB polarization can also possibly arise.
|
arxiv:1011.5877
|
All telescopes and instruments are to some degree affected by scattered light. It is possible to estimate the amount of such scattered light, and even correct for it, with a radially extended point spread function (PSF). The outer parts of the PSF have only rarely been determined, since they are faint and therefore difficult to measure. A mostly complete overview of existing properties and measurements of radially extended PSFs is presented, both to show their similarities and to indicate how bright extended objects can be used to measure the faintest regions. The importance of the far wings of the PSF and their possible temporal variations are demonstrated in three edge-on galaxy models. The same study is applied to the first edge-on galaxy where earlier observations reveal a halo, NGC 5907. All PSFs were collected in two diagrams, after they were offset or normalized, when that was possible. Surface-brightness structures of edge-on galaxies were modelled and analysed to study the scattered-light haloes that result with an exponential disc. The models were convolved with both a lower-limit PSF and a more average PSF. The PSF of the observed data could be used in the case of NGC 5907. The comparison of the PSFs demonstrates a lower-limit $r^{-2}$ power-law decline at larger radii. The analysis of the galaxy models shows that the outer parts of the PSF are also important to correctly model and analyse observations and, in particular, fainter regions. The reassessed analysis of the earlier measurements of NGC 5907 reveals an explanation for the faint halo in scattered light, within the quoted level of accuracy.
|
arxiv:1406.5508
|
The complexity of a computational problem is traditionally quantified based on the hardness of its worst case. This approach has many advantages and has led to a deep and beautiful theory. However, from the practical perspective, this leaves much to be desired. In application areas, practically interesting instances very often occupy just a tiny part of an algorithm's space of instances, and the vast majority of instances are simply irrelevant. Addressing these issues is a major challenge for theoretical computer science which may make theory more relevant to the practice of computer science. Following Bilu and Linial, we apply this perspective to MaxCut, viewed as a clustering problem. Using a variety of techniques, we investigate practically interesting instances of this problem. Specifically, we show how to solve in polynomial time distinguished, metric, expanding and dense instances of MaxCut under mild stability assumptions. In particular, $(1+\epsilon)$-stability (which is optimal) suffices for metric and dense MaxCut. We also show how to solve in polynomial time $\Omega(\sqrt{n})$-stable instances of MaxCut, substantially improving the best previously known result.
|
arxiv:1205.4893
|
We establish a sample of 370 Mira variables that are likely near the Galactic center (GC). The sources have been selected from the OGLE and BAaDE surveys based on their sky coordinates, OGLE classifications, and BAaDE maser-derived line-of-sight velocities. As the distance to the GC is known to high accuracy, this sample is a test bed for reddening and extinction studies toward the GC and in Mira envelopes. We calculated separate interstellar- and circumstellar-extinction values for individual sources, showing that there is a wide range of circumstellar extinction values (up to four magnitudes in the $K_s$ band) in the sample, and that circumstellar reddening is statistically different from interstellar reddening laws. Further, the reddening laws in the circumstellar environments of our sample and the circumstellar environments of Large Magellanic Cloud (LMC) Miras are strikingly similar despite the different metallicities of the samples. Period-magnitude relations for the mid-infrared (MIR) WISE and MSX bands are also explored, and in the WISE bands we compare these to period-magnitude relationships derived from Miras in the LMC, as it is important to compare these LMC relations to those in a higher-metallicity environment. Emission from the envelope itself may contaminate MIR magnitudes, altering the relations, especially for sources with thick envelopes.
|
arxiv:2308.01710
|
The accelerated deployment of service robots has spawned a number of algorithm variations to better handle real-world conditions. Many local trajectory planning techniques have been deployed on practical robot systems successfully. While most formulations of the dynamic window approach and model predictive control can progress along paths and optimize for additional criteria, the use of pure path tracking algorithms is still commonplace. Decades later, pure pursuit and its variants continue to be one of the most commonly utilized classes of local trajectory planners. However, few pure pursuit variants have been proposed with schema for variable linear velocities - they either assume a constant velocity or fail to address the point at all. This paper presents a variant of pure pursuit designed with additional heuristics to regulate linear velocities, built atop the existing adaptive variant. The regulated pure pursuit algorithm makes incremental improvements on the state of the art by adjusting linear velocities, with particular focus on safety in constrained and partially observable spaces commonly negotiated by deployed robots. We present experiments with the regulated pure pursuit algorithm on industrial-grade service robots. We also provide a high-quality reference implementation that is freely included in the ROS 2 Nav2 framework at https://github.com/ros-planning/navigation2 for fast evaluation.
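A minimal sketch of the kind of velocity regulation the abstract describes, layered on plain pure pursuit. The curvature heuristic below (scale speed down when the turning radius drops under a threshold) and its constants are illustrative, not the parameters or full heuristic set of the Nav2 implementation:

```python
def pure_pursuit_curvature(lookahead_x, lookahead_y):
    # Standard pure pursuit: curvature to reach a lookahead point (x, y)
    # expressed in the robot frame, k = 2*y / L^2.
    L2 = lookahead_x ** 2 + lookahead_y ** 2
    return 2.0 * lookahead_y / L2

def regulated_speed(curvature, v_max=0.5, min_radius=0.9):
    # Curvature heuristic: below min_radius, reduce the commanded linear
    # velocity proportionally to the turning radius.
    radius = float("inf") if curvature == 0 else 1.0 / abs(curvature)
    if radius >= min_radius:
        return v_max
    return v_max * radius / min_radius

k_straight = pure_pursuit_curvature(1.0, 0.0)   # 0.0: straight ahead
k_turn = pure_pursuit_curvature(0.5, 0.5)       # tight turn, k = 2.0
v_straight = regulated_speed(k_straight)        # full speed on straights
v_turn = regulated_speed(k_turn)                # slowed in the turn
```

The design point is that the tracking law itself is unchanged; only the linear velocity command is modulated, which is why the approach composes with the existing adaptive (lookahead-scaling) variant.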
|
arxiv:2305.20026
|
We present IMPACT, a flexible toolchain for nonlinear model predictive control (NMPC) specification with automatic code generation capabilities. The toolchain reduces the engineering complexity of NMPC implementations by providing the user with an easy-to-use application programming interface, and with the flexibility of using multiple state-of-the-art tools and numerical optimization solvers for rapid prototyping of NMPC solutions. IMPACT is written in Python; users can call it from Python and MATLAB, and the generated NMPC solvers can be directly executed from C, Python, MATLAB and Simulink. An application example is presented involving problem specification and deployment on embedded hardware using Simulink, showing the effectiveness and applicability of IMPACT for NMPC-based solutions.
|
arxiv:2303.08850
|
By constructing different parameters which are able to give us information about our universe during inflation (especially at the start and the end of the inflationary universe), a brief idea of brane-world inflation is given in this work. What the size of the universe will be at the end of inflation, i.e., how many times it will grow relative to today's size, is speculated upon and analysed thereafter. Different kinds of fluids are taken to be the matter inside the brane. It is observed that in the case of a gas generating highly positive pressure, such as a polytropic gas, the size of the universe at the end of inflation is comparatively smaller, whereas for negative pressure creators (like Chaplygin gas) this size is much bigger. Apart from these two cases, inflation has also been studied for a barotropic fluid and for the linear redshift parametrization $\omega(z) = \omega_0 + \omega_1 z$. For these, the size of the universe after inflation is much higher. We have also seen that this size does not depend upon the potential energy at the end of the inflation. On the contrary, there is a high impact of the initial potential energy upon the size of inflation.
|
arxiv:1104.1297
|
We define gravitational mass and current multipoles for five-dimensional, stationary, and asymptotically flat vacuum metrics. We do this by generalizing Thorne's asymptotically Cartesian and mass-centered (ACMC) coordinate formalism to five dimensions, and prove that the multipoles defined in this way are unambiguously well-defined. Further, these two towers of multipole tensors, in the case of biaxial symmetry, reduce to a tower of mass multipoles $M_\ell$, and two separate towers of current or angular momentum multipoles $S^{(1)}_\ell, S^{(2)}_\ell$. We apply our formalism to a few examples, in particular Myers-Perry black holes, black rings, and smooth multicentered geometries.
|
arxiv:2312.04352
|
The Spallation Neutron Source (SNS) is being designed for operation in 2004. The SNS is a 1 GeV machine consisting of a combination of normal-conducting and super-conducting linac sections as well as a ring and target area. The linac front end is a 402.5 MHz RFQ being developed by Lawrence Berkeley Lab. The DTL (at 402.5 MHz) and the CCL (at 805 MHz) stages are being developed by Los Alamos National Laboratory. The expected output energy of the DTL is 87 MeV and that of the CCL is 185 MeV. The RF control system under development for the linac is based on the Low Energy Demonstration Accelerator (LEDA) control system with some new features. This paper will discuss the new design approach and its benefits. Block diagrams and circuit specifics will be addressed. The normal-conducting RF control system will be described in detail, with references to the super-conducting control system where appropriate.
|
arxiv:physics/0008113
|
We propose an uncertainty propagation study and a sensitivity analysis with the Ocular Mathematical Virtual Simulator, a computational and mathematical model that predicts the hemodynamics and biomechanics within the human eye. In this contribution, we focus on the effect of intraocular pressure, retrolaminar tissue pressure and systemic blood pressure on the ocular posterior tissue vasculature. The combination of a physically-based model with experiments-based stochastic input allows us to gain a better understanding of the physiological system, accounting both for the driving mechanisms and the data variability.
|
arxiv:2102.00707
|
Recent self-supervised learning (SSL) methods have shown impressive results in learning visual representations from unlabeled images. This paper aims to improve their performance further by utilizing the architectural advantages of the underlying neural network, as the current state-of-the-art visual pretext tasks for SSL do not enjoy the benefit, i.e., they are architecture-agnostic. In particular, we focus on Vision Transformers (ViTs), which have gained much attention recently as a better architectural choice, often outperforming convolutional networks for various visual tasks. The unique characteristic of ViT is that it takes a sequence of disjoint patches from an image and processes patch-level representations internally. Inspired by this, we design a simple yet effective visual pretext task, coined SelfPatch, for learning better patch-level representations. To be specific, we enforce invariance against each patch and its neighbors, i.e., each patch treats similar neighboring patches as positive samples. Consequently, training ViTs with SelfPatch learns more semantically meaningful relations among patches (without using human-annotated labels), which can be beneficial, in particular, to downstream tasks of a dense prediction type. Despite its simplicity, we demonstrate that it can significantly improve the performance of existing SSL methods for various visual tasks, including object detection and semantic segmentation. Specifically, SelfPatch significantly improves the recent self-supervised ViT, DINO, by achieving +1.3 AP on COCO object detection, +1.2 AP on COCO instance segmentation, and +2.9 mIoU on ADE20K semantic segmentation.
|
arxiv:2206.07990
|
The Klein-Gordon equation is used to calculate the Zitterbewegung (ZB, trembling motion) of spin-zero particles in the absence of fields and in the presence of an external magnetic field. Both Hamiltonian and wave formalisms are employed to describe ZB and their results are compared. It is demonstrated that, if one uses wave packets to represent particles, the ZB motion has a decaying behavior. It is also shown that the trembling motion is caused by an interference of two sub-packets composed of positive and negative energy states which propagate with different velocities. In the presence of a magnetic field, the quantization of the energy spectrum results in many interband frequencies contributing to ZB oscillations, and the motion follows a collapse-revival pattern. In the limit of non-relativistic velocities, the interband ZB components vanish and the motion is reduced to cyclotron oscillations. The exact dynamics of a charged Klein-Gordon particle in the presence of a magnetic field is described on an operator level. The trembling motion of a KG particle in the absence of fields is simulated using a classical model proposed by Morse and Feshbach; it is shown that the variance of a Gaussian wave packet exhibits ZB oscillations.
|
arxiv:1205.4707
|
Astrophysical data analysis of the weak-field predictions supports the claim that modified gravity (MOG) theories provide a self-consistent, scale-invariant, universal description of galaxy rotation curves, without the need for non-baryonic dark matter. Comparison to the predictions of Milgrom's modified dynamics (MOND) provides a best-fit and experimentally determined universal value of the MOND acceleration parameter. The predictions of the modified gravity theories are compared to the predictions of cold non-baryonic dark matter (CDM), including a constant-density core-modified fitting formula, which produces excellent fits to galaxy rotation curves, including the low-surface-brightness and dwarf galaxies. Upon analysing the mass profiles of clusters of galaxies inferred from X-ray luminosity measurements, from the smallest nearby clusters to the largest of the clusters of galaxies, it is shown that while MOG provides consistent fits, MOND does not fit the observed shape of cluster mass profiles for any value of the MOND acceleration parameter. Comparison to the predictions of CDM confirms that whereas the Navarro-Frenk-White (NFW) fitting formula does not fit the observed shape of galaxy cluster mass profiles, the core-modified dark matter fitting formula provides excellent best fits, supporting the hypothesis that baryons are dynamically important in the distribution of dark matter halos.
|
arxiv:0908.0040
|
An F0 and voicing status estimation algorithm for high-quality speech analysis/synthesis is proposed. This problem is approached from a different perspective that models the behavior of feature extractors under noise, instead of directly modeling speech signals. Under time-frequency locality assumptions, the joint distribution of extracted features and target F0 can be characterized by training a bank of Gaussian mixture models (GMMs) on artificial data generated from Monte Carlo simulations. The trained GMMs can then be used to generate a set of conditional distributions on the predicted F0, which are then combined and post-processed by the Viterbi algorithm to give a final F0 trajectory. Evaluation on the CSTR and CMU ARCTIC speech databases shows that the proposed method, trained on fully synthetic data, achieves lower gross error rates than state-of-the-art methods.
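The final step, Viterbi smoothing of frame-wise F0 candidate distributions, can be sketched in a toy form. The per-frame scores below are hand-made stand-ins for the GMM-derived conditional distributions, and the octave-jump penalty is illustrative; the paper's actual distributions come from models trained on simulated feature noise.

```python
import math

def viterbi_f0(candidates, scores, jump_penalty=2.0):
    """candidates: F0 values in Hz; scores[t][j]: log-score of candidate j
    at frame t. Transitions are penalized by the log-frequency jump size."""
    n = len(candidates)
    best = list(scores[0])
    back = []
    for t in range(1, len(scores)):
        prev_best, best, back_t = best, [0.0] * n, [0] * n
        for j in range(n):
            options = [
                prev_best[i]
                - jump_penalty * abs(math.log2(candidates[j] / candidates[i]))
                for i in range(n)
            ]
            i_star = max(range(n), key=lambda i: options[i])
            best[j] = scores[t][j] + options[i_star]
            back_t[j] = i_star
        back.append(back_t)
    # Backtrack the highest-scoring path
    j = max(range(n), key=lambda j: best[j])
    path = [j]
    for back_t in reversed(back):
        j = back_t[j]
        path.append(j)
    path.reverse()
    return [candidates[j] for j in path]

cands = [100.0, 200.0, 400.0]
# Middle frame weakly prefers an octave error (400 Hz);
# the transition penalty should keep the track at 200 Hz.
frame_scores = [[0.0, 3.0, 0.0], [0.0, 2.0, 2.4], [0.0, 3.0, 0.0]]
track = viterbi_f0(cands, frame_scores)
```

This is the standard dynamic-programming trade-off: a frame-local octave error is overruled when the jump cost to and from it exceeds its local score advantage.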
|
arxiv:1710.11317
|
The ontology engineering process is complex, time-consuming, and error-prone, even for experienced ontology engineers. In this work, we investigate the potential of large language models (LLMs) to provide effective OWL ontology drafts directly from ontological requirements described using user stories and competency questions. Our main contribution is the presentation and evaluation of two new prompting techniques for automated ontology development: Memoryless CQbyCQ and Ontogenia. We also emphasize the importance of three structural criteria for ontology assessment, alongside expert qualitative evaluation, highlighting the need for a multi-dimensional evaluation in order to capture the quality and usability of the generated ontologies. Our experiments, conducted on a benchmark dataset of ten ontologies with 100 distinct CQs and 29 different user stories, compare the performance of three LLMs using the two prompting techniques. The results demonstrate improvements over the current state of the art in LLM-supported ontology engineering. More specifically, the model OpenAI o1-preview with Ontogenia produces ontologies of sufficient quality to meet the requirements of ontology engineers, significantly outperforming novice ontology engineers in modelling ability. However, we still note some common mistakes and variability of result quality, which is important to take into account when using LLMs for ontology authoring support. We discuss these limitations and propose directions for future research.
|
arxiv:2503.05388
|
We identify a class of linearly ordered topological spaces $X$ that may satisfy the property that $X \times X$ is homeomorphic to $X \times_l X$, or can be embedded into a linearly ordered space with the stated property. We justify the conjectures by partial results.
|
arxiv:1801.01873
|
In car-following models, the driver reacts according to his physical and psychological abilities, which may change over time. However, most car-following models are deterministic and do not capture the stochastic nature of human perception. It is expected that purely deterministic traffic models may produce unrealistic results due to the stochastic driving behaviors of drivers. This paper is devoted to the development of a distinct car-following model in which a stochastic process is adopted to describe the time-varying random acceleration, which essentially reflects the random individual perception of driver behavior with respect to the leading vehicle over time. In particular, we apply coupled Langevin equations to model complex human driver behavior. In the proposed model, an extended Cox-Ingersoll-Ross (CIR) stochastic process is used to describe the stochastic speed of the follower in response to the stimulus of the leader. An important property of the extended CIR process is that it preserves the non-negativity of the stochastic traffic variables (e.g., non-negative speed) for any arbitrary model parameters. Based on stochastic process theories, we derive stochastic linear stability conditions which, for the first time, theoretically capture the effect of the random parameter on traffic instabilities. Our stability results conform to the empirical finding that traffic instability is related to the stochastic nature of traffic flow at low-speed conditions, even when traffic is deemed to be stable from deterministic models.
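The non-negativity property of the CIR dynamics, dv = theta*(mu - v)*dt + sigma*sqrt(v)*dW, can be seen in a simple Euler-Maruyama simulation: the sqrt(v) diffusion term vanishes as v approaches zero. The parameter values and the full-truncation max(v, 0) guard below are standard discretization choices for illustration, not the paper's extended model or calibration.

```python
import math
import random

def simulate_cir(v0, theta, mu, sigma, dt=0.01, steps=5000, seed=7):
    """Euler-Maruyama (full-truncation) path of a CIR process:
    dv = theta*(mu - v)*dt + sigma*sqrt(v)*dW."""
    rng = random.Random(seed)
    v = v0
    path = [max(v, 0.0)]
    for _ in range(steps):
        v_pos = max(v, 0.0)  # truncation guard for the discrete scheme
        dw = rng.gauss(0.0, math.sqrt(dt))
        v = v + theta * (mu - v_pos) * dt + sigma * math.sqrt(v_pos) * dw
        path.append(max(v, 0.0))
    return path

# A follower's speed relaxing from rest toward a desired speed of 15 m/s,
# with mean-reversion rate theta and noise intensity sigma.
speeds = simulate_cir(v0=0.0, theta=0.8, mu=15.0, sigma=0.5)
```

The mean-reverting drift pulls the speed toward mu while the state-dependent noise shuts off at zero, which is exactly the property the abstract highlights for keeping traffic variables physically meaningful.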
|
arxiv:1907.06148
|
A large open-circuit voltage ($V_{OC}$) deficit restricts current kesterite device performance. The primary challenge is to achieve control over the phase composition and purity of the kesterite absorber. This is hampered by the fact that the metals copper and tin have multiple valence states, which leads inevitably to the formation of multiple phases. Specifically for solution-based fabrication procedures for kesterite, the pursuit of phase purity extends to the synthesis of the CZTS precursor solution or nanoparticle-dispersed inks (nano inks). In this work, a "redox-free" synthesis of CZTS nano ink is developed by mixing metal precursors with careful valence state control in non-toxic solvents. The issue of secondary phase formation during the synthesis process of kesterite is effectively resolved. Additionally, molecular solutions and nanoparticle inks with identical compositions exhibit significantly different abilities in phase control. Nanoparticles pre-synthesized in the solution state exhibit superior phase control by following a more ideal phase formation path. This provides a new pathway for the synthesis of kesterite with unprecedented control of the phase composition and purity.
|
arxiv:2408.15795
|
Wind power, as a renewable source of energy, has numerous economic, environmental and social benefits. In order to enhance and control renewable wind power, it is vital to utilize models that predict wind speed with high accuracy. Due to neglect of the requirement and significance of data preprocessing, and disregard of the inadequacy of using a single predicting model, many traditional models have poor performance in wind speed prediction. In the current study, for predicting wind speed at target stations in the north of Iran, the combination of a multi-layer perceptron (MLP) model with the whale optimization algorithm (WOA) was used to build a new method (MLP-WOA) with a limited set of data (2004-2014). The MLP-WOA model was then utilized at each of the ten target stations, with nine stations for training and the tenth station for testing (namely: Astara, Bandar-e-Anzali, Rasht, Manjil, Jirandeh, Talesh, Kiyashahr, Lahijan, Masuleh, and Deylaman), to increase the accuracy of the subsequent hybrid model. The capability of the hybrid model in wind speed forecasting at each target station was compared with that of the MLP model without the WOA optimizer. To obtain definite results, numerous statistical performance measures were utilized. For all ten target stations, the MLP-WOA model had more precise outcomes than the standalone MLP model. The hybrid model had acceptable performance, with lower values of the RMSE, SI and RE parameters and higher values of the NSE, WI, and KGE parameters. It was concluded that the WOA optimization algorithm can improve the prediction accuracy of the MLP model and may be recommended for accurate wind speed prediction.
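A compact sketch of the whale optimization algorithm's update rules (shrinking encirclement of the best solution, spiral "bubble-net" moves, and random-whale exploration). Here WOA minimizes a plain sphere function so the mechanics are visible; in the hybrid model the fitness would instead be the MLP's wind-speed prediction error and each "whale" a candidate weight vector. This is a simplified illustration, not the study's implementation.

```python
import math
import random

def sphere(x):
    return sum(xi ** 2 for xi in x)

def woa_minimize(fitness, dim=4, whales=12, iters=120, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(whales)]
    best = min(pop, key=fitness)
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters            # control parameter: 2 -> 0
        for w in range(whales):
            A = a * (2 * rng.random() - 1)
            C = 2 * rng.random()
            if rng.random() < 0.5:
                # |A| < 1: encircle the best; otherwise explore a random whale
                ref = best if abs(A) < 1 else pop[rng.randrange(whales)]
                pop[w] = [ref[d] - A * abs(C * ref[d] - pop[w][d])
                          for d in range(dim)]
            else:
                # Spiral (bubble-net) move around the current best
                l = rng.uniform(-1, 1)
                pop[w] = [abs(best[d] - pop[w][d]) * math.exp(l)
                          * math.cos(2 * math.pi * l) + best[d]
                          for d in range(dim)]
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best

best = woa_minimize(sphere)
```

Because the best-so-far solution is only ever replaced by a strictly better candidate, the returned fitness is monotonically non-increasing over iterations, which is what makes WOA usable as a drop-in trainer for MLP weights.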
|
arxiv:2002.06226
|
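the hybrid described in the abstract above can be sketched in miniature: a whale optimization loop tuning the weights of a small mlp. everything below is illustrative and assumed ( toy regression target, a 2-5-1 network, simplified woa moves ); it is not the authors' implementation or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression stand-in for the wind-speed data (the stations' real data
# are not reproduced here): the target is a smooth function of two inputs.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

HIDDEN = 5
DIM = 2 * HIDDEN + HIDDEN + HIDDEN + 1  # parameter count of a 2-5-1 network

def mlp_predict(w, X):
    """Tiny 2-5-1 MLP whose weights are packed into the flat vector w."""
    W1 = w[:2 * HIDDEN].reshape(2, HIDDEN)
    b1 = w[2 * HIDDEN:3 * HIDDEN]
    W2 = w[3 * HIDDEN:4 * HIDDEN]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def rmse(w):
    return np.sqrt(np.mean((mlp_predict(w, X) - y) ** 2))

# Simplified whale optimization: agents either "encircle" the best solution
# found so far or follow a logarithmic spiral around it.
pop = rng.uniform(-1, 1, size=(30, DIM))
best = min(pop, key=rmse).copy()
err0 = rmse(best)                        # error of the best initial agent
T = 200
for t in range(T):
    a = 2 - 2 * t / T                    # coefficient decreasing from 2 to 0
    for i in range(len(pop)):
        A = 2 * a * rng.random() - a
        if rng.random() < 0.5:           # encircling-prey move
            D = np.abs(2 * rng.random() * best - pop[i])
            pop[i] = best - A * D
        else:                            # spiral move toward the best agent
            l = rng.uniform(-1, 1)
            pop[i] = np.abs(best - pop[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
    cand = min(pop, key=rmse)
    if rmse(cand) < rmse(best):          # keep the best-so-far solution
        best = cand.copy()

print(round(err0, 3), round(rmse(best), 3))
```

by construction the loop never discards the best-so-far agent, so the final error is at most the initial one; the paper's actual gains come from this kind of metaheuristic weight search replacing plain gradient training.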
we study single - photon induced electromagnetically induced transparency ( eit ) in many - emitter waveguide quantum electrodynamics ( wqed ) with linear and nonlinear waveguide dispersion relations. in the single - emitter problem, in addition to the robustness of the eit spectral features in the over - coupled regime of wqed, we find that the nonlinear dispersion results in the appearance of a side peak for frequencies smaller than the resonant eit frequency which turns into a pronounced plateau as the nonlinearity is enhanced. consequently, for many - emitter scenarios, our results indicate the formation of band structure which for higher values of nonlinearity leads to narrow band gaps as compared to the corresponding linear dispersion case. long - distance quantum networking aided with quantum memories can serve as one of the targeted applications of this work.
|
arxiv:2307.03836
|
in this paper we obtain estimates on the distance of inertial manifolds for dynamical systems generated by evolutionary parabolic type equations. we consider the situation where the systems are defined in different phase spaces and we estimate the distance in terms of the distance of the resolvent operators of the corresponding elliptic operators and the distance of the nonlinearities of the equations.
|
arxiv:1305.2771
|
automatic assessment of reading fluency using automatic speech recognition ( asr ) holds great potential for early detection of reading difficulties and subsequent timely intervention. precise assessment tools are required, especially for languages other than english. in this study, we evaluate six state - of - the - art asr - based systems for automatically assessing dutch oral reading accuracy using kaldi and whisper. results show our most successful system reached substantial agreement with human evaluations ( mcc =. 63 ). the same system reached the highest correlation between forced decoding confidence scores and word correctness ( r =. 45 ). this system ' s language model ( lm ) consisted of manual orthographic transcriptions and reading prompts of the test data, which shows that including reading errors in the lm improves assessment performance. we discuss the implications for developing automatic assessment systems and identify possible avenues of future research.
|
arxiv:2306.03444
|
fine - grained facial expression manipulation is a challenging problem, as fine - grained expression details are difficult to capture. most existing expression manipulation methods resort to discrete expression labels, which mainly edit global expressions and ignore the manipulation of fine details. to tackle this limitation, we propose an end - to - end expression - guided generative adversarial network ( eggan ), which utilizes structured latent codes and continuous expression labels as input to generate images with expected expressions. specifically, we adopt an adversarial autoencoder to map a source image into a structured latent space. then, given the source latent code and the target expression label, we employ a conditional gan to generate a new image with the target expression. moreover, we introduce a perceptual loss and a multi - scale structural similarity loss to preserve identity and global shape during generation. extensive experiments show that our method can manipulate fine - grained expressions, and generate continuous intermediate expressions between source and target expressions.
|
arxiv:2004.09769
|
humans interact with an object in many different ways by making contact at different locations, creating a highly complex motion space that can be difficult to learn, particularly when synthesizing such human interactions in a controllable manner. existing works on synthesizing human scene interaction focus on the high - level control of action but do not consider the fine - grained control of motion. in this work, we study the problem of synthesizing scene interactions conditioned on different contact positions on the object. as a testbed to investigate this new problem, we focus on human - chair interaction as one of the most common actions which exhibit large variability in terms of contacts. we propose a novel synthesis framework couch that plans ahead the motion by predicting contact - aware control signals of the hands, which are then used to synthesize contact - conditioned interactions. furthermore, we contribute a large human - chair interaction dataset with clean annotations, the couch dataset. our method shows significant quantitative and qualitative improvements over existing methods for human - object interactions. more importantly, our method enables control of the motion through user - specified or automatically predicted contacts.
|
arxiv:2205.00541
|
model predictive control strategies require solving, in a sequential manner, many possibly non - convex optimization problems. in this work, we propose an interacting stochastic agent system to solve those problems. the agents evolve in pseudo - time and in parallel to the time - discrete state evolution. the method is suitable for non - convex, non - differentiable objective functions. the convergence properties are investigated through a mean - field approximation of the time - discrete system, showing convergence in the case of additive linear control. we validate the proposed strategy by applying it to the control of a stirred - tank reactor non - linear system.
|
arxiv:2312.13085
|
recent progress of the complex langevin method and the lefschetz thimble in connection with the sign problem is reviewed. these methods rely on the complexification of the original field manifold and they allow direct simulations of theories with non - real measures. similarities and differences of the two approaches are pointed out. results using the complex langevin method, which allows simulations to evade the sign problem in full qcd, are presented. promising results of the thimble approach for non - gauge theories are also discussed.
|
arxiv:1410.8813
|
black hole complementarity requires that the interior of a black hole be represented by the same degrees of freedom that describe its exterior. entanglement plays a crucial role in the reconstruction of the interior degrees of freedom. this connection is manifest in " two - sided " eternal black holes. but for real black holes which are formed from collapse there are no second sides. the sense in which horizon entropy is entanglement entropy is much more subtle for one - sided black holes. it involves entanglement between different parts of the near - horizon system. as a one - sided black hole evaporates the entanglement that accounts for its interior degrees of freedom disappears, and is gradually replaced by entanglement with the outgoing hawking radiation. a principle of " transfer of entanglement " can be formulated. according to the argument of almheiri, marolf, polchinski and sully, it is when the transfer of entanglement is completed at the page time, that a firewall replaces the horizon. alternatives to firewalls may suffer contradictions which are similar to those of time travel. the firewall hypothesis would be similar to hawking ' s chronology protection conjecture.
|
arxiv:1210.2098
|
this is a summary of arguments in favor of observing high - redshift star formation in the uv as presented at the ringberg meeting in september 2000. the most rapidly star - forming galaxies are very dusty, easier to detect at 850um than in the uv, but less rapidly star - forming galaxies are less obscured by dust and as a result the comparatively faint galaxies that hosted most high - redshift star formation are easiest to detect in the uv. the correlation of star - formation rate and dust obscuration implies that extremely luminous dusty galaxies are usually as bright in the uv as the less luminous dust - free galaxies, and that any uv survey at a given redshift 0 < z < ~ 3 deep enough to detect the majority of the uv luminosity density will detect the majority of ir - selected galaxies as well. little star formation occurs in galaxies that are completely hidden from uv surveys. i review recent attempts to estimate star - formation rates for high - redshift galaxies from uv data alone. the strength of uv surveys is that they detect large numbers of high - redshift galaxies, even ones that are intrinsically very faint, in large and representative comoving volumes. the weakness is that star - formation rates are difficult to estimate for the detected galaxies. ir surveys complement them perfectly : star - formation rates can be estimated with reasonable confidence, but only small regions of the sky can be surveyed and only the most luminous sources can be detected. multiwavelength cooperation, not conflict, will lead to future progress in this field.
|
arxiv:astro-ph/0101144
|
the problem of decomposing networks into modules ( or clusters ) has gained much attention in recent years, as it can account for a coarse - grained description of complex systems, often revealing functional subunits of these systems. a variety of module detection algorithms have been proposed, mostly oriented towards finding hard partitionings of undirected networks. despite the increasing number of fuzzy clustering methods for directed networks, many of these approaches tend to neglect important directional information. in this paper, we present a novel random walk based approach for finding fuzzy partitions of directed, weighted networks, where edge directions play a crucial role in defining how well nodes in a module are interconnected. we will show that cycle decomposition of a random walk process connects the notion of network modules and information transport in a network, leading to a new, symmetric measure of node communication. finally, we will use this measure to introduce a communication graph, for which we will show that although being undirected it inherits all necessary information about modular structures from the original network.
|
arxiv:1407.8039
|
we report on the observation of magneto - oscillations of terahertz radiation induced photocurrent in hgte / hgcdte quantum wells ( qws ) of different widths, which are characterized by a dirac - like, inverted and normal parabolic band structure. the photocurrent data are accompanied by measurements of photoresistance ( photoconductivity ), radiation transmission, as well as magneto - transport. we develop a microscopic model of a cyclotron - resonance assisted photogalvanic effect, which describes main experimental findings. we demonstrate that the quantum oscillations of the photocurrent are caused by the crossing of fermi level by landau levels resulting in the oscillations of spin polarization and electron mobilities in spin subbands. theory explains a photocurrent direction reversal with the variation of magnetic field observed in experiment. we describe the photoconductivity oscillations related with the thermal suppression of the shubnikov - de haas effect.
|
arxiv:1407.1162
|
the femtoscopic study of pairs of identical pions is particularly suited to investigate the effective source function of particle emission, due to the resulting bose - einstein correlation signal. in small collision systems at the lhc, pp in particular, the majority of the pions are produced in resonance decays, which significantly affect the profile and size of the source. in this work, we explicitly model this effect in order to extract the primordial source in pp collisions at $ \ sqrt { s } = 13 $ tev from charged $ \ pi $ - $ \ pi $ correlations measured by alice. we demonstrate that the assumption of a gaussian primordial source is compatible with the data and that the effective source, resulting from modifications due to resonances, is approximately exponential, as found in previous measurements at the lhc. the universality of hadron emission in pp collisions is further investigated by applying the same methodology to characterize the primordial source of k - p pairs. the size of the primordial source is evaluated as a function of the transverse mass ( $ m _ { \ rm t } $ ) of the pairs, leading to the observation of a common scaling for both $ \ pi $ - $ \ pi $ and k - p, suggesting a collective effect. further, the present results are compatible with the $ m _ { \ rm t } $ scaling of the p - p and p $ - \ lambda $ primordial source measured by alice in high multiplicity pp collisions, providing compelling evidence for the presence of a common emission source for all hadrons in small collision systems at the lhc. this will allow the determination of the source function for any hadron - - hadron pairs with high precision, granting access to the properties of the possible final - state interaction among pairs of less abundantly produced hadrons, such as strange or charmed particles.
|
arxiv:2311.14527
|
recent advances in reinforcement learning ( rl ) have led to a growing interest in applying rl to classical planning domains or applying classical planning methods to some complex rl domains. however, the long - horizon goal - based problems found in classical planning lead to sparse rewards for rl, making direct application inefficient. in this paper, we propose to leverage domain - independent heuristic functions commonly used in the classical planning literature to improve the sample efficiency of rl. these classical heuristics act as dense reward generators to alleviate the sparse - rewards issue and enable our rl agent to learn domain - specific value functions as residuals on these heuristics, making learning easier. correct application of this technique requires consolidating the discounted metric used in rl and the non - discounted metric used in heuristics. we implement the value functions using neural logic machines, a neural network architecture designed for grounded first - order logic inputs. we demonstrate on several classical planning domains that using classical heuristics for rl allows for good sample efficiency compared to sparse - reward rl. we further show that our learned value functions generalize to novel problem instances in the same domain.
|
arxiv:2109.14830
|
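the idea in the abstract above, classical heuristics as dense reward generators, can be illustrated with a deliberately small stand - in: tabular q - learning on a sparse - reward gridworld, where a manhattan - distance heuristic is injected as a potential - based shaping term ( a standard technique; the paper's neural - logic - machine value functions and planning domains are not reproduced here ).

```python
import numpy as np

# 5x5 gridworld: start (0, 0), goal (4, 4); the environment reward is sparse
# (1 at the goal, 0 elsewhere). A Manhattan-distance heuristic h supplies a
# dense signal via potential-based shaping, F(s, s') = gamma*phi(s') - phi(s)
# with potential phi(s) = -h(s).
N, GOAL, GAMMA = 5, (4, 4), 0.95
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def h(s):
    return abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])

def step(s, a):
    ns = (min(max(s[0] + a[0], 0), N - 1), min(max(s[1] + a[1], 0), N - 1))
    r = 1.0 if ns == GOAL else 0.0          # sparse environment reward
    r += GAMMA * (-h(ns)) - (-h(s))         # dense shaping from the heuristic
    return ns, r

rng = np.random.default_rng(1)
Q = np.zeros((N, N, 4))
for _ in range(400):                         # epsilon-greedy Q-learning
    s = (0, 0)
    for _t in range(50):
        a = int(rng.integers(4)) if rng.random() < 0.2 else int(np.argmax(Q[s]))
        ns, r = step(s, ACTIONS[a])
        Q[s][a] += 0.5 * (r + GAMMA * np.max(Q[ns]) - Q[s][a])
        s = ns
        if s == GOAL:
            break

# Greedy rollout with the learned values; the shortest path takes 8 moves.
s, steps = (0, 0), 0
while s != GOAL and steps < 20:
    s, _ = step(s, ACTIONS[int(np.argmax(Q[s]))])
    steps += 1
print(steps)
```

without the shaping term, every transition before the goal returns zero reward and the agent gets no gradient to follow; with it, each step toward the goal is immediately rewarded, which is the sample - efficiency effect the abstract describes.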
transition metal carbides / nitrides ( mxenes ) are a newly developing class of two - dimensional ( 2d ) materials with technically robust properties that can be finely tuned by planar surface functionalization. herein, the critical role of oxygen ( o - ) functionalization on the tensile mechanical characteristics of thinnest 2d ti2c mxene is explored by molecular dynamic ( md ) simulation with first - principle based reaxff forcefield. it is demonstrated that ti2c sheet shows unique tensile mechanical behaviors that pronouncedly vary with the content of o - functionalization and stretching direction. upon both loading directions, there is an apparent crossover in the young ' s modulus, failure strength and failure strain. intriguingly, under armchair directional load, a structural transition of 1t to 1t ' phase occurs in the ti2c region, which has been observed in many transition metal dichalcogenides. upon zigzag directional straining, however, two distinct structural transformations take place in pristine and fully o - functionalized ti2c sheets, respectively. as the load is removed, those three structural transformations are reversible, and they are critically understood by analysis of the bond configurations. the study provides important insights into mechanical behaviors and structural transformations of functionalized mxenes.
|
arxiv:2008.06479
|
distal radius fractures are the most common fractures of the upper extremity in humans. as such, they account for a significant portion of the injuries that present to emergency rooms and clinics throughout the world. we trained a faster r - cnn, a machine vision neural network for object detection, to identify and locate distal radius fractures in anteroposterior x - ray images. we achieved an accuracy of 96 \ % in identifying fractures and a mean average precision ( map ) of 0. 866. this is significantly more accurate than the detection achieved by physicians and radiologists. these results were obtained by training the deep learning network with only 38 original anteroposterior hand x - ray images with fractures. this opens the possibility of detecting, with this type of neural network, rare diseases or rare symptoms of common diseases, where only a small set of diagnosed x - ray images can be collected for each disease.
|
arxiv:1812.09025
|
in this paper, we introduce ' public computation ' as a genre of learning environments that can be used to radically broaden public participation in authentic, computation - enabled stem disciplinary practices. our paradigmatic approach utilizes open source software designed for professional scientists, engineers and digital artists, and situates them in an undiluted form, alongside live and archived expert support, in a public space. we present a case study of digiplay, a prototypical public computation space we designed at the university of calgary, where users can interact directly with scientific simulations as well as the underlying open source code using an array of massive multi - touch screens. we argue that in such a space, public interactions with the code can be thought of as boundary work and play, through which public participation becomes legitimate scientific act, as the public engages in scientific creation through truly open - ended explorations with the code.
|
arxiv:1610.06658
|
we investigate the ground - state properties of quantum particles interacting via a long - range repulsive potential $ { \ cal v } _ \ sigma ( x ) \ sim 1 / | x | ^ { 1 + \ sigma } $ ( $ - 1 < \ sigma $ ) or $ { \ cal v } _ \ sigma ( x ) \ sim - | x | ^ { - 1 - \ sigma } $ ( $ - 2 \ leq \ sigma < - 1 $ ) that interpolates between the coulomb potential $ { \ cal v } _ 0 ( x ) $ and the linearly confining potential $ { \ cal v } _ { - 2 } ( x ) $ of the schwinger model. in the absence of disorder the ground state is a wigner crystal when $ \ sigma \ leq 0 $. using bosonization and the nonperturbative functional renormalization group we show that any amount of disorder suppresses the wigner crystallization when $ - 3 / 2 < \ sigma \ leq 0 $ ; the ground state is then a mott glass, i. e., a state that has a vanishing compressibility and a gapless optical conductivity. for $ \ sigma < - 3 / 2 $ the ground state remains a wigner crystal.
|
arxiv:2007.02993
|
canonical correlation analysis ( cca ) is a classical representation learning technique for finding correlated variables in multi - view data. several nonlinear extensions of the original linear cca have been proposed, including kernel and deep neural network methods. these approaches seek maximally correlated projections among families of functions, which the user specifies ( by choosing a kernel or neural network structure ), and are computationally demanding. interestingly, the theory of nonlinear cca, without functional restrictions, had been studied in the population setting by lancaster already in the 1950s, but these results have not inspired practical algorithms. we revisit lancaster ' s theory to devise a practical algorithm for nonparametric cca ( ncca ). specifically, we show that the solution can be expressed in terms of the singular value decomposition of a certain operator associated with the joint density of the views. thus, by estimating the population density from data, ncca reduces to solving an eigenvalue system, superficially like kernel cca but, importantly, without requiring the inversion of any kernel matrix. we also derive a partially linear cca ( plcca ) variant in which one of the views undergoes a linear projection while the other is nonparametric. using a kernel density estimate based on a small number of nearest neighbors, our ncca and plcca algorithms are memory - efficient, often run much faster, and perform better than kernel cca and comparable to deep cca.
|
arxiv:1511.04839
|
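for context on the abstract above: the linear cca that ncca generalizes already reduces to a singular value decomposition, namely of the whitened cross - covariance; ncca replaces that matrix with an operator estimated from the joint density. the sketch below uses synthetic two - view data ( not the paper's experiments ) and recovers the canonical correlation of a shared latent signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-view data sharing one latent signal z: the first coordinate
# of X and the second coordinate of Y carry z, the rest is independent noise.
n = 2000
z = rng.normal(size=n)
X = np.c_[z + 0.5 * rng.normal(size=n), rng.normal(size=n)]
Y = np.c_[rng.normal(size=n), -z + 0.5 * rng.normal(size=n)]

def inv_sqrt(C):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

Xc, Yc = X - X.mean(0), Y - Y.mean(0)
Cxx, Cyy, Cxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n
# Canonical correlations = singular values of the whitened cross-covariance.
rho = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy), compute_uv=False)
print(np.round(rho, 2))  # leading value near 0.8, second near 0
```

here the population correlation between the informative coordinates is $-0.8$, so the leading canonical correlation is close to 0.8; the eigenvalue system ncca solves plays the role of this svd without inverting any kernel matrix.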
the aim of this review paper is to discuss some applications of orthogonal polynomials in quantum information processing. the hope is to keep the paper self contained so that someone wanting a brief introduction to the theory of orthogonal polynomials and continuous time quantum walks on graphs may find it in one place. in particular, we focus on the associated jacobi operators and discuss how these can be used to detect perfect state transfer. we also discuss how orthogonal polynomials have been used to give results which are analogous to those given by karlin and mcgregor when studying classical birth and death processes. finally, we show how these ideas have been extended to quantum walks with more than nearest neighbor interactions using exceptional orthogonal polynomials. we also provide a ( non - exhaustive ) list of related open questions.
|
arxiv:2412.16351
|
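one computation the review above touches on, detecting perfect state transfer in a continuous - time quantum walk, can be checked numerically for small path graphs via the spectral decomposition of the adjacency matrix. the examples below are standard textbook facts ( perfect end - to - end transfer on $p_2$ at $t = \pi/2$ and on $p_3$ at $t = \pi/\sqrt{2}$ ), not the paper's specific cases.

```python
import numpy as np

def transfer_amplitude(A, t, i, j):
    """|<j| exp(-iAt) |i>| for a real symmetric adjacency matrix A."""
    w, V = np.linalg.eigh(A)
    U = (V * np.exp(-1j * w * t)) @ V.T   # V diag(e^{-i w t}) V^T
    return abs(U[j, i])

# Path P2: perfect state transfer between the two ends at t = pi/2.
A2 = np.array([[0., 1.], [1., 0.]])
# Path P3: eigenvalues 0, +-sqrt(2); perfect end-to-end transfer at pi/sqrt(2).
A3 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])

p2 = transfer_amplitude(A2, np.pi / 2, 0, 1)
p3 = transfer_amplitude(A3, np.pi / np.sqrt(2), 0, 2)
print(round(p2, 6), round(p3, 6))  # both are 1 up to floating-point error
```

the jacobi - operator criteria discussed in the review characterize exactly when such unit - magnitude transfer amplitudes occur, without computing the matrix exponential directly.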
our position is that logic programming is not programming in the horn clause sublogic of classical logic, but programming in a logic of ( inductive ) definitions. thus, the similarity between prototypical prolog programs ( e. g., member, append,... ) and how inductive definitions are expressed in mathematical text, is not coincidental but essential. we argue here that this provides a natural solution to the main lingering semantic questions of logic programming and its extensions.
|
arxiv:2304.13430
|
the two - user interference channel is a model for multiple one - to - one communications, where two transmitters wish to communicate with their corresponding receivers via a shared wireless medium. the two most common and simplest coding schemes are time division ( td ) and treating interference as noise ( tin ). interestingly, it is shown that there exists an asymptotic scheme, called the han - kobayashi scheme, that performs better than td and tin. however, the han - kobayashi scheme has impractically high complexity and is designed for asymptotic settings, which leads to a gap between information theory and practice. in this paper, we focus on designing practical codes for interference channels. as it is challenging to analytically design practical codes with feasible complexity, we apply deep learning to learn codes for interference channels. we demonstrate that deepic, a convolutional neural network - based code with an iterative decoder, outperforms td and tin by a significant margin for two - user additive white gaussian noise channels with moderate amounts of interference.
|
arxiv:2108.06028
|
in search of extra dimensions in the ongoing lhc experiments, signatures of randall - sundrum ( rs ) lightest kk graviton have been one of the main focus in recent years. the recent data from the dilepton decay channel at the lhc has determined the experimental lower bound on the mass of the rs lightest kaluza - klein ( kk ) graviton for different choices of underlying parameters of the theory. in this work we explore the effects of the backreaction of the bulk scalar field, which is employed to stabilise the rs model, in modifying the couplings of the lightest kk graviton with the standard model ( sm ) matter fields located on the visible brane. in such a modified background geometry we show that the coupling of the lightest kk graviton with the sm matter fields gets a significant suppression due to the inclusion of the backreaction of the bulk stabilising scalar field. this implies that the backreaction parameter weakens the signals from rs scenario in collider experiments which in turn explains the non - visibility of kk graviton in colliders. thus we show that the modulus stabilisation plays a crucial role in the search of warped extra dimensions in collider experiments.
|
arxiv:1506.05613
|
meta - analysis is a powerful tool to synthesize findings from multiple studies. the normal - normal random - effects model is widely used to account for between - study heterogeneity. however, meta - analysis of sparse data, which may arise when the event rate is low for binary or count outcomes, poses a challenge to the normal - normal random - effects model in the accuracy and stability in inference since the normal approximation in the within - study model may not be good. to reduce bias arising from data sparsity, the generalized linear mixed model can be used by replacing the approximate normal within - study model with an exact model. publication bias is one of the most serious threats in meta - analysis. several quantitative sensitivity analysis methods for evaluating the potential impacts of selective publication are available for the normal - normal random - effects model. we propose a sensitivity analysis method by extending the likelihood - based sensitivity analysis with the t - statistic selection function of copas to several generalized linear mixed - effects models. through applications of our proposed method to several real - world meta - analysis and simulation studies, the proposed method was proven to outperform the likelihood - based sensitivity analysis based on the normal - normal model. the proposed method would give useful guidance to address publication bias in meta - analysis of sparse data.
|
arxiv:2404.06837
|
we present in this letter the results from two high quality, low density gaas quantum wells. in sample a, of electron density n = 5. 0x10 ^ 10 cm ^ - 2, anisotropic electronic transport behavior was observed at \ nu = 7 / 2 in the second landau level. we believe that the anisotropy is due to the large landau level mixing effect in this sample. in sample b, of density 4. 1x10 ^ 10 cm ^ - 2, strong 8 / 3, 5 / 2, and 7 / 3 fractional quantum hall states were observed. furthermore, our energy gap data suggest that, similar to the 8 / 3 state, the 5 / 2 state may also be spin unpolarized in the low density limit. the results from both samples show that strong electron - electron interactions and a large landau level mixing effect play an important role in the competing ground states in the second landau level.
|
arxiv:1405.6188
|
anderson localization was first investigated in the context of electrons in solids. one of the successes was in explaining the puzzle of negative magneto - resistance - as early as the 1940s it had been observed that electron diffusion rates in some materials can increase with the application of a magnetic field. anderson localization has now been demonstrated in ultra - cold atomic gases. we present a theoretical study of the two - dimensional ultra - cold bose gas in the presence of disorder, to which we apply a synthetic magnetic field. we demonstrate that, in the ballistic transport regime this leads to positive magneto - resistance and that, in the diffusive and strong localization regimes, can also lead to negative magneto - resistance. we propose experimental scenarios to observe these effects.
|
arxiv:1207.5095
|
support vector machines ( svm ) can classify data sets along highly non - linear decision boundaries because of the kernel trick. this expressiveness comes at a price : during test time, the svm classifier needs to compute the kernel inner product between a test sample and all support vectors. with large training data sets, the time required for this computation can be substantial. in this paper, we introduce a post - processing algorithm, which compresses the learned svm model by reducing and optimizing support vectors. we evaluate our algorithm on several medium - scaled real - world data sets, demonstrating that it maintains high test accuracy while reducing the test - time evaluation cost by several orders of magnitude - - - in some cases from hours to seconds. it is fair to say that most of the work in this paper was previously invented by burges and sch \ " olkopf almost 20 years ago. for most of the time during which we conducted this research, we were unaware of this prior work. however, in the past two decades, computing power has increased drastically, and we can therefore provide empirical insights that were not possible in their original paper.
|
arxiv:1501.06478
|
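the compress - then - refit structure of the abstract above can be sketched without a full svm solver. the code below is a stand - in, not the paper's algorithm: the "trained model" is a random rbf kernel expansion, and the reduced set is a random subset of centers whose coefficients are refit by least squares ( the paper instead optimizes the reduced vectors themselves ).

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=1.0):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Stand-in for a trained kernel SVM: an RBF expansion over 200 "support
# vectors" with random coefficients (no actual SVM training here).
SV = rng.normal(size=(200, 2))
alpha = rng.normal(size=200)
def f(X):
    return rbf(X, SV) @ alpha            # test-time cost scales with 200

# Compression: keep m << 200 centers and refit their coefficients by least
# squares so the small expansion tracks the full one on sampled points.
m = 30
centers = SV[rng.choice(200, size=m, replace=False)]
Xfit = rng.normal(size=(500, 2))
beta, *_ = np.linalg.lstsq(rbf(Xfit, centers), f(Xfit), rcond=None)
def f_small(X):
    return rbf(X, centers) @ beta        # test-time cost scales with 30

Xtest = rng.normal(size=(200, 2))
err = float(np.max(np.abs(f(Xtest) - f_small(Xtest))))
print(m, round(err, 3))                  # compressed size, worst-case drift
```

the test - time speedup is the ratio of expansion sizes ( here 200 / 30 ); the paper's optimization of the reduced vectors is what keeps the approximation error small at much larger compression ratios.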
let $ ( y _ n ) $ be a sequence of i. i. d. $ \ mathbb z $ - valued random variables with law $ \ mu $. the reflected random walk $ ( x _ n ) $ is defined recursively by $ x _ 0 = x \ in \ mathbb n _ 0, x _ { n + 1 } = | x _ n + y _ { n + 1 } | $. under mild hypotheses on the law $ \ mu $, it is proved that, for any $ y \ in \ mathbb n _ 0 $, as $ n \ to + \ infty $, one gets $ \ mathbb p _ x [ x _ n = y ] \ sim c _ { x, y } r ^ { - n } n ^ { - 3 / 2 } $ when $ \ sum _ { k \ in \ mathbb z } k \ mu ( k ) > 0 $ and $ \ mathbb p _ x [ x _ n = y ] \ sim c _ { y } n ^ { - 1 / 2 } $ when $ \ sum _ { k \ in \ mathbb z } k \ mu ( k ) = 0 $, for some constants $ r, c _ { x, y } $ and $ c _ y > 0 $.
|
arxiv:1206.6953
|
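the local - limit behavior stated in the abstract above is easy to probe by simulation. the sketch below takes a centered step law ( y uniform on { - 1, 0, 1 }, so $\sum_k k \mu(k) = 0$ ) and checks the predicted $n^{-1/2}$ decay of $\mathbb{p}_0[x_n = 0]$: quadrupling $n$ should roughly halve the return probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reflected walk X_{n+1} = |X_n + Y_{n+1}| started at 0, simulated over many
# independent paths; we record the empirical probability of being at 0.
def return_probs(n_steps, n_paths, checkpoints):
    Y = rng.integers(-1, 2, size=(n_paths, n_steps))   # uniform on {-1,0,1}
    X = np.zeros(n_paths, dtype=np.int64)
    out = {}
    for k in range(n_steps):
        X = np.abs(X + Y[:, k])                        # reflection at 0
        if k + 1 in checkpoints:
            out[k + 1] = float(np.mean(X == 0))
    return out

probs = return_probs(400, 20000, {100, 400})
ratio = probs[400] / probs[100]
print(round(ratio, 2))  # n^{-1/2} scaling predicts about (100/400)**0.5 = 0.5
```

in the drift case $\sum_k k \mu(k) > 0$ the abstract instead predicts the much faster $r^{-n} n^{-3/2}$ decay, which this kind of monte carlo estimate cannot resolve at large $n$.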
multiview video with interactive and smooth view switching at the receiver is a challenging application with several issues in terms of effective use of storage and bandwidth resources, reactivity of the system, quality of the viewing experience and system complexity. the classical decoding system for generating virtual views first projects a reference or encoded frame to a given viewpoint and then fills in the holes due to potential occlusions. this last step still constitutes a complex operation with specific software or hardware at the receiver and requires a certain quantity of information from the neighboring frames for insuring consistency between the virtual images. in this work we propose a new approach that shifts most of the burden due to interactivity from the decoder to the encoder, by anticipating the navigation of the decoder and sending auxiliary information that guarantees temporal and interview consistency. this leads to an additional cost in terms of transmission rate and storage, which we minimize by using optimization techniques based on the user behavior modeling. we show by experiments that the proposed system represents a valid solution for interactive multiview systems with classical decoders.
|
arxiv:1201.0598
|
quantum phase slippage ( qps ) in a superconducting nanowire is a new candidate for developing a quantum bit. it has also been theoretically predicted that the occurrence of qps significantly changes the current - phase relationship ( cpr ) of the wire due to the tunneling between topologically different metastable states. we present studies on the microwave response of the superconducting nanowires to reveal their cprs. first, we demonstrate a simple nanowire fabrication technique, based on commercially available adhesive tapes, which allows making thin superconducting wire from different metals. we compare the resistance vs. temperature curves of mo $ _ { 76 } $ ge $ _ { 24 } $ and al nanowires to the classical and quantum models of phase slips. in order to describe the experimentally observed microwave responses of these nanowires, we use the mccumber - stewart model, which is generalized to include either classical or quantum cpr.
|
arxiv:0905.1726
|
considerations on the complementary time - dependent coordinate transformations embodying the lorentz transformation ( lt ) show that the relativistic energy - momentum relationship, and implicitly the relativistic mass and energy, do not depend on the square root appearing in the lt, being associated with the absolute motion of a particle and related to its inner structure. results concerning the concept of operational theory and its application to the electromagnetic and gravitational field theories, as well as to quantum mechanics, are given in appendixes.
|
arxiv:physics/9912040
|
english is the international standard of social research, but scholars are increasingly conscious of their responsibility to meet the need for scholarly insight into communication processes globally. this tension is as true in computational methods as any other area, with revolutionary advances in the tools for english language texts leaving most other languages far behind. in this paper, we aim to leverage those very advances to demonstrate that multi - language analysis is currently accessible to all computational scholars. we show that english - trained measures computed after translation to english have adequate - to - excellent accuracy compared to source - language measures computed on original texts. we show this for three major analytics - - sentiment analysis, topic analysis, and word embeddings - - over 16 languages, including spanish, chinese, hindi, and arabic. we validate this claim by comparing predictions on original language tweets and their backtranslations : double translations from their source language to english and back to the source language. overall, our results suggest that google translate, a simple and widely accessible tool, is effective in preserving semantic content across languages and methods. modern machine translation can thus help computational scholars make more inclusive and general claims about human communication.
|
arxiv:2301.08416
|
we consider a connection - level model proposed by massoulié and roberts for bandwidth sharing among file transfer flows in a communication network. we study weighted proportionally fair sharing policies and establish explicit - form bounds on the weighted sum of the expected numbers of flows on different routes in heavy traffic. the bounds are linear in the number of critically loaded links in the network, and they hold for a class of phase - type file - size distributions ; i. e., the bounds are heavy - traffic insensitive to the distributions in this class. our approach is lyapunov - drift based, which is different from the widely used diffusion approximation approach. a key technique we develop is to construct a novel inner product in the state space, which then allows us to obtain a multiplicative type of state - space collapse in steady state. furthermore, this state - space collapse result implies the interchange of limits as a by - product for the diffusion approximation of the equal - weight case under phase - type file - size distributions, demonstrating the heavy - traffic insensitivity of the stationary distribution.
|
arxiv:1808.02120
|
this paper presents a novel technique for process discovery. in contrast to the current trend, which only considers an event log for discovering a process model, we assume two additional inputs : an independence relation on the set of logged activities, and a collection of negative traces. after deriving an intermediate net unfolding from them, we perform a controlled folding giving rise to a petri net which contains both the input log and all independence - equivalent traces arising from it. remarkably, the derived petri net cannot execute any trace from the negative collection. the entire chain of transformations is fully automated. a tool has been developed and experimental results are provided that witness the significance of the contribution of this paper.
|
arxiv:1507.02744
|
in 1894, the nea published the results of the work of these conference committees. according to the committee of ten, the goal of high school was to prepare all students to do well in life, contributing to their well - being and the good of society. another goal was to prepare some students to succeed in college. this committee supported the citizen science approach focused on mental training and withheld performance in science studies from consideration for college entrance. the baas encouraged their longer - standing model in the uk. the us adopted a curriculum that was characterized as follows : elementary science should focus on simple natural phenomena ( nature study ) by means of experiments carried out " in the field " ; secondary science should focus on laboratory work, the committee ' s prepared lists of specific experiments, the teaching of facts and principles, and college preparation. the format of shared mental training and pre - professional training consistently dominated the curriculum from its inception to now. however, the movement to incorporate a humanistic approach, such as inclusion of the arts ( s. t. e. a. m. ), science, technology, society and environment education, is growing and being implemented more broadly in the late 20th century. reports by the american association for the advancement of science ( aaas ), including project 2061, and by the national committee on science education standards and assessment detail goals for science education that link classroom science to practical applications and societal implications. = = fields of science education = = science is a universal subject that spans the branch of knowledge that examines the structure and behavior of the physical and natural world through observation and experiment. science education is most commonly broken down into the following three fields : biology, chemistry, and physics.
additionally there is a large body of scientific literature that advocates the inclusion of teaching the nature of science, which is slowly being adopted into the national curricula. = = = physics education = = = physics education is characterized by the study of science that deals with matter and energy, and their interactions. physics first, a program endorsed by the american association of physics teachers, is a curriculum in which 9th grade students take an introductory physics course. the purpose is to enrich students ' understanding of physics, and allow for more detail to be taught in subsequent high school biology and chemistry classes. it also aims to increase the number of students who go on to take 12th grade physics or ap physics, which are generally elective courses in american high schools. [ 22 ] physics education in high schools in the united states has suffered the last twenty years because many states now only require three sciences, which
|
https://en.wikipedia.org/wiki/Science_education
|
the ability to identify sentiment in text, referred to as sentiment analysis, is one which is natural to adult humans. this task is, however, not one which a computer can perform by default. identifying sentiments in an automated, algorithmic manner will be a useful capability for business and research in their search to understand what consumers think about their products or services and to understand human sociology. here we propose two new genetic algorithms ( gas ) for the task of automated text sentiment analysis. the gas learn whether words occurring in a text corpus are either sentiment or amplifier words, and their corresponding magnitude. sentiment words, such as ' horrible ', add linearly to the final sentiment. amplifier words in contrast, which are typically adjectives / adverbs like ' very ', multiply the sentiment of the following word. this increases, decreases or negates the sentiment of the following word. the sentiment of the full text is then the sum of these terms. this approach grows both a sentiment and amplifier dictionary which can be reused for other purposes and fed into other machine learning algorithms. we report the results of multiple experiments conducted on large amazon data sets. the results reveal that our proposed approach was able to outperform several public and / or commercial sentiment analysis algorithms.
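the additive / multiplicative scoring rule described above can be sketched as follows. the dictionaries, magnitudes, and the rule that an amplifier acts only on the immediately following word are illustrative assumptions, not the values or exact semantics learned by the paper's genetic algorithms.

```python
def score_text(tokens, sentiment, amplifier):
    """Sum sentiment values; an amplifier multiplies the next word's sentiment."""
    total = 0.0
    pending = 1.0  # multiplier accumulated from preceding amplifier words
    for tok in tokens:
        if tok in amplifier:
            pending *= amplifier[tok]
        elif tok in sentiment:
            total += pending * sentiment[tok]
            pending = 1.0
        else:
            # Assumption: amplifiers act only on the immediately following word.
            pending = 1.0
    return total

# Tiny illustrative dictionaries (not the learned ones).
sentiment = {"horrible": -2.0, "good": 1.0}
amplifier = {"very": 2.0, "not": -1.0}

print(score_text("the food was very good".split(), sentiment, amplifier))  # 2.0
print(score_text("not horrible".split(), sentiment, amplifier))           # 2.0
```

note how "not" negates by multiplying the following sentiment by -1, matching the negation behavior the abstract attributes to amplifier words.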
|
arxiv:1804.01963
|
the astrophysics of compact objects, which requires einstein ' s theory of general relativity for understanding phenomena such as black holes and neutron stars, is attracting increasing attention. in general relativity, gravity is governed by an extremely complex set of coupled, nonlinear, hyperbolic - elliptic partial differential equations. the largest parallel supercomputers are finally approaching the speed and memory required to solve the complete set of einstein ' s equations for the first time since they were written over 80 years ago, allowing one to attempt full 3d simulations of such exciting events as colliding black holes and neutron stars. in this paper we review the computational effort in this direction, and discuss a new 3d multi - purpose parallel code called " cactus " for general relativistic astrophysics. directions for further work are indicated where appropriate.
|
arxiv:gr-qc/9904014
|
since users move around based on social relationships and interests, the resulting movement patterns can represent how nodes are socially connected ( i. e., nodes with strong social ties, nodes that meet occasionally by sharing the same working environment ). this means that social interactions reflect personal relationships ( e. g., family, friends, co - workers, passers - by ) that may be translated into statistical contact opportunities within and between social groups over time. such contact opportunities may be exploited to ensure good data dissemination and retrieval, even in the presence of intermittent connectivity. thus, in the last years, a new trend based on social similarity has emerged where social relationships, interests, and popularity, among others, are used to improve opportunistic routing. in this chapter, the reader will learn about the different approaches related to opportunistic routing, focusing on the social - aware approaches and how such approaches make use of social information derived from opportunistic contacts to improve data forwarding. additionally, a brief overview of the existing taxonomies for opportunistic routing, as well as an updated one, is provided along with a set of experiments in scenarios based on synthetic mobility models and human traces in order to show the potential of social - aware solutions.
|
arxiv:1407.8411
|
when an object impacts the free surface of a liquid, it ejects a splash curtain upwards and creates an air cavity below the free surface. as the object descends into the liquid, the air cavity eventually closes under the action of hydrostatic pressure ( deep seal ). in contrast, the surface curtain may splash outwards or dome over and close, creating a surface seal. in this paper we experimentally investigate how the splash curtain dynamics are governed by the interplay of cavity pressure difference, gravity, and surface tension and how they control the occurrence, or not, of surface seal. based on the experimental observations and measurements, we develop an analytical model to describe the trajectory and dynamics of the splash curtain. the model enables us to reveal the scaling relationship for the dimensionless surface seal time and discover the existence of a critical dimensionless number that predicts the occurrence of surface seal. this scaling indicates that the most significant parameter governing the occurrence of surface seal is the velocity of the airflow rushing into the cavity. this is in contrast to the current understanding which considers the impact velocity as the determinant parameter.
|
arxiv:1912.05785
|
after major disturbances, power system behavior is governed by the dynamic characteristics of its assets and protection schemes. therefore, modeling protection devices is essential for performing accurate stability studies. modeling all the protection devices in a bulk power system is an intractable task due to the limitations of current stability software, and the difficulty of maintaining and updating the data for thousands of protection devices. one of the critical protection schemes that is not adequately modeled in stability studies is distance relaying. therefore, this paper proposes an algorithm that uses two methods to identify the critical distance relays to be modeled in stability studies : ( i ) apparent impedance monitoring, and ( ii ) the minimum voltage evaluation ( mve ). the algorithm is implemented in python 3. 6 and uses the ge positive sequence load flow analysis ( pslf ) software for performing stability studies. the performance of the algorithm is evaluated on the western electricity coordinating council ( wecc ) system data representing the 2018 summer - peak load. the results of the case studies representing various types of contingencies show that to have an accurate assessment of system behavior, modeling the critical distance relays identified by the algorithm suffices, and there is no need for modeling all the distance relays.
|
arxiv:2009.02500
|
pure dephasing is widely used in the literature to explain experimental observations on quantum dots in cavities. in many cases, its use is not enough and extra terms need to be " fictitiously " added to account for the observed data, as is the case of cavity pumping of an unknown source. here we controvert the validity of the pure dephasing mechanism as a source of decoherence and present a theoretical study based on the phonon - mediated coupling that can explain the emission spectrum and photon auto - and cross - correlation results in recent experiments without the need of any artificial assumptions. we also demonstrate that the phonon - mediated coupling accounts for unexplained features recently reported in measurements of photon auto - and cross - correlation functions. our work illuminates many of the debates in this field and opens up new possibilities for experimental verification and theoretical predictions.
|
arxiv:1906.10784
|
this is a very biased and incomplete survey of some basic notions, old and new results, as well as open problems concerning weinstein symplectic manifolds.
|
arxiv:1707.03442
|
an analytic quasi - periodic cocycle is a linear cocycle over a fixed ergodic torus translation of one or several variables, where the fiber action depends analytically on the base point. consider the space of all such cocycles of any given dimension and endow it with the uniform norm. assume that the translation vector satisfies a generic diophantine condition. we prove large deviation type estimates for the iterates of such cocycles, which, moreover, are stable under small perturbations of the cocycle. as a consequence of these uniform estimates, we establish continuity properties of the lyapunov exponents regarded as functions on this space of cocycles. this result builds upon our previous work on this topic and its proof uses an abstract continuity theorem of the lyapunov exponents which we derived in a recent monograph. the new feature of this paper is extending the availability of such results to cocycles that are identically singular ( i. e. non - invertible anywhere ), in the several variables torus translation setting. this feature is exactly what allows us, through a simple limiting argument, to obtain criteria for the positivity and simplicity of the lyapunov exponents of such cocycles. specializing to the family of cocycles corresponding to a block jacobi operator, we derive consequences on the continuity, positivity and simplicity of its lyapunov exponents, and on the continuity of its integrated density of states.
|
arxiv:1603.06851
|
due to the attractive features that domain wall fermions possess with respect to chiral symmetry, we continue our investigation of the light quark masses with this discretization. achieving reliable results, especially for $(m_u + m_d)/2$, requires strict control of systematic uncertainties. our present results were obtained on a quenched $\beta = 6.0$ lattice with spatial volume $\approx (1.5~{\rm fm})^3$. consequently we remark on effects of finite volume as well as finite extent in the fictitious fifth dimension. we compute the renormalization factors nonperturbatively and compare to the one - loop perturbative result.
|
arxiv:hep-lat/9909101
|
in this paper we see the evolution of a capitalized financial event e, with respect to a capitalization factor f, as the exponential map of a suitably defined lie group g ( f, e ), supported by the half - space of capitalized financial events having the same capital sign of e. the lie group g ( f, e ) depends upon the capitalization factor f and on the event e itself. after the extension of the definition of exponential map of a lie group, we shall eliminate the dependence on the financial event e, recognizing the presence of essentially one unique financial lie semigroup, supported by the entire space of capitalized financial events, determined by the capitalization factor f.
|
arxiv:1106.0562
|
acoustic signature is not always a major driver in aircraft design, as the blackbird relied more on its very high speed and altitude. one method to reduce helicopter rotor noise is modulated blade spacing. standard rotor blades are evenly spaced, and produce greater noise at a given frequency and its harmonics. using varied spacing between the blades spreads the noise or acoustic signature of the rotor over a greater range of frequencies. = = visibility = = the simplest technology is visual camouflage ; the use of paint or other materials to color and break up the lines of a vehicle or person. most stealth aircraft use matte paint and dark colors, and operate only at night. lately, interest in daylight stealth ( especially by the usaf ) has emphasized the use of gray paint in disruptive schemes, and it is assumed that yehudi lights could be used in the future to hide the airframe ( against the background of the sky, including at night, aircraft of any colour appear dark ) or as a sort of active camouflage. the original b - 2 design had wing tanks for a contrail - inhibiting chemical, alleged by some to be chlorofluorosulfonic acid, but this was replaced in the final design with a contrail sensor that alerts the pilot when he should change altitude and mission planning also considers altitudes where the probability of their formation is minimized. in space, mirrored surfaces can be employed to reflect views of empty space toward known or suspected observers ; this approach is compatible with several radar stealth schemes. careful control of the orientation of the satellite relative to the observers is essential, and mistakes can lead to detectability enhancement rather than the desired reduction. = = infrared = = an exhaust plume contributes a significant infrared signature. 
one means to reduce ir signature is to have a non - circular tail pipe ( a slit shape ) to minimize the exhaust cross sectional area and maximize the mixing of hot exhaust with cool ambient air ( see lockheed f - 117 nighthawk, rectangular nozzles on the lockheed martin f - 22 raptor, and serrated nozzle flaps on the lockheed martin f - 35 lightning ). often, cool air is deliberately injected into the exhaust flow to boost this process ( see ryan aqm - 91 firefly and northrop b - 2 spirit ). the stefan – boltzmann law shows how this results in less energy ( thermal radiation in infrared spectrum ) being released and thus reduces the heat signature. in some aircraft, the jet exhaust is vented above the wing surface to shield it from observers below, as in
|
https://en.wikipedia.org/wiki/Stealth_technology
|
have large language models ( llms ) developed a personality? the short answer is a resounding " we don ' t know! ". in this paper, we show that we do not yet have the right tools to measure personality in language models. personality is an important characteristic that influences behavior. as llms emulate human - like intelligence and performance in various tasks, a natural question to ask is whether these models have developed a personality. previous works have evaluated machine personality through self - assessment personality tests, which are a set of multiple - choice questions created to evaluate personality in humans. a fundamental assumption here is that human personality tests can accurately measure personality in machines. in this paper, we investigate the emergence of personality in five llms of different sizes ranging from 1. 5b to 30b. we propose the option - order symmetry property as a necessary condition for the reliability of these self - assessment tests. under this condition, the answer to self - assessment questions is invariant to the order in which the options are presented. we find that many llms ' personality test responses do not preserve option - order symmetry. we take a deeper look at llm test responses where option - order symmetry is preserved, and find that in these cases, llms do not take into account the situational statement being tested and produce the exact same answer irrespective of the situation being tested. we also identify the existence of inherent biases in these llms, which is the root cause of the aforementioned phenomenon and makes self - assessment tests unreliable. these observations indicate that self - assessment tests are not the correct tools to measure personality in llms. through this paper, we hope to draw attention to the shortcomings of current literature in measuring personality in llms and call for developing tools for machine personality measurement.
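the option - order symmetry property can be sketched as a simple invariance check : permute the options and ask whether the chosen option's content changes. the `toy_model_answer` responder below is a hypothetical stand - in for an llm, deliberately built with the positional bias discussed above.

```python
from itertools import permutations

def toy_model_answer(question, options):
    # Hypothetical responder that always picks the first option regardless
    # of content -- exactly the positional bias discussed above.
    return options[0]

def is_order_symmetric(model, question, options):
    """True iff the chosen option's *content* is invariant under every
    permutation of the option order."""
    answers = {model(question, list(perm)) for perm in permutations(options)}
    return len(answers) == 1

opts = ["agree", "neutral", "disagree"]
print(is_order_symmetric(toy_model_answer, "i enjoy parties.", opts))  # False
```

a model passing this check for every test item would satisfy the necessary (not sufficient) reliability condition the paper proposes.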
|
arxiv:2305.14693
|
in coalescing ballistic annihilation, infinitely many particles move with fixed velocities across the real line and, upon colliding, either mutually annihilate or generate a new particle. we compute the critical density in symmetric three - velocity systems with four - parameter reaction equations.
|
arxiv:2401.06852
|
we consider the family of integral operators $(k_{\alpha} f)(x)$ from $l^p[0,1]$ to $l^q[0,1]$ given by $$(k_{\alpha} f)(x) = \int_0^1 (1 - xy)^{\alpha - 1} \, f(y) \, \operatorname{d}\!y, \qquad 0 < \alpha < 1.$$ the main objective is to find upper bounds for the kolmogorov widths, where the $n$th kolmogorov width is the infimum of the deviation of $(k_{\alpha} f)$ from an $n$-dimensional subspace of $l^p[0,1]$ ( with the infimum taken over all $n$-dimensional subspaces ), and is therefore a measure of how well $k_{\alpha}$ can be approximated. we find upper bounds for the kolmogorov widths in question that decrease faster than $\exp(-\kappa \sqrt{n})$ for some positive constant $\kappa$.
|
arxiv:1612.03183
|
we consider the learning of multi - agent hawkes processes, a model containing multiple hawkes processes with shared endogenous impact functions and different exogenous intensities. in the framework of stochastic maximum likelihood estimation, we explore the associated risk bound. further, we consider the superposition of hawkes processes within the model, and demonstrate that under certain conditions such an operation is beneficial for tightening the risk bound. accordingly, we propose a stochastic optimization algorithm assisted with a diversity - driven superposition strategy, achieving better learning results with improved convergence properties. the effectiveness of the proposed method is verified on synthetic data, and its potential to solve the cold - start problem of sequential recommendation systems is demonstrated on real - world data.
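for illustration, a univariate hawkes process with an exponential impact function can be simulated by ogata's thinning algorithm, sketched below. the multi - agent structure ( shared impact functions, distinct exogenous intensities ) and the paper's estimation machinery are not reproduced ; parameter values are arbitrary.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Ogata thinning for intensity mu + sum_i alpha*beta*exp(-beta*(t - t_i))."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    while t < horizon:
        # Between events the intensity only decays, so its current value
        # is a valid upper bound for the thinning step.
        lam_bar = mu + sum(alpha * beta * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * beta * math.exp(-beta * (t - s)) for s in events)
        if rng.random() <= lam_t / lam_bar:  # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events

ev = simulate_hawkes(mu=0.5, alpha=0.5, beta=1.0, horizon=50.0)
print(len(ev) > 0)
```

with branching ratio $\alpha < 1$ the process is stable, which is the regime in which risk bounds of the kind discussed above are typically derived.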
|
arxiv:1802.04725
|
metastable cosmic strings appear in models of new physics with a two - step symmetry breaking $g \to h \to 1$, where $\pi_1(h) \neq 0$ and $\pi_1(g) = 0$. they decay via monopole - antimonopole pair creation inside. conventionally, the breaking rate has been estimated by an infinitely thin string approximation, which requires a large hierarchy between the symmetry breaking scales. in this paper, we reexamine it by taking into account the finite sizes of both the cosmic string and the monopole. we obtain a robust lower limit on the tunneling factor $e^{-s_b}$ even for regimes where the conventional estimate is unreliable. in particular, it is relevant to the cosmic string interpretation of the gravitational wave signals recently reported by pulsar timing array experiments.
|
arxiv:2312.15662
|
asymptotics of the variances of many cost measures in random digital search trees are often notoriously messy and involved to obtain. a new approach is proposed to facilitate such an analysis for several shape parameters on random symmetric digital search trees. our approach starts from a more careful normalization at the level of poisson generating functions, which then provides an asymptotically equivalent approximation to the variance in question. several new ingredients are also introduced, such as a combined use of the laplace and mellin transforms and a simple, mechanical technique for justifying the analytic de - poissonization procedures involved. the methodology we develop can be easily adapted to many other problems with an underlying binomial distribution. in particular, the less expected and somewhat surprising $n(\log n)^2$ - variance for certain notions of total path - length is also clarified.
|
arxiv:1001.0095
|
we investigate weak coin flipping, a fundamental cryptographic primitive where two distrustful parties need to remotely establish a shared random bit. a cheating player can try to bias the output bit towards a preferred value. for weak coin flipping the players have known opposite preferred values. a weak coin - flipping protocol has a bias $\epsilon$ if neither player can force the outcome towards their preferred value with probability more than $\frac{1}{2} + \epsilon$. while it is known that all classical protocols have $\epsilon = \frac{1}{2}$, mochon showed in 2007 [ arxiv : 0711. 4114 ] that quantumly weak coin flipping can be achieved with arbitrarily small bias ( near perfect ) but the best known explicit protocol has bias $1/6$ ( also due to mochon, 2005 [ phys. rev. a 72, 022341 ] ). we propose a framework to construct new explicit protocols achieving biases below $1/6$. in particular, we construct explicit unitaries for protocols with bias approaching $1/10$. to go below, we introduce what we call the elliptic monotone align ( ema ) algorithm which, together with the framework, allows us to numerically construct protocols with arbitrarily small biases.
|
arxiv:1811.02984
|
in this paper, on the basis of the onsager - wilson theory of strong binary electrolyte solutions, we completely work out the solutions of the governing equations ( onsager - fuoss equations and poisson equations ) for nonequilibrium pair correlation functions and ionic potentials, and the solutions of the stokes equation for the velocity and pressure, in the case of strong binary electrolyte solutions under the influence of an external electric field of arbitrary strength. the solutions are calculated in the configuration space as functions of coordinates and reduced field strength. thus the axial and transversal components of the velocity and the accompanying nonequilibrium pressure are explicitly obtained and computed for all values of the position coordinates. computation of velocity profiles makes it possible to visualize the movement and distortion of the ion atmosphere under the influence of an external electric field.
|
arxiv:1207.1144
|
we study the $\pm j$ three - dimensional ising model with a spatially uniaxially anisotropic bond randomness on the simple cubic lattice. the $\pm j$ random exchange is applied in the $xy$ planes, whereas in the z direction only a ferromagnetic exchange is used. after sketching the phase diagram and comparing it with the corresponding isotropic case, the system is studied, at the ferromagnetic - paramagnetic transition line, using parallel tempering and a convenient concentration of antiferromagnetic bonds ( $p_z = 0;\ p_{xy} = 0.176$ ). the numerical data point clearly to a second - order ferromagnetic - paramagnetic phase transition belonging to the same universality class as the 3d random ising model. the smooth finite - size behavior of the effective exponents describing the peaks of the logarithmic derivatives of the order parameter provides an accurate estimate of the critical exponent $1/\nu = 1.463(3)$, and a collapse analysis of magnetization data gives an estimate $\beta/\nu = 0.516(7)$. these results are in agreement with previous studies, and in particular with those of the isotropic $\pm j$ three - dimensional ising model at the ferromagnetic - paramagnetic transition line, indicating the irrelevance of the introduced anisotropy.
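a minimal metropolis sketch of the anisotropic $\pm j$ setup described above : random $\pm j$ couplings within each $xy$ plane and purely ferromagnetic bonds along z. lattice size, temperature, and sweep count are toy values for illustration ; the paper's actual study relies on parallel tempering, which is not reproduced here.

```python
import math
import random

L, T = 4, 4.0
rng = random.Random(1)
p_xy = 0.176  # concentration of antiferromagnetic in-plane bonds
# spins[x][y][z] in {-1, +1}; Jx/Jy couple each site to its +x/+y neighbor,
# while every z bond is ferromagnetic (J_z = +1).
spins = [[[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)] for _ in range(L)]
Jx = [[[(-1 if rng.random() < p_xy else 1) for _ in range(L)] for _ in range(L)] for _ in range(L)]
Jy = [[[(-1 if rng.random() < p_xy else 1) for _ in range(L)] for _ in range(L)] for _ in range(L)]

def local_field(x, y, z):
    """Sum of J_ij * s_j over the six neighbors (periodic boundaries)."""
    return (Jx[x][y][z] * spins[(x + 1) % L][y][z]
            + Jx[(x - 1) % L][y][z] * spins[(x - 1) % L][y][z]
            + Jy[x][y][z] * spins[x][(y + 1) % L][z]
            + Jy[x][(y - 1) % L][z] * spins[x][(y - 1) % L][z]
            + spins[x][y][(z + 1) % L] + spins[x][y][(z - 1) % L])

for _ in range(100 * L ** 3):  # Metropolis single-spin flips
    x, y, z = rng.randrange(L), rng.randrange(L), rng.randrange(L)
    dE = 2 * spins[x][y][z] * local_field(x, y, z)
    if dE <= 0 or rng.random() < math.exp(-dE / T):
        spins[x][y][z] *= -1

m = abs(sum(s for plane in spins for row in plane for s in row)) / L ** 3
print(0.0 <= m <= 1.0)  # True
```

measuring such magnetizations across temperatures and disorder realizations is the raw material for the finite - size scaling analysis the abstract describes.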
|
arxiv:1208.0883
|
simulators play a critical role in aerial robotics both in and out of the classroom. we present rotorpy, a simulation environment written entirely in python intentionally designed to be a lightweight and accessible tool for robotics students and researchers alike to probe concepts in estimation, planning, and control for aerial robots. rotorpy simulates the 6 - dof dynamics of a multirotor robot including aerodynamic wrenches, obstacles, actuator dynamics and saturation, realistic sensors, and wind models. this work describes the modeling choices for rotorpy, benchmark testing against real data, and a case study using the simulator to design and evaluate a model - based wind estimator.
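the following is not rotorpy's api ; as a hedged illustration of the kind of dynamics such a simulator integrates, the sketch below euler - integrates a 1 - d point mass under thrust, gravity, and linear aerodynamic drag, with made - up mass and drag numbers.

```python
def simulate(thrust, mass=0.5, drag=0.1, g=9.81, dt=0.01, steps=100):
    """Euler-integrate vertical position/velocity under thrust, gravity, drag.
    All parameter values are illustrative, not RotorPy's."""
    z, vz = 0.0, 0.0
    for _ in range(steps):
        az = thrust / mass - g - (drag / mass) * vz
        vz += az * dt
        z += vz * dt
    return z, vz

z, vz = simulate(thrust=0.5 * 9.81)  # exact hover thrust: stays at rest
print(abs(z) < 1e-9 and abs(vz) < 1e-9)  # True
```

a full multirotor simulator extends this loop to 6 - dof rigid - body dynamics with attitude, actuator, sensor, and wind models, as the abstract describes.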
|
arxiv:2306.04485
|
for a graph $h$, the turán number of $h$, denoted by ex$(n, h)$, is the maximum number of edges of an $n$-vertex $h$-free graph. let $g(n, h)$ denote the maximum number of edges not contained in any monochromatic copy of $h$ in a $2$-edge-coloring of $k_n$. a wheel $w_m$ is a graph formed by connecting a single vertex to all vertices of a cycle of length $m - 1$. the turán number of $w_{2k}$ was determined by simonovits in the 1960s. in this paper, we determine ex$(n, w_{2k+1})$ when $n$ is sufficiently large. we also show that, for sufficiently large $n$, $g(n, w_{2k+1}) = \mbox{ex}(n, w_{2k+1})$, which confirms a conjecture posed by keevash and sudakov for odd wheels.
|
arxiv:2001.02628
|
for any cardinal number $\kappa$ and an index set $\gamma$, the $\sigma_\kappa$-product of real lines consists of elements of ${\mathbb r}^\gamma$ having $<\kappa$ nonzero coordinates. a compact space $k$ is $\kappa$-corson compact if it can be embedded into such a space for some $\gamma$. the class of ( $\omega_1$- ) corson compact spaces has been intensively studied over the last decades. we discuss properties of $\kappa$-corson compacta for various cardinal numbers $\kappa$ as well as properties of related boolean algebras and spaces of continuous functions. we present here a detailed discussion of the class of $\omega$-corson compacta, extending the results of nakhmanson and yakovlev. for $\kappa > \omega$, our results on $\kappa$-corson compact spaces are related to the line of research originated by kalenda and bell and marciszewski, and continued by bonnet, kubis and todorcevic in their recent paper.
|
arxiv:2107.02513
|
the spin of particles on a non - commutative geometry is investigated within the framework of the representation theory of the q - deformed poincare algebra. an overview of the q - lorentz algebra is given, including its representation theory with explicit formulas for the q - clebsch - gordan coefficients. the vectorial form of the q - lorentz algebra ( wess ), the quantum double form ( woronowicz ), and the dual of the q - lorentz group ( majid ) are shown to be essentially isomorphic. the construction of q - minkowski space and the q - poincare algebra is reviewed. the q - euclidean sub - algebra, generated by rotations and translations, is studied in detail. the results allow for the construction of the q - pauli - lubanski vector, which, in turn, is used to determine the q - spin casimir and the q - little algebras for both the massive and the massless case. irreducible spin representations of the q - poincare algebra are constructed in an angular momentum basis, accessible to physical interpretation. it is shown how representations can be constructed, alternatively, by the method of induction. reducible representations by q - lorentz spinor wave functions are considered. wave equations on these spaces are found, demanding that the spaces of solutions reproduce the irreducible representations. as generic examples the q - dirac equation and the q - maxwell equations are computed explicitly and their uniqueness is shown.
|
arxiv:math/0110219
|
in its war in afghanistan. a small number of practitioners reported it to be more useful than the united states army ' s program of record, the distributed common ground system ( dcgs - a ). california congressman duncan d. hunter complained of united states department of defense obstacles to its wider use in 2012. palantir has also been reported to be working with various u. s. police departments, for example accepting a contract in 2013 to help the northern california regional intelligence center build a controversial license plates database for california. in 2012 the new orleans police department partnered with palantir to create a predictive policing program. in 2014, us immigration and customs enforcement ( ice ) awarded palantir a $ 41 million contract to build and maintain a new intelligence system called investigative case management ( icm ) to track personal and criminal records of legal and illegal immigrants. this application was originally conceived by ice ' s office of homeland security investigations ( hsi ), allowing its users access to intelligence platforms maintained by other federal and private law enforcement entities. the system reached its " final operation capacity " under the trump administration in september 2017. palantir took over the pentagon ' s project maven contract in 2019 after google decided not to continue developing ai unmanned drones used for bombings and intelligence. in 2024, palantir emerged as a " trump trade " for further enforcing the law on illegal immigrants and profiting on federal spending for national security and immigration. = = = british national health service ( nhs ) = = = the firm has contracts relating to patient data from the british national health service. in 2020, it was awarded an emergency non - competitive contract to mine covid - 19 patient data and consolidate government databases to help ministers and officials respond to the pandemic. the contract was valued at more than £23.5 million and was extended for two more years. the awarding of the contract without competition was heavily criticised, prompting the nhs to pledge an open and transparent procurement process for any future data contract. the firm was encouraged by liam fox " to expand their software business " in britain. it was said to be " critical to the success of the vaccination and ppe programmes ", but its involvement in the nhs was controversial among civil liberties groups. conservative mp david davis called for a judicial review into the sharing of patient data with palantir. the procurement of a £480m federated data platform by nhs england, launched in january 2023, has been described as a ' must win ' for palantir. the procurement has been
|
https://en.wikipedia.org/wiki/Palantir_Technologies
|
We classify (up to affine equivalence) all 7-dimensional flat manifolds with a cyclic holonomy group.
|
arxiv:1101.2633
|
The omnipresence of deep learning architectures such as deep convolutional neural networks (CNNs) is fueled by the synergistic combination of ever-increasing labeled datasets and specialized hardware. Despite this indisputable success, the reliance on huge amounts of labeled data and specialized hardware can be a limiting factor when approaching new applications. To help alleviate these limitations, we propose an efficient learning strategy for layer-wise unsupervised training of deep CNNs on conventional hardware in acceptable time. Our proposed strategy consists of randomly convexifying the reconstruction contractive auto-encoding (RCAE) learning objective and solving the resulting large-scale convex minimization problem in the frequency domain via coordinate descent (CD). The main advantages of our proposed learning strategy are: (1) a single tunable optimization parameter; (2) fast and guaranteed convergence; (3) possibilities for full parallelization. Numerical experiments show that our proposed learning strategy scales (in the worst case) linearly with image size, number of filters, and filter size.
|
arxiv:1611.09232
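The abstract above leans on a standard building block, coordinate descent for a convex objective. As a minimal sketch of that solver family (a generic least-squares CD pass, not the paper's convexified RCAE objective or its frequency-domain formulation), one can write:

```python
import numpy as np

def coordinate_descent_lstsq(A, b, n_iters=100):
    """Cyclic coordinate descent for the convex problem min_x ||Ax - b||^2.
    Illustrative only: the paper solves a convexified RCAE objective in the
    frequency domain, which this toy solver does not reproduce."""
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x                    # current residual b - Ax
    col_sq = (A ** 2).sum(axis=0)    # squared column norms
    for _ in range(n_iters):
        for j in range(n):
            if col_sq[j] == 0:
                continue
            # exact minimizer of the 1-D subproblem along coordinate j
            step = A[:, j] @ r / col_sq[j]
            x[j] += step
            r -= step * A[:, j]
    return x
```

Each inner step solves its one-dimensional subproblem exactly, which is what gives CD guaranteed convergence on convex problems with essentially one tunable parameter (the iteration budget).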
|
In this paper, we propose a novel model, RevGAN, that automatically generates controllable and personalized user reviews based on arbitrarily given sentimental and stylistic information. RevGAN utilizes the combination of three novel components: self-attentive recursive autoencoders, conditional discriminators, and personalized decoders. We test its performance on several real-world datasets, where our model significantly outperforms state-of-the-art generation models in terms of sentence quality, coherence, personalization, and human evaluations. We also empirically show that the generated reviews cannot easily be distinguished from organically produced reviews and that they follow the same statistical linguistic laws.
|
arxiv:1910.03506
|
We present a variant of the Agrawal-Biswas algorithm, a Monte Carlo algorithm which tests the primality of an integer $n$ by checking whether or not $(x + a)^n$ and $x^n + a$ are equivalent in a residue ring of $\mathbb{Z}/n\mathbb{Z}[x]$. The variant that we present is also a randomization of Lenstra Jr. and Pomerance's improvement to the Agrawal-Kayal-Saxena deterministic primality test. We show that our variant of the Agrawal-Biswas algorithm can be used with the Miller-Rabin primality test to yield an algorithm which is slower than the Miller-Rabin test but relatively more accurate.
|
arxiv:1810.09651
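To make the congruence concrete: for prime $n$, the Frobenius endomorphism gives $(x+a)^n \equiv x^n + a$ in $\mathbb{Z}/n\mathbb{Z}[x]$, and Agrawal-Biswas-style tests check this identity in a small quotient ring for random $a$. The sketch below works modulo $(n, x^r - 1)$; the choice of quotient ring, $r$, and trial count are illustrative assumptions, not the parameters of the paper's variant.

```python
import random

def is_probable_prime_ab(n, r=5, trials=3):
    """Monte Carlo primality check in the spirit of Agrawal-Biswas:
    test whether (x + a)^n == x^n + a in Z_n[x]/(x^r - 1) for random a.
    The ring x^r - 1 and the parameters r, trials are illustrative,
    not the choices made in the paper."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False

    def polymul(p, q):
        # multiply two degree-<r polynomials mod (n, x^r - 1)
        out = [0] * r
        for i, pi in enumerate(p):
            if pi:
                for j, qj in enumerate(q):
                    out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
        return out

    def polypow(p, e):
        # square-and-multiply exponentiation in the quotient ring
        result = [1] + [0] * (r - 1)
        while e:
            if e & 1:
                result = polymul(result, p)
            p = polymul(p, p)
            e >>= 1
        return result

    for _ in range(trials):
        a = random.randrange(1, n)
        lhs = polypow([a, 1] + [0] * (r - 2), n)  # (x + a)^n
        rhs = [0] * r
        rhs[n % r] = 1                            # x^n reduces to x^(n mod r)
        rhs[0] = (rhs[0] + a) % n                 # ... + a
        if lhs != rhs:
            return False
    return True
```

Primes always pass (the identity is exact); composites fail with high probability per trial, which is the Monte Carlo trade-off the abstract weighs against Miller-Rabin.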
|
"Brane supersymmetry breaking" is a peculiar string-scale mechanism that can unpair Bose and Fermi excitations in orientifold models. It results from the simultaneous presence, in the vacuum, of collections of D-branes and orientifolds that are not mutually BPS, and is closely tied to the scale of string excitations. It also leaves behind, for a mixing of dilaton and internal breathing mode, an exponential potential that is just too steep for a scalar to emerge from the initial singularity while descending it. As a result, in this class of models the scalar can generically bounce off the exponential wall, and this dynamics brings along, in the power spectrum, an infrared depression typically followed by a pre-inflationary peak. We elaborate on a possible link between this type of bounce and the low-$\ell$ end of the CMB angular power spectrum. For the first 32 multipoles, one can reach a 50% reduction in $\chi^{\,2}$ with respect to the standard $\Lambda$CDM setting.
|
arxiv:1411.6396
|
We develop a generalized grand canonical potential for the ballistic nonequilibrium electron distribution in a metal nanowire with a finite applied bias voltage. Coulomb interactions are treated in the self-consistent Hartree approximation, in order to ensure gauge invariance. Using this formalism, we investigate the stability and cohesive properties of metallic nanocylinders at ultrahigh current densities. A linear stability analysis shows that metal nanowires with certain {\em magic conductance values} can support current densities up to $10^{11}$ A/cm$^2$, which would vaporize a macroscopic piece of metal. This finding is consistent with experimental studies of gold nanowires. Interestingly, our analysis also reveals the existence of reentrant stability zones -- geometries that are stable only under an applied bias.
|
arxiv:cond-mat/0411058
|
We propose a stochastic process of many interacting agents, inspired by the rank-based supplanting dynamics commonly observed in a group of Japanese macaques. In order to characterize the breaking of permutation symmetry with respect to agents' rank in the stochastic process, we introduce a rank-dependent quantity, overlap centrality, which quantifies how often a given agent overlaps with the other agents. We give a sufficient condition in a wide class of the models such that overlap centrality shows perfect correlation in terms of the agents' rank in the zero-supplanting limit. We also discuss a singularity of the correlation in the case of interaction induced by a Potts energy.
|
arxiv:2209.02336
|
Vacancies in the L1 shell of atoms and molecules can decay non-radiatively via Coster-Kronig decay, whereby the vacancy is filled by an electron from the L2,3 shell while a second electron is emitted into the ionization continuum. This process is akin to Auger decay, but in contrast to Auger electrons, Coster-Kronig electrons have rather low kinetic energies of less than 50 eV. In the present work, we extend recently introduced methods for the construction of molecular Auger spectra that are based on complex-scaled equation-of-motion coupled-cluster theory to Coster-Kronig decay. We compute ionization energies as well as total and partial decay widths for the 2s$^{-1}$ states of argon and hydrogen sulfide and construct the L1L2,3M Coster-Kronig and L1MM Auger spectra of these species. Whereas our final spectra are in good agreement with the available experimental and theoretical data, substantial disagreements are found for various branching ratios, suggesting that spin-orbit coupling makes a major impact on Coster-Kronig decay already in the third period of the periodic table.
|
arxiv:2407.17644
|
The microquasar GX 339-4, known to exhibit powerful compact jets that dominate its radio to near-infrared emission, entered an outburst in 2010 for the fifth time in about fifteen years. An extensive radio to X-ray multi-wavelength campaign was immediately triggered, and we report here on ESO/FORS2+ISAAC optical and near-infrared spectroscopic observations, supported by ATCA radio and RXTE/Swift X-ray quasi-simultaneous data. GX 339-4 was observed at three different epochs, once in the soft state and twice in the hard state. In the soft state, the optical and near-infrared continuum is largely consistent with the Rayleigh-Jeans tail of a thermal process. As an explanation, we favour irradiation of the outer accretion disc by its inner regions, enhanced by disc warping. An excess is also present at low frequencies, likely due to an M subgiant companion star. During the first hard state, the optical/near-infrared continuum is well described by the optically thin synchrotron emission of the compact jet combined with disc irradiation and perhaps another component peaking in the ultraviolet. The spectral break where the jet transits from the optically thick to thin regimes, located below 1.20e14 Hz, is not detected, and the extension of the optically thin synchrotron is consistent with the 3-50 keV spectrum. In contrast, the emission during the second hard state is more difficult to understand and points toward a more complex jet continuum. In both cases, the near-infrared continuum is found to be variable on timescales at least as short as 20 s, although these variabilities are smoothed out beyond a few hundred seconds. This implies rapid variations -- in flux and frequency -- of the location of the spectral break, i.e. dramatic short-timescale changes of the physical conditions at the base of the jet, such as the magnetic field and/or the base radius.
|
arxiv:1202.3984
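The soft-state claim above rests on a simple scaling: on the Rayleigh-Jeans tail, brightness goes as $B_\nu = 2\nu^2 k_B T / c^2$, i.e. $F_\nu \propto \nu^2$. A one-line check of that scaling (SI units; a generic formula, not tied to the GX 339-4 data):

```python
def rayleigh_jeans_brightness(nu_hz, temp_k):
    """Rayleigh-Jeans brightness B_nu = 2 nu^2 k_B T / c^2,
    in W m^-2 Hz^-1 sr^-1 (SI units)."""
    K_B = 1.380649e-23   # Boltzmann constant, J/K
    C = 2.99792458e8     # speed of light, m/s
    return 2.0 * nu_hz ** 2 * K_B * temp_k / C ** 2
```

Doubling the frequency quadruples the brightness: this $\nu^2$ signature is what identifies a thermal Rayleigh-Jeans continuum in the soft state.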
|
We present a microscopic theory for collective excitations of quantum anomalous Hall ferromagnets (QAHF) in twisted bilayer graphene. We calculate the spin magnon and valley magnon spectra by solving Bethe-Salpeter equations, and verify the stability of QAHF. We extract the spin stiffness from the gapless spin wave dispersion, and estimate the energy cost of a skyrmion-antiskyrmion pair, which is found to be comparable in energy with the Hartree-Fock gap. The valley wave mode is gapped, implying that the valley polarized state is more favorable compared to the valley coherent state. Using a nonlinear sigma model, we estimate the valley ordering temperature, which is considerably reduced from the mean-field transition temperature due to thermal excitations of valley waves.
|
arxiv:1908.05417
|
Top quarks produced in multi-TeV processes will have large Lorentz boosts, and their decay products will be highly collimated. In semileptonic decay modes, this often leads to the merging of the b-jet and the hard lepton according to standard event reconstructions, which can complicate new physics searches. Here we explore ways of efficiently recovering this signal in the muon channel at the LHC. We perform a particle-level study of events with muons produced inside of boosted tops, as well as in generic QCD jets and from W-strahlung off of hard quarks. We characterize the discriminating power of cuts previously explored in the literature, as well as two new ones. We find a particularly powerful isolation variable which can potentially reject light QCD jets with hard embedded muons at the $10^3$ level while retaining 80-90% of the tops. This can also be fruitfully combined with other cuts for O(1) greater discrimination. For W-strahlung, a simple pT-scaled maximum $\Delta R$ cut performs comparably to a highly idealized top-mass reconstruction, rejecting an O(1) fraction of the background with percent-scale loss of signal. Using these results, we suggest a set of well-motivated baseline cuts for any physics analysis involving semileptonic top quarks at TeV-scale momenta, using neither b-tagging nor missing energy as discriminators. We demonstrate the utility of our cuts in searching for resonances in the top-antitop invariant mass spectrum. For example, our results suggest that 100 fb$^{-1}$ of data from a 14 TeV LHC could be used to discover a warped KK gluon up to 4.5 TeV or higher.
|
arxiv:1007.2221
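The pT-scaled maximum-$\Delta R$ cut mentioned above exploits the collimation of a boosted top's decay products, which shrinks roughly as $\Delta R \sim 2 m_{top}/p_T$. A sketch of such a cut follows; the $\Delta R$ formula is standard, but the scaling coefficient is a hypothetical placeholder, not the value tuned in the paper:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R = sqrt(d_eta^2 + d_phi^2),
    with d_phi wrapped into [-pi, pi)."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def passes_pt_scaled_dr_cut(mu_eta, mu_phi, jet_eta, jet_phi, top_pt_gev,
                            coeff_gev=350.0):
    """Require the muon to lie within a pT-dependent cone of the b-jet:
    Delta R < coeff / pT. coeff_gev = 350 (roughly 2 * m_top) is an
    illustrative assumption, not the paper's tuned value."""
    return delta_r(mu_eta, mu_phi, jet_eta, jet_phi) < coeff_gev / top_pt_gev
```

The phi-wrapping matters: without it, a muon at $\phi = 3.0$ and a jet at $\phi = -3.0$ would look back-to-back instead of nearly collinear.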
|