The aim of image-based virtual try-on is to generate realistic images of individuals wearing target garments, ensuring that the pose, body shape, and characteristics of the target garment are accurately preserved. Existing methods often fail to reproduce the fine details of target garments effectively and lack generalizability to new scenarios. In the proposed method, the person's initial garment is completely removed. Subsequently, a precise warping is performed using the predicted keypoints to fully align the target garment with the body structure and pose of the individual. Based on the warped garment, a body segmentation map is more accurately predicted. Then, using an alignment-aware segment normalization, the misaligned areas between the warped garment and the predicted garment region in the segmentation map are removed. Finally, the generator produces the final image with high visual quality, reconstructing the precise characteristics of the target garment, including its overall shape and texture. This approach emphasizes preserving garment characteristics and improving adaptability to various poses, providing better generalization for diverse applications.
|
arxiv:2504.03807
|
We report the discovery of 158 previously undetected dwarf galaxies in the Fornax cluster central regions using a deep coadded $u, g$ and $i$-band image obtained with the DECam wide-field camera mounted on the 4-meter Blanco telescope at the Cerro Tololo Interamerican Observatory as part of the {\it Next Generation Fornax Survey} (NGFS). The new dwarf galaxies have quasi-exponential light profiles, effective radii $0.1 < r_e < 2.8$ kpc and average effective surface brightness values $22.0 < \mu_i < 28.0$ mag arcsec$^{-2}$. We confirm the existence of ultra-diffuse galaxies (UDGs) in the Fornax core regions that resemble counterparts recently discovered in the Virgo and Coma galaxy clusters. We also find extremely low surface brightness NGFS dwarfs, which are several magnitudes fainter than the classical UDGs. The faintest dwarf candidate in our NGFS sample has an absolute magnitude of $M_i = -8.0$ mag. The nucleation fraction of the NGFS dwarf galaxy sample appears to decrease as a function of their total luminosity, reaching from a nucleation fraction of $>75\%$ at luminosities brighter than $M_i \simeq -15.0$ mag to $0\%$ at luminosities fainter than $M_i \simeq -10.0$ mag. The two-point correlation function analysis of the NGFS dwarf sample shows an excess on length scales below $\sim 100$ kpc, pointing to the clustering of dwarf galaxies in the Fornax cluster core.
|
arxiv:1510.02475
|
Rapidly accreting massive protostars undergo a phase of deuterium shell burning during pre-main sequence evolution that causes them to swell to tenths of an AU in radius. During this phase, those with close binary companions will overflow their Roche lobes and begin transferring mass. Since massive stars frequently have companions at distances well under 1 AU, this process may affect the early evolution of a substantial fraction of massive stars. We use a simple protostellar evolution model to determine the range in accretion rates, mass ratios, and orbital separations for which mass transfer will occur, and we compute approximately the stability and final outcome of the transfer process. We discuss how mass transfer affects the demographics of massive binaries, and show that it provides a natural explanation for the heretofore unexplained population of massive "twins", high-mass binaries with mass ratios very close to unity.
|
arxiv:astro-ph/0611822
|
Social media platforms have been exploited to disseminate misinformation in recent years. Widespread online misinformation has been shown to affect users' beliefs and is connected to social impacts such as polarization. In this work, we focus on misinformation's impact on specific user behavior and aim to understand whether general Twitter users changed their behavior after being exposed to misinformation. We compare the before and after behavior of exposed users to determine whether the frequency of the tweets they posted, or the sentiment of their tweets, underwent any significant change. Our results indicate that users overall exhibited statistically significant changes in behavior across some of these metrics. Through language distance analysis, we show that exposed users were already different from baseline users before the exposure. We also study the characteristics of two specific user groups, multi-exposure and extreme change groups, which were potentially highly impacted. Finally, we study whether the changes in the behavior of the users after exposure to misinformation tweets vary based on the number of their followers or the number of followers of the tweet authors, and find that their behavioral changes are all similar.
|
arxiv:2111.00700
|
Biological membranes typically contain a large number of different components dispersed in small concentrations in the main membrane phase, including proteins, sugars, and lipids of varying geometrical properties. Most of these components do not bind the cargo. Here, we show that such 'inert' components can be crucial for precise control of cross-membrane trafficking. Using a statistical mechanics model and molecular dynamics simulations, we demonstrate that the presence of inert membrane components of small isotropic curvatures dramatically influences cargo endocytosis, even if the total spontaneous curvature of such a membrane remains unchanged. Curved lipids, such as cholesterol, as well as asymmetrically included proteins and tethered sugars, can hence all actively participate in controlling membrane trafficking of nanoscopic cargo. We find that even a low-level expression of curved inert membrane components can determine the membrane selectivity towards the cargo size, and can be used to selectively target membranes of certain compositions. Our results suggest a robust and general way to control cargo trafficking by adjusting the membrane composition without needing to alter the concentration of receptors or the average membrane curvature. This study indicates that cells can prepare for any trafficking event by incorporating curved inert components in either of the membrane leaflets.
|
arxiv:1712.10147
|
In current AFM experiments the distribution of unfolding times, $P(t)$, is measured by applying a constant stretching force $f_S$, from which the apparent unfolding rate is obtained. Describing the complexity of the underlying energy landscape requires additional probes that can incorporate the dynamics of tension propagation and relaxation of the polypeptide chain upon force quench. We introduce a theory of force correlation spectroscopy (FCS) to map the parameters of the energy landscape of proteins. In FCS the joint distribution $P(t, T)$ of folding and unfolding times is constructed by repeated application of cycles of stretching at constant $f_S$, separated by release periods $T$ during which the force is quenched to $f_Q < f_S$. During the release period, the protein can collapse to a manifold of compact states or refold. We show that $P(t, T)$ can be used to resolve the kinetics of unfolding as well as the formation of native contacts, and to extract the parameters of the energy landscape using chain extension as the reaction coordinate. We illustrate the utility of the proposed formalism by analyzing simulations of unfolding-refolding trajectories of a coarse-grained protein S1 with beta-sheet architecture for several values of $f_S$, $T$, and $f_Q = 0$. The simulations of stretch-relax trajectories are used to map many of the parameters that characterize the energy landscape of S1.
|
arxiv:cond-mat/0601324
|
In this note we prove that there is a convex domain for which the $\infty$-harmonic potential is not a first $\infty$-eigenfunction.
|
arxiv:1210.3303
|
We propose a finite discretization for the black hole geometry and dynamics. We realize our proposal in the case of extremal black holes, for which the radial and temporal near-horizon geometry is known to be AdS$_2 = SL(2,\mathbb{R})/SO(1,1,\mathbb{R})$. We implement its discretization by replacing the set of real numbers $\mathbb{R}$ with the set of integers modulo $N$, with AdS$_2$ going over to the finite geometry AdS$_2[N] = SL(2,\mathbb{Z}_N)/SO(1,1,\mathbb{Z}_N)$. We model the dynamics of the microscopic degrees of freedom by generalized Arnol'd cat maps, ${\sf A} \in SL(2,\mathbb{Z}_N)$, which are isometries of the geometry at both the classical and quantum levels. These exhibit well-studied properties of strong arithmetic chaos, dynamical entropy, nonlocality, and factorization in the cutoff discretization $N$, which are crucial for fast quantum information processing. We construct, finally, a new kind of unitary and holographic correspondence for AdS$_2[N]$/CFT$_1[N]$, via coherent states of both the bulk and boundary geometries.
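As a concrete illustration of the discretized dynamics, the following minimal Python sketch iterates a cat map over $\mathbb{Z}_N$; the matrix $A$ here is the classic Arnol'd choice and merely stands in for the general elements of $SL(2,\mathbb{Z}_N)$ considered in the paper.

```python
import numpy as np

def cat_map_orbit(point, N, steps, A=((2, 1), (1, 1))):
    """Iterate a map A in SL(2, Z_N) on a point of the discrete phase space Z_N x Z_N."""
    A = np.array(A, dtype=np.int64)
    assert (A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]) % N == 1  # det A = 1 (mod N)
    x = np.array(point, dtype=np.int64) % N
    orbit = [tuple(int(v) for v in x)]
    for _ in range(steps):
        x = (A @ x) % N
        orbit.append(tuple(int(v) for v in x))
    return orbit

# Orbits are exactly periodic: the dynamics factorizes through the cutoff N.
print(cat_map_orbit((1, 0), N=5, steps=10))
```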
|
arxiv:1306.5670
|
Parallel tempering (PT), also known as replica exchange, is the go-to workhorse for simulations of multi-modal distributions. The key to the success of PT is the adoption of efficient swap schemes. The popular deterministic even-odd (DEO) scheme exploits non-reversibility and has successfully reduced the communication cost from $O(P^2)$ to $O(P)$ given sufficiently many $P$ chains. However, such an innovation largely disappears in big data settings due to the limited number of chains and infrequent bias-corrected swaps. To handle this issue, we generalize the DEO scheme to promote non-reversibility and propose a few solutions to tackle the underlying bias caused by the geometric stopping time. Notably, in big data scenarios, we obtain an appealing communication cost $O(P \log P)$ based on the optimal window size. In addition, we adopt stochastic gradient descent (SGD) with large and constant learning rates as exploration kernels. Such a user-friendly nature enables us to conduct approximation tasks for complex posteriors without much tuning cost.
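A minimal sketch of one DEO round follows, with the Metropolis acceptance test, bias corrections, and window-size optimization from the paper abstracted behind a user-supplied accept function.

```python
import random

def deo_round(chains, t, accept):
    """One deterministic even-odd round: even rounds attempt swaps on pairs
    (0,1), (2,3), ...; odd rounds on pairs (1,2), (3,4), ...."""
    start = 0 if t % 2 == 0 else 1
    for i in range(start, len(chains) - 1, 2):
        if accept(chains[i], chains[i + 1]):  # a Metropolis ratio in practice
            chains[i], chains[i + 1] = chains[i + 1], chains[i]
    return chains

# Toy usage: random acceptance stands in for the Metropolis swap test.
chains = list(range(6))
for t in range(4):
    chains = deo_round(chains, t, accept=lambda a, b: random.random() < 0.7)
print(chains)
```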
|
arxiv:2211.10837
|
Narrow-line Seyfert 1 galaxies (NLS1s) are active galactic nuclei (AGN) known to have small central black hole masses and high accretion rates. NLS1s are generally radio-quiet, but a small fraction of them (about 7%) are radio-loud. The recent discovery of powerful relativistic jets in radio-loud NLS1s (RLNLS1s), emitting at high-energy $\gamma$-rays, opened intriguing questions. The observed luminosity of the jet is generally weak, smaller than in blazars, although when rescaled for the mass of the central black hole it becomes of the same order of magnitude as the latter. The weak luminosity, and hence observed flux, has resulted in a small number of known RLNLS1s. From a recent survey of RLNLS1s, it was found that only 8 out of 42 sources had radio flux density at 1.4 GHz greater than 100 mJy, while 21 out of 42 had flux density smaller than 10 mJy. In addition, given the strong variability at all wavelengths, with present-day facilities RLNLS1s can often only be detected during high activity periods. The Square Kilometre Array (SKA), with its superior sensitivity, will break this limit, allowing us to unveil a relatively unknown population of jetted AGN. We present the results of a study aimed at evaluating the scenario that could emerge after the advent of SKA.
|
arxiv:1601.05791
|
We discuss two main universal dynamic crossovers in a liquid that correspond to relaxation times of 1 ps and $10^{-7}-10^{-6}$ s. We introduce the concept of liquid elasticity length $d_{el}$. At room temperature, $d_{el}$ is several \AA{} in water and increases to 0.01 mm in honey and 1 mm in tar. We show that on temperature decrease, $d_{el}$ crosses the fundamental lengths of the system, the medium-range order $d_m$ and the system size $L$. We discuss how $d_{el} = d_m$ and $d_{el} = L$ correspond to the two dynamic crossovers.
|
arxiv:0704.2977
|
== See also ==
Branches of science; Empiricism; List of academic disciplines and sub-disciplines; Logology (science); Natural history; Natural Sciences (Cambridge), for the Tripos at the University of Cambridge

== References ==

=== Bibliography ===

== Further reading ==
Defining Natural Sciences: Ledoux, S. F., 2002: Defining natural sciences. Behaviorology Today, 5(1), 34-36.
Stokes, Donald E. (1997). Pasteur's Quadrant: Basic Science and Technological Innovation. Revised and translated by Albert V. Carozzi and Marguerite Carozzi. Washington, D.C.: Brookings Institution Press. ISBN 978-0-8157-8177-6.
The History of Recent Science and Technology: contains updated information on research in the natural sciences including biology, geography and the applied life and earth sciences.
Reviews of books about natural science: over 50 previously published reviews of books about natural science, plus selected essays on timely topics in natural science.
Scientific grant awards database: details of over 2,000,000 scientific research projects conducted over the past 25 years.
E! Science: up-to-date science news aggregator from major sources including universities.
|
https://en.wikipedia.org/wiki/Natural_science
|
We report successful results from using deep learning neural networks (DLNNs) to learn, purely by observation, the behavior of profitable traders in an electronic market closely modelled on the limit-order-book (LOB) market mechanisms that are commonly found in the real-world global financial markets for equities (stocks and shares), currencies, bonds, commodities, and derivatives. Successful real human traders, and advanced automated algorithmic trading systems, learn from experience and adapt over time as market conditions change; our DLNN learns to copy this adaptive trading behavior. A novel aspect of our work is that we do not take the conventional approach of attempting to predict time-series of prices of tradeable securities. Instead, we collect large volumes of training data by observing only the quotes issued by a successful sales-trader in the market, details of the orders that trader is executing, and the data available on the LOB (as would usually be provided by a centralized exchange) over the period that the trader is active. In this paper we demonstrate that suitably configured DLNNs can learn to replicate the trading behavior of a successful adaptive automated trader, an algorithmic system previously demonstrated to outperform human traders. We also demonstrate that DLNNs can learn to perform better (i.e., more profitably) than the trader that provided the training data. We believe that this is the first ever demonstration that DLNNs can successfully replicate a human-like, or super-human, adaptive trader operating in a realistic emulation of a real-world financial market. Our results can be considered proof-of-concept that a DLNN could, in principle, observe the actions of a human trader in a real financial market and over time learn to trade equally as well as that human trader, and possibly better.
|
arxiv:1811.02880
|
This paper investigates the role of CLIP image embeddings within the Stable Video Diffusion (SVD) framework, focusing on their impact on video generation quality and computational efficiency. Our findings indicate that CLIP embeddings, while crucial for aesthetic quality, do not significantly contribute to the subject and background consistency of video outputs. Moreover, the computationally expensive cross-attention mechanism can be effectively replaced by a simpler linear layer. This layer is computed only once at the first diffusion inference step, and its output is then cached and reused throughout the inference process, thereby enhancing efficiency while maintaining high-quality outputs. Building on these insights, we introduce VCUT, a training-free approach optimized for efficiency within the SVD architecture. VCUT eliminates temporal cross-attention and replaces spatial cross-attention with a one-time computed linear layer, significantly reducing computational load. The implementation of VCUT leads to a reduction of up to 322T multiply-accumulate operations (MACs) per video and a decrease in model parameters of up to 50M, achieving a 20% reduction in latency compared to the baseline. Our approach demonstrates that conditioning during the semantic binding stage is sufficient, eliminating the need for continuous computation across all inference steps and setting a new standard for efficient video generation.
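The following minimal PyTorch sketch illustrates the caching idea: a linear projection of the CLIP embedding is computed once at the first denoising step and reused afterwards. Module and dimension choices are illustrative, not the SVD implementation.

```python
import torch
import torch.nn as nn

class CachedLinearConditioning(nn.Module):
    """Replace per-step cross-attention on the CLIP embedding with a
    one-time linear projection whose output is cached and reused."""
    def __init__(self, clip_dim=1024, hidden_dim=320):
        super().__init__()
        self.proj = nn.Linear(clip_dim, hidden_dim)
        self._cache = None

    def forward(self, clip_embedding, first_step: bool):
        if first_step or self._cache is None:
            self._cache = self.proj(clip_embedding)  # computed once
        return self._cache                            # reused at later steps

cond = CachedLinearConditioning()
emb = torch.randn(1, 1024)
for step in range(25):                                # diffusion inference loop
    c = cond(emb, first_step=(step == 0))
```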
|
arxiv:2407.19205
|
We report a combined theoretical and experimental study of a new topological semimetal CrFeVGa, with an emphasis on the role of atomic disorder on the magnetoelectronic properties and its applications. CrFeVGa belongs to the quaternary Heusler alloy family and crystallizes in the cubic structure. Synchrotron XRD measurement confirms B2 disorder, which plays a crucial role in dictating the electronic and magnetic properties of the system.
|
arxiv:2212.07576
|
We study the influence of the Joule effect on the non-linear behavior of the transport I-V curves in polycrystalline samples of the manganite Pr$_{0.8}$Ca$_{0.2}$MnO$_3$ by using the crystalline unit cell parameters as an internal thermometer in X-ray and neutron diffraction. We develop a simple analytical model to estimate the temperature profile in the samples. Under the actual experimental conditions, we show that the internal temperature gradient, or the difference between the temperature of the sample and that of the thermal bath, is at the origin of the non-linearity observed in the I-V curves. Consequences for other compounds with colossal magnetoresistance are also discussed.
|
arxiv:cond-mat/0506448
|
Girard's Light Linear Logic (LLL) characterized polynomial time in the proof-as-program paradigm with a bound on cut elimination. This logic relied on a stratification principle and a "one-door" principle, which were later generalized in the systems L^4 and L^3a, respectively. Each system came with its own complex proof of Ptime soundness. In this paper we propose a broad sufficient criterion for Ptime soundness for linear logic subsystems, based on the study of paths inside the proof-nets, which factorizes the soundness proofs of existing systems and may be used for future systems. As an additional gain, our bound holds for any reduction strategy, whereas most bounds in the literature hold only for a particular strategy.
|
arxiv:1201.2956
|
The results of several recent CMS searches for exotic phenomena beyond the Standard Model are presented in this talk. Two searches look for new physics in a final state with a vector boson and missing transverse energy. Three searches target massive resonances decaying to a Higgs boson and a vector boson. Finally, preliminary results are presented for the first CMS search for exotic phenomena using $\sqrt{s} = 13$ TeV data, the search for dijet resonances.
|
arxiv:1510.08898
|
Recent advancements in neural audio codec (NAC) models have inspired their use in various speech processing tasks, including speech enhancement (SE). In this work, we propose a novel, efficient SE approach by leveraging the pre-quantization output of a pretrained NAC encoder. Unlike prior NAC-based SE methods, which process discrete speech tokens using language models (LMs), we perform SE within the continuous embedding space of the pretrained NAC, which is highly compressed along the time dimension for efficient representation. Our lightweight SE model, optimized through an embedding-level loss, delivers results comparable to SE baselines trained on larger datasets, with a significantly lower real-time factor of 0.005. Additionally, our method achieves a low GMAC count of 3.94, reducing complexity 18-fold compared to SepFormer in a simulated cloud-based audio transmission environment. This work highlights a new, efficient NAC-based SE solution, particularly suitable for cloud applications where NAC is used to compress audio before transmission. Copyright 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
|
arxiv:2502.16240
|
Click-through rate (CTR) prediction is of great importance in recommendation systems and online advertising platforms. When served in industrial scenarios, the user-generated data observed by the CTR model typically arrives as a stream. Streaming data has the characteristic that the underlying distribution drifts over time and may recur. This can lead to catastrophic forgetting if the model simply adapts to the new data distribution all the time. It is also inefficient to relearn distributions that have occurred before. Due to memory constraints and the diversity of data distributions in large-scale industrial applications, conventional strategies against catastrophic forgetting, such as replay, parameter isolation, and knowledge distillation, are difficult to deploy. In this work, we design a novel drift-aware incremental learning framework based on ensemble learning to address catastrophic forgetting in CTR prediction. With explicit error-based drift detection on streaming data, the framework further strengthens well-adapted ensembles and freezes ensembles that do not match the input distribution, avoiding catastrophic interference. Evaluations in both offline experiments and an A/B test show that our method outperforms all considered baselines.
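A minimal sketch of the error-based routing idea follows, assuming ensemble members expose predict/partial_fit; the actual drift statistic and thresholds used in the paper may differ.

```python
def route_batch(ensemble, x, y, loss_fn, drift_threshold):
    """Strengthen members adapted to the current distribution; freeze the rest.

    ensemble : list of models exposing predict(x) and partial_fit(x, y)
    """
    losses = [loss_fn(m.predict(x), y) for m in ensemble]
    best = min(losses)
    for loss, member in zip(losses, ensemble):
        if loss - best < drift_threshold:
            member.partial_fit(x, y)  # well-adapted: keep learning on the stream
        # else: frozen for now, avoiding catastrophic interference;
        # the member stays available in case its distribution recurs later
    return losses.index(best)
```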
|
arxiv:2304.09062
|
Binary mixtures of dry grains avalanching down a slope are experimentally studied in order to determine the interaction among coarse and fine grains and their effect on the deposit morphology. The distance travelled by the massive front of the avalanche over the horizontal plane of the deposition area is measured as a function of the mass content of fine particles in the mixture, the grain-size ratio, and the flume tilt. A sudden transition of the runout is detected at a critical content of fine particles, with a dependence on the grain-size ratio and flume tilt. This transition is explained as two simultaneous avalanches in different flowing regimes (a viscous-like one and an inertial one) competing against each other and provoking a full segregation and a split-off of the deposit into two well-defined, separated deposits. The formation of the distal deposit, in turn, depends on a critical amount of coarse particles. This allows the condensation of the pure coarse deposit around a small, initial seed cluster, which grows rapidly by braking and capturing subsequent colliding coarse particles. For different grain-size ratios and keeping a constant total mass, the change in the amount of fines needed for the transition to occur is found to be always less than 7%. For avalanches with a total mass of 4 kg we find that, most of the time, the runout of a binary avalanche is larger than the runout of monodisperse avalanches of the corresponding constituent particles, due to lubrication on the coarse-dominated side or to drag by inertial particles on the fine-dominated side.
|
arxiv:1708.00950
|
A search for electroweak production of supersymmetric particles in scenarios with compressed mass spectra in final states with two low-momentum leptons and missing transverse momentum is presented. This search uses proton-proton collision data recorded by the ATLAS detector at the Large Hadron Collider in 2015-2016, corresponding to 36.1 fb$^{-1}$ of integrated luminosity at $\sqrt{s} = 13$ TeV. Events with same-flavor pairs of electrons or muons with opposite electric charge are selected. The data are found to be consistent with the Standard Model prediction. Results are interpreted using simplified models of R-parity-conserving supersymmetry in which there is a small mass difference between the masses of the produced supersymmetric particles and the lightest neutralino. Exclusion limits at 95% confidence level are set on next-to-lightest neutralino masses of up to 145 GeV for higgsino production and 175 GeV for wino production, and slepton masses of up to 190 GeV for pair production of sleptons. In the compressed mass regime, the exclusion limits extend down to mass splittings of 2.5 GeV for higgsino production, 2 GeV for wino production, and 1 GeV for slepton production. The results are also interpreted in the context of a radiatively-driven natural supersymmetry model with non-universal Higgs boson masses.
|
arxiv:1712.08119
|
We deal with a Yukawa-like long-range modified model of gravity (MOG) which has recently been able to accommodate many astrophysical and cosmological features without resorting to dark matter. On Solar System scales, MOG predicts retrograde secular precessions of the planetary longitudes of perihelia $\varpi$, whose existence has been put to the test here by taking the ratios of the observationally estimated Pitjeva corrections to the standard Newtonian/Einsteinian perihelion precessions for different pairs of planets. It turns out that MOG, in the present form which has proved phenomenologically successful on astrophysical scales, is ruled out at more than the 3$\sigma$ level in the Solar System. If and when other teams of astronomers independently estimate their own extra-precessions of the perihelia, it will be possible to repeat this test.
|
arxiv:0809.3563
|
The highly mobile electrons at the interface of SrTiO$_3$ with other oxide insulators, such as LaAlO$_3$ or AlO$_x$, are of great current interest. A vertical gate voltage allows controlling a metal/superconductor-to-insulator transition, as well as electrical modulation of the spin-orbit Rashba coupling for spin-charge conversion. These findings raise important questions about the origin of the confined electrons as well as the mechanisms that govern the interfacial electric field. Here we use infrared ellipsometry and confocal Raman spectroscopy to show that an anomalous polar moment is induced at the interface that is non-collinear, highly asymmetric, and hysteretic with respect to the vertical gate electric field. Our data indicate that an important role is played by the electromigration of oxygen vacancies and their clustering at the antiferrodistortive domain boundaries of SrTiO$_3$, which generates local electric and possibly also flexoelectric fields and subsequent polar moments with a large lateral component. Our results open new perspectives for the defect engineering of lateral devices with strongly enhanced and hysteretic local electric fields that can be manipulated with various other parameters, such as strain, temperature, or photons.
|
arxiv:2109.06673
|
The Fifth International Workshop on Domain-Specific Languages and Models for Robotic Systems (DSLRob'14) was held in conjunction with the 2014 International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR 2014), October 2014 in Bergamo, Italy. The main topics of the workshop were domain-specific languages (DSLs) and model-driven software development (MDSD) for robotics. A domain-specific language is a programming language dedicated to a particular problem domain that offers specific notations and abstractions that increase programmer productivity within that domain. Model-driven software development offers a high-level way for domain users to specify the functionality of their system at the right level of abstraction. DSLs and models have historically been used for programming complex systems. However, recently they have garnered interest as a separate field of study. Robotic systems blend hardware and software in a holistic way that intrinsically raises many crosscutting concerns (concurrency, uncertainty, time constraints, ...), for which reason traditional general-purpose languages often lead to a poor fit between the language features and the implementation requirements. DSLs and models offer a powerful, systematic way to overcome this problem, enabling the programmer to quickly and precisely implement novel software solutions to complex problems within the robotics domain.
|
arxiv:1411.7148
|
Solving math word problems (MWPs) is an important and challenging problem in natural language processing. Existing approaches to solve MWPs require full supervision in the form of intermediate equations. However, labeling every MWP with its corresponding equations is a time-consuming and expensive task. In order to address this challenge of equation annotation, we propose a weakly supervised model for solving MWPs by requiring only the final answer as supervision. We approach this problem by first learning to generate the equation using the problem description and the final answer, which we subsequently use to train a supervised MWP solver. We propose and compare various weakly supervised techniques to learn to generate equations directly from the problem description and answer. Through extensive experiments, we demonstrate that without using equations for supervision, our approach achieves accuracy gains of 4.5% and 32% over the state-of-the-art weakly supervised approach on the standard Math23K and AllArith datasets, respectively. Additionally, we curate and release new datasets of roughly 10k MWPs each in English and in Hindi (a low-resource language). These datasets are suitable for training weakly supervised models. We also present an extension of WARM to semi-supervised learning, along with further improvements on results and accompanying insights.
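To make the weak-supervision idea concrete, the following illustrative sketch brute-forces candidate equations over the quantities in a problem and keeps those matching the final answer as pseudo-labels; the paper learns this generation step rather than enumerating.

```python
from itertools import permutations

def find_equations(quantities, answer, tol=1e-6):
    """Return binary-operation equations over the given quantities that
    evaluate to the final answer; usable as pseudo-labels for a solver."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b if b else float('inf')}
    hits = []
    for a, b in permutations(quantities, 2):
        for sym, fn in ops.items():
            if abs(fn(a, b) - answer) < tol:
                hits.append(f"{a} {sym} {b}")
    return hits

# "Tom has 3 bags of 4 apples each" with answer 12 yields '3 * 4' and '4 * 3'.
print(find_equations([3, 4], 12))
```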
|
arxiv:2104.06722
|
Two versions of a fast, purely reflective Paul-Baker type telescope are discussed, each with an 8.4-m aperture, 3 deg diameter flat field, and f/1.25 focal ratio. The first version is based on a common, even-asphere type of surface with zero conic constant. The primary and tertiary mirrors are 6th-order aspheres, while the secondary mirror is an 8th-order asphere (referred to here, for brevity, as the 6/8/6 configuration). The $D_{80}$ diameter of a star image varies from 0''.18 on the optical axis up to 0''.27 at the edge of the field (9.3-13.5 $\mu$m). The second version of the telescope is based on a polysag surface type, which uses a polynomial expansion in the sag $z$, $r^2 = 2R_0 z - (1+b)z^2 + a_3 z^3 + a_4 z^4 + \ldots + a_n z^n$, instead of the common form of an aspheric surface. This approach results in somewhat better images, with $D_{80}$ ranging from 0''.16 to 0''.23, using a lower-order 3/4/3 combination of powers for the mirror surfaces. An additional example with 3.5-m aperture, 3.5 deg diameter flat field, and f/1.25 focal ratio featuring near-diffraction-limited image quality is also presented.
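A minimal numeric sketch of the polysag definition follows: given illustrative coefficients (not the paper's fitted prescriptions), the sag $z$ at radius $r$ is recovered by root finding.

```python
from scipy.optimize import brentq

def sag(r, R0, b, coeffs):
    """Solve r^2 = 2*R0*z - (1+b)*z**2 + a3*z**3 + ... + an*z**n for the sag z.

    coeffs = [a3, a4, ..., an]; the bracket [0, R0] assumes the sag lies
    between the vertex and the paraxial center of curvature.
    """
    def f(z):
        poly = sum(a * z**(k + 3) for k, a in enumerate(coeffs))
        return 2 * R0 * z - (1 + b) * z**2 + poly - r**2
    return brentq(f, 0.0, R0)

# Illustrative parameters only: 10 m paraxial radius, parabolic-like base term.
print(sag(r=1.0, R0=10.0, b=-1.0, coeffs=[1e-4]))
```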
|
arxiv:0707.1731
|
Several recent studies have been devoted to investigating the limitations that ordinary quantum mechanics and/or quantum gravity might impose on the measurability of space-time observables. These analyses are often confined to the simplified context of two-dimensional flat space-time and rely on a simple procedure for the measurement of space-like distances based on the exchange of light signals. We present a generalization of this measurement procedure applicable to all three types of space-time intervals between two events in space-times of any number of dimensions. We also present some preliminary observations on an alternative measurement procedure that can be applied taking into account the gravitational field of the measuring apparatus, and briefly discuss quantum limitations of measurability in this context.
|
arxiv:0710.5608
|
Quantum states of ultracold neutrons in the gravitational field are to be characterized through gravitational resonance spectroscopy. This paper discusses systematic effects that appear in the spectroscopic measurements. The discussed frequency shifts, which we call the Stern-Gerlach shift, the interference shift, and the spectator state shift, appear in conceivable measurement schemes and have general importance. These shifts have to be taken into account in precision experiments.
|
arxiv:1501.03023
|
Graph machine learning, particularly using graph neural networks, fundamentally relies on node features. Nevertheless, numerous real-world systems, such as social and biological networks, often lack node features for various reasons, including privacy concerns, incomplete or missing data, and limitations in data collection. In such scenarios, researchers typically resort to methods like structural and positional encoding to construct node features. However, the length of such features is contingent on the maximum value within the property being encoded, for example the highest node degree, which can be exceedingly large in applications like scale-free networks. Furthermore, these encoding schemes are limited to categorical data and might not be able to encode metrics returning other types of values. In this paper, we introduce a novel, universally applicable encoder, termed \emph{PropEnc}, which constructs expressive node embeddings from any given graph metric. \emph{PropEnc} leverages histogram construction combined with reversed index encoding, offering a flexible method for node feature initialization. It supports flexible encoding in terms of both dimensionality and input type, demonstrating its effectiveness across diverse applications. \emph{PropEnc} allows encoding metrics in a low-dimensional space, which effectively addresses the sparsity challenge and enhances the efficiency of the models. We show that \emph{PropEnc} can construct node features that either exactly replicate one-hot encoding or closely approximate indices under various settings. Our extensive evaluations in the graph classification setting across multiple social networks that lack node features support our hypothesis. The empirical results conclusively demonstrate that \emph{PropEnc} is both an efficient and effective mechanism for constructing node features from a diverse set of graph metrics.
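The following sketch illustrates the flavor of histogram-based node encoding on node degree; PropEnc's exact reversed-index scheme is described in the paper, and this simple binning variant is only illustrative.

```python
import numpy as np
import networkx as nx

def histogram_encode(G, metric, dim=8):
    """Encode each node by the one-hot histogram bin of its metric value,
    giving a fixed feature length independent of the metric's maximum."""
    values = np.array([metric(G, v) for v in G.nodes()], dtype=float)
    edges = np.histogram_bin_edges(values, bins=dim)
    bins = np.clip(np.digitize(values, edges[1:-1]), 0, dim - 1)
    feats = np.zeros((len(values), dim))
    feats[np.arange(len(values)), bins] = 1.0
    return feats

G = nx.karate_club_graph()
X = histogram_encode(G, metric=lambda G, v: G.degree(v))
print(X.shape)  # (34, 8) regardless of the maximum degree
```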
|
arxiv:2409.11554
|
We study the Carath\'eodory metric on some generalized Teichm\"uller spaces. Earle showed that the Carath\'eodory metric is complete on any Teichm\"uller space. Miyachi extended this result to asymptotic Teichm\"uller spaces. We study the completeness of the Carath\'eodory metric on product Teichm\"uller spaces and on the Teichm\"uller space of a closed set in the Riemann sphere.
|
arxiv:2309.09373
|
In this paper we present a very simple proof of the existence of at least one nontrivial solution for a Kirchhoff-type equation on $\mathbb{R}^N$, for $N \ge 3$. In particular, in the first part of the paper we are interested in studying the existence of a positive solution to the elliptic Kirchhoff equation under the effect of a nonlinearity satisfying the general Berestycki-Lions assumptions. In the second part we look for ground states using minimizing arguments on a suitable natural constraint.
|
arxiv:1001.0269
|
A term structure model in which the short rate is zero is developed as a candidate for a theory of cryptocurrency interest rates. The price processes of crypto discount bonds are worked out, along with expressions for the instantaneous forward rates and the prices of interest-rate derivatives. The model admits functional degrees of freedom that can be calibrated to the initial yield curve and other market data. Our analysis suggests that strict local martingales can be used for modelling the pricing kernels associated with virtual currencies based on distributed ledger technologies.
|
arxiv:1904.05472
|
The study of thermodynamic fluctuations allows one to relate the free energy difference between two equilibrium states to the work done on a system through processes far from equilibrium. This finding plays a crucial role in the quantum regime, where the definition of work becomes non-trivial. Based on these relations, here we develop a simple interferometric method allowing a direct estimation of the work distribution and the average dissipative work during a driven thermodynamic process by superposing the forward and time-reversal evolutions of the process. We show that our scheme provides useful upper bounds on the average dissipative work even without full control over the thermodynamic process, and we propose methodological variations depending on the possible experimental limitations encountered. Finally, we exemplify its applicability by an experimental proposal for implementing our method on a quantum photonics system, in which the thermodynamic process is performed through polarization rotations induced by liquid crystals acting in a discrete temporal regime.
|
arxiv:2107.02201
|
We study a multimodal journey planning scenario consisting of a public transit network and a transfer graph which represents a secondary transportation mode (e.g., walking, cycling, e-scooter). The objective is to compute Pareto-optimal journeys with respect to arrival time and the number of used public transit trips. While various existing algorithms can efficiently compute optimal journeys in either a pure public transit network or a pure transfer graph, combining the two increases running times significantly. Existing approaches therefore typically only support limited walking between stops, either by imposing a maximum transfer distance or by requiring the transfer graph to be transitively closed. To overcome these shortcomings, we propose a novel preprocessing technique called ULTRA (unlimited transfers): given an unlimited transfer graph, which may represent any non-schedule-based transportation mode, ULTRA computes a small number of transfer shortcuts that are provably sufficient for computing a Pareto set of optimal journeys. These transfer shortcuts can be integrated into a variety of state-of-the-art public transit algorithms, establishing the ULTRA-query algorithm family. Our extensive experimental evaluation shows that ULTRA improves these algorithms from limited to unlimited transfers without sacrificing query speed. This is true not just for walking, but also for faster transfer modes such as bicycle or car. Compared to the state of the art for multimodal journey planning, the fastest ULTRA-based algorithm achieves a speedup of an order of magnitude.
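For reference, the bicriteria Pareto set used throughout (arrival time versus number of trips) can be computed as in this minimal sketch.

```python
def pareto_set(journeys):
    """journeys: list of (arrival_time, num_trips); returns the journeys not
    dominated by any other journey that is both earlier and uses no more trips."""
    best = {}
    for arrival, trips in journeys:
        if trips not in best or arrival < best[trips]:
            best[trips] = arrival
    result, min_arrival = [], float('inf')
    for trips in sorted(best):
        if best[trips] < min_arrival:  # extra trips must strictly improve arrival
            result.append((best[trips], trips))
            min_arrival = best[trips]
    return result

print(pareto_set([(900, 2), (930, 1), (905, 3), (890, 2)]))  # [(930, 1), (890, 2)]
```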
|
arxiv:1906.04832
|
In this paper, we consider the $L_x^2$ solution $u$ to the mass-critical NLS $iu_t + \Delta u = \pm|u|^{4/d}u$. We prove that in dimensions $d \ge 4$, if the solution is spherically symmetric and is \emph{almost periodic modulo scaling}, then it must lie in $H_x^{1+\varepsilon}$ for some $\varepsilon > 0$. Moreover, the kinetic energy of the solution is localized uniformly in time. One important application of the theorem is a simplified proof of the scattering conjecture for mass-critical NLS without reducing to three enemies (see the work of Killip-Tao-Visan, and Killip-Visan-Zhang). As another important application, we establish a Liouville-type result for $L_x^2$ initial data with ground state mass. We prove that if a radial $L_x^2$ solution to the focusing mass-critical problem has the ground state mass and does not scatter in both time directions, then it must be global and coincide with the solitary wave up to symmetries. Here the ground state is the unique, positive, radial solution to the elliptic equation $\Delta Q - Q + Q^{1+4/d} = 0$. This is the first rigidity-type result in the scale-invariant space $L_x^2$.
|
arxiv:0911.4746
|
We present the results from a joint Suzaku/NuSTAR broad-band spectral analysis of 3C 390.3. The high quality data enable us to clearly separate the primary continuum from the reprocessed components, allowing us to detect a high-energy spectral cut-off ($E_\text{cut} = 117_{-14}^{+18}$ keV) and to place constraints on the Comptonization parameters of the primary continuum for the first time. The hard-over-soft compactness is $69_{-24}^{+124}$ and the optical depth $4.1_{-3.6}^{+0.5}$; this leads to an electron temperature of $30_{-8}^{+32}$ keV. Expanding our study of the Comptonization spectrum to the optical/UV by studying the simultaneous Swift-UVOT data, we find indications that the compactness of the corona allows only a small fraction of the total UV/optical flux to be Comptonized. Our analysis of the reprocessed emission shows that 3C 390.3 has only a small amount of reflection ($R \sim 0.3$), and of that the vast majority is from distant neutral matter. However, we also discover a soft X-ray excess in the source, which can be described by a weak ionized reflection component from the inner parts of the accretion disk. In addition to the backscattered emission, we also detect the highly ionized iron emission lines Fe XXV and Fe XXVI.
|
arxiv:1510.01333
|
In this paper, we introduce regularized stochastic team problems. Under mild assumptions, we prove that there exists a unique fixed point of the best response operator, where this unique fixed point is the optimal regularized team decision rule. We then establish an asynchronous distributed algorithm to compute this optimal strategy. We also provide a bound that shows how the optimal regularized team decision rule performs in the original stochastic team problem.
|
arxiv:2011.03385
|
General large language models (LLMs) such as ChatGPT have shown remarkable success. However, such LLMs have not been widely adopted for medical purposes, due to poor accuracy and inability to provide medical advice. We propose IvyGPT, an LLM based on LLaMA that is trained and fine-tuned with high-quality medical question-answer (QA) instances and reinforcement learning from human feedback (RLHF). After supervised fine-tuning, IvyGPT has good multi-turn conversation capabilities, but it cannot perform like a doctor in other aspects, such as comprehensive diagnosis. Through RLHF, IvyGPT can output richer diagnosis and treatment answers that are closer to human ones. In training, we used QLoRA to train 33 billion parameters on a small number of NVIDIA A100 (80GB) GPUs. Experimental results show that IvyGPT outperforms other medical GPT models.
|
arxiv:2307.10512
|
The early evolution of the quasar luminosity function (QLF) and black hole mass function (BHMF) encodes key information on the physics determining the radiative and accretion processes of supermassive black holes (BHs) in high-$z$ quasars. Although the QLF shape has been constrained by recent observations, it remains challenging to develop a theoretical model that explains its redshift evolution associated with BH growth self-consistently. In this study, based on a semi-analytical model for BH formation and growth, we construct the QLF and BHMF of the early BH population that experiences multiple accretion bursts, in each of which a constant Eddington ratio is assigned following a Schechter distribution function. Our best-fit model reproducing the observed QLF and BHMF at $z \simeq 6$ suggests that several episodes of moderate super-Eddington accretion occur, each lasting for $\tau \simeq 20-30$ Myr. The average duty cycle in super-Eddington phases is $\simeq 15\%$ for massive BHs that reach $\gtrsim 10^8~M_\odot$ by $z \simeq 6$, nearly twice that of the entire population. We also find that the observed Eddington-ratio distribution function is skewed to a log-normal shape owing to the detection limits of quasar surveys. The predicted redshift evolution of the QLF and BHMF suggests a rapid decay of their number and mass density in a cosmic volume toward $z \gtrsim 6$. These results will be unveiled by future deep and wide surveys with the James Webb Space Telescope, Roman Space Telescope, and Euclid.
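A minimal sketch of the growth model's flavor follows: per-burst Eddington ratios are drawn from a Schechter distribution (a Gamma law for $\alpha > -1$) and the mass grows exponentially on the Salpeter timescale. All parameter values here are illustrative, not the paper's best fit.

```python
import numpy as np

rng = np.random.default_rng(1)
T_SAL = 450.0  # Eddington (Salpeter) e-folding time in Myr, illustrative

def grow_bh(m_seed, n_bursts, tau=25.0, lam_star=1.0, alpha=-0.5):
    """Grow a seed BH through n_bursts episodes of duration tau [Myr], with a
    constant Eddington ratio per burst drawn from a Schechter distribution
    p(lam) ~ (lam/lam_star)**alpha * exp(-lam/lam_star), i.e. a Gamma law."""
    lam = rng.gamma(shape=alpha + 1.0, scale=lam_star, size=n_bursts)
    return m_seed * np.exp(lam.sum() * tau / T_SAL)

masses = [grow_bh(m_seed=1e5, n_bursts=20) for _ in range(1000)]
print(f"median final mass: {np.median(masses):.2e} Msun")
```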
|
arxiv:2210.02308
|
We report the experimental study of the bifurcations of a large-scale circulation that is formed over a turbulent flow generated by a spatially periodic forcing. After briefly describing how the flow becomes turbulent through a sequence of symmetry-breaking bifurcations, we focus our study on the transitions that occur within the turbulent regime. They are related to changes in the shape of the probability density function (PDF) of the amplitude of the large-scale flow. We discuss the nature of these bifurcations and how to model the shape of the PDF.
|
arxiv:1611.01611
|
Recent experimental determinations of the Nachtmann moments of the inelastic structure function of the proton $F_2^p(x, Q^2)$, obtained at Jefferson Lab, are analyzed for values of the squared four-momentum transfer $Q^2$ ranging from $\sim 0.1$ to $\sim 2$ (GeV/c)$^2$. It is shown that such inelastic proton data exhibit a new type of scaling behavior and that the resulting scaling function can be interpreted as a constituent form factor consistent with the elastic nucleon data. These findings suggest that at low momentum transfer the inclusive proton structure function originates mainly from the elastic coupling with extended objects inside the proton. We obtain a constituent size of $\sim 0.2-0.3$ fm.
|
arxiv:hep-ph/0301206
|
A theorem that constructs a path integral solution for general second-order partial differential equations is specialized to obtain path integrals that are solutions of elliptic, parabolic, and hyperbolic linear second-order partial differential equations with Dirichlet/Neumann boundary conditions. The construction is checked by evaluating several known kernels for regions with planar and spherical boundaries. Some new calculational techniques are introduced.
|
arxiv:math-ph/0405032
|
In this study, we introduce various statistical performance metrics, based on the pinball loss and the empirical coverage, for the ranking of probabilistic forecasting models. We test the ability of the proposed metrics to determine the top-performing forecasting model and investigate which metric corresponds to the highest average per-trade profit in the out-of-sample period. Our findings show that, for the considered trading strategy, ranking the forecasting models according to the coverage of the quantile forecasts used in the trading hours exhibits superior economic performance.
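For reference, the pinball (quantile) loss underlying the proposed metrics can be computed as in this minimal sketch; lower average loss across quantiles indicates a better calibrated forecast distribution.

```python
import numpy as np

def pinball_loss(y, q_forecast, q):
    """Pinball loss of the q-th quantile forecast against realized values y."""
    diff = y - q_forecast
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Toy example: a flat median (q = 0.5) forecast against three realized prices.
y = np.array([100.0, 102.0, 98.0])
print(pinball_loss(y, q_forecast=np.array([99.0, 99.0, 99.0]), q=0.5))
```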
|
arxiv:2411.17743
|
Cancer patients often struggle to transition swiftly to treatment due to limited institutional resources, lack of sophisticated professional guidance, and low health literacy. The emergence of large language models (LLMs) offers new opportunities for such patients to access the wealth of existing patient education materials. The current paper presents the development process for an LLM-based chatbot focused on prostate cancer education, including needs assessment, co-design, and usability studies. The resulting application, MedEduChat, integrates with patients' electronic health record data and features a closed-domain, semi-structured, patient-centered approach to address real-world needs. This paper contributes to the growing field of patient-LLM interaction by demonstrating the potential of LLM-based chatbots to enhance prostate cancer patient education and by offering co-design guidelines for future LLM-based healthcare downstream applications.
|
arxiv:2409.19100
|
Clusters of galaxies can be used as powerful probes to study astrophysical processes on large scales, test theories of the growth of structure, and constrain cosmological models. The driving science goal of the SRG/eROSITA All-Sky Survey (eRASS) is to assemble a large sample of X-ray-selected clusters with a well-defined selection function to determine the evolution of the mass function and, hence, the cosmological parameters. We present here a catalog of 12247 optically confirmed galaxy groups and clusters detected in the 0.2-2.3 keV band as extended X-ray sources in a 13,116 deg$^2$ region in the western Galactic hemisphere of the sky, which eROSITA surveyed in its first six months of operation. The clusters in the sample span the redshift range $0.003 < z < 1.32$. The majority (68%) of these clusters, 8361 sources, represent new discoveries without known counterparts in the literature. The mass range of the sample covers three orders of magnitude, from $5 \times 10^{12} M_{\rm sun}$ to $2 \times 10^{15} M_{\rm sun}$. We construct a sample for cosmology with a higher purity level (~95%) than the primary sample, comprising 5259 securely detected and confirmed clusters in the 12791 deg$^2$ common footprint with the DESI Legacy Survey DR10. We characterize the X-ray properties of each cluster, including their flux, luminosity, and temperature, the total mass, gas mass, gas mass fraction, and mass proxy $Y_X$. These are determined within two apertures, 300 kpc and the overdensity radius $R_{500}$, and are calculated by applying a forward modeling approach with a rigorous X-ray background treatment, K-factor, and Galactic absorption corrections. Population studies utilizing the logN-logS, the number of clusters detected above a given flux limit, and the luminosity function show overall agreement with previous X-ray surveys after accounting for the survey completeness and purity (abridged).
|
arxiv:2402.08452
|
Scene Designer is a novel method for searching and generating images using free-hand sketches of scene compositions, i.e. drawings that describe both the appearance and relative positions of objects. Our core contribution is a single unified model to learn both a cross-modal search embedding for matching sketched compositions to images, and an object embedding for layout synthesis. We show that a graph neural network (GNN) followed by a Transformer under our novel contrastive learning setting is required to allow learning correlations between object type, appearance, and arrangement, driving a mask generation module that synthesises coherent scene layouts, whilst also delivering state-of-the-art sketch-based visual search of scenes.
|
arxiv:2108.07353
|
I discuss recent progress in our understanding of two barriers in quantum gravity: $c > 1$ in the case of 2d quantum gravity and $d > 2$ in the case of Euclidean Einstein-Hilbert gravity formulated in space-time dimensions $d > 2$.
|
arxiv:hep-th/9408129
|
Very precise measurements of masses and cross sections are expected to be achievable with a future linear collider. With such accuracy one must incorporate loop corrections in order to make meaningful predictions for the underlying new physics parameters. For the electroweakino sector, this involves fitting one-loop predictions to expected measurements of the cross section and forward-backward asymmetry for chargino pair production and of the accessible chargino and neutralino masses. We consider two scenarios with characteristic features, chosen taking recent LHC SUSY and Higgs searches into account. Our analysis allows the accurate determination of the desired parameters and, additionally, access to stop sector parameters that enter via loop corrections.
|
arxiv:1212.1921
|
Quality assessment of models in unsupervised learning, and clustering verification in particular, has been a long-standing problem in machine learning research. The lack of robust and universally applicable cluster validity scores often makes algorithm selection and hyperparameter evaluation a tough guess. In this paper, we show that cluster ensemble aggregation techniques such as consensus clustering may be used to evaluate clusterings and their hyperparameter configurations. We use normalized mutual information to compare individual objects of a clustering ensemble to the constructed consensus of the whole ensemble, and show that the resulting score can serve as an overall quality measure for clustering problems. This method is capable of highlighting the standout clustering and hyperparameter configuration in the ensemble, even in the case of a distorted consensus. We apply this very general framework to various data sets and give possible directions for future research.
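A minimal sketch of the scoring idea follows, using a simple co-association consensus as a stand-in for the paper's consensus construction.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import normalized_mutual_info_score

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
labelings = [KMeans(n_clusters=3, n_init=5, random_state=s).fit_predict(X)
             for s in range(10)]

# Co-association consensus: how often each pair of points shares a cluster.
co = np.mean([np.equal.outer(l, l) for l in labelings], axis=0)
consensus = KMeans(n_clusters=3, n_init=5, random_state=0).fit_predict(co)

# Average NMI of each ensemble member against the consensus as the score.
score = np.mean([normalized_mutual_info_score(l, consensus) for l in labelings])
print(f"ensemble agreement with consensus: {score:.3f}")
```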
|
arxiv:1803.11008
|
Many complex phenomena occurring in physics, chemistry, biology, finance, etc. can be reduced, by some projection process, to a 1-d stochastic differential equation (SDE) for the variable of interest. Typically, this SDE is both non-linear and non-Markovian, so a Fokker-Planck equation (FPE) for the probability density function (PDF) is generally not obtainable. However, an FPE is desirable because it is the main tool for obtaining relevant analytical statistical information such as the stationary PDF and the first passage time. This problem has been addressed by many authors in the past, but due to an incorrect use of the interaction picture (the standard tool for obtaining a reduced FPE) previous theoretical results were incorrect, as confirmed by direct numerical simulation of the SDE. We show, in general, how to address the problem, and we derive the correct best FPE from a perturbation approach. The method followed and the results obtained have a general validity beyond the simple case of exponentially correlated Gaussian driving used here as an example; they can be applied even to non-Gaussian drivings with a generic time correlation.
|
arxiv:2001.05809
|
We consider the estimation of treatment effects in settings where multiple treatments are assigned over time and treatments can have a causal effect on future outcomes or the state of the treated unit. We propose an extension of the double/debiased machine learning framework to estimate the dynamic effects of treatments, which can be viewed as a Neyman-orthogonal (locally robust), cross-fitted version of $g$-estimation in the dynamic treatment regime. Our method applies to a general class of non-linear dynamic treatment models known as structural nested mean models and allows the use of machine learning methods to control for potentially high-dimensional state variables, subject to a mean square error guarantee, while still allowing parametric estimation and construction of confidence intervals for the structural parameters of interest. These structural parameters can be used for off-policy evaluation of any target dynamic policy at parametric rates, subject to semi-parametric restrictions on the data generating process. Our work is based on a recursive peeling process, typical in $g$-estimation, and formulates a strongly convex objective at each stage, which allows us to extend the $g$-estimation framework in multiple directions: i) to provide finite sample guarantees, ii) to estimate non-linear effect heterogeneity with respect to fixed unit characteristics, within arbitrary function spaces, enabling a dynamic analogue of the RLearner algorithm for heterogeneous effects, iii) to allow for high-dimensional sparse parameterizations of the target structural functions, enabling automated model selection via a recursive lasso algorithm. We also provide guarantees for data stemming from a single treated unit over a long horizon and under stationarity conditions.
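As a heavily simplified, single-period illustration of the Neyman-orthogonal residual-on-residual step underlying the framework (the recursive multi-period peeling is beyond a few lines), consider:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def orthogonal_effect(X, T, Y, model=RandomForestRegressor, folds=5):
    """Cross-fitted partialling-out: residualize treatment T and outcome Y on
    high-dimensional state X, then regress residual on residual."""
    rT, rY = np.zeros_like(T, dtype=float), np.zeros_like(Y, dtype=float)
    for train, test in KFold(folds, shuffle=True, random_state=0).split(X):
        rT[test] = T[test] - model().fit(X[train], T[train]).predict(X[test])
        rY[test] = Y[test] - model().fit(X[train], Y[train]).predict(X[test])
    return float(rT @ rY / (rT @ rT))  # residual-on-residual OLS slope

# Synthetic check: confounded treatment with a true effect of 2.0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
T = X[:, 0] + rng.normal(size=500)
Y = 2.0 * T + X[:, 0] + rng.normal(size=500)
print(orthogonal_effect(X, T, Y))  # close to 2.0
```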
|
arxiv:2002.07285
|
The emergence of and transitions between distinct phenotypes in isogenic cells can be attributed to the intricate interplay of epigenetic marks, external signals, and gene regulatory elements. These elements include chromatin remodelers, histone modifiers, transcription factors, and regulatory RNAs. Mathematical models known as gene regulatory networks (GRNs) are an increasingly important tool for unraveling the workings of such complex networks. In such models, epigenetic factors are usually proposed to act on the chromatin regions directly involved in the expression of relevant genes. However, it has been well established that these factors operate globally and compete with each other for targets genome-wide. Therefore, a perturbation of the activity of a regulator can redistribute epigenetic marks across the genome and modulate the levels of competing regulators. In this paper, we propose a conceptual and mathematical modeling framework that incorporates both local and global competition effects between antagonistic epigenetic regulators, in addition to local transcription factors, and show the counter-intuitive consequences of such interactions. We apply our approach to recent experimental findings on the epithelial-mesenchymal transition (EMT). We show that it can explain the puzzling experimental data as well as provide new verifiable predictions.
|
arxiv:2209.05688
|
we report on the observation of magnetic quantum ratchet effect in metal - oxide - semiconductor field - effect - transistors on silicon surface ( si - mosfets ). we show that the excitation of an unbiased transistor by ac electric field of terahertz radiation at normal incidence leads to a direct electric current between the source and drain contacts if the transistor is subjected to an in - plane magnetic field. the current rises linearly with the magnetic field strength and quadratically with the ac electric field amplitude. it depends on the polarization state of the ac field and can be induced by both linearly and circularly polarized radiation. we present the quasi - classical and quantum theories of the observed effect and show that the current originates from the lorentz force acting upon carriers in asymmetric inversion channels of the transistors.
|
arxiv:1401.0135
|
prevalent in theoretical computer science, and mainly employs deductive reasoning ), the " technocratic paradigm " ( which might be found in engineering approaches, most prominently in software engineering ), and the " scientific paradigm " ( which approaches computer - related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence ). computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human - made computing systems. = = fields = = as a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software. csab, formerly called computing sciences accreditation board — which is made up of representatives of the association for computing machinery ( acm ), and the ieee computer society ( ieee cs ) — identifies four areas that it considers crucial to the discipline of computer science : theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. in addition to these four areas, csab also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human – computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science. = = = theoretical computer science = = = theoretical computer science is mathematical and abstract in spirit, but it derives its motivation from practical and everyday computation. it aims to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies. = = = = theory of computation = = = = according to peter denning, the fundamental question underlying computer science is, " what can be automated? " theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. in an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. the second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems. the famous p = np? problem, one of the millennium prize problems, is an open problem in the theory of computation. = = = = information and coding theory = = = = information theory, closely related to probability and statistics, is related to the quantification of information. this was developed by claude shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data
|
https://en.wikipedia.org/wiki/Computer_science
|
we consider a financial market in which the short rate is modeled by a continuous time markov chain ( ctmc ) with a finite state space. in this setting, we show how to price any financial derivative whose payoff is a function of the state of the underlying ctmc at the maturity date. we also show how to replicate such claims by trading only a money market account and zero - coupon bonds. finally, using an extension of ross ' recovery theorem due to qin and linetsky, we deduce the real - world dynamics of the ctmc.
|
arxiv:2409.14193
|
web attacks, i. e. attacks exclusively using the http protocol, are rapidly becoming one of the fundamental threats for information systems connected to the internet. when the attacks suffered by web servers through the years are analyzed, it is observed that most of them are very similar, using a reduced number of attacking techniques. it is generally agreed that classification can help designers and programmers to better understand attacks and build more secure applications. as an effort in this direction, a new taxonomy of web attacks is proposed in this paper, with the objective of obtaining a practically useful reference framework for security applications. the use of the taxonomy is illustrated by means of multiplatform real world web attack examples. along with this taxonomy, important features of each attack category are discussed. a suitable semantic - dependent web attack encoding scheme is defined that uses different - length vectors. possible applications are described, which might benefit from this taxonomy and encoding scheme, such as intrusion detection systems and application firewalls.
|
arxiv:cs/0210026
|
in this contribution we review recent efforts on investigations of the effect of ( apparent ) boundary slip by utilizing lattice boltzmann simulations. we demonstrate the applicability of the method to treat fundamental questions in microfluidics by investigating fluid flow in hydrophobic and rough microchannels as well as over surfaces covered by nano - or microscale gas bubbles.
|
arxiv:0910.3492
|
the international committee for future accelerators ( icfa ) has been in existence for well over four decades. its mission is to facilitate international collaboration in the construction and use of accelerators for high energy physics. this report presents, after a brief introduction, some recent activities of icfa and its panels. the international linear collider ( ilc ) and its current status are briefly discussed.
|
arxiv:1902.10253
|
collaborative 3d object detection, with its improved interaction advantage among multiple agents, has been widely explored in autonomous driving. however, existing collaborative 3d object detectors in a fully supervised paradigm heavily rely on large - scale annotated 3d bounding boxes, which is labor - intensive and time - consuming. to tackle this issue, we propose a sparsely supervised collaborative 3d object detection framework ssc3od, which only requires each agent to randomly label one object in the scene. specifically, this model consists of two novel components, i. e., the pillar - based masked autoencoder ( pillar - mae ) and the instance mining module. the pillar - mae module aims to reason over high - level semantics in a self - supervised manner, and the instance mining module generates high - quality pseudo labels for collaborative detectors online. by introducing these simple yet effective mechanisms, the proposed ssc3od can alleviate the adverse impacts of incomplete annotations. we generate sparse labels based on collaborative perception datasets to evaluate our method. extensive experiments on three large - scale datasets reveal that our proposed ssc3od can effectively improve the performance of sparsely supervised collaborative 3d object detectors.
|
arxiv:2307.00717
|
large language models ( llms ) have shown remarkable performance across various tasks, but the escalating demands on computational resources pose significant challenges, particularly in the extensive utilization of full fine - tuning for downstream tasks. to address this, parameter - efficient fine - tuning ( peft ) methods have been developed, but they often underperform compared to full fine - tuning and struggle with memory efficiency. in this work, we introduce gradient weight - normalized low - rank projection ( gradnormlorp ), a novel approach that enhances both parameter and memory efficiency while maintaining comparable performance to full fine - tuning. gradnormlorp normalizes the weight matrix to improve gradient conditioning, facilitating better convergence during optimization. additionally, it applies low - rank approximations to the weight and gradient matrices, significantly reducing memory usage during training. extensive experiments demonstrate that our 8 - bit gradnormlorp reduces optimizer memory usage by up to 89. 5 % and enables the pre - training of large llms, such as llama 7b, on consumer - level gpus like the nvidia rtx 4090, without additional inference costs. moreover, gradnormlorp outperforms existing low - rank methods in fine - tuning tasks. for instance, when fine - tuning the roberta model on all glue tasks with a rank of 8, gradnormlorp achieves an average score of 80. 65, surpassing lora ' s score of 79. 23. these results underscore gradnormlorp as a promising alternative for efficient llm pre - training and fine - tuning. source code : https : / / github. com / jhhuangkay / gradient - weight - normalized - low - rank - projection - for - efficient - llm - training
|
arxiv:2412.19616
|
in this paper, we study systematic luby transform ( slt ) codes over additive white gaussian noise ( awgn ) channel. we introduce the encoding scheme of slt codes and give the bipartite graph for iterative belief propagation ( bp ) decoding algorithm. similar to low - density parity - check codes, gaussian approximation ( ga ) is applied to yield asymptotic performance of slt codes. recent work about slt codes has been focused on providing better encoding and decoding algorithms and design of degree distributions. in our work, we propose a novel linear programming method to optimize the degree distribution. simulation results show that the proposed distributions can provide better bit - error - ratio ( ber ) performance. moreover, we analyze the lower bound of slt codes and offer closed form expressions.
|
arxiv:1505.01944
|
we present novel low - - energy theorems for the p - - wave multipoles $ 2m _ { 1 + } + m _ { 1 - } $, $ m _ { 1 + } - m _ { 1 - } $, $ e _ { 1 + } $ and $ l _ { 1 \ pm } $ for neutral pion electroproduction off protons. these should be very useful for the analysis of existing or future threshold data.
|
arxiv:hep-ph/9412282
|
in recent years, the use of low power wide area network ( lpwan ) is increasing for the internet of things ( iot ) applications. in order to demonstrate the application of lpwan technologies for a realistic smart metering scenario, we set - up and implement a widely used lpwan protocol which is called lorawan. in this study, the lorawan is implemented by using multitech devices ( end - node and gateway ) and the performance of the network is evaluated for different physical ( e. g. location, distance etc. ) and link parameters ( e. g. data rate, transmission power etc. ), under the european regulations. to evaluate the performance of the networks, we collected uplink packets in different indoor and outdoor scenarios at various locations. our results show that lorawan is easy to setup, configurable, scalable, and it performs well for real - time smart metering applications. moreover, it is necessary to evaluate the physical conditions for the selection of the available system parameters for deploying a robust lorawan network.
|
arxiv:1907.12355
|
the lattice parameters of mgb2 up to pressures of 8 gpa were determined using high - resolution x - ray powder diffraction in a diamond anvil cell. the bulk modulus, b0, was determined to be 151 + - 5 gpa. both experimental and first - principles calculations indicate nearly isotropic mechanical behavior under pressure. this small anisotropy is in contrast to the 2 dimensional nature of the boron pi states. the pressure dependence of the density of states at the fermi level and a reasonable value for the average phonon frequency account within the context of bcs theory for the reduction of tc under pressure.
|
arxiv:cond-mat/0102480
|
we study the interplay between topological observables and chiral and higgs transitions in lattice scalar qed with quenched fermions. emphasis is put on the chiral transition line and magnetic monopole percolation at strong gauge coupling. we confirm that at infinite gauge coupling the chiral transition is described by mean field exponents. we find a rich and complicated behaviour at the endpoint of the higgs transition line which hampers a satisfactory analysis of the chiral transition. we study in detail an intermediate coupling, where the data are consistent both with a trivial chiral transition clearly separated from monopole percolation and with a chiral transition coincident with monopole percolation, and characterized by the same critical exponent $ \ nu \ simeq 0. 65 $. we discuss the relevance ( or lack thereof ) of these quenched results to our understanding of the \ chupiv \ model. we comment on the interplay of magnetic monopoles and fermion dynamics in more general contexts.
|
arxiv:hep-lat/9707002
|
we lay out a novel formalism to connect the isospin - symmetry breaking correction to the rates of superallowed nuclear beta decays, $ \ delta _ \ text { c } $, to the isospin - breaking sensitive combinations of electroweak nuclear radii that can be accessed experimentally. we individuate transitions in the superallowed decay chart where a measurement of the neutron skin of a stable daughter even at a moderate precision could already help discriminating between models used to compute $ \ delta _ \ text { c } $. we review the existing experimental situation and make connection to the existing and future experimental programs.
|
arxiv:2208.03037
|
the goal of this paper is to study goldbach ' s conjecture for rings of regular functions of affine algebraic varieties over a field. among our main results, we define the notion of goldbach condition for newton polytopes, and we prove in a constructive way that any polynomial in at least two variables over a field can be expressed as sum of at most $ 2r $ absolutely irreducible polynomials, where $ r $ is the number of its non - - zero monomials. we also study other weak forms of goldbach ' s conjecture for localizations of these rings. moreover, we prove the validity of goldbach ' s conjecture for a particular instance of the so - - called forcing algebras introduced by hochster. finally, we prove that, for a proper multiplicative closed set $ s $ of $ \ mathbb { z } $, the collection of elements of $ s ^ { - 1 } \ mathbb { z } $ that can be written as finite sum of primes forms a dense subset of the real numbers, among other results.
|
arxiv:2312.16524
|
in this paper, we propose an efficient and spectrally accurate numerical method for computing the dynamics of rotating bose - einstein condensates ( bec ) in two dimensions ( 2d ) and 3d based on the gross - pitaevskii equation ( gpe ) with an angular momentum rotation term. by applying a time - splitting technique for decoupling the nonlinearity and properly using the alternating direction implicit ( adi ) technique for the coupling in the angular momentum rotation term in the gpe, at every time step, the gpe in rotational frame is decoupled into a nonlinear ordinary differential equation ( ode ) and two partial differential equations with constant coefficients. this allows us to develop new time - splitting spectral ( tssp ) methods for computing the dynamics of bec in a rotational frame. the new numerical method is explicit, unconditionally stable, and of spectral accuracy in space and second order accuracy in time. moreover, it is time reversible and time transverse invariant, and conserves the position density in the discretized level if the gpe does. extensive numerical results are presented to confirm the above properties of the new numerical method for rotating bec in 2d and 3d.
|
arxiv:cond-mat/0609678
|
theories with extra spacetime dimensions aiming at resolving the hierarchy problem have recently been developed. these scenarios have provided exciting new grounds for experimental probes. a review of the searches conducted at the tevatron in this framework during its first running phase and the prospects for its second running phase are reviewed.
|
arxiv:hep-ex/0211060
|
the use of reward functions to structure ai learning and decision making is core to the current reinforcement learning paradigm ; however, without careful design of reward functions, agents can learn to solve problems in ways that may be considered " undesirable " or " unethical. " without thorough understanding of the incentives a reward function creates, it can be difficult to impose principled yet general control mechanisms over its behavior. in this paper, we study methods for constructing guardrails for ai agents that use reward functions to learn decision making. we introduce a novel approach, which we call strategy masking, to explicitly learn and then suppress undesirable ai agent behavior. we apply our method to study lying in ai agents and show that it can be used to effectively modify agent behavior by suppressing lying post - training without compromising agent ability to perform effectively.
|
arxiv:2501.05501
|
this paper provides foundations for strong ( that is, possibly under abstraction ) call - by - value evaluation for the lambda - calculus. recently, accattoli et al. proposed a form of call - by - value strong evaluation for the lambda - calculus, the external strategy, and proved it reasonable for time. here, we study the external strategy using a semantical tool, namely ehrhard ' s call - by - value multi types, a variant of intersection types. we show that the external strategy terminates exactly when a term is typable with so - called shrinking multi types, mimicking similar results for strong call - by - name. additionally, the external strategy is normalizing in the untyped setting, that is, it reaches the normal form whenever it exists. we also consider the call - by - extended - value approach to strong evaluation shown reasonable for time by biernacka et al. the two approaches turn out to not be equivalent : terms may be externally divergent but terminating for call - by - extended - value.
|
arxiv:2309.12261
|
let $ ( \ varepsilon _ { t } ) _ { t > 0 } $ be a sequence of independent real random vectors of $ p $ - dimension and let $ x _ t = \ sum _ { t = s + 1 } ^ { s + t } \ varepsilon _ t \ varepsilon ^ t _ { t - s } / t $ be the lag - $ s $ ( $ s $ is a fixed positive integer ) auto - covariance matrix of $ \ varepsilon _ t $. this paper investigates the limiting behavior of the singular values of $ x _ t $ under the so - called { \ em ultra - dimensional regime } where $ p \ to \ infty $ and $ t \ to \ infty $ in a related way such that $ p / t \ to 0 $. first, we show that the singular value distribution of $ x _ t $ after a suitable normalization converges to a nonrandom limit $ g $ ( quarter law ) under the forth - moment condition. second, we establish the convergence of its largest singular value to the right edge of $ g $. both results are derived using the moment method.
|
arxiv:1501.06641
|
the effects of absence of inversion symmetry on superconducting states are investigated theoretically. in particular we focus on the noncentrosymmetric compounds which have the cubic symmetry $ o $ like li $ _ 2 $ pt $ _ 3 $ b. an appropriate and isotropic spin - orbital interaction is added in the hamiltonian and it acts like a magnetic monopole in the momentum space. the consequent pairing wavefunction has an additional triplet component in the pseudospin space, and a zeeman magnetic field $ \ bf { b } $ can induce a collinear supercurrent $ \ bf { j } $ with a coefficient $ \ kappa ( t ) $. the effects of anisotropy embedded in the cubic symmetry and the nodal superconducting gap function on $ \ kappa ( t ) $ are also considered. from the macroscopic perspectives, the pair of mutually induced $ \ bf { j } $ and magnetization $ { \ bf { m } } $ can affect the distribution of magnetic field in such noncentrosymmetric superconductors, which is studied through solving the maxwell equation in the meissner geometry as well as the case of a single vortex line. in both cases, magnetic fields perpendicular to the external ones emerge as a signature of the broken symmetry.
|
arxiv:0711.0800
|
the transport and optical properties of the nb - doped cs ( v $ _ { 1 - x } $ nb $ _ { x } $ ) $ _ { 3 } $ sb $ _ { 5 } $ with x = 0. 03 and 0. 07 have been investigated and compared with those of the undoped csv $ _ { 3 } $ sb $ _ { 5 } $. upon nb doping, the charge - density wave ( cdw ) transition temperature $ t _ { \ text { cdw } } $ is suppressed, and the superconducting temperature $ t _ { c } $ rises. the residual resistivity ratio decreases with nb doping, suggesting an increase of disorder. for all compounds, the optical conductivity in the pristine phase reveals two drude components ( d1 and d2 ). the substitution of nb causes an increase of d1 alongside a reduction of d2 in weight, which implies a change of the fermi surface. the total drude weight is reduced with increasing nb content, signifying an enhancement of electronic correlations. below $ t _ { \ text { cdw } } $, while the optical conductivity clearly manifests the cdw gap in all materials, the gapped portion of the fermi surface shrinks as the nb content grows. a comprehensive analysis indicates that the change of the fermi surface, the enhancement of electronic correlations, the shrinkage of the removed fermi surface by the cdw gap, and the increase of disorder may all have a considerable impact on the interplay between the cdw and superconductivity in cs ( v $ _ { 1 - x } $ nb $ _ { x } $ ) $ _ { 3 } $ sb $ _ { 5 } $.
|
arxiv:2303.06915
|
in this article a procedure to construct bent functions from $ \ f _ { p ^ n } $ to $ \ f _ p $ by merging plateaued functions which are bent on ( $ n - 2 $ ) - dimensional subspaces of $ \ f _ { p ^ n } $ is presented. taking advantage of such classes of plateaued functions with a simple representation as monomials and binomials, we obtain infinite classes of bent functions with a fairly simple representation. in particular we present the first direct construction of univariate not weakly regular bent functions, and give one class explicitly in a simple representation with binomials.
|
arxiv:1310.8071
|
in this paper, we consider the time - varying bayesian optimization problem. the unknown function at each time is assumed to lie in an rkhs ( reproducing kernel hilbert space ) with a bounded norm. we adopt the general variation budget model to capture the time - varying environment, and the variation is characterized by the change of the rkhs norm. we adapt the restart and sliding window mechanism to introduce two gp - ucb type algorithms : r - gp - ucb and sw - gp - ucb, respectively. we derive the first ( frequentist ) regret guarantee on the dynamic regret for both algorithms. our results not only recover previous linear bandit results when a linear kernel is used, but complement the previous regret analysis of time - varying gaussian process bandit under a bayesian - type regularity assumption, i. e., each function is a sample from a gaussian process.
|
arxiv:2102.06296
|
we consider a massive quantum test klein - gordon field probing an isotropic quantum cosmological space - time in the background. the result obtained is surprising. it turns out, that despite the isotropy of the quantum gravitational field, the semi - classical metric experienced by a mode of the k - g field is non - isotropic. the anisotropy depends on the direction of the momentum of the mode. specifically, what we do is to derive a semi - classical space - time which emerges to a mode of the field. the method amounts to a comparison between qft on a quantum background and qft on a classical curved space - time, giving rise to an emergent metric tensor. the components of the semi - classical metric tensor are calculated from the equation of propagation of the quantum k - g field in the test field approximation. the anisotropies are of a quantum nature : they are proportional to planck constant and " dress " the isotropic classical space - time obtained in the classical limit.
|
arxiv:1211.0161
|
plasmons in two - dimensional electron systems with nonparabolic bands, such as graphene, feature strong dependence on electron - electron interactions. we use a many - body approach to relate plasmon dispersion at long wavelengths to landau fermi - liquid interactions and quasiparticle velocity. an identical renormalization is shown to arise for the magnetoplasmon resonance. for a model with n > > 1 fermion species, this approach predicts a power - law dependence for plasmon frequency vs. carrier concentration, valid in a wide range of doping densities, both high and low. gate tunability of plasmons in graphene can be exploited to directly probe the effects of electron - electron interaction.
|
arxiv:1302.5036
|
this paper proposes a new class of simple, distributed algorithms for scheduling in wireless networks. the algorithms generate new schedules in a distributed manner via simple local changes to existing schedules. the class is parameterized by integers $ k \ geq 1 $. we show that algorithm $ k $ of our class achieves $ k / ( k + 2 ) $ of the capacity region, for every $ k \ geq 1 $. the algorithms have small and constant worst - case overheads : in particular, algorithm $ k $ generates a new schedule using { \ em ( a ) } time less than $ 4k + 2 $ round - trip times between neighboring nodes in the network, and { \ em ( b ) } at most three control transmissions by any given node, for any $ k $. the control signals are explicitly specified, and face the same interference effects as normal data transmissions. our class of distributed wireless scheduling algorithms are the first ones guaranteed to achieve any fixed fraction of the capacity region while using small and constant overheads that do not scale with network size. the parameter $ k $ explicitly captures the tradeoff between control overhead and scheduler throughput performance and provides a tuning knob protocol designers can use to harness this trade - off in practice.
|
arxiv:cs/0611064
|
this paper presents a comprehensive study of channel polarization under noise with memory. by introducing a genie - aided channel model, we demonstrate that polarized subchannels still converge to extremal channels under the standard polar coding structure. notably, the proportion of near - perfect subchannels can exceed the underlying channel capacity $ i ( w ) $. however, we also show that the polarization rate is slower than that observed in the binary - input memoryless channel ( bmc ) scenario. in particular, the block error probability is upper - bounded by $ \ mathcal { o } ( l ^ { - c _ 0 } ) $, where $ l $ denotes the block length and $ c _ 0 $ is a positive constant. moreover, we investigate both upper and lower bounds on the gap between the channel capacity and the cutoff rate under finite block length, which offers greater relevance for practical implementations.
|
arxiv:2411.16557
|
recent work has shown the feasibility of single - channel full - duplex wireless physical layer, allowing nodes to send and receive in the same frequency band at the same time. in this report, we first design and implement a real - time 64 - subcarrier 10 mhz full - duplex ofdm physical layer, fd - phy. the proposed fd - phy not only allows synchronous full - duplex transmissions but also selective asynchronous full - duplex modes. further, we show that in over - the - air experiments using optimal antenna placement on actual devices, the self - interference can be suppressed upto 80db, which is 10db more than prior reported results. then we propose a full - duplex mac protocol, fd - mac, which builds on ieee 802. 11 with three new mechanisms - - shared random backoff, header snooping and virtual backoffs. the new mechanisms allow fd - mac to discover and exploit full - duplex opportunities in a distributed manner. our over - the - air tests show over 70 % throughput gains from using full - duplex over half - duplex in realistically used cases.
|
arxiv:1107.0607
|
manufacturing engineering or production engineering is a branch of professional engineering that shares many common concepts and ideas with other fields of engineering such as mechanical, chemical, electrical, and industrial engineering. manufacturing engineering requires the ability to plan the practices of manufacturing ; to research and to develop tools, processes, machines, and equipment ; and to integrate the facilities and systems for producing quality products with the optimum expenditure of capital. the manufacturing or production engineer ' s primary focus is to turn raw material into an updated or new product in the most effective, efficient & economic way possible. an example would be a company uses computer integrated technology in order for them to produce their product so that it is faster and uses less human labor. = = overview = = manufacturing engineering is based on core industrial engineering and mechanical engineering skills, adding important elements from mechatronics, commerce, economics, and business management. this field also deals with the integration of different facilities and systems for producing quality products ( with optimal expenditure ) by applying the principles of physics and the results of manufacturing systems studies, such as the following : manufacturing engineers develop and create physical artifacts, production processes, and technology. it is a very broad area which includes the design and development of products. manufacturing engineering is considered to be a subdiscipline of industrial engineering / systems engineering and has very strong overlaps with mechanical engineering. manufacturing engineers ' success or failure directly impacts the advancement of technology and the spread of innovation. this field of manufacturing engineering emerged from the tool and die discipline in the early 20th century. it expanded greatly from the 1960s when industrialized countries introduced factories with : 1. numerical control machine tools and automated systems of production. 2. advanced statistical methods of quality control : these factories were pioneered by the american electrical engineer william edwards deming, who was initially ignored by his home country. the same methods of quality control later turned japanese factories into world leaders in cost - effectiveness and production quality. 3. industrial robots on the factory floor, introduced in the late 1970s : these computer - controlled welding arms and grippers could perform simple tasks such as attaching a car door quickly and flawlessly 24 hours a day. this cut costs and improved production speed. = = history = = the history of manufacturing engineering can be traced to factories in the mid - 19th century usa and 18th century uk. although large home production sites and workshops were established in china, ancient rome, and the middle east, the venice arsenal provides one of the first examples of a factory in the modern sense of the word. founded in 1104 in the
|
https://en.wikipedia.org/wiki/Manufacturing_engineering
|
quicksort algorithm with hoare ' s partition scheme is traditionally implemented with nested loops. in this article, we present loop programming and refactoring techniques that lead to simplified implementation for hoare ' s quicksort algorithm consisting of a single loop. we believe that the techniques are beneficial for general programming and may be used for the discovery of more novel algorithms.
|
arxiv:1906.05384
|
high field superconducting magnets using high temperature superconductors are being developed for high energy physics, nuclear magnetic resonance and energy storage applications. although the conductor technology has progressed to the point where such large magnets can be readily envisioned, quench protection remains a key challenge. it is well - established that quench propagation in hts magnets is very slow and this brings new challenges that must be addressed. in this paper, these challenges are discussed and potential solutions, driven by new technologies such as optical fiber based sensors and thermally conducting electrical insulators, are reviewed.
|
arxiv:1401.3937
|
many swift grbs show an early phase of shallow decay in their x - ray afterglows, lasting from $ t \ sim 10 ^ { 2. 5 } $ s to $ \ sim 10 ^ 4 $ s after the grb, where the flux decays as $ \ sim t ^ { - 0. 2 } - t ^ { - 0. 8 } $. this is perhaps the most mysterious of the new features discovered by swift in the early x - ray afterglow, since it is still not clear what causes it. i discuss different possible explanations for this surprising new discovery, as well as their potential implications for the gamma - ray efficiency, the afterglow kinetic energy, and perhaps even for the physics of collisionless relativistic shocks.
|
arxiv:astro-ph/0612516
|
understanding or comprehending source code is one of the core activities of software engineering. understanding object - oriented source code is essential and required when a programmer maintains, migrates, reuses, documents or enhances source code. the source code that is not comprehended cannot be changed. the comprehension of object - oriented source code is a difficult problem solving process. in order to document object - oriented software system there are needs to understand its source code. to do so, it is necessary to mine source code dependencies in addition to quantitative information in source code such as the number of classes. this paper proposes an automatic approach, which aims to document object - oriented software by visualizing its source code. the design of the object - oriented source code and its main characteristics are represented in the visualization. package content, class information, relationships between classes, dependencies between methods and software metrics is displayed. the extracted views are very helpful to understand and document the object - oriented software. the novelty of this approach is the exploiting of code dependencies and quantitative information in source code to document object - oriented software efficiently by means of a set of graphs. to validate the approach, it has been applied to several case studies. the results of this evaluation showed that most of the object - oriented software systems have been documented correctly.
|
arxiv:1601.07742
|
prevailing methods of course allocation at undergraduate institutions involve reserving seats to give priority to designated groups of students. we introduce a competitive equilibrium - based mechanism that assigns course seats using student preferences and course priorities. this mechanism satisfies approximate notions of stability, efficiency, envy - freeness, and strategy - proofness. we evaluate its performance relative to a mechanism widely used in practice using preferences estimated from university data. our empirical findings demonstrate an improvement in student satisfaction and allocation fairness. the number of students who envy another student of weakly lower priority declines by 8 percent, or roughly 500 students.
|
arxiv:2412.05691
|
using a relation between a bi - orthogonal set of equiseparable bases and the weak values of the density matrix we derive an explicit formula for its tomographic reconstruction completely analogous to the standard mutually unbiased bases expansion. with the simple example of a qubit is evidenced the relationship between weak values, measured probabilities and the separation between non - orthogonal bases.
|
arxiv:1602.04073
|
this work deals with content - based video indexing. our viewpoint is semi - automatic analysis of compressed video. we consider the possible applications of motion analysis and moving object detection : assisting moving object indexing, summarising videos, and allowing image and motion queries. we propose an approach based on interest points. as first results, we test and compare the stability of different types of interest point detectors in compressed sequences.
|
arxiv:cs/0004012
|
it is a well established notion that animals can detect the earth ' s magnetic field, while the biophysical origin of such magnetoreception is still elusive. recently, a magnetic receptor drosophila cg8198 ( magr ) with a rod - like protein complex is reported [ qin \ emph { et al }., nat. mater. \ textbf { 15 }, 217 ( 2016 ) ] to act like a compass needle to guide the magnetic orientation of animals. this view, however, is challenged [ meister, elife \ textbf { 5 }, e17210 ( 2016 ) ] by arguing that thermal fluctuations beat the zeeman coupling of the proteins ' s magnetic moment with the rather weak geomagnetic field ( $ \ sim25 - 65 $ $ \ mu $ t ). in this work, we show that the spin - mechanical interaction at the atomic scale gives rise to a high blocking temperature which allows a good alignment of protein ' s magnetic moment with the earth ' s magnetic field at room temperature. our results provide a promising route to resolve the debate on the thermal behaviors of magr, and may stimulate a broad interest on spin - mechanical couplings down to atomistic levels.
|
arxiv:1802.02376
|
there are a number of localic separation axioms which are roughly analogous to the $ t _ 1 $ - axiom from classical topology. for instance, besides the well - known subfitness and fitness, there are also rosicky - smarda ' s $ t _ 1 $ - locales, totally unordered locales and, more categorically, the recently introduced $ \ mathcal { f } $ - separated locales ( i. e., those with a fitted diagonal ) - a property strictly weaker than fitness. it has recently been shown that the strong hausdorff property and $ \ mathcal { f } $ - separatedness are in a certain sense dual to each other. in this paper, we provide further instances of this duality - e. g., we introduce a new first - order separation property which is to $ \ mathcal { f } $ - separatedness as the johnstone - sun - shu - hao - paseka - smarda conservative hausdorff axiom is to the strong hausdorff property, and which can be of independent interest. using this, we tie up the loose ends of the theory by establishing all the possible implications between these properties and other $ t _ 1 $ - type axioms occurring in the literature. in particular, we show that the strong hausdorff property does not imply $ \ mathcal { f } $ - separatedness, a question which remained open and shows a remarkable difference with its counterpart in the category of topological spaces.
|
arxiv:2310.18522
|
we consider the problem of locating a nearest descriptor system of prescribed reduced order to a descriptor system with large order with respect to the $ { \ mathcal l } _ \ infty $ norm. widely employed approaches such as the balanced truncation and best hankel norm approximation for this $ { \ mathcal l } _ \ infty $ model reduction problem are usually expensive and yield solutions that are not optimal, not even locally. we propose approaches based on the minimization of the $ { \ mathcal l } _ \ infty $ objective by means of smooth optimization techniques. as we illustrate, direct applications of smooth optimization techniques are not feasible, since the optimization techniques converge at best at a linear rate requiring too many evaluations of the costly $ { \ mathcal l } _ \ infty $ - norm objective to be practical. we replace the original large - scale system with a system of smaller order that interpolates the original system at points on the imaginary axis, and minimize the $ { \ mathcal l } _ \ infty $ objective after this replacement. the smaller system is refined by interpolating at additional imaginary points determined based on the local minimizer of the $ { \ mathcal l } _ \ infty $ objective, and the optimization is repeated. we argue the framework converges at a quadratic rate under smoothness and nondegeneracy assumptions, and describe how asymptotic stability constraints on the reduced system sought can be incorporated into our approach. the numerical experiments on benchmark examples illustrate that the approach leads to locally optimal solutions to the $ { \ mathcal l } _ \ infty $ model reduction problem, and the convergence occurs quickly for descriptors systems of order a few ten thousands.
|
arxiv:2309.08011
|
transmission through disordered samples can be controlled by illuminating a sample with waveforms corresponding to the eigenchannels of the transmission matrix. but can the tm be exploited to selectively excite quasi - normal modes and so control the spatial profile and dwell time inside the medium? we show in microwave and numerical studies that spectra of the tm can be analyzed into modal transmission matrices of rank unity. this makes it possible to enhance the energy within a sample by a factor equal to the number of channels. limits to modal selectivity arise, however, from correlation in the speckle patterns of neighboring modes. in accord with an effective hamiltonian model, the degree of modal speckle correlation grows with increasing modal spectral overlap and non - orthogonality of the modes of non - hermitian systems. this is observed when the coupling of a sample to its surroundings increases, as in the crossover from localized to diffusive waves.
|
arxiv:1803.01514
|
domain - specific modelling languages ( dsmls ) help practitioners solve modelling challenges specific to various domains. as domains grow more complex and heterogeneous in nature, industrial practitioners often face challenges in the usability of graphical dsmls. there is still a lack of guidelines that industrial language engineers should consider for improving the user experience ( ux ) of these practitioners. the overall topic of ux is vast and subjective, and general guidelines and definitions of ux are often overly generic or tied to specific technological spaces. to solve this challenge, we leverage existing design principles and standards of human - centred design and ux in general and propose definitions and guidelines for ux and user experience design ( uxd ) aspects in graphical dsmls. in this paper, we categorize the key uxd aspects, primarily based on our experience in developing industrial dsmls, that language engineers should consider during graphical dsml development. ultimately, these uxd guidelines help to improve the general usability of industrial dsmls and support language engineers in developing better dsmls that are independent of graphical modelling tools and more widely accepted by their users.
|
arxiv:2209.14060
|
magnetic torque is used to actuate nano - torsional resonators, which are fabricated by focused - ion - beam milling of permalloy coated silicon nitride membranes. optical interferometry is used to measure the mechanical response of two torsion modes at resonance, which is proportional to the magnetization vector of the nanomagnetic volume. by varying the bias magnetic field, the magnetic behavior can be measured with excellent sensitivity ( $ \ approx 10 ^ 8 \ mu _ b $ ) for single magnetic elements.
|
arxiv:0911.2517
|
for the rapid cycling synchrotron of china spallation neutron source ( csns / rcs ), the stripping foil scattering generates the beam halo and gives rise to additional beam losses during the injection process. the interaction between the proton beam and the stripping foil was discussed and the foil scattering was studied. a simple model and the realistic situation of the foil scattering were considered. by using the codes orbit and fluka, the multi - turn phase space painting injection process with the stripping foil scattering for csns / rcs was simulated and the beam losses due to the foil scattering were obtained.
|
arxiv:1210.4611
|
we prove the consistency of the failure of the singular cardinals hypothesis at $ \ aleph _ \ omega $ together with the reflection of all stationary subsets of $ \ aleph _ { \ omega + 1 } $. this shows that two classic results of magidor ( from 1977 and 1982 ) can hold simultaneously.
|
arxiv:2209.10501
|
previous analyses of point sources in the gamma - ray range were done either below 30 mev or above 100 mev. below 30 mev, the imaging compton telescope ( comptel ) onboard nasa ' s compton gamma - ray observatory detected 26 steady sources in the energy range from 0. 75 to 30 mev. at high energy, the fermi large area telescope ( lat ) has detected more than three thousand sources between 100 mev and 300 gev. since the fermi lat detects gamma rays also below 100 mev, we apply a point source detection algorithm in the energy range between 30 mev and 100 mev. in the analysis we use pgwave, which is a background independent tool based on a wavelet transform.
|
arxiv:1802.02913
|
the princeton variability survey ( pvs ) is a robotic survey which makes use of readily available, ` ` off - the - shelf ' ' type hardware products, in conjunction with a powerful set of commercial software products, in order to monitor and discover variable objects in the night sky. the main goal of the pvs has been to devise an automated telescope and data reduction system, requiring only moderate technical and financial resources to assemble, which may be easily replicated by the dedicated amateur, a student group, or a professional and used to study and discover a variety of variable objects, such as stars. this paper describes the hardware and software components of the pvs device as well as observational results from the initial season of the pvs, including the discovery of a new bright variable star.
|
arxiv:astro-ph/0201394
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.