text (stringlengths 1-3.65k) | source (stringlengths 15-79) |
---|---|
In this work, we consider an extension of the symmetric teleparallel equivalent of general relativity (STEGR), namely $f(\mathbb{Q})$ gravity, by including a boundary term $\mathbb{B}_Q$, where $\mathbb{Q}$ is the non-metricity scalar. More specifically, we explore static and spherically symmetric black hole and regular black hole solutions in $f(\mathbb{Q}, \mathbb{B}_Q)$ gravity coupled to nonlinear electrodynamics (NLED). In particular, to obtain black hole solutions, and in order to ensure that our solutions preserve Lorentz symmetry, we assume the relation $f_Q = -f_B$, where $f_Q = \partial f/\partial\mathbb{Q}$ and $f_B = \partial f/\partial\mathbb{B}_Q$. We develop three models of black holes, and as the starting point for each case we specify the non-metricity scalar or the boundary term in such a way as to obtain the metric function $A(r)$. Additionally, we are able to express the matter content through analytical solutions for specific NLED Lagrangians ${\cal L}_{\rm NLED}(F)$. Furthermore, we also obtain generalized solutions of the Bardeen and Culetu types of regular black holes by imposing specific metric functions.
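For orientation (this example is not taken from the abstract above), the Lorentz-symmetry condition $f_Q = -f_B$ is satisfied automatically by any Lagrangian that depends on $\mathbb{Q}$ and $\mathbb{B}_Q$ only through their difference:

```latex
% A simple family of Lagrangians obeying the constraint f_Q = -f_B:
f(\mathbb{Q},\mathbb{B}_Q) = g(\mathbb{Q}-\mathbb{B}_Q)
\quad\Longrightarrow\quad
f_Q = g'(\mathbb{Q}-\mathbb{B}_Q) = -f_B ,
```

for an arbitrary differentiable function $g$; whether the three models of the paper take this particular form is not implied here.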
|
arxiv:2402.02534
|
A user who does not have a quantum computer but wants to perform quantum computations may delegate the computation to a quantum cloud server. For the delegation to work, it must be assured that no malicious server can obtain any important information about the computation. The blind protocol was proposed as a way for the user to protect their information from unauthorized actions of the server. Among the blind protocols proposed thus far, a protocol with two servers sharing entanglement does not require any quantum resource from the user, but it does not allow the servers to communicate even after the computation. In this paper, we propose a protocol that extends this two-server protocol to multiple servers and remains secure even if some servers communicate with each other after the computation. Dummy gates and a circuit modeled after brickwork states play a crucial role in the new protocol.
|
arxiv:2106.05537
|
The voluminous nature of geospatial temporal data from physical monitors and simulation models poses challenges to efficient data access, often resulting in cumbersome temporal selection experiences in web-based data portals. Thus, selecting a subset of time steps for prioritized visualization and pre-loading is highly desirable. Addressing this issue, this paper establishes a multifaceted definition of salient time steps via extensive need-finding studies with domain experts to understand their workflows. Building on this, we propose a novel approach that leverages autoencoders and dynamic programming to facilitate user-driven temporal selections. Structural features, statistical variations, and distance penalties are incorporated to make more flexible selections. User-specified priorities, spatial regions, and aggregations are used to combine different perspectives. We design and implement a web-based interface to enable efficient and context-aware selection of time steps and evaluate its efficacy and usability through case studies, quantitative evaluations, and expert interviews.
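As a rough illustration of the selection step described above, the sketch below uses dynamic programming to pick k time steps from per-step latent vectors (e.g., autoencoder embeddings), rewarding dissimilarity between consecutive selections and penalizing temporally clustered picks. The scoring terms are simplified stand-ins for the paper's structural, statistical, and distance criteria.

```python
import numpy as np

def select_time_steps(latents, k, min_gap=1, dist_weight=0.5):
    """Pick k salient time steps from per-step latent vectors.

    Hypothetical scoring: each newly selected step earns the latent-space
    distance to the previously selected step, minus a penalty for picks
    that are temporally too close.  DP over (last step, #selected).
    """
    n = len(latents)
    d = lambda i, j: np.linalg.norm(latents[i] - latents[j])
    NEG = -np.inf
    best = np.full((n, k + 1), NEG)            # best[t, m]: m picks, last one at t
    prev = np.full((n, k + 1), -1, dtype=int)
    best[:, 1] = 0.0                           # any single step starts a selection
    for m in range(2, k + 1):
        for t in range(n):
            for s in range(t - min_gap, -1, -1):
                if best[s, m - 1] == NEG:
                    continue
                gap_penalty = dist_weight / (t - s)     # discourage clustering
                score = best[s, m - 1] + d(s, t) - gap_penalty
                if score > best[t, m]:
                    best[t, m], prev[t, m] = score, s
    t = int(np.argmax(best[:, k]))             # backtrack from the best final step
    picks = [t]
    for m in range(k, 1, -1):
        t = prev[t, m]
        picks.append(t)
    return sorted(picks)

# toy usage: 100 time steps with 8-dimensional latent features
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 8))
print(select_time_steps(feats, k=5))
```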
|
arxiv:2403.03449
|
Telescopes like Herschel and the Atacama Large Millimeter/submillimeter Array (ALMA) are creating new opportunities to study sources in the far infrared (FIR), a wavelength region dominated by cold dust emission. Probing cold dust in active galaxies allows for study of the star formation history of active galactic nuclei (AGN) hosts. The FIR is also an important spectral region for observing AGN which are heavily enshrouded by dust, such as Compton-thick AGN. By using information from deep X-ray surveys and cosmic X-ray background synthesis models, we compute Cloudy photoionization simulations which are used to predict the spectral energy distribution (SED) of AGN in the FIR. Expected differential number counts of AGN and their host galaxies are calculated in the Herschel bands. The expected contribution of AGN and their hosts to the cosmic infrared background (CIRB) and the infrared luminosity density are also computed. Multiple star formation scenarios are investigated using a modified blackbody star formation SED. It is found that FIR observations at ~500 μm are an excellent tool in determining the star formation history of AGN hosts. Additionally, the AGN contribution to the CIRB can be used to determine whether star formation in AGN hosts evolves differently than in normal galaxies. The contribution of Compton-thick AGN to the bright-end differential number counts and to the bright-source infrared luminosity density is a good test of AGN evolution models where quasars are triggered by major mergers.
|
arxiv:1101.3536
|
The spontaneous excitation of a two-level atom held static outside a four-dimensional Schwarzschild black hole and in interaction with a massless scalar field in the Boulware, Unruh, and Hartle-Hawking vacua is investigated, and the contributions of vacuum fluctuations and radiation reaction to the rate of change of the mean atomic energy are calculated separately. We find that for the Boulware vacuum, spontaneous excitation does not occur and ground-state atoms are stable, while the spontaneous emission rate for excited atoms in the Boulware vacuum, which is well behaved at the event horizon, is not the same as that in the usual Minkowski vacuum. However, for both the Unruh vacuum and the Hartle-Hawking vacuum, our results show that the atom would spontaneously excite, as if there were an outgoing thermal flux of radiation or as if it were immersed in a thermal bath of radiation at a proper temperature which reduces to the Hawking temperature in the spatial asymptotic region, depending on whether the scalar field is in the Unruh or Hartle-Hawking vacuum.
|
arxiv:0707.2613
|
In the context of SHARK-NIR (System for coronagraphy with High order Adaptive optics in Z and H band), we present the development of SHINS, the SHARK-NIR instrument control software, focusing in particular on the changes introduced during the assembly, integration, and test (AIT) phase. SHARK-NIR observing sessions will be carried out with "ESO-style" observation blocks (OBs) based on so-called template scripts that will be prepared by observers. We decided to develop templates also for the large number of AIT tests (flexures, coronagraphic mask alignment, scientific camera performance, etc.). Here we present the HTTP API adopted for OB generation and a web-based frontend that implements it. Taking advantage of this approach, we decided to expose APIs also for individual device movement and monitoring, as well as for general status. These APIs are then used in the web-based instrument control and synoptic panels. During the recent AIT phase, a potential collision issue between two motorized components emerged. While we are exploring the possibility of a hardware interlock, we present a software solution developed at the observation software level, which also remains active while other software, such as engineering panels, is in use. The system is based on three protection layers and has been successfully tested.
|
arxiv:2501.08010
|
We report the results of neutron measurements carried out during the application of ultrasound to a solution containing only stable elements such as iron and chlorine, without any radioactive source of any kind. These measurements, carried out with CR39 detectors and a boron trifluoride electronic detector, evidenced the emission of neutron pulses. These pulses stand well above the electronic noise and the background of the laboratory where the measurements were carried out.
|
arxiv:0812.1272
|
We consider a 2-dimensional Bloch-Landau-Pauli Hamiltonian for a spinful electron in a constant magnetic field subject to a periodic background potential. Assuming that the $z$-component of the spin operator is conserved, we compute the linear response of the associated spin density of states to a small change in the magnetic field, and identify it with the spin Hall conductivity. This response is in the form of a spin Chern marker, which is in general quantized to a half-integer, and to an integer under the further assumption of time-reversal symmetry. Our result is thus a generalization, to the context of the quantum spin Hall effect, of the well-known formula by St\v{r}eda, which is formulated instead for charge transport.
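For context, the charge-transport Středa formula referenced above and the spin analogue considered in the paper can be written schematically as follows (the precise operator statement, including the spin Chern marker, is in the paper itself):

```latex
% Schematic Streda-type relations at fixed chemical potential mu:
% charge transport                         spin analogue studied here
\sigma_{xy} \;=\; e\,\frac{\partial n}{\partial B}\bigg|_{\mu}
\qquad\longrightarrow\qquad
\sigma_{xy}^{\rm spin} \;\sim\; \frac{\partial n_{s}}{\partial B}\bigg|_{\mu},
```

where $n$ is the particle density and $n_s$ the spin density of states response discussed in the abstract.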
|
arxiv:2002.02419
|
The geometric phase in the dynamics of a spin qubit driven by transverse microwave (MW) and longitudinal radiofrequency (RF) fields is studied. The phase acquired by the qubit during a full period of the "slow" RF field manifests itself as a shift of the Rabi frequency $\omega_1$ of the spin qubit in the MW field. We find that, for a linearly polarized RF field, this shift does not vanish at the second and higher even orders in the adiabaticity parameter $\omega_{\rm rf}/\omega_1$, where $\omega_{\rm rf}$ is the RF frequency. As a result, the adiabatic (Berry) phases for the rotating and counter-rotating RF components compensate each other, and only the higher-order geometric phase is observed. We experimentally identify that phase in the frequency shift of the Rabi oscillations detected by time-resolved electron paramagnetic resonance.
|
arxiv:1406.4000
|
Market makers play an essential role in financial markets. A successful market maker should control inventory and adverse-selection risks and provide liquidity to the market. As an important methodology in control problems, reinforcement learning enjoys the advantages of being data-driven and requiring less rigid assumptions, and it has received great attention in the market-making field since 2018. However, although the Chinese commodity market has the biggest trading volume in agricultural products, nonferrous metals, and some other sectors, studies applying RL to market making in the Chinese market are still rare. In this thesis, we try to fill the gap. Our contribution is threefold: we develop an automatic trading system, verify the feasibility of applying reinforcement learning in the Chinese commodity market, and probe the agent's behavior by analyzing how it reacts to different environmental conditions.
|
arxiv:2205.08936
|
The melting-like transition in potassium clusters K_N, with N = 20, 55, 92, and 142, is studied by using an orbital-free density-functional constant-energy molecular dynamics simulation method, and compared to previous theoretical results on the melting-like transition in sodium clusters of the same sizes. Melting in potassium and sodium clusters proceeds in a similar way: a surface melting stage develops upon heating before the homogeneous melting temperature is reached. Premelting effects are nevertheless more important and more easily established in potassium clusters, and the transition regions spread over temperature intervals which are wider than in the case of sodium. For all the sizes considered, the percentage melting-temperature reduction when passing from Na to K clusters is substantially larger than in the bulk. Once those two materials have been compared for a number of different cluster sizes, we study the melting-like transition in Rb_55 and Cs_55 clusters and make a comparison with the melting behavior of Na_55 and K_55. As the atomic number increases, the height of the specific heat peaks decreases, their width increases, and the melting temperature decreases as in bulk melting, but in a more pronounced way.
|
arxiv:physics/0005053
|
We study the problem of building a visual concept library for visual recognition. Building effective visual concept libraries is challenging, as manual definition is labor-intensive, while relying solely on LLMs for concept generation can result in concepts that lack discriminative power or fail to account for the complex interactions between them. Our approach, Escher, takes a library-learning perspective to iteratively discover and improve visual concepts. Escher uses a vision-language model (VLM) as a critic to iteratively refine the concept library, including accounting for interactions between concepts and how they affect downstream classifiers. By leveraging the in-context learning abilities of LLMs and the history of performance using various concepts, Escher dynamically improves its concept generation strategy based on the VLM critic's feedback. Finally, Escher does not require any human annotations and is thus an automated plug-and-play framework. We empirically demonstrate the ability of Escher to learn a concept library for zero-shot, few-shot, and fine-tuning visual classification tasks. This work represents, to our knowledge, the first application of concept library learning to real-world visual tasks.
|
arxiv:2504.00185
|
The authors previously introduced a diffeomorphism-invariant definition of a homogeneous and isotropic sector of loop quantum gravity, along with a program to embed loop quantum cosmology into it. The present paper works out that program in detail for the simpler, but still physically non-trivial, case where the target of the embedding is the homogeneous, but not isotropic, Bianchi I model. The diffeomorphism-invariant conditions imposing homogeneity and isotropy in the full theory reduce to conditions imposing isotropy on an already homogeneous Bianchi I spacetime. The reduced conditions are invariant under the residual diffeomorphisms still allowed after gauge fixing the Bianchi I model. We show that there is a unique embedding of the quantum isotropic model into the homogeneous quantum Bianchi I model that (a) is covariant with respect to the actions of such residual diffeomorphisms, and (b) intertwines both the (signed) volume operator and at least one directional Hubble rate. That embedding also intertwines all other operators of interest in the respective loop quantum cosmological models, including their Hamiltonian constraints. It thus establishes a precise equivalence between dynamics in the isotropic sector of the Bianchi I model and the quantized isotropic model, and not just their kinematics. We also discuss the adjoint relationship between the embedding map defined here and a projection map previously defined by Ashtekar and Wilson-Ewing. Finally, we highlight certain features that simplify this reduced embedding problem, but which may not have direct analogues in the embedding of homogeneous and isotropic loop quantum cosmology into full loop quantum gravity.
|
arxiv:2102.03901
|
Effective field theory is believed to provide a useful framework for describing low-energy nuclear phenomena in a model-independent fashion. I give here a brief account of the basic features of this approach, some of its latest developments, and examples of actual calculations carried out in this framework.
|
arxiv:nucl-th/0308055
|
In many real-world networks, agents are initially unsure of each other's qualities and must learn about each other over time via repeated interactions. This paper is the first to provide a methodology for studying the dynamics of such networks, taking into account that agents differ from each other, that they begin with incomplete information, and that they must learn through past experiences which connections/links to form and which to break. The network dynamics in our model vary drastically from the dynamics in models of complete information. With incomplete information and learning, agents who provide high benefits will develop high reputations and remain in the network, while agents who provide low benefits will drop in reputation and become ostracized. We show, among many other things, that the information to which agents have access and the speed at which they learn and act can have a tremendous impact on the resulting network dynamics. Using our model, we can also compute the ex ante social welfare given an arbitrary initial network, which allows us to characterize the socially optimal network structures for different sets of agents. Importantly, we show through examples that the optimal network structure depends sharply on both the initial beliefs of the agents and the rate of learning by the agents. Due to the potential negative consequences of ostracism, it may be necessary to place agents with lower initial reputations at less central positions within the network.
|
arxiv:1507.04065
|
We obtain an entire Liouville-type theorem for the classical semilinear subcritical elliptic equation on the Heisenberg group. A pointwise estimate near an isolated singularity is also proved. The core of the proofs is an a priori integral estimate, deduced from a generalization of the formula found by Jerison and Lee.
|
arxiv:2011.07749
|
Indoor localization has gained significant attention in recent years due to its various applications in smart homes, industrial automation, and healthcare, especially since more people rely on their wireless devices for location-based services. Deep-learning-based solutions have shown promising results in accurately estimating the position of wireless devices in indoor environments using wireless parameters such as channel state information (CSI) and received signal strength indicator (RSSI). However, despite the success of deep-learning-based approaches in achieving high localization accuracy, these models suffer from a lack of generalizability and cannot be readily deployed to new environments or operate in dynamic environments without retraining. In this paper, we propose meta-learning-based localization models to address the lack of generalizability that persists in conventionally trained DL-based localization models. Furthermore, since meta-learning algorithms require diverse datasets from several different scenarios, which can be hard to collect in the context of localization, we design and propose a new meta-learning algorithm, TB-MAML (Task-Biased Model-Agnostic Meta-Learning), intended to further improve generalizability when the dataset is limited. Lastly, we evaluate the performance of TB-MAML-based localization against conventionally trained localization models and localization done using other meta-learning algorithms.
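The sketch below is a minimal first-order MAML loop on a synthetic linear-regression stand-in for a localization task; the task-biased sampling of TB-MAML and realistic CSI/RSSI features are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
W0 = rng.normal(size=5)                              # shared structure across tasks

def make_task():
    """Toy 'localization' task: feature vector -> scalar position, linear ground truth."""
    w = W0 + 0.3 * rng.normal(size=5)                # each environment perturbs W0
    X = rng.normal(size=(20, 5))
    y = X @ w + 0.05 * rng.normal(size=20)
    return X[:10], y[:10], X[10:], y[10:]            # support / query split

def grad(theta, X, y):                               # MSE gradient of a linear model
    return 2 * X.T @ (X @ theta - y) / len(y)

theta = np.zeros(5)                                  # meta-initialization
alpha, beta = 0.05, 0.01                             # inner / outer learning rates
for _ in range(2000):
    Xs, ys, Xq, yq = make_task()
    phi = theta - alpha * grad(theta, Xs, ys)        # inner adaptation on support set
    theta = theta - beta * grad(phi, Xq, yq)         # first-order MAML meta-update

# after meta-training, theta adapts to a new task in a single gradient step
Xs, ys, Xq, yq = make_task()
phi = theta - alpha * grad(theta, Xs, ys)
print("query MSE after one-step adaptation:", np.mean((Xq @ phi - yq) ** 2))
```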
|
arxiv:2305.13453
|
Information-theoretical restrictions on the information transfer in quantum measurements are studied. They are derived for the measurement of a system S by a detector D, registered and processed by an information system O. The formalism of inference maps in Hilbert space is used for this purpose; it permits calculating the restricted state of O, which contains the information on the parameters of S that is available to O. It is shown that the principal information losses, inevitable in this formalism and induced by the Heisenberg commutation relations, account for the stochasticity of the measurement outcomes registered by O in individual events.
|
arxiv:quant-ph/0610215
|
Retrieval-augmented generation (RAG) empowers large language models to access external and private corpora, enabling factually consistent responses in specific domains. By exploiting the inherent structure of the corpus, graph-based RAG methods further enrich this process by building a knowledge graph index and leveraging the structural nature of graphs. However, current graph-based RAG approaches seldom prioritize the design of graph structures. Inadequately designed graphs not only impede the seamless integration of diverse graph algorithms but also result in workflow inconsistencies and degraded performance. To further unleash the potential of graphs for RAG, we propose NodeRAG, a graph-centric framework introducing heterogeneous graph structures that enable the seamless and holistic integration of graph-based methodologies into the RAG workflow. By aligning closely with the capabilities of LLMs, this framework ensures a fully cohesive and efficient end-to-end process. Through extensive experiments, we demonstrate that NodeRAG exhibits performance advantages over previous methods, including GraphRAG and LightRAG, not only in indexing time, query time, and storage efficiency but also in delivering superior question-answering performance on multi-hop benchmarks and open-ended head-to-head evaluations with minimal retrieval tokens. Our GitHub repository can be found at https://github.com/terry-xu-666/noderag.
|
arxiv:2504.11544
|
We investigate statistical and individual astrophysical properties of active galactic nuclei (AGNs), such as parsec-scale flux density, core dominance, angular and linear sizes, maximum observed brightness temperatures of VLBI core components, spectral index distributions for core and jet components, and the evolution of brightness temperature along the jets. Furthermore, we statistically compare core flux densities and brightness temperatures as well as jet spectral indices of gamma-ray bright and weak sources. We used 19 very long baseline interferometry (VLBI) observing sessions carried out simultaneously at 2.3 and 8.6 GHz with the participation of 10 Very Long Baseline Array (VLBA) stations and up to 10 additional geodetic telescopes. The observations span the period 1998-2003. We present here single-epoch results from high-resolution radio observations of 370 AGNs. Our VLBI images at 2.3 and 8.6 GHz as well as Gaussian models are presented and analyzed. At least one-fourth of the cores are completely unresolved on the longest baselines of the global VLBI observations. The VLBI core components are partially opaque, with a median spectral index of $\alpha_{\rm core} = 0.3$, while the jet features are usually optically thin, with $\alpha_{\rm jet} = -0.7$. The spectral index typically decreases along the jet ridge line owing to spectral aging, with a median gradient of $-0.05$ mas$^{-1}$. Brightness temperatures are found to be affected by Doppler boosting, reaching up to $\sim 10^{13}$ K with a median of $\sim 2.5\times10^{11}$ K at both frequencies. The brightness-temperature gradients along the jets typically follow a power law $T_{\rm b} \sim r^{-2.2}$ at both frequencies. 147 sources (40%) positionally associated with gamma-ray detections from the Fermi LAT second source catalog have higher core flux densities and brightness temperatures, and are characterized by a less steep radio spectrum of the optically thin jet emission.
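The sign conventions implied by the quoted numbers are the usual ones:

```latex
% Flux-density convention for the spectral index and the reported
% brightness-temperature gradient along the jets:
S_\nu \propto \nu^{+\alpha}, \qquad
T_{\rm b}(r) \propto r^{-2.2}
\quad\text{(median power law at both 2.3 and 8.6 GHz).}
```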
|
arxiv:1205.5559
|
Though widely used in image classification, convolutional neural networks (CNNs) are prone to noise interruptions, i.e., the CNN output can be drastically changed by small image noise. To improve the noise robustness, we try to integrate CNNs with wavelets by replacing the common down-sampling operations (max-pooling, strided convolution, and average pooling) with the discrete wavelet transform (DWT). We first propose general DWT and inverse DWT (IDWT) layers applicable to various orthogonal and biorthogonal discrete wavelets such as Haar, Daubechies, and Cohen, and then design wavelet-integrated CNNs (WaveCNets) by integrating DWT into commonly used CNNs (VGG, ResNets, and DenseNet). During the down-sampling, WaveCNets apply DWT to decompose the feature maps into low-frequency and high-frequency components. Containing the main information, including the basic object structures, the low-frequency component is transmitted into the following layers to generate robust high-level features. The high-frequency components are dropped to remove most of the data noise. The experimental results show that wavelet integration accelerates CNN training, and WaveCNets achieve higher accuracy on ImageNet than various vanilla CNNs. We have also tested the performance of WaveCNets on the noisy version of ImageNet, ImageNet-C, and six adversarial attacks; the results suggest that the proposed DWT/IDWT layers could provide better noise robustness and adversarial robustness. When applying WaveCNets as backbones, the performance of object detectors (i.e., Faster R-CNN and RetinaNet) on the COCO detection dataset is consistently improved. We believe that suppression of the aliasing effect, i.e., the separation of low-frequency and high-frequency information, is the main advantage of our approach. The code for our DWT/IDWT layers and the different WaveCNets is available at https://github.com/cvi-szu/wavecnet.
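A minimal NumPy/PyWavelets sketch of the DWT-based downsampling idea follows; the actual WaveCNets implement this as differentiable layers inside the CNN backbones, which is not shown here.

```python
import numpy as np
import pywt

def dwt_downsample(feature_map, wavelet="haar", keep_high=False):
    """Replace stride-2 pooling with a 2-D DWT: keep the low-frequency band,
    optionally also return the high-frequency detail bands.
    feature_map: array of shape (C, H, W)."""
    lows, highs = [], []
    for ch in feature_map:                       # per-channel 2-D DWT
        cA, (cH, cV, cD) = pywt.dwt2(ch, wavelet)
        lows.append(cA)
        highs.append(np.stack([cH, cV, cD]))
    low = np.stack(lows)                         # (C, H/2, W/2): fed to the next layer
    return (low, np.stack(highs)) if keep_high else low

x = np.random.rand(16, 32, 32).astype(np.float32)
y = dwt_downsample(x)
print(x.shape, "->", y.shape)                    # (16, 32, 32) -> (16, 16, 16)
```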
|
arxiv:2107.13335
|
Conventional training of deep neural networks requires a large number of annotated images, which is a laborious and time-consuming task, particularly for rare objects. Few-shot object detection (FSOD) methods offer a remedy by realizing robust object detection using only a few training samples per class. An unexplored challenge for FSOD is that instances from unlabeled novel classes that do not belong to the fixed set of training classes appear in the background. These objects behave similarly to label noise, leading to FSOD performance degradation. We develop a semi-supervised algorithm to detect and then utilize these unlabeled novel objects as positive samples during training to improve FSOD performance. Specifically, we propose a hierarchical ternary classification region proposal network (HTRPN) to localize the potential unlabeled novel objects and assign them new objectness labels. Our improved hierarchical sampling strategy for the region proposal network (RPN) also boosts the perception ability of the object detection model for large objects. Our experimental results indicate that our method is effective and outperforms the existing state-of-the-art (SOTA) FSOD methods.
|
arxiv:2303.10422
|
engineering experimentally to knock out a gene or genes responsible for a specific trait, or to add genes such as GFP that report when a gene of interest is being expressed. These technologies enable the biotechnological use of whole plants or plant cell cultures grown in bioreactors to synthesise pesticides, antibiotics or other pharmaceuticals, as well as the practical application of genetically modified crops designed for traits such as improved yield. Modern morphology recognises a continuum between the major morphological categories of root, stem (caulome), leaf (phyllome) and trichome. Furthermore, it emphasises structural dynamics. Modern systematics aims to reflect and discover phylogenetic relationships between plants. Modern molecular phylogenetics largely ignores morphological characters, relying on DNA sequences as data. Molecular analysis of DNA sequences from most families of flowering plants enabled the Angiosperm Phylogeny Group to publish in 1998 a phylogeny of flowering plants, answering many of the questions about relationships among angiosperm families and species. The theoretical possibility of a practical method for identification of plant species and commercial varieties by DNA barcoding is the subject of active current research. == Branches of botany == Botany is divided along several axes. Some subfields of botany relate to particular groups of organisms. Divisions related to the broader historical sense of botany include bacteriology, mycology (or fungology), and phycology – respectively, the study of bacteria, fungi, and algae – with lichenology as a subfield of mycology. The narrower sense of botany as the study of embryophytes (land plants) is called phytology. Bryology is the study of mosses (and in the broader sense also liverworts and hornworts). Pteridology (or filicology) is the study of ferns and allied plants. A number of other taxa of ranks varying from family to subgenus have terms for their study, including agrostology (or graminology) for the study of grasses, synantherology for the study of composites, and batology for the study of brambles. Study can also be divided by guild rather than clade or grade. For example, dendrology is the study of woody plants. Many divisions of biology have botanical subfields. These are commonly denoted by prefixing the word plant (e.g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics), or prefixing or substituting the prefix phyto-
|
https://en.wikipedia.org/wiki/Botany
|
In this work we propose approaches to effectively transfer knowledge from weakly labeled web audio data. We first describe a convolutional neural network (CNN) based framework for sound event detection and classification using weakly labeled audio data. Our model trains efficiently from audio recordings of variable length; hence, it is well suited for transfer learning. We then propose methods to learn representations using this model which can be effectively used for solving the target task. We study both transductive and inductive transfer learning tasks, showing the effectiveness of our methods for both domain and task adaptation. We show that the representations learned using the proposed CNN model generalize well enough to reach human-level accuracy on the ESC-50 sound events dataset and set state-of-the-art results on this dataset. We further use them for the acoustic scene classification task and once again show that our proposed approaches are well suited for this task as well. We also show that our methods are helpful in capturing semantic meanings and relations. Moreover, in this process we also set state-of-the-art results on the AudioSet dataset, relying on the balanced training set.
|
arxiv:1711.01369
|
Over the years, scene understanding has attracted a growing interest in computer vision, providing the semantic and physical scene information necessary for robots to complete particular tasks autonomously. In 3D scenes, rich spatial geometric and topological information is often ignored by RGB-based approaches to scene understanding. In this study, we develop a bottom-up approach to scene understanding that infers support relations between objects from a point cloud. Our approach utilizes the spatial topology information of the plane pairs in the scene and consists of three major steps: 1) detection of pairwise spatial configurations, dividing primitive pairs into local support connections and local inner connections; 2) primitive classification, where a combinatorial optimization method is applied to classify primitives; and 3) support relations inference and hierarchy graph construction, i.e., bottom-up support relations inference and scene hierarchy graph construction containing a primitive level and an object level. Through experiments, we demonstrate that the algorithm achieves excellent performance in primitive classification and support relations inference. Additionally, we show that the scene hierarchy graph contains rich geometric and topological information about objects, and possesses great scalability for scene understanding.
|
arxiv:2404.13842
|
Identifying driver genes is crucial for understanding oncogenesis and developing targeted cancer therapies. Driver discovery methods using protein or pathway networks rely on traditional network science measures, focusing on nodes, edges, or community metrics. These methods can overlook the high-dimensional interactions that cancer genes have within cancer networks. This study presents a novel method using persistent homology to analyze the role of driver genes in higher-order structures within cancer consensus networks derived from main cellular pathways. We integrate mutation data from six cancer types and three biological functions: DNA repair, chromatin organization, and programmed cell death. We systematically evaluated the impact of gene removal on topological voids ($\beta_2$ structures) within the cancer consensus networks. Our results reveal that only known driver genes and cancer-associated genes influence these structures, while passenger genes do not. Although centrality measures alone proved insufficient to fully characterize impact genes, combining higher-order topological analysis with traditional network metrics can improve the precision of distinguishing between drivers and passengers. This work shows that cancer genes play an important role in higher-order structures, going beyond pairwise measures, and provides an approach to distinguish drivers and cancer-associated genes from passenger genes.
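A rough sketch of the kind of knock-out experiment described above, assuming a Vietoris-Rips filtration built from graph shortest-path distances (the paper's exact filtration and networks may differ):

```python
import networkx as nx
import numpy as np
from ripser import ripser     # pip install ripser

def beta2_count(G):
    """Number of 2-dimensional features (voids) in a Rips complex built
    from the shortest-path distance matrix of graph G."""
    D = np.asarray(nx.floyd_warshall_numpy(G))
    D[np.isinf(D)] = D[np.isfinite(D)].max() * 2      # disconnected pairs -> large distance
    dgms = ripser(D, distance_matrix=True, maxdim=2)["dgms"]
    return len(dgms[2])

# toy consensus-like network; replace with a pathway-derived network of interest
G = nx.watts_strogatz_graph(60, 6, 0.1, seed=3)
base = beta2_count(G)
for gene in [0, 1, 2]:                                # candidate genes to knock out
    H = G.copy()
    H.remove_node(gene)
    print(f"node {gene}: beta_2 {base} -> {beta2_count(H)}")
```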
|
arxiv:2409.19115
|
Quantum transport of charge or energy in networks with discrete sites is central to diverse quantum technologies, from molecular electronics to light harvesting and quantum opto-mechanical metamaterials. A one-dimensional network can be viewed as a waveguide. We show that if such a waveguide is hybridised with a control unit that contains a few sites, then transmission through the waveguide depends sensitively on the motion of the sites in the control unit. Together, the hybrid waveguide and its control unit form a Fano-Anderson chain whose Born-Oppenheimer surfaces inherit characteristics from both components: a band structure from the waveguide and potential energy steps as a function of site coordinates from the control unit. Using time-dependent quantum wave packets, we reveal conditions under which the hybrid structure becomes transmissive only if the control unit contains mobile sites that induce non-adiabatic transitions between the surfaces. Hence, our approach provides functional synthetic Born-Oppenheimer surfaces for hybrid quantum technologies combining mechanical and excitonic elements, with possible applications such as switching and temperature sensing.
|
arxiv:2402.07454
|
We show that a wide range of spin clusters with antiferromagnetic intracluster exchange interaction allows one to define a qubit. For these spin cluster qubits, initialization, quantum gate operation, and readout are possible using the same techniques as for single spins. Quantum gate operation for the spin cluster qubit does not require control over the intracluster exchange interaction. Electric and magnetic fields necessary to effect quantum gates need only be controlled on the length scale of the spin cluster rather than the scale for a single spin. Here, we calculate the energy gap separating the logical qubit states from the next excited state and the matrix elements which determine quantum gate operation times. We discuss spin cluster qubits formed by one- and two-dimensional arrays of s = 1/2 spins as well as clusters formed by spins s > 1/2. We illustrate the advantages of spin cluster qubits for various suggested implementations of spin qubits and analyze the scaling of decoherence time with spin cluster size.
|
arxiv:cond-mat/0304296
|
Galaxy-galaxy strong gravitational lenses have become a popular probe of dark matter (DM) by providing a window into structure formation on the smallest scales. In particular, the convergence power spectrum of subhalos within lensing galaxies has been suggested as a promising observable to study DM. However, the distances involved in strong-lensing systems are vast, and we expect the relevant volume to contain line-of-sight (LOS) halos that are not associated with the main lens. We develop a formalism to calculate the effect of LOS halos as an effective convergence power spectrum. The multi-lens-plane equation couples the angular deflections of consecutive lens planes, but by assuming that the perturbations due to the LOS halos are small, we show that they can be projected onto the main-lens plane as effective subhalos. We test our formalism by simulating lensing systems using the full multi-plane lens equation and find excellent agreement. We show how the relative contribution of LOS halos and subhalos depends on the source and lens redshift, as well as the assumed halo and subhalo mass functions. For a fiducial system with a fraction of DM halo mass in substructure $f_{\rm sub} = 0.4\%$ for subhalo masses $[10^5-10^8]\,{\rm M}_{\odot}$, the interloper contribution to the power spectrum is at least several times greater than that of subhalos for source redshifts $z_s \gtrsim 0.5$. Furthermore, it is likely that for the SLACS and BELLS lenses the interloper contribution dominates: $f_{\rm sub} \gtrsim 2\%$ ($4\%$) is needed for subhalos to dominate in SLACS (BELLS), which is higher than current upper bounds on $f_{\rm sub}$ for our mass range. Since the halo mass function is better understood from first principles, the dominance of interlopers in galaxy-galaxy lenses with high-quality imaging can be seen as a significant advantage when translating this observable into a constraint on DM.
|
arxiv:2006.07383
|
In this paper, for the first time, analytical expressions for the determination of the wavelengths of defect modes (DMs) in one-dimensional (1D) photonic crystals (PCs) with two defect layers (DLs) were obtained from the condition of zero reflection. Analytical formulas for the conditions for the merging of two DMs and for DMs with zero reflection coefficient were obtained. Different typical 1D PC structures with two DLs were considered and compared, including their DM behavior. The distribution of the electromagnetic field within the first photonic bandgap (PBG) was analyzed for several cases of DM merging and for different orders of DMs. The results of this research permit the simplification of the analysis of optical sensors and filters based on 1D PCs with two DLs, as well as a more comprehensive understanding of DM behavior within the PBGs of such PC structures. Furthermore, this paper also helps to explain results obtained earlier by other researchers in this field.
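The zero-reflection condition can be explored numerically with the standard transfer-matrix method; the sketch below uses an illustrative quarter-wave stack with two half-wave defect layers, not the specific structures analyzed in the paper.

```python
import numpy as np

def reflectance(stack, lam, n_in=1.0, n_out=1.0):
    """Normal-incidence reflectance of a 1-D multilayer (transfer-matrix method).
    stack: list of (refractive_index, thickness) pairs; lam: vacuum wavelength."""
    M = np.eye(2, dtype=complex)
    for n, d in stack:
        delta = 2 * np.pi * n * d / lam
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    num = n_in * M[0, 0] + n_in * n_out * M[0, 1] - M[1, 0] - n_out * M[1, 1]
    den = n_in * M[0, 0] + n_in * n_out * M[0, 1] + M[1, 0] + n_out * M[1, 1]
    return abs(num / den) ** 2

# illustrative quarter-wave Bragg mirror at 1000 nm with two half-wave "defect" layers
lam0, nH, nL, nD = 1000.0, 2.3, 1.45, 1.0
qw = lambda n: lam0 / (4 * n)
period = [(nH, qw(nH)), (nL, qw(nL))]
stack = period * 5 + [(nD, lam0 / (2 * nD))] + period * 5 \
        + [(nD, lam0 / (2 * nD))] + period * 5
wavelengths = np.linspace(800, 1250, 900)
R = [reflectance(stack, lam) for lam in wavelengths]
# dips of R inside the bandgap mark the defect modes; two defects can split or merge them
print("minimum reflectance in the scanned band:", min(R))
```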
|
arxiv:2408.04397
|
This paper presents an efficient method for extracting second-order sensitivities from a system of implicit nonlinear equations on upcoming computer systems dominated by graphics processing units (GPUs). We design a custom automatic differentiation (AutoDiff) backend that targets highly parallel architectures by extracting the second-order information in batches. When the nonlinear equations are associated with a reduced-space optimization problem, we leverage parallel reverse-mode accumulation in a batched adjoint-adjoint algorithm to efficiently compute the reduced Hessian of the problem. We apply the method to extract the reduced Hessian associated with the balance equations of a power network, and show on the largest instances that a parallel GPU implementation is 30 times faster than a sequential CPU reference based on UMFPACK.
|
arxiv:2201.00241
|
Electronic state calculations using quantum computers are mostly based on second quantization, which is suitable for qubit representation. Another way to describe electronic states on a quantum computer is first quantization, which is expected to achieve smaller scaling with respect to the number of basis functions than second quantization. Among basis functions, a real-space basis is an attractive option for quantum dynamics simulations in the fault-tolerant quantum computation (FTQC) era. A major difficulty in first quantization with a real-space basis is state preparation for many-body electronic systems. This difficulty stems from the antisymmetry of electrons, and it is not straightforward to construct antisymmetric quantum states on a quantum circuit. In the present paper, we provide a design principle for constructing a variational quantum circuit to prepare an antisymmetric quantum state. The proposed circuit generates a superposition of exponentially many Slater determinants, that is, a multi-configuration state, which provides a systematic approach to approximating the exact ground state. We implemented the variational quantum eigensolver (VQE) to obtain the ground state of a one-dimensional hydrogen molecular system. As a result, the proposed circuit reproduced the exact antisymmetric ground state and its energy well, whereas the conventional variational circuit yielded neither an antisymmetric nor a symmetric state. Furthermore, we analyzed the many-body wave functions based on quantum information theory, which illustrated the relation between electron correlation and quantum entanglement.
|
arxiv:2306.08434
|
We evaluate renormalization factors of the domain-wall fermion system with various improved gauge actions at the one-loop level. The renormalization factors are calculated for the quark wave function, quark mass, bilinear quark operators, and three- and four-quark operators in the modified minimal subtraction (MS-bar) scheme with dimensional reduction (DRED) as well as naive dimensional regularization (NDR). We also present detailed results in mean-field improved perturbation theory.
|
arxiv:hep-lat/0206013
|
Thermoacoustic instabilities in can-annular combustors of stationary gas turbines lead to unstable Bloch modes which appear as rotating acoustic pressure waves along the turbine annulus. The multi-scale, multiphysical nature of the full problem makes a detailed analysis challenging. In this work, we derive a low-order, coupled-oscillator model of an idealized can-annular combustor. The unimodal projection of the Helmholtz equation for the can acoustics is combined with the Rayleigh conductivity, which describes the aeroacoustic coupling between neighboring cans. Using a Bloch-wave ansatz, the resulting system is reduced to a single equation for the frequency spectrum. A linear stability analysis is then performed to study the perturbation of the spectrum by the can-to-can interaction. It is observed that the acoustic coupling can suppress or amplify thermoacoustic instabilities, raising the potential for instabilities in nominally stable systems.
|
arxiv:2102.08489
|
A method is described for the reconstruction of the amplitude and phase of the object exit wave function by phase-plate transmission electron microscopy. The proposed method can be considered as in-line holography and requires three images, taken with different phase shifts between undiffracted and diffracted electrons induced by a suitable phase-shifting device. The proposed method is applicable for arbitrary object exit wave functions and non-linear image formation. Verification of the method is performed for examples of a simulated crystalline object wave function and a wave function acquired with off-axis holography. The impact of noise on the reconstruction of the wave function is investigated.
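For intuition only, the block below shows the textbook three-step phase-shifting reconstruction under a linear-interference approximation with shifts 0, 2π/3, and 4π/3; the paper's method additionally handles non-linear image formation, which this sketch does not.

```python
import numpy as np

def reconstruct(I0, I1, I2):
    """Three-step phase-shifting recovery for I_k = A + B*cos(phi + delta_k),
    delta_k = 0, 2pi/3, 4pi/3.  Returns the modulation amplitude B and phase phi."""
    phase = np.arctan2(np.sqrt(3.0) * (I2 - I1), 2.0 * I0 - I1 - I2)
    amplitude = np.sqrt((np.sqrt(3.0) * (I2 - I1)) ** 2
                        + (2.0 * I0 - I1 - I2) ** 2) / 3.0
    return amplitude, phase

# synthetic object exit wave on a 64x64 grid
y, x = np.mgrid[0:64, 0:64]
true_amp = 1.0 + 0.2 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 100.0)
true_phase = 0.8 * np.sin(2 * np.pi * x / 32.0)
shifts = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
images = [2.0 + true_amp * np.cos(true_phase + s) for s in shifts]  # ideal interferograms
amp, ph = reconstruct(*images)
print(np.allclose(amp, true_amp), np.allclose(ph, true_phase))      # both True (noiseless)
```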
|
arxiv:1009.4615
|
The effect of isotope shifts of the nuclear magnetic resonance (NMR) frequency in the xenon isotopes 129Xe and 131Xe polarized by optically oriented alkali metal atoms is not only of fundamental interest, but also of practical significance, since it is the main factor limiting the accuracy of a whole class of prospective navigation and metrological devices. We have studied the parametric dependences of the isotope shift and have shown that this effect is largely due to incomplete averaging of an inhomogeneous local magnetic field by the two xenon isotopes. A numerical model has been derived which qualitatively describes the effect of the isotope frequency shift and provides good quantitative agreement with experiment.
|
arxiv:1906.06556
|
Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a challenge for supervised ML algorithms that is frequently mentioned is the lack of annotated data. As a result, various methods that can learn with less/other types of supervision have been proposed. We review semi-supervised, multiple-instance, and transfer learning in medical imaging, for both diagnosis/detection and segmentation tasks. We also discuss connections between these learning scenarios and opportunities for future research.
|
arxiv:1804.06353
|
Distance metric learning can be viewed as one of the fundamental interests in pattern recognition and machine learning, and it plays a pivotal role in the performance of many learning methods. One of the effective approaches to learning such a metric is to learn it from a set of labeled training samples. The issue of data imbalance is the most important challenge for recent methods. This research tries not only to preserve the local structures but also to address the issue of imbalanced datasets. To do this, the proposed method first extracts a low-dimensional manifold from the input data. Then, it learns the local neighborhood structures and the relationships of the data points in the ambient space based on the adjacencies of the same data points on the embedded low-dimensional manifold. Using the local neighborhood relationships extracted from the manifold space, the proposed method learns the distance metric in a way that minimizes the distance between similar data points and maximizes their distance from dissimilar data points. The evaluations of the proposed method on numerous datasets from the UCI machine learning repository, and also on the KDD Cup 98 dataset as a highly imbalanced dataset, demonstrate the superiority of the proposed approach over other approaches, especially when the imbalance factor is high.
|
arxiv:1902.03453
|
Reliability and transmission distance are generally limited in wireless communications due to severe channel fading. As an effective way to resist channel fading, cooperative relaying is usually adopted in wireless networks, where neighbouring nodes act as relays to help the transmission between the source and the destination. Most research works simply regard these cooperative nodes as trustworthy, which may not be practical in some cases, especially when transmitting confidential information. In this paper, we consider the issue of untrusted relays in cooperative communications and propose an information self-encrypted approach to protect against these relays. Specifically, the original packets of the information are used to encrypt each other as the secret keys, such that the information cannot be recovered before all of the encrypted packets have been received. The information is intercepted only when the relays obtain all of these encrypted packets. It is proved that the intercept probability is reduced to zero exponentially with the number of original packets. However, the security performance is still not satisfactory for a large number of relays. Therefore, the combination with destination-based jamming is further adopted to confuse the relays, which makes the security performance acceptable even for a large number of relays. Finally, simulation results are provided to confirm the theoretical analysis and the superiority of the proposed scheme.
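The self-encryption property described above (nothing is recoverable until every packet arrives) is in the spirit of an all-or-nothing transform. The toy hash-based package transform below illustrates that property; it is an assumption-laden stand-in, not the paper's construction.

```python
import os, hashlib

def H(*parts):                                   # 32-byte hash used as a toy PRF
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def package(packets):
    """Toy all-or-nothing packaging of equal-length 32-byte packets:
    no packet is recoverable unless *all* ciphertext blocks are received."""
    K = os.urandom(32)
    cts = [xor(p, H(K, bytes([i]))) for i, p in enumerate(packets)]
    tail = K
    for i, c in enumerate(cts):                  # hide the key behind every block
        tail = xor(tail, H(c, bytes([i])))
    return cts + [tail]

def unpackage(blocks):
    *cts, tail = blocks
    K = tail
    for i, c in enumerate(cts):                  # all blocks needed to rebuild K
        K = xor(K, H(c, bytes([i])))
    return [xor(c, H(K, bytes([i]))) for i, c in enumerate(cts)]

msg = [os.urandom(32) for _ in range(5)]
assert unpackage(package(msg)) == msg            # the full set decrypts
# dropping any single block leaves every packet pseudo-randomly masked
```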
|
arxiv:1705.06477
|
The fundamental challenge of planning for multi-step manipulation is to find effective and plausible action sequences that lead to the task goal. We present the cascaded variational inference (CAVIN) planner, a model-based method that hierarchically generates plans by sampling from latent spaces. To facilitate planning over long time horizons, our method learns latent representations that decouple the prediction of high-level effects from the generation of low-level motions through cascaded variational inference. This enables us to model dynamics at two different levels of temporal resolution for hierarchical planning. We evaluate our approach on three multi-step robotic manipulation tasks in cluttered tabletop environments given high-dimensional observations. Empirical results demonstrate that the proposed method outperforms state-of-the-art model-based methods by strategically interacting with multiple objects.
|
arxiv:1910.13395
|
Considering the increasing size of available data, the need for statistical methods that control the finite-sample bias is growing. This is mainly due to the frequent settings where the number of variables is large and allowed to increase with the sample size, causing standard inferential procedures to incur a significant loss in performance. Moreover, the complexity of statistical models is also increasing, thereby entailing important computational challenges in constructing new estimators or in implementing classical ones. A trade-off between numerical complexity and statistical properties is often accepted. However, numerically efficient estimators that are altogether unbiased, consistent, and asymptotically normal in high-dimensional problems would generally be ideal. In this paper, we set out a general framework from which such estimators can easily be derived for wide classes of models. This framework is based on the concepts that underlie simulation-based estimation methods such as indirect inference. The approach allows various extensions compared to previous results, as it is adapted to possibly inconsistent estimators and is applicable to discrete models and/or models with a large number of parameters. We consider an algorithm, namely the iterative bootstrap (IB), to efficiently compute simulation-based estimators by showing its convergence properties. Within this framework we also prove the properties of simulation-based estimators, more specifically their unbiasedness, consistency, and asymptotic normality when the number of parameters is allowed to increase with the sample size. Therefore, an important implication of the proposed approach is that it allows one to obtain unbiased estimators in finite samples. Finally, we study this approach when applied to three common models, namely logistic regression, negative binomial regression, and lasso regression.
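A compact illustration of the iterative bootstrap idea, here used to de-bias a deliberately biased variance estimator; the choice of model and initial estimator is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def pi_hat(y):                        # initial (biased) estimator: MLE of the variance
    return np.var(y)                  # divides by n, so E[pi_hat] = (n-1)/n * sigma^2

def simulate(theta, n, h):            # model: N(0, theta); h independent replicate datasets
    return rng.normal(0.0, np.sqrt(theta), size=(h, n))

def iterative_bootstrap(y, n_iter=30, H=200):
    target = pi_hat(y)
    theta = target                    # theta^(0) = initial estimate
    for _ in range(n_iter):
        sims = simulate(theta, len(y), H)
        mean_sim = np.mean([pi_hat(s) for s in sims])
        theta = theta + (target - mean_sim)     # IB update: match simulated to observed
    return theta

y = rng.normal(0.0, 2.0, size=20)               # true variance = 4, small sample
print("naive MLE:", pi_hat(y), " IB-corrected:", iterative_bootstrap(y))
```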
|
arxiv:1810.04443
|
We investigate the properties of BaFe2As2 (122) single crystals upon gold doping, gold being the transition-metal dopant with the highest atomic weight. The Au substitution into the FeAs planes of the 122 crystal structure (Au-122) is only possible up to a small amount of ~3%. We find that the 5d dopant is more effective in reducing magnetism in 122 than its 3d counterpart Cu, and this relates to superconductivity. We provide evidence of short-range magnetic fluctuations and local lattice inhomogeneities that may prevent strong percolative superconductivity in Ba(Fe1-xAux)2As2.
|
arxiv:1506.08749
|
Modern deep neural networks have now reached human-level performance across a variety of tasks. However, unlike humans, they lack the ability to explain their decisions by showing where and telling what concepts guided them. In this work, we present a unified framework for transforming any vision neural network into a spatially and conceptually interpretable model. We introduce a spatially-aware concept bottleneck layer that projects "black-box" features of pre-trained backbone models into interpretable concept maps, without requiring human labels. By training a classification layer over this bottleneck, we obtain a self-explaining model that articulates which concepts most influenced its prediction, along with heatmaps that ground them in the input image. Accordingly, we name this method the "Spatially-Aware and Label-Free Concept Bottleneck Model" (SALF-CBM). Our results show that the proposed SALF-CBM: (1) outperforms non-spatial CBM methods, as well as the original backbone, on a variety of classification tasks; (2) produces high-quality spatial explanations, outperforming widely used heatmap-based methods on a zero-shot segmentation task; and (3) facilitates model exploration and debugging, enabling users to query specific image regions and refine the model's decisions by locally editing its concept maps.
|
arxiv:2502.20134
|
Intramolecular spin alignment in pi-conjugated molecules is studied theoretically in a model of a Peierls-Hubbard chain coupled with two localized spins. By means of the exact diagonalization technique, we demonstrate that a spin singlet (S = 0) to quartet (S = 3/2) transition can be induced by electronic doping, depending on the chain length, the positions of the localized spins, and the sign of the electron-spin coupling. The calculated results provide a theoretical basis for understanding the mechanism of spin alignment recently observed in a diradical donor molecule.
|
arxiv:cond-mat/0304431
|
We introduce a new and efficient numerical method for multicriterion optimal control and single-criterion optimal control under integral constraints. The approach is based on extending the state space to include information on a "budget" remaining to satisfy each constraint; the augmented Hamilton-Jacobi-Bellman PDE is then solved numerically. The efficiency of our approach hinges on the causality in that PDE, i.e., the monotonicity of characteristic curves in one of the newly added dimensions. A semi-Lagrangian "marching" method is used to approximate the discontinuous viscosity solution efficiently. We compare this to a recently introduced "weighted sum" based algorithm for the same problem. We illustrate our method using examples from flight path planning and robotic navigation in the presence of friendly and adversarial observers.
|
arxiv:0901.3977
|
Mass-stationarity means that the origin is at a typical location in the mass of a random measure. It is an intrinsic characterisation of Palm versions with respect to stationary random measures. Stationarity is the special case when the random measure is Lebesgue measure. The paper presents constructions of stationary and mass-stationary versions through change of measure and change of origin. Further, the paper considers characterisations of mass-stationarity by distributional invariance under preserving shifts against stationary independent backgrounds.
|
arxiv:1405.7566
|
We demonstrate that the problem of amplitude estimation, a core subroutine used in many quantum algorithms, can be mapped directly to a problem in signal processing called direction-of-arrival (DOA) estimation. The DOA task is to determine the direction of arrival of an incoming wave with the fewest possible measurements. The connection between amplitude estimation and DOA allows us to make use of the vast array of signal processing algorithms to post-process the measurements of the Grover iterator at predefined depths. Using an off-the-shelf DOA algorithm called ESPRIT together with a compressed-sensing based sampling approach, we create a phase-estimation-free, parallel quantum amplitude estimation (QAE) algorithm with a worst-case sequential query complexity of $\sim 4.3/\varepsilon$ and a parallel query complexity of $\sim 0.26/\varepsilon$ at 95% confidence. This performance is statistically equivalent to, and a $16\times$ improvement over, Rall and Fuller [Quantum 7, 937 (2023)], for sequential and parallel query complexity respectively, which to our knowledge is the best published result for amplitude estimation. The approach presented here provides a simple, robust, parallel method for performing QAE, with many possible avenues for improvement borrowing ideas from the wealth of literature in classical signal processing.
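A bare-bones ESPRIT frequency estimate on a noiseless synthetic Grover-depth signal is sketched below; the paper's compressed-sensing sampling and confidence analysis are not included, and the helper names are ours.

```python
import numpy as np

def esprit_freq(x, p=2, L=None):
    """Estimate dominant angular frequencies of a 1-D signal via ESPRIT.
    x: samples of a sum of complex exponentials (a real cosine counts as two);
    p: model order (number of exponentials)."""
    n = len(x)
    L = L or n // 2
    H = np.column_stack([x[i:i + L] for i in range(n - L + 1)])   # Hankel data matrix
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :p]                                                 # signal subspace
    Psi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)        # shift invariance
    return np.angle(np.linalg.eigvals(Psi))

# ideal hit probabilities of the Grover iterator at depths m = 0..63:
# p(m) = sin^2((2m+1)*theta) = 1/2 - (1/2)cos(4*theta*m + 2*theta)
theta_true = 0.3
m = np.arange(64)
probs = np.sin((2 * m + 1) * theta_true) ** 2
omega = esprit_freq(probs - 0.5, p=2)      # remove the known 1/2 offset; two tones at +/-4*theta
theta_est = np.abs(omega).mean() / 4.0
print("true:", theta_true, " estimated:", theta_est, " amplitude:", np.sin(theta_est) ** 2)
```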
|
arxiv:2405.14697
|
Micromobility vehicles are gaining popularity due to their portable nature and their ability to serve short-distance urban commutes better than traditional modes of transportation. Most of these vehicles, offered by various micromobility service providers around the world, are shareable and can be rented (by the minute) by riders, thus eliminating the need to own and maintain a personal vehicle. However, the existing micromobility ecosystem, comprising vehicles, service providers, and their users, can be exploited as an attack surface by malicious entities to compromise its security, safety, and privacy. In this short position paper, we outline potential privacy and security challenges related to a very popular urban micromobility platform, specifically dockless battery-powered e-scooters.
|
arxiv:2001.01387
|
We have discovered two low-ionization broad absorption line quasars in programs to obtain optical spectra for radio-selected quasar candidates from the VLA FIRST survey (Becker, White, & Helfand 1995). Both belong to the extremely rare class of BAL QSOs that exhibit narrow absorption lines from metastable excited levels of Fe II and Fe III. Until now, there was just a single object in this class, 0059-2735 (Hazard et al. 1987). In addition, one of our new objects is the first known radio-loud BAL QSO. The properties of these three unusual objects suggest a trend of increasing radio luminosity with the amount of absorption to the quasar, and they are perhaps transition objects between radio-loud and radio-quiet quasars. The two new objects are from a radio-selected sample comprising fewer than 200 quasars; one is heavily attenuated at optical wavelengths in the observed frame. These objects would be easily overlooked by most optical QSO searches; their abundance in the radio sample suggests that they may be representatives of a largely undetected component of the quasar population, perhaps as numerous as ordinary low-ionization BAL QSOs, which constitute 1-2% of all QSOs.
|
arxiv:astro-ph/9702012
|
reliable estimations of ephemeris errors are fundamental for the follow - up of corot candidates. an equation for the precision of minimum times, originally developed for eclipsing binaries, has been optimized for corot photometry and been used to calculate such errors. it may indicate expected timing precisions for transit events from corot, as well as from kepler. prediction errors for transit events may also be used to calculate probabilities about observing entire or partial transits in any given span of observational coverage, leading to an improved reliability in deductions made from follow - up observations.
|
arxiv:1206.1212
|
strong gravitational lensing is regarded as the most precise technique to measure the mass in the inner region of galaxies or galaxy clusters. in particular, the mass within one einstein radius can be determined with an accuracy of order of a few percent or better, depending on the image configuration. for other radii, however, degeneracies exist between galaxy density profiles, precluding an accurate determination of the enclosed mass. the source position transformation ( spt ), which includes the well - known mass - sheet transformation ( mst ) as a special case, describes this degeneracy of the lensing observables in a more general way. in this paper we explore properties of an spt, removing the mst to leading order, i. e., we consider degeneracies which have not been described before. the deflection field $ \ boldsymbol { \ hat { \ alpha } } ( \ boldsymbol { \ theta } ) $ resulting from an spt is not curl - free in general, and thus not a deflection that can be obtained from a lensing mass distribution. starting from a variational principle, we construct lensing potentials that give rise to a deflection field $ \ boldsymbol { \ tilde { \ alpha } } $, which differs from $ \ boldsymbol { \ hat { \ alpha } } $ by less than an observationally motivated upper limit. the corresponding mass distributions from these ' valid ' spts are studied : their radial profiles are modified relative to the original mass distribution in a significant and non - trivial way, and originally axi - symmetric mass distributions can obtain a finite ellipticity. these results indicate a significant effect of the spt on quantitative analyses of lens systems. we show that the mass inside the einstein radius of the original mass distribution is conserved by the spt ; hence, as is the case for the mst, the spt does not affect the mass determination at the einstein radius. [... ]
|
arxiv:1606.04321
|
while over the last century or more considerable effort has been put into the problem of finding approximate solutions for wave equations in general, and quantum mechanical problems in particular, it appears that as yet relatively little work has been put into the complementary problem of establishing rigorous bounds on the exact solutions. we have in mind either bounds on parametric amplification and the related quantum phenomenon of particle production ( as encoded in the bogoliubov coefficients ), or bounds on transmission and reflection coefficients. modifying and streamlining an approach developed by one of the present authors [ phys. rev. a 59 ( 1999 ) 427 - 438 ], we investigate this question by developing a formal but exact solution for the appropriate second - order linear ode in terms of a time - ordered exponential of 2x2 matrices, then relating the bogoliubov coefficients to certain invariants of this matrix. by bounding the matrix in an appropriate manner, we can thereby bound the bogoliubov coefficients.
|
arxiv:0801.0610
|
in 1975 pippenger and golumbic proved that any graph on $ n $ vertices admits at most $ 2e ( n / k ) ^ k $ induced $ k $ - cycles. this bound is larger by a multiplicative factor of $ 2e $ than the simple lower bound obtained by a blow - up construction. pippenger and golumbic conjectured that the latter lower bound is essentially tight. in the present paper we establish a better upper bound of $ ( 128e / 81 ) \ cdot ( n / k ) ^ k $. this constitutes the first progress towards proving the aforementioned conjecture since it was posed.
|
arxiv:1702.07342
|
lifecycle assessment of wind turbines is essential to improve their design and to optimise maintenance plans for preventing failures during the design life. a critical element of wind turbines is the composite blade, due to uncertain cyclic wind loads with relatively high frequency and amplitude in offshore environments. it is critical to detect the wind fatigue damage evolution in composite blades before they fail catastrophically and destroy the entire wind turbine. this study presents a methodology for analysing the fatigue failure probability of a wind turbine composite blade by using monitoring - based stochastic deterioration modelling. on the basis of 5 - minute mean wind speed measurements, the internal stresses can be accurately obtained from finite element analysis, and failure probabilities are predicted by a stochastic gamma process fatigue damage model over the design service life. a numerical example of a wind turbine composite blade is investigated to show the applicability of the proposed model. the results show that the stochastic fatigue damage model can give reliable results for time - dependent reliability analysis of composite blades of wind turbines.
|
arxiv:2404.10021
|
during inflation primordial quantum fluctuations of the spacetime metric become classical and there is a spontaneous cpt violation by the spin connection coupling terms of the metric with fermions. the energy levels of the left and the right chirality neutrinos are split, which gives rise to a net lepton asymmetry at equilibrium. a net baryon asymmetry of the same magnitude can be generated from this lepton asymmetry either by a gut, $ b - l $ symmetry or by electroweak sphaleron processes which preserve $ b + l $ symmetry. if the amplitude of the primordial tensor perturbations is of the order of $ 10 ^ { - 6 } $ ( as is expected from inflation models ) and the lepton / baryon number violating processes freeze out at the gut era $ t _ d \ sim 10 ^ { 16 } gev $ then a baryon number asymmetry of the correct magnitude $ 10 ^ { - 10 } $ can be generated.
|
arxiv:hep-ph/0204257
|
in recent work, harman and snowden introduced a notion of measure on a fra \ " iss \ ' e class $ \ mathfrak { f } $, and showed how such measures lead to interesting tensor categories. constructing and classifying measures is a difficult problem, and so far only a handful of cases have been worked out. in this paper, we obtain some of the first general results on measures. our main theorem states that if $ \ mathfrak { f } $ is distal ( in the sense of simon ), and there are some bounds on automorphism groups, then $ \ mathfrak { f } $ admits only finitely many measures ; moreover, we give an effective upper bound on their number. for example, if $ \ mathfrak { f } $ is the class of ` ` $ s $ - dimensional permutations ' ' ( finite sets equipped with $ s $ total orders ), we show that the number of measures is bounded above by approximately $ \ exp ( \ exp ( s ^ 2 \ log { s } ) ) $.
|
arxiv:2407.19131
|
future gravitational wave observations are potentially sensitive to new physics corrections to the higgs potential once the first - order electroweak phase transition arises. we study the smeft dimension - six operator effects on the higgs potential, where three types of effects are taken into account : ( i ) smeft tree level effect on $ \ varphi ^ 6 $ operator, ( ii ) smeft tree level effect on the wave function renormalization of the higgs field, and ( iii ) smeft top - quark one - loop level effect. the sensitivity of future gravitational wave observations to these effects is numerically calculated by performing a fisher matrix analysis. we find that the future gravitational wave observations can be sensitive to ( ii ) and ( iii ) once the first - order electroweak phase transition arises from ( i ). the dimension - eight $ \ varphi ^ 8 $ operator effects on the first - order electroweak phase transition are also discussed. the sensitivities of the future gravitational wave observations are also compared with those of future collider experiments.
|
arxiv:2210.11241
|
the textbook proofs of commoner ' s theorem characterizing liveness in free - choice petri nets are given in contexts of technical notions and claims that make the proofs look a bit long. the aim of this note is to give a concise self - contained proof.
|
arxiv:2401.12067
|
a general constructive procedure is presented for analyzing magnetic instabilities in two - dimensional materials, in terms of [ predominantly ] double nesting, and applied to hartree - fock hf + rpa and gutzwiller approximation ga + rpa calculations of the hubbard model. applied to the cuprates, it is found that competing magnetic interactions are present only for hole doping, between half filling and the van hove singularity. while hf + rpa instabilities are present at all dopings ( for sufficiently large hubbard u ), in a gutzwiller approximation they are restricted to a doping range close to the range of relevance for the physical cuprates. the same model would hold for charge instabilities, except that the interaction is more likely to be q - dependent.
|
arxiv:1207.5539
|
building conversational agents that can have natural and knowledge - grounded interactions with humans requires understanding user utterances. entity linking ( el ) is an effective and widely used method for understanding natural language text and connecting it to external knowledge. it is, however, shown that existing el methods developed for annotating documents are suboptimal for conversations, where personal entities ( e. g., " my cars " ) and concepts are essential for understanding user utterances. in this paper, we introduce a collection and a tool for entity linking in conversations. we collect el annotations for 1327 conversational utterances, consisting of links to named entities, concepts, and personal entities. the dataset is used for training our toolkit for conversational entity linking, crel. unlike existing el methods, crel is developed to identify both named entities and concepts. it also utilizes coreference resolution techniques to identify personal entities and references to the explicit entity mentions in the conversations. we compare crel with state - of - the - art techniques and show that it outperforms all existing baselines.
|
arxiv:2206.07836
|
we present observations of a 4 squared degree area toward the gemini cloud obtained using j = 1 - 0 transitions of $ ^ { 12 } $ co, $ ^ { 13 } $ co and c $ ^ { 18 } $ o. no c $ ^ { 18 } $ o emission was detected. this region is composed of 36 core candidates of $ ^ { 13 } $ co. these core candidates have a characteristic diameter of 0. 25 pc, excitation temperatures of 7. 9 k, line width of 0. 54 km s $ ^ { - 1 } $ and a mean mass of 1. 4 m $ _ { \ sun } $. they are likely to be starless core candidates, or transient structures, which probably disperse after $ \ sim $ 10 $ ^ 6 $ yr.
|
arxiv:1506.08034
|
it has been known since the beginning of this century that isomonodromic problems - - - typically the painlev \ ' e transcendents - - - in a suitable asymptotic region look like a kind of ` ` modulation ' ' of an isospectral problem. this connection between isomonodromic and isospectral problems is reconsidered here in the light of recent studies related to the seiberg - witten solutions of $ n = 2 $ supersymmetric gauge theories. a general machinery is illustrated in a typical isomonodromic problem, namely the schlesinger equation, which is reformulated to include a small parameter $ \ epsilon $. in the small - $ \ epsilon $ limit, solutions of this isomonodromic problem are expected to behave as a slowly modulated finite - gap solution of an isospectral problem. the modulation is caused by slow deformations of the spectral curve of the finite - gap solution. a modulation equation of this slow dynamics is derived by a heuristic method. an inverse period map of seiberg - witten type turns out to give general solutions of this modulation equation. this construction of the general solution also reveals the existence of deformations of seiberg - witten type on the same moduli space of spectral curves. a prepotential is also constructed in the same way as the prepotential of the seiberg - witten theory.
|
arxiv:solv-int/9704004
|
mason and skinner recently constructed a chiral infinite tension limit of the ramond - neveu - schwarz superstring which was shown to compute the cachazo - he - yuan formulae for tree - level d = 10 yang - mills amplitudes and the ns - ns sector of tree - level d = 10 supergravity amplitudes. in this letter, their chiral infinite tension limit is generalized to the pure spinor superstring which computes a d = 10 superspace version of the cachazo - he - yuan formulae for tree - level d = 10 super - yang - mills and supergravity amplitudes.
|
arxiv:1311.4156
|
creativity is one of the driving forces of humankind, as it allows us to break current understanding and envision new ideas, which may revolutionize entire fields of knowledge. scientific research offers a challenging environment in which to learn a model for the creative process. in fact, scientific research is a creative act within the formal setting of the scientific method, and this creative act is described in articles. in this paper, we dare to introduce the novel, scientifically and philosophically challenging task of generating abstracts of scientific papers from abstracts of cited papers ( gasp ) as a text - to - text task to investigate scientific creativity. to foster research in this novel, challenging task, we prepared a dataset by using services that solve the problem of copyright and, hence, the dataset is publicly available with its standard split. finally, we experimented with two vanilla summarization systems to start the analysis of the complexity of the gasp task.
|
arxiv:2003.04996
|
spin - polarized scanning tunneling microscopy ( sp - stm ) measures tunnel magnetoresistance ( tmr ) with atomic resolution. while various methods for achieving sp probes have been developed, each is limited with respect to fabrication, performance, and allowed operating conditions. in this study, we present the fabrication and use of sp - stm tips made from commercially available antiferromagnetic $ \ rm { mn _ { 88 } ni _ { 12 } } $ foil. the tips are intrinsically sp, which is attractive for exploring magnetic phenomena in the zero field limit. the tip material is relatively ductile and straightforward to etch. we benchmark the conventional stm and spectroscopic performance of our tips and demonstrate their spin sensitivity by measuring the two - state switching of holmium single atom magnets on mgo / ag ( 100 ).
|
arxiv:1807.00364
|
kn \ " orr has constructed an ideal, in the center of the p - modular group algebra of a finite group g, whose dimension is the number of p - blocks of defect zero in g / q ; here p is a prime and q is a normal p - subgroup of g. we generalize his construction to symmetric algebras.
|
arxiv:2309.15538
|
the paper is devoted to the derivation of a combined system of motion equations for solid and fluid in isotropic tight oil / gas sandstone media through volume averaging theorems ( vat ). based on the features of the media, four physical assumptions are proposed as the foundation for our derivation. more precisely, volume averaging theorems are applied to the micro - scale motion equations for both the solid and the fluid as well as to the stress - strain relations, resulting in a combined system of macro - scale equations for the tight oil / gas sandstone media. it is worth noting that the four assumptions may not be satisfied in the whole region. nevertheless, since the characteristic diameter for applying vat ranges between $ 10 ^ { - 6 } $ meters and dozens of meters, we may split the entire domain into several sub - domains such that the four physical assumptions are satisfied in each sub - domain. by choosing a proper characteristic diameter of an averaging volume, we derive a formula for the fluid average pressure in terms of the divergence of the average displacement from the continuity equation of the fluid. as a result, the motion equations derived in this paper are simpler than the biot equations, and are more suitable for inversion of porous medium parameters. when the fluid is gas and the compressional wave is considered, the derived motion equations can be simplified to the diffusive - viscous wave equation. moreover, the explicit relationship between the coefficients in this equation and medium parameters is very important for gas detection in tight gas sandstone.
|
arxiv:1801.05115
|
we construct stationary ricci - flat inter - universe lorentzian wormhole solutions in all d \ ge 5 dimensions that connect two flat asymptotic spacetimes. such a solution can be viewed as the gravity dual of a string tachyon state whose linear momentum is larger than its tension. we focus our analysis on the d = 5 wormholes which are not traversable for the timelike and null geodesics ; however, we demonstrate that there exist accelerated timelike trajectories that traverse from one asymptotic region to the other. we further study the minimally - coupled scalar wave equation and demonstrate that the quantum tunnelling between two worlds must occur. we also obtain charged wormholes in five - dimensional supergravities. with appropriate choice of parameters, these wormholes connect ads $ _ 3 \ times s ^ 2 $ in one asymptotic region to flat minkowskian spacetime in the other.
|
arxiv:0806.3111
|
emphasises that the state should play as small a role as possible ( if any role ) in the regulation of economic activity between two transacting parties. friedrich hayek and ludwig von mises are the two most prominent representatives of the austrian school. post - keynesian economics concentrates on macroeconomic rigidities and adjustment processes. it is generally associated with the university of cambridge and the work of joan robinson. ecological economics, like environmental economics, studies the interactions between human economies and the ecosystems in which they are embedded, but in contrast to environmental economics takes an oppositional position towards general mainstream economic principles. a major difference between the two subdisciplines is their assumptions about the substitution possibilities between human - made and natural capital. additionally, alternative developments include marxian economics, constitutional economics, institutional economics, evolutionary economics, dependency theory, structuralist economics, world systems theory, econophysics, econodynamics, feminist economics and biophysical economics. feminist economics emphasises the role that gender plays in economies, challenging analyses that render gender invisible or support gender - oppressive economic systems. the goal is to create economic research and policy analysis that is inclusive and gender - aware to encourage gender equality and improve the well - being of marginalised groups. = = methodology = = = = = theoretical research = = = mainstream economic theory relies upon analytical economic models. when creating theories, the objective is to find assumptions which are at least as simple in information requirements, more precise in predictions, and more fruitful in generating additional research than prior theories. while neoclassical economic theory constitutes both the dominant or orthodox theoretical as well as methodological framework, economic theory can also take the form of other schools of thought such as heterodox economic theories. in microeconomics, principal concepts include supply and demand, marginalism, rational choice theory, opportunity cost, budget constraints, utility, and the theory of the firm. early macroeconomic models focused on modelling the relationships between aggregate variables, but as the relationships appeared to change over time macroeconomists, including new keynesians, reformulated their models with microfoundations, in which microeconomic concepts play a major part. sometimes an economic hypothesis is only qualitative, not quantitative. expositions of economic reasoning often use two - dimensional graphs to illustrate theoretical relationships. at a higher level of generality, mathematical economics is the application of mathematical methods to represent theories and analyse problems in economics. paul samuelson ' s treatise foundations of economic analysis ( 1947 ) exemplifies the method, particularly as to
|
https://en.wikipedia.org/wiki/Economics
|
it is shown that a fixed measurement setting, e. g., a measurement in the computational basis, can detect all entangled states by preparing multipartite quantum states, called network states. we present network states for both cases to construct decomposable entanglement witnesses ( ews ) equivalent to the partial transpose criteria and also non - decomposable ews that detect undistillable entangled states beyond the partial transpose criteria. entanglement detection by state preparation can be extended to multipartite states such as graph states, a resource for measurement - based quantum computing. our results readily apply to a realistic scenario, for instance, an array of superconducting qubits, neutral atoms, or photons, in which the preparation of a multipartite state and a fixed measurement are experimentally feasible.
|
arxiv:2303.16368
|
self - supervised learning ( ssl ) is a powerful tool that allows learning of underlying representations from unlabeled data. transformer based models such as wav2vec 2. 0 and hubert are leading the field in the speech domain. generally these models are fine - tuned on a small amount of labeled data for a downstream task such as automatic speech recognition ( asr ). this involves re - training the majority of the model for each task. adapters are small lightweight modules which are commonly used in natural language processing ( nlp ) to adapt pre - trained models to new tasks. in this paper we propose applying adapters to wav2vec 2. 0 to reduce the number of parameters required for downstream asr tasks, and increase scalability of the model to multiple tasks or languages. using adapters we can perform asr while training fewer than 10 % of parameters per task compared to full fine - tuning with little degradation of performance. ablations show that applying adapters into just the top few layers of the pre - trained network gives similar performance to full transfer, supporting the theory that higher pre - trained layers encode more phonemic information, and further optimizing efficiency.
|
arxiv:2202.03218
|
statistical depth, a useful tool to measure the center - outward rank of multivariate and functional data, is still under - explored in temporal point processes. recent studies on point process depth proposed a weighted product of two terms - one indicates the depth of the cardinality of the process, and the other characterizes the conditional depth of the temporal events given the cardinality. the second term is of great challenge because of the apparent nonlinear structure of event times, and so far only basic parametric representations such as gaussian and dirichlet densities were adopted in the definitions. however, these simplified forms ignore the underlying distribution of the process events, which makes the methods difficult to interpret and to apply to complicated patterns. to deal with these problems, we in this paper propose a distribution - based approach to the conditional depth via the well - known isometric log - ratio ( ilr ) transformation on the inter - event times. the new depth, called the ilr depth, is at first defined for homogeneous poisson process by using the density function on the transformed space. the definition is then extended to any general point process via a time - rescaling transformation. we illustrate the ilr depth using simulations of poisson and non - poisson processes and demonstrate its superiority over previous methods. we also thoroughly examine its mathematical properties and asymptotics in large samples. finally, we apply the ilr depth in a real dataset and the result clearly shows the effectiveness of the new method.
|
arxiv:2203.04454
|
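as a brief illustration of the ilr construction sketched in the point - process depth entry above, the snippet below maps event times on [ 0, t ] to ilr coordinates of the normalized inter - event gaps. this is a minimal sketch under stated assumptions : the helmert - style basis and the inclusion of the final gap to t are illustrative choices, not necessarily those made in the paper.

import numpy as np

def ilr_of_inter_event_times(event_times, t_end):
    # sort events on [0, t_end] and form the inter-event gaps, including the final gap to t_end
    times = np.sort(np.asarray(event_times, dtype=float))
    gaps = np.diff(np.concatenate(([0.0], times, [t_end])))
    comp = gaps / gaps.sum()  # composition on the unit simplex

    d = comp.size
    # helmert-style orthonormal ilr basis (rows are contrasts orthogonal to the all-ones vector)
    basis = np.zeros((d - 1, d))
    for i in range(1, d):
        basis[i - 1, :i] = 1.0 / i
        basis[i - 1, i] = -1.0
        basis[i - 1] *= np.sqrt(i / (i + 1.0))

    clr = np.log(comp) - np.log(comp).mean()  # centered log-ratio
    return basis @ clr  # ilr coordinates in r^(d-1)

# hypothetical example: three events observed on [0, 10]
print(ilr_of_inter_event_times([1.2, 4.0, 7.5], t_end=10.0))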
this thesis is devoted to the study of quantum mechanical effects that arise in systems of reduced dimensionality. specifically, we investigate coherence and correlation effects in quantum transport models. in the first part, we present a theory of markovian and non - markovian current correlations in nanoscopic conductors. the theory is applied to obtain the spectrum of quantum noise and high - order current correlations at finite frequencies in quantum - dot systems. one of the main conclusions is that only the non - markovian approach contains the physics of vacuum fluctuations. in the second part, we study the coupling of superconducting qubits to optical atomic systems and to cavity resonators. we propose a hybrid quantum system consisting of a flux qubit coupled to nv centers in diamond. we also demonstrate the existence of the so - called bloch - siegert shift in the ultra - strong coupling regime between a flux qubit and a lc resonator. throughout the thesis, we make special emphasis on the study of decoherence effects produced by the distinct dissipative baths to which the various types of qubits presented in this thesis are inevitably coupled.
|
arxiv:1202.3161
|
observations of the ly - alpha forest at z ~ 3 reveal an average metallicity z ~ 0. 01 z _ solar. the high - redshift supernovae that polluted the igm also accelerated relativistic electrons. since the energy density of the cmb scales as ( 1 + z ) ^ 4, at high redshift these electrons cool via inverse compton scattering. thus, the first star clusters emit x - rays. unlike stellar uv ionizing photons, these x - rays can escape easily from their host galaxies. this has a number of important physical consequences : ( i ) due to their large mean free path, these x - rays can quickly establish a universal ionizing background and partially reionize the universe in a gradual, homogeneous fashion. if x - rays formed the dominant ionizing background, the universe would have more closely resembled a single - phase medium, rather than a two - phase medium. ( ii ) x - rays can reheat the universe to higher temperatures than possible with uv radiation. ( iii ) x - rays counter the tendency of uv radiation to photo - dissociate h2, an important coolant in the early universe, by promoting gas phase h2 formation. the x - ray production efficiency is calibrated to local observations of starburst galaxies, which imply that ~ 10 % of the supernova energy is converted to x - rays. while direct detection of sources in x - ray emission is difficult, the presence of relativistic electrons at high redshift and thus a minimal level of x - ray emission may be inferred by synchrotron emission observations with the square kilometer array. these sources may constitute a significant fraction of the unresolved hard x - ray background, and can account for both the shape and amplitude of the gamma - ray background. this paper discusses the existence and observability of high - redshift x - ray sources, while a companion paper models the detailed reionization physics and chemistry.
|
arxiv:astro-ph/0005262
|
predicting the number of defects in a project is critical for project test managers to allocate budget, resources, and schedule for testing, support and maintenance efforts. software defect prediction models predict the number of defects in given projects after training the model with historical defect related information. the majority of defect prediction studies focused on predicting defect - prone modules from method - and class - level static information, whereas this study predicts defects from project - level information based on a cross - company project dataset. this study utilizes software sizing metrics, effort metrics, and defect density information, and focuses on developing defect prediction models that apply various machine learning algorithms. one notable issue in existing defect prediction studies is the lack of transparency in the developed models. consequently, the explainability of the developed model has been demonstrated using the state - of - the - art post - hoc model - agnostic method called shapley additive explanations ( shap ). finally, important features for predicting defects from cross - company project information were identified.
|
arxiv:2306.08655
|
we discuss weber ' s formula which gives the quotient of two thetanullwerte for a plane smooth quartic in terms of the bitangents. in particular, we show how it can easily be derived from the riemann - jacobi formula.
|
arxiv:1503.01012
|
formation. the use of thiolated polymers ( thiomers ) as scaffold material for tissue engineering was initially introduced at the 4th central european symposium on pharmaceutical technology in vienna in 2001. as thiomers are biocompatible, exhibit cellular mimicking properties and efficiently support proliferation and differentiation of various cell types, they are extensively used as scaffolds for tissue engineering. furthermore, thiomers such as thiolated hyaluronic acid and thiolated chitosan were shown to exhibit wound healing properties and are the subject of numerous clinical trials. additionally, a fragment of an extracellular matrix protein, such as the rgd peptide, can be coupled to a non - bioactive material to promote cell attachment. another form of scaffold is decellularized tissue. this is a process where chemicals are used to extract cells from tissues, leaving just the extracellular matrix. this has the benefit of a fully formed matrix specific to the desired tissue type. however, the decellularized scaffold may present immune problems with future introduced cells. = = = synthesis = = = a number of different methods have been described in the literature for preparing porous structures to be employed as tissue engineering scaffolds. each of these techniques presents its own advantages, but none are free of drawbacks. = = = = nanofiber self - assembly = = = = molecular self - assembly is one of the few methods for creating biomaterials with properties similar in scale and chemistry to that of the natural in vivo extracellular matrix ( ecm ), a crucial step toward tissue engineering of complex tissues. moreover, these hydrogel scaffolds have shown superiority in in vivo toxicology and biocompatibility compared to traditional macro - scaffolds and animal - derived materials. = = = = textile technologies = = = = these techniques include all the approaches that have been successfully employed for the preparation of non - woven meshes of different polymers. in particular, non - woven polyglycolide structures have been tested for tissue engineering applications : such fibrous structures have been found useful to grow different types of cells. the principal drawbacks are related to the difficulties in obtaining high porosity and regular pore size. = = = = solvent casting and particulate leaching = = = = solvent casting and particulate leaching ( scpl ) allows for the preparation of structures with regular porosity, but with limited thickness. first, the polymer is dissolved into a suitable organic solvent ( e. g
|
https://en.wikipedia.org/wiki/Tissue_engineering
|
single - atom catalysts ( sacs ) with metal - nitrogen - carbon ( m - n - c ) structures are widely recognized as promising candidates in oxygen reduction reactions ( orr ). according to the classical sabatier principle, optimal 3d metal catalysts, such as fe / co - n - c, achieve superior catalytic performance due to the moderate binding strength. however, the substantial orr activity demonstrated by weakly binding m - n - c catalysts such as nicu - n - c challenges current understandings, emphasizing the need to explore new underlying mechanisms. in this work, we integrated a ph - field coupled microkinetic model with detailed experimental electron state analyses to verify a novel key step in the orr reaction pathway of weak - binding sacs - the oxygen adsorption at the metal - n bridge site. this step significantly altered the adsorption scaling relations, electric field responses, and solvation effects, further impacting the key kinetic reaction barrier from hoo * to o * and ph - dependent performance. synchrotron spectra analysis further provides evidence for the new weak - binding m - n - c model, showing an increase in electron density on the anti - bonding pi orbitals of n atoms in weak - binding m - n - c catalysts and confirming the presence of n - o bonds. these findings redefine the understanding of weak - binding m - n - c catalyst behavior, opening up new perspectives for their application in clean energy.
|
arxiv:2402.05405
|
we consider ideals involving the maximal minors of a polynomial matrix. for example, those arising in the computation of the critical values of a polynomial restricted to a variety for polynomial optimisation. gr \ " obner bases are a classical tool for solving polynomial systems. for practical computations, this consists of two stages. first, a gr \ " obner basis is computed with respect to a drl ( degree reverse lexicographic ) ordering. then, a change of ordering algorithm, such as \ textsf { sparse - fglm }, designed by faug \ ` ere and mou, is used to find a gr \ " obner basis of the same ideal but with respect to a lexicographic ordering. the complexity of this latter step, in terms of arithmetic operations, is $ o ( md ^ 2 ) $, where $ d $ is the degree of the ideal and $ m $ is the number of non - trivial columns of a certain $ d \ times d $ matrix. while asymptotic estimates are known for $ m $ for generic polynomial systems, thus far, the complexity of \ textsf { sparse - fglm } was unknown for determinantal systems. by assuming fr \ " oberg ' s conjecture we expand the work of moreno - soc \ ' ias by detailing the structure of the drl staircase in the determinantal setting. then we study the asymptotics of the quantity $ m $ by relating it to the coefficients of these hilbert series. consequently, we arrive at a new bound on the complexity of the \ textsf { sparse - fglm } algorithm for generic determinantal systems and for generic critical point systems. we consider the ideal in the polynomial ring $ \ mathbb { k } [ x _ 1, \ dots, x _ n ] $, where $ \ mathbb { k } $ is some infinite field, generated by $ p $ generic polynomials of degree $ d $ and the maximal minors of a $ p \ times ( n - 1 ) $ polynomial matrix with generic entries of degree $ d - 1 $. then for the case $ d = 2 $ and for $ n \ gg p $ we give an exact formula for $ m $ in terms of $ n $ and $ p $. moreover, for $ d \ geq 3 $, we give an asymptotic formula, as $ n \ to \ infty $, for $ m $ in terms of $ n, p $ and $
|
arxiv:2203.10021
|
we describe the quantum phase transition in the $ n $ - state chiral clock model in spatial dimension $ d = 1 $. with couplings chosen to preserve time - reversal and spatial inversion symmetries, such a model is in the universality class of recent experimental studies of the ordering of pumped rydberg states in a one - dimensional chain of trapped ultracold alkali atoms. for such couplings and $ n = 3 $, the clock model is expected to have a direct phase transition from a gapped phase with a broken global $ \ mathbb { z } _ n $ symmetry, to a gapped phase with the $ \ mathbb { z } _ n $ symmetry restored. the transition has dynamical critical exponent $ z \ neq 1 $, and so cannot be described by a relativistic quantum field theory. we use a lattice duality transformation to map the transition onto that of a bose gas in $ d = 1 $, involving the onset of a single boson condensate in the background of a higher - dimensional $ n $ - boson condensate. we present a renormalization group analysis of the strongly coupled field theory for the bose gas transition in an expansion in $ 2 - d $, with $ 4 - n $ chosen to be of order $ 2 - d $. at two - loop order, we find a regime of parameters with a renormalization group fixed point which can describe a direct phase transition. we also present numerical density - matrix renormalization group studies of lattice chiral clock and bose gas models for $ n = 3 $, finding good evidence for a direct phase transition, and obtain estimates for $ z $ and the correlation length exponent $ \ nu $.
|
arxiv:1808.07056
|
we continue a previous paper to show that mel ' nikov ' s first order formula for part of the separatrix splitting of a pendulum under fast quasi periodic forcing holds, in special examples, as an asymptotic formula in the forcing rapidity.
|
arxiv:chao-dyn/9804043
|
we present a new radiative transfer method ( sph - m1rt ) that is coupled dynamically with smoothed particle hydrodynamics ( sph ). we implement it in the ( task - based parallel ) swift galaxy simulation code but it can be straightforwardly implemented in other sph codes. our moment - based method simultaneously solves the radiation energy and flux equations in sph, making it adaptive in space and time. we modify the m1 closure relation to stabilize radiation fronts in the optically thin limit. we also introduce anisotropic artificial viscosity and high - order artificial diffusion schemes, which allow the code to handle radiation transport accurately in both the optically thin and optically thick regimes. non - equilibrium thermo - chemistry is solved using a semi - implicit sub - cycling technique. the computational cost of our method is independent of the number of sources and can be lowered further by using the reduced speed of light approximation. we demonstrate the robustness of our method by applying it to a set of standard tests from the cosmological radiative transfer comparison project of iliev et al. the sph - m1rt scheme is well - suited for modelling situations in which numerous sources emit ionising radiation, such as cosmological simulations of galaxy formation or simulations of the interstellar medium.
|
arxiv:2102.08404
|
the nasa double asteroid redirection test ( dart ) mission performed a kinetic impact on asteroid dimorphos, the satellite of the binary asteroid ( 65803 ) didymos, at 23 : 14 utc on september 26, 2022 as a planetary defense test. dart was the first hypervelocity impact experiment on an asteroid at size and velocity scales relevant to planetary defense, intended to validate kinetic impact as a means of asteroid deflection. here we report the first determination of the momentum transferred to an asteroid by kinetic impact. based on the change in the binary orbit period, we find an instantaneous reduction in dimorphos ' s along - track orbital velocity component of 2. 70 + / - 0. 10 mm / s, indicating enhanced momentum transfer due to recoil from ejecta streams produced by the impact. for a dimorphos bulk density range of 1, 500 to 3, 300 kg / m $ ^ 3 $, we find that the expected value of the momentum enhancement factor, $ \ beta $, ranges between 2. 2 and 4. 9, depending on the mass of dimorphos. if dimorphos and didymos are assumed to have equal densities of 2, 400 kg / m $ ^ 3 $, $ \ beta $ = 3. 61 + 0. 19 / - 0. 25 ( 1 $ \ sigma $ ). these $ \ beta $ values indicate that significantly more momentum was transferred to dimorphos from the escaping impact ejecta than was incident with dart. therefore, the dart kinetic impact was highly effective in deflecting the asteroid dimorphos.
|
arxiv:2303.03464
|
the real sphere $ s ^ { n - 1 } _ \ mathbb r $ appears as increasing union, over $ d \ in \ { 1,..., n \ } $, of its " polygonal " versions $ s ^ { n - 1, d - 1 } _ \ mathbb r = \ { x \ in s ^ { n - 1 } _ \ mathbb r | x _ { i _ 0 }... x _ { i _ d } = 0, \ forall i _ 0,..., i _ d \ { \ rm distinct } \ } $. motivated by general classification questions for the undeformed noncommutative spheres, smooth or not, we study here the quantum isometries of $ s ^ { n - 1, d - 1 } _ \ mathbb r $, and of its various noncommutative analogues, obtained via liberation and twisting. we discuss as well a complex version of these results, with $ s ^ { n - 1 } _ \ mathbb r $ replaced by the complex sphere $ s ^ { n - 1 } _ \ mathbb c $.
|
arxiv:1501.05229
|
a bose - einstein condensate is dispersively coupled to a single mode of an ultra - high finesse optical cavity. the system is governed by strong interactions between the atomic motion and the light field even at the level of single quanta. while coherently pumping the cavity mode the condensate is subject to the cavity optical lattice potential whose depth depends nonlinearly on the atomic density distribution. we observe bistability already below the single photon level and strong back - action dynamics which tunes the system periodically out of resonance.
|
arxiv:0811.3967
|
analysis of the charged multiplicity in proton - proton inelastic interactions at the lhc energies in the setting of the dual parton model is presented. data from the cms experiment and the data simulated at different energies in various pseudo - rapidity windows using the event generator pythia8 are analysed and compared with the calculations from the model. each inelastic scattering is assumed to follow the poisson distribution. the theoretical koba - nielsen - olesen ( kno ) scaling of the multiplicity distributions is studied and compared with previously published experimental results at $ \ sqrt { s } $ = 0. 9, 2. 36, 7 tev. predictions from the model for the kno distributions at $ \ sqrt { s } $ = 13, 13. 6 tev and for the future lhc energy of 27 tev are computed and compared with the simulated data.
|
arxiv:2208.04520
|
powerful, highly collimated jets, surrounded by bipolar molecular outflows, are commonly observed near young stellar objects ( ysos ). in the usual theoretical picture of star formation, a jet is ejected from a magnetized accretion disk, with a molecular outflow being driven either by the jet or by a wider wind coming from the disk. here, we propose an alternative global model for the flows surrounding ysos. in addition to a central accretion - ejection engine driving the jet, the molecular outflow is powered by the infalling matter and follows a circulation pattern around the central object without necessarily being entrained by a jet. it is shown that the model produces a heated pressure - driven outflow with magneto - centrifugal acceleration and collimation. we report solutions for the three different parts of this self - similar model, i. e. the jet, the infalling envelope and the circulating matter that eventually forms the molecular outflow. this new picture of the accretion / outflow phase provides a possible explanation for several observed properties of yso outflows. the most relevant ones are the presence of high mass molecular outflows around massive protostars, and a realistic fraction ( typically 0. 1 ) of the accretion flow that goes into the jet.
|
arxiv:astro-ph/0203090
|
we used low resolution spectroscopy from vlt / fors1, and high resolution spectra from vlt / uves, to estimate the physical conditions in the ors, using nebular analysis for emission lines such as [ o ii ], [ o iii ], [ n ii ], and [ s ii ]. we also measured the velocity at two positions of the ors to test a geometrical model for the rings. additionally, we used data from the hst science archives to check the evolution of the ors of sn 1987a for a period that covers almost 11 years. we measured the flux in four different regions, two for each outer ring. we chose regions away from the two bright foreground stars, and as far as possible from the er, and we created light curves for the emission lines of [ o iii ], h { \ alpha }, and [ n ii ]. the profiles of the lightcurves display a declining behaviour, which is consistent with the initial supernova flash powering of the ors. the electron density of the emitting gas in the ors, as estimated through nebular analysis from forbidden [ o ii ] and [ s ii ] lines, is < = 3 * 10 ^ 3 cm ^ - 3, has not changed over the last ~ 15 years, and also the [ n ii ] temperature remains fairly constant at ~ 1. 2 * 10 ^ 4 k. we find no obvious difference in density and temperature for the two ors. the highest density, as measured from the decay of h { \ alpha }, could, however, be ~ 5 * 10 ^ 3 cm ^ - 3, and since the decay is somewhat faster for the southern outer ring than for the northern, the highest density in the ors may be found in the southern outer ring. for an assumed distance of 50 kpc to the supernova, the distance between the supernova and the closest parts of the ors could be as short as ~ 1. 7 * 10 ^ 18 cm. interaction between the supernova ejecta and the outer rings could therefore start in less than ~ 20 years. we do not expect the ors to show the same optical display as the equatorial ring when this happens.
|
arxiv:1008.3387
|
let r be the free algebra on x and y modulo the relations x ^ 5 = yxy and y ^ 2 = xyx endowed with the grading deg x = 1 and deg y = 2. let b _ 3 denote the blow up of the projective plane at three non - collinear points. the main result in this paper is that the category of quasi - coherent sheaves on b _ 3 is equivalent to the quotient of the category of graded r - modules modulo the full subcategory of modules m such that for each m in m, $ ( x, y ) ^ nm = 0 $ for n sufficiently large. this is proved by showing that r is a twisted homogeneous coordinate ring ( in the sense of artin and van den bergh ) for b _ 3. this reduces almost all representation - theoretic questions about r to algebraic geometric questions about the del pezzo surface b _ 3. for example, the generic simple r - module has dimension six. furthermore, the main result, combined with results of artin, tate, and van den bergh, implies that r is a noetherian domain of global dimension three, and has other good homological properties.
|
arxiv:0906.2481
|
incremental improvements in accuracy of convolutional neural networks are usually achieved through use of deeper and more complex models trained on larger datasets. however, enlarging dataset and models increases the computation and storage costs and cannot be done indefinitely. in this work, we seek to improve the identification and verification accuracy of a text - independent speaker recognition system without use of extra data or deeper and more complex models by augmenting the training and testing data, finding the optimal dimensionality of embedding space and use of more discriminative loss functions. results of experiments on voxceleb dataset suggest that : ( i ) simple repetition and random time - reversion of utterances can reduce prediction errors by up to 18 %. ( ii ) lower dimensional embeddings are more suitable for verification. ( iii ) use of proposed logistic margin loss function leads to unified embeddings with state - of - the - art identification and competitive verification accuracies.
|
arxiv:1807.08312
|
we present estimates of brightness temperature for 5 galactic masers in star - forming regions detected at space baselines. very compact features with angular sizes of about 23 - 60 micro arcsec were detected in these regions with corresponding linear sizes of about 4 - 10 million km. brightness temperatures range from 1e + 14 up to 1e + 16 k.
|
arxiv:1802.05120
|
a set of universal relations between various properties of any few - body or many - body system consisting of fermions with two spin states and a large but finite scattering length have been derived by shina tan. we derive generalizations of the tan relations for a two - channel model for fermions near a feshbach resonance that includes a molecular state whose detuning energy controls the scattering length. we use quantum field theory methods, including renormalization and the operator product expansion, to derive these relations. they reduce to the tan relations as the scattering length is made increasingly large.
|
arxiv:0806.2277
|
after nearly a century of specialized applications in optics, remote sensing, and acoustics, the near - field ( nf ) electromagnetic propagation zone is experiencing a resurgence in research interest. this renewed attention is fueled by the emergence of promising applications in various fields such as wireless communications, holography, medical imaging, and quantum - inspired systems. signal processing within nf sensing and wireless communications environments entails addressing issues related to extended scatterers, range - dependent beampatterns, spherical wavefronts, mutual coupling effects, and the presence of both reactive and radiative fields. recent investigations have focused on these aspects in the context of extremely large arrays and wide bandwidths, giving rise to novel challenges in channel estimation, beamforming, beam training, sensing, and localization. while nf optics has a longstanding history, advancements in nf phase retrieval techniques and their applications have lately garnered significant research attention. similarly, utilizing nf localization with acoustic arrays represents a contemporary extension of established principles in nf acoustic array signal processing. this article aims to provide an overview of state - of - the - art signal processing techniques within the nf domain, offering a comprehensive perspective on recent advances in diverse applications.
|
arxiv:2408.11434
|
in this paper, we present a physically informed neural network representation of the effective interactions associated with coupled - cluster downfolding models to describe chemical systems and processes. the neural network representation not only allows us to evaluate the effective interactions efficiently for various geometrical configurations of chemical systems corresponding to various levels of complexity of the underlying wave functions, but also reveals that the bare and effective interactions are related by a tangent function of some latent variables. we refer to this characterization of the effective interaction as a tangent model. we discuss the connection between this tangent model for the effective interaction with the previously developed theoretical analysis that examines the difference between the bare and effective hamiltonians in the corresponding active spaces.
|
arxiv:2501.15792
|
the aim of cosmological simulations is to reproduce the properties of the observed universe, serving as tools to test structure and galaxy formation models. constrained simulations of our local cosmological region up to a few hundred mpc / h, the local universe, are designed to reproduce the actual cosmic web of structures as observed. a question that often arises is how to judge the quality of constrained simulations against the observations of the local universe. here we introduce the local universe model ( lum ), a new methodology, whereby many constrained simulations can be judged and the ' ' best ' ' initial conditions can be identified. by characterising the local universe as a set of rich clusters, the model identifies haloes that serve as simulated counterparts to the observed clusters. their merit is determined against a null hypothesis, the probability that such a counterpart could be identified in a random, unconstrained simulation. this model is applied to 100 constrained simulations using the cosmicflows - 3 data. cluster counterparts are found for all constrained simulations, their distribution of separation from the true observed cluster position and their mass distribution are investigated. lastly, the ' ' best ' ' constrained simulation is selected using the lum and discussed in more detail.
|
arxiv:2305.05694
|
instruction tuning has emerged as a critical paradigm for improving the capabilities and alignment of large language models ( llms ). however, existing iterative model - aware data selection methods incur significant computational overhead, as they rely on repeatedly performing full - dataset model inference to estimate sample utility for subsequent training iterations, creating a fundamental efficiency bottleneck. in this paper, we propose lead, an efficient iterative data selection framework that accurately estimates sample utility entirely within the standard training loop, eliminating the need for costly additional model inference. at its core, lead introduces instance - level dynamic uncertainty ( idu ), a theoretically grounded utility function combining instantaneous training loss, gradient - based approximation of loss changes, and exponential smoothing of historical loss signals. to further scale efficiently to large datasets, lead employs a two - stage, coarse - to - fine selection strategy, adaptively prioritizing informative clusters through a multi - armed bandit mechanism, followed by precise fine - grained selection of high - utility samples using idu. extensive experiments across four diverse benchmarks show that lead significantly outperforms state - of - the - art methods, improving average model performance by 6. 1 % - 10. 8 % while using only 2. 5 % of the training data and reducing overall training time by 5 - 10x.
|
arxiv:2505.07437
|
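the lead entry above describes an instance - level dynamic uncertainty ( idu ) score built from the current loss, a gradient - style estimate of the loss change, and an exponentially smoothed loss history. the toy tracker below is a hedged sketch of that idea only ; the weights, the finite - difference proxy for the loss - change term, and the additive combination are assumptions rather than the paper's definition.

from dataclasses import dataclass
from typing import Optional

@dataclass
class IduTracker:
    # illustrative per-sample utility tracker; coefficients are arbitrary choices
    alpha: float = 0.3                    # smoothing factor for the loss history
    beta: float = 0.5                     # weight on the estimated loss change
    smoothed: Optional[float] = None      # exponentially smoothed loss
    prev_loss: Optional[float] = None     # last observed loss

    def update(self, loss: float) -> float:
        # finite-difference proxy for the gradient-based loss-change term
        delta = 0.0 if self.prev_loss is None else loss - self.prev_loss
        self.prev_loss = loss
        # exponential smoothing of the historical loss signal
        self.smoothed = loss if self.smoothed is None else self.alpha * loss + (1 - self.alpha) * self.smoothed
        # higher utility for samples that are both hard now and still changing
        return self.smoothed + self.beta * abs(delta)

# toy usage: two samples observed over three training iterations
trackers = [IduTracker(), IduTracker()]
for losses in ([2.0, 1.2], [1.5, 1.1], [1.4, 0.4]):
    utilities = [trk.update(l) for trk, l in zip(trackers, losses)]
print(utilities)  # rank samples by utility when selecting data for the next iteration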
dispersal is a well recognized driver of ecological and evolutionary dynamics, and simultaneously an evolving trait. dispersal evolution has traditionally been studied in single - species metapopulations so that it remains unclear how dispersal evolves in spatially structured communities and food webs. since most natural systems are biodiverse and spatially structured, and thus affected by dispersal and its evolution, this knowledge gap should be bridged. here we discuss whether knowledge established in single - species systems holds in spatially structured multispecies systems and highlight generally valid and fundamental principles. most biotic interactions form the ecological theatre for the evolutionary dispersal play because interactions mediate patterns of fitness expectations in space and time. while this allows for a simple transposition of certain known drivers to a multispecies context, other drivers may require more complex transpositions, or might not be transferred. we discuss an important quantitative modulator of dispersal evolution in the increased trait dimensionality of biodiverse meta - systems and an additional driver in co - dispersal. we speculate that scale and selection pressure mismatches due to co - dispersal, together with increased trait dimensionality may lead to slower and more " diffuse " evolution in biodiverse meta - systems. open questions and potential consequences in both ecological and evolutionary terms call for more investigation.
|
arxiv:2312.00166
|
the microhertz frequency band of gravitational waves probes the merger of supermassive black holes as well as many other gravitational wave phenomena. however, space - interferometry methods that use test masses would require substantial development of test - mass isolation systems to detect anticipated astrophysical events. we propose an approach that avoids on - board inertial test masses by situating spacecraft in the low - acceleration environment of the outer solar system. we show that for earth - spacecraft and inter - spacecraft distances of $ \ gtrsim 10 \, $ au, the accelerations on the spacecraft would be sufficiently small to potentially achieve gravitational wave sensitivities determined by stochastic gravitational wave backgrounds. we further argue, for arm lengths of $ 10 - 30 \, $ au and $ \ sim 10 \, $ watt transmissions, that stable phase locks could be achieved with $ 20 \, $ cm mirrors or $ 5 \, $ m radio dishes, although for the laser case this would require lower laser frequency noise relative to the lisa lasers. we discuss designs that send both laser beams and radio waves between the spacecraft, finding that despite the $ \ sim10 ^ 4 \ times $ longer wavelengths, even a design with radio transmissions could reach stochastic background - limited sensitivities at $ \ lesssim 0. 3 \ times 10 ^ { - 4 } $ hz. operating in the radio significantly reduces many spacecraft design tolerances. our baseline concept requires two arms to do interferometry. however, if one spacecraft carries a clock with allan deviations at $ 10 ^ 4 $ seconds of $ 10 ^ { - 17 } $, a comparable sensitivity could be achieved with a single arm. finally, we discuss the feasibility of achieving similar gravitational wave sensitivities in a ` doppler tracking ' configuration where the single arm is anchored to earth.
|
arxiv:2411.15072
|
this paper is motivated by relations between association and independence of random variables. it is well - known that for real random variables independence implies association in the sense of esary, proschan and walkup, while for random vectors this simple relationship breaks. we modify the notion of association in such a way that any vector - valued process with independent increments has also associated increments in the new sense - - - association between blocks. the new notion is quite natural and admits nice characterization for some classes of processes. in particular, using the covariance interpolation formula due to houdr \ ' { e }, p \ ' { e } rez - abreu and surgailis, we show that within the class of multidimensional gaussian processes block - association of increments is equivalent to supermodularity ( in time ) of the covariance functions. we define also corresponding versions of weak association, positive association and negative association. it turns out that the central limit theorem for weakly associated random vectors due to burton, dabrowski and dehling remains valid, if the weak association is relaxed to the weak association between blocks.
|
arxiv:1009.3743
|
in part iii of this study, we apply the price dynamical model with big buyers and big sellers developed in part i of this paper to the daily closing prices of the top 20 banking and real estate stocks listed in the hong kong stock exchange. the basic idea is to estimate the strength parameters of the big buyers and the big sellers in the model and make buy / sell decisions based on these parameter estimates. we propose two trading strategies : ( i ) follow - the - big - buyer which buys when big buyer begins to appear and there is no sign of big sellers, holds the stock as long as the big buyer is still there, and sells the stock once the big buyer disappears ; and ( ii ) ride - the - mood which buys as soon as the big buyer strength begins to surpass the big seller strength, and sells the stock once the opposite happens. based on the testing over 245 two - year intervals uniformly distributed across the seven years from 03 - july - 2007 to 02 - july - 2014 which includes a variety of scenarios, the net profits would increase 67 % or 120 % on average if an investor switched from the benchmark buy - and - hold strategy to the follow - the - big - buyer or ride - the - mood strategies during this period, respectively.
|
arxiv:1401.1892
|
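the entry above spells out the follow - the - big - buyer rule in words ; the sketch below encodes that decision logic for a hypothetical series of estimated buyer and seller strengths. the threshold and the strength series are made - up inputs, and the paper's estimator for the strength parameters is not reproduced here.

def follow_the_big_buyer(buyer_strength, seller_strength, threshold=0.0):
    # buy when a big buyer appears with no big-seller pressure, hold while the buyer persists, sell when it disappears
    holding = False
    signals = []
    for b, s in zip(buyer_strength, seller_strength):
        if not holding and b > threshold and s <= threshold:
            signals.append("buy")
            holding = True
        elif holding and b <= threshold:
            signals.append("sell")
            holding = False
        else:
            signals.append("hold")
    return signals

# hypothetical daily strength estimates
print(follow_the_big_buyer([0.0, 0.4, 0.6, 0.2, 0.0], [0.0, 0.0, 0.1, 0.0, 0.3]))
# -> ['hold', 'buy', 'hold', 'hold', 'sell']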