text (string, lengths 1 to 3.65k) | source (string, lengths 15 to 79)
---|---
gravitational waves provide a window to probe general relativity (gr) under extreme conditions. the recent observations of gw190412 and gw190814 are unique high-mass-ratio mergers that enable the observation of gravitational-wave harmonics beyond the dominant $(\ell, m) = (2, 2)$ mode. using these events, we search for physics beyond gr by allowing the source parameters measured from the sub-dominant harmonics to deviate from that of the dominant mode. all results are consistent with gr. we constrain the chirp mass as measured by the $(\ell, m) = (3, 3)$ mode to be within $0_{-3}^{+5}\%$ of the dominant mode when we allow both the masses and spins of the sub-dominant modes to deviate. if we allow only the mass parameters to deviate, we constrain the chirp mass of the $(3, 3)$ mode to be within $\pm 1\%$ of the expected value from gr.
|
arxiv:2008.02248
|
we will gain unprecedented, high-accuracy insights into the internal structure of the atomic nucleus thanks to lepton-hadron collision studies in the coming years at the electron-ion collider (eic) in the united states. a good control of radiative corrections is necessary for the eic to be fully exploited and to extract valuable information from various measurements. we present our extension of photoproduction at fixed order in madgraph5_amc@nlo, a widely used framework for (next-to-)leading order calculations at the large hadron collider (lhc). it applies to electron-hadron collisions, in which the quasi-real photon comes from an electron, as well as to proton-nucleus and nucleus-nucleus collisions.
|
arxiv:2401.14741
|
using some basic properties of the gamma function, we evaluate a simple class of infinite products involving dirichlet characters as a finite product of gamma functions and, in the case of odd characters, as a finite product of sines. as a consequence we obtain evaluations of certain multiple $l$-series. in the final part of this paper we derive expressions for infinite products of cyclotomic polynomials, again as finite products of gamma or of sine functions.
|
arxiv:1801.09160
|
classical cepheids (ccs) are excellent tracers for understanding the structure of the milky way disk. the latest gaia data release 3 provides a large number of line-of-sight velocity information for galactic ccs, offering an opportunity for studying the kinematics of the milky way. we determine the three-dimensional velocities of 2057 ccs relative to the galactic center. from the projections of the 3d velocities onto the xy plane of the galactic disk, we find that $v_{r}$ and $v_{\phi}$ velocities of the northern and southern warp (directions with highest amplitude) are different. this phenomenon may be related to the warp precession or the asymmetry of the warp structure. by investigating the kinematic warp model, we find that the vertical velocity of ccs is more suitable for constraining the warp precession rate than the line of nodes angles. our results suggest that ccs at $12-14$ kpc are the best sample for determining the galactic warp precession rate. based on the spatial structure parameters of cepheid warp from chen et al (arxiv:1902.00998), we determine a warp precession rate of $\omega = 4.9 \pm 1.6$ km s$^{-1}$ kpc$^{-1}$ at 13 kpc, which supports a low precession rate in the warp model. in the future, more kinematic information on ccs will help to better constrain the structure and evolution of the milky way.
|
arxiv:2402.15782
|
the advent and proliferation of large multi-modal models (lmms) have introduced new paradigms to computer vision, transforming various tasks into a unified visual question answering framework. video quality assessment (vqa), a classic field in low-level visual perception, focused initially on quantitative video quality scoring. however, driven by advances in lmms, it is now progressing toward more holistic visual quality understanding tasks. recent studies in the image domain have demonstrated that visual question answering (vqa) can markedly enhance low-level visual quality evaluation. nevertheless, related work has not been explored in the video domain, leaving substantial room for improvement. to address this gap, we introduce the vqa2 instruction dataset - the first visual question answering instruction dataset that focuses on video quality assessment. this dataset consists of 3 subsets and covers various video types, containing 157,755 instruction question-answer pairs. then, leveraging this foundation, we present the vqa2 series models. the vqa2 series models interleave visual and motion tokens to enhance the perception of spatial-temporal quality details in videos. we conduct extensive experiments on video quality scoring and understanding tasks, and results demonstrate that the vqa2 series models achieve excellent performance in both tasks. notably, our final model, the vqa2-assistant, exceeds the renowned gpt-4o in visual quality understanding tasks while maintaining strong competitiveness in quality scoring tasks. our work provides a foundation and feasible approach for integrating low-level video quality assessment and understanding with lmms.
|
arxiv:2411.03795
|
dirichlet's theorem on arithmetic progressions, also called the dirichlet prime number theorem, is a classical result in number theory. atle selberg \cite{selberg} gave an elementary proof of this theorem. in this article we give an alternative proof of it based on a previous result of ours. we also obtain an estimate of the prime counting function in special cases.
|
arxiv:1511.03811
|
complementary recommendation gains increasing attention in e - commerce since it expedites the process of finding frequently - bought - with products for users in their shopping journey. therefore, learning the product representation that can reflect this complementary relationship plays a central role in modern recommender systems. in this work, we propose a logical reasoning network, logirec, to effectively learn embeddings of products as well as various transformations ( projection, intersection, negation ) between them. logirec is capable of capturing the asymmetric complementary relationship between products and seamlessly extending to high - order recommendations where more comprehensive and meaningful complementary relationship is learned for a query set of products. finally, we further propose a hybrid network that is jointly optimized for learning a more generic product representation. we demonstrate the effectiveness of our logirec on multiple public real - world datasets in terms of various ranking - based metrics under both low - order and high - order recommendation scenarios.
|
arxiv:2212.04966
|
liang xiao ". the pseudonym sounds like a person ' s name but is a homophone for " two schools ". in the 1980s, tsinghua evolved beyond the polytechnic model and incorporated a multidisciplinary system emphasizing collaboration between distinct schools within the broader university environment. under this system, several schools have been re - incorporated, including tsinghua law school, the school of economics and management, the school of sciences, the school of life sciences, the school of humanities and social sciences, the school of public policy and management, and the academy of arts and design. in 1996, the school of economics and management established a partnership with the sloan school of management at the massachusetts institute of technology. one year later, tsinghua and mit began the mba program known as the tsinghua - mit global mba. in 1998, tsinghua became the first chinese university to offer a master of laws ( llm ) program in american law, through a cooperative venture with the temple university beasley school of law. = = = 21st century = = = tsinghua alumni include the current general secretary of the chinese communist party and paramount leader of china, xi jinping ' 79, who graduated with a degree in chemical engineering, along with the ccp general secretary and former paramount leader of china hu jintao ' 64, who graduated with a degree in hydraulic engineering. in addition to its powerful alumni, tsinghua has a reputation for hosting globally prominent guest speakers, with international leaders bill clinton, tony blair, henry kissinger, carlos ghosn, and henry paulson having lectured to the university community. as of 2018, tsinghua university consists of 20 schools and 58 university departments, 41 research institutes, 35 research centers, and 167 laboratories, including 15 national key laboratories. in september 2006, the peking union medical college, a renowned medical school, was renamed " peking union medical college, tsinghua university " although it and tsinghua university are technically separate institutions. the university operates the tsinghua university press, which publishes academic journals, textbooks, and other scholarly works. through its constituent colleges, graduate and professional schools, and other institutes, tsinghua university offers more than 82 bachelor ' s degree programs, 80 master ' s degree programs and 90 phd programs. in 2014, tsinghua established xinya college, a residential liberal arts college, as a pilot project to reform undergraduate education at the university. modeled after universities in the united states and europe, xinya combines general and professional education in a liberal arts
|
https://en.wikipedia.org/wiki/Tsinghua_University
|
we study the magnetization dynamics in ferromagnet$\mid$insulator$\mid$ferromagnet and ferromagnet$\mid$insulator$\mid$normal metal ultra-small tunnel junctions, and the associated voltage drop in the presence of an electromagnetic environment assisting the tunneling processes. we show that the environment strongly affects the resulting voltage, which becomes a highly non-linear function of the precession cone angle $\theta$. we find that voltages comparable to the driving frequency $\omega$ can be reached even for small precession cone angles $\theta$, in stark contrast to the case where the environment is absent. such an effect could be useful for the detection of local magnetization precessions in textured ferromagnets or, conversely, for probing the environment via the magnetization dynamics.
|
arxiv:1405.5744
|
the global sustainable fund universe encompasses open-end funds and exchange-traded funds (etf) that, by prospectus or other regulatory filings, claim to focus on environment, social and governance (esg). challengingly, the claims can only be confirmed by examining the textual disclosures to check if there is presence of intentionality and esg focus in the investment strategy. currently, there is no regulation to enforce sustainability in the esg products space. this paper proposes a unique method and system to classify and score the fund prospectuses in the sustainable universe regarding specificity and transparency of language. we aim to employ few-shot learners to identify specific, ambiguous, and generic sustainable investment-related language. additionally, we construct a ratio metric to determine language score and rating to rank products and quantify sustainability claims for the us sustainable universe. as a by-product, we publish a manually annotated quality training dataset on hugging face (esg-prospectus-clarity-category under cc-by-nc-sa-4.0) of more than 1k esg textual statements. the performance of the few-shot finetuning approach is compared with zero-shot models, e.g., llama-13b, gpt 3.5 turbo, etc. we found that prompting large language models is not accurate for domain-specific tasks due to misalignment issues. the few-shot finetuning techniques outperform zero-shot models by large margins of more than absolute ~30% in precision, recall and f1 metrics on completely unseen esg languages (test set). overall, the paper attempts to establish a systematic and scalable approach to measure and rate sustainability intention quantitatively for sustainable funds using texts in prospectuses. regulatory bodies, investors, and advisors may utilize the findings of this research to reduce the cognitive load in investigating or screening esg funds in a way that accurately reflects their esg intention.
|
arxiv:2407.06893
|
this paper provides a proof of concept for an eeg-based reconstruction of a visual image which is on a user's mind. our approach is based on the rapid serial visual presentation (rsvp) of polygon primitives and brain-computer interface (bci) technology. the presentation of polygons that contribute to build a target image (because they match the shape and/or color of the target) triggers attention-related eeg patterns. accordingly, these target primitives can be determined using bci classification of event-related potentials (erps). they are then accumulated in the display until a satisfactory reconstruction is reached. selection steps have an average classification accuracy of $75\%$. $25\%$ of the images could be reconstructed completely, while more than $65\%$ of the available visual details could be captured on average. most of the misclassifications were not misinterpretations of the bci concerning users' intent; rather, users tried to select polygons that were different than what was intended by the experimenters. open problems and alternatives to develop a practical bci-based image reconstruction application are discussed.
|
arxiv:1411.3489
|
we show that the statistics of fluctuation-driven initial-state anisotropies in proton-proton, proton-nucleus and nucleus-nucleus collisions is to a large extent universal. we propose a simple parametrization for the probability distribution of the fourier coefficient $\varepsilon_n$ in harmonic $n$, which is in good agreement with monte-carlo simulations. our results provide a simple explanation for the 4-particle cumulant of triangular flow measured in pb-pb collisions, and for the 4-particle cumulant of elliptic flow recently measured in p-pb collisions. both arise as natural consequences of the condition that initial anisotropies are bounded by unity. we argue that the initial rms anisotropy in harmonic $n$ can be directly extracted from the measured ratio $v_n\{4\}/v_n\{2\}$: this gives direct access to a property of the initial density profile from experimental data. we also make quantitative predictions for the small lifting of degeneracy between $v_n\{4\}$, $v_n\{6\}$ and $v_n\{8\}$. if confirmed by future experiments, they will support the picture that long-range correlations observed in p-pb collisions at the lhc originate from collective flow proportional to the initial anisotropy.
|
arxiv:1312.6555
|
we propose a novel mechanism for the origin of non - gaussian tails in the probability distribution functions ( pdfs ) of local variables in nonlinear, diffusive, dynamical systems including passive scalars advected by chaotic velocity fields. intermittent fluctuations on appropriate time scales in the amplitude of the ( chaotic ) noise can lead to exponential tails. we provide numerical evidence for such behavior in deterministic, discrete - time passive scalar models. different possibilities for pdfs are also outlined.
|
arxiv:cond-mat/9307004
|
it is known that the exchange of information between web applications is done by means of the soap protocol. securing this protocol is obviously a vital issue for any computer network. however, when it comes to cloud computing systems, the sensitivity of this issue rises, as the clients of the system release their data to the cloud. xml signature is employed to secure soap messages. however, there are also some weak points that have been identified, known as xml signature wrapping attacks, which have been categorized into four major groups: simple ancestry context attack, optional element context attack, sibling value context attack, and sibling order context attack. in this paper, two existing methods for referencing the signed part of a soap message, namely id referencing and the xpath method, are analyzed and examined. in addition, a new method is proposed and tested to secure the soap message. in the new method, the xml signature wrapping attack is prevented by employing the concept of the xml digital signature on the soap message. the results of conducted experiments show that the proposed method is approximately three times faster than the xpath method and even a little faster than id referencing.
|
arxiv:1310.0441
|
in the context of multi - access edge computing ( mec ), the task sharing mechanism among edge servers is an activity of vital importance for speeding up the computing process and thereby improve user experience. the distributed resources in the form of edge servers are expected to collaborate with each other in order to boost overall performance of a mec system. however, there are many challenges to adopt global collaboration among the edge computing server entities among which the following two are significant : ensuring trust among the servers and developing a unified scheme to enable real - time collaboration and task sharing. in this article, a blockchain framework is proposed to provide a trusted collaboration mechanism between edge servers in a mec environment. in particular, a permissioned blockchain scheme is investigated to support a trusted design that also provides incentives for collaboration. finally, caliper tool and hyperledger fabric benchmarks are used to conduct an experimental evaluation of the proposed blockchain scheme embedded in a mec framework.
|
arxiv:2006.14166
|
for a split reductive algebraic group, this paper observes a homological interpretation for weyl module multiplicities in jantzen's sum formula. this interpretation involves an euler characteristic built from ext groups between integral weyl modules. the new interpretation makes transparent for gl_n (and conceivable for other classical groups) a certain invariance of jantzen's sum formula under "howe duality" in the sense of adamovich and rybnikov. for gl_n a simple and explicit general formula is derived for the euler characteristic between an arbitrary pair of integral weyl modules. in light of brenti's work on certain r-polynomials, this formula raises interesting questions about the possibility of relating ext groups between weyl modules to kazhdan-lusztig combinatorics.
|
arxiv:math/0505371
|
recent advances in pre - trained language models have significantly improved neural response generation. however, existing methods usually view the dialogue context as a linear sequence of tokens and learn to generate the next word through token - level self - attention. such token - level encoding hinders the exploration of discourse - level coherence among utterances. this paper presents dialogbert, a novel conversational response generation model that enhances previous plm - based dialogue models. dialogbert employs a hierarchical transformer architecture. to efficiently capture the discourse - level coherence among utterances, we propose two training objectives, including masked utterance regression and distributed utterance order ranking in analogy to the original bert training. experiments on three multi - turn conversation datasets show that our approach remarkably outperforms the baselines, such as bart and dialogpt, in terms of quantitative evaluation. the human evaluation suggests that dialogbert generates more coherent, informative, and human - like responses than the baselines with significant margins.
|
arxiv:2012.01775
|
with the growing capabilities of intelligent systems, the integration of artificial intelligence (ai) and robots in everyday life is increasing. however, when interacting in such complex human environments, the failure of intelligent systems, such as robots, can be inevitable, requiring recovery assistance from users. in this work, we develop automated, natural language explanations for failures encountered during an ai agent's plan execution. these explanations are developed with a focus on helping non-expert users understand different points of failure to better provide recovery assistance. specifically, we introduce a context-based information type for explanations that can both help non-expert users understand the underlying cause of a system failure and select proper failure recoveries. additionally, we extend an existing sequence-to-sequence methodology to automatically generate our context-based explanations. by doing so, we are able to develop a model that can generalize context-based explanations over both different failure types and failure scenarios.
|
arxiv:2011.09407
|
over the past years, topics ranging from climate change to human rights have seen increasing importance for investment decisions. hence, investors (asset managers and asset owners) who wanted to incorporate these issues started to assess companies based on how they handle such topics. for this assessment, investors rely on specialized rating agencies that issue ratings along the environmental, social and governance (esg) dimensions. such ratings allow them to make investment decisions in favor of sustainability. however, rating agencies base their analysis on subjective assessment of sustainability reports, not provided by every company. furthermore, due to the human labor involved, rating agencies are currently facing the challenge of scaling up the coverage in a timely manner. in order to alleviate these challenges and contribute to the overall goal of supporting sustainability, we propose a heterogeneous ensemble model to predict esg ratings using fundamental data. this model is based on feedforward neural network, catboost and xgboost ensemble members. given the public availability of fundamental data, the proposed method would allow cost-efficient and scalable creation of initial esg ratings (also for companies without sustainability reporting). using our approach we are able to explain 54% of the variation in ratings (r2) using fundamental data and outperform prior work in this area.
|
arxiv:2109.10085
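the preceding abstract describes a heterogeneous ensemble of a feedforward neural network, catboost and xgboost members predicting ratings from fundamental data. below is a minimal sketch of such an averaging ensemble; it is not the authors' implementation, and the synthetic features, hyperparameters and simple mean-averaging rule are illustrative assumptions.

```python
# Minimal sketch of a heterogeneous ensemble regressor (illustrative only).
# Synthetic data stands in for fundamental data; averaging is an assumption.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
from xgboost import XGBRegressor
from catboost import CatBoostRegressor

# stand-in for fundamental data (revenue, leverage, sector dummies, ...)
X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

members = [
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
    XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05),
    CatBoostRegressor(iterations=300, depth=4, learning_rate=0.05, verbose=0),
]
for m in members:
    m.fit(X_tr, y_tr)

# average the member predictions to form the ensemble estimate
pred = np.mean([m.predict(X_te) for m in members], axis=0)
print("ensemble R^2:", r2_score(y_te, pred))
```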
|
in the entropic dynamics (ed) framework quantum theory is derived as an application of entropic methods of inference. the physics is introduced through appropriate choices of variables and of constraints that codify the relevant physical information. in previous work, a manifestly covariant ed of quantum scalar fields in a fixed background spacetime was developed. manifest relativistic covariance was achieved by imposing constraints in the form of poisson brackets and of initial conditions to be satisfied by a set of local hamiltonian generators. our approach succeeded in extending to the quantum domain the classical framework that originated with dirac and was later developed by teitelboim and kuchar. in the present work the ed of quantum fields is extended further by allowing the geometry of spacetime to fully partake in the dynamics. the result is a first-principles ed model that in one limit reproduces quantum mechanics and in another limit reproduces classical general relativity. our model shares some formal features with the so-called semi-classical approach to gravity.
|
arxiv:1910.01188
|
digital health technologies (dht), such as wearable devices, provide personalized, continuous, and real-time monitoring of patients. these technologies are contributing to the development of novel therapies and personalized medicine. gaining insight from these technologies requires appropriate modeling techniques to capture clinically-relevant changes in disease state. the data generated from these devices is characterized by being stochastic in nature, may have missing elements, and exhibits considerable inter-individual variability - thereby making it difficult to analyze using traditional longitudinal modeling techniques. we present a novel pharmacology-informed neural stochastic differential equation (sde) model capable of addressing these challenges. using synthetic data, we demonstrate that our approach is effective in identifying treatment effects and learning causal relationships from stochastic data, thereby enabling counterfactual simulation.
|
arxiv:2403.03274
|
markov chain monte carlo samplers produce dependent streams of variates drawn from the limiting distribution of the markov chain. with this as motivation, we introduce novel univariate kernel density estimators which are appropriate for the stationary sequences of dependent variates. we modify the asymptotic mean integrated squared error criterion to account for dependence and find that the modified criterion suggests data - driven adjustments to standard bandwidth selection methods. simulation studies show that our proposed methods find bandwidths close to the optimal value while standard methods lead to smaller bandwidths and hence to undersmoothed density estimates. empirically, the proposed methods have considerably smaller integrated mean squared error than do standard methods.
|
arxiv:1607.08274
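the preceding abstract argues that dependence in an mcmc chain calls for larger kernel bandwidths than standard i.i.d. selectors suggest. the sketch below illustrates that idea with a gaussian kde on a toy ar(1) chain and a bandwidth inflated by the chain's estimated integrated autocorrelation time; the inflation rule and the ar(1) setup are illustrative assumptions, not the criterion derived in the paper.

```python
# Sketch: KDE for a dependent (MCMC-like) sample with an inflated bandwidth.
# The AR(1) chain and the tau-based inflation rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# AR(1) chain with N(0,1) stationary distribution (a toy "MCMC output")
n, rho = 20000, 0.9
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

def silverman_bandwidth(sample):
    """Standard rule-of-thumb bandwidth for i.i.d. data."""
    return 1.06 * sample.std(ddof=1) * len(sample) ** (-1 / 5)

def integrated_autocorr_time(sample, max_lag=200):
    """Crude estimate of the integrated autocorrelation time."""
    s = sample - sample.mean()
    acf = np.array([np.dot(s[:-k], s[k:]) / np.dot(s, s) for k in range(1, max_lag)])
    acf = acf[: np.argmax(acf < 0)] if np.any(acf < 0) else acf
    return 1.0 + 2.0 * acf.sum()

tau = integrated_autocorr_time(x)
h_iid = silverman_bandwidth(x)
h_dep = h_iid * tau ** (1 / 5)   # act as if only n/tau effective samples

def kde(grid, sample, h):
    """Gaussian kernel density estimate evaluated on `grid`."""
    z = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

grid = np.linspace(-3, 3, 7)
print("iid bandwidth:", h_iid, " dependence-adjusted:", h_dep)
print("density on grid:", kde(grid, x, h_dep))
```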
|
in this note we present a computational approach to the construction of ovoids of the hermitian surface and show some related experimental results.
|
arxiv:1210.2600
|
nowadays there is an active discussion about the definition of simplified models of dark matter (smdm) as a tool for interpreting lhc searches. here we point out an additional simplified set-up which captures a very well motivated mechanism beyond the standard model: the kinetic mixing of an extra u(1)' gauge symmetry. in addition to that, even if most of the attention has been paid to lhc "mono-signals", here we highlight an unavoidable signature appearing in smdm with s-channel mediators: dijets or dileptons with no missing energy. we translate these searches into lower bounds on the dm couplings to the visible sector, showing the nice complementarity with the previous analyses, such that the parameter space of dm is being reduced from above and from below.
|
arxiv:1505.04579
|
modern one-stage video instance segmentation networks suffer from two limitations. first, convolutional features are neither aligned with anchor boxes nor with ground-truth bounding boxes, reducing the mask sensitivity to spatial location. second, a video is directly divided into individual frames for frame-level instance segmentation, ignoring the temporal correlation between adjacent frames. to address these issues, we propose a simple yet effective one-stage video instance segmentation framework by spatial calibration and temporal fusion, namely stmask. to ensure spatial feature calibration with ground-truth bounding boxes, we first predict regressed bounding boxes around ground-truth bounding boxes, and extract features from them for frame-level instance segmentation. to further explore temporal correlation among video frames, we aggregate a temporal fusion module to infer instance masks from each frame to its adjacent frames, which helps our framework to handle challenging videos such as motion blur, partial occlusion and unusual object-to-camera poses. experiments on the youtube-vis valid set show that the proposed stmask with resnet-50/-101 backbone obtains 33.5% / 36.8% mask ap, while achieving 28.6 / 23.4 fps on video instance segmentation. the code is released online at https://github.com/minghanli/stmask.
|
arxiv:2104.05606
|
greedy algorithms, particularly the orthogonal greedy algorithm ( oga ), have proven effective in training shallow neural networks for fitting functions and solving partial differential equations ( pdes ). in this paper, we extend the application of oga to the tasks of linear operator learning, which is equivalent to learning the kernel function through integral transforms. firstly, a novel greedy algorithm is developed for kernel estimation rate in a new semi - inner product, which can be utilized to approximate the green ' s function of linear pdes from data. secondly, we introduce the oga for point - wise kernel estimation to further improve the approximation rate, achieving orders of accuracy improvement across various tasks and baseline models. in addition, we provide a theoretical analysis on the kernel estimation problem and the optimal approximation rates for both algorithms, establishing their efficacy and potential for future applications in pdes and operator learning tasks.
|
arxiv:2501.02791
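the preceding abstract builds on the orthogonal greedy algorithm (oga). as a point of reference, the sketch below implements the textbook oga over a finite dictionary in euclidean space: at each step the atom most correlated with the residual is selected and the target is re-projected onto the span of all selected atoms. this generic version is an assumption for illustration; it is not the semi-inner-product or operator-learning variant developed in the paper.

```python
# Textbook orthogonal greedy algorithm (a.k.a. orthogonal matching pursuit)
# over a finite dictionary in R^n. Generic sketch, not the paper's variant.
import numpy as np

def orthogonal_greedy(target, dictionary, n_iter=10):
    """Greedily select dictionary columns and re-project `target` onto their span.

    dictionary: (n, m) array whose columns are normalized candidate atoms.
    Returns the selected column indices and the final approximation.
    """
    residual = target.copy()
    selected = []
    for _ in range(n_iter):
        # pick the atom most correlated with the current residual
        scores = np.abs(dictionary.T @ residual)
        scores[selected] = -np.inf          # do not reselect atoms
        selected.append(int(np.argmax(scores)))
        # orthogonal projection of the target onto the span of selected atoms
        A = dictionary[:, selected]
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        approx = A @ coef
        residual = target - approx
    return selected, approx

rng = np.random.default_rng(0)
D = rng.standard_normal((200, 500))
D /= np.linalg.norm(D, axis=0)                          # normalize atoms
y = D[:, [3, 77, 142]] @ np.array([1.0, -2.0, 0.5])     # sparse ground truth
idx, y_hat = orthogonal_greedy(y, D, n_iter=3)
print("selected atoms:", sorted(idx), " residual norm:", np.linalg.norm(y - y_hat))
```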
|
integrated silicon microwave photonics offers great potential in microwave phase shifter elements, and promises compact and scalable multi-element chips that are free from electromagnetic interference. stimulated brillouin scattering, which was recently demonstrated in silicon, is a particularly powerful approach to induce a phase shift due to its inherent flexibility, offering an optically controllable and selective phase shift. however, to date, only moderate amounts of brillouin gain have been achieved, and theoretically this would restrict the phase shift to a few tens of degrees, significantly less than the required 360 degrees. here, we overcome this limitation with a phase enhancement method using rf interference, showing a 360 degrees broadband phase shifter based on brillouin scattering in a suspended silicon waveguide. we achieve a full 360 degrees phase shift over a bandwidth of 15 ghz using a phase enhancement factor of 25, thereby enabling a practical broadband brillouin phase shifter for beam forming and other applications.
|
arxiv:1903.08363
|
graph clustering is widely used in many data analysis applications. in this paper we propose several parallel graph clustering algorithms based on monte carlo simulations and expectation maximization in the context of stochastic block models. we apply those algorithms to the specific problems of recommender systems and social network anonymization. we compare the experimental results to previous propositions.
|
arxiv:1609.00161
|
in overloaded massive mimo (mmimo) systems, wherein the number $k$ of user equipments (ues) exceeds the number of base station antennas $m$, it has recently been shown that non-orthogonal multiple access (noma) can increase the sum spectral efficiency. this paper aims at identifying cases where code-domain noma can improve the spectral efficiency of mmimo in the classical regime where $k < m$. novel spectral efficiency expressions are provided for the uplink and downlink with arbitrary spreading signatures and spatial correlation matrices. particular attention is devoted to the planar arrays that are currently being deployed in pre-5g and 5g networks (in sub-6 ghz bands), which are characterized by limited spatial resolution. numerical results show that mmimo with such planar arrays can benefit from noma in scenarios where the ues are spatially close to each other. a two-step ue grouping scheme is proposed for noma-aided mmimo systems that is applicable to the spatial correlation matrices of the ues that are currently active in each cell. numerical results are used to investigate the performance of the algorithm under different operating conditions and types of spreading signatures (orthogonal, sparse and random sets). the analysis reveals that orthogonal signatures provide the highest average spectral efficiency.
|
arxiv:2003.01281
|
we predict unusual ( for non - relativistic quantum mechanics ) electron states in graphene, which are localized within a finite - width potential barrier. the density of localized states in the sufficiently high and / or wide graphene barrier exhibits a number of singularities at certain values of the energy. such singularities provide quantum oscillations of both the transport ( e. g., conductivity ) and thermodynamic properties of graphene - when increasing the barrier height and / or width, similarly to the well - known shubnikov - de - haas ( sdh ) oscillations of conductivity in pure metals. however, here the sdh - like oscillations are driven by an electric field instead of the usual magnetically - driven sdh - oscillations.
|
arxiv:0712.1407
|
semantic image interpretation ( sii ) is the task of extracting structured semantic descriptions from images. it is widely agreed that the combined use of visual data and background knowledge is of great importance for sii. recently, statistical relational learning ( srl ) approaches have been developed for reasoning under uncertainty and learning in the presence of data and rich knowledge. logic tensor networks ( ltns ) are an srl framework which integrates neural networks with first - order fuzzy logic to allow ( i ) efficient learning from noisy data in the presence of logical constraints, and ( ii ) reasoning with logical formulas describing general properties of the data. in this paper, we develop and apply ltns to two of the main tasks of sii, namely, the classification of an image ' s bounding boxes and the detection of the relevant part - of relations between objects. to the best of our knowledge, this is the first successful application of srl to such sii tasks. the proposed approach is evaluated on a standard image processing benchmark. experiments show that the use of background knowledge in the form of logical constraints can improve the performance of purely data - driven approaches, including the state - of - the - art fast region - based convolutional neural networks ( fast r - cnn ). moreover, we show that the use of logical background knowledge adds robustness to the learning system when errors are present in the labels of the training data.
|
arxiv:1705.08968
|
berry curvature physics and quantum geometric effects have been instrumental in advancing topological condensed matter physics in recent decades. although landau level-based flat bands and conventional 3d solids have been pivotal in exploring rich topological phenomena, they are constrained by their limited ability to undergo dynamic tuning. in stark contrast, moiré systems have risen as a versatile platform for engineering bands and manipulating the distribution of berry curvature in momentum space. these moiré systems not only harbor tunable topological bands, modifiable through a plethora of parameters, but also provide unprecedented access to large length scales and low energy scales. furthermore, they offer unique opportunities stemming from the symmetry-breaking mechanisms and electron correlations associated with the underlying flat bands that are beyond the reach of conventional crystalline solids. a diverse array of tools, encompassing quantum electron transport in both linear and non-linear response regimes and optical excitation techniques, provide direct avenues for investigating berry physics. this review navigates the evolving landscape of tunable moiré materials, highlighting recent experimental breakthroughs in the field of topological physics. additionally, we delineate several challenges and offer insights into promising avenues for future research.
|
arxiv:2405.08959
|
we report on our attempts to achieve a nearly steady - state gas flow in hydrodynamical simulations of doubly barred galaxies. after exploring the parameter space, we construct two models, for which we evaluate the photometric and the kinematic integrals, present in the tremaine - weinberg method, in search of observational signatures of two rotating patterns. we show that such signatures are often present, but a direct fit to data points is likely to return incorrect pattern speeds. however, for a particular distribution of the tracer, presented here, the values of the pattern speeds can be retrieved reliably even with the direct fit.
|
arxiv:0801.1472
|
other fields. one route that was taken was the rise of social research. large statistical surveys were undertaken in various parts of the united states and europe. another route undertaken was initiated by emile durkheim, studying " social facts ", and vilfredo pareto, opening metatheoretical ideas and individual theories. a third means developed, arising from the methodological dichotomy present, in which social phenomena were identified with and understood ; this was championed by figures such as max weber. the fourth route taken, based in economics, was developed and furthered economic knowledge as a hard science. the last path was the correlation of knowledge and social values ; the antipositivism and verstehen sociology of max weber firmly demanded this distinction. in this route, theory ( description ) and prescription were non - overlapping formal discussions of a subject. the foundation of social sciences in the west implies conditioned relationships between progressive and traditional spheres of knowledge. in some contexts, such as the italian one, sociology slowly affirms itself and experiences the difficulty of affirming a strategic knowledge beyond philosophy and theology. around the start of the 20th century, enlightenment philosophy was challenged in various quarters. after the use of classical theories since the end of the scientific revolution, various fields substituted mathematics studies for experimental studies and examining equations to build a theoretical structure. the development of social science subfields became very quantitative in methodology. the interdisciplinary and cross - disciplinary nature of scientific inquiry into human behaviour, social and environmental factors affecting it, made many of the natural sciences interested in some aspects of social science methodology. examples of boundary blurring include emerging disciplines like social research of medicine, sociobiology, neuropsychology, bioeconomics and the history and sociology of science. increasingly, quantitative research and qualitative methods are being integrated in the study of human action and its implications and consequences. in the first half of the 20th century, statistics became a free - standing discipline of applied mathematics. statistical methods were used confidently. in the contemporary period, karl popper and talcott parsons influenced the furtherance of the social sciences. researchers continue to search for a unified consensus on what methodology might have the power and refinement to connect a proposed " grand theory " with the various midrange theories that, with considerable success, continue to provide usable frameworks for massive, growing data banks ; for more, see consilience. the social sciences will for the foreseeable future be composed of different zones in the research of, and sometimes distinct in approach toward,
|
https://en.wikipedia.org/wiki/Social_science
|
in this dissertation, we investigate the approach of pure su(2) lattice gauge theory to its continuum limit using the deconfinement temperature, six gradient scales, and six cooling scales. we find that cooling scales exhibit similarly good scaling behavior as gradient scales, while being computationally more efficient. in addition, we estimate systematic error in continuum limit extrapolations of scale ratios by comparing standard scaling to asymptotic scaling. finally we study topological observables in pure su(2) using cooling to smooth the gauge fields, and investigate the sensitivity of cooling scales to topological charge. we find that large numbers of cooling sweeps lead to metastable charge sectors, without destroying physical instantons, provided the lattice spacing is fine enough and the volume is large enough. continuum limit estimates of the topological susceptibility are obtained, of which we favor $\chi^{1/4}/t_c = 0.643(12)$. differences between cooling scales in different topological sectors turn out to be too small to be detectable within our statistical error.
|
arxiv:1901.03200
|
we prove the large deviations principle ( ldp ) for the law of the solutions to a class of semilinear stochastic partial differential equations driven by multiplicative noise. our proof is based on the weak convergence approach and significantly improves earlier methods.
|
arxiv:1607.00492
|
in this paper, we investigate synchronization of coupled second - order linear harmonic oscillators with random noises and time delays. the interaction topology is modeled by a weighted directed graph and the weights are perturbed by white noise. on the basis of stability theory of stochastic differential delay equations, algebraic graph theory and matrix theory, we show that the coupled harmonic oscillators can be synchronized almost surely with perturbation and time delays. numerical examples are presented to illustrate our theoretical results.
|
arxiv:0909.4987
|
we explore the symmetry group of the pressure isotropy condition in isotropic coordinates finding a rich structure. we work out some specific examples.
|
arxiv:2004.05856
|
star clusters can be found in galaxy mergers, not only in central regions, but also in the tidal debris. in both the eastern and western tidal tails of ngc 3256 there are dozens of young star clusters, confirmed by their blue colors and larger concentration index as compared to sources off of the tail. tidal tails of other galaxy pairs do not have such widespread cluster formation, indicating environmental influences on the process of star formation or the packaging of the stars.
|
arxiv:astro-ph/0009196
|
in a family of curves, the chern numbers of a singular fiber are the local contributions to the chern numbers of the total space. we will give some inequalities between the chern numbers of a singular fiber as well as their lower and upper bounds. we introduce the dual fiber of a singular fiber, and prove a duality theorem. as an application, we will classify singular fibers with large or small chern numbers.
|
arxiv:1003.1767
|
we demonstrate that migration away from self - produced chemicals ( chemorepulsion ) generates a generic route to clustering and pattern formation among self - propelled colloids. the clustering instability can be caused either by anisotropic chemical production, or by a delayed orientational response to changes of the chemical environment. in each case, chemorepulsion creates clusters of a self - limiting area which grows linearly with self - propulsion speed. this agrees with recent observations of dynamic clusters in janus colloids ( albeit not yet known to be chemorepulsive ). more generally, our results could inform design principles for the self - assembly of chemorepulsive synthetic swimmers and / or bacteria into nonequilibrium patterns.
|
arxiv:1508.04673
|
let $g$ be a claw-free graph on $n$ vertices with clique number $\omega$, and consider the chromatic number $\chi(g^2)$ of the square $g^2$ of $g$. writing $\chi'_s(d)$ for the supremum of $\chi(l^2)$ over the line graphs $l$ of simple graphs of maximum degree at most $d$, we prove that $\chi(g^2) \le \chi'_s(\omega)$ for $\omega \in \{3, 4\}$. for $\omega = 3$, this implies the sharp bound $\chi(g^2) \leq 10$. for $\omega = 4$, this implies $\chi(g^2) \leq 22$, which is within $2$ of the conjectured best bound. this work is motivated by a strengthened form of a conjecture of erd\H{o}s and ne\v{s}et\v{r}il.
|
arxiv:1609.08646
|
g18.93-0.03 is a prominent dust complex within a 0.8 deg long filament, with the molecular clump g18.93/m being ir dark from near-ir wavelengths up to 160 mu. spitzer composite images show an ir bubble spatially associated with g18.93. we use grs 13co and iram 30m h13co+ data to disentangle the spatial structure of the region. from atlasgal submm data we calculate the gas mass, while we use the h13co+ line width to estimate its virial mass. using herschel data we produce temperature maps from fitting the sed. with the magpis 20cm and supercosmos halpha data we trace the ionized gas, and the vgps hi survey provides information on the atomic hydrogen gas. we show that the bubble is spatially associated with g18.93, located at a kinematic near distance of 3.6 kpc. with 280 msun, the most massive clump within g18.93 is g18.93/m. the virial analysis shows that it may be gravitationally bound and has neither spitzer young stellar objects nor mid-ir point sources within. fitting the sed reveals a temperature distribution that decreases towards its center, but heating from the ionizing source puts it above the general ism temperature. we find that the bubble is filled by hii gas, ionized by an o8.5 star. between the ionizing source and the ir dark clump g18.93/m we find a layered structure, from ionized to atomic to molecular hydrogen, revealing a pdr. furthermore, we identify an additional velocity component within the bubble's 8 mu emission rim at the edge of the infrared dark cloud and speculate that it might be shock induced by the expanding hii region. while the elevated temperature allows for the build-up of larger fragments, and the shock induced velocity component may lead to additional turbulent support, we do not find conclusive evidence that the massive clump g18.93/m is prone to collapse because of the expanding hii region.
|
arxiv:1212.3848
|
a well-known problem in holomorphic dynamics is to obtain denjoy-wolff-type results for compositions of self-maps of the unit disc. here, we tackle the particular case of inner functions: if $f_n: \mathbb{d} \to \mathbb{d}$ are inner functions fixing the origin, we show that a limit function of $f_n \circ \cdots \circ f_1$ is either constant or an inner function. for the special case of blaschke products, we prove a similar result and show, furthermore, that imposing certain conditions on the speed of convergence guarantees $l^1$ convergence of the boundary extensions. we give a counterexample showing that, without these extra conditions, the boundary extensions may diverge at all points of $\partial\mathbb{d}$.
|
arxiv:2206.00374
|
recently, convolutional neural networks (cnns) have been widely used in sound event detection (sed). however, traditional convolution is deficient in learning time-frequency domain representation of different sound events. to address this issue, we propose multi-dimensional frequency dynamic convolution (mfdconv), a new design that endows convolutional kernels with frequency-adaptive dynamic properties along multiple dimensions. mfdconv utilizes a novel multi-dimensional attention mechanism with a parallel strategy to learn complementary frequency-adaptive attentions, which substantially strengthen the feature extraction ability of convolutional kernels. moreover, in order to promote the performance of mean teacher, we propose the confident mean teacher to increase the accuracy of pseudo-labels from the teacher and train the student with high confidence labels. experimental results show that the proposed methods achieve 0.470 and 0.692 of psds1 and psds2 on the desed real validation dataset.
|
arxiv:2302.09256
|
we present the standard model calculation of the optical activity of a neutrino sea.
|
arxiv:hep-ph/0009012
|
the recent state - of - the - art deep learning methods have significantly improved brain tumor segmentation. however, fully supervised training requires a large amount of manually labeled masks, which is highly time - consuming and needs domain expertise. weakly supervised learning with scribbles provides a good trade - off between model accuracy and the effort of manual labeling. however, for segmenting the hierarchical brain tumor structures, manually labeling scribbles for each substructure could still be demanding. in this paper, we use only two kinds of weak labels, i. e., scribbles on whole tumor and healthy brain tissue, and global labels for the presence of each substructure, to train a deep learning model to segment all the sub - regions. specifically, we train two networks in two phases : first, we only use whole tumor scribbles to train a whole tumor ( wt ) segmentation network, which roughly recovers the wt mask of training data ; then we cluster the wt region with the guide of global labels. the rough substructure segmentation from clustering is used as weak labels to train the second network. the dense crf loss is used to refine the weakly supervised segmentation. we evaluate our approach on the brats2017 dataset and achieve competitive wt dice score as well as comparable scores on substructure segmentation compared to an upper bound when trained with fully annotated masks.
|
arxiv:1911.02014
|
\log k)$ queries and the randomized communication complexity of the $k$-hamming distance problem is $\omega(k \log k)$. further we show that any randomized parity decision tree computing $k$-hamming weight has size $\exp\left(\omega(k \log k)\right)$.
|
arxiv:1808.06717
|
in order to address the economical dispatch problem in islanded microgrids, this letter proposes an optimal criterion and two decentralized economical-sharing schemes. the criterion is to judge whether global optimal economical sharing can be realized in a decentralized manner. on the one hand, if the system cost functions meet this criterion, the corresponding decentralized droop method is proposed to achieve the global optimal dispatch. otherwise, if the system does not meet this criterion, a modified method to achieve suboptimal dispatch is presented. the advantages of these methods are that they are convenient, effective and communication-less.
|
arxiv:1709.02927
|
we study the process of information dispersal in a network with communication errors and local error - correction. specifically we consider a simple model where a single bit of information initially known to a single source is dispersed through the network, and communication errors lead to differences in the agents ' opinions on this information. naturally, such errors can very quickly make the communication completely unreliable, and in this work we study to what extent this unreliability can be mitigated by local error - correction, where nodes periodically correct their opinion based on the opinion of ( some subset of ) their neighbors. we analyze how the error spreads in the " early stages " of information dispersal by monitoring the average opinion, i. e., the fraction of agents that have the correct information among all nodes that hold an opinion at a given time. our main results show that even with significant effort in error - correction, tiny amounts of noise can lead the average opinion to be nearly uncorrelated with the truth in early stages. we also propose some local methods to help agents gauge when the information they have has stabilized.
|
arxiv:2107.06362
|
vibrations can cause noise in scanning probe microscopies. relative vibrations between the scanning sensor and the sample are important but can be more difficult to determine than absolute vibrations or vibrations relative to the laboratory. we measure the noise spectral density in a scanning squid microscope as a function of position near a localized source of magnetic field, and show that we can determine the spectra of all three components of the relative sensor - sample vibrations. this method is a powerful tool for diagnosing vibrational noise in scanning microscopies.
|
arxiv:1610.00285
|
we study individually rational, pareto optimal, and incentive compatible mechanisms for auctions with heterogeneous items and budget limits. for multi-dimensional valuations we show that there can be no deterministic mechanism with these properties for divisible items. we use this to show that there can also be no randomized mechanism that achieves this for either divisible or indivisible items. for single-dimensional valuations we show that there can be no deterministic mechanism with these properties for indivisible items, but that there is a randomized mechanism that achieves this for either divisible or indivisible items. the impossibility results hold for public budgets, while the mechanism allows private budgets, which is in both cases the harder variant to show. while all positive results are polynomial-time algorithms, all negative results hold independent of complexity considerations.
|
arxiv:1209.6448
|
fe3o4@astragalus polysaccharide core-shell nanoparticles (fe3o4@aps nps) were demonstrated to be an efficient therapeutic drug for treating iron deficiency anemia (ida) in vivo. the fe3o4@aps nps have been synthesized using a two-step approach involving hydrothermal synthesis and subsequent esterification. transmission electron microscopy (tem) and fourier transform infrared (ftir) spectroscopy studies show that aps are attached on the surfaces of the highly monodisperse fe3o4 nps. dynamic light scattering (dls) and magnetic characterizations reveal that the fe3o4@aps nps have outstanding water solubility and stability. cytotoxicity assessment using hela cells and pathological tests in mice demonstrate their good biocompatibility and low toxicity. the ida treatment in rats shows that they have an efficient therapeutic effect, which is attributed to both the iron element supplement from fe3o4 and the aps-stimulated hematopoietic cell generation. moreover, the fe3o4@aps nps are superparamagnetic and thus able to be used for magnetic resonance imaging (mri). this study has demonstrated the potential of nanocomposites involving purified natural products from chinese herbal medicine for biomedical applications.
|
arxiv:1806.10740
|
this study deals with certain harmonic zeta functions, one of them occurs in the study of the multiplication property of the harmonic hurwitz zeta function. the values at the negative even integers are found and laurent expansions at poles are described. closed - form expressions are derived for the stieltjes constants that occur in laurent expansions in a neighborhood of s = 1. moreover, as a bonus, it is obtained that the values at the positive odd integers of three harmonic zeta functions can be expressed in closed - form evaluations in terms of zeta values and log - sine integrals.
|
arxiv:2403.07123
|
this paper describes open - source scientific contributions in python surrounding the numerical solutions to hyperbolic hamilton - jacobi ( hj ) partial differential equations viz., their implicit representation on co - dimension one surfaces ; dynamics evolution with levelsets ; spatial derivatives ; total variation diminishing runge - kutta integration schemes ; and their applications to the theory of reachable sets. they are increasingly finding applications in multiple research domains such as reinforcement learning, robotics, control engineering and automation. we describe the library components, illustrate usage with an example, and provide comparisons with existing implementations. this gpu - accelerated package allows for easy portability to many modern libraries for the numerical analyses of the hj equations. we also provide a cpu implementation in python that is significantly faster than existing alternatives.
|
arxiv:2411.03501
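the preceding abstract lists total variation diminishing runge-kutta integration among the library's components for evolving level sets under hamilton-jacobi dynamics. the sketch below shows the classical third-order tvd (shu-osher) runge-kutta step applied to a toy 1d equation $\phi_t + a\,\phi_x = 0$ with first-order upwinding; the grid, speed and upwind scheme are illustrative assumptions and the code does not use or mirror the library's own api.

```python
# Classical third-order TVD (Shu-Osher) Runge-Kutta step for a toy 1D
# Hamilton-Jacobi / level-set equation  phi_t + a * phi_x = 0.
# Illustrative setup only; not taken from the library described above.
import numpy as np

def rhs(phi, dx, a=1.0):
    """Spatial operator L(phi) = -a * phi_x with first-order upwinding (a > 0)."""
    phi_x = (phi - np.roll(phi, 1)) / dx     # backward difference, periodic grid
    return -a * phi_x

def tvd_rk3_step(phi, dt, dx):
    """One Shu-Osher TVD-RK3 step: convex combinations of forward Euler stages."""
    phi1 = phi + dt * rhs(phi, dx)
    phi2 = 0.75 * phi + 0.25 * (phi1 + dt * rhs(phi1, dx))
    return phi / 3.0 + 2.0 / 3.0 * (phi2 + dt * rhs(phi2, dx))

# advect a smooth level-set profile around a periodic domain
n = 200
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
phi = np.sin(x)                               # initial level-set function
dt = 0.5 * dx                                 # CFL-limited time step
for _ in range(400):
    phi = tvd_rk3_step(phi, dt, dx)
print("max |phi| after transport:", float(np.abs(phi).max()))
```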
|
for any finite group $ g $, a natural question to ask is the order of the smallest possible automorphism group for a cayley graph on $ g $. a particular cayley graph whose automorphism group has this order is referred to as an mrr ( most rigid representation ), and its cayley index is a numerical indicator of this value. study of grrs showed that with the exception of two infinite families and seven individual groups, every group admits a cayley graph whose mrr is a grr, so that the cayley index is 1. the full answer to the question of finding the smallest possible cayley index for a cayley graph on a fixed group was almost completed in previous work, but the precise answers for some finite groups and one infinite family of groups were left open. we fill in the remaining gaps to completely answer this question.
|
arxiv:1703.09299
|
we present a brief overview of the different kinds of electromagnetic radiations expected to come from ( or to be induced by ) space - like sources ( tachyons ). new domains of radiation are here considered ; and the possibility of experimental observation of tachyons via electromagnetic radiation is discussed.
|
arxiv:hep-th/9508166
|
, testing, debugging, and maintaining the source code and documentation of computer programs. this source code is written in a programming language, which is an artificial language that is often more restrictive than natural languages, but easily translated by the computer. programming is used to invoke some desired behavior ( customization ) from the machine. writing high - quality source code requires knowledge of both the computer science domain and the domain in which the application will be used. the highest - quality software is thus often developed by a team of domain experts, each a specialist in some area of development. however, the term programmer may apply to a range of program quality, from hacker to open source contributor to professional. it is also possible for a single programmer to do most or all of the computer programming needed to generate the proof of concept to launch a new killer application. = = = = computer programmer = = = = a programmer, computer programmer, or coder is a person who writes computer software. the term computer programmer can refer to a specialist in one area of computer programming or to a generalist who writes code for many kinds of software. one who practices or professes a formal approach to programming may also be known as a programmer analyst. a programmer ' s primary computer language ( c, c + +, java, lisp, python, etc. ) is often prefixed to the above titles, and those who work in a web environment often prefix their titles with web. the term programmer can be used to refer to a software developer, software engineer, computer scientist, or software analyst. however, members of these professions typically possess other software engineering skills, beyond programming. = = = computer industry = = = the computer industry is made up of businesses involved in developing computer software, designing computer hardware and computer networking infrastructures, manufacturing computer components, and providing information technology services, including system administration and maintenance. the software industry includes businesses engaged in development, maintenance, and publication of software. the industry also includes software services, such as training, documentation, and consulting. = = sub - disciplines of computing = = = = = computer engineering = = = computer engineering is a discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. computer engineers usually have training in electronic engineering ( or electrical engineering ), software design, and hardware - software integration, rather than just software engineering or electronic engineering. computer engineers are involved in many hardware and software aspects of computing, from the design of individual microprocessors, personal computers
|
https://en.wikipedia.org/wiki/Computing
|
we have performed classical trajectory monte carlo ( ctmc ) studies of electron capture and ionization in multiply charged ( q = 8 ) ion - rydberg atom collisions at intermediate impact velocities. impact parallel to the minor and to the major axis, respectively, of the initial kepler electron ellipse has been investigated. the important role of the initial electron momentum distribution found for singly charged ion impact is strongly diminished for higher projectile charge, while the initial spatial distribution remains important for all values of q studied.
|
arxiv:physics/0004020
|
we introduce a nested family of bayesian nonparametric models for network and interaction data with a hierarchical granularity structure that naturally arises through finer and coarser population labelings. in the case of network data, the structure is easily visualized by merging and shattering vertices, while respecting the edge structure. we further develop bayesian inference procedures for the model family, and apply them to synthetic and real data. the family provides a connection of practical and theoretical interest between the hollywood model of crane and dempsey, and the generalized - gamma graphex model of caron and fox. a key ingredient for the construction of the family is fragmentation and coagulation duality for integer partitions, and for this we develop novel duality relations that generalize those of pitman and dong, goldschmidt and martin. the duality is also crucially used in our inferential procedures.
|
arxiv:2408.04866
|
the subject of this work has its roots in the so - called schroedinger bridge problem ( sbp ), which asks for the most likely distribution of brownian particles in their passage between observed empirical marginal distributions at two distinct points in time. renewed interest in this problem was sparked by a reformulation in the language of stochastic control. in earlier works, presented as part i and part ii, we explored a generalization of the original sbp that amounts to optimal steering of linear stochastic dynamical systems between state - distributions, at two points in time, under full state feedback. in these works the cost was quadratic in the control input. the purpose of the present work is to detail the technical steps in extending the framework to the case where a quadratic cost in the state is also present. in the zero - noise limit, we obtain the solution of a ( deterministic ) mass transport problem with general quadratic cost.
|
arxiv:1608.03622
|
aims. monte carlo radiative transfer ( mcrt ) simulations are a powerful tool for understanding the role of dust in astrophysical systems and its influence on observations. however, due to the strong coupling of the radiation field and medium across the whole computational domain, the problem is non - local and non - linear, and such simulations are computationally expensive in the case of realistic 3d inhomogeneous dust distributions. we explore a novel technique for post - processing mcrt output to reduce the total computational run time by enhancing the output of computationally less expensive, lower - quality simulations. methods. we combine principal component analysis ( pca ) and non - negative matrix factorization ( nmf ) as dimensionality reduction techniques together with gaussian markov random fields and the integrated nested laplace approximation ( inla ), an approximate method for bayesian inference, to detect and reconstruct the non - random spatial structure in images with lower signal - to - noise ratio or with missing data. results. we test our methodology using synthetic observations of a galaxy from the skirt auriga project - a suite of high - resolution magneto - hydrodynamic milky way - sized galaxies simulated in a cosmological environment with the ' zoom - in ' technique. with this approach, we are able to reproduce high photon number reference images $ \ sim5 $ times faster with median residuals below $ \ sim20 \ % $.
|
arxiv:2211.02602
|
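the abstract above (arxiv:2211.02602) combines pca and nmf as dimensionality-reduction steps before the gmrf/inla reconstruction. the snippet below is a minimal sketch of that reduction step only, on a synthetic stand-in for an mcrt data cube; the array shapes, noise model, and component count are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import PCA, NMF

# toy stand-in for an MCRT data cube: n_pix pixels, each with an n_wav-point SED,
# built from a few positive spectral components plus photon-noise-like scatter
rng = np.random.default_rng(0)
n_pix, n_wav, n_comp = 2000, 50, 3
spectra = np.abs(rng.normal(size=(n_comp, n_wav)).cumsum(axis=1))  # positive "spectra"
weights = rng.dirichlet(np.ones(n_comp), size=n_pix)                # per-pixel mixing
cube = weights @ spectra
noisy = rng.poisson(cube * 50) / 50.0                                # noisy low-photon proxy

# PCA: keep the leading components and reconstruct (denoised) spectra
pca = PCA(n_components=n_comp)
denoised_pca = pca.inverse_transform(pca.fit_transform(noisy))

# NMF: non-negative factorization, natural for intensity data
nmf = NMF(n_components=n_comp, init="nndsvda", max_iter=500)
denoised_nmf = nmf.fit_transform(noisy) @ nmf.components_

for name, rec in [("pca", denoised_pca), ("nmf", denoised_nmf)]:
    print(name, "median abs. error:", np.median(np.abs(rec - cube)))
```

the spatial (gmrf/inla) reconstruction used in the paper would then operate on the reduced component maps rather than on the raw per-wavelength images.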
models of stepped dark radiation have recently been found to have an important impact on the anisotropies of the cosmic microwave background, aiding in easing the hubble tension. in this work, we study models with a sector of dark radiation with a step in its abundance, which thermalizes after big bang nucleosynthesis by mixing with the standard model neutrinos. for this, we extend an earlier work which has focused on the background evolution only until the dark sector thermalizes by deriving the full background and perturbation equations of the model and implementing them in an einstein - boltzmann solving code. we expound on the behavior of this model, discussing the wide range of parameters that result in interesting and viable cosmologies that dynamically generate dark radiation during a range of epochs. we find that for the strongly self - coupled regime, there is no large cosmological impact for a tight prior on the mass, whereas larger mass ranges allow a smooth interpolation between a behavior close to the $ \ lambda $ cdm cosmological standard model and close to an additional component of strongly self - interacting dark radiation. in the weakly self - coupled regime we find that we can accommodate a parameter space relevant for the neutrino anomalies as well as one relevant to easing the hubble tension.
|
arxiv:2404.16822
|
a kitaev - heisenberg - j2 - j3 model is proposed to describe the mott - insulating layered iridates a2iro3 ( a = na, li ). the model is a combination of the kitaev honeycomb model and the heisenberg model with all three nearest neighbor couplings j1, j2 and j3. a rich phase diagram is obtained at the classical level, including the experimentally suggested zigzag ordered phase, as well as the stripy phase, which extends from the kitaev - heisenberg limit to the j1 - j2 - j3 one. combining the experimentally observed spin order with the optimal fitting to the uniform magnetic susceptibility data gives an estimate of possible parameter values, which in turn reaffirms the necessity of including both the kitaev and farther neighbor couplings.
|
arxiv:1108.2481
|
let $ \ underline { e } = \ prod _ { p \ in \ mathbb { p } } e _ p $ be a compact subset of $ \ widehat { \ mathbb { z } } = \ prod _ { p \ in \ mathbb { p } } \ mathbb { z } _ p $ and denote by $ \ mathcal c ( \ underline { e }, \ widehat { \ mathbb { z } } ) $ the ring of continuous functions from $ \ underline { e } $ into $ \ widehat { \ mathbb { z } } $. we obtain two kinds of adelic versions of the weierstrass approximation theorem. firstly, we prove that the ring $ { \ rm int } _ { \ mathbb { q } } ( \ underline { e }, \ widehat { \ mathbb { z } } ) : = \ { f ( x ) \ in \ mathbb { q } [ x ] \ mid \ forall p \ in \ mathbb { p }, \ ; \ ; f ( e _ p ) \ subseteq \ mathbb { z } _ p \ } $ is dense in the direct product $ \ prod _ { p \ in \ mathbb { p } } \ mathcal c ( e _ p, \ mathbb { z } _ p ) \, $ for the uniform convergence topology. secondly, under the hypothesis that, for each $ n \ geq 0 $, $ \ # ( e _ p \ pmod { p } ) > n $ for all but finitely many $ p $, we prove the existence of regular bases of the $ \ mathbb { z } $ - module $ { \ rm int } _ { \ mathbb { q } } ( \ underline { e }, \ widehat { \ mathbb { z } } ) $, and show that, for such a basis $ \ { f _ n \ } _ { n \ geq 0 } $, every function $ \ underline { \ varphi } $ in $ \ prod _ { p \ in \ mathbb { p } } \ mathcal { c } ( e _ p, \ mathbb { z } _ p ) $ may be uniquely written as a series $ \ sum _ { n \ geq 0 } \ underline { c } _ n f _ n $ where $ \ underline { c } _ n
|
arxiv:1511.03465
|
in this work, we investigate transport phenomena in a compound - semiconductor - based buried - channel quantum well mosfet with a view to developing a simple and effective model for the device current. device simulation has been performed in the quantum ballistic regime using the non - equilibrium green ' s function ( negf ) formalism. the simulated current - voltage characteristics, obtained using a novel concept of an effective transmission coefficient, have been found to describe the reported experimental data with high accuracy. the proposed model has also been effective in capturing the transport characteristics reported for other compound - semiconductor - based field effect transistors. the proposed effective transmission coefficient, and hence the model, lends itself to being a simple and powerful device analysis tool that can be extensively used to predict the performance of a wide variety of compound semiconductor devices in the pre - fabrication stage. it has also demonstrated consistency with device characteristics under doping concentration and channel length scaling. thus the model can help device and process engineers to tune devices for the best possible performance.
|
arxiv:2006.03681
|
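as background for the entry above (arxiv:2006.03681): in the quantum ballistic regime the drain current is commonly written in the landauer form below, and an effective transmission coefficient of the kind the abstract proposes would enter in place of $T(E)$. this standard expression is added only for orientation and is not taken from the paper.

$$ I_D \;=\; \frac{2e}{h} \int T(E)\,\bigl[\, f_S(E) - f_D(E) \,\bigr]\,\mathrm{d}E , $$

where $f_S$ and $f_D$ are the source and drain fermi occupation functions and the factor 2 accounts for spin degeneracy.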
when a model may be fitted separately to each individual statistical unit, inspection of the point estimates may help the statistician to understand between - individual variability and to identify possible relationships. however, some information will be lost in such an approach because estimation uncertainty is disregarded. we present a comparative method for exploratory repeated - measures analysis to complement the point estimates that was motivated by and is demonstrated by analysis of data from the cadet ii breast - cancer screening study. the approach helped to flag up some unusual reader behavior, to assess differences in performance, and to identify potential random - effects models for further analysis.
|
arxiv:1202.6133
|
the $ o ( v ^ 2 ) $ relativistic correction for inelastic $ j / \ psi $ photoproduction, in which heavy quark pairs are in the dominant fock state of the quarkonium, is studied in the framework of nrqcd factorization. an assessment of its significance, particularly in comparison to the color octet contributions, is made. it is found that the impact on the energy distribution is negative in certain regions of phase space. the predictions are compared with photoproduction data from desy - hera.
|
arxiv:hep-ph/9901286
|
a modification of the harmonic superfield formalism in $ d = 4, n = 2 $ supergravity using a subsidiary condition of covariance under the background supersymmetry with a central charge ( $ b $ - covariance ) is considered. conservation of analyticity together with the $ b $ - covariance leads to the appearance of linear gravitational superfields. analytic prepotentials arise in a decomposition of the background linear superfields in terms of spinor coordinates and transform in a nonstandard way under the background supersymmetry. the linear gravitational superfields can be written via spinor derivatives of nonanalytic spinor prepotentials. the perturbative expansion of the extended supergravity action in terms of the $ b $ - covariant superfields and the corresponding version of the differential - geometric formalism are considered. we discuss the dual harmonic representation of the linearized extended supergravity, which corresponds to the dynamical condition of grassmann analyticity.
|
arxiv:hep-th/9803202
|
we study properties of the momentum space triple pomeron vertex in perturbative qcd. particular attention is given to the collinear limit where transverse momenta on one side of the vertex are much larger than on the other side. we also comment on the kernels in nonlinear evolution equations.
|
arxiv:0710.3060
|
this is a short introduction to quantum computers, quantum algorithms and quantum error correcting codes. familiarity with the principles of quantum theory is assumed. emphasis is put on a concise presentation of the principles avoiding lengthy discussions.
|
arxiv:quant-ph/9811006
|
aims. we use a sample of 83 core - dominated active galactic nuclei ( agn ) selected from the mojave ( monitoring of jets in agn with vlba experiments ) radio - flux - limited sample and detected with the fermi large area telescope ( lat ) to study the relations between non - simultaneous radio, optical, and gamma - ray measurements. methods. we perform a multi - band statistical analysis to investigate the relations between the emissions in different bands and reproduce these relations by modeling of the spectral energy distributions of blazars. results. there is a significant correlation between the gamma - ray luminosity and the optical nuclear and radio ( 15 ghz ) luminosities of blazars. we report a well defined positive correlation between the gamma - ray luminosity and the radio - optical loudness for quasars and bl lacertae type objects ( bl lacs ). a strong positive correlation is found between the radio luminosity and the gamma - ray - optical loudness for quasars, while a negative correlation between the optical luminosity and the gamma - ray - radio loudness is present for bl lacs. modeling of these correlations with a simple leptonic jet model for blazars indicates that variations of the accretion disk luminosity ( and hence the jet power ) is able to reproduce the trends observed in most of the correlations. to reproduce all observed correlations, variations of several parameters, such as the accretion power, jet viewing angle, lorentz factor, and magnetic field of the jet, are required.
|
arxiv:1104.4946
|
this paper develops a novel rating - based reinforcement learning approach that uses human ratings to obtain human guidance in reinforcement learning. different from the existing preference - based and ranking - based reinforcement learning paradigms, which rely on human relative preferences over sample pairs, the proposed rating - based reinforcement learning approach is based on human evaluation of individual trajectories without relative comparisons between sample pairs. the rating - based reinforcement learning approach builds on a new prediction model for human ratings and a novel multi - class loss function. we conduct several experimental studies based on synthetic ratings and real human ratings to evaluate the effectiveness and benefits of the new rating - based reinforcement learning approach.
|
arxiv:2307.16348
|
we investigate the role of latitudinal differential rotation ( dr ) in the spin evolution of solar - type stars. recent asteroseismic observations detected strong equator - fast dr in some solar - type stars. numerical simulations show that strong equator - fast dr is a typical feature of young fast - rotating stars and that this tendency is gradually reduced with stellar age. incorporating these properties, we develop a model for the long - term evolution of stellar rotation. the magnetic braking is assumed to be regulated dominantly by the rotation rate in the low - latitude region. therefore, in our model, stars with equator - fast dr spin down more efficiently than those with rigid - body rotation. we calculate the evolution of stellar rotation in ranges of stellar mass, $ 0. 9 \, \ mathrm { m } _ { \ odot } \ le m \ le 1. 2 \, \ mathrm { m } _ { \ odot } $, and metallicity, $ 0. 5 \, \ mathrm { z } _ { \ odot } \ le z \ le 2 \, \ mathrm { z } _ { \ odot } $, where $ \ mathrm { m } _ { \ odot } $ and $ \ mathrm { z } _ { \ odot } $ are the solar mass and metallicity, respectively. our model, using the observed torque in the present solar wind, nicely explains both the current solar rotation and the average trend of the rotation of solar - type stars, including the dependence on metallicity. in addition, our model naturally reproduces the observed trend of the weakened magnetic braking in old slowly rotating solar - type stars because strong equator - fast dr becomes reduced. our results indicate that latitudinal dr and its transition are essential factors that control the stellar spin - down.
|
arxiv:2211.13522
|
the skyrmion number of paraxial optical skyrmions can be defined solely via their polarization singularities and associated winding numbers, using a mathematical derivation that exploits stokes ' s theorem. it is demonstrated that this definition provides a robust way to extract the skyrmion number from experimental data, as illustrated for a variety of optical ( néel - type ) skyrmions and bimerons, and their corresponding lattices. this method not only increases accuracy, but also provides an intuitive geometrical approach to understanding the topology of such quasi - particles of light, and their robustness against smooth transformations.
|
arxiv:2209.06734
|
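for reference alongside the entry above (arxiv:2209.06734), the skyrmion number of a paraxial field with unit stokes vector $\mathbf{s}(x, y)$ is the usual topological degree; the abstract's contribution is to evaluate it purely from polarization singularities via stokes's theorem, which is not reproduced here.

$$ n_{\mathrm{sk}} \;=\; \frac{1}{4\pi} \int \mathbf{s} \cdot \left( \partial_x \mathbf{s} \times \partial_y \mathbf{s} \right) \mathrm{d}x \, \mathrm{d}y . $$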
many have argued that statistics students need additional facility to express statistical computations. by introducing students to commonplace tools for data management, visualization, and reproducible analysis in data science and applying these to real - world scenarios, we prepare them to think statistically. in an era of increasingly big data, it is imperative that students develop data - related capacities, beginning with the introductory course. we believe that the integration of these precursors to data science into our curricula - early and often - will help statisticians be part of the dialogue regarding " big data " and " big questions ".
|
arxiv:1502.00318
|
we explore various $ \ lambda _ c $ states, including $ \ lambda _ c $, $ \ lambda _ c ( 2595 ) $, $ \ lambda _ c ( 2940 ) $, and the predicted $ ( \ bar { d } n ) $ hadronic molecular states, in photoproduction and electroproduction to estimate their yields at eicc and eic. assuming $ \ lambda _ c ( 2940 ) $ as either a hadronic molecular state or a three - quark state, our analysis demonstrates that its production rates are of the same order of magnitude, posing challenges in identifying its underlying structure. after considering the integral luminosity, the yields of $ \ lambda _ c $ excited states reach $ 10 ^ 6 $ to $ 10 ^ 7 $ at eicc and eic. the $ ( \ bar { d } n ) $ molecular states with both isospin $ i = 0 $ and $ i = 1 $ are also studied, with yields reaching $ 10 ^ 5 $, making them likely to be detectable at these facilities.
|
arxiv:2412.03216
|
current models of inter - nucleon interactions are built within the framework of effective field theories ( efts ). contrary to traditional nuclear potentials, eft interactions require a renormalization of their parameters in order to derive meaningful estimates of observables. in this paper, a renormalization procedure is designed in connection with many - body approximations applicable to large - a systems and formulated within the framework of many - body perturbation theory. the procedure is shown to generate counterterms that are independent of the targeted a - body sector. as an example, the procedure is applied to the random phase approximation. this work constitutes one step towards the design of a practical eft for many - body systems.
|
arxiv:1908.07578
|
weak values as introduced by aharonov, albert and vaidman ( aav ) are ensemble average values for the results of weak measurements. they are interesting when the ensemble is preselected on a particular initial state and postselected on a particular final measurement result. i show that weak values arise naturally in quantum optics, as weak measurements occur whenever an open system is monitored ( as by a photodetector ). i use quantum trajectory theory to derive a generalization of aav ' s formula to include ( a ) mixed initial conditions, ( b ) nonunitary evolution, ( c ) a generalized ( non - projective ) final measurement, and ( d ) a non - back - action - evading weak measurement. i apply this theory to the recent stony - brook cavity qed experiment demonstrating wave - particle duality [ g. t. foster, l. a. orozco, h. m. castro - beltran, and h. j. carmichael, phys. rev. lett. { 85 }, 3149 ( 2000 ) ]. i show that the ` ` fractional ' ' correlation function measured in that experiment can be recast as a weak value in a form as simple as that introduced by aav.
|
arxiv:quant-ph/0112116
|
safety is still the main issue of autonomous driving, and in order to be globally deployed, autonomous vehicles need to predict pedestrians ' motions sufficiently in advance. while there is a lot of research on coarse - grained predictions ( human center prediction ) and fine - grained predictions ( human body keypoint prediction ), we focus on 3d bounding boxes, which are reasonable estimates of humans without modeling complex motion details for autonomous vehicles. this gives the flexibility to predict over longer horizons in real - world settings. we suggest this new problem and present a simple yet effective model for pedestrians ' 3d bounding box prediction. this method follows an encoder - decoder architecture based on recurrent neural networks, and our experiments show its effectiveness on both the synthetic ( jta ) and real - world ( nuscenes ) datasets. the learned representation contains useful information that can enhance the performance of other tasks, such as action anticipation. our code is available online : https://github.com/vita-epfl/bounding-box-prediction
|
arxiv:2206.14195
|
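the entry above (arxiv:2206.14195) describes an encoder-decoder recurrent architecture for 3d bounding-box prediction. the sketch below is a minimal pytorch illustration of that general idea, not the authors' model: the 7-value box parameterization (center, size, yaw), hidden size, and residual decoding are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class BoxSeq2Seq(nn.Module):
    """Toy GRU encoder-decoder: observe past 3D boxes (x, y, z, w, h, l, yaw)
    and autoregressively predict the next t_pred boxes."""

    def __init__(self, box_dim=7, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(box_dim, hidden, batch_first=True)
        self.decoder = nn.GRUCell(box_dim, hidden)
        self.head = nn.Linear(hidden, box_dim)

    def forward(self, past, t_pred):
        _, h = self.encoder(past)            # h: (1, batch, hidden)
        h = h.squeeze(0)
        box = past[:, -1]                    # last observed box seeds the decoder
        preds = []
        for _ in range(t_pred):
            h = self.decoder(box, h)
            box = box + self.head(h)         # predict residual motion
            preds.append(box)
        return torch.stack(preds, dim=1)     # (batch, t_pred, box_dim)

model = BoxSeq2Seq()
past = torch.randn(8, 10, 7)                 # 8 tracks, 10 observed frames
future = model(past, t_pred=15)
loss = nn.functional.smooth_l1_loss(future, torch.randn(8, 15, 7))  # placeholder target
loss.backward()
print(future.shape)                          # torch.Size([8, 15, 7])
```

for actual use, the official implementation linked in the abstract (https://github.com/vita-epfl/bounding-box-prediction) should be preferred.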
the galaxy ngc1512 is interacting with the smaller galaxy ngc1510 and shows a peculiar morphology, characterised by two extended arms immersed in an hi disc whose size is about four times larger than the optical diameter of ngc1512. for the first time we performed a deep x - ray observation of the galaxies ngc1512 and ngc1510 with xmm - newton to gain information on the population of x - ray sources and diffuse emission in a system of interacting galaxies. we identified and classified the sources detected in the xmm - newton field of view by means of spectral analysis, hardness - ratios calculated with a bayesian method, x - ray variability, and cross - correlations with catalogues in optical, infrared, and radio wavelengths. we also made use of archival swift ( x - ray ) and australia telescope compact array ( radio ) data to better constrain the nature of the sources detected with xmm - newton. we detected 106 sources in the energy range of 0. 2 - 12 kev, out of which 15 are located within the d _ 25 regions of ngc1512 and ngc1510 and at least six sources coincide with the extended arms. we identified and classified six background objects and six foreground stars. we discussed the nature of a source within the d _ 25 ellipse of ngc1512, whose properties indicate a quasi - stellar object or an intermediate ultra - luminous x - ray source. taking into account the contribution of low - mass x - ray binaries and active galactic nuclei, the number of high - mass x - ray binaries detected within the d _ 25 region of ngc1512 is consistent with the star formation rate obtained in previous works based on radio, infrared, optical, and uv wavelengths. we detected diffuse x - ray emission from the interior region of ngc1512 with a plasma temperature of kt = 0. 68 ( 0. 31 - 0. 87 ) kev and a 0. 3 - 10 kev x - ray luminosity of 1. 3e38 erg / s, after correcting for unresolved discrete sources.
|
arxiv:1405.3495
|
in this short note we prove two elegant generalized continued fraction formulae $ $ e = 2 + \ cfrac { 1 } { 1 + \ cfrac { 1 } { 2 + \ cfrac { 2 } { 3 + \ cfrac { 3 } { 4 + \ ddots } } } } $ $ and $ $ e = 3 + \ cfrac { - 1 } { 4 + \ cfrac { - 2 } { 5 + \ cfrac { - 3 } { 6 + \ cfrac { - 4 } { 7 + \ ddots } } } } $ $ using elementary methods. the first formula is well - known, and the second one is newly - discovered in arxiv : 1907. 00205 [ cs. lg ]. we then explore the possibility of automatic verification of such formulae using computer algebra systems ( cas ' s ).
|
arxiv:1907.05563
|
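both continued fractions in the entry above (arxiv:1907.05563) are easy to check numerically by evaluating the truncated fraction from the bottom up, which is one simple form of the automatic verification the note discusses.

```python
import math

def gcf(b0, partial_terms, depth):
    """Evaluate b0 + a1/(b1 + a2/(b2 + ...)) truncated at `depth` terms,
    working from the bottom up; partial_terms(k) returns (a_k, b_k)."""
    terms = [partial_terms(k) for k in range(1, depth + 1)]
    value = 0.0
    for a, b in reversed(terms):
        value = a / (b + value)
    return b0 + value

# first formula: e = 2 + 1/(1 + 1/(2 + 2/(3 + 3/(4 + ...))))
e1 = gcf(2, lambda k: (1 if k == 1 else k - 1, k), depth=30)

# second formula: e = 3 + (-1)/(4 + (-2)/(5 + (-3)/(6 + ...)))
e2 = gcf(3, lambda k: (-k, k + 3), depth=30)

print(e1, e2, math.e)  # both truncations converge rapidly toward math.e
```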
this paper analyzes the iteration - complexity of a generalized alternating direction method of multipliers ( g - admm ) for solving linearly constrained convex problems. this admm variant, which was first proposed by bertsekas and eckstein, introduces a relaxation parameter $ \ alpha \ in ( 0, 2 ) $ into the second admm subproblem. our approach is to show that the g - admm is an instance of a hybrid proximal extragradient framework with some special properties, and, as a by - product, we obtain ergodic iteration - complexity bounds for the g - admm with $ \ alpha \ in ( 0, 2 ] $, improving and complementing related results in the literature. additionally, we present pointwise iteration - complexity bounds for the g - admm.
|
arxiv:1705.06191
|
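for readers unfamiliar with the bertsekas - eckstein relaxation mentioned above (arxiv:1705.06191), one common way to write the generalized admm for $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$, with penalty $\rho > 0$, scaled dual variable $u$, and relaxation parameter $\alpha \in (0, 2)$, is the standard scheme below (our notation, not necessarily the paper's):

$$
\begin{aligned}
x^{k+1} &= \arg\min_x \; f(x) + \tfrac{\rho}{2}\,\| A x + B z^k - c + u^k \|^2 ,\\
h^{k+1} &= \alpha\, A x^{k+1} - (1 - \alpha)\,( B z^k - c ) ,\\
z^{k+1} &= \arg\min_z \; g(z) + \tfrac{\rho}{2}\,\| h^{k+1} + B z - c + u^k \|^2 ,\\
u^{k+1} &= u^k + h^{k+1} + B z^{k+1} - c .
\end{aligned}
$$

setting $\alpha = 1$ recovers the standard admm.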
in this paper, we study the challenging problem of categorizing videos according to high - level semantics such as the existence of a particular human action or a complex event. although extensive efforts have been devoted in recent years, most existing works combined multiple video features using simple fusion strategies and neglected the utilization of inter - class semantic relationships. this paper proposes a novel unified framework that jointly exploits the feature relationships and the class relationships for improved categorization performance. specifically, these two types of relationships are estimated and utilized by rigorously imposing regularizations in the learning process of a deep neural network ( dnn ). such a regularized dnn ( rdnn ) can be efficiently realized using a gpu - based implementation with an affordable training cost. through arming the dnn with better capability of harnessing both the feature and the class relationships, the proposed rdnn is more suitable for modeling video semantics. with extensive experimental evaluations, we show that rdnn produces superior performance over several state - of - the - art approaches. on the well - known hollywood2 and columbia consumer video benchmarks, we obtain very competitive results : 66. 9 \ % and 73. 5 \ % respectively in terms of mean average precision. in addition, to substantially evaluate our rdnn and stimulate future research on large scale video categorization, we collect and release a new benchmark dataset, called fcvid, which contains 91, 223 internet videos and 239 manually annotated categories.
|
arxiv:1502.07209
|
we study inverse scattering from a screen using only one incoming time - harmonic plane wave but with measurements of the scattered wave taken in all directions. in particular, we focus on the 2d case, i. e. ( inverse ) scattering from an open bounded smooth curve. besides the inverse scattering problem we also study the inverse electrostatic problem. we then show that one set of cauchy data of any continuous and bounded function vanishing on the screen and harmonic outside it determines the screen uniquely.
|
arxiv:2409.02591
|
we present two new fisheye image datasets for training face and object detection models : voc - 360 and wider - 360. the fisheye images are created by post - processing regular images collected from two well - known datasets, voc2012 and wider face, using a model for mapping regular to fisheye images implemented in matlab. voc - 360 contains 39, 575 fisheye images for object detection, segmentation, and classification. wider - 360 contains 63, 897 fisheye images for face detection. these datasets will be useful for developing face and object detectors as well as segmentation modules for fisheye images while the efforts to collect and manually annotate true fisheye images are underway.
|
arxiv:1906.11942
|
top - k predictions are used in many real - world applications such as machine learning as a service, recommender systems, and web searches. $ \ ell _ 0 $ - norm adversarial perturbation characterizes an attack that arbitrarily modifies some features of an input such that a classifier makes an incorrect prediction for the perturbed input. $ \ ell _ 0 $ - norm adversarial perturbation is easy to interpret and can be implemented in the physical world. therefore, certifying robustness of top - $ k $ predictions against $ \ ell _ 0 $ - norm adversarial perturbation is important. however, existing studies either focused on certifying $ \ ell _ 0 $ - norm robustness of top - $ 1 $ predictions or $ \ ell _ 2 $ - norm robustness of top - $ k $ predictions. in this work, we aim to bridge the gap. our approach is based on randomized smoothing, which builds a provably robust classifier from an arbitrary classifier via randomizing an input. our major theoretical contribution is an almost tight $ \ ell _ 0 $ - norm certified robustness guarantee for top - $ k $ predictions. we empirically evaluate our method on cifar10 and imagenet. for instance, our method can build a classifier that achieves a certified top - 3 accuracy of 69. 2 \ % on imagenet when an attacker can arbitrarily perturb 5 pixels of a testing image.
|
arxiv:2011.07633
|
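the abstract above (arxiv:2011.07633) builds on randomized smoothing: classify many randomly perturbed copies of the input and return the classes with the most votes. the snippet below sketches only this voting step with a naive random-pixel ablation; the paper's actual randomization scheme, confidence bounds, and certified-radius computation are not reproduced, and every parameter here is an illustrative assumption.

```python
import numpy as np

def smoothed_topk(base_classifier, x, k=3, num_samples=1000,
                  keep_prob=0.8, num_classes=10, rng=None):
    """Monte Carlo estimate of a smoothed classifier's top-k prediction.

    Each sample randomly zeroes out pixels (an l0-style ablation), the base
    classifier votes, and the k most-voted classes are returned. This is only
    the prediction step; no certification bound is computed here.
    """
    rng = np.random.default_rng() if rng is None else rng
    votes = np.zeros(num_classes, dtype=int)
    for _ in range(num_samples):
        mask = rng.random(x.shape) < keep_prob   # keep each pixel with prob keep_prob
        votes[base_classifier(x * mask)] += 1
    return np.argsort(votes)[::-1][:k]           # classes ranked by vote count

# toy usage: a stand-in "classifier" that thresholds the mean pixel value
toy = lambda img: int(img.mean() * 10) % 10
print(smoothed_topk(toy, np.random.rand(32, 32), k=3, num_samples=200))
```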
let $ \ phi $ be a smooth solution of the parabolic equation $ f ( d ^ 2u, du, u, x, t ) - u _ { t } = 0 $ : assume $ f $ is uniform elliptic only in a neighborhood of $ ( d ^ 2 \ phi, d \ phi, \ phi, x, t ) $, we prove that any solution obtained from small l1 - perturbation of $ \ phi $ remains smooth.
|
arxiv:1111.5888
|
graph neural networks ( gnns ) have revolutionized the field of machine learning on non - euclidean data such as graphs and networks. gnns effectively implement node representation learning through neighborhood aggregation and achieve impressive results in many graph - related tasks. however, most neighborhood aggregation approaches are summation - based, which can be problematic as they may not be sufficiently expressive to encode informative graph structures. furthermore, though the graph pooling module is also of vital importance for graph learning, especially for the task of graph classification, research on graph down - sampling mechanisms is rather limited. to address the above challenges, we propose a concatenation - based graph convolution mechanism that injectively updates node representations to maximize the discriminative power in distinguishing non - isomorphic subgraphs. in addition, we design a novel graph pooling module, called wl - sortpool, to learn important subgraph patterns in a deep - learning manner. wl - sortpool layer - wise sorts node representations ( i. e. continuous wl colors ) to separately learn the relative importance of subtrees with different depths for the purpose of classification, thus better characterizing the complex graph topology and rich information encoded in the graph. we propose a novel subgraph pattern gnn ( spgnn ) architecture that incorporates these enhancements. we test the proposed spgnn architecture on many graph classification benchmarks. experimental results show that our method can achieve highly competitive results with state - of - the - art graph kernels and other gnn approaches.
|
arxiv:2404.13655
|
we qualify the entanglement of arbitrary mixed states of bipartite quantum systems by comparing global and marginal mixednesses quantified by different entropic measures. for systems of two qubits we discriminate the class of maximally entangled states with fixed marginal mixednesses, and determine an analytical upper bound relating the entanglement of formation to the marginal linear entropies. this result partially generalizes to mixed states the quantification of entanglement with marginal mixednesses holding for pure states. we identify a class of entangled states that, for fixed marginals, are globally more mixed than product states when measured by the linear entropy. such states cannot be discriminated by the majorization criterion.
|
arxiv:quant-ph/0307192
|
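for the entry above (arxiv:quant-ph/0307192), the "mixednesses" being compared are typically the normalized linear entropies of the global state and of its reductions; the normalization below is a common convention, stated here as an assumption rather than as the paper's exact definition.

$$ s_{l}(\rho) \;=\; \frac{d}{d-1}\left( 1 - \mathrm{tr}\,\rho^2 \right) \in [0, 1], $$

with $d$ the hilbert - space dimension; comparing $s_l(\rho_{ab})$ with the marginal values $s_l(\rho_a)$ and $s_l(\rho_b)$ gives the global - versus - marginal mixedness comparison the abstract refers to.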
the saha equation provides the relation between two consecutive ionization state populations, like the maxwell - boltzmann velocity distribution of the atoms in a gas ensemble. the saha equation can also take into account the partition functions of both states, and its main application is in stellar astrophysics population statistics. this paper presents two non - gaussian thermostatistical generalizations of the saha equation : the first one towards the tsallis nonextensive $ q $ - entropy and the other one based upon kaniadakis $ \ kappa $ - statistics. both thermostatistical formalisms are very successful when used in several complex astrophysical statistical systems, and we have demonstrated here that they also work for saha ' s ionization distribution. we have obtained new chemical $ q $ - potentials and their respective graphical regions, with a well - defined boundary that separates the two symmetric intervals for the $ q $ - potentials. the asymptotic behavior of the $ q $ - potential is also discussed. besides the proton - electron reaction, we have also investigated complex atoms and pair - production ionization reactions.
|
arxiv:1901.01839
|
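for reference, the standard (boltzmann - gibbs) saha equation that the entry above (arxiv:1901.01839) generalizes relates the populations $n_i$ and $n_{i+1}$ of consecutive ionization states to the electron density $n_e$ and temperature $T$; the tsallis $q$- and kaniadakis $\kappa$-deformed versions derived in the paper are not reproduced here.

$$ \frac{n_{i+1}\, n_e}{n_i} \;=\; \frac{2\, u_{i+1}(T)}{u_i(T)} \left( \frac{2\pi m_e k_B T}{h^2} \right)^{3/2} \exp\!\left( -\frac{\chi_i}{k_B T} \right), $$

where $u_i$ are the partition functions of the ionization states, $m_e$ the electron mass, and $\chi_i$ the ionization energy.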
in this paper, we propose a novel learning - based pipeline for partially overlapping 3d point cloud registration. the proposed model includes an iterative distance - aware similarity matrix convolution module to incorporate information from both the feature and euclidean space into the pairwise point matching process. these convolution layers learn to match points based on joint information of the entire geometric features and euclidean offset for each point pair, overcoming the disadvantage of matching by simply taking the inner product of feature vectors. furthermore, a two - stage learnable point elimination technique is presented to improve computational efficiency and reduce false positive correspondence pairs. a novel mutual - supervision loss is proposed to train the model without extra annotations of keypoints. the pipeline can be easily integrated with both traditional ( e. g. fpfh ) and learning - based features. experiments on partially overlapping and noisy point cloud registration show that our method outperforms the current state - of - the - art, while being more computationally efficient. code is publicly available at https : / / github. com / jiahaowork / idam.
|
arxiv:1910.10328
|
we construct left invariant special kähler structures on the cotangent bundle of a flat pseudo - riemannian lie group. we introduce the twisted cartesian product of two special kähler lie algebras according to two linear representations by infinitesimal kähler transformations. we also exhibit a double extension process of a special kähler lie algebra which allows us to get all simply connected special kähler lie groups with bi - invariant symplectic connections. all lie groups constructed by performing this double extension process can be identified with a subgroup of symplectic ( or kähler ) affine transformations of its lie algebra containing a nontrivial $ 1 $ - parameter subgroup formed by central translations. we show a characterization of left invariant flat special kähler structures using étale kähler affine representations, exhibit some immediate consequences of the constructions mentioned above, and give several non - trivial examples.
|
arxiv:2005.09771
|
by considering a spin - $ \ frac { 1 } { 2 } $ degenerate fermi gas in a ring cavity, where the strong interaction between atoms and light gives rise to superradiance, we find that the cavity dissipation can cause a severe broadening in some special cases, breaking down the quasi - particle picture that is commonly assumed in mean - field theory studies. this broadening happens when the band gap is resonant with the polariton excitation energy. interestingly, this broadening is highly spin - selective, depending on how the fermions are filled, and the spectrum becomes asymmetric due to dissipation. further, a non - monotonic dependence of the maximal broadening of the spectrum on the cavity decay rate $ \ kappa $ is found, and the largest broadening emerges at $ \ kappa $ comparable to the recoil energy.
|
arxiv:1706.10204
|
against a backdrop of tensions related to eu membership, we find levels of online abuse toward uk mps reach a new high. race and religion have become pressing topics globally, and in the uk this interacts with " brexit " and the rise of social media to create a complex social climate in which much can be learned about evolving attitudes. in 8 million tweets by and to uk mps in the first half of 2019, religious intolerance scandals in the uk ' s two main political parties attracted significant attention. furthermore, high profile ethnic minority mps started conversations on twitter about race and religion, the responses to which provide a valuable source of insight. we found a significant presence for disturbing racial and religious abuse. we also explore metrics relating to abuse patterns, which may affect its impact. we find " burstiness " of abuse doesn ' t depend on race or gender, but individual factors may lead to politicians having very different experiences online.
|
arxiv:1910.00920
|
the crosstalk and afterpulsing in hamamatsu silicon photomultipliers, called multi - pixel photon counters ( mppcs ), have been studied in depth. several components of the correlated noise have been identified according to their different possible causes and their effects on the signal. in particular, we have distinguished between prompt and delayed crosstalk as well as between trap - assisted and hole - induced afterpulsing. the prompt crosstalk has been characterized through the pulse amplitude spectrum measured at dark conditions. the newest mppc series, which incorporate isolating trenches between pixels, exhibit a very low prompt crosstalk, but a small component remains likely due to secondary photons reflected on the top surface of the device and photon - generated minority carriers diffusing in the silicon substrate. we present a meticulous procedure to characterize the afterpulsing and delayed crosstalk through the amplitude and delay time distributions of secondary pulses. our results indicate that both noise components are due to minority carriers diffusing in the substrate and that this effect is drastically reduced in the new mppc series as a consequence of an increase of one order of magnitude in the doping density of the substrate. finally, we have developed a monte carlo simulation to study the different components of the afterpulsing and crosstalk. the simulation results support our interpretation of the experimental data. they also demonstrate that trenches longer than those employed in the hamamatsu mppcs would reduce the crosstalk to a much greater extent.
|
arxiv:1509.02286
|
most cloud services and distributed applications rely on hashing algorithms that allow dynamic scaling of a robust and efficient hash table. examples include aws, google cloud and bittorrent. consistent and rendezvous hashing are algorithms that minimize key remapping as the hash table resizes. while memory errors in large - scale cloud deployments are common, neither algorithm offers both efficiency and robustness. hyperdimensional computing is an emerging computational model that is inherently efficient and robust, and is well suited for vector or hardware acceleration. we propose hyperdimensional ( hd ) hashing and show that it has the efficiency to be deployed in large systems. moreover, a realistic level of memory errors causes more than 20 % of lookups to be mismatched for consistent hashing, while hd hashing remains unaffected.
|
arxiv:2205.07850
|
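the entry above (arxiv:2205.07850) compares hd hashing against consistent and rendezvous hashing. as plain background, the sketch below shows a minimal consistent-hash ring in python (virtual nodes, sha-256 positions); it illustrates why resizing remaps only a fraction of keys and is unrelated to the proposed hd hashing scheme.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hashing ring: a key maps to the first node clockwise
    from its hash position, so resizing only remaps a small fraction of keys."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes          # virtual nodes per physical node
        self._ring = []               # sorted list of (hash, node)
        for node in nodes:
            self.add(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self.vnodes):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def remove(self, node):
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def lookup(self, key):
        idx = bisect.bisect_right(self._ring, (self._hash(key), ""))
        return self._ring[idx % len(self._ring)][1]   # wrap around the ring

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
before = {k: ring.lookup(k) for k in map(str, range(1000))}
ring.add("node-d")
after = {k: ring.lookup(k) for k in map(str, range(1000))}
moved = sum(before[k] != after[k] for k in before)
print(f"{moved / 1000:.0%} of keys remapped after adding a node")  # roughly 1/4
```

adding a fourth node to a three-node ring remaps roughly a quarter of the keys, which is the minimal-disruption property the abstract refers to.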
asteroseismology allows us to probe the physical conditions inside the core of red giant stars. this relies on the properties of the global oscillations with a mixed character that are highly sensitive to the physical properties of the core. however, overlapping rotational splittings and mixed - mode spacings result in complex structures in the mixed - mode pattern, which severely complicates its identification and the measurement of the asymptotic period spacing. this work aims at disentangling the rotational splittings from the mixed - mode spacings, in order to open the way to a fully automated analysis of large data sets. an analytical development of the mixed - mode asymptotic expansion is used to derive the period spacing between two consecutive mixed modes. the échelle diagrams constructed with the appropriately stretched periods are used to exhibit the structure of the gravity modes and of the rotational splittings. we propose a new view on the mixed - mode oscillation pattern based on corrected periods, called stretched periods, that mimic the evenly spaced gravity - mode pattern. this provides a direct understanding of all oscillation components, even in the case of rapid rotation. the measurement of the asymptotic period spacing and the signature of the structural glitches on mixed modes are then made easy. this work opens the possibility to derive all seismic global parameters in an automated way, including the identification of the different rotational multiplets and the measurement of the rotational splitting, even when this splitting is significantly larger than the period spacing. revealing buoyancy glitches provides a detailed view on the radiative core.
|
arxiv:1509.06193
|
the grid integration of intermittent renewable energy sources ( res ) causes costs for grid operators due to forecast uncertainty and the resulting production schedule mismatches. these so - called profile service costs are marginal cost components and can be understood as an insurance fee against res production schedule uncertainty that the system operator incurs due to the obligation to always provide sufficient control reserve capacity for power imbalance mitigation. this paper studies the situation for the german power system and the existing german res support schemes. the profile service costs incurred by german transmission system operators ( tsos ) are quantified and means for cost reduction are discussed. in general, profile service costs are dependent on the res prediction error and the specific workings of the power markets via which the prediction error is balanced. this paper shows both how the prediction error can be reduced in daily operation as well as how profile service costs can be reduced via optimization against power markets and / or active curtailment of res generation.
|
arxiv:1407.7237
|
we use the kharzeev - levin - nardi model of the low $ x $ gluon distributions to fit recent hera data on charm and longitudinal structure functions. having checked that this model gives a good description of the data, we use it to predict $ f ^ c _ 2 $ and $ f _ l $ to be measured in a future electron - ion collider. the results interpolate between those obtained with the de florian - sassot and eskola - paukkunen - salgado nuclear gluon distributions. the conclusion of this exercise is that the kln model, simple as it is, may still be used as an auxiliary tool to make estimates both for heavy ion and electron - ion collisions.
|
arxiv:0812.0780
|