text | source
---|---
Recommender systems usually learn user interests from various user behaviors, including clicks and post-click behaviors (e.g., like and favorite). However, these behaviors inevitably exhibit popularity bias, leading to some unfairness issues: 1) for items with similar quality, the more popular ones get more exposure; and 2) even worse, popular items with lower quality might receive more exposure. Existing work on mitigating popularity bias blindly eliminates the bias and usually ignores the effect of item quality. We argue that the relationships between different user behaviors (e.g., conversion rate) actually reflect item quality. Therefore, to handle the unfairness issues, we propose to mitigate the popularity bias by considering multiple user behaviors. In this work, we examine the causal relationships behind the interaction generation procedure in multi-behavior recommendation. Specifically, we find that: 1) item popularity is a confounder between the exposed items and users' post-click interactions, leading to the first unfairness; and 2) some hidden confounders (e.g., the reputation of item producers) affect both item popularity and quality, resulting in the second unfairness. To alleviate these confounding issues, we propose a causal framework to estimate the causal effect, which leverages backdoor adjustment to block the backdoor paths caused by the confounders. In the inference stage, we remove the negative effect of popularity and utilize the good effect of quality for recommendation. Experiments on two real-world datasets validate the effectiveness of our proposed framework, which enhances fairness without sacrificing recommendation accuracy.
|
arxiv:2209.04589
|
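The backdoor adjustment mentioned in the abstract above can be illustrated on a toy discrete example. This is not the paper's model; the variables and numbers below are hypothetical, chosen only to show how averaging over a measured confounder yields an interventional probability.

```python
# Toy backdoor adjustment: popularity Z confounds exposure X and
# post-click feedback Y. With Z observed, the interventional probability is
#   P(Y=1 | do(X=x)) = sum_z P(Z=z) * P(Y=1 | X=x, Z=z)

def backdoor_adjust(p_z, p_y_given_xz, x):
    """P(Y=1 | do(X=x)) obtained by averaging over the confounder Z."""
    return sum(p_z[z] * p_y_given_xz[(x, z)] for z in p_z)

# Hypothetical distributions: Z=1 means "popular item".
p_z = {0: 0.7, 1: 0.3}
p_y_given_xz = {
    (0, 0): 0.10, (0, 1): 0.30,   # P(Y=1 | X=0, Z=z)
    (1, 0): 0.20, (1, 1): 0.40,   # P(Y=1 | X=1, Z=z)
}

p_do1 = backdoor_adjust(p_z, p_y_given_xz, 1)  # 0.7*0.20 + 0.3*0.40 = 0.26
p_do0 = backdoor_adjust(p_z, p_y_given_xz, 0)  # 0.7*0.10 + 0.3*0.30 = 0.16
print(p_do1, p_do0)
```

The sum blocks the backdoor path X ← Z → Y, which is the mechanism the abstract's framework relies on for the observed confounder.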
…, or provide detailed specifications for an object. With the goals of legibility and uniformity, styles are standardized and lettering ability has little relationship to normal writing ability. Engineering drawings use a Gothic sans-serif script, formed by a series of short strokes. Lower-case letters are rare in most drawings of machines. ISO lettering templates, designed for use with technical pens and pencils, and to suit ISO paper sizes, produce lettering characters to an international standard. The stroke thickness is related to the character height (for example, 2.5 mm high characters would have a stroke thickness, i.e. pen nib size, of 0.25 mm; 3.5 mm characters would use a 0.35 mm pen, and so forth). The ISO character set (font) has a seriffed one, a barred seven, an open four, six, and nine, and a round-topped three, which improves legibility when, for example, an A0 drawing has been reduced to A1 or even A3 (and perhaps enlarged back or reproduced/faxed/microfilmed &c.). When CAD drawings became more popular, especially using US software such as AutoCAD, the nearest font to this ISO standard font was Romantic Simplex (RomanS, a proprietary SHX font) with a manually adjusted width factor (override) to make it look as near as possible to the ISO lettering for the drawing board. However, with the closed four and arced six and nine, the romans.shx typeface could be difficult to read in reductions. In more recent revisions of software packages, the TrueType font ISOCPEUR reliably reproduces the original drawing-board lettering stencil style; however, many drawings have switched to the ubiquitous Arial.ttf.

== Conventional parts (areas) ==

=== Title block ===

Every engineering drawing must have a title block.
The title block (T/B, TB) is an area of the drawing that conveys header-type information about the drawing, such as: drawing title (hence the name "title block"); drawing number; part number(s); name of the design activity (corporation, government agency, etc.); identifying code of the design activity (such as a CAGE code); address of the design activity (such as city, state/province, country); measurement units of the drawing (for example, inches, millimeters); default tolerances for dimension callouts where no tolerance is specified; boilerplate callouts of general specs; intellectual property rights warning; ISO
|
https://en.wikipedia.org/wiki/Engineering_drawing
|
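The ISO lettering rule quoted above ties stroke thickness (pen nib size) to one tenth of the character height. A minimal sketch of that rule, with the heights chosen purely as examples:

```python
# ISO lettering stroke thickness: one tenth of the character height,
# per the rule quoted above (2.5 mm characters -> 0.25 mm nib).

def nib_size_mm(char_height_mm: float) -> float:
    """Pen nib size (mm) for a given ISO lettering character height (mm)."""
    return round(char_height_mm / 10, 2)

for h in (2.5, 3.5, 5.0, 7.0):
    print(f"{h} mm lettering -> {nib_size_mm(h)} mm pen")
```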
In this paper, we present Uformer, an effective and efficient Transformer-based architecture for image restoration, in which we build a hierarchical encoder-decoder network using the Transformer block. In Uformer, there are two core designs. First, we introduce a novel locally-enhanced window (LeWin) Transformer block, which performs non-overlapping window-based self-attention instead of global self-attention. It significantly reduces the computational complexity on high-resolution feature maps while capturing local context. Second, we propose a learnable multi-scale restoration modulator in the form of a multi-scale spatial bias to adjust features in multiple layers of the Uformer decoder. Our modulator demonstrates superior capability for restoring details for various image restoration tasks while introducing marginal extra parameters and computational cost. Powered by these two designs, Uformer enjoys a high capability for capturing both local and global dependencies for image restoration. To evaluate our approach, extensive experiments are conducted on several image restoration tasks, including image denoising, motion deblurring, defocus deblurring and deraining. Without bells and whistles, our Uformer achieves superior or comparable performance compared with the state-of-the-art algorithms. The code and models are available at https://github.com/ZhendongWang6/Uformer.
|
arxiv:2106.03106
|
A key ingredient in understanding the dynamics of stellar outflows is their proper motion. We have used optical images in the [S II] emission at 6717/31 A and the red Digitized Palomar Observatory Sky Survey (DSS) plates to determine the proper motion of the HH 7-11 system and the optical knot of Cep E (HH 377). The DSS plate measurements span nearly 37 years for both HH 7-11 and HH 377 and have a wide field of view, which allows an accurate determination of the proper motions despite their relatively low angular resolution. The optical images, with higher angular resolution, cover shorter periods of 7 and 4 years, respectively, and have been used to complement the DSS measurements. From the DSS plates we have found that HH 377 has a proper motion of 0.031 +/- 0.003 arcsec/yr with a PA = 206 deg, i.e., moving away from IRAS 230111+63, which at a distance of 730 pc corresponds to a tangential velocity of 107 +/- 14 km/s. The values obtained from the optical images are consistent with these measurements. Similarly, the proper motions of HH 7-11 range from 0.015 +/- 0.009 (HH 9) to 0.044 +/- 0.007 (HH 11) arcsec/yr, and the flow is moving away from SVS 13 with a mean PA = 136 deg. At a distance of 330 pc, these motions correspond to tangential velocities of 25-70 km/s, i.e., comparable to the original values obtained by Herbig & Jones (1983). The measurements from the optical CCD [S II] images are again consistent with these motions, although in detail there are some differences, particularly for HH 7 and HH 10.
|
arxiv:astro-ph/0109500
|
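The tangential velocities quoted in the abstract above follow from the standard conversion v_t [km/s] = 4.74 × μ [arcsec/yr] × d [pc]. A quick check of the quoted figures (this is the textbook formula, not code from the paper):

```python
# Tangential velocity from proper motion and distance:
#   v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]

def tangential_velocity_kms(mu_arcsec_yr: float, distance_pc: float) -> float:
    return 4.74 * mu_arcsec_yr * distance_pc

# HH 377: 0.031 arcsec/yr at 730 pc -> ~107 km/s, as quoted.
print(round(tangential_velocity_kms(0.031, 730)))
# HH 9 and HH 11 at 330 pc bracket the quoted 25-70 km/s range.
print(round(tangential_velocity_kms(0.015, 330)))
print(round(tangential_velocity_kms(0.044, 330)))
```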
We construct and study an extended random matrix model of RNA (polymer) folding. A perturbation which acts on all the nucleotides in the chain is added to the action of the RNA partition function. The effect of this perturbation on the partition function and the genus distributions is studied. This perturbation distinguishes between the paired and unpaired bases. For example, for $\alpha = 1$ (where $\alpha$ is the ratio of the strengths of the original and perturbed terms in the action) the partition function and genus distribution for odd lengths vanish completely. The partition function and the genus distribution are non-zero for even lengths, where only structures with fully paired bases remain. This implies that (i) the genus distributions are different, and (ii) there is a "structural transition" (from an "unpaired-paired base phase" to a "completely paired base phase") as $\alpha$ approaches 1 in the extended matrix models. We compare the results of the extended RNA model with the results of G. Vernizzi, H. Orland and A. Zee in PRL 94, 168103 (2005).
|
arxiv:0802.2440
|
While reduction in feature size makes computation cheaper in terms of latency, area, and power consumption, the performance of emerging data-intensive applications is determined by data movement. These trends have introduced the concept of scalability as reaching a desirable performance per unit cost by using as few units as possible. Many proposals have moved compute closer to the memory. However, these efforts ignored maintaining a balance between the bandwidth and compute rate of an architecture and those of applications, which is a key principle in designing scalable large systems. This paper proposes the use of memory slices, a modular building block for scalable memory systems integrated with compute, in which performance scales with memory size (and volume of data). The slice architecture utilizes a programmable memory interface feeding a systolic compute engine with a high reuse rate. The modularity of slice-based systems is exploited with a partitioning and data-mapping strategy across allocated memory slices, where training performance scales with the data size. These features enable shifting most of the pressure to cheap compute units rather than expensive memory accesses or transfers via the interconnection network. An application of memory slices to a scale-out memory system is accelerating the training of recurrent, convolutional, and hybrid neural networks (RNNs and RNNs+CNNs) that form cloud workloads. The results of our cycle-level simulations show that memory slices exhibit a superlinear speedup when the number of slices increases. Furthermore, memory slices improve power efficiency to 747 GFLOPs/J for training LSTMs. While our current evaluation uses memory slices with 3D packaging, a major value is that slices can also be constructed with a variety of packaging options, for example with DDR-based memory units.
|
arxiv:1803.06068
|
The dueling bandit problem, an essential variation of the traditional multi-armed bandit problem, has become significantly prominent recently due to its broad applications in online advertising, recommendation systems, information retrieval, and more. However, in many real-world applications, the feedback for actions is often subject to unavoidable delays and is not immediately available to the agent. This partial observability poses a significant challenge to the existing dueling bandit literature, as it significantly affects how quickly and accurately the agent can update its policy on the fly. In this paper, we introduce and examine the biased dueling bandit problem with stochastic delayed feedback, a new practical problem that captures a more realistic and intriguing scenario involving a preference bias between the selections. We present two algorithms designed to handle situations involving delay. Our first algorithm, requiring complete delay distribution information, achieves the regret bound that is optimal for the dueling bandit problem when there is no delay. The second algorithm is tailored for situations where the distribution is unknown but the expected value of the delay is available. We provide a comprehensive regret analysis for the two proposed algorithms and then evaluate their empirical performance on both synthetic and real datasets.
|
arxiv:2408.14603
|
In this paper, replacing 'equality' by 'equality almost everywhere', we modify several terms associated with the ring of measurable functions defined on a measure space $(X, \mathcal{A}, \mu)$ and thereby study the graph-theoretic features of the modified comaximal graph, annihilator graph and weakly zero-divisor graph of the said ring. The study reveals a structural analogy between the modified versions of the comaximal and the zero-divisor graphs, which prompted us to investigate whether these two graphs are isomorphic. Introducing a quotient-like concept, we find certain subgraphs of the comaximal graph and the zero-divisor graph of $\mathcal{M}(X, \mathcal{A})$ and show that these two subgraphs are always isomorphic. Choosing $\mu$ as a counting measure, we prove that even if these two induced graphs are isomorphic, the parent graphs may not be so. However, in the case of the Lebesgue measure space on $\mathbb{R}$, we establish that the comaximal and the zero-divisor graphs are isomorphic. Observing that both the comaximal and the zero-divisor graphs of the ring $\mathcal{M}(X, \mathcal{A})$ are subgraphs of the annihilator graph of the said ring, we find equivalent conditions for their equality in terms of the partitioning of $X$ into two atoms. Moreover, the non-atomicity of the underlying measure space $X$ is characterized through graph-theoretic phenomena of the comaximal and the annihilator graphs of $\mathcal{M}(X, \mathcal{A})$.
|
arxiv:2307.02492
|
Most protostars have luminosities that are fainter than expected from steady accretion over the protostellar lifetime. The solution to this problem may lie in episodic mass accretion: prolonged periods of very low accretion punctuated by short bursts of rapid accretion. However, the timescale and amplitude of variability at the protostellar phase are almost entirely unconstrained. In "A JCMT/SCUBA-2 Transient Survey of Protostars in Nearby Star Forming Regions", we are monitoring monthly with SCUBA-2 the sub-mm emission in eight fields within nearby (< 500 pc) star forming regions to measure the accretion variability of protostars. The total survey area of ~1.6 sq. deg. includes ~105 peaks brighter than 0.5 Jy/beam (43 associated with embedded protostars or disks) and 237 peaks of 0.125-0.5 Jy/beam (50 with embedded protostars or disks). Each field has enough bright peaks for flux calibration relative to other peaks in the same field, which improves upon the nominal flux calibration uncertainties of sub-mm observations to reach a precision of ~2-3% rms, and also provides quantified confidence in any measured variability. The timescales and amplitudes of any sub-mm variations will then be converted into variations in accretion rate and subsequently used to infer the physical causes of the variability. This survey is the first dedicated survey for sub-mm variability and complements other transient surveys at optical and near-IR wavelengths, which are not sensitive to the accretion variability of deeply embedded protostars.
|
arxiv:1709.02052
|
In this note, we give an elementary proof of the lack of null controllability for the heat equation on the half line by employing the machinery of the unified transform, also known as the Fokas method. This approach also extends in a uniform way to higher dimensions and different initial-boundary value problems governed by the heat equation, suggesting a novel methodology for studying problems related to controllability.
|
arxiv:1908.11579
|
Long-slit observations of the blue compact galaxy Haro 2 have been performed to detect the Hα emission originating in the partially ionized wind outflowing at 200 km/s that had previously been detected with the Hubble Space Telescope. A shallow broadening of the Hα line wings has been observed, consistent with the existence of an expanding shell. The rotation curve shows two dips at the same systemic velocity as the nucleus. At the positions of the dips the Hα line is clearly broadened with respect to the central core. This broadening is produced by the outer layers of the expanding shell. From the position of these dips we estimate the size of the shell to be around 20'' in diameter, with a corresponding kinematical age between 5 and 6 Myr. A comparison of the Hα and Lyα profiles shows that Lyα is significantly broader than Hα, with additional emission in the red wing. We interpret this redshifted source of Lyα emission as line photons backscattered by the receding part of the expanding shell. These observations highlight the extremely high sensitivity of the Lyα line to the structure and kinematics of the interstellar medium.
|
arxiv:astro-ph/9706109
|
Crystal surfaces are sensitive to the surrounding environment: atoms left with broken bonds reconstruct to minimize the surface energy. In many cases, the surface can exhibit chemical properties distinct from the bulk. These differences are important as they control reactions and mediate thin-film growth. This is particularly true for complex oxides, where certain terminating crystal planes are polar and have a net dipole moment. For polar terminations, reconstruction of atoms on the surface is the central mechanism to avoid the so-called polar catastrophe. This adds to the complexity of the reconstruction, where charge polarization and stoichiometry govern the final surface in addition to standard thermodynamic parameters such as temperature and partial pressure. Here we present direct, in-situ determination of polar SrTiO3 (110) surfaces at temperatures up to 900 C using cross-sectional aberration-corrected scanning transmission electron microscopy (STEM). Under these conditions, we observe the coexistence of various surface structures that change as a function of temperature. As the specimen temperature is lowered, the reconstructed surface evolves due to thermal mismatch with the substrate. Periodic defects, similar to dislocations, are found in these surface structures and act to relieve the stress due to mismatch. Combining STEM observations and electron spectroscopy with density functional theory, we find a combination of lattice misfit and charge compensation responsible for stabilization. Beyond the characterization of these complex reconstructions, we have developed a general framework that opens a new pathway to simultaneously investigate the surface and near-surface regions of single crystals as a function of environment.
|
arxiv:1606.01224
|
Proof assistants are software-based tools that are used in the mechanization of proof construction and validation in mathematics and computer science, and also in certified program development. Different tools are being increasingly used in order to accelerate and simplify proof checking. Context-free language theory is a well-established area of mathematics, relevant to computer science foundations and technology. This proposal aims at formalizing parts of context-free language theory in the Coq proof assistant. This report presents the underlying theory and general characteristics of proof assistants, including Coq itself, discusses its use in relevant formalization projects, presents the current status of the implementation, and addresses related projects and the contributions of this work. The results obtained so far include the formalization of closure properties for context-free grammars (under union, concatenation and closure) and the formalization of grammar simplification. Grammar simplification is a subject of high importance in computer language processing technology as well as in formal language theory, and the formalization refers to the fact that general context-free grammars generate languages that can also be generated by simpler and equivalent context-free grammars. Namely, the useless symbol elimination, inaccessible symbol elimination, unit rules elimination and empty rules elimination operations were described and proven correct with respect to the preservation of the language generated by the original grammar.
|
arxiv:1505.00061
|
The changes in the optical transmission of thin vanadium layers upon hydrogen absorption are found to be dominated by the volume changes of the layers and not directly linked to the hydrogen concentration. This effect is demonstrated by utilising the difference in the hydrogen-induced expansion of V layers in Fe/V and Cr/V superlattices. Hydrogen resides solely in the vanadium layers in these superlattices, while occupying different sites, causing different lattice expansion. Quantitative agreement is obtained between the experimental results and first-principles density functional calculations.
|
arxiv:1812.04917
|
Constrained Hamiltonian systems admitting no gauge conditions are considered. The methods to deal with such systems are discussed and developed. As a concrete application, the relationship between the Dirac and reduced phase space quantizations is investigated for spin models belonging to the class of systems under consideration. It is traced out that the two quantization methods may give similar, or essentially different, physical results, and, moreover, a class of constrained systems which can be quantized only by the Dirac method is discussed. A possible interpretation of the gauge degrees of freedom is given.
|
arxiv:hep-th/9306017
|
We explore the new technique developed recently in \cite{Rosenhaus:2014woa} and suggest a correspondence between the $n$-point correlation functions on a spacetime with conical defects and the $(n+1)$-point correlation functions in regular Minkowski spacetime. This correspondence suggests a new systematic way to evaluate the correlation functions on spacetimes with conical defects. We check the correspondence for the expectation value of a scalar operator and of the energy-momentum tensor in a conformal field theory and obtain exact agreement with the earlier derivations for the cosmic string spacetime. We then use this correspondence to perform the computations for a generic scalar operator and a conserved vector current. For a generic unitary field theory we compute the expectation value of the energy-momentum tensor using the known spectral representation of the $2$-point correlators of the stress-energy tensor in Minkowski spacetime.
|
arxiv:1406.2512
|
Understanding expressions that refer to the physical world is crucial for human-assisting systems in the real world, such as robots that must perform actions expected by users. In real-world reference resolution, a system must ground the verbal information that appears in user interactions to the visual information observed in egocentric views. To this end, we propose a multimodal reference resolution task and construct a Japanese conversation dataset for real-world reference resolution (J-CRe3). Our dataset contains egocentric video and dialogue audio of real-world conversations between two people acting as a master and an assistant robot at home. The dataset is annotated with crossmodal tags between phrases in the utterances and the object bounding boxes in the video frames. These tags include indirect reference relations, such as predicate-argument structures and bridging references, as well as direct reference relations. We also constructed an experimental model and clarified the challenges in multimodal reference resolution tasks.
|
arxiv:2403.19259
|
The cross or soft anomalous dimension matrix describes the renormalization of Wilson loops with a self-intersection and is an important object in the study of infrared divergences of scattering amplitudes. In this paper it is studied for the Maldacena-Wilson loop in N = 4 supersymmetric Yang-Mills theory and Euclidean kinematics. We consider both the strong-coupling description in terms of minimal surfaces in AdS5 as well as the weak-coupling side up to the two-loop level. In either case, the coefficients of the cross anomalous dimension matrix can be expressed in terms of the cusp anomalous dimension. The strong-coupling description displays a Gross-Ooguri phase transition, and we argue that the cross anomalous dimension is an interesting object to study in an integrability-based approach.
|
arxiv:1805.06448
|
In traditional thermodynamical and statistical-mechanical approaches one has (some) detailed knowledge of the principles governing the microdynamics of a system. However, in many instances we may not have a Hamiltonian or good information about the degrees of freedom (or, in a roughly complementary sense, the redundancies in observed data), but on the other hand we have access to a preponderance of raw data. The development outlined here is an attempt to apply the principles of thermodynamics and statistical mechanics generically to such cases.
|
arxiv:cond-mat/0402325
|
The existence of coherent quasiparticles near the Fermi energy in the low-temperature state of high-temperature superconductors has been well established by angle-resolved photoemission spectroscopy (ARPES). This technique directly probes the momentum-resolved electronic excitation spectrum of the CuO$_2$ planes. We present a study of close-to-optimally-doped La$_{1.83}$Sr$_{0.17}$CuO$_4$ in the superconducting state and report an abrupt change in the quasiparticle spectral function as we follow the dispersion of the ARPES signal from the Fermi energy up to 0.6 eV. The interruption in the quasiparticle dispersion separates coherent quasiparticle peaks at low energies from broad incoherent excitations at high energies. We find that the boundary between these low-energy and high-energy features exhibits a cosine-shaped momentum dependence, reminiscent of the superconducting d-wave gap. Further intriguing similarities between characteristics of the incoherent excitations and quasiparticle properties (lifetime, Fermi arcs) suggest a close relation between the electronic response at high and low energies in cuprate superconductors.
|
arxiv:cond-mat/0610880
|
Software defects heavily affect software's functionality and may cause huge losses. Recently, many AI-based approaches have been proposed to detect defects, which can be divided into two categories: software defect prediction and automatic unit test generation. While these approaches have made great progress in software defect detection, they still have several limitations in practical application, including the low confidence of prediction models and the inefficiency of unit testing models. To address these limitations, we propose a WYSIWYG (i.e., What You See Is What You Get) approach: attention-based self-guided automatic unit test generation (AUGER), which contains two stages: defect detection and error triggering. In the former stage, AUGER first detects the proneness of defects. Then, in the latter stage, it guides the generation of unit tests for triggering such an error with the help of critical information obtained in the former stage. To evaluate the effectiveness of AUGER, we conduct a large-scale experiment comparing with the state-of-the-art (SOTA) approaches on widely used datasets (i.e., Bears, Bugs.jar, and Defects4J). AUGER makes great improvements of 4.7% to 35.3% and 17.7% to 40.4% in terms of F1-score and precision in defect detection, and can trigger 23 to 84 more errors than the SOTAs in unit test generation. Besides, we also conduct a further study to verify the generalization in practical usage by collecting a new dataset from real-world projects.
|
arxiv:2412.00828
|
…earliest known treatise on Sanskrit prosody. He also presents a numerical system by adding one to the sum of place values. Pingala's work also includes material related to the Fibonacci numbers, called matrameru. Indian astronomer and mathematician Aryabhata (476–550), in his Aryabhatiya (499), introduced the sine function in trigonometry and the number 0. In 628, Brahmagupta suggested that gravity was a force of attraction. He also lucidly explained the use of zero as both a placeholder and a decimal digit, along with the Hindu–Arabic numeral system now used universally throughout the world. Arabic translations of the two astronomers' texts were soon available in the Islamic world, introducing what would become Arabic numerals to the Islamic world by the 9th century. Narayana Pandita (1340–1400) was an Indian mathematician. Plofker writes that his texts were the most significant Sanskrit mathematics treatises after those of Bhaskara II, other than those of the Kerala school. He wrote the Ganita Kaumudi (lit. "Moonlight of Mathematics") in 1356 about mathematical operations. The work anticipated many developments in combinatorics. Between the 14th and 16th centuries, the Kerala school of astronomy and mathematics made significant advances in astronomy and especially mathematics, including fields such as trigonometry and analysis. In particular, Madhava of Sangamagrama led advancement in analysis by providing the infinite and Taylor series expansions of some trigonometric functions and pi approximation. Parameshvara (1380–1460) presents a case of the mean value theorem in his commentaries on Govindasvami and Bhaskara II. The Yuktibhasa was written by Jyeshtadeva in 1530.

==== Astronomy ====

The first textual mention of astronomical concepts comes from the Vedas, religious literature of India.
According to Sarma (2008): "One finds in the Rigveda intelligent speculations about the genesis of the universe from nonexistence, the configuration of the universe, the spherical self-supporting earth, and the year of 360 days divided into 12 equal parts of 30 days each with a periodical intercalary month." The first 12 chapters of the Siddhanta Shiromani, written by Bhaskara in the 12th century, cover topics such as: mean longitudes of the planets; true longitudes of the planets; the three problems
|
https://en.wikipedia.org/wiki/History_of_science
|
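The passage above credits Madhava of Sangamagrama with infinite-series expansions of trigonometric functions and an approximation of pi. One well-known series attributed to his school is pi = sqrt(12) · Σ (-1/3)^k / (2k+1); a short numerical sketch:

```python
# Madhava-school series for pi:
#   pi = sqrt(12) * sum_{k>=0} (-1/3)^k / (2k+1)
# Unlike the plain Leibniz series, the (1/3)^k factor makes this
# converge geometrically.

import math

def madhava_pi(n_terms: int) -> float:
    total = 0.0
    for k in range(n_terms):
        total += (-1.0 / 3.0) ** k / (2 * k + 1)
    return math.sqrt(12.0) * total

print(madhava_pi(30))  # already accurate to roughly machine precision
```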
Ultra-reliable and low-latency communications (URLLC) play a major role in 5G networks for mission-critical applications. Sparse vector coding (SVC) appears as a strong candidate for future URLLC networks by enabling superior performance in terms of bit error rate (BER). SVC exploits the virtual digital domain (VDD) and compressed sensing (CS) algorithms to encode and decode its information through active symbol indices. In this paper, first, a clever encoding/decoding algorithm is proposed for the SVC scheme, which allows the use of all possible activation patterns (APs), resulting in increased spectral efficiency. Second, a novel solution is proposed to convey additional information bits by further exploiting index modulation (IM) for the codebooks of the SVC scheme. Computer simulation results reveal that our low-complexity algorithm and novel IM solution provide not only a superior BER performance but also an increase in the number of bits conveyed by IM compared to the ordinary SVC approach.
|
arxiv:2004.08330
|
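The gain from using all activation patterns, as described in the abstract above, can be illustrated with simple combinatorics: choosing which k of n indices are active supports floor(log2(C(n, k))) information bits. The parameters below are hypothetical, not taken from the paper.

```python
# Counting index-modulation bits for a sparse-vector-coding-style scheme:
# a length-n codeword with k active indices has C(n, k) activation
# patterns, i.e. floor(log2(C(n, k))) index bits per codeword.

from math import comb, floor, log2

def index_bits(n: int, k: int) -> int:
    """Bits conveyed by the choice of which k of n indices are active."""
    return floor(log2(comb(n, k)))

print(comb(32, 2))        # 496 possible activation patterns
print(index_bits(32, 2))  # 8 index bits, since 2^8 = 256 <= 496 < 2^9
```

A scheme restricted to a power-of-two subset of patterns uses only 2^8 = 256 of the 496 patterns here, which is why admitting all APs can raise spectral efficiency.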
Understanding and modeling the popularity of user-generated content (UGC) short videos on social media platforms presents a critical challenge with broad implications for content creators and recommendation systems. This study delves deep into the intricacies of predicting engagement for newly published videos with limited user interactions. Surprisingly, our findings reveal that mean opinion scores from previous video quality assessment datasets do not strongly correlate with video engagement levels. To address this, we introduce a substantial dataset comprising 90,000 real-world UGC short videos from Snapchat. Rather than relying on view count, average watch time, or rate of likes, we propose two metrics: normalized average watch percentage (NAWP) and engagement continuation rate (ECR) to describe the engagement levels of short videos. Comprehensive multi-modal features, including visual content, background music, and text data, are investigated to enhance engagement prediction. With the proposed dataset and two key metrics, our method demonstrates its ability to predict the engagement of short videos purely from video content.
|
arxiv:2410.00289
|
a random hash function $h$ is $\varepsilon$-minwise if for any set $s$, $|s| = n$, and element $x \in s$, $\Pr[h(x) = \min h(s)] = (1 \pm \varepsilon)/n$. minwise hash functions with low bias $\varepsilon$ have widespread applications within similarity estimation. hashing from a universe $[u]$, the twisted tabulation hashing of P\v{a}tra\c{s}cu and Thorup [SODA'13] makes $c = O(1)$ lookups in tables of size $u^{1/c}$. twisted tabulation was invented to get good concentration for hashing based sampling. here we show that twisted tabulation yields $\tilde O(1/u^{1/c})$-minwise hashing. in the classic independence paradigm of Wegman and Carter [FOCS'79], $\tilde O(1/u^{1/c})$-minwise hashing requires $\Omega(\log u)$-independence [Indyk SODA'99]. P\v{a}tra\c{s}cu and Thorup [STOC'11] had shown that simple tabulation, using the same space and lookups, yields $\tilde O(1/n^{1/c})$-minwise independence, which is good for large sets, but useless for small sets. our analysis uses some of the same methods, but is much cleaner, bypassing a complicated induction argument.
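As a hedged illustration of the $\varepsilon$-minwise property defined above (not of twisted tabulation itself), the sketch below estimates $\Pr[h(x) = \min h(S)]$ for an idealized fully random hash, for which the probability is exactly $1/n$; the function names are our own.

```python
import random

def random_hash(seed, universe):
    # An idealized fully random hash: an independent uniform value per key.
    # This is the zero-bias minwise ideal that tabulation schemes approximate;
    # it is NOT the paper's twisted-tabulation construction.
    rng = random.Random(seed)
    return {y: rng.random() for y in universe}

def estimate_min_prob(S, x, trials=5000):
    # Monte Carlo estimate of Pr[h(x) = min h(S)] over random hash draws.
    hits = sum(
        1 for t in range(trials)
        if (h := random_hash(t, S))[x] == min(h.values())
    )
    return hits / trials

S = list(range(10))
p_hat = estimate_min_prob(S, x=3)
# For a truly random hash this should be close to 1/|S| = 0.1.
```

A biased family would show a systematic deviation of `p_hat` from $1/n$ beyond sampling noise.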
|
arxiv:1404.6724
|
with the increasing availability of optical and synthetic aperture radar (sar) images thanks to the sentinel constellation, and the explosion of deep learning, new methods have emerged in recent years to tackle the reconstruction of optical images that are impacted by clouds. in this paper, we focus on the evaluation of convolutional neural networks that use jointly sar and optical images to retrieve the missing contents in one single polluted optical image. we propose a simple framework that eases the creation of datasets for the training of deep nets targeting optical image reconstruction, and for the validation of machine learning based or deterministic approaches. these methods are quite different in terms of input image constraints, and comparing them is a problematic task not addressed in the literature. we show how space partitioning data structures help to query samples in terms of cloud coverage, relative acquisition date, pixel validity and relative proximity between sar and optical images. we generate several datasets to compare the reconstructed images from networks that use a single pair of sar and optical images, versus networks that use multiple pairs, and a traditional deterministic approach performing interpolation in the temporal domain.
|
arxiv:2204.00424
|
some n ∈ ℕ. for example, the definition of a sequence of real numbers (a_n) converging to some limit a is: for each positive number ε, there exists a natural number N such that for all n > N, |a_n - a| < ε. when the term "eventually" is used as a shorthand for "there exists a natural number N such that for all n > N", the convergence definition can be restated more simply as: for each positive number ε > 0, eventually |a_n - a| < ε. here, notice that the set of natural numbers that do not satisfy this property is a finite set; that is, the set is empty or has a maximum element. as a result, the use of "eventually" in this case is synonymous with the expression "for all but a finite number of terms", a special case of the expression "for almost all terms" (although "almost all" can also be used to allow for infinitely many exceptions as well). at the basic level, a sequence can be thought of as a function with natural numbers as its domain, and the notion of "eventually" applies to functions on more general sets as well, in particular to those that have an ordering with no greatest element. more specifically, if S is such a set and there is an element s in S such that the function f is defined for all elements greater than s, then f is said to have some property eventually if there is an element x_0 such that whenever x > x_0, f(x) has that property.
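The ε-N definition above can be checked mechanically for a concrete sequence. The sketch below (our own illustration, using a_n = 1/n with limit 0) finds a witness N for a given ε and verifies the tail condition on a finite sample.

```python
def witness_N(eps):
    # Smallest N such that |a_n - 0| < eps for all n > N, for a_n = 1/n.
    # Since 1/n is decreasing, it suffices that 1/(N+1) < eps.
    N = 1
    while 1.0 / (N + 1) >= eps:
        N += 1
    return N

eps = 0.01
N = witness_N(eps)            # for eps = 0.01 this gives N = 100, since 1/101 < 0.01
tail_ok = all(abs(1.0 / n) < eps for n in range(N + 1, N + 1001))
```

In the "eventually" shorthand: eventually |1/n| < 0.01, and the finitely many exceptions are exactly n ≤ 100.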
|
https://en.wikipedia.org/wiki/Eventually_(mathematics)
|
it is known in thin-film deposition that the density of nucleated clusters $n$ varies with the deposition rate $r$ as a power law, $n \sim r^\alpha$. the exponent $\alpha$ is a function of the critical nucleus size $i$ in a way that changes with the aggregation-limiting process active in a given system. we extend here to generic aggregation-limiting processes the derivation of the analytical capture-zone distribution function $p_\beta(s) = a_\beta s^\beta \exp(-b_\beta s^2)$ of Pimpinelli and Einstein [Phys. Rev. Lett. 99, 226102 (2007)]. we show that the exponent $\beta$ is generally related to the critical nucleus size $i$ and to the exponent $\alpha$ by the equality $\alpha(2\beta + d_f - 2) = 2i$, where $d_f$ is the fractal dimensionality of the clusters. this remarkable result allows one to measure $i$ with no a priori knowledge of the actual aggregation mechanism. we apply this equality to measuring the critical nucleus size in pentacene deposition on mica.
|
arxiv:1312.4412
|
the progenitor systems of type - ia supernovae ( sne ia ) are yet unknown. the collisional - triple sn ia progenitor model posits that sne ia result from head - on collisions of binary white dwarfs ( wds ), driven by dynamical perturbations by the tertiary stars in mild - hierarchical triple systems. to reproduce the galactic sn ia rate, at least ~ 30 - 55 per cent of all wds would need to be in triple systems of a specific architecture. we test this scenario by searching the gaia dr2 database for the postulated progenitor triples. within a volume out to 120 pc, we search around gaia - resolved double wds with projected separations up to 300 au, for physical tertiary companions at projected separations out to 9000 au. at 120 pc, gaia can detect faint low - mass tertiaries down to the bottom of the main sequence and to the coolest wds. around 27 double wds, we identify zero tertiaries at such separations, setting a 95 per cent confidence upper limit of 11 per cent on the fraction of binary wds that are part of mild hierarchical triples of the kind required by the model. as only a fraction ( likely ~ 10 per cent ) of all wds are in < 300 au wd binaries, the potential collisional - triple progenitor population appears to be at least an order of magnitude ( and likely several ) smaller than required by the model.
|
arxiv:1905.00032
|
replacing $\{0\}$ by the whole ideal of infinitesimals yields a weaker notion of \emph{archimedean element} that we call \emph{quasiarchimedean}. it is known that semisimple mv-algebras with compact maximal spectrum (in the co-zariski topology) are exactly the hyperarchimedean algebras. we characterise all the algebras with compact maximal spectrum as being \emph{quasihyperarchimedean} mv-algebras, which in a sense are non-semisimple hyperarchimedean algebras. we develop some basic facts in the theory of mv-algebras along the lines of algebraic geometry, where infinitesimals play the role of nilpotent elements, and prove an mv-algebra version of Hilbert's nullstellensatz. finally we consider the relations (some previously unpublished) between several elementary classes of mv-algebras in terms of the ideals that characterise them, and present elementary (first order with denumerable disjunctions) proofs in place of the set-theoretical ones usually found in the literature.
|
arxiv:1602.05204
|
... →(2) miiii →(3) mui →(2) muiui →(1) muiuiu →(2) muiuiuuiuiu →(4) muiuiiuiu → ... in light of this, one might wonder whether it is possible to convert mi into mu, using only these four transformation rules. one could spend many hours applying these transformation rules to strings. however, it might be quicker to find a property that is invariant to all rules (that is, not changed by any of them), and that demonstrates that getting to mu is impossible. by looking at the puzzle from a logical standpoint, one might realize that the only way to get rid of any i's is to have three consecutive i's in the string. this makes the following invariant interesting to consider: the number of i's in the string is not a multiple of 3. this is an invariant to the problem, if for each of the transformation rules the following holds: if the invariant held before applying the rule, it will also hold after applying it. looking at the net effect of applying the rules on the number of i's and u's, one can see this actually is the case for all rules: the table above shows clearly that the invariant holds for each of the possible transformation rules, which means that whichever rule one picks, at whatever state, if the number of i's was not a multiple of three before applying the rule, then it will not be afterwards either. given that there is a single i in the starting string mi, and one is not a multiple of three, one can then conclude that it is impossible to go from mi to mu (as the number of i's will never be a multiple of three). = = invariant set = = a subset s of the domain u of a mapping t : u → u is an invariant set under the mapping when $x \in s \iff t(x) \in s$. the elements of s are not necessarily fixed, even though the set s is fixed in the power set of u. (some authors use the terminology setwise invariant, vs. pointwise invariant, to distinguish between these cases.)
for example, a circle is an invariant subset of the plane under a rotation about the circle's center. further, a conical surface is invariant as a set under a homothety of space. an invariant set of an operation t is also said to be stable under t.
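The invariant argument above can be verified mechanically. The sketch below (our own illustration of the MIU system) enumerates strings reachable from "MI" up to a small depth and checks that the number of 'I's is never a multiple of three, so "MU" is never reached.

```python
def successors(s):
    # The four MIU transformation rules, applied everywhere they fit.
    out = set()
    if s.endswith("I"):                      # rule 1: xI -> xIU
        out.add(s + "U")
    out.add(s[0] + s[1:] * 2)                # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):              # rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):              # rule 4: UU -> (nothing)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def invariant(s):
    # The invariant from the text: the count of 'I's is not a multiple of 3.
    return s.count("I") % 3 != 0

seen = {"MI"}
frontier = {"MI"}
for _ in range(6):                           # bounded breadth-first search
    frontier = {t for s in frontier for t in successors(s)
                if t not in seen and len(t) < 30}
    seen |= frontier

all_hold = all(invariant(s) for s in seen)   # invariant holds everywhere
reached_mu = "MU" in seen                    # and MU is never reached
```

The depth and length bounds only keep the search finite; the invariant argument itself covers all depths.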
|
https://en.wikipedia.org/wiki/Invariant_(mathematics)
|
we study integrable models for electrons in metals when the single particle spectrum is discrete. the electron - electron interactions are bcs - like pairing, coulomb repulsion, and spin exchange coupling. these couplings are, in general, nonuniform in the sense that they depend on the levels occupied by the interacting electrons. by using the realization of spin 1 / 2 - operators in terms of electrons the models describe spin 1 / 2 models with nonuniform long range interactions and external magnetic field. the integrability and the exact solution arise since the model hamiltonians can be constructed in terms of gaudin models. uniform pairing and the resulting orthodox model correspond to an isotropic limit of the gaudin hamiltonians. we discuss possible applications of this model to a single grain and to a system of few interacting grains.
|
arxiv:cond-mat/0105537
|
blockchain programs ( also known as smart contracts ) manage valuable assets like cryptocurrencies and tokens, and implement protocols in domains like decentralized finance ( defi ) and supply - chain management. these types of applications require a high level of security that is hard to achieve due to the transparency of public blockchains. numerous tools support developers and auditors in the task of detecting weaknesses. as a young technology, blockchains and utilities evolve fast, making it challenging for tools and developers to keep up with the pace. in this work, we study the robustness of code analysis tools and the evolution of weakness detection on a dataset representing six years of blockchain activity. we focus on ethereum as the crypto ecosystem with the largest number of developers and deployed programs. we investigate the behavior of single tools as well as the agreement of several tools addressing similar weaknesses. our study is the first that is based on the entire body of deployed bytecode on ethereum ' s main chain. we achieve this coverage by considering bytecodes as equivalent if they share the same skeleton. the skeleton of a bytecode is obtained by omitting functionally irrelevant parts. this reduces the 48 million contracts deployed on ethereum up to january 2022 to 248328 contracts with distinct skeletons. for bulk execution, we utilize the open - source framework smartbugs that facilitates the analysis of solidity smart contracts, and enhance it to accept also bytecode as the only input. moreover, we integrate six further tools for bytecode analysis. the execution of the 12 tools included in our study on the dataset took 30 cpu years. while the tools report a total of 1307486 potential weaknesses, we observe a decrease in reported weaknesses over time, as well as a degradation of tools to varying degrees.
|
arxiv:2303.10517
|
we investigate the validity of the phragm\'en-lindel\"of principle for a class of elliptic equations with a potential, posed on infinite graphs. consequently, we get uniqueness, in the class of solutions satisfying a suitable growth condition at infinity. we suppose that the {\it outer degree (or outer curvature)} of the graph is bounded from above, and we allow the potential to go to zero at infinity in a controlled way. finally, we discuss the optimality of the conditions on the potential and on the outer degree on special graphs.
|
arxiv:2406.06505
|
a canonical formulation of the n = 1 supergravity theory containing the topological nieh-yan term in its lagrangian density is developed. the constraints are analysed without choosing any gauge. in the time gauge, the theory is shown to be described in terms of real su(2) variables.
|
arxiv:0909.4850
|
we discuss several implications of $r^4$ couplings in m theory when compactified on calabi-yau (cy) manifolds. in particular, these couplings can be predicted by supersymmetry from the mixed gauge-gravitational chern-simons couplings in five dimensions and are related to the one-loop holomorphic anomaly in four-dimensional n = 2 theories. we find a new contribution to the einstein term in five dimensions proportional to the euler number of the internal cy threefold, which corresponds to a one-loop correction of the hypermultiplet geometry. this correction is reproduced by a direct computation in type ii string theories. finally, we discuss a universal non-perturbative correction to the type iib hyper-metric.
|
arxiv:hep-th/9707013
|
the present short note is simply intended to communicate that i have analytically diagonalized the bogoliubov truncated hamiltonian $h_c$~\cite{bogo1,bogo2}, in an interacting bosonic gas. this is the natural continuation of my work~\cite{ms}, now denoted as (i), where the diagonalization was performed only in the subspace corresponding to zero momentum collective excitations (ce).
|
arxiv:1610.07168
|
in this short note we prove an analogue of auslander correspondence for exact dg categories whose $h^0$-category is $0$-auslander in the sense of gorsky--nakaoka--palu.
|
arxiv:2306.15958
|
federated learning (fl) serves as a data privacy-preserved machine learning paradigm, and realizes the collaborative model trained by distributed clients. to accomplish an fl task, the task publisher needs to pay financial incentives to the fl server and the fl server offloads the task to the contributing fl clients. it is challenging to design proper incentives for the fl clients due to the fact that the task is privately trained by the clients. this paper aims to propose a contract theory based fl task training model towards minimizing the incentive budget subject to clients being individually rational (ir) and incentive compatible (ic) in each fl training round. we design a two-dimensional contract model by formally defining two private types of clients, namely data quality and computation effort. to effectively aggregate the trained models, a contract-based aggregator is proposed. we analyze the feasible and optimal contract solutions to the proposed contract model. experimental results show that the generalization accuracy of the fl tasks can be improved by the proposed incentive mechanism where contract-based aggregation is applied.
|
arxiv:2108.05568
|
the galactose network is a complex system responsible for galactose metabolism. it has been extensively studied experimentally and mathematically at the unicellular level to broaden our understanding of its regulatory mechanisms at higher order species. although the key molecular players involved in the metabolic and regulatory processes underlying this system have been known for decades, their interactions and chemical kinetics remain incompletely understood. mathematical models can provide an alternative method to study the dynamics of this network from a quantitative and a qualitative perspective. here, we employ such approaches to unravel the main properties of the galactose network, including equilibrium binary and temporal responses, as a way to decipher its adaptation to actively - changing inputs. we combine the two main components of the network ; namely, the genetic branch which allows for bistable responses, and a metabolic branch, encompassing the relevant metabolic processes and glucose repressive reactions. we use both computational tools to estimate model parameters based on published experimental data, as well as bifurcation analysis to decipher the properties of the system in various parameter regimes. our model analysis reveals that the interplay between the inducer ( galactose ) and the repressor ( glucose ) creates the bistability regime which dictates the temporal responses of the system. based on the same bifurcation techniques, we can also explain why the system is robust to genetic mutations and molecular instabilities. these findings may provide experimentalists with a theoretical framework upon which they can determine how the galactose network functions under various conditions.
|
arxiv:1602.03862
|
recent observations suggest that dense gas clouds can survive even in hot galactic winds. here we show that the inclusion of turbulent densities with different statistical properties has significant effects on the evolution of wind - swept clouds. we investigate how the initial standard deviation of the log - normal density field influences the dynamics of quasi - isothermal clouds embedded in supersonic winds. we compare uniform, fractal solenoidal, and fractal compressive cloud models in both 3d and 2d hydrodynamical simulations. we find that the processes of cloud disruption and dense gas entrainment are functions of the initial density distribution in the cloud. fractal clouds accelerate, mix, and are disrupted earlier than uniform clouds. within the fractal cloud sample, compressive clouds retain high - density nuclei, so they are more confined, less accelerated, and have lower velocity dispersions than their solenoidal counterparts. compressive clouds are also less prone to kelvin - helmholtz and rayleigh - taylor instabilities, so they survive longer than solenoidal clouds. by comparing the cloud properties at the destruction time, we find that dense gas entrainment is more effective in uniform clouds than in either of the fractal clouds, and it is more effective in solenoidal than in compressive models. in contrast, mass loading into the wind is more efficient in compressive cloud models than in uniform or solenoidal models. overall, wide density distributions lead to inefficient entrainment, but they facilitate mass loading and favour the survival of very dense gas in hot galactic winds.
|
arxiv:1901.06924
|
we parametrise the gauge-invariant ideals of the toeplitz-nica-pimsner algebra of a strong compactly aligned product system over $\mathbb{z}_+^d$ by using $2^d$-tuples of ideals of the coefficient algebra that are invariant, partially ordered, and maximal. we give an algebraic characterisation of maximality that allows the iteration of a $2^d$-tuple to the maximal one inducing the same gauge-invariant ideal. the parametrisation respects inclusions and intersections, while we characterise the join operation on the $2^d$-tuples that renders the parametrisation a lattice isomorphism. the problem of the parametrisation of the gauge-invariant ideals is equivalent to the study of relative cuntz-nica-pimsner algebras, for which we provide a generalised gauge-invariant uniqueness theorem. we focus further on equivariant quotients of the cuntz-nica-pimsner algebra and provide applications to regular product systems, c*-dynamical systems, strong finitely aligned higher-rank graphs, and product systems on finite frames. in particular, we provide a description of the parametrisation for (possibly non-automorphic) c*-dynamical systems and row-finite higher-rank graphs, which squares with known results when restricting to crossed products and to locally convex row-finite higher-rank graphs.
|
arxiv:2310.04175
|
a discrete laplace transform and its inversion formula are obtained by using a quadrature of the continuous fourier transform which is given in terms of hermite polynomials and its zeros. this approach yields a convergent discrete formula for the two - sided laplace transform if the function to be transformed falls off rapidly to zero and satisfy certain conditions of integrability, achieving convergence also for singular functions. the inversion formula becomes a quadrature formula for the bromwich integral. this procedure also yields a quadrature formula for the mellin transform and its corresponding inversion formula that can be generalized straightforwardly for functions of several variables.
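The abstract's idea of computing an integral transform through a Hermite-based quadrature can be illustrated with standard Gauss-Hermite nodes and weights. The sketch below is a generic illustration, not the paper's specific discrete Laplace transform: it approximates the Fourier transform of a Gaussian, for which the exact closed form is known.

```python
import numpy as np

# Gauss-Hermite quadrature: integral of exp(-x^2) * g(x) dx ~ sum_i w_i g(x_i),
# where the nodes x_i are the zeros of the Hermite polynomial H_64.
nodes, weights = np.polynomial.hermite.hermgauss(64)

def fourier_of_gaussian(k):
    # Approximates F(k) = \int exp(-x^2) exp(-1j*k*x) dx via the quadrature.
    return np.sum(weights * np.exp(-1j * k * nodes))

k = 1.7
approx = fourier_of_gaussian(k)
exact = np.sqrt(np.pi) * np.exp(-k ** 2 / 4)  # known closed form
err = abs(approx - exact)
```

Because the factor multiplying the Gaussian weight is entire, the quadrature error decays very rapidly with the number of nodes, the kind of convergence the paper exploits for rapidly decaying functions.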
|
arxiv:0704.2842
|
we study the island universe model, in which initially the universe is in a cosmological constant sea, then the local quantum fluctuations violating the null energy condition create the islands of matter, some of which might correspond to our observable universe. we examine the possibility that the island universe model is regarded as an alternative scenario of the origin of the observable universe.
|
arxiv:astro-ph/0506072
|
we study the existence of solutions to the equation $-\Delta_p u + g(x,u) = \mu$ when $g(x,.)$ is a nondecreasing function and $\mu$ a measure. we characterize the good measures, i.e. the ones for which the problem has a renormalized solution. we study particularly the cases where $g(x,u) = |x|^{\beta}|u|^{q-1}u$ and $g(x,u) = |x|^{\tau}\mathrm{sgn}(u)(e^{\tau|u|^{\lambda}}-1)$. the results state that a measure is good if it is absolutely continuous with respect to appropriate lorentz-bessel capacities.
|
arxiv:1212.6314
|
the unification of quantum mechanics and gravity remains as one of the primary challenges of present - day physics. quantum - gravity - inspired phenomenological models offer a window to explore potential aspects of quantum gravity including qualitatively new behaviour that can be experimentally tested. one such phenomenological model is the generalized uncertainty principle ( gup ), which predicts a modified heisenberg uncertainty relation and a deformed canonical commutator. it was recently shown that optomechanical systems offer significant promise to put stringent experimental bounds on such models. in this paper, we introduce a scheme to increase the sensitivity of these experiments with an extended sequence of pulsed optomechanical interactions. we also analyze the effects of optical phase noise and optical loss and present a strategy to mitigate such deleterious effects.
|
arxiv:1610.06796
|
in the field of fault-tolerant quantum computing, continuous-variable systems can be utilized to protect quantum information from noise through the use of bosonic codes. these codes map qubit-type quantum information onto the larger bosonic hilbert space, and can be divided into two main categories: translational-symmetric codes, such as gottesman-kitaev-preskill (gkp) codes, and rotational-symmetric codes, including cat and binomial codes. the relationship between these families of codes has not yet been fully understood. we present an iterative protocol for converting between two instances of these codes, gkp qunaught states and four-fold symmetric binomial states corresponding to a zero-logical encoded qubit, using only gaussian operations. this conversion demonstrates the potential for universality of binomial states for all-gaussian quantum computation and provides a new method for the heralded preparation of gkp states. through numerical simulation, we obtain gkp qunaught states with a fidelity of over 98% and a probability of approximately 3.14%, after only two steps of our iterative protocol, though higher fidelities can be achieved with additional iterations at the cost of lower success probabilities.
|
arxiv:2301.10030
|
for many physical systems the transition from a stationary solution to sustained small amplitude oscillations corresponds to a hopf bifurcation. for systems involving impacts, thresholds, switches, or other abrupt events, however, this transition can be achieved in fundamentally different ways. this paper reviews 20 such ` hopf - like ' bifurcations for two - dimensional ode systems with state - dependent switching rules. the bifurcations include boundary equilibrium bifurcations, the collision or change of stability of equilibria or folds on switching manifolds, and limit cycle creation via hysteresis or time delay. in each case a stationary solution changes stability and possibly form, and emits one limit cycle. each bifurcation is analysed quantitatively in a general setting : we identify quantities that govern the onset, criticality, and genericity of the bifurcation, and determine scaling laws for the period and amplitude of the resulting limit cycle. complete derivations based on asymptotic expansions of poincare maps are provided. many of these are new, done previously only for piecewise - linear systems. the bifurcations are collated and compared so that dynamical observations can be matched to geometric mechanisms responsible for the creation of a limit cycle. the results are illustrated with impact oscillators, relay control, automated balancing control, predator - prey systems, ocean circulation, and the mckean and wilson - cowan neuron models.
|
arxiv:1905.01329
|
we demonstrate, for the first time, experimental over-the-fiber training of transmitter neural networks (nns) using reinforcement learning. optical back-to-back training of a novel nn-based digital predistorter outperforms arcsine-based predistortion with up to 60% bit-error-rate reduction.
|
arxiv:2106.04934
|
a subcomplex $\mathcal{x}$ of a cell complex $\mathcal{c}$ is called \emph{rigid} with respect to another cell complex $\mathcal{c}'$ if every injective simplicial map $\lambda : \mathcal{x} \rightarrow \mathcal{c}'$ has a unique extension to an injective simplicial map $\phi : \mathcal{c} \rightarrow \mathcal{c}'$. we say that a cell complex exhibits \emph{finite rigidity} if it contains a finite rigid subcomplex. given a surface with marked points, its \textit{flip graph} and \textit{arc complex} are simplicial complexes indexing the triangulations and the arcs between marked points, respectively. in this paper, we leverage the fact that the flip graph can be embedded in the arc complex as its dual to show that finite rigidity of the flip graph implies finite rigidity of the arc complex. thus, a recent result of the second author on the finite rigidity of the flip graph implies finite rigidity of the arc complex for a broad class of surfaces. notably, this includes surfaces with boundary, a setting where finite rigidity of the arc complex was previously unknown.
|
arxiv:2310.04211
|
we investigate high - harmonic generation in graphene heterostructures consisting of metallic nanoribbons separated from a graphene sheet by either a few - nanometer layer of aluminum oxide or an atomic monolayer of hexagonal boron nitride. the nanoribbons amplify the near - field at the graphene layer relative to the externally applied pumping, thus allowing us to observe third - and fifth - harmonic generation in the carbon monolayer at modest pump powers in the mid - infrared. we study the dependence of the nonlinear signals on the ribbon width and spacer thickness, as well as pump power and polarization, and demonstrate enhancement factors relative to bare graphene reaching 1600 and 4100 for third - and fifth - harmonic generation, respectively. our work supports the use of graphene heterostructures to selectively enhance specific nonlinear processes of interest, an essential capability for the design of nanoscale nonlinear devices.
|
arxiv:2203.14644
|
we have produced a quantum degenerate li-6 fermi gas with up to 7 × 10^7 atoms, an improvement by a factor of fifty over all previous experiments with degenerate fermi gases. this was achieved by sympathetic cooling with bosonic na-23 in the f = 2, upper hyperfine ground state. we have also achieved bose-einstein condensation of f = 2 sodium atoms by direct evaporation.
|
arxiv:cond-mat/0306050
|
we obtain the classification of certain global bounded solutions for semilinear nonlocal equations of the type $$\triangle^s u = w'(u)$$ in $\mathbb{r}^n$, with $s \in (1/2, 1)$, where $w$ is a double well potential.
|
arxiv:1610.09295
|
the intention of the paper is to move a step towards a classification of network topologies that exhibit periodic quantum dynamics. we show that the evolution of a quantum system, whose hamiltonian is identical to the adjacency matrix of a circulant graph, is periodic if and only if all eigenvalues of the graph are integers ( that is, the graph is integral ). motivated by this observation, we focus on relevant properties of integral circulant graphs. specifically, we bound the number of vertices of integral circulant graphs in terms of their degree, characterize bipartiteness and give exact bounds for their diameter. additionally, we prove that circulant graphs with odd order do not allow perfect state transfer.
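The integrality criterion in the abstract is easy to test numerically, since the eigenvalues of a circulant adjacency matrix are the DFT of its first row. The sketch below (helper names are our own) checks whether a given circulant graph is integral.

```python
import numpy as np

def circulant_eigs(n, S):
    # Eigenvalues of the circulant adjacency matrix on n vertices with
    # symmetric connection set S: they equal the DFT of the first row.
    row = np.zeros(n)
    for s in S:
        row[s % n] = 1.0
        row[-s % n] = 1.0
    return np.fft.fft(row).real

def is_integral(n, S, tol=1e-9):
    # The graph is integral iff every eigenvalue is (numerically) an integer.
    eigs = circulant_eigs(n, S)
    return bool(np.all(np.abs(eigs - np.round(eigs)) < tol))

# The 4-cycle (n=4, S={1}) has spectrum {2, 0, -2, 0}: integral.
# The 5-cycle's eigenvalues 2*cos(2*pi*j/5) are irrational: not integral.
c4_integral = is_integral(4, {1})
c5_integral = is_integral(5, {1})
```

Under the abstract's criterion, a quantum walk on the 4-cycle is periodic, while one on the 5-cycle is not.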
|
arxiv:quant-ph/0703236
|
we study the free energy of the pure glue qcd string with a torus target space and the gauge groups $su(n)$ and (chiral) $u(n)$. it is highly constrained by a strong/weak gauge coupling duality which results in modular covariance. the string free energy is computed exactly in terms of modular forms for worldsheet genera 1-8. it has a surprisingly mild singularity in the weak gauge coupling / small area limit.
|
arxiv:hep-th/9407176
|
we develop renormalization group methods for solving partial and stochastic differential equations on coarse meshes. renormalization group transformations are used to calculate the precise effect of small scale dynamics on the dynamics at the mesh size. the fixed point of these transformations yields a perfect operator : an exact representation of physical observables on the mesh scale with minimal lattice artifacts. we apply the formalism to simple nonlinear models of critical dynamics, and show how the method leads to an improvement in the computational performance of monte carlo methods.
|
arxiv:cond-mat/0009449
|
large differences between the properties of the known sample of cataclysmic variable stars ( cvs ) and the predictions of the theory of binary star evolution have long been recognised. however, because all existing cv samples suffer from strong selection effects, observational biases must be taken into account before it is possible to tell whether there is an inconsistency. in order to address this problem, we have modelled the impact of selection effects on observed cv samples using a monte carlo approach. by simulating the selection criteria of the palomar - green ( pg ) survey, we show that selection effects cannot reconcile the predictions of standard cv evolution theory with the observed sample. more generally, we illustrate the effect of the biases that are introduced by magnitude limits, selection cuts in u - b, and restrictions in galactic latitude.
|
arxiv:astro-ph/0610278
|
Neural networks have been proposed as efficient numerical wavefunction ansätze which can be used to variationally search a wide range of functional forms for ground state solutions. These neural network methods are also advantageous in that more variational parameters and system degrees of freedom can be easily added. We benchmark the methodology by using neural networks to study several different integrable bosonic quantum systems in one dimension and compare our results to the exact solutions. While testing the scalability of the procedure to systems with many particles, we also introduce using symmetric function inputs to the neural network to enforce exchange symmetries of indistinguishable particles.
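A minimal sketch of the symmetric-function-input idea (the feature map and the untrained random weights below are our own illustrative choices, not the paper's architecture): feeding permutation-invariant power sums instead of raw particle coordinates makes the network output exactly symmetric under particle exchange.

```python
import numpy as np

# Tiny trial "wavefunction" network. Instead of raw coordinates, the input
# is the vector of power sums p_k = sum_i x_i^k, which is invariant under
# any permutation of the (indistinguishable) particles.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 8)), rng.normal(size=8)
w2 = rng.normal(size=8)

def psi(x):
    feats = np.array([np.sum(x ** k) for k in (1, 2, 3)])  # symmetric inputs
    h = np.tanh(feats @ W1 + b1)
    return float(np.exp(h @ w2))   # positive trial amplitude

x = np.array([0.3, -1.2, 0.7, 2.0])
# exchanging any two particles leaves psi(x) unchanged by construction
```

A real ansatz would train these weights variationally; the point here is only that the exchange symmetry is enforced structurally rather than learned.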
|
arxiv:2309.02352
|
Layer-wise relevance propagation is a framework which allows the prediction of a deep neural network computed over a sample, e.g. an image, to be decomposed into relevance scores for the single input dimensions of the sample, such as the subpixels of an image. While this approach can be applied directly to generalized linear mappings, product-type non-linearities are not covered. This paper proposes an approach to extend layer-wise relevance propagation to neural networks with local renormalization layers, a very common product-type non-linearity in convolutional neural networks. We evaluate the proposed method for local renormalization layers on the CIFAR-10, ImageNet and MIT Places datasets.
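For orientation, the basic (unextended) LRP-epsilon rule on a plain linear+ReLU network can be sketched as follows; the weights are invented for illustration, and the paper's extension to local renormalization layers is not implemented here.

```python
import numpy as np

# Two-layer ReLU toy network. LRP-epsilon propagates the output score
# backwards, splitting each neuron's relevance among its inputs in
# proportion to their contributions z_ij = x_i * w_ij.
W1 = np.array([[1.0, -0.5, 0.2],
               [0.3,  0.8, -0.1],
               [-0.4, 0.2, 0.6],
               [0.5,  0.1, 0.3]])
w2 = np.array([0.7, -0.2, 0.5])

def lrp_epsilon(x, eps=1e-9):
    z1 = W1.T @ x                        # hidden pre-activations
    a1 = np.maximum(z1, 0.0)
    out = float(w2 @ a1)                 # score to be decomposed
    r1 = (a1 * w2) / (out + eps) * out   # relevance of hidden units
    c = x[:, None] * W1                  # input contributions x_i * w_ij
    r0 = (c / (z1 + eps)) @ r1           # relevance of input dimensions
    return out, r0

x = np.array([1.0, 2.0, 0.5, -1.0])
out, r0 = lrp_epsilon(x)
# relevance is (approximately) conserved: sum_i r0_i ≈ out
```

The conservation property (input relevances summing back to the output score, up to the epsilon stabilizer) is the invariant the paper's extended rules are designed to preserve through product-type layers.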
|
arxiv:1604.00825
|
We explore the relation between colour and specific star formation rate (derived from optical spectra obtained by SDSS DR4) of over 6,000 galaxies (M_r <= -20.5) in and around low redshift (z < 0.12) clusters. Even though most red galaxies have little or no ongoing star formation, and most blue galaxies are currently forming stars, there are significant populations of red star-forming (SF) and blue passive galaxies. This paper examines various properties of galaxies belonging to the latter two categories. These properties include morphological parameters, internal extinction, spectral features such as EW(Hdelta) and the 4000 Å break, and metallicity. Our analysis shows that the blue passive galaxies have properties very similar to their SF counterparts, except that their large range in EW(Hdelta) indicates recent truncation of star formation. The red SF galaxies fall into two broad categories, one of them being massive galaxies in cluster cores dominated by an old stellar population, but with evidence of current star formation in the core. For the remaining red SF galaxies it is evident from various metallicity measures and mean stellar ages that their colours result from the predominance of a metal-rich stellar population. The implication of the properties of these SF galaxies for environmental studies, like that of the Butcher-Oemler effect, is discussed.
|
arxiv:0908.2434
|
Using the QCD sum rule approach we investigate the possible four-quark structure of the recently observed charmed scalar mesons $D_0^0(2308)$ (Belle) and $D_0^{0,+}(2405)$ (FOCUS) and also of the very narrow $D_{sJ}^+(2317)$, first observed by BaBar. We use diquark-antidiquark currents and work to the order of $m_s$ in full QCD, without relying on a $1/m_c$ expansion. Our results indicate that a four-quark structure is acceptable for the resonances observed by Belle and BaBar, $D_0^0(2308)$ and $D_{sJ}^+(2317)$ respectively, but not for the resonances observed by FOCUS, $D_0^{0,+}(2405)$.
|
arxiv:hep-ph/0509131
|
Current soft prompt methods yield limited performance when applied to small-sized models (fewer than a billion parameters). Deep prompt-tuning, which entails prepending parameters in each layer for enhanced efficacy, presents a solution for prompting small-sized models, albeit requiring carefully designed implementation. In this paper, we introduce the Lottery Ticket Prompt-learning (LTP) framework that integrates winning tickets with soft prompts. LTP offers a simpler implementation and requires only a one-time execution. We demonstrate LTP on cross-lingual tasks, where prior works rely on external tools like human-designed multilingual templates and bilingual dictionaries, which may not be feasible in a low-resource regime. Specifically, we select a subset of parameters that have changed the most during fine-tuning with the masked language modeling objective. Then, we prepend soft prompts to the original pre-trained language model and only update the selected parameters together with prompt-related parameters when adapting to the downstream tasks. We verify the effectiveness of our LTP framework on cross-lingual tasks, specifically targeting low-resource languages. Our approach outperforms the baselines while updating only 20% of the original parameters.
|
arxiv:2404.01242
|
Automatic differentiation, also known as backpropagation, AD, autodiff, or algorithmic differentiation, is a popular technique for computing derivatives of computer programs accurately and efficiently. Sometimes, however, the derivatives computed by AD could be interpreted as incorrect. These pitfalls occur systematically across tools and approaches. In this paper we broadly categorize problematic usages of AD and illustrate each category with examples such as chaos, time-averaged oscillations, discretizations, fixed-point loops, lookup tables, and linear solvers. We also review debugging techniques and their effectiveness in these situations. With this article we hope to help readers avoid unexpected behavior, detect problems more easily when they occur, and have more realistic expectations from AD tools.
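The "chaos" pitfall can be reproduced with forward-mode differentiation written out by hand (no AD library needed; the map and parameter values are our own illustrative choices): the derivative of the final state of a chaotic iteration with respect to a parameter is formally correct yet explodes exponentially, while near a stable fixed point it stays bounded.

```python
# Forward-mode differentiation by hand through the logistic map
# x_{k+1} = r * x_k * (1 - x_k), tracking dx = d x_k / d r via the chain rule.
def logistic_and_derivative(r, x0=0.2, n=100):
    x, dx = x0, 0.0
    for _ in range(n):
        dx = x * (1 - x) + r * (1 - 2 * x) * dx   # chain-rule update for dx/dr
        x = r * x * (1 - x)
    return x, dx

_, dx_chaotic = logistic_and_derivative(3.9)   # chaotic regime
_, dx_stable = logistic_and_derivative(2.5)    # converges to a fixed point
# dx_chaotic is astronomically large (useless in practice), dx_stable is small
```

An AD tool run through the same loop returns the same exploding number: the derivative is mathematically right but tells you nothing about how averaged observables respond to the parameter, which is exactly the kind of misinterpretation the paper catalogues.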
|
arxiv:2305.07546
|
The work presented in this paper is part of a global framework whose long-term goal is to design a wireless sensor network able to support the observation of a population of endangered birds. We present the first stage, for which we have conducted a knowledge discovery approach on a sample of acoustical data. We use MFCC features extracted from bird songs and exploit two knowledge discovery techniques: one that relies on clustering-based approaches, which highlights the homogeneity in the songs of the species, and another, based on predictive modeling, which demonstrates the good performance of various machine learning techniques for the identification process. The knowledge elicited provides promising results to consider a widespread study and to elicit guidelines for designing a first version of the automatic approach for data collection based on acoustic sensors.
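The two techniques can be sketched end-to-end on synthetic stand-ins for MFCC vectors (the species means, dimensions and counts below are invented; real features would be extracted from recorded songs with an MFCC library):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-D "MFCC" vectors for 3 species, 50 songs each.
means = np.array([[0.0, 0.0], [3.0, 3.0], [0.0, 4.0]])
X = np.vstack([m + 0.3 * rng.normal(size=(50, 2)) for m in means])
y = np.repeat(np.arange(3), 50)

# 1) Clustering (a few Lloyd iterations of k-means): checks whether the
#    songs form homogeneous groups without using the species labels.
C = X[rng.choice(len(X), 3, replace=False)]
for _ in range(10):
    lab = np.argmin(((X[:, None] - C) ** 2).sum(-1), axis=1)
    C = np.array([X[lab == k].mean(0) if np.any(lab == k) else C[k]
                  for k in range(3)])

# 2) Predictive modelling: nearest-centroid species identification.
cent = np.array([X[y == k].mean(0) for k in range(3)])
pred = np.argmin(((X[:, None] - cent) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
```

The paper evaluates several, more capable classifiers; nearest-centroid is used here only because it fits in a few lines.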
|
arxiv:1306.5349
|
We develop Pieri-type as well as Murnaghan-Nakayama-type formulas for equivariant Chern-Schwartz-MacPherson classes of Schubert cells in the classical flag variety. These formulas include as special cases many previously known multiplication formulas for Chern-Schwartz-MacPherson classes or Schubert classes. We apply the equivariant Murnaghan-Nakayama formula to the enumeration of rim hook tableaux.
|
arxiv:2211.06802
|
We define an operation of jets on graphs inspired by the corresponding notion in commutative algebra and algebraic geometry. We examine a few graph-theoretic properties and invariants of this construction, including chromatic numbers, co-chordality, and vertex covers.
|
arxiv:2104.08933
|
We use the ROSAT public data archive to study the X-ray emission of a sample of supposedly single A0-F6 spectral type stars from the Bright Star Catalogue. We detected X-ray emission from 19 A- and 33 F-type stars. However, our results are not sufficient to associate with certainty the X-ray emission to the A-type stars themselves, since the usual argument that it may originate from a binary companion cannot be excluded. A spectral analysis was conducted for 14 sources (3 A and 11 F), finding that for 12 of them a two-temperature thermal plasma model is needed to reproduce the observed spectra. The two temperatures are centered at 0.13 and 0.54 keV, respectively. The values found for the higher temperature are lower than those of X-ray selected single late-type stars. The X-ray luminosities are in the range 1e28 < L_X < 1e30 erg/s, with a distribution similar to that of active late-type stars. No correlation is found between L_X and B-V colour, v sin(i), or L_bol, while a positive correlation is found between the X-ray luminosity and the hardness ratio.
|
arxiv:astro-ph/9906221
|
In the Pioneer 100 (P100) Wellness Project (Price and others, 2017), multiple types of data are collected on a single set of healthy participants at multiple timepoints in order to characterize and optimize wellness. One way to do this is to identify clusters, or subgroups, among the participants, and then to tailor personalized health recommendations to each subgroup. It is tempting to cluster the participants using all of the data types and timepoints, in order to fully exploit the available information. However, clustering the participants based on multiple data views implicitly assumes that a single underlying clustering of the participants is shared across all data views. If this assumption does not hold, then clustering the participants using multiple data views may lead to spurious results. In this paper, we seek to evaluate the assumption that there is some underlying relationship among the clusterings from the different data views, by asking the question: are the clusters within each data view dependent or independent? We develop a new test for answering this question, which we then apply to clinical, proteomic, and metabolomic data, across two distinct timepoints, from the P100 study. We find that while the subgroups of the participants defined with respect to any single data type seem to be dependent across time, the clustering among the participants based on one data type (e.g. proteomic data) appears not to be associated with the clustering based on another data type (e.g. clinical data).
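A simplified stand-in for the dependent-vs-independent question is a permutation test on the contingency table of the two label vectors (the paper's actual test additionally accounts for the clusters being estimated from the data; everything below, including the statistic, is an illustrative simplification):

```python
import numpy as np

def perm_test_dependence(a, b, n_perm=1000, seed=0):
    """Permutation p-value for dependence between two clusterings a, b
    (integer labels on the same participants), using a chi-square-type
    statistic on their contingency table."""
    rng = np.random.default_rng(seed)

    def stat(a, b):
        t = np.zeros((a.max() + 1, b.max() + 1))
        for i, j in zip(a, b):
            t[i, j] += 1
        e = t.sum(1, keepdims=True) * t.sum(0, keepdims=True) / t.sum()
        return ((t - e) ** 2 / np.maximum(e, 1e-12)).sum()

    obs = stat(a, b)
    null = [stat(rng.permutation(a), b) for _ in range(n_perm)]
    return (1 + sum(s >= obs for s in null)) / (1 + n_perm)
```

Identical clusterings give a tiny p-value (dependence detected); labels drawn independently give a large one.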
|
arxiv:1901.03905
|
We have determined chemical abundances and radial velocities for stars in the field of the zeta Sculptoris cluster. We find that the cluster metal deficiency previously found from UBV photometry is too high; the cluster overall metallicity, [Fe/H] = +0.24, is about 70% higher than the solar value. The chemical abundance pattern is unusual: the Ni/Fe ratio is significantly lower than typical for field stars with similar Fe and Ca abundances, and the cluster may also be deficient in the alpha element Si. For its age, the cluster is unusually far from the galactic disk, approximately 240 pc. The adopted heliocentric cluster radial velocity, +3.9 km/s, shows that it is close to its maximum distance from the galactic plane. We suggest that the zeta Sculptoris cluster was formed in the galactic disk 45 Myr ago by the interaction of a high velocity cloud with the interstellar medium, and that its formation may be connected with that of Gould's Belt. (Table 3, PostScript figures and text available by anonymous ftp from "chiron.astro.uu.se" in directory pub/articles/atmos/p90)
|
arxiv:astro-ph/9405068
|
We show, by explicit computation, that bare lattice perturbation theory in the two-dimensional O(N) nonlinear $\sigma$ models with superinstanton boundary conditions is divergent in the limit of an infinite number of points $|\Lambda|$. This is the analogue of David's statement that renormalized perturbation theory of these models is infrared divergent in the limit where the physical size of the box tends to infinity. We also give arguments which support the validity of the bare perturbative expansion of short-distance quantities obtained by taking the limit $|\Lambda| \to \infty$ term by term in the theory with more conventional boundary conditions such as Dirichlet, periodic, and free.
|
arxiv:hep-lat/9612002
|
Given a non-maximally entangled state, an operationally significant question is to quantitatively assess to what extent the state is away from the maximally entangled state, which is of importance in evaluating the efficacy of the state for its various uses as a resource. It is this question which is examined in this paper for two-qubit pure entangled states in terms of different entanglement measures like negativity (N), logarithmic negativity (LN), and entanglement of formation (EOF). Although these entanglement measures are defined differently, to what extent they differ in quantitatively addressing the earlier mentioned question has remained uninvestigated. A theoretical estimate in this paper shows that an appropriately defined parameter characterizing the fractional deviation of any given entangled state from the maximally entangled state in terms of N is quite different from that computed in terms of EOF, with their values differing up to ~15% for states further away from the maximally entangled state. Similarly, the values of such fractional deviation parameters estimated using the entanglement measures LN and EOF, respectively, also strikingly differ among themselves, with the maximum value of this difference being around 23%. This analysis is complemented by an illustration of these differences in terms of empirical results obtained from a suitably planned experimental study. Thus, such an appreciable amount of quantitative non-equivalence between the entanglement measures in addressing the experimentally relevant question considered in the present paper highlights the requirement of an appropriate quantifier for such intent. We indicate directions of study that can be explored towards finding such a quantifier.
|
arxiv:1907.09268
|
In this paper, we prove various radius results and obtain sufficient conditions using the convolution for the Ma-Minda classes $\mathcal{S}^*(\psi)$ and $\mathcal{C}(\psi)$ of starlike and convex analytic functions. We also obtain the Bohr radius for the class $S_f(\psi) := \{ g(z) = \sum_{k=1}^{\infty} b_k z^k : g \prec f \}$ of subordinants, where $f \in \mathcal{S}^*(\psi)$. The results are improvements and generalizations of several well known results.
|
arxiv:2106.04962
|
In this paper, we develop an efficient multi-scale network to predict action classes in partial videos in an end-to-end manner. Unlike most existing methods with offline feature generation, our method directly takes frames as input and further models motion evolution on two different temporal scales. It thereby addresses the complexity problems of two-stage modeling as well as the insufficient temporal and spatial information of a single scale. Our proposed end-to-end multi-scale network (E2EMSNet) is composed of two scales, named the segment scale and the observed global scale. The segment scale leverages temporal differences over consecutive frames for finer motion patterns by supplying 2D convolutions. For the observed global scale, a long short-term memory (LSTM) is incorporated to capture motion features of the observed frames. Our model provides a simple and efficient modeling framework with a small computational cost. E2EMSNet is evaluated on three challenging datasets: BIT, HMDB51, and UCF101. The extensive experiments demonstrate the effectiveness of our method for action prediction in videos.
|
arxiv:2301.01216
|
Iterative learning control (ILC) is a control strategy for repetitive tasks wherein information from previous runs is leveraged to improve future performance. Optimization-based ILC (OB-ILC) is a powerful design framework for constrained ILC where measurements from the process are integrated into an optimization algorithm to provide robustness against noise and modelling error. This paper proposes a robust ILC controller for constrained linear processes based on the forward-backward splitting algorithm. It demonstrates how structured uncertainty information can be leveraged to ensure constraint satisfaction and provides a rigorous stability analysis in the iteration domain by combining concepts from monotone operator theory and robust control. Numerical simulations of a precision motion stage support the theoretical results.
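A bare-bones sketch of the idea (the lifted model, constraints, and step size below are invented, and the robustification in the paper is omitted): each trial runs the process, and the input is updated by a projected gradient step, i.e., a forward gradient step followed by the backward projection of forward-backward splitting.

```python
import numpy as np

# Toy lifted linear process y = G u + d with a repeating disturbance d,
# input constrained to a box. Each loop iteration is one "trial".
N = 20
G = np.tril(np.ones((N, N))) * 0.1            # toy impulse-response model
d = 0.05 * np.ones(N)                         # trial-invariant disturbance
ref = np.sin(np.linspace(0, np.pi, N))        # reference trajectory

u = np.zeros(N)
step = 0.5 / np.linalg.norm(G, 2) ** 2        # step below 1/Lipschitz
errs = []
for _ in range(200):
    y = G @ u + d                             # run one trial
    e = y - ref
    errs.append(float(np.linalg.norm(e)))
    # forward (gradient) step, then backward (projection onto the box) step
    u = np.clip(u - step * (G.T @ e), -2.0, 2.0)
```

With a step size below the reciprocal Lipschitz constant of the gradient, the trial error is nonincreasing from run to run, which is the iteration-domain stability property the paper establishes rigorously under model uncertainty.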
|
arxiv:2203.05291
|
We present high-resolution infrared spectra of four YSOs (T Tau N, T Tau S, RNO 91, and HL Tau). The spectra exhibit narrow absorption lines of 12CO, 13CO, and C18O as well as broad emission lines of gas-phase 12CO. The narrow absorption lines of CO are shown to originate from the colder circumstellar gas. We find that the line-of-sight gas column densities resulting from the CO absorption lines are much higher than expected for the measured extinction for each source and suggest that the gas-to-dust ratio is a measure of the dust settling and/or grain coagulation in these extended disks. We provide a model of turbulence, dust settling and grain growth to explain the results. The techniques presented here allow us to provide some observationally-motivated bounds on the accretion disk alpha in protostellar systems.
|
arxiv:astro-ph/0603035
|
Open-vocabulary semantic segmentation aims to segment an image into semantic regions according to text descriptions, which may not have been seen during training. Recent two-stage methods first generate class-agnostic mask proposals and then leverage pre-trained vision-language models, e.g., CLIP, to classify masked regions. We identify the performance bottleneck of this paradigm to be the pre-trained CLIP model, since it does not perform well on masked images. To address this, we propose to finetune CLIP on a collection of masked image regions and their corresponding text descriptions. We collect training data by mining an existing image-caption dataset (e.g., COCO Captions), using CLIP to match masked image regions to nouns in the image captions. Compared with the more precise and manually annotated segmentation labels with fixed classes (e.g., COCO-Stuff), we find our noisy but diverse dataset can better retain CLIP's generalization ability. Along with finetuning the entire model, we utilize the "blank" areas in masked images using a method we dub mask prompt tuning. Experiments demonstrate mask prompt tuning brings significant improvement without modifying any weights of CLIP, and it can further improve a fully finetuned model. In particular, when trained on COCO and evaluated on ADE20K-150, our best model achieves 29.6% mIoU, which is +8.5% higher than the previous state-of-the-art. For the first time, open-vocabulary generalist models match the performance of supervised specialist models in 2017 without dataset-specific adaptations.
|
arxiv:2210.04150
|
What makes economic and ecological networks so unlike other highly skewed networks in their tendency toward turbulence and collapse? Here, we explore the consequences of a defining feature of these networks: their nodes are tied together by flow. We show that flow networks tend to the power law degree distribution (PLDD) due to a self-reinforcing process involving position within the global network structure, and thus present the first random graph model for PLDDs that does not depend on a rich-get-richer function of nodal degree. We also show that in contrast to non-flow networks, PLDD flow networks are dramatically more vulnerable to catastrophic failure than non-PLDD flow networks, a finding with potential explanatory power in our age of resource and financial interdependence and turbulence.
|
arxiv:1308.0726
|
Rapidly growing population against a limited amount of land meant diminishing returns to labour. The result, he claimed, was chronically low wages, which prevented the standard of living for most of the population from rising above the subsistence level. Economist Julian Simon has criticised Malthus's conclusions. While Adam Smith emphasised production and income, David Ricardo (1817) focused on the distribution of income among landowners, workers, and capitalists. Ricardo saw an inherent conflict between landowners on the one hand and labour and capital on the other. He posited that the growth of population and capital, pressing against a fixed supply of land, pushes up rents and holds down wages and profits. Ricardo was also the first to state and prove the principle of comparative advantage, according to which each country should specialise in producing and exporting the goods in which it has a lower relative cost of production, rather than relying only on its own production. It has been termed a "fundamental analytical explanation" for gains from trade. Coming at the end of the classical tradition, John Stuart Mill (1848) parted company with the earlier classical economists on the inevitability of the distribution of income produced by the market system. Mill pointed to a distinct difference between the market's two roles: allocation of resources and distribution of income. The market might be efficient in allocating resources but not in distributing income, he wrote, making it necessary for society to intervene. Value theory was important in classical theory. Smith wrote that the "real price of every thing... is the toil and trouble of acquiring it". Smith maintained that, with rent and profit, other costs besides wages also enter the price of a commodity. Other classical economists presented variations on Smith, termed the 'labour theory of value'.
Classical economics focused on the tendency of any market economy to settle in a final stationary state made up of a constant stock of physical wealth (capital) and a constant population size.

=== Marxian economics ===

Marxist (later, Marxian) economics descends from classical economics and derives from the work of Karl Marx. The first volume of Marx's major work, Das Kapital, was published in 1867. Marx focused on the labour theory of value and the theory of surplus value. Marx wrote that they were mechanisms used by capital to exploit labour. The labour theory of value held that the value of an exchanged commodity was determined by the labour that went into its production, and the theory of surplus value demonstrated how workers were only paid a proportion of the value their work
|
https://en.wikipedia.org/wiki/Economics
|
The proton spin structure is not yet understood and there remains large uncertainty on Delta G, the gluon spin contribution to the proton. The double helicity asymmetry (A_LL) of pi0 production in polarized pp collisions is used to constrain Delta G. In this report, preliminary results for the A_LL of pi0 in pp collisions at sqrt(s) = 62.4 GeV, measured by the PHENIX experiment in 2006, are presented. It can probe a higher x region than the previously reported pi0 A_LL at sqrt(s) = 200 GeV thanks to the lower center of mass energy.
|
arxiv:0704.1369
|
We consider the stationary Keller-Segel equation \begin{equation*} \begin{cases} -\Delta v + v = \lambda e^v, \quad v > 0 & \text{in } \Omega, \\ \partial_\nu v = 0 & \text{on } \partial\Omega, \end{cases} \end{equation*} where $\Omega$ is a ball. In the regime $\lambda \to 0$, we study the radial bifurcations and we construct radial solutions by a gluing variational method. For any given positive natural number $n$, we build a solution having multiple layers at $r_1, \ldots, r_n$, by which we mean that the solutions concentrate on the spheres of radii $r_i$ as $\lambda \to 0$ (for all $i = 1, \ldots, n$). A remarkable fact is that, in opposition to previously known results, the layers of the solutions do not accumulate to the boundary of $\Omega$ as $\lambda \to 0$. Instead they satisfy an optimal partition problem in the limit.
|
arxiv:1603.07374
|
In the software development process we come across various modules, which raises the idea of prioritizing the different modules of a software system so that important modules are tested on preference. This approach is desirable because it is not possible to test each module regressively due to time and cost constraints. This paper discusses some parameters required to prioritize several modules of a software system and provides a measure of optimal time and cost for testing based on a non-homogeneous Poisson process.
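As a hedged illustration of how a non-homogeneous Poisson process yields an optimal testing time, the classic Goel-Okumoto mean value function $m(t) = a(1 - e^{-bt})$ can be combined with a simple cost model (all parameters and the cost structure below are invented; the paper's own model details are not reproduced here):

```python
import numpy as np

# Goel-Okumoto NHPP: m(t) = a(1 - exp(-b t)) expected faults found by time t.
# Costs: c1 per fault fixed during testing, c2 (> c1) per fault fixed after
# release, c3 per unit of testing time.
a, b = 100.0, 0.05           # expected total faults, detection rate
c1, c2, c3 = 1.0, 5.0, 0.5

m = lambda t: a * (1 - np.exp(-b * t))
cost = lambda t: c1 * m(t) + c2 * (a - m(t)) + c3 * t

# Setting dC/dt = 0 gives the closed-form optimal release time:
# t* = (1/b) * ln(a * b * (c2 - c1) / c3)
t_star = np.log(a * b * (c2 - c1) / c3) / b
```

With these numbers the derivative of the cost changes sign exactly once, so `t_star` is the unique minimizer; a module with a higher expected fault count `a` or cheaper testing `c3` earns a longer optimal test time, which is the kind of trade-off module prioritization rests on.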
|
arxiv:0904.2769
|
The peak-background split argument is commonly used to relate the abundance of dark matter halos to their spatial clustering. Testing this argument requires an accurate determination of the halo mass function. We present a maximum likelihood method for fitting parametric functional forms to halo abundances which differs from previous work because it does not require binned counts. Our conclusions do not depend on whether we use our method or more conventional ones. In addition, halo abundances depend on how halos are defined. Our conclusions do not depend on the choice of link length associated with the friends-of-friends halo-finder, nor do they change if we identify halos using a spherical overdensity algorithm instead. The large scale halo bias measured from the matter-halo cross spectrum b_x and the halo autocorrelation function b_xi (on scales k ~ 0.03 h/Mpc and r ~ 50 Mpc/h) can differ by as much as 5% for halos that are significantly more massive than the characteristic mass M*. At these large masses, the peak-background split estimate of the linear bias factor b1 is 3-5% smaller than b_xi, which is 5% smaller than b_x. We discuss the origin of these discrepancies: deterministic nonlinear local bias, with parameters determined by the peak-background split argument, is unable to account for the discrepancies we see. A simple linear but nonlocal bias model, motivated by peaks theory, may also be difficult to reconcile with our measurements. More work on such nonlocal bias models may be needed to understand the nature of halo bias at this level of precision.
|
arxiv:0906.1314
|
We construct the effective field theory (EFT) of the teleparallel equivalent of general relativity (TEGR). Firstly, we present the necessary field redefinitions of the scalar field and the tetrads. Then we provide all the terms at next-to-leading order, containing the torsion tensor and its derivatives, and derivatives of the scalar field, accompanied by generic scalar-field-dependent couplings, where all operators are suppressed by a scale $\Lambda$. Removing all redundant terms using the field redefinitions, we arrive at the EFT of TEGR, which includes significantly more terms compared to the EFT of general relativity. Finally, we present an application in a cosmological framework. Interestingly enough, although GR and TEGR are completely equivalent at the level of classical equations, we find that their corresponding EFTs possess minor but non-zero differences. Hence, we do verify that at higher energies the excitation and the features of the extra degrees of freedom are slightly different in the two theories, thus making them theoretically distinguishable. Nevertheless, we mention that these differences are suppressed by the heavy mass scale $\Lambda$ and thus it is not guaranteed that they could be measured in future experiments and observations.
|
arxiv:2211.11420
|
Under the common theme of splitting of operations, the notions of (tri)dendriform algebras, pre-Lie algebras and post-Lie algebras have attracted sustained attention with broad applications. An important aspect of their study is as the derived structures of Rota-Baxter operators on associative or Lie algebras. This paper introduces extended versions of (tri)dendriform algebras, pre-Lie algebras, and post-Lie algebras, establishing close relations among these new structures that generalize those among their classical counterparts. These new structures can be derived from the extended Rota-Baxter operator, which combines the standard Rota-Baxter operator and the modified Rota-Baxter operator. To characterize these new notions as the derived structures of extended Rota-Baxter algebras, we define the binary quadratic operad in companion with an operad with nontrivial unary operations. Then the extended (tri)dendriform algebra is shown to be the binary quadratic nonsymmetric operad in companion to the operad of extended Rota-Baxter algebras. As a key ingredient in achieving this, and for its own right, the free extended Rota-Baxter algebra is constructed by bracketed words.
|
arxiv:2412.08001
|
Adversarial examples are a widely studied phenomenon in machine learning models. While most of the attention has been focused on neural networks, other practical models also suffer from this issue. In this work, we propose an algorithm for evaluating the adversarial robustness of $k$-nearest neighbor classification, i.e., finding a minimum-norm adversarial example. Diverging from previous proposals, we take a geometric approach by performing a search that expands outwards from a given input point. On a high level, the search radius expands to the nearby Voronoi cells until we find a cell that classifies differently from the input point. To scale the algorithm to a large $k$, we introduce approximation steps that find perturbations with smaller norm, compared to the baselines, in a variety of datasets. Furthermore, we analyze the structural properties of a dataset where our approach outperforms the competition.
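A naive stand-in for the "expand outwards" idea (the real algorithm enumerates neighbouring Voronoi cells exactly; this sketch only probes random directions at a growing radius, and the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic 2-D classes, 30 points each.
X = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(4.0, 1.0, (30, 2))])
y = np.repeat([0, 1], 30)

def knn_predict(x, k=1):
    d = np.linalg.norm(X - x, axis=1)
    idx = np.argsort(d)[:k]
    return int(np.bincount(y[idx]).argmax())

def find_adversarial(x, step=0.05, trials=200):
    """Grow a search radius around x, sampling random directions until
    the k-NN label flips; returns the first flipping point and radius."""
    base = knn_predict(x)
    r = step
    while r < 10.0:
        for _ in range(trials):
            u = rng.normal(size=2)
            cand = x + r * u / np.linalg.norm(u)
            if knn_predict(cand) != base:
                return cand, r
        r += step
    return None, None

adv, r = find_adversarial(np.zeros(2))
```

The radius at which the first flip occurs upper-bounds the minimum-norm perturbation; the paper's Voronoi-cell search finds that minimum exactly rather than by sampling.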
|
arxiv:2011.09719
|
We consider voter dynamics on a directed adaptive network with fixed out-degree distribution. A transition between an active phase and a fragmented phase is observed. This transition is similar to the undirected case if the networks are sufficiently dense and have a narrow out-degree distribution. However, if a significant number of nodes with low out-degree is present, then fragmentation can occur even far below the estimated critical point, due to the formation of self-stabilizing structures that nucleate fragmentation. This process may be relevant for fragmentation in current political opinion formation processes.
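A toy version of such dynamics can be simulated in a few lines (the update rule, rewiring probability, and sizes below are illustrative choices, not the paper's exact model): each node holds an opinion and a fixed set of out-links; an active link is either rewired or resolved by opinion adoption. Rewiring changes link targets only, so every node's out-degree is conserved, which is the defining constraint of the model class.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 0.3                       # nodes, rewiring probability
opinion = rng.integers(0, 2, n)
# fixed out-degree 3 for every node
targets = [list(rng.choice([j for j in range(n) if j != i], 3, replace=False))
           for i in range(n)]

for _ in range(5000):
    i = rng.integers(n)
    k = rng.integers(3)
    j = targets[i][k]
    if opinion[i] != opinion[j]:      # the chosen out-link is "active"
        if rng.random() < p:
            # rewire (this simplified sketch may pick any node, even i itself)
            targets[i][k] = int(rng.integers(n))
        else:
            opinion[i] = opinion[j]   # adopt the neighbour's opinion

# number of remaining active (disagreeing) links
active = sum(opinion[i] != opinion[j] for i in range(n) for j in targets[i])
```

Tracking `active` over time, and over out-degree distributions with many low-out-degree nodes, is how one would probe the fragmentation transition the abstract describes.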
|
arxiv:1110.1336
|
The initial techniques developed in Euclid's Elements, well before the use of the parallel postulate, are reexamined in order to clarify even the most obscure details, particularly those related to equality, superposition and angle comparison. Some commentary on modern developments is included. The known but often misunderstood implicit handling of betweenness and points of intersection is briefly treated. We also sketch a rigorous treatment of absolute geometry in a spirit similar to Euclid's, one that allows properties of angles and triangles to be derived from two simple axioms on right angles, which then leads to rigid motions of certain planar geometries.
|
arxiv:2501.17406
|
Certificates of polynomial nonnegativity can be used to obtain tight dual bounds for polynomial optimization problems. We consider sums of nonnegative circuit (SONC) polynomial certificates, which are well suited for sparse problems since the computational cost depends only on the number of terms in the polynomials and does not depend on the degrees of the polynomials. This work is a first step to integrating SONC-based relaxations of polynomial problems into a branch-and-bound algorithm. To this end, the SONC relaxation for constrained optimization problems is extended in order to better utilize variable bounds, since this property is key for the success of a relaxation in the context of branch-and-bound. Computational experiments show that the proposed extension is crucial for making the SONC relaxations applicable to most constrained polynomial optimization problems and for integrating the two approaches.
|
arxiv:2211.05518
|
Even though auto-encoders (AEs) have the desirable property of learning compact representations without labels and have been widely applied to out-of-distribution (OOD) detection, they are generally still poorly understood and are used incorrectly in detecting outliers where the normal and abnormal distributions are strongly overlapping. In general, the learned manifold is assumed to contain key information that is only important for describing samples within the training distribution, so that the reconstruction of outliers leads to high residual errors. However, recent work suggests that AEs are likely to be even better at reconstructing some types of OOD samples. In this work, we challenge this assumption and investigate what auto-encoders actually learn when they are posed to solve two different tasks. First, we propose two metrics based on the Fréchet inception distance (FID) and confidence scores of a trained classifier to assess whether AEs can learn the training distribution and reliably recognize samples from other domains. Second, we investigate whether AEs are able to synthesize normal images from samples with abnormal regions, on a more challenging lung pathology detection task. We have found that state-of-the-art (SOTA) AEs are either unable to constrain the latent manifold and allow reconstruction of abnormal patterns, or they fail to accurately restore the inputs from their latent distribution, resulting in blurred or misaligned reconstructions. We propose novel deformable auto-encoders (MorphAEus) to learn perceptually aware global image priors and locally adapt their morphometry based on estimated dense deformation fields. We demonstrate superior performance over unsupervised methods in detecting OOD samples and pathology.
|
arxiv:2206.03698
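The baseline assumption this paper challenges, that reconstruction residual separates in- from out-of-distribution samples, can be illustrated with a linear auto-encoder, which is equivalent to PCA. Dimensions and data here are a toy setup of our own, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
# In-distribution data lies exactly on a 5-dimensional subspace of R^20.
basis = rng.normal(size=(5, 20))
X_in = rng.normal(size=(500, 5)) @ basis

# A linear auto-encoder with a 5-unit bottleneck and square loss is PCA.
mean = X_in.mean(axis=0)
_, _, Vt = np.linalg.svd(X_in - mean, full_matrices=False)
P = Vt[:5]  # shared encoder/decoder weights

def ood_score(x):
    """Reconstruction residual: distance from x to the learned manifold."""
    z = (x - mean) @ P.T          # encode
    return np.linalg.norm(x - mean - z @ P)  # decode and compare

x_id = rng.normal(size=5) @ basis   # on-manifold sample: residual ~ 0
x_ood = rng.normal(size=20) * 3.0   # generic point: large residual
print(ood_score(x_id) < ood_score(x_ood))  # True
```

The paper's point is that deep AEs often break this clean picture: some OOD inputs reconstruct *better* than in-distribution ones.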
|
Gamma-ray bursts (GRBs) are cosmologically distributed, very energetic, and very transient sources detected in the gamma-ray domain. The identification of their X-ray and optical afterglows has so far allowed the redshift measurement of 150 events, from z = 0.01 to z = 6.29. For about half of them, we have some knowledge of the properties of the parent galaxy. At high redshift (z > 2), absorption lines in the afterglow spectra give information on the cold interstellar medium in the host. At low redshift (z < 1.0), multi-band optical-NIR photometry and integrated spectroscopy reveal the general properties of GRB hosts. No redshift evolution of metallicity is noticeable in the whole sample; the typical value is a few times lower than solar. The mean host stellar mass is similar to that of the Large Magellanic Cloud, but the mean star formation rate is five times higher. GRBs are discovered with gamma-ray, not optical or NIR, instruments, so their hosts do not suffer from the selection biases of typical galaxy surveys. They might therefore represent a fair sample of the most common galaxies that existed over the past history of the universe, and can be used to better understand galaxy formation and evolution.
|
arxiv:0808.2917
|
Event-by-event fluctuations of the average transverse momentum of produced particles near mid-rapidity have been measured by the PHENIX collaboration in √(s_NN) = 200 GeV Au+Au and p+p collisions at the Relativistic Heavy Ion Collider. The fluctuations are observed to be in excess of the expectation for statistically independent particle emission at all centralities. The excess fluctuations exhibit a dependence both on the centrality of the collision and on the transverse momentum window over which the average is calculated. Both the centrality and p_T dependence can be well reproduced by a simulation of random particle production with the addition of contributions from hard-scattering processes.
|
arxiv:nucl-ex/0310005
|
Random vector functional link (RVFL), a variant of the single-layer feedforward neural network (SLFN), has garnered significant attention due to its lower computational cost and robustness to overfitting. Despite these advantages, the RVFL network's reliance on the square-error loss function makes it highly sensitive to outliers and noise, degrading model performance in real-world applications. To remedy this, we propose incorporating the HawkEye loss (H-loss) function into the RVFL framework. The H-loss function has nice mathematical properties, including smoothness and boundedness, while simultaneously incorporating an insensitive zone. Each characteristic brings its own advantage: 1) boundedness limits the impact of extreme errors, enhancing robustness against outliers; 2) smoothness facilitates the use of gradient-based optimization algorithms, ensuring stable and efficient convergence; and 3) the insensitive zone mitigates the effect of minor discrepancies and noise. Leveraging the H-loss function, we embed it into the RVFL framework and develop a novel robust RVFL model termed H-RVFL. Notably, this work addresses a significant gap, as no bounded loss function has been incorporated into RVFL to date. The non-convex optimization of the proposed H-RVFL is effectively addressed by the Nesterov accelerated gradient (NAG) algorithm, whose computational complexity is also discussed. The effectiveness of the proposed H-RVFL model is validated through extensive experiments on 40 benchmark datasets from the UCI and KEEL repositories, with and without label noise. The results highlight significant improvements in robustness and efficiency, establishing H-RVFL as a powerful tool for applications in noisy and outlier-prone environments.
|
arxiv:2410.00510
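For context, the plain square-loss RVFL that this paper identifies as outlier-sensitive fits only the output weights, by ridge regression over random hidden features plus direct input links. A minimal sketch of that baseline (the H-loss variant and its NAG optimization are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def rvfl_fit(X, y, n_hidden=100, reg=1e-3):
    """RVFL: a fixed random hidden layer plus direct input-to-output links;
    only the output weights beta are learned, via ridge regression
    (i.e., the square-error loss the paper argues is outlier-sensitive)."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    D = np.hstack([X, H])  # direct links concatenated with hidden features
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

# Toy regression task.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
W, b, beta = rvfl_fit(X, y)
mse = np.mean((rvfl_predict(X, W, b, beta) - y) ** 2)
```

Because the hidden weights are never trained, fitting reduces to one linear solve, which is the source of RVFL's low computational cost.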
|
Visual-inertial odometry (VIO) utilizes an inertial measurement unit (IMU) to overcome the limitations of visual odometry (VO). However, VIO for vehicles in large-scale outdoor environments still has difficulty estimating forward motion with distant features. To address this, we propose a robust VIO method based on an analysis of feature confidence in forward motion estimation using an IMU. We first formulate the VIO problem using effective trifocal tensor geometry. We then infer feature confidence from the motion information obtained from the IMU and incorporate this confidence into a Bayesian estimation framework. Experimental results on the public KITTI dataset show that the proposed VIO outperforms the baseline VIO, demonstrating the effectiveness of the proposed feature-confidence analysis and confidence-incorporated egomotion estimation framework.
|
arxiv:1704.07145
|
We report the discovery of an unusually red brown dwarf found in a search for high proper motion objects using WISE and 2MASS data. WISEP J004701.06+680352.1 is moving at 0.44 arcsec/yr and lies relatively close to the Galactic plane (b = 5.2 degrees). Near-infrared photometry and spectroscopy reveal that this is one of the reddest (2MASS J - K_s = 2.55 +/- 0.08 mag) field L dwarfs yet detected, making it an important member of the class of unusually red L dwarfs. We discuss evidence for thick condensate clouds and speculate on the age of the object. Although models by different research groups agree that thick clouds can explain the red spectrum, they predict dramatically different effective temperatures, ranging from 1100 K to 1600 K. This brown dwarf is well suited for additional studies of extremely dusty substellar atmospheres because it is relatively bright (K_s = 13.05 +/- 0.03 mag), which should also contribute to an improved understanding of young gas-giant planets and the transition between L and T brown dwarfs.
|
arxiv:1207.4012
|
This paper presents an approach for the implementation and execution of an effective requirements generation process. We achieve this goal by providing a well-defined requirements engineering model that includes verification and validation (V&V) and analysis. In addition, we identify focused activity objectives, map popular methods to lower-level activities, and define a criteria-based process for optimizing method selection for the attendant activities. Our model, unlike other models, addresses the complete requirements generation process and consists of activities defined at more adequate levels of abstraction. It also incorporates a unique approach to V&V that enhances quality and reduces the cost of generating requirements. Additionally, activity objectives are identified and explicitly stated, not implied as in current models. To assist in selecting an appropriate set of methods, we have mapped commonly used methods to activities based on their objectives. Finally, we have identified method-selection criteria and prescribed a reduced set of methods that optimize these criteria for each activity in our model. Our approach thus assists in the task of selecting methods by using selection criteria to reduce a large collection of potential methods to a smaller, manageable set. The model, the clear mapping of methods to activity objectives, and the criteria-based process together provide much-needed guidance for the effective implementation and execution of the requirements generation process.
|
arxiv:cs/0503004
|
Event cameras provide rich signals that are suitable for motion estimation, since they respond to changes in the scene. Because any visual change in the scene produces event data, it is paramount to classify the data into different motions (i.e., motion segmentation), which is useful for various tasks such as object detection and visual servoing. We propose an iterative motion segmentation method that classifies events into background (e.g., the dominant motion hypothesis) and foreground (independent motion residuals), thus extending the contrast maximization framework. Experimental results demonstrate that the proposed method successfully classifies event clusters on both public and self-recorded datasets, producing sharp, motion-compensated edge-like images. The proposed method achieves state-of-the-art accuracy on moving object detection benchmarks, with an improvement of over 30%, and demonstrates its applicability to more complex and noisy real-world scenes. We hope this work broadens the sensitivity of contrast maximization with respect to both motion parameters and input events, thus contributing to theoretical advancements in event-based motion segmentation. https://github.com/aoki-media-lab/event_based_segmentation_vcmax
|
arxiv:2504.18447
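The core of contrast maximization, which this segmentation method extends, is to warp events by a candidate motion and score the sharpness of the resulting accumulated image. A toy sketch using variance as the contrast measure (scene, sensor size, and noise levels are illustrative):

```python
import numpy as np

def contrast(events, v, size=64):
    """Warp events (x, y, t) by candidate velocity v, accumulate them into
    an image of event counts, and score sharpness by the image variance.
    The true motion aligns events along edges, maximizing the contrast."""
    x, y, t = events
    xw = np.round(x - v[0] * t).astype(int) % size
    yw = np.round(y - v[1] * t).astype(int) % size
    img = np.zeros((size, size))
    np.add.at(img, (yw, xw), 1.0)
    return img.var()

# Toy scene: a point feature moving at (5, 0) pixels/s emits noisy events.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 1000)
x = 10.0 + 5.0 * t + rng.normal(0.0, 0.1, 1000)
y = np.full(1000, 32.0) + rng.normal(0.0, 0.1, 1000)
events = (x, y, t)

# The true velocity concentrates events into a sharper image than no motion.
print(contrast(events, (5.0, 0.0)) > contrast(events, (0.0, 0.0)))  # True
```

The paper's method iterates this idea: events well explained by the dominant motion are assigned to the background, and the residual events form the foreground clusters.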
|
CAFE (Census of warm-hot intergalactic medium, Accretion, and Feedback Explorer) and LyRIC (Lyman UV Radiation from Interstellar medium and Circum-galactic medium) have each been proposed to space agencies in China. CAFE was first proposed in 2015 as a joint CAS-ESA small scientific space mission; LyRIC was proposed in 2019 as an independent external payload operating on the Chinese space station. Both missions are dedicated to mapping Lyman UV emission (the ionized oxygen (O VI) resonance lines at 103.2 and 103.8 nm, and the Lyman series) from diffuse sources in our Galaxy and the circum-galactic media of nearby galaxies. We present the primary science objectives, mission concepts, and enabling technologies, as well as the current status.
|
arxiv:2012.07384
|
We build a regime-switching multivariate time series model that is closed under margins. The model imposes the restriction that all lower-dimensional sub-processes follow a regime-switching process sharing the same latent regime sequence and having the same Markov order as the original process. The margin-closed regime-switching model is constructed by taking the multivariate margin-closed Gaussian VAR(k) dependence as a copula within each regime, and builds dependence between observations in different regimes by requiring the first observation in the new regime to depend on the last observation in the previous regime. The property of closure under margins allows inference on the latent regimes based on lower-dimensional selected sub-processes and estimation of univariate parameters from univariate sub-processes, enabling a multi-stage estimation procedure for the model. The parsimonious dependence structure of the model also avoids a large number of parameters in the regime-switching setting. The proposed model is applied to a macroeconomic data set to infer the latent business cycle and is compared with relevant benchmarks.
|
arxiv:2312.10706
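A generic two-regime Markov-switching process with cross-regime dependence (the first observation after a switch depending on the last observation of the previous regime) can be simulated in a few lines. This illustrates only the switching mechanism, not the paper's margin-closed copula construction; all parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sticky two-regime Markov chain for the latent regime sequence.
P = np.array([[0.95, 0.05],
              [0.05, 0.95]])
mu = [0.0, 2.0]      # regime-specific means
sigma = [0.5, 1.5]   # regime-specific volatilities

n = 2000
s = np.zeros(n, dtype=int)  # latent regimes
x = np.zeros(n)             # observed series
for i in range(1, n):
    s[i] = rng.choice(2, p=P[s[i - 1]])
    # AR(1)-style dependence: each observation (including the first one in
    # a new regime) depends on the previous observation.
    x[i] = mu[s[i]] + 0.6 * (x[i - 1] - mu[s[i]]) + sigma[s[i]] * rng.normal()
```

In the paper, this within- and cross-regime dependence is instead expressed through a margin-closed Gaussian VAR copula, which is what makes every sub-process inherit the same structure.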
|
Metamaterials whose electromagnetic response yields an effective zero refractive index are sought for a number of applications in communications and nonlinear optics. A promising way to achieve this in all-dielectric photonic crystals is to design a Dirac cone at zero Bloch wave-vector in the photonic band structure. In the optical frequency range, the natural way to implement this design is with a photonic crystal slab. In the existing implementation, however, the zero-index photonic modes also radiate strongly into the environment due to intrinsic symmetry properties, which has resulted in large losses in recent experimental realizations of this zero-index paradigm. Here, we propose a photonic crystal slab whose zero-index modes are also symmetry-protected bound states in the continuum. Our approach thus eliminates the associated radiation loss and could enable, for the first time, large-scale integration of zero-index materials in photonic devices.
|
arxiv:1811.11917
|
We calculate the variation with temperature of the vortex free energy in d = 2+1 SU(2) lattice gauge theories, both above and below the deconfining transition at T = T_c. We find that this quantity is zero at all T for large enough volumes. For T < T_c this observation is consistent with the fact that the phase is linearly confining, while for T > T_c it is consistent with the conventional expectation of 'spatial' linear confinement. In small spatial volumes this quantity is shown to be non-zero. The way it decreases to zero with increasing volume is shown to be controlled by the (spatial) string tension, and it has the functional form one would expect if the vortices being studied were responsible for the confinement at low T and for the 'spatial' confinement at high T. We also discuss in detail some of the direct numerical evidence for a non-zero spatial string tension at high T, and we show that the observed linearity of the (spatial) potential extends over distances that are large compared to typical high-T length scales.
|
arxiv:hep-lat/0005010
|