text | source
---|---
We present an indirect two-qubit parity meter in planar circuit quantum electrodynamics, realized by discrete interaction with an ancilla and a subsequent projective ancilla measurement with a dedicated, dispersively coupled resonator. Quantum process tomography and successful entanglement by measurement demonstrate that the meter is intrinsically quantum non-demolition. Separate interaction and measurement steps allow commencing subsequent data qubit operations in parallel with ancilla measurement, offering time savings over continuous schemes.
|
arxiv:1311.5530
|
A modified version of Young's experiment by Shahriar Afshar indirectly reveals the presence of a fully articulated interference pattern prior to the post-selection of a particle in a "which-slit" basis. While this experiment does not constitute a violation of Bohr's complementarity principle as claimed by Afshar, both he and many of his critics incorrectly assume that a commonly used relationship between the visibility parameter V and the "which-way" parameter K has crucial relevance to his experiment. It is argued here that this relationship does not apply to this experimental situation and that it is wrong to make any use of it in support of claims for or against the bearing of this experiment on complementarity.
|
arxiv:0801.4757
|
Neutron star matter spans a wide range of densities, from that of nuclei at the surface to exceeding several times normal nuclear matter density in the core. While terrestrial experiments, such as nuclear or heavy-ion collision experiments, provide clues about the behaviour of dense nuclear matter, one must resort to theoretical models of neutron star matter to extrapolate to higher density and finite neutron/proton asymmetry relevant for neutron stars. In this work, we explore the parameter space within the framework of the relativistic mean field model allowed by present uncertainties compatible with state-of-the-art experimental data. We apply a cut-off filter scheme to constrain the parameter space using multi-physics constraints at different density regimes: chiral effective field theory, nuclear and heavy-ion collision data, as well as multi-messenger astrophysical observations of neutron stars. Using the results of the study, we investigate possible correlations between nuclear and astrophysical observables.
|
arxiv:2107.09371
|
Let $A$ be an operator on a separable Hilbert space $\mathcal{H}$, and let $G \subset \mathcal{H}$. It is known that, under appropriate conditions on $A$ and $G$, the set of iterations $F_G(A) = \{A^j g \;|\; g \in G,\; 0 \leq j \leq L(g)\}$ is a frame for $\mathcal{H}$. We call $F_G(A)$ a dynamical frame for $\mathcal{H}$, and explore further its properties; in particular, we show that the canonical dual frame of $F_G(A)$ also has an iterative set structure. We explore the relations between the operator $A$, the set $G$, and the number of iterations $L$ which ensure that the system $F_G(A)$ is a scalable frame. We give a general statement on frame scalability, and we study in detail the case when $A$ is a normal operator, utilizing the unitary diagonalization in finite dimensions. In addition, we answer the question of when $F_G(A)$ is a scalable frame in several special cases involving block-diagonal and companion operators.
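The frame condition above can be checked concretely in finite dimensions: a family $F_G(A) = \{A^j g\}$ is a frame for $\mathbb{R}^d$ exactly when its frame operator $S = \sum_f f f^*$ is positive definite. A minimal numerical sketch (the operator, generators, and iteration count below are invented for illustration, not taken from the paper):

```python
import numpy as np

# Collect the iterations A^j g for g in G, 0 <= j <= L, then test the
# frame condition via the extreme eigenvalues of the frame operator.
def dynamical_frame(A, G, L):
    vecs = []
    for g in G:
        v = g.astype(float)
        for _ in range(L + 1):
            vecs.append(v.copy())  # appends g, A g, ..., A^L g
            v = A @ v
    return np.array(vecs)          # rows are the frame vectors

A = np.array([[0.9, 0.2], [0.0, 0.8]])                  # illustrative operator
G = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]        # illustrative generators
F = dynamical_frame(A, G, L=3)

# Frame bounds = smallest / largest eigenvalues of S = F^T F.
S = F.T @ F
lo, hi = np.linalg.eigvalsh(S)[[0, -1]]
print(lo > 0)  # True: the system spans R^2, hence is a frame
```

Scalability would then ask for weights making the weighted frame operator the identity; the sketch only verifies the plain frame property.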
|
arxiv:1608.05622
|
We develop a quantum circuit model describing unitary interactions between quantum fields and a uniformly accelerated object, and apply it to a semi-transparent mirror which uniformly accelerates in the Minkowski vacuum. The reflection coefficient $R_\omega$ of the mirror varies between 0 and 1, representing a generalization of the perfect mirror ($R_\omega = 1$) discussed extensively in the literature. Our method is non-perturbative, not requiring $R_\omega \sim 0$. We use the circuit model to calculate the radiation from an eternally accelerated mirror and obtain a finite particle flux along the past horizon, provided an appropriate low-frequency regularization is introduced. More importantly, it is straightforward to see from our formalism that the radiation is squeezed. The squeezing is closely related to cutting the correlation across the horizon, which therefore may have important implications for the formation of a black hole firewall.
|
arxiv:1602.02858
|
We exploit the similarities between Tikhonov regularization and Bayesian hierarchical models to propose a regularization scheme that acts like a distributed Tikhonov regularization where the amount of regularization varies from component to component. In the standard formulation, Tikhonov regularization compensates for the inherent ill-conditioning of linear inverse problems by augmenting the data fidelity term measuring the mismatch between the data and the model output with a scaled penalty functional. The selection of the scaling is the core problem in Tikhonov regularization. If an estimate of the amount of noise in the data is available, a popular way is to use the Morozov discrepancy principle, stating that the scaling parameter should be chosen so as to guarantee that the norm of the data fitting error is approximately equal to the norm of the noise in the data. A too small value of the regularization parameter would yield a solution that fits the noise, while a too large value would lead to an excessive penalization of the solution. In many applications, it would be preferable to apply distributed regularization, replacing the regularization scalar by a vector-valued parameter, allowing different regularization for different components of the unknown, or for groups of them. A distributed Tikhonov-inspired regularization is particularly well suited when the data have significantly different sensitivity to different components, or to promote sparsity of the solution. The numerical scheme that we propose, while exploiting the Bayesian interpretation of the inverse problem and identifying the Tikhonov regularization with the maximum a posteriori (MAP) estimation, requires no statistical tools. A combination of numerical linear algebra and optimization tools makes the scheme computationally efficient and suitable for problems where the matrix is not explicitly available.
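The Morozov discrepancy principle described above can be illustrated with plain scalar (non-distributed) Tikhonov regularization: choose the parameter so that the residual norm matches the known noise norm. A minimal sketch on a small synthetic ill-conditioned problem (all problem details here are invented for illustration; the paper develops a distributed, Bayesian-motivated scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.vander(np.linspace(0, 1, n), 8, increasing=True)  # mildly ill-conditioned
x_true = rng.normal(size=8)
noise = 0.01 * rng.normal(size=n)
b = A @ x_true + noise
delta = np.linalg.norm(noise)  # noise level, assumed known

def solve(lam):
    # Tikhonov solution: minimize ||A x - b||^2 + lam ||x||^2
    return np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ b)

def residual(lam):
    return np.linalg.norm(A @ solve(lam) - b)

# The residual grows monotonically in lam, so bisect (geometrically)
# until ||A x - b|| matches the noise norm delta.
lo_l, hi_l = 1e-12, 1e3
for _ in range(100):
    mid = np.sqrt(lo_l * hi_l)
    if residual(mid) < delta:
        lo_l = mid
    else:
        hi_l = mid
lam = np.sqrt(lo_l * hi_l)
print(residual(lam))  # approximately equal to delta
```

The distributed version in the abstract replaces the scalar `lam` by a per-component vector, which this one-parameter search does not capture.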
|
arxiv:2404.05956
|
A central function of code review is to increase understanding; helping reviewers understand a code change aids in knowledge transfer and finding bugs. Comments in code largely serve a similar purpose, helping future readers understand the program. It is thus natural to study what happens when these two forms of understanding collide. We ask: what documentation-related comments do reviewers make, and how do they affect understanding of the contribution? We analyze ca. 700K review comments on 2,000 (Java and Python) GitHub projects, and propose several filters to identify which comments are likely to be either in response to a change in documentation and/or call for such a change. We identify 65K such cases. We next develop a taxonomy of the reviewer intents behind such "comments on comments". We find that achieving a shared understanding of the code is key: reviewer comments most often focused on clarification, followed by pointing out issues to fix, such as typos and outdated comments. Curiously, clarifying comments were frequently suggested (often verbatim) by the reviewer, indicating a desire to persist their understanding acquired during code review. We conclude with a discussion of implications of our comments-on-comments dataset for research on improving code review, including the potential benefits for automating code review.
|
arxiv:2204.00107
|
When searching for a marked vertex in a graph, Szegedy's usual search operator is defined by using the transition probability matrix of the random walk with absorbing barriers at the marked vertices. Instead of using this operator, we analyze searching with Szegedy's quantum walk by using reflections around the marked vertices, that is, the standard form of quantum query. We show we can boost the probability of finding a marked vertex in the complete graph to 1. Numerical simulations suggest that the success probability can be improved for other graphs, like the two-dimensional grid. We also prove that, for a certain class of graphs, we can express Szegedy's search operator, obtained from the absorbing walk, using the standard query model.
|
arxiv:1603.05473
|
Modelling hydrodynamic lubrication is crucial in the design of engineering components as well as for a fundamental understanding of friction mechanisms. The cornerstone of thin-film flow modelling is the Reynolds equation -- a lower-dimensional representation of the Stokes equation. However, the derivation of the Reynolds equation is based on assumptions and fixed-form constitutive relations that may not generally be valid, especially when studying systems under extreme conditions. Furthermore, these explicit assumptions about the constitutive behaviour of the fluid prohibit applications in a multiscale scenario based on measured or atomistically simulated data. Here, we present a method that considers the full compressible Navier-Stokes equation in a height-averaged sense for arbitrary constitutive relations. We perform numerical tests using a reformulation of the viscous stress tensor for laminar flow to validate the presented method against results from conventional Reynolds solutions. The versatility of the method is shown by incorporating models for mass-conserving cavitation, wall slip, and non-Newtonian fluids. This allows testing of new constitutive relations that need not take a fixed form and may be obtained from experimental or simulation data.
|
arxiv:2107.12760
|
Algorithmic latency in speech processing is dominated by the frame length used for Fourier analysis, which in turn limits the achievable performance of magnitude-centric approaches. As previous studies suggest the importance of phase grows with decreasing frame length, this work presents a systematic study of the contribution of phase and magnitude in modern deep neural network (DNN)-based speech enhancement at different frame lengths. Results indicate that DNNs can successfully estimate phase when using short frames, with similar or better overall performance compared to using longer frames. Thus, interestingly, modern phase-aware DNNs allow for low-latency speech enhancement at high quality.
|
arxiv:2203.16222
|
Power system state estimation (PSSE) is commonly formulated as a weighted least-squares (WLS) problem and solved using iterative methods such as the Gauss-Newton method. However, iterative methods have become more sensitive to system operating conditions than ever before due to the deployment of intermittent renewable energy sources, low-carbon technologies (e.g., electric vehicles), and demand response programs. Appropriate PSSE approaches are required to avoid pitfalls of the WLS-based PSSE computations for accurate prediction of operating conditions. This paper proposes a data-driven real-time PSSE using a deep ensemble learning algorithm. In the proposed approach, the ensemble learning setup is formulated with dense residual neural networks as base-learners and a multivariate linear regressor as meta-learner. Historical measurements and states are utilised to train and test the model. The trained model can be used in real time to estimate power system states (voltage magnitudes and phase angles) using real-time measurements. Most current data-driven PSSE methods assume the availability of a complete set of measurements, which may not be the case in real power system data acquisition. This paper adopts multivariate linear regression to forecast system states for instants of missing measurements to assist the proposed PSSE technique. Case studies are performed on various IEEE standard benchmark systems to validate the proposed approach. The results show that the proposed approach outperforms existing data-driven PSSE methods.
|
arxiv:2101.03457
|
Meta-learning enables rapid generalization to new tasks by learning knowledge from various tasks. It is intuitively assumed that as the training progresses, a model will acquire richer knowledge, leading to better generalization performance. However, our experiments reveal an unexpected result: there is negative knowledge transfer between tasks, affecting generalization performance. To explain this phenomenon, we construct structural causal models (SCMs) for causal analysis. Our investigation uncovers the presence of spurious correlations between task-specific causal factors and labels in meta-learning. Furthermore, the confounding factors differ across batches. We refer to these confounding factors as "task confounders". Based on these findings, we propose a plug-and-play Meta-learning Causal Representation Learner (MetaCRL) to eliminate task confounders. It encodes decoupled generating factors from multiple tasks and utilizes an invariant-based bi-level optimization mechanism to ensure their causality for meta-learning. Extensive experiments on various benchmark datasets demonstrate that our work achieves state-of-the-art (SOTA) performance.
|
arxiv:2312.05771
|
We present a human-in-the-loop (HIL) approach to permutation regression, the novel task of predicting a continuous value for a given ordering of items. The model is a gradient boosted regression model that incorporates simple human-understandable constraints of the form x < y, i.e., item x has to be before item y, as binary features. The approach, HuGuR (Human Guided Regression), lets a human explore the search space of such transparent regression models. Interacting with HuGuR, users can add, remove, and refine order constraints interactively, while the coefficients are calculated on the fly. We evaluate HuGuR in a user study and compare the performance of user-built models with multiple baselines on 9 data sets. The results show that the user-built models outperform the compared methods on small data sets and in general perform on par with the other methods, while being in principle understandable for humans. On larger data sets from the same domain, machine-induced models begin to outperform the user-built models. Further work will study the trust users have in models when constructed by themselves and how the scheme can be transferred to other pattern domains, such as strings, sequences, trees, or graphs.
|
arxiv:2502.15992
|
Depression and spike-frequency adaptation are critical neurobiological processes in memory consolidation.
|
arxiv:2404.02938
|
Sterile neutrinos with a mass around the keV scale are an attractive particle physics candidate for warm dark matter. Although many frameworks have been presented in which these neutrinos can fulfill all phenomenological constraints, there are hardly any models known that can explain such a peculiar mass pattern -- one sterile neutrino at the keV scale and the other two considerably heavier -- while at the same time being compatible with low-energy neutrino data. In this paper, we present models based on the Froggatt-Nielsen mechanism, which can give such an explanation. We explain how to assign Froggatt-Nielsen charges in a successful way, and we give a detailed discussion of all conditions to be fulfilled. It turns out that the typical arbitrariness of the charge assignments is greatly reduced when trying to carefully account for all constraints. We furthermore present analytical calculations of a few simplified models, while quasi-perfect models are found numerically.
|
arxiv:1105.5136
|
Self-attention is of vital importance in semantic segmentation as it enables modeling of long-range context, which translates into improved performance. We argue that it is equally important to model short-range context, especially to tackle cases where not only are the regions of interest small and ambiguous, but there also exists an imbalance between the semantic classes. To this end, we propose masked supervised learning (MaskSup), an effective single-stage learning paradigm that models both short- and long-range context, capturing the contextual relationships between pixels via random masking. Experimental results demonstrate the competitive performance of MaskSup against strong baselines in both binary and multi-class segmentation tasks on three standard benchmark datasets, particularly at handling ambiguous regions and retaining better segmentation of minority classes with no added inference cost. In addition to segmenting target regions even when large portions of the input are masked, MaskSup is also generic and can be easily integrated into a variety of semantic segmentation methods. We also show that the proposed method is computationally efficient, yielding a 10\% improvement in mean intersection-over-union (mIoU) while requiring $3\times$ fewer learnable parameters.
|
arxiv:2210.00923
|
Wide-coverage and high-precision rural household wealth data are an important support for the effective connection between national macro rural revitalization policy and micro rural entities, which helps to achieve precise allocation of national resources. However, due to the large number and wide distribution of rural areas, wealth data are difficult to collect and scarce in quantity. Therefore, this article attempts to integrate "sky" remote sensing images with "ground" village street view imagery to construct a fine-grained "computable" technical route for rural household wealth. With the intelligent interpretation of rural houses as the core, the relevant wealth elements of the image data were extracted and identified, and regressed with the household wealth indicators of the benchmark questionnaire to form a high-precision township-scale wealth prediction model (R = 0.85); furthermore, national- and township-scale maps of rural household wealth in China were drawn. Based on this, this article finds that there is a "bimodal" pattern in the distribution of wealth among rural households in China, reflected spatially in a polarization of "high in the south and low in the north, high in the east and low in the west". This technological route may provide alternative solutions with wider spatial coverage and higher accuracy than high-cost manual surveys, promote the identification of shortcomings in rural construction, and support the precise implementation of rural policies.
|
arxiv:2502.12163
|
It is well known that in a small P\'olya urn, i.e., an urn where the second largest real part of an eigenvalue is at most half the largest eigenvalue, the distribution of the numbers of balls of different colours in the urn is asymptotically normal under weak additional conditions. We consider the balanced case, and then give asymptotics of the mean and the covariance matrix, showing that after appropriate normalization, the mean and covariance matrix converge to the mean and variance of the limiting normal distribution.
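A small urn in the sense above can be illustrated by simulation. A minimal sketch (this particular Friedman-type urn and all its parameters are chosen for illustration, not taken from the paper): draw a ball, return it, add 2 balls of the drawn colour and 1 of the other. The replacement matrix has eigenvalues 3 and 1, so the ratio 1/3 is at most 1/2 and the urn is small; the colour fractions concentrate around 1/2.

```python
import numpy as np

rng = np.random.default_rng(3)
runs, steps = 300, 1000
fracs = np.empty(runs)
for r in range(runs):
    white, black = 1, 1                       # balanced start
    for _ in range(steps):
        if rng.random() < white / (white + black):
            white += 2; black += 1            # drew white: add 2 white, 1 black
        else:
            black += 2; white += 1            # drew black: add 2 black, 1 white
    fracs[r] = white / (white + black)

print(fracs.mean())  # close to 1/2; per-run fluctuations shrink like steps**-0.5
```

The asymptotic normality of the counts themselves is what the paper's mean and covariance asymptotics make precise.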
|
arxiv:1602.06203
|
We study the quasiprobability representation of quantum light, as introduced by Glauber and Sudarshan, for the unified characterization of quantum phenomena. We begin by reviewing the past and current impact of this technique. Regularization and convolution methods are specifically considered, since they are accessible in experiments. We further discuss more general quantum systems for which the concept of negative probabilities can be generalized, being highly relevant for quantum information science. For analyzing quantum superpositions, we apply recently developed approaches to visualize quantum coherence of states via negative quasiprobability representations, including regularized quasiprobabilities for light and more general quantum correlated systems.
|
arxiv:1907.12427
|
The interaction of ultra-short laser pulses with a dense cold plasma is investigated. Due to the high density of the plasma, quantum effects such as the Bohm potential and quantum pressure should be considered. The results reveal that the electron density function is modulated by the laser light in the propagation direction. This modulation can be controlled by the amplitude of the laser intensity and the effective plasma parameters. For some special values of the involved parameters, the electron density becomes spatially localized in quenches. Increasing the quantum coefficient tends to rarefy the high electron density regions, since the total number of electrons is constant. Hence, our theory predicts plasma expansion in the direction of the laser light due to quantum effects.
|
arxiv:1904.00754
|
Entropy estimation, due in part to its connection with mutual information, has seen considerable use in the study of time series data, including causality detection and information flow. In many cases, the entropy is estimated using $k$-nearest neighbor (Kozachenko-Leonenko) based methods. However, analytic results on this estimator are limited to independent data. In this article, we show rigorous bounds on the rate of decay of the bias in the number of samples, $N$, assuming they are drawn from a stationary process which satisfies a suitable mixing condition. Numerical examples are presented which demonstrate the efficiency of the estimator when applied to a Markov process with stationary Gaussian density. These results support the asymptotic rates derived in the theoretical work.
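The Kozachenko-Leonenko estimator referenced above has a short closed form: $\hat H = \psi(N) - \psi(k) + \log c_d + \frac{d}{N}\sum_i \log \varepsilon_i$, with $\varepsilon_i$ the distance to the $k$-th nearest neighbour and $c_d$ the unit-ball volume. A minimal one-dimensional sketch with $k = 1$ on i.i.d. Gaussian data (the paper's setting is dependent, mixing data; this only illustrates the estimator itself):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000
x = rng.normal(size=N)

# Nearest-neighbour distances in 1-D via sorting.
xs = np.sort(x)
gaps = np.diff(xs)
eps = np.empty(N)
eps[0], eps[-1] = gaps[0], gaps[-1]
eps[1:-1] = np.minimum(gaps[:-1], gaps[1:])

gamma = 0.5772156649015329                       # Euler-Mascheroni constant
psi_N = -gamma + np.sum(1.0 / np.arange(1, N))   # digamma(N)
psi_1 = -gamma                                   # digamma(1), i.e. k = 1
# d = 1, so the unit-ball "volume" is c_1 = 2.
H_hat = psi_N - psi_1 + np.log(2.0) + np.mean(np.log(eps))

H_true = 0.5 * np.log(2 * np.pi * np.e)  # analytic entropy of N(0, 1)
print(H_hat, H_true)                     # the estimate lands close to the truth
```

The bias bounds in the paper concern exactly how fast `H_hat - H_true` decays in $N$ once the samples are dependent.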
|
arxiv:1904.05850
|
We study the effects of large-scale density fluctuations on strong gravitational lensing. Previous studies have focused mostly on weak lensing, since large-scale structure alone cannot produce multiple images. When a galaxy or cluster acts as a primary lens, however, we find that large-scale structure can produce asymmetric shear of the same order as the lens itself. Indeed, this may explain the origin of the large shear found in lens models in conflict with the small ellipticity of the observed galaxy light distributions. We show that large-scale structure changes the lens equation to the form of a generalized quadrupole lens, which affects lens reconstruction. Large-scale structure also changes the angular diameter distance at a given redshift. The precise value depends on the lens and source redshifts and on the large-scale structure power spectrum, but the induced $1\sigma$ uncertainty in determinations of the Hubble constant from measurements of time delays is of order $5-10\%$. If observations of lensing can constrain the magnitude of the shear which is due to large-scale structure, it would provide a direct probe of the overall amplitude of mass fluctuations.
|
arxiv:astro-ph/9511056
|
With the epochal first detection of gravitational waves from a binary neutron star (NS) merger with the GW170817 event, and its direct confirmation that NS-NS mergers are significant sources of the r-process nucleosynthesis of heavy elements, an immense new arena for prompt EM (X-rays through IR and radio) studies of fundamental physics has been opened. Over the next decade, GW observatories will expand in scale and sensitivity, so the need for facilities that can provide prompt, high-sensitivity, broad-band EM followup becomes more urgent. NS-NS or NS-black hole (BH) mergers will be instantly recognized (and announced) by the LIGO-international collaboration. LSST will be a prime resource for rapid tiling of what will usually be large (~10-100 square degree) error boxes. X-ray through IR telescopes in space with (nearly) full-sky access that can rapidly image and tile are crucial for providing the earliest imaging and spectroscopic studies of the kilonova emission immediately following NS-NS mergers. The Time-domain Spectroscopic Observatory (TSO) is a proposed probe-class 1.3 m telescope at L2, with imaging and spectroscopy (R = 200, 1800) in 4 bands (0.3-5 micron) and rapid slew capability to 90% of the sky. TSO NUV-mid-IR spectra will enable new constraints on NS structure and nucleosynthesis.
|
arxiv:1903.05736
|
Deep imbalanced regression (DIR), where the target values have a highly skewed distribution and are also continuous, is an intriguing yet under-explored problem in machine learning. While recent works have already shown that incorporating various classification-based regularizers can produce enhanced outcomes, the role of classification remains elusive in DIR. Moreover, such regularizers (e.g., contrastive penalties) merely focus on learning discriminative features of the data, which inevitably ignores either the continuity or the similarity across the data. To address these issues, we first bridge the connection between the objectives of DIR and classification from a Bayesian perspective. Consequently, this motivates us to decompose the objective of DIR into a combination of classification and regression tasks, which naturally guides us toward a divide-and-conquer manner of solving the DIR problem. Specifically, by aggregating the data at nearby labels into the same groups, we introduce an ordinal group-aware contrastive learning loss along with a multi-expert regressor to tackle the different groups of data, thereby maintaining the data continuity. Meanwhile, considering the similarity between the groups, we also propose a symmetric descending soft labeling strategy to exploit the intrinsic similarity across the data, which allows classification to facilitate regression more effectively. Extensive experiments on real-world datasets validate the effectiveness of our method.
|
arxiv:2412.12327
|
We perform a systematic WKB expansion to all orders for a one-dimensional system with potential $V(x) = U_0/\cos^2(\alpha x)$. We are able to sum the series to the exact energy spectrum. Then we show that at any finite order the error of the WKB approximation, measured in the natural units of the mean energy level spacing, does not go to zero when the quantum number goes to infinity. Therefore we make the general conclusion that the semiclassical approximations fail to predict the individual energy levels within a vanishing fraction of the mean energy level spacing.
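The leading-order (Bohr-Sommerfeld) step for this potential can be sketched explicitly; the closed-form action integral and the exact trigonometric Poschl-Teller spectrum quoted below are standard textbook results reproduced from memory, not taken from the paper, and only illustrate the lowest order of the expansion:

```latex
% Bohr--Sommerfeld quantization for V(x) = U_0/\cos^2(\alpha x):
% the action integral between the turning points evaluates in closed form,
\int_{-x_t}^{x_t} \sqrt{2m\Bigl(E - \frac{U_0}{\cos^2(\alpha x)}\Bigr)}\,dx
  = \frac{\pi}{\alpha}\Bigl(\sqrt{2mE} - \sqrt{2mU_0}\Bigr)
  = \pi\hbar\Bigl(n + \tfrac{1}{2}\Bigr),
% giving the leading-order WKB levels
E_n^{\mathrm{WKB}} = \frac{\hbar^2\alpha^2}{2m}
  \Bigl(n + \tfrac{1}{2} + \frac{\sqrt{2mU_0}}{\hbar\alpha}\Bigr)^2,
% while the exact spectrum, with U_0 = \hbar^2\alpha^2\lambda(\lambda-1)/(2m), is
E_n = \frac{\hbar^2\alpha^2}{2m}\,(n + \lambda)^2,
\qquad
\lambda = \tfrac{1}{2} + \sqrt{\tfrac{1}{4} + \frac{2mU_0}{\hbar^2\alpha^2}}.
```

The constant offset $\lambda - \bigl(\tfrac12 + \sqrt{2mU_0}/\hbar\alpha\bigr)$ does not vanish, so the absolute error of $E_n^{\mathrm{WKB}}$ grows linearly in $n$, at the same rate as the level spacing itself, consistent with the claim that the error in units of the mean level spacing does not tend to zero.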
|
arxiv:quant-ph/9610027
|
We report the discovery of Qatar-6b, a new transiting planet identified by the Qatar Exoplanet Survey (QES). The planet orbits a relatively bright (V = 11.44), early-K main-sequence star at an orbital period of P ~ 3.506 days. An SED fit to available multi-band photometry, ranging from the near-UV to the mid-IR, yields a distance of d = 101 +/- 6 pc to the system. From a global fit to follow-up photometric and spectroscopic observations, we calculate the mass and radius of the planet to be Mp = 0.67 +/- 0.07 MJup and Rp = 1.06 +/- 0.07 RJup, respectively. We use multi-color photometric light curves to show that the transit is grazing, making Qatar-6b one of the few exoplanets known in a grazing transit configuration. It adds to the short list of targets that offer the best opportunity to look for additional bodies in the host planetary system through variations in the transit impact parameter and duration.
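The quoted mass and radius imply a bulk density, which is a quick back-of-the-envelope check (the Jupiter constants below are standard reference values, not from the paper):

```python
import math

M_JUP = 1.898e27   # kg, Jupiter mass
R_JUP = 7.1492e7   # m, Jupiter equatorial radius

m_p = 0.67 * M_JUP                               # Mp from the abstract
r_p = 1.06 * R_JUP                               # Rp from the abstract
rho = m_p / (4.0 / 3.0 * math.pi * r_p ** 3)     # mean density, kg/m^3
print(rho)  # roughly 700 kg/m^3, about half Jupiter's mean density
```

A sub-Jovian density like this is typical of mildly inflated hot Jupiters, consistent with the planet's short 3.5-day orbit.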
|
arxiv:1712.03216
|
=== Electrical engineering ===
Timeline of electrical and electronic engineering
=== Energy ===
=== Materials science ===
Timeline of materials technology; Metallurgy
=== Measurement ===
History of time in the United States; History of timekeeping devices
=== Medicine ===
=== Military ===
Military history#Technological evolution; Category:Military history – articles on history of specific technologies
=== Nuclear ===
Manhattan Project; Atomic Age; Nuclear testing; Nuclear arms race
=== Science and technology ===
=== Transport ===
== See also ==
=== Related history ===
=== Related disciplines ===
=== Related subjects ===
== References ==
== Further reading ==
== External links ==
Electropaedia on the history of technology (archived 2011-05-12 at the Wayback Machine). MIT 6.933J – The Structure of Engineering Revolutions: course materials (graduate level) from MIT OpenCourseWare for a course on the history of technology through a Thomas Kuhn-ian lens. Concept of Civilization Events, from Jaroslaw Kessler, a chronology of "civilizing events". Ancient and medieval city technology. Society for the History of Technology. Giants of Science (website of the Institute of National Remembrance).
|
https://en.wikipedia.org/wiki/History_of_technology
|
It is shown that recently measured cross sections for double ionization of negative ions ($H^-$, $O^-$, and $C^-$) possess a universal shape when plotted in suitable dimensionless units. The shape can be represented with a simple analytical function, following the same principles as in establishing a universal shape function for single ionization [Rost and Pattard 1997, Phys. Rev. A 55, R5]. Thereby, it is demonstrated that direct double ionization dominates the cross section for the targets considered.
|
arxiv:physics/9904032
|
Motivated by the recent synthesis of $\beta$-Li$_2$IrO$_3$ -- a spin-orbit entangled $j = 1/2$ Mott insulator with a three-dimensional lattice structure of the Ir$^{4+}$ ions -- we consider generalizations of the Kitaev model believed to capture some of the microscopic interactions between the iridium moments on various trivalent lattice structures in three spatial dimensions. Of particular interest is the so-called hyperoctagon lattice -- the premedial lattice of the hyperkagome lattice -- for which the ground state is a gapless quantum spin liquid where the gapless Majorana modes form an extended two-dimensional Majorana Fermi surface. We demonstrate that this Majorana Fermi surface is inherently protected by lattice symmetries and discuss possible instabilities. We thus provide the first example of an analytically tractable microscopic model of interacting SU(2) spin-1/2 degrees of freedom in three spatial dimensions that harbors a spin liquid with a two-dimensional spinon Fermi surface.
|
arxiv:1401.7678
|
The holy grail of networking is to create \textit{cognitive networks} that organize, manage, and drive themselves. Such a vision now seems attainable thanks in large part to the progress in the field of machine learning (ML), which has already disrupted a number of industries and revolutionized practically all fields of research. But are the ML models foolproof and robust to security attacks to be in charge of managing the network? Unfortunately, many modern ML models are easily misled by simple and easily-crafted adversarial perturbations, which does not bode well for the future of ML-based cognitive networks unless ML vulnerabilities for the cognitive networking environment are identified, addressed, and fixed. The purpose of this article is to highlight the problem of insecure ML and to sensitize the readers to the danger of adversarial ML by showing how an easily-crafted adversarial ML example can compromise the operations of the cognitive self-driving network. In this paper, we demonstrate adversarial attacks on two simple yet representative cognitive networking applications (namely, intrusion detection and network traffic classification). We also provide some guidelines to design secure ML models for cognitive networks that are robust to adversarial attacks on the ML pipeline of cognitive networks.
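The "easily-crafted adversarial perturbation" idea can be sketched in a few lines with a gradient-sign (FGSM-style) attack on a toy logistic-regression classifier. This is purely illustrative: the paper attacks DNN-based intrusion detection and traffic classification, not this model, and the data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 400, 10
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)   # toy "benign/malicious" labels

# Train logistic regression by plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

def acc(Xe):
    return np.mean(((Xe @ w) > 0) == y)

# FGSM-style attack: step each input in the sign of its loss gradient.
# For logistic loss, grad_x loss = (p - y) * w.
p = 1 / (1 + np.exp(-X @ w))
X_adv = X + 0.5 * np.sign((p - y)[:, None] * w[None, :])

print(acc(X), acc(X_adv))  # clean accuracy is high; adversarial accuracy collapses
```

The same one-step perturbation logic, scaled to DNN inputs, is what makes the attacks in the article cheap to mount against cognitive networking pipelines.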
|
arxiv:1906.00679
|
Mobile robots rely on SLAM (simultaneous localization and mapping) for autonomous navigation and task execution in complex and unknown environments. However, it is hard to develop a dedicated algorithm for mobile robots in dynamic and challenging situations, such as poor lighting conditions and motion blur. To tackle this issue, we propose a tightly-coupled LiDAR-visual SLAM based on geometric features, which includes two sub-systems (LiDAR and monocular visual SLAM) and a fusion framework. The fusion framework associates the depth and semantics of the multi-modal geometric features to complement the visual line landmarks and to add direction optimization in bundle adjustment (BA). This further constrains visual odometry. On the other hand, the entire line segment detected by the visual subsystem overcomes the limitation of the LiDAR subsystem, which can only perform a local calculation for geometric features. It adjusts the direction of linear feature points and filters out outliers, leading to a more accurate odometry system. Finally, we employ a module to detect the subsystems' operation, providing the LiDAR subsystem's output as a complementary trajectory to our system when visual subsystem tracking fails. The evaluation results on the public dataset M2DGR, gathered from ground robots across various indoor and outdoor scenarios, show that our system achieves more accurate and robust pose estimation compared to current state-of-the-art multi-modal methods.
|
arxiv:2307.07763
|
endowing continuum robots with compliance while they interact with the internal environment of the human body is essential to prevent damage to the robot and the surrounding tissues. compared with passive compliance, active compliance has advantages in terms of increasing the force transmission ability and improving safety with monitored force output. previous studies have demonstrated that active compliance can be achieved based on a complex model of the mechanics combined with a traditional machine learning technique such as a support vector machine. this paper proposes a recurrent neural network ( rnn ) based approach that avoids the complexity of modeling while capturing nonlinear factors such as hysteresis, friction and delay of the electronics that are not easy to model. the approach is tested on a 3 - tendon single - segment continuum robot with force sensors on each cable. experiments are conducted to demonstrate that the continuum robot with an rnn based feed - forward controller is capable of responding to external forces quickly and entering an unknown environment compliantly.
|
arxiv:1902.08943
|
we use bounds of character sums and some combinatorial arguments to show the abundance of very smooth numbers which also have very few non - zero binary digits.
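A quick numerical illustration of the objects in question (our own toy computation, not the paper's character-sum argument): enumerate numbers with exactly 3 non-zero binary digits below 2^20 and pick out those that are 7-smooth, i.e. whose largest prime factor is at most 7.

```python
from itertools import combinations

def largest_prime_factor(n):
    # trial division; fine for the small numbers used here
    p, f = 2, 1
    while p * p <= n:
        while n % p == 0:
            f, n = p, n // p
        p += 1
    return max(f, n) if n > 1 else f

# all numbers below 2**20 with exactly 3 one-bits
sparse = [sum(1 << i for i in c) for c in combinations(range(20), 3)]

# those that are also 7-smooth (largest prime factor <= 7)
smooth_and_sparse = [n for n in sparse if largest_prime_factor(n) <= 7]

print(len(sparse), len(smooth_and_sparse))
```

Examples such as 7 = 111_2 and 21 = 10101_2 are both 7-smooth and have only three non-zero binary digits; the paper shows (by much subtler means) that such coincidences remain abundant at scale.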
|
arxiv:2212.10209
|
= = = progress, practice and consistency = = = philosopher paul thagard believed that astrology cannot be regarded as falsified in this sense until it has been replaced with a successor. in the case of predicting behaviour, psychology is the alternative. : 228 to thagard a further criterion of demarcation of science from pseudoscience was that the state of the art must progress and that the community of researchers should be attempting to compare the current theory to alternatives, and not be " selective in considering confirmations and disconfirmations ". : 227 – 228 progress is defined here as explaining new phenomena and solving existing problems, yet astrology has failed to progress, having changed little in nearly 2000 years. : 228 : 549 to thagard, astrologers are acting as though engaged in normal science, believing that the foundations of astrology were well established despite the " many unsolved problems ", and in the face of better alternative theories ( psychology ). for these reasons thagard viewed astrology as pseudoscience. : 228 to thagard, astrology should not be regarded as a pseudoscience merely on the failure of gauquelin to find any correlation between the various astrological signs and someone ' s career, twins not showing the expected correlations from having the same signs in twin studies, lack of agreement on the significance of the planets discovered since ptolemy ' s time, and large - scale disasters wiping out individuals with vastly different signs at the same time. : 226 – 227 rather, his demarcation of science requires three distinct foci : " theory, community [ and ] historical context ". while verification and falsifiability focused on the theory, kuhn ' s work focused on the historical context, but the astrological community should also be considered. whether or not they : : 226 – 227 are focused on comparing their approach to others. have a consistent approach. try to falsify their theory through experiment. 
in this approach, true falsification, as opposed to modifying a theory to avoid falsification, only really occurs when an alternative theory is proposed. : 228 = = = irrationality = = = for the philosopher edward w. james, astrology is irrational not because of the numerous problems with mechanisms and falsification due to experiments, but because an analysis of the astrological literature shows that it is infused with fallacious logic and poor reasoning. : 34 what if throughout astrological writings we meet little appreciation of coherence, blatant insensitivity to evidence, no sense of a
|
https://en.wikipedia.org/wiki/Astrology_and_science
|
a \ emph { mixed graph } is a graph with directed edges, called arcs, and undirected edges. a $ k $ - coloring of the vertices is proper if colors from $ \ { 1, 2, \ ldots, k \ } $ are assigned to each vertex such that $ u $ and $ v $ have different colors if $ uv $ is an edge, and the color of $ u $ is less than or equal to ( resp. strictly less than ) the color of $ v $ if $ uv $ is an arc. the weak ( resp. strong ) chromatic polynomial of a mixed graph counts the number of proper $ k $ - colorings. using order polynomials of partially ordered sets, we establish a reciprocity theorem for weak chromatic polynomials giving interpretations of evaluations at negative integers.
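The weak and strong chromatic polynomials can be evaluated by brute force on a small example. The mixed graph below (three vertices, one undirected edge {0,1}, one arc 0 → 2) is our own toy, not one taken from the paper:

```python
from itertools import product

edges = [(0, 1)]   # undirected: endpoint colors must differ
arcs = [(0, 2)]    # directed: color(u) <= color(v) (weak) or < (strong)

def chromatic(k, strict):
    # count proper k-colorings of the 3-vertex mixed graph
    count = 0
    for coloring in product(range(1, k + 1), repeat=3):
        if any(coloring[u] == coloring[v] for u, v in edges):
            continue
        if strict:
            ok = all(coloring[u] < coloring[v] for u, v in arcs)
        else:
            ok = all(coloring[u] <= coloring[v] for u, v in arcs)
        if ok:
            count += 1
    return count

weak = [chromatic(k, strict=False) for k in range(1, 5)]
strong = [chromatic(k, strict=True) for k in range(1, 5)]
print(weak, strong)
```

For this graph the weak polynomial is k(k-1)(k+1)/2 and the strong one is k(k-1)^2/2, which the brute-force counts reproduce at k = 1, …, 4.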
|
arxiv:1210.4634
|
we discuss various mechanisms for the creation of an asymmetric charge fluctuation with respect to the reaction plane among hadrons emitted in relativistic heavy - ion collisions. we show that such mechanisms exist in both the hadronic gas and the partonic phases of qcd. the mechanisms considered here all require the presence of a strong magnetic field ( the " chiral magnetic effect " ), but they do not involve parity or charge - parity violations. we analyze how a transient local electric current fluctuation generated by the chiral magnetic effect can dynamically evolve into an asymmetric charge distribution among the final - state hadrons in momentum space. we estimate the magnitude of the event - by - event fluctuations of the final - state charge asymmetry due to the partonic and hadronic mechanisms.
|
arxiv:1003.2436
|
scattering scanning near - field optical microscopy enables optical imaging and characterization of plasmonic devices with nanometer - scale resolution well below the diffraction limit. this technique enables developers to probe and understand the waveguide - coupled plasmonic antenna in as - fabricated heat - assisted magnetic recording heads. in order to validate predictions and to extract information from experimental measurements that is physically comparable to simulations, a model was developed to translate the simulated electric field into expected near - field measurements using physical parameters specific to scattering scanning near - field optical microscopy physics. the methods used in this paper show that scattering scanning near - field optical microscopy can be used to determine critical sub - diffraction - limited dimensions of optical field confinement, which is a crucial metrology requirement for the future of nano - optics, semiconductor photonic devices, and biological sensing where the near - field character of light is fundamental to device operation.
|
arxiv:1802.05259
|
we prove a homotopy theorem for sheaves. its application shortens and simplifies the proof of many oka principles such as gromov ' s oka principle for elliptic submersions.
|
arxiv:1812.01530
|
[ abridged ] we present the first statistical analysis of the complete eso - sculptor redshift survey ( ess ). the flux - calibrated sample of 617 galaxy spectra with r _ c < 20. 5 is separated into 3 spectral classes ( early, intermediate, and late ). we report in detail on the spectral classification, the polynomial k - corrections, and all sources of corresponding random and systematic errors. the derived luminosity functions ( lf ) in the johnson - cousins b and rc bands are in agreement with the results from the comparable cnoc2 survey ( lin et al. 1999 ), whereas the ess provides the first estimates of lfs per spectral type in the johnson v band. a renewed interpretation of the galaxy lfs from a redshift survey is obtained by fitting the ess lfs with composite functions based on the local lfs per morphological type ( sandage, binggeli & tammann 1985 ; jerjen & tammann 1997 ). as good or better fits than with pure schechter functions are obtained using : for the early - type lf, a two - wing gaussian ; for the intermediate - type lf, the sum of a gaussian modeling the spiral galaxies and a steep schechter function ( alpha = - 1. 5 ) representing the dwarf elliptical galaxies ; for the late - type lf, a similar composite function with a flat or weaker slope ( - 0. 8 < alpha < - 0. 3 ) for the schechter component which represents the dwarf irregular galaxies. this analysis illustrates how lfs per spectral type may be affected by morphological type mixing, and emphasizes the need for a quantitative morphological classification at z > 0. 1 which separates the giant and dwarf galaxy populations.
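For reference, the schechter function used in fits like these has, in absolute magnitudes, the standard form phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x) with x = 10^(0.4 (M* - M)). The sketch below evaluates it with placeholder parameter values, not the ess fits:

```python
import math

def schechter(M, phi_star=1.0e-2, M_star=-20.5, alpha=-1.0):
    # schechter luminosity function per unit magnitude;
    # phi_star, M_star, alpha here are illustrative placeholders
    x = 10 ** (0.4 * (M_star - M))
    return 0.4 * math.log(10) * phi_star * x ** (alpha + 1) * math.exp(-x)

vals = [schechter(M) for M in (-22.0, -20.5, -18.0)]
print([f"{v:.3e}" for v in vals])
```

With alpha = -1 the faint end is flat in x, so the exponential cutoff dominates at the bright end: the function is larger at M = -18 than at M = -22, the qualitative shape the composite fits in the abstract build on.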
|
arxiv:astro-ph/0301339
|
purpose : the purpose of this paper is to investigate the impact of the cooperative principle on information quality ( iq ) by making objects more relevant for consumer needs, in this particular case wikipedia articles for students. design / methodology / approach : the authors performed a quantitative study with participants being invited to complete an online survey. each rater evaluated three selected and re - written articles from wikipedia on four iq dimensions ( accuracy, completeness, objectivity, and representation ). grice ' s maxims and submaxims were used to re - write articles and make them more relevant for student cognitive needs. the results were analyzed with statistical methods of mean, standard deviation, cronbach ' s alpha, and icc ( two - way random model of single measure ). findings : the study demonstrates that wikipedia articles can be made more relevant for student needs by using the cooperative principle, with an increase in iq and a higher consistency of students ' scores than in recent research. in particular, students in the study perceived the abstract constructed with the cooperative principle as more objective and complete than reported in recent research. practical implications : the work can benefit encyclopedia editors to improve the iq of existing articles as well as consumers, who would obtain more relevant information in less reading time. originality / value : this is one of the first attempts to empirically investigate the application of the cooperative principle to make objects more relevant for consumer needs and its impact on iq. iq improvement evidence is provided, with impacts on iq dimensions such as objectivity, completeness, accuracy, and representation for the research community to validate and compare results.
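One of the statistics named above, cronbach's alpha, is simple to compute: alpha = k/(k-1) * (1 - sum of item variances / variance of rater totals). The rating matrix below is fabricated for illustration; it is not data from the study.

```python
import numpy as np

ratings = np.array([   # rows = raters, columns = the four iq dimensions
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [4, 4, 4, 4],
], dtype=float)

def cronbach_alpha(x):
    # x: raters-by-items matrix of scores
    k = x.shape[1]                        # number of items
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(round(cronbach_alpha(ratings), 3))
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency; for this made-up matrix alpha comes out near 0.87.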
|
arxiv:1807.03774
|
distributed optimization problems have received much attention due to their privacy preservation, parallel computation, reduced communication, and strong robustness. this paper presents and studies the time - varying distributed optimization problem for a class of stochastic multi - agent systems for the first time. to this end, we initially propose a protocol in the centralized case that allows the tracking error of the agent with respect to the optimal trajectory to be exponentially ultimately bounded in a mean - square sense by stochastic lyapunov theory. we then generalize this to the distributed case, where the global variable can be accurately estimated in fixed time by our proposed estimator. based on this estimator, we design a new distributed protocol, and the results demonstrate that the tracking error of all agents with respect to the optimal trajectory is exponentially ultimately bounded in a mean - square sense by stochastic lyapunov theory. finally, simulation experiments are conducted to validate the findings.
|
arxiv:2503.12934
|
we study electrical transport properties in exfoliated molybdenum disulfide ( mos2 ) back - gated field effect transistors at low drain bias and under different illumination intensities. it is found that photoconductive and photogating effect as well as space charge limited conduction can simultaneously occur. we point out that the photoconductivity increases logarithmically with the light intensity and can persist with a decay time longer than 10 ^ 4 s, due to photo - charge trapping at the mos2 / sio2 interface and in mos2 defects. the transfer characteristics present hysteresis that is enhanced by illumination. at low drain bias, the devices feature low contact resistance of 1. 4 k { \ omega } / { \ mu } m, on current as high as 1. 25 na / { \ mu } m, 10 ^ 5 on - off ratio, mobility of 1 cm ^ 2 / vs and photoresponsivity r = 1 a / w.
|
arxiv:1703.08420
|
motivated by the limit mixed hodge structure on the milnor fiber of a hypersurface singularity germ, we construct a natural mixed hodge structure on the torsion part of the alexander modules of a smooth connected complex algebraic variety. more precisely, let $ u $ be a smooth connected complex algebraic variety and let $ f \ colon u \ to \ mathbb { c } ^ * $ be an algebraic map inducing an epimorphism in fundamental groups. the pullback of the universal cover of $ \ mathbb { c } ^ * $ by $ f $ gives rise to an infinite cyclic cover $ u ^ f $ of $ u $. the action of the deck group $ \ mathbb { z } $ on $ u ^ f $ induces a $ \ mathbb { q } [ t ^ { \ pm 1 } ] $ - module structure on $ h _ * ( u ^ f ; \ mathbb { q } ) $. we show that the torsion parts $ a _ * ( u ^ f ; \ mathbb { q } ) $ of the alexander modules $ h _ * ( u ^ f ; \ mathbb { q } ) $ carry canonical $ \ mathbb { q } $ - mixed hodge structures. we also prove that the covering map $ u ^ f \ to u $ induces a mixed hodge structure morphism on the torsion parts of the alexander modules. as applications, we investigate the semisimplicity of $ a _ * ( u ^ f ; \ mathbb { q } ) $, as well as possible weights of the constructed mixed hodge structures. finally, in the case when $ f \ colon u \ to \ mathbb { c } ^ * $ is proper, we prove the semisimplicity and purity of $ a _ * ( u ^ f ; \ mathbb { q } ) $, and we compare our mixed hodge structure on $ a _ * ( u ^ f ; \ mathbb { q } ) $ with the limit mixed hodge structure on the generic fiber of $ f $.
|
arxiv:2002.01589
|
in the last few years, much progress has been made in the computation of one - loop virtual matrix elements for processes involving many external particles. in this contribution the methods that have enabled this recent progress are briefly reviewed with a focus on their computing and automation aspects.
|
arxiv:1006.5594
|
the emergence of quantum computing poses a formidable security challenge to network protocols traditionally safeguarded by classical cryptographic algorithms. this paper provides an exhaustive analysis of vulnerabilities introduced by quantum computing in a diverse array of widely utilized security protocols across the layers of the tcp / ip model, including tls, ipsec, ssh, pgp, and more. our investigation focuses on precisely identifying vulnerabilities susceptible to exploitation by quantum adversaries at various migration stages for each protocol while also assessing the associated risks and consequences for secure communication. we delve deep into the impact of quantum computing on each protocol, emphasizing potential threats posed by quantum attacks and scrutinizing the effectiveness of post - quantum cryptographic solutions. through carefully evaluating vulnerabilities and risks that network protocols face in the post - quantum era, this study provides invaluable insights to guide the development of appropriate countermeasures. our findings contribute to a broader comprehension of quantum computing ' s influence on network security and offer practical guidance for protocol designers, implementers, and policymakers in addressing the challenges stemming from the advancement of quantum computing. this comprehensive study is a crucial step toward fortifying the security of networked environments in the quantum age.
|
arxiv:2404.08232
|
we use indecomposable ultrafilters to answer some questions from the paper " spectra of uniformity " by hayut and karagila. it is shown that the bound on the strength given by t. usuba in " a note on uniform ultrafilters in choiceless context " is optimal.
|
arxiv:2403.09329
|
cans - fizzy - - fizzy for short - - is a gpu - accelerated numerical solver for massively - parallel direct numerical simulations ( dns ) of incompressible two - phase flows. a dns enables direct access to all flow quantities, resolved in time and space at all relevant continuum scales. the resulting numerical experiments provide complete data sets for the analysis of the detailed mechanisms underlying the flow, particularly the interaction between the chaotic and multi - scale dynamics of turbulence and the interface movement and deformation. the insights gained can guide the design and operation of various applications, such as boiling heat transfer, liquid - liquid extraction, gas - liquid reactors, absorption and stripping columns, distillation columns, liquid combustion appliances, in all of which the rate of heat and mass transfer between phases is proportional to the interfacial area. fizzy ' s two - phase capabilities were implemented using the efficient, gpu - accelerated navier - stokes solver cans as base.
|
arxiv:2502.04189
|
the dynamical model developed in [ phys. rev. c 54, 2660 ( 1996 ) ] has been applied to investigate the pion electroproduction reactions on the nucleon. it is found that the model can describe to a very large extent the recent data of p ( e, e ' pi ^ 0 ) reaction from jefferson laboratory and mit - bates. the extracted magnetic dipole ( m1 ), electric dipole ( e2 ), and coulomb ( c2 ) strengths of the gamma n - > delta transition are presented. it is found that the c2 / m1 ratio drops significantly with q ^ 2 and reaches about - 13 % at q ^ 2 = 4 ( gev / c ) ^ 2, while the e2 / m1 ratio remains close to the value \ sim - 3 % at the q ^ 2 = 0 photon point. the determined m1 transition form factor drops faster than the usual dipole form factor of the proton. we also find that the non - resonant interactions can dress the gamma n - > delta vertex to enhance strongly its strength at low q ^ 2, but much less at high q ^ 2. predictions are presented for future experimental tests. possible developments of the model are discussed.
|
arxiv:nucl-th/0010025
|
the point that it would, in theory, be possible to ( as feynman put it ) " swallow the doctor ". the idea was incorporated into feynman ' s 1959 essay there ' s plenty of room at the bottom. moravec predicted in 1988 the possibility of " uploading " human mind into a human - like robot, achieving quasi - immortality by extreme longevity via transfer of the human mind between successive new robots as the old ones wear out ; beyond that, he predicts later exponential acceleration of subjective experience of time leading to a subjective sense of immortality. kurzweil suggested in 2005 that medical advances would allow people to protect their bodies from the effects of aging, making the life expectancy limitless. kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age. kurzweil further buttresses his argument by discussing current bio - engineering advances. kurzweil suggests somatic gene therapy ; after synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human dna with synthesized genes. beyond merely extending the operational life of the physical body, jaron lanier argues for a form of immortality called " digital ascension " that involves " people dying in the flesh and being uploaded into a computer and remaining conscious. " = = history of the concept = = a paper by mahendra prasad, published in ai magazine, asserts that the 18th - century mathematician marquis de condorcet was the first person to hypothesize and mathematically model an intelligence explosion and its effects on humanity. an early description of the idea was made in john w. campbell ' s 1932 short story " the last evolution ". 
in his 1958 obituary for john von neumann, ulam recalled a conversation with von neumann about the " ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue. " in 1965, good wrote his essay postulating an " intelligence explosion " of recursive self - improvement of a machine intelligence. in 1977, hans moravec wrote an article with unclear publishing status where he envisioned a development of self - improving thinking machines, a creation of " super - consciousness, the synthesis of terrestrial life, and perhaps jovian and martian life as well, constantly improving and extending itself, spreading outward
|
https://en.wikipedia.org/wiki/Technological_singularity
|
neutrino mixing anarchy is the hypothesis that the leptonic mixing matrix can be described as the result of a random draw from an unbiased distribution of unitary three - by - three matrices. in light of the recent very strong evidence for a nonzero sin ^ 2 ( theta _ 13 ), we show that the anarchy hypothesis is consistent with the choice made by the nature - - the probability of a more unusual choice is 44 %. we revisit anarchy ' s ability to make predictions, concentrating on correlations - or lack thereof - among the different neutrino mixing parameters, especially sin ^ 2 ( theta _ 13 ) and sin ^ 2 ( theta _ 23 ). we also comment on anarchical expectations regarding the magnitude of cp - violation in the lepton sector, and potential connections to underlying flavor models or the landscape.
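The anarchy prior can be sampled directly: a standard fact about the haar measure on mixing matrices is that it is flat in sin^2(theta_12), cos^4(theta_13), and sin^2(theta_23), so sin^2(theta_13) = 1 - sqrt(u) for u uniform on [0, 1]. The monte-carlo sketch below estimates how often a random draw gives sin^2(theta_13) at least as large as an illustrative value of 0.023 (close to the measured one); this is a toy estimate, not the paper's 44 % figure, which comes from a different statistic.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.random(200_000)
s13sq = 1 - np.sqrt(u)   # haar-distributed sin^2(theta_13)

frac = float(np.mean(s13sq >= 0.023))
print(round(frac, 3))
```

Analytically the fraction is (1 - 0.023)^2 ≈ 0.95, so a value as small as the observed one is not at all unusual under the anarchy hypothesis.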
|
arxiv:1204.1249
|
in this short paper, we argue that the chiral central charge $ c _ - $ of a ( 2 + 1 ) d topological ordered state is sometimes strongly constrained by ' t hooft anomaly of anti - unitary global symmetry. for example, if a ( 2 + 1 ) d fermionic tqft has a time reversal anomaly with $ t ^ 2 = ( - 1 ) ^ f $ labeled as $ \ nu \ in \ mathbb { z } _ { 16 } $, the tqft must have $ c _ - = 1 / 4 $ mod $ 1 / 2 $ for odd $ \ nu $, while $ c _ - = 0 $ mod $ 1 / 2 $ for even $ \ nu $. this generalizes the fact that the bosonic tqft with $ t $ anomaly in a particular class must carry $ c _ - = 4 $ mod $ 8 $ to fermionic cases. we also study such a constraint for fermionic tqft with $ u ( 1 ) \ times ct $ symmetry, which is regarded as a gapped surface of the topological superconductor in class aiii.
|
arxiv:2101.01018
|
in this paper we extend the classical theory of combinatorial manifolds to the non - homogeneous setting. nh - manifolds are polyhedra which are locally like euclidean spaces of varying dimensions. we show that many of the properties of classical manifolds remain valid in this wider context. nh - manifolds appear naturally when studying pachner moves on ( classical ) manifolds. we introduce the notion of nh - factorization and prove that pl - homeomorphic manifolds are related by a finite sequence of nh - factorizations involving nh - manifolds.
|
arxiv:1108.4955
|
multi - photon electromagnetically - induced transparency ( eit ) of atomic vapors involves several intermediate atomic levels. the sub - structure of these levels and their collisional interactions can drastically alter experimental eit signals. here, we report on hyperfine structure and collision effects in three - photon rydberg eit on the cascade $ 5s _ { 1 / 2 } \ rightarrow $ $ 5p _ { 1 / 2 } \ rightarrow 5d _ { 3 / 2 } $ $ \ rightarrow 25f _ { 5 / 2 } $ in a room temperature $ ^ { 85 } $ rb vapor cell. in our measurements of eit spectra, we identify two types of eit signatures that correspond with distinct excitation pathways and atomic velocity classes in the atomic vapor. the $ 5d _ { 3 / 2 } $ hyperfine structure and autler - townes splittings lead to complex patterns in the eit spectra, which we analyze with the aid of 10 - level eit simulations. adding 50 ~ mtorr of ar gas alters the eit spectra and induces an additional, third eit mode. based on our simulation results, we attribute these changes to hyperfine collisions in the rb $ 5d _ { 3 / 2 } $ level. our study may become useful in quantum technologies involving rydberg eit and hyperfine collisions in vapor cells, including non - invasive, spatio - temporally resolved sensing of electric fields in low - pressure plasmas.
|
arxiv:2501.16054
|
let $ h $ be a hilbert space and $ e $ a banach space. in this note we present a sufficient condition for an operator $ r : h \ to e $ to be $ \ gamma $ - - radonifying in terms of riesz sequences in $ h $. this result is applied to recover a result of lutz weis and the second named author on the $ r $ - boundedness of resolvents, which is used to obtain a datko - pazy type theorem for the stochastic cauchy problem. we also present some perturbation results.
|
arxiv:math/0602427
|
camera pose estimation is a fundamental problem in robotics. this paper focuses on two issues of interest : first, point and line features have complementary advantages, and it is of great value to design a uniform algorithm that can fuse them effectively ; second, with the development of modern front - end techniques, a large number of features can exist in a single image, which presents a potential for highly accurate robot pose estimation. with these observations, we propose aopnp ( l ), an optimal linear - time camera - robot pose estimation algorithm from points and lines. specifically, we represent a line with two distinct points on it and unify the noise model for point and line measurements, where noises are added to 2d points in the image. by utilizing plucker coordinates for line parameterization, we formulate a maximum likelihood ( ml ) problem for combined point and line measurements. to optimally solve the ml problem, aopnp ( l ) adopts a two - step estimation scheme. in the first step, a consistent estimate that converges to the true pose is devised by virtue of bias elimination. in the second step, a single gauss - newton iteration is executed to refine the initial estimate. aopnp ( l ) features theoretical optimality in the sense that its mean squared error converges to the cramer - rao lower bound. moreover, it has linear time complexity. these properties make it well - suited for precision - demanding and real - time robot pose estimation. extensive experiments are conducted to validate our theoretical developments and demonstrate the superiority of aopnp ( l ) in both static localization and dynamic odometry systems.
|
arxiv:2407.16151
|
class invariants - - consistency constraints preserved by every operation on objects of a given type - - are fundamental to building, understanding and verifying object - oriented programs. for verification, however, they raise difficulties, which have not yet received a generally accepted solution. the present work introduces a proof rule meant to address these issues and allow verification tools to benefit from invariants. it clarifies the notion of invariant and identifies the three associated problems : callbacks, furtive access and reference leak. as an example, the 2016 ethereum dao bug, in which $ 50 million were stolen, resulted from a callback invalidating an invariant. the discussion starts with a simplified model of computation and an associated proof rule, demonstrating its soundness. it then removes one by one the three simplifying assumptions, each removal raising one of the three issues, and leading to a corresponding adaptation to the proof rule. the final version of the rule can tackle tricky examples, including " challenge problems " listed in the literature.
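The callback problem behind the dao bug can be modeled in a few lines. The sketch below is a toy python model (not solidity, and all names are invented): the class invariant "sum of balances equals the pool total" holds before and after `withdraw`, but an externally supplied callback runs at a point where the state update is only half done, so the invariant is observably false inside the callback, and a malicious callback could re-enter `withdraw` at exactly that moment.

```python
class Pool:
    def __init__(self):
        self.balances = {"attacker": 1, "victim": 9}
        self.total = 10

    def invariant(self):
        # the class invariant: bookkeeping matches the pool total
        return sum(self.balances.values()) == self.total

    def withdraw(self, who, notify):
        amount = self.balances[who]
        self.total -= amount   # funds leave the pool first...
        notify(amount)         # ...then an external callback runs
        self.balances[who] = 0 # ...and the bookkeeping happens too late

pool = Pool()
observed = []

def attacker_callback(amount):
    # the callback sees (and could exploit) a state where the
    # invariant does not hold
    observed.append(pool.invariant())

pool.withdraw("attacker", attacker_callback)
print(observed, pool.invariant())
```

A verifier that only checks the invariant at method entry and exit would pass this code, which is precisely why the proof rule in the paper must account for callbacks explicitly.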
|
arxiv:2109.06557
|
encapsulation layers are explored for passivating the surfaces of silicon to reduce optical absorption in the 1500 - nm wavelength band. surface - sensitive test structures consisting of microdisk resonators are fabricated for this purpose. based on previous work in silicon photovoltaics, coatings of sinx and sio2 are applied under varying deposition and annealing conditions. a short dry thermal oxidation followed by a long high - temperature n2 anneal is found to be most effective at long - term encapsulation and reduction of interface absorption. minimization of the optical loss is attributed to simultaneous reduction in sub - bandgap silicon surface states and hydrogen in the capping material.
|
arxiv:0707.0415
|
we use confocal microscopy to study the motion of a magnetic bead in a dense colloidal suspension, near the colloidal glass transition volume fraction $ \ phi _ g $. for dense liquid - like samples near $ \ phi _ g $, below a threshold force the magnetic bead exhibits only localized caged motion. above this force, the bead is pulled with a fluctuating velocity. the relationship between force and velocity becomes increasingly nonlinear as $ \ phi _ g $ is approached. the threshold force and nonlinear drag force vary strongly with the volume fraction, while the velocity fluctuations do not change near the transition.
|
arxiv:cond-mat/0308622
|
when space - time is assumed to be non - riemannian the minimal coupling procedure ( mcp ) is not compatible, in general, with minimal action principle ( map ). this means that the equations gotten by applying mcp to the euler - lagrange equations of a lagrangian $ \ cal l $ do not coincide with the euler - lagrange equations of the lagrangian obtained by applying mcp to $ \ cal l $. such compatibility can be restored if the space - time admits a connection - compatible volume element. we show how these concepts can alter qualitatively the predictions of the einstein - cartan theory of gravity.
|
arxiv:gr-qc/9405054
|
understanding the appropriate skin layer thickness at wounded sites is an important tool for advancing wound healing practices and treatment protocols. methods to measure depth are often invasive and less specific. this paper introduces a novel, non - invasive method that uses deep learning to classify skin layers, enabling wound depth measurement through heatmap analysis. a set of approximately 200 labeled images of skin allows five classes to be distinguished : scars, wounds, and healthy skin, among others. each image has its key layers annotated, namely the stratum corneum, the epidermis, and the dermis, in the software roboflow. in the preliminary stage, the heatmap generator vgg16 was used to enhance the visibility of tissue layers ; the annotated images were then used to train resnet18 with early stopping, reaching a high accuracy of 97. 67 %. the models resnet18, vgg16, densenet121, and efficientnet were compared, with both efficientnet and resnet18 attaining accuracy rates of 95. 35 %. for further hyperparameter tuning, efficientnet and resnet18 were trained at six different learning rates to determine the best model configuration, and the accuracy varied substantially with the learning rate. efficientnet achieved its maximum accuracy of 95. 35 % at a rate of 0. 0001, and resnet18 likewise attained its peak of 95. 35 % at the same rate. these results indicate that the model can be applied in real - time, non - invasive wound assessment, which holds great promise for improving clinical diagnosis and treatment planning.
|
arxiv:2411.12678
|
this paper addresses the basic question of how well a tree can approximate the distances of a metric space or a graph. given a graph, constructing a spanning tree that strongly preserves distances in the graph is a fundamental problem in network design. we present scaling distortion embeddings where the distortion scales as a function of $ \ epsilon $, with the guarantee that for each $ \ epsilon $ the distortion of a fraction $ 1 - \ epsilon $ of all pairs is bounded accordingly. such a bound implies, in particular, that the \ emph { average distortion } and $ \ ell _ q $ - distortions are small. specifically, our embeddings have \ emph { constant } average distortion and $ o ( \ sqrt { \ log n } ) $ $ \ ell _ 2 $ - distortion. this follows from the following results : we prove that any metric space embeds into an ultrametric with scaling distortion $ o ( \ sqrt { 1 / \ epsilon } ) $. for the graph setting we prove that any weighted graph contains a spanning tree with scaling distortion $ o ( \ sqrt { 1 / \ epsilon } ) $. these bounds are tight even for embedding in arbitrary trees. for probabilistic embedding into spanning trees we prove a scaling distortion of $ \ tilde { o } ( \ log ^ 2 ( 1 / \ epsilon ) ) $, which implies \ emph { constant } $ \ ell _ q $ - distortion for every fixed $ q < \ infty $.
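A tiny numerical illustration of why average distortion can be small even when worst-case distortion is large (our own toy example, not from the paper): compare distances in the cycle c_8 with distances in the spanning path obtained by deleting one edge. The deleted edge's endpoints suffer distortion n - 1 = 7, but most pairs are barely stretched.

```python
n = 8
ratios = []
for i in range(n):
    for j in range(i + 1, n):
        d = j - i
        d_graph = min(d, n - d)  # shortest-path distance in the cycle
        d_tree = d               # distance in the spanning path 0-1-...-7
        ratios.append(d_tree / d_graph)

print(max(ratios), round(sum(ratios) / len(ratios), 3))
```

The maximum distortion is 7 (the pair {0, 7}), while the average over all 28 pairs is only 10/7 ≈ 1.43, mirroring the scaling-distortion phenomenon the paper makes precise.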
|
arxiv:cs/0610003
|
we study noise spectra of currents through a tunnel junction in the weak tunneling limit. we introduce an effective capacitance to take the interaction effect into account and explicitly incorporate the electromagnetic environment of the junction into the formulation. we study the effect of charging energy and the macroscopic environment on noise spectra. we calculate current fluctuations at the tunneling barrier and fluctuations measured at the leads. it is shown that the two fluctuations have different noise spectra and that the relation between them is nontrivial. we provide an explanation for the origin of the difference. experimental implications are discussed.
|
arxiv:cond-mat/9507105
|
we present a short review of the approach to quantization known as strict ( deformation ) quantization, which can be seen as a generalization of the weyl - moyal quantization. we include examples and comments on the process of quantization.
|
arxiv:1412.7440
|
a proof that the set of real numbers is denumerable is given.
|
arxiv:math/0108119
|
according to chemist matt coppock, he has started to enhance a soldier ' s lethality by collecting different biorecognition receptors. doing so will eliminate emerging environmental threats to the soldiers. with the emergence of virtual reality it is only natural to start creating simulations using vr. this will better prepare the user for whatever situation they are training for. in the military there are combat simulations that soldiers will train on. the reason the military will use vr to train its soldiers is because it is the most interactive / immersive experience the user can feel without being put in a real situation. recent simulations include a soldier wearing a shock belt during a combat simulation. each time they are shot the belt will release a certain amount of electricity directly to the user ' s skin. this is to simulate a gunshot wound in the most humane way possible. there are many sustainability technologies that military personnel wear in the field. one of these is a boot insert. this insert gauges how soldiers are carrying the weight of their equipment and how daily terrain factors impact their mission planning optimization. these sensors will not only help the military plan the best timeline but will help keep the soldiers at their best physical / mental health. = = fashion = = fashionable wearables are " designed garments and accessories that combine aesthetics and style with functional technology. " garments are the interface to the exterior mediated through digital technology. they allow endless possibilities for the dynamic customization of apparel. all clothes have social, psychological and physical functions. however, with the use of technology these functions can be amplified. there are some wearables that are called e - textiles. these are the combination of textiles ( fabric ) and electronic components to create wearable technology within clothing. they are also known as smart textiles and digital textiles.
wearables are made from a functionality perspective or from an aesthetic perspective. when made from a functionality perspective, designers and engineers create wearables to provide convenience to the user. clothing and accessories are used as a tool to provide assistance to the user. designers and engineers are working together to incorporate technology in the manufacturing of garments in order to provide functionalities that can simplify the lives of the user. for example, through smartwatches people have the ability to communicate on the go and track their health. moreover, smart fabrics have a direct interaction with the user, as it allows sensing the customers ' moves. this helps to address concerns such as privacy, communication and well - being. years ago, fashionable wearables were functional but not very aesthetic.
|
https://en.wikipedia.org/wiki/Wearable_technology
|
a hard x - ray, high - speed, high dynamic range scientific x - ray imager is described. the imager is based on the mixed - mode pixel array detector ( mm - pad ) readout chip coupled to a 750 micron thick cadmium telluride ( cdte ) sensor. the full imager is a 2 x 3 tiled array of mm - pad sensor / readout chip hybrids. cdte improves detection for high energy x - rays as compared to silicon sensors, enabling efficient x - ray imaging to extend to > 100 kev. the detector is capable of 1 khz imaging and in - pixel circuitry has been designed to allow for well depths of greater than 4 x 10 ^ { 6 } 80 kev x - rays. a charge integrating front - end allows for quantitative measurement of high flux x - ray images beyond the capabilities of photon counting detectors. detector performance is summarized and experimental measurements are presented.
|
arxiv:2004.03421
|
despite the remarkable code generation abilities of large language models ( llms ), they still face challenges in complex task handling. robot development, a highly intricate field, inherently demands human involvement in task allocation and collaborative teamwork. to enhance robot development, we propose an innovative automated collaboration framework inspired by real - world robot developers. this framework employs multiple llms in distinct roles : analysts, programmers, and testers. analysts delve deep into user requirements, enabling programmers to produce precise code, while testers fine - tune the parameters based on user feedback for practical robot application. each llm tackles diverse, critical tasks within the development process. clear collaboration rules emulate real - world teamwork among the llms. analysts, programmers, and testers form a cohesive team overseeing strategy, code, and parameter adjustments. through this framework, we achieve complex robot development without requiring specialized knowledge, relying solely on the participation of non - experts.
|
arxiv:2402.03699
|
physics - informed neural networks ( pinns ) solve partial differential equations ( pdes ) by embedding governing equations and boundary / initial conditions into the loss function. however, enforcing dirichlet boundary conditions accurately remains challenging, often leading to soft enforcement that compromises convergence and reliability in complex domains. we propose a hybrid approach, pinn - fem, which combines pinns with finite element methods ( fem ) to impose strong dirichlet boundary conditions via domain decomposition. this method incorporates fem - based representations near the boundary, ensuring exact enforcement without compromising convergence. through six experiments of increasing complexity, pinn - fem outperforms standard pinn models, showcasing superior accuracy and robustness. while distance functions and similar techniques have been proposed for boundary condition enforcement, they lack generality for real - world applications. pinn - fem bridges this gap by leveraging fem near boundaries, making it well - suited for industrial and scientific problems.
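the distance - function technique that the abstract contrasts with pinn - fem can be sketched in a few lines : write the trial solution as $ u ( x ) = g ( x ) + d ( x ) \, n ( x ) $, where $ d $ vanishes on the boundary, so the dirichlet data is satisfied exactly no matter what the network outputs. the functions below are illustrative stand - ins, not the paper's method or code :

```python
import numpy as np

# hard dirichlet enforcement via a distance-function ansatz:
# u(x) = g(x) + d(x) * N(x); since d = 0 on the boundary, u matches the
# boundary data exactly regardless of the (untrained) network N.
def hard_bc_solution(x, g, d, net):
    return g(x) + d(x) * net(x)

# 1-d example on [0, 1] with boundary data u(0) = 0, u(1) = 1
g = lambda x: x                  # any smooth extension of the boundary data
d = lambda x: x * (1.0 - x)      # vanishes exactly at x = 0 and x = 1
net = lambda x: np.sin(3.0 * x)  # stand-in for a trained neural network

xs = np.array([0.0, 0.5, 1.0])
u = hard_bc_solution(xs, g, d, net)
```

pinn - fem replaces this global ansatz with an fem representation near the boundary, which is what makes the approach workable on complex domains where a closed - form distance function is unavailable.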
|
arxiv:2501.07765
|
we give a mathematical framework for manipulating indeterminate - length quantum bit strings. in particular, we define prefixes, fragments, tensor products and concatenation of such strings of qubits, and study their properties and relationships. the results are then used to define prefix - free hilbert spaces in a more general way than in previous work, without assuming the existence of a basis of length eigenstates. we prove a quantum analogue of the kraft inequality, illustrate the results with some examples and discuss the relevance of prefix - free hilbert spaces for lossless compression.
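for context, the classical kraft inequality that the quantum analogue generalizes states that any binary prefix - free code with codeword lengths $ l _ 1, \dots, l _ n $ must satisfy :

```latex
\sum_{i=1}^{n} 2^{-l_i} \le 1 .
```

the paper's result replaces codewords by indeterminate - length qubit strings in a prefix - free hilbert space.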
|
arxiv:0804.0022
|
in a process algebra with hiding and recursion it is possible to create processes which compute internally without ever communicating with their environment. such processes are said to diverge or livelock. in this paper we show how it is possible to conservatively classify processes as livelock - free through a static analysis of their syntax. in particular, we present a collection of rules, based on the inductive structure of terms, which guarantee livelock - freedom of the denoted process. this gives rise to an algorithm which conservatively flags processes that can potentially livelock. we illustrate our approach by applying both bdd - based and sat - based implementations of our algorithm to a range of benchmarks, and show that our technique in general substantially outperforms the model checker fdr whilst exhibiting a low rate of inconclusive results.
|
arxiv:1304.7394
|
we present the calibration of the swift uvot grisms, of which there are two, providing low - resolution field spectroscopy in the ultraviolet and optical bands respectively. the uv grism covers the range 1700 - 5000 angstrom with a spectral resolution of 75 at 2600 angstrom for source magnitudes of u = 10 - 16 mag, while the visible grism covers the range 2850 - 6600 angstrom with a spectral resolution of 100 at 4000 angstrom for source magnitudes of b = 12 - 17 mag. this calibration extends over all detector positions, for all modes used during operations. the wavelength accuracy ( 1 - sigma ) is 9 angstrom in the uv grism clocked mode, 17 angstrom in the uv grism nominal mode and 22 angstrom in the visible grism. the range below 2740 angstrom in the uv grism and 5200 angstrom in the visible grism never suffers from overlapping by higher spectral orders. the flux calibration of the grisms includes a correction we developed for coincidence loss in the detector. the error in the coincidence loss correction is less than 20 %. the position of the spectrum on the detector only affects the effective area ( sensitivity ) by a few percent in the nominal modes, but varies substantially in the clocked modes. the error in the effective area is from 9 % in the uv grism clocked mode to 15 % in the visible grism clocked mode.
|
arxiv:1501.02433
|
kurdish is written in different scripts. the two most popular are the latin and persian - arabic scripts. however, not all kurdish readers are familiar with both scripts, an issue that could be resolved by automatic transliterators. so far, the developed tools mostly transliterate persian - arabic script into latin. we present a transliterator for kurdish texts from latin into persian - arabic script. we also discuss the issues that should be considered in the transliteration process. the tool is a part of kurdish blark, and it is publicly available for non - commercial use.
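at its simplest, such a transliterator is a character - mapping pass ; the toy table below is an illustrative subset chosen for this sketch, not the tool's actual mapping ( real kurdish transliteration is context - sensitive, e.g. for vowels and digraphs, which is exactly the kind of issue the paper discusses ) :

```python
# toy latin -> persian-arabic mapping (illustrative subset only)
LATIN_TO_ARABIC = {
    "a": "ا", "b": "ب", "d": "د", "r": "ر", "s": "س", "k": "ک", "u": "و",
}

def transliterate(text):
    # characters without a mapping pass through unchanged
    return "".join(LATIN_TO_ARABIC.get(ch, ch) for ch in text.lower())
```

for example, the single-character table maps "kurd" to "کورد", while punctuation and unmapped letters are left as - is.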
|
arxiv:2110.12374
|
gravitational wave observations can be used to accurately measure the hubble constant $ h _ 0 $ and could help understand the present discrepancy between constraints from type ia supernovae and the cosmic microwave background. neutron star mergers are primarily used for this purpose as their electromagnetic emission can be used to greatly reduce measurement uncertainties. here we estimate $ h _ 0 $ using the recently observed black hole merger gw190521 and its candidate electromagnetic counterpart found by ztf, adopting a highly eccentric explanation of the properties of gw190521. we find that the reconstructed distance of gw190521 and the redshift of the candidate host galaxy are more consistent with standard cosmology for our eccentric model than if we reconstruct the source parameters assuming no eccentricity. we obtain $ h _ 0 = 88. 6 ^ { + 17. 1 } _ { - 34. 3 } $ \, km \, s $ ^ { - 1 } $ mpc $ ^ { - 1 } $ for gw190521, and $ h _ 0 = 73. 4 ^ { + 6. 9 } _ { - 10. 7 } $ \, km \, s $ ^ { - 1 } $ mpc $ ^ { - 1 } $ in combination with the results of the neutron star merger gw170817. our results indicate that future $ h _ 0 $ computations using black hole mergers will need to account for possible eccentricity. for extreme cases, the orbital velocity of binaries in agn disks can represent a significant systematic uncertainty.
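combining the single - event constraints amounts to multiplying independent posteriors on an $ h _ 0 $ grid. the gaussian shapes and widths below are illustrative stand - ins for the paper's asymmetric posteriors, used only to show the mechanics :

```python
import numpy as np

h0 = np.linspace(40.0, 140.0, 1001)          # km/s/Mpc grid

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# stand-in single-event posteriors (illustrative, not the paper's)
p_bh = gauss(h0, 88.6, 25.0)                 # broad, gw190521-like
p_ns = gauss(h0, 73.4, 9.0)                  # narrower, gw170817-like

p_comb = p_bh * p_ns                         # independent events multiply
p_comb /= p_comb.sum() * (h0[1] - h0[0])     # normalize on the grid
h0_map = h0[int(np.argmax(p_comb))]
```

the combined peak lands between the two single - event values and much closer to the narrower neutron - star constraint, which is the qualitative behaviour behind the joint estimate quoted in the abstract.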
|
arxiv:2009.14247
|
the lepton pair production via the quark - antiquark annihilation subprocess in collisions of beam antiproton with the proton target at e _ beam = 14 gev ( which corresponds to the center - of - mass energy of the p pbar system e _ cm = 5. 3 gev ) is studied on the basis of the event sample simulated by pythia6 generator and pandaroot package. different kinematical variables which may be useful for design of the muon system and the electromagnetic calorimeter of the detector of panda experiment at fair, as well as for the study of proton structure functions in the available x - q2 kinematical region, are considered. the problems due to the presence of fake leptons that appear from meson decays, as well as due to the contribution of background qcd processes and minimum bias events, are also discussed. the set of cuts which allows one to separate the events with the signal lepton pairs from different kind of background events are proposed.
|
arxiv:1108.6289
|
of additional materials via its website. registering with amsp gives access to integral, another source of both teaching and learning materials hosted by mathematics education innovation ( mei ). underground mathematics is another resource in active development which reflects the emphasis on problem solving and reasoning in the uk curriculum. a collection of tasks for post - 16 mathematics can also be found on the nrich site. = = australia ( victoria ) = = in contrast with other further mathematics courses, further maths as part of the vce is the easiest level of mathematics. any student wishing to undertake tertiary studies in areas such as science, engineering, commerce, economics and some information technology courses must undertake one or both of the other two vce maths subjects - mathematical methods or specialist mathematics. the further mathematics syllabus in vce consists of three core modules, which all students undertake, plus two modules chosen by the student ( or usually by the school or teacher ) from a list of four. the core modules are univariate data, bivariate data, time series, number patterns and business - related mathematics. the optional modules are geometry and trigonometry, graphs and relations, networks and decision mathematics, or matrices. = = singapore = = further mathematics is available as a second and higher mathematics course at a level ( now h2 ), in addition to the mathematics course at a level. students can pursue this subject if they have a2 and better in ' o ' level mathematics and additional mathematics, depending on the school. some topics covered in this course include mathematical induction, complex numbers, polar curves and conic sections, differential equations, recurrence relations, matrices and linear spaces, numerical methods, random variables and hypothesis testing and confidence intervals.
= = international baccalaureate diploma = = further mathematics, as studied within the international baccalaureate diploma programme, was a higher level ( hl ) course that could be taken in conjunction with mathematics hl or on its own. it consisted of studying all four of the options in mathematics hl, plus two additional topics. topics studied in further mathematics included : topic 1 - linear algebra - studies on matrices, vector spaces, linear and geometric transformations topic 2 - geometry - closer look on triangles, circles and conic sections topic 3 - statistics and probability - the geometric and negative binomial distributions, unbiased estimators, statistical hypothesis testing and an introduction to bivariate distributions topic 4 - sets, relations and groups - algebra of sets, ordered pairs, binary operations and group homomorphism topic 5 - calculus - infinite sequences and series
|
https://en.wikipedia.org/wiki/Further_Mathematics
|
we observe that many of the separation axioms of topology ( including $ t _ 0 - t _ 4 $ ) can be expressed concisely and uniformly in terms of category theory as lifting properties ( in the sense of quillen model categories ) with respect to ( usually open ) continuous maps of finite spaces ( involving up to 4 points ) and the real line.
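the lifting property referred to above is the standard quillen one, which can be stated as follows ( textbook formulation, not quoted from the paper ) : $ f $ has the left lifting property with respect to $ g $ when every commutative square admits a diagonal filler,

```latex
\[
\begin{array}{ccc}
A & \xrightarrow{\;u\;} & X \\
{\scriptstyle f}\big\downarrow & {\stackrel{\exists h}{\nearrow}} & \big\downarrow{\scriptstyle g} \\
B & \xrightarrow{\;v\;} & Y
\end{array}
\qquad h \circ f = u, \quad g \circ h = v .
\]
```

the paper's observation is that separation axioms arise by taking $ g $ to be a continuous ( usually open ) map of finite topological spaces and asking which spaces lift against it.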
|
arxiv:1706.09164
|
observations of warm absorbers provided new ways to study the nuclear environments of agns. i discuss basic properties of warm absorbers and early developments with rosat, asca and hst observations. i briefly touch upon recent advances made with chandra and fuse.
|
arxiv:astro-ph/0202335
|
we propose a novel one - stage transformer - based semantic and spatial refined transformer ( ssrt ) to solve the human - object interaction detection task, which requires localizing humans and objects and predicting their interactions. unlike previous transformer - based hoi approaches, which mostly focus on improving the design of the decoder outputs for the final detection, ssrt introduces two new modules to help select the most relevant object - action pairs within an image and refine the queries ' representation using rich semantic and spatial features. these enhancements lead to state - of - the - art results on the two most popular hoi benchmarks : v - coco and hico - det.
|
arxiv:2204.00746
|
budach ' s mouse - in - an - octant problem ( attributed to lothar budach in a 1980 article by van emde boas and karpinski ) concerns the behaviour of a very simple finite - state machine ( " the mouse " ) moving on the integer two - dimensional grid. its decidability is apparently still open. this note sketches a proof that an extended version of the problem ( a super - mouse ) is undecidable.
|
arxiv:1305.0911
|
while many algorithms exist for tracing various contours for illustrating a meshed object, few algorithms organize these contours into region - bounding closed loops. tracing closed - loop boundaries on a mesh can be problematic due to switchbacks caused by subtle surface variation, and the organization of these regions into a planar map can lead to many small region components due to imprecision and noise. this paper adapts " snaxels, " an energy minimizing active contour method designed for robust mesh processing, and repurposes it to generate visual, shadow and shading contours, and a simplified visual - surface planar map, useful for stylized vector art illustration of the mesh. the snaxel active contours can also track contours as the mesh animates, and frame - to - frame correspondences between snaxels lead to a new method to convert the moving contours on a 3 - d animated mesh into 2 - d svg curve animations for efficient embedding in flash, powerpoint and other dynamic vector art platforms.
|
arxiv:1904.09530
|
we construct superfluid black hole solutions with two chemical potentials. by analogy with qcd, the two chemical potentials correspond to the baryon and isospin symmetries, respectively. we consider two systems : the back - reacted u ( 2 ) einstein - yang - mills theory in 4 + 1 dimensions and the 9 + 1 - dimensional d3 / d7 brane setup with two coincident d7 - brane probes. in the d7 - brane model, the identification of baryon and isospin chemical potential is explicit since the dual field theory is explicitly known. studying the phase diagram, we find in both systems a quantum phase transition at a critical ratio of the two chemical potentials. however the quantum phase transition is different in the two systems : in the d3 / d7 brane setup we always find a second order phase transition, while in the einstein - yang - mills theory, depending on the strength of the back - reaction, we obtain a continuous or first order transition. we expect the continuous quantum phase transition to be bkt - like. we comment on the origin of this differing behavior in these apparently very similar models and compare to phenomenological systems.
|
arxiv:1103.4145
|
recent work in machine learning and computer vision has provided evidence of systematic design flaws in the development of major object recognition benchmark datasets. one such example is imagenet, wherein, for several categories of images, there are incongruences between the objects they represent and the labels used to annotate them. the consequences of this problem are major, in particular considering the large number of machine learning applications, not least those based on deep neural networks, that have been trained on these datasets. in this paper we posit the problem to be the lack of a knowledge representation ( kr ) methodology providing the foundations for the construction of these ground truth benchmark datasets. accordingly, we propose a solution articulated in three main steps : ( i ) deconstructing the object recognition process in four ordered stages grounded in the philosophical theory of teleosemantics ; ( ii ) based on such stratification, proposing a novel four - phased methodology for organizing objects in classification hierarchies according to their visual properties ; and ( iii ) performing such classification according to the faceted classification paradigm. the key novelty of our approach lies in the fact that we construct the classification hierarchies from visual properties exploiting visual genus - differentiae, and not from linguistically grounded properties. the proposed approach is validated by a set of experiments on the imagenet hierarchy of musical instruments.
|
arxiv:2202.08512
|
in this article, we focus on the concept of locally - apn - ness ( ` ` apn " is the abbreviation of the well - known notion of almost perfect nonlinear ) introduced by blondeau, canteaut, and charpin, which makes the corpus of s - boxes somewhat larger regarding their differential uniformity and, therefore, possibly, more suitable candidates against the differential attack ( or its variants ). specifically, given two coprime positive integers $ m $ and $ k $ such that $ \ gcd ( 2 ^ m + 1, 2 ^ k + 1 ) = 1 $, we investigate the locally - apn - ness property of an infinite family of niho type power functions in the form $ f ( x ) = x ^ { s ( 2 ^ m - 1 ) + 1 } $ over the finite field $ { \ mathbb f } _ { 2 ^ { 2m } } $ for $ s = ( 2 ^ k + 1 ) ^ { - 1 } $, where $ ( 2 ^ k + 1 ) ^ { - 1 } $ denotes the multiplicative inverse modulo $ 2 ^ m + 1 $. by employing finer studies of the number of solutions of certain equations over finite fields ( with even characteristic ) as well as some subtle manipulations of solving some equations, we prove that $ f ( x ) $ is locally apn and determine its differential spectrum. it is worth noting that computer experiments show that this class of locally - apn power functions covers all niho type locally - apn power functions for $ 2 \ leq m \ leq10 $. in addition, we also determine the boomerang spectrum of $ f ( x ) $ by using its differential spectrum, which particularly generalizes a recent result by yan, zhang, and li.
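differential uniformity is brute - forceable on small fields, which is how one checks apn - ness experimentally. the sketch below computes it for the classical apn power function $ x ^ 3 $ over $ \mathbb { f } _ { 2 ^ 3 } $ ( an illustrative small case with a standard primitive polynomial ; the paper's fields $ \mathbb { f } _ { 2 ^ { 2m } } $ are far larger and its functions are niho - type, not $ x ^ 3 $ ) :

```python
# brute-force differential uniformity of f(x) = x^3 over GF(2^3),
# with the primitive polynomial x^3 + x + 1 (illustrative small case)
M, POLY = 3, 0b1011

def gf_mul(a, b):
    # carry-less multiplication with reduction modulo POLY
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> M:
            a ^= POLY
        b >>= 1
    return r

def f(x):  # x^3
    return gf_mul(gf_mul(x, x), x)

n = 1 << M
# max over a != 0 and all b of the number of solutions of f(x+a)+f(x) = b
uniformity = max(
    sum(1 for x in range(n) if f(x ^ a) ^ f(x) == b)
    for a in range(1, n) for b in range(n)
)
```

the counts over all $ ( a, b ) $ form exactly the differential spectrum studied in the paper ; a uniformity of 2 certifies apn - ness ( locally - apn - ness restricts which $ b $ are considered ).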
|
arxiv:2208.02626
|
the transverse properties of an electron beam are characterized by two quantities : the emittance, which indicates the electron beam ' s extent in phase space, and the angular momentum, which allows for non - planar electron trajectories. whereas the emittance of electron beams produced in laser - plasma accelerators has been measured in several experiments, their angular momentum has been scarcely studied. it was demonstrated that electrons in laser - plasma accelerators carry some angular momentum, but its origin was not established. here we identify one source of angular momentum growth and we present experimental results showing that the angular momentum content evolves during the acceleration.
|
arxiv:1306.0016
|
we report the development of japanese simcse, japanese sentence embedding models fine - tuned with simcse. since there is a lack of sentence embedding models for japanese that can be used as a baseline in sentence embedding research, we conducted extensive experiments on japanese sentence embeddings involving 24 pre - trained japanese or multilingual language models, five supervised datasets, and four unsupervised datasets. in this report, we provide the detailed training setup for japanese simcse and their evaluation results.
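the simcse objective underlying these models is an in - batch contrastive loss : two dropout - noised embeddings of the same sentence are positives, all other sentences in the batch are negatives. a minimal numpy sketch ( the temperature and shapes are standard defaults, not the report's exact configuration ) :

```python
import numpy as np

def simcse_loss(z1, z2, temp=0.05):
    # z1, z2: (n, d) embeddings of the same n sentences under two dropout masks
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temp                       # (n, n) cosine similarities
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))               # diagonal pairs are positives
```

when the two views of each sentence agree and differ from other sentences, the loss is near zero ; misaligned pairs drive it up, which is the signal the fine - tuning exploits.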
|
arxiv:2310.19349
|
although quantum physics is well understood in inertial reference frames ( flat spacetime ), a current challenge is the search for experimental evidence of non - trivial or unexpected behaviour of quantum systems in non - inertial frames. here, we present a novel test of quantum mechanics in a non - inertial reference frame : we consider hong - ou - mandel ( hom ) interference on a rotating platform and study the effect of uniform rotation on the distinguishability of the photons. both theory and experiments show that the rotational motion induces a relative delay in the photon arrival times at the exit beamsplitter and that this delay is observed as a shift in the position of the hom dip. this experiment can be extended to a full general relativistic test of quantum physics using satellites in earth orbit and indicates a new route towards the use of photonic technologies for investigating quantum mechanics at the interface with relativity.
|
arxiv:1906.03400
|
in this paper, we present a modular system for representing and reasoning with legal aspects of traffic rules for autonomous vehicles. we focus on a subset of the united kingdom ' s highway code ( hc ) related to junctions. as human drivers and automated vehicles ( avs ) will interact on the roads, especially in urban environments, we claim that an accessible, unitary, high - level computational model should exist and be applicable to both users. autonomous vehicles introduce a shift in liability that should not bring disadvantages or increased burden on human drivers. we develop an " in silico " implementation of the model. the proposed system is built of three main components : a natural language interface, using logical english, which encodes the rules ; an internal representation of the rules in prolog ; and a multi - agent - based simulation environment, built in netlogo. the three components interact : logical english is translated into and out of prolog ( along with some support code ) ; prolog and netlogo interface via predicates. such a modular approach enables the different components to carry different " burdens " in the overall system ; it also allows swapping of modules. given netlogo, we can visualize the effect of the modeled rules as well as validate the system with a simple dynamic running scenario. designated agents monitor the behaviour of the vehicles for compliance and record potential violations where they occur. the information on potential violations is then utilized by validators, to determine whether the violation is punishable, differentiating between exceptions and cases.
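the monitor - and - flag pattern described above can be illustrated with a single toy rule ( the rule wording, predicate names, and vehicle records below are hypothetical illustrations in python, not the paper's prolog encoding of the highway code ) :

```python
# toy encoding of one junction rule: a vehicle turning right at a junction
# must give way to oncoming traffic (illustrative, not the actual hc text)
def must_give_way(turning_right, oncoming_traffic):
    return turning_right and oncoming_traffic

# monitoring agents record a potential violation whenever the rule applied
# but the vehicle did not yield
vehicles = [
    {"id": 1, "turning_right": True,  "oncoming_traffic": True,  "yielded": False},
    {"id": 2, "turning_right": True,  "oncoming_traffic": True,  "yielded": True},
    {"id": 3, "turning_right": False, "oncoming_traffic": True,  "yielded": False},
]
violations = [
    v for v in vehicles
    if must_give_way(v["turning_right"], v["oncoming_traffic"]) and not v["yielded"]
]
```

in the paper's architecture this check lives in prolog, the monitoring loop in netlogo, and a separate validator stage then decides whether a flagged violation is actually punishable.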
|
arxiv:2502.09216
|
we show that discrete torsion is implemented in a d - brane world - volume theory by using a projective representation of the orbifold point group. we study the example of c ^ 3 / z _ 2 x z _ 2 and show that the resolution of singularities agrees with that proposed by vafa and witten. a new type of fractional brane appears.
|
arxiv:hep-th/9807235
|
this note is a discussion of the article " bayesian model selection based on proper scoring rules " by a. p. dawid and m. musio, to appear in bayesian analysis. while appreciating the concepts behind the use of proper scoring rules, including the inclusion of improper priors, we point out here some possible practical difficulties with the advocated approach.
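two classical strictly proper scoring rules for a binary forecast make the machinery concrete ( textbook definitions, lower is better ; this is background for the discussed article, not its model - selection procedure ) :

```python
import math

def log_score(p, outcome):
    # negative log-likelihood of the realized outcome
    return -math.log(p if outcome else 1.0 - p)

def brier_score(p, outcome):
    # squared error against the 0/1 outcome
    y = 1.0 if outcome else 0.0
    return (p - y) ** 2
```

propriety means the expected score is minimized by reporting the true probability : if the event occurs with probability 0.7, forecasting 0.7 beats forecasting 0.5 in expectation, which is what makes such scores usable for honest model comparison.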
|
arxiv:1502.07638
|
let $ p $ be a prime number. continuing and extending our previous paper with the same title, we prove explicit rates of overconvergence for modular functions of the form $ \ frac { e _ k ^ { \ ast } } { v ( e _ k ^ { \ ast } ) } $ where $ e _ k ^ { \ ast } $ is a classical, normalized eisenstein series on $ \ gamma _ 0 ( p ) $ and $ v $ the $ p $ - adic frobenius operator. in particular, we extend our previous paper to the primes $ 2 $ and $ 3 $. for these primes our main theorem improves somewhat upon earlier results by emerton, buzzard and kilford, and roe. we include a detailed discussion of those earlier results as seen from our perspective. we also give some improvements to our earlier paper for primes $ p \ ge 5 $. apart from establishing these improvements, our main purpose here is also to show that all of these results can be obtained by a uniform method, i. e., a method where the main points in the argumentation are the same for all primes. we illustrate the results by some numerical examples.
|
arxiv:2302.02630
|
the continual learning problem has been widely studied in image classification, while little work has explored it in object detection. some recent works apply knowledge distillation to constrain the model to retain old knowledge, but this rigid constraint is detrimental to learning new knowledge. in this paper, we propose a new scheme for continual learning of object detection, namely contrast r - cnn, an approach that strikes a balance between retaining old knowledge and learning new knowledge. furthermore, we design a proposal contrast to eliminate the ambiguity between old and new instances, making continual learning more robust. extensive evaluation on the pascal voc dataset demonstrates the effectiveness of our approach.
|
arxiv:2108.04224
|
consistent embeddings are found of the minimal $ \ mathcal { n } = 2 $ and $ \ mathcal { n } = 3 $ gauged supergravities in four dimensions into its maximally supersymmetric, $ \ mathcal { n } = 8 $, counterpart with a dyonic iso ( 7 ) gauging. these minimal truncations retain the metric along with relevant u ( 1 ) and so ( 3 ) r - symmetry gauge fields selected from the iso ( 7 ) ones. the remaining iso ( 7 ) gauge fields are turned off, with subtleties introduced by the dyonic gauging, and the scalars are fixed to their expectation values at the $ \ mathcal { n } = 2 $ and $ \ mathcal { n } = 3 $ vacua of the $ \ mathcal { n } = 8 $ theory. using the truncation formulae for massive type iia supergravity on the six - sphere to $ d = 4 $ $ \ mathcal { n } = 8 $ iso ( 7 ) supergravity, the minimal $ d = 4 $ $ \ mathcal { n } = 2 $ and $ \ mathcal { n } = 3 $ gauged supergravities are then uplifted consistently to ten dimensions.
|
arxiv:1908.00535
|
the evolution of the 2006 outburst of the recurrent nova rs ophiuchi was followed with 12 x - ray grating observations with chandra and xmm - newton. we present detailed spectral analyses using two independent approaches. from the best dataset, taken on day 13. 8 after outburst, we reconstruct the temperature distribution and derive elemental abundances. we find evidence for at least two distinct temperature components on day 13. 8 and a reduction of temperature with time. the x - ray flux decreases as a power - law, and the power - law index changes from - 5 / 3 to - 8 / 3 around day 70 after outburst. this can be explained by different decay mechanisms for the hot and cool components. the decay of the hot component and the decrease in temperature are consistent with radiative cooling, while the decay of the cool component can be explained by the expansion of the ejecta. we find overabundances of n and of alpha - elements, which could either represent the composition of the secondary that provides the accreted material or that of the ejecta. the n overabundance indicates cno - cycled material. from comparisons to abundances for the secondary taken from the literature, we conclude that 20 - 40 % of the observed nitrogen could originate from the outburst. the overabundance of the alpha - elements is not typical for stars of the spectral type of the secondary in the rs oph system, and white dwarf material might have been mixed into the ejecta. however, no direct measurements of the alpha - elements in the secondary are available, and the continuous accretion may have changed the observable surface composition.
|
arxiv:0810.2023
|
four - dimensional extended : poincar \ ' e, ads - lorentz and maxwell algebras, are obtained by expanding an extension of de sitter or conformal algebra, so ( 4, 1 ) or so ( 3, 2 ). the procedure can be generalized to obtain a new family of extended $ \ mathcal { c } _ k ^ { e } $ and its flat limit, the extended $ \ mathcal { b } _ k ^ { e } $ algebras. the extended $ \ mathcal { c } _ k $ and $ \ mathcal { b } _ k $ algebras have been introduced in the literature recently. the extended poincar \ ' e algebra is also obtained as an in \ " on \ " u - wigner contraction of extended de sitter algebra.
|
arxiv:1905.09200
|
Starting from a noncommutative generalization of Minkowski space, we consider quantum-deformed relativistic symmetries which lead to a modification of the kinematics of special relativity. The noncommutative field theory framework, described by means of the star-product formalism, is briefly outlined. We also present quantum modifications of Einstein gravity.
|
arxiv:1003.4185
|
We study the cluster emission properties of 224Ra and 238Pu employing the Barcelona-Catania-Paris-Madrid (BCPM) energy density functional (EDF). Starting from two-dimensional potential energy surfaces, coexisting fission paths are identified. A fission valley located at large octupole deformations, corresponding to a highly asymmetric mass distribution, is found in both nuclei. As the corresponding fragments are dominated by the presence of 208Pb, we can relate this fission path to the emergence of cluster emission. Using the octupole moment as the collective degree of freedom, we compute the cluster decay half-lives and study the impact of collective inertias, pairing strength, and collective zero-point energy. The agreement with experimental data resembles the results obtained for spontaneous fission half-lives, indicating the capability of BCPM to consistently describe a large variety of fission phenomena, including cluster emission.
|
arxiv:2311.06822
|
In this paper we study various aspects of the double ramification (DR) hierarchy, introduced by the first author, and its quantization. We extend the notion of tau-symmetry to quantum integrable hierarchies and prove that the quantum DR hierarchy enjoys this property. We determine explicitly the genus $1$ quantum correction and, as an application, compute completely the quantization of the $3$- and $4$-KdV hierarchies (the DR hierarchies for Witten's $3$- and $4$-spin theories). We then focus on the recursion relation satisfied by the DR Hamiltonian densities and, abstracting from its geometric origin, use it to characterize and construct a new family of quantum and classical integrable systems which we call of double ramification type, as they satisfy all of the main properties of the DR hierarchy. In the second part, we obtain new insight towards the Miura equivalence conjecture between the DR and Dubrovin-Zhang hierarchies, via a geometric interpretation of the correlators forming the double ramification tau-function. We then show that the candidate Miura transformation between the DR and DZ hierarchies (which we uniquely identified in our previous paper) indeed turns the Dubrovin-Zhang Poisson structure into the standard form. Eventually, we focus on integrable hierarchies associated with rank-$1$ cohomological field theories and their deformations, and we prove the DR/DZ equivalence conjecture up to genus $5$ in this context.
|
arxiv:1609.04059
|
NMT systems have problems with large vocabulary sizes. Byte-pair encoding (BPE) is a popular approach to solving this problem, but while BPE allows the system to generate any target-side word, it does not enable effective generalization over the rich vocabulary of morphologically rich languages with strong inflectional phenomena. We introduce a simple approach to overcome this problem by training a system to produce the lemma of a word and its morphologically rich POS tag, followed by a deterministic generation step. We apply this strategy to English-Czech and English-German translation scenarios, obtaining improvements in both settings. We furthermore show that the improvement is not due only to adding explicit morphological information.
|
arxiv:1707.06012
|
We study two-way fixed-effects (TWFE) regressions with several treatment variables. Under a parallel trends assumption, we show that the coefficient on each treatment identifies a weighted sum of that treatment's effects, with possibly negative weights, plus a weighted sum of the effects of the other treatments. Thus, those estimators are not robust to heterogeneous effects and may be contaminated by other treatments' effects. We further show that omitting a treatment from the regression can actually reduce the estimator's bias, unlike what would happen under constant treatment effects. We propose an alternative difference-in-differences estimator that is robust to heterogeneous effects and immune to the contamination problem. In the application we consider, the TWFE regression identifies a highly non-convex combination of effects, with large contamination weights, and one of its coefficients significantly differs from our heterogeneity-robust estimator.
|
arxiv:2012.10077
|
We establish the existence of an optimal partition for the Yamabe equation in the whole space made up of mutually linearly isometric sets, each of them invariant under the action of a group of linear isometries. To do this, we establish the existence of a solution to a weakly coupled competitive Yamabe system whose components are invariant under the action of the group, each obtained from the previous one by composition with a linear isometry. We show that, as the coupling parameter goes to minus infinity, the components of the solutions segregate and give rise to an optimal partition with the properties mentioned above. Finally, taking advantage of the symmetries considered, we establish the existence of infinitely many sign-changing solutions for the Yamabe equation that are different from those previously found by W. Y. Ding and by del Pino, Musso, Pacard, and Pistoia.
|
arxiv:2309.00784
|