networks of elastoplastic springs ( elastoplastic systems ) have been linked to differential equations with polyhedral constraints in the pioneering paper by moreau ( 1974 ). periodic loading of an elastoplastic system, therefore, corresponds to a periodic motion of the polyhedral constraint. according to krejci ( 1996 ), every solution of a sweeping process with a periodically moving constraint asymptotically converges to a periodic orbit. whether such an asymptotic periodic orbit is unique, or whether there can be an entire family of asymptotic periodic orbits ( forming a periodic attractor ), has been an open problem since then. since a suitable small perturbation of a polyhedral constraint always seems capable of destroying a potential family of periodic orbits, it is expected that no potential periodic attractor is structurally stable. in the present paper we give a simple example to prove that even though a periodic attractor ( of non - stationary periodic solutions ) can be destroyed by a small perturbation of the moving constraint, it resists perturbations of the physical parameters of the mechanical model ( i. e. the parameters of the network of elastoplastic springs ).
|
arxiv:1903.01965
|
in this article some explicit estimates on the decay of the convolutive inverse of a sequence are proved. they are derived from the functional calculus for sobolev algebras. applications include localization in spline - type spaces and oversampling schemes.
|
arxiv:0804.3828
|
emotion - cause pair extraction ( ecpe ), as an emergent natural language processing task, aims at jointly investigating emotions and their underlying causes in documents. it extends the previous emotion cause extraction ( ece ) task, yet without requiring a set of pre - given emotion clauses as in ece. existing approaches to ecpe generally adopt a two - stage method, i. e., ( 1 ) emotion and cause detection, and then ( 2 ) pairing the detected emotions and causes. such a pipeline method, while intuitive, suffers from two critical issues : error propagation across stages, which may hinder effectiveness, and high computational cost, which limits the practical application of the method. to tackle these issues, we propose a multi - task learning model that can extract emotions, causes and emotion - cause pairs simultaneously in an end - to - end manner. specifically, our model regards pair extraction as a link prediction task, and learns to link from emotion clauses to cause clauses, i. e., the links are directional. emotion extraction and cause extraction are incorporated into the model as auxiliary tasks, which further boost pair extraction. experiments are conducted on an ecpe benchmarking dataset. the results show that our proposed model outperforms a range of state - of - the - art approaches.
|
arxiv:2002.10710
|
we prove ` polynomial in $k$ ' bounds on the size of the bergman kernel for the space of holomorphic siegel cusp forms of degree $n$ and weight $k$. when $n = 1, 2$ our bounds agree with the conjectural bounds on the aforementioned size, while the lower bounds match for all $n \ge 1$. for an $L^2$ - normalised siegel cusp form $f$ of degree $2$, our bound for its sup - norm is $O_\epsilon ( k^{9/4+\epsilon} )$. further, we show that in any compact set $\Omega$ ( which does not depend on $k$ ) contained in the siegel fundamental domain of $\mathrm{Sp} ( 2, \mathbb{Z} )$ on the siegel upper half space, the sup - norm of $f$ is $O_\Omega ( k^{3/2-\eta} )$ for some $\eta > 0$, going beyond the ` generic ' bound in this setting.
|
arxiv:2206.02190
|
we have analysed 12 simultaneous radio ( vla ) and x - ray ( rxte ) observations of the atoll - type x - ray binary 4u 1728 - 34, performed in two blocks in 2000 and 2001. we have found that the strongest and most variable emission seems to be associated with repeated transitions between hard ( island ) and softer ( lower banana ) x - ray states, while weaker, persistent radio emission is observed when the source is steadily in the hard x - ray state. there is a significant positive ranking correlation between the radio flux density at 8.46 ghz and the 2 - 10 kev x - ray flux. moreover, significant positive ranking correlations between radio flux density and x - ray timing features ( i. e. break and low - frequency lorentzian frequencies ) have been found. these correlations represent the first evidence for a coupling between disc and jet in an atoll - type x - ray binary. furthermore, drawing an analogy between the hard ( island ) state and the low / hard state of black hole binaries, we confirm previous findings that accreting neutron stars are a factor of ~ 30 less ` radio loud ' than black holes.
|
arxiv:astro-ph/0305221
|
quaternionic and octonionic spinors are introduced and their fundamental properties ( such as the space - times supporting them ) are reviewed. the conditions for the existence of their associated dirac equations are analyzed. quaternionic and octonionic supersymmetric algebras defined in terms of such spinors are constructed. specializing to the d = 11 - dimensional case, the relation of both the quaternionic and the octonionic supersymmetries to the ordinary m - algebra is discussed.
|
arxiv:hep-th/0503210
|
in this paper, we study the uniqueness of steady 1 - d shock solutions for the inviscid compressible euler system in a finite nozzle via asymptotic analysis with respect to physical parameters. the parameters for the heat conductivity and the temperature - dependent viscosity are investigated for both barotropic gases and polytropic gases. it turns out that the hypotheses on the physical effects have significant influences on the asymptotic behaviors as the parameters vanish. in particular, the positions of the shock front for the limit shock solution ( if it exists ) are different under different hypotheses. hence, it seems impossible to formulate a criterion selecting the unique shock solution within the framework of inviscid euler flows.
|
arxiv:2311.06584
|
the problem for the stationary navier - stokes equation in 3d under finite dirichlet norm is open. in this paper we answer the analogous question on the 3d hyperbolic space. we also address other dimensions and more general manifolds.
|
arxiv:1501.04928
|
lueck expressed the gromov norm of a knot complement in terms of an infinite series that can be computed from a presentation of the fundamental group of the knot complement. in this note we show that lueck ' s formula, applied to torus knots, yields surprising power series expansions for the logarithm function. this generalizes an infinite series of lehmer for the natural logarithm of 4.
|
arxiv:math/0611027
|
we present a numerical study of the dynamics of the one - dimensional ising model by applying the large - deviation method to describe ensembles of dynamical trajectories. in this approach trajectories are classified according to a dynamical order parameter and the structure of ensembles of trajectories can be understood from the properties of large - deviation functions, which play the role of dynamical free - energies. we consider both glauber and kawasaki dynamics, and also the presence of a magnetic field. for glauber dynamics in the absence of a field we confirm the analytic predictions of jack and sollich about the existence of critical dynamical, or space - time, phase transitions at critical values of the " counting " field $s$. in the presence of a magnetic field the dynamical phase diagram also displays first - order transition surfaces. we discuss how these non - equilibrium transitions in the 1$d$ ising model relate to the equilibrium ones of the 2$d$ ising model. for kawasaki dynamics we find a much simpler dynamical phase structure, with transitions reminiscent of those seen in kinetically constrained models.
|
arxiv:1110.4857
|
in addition to conventional ground rovers, the mars 2020 mission will send a helicopter to mars. the copter ' s high - resolution data helps the rover to identify small hazards such as steps and pointy rocks, as well as providing rich textural information useful for predicting perception performance. in this paper, we consider a three - agent system composed of a mars rover, copter, and orbiter. the objective is to provide good localization to the rover by selecting an optimal path that minimizes the localization uncertainty accumulated during the rover ' s traverse. to achieve this goal, we quantify localizability as a goodness measure associated with the map, and conduct a joint - space search over the rover ' s path and the copter ' s perceptual actions given prior information from the orbiter. we jointly address where to map by the copter and where to drive by the rover using the proposed iterative copter - rover path planner. we conducted numerical simulations using the map of the mars 2020 landing site to demonstrate the effectiveness of the proposed planner.
|
arxiv:2008.07157
|
the mechanism of dust emission from a cometary nucleus is still an open question and thermophysical models have problems reproducing outgassing and dust production rates simultaneously. in this study, we investigate the capabilities of a rather simple thermophysical model to match observations from rosetta instruments at comet 67p / churyumov - gerasimenko and the influence of model variations. we assume a macro - porous surface structure composed of pebbles and investigate the influence of different model assumptions. besides the scenario in which dust layers are ejected when the vapour pressure overcomes the tensile strength, we use artificial ejection mechanisms, depending on the ice - depletion of layers. we find that dust activity following the pressure criterion is only possible for reduced tensile strength values or reduced gas diffusivity and is inconsistent with observed outgassing rates, because activity is driven by co$_2$. only when we assume that dust activity is triggered once a layer is completely depleted in h$_2$o is the ratio of co$_2$ to h$_2$o outgassing rates of the expected order of magnitude. however, the dust - to - h$_2$o ratio is never reproduced. only with decreased gas diffusivity is the slope of the h$_2$o outgassing rate matched, but absolute values are too low. to investigate maximum reachable pressures, we adapted our model to the equivalent of a gas - impermeable dust structure. here, pressures exceeding the tensile strength by orders of magnitude are possible. maximum activity distances of $3.1\,\mathrm{au}$, $8.2\,\mathrm{au}$, and $74\,\mathrm{au}$ were estimated for h$_2$o -, co$_2$ -, and co - driven activity of $1\,\mathrm{cm}$ - sized dust, respectively. in conclusion, the mechanism behind dust emission remains unclear.
|
arxiv:2306.07057
|
lower bound for the canonical height for drinfeld modules with complex multiplication. let k be a finite extension of fq ( t ), let l / k be a galois extension with galois group g, and let e be the subfield of l fixed by the center of g. assume that there exists a finite place v of k such that the local degrees of e / k above v are bounded. let $\phi$ be a drinfeld module with complex multiplication. we give an effective lower bound for the canonical height of $\phi$ on l outside the torsion points of $\phi$. in the number field case, this problem was solved by f. amoroso, s. david and u. zannier.
|
arxiv:1408.1019
|
initially used to rank web pages, pagerank has now been applied in many fields. with the growing scale of graphs, accelerating pagerank computation is urgent, and designing parallel algorithms is a feasible solution. in this paper, two parallel pagerank algorithms, ifp1 and ifp2, are proposed by improving the state - of - the - art personalized pagerank algorithm, i. e., forward push. theoretical analysis indicates that ifp1 can take advantage of the dag structure of the graph, where the dangling vertices improve the convergence rate and the unreferenced vertices decrease the computation amount. as an improvement of ifp1, ifp2 pushes mass to the dangling vertices only once rather than many times, and thus decreases the computation amount further. experiments on six data sets illustrate that both ifp1 and ifp2 outperform the power method, where ifp2 with 38 - way parallelism can be up to 50 times as fast as the power method.
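for context, the baseline that both ifp1 and ifp2 are measured against can be sketched in a few lines. below is a minimal power - method pagerank ( the baseline only, not the proposed ifp1 / ifp2 algorithms ) ; the toy graph, damping factor and tolerance are illustrative assumptions :

```python
# minimal power - method pagerank baseline ( not the ifp1 / ifp2
# algorithms of the abstract ) ; graph and tolerances are illustrative.
def pagerank(graph, damping=0.85, tol=1e-10, max_iter=1000):
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(max_iter):
        # mass held by dangling vertices ( no out - links ) is spread uniformly
        dangling = sum(rank[v] for v in nodes if not graph[v])
        new = {v: (1 - damping) / n + damping * dangling / n for v in nodes}
        for v in nodes:
            for w in graph[v]:
                new[w] += damping * rank[v] / len(graph[v])
        if sum(abs(new[v] - rank[v]) for v in nodes) < tol:
            return new
        rank = new
    return rank

# tiny toy graph : vertex d is " unreferenced " ( nothing links to it )
g = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
pr = pagerank(g)
```

note that an unreferenced vertex such as d settles immediately at the teleportation mass ( 1 - damping ) / n, which is the kind of structure the abstract says ifp1 exploits to reduce computation.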
|
arxiv:2302.03245
|
the metal - insulator transition ( mit ), an intriguing correlated phenomenon induced by the subtle competition between the electrons ' repulsive coulomb interaction and their kinetic energy, is of great potential use for electronic applications due to the dramatic change in resistivity. here, we demonstrate reversible control of the mit in vo2 films via oxygen stoichiometry engineering. by facilely depositing and dissolving a water - soluble yet oxygen - active sr3al2o6 capping layer atop the vo2 at room temperature, oxygen ions can reversibly migrate between vo2 and sr3al2o6, resulting in a gradual suppression and a complete recovery of the mit in vo2. the migration of the oxygen ions is evidenced by a combination of transport measurements, structural characterization and first - principles calculations. this approach of chemically - induced oxygen migration using a water - dissolvable adjacent layer could be useful for advanced electronic and iontronic devices and for studying oxygen stoichiometry effects on the mit.
|
arxiv:2109.05270
|
according to blackwell ' s theorem, comparing channels by a garbling order is equivalent to comparing them by optimal decision making. this equivalence no longer holds if pre - garbling is also allowed, i. e. for the so - called shannon - order ( see rauh et al., 2017 ). we show that the equivalence fails in general even if the set of decision makers is reduced. this is overcome by the introduction of convexified shannon - usefulness as a preference relation of decision makers over channels. we prove that the convexified shannon - order and convexified shannon - usefulness are equivalent.
|
arxiv:1906.07041
|
we study the non - critical space - time non - commutative open string ( ncos ) theory using a dual supergravity description in terms of a certain near - horizon limit of the f1 - dp bound state. we find the thermodynamics of ncos theory from supergravity. the thermodynamics is equivalent to yang - mills theory on a commutative space - time. we argue that this fact does not have to be in contradiction with the expected hagedorn behaviour of ncos theory. to support this we consider string corrections to the thermodynamics. we also discuss the relation to little string theory in 6 dimensions.
|
arxiv:hep-th/0006023
|
we revisit the classic $O ( N )$ symmetric scalar field theories in $d$ dimensions with interaction $( \phi^i \phi^i )^2$. for $2 < d < 4$ these theories flow to the wilson - fisher fixed points for any $N$. a standard large $N$ hubbard - stratonovich approach also indicates that, for $4 < d < 6$, these theories possess unitary uv fixed points. we propose their alternate description in terms of a theory of $N + 1$ massless scalars with the cubic interactions $\sigma \phi^i \phi^i$ and $\sigma^3$. our one - loop calculation in $6 - \epsilon$ dimensions shows that this theory has an ir stable fixed point at real values of the coupling constants for $N > 1038$. we show that the $1 / N$ expansions of various operator scaling dimensions match the known results for the critical $O ( N )$ theory continued to $d = 6 - \epsilon$. these results suggest that, for sufficiently large $N$, there are 5 - dimensional unitary $O ( N )$ symmetric interacting cfts ; they should be dual to the vasiliev higher - spin theory in ads$_6$ with alternate boundary conditions for the bulk scalar. using these cfts we provide a new test of the 5 - dimensional $F$ - theorem, and also find a new counterexample for the $C_T$ theorem.
|
arxiv:1404.1094
|
we give an introduction to moduli stacks of gauged maps satisfying a stability condition introduced by mundet and schmitt, and the associated integrals giving rise to gauged gromov - witten invariants. we survey various applications to cohomological and k - theoretic gromov - witten invariants.
|
arxiv:1606.01384
|
the open computing cluster for advanced data manipulation ( occam ) is a multi - purpose flexible hpc cluster designed and operated by a collaboration between the university of torino and the sezione di torino of the istituto nazionale di fisica nucleare. it is aimed at providing a flexible, reconfigurable and extendable infrastructure to cater to a wide range of different scientific computing use cases, including ones from solid - state chemistry, high - energy physics, computer science, big data analytics, computational biology, genomics and many others. furthermore, it will serve as a platform for r & d activities on computational technologies themselves, with topics ranging from gpu acceleration to cloud computing technologies. a heterogeneous and reconfigurable system like this poses a number of challenges related to the frequency at which heterogeneous hardware resources might change their availability and shareability status, which in turn affect methods and means to allocate, manage, optimize, bill, monitor vms, containers, virtual farms, jobs, interactive bare - metal sessions, etc. this work describes some of the use cases that prompted the design and construction of the hpc cluster, its architecture and resource provisioning model, along with a first characterization of its performance by some synthetic benchmark tools and a few realistic use - case tests.
|
arxiv:1709.03715
|
most asteroid families are very homogeneous in physical properties. some show greater diversity, however. the flora family is the most intriguing of them. the flora family is spread widely in the inner main belt, has a rich collisional history, and is one of the most taxonomically diverse regions in the main belt. as a result of its proximity to the asteroid ( 4 ) vesta ( the only currently known intact differentiated asteroid ) and its family, migration between the two regions is possible. this dynamical path is one of the counter arguments to the hypothesis that there may be traces of a differentiated parent body other than vesta in the inner main belt region. we here investigate the possibility that some of the v - and a - types ( commonly interpreted as basaltoids and dunites - parts of the mantle and crust of differentiated parent bodies ) in the flora dynamical region are not dynamically connected to vesta.
|
arxiv:1510.00865
|
spike deconvolution is the problem of recovering point sources from their convolution with a known point spread function, playing a fundamental role in many sensing and imaging applications. this paper proposes a novel approach combining esprit with preconditioned gradient descent ( pgd ) to estimate the amplitudes and locations of the point sources via non - linear least squares. the preconditioning matrices are adaptively designed to account for variations in the learning process, ensuring a proven super - linear convergence rate. we provide local convergence guarantees for pgd and a performance analysis of the esprit reconstruction, leading to global convergence guarantees for our method in one - dimensional settings with multiple snapshots, demonstrating its robustness and effectiveness. numerical simulations corroborate the performance of the proposed approach for spike deconvolution.
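as a toy illustration of the least - squares viewpoint : below only the amplitude step is shown, with plain rather than preconditioned gradient descent, and with the spike locations assumed already known ( in the paper they would come from esprit ). the psf, grid and signal values are made - up assumptions, not the paper ' s setup :

```python
import math

# toy 1 - d spike deconvolution : gaussian psf, two spikes. locations are
# assumed known ; amplitudes are recovered by plain gradient descent on a
# least - squares loss. illustration only, not the paper ' s pgd method.
def psf(t, width=0.5):
    return math.exp(-t * t / (2 * width * width))

locs = [2.0, 5.0]                      # assumed known spike locations
true_amps = [1.5, -0.8]                # ground truth, used to generate data
grid = [0.1 * i for i in range(100)]
y = [sum(a * psf(t - s) for a, s in zip(true_amps, locs)) for t in grid]

# columns of the linear model : psf shifted to each known location
cols = [[psf(t - s) for t in grid] for s in locs]

amps = [0.0, 0.0]
lr = 0.01                              # small enough for this psf energy
for _ in range(2000):
    resid = [sum(a * c[i] for a, c in zip(amps, cols)) - y[i]
             for i in range(len(grid))]
    grads = [sum(resid[i] * c[i] for i in range(len(grid))) for c in cols]
    amps = [a - lr * g for a, g in zip(amps, grads)]
```

with the locations fixed, the problem is an ordinary linear least squares, so plain gradient descent already recovers the amplitudes ; the paper ' s preconditioning is about making the joint amplitude - and - location problem converge super - linearly.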
|
arxiv:2502.08035
|
quantum technologies work by utilizing properties inherent in quantum systems, such as quantum coherence and quantum entanglement, and are expected to be superior to classical counterparts for solving certain problems in science and engineering. quantum technologies are, however, fragile against interaction with an environment ( decoherence ), and in order to utilize them with high accuracy we need to develop error mitigation techniques which reduce decoherence effects. in this work, we analyze a quantum error mitigation ( qem ) protocol for quantum metrology in the presence of quantum noise. we demonstrate the effectiveness of our qem protocol by analyzing three types of quantum fisher information ( qfi ) : ideal ( error - free ) qfi, noisy ( erroneous ) qfi, and quantum - error - mitigated qfi, and show both analytically and numerically that the scaling behaviors of the quantum - error - mitigated qfi with respect to the number of probes are restored to those exhibited in ideal quantum metrology. our qem protocol is constructed from an ensemble of quantum circuits, namely qem circuit groups, and has the advantage that it can be applied to noisy quantum metrology for any type of initial state as well as any type of probe - system hamiltonian, and it can be physically implemented in any type of quantum device. furthermore, the quantum - error - mitigated qfi becomes approximately equal to the ideal qfi for almost any values of the physical quantities to be sensed. our protocol enables us to use quantum entanglement as a resource to perform high - sensitivity quantum metrology even under the influence of quantum noise.
|
arxiv:2303.01820
|
lidar segmentation is crucial for autonomous driving perception. recent trends favor point - or voxel - based methods as they often yield better performance than the traditional range view representation. in this work, we unveil several key factors in building powerful range view models. we observe that the " many - to - one " mapping, semantic incoherence, and shape deformation are possible impediments against effective learning from range view projections. we present rangeformer - - a full - cycle framework comprising novel designs across network architecture, data augmentation, and post - processing - - that better handles the learning and processing of lidar point clouds from the range view. we further introduce a scalable training from range view ( str ) strategy that trains on arbitrary low - resolution 2d range images, while still maintaining satisfactory 3d segmentation accuracy. we show that, for the first time, a range view method is able to surpass the point, voxel, and multi - view fusion counterparts in the competing lidar semantic and panoptic segmentation benchmarks, i. e., semantickitti, nuscenes, and scribblekitti.
|
arxiv:2303.05367
|
we study the $ s $ - matrix and inclusive cross - section for general dressed states in quantum electrodynamics. we obtain an infrared factorization formula of the $ s $ - matrix elements for general dressed states. it enables us to study what dressed states lead to infrared - safe $ s $ - matrix elements. the condition for dressed states can be interpreted as the memory effect which is nothing but the conservation law of the asymptotic symmetry. we derive the generalized soft photon theorem for general dressed states. we also compute inclusive cross - section using general dressed states. it is necessary to use appropriate initial and final dressed states to evaluate interference effects, which cannot be computed correctly by using fock states due to the infrared divergence.
|
arxiv:2209.00608
|
biometric systems, while offering convenient authentication, often fall short in providing rigorous security assurances. a primary reason is the ad - hoc design of protocols and components, which hinders the establishment of comprehensive security proofs. this paper introduces a formal framework for constructing secure and privacy - preserving biometric systems. by leveraging the principles of universal composability, we enable the modular analysis and verification of individual system components. this approach allows us to derive strong security and privacy properties for the entire system, grounded in well - defined computational assumptions.
|
arxiv:2411.17321
|
we refine an idea of deodhar, whose goal is a counting formula for kazhdan - lusztig polynomials. this is a consequence of a simple observation that one can use the solution of soergel ' s conjecture to make ambiguities involved in defining certain morphisms between soergel bimodules in characteristic zero ( double leaves ) disappear.
|
arxiv:2004.00045
|
the discovery of ultrastable glasses has raised novel challenges about glassy systems. recent experiments studied the macroscopic devitrification of ultrastable glasses into liquids upon heating but lacked microscopic resolution. we use molecular dynamics simulations to analyse the kinetics of this transformation. in the most stable systems, devitrification occurs after a very large time, but the liquid emerges in two steps. at short times, we observe the rare nucleation and slow growth of isolated droplets containing a liquid maintained under pressure by the rigidity of the surrounding glass. at large times, pressure is released after the droplets coalesce into large domains, which accelerates devitrification. this two - step process produces pronounced deviations from the classical avrami kinetics and explains the emergence of a giant lengthscale characterising the devitrification of bulk ultrastable glasses. our study elucidates the nonequilibrium kinetics of glasses following a large temperature jump, which differs from both equilibrium relaxation and aging dynamics, and will guide future experimental studies.
|
arxiv:2210.04775
|
this paper studies an unmanned aerial vehicle ( uav ) - enabled multicast channel, in which a uav serves as a mobile transmitter to deliver common information to a set of $k$ ground users. we aim to characterize the capacity of this channel over a finite uav communication period, subject to its maximum speed constraint and an average transmit power constraint. to achieve the capacity, the uav should use a sufficiently long code that spans its whole communication period. accordingly, the multicast channel capacity is achieved by maximizing the minimum achievable time - averaged rates of the $k$ users, by jointly optimizing the uav ' s trajectory and transmit power allocation over time. however, this problem is non - convex and difficult to solve optimally. to tackle this problem, we first consider a relaxed problem by ignoring the maximum uav speed constraint, and obtain its globally optimal solution via the lagrange dual method. the optimal solution reveals that the uav should hover above a finite number of ground locations, with an optimal hovering duration and transmit power at each location. next, based on such a multi - location - hovering solution, we present a successive hover - and - fly trajectory design and obtain the corresponding optimal transmit power allocation for the case with the maximum uav speed constraint. numerical results show that our proposed joint uav trajectory and transmit power optimization significantly improves the achievable rate of the uav - enabled multicast channel, and also greatly outperforms the conventional multicast channel with a fixed - location transmitter.
|
arxiv:1711.04387
|
the quantum dynamics of the electromagnetic light mode of an optical cavity filled with a coherently driven fermi gas of ultracold atoms strongly depends on the geometry of the fermi surface. superradiant light generation and self - organization of the atoms can be achieved at low pumping threshold due to resonant atom - photon umklapp processes, where the fermions are scattered from one side of the fermi surface to the other by exchanging photon momenta. the cavity spectrum exhibits sidebands that, despite strong atom - light coupling and cavity decay, retain a narrow linewidth, due to absorptionless transparency windows outside the atomic particle - hole continuum and the suppression of inhomogeneous broadening and thermal fluctuations in the collisionless fermi gas.
|
arxiv:1309.2714
|
we present two models for the cosmological uv background light, and calculate the opacity of gev gamma - rays out to redshift 9. the contributors to the background include 2 possible quasar emissivities, and output from star - forming galaxies as determined by a recent semi - analytic model ( sam ) of structure formation. the sam used in this work is based upon a hierarchical build - up of structure in a $\Lambda$cdm universe and is highly successful in reproducing a variety of observational parameters. above 1 rydberg energy, ionizing radiation is subject to reprocessing by the igm, which we treat using our radiative transfer code, cuba. the two models for quasar emissivity, differing above z = 2.3, are chosen to match the ionization rates observed using flux decrement analysis and the higher values of the line - of - sight proximity effect. we also investigate the possibility of a flat star formation rate density at $z > 5$. we conclude that observations of gamma - rays from 10 to 100 gev by fermi ( glast ) and the next generation of ground - based experiments should confirm a strongly evolving opacity from $1 < z < 4$. observation of attenuation in the spectra of gamma - ray bursts at higher redshift could constrain emission of uv radiation at these early times, either from a flat or increasing star - formation density or an unobserved population of sources.
|
arxiv:0811.1984
|
we characterize the region of meromorphic continuation of an analytic function $ f $ in terms of the geometric rate of convergence on a compact set of sequences of multi - point rational interpolants of $ f $. the rational approximants have a bounded number of poles and the distribution of interpolation points is arbitrary.
|
arxiv:1211.5573
|
a technology museum is a museum devoted to applied science and technological developments. many museums are both a science museum and a technology museum, and incorporate elements of both museum genres. the goal of technology museums is to educate the public on the history of technology, and to preserve technological heritage. they also may aim to promote local pride in technological and industrial developments, such as the manufacturing materials on display at the newcastle discovery museum. some technology museums may simply want to display technological items, while others may want to use them to demonstrate how they function. = = examples of technology museums = = some of the most historically significant technology museums are : the musee des arts et metiers in paris, founded in 1794 ; the science museum in london, founded in 1857 ; the deutsches museum von meisterwerken der naturwissenschaft und technik in munich, founded in 1903 ; and the technisches museum fur industrie und gewerbe in vienna, founded in 1918. the computer history museum in california, founded in the 1970s. further technology museums in germany include the deutsches technikmuseum in berlin - kreuzberg, the technoseum in mannheim, the technik museum speyer, the technik museum sinsheim and the technikmuseum magdeburg. the most prestigious of its kind in austria is the technisches museum in vienna. = = technology on display in museums = = many other independent museums, such as transport museums, cover certain technical genres, processes or industries, for example mining, chemistry, metrology, musical instruments, ceramics or paper. despite concentration on other fields, if there is extensive information on the technologies related to these subjects, the museum could be considered a technology museum. for example, elements of a technology museum could be incorporated with a marine science museum, a military museum, or an industrial museum. 
semi - technology - focused museums typically “ reflect some of the variety of applications of technology and present it within interestingly different contexts ”. = = = museum buildings and structures = = = in some examples of this type of museum, the actual building is incorporated into the exhibition. a museum on mining technology may be housed inside a mining or colliery site, and a museum focusing on industrial technology might be inside a warehouse or former factory. many naval and maritime museums follow this trend, such as the patriots point naval and maritime museum in mount pleasant, south carolina. the objects inside this museum are displayed inside the uss yorktown – an aircraft carrier – and the uss laffey — a destroyer. by housing
|
https://en.wikipedia.org/wiki/Technology_museum
|
in this study, we investigate the quasinormal modes of a non - rotating dyonic black hole within the framework of einstein - euler - heisenberg theory. we present a detailed analysis focused on understanding the influence of dyonic charges on the oscillatory properties of these black holes. the quasinormal modes are calculated to explore the interplay between the dyonic charge and the characteristic frequencies of perturbations. the results are then systematically compared with those of black holes possessing purely electric or purely magnetic charges in the einstein - euler - heisenberg framework. this comparison highlights the unique signatures and dynamic behavior introduced by the presence of dyonic charges, offering deeper insights into the properties of black holes in nonlinear electrodynamics theories.
|
arxiv:2503.23894
|
in supersymmetric models with gluinos around 1000 - 2000 gev, new physics searches based on cascade decay products of the gluino are viable at the next run of the lhc. we investigate a scenario where the light stop is lighter than the gluino and both are lighter than all other squarks, and show that its signal can be established using multi b - jet, multi w and / or multi lepton final state topologies. we then utilize both boosted and conventional jet topologies in the final state in conjunction with di - tau production as a probe of the stau - neutralino co - annihilation region responsible for the model ' s dark matter content. this study is performed in the specific context of one such phenomenologically viable model named no - scale f - su ( 5 ).
|
arxiv:1412.5986
|
existing methods for conversion between synodic and sidereal rotation velocities of the sun are tested for validity using state of the art ephemeris data. it is found that some of them are in good agreement with ephemeris calculations, while the other ones show a discrepancy of almost 0. 01 deg per day. the discrepancy is attributed to a missing factor and a new corrected relation is given.
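the simplest conversion of this kind adds the mean orbital angular rate of the earth to the synodic rate. the sketch below assumes earth's mean (not true) orbital motion, i.e. the constant - correction form whose variants the paper tests; the function and constant names are illustrative, not taken from the paper.

```python
# Conversion between synodic and sidereal solar rotation rates (deg/day),
# assuming Earth's *mean* orbital motion. Names are illustrative.
SIDEREAL_YEAR_DAYS = 365.25636

def synodic_to_sidereal(omega_syn):
    """Add Earth's mean orbital angular rate to the synodic rate."""
    return omega_syn + 360.0 / SIDEREAL_YEAR_DAYS

def sidereal_to_synodic(omega_sid):
    return omega_sid - 360.0 / SIDEREAL_YEAR_DAYS

# Carrington rotation: sidereal period 25.38 d -> synodic period ~27.28 d
omega_sid = 360.0 / 25.38
omega_syn = sidereal_to_synodic(omega_sid)
synodic_period = 360.0 / omega_syn
print(round(synodic_period, 2))  # ~27.28
```

published variants of this relation that omit a factor can deviate from ephemeris - based values by almost 0.01 deg per day, which is the discrepancy discussed above.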
|
arxiv:1310.0778
|
a computationally efficient method for calculating the transport of neutrino flavor in simulations is to use angular moments of the neutrino one - body reduced density matrix, i. e., ` quantum moments '. as with any moment - based radiation transport method, a closure is needed if the infinite tower of moment evolution equations is truncated. we derive a general parameterization of a quantum closure and the limits the parameters must satisfy in order for the closure to be physical. we then derive from multi - angle calculations the evolution of the closure parameters in two test cases which we then progressively insert into a moment evolution code and show how the parameters affect the moment results until the full multi - angle results are reproduced. this parameterization paves the way to setting prescriptions for genuine quantum closures adapted to neutrino transport in a range of situations.
|
arxiv:2410.00719
|
magnetic and transport properties of a conducting layer with rashba spin - orbit coupling ( rsoc ) magnetically coupled to a layer of localized magnetic moments are studied on strips of varying width. the localized moments are free to rotate and they acquire an order that results from the competition between the magnetic exchange energy and the kinetic energy of the conduction electrons. by minimizing the total hamiltonian within the manifold of variational spiral orders of the magnetic moments, the phase diagram in the space of the interlayer exchange j _ { sd } and the ratio of the rashba coupling to the hopping integral, lambda / t, was determined. two main phases with longitudinal spiral order were found, one at large interlayer coupling j _ { sd } with uniform order in the transversal direction, and the other at small j _ { sd } showing a transversal staggered order. this staggered spiral order is unstable against an antiferromagnetic ( afm ) order for large values of lambda / t. in both spiral phases, the longitudinal spiral momentum departs from the expected linear dependence on the rsoc for large values of lambda / t. then, various transport properties, including the longitudinal drude weight and the spin hall conductivity, inside these two phases are computed in linear response, and their behavior is compared with the ones for the more well - studied cases of fixed ferromagnetic ( fm ) and afm localized magnetic orders.
|
arxiv:1710.10129
|
we show that an exact solution of two - dimensional dilaton gravity with matter discovered previously exhibits an irreversible temporal flow towards flat space with a vanishing cosmological constant. this time flow is induced by the back reaction of matter on the space - time geometry. we demonstrate that the system is not in equilibrium if the cosmological constant is non - zero, whereas the solution with zero cosmological constant is stable. the flow of the system towards this stable end - point is derived from the renormalization - group flow of the zamolodchikov function. this behaviour is interpreted in terms of non - critical liouville string, with the liouville field identified as the target time.
|
arxiv:gr-qc/9712051
|
analog to digital conversion is a very important part of almost all beam instrumentation systems. ideally, in a properly designed system, the analog to digital converter ( adc ) used should not limit the system performance. however, despite recent improvements in adc technology, quite often this is not possible and the choice of the adc influences significantly or even restricts the system performance. it is therefore very important to estimate the requirements for the analog to digital conversion at an early stage of the system design and evaluate whether one can find an adequate adc fulfilling the system specification. in the case of beam instrumentation systems requiring both high time and amplitude resolution, it often happens that the system specification cannot be met with the available adcs without applying special processing to the analog signals prior to their digitisation. in such cases the requirements for the adc even influence the system architecture. this paper aims at helping the designer of a beam instrumentation system in the process of selecting an adc, which in many cases is iterative, requiring a trade - off between system performance, complexity and cost. analog to digital conversion is widely and well described in the literature, therefore this paper focusses mostly on aspects related to beam instrumentation. the adc fundamentals are limited to the content presented as an introduction during the cas one - hour lecture corresponding to this paper.
|
arxiv:2005.06203
|
nanostructured materials and nanocomposites have shown great promise for improving the efficiency of thermoelectric materials. herein, fe nanoparticles were embedded into a crn matrix by combining two physical vapor deposition approaches, namely high - power impulse magnetron sputtering and a nanoparticle gun. the combination of these techniques allowed the formation of nanocomposites in which the fe nanoparticles remained intact without intermixing with the matrix. the electrical and thermal transport properties of the nanocomposites were investigated and compared to a monolithic crn film. the measured thermoelectric properties revealed an increase in the seebeck coefficient, with a decrease of hall carrier concentration and an increase of the electron mobility which could be explained by energy filtering by internal phases created at the np / matrix interface. the thermal conductivity of the final nanocomposite was reduced from 4. 8 w m - 1k - 1 to a minimum of 3. 0 w m - 1k - 1. this study shows prospects for the nanocomposite synthesis process using nanoparticles and its use in improving the thermoelectric properties of coatings.
|
arxiv:2304.07566
|
the properties of the evolution equation are analyzed. the uniqueness and existence of the solution are established for the evolution equation, with a special value of the parameter characterizing the intensity of change of external conditions, and for the corresponding iterated equation. on this basis, taking into account some properties of the behavior of the solution, the uniqueness of the solution of the equation arising in the theory of homogeneous nucleation is established. the equivalence of the auxiliary problem and the real problem is shown.
|
arxiv:cond-mat/0503238
|
the challenges associated with using pre - trained models ( ptms ) have not been specifically investigated, which hampers their effective utilization. to address this knowledge gap, we collected and analyzed a dataset of 5, 896 ptm - related questions on stack overflow. we first analyze the popularity and difficulty trends of ptm - related questions. we find that ptm - related questions are becoming more and more popular over time. however, it is noteworthy that ptm - related questions not only have a lower response rate but also exhibit a longer response time compared to many well - researched topics in software engineering. this observation emphasizes the significant difficulty and complexity associated with the practical application of ptms. to delve into the specific challenges, we manually annotate 430 ptm - related questions, categorizing them into a hierarchical taxonomy of 42 codes ( i. e., leaf nodes ) and three categories. this taxonomy encompasses many prominent ptm challenges such as fine - tuning, output understanding, and prompt customization, which reflects the gaps between current techniques and practical needs. we discuss the implications of our study for ptm practitioners, vendors, and educators, and suggest possible directions and solutions for future research.
|
arxiv:2404.14710
|
a prerequisite for the comprehensive understanding of many - body quantum systems is a characterization in terms of their entanglement structure. the experimental detection of entanglement in spatially extended many - body systems describable by quantum fields still presents a major challenge. we develop a general scheme for certifying entanglement and demonstrate it by revealing entanglement between distinct subsystems of a spinor bose - einstein condensate. our scheme builds on the spatially resolved simultaneous detection of the quantum field in two conjugate observables which allows the experimental confirmation of quantum correlations between local as well as non - local partitions of the system. the detection of squeezing in bogoliubov modes in a multi - mode setting illustrates its potential to boost the capabilities of quantum simulations to study entanglement in spatially extended many - body systems.
|
arxiv:2105.12219
|
visual attention networks ( van ) with large kernel attention ( lka ) modules have been shown to provide remarkable performance that surpasses vision transformers ( vits ) on a range of vision - based tasks. however, the depth - wise convolutional layer in these lka modules incurs a quadratic increase in the computational and memory footprints with increasing convolutional kernel size. to mitigate these problems and to enable the use of extremely large convolutional kernels in the attention modules of van, we propose a family of large separable kernel attention modules, termed lska. lska decomposes the 2d convolutional kernel of the depth - wise convolutional layer into cascaded horizontal and vertical 1 - d kernels. in contrast to the standard lka design, the proposed decomposition enables the direct use of the depth - wise convolutional layer with large kernels in the attention module, without requiring any extra blocks. we demonstrate that the proposed lska module in van can achieve comparable performance with the standard lka module and incur lower computational complexity and memory footprints. we also find that the proposed lska design biases the van more toward the shape of the object than the texture with increasing kernel size. additionally, we benchmark the robustness of the lka and lska in van, vits, and the recent convnext on the five corrupted versions of the imagenet dataset that are largely unexplored in the previous works. our extensive experimental results show that the proposed lska module in van provides a significant reduction in computational complexity and memory footprints with increasing kernel size while outperforming vits, convnext, and providing similar performance compared to the lka module in van on object recognition, object detection, semantic segmentation, and robustness tests.
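the core decomposition can be illustrated without any deep - learning framework: a separable k x k kernel, written as the outer product of a vertical and a horizontal 1 - d kernel, is reproduced exactly by two cascaded 1 - d convolutions, cutting the per - channel parameter cost from k * k to 2k. this is a hedged numpy sketch of the idea on a single channel, not the paper's implementation.

```python
import numpy as np

def conv2d_valid(img, ker):
    """Naive 'valid' 2-D cross-correlation of a single-channel image."""
    kh, kw = ker.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
v = rng.standard_normal(5)  # vertical 1-D kernel (k x 1)
h = rng.standard_normal(5)  # horizontal 1-D kernel (1 x k)

# One 5x5 conv with the separable (outer-product) kernel...
full = conv2d_valid(img, np.outer(v, h))
# ...equals a horizontal 1-D conv followed by a vertical 1-D conv.
cascade = conv2d_valid(conv2d_valid(img, h[None, :]), v[:, None])
print(np.allclose(full, cascade))  # True
```

the cascade uses 2k weights per channel instead of k * k, which is where the reported savings in computation and memory come from as k grows.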
|
arxiv:2309.01439
|
we propose a parametrization of the nuclear absorption mechanism relying on the proper time spent by $ c \ overline { c } $ bound states travelling in nuclear matter. our approach could lead to the extraction of the charmonium formation time. it is based on a large amount of proton - nucleus data, from nucleon - nucleon center - of - mass energies $ \ sqrt { s _ { nn } } = 27 $ gev to $ \ sqrt { s _ { nn } } = 5. 02 $ tev, collected in the past 30 years, and for which the main effect on charmonium production must be its absorption by the nuclear matter it crosses.
|
arxiv:2107.01150
|
a variety of neutrino flavour conversion phenomena occur in core - collapse supernova, due to the large neutrino density close to the neutrinosphere, and the importance of the neutrino - neutrino interaction. three different regimes have been identified so far, usually called the synchronization, the bipolar oscillations and the spectral split. using the formalism of polarization vectors, within two - flavours, we focus on the spectral split phenomenon and we show for the first time that the physical mechanism underlying the neutrino spectral split is a magnetic resonance phenomenon. in particular, we show that the precession frequencies fulfill the magnetic resonance conditions. our numerical calculations show that the neutrino energies and the location at which the resonance takes place in the supernova coincide well with the neutrino energies at which a spectral swap occurs. the corresponding adiabaticity parameters present spikes at the resonance location.
|
arxiv:1103.5302
|
developing deep learning models to analyze histology images has been computationally challenging, as the massive size of the images causes excessive strain on all parts of the computing pipeline. this paper proposes a novel deep learning - based methodology for improving the computational efficiency of histology image classification. the proposed approach is robust when used with images that have reduced input resolution and can be trained effectively with limited labeled data. pre - trained on the original high - resolution ( hr ) images, our method uses knowledge distillation ( kd ) to transfer learned knowledge from a teacher model to a student model trained on the same images at a much lower resolution. to address the lack of large - scale labeled histology image datasets, we perform kd in a self - supervised manner. we evaluate our approach on two histology image datasets associated with celiac disease ( cd ) and lung adenocarcinoma ( luad ). our results show that a combination of kd and self - supervision allows the student model to approach, and in some cases, surpass the classification accuracy of the teacher, while being much more efficient. additionally, we observe an increase in student classification performance as the size of the unlabeled dataset increases, indicating that there is potential to scale further. for the cd data, our model outperforms the hr teacher model, while needing 4 times fewer computations. for the luad data, our student model results at 1. 25x magnification are within 3 % of the teacher model at 10x magnification, with a 64 times computational cost reduction. moreover, our cd outcomes benefit from performance scaling with the use of more unlabeled data. for 0. 625x magnification, using unlabeled data improves accuracy by 4 % over the baseline. thus, our method can improve the feasibility of deep learning solutions for digital pathology with standard computational hardware.
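a minimal sketch of the distillation objective, assuming the standard temperature - softened kl divergence between teacher and student logits; the paper's exact self - supervised loss may differ, and all names here are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(teacher_logits, student_logits, T=4.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # as is conventional so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, T)  # teacher sees the high-resolution image
    q = softmax(student_logits, T)  # student sees the low-resolution image
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)) * T * T)

t = np.array([[4.0, 1.0, 0.5]])
print(kd_loss(t, t))                           # 0.0 for matching logits
print(kd_loss(t, t[:, ::-1]) > kd_loss(t, t))  # True: mismatch is penalized
```

because the target is the teacher's output rather than a label, the same loss can be driven by unlabeled images, which is the self - supervised aspect exploited above.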
|
arxiv:2101.04170
|
monte carlo simulations of lattice qcd at non - zero baryon chemical potential $ \ mu $ suffer from the notorious complex action problem. we consider qcd with static quarks coupled to a large chemical potential. this leaves us with an su ( 3 ) yang - mills theory with a complex action containing the polyakov loop. close to the deconfinement phase transition the qualitative features of this theory, in particular its z ( 3 ) symmetry properties, are captured by the 3 - d 3 - state potts model. we solve the complex action problem in the potts model by using a cluster algorithm. the improved estimator for the $ \ mu $ - dependent part of the boltzmann factor is real and positive and is used for importance sampling. we localize the critical endpoint of the first order deconfinement phase transition line and find consistency with universal 3 - d ising behavior. we also calculate the static quark - quark, quark - anti - quark, and anti - quark - anti - quark potentials which show screening as expected for a system with non - zero baryon density.
|
arxiv:hep-lat/0101012
|
we prove a nonconventional invariance principle ( functional central limit theorem ) for random fields.
|
arxiv:1101.5752
|
in the past, the features of a user interface were limited by those available in the existing graphical widgets it used. now, improvements in processor speed have fostered the emergence of interpreted languages, in which the appropriate method to render a given data object can be loaded at runtime. xml can be used to precisely describe the association of data types with their graphical handling ( beans ), and java provides an especially rich environment for programming the graphics. we present a graphical user interface builder based on java beans and xml, in which the graphical screens are described textually ( in files or a database ) in terms of their screen components. each component may be a simple text read back, or a complex plot. the programming model provides for dynamic data pertaining to a component to be forwarded synchronously or asynchronously, to the appropriate handler, which may be a built - in method, or a complex applet. this work was initially motivated by the need to move the legacy vms display interface of the slac control program to another platform while preserving all of its existing functionality. however the model allows us a powerful and generic system for adding new kinds of graphics, such as matlab, data sources, such as epics, middleware, such as aida [ 1 ], and transport, such as xml and soap. the system will also include a management console, which will be able to report on the present usage of the system, for instance who is running it where and connected to which channels.
|
arxiv:physics/0111057
|
degrees in accordance with law ; cork institute of technology and other institutes of technology have delegated authority from hetac to make awards to and including master ' s degree level — level 9 of ireland ' s national framework for qualifications ( nfq ) — for all areas of study and doctorate level in a number of others. in 2018, ireland passed the technological universities act, which allowed a number of institutes of technology to transform into technological universities. in a number of countries, although today generally considered similar institutions of higher learning, polytechnics and institutes of technology used to differ in their statutes, teaching competences and organizational histories. in many cases, " polytechnics " were elite technological universities concentrating on applied science and engineering ; " polytechnic " may also be a former designation for a vocational institution before it was granted the exclusive right to award academic degrees and could truly be called an " institute of technology ". a number of polytechnics providing higher education are simply the result of a formal upgrading from their original and historical role as intermediate technical education schools. in some situations, former polytechnics or other non - university institutions have emerged solely through an administrative change of statutes, which often included a name change with the introduction of new designations like " institute of technology ", " polytechnic university ", " university of applied sciences " or " university of technology " for marketing purposes. the emergence of so many upgraded polytechnics, former vocational education and technical schools converted into more university - like institutions, has caused concern where the lack of specialized intermediate technical professionals leads to industrial skill shortages in some fields, and is also associated with an increase in the graduate unemployment rate. 
this is mostly the case in those countries where the education system is not controlled by the state and any institution can grant degrees. evidence has also shown a decline in the general quality of teaching and graduates ' preparation for the workplace, due to the fast - paced conversion of these technical institutions into more advanced higher level institutions. mentz, kotze and van der merwe argue that all the tools are in place to promote the debate on the place of technology in higher education in general and in universities of technology specifically and they posit several questions for the debate. = = institutes by country = = = = = argentina = = = in argentina, the main higher institution devoted to the study of technology is the national technological university which has regional faculties throughout argentina. the buenos aires institute of technology ( itba ) and balseiro institute are other recognized institutes of technology. = = = australia = = =
|
https://en.wikipedia.org/wiki/Institute_of_technology
|
we consider several natural ways of expressing the idea that a one - sided ideal in a c * - algebra ( or a submodule in a hilbert c * - module ) is large, and show that they differ, unlike the case of two - sided ideals in c * - algebras. we then show how these different notions, for ideals and for submodules, are related. we also study some permanence properties for these notions. finally, we use essential right ideals to extend the inner product on a hilbert c * - module to a part of the dual module.
|
arxiv:2402.07288
|
in genetic studies, haplotype data provide more refined information than data about separate genetic markers. however, large - scale studies that genotype hundreds to thousands of individuals may only provide results of pooled data, where only the total allele counts of each marker in each pool are reported. methods for inferring haplotype frequencies from pooled genetic data that scale well with pool size rely on a normal approximation, which we observe to produce unreliable inference when applied to real data. we illustrate cases where the approximation breaks down, due to the normal covariance matrix being near - singular. as an alternative to approximate methods, in this paper we propose exact methods to infer haplotype frequencies from pooled genetic data based on a latent multinomial model, where the observed allele counts are considered integer combinations of latent, unobserved haplotype counts. one of our methods, latent count sampling via markov bases, achieves approximately linear runtime with respect to pool size. our exact methods produce more accurate inference over existing approximate methods for synthetic data and for data based on haplotype information from the 1000 genomes project. we also demonstrate how our methods can be applied to time - series of pooled genetic data, as a proof of concept of how our methods are relevant to more complex hierarchical settings, such as spatiotemporal models.
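the latent - count view can be illustrated on a toy pool with two biallelic markers, whose four haplotypes are 00, 01, 10 and 11. the observed pooled allele counts y are an integer combination y = A @ n of the unobserved haplotype counts n, and exact inference must account for every n consistent with y. the brute - force enumeration below is only a sketch of that constraint set; the paper's Markov - basis sampler explores it scalably.

```python
from itertools import product
import numpy as np

# Configuration matrix: rows = markers, columns = haplotypes 00, 01, 10, 11;
# entry = number of copies of allele 1 carried at that marker.
A = np.array([[0, 0, 1, 1],   # allele-1 count at marker 1 per haplotype
              [0, 1, 0, 1]])  # allele-1 count at marker 2 per haplotype

def consistent_counts(y, pool_size):
    """All latent haplotype count vectors n with sum(n) = pool_size, A @ n = y."""
    sols = []
    for n in product(range(pool_size + 1), repeat=4):
        if sum(n) == pool_size and np.array_equal(A @ np.array(n), y):
            sols.append(n)
    return sols

sols = consistent_counts(np.array([2, 2]), pool_size=4)
print(sols)  # several distinct haplotype configurations explain the same pool
```

even this tiny pool is ambiguous: the counts (0, 2, 2, 0), (1, 1, 1, 1) and (2, 0, 0, 2) all produce identical pooled allele counts, which is exactly why summing over latent counts (rather than a normal approximation) matters.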
|
arxiv:2308.16465
|
glasses are out - of - equilibrium systems aging under the crystallization threat. during ordinary glass formation, the atomic diffusion slows down rendering its experimental investigation impractically long, to the extent that a timescale divergence is taken for granted by many. we circumvent here these limitations, taking advantage of a wide family of glasses rapidly obtained by physical vapor deposition directly into the solid state, endowed with different " ages " rivaling those reached by standard cooling and waiting for millennia. isothermally probing the mechanical response of each of these glasses, we infer a correspondence with viscosity along the equilibrium line, up to exapoise values. we find a dependence of the elastic modulus on the glass age, which, traced back to temperature steepness index of the viscosity, tears down one of the cornerstones of several glass transition theories : the dynamical divergence. critically, our results suggest that the conventional wisdom picture of a glass ceasing to flow at finite temperature could be wrong.
|
arxiv:1508.06896
|
it is well known that the operation of complex systems requires the use of new scientific tools and computer simulation. this paper presents a modular approach for the modeling and analysis of complex systems ( in the communication or transport area ) using hybrid petri nets. the performance evaluation of the hybrid model is carried out by a simulation methodology that allows building up various functioning scenarios. a decision support system based on the above - mentioned methodologies of modeling and analysis will be designed for performance evaluation and time optimization of large - scale communication and / or transport systems.
|
arxiv:1703.10679
|
the mid - rapidity ( dsigma _ ( pn ) / dy at y = 0 ) and total sigma _ ( pn ) production cross sections of j / psi mesons are measured in proton - nucleus interactions. data collected by the hera - b experiment in interactions of 920 gev / c protons with carbon, titanium and tungsten targets are used for this analysis. the j / psi mesons are reconstructed by their decay into lepton pairs. the total production cross section obtained is sigma _ ( pn ) ( j / psi ) = 663 + - 74 + - 46 nb / nucleon. in addition, our result is compared with previous measurements.
|
arxiv:hep-ex/0512029
|
two - dimensional electrophoresis is still a very valuable tool in proteomics, due to its reproducibility and its ability to analyze complete proteins. however, due to its sensitivity to dynamic range issues, its most suitable use in the frame of biomarker discovery is not on very complex fluids such as plasma, but rather on more proximal, simpler fluids such as csf, urine, or secretome samples. here, we describe the complete workflow for the analysis of such dilute samples by two - dimensional electrophoresis, starting from sample concentration, then the two - dimensional electrophoresis step per se, ending with the protein detection by fluorescence.
|
arxiv:1309.2428
|
we investigate the masses of 0 + and 2 + glueballs in su ( 2 ) lattice gauge theory using abelian projection to the maximum abelian gauge. we calculate glueball masses using both abelian links and monopole operators. both methods reproduce the known full su ( 2 ) results quantitatively. positivity problems present in the abelian projection are discussed. we study the dependence of the glueball masses on magnetic current loop size, and find that the 0 + state requires a much greater range of sizes than does the 2 + state.
|
arxiv:hep-lat/9809019
|
we investigate yang - lee zeros of grand partition functions written as truncated fugacity polynomials whose coefficients are given by the canonical partition functions $ z ( t, v, n ) $ up to $ n \ leq n _ { \ text { max } } $. such a truncated partition function inevitably arises from the net - baryon number multiplicity distribution in relativistic heavy ion collisions, where events beyond $ n _ { \ text { max } } $ have insufficient statistics, as well as from canonical approaches in lattice qcd. we use a chiral random matrix model as a solvable model for the chiral phase transition in qcd and show that the edge of the zero distribution closest to the real chemical potential axis is stable against cutting the tail of the multiplicity distribution. similar behavior is also found in lattice qcd at finite temperature for the roberge - weiss transition. in contrast, such stability is found to be absent in the skellam distribution, which does not have a phase transition. we compare the values of $ n _ { \ text { max } } $ needed to obtain stable yang - lee zeros with those needed for critical higher order cumulants.
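the truncated - polynomial construction itself is straightforward to reproduce. the hedged toy sketch below uses poissonian canonical weights (a non - critical case, unlike the chiral random matrix model used above): the full grand partition function exp(lam * xi) has no zeros, so the zeros of the truncated polynomial are pure truncation artifacts and recede from the origin as the cutoff grows.

```python
import numpy as np
from math import factorial

def yang_lee_zeros(coeffs_low_to_high):
    """Roots of the fugacity polynomial sum_N Z(N) * xi^N."""
    # numpy.roots expects the highest-degree coefficient first.
    return np.roots(coeffs_low_to_high[::-1])

lam = 1.0  # toy canonical weights Z(N) = lam^N / N!
z4 = yang_lee_zeros(np.array([lam**n / factorial(n) for n in range(5)]))
z8 = yang_lee_zeros(np.array([lam**n / factorial(n) for n in range(9)]))

# The zero closest to the origin moves outward as n_max grows,
# mimicking the absence of a genuine phase transition in this toy case.
print(np.min(np.abs(z4)) < np.min(np.abs(z8)))  # True
```

in a critical model, by contrast, the edge zero is expected to stabilize with growing cutoff, which is the diagnostic used above.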
|
arxiv:1505.05985
|
hyperuniform states of matter exhibit unusual suppression of density fluctuations at large scales, contrasting sharply with typical disordered configurations. various types of hyperuniformity emerge in multicomponent disordered systems, significantly enhancing their functional properties for advanced applications. this paper focuses on developing a theoretical framework for two - component hyperuniform systems. we provide a robust theoretical basis to identify novel conditions on structure factors for a variety of hyperuniform binary mixtures, classifying them into five distinct types with seven unique states. our findings also offer valuable guidelines for designing multihyperuniform materials where each component preserves hyperuniformity, in addition to the overall hyperuniformity.
|
arxiv:2501.06735
|
materials with an allotropic phase transformation can result in the formation of microstructures where grains have orientation relationships ( ors ) determined by the transformation history. these microstructures influence the final material properties. in zirconium alloys, there is a solid - state body centred cubic ( bcc ) to hexagonal close packed ( hcp ) phase transformation, where the crystal orientations of the hcp phase can be related to the parent bcc structure via the burgers orientation relationship ( bor ). in the present work, we adapt a reconstruction code developed for steels which uses a markov chain clustering algorithm to analyse electron backscatter diffraction ( ebsd ) maps and apply this to the hcp and bcc bor. this algorithm is released as open - source code ( via github, as parentbor ). the algorithm enables new post processing of the original and reconstructed data set to analyse the variants of the hcp alpha - phase that are present and understand shared crystal planes and shared lattice directions within each parent beta grain, and we anticipate that this assists in understanding the transformation related deformation properties of the final microstructure. finally, we compare the parentbor code with recently released reconstruction codes implemented in mtex to reveal differences and similarities in how the microstructure is described.
|
arxiv:2104.10648
|
the mean shift ( ms ) algorithm is a nonparametric method used to cluster sample points and find the local modes of kernel density estimates, using an idea based on iterative gradient ascent. in this paper we develop a mean - shift - inspired algorithm to estimate the modes of regression functions and partition the sample points in the input space. we prove convergence of the sequences generated by the algorithm and derive the non - asymptotic rates of convergence of the estimated local modes for the underlying regression model. we also demonstrate the utility of the algorithm for data - enabled discovery through an application on biomolecular structure data. an extension to subspace constrained mean shift ( scms ) algorithm used to extract ridges of regression functions is briefly discussed.
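for reference, a minimal gaussian - kernel mean shift iteration in numpy (the density - mode version; the paper's regression - mode and scms variants build on the same kernel - weighted update). parameter names and the synthetic data are illustrative.

```python
import numpy as np

def mean_shift(x0, data, bandwidth=0.5, tol=1e-8, max_iter=500):
    """Iterate the mean-shift update until the query point stops moving.

    Each step moves x to the Gaussian-kernel-weighted average of the
    sample, i.e. gradient ascent on the kernel density estimate.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (200, 2)),
                  rng.normal(4, 0.3, (200, 2))])
m0 = mean_shift([0.5, 0.5], data)  # converges near the (0, 0) mode
m1 = mean_shift([3.5, 3.5], data)  # converges near the (4, 4) mode
print(m0, m1)
```

running the update from every sample point and grouping points that converge to the same fixed point yields the clustering use of the algorithm described above.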
|
arxiv:2104.10103
|
using the high energy color dipole approach, we study the exclusive photoproduction of lepton pairs. we use simple models for the elementary dipole - hadron scattering amplitude that capture the main features of the dependence on atomic number a, on energy and on momentum transfer t. this investigation is complementary to the conventional partonic description of timelike compton scattering, which considers quark handbag diagrams at leading order in $ \ alpha _ s $ and simple models of the relevant generalized parton distributions ( gpds ). these calculations serve as input for electromagnetic interactions in pp and aa collisions to be measured at the lhc.
|
arxiv:0805.3144
|
we present a new rheological model depending on a real parameter $ \ nu \ in [ 0, 1 ] $ that reduces to the maxwell body for $ \ nu = 0 $ and to the becker body for $ \ nu = 1 $. the corresponding creep law is expressed in an integral form in which the exponential function of the becker model is replaced and generalized by a mittag - leffler function of order $ \ nu $. then, the corresponding non - dimensional creep function and its rate are studied as functions of time for different values of $ \ nu $ in order to visualize the transition from the classical maxwell body to the becker body. based on the hereditary theory of linear viscoelasticity, we also approximate the relaxation function by solving numerically a volterra integral equation of the second kind. in turn, the relaxation function is shown versus time for different values of $ \ nu $ to visualize again the transition from the classical maxwell body to the becker body. furthermore, we provide a full characterization of the new model by computing, in addition to the creep and relaxation functions, the so - called specific dissipation $ q ^ { - 1 } $ as a function of frequency, which is of particular relevance for geophysical applications.
|
arxiv:1707.05188
|
inspired by the brain ' s information processing using binary spikes, spiking neural networks ( snns ) offer significant reductions in energy consumption and are more adept at incorporating multi - scale biological characteristics. in snns, spiking neurons serve as the fundamental information processing units. however, in most models, these neurons are typically simplified, focusing primarily on the leaky integrate - and - fire ( lif ) point neuron model while neglecting the structural properties of biological neurons. this simplification hampers the computational and learning capabilities of snns. in this paper, we propose a brain - inspired deep distributional reinforcement learning algorithm based on snns, which integrates a bio - inspired multi - compartment neuron ( mcn ) model with a population coding approach. the proposed mcn model simulates the structure and function of apical dendritic, basal dendritic, and somatic compartments, achieving computational power comparable to that of biological neurons. additionally, we introduce an implicit fractional embedding method based on population coding of spiking neurons. we evaluated our model on atari games, and the experimental results demonstrate that it surpasses the vanilla fqf model, which utilizes traditional artificial neural networks ( anns ), as well as the spiking - fqf models that are based on ann - to - snn conversion methods. ablation studies further reveal that the proposed multi - compartment neuron model and the quantile fraction implicit population spike representation significantly enhance the performance of mcs - fqf while also reducing power consumption.
|
arxiv:2301.07275
|
the influence of a fullerene molecule trapped inside a single - wall carbon nanotube on resonant electron transport at low temperatures and strong polaronic coupling is theoretically discussed. strong peak to peak fluctuations and anomalous temperature behavior of conductance amplitudes are predicted and investigated. the influence of the chiral properties of carbon nanotubes on transport is also studied.
|
arxiv:cond-mat/0702153
|
hospitals in india still rely on handwritten medical records despite the availability of electronic medical records ( emr ), complicating statistical analysis and record retrieval. handwritten records pose a unique challenge, requiring specialized data for training models to recognize medications and their recommendation patterns. while traditional handwriting recognition approaches employ 2 - d lstms, recent studies have explored using multimodal large language models ( mllms ) for ocr tasks. building on this approach, we focus on extracting medication names and dosages from simulated medical records. our methodology mirage ( multimodal identification and recognition of annotations in indian general prescriptions ) involves fine - tuning the qwen vl, llava 1. 6 and idefics2 models on 743, 118 high resolution simulated medical record images - fully annotated from 1, 133 doctors across india. our approach achieves 82 % accuracy in extracting medication names and dosages.
|
arxiv:2410.09729
|
suppose $ \ mathbb { f } $ is a field of prime characteristic $ p $ and $ e $ is a finite subgroup of the additive group $ ( \ mathbb { f }, + ) $. then $ e $ is an elementary abelian $ p $ - group. we consider two such subgroups, say $ e $ and $ e ' $, to be equivalent if there is an $ \ alpha \ in \ mathbb { f } ^ * : = \ mathbb { f } \ setminus \ { 0 \ } $ such that $ e = \ alpha e ' $. in this paper we show that rational functions can be used to distinguish equivalence classes of subgroups and, for subgroups of prime rank or rank less than twelve, we give explicit finite sets of separating invariants.
|
arxiv:1610.03709
|
the aim of this paper is to explain how the d - iteration can be used for an efficient asynchronous distributed computation. we present the main ideas of the method and illustrate them through very simple examples.
|
arxiv:1202.3108
|
boundary conditions changing operators have played an important role in conformal field theory. here, we study their equivalent in the case where a mass scale is introduced, in an integrable way, either in the bulk or at the boundary. more precisely, we propose an axiomatic approach to determine the general scalar products $ { } _ b < \ theta _ 1,..., \ theta _ m | | \ theta ' _ 1,..., \ theta ' _ { n } > _ a $ between asymptotic states in the hilbert spaces with $ a $ and $ b $ boundary conditions respectively, and compute these scalar products explicitly in the case of the ising and sinh - gordon models with a mass and a boundary interaction. these quantities can be used to study statistical systems with inhomogeneous boundary conditions, and, more interestingly maybe, dynamical problems in quantum impurity problems. as an example, we obtain a series of new exact results for the transition probability in the double well problem of dissipative quantum mechanics.
|
arxiv:hep-th/9801089
|
flagella and cilia are examples of actively oscillating, whiplike biological filaments that are crucial to processes as diverse as locomotion, mucus clearance, embryogenesis and cell motility. elastically driven rod - like filaments subjected to compressive follower forces provide a way to mimic oscillatory beating in synthetic settings. in the continuum limit, this spatiotemporal response is an emergent phenomenon resulting from the interplay between the structural elastic instability of the slender rods subjected to the non - conservative follower forces, geometric constraints that control the onset of this instability, and viscous dissipation due to fluid drag by ambient media. in this paper, we use an elastic rod model to characterize beating frequencies, the critical follower forces and the non - linear rod shapes, for prestressed, clamped rods subject to two types of fluid drag forces, namely, linear stokes drag and non - linear morrison drag. we find that the critical follower force depends strongly on the initial slack and weakly on the nature of the drag force. the emergent frequencies, however, depend strongly on both the extent of pre - stress as well as the nature of the fluid drag.
|
arxiv:1805.08922
|
we study the massless minimally coupled scalar field on a two - dimensional de sitter space - time in the setting of axiomatic quantum field theory. we construct the invariant wightman distribution obtained as the renormalized zero - mass limit of the massive one. insisting on gauge invariance of the model we construct a vacuum state and a hilbert space of physical states which are invariant under the action of the whole de sitter group. we also present the integral expression of the conserved charge which generates the gauge invariance and propose a definition of dual field.
|
arxiv:math-ph/0609080
|
using a recently developed method, based on a generalization of the zero curvature representation of zakharov and shabat, we study the integrability structure in the abelian higgs model. it is shown that the model contains integrable sectors, where integrability is understood as the existence of infinitely many conserved currents. in particular, a gauge invariant description of the weak and strong integrable sectors is provided. the pertinent integrability conditions are given by a u ( 1 ) generalization of the standard strong and weak constraints for models with two dimensional target space. the bogomolny sector is discussed, as well, and we find that each bogomolny configuration supports infinitely many conserved currents. finally, other models with u ( 1 ) gauge symmetry are investigated.
|
arxiv:hep-th/0702100
|
the question whether p equals np revolves around the discrepancy between active production and mere verification by turing machines. in this paper, we examine the analogous problem for finite transducers and automata. every nondeterministic finite transducer defines a binary relation associating each input word with all output words that the transducer can successfully produce on the given input. finite - valued transducers are those for which there is a finite upper bound on the number of output words that the relation associates with every input word. we characterize finite - valued, functional, and unambiguous nondeterministic transducers whose relations can be verified by a deterministic two - tape automaton, show how to construct such an automaton if one exists, and prove the undecidability of the criterion.
|
arxiv:2005.13710
|
as quantum computing advances, quantum approximate optimization algorithms ( qaoa ) have shown promise in addressing combinatorial optimization problems. however, the limitations of noisy intermediate scale quantum ( nisq ) devices hinder the scalability of qaoa for large - scale optimization tasks. to overcome these challenges, we propose backbone - driven qaoa, a hybrid framework that leverages adaptive tabu search for classical preprocessing to decompose large - scale quadratic unconstrained binary optimization ( qubo ) problems into nisq - compatible subproblems. in our approach, adaptive tabu search dynamically identifies and fixes backbone variables to construct reduced - dimensional subspaces that preserve the critical optimization landscape. these quantum - tractable subproblems are then solved via qaoa, with the resulting solutions iteratively refining the backbone selection in a closed - loop quantum - classical cycle. experimental results demonstrate that our approach not only competes with, and in some cases surpasses, traditional classical algorithms but also performs comparably with recently proposed hybrid classical - quantum algorithms. our proposed framework effectively orchestrates the allocation of quantum and classical resources, thereby enabling the solution of large - scale combinatorial optimization problems on current nisq hardware.
|
arxiv:2504.09575
|
brought forward the idea of the homeostasis between the dynamic processes in the body. hydra experiments performed by abraham trembley in the 18th century began to delve into the regenerative capabilities of cells. during the 19th century, a better understanding of how different metals reacted with the body led to the development of better sutures and a shift towards screw and plate implants in bone fixation. further, it was first hypothesized in the mid - 1800s that cell - environment interactions and cell proliferation were vital for tissue regeneration. = = = modern era ( 20th and 21st centuries ) = = = as time progresses and technology advances, there is a constant need for change in the approach researchers take in their studies. tissue engineering has continued to evolve over centuries. in the beginning people used to look at and use samples directly from human or animal cadavers. now, tissue engineers have the ability to remake many of the tissues in the body through the use of modern techniques such as microfabrication and three - dimensional bioprinting in conjunction with native tissue cells / stem cells. these advances have allowed researchers to generate new tissues in a much more efficient manner. for example, these techniques allow for more personalization which allow for better biocompatibility, decreased immune response, cellular integration, and longevity. there is no doubt that these techniques will continue to evolve, as we have continued to see microfabrication and bioprinting evolve over the past decade. in 1960, wichterle and lim were the first to publish experiments on hydrogels for biomedical applications by using them in contact lens construction. work on the field developed slowly over the next two decades, but later found traction when hydrogels were repurposed for drug delivery. in 1984, charles hull developed bioprinting by converting a hewlett - packard inkjet printer into a device capable of depositing cells in 2 - d. 
three dimensional ( 3 - d ) printing is a type of additive manufacturing which has since found various applications in medical engineering, due to its high precision and efficiency. with biologist james thompson ' s development of first human stem cell lines in 1998 followed by transplantation of first laboratory - grown internal organs in 1999 and creation of the first bioprinter in 2003 by the university of missouri when they printed spheroids without the need of scaffolds, 3 - d bioprinting became more conventionally used in medical field than ever before. so far, scientists have been able to print mini organoids and organs - on -
|
https://en.wikipedia.org/wiki/Tissue_engineering
|
we prove the existence and uniqueness of a strong solution of a stochastic differential equation with normal reflection representing the random motion of finitely many globules. each globule is a sphere with time - dependent random radius and a center moving according to a diffusion process. the spheres are hard, hence non - intersecting, which induces in the equation a reflection term with a local ( collision - ) time. a smooth interaction is considered too and, in the particular case of a gradient system, the reversible measure of the dynamics is given. in the proofs, we analyze geometrical properties of the boundary of the set in which the process takes its values, in particular the so - called uniform exterior sphere and uniform normal cone properties. these techniques extend to other hard core models of objects with a time - dependent random characteristic : we present here an application to the random motion of a chain - like molecule.
|
arxiv:0910.5394
|
this paper describes the submission to the iwslt 2021 offline speech translation task by the upc machine translation group. the task consists of building a system capable of translating english audio recordings extracted from ted talks into german text. submitted systems can be either cascade or end - to - end and use a custom or given segmentation. our submission is an end - to - end speech translation system, which combines pre - trained models ( wav2vec 2. 0 and mbart ) with coupling modules between the encoder and decoder, and uses an efficient fine - tuning technique, which trains only 20 % of its total parameters. we show that adding an adapter to the system and pre - training it, can increase the convergence speed and the final result, with which we achieve a bleu score of 27. 3 on the must - c test set. our final model is an ensemble that obtains 28. 22 bleu score on the same set. our submission also uses a custom segmentation algorithm that employs pre - trained wav2vec 2. 0 for identifying periods of untranscribable text and can bring improvements of 2. 5 to 3 bleu score on the iwslt 2019 test set, as compared to the result with the given segmentation.
|
arxiv:2105.04512
|
a well - posed stress - driven mixture is proposed for timoshenko nano - beams. the model is a convex combination of local and nonlocal phases and circumvents some problems of ill - posedness emerged in strain - driven eringen - like formulations for structures of nanotechnological interest. the nonlocal part of the mixture is the integral convolution between stress field and a bi - exponential averaging kernel function characterized by a scale parameter. the stress - driven mixture is equivalent to a differential problem equipped with constitutive boundary conditions involving bending and shear fields. closed - form solutions of timoshenko nano - beams for selected boundary and loading conditions are established by an effective analytical strategy. the numerical results exhibit a stiffening behavior in terms of scale parameter.
|
arxiv:2006.11368
|
in the construction of a cluster algebra on the homogeneous coordinate ring of a partial flag variety by gei { \ ss }, leclerc and schr { \ " { o } } er, they defined a special map denoted by ` ` tilde ". this map lifts each element $ f $ of the coordinate ring of a schubert cell uniquely to an element $ \ widetilde { f } $ of the ( multi - homogeneous ) coordinate ring of the corresponding partial flag variety. the significance of this map appears from its essential role ; it lifts the cluster algebra of the coordinate ring of a cell to a cluster algebra living in the coordinate ring of the corresponding partial flag variety. this paper takes a closer look at this map and gives an explicit algorithm to calculate it for the \ textit { generalized minors }.
|
arxiv:2305.04045
|
lisa is a space - based mhz gravitational - wave observatory, with a planned launch in 2034. it is expected to be the first detector of its kind, and will present unique challenges in instrumentation and data analysis. an accurate preflight simulation of lisa data is a vital part of the development of both the instrument and the analysis methods. the simulation must include a detailed model of the full measurement and analysis chain, capturing the main features that affect the instrument performance and processing algorithms. here, we propose a new model that includes, for the first time, proper relativistic treatment of reference frames with realistic orbits ; a model for onboard clocks and clock synchronization measurements ; proper modeling of total laser frequencies, including laser locking, frequency planning and doppler shifts ; better treatment of onboard processing and updated noise models. we then introduce two implementations of this model, lisanode and lisa instrument. we demonstrate that tdi processing successfully recovers gravitational - wave signals from the significantly more realistic and complex simulated data. lisanode and lisa instrument are already widely used by the lisa community and, for example, currently provide the mock data for the lisa data challenges.
|
arxiv:2212.05351
|
the copernican principle, the notion that we are not at a special location in the universe, is one of the cornerstones of modern cosmology and its violation would invalidate the friedmann - lema \ ^ { \ i } tre - robertson - walker ( flrw ) metric, causing a major change in our understanding of the universe. thus, it is of fundamental importance to perform observational tests of this principle. we determine the precision with which future surveys will be able to test the copernican principle and their ability to detect any possible violations. we forecast constraints on the inhomogeneous lema \ ^ { \ i } tre - tolman - bondi model with a cosmological constant $ \ lambda $ ( $ \ lambda $ ltb ), basically a cosmological constant $ \ lambda $ and cold dark matter ( $ \ lambda $ cdm ) model, but endowed with a spherical inhomogeneity. we consider combinations of currently available data and simulated euclid data, together with external data products, based on both $ \ lambda $ cdm and $ \ lambda $ ltb fiducial models. these constraints are compared to the expectations from the copernican principle. when considering the $ \ lambda $ cdm fiducial model, we find that euclid data, in combination with other current and forthcoming surveys, will improve the constraints on the copernican principle by about $ 30 \ % $, with $ \ pm10 \ % $ variations depending on the observables and scales considered. on the other hand, when considering a $ \ lambda $ ltb fiducial model, we find that future euclid data, combined with other current and forthcoming data sets, will be able to detect gpc - scale inhomogeneities of contrast $ - 0. 1 $. next - generation surveys, such as euclid, will thoroughly test homogeneity at large scales, tightening the constraints on possible violations of the copernican principle.
|
arxiv:2207.09995
|
aims. we study the 2d spectral line profile of harps ( high accuracy radial velocity planet searcher ), measuring its variation with position across the detector and with changing line intensity. the characterization of the line profile and its variations are important for achieving the precision of the wavelength scales of 10 ^ { - 10 } or 3. 0 cm / s necessary to detect earth - twins in the habitable zone around solar - like stars. methods. we used a laser frequency comb ( lfc ) with unresolved and unblended lines to probe the instrument line profile. we injected the lfc light ( attenuated by various neutral density filters ) into both the object and the reference fibres of harps, and we studied the variations of the line profiles with the line intensities. we applied moment analysis to measure the line positions, widths, and skewness as well as to characterize the line profile distortions induced by the spectrograph and detectors. based on this, we established a model to correct for point spread function distortions by tracking the beam profiles in both fibres. results. we demonstrate that the line profile varies with the position on the detector and as a function of line intensities. this is consistent with a charge transfer inefficiency ( cti ) effect on the harps detector. the estimate of the line position depends critically on the line profile, and therefore a change in the line amplitude effectively changes the measured position of the lines, affecting the stability of the wavelength scale of the instrument. we deduce and apply the correcting functions to re - calibrate and mitigate this effect, reducing it to a level consistent with photon noise.
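the moment analysis used to measure line positions, widths and skewness reduces to flux - weighted sums over the profile ( a generic sketch of the moments themselves, not the harps pipeline or its cti correction ) :

```python
import numpy as np

def line_moments(x, flux):
    """First three normalized moments of a spectral line profile:
    centroid (position), rms width, and skewness."""
    x = np.asarray(x, dtype=float)
    flux = np.asarray(flux, dtype=float)
    norm = flux.sum()
    centroid = (x * flux).sum() / norm
    var = ((x - centroid) ** 2 * flux).sum() / norm
    width = np.sqrt(var)
    skew = ((x - centroid) ** 3 * flux).sum() / (norm * width ** 3)
    return centroid, width, skew
```

for a symmetric ( e. g. gaussian ) profile the skewness vanishes, so a nonzero third moment flags the kind of profile distortion tracked in this study.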
|
arxiv:2011.03391
|
we compare deep magellan spectroscopy of 26 groups at 0. 3 < = z < = 0. 55, selected from the canadian network for observational cosmology 2 field survey ( cnoc2 ), with a large sample of nearby groups from the 2pigg catalogue ( eke et al., 2004 ). we find that the fraction of group galaxies with significant [ oii ] emission ( > = 5 \ aa ) increases strongly with redshift, from ~ 29 % in 2dfgrs to ~ 58 % in cnoc2, for all galaxies brighter than ~ m * + 1. 75. this trend is parallel to the evolution of field galaxies, where the equivalent fraction of emission line galaxies increases from ~ 53 % to ~ 75 %. the fraction of emission - line galaxies in groups is lower than in the field, across the full redshift range, indicating that the history of star formation in groups is influenced by their environment. we show that the evolution required to explain the data is inconsistent with a quiescent model of galaxy evolution ; instead, discrete events in which galaxies cease forming stars ( truncation events ) are required. we constrain the probability of truncation ( p _ trunc ) and find that a high value is required in a simple evolutionary scenario neglecting galaxy mergers ( p _ trunc > ~ 0. 3 gyr ^ { - 1 } ). however, without assuming significant density evolution, p _ trunc is not required to be larger in groups than in the field, suggesting that the environmental dependence of star formation was embedded at redshifts z > ~ 0. 45.
|
arxiv:astro-ph/0501183
|
top quark pair production in association with a $ z $ - boson or a photon at the lhc directly probes neutral top - quark couplings. we present predictions for these two processes in the standard model ( sm ) effective field theory ( eft ) at next - to - leading ( nlo ) order in qcd. we include the full set of cp - even dimension - six operators that enter the top - quark interactions with the sm gauge bosons. for comparison, we also present predictions in the smeft for top loop - induced $ hz $ production at the lhc and for $ t \ bar { t } $ production at the ilc at nlo in qcd. results for total cross sections and differential distributions are obtained and uncertainties coming from missing higher orders in the strong coupling and in the eft expansions are discussed. nlo results matched to the parton shower are available, allowing for event generation to be directly employed in experimental analyses. our framework provides a solid basis for the interpretation of current and future measurements in the smeft, with improved accuracy and precision.
|
arxiv:1601.08193
|
k strings in yang - mills theory can be considered as bound states of k elementary confining strings carrying one unit of colour flux. current estimates of k - string tension sigma _ k are very sensitive to the leading corrections due to quantum fluctuations of the string. in this study we address this problem by comparing polyakov - polyakov correlators in the fundamental representation ( k = 1 ) with the corresponding ones with k = 2 in the confining phase of a z _ 4 gauge theory in three dimensions. highly efficient simulation techniques are available in this case. although the k = 1 polyakov - polyakov correlator matches nicely with the expected bosonic string effects up to the next - to - leading - order, the k = 2 polyakov - polyakov correlators show large deviations. this is an important source of potential systematic errors in the current estimates of sigma _ k.
|
arxiv:hep-lat/0609055
|
computer - aided - diagnosis ( cadx ) systems assist radiologists with identifying and classifying potentially malignant pulmonary nodules on chest ct scans using morphology and texture - based ( radiomic ) features. however, radiomic features are sensitive to differences in acquisitions due to variations in dose levels and slice thickness. this study investigates the feasibility of generating a normalized scan from heterogeneous ct scans as input. we obtained projection data from 40 low - dose chest ct scans, simulating acquisitions at 10 %, 25 % and 50 % dose and reconstructing the scans at 1. 0mm and 2. 0mm slice thickness. a 3d generative adversarial network ( gan ) was used to simultaneously normalize reduced dose, thick slice ( 2. 0mm ) images to normal dose ( 100 % ), thinner slice ( 1. 0mm ) images. we evaluated the normalized image quality using peak signal - to - noise ratio ( psnr ), structural similarity index ( ssim ) and learned perceptual image patch similarity ( lpips ). our gan improved perceptual similarity by 35 %, compared to a baseline cnn method. our analysis also shows that the gan - based approach led to a significantly smaller error ( p - value < 0. 05 ) in nine studied radiomic features. these results indicated that gans could be used to normalize heterogeneous ct images and reduce the variability in radiomic feature values.
|
arxiv:2001.08741
|
in this work we analyze the convergence properties of the spectral deferred correction ( sdc ) method originally proposed by dutt et al. [ bit, 40 ( 2000 ), pp. 241 - 266 ]. the framework for this high - order ordinary differential equation ( ode ) solver is typically described wherein a low - order approximation ( such as forward or backward euler ) is lifted to higher order accuracy by applying the same low - order method to an error equation and then adding in the resulting defect to correct the solution. our focus is not on solving the error equation to increase the order of accuracy, but on rewriting the solver as an iterative picard integral equation solver. in doing so, our chief finding is that it is not the low - order solver that picks up the order of accuracy with each correction, but it is the underlying quadrature rule of the right hand side function that is solely responsible for picking up additional orders of accuracy. our proofs point to a total of three sources of errors that sdc methods carry : the error at the current time point, the error from the previous iterate, and the numerical integration error that comes from the total number of quadrature nodes used for integration. the second of these sources of errors is what separates sdc methods from picard integral equation methods ; our findings indicate that as long as the difference between the current and previous iterate always gets multiplied by at least a constant multiple of the time step size, then high - order accuracy can be found even if the underlying " solver " is inconsistent with the underlying ode. from this vantage, we solidify the prospects of extending spectral deferred correction methods to a larger class of solvers, for which we present some examples.
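the picard integral equation view described above can be sketched directly : sweep the fixed - point map built from interpolatory quadrature of the right - hand side on a fixed node set ( a toy one - step illustration, not the paper ' s analysis code ) :

```python
import numpy as np

def picard_sdc_step(f, t0, y0, dt, nodes, sweeps):
    """One step of a Picard-type iteration: repeatedly apply
    y_{k+1}(t_m) = y0 + int_{t0}^{t_m} f(t, y_k(t)) dt,
    with the integral taken by interpolatory quadrature on the nodes."""
    t = t0 + dt * np.asarray(nodes, dtype=float)   # collocation points
    M = len(t)
    # integration matrix: S[m, j] = integral of Lagrange basis l_j
    # from t0 to t[m], built from exact polynomial interpolation
    S = np.zeros((M, M))
    for j in range(M):
        e = np.zeros(M)
        e[j] = 1.0
        antideriv = np.polynomial.Polynomial.fit(t, e, M - 1).integ()
        S[:, j] = antideriv(t) - antideriv(t0)
    y = np.full(M, y0, dtype=float)                # zeroth iterate: constant
    for _ in range(sweeps):
        y = y0 + S @ f(t, y)                       # one Picard sweep
    return y[-1]                                   # value at t0 + dt
```

each sweep multiplies the iteration error by roughly a constant times the step size, so the iterates approach the collocation solution whose accuracy is set by the quadrature nodes, which is the mechanism the paper isolates.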
|
arxiv:1706.06245
|
we show that a group whose generalized torsion elements are torsion elements ( which we call a $ tr ^ { * } $ - group ) is a torsion - by - $ r ^ { * } $ group, that is, an extension of a torsion group by a group without generalized torsion elements. we also discuss generalized torsion groups, groups all of whose non - trivial elements are generalized torsion elements.
|
arxiv:2303.05726
|
a calculation of hadronic timelike form factors in the poincar \ ' e - covariant bethe - salpeter formalism necessitates knowing the analytic structure of the non - perturbative quark - photon vertex in the context of the poincar \ ' e - covariant bethe - salpeter formalism. we include, in the interaction between quark and antiquark, the possibility of non - valence effects by introducing pions as explicit degrees of freedom. these encode the presence of intermediate resonances in the bethe - salpeter interaction kernel. we calculate the vertex for real as well as complex photon momentum. we show how the vertex reflects now the correct physical picture, with the rho resonance appearing as a pole in the complex momentum plane. a multiparticle branch cut for values of the photon momentum from $ - 4m _ \ pi ^ 2 $ to $ - \ infty $ develops. this calculation represents an essential step towards the calculation of timelike form factors in the bethe - salpeter approach.
|
arxiv:1906.06227
|
annotated images are required for both supervised model training and evaluation in image classification. manually annotating images is arduous and expensive, especially for multi - labeled images. a recent trend for conducting such laborious annotation tasks is through crowdsourcing, where images are annotated by volunteers or paid workers online ( e. g., workers of amazon mechanical turk ) from scratch. however, the quality of crowdsourcing image annotations cannot be guaranteed, and incompleteness and incorrectness are two major concerns for crowdsourcing annotations. to address such concerns, we rethink crowdsourcing annotations : our simple hypothesis is that if the annotators only partially annotate multi - label images with salient labels they are confident in, there will be fewer annotation errors and annotators will spend less time on uncertain labels. as a pleasant surprise, with the same annotation budget, we show that a multi - label image classifier supervised by images with salient annotations can outperform models supervised by fully annotated images. our method contributions are 2 - fold : an active learning approach is proposed to acquire salient labels for multi - label images ; and a novel adaptive temperature associated model ( atam ) specifically using partial annotations is proposed for multi - label image classification. we conduct experiments on practical crowdsourcing data, the open street map ( osm ) dataset and benchmark dataset coco 2014. when compared with state - of - the - art classification methods trained on fully annotated images, the proposed atam can achieve higher accuracy. the proposed idea is promising for crowdsourcing data annotation. our code will be publicly available.
|
arxiv:2109.02688
|
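The salient-label idea above — supervising a classifier with only the labels annotators were confident enough to provide — can be sketched as a masked binary cross-entropy that simply skips unannotated classes. This is a minimal illustration, not the paper's atam model; the function name and `None`-as-unlabeled convention are assumptions for the sketch:

```python
import math

def masked_bce(probs, partial_labels):
    """binary cross-entropy over only the classes an annotator actually labeled.

    probs: predicted probabilities per class, e.g. [0.9, 0.2, 0.7]
    partial_labels: 1 (present), 0 (absent), or None (not annotated)
    """
    total, count = 0.0, 0
    for p, y in zip(probs, partial_labels):
        if y is None:                      # unlabeled class: contributes nothing
            continue
        p = min(max(p, 1e-7), 1 - 1e-7)    # clamp for numerical safety
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
        count += 1
    return total / max(count, 1)

# only the two confidently annotated classes enter the loss;
# the middle class (None) is ignored entirely
loss = masked_bce([0.9, 0.2, 0.7], [1, None, 0])
```

With the same annotation budget, the unlabeled entries cost the annotator nothing and add no noise to the gradient, which is the intuition behind training on salient labels only.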
using square bridge position, akbulut - ozbagci and later arikan gave algorithms, both of which construct an explicit compatible open book decomposition on a closed contact $3$-manifold that results from contact $(\pm 1)$-surgery on a legendrian link in the standard contact $3$-sphere. in this article, we introduce the "generalized square bridge position" for a legendrian link in the standard contact $5$-sphere and partially generalize this result to dimension five via an algorithm that constructs relative open book decompositions on relative contact pairs.
|
arxiv:2303.08603
|
we consider the hurwitz action on quasipositive factorizations of 3-braids. we prove that every orbit contains an element of a special form. this fact provides an algorithm for finding representatives of every orbit for a given braid. we also prove that (1) any 3-braid has a finite number of orbits; (2) a birman-ko-lee positive 3-braid has a single orbit; (3) a 3-braid of algebraic length two has at most two orbits.
|
arxiv:1409.4726
|
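The elementary Hurwitz moves on a factorization $(a_1, \dots, a_n)$ replace the adjacent pair $(a_i, a_{i+1})$ by $(a_i a_{i+1} a_i^{-1}, a_i)$, preserving the product. A minimal combinatorial sketch, representing elements as freely reduced words (so braid relations are ignored — this illustrates the moves themselves, not the paper's classification algorithm):

```python
def reduce_word(word):
    """freely reduce a word: cancel adjacent g, g^{-1} pairs."""
    out = []
    for g in word:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return tuple(out)

def mul(a, b):
    return reduce_word(a + b)

def inv(a):
    return tuple(-g for g in reversed(a))

def hurwitz(factors, i):
    """move at position i: (.., a_i, a_{i+1}, ..) -> (.., a_i a_{i+1} a_i^{-1}, a_i, ..)"""
    f = list(factors)
    f[i], f[i + 1] = mul(mul(f[i], f[i + 1]), inv(f[i])), f[i]
    return f

def hurwitz_inv(factors, i):
    """inverse move: (.., a_i, a_{i+1}, ..) -> (.., a_{i+1}, a_{i+1}^{-1} a_i a_{i+1}, ..)"""
    f = list(factors)
    f[i], f[i + 1] = f[i + 1], mul(mul(inv(f[i + 1]), f[i]), f[i + 1])
    return f

# generators 1, 2 stand for sigma_1, sigma_2; negatives are inverses
factors = [(1,), (2,), (1,)]
moved = hurwitz(factors, 0)
```

The two invariants to check by hand: each move preserves the total product, and the two moves at the same position are mutually inverse — which is exactly why orbits of the action are well defined.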
this paper presents a novel structured knowledge representation called the functional object-oriented network (foon) to model the connectivity of function-related objects and their motions in manipulation tasks. the graphical model foon is learned by observing object state changes and human manipulation of the objects. using a well-trained foon, robots can decipher a task goal, seek the correct objects in the desired states on which to operate, and generate a sequence of proper manipulation motions. the paper describes foon's structure and an approach to form a universal foon from knowledge extracted from online instructional videos. a graph retrieval approach is presented to generate manipulation motion sequences from the foon to achieve a desired goal, demonstrating the flexibility of foon in creating a novel and adaptive means of solving a problem using knowledge gathered from multiple sources. the results are demonstrated in a simulated environment to illustrate the motion sequences generated from the foon to carry out the desired tasks.
|
arxiv:1902.01537
|
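The graph retrieval step above — walking a learned graph of object states and motions to reach a goal state — can be sketched as a breadth-first search over a toy state graph. The state names and graph schema here are illustrative assumptions, not foon's actual representation:

```python
from collections import deque

# toy functional graph: (object state) --motion--> (object state)
GRAPH = {
    "onion:whole":  [("slice", "onion:sliced")],
    "onion:sliced": [("saute", "onion:cooked")],
    "pan:empty":    [("pour-oil", "pan:oiled")],
}

def retrieve_motions(start, goal, graph=GRAPH):
    """breadth-first retrieval of a shortest motion sequence from start to goal;
    returns None when the goal state is unreachable."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, motions = queue.popleft()
        if state == goal:
            return motions
        for motion, nxt in graph.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, motions + [motion]))
    return None
```

A real foon would also encode which objects each motion requires, but the retrieval principle — search the graph for a motion sequence whose end state matches the task goal — is the same.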
the descent algebra of the symmetric group, over a field of non-zero characteristic p, is studied. a homomorphism into the algebra of generalised p-modular characters of the symmetric group is defined. this is then used to determine the radical, and its nilpotency index. it also allows the irreducible representations of the descent algebra to be described.
|
arxiv:0706.2707
|
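For orientation: the descent algebra has a natural basis indexed by subsets of $\{1, \dots, n-1\}$, where a permutation $w$ has a descent at position $i$ when $w(i) > w(i+1)$. A quick sketch of the combinatorial objects involved (just the descent statistics, not the modular character homomorphism of the paper):

```python
from itertools import permutations
from collections import Counter

def descent_set(w):
    """positions i (1-indexed) with w[i] > w[i+1]."""
    return frozenset(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

# tally the permutations of s_3 by descent set: each subset of {1, 2}
# indexes one basis element of the descent algebra of s_3
counts = Counter(descent_set(w) for w in permutations(range(1, 4)))
```

For $S_3$ this gives one permutation with empty descent set, two each with descent set $\{1\}$ or $\{2\}$, and one with $\{1,2\}$, accounting for all $3! = 6$ permutations.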
we study b0 -> rho0 rho0 decays in a sample of 465 * 10^6 upsilon(4s) -> bbbar events collected with the babar detector at the pep-ii asymmetric-energy e+e- collider located at the stanford linear accelerator center (slac). we measure the branching fraction b = (0.92 +/- 0.32 +/- 0.14) * 10^{-6} and longitudinal polarization fraction f_l = 0.75^{+0.11}_{-0.14} +/- 0.04, where the first uncertainty is statistical and the second is systematic. the evidence for the b0 -> rho0 rho0 signal has a significance of 3.1 standard deviations, including systematic uncertainties. we investigate the proper-time dependence of the longitudinal component in the decay and measure the cp-violating coefficients s^{00}_l = (0.3 +/- 0.7 +/- 0.2) and c^{00}_l = (0.2 +/- 0.8 +/- 0.3). we study the implication of these results for the unitarity triangle angle alpha.
|
arxiv:0807.4977
|
we probe for statistical and coulomb-induced spin textures among the low-lying states of repulsively interacting particles confined to potentials that are both rotationally and time-reversal invariant. in particular, we focus on two-dimensional quantum dots and employ configuration-interaction techniques to directly compute the correlated many-body eigenstates of the system. we produce spatial maps of the single-particle charge and spin density and verify the annular structure of the charge density and the rotational invariance of the spin field. we further compute two-point spin correlations to determine the correlated structure of a single component of the spin vector field. in addition, we compute three-point spin correlation functions to uncover chiral structures. we present evidence for both chiral and quasi-topological spin textures within energetically degenerate subspaces in the three- and four-particle system.
|
arxiv:1009.1784
|
we provide a simple and generic adaptive restart scheme for convex optimization that achieves worst-case bounds matching (up to constant multiplicative factors) optimal restart schemes that require knowledge of problem-specific constants. the scheme triggers restarts whenever there is sufficient reduction of a distance-based potential function. this potential function is always computable. we apply the scheme to obtain the first adaptive restart algorithm for saddle-point algorithms, including primal-dual hybrid gradient (pdhg) and extragradient. the method improves the worst-case bounds of pdhg on bilinear games, and numerical experiments on quadratic assignment problems and matrix games demonstrate dramatic improvements for obtaining high-accuracy solutions. additionally, for accelerated gradient descent (agd), this scheme obtains a worst-case bound within 60% of the bound achieved by the (unknown) optimal restart period when high accuracy is desired. in practice, the scheme is competitive with the heuristic of o'donoghue and candes (2015).
|
arxiv:2006.08484
|
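The restart trigger above — restart whenever a computable potential has dropped by a fixed factor since the last restart — can be illustrated on nesterov's accelerated gradient descent. This is a toy sketch: the gradient norm is used as a stand-in potential (always computable, but not the paper's distance-based potential), and the halving threshold is an assumption:

```python
def agd_with_restart(grad, x0, step, tol=1e-8, max_iter=10000):
    """nesterov agd; momentum is reset whenever the potential
    (here the gradient norm) has halved since the last restart."""
    def norm(v):
        return sum(vi * vi for vi in v) ** 0.5

    x, y, t = list(x0), list(x0), 1.0
    pot0 = norm(grad(x))                       # potential at last restart
    for _ in range(max_iter):
        g = grad(y)
        x_new = [yi - step * gi for yi, gi in zip(y, g)]
        t_new = (1 + (1 + 4 * t * t) ** 0.5) / 2
        y = [xn + (t - 1) / t_new * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
        p = norm(grad(x))
        if p <= tol:
            break
        if p <= pot0 / 2:                      # sufficient reduction: restart
            t, y, pot0 = 1.0, list(x), p
    return x

# ill-conditioned quadratic f(x) = (x1^2 + 100 x2^2) / 2, step = 1/L
x_star = agd_with_restart(lambda v: [v[0], 100 * v[1]], [1.0, 1.0], 0.01)
```

The appeal of the scheme is that the trigger needs no problem constants: the potential itself decides when momentum has paid off and should be flushed.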
we use berezin's quantization procedure to obtain a formal $u_q su_{1,1}$-invariant deformation of the quantum disc. explicit formulae for the associated q-bidifferential operators are produced.
|
arxiv:math/9904173
|