Although many near-Earth objects have been found by ground-based telescopes, some fast-moving ones, especially those near detection limits, have been missed by observatories. We developed a convolutional neural network for detecting faint fast-moving near-Earth objects. It was trained with artificial streaks generated from simulations and was able to find these asteroid streaks with an accuracy of 98.7% and a false positive rate of 0.02% on simulated data. This program was used to search image data from the Zwicky Transient Facility (ZTF) over four nights in 2019, and it identified six previously undiscovered asteroids. The visual magnitudes of our detections range from ~19.0-20.3 and the motion rates range from ~6.8-24 deg/day, which is very faint compared to other ZTF detections moving at similar rates. Our asteroids are also ~1-51 m in diameter and ~5-60 lunar distances away at close approach, assuming their albedo values follow the albedo distribution function of known asteroids. The use of a purely simulated dataset to train our model enables the program to gain sensitivity in detecting faint and fast-moving objects while still recovering nearly all discoveries made by previous neural networks that were trained on real detections. Our approach can be adopted by any observatory for detecting fast-moving asteroid streaks.
arxiv:2208.09098
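The streak-injection idea in the abstract above can be illustrated with a minimal sketch: deposit point-source flux along a line and blur it with a Gaussian PSF before adding it to a noisy background frame. All function names, parameter values, and the PSF model here are illustrative assumptions, not the authors' actual simulation pipeline.

```python
import numpy as np

def inject_streak(image, x0, y0, angle_deg, length, flux, psf_sigma=1.5):
    """Add a linear, asteroid-like streak to an image.

    The streak is modeled as point sources deposited along a line and
    blurred with a Gaussian PSF; the total injected flux is normalized
    to `flux`. (Illustrative model, not the paper's pipeline.)
    """
    h, w = image.shape
    theta = np.radians(angle_deg)
    n_steps = max(int(length * 4), 2)          # oversample along the trail
    ts = np.linspace(0.0, length, n_steps)
    xs = x0 + ts * np.cos(theta)
    ys = y0 + ts * np.sin(theta)
    yy, xx = np.mgrid[0:h, 0:w]
    streak = np.zeros_like(image, dtype=float)
    for x, y in zip(xs, ys):
        streak += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * psf_sigma ** 2))
    streak *= flux / streak.sum()              # normalize total injected flux
    return image + streak

rng = np.random.default_rng(0)
background = rng.normal(100.0, 5.0, size=(64, 64))   # mock sky + read noise
frame = inject_streak(background, 20, 20, 30.0, 25.0, 5000.0)
print(round(frame.sum() - background.sum(), 1))      # total injected flux, ~5000.0
```

Frames generated this way (streak present vs. absent) can serve as positive and negative training examples for a binary classifier.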
The high penetration of renewable energy sources in modern smart grids has necessitated the development of demand response (DR) mechanisms, as well as corresponding innovative services for the emerging flexibility markets. From a game-theoretic perspective, the basic key performance indicators (KPIs) for such DR mechanisms are efficiency in terms of social welfare, practical applicability, and incentive guarantees, in the sense of making it a dominant strategy for each user to act truthfully according to his/her preferences, leaving no room for cheating. In this paper, we propose a DR architecture, including a mechanism based on the Ausubel clinching auction and a communication protocol, that provably guarantees both efficiency and truthful user participation. Practicality/ease of participation is enhanced via simple queries, while user privacy issues are addressed via a distributed implementation. Simulation results confirm the desired properties, while also showing that the truthfulness property becomes even more important in markets where participants are not particularly flexible.
arxiv:1902.09251
Recently, contrastive learning has shown significant progress in learning visual representations from unlabeled data. The core idea is to train the backbone to be invariant to different augmentations of an instance. While most methods only maximize the feature similarity between two augmented data, we further generate more challenging training samples and force the model to keep predicting discriminative representations on these hard samples. In this paper, we propose MixSiam, a mixture-based approach built upon the traditional Siamese network. On the one hand, we input two augmented images of an instance to the backbone and obtain the discriminative representation by performing an element-wise maximum of the two features. On the other hand, we take the mixture of these augmented images as input and expect the model prediction to be close to the discriminative representation. In this way, the model can access more variant data samples of an instance and keep predicting invariant discriminative representations for them. Thus, the learned model is more robust than previous contrastive learning methods. Extensive experiments on large-scale datasets show that MixSiam steadily improves the baseline and achieves competitive results with state-of-the-art methods. Our code will be released soon.
arxiv:2111.02679
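The two MixSiam ingredients described above (the element-wise maximum of two augmented features as the target, and a mixed view pushed toward it) can be sketched on plain arrays. The feature values, normalization, and cosine-distance loss here are illustrative assumptions, not the paper's exact training objective.

```python
import numpy as np

def l2_normalize(v):
    # unit-normalize along the feature axis
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def mixsiam_step(feat_a, feat_b, mix_feat):
    """One conceptual MixSiam target computation on pre-extracted features.

    feat_a, feat_b: backbone features of two augmentations of one instance
    mix_feat:       backbone feature of the mixed (harder) input
    (Illustrative sketch, not the paper's implementation.)
    """
    # discriminative representation: element-wise maximum of the two views
    target = np.maximum(feat_a, feat_b)
    # push the mixture's prediction toward the target (cosine distance)
    cos = float(np.sum(l2_normalize(mix_feat) * l2_normalize(target)))
    return target, 1.0 - cos

rng = np.random.default_rng(1)
fa, fb = rng.normal(size=(2, 8))
target, loss = mixsiam_step(fa, fb, 0.5 * (fa + fb))
print(loss >= 0.0)  # cosine distance is bounded below by 0
```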
Extreme mass ratio inspirals (EMRIs), i.e., binary systems composed of a compact stellar-mass object orbiting a massive black hole, are expected to be among the primary gravitational wave (GW) sources for the forthcoming LISA mission. The astrophysical processes leading to the formation of such systems remain poorly understood, resulting in large uncertainties in the predicted cosmic rate of these sources, spanning at least three orders of magnitude. As LISA can individually resolve mostly EMRIs up to $z \gtrsim 1$, the ensemble of signals below its detection threshold will add up incoherently, forming an unresolved confusion noise which can be formally described as a stochastic background. We perform an extensive study of this background by considering a collection of astrophysically motivated EMRI formation scenarios spanning current uncertainties. We find that, for most astrophysical models, this signal is easily detectable by LISA, with signal-to-noise ratios of several hundreds. In fiducial EMRI models -- predicting hundreds of EMRI detections during mission operations -- the background level is comparable to the LISA noise, affecting the performance of the instrument around 3 mHz. In extreme cases, this background can even "erase" the whole LISA sensitivity bucket in the 2-10 mHz frequency range. This points to the need for a better understanding of EMRI astrophysics for a full assessment of the LISA mission's potential.
arxiv:2007.14403
We report the detection of H$_2$ in a $z_{\rm abs} = 0.0963$ damped Lyman-$\alpha$ (DLA) system towards the $z_{\rm em} = 0.4716$ QSO J1619+3342. This DLA has log N(H I) = 20.55(0.10), 18.13 < log N(H$_2$) < 18.40, [S/H] = -0.62(0.13), [Fe/S] = -1.00(0.17), and a molecular fraction of -2.11 < log f(H$_2$) < -1.85. The inferred gas kinetic temperature, using the rotational level population, is in the range 95-132 K. We do not detect C I or C II* absorption from this system. Using deep R- and V-band images, we identify a sub-L* galaxy at an impact parameter of 14 kpc from the line of sight, with a consistent photometric redshift, as a possible host for the absorber. We use the photoionization code CLOUDY to derive the physical conditions in the H$_2$ component using the observational constraints from H$_2$, C I, C II*, and Mg I. All the observations can be consistently explained if one or more of the following is true: (i) carbon is underabundant by more than 0.6 dex, as seen in halo stars with Z ~ 0.1 Z_sun; (ii) the H I associated with the H$_2$ component is less than 50% of the H I measured along the line of sight; and (iii) the H$_2$ formation rate on dust grains is at least a factor of two higher than what is typically used in analytic calculations for the Milky Way interstellar medium. Even when these are satisfied, the gas kinetic temperature in the models is much lower than what is inferred from the ortho-to-para ratio of the molecular hydrogen. Alternatively, the high kinetic temperature could be a consequence of contributions to the gas heating from non-radiative heating processes seen in hydrodynamical simulations.
arxiv:1406.5517
We present an implementation of the matched-filter technique to detect tidal tails of globular clusters. The method was tested using SDSS data for the globular cluster Palomar 5, revealing its well-known tidal tails. We also ran a simulation of a globular cluster with a tidal tail, where we successfully recover the tails for a cluster at the same position and with the same characteristics as NGC 2298. Based on the simulation, we estimate that the matched filter increases the contrast of the tail relative to the background of stars by a factor of 2.5 for the case of NGC 2298. We also present photometry of the globular cluster NGC 2298 using the MOSAIC2 camera installed on the CTIO 4m telescope. The photometry covers ~3 deg^2, reaching V ~ 23. A fit of a King profile to the radial density profile of NGC 2298 shows that this cluster has a tidal radius of 15.91' ± 1.07', which is twice the value found in the literature. The application of the matched filter to NGC 2298 reveals several extra-tidal structures, including a leading and a trailing tail. We also find that NGC 2298 has extra-tidal structures stretching towards and against the Galactic disk, suggesting strong tidal interaction. Finally, we assess how the matched filter performs when applied to a globular cluster with and without mass segregation taken into account. We find that disregarding the effects of mass segregation may significantly reduce the detection limit of the matched filter.
arxiv:1105.1933
A closed formula for the universal part of the supersymmetric R\'enyi entropy $S_q$ of six-dimensional $(1,0)$ superconformal theories is proposed. Within our arguments, $S_q$ across a spherical entangling surface is a cubic polynomial of $\nu = 1/q$, with 4 coefficients expressed as linear combinations of the 't Hooft anomaly coefficients for the $R$-symmetry and gravitational anomalies. As an application, we establish linear relations between the $c$-type Weyl anomalies and the 't Hooft anomaly coefficients. We make a conjecture relating the supersymmetric R\'enyi entropy to an equivariant integral of the anomaly polynomial in even dimensions, and check it against known data in four and six dimensions.
arxiv:1702.03518
We use the 6C** sample to investigate the co-moving space density of powerful, steep-spectrum radio sources. This sample, consisting of 68 objects, has virtually complete K-band photometry and spectroscopic redshifts for 32 per cent of the sources. In order to find its complete redshift distribution, we develop a method of redshift estimation based on the K-z diagram of the 3CRR, 6CE, 6C* and 7CRS radio galaxies. Based on this method, we derive redshift probability density functions for all the optically identified sources in the 6C** sample. Using a combination of spectroscopic and estimated redshifts, we select the most radio-luminous sources in the sample. Their redshift distribution is then compared with the predictions of the radio luminosity function of Jarvis et al. We find that, within the uncertainties associated with the estimation method, the data are consistent with a constant co-moving space density of steep-spectrum radio sources at $z > 2.5$, and rule out a steep decline.
arxiv:astro-ph/0612268
Random walks on expander graphs have been thoroughly studied, with the important motivation that, under some natural conditions, these walks mix quickly and provide an efficient method of sampling the vertices of a graph. Alon, Benjamini, Lubetzky and Sodin studied non-backtracking random walks on regular graphs, and showed that their mixing rate may be up to twice as fast as that of the simple random walk. As an application, they showed that the maximal number of visits to a vertex, made by a non-backtracking random walk of length $n$ on a high-girth $n$-vertex regular expander, is typically $(1+o(1))\frac{\log n}{\log\log n}$, as in the case of the balls-and-bins experiment. They further asked whether one can establish the precise distribution of the visits such a walk makes. In this work, we answer the above question by combining a generalized form of Brun's sieve with some extensions of the ideas of Alon et al. Let $N_t$ denote the number of vertices visited precisely $t$ times by a non-backtracking random walk of length $n$ on a regular $n$-vertex expander of fixed degree and girth $g$. We prove that if $g = \omega(1)$, then for any fixed $t$, $N_t/n$ is typically $\frac{1}{\mathrm{e}\,t!} + o(1)$. Furthermore, if $g = \omega(\log\log n)$, then $N_t/n$ is typically $\frac{1+o(1)}{\mathrm{e}\,t!}$ uniformly over all $t \leq (1-o(1))\frac{\log n}{\log\log n}$ and 0 for all $t \geq (1+o(1))\frac{\log n}{\log\log n}$. In particular, we obtain the above result on the typical maximal number of visits to a single vertex, with an improved threshold window. The essence of the proof lies in showing that variables counting the numbers of visits to a set of sufficiently distant vertices are asymptotically independent Poisson variables.
arxiv:0705.0867
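A non-backtracking random walk, as described above, is simple to simulate: at each step, pick a uniformly random neighbor other than the vertex just left. The sketch below counts visit multiplicities N_t on a toy 3-regular graph; the circulant construction is an illustrative stand-in for a high-girth expander, so the empirical fractions only loosely track the 1/(e·t!) prediction.

```python
import random
from collections import Counter

def circulant_3_regular(n):
    """3-regular graph on n vertices (n even): ring edges plus diameters.
    (Toy stand-in for an expander; its girth is small.)"""
    assert n % 2 == 0
    return {v: [(v - 1) % n, (v + 1) % n, (v + n // 2) % n] for v in range(n)}

def non_backtracking_walk(adj, length, rng):
    """Random walk that never immediately reverses its last step."""
    v = rng.randrange(len(adj))
    prev = None
    path = [v]
    for _ in range(length):
        choices = [u for u in adj[v] if u != prev]  # exclude the previous vertex
        prev, v = v, rng.choice(choices)
        path.append(v)
    return path

rng = random.Random(42)
n = 1000
adj = circulant_3_regular(n)
path = non_backtracking_walk(adj, n, rng)   # walk of length n on n vertices
visits = Counter(path)
# empirical fraction of vertices visited exactly t times, t = 0, 1, 2
n_t = [sum(1 for v in range(n) if visits.get(v, 0) == t) / n for t in range(3)]
print([round(f, 2) for f in n_t])
```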
We study combinatorial Laplacians on rectangular subgraphs of $\epsilon\mathbb{Z}^2$ that approximate Laplace-Beltrami operators of Riemannian metrics as $\epsilon \rightarrow 0$. These Laplacians arise as follows: we define the notion of a Riemannian metric structure on a graph. We then define combinatorial free field theories and describe how these can be regarded as finite-dimensional approximations of scalar field theory. We focus on the Gaussian field theory on rectangular subgraphs of $\mathbb{Z}^2$ and study its partition function by computing the asymptotic determinant of the discrete Laplacian.
arxiv:1501.02057
We compute the cones of effective divisors on blowups of $\mathbb{P}^1 \times \mathbb{P}^2$ and $\mathbb{P}^1 \times \mathbb{P}^3$ in up to 6 points. We also show that all these varieties are log Fano, giving a conceptual explanation for the fact that all the cones we compute are rational polyhedral.
arxiv:2109.03736
The 125 GeV resonance discovered at the LHC could be a heavy quarkonium, a spin-0 pseudoscalar meson $\zeta^0$. The decay rates of the $\zeta^0$ meson resonance are calculated and compared to the Standard Model Higgs boson decay rates. The branching ratios and signal strengths for $\zeta \rightarrow \gamma\gamma$, $\zeta \rightarrow Z\gamma$, $\zeta \rightarrow ZZ^*$ and $\zeta \rightarrow WW^*$ are approximately the same as the Higgs boson branching ratios and signal strengths. The decay rates for $\zeta \rightarrow \tau^+\tau^-$, $\zeta \rightarrow b\bar{b}$ and $\zeta \rightarrow c\bar{c}$ are suppressed compared to the Higgs boson decay rates. Accurate branching ratios and signal strengths obtained at the LHC can distinguish between the Standard Model Higgs boson and the heavy composite $\zeta$ meson resonance.
arxiv:1211.2746
Digital video pervades daily life. Mobile video, digital TV, and digital cinema are now ubiquitous, and as such, the field of digital video processing (DVP) has experienced tremendous growth. Digital video systems also permeate scientific and engineering disciplines including, but not limited to, astronomy, communications, surveillance, entertainment, video coding, computer vision, and vision research. As a consequence, educational tools for DVP must cater to a large and diverse base of students. Towards enhancing DVP education, we have created a carefully constructed gallery of educational tools designed to complement a comprehensive corpus of online lectures by providing examples of DVP on real-world content, along with a user-friendly interface that organizes numerous key DVP topics ranging from analog video, to human visual processing, to modern video codecs. This demonstration gallery is currently being used effectively in the graduate class "Digital Video" at The University of Texas at Austin. Students receive enhanced access to concepts through both learning theory from highly visual lectures and watching concrete examples from the gallery, which captures the beauty of the underlying principles of modern video processing. To better understand the educational value of these tools, we conducted a pair of questionnaire-based surveys to assess student background, expectations, and outcomes. The survey results support the teaching efficacy of this new didactic video toolset.
arxiv:2012.14625
Model order reduction (MOR) methods that are designed to preserve structural features of a given full-order model (FOM) often suffer from lower accuracy compared to their non-structure-preserving counterparts. In this paper, we present a framework for structure-preserving MOR, which allows one to compute structured reduced-order models (ROMs) with much higher accuracy. The framework is based on parameter optimization, i.e., the elements of the system matrices of the ROM are iteratively varied to minimize an objective functional that measures the difference between the FOM and the ROM. The structural constraints can be encoded in the parametrization of the ROM. The method depends only on frequency response data and can thus be applied to a wide range of dynamical systems. We illustrate the effectiveness of our method on a port-Hamiltonian and on a symmetric second-order system in a comparison with other structure-preserving MOR algorithms.
arxiv:2011.07567
Currently, the majority of research in grammatical error correction (GEC) is concentrated on universal languages, such as English and Chinese, while many low-resource languages lack accessible evaluation corpora. How to efficiently construct high-quality evaluation corpora for GEC in low-resource languages has become a significant challenge. To fill these gaps, in this paper we present a framework for constructing GEC corpora. Specifically, we focus on Indonesian as our research language and construct an evaluation corpus for Indonesian GEC using the proposed framework, addressing the limitations of existing evaluation corpora in Indonesian. Furthermore, we investigate the feasibility of utilizing existing large language models (LLMs), such as GPT-3.5-turbo and GPT-4, to streamline corpus annotation efforts in GEC tasks. The results demonstrate significant potential for enhancing the performance of LLMs in low-resource language settings. Our code and corpus can be obtained from https://github.com/gklmip/gec-construction-framework.
arxiv:2410.20838
We study the spectral and energetics properties of 47 long-duration gamma-ray bursts (GRBs) with known redshift, all of them detected by the Swift satellite. Due to the narrow energy range (15-150 keV) of the Swift-BAT detector, the spectral fitting is reliable only for models with 2 or 3 parameters. As high uncertainty and correlation among the errors are expected, a careful analysis of the errors is necessary. We fit both the power-law (PL, 2 parameters) and cutoff power-law (CPL, 3 parameters) models to the time-integrated spectra of the 47 bursts, and present the corresponding parameters, their uncertainties, and the correlations among the uncertainties. The CPL model is reliable for only 29 bursts, for which we estimate the $\nu F_\nu$ peak energy $E_{pk}$. For these GRBs, we calculate the energy fluence and the rest-frame isotropic-equivalent radiated energy, $E_{iso}$, as well as the propagated uncertainties and the correlations among them. We explore the distribution of our homogeneous sample of GRBs on the rest-frame diagram $E'_{pk}$ vs. $E_{iso}$. We confirm a significant correlation between these two quantities (the "Amati" relation) and verify that, within the uncertainty limits, no outliers are present. We also fit the spectra to a Band model with the high-energy power-law index frozen to $-2.3$, obtaining rather good agreement with the "Amati" relation of non-Swift GRBs.
arxiv:0704.0791
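For the CPL model mentioned above, $N(E) \propto E^{\alpha} e^{-E/E_0}$, the $\nu F_\nu$ spectrum $E^2 N(E)$ peaks at $E_{pk} = (2+\alpha)E_0$, which is the standard way the peak energy is extracted from the fitted parameters. A quick numerical check (the parameter values are illustrative, not fits from the paper):

```python
import numpy as np

def cpl_photon_spectrum(E, alpha, E0):
    """Cutoff power-law photon spectrum N(E) ∝ E^alpha * exp(-E/E0)."""
    return E ** alpha * np.exp(-E / E0)

alpha, E0 = -1.0, 300.0                     # illustrative values, keV
E = np.linspace(1.0, 3000.0, 300000)        # fine energy grid, keV
nuFnu = E ** 2 * cpl_photon_spectrum(E, alpha, E0)   # E^2 N(E)

E_pk_numeric = E[np.argmax(nuFnu)]          # peak found on the grid
E_pk_analytic = (2.0 + alpha) * E0          # E_pk = (2 + alpha) * E0
print(round(E_pk_numeric, 1), round(E_pk_analytic, 1))  # → 300.0 300.0
```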
Suppose that $\{X_t,\ t \ge 0\}$ is a non-stationary Markov process taking values in a Polish metric space $E$. We prove the law of large numbers and the central limit theorem for an additive functional of the form $\int_0^T \psi(X_s)\,ds$, provided that the dual transition probability semigroup, defined on measures, is strongly contractive in an appropriate Wasserstein metric. The function $\psi$ is assumed to be Lipschitz on $E$.
arxiv:1102.1842
We present a new large-scale corpus of question-answer driven semantic role labeling (QA-SRL) annotations, and the first high-quality QA-SRL parser. Our corpus, QA-SRL Bank 2.0, consists of over 250,000 question-answer pairs for over 64,000 sentences across 3 domains, and was gathered with a new crowdsourcing scheme that we show has high precision and good recall at modest cost. We also present neural models for two QA-SRL subtasks: detecting argument spans for a predicate and generating questions to label the semantic relationship. The best models achieve question accuracy of 82.6% and span-level accuracy of 77.6% (under human evaluation) on the full pipelined QA-SRL prediction task. They can also, as we show, be used to gather additional annotations at low cost.
arxiv:1805.05377
Metallicity is expected to influence not only the lives of massive stars but also the outcome of their deaths as supernovae (SNe) and gamma-ray bursts (GRBs). However, there are surprisingly few direct measurements of the local metallicities of different flavors of core-collapse SNe. Here we present the largest existing set of host-galaxy spectra with H II region emission lines at the sites of 35 stripped-envelope core-collapse SNe. We derive local oxygen abundances in a robust manner in order to constrain the SN Ib/c progenitor population. We obtain spectra at the SN sites, include SNe from targeted and untargeted surveys, and perform the abundance determinations using three different oxygen-abundance calibrations. The sites of SNe Ic (the demise of the most heavily stripped stars, having lost both the H and He layers) are systematically more metal-rich than those of SNe Ib (arising from stars that retained their He layer) in all calibrations. A Kolmogorov-Smirnov test yields the very low probability of 1% that the SN Ib and SN Ic environment abundances, which differ on average by ~0.2 dex (in the Pettini & Pagel scale), are drawn from the same parent population. Broad-lined SNe Ic (without GRBs) occur at metallicities between those of SNe Ib and SNe Ic. Lastly, we find that the host-galaxy central oxygen abundance is not a good indicator of the local SN metallicity; hence, large-scale SN surveys need to obtain local abundance measurements in order to quantify the impact of metallicity on stellar death.
arxiv:1007.0661
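The Kolmogorov-Smirnov comparison used above takes the maximum distance between the two samples' empirical CDFs. A self-contained sketch on mock abundance samples (the values and the ~0.2 dex offset are illustrative, not the paper's data):

```python
import numpy as np

def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of x and y (pure NumPy, no SciPy)."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    # empirical CDFs evaluated at every sample point of both samples
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return float(np.max(np.abs(cdf_x - cdf_y)))

# mock local oxygen abundances 12 + log(O/H); the ~0.2 dex offset between
# the "SN Ib" and "SN Ic" samples is illustrative only
rng = np.random.default_rng(7)
oh_ib = rng.normal(8.5, 0.15, size=40)
oh_ic = rng.normal(8.7, 0.15, size=40)
print(round(ks_two_sample(oh_ib, oh_ic), 2))
```

Converting the statistic into the p-value quoted in the abstract additionally requires the Kolmogorov distribution (e.g. `scipy.stats.ks_2samp` does both steps).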
In this work, besides improving prediction accuracy, we study whether personalization could bring robustness benefits against backdoor attacks. We conduct the first study of backdoor attacks in the personalized federated learning (PFL) framework, testing 4 widely used backdoor attacks against 6 PFL methods on the benchmark datasets FEMNIST and CIFAR-10, a total of 600 experiments. The study shows that PFL methods with partial model-sharing can significantly boost robustness against backdoor attacks. In contrast, PFL methods with full model-sharing do not show robustness. To analyze the reasons for the varying robustness performance, we provide comprehensive ablation studies on the different PFL methods. Based on our findings, we further propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks. We believe that our work can provide guidance for PFL applications in terms of robustness, and offer valuable insights for designing more robust FL methods in the future. We open-source our code to establish the first benchmark for black-box backdoor attacks in PFL: https://github.com/alibaba/federatedscope/tree/backdoor-bench.
arxiv:2302.01677
The characteristics of a diffusion-bonded sapphire cell for optical experiments with hot metal vapors were investigated. The sapphire cell consisted of sapphire-crystal plates and a borosilicate-glass tube, which were bonded to each other by diffusion bonding without any binders or glues. The glass tube was attached to a vacuum manifold using the standard method applied in glass processing, filled with a small amount of Rb metal by chasing with a torch, and then sealed. The cell was baked at high temperatures, and optical experiments were then performed using rubidium atoms at room temperature. The sapphire cell was found to be vacuum tight, at least up to 350$^{\circ}$C, and the sapphire walls remained clear at all temperatures. From the optical experiments, the generation of a background gas was indicated after baking at 200$^{\circ}$C. The background gas pressure was low enough to avoid pressure broadening of the absorption lines but high enough to cause velocity-changing collisions of Rb atoms. The generated gas pressure decreased at higher temperatures, probably due to chemical reactions.
arxiv:1710.08627
The natural language processing task of determining "who did what to whom" is called semantic role labeling. For English, recent methods based on Transformer models have allowed for major improvements in this task over the previous state of the art. However, for low-resource languages, like Portuguese, currently available semantic role labeling models are hindered by scarce training data. In this paper, we explore a model architecture with only a pre-trained Transformer-based model, a linear layer, softmax and Viterbi decoding. We substantially improve the state-of-the-art performance in Portuguese by over 15 F1. Additionally, we improve semantic role labeling results in Portuguese corpora by exploiting cross-lingual transfer learning using multilingual pre-trained models, and transfer learning from dependency parsing in Portuguese, evaluating the various proposed approaches empirically.
arxiv:2101.01213
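The decoding step of the architecture described above (a linear layer producing per-token label scores, followed by Viterbi) can be sketched directly. The toy scores and the BIO-style transition constraint below are illustrative assumptions, not the paper's trained weights.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Viterbi decoding over per-token label scores.

    emissions:   (T, L) score of each label at each token (e.g. the output
                 of the linear layer on top of the Transformer)
    transitions: (L, L) score of moving from label i to label j
    Returns the highest-scoring label sequence as a list of label indices.
    """
    T, L = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        # total[i, j]: best score ending in label i at t-1, then label j at t
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = np.argmax(total, axis=0)
        score = np.max(total, axis=0)
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):           # follow backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy example: 3 tokens, labels {0: O, 1: B-ARG, 2: I-ARG};
# the transition matrix forbids O -> I-ARG, as in BIO tagging
trans = np.zeros((3, 3))
trans[0, 2] = -1e9
em = np.array([[0.0, 2.0, 0.0],
               [0.0, 0.0, 2.0],
               [3.0, 0.0, 0.0]])
print(viterbi(em, trans))  # → [1, 2, 0], i.e. B-ARG I-ARG O
```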
Bent functions are maximally nonlinear Boolean functions with an even number of variables. They include a subclass of functions, the so-called hyper-bent functions, whose properties are stronger than those of bent functions, and a complete classification of hyper-bent functions remains elusive. In this paper, we solve an open problem of Mesnager concerning the hyper-bentness of functions with multiple trace terms via Dillon-like exponents with coefficients in the extension field $\mathbb{F}_{2^{2m}}$ of the field $\mathbb{F}_{2^m}$. By applying the M\"{o}bius transformation and theorems on hyperelliptic curves, the hyper-bentness of these functions is characterized over $\mathbb{F}_{2^{2m}}$ with $m$ an odd integer.
arxiv:2407.01946
Edge instabilities are believed to be one of the possible causes of shear banding in entangled polymeric fluids. Here, we investigate the effect of edge disturbance on the shear-induced dynamics of well-entangled DNA solutions. Using a custom high-aspect-ratio planar-Couette cell, we systematically measure the velocity profiles of sheared DNA samples at different distances from the edge of the shear cell. Under a weak oscillatory shear with the corresponding Weissenberg number (Wi) smaller than 1, where DNA solutions exhibit linear velocity profiles with strong wall slip, the penetration depth of the edge disturbance is on the order of the gap thickness of the shear cell, consistent with the behavior of Newtonian fluids. However, under a strong oscillatory shear with Wi > 1 that produces shear-banding flows, the penetration depth is an order of magnitude larger than the gap thickness and becomes spatially anisotropic. Moreover, we find that the shear-banding flows persist deep inside the sheared sample, where the effect of edge disturbance diminishes. Hence, our experiments demonstrate an abnormally long penetration depth of edge disturbance and illustrate the bulk nature of shear-banding flows of entangled polymeric fluids under time-dependent oscillatory shear.
arxiv:1808.08220
We discuss the possibility of probing leptonic mixing parameters at high-energy neutrino telescopes in a model-independent way, using astrophysical neutron and pion sources. In particular, we show how the octant of the 2-3 mixing angle might be determined independently of prior knowledge of the source, even when current uncertainties on the other mixing parameters are included. We also argue that non-trivial neutrino oscillation effects should be taken into account when using high-energy flavor ratios for astrophysical diagnostics.
arxiv:hep-ph/0511313
Chaos often represents a severe obstacle for the setup of many-body experiments, e.g., in fusion plasmas or turbulent flows. We propose a strategy to control chaotic diffusion in conservative systems. The core of our approach is a small, apt modification of the system which channels chaos by building barriers to diffusion. It leads to practical prescriptions for an experimental apparatus to operate in a regular regime (drastic enhancement of confinement). The experimental realization of this control on a travelling wave tube opens the possibility of practically achieving the control of a wide range of systems at a low additional cost of energy.
arxiv:nlin/0407048
A recent theoretical calculation shows that the Casimir force between two parallel plates can be repulsive for plates with nontrivial magnetic properties (O. Kenneth et al., Phys. Rev. Lett. 89, 033001 (2002)). According to the authors, the effect may be observed with known materials, such as ferrites and garnets, and it might be possible to engineer micro- or nanoelectromechanical systems (MEMS or NEMS) that could take advantage of a short-range repulsive force. Here we show that, on the contrary, the Casimir force between two parallel plates in vacuum at micron and submicron distances is always attractive.
arxiv:quant-ph/0305065
The paper develops a modified geometrical optics (GO) of a smoothly inhomogeneous isotropic medium, which takes into account two topological phenomena: the Berry phase and the optical Magnus effect. By using the analogy between the quasi-classical motion of a quantum particle with spin and the GO of an electromagnetic wave in smoothly inhomogeneous media, we introduce the standard gauge potential associated with the degeneracy in the wave momentum space. This potential corresponds to a Dirac-monopole-like field (Berry curvature), which causes the topological spin (polarization) transport of photons. The deviations of waves of right-hand and left-hand helicity occur in opposite directions, orthogonally to the principal direction of motion. This produces a spin current directed across the principal motion. The situation is similar to the anomalous Hall effect for electrons. In addition, a simple scheme of an experiment allowing one to observe the topological spin splitting of photons is suggested.
arxiv:physics/0402110
In the absence of a Higgs boson, the perturbative description of the Standard Model ceases to make sense above a TeV. Heavy spin-1 fields coupled to the W and Z bosons can extend the validity of the theory up to higher scales. We carefully identify regions of parameter space where a minimal addition -- a single spin-1 custodial SU(2) triplet resonance -- allows one to retain perturbative control in all channels. Elastic scattering of longitudinal W and Z bosons alone seems to permit a very large cutoff, beyond the naive dimensional analysis expectation. We find, however, that including scattering of the spin-1 resonances then leads to an earlier onset of strong coupling. Most importantly for LHC searches, we define a self-consistent setup with a well-defined range of validity, without recourse to unitarization schemes whose physical meaning is obscure. We discuss the LHC phenomenology and the discovery reach for these electroweak resonances, and mention the possibility of a nightmare scenario with no Higgs nor resonance within the LHC reach. Finally, we discuss the effects of parity breaking in the heavy resonance sector, which reduces the contributions to the S parameter.
arxiv:1108.1183
Due to its location and climate, Antarctica offers unique conditions for long-period observations across a broad wavelength regime in which important diagnostic lines for molecules and ions, essential for understanding the chemical properties of the interstellar medium, can be found. In addition to the natural benefits of the site, new technologies resulting from astrophotonics may allow miniaturised instruments that are easier to winterise, as well as advanced filters to further reduce the background in the infrared.
arxiv:0912.4372
we present numerical evidence of dynamic star formation in which the accreted stellar mass grows superlinearly with time, roughly as $ t ^ 2 $. we perform simulations of star formation in self - gravitating hydrodynamic and magneto - hydrodynamic turbulence that is continuously driven. by turning the self - gravity of the gas in the simulations on or off, we demonstrate that self - gravity is the dominant physical effect setting the mass accretion rate at early times before feedback effects take over, contrary to theories of turbulence - regulated star formation. we find that gravitational collapse steepens the density profile around stars, generating the power - law tail on what is otherwise a lognormal density probability distribution function. furthermore, we find turbulent velocity profiles to flatten inside collapsing regions, altering the size - linewidth relation. this local flattening reflects enhancements of turbulent velocity on small scales, as verified by changes to the velocity power spectra. our results indicate that gas self - gravity dynamically alters both density and velocity structures in clouds, giving rise to a time - varying star formation rate. we find that a substantial fraction of the gas that forms stars arrives via low density flows, as opposed to accreting through high density filaments.
arxiv:1406.4148
we present the first analysis of fisher markets with buyers that have budget - additive utility functions. budget - additive utilities are elementary concave functions with numerous applications in online adword markets and revenue optimization problems. they extend the standard case of linear utilities and have been studied in a variety of other market models. in contrast to the frequently studied ces utilities, they have a global satiation point which can imply multiple market equilibria with quite different characteristics. our main result is an efficient combinatorial algorithm to compute a market equilibrium with a pareto - optimal allocation of goods. it relies on a new descending - price approach and, as a special case, also implies a novel combinatorial algorithm for computing a market equilibrium in linear fisher markets. we complement these positive results with a number of hardness results for related computational questions. we prove that it is np - hard to compute a market equilibrium that maximizes social welfare, and it is ppad - hard to find any market equilibrium with utility functions with separate satiation points for each buyer and each good.
arxiv:1603.07210
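As a cross-check of the linear special case mentioned above, a linear Fisher market equilibrium allocation can be computed as the maximizer of the Eisenberg-Gale convex program (maximize the budget-weighted sum of log-utilities subject to unit supply of each good). This is a generic convex-programming sketch, not the paper's descending-price algorithm; the budgets and valuations below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def eisenberg_gale(budgets, valuations):
    """Linear Fisher market equilibrium allocation via the Eisenberg-Gale
    convex program: maximise sum_i B_i * log(u_i) with u_i linear in x."""
    n, m = valuations.shape
    def neg_obj(x):
        X = x.reshape(n, m)
        u = (valuations * X).sum(axis=1)           # buyer utilities
        return -np.sum(budgets * np.log(np.maximum(u, 1e-12)))
    # one supply constraint per good: total allocation of good j <= 1
    cons = [{"type": "ineq",
             "fun": lambda x, j=j: 1.0 - x.reshape(n, m)[:, j].sum()}
            for j in range(m)]
    x0 = np.full(n * m, 1.0 / n)                   # feasible start
    res = minimize(neg_obj, x0, bounds=[(0, 1)] * (n * m),
                   constraints=cons, method="SLSQP")
    return res.x.reshape(n, m)
```

The equilibrium prices are the Lagrange multipliers of the supply constraints; in a symmetric 2x2 instance each buyer ends up with the good they value more.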
in this video, a ray - tracing data visualization technique was used to obtain realistic and detailed flow motions during droplet collision. the differences in collision outcome between newtonian and non - newtonian liquids were compared. various types of droplet collision were presented, including bouncing, coalescence, and stretching separation. because of the reduction in equivalent viscosity caused by shear stress, the gas film between shear - thinning droplets is thinner than that between newtonian droplets. since a thinner gas film promotes coalescence, shear - thinning liquid has a smaller bouncing regime in the diagram of weber number and impact parameter. during the ligament / thread breakup process of stretching separation, two kinds of instabilities are identified : helical and buckling instabilities. helical instability is analogous to a viscous rotating liquid jet, while the buckling instability is analogous to electrically charged liquid jets of polymer solutions.
arxiv:1210.3888
in this paper, we extend the results of klainerman and rodnianski in \ cite { kr : trapped }, which were obtained for a finite region, by showing similar results from past null infinity. this allows us to recover and extend the results from past null infinity in the work of christodoulou \ cite { chr : book }.
arxiv:1207.5271
in this paper, we prove a classification theorem for the stable compact minimal submanifolds of the riemannian product of an $ m _ 1 $ - dimensional ( $ m _ 1 \ geq3 $ ) hypersurface $ m _ 1 $ in the euclidean space and any riemannian manifold $ m _ 2 $, when the sectional curvature $ k _ { m _ 1 } $ of $ m _ 1 $ satisfies $ \ frac { 1 } { \ sqrt { m _ 1 - 1 } } \ leq k _ { m _ 1 } \ leq 1. $ this gives a generalization to the results of f. torralbo and f. urbano [ 9 ], where they obtained a classification theorem for the stable minimal submanifolds of the riemannian product of a sphere and any riemannian manifold. in particular, when the ambient space is an $ m $ - dimensional ( $ m \ geq3 $ ) complete hypersurface $ m $ in the euclidean space, if the sectional curvature $ k _ { m } $ of $ m $ satisfies $ \ frac { 1 } { \ sqrt { m + 1 } } \ leq k _ { m } \ leq 1 $, then we conclude that there exist no stable compact minimal submanifolds in $ m $.
arxiv:1209.6400
financial regulators such as central banks collect vast amounts of data, but access to the resulting fine - grained banking microdata is severely restricted by banking secrecy laws. recent developments have resulted in mechanisms that generate faithful synthetic data, but current evaluation frameworks lack a focus on the specific challenges of banking institutions and microdata. we develop a framework that considers the utility and privacy requirements of regulators, and apply this to financial usage indices, term deposit yield curves, and credit card transition matrices. using the central bank of paraguay ' s data, we provide the first implementation of synthetic banking microdata using a central bank ' s collected information, with the resulting synthetic datasets for all three domain applications being publicly available and featuring information not yet released in statistical disclosure. we find that applications less susceptible to post - processing information loss, which are based on frequency tables, are particularly suited for this approach, and that marginal - based inference mechanisms outperform generative adversarial network models for these applications. our results demonstrate that synthetic data generation is a promising privacy - enhancing technology for financial regulators seeking to complement their statistical disclosure, while highlighting the crucial role of evaluating such endeavors in terms of utility and privacy requirements.
arxiv:2410.22519
the kappa package is designed for calculations of optically thin spectra for the non - maxwellian \ k { appa } - distributions. this paper presents an extension of the database to allow calculations of the spectra for extreme values of \ k { appa } < 2, which are important for accurate diagnostics of the \ k { appa } - distributions in the outer solar atmosphere. in addition, two improvements were made to the ionization equilibrium calculations within the database. first, the ionization equilibrium calculations now include the effect of electron impact multi - ionization ( eimi ). although relatively unimportant for the maxwellian distribution, the eimi becomes important for some elements such as fe and low values of \ k { appa }, where it modifies the ionization equilibrium significantly. second, the kappa database now includes the suppression of dielectronic recombination at high electron densities, evaluated via the suppression factors. we find that at the same temperature, the suppression of dielectronic recombination is almost independent of \ k { appa }. the ionization equilibrium calculations for the \ k { appa } - distributions are now provided for a range of electron densities.
arxiv:2310.04591
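For orientation, a one-dimensional kappa distribution (one common convention among several; the exponent -kappa in 1-D is an assumption, and the package itself uses its own definitions) can be written down and checked against its Maxwellian limit. The normalisation uses log-gamma so it stays finite at large kappa.

```python
import numpy as np
from scipy.special import gammaln

def kappa_1d(v, theta, kappa):
    """1-D kappa velocity distribution (exponent -kappa convention),
    normalised to unit integral; tends to a Maxwellian of thermal
    speed theta as kappa -> infinity."""
    # log of Gamma(kappa) / (Gamma(kappa - 1/2) * sqrt(pi * kappa) * theta)
    lognorm = gammaln(kappa) - gammaln(kappa - 0.5) \
              - 0.5 * np.log(np.pi * kappa) - np.log(theta)
    return np.exp(lognorm) * (1.0 + v**2 / (kappa * theta**2)) ** (-kappa)
```

Small kappa gives the power-law tails that make the extreme \k{appa} < 2 regime diagnostically interesting; large kappa reproduces the Maxwellian core.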
we discuss the non - linear corrections entering in the calculation of the primordial black hole abundance from the non - linear radiation transfer function and the determination of the true physical horizon crossing. we show that the current standard techniques to calculate the abundance of primordial black holes suffer from uncertainties and argue that the primordial black hole abundance may be much smaller than what routinely considered. this would imply, among other consequences, that the interpretation of the recent pulsar timing arrays data from scalar - induced gravitational waves may not be ruled out because of an overproduction of primordial black holes.
arxiv:2307.13633
personalization of machine learning ( ml ) predictions for individual users / domains / enterprises is critical for practical recommendation systems. standard personalization approaches involve learning a user / domain specific embedding that is fed into a fixed global model, which can be limiting. on the other hand, personalizing / fine - tuning the model itself for each user / domain - - a. k. a meta - learning - - has high storage / infrastructure cost. moreover, rigorous theoretical studies of scalable personalization approaches have been very limited. to address the above issues, we propose a novel meta - learning style approach that models network weights as a sum of low - rank and sparse components. this captures common information from multiple individuals / users together in the low - rank part, while the sparse part captures user - specific idiosyncrasies. we then study the framework in the linear setting, where the problem reduces to that of estimating the sum of a rank - $ r $ and a $ k $ - column sparse matrix using a small number of linear measurements. we propose a computationally efficient alternating minimization method with iterative hard thresholding - - amht - lrs - - to learn the low - rank and sparse parts. theoretically, for the realizable gaussian data setting, we show that amht - lrs solves the problem efficiently with nearly optimal sample complexity. finally, a significant challenge in personalization is ensuring privacy of each user ' s sensitive data. we alleviate this problem by proposing a differentially private variant of our method that also is equipped with strong generalization guarantees.
arxiv:2210.03505
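The linear-measurement setting above is involved, but the alternating structure can be sketched in a simplified form: assume the matrix is observed in full (rather than through a small number of linear measurements) and alternate a truncated SVD (low-rank step) with column-wise hard thresholding (sparse step). Dimensions, ranks, and the noise-free setup below are illustrative assumptions, not the paper's AMHT-LRS algorithm.

```python
import numpy as np

def amht_sketch(M, r, k, iters=15):
    """Alternate a rank-r SVD truncation with k-column hard thresholding
    to split M into a low-rank part L and a column-sparse part S."""
    S = np.zeros_like(M)
    for _ in range(iters):
        # low-rank step: best rank-r approximation of M - S
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :r] * sig[:r]) @ Vt[:r]
        # sparse step: keep the k largest-norm columns of the residual
        R = M - L
        keep = np.argsort(np.linalg.norm(R, axis=0))[-k:]
        S = np.zeros_like(M)
        S[:, keep] = R[:, keep]
    return L, S, set(keep.tolist())
```

With well-separated components, the true decomposition is a fixed point of this iteration: once the sparse support is identified, the low-rank step sees an exactly rank-r matrix off the support.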
this paper studies estimation of a smooth function $ f ( t, s ) $ when we are given functional responses of the form $ f ( t, \ cdot ) $ + error, but scientific interest centers on the collection of functions $ f ( \ cdot, s ) $ for different $ s $. the motivation comes from studies of human brain development, in which $ t $ denotes age whereas $ s $ refers to brain locations. analogously to varying - coefficient models, in which the mean response is linear in $ t $, the " varying - smoother " models that we consider exhibit nonlinear dependence on $ t $ that varies smoothly with $ s $. we discuss three approaches to estimating varying - smoother models : ( a ) methods that employ a tensor product penalty ; ( b ) an approach based on smoothed functional principal component scores ; and ( c ) two - step methods consisting of an initial smooth with respect to $ t $ at each $ s $, followed by a postprocessing step. for the first approach, we derive an exact expression for a penalty proposed by wood, and an adaptive penalty that allows smoothness to vary more flexibly with $ s $. we also develop " pointwise degrees of freedom, " a new tool for studying the complexity of estimates of $ f ( \ cdot, s ) $ at each $ s $. the three approaches to varying - smoother models are compared in simulations and with a diffusion tensor imaging data set.
arxiv:1412.0778
every scene text recognition ( str ) task consists of text localization \ & text recognition as the prominent sub - tasks. however, in real - world applications with fixed camera positions such as equipment monitor reading, image - based data entry, and printed document data extraction, the underlying data tends to be regular scene text. hence, in these tasks, the use of generic, bulky models comes with significant disadvantages compared to customized, efficient models in terms of model deployability, data privacy \ & model reliability. therefore, this paper introduces the underlying concepts, theory, implementation, and experiment results to develop models that are highly specialized for the task itself, achieving not only sota performance but also minimal model weights, shorter inference time, and high model reliability. we introduce a novel deep learning architecture ( geotrnet ), trained to identify digits in a regular scene image using only the geometrical features present, mimicking human perception of text recognition. the code is publicly available at https : / / github. com / acra - fl / geotrnet
arxiv:2302.03873
the theory of modular deformations is generalized for the category of complex analytic polyhedra, which includes germs of complex spaces as well as any compact complex analytic space. the objective of the theory is a construction of fine moduli spaces. we also discuss new examples of modular families obtained by means of a computer algebra program.
arxiv:math/0506412
autism spectrum disorder ( asd ) is a severe neuropsychiatric disorder that affects intellectual development, social behavior, and facial features, and the number of cases is still significantly increasing. due to the variety of symptoms asd displays, the diagnosis process remains challenging, with numerous misdiagnoses as well as lengthy and expensive diagnoses. fortunately, if asd is diagnosed and treated early, then the patient will have a much higher chance of developing normally. for an asd diagnosis, machine learning algorithms can analyze both social behavior and facial features accurately and efficiently, providing an asd diagnosis in a drastically shorter amount of time than through current clinical diagnosis processes. therefore, we propose to develop a hybrid architecture fully utilizing both social behavior and facial feature data to improve the accuracy of diagnosing asd. we first developed a linear support vector machine for the social behavior based module, which analyzes autism diagnostic observation schedule ( ados ) social behavior data. for the facial feature based module, a densenet model was utilized to analyze facial feature image data. finally, we implemented our hybrid model by incorporating different features of the support vector machine and the densenet into one model. our results show that the highest accuracy of 87 % for asd diagnosis has been achieved by our proposed hybrid model. the pros and cons of each module will be discussed in this paper.
arxiv:2110.03775
a two - stage solution approach for solving the problem of multi - objective optimal power flow ( mopf ) is proposed for hybrid ac / dc grids with vsc - hvdc. first, a mopf model for hybrid ac / dc grids is built to coordinate the economy, voltage deviation and environmental benefits. then, a two - stage solution approach, incorporating decision analysis into the optimization process, is presented to solve the model. the first stage of the approach consists of a multi - objective particle swarm optimization algorithm with a hybrid coding scheme employed to find multiple pareto - optimal solutions. in the second stage, the obtained solutions are clustered into different groups using fuzzy c - means ( fcm ) clustering, and the ' best ' compromise solutions are then obtained by calculating the priority memberships of the solutions belonging to the same groups via the grey relation projection ( grp ) method. the novelty of this approach lies primarily in incorporating the fcm - grp based decision analysis technique into mopf studies, thereby assisting decision makers to automatically identify the ' best ' operation points. the effectiveness of the proposed approach is verified based on the test results of the ieee 14 - and 300 - bus systems.
arxiv:1808.05708
a convection - diffusion problem with a large shift in space is considered. numerical analysis of high order finite element methods on layer - adapted duran type meshes, as well as on coarser duran type meshes in places where weak layers appear, is provided. the theoretical results are confirmed by numerical experiments.
arxiv:2304.10937
thermoelectric power generation has been recognized as one of the most important technologies, and high - performance thermoelectric materials have long been pursued. however, because of the large number of candidate materials, this quest is extremely challenging, and it has become clear that a firm theoretical concept from the viewpoint of band - structure engineering is needed. in this study, we theoretically demonstrate that pnictogen - dichalcogenide layered compounds, which originally attracted attention as a family of superconductors and have recently been investigated as thermoelectric materials, can exhibit very high thermoelectric performance with elemental substitution. in particular, we clarify a promising guiding principle for materials design and find that laoasse $ _ 2 $, a material that has yet to be synthesized, has a power factor that is six times as large as that of the known compound laobis $ _ 2 $ and can exhibit a very large $ zt $ under some plausible assumptions. this large enhancement of the thermoelectric performance originates from the quasi - one - dimensional gapped dirac - like band dispersion, which is realized by the square - lattice network. our study offers one ideal limit of the band structure for thermoelectric materials. because our target materials have high controllability of constituent elements and feasibility of carrier doping, experimental studies along this line are strongly awaited.
arxiv:1706.09271
machine learning methods are commonly used to solve inverse problems, wherein an unknown signal must be estimated from few measurements generated via a known acquisition procedure. in particular, neural networks perform well empirically but have limited theoretical guarantees. in this work, we study an underdetermined linear inverse problem that admits several possible solution mappings. a standard remedy ( e. g., in compressed sensing ) establishing uniqueness of the solution mapping is to assume knowledge of latent low - dimensional structure in the source signal. we ask the following question : do deep neural networks adapt to this low - dimensional structure when trained by gradient descent with weight decay regularization? we prove that mildly overparameterized deep linear networks trained in this manner converge to an approximate solution that accurately solves the inverse problem while implicitly encoding latent subspace structure. to our knowledge, this is the first result to rigorously show that deep linear networks trained with weight decay automatically adapt to latent subspace structure in the data under practical stepsize and weight initialization schemes. our work highlights that regularization and overparameterization improve generalization, while overparameterization also accelerates convergence during training.
arxiv:2502.15522
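The implicit-bias claim above can be probed numerically: train a two-layer linear network with plain gradient descent and weight decay to map measurements to signals that lie in a low-dimensional subspace, then inspect the spectrum of the end-to-end map. This is an illustrative sketch, not the paper's exact setting; the width, step size, dimensions, and initialization scale are assumptions.

```python
import numpy as np

def train_deep_linear(Y, X, width=10, lr=0.05, wd=1e-3, steps=4000, seed=0):
    """Gradient descent with weight decay on a two-layer linear network
    mapping measurements Y (m x N) to signals X (d x N)."""
    rng = np.random.default_rng(seed)
    m, N = Y.shape
    d = X.shape[0]
    W1 = 0.05 * rng.normal(size=(width, m))    # small initialization
    W2 = 0.05 * rng.normal(size=(d, width))
    for _ in range(steps):
        H = W1 @ Y
        E = W2 @ H - X                         # residual, d x N
        gW2 = 2 / N * E @ H.T + 2 * wd * W2    # grad of loss + weight decay
        gW1 = 2 / N * W2.T @ E @ Y.T + 2 * wd * W1
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W2 @ W1                             # end-to-end linear map, d x m
```

If the signals span an r-dimensional subspace, the learned end-to-end map should have roughly r dominant singular values: the latent subspace structure is encoded implicitly.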
we examine penrose limits of the duality proposed by guarino, jafferis and varela between a type iia massive background of the type of a warped, squashed $ ads _ 4 \ times s ^ 6 $, and a 2 + 1 dimensional ir fixed point of $ { \ cal n } = 8 $ super yang - mills deformed by chern - simons terms to $ { \ cal n } = 2 $ supersymmetry. one type of penrose limit for closed strings corresponds to a large charge closed spin chain, and another, for open strings on giant graviton d - branes, corresponds to an open spin chain on sub - determinant operators. for the first limit, we find that like in the abjm case, there are functions $ f _ a ( \ lambda ) $ that interpolate between the perturbative and nonperturbative ( string ) regions for the magnon energy. for the second, we are unable to match the gravity result with the expected field theory result, making this model more interesting than ones with more supersymmetry.
arxiv:1706.02711
all cosmological models of structure formation predict the existence of a widespread population of dual supermassive black holes in - spiralling inside their common host galaxy, eventually merging and giving rise to intense gravitational waves. these systems can be identified as dual agns at kiloparsec separations, but only very few have been confirmed at z > 0. 5. the appearance of multiple agns at small angular separations can also be due to gravitational lensing of single agns, which are themselves very important systems for many astrophysical topics. here we present a novel technique, dubbed the gaia multipeak ( gmp ) method, to obtain large and reliable samples of dual / lensed agn candidates with sub - arcsec separations by looking for agns showing multiple peaks in the light profiles observed by the gaia satellite. all the gmp - selected sources with high resolution images ( 26 from the hst archive and 5 from dedicated adaptive - optics assisted imaging at the large binocular telescope ) show multiple components with sub - arcsec separation, pointing toward a very high reliability of the method. by sampling separations down to ~ 2 kpc at z > 1, this method allows us to probe the physical processes that drive the inspiralling of a pair of smbhs inside a single galaxy.
arxiv:2203.11234
many real - life decisions involve both perceptual processes and weighing the consequences of different actions. however, the neural mechanisms underlying perceptual decisions have typically been examined separately from those underlying economic decisions. here, we trained rats to make choices informed by both perceptual and value cues on a trial - by - trial basis. as in typical perceptual tasks, subjects were rewarded for correctly categorizing a tone relative to a learned threshold. to add an economic component, a light indicated, on each trial, whether correct responses to one side gave higher rewards than correct responses to the other side. as such, on trials with some perceptual uncertainty, it could be worthwhile to choose the unlikely option, if it had higher expected value. we found that, despite the subjects ' sensitivity to the frequency of the cue and the reward sizes, their behavior was not optimal : subjects tended to shift their choices in a stimulus - independent way following light flashes. moreover, subjects tended to under - shift, which could be interpreted as being over - confident in their perceptual beliefs or as being risk - averse.
arxiv:2112.12278
the role of viruses as persistent symbionts in host genomes. as a consequence, the evolution of genetic content order is seen as the result of competent genome editors in contrast to former narratives in which error replication events ( mutations ) dominated. = = = philosophy of medicine = = = beyond medical ethics and bioethics, the philosophy of medicine is a branch of philosophy that includes the epistemology and ontology / metaphysics of medicine. within the epistemology of medicine, evidence - based medicine ( ebm ) ( or evidence - based practice ( ebp ) ) has attracted attention, most notably the roles of randomisation, blinding and placebo controls. related to these areas of investigation, ontologies of specific interest to the philosophy of medicine include cartesian dualism, the monogenetic conception of disease and the conceptualization of ' placebos ' and ' placebo effects '. there is also a growing interest in the metaphysics of medicine, particularly the idea of causation. philosophers of medicine might not only be interested in how medical knowledge is generated, but also in the nature of such phenomena. causation is of interest because the purpose of much medical research is to establish causal relationships, e. g. what causes disease, or what causes people to get better. = = = philosophy of psychiatry = = = philosophy of psychiatry explores philosophical questions relating to psychiatry and mental illness. the philosopher of science and medicine dominic murphy identifies three areas of exploration in the philosophy of psychiatry. the first concerns the examination of psychiatry as a science, using the tools of the philosophy of science more broadly. the second entails the examination of the concepts employed in discussion of mental illness, including the experience of mental illness, and the normative questions it raises. the third area concerns the links and discontinuities between the philosophy of mind and psychopathology. 
= = = philosophy of psychology = = = philosophy of psychology refers to issues at the theoretical foundations of modern psychology. some of these issues are epistemological concerns about the methodology of psychological investigation. for example, is the best method for studying psychology to focus only on the response of behavior to external stimuli or should psychologists focus on mental perception and thought processes? if the latter, an important question is how the internal experiences of others can be measured. self - reports of feelings and beliefs may not be reliable because, even in cases in which there is no apparent incentive for subjects to intentionally deceive in their answers, self - deception or selective memory may affect their responses.
https://en.wikipedia.org/wiki/Philosophy_of_science
a hadamard - hitchcock decomposition of a multidimensional array is a decomposition that expresses the latter as a hadamard product of several tensor rank decompositions. such decompositions can encode probability distributions that arise from statistical graphical models associated to complete bipartite graphs with one layer of observed random variables and one layer of hidden ones, usually called restricted boltzmann machines. we establish generic identifiability of hadamard - hitchcock decompositions by exploiting the reshaped kruskal criterion for tensor rank decompositions. a flexible algorithm leveraging existing decomposition algorithms for tensor rank decomposition is introduced for computing a hadamard - hitchcock decomposition. numerical experiments illustrate its computational performance and numerical accuracy.
arxiv:2308.06597
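The key algebraic fact behind such decompositions is that a Hadamard product of CP (tensor rank, i.e. Hitchcock) decompositions is itself a CP decomposition whose factors are column-wise Kronecker (Khatri-Rao) products of the original factors, so the ranks multiply. A minimal numpy sketch:

```python
import numpy as np

def cp_tensor(A, B, C):
    """Third-order tensor with CP factors A (I x r), B (J x r), C (K x r)."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def khatri_rao_cols(A, D):
    """Column-wise Kronecker product: maps rank-r and rank-s factor
    matrices (both n rows) to an n x (r*s) factor matrix."""
    n, r = A.shape
    s = D.shape[1]
    return np.einsum('np,nq->npq', A, D).reshape(n, r * s)
```

Entrywise, (sum_p a_p b_p c_p) * (sum_q d_q e_q f_q) = sum_{p,q} (a_p d_q)(b_p e_q)(c_p f_q), which is exactly the CP decomposition built from the combined factors.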
general phenomenological theory of hydrodynamic waves in regions with smooth loss of convexity of isentropes is developed based on the fact that for most media these regions in the p - v plane are anomalously small. accordingly the waves are usually weak and can be described in the manner analogous to that for weak shock waves of compression. the corresponding generalized burgers equation is derived and analyzed. the exact solution of the equation for steady shock waves of rarefaction is obtained and discussed.
arxiv:physics/0101103
the effect that the current quark mass $ m _ 0 $ may result in nonzero - ness of chiral condensates is systematically reexamined and analyzed in a two - flavor nambu - jona - lasinio model simulating quantum chromodynamics ( qcd ) at temperature $ t $ and finite quark chemical potential $ \ mu $ without and with electrical neutrality ( en ) condition and at any $ t $ and $ \ mu $ without en condition. by means of a quantitative investigation of the order parameter $ m $, it is shown that a nonzero $ m _ 0 $ is bound to lead to nonzero quark - antiquark condensates throughout chiral phase transitions, no matter whether the order parameter $ m $ varies discontinuously or continuously. in fact, a complete disappearance of the quark - antiquark condensates is proven to demand the non - physical and unrealistic conditions $ \ mu \, \ geq $ or $ \ gg \, \ sqrt { \ lambda ^ 2 + m _ 0 ^ 2 } $ if $ t = 0 $ and finite, or $ t \ to \ infty $ if $ \ mu < \ sqrt { \ lambda ^ 2 + m _ 0 ^ 2 } $, where $ \ lambda $ is the 3d momentum cut of the loop integrals. theoretically these results show that when $ m _ 0 $ is included, we never have a complete restoration of dynamical ( spontaneous ) chiral symmetry breaking, including after a first order chiral phase transition at low $ t $ and high $ \ mu $. in physical reality, it is the nonzero - ness of the quark - antiquark condensates that leads to the appearance of a critical end point in the first order phase transition line and the crossover behavior at high $ t $ and / or high $ \ mu $ cases, rather than a possible tricritical point and a second order phase transition line. they also provide a basic reason why one must consider the interplay between the chiral and diquark condensates in the research on color superconductors at zero $ t $ and high $ \ mu $. the research shows how a source term of the lagrangian ( here, the current quark mass term ) can greatly affect the dynamical behavior of a physical system.
arxiv:1506.07197
in this paper, the honeycomb hubbard model in optical lattices is investigated using o ( 3 ) non - linear sigma model. a possible quantum non - magnetic insulator in a narrow parameter region is found near the metal - insulator transition. we study the corresponding dynamics of magnetic properties, and find that the narrow region could be widened by hole doping.
arxiv:0911.3002
we present enhancements to the tcp - friendly rate control mechanism ( tfrc ) designed to better handle the intermittent connectivity occurring in mobility situations. our aim is to quickly adapt to new network conditions and better support real - time applications for which the user - perceived quality depends on the immediate transmission rate. we propose to suspend the transmission before disconnections occur, in a way inspired by freeze - tcp, and extend the solution by probing the network after reconnecting to enable full use of the newly available capacity. we first introduce a numerical model of tfrc ' s performance after a network handover and use it to evaluate the potential performance gains for realistic network parameters. we then describe a set of additions to tfrc to achieve these gains. implementations within the datagram congestion control protocol ( dccp ) for ns - 2 and linux have been adapted to support these enhancements. comparisons of experimental results for the original and modified dccp are presented for a number of example mobility scenarios. we thus show how the proposed modifications enable faster recovery after disconnected periods as well as significantly improved adjustments to the newly available network conditions and an improvement in the quality of experience ( qoe ) for video - streaming applications.
arxiv:1310.5446
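TFRC sets its allowed sending rate from the TCP throughput equation of RFC 5348, which is why a handover that changes the round-trip time or the loss event rate directly changes the permitted rate, and why probing the new path matters. A sketch of the equation (using the common t_RTO = 4*RTT simplification; parameter values are illustrative):

```python
import math

def tfrc_rate(s=1460, rtt=0.1, p=0.01, b=1, t_rto=None):
    """TFRC throughput equation (RFC 5348): allowed sending rate in
    bytes/s for segment size s (bytes), round-trip time rtt (s), loss
    event rate p, and b packets acknowledged per ACK."""
    if t_rto is None:
        t_rto = 4 * rtt  # common simplification for the retransmit timeout
    f = rtt * math.sqrt(2 * b * p / 3) \
        + t_rto * (3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2)
    return s / f
```

The rate falls steeply with both p and RTT, so after a handover to a path with different characteristics, the old estimates can badly under- or over-shoot the newly available capacity.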
in this article we consider the continuity of the eigenvalues of the connection laplacian of $ g $ - connections on vector bundles over riemannian manifolds. to show it, we introduce the notion of the asymptotically $ g $ - equivariant measured gromov - hausdorff topology on the space of metric measure spaces with isometric $ g $ - actions, and apply it to the total spaces of principal $ g $ - bundles equipped with $ g $ - connections over riemannian manifolds.
arxiv:1808.02292
imaging atmospheric cherenkov telescope ( iact ) searches for dark matter often perform observations in " wobble mode ", i. e. collecting data from the signal region and from a corresponding background control region at the same time, enabling efficient simultaneous determination and subtraction of background. this observation strategy is possibly compromised in scenarios where dark matter annihilates to long - lived mediators that can traverse astrophysical distances before decaying to produce the gamma rays observed by the iacts. in this paper, we show that this challenge comes with several interesting features and opportunities : in addition to signal contamination in the background control region, the gamma - ray spectrum changes with the observing direction angle and typically exhibits a hard excess at high energies. this affects signal reconstruction via subtraction of the background control region measurements in non - trivial ways. such features represent a significant departure from the canonical picture, and offer novel handles to identify a dark matter signal and to extract underlying dark matter parameters.
arxiv:1812.08694
in this article, we continue the structural study of factor maps between symbolic dynamical systems and the relative thermodynamic formalism. here, one is studying a factor map from a shift of finite type $ x $ ( equipped with a potential function ) to a sofic shift $ z $, equipped with a shift - invariant measure $ \ nu $. we study relative equilibrium states, that is shift - invariant measures on $ x $ that push forward under the factor map to $ \ nu $ and maximize the relative pressure : the relative entropy plus the integral of $ \ phi $. in the non - relative case ( where $ z $ is the one point shift and the factor map is trivial ), these measures have a very broad range of application : in hyperbolic dynamics, information theory, geometry, teichm \ " uller theory and elsewhere. relative equilibrium states have also been shown to arise naturally in some contexts in geometric measure theory as a description of measures achieving the hausdorff dimension in ambient spaces. previous articles have identified relative versions of well - known notions of degree appearing in one - dimensional symbolic settings, and established bounds in terms of these on the number of ergodic relative equilibrium states. in this paper, we establish a new connection to multiplicative ergodic theory by relating these factor triples to a cocycle of ruelle perron - frobenius operators, and showing that the principal lyapunov exponent of this cocycle is the relative pressure ; and the dimension of the leading oseledets space is equal to the number of measures of relative maximal entropy, counted with a previously - identified concept of multiplicity.
arxiv:2005.13090
the velocity distribution function is a statistical description that connects particle kinetics and macroscopic parameters in many - body systems. laser - induced fluorescence ( lif ) spectroscopy is utilized to measure the local velocity distribution function in spatially inhomogeneous plasmas. however, the analytic form of such a function for the system of interest is not always clear under the intricate factors in non - equilibrium states. here, we propose a novel approach to select the valid form of the velocity distribution function based on bayesian statistics. we formulate the bayesian inference of ion velocity distribution function and apply it to lif spectra locally observed at several positions in a linear magnetized plasma. we demonstrate evaluating the spatial inhomogeneity by verifying each analytic form of the local velocity distribution function. our approach is widely applicable to experimentally establish the velocity distribution function in plasmas and fluids, including gases and liquids.
arxiv:2110.10998
script knowledge consists of detailed information on everyday activities. such information is often taken for granted in text and needs to be inferred by readers. therefore, script knowledge is a central component to language comprehension. previous work on representing scripts is mostly based on extensive manual work or limited to scenarios that can be found with sufficient redundancy in large corpora. we introduce the task of scenario detection, in which we identify references to scripts. in this task, we address a wide range of different scripts ( 200 scenarios ) and we attempt to identify all references to them in a collection of narrative texts. we present a first benchmark data set and a baseline model that tackles scenario detection using techniques from topic segmentation and text classification.
arxiv:1906.04102
a disjunctive sensing and actuation problem is considered in which the actuators and sensors are prevented from operating together over any given time step. this problem is motivated by practical applications in the area of spacecraft control. assuming a linear system model with stochastic process disturbance and measurement noise, a procedure to construct a periodic sequence that ensures bounded states and estimation error covariance is described along with supporting analysis results. the procedure is also extended to ensure eventual satisfaction of probabilistic chance constraints on the state. the proposed scheme demonstrates good performance in simulations for spacecraft relative motion control.
arxiv:1809.03608
a hypergraph is said to be properly 2-colorable if there exists a 2-coloring of its vertices such that no hyperedge is monochromatic. on the other hand, a hypergraph is called non-2-colorable if there exists at least one monochromatic hyperedge in each of the possible 2-colorings of its vertex set. let m(n) denote the minimum number of hyperedges in a non-2-colorable n-uniform hypergraph. establishing lower and upper bounds on m(n) has been a well-studied research direction over several decades. in this paper, we present new constructions for non-2-colorable n-uniform hypergraphs. these constructions improve the upper bounds for m(8), m(13), m(14), m(16) and m(17). we also improve the lower bound for m(5).
arxiv:1602.00218
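As a minimal illustration of the property being bounded above (not the paper's constructions), a brute-force checker can decide 2-colorability of small hypergraphs. The Fano plane below is the classical 7-edge witness that m(3) = 7: every 2-coloring of its 7 vertices leaves some line monochromatic, while any 6 of its lines admit a proper 2-coloring.

```python
from itertools import product

def is_2_colorable(n, edges):
    """Brute-force check of "property B": does some 2-coloring of the
    n vertices leave no hyperedge monochromatic?"""
    for coloring in product((0, 1), repeat=n):
        # an edge is non-monochromatic iff it sees both colors
        if all(len({coloring[v] for v in e}) == 2 for e in edges):
            return True
    return False

# the Fano plane: the unique minimum 7-edge non-2-colorable 3-uniform hypergraph
fano = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
        (1, 4, 6), (2, 3, 6), (2, 4, 5)]

print(is_2_colorable(7, fano))       # the full Fano plane is non-2-colorable
print(is_2_colorable(7, fano[:-1]))  # any 6 of its lines are 2-colorable
```

The exhaustive search over 2^n colorings is only feasible for tiny instances; the point is the definition, not the algorithm.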
a gas of interacting ultracold fermions can be tuned into a strongly interacting regime using a feshbach resonance. here we theoretically study quasiparticle transport in a system of two reservoirs of interacting ultracold fermions on the bcs side of the bcs-bec crossover, coupled weakly via a tunnel junction. using the generalized bcs theory we calculate the time evolution of the system, which is assumed to be initially prepared in a non-equilibrium state characterized by a particle-number imbalance or a temperature imbalance. a number of characteristic features, like sharp peaks in quasiparticle currents or transitions between the normal and superconducting states, are found. we discuss signatures of the seebeck and the peltier effect and the resulting temperature difference of the two reservoirs as a function of the interaction parameter $(k_f a)^{-1}$. the peltier effect may lead to an additional cooling mechanism for ultracold fermionic atoms.
arxiv:1607.04213
in addition to x - rays, extreme ultraviolet ( euv ) rays radiated from solar flares can cause serious problems, such as communication failures and satellite drag. therefore, methods for forecasting euv dynamic spectra during flares are urgently required. recently, however, owing to the lack of instruments, euv dynamic spectra have rarely been observed. hence, we develop a new method that converts the soft x - ray light curve observed during large flare events into an euv dynamic spectrum by using the solar dynamics observatory / atmospheric imaging assembly images, a numerical simulation, and atomic database. the simulation provides the solution for a coronal loop that is heated by a strong flare, and the atomic database calculates its dynamic spectrum, including x - ray and euv irradiances. the coefficients needed for the conversion can be calculated by comparing the observed soft x - ray light curve with that of the simulation. we apply our new method to three flares that occurred in the active region 12673 on september 06, 2017. the results show similarities to those of the flare irradiance spectral model, and reconstruct some of the euv peaks observed by the euv variability experiment onboard the solar dynamics observatory.
arxiv:2005.06099
e/f)$ (resp. ${\rm gl}_n(f)$) that base changes to $\pi$ when $h$ is ${\rm gl}_n(f)$ (resp. ${\rm u}(n, e/f)$). as an application we classify the members of a generic $l$-packet of ${\rm sl}_n(e)$ that admit invariant vectors for ${\rm sl}_n(f)$. finally we prove a $p$-adic analogue of our result for square-integrable representations in terms of formal degrees by employing the formal degree conjecture of hiraga-ichino-ikeda \cite{hii08}.
arxiv:1805.04047
imaginary parts, one gets −i. there are generally two ways of solving the problem. one may define a function that is not continuous along some curve, called a branch cut. such a function is called the principal value of the function. the other way is to consider that one has a multi-valued function, which is analytic everywhere except for isolated singularities, but whose value may "jump" if one follows a closed loop around a singularity. this jump is called the monodromy. = = in the foundations of mathematics = = the definition of a function that is given in this article requires the concept of set, since the domain and the codomain of a function must be a set. this is not a problem in usual mathematics, as it is generally not difficult to consider only functions whose domain and codomain are sets, which are well defined, even if the domain is not explicitly defined. however, it is sometimes useful to consider more general functions. for example, the singleton set may be considered as a function x ↦ {x}. its domain would include all sets, and therefore would not be a set. in usual mathematics, one avoids this kind of problem by specifying a domain, which means that one has many singleton functions. however, when establishing foundations of mathematics, one may have to use functions whose domain, codomain or both are not specified, and some authors, often logicians, give precise definitions for these weakly specified functions. these generalized functions may be critical in the development of a formalization of the foundations of mathematics. for example, von neumann–bernays–gödel set theory is an extension of set theory in which the collection of all sets is a class. this theory includes the replacement axiom, which may be stated as: if x is a set and f is a function, then f[x] is a set.
in alternative formulations of the foundations of mathematics using type theory rather than set theory, functions are taken as primitive notions rather than defined from other kinds of object. they are the inhabitants of function types, and may be constructed using expressions in the lambda calculus. = = in computer science = = in computer programming, a function is, in general, a subroutine which implements the abstract concept of function. that is, it is a program unit that produces an output for each input. functional programming is the programming paradigm consisting of building programs by using only subroutines that
https://en.wikipedia.org/wiki/Function_(mathematics)
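The branch-cut behavior described above can be seen concretely, assuming Python's `cmath` module, which (per its documentation) implements the principal square root with a branch cut along the negative real axis: approaching −1 from the upper half-plane gives ≈ i, from the lower half-plane ≈ −i. The singleton map x ↦ {x} from the text is also shown; in a programming language the "domain of all sets" issue disappears because the domain is simply whatever values the runtime accepts.

```python
import cmath

# principal square root: discontinuous across the negative real axis
above = cmath.sqrt(-1 + 1e-12j)  # just above the cut, close to  i
below = cmath.sqrt(-1 - 1e-12j)  # just below the cut, close to -i
print(above, below)

def singleton(x):
    """The singleton 'function' x ↦ {x} from the text, restricted
    here to hashable arguments."""
    return {x}

print(singleton(3))
```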
barron, r. (2003). industrial noise control and acoustics. new york: marcel dekker inc. hemond, c. (1983). in ingerman s. (ed.), engineering acoustics and noise control. new jersey: prentice-hall. highway traffic noise barriers at a glance. retrieved february 1, 2010, from http://www.fhwa.dot.gov/environment/keepdown.htm kinsler, l., frey, a., coppens, a., & sanders, j. (eds.). (2000). fundamentals of acoustics (4th ed.). new york: john wiley and sons. kleppe, j. (1989). engineering applications of acoustics. sparks, nevada: artech house. moser, m. (2009). engineering acoustics (s. zimmerman, r. ellis trans.). (2nd ed.). berlin: springer-verlag.
https://en.wikipedia.org/wiki/Acoustical_engineering
blind dehazed image quality assessment (bdqa), which aims to accurately predict the visual quality of dehazed images without any reference information, is essential for the evaluation, comparison, and optimization of image dehazing algorithms. existing learning-based bdqa methods have achieved remarkable success, but the small scale of dqa datasets limits their performance. to address this issue, in this paper we propose to adapt contrastive language-image pre-training (clip), pre-trained on large-scale image-text pairs, to the bdqa task. specifically, inspired by the fact that the human visual system understands images based on hierarchical features, we take global and local information of the dehazed image as the input of clip. to accurately map the input hierarchical information of dehazed images into the quality score, we tune both the vision branch and language branch of clip with prompt learning. experimental results on two authentic dqa datasets demonstrate that our proposed approach, named clip-dqa, achieves more accurate quality predictions than existing bdqa methods. the code is available at https://github.com/junfu1995/clip-dqa.
arxiv:2502.01707
large language models of code ( code - llms ) have recently brought tremendous advances to code completion, a fundamental feature of programming assistance and code intelligence. however, most existing works ignore the possible presence of bugs in the code context for generation, which are inevitable in software development. therefore, we introduce and study the buggy - code completion problem, inspired by the realistic scenario of real - time code suggestion where the code context contains potential bugs - - anti - patterns that can become bugs in the completed program. to systematically study the task, we introduce two datasets : one with synthetic bugs derived from semantics - altering operator changes ( buggy - humaneval ) and one with realistic bugs derived from user submissions to coding problems ( buggy - fixeval ). we find that the presence of potential bugs significantly degrades the generation performance of the high - performing code - llms. for instance, the passing rates of codegen - 2b - mono on test cases of buggy - humaneval drop more than 50 % given a single potential bug in the context. finally, we investigate several post - hoc methods for mitigating the adverse effect of potential bugs and find that there remains a significant gap in post - mitigation performance.
arxiv:2306.03438
we develop a unified framework for constructing matrix approximations to the convolution operator of volterra type defined by functions that are approximated using classical orthogonal polynomials on $[-1, 1]$. the numerically stable algorithms we propose exploit recurrence relations and symmetric properties satisfied by the entries of these convolution matrices. laguerre-based convolution matrices that approximate the volterra convolution operator defined by functions on $[0, \infty)$ are also discussed for the sake of completeness.
arxiv:1804.08762
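The paper's convolution matrices act on coefficient vectors in orthogonal-polynomial bases; a far simpler analogue (an illustration only, not the paper's construction) is the monomial basis, where the convolution operator is the familiar banded Toeplitz matrix whose matrix-vector product performs discrete convolution:

```python
def conv_matrix(a, m):
    """Toeplitz matrix C of shape (len(a)+m-1, m) such that C @ b equals
    the full discrete convolution a * b for any length-m vector b."""
    n = len(a)
    return [[a[i - j] if 0 <= i - j < n else 0 for j in range(m)]
            for i in range(n + m - 1)]

def matvec(c, b):
    """Plain matrix-vector product over nested lists."""
    return [sum(cij * bj for cij, bj in zip(row, b)) for row in c]

a, b = [1, 2, 3], [4, 5]
print(matvec(conv_matrix(a, len(b)), b))  # equals the convolution a * b
```

In the orthogonal-polynomial setting the matrix is no longer Toeplitz, which is where the recurrence relations mentioned in the abstract come in.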
in 2009, kyaw proved that every $n$-vertex connected $k_{1,4}$-free graph $g$ with $\sigma_4(g) \geq n - 1$ contains a spanning tree with at most $3$ leaves. in this paper, we prove an analogue of kyaw's result for connected $k_{1,5}$-free graphs. we show that every $n$-vertex connected $k_{1,5}$-free graph $g$ with $\sigma_5(g) \geq n - 1$ contains a spanning tree with at most $4$ leaves. moreover, the degree sum condition `$\sigma_5(g) \geq n - 1$' is best possible.
arxiv:1804.09332
we find ourselves in an extended era of entropy production. unlike most other observations, the arrow of time is usually regarded as a constraint on initial conditions. i argue, however, that it primarily constrains the vacuum structure of the theory. i exhibit simple scalar field potentials in which low - entropy initial conditions are not necessary, or not sufficient, for an arrow of time to arise. i argue that the string theory landscape gives rise to an arrow of time independently of the initial entropy, assuming a plausible condition on the lifetime of its metastable vacua. the dynamical resolution of the arrow of time problem arises from the same structural properties of the string landscape that allow it to solve the cosmological constant problem without producing an empty universe, particularly its high dimensionality and the large difference in vacuum energy between neighboring vacua.
arxiv:1112.3341
we present novel approaches involving generative adversarial networks and diffusion models in order to synthesize high quality, live and spoof fingerprint images while preserving features such as uniqueness and diversity. we generate live fingerprints from noise with a variety of methods, and we use image translation techniques to translate live fingerprint images to spoof. to generate different types of spoof images based on limited training data we incorporate style transfer techniques through a cycle autoencoder equipped with a wasserstein metric along with gradient penalty (cyclewgan-gp) in order to avoid mode collapse and instability. we find that when the spoof training data includes distinct spoof characteristics, it leads to improved live-to-spoof translation. we assess the diversity and realism of the generated live fingerprint images mainly through the fréchet inception distance (fid) and the false acceptance rate (far). our best diffusion model achieved a fid of 15.78. the comparable wgan-gp model achieved a slightly higher fid while performing better in the uniqueness assessment due to a slightly lower far when matched against the training data, indicating better creativity. moreover, we give example images showing that a ddpm model clearly can generate realistic fingerprint images.
arxiv:2403.13916
for a process u(t, s) acting on a one-parameter family of normed spaces, we present a notion of time-dependent attractor based only on minimality with respect to the pullback attraction property. such an attractor is shown to be invariant whenever the process is t-closed for some t > 0, a much weaker property than continuity (defined in the text). as a byproduct, we generalize the recent theory of attractors in time-dependent spaces developed in [10]. finally, we exploit the new framework to study the long-term behavior of wave equations with time-dependent speed of propagation.
arxiv:1209.5885
we describe the first dna - based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. the newly developed architecture overcomes drawbacks of existing read - only methods that require decoding the whole file in order to read one data fragment. our system is based on new constrained coding techniques and accompanying dna editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. as a proof of concept, we encoded parts of the wikipedia pages of six universities in the usa, and selected and edited parts of the text written in dna corresponding to three of these schools. the results suggest that dna is a versatile media suitable for both ultrahigh density archival and rewritable storage applications.
arxiv:1505.02199
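The "constrained coding" mentioned above enforces biochemical restrictions on the stored sequences. Two constraints commonly imposed in DNA data storage are a balanced GC content and a bound on homopolymer run length; a minimal checker is sketched below (the thresholds are illustrative assumptions, not the paper's actual code parameters):

```python
from itertools import groupby

def satisfies_constraints(seq, gc_lo=0.4, gc_hi=0.6, max_run=3):
    """Check two constraints typical of DNA data storage codes:
    GC fraction within [gc_lo, gc_hi], and no homopolymer run
    (consecutive identical bases) longer than max_run."""
    gc = sum(seq.count(base) for base in "GC") / len(seq)
    longest_run = max(len(list(g)) for _, g in groupby(seq))
    return gc_lo <= gc <= gc_hi and longest_run <= max_run

print(satisfies_constraints("ACGTACGTACGT"))  # balanced GC, runs of length 1
print(satisfies_constraints("AAAAGGGGCCCC"))  # run of 4 violates the bound
```

An encoder would map arbitrary bit strings only onto sequences passing such a check, trading some storage density for synthesis and sequencing reliability.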
we prove a general result on irregularities of distribution for borel sets intersected with bounded measurable sets or affine half - spaces.
arxiv:2209.11435
in this paper, we present an ensemble approach for the nl4opt competition subtask 1 (the ner task). for this task, we first fine-tune the pretrained language models on the competition dataset. then we adopt differential learning rates and adversarial training strategies to enhance model generalization and robustness. additionally, we use a model ensemble method for the final prediction, which achieves a micro-averaged f1 score of 93.3% and attained the second prize in the ner task.
arxiv:2301.02459
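The micro-averaged F1 reported above pools true positives, false positives, and false negatives across all entity types before computing precision and recall. A minimal sketch over sets of predicted and gold spans (the span tuples and labels are made-up illustration data, not from the competition):

```python
def micro_f1(gold, pred):
    """Micro-averaged F1 over sets of (doc_id, start, end, type) spans:
    counts are pooled across all entity types, then a single
    precision/recall/F1 is computed."""
    tp = len(gold & pred)
    fp = len(pred - gold)
    fn = len(gold - pred)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

gold = {(0, 0, 2, "VAR"), (0, 5, 7, "PARAM"), (1, 1, 3, "CONST")}
pred = {(0, 0, 2, "VAR"), (0, 5, 7, "CONST")}  # second span mislabeled
print(micro_f1(gold, pred))
```

Unlike macro averaging, frequent entity types dominate the score, which is why micro F1 is the usual choice for NER leaderboards.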
in this paper, energy-efficient power control for small cells underlaying a macro cellular network is investigated. we formulate the power control problem in self-organizing small cell networks as a non-cooperative game, and propose a distributed energy-efficient power control scheme, which allows the small base stations (sbss) to take individual decisions for attaining the nash equilibrium (ne) with minimum information exchange. specifically, in the non-cooperative power control game, a non-convex optimization problem is formulated for each sbs to maximize its energy efficiency (ee). by exploiting the properties of parameter-free fractional programming and the concept of the perspective function, the non-convex optimization problem for each sbs is transformed into an equivalent constrained convex optimization problem. then, the constrained convex optimization problem is converted into an unconstrained convex optimization problem by exploiting the mixed penalty function method. the inequality constraints are eliminated by introducing logarithmic barrier functions, and the equality constraint is eliminated by introducing a quadratic penalty function. we also theoretically show the existence and uniqueness of the ne in the non-cooperative power control game. simulation results show remarkable improvements in terms of ee by using the proposed scheme.
arxiv:1703.06824
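Energy efficiency here is a ratio, rate over total consumed power, so its maximization is a fractional program. The paper uses a parameter-free reformulation; the classical alternative, shown below purely as a generic sketch, is Dinkelbach's method, which reduces max f/g to a sequence of subtractive problems. The toy numbers (channel gain 10, circuit power 0.5, scalar power variable on a grid) are illustrative assumptions, not the paper's system model.

```python
import math

def dinkelbach(f, g, p_grid, tol=1e-9):
    """Dinkelbach's method for max_p f(p)/g(p) with g > 0:
    repeatedly maximize f(p) - lam * g(p), then update
    lam = f(p*)/g(p*) until the ratio stops improving.
    The inner problem is solved by grid search for simplicity."""
    lam = 0.0
    while True:
        p = max(p_grid, key=lambda q: f(q) - lam * g(q))
        new_lam = f(p) / g(p)
        if abs(new_lam - lam) < tol:
            return p, new_lam
        lam = new_lam

# toy EE problem: spectral rate log2(1 + 10 p) over total power 0.5 + p
f = lambda p: math.log2(1 + 10 * p)
g = lambda p: 0.5 + p
grid = [i / 1000 for i in range(1, 2001)]  # transmit powers in (0, 2]
p_star, ee = dinkelbach(f, g, grid)
print(p_star, ee)
```

The sequence of ratios is nondecreasing and, on a finite grid, terminates at the grid-optimal efficiency.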
we report the discovery of variability in the linear polarization from the galactic center black hole source, sagittarius a*. new polarimetry obtained with the berkeley-illinois-maryland association array at a wavelength of 1.3 mm shows a position angle that differs by 28 +/- 5 degrees from observations 6 months prior and then remains stable for 15 months. this difference may be due to a change in the source emission region on a scale of 10 schwarzschild radii, or due to a change of 3 x 10^5 rad m^-2 in the rotation measure. we consider a change in the source physics unlikely, however, since we see no corresponding change in the total intensity or polarized intensity fraction. on the other hand, turbulence in the accretion region at a radius ~10 to 1000 r_s could readily account for the magnitude and time scale of the position angle change.
arxiv:astro-ph/0411551
if $g$ is a finite group, an irreducible complex-valued character $\chi$ is called rational if $\chi(g)$ is rational for all $g \in g$. also, a conjugacy class $x^g$ is called rational if, for every irreducible complex-valued character $\chi$, the value $\chi(x^g)$ is rational. we prove that for $q$ a power of a prime, the group $\mathrm{psl}_2(q)$ has the same number of rational characters and rational conjugacy classes. furthermore, we verify that this equality holds for all finite simple groups whose character tables appear in the $\textit{atlas of finite groups}$, except for the tits group.
arxiv:2503.20452
the string landscape is a fantasy. we actually have a plausible landscape of minimally supersymmetric $ads_4$ solutions of supergravity modified by an exponential superpotential. none of these solutions is accessible to world-sheet perturbation theory. if they exist as models of quantum gravity, they are defined by conformal field theories, and each is an independent quantum system, which makes no transitions to any of the others. this landscape has nothing to do with cdl tunneling or eternal inflation (ei). a proper understanding of cdl transitions in qft on a fixed background ds space shows that the ei picture of this system is not justified within the approximation of low energy effective field theory. the cutoff-independent physics, defined by the euclidean functional integral over the 4-sphere, admits only a finite number of instantons. plausible extensions of these ideas to a quantum theory of gravity obeying the holographic principle explain all of the actual facts about cdl transitions in ds space, and lead to a picture radically different from eternal inflation. theories of ei have to rely too heavily on the anthropic principle to be consistent with experiment. given the vast array of effective low energy field theories that could be produced by the conventional picture of the string landscape, one is forced to conclude that the most numerous anthropically allowed theories will disagree with experiment violently.
arxiv:1208.5715
we classify the hamiltonians $h = p_x^2 + p_y^2 + v(x, y)$ of all classical superintegrable systems in two-dimensional complex euclidean space with second-order constants of the motion. we similarly classify the superintegrable hamiltonians $h = j_1^2 + j_2^2 + j_3^2 + v(x, y, z)$ on the complex 2-sphere where $x^2 + y^2 + z^2 = 1$. this is achieved in all generality using properties of the complex euclidean group and the complex orthogonal group.
arxiv:math-ph/0102006
the low temperature reaction between cn and benzene (c$_6$h$_6$) is of significant interest in the astrochemical community due to the recent detection of benzonitrile, the first aromatic molecule identified in the interstellar medium (ism) using radio astronomy. benzonitrile is suggested to be a low temperature proxy for benzene, one of the simplest aromatic molecules, which may be a precursor to polycyclic aromatic hydrocarbons (pahs). in order to assess the robustness of benzonitrile as a proxy for benzene, low temperature kinetics measurements are required to confirm whether the reaction remains rapid at the low gas temperatures found in cold dense clouds. here, we study the c$_6$h$_6$ + cn reaction in the temperature range 15--295 k, using the well-established cresu technique (a french acronym standing for reaction kinetics in uniform supersonic flow) combined with pulsed laser photolysis-laser-induced fluorescence (plp-lif). we obtain rate coefficients, $k(t)$, in the range (3.6--5.4) $\times$ 10$^{-10}$ cm$^3$ s$^{-1}$ with no obvious temperature dependence between 15--295 k, confirming that the cn + c$_6$h$_6$ reaction remains rapid at temperatures relevant to the cold ism.
arxiv:2003.02101
for a variety with a whitney stratification by affine spaces, we study categories of motivic sheaves which are constant mixed tate along the strata. we are particularly interested in those cases where the category of mixed tate motives over a point is equivalent to the category of finite-dimensional bigraded vector spaces. examples of such situations include rational motives on varieties over finite fields and modules over the spectrum representing the semisimplification of de rham cohomology for varieties over the complex numbers. we show that our categories of stratified mixed tate motives have a natural weight structure. under an additional assumption of pointwise purity for objects of the heart, tilting gives an equivalence between stratified mixed tate sheaves and the bounded homotopy category of the heart of the weight structure. specializing to the case of flag varieties, we find natural geometric interpretations of graded category $\mathcal{o}$ and koszul duality.
arxiv:1404.6333
the one - band hubbard model on the pyrochlore lattice contains an extended quantum spin - liquid phase formed from the manifold of singlet dimer coverings. we demonstrate that the massive and deconfined spinon excitations of this system have fermionic statistics. holonic quasiparticles introduced by doping are also fermions and we explain the origin of this counterintuitive result.
arxiv:1503.07271
a versatile and reconfigurable asic is presented, which implements two different concepts of low level trigger ( l0 ) for cherenkov telescopes : the majority trigger ( sum of discriminated inputs ) and the sum trigger concept ( analogue clipped sum of inputs ). up to 7 input signals can be processed following one or both of the previous trigger concepts. each differential pair output of the discriminator is also available as a lvds output. differential circuitry using local feedback allows the asic to achieve high speed ( 500 mhz ) while maintaining good linearity in a 1 vpp range. experimental results are presented. a number of prototype camera designs of the cherenkov telescope array ( cta ) project will use this asic.
arxiv:1611.09713
we review our present knowledge of the polyakov loop, the correlator of polyakov loops and the singlet correlator in thermal qcd from the point of view of perturbation theory and lattice qcd.
arxiv:1812.03732
compact u(1) lattice gauge theory in (2+1) dimensions is studied on anisotropic lattices using standard path integral monte carlo techniques. we extract the static quark potential and the string tension from 1.0 <= dtau <= 0.333 simulations at 1.0 <= beta <= 3.0. estimating the actual value of the renormalization constant (c = 44), we observe evidence of scaling in the string tension for 1.4142 <= beta <= 2.5, with the asymptotic behaviour in the large-beta limit given by k sqrt(beta) = e^(-2.494 beta + 2.29). extrapolations are made to the extreme anisotropic or "hamiltonian" limit, and comparisons are made with previous estimates obtained by various other methods in the hamiltonian formulation.
arxiv:hep-lat/0208047
in this paper, we present a coherent state-vector method which can explain the results of a nested linear mach-zehnder interferometric experiment. such interferometers are used widely in quantum information and quantum optics experiments and also in designing quantum circuits. we have specifically considered the case of an experiment by danan \emph{et al.} (phys. rev. lett. \textbf{111}, 240402 (2013)), where the outcome of the experiment defied intuitive expectations. however, we show by our method that the results of this experiment are indeed expected within the standard formalism of quantum mechanics, using any classical state of a single-mode radiation field as the input into the nested interferometric set-up of the aforesaid experiment and thereby looking into the power spectrum of the output beam.
arxiv:1701.03074
in this paper, the entropy of isolated horizons in non-minimally coupled scalar field theory and in the scalar-tensor theory of gravitation is calculated by counting the degrees of freedom of quantum states in loop quantum gravity. instead of boundary chern-simons theory, the boundary bf theory is used. the advantages of the new approaches are that no spherical symmetry is needed, and that the final result matches the wald entropy formula exactly.
arxiv:1507.08807
takeda-yano determined the limit of lévy processes conditioned to avoid zero via various random clocks in terms of doob's $h$-transform, where the limit processes may differ according to the choice of random clock. the purpose of this paper is to investigate sample path behaviors of the limit processes in long time and in short time.
arxiv:2211.12863
survival prediction based on whole slide images (wsis) is a challenging task for patient-level multiple instance learning (mil). due to the vast amount of data for a patient (one or multiple gigapixel wsis) and the irregularly shaped property of wsis, it is difficult to fully explore spatial, contextual, and hierarchical interaction in the patient-level bag. many studies adopt a random sampling pre-processing strategy and wsi-level aggregation models, which inevitably lose critical prognostic information in the patient-level bag. in this work, we propose a hierarchical vision transformer framework named hvtsurv, which can encode the local-level relative spatial information, strengthen wsi-level context-aware communication, and establish patient-level hierarchical interaction. firstly, we design a feature pre-processing strategy, including feature rearrangement and random window masking. then, we devise three layers to progressively obtain the patient-level representation, including a local-level interaction layer adopting manhattan distance, a wsi-level interaction layer employing spatial shuffle, and a patient-level interaction layer using attention pooling. moreover, the hierarchical design helps the model become more computationally efficient. finally, we validate hvtsurv with 3,104 patients and 3,752 wsis across 6 cancer types from the cancer genome atlas (tcga). the average c-index is 2.50--11.30% higher than that of all prior weakly supervised methods over the 6 tcga datasets. an ablation study and attention visualization further verify the superiority of the proposed hvtsurv. the implementation is available at: https://github.com/szc19990412/hvtsurv.
arxiv:2306.17373
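The C-index reported above is Harrell's concordance index, the standard metric for survival prediction under right-censoring. A minimal sketch (the tiny patient data are made-up illustration values, not from TCGA):

```python
def c_index(times, events, risks):
    """Harrell's concordance index. A pair (i, j) is comparable when
    times[i] < times[j] and patient i had an observed event
    (events[i] == 1). Among comparable pairs, count the fraction where
    the shorter survival time got the higher predicted risk; risk ties
    count 0.5."""
    num = den = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                den += 1
                if risks[i] > risks[j]:
                    num += 1
                elif risks[i] == risks[j]:
                    num += 0.5
    return num / den

# three uncensored patients, predicted risks perfectly ordered with outcome
print(c_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1]))
```

A value of 1.0 is perfect ranking, 0.5 is random; censored patients only enter pairs in which the other patient's event happened first.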
in this paper we introduce adaptive time step control for simulation of the evolution of ice sheets. the discretization error in the approximations is estimated using "milne's device", by comparing the results from two different methods in a predictor-corrector pair. using a predictor-corrector pair, the expensive part of the procedure, the solution of the velocity and pressure equations, is performed only once per time step, and an estimate of the local error is easily obtained. the stability of the numerical solution is maintained and the accuracy is controlled by keeping the local error below a given threshold using pi-control. depending on the threshold, the time step $\delta t$ is bound by stability requirements or accuracy requirements. our method takes a shorter $\delta t$ than an implicit method, but with less work in each time step, and the solver is simpler. the method is analyzed theoretically with respect to stability and applied to the simulation of a 2d ice slab and a 3d circular ice sheet. the stability bounds in the experiments are explained by and agree well with the theoretical results.
arxiv:1605.06970
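Milne's device estimates the local error from the gap between the predictor and corrector values, since both approximate the same true solution with known error constants. The sketch below uses a forward-Euler predictor with a trapezoidal corrector on a scalar ODE, with a plain halve/double step rule standing in for the paper's PI controller; the method pair and the test problem are illustrative assumptions, not the paper's ice-sheet discretization.

```python
import math

def heun_adaptive(f, t0, y0, t_end, h=0.1, tol=1e-6):
    """Forward-Euler predictor / trapezoidal corrector for y' = f(t, y).
    In the spirit of Milne's device, |y_corr - y_pred| estimates the
    local error of the lower-order predictor; the step is redone at
    half size when the estimate exceeds tol, and doubled when the
    estimate is comfortably small. The (more accurate) corrector
    value is the one that advances the solution."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)                         # do not overshoot
        k1 = f(t, y)
        y_pred = y + h * k1                            # explicit Euler predictor
        y_corr = y + h / 2 * (k1 + f(t + h, y_pred))   # trapezoidal corrector
        err = abs(y_corr - y_pred)                     # local error estimate
        if err > tol:
            h /= 2                                     # reject: retry smaller step
            continue
        t, y = t + h, y_corr                           # accept the corrector value
        if err < tol / 4:
            h *= 2                                     # estimate cheap: grow step
    return y

# y' = -y, y(0) = 1, integrated to t = 1; exact answer is e^{-1}
approx = heun_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0, tol=1e-6)
print(approx, math.exp(-1))
```

As in the paper, the expensive right-hand-side evaluation is shared between predictor and corrector, so the error estimate comes almost for free.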
the combination of different exotic properties in materials paves the way for the emergence of their new potential applications. an example is the recently found coexistence of the mutually antagonistic ferromagnetism and superconductivity in hydrogenated boron - doped diamond, which promises to be an attractive system with which to explore unconventional physics. here, we show the emergence of yu - shiba - rusinov ( ysr ) bands with a spatial extent of tens of nanometers in ferromagnetic superconducting diamond using scanning tunneling spectroscopy. we demonstrate theoretically how a two - dimensional ( 2d ) spin lattice at the surface of a three - dimensional ( 3d ) superconductor gives rise to the ysr bands, and how their density - of - states profile correlates with the spin lattice structure. the established strategy to realize new forms of the coexistence of ferromagnetism and superconductivity opens a way to engineer the unusual electronic states and also to design better performing superconducting devices.
arxiv:2006.02853
with rapidly developing high-speed wireless communications, the 60 ghz millimeter-wave frequency range and radio-over-fiber (rof) systems have been investigated as a promising solution for delivering mm-wave signals. neural networks have been studied to improve mm-wave rof system performance at the receiver side by suppressing linear and nonlinear impairments. however, previous neural network studies in mm-wave rof systems focus on off-line implementation with high-end gpus, which is not practical for applications requiring low power consumption, low cost, and limited computation platforms. to solve this issue, we investigate neural network hardware accelerator implementations using field-programmable gate arrays (fpgas), taking advantage of their low power consumption, parallel computation capability, and reconfigurability. convolutional neural network (cnn) and binary convolutional neural network (bcnn) hardware accelerators are demonstrated. in addition, to satisfy the low-latency requirement of mm-wave rof systems and to enable the use of low-cost compact fpga devices, a novel inner parallel optimization method is proposed. compared with execution on an embedded processor (arm cortex-a9), the cnn/bcnn fpga-based hardware accelerators reduce latency by over 92%. compared with non-optimized fpga implementations, the proposed optimization method reduces processing latency by over 44% for both cnn and bcnn. compared with a gpu implementation, the latency of the cnn implementation with the proposed optimization method is reduced by 85.49%, while the power consumption is reduced by 86.91%. although the latency of the bcnn implementation with the proposed optimization method is larger than that of the gpu implementation, the power consumption is reduced by 86.14%. fpga-based neural network hardware accelerators thus provide a promising solution for mm-wave rof systems.
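a small sketch of why bcnns map so well onto fpga logic, written in numpy rather than hardware description language: once weights and activations are constrained to ±1, the multiply-accumulate at the heart of a convolution reduces to xnor plus popcount on packed bits, which costs only lookup tables on an fpga. the function below is illustrative and not taken from the paper.

```python
import numpy as np

def binary_dot(a, w):
    """dot product of two ±1 vectors via xnor + popcount on packed bits.
    this is the mac replacement that makes binary networks fpga-friendly."""
    n = a.size
    # encode +1 -> bit 1, -1 -> bit 0, then pack 8 entries per byte
    a_bits = np.packbits(a > 0)
    w_bits = np.packbits(w > 0)
    # xnor marks matching positions; popcount totals them
    matches = sum(bin(~(x ^ y) & 0xff).count("1")
                  for x, y in zip(a_bits, w_bits))
    # packbits zero-pads the last byte; pad positions xnor to 1, so remove them
    matches -= (-n) % 8
    # dot = (#matches) - (#mismatches) = 2 * matches - n
    return 2 * matches - n

rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=100)
w = rng.choice([-1, 1], size=100)
assert binary_dot(a, w) == int(a @ w)
```

on an fpga the per-byte loop above becomes a wide combinational xnor followed by a popcount tree, which is where the latency and power savings over floating-point macs come from.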
arxiv:2002.08205
zero- to ultralow-field nuclear magnetic resonance (zulf nmr) is an alternative spectroscopic method to high-field nmr, in which samples are studied in the absence of a large magnetic field. unfortunately, there is a large barrier to entry for many groups, because operating the optical magnetometers needed for signal detection requires some expertise in atomic physics and optics. commercially available magnetometers offer a solution to this problem. here we describe a simple zulf nmr configuration employing commercial magnetometers, and demonstrate sufficient functionality to measure samples with nuclear spins prepolarized in a permanent magnet or initialized using parahydrogen. this opens the possibility for other groups to use zulf nmr, which provides a means to study complex materials without magnetic-susceptibility-induced line broadening, and to observe samples through conductive materials.
arxiv:1911.07554
so far, no boron fullerenes have been synthesized: more compact sp3-bonded clusters are energetically preferred. to circumvent this, metallic clusters have been suggested by pochet et al. [phys. rev. b 83, 081403(r) (2011)] as "seeds" for a possible synthesis that would topologically protect the sp2 sector of the configuration space. in this paper, we identify a basic pentagonal unit which allows a balance between the release of strain and the self-doping rule. we formulate a guiding principle for the stability of boron fullerenes, which takes the form of an isolated filled pentagon rule (ifpr). the role of metallic clusters is then reexamined. it is shown that the interplay of the ifpr and the seed-induced doping breaks polymorphism and its related problems: it can effectively select between different isomers and reduce the reactivity of the boron shells. the balance between self-doping and exterior doping represents the best strategy for boron buckyball synthesis.
arxiv:1302.4003
combinatorial optimization problems are ubiquitous in industrial applications. however, finding optimal or close-to-optimal solutions can often be extremely hard. because some of these problems can be mapped to the ground-state search of the ising model, tremendous effort has been devoted to developing solvers for ising-type problems over the past decades. recent advances in controlling and manipulating both quantum and classical systems have enabled novel computing paradigms such as quantum simulators and coherent ising machines to tackle hard optimization problems. here, we examine and benchmark several physics-inspired optimization algorithms, including coherent ising machines, gain-dissipative algorithms, simulated bifurcation machines, and hopfield neural networks, which we collectively refer to as stochastic-driven nonlinear dynamical systems. most importantly, we benchmark these algorithms against random ising problems with planted solutions, comparing them to simulated annealing as a baseline and leveraging the same software stack for all solvers. we further study how different numerical integration techniques and graph connectivity affect performance. this work provides an overview of a diverse set of new optimization paradigms.
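the planted-solution setup and the simulated-annealing baseline can both be sketched compactly. this is an illustrative toy, not the paper's benchmark suite: the instance is a uniform ferromagnet gauge-transformed by a random planted configuration (so the planted spins are a known ground state), and the annealing schedule and instance size are arbitrary choices.

```python
import numpy as np

def planted_ising(n, rng):
    """dense ising instance whose ground state is known by construction:
    take ferromagnetic couplings j_ij = 1, then gauge-transform
    j_ij -> s_i s_j j_ij so the planted spins s minimize the energy."""
    s = rng.choice([-1, 1], size=n)
    J = np.ones((n, n))
    np.fill_diagonal(J, 0.0)
    J *= np.outer(s, s)  # gauge transform plants the solution
    return J, s

def energy(J, spins):
    # h = -sum_{i<j} j_ij s_i s_j, with the 1/2 undoing the double count
    return -0.5 * spins @ J @ spins

def simulated_annealing(J, rng, sweeps=200, t_hot=5.0, t_cold=0.05):
    n = J.shape[0]
    spins = rng.choice([-1, 1], size=n)
    for T in np.geomspace(t_hot, t_cold, sweeps):
        for i in rng.permutation(n):
            # energy change for flipping spin i: dE = 2 s_i (J s)_i
            dE = 2.0 * spins[i] * (J[i] @ spins)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i] = -spins[i]
    return spins

rng = np.random.default_rng(1)
J, planted = planted_ising(32, rng)
found = simulated_annealing(J, rng)
# success means reaching the planted ground state, up to a global spin flip
ok = np.array_equal(found, planted) or np.array_equal(found, -planted)
```

planted instances are useful for benchmarking precisely because the optimum is known, so solver success can be checked exactly rather than by comparing best-found energies.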
arxiv:2503.15427
as space expands, the energy density in black holes increases relative to that of radiation, motivating scenarios in which the early universe contained a significant abundance of such objects. in this study, we revisit the constraints on primordial black holes derived from measurements of the light element abundances. black holes and their hawking evaporation products can impact the era of big bang nucleosynthesis (bbn) by altering the rate of expansion at the time of neutron-proton freeze-out, as well as by radiating mesons which can convert protons into neutrons and vice versa. such black holes can thus enhance the primordial neutron-to-proton ratio and increase the amount of helium that is ultimately produced. additionally, the products of hawking evaporation can break up helium nuclei, which both reduces the helium abundance and increases the abundance of primordial deuterium. building upon previous work, we make use of modern deuterium and helium measurements to derive stringent constraints on black holes which evaporate in $t_{\rm evap} \sim 10^{-1}$ s to $\sim 10^{13}$ s (corresponding to $m \sim 6 \times 10^8$ g to $\sim 2 \times 10^{13}$ g, assuming standard model particle content). we also consider how physics beyond the standard model could impact these constraints. due to the gravitational nature of hawking evaporation, the rate at which a black hole evaporates, and the types of particles produced through this process, depend on the complete particle spectrum. within this context, we discuss scenarios which feature a large number of decoupled degrees of freedom (i.e., large hidden sectors), as well as models of tev-scale supersymmetry.
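the quoted mass-lifetime correspondence follows the standard scaling $t_{\rm evap} \propto m^3 / g_*$, where $g_*$ counts the effective emitted degrees of freedom. a rough back-of-envelope sketch, anchored to the abstract's low-mass endpoint; this is not the paper's calculation, which tracks the particle content as the hole's temperature rises:

```python
# hawking-lifetime scaling t_evap ~ m^3 / g, anchored at (6e8 g, 0.1 s)
# from the abstract; the degrees-of-freedom factor g is schematic here.
def t_evap_seconds(mass_g, g_eff=1.0, m_ref=6e8, t_ref=1e-1):
    """lifetime in seconds from the m^3 scaling, relative to an anchor."""
    return t_ref * (mass_g / m_ref) ** 3 / g_eff

# the heavier endpoint of the constrained window, 2e13 g:
t_heavy = t_evap_seconds(2e13)
# the pure m^3 scaling lands within an o(1) factor of the abstract's
# ~1e13 s, the residual coming from the mass dependence of g_eff
```

the cubic dependence is why the two-decade span in black hole mass maps onto roughly fourteen decades in evaporation time.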
arxiv:2006.03608