We present an analysis of positron lifetimes in solids with unprecedented depth. Instead of modeling correlation effects with density functionals, we study positron-electron wave functions with long-range correlations included. This gives new insight into positron annihilation in metals, insulators, and semiconductors. Using a new quantum Monte Carlo approach for the computation of positron lifetimes, we obtain improved accuracy over previous computations for a representative set of materials when compared with experiment. We thus present a parameter-free method as a useful alternative to existing methods for modeling positrons in solids.
arxiv:2202.13204
Anomaly detection, an important technique in cloud computing, supports the smooth running of cloud platforms. Traditional detection methods based on statistics, analysis, etc., lead to high false-alarm rates due to non-adaptive and sensitive parameter settings. We present an online model for anomaly detection based on machine learning theory. Most existing machine learning methods concatenate all features from different sub-systems directly into one long feature vector, which fails to exploit the complementary information between sub-systems and ignores the potential of multi-view features to enhance classification performance. To address this problem, the proposed method automatically fuses multi-view features and optimizes the discriminative model to enhance accuracy. The model takes advantage of the extreme learning machine (ELM) to improve detection efficiency. An ELM is a single-hidden-layer neural network that transforms the iterative solution of the output weights into the solution of a system of linear equations, avoiding local optima. Moreover, we rank anomalies according to the relationship between samples and the classification boundary, assign weights to the ranked anomalies, and finally retrain the classification model. Our method fully exploits the complementary information between sub-systems and avoids the influence of imbalanced datasets, and can therefore deal with the various challenges of a cloud computing platform. We deploy a private cloud platform with OpenStack, verify the proposed model, and compare results with state-of-the-art methods, achieving better efficiency and simplicity.
arxiv:1901.09294
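The abstract above leans on the extreme learning machine's key property: hidden-layer weights stay random and fixed, so training reduces to one linear least-squares solve. A minimal sketch of that idea (the multi-view fusion, ranking, and retraining steps of the paper are omitted; all names and the toy data are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=50):
    """Fit an ELM: random, fixed input weights; output weights by least squares."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy "anomaly" data: label 1 iff a 2-D point lies outside the unit circle
X = rng.standard_normal((400, 2))
y = (np.linalg.norm(X, axis=1) > 1.0).astype(float)

W, b, beta = elm_train(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == (y > 0.5))
```

Because no iteration over the hidden weights is needed, training cost is dominated by one pseudo-inverse, which is the efficiency the abstract claims.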
We have investigated the formation and kinematics of sub-mm continuum cores in the Orion A molecular cloud. A comparison between sub-mm continuum and near-infrared extinction shows a continuum core detection threshold of $A_V \sim$ 5-10 mag. The threshold is similar to the star formation extinction threshold of $A_V \sim$ 7 mag proposed by recent work, suggesting a universal star formation extinction threshold among clouds within 500 pc of the Sun. A comparison between the Orion A cloud and the massive infrared dark cloud G28.37+0.07 indicates that Orion A produces more dense gas within the extinction range 15 mag $\lesssim A_V \lesssim$ 60 mag. Using data from the CARMA-NRO Orion Survey, we find that dense cores in the integral-shaped filament (ISF) show a sub-sonic core-to-envelope velocity dispersion that is significantly less than the local envelope line dispersion, similar to what has been found in nearby clouds. Dynamical analysis indicates that the cores are bound to the ISF. An oscillatory core-to-envelope motion is detected along the ISF; its origin is to be further explored.
arxiv:1908.04488
To ensure reliable object detection in autonomous systems, the detector must be able to adapt to changes in appearance caused by environmental factors such as time of day, weather, and seasons. Continually adapting the detector to incorporate these changes is a promising solution, but it can be computationally costly. Our proposed approach is to selectively adapt the detector only when necessary, using new data that does not have the same distribution as the current training data. To this end, we investigate three popular metrics for domain gap evaluation and find a correlation between the domain gap and detection accuracy. We therefore apply the domain gap as a criterion to decide when to adapt the detector. Our experiments show that our approach has the potential to improve the efficiency of the detector's operation in real-world scenarios, where environmental conditions change cyclically, without sacrificing the overall performance of the detector. Our code is publicly available at https://github.com/dadung/dge-cda.
arxiv:2302.10396
We study Heegaard Floer homology and various related invariants (such as the $h$-function) for two-component L-space links with linking number zero. For such links, we explicitly describe the relationship between the $h$-function, the Sato-Levine invariant, and the Casson invariant. We give a formula for the Heegaard Floer $d$-invariants of integral surgeries on two-component L-space links of linking number zero in terms of the $h$-function, generalizing a formula of Ni and Wu. As a consequence, for such links with unknotted components, we characterize L-space surgery slopes in terms of the $\nu^{+}$-invariants of the knots obtained by blowing down the components. We give a proof of a skein inequality for the $d$-invariants of $+1$ surgeries along linking number zero links that differ by a crossing change. We also describe bounds on the smooth four-genus of links in terms of the $h$-function, expanding on previous work of the second author, and use these bounds to calculate the four-genus in several examples of links.
arxiv:1810.10178
We present a simple field transformation that changes the field arguments from the ordinary position-space coordinates to oblique phase-space coordinates that are linear in position and momentum variables. This is useful in studying quantum field dynamics in the presence of an external uniform magnetic field: here, the field transformation serves to separate the dynamics within a given Landau level from that between different Landau levels. We apply this formalism to both nonrelativistic and relativistic field theories. In a large external magnetic field, our formalism provides an efficient method for constructing the relevant lower-dimensional effective field theories with the field degrees of freedom defined only on the lowest Landau level.
arxiv:hep-th/0110249
In this article, we derive a theoretical formalism that unifies rigorous coupled-wave analysis and dynamical diffraction theory. Based on this formalism, we design a computational approach for diffraction calculations for nanoscale lamellar gratings with an arbitrary line profile shape. In this approach, the grating's line profile is approximated as a polygon. This proves convenient, since the approach does not rely on a geometry model of the grating. We test the new approach against other computational theories and a synchrotron scattering experiment.
arxiv:2410.14217
Following the model introduced by Aguech, Lasmar and Mahmoud [Probab. Engrg. Inform. Sci. 21 (2007) 133-141], the weighted depth of a node in a labelled rooted tree is the sum of all labels on the path connecting the node to the root. We analyze weighted depths of nodes with given labels, of the last inserted node, and of nodes ordered as visited by the depth-first search process, as well as the weighted path length and the weighted Wiener index in a random binary search tree. We establish three regimes of nodes depending on whether the second-order behaviour of their weighted depths follows from fluctuations of the keys on the path, from the depth of the nodes, or from both. Finally, we investigate a random distribution function on the unit interval arising as the scaling limit for weighted depths of nodes with at most one child.
arxiv:1707.00165
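The weighted depth defined above is easy to make concrete: build a binary search tree from random keys and sum the labels along a root-to-node path. A small sketch (the class and function names are illustrative):

```python
import random

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Standard BST insertion; inserting keys in random order yields a random BST."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def weighted_depth(root, key):
    """Sum of all labels on the path connecting the node holding `key` to the root."""
    total, node = 0, root
    while node is not None:
        total += node.key
        if key == node.key:
            return total
        node = node.left if key < node.key else node.right
    raise KeyError(key)

random.seed(1)
keys = random.sample(range(100), 10)   # distinct random labels
root = None
for k in keys:
    root = insert(root, k)

# the first inserted key becomes the root, so its weighted depth is its own label
assert weighted_depth(root, keys[0]) == keys[0]
```

Here the label of each node doubles as its BST key, matching the random-BST setting of the abstract; the path sum includes both endpoints.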
A systematic investigation of even-even superheavy elements in the region of proton numbers $100 \leq Z \leq 130$, and of neutron numbers from the proton-drip line up to $N = 196$, is presented. For this study we use five of the most up-to-date covariant energy density functionals of different types: with a non-linear meson coupling, with density-dependent meson couplings, and with density-dependent zero-range interactions. Pairing correlations are treated within relativistic Hartree-Bogoliubov (RHB) theory based on an effective separable particle-particle interaction of finite range, and deformation effects are taken into account. This allows us to assess the spread of theoretical predictions within the present covariant models for binding energies, deformation parameters, shell structures, and $\alpha$-decay half-lives. Contrary to previous studies in covariant density functional theory, we find that the impact of the $N = 172$ spherical shell gap on the structure of superheavy elements is very limited. Similar to non-relativistic functionals, some covariant functionals predict an important role for the spherical $N = 184$ gap. For these functionals (NL3*, DD-ME2, and PC-PK1), there is a band of spherical nuclei along and near the $Z = 120$ and $N = 184$ lines. However, for the other functionals (DD-PC1 and DD-ME$\delta$), oblate shapes dominate at and in the vicinity of these lines. Available experimental data are in general described with comparable accuracy and do not allow one to discriminate between these predictions.
arxiv:1510.07909
In this paper, we consider the following quasilinear Schr\"{o}dinger equation \begin{align*} -\Delta u - u\Delta(u^{2}) = k(x)\left\vert u\right\vert^{q-2}u - h(x)\left\vert u\right\vert^{s-2}u\text{,}\quad u \in D^{1,2}(\mathbb{R}^{N})\text{,} \end{align*} where $1 < q < 2 < s < +\infty$. Unlike most results in the literature, the exponent $s$ here is allowed to be supercritical, $s > 2\cdot 2^{\ast}$. By taking advantage of geometric properties of a nonlinear transformation $f$ and a variant of Clark's theorem, we obtain a sequence of solutions with negative energy in a space smaller than $D^{1,2}(\mathbb{R}^{N})$. A nonnegative solution at a negative energy level is also obtained.
arxiv:2211.08394
In the number-conserving Bogoliubov theory of BEC, the Bogoliubov transformation between quasiparticles and particles is nonlinear. We invert this nonlinear transformation and give a general expression for eigenstates of the Bogoliubov Hamiltonian in the particle representation. The particle representation unveils the structure of a condensate multiparticle wavefunction. We give several examples to illustrate the general formalism.
arxiv:cond-mat/0210258
We consider importance sampling, as well as other properly weighted samples with respect to a target distribution $\pi$, from a different point of view. By considering the associated weights as sojourn times until the next jump, we define appropriate jump processes. When the original sample sequence forms an ergodic Markov chain, the associated jump process is an ergodic semi-Markov process with stationary distribution $\pi$. Hence, the type of convergence of properly weighted samples may be stronger than that of weighted means. In particular, when the samples are independent and the mean weight is bounded above, we describe a slight modification in order to achieve exact (weighted) samples from the target distribution.
arxiv:math/0505045
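The jump-process viewpoint above can be sketched numerically: treating each importance weight as a sojourn time, the time average of a function over the jump process equals the usual self-normalized importance-sampling estimate. A minimal sketch (the choice of proposal and target here is purely illustrative):

```python
import random
import math

random.seed(0)

# proposal q = Uniform(0, 2); target pi = Exp(1) truncated to (0, 2)
def weight(x):
    """Unnormalized importance weight pi(x)/q(x) on (0, 2)."""
    return math.exp(-x)

n = 200_000
xs = [random.uniform(0.0, 2.0) for _ in range(n)]
ws = [weight(x) for x in xs]          # sojourn times until the next jump

# time average of f(x) = x over the jump process = self-normalized weighted mean
est = sum(w * x for w, x in zip(ws, xs)) / sum(ws)

# exact mean of Exp(1) truncated to (0, 2): (1 - 3e^{-2}) / (1 - e^{-2})
exact = (1 - 3 * math.exp(-2)) / (1 - math.exp(-2))
```

The sample path of the jump process sits at state `xs[i]` for duration `ws[i]`, so long-run time averages converge to expectations under $\pi$, which is the stationarity statement in the abstract.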
In this work we study the deflection angle $\Delta\varphi$ and gravitational lensing of both lightlike and timelike neutral rays in Reissner-Nordstr\"om (RN) spacetimes. The exact deflection angle is found as an elliptic function of the impact parameter $b$ and velocity $v$ of the ray, and the charge $q$ of the spacetime. In obtaining this angle, we find the critical impact parameter $b_c$ and the particle-sphere radius $r_c$, which also depend on $v$ and $q$. In general, increasing either the velocity or the charge reduces both $b_c$ and $r_c$. To study the effect of $v$ and $q$ on the deflection angle $\Delta\varphi$, its weak and strong deflection limits, relativistic and non-relativistic limits, and small-charge and extremal RN limits are analyzed carefully. It is found that increasing either the velocity or the charge reduces the deflection angle. For weak deflection, the velocity and charge corrections appear in the $\mathcal{O}(1/b)$ and $\mathcal{O}(1/b^2)$ orders, respectively. For strong deflection, the two corrections appear at the same order. The apparent angles and magnifications of weak and strong regular lensing, and of retro-lensing, are studied for both lightlike and timelike rays. In general, in all cases increasing the velocity or charge decreases the apparent angle of any order. We show that the velocity correction is much larger than the charge correction in the weak lensing case, while their effects in strong regular lensing and retro-lensing are comparable. It is further shown that the apparent angle and magnification in strong regular lensing and retro-lensing can be effectively unified. Finally, we argue that the corrections of $v$ and $q$ to the apparent angle can be correlated with the mass or mass hierarchy of timelike particles of a given energy. In addition, the effects of $v$ and $q$ on the shadow size of black holes are discussed.
arxiv:1806.04719
Using ammonia as the nitrogen source for molecular beam epitaxy, the GaN-based diluted magnetic semiconductor Ga1-xMnxN is successfully grown with Mn concentration up to x ~ 6.8% and with p-type conductivity. The films have wurtzite structure with substitutional Mn on Ga sites in GaN. Magnetization measurements reveal that Ga1-xMnxN is ferromagnetic at temperatures higher than room temperature. The ferromagnetic-paramagnetic transition temperature, Tc, depends on the Mn concentration of the film. At low temperatures, the magnetization increases with increasing magnetic field, implying that a paramagnetic-like phase coexists with the ferromagnetic one. Possible explanations are proposed for the coexistence of the two magnetic phases in the grown films.
arxiv:cond-mat/0205560
Whereas single- and two-photon wave packets are usually treated as pure states, in practice they will be mixed. We study how entanglement created with mixed photon wave packets is degraded. We find in particular that the entanglement of a delocalized single-photon state of the electromagnetic field is determined simply by its purity. We also discuss entanglement for two-photon mixed states, as well as the influence of a vacuum component.
arxiv:0803.0771
Many physical, biological, and even social systems face the problem of how to efficiently harvest free energy from an environment that can have many possible states, while having only a limited number of harvesting protocols to choose among. We investigate this scenario by extending earlier work on using feedback control to extract work from nonequilibrium systems. Specifically, in contrast to that previous work on the thermodynamics of feedback control, we analyze the combined and separate effects of noisy measurements, memory limitations, and limitations on the number of possible work extraction protocols. Our analysis provides a general recipe for constructing repertoires of allowed harvesting protocols that minimize the expected thermodynamic losses during free energy harvesting, i.e., that minimize expected entropy production. In particular, our results highlight that the benefits of feedback control over uninformed (random) actions extend beyond just the associated information gain, often by many orders of magnitude. Our results also uncover the effects of limitations on the number of possible harvesting protocols when there is uncertainty about the distribution over states of the environment.
arxiv:2407.05507
This paper presents a safety-critical locomotion control framework for quadrupedal robots. Our goal is to enable quadrupedal robots to safely navigate cluttered environments. To tackle this, we introduce exponential discrete control barrier functions (exponential DCBFs) with duality-based obstacle avoidance constraints into a nonlinear model predictive control (NMPC) with whole-body control (WBC) framework for quadrupedal locomotion control. This enables us to use polytopes to describe the shapes of the robot and obstacles for collision avoidance while performing locomotion control of quadrupedal robots. Compared to most prior work, especially work using CBFs that relies on spherical and conservative approximations for obstacle avoidance, this work demonstrates a quadrupedal robot autonomously and safely navigating through very tight spaces in the real world. (Our open-source code is available at github.com/hybridrobotics/quadruped_nmpc_dcbf_duality, and the video is available at youtu.be/p1gsqjwxm1q.)
arxiv:2212.14199
We propose a real-time DNN-based technique to segment hand and object in interacting motions from depth inputs. Our model, called DenseAttentionSeg, contains a dense attention mechanism to fuse information at different scales and improves result quality with skip-connections. In addition, we introduce a contour loss in model training, which helps generate accurate hand and object boundaries. Finally, we propose and release our InterSegHands dataset, a fine-scale hand segmentation dataset containing about 52k depth maps of hand-object interactions. Our experiments evaluate the effectiveness of our techniques and datasets, and indicate that our method outperforms current state-of-the-art deep segmentation methods on interaction segmentation.
arxiv:1903.12368
The growth in variety and volume of OLTP (online transaction processing) applications poses a challenge to OLTP systems: meeting performance and cost demands in the existing hardware landscape. These applications are highly interactive (latency sensitive) and require update consistency. They target commodity hardware for deployment and demand throughput scalability with increasing clients and data. Current OLTP systems used by these applications trade off performance against ease of development across a variety of applications. In order to bridge the gap between performance and ease of development, we propose an intuitive, high-level programming model that allows OLTP applications to be modeled as a cluster of application logic units. By extending transactions that guarantee full ACID semantics to provide the proposed model, we maintain ease of application development. The model allows the application developer to reason about program performance, and to influence it without the involvement of OLTP system designers (database designers) and/or DBAs. As a result, the database designer is free to focus on the efficient running of programs to ensure optimal cluster resource utilization.
arxiv:1701.04339
We analytically calculate the ground state pairing symmetry and excitation spectra of two holes doped into the half-filled t-t'-t''-Jz model in the strong-coupling limit (Jz >> |t|, |t'|, |t''|). In leading order, this reduces to the t'-t''-Jz model, where there are regions of d-wave, s-wave, and (degenerate) p-wave symmetry. We find that the t-Jz model maps in lowest order onto the t'-t''-Jz model on the boundary between d and p symmetry, with a flat lower band in the pair excitation spectrum. In higher order, d-wave symmetry is selected from the lower pair band. However, we observe that the addition of an appropriate t' < 0 and/or t'' > 0, the signs of t' and t'' found in the hole-doped cuprates, could drive the hole-pair symmetry to p-wave, implying the possibility of competition between p-wave and d-wave pair ground states. (An added t' > 0 and/or t'' < 0 generally tends to promote d-wave symmetry.) We perturbatively construct an extended quasi-pair for the t-Jz model. In leading order, there are contributions from sites a distance of sqrt(2) lattice spacings apart; however, contributions from sites 2 lattice spacings apart, also of the same order, vanish identically. Finally, we compare our approach with analytic calculations for a 2x2 plaquette and with existing numerical work, and discuss possible relevance to the physical parameter regime.
arxiv:cond-mat/0403733
Measurements of inelastic production of charmonium with the ZEUS detector at HERA are presented. J/psi and psi' mesons have been identified using the decay channel psi -> mu+ mu-. The data, corresponding to an integrated luminosity of 38 pb^-1 in photoproduction and 73.3 pb^-1 in electroproduction, are confronted with theoretical predictions.
arxiv:hep-ex/0309046
training. After that time, the engineer in training can decide whether or not to take a state licensing test to become a professional engineer. The licensing process varies state by state, but generally requires the engineer-in-training to possess four years of verifiable work experience in their engineering field, as well as to successfully pass the NCEES Principles and Practice of Engineering (PE) exam for their engineering discipline. After successful completion of that test, the professional engineer can place the suffix P.E. after their name, signifying that they are now a professional engineer, and they can affix their P.E. seal to drawings and reports, for example. They can also serve as expert witnesses in their areas of expertise. Achieving the status of professional engineer is one of the highest levels of achievement one can attain in the engineering industry. Engineers with this status are generally highly sought after by employers, especially in the field of civil engineering. There are also graduate degree options for an engineer. Many engineers decide to complete a master's degree in some field of engineering or business administration, or to pursue an education in law, medicine, or another field. Two types of doctorate are also available: the traditional PhD and the Doctor of Engineering. The PhD focuses on research and academic excellence, whereas the Doctor of Engineering focuses on practical engineering. The education requirements are the same for both degrees; however, the dissertation required is different. The PhD requires the standard research problem, whereas the Doctor of Engineering requires a practical dissertation. In present undergraduate engineering education, the emphasis on linear systems develops a way of thinking that dismisses nonlinear dynamics as spurious oscillations. The linear systems approach oversimplifies the dynamics of nonlinear systems.
Hence, undergraduate students and teachers should recognize the educational value of chaotic dynamics. Practicing engineers will also gain more insight into nonlinear circuits and systems from exposure to chaotic phenomena. After graduation, continuing education courses may be needed to keep a government-issued professional engineer (PE) license valid, to keep skills fresh, to expand skills, or to keep up with new technology. == Caribbean == === Trinidad and Tobago === Engineering degree education in Trinidad and Tobago is not regulated by the Board of Professional Engineers of Trinidad and Tobago (BOETT) or the local engineering association (APETT). Professional engineers registered with BOETT are given the credentials "R.Eng.". == South America == === Argentina === Engineering education programs at universities in Argentina
https://en.wikipedia.org/wiki/Engineering_education
Following the introduction of the invariant distance on the non-commutative C*-algebra of the quantum group SU_q(2), the Green function and the kernel on the q-homogeneous space M = SU(2)_q/U(1) are derived. A path integration is formulated. The Green function for the free massive scalar field on the non-commutative Einstein space R^1 x M is presented.
arxiv:q-alg/9703032
This article describes F-IVM, a unified approach for maintaining analytics over changing relational data. We exemplify its versatility in four disciplines: processing queries with group-by aggregates and joins; learning linear regression models using the covariance matrix of the input features; building Chow-Liu trees using the pairwise mutual information of the input features; and matrix chain multiplication. F-IVM has three main ingredients: higher-order incremental view maintenance; factorized computation; and ring abstraction. F-IVM reduces the maintenance of a task to that of a hierarchy of simple views. Such views are functions mapping keys, which are tuples of input values, to payloads, which are elements from a ring. F-IVM also supports efficient factorized computation over keys, payloads, and updates. Finally, F-IVM treats seemingly disparate tasks uniformly. In the key space, all tasks require joins and variable marginalization. In the payload space, tasks differ in the definition of the sum and product ring operations. We implemented F-IVM on top of DBToaster and show that it can outperform classical first-order and fully recursive higher-order incremental view maintenance by orders of magnitude while using less memory.
arxiv:2303.08583
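The ring abstraction above can be illustrated in miniature: the same incremental-maintenance loop computes different analytics depending only on which payload ring is plugged in. A minimal sketch, assuming one such payload carrying (count, sum, sum of squares) so a variance stays maintainable under inserts (this is a toy illustration of the idea, not F-IVM's actual payload types):

```python
class SumSqRing:
    """Toy payload: (count, sum, sum of squares). `+` merges payloads;
    `lift` maps a raw value into the ring, as updates arrive."""

    def __init__(self, c=0, s=0.0, q=0.0):
        self.c, self.s, self.q = c, s, q

    def __add__(self, other):
        return SumSqRing(self.c + other.c, self.s + other.s, self.q + other.q)

    @staticmethod
    def lift(x):
        return SumSqRing(1, x, x * x)

    def variance(self):
        mean = self.s / self.c
        return self.q / self.c - mean * mean

# incremental maintenance: each update folds into the running payload,
# never recomputing from scratch
payload = SumSqRing()
for x in [1.0, 2.0, 3.0, 4.0]:
    payload = payload + SumSqRing.lift(x)
```

Swapping `SumSqRing` for a plain counting payload would turn the same loop into COUNT maintenance, which is the "tasks differ only in the ring operations" point of the abstract.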
Light black holes could have formed in the very early universe through the collapse of large primordial density fluctuations. These primordial black holes (PBHs), if light enough, would have evaporated by now through the emission of Hawking radiation; thus they could not represent a sizable fraction of dark matter today. However, they could have left imprints in the early cosmological epochs. We discuss the impact of massless graviton emission by (rotating) PBHs before the onset of Big Bang nucleosynthesis (BBN) and conclude that this contribution to dark radiation is constrained by the cosmic microwave background (CMB) (with the future CMB Stage 4) and by BBN in the lighter portion of the PBH mass range, under the hypothesis that they dominated the energy density of the universe.
arxiv:2201.04946
Model calibration measures the agreement between the predicted probability estimates and the true correctness likelihood. Proper model calibration is vital for high-risk applications. Unfortunately, modern deep neural networks are poorly calibrated, compromising trustworthiness and reliability. Medical image segmentation particularly suffers from this due to the natural uncertainty of tissue boundaries. This is exacerbated by their loss functions, which favor overconfidence in the majority classes. We address these challenges with DOMINO, a domain-aware model calibration method that leverages the semantic confusability and hierarchical similarity between class labels. Our experiments demonstrate that our DOMINO-calibrated deep neural networks outperform non-calibrated models and state-of-the-art morphometric methods in head image segmentation. Our results show that our method can consistently achieve better calibration, higher accuracy, and faster inference times than these methods, especially on rarer classes. This performance is attributed to our domain-aware regularization informing semantic model calibration. These findings show the importance of semantic ties between class labels in building confidence in deep learning models. The framework has the potential to improve the trustworthiness and reliability of generic medical image segmentation models. The code for this article is available at: https://github.com/lab-smile/DOMINO.
arxiv:2209.06077
In most vision-language (VL) models, the understanding of image structure is enabled by injecting position information (PI) about objects in the image. In our case study of LXMERT, a state-of-the-art VL model, we probe the use of PI in the representation and study its effect on visual question answering. We show that the model is not capable of leveraging PI for the image-text matching task on a challenge set where only position differs. Yet, our probing experiments confirm that PI is indeed present in the representation. We introduce two strategies to tackle this: (i) positional information pre-training and (ii) contrastive learning on PI using cross-modality matching. With these, the model can correctly classify whether images with detailed PI statements match. In addition to the 2D information from bounding boxes, we introduce the object's depth as a new feature for better object localization in space. Even though we were able to improve the model properties as defined by our probes, this has only a negligible effect on downstream performance. Our results thus highlight an important issue in multimodal modeling: the mere presence of information detectable by a probing classifier is not a guarantee that the information is available in a cross-modal setup.
arxiv:2305.10046
The induced terahertz response of semiconductor systems is investigated with a microscopic theory. In agreement with recent terahertz experiments, the developed theory fully explains the ultrafast build-up of the plasmon resonance and the slow formation of incoherent excitonic populations. For incoherent conditions, it is shown that a terahertz field exclusively probes the correlated electron-hole pairs via a symmetry breaking between many-body correlations of even and odd functional form.
arxiv:cond-mat/0306352
This paper presents an overview of the radiative transfer problem of calculating the spectral line intensity and polarization that emerges from a (generally magnetized) astrophysical plasma composed of atoms and molecules whose excitation state is significantly influenced by radiative transitions produced by an anisotropic radiation field. The numerical solution of this non-LTE problem of the 2nd kind is facilitating the physical understanding of the second solar spectrum and the exploration of the complex magnetism of the extended solar atmosphere, but much more could be learned if high-sensitivity polarimeters were developed also for the present generation of night-time telescopes. Interestingly, I find that the population ratio between the levels of some resonance line transitions can be efficiently modulated by the inclination of a weak magnetic field when the anisotropy of the incident radiation is significant, which could provide a new diagnostic tool in astrophysics.
arxiv:0911.4669
We describe a conjectural formula via intersection numbers for the Masur-Veech volumes of strata of quadratic differentials with prescribed zero orders, and we prove the formula in the case when the zero orders are odd. For the principal strata of quadratic differentials with simple zeros, the formula reduces to computing the top Segre class of the quadratic Hodge bundle, which can be further simplified to certain linear Hodge integrals. An appendix proves that the intersection of this class with $\psi$-classes can be computed by Eynard-Orantin topological recursion. As applications, we analyze numerical properties of Masur-Veech volumes, area Siegel-Veech constants, and sums of Lyapunov exponents of the principal strata for fixed genus and varying number of zeros, which settles the corresponding conjectures due to Grivaux-Hubert, Fougeron, and those elaborated in [The7]. We also describe conjectural formulas for area Siegel-Veech constants and sums of Lyapunov exponents for arbitrary affine invariant submanifolds, and verify them for the principal strata.
arxiv:1912.02267
We present a transfer-free preparation method for graphene on hexagonal boron nitride (h-BN) crystals by chemical vapor deposition of graphene via a catalytic proximity effect, i.e., activated by a nearby Cu catalyst. We demonstrate full coverage by monolayer graphene of half-millimeter-sized hexagonal boron nitride crystals exfoliated on a copper foil prior to growth. We demonstrate that the proximity of the copper catalyst ensures high yield, with the growth rate estimated between 2 \mu m/min and 5 \mu m/min. Optical and electron microscopies, together with confocal micro-Raman mapping, confirm that graphene covers the top surface of the h-BN crystals, which we attribute to lateral growth from the supporting catalytic copper substrate. Structural and electron transport characterization of the in-situ grown graphene shows an electronic mobility of about 20,000 cm^2/(V s). Comparison with graphene/h-BN stacks obtained by manually transferring similar CVD graphene onto h-BN confirms the better neutrality reached by the self-assembled structures.
arxiv:1701.06057
in volcano monitoring, effective recognition of seismic events is essential for understanding volcanic activity and raising timely warning alerts. traditional methods rely on manual analysis, which can be subjective and labor - intensive. furthermore, current automatic approaches often tackle detection and classification separately, mostly rely on single - station information and generally require tailored preprocessing and representations to perform predictions. these limitations often hinder their application to real - time monitoring and utilization across different volcano conditions. this study introduces a novel approach that utilizes semantic segmentation models to automate seismic event recognition by applying a straightforward transformation of multi - channel 1d signals into 2d representations, enabling their use as images. our framework employs a data - driven, end - to - end design that integrates multi - station seismic data with minimal preprocessing, performing both detection and classification simultaneously for five seismic event classes. we evaluated four state - of - the - art segmentation models ( unet, unet + +, deeplabv3 + and swinunet ) on approximately 25, 000 seismic events recorded at four different chilean volcanoes : nevados del chill \ ' an volcanic complex, laguna del maule, villarrica and puyehue - cord \ ' on caulle. among these models, the unet architecture was identified as the most effective model, achieving mean f1 and intersection over union ( iou ) scores of up to 0. 91 and 0. 88, respectively, and demonstrating superior noise robustness and model flexibility to unseen volcano datasets.
arxiv:2410.20595
the cross section data for $ \ pi ^ 0 $ inclusive production in $ pp $ collisions is considered in a rather broad kinematic region in energy $ \ sqrt { s } $, feynman variable $ x _ f $ and transverse momentum $ p _ t $. the analysis of these data is done in the perturbative qcd framework at the next - to - leading order. we find that they cannot be correctly described in the entire kinematic domain and this leads us to conclude that the single - spin asymmetry, $ a _ n $ for this process, observed several years ago at fnal by the experiment e704 and the recent result obtained at bnl - rhic by star, are two different phenomena. this suggests that star data probes a genuine leading - twist qcd single - spin asymmetry for the first time and finds a large effect.
arxiv:hep-ph/0311110
adversarial training is one of the most popular methods for training models robust to adversarial attacks ; however, it is not well - understood from a theoretical perspective. we prove existence, regularity, and minimax theorems for adversarial surrogate risks. our results explain some empirical observations on adversarial robustness from prior work and suggest new directions in algorithm development. furthermore, our results extend previously known existence and minimax theorems for the adversarial classification risk to surrogate risks.
arxiv:2206.09098
machine learning ( ml ) models trained on data from potentially untrusted sources are vulnerable to poisoning. a small, maliciously crafted subset of the training inputs can cause the model to learn a " backdoor " task ( e. g., misclassify inputs with a certain feature ) in addition to its main task. recent research proposed many hypothetical backdoor attacks whose efficacy heavily depends on the configuration and training hyperparameters of the target model. given the variety of potential backdoor attacks, ml engineers who are not security experts have no way to measure how vulnerable their current training pipelines are, nor do they have a practical way to compare training configurations so as to pick the more resistant ones. deploying a defense requires evaluating and choosing from among dozens of research papers and re - engineering the training pipeline. in this paper, we aim to provide ml engineers with pragmatic tools to audit the backdoor resistance of their training pipelines and to compare different training configurations, to help choose one that best balances accuracy and security. first, we propose a universal, attack - agnostic resistance metric based on the minimum number of training inputs that must be compromised before the model learns any backdoor. second, we design, implement, and evaluate mithridates, a multi - stage approach that integrates backdoor resistance into the training - configuration search. ml developers already rely on hyperparameter search to find configurations that maximize the model ' s accuracy. mithridates extends this standard tool to balance accuracy and resistance without disruptive changes to the training pipeline. we show that hyperparameters found by mithridates increase resistance to multiple types of backdoor attacks by 3 - 5x with only a slight impact on accuracy. we also discuss extensions to automl and federated learning.
arxiv:2302.04977
because computers may contain or interact with sensitive information, they are often air - gapped and in this way kept isolated and disconnected from the internet. in recent years the ability of malware to communicate over an air - gap by transmitting sonic and ultrasonic signals from a computer speaker to a nearby receiver has been shown. in order to eliminate such acoustic channels, current best practice recommends the elimination of speakers ( internal or external ) in secure computers, thereby creating a so - called ' audio - gap '. in this paper, we present fansmitter, a malware that can acoustically exfiltrate data from air - gapped computers, even when audio hardware and speakers are not present. our method utilizes the noise emitted from the cpu and chassis fans which are present in virtually every computer today. we show that software can regulate the internal fans ' speed in order to control the acoustic waveform emitted from a computer. binary data can be modulated and transmitted over these audio signals to a remote microphone ( e. g., on a nearby mobile phone ). we present fansmitter ' s design considerations, including acoustic signature analysis, data modulation, and data transmission. we also evaluate the acoustic channel, present our results, and discuss countermeasures. using our method we successfully transmitted data from an air - gapped computer without audio hardware, to a smartphone receiver in the same room. we demonstrated the effective transmission of encryption keys and passwords from a distance of zero to eight meters, with a bit rate of up to 900 bits / hour. we show that our method can also be used to leak data from different types of it equipment, embedded systems, and iot devices that have no audio hardware, but contain fans of various types and sizes.
arxiv:1606.05915
the top - quark pair production cross section is measured in final states with one electron or muon and one hadronically decaying tau lepton from the process ttbar to ( l nu [ l ] ) ( tau nu [ tau ] ) bbbar, where l = e, mu. the data sample corresponds to an integrated luminosity of 19. 6 inverse femtobarns collected with the cms detector in proton - proton collisions at sqrt ( s ) = 8 tev. the measured cross section sigma [ ttbar ] = 257 + / - 3 ( stat ) + / - 24 ( syst ) + / - 7 ( lumi ) pb, assuming a top - quark mass of 172. 5 gev, is consistent with the standard model prediction.
arxiv:1407.6643
using a correlation - otdr, we characterized the temperature - induced group delay variations of two nested antiresonant nodeless hollow core fibers. the temperature sensitivity of both is substantially less than for ssmf with some dependency on coating type.
arxiv:2204.06093
we construct evolutionary models of the remnant of the merger of two carbon - oxygen ( co ) core white dwarfs ( wds ). with total masses in the range $ 1 - 2 { \ rm m _ \ odot } $, these remnants may either leave behind a single massive wd or undergo a merger - induced collapse to a neutron star ( ns ). on the way to their final fate, these objects generally experience a $ \ sim 10 $ kyr luminous giant phase, which may be extended if sufficient helium remains to set up a stable shell - burning configuration. the uncertain, but likely significant, mass loss rate during this phase influences the final remnant mass and fate ( wd or ns ). we find that the initial co core composition of the wd is converted to oxygen - neon ( one ) in remnants with final masses $ \ gtrsim 1. 05 { \ rm m _ \ odot } $. this implies that the co core / one core transition in single wds formed via mergers occurs at a similar mass as in wds descended from single stars, and thus that wd - wd mergers do not naturally provide a route to producing ultra - massive co - core wds. as the remnant contracts towards a compact configuration, it experiences a " bottleneck " that sets the characteristic total angular momentum that can be retained. this limit predicts single wds formed from wd - wd mergers have rotational periods of $ \ approx 10 - 20 $ min on the wd cooling track. similarly, it predicts remnants that collapse can form nss with rotational periods $ \ sim 10 $ ms.
arxiv:2011.03546
in this paper we study a continuous - time stochastic linear quadratic control problem arising from mathematical finance. we model the asset dynamics with random market coefficients and portfolio strategies with convex constraints. following the convex duality approach, we show that the necessary and sufficient optimality conditions for both the primal and dual problems can be written in terms of processes satisfying a system of fbsdes together with other conditions. we characterise explicitly the optimal wealth and portfolio processes as functions of adjoint processes from the dual fbsdes in a dynamic fashion and vice versa. we apply the results to solve quadratic risk minimization problems with cone - constraints and derive the explicit representations of solutions to the extended stochastic riccati equations for such problems.
arxiv:1512.04583
the traffic in today ' s networks is increasingly influenced by the interactions among network nodes as well as by the temporal fluctuations in the demands of the nodes. traditional statistical prediction methods are becoming obsolete due to their inability to address the non - linear and dynamic spatio - temporal dependencies present in today ' s network traffic. the most promising direction of research today is graph neural network ( gnn ) based prediction approaches, which are naturally suited to handle graph - structured data. unfortunately, the state - of - the - art gnn approaches separate the modeling of spatial and temporal information, resulting in the loss of important information about joint dependencies. these gnn based approaches further do not model information at both local and global scales simultaneously, leaving significant room for improvement. to address these challenges, we propose netsight. netsight learns joint spatio - temporal dependencies simultaneously at both global and local scales from the time - series of measurements of any given network metric collected at various nodes in a network. using the learned information, netsight can then accurately predict the future values of the given network metric at those nodes in the network. we propose several new concepts and techniques in the design of netsight, such as spatio - temporal adjacency matrix and node normalization. through extensive evaluations and comparison with prior approaches using data from two large real - world networks, we show that netsight significantly outperforms all prior state - of - the - art approaches. we will release the source code and data used in the evaluation of netsight upon the acceptance of this paper.
arxiv:2505.07034
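the netsight abstract above names two building blocks, a spatio - temporal adjacency matrix and node normalization, without defining them. a minimal numpy sketch of one plausible construction follows ; the block structure and the per - node standardization are illustrative assumptions, not the paper ' s actual definitions.

```python
import numpy as np

def node_normalize(X, eps=1e-8):
    """Standardize each node's time series to zero mean and unit
    variance, so nodes with very different traffic magnitudes become
    comparable. X has shape (num_nodes, num_timesteps)."""
    mu = X.mean(axis=1, keepdims=True)
    sigma = X.std(axis=1, keepdims=True)
    return (X - mu) / (sigma + eps)

def spatio_temporal_adjacency(A, T):
    """Build one joint adjacency over T time steps: spatial edges (A)
    within each step, plus temporal edges linking every node to its
    own copy at the neighboring step. Node i at step t gets the global
    index t * n + i."""
    n = A.shape[0]
    big = np.kron(np.eye(T), A)                 # spatial blocks on the diagonal
    shift = np.eye(T, k=1)                      # step t connects to step t + 1
    big += np.kron(shift + shift.T, np.eye(n))  # symmetric temporal self-links
    return big
```

a gnn layer operating on this joint matrix sees spatial and temporal dependencies in one message - passing step, instead of alternating separate spatial and temporal modules.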
an elementary geometric proof for the existence of witt ' s 5 - ( 12, 6, 1 ) design is given.
arxiv:1304.0089
} $ provided by massart inequality. last, we extend the local concentration results holding individually for each $ n $ to time - uniform concentration inequalities holding simultaneously for all $ n $, revisiting a reflection inequality by james, which is of independent interest for the study of sequential decision making strategies.
arxiv:2012.10320
we study bottlebrush macromolecules in a good solvent by small - angle neutron scattering ( sans ), static light scattering ( sls ), and dynamic light scattering ( dls ). these polymers consist of a linear backbone to which long side chains are chemically grafted. the backbone contains about 1600 monomer units ( weight average ) and every second monomer unit carries side - chains with ca. 60 monomer units. the sls - and sans data extrapolated to infinite dilution lead to the form factor of the polymer that can be described in terms of a worm - like chain with a contour length of 380 nm and a persistence length of 17. 5 nm. an analysis of the dls data confirms these model parameters. the scattering intensities taken at finite concentration can be modeled using the polymer reference interaction site model. it reveals a softening of the bottlebrush polymers caused by their mutual interaction. we demonstrate that the persistence decreases from 17. 5 nm down to 5 nm upon increasing the concentration from dilute solution to the highest concentration 40. 59 g / l under consideration. the observed softening of the chains is comparable to the theoretically predicted decrease of the electrostatic persistence length of linear polyelectrolyte chains at finite concentrations.
arxiv:0705.3329
a complex structural, magnetic and electric transport investigation shows that the cr doping on mn sites in the a - type antiferromagnet pr0. 44sr0. 56mno3 provokes a non - uniform magnetic state with coexisting fm and afm regions. irrespective of the ratio of magnetic phases, the samples exhibit a non - metallic behavior of resistivity and thermopower, pointing to the nanoscopic nature of the phase separation. a particularly large magnetoresistance encountered in a broad range of temperatures for samples with cr doping of 4 - 6 % supports such an idea.
arxiv:cond-mat/0212517
engineering effective electronic parameters is a major focus in condensed matter physics. their dynamical modulation opens the possibility of creating and controlling physical properties in systems driven out of equilibrium. in this work, we demonstrate that the hubbard $ u $, the on - site coulomb repulsion in strongly correlated materials, can be modified on femtosecond time scales by a strong nonresonant laser excitation in the prototypical charge transfer insulator nio. using our recently developed time - dependent density functional theory plus self - consistent $ u $ ( tddft + u ) method, we demonstrate the importance of a dynamically modulated $ u $ in the description of the high - harmonic generation of nio. our study opens the door to novel ways of modifying effective interactions in strongly correlated materials via laser driving, which may lead to new control paradigms for field - induced phase transitions and perhaps laser - induced mott insulation in charge - transfer materials.
arxiv:1712.01067
we propose a compact perturbative approach that reveals the physical origin of the singularity occurring in the density dependence of correlation energy : like fermions, elementary bosons have a singular correlation energy which comes from the accumulation, through feynman " bubble " diagrams, of the same non - zero momentum transfer excitations from the free particle ground state, that is, the fermi sea for fermions and the bose - einstein condensate for bosons. this understanding paves the way toward deriving the correlation energy of composite bosons like atomic dimers and semiconductor excitons, by suggesting shiva diagrams that are similar to feynman " bubble " diagrams ; the previous elementary boson approaches hide this physics and are therefore ill - suited to such derivations.
arxiv:1508.05564
we review the available constraints on the low energy supersymmetry. the bulk of the electroweak data is well screened from supersymmetric loop effects, due to the structure of the theory, even with superpartners generically light, $ { \ cal o } ( m _ z ) $. the only exceptions are the left - handed squarks of the third generation, which have to be $ \ simgt { \ cal o } ( 300 $ gev ) to maintain the success of the sm in describing the precision data. the other superpartners can still be light, at their present experimental mass limits. as an application of the derived constraints ( supplemented by the requirement of " naturalness " ) we discuss the predictions for the mass of the lighter mssm higgs boson in specific scenarios of supersymmetry breaking.
arxiv:hep-ph/9711470
quantum systems, such as a single - mode cavity field coupled to a thermal bath, typically experience destructive effects due to interactions with their noisy environment. when the bath combines both thermal fluctuations and a nonclassical feature like quadrature squeezing, forming a squeezed thermal reservoir, the system ' s behaviour can change substantially. in this work, we study the evolution of the cavity field in this generalized environment using an alternative phase - space approach based on the glauber - sudarshan $ p $ - function. we derive a compact analytical expression for the time - dependent $ p $ - function for arbitrary initial cavity field states and demonstrate its utility through specific examples. additionally, we obtain analytical expressions, as a function of time, for some statistical properties of the cavity field, as well as for the nonclassical depth, $ \ tau _ m $, a nonclassicality measure calculated directly from the $ p $ - function.
arxiv:2504.18763
= = sweden = = like all eu member states, sweden follows the bologna process. the master of science academic degree has, like in germany, recently been introduced in sweden. students studying master of science in engineering programs are awarded both the english master of science degree and the swedish equivalent " teknologisk masterexamen ", whilst " civilingenjor " denotes an education at least five years long. = = syria = = the master of science is a degree that can be studied only in public universities. the program is usually 2 years, but it can be extended to 3 or 4 years. the student is required to hold a specific bachelor ' s degree to attend a specific master of science degree program. the master of science is mostly a research degree, except for some types of programs held with cooperation of foreign universities. the student typically attends courses in the first year of the program and should then prepare a research thesis. publishing two research papers is recommended and will increase the final evaluation grade. = = united kingdom = = the master of science ( msc ) is typically a taught postgraduate degree, involving lectures, examinations and a project dissertation ( normally taking up a third of the program ). master ' s programs usually involve a minimum of 1 year of full - time study ( 180 uk credits, of which 150 must be at master ' s level ) and sometimes up to 2 years of full - time study ( or the equivalent period part - time ). taught master ' s degrees are normally classified into pass, merit and distinction ( although some universities do not give merit ). some universities also offer msc by research programs, where a longer project or set of projects is undertaken full - time ; master ' s degrees by research are normally pass / fail, although some universities may offer a distinction. the more recent master in science ( msci or m. sci. 
) degree ( master of natural sciences at the university of cambridge ), is an undergraduate ( ug ) level integrated master ' s degree offered by uk institutions since the 1990s. it is offered as a first degree with the first three ( four in scotland ) years similar to a bsc course and a final year ( 120 uk credits ) at master ' s level, including a dissertation. the final msci qualification is thus at the same level as a traditional msc. = = united states = = the master of science ( magister scientiæ ) degree is normally a full - time two - year degree often abbreviated " ms " or " m. s. " it is the
https://en.wikipedia.org/wiki/Master_of_Science
an algorithmic framework to compute sparse or minimal - tv solutions of linear systems is proposed. the framework includes both the kaczmarz method and the linearized bregman method as special cases and also several new methods such as a sparse kaczmarz solver. the algorithmic framework has a variety of applications and is especially useful for problems in which the linear measurements are slow and expensive to obtain. we present examples for online compressed sensing, tv tomographic reconstruction and radio interferometry.
arxiv:1403.7543
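the sparse kaczmarz solver mentioned above can be sketched in a few lines : run the classical kaczmarz update on an auxiliary variable and obtain the sparse iterate by soft thresholding. this is a generic randomized formulation under assumed step sizes, not necessarily the exact algorithm of the paper.

```python
import numpy as np

def soft_threshold(z, lam):
    # componentwise shrinkage: sign(z) * max(|z| - lam, 0)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_kaczmarz(A, b, lam=0.05, sweeps=1000, seed=0):
    """Randomized sparse Kaczmarz for A x = b: the classical projection
    step is applied to an auxiliary variable z, and the iterate x is
    recovered by soft thresholding; lam = 0 recovers plain Kaczmarz."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    z = np.zeros(n)
    x = np.zeros(n)
    row_norms = np.einsum('ij,ij->i', A, A)  # squared norm of each row
    for _ in range(sweeps * m):
        i = rng.integers(m)                  # pick a random row
        z += ((b[i] - A[i] @ x) / row_norms[i]) * A[i]
        x = soft_threshold(z, lam)
    return x
```

for a consistent system the residual of the iterates tends to zero, and the soft thresholding biases the limit toward a sparse, 1 - norm regularized solution ; only one row of a is touched per step, which is what makes the method attractive when measurements are slow and expensive to obtain.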
in this study, a graph - computing based grid splitting detection algorithm is proposed for contingency analysis in a graph - based ems ( energy management system ). the graph model of a power system is established by storing its bus - branch information into the corresponding vertex objects and edge objects of the graph database. a numerical comparison to an up - to - date serial computing algorithm is also presented. online tests on a real power system of china state grid with 2752 buses and 3290 branches show that a 6 times speedup can be achieved, which lays a good foundation for advanced contingency analysis.
arxiv:1904.03587
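grid splitting detection amounts to a connected - components check on the bus - branch graph after removing the outaged branches. a minimal serial sketch follows ; the bus / branch data model is hypothetical and none of the graph - database machinery from the paper is used.

```python
from collections import defaultdict, deque

def count_islands(buses, branches, outages=frozenset()):
    """Count electrically separate islands: connected components of the
    bus-branch graph once the outaged branch indices are removed."""
    adj = defaultdict(list)
    for idx, (u, v) in enumerate(branches):
        if idx not in outages:
            adj[u].append(v)
            adj[v].append(u)
    seen, islands = set(), 0
    for start in buses:
        if start in seen:
            continue
        islands += 1
        queue = deque([start])     # breadth-first sweep from an unseen bus
        seen.add(start)
        while queue:
            node = queue.popleft()
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return islands

def splits_grid(buses, branches, outage_idx):
    # a contingency splits the grid if it increases the island count
    base = count_islands(buses, branches)
    return count_islands(buses, branches, frozenset(outage_idx)) > base
```

for large grids, graph - computing platforms parallelize this traversal across vertex objects, which is where the reported speedup comes from.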
we propose a conceptually new method for solving nonlinear inverse scattering problems ( isps ) such as are commonly encountered in tomographic ultrasound imaging, seismology and other applications. the method is inspired by the theory of nonlocality of physical interactions and utilizes the relevant formalism. we formulate the isp as a problem whose goal is to determine an unknown interaction potential $ v $ from external scattering data. although we seek a local ( diagonally - dominated ) $ v $ as the solution to the posed problem, we allow $ v $ to be nonlocal at the intermediate stages of iterations. this allows us to utilize the one - to - one correspondence between $ v $ and the t - matrix of the problem, $ t $. here it is important to realize that not every $ t $ corresponds to a diagonal $ v $ and we, therefore, relax the usual condition of strict diagonality ( locality ) of $ v $. an iterative algorithm is proposed in which we seek $ t $ that is ( i ) compatible with the measured scattering data and ( ii ) corresponds to an interaction potential $ v $ that is as diagonally - dominated as possible. we refer to this algorithm as to the data - compatible t - matrix completion ( dctmc ). this paper is part i in a two - part series and contains theory only. numerical examples of image reconstruction in a strongly nonlinear regime are given in part ii. the method described in this paper is particularly well suited for very large data sets that become increasingly available with the use of modern measurement techniques and instrumentation.
arxiv:1401.3319
the automl task consists of selecting the proper algorithm in a machine learning portfolio, and its hyperparameter values, in order to deliver the best performance on the dataset at hand. mosaic, a monte - carlo tree search ( mcts ) based approach, is presented to handle the automl hybrid structural and parametric expensive black - box optimization problem. extensive empirical studies are conducted to independently assess and compare : i ) the optimization processes based on bayesian optimization or mcts ; ii ) its warm - start initialization ; iii ) the ensembling of the solutions gathered along the search. mosaic is assessed on the openml 100 benchmark and the scikit - learn portfolio, with statistically significant gains over auto - sklearn, winner of former international automl challenges.
arxiv:1906.00170
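the selection step of mcts is a bandit rule ; the sketch below shows the standard ucb1 score. mosaic may use a different exploration constant or a refined variant, so treat this as the generic formulation only.

```python
import math

def ucb1(value_sum, visits, parent_visits, c=1.4):
    """UCB1 score balancing exploitation (mean reward) against
    exploration (rarely tried branches); unvisited children are
    always explored first."""
    if visits == 0:
        return math.inf
    return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits, c=1.4):
    # children: list of (value_sum, visits); return index of best score
    scores = [ucb1(v, n, parent_visits, c) for v, n in children]
    return scores.index(max(scores))
```

in an automl setting the "reward" of a leaf would be the validation score of the pipeline it encodes, which is exactly the expensive black - box evaluation the tree search tries to spend wisely.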
be applied to ; but, as the science that studies a particular common aspect of each of those subjects ( they all use scarce resources to attain a sought - after end ). some subsequent comments criticised the definition as overly broad in failing to limit its subject matter to analysis of markets. from the 1960s, however, such comments abated as the economic theory of maximizing behaviour and rational - choice modelling expanded the domain of the subject to areas previously treated in other fields. there are other criticisms as well, such as in scarcity not accounting for the macroeconomics of high unemployment. gary becker, a contributor to the expansion of economics into new areas, described the approach he favoured as " combin [ ing the ] assumptions of maximizing behaviour, stable preferences, and market equilibrium, used relentlessly and unflinchingly. " one commentary characterises the remark as making economics an approach rather than a subject matter but with great specificity as to the " choice process and the type of social interaction that [ such ] analysis involves. " the same source reviews a range of definitions included in principles of economics textbooks and concludes that the lack of agreement need not affect the subject - matter that the texts treat. among economists more generally, it argues that a particular definition presented may reflect the direction toward which the author believes economics is evolving, or should evolve. many economists including nobel prize winners james m. buchanan and ronald coase reject the method - based definition of robbins and continue to prefer definitions like those of say, in terms of its subject matter. ha - joon chang has for example argued that the definition of robbins would make economics very peculiar because all other sciences define themselves in terms of the area of inquiry or object of inquiry rather than the methodology. 
in the biology department, it is not said that all biology should be studied with dna analysis. people study living organisms in many different ways, so some people will perform dna analysis, others might analyse anatomy, and still others might build game theoretic models of animal behaviour. but they are all called biology because they all study living organisms. according to ha joon chang, this view that the economy can and should be studied in only one way ( for example by studying only rational choices ), and going even one step further and basically redefining economics as a theory of everything, is peculiar. = = history of economic thought = = = = = from antiquity through the physiocrats = = = questions regarding distribution of resources are found throughout the writings of the boeotian poet hesiod
https://en.wikipedia.org/wiki/Economics
the wave turbulence theory predicts that a conservative system of nonlinear waves can exhibit a process of condensation, which originates in the singularity of the rayleigh - jeans equilibrium distribution of classical waves. considering light propagation in a multimode fiber, we show that light condensation is driven by an energy flow toward the higher - order modes, and a bi - directional redistribution of the wave - action ( or power ) to the fundamental mode and to higher - order modes. the analysis of the near - field intensity distribution provides experimental evidence of this mechanism. the kinetic equation also shows that the wave - action and energy flows can be inverted through a thermalization toward a negative temperature equilibrium state, in which the high - order modes are more populated than low - order modes. in addition, a bogoliubov stability analysis reveals that the condensate state is stable.
arxiv:2012.02235
given two jordan curves in a riemannian manifold, a minimal surface of annulus type bounded by these curves is described as the harmonic extension of a critical point of some functional ( the dirichlet integral ) in a certain space of boundary parametrizations. the $ h ^ { 2, 2 } $ - regularity of the minimal surface of annulus type will be proved by applying the critical points theory and morrey ' s growth condition.
arxiv:math/0603610
inertial fluid flow deformation around pillars in a microchannel is a new method for controlling fluid flow. sequences of pillars have been shown to produce a rich phase space with a wide variety of flow transformations. previous work has successfully demonstrated manual design of pillar sequences to achieve desired transformations of the flow cross - section, with experimental validation. however, such a method is not ideal for seeking out complex sculpted shapes as the search space quickly becomes too large for efficient manual discovery. we explore fast, automated optimization methods to solve this problem. we formulate the inertial flow physics in microchannels with different micropillar configurations as a set of state transition matrix operations. these state transition matrices are constructed from experimentally validated streamtraces. this facilitates modeling the effect of a sequence of micropillars as nested matrix - matrix products, which have very efficient numerical implementations. with this new forward model, arbitrary micropillar sequences can be rapidly simulated with various inlet configurations, allowing optimization routines quick access to a large search space. we integrate this framework with the genetic algorithm and showcase its applicability by designing micropillar sequences for various useful transformations. we computationally discover micropillar sequences for complex transformations that are substantially shorter than manually designed sequences. we also determine sequences for novel transformations that were difficult to manually design. finally, we experimentally validate these computational designs by fabricating devices and comparing predictions with the results from confocal microscopy.
arxiv:1506.01111
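the forward model described above, a pillar sequence acting as nested matrix products on a discretized flow cross - section, can be sketched with toy transition matrices ; the real matrices are built from experimentally validated streamtraces, and the exhaustive search below is a stand - in for the genetic algorithm used in the paper.

```python
import numpy as np
from itertools import product

def flow_after_sequence(transition_mats, pillar_sequence, inlet):
    """Outlet state of the discretized cross-section after applying the
    pillar sequence in order: outlet = T_k ... T_2 T_1 @ inlet."""
    state = np.asarray(inlet, dtype=float)
    for pillar in pillar_sequence:
        state = transition_mats[pillar] @ state
    return state

def best_sequence(transition_mats, target, inlet, length):
    """Exhaustively search all pillar sequences of a given length for
    the one whose outlet best matches the target (only feasible for
    short sequences; the paper uses a genetic algorithm instead)."""
    best, best_err = None, np.inf
    for seq in product(range(len(transition_mats)), repeat=length):
        out = flow_after_sequence(transition_mats, seq, inlet)
        err = np.linalg.norm(out - target)
        if err < best_err:
            best, best_err = seq, err
    return best, best_err
```

because each candidate sequence costs only a few matrix - vector products, an optimizer can sweep a very large design space far faster than manual discovery.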
this paper presents an implementation of multilayer feed - forward neural networks ( nn ) to optimize cmos analog circuits. neural network computational modules have recently gained acceptance as an unorthodox and useful tool for modeling and design. a neural network can be trained to model the behavior of active or passive circuit components, and a well - trained network produces more accurate outcomes depending on its learning capability. neural network models can replace empirical modeling solutions limited by range and accuracy. [ 2 ] they are also easy to obtain for new circuits or devices, where they can replace analytical methods, and they can replace numerical modeling methods, which are computationally expensive. [ 2 ] [ 10 ] [ 20 ]. the proposed implementation is aimed at reducing resource requirements without much compromise on speed. the nn ensures proper functioning by assigning the appropriate inputs, weights, biases, and excitation function of the layer that is currently being computed. the concept used is shown to be very effective in reducing resource requirements and enhancing speed.
arxiv:1212.0215
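the forward pass of the multilayer feed - forward network described above takes only a few lines ; this is a generic formulation, and the layer sizes, weights and excitation functions for the cmos sizing task are application - specific assumptions left to training.

```python
import numpy as np

def forward(x, weights, biases, act=np.tanh):
    """Forward pass of a multilayer feed-forward network: each hidden
    layer applies an affine map followed by the excitation function;
    the output layer is left linear so it can take any real value."""
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        a = act(W @ a + b)
    return weights[-1] @ a + biases[-1]
```

computing the layer currently needed with its own weights, biases and excitation function, as the abstract describes, corresponds to one iteration of the loop above being evaluated at a time.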
style transfer has been an important topic both in computer vision and graphics. since the seminal work of gatys et al. first demonstrated the power of stylization through optimization in the deep feature space, quite a few approaches have achieved real - time arbitrary style transfer with straightforward statistic matching techniques. in this work, our key observation is that only considering features in the input style image for the global deep feature statistic matching or local patch swap may not always ensure a satisfactory style transfer ; see e. g., figure 1. instead, we propose a novel transfer framework, efanet, that aims to jointly analyze and better align exchangeable features extracted from content and style image pair. in this way, the style features from the style image seek for the best compatibility with the content information in the content image, leading to more structured stylization results. in addition, a new whitening loss is developed for purifying the computed content features and better fusion with styles in feature space. qualitative and quantitative experiments demonstrate the advantages of our approach.
arxiv:1811.10352
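the "straightforward statistic matching" referred to above is typified by adaptive instance normalization (adain); a minimal sketch of that baseline step (not efanet itself), assuming channel - first feature maps:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: align the per-channel mean and
    std of the content feature map to those of the style feature map.
    Arrays are (channels, height, width); this illustrates the
    statistic-matching baseline, not the proposed EFANet method."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_sd = content.std(axis=(1, 2), keepdims=True) + eps
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_sd = style.std(axis=(1, 2), keepdims=True) + eps
    return s_sd * (content - c_mu) / c_sd + s_mu
```

after this step the content features carry the style's first- and second-order statistics exactly, which is what the abstract argues may not always suffice.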
in this note, we prove that every automorphism of a rational manifold which is obtained from $ \ bbb { p } ^ k $ by a finite sequence of blow - ups along smooth centers of dimension at most r with k > 2r + 2 has zero topological entropy.
arxiv:1210.4651
we prove a rellich - vekua type theorem for schr \ " { o } dinger operators with exponentially decreasing potentials on a class of lattices including square, triangular, hexagonal lattices and their ladders. we also discuss the unique continuation theorem and the non - existence of eigenvalues embedded in the continuous spectrum.
arxiv:2411.03577
we show : the floer homology over the novikov ring of ( nonexact! ) rational lagrangians in a ( nonexact! ) integral symplectic manifold can be computed in terms of exact lagrangians in an exact filling of the prequantization bundle. as a consequence, we give a fukaya - sheaf correspondence for rational ( nonexact! ) lagrangians in weinstein manifolds, as conjectured by ike and the first - named author. we also show that bounding cochains for immersed rational lagrangians transform naturally under legendrian isotopy, as conjectured by akaho and joyce. as an illustration, we show that quantum cohomology of the complex projective line - - which requires the counting of one holomorphic sphere - - can be recovered from purely sheaf - theoretic calculations.
arxiv:2406.08852
we show how to construct, starting from a quasi - hopf ( super ) algebra, central elements or casimir invariants. we show that these central elements are invariant under quasi - hopf twistings. as a consequence, the elliptic quantum ( super ) groups, which arise from twisting the normal quantum ( super ) groups, have the same casimir invariants as the corresponding quantum ( super ) groups.
arxiv:math/9811052
postural stability is linked to vision in everyone, since when the eyes are closed stability decreases by a factor of 2 or more. however, in persons with dyslexia postural stability is often deficient even when the eyes are open, since they show deficits in motor as well as specific cognitive functions. in dyslexics we have shown that an abnormal symmetry between retinal maxwell ' s centroid outlines occurs, perturbing the interhemispheric connections. we have also shown that pulse - width modulated lighting can compensate for this lack of asymmetry, improving reading skills. as postural stability and vision are correlated, one may wonder if the excess postural instability recorded in a young adult with dyslexia can also be reduced by a pulse - width modulated light controlling hebbian synaptic plasticity. using a force platform we compared the postural responses of an observer without dyslexia with those of a subject with dyslexia, by measuring their respective standing postures with eyes open looking at a target in a room with either continuous or pulsed lighting. there was no effect of changing the lighting conditions on the postural control of the subject without dyslexia. however, the postural stability of the subject with dyslexia, which was impaired under continuous light, was greatly improved when an 80 hz pulsed - light frequency was used. importantly, the excursions of the surface area of the center of pressure on the force platform were reduced by a factor of 2. 3.
arxiv:2004.02702
the generalised quasienergy states are introduced as eigenstates of the new integral of motion for periodically and nonperiodically kicked quantum systems. the photon distribution function of polymode generalised correlated light expressed in terms of multivariable hermite polynomials is discussed and the relation of its properties to schrodinger uncertainty relation is given.
arxiv:hep-th/9312061
hetyei recently introduced a hyperplane arrangement ( called the homogenized linial arrangement ) and used the finite field method of athanasiadis to show that its number of regions is a median genocchi number. these numbers count a class of permutations known as dumont derangements. here, we take a different approach, which makes direct use of zaslavsky ' s formula relating the intersection lattice of this arrangement to the number of regions. we refine hetyei ' s result by obtaining a combinatorial interpretation of the m \ " obius function of this lattice in terms of variants of the dumont permutations. this enables us to derive a formula for the generating function of the characteristic polynomial of the arrangement. the m \ " obius invariant of the lattice turns out to be a ( nonmedian ) genocchi number. our techniques also yield type b, and more generally dowling arrangement, analogs of these results.
arxiv:1811.06882
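for reference, zaslavsky's formula mentioned above relates the characteristic polynomial, defined via the möbius function of the intersection lattice, to the region counts:

```latex
% Zaslavsky's theorem for a real arrangement \mathcal{A} in \mathbb{R}^n:
% r = number of regions, b = number of bounded regions.
\chi_{\mathcal{A}}(t) = \sum_{x \in L(\mathcal{A})} \mu(\hat{0}, x)\, t^{\dim x},
\qquad
r(\mathcal{A}) = (-1)^{n}\, \chi_{\mathcal{A}}(-1),
\qquad
b(\mathcal{A}) = (-1)^{\operatorname{rank}\mathcal{A}}\, \chi_{\mathcal{A}}(1).
```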
active galactic nuclei ( agn ) that show strong rest - frame optical / uv variability in their blue continuum and broad line emission are classified as " changing - look " agn, or at higher luminosities changing look quasars ( clqs ). these surprisingly large and sometimes rapid transitions challenge accepted models of quasar physics and duty cycles, offer several new avenues for study of quasar host galaxies, and open a wider interpretation of the cause of differences between broad and narrow line agn. to better characterize extreme quasar variability, we present follow - up spectroscopy as part of a comprehensive search for clqs across the full sdss footprint using spectroscopically confirmed quasars from the sdss dr7 catalog. our primary selection requires large - amplitude ( | \ delta g | > 1 mag, | \ delta r | > 0. 5 mag ) variability over any of the available time baselines probed by the sdss and pan - starrs 1 surveys. we employ photometry from the catalina sky survey to verify variability behavior in clq candidates where available, and confirm clqs using optical spectroscopy from the william herschel, mmt, magellan, and palomar telescopes. for our adopted s / n threshold on variability of broad h \ beta emission, we find 17 new clqs, yielding a confirmation rate of > ~ 20 %. these candidates are at lower eddington ratio relative to the overall quasar population which supports a disk - wind model for the broad line region. based on our sample, the clq fraction increases from 10 % to roughly half as the continuum flux ratio between repeat spectra at 3420 angstroms increases from 1. 5 to 6. we release a catalog of over 200 highly variable candidates to facilitate future clq searches.
arxiv:1810.00087
the properties of the integral of motion and the evolution of the effective radius of the light beam are analyzed in the framework of the stationary model of the nonlinear schrodinger equation describing filamentation. within the framework of such a model, it is shown that filamentation is limited only by dissipative mechanisms.
arxiv:1801.06629
codes with specific characteristics are more exposed to security vulnerabilities. studies have revealed that codes that do not adhere to best practices are more challenging to verify and maintain, increasing the likelihood of unnoticed or unintentionally introduced vulnerabilities. given the crucial role of smart contracts in blockchain systems, ensuring their security and conducting thorough vulnerability analysis is critical. this study investigates the use of code complexity metrics as indicators of vulnerable code in solidity smart contracts. we highlight the significance of complexity metrics as valuable complementary features for vulnerability assessment and provide insights into the individual power of each metric. by analyzing 21 complexity metrics, we explored their interrelation, association with vulnerability, discriminative power, and mean values in vulnerable versus neutral codes. the results revealed some high correlations and potential redundancies among certain metrics, but weak correlations between each independent metric and vulnerability. nevertheless, we found that all metrics can effectively discriminate between vulnerable and neutral codes, and most complexity metrics, except for three, exhibited higher values in vulnerable codes.
arxiv:2411.17343
we propose a new approach to studying hyperbolic kac - moody algebras, focussing on the rank - 3 algebra $ \ mathfrak { f } $ first investigated by feingold and frenkel. our approach is based on the concrete realization of this lie algebra in terms of a hilbert space of transverse and longitudinal physical string states, which are expressed in a basis using ddf operators. when decomposed under its affine subalgebra $ a _ 1 ^ { ( 1 ) } $, the algebra $ \ mathfrak { f } $ decomposes into an infinite sum of affine representation spaces of $ a _ 1 ^ { ( 1 ) } $ for all levels $ \ ell \ in \ mathbb { z } $. for $ | \ ell | > 1 $ there appear in addition coset virasoro representations for all minimal models of central charge $ c < 1 $, but the different level - $ \ ell $ sectors of $ \ mathfrak { f } $ do not form proper representations of these because they are incompletely realized in $ \ mathfrak { f } $. to get around this problem we propose to nevertheless exploit the coset virasoro algebra for each level by identifying for each level a ( for $ | \ ell | \ geq 3 $ infinite ) set of ` virasoro ground states ' that are not necessarily elements of $ \ mathfrak { f } $ ( in which case we refer to them as ` virtual ' ), but from which the level - $ \ ell $ sectors of $ \ mathfrak { f } $ can be fully generated by the joint action of affine and coset virasoro raising operators. we conjecture ( and present partial evidence ) that the virasoro ground states for $ | \ ell | \ geq 3 $ in turn can be generated from a finite set of ` maximal ground states ' by the additional action of the ` spectator ' coset virasoro raising operators present for all levels $ | \ ell | > 2 $. our results hint at an intriguing but so far elusive secret behind einstein ' s theory of gravity, with possibly important implications for quantum cosmology.
arxiv:2411.18754
neutron - capture elements represent an important nucleosynthetic channel in the study of the galactic chemical evolution of stellar populations. for stellar populations behind significant extinction, such as those in the galactic center and along the galactic plane, abundance analyses based on near - ir spectra are necessary. previously, spectral lines from the neutron - capture elements such as copper ( cu ), cerium ( ce ), neodymium ( nd ), and ytterbium ( yb ) have been identified in the h band, while yttrium ( y ) lines have been identified in the k band. due to the scarcity of spectral lines from neutron - capture elements in the near - ir, the addition of useful spectral lines from other neutron - capture elements is highly desirable. the aim of this work is to identify and characterise a spectral line suitable for abundance determination from the most commonly used s - process element, namely barium. we observed near - ir spectra of 37 m giants in the solar neighbourhood at high s / n and high spectral resolution using the igrins spectrometer on the gemini south telescope. using a manual spectral synthesis method, we determined the stellar parameters for these stars and derived the barium abundance from the ba line ( 6s5d $ ^ 3 $ d $ _ 2 \ rightarrow $ 6s6p $ ^ 3 $ p $ ^ o _ 2 $ ) at $ \ lambda _ \ mathrm { air } = 23 \, 253. 56 \, $ \ aa in the k band. we demonstrate that the ba line in the k band at 2. 33 $ \ mu $ m ( $ \ lambda $ 23253. 56 \ aa ) is useful for abundance analysis from spectra of m giants. the line becomes progressively weaker at higher temperatures and is only useful in m giants and the coolest k giants at supersolar metallicities. we can now add ba to the trends of the heavy elements cu, zn, y, ce, nd, and yb, which can be retrieved from high - resolution h - and k - band spectra. this opens up the study of nucleosynthetic channels, including the s - process and the r - process, in dust - obscured populations such as the galactic center.
arxiv:2408.12971
radial velocity surveys are examined in terms of eigenmode analysis within the framework of cdm - like family of models. rich surveys such as mark iii and sfi, which consist of more than $ 10 ^ { 3 } $ radial velocities, are found to have a few tens of modes that are not noise dominated. poor surveys, which have only a few tens of radial velocities, are noise dominated across the eigenmode spectrum. in particular, the bulk velocity of such surveys has been found to be dominated by the more noisy modes. the mark iii and sfi are well fitted by a tilted flat cdm model found by a maximum likelihood analysis and a $ \ chi ^ { 2 } $ statistics. however, a mode - by - mode inspection shows that a substantial fraction of the modes lie outside the $ 90 % $ confidence level. this implies that although globally the cdm - like family of models seems to be consistent with radial velocity surveys, in detail it does not. this might indicate a need for a revised power spectrum or for some non - trivial biasing scheme.
arxiv:astro-ph/9909158
vehicular cloud computing ( vcc ) is a new technological shift which exploits the computation and storage resources on vehicles for computational service provisioning. spare on - board resources are pooled by a vcc operator, e. g. a roadside unit, to complete task requests using the vehicle - as - a - resource framework. in this paper, we investigate timely service provisioning for deadline - constrained tasks in vcc systems by leveraging the task replication technique ( i. e., allowing one task to be executed by several server vehicles ). a learning - based algorithm, called date - v ( deadline - aware task replication for vehicular cloud ), is proposed to address the special issues in vcc systems including uncertainty of vehicle movements, volatile vehicle members, and large vehicle population. the proposed algorithm is developed based on a novel contextual - combinatorial multi - armed bandit ( cc - mab ) learning framework. date - v is ` contextual ' because it utilizes side information ( context ) of vehicles and tasks to infer the completion probability of a task replication under random vehicle movements. date - v is ` combinatorial ' because it aims to replicate the received task and send the task replications to multiple server vehicles to guarantee the service timeliness. we rigorously prove that our learning algorithm achieves a sublinear regret bound compared to an oracle algorithm that knows the exact completion probability of any task replications. simulations are carried out based on real - world vehicle movement traces and the results show that date - v significantly outperforms benchmark solutions.
arxiv:1812.04575
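the contextual - combinatorial bandit idea behind date - v can be sketched as follows; this toy version assumes discrete contexts and a plain ucb index, whereas the paper's cc - mab framework partitions a continuous context space and proves a sublinear regret bound. the class name and parameters are illustrative, not from the paper.

```python
import math
from collections import defaultdict

class CCMAB:
    """Toy contextual-combinatorial UCB: each round, pick k arms
    (server vehicles) for a task, keeping per-(arm, context)
    completion estimates.  Contexts are assumed discrete here."""
    def __init__(self, k):
        self.k = k
        self.counts = defaultdict(int)   # (arm, ctx) -> times played
        self.succ = defaultdict(int)     # (arm, ctx) -> completions
        self.t = 0

    def select(self, arms, ctx):
        """Return the k arms with the highest UCB index for ctx."""
        self.t += 1
        def ucb(a):
            n = self.counts[(a, ctx)]
            if n == 0:
                return float("inf")      # force exploration first
            mean = self.succ[(a, ctx)] / n
            return mean + math.sqrt(2 * math.log(self.t) / n)
        return sorted(arms, key=ucb, reverse=True)[: self.k]

    def update(self, arm, ctx, completed):
        """Record whether the replication on `arm` met the deadline."""
        self.counts[(arm, ctx)] += 1
        self.succ[(arm, ctx)] += int(completed)
```

replicating one task across the k selected arms is what makes the problem combinatorial; the context (vehicle and task side information) is what lets estimates generalize across rounds.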
in order to check the validity of auxiliary field method in the nambu - - jona - lasinio model, the one - loop ( = quantum ) effects of auxiliary fields to the gap equation are considered with n - component fermion models in 4 and 3 dimensions. n is not assumed so large but regarded as a loop expansion parameter. to overcome infrared divergences caused by the nambu - goldstone bosons, an intrinsic fermion mass is assumed. it is shown that the loop expansion can be justified by this intrinsic mass whose lower limit is also given. it is found that due to quantum effects, chiral symmetry breaking ( $ \ chi $ sb ) is restored in d = 4 and d = 3 when the four - fermi coupling is large. however, $ \ chi $ sb is enhanced in a small coupling region in d = 3.
arxiv:hep-th/0306008
we study the mass distribution of galaxy clusters in milgromian dynamics, or modified newtonian dynamics ( mond ). we focus on five galaxy clusters from the x - cop sample, for which high - quality data are available on both the baryonic mass distribution ( gas and stars ) and internal dynamics ( from the hydrostatic equilibrium of hot gas and the sunyaev - zeldovich effect ). we confirm that galaxy clusters require additional ` missing matter ' in mond, although the required amount is drastically reduced with respect to the non - baryonic dark matter in the context of newtonian dynamics. we studied the spatial distribution of the missing matter by fitting the acceleration profiles of the clusters with a bayesian method, finding that a physical density profile with an inner core and an outer $ r ^ { - 4 } $ decline ( giving a finite total mass ) provide good fits within $ \ sim $ 1 mpc. at larger radii, the fit results are less satisfactory but the combination of the mond external field effect and hydrostatic bias ( quantified as 10 $ \ % $ - 40 $ \ % $ ) can play a key role. the missing mass must be more centrally concentrated than the intracluster medium ( icm ). for relaxed clusters ( a1795, a2029, a2142 ), the ratio of missing - to - visible mass is around $ 1 - 5 $ at $ r \ simeq200 - 300 $ kpc and decreases to $ 0. 4 - 1. 1 $ at $ r \ simeq2 - 3 $ mpc, showing that the total amount of missing mass is smaller than or comparable to the icm mass. for clusters with known merger signatures ( a644 and a2319 ), this global ratio increases up to $ \ sim $ 5 but may indicate out - of - equilibrium dynamics rather than actual missing mass. we discuss various possibilities regarding the nature of the extra mass, in particular ` missing baryons ' in the form of pressure - confined cold gas clouds with masses of $ < 10 ^ 5 $ m $ _ \ odot $ and sizes of $ < 50 $ pc.
arxiv:2405.08557
we develop a new formalism for the description of the condensates of cold fermi atoms whose speed of sound can be tuned with the aid of a narrow feshbach resonance. we use this to look for spontaneous phonon creation that mimics spontaneous particle creation in curved space - time in friedmann - robertson - walker and other model universes.
arxiv:1205.0133
we investigate the thermodynamic consistency of the master equation description of heat transport through an optomechanical system attached to two heat baths, one optical and one mechanical. we employ three different master equations to describe this scenario : ( i ) the standard master equation used in optomechanics, where each bath acts only on the resonator that it is physically connected to ; ( ii ) the so - called dressed - state master equation, where the mechanical bath acts on the global system ; and ( iii ) what we call the global master equation, where both baths are treated non - locally and affect both the optical and mechanical subsystems. our main contribution is to demonstrate that, under certain conditions including when the optomechanical coupling strength is weak, the second law of thermodynamics is violated by the first two of these pictures. in order to have a thermodynamically consistent description of an optomechanical system, therefore, one has to employ a global description of the effect of the baths on the system.
arxiv:1806.08175
the pulsar search collaboratory ( psc ) engages high school students and teachers in analyzing real data from the robert c. byrd green bank telescope for the purpose of discovering exotic stars called pulsars. these cosmic clocks can be used as a galactic - scale detector of gravitational waves, ripples in space - time that have recently been directly detected from the mergers of stellar - mass black holes. through immersing students in an authentic, positive learning environment to build a sense of belonging and competency, the goal of the psc is to promote students ' long - term interests in science and science careers. psc students have discovered 7 pulsars since the start of the psc in 2008. originally targeted at teachers and students in west virginia, over time the program has grown to 18 states. in a new effort to scale the psc nationally, the psc has developed an integrated online training program with both self - guided lectures and homework and real - time interactions with pulsar astronomers. now, any high school student can join in the exciting search for pulsars and the discovery of a new type of gravitational waves.
arxiv:1807.06059
we determine the baryon spectrum of 1 + 1 + 1 - flavor qcd in the presence of strong background magnetic fields using lattice simulations at physical quark masses for the first time. our results show a splitting within multiplets according to the electric charge of the baryons and reveal, in particular, a reduction of the nucleon masses for strong magnetic fields. this first - principles input is used to define constituent quark masses and is employed to set the free parameters of the polyakov loop - extended nambu - jona - lasinio ( pnjl ) model in a magnetic field - dependent manner. the so constructed model is shown to exhibit inverse magnetic catalysis at high temperatures and a reduction of the transition temperature as the magnetic field grows - in line with non - perturbative lattice results. this is contrary to the naive variant of this model, which gives incorrect results for this fundamental phase diagram. our findings demonstrate that the magnetic field dependence of the pnjl model can be reconciled with the lattice findings in a systematic way, employing solely zero - temperature first - principles input.
arxiv:1905.02103
precision studies for top quark physics are a cornerstone of the large hadron collider program. polarization, probed through decay kinematics, provides a unique tool to scrutinize the top quark across its various production modes and to explore potential new physics effects. however, the top quark most often decays hadronically, for which unambiguous identification of its decay products sensitive to top quark polarization is not possible. in this letter, we introduce a jet flavor tagging method to significantly improve spin analyzing power in hadronic decays, going beyond exclusive kinematic information employed in previous studies. we provide parametric estimates of the improvement from flavor tagging with any set of measured observables and demonstrate this in practice on simulated data using a graph neural network ( gnn ). we find that the spin analyzing power in hadronic decays can improve by approximately 20 % ( 40 % ) compared to the kinematic approach, assuming an efficiency of 0. 5 ( 0. 2 ) for the network.
arxiv:2407.01663
the effects of group and phase velocity mismatch are well - known in optical harmonic generation, but the non - degenerate cases remain unexplored. in this work we develop an analytic model which predicts velocity mismatch effects in non - degenerate triple sum - frequency mixing, tsf. we verify this model experimentally using two tunable, ultrafast, short - wave - ir lasers to demonstrate spectral fringes in the tsf output from a 500 $ \ mu $ m thick sapphire plate. we find the spectral dependence of the tsf depends strongly on both the phase velocity and the group velocity differences between the input and output fields. we define practical strategies for mitigating the impact of velocity mismatches.
arxiv:1709.10476
the possibility that the mass hierarchy is intimately associated with the compositeness level of the matter is proposed in supersymmetric gauge theory. this implies, for instance, that the preons constituting the top quark consist of ` ` prepreons ' ' bound by the same gauge force that makes the charm quark out of another set of preons. the exemplifying toy model illustrates how the hierarchy in the yukawa coupling constants in the up - quark sector is generated from the underlying gauge dynamics. it is also indicated that the incorporation of down - type quarks as elementary objects generically leads to unpleasant results. thus all the quarks as well as the leptons must also be regarded as composite in the present approach.
arxiv:hep-ph/9704329
currently, personal assistant systems, run on smartphones and use natural language interfaces. however, these systems rely mostly on the web for finding information. mobile and wearable devices can collect an enormous amount of contextual personal data such as sleep and physical activities. these information objects and their applications are known as quantified - self, mobile health or personal informatics, and they can be used to provide a deeper insight into our behavior. to our knowledge, existing personal assistant systems do not support all types of quantified - self queries. in response to this, we have undertaken a user study to analyze a set of " textual questions / queries " that users have used to search their quantified - self or mobile health data. through analyzing these questions, we have constructed a light - weight natural language based query interface, including a text parser algorithm and a user interface, to process the users ' queries that have been used for searching quantified - self information. this query interface has been designed to operate on small devices, i. e. smartwatches, as well as augmenting the personal assistant systems by allowing them to process end users ' natural language queries about their quantified - self data.
arxiv:1611.07139
physical billiards, where the moving particle is a hard sphere rather than a point as in standard mathematical billiards, were recently introduced. it has been shown that on the same billiard tables physical billiards may have totally different dynamics than mathematical billiards. this difference appears if the boundary of a billiard table has visible singularities ( internal corners if the billiard table is two - dimensional ), i. e. the particle may collide with these singular points. here, we consider the collision of a hard ball with a visible singular point and demonstrate that the motion of the smooth ball after collision with a visible singular point is indeed the one that was used in the studies of physical billiards. such a collision is equivalent to the elastic reflection of the hard ball ' s center off a sphere with its center at the singular point and the same radius as the radius of the moving particle.
arxiv:2008.05403
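the reflection rule described above (elastic reflection of the ball's center off a sphere centered at the singular point) amounts to reversing the velocity component along the center - to - corner normal; a minimal numpy sketch:

```python
import numpy as np

def reflect_at_corner(center, velocity, corner):
    """Elastic reflection of the ball's center off a virtual sphere
    centered at the visible singular point `corner`: the velocity
    component along the center-to-corner normal is reversed, the
    tangential component is unchanged."""
    n = center - corner
    n = n / np.linalg.norm(n)          # unit normal at the contact
    return velocity - 2 * np.dot(velocity, n) * n
```

for a head - on approach the velocity simply reverses, while a velocity tangent to the contact sphere passes through unchanged; speed is conserved in every case, as an elastic collision requires.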
a hypersurface formed of two null sheets, or " light fronts ", swept out by the future null normal geodesics emerging from a common spacelike 2 - disk can serve as a cauchy surface for a region of spacetime. already in the 1960s free ( unconstrained ) initial data for general relativity were found for such hypersurfaces. here an expression is obtained for the symplectic 2 - form of vacuum general relativity in terms of such free data. this can be done, even though variations of the geometry do not in general preserve the nullness of the initial hypersurface, because of the diffeomorphism gauge invariance of general relativity. the present expression for the symplectic 2 - form has been used previously to calculate the poisson brackets of the free data.
arxiv:1211.3880
we present results for $ \ beta $ - decay half - lives based on a new recipe for calculation of phase space factors recently introduced. our study includes $ fp $ - shell and heavier nuclei of experimental and astrophysical interests. the investigation of the kinematics of some $ \ beta $ - decay half - lives is presented, and new phase space factor values are compared with those obtained with previous theoretical approximations. accurate calculation of nuclear matrix elements is a pre - requisite for reliable computation of $ \ beta $ - decay half - lives and is not the subject of this paper. this paper explores if improvements in calculating the $ \ beta $ - decay half - lives can be obtained when using a given set of nuclear matrix elements and employing the new values of the phase space factors. although the largest uncertainty in half - lives computations come from the nuclear matrix elements, introduction of the new values of the phase space factors may improve the comparison with experiment. the new half - lives are systematically larger than previous calculations and may have interesting consequences for calculation of stellar rates.
arxiv:1812.06670
we have studied theoretically the weyl semimetals the point symmetry group of which has reflection planes and which contain equivalent valleys with opposite chiralities. these include the most frequently studied compounds, namely the transition metals monopnictides taas, nbas, tap, nbp, and also bi $ _ { 1 - x } $ sb $ _ x $ alloys. the circular photogalvanic current, which inverts its direction under reversal of the light circular polarization, has been calculated for the light absorption under direct optical transitions near the weyl points. in the studied materials, the total contribution of all the valleys to the photocurrent is nonzero only beyond the simple weyl model, namely, if the effective electron hamiltonian is extended to contain either an anisotropic spin - dependent linear contribution together with a spin - independent tilt or a spin - dependent contribution cubic in the electron wave vector $ \ bf { k } $. with allowance for the tilt of the energy dispersion cone in a weyl semimetal of the $ c _ { 4v } $ symmetry, the photogalvanic current is expressed in terms of the components of the second - rank symmetric tensor that determines the energy spectrum of the carriers near the weyl node ; at low temperature, this contribution to the photocurrent is generated within a certain limited frequency range $ \ delta $. the photocurrent due to the cubic corrections, in the optical absorption region, is proportional to the light frequency squared and generated both inside and outside the $ \ delta $ window.
arxiv:1905.12273
this paper presents the first slicing approach for probabilistic programs based on specifications. we show that when probabilistic programs are accompanied by their specifications in the form of pre - and post - condition, we can exploit this semantic information to produce specification - preserving slices strictly more precise than slices yielded by conventional techniques based on data / control dependency. to achieve this goal, our technique is based on the backward propagation of post - conditions via the greatest pre - expectation transformer - - the probabilistic counterpart of dijkstra weakest pre - condition transformer. the technique is termination - sensitive, allowing to preserve the partial as well as the total correctness of probabilistic programs w. r. t. their specifications. it is modular, featuring a local reasoning principle, and is formally proved correct. as fundamental technical ingredients of our technique, we design and prove sound verification condition generators for establishing the partial and total correctness of probabilistic programs, which are of interest on their own and can be exploited elsewhere for other purposes. on the practical side, we demonstrate the applicability of our approach by means of a few illustrative examples and a case study from the probabilistic modelling field. we also describe an algorithm for computing least slices among the space of slices derived by our technique.
arxiv:2205.03707
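the greatest pre - expectation transformer mentioned above follows the standard pre - expectation calculus of mciver and morgan; representative rules (for skip, assignment, sequencing, and probabilistic choice) read:

```latex
% f is a post-expectation (a nonnegative function of the program state),
% p \in [0,1] the branch probability of the probabilistic choice.
\begin{aligned}
\mathrm{wp}(\texttt{skip})(f) &= f \\
\mathrm{wp}(x := e)(f) &= f[e/x] \\
\mathrm{wp}(c_1 ; c_2)(f) &= \mathrm{wp}(c_1)\bigl(\mathrm{wp}(c_2)(f)\bigr) \\
\mathrm{wp}(\{c_1\}\,[p]\,\{c_2\})(f) &= p \cdot \mathrm{wp}(c_1)(f) + (1-p)\cdot \mathrm{wp}(c_2)(f)
\end{aligned}
```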
the gravitational poynting vector provides a mechanism for the transfer of gravitational energy to a system of falling objects. in the following we will show that the gravitational poynting vector together with the gravitational larmor theorem also provides a mechanism to explain how massive bodies acquire rotational kinetic energy when external mechanical forces are applied on them.
arxiv:gr-qc/0107014
modeling eye movement indicative of expertise behavior is decisive in user evaluation. however, it is indisputable that task semantics affect gaze behavior. we present a novel approach to gaze scanpath comparison that incorporates convolutional neural networks ( cnn ) to process scene information at the fixation level. image patches linked to respective fixations are used as input for a cnn, and the resulting feature vectors provide the temporal and spatial gaze information necessary for scanpath similarity comparison. we evaluated our proposed approach on gaze data from expert and novice dentists interpreting dental radiographs using a local alignment similarity score. our approach was capable of distinguishing experts from novices with 93 % accuracy while incorporating the image semantics. moreover, our scanpath comparison using image patch features has the potential to incorporate task semantics from a variety of tasks.
arxiv:2003.13987
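the "local alignment similarity score" over per-fixation feature vectors mentioned above (arxiv:2003.13987) can be sketched smith-waterman style. the cnn producing the vectors is abstracted away here, and the cosine substitution score and gap penalty are illustrative assumptions, not the paper's exact scoring:

```python
# Hedged sketch of scanpath comparison via local alignment over fixation
# feature vectors. Substitution score = cosine similarity; gap penalty is
# an assumed constant. Not the paper's exact parameterization.
import math

def cosine(u, v):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def local_alignment_score(sp1, sp2, gap=0.5):
    """Best local alignment score between two scanpaths
    (lists of per-fixation feature vectors), Smith-Waterman style."""
    n, m = len(sp1), len(sp2)
    H = [[0.0] * (m + 1) for _ in range(n + 1)]
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = H[i - 1][j - 1] + cosine(sp1[i - 1], sp2[j - 1])
            # local alignment: scores never drop below zero
            H[i][j] = max(0.0, match, H[i - 1][j] - gap, H[i][j - 1] - gap)
            best = max(best, H[i][j])
    return best

a = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 3-fixation scanpath (toy features)
b = [[1.0, 0.0], [0.0, 1.0]]              # 2-fixation scanpath
print(local_alignment_score(a, b))  # 2.0: two perfectly matching fixations
```

in the expert/novice setting, such scores would feed a nearest-neighbor or threshold classifier; that step is omitted here.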
in many urban areas of the developing world, piped water is supplied only intermittently, as valves direct water to different parts of the water distribution system at different times. the flow is transient and may transition between free-surface and pressurized, resulting in complex dynamical features with important consequences for water suppliers and users. here, we develop a computational model of transient pipe flow in a network, including transitions between free-surface and pressurized flow, and accounting for a wide variety of realistic boundary conditions. we validate the model against several published data sets, and demonstrate its use on a real pipe network. the model is extended to consider several optimization problems motivated by realistic scenarios. we demonstrate how to infer water flow in a small pipe network from a single pressure sensor, and show how to control water inflow to minimize damaging pressure gradients.
arxiv:1509.03024
as host to two accreting planets, pds 70 provides a unique opportunity to probe the chemical complexity of atmosphere-forming material. we present alma band 6 observations of the pds 70 disk and report the first chemical inventory of the system. with a spatial resolution of 0.4''-0.5'' ($\sim$50 au), 12 species are detected, including co isotopologues and formaldehyde, small hydrocarbons, hcn and hco$^+$ isotopologues, and s-bearing molecules. so and ch$_3$oh are not detected. all lines show a large cavity at the center of the disk, indicative of the deep gap carved by the massive planets. the radial profiles of the line emission are compared to the (sub-)mm continuum and infrared scattered-light intensity profiles. different molecular transitions peak at different radii, revealing the complex interplay between density, temperature and chemistry in setting molecular abundances. column densities and optical depth profiles are derived for all detected molecules, and upper limits are obtained for the non-detections. an excitation temperature is obtained for h$_2$co. deuteration and nitrogen-fractionation profiles from the hydrogen cyanide lines show radially increasing fractionation levels. comparison of the disk chemical inventory to grids of chemical models from the literature strongly suggests a disk molecular layer hosting a carbon-to-oxygen ratio c/o > 1, thus providing for the first time compelling evidence of planets actively accreting high-c/o-ratio gas at the present time.
arxiv:2101.08369
information about the absolute galois group $G_K$ of a number field $K$ is encoded in how it acts on the \'etale fundamental group $\pi$ of a curve $X$ defined over $K$. in the case that $K = \mathbb{Q}(\zeta_n)$ is the cyclotomic field and $X$ is the fermat curve of degree $n \geq 3$, anderson determined the action of $G_K$ on the \'etale homology with coefficients in $\mathbb{Z}/n\mathbb{Z}$. the \'etale homology is the first quotient in the lower central series of the \'etale fundamental group. in this paper, we determine the structure of the graded lie algebra for $\pi$. as a consequence, this determines the action of $G_K$ on all degrees of the associated graded quotient of the lower central series of the \'etale fundamental group of the fermat curve of degree $n$, with coefficients in $\mathbb{Z}/n\mathbb{Z}$.
arxiv:1808.04917
in the realm of artificial intelligence, the generation of realistic training data for supervised learning tasks presents a significant challenge. this is particularly true in the synthesis of electrocardiograms (ecgs), where the objective is to develop a synthetic 12-lead ecg model. the primary complexity of this task stems from accurately modeling the intricate biological and physiological interactions among different ecg leads. although mathematical process simulators have shed light on these dynamics, effectively incorporating this understanding into generative models is not straightforward. in this work, we introduce an innovative method that employs ordinary differential equations (odes) to enhance the fidelity of generating 12-lead ecg data. this approach integrates a system of odes that represent cardiac dynamics directly into the generative model's optimization process, allowing for the production of biologically plausible ecg training data that authentically reflects real-world variability and inter-lead dependencies. we conducted an empirical analysis of thousands of ecgs and found that incorporating cardiac simulation insights into the data generation process significantly improves the accuracy of heart abnormality classifiers trained on this synthetic 12-lead ecg data.
arxiv:2409.17833
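one way to read "integrating a system of odes into the generative model's optimization process" (arxiv:2409.17833 above) is as an ode-residual penalty added to the training loss. the sketch below is not the paper's model: a toy harmonic oscillator $x'' = -\omega^2 x$ stands in for a real cardiac ode system, and the residual is checked by finite differences:

```python
# Illustrative sketch: an ODE-residual penalty that a generator's loss
# could include so synthetic signals respect prescribed dynamics.
# The dynamics here (x'' = -omega^2 * x) are a toy assumption standing
# in for a real multi-lead cardiac ODE system.
import math

def ode_residual_penalty(signal, dt=0.01, omega=2.0):
    """Mean squared residual of x'' + omega^2 * x = 0, via finite differences."""
    total = 0.0
    for i in range(1, len(signal) - 1):
        x_dd = (signal[i + 1] - 2 * signal[i] + signal[i - 1]) / dt ** 2
        r = x_dd + omega ** 2 * signal[i]
        total += r * r
    return total / max(1, len(signal) - 2)

dt, omega = 0.01, 2.0
consistent = [math.cos(omega * i * dt) for i in range(200)]  # exact ODE solution
noisy = [x + 0.05 * ((-1) ** i) for i, x in enumerate(consistent)]
print(ode_residual_penalty(consistent, dt, omega) <
      ode_residual_penalty(noisy, dt, omega))  # True: the penalty flags noise
```

in a real generative setup this penalty would be differentiable (e.g. in an autodiff framework) and weighted against the data-fidelity term; the weighting is a design choice the sketch does not address.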
we prove that $(\mathbb{RP}^{2n-1}, \xi_{std})$ is not exactly fillable for any $n \ne 2^k$, and that there exist strongly fillable but not exactly fillable contact manifolds in all dimensions $\geq 5$.
arxiv:2001.09718
in recent years, channel state information (csi), recognized for its fine-grained spatial characteristics, has attracted increasing attention in wifi-based indoor localization. however, despite its potential, csi-based approaches have yet to achieve the same level of deployment scale and commercialization as those based on the received signal strength indicator (rssi). a key limitation lies in the fact that most existing csi-based systems are developed and evaluated in controlled, small-scale environments, limiting their generalizability. to bridge this gap, we explore the deployment of a large-scale csi-based localization system involving over 400 access points (aps) in a real-world building under the integrated sensing and communication (isac) paradigm. we highlight two critical yet often overlooked factors: the underutilization of unlabeled data and the inherent heterogeneity of csi measurements. to address these challenges, we propose a novel csi-based learning framework for wifi localization, tailored for large-scale isac deployments on the server side. specifically, we employ a novel graph-based structure to model heterogeneous csi data and reduce redundancy. we further design a pretext pretraining task that incorporates spatial and temporal priors to effectively leverage large-scale unlabeled csi data. complementarily, we introduce a confidence-aware fine-tuning strategy to enhance the robustness of localization results. in a leave-one-smartphone-out experiment spanning five floors and 25,600 m$^2$, we achieve a median localization error of 2.17 meters and a floor accuracy of 99.49%. this performance corresponds to an 18.7% reduction in mean absolute error (mae) compared to the best-performing baseline.
arxiv:2504.17173
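the "confidence-aware fine-tuning strategy" in the abstract above (arxiv:2504.17173) is not specified in detail; one common realization is to down-weight low-confidence samples in the training loss. the function below is a hypothetical sketch of that general idea, not the paper's method:

```python
# Hypothetical sketch: confidence-weighted localization loss. The weighting
# scheme (normalized confidences over squared errors) is an assumption,
# not the strategy from arxiv:2504.17173.
def confidence_weighted_loss(errors, confidences):
    """Weighted mean squared localization error; weights are normalized to sum to 1."""
    total = sum(confidences)
    return sum((c / total) * e ** 2 for e, c in zip(errors, confidences))

errors = [1.0, 5.0]        # per-sample localization error (meters)
confidences = [0.9, 0.1]   # per-sample model confidence
print(confidence_weighted_loss(errors, confidences))  # ~3.4 vs. 13.0 unweighted
```

the effect is that a rare, poorly localized sample the model itself distrusts contributes little to the fine-tuning gradient, which is one plausible route to the robustness the abstract describes.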
we report on a novel class of nanocrystalline/amorphous gd$_3$ni/gd$_{65}$ni$_{35}$ composite microwires, which was created directly by melt-extraction through controlled solidification. x-ray diffraction (xrd) and transmission electron microscopy (tem) confirmed the formation of a biphase nanocrystalline/amorphous structure in these wires. magnetic and magnetocaloric experiments indicate a large magnetic entropy change ($-\Delta S_M \sim 9.64$ j/kg k) and a large refrigerant capacity (rc $\sim$ 742.1 j/kg) around the curie temperature of $\sim$120 k for a field change of 5 t. these values are $\sim$1.5 times larger relative to the bulk counterpart, and are superior to other candidate materials being considered for active magnetic refrigeration in the liquid-nitrogen temperature range.
arxiv:2007.10428
this paper presents two new algorithms for the joint restoration of depth and reflectivity (dr) images constructed from time-correlated single-photon counting (tcspc) measurements. two extreme cases are considered: (i) a reduced acquisition time that leads to very low photon counts, and (ii) a highly attenuating environment (such as a turbid medium), which makes the reflectivity estimation more difficult at increasing range. adopting a bayesian approach, the poisson-distributed observations are combined with prior distributions on the parameters of interest to build the joint posterior distribution. more precisely, two markov random field (mrf) priors enforcing spatial correlations are assigned to the dr images. under some justified assumptions, the restoration problem (regularized likelihood) reduces to a convex formulation with respect to each of the parameters of interest. this problem is first solved using an adaptive markov chain monte carlo (mcmc) algorithm that approximates the minimum mean square parameter estimators. this algorithm is fully automatic, since it adjusts the parameters of the mrfs by maximum marginal likelihood estimation; however, it exhibits a relatively long computational time. the second algorithm addresses this issue and is based on coordinate descent. results on single-photon depth data from laboratory-based underwater measurements demonstrate the benefit of the proposed strategy, which improves the quality of the estimated dr images.
arxiv:1608.06143
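the second algorithm above (arxiv:1608.06143) runs coordinate descent on a convex regularized poisson likelihood. the sketch below is a heavily simplified stand-in: a 1-d quadratic smoothness prior replaces the paper's mrf priors on dr images, and the regularization weight and newton inner loop are illustrative choices, not the paper's:

```python
# Simplified sketch of coordinate descent on a regularized Poisson
# negative log-likelihood. A quadratic smoothness prior on a 1-D signal
# stands in for the paper's MRF priors; lam and the Newton inner loop
# are illustrative assumptions.

def restore(counts, lam=1.0, sweeps=30):
    """Minimize sum_i (x_i - y_i * log x_i) + lam * sum_i (x_i - x_{i+1})^2
    by cyclic coordinate descent, with Newton steps on each coordinate."""
    x = [max(float(y), 0.5) for y in counts]  # positive initialization
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            nbrs = [x[j] for j in (i - 1, i + 1) if 0 <= j < n]
            for _ in range(25):  # Newton on the convex 1-D coordinate cost
                g = 1.0 - counts[i] / x[i] + 2.0 * lam * sum(x[i] - v for v in nbrs)
                h = counts[i] / x[i] ** 2 + 2.0 * lam * len(nbrs)
                x[i] = max(x[i] - g / h, 1e-6)  # stay in the positive domain
    return x

# Deterministic toy "photon counts": a low block then a high block, with noise.
counts = [4, 7, 3, 6, 5, 8, 2, 5, 13, 18, 12, 16, 14, 19, 11, 17]
est = restore(counts, lam=1.0)
rough = sum((counts[i] - counts[i + 1]) ** 2 for i in range(len(counts) - 1))
smooth = sum((est[i] - est[i + 1]) ** 2 for i in range(len(est) - 1))
print(smooth < rough)  # True: the prior yields a smoother restoration
```

each coordinate cost is convex (poisson term plus quadratic), so every inner minimization is exact and the overall objective decreases monotonically, mirroring the convexity argument in the abstract.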