We study the mathematical properties of a kinetic equation which describes the long-time behaviour of solutions to the weak turbulence equation associated to the cubic nonlinear Schrödinger equation. In particular, we give a precise definition of weak solutions and prove global existence of solutions for all initial data with finite mass. We also prove that any nontrivial initial datum yields the instantaneous onset of a condensate, i.e. a Dirac mass at the origin for any positive time. Furthermore, we show that the only stationary solutions with finite total measure are Dirac masses at the origin. We finally construct solutions with finite energy, which is transferred to infinity in a self-similar manner.
arxiv:1410.2073
Weak spectral features in BL Lacertae objects (BL Lacs) often provide a unique opportunity to probe the inner region of this rare type of active galactic nucleus. We present a Hubble Space Telescope/Cosmic Origins Spectrograph observation of the BL Lac H 2356-309. A weak Ly$\alpha$ emission line was detected. This is the fourth detection of a weak Ly$\alpha$ emission feature in the ultraviolet (UV) band in the so-called "high energy peaked BL Lacs", after Stocke et al. Assuming the line-emitting gas is located in the broad line region (BLR) and the ionizing source is the off-axis jet emission, we constrain the Lorentz factor ($\Gamma$) of the relativistic jet to be $\geq 8.1$ with a maximum viewing angle of $3.6^\circ$. The derived $\Gamma$ is somewhat larger than previous measurements of $\Gamma \approx 3-5$, implying a covering factor of $\sim 3\%$ for the line-emitting gas. Alternatively, the BLR clouds could be optically thin, in which case we constrain the BLR warm gas mass to be $\sim 10^{-5}\,\mathrm{M}_{\odot}$. We also detected two HI and one OVI absorption lines that are within $|\Delta v| < 150\ \mathrm{km\,s^{-1}}$ of the BL Lac object. The OVI and one of the HI absorbers likely coexist, given their nearly identical velocities. We discuss several ionization models and find that a photoionization model in which the ionizing photon source is the BL Lac object can fit the observed ion column densities with reasonable physical parameters. This absorber can be located either in the interstellar medium of the host galaxy or in the BLR.
arxiv:1409.6432
Theoretical modeling of the driving processes of solar-like oscillations is a powerful way of understanding the properties of the convective zones of solar-type stars. In this framework, the description of the temporal correlation between turbulent eddies is an essential ingredient to model mode amplitudes. However, there is a debate between a Gaussian and a Lorentzian description of the eddy-time correlation function (Samadi et al. 2003, Chaplin et al. 2005). Indeed, a Gaussian description reproduces the low-frequency shape of the mode amplitude for the Sun, but is unsatisfactory from a theoretical point of view (Houdek 2009) and leads to other disagreements with observations (Samadi et al. 2007). These are solved by using a Lorentzian description, but then the low-frequency shape of the solar observations is not correctly reproduced. We reconcile the two descriptions by adopting the sweeping approximation, which consists in assuming that the eddy-time correlation function is dominated by the advection of eddies, in the inertial range, by energy-bearing eddies. Using a Lorentzian function together with a cut-off frequency derived from the sweeping assumption allows us to reproduce the low-frequency shape of the observations. This result also constitutes a validation of the sweeping assumption for highly turbulent flows such as the solar case.
arxiv:1010.2682
The relation between the parameters D/sqrt(I) and Ic/Isum and the radiation patterns of the optical and radio components of an extended radio source is analyzed, where D and I are the apparent size and observed radiation intensity of the source or its components, respectively. The parameters of the pattern in the optical and radio (1.4 GHz) ranges are estimated. The radiation pattern of the extended radio-emitting regions is close to spherical, and the radiation of the central component is concentrated in a 24-degree-wide beam. Its luminosity is a factor of 4.58 higher than that of the extended component of the radio source. The radiation pattern of the optical component of the radio source turned out to be unexpectedly non-spherical: the main lobe of the pattern is about 26 degrees wide. The g-band luminosity is 6.4-12.3 times higher than the luminosity of the spherical fraction of the "optical" radiation pattern. A list of 116 new giant radio sources is presented.
arxiv:1612.03305
$\sim O(10^{7})\:\mathrm{GeV}$ and $O(10^{14})\:\mathrm{GeV}$, and $\Delta N_{\rm reh} \sim 15$ and $10$, the VIGW signal is found to be detectable by LISA and ET, respectively.
arxiv:2504.10477
We investigate the performance of the upcoming ACES (Atomic Clock Ensemble in Space) space mission in terms of its primary scientific objective, the test of the gravitational redshift. Whilst the ultimate performance of that test is determined by the systematic uncertainty of the on-board clock at 2-3 ppm, we determine whether, and under which conditions, that limit can be reached in the presence of realistic colored noise, data gaps and orbit determination uncertainties. To do so we have developed several methods and software tools to simulate and analyse ACES data. Using those we find that the target uncertainty of 2-3 ppm can be reached after only a few measurement sessions of 10-20 days each, with a relatively modest requirement on orbit determination of around 300 m.
arxiv:1907.12320
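For context, the redshift test described above is conventionally parametrized by a violation parameter multiplying the first-order gravitational term in the fractional frequency shift between the space clock and ground clocks (this is the standard formulation from the general literature, not an equation quoted from the paper):

```latex
\frac{\Delta\nu}{\nu} \;=\; (1+\alpha)\,\frac{U_{\rm sat}-U_{\rm ground}}{c^2}
\;+\; \text{(second-order Doppler and propagation terms)} ,
```

where $U$ is the Newtonian gravitational potential, general relativity corresponds to $\alpha = 0$, and the quoted 2-3 ppm is the fractional uncertainty achievable on $(1+\alpha)$.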
A bulk polycrystalline sample of YBa$_2$Cu$_3$O$_{7-\delta}$ ($\delta \approx 0.1$) has been irradiated by $\gamma$-rays from a $^{60}$Co source. Non-monotonic behavior of $T_c$ with increasing irradiation dose $\Phi$ (up to 220 MR) is observed: $T_c$ decreases at low doses ($\Phi < 50$ MR) from its initial value ($\approx 93$ K) by about 2 K and then rises, forming a minimum. At higher doses ($\Phi > 120$ MR) $T_c$ goes down again. The temperature width of the resistive transition increases rather sharply with dose below 75 MR and drops somewhat at higher dose. The results observed are discussed, taking into account the granular structure of the sample studied and the influence of $\gamma$-rays on intergrain Josephson coupling.
arxiv:cond-mat/0003192
...on is quite different from that of $\mathcal{F}$-minor deletion.
arxiv:1609.07780
For $p \in (1, \infty)$, we consider the following weighted Neumann eigenvalue problem on $B_1^c$, the exterior of the closed unit ball in $\mathbb{R}^N$:
\begin{equation}\label{neumann-eqn}
\begin{aligned}
-\Delta_p \phi &= \lambda g |\phi|^{p-2}\phi &&\text{in } B^c_1,\\
\frac{\partial \phi}{\partial \nu} &= 0 &&\text{on } \partial B_1,
\end{aligned}
\end{equation}
where $\Delta_p$ is the $p$-Laplace operator and $g \in L^1_{loc}(B^c_1)$ is an indefinite weight function. Depending on the values of $p$ and the dimension $N$, we take $g$ in certain Lorentz spaces or weighted Lebesgue spaces and show that the above eigenvalue problem admits an unbounded sequence of positive eigenvalues that includes a unique principal eigenvalue. For this purpose, we establish the compact embedding of $W^{1,p}(B^c_1)$ into $L^p(B^c_1, |g|)$ for $g$ in certain weighted Lebesgue spaces. For $N > p$, we also provide an alternate proof for the embedding of $W^{1,p}(B^c_1)$ into $L^{p^*,p}(B^c_1)$. Further, we show that the set of all eigenvalues is closed.
arxiv:1812.10677
Direct answering of questions that involve multiple entities and relations is a challenge for text-based QA. This problem is most pronounced when answers can be found only by joining evidence from multiple documents. Curated knowledge graphs (KGs) may yield good answers, but are limited by their inherent incompleteness and potential staleness. This paper presents QUEST, a method that can answer complex questions directly from textual sources on-the-fly, by computing similarity joins over partial results from different documents. Our method is completely unsupervised, avoiding training-data bottlenecks and being able to cope with rapidly evolving ad hoc topics and formulation style in user questions. QUEST builds a noisy quasi-KG with node and edge weights, consisting of dynamically retrieved entity names and relational phrases. It augments this graph with types and semantic alignments, and computes the best answers by an algorithm for Group Steiner Trees. We evaluate QUEST on benchmarks of complex questions, and show that it substantially outperforms state-of-the-art baselines.
arxiv:1908.00469
With the progressive exhaustion of fossil energy and the enhanced awareness of environmental protection, more attention is being paid to plug-in hybrid electric vehicles. Inappropriate siting and sizing of plug-in hybrid electric vehicle parking lots could have negative effects on the development of plug-in hybrid electric vehicles, the layout of the city traffic network, and the convenience of drivers, as well as lead to an increase in network losses and a degradation in voltage profiles at some nodes. Given this background, this paper aims to allocate parking lots in industrial micro-grids with the objective of minimizing system costs, including investment cost, power loss and scheduling cost. A two-stage model has been designed for this purpose. In the first stage, the optimal siting and sizing of parking lots is determined so as to minimize the investment cost of the parking lots. In the second stage, the optimal plug-in hybrid electric vehicle scheduling problem is solved considering market interactions, to provide profit to the parking lot owner while taking various network constraints into account. Conclusions are duly drawn with a realistic example.
arxiv:1711.04103
New approaches should be used to improve the quality and to increase the productivity and durability of conventional pavements. In this investigation, it has been attempted to improve the technical characteristics of bitumen using carbon nanotubes as an additive. Wet and dry processes are the most practical ways of mixing carbon nanotubes into asphalt concrete; the dry process was adopted as the best method for this investigation. In this study, the thermal and ductility properties of bitumen modified with 0.1, 0.5, and 1% carbon nanotube content were evaluated by means of bitumen penetration, softening point, and ductility tests, and the results were compared to those of unmodified bitumen. It was found that adding carbon nanotubes affects the thermal properties of bitumen, increasing the softening point and decreasing the bitumen penetration. It was also shown that bitumen ductility decreases with the carbon nanotube modification process.
arxiv:1907.05819
We consider the optimization problem of minimizing an objective functional which admits a variational form and is defined over probability distributions on a constrained domain, which poses challenges to both theoretical analysis and algorithmic design. Inspired by the mirror descent algorithm for constrained optimization, we propose an iterative particle-based algorithm, named Mirrored Variational Transport (MirrorVT), extended from the variational transport framework [7] to deal with constrained domains. In particular, at each iteration MirrorVT maps particles to an unconstrained dual domain induced by a mirror map and then approximately performs Wasserstein gradient descent on the manifold of distributions defined over the dual space by pushing particles. At the end of each iteration, the particles are mapped back to the original constrained domain. Through simulated experiments, we demonstrate the effectiveness of MirrorVT for minimizing functionals over probability distributions on simplex- and Euclidean-ball-constrained domains. We also analyze its theoretical properties and characterize its convergence to the global minimum of the objective functional.
arxiv:2208.00587
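The map-to-dual/step/map-back pattern described above can be illustrated with ordinary mirror descent on the simplex using the entropic mirror map (this is a minimal sketch of the mirror-map idea, not the particle-based MirrorVT algorithm; the objective and step size are illustrative choices):

```python
import numpy as np

def softmax(y):
    z = np.exp(y - y.max())
    return z / z.sum()

def mirror_descent_simplex(grad, x0, steps=500, lr=0.1):
    """Entropic mirror descent: primal x -> dual y = log x, gradient step
    in the unconstrained dual space, then map back with softmax so the
    iterate always stays on the probability simplex."""
    x = x0
    for _ in range(steps):
        y = np.log(x)           # mirror map to the dual domain
        y = y - lr * grad(x)    # unconstrained gradient step
        x = softmax(y)          # map back to the constrained domain
    return x

target = np.array([0.5, 0.3, 0.2])          # minimizer, lies on the simplex
grad = lambda x: 2 * (x - target)           # gradient of ||x - target||^2
x = mirror_descent_simplex(grad, np.ones(3) / 3)
```

Every iterate is automatically feasible, which is exactly the benefit the abstract attributes to working in the mirror-induced dual space.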
We have proposed and validated an ansatz as an effective potential for confining an electron or hole within a spherical quantum dot, in order to understand quantum confinement and its consequences for the energy states and band gap of the dot. Within the effective mass approximation formalism, taking as the confining potential an ansatz that conjoins a harmonic oscillator with a Coulomb interaction, and employing appropriate boundary conditions, we have calculated the shifts in the energies of the conduction band minimum (CBM) and the valence band maximum (VBM) with respect to the size of the spherical quantum dot. We have also determined the confinement-induced shift in the band gap energy. To verify our theoretical predictions as well as to validate our ansatz, we have performed a phenomenological analysis against available experimental results for quantum dots made of CdSe and observe very good agreement. Our experimentally consistent theoretical results also help in mapping the probability density of the electron and hole inside the spherical quantum dot. The consistency of our results with available experimental data signifies the capability as well as applicability of the ansatz for the effective confining potential to extract reliable information in the study of real nano-structured spherical systems.
arxiv:1705.10343
In this paper, a new technique is shown for deriving computable, guaranteed lower bounds of functional type (minorants) for two different cost functionals subject to a parabolic time-periodic boundary value problem. Together with previous results on upper bounds (majorants) for one of the cost functionals, both minorants and majorants lead to two-sided estimates of functional type for the optimal control problem. Both upper and lower bounds are derived for the second, new cost functional subject to the same parabolic PDE constraints, but where the target is a desired gradient. The time-periodic optimal control problems are discretized by the multiharmonic finite element method, leading to large systems of linear equations having a saddle point structure. The derivation of preconditioners for the minimal residual method for the new optimization problem is discussed in more detail. Finally, several numerical experiments for both optimal control problems are presented, confirming the theoretical results obtained. This work provides the basis for an adaptive scheme for time-periodic optimization problems.
arxiv:1901.09924
We establish adjunction and inversion of adjunction for log canonical centers of arbitrary codimension in full generality.
arxiv:2105.14531
Recently, evidence for neutrinoless double $\beta$ ($0\nu\beta\beta$) decay has been announced. This means that neutrinos are Majorana particles and their mass hierarchy is forced into certain patterns in the diagonal basis of the charged lepton mass matrix. We estimate the magnitude of $0\nu\beta\beta$ decay in a classification of the neutrino mass hierarchy patterns as type A, $m_{1,2} \ll m_3$; type B, $m_1 \sim m_2 \gg m_3$; and type C, $m_1 \sim m_2 \sim m_3$, where $m_i$ is the absolute mass of the $i$-th generation neutrino. The data of the $0\nu\beta\beta$ decay experiment suggest that the neutrino mass hierarchy pattern should be type B or C. Type B predicts a small magnitude of $0\nu\beta\beta$ decay, which is just at the edge of the experimentally allowed region at 95% C.L., where the Majorana CP phases must lie in a certain parameter region. Type C can induce a suitably large amount of $0\nu\beta\beta$ decay consistent with the experimental data, where the overall scale of the degenerate neutrino mass plays a crucial role, and a large value of this scale can induce large $0\nu\beta\beta$ decay in any parameter region of the Majorana CP phases.
arxiv:hep-ph/0202143
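The quantity driving these statements can be made explicit: the $0\nu\beta\beta$ amplitude is controlled by the effective Majorana mass (standard notation from the general literature, not quoted from the paper),

```latex
\langle m_{ee} \rangle
  = \Bigl| \sum_{i=1}^{3} U_{ei}^{2}\, m_i \Bigr|
  = \bigl|\, |U_{e1}|^{2} m_1 + |U_{e2}|^{2} m_2\, e^{i\alpha}
        + |U_{e3}|^{2} m_3\, e^{i\beta} \,\bigr| ,
```

where $\alpha,\beta$ are the Majorana CP phases. For type B ($m_1 \sim m_2 \gg m_3$), a near-maximal relative phase can cancel the first two terms against each other, suppressing the rate, while for the degenerate type C the overall mass scale sets the size of $\langle m_{ee} \rangle$ directly.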
Estimating the quantiles of a large dataset is a fundamental problem in both the streaming algorithms literature and the differential privacy literature. However, all existing private mechanisms for distribution-independent quantile computation require space at least linear in the input size $n$. In this work, we devise a differentially private algorithm for the quantile estimation problem with strongly sublinear space complexity, in both the one-shot and continual observation settings. Our basic mechanism estimates any $\alpha$-approximate quantile of a length-$n$ stream over a data universe $\mathcal{X}$ with probability $1-\beta$ using $O\left(\frac{\log(|\mathcal{X}|/\beta)\log(\alpha\epsilon n)}{\alpha\epsilon}\right)$ space while satisfying $\epsilon$-differential privacy at a single time point. Our approach builds upon deterministic streaming algorithms for non-private quantile estimation, instantiating the exponential mechanism using a utility function defined on sketch items, while (privately) sampling from intervals defined by the sketch. We also present another algorithm, based on histograms, that is especially suited to the multiple-quantiles case. We implement our algorithms and experimentally evaluate them on synthetic and real-world datasets.
arxiv:2201.03380
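The "score intervals with a utility function, then sample privately" pattern can be sketched with the classical (non-streaming, linear-space) exponential-mechanism quantile, which is the building block the abstract adapts to sketches; everything below (the clipping range, the Gaussian data) is illustrative, not the paper's mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def private_quantile(data, alpha, eps, lo, hi):
    """Exponential mechanism for the alpha-quantile: score each interval
    between consecutive sorted points by how far its rank is from
    alpha*n, then sample an interval with probability proportional to
    length * exp(eps * utility / 2) and return a uniform point in it."""
    x = np.sort(np.clip(data, lo, hi))
    x = np.concatenate(([lo], x, [hi]))
    n = len(data)
    # utility of interval i = (x[i], x[i+1]): minus its rank error
    util = -np.abs(np.arange(n + 1) - alpha * n)
    lengths = np.diff(x)
    logw = np.log(np.maximum(lengths, 1e-12)) + 0.5 * eps * util
    logw -= logw.max()                     # stabilize before exponentiating
    p = np.exp(logw)
    p /= p.sum()
    i = rng.choice(n + 1, p=p)
    return rng.uniform(x[i], x[i + 1])

data = rng.normal(50, 10, size=10_000)
est = private_quantile(data, alpha=0.5, eps=1.0, lo=0, hi=100)
```

The paper's contribution is replacing the sorted data array here with a small deterministic sketch, so the utility is defined on sketch items rather than on all $n$ points.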
Long-range frequency chirping of Bernstein-Greene-Kruskal modes, whose existence is determined by the fast particles, is investigated in cases where these particles do not move freely and their motion is bounded to restricted orbits. An equilibrium oscillating potential, which creates different orbit topologies of energetic particles, is included in the bump-on-tail instability problem of a plasma wave. With respect to fast particle dynamics, the extended model captures the range of particle motion (trapped/passing) with energy and thus represents a more realistic 1D picture of the long-range sweeping events observed for weakly damped modes, e.g. global Alfvén eigenmodes, in tokamaks. The Poisson equation is solved numerically along with bounce-averaging the Vlasov equation in the adiabatic regime. We demonstrate that the shape and the saturation amplitude of the nonlinear mode structure depend not only on the amount of deviation from the initial eigenfrequency but also on the initial energy of the resonant electrons in the equilibrium potential. Similarly, the results reveal that resonant electrons following different equilibrium orbits in the electrostatic potential lead to different rates of frequency evolution. As compared to the previous model [Breizman B. N. 2010 Nucl. Fusion 50 084014], it is shown that the frequency sweeps at lower rates. The additional physics included in the model enables a more complete 1D description of the range of phenomena observed in experiments.
arxiv:1702.05336
The statistical mechanics approach to wealth distribution is based on the conservative kinetic multi-agent model for money exchange, where the local interaction rule between the agents is analogous to the elastic particle scattering process. Here, we discuss the role of a class of conservative local operators, and we show that, depending on the values of their parameters, they can be used to generate all the relevant distributions. We also show numerically that in order to generate the power-law tail a heterogeneous risk-aversion model is required. By changing the parameters of these operators one can also fine-tune the resulting distributions in order to provide support for the emergence of a more egalitarian wealth distribution.
arxiv:1606.04790
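A minimal simulation of the conservative kinetic exchange the abstract starts from (a plain random-split rule in the Dragulescu-Yakovenko spirit; the paper's parametrized operators and heterogeneous risk aversion are not included):

```python
import numpy as np

rng = np.random.default_rng(1)

def exchange(n_agents=10_000, n_steps=200_000, m0=1.0):
    """Conservative random-exchange model: at each step two random agents
    pool their money and split it at a uniformly random fraction.
    Total money is conserved; the stationary distribution is close to
    the exponential (Boltzmann-Gibbs) law."""
    m = np.full(n_agents, m0)
    for _ in range(n_steps):
        i, j = rng.integers(n_agents, size=2)
        if i == j:
            continue
        total = m[i] + m[j]
        r = rng.random()
        m[i], m[j] = r * total, (1 - r) * total
    return m

m = exchange()
```

For an exponential distribution with unit mean the standard deviation is also 1, so checking that the sample standard deviation relaxes toward 1 is a quick diagnostic of equilibration; power-law tails require breaking the homogeneity of the exchange rule, as the abstract notes.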
Interactions among electrons can give rise to striking collective phenomena when the kinetic energy of charge carriers is suppressed. One example is the fractional quantum Hall effect, in which correlations between electrons moving in two dimensions under the influence of a strong magnetic field generate excitations with fractional charge. Graphene provides a platform to study unique many-body effects due to its massless chiral charge carriers and the fourfold degeneracy that arises from their spin and valley degrees of freedom. Here we report local electronic compressibility measurements of a suspended graphene flake performed using a scanning single-electron transistor. Between Landau level filling $\nu = 0$ and $1$, we observe incompressible fractional quantum Hall states that follow the standard composite fermion sequence $\nu = p/(2p \pm 1)$ for all integer $p \leq 4$. In contrast, incompressible behavior occurs only at $\nu = 4/3$, $8/5$, $10/7$ and $14/9$ between $\nu = 1$ and $2$. These fractions correspond to a subset of the standard composite fermion sequence involving only even numerators, suggesting a robust underlying symmetry. We extract the energy gaps associated with each fractional quantum Hall state as a function of magnetic field. The states at $\nu = 1/3$, $2/3$, $4/3$ and $8/5$ are the strongest at low field, and persist below 1.5 T. The unusual sequence of incompressible states provides insight into the interplay between electronic correlations and SU(4) symmetry in graphene.
arxiv:1201.5128
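The bookkeeping behind the "even numerators" observation can be checked directly: generating the standard composite-fermion fractions between $\nu = 1$ and $2$ (entered directly above $\nu = 1$ or as the hole-conjugate sequence below $\nu = 2$; an illustrative enumeration, not the paper's analysis) and filtering for even numerators recovers exactly the four observed states:

```python
from fractions import Fraction

def composite_fermion_fillings(pmax=4):
    """Filling factors 1 < nu < 2 built from the composite-fermion
    sequence nu* = p/(2p +/- 1): either 1 + nu* (particle sequence)
    or 2 - nu* (hole-conjugate sequence)."""
    nus = set()
    for p in range(1, pmax + 1):
        for s in (+1, -1):
            f = Fraction(p, 2 * p + s)
            if f < 1:                 # skip the integer case p/(2p-1) = 1
                nus.add(1 + f)
                nus.add(2 - f)
    return sorted(nus)

fills = composite_fermion_fillings()
observed = [Fraction(4, 3), Fraction(8, 5), Fraction(10, 7), Fraction(14, 9)]
```

The remaining members of the list (5/3, 7/5, 11/7, 13/9) all have odd numerators, which is the subset structure the abstract highlights.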
We perform a BFKL-NLL analysis of forward jet production at HERA which leads to a good description of the data over the full kinematical domain. We also predict the azimuthal angle dependence of Mueller-Navelet jet production at the Tevatron and the LHC using the BFKL NLL formalism.
arxiv:0706.1799
Resurgence theory implies that the non-perturbative (NP) and perturbative (P) data in a QFT are quantitatively related, and that detailed information about non-perturbative saddle point field configurations of path integrals can be extracted from perturbation theory. Traditionally, only stable NP saddle points are considered in QFT, and homotopy group considerations are used to classify them. However, in many QFTs the relevant homotopy groups are trivial, and even when they are non-trivial they leave many NP saddle points undetected. Resurgence provides a refined classification of NP saddles, going beyond conventional topological considerations. To demonstrate some of these ideas, we study the $SU(N)$ principal chiral model (PCM), a two-dimensional asymptotically free matrix field theory which has no instantons, because the relevant homotopy group is trivial. Adiabatic continuity is used to reach a weakly coupled regime where NP effects are calculable. We then use resurgence theory to uncover the existence and role of novel 'fracton' saddle points, which turn out to be the fractionalized constituents of previously observed unstable 'uniton' saddle points. The fractons play a crucial role in the physics of the PCM, and are responsible for the dynamically generated mass gap of the theory. Moreover, we show that fracton-anti-fracton events are the weak-coupling realization of 't Hooft's renormalons, and argue that the renormalon ambiguities are systematically cancelled in the semi-classical expansion. Our results motivate the conjecture that the semi-classical expansion of the path integral can be geometrized as a sum over Lefschetz thimbles.
arxiv:1403.1277
Accurate and radially extended stellar kinematic data reaching $R = 97''$ from the center are presented for the cD galaxy of Fornax, NGC 1399. The stellar rotation is small ($\leq 30$ km/s); the stellar velocity dispersion remains constant at 250-270 km/s. The deviations from Gaussian line-of-sight velocity distributions are small, at the percent level. We construct dynamical models of the galaxy, deprojecting its nearly round (E0-E1) surface brightness distribution, and determining the spherical distribution function that best fits (at the 4 percent level) the kinematic data on a grid of parametrized potentials. We find that the stellar orbital structure is moderately radial, with $\beta = 0.3 \pm 0.1$ for $R \leq 60''$, similar to results found for some normal giant ellipticals. The gravitational potential is dominated by the luminous component out to the last data point, with a mass-to-light ratio $M/L_B = 10$ in solar units, although the presence of a central black hole of $M \approx 5 \times 10^8$ solar masses is compatible with the data in the inner 5 arcsec. The influence of the dark component is marginally detected starting from $R \approx 60''$. Using the radial velocities of the globular clusters and planetary nebulae of the galaxy we constrain the potential more strongly, ruling out the self-consistent case and finding that the best-fit solution agrees with X-ray determinations. The resulting total mass and mass-to-light ratio are $M = 1.2$-$2.5 \times 10^{12}$ solar masses and $M/L_B = 22$-$48$ in solar units inside $R = 417''$ or 35 kpc for $D = 17.6$ Mpc.
arxiv:astro-ph/9909446
We find that the triplet Andreev reflection amplitude at the interface between a half-metal and an s-wave superconductor in the presence of a domain wall is significantly enhanced if the half-metal is a thin film, rather than an extended magnet. The enhancement is by a factor $l_{\rm d}/d$, where $l_{\rm d}$ is the width of the domain wall and $d$ the film thickness. We conclude that in a lateral geometry, domain walls can be an effective source of the triplet proximity effect.
arxiv:0904.3916
We study the possibility of asymmetric transmission induced by a non-Hermitian scattering center embedded in a one-dimensional waveguide, motivated by the aim of realizing a quantum diode in a non-Hermitian system. It is shown that a $\mathcal{PT}$-symmetric non-Hermitian scattering center always has symmetric transmission, although the dynamics within the isolated center can be unidirectional, especially at its exceptional point. We propose a concrete scheme based on a flux-controlled non-Hermitian scattering center, which comprises a non-Hermitian triangular ring threaded by an Aharonov-Bohm flux. The analytical solution shows that such a complex scattering center acts as a diode at the resonant energy level of the spectral singularity, exhibiting perfect unidirectionality of the transmission. The connections between the phenomena of asymmetric transmission and reflectionless absorption are also discussed.
arxiv:1409.0420
We derive effective equations with loop quantum gravity corrections for the Lemaître-Tolman-Bondi family of space-times, and use these to study quantum gravity effects in the Oppenheimer-Snyder collapse model. For this model, after the formation of a black hole with an apparent horizon, quantum gravity effects become important in the space-time region where the energy density and space-time curvature scalars become comparable to the Planck scale. These quantum gravity effects first stop the collapse of the dust matter field when its energy density reaches the Planck scale, and then cause the dust field to begin slowly expanding. Due to this continued expansion, the matter field will eventually extend beyond the apparent horizon, at which point the horizon disappears and there is no longer a black hole. There are no singularities anywhere in this space-time. In addition, in the limit that edge effects are neglected, we show that the dynamics for the interior of the star of uniform energy density follow the loop quantum cosmology effective Friedmann equation for the spatially flat Friedmann-Lemaître-Robertson-Walker space-time. Finally, we estimate the lifetime of the black hole, as measured by a distant observer, to be $\sim (GM)^2/\ell_{\rm Pl}$.
arxiv:2006.09325
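The effective Friedmann equation referred to above is, in the standard form found in the loop quantum cosmology literature (quoted for context, not from the paper itself),

```latex
H^2 = \left(\frac{\dot a}{a}\right)^2
    = \frac{8\pi G}{3}\,\rho \left(1 - \frac{\rho}{\rho_c}\right),
\qquad \rho_c = \mathcal{O}(1)\,\rho_{\rm Pl},
```

so the Hubble rate $H$ vanishes when the dust energy density reaches the critical (Planck-order) density $\rho_c$: the collapse halts there and the $(1-\rho/\rho_c)$ correction drives the subsequent expansion.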
The rapid advancement of large language models has raised significant concerns regarding their potential misuse by malicious actors. As a result, developing effective detectors to mitigate these risks has become a critical priority. However, most existing detection methods focus excessively on detection accuracy, often neglecting the societal risks posed by high false positive rates (FPRs). This paper addresses this issue by leveraging conformal prediction (CP), which effectively constrains the upper bound of FPRs. While directly applying CP constrains FPRs, it also leads to a significant reduction in detection performance. To overcome this trade-off, this paper proposes a zero-shot machine-generated text detection framework via multiscaled conformal prediction (MCP), which both enforces the FPR constraint and improves detection performance. This paper also introduces RealDet, a high-quality dataset that spans a wide range of domains, ensuring realistic calibration and enabling superior detection performance when combined with MCP. Empirical evaluations demonstrate that MCP effectively constrains FPRs, significantly enhances detection performance, and increases robustness against adversarial attacks across multiple detectors and datasets.
arxiv:2505.05084
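The basic CP mechanism for bounding FPRs can be sketched with split conformal calibration: pick the detection threshold as a conservative quantile of detector scores on human-written calibration texts, so that flagging only scores above it keeps the FPR near the target level. This is a sketch of plain CP (the baseline the abstract improves upon), with hypothetical Gaussian detector scores standing in for a real detector:

```python
import numpy as np

rng = np.random.default_rng(2)

def conformal_threshold(cal_scores, alpha):
    """Split-conformal threshold: the ceil((n+1)(1-alpha))-th smallest
    calibration score. Flagging a new text only when its score exceeds
    this value keeps the false positive rate at (about) alpha, assuming
    exchangeability of human-written scores."""
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(cal_scores)[min(k, n) - 1]

# hypothetical detector scores: human-written ~ N(0,1), machine ~ N(3,1)
human_cal = rng.normal(0, 1, 1_000)
tau = conformal_threshold(human_cal, alpha=0.05)

human_test = rng.normal(0, 1, 10_000)
machine_test = rng.normal(3, 1, 10_000)
fpr = np.mean(human_test > tau)          # should hover near alpha
power = np.mean(machine_test > tau)      # detection rate at that FPR
```

The trade-off the abstract describes is visible here: a stricter `alpha` raises `tau` and lowers `power`, which is what MCP is designed to mitigate.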
Let $A$ be a unital simple separable exact C$^*$-algebra which is approximately divisible and of real rank zero. We prove that the set of positive elements in $A$ with a fixed Cuntz class is path connected. This result applies in particular to irrational rotation algebras and AF algebras.
arxiv:2202.10428
Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound-constrained optimization, $\ell_1$-norm regularized optimization, and $\ell_0$-norm regularized optimization as special cases. This paper proposes and analyzes a new Generalized Matrix Splitting Algorithm (GMSA) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the successive over-relaxation method for solving linear systems in the literature. Our algorithm is derived from a novel triangle operator mapping, which can be computed exactly using a new generalized Gaussian elimination procedure. We establish the global convergence, convergence rate, and iteration complexity of GMSA for convex problems. In addition, we also discuss several important extensions of GMSA. Finally, we validate the performance of our proposed method on three particular applications: nonnegative matrix factorization, $\ell_0$-norm regularized sparse coding, and the $\ell_1$-norm regularized Dantzig selector problem. Extensive experiments show that our method achieves state-of-the-art performance in terms of both efficiency and efficacy.
arxiv:1806.03165
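For reference, the classical special case that GMSA generalizes is the Gauss-Seidel splitting for linear systems (this sketch shows only that baseline, not GMSA itself): writing $A = L + D + U$, each sweep solves the triangular system $(L + D)\,x_{k+1} = b - U x_k$ by forward substitution.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iters=100):
    """Gauss-Seidel sweeps for A x = b: update each coordinate in turn
    using the freshest values of the already-updated coordinates, which
    is exactly forward substitution on (L + D) x_{k+1} = b - U x_k."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])   # diagonally dominant => convergence
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
```

GMSA replaces this triangular solve with the paper's triangle operator mapping, computed by a generalized Gaussian elimination, so that the same sweep structure applies to composite (nonsmooth-regularized) objectives.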
The efficient utilization of available resources while simultaneously achieving control objectives is a primary motivation in the event-triggered control paradigm. In many modern control applications, one such objective is enforcing the safety of a system. The goal of this paper is to carry out this vision by combining event-triggered and safety-critical control design. We discuss how a direct transcription, in the context of safety, of event-triggered methods for stabilization may result in designs that are not implementable on real hardware due to the lack of a minimum interevent time. We provide a counterexample showing this phenomenon and, building on the insight gained, propose an event-triggered control approach via input-to-state safe barrier functions that achieves safety while ensuring that interevent times are uniformly lower bounded. We illustrate our results in simulation.
arxiv:2003.06963
we present new high-sensitivity e-merlin and vla radio images of the prototypical seyfert 2 galaxy ngc 1068 at 5, 10 and 21 ghz. we image the radio jet, from the compact components ne, c, s1 and s2 to the faint double-lobed jet structure of the ne and sw jet lobes. furthermore, we map the jet by combining e-merlin and vla data for the first time. components ne, c and s2 have steep spectra between 5 and 21 ghz, indicative of a dominant optically-thin non-thermal emission. component s1, which is where the agn resides, has a flat radio spectrum. we report a new component, s2a, which is part of the southern jet. we compare these new data with the merlin and vla data observed in 1983, 1992 and 1995 and report a flux decrease by a factor of 2 in component c, suggesting variability of this jet component. with the high angular resolution e-merlin maps, we detect bow shocks in the ne jet lobe that coincide with the molecular gas outflows observed with alma. the ne jet lobe has enough radio power to be considered responsible for driving out the dense molecular gas observed with alma around the same region.
arxiv:2312.09722
voip (voice over internet protocol) has been a growing technology during the last decade. it provides audio and video streaming over the network, as well as text transport. because voip is a cost-effective solution, it can be deployed for intercommunication among the employees of an organization. the proposed idea has been implemented for the audio streaming side of voip. during audio streaming, security vulnerabilities are possible on the voip server while two parties communicate. in the proposed model, a voip system with ivr (interactive voice response) is first implemented as a case study, with security parameters applied to the asterisk server that acts as the voip service provider. the asterisk server is configured with several security measures: a vpn server, firewall iptables rules, and an intrusion detection and intrusion prevention system. each measure is monitored by the system administrator of the voip server along with a mysql database, and the administrator receives updates about attacks on the server through a mail server attached to the asterisk server. the main strength of the proposed system is that a single asterisk server, running inside a virtualization environment, simultaneously acts as voip server, ivr provider, mail server with ids and ips, vpn server, and database-connected host. the voip system is implemented for a local area network inside the university.
arxiv:1206.1748
we analyze here the late evolutionary stages of massive close binary stars (with initial masses higher than 8 solar masses). our purpose is to study possible mechanisms for the origin of gamma-ray bursts (grbs). we suppose in this paper that the grb phenomenon requires the formation of massive (approx. 1 m_sun) compact (approx. 10 km) accretion disks around kerr black holes and neutron stars. such kerr black holes are products of the collapse of wolf-rayet stars in extremely close binaries and of the merging of neutron stars with black holes and of neutron stars with neutron stars in close binary systems. the required accretion disks can also be formed around neutron stars produced in the collapse of accreting oxygen-neon white dwarfs. we have estimated the frequencies of events in the galaxy that lead to a rotational collapse associated with the formation of rapidly rotating relativistic objects. we made our calculations using the "scenario machine".
arxiv:astro-ph/0607329
the correction of multiple aberrations in an optical system requires different optical elements, which increases its cost and complexity. metasurfaces hold great promise for providing new functionality in miniaturized and low-cost optical systems. a key advantage over their bulk counterparts is the metasurface's ability to respond to the polarization of light, which adds a new degree of freedom to the optical design. here, we show that polarization control enables a form-birefringent metalens to correct for both spherical and off-axis aberrations using only a single element, which is not possible with bulk optics. the metalens encodes two phase profiles onto the same surface, thus allowing switching from high-resolution to wide-field-of-view operation. such an ability to obtain both high resolution and a wide field of view in a single layer is an important step towards the integration of miniaturized optical systems, which may find many applications, e.g., in microscopy and endoscopy.
arxiv:2205.09253
usability engineering is a professional discipline that focuses on improving the usability of interactive systems. it draws on theories from computer science and psychology to define problems that occur during the use of such a system. usability engineering involves the testing of designs at various stages of the development process, with users or with usability experts. the history of usability engineering in this context dates back to the 1980s. in 1988, authors john whiteside and john bennett (of digital equipment corporation and ibm, respectively) published material on the subject, isolating the early setting of goals, iterative evaluation, and prototyping as key activities. the usability expert jakob nielsen is a leader in the field of usability engineering. in his 1993 book usability engineering, nielsen describes methods to use throughout a product development process so that designers can ensure they take into account the most important barriers to learnability, efficiency, memorability, error-free use, and subjective satisfaction before implementing the product. nielsen's work describes how to perform usability tests and how to use usability heuristics in the usability engineering lifecycle. ensuring good usability via this process prevents problems in product adoption after release. rather than focusing on finding solutions for usability problems, which is the focus of a ux or interaction designer, a usability engineer mainly concentrates on the research phase. in this sense, it is not strictly a design role, and many usability engineers have a background in computer science because of this. despite this, its connection to the design trade is crucial, not least because it delivers the framework within which designers can work to be sure that their products will connect properly with their target users.
== international standards == usability engineers sometimes work to shape an interface such that it adheres to accepted operational definitions of user requirements documentation. for example, definitions of usability approved by the international organization for standardization (see e.g. iso 9241 part 11) hold it to be the effectiveness, efficiency, and satisfaction with which specific users should be able to perform tasks in a specified context of use. advocates of this approach engage in task analysis, then prototype interface design, and usability testing on those designs. on the basis of such tests, the technology is redesigned if necessary. the national institute of standards and technology has collaborated with industry to develop the common industry specification for usability – requirements, which serves as a guide for many industry professionals. the specifications for successful usability in biometrics were also developed by the nist.
https://en.wikipedia.org/wiki/Usability_engineering
in this paper we show a systematic method for choosing appropriate parameters for a circular pp collider, using an analytical expression for the beam-beam tune shift limit and starting from a given design goal and technical limitations. a parameter space has been explored. based on the parameter scan and on considerations from the rf systems, sets of appropriate parameters for a 50 km and a 100 km circular proton-proton collider are proposed.
arxiv:1503.01530
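the analytical beam-beam limit referred to above is typically built on the beam-beam tune shift parameter; for round gaussian beams the standard textbook form (the authors' exact expression may differ) is:

```latex
% Beam-beam parameter for round Gaussian beams (standard textbook form):
\xi = \frac{N_b\, r_p}{4\pi\,\varepsilon_n}
    = \frac{N_b\, r_p\, \beta^{*}}{4\pi\,\gamma\,\sigma^{*2}},
\qquad \varepsilon_n = \gamma\,\sigma^{*2}/\beta^{*},
% with N_b the bunch population, r_p the classical proton radius,
% \varepsilon_n the normalized emittance, \beta^{*} the beta function
% and \sigma^{*} the rms beam size at the interaction point.
```

fixing the tolerable $\xi$ then constrains the bunch population, emittance and optics, which is the kind of parameter scan the abstract describes.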
we consider a branching markov process in continuous time in which the particles evolve independently as spectrally negative lévy processes. when the branching mechanism is critical or subcritical, the process will eventually die out, and we may define its overall maximum, i.e. the maximum location ever reached by a particle. the purpose of this paper is to give asymptotic estimates for the survival function of this maximum. in particular, we show that in the critical case the asymptotics is polynomial when the underlying lévy process oscillates or drifts towards $+\infty$, and is exponential when it drifts towards $-\infty$.
arxiv:2207.12192
in 1987, i. labuda proved a general representation theorem that, as a special case, shows that the topology of local convergence in measure is the minimal topology on orlicz spaces and $l_\infty$. minimal topologies connect with the recent, and actively studied, subject of "unbounded convergences". in fact, a hausdorff locally solid topology $\tau$ on a vector lattice $x$ is minimal iff it is lebesgue and the $\tau$ and unbounded $\tau$-topologies agree. in this paper, we study metrizability, submetrizability, and local boundedness of the unbounded topology, $u\tau$, associated to $\tau$ on $x$. regarding metrizability, we prove that if $\tau$ is a locally solid metrizable topology then $u\tau$ is metrizable iff there is a countable set $a$ with $\overline{i(a)}^\tau = x$. we prove that a minimal topology is metrizable iff $x$ has the countable sup property and a countable order basis. in line with the idea that uo-convergence generalizes convergence almost everywhere, we prove relations between minimal topologies and uo-convergence that generalize classical relations between convergence almost everywhere and convergence in measure.
arxiv:1709.05407
for reference, earth's average sea-level pressure is 1013.25 mbar. first formally proposed by astrophysicist carl sagan, the terraforming of venus has since been discussed through methods such as organic-molecule-induced carbon conversion, sun reflection, increasing planetary spin, and various chemical means. due to the high presence of sulfuric acid and solar wind on venus, which are harmful to organic environments, organic methods of carbon conversion have been found unfeasible. other methods, such as solar shading, hydrogen bombardment, and magnesium-calcium bombardment, are theoretically sound but would require large-scale resources and space technologies not yet available to humans. === ethical considerations === while successful terraforming would allow life to prosper on other planets, philosophers have debated whether this practice is morally sound. certain ethics experts suggest that planets like mars hold an intrinsic value independent of their utility to humanity and should therefore be free from human interference. some also argue that the steps necessary to make mars habitable, such as fusion reactors, space-based solar-powered lasers, or spreading a thin layer of soot on mars' polar ice caps, would deteriorate the aesthetic value that mars currently possesses. this calls humanity's ethical and moral values into question, as it raises the issue of whether humanity is willing to eradicate the current ecosystem of another planet for its own benefit. through this ethical framework, terraforming attempts on these planets could be seen to threaten their intrinsically valuable environments, rendering these efforts unethical. == seeding == === environmental considerations === mars is the primary subject of discussion for seeding.
locations for seeding are chosen based on atmospheric temperature, air pressure, the existence of harmful radiation, and the availability of natural resources such as water and other compounds essential to terrestrial life. === developing microorganisms for seeding === natural or engineered microorganisms must be created or discovered that can withstand the harsh environment of mars. the first organisms used must be able to survive exposure to ionizing radiation and the high concentration of co2 present in the martian atmosphere. later organisms, such as multicellular plants, must be able to withstand the freezing temperatures, tolerate high co2 levels, and produce significant amounts of o2. microorganisms provide significant advantages over non-biological mechanisms. they are self-replicating, negating the need to either transport or manufacture large machinery to the surface of mars. they can also
https://en.wikipedia.org/wiki/Planetary_engineering
we attempt to settle the issue as to what is the correct non-abelian generalisation of the born-infeld action, via a consideration of the two-loop $\beta$-function for the non-abelian background gauge field in open string theory. an analysis of the bosonic theory alone shows the recent proposal of tseytlin's to be somewhat lacking. for the superstring, however, this proposal would seem to be correct, and not just within the approximation used in \cite{tseytlin}. since it is this latter case that is relevant to the description of d-branes we, in effect, obtain an independent verification of tseytlin's result. some issues involved in the concept of non-abelian t-duality are discussed; and it is shown how the interaction between separated and parallel branes, in the form of massive string states, emerges.
arxiv:hep-th/9801127
we introduce a new random input model for bipartite matching which we call the random type poisson arrival model. just like in the known i. i. d. model ( introduced by feldman et al. 2009 ), online nodes have types in our model. in contrast to the adversarial types studied in the known i. i. d. model, following the random graphs studied in mastin and jaillet 2016, in our model each type graph is generated randomly by including each offline node in the neighborhood of an online node with probability $ c / n $ independently. in our model, nodes of the same type appear consecutively in the input and the number of times each type node appears is distributed according to the poisson distribution with parameter 1. we analyze the performance of the simple greedy algorithm under this input model. the performance is controlled by the parameter $ c $ and we are able to exactly characterize the competitive ratio for the regimes $ c = o ( 1 ) $ and $ c = \ omega ( 1 ) $. we also provide a precise bound on the expected size of the matching in the remaining regime of constant $ c $. we compare our results to the previous work of mastin and jaillet who analyzed the simple greedy algorithm in the $ g _ { n, n, p } $ model where each online node type occurs exactly once. we essentially show that the approach of mastin and jaillet can be extended to work for the random type poisson arrival model, although several nontrivial technical challenges need to be overcome. intuitively, one can view the random type poisson arrival model as the $ g _ { n, n, p } $ model with less randomness ; that is, instead of each online node having a new type, each online node has a chance of repeating the previous type.
arxiv:1805.00578
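the random type poisson arrival model and the greedy algorithm described above can be sketched with a small simulation (our own illustration; parameter conventions are assumed from the abstract, and greedy here picks an arbitrary free neighbor):

```python
import random

def greedy_poisson_type_matching(n, c, seed=0):
    """Simulate greedy online bipartite matching in a sketch of the
    random type Poisson arrival model.

    Each of the n online types independently includes each of the n
    offline nodes in its neighborhood with probability c/n; each type
    arrives a Poisson(1) number of times, same-type arrivals being
    consecutive. Returns the size of the greedy matching.
    """
    rng = random.Random(seed)
    free = set(range(n))
    matched = 0
    for _ in range(n):  # one neighborhood per type, shared by its arrivals
        neigh = [v for v in range(n) if rng.random() < c / n]
        # sample Poisson(1) by inversion: P(K=k) = e^{-1}/k!
        k, p, u = 0, 2.718281828459045 ** -1.0, rng.random()
        cum = p
        while u > cum:
            k += 1
            p /= k
            cum += p
        for _ in range(k):
            for v in neigh:
                if v in free:          # greedy: take any free neighbor
                    free.remove(v)
                    matched += 1
                    break
    return matched
```

repeating this over many seeds and dividing by the offline-side optimum would give an empirical estimate of the competitive ratio as a function of $c$.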
we present reverberation mapping results after monitoring a sample of 17 high-z, high-luminosity quasars for more than 10 years using photometric and spectroscopic capabilities. continuum and line emission flux variability is observed in all quasars. using cross-correlation analysis we successfully determine lags between the variations in the continuum and the broad emission lines for several sources. here we present a highlight of our results and the determined radius-luminosity relations for ly$\alpha$ and civ.
arxiv:1801.03866
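the lag determination via cross-correlation mentioned above can be illustrated with a toy discrete example on synthetic light curves (not the authors' data or pipeline; evenly sampled series assumed, whereas real campaigns need interpolated or discrete correlation functions):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(500)
continuum = np.sin(2 * np.pi * t / 73.0) + 0.1 * rng.standard_normal(500)
true_lag = 20
line = np.roll(continuum, true_lag)  # line light curve responds 20 samples later

def best_lag(c, l, max_lag=50):
    """Return the lag (in samples) maximizing the cross-correlation of c and l."""
    c = (c - c.mean()) / c.std()
    l = (l - l.mean()) / l.std()
    lags = np.arange(-max_lag, max_lag + 1)
    ccf = [np.mean(c[:len(c) - k] * l[k:]) if k >= 0
           else np.mean(c[-k:] * l[:len(l) + k]) for k in lags]
    return lags[int(np.argmax(ccf))]
```

multiplying the recovered lag by the sampling interval and the speed of light gives the blr radius used in radius-luminosity relations.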
calculations of the thermodynamical properties of a supercooled liquid confined in a matrix are performed with an inherent structure analysis. the liquid entropy is computed by means of a thermodynamical integration procedure. the contributions to the free energy of the liquid can be decoupled, also in confinement, into a configurational and a vibrational part. we show that the vibrational entropy can be calculated in the harmonic approximation, as in the bulk case. the kauzmann temperature of the confined system is estimated from the behavior of the configurational entropy.
arxiv:cond-mat/0501695
radar - based vital sign monitoring ( vsm ) systems have become valuable for non - contact health monitoring by detecting physiological activities, such as respiration and heartbeat, remotely. however, the conventional phased array used in vsm is vulnerable to privacy breaches, as an eavesdropper can extract sensitive vital sign information by analyzing the reflected radar signals. in this paper, we propose a novel approach to protect privacy in radar - based vsm by modifying the radar transmitter hardware, specifically by strategically selecting the transmit antennas from the available antennas in the transmit array. by dynamically selecting which antennas connect or disconnect to the radio frequency chain, the transmitter introduces additional phase noise to the radar echoes, generating false frequencies in the power spectrum of the extracted phases at the eavesdropper ' s receiver. the antenna activation pattern is designed to maximize the variance of the phases introduced by antenna selection, which effectively makes the false frequencies dominate the spectrum, obscuring the actual vital sign frequencies. meanwhile, the authorized receiver, having knowledge of the antenna selection pattern, can compensate for the phase noise and accurately extract the vital signs. numerical experiments are conducted to validate the effectiveness of the proposed approach in enhancing privacy while maintaining vital sign monitoring.
arxiv:2504.01820
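the core idea above, where a known phase perturbation obscures the vital-sign phase for an eavesdropper but can be compensated by the authorized receiver, can be demonstrated with a toy numpy model (signal amplitudes, the 0.3 hz respiration rate, and the uniform phase key are our own assumptions, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1024
fs = 20.0                                   # assumed slow-time sampling rate (Hz)
t = np.arange(n) / fs
vital = 0.05 * np.sin(2 * np.pi * 0.3 * t)  # respiration-like phase signal
key = rng.uniform(-np.pi, np.pi, n)         # phase injected by antenna selection
                                            # (shared secret with the receiver)

echo_phase = vital + key                    # phase extracted from the radar echo
eavesdropper = echo_phase                   # cannot separate vital from key
authorized = echo_phase - key               # compensates with the known pattern
```

the eavesdropper's phase spectrum is dominated by the high-variance key, while subtracting the known key recovers the vital-sign phase essentially exactly.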
key-point-based scene understanding is fundamental for autonomous driving applications. at the same time, optical flow plays an important role in many vision tasks. however, due to the implicit bias of equal attention on all points, classic data-driven optical flow estimation methods yield less satisfactory performance on key points, limiting their implementation in key-point-critical safety-relevant scenarios. to address these issues, we introduce a points-based modeling method that requires the model to learn key-point-related priors explicitly. based on this modeling method, we present focusflow, a framework consisting of 1) a mix loss function that combines a classic photometric loss function with our proposed conditional point control loss (cpcl) function for diverse point-wise supervision; 2) a conditioned controlling model which replaces the conventional feature encoder with our proposed condition control encoder (cce). cce incorporates a frame feature encoder (ffe) that extracts features from frames, a condition feature encoder (cfe) that learns to control the feature extraction behavior of ffe from input masks containing information on key points, and fusion modules that transfer the controlling information between ffe and cfe. our focusflow framework shows outstanding performance with up to +44.5% precision improvement on various key points such as orb, sift, and even the learning-based silk, along with exceptional scalability for most existing data-driven optical flow methods like pwc-net, raft, and flowformer. notably, focusflow yields competitive or superior performance rivaling the original models on the whole frame. the source code will be available at https://github.com/zhonghuayi/focusflow_official.
arxiv:2308.07104
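the mix loss idea above, a dense term over all pixels plus an extra term restricted to key points, can be sketched as follows (an illustrative stand-in using endpoint error everywhere; the paper's actual cpcl and photometric terms differ):

```python
import numpy as np

def mix_loss(flow_pred, flow_gt, keypoint_mask, lam=1.0):
    """Toy mix loss: mean endpoint error over all pixels plus a
    key-point-restricted endpoint-error term weighted by lam.

    flow_pred, flow_gt: (H, W, 2) flow fields; keypoint_mask: (H, W) bool.
    """
    epe = np.linalg.norm(flow_pred - flow_gt, axis=-1)  # per-pixel endpoint error
    dense = epe.mean()
    kp = epe[keypoint_mask].mean() if keypoint_mask.any() else 0.0
    return dense + lam * kp
```

increasing `lam` shifts supervision toward the key points, which is the point-wise emphasis the framework is built around.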
interaction of electric fields with biological cells is indispensable for many physiological processes. thermal electrical noise in the cellular environment has long been considered as the minimum threshold for detection of electrical signals by cells. however, there is compelling experimental evidence that the minimum electric field sensed by certain cells and organisms is many orders of magnitude weaker than the thermal electrical noise limit estimated purely under equilibrium considerations. we resolve this discrepancy by proposing a non - equilibrium statistical mechanics model for active electromechanical membranes and hypothesize the role of activity in modulating the minimum electrical field that can be detected by a biological membrane. active membranes contain proteins that use external energy sources to carry out specific functions and drive the membrane away from equilibrium. the central idea behind our model is that active mechanisms, attributed to different sources, endow the membrane with the ability to sense and respond to electric fields that are deemed undetectable based on equilibrium statistical mechanics. our model for active membranes is capable of reproducing different experimental data available in the literature by varying the activity. elucidating how active matter can modulate the sensitivity of cells to electric signals can open avenues for a deeper understanding of physiological and pathological processes.
arxiv:2412.16319
(abridged) the pierre auger collaboration has reported 27 ultra-high energy cosmic ray (uhecr) events with energies above 56 eev and well-determined arrival directions as of 2007 august 31. they find that the arrival directions are not isotropic, but instead appear correlated with the positions of nearby agns. our aim was to determine the sources of these uhecrs by comparing their arrival directions with more comprehensive source catalogs. four (eight) of the 27 uhecrs with energy > 56 eev detected by the pierre auger observatory have arrival directions within 1.5 deg (3.5 deg) of the extended (> 180 kpc) radio structures of nearby radio galaxies or of the single nearby bl lac with extended radio structure. conversely, the radio structures of three (six) of the ten nearest extended radio galaxies are within 1.5 deg (3.5 deg) of a uhecr; three of the remaining four radio galaxies are in directions with lower exposure times. this correlation between nearby extended radio galaxies and a subset of uhecrs is significant at the 99.9% level. this is the first direct observational proof that radio galaxies are a significant source of uhecrs. for the remaining ~20 uhecrs, an isotropic distribution cannot be ruled out at high significance. the correlation found by the auger collaboration between the 27 uhecrs and agns in the veron-cetty & veron catalog at d < 71 mpc has a much lower significance when one considers only the ~20 uhecrs not 'matched' to nearby extended radio galaxies. no correlation is seen between uhecrs and supernovae, supernova remnants, nearby galaxies, or nearby groups and clusters of galaxies. the primary difference between the uhecr detections at the pierre auger observatory and previous experiments may thus be that the southern hemisphere is more privileged with respect to nearby extended radio galaxies.
arxiv:0806.3220
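the matching of uhecr arrival directions to source positions within 1.5 or 3.5 degrees rests on great-circle angular separations; the standard haversine form (a generic utility, not the authors' code) is:

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions
    given in degrees (haversine form, numerically stable at small angles)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    s = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(s)))
```

counting uhecr-source pairs with separation below the chosen threshold, and comparing to the expectation for isotropic directions weighted by the exposure map, gives the kind of significance quoted above.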
recently, a. v. boris and colleagues claimed to deduce a decrease of intraband spectral weight ( sw ), and a transfer of sw from intraband to inter - band frequencies, when optimally - doped or slightly underdoped cuprates become superconducting [ a. v. boris et al., science 304, p. 708 ( 2004 ) ]. we show that, while their data agree with others [ h. j. a. molegraaf et al., science 295, p. 2239 ( 2002 ) ; a. f. santander - syro et al., europhys. lett 62, p. 568 ( 2003 ) ], their analysis is flawed. they cannot disprove the results which yield a superconductivity - induced increase of intraband sw, and a transfer of sw from high to low frequencies, in underdoped or nearly optimally doped bi - 2212.
arxiv:cond-mat/0503767
a neutron star in a compact binary is expected to be well - approximated by a barotropic flow during the inspiral phase. during the merger phase, where tidal disruption and shock - heating occur, a baroclinic description is needed instead. in the barotropic case, a hamiltonian formulation potentially offers unique benefits for numerical relativity simulations of the inspiral phase, including highly accurate conservation of circulation and superconvergence of the fluid variables, and is actively being explored. in this work, we investigate the viability of a hamiltonian formulation in the baroclinic case. at odds with the barotropic case, this formulation is non - conservative, yet it can be treated well with approximate riemann solver algorithms since the non - conservative terms vanish across genuinely nonlinear fields. nonetheless, using numerical 1 - dimensional shock tube tests we find that the weak solutions of the hamiltonian system differ from the standard ones obtained by enforcing conservation of rest mass density, momentum density, and energy density across discontinuities. we also show that barotropic hamiltonian formulations can admit shockwaves at fluid - vacuum interfaces, which may be related to the unstable behavior of stellar surfaces observed in past numerical tests. in light of the unphysical weak solutions, we expect that in future implementations of the hamiltonian formulation of hydrodynamics in numerical relativity it will be necessary to use an explicitly barotropic formulation during the inspiral phase, and then switch to a robust baroclinic formulation prior to merger.
arxiv:2004.15000
discovering patterns in networks of protein - protein interactions ( ppis ) is a central problem in systems biology. alignments between these networks aid functional understanding as they uncover important information, such as evolutionary conserved pathways, protein complexes and functional orthologs. the objective of a multiple network alignment is to create clusters of nodes that are evolutionarily conserved and functionally consistent across all networks. unfortunately, the alignment methods proposed thus far do not fully meet this objective, as they are guided by pairwise scores that do not utilize the entire functional and topological information across all networks. to overcome this weakness, we propose fuse, a multiple network aligner that utilizes all functional and topological information in all ppi networks. it works in two steps. first, it computes novel similarity scores of proteins across the ppi networks by fusing from all aligned networks both the protein wiring patterns and their sequence similarities. it does this by using non - negative matrix tri - factorization ( nmtf ). when we apply nmtf on the five largest and most complete ppi networks from biogrid, we show that nmtf finds a larger number of protein pairs across the ppi networks that are functionally conserved than can be found by using protein sequence similarities alone. this demonstrates complementarity of protein sequence and their wiring patterns in the ppi networks. in the second step, fuse uses a novel maximum weight k - partite matching approximation algorithm to find an alignment between multiple networks. we compare fuse with the state of the art multiple network aligners and show that it produces the largest number of functionally consistent clusters that cover all aligned ppi networks. also, fuse is more computationally efficient than other multiple network aligners.
arxiv:1410.7585
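the nonnegative matrix tri-factorization step mentioned above can be sketched with the standard multiplicative update rules for $x \approx f s g^t$ (a generic sketch; fuse's exact variant, initialization, and fused similarity matrices are not reproduced here):

```python
import numpy as np

def nmtf(X, k1, k2, n_iter=200, seed=0, eps=1e-9):
    """Multiplicative-update sketch for X ~ F S G^T with F, S, G >= 0,
    minimizing ||X - F S G^T||_F^2 (standard rules; each update is
    non-increasing for its subproblem with the others held fixed)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    F = rng.random((n, k1))
    S = rng.random((k1, k2))
    G = rng.random((m, k2))
    for _ in range(n_iter):
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
    return F, S, G
```

in the fuse setting, $x$ would encode wiring and sequence similarities across networks, and the learned factors induce the cross-network protein similarity scores.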
we show that for each element $g$ of a garside group, there exists a positive integer $m$ such that $g^m$ is conjugate to a periodically geodesic element $h$, an element with $|h^n|_\d = |n| \cdot |h|_\d$ for all integers $n$, where $|g|_\d$ denotes the shortest word length of $g$ with respect to the set $\d$ of simple elements. we also show that there is a finite-time algorithm that computes, given an element of a garside group, its stable super summit set.
arxiv:math/0604144
in neutral meson mixing, a certain class of convolution integrals is required whose solution involves the error function $\mathrm{erf}(z)$ of a complex argument $z$. we show the general shape of the analytic solution of these integrals, and give expressions which allow the normalisation of these expressions for use in probability density functions. furthermore, we derive expressions which allow a (decay time) acceptance to be included in these integrals, or allow the calculation of moments. we also describe the implementation of numerical routines which allow the numerical evaluation of $w(z) = e^{-z^2}(1 - \mathrm{erf}(-iz))$, sometimes also called the faddeeva function, in c++. these new routines improve over the old cernlib routine(s) wwerf/cwerf in terms of both speed and accuracy. they are part of the roofit package and have been distributed with it since root version 5.34/08.
arxiv:1407.0748
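the defining identity $w(z) = e^{-z^2}(1 - \mathrm{erf}(-iz))$ can be checked numerically with a simple maclaurin series for the complex error function (adequate only for moderate $|z|$; production implementations like wwerf or the roofit routines use much faster rational approximations):

```python
import cmath
import math

def erf_series(z, terms=60):
    """Maclaurin series erf(z) = (2/sqrt(pi)) * sum (-1)^n z^(2n+1)/(n!(2n+1));
    fine for moderate |z|, not a production implementation."""
    s = 0j
    term = z                       # term_n = (-1)^n z^(2n+1) / n!
    for n in range(terms):
        s += term / (2 * n + 1)
        term *= -z * z / (n + 1)
    return 2 / math.sqrt(math.pi) * s

def w(z):
    """Faddeeva function w(z) = exp(-z^2) * (1 - erf(-i z))."""
    return cmath.exp(-z * z) * (1 - erf_series(-1j * z))
```

useful sanity checks: $w(0) = 1$, $w(iy) = e^{y^2}\,\mathrm{erfc}(y)$ is real for real $y$, and $\mathrm{re}\,w(x) = e^{-x^2}$ for real $x$.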
can we model non-euclidean graphs as pure language, or even as euclidean vectors, while retaining their inherent information? the non-euclidean property has posed a long-term challenge in graph modeling. despite recent efforts by graph neural networks and graph transformers to encode graphs as euclidean vectors, recovering the original graph from vectors remains a challenge. in this paper, we introduce graphsgpt, featuring a graph2seq encoder that transforms non-euclidean graphs into learnable graph words in euclidean space, along with a graphgpt decoder that reconstructs the original graph from graph words to ensure information equivalence. we pretrain graphsgpt on 100m molecules and report some interesting findings: (1) the pretrained graph2seq excels in graph representation learning, achieving state-of-the-art results on 8/9 graph classification and regression tasks. (2) the pretrained graphgpt serves as a strong graph generator, demonstrated by its ability to perform both few-shot and conditional graph generation. (3) graph2seq + graphgpt enables effective graph mixup in euclidean space, overcoming previously known non-euclidean challenges. (4) the edge-centric pretraining framework graphsgpt demonstrates its efficacy in graph domain tasks, excelling in both representation and generation. code is available at https://github.com/a4bio/graphsgpt.
arxiv:2402.02464
we consider thin spherical shells of matter in both newtonian gravity and general relativity, and examine their equilibrium configurations and dynamical stability. thin - shell models are admittedly a poor substitute for realistic stellar models. but the simplicity of the equations that govern their dynamics, compared with the much more complicated mechanics of a self - gravitating fluid, allows us to deliver, in a very direct and easy manner, powerful insights regarding their equilibria and stability. we explore, in particular, the link between the existence of a maximum mass along a sequence of equilibrium configurations and the onset of dynamical instability. such a link is well - established in the case of fluid bodies in both newtonian gravity and general relativity, but the demonstration of this link is both subtle and difficult. the proof is very simple, however, in the case of thin shells, and it is constructed with nothing more than straightforward algebra and a little calculus.
arxiv:1909.06253
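the link between a maximum mass and the onset of instability discussed above is usually phrased as a turning-point criterion; a schematic statement (our paraphrase, not the paper's equations):

```latex
% Along a one-parameter sequence of equilibrium configurations M(\lambda)
% (for fluid stars \lambda is typically the central density; for thin
% shells a parameter such as the shell radius at fixed rest mass), a
% change of dynamical stability can occur only where the mass is stationary:
\left.\frac{dM}{d\lambda}\right|_{\text{equilibrium sequence}} = 0 ,
% so the maximum-mass configuration marks the onset of instability.
```

the paper's contribution is that, for thin shells, this statement can be verified with elementary algebra and calculus rather than the full perturbation analysis needed for fluid bodies.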
powerful large language models (llms) are increasingly expected to be deployed with lower computational costs, enabling their capabilities on resource-constrained devices. post-training quantization (ptq) has emerged as a star approach to achieve this ambition, with the best methods compressing weights to less than 2 bits on average. in this paper, we propose channel-relaxed vector quantization (crvq), a novel technique that significantly improves the performance of ptq baselines at the cost of only minimal additional bits. this state-of-the-art extreme compression method achieves its results through two key innovations: (1) carefully selecting and reordering a very small subset of critical weight channels, and (2) leveraging extended codebooks to relax the constraint on critical channels. with our method, we demonstrate a 38.9% improvement over the current strongest sub-2-bit ptq baseline, enabling nearly lossless 1-bit compression. furthermore, our approach offers flexible customization of quantization bit-width and performance, providing a wider range of deployment options for diverse hardware platforms.
arxiv:2412.09282
The selective optical detection of individual metallic nanoparticles (NPs) with high spatial and temporal resolution is a challenging endeavour, yet is key to the understanding of their optical response and their exploitation in applications from miniaturised optoelectronics and sensors to medical diagnostics and therapeutics. However, only a few reports on ultrafast pump-probe spectroscopy of single small metallic NPs are available to date. Here, we demonstrate a novel phase-sensitive four-wave mixing (FWM) microscopy in heterodyne detection to resolve for the first time the ultrafast changes of the real and imaginary parts of the dielectric function of single small (<40 nm) spherical gold NPs. The results are quantitatively described via the transient electron temperature and density in gold, considering both intraband and interband transitions at the surface plasmon resonance. This novel microscopy technique enables background-free detection of the complex susceptibility change even in highly scattering environments and can be readily applied to any metal nanostructure.
arxiv:1202.4178
The physics of neutron star crusts is vast, involving many different research fields, from nuclear and condensed matter physics to general relativity. This review summarizes the progress achieved over the last few years in modeling neutron star crusts, both at the microscopic and macroscopic levels. The confrontation of these theoretical models with observations is also briefly discussed.
arxiv:0812.3955
Given a family $\mathcal{H}$ of graphs, a graph $G$ is called $\mathcal{H}$-universal if $G$ contains every graph of $\mathcal{H}$ as a subgraph. Following the extensive research on universal graphs of small size for bounded-degree graphs, Alon asked what is the minimum number of edges that a graph must have to be universal for the class of all $n$-vertex graphs that are $d$-degenerate. In this paper, we answer this question up to a factor that is polylogarithmic in $n$.
arxiv:2309.05468
There is a growing need for hardware-software contracts which precisely define the implications of microarchitecture on software security, i.e., security contracts. It is our view that such contracts should explicitly account for microarchitecture-level implementation details that underpin hardware leakage, thereby establishing a direct correspondence between a contract and the microarchitecture it represents. At the same time, these contracts should remain as abstract as possible so as to support efficient formal analyses. With these goals in mind, we propose leakage containment models (LCMs): novel axiomatic security contracts which support formally reasoning about the security guarantees of programs when they run on particular microarchitectures. Our core contribution is an axiomatic vocabulary for formally defining LCMs, derived from the established axiomatic vocabulary used to formalize processor memory consistency models. Using this vocabulary, we formalize microarchitectural leakage, focusing on leakage through hardware memory systems, so that it can be automatically detected in programs. To illustrate the efficacy of LCMs, we present two case studies. First, we demonstrate that our leakage definition faithfully captures a sampling of (transient and non-transient) microarchitectural attacks from the literature. Second, we develop a static analysis tool based on LCMs which automatically identifies Spectre vulnerabilities in programs and scales to analyze realistic-sized codebases, like libsodium.
arxiv:2112.10511
We solve a finite-range two-channel model for three resonant identical bosons. The model provides a minimal description of the various magnetic Feshbach resonances in single-species ultra-cold bosonic systems, including off-resonant scattering. We obtain important insights into the interpretation of seminal experiments: the three-body recombination rate measured in sodium and the Efimov resonances observed in caesium. This approach quantifies non-universal effects appearing for a finite magnetic field detuning.
arxiv:0903.3808
The state-of-the-art StyleGAN2 network supports powerful methods to create and edit art, including generating random images, finding images "like" some query, and modifying content or style. Further, recent advancements enable training with small datasets. We apply these methods to synthesize card art, by training on a novel Yu-Gi-Oh dataset. While noise inputs to StyleGAN2 are essential for good synthesis, we find that coarse-scale noise interferes with latent variables on this dataset because both control long-scale image effects. We observe over-aggressive variation in art with changes in noise and weak content control via latent variable edits. Here, we demonstrate that training a modified StyleGAN2, where coarse-scale noise is suppressed, removes these unwanted effects. We obtain a superior FID; changes in noise result in local exploration of style; and identity control is markedly improved. These results and analysis lead towards a GAN-assisted art synthesis tool for digital artists of all skill levels, which can be used in film, games, or any creative industry for artistic ideation.
arxiv:2108.08922
Tutorial videos are a popular help source for learning feature-rich software. However, getting quick answers to questions about tutorial videos is difficult. We present an automated approach for responding to tutorial questions. By analyzing 633 questions found in 5,944 video comments, we identified different question types and observed that users frequently described parts of the video in questions. We then asked participants (n = 24) to watch tutorial videos and ask questions while annotating the video with relevant visual anchors. Most visual anchors referred to UI elements and the application workspace. Based on these insights, we built AQuA, a pipeline that generates useful answers to questions with visual anchors. We demonstrate this for Fusion 360, showing that we can recognize UI elements in visual anchors and generate answers using GPT-4 augmented with that visual information and software documentation. An evaluation study (n = 16) demonstrates that our approach provides better answers than baseline methods.
arxiv:2403.05213
This paper presents a surrogate modelling technique based on domain partitioning for Bayesian parameter inference of highly nonlinear engineering models. In order to alleviate the computational burden typically involved in Bayesian inference applications, a multielement polynomial chaos expansion based Kriging metamodel is proposed. The developed surrogate model combines in a piecewise function an array of local polynomial chaos based Kriging metamodels constructed on a finite set of non-overlapping subdomains of the stochastic input space. Therewith, the presence of non-smoothness in the response of the forward model (e.g. nonlinearities and sparseness) can be reproduced by the proposed metamodel with minimum computational costs owing to its local adaptation capabilities. The model parameter inference is conducted through a Markov chain Monte Carlo approach comprising adaptive exploration and delayed rejection. The efficiency and accuracy of the proposed approach are validated through two case studies, including an analytical benchmark and a numerical case study. The latter relates to the partial differential equation governing the hydrogen diffusion phenomenon of metallic materials in thermal desorption spectroscopy tests.
arxiv:2212.02250
On September 21, 2012, we carried out spectral observations of a solar facula in the Si I 10827 Å, He I 10830 Å, and Hα spectral lines. Later, in the process of analyzing the data, we found a small-scale flare in the middle of the time series. Due to an anomalous increase in the absorption of the He I 10830 Å line, we identified this flare as a negative flare. The aim of this paper is to study the influence of the negative flare on the oscillation characteristics in the facular photosphere and chromosphere. We measured the line-of-sight (LOS) velocity and intensity of all three lines as well as the half-width of the chromospheric lines. We also used SDO/HMI magnetic field data. The flare caused modulation of all the studied parameters. In the location of the negative flare, the amplitude of the oscillations increased four times on average. In the adjacent magnetic field local maxima, the chromospheric LOS velocity oscillations appreciably decreased during the flare. The facula region oscillated as a whole with a 5-minute period before the flare, and this synchronicity was disrupted after the flare. The flare changed the spectral composition of the line-of-sight magnetic field oscillations, causing an increase in the low-frequency oscillation power.
arxiv:1810.10153
Theoretically, it has been proposed that objects traveling radially along regular black holes (RBHs) would not be destroyed because of finite tidal forces and the absence of a singularity. However, the matter source allows the creation of an inner horizon linked to an unstable de Sitter core due to mass inflation instability. This inner horizon also gives rise to the appearance of a remnant, inhibiting complete evaporation. We introduce here a $d$-dimensional black hole model with localized sources of matter (LSM), characterized by the absence of an inner horizon and featuring a central integrable singularity instead of an unstable de Sitter core. In our model, any object tracing a radial and timelike world-line would not be crushed by the singularity. This is attributed to finite tidal forces, the extendability of radial geodesics, and the weak nature of the singularity. Our LSM model enables the potential complete evaporation down to $r_h = 0$ without forming a remnant. In higher dimensions, complete evaporation occurs through a phase transition, which could occur at Planck scales and be speculatively driven by the generalized uncertainty principle (GUP). Unlike RBHs, our model satisfies the energy conditions. We demonstrate a linear correction to the conventional area law of entropy, distinct from the RBH's correction. Additionally, we investigate the stability of the solutions through the speed of sound.
arxiv:2310.01734
We show that inflation which is dominated by the D-term density avoids the 'slow-roll' problem of inflation in supergravity. Such an inflationary scenario can naturally emerge in theories with non-anomalous or anomalous U(1) gauge symmetry. In the latter case the scale of inflation is fixed by the Green-Schwarz mechanism of anomaly cancellation. The crucial point is that the (super)gravity-mediated curvature of all the scalar fields (and, in particular, of the inflaton), which in the standard F-dominated case is of the order of the Hubble parameter, is absent in the D-term inflation case. The curvature of moduli and of all other flat directions during such an inflation crucially depends on their gauge charges.
arxiv:hep-ph/9606342
Academic journal recommendation requires effectively combining structural understanding of scholarly networks with interpretable recommendations. While graph neural networks (GNNs) and large language models (LLMs) excel in their respective domains, current approaches often fail to achieve true integration at the reasoning level. We propose HetGCoT-Rec, a framework that deeply integrates a heterogeneous graph transformer with LLMs through chain-of-thought reasoning. Our framework features two key technical innovations: (1) a structure-aware mechanism that transforms subgraph information learned by a heterogeneous graph neural network into natural language contexts, utilizing predefined metapaths to capture academic relationships, and (2) a multi-step reasoning strategy that systematically embeds graph-derived contexts into the LLM's stage-wise reasoning process. Experiments on a dataset collected from OpenAlex demonstrate that our approach significantly outperforms baseline methods, achieving a 96.48% hit rate and 92.21% H@1 accuracy. Furthermore, we validate the framework's adaptability across different LLM architectures, showing consistent improvements in both recommendation accuracy and explanation quality. Our work demonstrates an effective approach for combining graph-structured reasoning with language models for interpretable academic venue recommendations.
arxiv:2501.01203
The central subspace of a pair of random variables $(y, x) \in \mathbb{R}^{p+1}$ is the minimal subspace $\mathcal{S}$ such that $y \perp\!\!\!\perp x \mid P_{\mathcal{S}} x$. In this paper, we consider the minimax rate of estimating the central space of the multiple index models $y = f(\beta_1^{\tau} x, \beta_2^{\tau} x, \ldots, \beta_d^{\tau} x, \epsilon)$ with at most $s$ active predictors, where $x \sim N(0, I_p)$. We first introduce a large class of models depending on the smallest non-zero eigenvalue $\lambda$ of $\mathrm{Var}(\mathbb{E}[x|y])$, over which we show that an aggregated estimator based on the SIR procedure converges at rate $d \wedge ((sd + s\log(ep/s))/(n\lambda))$. We then show that this rate is optimal in two scenarios: the single index models, and the multiple index models with fixed central dimension $d$ and fixed $\lambda$. By assuming a technical conjecture, we can show that this rate is also optimal for multiple index models with bounded dimension of the central space. We believe that these (conditional) optimal rate results bring us meaningful insights into general SDR problems in high dimensions.
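For readers unfamiliar with the SIR procedure underlying the aggregated estimator, a minimal (non-sparse, non-aggregated) version can be sketched as follows; the function name and slice count are illustrative, and this omits the paper's sparsity and aggregation machinery:

```python
import numpy as np

def sir_directions(X, y, n_slices=10, d=1):
    """Basic sliced inverse regression: estimate span(Var(E[X|Y])) by
    slicing on y and eigendecomposing the covariance of slice means."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    order = np.argsort(y)                      # sort samples by response
    M = np.zeros((p, p))
    for chunk in np.array_split(order, n_slices):
        m = Xc[chunk].mean(axis=0)             # slice mean of the predictors
        M += (len(chunk) / n) * np.outer(m, m) # weighted covariance of slice means
    vals, vecs = np.linalg.eigh(M)             # eigh returns ascending eigenvalues
    return vecs[:, ::-1][:, :d]                # top-d eigenvectors span the estimate
```

On a single index model with a monotone link, the leading eigenvector of the slice-mean covariance aligns closely with the true index direction.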
arxiv:1701.06009
(1998). Complexity Theory and the Social Sciences: An Introduction. Routledge. ISBN 978-0-415-16296-8.
Kuper, A., and Kuper, J. (1985). The Social Science Encyclopedia. London: Routledge & Kegan Paul. (ed., a limited preview of the 1996 version is available)
Lave, C. A., and March, J. G. (1993). An Introduction to Models in the Social Sciences. Lanham, MD: University Press of America.
Perry, John and Erna Perry. Contemporary Society: An Introduction to Social Science (12th edition, 2008), college textbook.
Potter, D. (1988). Society and the Social Sciences: An Introduction. London: Routledge [u.a.].
David L. Sills and Robert K. Merton (1968). International Encyclopedia of the Social Sciences.
Seligman, Edwin R. A. and Alvin Johnson (1934). Encyclopedia of the Social Sciences. (13 vol.)
Ward, L. F. (1924). Dynamic Sociology, or Applied Social Science: As Based upon Statical Sociology and the Less Complex Sciences. New York: D. Appleton.
Leavitt, F. M., and Brown, E. (1920). Elementary Social Science. New York: Macmillan.
Bogardus, E. S. (1913). Introduction to the Social Sciences: A Textbook Outline. Los Angeles: Ralston Press.
Small, A. W. (1910). The Meaning of Social Science. Chicago: The University of Chicago Press.
=== 19th century sources ===
Andrews, S. P. (1888). The Science of Society. Boston, Mass: Sarah E. Holmes.
Denslow, V. B. (1882). Modern Thinkers Principally upon Social Science: What They Think, and Why. Chicago: Belford, Clarke & Co.
Harris, William Torrey (1879). Method of Study in Social Science: A Lecture Delivered before the St. Louis Social Science Association, March 4, 1879. St. Louis: G. I. Jones and Co, 1879.
Hamilton, R. S. (1873). Present Status of Social Science. A Review, Historical and Critical, of the Progress of Thought in Social Philosophy. New York: H. L. Hinton.
Carey, H. C. (1867). Principles of Social Science. Philadelphia: J. B. Lippincott & Co. [etc.]
https://en.wikipedia.org/wiki/Social_science
We investigate the upper bound of the charge diffusion constant in holography. For this purpose, we apply the conjectured upper bound proposal related to the equilibration scales $(\omega_{\text{eq}}, k_{\text{eq}})$ to the Einstein-Maxwell-axion model. $(\omega_{\text{eq}}, k_{\text{eq}})$ is defined as the collision point between the diffusive hydrodynamic mode and the first non-hydrodynamic mode, giving rise to the upper bound of the diffusion constant $D$ at low temperature $T$ as $D = \omega_{\text{eq}} / k_{\text{eq}}^2$. We show that the upper bound proposal also works for the charge diffusion and that $(\omega_{\text{eq}}, k_{\text{eq}})$, at low $T$, is determined by $D$ and the scaling dimension $\Delta(0)$ of an infra-red operator as $(\omega_{\text{eq}},\, k_{\text{eq}}^2) = (2\pi T \Delta(0),\, \omega_{\text{eq}}/D)$, as for other diffusion constants. However, for the charge diffusion, we find that the collision occurs at real $k_{\text{eq}}$, while it is complex for other diffusions. In order to examine the universality of the conjectured upper bound, we also introduce a higher derivative coupling to the Einstein-Maxwell-axion model. This coupling is particularly interesting since it leads to the violation of the lower bound of the charge diffusion constant, so the correction may also have effects on the upper bound of the charge diffusion. We find that the higher derivative coupling does not affect the upper bound, so the conjectured upper bound would not be easily violated.
arxiv:2111.07515
Wasserstein distributionally robust optimization (WDRO) is a popular model to enhance the robustness of machine learning with ambiguous data. However, the complexity of WDRO can be prohibitive in practice since solving its "minimax" formulation requires a great amount of computation. Recently, several fast WDRO training algorithms for some specific machine learning tasks (e.g., logistic regression) have been developed. However, the research on designing efficient algorithms for general large-scale WDROs is still quite limited, to the best of our knowledge. Coreset is an important tool for compressing large datasets, and thus it has been widely applied to reduce the computational complexities of many optimization problems. In this paper, we introduce a unified framework to construct the $\epsilon$-coreset for general WDRO problems. Though it is challenging to obtain a conventional coreset for WDRO due to the uncertainty issue of ambiguous data, we show that we can compute a "dual coreset" by using the strong duality property of WDRO. Also, the error introduced by the dual coreset can be theoretically guaranteed for the original WDRO objective. To construct the dual coreset, we propose a novel grid sampling approach that is particularly suitable for the dual formulation of WDRO. Finally, we implement our coreset approach and illustrate its effectiveness for several WDRO problems in the experiments.
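The paper's grid sampling operates on the dual formulation of WDRO; purely as a loose, self-contained illustration of the general grid-sampling idea for dataset compression, one can bucket points into a uniform grid and keep one weighted representative per occupied cell (all names and the cell-mean choice here are assumptions, not the authors' construction):

```python
import numpy as np

def grid_coreset(X, cells_per_dim=4):
    """Toy grid-based coreset: partition the bounding box into a uniform grid
    and keep one weighted representative (the cell mean) per occupied cell."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    # integer grid coordinates for every point, clipped into [0, cells_per_dim - 1]
    ids = np.minimum(((X - lo) / span * cells_per_dim).astype(int), cells_per_dim - 1)
    keys = [tuple(r) for r in ids]
    reps, weights = [], []
    for key in sorted(set(keys)):
        mask = np.array([k == key for k in keys])
        reps.append(X[mask].mean(axis=0))   # representative point for the cell
        weights.append(mask.sum())          # weight = number of points it stands for
    return np.array(reps), np.array(weights)
```

By construction the weighted coreset preserves the total count and the sample mean exactly, while shrinking the dataset to at most one point per occupied cell.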
arxiv:2210.04260
We introduce the notion of a manifold admitting a simple compact Cartan 3-form $\omega^3$. We study algebraic types of such manifolds, specializing on those having skew-symmetric torsion, or those associated with a closed or coclosed 3-form $\omega^3$. We prove the existence of an algebra of multi-symplectic forms $\Phi^l$ on these manifolds. Cohomology groups associated with complexes of differential forms on $M^n$ in the presence of such a closed multi-symplectic form $\Phi^l$, and their relations with the de Rham cohomologies of $M$, are investigated. We show rigidity of a class of strongly associative (resp. strongly coassociative) submanifolds. We include an appendix describing all connected simply connected complete Riemannian manifolds admitting a parallel 3-form.
arxiv:1103.1201
Wearable technologies enable continuous monitoring of various health metrics, such as physical activity, heart rate, sleep, and stress levels. A key challenge with wearable data is obtaining quality labels. Unlike modalities like video, where the videos themselves can be effectively used to label objects or events, wearable data do not contain obvious cues about the physical manifestation of the users and usually require rich metadata. As a result, label noise can become an increasingly thorny issue when labeling such data. In this paper, we propose a novel solution to address noisy label learning, entitled Few-Shot Human-in-the-Loop Refinement (FHLR). Our method initially learns a seed model using weak labels. Next, it fine-tunes the seed model using a handful of expert corrections. Finally, it achieves better generalizability and robustness by merging the seed and fine-tuned models via weighted parameter averaging. We evaluate our approach on four challenging tasks and datasets, and compare it against eight competitive baselines designed to deal with noisy labels. We show that FHLR achieves significantly better performance when learning from noisy labels and achieves state-of-the-art by a large margin, with up to 19% accuracy improvement under symmetric and asymmetric noise. Notably, we find that FHLR is particularly robust to increased label noise, unlike prior works that suffer from severe performance degradation. Our work not only achieves better generalization in high-stakes health sensing benchmarks but also sheds light on how noise affects commonly-used models.
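The three-stage pipeline described above (seed model on weak labels, fine-tuning on expert corrections, merging by weighted parameter averaging) can be sketched with a simple logistic model. The training routine, learning rate, and merge weight `alpha` are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, steps=300):
    """Plain gradient-descent logistic regression (stand-in for any model)."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fhlr(X_weak, y_weak, X_expert, y_expert, alpha=0.5):
    """FHLR-style pipeline (sketch): seed on weak labels, fine-tune on a few
    expert corrections, then merge via weighted parameter averaging."""
    w_seed = train_logreg(X_weak, y_weak)                 # stage 1: weak labels
    w_ft = train_logreg(X_expert, y_expert, w=w_seed.copy())  # stage 2: expert few-shot
    return alpha * w_seed + (1 - alpha) * w_ft            # stage 3: parameter merge
```

The merge step is the same kind of weight-space averaging used in model-soup-style methods; `alpha` trades robustness of the seed against the expert-refined fit.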
arxiv:2401.14107
We construct vertex algebraic intertwining operators among certain generalized Verma modules for $\widehat{\mathfrak{sl}(2,\mathbb{C})}$ and calculate the corresponding fusion rules. Additionally, we show that under some conditions these intertwining operators descend to intertwining operators among one generalized Verma module and two (generally non-standard) irreducible modules. Our construction relies on the irreducibility of the maximal proper submodules of generalized Verma modules appearing in the Garland-Lepowsky resolutions of standard $\widehat{\mathfrak{sl}(2,\mathbb{C})}$-modules. We prove this irreducibility using the composition factor multiplicities of irreducible modules in Verma modules for symmetrizable Kac-Moody Lie algebras of rank $2$, given by Rocha-Caridi and Wallach.
arxiv:1510.05457
Genomes may be analyzed from an information viewpoint as very long strings, containing functional elements of variable length, which have been assembled by evolution. In this work an innovative information theory based algorithm is proposed to extract significant (relatively small) dictionaries of genomic words. Namely, conceptual analyses are here combined with empirical studies, to open up a methodology for the extraction of variable length dictionaries from genomic sequences, based on the information content of some factors. Its application to human chromosomes highlights an original inter-chromosomal similarity in terms of factor distributions.
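As a crude illustration of scoring variable-length words by information content against a background model (not the authors' algorithm; the score, parameters, and function name are all assumptions for the sketch):

```python
from collections import Counter
import math

def informative_words(seq, kmin=2, kmax=4, top=5):
    """Toy dictionary extraction: score each k-mer by how many bits its observed
    frequency saves relative to an i.i.d. single-letter background model."""
    base = Counter(seq)
    n = len(seq)
    scores = {}
    for k in range(kmin, kmax + 1):
        counts = Counter(seq[i:i + k] for i in range(n - k + 1))
        total = n - k + 1
        for w, c in counts.items():
            p_obs = c / total                              # empirical k-mer frequency
            p_iid = math.prod(base[ch] / n for ch in w)    # background probability
            scores[w] = c * math.log2(p_obs / p_iid)       # total bits saved vs i.i.d.
    return sorted(scores, key=scores.get, reverse=True)[:top]
```

Words that recur far more often than their letter composition predicts, such as a repeated motif, rise to the top of the dictionary.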
arxiv:2009.10449
Cutting planes are of crucial importance when solving nonconvex nonlinear programs to global optimality, for example using spatial branch-and-bound algorithms. In this paper, we discuss the generation of cutting planes for signomial programming. Many global optimization algorithms lift signomial programs into an extended formulation such that these algorithms can construct relaxations of the signomial program by outer approximations of the lifted set encoding nonconvex signomial term sets, i.e., hypographs or epigraphs of signomial terms. We show that any signomial term set can be transformed into the subset of the difference of two concave power functions, from which we derive two kinds of valid linear inequalities. Intersection cuts are constructed using signomial term-free sets, which do not contain any point of the signomial term set in their interior. We show that these signomial term-free sets are maximal in the nonnegative orthant, and use them to derive intersection cuts. We then convexify a concave power function in the reformulation of the signomial term set, resulting in a convex set containing the signomial term set. This convex outer approximation is constructed in an extended space, and we separate a class of valid linear inequalities by projection from this approximation. We implement the valid inequalities in a global optimization solver and test them on MINLPLib instances. Our results show that both types of valid inequalities provide comparable reductions in running time, number of search nodes, and duality gap.
arxiv:2212.02857
We measure neutrino charged-current quasielastic-like scattering on hydrocarbon at high statistics using the wide-band NuMI beam with neutrino energy peaked at 6 GeV. The double-differential cross section is reported in terms of muon longitudinal and transverse momentum. Cross-section contours versus lepton momentum components are approximately described by a conventional generator-based simulation; however, discrepancies are observed for transverse momenta above 0.5 GeV/c for longitudinal momentum ranges 3 to 5 GeV/c and 9 to 20 GeV/c. The single-differential cross section versus momentum transfer squared ($d\sigma/dQ^2_{QE}$) is measured over a four-decade range of $Q^2$ that extends to $10~\mathrm{GeV}^2$. The cross-section turn-over and fall-off in the $Q^2$ range 0.3 to $10~\mathrm{GeV}^2$ is not fully reproduced by generator predictions that rely on dipole form factors. Our measurement probes the axial-vector content of the hadronic current and complements the electromagnetic form factor data obtained using electron-nucleon elastic scattering. These results help oscillation experiments because they probe the importance of various correlations and final-state interaction effects within the nucleus, which have different effects on the visible energy in detectors.
arxiv:1912.09890
Deep generative models have recently achieved impressive performance in speech and music synthesis. However, compared to the generation of those domain-specific sounds, generating general sounds (such as sirens or gunshots) has received less attention, despite their wide applications. In previous work, the SampleRNN method was considered for sound generation in the time domain. However, SampleRNN is potentially limited in capturing long-range dependencies within sounds, as it only back-propagates through a limited number of samples. In this work, we propose a method for generating sounds via neural discrete time-frequency representation learning, conditioned on sound classes. This offers an advantage in efficiently modelling long-range dependencies and retaining local fine-grained structures within sound clips. We evaluate our approach on the UrbanSound8K dataset, compared to SampleRNN, with the performance metrics measuring the quality and diversity of generated sounds. Experimental results show that our method offers comparable performance in quality and significantly better performance in diversity.
arxiv:2107.09998
A physically based method to derive well-posed instances of the two-fluid transport equations for two-phase flow from the Hamilton principle is presented. The state of the two-fluid flow is represented by the superficial velocity and the drift flux, instead of the average velocities of each fluid. This generates the conservation equations of the two principal motion modes naturally: the global center-of-mass flow and the relative velocity between fluids. Well-posed equations can be obtained by modelling the storage of kinetic energy in fluctuation structures induced by the interaction between fluids, like wakes and vortices. In this way, the equations can be regularized without losing in the process the instabilities responsible for flow-pattern formation and transition. A specific case of vertical air-water flow is analyzed, showing the capability of the present model to predict the formation of the slug flow regime as trains of non-linear waves.
arxiv:2101.06339
In the baseline design of the International Linear Collider (ILC) an undulator-based source is foreseen for the positron source in order to match the physics requirements. The baseline parameters are optimized for the ILC at $\sqrt{s} = 500$ GeV, which means an electron drive beam of 250 GeV. Precision measurements in the Higgs sector, however, require measurements at $\sqrt{s} = 250$ GeV, i.e. running with the electron drive beam at only 125 GeV, which imposes a challenge for achieving a high yield. Therefore the baseline undulator parameters have to be optimized as much as possible within their technical performance limits. In this bachelor thesis we therefore present a theoretical study of the radiation spectra of a helical undulator, based on the equation for the radiated synchrotron energy spectral density per solid angle per electron in the relativistic, far-field and point-like charge approximation. From this starting point the following undulator properties are examined: the deposited power in the undulator vessel, which can disrupt the functionality of the undulator magnets; the protective effect of a mask against these disturbances; and the number of positrons produced by the synchrotron radiation in a Ti6Al4V target. These quantities were evaluated for various values of parameters such as undulator period, undulator length and magnetic flux, in order to find optimal baseline parameter sets for $\sqrt{s} = 250$ GeV.
arxiv:1902.07786
This study examines on-shell supersymmetry breaking in the abelian $\mathcal{N} = 1$ Chern-Simons-matter model within a three-dimensional spacetime. The classical Lagrangian is scale-invariant, but two-loop radiative corrections to the effective potential break this symmetry, along with gauge and on-shell supersymmetry. To investigate this issue, the renormalization group equation is used to calculate the two-loop effective potential.
arxiv:2305.03768
In this paper, a 3D-RegNet-based neural network is proposed for diagnosing the physical condition of patients with coronavirus (COVID-19) infection. In clinical medicine, lung CT images are used by practitioners to determine whether a patient is infected with coronavirus. However, this diagnostic method has some drawbacks, such as being time-consuming and having low accuracy. Since the lungs are a relatively large organ of the human body, important spatial features would be lost if they were diagnosed using two-dimensional slice images. Therefore, in this paper, a deep learning model with 3D images was designed. The 3D input image was composed of a two-dimensional pulmonary image sequence, from which relevant coronavirus infection 3D features were extracted and classified. The results show that on the test set of the 3D model, an F1 score of 0.8379 and an AUC value of 0.8807 have been achieved.
arxiv:2107.04055
(Abridged) We present mass models of a sample of 14 spiral and 14 S0 galaxies that constrain their stellar and dark matter content. For each galaxy we derive the stellar mass distribution from near-infrared photometry under the assumptions of axisymmetry and a constant Ks-band stellar mass-to-light ratio, (M/L)_Ks. To this we add a dark halo assumed to follow a spherically symmetric NFW profile and a correlation between concentration and dark mass within the virial radius, M_DM. We solve the Jeans equations for the corresponding potential under the assumption of constant anisotropy in the meridional plane, beta_z. By comparing the predicted second velocity moment to observed long-slit stellar kinematics, we determine the three best-fitting parameters of the model: (M/L)_Ks, M_DM and beta_z. These simple axisymmetric Jeans models are able to accurately reproduce the wide range of observed stellar kinematics, which typically extend to ~2-3 Re or, equivalently, ~0.5-1 R_25. We find a median stellar mass-to-light ratio at Ks-band of 1.09 (solar units) with an rms scatter of 0.31. We present preliminary comparisons between this large sample of dynamically determined stellar mass-to-light ratios and the predictions of stellar population models. The stellar population models predict slightly lower mass-to-light ratios than we measure. The mass models contain a median of 15 per cent dark matter by mass within an effective radius Re, and 49 per cent within the optical radius R_25. Dark and stellar matter contribute equally to the mass within a sphere of radius 4.1 Re or 1.0 R_25. There is no evidence of any significant difference in the dark matter content of the spirals and S0s in our sample.
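The NFW halo used above has a standard closed-form enclosed mass, which is what enters the Jeans modelling through the potential; a minimal helper (units and parameter names are arbitrary here):

```python
import math

def nfw_mass(r, rs, rho_s):
    """Enclosed mass of a spherical NFW halo:
    M(<r) = 4*pi*rho_s*rs^3 * [ln(1 + r/rs) - (r/rs)/(1 + r/rs)],
    with scale radius rs and characteristic density rho_s."""
    x = r / rs
    return 4 * math.pi * rho_s * rs**3 * (math.log(1 + x) - x / (1 + x))
```

For r much smaller than rs the bracket behaves like x^2/2, so the enclosed mass grows as r^2, reflecting the inner 1/r density cusp of the profile.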
arxiv:0909.0680
we prove the existence of time-periodic solutions and spatially localised solutions (breathers) in general nonlinear klein-gordon infinite lattices. the existence problem is converted into a fixed point problem for an operator on an appropriate function space, which is solved by means of schauder's fixed point theorem.
arxiv:2103.11854
this paper proposes a new estimator for selecting weights to average over least squares estimates obtained from a set of models. our proposed estimator builds on the mallows model average ( mma ) estimator of hansen ( 2007 ), but, unlike mma, simultaneously controls for location bias and regression error through a common constant. we show that our proposed estimator - - the mean - shift mallows model average ( msa ) estimator - - is asymptotically optimal to the original mma estimator in terms of mean squared error. a simulation study is presented, where we show that our proposed estimator uniformly outperforms the mma estimator.
arxiv:1912.01194
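For context, the baseline step that the proposed estimator builds on can be sketched as plain Mallows model averaging (Hansen 2007), not the mean-shift variant: choose simplex weights minimizing the Mallows criterion over fitted values from nested least squares models. The toy data and variable names below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 4))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

# fitted values and parameter counts for nested models using the first m regressors
fits, ks = [], []
for m in range(1, X.shape[1] + 1):
    beta = np.linalg.lstsq(X[:, :m], y, rcond=None)[0]
    fits.append(X[:, :m] @ beta)
    ks.append(m)
F, k = np.column_stack(fits), np.array(ks, dtype=float)
sigma2 = np.sum((y - fits[-1]) ** 2) / (n - ks[-1])  # error variance from largest model

def mallows(w):
    # C(w) = ||y - F w||^2 + 2 * sigma^2 * k'w
    return np.sum((y - F @ w) ** 2) + 2.0 * sigma2 * k @ w

cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
res = minimize(mallows, np.full(4, 0.25), bounds=[(0, 1)] * 4, constraints=cons)
w = res.x  # model-averaging weights on the unit simplex
```

The mean-shift variant additionally controls location bias through a common constant; that modification is not shown here.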
we present the first deep learning model for the analysis of intracytoplasmic sperm injection ( icsi ) procedures. using a dataset of icsi procedure videos, we train a deep neural network to segment key objects in the videos achieving a mean iou of 0. 962, and to localize the needle tip achieving a mean pixel error of 3. 793 pixels at 14 fps on a single gpu. we further analyze the variation between the dataset ' s human annotators and find the model ' s performance to be comparable to human experts.
arxiv:2101.01207
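The mean IoU metric quoted above is a standard segmentation score; a minimal sketch of how it is typically computed for integer label masks (illustrative, not the authors' evaluation code) is:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes for integer label masks."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent in both masks: skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, 2))  # (1/2 + 2/3) / 2 ≈ 0.583
```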
this paper is dedicated to provide theta function representations of algebro - geometric solutions and related crucial quantities for the two - component hunter - saxton ( hs2 ) hierarchy through studying an algebro - geometric initial value problem. our main tools include the polynomial recursive formalism, the hyperelliptic curve with finite number of genus, the baker - akhiezer functions, the meromorphic function, the dubrovin - type equations for auxiliary divisors, and the associated trace formulas. with the help of these tools, the explicit representations of the algebro - geometric solutions are obtained for the entire hs2 hierarchy.
arxiv:1406.6359
is more information always better? or are there some situations in which more information can make us worse off? good ( 1967 ) argues that expected utility maximizers should always accept more information if the information is cost - free and relevant. but good ' s argument presupposes that you are certain you will update by conditionalization. if we relax this assumption and allow agents to be uncertain about updating, these agents can be rationally required to reject free and relevant information. since there are good reasons to be uncertain about updating, rationality can require you to prefer ignorance.
arxiv:2309.12374
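Good's result for a conditionalizer can be checked on a toy decision problem; the states, acts, utilities, and signal accuracy below are illustrative numbers, not from the paper.

```python
# Toy instance of Good's (1967) theorem: for an agent certain to update by
# conditionalization, free relevant information never lowers expected utility.
prior = {'s1': 0.5, 's2': 0.5}
utility = {('A', 's1'): 1.0, ('A', 's2'): 0.0,
           ('B', 's1'): 0.0, ('B', 's2'): 1.0}
likelihood = {('e1', 's1'): 0.8, ('e1', 's2'): 0.2,   # signal accuracy
              ('e2', 's1'): 0.2, ('e2', 's2'): 0.8}

def eu(act, probs):
    return sum(probs[s] * utility[(act, s)] for s in prior)

# without the signal: pick the act best under the prior
eu_ignorant = max(eu(a, prior) for a in ('A', 'B'))

# with the signal: conditionalize on each outcome, then pick the best act
eu_informed = 0.0
for e in ('e1', 'e2'):
    p_e = sum(likelihood[(e, s)] * prior[s] for s in prior)
    post = {s: likelihood[(e, s)] * prior[s] / p_e for s in prior}
    eu_informed += p_e * max(eu(a, post) for a in ('A', 'B'))

print(eu_ignorant, eu_informed)  # 0.5 0.8 — information helps the conditionalizer
```

The paper's point is that this guarantee can fail once the agent is uncertain whether she will in fact update by conditionalization.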
as robot autonomy improves, robots are increasingly being considered in the role of autonomous observation systems - - free - flying cameras capable of actively tracking human activity within some predefined area of interest. in this work, we formulate the autonomous observation problem through multi - objective optimization, presenting a novel semi - mdp formulation of the autonomous human observation problem that maximizes observation rewards while accounting for both human - and robot - centric costs. we demonstrate that the problem can be solved with both scalarization - based multi - objective mdp methods and constrained mdp methods, and discuss the relative benefits of each approach. we validate our work on activity tracking using a nasa astrobee robot operating within a simulated international space station environment.
arxiv:2006.00037
several demographic and health indicators, including the total fertility rate ( tfr ) and modern contraceptive use rate ( mcpr ), evolve similarly over time, characterized by a transition between stable states. existing approaches for estimation or projection of transitions in multiple populations have successfully used parametric functions to capture the relation between the rate of change of an indicator and its level. however, incorrect parametric forms may result in bias or incorrect coverage in long - term projections. we propose a new class of models to capture demographic transitions in multiple populations. our proposal, the b - spline transition model ( btm ), models the relationship between the rate of change of an indicator and its level using b - splines, allowing for data - adaptive estimation of transition functions. bayesian hierarchical models are used to share information on the transition function between populations. we apply the btm to estimate and project country - level tfr and mcpr and compare the results against those from extant parametric models. for tfr, btm projections have generally lower error than the comparison model. for mcpr, while results are comparable between btm and a parametric approach, the b - spline model generally improves out - of - sample predictions. the case studies suggest that the btm may be considered for demographic applications.
arxiv:2301.09694
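The core data-adaptive idea, modeling the rate of change of an indicator as a B-spline function of its level, can be sketched on synthetic data; this is a plain least-squares spline fit, not the paper's Bayesian hierarchical BTM, and all numbers are illustrative.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# synthetic TFR-like transition: decline is fastest at intermediate levels
levels = np.linspace(7.0, 1.5, 60)
rates = -0.5 * np.exp(-((levels - 4.0) ** 2))           # rate of change vs level
rates += np.random.default_rng(1).normal(0, 0.01, 60)   # observation noise

order = np.argsort(levels)                              # splrep needs increasing x
tck = splrep(levels[order], rates[order], s=0.01)       # smoothed B-spline fit

# one-step-ahead projection from the current level
level_now = 3.0
level_next = level_now + float(splev(level_now, tck))
```

In the full model, hierarchical priors share the transition-function shape across countries instead of fitting each trajectory separately.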
a novel class of non - reversible markov chain monte carlo schemes relying on continuous - time piecewise - deterministic markov processes has recently emerged. in these algorithms, the state of the markov process evolves according to a deterministic dynamics which is modified using a markov transition kernel at random event times. these methods enjoy remarkable features including the ability to update only a subset of the state components while other components implicitly keep evolving and the ability to use an unbiased estimate of the gradient of the log - target while preserving the target as invariant distribution. however, they also suffer from important limitations. the deterministic dynamics used so far do not exploit the structure of the target. moreover, exact simulation of the event times is feasible for an important yet restricted class of problems and, even when it is, it is application specific. this limits the applicability of these techniques and prevents the development of a generic software implementation of them. we introduce novel mcmc methods addressing these shortcomings. in particular, we introduce novel continuous - time algorithms relying on exact hamiltonian flows and novel non - reversible discrete - time algorithms which can exploit complex dynamics such as approximate hamiltonian dynamics arising from symplectic integrators while preserving the attractive features of continuous - time algorithms. we demonstrate the performance of these schemes on a variety of applications.
arxiv:1707.05296
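The approximate Hamiltonian dynamics mentioned above typically come from a symplectic integrator; a minimal leapfrog sketch on a standard normal target (plain Hamiltonian flow, not the paper's non-reversible schemes) is:

```python
import numpy as np

def leapfrog(x, p, grad_logpi, step, n_steps):
    """Symplectic leapfrog integration of Hamiltonian dynamics."""
    p = p + 0.5 * step * grad_logpi(x)        # initial half kick
    for _ in range(n_steps - 1):
        x = x + step * p                      # drift
        p = p + step * grad_logpi(x)          # full kick
    x = x + step * p
    p = p + 0.5 * step * grad_logpi(x)        # final half kick
    return x, p

grad = lambda x: -x                           # grad log-density of N(0, 1)
x0, p0 = np.array([1.0]), np.array([0.5])
x1, p1 = leapfrog(x0, p0, grad, step=0.1, n_steps=50)
# the energy H = 0.5*x^2 + 0.5*p^2 is approximately conserved along the flow
```

Because the map is volume-preserving and near-energy-conserving, it can drive long, non-diffusive moves while keeping the target invariant after an accept-reject correction.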
grandchild, in-laws, etc.) and friendships.

=== intimate relationships ===

what defines a relationship as intimate are the same features that comprise a close relationship (i.e., must be personal, must have bidirectional interdependence, and must be close), but there must also be a shared sexual passion or the potential to be sexually intimate. intimate relationships can include married couples, dating partners, and other relationships that satisfy the aforementioned criteria.

== theories ==

=== social exchange theory ===

social exchange theory was developed in the late 1950s and early 1960s as an economic approach to describing social experiences. it addresses the transactional nature of relationships whereby people determine how to proceed in a relationship after assessing the costs versus the benefits. a prominent subset that secured the place of social exchange theory in relationship science is interdependence theory, which was articulated in 1959 by harold kelley and john thibaut in the social psychology of groups. even though kelley and thibaut's intent was to discuss the theory as it applied to groups, they began by exploring the effects of mutual influence as it pertains to two people together (i.e., a dyad). they expanded upon this process at the dyadic level in later years, further developing the idea that people in relationships 1) compare the overall positive to overall negative outcomes of their relationship (i.e., outcome = rewards - costs), which they then 2) compare to what they expect to get or think they should be getting out of the relationship (i.e., comparison level or "cl") to determine how satisfied they are (i.e., satisfaction = outcome - cl), and finally 3) compare the outcome of their relationship to the possible options of being either in another relationship or not in any relationship at all (i.e., comparison level for alternatives or "clalt") to determine how dependent they are on the relationship / their partner (i.e., dependence = outcome - clalt). they described this as having practical and important implications for commitment in a relationship such that those less satisfied by and less dependent on their partner may be more inclined to end the relationship (e.g., divorce, in the context of a marriage). interdependence theory has also been the basis of other influential works, such as caryl rusbult's investment model theory. the investment model (later known as
https://en.wikipedia.org/wiki/Relationship_science
we study the collective excitations of Na$_2$IrO$_3$ in an itinerant electron approach. we consider a multi-orbital tight-binding model with the electron transfer between the Ir $5d$ states mediated via oxygen $2p$ states and the direct $d$-$d$ transfer on a honeycomb lattice. the one-electron energy as well as the ground state energy are investigated within the hartree-fock approximation. when the direct $d$-$d$ transfer is weak, we obtain nearly flat energy bands due to the formation of quasimolecular orbitals, and the ground state exhibits the zigzag spin order. the evaluation of the density-density correlation function within the random phase approximation shows that the collective excitations emerge as bound states. for an appropriate value of the direct $d$-$d$ transfer, some of them are concentrated in the energy region $\omega <$ 50 mev (magnetic excitations) while the others lie in the energy region $\omega >$ 350 mev (excitonic excitations). this behaviour is consistent with the resonant inelastic x-ray scattering spectra. we also show that larger values of the direct $d$-$d$ transfer are unfavourable for explaining the observed aspects of Na$_2$IrO$_3$ such as the ordering pattern of the ground state and the excitation spectrum. these findings may indicate that the direct $d$-$d$ transfer is suppressed by the structural distortions in the view of excitation spectroscopy, as has been pointed out in the \textit{ab initio} calculation.
arxiv:1508.06050
video moment retrieval (mr) aims to localize moments within a video based on a given natural language query. given the prevalent use of platforms like youtube for information retrieval, the demand for mr techniques is significantly growing. recent detr-based models have made notable advances in performance but still struggle with accurately localizing short moments. through data analysis, we identified limited feature diversity in short moments, which motivated the development of momentmix. momentmix employs two augmentation strategies: foregroundmix and backgroundmix, each enhancing the feature representations of the foreground and background, respectively. additionally, our analysis of prediction bias revealed that short moments particularly struggle with accurately predicting their center positions. to address this, we propose a length-aware decoder, which conditions on length through a novel bipartite matching process. our extensive studies demonstrate the efficacy of our length-aware approach, especially in localizing short moments, leading to improved overall performance. our method surpasses state-of-the-art detr-based methods on benchmark datasets, achieving the highest r1 and map on qvhighlights and the highest r1@0.7 on tacos and charades-sta (such as a 2.46% gain in r1@0.7 and a 2.57% gain in map average for qvhighlights). the code is available at https://github.com/sjpark5800/la-detr.
arxiv:2412.20816
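The bipartite matching step used by DETR-style detectors, which the length-aware decoder modifies, can be sketched with the Hungarian algorithm; the L1 cost over (center, length) pairs and all numbers below are illustrative assumptions, not the paper's cost function.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

preds = np.array([[0.20, 0.10],   # predicted moments as (center, length)
                  [0.70, 0.30],
                  [0.50, 0.05]])
gts = np.array([[0.72, 0.28],     # ground-truth moments
                [0.18, 0.12]])

# pairwise L1 cost between every prediction and every ground-truth moment
cost = np.abs(preds[:, None, :] - gts[None, :, :]).sum(-1)
rows, cols = linear_sum_assignment(cost)   # one-to-one minimum-cost matching
print(list(zip(rows.tolist(), cols.tolist())))  # [(0, 1), (1, 0)]
```

Each matched pair then contributes to the training loss; unmatched predictions are supervised as background.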
we report a detailed investigation of near-ground-state cooling of one and two trapped atomic ions. we introduce a simple sideband cooling method for confined atoms and ions, using rf radiation applied to bare ionic states in a static magnetic field gradient, and demonstrate its application to ions confined at secular trap frequencies $\omega_z \approx 2\pi \times 117$ khz. for a single \ybplus ion, the sideband cooling cycle reduces the average phonon number $\langle n \rangle$ from the doppler limit to $\langle n \rangle = 0.30(12)$. this is in agreement with the theoretically estimated lowest achievable phonon number in this experiment. we extend this method of rf sideband cooling to a system of two \ybplus ions, resulting in a phonon number of $\langle n \rangle = 1.1(7)$ in the center-of-mass mode. furthermore, we demonstrate the first realisation of sympathetic rf sideband cooling of an ion crystal consisting of two individually addressable identical isotopes of the same species.
arxiv:1710.09241
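Phonon numbers like those quoted above are commonly extracted by sideband thermometry: for a thermal motional state, the ratio R of red- to blue-sideband excitation gives nbar = R / (1 - R). The sketch below uses this standard relation with illustrative numbers, not the paper's actual data.

```python
def nbar_from_sideband_ratio(red, blue):
    """Mean phonon number from red/blue sideband excitation amplitudes."""
    R = red / blue              # sideband ratio, valid for a thermal state, R < 1
    return R / (1.0 - R)

# an illustrative ratio of 0.23 corresponds to nbar near 0.3
print(round(nbar_from_sideband_ratio(0.23, 1.0), 2))
```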
for a smooth curve of genus $ g $ embedded by a line bundle of degree at least $ 2g + 3 $ we show that the ideal sheaf of the secant variety is 5 - regular. this bound is sharp with respect to both the degree of the embedding and the bound on the regularity. further, we show that the secant variety is projectively normal for the generic embedding of degree at least $ 2g + 3 $. we also give a conjectural description of the resolutions of the ideals of higher secant varieties.
arxiv:math/0610081
wireless communication in the millimeter wave spectrum is poised to provide the latency and bandwidth needed for advanced use cases unfeasible at lower frequencies. despite the market potential of vehicular communication networks, investigations into the millimeter wave vehicular channel are lacking. in this paper, we present a detailed overview of a novel 1 ghz wide, multi-antenna vehicle-to-vehicle directional channel sounding and measurement platform operating at 28 ghz. the channel sounder uses two 256-element phased arrays at the transmitter vehicle and four 64-element arrays at the receiver vehicle, with the receiver measuring 116 different directional beams in less than 1 millisecond. by measuring the full multi-beam channel impulse response at large bandwidths, our system provides unprecedented insight into instantaneous mobile vehicle-to-vehicle channels. the system also uses centimeter-level global position tracking and 360 degree video capture to provide additional contextual information for joint communication and sensing applications. an initial measurement campaign was conducted on highways and surface streets in austin, texas. we show example data that highlight the sensing capability of the system. preliminary results from the measurement campaign show that bumper-mounted mmwave arrays provide rich scattering in traffic as well as significant directional diversity, aiding high-reliability vehicular communication. additionally, potential waveguide effects from high traffic in lanes can also extend the range of mmwave signals significantly.
arxiv:2203.09057
$u(1)_x$ssm is the extension of the minimal supersymmetric standard model (mssm) and its local gauge group is $su(3)_c \times su(2)_l \times u(1)_y \times u(1)_x$. we study the lepton flavor violating (lfv) decays $z \rightarrow l_i^{\pm} l_j^{\mp}$ ($z \rightarrow e\mu$, $z \rightarrow e\tau$, and $z \rightarrow \mu\tau$) and $h \rightarrow l_i^{\pm} l_j^{\mp}$ ($h \rightarrow e\mu$, $h \rightarrow e\tau$, and $h \rightarrow \mu\tau$) in this model. in the numerical results, the branching ratios of $z \rightarrow l_i^{\pm} l_j^{\mp}$ range from $10^{-9}$ to $10^{-13}$ and the branching ratios of $h \rightarrow l_i^{\pm} l_j^{\mp}$ range from $10^{-3}$ to $10^{-9}$, which can approach the present experimental upper bounds. based on the latest experimental data, we analyze the influence of different sensitive parameters on the branching ratios and make reasonable predictions for future experiments. the main sensitive parameters and lfv sources are the non-diagonal elements corresponding to the initial and final generations of leptons, as can be seen from the numerical analysis.
arxiv:2207.01770
electroweak interactions need three nambu-goldstone bosons to provide a mass to the w and the z gauge bosons, but they also need an ultraviolet moderator or new physics to unitarize the gauge boson scattering amplitudes. in this talk, i will present various recent models of physics at the fermi scale: several deformations of the minimal supersymmetric standard model, little higgs models, holographic composite higgs models, and 5d higgsless models.
arxiv:0910.4976