text (string, lengths 1–3.65k) | source (string, lengths 15–79)
---|---|
recently, worst-case analysis, probabilistic analysis and empirical justification have been employed to address the fundamental question: when does $\ell_1$-minimization find the sparsest solution to an underdetermined linear system? in this paper, a deterministic analysis, rooted in classic linear programming theory, is carried out to further address this question. we first identify a necessary and sufficient condition for the uniqueness of least $\ell_1$-norm solutions to linear systems. from this condition, we deduce that a sparsest solution coincides with the unique least $\ell_1$-norm solution to a linear system if and only if the so-called \emph{range space property} (rsp) holds at this solution. this yields a broad understanding of the relationship between $\ell_0$- and $\ell_1$-minimization problems. our analysis indicates that the rsp truly lies at the heart of the relationship between these two problems. through rsp-based analysis, several important questions in this field can be largely addressed: for instance, how to interpret, by a deterministic analysis, the gap between the current theory and the actual numerical performance of $\ell_1$-minimization, and, if a linear system has multiple sparsest solutions, when is $\ell_1$-minimization guaranteed to find one of them? moreover, new matrix properties (such as the \emph{rsp of order $k$} and the \emph{weak-rsp of order $k$}) are introduced in this paper, and a new theory for sparse signal recovery based on the rsp of order $k$ is established.
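as a concrete illustration of the $\ell_1$-minimization problem discussed above, the sketch below solves $\min \|x\|_1$ subject to $ax = b$ as a linear program via the standard split $x = u - v$ with $u, v \ge 0$; the matrix, right-hand side and sparsity level are made-up example data, not anything from the paper.

```python
# minimal sketch: basis pursuit  min ||x||_1  s.t.  A x = b  as a linear program.
# A and b below are synthetic illustration data, not from the paper.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 10, 30
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = rng.standard_normal(3)  # sparse ground truth
b = A @ x_true

# split x = u - v with u, v >= 0 and minimize sum(u) + sum(v)
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
x_l1 = res.x[:n] - res.x[n:]
print("recovered support:", np.nonzero(np.abs(x_l1) > 1e-8)[0])
```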
|
arxiv:1307.4579
|
we study states of one - and two - dimensional spin systems that are constructed as correlators within the conformal field theory of a massless, free boson. in one dimension, these are good variational wave functions for xxz spin chains and they are similar to lattice laughlin states in two dimensions. we show that their zz correlations are determined by a modification of the original free - boson theory. an expansion to quadratic order leads to a solvable, effective theory for the correlations in these states. compared to the massless boson, there is an additional term in this effective theory that explains the behavior of the correlations : a polynomial decay in one dimension and at the edge of a two - dimensional system and an exponential decay in the bulk of a two - dimensional system. we test the validity of our approximation by comparing it to monte carlo computations.
|
arxiv:1706.03574
|
predictions and discoveries of new phases of superfluid $^3$he in confined geometries, as well as of novel topological excitations confined to surfaces and edges of $^3$he near a bounding surface, are driving the fields of superfluid $^3$he infused into porous media, as well as the fabrication of sub-micron to nano-scale devices for controlled studies of quantum fluids. in this report we consider superfluid $^3$he confined in a periodic geometry, specifically a two-dimensional lattice of square, sub-micron-scale boundaries ("posts") with translational invariance in the third dimension. the equilibrium phase(s) are inhomogeneous and depend on the microscopic boundary conditions imposed by the periodic array of posts. we present results for the order parameter and phase diagram based on strong pair breaking at the boundaries. the ordered phases are obtained by numerically minimizing the ginzburg-landau free energy functional. we report results for the weak-coupling limit, appropriate at ambient pressure, as a function of temperature $t$, lattice spacing $l$, and post edge dimension $d$. for all $d$ for which a superfluid transition occurs, we find a transition from the normal state to a periodic, inhomogeneous "polar" phase with $t_{c_1} < t_c$ for bulk superfluid $^3$he. for fixed lattice spacing $l$, there is a critical post dimension $d_c$ above which only the periodic polar phase is stable. for $d < d_c$ we find a second, low-temperature phase onsetting at $t_{c_2} < t_{c_1}$ from the polar phase to a periodic "b-like" phase. the low-temperature phase is inhomogeneous, anisotropic and preserves time-reversal symmetry, but unlike the bulk b-phase has only $d_{4h}$ point symmetry.
|
arxiv:1307.7308
|
this paper addresses an interference channel consisting of $\mathbf{n}$ active users sharing $u$ frequency sub-bands. users are asynchronous, meaning there exists a mutual delay between their transmitted codes. a stationary model for interference is considered by assuming that the starting point of an interferer's data is uniformly distributed along the codeword of any user. the spectrum is divided into private and common bands containing $v_{\mathrm{p}}$ and $v_{\mathrm{c}}$ frequency sub-bands, respectively. we consider a scenario where all transmitters are unaware of the number of active users and the channel gains. the optimum $v_{\mathrm{p}}$ and $v_{\mathrm{c}}$ are obtained such that the so-called outage capacity per user is maximized. if $\pr\{\mathbf{n} \leq 2\} = 1$, upper and lower bounds on the mutual information between the input and output of the channel for each user are derived using a genie-aided technique. the proposed bounds meet each other as the code length grows to infinity, yielding a closed-form expression for the achievable rates. if $\pr\{\mathbf{n} > 2\} > 0$, all users follow a locally randomized on-off signaling scheme on the common band, where each transmitter quits transmitting its gaussian signals independently from transmission to transmission. using a conditional version of the entropy power inequality (epi) and an upper bound on the differential entropy of a mixed gaussian random variable, lower bounds on the achievable rates of users are developed. thereafter, the activation probability on each transmission slot is designed to yield the largest outage capacity.
|
arxiv:1001.0716
|
it is shown that new spin-orbit-like terms appear in the effective nonrelativistic weak hamiltonian for the nucleon provided that the nuclear potential is taken into account. arguments for their considerable enhancement, in particular in the relativistic nuclear model of walecka, are advanced.
|
arxiv:nucl-th/9907092
|
i provide a pedagogical introduction to the notion of pseudomomentum for waves in a medium, and show how changes in pseudomomentum may sometimes be used to compute real forces. i then explain how these ideas apply to sound waves in a fluid. when the background fluid is in motion, the conservation laws for pseudomomentum and pseudoenergy are most easily obtained by exploiting the acoustic metric and the formalism of general relativity.
|
arxiv:cond-mat/0012316
|
the influence of the space charge of ions emitted from the surface of a conical spike on its shape has been studied. the problem of the calculation of the spatial distributions of the electric field, ion velocity field, and the space charge density near the cone tip has been reduced to the analysis of a system of ordinary differential equations. as a result of numerical solution of these equations, the criterion of the balance of the capillary and electrostatic forces on the conic surface of a liquid - metal anode has been determined. it has allowed us to relate the electrical current flowing through the system, the applied potential difference and the cone angle. we have compared the results of our calculations with available experimental data concerning emission from the surface of pure liquid gallium ( ga ), indium ( in ), tin ( sn ), and some liquid alloys, such as au + si, co + ge, and au + ge. on the basis of the proposed model, explanations have been given for a number of specific features of the emissive behavior of different systems.
|
arxiv:1201.3743
|
we present a novel, domain - agnostic, model - independent, unsupervised, and universally applicable machine learning approach for dimensionality reduction based on the principles of algorithmic complexity. specifically, but without loss of generality, we focus on addressing the challenge of reducing certain dimensionality aspects, such as the number of edges in a network, while retaining essential features of interest. these features include preserving crucial network properties like degree distribution, clustering coefficient, edge betweenness, and degree and eigenvector centralities but can also go beyond edges to nodes and weights for network pruning and trimming. our approach outperforms classical statistical machine learning techniques and state - of - the - art dimensionality reduction algorithms by preserving a greater number of data features that statistical algorithms would miss, particularly nonlinear patterns stemming from deterministic recursive processes that may look statistically random but are not. moreover, previous approaches heavily rely on a priori feature selection, which requires constant supervision. our findings demonstrate the effectiveness of the algorithms in overcoming some of these limitations while maintaining a time - efficient computational profile. our approach not only matches, but also exceeds, the performance of established and state - of - the - art dimensionality reduction algorithms. we extend the applicability of our method to lossy compression tasks involving images and any multi - dimensional data. this highlights the versatility and broad utility of the approach in multiple domains.
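the paper's method relies on algorithmic-complexity estimates; the sketch below only illustrates the general idea of complexity-guided edge pruning, using the zlib-compressed length of the adjacency matrix as a crude stand-in for such estimates (e.g. bdm) and a toy graph, so it should be read as an assumption-laden illustration rather than the authors' algorithm.

```python
# rough sketch of complexity-guided edge pruning: greedily drop the edge whose
# removal changes a complexity proxy the least. zlib-compressed length of the
# adjacency matrix is a crude stand-in for the algorithmic-complexity estimates
# used in the paper; the graph is a toy example.
import zlib
import numpy as np
import networkx as nx

def complexity_proxy(G, nodes):
    A = nx.to_numpy_array(G, nodelist=nodes).astype(np.uint8)
    return len(zlib.compress(A.tobytes(), level=9))

def prune_edges(G, n_remove):
    G = G.copy()
    nodes = list(G.nodes())
    for _ in range(n_remove):
        base = complexity_proxy(G, nodes)
        best_edge, best_delta = None, None
        for e in list(G.edges()):
            G.remove_edge(*e)
            delta = abs(complexity_proxy(G, nodes) - base)
            G.add_edge(*e)
            if best_delta is None or delta < best_delta:
                best_edge, best_delta = e, delta
        G.remove_edge(*best_edge)
    return G

G = nx.barabasi_albert_graph(60, 3, seed=1)
H = prune_edges(G, n_remove=20)
print(G.number_of_edges(), "->", H.number_of_edges())
```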
|
arxiv:1802.05843
|
in the claw detection problem we are given two functions $f : d \rightarrow r$ and $g : d \rightarrow r$ ($|d| = n$, $|r| = k$), and we have to determine whether there exist $x, y \in d$ such that $f(x) = g(y)$. we show that the quantum query complexity of this problem is between $\Omega\left(n^{1/2} k^{1/6}\right)$ and $O\left(n^{1/2+\varepsilon} k^{1/4}\right)$ when $2 \leq k < n$.
|
arxiv:2103.16390
|
we study superfluidity in the 1d bose - hubbard model using a variational matrix product state technique. we determine the superfluid density as a function of the hubbard parameters by calculating the energy cost of phase twists in the thermodynamic limit. as the system is critical, correlation functions decay as power laws and the entanglement entropy grows with the bond dimension of our variational state. we relate the resulting scaling laws to the superfluid density. we compare two different algorithms for optimizing the infinite matrix product state and develop a physical explanation why one of them ( vumps ) is more efficient than the other ( idmrg ). finally, we comment on finite - temperature superfluidity in one dimension and how our results can be realized in cold atom experiments.
|
arxiv:2202.00669
|
the floreanini-jackiw formulation of the chiral quantum-mechanical oscillator is a model of a constrained theory with only second-class constraints in dirac's classification. its covariant quantization needs an infinite number of auxiliary variables and a wess-zumino term. in this paper we investigate the path integral quantization of this model using güler's canonical formalism, in which all variables are gauge variables. siegel's action is obtained using the hamilton-jacobi formulation of systems with constraints.
|
arxiv:hep-th/9901082
|
this paper treats the stationary stokes problem in an exterior domain of $\mathbb{r}^3$ with the navier slip boundary condition. the behavior at infinity of the data and the solution is determined by setting the problem in weighted $l^p$-spaces for $p > 2$. the main results are the existence and uniqueness of strong solutions of the corresponding system.
|
arxiv:2201.02841
|
cinematic audio source separation is a relatively new subtask of audio source separation, with the aim of extracting the dialogue, music, and effects stems from their mixture. in this work, we developed a model generalizing the bandsplit rnn for any complete or overcomplete partitions of the frequency axis. psychoacoustically motivated frequency scales were used to inform the band definitions which are now defined with redundancy for more reliable feature extraction. a loss function motivated by the signal - to - noise ratio and the sparsity - promoting property of the 1 - norm was proposed. we additionally exploit the information - sharing property of a common - encoder setup to reduce computational complexity during both training and inference, improve separation performance for hard - to - generalize classes of sounds, and allow flexibility during inference time with detachable decoders. our best model sets the state of the art on the divide and remaster dataset with performance above the ideal ratio mask for the dialogue stem.
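the exact loss used in the paper is not reproduced here; the snippet below is a minimal pytorch sketch of a loss combining a signal-to-noise-ratio term with a 1-norm sparsity penalty, of the general kind the abstract describes, where the weight `lam` and the choice of applying the 1-norm to the estimated waveform are assumptions.

```python
# hedged sketch of an snr-motivated loss with a 1-norm sparsity penalty, loosely
# following the description in the abstract; `lam` and applying the 1-norm to the
# estimated waveform are illustrative assumptions.
import torch

def snr_l1_loss(est, ref, lam=0.01, eps=1e-8):
    # est, ref: (batch, channels, samples) waveforms
    err = ref - est
    snr = 10.0 * torch.log10(
        (ref.pow(2).sum(dim=-1) + eps) / (err.pow(2).sum(dim=-1) + eps)
    )
    return -snr.mean() + lam * est.abs().mean()

est = torch.randn(2, 1, 16000, requires_grad=True)
ref = torch.randn(2, 1, 16000)
loss = snr_l1_loss(est, ref)
loss.backward()
print(float(loss))
```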
|
arxiv:2309.02539
|
solving for the frequency-domain scattered wavefield via a physics-informed neural network (pinn) has great potential in seismic modeling and inversion. however, when dealing with high-frequency wavefields, its accuracy and training cost limit its applications. thus, we propose a novel implementation of pinn using frequency upscaling and neuron splitting, which allows the neural network model to grow in size as we increase the frequency while leveraging the information from the pre-trained model for lower-frequency wavefields, resulting in fast convergence to high-accuracy solutions. numerical results show that, compared to the commonly used pinn with random initialization, the proposed pinn exhibits notable superiority in terms of convergence and accuracy and can achieve neuron-based high-frequency wavefield solutions with a two-hidden-layer model.
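the abstract does not spell out the neuron-splitting step; the sketch below shows one standard function-preserving way to widen a hidden layer (duplicate each neuron and halve the duplicated outgoing weights), offered only as a plausible reading of how a pre-trained lower-frequency model could be grown, not as the paper's procedure.

```python
# hedged sketch: widen a dense hidden layer by "splitting" each neuron into two
# copies and halving the duplicated outgoing weights, so the represented function
# is unchanged. this is a generic function-preserving widening step, offered only
# as one plausible reading of the neuron splitting mentioned in the abstract.
import numpy as np

def split_neurons(W1, b1, W2):
    # layer: h = f(W1 x + b1), y = W2 h
    W1_new = np.vstack([W1, W1])          # duplicate incoming weights
    b1_new = np.concatenate([b1, b1])     # duplicate biases
    W2_new = 0.5 * np.hstack([W2, W2])    # halve duplicated outgoing weights
    return W1_new, b1_new, W2_new

rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=(8, 4)), rng.normal(size=8), rng.normal(size=(1, 8))
x = rng.normal(size=4)
f = np.tanh
y_old = W2 @ f(W1 @ x + b1)
W1n, b1n, W2n = split_neurons(W1, b1, W2)
y_new = W2n @ f(W1n @ x + b1n)
print(np.allclose(y_old, y_new))  # True: the represented function is preserved
```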
|
arxiv:2109.14536
|
we study status updating under inexact knowledge about the battery levels of the energy harvesting sensors in an iot network, where users make on - demand requests to a cache - enabled edge node to send updates about various random processes monitored by the sensors. to serve the request ( s ), the edge node either commands the corresponding sensor to send an update or uses the aged data from the cache. we find a control policy that minimizes the average on - demand aoi subject to per - slot energy harvesting constraints under partial battery knowledge at the edge node. namely, the edge node is informed about sensors ' battery levels only via received status updates, leading to uncertainty about the battery levels for the decision - making. we model the problem as a pomdp which is then reformulated as an equivalent belief - mdp. the belief - mdp in its original form is difficult to solve due to the infinite belief space. however, by exploiting a specific pattern in the evolution of beliefs, we truncate the belief space and develop a dynamic programming algorithm to obtain an optimal policy. moreover, we address a multi - sensor setup under a transmission limitation for which we develop an asymptotically optimal algorithm. simulation results assess the performance of the proposed methods.
|
arxiv:2303.18104
|
in this paper, we asymptotically enumerate graphs with a given degree sequence d = (d_1, ..., d_n) satisfying restrictions designed to permit heavy-tailed sequences in the sparse case (i.e. where the average degree is rather small). our general result requires upper bounds on functions of m_k = \sum_{i=1}^n [d_i]_k for a few small integers k \ge 1. note that m_1 is simply the total degree of the graphs. as special cases, we asymptotically enumerate graphs with (i) degree sequences satisfying m_2 = o(m_1^{9/8}); (ii) degree sequences following a power law with parameter gamma > 5/2; (iii) power-law degree sequences that mimic independent power-law "degrees" with parameter gamma > 1 + \sqrt{3} \approx 2.732; (iv) degree sequences following a certain "long-tailed" power law; (v) certain bi-valued sequences. a previous result on sparse graphs by mckay and the second author applies to a wide range of degree sequences but requires delta = o(m_1^{1/3}), where delta is the maximum degree. our new result applies in some cases when delta is only barely o(m_1^{3/5}). case (i) above generalises a result of janson which requires m_2 = o(m_1) (and hence m_1 = o(n) and delta = o(n^{1/2})). cases (ii) and (iii) provide the first asymptotic enumeration results applicable to degree sequences of real-world networks following a power law, for which it has been empirically observed that 2 < gamma < 3.
|
arxiv:1404.1250
|
the purpose of this paper is twofold. first, we propose a novel algorithm for estimating parameters in one - dimensional gaussian mixture models ( gmms ). the algorithm takes advantage of the hankel structure inherent in the fourier data obtained from independent and identically distributed ( i. i. d ) samples of the mixture. for gmms with a unified variance, a singular value ratio functional using the fourier data is introduced and used to resolve the variance and component number simultaneously. the consistency of the estimator is derived. compared to classic algorithms such as the method of moments and the maximum likelihood method, the proposed algorithm does not require prior knowledge of the number of gaussian components or good initial guesses. numerical experiments demonstrate its superior performance in estimation accuracy and computational cost. second, we reveal that there exists a fundamental limit to the problem of estimating the number of gaussian components or model order in the mixture model if the number of i. i. d samples is finite. for the case of a single variance, we show that the model order can be successfully estimated only if the minimum separation distance between the component means exceeds a certain threshold value and can fail if below. we derive a lower bound for this threshold value, referred to as the computational resolution limit, in terms of the number of i. i. d samples, the variance, and the number of gaussian components. numerical experiments confirm this phase transition phenomenon in estimating the model order. moreover, we demonstrate that our algorithm achieves better scores in likelihood, aic, and bic when compared to the em algorithm.
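as a rough illustration of the ingredients named above, the sketch below evaluates the empirical characteristic function (the fourier data) of i.i.d. samples on a uniform frequency grid, forms a hankel matrix from those values, and reads off a model-order guess from singular-value ratios; the grid spacing, the assumption of a known common variance, and the ratio criterion are simplifications for illustration, not the paper's algorithm.

```python
# hedged sketch of the ingredients named in the abstract: empirical characteristic
# function (fourier data) of i.i.d. gmm samples -> hankel-structured matrix ->
# singular-value ratios as a model-order indicator. the frequency grid, the known
# common variance and the ratio criterion are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 20000, 0.3
means = np.array([-1.0, 0.2, 1.5])                        # 3 components
x = rng.choice(means, size=n) + sigma * rng.standard_normal(n)

h = 0.5                                                    # frequency spacing (assumed)
t = h * np.arange(0, 13)
ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)             # empirical characteristic function
ecf_deconv = ecf * np.exp(0.5 * sigma**2 * t**2)           # remove the (assumed known) gaussian factor

L = len(t) // 2 + 1
H = np.array([ecf_deconv[i:i + L] for i in range(len(t) - L + 1)])  # hankel matrix
s = np.linalg.svd(H, compute_uv=False)
ratios = s[:-1] / s[1:]
print("estimated number of components:", int(np.argmax(ratios)) + 1)
```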
|
arxiv:2404.12613
|
multiple attacks have shown that in - vehicle networks have vulnerabilities which can be exploited. securing the controller area network ( can ) for modern vehicles has become a necessary task for car manufacturers. some attacks inject potentially large amount of fake messages into the can network ; however, such attacks are relatively easy to detect. in more sophisticated attacks, the original messages are modified, making the detection a more complex problem. in this paper, we present a novel machine learning based intrusion detection method for can networks. we focus on detecting message modification attacks, which do not change the timing patterns of communications. our proposed temporal convolutional network - based solution can learn the normal behavior of can signals and differentiate them from malicious ones. the method is evaluated on multiple can - bus message ids from two public datasets including different types of attacks. performance results show that our lightweight approach compares favorably to the state - of - the - art unsupervised learning approach, achieving similar or better accuracy for a wide range of scenarios with a significantly lower false positive rate.
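as a toy illustration of the general approach, the sketch below trains a small causal (temporal) convolutional network to predict the next value of a signal from its recent past and flags samples with large prediction error; the architecture sizes, the synthetic signal and the injected modification are assumptions for illustration only, not the paper's model or data.

```python
# hedged sketch: a small causal temporal convolutional network learns to predict
# the next value of a signal; at test time a large prediction error flags a
# possibly modified message. sizes, training and thresholding are illustrative.
import torch
import torch.nn as nn

class CausalTCN(nn.Module):
    def __init__(self, channels=16, kernel=3, layers=3):
        super().__init__()
        blocks, in_ch = [], 1
        for i in range(layers):
            d = 2 ** i                                    # dilation doubles per layer
            blocks += [nn.ConstantPad1d((d * (kernel - 1), 0), 0.0),
                       nn.Conv1d(in_ch, channels, kernel, dilation=d),
                       nn.ReLU()]
            in_ch = channels
        self.net = nn.Sequential(*blocks, nn.Conv1d(channels, 1, 1))

    def forward(self, x):                                 # x: (batch, 1, time)
        return self.net(x)

model = CausalTCN()
signal = torch.sin(torch.linspace(0, 20, 512)).view(1, 1, -1)   # toy "can signal"
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                                      # fit normal behaviour
    pred = model(signal[..., :-1])
    loss = ((pred - signal[..., 1:]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

tampered = signal.clone()
tampered[..., 300:310] += 2.0                             # injected modification
err = ((model(tampered[..., :-1]) - tampered[..., 1:]) ** 2).squeeze()
print("anomaly around index:", int(err.argmax()))
```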
|
arxiv:2106.08692
|
the present work discusses the existence of compact star models in the context of $f(r,t)$ gravity, with $r$ the ricci scalar and $t$ the trace of the energy-momentum tensor $t_{\mu\nu}$. the model has been developed by considering a spherically symmetric spacetime consisting of an isotropic fluid with $f(r,t) = r + 2\beta t$, where $\beta$ is the coupling parameter. the corresponding field equations are solved by choosing the well known finch-skea {\em ansatz} [finch, m. r., skea, j. e. f.: {\it class. quantum gravity} {\bf 6}, 467 (1989)]. for spacetime continuity we elaborate the boundary conditions by taking the exterior region to be the schwarzschild metric. the unknown constants appearing in the solution are evaluated for the compact star psr j1614-2230 for different values of the coupling constant. the physical properties of the model, e.g., matter density, pressure, stability etc., have been discussed both analytically and graphically. this analysis shows that the geometry and matter are compatible with each other and that the model is in stable equilibrium in the context of $f(r,t)$ modified gravity.
|
arxiv:2105.12569
|
alzheimer ' s disease ( ad ) causes a continuous decline in memory, thinking, and judgment. traditional diagnoses are usually based on clinical experience, which is limited by some realistic factors. in this paper, we focus on exploiting deep learning techniques to diagnose ad based on eye - tracking behaviors. visual attention, as typical eye - tracking behavior, is of great clinical value to detect cognitive abnormalities in ad patients. to better analyze the differences in visual attention between ad patients and normals, we first conduct a 3d comprehensive visual task on a non - invasive eye - tracking system to collect visual attention heatmaps. we then propose a multi - layered comparison convolution neural network ( mc - cnn ) to distinguish the visual attention differences between ad patients and normals. in mc - cnn, the multi - layered representations of heatmaps are obtained by hierarchical convolution to better encode eye - movement behaviors, which are further integrated into a distance vector to benefit the comprehensive visual task. extensive experimental results on the collected dataset demonstrate that mc - cnn achieves consistent validity in classifying ad patients and normals with eye - tracking data.
|
arxiv:2303.06868
|
for any symmetric collection of natural numbers h ^ { p, q } with p + q = k, we construct a smooth complex projective variety whose weight k hodge structure has these hodge numbers ; if k = 2m is even, then we have to impose that h ^ { m, m } is bigger than some quadratic bound in m. combining these results for different weights, we solve the construction problem for the truncated hodge diamond under two additional assumptions. our results lead to a complete classification of all nontrivial dominations among hodge numbers of kaehler manifolds.
|
arxiv:1301.0478
|
we investigate the hypothesis that coulomb - type interactions between dark matter ( dm ) and baryons explain the anomalously low 21cm brightness - temperature minimum at redshift z ~ 17 that was recently measured by the edges experiment. in particular, we reassess the validity of the scenario where a small fraction of the total dm is millicharged, focusing on newly derived constraints from planck 2015 cosmic microwave background ( cmb ) data. crucially, the cmb power spectrum is sensitive to dm - baryon scattering if the fraction of interacting dm is larger than ( or comparable to ) the fractional uncertainty in the baryon energy density. meanwhile, there is a mass - dependent lower limit on the fraction for which the required interaction to cool the baryons sufficiently is so strong that it drives the interacting - dm temperature to the baryon temperature prior to their decoupling from the cmb. if this occurs as early as recombination, the cooling saturates. we precisely determine the viable parameter space for millicharged dm, and find that only a fraction ( m _ chi / mev ) 0. 0115 % < ~ f < ~ 0. 4 % of the entire dm content, and only for dm - particle masses between 0. 5 mev - 35 mev, can be charged at the level needed to marginally explain the anomaly, without violating limits from slac, cmb, big - bang nucleosynthesis ( bbn ), or stellar and sn1987a cooling. in reality, though, we demonstrate that at least moderate fine tuning is required to both agree with the measured absorption profile and overcome various astrophysical sources of heating. finally, we point out that a ~ 0. 4 % millicharged dm component which is tightly coupled to the baryons at recombination may resolve the current 2 - sigma tension between the bbn and cmb determinations of the baryon energy density. future cmb - s4 measurements will be able to probe this scenario directly.
|
arxiv:1807.11482
|
we study four-dimensional chern-simons theory on $d \times \mathbb{c}$ (where $d$ is a disk), which is understood to describe rational solutions of the yang-baxter equation from the work of costello, witten and yamazaki. we find that the theory is dual to a boundary theory that is a three-dimensional analogue of the two-dimensional chiral wzw model. this boundary theory gives rise to a current algebra that turns out to be an "analytically-continued" toroidal lie algebra. in addition, we show how certain bulk correlation functions of two and three wilson lines can be captured by boundary correlation functions of local operators in the three-dimensional wzw model. in particular, we reproduce the leading and subleading nontrivial contributions to the rational r-matrix purely from the boundary theory.
|
arxiv:2003.08931
|
the measurement of direct photons in s(nn)**1/2 = 200 gev p+p and au+au collisions is presented. the signal is compared to nlo pqcd calculations, which, in the case of au+au, are scaled with the number of underlying nucleon-nucleon collisions. the agreement of the calculation with the data in both cases confirms the scaling of hard processes with the number of nucleon-nucleon collisions and supports the explanation of the earlier-observed pion suppression as a final-state effect.
|
arxiv:nucl-ex/0511041
|
we present a theoretical study of the excitations of the two - dimensional supersolid state of a bose - einstein condensate with either dipole - dipole interactions or soft - core interactions. this supersolid state has three gapless excitation branches arising from the spontaneously broken continuous symmetries. two of these branches are related to longitudinal sound waves, similar to those in one - dimensional supersolids. the third branch is a transverse wave arising from the non - zero shear modulus of the two - dimensional crystal. we present the results of numerical calculations for the excitations and dynamic structure factor characterising the density fluctuations, and study their behavior across the discontinuous superfluid to supersolid transition. we show that the speeds of sound are described by a hydrodynamic theory that incorporates generalized elastic parameters, including the shear modulus. furthermore, we establish that dipolar and soft - core supersolids manifest distinct characteristics, falling into the bulk incompressible and rigid lattice limits, respectively.
|
arxiv:2407.01072
|
we develop new adaptive algorithms for temporal integration of nonlinear evolution equations on tensor manifolds. these algorithms, which we call step - truncation methods, are based on performing one time step with a conventional time - stepping scheme, followed by a truncation operation onto a tensor manifold. by selecting the rank of the tensor manifold adaptively to satisfy stability and accuracy requirements, we prove convergence of a wide range of step - truncation methods, including explicit one - step and multi - step methods. these methods are very easy to implement as they rely only on arithmetic operations between tensors, which can be performed by efficient and scalable parallel algorithms. adaptive step - truncation methods can be used to compute numerical solutions of high - dimensional pdes, which have become central to many new areas of application such optimal mass transport, random dynamical systems, and mean field optimal control. numerical applications are presented and discussed for a fokker - planck equation with spatially dependent drift on a flat torus of dimension two and four.
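a minimal sketch of one step-truncation step for a matrix-valued ode du/dt = f(u): take an explicit euler step with a conventional scheme, then truncate back to a low-rank manifold with an svd whose rank is chosen adaptively from a tolerance; the right-hand side, tolerance and step size below are illustrative choices, not the paper's test problems.

```python
# minimal sketch of a step-truncation step for a matrix-valued ode dU/dt = F(U):
# one explicit euler step followed by truncation onto a low-rank manifold via svd,
# with the rank picked adaptively from a relative tolerance. F, tol and dt are
# illustrative choices only.
import numpy as np

def truncate(U, tol):
    u, s, vt = np.linalg.svd(U, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))   # adaptive rank from a relative tolerance
    return (u[:, :r] * s[:r]) @ vt[:r, :]

def F(U):
    # toy right-hand side: a diffusion-like operator acting on both indices
    n = U.shape[0]
    L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return L @ U + U @ L.T

n, dt, tol = 64, 1e-3, 1e-8
x = np.linspace(0, 1, n)
U = np.outer(np.sin(np.pi * x), np.sin(2 * np.pi * x))   # rank-1 initial condition
for _ in range(100):
    U = truncate(U + dt * F(U), tol)                      # euler step, then truncation
print("numerical rank after 100 steps:", np.linalg.matrix_rank(U, tol=1e-10))
```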
|
arxiv:2008.00155
|
in this paper we study a special class of systems : time - invariant control systems that satisfy the matching condition for which no bounds for the disturbance and the unknown parameters are known. for this class of systems, we provide a simple, direct, adaptive control scheme that combines three elements : ( a ) nonlinear damping, ( b ) single - gain adjustment, and ( c ) deadzone in the update law. it is the first time that these three tools are combined and the proposed controller is called a deadzone - adapted disturbance suppression ( dads ) controller. the proposed adaptive control scheme achieves for the first time an attenuation of the plant state to an assignable small level, despite the presence of disturbances and unknown parameters of arbitrary and unknown bounds. moreover, the dads controller prevents gain and state drift regardless of the size of the disturbance and unknown parameter.
|
arxiv:2311.07938
|
we report a measurement of the transition amplitude ratio $\chi$ of an electric quadrupole ($e2$) to a magnetic dipole ($m1$) of the $6p_{1/2} \rightarrow 6p_{3/2}$ transition in atomic thallium. we utilized the electromagnetically induced transparency (eit) mechanism and the sideband-bridging technique to resolve the isotopic transitions and their hyperfine manifold. our measurement gave $\chi_{205} = 0.2550(20)(7)$ for $^{205}$tl, and $\chi_{203} = 0.2532(73)(7)$ for $^{203}$tl, the latter measured for the first time. our result provides a reference point for theoretical calculations of atomic structure and new input for the long-standing dispute over the atomic thallium pnc measurements.
|
arxiv:2203.15357
|
the plan for mass and spin measurement of susy particles with the atlas detector is presented. the measurements of kinematical distributions, such as edges in the invariant mass of leptons and jets, could be used to constrain the model of susy that may be discovered at the lhc. examples from a few points in the msugra scenario are provided with an emphasis on measurements that can be conducted within the first few years of data taking.
|
arxiv:0710.4546
|
the theory of spin relaxation of conduction electrons is developed for zinc - blende - type quantum wells grown on ( 110 ) - oriented substrate. it is shown that, in asymmetric structures, the relaxation of electron spin initially oriented along the growth direction is characterized by two different lifetimes and leads to the appearance of an in - plane spin component. the magnitude and sign of the in - plane component are determined by the structure inversion asymmetry of the quantum well and can be tuned by the gate voltage. in an external magnetic field, the interplay of cyclotron motion of carriers and the larmor precession of electron spin can result in a nonmonotonic dependence of the spin density on the magnetic field.
|
arxiv:0907.5556
|
a coxeter system is an ordered pair ( w, s ) where s is the generating set in a particular type of presentation for the coxeter group w. a subgroup of w is called special if it is generated by a subset of s. amalgamated product decompositions of a coxeter group having special factors and special amalgamated subgroup are easily recognized from the presentation of the coxeter group. if a coxeter group is a subgroup of the fundamental group of a given graph of groups, then the coxeter group is also the fundamental group of a graph of special subgroups, where each vertex and edge group is a subgroup of a conjugate of a vertex or edge group of the given graph of groups. a vertex group of an arbitrary graph of groups decomposition of a coxeter group is shown to split into parts conjugate to special groups and parts that are subgroups of edge groups of the given decomposition. several applications of the main theorem are produced, including the classification of maximal fa - subgroups of a finitely generated coxeter group as all conjugates of certain special subgroups.
|
arxiv:math/0703439
|
modern code review (mcr) plays a key role in software quality practices. in the mcr process, a new patch (i.e., a set of code changes) is encouraged to be examined by reviewers in order to identify weaknesses in the source code prior to its integration into the main software repositories. to mitigate the risk of future defects, prior work suggests that mcr should be performed with sufficient review participation. indeed, recent work shows that a low number of participating reviewers is associated with poor software quality. however, a new patch may still suffer from poor review participation even though reviewers were invited. hence, in this paper, we set out to investigate the factors that are associated with the participation decision of an invited reviewer. through a case study of 230,090 patches spread across the android, libreoffice, openstack and qt systems, we find that (1) 16%-66% of patches have at least one invited reviewer who did not respond to the review invitation; (2) human factors play an important role in predicting whether or not an invited reviewer will participate in a review; (3) the review participation rate of an invited reviewer and the code authoring experience of an invited reviewer are highly associated with the participation decision of an invited reviewer. these results can help practitioners better understand how human factors are associated with the participation decision of reviewers and serve as guidelines for inviting reviewers, leading to better inviting decisions and better reviewer participation.
|
arxiv:1806.10277
|
renewed interest in the first stars that were formed in the universe has led to the discovery of extremely iron - poor stars. since several competing scenarios exist, our understanding of the mass range that determines the observed elemental abundances remains unclear. in this study, we consider three well - studied metal - poor stars in terms of the theoretical supernovae ( sne ) model. our results suggest that the observed abundance patterns in the metal - poor star bd + 80 245 and the pair of stars hd 134439 / 40 agree strongly with the theoretical possibility that these stars inherited their heavy element abundance patterns from sne initiated by thermonuclear runaways in the degenerate carbon - oxygen cores of primordial asymptotic giant branch stars with \ ~ 3. 5 - 5 solar masses. recent theoretical calculations have predicted that such sne could be originated from metal - free stars in the intermediate mass range. on the other hand, intermediate mass stars containing some metals would end their lives as white dwarfs after expelling their envelopes in the wind due to intense momentum transport from outgoing photons to heavy elements. this new pathway for the formation of sne requires that stars are formed from the primordial gas. thus, we suggest that stars of a few solar masses were formed from the primordial gas and that some of them caused thermonuclear explosions when the mass of their degenerate carbon - oxygen cores increased to the chandrasekhar limit without experiencing efficient mass loss.
|
arxiv:astro-ph/0601324
|
we study quantum dynamics of bosonic atoms that are excited to form a phase kink, or several kinks, by an imprinting potential in a one - dimensional trap. we calculate dissipation due to quantum and thermal fluctuations in soliton trajectories, collisions and the core structure. single - shot runs show weak filling of a soliton core, typically deeper solitons in the case of stronger fluctuations and spreading / disappearing solitons due to collisions. we also analyze a soliton system in an optical lattice that shows especially strong fluctuation - induced phenomena.
|
arxiv:1001.3385
|
is there a low - density region ( ' gap ' ) between water and a hydrophobic surface? previous x - ray / neutron reflectivity results have been inconsistent because the effect ( if any ) is sub - resolution for the surfaces studied. we have used x - ray reflectivity to probe the interface between water and more hydrophobic smooth surfaces. the depleted region width increases with contact angle and becomes larger than the resolution, allowing definitive measurements. large fluctuations are predicted at this interface ; however, we find that their contribution to the interface roughness is too small to measure.
|
arxiv:1006.4620
|
cirquent calculus is a proof system manipulating circuit-style constructs rather than formulas. using it, this article constructs a sound and complete axiomatization cl16 of the propositional fragment of computability logic (the game-semantically conceived logic of computational problems - see http://www.csc.villanova.edu/~japaridz/cl/) whose logical vocabulary consists of negation and parallel and choice connectives, and whose atoms represent elementary, i.e. moveless, games.
|
arxiv:1707.04823
|
let $\bold{\phi} = (\phi_n)$ be a musielak-orlicz function, $x$ a real banach space and $a$ any infinite matrix. in this paper, a generalized vector-valued musielak-orlicz sequence space $l_{\bold{\phi}}^{a}(x)$ is introduced. it is shown that the space is a complete normed linear space under certain conditions on the matrix $a$. it is also shown that $l_{\bold{\phi}}^{a}(x)$ is $\sigma$-dedekind complete whenever $x$ is so. we discuss some geometric properties, namely uniform monotonicity and the uniform opial property, for this space. using sequences of $s$-numbers (in the sense of pietsch), operators of $s$-type $l_{\bold{\phi}}^{a}$ and operator ideals under certain conditions on the matrix $a$ are discussed.
|
arxiv:1408.3528
|
in this paper we obtain 32 canonical forms for 3d piecewise smooth vector fields presenting the so-called cusp-fold singularity. all these canonical forms are topologically distinct and capture the main topological aspects of the singularities, such as the kinds of tangencies involved and the positions of the sliding, escaping and crossing regions. one-parameter bifurcations of these canonical forms are also presented, and the topologically equivalent piecewise smooth vector fields are obtained.
|
arxiv:2210.10518
|
the distributed computation of equilibria and optima has seen growing interest in a broad collection of networked problems. we consider the computation of equilibria of convex stochastic nash games characterized by a possibly nonconvex potential function. our focus is on two classes of stochastic nash games: (p1) a potential stochastic nash game, in which each player solves a parameterized stochastic convex program; and (p2) a misspecified generalization, where the player-specific stochastic program is complicated by a parametric misspecification. in both settings, exact proximal br solutions are generally unavailable in finite time since they necessitate solving parameterized stochastic programs. consequently, we design two asynchronous inexact proximal br schemes to solve the problems, where in each iteration a single player is randomly chosen to compute an inexact proximal br solution with rivals' possibly outdated information. in the misspecified regime (p2), each player additionally maintains an estimate of the misspecified parameter and updates it by a projected stochastic gradient (sg) algorithm. since any stationary point of the potential function is a nash equilibrium of the associated game, we believe this paper is amongst the first for stochastic nonconvex (but block convex) optimization problems equipped with almost-sure convergence guarantees. these statements can be extended to accommodate weighted potential games and generalized potential games. finally, we present preliminary numerics based on applying the proposed schemes to congestion control and nash-cournot games.
|
arxiv:1711.03963
|
modeling human-like action-to-reaction generation has significant real-world applications, like human-robot interaction and games. despite recent advancements in single-person motion generation, it is still challenging to handle action-to-reaction generation well, due to the difficulty of directly predicting reactions from action sequences without prompts, and the absence of a unified representation that effectively encodes multi-person motion. to address these challenges, we introduce think-then-react (ttr), a large-language-model-based framework designed to generate human-like reactions. first, with our fine-grained multimodal training strategy, ttr is capable of unifying two processes during inference: a thinking process that explicitly infers action intentions and reasons about the corresponding reaction description, which serves as a semantic prompt, and a reacting process that predicts reactions based on the input action and the inferred semantic prompts. second, to effectively represent multi-person motion in language models, we propose a unified motion tokenizer that decouples egocentric pose and absolute space features, which effectively represents action and reaction motion with the same encoding. extensive experiments demonstrate that ttr outperforms existing baselines, achieving significant improvements in evaluation metrics, such as reducing fid from 3.988 to 1.942.
|
arxiv:2503.16451
|
it has been recognized that a heavily overparameterized artificial neural network exhibits surprisingly good generalization performance in various machine - learning tasks. recent theoretical studies have made attempts to unveil the mystery of the overparameterization. in most of those previous works, the overparameterization is achieved by increasing the width of the network, while the effect of increasing the depth has remained less well understood. in this work, we investigate the effect of increasing the depth within an overparameterized regime. to gain an insight into the advantage of depth, we introduce local and global labels as abstract but simple classification rules. it turns out that the locality of the relevant feature for a given classification rule plays a key role ; our experimental results suggest that deeper is better for local labels, whereas shallower is better for global labels. we also compare the results of finite networks with those of the neural tangent kernel ( ntk ), which is equivalent to an infinitely wide network with a proper initialization and an infinitesimal learning rate. it is shown that the ntk does not correctly capture the depth dependence of the generalization performance, which indicates the importance of the feature learning rather than the lazy learning.
|
arxiv:2005.12488
|
we investigate anisotropic cosmological solutions of the theory with non-minimal couplings between electromagnetic fields and gravity in $y(r)f^2$ form. after deriving the field equations from the variational principle, we look for spatially flat cosmological solutions with magnetic or electric fields. we then give exact anisotropic solutions by assuming hyperbolic expansion functions. we observe that the solutions approach the isotropic case at late times.
|
arxiv:1203.1531
|
in this note we discuss a paradigmatic example of interacting particles subject to non-conservative external forces and to the action of thermostats consisting of external (finite) reservoirs of particles. we then consider a model of granular materials of interest for experimental tests that has recently attracted a lot of attention. this model can be reduced to the previously discussed example under a number of assumptions, in particular that inelasticity due to internal collisions can be neglected for the purpose of measuring the large deviation functional for the entropy production rate. we show that if the restitution coefficient in the granular material model is close to one, then the required assumptions are verified on a specific time scale, and we predict a fluctuation relation for the entropy production rate measured on the same time scale.
|
arxiv:cond-mat/0601683
|
in this paper we describe the architecture and the implementation of a broker-based publish/subscribe system where the broker role is played by a private blockchain, hyperledger fabric. we show the effectiveness of our architecture by implementing and deploying a photo trading platform. interestingly, our architecture is generic enough to be adapted to any digital asset trading.
|
arxiv:1907.03627
|
of freedom and make any residual flexibility transparent (simmons et al., 2021, p. 180). a larger study of 92 eeg/erp studies showed that only 60% of studies adhered to their preregistrations or disclosed all deviations. notably, registered reports had higher adherence rates (92%) than unreviewed preregistrations (60%). however, critics have argued that it is not useful to identify or justify deviations from preregistered plans when those plans do not reflect high quality theory and research practice. as rubin (2020) explained, "we should be more interested in the rationale for the current method and analyses than in the rationale for historical changes that have led up to the current method and analyses" (pp. 378-379). in addition, pre-registering a study requires careful deliberation about the study's hypotheses, research design and statistical analyses. this depends on the use of pre-registration templates that provide detailed guidance on what to include and why (bowman et al., 2016; haven & van grootel, 2019; van den akker et al., 2021). many pre-registration templates stress the importance of a power analysis but not the importance of justifying why the chosen methodology was used. in addition to the concerns raised about its practical implementation in quantitative research, critics have also argued that preregistration is less applicable, or even unsuitable, for qualitative research. pre-registration imposes rigidity, limiting researchers' ability to adapt to emerging data and evolving contexts, which are essential to capturing the richness of participants' lived experiences (souza-neto & moyle, 2025). additionally, it conflicts with the inductive and flexible nature of theory-building in qualitative research, constraining the exploratory approach that is central to this methodology (souza-neto & moyle, 2025). finally, some commentators have argued that, under some circumstances, preregistration may actually harm science by providing a false sense of credibility to research studies and analyses (devezer et al., 2020; mcphetres, 2020; pham & oh, 2020; szollosi et al., 2020). consistent with this view, there is some evidence that researchers view registered reports as being more credible than standard reports on a range of dimensions (soderberg et al., 2020; see also field et al.,
|
https://en.wikipedia.org/wiki/Preregistration_(science)
|
the thermodynamic euler equation for high - energy states of large - $ n $ gauge theories is derived from the dependence of the extensive quantities on the number of colors $ n $. this euler equation relates the energy of the state to the temperature, entropy, number of degrees of freedom and its chemical potential, but not to the volume or pressure. in the context of the gauge / gravity duality we show that the euler equation is dual to the generalized smarr formula for black holes in the presence of a negative cosmological constant. we also match the fundamental variational equation of thermodynamics to the first law of black hole mechanics, when extended to include variations of the cosmological constant and newton ' s constant.
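for concreteness, the euler equation described above can be written schematically as follows; this is our transcription of the abstract's statement, with $n_{\mathrm{dof}}$ the number of degrees of freedom (growing with the number of colors) and $\mu$ its conjugate chemical potential, the point being the absence of a $-p\,v$ (or $-p\,dv$) term:

```latex
% schematic transcription of the euler relation stated in the abstract (notation ours):
% e: energy, t: temperature, s: entropy, n_dof: number of degrees of freedom,
% mu: its conjugate chemical potential.
e = t s + \mu \, n_{\mathrm{dof}},
\qquad
de = t \, ds + \mu \, dn_{\mathrm{dof}}
\quad \text{(no } -p \, dv \text{ term)}.
```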
|
arxiv:2101.04145
|
we consider the random 2-satisfiability problem, in which each instance is a formula that is the conjunction of m clauses of the form (x or y), chosen uniformly at random from among all 2-clauses on n boolean variables and their negations. as m and n tend to infinity in the ratio m/n --> alpha, the problem is known to have a phase transition at alpha_c = 1, below which the probability that the formula is satisfiable tends to one and above which it tends to zero. we determine the finite-size scaling about this transition, namely the scaling of the maximal window w(n, delta) = (alpha_-(n, delta), alpha_+(n, delta)) such that the probability of satisfiability is greater than 1 - delta for alpha < alpha_- and is less than delta for alpha > alpha_+. we show that w(n, delta) = (1 - theta(n^{-1/3}), 1 + theta(n^{-1/3})), where the constants implicit in theta depend on delta. we also determine the rates at which the probability of satisfiability approaches one and zero at the boundaries of the window. namely, for m = (1 + epsilon) n, where epsilon may depend on n as long as |epsilon| is sufficiently small and |epsilon| * n^{1/3} is sufficiently large, we show that the probability of satisfiability decays like exp(-theta(n * epsilon^3)) above the window, and goes to one like 1 - theta(1 / (n * |epsilon|^3)) below the window. we prove these results by defining an order parameter for the transition and establishing its scaling behavior in n both inside and outside the window. using this order parameter, we prove that the 2-sat phase transition is continuous with an order parameter critical exponent of 1. we also determine the values of two other critical exponents, showing that the exponents of 2-sat are identical to those of the random graph.
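to make the model concrete, the sketch below samples random 2-sat formulas at clause density alpha, decides satisfiability with the classical implication-graph criterion (unsatisfiable iff some variable and its negation share a strongly connected component), and monte-carlo estimates the probability of satisfiability near alpha = 1; it illustrates the model only, not the paper's scaling analysis, and clauses are drawn on pairs of distinct variables for simplicity.

```python
# illustration of the random 2-sat model: sample m = alpha*n random 2-clauses,
# decide satisfiability with the implication-graph criterion (a formula is
# unsatisfiable iff some variable x and its negation lie in the same strongly
# connected component), and estimate P(satisfiable) near the transition alpha = 1.
import random
import networkx as nx

def random_2sat_satisfiable(n, m, rng):
    G = nx.DiGraph()
    G.add_nodes_from(range(-n, n + 1))
    G.remove_node(0)                       # literals: +i for x_i, -i for not x_i
    for _ in range(m):
        i, j = rng.sample(range(1, n + 1), 2)
        a = i * rng.choice((1, -1))
        b = j * rng.choice((1, -1))
        G.add_edge(-a, b)                  # clause (a or b): not a -> b, not b -> a
        G.add_edge(-b, a)
    for comp in nx.strongly_connected_components(G):
        if any(-lit in comp for lit in comp):
            return False
    return True

rng = random.Random(0)
n, trials = 200, 200
for alpha in (0.8, 1.0, 1.2):
    sat = sum(random_2sat_satisfiable(n, int(alpha * n), rng) for _ in range(trials))
    print(f"alpha = {alpha}: P(sat) ~ {sat / trials:.2f}")
```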
|
arxiv:math/9909031
|
as a new type of multicarrier ( mc ) scheme built upon the recently discovered delay - doppler domain orthogonal pulse ( ddop ), orthogonal delay - doppler division multiplexing ( oddm ) aims to address the challenges of waveform design in linear time - varying channels. in this paper, we explore the design principles of oddm and clarify the key ideas underlying the ddop. we then derive an alternative representation of the ddop and highlight the fundamental differences between oddm and conventional mc schemes. finally, we discuss and compare two implementation methods for oddm.
|
arxiv:2504.10949
|
eliminating the fine structure splitting (fss) of excitons in self-assembled quantum dots (qds) is essential to the generation of high quality entangled photon pairs. it has been shown that the fss has a lower bound under uniaxial stress. in this letter, we show that the fss of excitons in a general self-assembled ingaas/gaas qd can be fully suppressed via combined stresses along the [110] and [010] directions. the result is confirmed by atomic empirical pseudopotential calculations. for all the qds we studied, the fss can be tuned to be vanishingly small ($<0.1~\mu$ev), which is sufficiently small for high quality entangled photon emission.
|
arxiv:1206.1111
|
transition metal dichalcogenides (tmds) are layered two-dimensional semiconductors explored for various optoelectronic applications, ranging from light-emitting diodes to single-photon emitters. to interact strongly with light, such devices require monolayer tmds, which exhibit a direct bandgap. these atomically thin sheets are typically obtained through mechanical exfoliation followed by manual identification with a brightfield optical microscope. while this traditional procedure provides high-quality crystals, the identification step is time-intensive, low-throughput, and prone to human error, creating a significant bottleneck for tmd research. here, we report a simple and fully automated approach for high-throughput identification of tmd monolayers using photoluminescence microscopy. compared to a manual search and verification, our methodology offers a four-orders-of-magnitude decrease in the time a researcher must invest per identified monolayer. this ability enables us to measure geometric and photoluminescence-intensity features of more than 2,400 monolayers and bilayers of wse$_2$, mose$_2$, and mos$_2$. due to these large numbers, we can study and quantify material properties previously inaccessible. for example, we show that the mean photoluminescence intensity from a monolayer correlates with its size due to reduced emission from its edges. further, we observe large variations in brightness (up to 10$\times$) from wse$_2$ monolayers of different batches produced by the same supplier. therefore, our automated approach not only increases fabrication efficiency but also enhances sample quality for optoelectronic devices of atomically thin semiconductors.
|
arxiv:2407.09228
|
social trust prediction addresses the significant problem of exploring interactions among users in social networks. naturally, this problem can be formulated in the matrix completion framework, with each entry indicating trust or distrust. however, there are two challenges for the social trust problem: 1) the observed data are sign (1-bit) measurements; 2) they are typically sampled non-uniformly. most previous matrix completion methods do not handle these two issues well. motivated by recent progress on the max-norm, we propose to solve the problem with a 1-bit max-norm constrained formulation. since the max-norm is not easy to optimize, we utilize a reformulation of the max-norm which facilitates an efficient projected gradient descent algorithm. we demonstrate the superiority of our formulation on two benchmark datasets.
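the paper's exact reformulation is not reproduced here; the sketch below shows the usual factorized surrogate for a max-norm ball (m = u v^t with bounded row norms of u and v) together with projected gradient steps on a logistic loss over observed signs, as a hedged illustration of a 1-bit max-norm constrained formulation; the sizes, radius r and step size are made up.

```python
# hedged sketch of 1-bit matrix completion under a max-norm-type constraint:
# parameterize M = U @ V.T, minimize a logistic loss on observed +/-1 entries by
# gradient descent, and project rows of U and V back to norm <= sqrt(R), the usual
# factorized surrogate for a max-norm ball. sizes, R and the step size are
# illustrative; this is not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r, R = 60, 60, 3, 2.0
M_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
mask = rng.random((n1, n2)) < 0.3                  # non-uniform sampling could go here
Y = np.sign(M_true) * mask                         # observed signs (+1/-1), 0 = unobserved

U = 0.1 * rng.standard_normal((n1, r))
V = 0.1 * rng.standard_normal((n2, r))

def project_rows(A, R):
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    return A * np.minimum(1.0, np.sqrt(R) / np.maximum(norms, 1e-12))

step = 0.05
for _ in range(500):
    M = U @ V.T
    g = mask * (-Y) / (1.0 + np.exp(Y * M))        # gradient of sum log(1 + exp(-Y*M))
    U, V = U - step * (g @ V), V - step * (g.T @ U)
    U, V = project_rows(U, R), project_rows(V, R)

acc = (np.sign(U @ V.T)[~mask] == np.sign(M_true)[~mask]).mean()
print(f"sign agreement on unobserved entries: {acc:.2f}")
```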
|
arxiv:1504.06394
|
in the last few decades, many universal relations between different global physical quantities of neutron stars have been proposed to constrain the unobservable or hard-to-observe properties of neutron stars, but few of them involve the gravitational redshift or the gravitational binding energy, especially for fast rotating neutron stars. here we focus on the universal relations related to these two quantities. based on 11 equations of state (eoss) from the predictions of microscopic nuclear many-body theories for normal or hybrid neutron stars, we propose a set of new quasi-universal relations for three rotation cases: static, general rotating and keplerian rotating. these new quasi-universal relations provide a potential way to constrain or estimate the unobservable or hard-to-observe properties of neutron stars.
|
arxiv:2008.02958
|
we present a model for random simple graphs with a degree distribution that obeys a power law ( i. e., is heavy - tailed ). to attain this behavior, the edge probabilities in the graph are constructed from bertoin - fujita - roynette - yor ( bfry ) random variables, which have been recently utilized in bayesian statistics for the construction of power law models in several applications. our construction readily extends to capture the structure of latent factors, similarly to stochastic blockmodels, while maintaining its power law degree distribution. the bfry random variables are well approximated by gamma random variables in a variational bayesian inference routine, which we apply to several network datasets for which power law degree distributions are a natural assumption. by learning the parameters of the bfry distribution via probabilistic inference, we are able to automatically select the appropriate power law behavior from the data. in order to further scale our inference procedure, we adopt stochastic gradient ascent routines where the gradients are computed on minibatches ( i. e., subsets ) of the edges in the graph.
|
arxiv:1702.08239
|
we present the mathematica application lieart ( lie algebras and representation theory ) for computations frequently encountered in lie algebras and representation theory, such as tensor product decomposition and subalgebra branching of irreducible representations. lieart can handle all classical and exceptional lie algebras. it computes root systems of lie algebras, weight systems and several other properties of irreducible representations. lieart ' s user interface has been created with a strong focus on usability and thus allows the input of irreducible representations via their dimensional name, while the output is in the textbook style used in most particle - physics publications. the unique dynkin labels of irreducible representations are used internally and can also be used for input and output. lieart exploits the weyl reflection group for most of the calculations, resulting in fast computations and a low memory consumption. extensive tables of properties, tensor products and branching rules of irreducible representations are included in the appendix.
|
arxiv:1206.6379
|
in this paper, we study the problem of extremely large ( xl ) multiple - input multiple - output ( mimo ) channel estimation in the terahertz ( thz ) frequency band, considering the presence of propagation delays across the entire array apertures at both communication ends, which naturally leads to frequency selectivity. this problem is known as beam squint and may be pronounced when communications are subject to multipath fading conditions. multi - carrier ( mc ) transmission schemes, which are usually deployed in thz communication systems to address these issues, suffer from high peak - to - average power ratio, which is specifically dominant in this frequency band where low transmit power is mostly feasible. furthermore, the frequency selectivity caused by severe molecular absorption in the thz band necessitates delicate consideration in mc system design. motivated by the benefits of single - carrier ( sc ) waveforms for practical thz communication systems, diverging from the current dominant research trend on mc systems, we devise a novel channel estimation problem formulation in the time domain for sc xl mimo systems subject to multipath signal propagation, spatial wideband effects, and molecular absorption. an efficient alternating minimization approach is presented to solve the proposed mixed - integer sparse problem formulation. extensive performance evaluation results validate that the proposed xl mimo estimation scheme outperforms conventional sc - and mc - based techniques, approaching the idealized lower bound.
|
arxiv:2310.14745
|
recent advances in transformer architectures [ 1 ] have brought remarkable improvements to visual question answering ( vqa ). nevertheless, transformer - based vqa models are usually deep and wide to guarantee good performance, so they can only run on powerful gpu servers and cannot run on capacity - restricted platforms such as mobile phones. therefore, it is desirable to learn an elastic vqa model that supports adaptive pruning at runtime to meet the efficiency constraints of different platforms. to this end, we present the bilaterally slimmable transformer ( bst ), a general framework that can be seamlessly integrated into arbitrary transformer - based vqa models to train a single model once and obtain various slimmed submodels of different widths and depths. to verify the effectiveness and generality of this method, we integrate the proposed bst framework with three typical transformer - based vqa approaches, namely mcan [ 2 ], uniter [ 3 ], and clip - vil [ 4 ], and conduct extensive experiments on two commonly - used benchmark datasets. in particular, one slimmed mcan - bst submodel achieves comparable accuracy on vqa - v2, while being 0. 38x smaller in model size and having 0. 27x fewer flops than the reference mcan model. the smallest mcan - bst submodel only has 9m parameters and 0. 16g flops during inference, making it possible to deploy it on a mobile device with less than 60 ms latency.
|
arxiv:2203.12814
|
following the approach of yu, singh, and krakauer [ phys. rev. b 43 ( 1991 ) 6411 ] we extended the linearized augmented plane wave code wien of blaha, schwarz, and coworkers by the evaluation of forces. in this paper we describe the approach, demonstrate the high accuracy of the force calculation, and use them for an efficient geometry optimization of poly - atomic systems.
|
arxiv:mtrl-th/9511002
|
dft calculations of various atomic species on a graphene sheet are investigated as prototypes for the formation of nano - structures on a carbon nanotube ( cnt ) wall. we computationally investigate adsorption energies and adsorption sites on a graphene sheet for many atomic species, including transition metals, noble metals, nitrogen and oxygen, using the dft calculation as a prototype for the cnt. suitable atomic species can then be chosen for each application from these results. the calculated results show that mo and ru are bound strongly on the graphene sheet with large diffusion barrier energies. on the other hand, some atomic species have large binding energies with small diffusion barrier energies.
|
arxiv:0711.4669
|
we present the conceptual and technical background required to describe and understand the correlations and fluctuations of the empirical density and current of steady - state diffusion processes on all time scales - - observables central to statistical mechanics and thermodynamics on the level of individual trajectories. we focus on the important and non - trivial effect of a spatial coarse graining. making use of a generalized time - reversal symmetry we provide deeper insight about the physical meaning of fluctuations of the coarse - grained empirical density and current, and explain why a systematic variation of the coarse - graining scale offers an efficient method to infer bounds on a system ' s dissipation. moreover, we discuss emerging symmetries in the statistics of the empirical density and current, and the statistics in the central - limit regime. more broadly our work promotes the application of stochastic calculus as a powerful direct alternative to feynman - kac theory and path - integral methods.
|
arxiv:2204.06553
|
we propose a new stochastic dual coordinate ascent technique that can be applied to a wide range of regularized learning problems. our method is based on the alternating direction method of multipliers ( admm ) to deal with complex regularization functions such as structured regularizations. although the original admm is a batch method, the proposed method offers a stochastic update rule in which each iteration requires only one or a few sample observations. moreover, our method naturally supports mini - batch updates, which speed up convergence. we show that, under mild assumptions, our method converges exponentially. numerical experiments show that our method performs efficiently in practice.
|
arxiv:1311.0622
|
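for context, the classical batch admm splitting for an l1 - regularized least - squares problem is sketched below ; the abstract's contribution is to replace these full - data updates with stochastic dual coordinate steps that touch only one or a few samples per iteration. parameter choices here are illustrative.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Batch ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (scaled dual form)."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: ridge-like quadratic subproblem via the cached Cholesky factor
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft-thresholding, the proximal step for the l1 term
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual ascent on the scaled multiplier
        u += x - z
    return z
```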
we present the detection of an absorption feature at $ e = 8. 77 ^ { + 0. 05 } _ { - 0. 06 } $ kev in the combined x - ray spectrum of the ultraluminous x - ray source ngc 1313 x - 1 observed with xmm - newton and nustar, significant at the 3 $ \ sigma $ level. if associated with blueshifted ionized iron, the implied outflow velocity is ~ 0. 2 $ c $ for fe xxvi, or ~ 0. 25 $ c $ for fe xxv. these velocities are similar to the ultrafast outflow seen in absorption recently discovered in this source at lower energies by xmm - newton, and we therefore conclude that this is an iron component to the same outflow. photoionization modeling marginally prefers the fe xxv solution, but in either case the outflow properties appear to be extreme, potentially supporting a super - eddington hypothesis for ngc 1313 x - 1.
|
arxiv:1607.03124
|
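a quick back - of - the - envelope check of the quoted velocities, assuming the 8. 77 kev feature is blueshifted fe xxvi ly - alpha ( rest energy roughly 6. 97 kev ) or fe xxv he - alpha ( roughly 6. 70 kev ) and that the absorber moves straight along the line of sight :

```python
# relativistic Doppler shift for motion directly toward the observer:
# E_obs / E_rest = sqrt((1 + beta) / (1 - beta))  =>  beta = (r^2 - 1) / (r^2 + 1)
E_obs = 8.77  # keV
for label, E_rest in [("Fe XXVI", 6.97), ("Fe XXV", 6.70)]:
    r = E_obs / E_rest
    beta = (r**2 - 1.0) / (r**2 + 1.0)
    print(f"{label}: v ~ {beta:.2f} c")
# gives roughly 0.23 c and 0.26 c, in line with the ~0.2 c / ~0.25 c quoted above
```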
monocular depth estimation is a fundamental task in computer vision and has drawn increasing attention. recently, some methods reformulate it as a classification - regression task to boost the model performance, where continuous depth is estimated via a linear combination of predicted probability distributions and discrete bins. in this paper, we present a novel framework called binsformer, tailored for classification - regression - based depth estimation. it mainly focuses on two crucial components of the specific task : 1 ) proper generation of adaptive bins and 2 ) sufficient interaction between the probability distributions and the bin predictions. specifically, we employ the transformer decoder to generate bins, viewing this, for the first time, as a direct set - to - set prediction problem. we further integrate a multi - scale decoder structure to achieve a comprehensive understanding of spatial geometry information and estimate depth maps in a coarse - to - fine manner. moreover, an extra scene - understanding query is proposed to improve the estimation accuracy ; it turns out that the model can implicitly learn useful information from an auxiliary environment classification task. extensive experiments on the kitti, nyu, and sun rgb - d datasets demonstrate that binsformer surpasses state - of - the - art monocular depth estimation methods by prominent margins. code and pretrained models will be made publicly available at \ url { https : / / github. com / zhyever / monocular - depth - estimation - toolbox }.
|
arxiv:2204.00987
|
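the classification - regression head that the abstract refers to can be summarized in a few lines : per - image logits define adaptive bin centers, per - pixel logits define a distribution over those bins, and depth is their weighted sum. the sketch below is a generic adaptive - bins head, not the full binsformer decoder, and the depth range is a hypothetical choice.

```python
import torch

def depth_from_bins(bin_logits, prob_logits, d_min=1e-3, d_max=10.0):
    """bin_logits: (B, N) per-image logits -> N adaptive bin widths/centers.
    prob_logits: (B, N, H, W) per-pixel logits over the N bins.
    Returns a (B, 1, H, W) depth map as the probability-weighted sum of centers."""
    widths = torch.softmax(bin_logits, dim=1) * (d_max - d_min)   # (B, N)
    edges = d_min + torch.cumsum(widths, dim=1)                   # right bin edges
    centers = edges - 0.5 * widths                                # (B, N)
    probs = torch.softmax(prob_logits, dim=1)                     # over the bin axis
    depth = torch.einsum("bnhw,bn->bhw", probs, centers)
    return depth.unsqueeze(1)
```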
in this paper, we consider the inhomogeneous dirichlet boundary value problem for the stationary navier - - stokes equations in $ n $ - dimensional half spaces $ \ mathbb { r } ^ n _ + = \ { x = ( x ', x _ n ) \ ; \ x ' \ in \ mathbb { r } ^ { n - 1 }, x _ n > 0 \ } $ with $ n \ geq 3 $ and prove the well - posedness in the scaling critical besov spaces. our approach is to regard the system as an evolution equation for the normal variable $ x _ n $ and reformulate it as an integral equation. then, we achieve the goal by making use of the maximal regularity method that has been developed in the context of nonstationary analysis in critical besov spaces. furthermore, for the case of $ n \ geq 4 $, we find that the asymptotic profile of the solution as $ x _ n \ to \ infty $ is given by the $ ( n - 1 ) $ - dimensional stationary navier - - stokes flow.
|
arxiv:2312.10882
|
provost mark wrighton is chancellor of washington university in st. louis ; former associate provost alice gast is president of lehigh university ; and former professor suh nam - pyo is president of kaist. former dean of the school of science robert j. birgeneau was the chancellor of the university of california, berkeley ( 2004 – 2013 ) ; former professor john maeda was president of rhode island school of design ( risd, 2008 – 2013 ) ; former professor david baltimore was president of caltech ( 1997 – 2006 ) ; and mit alumnus and former assistant professor hans mark served as chancellor of the university of texas system ( 1984 – 1992 ). in addition, faculty members have been recruited to lead governmental agencies ; for example, former professor marcia mcnutt is president of the national academy of sciences, urban studies professor xavier de souza briggs served as the associate director of the white house office of management and budget, and biology professor eric lander was a co - chair of the president ' s council of advisors on science and technology. in 2013, faculty member ernest moniz was nominated by president obama and later confirmed as united states secretary of energy. former professor hans mark served as secretary of the air force from 1979 to 1981. alumna and institute professor sheila widnall served as secretary of the air force between 1993 and 1997, making her the first female secretary of the air force and first woman to lead an entire branch of the us military in the department of defense. a 1999 report, met by promises of change by president charles vest, found that senior female faculty in the school of science were often marginalized, and in return for equal professional accomplishments received reduced " salary, space, awards, resources, and response to outside offers ". as of 2017, mit was the second - largest employer in the city of cambridge. based on feedback from employees, mit was ranked no. 7 as a place to work, among us colleges and universities as of march 2013. surveys cited a " smart ", " creative ", " friendly " environment, noting that the work - life balance tilts towards a " strong work ethic " but complaining about " low pay " compared to an industry position. = = = notable alumni = = = many of mit ' s over 120, 000 alumni have achieved considerable success in scientific research, public service, education, and business. as of october 2020, 41 mit alumni have won nobel prizes, 48 have been selected as rhodes scholars, 61 have been selected as marshall scholars, and 3 have been selected as mitchell
|
https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology
|
the $ \ mathbb { z } _ 2 \ mathbb { z } _ 4 \ mathbb { z } _ 8 $ - additive codes are subgroups of $ \ mathbb { z } _ 2 ^ { \ alpha _ 1 } \ times \ mathbb { z } _ 4 ^ { \ alpha _ 2 } \ times \ mathbb { z } _ 8 ^ { \ alpha _ 3 } $, and can be seen as linear codes over $ \ mathbb { z } _ 2 $ when $ \ alpha _ 2 = \ alpha _ 3 = 0 $, $ \ mathbb { z } _ 4 $ - additive or $ \ mathbb { z } _ 8 $ - additive codes when $ \ alpha _ 1 = \ alpha _ 3 = 0 $ or $ \ alpha _ 1 = \ alpha _ 2 = 0 $, respectively, or $ \ mathbb { z } _ 2 \ mathbb { z } _ 4 $ - additive codes when $ \ alpha _ 3 = 0 $. a $ \ mathbb { z } _ 2 \ mathbb { z } _ 4 \ mathbb { z } _ 8 $ - linear hadamard code is a hadamard code which is the gray map image of a $ \ mathbb { z } _ 2 \ mathbb { z } _ 4 \ mathbb { z } _ 8 $ - additive code. in this paper, we generalize some known results for $ \ mathbb { z } _ 2 \ mathbb { z } _ 4 $ - linear hadamard codes to $ \ mathbb { z } _ 2 \ mathbb { z } _ 4 \ mathbb { z } _ 8 $ - linear hadamard codes with $ \ alpha _ 1 \ neq 0 $, $ \ alpha _ 2 \ neq 0 $, and $ \ alpha _ 3 \ neq 0 $. first, we give a recursive construction of $ \ mathbb { z } _ 2 \ mathbb { z } _ 4 \ mathbb { z } _ 8 $ - additive hadamard codes of type $ ( \ alpha _ 1, \ alpha _ 2, \ alpha _ 3 ; t _ 1, t _ 2, t _ 3 ) $ with $ t _ 1 \ geq 1 $, $ t _ 2 \ geq 0 $, and $ t _ 3 \ geq 1 $. then, we show that in general the $ \ mathbb { z } _ 4 $
|
arxiv:2301.09404
|
the development of large language models and multi - modal models has enabled the appealing idea of generating novel molecules from text descriptions. generative modeling would shift the paradigm from relying on large - scale chemical screening to find molecules with desired properties to directly generating those molecules. however, multi - modal models combining text and molecules are often trained from scratch, without leveraging existing high - quality pretrained models. training from scratch consumes more computational resources and prohibits model scaling. in contrast, we propose a lightweight adapter - based strategy named chemical language model linker ( chemlml ). chemlml blends the two single domain models and obtains conditional molecular generation from text descriptions while still operating in the specialized embedding spaces of the molecular domain. chemlml can tailor diverse pretrained text models for molecule generation by training relatively few adapter parameters. we find that the choice of molecular representation used within chemlml, smiles versus selfies, has a strong influence on conditional molecular generation performance. smiles is often preferable despite not guaranteeing valid molecules. we raise issues in using the entire pubchem dataset of molecules and their associated descriptions for evaluating molecule generation and provide a filtered version of the dataset as a generation test set. to demonstrate how chemlml could be used in practice, we generate candidate protein inhibitors and use docking to assess their quality and also generate candidate membrane permeable molecules.
|
arxiv:2410.20182
|
we introduce activegamer, an active mapping system that utilizes 3d gaussian splatting ( 3dgs ) to achieve high - quality, real - time scene mapping and exploration. unlike traditional nerf - based methods, which are computationally demanding and restrict active mapping performance, our approach leverages the efficient rendering capabilities of 3dgs, allowing effective and efficient exploration in complex environments. the core of our system is a rendering - based information gain module that dynamically identifies the most informative viewpoints for next - best - view planning, enhancing both geometric and photometric reconstruction accuracy. activegamer also integrates a carefully balanced framework, combining coarse - to - fine exploration, post - refinement, and a global - local keyframe selection strategy to maximize reconstruction completeness and fidelity. our system autonomously explores and reconstructs environments with state - of - the - art geometric and photometric accuracy and completeness, significantly surpassing existing approaches in both aspects. extensive evaluations on benchmark datasets such as replica and mp3d highlight activegamer ' s effectiveness in active mapping tasks.
|
arxiv:2501.06897
|
we derive a priori estimates for solutions of a general class of fully non - linear equations on compact hermitian manifolds. our method is based on ideas that have been used for different specific equations, such as the complex monge - ampère, hessian and inverse hessian equations. as an application we solve a class of hessian quotient equations on kähler manifolds assuming the existence of a suitable subsolution. the method also applies to analogous equations on compact riemannian manifolds.
|
arxiv:1501.02762
|
we have systematically searched for the phase curves among the planets discovered by \ textit { k2 }. using the reported planetary parameters, we select the best potential candidates, and examine their light curves in detail. for our work, we consider light curves from two different detrending pipelines - \ texttt { everest } and \ texttt { k2sff }. in order to remove stellar variability and systematics, we test three different filtering techniques : spline, phasma ( median - filtering ) and butterworth ( harmonics filtering ), and use butterworth filtered light curves for the subsequent analysis. we have identified 6 previously unreported phase curves among the planets observed with \ textit { k2 } : k2 - 31b, hats - 9b, hats - 11b, k2 - 107b, k2 - 131b, and k2 - 106b. the first four of these are hot jupiters for which we find the photometric masses consistent with their rv - based masses within 2 $ \ sigma $, 1 $ \ sigma $, 1 $ \ sigma $, and 3 $ \ sigma $ respectively with comparatively low geometric albedos, while the last two are ultra - short period super - earths with phase curves dominated by reflective and thermal components. we also detect a secondary eclipse in hats - 11b at 62 $ \ pm $ 12 ppm. we thus deem it possible to validate the planetary nature of selected \ textit { k2 } candidates, and suggest that similar vetting could be used for the ongoing \ textit { tess } mission.
|
arxiv:1812.09227
|
in mathematics, an involution, involutory function, or self - inverse function is a function f that is its own inverse, f ( f ( x ) ) = x for all x in the domain of f. equivalently, applying f twice produces the original value. = = general properties = = any involution is a bijection. the identity map is a trivial example of an involution. examples of nontrivial involutions include negation ( $x \mapsto -x$ ), reciprocation ( $x \mapsto 1/x$ ), and complex conjugation ( $z \mapsto \bar{z}$ ) in arithmetic ; reflection, half - turn rotation, and circle inversion in geometry ; complementation in set theory ; and reciprocal ciphers such as the rot13 transformation and the beaufort polyalphabetic cipher. the composition $g \circ f$ of two involutions f and g is an involution if and only if they commute : $g \circ f = f \circ g$. = = involutions on finite sets = = the number of involutions, including the identity involution, on a set with n = 0, 1, 2,... elements is given by a recurrence relation found by heinrich august rothe in 1800 : $a_0 = a_1 = 1$ and $a_n = a_{n-1} + (n-1) a_{n-2}$ for $n > 1$. the first few terms of this sequence are 1, 1, 2, 4, 10, 26, 76, 232 ( sequence a000085 in the oeis ) ; these numbers are called the telephone numbers, and they also count the number of young tableaux with a given number of cells. the number $a_n$ can also be expressed by non - recursive formulas, such as the sum $a_n = \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{n!}{2^m \, m! \, (n-2m)!}$.
|
https://en.wikipedia.org/wiki/Involution_(mathematics)
|
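the recurrence quoted above is easy to check numerically ; the closed - form sum gives the same values.

```python
def telephone_numbers(n_max):
    """Number of involutions on an n-element set via Rothe's recurrence
    a_0 = a_1 = 1,  a_n = a_{n-1} + (n - 1) * a_{n-2}."""
    a = [1, 1]
    for n in range(2, n_max + 1):
        a.append(a[n - 1] + (n - 1) * a[n - 2])
    return a[:n_max + 1]

print(telephone_numbers(7))   # [1, 1, 2, 4, 10, 26, 76, 232]
```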
the einstein - cartan theory of gravitation and the classical theory of defects in an elastic medium are presented and compared. the former is an extension of general relativity and refers to four - dimensional space - time, while we introduce the latter as a description of the equilibrium state of a three - dimensional continuum. despite these important differences, an analogy is built on their common geometrical foundations, and it is shown that a space - time with curvature and torsion can be considered as a state of a four - dimensional continuum containing defects. this formal analogy is useful for illustrating the geometrical concept of torsion by applying it to concrete physical problems. moreover, the presentation of these theories using a common geometrical basis allows a deeper understanding of their foundations.
|
arxiv:gr-qc/0306029
|
memory manufacturer had teamed up to create a co - branded module. in 1999, tu and sun eventually bought back the 80 percent of kingston owned by softbank for $ 450 million. on december 14, 1996, john tu and david sun allocated $ 71. 5 million for employee bonuses as a result of the acquisition, averaging $ 130, 000 for each of the company ' s 550 workers. kingston announced a 49 % increase in unit sales for its memory module products in calendar year 1996 over calendar year 1995. in 1996, kingston opened its european headquarters in london, united kingdom. in january 1997, kingston opened a manufacturing facility / office in taiwan, a sales office in japan, and a manufacturing facility and offices in dublin, ireland. the company also expanded its american manufacturing capacity by purchasing pc - oem manufacturing buildings in fountain valley, california. kingston also introduced valueram, which was a high - quality, low - cost memory designed for system integrators to use in white box systems. in 1999, kingston launched advanced validation labs, inc. ( avl ), a sister company that provides memory validation services. = = = 2000s = = = kingston began manufacturing removable disk drive storage products in 1989 in their kingston storage products division. by 2000, it was decided to spin off the product line and become a sister company, storcase technology, inc. storcase ceased operations in 2006 after selling the designs and rights to manufacture its products to competitor cru - dataport. in june 2000, kingston announced a new supply chain management model to its memory manufacturing process. payton technology inc. was established to help support this new model. forbes listed kingston as number 141 on its list of " the 500 largest private companies in the u. s, " with revenues of $ 1. 5 billion for 1999. in march 2001, kingston announced the formation of the consumer markets division ( cmd ), a new division focusing on the retail and e - tail channel. in 2002 kingston launched a patented memory tester and a new hyperx line of high - performance memory modules, and also patented epoc chip - stacking technology. in august of that year, kingston made a $ 50 million investment in elpida and launched a green initiative for module manufacturing. in 2004, kingston announced revenues of $ 1. 8b for 2003. in september, kingston announced new datatraveler elite usb drives, with hardware - based security encryption. in october, advanced micro devices named kingston " outstanding partner " for contributions to the amd athlon 64
|
https://en.wikipedia.org/wiki/Kingston_Technology
|
dijet angular distributions of photoproduction events in which a $ d ^ { * \ pm } $ meson is produced in association with one of two energetic jets have been measured with the zeus detector at hera, using an integrated luminosity of 120 pb $ ^ { - 1 } $. differential cross sections as a function of the angle between the charm - jet and the proton - beam direction in the dijet rest frame have been measured for samples enriched in direct or resolved photon events. the results are compared with predictions from leading - order parton - shower monte carlo models and with next - to - leading - order qcd calculations. the angular distributions show clear evidence for the existence of charm originating from the photon.
|
arxiv:hep-ex/0302025
|
we discuss a method for the visual presentation of knotted surfaces in four - space, by examining the number and position of their morse critical points. using this method, we investigate surface - knots with one critical point of index 1. we then show infinitely many mutually distinct surface - knots that have an embedding with two critical points of index 1. next we define a long flat form of a banded link for any surface - knot and show diagrammatically a long flat form of $ n $ - twist - spun $ ( 2, t ) $ - torus knots.
|
arxiv:1505.07995
|
we present a scheme for implementing quantum operations with superconducting qubits. our approach uses a " coupler " qubit to mediate a controllable, secular interaction between " data " qubits, pulse sequences which strongly mitigate the effects of 1 / f flux noise, and a high - q resonator - based local memory. we develop a monte - carlo simulation technique capable of describing arbitrary noise - induced dephasing and decay, and demonstrate in this system a set of universal gate operations with o ( 10 ^ - 5 ) error probabilities in the presence of experimentally measured levels of 1 / f noise. we then add relaxation and quantify the decay times required to maintain this error level.
|
arxiv:0801.0761
|
we numerically study a disordered model for the rna secondary structure and we find that it undergoes a phase transition, with a breaking of the replica symmetry in the low temperature region ( like in spin glasses ). our results are based on the exact evaluation of the partition function.
|
arxiv:cond-mat/9907125
|
the minimum spanning tree is used to study the process of market integration for a large group of national stock market indices. we show how the asset tree evolves over time and describe the dynamics of its normalized length, mean occupation layer, and single - and multiple - step linkage survival rates. over the period studied, 1997 - 2006, the tree shows a tendency to become more compact. this implies that global equity markets are increasingly interrelated. the consequence for global investors is a potential reduction of the benefits of international portfolio diversification.
|
arxiv:physics/0607022
|
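a minimal version of the asset - tree construction described above, using the standard correlation - to - distance mapping $d_{ij} = \sqrt{2(1 - \rho_{ij})}$ ( assumed here, since the abstract does not spell out the metric ) :

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def market_mst(returns):
    """returns: (T, N) array of index returns (rows = time, columns = markets).
    Builds the correlation-distance matrix, extracts the MST, and reports the
    normalized tree length (mean edge weight), which shrinks as markets integrate."""
    rho = np.corrcoef(returns, rowvar=False)
    dist = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))
    mst = minimum_spanning_tree(dist)      # sparse matrix holding the N-1 tree edges
    normalized_length = mst.data.mean()
    return mst, normalized_length
```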
we report on numerical experiments using deflation to compute quark propagators for the highly improved staggered quark ( hisq ) action. the method is tested on hisq gauge configurations, generated by the milc collaboration, with lattice spacings of 0. 15 fm, with a range of volumes, and sea quark masses down to the physical quark mass.
|
arxiv:1710.07219
|
long - term time series forecasting ( ltsf ) offers broad utility in practical settings like energy consumption and weather prediction. accurately predicting long - term changes, however, is demanding due to the intricate temporal patterns and inherent multi - scale variations within time series. this work confronts key issues in ltsf, including the suboptimal use of multi - granularity information, the neglect of channel - specific attributes, and the unique nature of trend and seasonal components, by introducing a proficient mlp - based forecasting framework. our method adeptly disentangles complex temporal dynamics using clear, concurrent predictions across various scales. these multi - scale forecasts are then skillfully integrated through a system that dynamically assigns importance to information from different granularities, sensitive to individual channel characteristics. to manage the specific features of temporal patterns, a two - pronged structure is utilized to model trend and seasonal elements independently. experimental results on eight ltsf benchmarks demonstrate that mdmixer improves average mae performance by 4. 64 % compared to the recent state - of - the - art mlp - based method ( timemixer ), while achieving an effective balance between training efficiency and model interpretability.
|
arxiv:2505.08199
|
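one concrete piece of the framework sketched above is the trend / seasonal split ; a common way to realize it ( an assumption here, since the abstract does not give the exact operator ) is a moving - average decomposition, after which each component can be fed to its own mlp head.

```python
import numpy as np

def trend_seasonal_split(x, window=25):
    """Moving-average decomposition of a 1-D series: the smoothed series is the
    trend, the residual is the seasonal/remainder part. The window length is an
    illustrative choice, not taken from the paper."""
    pad = window // 2
    xp = np.pad(x, (pad, pad), mode="edge")
    trend = np.convolve(xp, np.ones(window) / window, mode="valid")[:len(x)]
    seasonal = x - trend
    return trend, seasonal
```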
we show that some novel physics of supertubes removes closed time - like curves from many supersymmetric spaces which naively suffer from this problem. the main claim is that supertubes naturally form domain - walls, so while analytical continuation of the metric would lead to closed time - like curves, across the domain - wall the metric is non - differentiable, and the closed time - like curves are eliminated. in the examples we study the metric inside the domain - wall is always of the gödel type, while outside the shell it looks like a localized rotating object, often a rotating black hole. thus this mechanism prevents the appearance of closed time - like curves behind the horizons of certain rotating black holes.
|
arxiv:hep-th/0404239
|
hst and ground - based [ oii ] and [ nii ] images obtained from 1996 to 1999 reveal the existence of an ionised optical nebula around the symbiotic binary ch cyg extending out to 5000 a. u. from the central stars. the observed velocity range of the nebula, derived from long - slit echelle spectra, is 130 km / s. in spite of its complex appearance, the velocity data show that the basic morphology of the inner regions of the optical nebula is that of a bipolar ( or conical ) outflow extending nearly along the plane of the sky out to some 2000 a. u. from the centre. even though the extension of this bipolar outflow and its position angle are consistent with those of the radio jet produced in 1984 ( extrapolated to the time of our optical imagery ), no obvious counterpart of the original, dense radio bullets ejected by the system is visible. we speculate that the optical bipolar outflow might be the remnant of the interaction of the bullets with a relatively dense circumstellar medium.
|
arxiv:astro-ph/0109046
|
rankings of people and items are widely used in selection - making, match - making, and recommendation algorithms deployed on platforms ranging from employment websites to search tools. the ranking position of a candidate affects the number of opportunities received by the ranked candidate. several works have observed that ranking candidates by their score can be biased against candidates belonging to the minority community. in recent work, fairness - aware representative ranking was proposed for computing fairness - aware re - ranking of results. the proposed algorithm achieves the desired distribution of top - ranked results with respect to one or more protected attributes. in this work, we highlight the bias in fairness - aware representative ranking for an individual as well as for a group if the group is sub - active on the platform. we define individual unfairness and group unfairness and propose methods to generate ideal individual and group fair representative rankings when the universal representation ratio is known or unknown. the simulation results provide a quantitative analysis of fairness in the proposed solutions. the paper concludes with open challenges and further directions.
|
arxiv:2103.01335
|
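a generic greedy re - ranking of the representative - ranking kind discussed above might look as follows ; the exact constraints and tie - breaking rules analysed in the paper may differ, so treat this purely as an illustration of enforcing group shares in every top - k prefix.

```python
def fair_rerank(candidates, scores, groups, target_share, k):
    """Greedy re-ranking: at each prefix length p of the top-k list, every
    protected group g should hold at least floor(target_share[g] * p) slots;
    otherwise the best-scored remaining member of an under-represented group
    is promoted, else the globally best-scored candidate is taken."""
    remaining = sorted(candidates, key=lambda c: scores[c], reverse=True)
    ranked, counts = [], {g: 0 for g in target_share}
    while remaining and len(ranked) < k:
        p = len(ranked) + 1
        needy = [g for g, s in target_share.items()
                 if counts[g] < int(s * p) and any(groups[c] == g for c in remaining)]
        pick = (next(c for c in remaining if groups[c] == needy[0])
                if needy else remaining[0])
        remaining.remove(pick)
        counts[groups[pick]] = counts.get(groups[pick], 0) + 1
        ranked.append(pick)
    return ranked
```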
we consider mid - spectrum eigenstates of the sachdev - ye - kiteav ( syk ) model. we prove that for subsystems whose size is a constant fraction of the system size, the entanglement entropy deviates from the maximum entropy by at least a positive constant. this result highlights the difference between the entanglement entropy of mid - spectrum eigenstates of the syk model and that of random states.
|
arxiv:2409.07043
|
it is shown how states of a quantum mechanical particle in the schroedinger representation can be approximated by states in the so - called polymer representation. the result may shed some light on the semiclassical limit of loop quantum gravity.
|
arxiv:gr-qc/0606090
|
agent - based models ( abms ) have shown promise for modelling various real world phenomena incompatible with traditional equilibrium analysis. however, a critical concern is the manual definition of behavioural rules in abms. recent developments in multi - agent reinforcement learning ( marl ) offer a way to address this issue from an optimisation perspective, where agents strive to maximise their utility, eliminating the need for manual rule specification. this learning - focused approach aligns with established economic and financial models through the use of rational utility - maximising agents. however, this representation departs from the fundamental motivation for abms : that realistic dynamics emerging from bounded rationality and agent heterogeneity can be modelled. to resolve this apparent disparity between the two approaches, we propose a novel technique for representing heterogeneous processing - constrained agents within a marl framework. the proposed approach treats agents as constrained optimisers with varying degrees of strategic skills, permitting departure from strict utility maximisation. behaviour is learnt through repeated simulations with policy gradients to adjust action likelihoods. to allow efficient computation, we use parameterised shared policy learning with distributions of agent skill levels. shared policy learning avoids the need for agents to learn individual policies yet still enables a spectrum of bounded rational behaviours. we validate our model ' s effectiveness using real - world data on a range of canonical $ n $ - agent settings, demonstrating significantly improved predictive capability.
|
arxiv:2402.00787
|
we study the fusion of two scalar wilson defects. we propose that fusion holds at the quantum level by showing that bare one - point functions stay invariant. this is an expected result, as the path integral stays invariant under fusion of the two defects. the difference instead lies in the renormalization of local quantities on the defects. those on the fused defect take into account uv divergences in the fusion limit, when the two defects approach each other, in addition to uv divergences in the coincident limit of defect - local fields and in the near - defect limits of bulk - local fields. at the fixed point of the corresponding rg flow the two conformal defects have fused into a single conformal defect.
|
arxiv:2304.10239
|
recently there has been remarkable progress in solving the sign problem, which occurs in investigating statistical systems with a complex weight. the two promising methods, the complex langevin method and the lefschetz thimble method, share the idea of complexifying the dynamical variables, but their relationship has not been clear. here we propose a unified formulation, in which the sign problem is taken care of by both the langevin dynamics and the holomorphic gradient flow. we apply our formulation to a simple model in three different ways and show that one of them interpolates the two methods by changing the flow time.
|
arxiv:1710.07027
|
precomputed radiance transfer ( prt ) remains an attractive solution for real - time rendering of complex light transport effects such as glossy global illumination. after precomputation, we can relight the scene with new environment maps while changing viewpoint in real - time. however, practical prt methods are usually limited to low - frequency spherical harmonic lighting. all - frequency techniques using wavelets are promising but have so far had little practical impact. the curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or only direct lighting with triple product integrals. in this paper, we demonstrate a hybrid neural - wavelet prt solution to high - frequency indirect illumination, including glossy reflection, for relighting with changing view. specifically, we seek to represent the light transport function in the haar wavelet basis. for global illumination, we learn the wavelet transport using a small multi - layer perceptron ( mlp ) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters being other mlp inputs. we optimize / learn the feature field ( compactly represented by a tensor decomposition ) and mlp parameters from multiple images of the scene under different lighting and viewing conditions. we demonstrate real - time ( 512 x 512 at 24 fps, 800 x 600 at 13 fps ) precomputed rendering of challenging scenes involving view - dependent reflections and even caustics.
|
arxiv:2307.06335
|
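the haar basis mentioned above is simple to implement ; one level of the 2d transform is shown below as a sketch ( this is just the basis change, not the paper's neural wavelet light - transport pipeline ).

```python
import numpy as np

def haar2d_level(img):
    """One level of the orthonormal 2D Haar transform; img needs even height/width.
    Returns the coarse approximation and the three detail bands."""
    a = img[0::2, 0::2]   # top-left of each 2x2 block
    b = img[0::2, 1::2]   # top-right
    c = img[1::2, 0::2]   # bottom-left
    d = img[1::2, 1::2]   # bottom-right
    ll = (a + b + c + d) / 2.0   # coarse approximation
    lh = (a - b + c - d) / 2.0   # left-right (column) differences
    hl = (a + b - c - d) / 2.0   # top-bottom (row) differences
    hh = (a - b - c + d) / 2.0   # diagonal differences
    return ll, lh, hl, hh
```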
the poincare recurrence theorem states that any finite system will come arbitrarily close to its initial state after some very long but finite time. at the statistical level, this by itself does not represent a paradox, but it apparently violates the second law of thermodynamics, which may lead to some confusing conclusions for macroscopic systems. however, this statement does not take gravity into account. if two particles with a given center of mass energy come closer than the schwarzschild diameter apart, according to classical gravity they will form a black hole. in the classical case, a black hole once formed will always grow and effectively quench the poincare recurrence. we derive the condition under which the classical black hole production rate is higher than the classical poincare recurrence rate. in the quantum case, if the temperature of the black hole is lower than the temperature of the surrounding gas, such a black hole cannot disappear via hawking evaporation. we derive the condition that gives a critical temperature above which black hole production is faster than quantum poincare recurrence. however, in the quantum case, the quantum poincare recurrence theorem can be applied to the black hole states too. the presence of the black hole can make the recurrence time longer or shorter, depending on whether the presence of the black hole increases or decreases the total entropy. we derive the temperature below which the produced black hole increases the entropy of the whole system ( gas particles plus a black hole ). finally, if the evolution of the system is fast enough, then newly formed black holes will merge and accrete particles until one large black hole dominates the system. we give the temperature above which the presence of black holes increases the entropy of the whole system and prolongs the poincare recurrence time.
|
arxiv:1611.00792
|
generating mocks for future sky surveys requires large volumes and high resolutions, which is computationally expensive even for fast simulations. in this work we try to develop numerical schemes to calibrate various halo and matter statistics in fast low resolution simulations compared to high resolution n - body and hydrodynamic simulations. for the halos, we improve the initial condition accuracy and develop a halo finder " relaxed - fof ", where we allow different linking lengths for different halo masses and velocity dispersions. we show that our relaxed - fof halo finder improves the common statistics, such as halo bias, halo mass function, halo auto power spectrum in real space and in redshift space, cross correlation coefficient with the reference halo catalog, and halo - matter cross power spectrum. we also incorporate the potential gradient descent ( pgd ) method into fast simulations to improve the matter distribution at nonlinear scales. by building a lightcone output, we show that the pgd method significantly improves the weak lensing convergence tomographic power spectrum. with these improvements fastpm is comparable to the high resolution full n - body simulation of the same mass resolution, with two orders of magnitude fewer time steps. these techniques can be used to improve the halo and matter statistics of fastpm simulations for mock catalogs of future surveys such as desi and lsst.
|
arxiv:1908.05276
|
we provide another construction of the natural parametrization of sle $ _ \ kappa $ for $ \ kappa < 4 $. we construct it as the expectation of the quantum time, which is a random measure carried by sle in an ambient gaussian free field. this quantum time was built as the push forward on the sle curve of the liouville boundary measure, which is a natural field - dependent measure supported on the boundary of the domain. we moreover show that the quantum time can be reconstructed as a chaos on any measure on the trace of sle with the right markovian covariance property. this provides another proof that natural parametrization is characterized by its markovian covariance property.
|
arxiv:1708.03801
|
new preliminary combined results from the lep experiments on searches for the higgs boson beyond the standard model are presented. the new determination of the top quark mass at the tevatron in 2004 influences the interpretations of the lep results in both the standard model and the minimal supersymmetric extension of the standard model. higgs boson physics will also be a major research area at the future linear collider. a review including new preliminary results on the potential for precision measurements is given.
|
arxiv:hep-ph/0502002
|
a novel integral equations approach is applied for studying ion pairing in the restricted primitive model ( rpm ) electrolyte, i. e., the three point extension ( tpe ) to the ornstein - zernike integral equations. in the tpe approach, the three - particle correlation functions $ g ^ { [ 3 ] } ( { \ bf r } _ { 1 }, { \ bf r } _ { 2 }, { \ bf r } _ { 3 } ) $ are obtained. the tpe results are compared to molecular dynamics ( md ) simulations and other theories. good agreement between tpe and md is observed for a wide range of parameters, particularly where standard integral equations theories fail, i. e., low salt concentration and high ionic valence. our results support the formation of ion pairs and aligned ion complexes.
|
arxiv:cond-mat/0205591
|
bilstm has been prevalently used as a core module for ner in a sequence - labeling setup. state - of - the - art approaches use bilstm with additional resources such as gazetteers, language - modeling, or multi - task supervision to further improve ner. this paper instead takes a step back and focuses on analyzing problems of bilstm itself and how exactly self - attention can bring improvements. we formally show the limitation of ( crf - ) bilstm in modeling cross - context patterns for each word - - the xor limitation. then, we show that two types of simple cross - structures - - self - attention and cross - bilstm - - can effectively remedy the problem. we test the practical impacts of the deficiency on real - world ner datasets, ontonotes 5. 0 and wnut 2017, with clear and consistent improvements over the baseline, up to 8. 7 % on some of the multi - token entity mentions. we give in - depth analyses of the improvements across several aspects of ner, especially the identification of multi - token mentions. this study should lay a sound foundation for future improvements on sequence - labeling ner. ( source codes : https : / / github. com / jacobvsdanniel / cross - ner )
|
arxiv:1908.11046
|
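a minimal pytorch sketch of the ' bilstm plus self - attention ' cross - structure discussed above ; hyper - parameters are illustrative and the crf layer of the ( crf - ) bilstm variants is omitted.

```python
import torch
import torch.nn as nn

class BiLSTMSelfAttnTagger(nn.Module):
    """BiLSTM encoder, one self-attention layer over its outputs for global
    token-to-token context, then a per-token tag classifier."""

    def __init__(self, vocab_size, num_tags, emb=100, hidden=200, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.out = nn.Linear(4 * hidden, num_tags)

    def forward(self, tokens):                        # tokens: (B, T) int ids
        h, _ = self.lstm(self.embed(tokens))          # (B, T, 2*hidden)
        ctx, _ = self.attn(h, h, h)                   # cross-context features
        return self.out(torch.cat([h, ctx], dim=-1))  # (B, T, num_tags)
```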
let $ w $ be a finite irreducible real reflection group, which is a coxeter group. we explicitly construct a basis for the module of differential 1 - forms with logarithmic poles along the coxeter arrangement by using a primitive derivation. as a consequence, we extend the hodge filtration, indexed by nonnegative integers, into a filtration indexed by all integers. this filtration coincides with the filtration by the order of poles. the results are translated into the derivation case.
|
arxiv:0810.3107
|
we present recent results on lattice simulations using chiral effective field theory. in particular we discuss lattice simulations for dilute neutron matter at next - to - leading order and three - body forces in light nuclei at next - to - next - to - leading order.
|
arxiv:0812.3065
|
we study the occurrence of physical collisions between stars in young and compact star clusters. the calculations are performed on the grape - 4 with the starlab software environment, which includes the dynamical evolution and the nuclear evolution of all stars and binaries. the selection of the initial conditions is based on existing and well observed star clusters, such as r136 in the 30 doradus region in the large magellanic cloud and the arches and quintuplet star clusters in the vicinity of the galactic center. collisions between stars occur rather frequently in our models. at any time a single star dominates the collision history of the system. the collision rate of this runaway merger scales with the initial relaxation time of the cluster and is independent of other cluster parameters, such as the initial mass function or the initial density profile of the cluster. subsequent encounters result in a steady growth in mass of the coagulating star, until it escapes or explodes in a supernova. the collision rate in these models is about 0. 00022 collisions per star per myr for a cluster with an initial relaxation time of 1 myr.
|
arxiv:astro-ph/0012237
|
we study rigid surface operators in the $ n = 4 $ supersymmetric yang - mills theories with gauge groups $ so ( n ) $ and $ sp ( 2n ) $. using maps $ x _ s $ and $ y _ s $ between these two theories, wyllard made explicit proposals for how the $ s $ - duality map should act on certain subclasses of surface operators. we study the maps $ x _ s $ and $ y _ s $ further and simplify the construction of symbol invariant of rigid surface operators by a convenient trick. by consistency checks, we recover and extend the $ s $ - duality maps proposed by wyllard. we find new subclasses of rigid surface operators related by $ s $ - duality. we try to explain the exceptions of $ s $ - duality maps. we also discuss the extension of the techniques used in the $ b _ n / c _ n $ theories to the $ d _ n $ theories.
|
arxiv:1708.07388
|
lfoundry. in the 2012 to 2014 period, micron again went through an acquisition - layoff cycle, becoming the majority shareholder of inotera memories, purchasing elpida memory for $ 2 billion and the remaining shares in rexchip, a pc memory chip manufacturing venture between powerchip and elpida memory for $ 334 million, while announcing plans to lay off approximately 3, 000 workers. through the elpida acquisition, micron became a major supplier to apple inc. for the iphone and ipad. in december 2016 micron finished acquiring the remaining 67 percent of inotera, making it a 100 percent subsidiary of micron. in april 2017 micron announced sanjay mehrotra as the new president and ceo to replace mark durcan. in june 2017 micron announced it was discontinuing the lexar retail removable media storage business and putting some or all it up for sale. in august of that year the lexar brand was acquired by longsys, a flash memory company based in shenzhen, china. in may 2018 micron technology and intel launched qlc nand memory to increase storage density. the company ranked 150th on the fortune 500 list of largest united states corporations by revenue. in february 2019 the first microsd card with a storage capacity of 1 terabyte ( tb ) was announced by micron. as of march 2020 3. 84tb micron 5210 ion is the cheapest large - capacity ssd in the world. in september 2020 the company introduced the world ' s fastest discrete graphics memory solution. working with computing technology leader nvidia, micron debuted gddr6x in the nvidia geforce rtx 3090 and geforce rtx 3080 graphics processing units ( gpus ). in november 2020, the company unveiled a new 176 - layer 3d nand module. it offers improved read and write latency and is slated to be used in the production of a new generation of solid - state drives. on 22 october 2021, micron closed the sale of im flash ' s lehi, utah fab to texas instruments for a sale price of us $ 900 million. with the passage of the chips and science act, micron announced its pledge to invest billions in new manufacturing within the us. in september 2022, micron announced they would invest $ 15 billion in a new facility in boise, idaho. in october 2022 micron announced a $ 100 billion expansion in clay, new york. micron technology owed net
|
https://en.wikipedia.org/wiki/Micron_Technology
|
extracting and predicting object structure and dynamics from videos without supervision is a major challenge in machine learning. to address this challenge, we adopt a keypoint - based image representation and learn a stochastic dynamics model of the keypoints. future frames are reconstructed from the keypoints and a reference frame. by modeling dynamics in the keypoint coordinate space, we achieve stable learning and avoid compounding of errors in pixel space. our method improves upon unstructured representations both for pixel - level video prediction and for downstream tasks requiring object - level understanding of motion dynamics. we evaluate our model on diverse datasets : a multi - agent sports dataset, the human3. 6m dataset, and datasets based on continuous control tasks from the deepmind control suite. the spatially structured representation outperforms unstructured representations on a range of motion - related tasks such as object tracking, action recognition and reward prediction.
|
arxiv:1906.07889
|