text | source
---|---
It is well known that observability (and, by duality, controllability) of the elliptic wave equation, i.e., with a Riemannian Laplacian, in time $t_0$ is almost equivalent to the Geometric Control Condition (GCC), which stipulates that any geodesic ray meets the control set within time $t_0$. We show that in the subelliptic setting, GCC is never verified, and that subelliptic wave equations are never observable in finite time. More precisely, given any subelliptic Laplacian $\Delta = -\sum_{i=1}^m X_i^* X_i$ on a manifold $M$, and any measurable subset $\omega \subset M$ such that $M \setminus \omega$ contains in its interior a point $q$ with $[X_i, X_j](q) \notin \text{Span}(X_1, \ldots, X_m)$ for some $1 \leq i, j \leq m$, we show that for any $t_0 > 0$, the wave equation with subelliptic Laplacian $\Delta$ is not observable on $\omega$ in time $t_0$. The proof is based on the construction of sequences of solutions of the wave equation concentrating on geodesics (for the associated sub-Riemannian distance) spending a long time in $M \setminus \omega$. As a counterpart, we prove a positive result of observability for the wave equation in the Heisenberg group, where the observation set is a well-chosen part of the phase space.
|
arxiv:2002.01259
|
The magnetic and transport properties of $Pd_{0.99}Fe_{0.01}$ thin films have been studied. We have found that the Curie temperature of the films is about 20 K and that the magnetic properties depend strongly on temperature below $T_{Curie}$. We have also fabricated a set of superconductor-ferromagnet-superconductor Josephson junctions, $Nb$-$PdFe$-$Nb$. The temperature dependence of the junctions with a ferromagnet-layer thickness of about 36 nm shows reentrant behaviour, which is evidence of the transition of the junction into the $\pi$-state.
|
arxiv:0709.2495
|
I give a brief overview of some quantum-gravity-phenomenology research lines, focusing on studies of cosmic rays and gamma-ray bursts that concern the fate of Lorentz symmetry in quantum spacetime. I also stress that the most valuable phenomenological analyses should not mix too many conjectured new features of quantum spacetime, and from this perspective it appears that it should be difficult to obtain reliable guidance on the quantum-gravity problem from the analysis of synchrotron radiation from the Crab nebula and from the analysis of phase coherence of light from extragalactic sources. Forthcoming observatories of ultra-high-energy neutrinos should provide several opportunities for clean tests of some simple hypotheses for the short-distance structure of spacetime. In particular, these neutrino studies, and some related cosmic-ray studies, should provide access to the regime $E > \sqrt{m E_p}$.
|
arxiv:gr-qc/0402009
|
We introduce a new family of graphs, namely, hybrid graphs. There are infinitely many hybrid graphs associated to a single graph. We show that every hybrid graph associated to a given graph is Cohen-Macaulay. Furthermore, we show that every Cohen-Macaulay chordal graph is a hybrid graph.
|
arxiv:1904.03824
|
It is generally believed that the shadows of either black holes or naked singularities arise due to photon spheres developing in these spacetimes. Here we propose a new spherically symmetric naked-singularity spacetime solution of the Einstein equations which has no photon sphere, and we show that the singularity casts a shadow in the absence of a photon sphere. We discuss some novel features of this shadow and of the lightlike geodesics in this spacetime. We compare the shadow of the naked singularity here with the shadows cast by a Schwarzschild black hole and by the first type of Joshi-Malafarina-Narayan (JMN1) naked singularity, where for the last two spacetimes the shadow is formed due to the presence of a photon sphere. It is seen, in particular, that the size of the shadow of the singularity is considerably smaller than that of a black hole. Our analysis shows that the shadow of this naked singularity is distinguishable from those of a Schwarzschild black hole and the JMN1 naked singularity. These results are useful and important in the context of recent observations of the shadow of the M87 galactic center.
|
arxiv:2004.06525
|
We study the homological behavior of modules satisfying the Auslander condition. Assume that $\mathcal{AC}$ is the class of left $R$-modules satisfying the Auslander condition. It is proved that, for any ring $R$, each cycle of an exact complex with each term in $\mathcal{AC}$ belongs to $\mathcal{AC}$. As a consequence, we show that for any left Noetherian ring $R$, $\mathcal{AC}$ is a resolving subcategory of the category of left $R$-modules if and only if $_RR$ satisfies the Auslander condition if and only if each Gorenstein projective left $R$-module belongs to $\mathcal{AC}$. As an application, we prove that, for an Artinian algebra $R$ satisfying the Auslander condition, $R$ is Gorenstein if and only if $\mathcal{AC}$ coincides with the class of Gorenstein projective left $R$-modules if and only if $(\mathcal{AC}^{<\infty}, (\mathcal{AC}^{<\infty})^\bot)$ is a tilting-like cotorsion pair if and only if $(\mathcal{AC}^{<\infty}, \mathcal{I})$ is a tilting-like cotorsion pair, where $\mathcal{AC}^{<\infty}$ is the class of left $R$-modules with finite $\mathcal{AC}$-dimension and $\mathcal{I}$ is the class of injective left $R$-modules. This leads to some criteria for the validity of the Auslander-Reiten conjecture, which says that an Artinian algebra satisfying the Auslander condition is Gorenstein.
|
arxiv:2302.05850
|
We have studied the phase transition from hadron matter to quark matter in the presence of high magnetic fields, incorporating trapped electron neutrinos at finite temperatures. We have used the density-dependent quark mass (DDQM) model for the quark phase, while the hadron phase is treated in the framework of relativistic mean-field theory. It is seen that the nuclear energy at the phase transition decreases with both magnetic field and temperature. A brief discussion of the effect of the magnetic field on supernova explosions and proto-neutron-star evolution is given.
|
arxiv:astro-ph/0012260
|
$\chi_{c0}(2P)$, $\chi_{c2}(2P) \to \gamma\gamma$. We discuss the status of the recently observed $c\bar{c}$ states X(3872) and Y(3941): according to our results, the X(3872) can be either $\chi_{c1}(2P)$ or $\eta_{c2}(1D)$, while the Y(3941) is $\chi_{c2}(2P)$.
|
arxiv:hep-ph/0511005
|
Due to photon-assisted transport processes, chiral edge modes induced by periodic driving do not directly mediate quantized transport. Here we show how narrow-bandwidth "energy filters" can restore quantization by suppressing photon-assisted transport through Floquet sidebands. We derive a Floquet Landauer-type equation to describe transport through such an energy-filtered setup, and show how the filter can be integrated out to yield a sharply energy-dependent renormalized system-lead coupling. We show analytically and through numerical simulations that a nearly quantized conductance can be achieved in both off-resonantly and resonantly induced quasienergy gaps when filters are introduced. The conductance approaches the appropriate quantized value on each plateau with increasing system and filter size. We introduce a "Floquet distribution function" and show both analytically and numerically that it approaches the equilibrium Fermi-Dirac form when narrow-band filters are introduced, highlighting the mechanism that restores quantized transport.
|
arxiv:2402.18776
|
Recent advances in large video-language models have displayed promising outcomes in video comprehension. Current approaches straightforwardly convert video into language tokens and employ large language models for multi-modal tasks. However, this method often leads to the generation of irrelevant content, commonly known as "hallucination", as the length of the text increases and the impact of the video diminishes. To address this problem, we propose Vista-LLaMA, a novel framework that maintains a consistent distance between all visual tokens and any language tokens, irrespective of the generated text length. Vista-LLaMA omits relative position encoding when determining attention weights between visual and text tokens, while retaining the position encoding between text tokens. This amplifies the effect of visual tokens on text generation, especially when the relative distance between visual and text tokens is longer. The proposed attention mechanism significantly reduces the chance of producing text irrelevant to the video content. Furthermore, we present a sequential visual projector that projects the current video frame into tokens of the language space with the assistance of the previous frame. This approach not only captures the temporal relationship within the video, but also allows fewer visual tokens to encompass the entire video. Our approach significantly outperforms various previous methods (e.g., Video-ChatGPT, MovieChat) on four challenging open-ended video question answering benchmarks. We reach an accuracy of 60.7 on zero-shot NExT-QA and 60.5 on zero-shot MSRVTT-QA, setting a new state-of-the-art performance. This project is available at https://jinxxian.github.io/Vista-LLaMA.
|
arxiv:2312.08870
|
In this note, we demonstrate an instance of bounded-degree graphs of size $n$ for which the total-variation mixing time of the random walk is decreased by a factor of $\log n / \log\log n$ if we multiply the edge conductances by bounded factors in a certain way.
|
arxiv:1304.0244
|
We construct a model category structure on the category of diffeological spaces which is Quillen equivalent to the model structure on the category of topological spaces based on the notions of Serre fibrations and weak homotopy equivalences.
|
arxiv:1311.5668
|
The null energy condition (NEC) is a cornerstone of general relativity, and its violation could leave observable imprints in the cosmic gravitational wave spectrum. Theoretical models suggest that NEC violations during inflation can amplify the primordial tensor power spectrum, leading to distinct features in the stochastic gravitational wave background (SGWB). In this work, we search for these NEC-violating signatures in the SGWB using data from Advanced LIGO and Advanced Virgo's first three observing runs. Our analysis reveals no statistically significant evidence of such signals, allowing us to place stringent upper limits on the tensor power spectrum amplitude, $P_{T,2}$, during the second inflationary stage. Specifically, we find that $P_{T,2} \lesssim 0.15$ at a $95\%$ confidence level. Notably, this upper limit is consistent with constraints derived from pulsar timing array observations, reinforcing the hypothesis that NEC violations during inflation could explain the signal detected by pulsar timing arrays. Our findings contribute to a deeper understanding of the early universe and highlight the potential of current and future gravitational wave experiments in probing the physics of inflation and NEC violations.
|
arxiv:2404.07075
|
Content sharing across multiple augmented reality (AR) displays is becoming commonplace, enhancing team communication and collaboration through devices like smartphones and AR glasses. However, this practice raises significant privacy concerns, especially regarding the physical environment visible in AR, which may include sensitive personal details such as facial features and identifiable information. Our research focuses on protecting privacy within AR environments, particularly the physical backgrounds visible during content sharing across three common AR display methods: projection, smartphone, and AR glasses. We analyze the potential privacy risks associated with each method and employ a region-of-interest (ROI) video encryption system to hierarchically encrypt the physical backdrop based on its safety rating. This study pioneers the integration of ROI video encryption at the bitstream level within AR contexts, providing a more efficient solution than traditional pixel-level encryption by increasing encryption speed and reducing the required space. Our adaptive system dynamically adjusts the encryption intensity based on the AR display method, ensuring tailored privacy protection.
|
arxiv:2411.10964
|
Accurate estimates of neutrino energy loss rates are needed for the study of the late stages of stellar evolution, in particular for the cooling of neutron stars and white dwarfs. The energy spectra of neutrinos and antineutrinos arriving at the Earth can also provide useful information on the primary neutrino fluxes as well as the neutrino mixing scenario (it is to be noted that these supernova neutrinos are emitted after the supernova explosion, which is a much later stage of stellar evolution than that considered in this paper). Recently, an improved microscopic calculation of weak-interaction-mediated rates for iron isotopes was introduced using the proton-neutron quasiparticle random phase approximation (pn-QRPA) theory. Here I present for the first time a fine-grid calculation of the neutrino and antineutrino energy loss rates due to $^{54,55,56}$Fe in stellar matter. In the cores of massive stars, the iron isotopes $^{54,55,56}$Fe are considered key players in decreasing the electron-to-baryon ratio ($Y_e$), mainly via electron capture on these nuclides. Core-collapse simulators may find this calculation suitable for interpolation purposes and for incorporation into stellar evolution codes. The calculated cooling rates are also compared with previous calculations.
|
arxiv:1408.4321
|
This paper studies the sensitivity to the observations of the block/group Lasso solution to an overdetermined linear regression model. Such a regularization is known to promote sparsity patterns structured as nonoverlapping groups of coefficients. Our main contribution provides a local parameterization of the solution with respect to the observations. As a byproduct, we give an unbiased estimate of the degrees of freedom of the group Lasso. Among other applications of such results, one can choose in a principled and objective way the regularization parameter of the Lasso through model selection criteria.
|
arxiv:1205.1481
|
The time-dependent CP asymmetries in fully reconstructed B0 --> D(*) pi/rho decays (new preliminary result), and in partially reconstructed B0 --> D(*) pi decays, are measured with the BaBar detector at the PEP-II asymmetric B factory at SLAC, using 232 million Y(4S) --> BB decays. We combine the above results and, using other measurements and theoretical assumptions, we interpret them in terms of the angles of the unitarity triangle describing the Cabibbo-Kobayashi-Maskawa matrix. We find |sin(2beta+gamma)| > 0.64 (0.42) at 68% (90%) confidence level using a frequentist approach, and |2beta+gamma| = (90 +- 43) degrees using a Bayesian approach.
|
arxiv:hep-ex/0601018
|
Context. The density split statistics in weak gravitational lensing analyses probe the correlation between regions of different (foreground) galaxy number densities and their weak lensing signal, measured by the shape distortion of background galaxies. Aims. In this paper, we reconsider density split statistics by constructing a new angular filter function that is adapted to the expected relation between galaxy number density and shear pattern, such that the filter weighting the galaxy number density is matched to the filter used to quantify the shear signal. Methods. We use the results of numerical ray-tracing simulations, specifically through the Millennium simulation supplemented by a galaxy distribution based on a semi-analytic model, to construct a matched pair of adapted filter functions for the galaxy density and the tangential shear signal. We compare the performance of our new filter to the previously used top-hat filter, applying both to a different and independent set of numerical simulations (SLICS, cosmo-SLICS). Results. We show that the adapted filter yields a better correlation between the total matter and the galaxy distribution. Furthermore, the adapted filter provides a larger signal-to-noise ratio for constraining the bias between the total matter and the galaxy distribution, and we show that it is, in general, a more sensitive discriminator between different cosmologies, with the exception of cosmologies with very large $\sigma_8$ values. All analyses lead to the conclusion that our adapted filter should be favored in future density split statistics work.
|
arxiv:2006.10778
|
In [A. Neri, P. Santonastaso, F. Zullo, Extending two families of maximum rank distance codes], the authors extended the family of $2$-dimensional $\mathbb{F}_{q^{2t}}$-linear MRD codes recently found in [G. Longobardi, G. Marino, R. Trombetti, Y. Zhou, A large family of maximum scattered linear sets of $\mathrm{PG}(1,q^n)$ and their associated MRD codes]. Also, for $t \geq 5$ they determined the equivalence classes of the elements in this new family and provided the exact number of inequivalent codes in it. In this article, we complete the study of the equivalence issue, removing the restriction $t \geq 5$. Moreover, we prove that in the case $t = 4$, the linear sets of the projective line $\mathrm{PG}(1,q^8)$ ensuing from codes in the relevant family are not equivalent to any known so far.
|
arxiv:2208.09701
|
We demonstrate the existence of an anomaly-induced inhomogeneous phase in a class of vector-like gauge theories without a sign problem, thus disproving the long-standing conjecture that the absence of a sign problem precludes spontaneous breaking of translational invariance. The presence of the phase in the two-color modification of quantum chromodynamics can be tested by an independent nonperturbative evaluation of the neutral pion decay constant as a function of external magnetic field. Our results provide a benchmark for future lattice studies of inhomogeneous phases in dense quark matter.
|
arxiv:1902.07522
|
Federated learning is widely discussed as a distributed machine learning concept with stress on preserving data privacy. Various structures of federated learning have been proposed. Centralized federated learning, for instance, has been the primary structure that suits cloud computing. Decentralized federated learning has also been proposed for ecosystems where communication is dominantly peer-to-peer. Semi-decentralized federated learning (SDFL) has emerged recently as a new concept in which the interconnected nodes are clustered and each cluster is managed independently. The potential of SDFL lies in its clustering feature, which distributes the load of the global model update across multiple nodes. Since the concept is fairly new, much can be done to render this FL model a reliable, efficient, and real-time service at the edge. In this paper, we propose SDFLMQ, a semi-decentralized federated learning framework at the edge that uses MQTT as the communication protocol. We demonstrate how the publish/subscribe communication model is used to facilitate clustering and load balancing in SDFL. We also demonstrate how SDFLMQ can use some of the core MQTT features to expand its capacity at no significant cost. Based on some primary evaluations, we demonstrate how SDFLMQ can efficiently distribute the load of aggregation and potentially save unnecessary memory allocation, all with no requirement for a powerful central unit for aggregation and global model updates. We also disclose some of the key future expansions of SDFLMQ, with a focus on the operation of large deep neural network models at the edge.
|
arxiv:2503.13624
|
In this work we present a gauge principle that starts with the momentum space representation of the position operator ($\hat{x}_i = i\hbar \frac{\partial}{\partial p_i}$) rather than with the position space representation of the momentum operator ($\hat{p}_i = -i\hbar \frac{\partial}{\partial x_i}$). We discuss some simple examples of this new type of gauge theory: (i) analogs of solutions from ordinary gauge theory in this momentum gauge theory, (ii) Landau levels using momentum gauge fields, (iii) the emergence of non-commutative space-times from the momentum gauge fields. We find that the non-commutative space-time parameter can be momentum dependent, and one can construct a model where space-time is commutative at low momentum but becomes non-commutative at high momentum.
|
arxiv:2206.02638
|
We compute the bordered Floer homology CFDD of the $(2,2n)$-torus link complement, and discuss assorted examples and type-DD structure homotopy equivalences.
|
arxiv:1311.2288
|
that it can safely store values between -(2^31 - 1) and 2^31 - 1, but it may not assume that the range is not larger.

=== Long long ===
In the C99 version of the C programming language and the C++11 version of C++, a long long type is supported that has double the minimum capacity of the standard long. This type is not supported by compilers that require C code to be compliant with the previous C++ standard, C++03, because the long long type did not exist in C++03. For an ANSI/ISO compliant compiler, the minimum requirements for the specified ranges, that is, -(2^63 - 1) to 2^63 - 1 for signed and 0 to 2^64 - 1 for unsigned, must be fulfilled; however, extending this range is permitted. This can be an issue when exchanging code and data between platforms, or when doing direct hardware access. Thus, there are several sets of headers providing platform-independent exact-width types. The C standard library provides stdint.h; this was introduced in C99 and C++11.

== Syntax ==
Integer literals can be written as regular Arabic numerals, consisting of a sequence of digits and with negation indicated by a minus sign before the value. However, most programming languages disallow the use of commas or spaces for digit grouping. Examples of integer literals are:

42
10000
-233000

There are several alternate methods for writing integer literals in many programming languages. Many programming languages, especially those influenced by C, prefix an integer literal with 0x or 0X to represent a hexadecimal value, e.g. 0xDEADBEEF. Other languages may use a different notation, e.g. some assembly languages append an h or H to the end of a hexadecimal value. Perl, Ruby, Java, Julia, D, Go, C#, Rust, Python (starting from version 3.6), and PHP (from version 7.4.0 onwards) allow embedded underscores for clarity, e.g. 10_000_000, and fixed-form Fortran ignores embedded spaces in integer literals. C (starting from C23) and C++ use single quotes for this purpose. In C and C++, a leading zero indicates an octal value, e.g. 0755. This was primarily intended to be used with Unix modes; however, it
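The literal notations above can be demonstrated concretely. The following sketch uses Python, which supports the 0x hexadecimal prefix and underscore digit grouping described in the text; note that Python writes octal as 0o755 rather than C's bare leading zero, which is a syntax error in Python 3.

```python
# Integer literal notations in Python (the variable names are illustrative).
hex_value = 0xDEADBEEF   # hexadecimal literal with the 0x prefix
grouped = 10_000_000     # embedded underscores for digit grouping (Python >= 3.6)
octal_mode = 0o755       # octal literal; C and C++ would write this as 0755

print(hex_value)   # 3735928559
print(grouped)     # 10000000
print(octal_mode)  # 493
```

All three are ordinary integers once parsed; the prefixes and underscores affect only the source-code notation, not the stored value.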
|
https://en.wikipedia.org/wiki/Integer_(computer_science)
|
Software engineering educators are continually challenged by rapidly evolving concepts, technologies, and industry demands. Due to the omnipresence of software in a digitalized society, higher education institutions (HEIs) have to educate students such that they learn how to learn, and such that they are equipped with profound basic knowledge and with the latest knowledge about modern software and system development. Since industry demands change constantly, HEIs are challenged to meet such current and future demands in a timely manner. This paper analyzes the current state of practice in software engineering education. Specifically, we want to compare contemporary education with industrial practice to understand whether the frameworks, methods, and practices for software and system development taught at HEIs reflect industrial practice. For this, we conducted an online survey and collected information about 67 software engineering courses. Our findings show that development approaches taught at HEIs quite closely reflect industrial practice. We also found that the choice of which process to teach is sometimes driven by the wish to make a course successful. Especially when this happens in project courses, it could be beneficial to put more emphasis on building learning sequences with other courses.
|
arxiv:2101.08432
|
$f : n \to m$ defined by $f(i) = j \iff i \in B_j$ is a rigid surjection.

== See also ==
Uniqueness theorem
Structural rigidity, a mathematical theory describing the degrees of freedom of ensembles of rigid physical objects connected together by flexible hinges.
Level structure (algebraic geometry)

== References ==
This article incorporates material from rigid on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
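As a small illustration of the map defined above (the blocks below are hypothetical example data, not from the article): given a partition of $n$ into blocks $B_j$, the rule $f(i) = j \iff i \in B_j$ determines a surjection onto the block indices.

```python
# Hypothetical partition of {0,...,5} into blocks B_0, B_1, B_2.
blocks = {0: {0, 3}, 1: {1, 4}, 2: {2, 5}}

def f(i):
    # f(i) = j iff i belongs to block B_j
    return next(j for j, b in blocks.items() if i in b)

print([f(i) for i in range(6)])  # [0, 1, 2, 0, 1, 2]
```

Whether such a surjection is *rigid* depends on the ordering condition on the blocks discussed earlier in the article; the sketch only shows how the underlying map is determined by the partition.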
|
https://en.wikipedia.org/wiki/Rigidity_(mathematics)
|
We reformulate the singularity confinement of the discrete Toda equation. We prove the co-primeness property, which was introduced in our previous paper (arXiv:1311.0060) as one of the integrability criteria, for the discrete Toda equation. We study three types of boundary conditions (semi-infinite, molecule, periodic) for the discrete Toda equation, and prove that the same co-primeness property holds for all types of boundaries. (v2: typos corrected; final version to appear in J. Math. Phys.)
|
arxiv:1412.1167
|
The Grothendieck-Serre conjecture predicts that every generically trivial torsor under a reductive group scheme $G$ over a regular local ring $R$ is trivial. The mixed characteristic case of the conjecture is widely open. We consider the following setup. Let $A$ be a mixed characteristic DVR, $G$ a reductive group scheme over $A$, $X$ an irreducible smooth projective $A$-scheme, and $\mathcal{G}$ a principal $G$-bundle over $X$. Suppose $\mathcal{G}$ is generically trivial. We prove that in this case $\mathcal{G}$ is Zariski locally trivial. This result confirms the conjecture.
|
arxiv:2302.02842
|
Learning effective representations for Chinese characters presents unique challenges, primarily due to the vast number of characters and their continuous growth, which requires models to handle an expanding category space. Additionally, the inherent sparsity of character usage complicates the generalization of learned representations. Prior research has explored radical-based sequences to overcome these issues, achieving progress in recognizing unseen characters. However, these approaches fail to fully exploit the inherent tree structure of such sequences. To address these limitations and leverage established data properties, we propose Formation Tree-CLIP (FT-CLIP). This model utilizes formation trees to represent characters and incorporates a dedicated tree encoder, significantly improving performance in both seen and unseen character recognition tasks. We further introduce masking for both character images and tree nodes, enabling efficient and effective training. This approach accelerates training significantly (by a factor of 2 or more) while enhancing accuracy. Extensive experiments show that processing characters through formation trees aligns better with their inherent properties than direct sequential methods, significantly enhancing the generality and usability of the representations.
|
arxiv:2404.12693
|
This is a list of atheists in science and technology. A statement by a living person that he or she does not believe in God is not a sufficient criterion for inclusion in this list. Persons in this list are people (living or not) who both have publicly identified themselves as atheists and whose atheism is relevant to their notable activities or public life.

== A ==
Scott Aaronson (1981-): American theoretical computer scientist and professor at the University of Texas at Austin. His primary area of research is quantum computing and computational complexity theory.
Ernst Abbe (1840-1905): German physicist, optometrist, entrepreneur, and social reformer. Together with Otto Schott and Carl Zeiss, he laid the foundation of modern optics. Abbe developed numerous optical instruments. He was a co-owner of Carl Zeiss AG, a German manufacturer of research microscopes, astronomical telescopes, planetariums, and other optical systems.
Fay Ajzenberg-Selove (1926-2012): American nuclear physicist who was known for her experimental work in nuclear spectroscopy of light elements, and for her annual reviews of the energy levels of light atomic nuclei. She was a recipient of the 2007 National Medal of Science.
Jean le Rond d'Alembert (1717-1783): French mathematician, mechanician, physicist, philosopher, and music theorist. He was also co-editor with Denis Diderot of the Encyclopédie.
Zhores Alferov (1930-2019): Belarusian, Soviet, and Russian physicist who contributed substantially to the creation of modern heterostructure physics and electronics. He is an inventor of the heterotransistor and co-winner (with Herbert Kroemer and Jack Kilby) of the 2000 Nobel Prize in Physics.
Hannes Alfvén (1908-1995): Swedish electrical engineer and plasma physicist. He received the 1970 Nobel Prize in Physics for his work on magnetohydrodynamics (MHD). He is best known for describing the class of MHD waves now known as Alfvén waves.
Jim Al-Khalili OBE (1962-): Iraqi-born British quantum physicist, author, and science communicator. He is professor of theoretical physics and chair in the public engagement in science at the University of Surrey.
Philip W. Anderson (1923-2020): American physicist. He was one of the recipients of the Nobel Prize in Physics in 1977. Anderson has made contributions to the theories of localization,
|
https://en.wikipedia.org/wiki/List_of_atheists_in_science_and_technology
|
Pre-training techniques significantly enhance the performance of semantic segmentation tasks with limited training data. However, their efficacy under a large domain gap between pre-training (e.g. RGB) and fine-tuning (e.g. infrared) remains underexplored. In this study, we first benchmark the infrared semantic segmentation performance of various pre-training methods and reveal several phenomena distinct from the RGB domain. Next, our layerwise analysis of pre-trained attention maps uncovers that: (1) there are three typical attention patterns (local, hybrid, and global); (2) pre-training tasks notably influence the pattern distribution across layers; (3) the hybrid pattern is crucial for semantic segmentation, as it attends to both nearby and foreground elements; (4) texture bias impedes model generalization in infrared tasks. Building on these insights, we propose UNIP, a UNified Infrared Pre-training framework, to enhance the pre-trained model performance. This framework uses the hybrid-attention distillation NMI-HAD as the pre-training target, a large-scale mixed dataset InfMix for pre-training, and a last-layer feature pyramid network LL-FPN for fine-tuning. Experimental results show that UNIP outperforms various pre-training methods by up to 13.5% in average mIoU on three infrared segmentation tasks, evaluated using fine-tuning and linear probing metrics. UNIP-S achieves performance on par with MAE-L while requiring only 1/10 of the computational cost. Furthermore, UNIP significantly surpasses state-of-the-art (SOTA) infrared or RGB segmentation methods and demonstrates broad potential for application in other modalities, such as RGB and depth. Our code is available at https://github.com/casiatao/UNIP.
|
arxiv:2502.02257
|
i propose a new class of interpretations, { \ it real world interpretations }, of the quantum theory of closed systems. these interpretations postulate a preferred factorization of hilbert space and preferred projective measurements on one factor. they give a mathematical characterisation of the different possible worlds arising in an evolving closed quantum system, in which each possible world corresponds to a ( generally mixed ) evolving quantum state. in a realistic model, the states corresponding to different worlds should be expected to tend towards orthogonality as different possible quasiclassical structures emerge or as measurement - like interactions produce different classical outcomes. however, as the worlds have a precise mathematical definition, real world interpretations need no definition of quasiclassicality, measurement, or other concepts whose imprecision is problematic in other interpretational approaches. it is natural to postulate that precisely one world is chosen randomly, using the natural probability distribution, as the world realised in nature, and that this world ' s mathematical characterisation is a complete description of reality.
|
arxiv:0708.3710
|
in response to the need for the astro2020 decadal survey to explicitly engage early career astronomers, the national academies of sciences, engineering, and medicine hosted the early career astronomer and astrophysicist focus session ( ecfs ) on october 8 - 9, 2018 under the auspices of committee of astronomy and astrophysics. the meeting was attended by fifty six pre - tenure faculty, research scientists, postdoctoral scholars, and senior graduate students, as well as eight former decadal survey committee members, who acted as facilitators. the event was designed to educate early career astronomers about the decadal survey process, to solicit their feedback on the role that early career astronomers should play in astro2020, and to provide a forum for the discussion of a wide range of topics regarding the astrophysics career path. this white paper presents highlights and themes that emerged during two days of discussion. in section 1, we discuss concerns that emerged regarding the coming decade and the astrophysics career path, as well as specific recommendations from participants regarding how to address them. we have organized these concerns and suggestions into five broad themes. these include ( sequentially ) : ( 1 ) adequately training astronomers in the statistical and computational techniques necessary in an era of " big data ", ( 2 ) responses to the growth of collaborations and telescopes, ( 3 ) concerns about the adequacy of graduate and postdoctoral training, ( 4 ) the need for improvements in equity and inclusion in astronomy, and ( 5 ) smoothing and facilitating transitions between early career stages. section 2 is focused on ideas regarding the decadal survey itself, including : incorporating early career voices, ensuring diverse input from a variety of stakeholders, and successfully and broadly disseminating the results of the survey.
|
arxiv:1907.01676
|
the field of exoplanetary science is making rapid progress both in statistical studies of exoplanet properties as well as in individual characterization. as space missions provide an emerging picture of formation and evolution of exoplanetary systems, the search for habitable worlds becomes one of the fundamental issues to address. to tackle such a complex challenge, we need to specify the conditions favorable for the origin, development and sustainment of life as we know it. this requires the understanding of global ( astrospheric ) and local ( atmospheric, surface and internal ) environments of exoplanets in the framework of the physical processes of the interaction between evolving planet - hosting stars along with exoplanetary evolution over geological timescales, and the resulting impact on climate and habitability of exoplanets. feedbacks between astrophysical, physico - chemical atmospheric and geological processes can only be understood through interdisciplinary studies with the incorporation of progress in heliophysics, astrophysics, planetary, earth sciences, astrobiology, and the origin of life communities. the assessment of the impacts of host stars on the climate and habitability of terrestrial ( exo ) planets and potential exomoons around them may significantly modify the extent and the location of the habitable zone and provide new directions for searching for signatures of life. thus, characterization of stellar ionizing outputs becomes an important task for further understanding the extent of habitability in the universe. the goal of this white paper is to identify and describe promising key research goals to aid the theoretical characterization and observational detection of ionizing radiation from quiescent and flaring upper atmospheres of planet hosts as well as properties of stellar coronal mass ejections and stellar energetic particle events.
|
arxiv:1903.06853
|
there is inherent information captured in the order in which we write words in a list. the orderings of binomials - - - lists of two words separated by ` and ' or ` or ' - - - have been studied for more than a century. these binomials are common across many areas of speech, in both formal and informal text. in the last century, numerous explanations have been given to describe what order people use for these binomials, from differences in semantics to differences in phonology. these rules describe primarily ` frozen ' binomials that exist in exactly one ordering, and they have lacked large - scale trials to determine their efficacy. online text provides a unique opportunity to study these lists in the context of informal text at a very large scale. in this work, we expand the view of binomials to include a large - scale analysis of both frozen and non - frozen binomials in a quantitative way. using this data, we then demonstrate that most previously proposed rules are ineffective at predicting binomial ordering. by tracking the order of these binomials across time and communities, we are able to establish additional, unexplored dimensions central to these predictions. expanding beyond the question of individual binomials, we also explore the global structure of binomials in various communities, establishing a new model for these lists and analyzing this structure for non - frozen and frozen binomials. additionally, a novel analysis of trinomials - - - lists of length three - - - suggests that none of the binomial analysis applies in these cases. finally, we demonstrate how large data sets gleaned from the web can be used in conjunction with older theories to expand and improve on old questions.
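the frozen / non - frozen distinction above can be illustrated with a small python sketch. the regex, the toy corpus, and the one - ordering test for frozenness are illustrative assumptions here, not the paper's actual pipeline:

```python
import re
from collections import Counter

def binomial_counts(text):
    """Count each observed ordering of an 'X and Y' binomial."""
    counts = Counter()
    for a, b in re.findall(r"\b(\w+) and (\w+)\b", text.lower()):
        counts[(a, b)] += 1
    return counts

def frozen_pairs(counts):
    """A pair is a frozen-binomial candidate if only one of its
    two orderings ever occurs in the corpus."""
    return {tuple(sorted(p)) for p in counts if (p[1], p[0]) not in counts}

corpus = "salt and pepper soup. bread and butter. pepper and salt chips."
print(frozen_pairs(binomial_counts(corpus)))  # {('bread', 'butter')}
```

at web scale the same two counters separate truly frozen pairs from pairs whose preferred order merely dominates.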
|
arxiv:2003.03612
|
in this work, we study in theory the light - focusing behavior of a sub - micron si hemispherical nanolens. results show that the width and depth of the focal spot at 405 nm can reach 42 nm ( approximately { \ lambda } / 10 ) and 20 nm ( { \ lambda } / 20 ), respectively. theoretical analysis indicates that this nano - focusing phenomenon has two origins : the high refractive index of si, and the sub - micron size of the lens, which considerably decreases the influence of material losses. the focusing capability of the si nanolens is comparable with the current euv technique but at a lower cost, providing an alternative approach towards super - resolution photolithography and optical microscopy.
|
arxiv:2008.12054
|
sinai ' s walk is a recurrent one - dimensional nearest - neighbor random walk in random environment. it is known for a phenomenon of strong localization, namely, the walk spends almost all time at or near the bottom of deep valleys of the potential. our main result shows a weakness of this localization phenomenon : with probability one, the zones where the walk stays for the most time can be far away from the sites where the walk spends the most time. in particular, this gives a negative answer to a problem of erd \ h { o } s and r \ ' { e } v \ ' { e } sz [ mathematical structures - - computational mathematics - - mathematical modelling 2 ( 1984 ) 152 - - 157 ], originally formulated for the usual homogeneous random walk.
|
arxiv:math/0606376
|
feynman perturbation theory for nonabelian gauge theory in the light - like gauge is investigated. a lattice along two space - like directions is used as a gauge - invariant ultraviolet regularization. to preserve the polynomiality of the action, we use as independent variables arbitrary ( non - unitary ) matrices associated with the links of the lattice. the action of the theory is chosen so as to preserve as much as possible the rotational invariance that remains after introduction of the lattice, as well as to make the superfluous degrees of freedom vanish in the limit of removing the regularization. feynman perturbation theory is constructed, and diagrams which do not contain ultraviolet divergences are analyzed. the scheme of renormalization of this theory is discussed.
|
arxiv:1009.2238
|
the fundamental parameters of reddening, metallicity, age, and distance are presented for the poorly studied open clusters be ~ 89, ru ~ 135, and be ~ 10, derived from their ccd ubvri photometry. by fitting the appropriate isochrones to the observed sequences of the clusters in five different color - - magnitude diagrams, the weighted averages of distance moduli and heliocentric distances ( $ ( v _ 0 $ - - $ m _ { v } ), d $ ( kpc ) ) are $ ( 11 \ fm90 \ pm 0 \ fm06, 2. 4 \ pm 0. 06 $ ) for be ~ 89, $ ( 9 \ fm58 \ pm 0 \ fm07, 0. 81 \ pm 0. 03 $ ) for ru ~ 135, and $ ( 11 \ fm16 \ pm 0 \ fm06, 1. 7 \ pm 0. 05 $ ) for be ~ 10, and the weighted averages of the ages $ ( \ log ( a ), a $ ( gyr ) ) are $ ( 9. 58 \ pm 0. 06, 3. 8 \ pm 0. 6 ) $ for be ~ 89, $ ( 9. 58 \ pm 0. 06, 3. 8 \ pm 0. 7 ) $ for ru ~ 135, and $ ( 9. 06 \ pm 0. 05, 1. 08 \ pm 0. 08 ) $ for be ~ 10.
|
arxiv:1008.2867
|
there is an apparent power deficit relative to the $ \ lambda $ cdm prediction of the cmb spectrum at large scales, which, though not yet statistically significant, persists from wmap to planck data. proposals that invoke some form of initial condition for the inflation have been made to address this apparent power suppression, albeit with conflicting conclusions. by studying the curvature perturbations of a scalar field in the flrw universe parameterized by the equation of state parameter $ w $, we find that the large - scale spectrum at the end of inflation reflects the super - horizon spectrum of the initial state. the large - scale spectrum is suppressed if the universe begins with the adiabatic vacuum in a super - inflation ( $ w < - 1 $ ) or positive - pressure ( $ w > 0 $ ) era. in the latter case, there is however no causal mechanism to establish the initial adiabatic vacuum. on the other hand, as long as the universe begins with the adiabatic vacuum in an era with $ - 1 < w < 0 $, even if there exists an intermediate positive - pressure era, the large - scale spectrum would be enhanced rather than suppressed. we further calculate the spectrum of a two - stage inflation model with a two - field potential and show that the result agrees with that obtained from the ad hoc single - field analysis.
|
arxiv:1505.05980
|
converting a compressed format of a string into another compressed format without an explicit decompression is one of the central research topics in string processing. we discuss the problem of converting the run - length burrows - wheeler transform ( rlbwt ) of a string to lempel - ziv 77 ( lz77 ) phrases of the reversed string. the first results with policriti and prezza ' s conversion algorithm [ algorithmica 2018 ] were $ o ( n \ log r ) $ time and $ o ( r ) $ working space for length of the string $ n $, number of runs $ r $ in the rlbwt, and number of lz77 phrases $ z $. recent results with kempa ' s conversion algorithm [ soda 2019 ] are $ o ( n / \ log n + r \ log ^ { 9 } n + z \ log ^ { 9 } n ) $ time and $ o ( n / \ log _ { \ sigma } n + r \ log ^ { 8 } n ) $ working space for the alphabet size $ \ sigma $ of the rlbwt. in this paper, we present a new conversion algorithm by improving policriti and prezza ' s conversion algorithm where dynamic data structures for general purpose are used. we argue that these dynamic data structures can be replaced and present new data structures for faster conversion. the time and working space of our conversion algorithm with new data structures are $ o ( n \ min \ { \ log \ log n, \ sqrt { \ frac { \ log r } { \ log \ log r } } \ } ) $ and $ o ( r ) $, respectively.
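for readers unfamiliar with lz77 phrases, the factorization itself can be stated in a few lines of python. this is a quadratic - time reference implementation for illustration only ; the point of the algorithms above is to produce the same phrases from the rlbwt in compressed space, without ever materializing the string:

```python
def lz77_phrases(s):
    """Greedy LZ77 factorization: each phrase copies the longest earlier
    occurrence of the remaining suffix, then adds one fresh character.
    Returns (source_position, copy_length, next_char) triples."""
    phrases, i, n = [], 0, len(s)
    while i < n:
        best_len, best_pos = 0, -1
        for j in range(i):  # try every earlier starting position
            l = 0
            while i + l < n - 1 and s[j + l] == s[i + l]:
                l += 1
            if l > best_len:
                best_len, best_pos = l, j
        phrases.append((best_pos, best_len, s[i + best_len]))
        i += best_len + 1
    return phrases

def lz77_decode(phrases):
    """Invert the factorization; overlapping (self-referential) copies
    are handled by copying one symbol at a time."""
    out = []
    for pos, length, c in phrases:
        for k in range(length):
            out.append(out[pos + k])
        out.append(c)
    return "".join(out)
```

for example, lz77_phrases("aaaa") yields just two phrases, which is why highly repetitive strings have a phrase count z far smaller than n.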
|
arxiv:1902.05224
|
we have obtained ( warped ) ads black hole solutions in the three dimensional extended new massive gravity. we investigate some properties of black holes and obtain central charges of the two dimensional dual cft. to obtain the central charges, we use the relation between entropy and temperature according to the ads / cft dictionary. for ads black holes, one can also use the central charge function formalism which leads to the same results.
|
arxiv:1005.1619
|
physical adversarial patches have emerged as a key adversarial attack causing misclassification in traffic sign recognition ( tsr ) systems in the real world. however, existing adversarial patches have poor stealthiness and attack all vehicles indiscriminately once deployed. in this paper, we introduce an invisible and triggered physical adversarial patch ( itpatch ) with a novel attack vector, i. e., fluorescent ink, to advance the state of the art. after carefully designed fluorescent perturbations are applied to a target sign, an attacker can later trigger a fluorescent effect using invisible ultraviolet light, causing the tsr system to misclassify the sign and potentially resulting in traffic accidents. we conducted a comprehensive evaluation to investigate the effectiveness of itpatch, which achieves a success rate of 98. 31 % in low - light conditions. furthermore, our attack successfully bypasses five popular defenses and achieves a success rate of 96. 72 %.
|
arxiv:2409.12394
|
many of today ' s deep neural network accelerators, e. g., google ' s tpu and nvidia ' s tensor core, are built around accelerating the general matrix multiplication ( i. e., gemm ). however, supporting convolution on gemm - based accelerators is not trivial. the naive method explicitly lowers the convolution to gemm, commonly known as im2col, which introduces significant performance and memory overhead. existing implicit im2col algorithms require unscalable hardware and are inefficient in supporting important convolution variants such as strided convolution. in this paper, we propose a memory - efficient and hardware - friendly implicit im2col algorithm used by google ' s tpu, which dynamically converts a convolution into a gemm with practically zero performance and memory overhead, fully unleashing the power of gemm engines. through comprehensive experimental results, we quantitatively argue that this algorithm has been adopted in commercial closed - source platforms, and we are the first to describe its high - level idea and implementation details. finally, we show that our algorithm can also be generally applied to nvidia ' s tensor cores ( tc ), matching and out - performing the measured performance on tcs.
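the explicit lowering that the abstract calls im2col can be sketched in plain python for a single input channel with no padding. this is the naive method whose duplication overhead the paper avoids, shown only to make the convolution - as - gemm idea concrete ; every interior pixel is copied into up to kh * kw patch rows:

```python
def im2col(x, kh, kw, stride=1):
    """Lower a 2D input (list of rows) to a patch matrix:
    one row per kh x kw sliding window, flattened row-major."""
    h, w = len(x), len(x[0])
    return [
        [x[i + di][j + dj] for di in range(kh) for dj in range(kw)]
        for i in range(0, h - kh + 1, stride)
        for j in range(0, w - kw + 1, stride)
    ]

def conv_as_gemm(x, kernel, kh, kw, stride=1):
    """Convolution becomes one matrix product: with a single output
    channel it is a dot product of the flattened kernel with each patch."""
    flat_k = [kernel[di][dj] for di in range(kh) for dj in range(kw)]
    return [sum(a * b for a, b in zip(flat_k, patch))
            for patch in im2col(x, kh, kw, stride)]

x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
k = [[1, 0], [0, 1]]
print(conv_as_gemm(x, k, 2, 2))  # [6, 8, 12, 14]
```

a strided variant only changes the loop bounds here, which is exactly the convolution variant the abstract notes existing implicit schemes support inefficiently.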
|
arxiv:2110.03901
|
in recent years, deep learning has shown impressive performance on many tasks. however, recent research has shown that deep learning systems are vulnerable to small, specially crafted perturbations that are imperceptible to humans. images with such perturbations are so - called adversarial examples, which have proven to be an indisputable threat to dnn - based applications. the lack of a better understanding of dnns has prevented the development of efficient defenses against adversarial examples. in this paper, we propose a two - stream architecture to protect cnns from attack by adversarial examples. our model draws on the idea of the " two - stream " design commonly used in the security field, and successfully defends against different kinds of attack methods through the differences between " high - resolution " and " low - resolution " networks in feature extraction. we provide a reasonable interpretation of why our two - stream architecture is difficult to defeat, and show experimentally that our method is hard to defeat with state - of - the - art attacks. we demonstrate that our two - stream architecture is robust to adversarial examples built by currently known attacking algorithms.
|
arxiv:1912.12859
|
using a combination of analytical techniques and quantum monte carlo simulations we investigate the coupled spin ladder system lacuo2. 5. at a critical ratio of the interladder to intraladder coupling ( j ' / j ) _ c \ approx 0. 11 we find a quantum phase transition between a neel ordered and a disordered state. at criticality the uniform susceptibility behaves as \ chi ( t ) = at ^ 2 with a universal prefactor. at intermediate temperatures the system crosses over to a ` ` decoupled ladders regime ' ' with pseudo - gap type behavior, similar to uncoupled ladders. this can explain the gap - like experimental data for the magnetic susceptibility of lacuo2. 5 despite the presence of the long range neel order.
|
arxiv:cond-mat/9606089
|
we present here the weak gravitational lensing detection of four nearby galaxy clusters in the southern sky : abell 2029, abell 85, abell 1606 and abell 2457. the weak lensing detections of abell 1606 and abell 2457 are the first in the literature. this work capitalizes on the wide field of view of the dark energy camera at the cerro tololo inter - american observatory, which we use to obtain deep, multi - wavelength imaging of all targets. we publish maps of the clusters ' projected mass distributions, and obtain the $ m _ { 200 } $ of their clusters through nfw profile fits to the two - dimensional tangential ellipticity signal.
|
arxiv:1812.08356
|
camera and 3d lidar sensors have become indispensable devices in modern autonomous driving vehicles, where the camera provides the fine - grained texture, color information in 2d space and lidar captures more precise and farther - away distance measurements of the surrounding environments. the complementary information from these two sensors makes the two - modality fusion be a desired option. however, two major issues of the fusion between camera and lidar hinder its performance, \ ie, how to effectively fuse these two modalities and how to precisely align them ( suffering from the weak spatiotemporal synchronization problem ). in this paper, we propose a coarse - to - fine lidar and camera fusion - based network ( termed as lif - seg ) for lidar segmentation. for the first issue, unlike these previous works fusing the point cloud and image information in a one - to - one manner, the proposed method fully utilizes the contextual information of images and introduces a simple but effective early - fusion strategy. second, due to the weak spatiotemporal synchronization problem, an offset rectification approach is designed to align these two - modality features. the cooperation of these two components leads to the success of the effective camera - lidar fusion. experimental results on the nuscenes dataset show the superiority of the proposed lif - seg over existing methods with a large margin. ablation studies and analyses demonstrate that our proposed lif - seg can effectively tackle the weak spatiotemporal synchronization problem.
|
arxiv:2108.07511
|
motivated by questions about the open - system dynamics of topological quantum matter, we investigated the quantum brownian motion of an electron in a homogeneous magnetic field. when the fermi length $ l _ f = \ hbar / ( v _ fm _ { \ text { eff } } ) $ becomes much longer than the magnetic length $ l _ b = ( \ hbar c / eb ) ^ { 1 / 2 } $, then the spatial coordinates $ x, y $ of the electron cease to commute, $ [ x, y ] = il _ b ^ 2 $. as a consequence, localization of the electron becomes limited by heisenberg uncertainty, and the linear bath - electron coupling becomes unconventional. moreover, because the kinetic energy of the electron is quenched by the strong magnetic field, the electron has no energy to give to or take from the bath, and so the usual connection between frictional forces and dissipation no longer holds. these two features make quantum brownian motion topological, in the regime $ l _ f \ gg l _ b $, which is at the verge of current experimental capabilities. we model topological quantum brownian motion in terms of an unconventional operator langevin equation derived from first principles, and solve this equation with the aim of characterizing diffusion. while diffusion in the noncommutative plane turns out to be conventional, with the mean displacement squared being proportional to $ t ^ \ alpha $ and $ \ alpha = 1 $, there is an exotic regime for the proportionality constant in which it is directly proportional to the friction coefficient and inversely proportional to the square of the magnetic field : in this regime, friction helps diffusion and the magnetic field suppresses all fluctuations. we also show that quantum tunneling can be completely suppressed in the noncommutative plane for suitably designed metastable potential wells, a feature that might be worth exploiting for storage and protection of quantum information.
|
arxiv:1602.00694
|
we discuss several combinatorial problems that arise when one looks at computational algorithms for highly symmetric networks of processors. more specifically, we are interested in minimal times associated with four communication tasks ( defined more precisely below ) : universal broadcast, every processor has a vector that it wishes to broadcast to all the others ; universal accumulation, every processor wishes to receive the sum of all the vectors being sent to it by all the other processors ; universal exchange, every processor wishes to exchange a vector with each other processor ; and global summation, every processor wants the sum of the vectors in all the processors
|
arxiv:1305.6349
|
a phenomenological theory is developed, that accounts for the collective dynamics of a bose - einstein condensate of magnons. in terms of such description we discuss the nature of spontaneous macroscopic interference between magnon clouds, highlighting the close relation between such effects and the well known josephson effects. using those ideas we present a detailed calculation of the josephson oscillations between two magnon clouds, spatially separated in a magnonic josephson junction.
|
arxiv:1305.4285
|
the bethe - salpeter equation ( bse ) is the workhorse method to study excitons in materials. the size of the bse hamiltonian, that is how many valence to conduction band transitions are considered in those calculations, needs to be chosen to be sufficiently large to converge excitons ' energies and wavefunctions but should be minimized to make calculations tractable, as bse calculations scale with the number of atoms as $ ( n _ { \ rm { atoms } } ^ 6 ) $. in particular, in the case of supercell ( sc ) calculations composed of $ n _ { \ rm { rep } } $ replicas of the primitive cell ( pc ), a natural choice to build this bse hamiltonian is to include all transitions from pc calculations by zone folding. however, this greatly increases the size of the bse hamiltonian, as the number of matrix elements in it is $ ( n _ k n _ c n _ v ) ^ 2 $, where $ n _ k $ is the number of $ k $ - points, and $ n _ { c ( v ) } $ is the number of conduction ( valence ) states. the number of $ k $ - points decreases by a factor $ n _ { \ rm { rep } } $ but both the number of conduction and valence states increase by the same factor, therefore the bse hamiltonian increases by a factor $ n _ { \ rm { rep } } ^ 2 $, making exactly corresponding calculations prohibitive. here we provide an analysis to decide how many transitions are necessary to achieve comparable results. with our method, we show that to converge with an energy tolerance of 0. 1 ev the first exciton binding energy of a lif sc composed of 64 pcs, we only need 12 \ % of the valence to conduction transitions that are given by zone folding. we also show that exciton energies are much harder to converge than random phase approximation transition energies, underscoring the necessity of careful convergence studies. the procedure in our work helps in evaluating excitonic properties in large sc calculations such as defects, self - trapped excitons, polarons, and interfaces.
|
arxiv:2502.19396
|
we give shuffle algebra realization of positive part of quantum affine superalgebra $ u _ { v } ( \ widehat { \ mathfrak { d } } ( 2, 1 ; \ theta ) ) $ associated to any simple root systems. we also determine the shuffle algebra associated to $ \ widehat { \ mathfrak { sl } } ( 2 | 1 ) $ with odd root system when $ v $ is a primitive root of unity of even order, generalizing results in \ cite { fjmmt03 }.
|
arxiv:1909.12575
|
jointly integrating aspect ratio and context has been extensively studied and shown performance improvement in traditional object detection systems such as the dpms. it, however, has been largely ignored in deep neural network based detection systems. this paper presents a method of integrating a mixture of object models and region - based convolutional networks for accurate object detection. each mixture component accounts for both object aspect ratio and multi - scale contextual information explicitly : ( i ) it exploits a mixture of tiling configurations in the roi pooling to remedy the warping artifacts caused by a single type roi pooling ( e. g., with equally - sized 7 x 7 cells ), and to respect the underlying object shapes more ; ( ii ) it " looks from both the inside and the outside of a roi " by incorporating contextual information at two scales : global context pooled from the whole image and local context pooled from the surrounding of a roi. to facilitate accurate detection, this paper proposes a multi - stage detection scheme for integrating the mixture of object models, which utilizes the detection results of the model at the previous stage as the proposals for the current in both training and testing. the proposed method is called the aspect ratio and context aware region - based convolutional network ( arc - r - cnn ). in experiments, arc - r - cnn shows very competitive results with faster r - cnn [ 41 ] and r - fcn [ 10 ] on two datasets : the pascal voc and the microsoft coco. it obtains significantly better map performance using high iou thresholds on both datasets.
|
arxiv:1612.00534
|
what happens when fermions hop on a lattice with crystalline defects? the answer depends on topological quantum numbers which specify the action of lattice rotations and translations in the low energy theory. one can understand the topological quantum numbers as a twist of continuum gauge fields in terms of crystalline gauge fields. we find that disclinations and dislocations - - defects of crystalline symmetries - - generally lead in the continuum to a certain ` ` emanant ' ' quantized magnetic flux. to demonstrate these facts, we study in detail tight - binding models whose low - energy descriptions are ( 2 + 1 ) d dirac cones. our map from lattice to continuum defects explains the crystalline topological response to disclinations and dislocations, and motivates the fermion crystalline equivalence principle used in the classification of crystalline topological phases. when the gap closes, the presence of emanant flux leads to pair creation from the vacuum with the particles and anti - particles swirling around the defect. we compute the associated currents and energy density using the tools of defect conformal field theory. there is a rich set of renormalization group fixed points, depending on how particles scatter from the defect. at half flux, there is a defect conformal manifold leading to a continuum of possible low - energy theories. we present extensive numerical evidence supporting the emanant magnetic flux at lattice defects and we test our map between lattice and continuum defects in detail. we also point out a no - go result, which implies that a single ( 2 + 1 ) d dirac cone in symmetry class aii is incompatible with a commuting $ c _ m $ rotational symmetry with $ ( c _ m ) ^ m = + 1 $.
|
arxiv:2501.13866
|
in this work, we systematically investigate the efficacy of dynamic activation mechanisms within the llama family of language models. despite the potential of dynamic activation methods to reduce computation and increase speed in models using the relu activation function, our empirical findings have uncovered several inherent pitfalls in the current dynamic activation schemes. through extensive experiments across various dynamic activation strategies, we demonstrate that llama models usually underperform when compared to their relu counterparts, particularly in scenarios demanding high sparsity ratio. we attribute these deficiencies to a combination of factors : 1 ) the inherent complexity of dynamically predicting activation heads and neurons ; 2 ) the inadequate sparsity resulting from activation functions ; 3 ) the insufficient preservation of information resulting from kv cache skipping. our analysis not only sheds light on the limitations of dynamic activation in the context of large - scale llama models but also proposes roadmaps for enhancing the design of future sparsity schemes.
|
arxiv:2405.09274
|
representation learning ( rl ) methods for cyberattack detection face the diversity and sophistication of attack data, leading to the issue of mixed representations of different classes, particularly as the number of classes increases. to address this, the paper proposes a novel deep learning architecture / model called the twin auto - encoder ( tae ). tae first maps the input data into latent space and then deterministically shifts data samples of different classes further apart to create separable data representations, referred to as representation targets. tae ' s decoder then projects the input data into these representation targets. after training, tae ' s decoder extracts data representations. tae ' s representation target serves as a novel dynamic codeword, which refers to the vector that represents a specific class. this vector is updated after each training epoch for every data sample, in contrast to the conventional fixed codeword that does not incorporate information from the input data. we conduct extensive experiments on diverse cybersecurity datasets, including seven iot botnet datasets, two network ids datasets, three malware datasets, one cloud ddos dataset, and ten artificial datasets as the number of classes increases. tae boosts accuracy and f - score in attack detection by around 2 % compared to state - of - the - art models, achieving up to 96. 1 % average accuracy in iot attack detection. additionally, tae is well - suited for cybersecurity applications and potentially for iot systems, with a model size of approximately 1 mb and an average running time of around 2. 6e - 07 seconds for extracting a data sample.
|
arxiv:2403.15509
|
we propose a method to increase both the neutron storage time and the precision of its lifetime measurements by at least tenfold. the storage of ultracold neutrons ( ucn ) in material traps now provides the most accurate measurements of neutron lifetime and is used in many other experiments. the precision of these measurements is limited by the interaction of ucn with the trap walls. we show that covering trap walls with liquid helium may strongly decrease the ucn losses from material traps. $ ^ 4 $ he does not absorb neutrons at all. superfluid he covers the trap walls as a thin film, $ \ sim 10 $ nm thick, due to the van der waals attraction. however, this he film on a flat wall is too thin to protect the ucn from their absorption inside a trap material. by combining the van der waals attraction with capillary effects we show that surface roughness may increase the thickness of this film much beyond the neutron penetration depth $ \ sim 33 $ nm. using liquid he for ucn storage requires low temperature $ t < 0. 5 $ k to avoid neutron interaction with he vapor, while the neutron losses because of the interaction with surface waves are small and can be accounted for using their linear temperature dependence.
|
arxiv:2108.11246
|
we review some aspects of minimal cycles in string compactifications and their role in constructing new critical theories in six and lower dimensions as well as in accounting for black hole entropy. ( based on a talk presented at the salam memorial meeting, the abdus salam international center for theoretical physics, fall 1997 )
|
arxiv:hep-th/9805213
|
53, 790 for nine months, and summer tuition was $ 17, 800. financial support for graduate students is provided in large part by individual departments. they include fellowships, traineeships, teaching and research assistantships, and loans. the annual increase in expenses had led to a student tradition ( dating back to the 1960s ) of tongue - in - cheek " tuition riots ". mit has been nominally co - educational since admitting ellen swallow richards in 1870. richards also became the first female member of mit ' s faculty, specializing in sanitary chemistry. female students remained a small minority prior to the completion of the first wing of a women ' s dormitory, mccormick hall, in 1963. between 1993 and 2009 the proportion of women rose from 34 percent to 45 percent of undergraduates and from 20 percent to 31 percent of graduate students. as of 2009, women outnumbered men in biology, brain & cognitive sciences, architecture, urban planning, and biological engineering. = = = faculty and staff = = = as of 2025, mit had 1, 090 faculty members. faculty are responsible for lecturing classes, for advising both graduate and undergraduate students, and for sitting on academic committees, as well as for conducting original research. between 1964 and 2009 a total of seventeen faculty and staff members affiliated with mit won nobel prizes ( thirteen of them in the latter 25 years ). as of october 2020, 37 mit faculty members, past or present, have won nobel prizes, the majority in economics or physics. as of october 2013, current faculty and teaching staff included 67 guggenheim fellows, 6 fulbright scholars, and 22 macarthur fellows. faculty members who have made extraordinary contributions to their research field as well as the mit community are granted appointments as institute professors for the remainder of their tenures. susan hockfield, a molecular neurobiologist, served as mit ' s president from 2004 to 2012. she was the first woman to hold the post.
mit faculty members have often been recruited to lead other colleges and universities. founding faculty - member charles w. eliot became president of harvard university in 1869, a post he would hold for 40 years, during which he wielded considerable influence both on american higher education and on secondary education. mit alumnus and faculty member george ellery hale played a central role in the development of the california institute of technology ( caltech ), and other faculty members have been key founders of franklin w. olin college of engineering in nearby needham, massachusetts. as of 2014 former provost robert a. brown served as president of boston university ; former
|
https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology
|
pareto set learning ( psl ) is an emerging approach for acquiring the complete pareto set of a multi - objective optimization problem. existing methods primarily rely on the mapping of preference vectors in the objective space to pareto optimal solutions in the decision space. however, the sampling of preference vectors theoretically requires prior knowledge of the pareto front shape to ensure high performance of the psl methods. designing a sampling strategy of preference vectors is difficult since the pareto front shape cannot be known in advance. to make pareto set learning work effectively for any pareto front shape, we propose a pareto front shape - agnostic pareto set learning method ( gpsl ) that does not require prior information about the pareto front. the fundamental concept behind gpsl is to treat the learning of the pareto set as a distribution transformation problem. specifically, gpsl can transform an arbitrary distribution into the pareto set distribution. we demonstrate that training a neural network by maximizing hypervolume enables the process of distribution transformation. our proposed method can handle any shape of the pareto front and learn the pareto set without requiring prior knowledge. experimental results show the high performance of our proposed method on diverse test problems compared with recent pareto set learning algorithms.
|
arxiv:2408.05778
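Training by hypervolume maximization presupposes a way to evaluate the hypervolume indicator. A minimal two-objective (minimization) version can be sketched as follows; the reference point and test front are illustrative, not taken from the paper:

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume dominated by a set of 2-objective points (minimisation)
    relative to a reference point -- the quantity a GPSL-style training loop
    would maximise. Sweep over points sorted by the first objective and add
    each non-dominated point's rectangular contribution."""
    pts = sorted(points, key=lambda p: p[0])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                        # non-dominated contribution
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# three points on a linear front; reference point (1, 1)
front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(hypervolume_2d(front, (1.0, 1.0)))  # → 0.25
```

Dominated points contribute nothing, so appending e.g. `(0.6, 0.6)` to `front` leaves the value unchanged — the indicator rewards only genuine progress toward the Pareto front.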
|
we derive the mean - field equations characterizing the dynamics of a rumor process that takes place on top of complex heterogeneous networks. these equations are solved numerically by means of a stochastic approach. first, we present analytical and monte carlo calculations for homogeneous networks and compare the results with those obtained by the numerical method. then, we study the spreading process in detail for random scale - free networks. the time profiles for several quantities are numerically computed, which allow us to distinguish among different variants of rumor spreading algorithms. our conclusions are directed to possible applications in replicated database maintenance, peer to peer communication networks and social spreading phenomena.
|
arxiv:cond-mat/0312131
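The homogeneous-network mean-field equations the authors compare against can be integrated with a few lines of explicit Euler. The rates and initial spreader fraction below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def rumor_mean_field(lam=1.0, alpha=1.0, i0=0.01, dt=1e-3, t_max=30.0):
    """Euler integration of the homogeneous mean-field rumor equations for
    ignorants s, spreaders i and stiflers r: spreaders convert ignorants at
    rate lam, and turn into stiflers on meeting non-ignorants at rate alpha."""
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(int(t_max / dt)):
        ds = -lam * s * i
        dr = alpha * i * (i + r)
        di = -ds - dr        # gain from ignorants minus loss to stifling
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

s, i, r = rumor_mean_field()
print(s, i)  # final ignorant fraction near the classical ~0.20; spreaders die out
```

The fact that a finite fraction of ignorants never hears the rumor (unlike in epidemic models with no stifling) is the standard signature of this class of dynamics.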
|
zero - shot learning ( zsl ) for image classification focuses on recognizing novel categories that have no labeled data available for training. the learning is generally carried out with the help of mid - level semantic descriptors associated with each class. this semantic - descriptor space is generally shared by both seen and unseen categories. however, zsl suffers from hubness, domain discrepancy and biased - ness towards seen classes. to tackle these problems, we propose a three - step approach to zero - shot learning. firstly, a mapping is learned from the semantic - descriptor space to the image - feature space. this mapping learns to minimize both one - to - one and pairwise distances between semantic embeddings and the image features of the corresponding classes. secondly, we propose test - time domain adaptation to adapt the semantic embedding of the unseen classes to the test data. this is achieved by finding correspondences between the semantic descriptors and the image features. thirdly, we propose scaled calibration on the classification scores of the seen classes. this is necessary because the zsl model is biased towards seen classes as the unseen classes are not used in the training. finally, to validate the proposed three - step approach, we performed experiments on four benchmark datasets where the proposed method outperformed previous results. we also studied and analyzed the performance of each component of our proposed zsl framework.
|
arxiv:1903.11701
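The third step — scaled calibration — is simple enough to sketch directly. Assuming non-negative compatibility scores, the seen-class columns are damped by a factor gamma before taking the argmax; the value 0.7 is an illustrative choice, not the paper's setting:

```python
import numpy as np

def calibrated_predict(scores, seen_mask, gamma=0.7):
    """Down-scale the scores of seen classes by gamma < 1 before the argmax,
    so a ZSL model biased toward seen classes can still pick unseen ones.
    Assumes non-negative scores; gamma would be tuned on validation data."""
    adj = scores.copy()
    adj[:, seen_mask] *= gamma
    return adj.argmax(axis=1)

scores = np.array([[0.6, 0.55],   # seen class 0 barely beats unseen class 1
                   [0.9, 0.20]])  # seen class 0 clearly wins
seen = np.array([True, False])
print(calibrated_predict(scores, seen))  # → [1 0]
```

With `gamma=1.0` both rows would go to the seen class; calibration flips only the marginal case, which is exactly the bias-correction behaviour one wants.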
|
the advent of compact, handheld devices has given us a pool of tracked movement data that could be used to infer trends and patterns. with this flood of trajectory data from animals, humans, vehicles, etc., the idea of trajectory analytics originated, using active learning to infer semantic annotations from the trajectories by learning from sets of labeled data. this study explores the application of dimensionality reduction and decision boundaries in combination with the already present active learning, highlighting patterns and clusters in data. we test these features with three different trajectory datasets with the objective of exploiting the already labeled data and enhancing their interpretability. our experimental analysis exemplifies the potential of these combined methodologies in improving the efficiency and accuracy of trajectory labeling. this study serves as a stepping - stone towards the broader integration of machine learning and visual methods in the context of movement data analysis.
|
arxiv:2401.05418
|
in this work we show that the composite fermion construction for the torus geometry is modular covariant. we show that this is the case both before and after projection, and that modular covariance properties are preserved under both exact projection and under jk projection which was recently introduced by pu, wu, and jain ( prb 96, 195302 ( 2017 ) ). it is crucial for the modular properties to hold that the cf state is a proper state, i. e. that there are no holes in the occupied $ \ lambda $ - levels.
|
arxiv:1810.10391
|
by introducing an invariant of loops on a compact oriented surface with one boundary component, we give an explicit formula for the action of dehn twists on the completed group ring of the fundamental group of the surface. this invariant can be considered as ` ` the logarithms " of dehn twists. the formula generalizes the classical formula describing the action on the first homology of the surface, and morita ' s explicit computations of the extended first and the second johnson homomorphisms. for the proof we use a homological interpretation of the goldman lie algebra in the framework of kontsevich ' s formal symplectic geometry. as an application, we prove the action of the dehn twist of a simple closed curve on the $ k $ - th nilpotent quotient of the fundamental group of the surface depends only on the conjugacy class of the curve in the $ k $ - th quotient.
|
arxiv:1008.5017
|
we prove that the rank gradient vanishes for mapping class groups of genus bigger than 1, $ aut ( f _ n ) $, for all $ n $, $ out ( f _ n ) $ for $ n \ geq 3 $, and any artin group whose underlying graph is connected. these groups have fixed price 1. we compute the rank gradient and verify that it is equal to the first $ l ^ 2 $ - betti number for some classes of coxeter groups.
|
arxiv:1210.2873
|
several dihedral angles prediction methods were developed for protein structure prediction and their other applications. however, distribution of predicted angles would not be similar to that of real angles. to address this we employed generative adversarial networks ( gan ). generative adversarial networks are composed of two adversarially trained networks : a discriminator and a generator. a discriminator distinguishes samples from a dataset and generated samples while a generator generates realistic samples. although the discriminator of gans is trained to estimate density, the gan model is intractable. on the other hand, noise - contrastive estimation ( nce ) was introduced to estimate a normalization constant of an unnormalized statistical model and thus the density function. in this thesis, we introduce noise - contrastive estimation generative adversarial networks ( nce - gan ) which enables explicit density estimation of a gan model. and a new loss for the generator is proposed. we also propose residue - wise variants of auxiliary classifier gan ( ac - gan ) and semi - supervised gan to handle sequence information in a window. in our experiment, the conditional generative adversarial network ( c - gan ), ac - gan and semi - supervised gan were compared. and experiments done with improved conditions were investigated. we identified a phenomenon of ac - gan that distribution of its predicted angles is composed of unusual clusters. the distribution of the predicted angles of semi - supervised gan was most similar to the ramachandran plot. we found that adding the output of the nce as an additional input of the discriminator is helpful to stabilize the training of the gans and to capture the detailed structures. adding regression loss and using predicted angles by regression loss only model could improve the conditional generation performance of the c - gan and ac - gan.
|
arxiv:1803.10996
|
1d diagonally disordered chain with frenkel exciton and long range exponential intersite interaction is considered. it is shown that some states of this disordered system are delocalised, contrary to the popular statement that all states in a 1d disordered system are localised.
|
arxiv:cond-mat/0310137
|
when the vacuum state of a scalar or electromagnetic field is modified by the presence of a reflecting boundary, an interacting test particle undergoes velocity fluctuations. such effect is regarded as a sort of quantum analog of the classical brownian motion. several aspects about this system have been recently investigated in the literature, for instance, finite temperature effects, curved spacetime framework, near - boundary regime, late time behavior, and subvacuum phenomena. here, further steps are given in this analysis by considering the effect of vacuum fluctuations of a scalar field in the presence of a perfectly reflecting flat boundary over the motion of a scalar test particle when the background field does not satisfy the huygens ' principle. specifically, the background field is allowed to have mass and the system is studied in $ d + 1 $ dimensions. a method of implementing a smooth transition between distinct states of the field is also developed, rendering regularized analytic expressions describing the velocity fluctuations of the test particle. this method is applied to study some special behaviors of the system. possible applications include fields known to occur in nature as, for instance, the massive higgs ' field, for which the velocity fluctuations are here predicted to acquire a characteristic oscillation, thus behaving differently from their electromagnetic counterparts.
|
arxiv:1906.08322
|
we introduce a family of markov processes on set partitions with a bounded number of blocks, called lipschitz partition processes. we construct these processes explicitly by a poisson point process on the space of lipschitz continuous maps on partitions. by this construction, the markovian consistency property is readily satisfied ; that is, the finite restrictions of any lipschitz partition process comprise a compatible collection of finite state space markov chains. we further characterize the class of exchangeable lipschitz partition processes by a novel set - valued matrix operation.
|
arxiv:1506.01495
|
for random graphs distributed according to a stochastic block model, we consider the inferential task of partitioning vertices into blocks using spectral techniques. spectral partitioning using the normalized laplacian and the adjacency matrix have both been shown to be consistent as the number of vertices tends to infinity. importantly, both procedures require that the number of blocks and the rank of the communication probability matrix are known, even as the rest of the parameters may be unknown. in this article, we prove that the ( suitably modified ) adjacency - spectral partitioning procedure, requiring only an upper bound on the rank of the communication probability matrix, is consistent. indeed, this result demonstrates a robustness to model mis - specification ; an overestimate of the rank may impose a moderate performance penalty, but the procedure is still consistent. furthermore, we extend this procedure to the setting where adjacencies may have multiple modalities and we allow for either directed or undirected graphs.
|
arxiv:1205.0309
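A bare-bones two-block version of adjacency-spectral partitioning can be sketched in a few lines; this is not the suitably modified procedure of the paper, just the underlying idea — embed with the leading eigenvectors of the adjacency matrix and split by sign:

```python
import numpy as np

def adjacency_spectral_partition(adj, dim=2):
    """Embed vertices with the top-|dim| eigenvectors of the adjacency matrix
    (by eigenvalue magnitude), then split two blocks by the sign of the second
    embedding coordinate -- a minimal adjacency-spectral partitioning."""
    vals, vecs = np.linalg.eigh(adj)
    order = np.argsort(np.abs(vals))[::-1][:dim]
    embed = vecs[:, order] * np.sqrt(np.abs(vals[order]))
    return (embed[:, 1] > 0).astype(int)

rng = np.random.default_rng(1)
n = 100                       # 50 vertices per block
blocks = np.repeat([0, 1], 50)
# within-block edge probability 0.5, between-block 0.1
p = np.where(blocks[:, None] == blocks[None, :], 0.5, 0.1)
adj = np.triu(rng.random((n, n)) < p, 1)
adj = (adj | adj.T).astype(float)
labels = adjacency_spectral_partition(adj)
acc = max((labels == blocks).mean(), (labels != blocks).mean())
print(acc)  # near 1 for this well-separated two-block SBM
```

The `max` over the labelling and its complement handles the usual label-switching ambiguity of clustering.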
|
graph neural networks ( gnns ) have achieved great success in various tasks, but their performance highly relies on a large number of labeled nodes, which typically requires considerable human effort. gnn - based active learning ( al ) methods are proposed to improve the labeling efficiency by selecting the most valuable nodes to label. existing methods assume an oracle can correctly categorize all the selected nodes and thus just focus on the node selection. however, such an exact labeling task is costly, especially when the categorization is out of the domain of individual expert ( oracle ). the paper goes further, presenting a soft - label approach to al on gnns. our key innovations are : i ) relaxed queries where a domain expert ( oracle ) only judges the correctness of the predicted labels ( a binary question ) rather than identifying the exact class ( a multi - class question ), and ii ) new criteria of maximizing information gain propagation for active learner with relaxed queries and soft labels. empirical studies on public datasets demonstrate that our method significantly outperforms the state - of - the - art gnn - based al methods in terms of both accuracy and labeling cost.
|
arxiv:2203.01093
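The relaxed binary query can be turned into a soft label with a one-line renormalisation. This is a schematic reading of the idea — the paper's actual update also propagates information gain through the graph — but it shows what a "no, that label is wrong" answer buys:

```python
import numpy as np

def relaxed_query_update(pred_probs, pred_label, oracle_says_correct):
    """Turn a binary 'is this predicted label correct?' answer into a soft
    label: confirmation hardens the predicted class; rejection zeroes it out
    and renormalises the remaining class probabilities."""
    soft = pred_probs.copy()
    if oracle_says_correct:
        soft[:] = 0.0
        soft[pred_label] = 1.0
    else:
        soft[pred_label] = 0.0
        soft /= soft.sum()
    return soft

p = np.array([0.5, 0.3, 0.2])
print(relaxed_query_update(p, 0, False))  # → [0.  0.6 0.4]
```

Even the negative answer is informative: it removes half the probability mass here and sharpens the remaining distribution, which is why a cheap binary oracle can still drive learning.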
|
we are discussing schwinger ' s idea that the physical mechanism of sonoluminescence is a physical vacuum excitation. this theory was based on the assumption that the sudden change of the rate of bubble collapse leads to the jump of dielectric constant of the gas trapped inside the bubble. we show that the dependence of the dielectric constant on the gas density really leads to the jump of the dielectric constant at shock - wave propagation in a collapsing gas bubble.
|
arxiv:cond-mat/0002434
|
minimally invasive experimental methods that can measure local rate dependent mechanical properties are essential in understanding the behaviour of soft and biological materials in a wide range of applications. needle based measurement techniques such as cavitation rheology and volume controlled cavity expansion ( vcce ), allow for minimally invasive local mechanical testing, but have been limited to measuring the elastic material properties. here, we propose several enhancements to the vcce technique to adapt it for characterization of viscoelastic response at low to medium stretch rates ( $ 10 ^ { - 2 } $ - $ 1 $ s $ { } ^ { - 1 } $ ). the proposed technique performs several cycles of expansion - relaxation at controlled stretch rates in a cavity expansion setting and then employs a large deformation viscoelastic model to capture the measured material response. application of the technique to soft pdms rubber reveals significant rate dependent material response with high precision and repeatability, while isolating equilibrated states that are used to directly infer the quasistatic elastic modulus. the technique is further established by demonstrating its ability to capture changes in the rate dependent material response of a tuneable pdms system. the measured viscoelastic properties are used to explain earlier reports of rate insensitive material response by needle based methods : it is demonstrated that the conventional use of constant volumetric rate cavity expansion can induce high stretch rates that lead to viscoelastic stiffening and an illusion of rate insensitive material response. we thus conclude with a cautionary note on possible overestimation of the quasistatic elastic modulus in previous studies and suggest that the stretch rate controlled expansion protocol, proposed in this work, is essential for accurate estimation of both quasistatic and dynamic material parameters.
|
arxiv:2007.11090
|
we obtain limit theorems for $ \ phi ( a ^ p ) ^ { 1 / p } $ and $ ( a ^ p \ sigma b ) ^ { 1 / p } $ as $ p \ to \ infty $ for positive matrices $ a, b $, where $ \ phi $ is a positive linear map between matrix algebras ( in particular, $ \ phi ( a ) = kak ^ * $ ) and $ \ sigma $ is an operator mean ( in particular, the weighted geometric mean ), which are considered as certain reciprocal lie - trotter formulas and also a generalization of kato ' s limit to the supremum $ a \ vee b $ with respect to the spectral order.
|
arxiv:1810.05476
|
lanczos methods for solving $ \ textit { a } \ textbf { x } = \ textbf { b } $ consist in constructing a sequence of vectors $ ( \ textbf { x } _ k ), k = 1,... $ such that $ \ textbf { r } _ { k } = \ textbf { b } - \ textit { a } \ textbf { x } _ { k } = \ textit { p } _ { k } ( \ textit { a } ) \ textbf { r } _ { 0 } $, where $ \ textit { p } _ { k } $ is the orthogonal polynomial of degree at most $ k $ with respect to the linear functional $ c $ defined as $ c ( \ xi ^ i ) = ( \ textbf { y }, \ textit { a } ^ i \ textbf { r } _ { 0 } ) $. let $ \ textit { p } ^ { ( 1 ) } _ { k } $ be the regular monic polynomial of degree $ k $ belonging to the family of formal orthogonal polynomials ( fop ) with respect to $ c ^ { ( 1 ) } $ defined as $ c ^ { ( 1 ) } ( \ xi ^ { i } ) = c ( \ xi ^ { i + 1 } ) $. all lanczos - type algorithms are characterized by the choice of one or two recurrence relationships, one for $ \ textit { p } _ { k } $ and one for $ \ textit { p } ^ { ( 1 ) } _ { k } $. we shall study some new recurrence relations involving $ \ textit { p } _ { k } $ and $ \ textit { p } ^ { ( 1 ) } _ { k } $ and their possible combination to obtain new lanczos - type algorithms. we will show that some recurrence relations exist, but cannot be used to derive lanczos - type algorithms, while others do not exist at all.
|
arxiv:1403.0323
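For intuition, the symmetric special case of the three-term recurrence behind these polynomial families can be sketched directly; the two-sided nonsymmetric recurrences studied in the paper follow the same pattern with a second vector sequence:

```python
import numpy as np

def lanczos(a_mat, r0, k):
    """Three-term Lanczos recurrence building an orthonormal basis of the
    Krylov space span{r0, A r0, ..., A^{k-1} r0} for symmetric A; the basis
    vectors correspond to the orthogonal polynomials p_k applied to r0."""
    n = len(r0)
    q = np.zeros((n, k + 1))
    q[:, 0] = r0 / np.linalg.norm(r0)
    beta = 0.0
    for j in range(k):
        w = a_mat @ q[:, j] - (beta * q[:, j - 1] if j > 0 else 0.0)
        alpha = q[:, j] @ w            # diagonal recurrence coefficient
        w -= alpha * q[:, j]
        beta = np.linalg.norm(w)       # off-diagonal recurrence coefficient
        q[:, j + 1] = w / beta
    return q[:, :k]

rng = np.random.default_rng(2)
m = rng.random((6, 6))
a_mat = m + m.T + 6 * np.eye(6)        # symmetric positive definite test matrix
basis = lanczos(a_mat, rng.random(6), 4)
print(np.allclose(basis.T @ basis, np.eye(4), atol=1e-8))  # orthonormal basis
```

Different Lanczos-type algorithms correspond to different ways of combining such recurrences, which is exactly the design space the abstract describes.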
|
we present the analytic solution for the stationary quantum hamilton - jacobi equation. knowing the strong relation between the riccati and quantum hamilton - jacobi equations, we develop a simple method to obtain the exact solution. then, in order to prove the validity of the proposed method, we use two central potentials : the three - dimensional harmonic oscillator and the coulomb potential, both with bound states. finally, we compute the action - angle variables in an entirely quantum version to establish the connection with the nodes of the wave function.
|
arxiv:1609.01198
|
in this paper, we study the large time asymptotic behavior toward rarefaction waves for solutions to the 1 - dimensional compressible navier - stokes equations with density - dependent viscosities for general initial data whose far fields are connected by a rarefaction wave to the corresponding euler equations with one end state being vacuum. first, a global - in - time weak solution around the rarefaction wave is constructed by approximating the system and regularizing the initial data with general perturbations, and some a priori uniform - in - time estimates for the energy and entropy are obtained. then it is shown that the density of any weak solution satisfying the natural energy and entropy estimates will converge to the rarefaction wave connected to vacuum with arbitrary strength in super - norm time - asymptotically. our results imply, in particular, that the initial vacuum at far fields will remain for all the time which are in contrast to the case of non - vacuum rarefaction waves studied in \ cite { jwx } where all the possible vacuum states will vanish in finite time. finally, it is proved that the weak solution becomes regular away from the vacuum region of the rarefaction wave.
|
arxiv:1109.0871
|
the present study investigates the accurate inference of reynolds - averaged navier - stokes solutions for the compressible flow over aerofoils in two dimensions with a deep neural network. our approach yields networks that learn to generate precise flow fields for varying body - fitted, structured grids by providing them with an encoding of the corresponding mapping to a canonical space for the solutions. we apply the deep neural network model to a benchmark case of incompressible flow at randomly given angles of attack and reynolds numbers and achieve an improvement of more than an order of magnitude compared to previous work. further, for transonic flow cases, the deep neural network model accurately predicts complex flow behaviour at high reynolds numbers, such as shock wave / boundary layer interaction, and quantitative distributions like pressure coefficient, skin friction coefficient as well as wake total pressure profiles downstream of aerofoils. the proposed deep learning method significantly speeds up the predictions of flow fields and shows promise for enabling fast aerodynamic designs.
|
arxiv:2109.02183
|
we apply the velocity distribution function ( vdf ) to a sample of sunyaev - zel ' dovich ( sz ) - selected clusters, and we report preliminary cosmological constraints in the $ \ sigma _ 8 $ - $ \ omega _ m $ cosmological parameter space. the vdf is a forward - modeled test statistic that can be used to constrain cosmological models directly from galaxy cluster dynamical observations. the method was introduced in ntampaka et al. ( 2017 ) and employs line - of - sight velocity measurements to directly constrain cosmological parameters ; it is less sensitive to measurement error than a standard halo mass function approach. the method is applied to the hectospec survey of sunyaev - zeldovich - selected clusters ( hecs - sz ) sample, which is a spectroscopic follow up of a planck - selected sample of 83 galaxy clusters. credible regions are calculated by comparing the vdf of the observed cluster sample to that of mock observations, yielding $ \ mathcal { s } _ 8 \ equiv \ sigma _ 8 \ left ( \ omega _ m / 0. 3 \ right ) ^ { 0. 25 } = 0. 751 \ pm0. 037 $. these constraints are in tension with the planck cosmic microwave background ( cmb ) tt fiducial value, which lies outside of our 95 % credible region, but are in agreement with some recent analyses of large scale structure that observe fewer massive clusters than are predicted by the planck fiducial cosmological parameters.
|
arxiv:1906.07729
|
the existence of correlations between the parts of a quantum system on the one hand, and entanglement between them on the other, are different properties. yet, one intuitively would identify strong $ n $ - party correlations with $ n $ - party entanglement in an $ n $ - partite quantum state. if the local systems are qubits, this intuition is confirmed : the state with the strongest $ n $ - party correlations is the greenberger - horne - zeilinger ( ghz ) state, which does have genuine multipartite entanglement. however, for high - dimensional local systems the state with strongest $ n $ - party correlations may be a tensor product of bell states, that is, partially separable. we show this by introducing several novel tools for handling the bloch representation.
|
arxiv:1908.04220
|
hypergraphs have been a useful tool for analyzing population dynamics such as opinion formation and the public goods game occurring in overlapping groups of individuals. in the present study, we propose and analyze evolutionary dynamics on hypergraphs, in which each node takes one of the two types of different but constant fitness values. for the corresponding dynamics on conventional networks, under the birth - death process and uniform initial conditions, most networks are known to be amplifiers of natural selection ; amplifiers by definition enhance the difference in the strength of the two competing types in terms of the probability that the mutant type fixates in the population. in contrast, we provide strong computational evidence that a majority of hypergraphs are suppressors of selection under the same conditions by combining theoretical and numerical analyses. we also show that this suppressing effect is not explained by one - mode projection, which is a standard method for expressing hypergraph data as a conventional network. our results suggest that the modeling framework for structured populations in addition to the specific network structure is an important determinant of evolutionary dynamics, paving a way to studying fixation dynamics on higher - order networks including hypergraphs.
|
arxiv:2301.05343
|
this document describes an approach used in the multi - machine disruption prediction challenge for fusion energy by itu, a data science competition which ran from september to november 2023, on the online platform zindi. the competition involved data from three fusion devices - c - mod, hl - 2a, and j - text - with most of the training data coming from the last two, and the test data coming from the first one. each device has multiple diagnostics and signals, and it turns out that a critical issue in this competition was to identify which signals, and especially which features from those signals, were most relevant to achieve accurate predictions. the approach described here is based on extracting features from signals, and then applying logistic regression on top of those features. each signal is treated as a separate predictor and, in the end, a combination of such predictors achieved the first place on the leaderboard.
|
arxiv:2311.14856
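The "features per signal, then logistic regression" recipe can be sketched end to end. The feature set and the two synthetic signal classes below are illustrative stand-ins for real diagnostics, and the trainer is a plain numpy gradient-descent logistic regression rather than any particular library:

```python
import numpy as np

def window_features(signal):
    """Summary statistics from one raw signal window -- the kind of
    per-signal predictors fed to logistic regression in this approach."""
    s = np.asarray(signal, float)
    return np.array([s.mean(), s.std(), np.ptp(s), np.abs(np.diff(s)).mean()])

def train_logreg(x, y, lr=0.5, steps=2000):
    """Gradient-descent logistic regression on the feature matrix."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(steps):
        z = np.clip(x @ w + b, -30, 30)      # clip logits to keep exp stable
        p = 1.0 / (1.0 + np.exp(-z))
        w -= lr * x.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

rng = np.random.default_rng(3)
quiet = [rng.normal(0, 0.1, 64) for _ in range(40)]   # "stable" windows
noisy = [rng.normal(0, 1.0, 64) for _ in range(40)]   # "pre-disruptive" windows
x = np.array([window_features(s) for s in quiet + noisy])
y = np.array([0] * 40 + [1] * 40)
w, b = train_logreg(x, y)
pred = ((x @ w + b) > 0).astype(int)
print((pred == y).mean())  # separable features → high training accuracy
```

The competition insight — that feature choice matters more than model choice — is visible even here: the classifier is trivial once the window statistics separate the classes.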
|
we show numerically that any of the constant mean curvature tori first found by wente must have index at least eight.
|
arxiv:0806.4659
|
we study the focusing of light through random photonic materials using wavefront shaping. we explore a novel approach namely binary amplitude modulation. to this end, the light incident to a random photonic medium is spatially divided into a number of segments. we identify the segments that give rise to fields that are out of phase with the total field at the intended focus and assign these a zero amplitude, whereas the remaining segments maintain their original amplitude. using 812 independently controlled segments of light, we find the intensity at the target to be 75 + / - 6 times enhanced over the average intensity behind the sample. we experimentally demonstrate focusing of light through random photonic media using both an amplitude only mode liquid crystal spatial light modulator and a mems - based spatial light modulator. our use of micro electro - mechanical system ( mems ) - based digital micromirror devices for the control of the incident light field opens an avenue to high speed implementations of wavefront shaping.
|
arxiv:1101.2860
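The segment-selection logic can be simulated with a random transmission row. This greedy sweep is an idealisation (no noise, full knowledge of each segment's field contribution), not the experimental measurement procedure, but it reproduces the core idea of dropping out-of-phase segments:

```python
import numpy as np

def binary_amplitude_focus(t_row):
    """Greedy binary amplitude shaping: switch each input segment off and
    keep it off only if the focal intensity grows, i.e. drop segments whose
    field is out of phase with the running total. t_row is one row of the
    complex transmission matrix linking segments to the target speckle."""
    on = np.ones(len(t_row), dtype=bool)
    for k in range(len(t_row)):
        base = abs(t_row[on].sum()) ** 2
        on[k] = False
        if abs(t_row[on].sum()) ** 2 <= base:
            on[k] = True              # switching off hurt: restore segment
    return on

rng = np.random.default_rng(4)
n = 812                               # segment count, as in the experiment
t_row = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * n)
mask = binary_amplitude_focus(t_row)
after = abs(t_row[mask].sum()) ** 2
avg = (np.abs(t_row) ** 2).sum()      # mean unshaped speckle intensity
print(after / avg)                    # large enhancement over the average
```

Roughly half the segments end up switched off, and the surviving in-phase contributions add coherently — the same mechanism that produces the reported 75-fold enhancement.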
|
a honeynet is a promising active cyber defense mechanism. it reveals the fundamental indicators of compromise ( iocs ) by luring attackers to conduct adversarial behaviors in a controlled and monitored environment. the active interaction at the honeynet brings a high reward but also introduces high implementation costs and risks of adversarial honeynet exploitation. in this work, we apply infinite - horizon semi - markov decision process ( smdp ) to characterize a stochastic transition and sojourn time of attackers in the honeynet and quantify the reward - risk trade - off. in particular, we design adaptive long - term engagement policies shown to be risk - averse, cost - effective, and time - efficient. numerical results have demonstrated that our adaptive engagement policies can quickly attract attackers to the target honeypot and engage them for a sufficiently long period to obtain worthy threat information. meanwhile, the penetration probability is kept at a low level. the results show that the expected utility is robust against attackers of a large range of persistence and intelligence. finally, we apply reinforcement learning to the smdp to solve the curse of modeling. under a prudent choice of the learning rate and exploration policy, we achieve a quick and robust convergence of the optimal policy and value.
|
arxiv:1906.12182
|
current steps in the current - voltage characteristics of wide superconducting sn films exposed to a microwave irradiation were observed in the resistive state with phase slip lines. the behaviour of the magnitude of the steps on the applied irradiation power was found to be similar to that for the current steps in narrow superconducting channels with phase slip centers and, to some extent, for the shapiro steps in josephson junctions. this provides evidence for the josephson properties of the phase slip lines in wide superconducting films and supports the assumption about similarity between the processes of phase slip in wide and narrow films.
|
arxiv:0707.2381
|
a groupoid that satisfies the left invertive law is called an ag - groupoid. this concept is extended to introduce a stein ag - groupoid. we prove the existence by providing some non - associative examples. we also explore some basic and general properties of these ag - groupoids and find their relations with other subclasses of ag - groupoids.
|
arxiv:1403.5422
|
the wasp ( wide angle search for planets ) project is an exoplanet transit survey that has been automatically taking wide field images since 2004. two instruments, one in la palma and the other in south africa, continually monitor the night sky, building up light curves of millions of unique objects. these light curves are used to search for the characteristics of exoplanetary transits. this first public data release ( dr1 ) of the wasp archive makes available all the light curve data and images from 2004 up to 2008 in both the northern and southern hemispheres. a web interface ( www. wasp. le. ac. uk / public / ) to the data allows easy access over the internet. the data set contains 3 631 972 raw images and 17 970 937 light curves. in total the light curves have 119 930 299 362 data points available between them.
|
arxiv:1009.5306
|
this paper investigates the expected average error for distributed averaging problems under asynchronous updates. asynchronism in this context means the absence of a global clock as well as random communication uncertainty such as communication delays and packet drops. although some previous works contributed to the design of average consensus protocols that guarantee convergence to the exact average, these methods may increase the computational burden due to extra work. it is thus sometimes beneficial to let each agent exchange information asynchronously without modifying the algorithm, which introduces randomness in the average value as a trade - off. in this study, the expected average error is analyzed within the switched system framework to estimate an upper bound on the deviation of the asynchronous average from the exact one in the expectation sense. numerical examples are provided to validate the proposed results.
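as a minimal sketch of the setting, classical asynchronous pairwise gossip ( a standard scheme, not necessarily the protocol analyzed in the paper ) shows why asynchrony alone is benign in the idealized case: each random pairwise average preserves the network mean exactly, whereas delays and packet drops would break this invariant and produce the random average error the paper bounds in expectation.

```python
import numpy as np

# asynchronous pairwise gossip averaging: no global clock, at each tick one
# random pair of agents averages its two values.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=8)        # initial agent values
exact_mean = x.mean()

for _ in range(2000):
    i, j = rng.choice(8, size=2, replace=False)
    x[i] = x[j] = 0.5 * (x[i] + x[j])     # pairwise update preserves the sum

# in this loss-free case every agent converges to the exact mean; lossy or
# one-sided updates would leave a random residual error instead.
print(float(x.max() - x.min()))
```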
|
arxiv:2006.01925
|
we report on the dynamical behavior of defects of strength s = + / - 1 / 2 in a lyotropic liquid crystal during the annihilation process. by following their positions using a time - resolved polarizing microscopy technique, we present statistically significant evidence that the relative velocity between defect pairs is gaussian distributed, anti - persistent and long - range correlated. we further show that simulations of the lebwohl - lasher model reproduce our experimental findings quite well.
|
arxiv:1305.5421
|
flow cytometry is a technology that rapidly measures antigen - based markers associated with cells in a cell population. although analysis of flow cytometry data has traditionally considered one or two markers at a time, there has been increasing interest in multidimensional analysis. however, flow cytometers are limited in the number of markers they can jointly observe, which is typically a fraction of the number of markers of interest. for this reason, practitioners often perform multiple assays based on different, overlapping combinations of markers. in this paper, we address the challenge of imputing the high dimensional jointly distributed values of marker attributes based on overlapping marginal observations. we show that simple nearest neighbor based imputation can lead to spurious subpopulations in the imputed data, and introduce an alternative approach based on nearest neighbor imputation restricted to a cell ' s subpopulation. this requires us to perform clustering with missing data, which we address with a mixture model approach and a novel em algorithm. since mixture model fitting may be ill - posed, we also develop techniques to initialize the em algorithm using domain knowledge. we demonstrate our approach on real flow cytometry data.
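the core idea can be illustrated with a toy sketch: impute a cell ' s missing marker from nearest neighbors restricted to its subpopulation rather than drawn from the whole sample. cluster labels are assumed known here for simplicity; the paper estimates them with a mixture model and an em algorithm under missingness.

```python
import numpy as np

# two synthetic subpopulations with two markers each (illustrative data only)
rng = np.random.default_rng(2)
a = rng.normal([0.0, 0.0], 0.3, size=(50, 2))   # subpopulation A
b = rng.normal([4.0, 4.0], 0.3, size=(50, 2))   # subpopulation B
data = np.vstack([a, b])
labels = np.array([0] * 50 + [1] * 50)

query = np.array([3.9])     # cell with marker 0 observed, marker 1 missing
true_label = 1              # its (here: known) subpopulation

# restrict the neighbor pool to the cell's cluster; global k-NN on the observed
# marker can mix subpopulations near boundaries and create spurious values.
pool = data[labels == true_label]
d = np.abs(pool[:, 0] - query[0])               # distance on observed marker
nn = pool[np.argsort(d)[:5]]                    # 5 nearest within the cluster
imputed = float(nn[:, 1].mean())                # impute the missing marker
print(round(imputed, 3))
```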
|
arxiv:1003.5539
|
generalizing earlier results concerning p - adic fields, this paper develops a theory of b ( g ) for all local and global fields.
|
arxiv:1401.5728
|
the magnetic - field - induced variations of the microwave surface resistance, r _ s, have been investigated in ceramic mg _ { 1 - x } ( lial ) _ xb _ 2, with x in the range 0. 1 - 0. 4. the measurements have been performed on increasing and decreasing the dc magnetic field, h _ 0, at fixed temperatures. at low temperatures, we have observed a magnetic hysteresis in the r _ s ( h _ 0 ) curves in all the investigated samples. on increasing the temperature, the range of h _ 0 in which the hysteretic behavior is visible shrinks ; however, in the sample with x = 0. 1 it is present up to temperatures close to t _ c. we show that the field dependence of r _ s can be quantitatively justified taking into account the critical - state effects on the fluxon lattice only in the sample with x = 0. 4. on the contrary, in the samples with x < 0. 4 the hysteresis exhibits an unusual shape, similar to that observed in other two - gap mgb _ 2 samples, which cannot be justified in the framework of the critical - state models.
|
arxiv:0909.1292
|
the lack of large video databases obtained from real patients with respiratory disorders makes the design and optimization of video - based monitoring systems quite critical. the purpose of this study is the development of suitable models and simulators of breathing behaviors and disorders, such as respiratory pauses and apneas, in order to allow efficient design and testing of video - based monitoring systems. more precisely, a novel continuous - time markov chain ( ctmc ) statistical model of breathing patterns is presented. the respiratory rate ( rr ) pattern, estimated from measured vital signs of hospital - monitored patients, is approximated as a ctmc, whose states and parameters are selected through an appropriate statistical analysis. then, two simulators, software - and hardware - based, are proposed. after validation of the ctmc model, the proposed simulators are tested with previously developed video - based algorithms for the estimation of the rr and the detection of apnea events. examples of application to assess the performance of systems for video - based rr estimation and apnea detection are presented. the results, in terms of kullback - leibler divergence, show that realistic breathing patterns, including specific respiratory disorders, can be accurately described by the proposed model ; moreover, the simulators are able to reproduce practical breathing patterns for video analysis. the presented ctmc statistical model can thus be instrumental in describing realistic breathing patterns and in devising simulators useful for developing and testing novel and effective video processing - based monitoring systems.
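a ctmc simulator of this kind reduces to drawing exponential sojourn times from the diagonal of a generator matrix and jumping according to the embedded chain. the states and generator below are illustrative assumptions, not the values estimated in the paper.

```python
import numpy as np

# breathing-pattern CTMC: respiratory-rate regimes plus an apnea state.
rng = np.random.default_rng(3)
states = ["slow", "normal", "fast", "apnea"]
Q = np.array([                      # generator matrix (rows sum to 0), 1/min
    [-0.5, 0.4, 0.05, 0.05],
    [0.3, -0.6, 0.25, 0.05],
    [0.1, 0.5, -0.62, 0.02],
    [0.2, 0.7, 0.1, -1.0],
])

s, t, path = 1, 0.0, []             # start in "normal", simulate one hour
while t < 60.0:
    rate = -Q[s, s]
    dwell = rng.exponential(1.0 / rate)        # exponential sojourn time
    path.append((states[s], dwell))
    jump = np.delete(Q[s], s) / rate           # embedded-chain probabilities
    s = int(rng.choice(np.delete(np.arange(4), s), p=jump))
    t += dwell
print(len(path), path[0])
```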
|
arxiv:1610.01444
|
suggested screening jurors for their level of influence from such tv programs. further, research has found that newspaper media shapes readers ' general knowledge and perceptions of science and technology in a rather positive way. it could lead to support of it due to the interest readers may obtain and seek further knowledge on the topic. = = controversies = = questions about certain areas of forensic science, such as fingerprint evidence and the assumptions behind these disciplines have been brought to light in some publications including the new york post. the article stated that " no one has proved even the basic assumption : that everyone ' s fingerprint is unique. " the article also stated that " now such assumptions are being questioned, and with it may come a radical change in how forensic science is used by police departments and prosecutors. " law professor jessica gabel said on nova that forensic science " lacks the rigors, the standards, the quality controls and procedures that we find, usually, in science ". the national institute of standards and technology has reviewed the scientific foundations of bite - mark analysis used in forensic science. bite mark analysis is a forensic science technique that analyzes the marks on the victim ' s skin compared to the suspect ' s teeth. nist reviewed the findings of the national academies of sciences, engineering, and medicine 2009 study. the national academies of sciences, engineering, and medicine conducted research to address the issues of reliability and accuracy of bitemark analysis, where they concluded that there is a lack of sufficient scientific foundation to support the data. yet the technique is still legal to use in court as evidence. nist funded a 2019 meeting that consisted of dentists, lawyers, researchers and others to address the gaps in this field. in the us, on 25 june 2009, the supreme court issued a 5 - to - 4 decision in melendez - diaz v.
massachusetts stating that crime laboratory reports may not be used against criminal defendants at trial unless the analysts responsible for creating them give testimony and subject themselves to cross - examination. the supreme court cited the national academies of sciences report strengthening forensic science in the united states in their decision. writing for the majority, justice antonin scalia referred to the national research council report in his assertion that " forensic evidence is not uniquely immune from the risk of manipulation. " in the us, another area of forensic science that has come under question in recent years is the lack of laws requiring the accreditation of forensic labs. some states require accreditation, but some states do not. because of this, many labs have been caught performing very poor
|
https://en.wikipedia.org/wiki/Forensic_science
|
sustainability reports are key for evaluating companies ' environmental, social and governance ( esg ) performance, but their content is increasingly obscured by greenwashing - sustainability claims that are misleading, exaggerated, or fabricated. yet, existing nlp approaches for esg analysis lack robustness against greenwashing risks, often extracting insights that reflect misleading or exaggerated sustainability claims rather than objective esg performance. to bridge this gap, we introduce a3cg ( aspect - action analysis with cross - category generalization ), a novel dataset to improve the robustness of esg analysis amid the prevalence of greenwashing. by explicitly linking sustainability aspects with their associated actions, a3cg facilitates a more fine - grained and transparent evaluation of sustainability claims, ensuring that insights are grounded in verifiable actions rather than vague or misleading rhetoric. additionally, a3cg emphasizes cross - category generalization. this ensures robust model performance in aspect - action analysis even when companies change their reports to selectively favor certain sustainability areas. through experiments on a3cg, we analyze state - of - the - art supervised models and llms, uncovering their limitations and outlining key directions for future research.
|
arxiv:2502.15821
|
in this paper, we study the critical branching random walk in the critical dimension, $ z ^ 4 $. we provide the asymptotics of the probability of visiting a fixed finite subset and of the range of the critical branching random walk conditioned on the total number of offspring. we also prove that, conditioned on visiting, the first visiting point converges in distribution.
|
arxiv:1701.08917
|
the gravitational field of supermassive black holes is able to strongly bend light rays emitted by nearby sources. when the deflection angle exceeds $ \ pi $, gravitational lensing can be analytically approximated by the so - called strong deflection limit. in this paper we remove the conventional assumption of sources very far from the black hole, considering the distance of the source as an additional parameter in the lensing problem to be treated exactly. we find expressions for critical curves, caustics and all lensing observables valid for any position of the source up to the horizon. after analyzing the spherically symmetric case we focus on the kerr black hole, for which we present an analytical 3 - dimensional description of the higher order caustic tubes.
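for context, in the conventional strong deflection limit with sources and observers far from the black hole, the deflection angle diverges logarithmically as the impact angle approaches the photon sphere; a standard schwarzschild - type form ( stated here for background only, not taken from this paper, which relaxes the far - source assumption ) is

$$ \alpha(\theta) \simeq -\bar{a}\,\ln\!\left(\frac{\theta}{\theta_\infty}-1\right)+\bar{b}, \qquad \theta \to \theta_\infty^{+}, $$

where $ \theta_\infty $ is the angular radius of the photon sphere and $ \bar{a} $, $ \bar{b} $ are metric - dependent coefficients.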
|
arxiv:0705.0246
|