text | source
---|---
a myth", also noting that there was a "mismatch between earning a STEM degree and having a STEM job" in the United States, with only around 1⁄4 of STEM graduates working in STEM fields, while less than half of workers in STEM fields have a STEM degree. Economics writer Ben Casselman, in a 2014 study of post-graduation earnings in the United States for FiveThirtyEight, wrote that, based on the data, science should not be grouped with the other three STEM categories, because, while the other three generally result in high-paying jobs, "many sciences, particularly the life sciences, pay below the overall median for recent college graduates." A 2017 article from the University of Leicester concluded that "maintaining accounts of a 'crisis' in the supply of STEM workers has usually been in the interests of industry, the education sector and government, as well as the lobby groups that represent them. Concerns about a shortage have meant the allocation of significant additional resources to the sector, whose representatives have, in turn, become powerful voices in advocating for further funds and further investment." A 2022 report from Rutgers University stated: "In the United States, the STEM crisis theme is a perennial policy favorite, appearing every few years as an urgent concern in the nation's competition with whatever other nation is ascendant, or as the cause of whatever problem is ailing the domestic economy. And the solution is always the same: increase the supply of STEM workers through expanding STEM education. Time and again, serious and empirically grounded studies find little evidence of any systemic failures or an inability of market responses to address whatever supply is required to meet workforce needs." A study of the UK job market, published in 2022, found problems similar to those reported earlier for the USA: "it is not clear that having a degree in the sciences, rather than in other subjects, provides any sort of advantage in terms of short- or long-term employability... While only a minority of STEM graduates ever work in highly-skilled STEM jobs, we identified three particular characteristics of the STEM labour market that may present challenges for employers: STEM employment appears to be predicated on early entry to the sector; a large proportion of STEM graduates are likely to never work in the sector; and there may be more movement out of HS STEM positions by older workers than in other sectors..."
|
https://en.wikipedia.org/wiki/Science,_technology,_engineering,_and_mathematics
|
Large language models (LLMs) have shown great ability in solving traditional natural language tasks and elementary reasoning tasks with appropriate prompting techniques. However, their ability is still limited in solving complicated science problems. In this work, we aim to push the upper bound of the reasoning capability of LLMs by proposing a collaborative multi-agent, multi-reasoning-path (CoMM) prompting framework. Specifically, we prompt LLMs to play different roles in a problem-solving team, and encourage different role-play agents to collaboratively solve the target task. In particular, we discover that applying different reasoning paths for different roles is an effective strategy to implement few-shot prompting approaches in multi-agent scenarios. Empirical results demonstrate the effectiveness of the proposed methods on two college-level science problems over competitive baselines. Our further analysis shows the necessity of prompting LLMs to play different roles or experts independently. We release the code at: https://github.com/amazon-science/comm-prompt
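The role-play idea can be sketched as follows. This is a hypothetical illustration only, not the released CoMM code: the `ask` function is a stub standing in for a real LLM API call, and the role prompts are invented for the example.

```python
# Sketch of multi-agent, multi-reasoning-path prompting (hypothetical,
# not the released CoMM code). Each role gets its own prompt and its own
# reasoning style; the independent answers are merged by majority vote.
from collections import Counter

ROLES = {
    "physicist": "You are a physicist. Reason step by step from first principles.",
    "mathematician": "You are a mathematician. Set up and solve the equations.",
    "reviewer": "You are a reviewer. Check the candidate solution for errors.",
}

def ask(role_prompt, question):
    """Placeholder for a real LLM API call (returns a canned answer here)."""
    return "42"

def solve_collaboratively(question):
    # Query each role independently, then aggregate by majority vote.
    answers = [ask(prompt, question) for prompt in ROLES.values()]
    return Counter(answers).most_common(1)[0][0]

print(solve_collaboratively("What is 6 x 7?"))
```

In a real deployment each role would also receive different few-shot exemplars (different reasoning paths), which is the strategy the abstract highlights.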
|
arxiv:2404.17729
|
We study the effect of a perpendicular uniform magnetic field on the dissipative conductivity of a rectangular lattice with anisotropic hopping, $t_x \neq t_y$. We show that the magnetic field may dramatically enhance the directional anisotropy in the conductivity. The effect is a measurable physical realization of Aubry's duality in Harper systems.
|
arxiv:cond-mat/9910209
|
We report on electrical gating of the charge-density-wave phases and current in h-BN capped three-terminal 1T-TaS$_2$ heterostructure devices. It is demonstrated that the application of a gate bias can shift the source-drain current-voltage hysteresis associated with the transition between the nearly commensurate and incommensurate charge-density-wave phases. The evolution of the hysteresis and the presence of abrupt spikes in the current while sweeping the gate voltage suggest that the effect is electrical rather than self-heating. We attribute the gating to an electric-field effect on the commensurate charge-density-wave domains in the atomic planes near the gate dielectric. The transition between the nearly commensurate and incommensurate charge-density-wave phases can be induced by both the source-drain current and the electrostatic gate. Since the charge-density-wave phases are persistent in 1T-TaS$_2$ at room temperature, one can envision memory applications of such devices when scaled down to the dimensions of individual commensurate domains and few-atomic-plane thicknesses.
|
arxiv:2208.07857
|
We study the influence of mutual interaction on the conformation of flexible poly(propyleneamine) dendrimers of fourth generation in concentrated solution. Mixtures of dendrimers with protonated and deuterated end groups are investigated by small-angle neutron scattering up to volume fractions of 0.23. This value is in the range of the overlap concentration of the dendrimers. The contrast between the solute and the solvent was varied by using mixtures of protonated and deuterated solvents. This allows us to investigate the partial structure factors of the deuterated dendrimers in detail. An analysis of the measured scattering intensities reveals that the shape of the flexible dendrimers is practically independent of the concentration, in contrast to the pronounced conformational changes of flexible linear polymers.
|
arxiv:0907.1241
|
The term "two-photon processes" is used for reactions in which some system of particles is produced in a collision of two photons, either real or virtual. In the study of these processes our main goal was to suggest an approach allowing one to extract from the data information on the proper two-photon process, separating it from the mechanism responsible for the production of the photons. Here I present my view of the history of two-photon physics. I do not try to give a complete review, concentrating mainly on the works of our team (which cover an essential part of the topic) and some colleagues. My citation is strongly incomplete: I cite here only papers which were essential to our understanding of the problems. The choice of presented details is the result of my discussions with Gleb Kotkin and Valery Serbo. 1. Prehistory. 2. Two-photon processes at e^+e^- colliders. 3. Photon colliders. 4. Notes on the physical program.
|
arxiv:1508.06581
|
Machine learning and computer vision techniques have grown rapidly in recent years due to their automation, suitability, and ability to generate astounding results. Hence, in this paper, we survey the key studies published between 2014 and 2022, showcasing the different machine learning algorithms researchers have used to segment the liver, hepatic tumors, and hepatic-vasculature structures. We divide the surveyed studies based on the tissue of interest (hepatic parenchyma, hepatic tumors, or hepatic vessels), highlighting the studies that tackle more than one task simultaneously. Additionally, the machine learning algorithms are classified as either supervised or unsupervised, and they are further partitioned if the amount of work that falls under a certain scheme is significant. Moreover, different datasets and challenges found in the literature, and websites containing masks of the aforementioned tissues, are thoroughly discussed, highlighting the organizers' original contributions and those of other researchers. Also, the metrics used extensively in the literature are mentioned in our review, stressing their relevance to the task at hand. Finally, critical challenges and future directions are emphasized for innovative researchers to tackle, exposing gaps that need addressing, such as the scarcity of studies on the vessel-segmentation challenge and why their absence needs to be dealt with sooner rather than later.
|
arxiv:2103.06384
|
Several proposals for quantum computation utilize a lattice-type architecture with qubits trapped by a periodic potential. For systems undergoing many-body interactions described by the Bose-Hubbard Hamiltonian, the ground state of the system carries number fluctuations that scale with the number of qubits. This process degrades the initialization of the quantum computer register and can introduce errors during error correction. In an earlier manuscript we proposed a solution to this problem tailored to the loading of cold atoms into an optical lattice via the Mott insulator phase transition. It was shown that by adding an inhomogeneity to the lattice and performing a continuous measurement, the unit-filled state suitable for a quantum computer register can be maintained. Here, we give a more rigorous derivation of the register fidelity in homogeneous and inhomogeneous lattices and provide evidence that the protocol is effective in the finite-temperature regime.
|
arxiv:quant-ph/0403052
|
Recent data search platforms use ML task-based utility measures rather than metadata-based keywords to search large dataset corpora. Requesters submit a training dataset, and these platforms search for augmentations (join- or union-compatible datasets) that, when used to augment the requester's dataset, most improve model (e.g., linear regression) performance. Although effective, providers that manage personally identifiable data demand differential privacy (DP) guarantees before granting these platforms data access. Unfortunately, making data search differentially private is nontrivial, as a single search can involve training and evaluating datasets hundreds or thousands of times, quickly depleting privacy budgets. We present Saibot, a differentially private data search platform that employs the Factorized Privacy Mechanism (FPM), a novel DP mechanism, to calculate sufficient semi-ring statistics for ML over different combinations of datasets. These statistics are privatized once, and can be freely reused for the search. This allows Saibot to scale to arbitrary numbers of datasets and requests, while minimizing the amount that DP noise affects search results. We optimize the sensitivity of FPM for common augmentation operations, and analyze its properties with respect to linear regression. Specifically, we develop an unbiased estimator for many-to-many joins, prove its bounds, and develop an optimization to redistribute DP noise to minimize the impact on the model. Our evaluation on a real-world dataset corpus of 329 datasets demonstrates that Saibot can return augmentations that achieve model accuracy within 50 to 90% of non-private search, while the leading alternative DP mechanisms (TPM, APM, shuffling) are several orders of magnitude worse.
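The reuse of sufficient statistics that underpins this approach can be illustrated for ordinary least squares. This is a simplified, noise-free sketch of the general idea, not Saibot's FPM itself (FPM privatizes such statistics before reuse, and its sensitivity analysis is the paper's contribution):

```python
# Sketch: sufficient statistics for linear regression can be computed once
# and reused for any number of model evaluations. In a DP setting these
# statistics would be noised once; noise is omitted here for clarity.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=n)

# Sufficient statistics for OLS: X^T X and X^T y.
XtX = X.T @ X
Xty = X.T @ y

# Solving the normal equations from the statistics alone...
theta_stats = np.linalg.solve(XtX, Xty)
# ...matches fitting directly on the raw data.
theta_direct = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(theta_stats, theta_direct))
```

Because the statistics are additive over rows, they compose over unions and (with care) joins of datasets, which is what makes a "privatize once, search many times" design possible.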
|
arxiv:2307.00432
|
Co-packaged optics is poised to solve the interconnect bandwidth bottleneck for GPUs and AI accelerators in the near future. This technology can immediately boost today's AI/ML compute power to train larger neural networks that can perform more complex tasks. More importantly, co-packaged optics unlocks new system-level opportunities to rethink our conventional supercomputing and datacenter architectures. Disaggregation of memory and compute units is one such new paradigm that can greatly speed up AI/ML workloads by providing low-latency and high-throughput performance, while maintaining flexibility to support conventional cloud computing applications as well. This paper gives a brief overview of the state of the art of co-packaged optical I/O and the requirements of its next generations. We also discuss ideas to exploit co-packaged optics in disaggregated AI systems and possible future directions.
|
arxiv:2303.01744
|
We study the wave propagation modes in the relativistic streaming pair plasma of the magnetospheres of pulsars and magnetars, focusing on the effect of vacuum polarization. We show that the combined plasma and vacuum polarization effects give rise to a vacuum resonance, where "avoided mode crossing" occurs between the extraordinary mode and the (superluminous) ordinary mode. When a photon propagates from the vacuum-polarization-dominated region at small radii to the plasma-dominated region at large radii, its polarization state may undergo significant change across the vacuum resonance. We map out the parameter regimes (e.g., field strength, plasma density and Lorentz factor) under which the vacuum resonance occurs and examine how wave propagation is affected by the resonance. Some possible applications of our results are discussed, including high-frequency radio emission from pulsars and possibly magnetars, and optical/IR emission from neutron star surfaces and inner magnetospheres.
|
arxiv:astro-ph/0611924
|
Alzheimer's disease (AD) is the only major cause of mortality in the world without an effective disease-modifying treatment. Evidence supporting the so-called disconnection hypothesis suggests that functional connectivity biomarkers may have clinical potential for early detection of AD. However, known issues with low test-retest reliability and signal-to-noise in functional connectivity may prevent accuracy and subsequent predictive capacity. We validate the utility of a novel principal-component-based diagnostic identifiability framework to increase separation in functional connectivity across the Alzheimer's spectrum by identifying and reconstructing FC using only AD-sensitive components or connectivity modes. We show that this framework (1) increases test-retest correspondence and (2) allows for better separation, in functional connectivity, of diagnostic groups both at the whole-brain and individual resting-state network level. Finally, we evaluate a posteriori the association between connectivity mode weights and longitudinal neurocognitive outcomes.
|
arxiv:1809.09757
|
AI alignment methods. Other fields of ethics have had to contend with technology-related issues, including military ethics, media ethics, and educational ethics. == Futures studies == Futures studies is the study of social and technological progress. It aims to explore the range of plausible futures and incorporate human values in the development of new technologies. More generally, futures researchers are interested in improving "the freedom and welfare of humankind". It relies on a thorough quantitative and qualitative analysis of past and present technological trends, and attempts to rigorously extrapolate them into the future. Science fiction is often used as a source of ideas. Futures research methodologies include survey research, modeling, statistical analysis, and computer simulations. === Existential risk === Existential risk researchers analyze risks that could lead to human extinction or civilizational collapse, and look for ways to build resilience against them. Relevant research centers include the Cambridge Centre for the Study of Existential Risk and the Stanford Existential Risk Initiative. Future technologies may contribute to the risks of artificial general intelligence, biological warfare, nuclear warfare, nanotechnology, anthropogenic climate change, global warming, or stable global totalitarianism, though technologies may also help us mitigate asteroid impacts and gamma-ray bursts. In 2019 philosopher Nick Bostrom introduced the notion of a vulnerable world, "one in which there is some level of technological development at which civilization almost certainly gets devastated by default", citing the risks of a pandemic caused by bioterrorists, or an arms race triggered by the development of novel armaments and the loss of mutual assured destruction. He invites policymakers to question the assumptions that technological progress is always beneficial, that scientific openness is always preferable, or that they can afford to wait until a dangerous technology has been invented before they prepare mitigations. == Emerging technologies == Emerging technologies are novel technologies whose development or practical applications are still largely unrealized. They include nanotechnology, biotechnology, robotics, 3D printing, and blockchains. In 2005, futurist Ray Kurzweil claimed the next technological revolution would rest upon advances in genetics, nanotechnology, and robotics, with robotics being the most impactful of the three. Genetic engineering will allow far greater control over human biological nature through a process called directed evolution. Some thinkers believe that this may shatter our sense of self, and have urged for renewed public debate exploring the issue more thoroughly; others fear
|
https://en.wikipedia.org/wiki/Technology
|
We propose a general construction principle which allows one to include an infinite number of resonance states in a scattering matrix of hyperbolic type. As a concrete realization of this mechanism we provide new S-matrices generalizing a class of hyperbolic ones, which are related to a pair of simple Lie algebras, to the elliptic case. For specific choices of the algebras we propose elliptic generalizations of affine Toda field theories and the homogeneous sine-Gordon models. For the generalization of the sinh-Gordon model we compute explicitly renormalization group scaling functions by means of the c-theorem and the thermodynamic Bethe ansatz. In particular we identify the Virasoro central charges of the corresponding ultraviolet conformal field theories.
|
arxiv:hep-th/0103252
|
Deep learning (DL) in general, and recurrent neural networks (RNNs) in particular, have seen high success levels in sequence-based applications. This paper pertains to RNNs for time series modelling and forecasting. We propose a novel RNN architecture capturing (stochastic) seasonal correlations intelligently while being capable of accurate multi-step forecasting. It is motivated by the well-known encoder-decoder (ED) architecture and the multiplicative seasonal auto-regressive model. It incorporates multi-step (multi-target) learning even in the presence (or absence) of exogenous inputs. It can be employed on single or multiple sequence data. For the multiple-sequence case, we also propose a novel greedy recursive procedure to build (one or more) predictive models across sequences when per-sequence data is scarce. We demonstrate via extensive experiments the utility of our proposed architecture in both single-sequence and multiple-sequence scenarios.
|
arxiv:2207.04113
|
A graphical realization of a linear code C consists of an assignment of the coordinates of C to the vertices of a graph, along with a specification of linear state spaces and linear "local constraint" codes to be associated with the edges and vertices, respectively, of the graph. The $\kappa$-complexity of a graphical realization is defined to be the largest dimension of any of its local constraint codes. $\kappa$-complexity is a reasonable measure of the computational complexity of a sum-product decoding algorithm specified by a graphical realization. The main focus of this paper is on the following problem: given a linear code C and a graph G, how small can the $\kappa$-complexity of a realization of C on G be? As useful tools for attacking this problem, we introduce the vertex-cut bound, and the notion of "VC-treewidth" for a graph, which is closely related to the well-known graph-theoretic notion of treewidth. Using these tools, we derive tight lower bounds on the $\kappa$-complexity of any realization of C on G. Our bounds enable us to conclude that good error-correcting codes can have low-complexity realizations only on graphs with large VC-treewidth. Along the way, we also prove the interesting result that the ratio of the $\kappa$-complexity of the best conventional trellis realization of a length-n code C to the $\kappa$-complexity of the best cycle-free realization of C grows at most logarithmically with the codelength n. Such a logarithmic growth rate is, in fact, achievable.
|
arxiv:0805.2199
|
We study the asymptotic behavior of solutions of the mildly degenerate Kirchhoff equation with a dissipative term. We obtain a new estimate on the second time derivative of the solution. Moreover, we renormalize the solution in such a way that the renormalization has a nonzero limit as t goes to infinity. Finally, we calculate explicitly the norm of this limit and we prove that it does not depend on the initial data.
|
arxiv:1105.5358
|
In this paper we study the well-known Chvátal-Gomory (CG) procedure for the class of integer semidefinite programs (ISDPs). We prove several results regarding the hierarchy of relaxations obtained by iterating this procedure. We also study different formulations of the elementary closure of spectrahedra. A polyhedral description of the elementary closure for a specific type of spectrahedra is derived by exploiting total dual integrality for SDPs. Moreover, we show how to exploit (strengthened) CG cuts in a branch-and-cut framework for ISDPs. Different from existing algorithms in the literature, the separation routine in our approach exploits both the semidefinite and the integrality constraints. We provide separation routines for several common classes of binary SDPs resulting from combinatorial optimization problems. In the second part of the paper we present a comprehensive application of our approach to the quadratic traveling salesman problem (QTSP). Based on the algebraic connectivity of the directed Hamiltonian cycle, two ISDPs that model the QTSP are introduced. We show that the CG cuts resulting from these formulations contain several well-known families of cutting planes. Numerical results illustrate the practical strength of the CG cuts in our branch-and-cut algorithm, which outperforms alternative ISDP solvers and is able to solve large QTSP instances to optimality.
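For orientation, the classical polyhedral CG cut that this procedure lifts to spectrahedra can be stated as follows (this is the standard textbook definition, not a result specific to the paper):

```latex
% Chvatal-Gomory cut for P = \{x : Ax \le b\}:
% for any multiplier \lambda \ge 0 with \lambda^{\top} A integral,
\[
  \lambda^{\top} A\, x \;\le\; \big\lfloor \lambda^{\top} b \big\rfloor
  \qquad (\lambda \ge 0,\ \lambda^{\top} A \in \mathbb{Z}^{n})
\]
% is valid for every integer point of P; the elementary closure of P is
% the intersection of P with all such cuts.
```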
|
arxiv:2201.10224
|
We present an analysis of the neutral hydrogen and stellar populations of elliptical galaxies in the Tal et al. (2009) sample. Our aim is to test their conclusion that the continuing assembly of these galaxies at z ~ 0 is essentially gas-free and not accompanied by significant star formation. In order to do so, we make use of HI data and line-strength indices available in the literature. We look for direct and indirect evidence of the presence of cold gas during the recent assembly of these objects and analyse its relation to galaxy morphological fine structure. We find that > 25% of ellipticals contain HI at the level of M(HI) > 10^8 M(sun), and that M(HI) is of the order of a few percent of the total stellar mass. Available data are insufficient to establish whether galaxies with a disturbed stellar morphology are more likely to contain HI. However, HI interferometry reveals very disturbed gas morphology/kinematics in all but one of the detected systems, confirming the continuing assembly of many ellipticals but also showing that this is not necessarily gas-free. We also find that all very disturbed ellipticals have a single-stellar-population-equivalent age < 4 Gyr. We interpret this as evidence that ~ 0.5-5% of their stellar mass is contained in a young population formed during the past ~ 1 Gyr. Overall, a large fraction of ellipticals seem to have continued their assembly over the past few Gyr in the presence of a mass of cold gas of the order of 10% of the galaxy stellar mass. This material is now observable as neutral hydrogen and young stars.
|
arxiv:0910.2416
|
We give a particular choice of the higher Eilenberg-MacLane maps by a recursive formula. This choice leads to a simple description of the homotopy operations for simplicial Z/2-algebras.
|
arxiv:math/0411595
|
We investigate the transport of a Fermi gas with unitarity-limited interactions across the superfluid phase transition, probing its response to a direct current (DC) drive through a tunnel junction. As the superfluid critical temperature is crossed from below, we observe the evolution from a highly nonlinear to an ohmic conduction characteristic, associated with the critical breakdown of the Josephson DC current induced by pair condensate depletion. Moreover, we reveal a large and dominant anomalous contribution to resistive currents, which reaches its maximum at the lowest attained temperature, fostered by the tunnel coupling between the condensate and phononic Bogoliubov-Anderson excitations. Increasing the temperature, while the zeroing of supercurrents marks the transition to the normal phase, the conductance drops considerably but remains much larger than that of a normal, uncorrelated Fermi gas tunneling through the same junction. We attribute such enhanced transport to incoherent tunneling of sound modes, which remain weakly damped in the collisional hydrodynamic fluid of unpaired fermions at unitarity.
|
arxiv:2010.00582
|
Direct coupling analysis (DCA) is a now widely used method to leverage statistical information from many similar biological systems to draw meaningful conclusions on each system separately. DCA has been applied with great success to sequences of homologous proteins, and also more recently to whole-genome population-wide sequencing data. We here argue that the use of DCA on the genome scale is contingent on fundamental issues of population genetics. DCA can be expected to yield meaningful results when a population is in the quasi-linkage equilibrium (QLE) phase studied by Kimura and others, but not, for instance, in a phase of clonal competition. We discuss how the exponential (Potts model) distributions emerge in QLE, and compare couplings to correlations obtained in a study of about 3,000 genomes of the human pathogen Streptococcus pneumoniae.
|
arxiv:1808.03478
|
While developing their software, professional object-oriented (OO) software developers keep in their minds an image of the subtyping relation between types in their software. The goal of this paper is to present an observation about the graph of the subtyping relation in Java, namely the observation that, after the addition of generics (and of wildcards in particular) to Java, the graph of the subtyping relation is no longer a simple directed acyclic graph (DAG), as in pre-generics Java, but is rather a fractal. Further, this observation equally applies to other mainstream nominally-typed OO languages (such as C#, C++ and Scala) where generics and wildcards (or some other form of 'variance annotations') are standard features. Accordingly, the shape of the subtyping relation in these OO languages is more complex than a tree or a simple DAG, and indeed is also a fractal. Given the popularity of fractals, the fractal observation may help OO software developers keep a useful and intuitive mental image of their software's subtyping relation, even if it is a little more frightening, and more amazing, than before. With proper support from IDEs, the fractal observation can help OO developers resolve type errors they may find in their code in less time, and with more confidence.
|
arxiv:1411.5166
|
In this paper we consider some insurance policies related to drawdown and drawup events of log-returns for an underlying asset modeled by a spectrally negative geometric Lévy process. We consider four contracts, three of which were introduced in Zhang et al. (2013) for a geometric Brownian motion. The first one is an insurance contract where the protection buyer pays a constant premium until a drawdown of fixed size of the log-returns occurs. In return he/she receives a certain insured amount at the drawdown epoch. The next insurance contract provides protection from any specified drawdown with a drawup contingency. This contract expires early if a certain fixed drawup event occurs prior to the fixed drawdown. The last two contracts are extensions of the previous ones by an additional cancellation feature which allows the investor to terminate the contract earlier. We focus on two problems: calculating the fair premium $p$ for the basic contracts and identifying the optimal stopping rule for the policies with the cancellation feature. To do this we solve some two-sided exit problems related to the drawdown and drawup of spectrally negative Lévy processes, which is of independent mathematical interest. We also heavily rely on the theory of optimal stopping.
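The drawdown and drawup processes invoked here are the standard ones; for a log-return process $X_t$ they can be written as (standard definitions, added for the reader's convenience):

```latex
% Drawdown D_t and drawup U_t of the log-return process X_t:
\[
  D_t \;=\; \sup_{0 \le s \le t} X_s \;-\; X_t ,
  \qquad
  U_t \;=\; X_t \;-\; \inf_{0 \le s \le t} X_s ,
\]
% and the contracts pay out (or expire) at the first passage times of
% D_t and U_t over fixed thresholds.
```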
|
arxiv:1701.01891
|
We present BVI photometry of the globular cluster NGC 6642 using the SOI imager at the SOAR telescope. The colour-magnitude diagrams (CMDs) reach $\approx$ 1.5 mag in V below the main-sequence turn-off. A comparison of the overall sequences, and in particular the red giant branch slope, of NGC 6642 with that of M5 indicates that the two clusters must have a similar metallicity of [Fe/H] $\approx$ -1.3. We also obtain for NGC 6642 a reddening E(B-V) = 0.42 $\pm$ 0.03, and a distance from the Sun of d$_{\odot}$ = 7.2 $\pm$ 0.5 kpc. Therefore NGC 6642 is a moderately metal-poor globular cluster, spatially located in the bulge, at a galactocentric distance of R$_{\rm GC}$ $\approx$ 1.7 kpc. The comparison of CMDs of NGC 6642 with those of M5 shows that there is a very good match of the magnitude difference between turn-off and horizontal branch, suggesting comparable ages. M5 has an age typical of the halo globulars, and consequently NGC 6642 is coeval with the halo. NGC 6642 is a good candidate to be one of the few genuine metal-poor and old {\it bulge} clusters, and might be one of the most ancient fossils in the Galaxy.
|
arxiv:astro-ph/0511608
|
We consider the dynamics of systems with arbitrary friction and diffusion. These include, as a special case, systems for which friction and diffusion are connected by the Einstein fluctuation-dissipation relation, e.g. Brownian motion. We study the limit where friction effects dominate the inertia, i.e. where the mass goes to zero (Smoluchowski-Kramers limit). Using the Itô stochastic integral convention, we show that the limiting effective Langevin equation has different drift fields depending on the relation between friction and diffusion. Alternatively, our results can be cast as different interpretations of stochastic integration in the limiting equation, which can be parametrized by $\alpha \in \mathbb{R}$. Interestingly, in addition to the classical Itô ($\alpha = 0$), Stratonovich ($\alpha = 0.5$) and anti-Itô ($\alpha = 1$) integrals, we show that position-dependent $\alpha = \alpha(x)$, and even stochastic integrals with $\alpha \notin [0, 1]$, arise. Our findings are supported by numerical simulations.
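The conventions mentioned above differ only by a drift correction. Schematically, for a one-dimensional process $dx_t = b(x_t)\,dt + \sigma(x_t)\,dW_t$, the standard conversion formula (included here purely for illustration, not taken from the paper) reads:

```latex
% Relation between the alpha-convention integral and the Ito integral:
\[
  \int_0^t \sigma(x_s) \circ_\alpha \mathrm{d}W_s
  \;=\; \int_0^t \sigma(x_s)\,\mathrm{d}W_s
  \;+\; \alpha \int_0^t \sigma'(x_s)\,\sigma(x_s)\,\mathrm{d}s ,
\]
% so that alpha = 0, 1/2, 1 recover the Ito, Stratonovich and
% anti-Ito (Hanggi-Klimontovich) interpretations, respectively.
```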
|
arxiv:1112.2607
|
Compression is an important topic in computer science which allows us to store a larger amount of data in our data storage. There are several techniques to compress any file. In this manuscript the most important algorithm to compress images, JPEG, will be described, and it will be compared with another method to give good reasons not to use that method on images. To compress text, the best-known encoding technique is Huffman encoding, which will be explained in an exhaustive way. This manuscript will show how to apply a text compression method to images, in particular the method used and the reasons to choose a particular image format over the others. The method studied and analyzed in this manuscript is the Re-Pair algorithm, which is a purely grammar-based compression scheme. At the end, the good results of this application will be shown.
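The Huffman encoding mentioned above can be sketched in a few lines: build a binary tree by repeatedly merging the two least frequent subtrees, then read codewords off the root-to-leaf paths. A minimal sketch (not the manuscript's implementation):

```python
# Minimal Huffman coding sketch: frequent symbols get shorter codewords.
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code table {symbol: bitstring} for the given text."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries are (frequency, tiebreaker, tree); a tree is either a
    # symbol or a (left, right) pair. The tiebreaker keeps comparisons
    # away from the (incomparable) trees.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # two least frequent subtrees...
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))  # ...are merged
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

text = "abracadabra"
codes = huffman_code(text)
encoded = "".join(codes[c] for c in text)
# 'a' occurs most often (5 of 11 symbols), so it gets the shortest codeword.
print(codes["a"], len(encoded))
```

Because the code is prefix-free, decoding is a single left-to-right scan that emits a symbol whenever the accumulated bits match a codeword.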
|
arxiv:1901.10744
|
given a quadratic function $h$ that satisfies a slater condition, yakubovich's s-procedure (or s-lemma) gives a characterization of all other quadratic functions that are copositive with $h$ in a form that is amenable to numerical computations. in this paper we present a deep-rooted connection between the s-procedure and the dual cone calculus formula $(K_1 \cap K_2)^* = K_1^* + K_2^*$, which holds for closed convex cones in $\mathbb{R}^2$. to establish the link with the s-procedure, we generalize the dual cone calculus formula to a situation where $K_1$ is nonclosed, nonconvex and nonconic but exhibits sufficient mathematical resemblance to a closed convex cone. as a result, we obtain a new proof of the s-lemma and an extension to hilbert space kernels.
|
arxiv:1305.2444
|
extensive research has already started on 6g and beyond wireless technologies due to the envisioned new use - cases and potential new requirements for future wireless networks. although a plethora of modern physical layer solutions have been introduced in the last few decades, it is undeniable that a level of saturation has been reached in terms of the available spectrum, adapted modulation / coding solutions and accordingly the maximum capacity. within this context, communications through reconfigurable intelligent surfaces ( riss ), which enable novel and effective functionalities including wave absorption, tuneable anomalous reflection, and reflection phase modification, appear as a potential candidate to overcome the inherent drawbacks of legacy wireless systems. the core idea of riss is the transformation of the uncontrollable and random wireless propagation environment into a reconfigurable communication system entity that plays an active role in forwarding information. in this paper, the well - known multipath fading phenomenon is revisited in mobile wireless communication systems, and novel and unique solutions are introduced from the perspective of riss. the feasibility of eliminating or mitigating the multipath fading effect stemming from the movement of mobile receivers is also investigated by utilizing the riss. it is shown that rapid fluctuations in the received signal strength due to the doppler effect can be effectively reduced by using the real - time tuneable riss. it is also proven that the multipath fading effect can be totally eliminated when all reflectors in a propagation environment are coated with riss, while even a few riss can significantly reduce the doppler spread as well as the deep fades in the received signal for general propagation environments with several interacting objects.
|
arxiv:1912.04080
|
quantum phases of matter are characterized by the underlying correlations of the many - body system. although this is typically captured by a local order parameter, it has been shown that a broad class of many - body systems possesses a hidden non - local order. in the case of bosonic mott insulators, the ground state properties are governed by quantum fluctuations in the form of correlated particle - hole pairs that lead to the emergence of a non - local string order in one dimension. using high - resolution imaging of low - dimensional quantum gases in an optical lattice, we directly detect these pairs with single - site and single - particle sensitivity and observe string order in the one - dimensional case.
|
arxiv:1108.3317
|
lattice qcd is the only non - perturbative method based uniquely on the first principles of qcd. after a very simple introduction to the principles of lattice qcd, i discuss its present limitations and the type of processes it can deal with. then i present some striking results in the light and heavy quarks sectors. finally i try to guess the prospects.
|
arxiv:hep-ph/9504271
|
we study the influence of the magnetic field on photon emission from the quark-gluon plasma created in $AA$ collisions. we find that even for very optimistic assumptions about the magnitude of the magnetic field in noncentral $AA$ collisions, the effect of the magnetic field is very small.
|
arxiv:1607.04314
|
deep learning based on deep neural networks of various structures and architectures has been powerful in many practical applications, but it lacks enough theoretical verification. in this paper, we consider a family of deep convolutional neural networks applied to approximate functions on the unit sphere $\mathbb{S}^{d-1}$ of $\mathbb{R}^d$. our analysis presents rates of uniform approximation when the approximated function lies in the sobolev space $W^r_\infty(\mathbb{S}^{d-1})$ with $r > 0$ or takes an additive ridge form. our work verifies theoretically the modelling and approximation ability of deep convolutional neural networks followed by downsampling and one or two fully connected layers. the key idea of our spherical analysis is to use the inner product form of the reproducing kernels of the spaces of spherical harmonics and then to apply convolutional factorizations of filters to realize the generated linear features.
|
arxiv:2007.14285
|
datasets for data - to - text generation typically focus either on multi - domain, single - sentence generation or on single - domain, long - form generation. in this work, we cast generating wikipedia sections as a data - to - text generation task and create a large - scale dataset, wikitablet, that pairs wikipedia sections with their corresponding tabular data and various metadata. wikitablet contains millions of instances, covering a broad range of topics, as well as a variety of flavors of generation tasks with different levels of flexibility. we benchmark several training and decoding strategies on wikitablet. our qualitative analysis shows that the best approaches can generate fluent and high quality texts but they struggle with coherence and factuality, showing the potential for our dataset to inspire future work on long - form generation.
|
arxiv:2012.14919
|
we discuss the main uncertainties affecting estimates of small-scale fluctuations due to extragalactic sources in the planck surveyor frequency bands. conservative estimates allow us to confidently conclude that, in the frequency range 100--200 ghz, the contaminating effect of extragalactic sources is well below the expected anisotropy level of the cosmic microwave background (cmb), down to angular scales of at least $\simeq 10'$. hence, an accurate subtraction of foreground fluctuations is not critical for the determination of the cmb power spectrum up to multipoles $\ell \simeq 1000$. in any case, planck's wide frequency coverage will allow careful control of foreground contributions. on the other hand, the all-sky surveys at 9 frequencies, spanning the range 30--900 ghz, will be unique in providing complete samples comprising from several hundreds to many thousands of extragalactic sources, selected in an essentially unexplored frequency interval. new classes of sources may be revealed in these data.
|
arxiv:astro-ph/9812069
|
the excitation of internal gravity waves by an entropy bubble oscillating in an isothermal atmosphere is investigated using direct two - dimensional numerical simulations. the oscillation field is measured by a projection of the simulated velocity field onto the anelastic solutions of the linear eigenvalue problem for the perturbations. this facilitates a quantitative study of both the spectrum and the amplitudes of excited g - modes.
|
arxiv:astro-ph/0311094
|
i propose that stiffness may be defined and quantified for nonlinear systems using lyapunov exponents, and demonstrate the relationship that exists between stiffness and the fractal dimension of a strange attractor : that stiff chaos is thin chaos.
|
arxiv:nlin/0007031
|
vertical symbolic regression ( vsr ) recently has been proposed to expedite the discovery of symbolic equations with many independent variables from experimental data. vsr reduces the search spaces following the vertical discovery path by building from reduced - form equations involving a subset of independent variables to full - fledged ones. proved successful by many symbolic regressors, deep neural networks are expected to further scale up vsr. nevertheless, directly combining vsr with deep neural networks will result in difficulty in passing gradients and other engineering issues. we propose vertical symbolic regression using deep policy gradient ( vsr - dpg ) and demonstrate that vsr - dpg can recover ground - truth equations involving multiple input variables, significantly beyond both deep reinforcement learning - based approaches and previous vsr variants. our vsr - dpg models symbolic regression as a sequential decision - making process, in which equations are built from repeated applications of grammar rules. the integrated deep model is trained to maximize a policy gradient objective. experimental results demonstrate that our vsr - dpg significantly outperforms popular baselines in identifying both algebraic equations and ordinary differential equations on a series of benchmarks.
|
arxiv:2402.00254
|
the georgia institute of technology (commonly referred to as georgia tech, gt, and simply tech or the institute) is a public research university and institute of technology in atlanta, georgia, united states. established in 1885, it has the largest student enrollment of the university system of georgia institutions and has satellite campuses in savannah, georgia and metz, france. the school was founded as the georgia school of technology as part of reconstruction efforts to build an industrial economy in the southern united states after the civil war. initially, it offered only a degree in mechanical engineering. by 1901, its curriculum had expanded to include electrical, civil, and chemical engineering. in 1948, the school changed its name to reflect its evolution from a trade school to a technical institute and research university. georgia tech is organized into seven colleges with about 31 departments and academic units. it emphasizes the academic fields of science and technology. georgia tech's $5.3 billion economic impact for fiscal year 2023 led all public institutions in the state. georgia tech fields eight men's and seven women's sports teams; these compete in ncaa division i athletics and have won five national championships. the university is a member of the atlantic coast conference. = = history = = = = = establishment = = = the idea of a technology school in georgia was introduced in 1865 during the reconstruction period. two former confederate officers, major john fletcher hanson (an industrialist) and nathaniel edwin harris (a politician and eventually governor of georgia), who had become prominent citizens in the town of macon, georgia, after the civil war, believed that the south needed to improve its technology to compete with the north's industrialization. because the american south of that era was mainly populated by agricultural workers and few technical developments were occurring, they proposed to establish a technology school.
in 1882, the georgia state legislature authorized a committee, led by harris, to visit the northeast to learn how technology schools worked. they were impressed by the polytechnic educational models developed at the massachusetts institute of technology and the worcester county free institute of industrial science (now worcester polytechnic institute). the committee recommended adapting the worcester model, which stressed a combination of "theory and practice", the "practice" component including student employment and production of consumer items to generate revenue for the school. on october 13, 1885, georgia governor henry d. mcdaniel signed the bill to create and fund the new school. in 1887, atlanta pioneer richard peters donated to the state 4 acres (1.6 ha) of the site of a failed garden suburb called peters park.
|
https://en.wikipedia.org/wiki/Georgia_Tech
|
human-robot walking with prosthetic legs and exoskeletons, especially over complex terrains such as stairs, remains a significant challenge. egocentric vision has the unique potential to detect the walking environment prior to physical interactions, which can improve transitions to and from stairs. this motivated us to create the stairnet initiative to support the development of new deep learning models for visual sensing and recognition of stairs, with an emphasis on lightweight and efficient neural networks for onboard real-time inference. in this study, we present an overview of the development of our large-scale dataset with over 515,000 manually labeled images, as well as our development of different deep learning models (e.g., 2d and 3d cnn, hybrid cnn and lstm, and vit networks) and training methods (e.g., supervised learning with temporal data and semi-supervised learning with unlabeled images) using our new dataset. we consistently achieved high classification accuracy (i.e., up to 98.8%) with different designs, offering trade-offs between model accuracy and size. when deployed on mobile devices with gpu and npu accelerators, our deep learning models achieved inference speeds up to 2.8 ms. we also deployed our models on custom-designed cpu-powered smart glasses. however, limitations in the embedded hardware yielded slower inference speeds of 1.5 seconds, presenting a trade-off between human-centered design and performance. overall, we showed that stairnet can be an effective platform to develop and study new visual perception systems for human-robot locomotion with applications in exoskeleton and prosthetic leg control.
|
arxiv:2310.20666
|
the dynamics in a confined turbulent convection flow is dominated by multiple long-lived macroscopic circulation states, which are visited subsequently by the system in a markov-type hopping process. in the present work, we analyze the short transition paths between these subsequent macroscopic system states by a data-driven learning algorithm that extracts the low-dimensional transition manifold and the related new coordinates, which we term collective variables, in the state space of the complex turbulent flow. we therefore transfer and extend concepts for conformation transitions in stochastic microscopic systems, such as in the dynamics of macromolecules, to a deterministic macroscopic flow. our analysis is based on long-term direct numerical simulation trajectories of turbulent convection in a closed cubic cell at a prandtl number $Pr = 0.7$ and rayleigh numbers $Ra = 10^6$ and $10^7$ for a time lag of $10^5$ convective free-fall time units. the simulations resolve vortices and plumes of all physically relevant scales, resulting in a state space spanned by more than 3.5 million degrees of freedom. the transition dynamics between the large-scale circulation states can be captured by the transition manifold analysis with only two collective variables, which implies a reduction of the data dimension by a factor of more than a million. our method demonstrates that cessations and subsequent reversals of the large-scale flow are unlikely in the present setup and thus paves the way to the development of efficient reduced-order models of the macroscopic complex nonlinear dynamical system.
|
arxiv:2304.02966
|
we show that 3 - fold terminal flips and divisorial contractions to a curve may be factored by a sequence of weighted blow - ups, flops, blow - downs to a locally complete intersection curve in a smooth 3 - fold or divisorial contractions to a point.
|
arxiv:0910.4209
|
a universal quantum computing scheme, with a universal set of logical gates, is proposed based on networks of 1d quantum systems. the encoding of information is in terms of universal features of gapped phases, for which effective field theories such as sine - gordon field theory can be employed to describe a qubit. primary logical gates are from twist, pump, glue, and shuffle operations that can be realized in principle by tuning parameters of the systems. our scheme demonstrates the power of 1d quantum systems for robust quantum computing.
|
arxiv:1901.04551
|
we study the first passage dynamics of an ageing stochastic process in the continuous time random walk (ctrw) framework. in such ctrw processes the test particle performs a random walk, in which successive steps are separated by random waiting times distributed in terms of the waiting time probability density function $\psi(t) \simeq t^{-1-\alpha}$ ($0 \le \alpha \le 2$). an ageing stochastic process is defined by the explicit dependence of its dynamic quantities on the ageing time $t_a$, the time elapsed between its preparation and the start of the observation. subdiffusive ageing ctrws describe systems such as charge carriers in amorphous semiconductors, tracer dispersion in geological and biological systems, or the dynamics of blinking quantum dots. we derive the exact forms of the first passage time density for an ageing subdiffusive ctrw in the semi-infinite, confined, and biased case, finding different scaling regimes for weakly, intermediately, and strongly aged systems: these regimes, with different scaling laws, are also found when the scaling exponent is in the range $1 < \alpha < 2$, for sufficiently long $t_a$. we compare our results with the ageing motion of a test particle in a quenched energy landscape. we test our theoretical results in the quenched landscape against simulations: only when the bias is strong enough do the correlations from returning to previously visited sites become insignificant and the results approach the ageing ctrw results. with small or without bias, the ageing effects disappear and a change in the exponent compared to the case of a completely annealed landscape can be found, reflecting the build-up of correlations in the quenched landscape.
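the subdiffusive ctrw underlying the abstract is easy to simulate; the sketch below (an illustration, not from the paper, with arbitrary parameters and without ageing) draws pareto waiting times with $\psi(t) \sim t^{-1-\alpha}$ and unit $\pm 1$ jumps, and checks that the mean-squared displacement grows like $t^\alpha$ rather than linearly.

```python
import numpy as np

def ctrw_msd(alpha=0.5, times=(100.0, 1000.0), n_walkers=4000, seed=1):
    """Mean-squared displacement of a CTRW with Pareto waiting times
    psi(t) ~ t^(-1-alpha), t >= 1, and unit +-1 jumps; for subdiffusion
    (alpha < 1) the MSD grows like t**alpha instead of t."""
    rng = np.random.default_rng(seed)
    out = []
    for T in times:
        acc = 0.0
        for _ in range(n_walkers):
            t, x = 0.0, 0
            while True:
                # inverse-CDF Pareto draw: t = (1-u)^(-1/alpha)
                t += (1.0 - rng.random()) ** (-1.0 / alpha)
                if t > T:
                    break
                x += 1 if rng.random() < 0.5 else -1
            acc += x * x
        out.append(acc / n_walkers)
    return out

msd100, msd1000 = ctrw_msd()
```

for $\alpha = 0.5$, increasing the observation time tenfold should roughly triple the msd ($10^{0.5} \approx 3.16$), whereas normal diffusion would multiply it by ten.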
|
arxiv:1503.09028
|
the first part of this paper describes the support of top graded local cohomology modules. as a corollary one obtains a simple criterion for the vanishing of these modules and also the fact that they have finitely many minimal primes. the second part of this paper constructs examples of cohomological hilbert functions which are not of polynomial type.
|
arxiv:math/0209350
|
we report on a new test of lorentz invariance performed by comparing the resonance frequencies of two orthogonal cryogenic optical resonators subject to earth's rotation over 1 year. for a possible anisotropy of the speed of light $c$, we obtain $2.6 \pm 1.7$ parts in $10^{15}$. within the robertson-mansouri-sexl test theory, this implies an isotropy violation parameter $\beta - \delta - 1/2$ of $-2.2 \pm 1.5$ parts in $10^9$, about three times lower than the best previous result. within the general extension of the standard model of particle physics, we extract limits on 7 parameters at accuracies down to a part in $10^{15}$, improving the best previous result by about two orders of magnitude.
|
arxiv:physics/0305117
|
technische universität berlin (tu berlin; also known as berlin institute of technology and technical university of berlin, although officially the name should not be translated) is a public research university located in berlin, germany. it was the first german university to adopt the name "technische universität" (university of technology). the university alumni and staff include several us national academies members, two national medal of science laureates, the creator of the first fully functional programmable (electromechanical) computer, konrad zuse, and ten nobel prize laureates. tu berlin is a member of tu9, an incorporated society of the largest and most notable german institutes of technology, and of the top international managers in engineering network, which allows for student exchanges between leading engineering schools. it belongs to the conference of european schools for advanced engineering education and research. the tu berlin is home to two innovation centers designated by the european institute of innovation and technology. the university is labeled "the entrepreneurial university" ("die gründerhochschule") by the federal ministry for economic affairs and energy. the university is notable for having been the first to offer a degree in industrial engineering and management (wirtschaftsingenieurwesen). the university designed the degree in response to requests by industrialists for graduates with the technical and management training to run a company. first offered in winter term 1926/27, it is one of the oldest programmes of its kind. tu berlin has one of the highest proportions of international students in germany, almost 27% in 2019. in addition, tu berlin is part of the berlin university alliance and has been conferred the title of "university of excellence" under, and receives funding from, the german universities excellence initiative.
= = history = = on 1 april 1879, the königlich technische hochschule zu berlin (en: "royal technical academy of berlin") came into being through a merger of the königliche gewerbeakademie zu berlin (en: "royal trade academy", founded in 1827) and the königliche bauakademie zu berlin (en: "royal building academy", founded in 1799), two predecessor institutions of the prussian state. in 1899, the königlich technische hochschule zu berlin was the first polytechnic in germany to award doctorates, as a standard degree for its graduates, in addition to diplomas, thanks to professor alois riedler and adolf slaby, chairman of the association of
|
https://en.wikipedia.org/wiki/Technische_Universität_Berlin
|
the exceptional electronic, optical and chemical properties of two - dimensional materials strongly depend on the 3d atomic structure and crystal defects. using re - doped mos2 as a model, here we develop scanning atomic electron tomography ( saet ) to determine the 3d atomic positions and crystal defects such as dopants, vacancies and ripples with a precision down to 4 picometers. we measure the 3d bond distortion and local strain tensor induced by single dopants for the first time. by directly providing experimental 3d atomic coordinates to density functional theory ( dft ), we obtain more truthful electronic band structures than those derived from conventional dft calculations relying on relaxed 3d atomic models, which is confirmed by photoluminescence measurements. we anticipate that saet is not only generally applicable to the determination of the 3d atomic coordinates of 2d materials, heterostructures and thin films, but also could transform ab initio calculations by using experimental 3d atomic coordinates as direct input to better predict and discover new physical, chemical and electronic properties.
|
arxiv:1901.00633
|
quantum many-body dynamics generically results in increasing entanglement that eventually leads to thermalization of local observables. this makes the exact description of the dynamics complex despite the apparent simplicity of (high-temperature) thermal states. for accurate but approximate simulations one needs a way to keep track of essential (quantum) information while discarding the inessential. to this end, we first introduce the concept of the information lattice, which supplements the physical spatial lattice with an additional dimension and where a local hamiltonian gives rise to a well-defined, locally conserved von neumann information current. this provides a convenient and insightful way of capturing the flow, through time and space, of information during quantum time evolution, and gives a distinct signature of when local degrees of freedom decouple from long-range entanglement. as an example, we describe such decoupling of local degrees of freedom for the mixed-field transverse ising model. building on this, we secondly construct algorithms to time-evolve sets of local density matrices without any reference to a global state. with the notion of information currents, we can motivate algorithms based on the intuition that information, for statistical reasons, flows from small to large scales. using this guiding principle, we construct an algorithm that, at worst, shows two-digit convergence in time evolutions up to very late times for a diffusion process governed by the mixed-field transverse ising hamiltonian. while we focus on dynamics in 1d with nearest-neighbor hamiltonians, the algorithms do not essentially rely on these assumptions and can in principle be generalized to higher dimensions and more complicated hamiltonians.
|
arxiv:2105.11206
|
neutral hydrogen is ubiquitous, absorbing and emitting 21 cm radiation throughout much of the universe ' s history. active sources of perturbations, such as cosmic strings, would generate simultaneous perturbations in the distribution of neutral hydrogen and in the cosmic microwave background ( cmb ) radiation from recombination. moving strings would create wakes leading to 21 cm brightness fluctuations, while also perturbing cmb light via the gott - kaiser - stebbins effect. this would lead to spatial correlations between the 21 cm and cmb anisotropies. passive sources, like inflationary perturbations, predict no cross correlations prior to the onset of reionization. thus, observation of any cross correlation between cmb and 21 cm radiation from dark ages would constitute evidence for new physics. we calculate the cosmic string induced correlations between cmb and 21 cm and evaluate their observability.
|
arxiv:1003.2214
|
in this paper, we establish sharp two-sided estimates for the transition densities of relativistic stable processes [i.e., for the heat kernels of the operators $m - (m^{2/\alpha} - \Delta)^{\alpha/2}$] in $C^{1,1}$ open sets. here $m > 0$ and $\alpha \in (0, 2)$. the estimates are uniform in $m \in (0, M]$ for each fixed $M > 0$. letting $m \downarrow 0$, we recover the dirichlet heat kernel estimates for $\Delta^{\alpha/2} := -(-\Delta)^{\alpha/2}$ in $C^{1,1}$ open sets obtained in [14]. sharp two-sided estimates are also obtained for the green functions of relativistic stable processes in bounded $C^{1,1}$ open sets.
|
arxiv:0908.1509
|
we show that quantum information can be encoded into entangled states of multiple indistinguishable particles in such a way that any inertial observer can prepare, manipulate, or measure the encoded state independent of their lorentz reference frame. such relativistically invariant quantum information is free of the difficulties associated with encoding into spin or other degrees of freedom in a relativistic context.
|
arxiv:quant-ph/0403014
|
we extend the new perturbation formula of equilibrium states by hastings to kms states of general $W^*$-dynamical systems.
|
arxiv:1902.05734
|
traditional text classification approaches often require a good amount of labeled data, which is difficult to obtain, especially in restricted domains or less widespread languages. this lack of labeled data has led to the rise of low - resource methods, that assume low data availability in natural language processing. among them, zero - shot learning stands out, which consists of learning a classifier without any previously labeled data. the best results reported with this approach use language models such as transformers, but fall into two problems : high execution time and inability to handle long texts as input. this paper proposes a new model, zeroberto, which leverages an unsupervised clustering step to obtain a compressed data representation before the classification task. we show that zeroberto has better performance for long inputs and shorter execution time, outperforming xlm - r by about 12 % in the f1 score in the folhauol dataset. keywords : low - resource nlp, unlabeled data, zero - shot learning, topic modeling, transformers.
|
arxiv:2201.01337
|
the orlicz $(\ell_2, \ell_1)$-mixed inequality states that $$\left(\sum_{j_1=1}^{n}\left(\sum_{j_2=1}^{n}\left\vert A(e_{j_1}, e_{j_2})\right\vert\right)^{2}\right)^{\frac{1}{2}} \leq \sqrt{2}\,\left\Vert A\right\Vert$$ for all bilinear forms $A \colon \mathbb{K}^n \times \mathbb{K}^n \rightarrow \mathbb{K}$ and all positive integers $n$, where $\mathbb{K}^n$ denotes $\mathbb{R}^n$ or $\mathbb{C}^n$ endowed with the supremum norm. in this paper we extend this inequality to multilinear forms, with $\mathbb{K}^n$ endowed with $\ell_p$ norms for all $p \in [1, \infty]$.
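the bilinear inequality is easy to sanity-check numerically (a sketch, not part of the paper): for real matrices the sup-norm of the form over the $\ell_\infty^n$ unit ball is attained at sign vectors, so for small $n$ it can be computed exactly by enumeration, and the ratio LHS/$\|A\|$ should never exceed $\sqrt{2}$.

```python
import itertools
import numpy as np

def mixed_ratio(A):
    """LHS / ||A|| for the (l2,l1)-mixed inequality.

    LHS = ( sum_j1 ( sum_j2 |a_{j1 j2}| )^2 )^(1/2); for real scalars
    ||A|| = max |x^T A y| over sign vectors x, y, since a bilinear form
    attains its sup-norm at extreme points of the l_inf unit ball.
    """
    n = A.shape[0]
    lhs = np.sqrt((np.abs(A).sum(axis=1) ** 2).sum())
    verts = [np.array(s, dtype=float)
             for s in itertools.product((-1, 1), repeat=n)]
    norm = max(abs(x @ A @ y) for x in verts for y in verts)
    return lhs / norm

rng = np.random.default_rng(0)
ratios = [mixed_ratio(rng.standard_normal((3, 3))) for _ in range(100)]
worst = max(ratios)
```

random gaussian matrices rarely come close to the extremal constant, but none of them can violate the $\sqrt{2}$ bound.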
|
arxiv:2007.00037
|
this paper asks whether alfvén waves (aw) can be produced in laboratory-scale liquid metal experiments, \emph{i.e.} at low magnetic reynolds number ($R\!m$). aw are incompressible waves propagating along magnetic fields, typically found in geo- and astrophysical systems. until now, only faint linear waves have been produced experimentally in liquid metals because of the large magnetic dissipation they undergo when $R\!m \ll 1$. yet, controlling laboratory aw could emulate such far remote processes as anomalous heating in the solar corona, oscillations of the earth's inner core or turbulence in the solar wind. to answer this question, we force aw with an ac electric current in a liquid metal channel in a transverse magnetic field. we derive a wave-bearing extension of the usual low-$R\!m$ mhd approximation to identify two linear regimes: the purely diffusive regime exists when $N_\omega$, the ratio of the oscillation period to the timescale of diffusive two-dimensionalisation by the lorentz force, is small. the propagative regime is governed by the ratio of the forcing period to the aw propagation timescale, which we call the jameson number $J\!a$ after jameson (1964), jfm. in this regime, aw are dissipative and dispersive as they propagate more slowly where velocity gradients are higher. both regimes are recovered in the flowcube experiment, in excellent agreement with the model up to $J\!a \lesssim 0.85$, but near the $J\!a = 1$ resonance high-amplitude waves become clearly nonlinear. hence, in electrically driving aw, we were able to produce some of the propagative, diffusive and nonlinear processes of astro- and geophysical aw.
|
arxiv:2405.04276
|
some general properties of extensive air showers are discussed. the main focus is put on the longitudinal development, in particular the energy flow, and on the lateral distribution of different air shower components. the intention of the paper is to provide a basic introduction to the subject rather than a comprehensive review.
|
arxiv:astro-ph/0402300
|
we give a relation between the radius and the dimension in which the asymptotic formula in the waring problem holds in a multiplicative and dimension - free fashion.
|
arxiv:2203.09273
|
we present a comparison study of state - of - the - art classical optimisation methods to a d - wave 2000q quantum annealer for the planning of earth observation missions. the problem is to acquire high value images while obeying the attitude manoeuvring constraint of the satellite. in order to investigate close to real - world problems, we created benchmark problems by simulating realistic scenarios. our results show that a tuned quantum annealing approach can run faster than a classical exact solver for some of the problem instances. moreover, we find that the solution quality of the quantum annealer is comparable to the heuristic method used operationally for small problem instances, but degrades rapidly due to the limited precision of the quantum annealer.
|
arxiv:2006.09724
|
in the present paper, cosmological solutions and their stability are studied in the framework of f(r) hořava-lifshitz gravity. the perturbations around general spatially flat frw solutions are analyzed, and it is shown that the stability of those solutions depends on the kind of theory, i.e. on the form of the action f(r), as well as on the parameters contained in any hořava-lifshitz theory due to the breaking of lorentz invariance. the (in)stability of a given cosmic solution can restrict the models and gives new observational predictions, and can give a natural explanation of the end of the inflation and radiation/matter phases. an explicit example of f(r) is studied, and it is shown that the instability can produce the transition between the different epochs of the universe's history.
|
arxiv:1011.2090
|
Zero forcing is a deterministic iterative graph coloring process in which vertices are colored either blue or white, and in every round, any blue vertex that has a single white neighbor forces that white vertex to become blue. Here we study probabilistic zero forcing, where blue vertices have a non-zero probability of forcing each white neighbor to become blue. We explore the propagation time for probabilistic zero forcing on the Erd\H{o}s-R\'enyi random graph $G(n,p)$ when we start with a single vertex colored blue. We show that when $p = \log^{-o(1)} n$, then with high probability it takes $(1+o(1)) \log_2 \log_2 n$ rounds for all the vertices in $G(n,p)$ to become blue, and when $\log n / n \ll p \leq \log^{-o(1)} n$, then with high probability it takes $\Theta(\log(1/p))$ rounds.
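As an illustration of the process described above, the sketch below simulates a simplified variant of probabilistic zero forcing on $G(n,p)$ starting from one blue vertex. In this simplified version each blue vertex forces each white neighbour independently with a fixed probability `p_force`; in the paper's actual model, the force probability depends on the number of blue neighbours.

```python
import random

def pzf_rounds(n, p_edge, p_force, seed=0, max_rounds=None):
    """Probabilistic zero forcing on G(n, p_edge) from one blue vertex.

    Simplified variant: each round, every blue vertex independently
    forces each white neighbour with fixed probability p_force.
    Returns the number of rounds until all vertices are blue,
    or None if the cap is reached (e.g. on a disconnected graph).
    """
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u in range(n):                 # sample the Erdos-Renyi graph
        for v in range(u + 1, n):
            if rng.random() < p_edge:
                adj[u].append(v)
                adj[v].append(u)
    blue, rounds = {0}, 0
    cap = max_rounds if max_rounds is not None else 10 * n
    while len(blue) < n and rounds < cap:
        newly = {v for u in blue for v in adj[u]
                 if v not in blue and rng.random() < p_force}
        blue |= newly
        rounds += 1
    return rounds if len(blue) == n else None

# With p_edge = p_force = 1 the process is deterministic BFS on a
# clique: every vertex is forced in a single round.
print(pzf_rounds(20, 1.0, 1.0))  # 1
```

Repeating the simulation over many seeds for growing $n$ gives an empirical view of the doubly logarithmic propagation time stated in the abstract.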
|
arxiv:1909.06568
|
Using first-principles density functional calculations, we study the electronic and magnetic properties of the ferromagnetic insulating double-perovskite compound La2NiMnO6, which has been reported to exhibit an interesting magnetic-field-sensitive dielectric anomaly as a function of temperature. Our study reveals the existence of very soft infrared-active phonons that couple strongly with spins at the Ni and Mn sites through modification of the superexchange interaction. We suggest that these modes are the origin of the observed dielectric anomaly in La2NiMnO6.
|
arxiv:0805.1112
|
The identification of topological superconductors usually involves searching for in-gap modes that are protected by topology. However, in current experimental settings, smoking-gun evidence of these in-gap modes is still lacking. In this work, we propose to distinguish between two-dimensional conventional s-wave and topological p-wave superconductors by above-gap transport signatures. Our method utilizes the emergence of Tomasch oscillations of quasiparticles in a junction consisting of a superconductor sandwiched between two metallic leads. We demonstrate that the behavior of the oscillations in conductance as a function of the interface barriers provides a distinctive signature for s-wave and p-wave superconductors. Specifically, the oscillations become weaker as the barrier strength increases in s-wave superconductors, while they become more pronounced in p-wave superconductors, which we prove to be a direct manifestation of the pairing symmetries. Our method opens a new route for identifying topological superconductors through above-gap transport.
|
arxiv:2305.09530
|
Mass and other conserved Noether charges are discussed for solutions of gravity theories with locally anti-de Sitter asymptotics in 2n dimensions. The action is supplemented with a boundary term whose purpose is to guarantee that it reaches an extremum on the classical solutions, provided the spacetime is locally AdS at the boundary. It is also shown that if spacetime is locally AdS at spatial infinity, the conserved charges are finite and properly normalized without requiring subtraction of a reference background. In this approach, Noether charges associated to Lorentz and diffeomorphism invariance vanish identically for constant curvature spacetimes. The case of zero cosmological constant is obtained as a limit of AdS, where $\Lambda$ plays the role of a regulator.
|
arxiv:hep-th/9912045
|
For a normal subgroup $N$ of the free group $F_d$ with at least two generators, we introduce the radial limit set $\Lambda_r(N,\Phi)$ of $N$ with respect to a graph directed Markov system $\Phi$ associated to $F_d$. These sets are shown to provide fractal models of radial limit sets of normal subgroups of Kleinian groups of Schottky type. Our main result states that if $\Phi$ is symmetric and linear, then we have that $\dim_H(\Lambda_r(N,\Phi)) = \dim_H(\Lambda_r(F_d,\Phi))$ if and only if the quotient group $F_d/N$ is amenable, where $\dim_H$ denotes the Hausdorff dimension. This extends a result of Brooks for normal subgroups of Kleinian groups to a large class of fractal sets. Moreover, we show that if $F_d/N$ is non-amenable then $\dim_H(\Lambda_r(N,\Phi)) > \dim_H(\Lambda_r(F_d,\Phi))/2$, which extends results by Falk and Stratmann and by Roblin.
|
arxiv:1106.0026
|
We present a refined version of the anomaly awareness framework for enhancing unsupervised anomaly detection. Our approach introduces minimal supervision into variational autoencoders (VAEs) through a two-stage training strategy: the model is first trained in an unsupervised manner on background data, and then fine-tuned using a small sample of labeled anomalies to encourage larger reconstruction errors for anomalous samples. We validate the method across diverse domains, including the MNIST dataset with synthetic anomalies, network intrusion data from the CICIDS benchmark, collider physics data from the LHCO2020 dataset, and simulated events from the Standard Model Effective Field Theory (SMEFT). The latter provides a realistic example of subtle kinematic deviations in Higgs boson production. In all cases, the model demonstrates improved sensitivity to unseen anomalies, achieving better separation between normal and anomalous samples. These results indicate that even limited anomaly information, when incorporated through targeted fine-tuning, can substantially improve the generalization and performance of unsupervised models for anomaly detection.
|
arxiv:2504.11520
|
Here we explore the possibility of precise time-keeping in quantum systems using athermal resources. We show that quantum-measurement-engineered reservoirs can be used as athermal resources to drive the ticks of a quantum clock. Two- and three-level quantum systems act as transducers in our model, converting the quantum measurement induced noise to produce a series of ticks. The ticking rate of the clock is maximized when the measured observable maximally non-commutes with the clock's Hamiltonian. We use the large deviation principle to characterize the statistics of observed ticks within a given time period and show that it can be sub-Poissonian -- quantified by Mandel's Q parameter -- alluding to the quantum nature of the clock. We discuss the accuracy and efficiency of the clock, and extend our framework to include hybrid quantum clocks fueled by both measurements and thermal resources. We make comparisons to related recent proposals for quantum clocks, and discuss alternate device implementations harvesting the quantum-measurement-engineered non-equilibrium conditions, beyond the clock realization.
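Mandel's Q parameter mentioned above is $Q = (\mathrm{Var}(N) - \langle N\rangle)/\langle N\rangle$, so $Q < 0$ signals sub-Poissonian tick statistics. A minimal sketch using synthetic counting data (not derived from the paper's clock model):

```python
import random

def mandel_q(counts):
    """Mandel's Q = (Var(N) - <N>) / <N> for a list of tick counts.
    Q = 0: Poissonian; Q < 0: sub-Poissonian; Q > 0: super-Poissonian."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return (var - mean) / mean

rng = random.Random(1)
# Binomial(40, 0.5) counts have Var = np(1-p) < mean = np,
# so Q should come out near -p = -0.5 (sub-Poissonian).
binom = [sum(rng.random() < 0.5 for _ in range(40)) for _ in range(5000)]
print(round(mandel_q(binom), 2))
```

A Poisson-distributed counting record would instead give Q close to zero, which makes the binomial case a handy sanity check for sub-Poissonian behaviour.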
|
arxiv:2207.07909
|
Gaia Data Release 3 (GDR3) contains a wealth of information to advance our knowledge of stellar physics. In these lecture notes we introduce the data products from GDR3 that can be exploited by the stellar physics community. Then we visit different regions of the HR diagram, discuss the open scientific questions, and describe how GDR3 can help advance each particular topic. Specific regions include hot OB and A type stars, FGK main sequence, giants and variable sources, low mass stars, and ultra-cool dwarfs. Examples of scientific exploitation are also provided. These lecture notes are accompanied by a 3-hour lecture presentation and a 3-hour practical session that are publicly available on the website of the Ecole Evry Schatzman 2023: Stellar Physics with Gaia, https://ees2023.sciencesconf.org/, see lectures and hands-on work.
|
arxiv:2504.17361
|
Recent advances in multimodal imaging acquisition techniques have allowed us to measure different aspects of brain structure and function. Multimodal fusion, such as linked independent component analysis (LICA), is popularly used to integrate complementary information. However, it has suffered from missing data, which commonly occur in neuroimaging data. Therefore, in this paper, we propose a full information LICA algorithm (FI-LICA) to handle the missing data problem during multimodal fusion under the LICA framework. Built upon complete cases, our method employs the principle of full information and utilizes all available information to recover the missing latent information. Our simulation experiments showed the ideal performance of FI-LICA compared to current practices. Further, we applied FI-LICA to multimodal data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, showcasing better performance in classifying the current diagnosis and in predicting the AD transition of participants with mild cognitive impairment (MCI), thereby highlighting the practical utility of our proposed method.
|
arxiv:2406.18829
|
While language competition models of diachronic language shift are increasingly sophisticated, drawing on sociolinguistic components like variable language prestige, distance from language centers and intermediate bilingual transitionary populations, in one significant way they fall short. They fail to consider contact-based outcomes resulting in mixed language practices, e.g. outcome scenarios such as creoles or unmarked code switching as an emergent communicative norm. On these lines something very interesting is uncovered in India, where traditionally there have been monolingual Hindi speakers and Hindi/English bilinguals, but virtually no monolingual English speakers. While the Indian census data report a sharp increase in the proportion of Hindi/English bilinguals, we argue that the number of Hindi/English bilinguals in India is inaccurate, given a new class of urban individuals speaking a mixed lect of Hindi and English, popularly known as "Hinglish". Based on predator-prey and sociolinguistic theories, salient local ecological factors and the rural-urban divide in India, we propose a new mathematical model of interacting monolingual Hindi speakers, Hindi/English bilinguals and Hinglish speakers. The model yields globally asymptotically stable states of coexistence, as well as bilingual extinction. To validate our model, sociolinguistic data from different Indian classes are contrasted with census reports: we see that purported urban Hindi/English bilinguals are unable to maintain fluent Hindi speech and instead produce Hinglish, whereas rural speakers evidence monolingual Hindi. Thus we present evidence for the first time where an unrecognized mixed lect involving English but not "English" has possibly taken over a sizeable fraction of a large global population.
|
arxiv:1406.4824
|
Comment: Monitoring networked applications with incremental quantile estimation [arXiv:0708.0302]
|
arxiv:0708.0317
|
We prove the existence of a family of integrable deformations of $\mathbb{Z}_N$-coset models in two dimensions. Our approach uses and generalises the method of auxiliary fields that was recently introduced for the principal chiral model by Ferko and Smith.
|
arxiv:2409.04523
|
Several techniques have been employed for the direct visualization of cytoskeletal filaments and their associated proteins. Total-internal-reflection-fluorescence (TIRF) microscopy has a high signal-to-background ratio, but it suffers from photobleaching and photodamage of the fluorescent proteins. Label-free techniques such as interference reflection microscopy (IRM) and interferometric scattering microscopy (iSCAT) circumvent the problem of photobleaching but cannot readily visualize single molecules. Here, we present a protocol for combining IRM with a commercial TIRF microscope for the simultaneous imaging of microtubule-associated proteins (MAPs) and dynamic microtubules in vitro. Our protocol allows for high-speed observation of MAPs interacting with dynamic microtubules. This improves on existing two-color TIRF setups by eliminating both the need for microtubule labeling and the need for several additional optical components, such as a second excitation laser. We image both channels on the same camera chip to avoid image-registration and frame-synchronization problems. We demonstrate our setup by visualizing single kinesin molecules walking on dynamic microtubules.
|
arxiv:2201.07911
|
We theoretically study the electronic transport of a nanowire partly irradiated by an external terahertz (THz) electromagnetic field. Although the electrons in the ballistic nanowire only suffer lateral collisions with photons, reflection of electrons also takes place in this partly irradiated case. Using the free-electron model and the scattering matrix approach, we show that at resonance there exists a step decrement of 50 percent in the transmission probability as the amplitude of the field increases to a certain value. Moreover, the coherent structure of the transmission for the system appears clearly when the field irradiates only the middle part of the nanowire. This sensitive transmission property of the system may be used in THz detection.
|
arxiv:cond-mat/0212044
|
In this paper, we present a generic physics-informed generative model called MPDM that integrates multi-fidelity physics simulations with diffusion models. MPDM categorizes multi-fidelity physics simulations into inexpensive and expensive simulations, depending on computational costs. The inexpensive simulations, which can be obtained with low latency, directly inject contextual information into DDMs. Furthermore, when results from expensive simulations are available, MPDM refines the quality of generated samples via a guided diffusion process. This design separates the training of a denoising diffusion model from physics-informed conditional probability models, thus lending flexibility to practitioners. MPDM builds on Bayesian probabilistic models and is equipped with a theoretical guarantee that provides upper bounds on the Wasserstein distance between the sample and the underlying true distribution. The probabilistic nature of MPDM also provides a convenient approach for uncertainty quantification in prediction. Our models excel in cases where physics simulations are imperfect and sometimes inaccessible. We use a numerical simulation in fluid dynamics and a case study in heat dynamics within laser-based metal powder deposition additive manufacturing to demonstrate how MPDM seamlessly integrates multi-fidelity physics simulations and observations to obtain surrogates with superior predictive performance.
|
arxiv:2407.17720
|
The Chu circuit model provides the basis for analyzing the minimum radiation quality factor, Q, of a given spherical mode. However, examples of electrically large spherical radiators readily demonstrate that this Q limit has limitations in predicting bandwidth. Spherical mode radiation is reexamined and an equivalent 1D transmission line model is derived that exactly models the fields. This model leads to a precise cutoff frequency of the spherical waveguide, which provides a clear boundary between propagating and evanescent fields. A new delineation of 'stored' and 'radiated' electromagnetic energy is postulated, which leads to a new definition of spherical mode Q. Next, attention is turned to the Harrington bound on the directivity-bandwidth tradeoff of an antenna with an arbitrary size. Harrington derived the maximum directivity for a specified number of spherical harmonics such that the Q is not 'large'. Here, the method of Lagrange multipliers is used to quantify the maximum directivity for a given bandwidth. It is shown that optimally exciting all spherical harmonics (including $n > ka$) enables both larger directivity and bandwidth than Harrington's previous limit. While Chu's and Harrington's analyses are generally good approximations for most situations, the new self-consistent theory that defines fundamental antenna limits leads to updated results.
|
arxiv:2408.07085
|
One of the most reliable means of studying the stellar interior is through the apsidal motion in double-lined eclipsing binary systems, since these systems present errors in masses, radii, and effective temperatures of only a few per cent. On the other hand, the theoretical values of the apsidal motion to be compared with the observed values depend on the stellar masses of the components and, more strongly, on their radii (fifth power). The main objective of this work is to make available grids of evolutionary stellar models that, in addition to the traditional parameters (e.g. age, mass, log g, $T_{\rm eff}$), also contain the parameters necessary for the theoretical study of apsidal motion and tidal evolution. This information is useful for the study of the apsidal motion in eclipsing binaries and their tidal evolution, and can also be used for the same purpose in exoplanetary systems. All models were computed using the MESA package. We consider core overshooting for models with masses $\ge 1.2$ M$_\odot$. For the amount of core overshooting we adopted a recent mass versus core overshooting relationship. For the mixing-length parameter $\alpha_{\rm MLT}$ we adopted the value 1.84 (the solar-calibrated value). Mass loss was taken into account in two evolutionary phases. The models were followed from the pre-main-sequence phase to the white dwarf (WD) stage. The evolutionary models containing age, luminosity, log g, and $T_{\rm eff}$, as well as the first three harmonics of the internal stellar structure ($k_2$, $k_3$, and $k_4$), the radius of gyration $\beta$, and the dimensionless variable $\alpha$, related to the gravitational potential energy, are presented in 69 tables covering three chemical compositions: [Fe/H] = -0.50, 0.00, and 0.50. Additional models with different input physics are available.
|
arxiv:2305.01627
|
With the increasing importance of data privacy, local differential privacy (LDP) has recently become a strong measure of privacy for protecting each user's privacy from data analysts without relying on a trusted third party. In many cases, both data providers and data analysts hope to maximize the utility of released data. In this paper, we study the fundamental trade-off formulated as a constrained optimization problem: maximizing data utility subject to the constraint of LDP budgets. In particular, the generalized randomized response (GRR) treats all discrete data equally except for the true data. For this, we introduce an adaptive LDP mechanism called bipartite randomized response (BRR), which solves the above privacy-utility maximization problem from a global standpoint. We prove that for any utility function and any privacy level, solving the maximization problem is equivalent to determining how many high-utility data values are to be treated equally with the true data in the release probability, the outcome of which gives the optimal randomized response. Further, solving this linear program can be computationally cheap in theory. Several examples of utility functions defined by distance metrics and applications in decision trees and deep learning are presented. The results of various experiments show that our BRR significantly outperforms the state-of-the-art LDP mechanisms of both continuous and distributed types.
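For reference, the GRR baseline mentioned above reports the true value with probability $p = e^\varepsilon/(e^\varepsilon + k - 1)$ and each of the other $k-1$ values with probability $q = 1/(e^\varepsilon + k - 1)$. A minimal sketch with an unbiased frequency estimator follows; it illustrates GRR only, not the paper's BRR mechanism:

```python
import math
import random

def grr_perturb(value, domain, eps, rng):
    """Report the true value w.p. p = e^eps / (e^eps + k - 1),
    otherwise a uniformly random other value (each w.p. q)."""
    k = len(domain)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

def grr_estimate(reports, target, domain, eps):
    """Unbiased estimate of the true frequency of `target`:
    (observed fraction - q) / (p - q)."""
    k = len(domain)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = 1.0 / (math.exp(eps) + k - 1)
    frac = sum(r == target for r in reports) / len(reports)
    return (frac - q) / (p - q)

rng = random.Random(0)
domain = ["a", "b", "c", "d"]
# Every user holds "a"; after perturbation, the estimator should
# recover a frequency close to 1.0.
reports = [grr_perturb("a", domain, 1.0, rng) for _ in range(20000)]
print(grr_estimate(reports, "a", domain, 1.0))  # close to 1.0
```

BRR, as described in the abstract, generalizes this by splitting the domain into two groups and assigning the high-utility group the same release probability as the true value.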
|
arxiv:2504.20926
|
While advanced machine learning algorithms are effective in load forecasting, they often suffer from low data utilization, and hence their superior performance relies on massive datasets. This motivates us to design machine learning algorithms with improved data utilization. Specifically, we consider load forecasting for a new user in the system by observing only a few shots (data points) of its energy consumption. This task is challenging since the limited samples are insufficient to exploit the temporal characteristics essential for load forecasting. Nonetheless, we notice that there are not too many distinct temporal characteristics for residential loads due to the limited kinds of human lifestyle. Hence, we propose to utilize the historical load profile data from existing users to conduct effective clustering, which mitigates the challenges brought by the limited samples. Specifically, we first design a feature extraction clustering method for categorizing historical data. Then, inheriting the prior knowledge from the clustering results, we propose a two-phase long short-term memory (LSTM) model to conduct load forecasting for new users. The proposed method outperforms the traditional LSTM model, especially when the training sample size fails to cover a whole period (i.e., 24 hours in our task). Extensive case studies on two real-world datasets and one synthetic dataset verify the effectiveness and efficiency of our method.
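The cluster-then-inherit idea can be sketched as follows. This is a toy illustration with synthetic profiles and nearest-centroid matching, not the paper's feature-extraction clustering or its two-phase LSTM: average historical daily profiles into per-lifestyle centroids, then match a new user's few observed hours to the nearest centroid and use it as the day-ahead forecast.

```python
import math
import random

def make_profile(kind, rng, hours=24):
    """Synthetic daily load with a morning (8h) or evening (19h) peak."""
    peak = 8 if kind == "morning" else 19
    return [1.0 + 2.0 * math.exp(-((h - peak) ** 2) / 8.0) + rng.gauss(0, 0.05)
            for h in range(hours)]

def centroid(profiles):
    """Hour-wise average of a group of daily profiles."""
    return [sum(p[h] for p in profiles) / len(profiles)
            for h in range(len(profiles[0]))]

def few_shot_forecast(shots, centroids):
    """shots: {hour: observed load}. Return the centroid closest to the
    observations (squared error on observed hours) as the full-day forecast."""
    return min(centroids,
               key=lambda c: sum((c[h] - v) ** 2 for h, v in shots.items()))

rng = random.Random(42)
cents = [centroid([make_profile(k, rng) for _ in range(50)])
         for k in ("morning", "evening")]
# A new user observed at only three evening hours is matched to the
# evening-lifestyle centroid, recovering the unseen peak shape.
new_user = make_profile("evening", rng)
forecast = few_shot_forecast(
    {18: new_user[18], 19: new_user[19], 20: new_user[20]}, cents)
print(forecast[19] > forecast[8])  # True: evening peak recovered
```

The paper replaces the nearest-centroid step with an LSTM whose first phase is trained on the matched cluster's history, but the data-utilization argument is the same: the few shots only need to identify the lifestyle, not describe the whole day.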
|
arxiv:2202.07939
|
We present AC susceptibility and specific heat measurements taken on samples of LiHo$_x$Y$_{1-x}$F$_4$ in the dilute limit: x = 0.018, 0.045, 0.080 and 0.12. Susceptibility measurements show glassy behavior including wide absorption spectra that continually broaden with decreasing temperature. Dynamical scaling analyses show evidence of finite-temperature spin-glass transitions, the temperatures of which match those of recent theoretical work. A surprisingly long intrinsic time constant is observed in these samples and is found to be inversely correlated with the concentration of magnetic moments, x. Our results support the picture that this behavior is largely a single-ion effect, related to the random transverse fields generated by the off-diagonal component of the dipolar interaction and significantly slowed by the important nuclear hyperfine interaction. Specific heat measurements show broad features due to the electronic spins on top of a large Schottky-like nuclear contribution. Unusually, the peak position of the electronic component is found to be largely concentration independent, unlike the glass transition temperature.
|
arxiv:1205.3374
|
I propose a two-component analytic formula $F(s,t) = F^{(1)}(s,t) + F^{(2)}(s,t)$ for $(ab \rightarrow ab) + (a\bar{b} \rightarrow a\bar{b})$ scattering at energies $\ge 100$ GeV, where $s, t$ denote squares of c.m. energy and momentum transfer. It saturates the Froissart-Martin bound and obeys Auberson-Kinoshita-Martin (AKM) \cite{akm1971} scaling. I choose $\mathrm{Im}\,F^{(1)}(s,0) + \mathrm{Im}\,F^{(2)}(s,0)$ as given by Particle Data Group (PDG) fits to total cross sections. The PDG formula is extended to non-zero momentum transfers using partial waves of $\mathrm{Im}\,F^{(1)}$ and $\mathrm{Im}\,F^{(2)}$ motivated by Pomeron pole and 'grey disk' amplitudes. $\mathrm{Re}\,F(s,t)$ is deduced from real analyticity: I prove that $\mathrm{Re}\,F(s,t)/\mathrm{Im}\,F(s,0) \rightarrow (\pi/\ln s)\, d/d\tau\,(\tau\, \mathrm{Im}\,F(s,t)/\mathrm{Im}\,F(s,0))$ for $s \rightarrow \infty$ with $\tau = t(\ln s)^2$ fixed, and apply it to $F^{(2)}$. Using also the forward slope fit by Schegelsky-Ryskin, the model gives real parts, differential cross sections for $(-t) < 0.3$ GeV$^2$, and inelastic cross sections in good agreement with data at 546 GeV, 1.8 TeV, 7 TeV and 8 TeV. It predicts inelastic cross sections for $pp$ or $\bar{p}p$, $\sigma_{inel} = 72.7 \pm 1.0$ mb at 7 TeV and $74.2 \pm 1.0$ mb at 8 TeV, in agreement with $pp$ TOTEM experimental values $73.1 \pm 1
|
arxiv:1602.03627
|
We present an investigation of the boundary breather states of the sinh-Gordon model restricted to a half-line. The classical boundary breathers are presented for a two-parameter family of integrable boundary conditions. Restricting to the case of boundary conditions which preserve the $\phi \to -\phi$ symmetry of the bulk theory, the energy spectrum of the boundary states is computed in two ways: firstly, by using the bootstrap technique and, subsequently, by using a WKB approximation. Requiring that the two descriptions of the spectrum agree with each other allows a determination of the relationship between the boundary parameter, the bulk coupling constant, and the parameter appearing in the reflection factor derived by Ghoshal to describe the scattering of the sinh-Gordon particle from the boundary.
|
arxiv:hep-th/9909145
|
The semitauonic $B_c^- \to J/\psi\, \tau^- \bar{\nu}_\tau$ decay is optimal to scrutinize possible new physics effects in $b \to c \tau^- \bar{\nu}_\tau$ transitions, as indicated by the current data on the $R(D^{(*)})$ anomalies. In this work, we study the $B_c^- \to J/\psi\, \tau^- \bar{\nu}_\tau$ decay with both polarized $\tau$ and $J/\psi$. Their subsequent decays, with $J/\psi \to \mu^+\mu^-$ as well as $\tau^- \to \pi^-\nu_\tau$, $\tau^- \to \rho^-\nu_\tau$ and $\tau^- \to \ell^-\bar{\nu}_\ell\nu_\tau$, are exploited to extract the energy and angular distributions of the charged final-state particles in the processes. Starting with the most general effective Hamiltonian relevant for the $b \to c \tau^- \bar{\nu}_\tau$ transitions, including all possible Lorentz structures of the dimension-six operators with both left- and right-handed neutrinos, we first derive the five-fold differential decay rate in terms of the visible final-state kinematics. From this distribution, we then construct in total 34 normalized observables, among which nine refer to the CP-violating triple product asymmetries that vanish within the Standard Model. We also construct five new observables based on the combinations of these normalized observables that can only be attributed to the right-handed neutrinos. On the other hand, considering the low statistics of the fully differential distribution, we introduce some integrated observables with only one kinematic variable left, which are more promising to be measured due to the largely increased statistics. The sensitivities of all these observables to the different new physics scenarios are investigated in detail. Finally, assuming an ideal circumstance, we give an estimate of the statistical uncertainties of the nine CP-conserving observables at LHCb and found that $\tau^- \
|
arxiv:2302.13743
|
We investigate the Laplacian Abelian gauge on the sphere $S^4$ in the background of a single 't Hooft instanton. To this end we solve the eigenvalue problem of the covariant Laplace operator in the adjoint representation. The ground state wave function serves as an auxiliary Higgs field. We find that the ground state is always degenerate and has nodes. Upon diagonalisation, these zeros induce topological defects in the gauge potentials. The nature of the defects crucially depends on the order of the zeros. For first-order zeros one obtains magnetic monopoles. The generic defects, however, arise from zeros of second order and are pointlike. Their topological invariant is the Hopf index $S^3 \to S^2$. These findings are corroborated by an analysis of the Laplacian gauge in the fundamental representation where similar defects occur. Possible implications for the confinement scenario are discussed.
|
arxiv:hep-th/0007119
|
In the picture of eternal inflation, our observable universe resides inside a single bubble nucleated from an inflating false vacuum. Many of the theories giving rise to eternal inflation predict that we have causal access to collisions with other bubble universes, providing an opportunity to confront these theories with observation. We present the results from the first observational search for the effects of bubble collisions, using cosmic microwave background data from the WMAP satellite. Our search targets a generic set of properties associated with a bubble collision spacetime, which we describe in detail. We use a modular algorithm that is designed to avoid a posteriori selection effects, automatically picking out the most promising signals, performing a search for causal boundaries, and conducting a full Bayesian parameter estimation and model selection analysis. We outline each component of this algorithm, describing its response to simulated CMB skies with and without bubble collisions. Comparing the results for simulated bubble collisions to the results from an analysis of the WMAP 7-year data, we rule out bubble collisions over a range of parameter space. Our model selection results based on WMAP 7-year data do not warrant augmenting $\Lambda$CDM with bubble collisions. Data from the Planck satellite can be used to more definitively test the bubble collision hypothesis.
|
arxiv:1012.3667
|
The MANIAC challenge raises a problem of game theory: different players' strategies intertwine, and the success of any player depends on the actions of all players in the system. A truly fair scenario is when all the strategies are identical, all the nodes cooperate and they all equally share the rewards and risks that come with every transfer. A successful strategy is one that tries to diverge from the equilibrium to maximize its own gains and manages to do so. We propose the wolf-pack strategy. Unlike standard game-theory-based strategies, our strategy does not penalize the nodes that diverge from fairness or from equilibrium, as we believe most nodes will do so in an attempt to gain an advantage over the other nodes. The wolf-pack strategy will instead always try to find the most successful node or nodes and penalize them. We believe that, just as in nature, a number of small predators can take down bigger, more profitable ones. Furthermore, during the challenge we test two different strategies that provide completely opposite results. These offer a clear picture of what the best strategy is and of the problems of the current system.
|
arxiv:1401.1347
|
In this contribution recent developments are discussed which lead to a significant reduction of the error for $\alpha(M_Z^2)$ and $(g-2)_\mu$.
|
arxiv:hep-ph/9904373
|
The calculation of quark propagators for Ginsparg-Wilson-type Dirac operators is costly and thus limited to a few different sources. We present a new approach for determining spatially optimized operators for lattice spectroscopy of excited hadrons. Jacobi-smeared quark sources with different widths are combined to construct hadron operators with different spatial wave functions. We study the Roper state and excited rho and pion mesons.
|
arxiv:hep-lat/0409014
|
Textured meshes are becoming an increasingly popular representation combining the 3D geometry and radiometry of real scenes. However, semantic segmentation algorithms for urban meshes have been little investigated and do not exploit all radiometric information. To address this problem, we adopt an approach consisting in sampling a point cloud from the textured mesh, then using a point cloud semantic segmentation algorithm on this cloud, and finally using the obtained semantics to segment the initial mesh. In this paper, we study the influence of different parameters such as the sampling method, the density of the extracted cloud, the features selected (color, normal, elevation) as well as the number of points used at each training period. Our result outperforms the state of the art on the SUM dataset, gaining about 4 points in OA and 18 points in mIoU.
|
arxiv:2302.10635
|
The superscaling observed in inclusive electron scattering is described within the dilute Fermi gas model with interaction between the particles. The comparison with the relativistic Fermi gas (RFG) model without interaction shows an improvement in the explanation of the scaling function $f(\psi')$ in the region $\psi' < -1$, where the RFG result is $f(\psi') = 0$. It is found that the behavior of $f(\psi')$ for $\psi' < -1$ depends on the particular form of the general power-law asymptotics of the momentum distribution $n(k) \sim 1/k^{4+m}$ at large $k$. The best agreement with the empirical scaling function is found for $m \simeq 4.5$, in agreement with the asymptotics of $n(k)$ in the coherent density fluctuation model where $m = 4$. Thus, superscaling gives information about the asymptotics of $n(k)$ and the NN forces.
|
arxiv:nucl-th/0703003
|
Among Neptunian-mass exoplanets ($20-50\,M_\oplus$), puffy hot Neptunes are extremely rare, and their unique combination of low mass and extended radii implies very low density ($\rho < 0.3$ g cm$^{-3}$). Over the last decade, only a few puffy planets have been detected and precisely characterized with both transit and radial velocity observations, most notably including WASP-107 b, TOI-1420 b, and WASP-193 b. In this paper, we report the discovery of TOI-1173 A b, a low-density ($\rho = 0.195_{-0.017}^{+0.018}$ g cm$^{-3}$) super-Neptune with $P = 7.06$ days in a nearly circular orbit around the primary G-dwarf star in the wide binary system TOI-1173 A/B. Using radial velocity observations with the MAROON-X and HIRES spectrographs and transit photometry from TESS, we determined a planet mass of $M_{\rm p} = 27.4 \pm 1.7\,M_\oplus$ and radius of $R_{\rm p} = 9.19 \pm 0.18\,R_\oplus$. TOI-1173 A b is the first puffy super-Neptune detected in a wide binary system (projected separation $\sim 11,400$ au). We explored several mechanisms to understand the puffy nature of TOI-1173 A b, and showed that tidal heating is the most promising explanation. Furthermore, we demonstrate that TOI-1173 A b has likely maintained its orbital stability over time and may have undergone von Zeipel-Lidov-Kozai migration followed by tidal circularization given its present-day architecture, with important implications for planet migration theory and induced engulfment into the host star. Further investigation of the atmosphere of TOI-1173 A b will shed light on the origin of close-in low-density Neptunian planets in field and binary systems, while spin-orbit analyses may elucidate the dynamical evolution of the system.
|
arxiv:2403.06240
|
In order to apply nonstandard methods to modern algebraic geometry, as a first step in this paper we study the applications of nonstandard constructions to category theory. It turns out that many categorical properties are well behaved under enlargements.
|
arxiv:math/0408177
|
We report the results of a multi-wavelength campaign on the blazar Mrk 421 during outburst. We observed four strong flares at X-ray energies that were not seen at other wavelengths (partially because of missing data). From the fastest rise in the X-rays, an upper limit could be derived on the extension of the emission region. A time lag between high-energy and low-energy X-rays was observed, which allowed an estimation of the magnetic-field strength. The spectral analysis of the X-rays revealed a slight spectral hardening of the low-energy (3-43 keV) spectral index. The hardness-ratio analysis of the Swift-XRT (0.2-10 keV) data indicated a small correlation with the intensity; i.e., a hard-to-soft evolution was observed. At the energies of IBIS/ISGRI (20-150 keV), such correlations are less obvious. A multi-wavelength spectrum was composed and the X-ray and bolometric luminosities were calculated.
|
arxiv:0805.2577
|
A search for diphoton events with large missing transverse energy is presented. The data were collected with the ATLAS detector in proton-proton collisions at $\sqrt{s} = 7$ TeV at the CERN Large Hadron Collider and correspond to an integrated luminosity of 3.1 pb$^{-1}$. No excess of such events is observed above the Standard Model background prediction. In the context of a specific model with one universal extra dimension with compactification radius $R$ and gravity-induced decays, values of $1/R < 728$ GeV are excluded at 95% CL, providing the most sensitive limit on this model to date.
|
arxiv:1012.4272
|
In this paper, we present a systematic investigation of simple inverse seesaw models for neutrino masses and flavor mixing based on the modular $S_4$ symmetry. Two right-handed neutrinos and three extra fermion singlets are introduced to account for light neutrino masses through the inverse seesaw mechanism, and to provide a keV-mass sterile neutrino as the candidate for warm dark matter in our universe. Considering all possible modular forms with weights no larger than four, we obtain twelve models, among which we find one in excellent agreement with the observed lepton mass spectra and flavor mixing. Moreover, we explore the allowed range of the sterile neutrino mass and mixing angles, taking into account the direct searches for an X-ray line and the Lyman-$\alpha$ observations. The model predictions for neutrino mixing parameters and the dark matter abundance will be readily testable in future neutrino oscillation experiments and cosmological observations.
|
arxiv:2106.03433
|
Patient privacy is a major barrier to healthcare AI. For confidentiality reasons, most patient data remains in silos in separate hospitals, preventing the design of data-driven healthcare AI systems that need large volumes of patient data to make effective decisions. A solution to this is collective learning across multiple sites through federated learning with differential privacy. However, literature in this space typically focuses on differentially private statistical estimation and machine learning, which is different from the causal inference-related problems that arise in healthcare. In this work, we take a fresh look at federated learning with a focus on causal inference; specifically, we look at estimating the average treatment effect (ATE), an important task in causal inference for healthcare applications, and provide a federated analytics approach to enable ATE estimation across multiple sites along with differential privacy (DP) guarantees at each site. The main challenge comes from site heterogeneity: different sites have different sample sizes and privacy budgets. We address this through a class of per-site estimation algorithms that report the ATE estimate and its variance as a quality measure, and an aggregation algorithm on the server side that minimizes the overall variance of the final ATE estimate. Our experiments on real and synthetic data show that our method reliably aggregates private statistics across sites and provides a better privacy-utility tradeoff under site heterogeneity than baselines.
|
arxiv:2310.06237
|
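The server-side aggregation described in the abstract above, minimizing the variance of the combined ATE from per-site (estimate, variance) pairs, is classic inverse-variance weighting. The sketch below illustrates that generic scheme, not the paper's exact algorithm; we assume each site's DP noise is already folded into the variance it reports.

```python
import numpy as np

def aggregate_ate(estimates, variances):
    """Inverse-variance aggregation of per-site ATE estimates.

    Weighting site i by 1/var_i (normalized) minimizes the variance of
    the combined unbiased estimate; the combined variance is the
    harmonic-style quantity 1 / sum(1/var_i), which is never larger
    than the best single site's variance.
    """
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)        # optimal weights, sum to 1
    ate = float(np.dot(w, estimates))      # combined ATE estimate
    var = float(1.0 / np.sum(1.0 / v))     # variance of combined estimate
    return ate, var
```

Because noisier (smaller, or more privacy-constrained) sites get smaller weights automatically, this is one natural way to handle the sample-size and privacy-budget heterogeneity the abstract highlights.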
The paper develops the idea that the dynamics of both classical and quantum processes is time reversible. It is shown how this classical analogy allows one to define the measure for the path integral in quantum mechanics.
|
arxiv:hep-ph/0311196
|
In this work we explore 1+1 dimensional p-wave superconductors using the probe D-brane construction. Specifically, we choose three intersecting D-brane models: the D1/D5, D2/D4 and D3/D3 systems. According to the dilaton running behavior, we denote the former two systems as nonconformal models and the last system as conformal. We find that all three models are qualitatively similar in describing the superconducting condensate as well as some basic features (such as the gap formation and DC superconductivity) of the superconducting conductivity. There also exist some differences among the three models as far as the AC conductivity is concerned. Specifically, for the D3/D3 model there is no peak at nonzero frequency in the imaginary part of the conductivity, which is present in the nonconformal models; their asymptotic behaviors are also different: for D1/D5 the real part of the AC conductivity approaches one in the large frequency limit, for D2/D4 it slowly goes to a certain nonzero constant smaller than one, and for D3/D3 it goes to zero. We find the profile of the AC conductivity for the D1/D5 system is very similar to that of higher-dimensional p-wave superconductors.
|
arxiv:1205.1614
|
The performance of the standard online robust principal component analysis (OR-PCA) technique depends on the optimal tuning of its explicit regularizers, and this tuning is dataset sensitive. We aim to remove the dependency on these tuning parameters by using implicit regularization. We propose to use the implicit regularization effect of various modified gradient descents to make OR-PCA tuning-free. Our method incorporates three different versions of modified gradient descent that separately but naturally encourage sparsity and low-rank structure in the data. The proposed method performs comparably to or better than the tuned OR-PCA on both simulated and real-world datasets. Tuning-free OR-PCA is more scalable for large datasets since we do not require dataset-dependent parameter tuning.
|
arxiv:2409.07275
|
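The abstract above does not spell out its three modified gradient descents, but the low-rank side of the implicit-regularization effect it relies on can be illustrated with a well-known toy: plain gradient descent on an unconstrained factorization with a small random initialization recovers a low-rank matrix without any explicit rank or nuclear-norm penalty. This is a generic sketch under our own naming, not the paper's method.

```python
import numpy as np

def low_rank_gd(M, k=10, lr=0.01, steps=2000, init_scale=1e-3, seed=0):
    """Fit M with U @ V.T by gradient descent on ||U V^T - M||_F^2.

    The factor rank k is deliberately larger than the target rank;
    with a *small* initialization, gradient descent is implicitly
    biased toward low-rank solutions, so no explicit regularizer
    (and hence no tuning parameter for it) is needed.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = init_scale * rng.standard_normal((m, k))
    V = init_scale * rng.standard_normal((n, k))
    for _ in range(steps):
        R = U @ V.T - M          # residual
        U -= lr * (R @ V)        # gradient step for U
        V -= lr * (R.T @ U)      # gradient step for V (alternating)
    return U @ V.T

# toy demo: recover a rank-2 matrix with rank-10 factors
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
A_hat = low_rank_gd(A, k=10)
```

The same principle, with descent variants biased toward sparsity for the outlier component, is what would make an OR-PCA-style decomposition tuning-free.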
We present a novel integrable non-autonomous partial differential equation of the Schwarzian type, i.e. invariant under Möbius transformations, that is related to the Korteweg-de Vries hierarchy. In fact, this PDE can be considered as the generating equation for the entire hierarchy of Schwarzian KdV equations. We present its Lax pair, establish its connection with the SKdV hierarchy, its Miura relations to similar generating PDEs for the modified and regular KdV hierarchies, and its Lagrangian structure. Finally we demonstrate that its similarity reductions lead to the full Painlevé VI equation, i.e. with four arbitrary parameters.
|
arxiv:solv-int/9909026
|