We spectroscopically characterize the atmosphere of HD 106906b, a young low-mass companion near the deuterium-burning limit. The wide separation from its host star of 7.1" makes it an ideal candidate for high-S/N and high-resolution spectroscopy. We aim to derive new constraints on the spectral type, effective temperature, and luminosity of HD 106906b, and also to provide a high-S/N template spectrum for future characterization of extrasolar planets. We obtained 1.1-2.5 $\mu$m integral field spectroscopy with the VLT/SINFONI instrument with a spectral resolution of R ~ 2000-4000. New estimates of the parameters of HD 106906b are derived by analyzing spectral features, comparing the extracted spectra to spectral catalogs of other low-mass objects, and fitting with theoretical isochrones. We identify several spectral absorption lines that are consistent with a low mass for HD 106906b. We derive a new spectral type of L1.5 $\pm$ 1.0, one subclass earlier than previous estimates. Through comparison with other young low-mass objects, this translates to a luminosity of log($L/L_\odot$) = $-3.65\pm0.08$ and an effective temperature of Teff = $1820\pm240$ K. Our new mass estimates range between $M = 11.9^{+1.7}_{-0.8} M_{\rm Jup}$ (hot start) and $M = 14.0^{+0.2}_{-0.5} M_{\rm Jup}$ (cold start). These limits take into account a possibly finite formation time, i.e., HD 106906b is allowed to be 0--3 Myr younger than its host star. We exclude accretion onto HD 106906b at rates $\dot{M} > 4.8\times10^{-10} M_{\rm Jup}$ yr$^{-1}$ based on the fact that we observe no hydrogen (Paschen-$\beta$, Brackett-$\gamma$) emission. This is indicative of little or no circumplanetary gas. With our new observations, HD 106906b is the planetary-mass object with one of the highest S/
arxiv:1708.05747
We show how careful control of the incident polarization of a light beam close to the Brewster angle gives a giant transverse spatial shift on reflection. This resolves the long-standing puzzle of why such beam shifts transverse to the incident plane [Imbert-Fedorov (IF) shifts] tend to be an order of magnitude smaller than the related Goos-Hänchen (GH) shifts in the longitudinal direction, which are largest close to critical incidence. We demonstrate that with the proper initial polarization the transverse displacements can be equally large, which we confirm experimentally near Brewster incidence. In contrast to the established understanding, these polarizations are elliptical and angle-dependent. We explain the magnitude of the IF shift by an analogous change of the symmetry properties of the reflection operators as compared to the GH shift.
arxiv:1401.5505
We present TokenVerse -- a method for multi-concept personalization, leveraging a pre-trained text-to-image diffusion model. Our framework can disentangle complex visual elements and attributes from as little as a single image, while enabling seamless plug-and-play generation of combinations of concepts extracted from multiple images. As opposed to existing works, TokenVerse can handle multiple images with multiple concepts each, and supports a wide range of concepts, including objects, accessories, materials, pose, and lighting. Our work exploits a DiT-based text-to-image model, in which the input text affects the generation through both attention and modulation (shift and scale). We observe that the modulation space is semantic and enables localized control over complex concepts. Building on this insight, we devise an optimization-based framework that takes as input an image and a text description, and finds for each word a distinct direction in the modulation space. These directions can then be used to generate new images that combine the learned concepts in a desired configuration. We demonstrate the effectiveness of TokenVerse in challenging personalization settings, and showcase its advantages over existing methods. Project webpage: https://token-verse.github.io/
arxiv:2501.12224
We introduce MoMa, a novel modality-aware mixture-of-experts (MoE) architecture designed for pre-training mixed-modal, early-fusion language models. MoMa processes images and text in arbitrary sequences by dividing expert modules into modality-specific groups. These groups exclusively process designated tokens while employing learned routing within each group to maintain semantically informed adaptivity. Our empirical results reveal substantial pre-training efficiency gains through this modality-specific parameter allocation. Under a 1-trillion-token training budget, the MoMa 1.4B model, featuring 4 text experts and 4 image experts, achieves impressive FLOPs savings: 3.7x overall, with 2.6x for text and 5.2x for image processing compared to a compute-equivalent dense baseline, measured by pre-training loss. This outperforms the standard expert-choice MoE with 8 mixed-modal experts, which achieves 3x overall FLOPs savings (3x for text, 2.8x for image). Combining MoMa with mixture-of-depths (MoD) further improves pre-training FLOPs savings to 4.2x overall (text: 3.4x, image: 5.3x), although this combination hurts performance in causal inference due to increased sensitivity to router accuracy. These results demonstrate MoMa's potential to significantly advance the efficiency of mixed-modal, early-fusion language model pre-training, paving the way for more resource-efficient and capable multimodal AI systems.
arxiv:2407.21770
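The core mechanism the MoMa abstract describes — tokens routed only among experts of their own modality group — can be sketched in a few lines. This is a hypothetical, untrained toy (random weights, top-1 routing, illustrative names and dimensions), not the paper's implementation:

```python
# Modality-aware expert routing sketch: tokens are partitioned by modality,
# then routed (top-1 here) among the experts of their own group only.
import random

random.seed(0)
DIM, N_EXPERTS_PER_GROUP = 4, 4

def make_expert():
    # Each "expert" is just a random linear map (DIM x DIM weight matrix).
    return [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]

experts = {
    "text":  [make_expert() for _ in range(N_EXPERTS_PER_GROUP)],
    "image": [make_expert() for _ in range(N_EXPERTS_PER_GROUP)],
}
# One router weight vector per expert, per modality group.
routers = {
    mod: [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_EXPERTS_PER_GROUP)]
    for mod in experts
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def route(token, modality):
    """Pick the expert within the token's own modality group with the top router score."""
    scores = [dot(w, token) for w in routers[modality]]
    return max(range(len(scores)), key=scores.__getitem__)

def forward(token, modality):
    idx = route(token, modality)
    return idx, [dot(row, token) for row in experts[modality][idx]]

for tok, mod in [([0.5] * DIM, "text"), ([1.0, -1.0, 0.0, 2.0], "image")]:
    idx, out = forward(tok, mod)
    print(mod, "-> expert", idx)
```

The point of the grouping is that a text token can never consume image-expert capacity (and vice versa), which is where the modality-specific parameter allocation comes from.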
administration and hospital information system, other health information technology, and health informatics. == In science == Applications of ICTs in science, research and development, and academia include: Internet research, online research methods, science communication and communication between scientists, scholarly databases, and applied metascience. == Models of access == Scholar Mark Warschauer defines a "models of access" framework for analyzing ICT accessibility. In the second chapter of his book, Technology and Social Inclusion: Rethinking the Digital Divide, he describes three models of access to ICTs: devices, conduits, and literacy. Devices and conduits are the most common descriptors for access to ICTs, but they are insufficient for meaningful access to ICTs without the third model of access, literacy. Combined, these three models roughly incorporate all twelve of the criteria of "real access" to ICT use, conceptualized by a non-profit organization called Bridges.org in 2005: physical access to technology; appropriateness of technology; affordability of technology and technology use; human capacity and training; locally relevant content, applications, and services; integration into daily routines; socio-cultural factors; trust in technology; local economic environment; macro-economic environment; legal and regulatory framework; and political will and public support. === Devices === The most straightforward model of access for ICT in Mark Warschauer's theory is devices. In this model, access is defined most simply as the ownership of a device such as a phone or computer. Warschauer identifies many flaws with this model, including its inability to account for additional costs of ownership such as software, access to telecommunications, knowledge gaps surrounding computer use, and the role of government regulation in some countries. Therefore, Warschauer argues that considering only devices understates the magnitude of digital inequality.
For example, the Pew Research Center notes that 96% of Americans own a smartphone, although most scholars in this field would contend that comprehensive access to ICT in the United States is likely much lower than that. === Conduits === A conduit requires a connection to a supply line, which for ICT could be a telephone line or Internet line. Accessing the supply requires investment in the proper infrastructure from a commercial company or local government, and recurring payments from the user once the line is set up. For this reason, conduits usually divide people based on their geographic locations. As a Pew Research Center poll reports, Americans in rural areas are 12% less likely to have broadband access than other Americans, thereby making them less likely to own the devices. Additionally, these costs
https://en.wikipedia.org/wiki/Information_and_communications_technology
In adaptive dynamical networks, the dynamics of the nodes and the edges influence each other. We show that we can treat such systems as a closed feedback loop between edge and node dynamics. Using recent advances on the stability of feedback systems from control theory, we derive local, sufficient conditions for steady states of such systems to be linearly stable. These conditions are local in the sense that they are written entirely in terms of the (linearized) behavior of the edges and nodes. We apply these conditions to the Kuramoto model with inertia written in adaptive form, and to the adaptive Kuramoto model. For the former we recover a classic result; for the latter we show that our sufficient conditions match necessary conditions where the latter are available, thus completely settling the question of linear stability in this setting. The method we introduce can be readily applied to a vast class of systems. It enables straightforward evaluation of stability in highly heterogeneous systems.
arxiv:2411.10387
We explore the possibilities for spin-singlet superconductivity in newly discovered altermagnets. Investigating $d$-wave altermagnets, we show that finite-momentum superconductivity can easily emerge in altermagnets, even though they have no net magnetization, when the superconducting order parameter also has $d$-wave symmetry with nodes coinciding with the altermagnet nodes. Additionally, we find a rich phase diagram when both altermagnetism and an external magnetic field are considered, including superconductivity appearing at high magnetic fields from a parent zero-field normal state.
arxiv:2309.14427
We seek to establish the presence and properties of gas in the circumstellar disk orbiting T Cha, a nearby (d ~ 110 pc), relatively evolved (age ~ 5-7 Myr) yet actively accreting 1.5 Msun T Tauri star. We used the APEX 12 m radiotelescope to search for submillimeter molecular emission from the T Cha disk, and we reanalyzed archival XMM-Newton spectroscopy of T Cha to ascertain the intervening absorption due to disk gas along the line of sight to the star (N_H). We detected submillimeter rotational transitions of 12CO, 13CO, HCN, CN, and HCO+ from the T Cha disk. The 12CO line appears to display a double-peaked line profile indicative of Keplerian rotation. Analysis of the CO emission line data indicates that the disk around T Cha has a mass (M_disk,H2 = 80 M_Earth) similar to, but more compact (R_disk,CO ~ 80 AU) than, other nearby, evolved molecular disks (e.g., V4046 Sgr, TW Hya, MP Mus) in which cold molecular gas has been previously detected. The HCO+/13CO and HCN/13CO line ratios measured for T Cha appear similar to those of other evolved circumstellar disks (i.e., TW Hya and V4046 Sgr), while the CN/13CO ratio appears somewhat weaker. Analysis of the XMM-Newton data shows that the atomic absorption $N_H$ toward T Cha is 1-2 orders of magnitude larger than toward the other nearby T Tauri stars with evolved disks. Furthermore, the ratio between atomic absorption and optical extinction, N_H/A_V, toward T Cha is higher than the typical value observed for the interstellar medium and young stellar objects in the Orion Nebula Cluster. This may suggest that the fraction of metals in the disk gas is higher than in the interstellar medium. Our results confirm that pre-main sequence stars older than ~ 5 Myr, when accreting, retain cold molecular disks, and that those relatively evolved disks display similar physical and chemical properties.
arxiv:1310.8080
We study the decay B -> K* l+ l- in the QCD factorization approach and propose a new integrated observable whose dependence on the form factors is almost negligible; consequently the non-perturbative error is significantly reduced, and indeed its overall theoretical error is dominated by perturbative scale uncertainties. The new observable we propose is the ratio between the integrated forward-backward asymmetry in the [4, 6] GeV^2 and [1, 4] GeV^2 dilepton invariant mass bins. This new observable is particularly interesting because, when compared to the location of the zero of the FBA spectrum, it is experimentally easier to measure and its theoretical uncertainties are almost as small; moreover, it displays a very strong dependence on the phase of the Wilson coefficient C_10 that is otherwise only accessible through complicated CP-violating asymmetries. We illustrate the new physics sensitivity of this observable within the context of a few extensions of the Standard Model, namely the SM with four generations, an MSSM with a non-vanishing source of flavor changing neutral currents in the down squark sector, and a Z' model with tree-level flavor changing couplings.
arxiv:1007.4015
The gas mass fraction in galaxy clusters is a convenient probe to use in cosmological studies, as it can help derive constraints on a collection of cosmological parameters. It is however subject to various effects from the baryonic physics inside galaxy clusters, which may bias the obtained cosmological constraints. Among different aspects of the baryonic physics, in this paper we focus on the impact of the hydrostatic equilibrium assumption. We analyse the hydrostatic mass bias $b$, constraining a possible mass and redshift evolution of this quantity and its impact on the cosmological constraints. To that end we consider cluster observations of the {\it Planck}-ESZ sample and evaluate the gas mass fraction using X-ray counterpart observations. We show a degeneracy between the redshift dependence of the bias and cosmological parameters. In particular, we find a $3.8\sigma$ evidence for a redshift dependence of the bias when assuming a {\it Planck} prior on $\Omega_m$. On the other hand, assuming a constant mass bias would lead to the extremely large value of $\Omega_m > 0.849$. We however show that our results are entirely dependent on the cluster sample we consider. In particular, the mass and redshift trends that we find for the lowest mass-redshift and highest mass-redshift clusters of our sample are not compatible. Nevertheless, in all the analyses we find a value for the amplitude of the bias that is consistent with $b \sim 0.8$, as expected from hydrodynamical simulations and local measurements, but still in tension with the low value of $b \sim 0.6$ derived from the combination of cosmic microwave background primary anisotropies with cluster number counts.
arxiv:2204.12823
Context. A significant fraction of the predicted baryons remains undetected in the local universe. We adopted the common assumption that a large fraction of the missing baryons corresponds to the hot (log T(K) = 5.5-7) phase of the warm hot intergalactic medium (WHIM). We base our missing baryons search on the scenario whereby the WHIM has been heated up via accretion shocks and galactic outflows, and is concentrated towards the filaments of the cosmic web. Aims. Our aim is to improve the observational search of the poorly detected hot WHIM. Methods. We detect the filamentary structure within the EAGLE simulation by applying the Bisous formalism to the galaxy distribution. In addition, we use the MMF/NEXUS+ classification of the large scale environment of the dark matter component in EAGLE. We then study the spatio-thermal distribution of the hot baryons within the extracted filaments. Results. While the filaments occupy only 5% of the full simulation volume, the diffuse hot intergalactic medium in filaments amounts to 23%$-$25% of the total baryon budget, or 79%$-$87% of all the hot WHIM. The most optimal filament sample, with a missing baryon mass fraction of 82%, is obtained by selecting Bisous filaments with a high galaxy luminosity density. For these filaments we derived analytic formulae for the radial gas density and temperature profiles, consistent with recent Planck SZ and CMB lensing observations within the central $r$ ~ 1 Mpc. Conclusions. Results from EAGLE suggest that the missing baryons are strongly concentrated towards the filament axes. Since the filament finding methods used here are applicable to galaxy surveys, a large fraction of the missing baryons can be localised by focusing the observational efforts on the central 1 Mpc regions of the filaments. Moreover, focusing on high galaxy luminosity density regions will optimise the observational signal.
arxiv:2012.09203
As the practical use of answer set programming (ASP) has grown with the development of efficient solvers, we expect a growing interest in extensions of ASP as their semantics stabilize and solvers supporting them mature. Epistemic specifications, which add modal operators K and M to the language of ASP, are one such extension. We call a program in this language an epistemic logic program (ELP). Solvers have thus far been practical for only the simplest ELPs due to exponential growth of the search space. We describe a solver that is able to solve harder problems better (e.g., without exponentially-growing memory needs w.r.t. K and M occurrences) and faster than any other known ELP solver.
arxiv:1608.06910
K2-139 b is a warm Jupiter with an orbital period of 28.4 d, but only three transits of this system have previously been observed, in the long-cadence mode of K2, limiting the precision with which the orbital period can be determined and future transits predicted. We report photometric observations of four transits of K2-139 b with ESA's CHaracterising ExOPlanet Satellite (CHEOPS), conducted with the goal of measuring the orbital obliquity via spot-crossing events. We jointly fit these CHEOPS data alongside the three previously-published transits from the K2 mission, considerably increasing the precision of the ephemeris of K2-139 b. The transit times for this system can now be predicted for the next decade with a $1\sigma$ precision of less than 10 minutes, compared to over one hour previously, allowing the efficient scheduling of observations with Ariel. We detect no significant deviation from a linear ephemeris, allowing us to exclude the presence of a massive outer planet orbiting with a period less than 150 d, or a brown dwarf with a period less than one year. We also determine the scaled semi-major axis, the impact parameter, and the stellar limb-darkening with improved precision. This is driven by the shorter cadence of the CHEOPS observations compared to that of K2, and validates the sub-exposure technique used for analysing long-cadence photometry. Finally, we note that the stellar spot configuration has changed since the epoch of the K2 observations; unlike the K2 transits, we detect no evidence of spot-crossing events in the CHEOPS data.
arxiv:2205.09355
In this manuscript we analyze a data set containing information on children with Hodgkin lymphoma (HL) enrolled on a clinical trial. Treatments received and survival status were collected together with other covariates, such as demographics and clinical measurements. Our main task is to explore the potential of machine learning (ML) algorithms in a survival analysis context in order to improve over the Cox proportional hazards (CoxPH) model. We discuss the weaknesses of the CoxPH model we would like to improve upon, and then we introduce multiple algorithms, from well-established ones to state-of-the-art models, that solve these issues. We then compare every model according to the concordance index and the Brier score. Finally, we produce a series of recommendations, based on our experience, for practitioners who would like to benefit from the recent advances in artificial intelligence.
arxiv:2001.05534
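The concordance index used above as a comparison metric is simple enough to compute from scratch. The sketch below implements Harrell's C-index for right-censored data (the pairwise definition; real toolkits handle ties in time and use faster algorithms, and the data here are made up for illustration):

```python
# Harrell's concordance index: a pair (i, j) is comparable when the subject
# with the shorter observed time had an event; it is concordant when the model
# assigns that subject the higher predicted risk. Ties in risk count as 0.5.
def concordance_index(times, events, risks):
    """times: observed times; events: 1 if event observed, 0 if censored;
    risks: model risk scores (higher = predicted earlier event)."""
    concordant, tied, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # i must have the strictly earlier time and an observed event.
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

times  = [2.0, 4.0, 6.0, 8.0]
events = [1, 1, 0, 1]          # third subject is censored
risks  = [0.9, 0.6, 0.4, 0.1]  # risks monotone with event times
print(concordance_index(times, events, risks))  # perfectly ordered -> 1.0
```

C = 0.5 corresponds to random ordering and C = 1 to a perfect risk ranking, which is why it generalizes the AUC to censored survival data.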
The analysis of hadronic interactions with effective field theory techniques is complicated by the appearance of a large number of low-energy constants, which are usually fitted to data. On the other hand, the large-$N_c$ limit imposes natural short-distance constraints on these low-energy constants, providing a parameter reduction. A Bayesian interpretation of the expected $1/N_c$ accuracy allows for an easy and efficient implementation of these constraints, using an augmented $\chi^2$. We apply this approach to the analysis of meson-meson scattering, in conjunction with chiral perturbation theory to one loop and coupled-channel unitarity, and show that it helps to largely reduce the many existing ambiguities and simultaneously provide an acceptable description of the available phase shifts.
arxiv:1407.3750
We propose a general Langevin equation describing the universal properties of synchronization transitions in extended systems. By means of theoretical arguments and numerical simulations we show that the proposed equation exhibits, depending on parameter values, either: i) a continuous transition in the bounded Kardar-Parisi-Zhang universality class, with a zero largest Lyapunov exponent at the critical point; ii) a continuous transition in the directed percolation class, with a negative Lyapunov exponent; or iii) a discontinuous transition (that is argued to be possibly just a transient effect). Cases ii) and iii) exhibit coexistence of synchronized and unsynchronized phases in a broad (fuzzy) region. This phenomenology reproduces almost all the reported features of synchronization transitions of coupled map lattices and other models, providing a unified theoretical framework for the analysis of synchronization transitions in extended systems.
arxiv:cond-mat/0301059
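The kind of synchronization transition of coupled map lattices mentioned above can be demonstrated with a minimal numerical experiment: two replicas of a diffusively coupled logistic map lattice are pulled toward each other with strength `gamma`; above a critical coupling the difference field decays to zero, below it the replicas stay desynchronized. All parameter values here are illustrative choices, not taken from the paper:

```python
# Two-replica synchronization experiment on a coupled logistic map lattice.
import random

def step(lattice, eps=0.3, r=4.0):
    """One update of a diffusively coupled logistic map lattice (periodic b.c.)."""
    n = len(lattice)
    f = [r * x * (1.0 - x) for x in lattice]
    return [(1 - eps) * f[i] + 0.5 * eps * (f[(i - 1) % n] + f[(i + 1) % n])
            for i in range(n)]

def sync_error(gamma, n=64, t_max=2000, seed=1):
    """Mean absolute difference between the two replicas after t_max steps."""
    rng = random.Random(seed)
    a = [rng.random() for _ in range(n)]
    b = [rng.random() for _ in range(n)]
    for _ in range(t_max):
        a, b = step(a), step(b)
        # Pull replica b toward replica a with coupling strength gamma.
        b = [(1 - gamma) * bi + gamma * ai for ai, bi in zip(a, b)]
    return sum(abs(ai - bi) for ai, bi in zip(a, b)) / n

print(sync_error(0.0))  # uncoupled replicas: error stays of order one
print(sync_error(0.9))  # strong coupling: difference field collapses
```

The order parameter of the transition is exactly this difference field, whose density plays the role of the activity in the directed-percolation picture.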
We study quark thermal recombination as a function of energy density during the evolution of a heavy-ion collision in a numerical model that reproduces aspects of QCD phenomenology. We show that, starting with a set of free quarks (or quarks and antiquarks), the probability to form colorless clusters of three quarks differs from that to form colorless quark-antiquark clusters, and that the former has a sharp jump at a critical energy density whereas the latter transits smoothly from the low to the high energy density domain. We interpret this as a quantitative difference in the production of baryons and mesons with energy density. We use this approach to compute the proton and pion spectra in a Bjorken scenario that incorporates the evolution of these probabilities with energy density, and therefore with proper time. From the spectra, we compute the proton-to-pion ratio and compare to data at the highest RHIC energies. We show that for a standard choice of parameters this ratio reaches one, though the maximum is very sensitive to the initial evolution proper time.
arxiv:0710.3629
Observations of soft X-ray transients in quiescence suggest the existence of heat sources in the crust of accreting neutron stars. Heat is thought to be released by electroweak and nuclear processes triggered by the burying of ashes of X-ray bursts. The heating is studied using a fully quantum approach taking consistently into account nuclear shell effects. We have followed the evolution of ashes made of $^{56}$Fe employing the nuclear energy-density functional theory. Both the outer and inner crusts are described using the same functional, thus ensuring a unified and thermodynamically consistent treatment. To assess the role of the neutron-matter constraint, we have employed the set of accurately calibrated Brussels-Montreal functionals BSk19, BSk20, and BSk21 and, for comparison, the SLy4 functional. Due to nuclear shell effects, the fully accreted crust is found to be much less stratified than in previous studies. In particular, large regions of the inner crust contain clusters with the magic number $Z = 14$. The heat deposited in the outer crust is tightly constrained by experimental atomic mass data. The shallow heating we obtain does not exceed $0.2$ MeV and is therefore not enough to explain the cooling of some soft X-ray transients. The total heat released in the crust is very sensitive to details of the nuclear structure and is predicted to lie in the range from $1.5$ MeV to $1.7$ MeV. The evolution of an accreted matter element, and therefore the location of heat sources, is governed to a large extent by the existence of nuclear shell closures. Ignoring these effects in the inner crust, the total heat falls to $\sim 0.6$ MeV. The neutron-matter constraint is also found to play a key role. The large amount of heat obtained by Steiner et al. (2012) could thus be traced back to unrealistic neutron-matter equations of state.
arxiv:1806.03861
. S., they hold only 24% of STEM jobs. Research suggests that exposing girls to female inventors at a young age has the potential to reduce the gender gap in technical STEM fields by half. Campaigns from organizations like the National Inventors Hall of Fame aimed to achieve a 50/50 gender balance in their youth STEM programs by 2020. The gender gap in Zimbabwe's STEM fields is also significant, with only 28.79% of women holding STEM degrees compared to 71.21% of men. ==== Intersectionality in STEM ==== STEM fields have been recognized as areas where underrepresentation and exclusion of marginalized groups are prevalent. STEM poses unique challenges related to intersectionality due to rigid norms and stereotypes, both in higher education and professional settings. These norms often prioritize objectivity and meritocracy while overlooking structural inequities, creating environments where individuals with intersecting marginalized identities face compounded barriers. For instance, individuals from traditionally underrepresented groups may experience a phenomenon known as "chilly climates", which refers to incidents of sexism, isolation, and pressure to prove themselves to peers and high-level academics. For minority populations in STEM, loneliness is experienced due to lack of belonging and social isolation. ==== American Competitiveness Initiative ==== In the State of the Union address on January 31, 2006, President George W. Bush announced the American Competitiveness Initiative. Bush proposed the initiative to address shortfalls in federal government support of educational development and progress at all academic levels in the STEM fields. In detail, the initiative called for significant increases in federal funding for advanced R&D programs (including a doubling of federal funding support for advanced research in the physical sciences through DOE) and an increase in U.S. higher education graduates within STEM disciplines.
The NASA Means Business competition, sponsored by the Texas Space Grant Consortium, furthers that goal. College students compete to develop promotional plans to encourage students in middle and high school to study STEM subjects, and to inspire professors in STEM fields to involve their students in outreach activities that support STEM education. The National Science Foundation has numerous programs in STEM education, including some for K-12 students such as the ITEST program that supports the Global Challenge Award ITEST program. STEM programs have been implemented in some Arizona schools. They implement higher cognitive skills for students and enable them to inquire and use techniques used by professionals in the STEM fields. Project Lead The Way (PLTW) is a provider of STEM education curricular programs
https://en.wikipedia.org/wiki/Science,_technology,_engineering,_and_mathematics
the color, and reduce the water content of food and liquid products. This process is mostly seen when processing milk, starch derivatives, coffee, fruit juices, vegetable pastes and concentrates, seasonings, sauces, sugar, and edible oil. Evaporation is also used in food dehydration processes. The purpose of dehydration is to prevent the growth of molds in food, which only grow when moisture is present. This process can be applied to vegetables, fruits, meats, and fish, for example. === Packaging === Food packaging technologies are used to extend the shelf-life of products, to stabilize food (preserve taste, appearance, and quality), and to keep the food clean, protected, and appealing to the consumer. This can be achieved, for example, by packaging food in cans and jars. Because food production creates large amounts of waste, many companies are transitioning to eco-friendly packaging to preserve the environment and attract the attention of environmentally conscious consumers. Some types of environmentally friendly packaging include plastics made from corn or potato, bio-compostable plastic and paper products which disintegrate, and recycled content. Even though transitioning to eco-friendly packaging has positive effects on the environment, many companies are finding other benefits, such as reducing excess packaging material, helping to attract and retain customers, and showing that companies care about the environment. === Energy for food processing === To increase the sustainability of food processing there is a need for energy efficiency and waste heat recovery. The replacement of conventional energy-intensive food processes with new technologies like thermodynamic cycles and non-thermal heating processes provides another potential to reduce energy consumption, reduce production costs, and improve the sustainability in food production.
=== Heat transfer in food processing === Heat transfer is important in the processing of almost every commercialized food product and is important to preserve the hygienic, nutritional, and sensory qualities of food. Heat transfer methods include induction, convection, and radiation. These methods are used to create variations in the physical properties of food when freezing, baking, or deep frying products, and also when applying ohmic heating or infrared radiation to food. These tools allow food engineers to innovate in the creation and transformation of food products. === Food safety management systems (FSMS) === A food safety management system (FSMS) is "a systematic approach to controlling food safety hazards within a business in order to ensure that the food product
https://en.wikipedia.org/wiki/Food_engineering
Kneser's theorem in the integers asserts that, denoting by $\underline{\mathrm{d}}$ the lower asymptotic density, if $\underline{\mathrm{d}}(X_1 + \cdots + X_k) < \sum_{i=1}^k \underline{\mathrm{d}}(X_i)$ then the sumset $X_1 + \cdots + X_k$ is \emph{periodic} for some positive integer $q$, i.e., up to finitely many elements it is invariant under translation by $q$. In this article we establish a similar statement for upper Buck density and compare it with the corresponding result due to Jin involving upper Banach density. We also provide the construction of sequences verifying counterintuitive properties with respect to the Buck density of a sequence $A$ and its sumset $A + A$.
arxiv:2410.13275
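The classical integer version of Kneser's theorem stated above is easy to illustrate numerically (this toy uses lower asymptotic density, not the upper Buck density that is the article's actual subject). Take $X_1 = X_2 = \{n : n \bmod 5 \in \{0, 1\}\}$: each has density $2/5$, the sumset has density $3/5 < 4/5$, and, as the theorem predicts, the sumset is periodic with period $q = 5$:

```python
# Finite-range illustration of Kneser's theorem with X = {n : n mod 5 in {0,1}}.
N = 10_000

X = [n for n in range(N) if n % 5 in (0, 1)]
sumset = sorted({a + b for a in X for b in X})

density_X = len(X) / N
# Restrict to sums below N to avoid edge effects from truncating X at N.
window = [s for s in sumset if s < N]
density_sum = len(window) / N

print(density_X)    # 0.4  (= 2/5)
print(density_sum)  # 0.6  (= 3/5, strictly below 2/5 + 2/5)
# Periodicity: membership in the sumset depends only on the residue mod 5.
residues = {s % 5 for s in window}
print(sorted(residues))  # [0, 1, 2]
```

Since residues of elements of $X$ lie in $\{0, 1\}$ mod 5, sums lie in $\{0, 1, 2\}$ mod 5, which is the 5-periodic structure the theorem guarantees.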
Projective Reed-Muller codes correspond to subcodes of the Reed-Muller code in which the polynomials being evaluated to yield codewords are restricted to be homogeneous. The generalized Hamming weights (GHW) of a code ${\cal C}$ identify, for each dimension $\nu$, the smallest size of the support of a subcode of ${\cal C}$ of dimension $\nu$. The GHW of a code are of interest in assessing the vulnerability of a code in a wiretap channel setting. They are also of use in bounding the state complexity of the trellis representation of the code. In prior work by the same authors, a code-shortening algorithm was employed to derive upper bounds on the GHW of binary projective Reed-Muller (PRM) codes. In the present paper, we derive a matching lower bound by adapting the proof techniques used originally for Reed-Muller (RM) codes by Wei. This results in a characterization of the GHW hierarchy of binary PRM codes.
arxiv:1806.02028
Distribution grid reliability and resilience have become a major topic of concern for utilities and their regulators. In particular, with the increase in severity of extreme events, utilities are considering major investments in distribution grid assets to mitigate the damage of highly impactful outages. Communicating the overall economic and risk-mitigation benefits of these investments to regulators is an important element of the approval process. Today, industry reliability and resilience planning practices are based largely on methods that do not take explicit account of risk. This paper proposes a practical method for identifying optimal combinations of investments in new line segments and storage devices while considering the balance between the risk associated with high-impact, low-probability events and the reliability related to routine failures. We show that this method can be scaled to address large-scale networks and demonstrate its benefits using a target feeder from the Commonwealth Edison reliability program.
arxiv:2209.14460
Modern recording techniques enable neuroscientists to simultaneously study neural activity across large populations of neurons, with capturing predictor-dependent correlations being a fundamental challenge in neuroscience. Moreover, the fact that input covariates often lie in restricted subdomains, according to experimental settings, makes inference even more challenging. To address these challenges, we propose a set of nonparametric mean-covariance regression models for high-dimensional neural activity with restricted inputs. These models reduce the dimensionality of neural responses by employing a lower-dimensional latent factor model, where both factor loadings and latent factors are predictor-dependent, to jointly model mean and covariance across covariates. The smoothness of neural activity across experimental conditions is modeled nonparametrically using two Gaussian processes (GPs), applied to both the loading basis and the latent factors. Additionally, to account for covariates lying in a restricted subspace, we incorporate graph information into the covariance structure. To flexibly infer the model, we use an MCMC algorithm to sample from posterior distributions. After validating and studying the properties of the proposed methods by simulation, we apply them to two neural datasets (local field potential and neural spiking data) to demonstrate the use of the models for continuous and count observations. Overall, the proposed methods provide a framework for jointly modeling covariate-dependent mean and covariance in high-dimensional neural data, especially when the covariates lie in restricted domains. The framework is general and can be easily adapted to various applications beyond neuroscience.
arxiv:2409.19717
Extending the understanding of Bose-Einstein condensate (BEC) physics to new geometries and topologies has a long and varied history in ultracold atomic physics. One such new geometry is that of a bubble, where a condensate would be confined to the surface of an ellipsoidal shell. Study of this geometry would give insight into new collective modes, self-interference effects, topology-dependent vortex behavior, dimensionality crossovers from thick to thin shells, and the properties of condensates pushed into the ultradilute limit. Here we discuss a proposal to implement a realistic experimental framework for generating shell-geometry BECs using radiofrequency dressing of magnetically trapped samples. Such a tantalizing state of matter is inaccessible terrestrially due to the distorting effect of gravity on experimentally feasible shell potentials. The debut of an orbital BEC machine (the NASA Cold Atom Laboratory, aboard the International Space Station) has enabled the operation of quantum-gas experiments in a regime of perpetual freefall, and thus has permitted the planning of microgravity shell-geometry BEC experiments. We discuss specific experimental configurations, applicable inhomogeneities and other experimental challenges, and outline potential experiments.
arxiv:1906.05885
In this paper, we investigate how field-programmable gate arrays can serve as hardware accelerators for real-time semantic segmentation tasks relevant for autonomous driving. Considering compressed versions of the ENet convolutional neural network architecture, we demonstrate a fully-on-chip deployment with a latency of 4.9 ms per image, using less than 30% of the available resources on a Xilinx ZCU102 evaluation board. The latency is reduced to 3 ms per image when increasing the batch size to ten, corresponding to the use case where the autonomous vehicle receives inputs from multiple cameras simultaneously. We show, through aggressive filter reduction, heterogeneous quantization-aware training, and an optimized implementation of convolutional layers, that the power consumption and resource utilization can be significantly reduced while maintaining accuracy on the Cityscapes dataset.
arxiv:2205.07690
We performed multiwavelength observations of the young planetary nebula (PN) M1-11 and obtained its elemental abundances, dust mass, and the evolutionary status of the central star. The AKARI/IRC, VLT/VISIR, and Spitzer/IRS spectra show features due to carbon-rich dust, such as the 3.3, 8.6, and 11.3 um features due to polycyclic aromatic hydrocarbons (PAHs), a smooth continuum attributable to amorphous carbon, and the broad 11.5 and 30 um features often ascribed to SiC and MgS, respectively. We also report the presence of an unidentified broad feature at 16-22 um, similar to the feature found in Magellanic Cloud PNe with either C-rich or O-rich gas-phase compositions. We identify for the first time in M1-11 spectral lines at 8.5 (blended with PAH), 17.3, and 18.9 um that we attribute to the C60 fullerene. This identification is strengthened by the fact that other Galactic PNe in which fullerenes are detected have similar central stars, similar gas-phase abundances, and a similar dust composition to M1-11. The weak radiation field due to the relatively cool central stars in these PNe may provide favorable conditions for fullerenes to survive in the circumstellar medium. Using the photoionization code Cloudy, combined with a modified blackbody, we have fitted the $\sim$0.1-90 um spectral energy distribution and determined the dust mass in the nebula to be $\sim3.5\times10^{-4}$ $M_\odot$. Our chemical abundance analysis and SED model suggest that M1-11 is perhaps a C-rich PN with a gas-phase C/O ratio of +0.19 dex, and that it evolved from a 1-1.5 $M_\odot$ star.
arxiv:1301.7104
Short gamma-ray bursts (GRBs) are known to be associated with binary neutron star (NS-NS) or black hole-neutron star (BH-NS) mergers. The detection of a gravitational wave and its associated electromagnetic counterparts, GW/GRB 170817A, has shown that interactions between relativistic jets and mildly relativistic ejecta influence the observed radiation. Previous studies simulated a uniform jet propagating through a homologously expanding wind; however, jets and disk outflows are launched together during accretion, making the interaction more complex. We investigate how the disk wind impacts jet propagation at distances $r \sim 10^8$-$10^{11}$ cm, using two-dimensional special relativistic hydrodynamical simulations. As initial conditions, we remap the outflows from general relativistic magnetohydrodynamical simulations of BH accretion disks that represent post-merger NS-NS or BH-NS remnants. We account for wind stratification and r-process nucleosynthesis, which alter the pressure profile from that of an ideal gas in the initial conditions. We found that: a) self-consistent wind pressure leads to significant changes in the jet collimation and cocoon expansion; b) the angular structure of the thermal and kinetic energy components in the jets, cocoons, and winds differs from that of simple homologous models; c) the temporal evolution of the structure reveals that the conversion of thermal to kinetic energy is different for each component in the system (jet, cocoon, and wind); d) dynamical ejecta alters the interaction between jets and disk winds. Our results show that the jet and cocoon structure is shaped by the accretion disk wind, which alters the effect of dynamical ejecta and may have an impact on the observed afterglow emission.
arxiv:2401.10094
This paper presents the first numerical solution to the nonlinear evolution equation for diffractive dissociation processes in deep inelastic scattering. It is shown that the solution depends on one scaling variable $\tau = Q^2/Q^{D\,2}_s(x, x_0)$, where $Q^D_s(x, x_0)$ is the saturation scale for the diffraction processes. The dependence of the saturation scale $Q^D_s(x, x_0)$ on both $x$ and $x_0$ is investigated ($y_0 = \ln(1/x_0)$ is the minimal rapidity gap for the diffraction process). The $x$-dependence of $Q^D_s$ turns out to be the same as that of the saturation scale in the total inclusive DIS cross section. In our calculations $Q^D_s(x, x_0)$ reveals only a mild dependence on $x_0$. The scaling is shown to hold for $x \ll x_0$ but is violated at $x \sim x_0$.
arxiv:hep-ph/0108239
The existence of one-way functions is one of the most fundamental assumptions in classical cryptography. In the quantum world, on the other hand, there is evidence that some cryptographic primitives can exist even if one-way functions do not. We therefore have the following important open problem in quantum cryptography: what is the most fundamental element in quantum cryptography? In this direction, Brakerski, Canetti, and Qian recently defined a notion called EFI pairs, which are pairs of efficiently generatable states that are statistically distinguishable but computationally indistinguishable, and showed its equivalence with some cryptographic primitives including commitments, oblivious transfer, and general multi-party computation. However, their work focuses on decision-type primitives and does not cover search-type primitives like quantum money and digital signatures. In this paper, we study properties of one-way state generators (OWSGs), which are a quantum analogue of one-way functions. We first revisit the definition of OWSGs and generalize it by allowing mixed output states. Then we show the following results. (1) We define a weaker version of OWSGs, weak OWSGs, and show that they are equivalent to OWSGs. (2) Quantum digital signatures are equivalent to OWSGs. (3) Private-key quantum money schemes (with pure money states) imply OWSGs. (4) Quantum pseudo one-time pad schemes imply both OWSGs and EFI pairs. (5) We introduce an incomparable variant of OWSGs, which we call secretly-verifiable and statistically-invertible OWSGs, and show that they are equivalent to EFI pairs.
arxiv:2210.03394
Denote by $\mathrm{Diff}_c(M)_0$ the identity component of the group of compactly supported $C^\infty$ diffeomorphisms of a connected $C^\infty$ manifold $M$, and by $\mathrm{Homeo}(\mathbb{R})$ the group of homeomorphisms of $\mathbb{R}$. We show that if $M$ is a closed manifold which fibers over $S^m$ ($m \geq 2$), then any homomorphism from $\mathrm{Diff}_c(M)_0$ to $\mathrm{Homeo}(\mathbb{R})$ is trivial.
arxiv:1309.0618
Video anomaly detection (VAD) is a significant computer vision problem. Existing deep neural network (DNN) based VAD methods mostly follow the route of frame reconstruction or frame prediction. However, the lack of mining and learning of higher-level visual features and temporal context relationships in videos limits the further performance of these two approaches. Inspired by video codec theory, we introduce a brand-new VAD paradigm to break through these limitations: first, we propose a new task of video event restoration based on keyframes, encouraging the DNN to infer multiple missing frames based on video keyframes so as to restore a video event. This can more effectively motivate the DNN to mine and learn potential higher-level visual features and comprehensive temporal context relationships in the video. To this end, we propose a novel U-shaped Swin Transformer network with dual skip connections (USTN-DSC) for video event restoration, where a cross-attention and a temporal upsampling residual skip connection are introduced to further assist in restoring complex static and dynamic motion object features in the video. In addition, we propose a simple and effective adjacent frame difference loss to constrain the motion consistency of the video sequence. Extensive experiments on benchmarks demonstrate that USTN-DSC outperforms most existing methods, validating the effectiveness of our method.
arxiv:2304.05112
Low-rank adaptations (LoRA) are widely used to fine-tune large models across various domains for specific downstream tasks. While task-specific LoRAs are often available, concerns about data privacy and intellectual property can restrict access to training data, limiting the acquisition of a multi-task model through gradient-based training. In response, LoRA merging presents an effective solution by combining multiple LoRAs into a unified adapter while maintaining data privacy. Prior works on LoRA merging primarily frame it as an optimization problem, yet these approaches face several limitations, including a rough assumption about the input features utilized in optimization, massive sample requirements, and an unbalanced optimization objective. These limitations can significantly degrade performance. To address these, we propose a novel optimization-based method, named IterIS: 1) We formulate LoRA merging as an advanced optimization problem to mitigate the rough assumption. Additionally, we employ an iterative inference-solving framework in our algorithm, which can progressively refine the optimization objective for improved performance. 2) We introduce an efficient regularization term to reduce the need for massive samples (requiring only 1-5% of the unlabeled samples compared to prior methods). 3) We utilize adaptive weights in the optimization objective to mitigate potential imbalances in the LoRA merging process. Our method demonstrates significant improvements over multiple baselines and state-of-the-art methods in composing tasks for text-to-image diffusion, vision-language models, and large language models. Furthermore, our layer-wise algorithm can achieve convergence with minimal steps, ensuring efficiency in both memory and computation.
arxiv:2411.15231
We study direct searches for additional Higgs bosons in multi-top-quark events at the LHC Run-II, its luminosity-upgraded version with 3000 fb$^{-1}$, and the International Linear Collider (ILC) with a collision energy of 1 TeV. Additional Higgs bosons are predicted in all kinds of extended Higgs sectors, and their detection at collider experiments would be a clear signature of physics beyond the standard model. We consider two-Higgs-doublet models with a discrete symmetry as benchmark models. If these additional Higgs bosons are heavy enough, decay modes including top quarks can be dominant, and searches in multi-top-quark events become an important probe of the Higgs sector. We evaluate the discovery reach in the parameter space of the model, and find that there are parameter regions that searches at the LHC with 3000 fb$^{-1}$ cannot cover, but searches at the ILC 1 TeV run can. The combination of direct searches at the LHC and the ILC is useful for exploring extended Higgs sectors.
arxiv:1505.01089
It has long been known that QCD undergoes a deconfining phase transition at high temperature. One consequent feature of this new quark-gluon phase is that hadrons become unbound. In this talk, meson correlation functions at non-zero momentum are studied in the deconfined phase using the maximum entropy method.
arxiv:1010.0845
Ultrafast optical techniques allow the study of ultrafast molecular dynamics involving both nuclear and electronic motion. To support interpretation, theoretical approaches are needed that can describe both the nuclear and electron dynamics. Hence, we revisit and expand our ansatz for the coupled description of the nuclear and electron dynamics in molecular systems (NEMol). In this purely quantum mechanical ansatz, the quantum-dynamical description of the nuclear motion is combined with the calculation of the electron dynamics in the eigenfunction basis. The NEMol ansatz is applied to simulate the coupled dynamics of the molecule NO2 in the vicinity of a conical intersection (CoIn), with a special focus on the coherent electron dynamics induced by the non-adiabatic coupling. Furthermore, we aim to control the dynamics of the system when passing the CoIn. The control scheme relies on the carrier-envelope phase (CEP) of a few-cycle IR pulse. The laser pulse influences both the movement of the nuclei and the electrons during the population transfer through the CoIn.
arxiv:2102.13547
Music is usually highly structured, and it is still an open question how to design models which can successfully learn to recognize and represent musical structure. A fundamental problem is that structurally related patterns can have very distinct appearances, because the structural relationships are often based on transformations of musical material, like chromatic or diatonic transposition, inversion, retrograde, or rhythm change. In this preliminary work, we study the potential of two unsupervised learning techniques, restricted Boltzmann machines (RBMs) and gated autoencoders (GAEs), to capture pre-defined transformations from constructed data pairs. We evaluate the models by using the learned representations as inputs in a discriminative task where, for a given type of transformation (e.g. diatonic transposition), the specific relation between two musical patterns must be recognized (e.g. an upward transposition of diatonic steps). Furthermore, we measure the reconstruction error of the models when reconstructing transformed musical patterns. Lastly, we test the models in an analogy-making task. We find that it is difficult to learn musical transformations with the RBM and that the GAE is much more adequate for this task, since it is able to learn representations of specific transformations that are largely content-invariant. We believe these results show that models such as GAEs may provide the basis for more encompassing music analysis systems, by endowing them with a better understanding of the structures underlying music.
arxiv:1708.05325
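The transformations named in the abstract above (chromatic transposition, retrograde) are simple to state precisely even though their surface appearance in real music varies widely. As a hedged illustration of what the learned representations must capture, here is a minimal sketch in plain Python; the function names and example pattern are hypothetical, not taken from the paper:

```python
# Two of the structural transformations mentioned in the abstract,
# expressed on pitch classes 0-11 (C = 0, C# = 1, ..., B = 11).

def chromatic_transpose(pitch_classes, semitones):
    """Shift each pitch class up by `semitones`, wrapping mod 12."""
    return [(p + semitones) % 12 for p in pitch_classes]

def retrograde(pitch_classes):
    """Play the pattern backwards in time."""
    return list(reversed(pitch_classes))

# C major triad arpeggio: C (0), E (4), G (7)
pattern = [0, 4, 7]
print(chromatic_transpose(pattern, 2))  # up a whole tone -> [2, 6, 9]
print(retrograde(pattern))              # -> [7, 4, 0]
```

Both outputs represent the "same" musical material as the input, which is exactly why a content-invariant representation of the transformation (rather than of the notes) is needed.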
The optical light that is generated simultaneously with the X-rays and gamma-rays during a gamma-ray burst (GRB) provides clues about the nature of the explosions that occur as massive stars collapse to form black holes. We report on the bright optical flash and fading afterglow from the powerful burst GRB 130427A and present a comparison with the properties of the gamma-ray emission that shows correlation of the optical and >100 MeV photon flux light curves during the first 7000 seconds. We attribute this correlation to co-generation in an external shock. The simultaneous, multi-color optical observations are best explained at early times by reverse-shock emission generated in the relativistic burst ejecta as it collides with surrounding material, and at late times by a forward shock traversing the circumburst environment. The link between the optical afterglow and the >100 MeV emission suggests that nearby early-peaked afterglows will be the best candidates for studying GRB emission at GeV/TeV energies.
arxiv:1311.5489
We present modeling and interpretation of the continuum and emission lines for a sample of 51 unobscured type 1 active galactic nuclei (AGN). All of these AGN have high-quality spectra from both XMM-Newton and the Sloan Digital Sky Survey (SDSS). We extend the wavelength coverage where possible by adding simultaneous UV data from the OM onboard XMM-Newton. Our sample is selected based on low reddening in the optical and low gas columns implied by their X-ray spectra. They also lack clear signatures of the presence of a warm absorber. Therefore the observed characteristics of this sample are likely to be directly related to the intrinsic properties of the central engine. We perform multi-component spectral fitting for the strong optical emission lines and the whole optical spectra. We fit the combined optical, UV, and X-ray data by applying a new broadband SED model which comprises the accretion disc emission, low-temperature optically thick Comptonisation, and a hard X-ray tail, by introducing a corona radius (Done et al. 2011). We find that in order to fit the data, the model often requires an additional long-wavelength optical continuum component, whose origin is discussed in this paper. We also find that the photo-recombination edge of the Balmer continuum shifts and broadens beyond the standard limit of 3646 Å, implying an electron number density which is far higher than that in the broad-line-region clouds. Our results indicate that the narrow-line Seyfert 1s in this sample tend to have lower black hole masses, higher Eddington ratios, softer 2-10 keV band spectra, lower 2-10 keV luminosities, and higher $\alpha_{\rm ox}$, compared with typical broad-line Seyfert 1s (BLS1s), although their bolometric luminosities are similar. We illustrate these differences in properties by forming an average SED for three subsamples, based on the FWHM velocity width of the H$\beta$ emission line.
arxiv:1109.2069
We give an induction-free axiom system for Diophantine correct open induction. We relate the problem of whether a finitely generated ring of Puiseux polynomials is Diophantine correct to a problem about the value distribution of a tuple of semialgebraic functions with integer arguments. We use this result, and a theorem of Bergelson and Leibman on generalized polynomials, to identify a class of Diophantine correct subrings of the field of descending Puiseux series with real coefficients.
arxiv:1010.3798
An expression for the first variation of the area functional of the second fundamental form is given for a hypersurface in a semi-Riemannian space. The concept of the "mean curvature of the second fundamental form" is then introduced. Some characterisations of extrinsic hyperspheres in terms of this curvature are given.
arxiv:0709.2107
We prove that for any positive integers $k$ and $d$, if a graph $G$ has maximum average degree at most $2k + \frac{2d}{d+k+1}$, then $G$ decomposes into $k+1$ pseudoforests $C_1, \ldots, C_{k+1}$ such that for some $i$, every connected component $C$ of $C_i$ satisfies $e(C) \leq d$.
arxiv:1905.02600
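The quantity in the hypothesis above, the maximum average degree $\mathrm{mad}(G) = \max_{H \subseteq G} 2|E(H)|/|V(H)|$, can be checked by brute force on tiny graphs. The following is a hypothetical illustration (not from the paper) on $K_4$, whose maximum average degree of 3 exactly meets the bound $2k + \frac{2d}{d+k+1} = 3$ for $k = 1$, $d = 2$:

```python
from itertools import combinations

def max_average_degree(vertices, edges):
    """Brute-force mad(G) = max over nonempty subgraphs H of 2|E(H)|/|V(H)|.
    Exponential in |V|, so only suitable for tiny illustrative graphs."""
    best = 0.0
    for r in range(1, len(vertices) + 1):
        for sub in combinations(vertices, r):
            s = set(sub)
            e = sum(1 for (u, v) in edges if u in s and v in s)
            best = max(best, 2 * e / len(s))
    return best

# K4: 4 vertices and all 6 edges; the densest subgraph is K4 itself,
# with average degree 2*6/4 = 3.
k4_edges = list(combinations(range(4), 2))
print(max_average_degree(range(4), k4_edges))  # 3.0
# With k = 1, d = 2 the theorem's bound is 2k + 2d/(d+k+1) = 2 + 4/4 = 3,
# so the theorem applies: K4 decomposes into two pseudoforests, one of
# whose components all have at most 2 edges.
```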
We bound the locations of outermost minimal surfaces in geometrostatic manifolds whose ADM mass is small relative to the separation between the black holes, and prove the intrinsic flat stability of the positive mass theorem in this setting.
arxiv:1707.03008
We extract the heavy-quark diffusion coefficient $\kappa$ and the resulting momentum broadening $\langle p^2 \rangle$ in a far-from-equilibrium non-Abelian plasma. We find several features in the time dependence of the momentum broadening: a short initial rapid growth of $\langle p^2 \rangle$, followed by linear growth with time due to Langevin-type dynamics, and damped oscillations around this growth at the plasmon frequency. We show that these novel oscillations are not easily explained using perturbative techniques but result from an excess of gluons at low momenta. These oscillations are therefore a gauge-invariant confirmation of the infrared enhancement we had previously observed in gauge-fixed correlation functions. We argue that the kinetic theory description of such systems becomes less reliable in the presence of this IR enhancement.
arxiv:2005.02418
Trapped-ion hardware based on the magnetic gradient induced coupling (MAGIC) scheme is emerging as a promising platform for quantum computing. Nevertheless, in this, as in any other, quantum-computing platform, many technical questions still have to be resolved before large-scale and error-tolerant applications are possible. In this work, we present a thorough discussion of the structure and effects of higher-order terms in the MAGIC setup, which can occur due to anharmonicities in the external potential of the ion crystal (e.g., through Coulomb repulsion) or through curvature of the applied magnetic field. These terms generate systematic shifts in the leading-order interactions and take the form of three-spin couplings, two-spin couplings, local fields, as well as diverse phonon-phonon conversion mechanisms. We find that most of these are negligible in realistic situations, with only two contributions that need careful attention. First, there are undesired longitudinal fields contributing shifts to the resonance frequency, whose strength increases with chain length and phonon occupation numbers; while their mean effect can easily be compensated by additional $z$ rotations, phonon number fluctuations need to be avoided for precise gate operations. Second, anharmonicities of the Coulomb interaction can lead to well-known two-to-one conversions of phonon excitations. Both of these error terms can be mitigated by sufficiently cooling the phonons to the ground state. Our detailed analysis constitutes an important contribution on the way to making magnetic-gradient trapped-ion quantum technology fit for large-scale applications, and it may inspire new ways to purposefully design interaction terms.
arxiv:2409.10498
The structure of an online social network in most cases cannot be described just by the links between its members. We study online social networks in which members may have a certain attitude, positive or negative, toward each other, so that the network consists of a mixture of both positive and negative relationships. Our goal is to predict the sign of a given relationship based on the evidence provided in the current snapshot of the network. More precisely, using machine learning techniques we develop a model that, after being trained on a particular network, predicts the sign of an unknown or hidden link. The model uses relationships and influences from peers as evidence for the guess; however, the set of peers used is not predefined but rather learned during the training process. We use quadratic correlation between peer members to train the predictor. The model is tested on popular online datasets such as Epinions, Slashdot, and Wikipedia. In many cases it shows almost perfect prediction accuracy. Moreover, our model can also be efficiently updated as the underlying social network evolves.
arxiv:1212.1633
To reveal the importance of temporal precision in ground-truth audio event labels, we collected precise (~0.1 sec resolution) "strong" labels for a portion of the AudioSet dataset. We devised a temporally strong evaluation set (including explicit negatives of varying difficulty) and a small strongly labeled training subset of 67k clips (compared to the original dataset's 1.8M clips labeled at 10 sec resolution). We show that fine-tuning with a mix of weakly and strongly labeled data can substantially improve classifier performance, even when evaluated using only the original weak labels. For a ResNet50 architecture, d' on the strong evaluation data including explicit negatives improves from 1.13 to 1.41. The new labels are available as an update to AudioSet.
arxiv:2105.07031
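The d' figures quoted in the abstract above come from signal detection theory: d' is the separation between the hit-rate and false-alarm-rate operating points in standard-normal (z-score) units. A minimal sketch of the metric, using Python's standard library; the example rates are illustrative, not the paper's measurements:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse CDF of the standard normal distribution."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative operating point (hypothetical rates):
print(round(d_prime(0.8, 0.2), 3))  # 1.683
```

Note that a classifier guessing at chance (hit rate equal to false-alarm rate) scores d' = 0, and higher d' means better separation of positives from negatives, which is why an improvement from 1.13 to 1.41 is substantial.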
The demands of modern electronic components require advanced computing platforms for efficient information processing, to realize in-memory operations with a high density of data storage capabilities, toward developing alternatives to von Neumann architectures. Herein, we demonstrate the multifunctionality of monolayer MoS2 memtransistors, which can be used as high-geared intrinsic transistors at room temperature; at high temperature (>350 K), however, they exhibit synaptic multi-level memory operations. The temperature-dependent memory mechanism is governed by interfacial physics, which depends solely on the gate-field-modulated ion dynamics and charge transfer at the MoS2/dielectric interface. We propose a non-volatile memory application using a single FET device, where thermal energy can be harnessed to aid the memory functions with multi-level (3-bit) storage capabilities. Furthermore, our devices exhibit linearity and symmetry in conductance weight updates when subjected to electrical potentiation and depression. This feature has enabled us to attain high classification accuracy while training and testing on the Modified National Institute of Standards and Technology (MNIST) dataset through artificial neural network simulation. This work paves the way for new avenues in 2D semiconductors toward reliable data processing and storage with high-packing-density arrays for brain-inspired artificial learning.
arxiv:2305.02259
Let $G$ be a co-amenable compact quantum group. We show that a right coideal of $G$ is of quotient type if and only if it is the range of a conditional expectation preserving the Haar state and is globally invariant under the left action of the dual discrete quantum group. We apply this result to the theory of Poisson boundaries introduced by Izumi for discrete quantum groups and generalize a work of Izumi-Neshveyev-Tuset on $SU_q(N)$ to co-amenable compact quantum groups with commutative fusion rules. More precisely, we prove that the Poisson integral is an isomorphism between the Poisson boundary and the right coideal of quotient type by the maximal quantum subgroup of Kac type. In particular, the Poisson boundary and the quantum flag manifold are isomorphic for any q-deformed classical compact Lie group.
arxiv:math/0611327
We report the discovery of a $1^\circ$-scale X-ray plume in the northern Galactic center (GC) region observed with Suzaku. The plume is located at $(l, b) \sim (0^\circ\!.2, 0^\circ\!.6)$, east of the radio lobe reported by previous studies. No significant X-ray excesses are found inside or to the west of the radio lobe. The spectrum of the plume exhibits strong emission lines from highly ionized Mg, Si, and S, and is reproduced by a thin thermal plasma model with $kT \sim 0.7$ keV and solar metallicity. There is no signature of non-equilibrium ionization. The unabsorbed surface brightness is $3\times10^{-14}$ erg cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$ in the 1.5-3.0 keV band. Strong interstellar absorption in the soft X-ray band indicates that the plume is not a foreground source but is at the GC distance, giving a physical size of $\sim$100 pc, a density of 0.1 cm$^{-3}$, a thermal pressure of $1\times10^{-10}$ erg cm$^{-3}$, a mass of 600 $M_\odot$, and a thermal energy of $7\times10^{50}$ erg. From the apparent association with polarized radio emission, we propose that the X-ray plume is a magnetized hot gas outflow from the GC.
arxiv:1903.02571
The classification of galaxies as spirals or ellipticals is a crucial task in understanding their formation and evolution. With the arrival of large-scale astronomical surveys, such as the Sloan Digital Sky Survey (SDSS), astronomers now have access to images of a vast number of galaxies. However, the visual inspection of these images is an impossible task for humans due to the sheer number of galaxies to be analyzed. To solve this problem, the Galaxy Zoo project was created to engage thousands of citizen scientists in classifying the galaxies based on their visual features. In this paper, we present a machine learning model for galaxy classification using numerical data from the Galaxy Zoo [5] project. Our model utilizes a convolutional neural network architecture to extract features from galaxy images and classify them into spirals or ellipticals. We demonstrate the effectiveness of our model by comparing its performance with that of human classifiers using a subset of the Galaxy Zoo dataset. Our results show that our model achieves high accuracy in classifying galaxies and has the potential to significantly enhance our understanding of the formation and evolution of galaxies.
arxiv:2312.00184
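A minimal sketch of the classification task described above, under loud assumptions: the abstract does not specify the model's features or architecture, so this toy uses a plain logistic classifier on two synthetic, hypothetical morphology features (stand-ins for Galaxy Zoo numerical data), not the paper's CNN or its dataset.

```python
import numpy as np

# Toy spiral-vs-elliptical classifier on synthetic data (illustrative only).
rng = np.random.default_rng(0)
n = 200
spirals = rng.normal([0.3, 0.4], 0.08, size=(n, 2))      # label 0
ellipticals = rng.normal([0.7, 0.8], 0.08, size=(n, 2))  # label 1
X = np.vstack([spirals, ellipticals])
y = np.repeat([0.0, 1.0], n)

# Plain gradient descent on the logistic loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = np.mean(pred == y)
```

On well-separated synthetic clusters such as these, even this linear baseline reaches high accuracy; the point is only to show the shape of the supervised pipeline, not to reproduce the paper's results.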
We develop a theory for teleporting an unknown quantum state using entanglement between two distant parties. Our theory takes into account experimental limitations due to the contribution of multi-photon pair production in the parametric down-conversion source, the inefficiency and dark counts of the detectors, and channel losses. We use a linear optics setup for quantum teleportation of an unknown quantum state, with the sender performing a Bell-state measurement. Our theory provides a model that allows experimentalists to optimize the fidelity by adjusting the experimental parameters. We apply our model to a recent quantum teleportation experiment, and the results obtained from our model are in good agreement with the experimental results.
arxiv:1508.01141
A lattice Boltzmann scheme is presented which recovers the dynamics of nematic and chiral liquid crystals; the method essentially gives solutions to the Qian-Sheng equations for the evolution of the velocity and tensor order-parameter fields. The resulting algorithm is able to include five independent Leslie viscosities, a Landau-de Gennes free energy which introduces three or more elastic constants, temperature-dependent order parameter and viscosity coefficients, surface anchoring, flexoelectricity and order electricity, and chirality. When combined with a solver for the Maxwell equations associated with the electric field, the algorithm provides a full 'device solver' for a liquid crystal display. Coupled lattice Boltzmann schemes are used to capture the evolution of the fast momentum and slow director motions in a computationally efficient way. The method is shown to give results in close agreement with analytical results for a number of validating examples. Its use is illustrated through the simulation of the motion of defects in a zenithal bistable liquid crystal device.
arxiv:cond-mat/0602421
This paper studies the short-term ergodic behaviour of discretizations of circle expanding maps. More precisely, we prove asymptotics for the distance between the $t$-th iterate of Lebesgue measure under the dynamics $f$ and the $t$-th iterate of the uniform measure on the grid of order $n$ under the discretization on this grid, when $t$ is fixed and the order $n$ goes to infinity. This is done under explicit genericity hypotheses on the dynamics, and the distance between measures is measured by means of the Cramér distance. The proof is based on a study of the corresponding linearized problem, where the problem is translated into a question of equirepartition on tori of dimension exponential in $t$. A numerical study associated with this work is presented in arXiv:2206.08000 [math.DS].
arxiv:2206.07991
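The setup above can be illustrated numerically. The sketch below uses a hypothetical expanding map of degree 2 (an assumption; the paper's genericity conditions are not checked here), discretizes it on the grid $\{i/N\}$, and compares the image of the uniform grid measure with a fine reference approximation of the true pushforward via an L2-between-CDFs (Cramér-type) distance.

```python
import numpy as np

# Hypothetical expanding circle map of degree 2 (monotone lift, f' >= 1.37).
def f(x):
    return (2.0 * x + 0.1 * np.sin(2 * np.pi * x)) % 1.0

# Cramér-type distance: L2 distance between empirical CDFs on [0, 1).
def cramer(samples_a, samples_b, grid=1000):
    t = np.linspace(0, 1, grid, endpoint=False)
    cdf_a = np.searchsorted(np.sort(samples_a), t) / len(samples_a)
    cdf_b = np.searchsorted(np.sort(samples_b), t) / len(samples_b)
    return np.sqrt(np.mean((cdf_a - cdf_b) ** 2))

N = 2 ** 12
grid_pts = np.arange(N) / N
# One step of the discretized dynamics: round f back onto the order-N grid.
discretized = np.round(N * f(grid_pts)) % N / N
# Fine reference approximation of the exact pushforward of Lebesgue measure.
exact = f(np.arange(2 ** 16) / 2 ** 16)
d = cramer(discretized, exact)
```

For large $N$ the rounding moves each image point by at most $1/(2N)$, so the distance is small, consistent with it vanishing as $n \to \infty$ for fixed $t$.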
The notion of Zariski pairs for projective curves in $\mathbb{P}^2$ has been known since the pioneering paper of Zariski \cite{zariski}; for further developments, we refer to the references in \cite{bartolo}. In this paper, we introduce a notion of Zariski pairs of links in the class of isolated hypersurface singularities. Such a pair is canonically produced from a Zariski (or weak Zariski) pair of curves $C = \{f(x,y,z) = 0\}$ and $C' = \{g(x,y,z) = 0\}$ of degree $d$ by simply adding a monomial $z^{d+m}$ to $f$ and $g$ so that the corresponding affine hypersurfaces have isolated singularities at the origin. They have the same zeta function and the same Milnor number (\cite{almost}). We give new examples of Zariski pairs which have the same $\mu^*$ sequence and the same zeta function, but whose two functions belong to different connected components of the $\mu$-constant strata (Theorem \ref{mu-zariski}). The two link 3-folds are not diffeomorphic; they are distinguished by their first homology, which implies that the Jordan forms of their monodromies are different (Theorem \ref{main2}). We start from weak Zariski pairs of projective curves to construct new Zariski pairs of surfaces which have non-diffeomorphic link 3-folds. We also prove that the hypersurface pair constructed from a Zariski pair gives diffeomorphic links (Theorem \ref{main3}).
arxiv:2203.10684
Bounds on turbulent averages in shear flows can be derived from the Navier-Stokes equations by a mathematical approach called the background method. Bounds that are optimal within this method can be computed at each Reynolds number $Re$ by numerical optimization subject to a spectral constraint, which requires a quadratic integral to be nonnegative for all possible velocity fields. Past authors have eased computations by enforcing the spectral constraint only for streamwise-invariant (2.5D) velocity fields, assuming this gives the same result as enforcing it for three-dimensional (3D) fields. Here we compute optimal bounds over 2.5D fields and then verify, without doing computations over 3D fields, that the bounds indeed apply to 3D flows. One way is to directly check that an optimizer computed using 2.5D fields satisfies the spectral constraint for all 3D fields. We introduce a criterion that gives a second way, applicable to planar shear flow models with a certain symmetry, based on a theorem of Busse (ARMA 47:28, 1972) for the energy stability problem. The advantage of checking this criterion, as opposed to directly checking the 3D constraint, is lower computational cost and more natural extrapolation to large $Re$. We compute optimal upper bounds on friction coefficients for the wall-bounded Kolmogorov flow known as Waleffe flow, and for plane Couette flow; these require lower bounds on dissipation in the first model and upper bounds in the second. For Waleffe flow, all bounds computed using 2.5D fields satisfy our criterion, so they hold for 3D flows. For Couette flow, where bounds have previously been computed using 2.5D fields by Plasting and Kerswell (JFM 477:363, 2003), our criterion holds only up to moderate $Re$, but at larger $Re$ we directly verify the 3D spectral constraint. This supports the assumption by Plasting and Kerswell that their bounds hold for 3D flows.
arxiv:2503.04005
Certain types of bilinearly defined sets in $\mathbb{R}^n$ exhibit a higher degree of linearity than is apparent by inspection.
arxiv:math/0208131
Let $k = \mathbb{F}_q(C)$ be the global function field of rational functions over a smooth projective curve $C$ defined over a finite field $\mathbb{F}_q$. The ring of regular functions on $C - S$, where $S \neq \emptyset$ is any finite set of closed points on $C$, is a Dedekind domain $\mathcal{O}_S$ of $k$. For a semisimple $\mathcal{O}_S$-group $\underline{G}$ with a smooth fundamental group $\underline{F}$, we aim to describe both the set of genera of $\underline{G}$ and its principal genus (the latter if $\underline{G} \otimes_{\mathcal{O}_S} k$ is isotropic at $S$) in terms of abelian groups depending only on $\mathcal{O}_S$ and $\underline{F}$. This leads to a necessary and sufficient condition for the Hasse local-global principle to hold for certain $\underline{G}$. We also use it to express the Tamagawa number $\tau(G)$ of a semisimple $k$-group $G$ by the Euler-Poincaré invariant, which facilitates the computation of $\tau(G)$ for twisted $k$-groups.
arxiv:1702.04922
We study time-uniform statistical inference for parameters in stochastic approximation (SA), which encompasses a wide range of applications in optimization and machine learning. To that end, we analyze the almost-sure convergence rates of the averaged iterates to a scaled sum of Gaussians in both linear and nonlinear SA problems. We then construct three types of asymptotic confidence sequences that are valid uniformly across all times with coverage guarantees, in the asymptotic sense that the starting time is sufficiently large. These coverage guarantees remain valid if the unknown covariance matrix is replaced by its plug-in estimator, and we conduct experiments to validate our methodology.
arxiv:2410.15057
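A minimal numerical sketch of the ingredients named above, under assumptions: this is plain linear SA with Polyak-Ruppert averaging and a fixed-time plug-in normal interval, not the paper's time-uniform confidence sequences.

```python
import numpy as np

# Linear SA targeting the mean theta of noisy observations y_k:
#   x_{k+1} = x_k - a_k (x_k - y_k),   a_k ~ k^{-0.7}.
rng = np.random.default_rng(1)
theta = 2.0                      # true target
T = 20000
x = 0.0
iterates = np.empty(T)
for k in range(T):
    y = theta + rng.normal()     # unit-variance observation noise
    a = 1.0 / (k + 1) ** 0.7     # slowly decaying step size
    x = x - a * (x - y)
    iterates[k] = x

xbar = iterates.mean()           # Polyak-Ruppert averaged iterate
se = 1.0 / np.sqrt(T)            # plug-in std error (variance known = 1 here)
ci = (xbar - 1.96 * se, xbar + 1.96 * se)
```

The averaged iterate is asymptotically normal at the $1/\sqrt{T}$ rate, which is what makes plug-in intervals (and, with more work, anytime-valid sequences) possible.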
We study operator dynamics in Brownian quantum many-body models with $q$-local interactions. The operator dynamics are characterized by the time-dependent size distribution, for which we derive an exact master equation in both the Brownian Majorana Sachdev-Ye-Kitaev (SYK) model and the spin model for general $q$. This equation can be solved numerically for large systems. Additionally, we obtain the analytical size distribution in the large-$N$ limit for arbitrary initial conditions and $q$. The distributions for both models take the same form, related to the $\chi^2$ distribution by a change of variable, and depend strongly on the initial condition. For small initial sizes, the operator dynamics are characterized by a broad distribution that narrows as the initial size increases. When the initial operator size is below $q-2$ for the Majorana model or $q-1$ for the spin model, the distribution diverges in the small-size limit at all times. The mean size of all operators, which can be directly measured by the out-of-time-ordered correlator, grows exponentially at early times. In the late-time regime, the mean size of a single Majorana or Pauli operator for all $q$ decays exponentially as $te^{-t}$, much more slowly than all other operators, which decay as $e^{-t}$. At finite $N$, the size distribution exhibits modulo-dependent branching within a symmetry sector for the $q \geq 8$ Majorana model and the $q \geq 4$ spin model. Our results reveal universal features of operator dynamics in $q$-local quantum many-body systems.
arxiv:2408.11737
We formulate a conjecture for the three different Lax operators that describe the bosonic sectors of the three possible $N=2$ supersymmetric integrable hierarchies with $N=2$ super $W_n$ second Hamiltonian structure. We check this conjecture in the simplest cases, then verify it in general in one of the three possible supersymmetric extensions. To this end we construct the $N=2$ supersymmetric extensions of the generalized non-linear Schrödinger hierarchy by exhibiting the corresponding super Lax operator. To find the correct Hamiltonians we are led to a new definition of super-residues for degenerate $N=2$ supersymmetric pseudodifferential operators. We have found a new non-polynomial Miura-like realization of the $N=2$ superconformal algebra in terms of two bosonic chiral-anti-chiral free superfields.
arxiv:hep-th/9604165
Convolutional neural networks (CNNs) have become the state of the art in supervised learning for vision tasks. Their convolutional filters are of paramount importance, since they allow patterns to be learned regardless of their locations in the input images. When facing highly irregular domains, generalized convolutional operators based on an underlying graph structure have been proposed. However, these operators do not exactly match standard ones on grid graphs, and they introduce unwanted additional invariance (e.g., with regard to rotations). We propose a novel approach to generalizing CNNs to irregular domains using weight sharing and graph-based operators. Through experiments, we show that these models resemble CNNs on regular domains and offer better performance than multilayer perceptrons on distorted ones.
arxiv:1606.01166
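The mismatch mentioned above between graph operators and grid convolutions can be seen in a few lines. The sketch below (an illustration, not the paper's operator) propagates a signal on a ring graph with its adjacency matrix: the result equals a 1-D convolution with the symmetric kernel [1, 0, 1], but the adjacency matrix alone cannot distinguish the left neighbour from the right one, which is exactly the kind of unwanted invariance that weight sharing is meant to remove.

```python
import numpy as np

# Ring graph on n nodes: each node is adjacent to its two neighbours.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = 1.0
    A[i, (i + 1) % n] = 1.0

x = np.arange(n, dtype=float)          # signal on the nodes

# Graph-based propagation: one multiplication by the adjacency matrix.
graph_out = A @ x

# Equivalent circular 1-D convolution with the symmetric kernel [1, 0, 1].
conv_out = np.array([x[(i - 1) % n] + x[(i + 1) % n] for i in range(n)])
```

An asymmetric grid kernel such as [1, 0, -1] has no counterpart in a single adjacency multiplication, which motivates attaching separately shared weights to edge directions.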
We compute the thermodynamic properties of the glass phase in a binary mixture of soft spheres. Our approach generalizes to mixtures the replica strategy recently proposed by Mézard and Parisi, which provides a first-principles statistical mechanics computation of the thermodynamics of glasses. The method starts from the inter-atomic potentials and translates the problem into the study of a molecular liquid. We compare our analytical predictions to numerical simulations, focusing on the values of the thermodynamic transition and the configurational entropy.
arxiv:cond-mat/9903129
We consider the coupling of a single Dirac fermion to the three-component unit vector field which appears as an order parameter in the Faddeev model. Classically, the coupling is determined by requiring that it preserve a certain local frame independence. Quantum mechanically, however, the separate left and right chiral fermion number currents suffer from a frame anomaly. We employ this anomaly to compute the fermion number of a knotted soliton. The result coincides with the self-linking number of the soliton. In particular, the anomaly structure of the fermions relates directly to the inherent chiral properties of the soliton. Our result suggests that interactions between fermions and knotted solitons can lead to phenomena akin to the Callan-Rubakov effect.
arxiv:hep-th/0212053
We have established a plot of the anion-height dependence of $T_c$ for the typical Fe-based superconductors. The plot shows a symmetric curve with a peak around 1.38 Å. Data at both ambient pressure and high pressure obey this single curve. This plot should serve as a key strategy both for understanding the mechanism of Fe-based superconductivity and for the search for new Fe-based superconductors with higher $T_c$.
arxiv:1001.1801
We continue our investigation of spaces of long embeddings (long embeddings are high-dimensional analogues of long knots). In previous work we showed that when the dimensions are in the stable range, the rational homology groups of these spaces can be calculated as the homology of a direct sum of certain finite graph complexes, which we described explicitly. In this paper, we establish a similar result for the rational homotopy groups of these spaces. We also emphasize the different ways in which the calculations can be carried out. In particular, we describe three different graph complexes computing the rational homotopy of spaces of long embeddings. We also compute the generating functions of the Euler characteristics of the summands in the homological splitting.
arxiv:1108.1001
This paper is devoted to the asymptotic behavior of global solutions to the convection-diffusion equation in the Fujita-subcritical case. We improve the result of Zuazua (1993) and establish higher-order asymptotic expansions with decay estimates for the remainders. We also discuss the optimality of the decay rates of the remainders.
arxiv:2406.01777
Measures of activities of daily living (ADL) are an important indicator of overall health but are difficult to measure in-clinic. Automated and accurate human activity recognition (HAR) using wrist-worn accelerometers enables practical and cost-efficient remote monitoring of ADL. Key obstacles to developing high-quality HAR are the lack of large labeled datasets and the performance loss when applying models trained on small curated datasets to the continuous stream of heterogeneous data encountered in real life. In this work we design a self-supervised learning paradigm to create a robust representation of accelerometer data that can generalize across devices and subjects. We demonstrate that this representation can separate activities of daily living and achieve strong HAR accuracy (on multiple benchmark datasets) using very few labels. We also propose a segmentation algorithm which can identify segments of salient activity and boost HAR accuracy on continuous real-life data.
arxiv:2112.12272
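The salient-activity segmentation step can be caricatured very simply. The sketch below is a hypothetical baseline (the paper's actual algorithm is not specified in the abstract): it flags one-second windows of a synthetic accelerometer trace whose standard deviation exceeds a threshold, with the sampling rate and threshold being assumed values.

```python
import numpy as np

# Synthetic wrist-accelerometer trace: rest, movement, rest.
rng = np.random.default_rng(2)
fs = 50                                   # assumed 50 Hz sampling rate
quiet = rng.normal(0, 0.02, 5 * fs)       # 5 s of rest
active = rng.normal(0, 0.5, 5 * fs)       # 5 s of movement
signal = np.concatenate([quiet, active, quiet])

# Split into non-overlapping 1-second windows and threshold their std.
win = fs
n_win = len(signal) // win
windows = signal[: n_win * win].reshape(n_win, win)
salient = windows.std(axis=1) > 0.1       # boolean mask per window
```

Windows 0-4 and 10-14 are rest and windows 5-9 movement, so the mask recovers the middle segment; a learned representation replaces the hand-set threshold in practice.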
We consider the two-body problem in a periodic potential, and study the bound-state dispersion of a spin-$\uparrow$ fermion that is interacting with a spin-$\downarrow$ fermion through a short-range attractive interaction. Based on a variational approach, we obtain the exact solution of the dispersion in the form of a set of self-consistency equations, and apply it to tight-binding Hamiltonians with onsite interactions. We pay special attention to bipartite lattices with a two-point basis that exhibit time-reversal symmetry, and show that the lowest-energy bound states disperse quadratically with momentum, with an effective-mass tensor that is partially controlled by the quantum metric tensor of the underlying Bloch states. In particular, we apply our theory to the Mielke checkerboard lattice, and study the special role played by interband processes in producing a finite effective mass for the bound states in a non-isolated flat band.
arxiv:2102.03530
Most previous work in music emotion recognition assumes a single label, or a few song-level labels, for the whole song. While it is known that different emotions can vary in intensity within a song, annotated data for this setup is scarce and difficult to obtain. In this work, we propose a method to predict emotion dynamics in song lyrics without song-level supervision. We frame each song as a time series and employ a state-space model (SSM), combining a sentence-level emotion predictor with an expectation-maximization (EM) procedure to generate the full emotion dynamics. Our experiments show that applying our method consistently improves the performance of sentence-level baselines without requiring any annotated songs, making it ideal for limited-training-data scenarios. Further analysis through case studies shows the benefits of our method while also indicating its limitations and pointing to future directions.
arxiv:2210.09434
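The state-space idea can be sketched with a scalar random-walk Kalman filter, a deliberate simplification: the paper's SSM and its EM step are richer, and all parameters below (process noise, observation noise, the synthetic "emotion arc") are assumptions for illustration.

```python
import numpy as np

def kalman_filter_1d(obs, q=0.01, r=0.25):
    """Filter a scalar series under x_t = x_{t-1} + N(0,q), obs_t = x_t + N(0,r)."""
    x, p = obs[0], 1.0
    out = [x]
    for z in obs[1:]:
        p = p + q                 # predict
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with new observation
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# Synthetic song: a valence shift halfway through, observed through a
# noisy sentence-level emotion predictor.
rng = np.random.default_rng(3)
true_arc = np.concatenate([np.zeros(30), np.ones(30)])
noisy = true_arc + rng.normal(0, 0.5, 60)
smoothed = kalman_filter_1d(noisy)
```

Filtering the per-sentence scores recovers a cleaner trajectory than the raw predictor, which is the role the SSM plays over the sentence-level emotion model.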
Perturbative expansions of relativistic quantum field theories typically contain ultraviolet divergences requiring regularization and renormalization. Many different regularization techniques have been developed over the years, but most require severe mutilation of the logical foundations of the theory. In contrast, breaking Lorentz invariance, while certainly a radical step, at least does not damage the logical foundations of the theory. We explore the features of a Lorentz-symmetry-breaking regulator in a simple polynomial scalar field theory and discuss its implications. We quantify just "how much" Lorentz symmetry breaking is required to fully regulate the theory and render it finite. This scalar field theory provides a simple way of understanding many of the key features of Horava's recent article [arXiv:0901.3775 [hep-th]] on 3+1 dimensional quantum gravity.
arxiv:0902.0590
In this work we present scanning Fabry-Perot H$\alpha$ observations of the isolated interacting galaxy pair NGC 5278/79 obtained with the PUMA Fabry-Perot interferometer. We derive velocity fields, various kinematic parameters, and rotation curves for both galaxies. Our kinematical results, together with the fact that dust lanes have been detected in both galaxies and with the analysis of surface brightness profiles along the minor axes, allow us to determine that both components of the interacting pair are trailing spirals.
arxiv:0908.2109
In this paper, we introduce some new graded Lie algebras associated with a hom-Lie algebra. First, we define the cup product bracket and describe its application to the deformation theory of hom-Lie algebra morphisms. We observe an action of the well-known hom-analogue of the Nijenhuis-Richardson graded Lie algebra on the cup product graded Lie algebra. Using the corresponding semidirect product, we define the Frölicher-Nijenhuis bracket and study its application to Nijenhuis operators. We show that the Nijenhuis-Richardson graded Lie algebra and the Frölicher-Nijenhuis algebra constitute a matched pair of graded Lie algebras. Finally, we define another graded Lie bracket, called the derived bracket, which is useful for studying Rota-Baxter operators on hom-Lie algebras.
arxiv:2409.01865
We consider the assumption that a tachyonic gluon mass imitates the short-distance nonperturbative physics of QCD. The phenomenological implications include modifications of the QCD sum rules for correlators of currents with various quantum numbers. The new $1/Q^2$ terms allow us to resolve in a natural way old puzzles in the pion and scalar-gluonium channels. They lead to a slight reduction of the values of the running light quark masses extracted from the (pseudo)scalar sum rules and of $\alpha_s(m_\tau)$ extracted from tau decay data. Further tests can be provided by precision measurements of the correlators on the lattice and by the $e^+e^- \to$ hadrons data.
arxiv:hep-ph/9811275
We consider the problem of allocating $m$ indivisible items to a set of $n$ heterogeneous agents, aiming to compute a proportional allocation by introducing subsidy (money). It has been shown by Wu et al. (WINE 2023) that when agents are unweighted, a total subsidy of $n/4$ suffices (assuming that each item has value/cost at most $1$ to every agent) to ensure proportionality. When agents have general weights, they proposed an algorithm that guarantees a weighted proportional allocation requiring a total subsidy of $(n-1)/2$, obtained by rounding the fractional bid-and-take algorithm. In this work, we revisit the problem and the fractional bid-and-take algorithm. We show that by formulating the fractional allocation returned by the algorithm as a directed tree connecting the agents and splitting the tree into canonical components, there is a rounding scheme that requires a total subsidy of at most $n/3 - 1/6$.
arxiv:2404.07707
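To make the objective concrete, here is a small helper (an illustration of the definition, not the paper's rounding scheme): given a fixed allocation of indivisible items to unweighted agents, it computes the minimum subsidy each agent needs so that value plus money reaches the proportional share (the agent's total value over $n$), along with the total subsidy. The valuations in the example are made up.

```python
from fractions import Fraction

def min_subsidy(values, allocation):
    """values[i][j]: agent i's value for item j; allocation[i]: i's items."""
    n = len(values)
    subsidies = []
    for i, bundle in enumerate(allocation):
        share = Fraction(sum(values[i]), n)          # proportional share
        got = sum(values[i][j] for j in bundle)      # value of own bundle
        subsidies.append(max(Fraction(0), share - got))
    return subsidies, sum(subsidies)

# Two agents, three items; agent 0 receives items {0, 1}, agent 1 item {2}.
values = [[3, 1, 2], [4, 1, 1]]
subs, total = min_subsidy(values, [[0, 1], [2]])
```

Here agent 0 already exceeds its share of 3, while agent 1 values its bundle at 1 against a share of 3 and needs a subsidy of 2; the algorithmic question is choosing the allocation so that the total stays below the stated bounds.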
We consider an Ornstein-Uhlenbeck (OU) process associated with self-normalised sums of i.i.d. symmetric random variables from the domain of attraction of the $N(0,1)$ distribution. We prove that the self-normalised sums converge to the OU process (in $C[0,\infty)$). The importance of this result is that the OU process is stationary, as opposed to Brownian motion, which is non-stationary (see, for example, the invariance principle proved by Csörgő et al. (2003, Ann. Probab.) for self-normalised sums converging to Brownian motion). The proof uses recursive equations similar to those that arise in the area of stochastic approximation, and it shows (through examples) that one can simulate any functional of any segment of the OU process. Similar results can be obtained for any diffusion process as well.
arxiv:1302.0158
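The stationarity of the OU limit can be checked by direct simulation. The sketch below is illustrative only (the paper reaches the OU process through self-normalised sums, not simulation): an Euler-Maruyama discretization of the standard OU equation $dX = -X\,dt + \sqrt{2}\,dW$, whose stationary law is $N(0,1)$.

```python
import numpy as np

# Euler-Maruyama for dX = -X dt + sqrt(2) dW.
rng = np.random.default_rng(4)
dt, T = 0.01, 200.0
n = int(T / dt)
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = x[k] - x[k] * dt + np.sqrt(2 * dt) * rng.normal()

# Discard a burn-in and estimate the stationary variance (should be near 1).
burn = n // 10
stationary_var = x[burn:].var()
```

After the burn-in the marginal distribution no longer drifts, which is the property that makes the OU limit more convenient than the Brownian one for inference.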
In this paper we characterize planar central configurations in terms of a sectional curvature value of the Jacobi-Maupertuis metric. This characterization works for the $n$-body problem with general masses and any $1/r^\alpha$ potential with $\alpha > 0$. We also observe dynamical consequences of these curvature values for relative equilibrium solutions. These curvature methods work well for strong forces ($\alpha \ge 2$).
arxiv:1703.08445
We introduce a geometric method for identifying the coupling direction between two dynamical systems based on a bivariate extension of recurrence network analysis. Global characteristics of the resulting inter-system recurrence networks provide a correct discrimination for weakly coupled Rössler oscillators not yet displaying generalised synchronisation. Investigating two real-world palaeoclimate time series representing the variability of the Asian monsoon over the last 10,000 years, we observe indications of a considerable influence of the Indian summer monsoon on climate in eastern China rather than vice versa. The proposed approach can be directly extended to the study of $k > 2$ coupled subsystems.
arxiv:1301.0934
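The recurrence-based starting point can be sketched in a few lines. This is only the plain cross-recurrence plot with a crude diagonal-based synchronisation indicator, not the inter-system recurrence-network measures of the paper, and the oscillators are simple sines rather than Rössler systems.

```python
import numpy as np

def cross_recurrence(a, b, eps):
    """Boolean cross-recurrence matrix: R[i, j] = (|a_i - b_j| < eps)."""
    return np.abs(a[:, None] - b[None, :]) < eps

t = np.linspace(0, 20, 400)
x = np.sin(t)
y_near = np.sin(t + 0.1)         # nearly synchronised partner
y_far = np.sin(2.7 * t + 1.0)    # unrelated oscillator

# Fraction of simultaneous recurrences (the main diagonal of the plot)
# as a crude closeness indicator.
sync_near = np.mean(np.diag(cross_recurrence(x, y_near, eps=0.2)))
sync_far = np.mean(np.diag(cross_recurrence(x, y_far, eps=0.2)))
```

The nearly synchronised pair recurs jointly at every time step, the unrelated pair only sporadically; the paper builds directed coupling measures from the network structure of such matrices rather than from the diagonal alone.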
The maximum agreement forest (MAF) problem is a well-studied problem in evolutionary biology, which asks for a largest common subforest of a given collection of phylogenetic trees with an identical leaf label set. However, previous work on the MAF problem has mainly considered two binary phylogenetic trees or two general (i.e., binary and non-binary) phylogenetic trees. In this paper, we study the more general version of the problem: the MAF problem on multiple general phylogenetic trees. We present a parameterized algorithm with running time $O(3^k n^2 m)$ and a 3-approximation algorithm for the MAF problem on multiple rooted general phylogenetic trees, and a parameterized algorithm with running time $O(4^k n^2 m)$ and a 4-approximation algorithm for the MAF problem on multiple unrooted general phylogenetic trees. We also implement the parameterized algorithm and the approximation algorithm for the rooted case and test them on simulated and biological data.
arxiv:1411.0062
We attempt to predict new atomic dark matter lines using the example of a dark lepton atom, positronium. Its Lyman-$\alpha$ line, with energy near 3 GeV, may be observable if the appropriate conditions are realized. To this end we have studied a $\gamma$-ray excess in the center of our Galaxy. In principle, this excess may be produced by the L$\alpha$ line of dark positronium in a medium with Compton scattering. The possibility of observing an annihilation line ($E \sim 300$ TeV) of dark positronium is also predicted. Other proposals to observe atomic dark matter are described briefly. In addition, the H$\alpha$ line (1.3 $\mu$m) of ordinary positronium should be observable in the direction of the center of our Galaxy.
arxiv:1704.08872
We report studies of optical Fabry-Perot microcavities based on semiconducting single-wall carbon nanotubes with a quality factor of 160. We experimentally demonstrate a large photoluminescence enhancement: a factor of 30 in comparison with the identical film, and a factor of 180 compared with a thin film containing non-purified (8,7) nanotubes. Furthermore, the spectral full width at half maximum of the photo-induced emission is reduced to 8 nm, with very good directivity, at a wavelength of about 1.3 $\mu$m. These results demonstrate the great potential of carbon nanotubes for photonic applications.
arxiv:1010.5041
As data sets continue to grow in size and complexity, effective and efficient techniques are needed to target important features in the variable space. Many of the variable selection techniques commonly used alongside clustering algorithms are based on determining the best variable subspace according to model fitting in a stepwise manner. These techniques are often computationally intensive and can require extended periods of time to run; in fact, some are prohibitively computationally expensive for high-dimensional data. In this paper, a novel variable selection technique is introduced for use in clustering and classification analyses that is both intuitive and computationally efficient. We focus largely on applications in mixture model-based learning, but the technique could be adapted for use with various other clustering/classification methods. Our approach is illustrated on both simulated and real data, and its performance is contrasted with that of other comparable variable selection techniques on the real data sets.
arxiv:1303.5294
In this paper we introduce the concept of the quadratic operator perspective for a continuous function $\phi$ defined on the positive semi-axis of real numbers. This generalizes the quadratic weighted operator geometric mean and the quadratic relative operator entropy. Some inequalities for this perspective of convex functions are established. Applications to the quadratic weighted operator geometric mean and the quadratic relative operator entropy are also provided.
arxiv:1609.08607
We seek the best traffic allocation scheme for an edge-cloud computing network that satisfies constraints and minimizes the cost based on burstable billing. First, for a fixed network topology, we formulate a family of integer programming problems with random parameters describing the various traffic demands. Then, to overcome the difficulty caused by the discrete nature of the problem, we generalize the Gumbel-softmax reparameterization method to induce an unconstrained continuous optimization problem as a regularized continuation of the discrete problem. Finally, we introduce the Gumbel-softmax sampling network to solve the optimization problems via unsupervised learning. The network structure reflects the edge-cloud computing topology and is trained to minimize the expectation of the cost function for the unconstrained continuous optimization problems. The trained network works as an efficient sampler of traffic allocation schemes, remarkably outperforming the random strategy in feasibility and cost. Besides testing the quality of the output allocation schemes, we examine the generalization property of the network by increasing the number of time steps and the number of users. We also feed the solutions to existing integer optimization solvers as initial conditions and verify that these warm starts accelerate the short-time iteration process. The framework is general with solid performance, and the decoupled feature of the random neural networks makes it suitable for practical implementations.
arxiv:2307.05170
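The Gumbel-softmax reparameterization at the heart of the method above has a compact generic form (shown here in its standard textbook version, not the paper's generalized variant): perturb the logits with Gumbel noise and pass them through a temperature-scaled softmax, giving approximately one-hot samples whose hard argmax follows the target categorical distribution.

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Draw one relaxed categorical sample; lower tau means closer to one-hot."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))   # Gumbel(0,1) noise
    z = (logits + g) / tau
    e = np.exp(z - z.max())                                 # stable softmax
    return e / e.sum()

rng = np.random.default_rng(5)
logits = np.log(np.array([0.7, 0.2, 0.1]))   # target categorical probabilities
samples = np.array([gumbel_softmax(logits, tau=0.1, rng=rng)
                    for _ in range(2000)])
hard = samples.argmax(axis=1)                 # hard decision per sample
p_hat = np.bincount(hard, minlength=3) / len(hard)
```

Because the softmax is differentiable in the logits, gradients can flow through the sampler during training, which is what permits unsupervised learning of a discrete-allocation sampler.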
We derive the fundamental limit to the resolution of far-field optical imaging, and demonstrate that, while a bound on the resolution of a fundamental nature does exist, contrary to conventional wisdom it is neither exactly equal to nor necessarily close to Abbe's estimate. Our approach to imaging resolution, which combines tools from the physics of wave phenomena with methods of information theory, is general and can be extended beyond optical microscopy, e.g., to geophysical and ultrasound imaging.
arxiv:1903.05254
We introduce and demonstrate the power of a method to speed up current iterative techniques for N-body modified gravity simulations. Our method is based on the observation that the accuracy of the final result is not compromised if the calculation of the fifth force becomes less accurate, but substantially faster, in high-density regions where the force is weak due to screening. We focus on the nDGP model, which employs Vainshtein screening, and test our method by running AMR simulations in which the solutions on the finer levels of the mesh (high density) are not obtained iteratively but are instead interpolated from coarser levels. We show that the impact this has on the matter power spectrum is below $1\%$ for $k < 5\,h/{\rm Mpc}$ at $z = 0$, and even smaller at higher redshift. The impact on halo properties is also small ($\lesssim 3\%$ for abundance, profiles, and mass; $\lesssim 0.05\%$ for positions and velocities). The method can boost the performance of modified gravity simulations by more than a factor of 10, which allows them to be pushed to resolution levels that were previously hard to achieve.
arxiv:1511.08200
This paper studies how international investors' concerns about model misspecification affect sovereign bond spreads. We develop a general equilibrium model of sovereign debt with endogenous default wherein investors fear that the probability model of the underlying state of the borrowing economy is misspecified. Consequently, investors demand higher returns on their bond holdings to compensate for the default risk in the context of uncertainty. In contrast with the existing literature on sovereign default, we match the bond spread dynamics observed in the data together with other business cycle features for Argentina, while preserving the default frequency at historically low levels.
arxiv:1512.06960
In a recent Letter [Phys. Rev. Lett. 110, 177406 (2013)] presenting a spectroscopic study of the electrons emitted from the GaN p-cap of a forward-biased InGaN/GaN light-emitting diode (LED), the authors observed at least two distinct peaks in the electron energy distribution curves (EDCs), separated by about 1.5 eV, and concluded that the only viable explanation for the higher-energy peak was Auger recombination in the LED active region. We present full-band Monte Carlo simulations suggesting that the higher-energy peaks in the measured EDCs are probably uncorrelated with the carrier distribution in the active region. This would not imply that Auger recombination, and possibly Auger-induced leakage, play a negligible role in LED droop, but rather that an Auger signature cannot be recovered from the experiment performed on the LED structure under study. We discuss, as an alternative explanation for the observed EDCs, carrier heating by the electric field in the band-bending region.
arxiv:1305.2512
operator learning based on neural operators has emerged as a promising paradigm for the data-driven approximation of operators mapping between infinite-dimensional banach spaces. despite significant empirical progress, our theoretical understanding of the efficiency of these approximations remains incomplete. this work addresses the parametric complexity of neural operator approximations for the general class of lipschitz continuous operators. motivated by recent findings on the limitations of specific architectures, termed the curse of parametric complexity, we here adopt an information-theoretic perspective. our main contribution establishes lower bounds on the metric entropy of lipschitz operators in two approximation settings: uniform approximation over a compact set of input functions, and approximation in expectation, with input functions drawn from a probability measure. it is shown that these entropy bounds imply that, regardless of the activation function used, neural operator architectures attaining an approximation accuracy $\epsilon$ must have a size that is exponentially large in $\epsilon^{-1}$. the size of an architecture is here measured by counting the number of encoded bits necessary to store the given model in computational memory. the results of this work elucidate fundamental trade-offs and limitations in operator learning.
arxiv:2406.18794
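the information-theoretic argument sketched in the abstract above can be written schematically as follows; the constant $c > 0$ and exponent $\gamma > 0$ are placeholders for the paper's precise rates, so this is an illustrative outline rather than the paper's statement:

```latex
% B bits can encode at most 2^B distinct models, so approximating
% every operator in a class K to accuracy \epsilon requires 2^B to
% dominate the \epsilon-covering number N(K, \epsilon):
2^{B} \;\ge\; N(\mathcal{K}, \epsilon)
\quad\Longrightarrow\quad
B \;\ge\; \log_2 N(\mathcal{K}, \epsilon) \;=:\; H_\epsilon(\mathcal{K}).
% combined with a metric-entropy lower bound of the schematic form
H_\epsilon(\mathcal{K}) \;\ge\; \exp\!\left(c\,\epsilon^{-\gamma}\right)
% for lipschitz operators, this forces the encoded model size B to
% grow exponentially in a power of 1/\epsilon, independently of the
% activation function or architecture details.
```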
let $\mathbf{g} := (g_1, g_2, g_3)$ be a triple of graphs on a common vertex set $v$ of size $n$. a rainbow triangle in $\mathbf{g}$ is a triple of edges $(e_1, e_2, e_3)$ with $e_i \in g_i$ for each $i$ and $\{e_1, e_2, e_3\}$ forming a triangle in $v$. in this paper we consider the following question: what triples of minimum degree conditions $(\delta(g_1), \delta(g_2), \delta(g_3))$ guarantee the existence of a rainbow triangle? this may be seen as a minimum degree version of a problem of aharoni, devos, de la maza, montejano and \v{s}\'amal on density conditions for rainbow triangles, which was recently resolved by the authors. we establish that the extremal behaviour in the minimum degree setting differs strikingly from that seen in the density setting, with discrete jumps as opposed to continuous transitions. our work leaves a number of natural questions open, which we discuss.
arxiv:2305.12772
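for intuition, a rainbow triangle as defined above can be searched for by brute force; the sketch below (our own illustrative code, with each graph given as a set of frozenset edges) simply tries every vertex triple and every assignment of its three sides to the three graphs:

```python
from itertools import combinations, permutations

def find_rainbow_triangle(G1, G2, G3, n):
    """Search for a rainbow triangle in a triple of graphs on the
    vertex set {0, ..., n-1}. Each graph is a set of frozensets
    {u, v}. Brute force, O(n^3); for illustration only."""
    graphs = (G1, G2, G3)
    for x, y, z in combinations(range(n), 3):
        sides = (frozenset({x, y}), frozenset({y, z}), frozenset({z, x}))
        # try every assignment of the three sides to the three graphs
        for perm in permutations(range(3)):
            if all(sides[i] in graphs[perm[i]] for i in range(3)):
                return (x, y, z)
    return None
```

note that a rainbow triangle only needs each side to lie in its assigned graph; the same pair of vertices may or may not be adjacent in the other two graphs.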
in universal algebraic geometry the category of the finitely generated free algebras of some fixed variety of algebras and the quotient group a/y are very important. here a is the group of all automorphisms of this category and y is the group of all inner automorphisms of this category. in the varieties of all groups, of all abelian groups (see b. plotkin and g. zhitomirski, 2006), and of all nilpotent groups of class at most n (see a. tsurkov, 2007), the group a/y is trivial. b. plotkin posed the question: "is there a subvariety of the variety of all groups such that the group a/y in this subvariety is not trivial?" a. tsurkov hypothesized that there exist varieties of periodic groups in which the group a/y is not trivial. in this paper we give an example of one subvariety of this kind.
arxiv:1909.05955
we consider gated graphene nanoribbons subject to berry-mondragon boundary conditions in the presence of weak impurities. using field-theoretical methods, we calculate the density of charge carriers (and, thus, the quantum capacitance) as well as the optical and dc conductivities at zero temperature. we discuss in detail their dependence on the gate (chemical) potential, and reveal a non-linear behaviour induced by the quantization of the transverse momentum.
arxiv:1311.0254
traditional supervised voice activity detection (vad) methods work well in clean and controlled scenarios, but their performance degrades severely in real-world applications. one possible bottleneck is that speech in the wild contains unpredictable noise types, which makes the frame-level label prediction required for traditional supervised vad training difficult. in contrast, we propose a general-purpose vad (gpvad) framework, which can be easily trained from noisy data in a weakly supervised fashion, requiring only clip-level labels. we propose two gpvad models: one full model (gpv-f), trained on 527 audioset sound events, and one binary model (gpv-b), distinguishing only speech and noise. we evaluate the two gpv models against a crnn-based standard vad model (vad-c) on three different evaluation protocols (clean, synthetic noise, real data). results show that our proposed gpv-f demonstrates competitive performance in clean and synthetic scenarios compared to traditional vad-c. further, in real-world evaluation, gpv-f largely outperforms vad-c in terms of frame-level as well as segment-level evaluation metrics. with a much lower requirement for frame-labeled data, the naive binary clip-level gpv-b model can still achieve performance comparable to vad-c in real-world scenarios.
arxiv:2003.12222
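one standard mechanism that makes clip-level (weakly supervised) training of the kind described above possible is a pooling function that aggregates frame-level probabilities into a single clip-level prediction, so the loss can be computed against clip labels while the model still produces frame posteriors at inference time. the sketch below shows linear-softmax pooling; the pooling choice and function name are illustrative, not necessarily the exact configuration used by gpvad:

```python
import numpy as np

def linear_softmax_pool(frame_probs):
    """Aggregate per-frame event probabilities of shape (T, C) into
    clip-level probabilities of shape (C,) with linear-softmax
    pooling: sum(p^2) / sum(p). High-probability frames dominate,
    but the result stays within the range of the frame values."""
    p = np.asarray(frame_probs, dtype=float)
    return (p ** 2).sum(axis=0) / np.clip(p.sum(axis=0), 1e-8, None)
```

compared to max pooling, this keeps gradients flowing to every frame; compared to mean pooling, it does not dilute short events, which matters for speech segments that occupy only part of a clip.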
our goal in this paper is to prove the global existence of a classical solution for the isentropic nozzle flow. for this problem, several global existence theorems for weak solutions exist; however, the existence of classical solutions has received little attention until now. the main difficulty is to obtain a uniform bound on the solutions and their derivatives. to overcome this, we introduce an invariant region depending on the space variable and a functional satisfying the riccati equation along the characteristic lines.
arxiv:2402.05268
the growing use of large language model (llm)-based chatbots has raised concerns about fairness. fairness issues in llms can lead to severe consequences, such as bias amplification, discrimination, and harm to marginalized communities. while existing fairness benchmarks mainly focus on single-turn dialogues, multi-turn scenarios, which in fact better reflect real-world conversations, present greater challenges due to conversational complexity and potential bias accumulation. in this paper, we propose a comprehensive fairness benchmark for llms in multi-turn dialogue scenarios, \textbf{fairmt-bench}. specifically, we formulate a task taxonomy targeting llm fairness capabilities across three stages: context understanding, user interaction, and instruction trade-offs, with each stage comprising two tasks. to ensure coverage of diverse bias types and attributes, we draw from existing fairness datasets and employ our template to construct a multi-turn dialogue dataset, \texttt{fairmt-10k}. for evaluation, gpt-4 is applied, alongside bias classifiers including llama-guard-3 and human validation, to ensure robustness. experiments and analyses on \texttt{fairmt-10k} reveal that in multi-turn dialogue scenarios, current llms are more likely to generate biased responses, and that there is significant variation in performance across different tasks and models. based on this, we curate a challenging dataset, \texttt{fairmt-1k}, and test 15 current state-of-the-art (sota) llms on it. the results show the current state of fairness in llms and demonstrate the utility of this novel approach for assessing fairness in more realistic multi-turn dialogue contexts, calling for future work to focus on llm fairness improvement and the adoption of \texttt{fairmt-1k} in such efforts.
arxiv:2410.19317
in this paper, we present a computationally efficient trajectory optimizer that can exploit gpus to jointly compute trajectories of tens of agents in under a second. at the heart of our optimizer is a novel reformulation of the non-convex collision avoidance constraints that reduces the core computation in each iteration to solving a large-scale, convex, unconstrained quadratic program (qp). we also show that the matrix factorization/inverse computation associated with the qp needs to be done only once and can be performed offline for a given number of agents. this further simplifies the solution process, effectively reducing it to evaluating a few matrix-vector products. moreover, for a large number of agents, this computation can be trivially accelerated on gpus using existing off-the-shelf libraries. we validate our optimizer's performance on challenging benchmarks and show substantial improvement over the state of the art in computation time and trajectory quality.
arxiv:2011.04240
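the one-time factorization idea above can be sketched in a few lines: for an unconstrained qp whose matrix is fixed across iterations (and across problem instances with the same number of agents), the inverse or factorization is computed once, and every later solve costs only a matrix-vector product. the class below is an illustrative numpy sketch assuming the matrix is symmetric positive definite; a production implementation would prefer a cached cholesky factorization over an explicit inverse for numerical stability:

```python
import numpy as np

class CachedQP:
    """Solve min_x 0.5 x^T Q x - b^T x repeatedly for a fixed Q.
    Q^{-1} is computed once (the offline, per-agent-count cost in
    the abstract); each subsequent solve is a single matrix-vector
    product, which parallelizes trivially on GPUs. Sketch assumes
    Q is symmetric positive definite."""

    def __init__(self, Q):
        self.Q_inv = np.linalg.inv(Q)   # one-time cost

    def solve(self, b):
        # optimum of the unconstrained QP: x* = Q^{-1} b
        return self.Q_inv @ b           # per-iteration cost: one matvec
```

usage: build `CachedQP(Q)` once per agent count, then call `solve` with a new right-hand side `b` at every iteration of the outer optimizer.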
the goal of this work is to study the optimal controls for the covid - 19 epidemic in brazil. we consider an age - structured seirq model with quarantine compartment, where the controls are the quarantine entrance parameters. we then compare the optimal controls for different quarantine lengths and distribution of the total control cost by assessing their respective reductions in deaths in comparison to the same period without quarantine. the best strategy provides a calendar of when to relax the isolation measures for each age group. finally, we analyse how a delay in the beginning of the quarantine affects this calendar by changing the initial conditions.
arxiv:2005.09786
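a minimal single-group version of an seirq model with a quarantine-entrance control $u$ can be sketched as below; the age structure, parameter values, and function name are illustrative, not the paper's calibration:

```python
import numpy as np

def seirq_step(state, beta, sigma, gamma, u, dt=0.1):
    """One forward-Euler step of a single-group SEIRQ model where the
    control u is the rate at which susceptibles enter quarantine.
    state = (S, E, I, R, Q) as population fractions. Illustrative
    sketch of the paper's age-structured model, with hypothetical
    parameters."""
    S, E, I, R, Q = state
    N = S + E + I + R + Q
    new_inf = beta * S * I / N          # new exposures per unit time
    dS = -new_inf - u * S               # infection + quarantine entrance
    dE = new_inf - sigma * E            # incubation at rate sigma
    dI = sigma * E - gamma * I          # recovery at rate gamma
    dR = gamma * I
    dQ = u * S                          # quarantined susceptibles
    return tuple(x + dt * d for x, d in zip(state, (dS, dE, dI, dR, dQ)))
```

in the optimal control setting, $u$ becomes a time- and age-dependent function chosen to minimize a cost combining deaths and control effort; releasing an age group corresponds to setting its $u$ to zero from that date onward.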
we present a brief description of the ``consistent discretization'' approach to classical and quantum general relativity. we exhibit a simple classical example to illustrate the approach and summarize current classical and quantum applications. we also discuss the implications for the construction of a well-defined quantum theory and, in particular, how to construct a quantum continuum limit.
arxiv:gr-qc/0512065
in the case of one extra dimension, the well-known newton's potential $\phi(r_3) = -g_n m/r_3$ is generalized to the compact and elegant formula $\phi(r_3, \xi) = -(g_n m/r_3)\sinh(2\pi r_3/a)\left[\cosh(2\pi r_3/a) - \cos(2\pi\xi/a)\right]^{-1}$ if four-dimensional space has topology $\mathbb{r}^3 \times t$. here, $r_3$ is the magnitude of the three-dimensional radius vector, $\xi$ is the extra dimension, and $a$ is the period of the torus $t$. this formula is valid for the full range of the variables $r_3 \in [0, +\infty)$ and $\xi \in [0, a]$ and has the known asymptotic behavior: $\phi \sim 1/r_3$ for $r_3 \gg a$ and $\phi \sim 1/r_4^2$ for $r_4 = \sqrt{r_3^2 + \xi^2} \ll a$. the obtained formula is applied to an infinitesimally thin shell, a shell, a sphere, and two spheres to show deviations from the newtonian expressions. usually, these corrections are too small to observe in experiments. nevertheless, in the case of spatial topology $\mathbb{r}^3 \times t^{d}$, experimental data can provide us with a limit on the maximal number of extra dimensions.
arxiv:0905.2222
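the closed-form potential above is easy to evaluate and check numerically in both asymptotic regimes; the sketch below works in units with $g_n m = 1$:

```python
import numpy as np

def phi(r3, xi, a, GM=1.0):
    """Generalized Newtonian potential of a point mass when space has
    topology R^3 x T with torus period a (formula from the abstract;
    GM stands for G_N * m)."""
    x = 2 * np.pi * r3 / a
    y = 2 * np.pi * xi / a
    return -(GM / r3) * np.sinh(x) / (np.cosh(x) - np.cos(y))
```

for $r_3 \gg a$ the ratio $\sinh x/(\cosh x - \cos y) \to 1$ and the ordinary $-1/r_3$ law is recovered, while for $r_4 \ll a$ a series expansion gives $\phi \approx -g_n m\, a/(\pi r_4^2)$, the four-dimensional $1/r_4^2$ behavior quoted in the abstract.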
the unit-lindley distribution was recently introduced in the literature as a viable alternative to the beta and the kumaraswamy distributions with support in $(0, 1)$. this distribution enjoys many advantageous properties relative to the named distributions. in this article, we address the issue of parameter estimation from a bayesian perspective and study the relative performance of different estimators through extensive simulation studies. significant emphasis is given to the estimation of stress-strength reliability, employing classical as well as bayesian approaches. a non-trivial, useful application in the public health domain is presented, proposing a simple metric of discrepancy.
arxiv:1904.06181
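the stress-strength reliability $r = p(x > y)$ discussed above can be estimated by simple monte carlo, assuming the standard construction of the unit-lindley law as $x = y/(1+y)$ with $y$ lindley-distributed (a mixture of an exponential and a gamma component). the code below is an illustrative sketch, not the paper's estimator:

```python
import numpy as np

def sample_unit_lindley(theta, size, rng):
    """Draw unit-Lindley(theta) variates via the standard construction:
    Y ~ Lindley(theta) is a mixture of Exp(theta) with weight
    theta/(1+theta) and Gamma(2, theta) otherwise; then X = Y/(1+Y)
    lies in (0, 1). Sketch; see the paper for exact conventions."""
    mix = rng.random(size) < theta / (1.0 + theta)
    y = np.where(mix,
                 rng.exponential(1.0 / theta, size),   # Exp(theta)
                 rng.gamma(2.0, 1.0 / theta, size))    # Gamma(2, theta)
    return y / (1.0 + y)

def stress_strength(theta_x, theta_y, n=100_000, seed=0):
    """Monte Carlo estimate of R = P(X > Y) for independent
    unit-Lindley strength X and stress Y."""
    rng = np.random.default_rng(seed)
    x = sample_unit_lindley(theta_x, n, rng)
    y = sample_unit_lindley(theta_y, n, rng)
    return (x > y).mean()
```

since the lindley mean $(\theta+2)/(\theta(\theta+1))$ is decreasing in $\theta$, a smaller strength parameter $\theta_x$ pushes $r$ above $1/2$; such simulated estimates serve as a check on the classical and bayesian estimators.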