We describe several shortcomings of a study by Patone et al., whose findings were recently published in the American Heart Association journal Circulation, including the following:

* The study's principal conclusion, as initially stated, begins "Overall, the risk of myocarditis is greater after SARS-CoV-2 infection than after COVID-19 vaccination...." However, Patone et al. never attempt to assess the incidence of myocarditis in their study population following SARS-CoV-2 infection. Rather, they make an untenable assumption that all infections occurring in their study population are associated with (reported) positive COVID-19 tests. Using publicly available data from the UK's ONS and NHS, we show that Patone et al.'s estimates, for the unvaccinated, of myocarditis incidence associated with infection are likely overestimated by a factor of at least 1.58.
* The method Patone et al. use to compute the incidence of myocarditis among the unvaccinated after a positive COVID test may overestimate risk. The authors assume, without justification, that unvaccinated persons hospitalized during the study period with positive-test-associated myocarditis would later choose to vaccinate with the same probability as unvaccinated persons who have had a positive COVID test. We present a plausibility argument that suggests a possible further exaggeration of myocarditis risk post infection by a factor of 1.5.
* Patone et al. fail to discuss important limitations of their study with respect to guiding public health recommendations. For instance, an insignificant number of cases contributing to the study's findings were Omicron-variant cases. Thus, the study's estimates of myocarditis risk following infection do not speak to the risk following Omicron infection, which is recognized to be milder than that of previous variants.
arxiv:2210.14955
We predict a nonvanishing baryon asymmetry of the proton sea at low $x$. It is expected to be about $7\%$ and nearly $x$-independent at $x < 0.5 \times 10^{-3}$. The asymmetry arises from the baryon-antibaryon component of the pomeron, rather than from the valence quarks of the proton, which are widely believed to be the carriers of baryon number. Experimental study of the $x$-distribution of the baryon asymmetry of the proton sea can be performed in $ep$ or $\gamma p$ interactions at HERA, where $x \sim 10^{-5}$ is reachable, smaller than at any existing or planned proton collider.
arxiv:hep-ph/9607486
Reconstructing training data from trained neural networks is an active area of research with significant implications for privacy and explainability. Recent advances have demonstrated the feasibility of this process for several data types. However, reconstructing data from group-invariant neural networks poses distinct challenges that remain largely unexplored. This paper addresses this gap by first formulating the problem and discussing some of its basic properties. We then provide an experimental evaluation demonstrating that conventional reconstruction techniques are inadequate in this scenario. Specifically, we observe that the resulting data reconstructions gravitate toward symmetric inputs on which the group acts trivially, leading to poor-quality results. Finally, we propose two novel methods aiming to improve reconstruction in this setup and present promising preliminary experimental results. Our work sheds light on the complexities of reconstructing data from group-invariant neural networks and offers potential avenues for future research in this domain.
arxiv:2411.16458
In this paper, we focus on the solvability of a class of fractional backward stochastic differential equations (BSDEs, for short) with delayed generator. In this class of equations, the generator depends not only on the present values of the solution but also on past ones. Under a Lipschitz condition, the existence and uniqueness of solutions to such BSDEs are established. A comparison theorem for this class of BSDEs is also obtained.
arxiv:2211.16826
We report protonation in several compounds by an ionic-liquid-gating method with optimized gating conditions. This leads to single superconducting phases for several compounds. Non-volatility of the protons allows post-gating magnetization and transport measurements. The superconducting transition temperature $T_c$ is enhanced to 43.5 K for FeSe$_{0.93}$S$_{0.07}$, and 41 K for FeSe, after protonation. Superconductivity with $T_c \approx 15$ K for ZrNCl, $\approx 7.2$ K for 1$T$-TaS$_2$, and $\approx 3.8$ K for Bi$_2$Se$_3$ is induced after protonation. Electric transport in protonated FeSe$_{0.93}$S$_{0.07}$ confirms high-temperature superconductivity. Our $^{1}$H NMR measurements on protonated FeSe$_{1-x}$S$_{x}$ reveal an enhanced spin-lattice relaxation rate $1/^{1}T_1$ with increasing $x$, which is consistent with LDA calculations indicating that the H$^{+}$ ions are located in interstitial sites close to the anions.
arxiv:1905.10080
Unified vision-language frameworks have greatly advanced in recent years, most of which adopt an encoder-decoder architecture to unify image-text tasks as sequence-to-sequence generation. However, existing video-language (VidL) models still require task-specific designs in model architecture and training objectives for each task. In this work, we explore a unified VidL framework, LAVENDER, where masked language modeling (MLM) is used as the common interface for all pre-training and downstream tasks. Such unification leads to a simplified model architecture, where only a lightweight MLM head, instead of a decoder with many more parameters, is needed on top of the multimodal encoder. Surprisingly, experimental results show that this unified framework achieves competitive performance on 14 VidL benchmarks, covering video question answering, text-to-video retrieval and video captioning. Extensive analyses further demonstrate the advantage of LAVENDER over existing VidL methods in: (i) supporting all downstream tasks with just a single set of parameter values when multi-task finetuned; (ii) few-shot generalization on various downstream tasks; and (iii) enabling zero-shot evaluation on video question answering tasks. Code is available at https://github.com/microsoft/LAVENDER.
arxiv:2206.07160
To quantify trade-offs between increasing demand for open data sharing and concerns about sensitive information disclosure, statistical data privacy (SDP) methodology analyzes data release mechanisms that sanitize outputs based on confidential data. Two dominant frameworks exist: statistical disclosure control (SDC) and, more recently, differential privacy (DP). Despite framing differences, SDC and DP share the same statistical problems at their core. For inference problems, we may either design optimal release mechanisms and associated estimators that satisfy bounds on disclosure risk, or we may adjust existing sanitized output to create new optimal estimators. Both problems rely on uncertainty quantification in evaluating risk and utility. In this review, we discuss the statistical foundations common to both SDC and DP, highlight major developments in SDP, and present exciting open research problems in private inference.
arxiv:2205.03336
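One concrete example of a sanitizing release mechanism of the kind discussed above is the Laplace mechanism from differential privacy. A minimal sketch follows; the function name and parameter choices are illustrative, not taken from the review.

```python
import numpy as np

def laplace_mechanism(true_count, epsilon, sensitivity=1.0):
    """Release a noisy count satisfying epsilon-differential privacy.

    A counting query changes by at most 1 when a single record is added
    or removed, so its sensitivity is 1; Laplace noise with scale
    sensitivity/epsilon then masks any individual's contribution.
    """
    scale = sensitivity / epsilon
    return true_count + np.random.laplace(loc=0.0, scale=scale)

# Sanitized release of a confidential count of 120 at epsilon = 1.
noisy = laplace_mechanism(120, epsilon=1.0)
```

Smaller epsilon means more noise (stronger privacy) but lower utility, which is exactly the risk-utility trade-off the review frames as an uncertainty quantification problem.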
Photonic crystal fibers doped with silver nanoparticles exhibit a Kerr nonlinearity that can be positive or negative depending on wavelength and vanishes at a specific wavelength. We study numerically how the simultaneous presence of a zero-nonlinearity wavelength (ZNW) and a zero-dispersion wavelength affects soliton evolution and supercontinuum generation inside such fibers, and find a number of unique features. The existence of negative nonlinearity allows soliton formation even in the normal-dispersion region of the fiber, and the ZNW acts as a barrier for the Raman-induced red shift of solitons.
arxiv:1606.07212
We prove that the filtered grid invariants of Legendrian links in link Floer homology, and consequently their associated invariants in the spectral sequence, obstruct decomposable Lagrangian cobordisms in the symplectization of the standard contact structure on $\mathbb{R}^3$, strengthening a result by Baldwin, Lidman, and the fifth author.
arxiv:2303.16130
Contextual pricing strategies are prevalent in online retailing, where the seller adjusts prices based on products' attributes and buyers' characteristics. Although such strategies can enhance the seller's profits, they raise fairness concerns when significant price disparities emerge among specific groups, such as gender or race. These disparities can lead to adverse perceptions of fairness among buyers and may even violate laws and regulations. At the same time, price differences can incentivize disadvantaged buyers to strategically manipulate their group identity to obtain a lower price. In this paper, we investigate contextual dynamic pricing with fairness constraints, taking into account buyers' strategic behaviors when their group status is private and unobservable to the seller. We propose a dynamic pricing policy that simultaneously achieves price fairness and discourages strategic behaviors. Our policy achieves an upper bound of $O(\sqrt{T} + H(T))$ regret over $T$ time horizons, where the term $H(T)$ arises from buyers' assessment of the fairness of the pricing policy based on their learned price difference. When buyers are able to learn the fairness of the price policy, this upper bound reduces to $O(\sqrt{T})$. We also prove an $\Omega(\sqrt{T})$ regret lower bound for any pricing policy under our problem setting. We support our findings with extensive experimental evidence, showcasing our policy's effectiveness. In our real data analysis, we observe the existence of price discrimination against race in loan applications even after accounting for other contextual information. Our proposed pricing policy demonstrates a significant improvement, achieving a 35.06% reduction in regret compared to the benchmark policy.
arxiv:2501.15338
In quantum computation, optimizing the depth and number of ancillary qubits in quantum circuits is crucial due to constraints imposed by current quantum devices. This paper presents an innovative approach to implementing arbitrary symmetric Boolean functions using poly-logarithmic-depth quantum circuits with a logarithmic number of ancillary qubits. Symmetric functions are those whose outputs rely solely on the Hamming weight of the inputs. These functions find applications across diverse domains, including quantum machine learning, arithmetic circuit synthesis, and quantum algorithm design (e.g., Grover's algorithm). Moreover, by fully leveraging the potential of qutrits (an additional energy level), the ancilla count can be further reduced to 1. The key technique involves a novel poly-logarithmic-depth quantum circuit designed to compute the Hamming weight without the need for ancillary qubits. The quantum circuit for Hamming weight is of independent interest because of its broad applications, such as quantum memory and quantum machine learning.
arxiv:2404.06052
We develop an \textit{a posteriori} error analysis for a numerical estimate of the time at which a functional of the solution to a partial differential equation (PDE) first achieves a threshold value on a given time interval. This quantity of interest (QoI) differs from classical QoIs, which are modeled as bounded linear (or nonlinear) functionals of the solution. Taylor's theorem and an adjoint-based \textit{a posteriori} analysis are used to derive computable and accurate error estimates in the case of semi-linear parabolic and hyperbolic PDEs. The accuracy of the error estimates is demonstrated through numerical solutions of the one-dimensional heat equation and the linearized shallow water equations (SWE), representing the parabolic and hyperbolic cases, respectively.
arxiv:2111.09834
Language models must capture statistical dependencies between words at timescales ranging from very short to very long. Earlier work has demonstrated that dependencies in natural language tend to decay with distance between words according to a power law. However, it is unclear how this knowledge can be used for analyzing or designing neural network language models. In this work, we derived a theory for how the memory gating mechanism in long short-term memory (LSTM) language models can capture power law decay. We found that unit timescales within an LSTM, which are determined by the forget gate bias, should follow an inverse gamma distribution. Experiments then showed that LSTM language models trained on natural English text learn to approximate this theoretical distribution. Further, we found that explicitly imposing the theoretical distribution upon the model during training yielded better language model perplexity overall, with particular improvements for predicting low-frequency (rare) words. Moreover, the explicit multi-timescale model selectively routes information about different types of words through units with different timescales, potentially improving model interpretability. These results demonstrate the importance of careful, theoretically-motivated analysis of memory and timescale in language models.
arxiv:2009.12727
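The link between forget-gate biases and unit timescales described above can be sketched as follows: a unit whose forget gate sits at value $f$ forgets with timescale $T = -1/\log f$, so one can sample timescales from an inverse gamma distribution and invert to biases. The hyperparameters and clipping below are illustrative, not the paper's values.

```python
import numpy as np

def forget_gate_biases(n_units, alpha=2.0, beta=1.0, seed=0):
    """Sample per-unit timescales T ~ InvGamma(alpha, beta) and convert
    them to LSTM forget-gate biases via f = exp(-1/T), b = logit(f)."""
    rng = np.random.default_rng(seed)
    # Inverse gamma samples as reciprocals of gamma samples.
    timescales = 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta, size=n_units)
    # Clip tiny timescales so the logit stays finite.
    f = np.exp(-1.0 / np.clip(timescales, 1e-2, None))
    return np.log(f) - np.log1p(-f)  # logit(f)

biases = forget_gate_biases(256)
```

In practice such biases would be assigned to an LSTM's forget gate at initialization (or held fixed during training, as the explicit multi-timescale variant does).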
We show that there exist non-trivial piecewise-linear (PL) knots with isolated singularities $S^{n-2} \subset S^n$, $n \geq 5$, whose complements have the homotopy type of a circle. This is in contrast to the case of smooth, PL locally-flat, and topological locally-flat knots, for which it is known that if the complement has the homotopy type of a circle, then the knot is trivial.
arxiv:math/0408325
In this work, we propose an algorithm for a filter based on the fast Fourier transform (FFT) which, due to its characteristics, allows an efficient computational implementation, is easy to use, and minimizes amplitude variation in the filtered signal. The algorithm was implemented in the programming languages Python, R, and MATLAB. Initial results show less amplitude loss in the filtered signal compared with an FIR filter. Future work may address a more rigorous methodology and a comparative assessment of computational cost.
arxiv:2407.13414
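A generic FFT-based filter of the kind described above can be sketched as a low-pass mask in the frequency domain; this is a standard construction under assumed parameters, not necessarily the authors' exact algorithm.

```python
import numpy as np

def fft_lowpass(signal, cutoff_hz, fs):
    """Zero out spectral components above cutoff_hz, then invert the FFT."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# 5 Hz tone plus 120 Hz interference, sampled at 1 kHz for one second.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
y = fft_lowpass(x, cutoff_hz=20, fs=fs)
```

Because the mask is applied directly to the spectrum, the retained components pass with unchanged amplitude, which is the property the abstract emphasizes relative to an FIR filter's passband ripple.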
This paper introduces a new task of politeness transfer, which involves converting non-polite sentences to polite sentences while preserving the meaning. We also provide a dataset of more than 1.39 million instances automatically labeled for politeness to encourage benchmark evaluations on this new task. We design a tag-and-generate pipeline that identifies stylistic attributes and subsequently generates a sentence in the target style while preserving most of the source content. For politeness as well as five other transfer tasks, our model outperforms the state-of-the-art methods on automatic metrics for content preservation, with comparable or better performance on style transfer accuracy. Additionally, our model surpasses existing methods on human evaluations for grammaticality, meaning preservation and transfer accuracy across all six style transfer tasks. The data and code are located at https://github.com/tag-and-generate.
arxiv:2004.14257
We report in this paper the analysis of the linear and nonlinear versions of the flux-corrected-transport (FEM-FCT) scheme, in combination with the backward Euler time-stepping scheme, applied to time-dependent convection-diffusion-reaction problems. We present stability and error estimates for the linear and nonlinear FEM-FCT schemes. Numerical results confirm the theoretical predictions.
arxiv:2103.04776
Let $D$ be a domain and $M$ a maximal ideal of $D$. The ring of integer-valued polynomials on a subset $E$ of $D$, as well as more general rings of functions from $E$ to $D$, can be viewed as subrings of the product $D^E = \prod_{e \in E} D$. We investigate which maximal ideals of $\mathrm{Int}(E, D)$ (or any other subring of $D^E$) come from ultrapowers of the residue class ring $D/M$.
arxiv:1604.04866
Phononic crystals and acoustic metamaterials are architected lattices designed to control the propagation of acoustic or elastic waves. In these materials, the dispersion properties and the energy transfer are controlled by selecting the lattices' geometry and their constitutive material properties. Most designs, however, only affect one mode of energy propagation, transmitted either as acoustic, airborne sound or as elastic, structural vibrations. Here, we present a design methodology to attenuate both acoustic and elastic waves simultaneously in all polarizations. We experimentally realize the first three-dimensional, load-bearing, architected lattice, composed of a single material, that responds in a broadband frequency range in all directions.
arxiv:1809.01252
We argue that locally cartesian closed categories form a suitable doctrine for defining dependent type theories, including non-extensional ones. Using the theory of sketches, one may define syntactic categories for type theories in a style that resembles the use of Martin-Löf's logical framework, following the "judgments as types" principle. The concentration of type theories into their locally cartesian closed categories of judgments is particularly convenient for proving syntactic metatheorems by semantic means (canonicity, normalization, etc.). Perhaps surprisingly, the notion of a context plays no role in the definitions of type theories in this sense, but the structure of a class of display maps can be imposed on a theory post facto wherever needed, as advocated by the Edinburgh school and realized by the %worlds declarations of the Twelf proof assistant. Uemura has proposed representable map categories together with a stratified logical framework for similar purposes. The stratification in Uemura's framework restricts the use of dependent products to be strictly positive, in contrast to the tradition of Martin-Löf's logical framework and Schroeder-Heister's analysis of higher-level deductions. We prove a semantic adequacy result for locally cartesian closed categories relative to Uemura's representable map categories: if a theory is definable in the framework of Uemura, the locally cartesian closed category that it generates is a conservative (fully faithful) extension of its syntactic representable map category. On this basis, we argue for the use of locally cartesian closed categories as a simpler alternative to Uemura's representable map categories.
arxiv:2012.10783
$[\mathbf{AB}]_{i,j} = a_{i,1}b_{1,j} + a_{i,2}b_{2,j} + \cdots + a_{i,n}b_{n,j} = \sum_{r=1}^{n} a_{i,r}b_{r,j},$ where $1 \leq i \leq m$ and $1 \leq j \leq p$. For example, the underlined entry 2340 in the product is calculated as $(2 \times 1000) + (3 \times 100) + (4 \times 10) = 2340$:
$$\begin{bmatrix} \underline{2} & \underline{3} & \underline{4} \\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & \underline{1000} \\ 1 & \underline{100} \\ 0 & \underline{10} \end{bmatrix} = \begin{bmatrix} 3 & \underline{2340} \\ 0 & 1000 \end{bmatrix}.$$
Matrix multiplication satisfies the rules $(AB)C = A(BC)$ (associativity), and $(A+B)C = AC + BC$ as well as $C(A+B) = CA + CB$ (left and right distributivity), whenever the size of the matrices is such that the various products are defined. The product $AB$ may be defined without $BA$ being defined, namely if $A$ and $B$ are $m \times n$ and $n \times k$ matrices, respectively, and $m \neq k$. Even if both products are defined, they generally need not be equal; that is, $AB \neq BA$.
https://en.wikipedia.org/wiki/Matrix_(mathematics)
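The worked example above can be checked directly with a few lines of code (NumPy here is an illustrative choice):

```python
import numpy as np

A = np.array([[2, 3, 4],
              [1, 0, 0]])
B = np.array([[0, 1000],
              [1,  100],
              [0,   10]])

# Entry (0, 1) of the product: 2*1000 + 3*100 + 4*10 = 2340.
C = A @ B
```

Note also that here `B @ A` is a 3×3 matrix while `A @ B` is 2×2, illustrating that the two products need not even have the same shape, let alone be equal.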
I provide a thorough review of the theoretical and experimental status of electroweak multiplets as dark matter candidates, serving as the prototype of weakly interacting massive particle (WIMP) dark matter. Specifically, the examination includes both real SU(2) representations with zero hypercharge and complex ones with $Y \neq 0$. For the first time, all calculable thermal masses for scalar and fermionic WIMPs are computed, incorporating significant non-perturbative non-relativistic effects such as Sommerfeld enhancement and the formation of WIMP bound states. WIMP masses of a few hundred TeV are shown to be compatible both with $s$-wave unitarity of the annihilation cross-section and with perturbativity. Additionally, a strategy is outlined for probing these scenarios in the next generation of experiments.
arxiv:2405.05087
A lattice strong coupling calculation of the spectrum and chiral condensate of the 't Hooft model is presented. The agreement with the results of the continuum theory is strikingly good, even at the fourth order in the strong coupling expansion.
arxiv:hep-lat/9909027
We present high-S/N UV spectra for eight quasars at $z \sim 3$ obtained with VLT/FORS. The spectra enable us to analyze in detail the strongest emission features in the rest-frame range 1400-2000 Å of each source (C III], Si III], Al III, Si II, C IV and Si IV). Previous work indicates that a component of these lines is emitted in a region with well-defined properties (i.e., a high-density and low-ionization emitting region). Flux ratios Al III/Si III], C IV/Al III, Si IV/Si III], C IV/Si IV and Si II/Si III] for this region permit us to strongly constrain electron density, ionization parameter and metallicity through the use of diagnostic maps built from CLOUDY simulations. Reliable estimates of the product of density times ionization parameter allow us to derive the radius of the broad line region $r_{\mathrm{BLR}}$ from the definition of the ionization parameter. The $r_{\mathrm{BLR}}$ estimate and the assumption of virialized motions in the line-emitting gas yield an estimate of the black hole mass. We compare our results with estimates obtained from the $r_{\mathrm{BLR}}$-luminosity correlation customarily employed to estimate black hole masses of high-redshift quasars.
arxiv:1011.4248
We developed PyQUDA, a Python wrapper for QUDA written in Cython, designed to facilitate lattice QCD calculations using the Python programming language. PyQUDA leverages the optimized linear algebra capabilities of NumPy/CuPy/PyTorch, along with the highly optimized lattice QCD operations provided by QUDA, to accelerate research. This integration simplifies the process of writing calculation codes, enabling researchers to build more complex Python packages like EasyDistillation for specific physics objectives. PyQUDA supports a range of lattice QCD operations, including hybrid Monte Carlo (HMC) with n-flavor clover/HISQ fermions and inversion for the Wilson/clover/HISQ fermion actions with the multigrid solver. It also includes utility functions for reading lattice QCD data stored in Chroma, MILC, and $\chi$QCD formats. Type hints are supported by stub files, and multi-GPU support is provided through mpi4py.
arxiv:2411.08461
of Technology, 1995; Aichi University of Technology, 2000

=== Kenya ===
In Kenya, technical universities are special universities that focus on technical and engineering courses and offer certifications in the same from artisan, craft, diploma, higher diploma, degree, masters and doctorate levels. They are former national polytechnics and are the only institutions of learning that offer the complete spectrum of tertiary education programs. They include: Technical University of Kenya, formerly Kenya National Polytechnic, in Nairobi; Technical University of Mombasa, formerly Mombasa National Polytechnic, in Mombasa.

=== Jordan ===
Princess Sumaya University for Technology in Amman; Jordan University of Science and Technology in Irbid; Balqa Applied University in Salt; Tafila Technical University in Tafila.

=== Macau ===
The first polytechnic in Macau is the Polytechnic Institute of the University of East Asia, which was established in 1981 as an institute of a private university. In 1991, following the splitting of the University of East Asia into three (University of Macau, Macao Polytechnic Institute, Asia International Open University), a public and independent polytechnic institute, the Macao Polytechnic Institute, was officially established. The first private technology university, Macau University of Science and Technology, was established in 2000. Macao Polytechnic Institute was renamed Macao Polytechnic University in 2022.

=== Malaysia ===
Polytechnics in Malaysia have been in operation since 1969. The institutions provide courses for bachelor's degree and Bachelor of Science (BSc), advanced diploma, diploma and special skills certificate. The first polytechnic in Malaysia, Politeknik Ungku Omar, was established by the Ministry of Education in 1969 with the help of UNESCO and RM24.5 million from the United Nations Development Programme (UNDP). At present, Malaysia has 36 polytechnics all over the country providing engineering, agriculture, commerce, hospitality and design courses. There are also four technical universities in Malaysia, all belonging to the Malaysian Technical University Network: Universiti Tun Hussein Onn Malaysia; Universiti Malaysia Perlis; Universiti Teknikal Malaysia Melaka; Universiti Malaysia Pahang.

=== Mauritius ===
The only technical university in Mauritius is the University of Technology, Mauritius, with its main campus situated in La Tour Koenig, Pointe aux Sables.

=== Mexico ===
In Mexico there are different institutes and colleges of technology. Most of them are public institutions. The
https://en.wikipedia.org/wiki/Institute_of_technology
These lecture notes provide an overview of existing methodologies and recent developments for estimation and inference with high-dimensional time series regression models. First, we present main limit theory results for high-dimensional dependent data, which are relevant to covariance matrix structures as well as to dependent time series sequences. Second, we present main aspects of the asymptotic theory related to time series regression models with many covariates. Third, we discuss various applications of statistical learning methodologies for time series analysis purposes.
arxiv:2308.16192
Multi-class systems having possibly both finite and infinite classes are investigated under a natural partial exchangeability assumption. It is proved that the conditional law of such a system, given the vector of the empirical measures of its finite classes and directing measures of its infinite ones (given by the de Finetti theorem), corresponds to sampling independently from each class, without replacement from the finite classes and i.i.d. from the directing measure for the infinite ones. The equivalence between the convergence of multi-exchangeable systems with fixed class sizes and the convergence of the corresponding vectors of measures is then established.
arxiv:0902.0539
For any numerical semigroup $S$, there are infinitely many symmetric numerical semigroups $T$ such that $S = \frac{T}{2}$ is their half. We study the Betti numbers of the numerical semigroup ring $k[T]$ when $S$ is a 3-generated numerical semigroup or telescopic. We also consider 4-generated symmetric semigroups and the so-called 4-irreducible numerical semigroups.
arxiv:1111.1433
We show that even a rather minimal extension of the Einstein-Hilbert action by a nonminimal coupling of the scalar field to the Ricci curvature scalar results in configurations that resemble dark energy stars more than ordinary boson stars. Even though many of these configurations are endowed with negative principal pressures, the strong energy condition, as a signal of repulsive gravity, is not significantly violated in them. When imposing restrictions on matter from energy conditions, we find that the maximally allowed masses are shifted to lower values due to the violation of the weak and dominant energy conditions. We also calculate the effective compactness and show that its maximum value is attained in the region of negative pressures, and is greater than that of ordinary boson stars. Moreover, we develop a universality technique which allows one to efficiently map small configurations, which are easily solved by numerical methods, to large astrophysical objects.
arxiv:1212.3781
We consider a long-term optimal investment problem where an investor tries to minimize the probability of falling below a target growth rate. From a mathematical viewpoint, this is a large deviation control problem. This problem is shown to relate to a risk-sensitive stochastic control problem for a sufficiently large time horizon. Indeed, our theorem states a duality relation between the above two problems. Furthermore, under a multidimensional linear Gaussian model, we obtain explicit solutions for the primal problem.
arxiv:1001.2131
We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes on people's lives. We analyze the potential allocation harms that can result from semantic representation bias. To do so, we study the impact on occupation classification of including explicit gender indicators, such as first names and pronouns, in different semantic representations of online biographies. Additionally, we quantify the bias that remains when these indicators are "scrubbed," and describe proxy behavior that occurs in the absence of explicit gender indicators. As we demonstrate, differences in true positive rates between genders are correlated with existing gender imbalances in occupations, which may compound these imbalances.
arxiv:1901.09451
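The true-positive-rate differences mentioned above can be computed as a per-occupation gap between groups. A minimal sketch on synthetic data follows; the function name, data, and binary group encoding are all illustrative.

```python
import numpy as np

def tpr_gap(y_true, y_pred, group):
    """True-positive-rate difference between two groups for one occupation.

    y_true/y_pred are binary labels and predictions; group is a binary
    indicator. Returns TPR(group 0) - TPR(group 1).
    """
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tprs.append(np.mean(y_pred[mask]))  # fraction of positives recovered
    return tprs[0] - tprs[1]

# Toy example: group 0 has TPR 3/4, group 1 has TPR 1/2.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gap = tpr_gap(y_true, y_pred, group)
```

A gap far from zero for an occupation indicates an allocation harm of the kind the study correlates with existing gender imbalances.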
It is proposed that using both the self-nonself and danger theories gives a better understanding of how the immune system works. Comparing the immune system to a police force is useful in this case, since police respond both to danger or damage signals and to foreign or suspicious behavior even if no danger signals exist. We also propose that, due to low-zone tolerance, immunotherapy needs to be combined with another treatment method for cancer, e.g. chemotherapy and/or radiotherapy, to achieve sufficient eradication of tumors. Finally, we propose that fractional order differential equations are more suitable here than the familiar integer order differential equations. A fractional order example of two immune effectors attacking an antigen is given.
arxiv:0801.0849
Statistical quality control methods are essential for maintaining standard output in manufacturing processes. There are many classical ways to control a process, and many of them rest on a global assumption about the distribution of the process data: it is supposed to be normal, but clearly this is not valid for all processes. Such control charts can make wrong decisions that waste funds. So the main question when working with a multivariate data set is how to find a multivariate distribution for the data that preserves the original dependency between the variables. A copula function guarantees that this dependence is captured in the resulting joint distribution. This alone is not enough when there is no other fundamental information about the statistical population and we have only a data set; therefore, we apply the maximum entropy concept to deal with this situation. In this paper, we first obtain the joint distribution of a data set from a manufacturing process that needs to be in control while the production process is running. Then we obtain an elliptical control limit via the maximum copula entropy. Average run lengths are calculated for several means and shifts to show the ability of the maximum copula entropy. In the end, two practical data examples are presented, and the results of our method are compared with the traditional approach based on the Fisher distribution.
arxiv:2012.14759
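A much-simplified sketch of the elliptical control limit idea: here the in-control region is an ellipse of constant Mahalanobis distance estimated from the empirical mean and covariance, standing in for the limit the paper derives from the maximum-entropy copula. The Gaussian toy data and the alpha level are assumptions for illustration only.

```python
import numpy as np

def elliptical_control_limit(data, alpha=0.01):
    # Fit an elliptical in-control region from the empirical mean/covariance;
    # the paper's method would derive the region from the max-entropy copula.
    mu = data.mean(0)
    inv = np.linalg.inv(np.cov(data.T))
    # Squared Mahalanobis distance of each in-control observation.
    d2 = np.einsum('ij,jk,ik->i', data - mu, inv, data - mu)
    limit = np.quantile(d2, 1 - alpha)   # empirical (1 - alpha) quantile
    return mu, inv, limit

def out_of_control(x, mu, inv, limit):
    d = x - mu
    return d @ inv @ d > limit

rng = np.random.default_rng(0)
incontrol = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
mu, inv, limit = elliptical_control_limit(incontrol)
```

A point far outside the dependence structure, e.g. (5, 5), falls outside the ellipse, while points near the center do not.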
methods from quantum information theory are used to scrutinize quantum correlations encoded in the two - quark density matrix over light - cone momentum fractions $ x _ 1 $ and $ x _ 2 $. a non - perturbative three quark model light - cone wavefunction predicts significant non - classical correlations associated with the " entanglement negativity " measure for asymmetric and small quark momentum fractions. we perform one step of qcd scale evolution of the entire density matrix, not just its diagonal ( dpdf ), by computing collinearly divergent corrections due to the emission of a gluon. finally, we present first qualitative numerical results for single - step scale evolution of quantum entanglement correlations in double quark pdfs. at a higher $ q ^ 2 $ scale, the non - classical correlations manifest in the dpdf for nearly symmetric momentum fractions.
arxiv:2501.12312
in neural machine translation ( nmt ), researchers face the challenge of translating unseen ( out-of-vocabulary, oov ) words. to solve this, some researchers propose splitting western languages such as english and german into sub-words or compounds. in this paper, we address this oov issue and improve nmt adequacy for a harder language, chinese, whose characters are even more sophisticated in composition. we integrate chinese radicals into the nmt model under different settings to address the unseen-word challenge in chinese-to-english translation. this can also be considered a semantic component of the mt system, since chinese radicals usually carry the essential meaning of the words they form. meaningful radicals and new characters can be integrated into nmt systems with our models. we use an attention-based nmt system as a strong baseline. experiments on the standard chinese-to-english nist translation shared task data from 2006 and 2008 show that our models outperform the baseline on a wide range of state-of-the-art evaluation metrics including lepor, beer, and character, in addition to bleu and nist scores, especially at the adequacy level. we also report some interesting findings from our various experimental settings about the performance of words versus characters in chinese nmt, which differs from other languages: fully character-level nmt may perform well, or even at the state of the art, in some other languages, as researchers have recently demonstrated, but in chinese nmt word-boundary knowledge is important for model learning.
arxiv:1805.01565
we exhibit an example of obstructed k - polystable fano 3 - fold $ x $ such that the k - moduli stack of k - semistable fano varieties and the k - moduli space of k - polystable fano varieties have an embedded point at $ [ x ] $.
arxiv:2105.02307
using dynamic cantilever magnetometry and experimentally determining the cantilever ' s vibrational mode shape, we precisely measured the magnetic moment of a lithographically defined micron - sized superconducting nb ring, a key element for the previously proposed subpiconewton force standard. the magnetic moments due to individual magnetic fluxoids and a diamagnetic response were independently determined at t = 4. 3 k, with a subfemtoampere - square - meter resolution. the results show good agreement with the theoretical estimation yielded by the brandt and clem model within the spring constant determination accuracy.
arxiv:1701.07598
the understanding of few-nucleon systems at low energies is essential, e.g. for accurate predictions of element abundances in big-bang and stellar fusion. novel effective field theories, taking only nucleons, or nucleons and pions, as explicit degrees of freedom, provide a systematic approach, permitting an estimate of theoretical uncertainties. basic constants parameterising the short-range physics are derived from only a handful of experimental values. the doublet neutron scattering length a_2 of the deuteron is particularly sensitive to a three-nucleon contact interaction, but is experimentally known with only 6 % accuracy. it can be deduced from the two experimentally accessible parameters of the nd scattering length. we plan to measure the poorly known " incoherent " nd scattering length a_{i,d} with 10^{-3} accuracy, using a ramsey apparatus for pseudomagnetic precession with a cold polarised neutron beam at psi. a polarised target containing both deuterons and protons will permit a measurement relative to the incoherent np scattering length, which is known experimentally with an accuracy of 2.4 \times 10^{-4}.
arxiv:nucl-ex/0401029
the observed correlation between star - formation in central galaxies and in their neighbours ( a phenomenon dubbed galactic conformity ) is in need of a convincing physical explanation. we use a volume - limited sample of galaxies with redshifts less than 0. 03 drawn from the sdss dr7 to investigate the scale dependence of the effect and how it changes as a function of the mass of the central galaxy. conformity extends over a central galaxy stellar mass range spanning two orders of magnitude. in central galaxies with masses less than 10 ^ 10 m _ sun, conformity extends out to scales in excess of 4 mpc, well beyond the virial radii of their dark matter halos. for low mass central galaxies, large - scale conformity with neighbours is only seen when the centrals have low star formation rate or gas content. in contrast, at high stellar masses, conformity with neighbours applies in the gas - rich regime and is clearly confined to scales comparable to the virial radius of the dark matter halo of the central galaxy. our analysis of a mock catalogue from the guo et al ( 2011 ) models shows that conformity - like effects are much weaker than observed, and apply only to the low sfr / m * tail of neighbouring galaxies. in the models, the median and the upper percentiles of the sfr / m * distribution remain almost unchanged, which is in contradiction with the data. conformity between low - mass, gas - poor central galaxies and their distant neighbours cannot be explained within the framework of halo occupation distribution ( hod ) models. it is likely a signature of pre - heating of the intergalactic gas at an earlier epoch. the smaller - scale conformity between high - mass, gas - rich central galaxies and their close neighbours may be a signature of ongoing gas accretion onto central galaxies in a minority of massive dark matter halos.
arxiv:1209.3306
in this paper, we describe a robust algorithm for 2-manifold generation for various kinds of shapenet models. the input of our pipeline is a triangle mesh, with a set of vertices and triangular faces. the output is a 2-manifold with vertices roughly uniformly distributed on the geometry surface. our algorithm uses an octree to represent the original mesh and constructs the surface by isosurface extraction. finally, we project the vertices onto the original mesh to achieve high precision. as a result, our method can be applied efficiently to all shapenet models with the guarantee of correct 2-manifold topology.
arxiv:1802.01698
using a maximum entropy production principle ( mepp ), we derive a new type of relaxation equations for two - dimensional turbulent flows in the case where a prior vorticity distribution is prescribed instead of the casimir constraints [ ellis, haven, turkington, nonlin., 15, 239 ( 2002 ) ]. the particular case of a gaussian prior is specifically treated in connection to minimum enstrophy states and fofonoff flows. these relaxation equations are compared with other relaxation equations proposed by robert and sommeria [ phys. rev. lett. 69, 2776 ( 1992 ) ] and chavanis [ physica d, 237, 1998 ( 2008 ) ]. they can provide a small - scale parametrization of 2d turbulence or serve as numerical algorithms to compute maximum entropy states with appropriate constraints. we perform numerical simulations of these relaxation equations in order to illustrate geometry induced phase transitions in geophysical flows.
arxiv:0912.5096
by means of high - pressure resistivity measurements on single crystals, we investigate the charge transport properties of cu $ _ x $ pdte $ _ 2 $, notable for the combination of topological type - ii dirac semimetallic properties with superconductivity up to $ t _ c = 2. 5 $ k. in both cases of pristine ( $ x = 0 $ ) and intercalated ( $ x = 0. 05 $ ) samples, we find an unconventional $ t ^ 4 $ power law behavior of the low - temperature resistivity visible up to $ \ sim $ 40 k and remarkably stable under pressure up to 8. 2 gpa. this observation is explained by the low carrier density $ n $, which strongly reduces the $ k $ - region available for electron - phonon scattering, as previously reported in other low - $ n $ two - dimensional systems, such as multilayer graphene and semiconductor heterostructures. our data analysis complemented by specific heat measurements and supported by previous quantum oscillation studies and \ textit { ab initio } calculations suggests a scenario of one - band charge transport. within this scenario, our analysis yields a large value of transport electron - phonon coupling constant $ \ lambda _ { tr } = 1. 2 $ at ambient pressure that appears to be strongly enhanced by pressure assuming a constant effective mass.
arxiv:2106.05613
graph neural networks ( gnns ) have become a popular approach for various applications, ranging from social network analysis to modeling chemical properties of molecules. while gnns often show remarkable performance on public datasets, they can struggle to learn long - range dependencies in the data due to over - smoothing and over - squashing tendencies. to alleviate this challenge, we propose pcapass, a method which combines principal component analysis ( pca ) and message passing for generating node embeddings in an unsupervised manner and leverages gradient boosted decision trees for classification tasks. we show empirically that this approach provides competitive performance compared to popular gnns on node classification benchmarks, while gathering information from longer distance neighborhoods. our research demonstrates that applying dimensionality reduction with message passing and skip connections is a promising mechanism for aggregating long - range dependencies in graph structured data.
arxiv:2202.00408
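The PCAPass recipe described above — message passing combined with PCA compression and skip connections — can be sketched in a few lines. The mean-aggregation rule, the number of kept components, and the toy graph below are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def pcapass_embed(adj, feats, hops=3, dim=4):
    # Alternate mean-aggregation message passing with PCA compression,
    # concatenating the previous features as a skip connection at each hop.
    deg = adj.sum(1, keepdims=True).clip(min=1)
    x = feats
    for _ in range(hops):
        agg = adj @ x / deg                    # mean over neighbors
        x = np.concatenate([x, agg], axis=1)   # skip connection
        x = x - x.mean(0)                      # center before PCA
        _, _, vt = np.linalg.svd(x, full_matrices=False)
        x = x @ vt[:dim].T                     # keep top `dim` components
    return x

# Toy graph: two triangles joined by a single bridge edge.
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1
emb = pcapass_embed(adj, np.eye(6))
```

Because each hop re-aggregates the compressed features, information from neighborhoods up to `hops` edges away reaches every node embedding, which is the long-range behavior the abstract emphasizes.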
we define treetopes, a generalization of the three - dimensional roofless polyhedra ( halin graphs ) to arbitrary dimensions. like roofless polyhedra, treetopes have a designated base facet such that every face of dimension greater than one intersects the base in more than one point. we prove an equivalent characterization of the 4 - treetopes using the concept of clustered planarity from graph drawing, and we use this characterization to recognize the graphs of 4 - treetopes in polynomial time. this result provides one of the first classes of 4 - polytopes, other than pyramids and stacked polytopes, that can be recognized efficiently from their graphs.
arxiv:1510.03152
malware ascription is a relatively unexplored area, and it is rather difficult to attribute malware and detect authorship. in this paper, we employ various static and dynamic features of malicious executables to classify malware based on their family. we leverage cuckoo sandbox and machine learning to make progress in this research. post analysis, classification is performed using various deep learning and machine learning algorithms. using the features gathered from virustotal ( static ) and cuckoo ( dynamic ) reports, we ran the vectorized data against multinomial naive bayes, support vector machine, and bagging using decision trees as the base estimator. for each classifier, we tuned the hyper - parameters using exhaustive search methods. our reports can be extremely useful in malware ascription.
arxiv:2112.02639
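The abstract above names multinomial naive Bayes as one of the classifiers run over the vectorized VirusTotal/Cuckoo features. A self-contained sketch of that classifier on hypothetical per-family feature counts (the families and counts are made up for illustration):

```python
import numpy as np

class MultinomialNB:
    # Minimal multinomial naive Bayes with Laplace smoothing, as one of the
    # classifiers the paper runs over vectorized static/dynamic features.
    def fit(self, X, y):
        self.classes = np.unique(y)
        counts = np.array([X[y == c].sum(0) for c in self.classes])
        self.log_prior = np.log(np.array([(y == c).mean() for c in self.classes]))
        smoothed = counts + 1.0                              # Laplace smoothing
        self.log_lik = np.log(smoothed / smoothed.sum(1, keepdims=True))
        return self

    def predict(self, X):
        scores = X @ self.log_lik.T + self.log_prior
        return self.classes[np.argmax(scores, axis=1)]

# Hypothetical feature counts, e.g. API-call frequencies per sample.
X = np.array([[5, 0, 1], [4, 1, 0], [0, 6, 2], [1, 5, 3]])
y = np.array(["familyA", "familyA", "familyB", "familyB"])
model = MultinomialNB().fit(X, y)
pred = model.predict(np.array([[3, 0, 1], [0, 4, 2]]))
```

In the full pipeline the same vectorized feature matrix would also be fed to the SVM and bagged-decision-tree classifiers mentioned in the abstract, with hyper-parameters tuned by exhaustive search.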
historically, two types of nlp have been investigated : fully automated processing of language by machines ( nlp ) and autonomous processing of natural language by people, i.e. the human brain ( psycholinguistics ). we believe there is room and need for another kind, inlp : interactive natural language processing. this intermediate approach starts from people's needs, trying to bridge the gap between their actual knowledge and a given goal. given that people's knowledge is variable and often incomplete, the aim is to build bridges linking a given knowledge state to a given goal. we present some examples, trying to show that this goal is worth pursuing and achievable at a reasonable cost.
arxiv:1201.4733
continual self - supervised learning ( cssl ) methods have gained increasing attention in remote sensing ( rs ) due to their capability to learn new tasks sequentially from continuous streams of unlabeled data. existing cssl methods, while learning new tasks, focus on preventing catastrophic forgetting. to this end, most of them use regularization strategies to retain knowledge of previous tasks. this reduces the model ' s ability to adapt to the data of new tasks ( i. e., learning plasticity ), which can degrade performance. to address this problem, in this paper, we propose a novel cssl method that aims to learn tasks sequentially, while achieving high learning plasticity. to this end, the proposed method uses a knowledge distillation strategy with an integrated decoupling mechanism. the decoupling is achieved by first dividing the feature dimensions into task - common and task - specific parts. then, the task - common features are forced to be correlated to ensure memory stability while the task - specific features are forced to be de - correlated facilitating the learning of new features. experimental results show the effectiveness of the proposed method compared to cassle, which is a widely used cssl framework, with improvements of up to 1. 12 % in average accuracy and 2. 33 % in intransigence in a task - incremental scenario, and 1. 24 % in average accuracy and 2. 01 % in intransigence in a class - incremental scenario.
arxiv:2503.24088
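A sketch of the decoupling mechanism described above: the feature dimensions are split into a task-common part, pushed to stay correlated with the frozen previous-task model for memory stability, and a task-specific part, pushed toward zero cross-dimension correlation to keep plasticity. The split point, normalization, and equal loss weighting are assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def decoupled_distillation_loss(z_student, z_teacher, n_common):
    # Standardize each feature dimension (zero mean, unit std).
    def norm(a):
        a = a - a.mean(0)
        return a / (a.std(0) + 1e-8)

    zc_s, zs_s = z_student[:, :n_common], z_student[:, n_common:]
    zc_t = z_teacher[:, :n_common]

    # Task-common part: encourage correlation with the teacher (stability).
    corr_loss = 1.0 - (norm(zc_s) * norm(zc_t)).mean()

    # Task-specific part: penalize off-diagonal correlations (plasticity).
    c = norm(zs_s).T @ norm(zs_s) / len(zs_s)
    decorr_loss = ((c - np.diag(np.diag(c))) ** 2).sum()

    return corr_loss + decorr_loss
```

When student and teacher agree on the common dimensions the correlation term vanishes, while anti-correlated common features are penalized maximally, so the loss orders feature drift the way the abstract's stability argument requires.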
we consider the phase transition in the dual yang - mills theory at finite temperature $ t $. the phase transition is associated with a change ( breaking ) of symmetry. the effective mass of the dual gauge field is derived as a function of $ t $ - dependent gauge coupling constant. we investigate the analytical criterion constraining the existence of a quark - antiquark bound state at temperatures higher than the temperature of deconfinement.
arxiv:0801.2074
we consider the non-equilibrium dynamics of a system of interacting massless fermions in a ring threaded by a magnetic flux. we focus on the quench where the flux is initially vanishing and is then turned on. we show that the definition of the limit of an abrupt quench is problematic due to the presence of gauge invariance, which has to be taken into account. we then propose a specific protocol where the dynamics is non-trivial. employing techniques from the algebraic bethe ansatz, we present an exact formula for the loschmidt echo, valid at all times, as a fredholm determinant at the free-fermionic point. from the analysis of the asymptotic behavior of the fredholm determinant, we show that the distribution of work done at small energies presents an edge singularity whose exponent can be explicitly computed. using the correspondence between the edge singularity and the decay of the fidelity at finite size, we propose a general formula for the exponent valid also in the interacting case.
arxiv:1310.6652
we present a short review of theories based on warped extra dimensions ( motivated by the hierarchy problem of the standard model ) which can accommodate a higgs boson in the range suggested by the recent lhc results at 7 tev. using the ads/cft correspondence, the higgs is composite and can be described in the dual theory by a bound state of the 4d cft. we have classified the theories into those with a scalar higgs ( 5d sm ) and those where the higgs is the fifth component ( gauge-higgs unification ) of a bulk gauge field.
arxiv:1206.4518
we study zero - cycles in families of rationally connected varieties. we show that for a smooth projective scheme over a henselian discrete valuation ring the restriction of relative zero cycles to the special fiber induces an isomorphism on chow groups if the special fiber is separably rationally connected. we further extend this result to certain higher chow groups and develop conjectures in the non - smooth case. our main results generalise a result of kollár [ 31 ].
arxiv:2211.04300
describing chemical reactions in solution on a molecular level is a challenging task due to the high mobility of weakly interacting solvent molecules which requires configurational sampling. for instance, polar and protic solvents can interact strongly with solutes and may interfere in reactions. however, to define and identify representative arrangements of solvent molecules modulating a transition state is a non - trivial task. here, we propose to monitor their active participation in the decaying normal mode at a transition state, which defines active solvent molecules. moreover, it is desirable to prepare a low - dimensional microsolvation model in a well - defined, fully automated, high - throughput, and easy - to - deploy fashion, which we propose to derive in a stepwise protocol. first, transition state structures are optimized in a sufficiently solvated quantum - classical hybrid model, which are then subjected to a re - definition of a then reduced quantum region. from the reduced model, minimally microsolvated structures are extracted that contain only active solvent molecules. modeling the remaining solvation effects is deferred to a continuum model. to establish an easy - to - use free - energy model, we combine the standard thermochemical gas - phase model with a correction for the cavity entropy in solution. we assess our microsolvation and free - energy models for methanediol formation from formaldehyde, for the hydration of carbon dioxide ( which we consider in a solvent mixture to demonstrate the versatility of our approach ), and, finally, for the chlorination of phenol with hypochlorous acid.
arxiv:2502.07965
we show that backflow correlations in the variational wave function for the hubbard model greatly improve the previous results given by the slater-jastrow state usually considered in this context. we provide evidence that, within this approach, it is possible to have a satisfactory connection with the strong-coupling regime. moreover, we show that, for the hubbard model on the lattice, backflow correlations are essentially short range, inducing an effective attraction between empty sites ( holons ) and doubly occupied sites ( doublons ). in the presence of frustration, we report evidence that the metal to mott-insulator transition is marked by a discontinuity of the double occupancy, together with a similar discontinuity of the kinetic term that does not change the number of holons and doublons, while the other kinetic terms are continuous across the transition. finally, we show the estimate of the charge gap, obtained from particle-hole excitations à la feynman over the ground-state wave function.
arxiv:1102.3017
in [ 15 ] a homotopic variation for locality of logics was presented, namely a quillen model category - based framework for locality under logical equivalence, for every primitive - positive sentence of quantifier - rank $ k $. in this paper, we will present some of the implications and possible themes for investigations that arise from the aforementioned framework.
arxiv:2005.11322
computing - in - memory ( cim ) architectures based on emerging non - volatile memory ( nvm ) devices have demonstrated great potential for deep neural network ( dnn ) acceleration thanks to their high energy efficiency. however, nvm devices suffer from various non - idealities, especially device - to - device variations due to fabrication defects and cycle - to - cycle variations due to the stochastic behavior of devices. as such, the dnn weights actually mapped to nvm devices could deviate significantly from the expected values, leading to large performance degradation. to address this issue, most existing works focus on maximizing average performance under device variations. this objective would work well for general - purpose scenarios. but for safety - critical applications, the worst - case performance must also be considered. unfortunately, this has been rarely explored in the literature. in this work, we formulate the problem of determining the worst - case performance of cim dnn accelerators under the impact of device variations. we further propose a method to effectively find the specific combination of device variation in the high - dimensional space that leads to the worst - case performance. we find that even with very small device variations, the accuracy of a dnn can drop drastically, causing concerns when deploying cim accelerators in safety - critical applications. finally, we show that surprisingly none of the existing methods used to enhance average dnn performance in cim accelerators are very effective when extended to enhance the worst - case performance, and further research down the road is needed to address this problem.
arxiv:2207.07626
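The problem formulated above — finding the combination of device variations that yields the worst-case performance — can be illustrated at toy scale. The sketch below runs projected gradient ascent over a bounded weight perturbation of a logistic model; the model, perturbation bound, and optimizer settings are stand-in assumptions, not the paper's method for real CIM arrays.

```python
import numpy as np

def worst_case_loss(w, X, y, eps, steps=50, lr=0.05):
    # Projected gradient ascent on the perturbation delta, |delta_i| <= eps,
    # maximizing the logistic loss: a toy stand-in for the worst-case search
    # over the high-dimensional device-variation space.
    rng = np.random.default_rng(0)
    delta = rng.uniform(-eps, eps, size=w.shape)
    for _ in range(steps):
        z = (X @ (w + delta)) * y
        grad = -(X * (y / (1.0 + np.exp(z)))[:, None]).sum(0)  # d(loss)/d(delta)
        delta = np.clip(delta + lr * grad, -eps, eps)          # project to box
    z = (X @ (w + delta)) * y
    return np.log1p(np.exp(-z)).mean()

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 8))
w = rng.normal(size=8)
y = np.sign(X @ w)                      # labels consistent with nominal weights
nominal = np.log1p(np.exp(-(X @ w) * y)).mean()
worst = worst_case_loss(w, X, y, eps=0.3)
```

Even with a small per-weight bound, the adversarially chosen perturbation degrades the loss well beyond the nominal value, mirroring the abstract's observation that average-case robustness does not bound the worst case.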
times in the west until that of martin behaim in 1492. additionally it could well be a representation of the entire " world " or cosmos. a recent study of medieval concepts of the sphericity of the earth noted that " since the eighth century, no cosmographer worthy of note has called into question the sphericity of the earth ". however, the work of these intellectuals may not have had significant influence on public opinion, and it is difficult to tell what the wider population may have thought of the shape of the earth, if they considered the question at all. ==== europe: high and late middle ages ==== hermann of reichenau ( 1013 – 1054 ) was among the earliest christian scholars to estimate the circumference of earth with eratosthenes' method. thomas aquinas ( 1225 – 1274 ), the most widely taught theologian of the middle ages, believed in a spherical earth and took for granted that his readers also knew the earth is round. lectures in the medieval universities commonly advanced evidence in favor of the idea that the earth was a sphere. jill tattersall shows that in many vernacular works in 12th- and 13th-century french texts the earth was considered " round like a table " rather than " round like an apple ". she writes, " [ i ] n virtually all the examples quoted... from epics and from non-' historical ' romances ( that is, works of a less learned character ) the actual form of words used suggests strongly a circle rather than a sphere ", though she notes that even in these works the language is ambiguous. portuguese navigation down and around the coast of africa in the latter half of the 1400s gave wide-scale observational evidence for earth's sphericity. in these explorations, the sun's position moved more northward the further south the explorers travelled. its position directly overhead at noon gave evidence for crossing the equator. these apparent solar motions in detail were more consistent with north–south curvature and a distant sun than with any flat-earth explanation. the ultimate demonstration came when ferdinand magellan's expedition completed the first global circumnavigation in 1521. antonio pigafetta, one of the few survivors of the voyage, recorded the loss of a day in the course of the voyage, giving evidence for east–west curvature. ==== middle east: islamic scholars ==== prior to the introduction of greek cosmology into
https://en.wikipedia.org/wiki/Flat_Earth
we develop a new thermodynamic approach to stochastic graph - rewriting. the ingredients are a finite set of reversible graph - rewriting rules called generating rules, a finite set of connected graphs p called energy patterns and an energy cost function. the idea is that the generators define the qualitative dynamics, by showing which transformations are possible, while the energy patterns and cost function specify the long - term probability $ \ pi $ of any reachable graph. given the generators and energy patterns, we construct a finite set of rules which ( i ) has the same qualitative transition system as the generators ; and ( ii ) when equipped with suitable rates, defines a continuous - time markov chain of which $ \ pi $ is the unique fixed point. the construction relies on the use of site graphs and a technique of ` growth policy ' for quantitative rule refinement which is of independent interest. this division of labour between the qualitative and long - term quantitative aspects of the dynamics leads to intuitive and concise descriptions for realistic models ( see the examples in s4 and s5 ). it also guarantees thermodynamical consistency ( aka detailed balance ), otherwise known to be undecidable, which is important for some applications. finally, it leads to parsimonious parameterizations of models, again an important point in some applications.
arxiv:1503.06022
we present an overview on the current experimental and phenomenological status of transverse single spin asymmetries ( tssas ) in proton - proton collisions. in particular, we focus on large - $ p _ t $ inclusive pion, photon, jet, pion - jet production and drell - yan processes. for all of them theoretical estimates are given in terms of a generalised parton model ( gpm ) based on a transverse momentum dependent ( tmd ) factorisation scheme. comparisons with the corresponding results in a collinear twist - 3 formalism and in a modified gpm approach are also made. on the experimental side, a selection of the most interesting and recent results from rhic is presented.
arxiv:1512.05379
using acoustic methods the complex high - frequency conductance of high - mobility $ n $ - gaas / algaas heterostructures was determined in magnetic fields 12 $ \ div $ 18 ~ t. based on the observed frequency and temperature dependences we conclude that in the investigated magnetic field range and at sufficiently low temperatures, $ t \ lesssim 200 $ ~ mk, the electron system forms a wigner crystal deformed due to pinning by disorder. at some temperature, which depends on the electron filling factor, the temperature dependences of both components of the complex conductance get substantially changed. we have ascribed this rapid change of the conduction mechanism to melting of the wigner crystal and study the dependence of the so - defined melting temperature on the electron filling factor.
arxiv:1607.01918
flavour changing neutral current decays are a very sensitive test of the standard model and its extensions. in particular the decay k - > pi nu nubar constitutes a clean way to provide constraints, independent of long distance effects. motivated by the recent experimental data of the e787 and e865 collaborations and by the difference between the standard model prediction and data, we consider in detail new physics scenarios such as the minimal supersymmetric standard model and r - parity violating supersymmetry. we begin with analysing the impact of new measurements on the standard model result obtaining b ( k ^ + - > pi ^ + nu nubar ) = ( 8. 18 + / - 1. 22 ) x 10 ^ ( - 11 ). predictions for other rare kaon decays are discussed, too. our results allow to improve the limits on r - parity violating couplings with respect to previous analyses.
arxiv:hep-ph/0407216
it is well known that the dynamics of a quantum system is always non - adiabatic in passage through a quantum critical point and the defect density in the final state following a quench shows a power - law scaling with the rate of quenching. however, we propose here a possible situation where the dynamics of a quantum system in passage across quantum critical regions is adiabatic and the defect density decays exponentially. this is achieved by incorporating additional interactions which lead to quantum critical behavior and gapless phases but do not participate in the time evolution of the system. to illustrate the general argument, we study the defect generation in the quantum critical dynamics of a spin - 1 / 2 anisotropic quantum xy spin chain with three spin interactions and a linearly driven staggered magnetic field.
arxiv:0906.1161
hadron production differs between $p\bar{p}$ and $pp$ interactions at high energies. there is a process of hadron production from three quark strings in $p\bar{p}$ which is absent in $pp$. this process grows as $(\ln\sqrt{s})^2$ and becomes significant as the collision energy increases. inclusive cross sections of $p\bar{p}$ interactions exceed the inclusive cross sections of $pp$. a theoretical estimate of the ratio of $p\bar{p}$ to $pp$ at energy $\sqrt{s} = 900$ gev gives $r = 1.12 \pm 0.03$. the ua1 data on the $p\bar{p}$ transverse momentum distribution are about 1.2 - 1.3 times higher than the cms, atlas and alice data on $pp$ at energy $\sqrt{s} = 900$ gev.
arxiv:1111.4978
we review the new possibilities offered by the reaction dynamics of asymmetric heavy ion collisions, using stable and unstable beams. we show that it represents a rather unique tool to probe regions of highly asymmetric nuclear matter ( $ anm $ ) in compressed as well as dilute phases, and to test the in - medium isovector interaction for high momentum nucleons. the focus is on a detailed study of the symmetry term of the nuclear equation of state ( $ eos $ ) in regions far away from saturation conditions but always under laboratory controlled conditions. thermodynamic properties of $ anm $ are surveyed starting from nonrelativistic and relativistic effective interactions. in the relativistic case the role of the isovector scalar $ \ delta $ - meson is stressed. the qualitative new features of the liquid - gas phase transition, " diffusive " instability and isospin distillation, are discussed. the results of ab - initio simulations of n - rich, n - poor, heavy ion collisions, using stochastic isospin dependent transport equations, are analysed as a function of beam energy and centrality. the isospin dynamics plays an important role in all steps of the reaction, from prompt nucleon emissions to the final fragments. the isospin diffusion is also of large interest, due to the interplay of asymmetry and density gradients. in relativistic collisions, the possibility of a direct study of the covariant structure of the effective nucleon interaction is shown. results are discussed for particle production, collective flows and iso - transparency. perspectives of further developments of the field, in theory as well as in experiment, are presented.
arxiv:nucl-th/0412060
we report optical and infrared observations of the massive x - ray binary system 4u1145 - 619 ( v801 cen ) which show that the circumstellar disc of the be star component is in decline. infrared j, h, k, l magnitudes of v801cen have been monitored from 1993 march to 1996 april. h alpha spectra have been obtained throughout the same period. we find that both the infrared excess and the balmer emission have been in decline throughout the period of observations. a 13 year optical and x - ray history of the source has been collated, revealing a possible correlation between the optical and x - ray activity. in addition, we have used u, v, b, y, beta indices, corrected for both circumstellar and interstellar effects, to calculate the physical parameters of the underlying b star.
arxiv:astro-ph/9706110
when we want to predict the future, we compute it from what we know about the present. specifically, we take a mathematical representation of observed reality, plug it into some dynamical equations, and then map the time - evolved result back to real - world predictions. but while this computational process can tell us what we want to know, we have taken this procedure too literally, implicitly assuming that the universe must compute itself in the same manner. physical theories that do not follow this computational framework are deemed illogical, right from the start. but this anthropocentric assumption has steered our physical models into an impossible corner, primarily because of quantum phenomena. meanwhile, we have not been exploring other models in which the universe is not so limited. in fact, some of these alternate models already have a well - established importance, but are thought to be mathematical tricks without physical significance. this essay argues that only by dropping our assumption that the universe is a computer can we fully develop such models, explain quantum phenomena, and understand the workings of our universe. ( this essay was awarded third prize in the 2012 fqxi essay contest ; a new afterword compares and contrasts this essay with robert spekkens ' first prize entry. )
arxiv:1211.7081
for the constant astigmatism equation, we construct a system of nonlocal conservation laws ( an abelian covering ) closed under the reciprocal transformations. we give functionally independent potentials modulo a wronskian type relation.
arxiv:1602.06861
recent deep network - based compressive sensing ( cs ) methods have achieved great success. however, most of them regard different sampling matrices as different independent tasks and need to train a specific model for each target sampling matrix. such practices give rise to inefficiency in computing and suffer from poor generalization ability. in this paper, we propose a novel controllable arbitrary - sampling network, dubbed coast, to solve cs problems of arbitrary - sampling matrices ( including unseen sampling matrices ) with one single model. under the optimization - inspired deep unfolding framework, our coast exhibits good interpretability. in coast, a random projection augmentation ( rpa ) strategy is proposed to promote the training diversity in the sampling space to enable arbitrary sampling, and a controllable proximal mapping module ( cpmm ) and a plug - and - play deblocking ( pnp - d ) strategy are further developed to dynamically modulate the network features and effectively eliminate the blocking artifacts, respectively. extensive experiments on widely used benchmark datasets demonstrate that our proposed coast is not only able to handle arbitrary sampling matrices with one single model but also to achieve state - of - the - art performance with fast speed. the source code is available at https://github.com/jianzhangcs/coast.
arxiv:2107.07225
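COAST's optimization-inspired unfolding builds on classical proximal-gradient recovery. Below is a minimal numpy sketch of the ISTA iteration that such unfolded networks generalize; this is not the paper's network, and the Gaussian sampling matrix, step size, sparsity level, and regularization weight are illustrative assumptions.

```python
import numpy as np

def ista(y, A, lam=0.01, step=None, n_iter=500):
    """ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = spectral norm squared
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)                           # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 256, 5                              # measurements, length, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)      # random Gaussian sampling matrix
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = ista(y, A)
```

Roughly speaking, deep unfolding replaces the fixed soft-threshold proximal step with a learned module at each unrolled iteration; COAST's CPMM is a controllable variant of that idea.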
let $ r $ be an associative ring with identity, $ c ( r ) $ denote the center of $ r $, and $ g ( x ) $ be a polynomial in the polynomial ring $ c ( r ) [ x ] $. $ r $ is called strongly $ g ( x ) $ - clean if every element $ r \ in r $ can be written as $ r = s + u $ with $ g ( s ) = 0 $, $ u $ a unit of $ r $, and $ su = us $. the relation between strongly $ g ( x ) $ - clean rings and strongly clean rings is determined, some general properties of strongly $ g ( x ) $ - clean rings are given, and strongly $ g ( x ) $ - clean rings generated by units are discussed.
arxiv:0803.3353
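The definition above is easy to test exhaustively in a small commutative ring, where the commuting condition su = us holds automatically. A brute-force check over Z_n; the choices of n and g below are illustrative examples of mine, not taken from the paper.

```python
from math import gcd

def strongly_gx_clean(n, g):
    """Check that every r in Z_n can be written r = s + u with g(s) = 0 (mod n)
    and u a unit of Z_n. In a commutative ring su = us is automatic, so this
    is the strongly g(x)-clean condition."""
    units = {u for u in range(n) if gcd(u, n) == 1}
    roots = {s for s in range(n) if g(s) % n == 0}
    return all(any((r - s) % n in units for s in roots) for r in range(n))

# g(x) = x^2 - x recovers the classical notion of a (strongly) clean ring
z6_clean = strongly_gx_clean(6, lambda x: x * x - x)
# for g(x) = x^2 the roots are square-zero elements; Z_4 fails at r = 0
z4_sq = strongly_gx_clean(4, lambda x: x * x)
```

The second example shows the dependence on g: Z_4 is clean in the classical sense, but no unit of Z_4 differs from 0 by a square-zero element.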
models leak information about their training data. this enables attackers to infer sensitive information about the training set, notably to determine whether a data sample was part of the model ' s training set. existing works empirically show the possibility of these membership inference ( tracing ) attacks against complex deep learning models. however, the attack results are dependent on the specific training data, can be obtained only after the tedious process of training the model and performing the attack, and are missing any measure of the confidence and unused potential power of the attack. in this paper, we theoretically analyze the maximum power of tracing attacks against high - dimensional graphical models, with the focus on bayesian networks. we provide a tight upper bound on the power ( true positive rate ) of these attacks, with respect to their error ( false positive rate ), for a given model structure even before learning its parameters. as it should be, the bound is independent of the knowledge and algorithm of any specific attack. it can help in identifying which model structures leak more information, how adding new parameters to the model increases its privacy risk, and what can be gained by adding new data points to decrease the overall information leakage. it provides a measure of the potential leakage of a model given its structure, as a function of the model complexity and the size of the training set.
arxiv:1905.12774
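The bound described above limits the power (TPR) of any tracing attack at a given error (FPR), independently of the attack algorithm. The simplest instance of this logic is the Neyman-Pearson lemma: for a known pair of hypotheses, the likelihood-ratio test attains the maximum power, so its ROC curve upper-bounds every possible attack. A stdlib-only toy with two Gaussians; the distributions and numbers are illustrative and not the paper's Bayesian-network setting.

```python
from statistics import NormalDist

def max_power(mu0, mu1, sigma, alpha):
    """Optimal TPR at FPR = alpha for H0: N(mu0, sigma^2) vs H1: N(mu1, sigma^2).
    With equal variances the likelihood-ratio test reduces to thresholding,
    and by Neyman-Pearson no test can do better."""
    h0, h1 = NormalDist(mu0, sigma), NormalDist(mu1, sigma)
    t = h0.inv_cdf(1 - alpha)   # threshold achieving FPR = alpha under H0
    return 1 - h1.cdf(t)        # power (TPR) under H1

beta = max_power(0.0, 1.0, 1.0, alpha=0.05)   # bound for a unit-separation pair
```

When the two hypotheses coincide the bound collapses to beta = alpha, i.e. no attack beats random guessing, which is a useful sanity check on any claimed bound of this kind.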
spline functions have long been used in the numerical solution of differential equations. recently they have been revived as isogeometric analysis, which offers integration of finite element analysis and nurbs - based cad into a single unified process. usually many nurbs pieces are needed to build geometrically continuous cad models. in this paper, we introduce some multiply periodic splines defined on the hyperbolic disc. a single piece of such splines is enough to build complex cad models. multiresolution analysis on surfaces of high genus built from such splines can be carried out naturally, and cad and fea are integrated directly on such models. because it is difficult to derive such splines, only a theoretical framework is presented here, together with some simple examples. rigorous derivation and construction of b - splines will be given in future papers.
arxiv:1908.02497
with the european union ' s artificial intelligence act taking effect on 1 august 2024, high - risk ai applications must adhere to stringent transparency and fairness standards. this paper addresses a crucial question : how can we scientifically audit algorithmic fairness? current methods typically remain at the basic detection stage of auditing, without accounting for more complex scenarios. we propose a novel framework, " peer - induced fairness ", which combines the strengths of counterfactual fairness and a peer - comparison strategy, creating a reliable and robust tool for auditing algorithmic fairness. our framework is universal, adaptable to various domains, and capable of handling different levels of data quality, including skewed distributions. moreover, it can distinguish whether adverse decisions result from algorithmic discrimination or inherent limitations of the subjects, thereby enhancing transparency. this framework can serve as both a self - assessment tool for ai developers and an external assessment tool for auditors to ensure compliance with the eu ai act. we demonstrate its utility in small and medium - sized enterprises ' access to finance, uncovering significant unfairness : 41. 51 % of micro - firms face discrimination compared to non - micro firms. these findings highlight the framework ' s potential for broader applications in ensuring equitable ai - driven decision - making.
arxiv:2408.02558
we obtain the existence of radially symmetric and decreasing solutions to a general class of quasi - linear elliptic problems by a nonsmooth version of a symmetric minimax principle recently obtained by jean van schaftingen.
arxiv:0911.1333
we calculate the width of the delta resonance at leading two - loop order in baryon chiral perturbation theory. this gives a correlation between the leading pion - nucleon - delta and pion - delta couplings, which is relevant for the analysis of pion - nucleon scattering and other processes.
arxiv:1608.00517
a generic ` chirp ' of the form h ( t ) = a ( t ) cos ( phi ( t ) ) can be closely approximated by a connected set of multiscale chirplets with quadratically - evolving phase. the problem of finding the best approximation to a given signal using chirplets can be reduced to that of finding the path of minimum cost in a weighted, directed graph, and can be solved in polynomial time via dynamic programming. for a signal embedded in noise we apply constraints on the path length to obtain a statistic for detection of chirping signals in coloured noise. in this paper we present some results from using this test to detect binary black hole coalescences in simulated ligo noise.
arxiv:0806.4417
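The reduction described above, finding the best chirplet chain as a minimum-cost path in a weighted directed graph, can be sketched directly. Nodes stand for time-frequency endpoints ordered in time, edges for candidate chirplets weighted by fit cost; the tiny graph below is a made-up example, not real detector data.

```python
import math

def min_cost_path(n_nodes, edges, sources, sinks):
    """Minimum-cost path in a DAG by dynamic programming. Nodes are assumed
    numbered in topological (time) order, so every edge (u, v, w) has u < v
    and relaxing edges sorted by u visits each node after all its predecessors."""
    cost = [math.inf] * n_nodes
    back = [None] * n_nodes
    for s in sources:
        cost[s] = 0.0
    for u, v, w in sorted(edges):
        if cost[u] + w < cost[v]:
            cost[v] = cost[u] + w
            back[v] = u
    end = min(sinks, key=lambda v: cost[v])
    path = [end]
    while back[path[-1]] is not None:
        path.append(back[path[-1]])
    return path[::-1], cost[end]

# toy graph: 4 time-frequency nodes, edges are candidate chirplets with fit costs
edges = [(0, 1, 1.0), (0, 2, 2.5), (1, 3, 1.0), (2, 3, 0.2)]
path, total = min_cost_path(4, edges, sources=[0], sinks=[3])
```

Because the graph is a DAG, one pass over the topologically ordered edges suffices, which is the polynomial-time guarantee the abstract refers to; constraining path length, as for the detection statistic, would add one DP dimension.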
the mean - field langevin dynamics ( mfld ) minimizes an entropy - regularized nonlinear convex functional on the wasserstein space over $ \ mathbb { r } ^ d $, and has gained attention recently as a model for the gradient descent dynamics of interacting particle systems such as infinite - width two - layer neural networks. however, many problems of interest have constrained domains, which are not solved by existing mean - field algorithms due to the global diffusion term. we study the optimization of probability measures constrained to a convex subset of $ \ mathbb { r } ^ d $ by proposing the \ emph { mirror mean - field langevin dynamics } ( mmfld ), an extension of mfld to the mirror langevin framework. we obtain linear convergence guarantees for the continuous mmfld via a uniform log - sobolev inequality, and uniform - in - time propagation of chaos results for its time - and particle - discretized counterpart.
arxiv:2505.02621
we discuss how to use evolutionary game theory ( egt ) as a framework for studying how cultural dynamics and structural properties can influence the evolution of norms and behaviors within a society. we provide a brief tutorial on how egt works, and discuss what kinds of insights it can provide. we then describe three published studies in which we have developed egt models that help explain how structural and external conditions in a society affect the emergence of social norms.
arxiv:1606.02570
k0 - k0bar oscillations are extremely sensitive to the k0 and k0bar energy at rest. even assuming m _ k0 = m _ k0bar, the energy is not guaranteed to be the same if gravitational effects on k0 and k0bar slightly differ. we consider various gravitation fields present and, in particular, galactic fields, which provide a negligible acceleration, but relatively large gravitational potential energy. a constraint from a possible effect of this potential energy on the kaon oscillations is found to be | ( m _ g / m _ i ) _ k0 - ( m _ g / m _ i ) _ k0bar | < 8 x 10 ^ - 13 at cl = 90 %. the derived constraint is competitive with other tests of universality of the free fall. other applications are also discussed.
arxiv:0811.1009
we solve a unified integral equation to obtain the $ x, q _ t $ and $ q $ dependence of the gluon distribution of a proton in the small $ x $ regime ; where $ x $ and $ q _ t $ are the longitudinal momentum fraction and the transverse momentum of the gluon probed at a scale $ q $. the equation generates a gluon with a steep $ x ^ { - \ lambda } $ behaviour, with $ \ lambda \ sim 0. 5 $, and a $ q _ t $ distribution which broadens as $ x $ decreases. we compare our solutions with, on the one hand, those that we obtain using the double - leading - logarithm approximation to altarelli - parisi evolution and, on the other hand, to those that we determine from the bfkl equation.
arxiv:hep-ph/9503266
deformable robots are notoriously difficult to model or control due to their high - dimensional configuration spaces. direct trajectory optimization suffers from the curse of dimensionality and incurs a high computational cost, while learning - based controller optimization methods are sensitive to hyper - parameter tuning. to overcome these limitations, we hypothesize that high - fidelity soft robots can be both simulated and controlled by restricting to low - dimensional spaces. under this assumption, we propose a two - stage algorithm to identify such simulation - and control - spaces. our method first identifies the so - called simulation - space that captures the salient deformation modes, to which the robot ' s governing equation is restricted. we then identify the control - space, to which control signals are restricted. we propose a multi - fidelity riemannian bayesian bilevel optimization to identify task - specific control spaces. we show that the dimension of the control - space can be less than $ 10 $ for a high - dof soft robot to accomplish walking and swimming tasks, allowing low - dimensional mpc controllers to be applied to soft robots with tractable computational complexity.
arxiv:2311.01720
alon et al. introduced the concept of non - repetitive colourings of graphs. here we address some questions regarding non - repetitive colourings of planar graphs. specifically, we show that the faces of any outerplanar map can be non - repetitively coloured using at most five colours. we also give some lower bounds for the number of colours required to non - repetitively colour the vertices of both outerplanar and planar graphs.
arxiv:math/0307365
we report radial velocities for 844 fgkm - type main sequence and subgiant stars and 45 k giants, most of which had either low - precision velocity measurements or none at all. these velocities differ from the standard stars of udry et al. by 0. 035 km / s ( rms ) for the 26 fgk standard stars in common. the zero - point of our velocities differs from that of udry et al. : ( v _ present - v _ udry ) = + 0. 053 km / s. thus these new velocities agree with the best known standard stars both in precision and zero - point, to well within 0. 1 km / s. nonetheless, both these velocities and the standards suffer from three sources of systematic error, namely, convective blueshift, gravitational redshift, and spectral type mismatch of the reference spectrum. these systematic errors are here forced to be zero for g2v stars by using the sun as reference, with vesta and day sky as proxies. but for spectral types departing from solar, the systematic errors reach 0. 3 km / s in the f and k stars and 0. 4 km / s in m dwarfs. multiple spectra were obtained for all 889 stars during four years, and 782 of them exhibit velocity scatter less than 0. 1 km / s. these stars may serve as radial velocity standards if they remain constant in velocity. we found 11 new spectroscopic binaries and report orbital parameters for them.
arxiv:astro-ph/0112477
ferroelectricity in crystals is associated with the displacement of ions or rotations of polar units. here we consider the dipole created by donor doping ( $ d ^ + $ ) and the corresponding bound polaron ( $ e ^ - $ ). a dipole of 6. 15 debye is predicted, from berry phase analysis, in the ruddlesden - popper phase of $ { \ rm sr _ 3ti _ 2o _ 7 } $. a characteristic double - well potential is formed, which persists for high doping densities. the effective hubbard $ u $ interaction can vary the defect state from metallic, a two - dimensional polaron, through to a zero - dimensional polaron. the ferroelectric - like behavior reported here is localized and distinct from conventional spontaneous lattice polarization.
arxiv:2205.11604
we address the microscopic origin of the universal three - body parameter that fixes the spectrum of three - atom systems in the efimov regime. we identify it with the van der waals two - body correlation, which causes the three - atom system to deform when the three atoms come within the distance of the van der waals length, effectively preventing them from coming closer due to the kinetic - energy cost associated with this three - body deformation. this deformation mechanism explains the universal ratio of the scattering length at the triatomic resonance to the van der waals length observed in several experiments and confirmed by numerical calculations.
arxiv:1208.3912
it has been known that direct speech - to - speech translation ( s2st ) models usually suffer from the data scarcity issue because of the limited existing parallel materials for both source and target speech. therefore to train a direct s2st system, previous works usually utilize text - to - speech ( tts ) systems to generate samples in the target language by augmenting the data from speech - to - text translation ( s2tt ). however, there is a limited investigation into how the synthesized target speech would affect the s2st models. in this work, we analyze the effect of changing synthesized target speech for direct s2st models. we find that simply combining the target speech from different tts systems can potentially improve the s2st performances. following that, we also propose a multi - task framework that jointly optimizes the s2st system with multiple targets from different tts systems. extensive experiments demonstrate that our proposed framework achieves consistent improvements ( 2. 8 bleu ) over the baselines on the fisher spanish - english dataset.
arxiv:2304.04618
in this paper, we introduce the notion of " ( n, d ) - perfect rings ", which is in some way a generalization of the notion of " s - rings ". we then give some basic results on these rings and survey the relationship between the " a ( n ) property " and the " ( n, d ) - perfect property ". finally, we investigate the " ( n, d ) - perfect property " in pullback rings.
arxiv:0811.4627
the validation of lidar - based perception of intelligent mobile systems operating in open - world applications remains a challenge due to the variability of real environmental conditions. virtual simulations allow the generation of arbitrary scenes under controlled conditions but lack physical sensor characteristics, such as intensity responses or material - dependent effects. in contrast, real - world data offers true sensor realism but provides less control over influencing factors, hindering sufficient validation. existing approaches address this problem with augmentation of real - world point cloud data by transferring objects between scenes. however, these methods do not consider validation and remain limited in controllability because they rely on empirical data. we solve these limitations by proposing point cloud recombination, which systematically augments captured point cloud scenes by integrating point clouds acquired from physical target objects measured in controlled laboratory environments, thus enabling the creation of vast amounts and varieties of repeatable, physically accurate test scenes with phenomena - aware occlusions and registered 3d meshes. using the ouster os1 - 128 rev7 sensor, we demonstrate the augmentation of real - world urban and rural scenes with humanoid targets featuring varied clothing and poses, with repeatable positioning. we show that the recombined scenes closely match real sensor outputs, enabling targeted testing, scalable failure analysis, and improved system safety. by providing controlled yet sensor - realistic data, our method enables trustworthy conclusions about the limitations of specific sensors in combination with their algorithms, e. g., for object detection.
arxiv:2505.02476
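One plausible reading of the occlusion handling when merging a lab-scanned target into a scene scan: for each angular bin of the sensor, keep only the nearest return, so scene points hidden behind the inserted object disappear. The 0.5 degree bin width and the single-return model below are simplifying assumptions of this sketch, not the Ouster's actual behavior.

```python
import numpy as np

def recombine(scene, target, bin_deg=0.5):
    """Merge a target point cloud into a scene cloud (sensor at the origin),
    keeping only the closest return per (azimuth, elevation) bin so that
    newly occluded scene points are removed."""
    pts = np.vstack([scene, target])
    r = np.linalg.norm(pts, axis=1)
    az = np.degrees(np.arctan2(pts[:, 1], pts[:, 0]))
    el = np.degrees(np.arcsin(pts[:, 2] / r))
    az_bin = np.round(az / bin_deg).astype(int)
    el_bin = np.round(el / bin_deg).astype(int)
    keep = {}
    for i, key in enumerate(zip(az_bin, el_bin)):
        if key not in keep or r[i] < r[keep[key]]:
            keep[key] = i   # nearest return wins the bin
    return pts[sorted(keep.values())]

scene = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
target = np.array([[5.0, 0.0, 0.0]])   # sits on the same ray as the first point
out = recombine(scene, target)
```

In the example, the scene point at 10 m lies behind the inserted target at 5 m on the same ray, so it is removed, while the point off to the side survives.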
in this article we consider the inviscid two - dimensional shallow water equations in a rectangle. the flow occurs near a stationary solution in the so called supercritical regime and we establish short term existence of smooth solutions for the corresponding initial and boundary value problem.
arxiv:1503.00283
expanding on the work of kemp, ratliff and shah, for any closure $ { \ rm cl } $ defined on a class of modules over a noetherian ring, we develop the theory of $ { \ rm cl } $ - prereductions of submodules. for any interior $ { \ rm i } $ on a class of $ r $ - modules, we also develop the theory of $ { \ rm i } $ - postexpansions. using the duality of epstein, r. g. and vassilev, we show that if $ { \ rm i } $ is the interior dual to $ { \ rm cl } $, then these notions are in fact dual to each other. we consider the $ { \ rm cl } $ - precore ( $ { \ rm i } $ - postcore ), the intersection of all $ { \ rm cl } $ - prereductions ( $ { \ rm i } $ - postexpansions ) of a submodule, and the $ { \ rm cl } $ - prehull ( $ { \ rm i } $ - posthull ), the sum of all $ { \ rm cl } $ - prereductions ( $ { \ rm i } $ - postexpansions ) of a submodule, and give comparisons with the $ { \ rm cl } $ - core ( $ { \ rm i } $ - hull ). we further give a classification of $ { \ rm cl } $ - prereductions of $ { \ rm cl } $ - closed ideals of a noetherian ring where $ { \ rm cl } $ is a closure with a special part.
arxiv:2303.00144
we prove an asymptotic for the moment of derivatives of quadratic twists of two distinct modular $ l $ - functions. this was previously known conditionally on grh by the work of ian petrow.
arxiv:2503.14680
active traffic management with autonomous vehicles offers the potential for reduced congestion and improved traffic flow. however, developing effective algorithms for real - world scenarios requires overcoming challenges related to infinite - horizon traffic flow and partial observability. to address these issues and further decentralize traffic management, we propose an asymmetric actor - critic model that learns decentralized cooperative driving policies for autonomous vehicles using single - agent reinforcement learning. by employing attention neural networks with masking, our approach efficiently manages real - world traffic dynamics and partial observability, eliminating the need for predefined agents or agent - specific experience buffers in multi - agent reinforcement learning. extensive evaluations across various traffic scenarios demonstrate our method ' s significant potential in improving traffic flow at critical bottleneck points. moreover, we address the challenges posed by conservative autonomous vehicle driving behaviors that adhere strictly to traffic rules, showing that our cooperative policy effectively alleviates potential slowdowns without compromising safety.
arxiv:2403.11914
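The attention-with-masking device mentioned above can be sketched in a few lines: padded (unobserved) vehicle slots get their logits set to minus infinity so they receive zero attention weight, letting one network handle a varying number of surrounding vehicles. This is a generic single-head numpy sketch, not the paper's actual architecture; the sizes are arbitrary.

```python
import numpy as np

def masked_attention(query, keys, values, mask):
    """Single-head scaled dot-product attention. mask[i] = False marks a
    padded (unobserved) slot, which is excluded from the softmax."""
    d = query.shape[-1]
    logits = keys @ query / np.sqrt(d)            # one score per slot
    logits = np.where(mask, logits, -np.inf)      # kill padded slots
    w = np.exp(logits - logits[mask].max())       # stable softmax
    w = w / w.sum()
    return w @ values, w

rng = np.random.default_rng(1)
n, d = 5, 8                                       # 5 vehicle slots, 8 features
q = rng.standard_normal(d)                        # ego-vehicle query
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))
mask = np.array([True, True, True, False, False]) # only 3 vehicles observed
ctx, w = masked_attention(q, K, V, mask)
```

Because the context vector has a fixed size regardless of how many slots are observed, no per-agent experience buffers or predefined agent counts are needed, which is the point the abstract makes.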
monocular 3d object localization in driving scenes is a crucial task, but challenging due to its ill - posed nature. estimating 3d coordinates for each pixel on the object surface holds great potential as it provides dense 2d - 3d geometric constraints for the underlying pnp problem. however, high - quality ground truth supervision is not available in driving scenes due to sparsity and various artifacts of lidar data, as well as the practical infeasibility of collecting per - instance cad models. in this work, we present neurocs, a framework that uses instance masks and 3d boxes as input to learn 3d object shapes by means of differentiable rendering, which further serves as supervision for learning dense object coordinates. our approach rests on insights in learning a category - level shape prior directly from real driving scenes, while properly handling single - view ambiguities. furthermore, we study and make critical design choices to learn object coordinates more effectively from an object - centric view. altogether, our framework leads to new state - of - the - art in monocular 3d localization that ranks 1st on the kitti - object benchmark among published monocular methods.
arxiv:2305.17763
for any lagrangean k \ " ahler submanifold $ m \ subset t ^ * { \ bbb c } ^ n $, there exists a canonical hyper k \ " ahler metric on $ t ^ * m $. a k \ " ahler potential for this metric is given by the generalized calabi ansatz of the theoretical physicists cecotti, ferrara and girardello. this correspondence provides a method for the construction of ( pseudo ) hyper k \ " ahler manifolds with large automorphism group. using it, a class of pseudo hyper k \ " ahler manifolds of complex signature $ ( 2, 2n ) $ is constructed. for any hyper k \ " ahler manifold $ n $ in this class a group of automorphisms with a codimension one orbit on $ n $ is specified. finally, it is shown that the bundle of intermediate jacobians over the moduli space of gauged calabi yau 3 - folds admits a natural pseudo hyper k \ " ahler metric of complex signature $ ( 2, 2n ) $.
arxiv:math/9607213
we explore a model introduced by cyr - racine, ge, and knox ( arxiv : 2107. 13000 ( 2 ) ) that resolves the hubble tension by invoking a " mirror world " dark sector with energy density a fixed fraction of the " ordinary " sector of lambda - cdm. although it reconciles cosmic microwave background and large - scale structure observations with local measurements of the hubble constant, the model requires a value of the primordial helium mass fraction that is discrepant with observations and with the predictions of big bang nucleosynthesis ( bbn ). we consider a variant of the model with standard helium mass fraction but with the value of the electromagnetic fine - structure constant slightly different during photon decoupling from its present value. if $ \ alpha $ at that epoch is lower than its current value by $ \ delta \ alpha \ simeq - 2 \ times 10 ^ { - 5 } $, then we can achieve the same hubble tension resolution as in cyr - racine, et al. but with a consistent helium abundance. as an example of such time - evolution, we consider a toy model of an ultra - light scalar field, with mass $ m < 4 \ times 10 ^ { - 29 } $ ev, coupled to electromagnetism, which evolves after photon decoupling and that appears to be consistent with late - time constraints on $ \ alpha $ variation and the weak equivalence principle.
arxiv:2211.03236
primarily to gaining understanding of a process or artifact in which the manner of its construction, use, or internal processes has not been made clear by its creator. patented items do not of themselves have to be reverse - engineered to be studied, for the essence of a patent is that inventors provide a detailed public disclosure themselves, and in return receive legal protection of the invention that is involved. however, an item produced under one or more patents could also include other technology that is not patented and not disclosed. indeed, one common motivation of reverse engineering is to determine whether a competitor ' s product contains patent infringement or copyright infringement. == legality == === united states === in the united states, even if an artifact or process is protected by trade secrets, reverse - engineering the artifact or process is often lawful if it has been legitimately obtained. reverse engineering of computer software often falls under both contract law as a breach of contract as well as any other relevant laws. that is because most end - user license agreements specifically prohibit it, and us courts have ruled that if such terms are present, they override the copyright law that expressly permits it ( see bowers v. baystate technologies ). according to section 103 ( f ) of the digital millennium copyright act ( 17 u. s. c. § 1201 ( f ) ), a person in legal possession of a program may reverse - engineer and circumvent its protection if that is necessary to achieve " interoperability ", a term that broadly covers other devices and programs that can interact with it, make use of it, and to use and transfer data to and from it in useful ways. a limited exemption exists that allows the knowledge thus gained to be shared and used for interoperability purposes.
=== european union === eu directive 2009 / 24 on the legal protection of computer programs, which superseded an earlier ( 1991 ) directive, governs reverse engineering in countries of the european union.
https://en.wikipedia.org/wiki/Reverse_engineering
in this paper, we compare the saturation time scales for complexity, linear entropy and entanglement negativity for two open quantum systems. our first model is a coupled harmonic oscillator, where we treat one of the oscillators as the bath. the second one is a type of caldeira leggett model, where we consider a one - dimensional free scalar field as the bath. using these open quantum systems, we discovered that both the complexity of purification and the complexity from operator state mapping are always saturated for a completely mixed state. more explicitly, the saturation time scale for both types of complexity is smaller than the saturation time scale for linear entropy. on top of this, we found that the saturation time scales for linear entropy and entanglement negativity are of the same order for the caldeira leggett model.
arxiv:2210.09268
quantum information theorems state that it is possible to exploit collective quantum resources to greatly enhance the charging power of quantum batteries ( qbs ) made of many identical elementary units. we here present and solve a model of a qb that can be engineered in solid - state architectures. it consists of $ n $ two - level systems coupled to a single photonic mode in a cavity. we contrast this collective model ( " dicke qb " ), whereby entanglement is genuinely created by the common photonic mode, to the one in which each two - level system is coupled to its own separate cavity mode ( " rabi qb " ). by employing exact diagonalization, we demonstrate the emergence of a quantum advantage in the charging power of dicke qbs, which scales like $ \ sqrt { n } $ for $ n \ gg 1 $.
arxiv:1707.04930
we investigate local properties of weak solutions to nonlocal and nonlinear kinetic equations whose prototype is given by $ $ \ partial _ t u + v \ cdot \ nabla _ x u + ( - \ delta _ v ) ^ s _ p u = f ( u ). $ $ we consider equations whose diffusion part is a ( possibly degenerate ) integro - differential operator of differentiability order $ s \ in ( 0, 1 ) $ and summability exponent $ p \ in ( 1, \ infty ) $. amongst other results, we provide an explicit local boundedness estimate by combining together a suitable sobolev embedding theorem and a fractional caccioppoli - type inequality with tail. for this, we introduce in the kinetic framework a new definition of nonlocal tail of a function and of its related tail spaces, also by establishing some useful estimates for the tail of weak solutions. armed with the aforementioned results we give a precise control of the long - range interactions arising from the nonlocal behaviour of the involved diffusion operator.
arxiv:2301.06334
measuring distances of cosmological sources such as galaxies, stars and quasars plays an increasingly critical role in modern cosmology. obtaining the optical spectrum and consequently calculating the redshift as a distance indicator could instantly classify these objects. since spectroscopic observations are not available for many galaxies, and the process of measuring the redshift is time - consuming and infeasible for large samples, machine learning ( ml ) approaches can be applied to determine the redshifts of galaxies from different features, including their photometric colors. in this paper, by using the flux magnitudes from the sloan digital sky survey ( sdss ) catalog, we develop two ml regression algorithms ( decision tree and random forest ) for estimating the redshifts, taking color indices as input features. we find that the random forest algorithm produces the optimum result for redshift prediction, and it is further improved when the dataset is limited to a subset with z $ \ le $ 2, giving the normalised standard deviation $ \ overline { \ delta z } _ { \ text { norm } } = 0. 005 $ and the standard deviation $ \ sigma _ { \ delta z } = 0. 12 $. this work shows the great potential of using the ml approach to determine the photometric redshifts of distant sources.
arxiv:2201.04391
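The regression setup described, color indices in, redshift out, can be reproduced in miniature with scikit-learn. The synthetic colors and the functional form tying them to z below are invented for illustration only; the paper trains on real SDSS magnitudes, and its quoted scatter values apply to that data, not to this toy.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
# synthetic color indices standing in for u-g, g-r, r-i, i-z (assumed ranges)
colors = rng.uniform(-0.5, 2.5, size=(n, 4))
# invented smooth color-redshift relation plus photometric noise
z = np.clip(0.4 * colors[:, 0] + 0.3 * colors[:, 1] ** 2
            + 0.05 * rng.standard_normal(n), 0.0, None)

X_train, X_test = colors[:1500], colors[1500:]
z_train, z_test = z[:1500], z[1500:]

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_train, z_train)
z_pred = rf.predict(X_test)

# normalised scatter, the figure of merit used for photometric redshifts
dz_norm = float(np.std((z_pred - z_test) / (1.0 + z_test)))
```

Swapping `RandomForestRegressor` for `sklearn.tree.DecisionTreeRegressor` reproduces the paper's other baseline; the ensemble averaging is what typically lowers the scatter.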
the one dimensional s = 1 / 2 heisenberg model with dimerization and quadrumerization is studied by means of the numerical exact diagonalization of finite size systems. using the phenomenological renormalization group and finite size scaling law, the ground state phase diagram is obtained in the isotropic case. it exhibits a variety of the ground states which contains the s = 1 haldane state, s = 1 dimer state and s = 1 / 2 dimer state as limiting cases. the gap exponent $ \ nu $ is also calculated which coincides with the value for the dimerization transition of the isotropic heisenberg chain. in the xy limit, the phase diagram is obtained analytically and the comparison is made with the isotropic case.
arxiv:cond-mat/9804149
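The numerical exact diagonalization underlying this kind of study can be illustrated on a small system: build H = sum_i J_i S_i . S_{i+1} by Kronecker products and diagonalize. The 8-site open chain and the alternating couplings 1.0 / 0.5 below are illustrative choices of mine; the paper's phase diagram additionally uses the phenomenological renormalization group and finite-size scaling, which this sketch omits.

```python
import numpy as np

def heisenberg_chain(couplings):
    """Dense Hamiltonian of an open spin-1/2 chain, H = sum_i J_i S_i . S_{i+1},
    with couplings[i] acting on the bond between sites i and i+1."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
    sy = np.array([[0, -1j], [1j, 0]]) / 2
    sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
    n = len(couplings) + 1
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for i, J in enumerate(couplings):
        for s in (sx, sy, sz):
            op = np.eye(1)
            for site in range(n):   # place s on sites i and i+1, identity elsewhere
                op = np.kron(op, s if site in (i, i + 1) else np.eye(2))
            H += J * op
    return H

# dimerized 8-site chain, alternating strong/weak bonds: expect a finite spin gap
H = heisenberg_chain([1.0, 0.5] * 3 + [1.0])
E = np.linalg.eigvalsh(H)
gap = E[1] - E[0]
```

For the dimerized chain the lowest excitation is separated by a finite gap, consistent with the gapped dimer phases in the phase diagram; tracking such gaps across system sizes is what the finite-size scaling analysis does.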