columns: text (string, lengths 1 to 3.65k) and source (string, lengths 15 to 79)
we investigate thermodynamic properties of a one - dimensional s = 1 / 2 antiferromagnetic heisenberg model coupled to a lattice distortion by a quantum monte carlo method. in particular we study how spin and lattice dimerize as a function of the temperature, which gives a fundamental process of the spin - peierls transition in higher dimensions. the degree of freedom of the lattice is taken into account adiabatically and the thermal distribution of the lattice distortion is obtained by the thermal bath algorithm. we find that the dimerization develops as the temperature decreases and it converges to the value of the dimerization of the ground state at t = 0. furthermore we find that the coupling constants of spins fluctuate quite largely at high temperature and there the thermodynamic properties deviate from those of the uniform chain. doping of non - magnetic impurities cuts the chain into short chains with open boundaries. we investigate thermodynamic properties of open chains taking relaxation of the lattice into consideration. we find that strong bonds are located at the edges and a defect of the bond alternation appears in chains with an odd number of sites, which causes enhancement of the staggered magnetic order. we find a spread - out staggered structure which indicates that the defect moves diffusively in the chain even at very low temperature.
arxiv:cond-mat/9912210
recent coding strategies for deterministic and noisy relay networks are related to the pipelining of block markov encoding. for deterministic networks, it is shown that pipelined encoding improves encoding delay, as opposed to end - to - end delay. for noisy networks, it is observed that decode - and - forward exhibits good rate scaling when the signal - to - noise ratio ( snr ) increases.
arxiv:0911.3676
high level abstractions for implementing, training, and testing deep learning ( dl ) models abound. such frameworks function primarily by abstracting away the implementation details of arbitrary neural architectures, thereby enabling researchers and engineers to focus on design. in principle, such frameworks could be " zero - cost abstractions " ; in practice, they incur translation and indirection overheads. we study at which points exactly in the engineering life - cycle of a dl model the highest costs are paid and whether they can be mitigated. we train, test, and evaluate a representative dl model using pytorch, libtorch, torchscript, and cudnn on representative datasets, comparing accuracy, execution time and memory efficiency.
arxiv:2012.07163
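as a rough illustration of the kind of measurement described above, the following python sketch times one forward pass of the same model under eager pytorch and under torchscript. the toy model, batch size and repetition count are placeholders introduced here for illustration; they are not the representative model or datasets used in the paper.

# Minimal sketch: comparing eager PyTorch vs. TorchScript inference latency.
# The model, input shape, and repetition count are illustrative, not the paper's setup.
import time
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(256, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
).eval()

example = torch.randn(64, 256)
scripted = torch.jit.script(model)          # compile the same model to TorchScript

def bench(fn, reps=1000):
    # warm up, then time repeated forward passes
    with torch.no_grad():
        for _ in range(10):
            fn(example)
        start = time.perf_counter()
        for _ in range(reps):
            fn(example)
    return (time.perf_counter() - start) / reps

print(f"eager:  {bench(model) * 1e6:.1f} us/batch")
print(f"script: {bench(scripted) * 1e6:.1f} us/batch")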
post - quantum cryptographic ( pqc ) algorithms, especially those based on the learning with errors ( lwe ) problem, have been subjected to several physical attacks in the recent past. although the attacks broadly belong to two classes - passive side - channel attacks and active fault attacks, the attack strategies vary significantly due to the inherent complexities of such algorithms. exploring further attack surfaces is, therefore, an important step for eventually securing the deployment of these algorithms. also, it is important to test the robustness of the already proposed countermeasures in this regard. in this work, we propose a new fault attack on side - channel secure masked implementation of lwe - based key - encapsulation mechanisms ( kems ) exploiting fault propagation. the attack typically originates due to an algorithmic modification widely used to enable masking, namely the arithmetic - to - boolean ( a2b ) conversion. we exploit the data dependency of the adder carry chain in a2b and extract sensitive information, albeit masking ( of arbitrary order ) being present. as a practical demonstration of the exploitability of this information leakage, we show key recovery attacks of kyber, although the leakage also exists for other schemes like saber. the attack on kyber targets the decapsulation module and utilizes belief propagation ( bp ) for key recovery. to the best of our knowledge, it is the first attack exploiting an algorithmic component introduced to ease masking rather than only exploiting the randomness introduced by masking to obtain desired faults ( as done by delvaux ). finally, we performed both simulated and electromagnetic ( em ) fault - based practical validation of the attack for an open - source first - order secure kyber implementation running on an stm32 platform.
arxiv:2401.14098
deep learning is currently the subject of intensive study. however, fundamental concepts such as representations are not formally defined - - researchers " know them when they see them " - - and there is no common language for describing and analyzing algorithms. this essay proposes an abstract framework that identifies the essential features of current practice and may provide a foundation for future developments. the backbone of almost all deep learning algorithms is backpropagation, which is simply a gradient computation distributed over a neural network. the main ingredients of the framework are thus, unsurprisingly : ( i ) game theory, to formalize distributed optimization ; and ( ii ) communication protocols, to track the flow of zeroth and first - order information. the framework allows natural definitions of semantics ( as the meaning encoded in functions ), representations ( as functions whose semantics is chosen to optimize a criterion ) and grammars ( as communication protocols equipped with first - order convergence guarantees ). much of the essay is spent discussing examples taken from the literature. the ultimate aim is to develop a graphical language for describing the structure of deep learning algorithms that backgrounds the details of the optimization procedure and foregrounds how the components interact. inspiration is taken from probabilistic graphical models and factor graphs, which capture the essential structural features of multivariate distributions.
arxiv:1509.08627
novel highly active, optically - transparent electrode catalyst containing pt, ptox, graphene oxide and stacked graphene platelet nanofibers is developed for a cathode of cu ( ii / i ) - mediated dye - sensitized solar cells.
arxiv:1804.09119
this paper analyses conversational ai multi - agent interoperability frameworks and describes the novel architecture proposed by the open voice interoperability initiative ( linux foundation ai and data ), also known briefly as ovon ( open voice network ). the new approach is illustrated, along with the main components, delineating the key benefits and use cases for deploying standard multi - modal ai agency ( or agentic ai ) communications. beginning with universal apis based on natural language, the framework establishes and enables interoperable interactions among diverse conversational ai agents, including chatbots, voicebots, videobots, and human agents. furthermore, a new discovery specification framework is introduced, designed to efficiently look up agents providing specific services and to obtain accurate information about these services through a standard manifest publication, accessible via an extended set of natural language - based apis. the main purpose of this contribution is to significantly enhance the capabilities and scalability of ai interactions across various platforms. the novel architecture for interoperable conversational ai assistants is designed to generalize, being replicable and accessible via open repositories.
arxiv:2407.19438
this paper deals with the limit cases for $s$-fractional heat flows in a cylindrical domain, with homogeneous dirichlet boundary conditions, as $s \to 0^+$ and $s \to 1^-$. to this purpose, we describe the fractional heat flows as minimizing movements of the corresponding gagliardo seminorms, with respect to the $L^2$ metric. first, we provide an abstract stability result for minimizing movements in hilbert spaces, with respect to a sequence of $\Gamma$-converging uniformly $\lambda$-convex energy functionals. then, we provide the $\Gamma$-convergence analysis of the $s$-gagliardo seminorms as $s \to 0^+$ and $s \to 1^-$, and apply the general stability result to such specific cases. as a consequence, we prove that $s$-fractional heat flows ( suitably scaled in time ) converge to the standard heat flow as $s \to 1^-$, and to a degenerate ode type flow as $s \to 0^+$. moreover, looking at the next order term in the asymptotic expansion of the $s$-fractional gagliardo seminorm, we show that suitably forced $s$-fractional heat flows converge, as $s \to 0^+$, to the parabolic flow of an energy functional that can be seen as a sort of renormalized $0$-gagliardo seminorm : the resulting parabolic equation involves the first variation of such an energy, that can be understood as a zero ( or logarithmic ) laplacian.
arxiv:2107.13828
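for reference, the two basic objects behind this construction are the $s$-gagliardo seminorm and the minimizing-movement (implicit euler) step with respect to the $L^2$ metric; the normalization below is the standard textbook one and may differ from the scaling adopted in the paper.

\[
[u]_s^2 \;=\; \int\!\!\int \frac{|u(x)-u(y)|^2}{|x-y|^{d+2s}} \, dx \, dy , \qquad 0 < s < 1 ,
\]
\[
u^{\tau}_{k+1} \;\in\; \arg\min_{v \in L^2} \Big\{ F(v) + \frac{1}{2\tau} \, \| v - u^{\tau}_{k} \|_{L^2}^2 \Big\} , \qquad k = 0, 1, 2, \dots
\]

here $F$ is the driving energy (a suitably scaled gagliardo seminorm in the cases above) and $\tau$ the time step; the minimizing movement is the limit of this discrete scheme as $\tau \to 0$.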
transferability of adversarial samples became a serious concern due to their impact on the reliability of machine learning system deployments, as they find their way into many critical applications. knowing factors that influence transferability of adversarial samples can assist experts to make informed decisions on how to build robust and reliable machine learning systems. the goal of this study is to provide insights on the mechanisms behind the transferability of adversarial samples through an attack - centric approach. this attack - centric perspective interprets how adversarial samples would transfer by assessing the impact of machine learning attacks ( that generated them ) on a given input dataset. to achieve this goal, we generated adversarial samples using attacker models and transferred these samples to victim models. we analyzed the behavior of adversarial samples on victim models and outlined four factors that can influence the transferability of adversarial samples. although these factors are not necessarily exhaustive, they provide useful insights to researchers and practitioners of machine learning systems.
arxiv:2112.01777
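the attack-centric experiment described above can be pictured with a minimal sketch: craft adversarial samples on an attacker (surrogate) model and count how often they also fool a separate victim model. the fgsm attack, toy models, epsilon value and random data below are illustrative assumptions, not the paper's actual attacks, architectures or datasets.

# Sketch: generate adversarial samples on an "attacker" model with one-step FGSM,
# then measure the fraction that also fools a separate "victim" model.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # fast gradient sign attack computed on the attacker model only
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def transfer_rate(attacker, victim, x, y, eps=0.03):
    x_adv = fgsm(attacker, x, y, eps)
    with torch.no_grad():
        fooled = victim(x_adv).argmax(dim=1) != y   # misclassification on the victim
    return fooled.float().mean().item()

# toy stand-ins for trained attacker/victim models and a labelled batch
attacker = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
victim = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
print("transfer (misclassification) rate:", transfer_rate(attacker, victim, x, y))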
in the same large - scale structures as the quasars and search for protoclusters at an early epoch.
arxiv:2304.04719
phase change materials such as Ge$_2$Sb$_2$Te$_5$ ( gst ) are ideal candidates for next - generation, non - volatile, solid - state memory due to the ability to retain binary data in the amorphous and crystal phases, and rapidly transition between these phases to write / erase information. thus, there is wide interest in using molecular modeling to study gst. recently, a gaussian approximation potential ( gap ) was trained for gst to reproduce density functional theory ( dft ) energies and forces at a fraction of the computational cost [ zhou et al. nature electronics $\mathbf{6}$, 746 - 754 ( 2023 ) ] ; however, simulations of large length and time scales are still challenging using this gap model. here we present a machine - learned ( ml ) potential for gst implemented using the atomic cluster expansion ( ace ) framework. this ace potential shows comparable accuracy to the gap potential but performs orders of magnitude faster. we train the ace potentials both directly from dft, as well as using a recently introduced indirect learning approach where the potential is trained instead from an intermediate ml potential, in this case, gap. indirect learning allows us to consider a significantly larger training set than could be generated using dft alone. we compare the directly and indirectly learned potentials and find that both reproduce the structure and thermodynamics predicted by the gap, and also match experimental measures of gst structure. the speed of the ace model, particularly when using gpu acceleration, allows us to examine repeated transitions between crystal and amorphous phases in device - scale systems with only modest computational resources.
arxiv:2411.08194
we present the modular algorithm for relativistic treatment of heavy ion interactions ( martini ), an event generator for the hard and penetrating probes in high energy nucleus - nucleus collisions. the simulation consists of a time evolution model for the soft background, such as hydrodynamics, and pythia 8. 1 to generate the hard partons and to hadronize them after the medium evolution, which is based on the mcgill - amy formalism and includes both radiative and elastic processes. martini allows for the generation of full event configurations in the high transverse momentum region. we present results for the neutral pion and photon nuclear modification factor in au + au collisions at rhic.
arxiv:0911.4470
while large language models ( llms ) are empowered with broad knowledge, their task - specific performance is often suboptimal. it necessitates fine - tuning llms with task - specific data, but such data may be inaccessible due to privacy concerns. in this paper, we propose a novel approach to enhance llms with smaller language models ( slms ) that are trained on clients using their private task - specific data. to enable mutual enhancement between llms and slms, we propose crosslm, where the slms promote the llm to generate task - specific high - quality data, and both the llm and slms are enhanced with the generated data. we evaluate crosslm using publicly accessible language models across a range of benchmark tasks. the results demonstrate that crosslm significantly enhances the task - specific performance of slms on clients and the llm on the cloud server simultaneously while preserving the llm ' s generalization capability.
arxiv:2312.05842
in this paper, we consider a multiuser multiple - input multiple - output ( mu - mimo ) communication system between a base station equipped with multiple antennas and multiple mobile users each equipped with a single antenna. the uplink scenario is considered. the uplink channels are acquired by the base station through a training phase. two linear processing schemes are considered, namely maximum - ratio combining ( mrc ) and zero - forcing ( zf ). we optimize the training period and the training energy under average and peak power constraints so that an achievable sum rate is maximized.
arxiv:1409.6059
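for concreteness, the two linear receivers compared above act on the received uplink vector through the combining matrices W_mrc = H and W_zf = H (H^H H)^{-1}, where H is the estimated channel. the numpy sketch below, with illustrative antenna counts, symbols and noise level, shows only this combining step; the training-period and training-energy optimization is not modeled here.

# Sketch of MRC and ZF combining for an uplink with M base-station antennas and K users.
import numpy as np

rng = np.random.default_rng(0)
M, K = 16, 4
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=K) / np.sqrt(2)   # QPSK symbols
noise = 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = H @ s + noise                             # received uplink signal

W_mrc = H                                     # maximum-ratio combining: match each user's channel
W_zf = H @ np.linalg.inv(H.conj().T @ H)      # zero-forcing: null inter-user interference

s_hat_mrc = W_mrc.conj().T @ y                # keeps some inter-user interference
s_hat_zf = W_zf.conj().T @ y                  # interference-free, but amplifies noise
print("ZF estimate close to transmitted symbols:", np.allclose(s_hat_zf, s, atol=0.2))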
causally interpretable meta - analysis combines information from a collection of randomized controlled trials to estimate treatment effects in a target population in which experimentation may not be possible but covariate information can be collected from a simple random sample. in such analyses, a key practical challenge is systematically missing data when some baseline covariates are not collected in all trials. here, we provide identification results for potential ( counterfactual ) outcome means and average treatment effects in the target population when covariate data are systematically missing from some of the trials in the meta - analysis. we propose three estimators for the average treatment effect in the target population, examine their asymptotic properties, and show that they have good finite - sample performance in simulation studies. we use the estimators to analyze data from two large lung cancer screening trials and target population data from the national health and nutrition examination survey ( nhanes ). to accommodate the complex survey design of the nhanes, we modify the methods to incorporate survey sampling weights and allow for clustering.
arxiv:2205.00610
we overview experimental laboratory prototypes of maze solvers. we speculate that all maze solvers implement lee algorithm by first developing a gradient of values showing a distance from any site of the maze to the destination site and then tracing a path from a given source site to the destination site. all prototypes approximate a set of many - source - one - destination paths using resistance, chemical and temporal gradients. they trace a path from a given source site to the destination site using electrical current, fluidic, growth of slime mould, marangoni flow, crawling of epithelial cells, excitation waves in chemical medium, propagating crystallisation patterns. some of the prototypes visualise the path using a stream of dye, thermal camera or glow discharge ; others require a computer to extract the path from time lapse images of the tracing. we discuss the prototypes in terms of speed, costs and durability of the path visualisation.
arxiv:1601.04672
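the two-phase procedure attributed to all these solvers (first build a distance gradient from the destination, then descend it from the source) is easy to state in code. the python sketch below is a plain grid/BFS version of lee's algorithm, given purely as a reference point for the physical prototypes discussed in the review; the maze and coordinates are made up.

# Phase 1: BFS flood from the destination builds a distance gradient.
# Phase 2: from the source, repeatedly step to a neighbour with smaller distance.
from collections import deque

def lee_solve(maze, src, dst):
    # maze: list of strings, '#' = wall, '.' = free cell
    rows, cols = len(maze), len(maze[0])
    dist = {dst: 0}
    queue = deque([dst])
    while queue:                                   # phase 1: gradient from destination
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] == '.' and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    if src not in dist:
        return None                                # destination unreachable from source
    path, cell = [src], src
    while cell != dst:                             # phase 2: trace the path by descending the gradient
        r, c = cell
        cell = min(((r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if (r + dr, c + dc) in dist), key=dist.get)
        path.append(cell)
    return path

maze = ["....#",
        ".##.#",
        ".#...",
        "...#."]
print(lee_solve(maze, (0, 0), (3, 4)))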
we study the impact of the cosmological parameter uncertainties on the measurements of primordial non - gaussianity through the large - scale non - gaussian halo bias effect. while this is not expected to be an issue for the standard lcdm model, it may not be the case for more general models that modify the large - scale shape of the power spectrum. we consider the so - called local non - gaussianity model and forecasts from planned surveys, alone and combined with a planck cmb prior. in particular, we consider euclid - and lsst - like surveys and forecast the correlations among $f_{\rm NL}$ and the running of the spectral index $\alpha_s$, the dark energy equation of state $w$, the effective sound speed of dark energy perturbations $c^2_s$, the total mass of massive neutrinos $m_\nu = \sum m_\nu$, and the number of extra relativistic degrees of freedom $N_\nu^{\rm rel}$. neglecting cmb information on $f_{\rm NL}$ and scales $k > 0.03\,h$ / mpc, we find that, if $N_\nu^{\rm rel}$ is assumed to be known, the uncertainty on cosmological parameters increases the error on $f_{\rm NL}$ by 10 to 30 % depending on the survey. thus the $f_{\rm NL}$ constraint is remarkably robust to cosmological model uncertainties. on the other hand, if $N_\nu^{\rm rel}$ is simultaneously constrained from the data, the $f_{\rm NL}$ error increases by $\sim 80\%$. finally, future surveys which provide a large sample of galaxies or galaxy clusters over a volume comparable to the hubble volume can measure primordial non - gaussianity of the local form with a marginalized 1-$\sigma$ error of the order $\Delta f_{\rm NL} \sim 2 - 5$, after combination with cmb priors for the remaining cosmological parameters. these results are competitive with cmb bispectrum constraints achievable with an ideal cmb experiment.
arxiv:1003.0456
we report the first inclusive photon measurements about mid - rapidity ( | y | < 0. 5 ) from au + au collisions at sqrt ( s _ { nn } ) = 130 gev at rhic. photon pair conversions were reconstructed from electron and positron tracks measured with the time projection chamber ( tpc ) of the star experiment. with this method, an energy resolution of delta ( e ) / e = 2 % at 0. 5 gev has been achieved. reconstructed photons have also been used to measure the transverse momentum ( pt ) spectra of pi0 mesons about mid - rapidity ( | y | < 1 ) via the pi0 - > photon photon decay channel. the fractional contribution of the pi0 - > photon photon decay to the inclusive photon spectrum decreases by 20 % + / - 5 % between pt = 1. 65 gev / c and pt = 2. 4 gev / c in the most central events, indicating that relative to pi0 - > photon photon decay the contribution of other photon sources is substantially increasing.
arxiv:nucl-ex/0401008
we consider the problem of assessing the importance of multiple variables or factors from a dataset when side information is available. in principle, using side information can allow the statistician to pay attention to variables with a greater potential, which in turn, may lead to more discoveries. we introduce an adaptive knockoff filter, which generalizes the knockoff procedure ( barber and candès, 2015 ; candès et al., 2018 ) in that it uses both the data at hand and side information to adaptively order the variables under study and focus on those that are most promising. adaptive knockoffs controls the finite - sample false discovery rate ( fdr ) and we demonstrate its power by comparing it with other structured multiple testing methods. we also apply our methodology to real genetic data in order to find associations between genetic variants and various phenotypes such as crohn ' s disease and lipid levels. here, adaptive knockoffs makes more discoveries than reported in previous studies on the same datasets.
arxiv:2001.07835
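for readers unfamiliar with the knockoff filter, the selection step it builds on works as follows: given importance statistics W_j that tend to be positive for true signals and symmetric around zero for nulls, choose the smallest threshold whose estimated false discovery proportion is below the target level q. the python sketch below implements that (knockoff+) threshold on made-up statistics; the adaptive ordering by side information, which is the paper's contribution, is not shown.

# Sketch of the knockoff+ selection rule of Barber & Candès (2015) on toy W statistics.
import numpy as np

def knockoff_select(W, q=0.1):
    W = np.asarray(W, dtype=float)
    thresholds = np.sort(np.abs(W[W != 0]))
    for t in thresholds:
        # estimated false discovery proportion at threshold t (knockoff+ version)
        fdp_hat = (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return np.where(W >= t)[0], t
    return np.array([], dtype=int), np.inf      # nothing can be selected at this level

W = np.array([4.2, -0.3, 2.8, 0.1, 1.5, 3.9, 0.7, -0.2, 5.1, 1.2])
selected, t = knockoff_select(W, q=0.2)
print("selected variables:", selected, "threshold:", t)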
combination of power lines provides excess capacity. circuit breakers disconnect a power line when monitors detect an overload. power is redistributed across the remaining lines. at the toronto airport, there are 4 redundant electrical lines. each of the 4 lines supplies enough power for the entire airport. a spot network substation uses reverse current relays to open breakers to lines that fail, but lets power continue to flow to the airport. electrical power systems use power scheduling to reconfigure active redundancy. computing systems adjust the production output of each generating facility when other generating facilities are suddenly lost. this prevents blackout conditions during major events such as an earthquake. = = disadvantages = = charles perrow, author of normal accidents, has said that sometimes redundancies backfire and produce less, not more reliability. this may happen in three ways : first, redundant safety devices result in a more complex system, more prone to errors and accidents. second, redundancy may lead to shirking of responsibility among workers. third, redundancy may lead to increased production pressures, resulting in a system that operates at higher speeds, but less safely. = = voting logic = = voting logic uses performance monitoring to determine how to reconfigure individual components so that operation continues without violating specification limitations of the overall system. voting logic often involves computers, but systems composed of items other than computers may be reconfigured using voting logic. circuit breakers are an example of a form of non - computer voting logic. the simplest voting logic in computing systems involves two components : primary and alternate. they both run similar software, but the output from the alternate remains inactive during normal operation. the primary monitors itself and periodically sends an activity message to the alternate as long as everything is ok. all outputs from the primary stop, including the activity message, when the primary detects a fault. the alternate activates its output and takes over from the primary after a brief delay when the activity message ceases. errors in voting logic can cause both outputs to be active or inactive at the same time, or cause outputs to flutter on and off. a more reliable form of voting logic involves an odd number of three devices or more. all perform identical functions and the outputs are compared by the voting logic. the voting logic establishes a majority when there is a disagreement, and the majority will act to deactivate the output from other device ( s ) that disagree. a single fault will not interrupt normal operation. this technique is used with avionics
https://en.wikipedia.org/wiki/Redundancy_(engineering)
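a minimal sketch of the "odd number of devices" voting logic described above: three redundant channels produce a value, the voter passes the majority value through and flags the dissenting channel, so a single fault does not interrupt operation. the python code is purely illustrative and is not tied to any particular avionics or power-system implementation.

# Toy majority voter over an odd number of redundant channels.
from collections import Counter

def majority_vote(outputs):
    # outputs: dict mapping channel name -> value produced this cycle
    counts = Counter(outputs.values())
    value, votes = counts.most_common(1)[0]
    if votes <= len(outputs) // 2:
        raise RuntimeError("no majority - fault cannot be masked")
    dissenters = [ch for ch, v in outputs.items() if v != value]
    return value, dissenters

# one channel ("B") produces a faulty reading; the voter masks it and flags the channel
value, dissenters = majority_vote({"A": 42, "B": 17, "C": 42})
print("voted output:", value, "| deactivate:", dissenters)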
we review the spring 2022 status of the current $b$ anomalies and their possible interpretation in terms of new physics. we also discuss the discovery potential of targeted lhc and future collider searches for the underlying new particles and their complementarity with low - energy flavour observables.
arxiv:2207.07354
end - to - end learning of communication systems with neural networks and particularly autoencoders is an emerging research direction which gained popularity in the last year. in this approach, neural networks learn to simultaneously optimize encoding and decoding functions to establish reliable message transmission. in this paper, this line of thinking is extended to communication scenarios in which an eavesdropper must further be kept ignorant about the communication. the secrecy of the transmission is achieved by utilizing a modified secure loss function based on cross - entropy which can be implemented with state - of - the - art machine - learning libraries. this secure loss function approach is applied in a gaussian wiretap channel setup, for which it is shown that the neural network learns a trade - off between reliable communication and information secrecy by clustering learned constellations. as a result, an eavesdropper with higher noise cannot distinguish between the symbols anymore.
arxiv:1810.12655
simulating vibrationally resolved electronic spectra of anharmonic systems, especially those involving double - well potential energy surfaces, often requires expensive quantum dynamics methods. here, we explore the applicability and limitations of the recently proposed single - hessian thawed gaussian approximation for the simulation of spectra of systems with double - well potentials, including 1, 2, 4, 5 - tetrafluorobenzene, ammonia, phosphine, and arsine. this semiclassical wavepacket approach is shown to be more robust and to provide more accurate spectra than the conventional harmonic approximation. specifically, we identify two cases in which the gaussian wavepacket method is especially useful due to the breakdown of the harmonic approximation : ( i ) when the nuclear wavepacket is initially at the top of the potential barrier but delocalized over both wells, e. g., along a low - frequency mode, and ( ii ) when the wavepacket has enough energy to classically go over the low potential energy barrier connecting the two wells. the method is efficient and requires only a single classical ab initio molecular dynamics trajectory, in addition to the data required to compute the harmonic spectra. we also present an improved algorithm for computing the wavepacket autocorrelation function, which guarantees that the evaluated correlation function is continuous for arbitrary size of the time step.
arxiv:2201.05660
currently, medical image domain translation operations show a high demand from researchers and clinicians. amongst other capabilities, this task allows the generation of new medical images with sufficiently high image quality, making them clinically relevant. deep learning ( dl ) architectures, most specifically deep generative models, are widely used to generate and translate images from one domain to another. the proposed framework relies on an adversarial denoising diffusion model ( ddm ) to synthesize echocardiography images and perform domain translation. contrary to generative adversarial networks ( gans ), ddms are able to generate high quality image samples with a large diversity. if a ddm is combined with a gan, this ability to generate new data is completed at an even faster sampling time. in this work we trained an adversarial ddm combined with a gan to learn the reverse denoising process, relying on a guide image, making sure relevant anatomical structures of each echocardiography image were kept and represented on the generated image samples. for several domain translation operations, the results verified that such generative model was able to synthesize high quality image samples : mse : 11. 50 + / - 3. 69, psnr ( db ) : 30. 48 + / - 0. 09, ssim : 0. 47 + / - 0. 03. the proposed method showed high generalization ability, introducing a framework to create echocardiography images suitable to be used for clinical research purposes.
arxiv:2403.04612
the representation ring of an affine algebraic group scheme can be endowed with the structure of a ( special ) $\lambda$-ring. we show that the same is true for the ring of symmetric representations, i. e. for the grothendieck - witt ring of the representation category, for any affine algebraic group scheme over a field of characteristic not two.
arxiv:1308.0796
the standard setting of quantum computation for continuous problems uses deterministic queries and the only source of randomness for quantum algorithms is through measurement. this setting is related to the worst case setting on a classical computer in the sense that the number of qubits needed to solve a continuous problem must be at least equal to the logarithm of the worst case information complexity of this problem. since the number of qubits must be finite, we cannot solve continuous problems on a quantum computer with infinite worst case information complexity. this can even happen for continuous problems with small randomized complexity on a classical computer. a simple example is integration of bounded continuous functions. to overcome this bad property that limits the power of quantum computation for continuous problems, we study the quantum setting in which randomized queries are allowed. this type of query is used in shor ' s algorithm. the quantum setting with randomized queries is related to the randomized classical setting in the sense that the number of qubits needed to solve a continuous problem must be at least equal to the logarithm of the randomized information complexity of this problem.
arxiv:quant-ph/0601196
are studied in chemistry are usually the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together. such behaviors are studied in a chemistry laboratory. the chemistry laboratory stereotypically uses various forms of laboratory glassware. however glassware is not central to chemistry, and a great deal of experimental ( as well as applied / industrial ) chemistry is done without it. a chemical reaction is a transformation of some substances into one or more different substances. the basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. it can be symbolically depicted through a chemical equation, which usually involves atoms as subjects. the number of atoms on the left and the right in the equation for a chemical transformation is equal. ( when the number of atoms on either side is unequal, the transformation is referred to as a nuclear reaction or radioactive decay. ) the type of chemical reactions a substance may undergo and the energy changes that may accompany it are constrained by certain basic rules, known as chemical laws. energy and entropy considerations are invariably important in almost all chemical studies. chemical substances are classified in terms of their structure, phase, as well as their chemical compositions. they can be analyzed using the tools of chemical analysis, e. g. spectroscopy and chromatography. scientists engaged in chemical research are known as chemists. most chemists specialize in one or more sub - disciplines. several concepts are essential for the study of chemistry ; some of them are : = = = matter = = = in chemistry, matter is defined as anything that has rest mass and volume ( it takes up space ) and is made up of particles. the particles that make up matter have rest mass as well – not all particles have rest mass, such as the photon. matter can be a pure chemical substance or a mixture of substances. = = = = atom = = = = the atom is the basic unit of chemistry. it consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. the nucleus is made up of positively charged protons and uncharged neutrons ( together called nucleons ), while the electron cloud consists of negatively charged electrons which orbit the nucleus. in a neutral atom, the negatively charged electrons balance out the positive charge of the protons. the nucleus is dense ; the mass of a nucleon is approximately 1, 836 times that of an electron, yet the radius of an atom is about 10, 000 times that of its nucleus. the atom
https://en.wikipedia.org/wiki/Chemistry
machine unlearning has emerged as an effective strategy for forgetting specific information in the training data. however, with the increasing integration of visual data, privacy concerns in vision language models ( vlms ) remain underexplored. to address this, we introduce facial identity unlearning benchmark ( fiubench ), a novel vlm unlearning benchmark designed to robustly evaluate the effectiveness of unlearning algorithms under the right to be forgotten setting. specifically, we formulate the vlm unlearning task via constructing the fictitious facial identity vqa dataset and apply a two - stage evaluation pipeline that is designed to precisely control the sources of information and their exposure levels. in terms of evaluation, since vlm supports various forms of ways to ask questions with the same semantic meaning, we also provide robust evaluation metrics including membership inference attacks and carefully designed adversarial privacy attacks to evaluate the performance of algorithms. through the evaluation of four baseline vlm unlearning algorithms within fiubench, we find that all methods remain limited in their unlearning performance, with significant trade - offs between model utility and forget quality. furthermore, our findings also highlight the importance of privacy attacks for robust evaluations. we hope fiubench will drive progress in developing more effective vlm unlearning algorithms.
arxiv:2411.03554
in this paper we explore the possibility of using transition edge sensor ( tes ) detectors in multi - mode configuration in the focal plane of the short wavelength instrument for the polarization explorer ( swipe ) of the balloon - borne polarimeter large scale polarization explorer ( lspe ) for the cosmic microwave background ( cmb ) polarization. this study is motivated by the fact that maximizing the sensitivity of tes bolometers, under the augmented background due to the multi - mode design, requires a non trivial choice of detector parameters. we evaluate the best parameter combination taking into account scanning strategy, noise constraints, saturation power and operating temperature of the cryostat during the flight.
arxiv:1602.07744
in this work, a statistical analysis of the distribution of daily fluctuations of the ipc, the mexican stock market index is presented. a sample of the ipc covering the 13 - year period 04 / 19 / 1990 - 08 / 21 / 2003 was analyzed and the cumulative probability distribution of its daily logarithmic variations studied. results showed that the cumulative distribution function for extreme variations, can be described by a pareto - levy model with shape parameters alpha = 3. 634 + - 0. 272 and alpha = 3. 540 + - 0. 278 for its positive and negative tails respectively. this result is consistent with previous studies, where it has been found that 2. 5 < alpha < 4 for other financial markets worldwide.
arxiv:cond-mat/0312413
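a generic way to reproduce this kind of tail estimate is the hill estimator applied separately to the positive and negative tails of the daily log returns. the python sketch below uses synthetic heavy-tailed (student-t) returns as a stand-in for the ipc series, and the hill estimator is only one possible choice, not necessarily the fitting procedure used in the paper.

# Sketch: tail-exponent (alpha) estimate for daily log returns via the Hill estimator.
import numpy as np

def hill_alpha(returns, tail_fraction=0.05):
    # use the largest `tail_fraction` of positive values as the tail sample
    x = np.sort(returns[returns > 0])[::-1]
    k = max(int(tail_fraction * len(x)), 2)
    tail = x[:k]
    return 1.0 / np.mean(np.log(tail[:-1] / tail[-1]))

rng = np.random.default_rng(1)
log_returns = 0.01 * rng.standard_t(df=3.5, size=3300)   # ~13 years of synthetic daily data
print("positive-tail alpha ~", round(hill_alpha(log_returns), 2))
print("negative-tail alpha ~", round(hill_alpha(-log_returns), 2))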
we study the direct detection rate for susy cold dark matter ( cdm ) predicted by the minimal supersymmetric standard model with universal boundary conditions and large values of $\tan\beta$. the relic abundance of the lightest supersymmetric particle ( lsp ), assumed to be approximately a bino, is obtained by including its coannihilations with the next - to - lightest supersymmetric particle ( nlsp ), which is the lightest s - tau. we find detectable rates in the currently planned experiments for a sector of the parameter space consistent with the cosmological constraint on the lsp relic abundance and the ones imposed by $b \to s \gamma$ and the higgs searches.
arxiv:hep-ph/0105115
we show that our previous work on galilei and carroll gravity, apt for particles, can be generalized to galilei and carroll gravity theories adapted to p - branes ( p = 0, 1, 2,... ). within this wider brane perspective, we make use of a formal map, given in the literature, between the corresponding p - brane carroll and galilei algebras where the index describing the directions longitudinal ( transverse ) to the galilei brane is interchanged with the index covering the directions transverse ( longitudinal ) to the carroll brane with the understanding that the time coordinate is always among the longitudinal directions. this leads among other things in 3d to a map between galilei particles and carroll strings and in 4d to a similar map between galilei strings and carroll strings. we show that this formal map extends to the corresponding lie algebra expansion of the poincaré algebra and, therefore, to several extensions of the carroll and galilei algebras including central extensions. we use this formal map to construct several new examples of carroll gravity actions. furthermore, we discuss the symmetry between carroll and galilei at the level of the p - brane sigma model action and apply this formal symmetry to give several examples of 3d and 4d particles and strings in a curved carroll background.
arxiv:2003.03062
the relatively warm climate found in north - western europe is due to the gulf stream that circulates warm saline water from southern latitudes to europe. in the north atlantic ocean the stream gives out a large amount of heat, cools down and sinks to the bottom to complete the thermohaline circulation. there is considerable debate on the stability of the stream to inputs of fresh water from the melting ice in greenland and the arctic. if the circulation is switched off, it will have a massive impact on the climate of europe. the intergovernmental panel on climate change ( ipcc ) has warned of this danger in its recent report. our aim is to model the thermohaline circulation at the point where it sinks in the north - atlantic. we create a two dimensional discrete map modeling the salinity gradient and vertical velocity of the stream. we examine how a perturbation in the form of fresh water release can destabilise the circulation by pushing the velocity below a certain threshold.
arxiv:0805.2375
this paper focuses on similarity caching systems, in which a user request for an object $o$ that is not in the cache can be ( partially ) satisfied by a similar stored object $o'$, at the cost of a loss of user utility. similarity caching systems can be effectively employed in several application areas, like multimedia retrieval, recommender systems, genome study, and machine learning training / serving. however, despite their relevance, the behavior of such systems is far from being well understood. in this paper, we provide a first comprehensive analysis of similarity caching in the offline, adversarial, and stochastic settings. we show that similarity caching raises significant new challenges, for which we propose the first dynamic policies with some optimality guarantees. we evaluate the performance of our schemes under both synthetic and real request traces.
arxiv:1912.03888
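the basic mechanism of a similarity cache can be sketched in a few lines: a request is served by the closest stored object if its dissimilarity is below a threshold, and otherwise the exact object is fetched and inserted. the python sketch below uses LRU eviction and a scalar dissimilarity purely for illustration; the dynamic policies with optimality guarantees proposed in the paper are more involved than this.

# Toy similarity cache: approximate hits within a dissimilarity threshold, LRU eviction.
from collections import OrderedDict

class SimilarityCache:
    def __init__(self, capacity, dissimilarity, threshold):
        self.capacity, self.d, self.threshold = capacity, dissimilarity, threshold
        self.store = OrderedDict()                  # object -> payload, in LRU order

    def get(self, o, fetch):
        # approximate hit: closest cached object within the dissimilarity threshold
        if self.store:
            best = min(self.store, key=lambda o2: self.d(o, o2))
            if self.d(o, best) <= self.threshold:
                self.store.move_to_end(best)
                return self.store[best], self.d(o, best)   # payload + utility loss
        payload = fetch(o)                          # exact miss: fetch and insert
        self.store[o] = payload
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)          # evict least recently used
        return payload, 0.0

cache = SimilarityCache(capacity=2, dissimilarity=lambda a, b: abs(a - b), threshold=0.5)
print(cache.get(1.0, fetch=lambda o: f"content({o})"))    # exact miss, fetched
print(cache.get(1.3, fetch=lambda o: f"content({o})"))    # approximate hit on 1.0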
this paper presents how photonics associated with new arising detection technologies is able to provide fully integrated instrument for coherent beam combination applied to astrophysical interferometry. the feasibility and operation of on - chip coherent beam combiners has been already demonstrated using various interferometric combination schemes. more recently we proposed a new detection principle aimed at directly sampling and extracting the spectral information of an input signal together with its flux level measurement. the so - called swifts demonstrated concept that stands for stationary - wave integrated fourier transform spectrometer, provides full spectral and spatial information recorded simultaneously thanks to a motionless detecting device. due to some newly available detection principles considered for the implementation of the swifts concept, some technologies can even provide photo - counting operation that brought a significant extension of the interferometry domain of investigation in astrophysics. the proposed concept is applicable to most of the interferometric instrumental modes including fringe tracking, fast and sensitive detection, fourier spectral reconstruction and also to manage a large number of incoming beams. the paper presents three practical implementations, two dealing with pair - wise integrated optics beam combinations and the third one with an all - in - one 8 beam combination. in all cases the principles turned into a pair wise baseline coding after proper data processing.
arxiv:0902.1688
we have obtained deep infrared $J$ and $K$ band observations of five fields located in the large magellanic cloud ( lmc ) bar with the eso new technology telescope equipped with the sofi infrared camera. in our fields, 65 rr lyrae stars catalogued by the ogle collaboration were identified. using different theoretical and empirical calibrations of the period - luminosity - metallicity relation, we find consistent lmc distance moduli values. since the observed fields are situated very close to the center of the lmc, the correction for the tilt of the lmc bar with respect to the line of sight is negligible. our adopted best true distance modulus to the lmc of $18.58 \pm 0.03$ ( statistical ) $\pm 0.11$ ( systematic ) mag agrees very well with most independent determinations to this galaxy.
arxiv:0804.3333
the article describes our submission to semeval 2019 task 8 on fact - checking in community forums. the systems under discussion participated in subtask a : decide whether a question asks for factual information, opinion / advice or is just socializing. our primary submission was ranked as the second one among all participants in the official evaluation phase. the article presents our primary solution : deeply regularized residual neural network ( drr nn ) with universal sentence encoder embeddings. this is followed by a description of two contrastive solutions based on ensemble methods.
arxiv:1906.01515
neutrinos offer a window to physics beyond the standard model. in particular, high - energy astrophysical neutrinos, with tev - pev energies, may provide evidence of new, " secret " neutrino - neutrino interactions that are stronger than ordinary weak interactions. during their propagation over cosmological distances, high - energy neutrinos could interact with the cosmic neutrino background via secret interactions, developing characteristic energy - dependent features in their observed energy distribution. for the first time, we look for signatures of secret neutrino interactions in the diffuse flux of high - energy astrophysical neutrinos, using 6 years of publicly available icecube high energy starting events ( hese ). we find no significant evidence for secret neutrino interactions, but place competitive upper limits on the coupling strength of the new mediator through which they occur, in the mediator mass range of 1 - 100 mev.
arxiv:2001.04994
we introduce multivariate circulant singular spectrum analysis ( m - cissa ) to provide a comprehensive framework to analyze fluctuations, extracting the underlying components of a set of time series, disentangling their sources of variation and assessing their relative phase or cyclical position at each frequency. our novel method is non - parametric and can be applied to series out of phase, highly nonlinear and modulated both in frequency and amplitude. we prove a uniqueness theorem that in the case of common information and without the need of fitting a factor model, allows us to identify common sources of variation. this technique can be quite useful in several fields such as climatology, biometrics, engineering or economics among others. we show the performance of m - cissa through a synthetic example of latent signals modulated both in amplitude and frequency and through the real data analysis of energy prices to understand the main drivers and co - movements of primary energy commodity prices at various frequencies that are key to assess energy policy at different time horizons.
arxiv:2007.07561
the widely used crystal structures for both heptazine - based and triazine - based two - dimensional ( 2d ) graphitic carbon nitride ( g-C$_3$N$_4$ ) are the flat p - 6m2 configurations. however, the experimentally synthesized 2d g-C$_3$N$_4$ possesses thicknesses ranging from 0. 2 to 0. 5 nm, indicating that the theoretically used flat p - 6m2 configurations are not the correct ground states. in this work, we propose three new corrugated structures p321, p3m1 and pca21 with energies of 66 ( 86 ), 77 ( 87 ) and 78 ( 89 ) mev / atom lower than that of the corresponding heptazine - based ( triazine - based ) g-C$_3$N$_4$ in the flat p - 6m2 configuration, respectively. these corrugated structures have very similar periodic patterns to the flat p - 6m2 ones and they are difficult to distinguish from each other according to their top - views. the optimized thicknesses of the three corrugated structures, ranging from 1. 347 to 3. 142 Å, are in good agreement with the experimental results. the first - principles results show that these corrugated structural candidates are also semiconductors with band gaps slightly larger than those of the correspondingly flat p - 6m2 ones. furthermore, they also possess suitable band edge positions for sun - light - driven water - splitting at both $pH = 0$ and $pH = 7$ environments. our results show that these three new structures are more promising candidates for the experimentally synthesized g-C$_3$N$_4$.
arxiv:2002.06995
the cross - ratios do not uniquely fix the class of conformally equivalent configurations of null polygons. in view of applications to wilson loops and scattering amplitudes we characterise all conformal classes of null hexagon configurations belonging to given points in cross - ratio space. at first this is done for the ordered set of vertices. including the edges, we then investigate the equivalence classes under conformal transformations for null hexagons. this is done both for the set of null hexagons closed in finite domains of minkowski space as well as for the set including those closed via infinity.
arxiv:1211.5537
the attractive fermi - hubbard model stands out as a simple model for studying the pairing and superconductivity of fermions on a lattice. in this article, we apply several many - body theories in the three - dimensional attractive hubbard model. specifically, we compare the results of various gw methods with dqmc simulations and observe that they provide reliable results in the weak to intermediate coupling regime. the critical exponents also agree well with the accurate results obtained from the 3d xy model. in the superconducting phase, the post - gw method significantly improves the description of green ' s functions and density of states. additionally, we propose a method to determine the temperature at which the pseudogap appears.
arxiv:2502.11527
ultrahigh - dimensional variable selection plays an increasingly important role in contemporary scientific discoveries and statistical research. among others, fan and lv [ j. r. stat. soc. ser. b stat. methodol. 70 ( 2008 ) 849 - 911 ] propose an independent screening framework by ranking the marginal correlations. they showed that the correlation ranking procedure possesses a sure independence screening property within the context of the linear model with gaussian covariates and responses. in this paper, we propose a more general version of independent learning by ranking the maximum marginal likelihood estimates or the maximum marginal likelihood itself in generalized linear models. we show that the proposed methods, with fan and lv [ j. r. stat. soc. ser. b stat. methodol. 70 ( 2008 ) 849 - 911 ] as a very special case, also possess the sure screening property with vanishing false selection rate. the conditions under which the independence learning possesses a sure screening property are surprisingly simple. this justifies the applicability of such a simple method in a wide spectrum. we quantify explicitly the extent to which the dimensionality can be reduced by independence screening, which depends on the interactions of the covariance matrix of covariates and true parameters. simulation studies are used to illustrate the utility of the proposed approaches. in addition, we establish an exponential inequality for the quasi - maximum likelihood estimator which is useful for high - dimensional statistical learning.
arxiv:0903.5255
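the screening idea itself is simple enough to sketch: fit a one-covariate model for every feature, rank features by the marginal (log-)likelihood, and keep the top ones. the python sketch below does this for logistic regression on synthetic data, using scikit-learn with a very weak penalty as a stand-in for the marginal mle; it illustrates only the ranking step, not the paper's theory or exact estimator.

# Sketch of marginal-likelihood screening for a generalized linear model (logistic case).
import numpy as np
from sklearn.linear_model import LogisticRegression

def marginal_screen(X, y, keep):
    scores = []
    for j in range(X.shape[1]):
        xj = X[:, [j]]
        model = LogisticRegression(C=1e6).fit(xj, y)   # effectively unpenalized marginal fit
        p = model.predict_proba(xj)[:, 1]
        scores.append(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))  # marginal log-likelihood
    return np.argsort(scores)[::-1][:keep]              # keep the highest-ranked features

rng = np.random.default_rng(0)
n, p = 200, 1000
X = rng.standard_normal((n, p))
logits = 2.0 * X[:, 0] - 1.5 * X[:, 3]                   # only features 0 and 3 are truly active
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))
print("top 5 features by marginal likelihood:", marginal_screen(X, y, keep=5))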
cultural learning is a unique human capacity essential for a wide range of adaptations. researchers have argued that folktales have the pedagogical function of transmitting the essential information for the environment. the most important knowledge for foraging and pastoral society is folk - zoological knowledge, such as the predator - prey relationship among wild animals, or between wild and domesticated animals. here, we analysed the descriptions of the 382 animal folktales using the natural language processing method and descriptive statistics listed in a worldwide tale - type index ( aarne - thompson - uther type index ). our analyses suggested that first, the predator - prey relationship frequently appeared in a co - occurrent animal pair within a folktale ( e. g., cat and mouse or wolf and pig ), and second, the motif of ' deception ', describing the antagonistic behaviour among animals, appeared relatively higher in ' wild and domestic animals ' and ' wild animals ' than other types. furthermore, the motif of ' deception ' appeared more frequently in pairs, corresponding to the predator - prey relationship. these results corresponded with the hypothesis that the combination of animal characters and what happens in stories represented relationships in the real world. the present study demonstrated that the combination of quantitative methods and qualitative data broaden our understanding of the evolutionary aspects of human cultures.
arxiv:1907.03969
information and communications technology ( ict ) is an extensional term for information technology ( it ) that stresses the role of unified communications and the integration of telecommunications ( telephone lines and wireless signals ) and computers, as well as necessary enterprise software, middleware, storage and audiovisual, that enable users to access, store, transmit, understand and manipulate information. ict is also used to refer to the convergence of audiovisuals and telephone networks with computer networks through a single cabling or link system. there are large economic incentives to merge the telephone networks with the computer network system using a single unified system of cabling, signal distribution, and management. ict is an umbrella term that includes any communication device, encompassing radio, television, cell phones, computer and network hardware, satellite systems and so on, as well as the various services and appliances with them such as video conferencing and distance learning. ict also includes analog technology, such as paper communication, and any mode that transmits communication. ict is a broad subject and the concepts are evolving. it covers any product that will store, retrieve, manipulate, process, transmit, or receive information electronically in a digital form ( e. g., personal computers including smartphones, digital television, email, or robots ). skills framework for the information age is one of many models for describing and managing competencies for ict professionals in the 21st century. = = etymology = = the phrase " information and communication technologies " has been used by academic researchers since the 1980s. the abbreviation " ict " became popular after it was used in a report to the uk government by dennis stevenson in 1997, and then in the revised national curriculum for england, wales and northern ireland in 2000. however, in 2012, the royal society recommended that the use of the term " ict " should be discontinued in british schools " as it has attracted too many negative connotations ". from 2014, the national curriculum has used the word computing, which reflects the addition of computer programming into the curriculum. variations of the phrase have spread worldwide. the united nations has created a " united nations information and communication technologies task force " and an internal " office of information and communications technology ". = = monetization = = the money spent on it worldwide has been estimated as us $ 3. 8 trillion in 2017 and has been growing at less than 5 % per year since 2009. the estimated 2018 growth of the entire ict is 5 %. the biggest growth of 16 % is expected in the area of new technologies ( iot, robotics,
https://en.wikipedia.org/wiki/Information_and_communications_technology
gravitation as a better mathematical model. there is still a philosophical debate whether mathematics is a science. however, in practice, mathematicians are typically grouped with scientists, and mathematics shares much in common with the physical sciences. like them, it is falsifiable, which means in mathematics that if a result or a theory is wrong, this can be proved by providing a counterexample. similarly as in science, theories and results ( theorems ) are often obtained from experimentation. in mathematics, the experimentation may consist of computation on selected examples or of the study of figures or other representations of mathematical objects ( often mind representations without physical support ). for example, when asked how he came about his theorems, gauss once replied " durch planmassiges tattonieren " ( through systematic experimentation ). however, some authors emphasize that mathematics differs from the modern notion of science by not relying on empirical evidence. = = = = unreasonable effectiveness = = = = the unreasonable effectiveness of mathematics is a phenomenon that was named and first made explicit by physicist eugene wigner. it is the fact that many mathematical theories ( even the " purest " ) have applications outside their initial object. these applications may be completely outside their initial area of mathematics, and may concern physical phenomena that were completely unknown when the mathematical theory was introduced. examples of unexpected applications of mathematical theories can be found in many areas of mathematics. a notable example is the prime factorization of natural numbers that was discovered more than 2, 000 years before its common use for secure internet communications through the rsa cryptosystem. a second historical example is the theory of ellipses. they were studied by the ancient greek mathematicians as conic sections ( that is, intersections of cones with planes ). it was almost 2, 000 years later that johannes kepler discovered that the trajectories of the planets are ellipses. in the 19th century, the internal development of geometry ( pure mathematics ) led to definition and study of non - euclidean geometries, spaces of dimension higher than three and manifolds. at this time, these concepts seemed totally disconnected from the physical reality, but at the beginning of the 20th century, albert einstein developed the theory of relativity that uses fundamentally these concepts. in particular, spacetime of special relativity is a non - euclidean space of dimension four, and spacetime of general relativity is a ( curved ) manifold of dimension four. a striking aspect of the interaction between mathematics and physics is when mathematics drives research in physics. this is illustrated by the
https://en.wikipedia.org/wiki/Philosophy_of_mathematics
event reweighting has been implemented in the nuwro neutrino event generator for a number of free theory parameters in the interaction model. event reweighting is a key analysis technique, used to efficiently study the effect of neutrino interaction model uncertainties. this opens up the possibility for nuwro to be used as a primary event generator by experimental analysis groups. a preliminary model tuning to anl and bnl data of quasi - elastic and single pion production events was performed to validate the reweighting engine.
arxiv:1610.07053
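conceptually, event reweighting assigns each stored event a weight equal to the ratio of its probability under the new parameter values to its probability under the nominal ones, so predictions can be updated without regenerating events. the python sketch below uses a toy one-parameter event density as a stand-in for a generator's differential cross section; it is not nuwro's actual interaction model, tuning procedure or api.

# Generic sketch of event reweighting: w = P(event | theta_new) / P(event | theta_nominal).
import numpy as np

def model_density(q2, m_a):
    # toy dependence of the event rate on an axial-mass-style parameter m_a (illustrative only)
    return np.exp(-q2 / m_a**2) / m_a**2

rng = np.random.default_rng(0)
m_nominal, m_new = 1.0, 1.2
q2_events = rng.exponential(scale=m_nominal**2, size=100_000)   # events generated at the nominal value

weights = model_density(q2_events, m_new) / model_density(q2_events, m_nominal)
print("mean reweighting factor:", weights.mean().round(3))
print("reweighted <Q^2>:", np.average(q2_events, weights=weights).round(3),
      "vs nominal <Q^2>:", q2_events.mean().round(3))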
we study paycheck optimization, which examines how to allocate income in order to achieve several competing financial goals. for paycheck optimization, a quantitative methodology is missing, due to a lack of a suitable problem formulation. to deal with this issue, we formulate the problem as a utility maximization problem. the proposed formulation is able to ( i ) unify different financial goals ; ( ii ) incorporate user preferences regarding the goals ; ( iii ) handle stochastic interest rates. the proposed formulation also facilitates an end - to - end reinforcement learning solution, which is implemented on a variety of problem settings.
arxiv:2403.06011
small and medium enterprises ( smes ) are increasingly vulnerable to cyber threats due to limited resources and cybersecurity expertise, in addition to an increasingly hostile cyber threat environment at national and international levels. this study aims to improve the cyber resilience amongst smes by developing a national risk assessment tool. this research is guided by three key questions : 1. what current international sme risk assessment tools are available and supported or endorsed by national cybersecurity centres? 2. how can a risk assessment tool be created that is accessible to sme owners with little to no cybersecurity knowledge? 3. what are the key areas of cybersecurity risks for smes? to answer these questions, a comprehensive review of existing risk assessment tools was carried out. through iterative collaboration with smes, the development of a user - friendly tool that simplifies risk for non - expert users was made possible.
arxiv:2408.16124
although the data - driven analysis of football players ' performance has been developed for years, most research only focuses on on - ball events such as shots and passes, while off - ball movement remains a little - explored area in this domain. players ' contributions to the whole match are therefore evaluated unfairly : those who have more chances to score goals earn more credit than others, while the indirect and less noticeable impact of continuous movement is ignored. this research presents a novel deep - learning network architecture which is capable of predicting the potential end location of passes and how players ' movement before the pass affects the final outcome. after analysing more than 28, 000 pass events, a robust prediction can be achieved with more than 0. 7 top - 1 accuracy. based on the prediction, a better understanding of pitch control and pass options can be reached to measure players ' off - ball movement contribution to defensive performance. moreover, this model could provide football analysts with a better tool and metric to understand how players ' movement over time contributes to game strategy and final victory.
arxiv:2309.01526
visual transformers have achieved remarkable performance in image classification tasks, but this performance gain has come at the cost of interpretability. one of the main obstacles to the interpretation of transformers is the self - attention mechanism, which mixes visual information across the whole image in a complex way. in this paper, we propose hindered transformer ( hit ), a novel interpretable by design architecture inspired by visual transformers. our proposed architecture rethinks the design of transformers to better disentangle patch influences at the classification stage. ultimately, hit can be interpreted as a linear combination of patch - level information. we show that the advantages of our approach in terms of explicability come with a reasonable trade - off in performance, making it an attractive alternative for applications where interpretability is paramount.
arxiv:2502.17196
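To illustrate what "a linear combination of patch-level information" can mean in practice, the sketch below scores each patch independently and averages per-patch class logits, so every patch's contribution to the prediction is directly readable. This is only a conceptual toy, not the HiT architecture from the paper; all dimensions are arbitrary.

```python
import torch
import torch.nn as nn

# Conceptual sketch: each patch is embedded independently, mapped to per-patch
# class logits, and the image prediction is a simple average of those logits,
# so patch contributions are directly interpretable.
class PatchLinearClassifier(nn.Module):
    def __init__(self, patch_dim=3 * 16 * 16, embed_dim=128, num_classes=10):
        super().__init__()
        self.embed = nn.Linear(patch_dim, embed_dim)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, patches):              # patches: (B, N, patch_dim)
        logits_per_patch = self.head(torch.relu(self.embed(patches)))
        return logits_per_patch.mean(dim=1), logits_per_patch

model = PatchLinearClassifier()
patches = torch.randn(2, 196, 3 * 16 * 16)   # 14x14 grid of 16x16 RGB patches
image_logits, patch_logits = model(patches)
print(image_logits.shape, patch_logits.shape)  # (2, 10) and (2, 196, 10)
```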
many ai researchers and cognitive scientists have argued that analogy is the core of cognition. the most influential work on computational modeling of analogy - making is structure mapping theory ( smt ) and its implementation in the structure mapping engine ( sme ). a limitation of sme is the requirement for complex hand - coded representations. we introduce the latent relation mapping engine ( lrme ), which combines ideas from sme and latent relational analysis ( lra ) in order to remove the requirement for hand - coded representations. lrme builds analogical mappings between lists of words, using a large corpus of raw text to automatically discover the semantic relations among the words. we evaluate lrme on a set of twenty analogical mapping problems, ten based on scientific analogies and ten based on common metaphors. lrme achieves human - level performance on the twenty problems. we compare lrme with a variety of alternative approaches and find that they are not able to reach the same level of performance.
arxiv:0812.4446
in wireless communications systems, the user equipment ( ue ) transmits a random access preamble sequence to the base station ( bs ) to be detected and synchronized. in standardized cellular communications systems, zadoff - chu sequences have been proposed due to their constant amplitude zero autocorrelation ( cazac ) properties. the conventional approach is to use matched filters to detect the sequence. sequences arriving from different antennas and time instances are summed up to reduce the noise variance. since the channel is unknown at this stage, a coherent combining scheme would be very difficult to implement. in this work, we leverage the system design knowledge and propose a neural network ( nn ) sequence detector and timing advance estimator. we do not replace the whole process of preamble detection by a nn. instead, we propose to use the nn only for \ textit { blind } coherent combining of the signals in the detector to compensate for the channel effect, thus maximizing the signal to noise ratio. we have further reduced the problem ' s complexity using a kronecker approximation model for the channel covariance matrices, thereby reducing the size of the required nn. the analysis of timing advance estimation and sequence detection has been performed and compared with the matched filter baseline.
arxiv:2110.02738
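The matched-filter baseline mentioned above can be illustrated with a Zadoff-Chu root sequence and a circular correlation; the neural-network combining proposed in the paper is not reproduced here, and the sequence length, root index, delay, and noise level are arbitrary illustrative choices.

```python
import numpy as np

# Zadoff-Chu root sequence of odd length N with root u coprime to N.
def zadoff_chu(u, N):
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

N, u = 139, 25          # illustrative values, not tied to a specific standard
x = zadoff_chu(u, N)

# Received preamble: cyclically delayed copy plus noise; the delay mimics the
# round-trip time that a timing-advance estimator must recover.
rng = np.random.default_rng(1)
delay = 17
r = np.roll(x, delay) + 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Matched filter via circular correlation (frequency-domain implementation).
corr = np.fft.ifft(np.fft.fft(r) * np.conj(np.fft.fft(x)))
print("estimated delay:", int(np.argmax(np.abs(corr))))   # expect 17
```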
in this paper we prove some general results on constant mean curvature lamination limits of certain sequences of compact surfaces $ m _ n $ embedded in $ \ mathbb r ^ 3 $ with constant mean curvature $ h _ n $ and fixed finite genus, when the boundaries of these surfaces tend to infinity. two of these theorems generalize, to the non - zero constant mean curvature case, similar structure theorems by colding and minicozzi in [ 6, 8 ] for limits of sequences of minimal surfaces of fixed finite genus.
arxiv:1510.07549
this work aims to study the portuguese regional agglomeration process, using the linear form of the new economic geography models that emphasize the importance of spatial factors ( distance, costs of transport and communication ) in explaining the concentration of economic activity in certain locations. in a theoretical context, it is intended to explain the complementarity of clustering models, associated with the new economic geography, and polarization models, associated with the keynesian tradition, describing the mechanisms on which these processes are based. as a summary conclusion, we can say that the agglomeration process shows some signs of concentration in lisboa e vale do tejo ( which is evidence of regional divergence in portugal ) and that the productivity factor significantly improves the results that explain regional clustering in portugal ( despite being ignored in the models of new economic geography ).
arxiv:1110.5559
we combine imaging data from the advanced camera for surveys ( acs ) with vlt / fors optical spectroscopy to study the properties of star - forming galaxies in the z = 0. 837 cluster cl0152 - 1357. we have morphological information for 24 star - forming cluster galaxies, which range in morphology from late - type and irregular to compact early - type galaxies. we find that while most star - forming galaxies have $ r _ { 625 } - i _ { 775 } $ colors bluer than 1. 0, eight are in the red cluster sequence. among the star - forming cluster population we find five compact early - type galaxies which have properties consistent with their identification as progenitors of dwarf elliptical galaxies. the spatial distribution of the star - forming cluster members is nonuniform. we find none within $ r \ sim 500 $ kpc of the cluster center, which is highly suggestive of an intracluster medium interaction. we derive star formation rates from [ oii ] $ \ lambda \ lambda 3727 $ line fluxes, and use these to compare the global star formation rate of cl0152 - 1357 to other clusters at low and intermediate redshifts. we find a tentative correlation between integrated star formation rates and $ t _ { x } $, in the sense that hotter clusters have lower integrated star formation rates. additional data from clusters with low x - ray temperatures are needed to confirm this trend. we do not find a significant correlation with redshift, suggesting that evolution is either weak or absent between z = 0. 2 - 0. 8.
arxiv:astro-ph/0412083
recently, agievich proposed an interesting upper bound on binomial coefficients in the de moivre - laplace form. in this article, we show that the latter bound, in the specific case of a central binomial coefficient, is larger than the one proposed by sasvari and obtained using the binet formula for the gamma function. in addition, we provide the expression of the next - order bound and apply it to catalan numbers $ c _ n $. the bounds are very close to the exact value, the difference decreasing with $ n $ and with the order of the upper bound.
arxiv:2407.21064
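As a quick numerical illustration of how close such estimates are to the exact central binomial coefficient, the sketch below compares exact values with the leading-order de Moivre-Laplace-type estimate 4^n / sqrt(pi*n) and derives the Catalan numbers C_n = binom(2n, n)/(n+1). The specific bounds of Agievich and Sasvari discussed in the paper are not reproduced.

```python
from math import comb, pi, sqrt

# Leading-order de Moivre-Laplace-type estimate of the central binomial
# coefficient; the refined bounds from the paper are not reproduced here.
def central_estimate(n):
    return 4**n / sqrt(pi * n)

for n in (5, 10, 50, 100):
    exact = comb(2 * n, n)
    est = central_estimate(n)
    catalan = exact // (n + 1)          # Catalan number C_n
    print(f"n={n:4d}  exact={exact:.4e}  estimate={est:.4e}  "
          f"ratio={est / exact:.5f}  C_n={catalan:.4e}")
```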
we show that the null melvin map applied to a rotational isometry of global anti - de sitter space produces a spacetime with properties analogous to both the godel and schrodinger geometries. the isometry group of this godel - schrodinger spacetime is appropriate to provide a holographic dual for the same non - relativistic conformal theory as the ordinary schrodinger geometry, but defined on a sphere. this spacetime also possesses closed timelike curves outside a certain critical radius. we show that a holographic preferred screen for an observer at the origin sits at an interior radius, suggesting that a holographic dual description would only require the chronologically consistent region. additionally, giant graviton probe branes experience repulson - type instabilities in the acausal region, suggesting a condensation of branes will modify the pathological part of the geometry to remove the closed timelike curves.
arxiv:1110.3840
we are exploring the enhancement of models of agent behaviour with more " human - like " decision making strategies than are presently available. our motivation is to develop decision analysis and support for an electric taxi company under the mission of energy saving and reduction of co2, in particular for car - pool and car - sharing management policies. in order to achieve this decision analysis for users, we provide human - agent interactive spatial behaviour to support users in making decisions in real time. we adopt passenger average waiting time and electric taxi average idle time as the performance measures and decision support for the electric taxi company. finally, according to the analysis results, we demonstrate that our multi - agent simulation and gui can help users or companies quickly make accurate, high - quality decisions, reducing decision - making cost and time.
arxiv:0912.3961
platinum diselenide ( ptse2 ) is an exciting new member of the two - dimensional ( 2d ) transition metal dichalcogenide ( tmd ) family. it has a semimetal to semiconductor transition when approaching monolayer thickness and has already shown significant potential for use in device applications. notably, ptse2 can be grown at low temperature making it potentially suitable for industrial usage. here, we address thickness dependent transport properties and investigate electrical contacts to ptse2, a crucial and universal element of tmd - based electronic devices. ptse2 films have been synthesized at various thicknesses and structured to allow contact engineering and the accurate extraction of electrical properties. contact resistivity and sheet resistance extracted from transmission line method ( tlm ) measurements are compared for different contact metals and different ptse2 film thicknesses. furthermore, the transition from semimetal to semiconductor in ptse2 has been indirectly verified by electrical characterization of field - effect devices. finally, the influence of edge contacts at the metal - ptse2 interface has been studied by nanostructuring the contact area using electron beam lithography. by increasing the edge contact length, the contact resistivity was improved by up to 70 % compared to devices with conventional top contacts. the results presented here represent crucial steps towards realizing high - performance nanoelectronic devices based on group - 10 tmds.
arxiv:1707.06824
recently, a phase transition to synchronized congested traffic has been observed in empirical highway data [ b. s. kerner and h. rehborn, phys. rev. lett. 79, 4030 ( 1997 ) ]. this hysteretic transition has been described by a non - local, gas - kinetic - based traffic model [ d. helbing and m. treiber, phys. rev. lett. 81, 3042 ( 1998 ) ] that, however, did not display the wide scattering of synchronized states. here, it is shown that the latter can be reproduced by a mixture of different vehicle types like cars and trucks. the simulation results are in good agreement with dutch highway data.
arxiv:cond-mat/9901119
we study abelian gauge theories with anisotropic couplings in $ 4 + d $ dimensions. a layered phase is present, in the absence as well as in the presence of fermions. a line of second order transitions separates the layered from the coulomb phase, if $ d \ leq 3 $.
arxiv:hep-th/9406003
the controlled scaling of diamond defect center based quantum registers relies on the ability to position nvs with high spatial resolution. using ion implantation, shallow ( < 10 nm ) nvs can be placed with accuracy below 20 nm, but generally show reduced spin properties compared to bulk nvs. we demonstrate the augmentation of spin properties for shallow implanted nv centers using an overgrowth technique. an increase of the coherence times by up to an order of magnitude ( t _ 2 = 250 \ mu s ) was achieved. dynamic decoupling of defect spins achieves ms decoherence times. the study marks a further step towards achieving strong coupling among defects positioned with nm precision.
arxiv:1208.4216
the helium trimer is studied using two - and three - body soft - core potentials. realistic helium - helium potentials present an extremely strong short - range repulsion and support a single, very shallow, bound state. the description of systems with more than two helium atoms is difficult due to the very large cancellation between kinetic and potential energy. we analyze the possibility of describing the three helium system in the ultracold regime using a gaussian representation of a widely used realistic potential, the lm2m2 interaction. however, in order to describe correctly the trimer ground state a three - body force has to be added to the gaussian interaction. with this potential model the two bound states of the trimer and the low energy scattering helium - dimer phase shifts obtained with the lm2m2 potential are well reproduced.
arxiv:1101.1719
we study the galois groupoid of a holomorphic singular codimension one foliation. geometric and algebraic characterisations using godbillon - vey sequences and classical first integrals are given.
arxiv:math/0503348
in a recent paper, the author and st \ " ohr established a bound on the number of iterated frobenius pullbacks needed to transform a non - smooth non - decomposed point on a regular geometrically integral curve into a rational point. in this note we improve this result, by establishing a new bound that is sharp in every characteristic $ p > 0 $.
arxiv:2402.14969
we study the effect of introducing separable quenched disorder on a non - equilibrium mean - field spin model exhibiting a phase transition to an oscillating state in the absence of disorder, due to non - reciprocal interactions. in the disordered model, the magnetisation and its time derivative no longer carry the signature of the phase transition to an oscillating state. however, thanks to the separable ( mattis - type ) form of the disorder, the presence of oscillations can be revealed by introducing a specific, disorder - dependent observable. we also introduce generalised linear and non - linear susceptibilities associated either with the magnetisation or with its time derivative. while linear susceptibilities show no sign of a phase transition, the third - order susceptibilities present a clear signature of the onset of an oscillating phase. in addition, we show that the overlap distribution also provides evidence for the presence of oscillations, without explicit knowledge of the disorder.
arxiv:2406.03874
we discuss the front propagation in the $ a + b \ rightarrow 2a $ reaction under subdiffusion, which is described by continuous time random walks with a heavy - tailed power law waiting time probability density function. using a crossover argument, we discuss the two scaling regimes of the front propagation : an intermediate asymptotic regime given by the front solution of the corresponding continuous equation, and the final asymptotics, which is fluctuation - dominated and therefore lies out of reach of the continuous scheme. we moreover show that the continuous reaction subdiffusion equation indeed possesses a front solution that decelerates and becomes narrower in the course of time. this continuous description breaks down for larger times when the front gets atomically sharp. we show that the velocity of such fronts decays in time faster than in the continuous regime.
arxiv:1011.2643
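For reference, the subdiffusive setting in the abstract above rests on standard continuous-time random walk (CTRW) ingredients; the relations below are textbook facts, not results specific to the cited paper.

```latex
\psi(t) \sim \frac{\tau^{\alpha}}{t^{1+\alpha}}, \quad 0 < \alpha < 1
\qquad \Longrightarrow \qquad
\langle t \rangle = \infty, \qquad
\langle x^{2}(t) \rangle \propto t^{\alpha}.
```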
using the china spallation neutron source ( csns ) linac as the injector, a 500 mev proton synchrotron is proposed for multidisciplinary applications, such as biology, materials science and proton therapy. the synchrotron will deliver a proton beam with energy from 80 mev to 500 mev. a compact lattice design was worked out, and all the important beam dynamics issues were investigated. the 80 mev h - beam is stripped and injected into the synchrotron using multi - turn injection. in order to continuously extract protons with small beam loss, an achromatic structure is proposed and a slow extraction method with rf knock - out is adopted and optimized.
arxiv:1604.03309
( abridged ) we present an analysis of ionization and metal enrichment in the magellanic stream ( ms ), the nearest gaseous tidal stream, using hst / stis and fuse ultraviolet spectroscopy of two background agn, ngc 7469 and mrk 335. for ngc 7469, we include optical spectroscopy from vlt / uves. in both sightlines the ms is detected in low - ion and high - ion absorption. toward ngc 7469, we measure a ms oxygen abundance [ o / h ] _ ms = [ oi / hi ] = - 1. 00 + / - 0. 05 ( stat ) + / - 0. 08 ( syst ), supporting the view that the stream originates in the smc rather than the lmc. we use cloudy to model the low - ion phase of the stream as a photoionized plasma using the observed si iii / si ii and c iii / c ii ratios. toward mrk 335 this yields an ionization parameter log u between - 3. 45 and - 3. 15 and a gas density log ( n _ h / cm ^ - 3 ) between - 2. 51 and - 2. 21. toward ngc 7469 we derive sub - solar abundance ratios for [ si / o ], [ fe / o ], and [ al / o ], indicating the presence of dust in the ms. the high - ion column densities are too large to be explained by photoionization, but also cannot be explained by a single - temperature collisional - ionization model ( equilibrium or non - equilibrium ). this suggests the high - ion plasma is multi - phase. summing over the low - ion and high - ion phases, we derive conservative lower limits on the ratio n ( total h ii ) / n ( h i ) of > 19 toward ngc 7469 and > 330 toward mrk 335, showing that along these two directions the vast majority of the stream has been ionized. the presence of warm - hot plasma together with the small - scale structure observed at 21 cm provides evidence for an evaporative interaction with the hot galactic corona. this scenario, predicted by hydrodynamical simulations, suggests that the fate of the ms will be to replenish the galactic corona with new plasma, rather than to bring neutral fuel to the disk.
arxiv:1006.0974
let $ n $ be the maximal nilpotent subalgebra of a simple complex lie algebra $ g $. we introduce the notion of imaginary vector in the dual canonical basis of $ u _ q ( n ) $, and we give examples of such vectors for types $ a _ n ( n \ ge 5 ) $, $ b _ n ( n \ ge 3 ) $, $ c _ n ( n \ ge 3 ) $, $ d _ n ( n \ ge 4 ) $, and all exceptional types. this disproves a conjecture of berenstein and zelevinsky about $ q $ - commuting products of vectors of the dual canonical basis. it also shows the existence of finite - dimensional irreducible representations of quantum affine algebras whose tensor square is not irreducible.
arxiv:math/0202148
in this paper, we present a formula describing the formation and decay of shock wave type solutions in some special cases.
arxiv:math-ph/0512087
we formulate a time - optimal approach to adiabatic quantum computation ( aqc ). a corresponding natural riemannian metric is also derived, through which aqc can be understood as the problem of finding a geodesic on the manifold of control parameters. this geometrization of aqc is demonstrated through two examples, where we show that it leads to improved performance of aqc, and sheds light on the roles of entanglement and curvature of the control manifold in algorithmic performance.
arxiv:0905.2376
is the science / subject of measuring and modelling the process of care in health and social care systems. nosology is the classification of diseases for various purposes. occupational medicine is the provision of health advice to organizations and individuals to ensure that the highest standards of health and safety at work can be achieved and maintained. pain management ( also called pain medicine, or algiatry ) is the medical discipline concerned with the relief of pain. pharmacogenomics is a form of individualized medicine. podiatric medicine is the study of, diagnosis, and medical treatment of disorders of the foot, ankle, lower limb, hip and lower back. sexual medicine is concerned with diagnosing, assessing and treating all disorders related to sexuality. sports medicine deals with the treatment and prevention and rehabilitation of sports / exercise injuries such as muscle spasms, muscle tears, injuries to ligaments ( ligament tears or ruptures ) and their repair in athletes, amateur and professional. therapeutics is the field, more commonly referenced in earlier periods of history, of the various remedies that can be used to treat disease and promote health. travel medicine or emporiatrics deals with health problems of international travelers or travelers across highly different environments. tropical medicine deals with the prevention and treatment of tropical diseases. it is studied separately in temperate climates where those diseases are quite unfamiliar to medical practitioners and their local clinical needs. urgent care focuses on delivery of unscheduled, walk - in care outside of the hospital emergency department for injuries and illnesses that are not severe enough to require care in an emergency department. in some jurisdictions this function is combined with the emergency department. veterinary medicine ; veterinarians apply similar techniques as physicians to the care of non - human animals. wilderness medicine entails the practice of medicine in the wild, where conventional medical facilities may not be available. = = education and legal controls = = medical education and training varies around the world. it typically involves entry level education at a university medical school, followed by a period of supervised practice or internship, or residency. this can be followed by postgraduate vocational training. a variety of teaching methods have been employed in medical education, still itself a focus of active research. in canada and the united states of america, a doctor of medicine degree, often abbreviated m. d., or a doctor of osteopathic medicine degree, often abbreviated as d. o. and unique to the united states, must be completed in and delivered from a recognized university. since knowledge, techniques, and medical technology continue to evolve at a
https://en.wikipedia.org/wiki/Medicine
recent advances in facial expression synthesis have shown promising results using diverse expression representations including facial action units. facial action units for elaborate facial expression synthesis need to be represented intuitively for human comprehension, not as a numeric categorization of facial action units. to address this issue, we utilize a human - friendly approach : the use of natural language, where language helps humans grasp conceptual contexts. in this paper, therefore, we propose a new facial expression synthesis model from language - based facial expression descriptions. our method can synthesize facial images with detailed expressions. in addition, by effectively embedding language features on facial features, our method can control individual words to handle each part of facial movement. extensive qualitative and quantitative evaluations were conducted to verify the effectiveness of the natural language approach.
arxiv:2007.08154
recent years witnessed a surge in network traffic due to the emergence of new online services, causing periodic saturation and complexity problems. additionally, the growing number of iot devices further compounds the problem. software defined network ( sdn ) is a new architecture which offers innovative advantages that help to reduce saturation problems. despite its benefits, sdns not only can be affected by traditional attacks but also introduce new security challenges. in this context, distributed denial of service ( ddos ) is one of the most important attacks that can damage an sdn network ' s normal operation. furthermore, if these attacks are executed using botnets, they can use thousands of compromised devices to disrupt critical online services. this paper proposes a framework for detecting ddos attacks generated by a group of botnets in an sdn network. the framework is implemented using open - source tools such as mininet and opendaylight and tested in a centralized network topology using byob and snort. the results demonstrate real - time attack identification by implementing an intrusion detection mechanism in the victim client. our proposed solution offers quick and effective detection of ddos attacks in sdn networks. the framework can successfully differentiate the type of attack with high accuracy in a short time
arxiv:2401.09358
we prove that the mixing time of the glauber dynamics for random k - colorings of the complete tree with branching factor b undergoes a phase transition at $ k = b ( 1 + o _ b ( 1 ) ) / \ ln { b } $. our main result shows nearly sharp bounds on the mixing time of the dynamics on the complete tree with n vertices for $ k = cb / \ ln { b } $ colors with constant c. for $ c \ geq1 $ we prove the mixing time is $ o ( n ^ { 1 + o _ b ( 1 ) } \ ln { n } ) $. on the other side, for $ c < 1 $ the mixing time experiences a slowing down ; in particular, we prove it is $ o ( n ^ { 1 / c + o _ b ( 1 ) } \ ln { n } ) $ and $ \ omega ( n ^ { 1 / c - o _ b ( 1 ) } ) $. the critical point c = 1 is interesting since it coincides ( at least up to first order ) with the so - called reconstruction threshold which was recently established by sly. the reconstruction threshold has been of considerable interest recently since it appears to have close connections to the efficiency of certain local algorithms, and this work was inspired by our attempt to understand these connections in this particular setting.
arxiv:0908.2665
we study the geometry of a family of lie groups, which contains the classical affine lie groups, endowed with an exact left invariant symplectic form. we show that this family is closed under symplectic reduction and symplectic double extension in the sense of dardi \ ' { e } and medina. we also prove that these groups are endowed with two transverse left invariant lagrangian ( resp. symplectic ) foliations. this implies that these groups admit a left invariant torsion free symplectic connection.
arxiv:math/0506365
in this note we are interested in the rich geometry of the graph of a curve $ \ gamma _ { a, b } : [ 0, 1 ] \ rightarrow \ mathbb { c } $ defined as \ begin { equation * } \ gamma _ { a, b } ( t ) = \ exp ( 2 \ pi i a t ) + \ exp ( 2 \ pi i b t ), \ end { equation * } in which $ a, b $ are two different positive integers. it turns out that the sum of only two exponentials gives already rise to intriguing graphs. we determine the symmetry group and the points of self intersection of any such graph using only elementary arguments and describe various interesting phenomena that arise in the study of graphs of sums of more than two exponentials.
arxiv:1810.01674
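Since the curve is given explicitly, it is easy to sample and plot numerically; the sketch below does so for one arbitrary choice of (a, b) and makes no claims about the symmetry group or self-intersections studied in the note.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample gamma_{a,b}(t) = exp(2*pi*i*a*t) + exp(2*pi*i*b*t) on [0, 1] and plot
# its graph in the complex plane; a and b are arbitrary distinct positive integers.
a, b = 3, 7
t = np.linspace(0.0, 1.0, 5000)
z = np.exp(2j * np.pi * a * t) + np.exp(2j * np.pi * b * t)

plt.plot(z.real, z.imag, lw=0.8)
plt.gca().set_aspect("equal")
plt.title(r"$\gamma_{3,7}(t)=e^{2\pi i\,3t}+e^{2\pi i\,7t}$")
plt.savefig("gamma_3_7.png", dpi=150)
```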
seismic data often face challenges in their utilization due to noise contamination, incomplete acquisition, and limited low - frequency information, which hinder accurate subsurface imaging and interpretation. traditional processing methods rely heavily on task - specific designs to address these challenges and fail to account for the variability of data. to address these limitations, we present a generative seismic foundation model ( gsfm ), a unified framework based on generative diffusion models ( gdms ), designed to tackle multi - task seismic processing challenges, including denoising, backscattered noise attenuation, interpolation, and low - frequency extrapolation. gsfm leverages a pre - training stage on synthetic data to capture the features of clean, complete, and broadband seismic data distributions and applies an iterative fine - tuning strategy to adapt the model to field data. by adopting a target - oriented diffusion process prediction, gsfm improves computational efficiency without compromising accuracy. synthetic data tests demonstrate gsfm surpasses benchmarks with equivalent architectures in all tasks and achieves performance comparable to traditional pre - training strategies, even after their fine - tuning. also, field data tests suggest that our iterative fine - tuning approach addresses the generalization limitations of conventional pre - training and fine - tuning paradigms, delivering significantly enhanced performance across diverse tasks. furthermore, gsfm ' s inherent probabilistic nature enables effective uncertainty quantification, offering valuable insights into the reliability of processing results.
arxiv:2502.01111
we examine thermal transport in graphene supported on sio2 using molecular dynamics simulations. coupling to the substrate reduces the thermal conductivity ( tc ) of supported graphene by an order of magnitude, due to damping of the flexural acoustic ( za ) phonons. however, increasing the strength of the graphene - substrate interaction enhances the tc of supported graphene, contrary to expectations. the enhancement is due to the coupling of graphene za modes to the substrate rayleigh waves, which linearizes the dispersion and increases the group velocity of the hybridized modes. these findings suggest that the tc of two - dimensional supported graphene is tunable through surface interactions, providing a novel possibility for controlled energy flow in nanomaterials.
arxiv:1101.2463
hot spots in tumors are regions of high vascular density in the center of the tumor and their analysis is an important diagnostic tool in cancer treatment. we present a model for vascular remodeling in tumors predicting that the formation of hot spots correlates with local inhomogeneities of the original arterio - venous vasculature of the healthy tissue. probable locations for hot spots in the late stages of the tumor are locations of increased blood pressure gradients. the developing tumor vasculature is non - hierarchical but still complex displaying algebraically decaying density distributions.
arxiv:0801.0654
the fragile light elements lithium, beryllium, and boron are easily destroyed in stellar interiors, and are thus superb probes of physical processes occurring in the outer stellar layers. the light elements are also excellent tracers of the chemical evolution of the galaxy, and can test big bang nucleosynthesis ( bbn ). these inter - related topics are reviewed with an emphasis on stellar physics. in part i ( presented by cpd ), an overview is given of the physical processes which can modify the surface abundances of the light elements, with emphasis on population i dwarfs : convection ; gravitational settling, thermal diffusion, and radiative levitation ; slow mixing induced by gravity waves or rotation. we will discuss the increasingly large body of data which begins to enable us to discern the relative importance of these mechanisms in population i main sequence stars. in part ii ( presented by mhp ), discussion is extended to the issue of whether or not the halo li plateau is depleted, and includes the following topics : li dispersion in field and globular cluster stars, li production vs. destruction in li - rich halo stars, and constraints from 6li. also discussed are trends with metal abundance and teff and implications for chemical evolution and bbn. in part iii ( presented by cc ), evidence is reviewed that suggests that in situ mixing occurs in evolved low mass population i and population ii stars. theoretical mechanisms that can create such mixing are discussed, as well as their implications for stellar yields.
arxiv:astro-ph/0006280
we give an explicit description of the ( lowering ) kashiwara operators on mirkovi \ ' c - vilonen polytopes in types $ b $ and $ c $, which provides a simple method for generating mirkovi \ ' c - vilonen polytopes inductively. this description can be thought of as a modification of the original anderson - mirkovi \ ' c conjecture, which kamnitzer proved in the case of type $ a $, and presented a counterexample in the case of type $ c _ { 3 } $.
arxiv:0711.0071
the unique nature of the lorentz group in four dimensions is the root cause of the many remarkable properties of the einstein spacetimes, in particular their operational structure on the 2 - forms. we show how this operational structure can be used for two ends. first, it allows for a simple generalization of the birkhoff theorem to schwarzschild ( a ) de - sitter spacetime. second, it provides the means to construct an abelian endomorphism group on the space of 2 - forms. it is observed that taking the trace over this group element - wise induces a further abelian group which may be identified with a tensor representation of conformal transformations, giving einstein spacetimes access to their own conformal equivalence class. a further trace over the group yields the curvature invariants of the spacetime. the kretschmann scalar becomes the topological euler density, which may be linked in a simple way to the hawking temperature of horizons.
arxiv:2403.11527
we experimentally present a random phase feedback based on quantum noise to generate a chaotic laser with gaussian invariant distribution. the quantum noise from vacuum fluctuations is acquired by balanced homodyne detection and injected into a phase modulator to form a random phase feedback. an optical switch using high - speed intensity modulator is employed to reset the chaotic states repeatedly and the time evolutions of intensity statistical distributions of the chaotic states stemming from the initial noise are measured. by the quantum - noise random phase feedback, the transient intensity distributions of the chaotic outputs are improved from asymmetric invariant distributions to gaussian invariant distributions, and the gaussian invariant distribution indicates a randomly perturbed dynamical transition from microscopic initial noise to macroscopic stochastic fluctuation. the effects of phase feedback bandwidth and modulation depth on the invariant distributions are investigated experimentally. the chaotic time - delay signature and mean permutation entropy are suppressed to 0. 036 and enhanced to 0. 999 using the random phase feedback, respectively. the high - quality chaotic laser with gaussian invariant distribution can be a desired random source for ultrafast random number generation and secure communication.
arxiv:2306.06912
the thermodynamics of solid ( hcp ) he - 4 is studied theoretically by means of unbiased monte carlo simulations at finite temperature, in a wide range of density. this study complements and extends previous theoretical work, mainly by obtaining results at significantly lower temperatures ( down to 60 mk ) and for systems of greater size, by including in full the effect of quantum statistics, and by comparing estimates yielded by different pair potentials. all the main thermodynamic properties of the crystal, e. g., the kinetic energy per atom, are predicted to be essentially independent of temperature below 1 k. quantum - mechanical exchanges are virtually non - existent in this system, even at the lowest temperature considered. however, effects of quantum statistics are detectable in the momentum distribution. comparison with available measurements shows general agreement within the experimental uncertainties.
arxiv:2306.13848
the inclusive $ d _ s ^ { \ pm } $ production asymmetry is measured in $ pp $ collisions collected by the lhcb experiment at centre - of - mass energies of $ \ sqrt { s } = 7 $ and 8 tev. promptly produced $ d _ s ^ { \ pm } $ mesons are used, which decay as $ d _ s ^ { \ pm } \ to \ phi \ pi ^ { \ pm } $, with $ \ phi \ to k ^ + k ^ - $. the measurement is performed in bins of transverse momentum, $ p _ { \ rm t } $, and rapidity, $ y $, covering the range $ 2. 5 < p _ { \ rm t } < 25. 0 $ gev $ / c $ and $ 2. 0 < y < 4. 5 $. no kinematic dependence is observed. evidence of nonzero $ d _ s ^ { \ pm } $ production asymmetry is found with a significance of 3. 3 standard deviations.
arxiv:1805.09869
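For reference, the production asymmetry measured above is conventionally defined from the prompt production cross-sections; the paper's exact definition may include additional corrections (e.g. for detection asymmetries), so this is only the standard form.

```latex
A_{\mathrm{prod}}(D_s^{\pm}) \;=\;
\frac{\sigma(D_s^{+}) - \sigma(D_s^{-})}{\sigma(D_s^{+}) + \sigma(D_s^{-})}.
```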
we develop crossing symmetric dispersion relations for describing 2 - 2 scattering of identical external particles carrying spin. this enables us to import techniques from geometric function theory and study two sided bounds on low energy wilson coefficients. we consider scattering of photons, gravitons in weakly coupled effective field theories. we provide general expressions for the locality / null constraints. consideration of the positivity of the absorptive part leads to an interesting connection with the recently conjectured weak low spin dominance. we also construct the crossing symmetric amplitudes and locality constraints for the massive neutral majorana fermions and parity violating photon and graviton theories. the techniques developed in this paper will be useful for considering numerical s - matrix bootstrap in the future.
arxiv:2112.11755
recent network pruning methods focus on pruning models early - on in training. to estimate the impact of removing a parameter, these methods use importance measures that were originally designed to prune trained models. despite lacking justification for their use early - on in training, such measures result in surprisingly low accuracy loss. to better explain this behavior, we develop a general framework that uses gradient flow to unify state - of - the - art importance measures through the norm of model parameters. we use this framework to determine the relationship between pruning measures and evolution of model parameters, establishing several results related to pruning models early - on in training : ( i ) magnitude - based pruning removes parameters that contribute least to reduction in loss, resulting in models that converge faster than magnitude - agnostic methods ; ( ii ) loss - preservation based pruning preserves first - order model evolution dynamics and is therefore appropriate for pruning minimally trained models ; and ( iii ) gradient - norm based pruning affects second - order model evolution dynamics, such that increasing gradient norm via pruning can produce poorly performing models. we validate our claims on several vgg - 13, mobilenet - v1, and resnet - 56 models trained on cifar - 10 / cifar - 100. code available at https : / / github. com / ekdeepslubana / flowandprune.
arxiv:2009.11839
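The importance measures discussed above can be sketched directly in PyTorch: magnitude |theta| and loss-preservation |theta * dL/dtheta| need only one backward pass, while the gradient-norm-based measure requires second-order (Hessian-vector) information and is omitted here. The model, data, and 50% sparsity target are toy choices, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

# Per-parameter importance scores used to prune early in training:
#   magnitude:          |theta|
#   loss-preservation:  |theta * dL/dtheta|   (first-order change in loss)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.randn(32, 20), torch.randint(0, 10, (32,))

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

scores = {}
for name, p in model.named_parameters():
    scores[name] = {
        "magnitude": p.detach().abs(),
        "loss_preservation": (p.detach() * p.grad).abs(),
    }

# Example: global threshold keeping the top 50% of weights by magnitude.
all_mag = torch.cat([s["magnitude"].flatten() for s in scores.values()])
threshold = all_mag.quantile(0.5)
masks = {n: (s["magnitude"] > threshold).float() for n, s in scores.items()}
print({n: float(m.mean()) for n, m in masks.items()})   # fraction of weights kept
```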
where $ y $ is a closed subset of $ x $.
arxiv:1412.8240
data - free knowledge distillation is able to utilize the knowledge learned by a large teacher network to augment the training of a smaller student network without accessing the original training data, avoiding privacy, security, and proprietary risks in real applications. in this line of research, existing methods typically follow an inversion - and - distillation paradigm in which a generative adversarial network on - the - fly trained with the guidance of the pre - trained teacher network is used to synthesize a large - scale sample set for knowledge distillation. in this paper, we reexamine this common data - free knowledge distillation paradigm, showing that there is considerable room to improve the overall training efficiency through a lens of ` ` small - scale inverted data for knowledge distillation ". in light of three empirical observations indicating the importance of how to balance class distributions in terms of synthetic sample diversity and difficulty during both data inversion and distillation processes, we propose small scale data - free knowledge distillation ssd - kd. in formulation, ssd - kd introduces a modulating function to balance synthetic samples and a priority sampling function to select proper samples, facilitated by a dynamic replay buffer and a reinforcement learning strategy. as a result, ssd - kd can perform distillation training conditioned on an extremely small scale of synthetic samples ( e. g., 10x less than the original training data scale ), making the overall training efficiency one or two orders of magnitude faster than many mainstream methods while retaining superior or competitive model performance, as demonstrated on popular image classification and semantic segmentation benchmarks. the code is available at https : / / github. com / osvai / ssd - kd.
arxiv:2406.07876
in response to carbon - neutral policies in developed countries, electric vehicle route optimization has gained importance for logistics companies. with the increasing focus on customer expectations and the shift towards more customer - oriented business models, the integration of delivery time - windows has become essential in logistics operations. recognizing the critical nature of these developments, this article studies the heterogeneous electric vehicle routing problem with time - window constraints ( hevrptw ). to solve this variant of the vehicle routing problem ( vrp ), we propose a drl - based approach, named edge - enhanced dual attention encoder and feature - enhanced dual attention decoder ( edge - direct ). edge - direct features an extra graph representation, the node connectivity of which is based on the overlap of customer time - windows. edge - direct ' s self - attention encoding mechanism is enhanced by exploiting the energy consumption and travel time between the locations. to effectively account for the heterogeneity of the ev fleet, a dual attention decoder has been introduced. experimental results based on two real - world datasets reveal that edge - direct outperforms a state - of - the - art drl - based method and a well - established heuristic approach in solution quality and execution time. furthermore, it exhibits competitive performance when compared to another leading heuristic method.
arxiv:2407.01615
other industries and supplement to y14. 5 standards. in 2011, a new revision of iso 8015 ( geometrical product specifications ( gps ) — fundamentals — concepts, principles and rules ) was published containing the invocation principle. this states that, " once a portion of the iso geometric product specification ( gps ) system is invoked in a mechanical engineering product documentation, the entire iso gps system is invoked. " it also goes on to state that marking a drawing " tolerancing iso 8015 " is optional. the implication of this is that any drawing using iso symbols can only be interpreted to iso gps rules. the only way not to invoke the iso gps system is to invoke a national or other standard. britain, bs 8888 ( technical product specification ) has undergone important updates in the 2010s. = = media = = for centuries, until the 1970s, all engineering drawing was done manually by using pencil and pen on paper or other substrate ( e. g., vellum, mylar ). since the advent of computer - aided design ( cad ), engineering drawing has been done more and more in the electronic medium with each passing decade. today most engineering drawing is done with cad, but pencil and paper have not entirely disappeared. some of the tools of manual drafting include pencils, pens and their ink, straightedges, t - squares, french curves, triangles, rulers, protractors, dividers, compasses, scales, erasers, and tacks or push pins. ( slide rules used to number among the supplies, too, but nowadays even manual drafting, when it occurs, benefits from a pocket calculator or its onscreen equivalent. ) and of course the tools also include drawing boards ( drafting boards ) or tables. the english idiom " to go back to the drawing board ", which is a figurative phrase meaning to rethink something altogether, was inspired by the literal act of discovering design errors during production and returning to a drawing board to revise the engineering drawing. drafting machines are devices that aid manual drafting by combining drawing boards, straightedges, pantographs, and other tools into one integrated drawing environment. cad provides their virtual equivalents. producing drawings usually involves creating an original that is then reproduced, generating multiple copies to be distributed to the shop floor, vendors, company archives, and so on. the classic reproduction methods involved blue and white appearances ( whether white - on - blue or blue - on - white ), which is why engineering drawings were long
https://en.wikipedia.org/wiki/Engineering_drawing
we prove an asymptotic crystallization result in two dimensions for a class of nonlocal particle systems. to be precise, we consider the best approximation with respect to the 2 - wasserstein metric of a given absolutely continuous probability measure $ f \ mathrm { d } x $ by a discrete probability measure $ \ sum _ i m _ i \ delta _ { z _ i } $, subject to a constraint on the particle sizes $ m _ i $. the locations $ z _ i $ of the particles, their sizes $ m _ i $, and the number of particles are all unknowns of the problem. we study a one - parameter family of constraints. this is an example of an optimal location problem ( or an optimal sampling or quantization problem ) and it has applications in economics, signal compression, and numerical integration. we establish the asymptotic minimum value of the ( rescaled ) approximation error as the number of particles goes to infinity. in particular, we show that for the constrained best approximation of the lebesgue measure by a discrete measure, the discrete measure whose support is a triangular lattice is asymptotically optimal. in addition, we prove an analogous result for a problem where the constraint is replaced by a penalization. these results can also be viewed as the asymptotic optimality of the hexagonal tiling for an optimal partitioning problem. they generalise the crystallization result of bourne, peletier and theil ( communications in mathematical physics, 2014 ) from a single particle system to a class of particle systems, and prove a case of a conjecture by bouchitt \ ' { e }, jimenez and mahadevan ( journal de math \ ' ematiques pures et appliqu \ ' ees, 2011 ). finally, we prove a crystallization result which states that optimal configurations with energy close to that of a triangular lattice are geometrically close to a triangular lattice.
arxiv:2012.12129
riverine floods pose a considerable risk to many communities. improving flood hazard projections has the potential to inform the design and implementation of flood risk management strategies. current flood hazard projections are uncertain, especially due to uncertain model parameters. calibration methods use observations to quantify model parameter uncertainty. with limited computational resources, researchers typically calibrate models using either relatively few expensive model runs at high spatial resolutions or many cheaper runs at lower spatial resolutions. this leads to an open question : is it possible to effectively combine information from the high and low resolution model runs? we propose a bayesian emulation - calibration approach that assimilates model outputs and observations at multiple resolutions. as a case study for a riverine community in pennsylvania, we demonstrate our approach using the lisflood - fp flood hazard model. the multiresolution approach results in improved parameter inference over the single resolution approach in multiple scenarios. results vary based on the parameter values and the number of available model runs. our method is general and can be used to calibrate other high dimensional computer models to improve projections.
arxiv:2203.00840
the electronic and magnetic structure, including the heisenberg model exchange interaction parameters, was explored for the recently proposed novel cuprate cu $ _ 2 $ f $ _ 5 $. using the dft + u calculation, it is shown that the compound is formed by two types of copper ions with $ d ^ 9 $ and $ d ^ 8 $ electronic configurations. we have found a very stable antiferromagnetic ordering with strong anisotropy of exchange interaction that results in the appearance of an unusual 2d - magnetism : within the ( 100 ) - plane the exchange between the s = 1 and s = 1 / 2 cu ions has almost the same strength as between the two s = 1 ions. the interplane magnetic interaction is five times weaker than the in - plane one.
arxiv:2107.08636
we formulate and prove a very general relative version of the dobrushin - lanford - ruelle theorem which gives conditions on constraints of configuration spaces over a finite alphabet such that for every absolutely summable relative interaction, every translation - invariant relative gibbs measure is a relative equilibrium measure and vice versa. neither implication is true without some assumption on the space of configurations. we note that the usual finite type condition can be relaxed to a much more general class of constraints. by " relative " we mean that both the interaction and the set of allowed configurations are determined by a random environment. the result includes many special cases that are well known. we give several applications including ( 1 ) gibbsian properties of measures that maximize pressure among all those that project to a given measure via a topological factor map from one symbolic system to another ; ( 2 ) gibbsian properties of equilibrium measures for group shifts defined on arbitrary countable amenable groups ; ( 3 ) a gibbsian characterization of equilibrium measures in terms of equilibrium condition on lattice slices rather than on finite sets ; ( 4 ) a relative extension of a theorem of meyerovitch, who proved a version of the lanford - - ruelle theorem which shows that every equilibrium measure on an arbitrary subshift satisfies a gibbsian property on interchangeable patterns.
arxiv:1809.00078
the perception modules of autonomous vehicles ( avs ) are increasingly susceptible to attacks that exploit vulnerabilities in neural networks through adversarial inputs, thereby compromising ai safety. some research focuses on creating covert adversarial samples, but existing global noise techniques are detectable and struggle to deceive the human visual system. this paper introduces a novel adversarial attack method, advswap, which creatively utilizes wavelet - based high - frequency information swapping to generate covert adversarial samples and fool the camera. advswap employs an invertible neural network for selective high - frequency information swapping, preserving both forward propagation and data integrity. the scheme effectively removes the original label data and incorporates the guidance image data, producing concealed and robust adversarial samples. experimental evaluations and comparisons on the gtsrb and nuscenes datasets demonstrate that advswap can make concealed attacks on common traffic targets. the generated adversarial samples are also difficult for humans and algorithms to perceive. meanwhile, the method has strong attack robustness and transferability.
arxiv:2502.08374
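The high-frequency swapping idea above can be illustrated with an ordinary discrete wavelet transform: decompose two images, exchange their detail (high-frequency) sub-bands, and reconstruct. This is only the underlying frequency-swap concept; AdvSwap's invertible network, attack objective, and datasets are not reproduced, and random arrays stand in for real images.

```python
import numpy as np
import pywt

# Decompose an "original" and a "guidance" image, keep the original's
# low-frequency approximation, take the guidance image's detail sub-bands,
# and reconstruct the swapped image.
rng = np.random.default_rng(0)
original = rng.random((128, 128))   # stand-in for the original image
guidance = rng.random((128, 128))   # stand-in for the guidance image

cA_o, (cH_o, cV_o, cD_o) = pywt.dwt2(original, "haar")
cA_g, (cH_g, cV_g, cD_g) = pywt.dwt2(guidance, "haar")

swapped = pywt.idwt2((cA_o, (cH_g, cV_g, cD_g)), "haar")
print(swapped.shape, float(np.abs(swapped - original).mean()))
```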
numerical calculations illustrate the effect of the sign of the next nearest - neighbor hopping term t ' on the 2 - hole properties of the t - t ' - j model. working mainly on 2 - leg ladders, in the - 1. 0 < t ' / t < 1. 0 regime, it is shown that introducing t ' in the t - j model is equivalent to effectively renormalizing j, namely t ' negative ( positive ) is equivalent to an effective t - j model with smaller ( bigger ) j. this effect is present even at the level of a 2x2 plaquette toy model, and was observed also in calculations on small square clusters. analyzing the transition probabilities of a hole - pair in the plaquette toy model, it is argued that the coherent propagation of such hole - pair is enhanced by a constructive interference between both t and t ' for t ' > 0. this interference is destructive for t ' < 0.
arxiv:cond-mat/0109405