text | source
---|---
The National Institute of Standards and Technology (NIST) is currently running a multi-year standardization process to select quantum-safe, or post-quantum, cryptographic schemes for future use. Saber is the only LWR-based algorithm among the Round 3 finalists. This work presents a Saber ASIC that is 1.37x more power-efficient, 1.75x smaller in area, and uses 4x less memory than other state-of-the-art (SoA) PQC ASICs. The energy-hungry multiplier block is 1.5x more energy-efficient than the SoA.
|
arxiv:2201.07375
|
Generation of quasi-monoenergetic ions by intense lasers is one of the long-standing goals of laser-plasma physics. However, existing laser-driven ion acceleration schemes often produce broad energy spectra and offer limited control over ion species. Here we propose an acceleration mechanism, boosted Coulomb explosion, initiated by a standing wave formed in a pre-expanded plasma by the interference between a continuously incoming main laser pulse and the pulse reflected by a solid target; the pre-expanded plasma is formed from a thin layer on the solid target by a relatively strong pre-pulse. This mechanism produces a persistent Coulomb field on the target front side, with field strengths on the order of TV/m sustained for picoseconds. We experimentally demonstrate generation of quasi-monoenergetic deuterons up to 50 MeV using an in-situ D$_2$O-deposited target. Our results show that the peak energy can be tuned by the laser pulse duration.
|
arxiv:2504.19789
|
All models of chemical oscillations known to date are based exclusively on kinetic considerations. The gross chemical process equation is usually split into elementary steps, each supplied with an arrow and a differential equation; the joint solution of such a construction under certain, often ad hoc, conditions and with ad hoc numerical coefficients leads to chemical oscillations. This kinetic view of chemical oscillations reigns without exception. However, as recently shown by the author for laser and electrochemical systems, chemical oscillations also follow from solutions to the basic expressions of the discrete thermodynamics of chemical equilibria. Graphically, those solutions are various fork bifurcation diagrams, and, in certain types of chemical systems, oscillations are well pronounced in the bistable bifurcation areas. In this work we describe a general thermodynamic approach to chemical oscillations, as opposed to kinetic models, and depict some of their new features, such as spontaneity and fractality. The paper questions the exclusivity of the kinetic approach to chemical oscillations; its aim is to discuss and exemplify thermodynamically predicted chemical oscillations in closed chemical systems.
|
arxiv:1008.1100
|
In this paper, we present preliminary results on the stability of massless particles in two- and three-planet systems. The results of our study may be used to address questions concerning the stability of terrestrial planets in these systems, as well as the trapping of particles in resonances with the planets. The possibility of islands of stability and/or instability in different regions of multi-body systems, and their probable correspondence to certain resonances, is also discussed.
|
arxiv:astro-ph/0208586
|
Let $s_1, t_1, \dots, s_k, t_k$ be vertices in a graph $G$ embedded on a surface $\Sigma$ of genus $g$. A vertex $v$ of $G$ is "redundant" if there exist $k$ vertex-disjoint paths linking $s_i$ and $t_i$ ($1 \leq i \leq k$) in $G$ if and only if such paths also exist in $G - v$. Robertson and Seymour proved in Graph Minors VII that if $v$ is "far" from the vertices $s_i$ and $t_j$ and $v$ is surrounded, in a planar part of $\Sigma$, by $L(g, k)$ disjoint cycles, then $v$ is redundant. Unfortunately, their proof of the existence of $L(g, k)$ is not constructive. In this paper, we give an explicit single-exponential bound in $g$ and $k$.
|
arxiv:1309.7820
|
The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but also leads to much better performance on several games.
|
arxiv:1509.06461
|
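As a concrete illustration of the double estimator described in the abstract above, here is a minimal NumPy sketch of how Double DQN computes its bootstrap targets: the online network selects the next action and the separate target network evaluates it. Array names, shapes, and values are illustrative, not taken from the paper.

```python
import numpy as np

def double_dqn_targets(rewards, dones, q_online_next, q_target_next, gamma=0.99):
    """Double DQN bootstrap targets: the online network selects the
    next action, and the separate target network evaluates it,
    which reduces the overestimation bias of plain Q-learning."""
    best_actions = np.argmax(q_online_next, axis=1)   # action selection
    batch = np.arange(len(best_actions))
    next_values = q_target_next[batch, best_actions]  # action evaluation
    return rewards + gamma * (1.0 - dones) * next_values

# Toy batch of two transitions with two actions each
q_online = np.array([[1.0, 2.0], [0.5, 0.1]])
q_target = np.array([[0.8, 1.5], [0.4, 0.2]])
targets = double_dqn_targets(np.array([1.0, 0.0]),
                             np.array([0.0, 1.0]),  # second transition is terminal
                             q_online, q_target)
# targets[0] = 1.0 + 0.99 * 1.5 = 2.485; targets[1] = 0.0 (terminal)
```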
The neutron direct radiative capture (DRC) process is investigated, highlighting the role of incident p-wave neutrons. A set of calculations is shown for the $^{12}$C(n,$\gamma$) process at incoming neutron energies up to 500 keV, a crucial region for astrophysics. The cross section for neutron capture leading to loosely bound s, p and d orbits of $^{13}$C is well reproduced by the DRC model, demonstrating the feasibility of using this reaction channel to study the properties of nuclear wave functions on and outside the nuclear surface. A sensitivity analysis of the results with respect to the neutron-nucleus interaction is performed for incident s- as well as p-waves. It turns out that the DRC cross section for p-wave neutrons is insensitive to this interaction, contrary to the case of incident s-wave neutrons. PACS number(s): 25.40.Lw, 21.10.Gv, 23.40.Hc
|
arxiv:nucl-th/9508018
|
Despite their widespread success, text-to-image (T2I) models still struggle to produce images that are both aesthetically pleasing and faithful to the user's input text. We introduce DreamSync, a model-agnostic training algorithm that improves the faithfulness of T2I models to the input text. DreamSync builds on a recent insight from TIFA's evaluation framework: large vision-language models (VLMs) can effectively identify fine-grained discrepancies between generated images and the text inputs. DreamSync uses this insight to train T2I models without any labeled data; it improves T2I models using their own generations. First, it prompts the model to generate several candidate images for a given input text. Then, it uses two VLMs to select the best generation: a visual question answering model that measures the alignment of the generated images to the text, and another that measures the generation's aesthetic quality. After selection, we use LoRA to iteratively finetune the T2I model to guide its generation towards the selected best generations. DreamSync does not need any additional human annotation, model architecture changes, or reinforcement learning. Despite its simplicity, DreamSync improves both the semantic alignment and aesthetic appeal of two diffusion-based T2I models, as evidenced by multiple benchmarks (+1.7% on TIFA, +2.9% on DSG1K, +3.4% on VILA aesthetic) and human evaluation.
|
arxiv:2311.17946
|
Optimization of pre-production vehicle configurations is one of the challenges in the automotive industry. Given a list of tests requiring cars with certain features, it is desirable to find the minimum number of cars that cover the tests and obey the configuration rules. In this paper, we model the problem in the framework of satisfiability and solve it by utilizing the newly introduced hybrid Constrained Quadratic Model (CQM) solver provided by D-Wave. The problem definition is based on the "Optimizing the Production of Test Vehicles" use case given in the BMW Quantum Computing Challenge. We formulate a constrained quadratic model for the problem and use a greedy algorithm to configure the cars. We benchmark the results obtained from the CQM solver against results from classical solvers such as CBC (COIN-OR Branch and Cut) and Gurobi. We conclude that the performance of the CQM solver is comparable to classical solvers in optimizing the number of test vehicles. As an extension to the problem, we describe how the scheduling of the tests can be incorporated into the model.
|
arxiv:2203.15421
|
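The greedy configuration step mentioned in the abstract above can be illustrated with a generic set-cover sketch; the car names and test sets below are invented, and this is not the paper's exact CQM formulation.

```python
def greedy_test_cover(tests, candidate_cars):
    """Greedy set cover: repeatedly pick the car configuration that
    serves the most still-uncovered tests."""
    uncovered = set(tests)
    chosen = []
    while uncovered:
        best = max(candidate_cars,
                   key=lambda car: len(candidate_cars[car] & uncovered))
        if not candidate_cars[best] & uncovered:
            raise ValueError("some tests cannot be covered by any car")
        chosen.append(best)
        uncovered -= candidate_cars[best]
    return chosen

# Invented example: three candidate configurations, five required tests
cars = {"carA": {1, 2, 3}, "carB": {3, 4}, "carC": {4, 5}}
print(greedy_test_cover({1, 2, 3, 4, 5}, cars))  # ['carA', 'carC']
```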
This work presents a complete re-evaluation of the hadronic vacuum polarisation contributions to the anomalous magnetic moment of the muon, $a_{\mu}^{\rm had,\,VP}$, and the hadronic contributions to the effective QED coupling at the mass of the $Z$ boson, $\Delta\alpha_{\rm had}(m_Z^2)$, from the combination of $e^+e^- \rightarrow {\rm hadrons}$ cross section data. Focus has been placed on the development of a new data combination method, which fully incorporates all correlated statistical and systematic uncertainties in a bias-free approach. All available $e^+e^- \rightarrow {\rm hadrons}$ cross section data have been analysed and included, where the new data compilation has yielded the full hadronic $R$-ratio and its covariance matrix in the energy range $m_{\pi} \leq \sqrt{s} \leq 11.2$ GeV. Using these combined data and pQCD above that range results in estimates of the hadronic vacuum polarisation contributions to $g-2$ of the muon of $a_{\mu}^{\rm had,\,LO\,VP} = (693.27 \pm 2.46) \times 10^{-10}$ and $a_{\mu}^{\rm had,\,NLO\,VP} = (-9.82 \pm 0.04) \times 10^{-10}$. The new estimate for the Standard Model prediction is found to be $a_{\mu}^{\rm SM} = (11\,659\,182.05 \pm 3.56) \times 10^{-10}$, which is $3.7\sigma$ below the current experimental measurement. The prediction for the five-flavour hadronic contribution to the QED coupling at the $Z$ boson mass is $\Delta\alpha_{\rm had}^{(5)}(m_Z^2) = (276.11 \pm 1.11) \times 10^{-4}$, resulting in $\alpha^{-1}(m_Z^2) = 128.946 \pm 0.01$
|
arxiv:1802.02995
|
Classification of datasets into two or more distinct classes is an important machine learning task. Many methods can classify binary classification tasks with very high accuracy on test data, but cannot provide any easily interpretable explanation to give users a deeper understanding of why the data split into two classes. In this paper, we highlight and evaluate a recently proposed nonlinear decision tree approach against a number of commonly used classification methods on datasets involving a few to a large number of features. The study reveals key issues such as the effect of classification on the method's parameter values, the complexity of the classifier versus the achieved accuracy, and the interpretability of the resulting classifiers.
|
arxiv:2008.10753
|
A fast-paced policy context is characteristic of energy and climate research, which strives to develop solutions to wicked problems such as climate change. Funding agencies in the European Union recognize the importance of linking research and policy in climate and energy research. This calls for an increased understanding of how stakeholder engagement can effectively be used to co-design research questions that include stakeholders' concerns. This paper reviews the current literature on stakeholder engagement, from which we create a set of criteria. These are used to critically assess recent and relevant papers on stakeholder engagement in climate and energy projects. We obtained the papers from a scoping review of stakeholder engagement through workshops in EU climate and energy research. With insights from the literature and current EU climate and energy projects, we developed a workshop programme for stakeholder engagement. This programme was applied to the European Climate and Energy Modelling Forum project, aiming to co-design the most pressing and urgent research questions according to European stakeholders. The outcomes include 82 co-designed and ranked research questions for nine specific climate and energy research themes. Findings from the scoping review indicate that papers rarely define the term 'stakeholder'. Additionally, the concepts of co-creation, co-design, and co-production are used interchangeably and often without definition. We propose that workshop planners use stakeholder identification and selection methods from the broader stakeholder engagement literature.
|
arxiv:2406.01640
|
This paper describes a simple yet efficient repetition-based modular system for speeding up air-traffic controller (ATCo) training. For example, a human pilot is still required in EUROCONTROL's ESCAPE Lite simulator (see https://www.eurocontrol.int/simulator/escape) during ATCo training. However, this need can be met instead by an automatic system acting as a pilot. In this paper, we aim to develop and integrate a pseudo-pilot agent into the ATCo training pipeline by merging diverse artificial intelligence (AI) powered modules. The system understands the voice communications issued by the ATCo and, in turn, generates a spoken prompt that follows the pilot's phraseology in response to the initial communication. Our system relies mainly on open-source AI tools and air traffic control (ATC) databases, proving its simplicity and ease of replicability. The overall pipeline is composed of: (1) a submodule that receives and pre-processes the input stream of raw audio; (2) an automatic speech recognition (ASR) system that transforms audio into a sequence of words; (3) a high-level ATC-related entity parser, which extracts relevant information from the communication, i.e., callsigns and commands; and finally (4) a speech synthesizer submodule that generates responses based on the high-level ATC entities previously extracted. Overall, we show that this system could pave the way toward a real proof-of-concept pseudo-pilot system, speeding up ATCo training while drastically reducing its overall cost.
|
arxiv:2212.07164
|
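The four-stage pipeline described in the abstract above can be sketched with stand-in functions; the canned transcript, regex parser, and read-back template below are hypothetical placeholders for the real ASR, NLP, and TTS submodules.

```python
import re

# Hypothetical stand-ins for the four submodules of the pipeline.
def preprocess(raw_audio: bytes) -> bytes:
    return raw_audio  # resampling / noise reduction would go here

def asr(audio: bytes) -> str:
    # A real ASR model would run here; we return a canned transcript.
    return "lufthansa three two alpha descend flight level eight zero"

def parse_entities(transcript: str) -> dict:
    # Toy parser: everything before a known command keyword = callsign.
    match = re.search(r"(.*?)\b(descend|climb|turn)\b(.*)", transcript)
    return {"callsign": match.group(1).strip(),
            "command": match.group(2),
            "value": match.group(3).strip()}

def synthesize_readback(entities: dict) -> str:
    # The pseudo-pilot reads the command back in pilot phraseology.
    return f"{entities['command']} {entities['value']}, {entities['callsign']}"

audio = preprocess(b"...")
entities = parse_entities(asr(audio))
print(synthesize_readback(entities))
# descend flight level eight zero, lufthansa three two alpha
```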
Diversity Searcher is a tool originally developed to help analyse diversity in news media texts. It relies on a form of automated content analysis and thus rests on prior assumptions and depends on certain design choices related to diversity and fairness. One such design choice is the external knowledge source(s) used. In this article, we discuss the implications that these sources can have on the results of content analysis. We compare two data sources that Diversity Searcher has worked with, DBpedia and Wikidata, with respect to their ontological coverage and diversity, and describe implications for the resulting analyses of text corpora. We describe a case study of the relative over- or under-representation of Belgian political parties between 1990 and 2020 in the English-language DBpedia, the Dutch-language DBpedia, and Wikidata, and highlight the many decisions needed with regard to the design of this data analysis and the assumptions behind it, as well as the implications of the results. In particular, we came across a staggering over-representation of the political right in the English-language DBpedia.
|
arxiv:2301.00671
|
Review fraud is a pervasive problem in online commerce, in which fraudulent sellers write or purchase fake reviews to manipulate perception of their products and services. Fake reviews are often detected based on several signs, including: 1) they occur in short bursts of time; 2) fraudulent user accounts have skewed rating distributions. However, these may both be true in any given dataset. Hence, in this paper, we propose an approach for detecting fraudulent reviews which combines these two approaches in a principled manner, allowing successful detection even when one of these signs is not present. To combine these two approaches, we formulate our Bayesian Inference for Rating Data (BIRD) model, a flexible Bayesian model of user rating behavior. Based on our model, we formulate a likelihood-based suspiciousness metric, Normalized Expected Surprise Total (NEST). We propose a linear-time algorithm for performing Bayesian inference using our model and computing the metric. Experiments on real data show that BIRDNEST successfully spots review fraud in large, real-world graphs: the 50 most suspicious users of the Flipkart platform flagged by our algorithm were investigated and all identified as fraudulent by domain experts at Flipkart.
|
arxiv:1511.06030
|
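The "expected surprise" idea behind a likelihood-based suspiciousness metric can be illustrated with a toy score: the average negative log-probability of a user's ratings under the global rating distribution. This is only a sketch of the intuition, with invented numbers; it is not the paper's NEST metric.

```python
import math

def surprise_score(user_ratings, global_probs):
    """Average negative log-probability of a user's ratings under the
    global rating distribution: higher = more surprising history.
    A toy illustration of the idea only, not the paper's NEST metric."""
    nll = -sum(math.log(global_probs[r]) for r in user_ratings)
    return nll / len(user_ratings)

# Hypothetical 5-star rating distribution across the whole platform
global_probs = {1: 0.1, 2: 0.1, 3: 0.2, 4: 0.3, 5: 0.3}
typical_user = [4, 5, 3, 4]
skewed_user = [1, 1, 1, 1]  # heavily skewed toward rare 1-star ratings
print(surprise_score(skewed_user, global_probs) >
      surprise_score(typical_user, global_probs))  # True
```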
A complex number $a + bi$ corresponds to the matrix ${\begin{bmatrix} a & -b \\ b & a \end{bmatrix}}$, under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions and Clifford algebras in general. Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break. Computer graphics uses matrices to represent objects; to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation; and to apply image convolutions such as sharpening, blurring, edge detection, and more. Matrices over a polynomial ring are important in the study of control theory. Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method.

=== Graph theory ===

The adjacency matrix of a finite graph is a basic notion of graph theory. It records which vertices of the graph are connected by an edge. Matrices containing just two different values (1 and 0 meaning, for example, "yes" and "no", respectively) are called logical matrices. The distance (or cost) matrix contains information about the distances of the edges. These concepts can be applied to websites connected by hyperlinks, or cities connected by roads, etc., in which case (unless the connection network is extremely dense) the matrices tend to be sparse, that is, to contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.

=== Analysis and geometry ===

The Hessian matrix of a differentiable function $f : \mathbb{R}^n \to \mathbb{R}$ consists of the second derivatives of $f$ with respect to the several coordinate directions, that is, $H(f) = \left[ \frac{\partial^2 f}{\partial x_i \, \partial x_j} \right]$.
|
https://en.wikipedia.org/wiki/Matrix_(mathematics)
|
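The correspondence between complex numbers and 2-by-2 real matrices described in the excerpt above can be checked numerically; this short NumPy sketch verifies that matrix addition and multiplication mirror complex addition and multiplication.

```python
import numpy as np

def as_matrix(z: complex) -> np.ndarray:
    """Represent a + bi as the 2-by-2 real matrix [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z, w = 1 + 2j, 3 - 1j
# Matrix multiplication mirrors complex multiplication...
prod = as_matrix(z) @ as_matrix(w)
assert np.allclose(prod, as_matrix(z * w))
# ...and matrix addition mirrors complex addition.
assert np.allclose(as_matrix(z) + as_matrix(w), as_matrix(z + w))
```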
Graph convolutional neural networks (graph CNNs) are generalizations of classical CNNs that handle graph data such as molecular data, point clouds and social networks. Current filters in graph CNNs are built for a fixed and shared graph structure. However, for most real data, the graph structures vary in both size and connectivity. This paper proposes a generalized and flexible graph CNN that takes data of arbitrary graph structure as input. In this way, a task-driven adaptive graph is learned for each graph data sample during training. To efficiently learn the graph, a distance metric learning approach is proposed. Extensive experiments on nine graph-structured datasets demonstrate superior performance in both convergence speed and predictive accuracy.
|
arxiv:1801.03226
|
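A basic fixed-structure graph convolution, of the kind the paper above generalizes, can be sketched in a few lines: normalize the adjacency matrix with self-loops, aggregate neighbor features, and apply a learned linear map. The adaptive-graph filter of the paper itself is not reproduced here; the graph and weights below are invented.

```python
import numpy as np

def graph_conv(adjacency, features, weights):
    """One GCN-style propagation step: add self-loops, symmetrically
    normalize the adjacency, aggregate neighbor features, then apply
    a learned linear map followed by a ReLU."""
    a_hat = adjacency + np.eye(len(adjacency))           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt             # symmetric normalization
    return np.maximum(a_norm @ features @ weights, 0.0)  # ReLU

# 3-node path graph, 2 input features, identity weights for clarity
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.array([[1., 0.], [0., 1.], [1., 1.]])
W = np.eye(2)
H = graph_conv(A, X, W)  # one layer of smoothed node features
```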
Extensive Monte Carlo simulations in the semi-grand-canonical ensemble are used to study the critical behavior of a three-dimensional compressible Ising model with antiferromagnetic interactions under constant-volume conditions. Elastic forces between spins are introduced by the Stillinger-Weber potential, and energy parameters are chosen in such a way that antiparallel spin ordering is favored, analogous to the antiferromagnetic coupling in the rigid Ising Hamiltonian. All the quantities analyzed strongly indicate that the system remains in the universality class of the standard (rigid) three-dimensional Ising model, in contrast with theoretical predictions.
|
arxiv:cond-mat/0409684
|
We use the cosmo-OWLS suite of cosmological hydrodynamical simulations to investigate the scatter and evolution of the global hot gas properties of large simulated populations of galaxy groups and clusters. Our aim is to compare the predictions of different physical models and to explore the extent to which commonly-adopted assumptions in observational analyses (e.g. self-similar evolution) are violated. We examine the relations between (true) halo mass and the X-ray temperature, X-ray luminosity, gas mass, Sunyaev-Zel'dovich (SZ) flux, the X-ray analogue of the SZ flux ($Y_X$) and the hydrostatic mass. For the most realistic models, which include AGN feedback, the slopes of the various mass-observable relations deviate substantially from the self-similar ones, particularly at late times and for low-mass clusters. The amplitude of the mass-temperature relation shows negative evolution with respect to the self-similar prediction (i.e. slower than the prediction) for all models, driven by an increase in non-thermal pressure support at higher redshifts. The AGN models predict strong positive evolution of the gas mass fractions at low halo masses. The SZ flux and $Y_X$ show positive evolution with respect to self-similarity at low mass but negative evolution at high mass. The scatter about the relations is well approximated by log-normal distributions, with widths that depend mildly on halo mass. The scatter decreases significantly with increasing redshift. The exception is the hydrostatic mass-halo mass relation, for which the scatter increases with redshift. Finally, we discuss the relative merits of various hot gas-based mass proxies.
|
arxiv:1606.04545
|
Cytosine methylation has been found to play a crucial role in various biological processes, including a number of human diseases. The detection of this small modification remains challenging. In this work, we computationally explore the possibility of detecting methylated DNA strands through direct electrical conductance measurements. Using density functional theory and the Landauer-Büttiker method, we study the electronic properties and charge transport through an eight-base-pair methylated DNA strand and its native counterpart. We first analyze the effect of cytosine methylation on the tight-binding parameters of the two DNA strands and then model the transmission of the electrons and conductance through the strands, both with and without decoherence. We find that the main difference in the tight-binding parameters between the native DNA and the methylated DNA lies in the on-site energies of (methylated) cytosine bases. The intra- and inter-strand hopping integrals between the two nearest-neighboring guanine and (methylated) cytosine bases also change with the addition of the methyl groups. Our calculations show that in the phase-coherent limit, the transmission of the methylated strand is close to that of the native strand when the energy is near the highest occupied molecular orbital level, and larger than that of the native strand by 5 times in the bandgap. The trend in transmission also holds in the presence of decoherence at the same rate. The lower conductance for the methylated strand in the experiment is suggested to be caused by the more stable structure due to the introduction of the methyl groups. We also study the role of the exchange-correlation functional and the effect of contact coupling by choosing coupling strengths ranging from the weak to the strong coupling limit.
|
arxiv:1702.05700
|
Due to the lockdown measures during the 2019 novel coronavirus (COVID-19) pandemic, economic activities and the associated emissions significantly declined. This reduction in emissions created a natural experiment to assess the impact of precursor emission control policy on ozone (O$_3$) pollution, which has become a public concern in China during the last decade. In this study, we utilized comprehensive satellite and ground-level observations and source-oriented chemical transport modeling to investigate the O$_3$ variations in China during COVID-19. We found that the O$_3$ formation regime shifted from a VOC-limited regime to a NOx-limited regime due to the lower NOx during the COVID-19 lockdown. However, beyond this shift in the O$_3$ formation regime, the significantly elevated O$_3$ in the North China Plain (40%) and Yangtze River Delta (35%) was mainly attributed to the enhanced atmospheric oxidant capacity (AOC) in these regions, which differs from previous studies. We suggest that future O$_3$ control policies should comprehensively consider the synergistic effects of the O$_3$ formation regime and AOC on O$_3$ elevation.
|
arxiv:2009.11714
|
Renormalization-group methods provide a viable approach for investigating the emergent collective behavior of classical and quantum statistical systems in both equilibrium and nonequilibrium conditions. Within this approach, we investigate here the dynamics of an isolated quantum system represented by a scalar $\phi^4$ theory after a global quench of the potential close to a dynamical critical point. We demonstrate that, within a pre-thermal regime, the time dependence of the relevant correlations is characterized by a short-time universal exponent, which we calculate at the lowest order in a dimensional expansion.
|
arxiv:1411.7939
|
The longitudinal spin Seebeck effect (LSSE) has been investigated in high-quality epitaxial CoFe2O4 (CFO) thin films. The thermally excited spin currents in the CFO films are electrically detected in adjacent Pt layers via the inverse spin Hall effect (ISHE). The LSSE signal exhibits a linear increase with increasing temperature gradient, yielding an LSSE coefficient of ~100 nV/K at room temperature. The temperature dependence of the LSSE is investigated from room temperature down to 30 K, showing a significant reduction at low temperatures, revealing that the total amount of thermally generated magnons decreases. Furthermore, we demonstrate that the spin Seebeck effect is an effective tool to study the magnetic anisotropy induced by epitaxial strain, especially in ultrathin films with low magnetic moments.
|
arxiv:1509.03601
|
A subset $D \subseteq V_G$ is a dominating set of $G$ if every vertex in $V_G - D$ has a neighbor in $D$, while $D$ is a paired-dominating set of $G$ if $D$ is a dominating set and the subgraph induced by $D$ contains a perfect matching. A graph $G$ is a $D\!P\!D\!P$-graph if it has a pair $(D, P)$ of disjoint sets of vertices of $G$ such that $D$ is a dominating set and $P$ is a paired-dominating set of $G$. The study of the $D\!P\!D\!P$-graphs was initiated by Southey and Henning (Cent. Eur. J. Math. 8 (2010) 459--467; J. Comb. Optim. 22 (2011) 217--234). In this paper, we provide conditions which ensure that a graph is a $D\!P\!D\!P$-graph. In particular, we characterize the minimal $D\!P\!D\!P$-graphs.
|
arxiv:1908.04189
|
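The definition of a dominating set above translates directly into code; this toy check (not the paper's characterization of DPDP-graphs) verifies that every vertex outside D has a neighbor in D, using an invented 4-cycle as the example graph.

```python
def is_dominating(graph, d):
    """True iff every vertex outside d has a neighbor in d.
    graph: dict mapping each vertex to its set of neighbors."""
    return all(v in d or graph[v] & d for v in graph)

# 4-cycle C4: {0, 2} is a dominating set, {0} alone is not
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_dominating(c4, {0, 2}), is_dominating(c4, {0}))  # True False
```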
The (modern) arbitrary derivative (ADER) approach is a popular technique for the numerical solution of differential problems, based on iteratively solving an implicit discretization of their weak formulation. In this work, focusing on an ODE context, we investigate several strategies to improve this approach. Our initial emphasis is on the order of accuracy of the method in connection with the polynomial discretization of the weak formulation. We demonstrate that precise choices lead to higher-order convergence in comparison to the existing literature. Then, we cast ADER methods in a Deferred Correction (DeC) formalism. This allows us to determine the optimal number of iterations, which is equal to the formal order of accuracy of the method, and to introduce efficient $p$-adaptive modifications. These are defined by matching the order of accuracy achieved and the degree of the polynomial reconstruction at each iteration. We provide analytical and numerical results, including the stability analysis of the new modified methods, an investigation of computational efficiency, an application to adaptivity, and an application to hyperbolic PDEs with a spectral difference (SD) space discretization.
|
arxiv:2305.13065
|
We give a review of the state of the art with regard to the dividend problem.
|
arxiv:1603.05949
|
In hierarchical evolutionary scenarios, isolated, physical pairs may represent an intermediate phase, or "way station", between collapsing groups and isolated elliptical (E) galaxies (or fossil groups). We have started a comprehensive study of a sample of galaxy pairs composed of a giant E and a spiral (S), with the aim of investigating their formation/evolutionary history from observed optical and X-ray properties. Here we present VLT-VIMOS observations designed to identify faint galaxies associated with the E+S systems, from candidate lists generated using photometric criteria on WFI images covering an area of $\sim 0.2\,h^{-1}$ Mpc radius around the pairs. The results are discussed in the context of the evolution of poor galaxy group associations. A comparison between the optical luminosity functions (OLFs) of our E+S systems and a sample of X-ray bright poor groups suggests that the OLF of X-ray detected poor galaxy systems is not universal. The OLF of our X-ray bright systems suggests that they are more dynamically evolved than our X-ray faint sample and some X-ray bright groups in the literature. However, we suggest that the X-ray faint E+S pairs represent a phase in the dynamical evolution of some X-ray bright poor galaxy groups. The recent or ongoing interaction in which the E member of the X-ray faint pairs is involved could have decreased the luminosity of any surrounding X-ray emitting gas.
|
arxiv:0905.1264
|
Although various playful and educational tools have been developed to support children's learning abilities, limited work focuses on tangible toys designed to improve and maintain children's hygiene perception, habits and awareness, as well as to foster their collaboration and social abilities in home education contexts. We developed AwayVirus to address this research and design gap, aiming to help children gain knowledge of hygiene habits through tangible blocks. Our findings indicate that a playful tangible interaction method can effectively increase children's interest in learning and encourage parents to become actively involved in their children's hygiene and health education. Additionally, AwayVirus seeks to build a collaborative bridge between children and parents, promoting communication strategies while mitigating the adverse effects of the challenging post-pandemic period.
|
arxiv:2306.09076
|
such as superposition, interference and entanglement, with classical computers to solve complex problems and formulate algorithms much more efficiently. Individuals focus on fields like quantum cryptography, physical simulations and quantum algorithms.

== Benefits of engineering in society ==

An accessible avenue for obtaining information and opportunities in technology, especially for young students, is through digital platforms, enabling learning, exploration, and potential income generation at minimal cost and in regional languages, none of which would be possible without engineers. Computer engineering is important in the changes involved in Industry 4.0, with engineers responsible for designing and optimizing the technology that surrounds our lives, from big data to AI. Their work not only facilitates global connections and knowledge access, but also plays a pivotal role in shaping our future; as technology continues to evolve rapidly, the demand for skilled computer engineers keeps growing. Engineering contributes to improving society by creating devices and structures impacting various aspects of our lives, from technology to infrastructure. Engineers also address challenges such as environmental protection and sustainable development, while developing medical treatments. As of 2016, the median annual wage across all BLS engineering categories was over $91,000. Some were much higher, with engineers working for petroleum companies at the top (over $128,000). Other top jobs include: computer hardware engineer – $115,080, aerospace engineer – $109,650, nuclear engineer – $102,220.
|
https://en.wikipedia.org/wiki/Computer_engineering
|
study of markets, where it has been used to examine the role of trust in exchange relationships and of social mechanisms in setting prices. similarly, it has been used to study recruitment into political movements and social organizations. it has also been used to conceptualize scientific disagreements as well as academic prestige. more recently, network analysis (and its close cousin traffic analysis) has gained significant use in military intelligence, for uncovering insurgent networks of both hierarchical and leaderless nature. in criminology, it is used to identify influential actors in criminal gangs, trace offender movements and co-offending, predict criminal activities and inform policy. = = = dynamic network analysis = = = dynamic network analysis examines the shifting structure of relationships among different classes of entities in complex socio-technical systems, and reflects social stability and changes such as the emergence of new groups, topics, and leaders. dynamic network analysis focuses on meta-networks composed of multiple types of nodes (entities) and multiple types of links. these entities can be highly varied; examples include people, organizations, topics, resources, tasks, events, locations, and beliefs. dynamic network techniques are particularly useful for assessing trends and changes in networks over time, identifying emergent leaders, and examining the co-evolution of people and ideas. = = = biological network analysis = = = with the recent explosion of publicly available high-throughput biological data, the analysis of molecular networks has gained significant interest. this type of analysis is closely related to social network analysis, but often focuses on local patterns in the network. for example, network motifs are small subgraphs that are over-represented in the network.
activity motifs are analogous over-represented patterns in the attributes of nodes and edges, given the network structure. the analysis of biological networks has led to the development of network medicine, which looks at the effect of diseases in the interactome. = = = semantic network analysis = = = semantic network analysis is a sub-field of network analysis that focuses on the relationships between words and concepts in a network. words are represented as nodes and their proximity or co-occurrences in the text are represented as edges. semantic networks are therefore graphical representations of knowledge and are commonly used in neurolinguistics and natural language processing applications. semantic network analysis is also used as a method to analyze large texts and identify the main themes and topics (e.g., of social media posts), to reveal biases (e.g., in news coverage), or even
|
https://en.wikipedia.org/wiki/Network_science
|
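the network-motif idea mentioned in the passage above can be illustrated with the simplest undirected motif, the triangle: motif analysis counts how often such a subgraph occurs and compares the count against randomized networks. a minimal counting sketch (the function name and example edge lists are our own, purely illustrative choices):

```python
from itertools import combinations

def triangle_count(edges):
    """Count triangles (the simplest undirected network motif)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    # check every unordered triple of nodes for mutual adjacency
    for u, v, w in combinations(sorted(adj), 3):
        if v in adj[u] and w in adj[u] and w in adj[v]:
            count += 1
    return count

# one triangle (0-1-2) plus a pendant edge to node 3
print(triangle_count([(0, 1), (1, 2), (0, 2), (2, 3)]))
```

in a real motif analysis this raw count would then be compared to counts obtained on degree-preserving randomizations of the same network to decide whether the motif is over-represented.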
recently the $\mu_{\Delta^{++}}$ was found from a fit to $\pi^+ p$ scattering. this enables us to pinpoint condensate parameters more precisely in the context of qcd sum rules (qcdsr). in the octet sector, the coleman-glashow sum rule (cgsr) is violated by the experimental $\mu$'s. qcdsr allows us to write down two sum rules similar to the cgsr, which are obeyed by the experimental magnetic moments, whereas they rule out a specific model using the wilson loop approach and a particular chiral quark model. it is amusing to note that the qcdsr allows us to write down the quark and gluon condensates in terms of measurables like the $\mu$'s of the nucleons and the $\Sigma^{+/-}$.
|
arxiv:hep-ph/0404027
|
nowadays, commonly used resistive plate chambers (rpcs) have counting rate capabilities of ~10^4 hz/cm^2 and position resolutions of ~1 cm. we have developed small prototypes of rpcs (5x5 and 10x10 cm^2) having rate capabilities of up to 10^7 hz/cm^2 and position resolutions of 50 micron ("on line", without application of any treatment method like "center of gravity"). the breakthrough in achieving extraordinary rate and position resolutions was only possible after solving several serious problems: rpc cleaning and assembling technology, aging, spurious pulses and afterpulses, discharges in the amplification gap and along the spacers. high-rate, high-position-resolution rpcs can find a wide range of applications in many different fields, for example in medical imaging. rpcs with the cathodes coated by a csi photosensitive layer can detect ultraviolet photons with a position resolution better than ~30 micron. such detectors can also be used in many applications, for example in the focal plane of high-resolution vacuum spectrographs or as image scanners.
|
arxiv:physics/0210006
|
we perform a model dependent fit to recent data on charmless hadronic b decays and determine $\gamma$, the phase of $V^*_{ub}$. we find $\gamma = 114^{+25}_{-21}$ degrees, which disfavors the often quoted $\gamma \sim 60^\circ$ at the two standard deviation level. we also fit for the form factors $F_0^{B\pi}$ and $A_0^{B\rho}$, and the strange-quark mass. they are consistent with theoretical expectations, although $m_s$ is somewhat low. such agreement and the good $\chi^2$ for the fit may be interpreted as a confirmation of the adequacy of our model assumptions.
|
arxiv:hep-ex/9910014
|
we show that every automorphism of the group $\mathcal{G}_n := \textrm{Aut}(\mathbb{A}^n)$ of polynomial automorphisms of complex affine $n$-space $\mathbb{A}^n = \mathbb{C}^n$ is inner up to field automorphisms when restricted to the subgroup $T\mathcal{G}_n$ of tame automorphisms. this generalizes a result of \textsc{Julie Déserti} who proved this in dimension $n = 2$, where all automorphisms are tame: $T\mathcal{G}_2 = \mathcal{G}_2$.
|
arxiv:1105.3739
|
we consider the quantum hall effect in terms of an effective field theory formulation of the edge states, providing a natural common framework for the fractional and integral effects.
|
arxiv:cond-mat/9406119
|
we borrow the minisuperspace approximation from quantum cosmology and the quenching approximation from qcd in order to derive a new form of the bosonic p - brane propagator. in this new approximation we obtain an exact description of both the collective mode deformation of the brane and the center of mass dynamics in the target spacetime. the collective mode dynamics is a generalization of string dynamics in terms of area variables. the final result is that the evolution of a p - brane in the quenched - minisuperspace approximation is formally equivalent to the effective motion of a particle in a spacetime where points as well as hypersurfaces are considered on the same footing as fundamental geometrical objects. this geometric equivalence leads us to define a new tension - shell condition that is a direct extension of the klein - gordon condition for material particles to the case of a physical p - brane.
|
arxiv:hep-th/0105027
|
neutrons, originating cosmogenically or from radioactive decays, can produce signals in dark matter detectors that are indistinguishable from weakly interacting massive particles ( wimps ). to combat this background for the supercdms snolab experiment, we are investigating designs for an active neutron veto within the constrained space of the compact supercdms passive shielding. the current design employs an organic liquid scintillator mixed with an agent to enhance thermal neutron captures, with the scintillation light collected using wavelength - shifting fibers and read out by silicon photo - multipliers. we will describe the proposed veto and its predicted efficiency in detail and give some recent results from our r & d and prototyping efforts.
|
arxiv:1506.01922
|
we consider the problem of secure and reliable communication over a noisy multipath network. previous work considering a noiseless version of our problem proposed a hybrid universal network coding cryptosystem (huncc). by combining an information-theoretically secure encoder together with partial encryption, huncc is able to obtain security guarantees, even in the presence of an all-observing eavesdropper. in this paper, we propose a version of huncc for noisy channels (n-huncc). this modification requires four main novelties. first, we present a network coding construction which is jointly, individually secure and error-correcting. second, we introduce a new security definition which is a computational analogue of individual security, which we call individual indistinguishability under chosen ciphertext attack (individual ind-cca1), and show that n-huncc satisfies it. third, we present a noise-based decoder for n-huncc, which permits the decoding of the encoded-then-encrypted data. finally, we discuss how to select parameters for n-huncc and its error-correcting capabilities.
|
arxiv:2202.03002
|
the karlsruhe tritium neutrino experiment (katrin) aims to measure the effective electron anti-neutrino mass with an unprecedented sensitivity of $0.2\,\mathrm{eV}/\mathrm{c}^2$, using $\beta$-electrons from tritium decay. the electrons are guided magnetically by a system of superconducting magnets through a vacuum beamline from the windowless gaseous tritium source through differential and cryogenic pumping sections to a high resolution spectrometer and a segmented silicon pin detector. at the same time, tritium gas has to be prevented from entering the spectrometer. therefore, the pumping sections have to reduce the tritium flow by more than 14 orders of magnitude. this paper describes the measurement of the reduction factor of the differential pumping section performed with high purity tritium gas during the first measurement campaigns of the katrin experiment. the reduction factor results are compared with previously performed simulations, as well as the stringent requirements of the katrin experiment.
|
arxiv:2009.10403
|
general expressions are obtained for the coefficient of light absorption by free carriers as well as the intensity of the spontaneous light emission by hot electrons in multivalley semiconductors. these expressions depend on the electron concentration and electron temperature in the individual valleys. the anisotropy of the dispersion law and of the electron scattering mechanisms is taken into account. impurity-related and acoustic scattering mechanisms are analyzed. a polarization dependence of the spontaneous emission by hot electrons is found. when unidirectional pressure is applied, or at high irradiation intensities, the polarization dependence also appears in the coefficient of light absorption by free electrons.
|
arxiv:0811.2952
|
not only is the bekenstein expression for the entropy of a black hole a convex function of the energy, rather than the concave function it must be, it predicts a final equilibrium temperature given by the harmonic mean. this violates the third law and the principle of maximum work. the property that means are monotonically increasing functions of their arguments underscores the error of transferring from means in temperature to means in the internal energy when the energy is not a monotonically increasing function of temperature. whereas the former leads to an increase in entropy, the latter leads to a decrease in entropy, thereby violating the second law. the internal energy cannot increase at a slower rate than the temperature itself.
|
arxiv:1110.5322
|
= = pedagogy = = while the public image of science education may be one of simply learning facts by rote, science education in recent history has generally concentrated on the teaching of science concepts and on addressing misconceptions that learners may hold regarding science concepts or other content. thomas kuhn, whose 1962 book the structure of scientific revolutions greatly influenced the post-positivist philosophy of science, argued that the traditional method of teaching in the natural sciences tends to produce a rigid mindset. since the 1980s, science education has been strongly influenced by constructivist thinking. constructivism in science education has been informed by an extensive research programme into student thinking and learning in science, and in particular by exploring how teachers can facilitate conceptual change towards canonical scientific thinking. constructivism emphasises the active role of the learner, the significance of current knowledge and understanding in mediating learning, and the importance of teaching that provides an optimal level of guidance to learners. according to a 2004 policy forum in science magazine, "scientific teaching involves active learning strategies to engage students in the process of science and teaching methods that have been systematically tested and shown to reach diverse students." the 2007 volume scientific teaching lists three major tenets of scientific teaching: active learning: a process in which students are actively engaged in learning. it may include inquiry-based learning, cooperative learning, or student-centered learning. assessment: tools for measuring progress toward and achievement of the learning goals. diversity: the breadth of differences that make each student unique, each cohort of students unique, and each teaching experience unique. diversity includes everything in the classroom: the students, the instructors, the content, the teaching methods, and the context.
these elements should underlie educational and pedagogical decisions in the classroom. the "scale-up" learning environment is an example of applying the scientific teaching approach. in practice, scientific teaching employs a "backward design" approach. the instructor first decides what the students should know and be able to do (learning goals), then determines what would be evidence of student achievement of the learning goals, then designs assessments to measure this achievement. finally, the instructor plans the learning activities, which should facilitate student learning through scientific discovery. = = = guided-discovery approach = = = along with john dewey, jerome bruner, and many others, arthur koestler offers a critique of contemporary science education and proposes its replacement with the guided-discovery approach: to derive pleasure from the art of discovery, as from the other arts, the consumer - in this case the student -
|
https://en.wikipedia.org/wiki/Science_education
|
we calculate the mean free path of protons and neutrons in symmetric and asymmetric nuclear matter, based on microscopic in - medium nucleon - nucleon cross sections. those are obtained from calculations of the g - matrix including relativistic " dirac " effects. the dependence of the mean free path on energy and isospin asymmetry is discussed. we conclude by suggesting possible ways our microscopic predictions may be helpful in conjunction with studies of rare isotopes.
|
arxiv:0712.4028
|
the two-dimensional magnetic recording (tdmr) technology promises storage densities of $10$ terabits per square inch. however, when tracks are squeezed together, a bit stored in the two-dimensional (td) grid suffers inter-symbol interference (isi) from adjacent bits in the same track, and inter-track interference (iti) from nearby bits in the adjacent tracks. a bit is highly likely to be read incorrectly if it is isolated in the middle of a $3 \times 3$ square, surrounded by its complements horizontally and vertically. we improve the reliability of tdmr systems by designing two-dimensional constrained codes that prevent these square isolation patterns. we exploit the way td read heads operate to design our codes, and we focus on td read heads that collect signals from three adjacent tracks. we represent the two-dimensional square isolation constraint as a one-dimensional constraint on an alphabet of eight non-binary symbols. we use this new representation to construct a non-binary lexicographically-ordered constrained code where one third of the information bits are unconstrained. our td constrained codes are capacity-achieving, and the data protection is achieved with redundancy less than $3\%$ and at modest complexity.
|
arxiv:2005.11412
|
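the square-isolation pattern forbidden by the tdmr codes above can be sketched as a simple check over a binary grid. this is an illustrative sketch only, not the paper's actual constraint encoder; the function name and the choice of testing the four horizontal/vertical neighbours within the 3x3 square are our assumptions:

```python
def has_square_isolation(grid):
    """Return True if some interior bit's horizontal and vertical
    neighbours are all its complement (the error-prone pattern)."""
    rows, cols = len(grid), len(grid[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            b = grid[r][c]
            neighbours = [grid[r - 1][c], grid[r + 1][c],
                          grid[r][c - 1], grid[r][c + 1]]
            if all(n == 1 - b for n in neighbours):
                return True
    return False

# a 0 isolated among 1s triggers the pattern; breaking one
# neighbour removes it
isolated = [[1, 1, 1],
            [1, 0, 1],
            [1, 1, 1]]
safe = [[1, 1, 1],
        [1, 0, 0],
        [1, 1, 1]]
```

a constrained encoder would admit only arrays for which such a check returns false; the paper achieves this with a one-dimensional constraint over eight-symbol columns rather than an explicit 2d scan.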
we investigate transmission optimization for intelligent reflecting surface (irs) assisted multi-antenna systems from the physical-layer security perspective. the design goal is to maximize the system secrecy rate subject to the source transmit power constraint and the unit modulus constraints imposed on phase shifts at the irs. to solve this complicated non-convex problem, we develop an efficient alternating algorithm where the solutions to the transmit covariance of the source and the phase shift matrix of the irs are achieved in closed form and semi-closed form, respectively. the convergence of the proposed algorithm is guaranteed theoretically. simulation results validate the performance advantage of the proposed optimized design.
|
arxiv:1905.10075
|
personal devices such as mobile phones can produce and store large amounts of data that can enhance machine learning models ; however, this data may contain private information specific to the data owner that prevents the release of the data. we want to reduce the correlation between user - specific private information and the data while retaining the useful information. rather than training a large model to achieve privatization from end to end, we first decouple the creation of a latent representation, and then privatize the data that allows user - specific privatization to occur in a setting with limited computation and minimal disturbance on the utility of the data. we leverage a variational autoencoder ( vae ) to create a compact latent representation of the data that remains fixed for all devices and all possible private labels. we then train a small generative filter to perturb the latent representation based on user specified preferences regarding the private and utility information. the small filter is trained via a gan - type robust optimization that can take place on a distributed device such as a phone or tablet. under special conditions of our linear filter, we disclose the connections between our generative approach and renyi differential privacy. we conduct experiments on multiple datasets including mnist, uci - adult, and celeba, and give a thorough evaluation including visualizing the geometry of the latent embeddings and estimating the empirical mutual information to show the effectiveness of our approach.
|
arxiv:2012.01467
|
this paper is part of the prelaunch status lfi papers published on jinst: http://www.iop.org/ej/journal/-page=extra.proc5/jinst . the planck lfi radiometer chain assemblies (rcas) have been calibrated in two dedicated cryogenic facilities. in this paper the facilities and the related instrumentation are described. the main satellite thermal interfaces for the single chains have to be reproduced and stability requirements have to be satisfied. the setup design, problems encountered, and the improved solutions implemented are discussed. the performance of the cryogenic setup is reported.
|
arxiv:1001.4644
|
in 1993, fishburn and graham established the following qualitative extension of the classical erd\H{o}s-szekeres theorem. if $N$ is sufficiently large with respect to $n$, then any $N \times N$ real matrix contains an $n \times n$ submatrix in which every row and every column is monotone. we prove that the smallest such $N$ is at most $2^{n^{4+o(1)}}$, greatly improving the previously best known double-exponential upper bound, and getting close to the best known lower bound $n^{n/2}$. in particular, we prove the following surprising sharp transition in the asymmetric setting. on one hand, every $8n^2 \times 2^{n^{4+o(1)}}$ matrix contains an $n \times n$ submatrix in which every row is monotone. on the other hand, there exist $n^2/6 \times 2^{2^{n^{1-o(1)}}}$ matrices containing no such submatrix.
|
arxiv:2305.07003
|
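the property the fishburn-graham abstract above asks of a submatrix, every row and every column monotone, is easy to state in code. a minimal verifier sketch (function names are our own; this checks a given matrix, it does not search for the submatrix):

```python
def monotone(seq):
    """True if seq is non-decreasing or non-increasing."""
    asc = all(a <= b for a, b in zip(seq, seq[1:]))
    desc = all(a >= b for a, b in zip(seq, seq[1:]))
    return asc or desc

def rows_cols_monotone(matrix):
    """True if every row and every column of matrix is monotone."""
    rows_ok = all(monotone(row) for row in matrix)
    cols_ok = all(monotone(col) for col in zip(*matrix))
    return rows_ok and cols_ok

# rows may mix directions: two increasing rows and one decreasing
good = [[1, 2, 3],
        [2, 3, 5],
        [9, 7, 6]]
bad = [[1, 3, 2],   # first row is not monotone
       [2, 3, 5],
       [9, 7, 6]]
```

the theorem concerns how large $N$ must be before every $N \times N$ matrix contains an $n \times n$ submatrix passing this check; exhaustively searching for one is exponential, which is why the bounds are the interesting object.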
observables of light hadron decays are analyzed in a model of chiral lagrangian which includes resonance fields of vector mesons. in particular, transition form factors are investigated for dalitz decays of $V \to P l^+ l^-$ and $P \to \gamma l^+ l^-$ $(V = 1^-, P = 0^-)$. moreover, the differential decay width of $P \to \pi^+ \pi^- \gamma$ and the partial widths of $P \to 2\gamma$, $V \to P\gamma$, $\eta^\prime \to V\gamma$, $\phi(1020) \to \omega(782)\pi^0$ and $V \to 3P$ are also calculated. in this study, we consider a model which contains octet and singlet fields as representations of $SU(3)$. as an extension of chiral perturbation theory, we include 1-loop ordered interaction terms. for both pseudoscalar and vector mesons, we evaluate mixing matrices in which isospin/$SU(3)$ breaking is taken into account. furthermore, intrinsic parity violating interactions are considered with singlet fields. for parameter estimation, we carry out $\chi^2$ fittings in which a spectral function of $\tau$ decays, vector meson masses, decay widths of $V \to P\gamma$ and transition form factors of $V \to P l^+ l^-$ are utilized as input data. using the estimated parameter region in the model, we give predictions for decay widths and transition form factors of intrinsic parity violating decays. as further model predictions, we calculate the transition form factors of $\phi(1020) \to \pi^0 l^+ l^-$ and $\eta^\prime \to \gamma l^+ l^-$ in the vicinity of resonance regions, taking account of the contributions from intermediate $\rho(770)$ and $\omega(782)$.
|
arxiv:1609.09235
|
autonomous driving is expected to provide a range of far - reaching economic, environmental and safety benefits. in this study, we propose a fog computing based framework to assist autonomous driving. our framework relies on overhead views from cameras and data streams from vehicle sensors to create a network of distributed digital twins, called an edge twin, on fog machines. the edge twin will be continuously updated with the locations of both autonomous and human - piloted vehicles on the road segments. the vehicle locations will be harvested from overhead cameras as well as location feeds from the vehicles themselves. although the edge twin can make fair road space allocations from a global viewpoint, there is a communication cost ( delay ) in reaching it from the cameras and vehicular sensors. to address this, we introduce a machine learning forecaster as a part of the edge twin which is responsible for predicting the future location of vehicles. lastly, we introduce a box algorithm that will use the forecasted values to create a hazard map for the road segment which would be used by the framework to suggest safe manoeuvres for the autonomous vehicles such as lane changes and accelerations. we present the complete fog computing framework for autonomous driving assist and evaluate key portions of the proposed framework using simulations based on a real - world dataset of vehicle position traces on a highway
|
arxiv:1907.09454
|
we have investigated the relationship between the kinematics and mass of young (< 3x10^8 years) white dwarfs using proper motions. our sample is taken from the colour selected catalogues of sdss (eisenstein et al. 2006) and the palomar-green survey (liebert, bergeron & holberg 2005), both of which have spectroscopic temperature and gravity determinations. we find that the dispersion decreases with increasing white dwarf mass. this can be explained as a result of less scattering by objects in the galactic disk during the shorter lifetime of their more massive progenitors. a direct result of this is that white dwarfs with high mass have a reduced scale height, and hence their local density is enhanced over their less massive counterparts. in addition, we have investigated whether the kinematics of the highest mass white dwarfs (> 0.95 msun) are consistent with the expected relative contributions of single star evolution and mergers. we find that the kinematics are consistent with the majority of high-mass white dwarfs being formed through single star evolution.
|
arxiv:1206.1056
|
i provide a systematic construction of points, defined over finite radical extensions, on any legendre curve over any field of characteristic not equal to two. this includes as a special case douglas ulmer's construction of rational points over a rational function field in characteristic $p > 0$. in particular, i show that if $n \geq 4$ is any even integer not divisible by the characteristic of the field, then any elliptic curve $e$ over this field has at least $2n$ rational points over a finite solvable field extension. under additional hypotheses, when the ground field is a number field, i show that these are of infinite order. i also show that ulmer's points lift to characteristic zero, and in particular to the canonical lifting.
|
arxiv:1801.06245
|
we provide one theorem of spectral equivalence of koopman operators of an original dynamical system and its reconstructed one through the delay - embedding technique. the theorem is proved for measure - preserving maps ( e. g. dynamics on compact attractors ) and provides a mathematical foundation of computing spectral properties of the koopman operators by a combination of extended dynamic mode decomposition and delay - embedding.
|
arxiv:1706.01006
|
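the delay-embedding step underlying the koopman result above, reconstructing state vectors by stacking consecutive delays of a scalar observable (the hankel-style matrix fed to extended dynamic mode decomposition), can be sketched in a few lines. the function name and example series are our own illustrative choices:

```python
def delay_embed(series, dim):
    """Stack `dim` consecutive delays of a scalar time series into
    state vectors (rows of a Hankel-style trajectory matrix)."""
    n = len(series) - dim + 1
    return [series[i:i + dim] for i in range(n)]

# each row is a delay vector (x_t, x_{t+1}, ..., x_{t+dim-1})
embedded = delay_embed([1, 2, 3, 4, 5], 3)
print(embedded)  # -> [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
```

in a dmd pipeline, consecutive rows of this matrix serve as snapshot pairs on which the finite-dimensional koopman approximation is regressed; the theorem concerns when the spectrum computed this way matches that of the original system.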
we investigate the influence of axion dark matter as a background on the spin-independent axion forces between nucleons. notably, we find that the potential for axion forces scales from $1/r^3$ in a vacuum-only context to $1/r$ when the background effect is considered. also, the magnitude of the axion force is substantially amplified in proportion to the number density of axion dm particles. these enhancements significantly improve the constraints on the axion decay constant by several orders of magnitude, across a broad range of axion masses, based on fifth-force experiments such as the casimir-less and torsion balance tests. this suggests that such experiments are more effective than previously understood in detecting axions.
|
arxiv:2504.02702
|
this paper presents a new radix-2^k multi-path fft architecture, named msc fft, which is based on a single-path radix-2 serial commutator (sc) fft architecture. the proposed multi-path architecture has a very high hardware utilization that results in a small chip area, while providing high throughput. in addition, the adoption of radix-2^k fft algorithms allows for simplifying the rotators even further. this is achieved by optimizing the structure of the processing element (pe). the implemented architecture is a 128-point 4-parallel multi-path sc fft using a 90 nm process. its area and power consumption at 250 mhz are only 0.167 mm^2 and 14.81 mw, respectively. compared with existing works, the proposed design significantly reduces the chip area and power consumption, while providing high throughput.
|
arxiv:2007.14736
|
research on formula production in spreadsheets has established the practice as high risk, yet it remains unrecognised as such by industry. there are numerous software applications designed to audit formulae and find errors. however, these are all post-creation, designed to catch errors before the spreadsheet is deployed. a general conclusion of the eusprig 2003 conference was that the time has come to attempt novel solutions based on an understanding of human factors. hence in this paper we examine one such possibility, namely a novel example-driven modelling approach. we discuss a controlled experiment that compares example-driven modelling against traditional approaches over several progressively more difficult tests. the results are very interesting and certainly point to the value of further investigation of the example-driven potential. lastly, we propose a method for statistically analysing the problem of overconfidence in spreadsheet modellers.
|
arxiv:0803.1754
|
explainability of reinforcement learning ( rl ) policies remains a challenging research problem, particularly when considering rl in a safety context. understanding the decisions and intentions of an rl policy offer avenues to incorporate safety into the policy by limiting undesirable actions. we propose the use of a boolean decision rules model to create a post - hoc rule - based summary of an agent ' s policy. we evaluate our proposed approach using a dqn agent trained on an implementation of a lava gridworld and show that, given a hand - crafted feature representation of this gridworld, simple generalised rules can be created, giving a post - hoc explainable summary of the agent ' s policy. we discuss possible avenues to introduce safety into a rl agent ' s policy by using rules generated by this rule - based model as constraints imposed on the agent ' s policy, as well as discuss how creating simple rule summaries of an agent ' s policy may help in the debugging process of rl agents.
|
arxiv:2207.08651
|
it has been an open question as to whether the modular structural operational semantics framework can express the dynamic semantics of call / cc. this paper shows that it can, and furthermore, demonstrates that it can express the more general delimited control operators control and shift.
|
arxiv:1606.06381
|
we consider a logistic differential equation subject to impulsive delayed harvesting, where the deduction information is a function of the population size at the time of one of the previous impulses. a close connection to the dynamics of high - order difference equations is used to conclude that while the inclusion of a delay in the impulsive condition does not impact the optimality of the yield, sustainability may be highly affected and is generally delay - dependent. maximal and other types of yields are explored, and sharp stability tests are obtained for the model, as well as explicit sufficient conditions. it is also shown that persistence of the solution is not guaranteed for all positive initial conditions, and extinction in finite time is possible, as is illustrated in the simulations.
|
arxiv:2210.05878
|
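the impulsive delayed harvesting model above can be illustrated with a short simulation: logistic growth between impulses, and at each impulse a deduction proportional to the population recorded at an earlier impulse. this is a sketch under our own assumptions (euler integration, unit spacing between impulses, and all parameter values chosen purely for illustration), not the paper's analysis:

```python
def simulate(r=1.0, K=1.0, x0=0.5, harvest=0.3, delay=1,
             n_impulses=40, steps_per_period=100):
    """Euler simulation of dx/dt = r x (1 - x/K) with impulsive
    delayed harvesting: at each impulse, harvest * (population at
    the impulse `delay` impulses earlier) is removed, clamped at 0.
    Returns the population recorded at each impulse time."""
    dt = 1.0 / steps_per_period
    x = x0
    history = [x0]  # population at past impulse times
    for _ in range(n_impulses):
        for _ in range(steps_per_period):
            x += dt * r * x * (1 - x / K)   # logistic growth step
        x = max(x - harvest * history[-delay], 0.0)  # delayed impulse
        history.append(x)
    return history
```

with these illustrative values the impulse-time population settles to a positive equilibrium, consistent with the abstract's point that persistence is parameter-dependent: raising `harvest` or the delay can instead drive the clamped population to extinction in finite time.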
we compute scattering amplitudes involving one massive scalar and two, three, or four gravitons. we show that when the conformal dimension of the massive scalar is set to zero, the resulting celestial correlators depend {\it only} on the coordinates of the gravitons. such correlators of gravitons are well-defined and do not suffer from divergences associated with the mellin transform of usual graviton amplitudes. moreover, they are non-distributional and take the form of standard cft correlators. we show that they are consistent with the usual opes but the statement of the soft theorem is modified.
|
arxiv:2310.00520
|
merging binary systems consisting of two collapsed objects are among the most promising sources of high frequency gravitational wave (gw) signals for ground based interferometers. double neutron star or black hole/neutron star mergers are also believed to give rise to short hard bursts (shbs), a subclass of gamma ray bursts. shbs might thus provide a powerful way to infer the merger rate of two-collapsed-object binaries. under the hypothesis that most shbs originate from double neutron star or black hole/neutron star mergers, we outline here a method to estimate the incidence of merging events from dynamically formed binaries in globular clusters and infer the corresponding gw event rate that can be detected with advanced ligo/virgo. in particular, a sizeable fraction of detectable gw events is expected to be coincident with shbs. the beaming and redshift distribution of shbs are reassessed and their luminosity function constrained by using the results from recent shbs observations. we confirm that a substantial fraction of shbs goes off at low redshifts, where the merging of systems formed in globular clusters through dynamical interactions is expected.
|
arxiv:0811.0684
|
the structure of the omega - pi state with isospin i = 1 and spin - parity j ^ p = 3 / 2 ^ - is dynamically studied in both the chiral su ( 3 ) quark model and the extended chiral su ( 3 ) quark model by solving a resonating group method ( rgm ) equation. the model parameters are taken from our previous work, which gave a satisfactory description of the energies of the baryon ground states, the binding energy of the deuteron, the nucleon - nucleon ( nn ) scattering phase shifts, and the hyperon - nucleon ( yn ) cross sections. the calculated results show that the omega - pi state has an attractive interaction, and in the extended chiral su ( 3 ) quark model such an attraction can make for an omega - pi quasi - bound state with a binding energy of several mev.
|
arxiv:nucl-th/0612007
|
in face detection, low - resolution faces, such as numerous small faces of a human group in a crowded scene, are common in dense face prediction tasks. they usually contain limited visual cues and make small faces less distinguishable from other small objects, which poses a great challenge to accurate face detection. although deep convolutional neural networks have significantly promoted the research on face detection recently, current deep face detectors rarely take into account low - resolution faces and are still vulnerable to real - world scenarios where massive amounts of low - resolution faces exist. consequently, they usually achieve degraded performance for low - resolution face detection. in order to alleviate this problem, we develop an efficient detector termed efficientsrface by introducing a feature - level super - resolution reconstruction network for enhancing the feature representation capability of the model. this module plays an auxiliary role in the training process, and can be removed during inference without increasing the inference time. extensive experiments on public benchmarking datasets, such as fddb and wider face, show that the embedded image super - resolution module can significantly improve the detection accuracy at the cost of a small amount of additional parameters and computational overhead, while helping our model achieve competitive performance compared with state - of - the - art methods.
|
arxiv:2306.02277
|
the stellar initial mass function ( imf ) is a key parameter for understanding the star formation process and the integrated properties of stellar populations in remote galaxies. we present a spectroscopic study of young massive clusters ( ymcs ) in the starburst galaxies ngc 4038 / 39. the integrated spectra of seven ymcs obtained with gmos - s attached to the 8. 2 - m gemini south telescope reveal the spectral features associated with stellar ages and the underlying imfs. we constrain the ages of the ymcs using the absorption lines and strong emission bands from wolf - rayet stars. the internal reddening is also estimated from the strength of the na i d absorption lines. based on these constraints, the observed spectra are matched with the synthetic spectra generated from a simple stellar population model. several parameters of the clusters including age, reddening, cluster mass, and the underlying imf are derived from the spectral matching. the ages of the ymcs range from 2. 5 to 6. 5 myr, and these clusters contain stellar masses ranging from 1. 6 x 10 ^ 5 m _ sun to 7. 9 x 10 ^ 7 m _ sun. the underlying imfs appear to differ from the universal form of the salpeter / kroupa imf. interestingly, massive clusters tend to have bottom - heavy imfs, although the masses of some clusters are overestimated due to the crowding effect. based on this, our results suggest that the universal form of the imf is not always valid when analyzing integrated light from unresolved stellar systems. however, further study with a larger sample size is required to reach a definite conclusion.
|
arxiv:2411.00521
|
we study the problem of synthesizing a controller that maximizes the entropy of a partially observable markov decision process ( pomdp ) subject to a constraint on the expected total reward. such a controller minimizes the predictability of an agent ' s trajectories to an outside observer while guaranteeing the completion of a task expressed by a reward function. we first prove that an agent with partial observations can achieve an entropy at most as large as that of an agent with perfect observations. then, focusing on finite - state controllers ( fscs ) with deterministic memory transitions, we show that the maximum entropy of a pomdp is lower bounded by the maximum entropy of the parametric markov chain ( pmc ) induced by such fscs. this relationship allows us to recast the entropy maximization problem as a so - called parameter synthesis problem for the induced pmc. we then present an algorithm to synthesize an fsc that locally maximizes the entropy of a pomdp over fscs with the same number of memory states. in numerical examples, we illustrate the relationship between the maximum entropy, the number of memory states in the fsc, and the expected reward.
|
arxiv:2105.07490
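As background for the quantity being maximized, here is a sketch of the entropy rate of a fully observed markov chain, the benchmark that, per the paper, upper-bounds what a partially observing agent can achieve. The function name and example chains are our own, not the paper's FSC synthesis algorithm.

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate in bits per step of an ergodic Markov chain with
    transition matrix P: H = sum_s pi_s * H(P[s, :]), where pi is the
    stationary distribution (left eigenvector of P for eigenvalue 1)."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()                       # normalize (also fixes the sign)
    return -sum(
        pi[s] * sum(p * np.log2(p) for p in P[s] if p > 1e-12)
        for s in range(P.shape[0])
    )
```

A uniform two-state chain attains one bit per step, while a deterministic (perfectly predictable) chain has entropy rate zero, the two extremes between which the synthesized controller trades off.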
|
using microwave detected, microwave - optical double resonance, we have measured the homogeneous linewidths of individual rovibrational transitions in the \ ~ a state of nh3, nh2d, nhd2, and nd3. we have used this excited state spectroscopic data to characterize the height of the dissociation barrier and the mechanisms by which the molecule uses its excess vibrational and rotational energies to help overcome this barrier. to interpret the observed vibronic widths, a one dimensional, local mode potential has been developed along an n - h ( d ) bond. these calculations suggest the barrier height is roughly 2100 cm - 1, approximately 1000 cm - 1 below the ab initio prediction. the observed vibronic dependence of levels containing two or more quanta in nu2 is explained by a fermi resonance between nu2 and the n - h ( d ) stretch. this interaction also explains the observed trends due to isotopic substitution. the rotational enhancement of the predissociation rates in the nh3 2 ( 1 ) level is dominated by coriolis coupling while for the same level in nd3, centrifugal effects dominate.
|
arxiv:chem-ph/9412001
|
spherically symmetric, null dust clouds, like their time - like counterparts, may collapse classically into black holes or naked singularities depending on their initial conditions. we consider the hamiltonian dynamics of the collapse of an arbitrary distribution of null dust, expressed in terms of the physical radius, $ r $, the null coordinates, $ v $ for a collapsing cloud or $ u $ for an expanding cloud, the mass function, $ m $, of the null matter, and their conjugate momenta. this description is obtained from the adm description by a kuchař - type canonical transformation. the constraints are linear in the canonical momenta and dirac ' s constraint quantization program is implemented. explicit solutions to the constraints are obtained for both expanding and contracting null dust clouds with arbitrary mass functions.
|
arxiv:gr-qc/0112024
|
the aim of this paper is to study the $ l ^ p $ - boundedness property of the pseudo differential operator associated with a symbol, on rank one riemannian symmetric spaces of noncompact type, where the symbol satisfies hörmander - type conditions near infinity.
|
arxiv:2204.12327
|
we study the uniqueness of solutions to a class of heat equations with positive density posed on infinite weighted graphs. we separately consider the case when the density is bounded from below by a positive constant and the case of possibly vanishing density, showing that these two scenarios lead to two different classes of uniqueness.
|
arxiv:2409.03617
|
the goal of this paper is to prove a comparison principle for viscosity solutions of semilinear hamilton - jacobi equations in the space of probability measures. the method involves leveraging differentiability properties of the $ 2 $ - wasserstein distance in the doubling of variables argument, which is done by introducing a further entropy penalization that ensures that the relevant optima are achieved at positive, lipschitz continuous densities with finite fisher information. this allows us to prove uniqueness and stability of viscosity solutions in the class of bounded lipschitz continuous ( with respect to the $ 1 $ - wasserstein distance ) functions. the result does not appeal to a mean field control formulation of the equation, and, as such, applies to equations with nonconvex hamiltonians and measure - dependent volatility. for convex hamiltonians that derive from a potential, we prove that the value function associated with a suitable mean - field optimal control problem with nondegenerate idiosyncratic noise is indeed the unique viscosity solution.
|
arxiv:2308.15174
|
we measure the low - j co line ratio r21 = co ( 2 - 1 ) / co ( 1 - 0 ), r32 = co ( 3 - 2 ) / co ( 2 - 1 ), and r31 = co ( 3 - 2 ) / co ( 1 - 0 ) using whole - disk co maps of nearby galaxies. we draw co ( 2 - 1 ) from phangs - - alma, heracles, and follow - up iram surveys ; co ( 1 - 0 ) from coming and the nobeyama co atlas of nearby spiral galaxies ; and co ( 3 - 2 ) from the jcmt ngls and apex lasma mapping. altogether this yields 76, 47, and 29 maps of r21, r32, and r31 at 20 " \ sim 1. 3 kpc resolution, covering 43, 34, and 20 galaxies. disk galaxies with high stellar mass, log10 m _ * [ msun ] = 10. 25 - 11 and star formation rate, sfr = 1 - 5 msun / yr, dominate the sample. we find galaxy - integrated mean values and 16 % - 84 % range of r21 = 0. 65 ( 0. 50 - 0. 83 ), r32 = 0. 50 ( 0. 23 - 0. 59 ), and r31 = 0. 31 ( 0. 20 - 0. 42 ). we identify weak trends relating galaxy - integrated line ratios to properties expected to correlate with excitation, including sfr / m _ * and sfr / l _ co. within galaxies, we measure central enhancements with respect to the galaxy - averaged value of \ sim 0. 18 ^ { + 0. 09 } _ { - 0. 14 } dex for r21, 0. 27 ^ { + 0. 13 } _ { - 0. 15 } dex for r31, and 0. 08 ^ { + 0. 11 } _ { - 0. 09 } dex for r32. all three line ratios anti - correlate with galactocentric radius and positively correlate with the local star formation rate surface density and specific star formation rate, and we provide approximate fits to these relations. the observed ratios can be reasonably reproduced by models with low temperature, moderate opacity, and moderate densities, in good agreement with expectations for the cold ism. because the line ratios are expected to anti - correlate with the co ( 1 - 0 ) - to - h _ 2
|
arxiv:2109.11583
|
this paper presents a kernel - based discriminative learning framework on probability measures. rather than relying on large collections of vectorial training examples, our framework learns using a collection of probability distributions that have been constructed to meaningfully represent training data. by representing these probability distributions as mean embeddings in the reproducing kernel hilbert space ( rkhs ), we are able to apply many standard kernel - based learning techniques in a straightforward fashion. to accomplish this, we construct a generalization of the support vector machine ( svm ) called a support measure machine ( smm ). our analysis of smms provides several insights into their relationship to traditional svms. based on such insights, we propose a flexible svm ( flex - svm ) that places different kernel functions on each training example. experimental results on both synthetic and real - world data demonstrate the effectiveness of our proposed framework.
|
arxiv:1202.6504
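A minimal sketch of the kernel mean embedding idea underlying the SMM: the level-2 kernel between two sample-represented distributions is the mean of pairwise base-kernel evaluations. The function names and the RBF base kernel are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian RBF base kernel between two points."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def mean_embedding_kernel(X, Y, gamma=1.0):
    """<mu_X, mu_Y> in the RKHS for the empirical measures on samples
    X and Y: the average of k(x_i, y_j) over all cross pairs. An SMM
    plugs this Gram matrix of distributions into a standard SVM solver."""
    return float(np.mean([[rbf(x, y, gamma) for y in Y] for x in X]))
```

Because the embedding inner product is itself a positive definite kernel on distributions, standard kernel machinery (Gram matrices, Cauchy-Schwarz bounds) applies unchanged at the level of measures.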
|
we prove conditional near - quadratic running time lower bounds for approximate bichromatic closest pair with euclidean, manhattan, hamming, or edit distance. specifically, unless the strong exponential time hypothesis ( seth ) is false, for every $ \ delta > 0 $ there exists a constant $ \ epsilon > 0 $ such that computing a $ ( 1 + \ epsilon ) $ - approximation to the bichromatic closest pair requires $ n ^ { 2 - \ delta } $ time. in particular, this implies a near - linear query time for approximate nearest neighbor search with polynomial preprocessing time. our reduction uses the distributed pcp framework of [ arw ' 17 ], but obtains improved efficiency using algebraic geometry ( ag ) codes. efficient pcps from ag codes have been constructed in other settings before [ bkkms ' 16, bcgrs ' 17 ], but our construction is the first to yield new hardness results.
|
arxiv:1803.00904
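The problem whose approximation hardness is at stake can be stated as the naive quadratic brute force below; under SETH the paper shows that even a (1 + epsilon)-approximation cannot do substantially better in the worst case. This baseline is our own illustration, not the paper's reduction.

```python
def bichromatic_closest_pair(A, B):
    """Exact O(|A|*|B|) scan for the closest red/blue pair under
    Euclidean distance; points are tuples of equal dimension."""
    best, pair = float("inf"), None
    for a in A:
        for b in B:
            d = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
            if d < best:
                best, pair = d, (a, b)
    return best, pair
```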
|
a naive introduction of a dependency of the mass of a black hole on the schwarzschild time coordinate results in singular behavior of curvature invariants at the horizon, violating expectations from complementarity. if instead a temporal dependence is introduced in terms of a coordinate akin to the river time representation, the ricci scalar is nowhere singular away from the origin. it is found that for a shrinking mass scale due to evaporation, the null radial geodesics that generate the horizon are slightly displaced from the coordinate singularity. in addition, a changing horizon scale significantly alters the form of the coordinate singularity in diagonal ( orthogonal ) metric coordinates representing the space - time. a penrose diagram describing the growth and evaporation of an example black hole is constructed to examine the evolution of the coordinate singularity.
|
arxiv:gr-qc/0609019
|
a new approach to the steering problem for quantum systems relying on nelson ' s stochastic mechanics and on the theory of schroedinger bridges is presented. the method is illustrated by working out a simple gaussian example.
|
arxiv:quant-ph/0112170
|
the initial emergence of the primary root from a germinating seed is a pivotal phase that influences a plant ' s survival. abiotic factors such as ph, nutrient availability, and soil composition significantly affect root morphology and architecture. of particular interest is the impact of nutrient flow on thigmomorphogenesis, a response to mechanical stimulation in early root growth, which remains largely unexplored. this study explores the intricate factors influencing early root system development, with a focus on the cooperative correlation between nutrient uptake and its flow dynamics. using a physiologically relevant, portable, and cost - effective microfluidic system for controlled fluid environments offering hydraulic conductivity comparable to that of soil, this study analyzes the interplay between nutrient flow and root growth post - germination. emphasizing the relationship between root growth and nitrogen uptake, the findings reveal that nutrient flow significantly influences early root morphology, leading to increased length and improved nutrient uptake, varying with the flow rate. the experimental findings are supported by stress - related fluid flow - root interaction simulations and quantitative determination of nitrogen uptake using the total kjeldahl nitrogen ( tkn ) method. the microfluidic approach offers novel insights into plant root dynamics under controlled flow conditions, filling a critical research gap. by providing a high - resolution platform, this study contributes to the understanding of how fluid - flow assisted nutrient uptake and pressure affect root - cell behavior, which, in turn, induces mechanical stress leading to thigmomorphogenesis. the findings hold implications for comprehending root responses to changing environmental conditions, paving the way for innovative agricultural and environmental management strategies.
|
arxiv:2403.04806
|
graph disaggregation is a technique used to address the high cost of computation for power law graphs on parallel processors. the few high - degree vertices are broken into multiple small - degree vertices, in order to allow for more efficient computation in parallel. in particular, we consider computations involving the graph laplacian, which has significant applications, including diffusion mapping and graph partitioning, among others. we prove results regarding the spectral approximation of the laplacian of the original graph by the laplacian of the disaggregated graph. in addition, we construct an alternate disaggregation operator whose eigenvalues interlace those of the original laplacian. using this alternate operator, we construct a uniform preconditioner for the original graph laplacian.
|
arxiv:1605.00698
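A toy version of the degree-reduction step described above: split the hub of a star graph into several copies joined in a path and distribute its edges round-robin. The paper's disaggregation operators carry spectral-approximation and interlacing guarantees that this sketch does not attempt; all names and the path-joining choice are our own.

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = D - A from a symmetric adjacency matrix."""
    return np.diag(A.sum(axis=1)) - A

def disaggregate_star(n_leaves, parts):
    """Star with one hub and n_leaves leaves, hub split into `parts`
    copies connected in a path; leaf edges assigned round-robin."""
    n = parts + n_leaves
    A = np.zeros((n, n))
    for i in range(parts - 1):                 # path among hub copies
        A[i, i + 1] = A[i + 1, i] = 1.0
    for j in range(n_leaves):                  # leaves spread over copies
        c = j % parts
        A[c, parts + j] = A[parts + j, c] = 1.0
    return laplacian(A)
```

Splitting a degree-6 hub into three copies caps the maximum degree at 4 while keeping the graph connected, which is the point of disaggregation for parallel Laplacian computations.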
|
we prove a law of large numbers for empirical approximations of the spectrum of a kernel integral operator by the spectrum of random matrices based on a sample drawn from a markov chain, which complements the results by v. koltchinskii and e. giné for i. i. d. sequences. in the special case of mercer ' s kernels and geometrically ergodic chains, we also provide exponential inequalities, quantifying the speed of convergence.
|
arxiv:1311.7566
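The empirical approximation in question can be sketched as follows: the eigenvalues of the normalized Gram matrix (1/n)K on a sample from a geometrically ergodic chain (here a toy AR(1) process) estimate the integral operator's spectrum. Function names, the chain, and the kernel are illustrative assumptions.

```python
import numpy as np

def empirical_spectrum(sample, kernel, top=5):
    """Top eigenvalues of (1/n) * K with K_ij = kernel(x_i, x_j);
    these converge to the kernel integral operator's eigenvalues
    as the sample grows."""
    n = len(sample)
    K = np.array([[kernel(x, y) for y in sample] for x in sample]) / n
    return np.sort(np.linalg.eigvalsh(K))[::-1][:top]

rng = np.random.default_rng(0)
chain = [0.0]
for _ in range(199):                   # geometrically ergodic AR(1) chain
    chain.append(0.5 * chain[-1] + rng.normal())
ev = empirical_spectrum(chain, lambda x, y: np.exp(-(x - y) ** 2))
```

For a Mercer kernel the Gram matrix is positive semidefinite with trace 1 after normalization, so the empirical eigenvalues are nonnegative and sum to one, mirroring the operator spectrum.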
|
we present an exact solution for effective polaron - polaron interactions between heavy impurities, mediated by a sea of non - interacting light fermions in the quantum hall regime with highly degenerate landau levels. for weak attraction between impurities and fermions, where only the manifold of lowest landau levels is relevant, we obtain an analytical expression for the mediated polaron - polaron interactions. remarkably, polaron interactions are exactly zero when fermions in the lowest landau levels outnumber heavy impurities. for strong attraction, different manifolds of higher landau levels come into play and we derive a set of equations that can be used to numerically solve the mediated polaron interaction potential. we find that the potential vanishes when the distance r between impurities is larger than the magnetic length, but strongly diverges at short range following a coulomb form, - 1 / r. our exact results of polaron - polaron interactions might be examined in cold - atom setups, where a system of fermi polarons in the quantum hall regime is realized with a synthetic gauge field or under fast rotation. our predictions could also be useful to understand the effective interaction between exciton - polarons in electron - doped semiconductors under a strong magnetic field.
|
arxiv:2408.15007
|
stellar magnetic field measurements obtained from spectropolarimetry offer key data for activity and dynamo studies, and we present the results of a major high - resolution spectropolarimetric bcool project magnetic snapshot survey of 170 solar - type stars from observations with the telescope bernard lyot and the canada - france - hawaii telescope. for each target star a high signal - to - noise circularly polarised stokes v profile has been obtained using least - squares deconvolution, and used to detect surface magnetic fields and measure the corresponding mean surface longitudinal magnetic field ( $ b _ { l } $ ). chromospheric activity indicators were also measured. surface magnetic fields were detected for 67 stars, with 21 of these stars classified as mature solar - type stars, a result that increases by a factor of four the number of mature solar - type stars on which magnetic fields have been observed. in addition, a magnetic field was detected for 3 out of 18 of the subgiant stars surveyed. for the population of k - dwarfs the mean value of $ b _ { l } $ ( $ | b _ { l } | _ { mean } $ ) was also found to be higher ( 5. 7 g ) than $ | b _ { l } | _ { mean } $ measured for the g - dwarfs ( 3. 2 g ) and the f - dwarfs ( 3. 3 g ). for the sample as a whole $ | b _ { l } | _ { mean } $ increases with rotation rate and decreases with age, and the upper envelope for $ | b _ { l } | $ correlates well with the observed chromospheric emission. stars with a chromospheric s - index greater than about 0. 2 show a high magnetic field detection rate and so offer optimal targets for future studies. this survey constitutes the most extensive spectropolarimetric survey of cool stars undertaken to date, and suggests that it is feasible to pursue magnetic mapping of a wide range of moderately active solar - type stars to improve understanding of their surface fields and dynamos.
|
arxiv:1311.3374
|
continuing the first part of the paper, we consider scalar decentralized average - cost infinite - horizon lqg problems with two controllers. this paper focuses on the slow dynamics case when the eigenvalue of the system is small and proves that the single - controller optimal strategies ( linear strategies ) are constant - ratio optimal among all distributed control strategies.
|
arxiv:1308.5042
|
the majority of existing few - shot learning methods describe image relations with binary labels. however, such binary relations are insufficient to teach the network complicated real - world relations, due to the lack of decision smoothness. furthermore, current few - shot learning models capture only the similarity via relation labels, but they are not exposed to class concepts associated with objects, which is likely detrimental to the classification performance due to underutilization of the available class labels. by analogy, children learn the concept of tiger from a few actual examples as well as from comparisons of the tiger to other animals. thus, we hypothesize that in fact both similarity and class concept learning must be occurring simultaneously. with these observations at hand, we study the fundamental problem of simplistic class modeling in current few - shot learning methods. we rethink the relations between class concepts, and propose a novel absolute - relative learning paradigm to fully take advantage of label information to refine the image representations and correct the relation understanding in both supervised and unsupervised scenarios. our proposed paradigm improves the performance of several state - of - the - art models on publicly available datasets.
|
arxiv:2001.03919
|
a model for the quantum zeno effect based upon an effective schrödinger equation derived from the path - integral approach is developed and applied to a two - level system simultaneously stimulated by a resonant perturbation. it is shown that inhibition of stimulated transitions between the two levels appears as a consequence of the influence of the meter whenever measurements of energy, either continuous or pulsed, are performed at the quantum level of sensitivity. the generality of this approach allows one to qualitatively understand the inhibition of spontaneous transitions such as the decay of unstable particles, originally presented as a paradox of quantum measurement theory.
|
arxiv:quant-ph/9709003
|
we report the realization, using nuclear magnetic resonance techniques, of the first quantum computer that reliably executes an algorithm in the presence of strong decoherence. the computer is based on a quantum error avoidance code that protects against a class of multiple - qubit errors. the code stores two decoherence - free logical qubits in four noisy physical qubits. the computer successfully executes grover ' s search algorithm in the presence of arbitrarily strong engineered decoherence. a control computer with no decoherence protection consistently fails under the same conditions.
|
arxiv:quant-ph/0302175
|
solving the wave equation on an infinite domain has been an ongoing challenge in scientific computing. conventional approaches to this problem only generate numerical solutions on a small subset of the infinite domain. in this paper, we present a method for solving the wave equation on the entire infinite domain using only finite computation time and memory. our method is based on the conformal invariance of the scalar wave equation under the kelvin transformation in minkowski spacetime. as a result of the conformal invariance, any wave problem with compact initial data contained in a causality cone is equivalent to a wave problem on a bounded set in minkowski spacetime. we use this fact to perform wave simulations in infinite spacetime using a finite discretization of the bounded spacetime with no additional loss of accuracy introduced by the kelvin transformation.
|
arxiv:2305.08033
|
we introduce a general range of science drivers for using the virtual observatory ( vo ) and identify some common aspects to these as well as the advantages of vo data access. we then illustrate the use of existing vo tools to tackle multi wavelength science problems. we demonstrate the ease of multi mission data access using the voexplorer resource browser, as provided by astrogrid ( http : / / www. astrogrid. org ) and show how to pass the various results into any vo enabled tool such as topcat for catalogue correlation. voexplorer offers a powerful data - centric visualisation for browsing and filtering the entire vo registry using an itunes type interface. this allows the user to bookmark their own personalised lists of resources and to run tasks on the selected resources as desired. we introduce an example of how more advanced querying can be performed to access existing x - ray cluster of galaxies catalogues and then select extended only x - ray sources as candidate clusters of galaxies in the 2xmmi catalogue. finally we introduce scripted access to vo resources using python with astrogrid and demonstrate how the user can pass on the results of such a search and correlate with e. g. optical datasets such as sloan. hence we illustrate the power of enabling large scale data mining of multi wavelength resources in an easily reproducible way using the vo.
|
arxiv:0906.1535
|
quantile regression, a robust method for estimating conditional quantiles, has advanced significantly in fields such as econometrics, statistics, and machine learning. in high - dimensional settings, where the number of covariates exceeds sample size, penalized methods like lasso have been developed to address sparsity challenges. bayesian methods, initially connected to quantile regression via the asymmetric laplace likelihood, have also evolved, though issues with posterior variance have led to new approaches, including pseudo / score likelihoods. this paper presents a novel probabilistic machine learning approach for high - dimensional quantile prediction. it uses a pseudo - bayesian framework with a scaled student - t prior and langevin monte carlo for efficient computation. the method demonstrates strong theoretical guarantees, through pac - bayes bounds, that establish non - asymptotic oracle inequalities, showing minimax - optimal prediction error and adaptability to unknown sparsity. its effectiveness is validated through simulations and real - world data, where it performs competitively against established frequentist and bayesian techniques.
|
arxiv:2409.01687
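The frequentist core that the paper's pseudo-Bayesian method builds on, pinball-loss quantile regression, can be sketched by subgradient descent. The scaled Student-t prior and the Langevin Monte Carlo step are omitted here, and the function name, step size, and iteration count are our own illustrative choices.

```python
import numpy as np

def quantile_fit(X, y, tau, lr=0.05, iters=2000):
    """Linear conditional-quantile estimate by subgradient descent on
    the pinball loss: L(u) = mean(tau * u if u >= 0 else (tau - 1) * u),
    with u = y - X @ w. At the optimum, roughly a tau-fraction of the
    residuals is nonpositive."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        u = y - X @ w
        g = -X.T @ np.where(u >= 0, tau, tau - 1.0) / len(y)  # subgradient
        w = w - lr * g
    return w
```

With an intercept-only design, tau = 0.5 recovers the sample median and tau = 0.9 the 90th percentile, which is the basic behavior any quantile predictor (Bayesian or not) must reproduce.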
|
a positive definite integral quadratic form is said to be almost ( primitively ) universal if it ( primitively ) represents all but at most finitely many positive integers. in general, almost primitive universality is a stronger property than almost universality. the two main results of this paper are : 1 ) every primitively universal form nontrivially represents zero over every ring of p - adic integers, and 2 ) every almost universal form in five or more variables is almost primitively universal.
|
arxiv:2005.11268
|
person treating the patient ) have the right to be treated with dignity. truthfulness and honesty – the concept of informed consent has increased in importance since the historical events of the doctors ' trial of the nuremberg trials, tuskegee syphilis experiment, and others. values such as these do not give answers as to how to handle a particular situation, but provide a useful framework for understanding conflicts. when moral values are in conflict, the result may be an ethical dilemma or crisis. sometimes, no good solution to a dilemma in medical ethics exists, and occasionally, the values of the medical community ( i. e., the hospital and its staff ) conflict with the values of the individual patient, family, or larger non - medical community. conflicts can also arise between health care providers, or among family members. for example, some argue that the principles of autonomy and beneficence clash when patients refuse blood transfusions, considering them life - saving ; and truth - telling was not emphasized to a large extent before the hiv era. = = history = = = = = ancient world = = = prehistoric medicine incorporated plants ( herbalism ), animal parts, and minerals. in many cases these materials were used ritually as magical substances by priests, shamans, or medicine men. well - known spiritual systems include animism ( the notion of inanimate objects having spirits ), spiritualism ( an appeal to gods or communion with ancestor spirits ) ; shamanism ( the vesting of an individual with mystic powers ) ; and divination ( magically obtaining the truth ). the field of medical anthropology examines the ways in which culture and society are organized around or impacted by issues of health, health care and related issues. the earliest known medical texts in the world were found in the ancient syrian city of ebla and date back to 2500 bce.
other early records on medicine have been discovered from ancient egyptian medicine, babylonian medicine, ayurvedic medicine ( in the indian subcontinent ), classical chinese medicine ( predecessor to the modern traditional chinese medicine ), and ancient greek medicine and roman medicine. in egypt, imhotep ( 3rd millennium bce ) is the first physician in history known by name. the oldest egyptian medical text is the kahun gynaecological papyrus from around 2000 bce, which describes gynaecological diseases. the edwin smith papyrus dating back to 1600 bce is an early work on surgery, while the ebers papyrus dating back to 1500 bce is akin to a textbook on medicine. in china, archaeological
|
https://en.wikipedia.org/wiki/Medicine
|
we study the phenomenon of an eigenvalue emerging from the essential spectrum of a schroedinger operator perturbed by a fast - oscillating compactly supported potential. we prove sufficient conditions for the existence and absence of such an eigenvalue. if it exists, we obtain the leading term of its asymptotic expansion.
|
arxiv:math-ph/0508013
|
we consider how a vertex operator algebra can be extended to an abelian intertwining algebra by a family of weak twisted modules which are { \ em simple currents } associated with semisimple weight one primary vectors. in the case that the extension is again a vertex operator algebra, the rationality of the extended algebra is discussed. these results are applied to affine kac - moody algebras in order to construct all the simple currents explicitly ( except for $ e _ 8 $ ) and to get various extensions of the vertex operator algebras associated with integrable representations.
|
arxiv:q-alg/9504008
|
we give an affirmative answer to a conjecture proposed by tevelev in the characteristic 0 case : any variety contains a schön very affine open subvariety. we also show that any fan supported on the tropicalization of a schön very affine variety produces a schön compactification. using toric schemes over a discrete valuation ring, we extend tropical compactifications to the non - constant coefficient case.
|
arxiv:0902.2009
|
in this talk we summarize previous work on mass bounds of a light neutralino in the minimal supersymmetric standard model. we show that without the gut relation between the gaugino mass parameters m _ 1 and m _ 2, the mass of the lightest neutralino is essentially unconstrained by collider bounds and precision observables. we conclude by considering also the astrophysics and cosmology of a light neutralino.
|
arxiv:1011.3450
|
in viticulture, visual inspection of the plant is a necessary task for measuring relevant variables. in many cases, these visual inspections are susceptible to automation through computer vision methods. bud detection is one such visual task, central for the measurement of important variables such as : measurement of bud sunlight exposure, autonomous pruning, bud counting, type - of - bud classification, bud geometric characterization, internode length, bud area, and bud development stage, among others. this paper presents a computer method for grapevine bud detection based on a fully convolutional networks mobilenet architecture ( fcn - mn ). to validate its performance, this architecture was compared in the detection task with a strong method for bud detection, scanning windows ( sw ) based on a patch classifier, showing improvements over three aspects of detection : segmentation, correspondence identification and localization. the best version of fcn - mn showed a detection f1 - measure of $ 88. 6 \ % $ ( for true positives defined as detected components whose intersection - over - union with the true bud is above $ 0. 5 $ ), and false positives that are small and near the true bud. splits - - false positives overlapping the true bud - - showed a mean segmentation precision of $ 89. 3 \ % ( 21. 7 ) $, while false alarms - - false positives not overlapping the true bud - - showed a mean pixel area of only $ 8 \ % $ the area of a true bud, and a distance ( between mass centers ) of $ 1. 1 $ true bud diameters. the paper concludes by discussing how these results for fcn - mn would produce sufficiently accurate measurements of bud variables such as bud number, bud area, and internode length, suggesting a good performance in a practical setup.
|
arxiv:2008.11872
|
The self-consistency between the impressive DAMA annual modulation signal and the differential energy spectrum is an important test for dark matter candidates. Mirror matter-type dark matter passes this test, while other dark matter candidates, including standard (spin-independent) WIMPs and mini-electrically-charged particle dark matter, do not do so well. We argue that the unique properties of mirror matter-type dark matter seem to be just those required to fully explain the data, suggesting that the dark matter problem has finally been solved.
|
arxiv:astro-ph/0403043
|
We present Chandra X-ray Observatory archival observations of the supernova remnant 1E0102.2-7219, a young oxygen-rich remnant in the Small Magellanic Cloud. Combining 28 ObsIDs for 324 ks of total exposure time, we present an ACIS image with an unprecedented signal-to-noise ratio (mean S/N ~ sqrt(S) ~ 6; maximum S/N > 35). We search within the remnant, using the source detection software {\sc wavdetect}, for point sources which may indicate a compact object. Despite finding numerous detections of high significance in both broad- and narrow-band images of the remnant, we are unable to satisfactorily distinguish whether these detections correspond to emission from a compact object. We also present upper limits on the luminosity of an obscured compact stellar object, derived from an analysis of spectra extracted from the high signal-to-noise image. With the results of the analysis presented here we are able to further constrain the characteristics of a potential neutron star for this remnant, though we cannot confirm the existence of such an object.
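The quoted scaling S/N ~ sqrt(S) is just Poisson counting statistics: for a nearly background-free counting image the noise on S counts is sqrt(S), so co-adding exposures improves S/N as the square root of the summed counts. A minimal illustration (not the authors' pipeline; a mean S/N of ~6 corresponds to roughly 36 counts per resolution element):

```python
import math

def poisson_snr(counts):
    """Background-free Poisson statistics: noise on S counts is sqrt(S),
    so S/N = S / sqrt(S) = sqrt(S)."""
    return math.sqrt(counts)

def stacked_snr(counts_per_obs):
    """Co-adding exposures sums the counts, so stacking N similar
    observations improves S/N by roughly sqrt(N)."""
    return poisson_snr(sum(counts_per_obs))
```

For example, four exposures of 9 counts each stack to 36 counts, i.e. S/N = 6, twice the single-exposure S/N of 3.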
|
arxiv:1005.0635
|
We present a statistical mechanical calculation of the thermodynamical properties of (non-rotating) isolated horizons. The introduction of the Planck scale allows for the definition of a universal horizon temperature (independent of the mass of the black hole) and a well-defined notion of energy (as measured by suitable local observers) proportional to the horizon area in Planck units. The microcanonical and canonical ensembles associated with the system are introduced. Black hole entropy and other thermodynamical quantities can be consistently computed in both ensembles, and the results agree with Hawking's semiclassical analysis for all values of the Immirzi parameter.
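The semiclassical benchmark referred to here is the Bekenstein-Hawking area law, which any statistical mechanical count of horizon states must reproduce:

```latex
S_{\mathrm{BH}} \;=\; \frac{k_B\,A}{4\,\ell_P^{2}},
\qquad
\ell_P^{2} \;=\; \frac{G\hbar}{c^{3}},
```

where $A$ is the horizon area and $\ell_P$ the Planck length; the nontrivial claim of the abstract is that this result holds for all values of the Immirzi parameter.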
|
arxiv:1107.1320
|
Standard natural language processing (NLP) pipelines operate on symbolic representations of language, which typically consist of sequences of discrete tokens. However, creating an analogous representation for ancient logographic writing systems is an extremely labor-intensive process that requires expert knowledge. At present, a large portion of logographic data persists in a purely visual form due to the absence of transcription. This issue poses a bottleneck for researchers seeking to apply NLP toolkits to the study of ancient logographic languages: most of the relevant data are images of writing. This paper investigates whether direct processing of visual representations of language offers a potential solution. We introduce LogogramNLP, the first benchmark enabling NLP analysis of ancient logographic languages, featuring both transcribed and visual datasets for four writing systems along with annotations for tasks like classification, translation, and parsing. Our experiments compare systems that employ recent visual and text encoding strategies as backbones. The results demonstrate that visual representations outperform textual representations for some of the investigated tasks, suggesting that visual processing pipelines may unlock a large amount of cultural heritage data of logographic languages for NLP-based analyses.
|
arxiv:2408.04628
|
New models challenge the long-standing conclusion that Mimas, an icy satellite of Saturn, is an inactive snowball, suggesting the existence of a young stealth ocean. Unfortunately, no observable evidence implying tectonic activity and the theoretical subsurface ocean has been found yet. Here, we present the first structural geological map of the icy satellite, with the signs of various tectonic features, along with a simple crosscutting chronology of lineament formation. In accordance with the supposedly young age of the stealth ocean, the observed phenomena are described as putative lineaments, ridges, and troughs. Simple tectonic features are identified as young compared to complex structures. The pattern of the linear features seems to overlap with the distribution of various modeled global nonlinear tidal dissipation patterns; in this way, it may provide the first observed evidence for the existence of the theoretical subsurface stealth ocean. However, the overlapping and crosscutting relations between craters and the observed features may raise concerns about the recent formation of such linear features, possibly indicating long-dormant or already halted tectonic processes dating to an early embryonic phase of lineament formation billions of years ago.
|
arxiv:2404.14084
|
In this paper, we investigate the performance of an intelligent omni-surface (IOS) assisted downlink non-orthogonal multiple access (NOMA) network with phase quantization errors and channel estimation errors, where the channels related to the IOS are spatially correlated. First, upper bounds on the average achievable rates of the two users are derived. Then, channel hardening is shown to occur in the proposed system, based on which we derive approximations of the average achievable rates of the two users. The analytical results illustrate that the proposed upper bound and approximation of the average achievable rates are asymptotically equivalent in the number of elements. Furthermore, it is proved that the asymptotic equivalence also holds for the average achievable rates with correlated and uncorrelated channels. Additionally, we extend the analysis by evaluating the average achievable rates for IOS-assisted orthogonal multiple access (OMA) and IOS-assisted multi-user NOMA scenarios. Simulation results corroborate the theoretical analysis and demonstrate that: (i) low-precision elements with only two-bit phase adjustment can achieve performance close to the ideal continuous phase shifting scheme; (ii) the average achievable rates with correlated channels and uncorrelated channels are asymptotically equivalent in the number of elements; and (iii) IOS-assisted NOMA does not always perform better than OMA due to the reconfigurability of the IOS in different time slots.
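"Two-bit phase adjustment" means each surface element can realize only $2^2 = 4$ discrete phase shifts, so the applied phase is the configured one rounded to the nearest quarter-circle level. A minimal sketch of this quantization (our own illustration, assuming uniformly spaced levels on $[0, 2\pi)$; the paper's exact codebook is not specified here):

```python
import math

def quantize_phase(phi, bits=2):
    """Round a phase (radians) to the nearest of 2**bits uniformly
    spaced levels on the circle; with bits=2 these are 0, pi/2, pi, 3pi/2."""
    levels = 2 ** bits
    step = 2 * math.pi / levels
    return (round(phi / step) % levels) * step

def max_quantization_error(bits=2):
    """Worst-case phase error is half a step: pi / 2**bits radians."""
    return math.pi / 2 ** bits
```

With two bits the worst-case phase error is $\pi/4$; the simulation finding quoted above is that even this coarse adjustment loses little rate relative to continuous phase shifting.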
|
arxiv:2112.11512
|