A family $F$ of permutations of the vertices of a hypergraph $H$ is called "pairwise suitable" for $H$ if, for every pair of disjoint edges in $H$, there exists a permutation in $F$ in which all the vertices in one edge precede those in the other. The cardinality of a smallest such family of permutations for $H$ is called the "separation dimension" of $H$ and is denoted by $\pi(H)$. Equivalently, $\pi(H)$ is the smallest natural number $k$ so that the vertices of $H$ can be embedded in $\mathbb{R}^k$ such that any two disjoint edges of $H$ can be separated by a hyperplane normal to one of the axes. We show that the separation dimension of a hypergraph $H$ is equal to the "boxicity" of the line graph of $H$. This connection helps us borrow results and techniques from the extensive literature on boxicity to study the concept of separation dimension.
arxiv:1212.6756
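The definition above is directly checkable by brute force for small hypergraphs. The following sketch (our illustration, not from the paper) searches for a smallest pairwise suitable family of permutations, i.e., it computes $\pi(H)$ exactly for tiny inputs:

```python
from itertools import combinations, permutations

def separates(perm, e1, e2):
    """True if all vertices of one edge precede all vertices of the other in perm."""
    pos = {v: i for i, v in enumerate(perm)}
    return (max(pos[v] for v in e1) < min(pos[v] for v in e2)
            or max(pos[v] for v in e2) < min(pos[v] for v in e1))

def separation_dimension(vertices, edges):
    """Size of a smallest pairwise suitable family of permutations (brute force)."""
    disjoint = [(e1, e2) for e1, e2 in combinations(edges, 2)
                if not set(e1) & set(e2)]
    if not disjoint:
        return 0
    perms = list(permutations(vertices))
    for k in range(1, len(perms) + 1):
        for family in combinations(perms, k):
            if all(any(separates(p, e1, e2) for p in family)
                   for e1, e2 in disjoint):
                return k

# K4 has three pairs of disjoint edges; since any permutation of four vertices
# separates exactly one perfect matching, the search reports 3.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(separation_dimension(range(4), edges))  # 3
```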
Cosmological N-body simulations of galaxies operate at the level of "star particles" with a mass resolution on the scale of thousands of solar masses. Turning these simulations into stellar mock catalogs requires "upsampling" the star particles into individual stars following the same phase-space density. In this paper, we introduce two new upsampling methods. First, we describe GalaxyFlow, a sophisticated upsampling method that utilizes normalizing flows to both estimate the stellar phase-space density and sample from it. Second, we improve on existing upsamplers based on adaptive kernel density estimation, using maximum likelihood estimation to fine-tune the bandwidth for such algorithms in a way that improves both the density estimation accuracy and the upsampling results. We demonstrate our upsampling techniques on a neighborhood of the solar location in two simulated galaxies: Auriga 6 and h277. Both yield smooth stellar distributions that closely resemble the stellar densities seen in the Gaia DR3 catalog. Furthermore, we introduce a novel multi-model classifier test to compare the accuracy of different upsampling methods quantitatively. This test confirms that GalaxyFlow estimates the density of the underlying star particles more accurately than methods based on kernel density estimation, at the cost of being more computationally intensive.
arxiv:2211.11765
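As a rough illustration of the bandwidth-tuning idea (a simplified, global-bandwidth stand-in for the adaptive scheme described above; all names and numbers below are ours), one can select the kernel bandwidth by maximizing held-out log-likelihood and then sample synthetic stars from the fitted density:

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# stand-in for star-particle phase-space coordinates (3 positions + 3 velocities)
particles = rng.normal(size=(2000, 6))

# maximum-likelihood bandwidth selection via cross-validated log-likelihood;
# the held-out likelihood guards against the overfitting h -> 0 limit
grid = GridSearchCV(
    KernelDensity(kernel="gaussian"),
    {"bandwidth": np.logspace(-1, 0.5, 15)},
    cv=5,
)
grid.fit(particles)
kde = grid.best_estimator_
print("chosen bandwidth:", kde.bandwidth)

# "upsample": draw many synthetic stars from the fitted phase-space density
stars = kde.sample(100_000, random_state=0)
```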
We introduce a notion of Ricci flow in generalized geometry, extending a previous definition by Gualtieri on exact Courant algebroids. Special stationary points of the flow are given by solutions to first-order differential equations, the Killing spinor equations, which encompass special holonomy metrics with solutions of the Hull-Strominger system. Our main result investigates a method to produce new solutions of the Ricci flow and the Killing spinor equations. For this, we consider T-duality between possibly topologically distinct torus bundles endowed with Courant structures, and demonstrate that solutions of the equations are exchanged under this symmetry. As applications, we give a mathematical explanation of the dilaton shift in string theory and prove that the Hull-Strominger system is preserved by T-duality.
arxiv:1611.08926
A brief review of measurements and expectations concerning the spin structure function $g_1$ of the nucleon at low values of the scaling variable $x$ is given.
arxiv:hep-ph/0110355
== Approaches to history of science ==

The nature of the history of science is a topic of debate (as is, by implication, the definition of science itself). The history of science is often seen as a linear story of progress, but historians have come to see the story as more complex. Alfred Edward Taylor has characterised lean periods in the advance of scientific discovery as "periodical bankruptcies of science". Science is a human activity, and scientific contributions have come from people from a wide range of different backgrounds and cultures. Historians of science increasingly see their field as part of a global history of exchange, conflict and collaboration. The relationship between science and religion has been variously characterized in terms of "conflict", "harmony", "complexity", and "mutual independence", among others. Events in Europe such as the Galileo affair of the early 17th century – associated with the scientific revolution and the Age of Enlightenment – led scholars such as John William Draper to postulate (c. 1874) a conflict thesis, suggesting that religion and science have been in conflict methodologically, factually and politically throughout history. The "conflict thesis" has since lost favor among the majority of contemporary scientists and historians of science. However, some contemporary philosophers and scientists, such as Richard Dawkins, still subscribe to this thesis. Historians have emphasized that trust is necessary for agreement on claims about nature. In this light, the 1660 establishment of the Royal Society and its code of experiment – trustworthy because witnessed by its members – has become an important chapter in the historiography of science. Many people in modern history (typically women and persons of color) were excluded from elite scientific communities and characterized by the science establishment as inferior. Historians in the 1980s and 1990s described the structural barriers to participation and began to recover the contributions of overlooked individuals. Historians have also investigated the mundane practices of science such as fieldwork and specimen collection, correspondence, drawing, record-keeping, and the use of laboratory and field equipment.

== Prehistory ==

In prehistoric times, knowledge and technique were passed from generation to generation in an oral tradition. For instance, the domestication of maize for agriculture has been dated to about 9,000 years ago in southern Mexico, before the development of writing systems. Similarly, archaeological evidence indicates the development of astronomical knowledge in preliterate societies. The oral tradition of preliterate societies had several features, the first of which was its fluidity. New information was constantly absorbed and adjusted to new circumstances or community needs. There
https://en.wikipedia.org/wiki/History_of_science
We study the non-zero eigenmodes of the modular Hamiltonian in the context of AdS$_3$/CFT$_2$. We show how to perturbatively construct zero eigenmodes of the modular Hamiltonian for the class of excited states constructed in Lashkari et al. (arXiv:1811.05052), using information about the vacuum non-zero modular eigenmodes.
arxiv:1906.00726
Intelligent reflecting surface (IRS) has been widely considered one of the key enabling techniques for future wireless communication networks owing to its ability to dynamically control the phase shift of reflected electromagnetic (EM) waves to construct a favorable propagation environment. While IRS only focuses on signal reflection, the recently emerged concept of intelligent omni-surface (IOS) can provide the dual functionality of manipulating reflected and transmitted signals. Thus, IOS is a new paradigm for achieving ubiquitous wireless communications. In this paper, we consider an IOS-assisted multi-user multi-input single-output (MU-MISO) system where the IOS utilizes its reflective and transmissive properties to enhance the MU-MISO transmission. Both power minimization and sum-rate maximization problems are solved by exploiting second-order cone programming (SOCP), Riemannian manifold optimization, weighted minimum mean square error (WMMSE), and block coordinate descent (BCD) methods. Simulation results verify the advantages of the IOS for wireless systems and illustrate the significant performance improvement of our proposed joint transmit beamforming, reflecting and transmitting phase-shift, and IOS energy division design algorithms. Compared with a conventional IRS, an IOS can significantly extend the communication coverage, enhance the strength of received signals, and improve the quality of communication links.
arxiv:2209.00199
The MAGIC telescope has observed very-high-energy gamma-ray emission from the BL Lac object PG 1553+113 in 2005 and 2006 with an overall significance of 8.8 sigma. The light curve shows no significant flux variations on a daily timescale. The flux level during 2005 was, however, significantly higher than in 2006. The differential energy spectrum between approx. 90 GeV and 500 GeV is well described by a power law with a spectral index of -4.2 +- 0.3. The photon energy spectrum and spectral modeling allow us to place upper limits of z = 0.74 and z = 0.56, respectively, on the as yet undetermined redshift of PG 1553+113. Recent VLT observations of this blazar show featureless spectra in the near-IR, thus no direct redshift could be determined from these measurements.
arxiv:0711.1586
The host's immune system recognizes these re-implanted cells as its own, and does not target them for attack. Autologous cell dependence on host cell health and donor-site morbidity may be deterrents to their use. Adipose-derived and bone marrow-derived mesenchymal stem cells are commonly autologous in nature, and can be used in a myriad of ways, from helping repair skeletal tissue to replenishing beta cells in diabetic patients.

Allogenic: cells are obtained from the body of a donor of the same species as the recipient. While there are some ethical constraints to the use of human cells for in vitro studies (i.e., human brain tissue chimera development), the employment of dermal fibroblasts from human foreskin demonstrates an immunologically safe and thus viable choice for allogenic tissue engineering of the skin.

Xenogenic: these cells are isolated from a species different from the recipient's. A notable example of xenogeneic tissue utilization is cardiovascular implant construction via animal cells. Chimeric human-animal farming raises ethical concerns around the potential for improved consciousness from implanting human organs in animals.

Syngeneic or isogenic: these cells describe those borne from identical genetic code. This imparts an immunologic benefit similar to autologous cell lines (see above). Autologous cells can be considered syngenic, but the classification also extends to non-autologously derived cells such as those from an identical twin, from genetically identical (cloned) research models, or induced stem cells (iSC) as related to the donor.

=== Stem cells ===

Stem cells are undifferentiated cells with the ability to divide in culture and give rise to different forms of specialized cells. Stem cells are divided into "adult" and "embryonic" stem cells according to their source. While there is still a large ethical debate related to the use of embryonic stem cells, it is thought that another alternative source – induced pluripotent stem cells – may be useful for the repair of diseased or damaged tissues, or may be used to grow new organs. Totipotent cells are stem cells which can divide into further stem cells or differentiate into any cell type in the body, including extra-embryonic tissue. Pluripotent cells are stem cells which can differentiate into any cell type in the body except extra-embryonic tissue. Induced pluripotent stem cells (iPSCs)
https://en.wikipedia.org/wiki/Tissue_engineering
Recent analysis of the cosmic ray data together with earlier experimental measurements at the ISR and SPS provides us with a sound footing to discuss the behavior of the total cross section at asymptotic energies. We will study the growth of the total cross section at high energies in the light of various theoretical approaches, with special reference to measurements at RHIC and the LHC.
arxiv:hep-ph/0508282
Inspired by the Finn-Osserman (1964), Chern (1969), and do Carmo-Peng (1979) proofs of the Bernstein theorem, which characterizes flat planes as the only entire minimal graphs, we prove a new rigidity theorem for the associate families connecting the doubly periodic Scherk graphs and the singly periodic Scherk towers. Our characterization of Scherk's surfaces draws a new idea from the original Finn-Osserman curvature estimate. Combining two generically independent flat structures introduced by Chern and Ricci, we construct geometric harmonic functions on minimal surfaces and establish fresh uniqueness results for periodic minimal surfaces.
arxiv:1812.01401
The requirement that the supersymmetric scalar potential be stable in the minimal supergravity (mSUGRA) model imposes an upper bound on the universal gaugino mass $m_{1/2}$ as a function of the common scalar mass $m_0$. Combining this with the experimental lower bound on $m_{1/2}$ from LEP data, we find a new lower bound on $m_0$, stronger than the one that comes from experimental data alone. If the corresponding upper and lower limits on the superparticle masses, presented in this letter, are found to be violated at Tevatron Run II or at the LHC, it would imply that we are living in a false vacuum. Special attention has been paid to estimating the uncertainties in these predictions due to the choice of the renormalization scale. The implications of our limits for the constraints obtained by indirect methods (SUSY dark matter, g-2 of the muon, $b \to s\gamma$, ...) are briefly discussed.
arxiv:hep-ph/0406129
We study electromagnetic radiation by a fast particle carrying electric charge in a chiral medium. The medium is homogeneous and isotropic and supports the chiral magnetic current, which renders the fermion and photon states unstable. The instability manifests itself as chirality-dependent resonances in the bremsstrahlung cross section, which enhance the energy loss in the chiral medium. We compute the corresponding cross sections in the single-scattering approximation and derive the energy loss in the high-energy approximation.
arxiv:2307.05761
Let $2 < a < b$ be two relatively prime integers and $g = ab - a - b$. It is proved that there exists at least one prime $p \le g$ of the form $p = ax + by~(x, y \in \mathbb{Z}_{\ge 0})$, which confirms a 2020 conjecture of Ramírez Alfonsín and Skałba.
arxiv:2411.09446
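The statement is easy to test numerically. A brute-force check of the conjecture on small coprime pairs (our sketch; sympy is assumed available for primality testing):

```python
from math import gcd
from sympy import isprime

def conjecture_holds(a, b):
    """Check: some prime p <= g = a*b - a - b is representable as a*x + b*y, x, y >= 0."""
    g = a * b - a - b
    for p in range(2, g + 1):
        if not isprime(p):
            continue
        # p = a*x + b*y with x, y >= 0  <=>  b divides (p - a*x) for some x <= p // a
        if any((p - a * x) % b == 0 for x in range(p // a + 1)):
            return True
    return False

pairs = [(a, b) for a in range(3, 40) for b in range(a + 1, 60) if gcd(a, b) == 1]
print(all(conjecture_holds(a, b) for a, b in pairs))  # True on this range
```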
Functional connectivity plays an essential role in modern neuroscience. The modality sheds light on the brain's functional and structural aspects, including mechanisms behind multiple pathologies. One such pathology is schizophrenia, which is often accompanied by auditory verbal hallucinations. The latter are commonly studied by observing functional connectivity during speech processing. In this work, we have made a step toward an in-depth examination of functional connectivity during a dichotic listening task via deep learning for three groups of people: schizophrenia patients with and without auditory verbal hallucinations and healthy controls. We propose a graph neural network-based framework within which we represent EEG data as signals in the graph domain. The framework allows one to 1) predict a mental disorder based on an EEG recording, 2) differentiate the listening state from the resting state for each group, and 3) recognize characteristic task-dependent connectivity. Experimental results show that the proposed model can differentiate between the above groups with state-of-the-art performance. Besides, it provides a researcher with meaningful information regarding each group's functional connectivity, which we validated against current domain knowledge.
arxiv:2206.01930
This text presents the cognitive-ergonomics approach to design, in both its individual and collective forms. It focuses on collective design relative to individual design. The theoretical framework adopted is that of information processing, specified for design problems. The cognitive characteristics of design problems are presented: the effects of their ill-defined character and of the different types of representation implemented in solving these problems, among others the more or less "satisficing" character of the different possible solutions. The text first describes the cognitive activities implemented in both individual and collective design: different types of control activities and of the executive activities of solution development and evaluation. Specific collective-design characteristics are then presented: co-design and distributed-design activities, temporo-operative and cognitive synchronisation, and different types of argumentation, of co-designers' intervention modes in the design process, and of solution-proposal evaluation. The paper concludes with a confrontation between the two types of design, individual and collective.
arxiv:0711.1290
We introduce deterministic suffix-reading automata (DSA), a new automaton model over finite words. Transitions in a DSA are labeled with words. From a state, a DSA triggers an outgoing transition on seeing a word ending with the transition's label. Therefore, rather than moving along an input word letter by letter, a DSA can jump along blocks of letters, with each block ending in a suitable suffix. This feature allows DSAs to recognize regular languages more concisely, compared to DFAs. In this work, we focus on questions around finding a "minimal" DSA for a regular language. The number of states is not a faithful measure of the size of a DSA, since the transition labels contain strings of arbitrary length. Hence, we consider total-size (number of states + number of edges + total length of transition labels) as the size measure of DSAs. We start by formally defining the model and providing a DSA-to-DFA conversion that allows us to compare the expressiveness and succinctness of DSAs with related automata models. Our main technical contribution is a method to derive DSAs from a given DFA: a DFA-to-DSA conversion. We make the surprising observation that the smallest DSA derived from the canonical DFA of a regular language L need not be a minimal DSA for L. This observation leads to a fundamental bottleneck in deriving a minimal DSA for a regular language. In fact, we prove that given a DFA and a number k, the problem of deciding if there exists an equivalent DSA of total-size at most k is NP-complete.
arxiv:2410.22761
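The total-size measure is straightforward to compute. A minimal sketch (our own encoding of a DSA as a labeled transition map, not the paper's):

```python
from dataclasses import dataclass

@dataclass
class DSA:
    """Deterministic suffix-reading automaton: transitions labeled with words."""
    states: set
    transitions: dict  # (state, label_word) -> target state

    def total_size(self):
        # number of states + number of edges + total length of transition labels
        return (len(self.states)
                + len(self.transitions)
                + sum(len(label) for _, label in self.transitions))

# toy DSA over {a, b}: from q0, jump on any block of letters ending in "ab"
d = DSA(states={"q0", "q1"}, transitions={("q0", "ab"): "q1"})
print(d.total_size())  # 2 states + 1 edge + label length 2 = 5
```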
6 Msun/yr) are larger than what was measured for Class II jets. Similarly to Class II sources, the mass-loss rates are ~1%-50% of the mass accretion rates, suggesting that the correlation between ejection and accretion in young stars holds from 1e4 yr up to a few Myr.
arxiv:2012.15379
The homogeneous Poisson point process (PPP) is widely used to model the spatial distribution of base stations and mobile terminals. The same process can be used to model an underlay device-to-device (D2D) network; however, neglecting the homophilic relation in D2D pairing leads to underestimated system insights. In this paper, we model both the spatial and social distributions of interfering D2D nodes as a proximity-based independently marked homogeneous Poisson point process. The proximity considers the physical distance between D2D nodes, whereas the social relationship is modeled as Zipf-based marks. We apply these two paradigms to analyze the effect of interference on the coverage probability of a distance-proportional power-controlled cellular user. Effectively, we apply two types of functional mappings (physical distance, social marks) to the Laplace functional of the PPP. The resulting coverage probability has no closed-form expression; however, for a subset of social marks, the mark summation converges to digamma and polygamma functions. This subset constitutes upper and lower bounds on the coverage probability. We present numerical evaluations of these bounds on the coverage probability by varying a number of different parameters. The results show that by imparting simple power control on the cellular user, an ultra-dense underlay D2D network can be realized without compromising the coverage probability of the cellular user.
arxiv:1606.03668
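For intuition, the coverage probability in such a setup can be estimated by Monte Carlo simulation. The sketch below is ours and makes extra simplifying assumptions stated in the docstring (Rayleigh fading, unit D2D power, no social marks), so it illustrates the PPP interference model rather than the paper's analytical bounds:

```python
import numpy as np

rng = np.random.default_rng(1)

def coverage_probability(lam, radius, alpha, theta, trials=20_000):
    """Monte Carlo coverage of a power-controlled cellular user amid PPP D2D interferers.

    Illustrative assumptions (ours): Rayleigh fading, unit D2D transmit power,
    and distance-proportional power control that inverts the user's own path
    loss, so the received signal power reduces to a unit-mean exponential.
    """
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * radius**2)   # interferer count in a disk
        if n == 0:
            covered += 1
            continue
        r = radius * np.sqrt(rng.random(n))        # uniform interferer distances
        interference = np.sum(rng.exponential(size=n) * r ** -alpha)
        covered += rng.exponential() / interference >= theta
    return covered / trials

print(coverage_probability(lam=1e-4, radius=500.0, alpha=4.0, theta=1.0))
```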
In this note, we discuss various aspects of invariant measures for nonlinear Hamiltonian PDEs. In particular, we show almost sure global existence for some Hamiltonian PDEs with initial data of the form "smooth deterministic function + a rough random perturbation", as a corollary to the Cameron-Martin theorem and known almost sure global existence results with respect to Gaussian measures on spaces of functions.
arxiv:1405.7323
Speech audio quality is subject to degradation caused by an acoustic environment and isotropic ambient and point noises. The environment can lead to decreased speech intelligibility and loss of focus and attention by the listener. Basic acoustic parameters that characterize the environment well are (i) signal-to-noise ratio (SNR), (ii) speech transmission index, (iii) reverberation time, (iv) clarity, and (v) direct-to-reverberant ratio. Except for the SNR, these parameters are usually derived from room impulse response (RIR) measurements; however, such measurements are often not available. This work presents a universal room acoustic estimator design based on convolutional recurrent neural networks that estimates these acoustic parameters blindly and jointly. Our results indicate that the proposed system is robust to non-stationary signal variations and outperforms current state-of-the-art methods.
arxiv:2109.14436
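A convolutional recurrent estimator of the kind described can be sketched as follows (a minimal PyTorch skeleton with illustrative layer sizes of our choosing; the paper's exact architecture is not reproduced here):

```python
import torch
import torch.nn as nn

class BlindAcousticEstimator(nn.Module):
    """Convolutional-recurrent sketch that jointly regresses room-acoustic
    parameters (SNR, STI, reverberation time, clarity, DRR) from a
    spectrogram; all sizes below are illustrative assumptions."""

    def __init__(self, n_mels=64, n_params=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.gru = nn.GRU(32 * (n_mels // 4), 64, batch_first=True)
        self.head = nn.Linear(64, n_params)

    def forward(self, spec):                  # spec: (batch, 1, n_mels, frames)
        z = self.conv(spec)                   # (batch, 32, n_mels/4, frames/4)
        z = z.permute(0, 3, 1, 2).flatten(2)  # (batch, time, features)
        out, _ = self.gru(z)
        return self.head(out[:, -1])          # one joint estimate per clip

est = BlindAcousticEstimator()
print(est(torch.randn(2, 1, 64, 128)).shape)  # torch.Size([2, 5])
```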
Colloidal gels formed by strongly attractive particles at low particle volume fractions are composed of space-spanning networks of uniformly sized clusters. We study the thermal fluctuations of the clusters using differential dynamic microscopy by decomposing them into two modes of dynamics, and link them to the macroscopic viscoelasticity via rheometry. The first mode, dominant at early times, represents the localized, elastic fluctuations of individual clusters. The second mode, pronounced at late times, reflects the collective, viscoelastic dynamics facilitated by the connectivity of the clusters. By mixing two types of particles of distinct attraction strengths in different proportions, we control the transition time at which the collective mode starts to dominate, and hence tune the frequency dependence of the linear viscoelastic moduli of the binary gels.
arxiv:2103.02173
We characterize and study variable importance (VIMP) and pairwise variable associations in binary regression trees. A key component involves the node mean squared error for a quantity we refer to as a maximal subtree. The theory naturally extends from single trees to ensembles of trees and applies to methods like random forests. This is useful because, while importance values from random forests are used to screen variables (for example, they are used to filter high-throughput genomic data in bioinformatics), very little theory exists about their properties.
arxiv:0711.2434
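To make the notion of a maximal subtree concrete: for a variable v, a maximal subtree is one rooted at a node that splits on v and has no ancestor splitting on v. A sketch locating such roots in a scikit-learn tree (our illustration; the paper's VIMP additionally involves the node mean squared error, omitted here):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y).tree_

def maximal_subtree_roots(tree, feature):
    """Nodes split on `feature` with no ancestor split on `feature`:
    the roots of the maximal subtrees for that variable."""
    roots, stack = [], [(0, False)]          # (node id, ancestor split seen?)
    while stack:
        node, seen = stack.pop()
        if tree.children_left[node] == -1:   # leaf
            continue
        is_root = tree.feature[node] == feature and not seen
        if is_root:
            roots.append(node)
        seen = seen or is_root
        stack.append((tree.children_left[node], seen))
        stack.append((tree.children_right[node], seen))
    return roots

for f in range(8):
    print(f"feature {f}: maximal subtree roots at nodes {maximal_subtree_roots(tree, f)}")
```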
Robots have been steadily increasing their presence in our daily lives, where they can work along with humans to provide assistance in various tasks on industry floors, in offices, and in homes. Automated assembly is one of the key applications of robots, and the next generation of assembly systems could become much more efficient by creating collaborative human-robot systems. However, although collaborative robots have been around for decades, their application in truly collaborative systems has been limited. This is because a truly collaborative human-robot system needs to adjust its operation with respect to the uncertainty and imprecision in human actions, ensure safety during interaction, etc. In this paper, we present a system for human-robot collaborative assembly using learning from demonstration and pose estimation, so that the robot can adapt to the uncertainty caused by the operation of humans. Learning from demonstration is used to generate motion trajectories for the robot based on the pose estimates of different goal locations from a deep learning-based vision system. The proposed system is demonstrated using a physical 6-DOF manipulator in a collaborative human-robot assembly scenario. We show successful generalization of the system's operation to changes in the initial and final goal locations through various experiments.
arxiv:2212.01434
A radio map is an efficient way to visually display the wireless signal coverage within a certain region. It has been considered increasingly helpful for the future sixth generation (6G) of wireless networks, as wireless nodes are becoming more crowded and complicated. However, the construction of a high-resolution radio map is very challenging due to the sparse sampling in practical systems. Generative artificial intelligence (AI), which is capable of creating synthetic data to fill in gaps in real-world measurements, is an effective technique for constructing high-precision radio maps. Currently, generative models for radio map construction are trained with two-dimensional (2D) single-band radio maps in urban scenarios, which generalize poorly across diverse terrain scenarios, spectrum bands, and heights. To tackle this problem, we provide a multiband three-dimensional (3D) radio map dataset with consideration of terrain and climate information, named SpectrumNet. It is the largest radio map dataset in terms of dimensions and scale, covering 3 spatial dimensions, 5 frequency bands, 11 terrain scenarios, and 3 climate scenarios. We introduce the parameters and settings for the SpectrumNet dataset generation, and evaluate three baseline methods for radio map construction based on the SpectrumNet dataset. Experiments show the necessity of the SpectrumNet dataset for training models with strong generalization in the spatial, frequency, and scenario domains. Future works on the SpectrumNet dataset are also discussed, including dataset expansion and calibration, as well as extended studies on generative models for radio map construction based on the SpectrumNet dataset.
arxiv:2408.15252
Accurate segmentation of colorectal polyps in colonoscopy images is crucial for effective diagnosis and management of colorectal cancer (CRC). However, current deep learning-based methods primarily rely on fusing RGB information across multiple scales, leading to limitations in accurately identifying polyps due to restricted RGB domain information and challenges in feature misalignment during multi-scale aggregation. To address these limitations, we propose the Polyp Segmentation Network with Shunted Transformer (PSTNet), a novel approach that integrates both RGB and frequency domain cues present in the images. PSTNet comprises three key modules: the Frequency Characterization Attention Module (FCAM) for extracting frequency cues and capturing polyp characteristics, the Feature Supplementary Alignment Module (FSAM) for aligning semantic information and reducing misalignment noise, and the Cross Perception Localization Module (CPM) for synergizing frequency cues with high-level semantics to achieve efficient polyp segmentation. Extensive experiments on challenging datasets demonstrate PSTNet's significant improvement in polyp segmentation accuracy across various metrics, consistently outperforming state-of-the-art methods. The integration of frequency domain cues and the novel architectural design of PSTNet contribute to advancing computer-assisted polyp segmentation, facilitating more accurate diagnosis and management of CRC.
arxiv:2409.08501
Since the seminal works of Thomas and Fermi, researchers in the density-functional theory (DFT) community have been searching for accurate electron density functionals. Arguably, the toughest functional to approximate is the noninteracting kinetic energy, $T_s[\rho]$, the subject of this work. The typical paradigm is to first approximate the energy functional and then take its functional derivative, $\frac{\delta T_s[\rho]}{\delta \rho(r)}$, yielding a potential that can be used in orbital-free DFT or subsystem DFT simulations. Here, this paradigm is challenged by constructing the potential from the second functional derivative via functional integration. A new nonlocal functional for $T_s[\rho]$ is prescribed (which we dub MGP), having a density-independent kernel. MGP is constructed to satisfy three exact conditions: (1) a nonzero "kinetic electron" arising from a nonzero exchange hole; (2) the second functional derivative must reduce to the inverse Lindhard function in the limit of homogeneous densities; (3) the potential derives from functional integration of the second functional derivative. Pilot calculations show that MGP is capable of reproducing accurate equilibrium volumes, bulk moduli, total energies, and electron densities for metallic (bcc, fcc) and semiconducting (cd) phases of silicon as well as of III-V semiconductors. The MGP functional is found to be numerically stable, typically reaching self-consistency within 12 iterations of a truncated Newton minimization algorithm. MGP's computational cost and memory requirements are low and comparable to the Wang-Teter (WT) nonlocal functional or any GGA functional.
arxiv:1704.08943
Generalized statistical arbitrage concepts are introduced, corresponding to trading strategies which yield positive gains on average in a class of scenarios rather than almost surely. The relevant scenarios or market states are specified via an information system given by a $\sigma$-algebra, and so this notion contains classical arbitrage as a special case. It also covers the notion of statistical arbitrage introduced in Bondarenko (2003). Relaxing these notions further, we introduce generalized profitable strategies, which also include static or semi-static strategies. Under standard no-arbitrage there may exist generalized gain strategies yielding positive gains on average under the specified scenarios. In the first part of the paper we characterize these generalized statistical no-arbitrage notions. In the second part of the paper we construct several profitable generalized strategies with respect to various choices of the information system. In particular, we consider several forms of embedded binomial strategies and follow-the-trend strategies, as well as partition-type strategies. We study and compare their behaviour on simulated data. Additionally, we find good performance of these simple strategies on market data, which makes them promising candidates for real applications.
arxiv:1907.09218
Effective interactions can be obtained from a renormalization group analysis in two complementary ways. One can either explicitly integrate out higher-energy modes or impose given conditions at low energies for a cut-off theory. While the first method is numerically involved, the second one can be solved almost analytically. In both cases we compare the resulting effective interactions for the two-nucleon system as functions of the cut-off scale and find a strikingly wide energy region where both approaches overlap, corresponding to relevant scales in light nuclei of about 200 MeV. This amounts to a great simplification in the determination of the effective interaction parameters.
arxiv:1307.1231
With the rapid development of facial manipulation techniques, face forgery has received considerable attention in the multimedia and computer vision community due to security concerns. Existing methods are mostly designed for single-frame detection trained with precise image-level labels or for video-level prediction by only modeling the inter-frame inconsistency, leaving potential high risks for deepfake attackers. In this paper, we introduce a new problem of partial face attack in deepfake video, where only video-level labels are provided but not all the faces in the fake videos are manipulated. We address this problem with a multiple instance learning framework, treating faces and the input video as instances and bag, respectively. A sharp MIL (S-MIL) is proposed which builds a direct mapping from instance embeddings to bag prediction, rather than from instance embeddings to instance prediction and then to bag prediction as in traditional MIL. Theoretical analysis proves that the gradient vanishing in traditional MIL is relieved in S-MIL. To generate instances that can accurately incorporate the partially manipulated faces, spatial-temporally encoded instances are designed to fully model the intra-frame and inter-frame inconsistency, which further helps to promote the detection performance. We also construct a new dataset, FFPMS, for partially attacked deepfake video detection, which can benefit the evaluation of different methods at both frame and video levels. Experiments on FFPMS and the widely used DFDC dataset verify that S-MIL is superior to other counterparts for partially attacked deepfake video detection. In addition, S-MIL can also be adapted to traditional deepfake image detection tasks and achieves state-of-the-art performance on single-frame datasets.
arxiv:2008.04585
A $(p, q)$-double form on a Riemannian manifold $(M, g)$ can be considered simultaneously as a vector-valued differential $p$-form over $M$ or, alternatively, as a vector-valued $q$-form. Accordingly, the usual Hodge-de Rham Laplacian on differential forms can be extended to double forms in two ways. The differential operators obtained in this way are denoted by $\Delta$ and $\widetilde{\Delta}$. In this paper, we show that the Lichnerowicz Laplacian $\Delta_L$, once operating on double forms, is nothing but the average of the two operators mentioned above. We introduce a new product on double forms to establish index-free formulas for the curvature terms in the Weitzenböck formulas corresponding to the Laplacians $\Delta$, $\widetilde{\Delta}$ and $\Delta_L$. We prove vanishing theorems for the Hodge-de Rham Laplacian $\Delta$ on $(p, 0)$-double forms and for $\Delta_L$ and $\Delta$ on symmetric double forms of arbitrary order. These results generalize recent results by Petersen-Wink. Our vanishing theorems reveal the role played by the rank of the eigenvectors of the curvature operator in the structure (e.g., the topology) of the manifold.
arxiv:2405.12828
We highlight here several solutions developed to make high-level Cherenkov data FAIR: findable, accessible, interoperable and reusable. The first three FAIR principles may be ensured by properly indexing the data and using community standards, protocols and services, for example those provided by the International Virtual Observatory Alliance (IVOA). However, the reusability principle is particularly subtle, as the question of trust is raised. Provenance information, which describes the data origin and all transformations performed, is essential to ensure this trust, and it should come with the proper granularity and level of detail. We developed a prototype platform to make the first H.E.S.S. public test data findable and accessible through the Virtual Observatory (VO). The exposed high-level data follows the gamma-ray astronomy data format (GADF) proposed as a community standard to ensure wider interoperability. We also designed a provenance management system in connection with the development of pipelines and analysis tools for CTA (ctapipe and gammapy), in order to collect rich and detailed provenance information, as recommended by the FAIR reusability principle. The prototype platform thus implements the main functionalities of a science gateway, including data search and access, online processing, and traceability of the various actions performed by a user.
arxiv:2201.03247
X-ray phase-contrast tomography (XPCT) is widely used for high-contrast 3D imaging using either synchrotron or laboratory microfocus X-ray sources. XPCT enables an order of magnitude improvement in the image contrast of reconstructed material interfaces with low X-ray absorption contrast. The dominant approaches to 3D reconstruction using XPCT rely on phase-retrieval algorithms that make one or more limiting approximations about the experimental configuration and material properties. Since many experimental scenarios violate such approximations, the resulting reconstructions contain blur, artifacts, or other quantitative inaccuracies. Our solution to this problem is to formulate new iterative non-linear phase-retrieval (NLPR) algorithms that avoid such limiting approximations. Compared to the widely used state-of-the-art approaches, we show that our proposed algorithms result in sharp and quantitatively accurate reconstructions with reduced artifacts. Unlike existing NLPR algorithms, our approaches avoid the laborious manual tuning of regularization hyper-parameters while still achieving the stated goals. As an alternative to regularization, we propose explicit constraints on the material properties to constrain the solution space and solve the phase-retrieval problem. These constraints are easily user-configurable since they follow directly from the imaged object's dimensions and material properties.
arxiv:2305.00334
We discuss analytical approximations to the ground-state phase diagram and the elementary excitations of the cooperative Jahn-Teller model describing a strongly correlated spin-boson system on a lattice in various quantum optical systems. Based on the mean-field theory approach, we show that the system exhibits a quantum magnetic structural phase transition which leads to magnetic ordering of the spins and formation of bosonic condensates. We establish the existence of one gapless Goldstone mode and two gapped amplitude modes inside the symmetry-broken phase.
arxiv:1410.4016
We introduce the subject of modal model theory, where one studies a mathematical structure within a class of similar structures under an extension concept that gives rise to mathematically natural notions of possibility and necessity. A statement $\varphi$ is possible in a structure (written $\Diamond\varphi$) if $\varphi$ is true in some extension of that structure, and $\varphi$ is necessary (written $\Box\varphi$) if it is true in all extensions of the structure. A principal case for us will be the class Mod(T) of all models of a given theory T (all graphs, all groups, all fields, or what have you) considered under the substructure relation. In this article, we aim to develop the resulting modal model theory. The class of all graphs is a particularly insightful case illustrating the remarkable power of the modal vocabulary, for the modal language of graph theory can express connectedness, $k$-colorability, finiteness, countability, size continuum, size $\aleph_1$, $\aleph_2$, $\aleph_\omega$, $\beth_\omega$, the first $\beth$-fixed point, the first $\beth$-hyper-fixed-point and much more. A graph obeys the maximality principle $\Diamond\Box\varphi(a) \to \varphi(a)$ with parameters if and only if it satisfies the theory of the countable random graph, and it satisfies the maximality principle for sentences if and only if it is universal for finite graphs.
arxiv:2009.09394
This paper presents the experiments carried out by us at Jadavpur University as part of our participation in the FIRE 2015 task: Entity Extraction from Social Media Text - Indian Languages (ESM-IL). The tool that we have developed for the task is based on a trigram hidden Markov model that utilizes information such as a gazetteer list, POS tags and some other word-level features to enhance the observation probabilities of known as well as unknown tokens. We submitted runs for English only. A statistical HMM (hidden Markov model) based system has been used to implement our approach. The system has been trained and tested on the datasets released for the task. Our system is the best performer for the English language, obtaining precision, recall and F-measure of 61.96, 39.46 and 48.21, respectively.
arxiv:1512.03950
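Decoding in such taggers is typically done with the Viterbi algorithm. The paper uses a trigram HMM with gazetteer and POS features; the following simplified first-order (bigram) version, with toy parameters of our choosing, shows the core dynamic program:

```python
import numpy as np

def viterbi(obs, states, log_start, log_trans, log_emit):
    """First-order Viterbi decoding over log-probability tables."""
    V = np.full((len(obs), len(states)), -np.inf)
    back = np.zeros((len(obs), len(states)), dtype=int)
    V[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, len(obs)):
        scores = V[t - 1][:, None] + log_trans        # (prev, cur)
        back[t] = scores.argmax(axis=0)
        V[t] = scores.max(axis=0) + log_emit[:, obs[t]]
    path = [int(V[-1].argmax())]
    for t in range(len(obs) - 1, 0, -1):              # follow backpointers
        path.append(int(back[t][path[-1]]))
    return [states[s] for s in reversed(path)]

# toy run: 2 tags (O, ENT), 3 word types
states = ["O", "ENT"]
log_start = np.log([0.8, 0.2])
log_trans = np.log([[0.9, 0.1], [0.5, 0.5]])
log_emit = np.log([[0.6, 0.3, 0.1], [0.1, 0.2, 0.7]])
print(viterbi([0, 1, 2], states, log_start, log_trans, log_emit))
```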
We propose TabTransformer, a novel deep tabular data modeling architecture for supervised and semi-supervised learning. The TabTransformer is built upon self-attention based Transformers. The Transformer layers transform the embeddings of categorical features into robust contextual embeddings to achieve higher prediction accuracy. Through extensive experiments on fifteen publicly available datasets, we show that the TabTransformer outperforms the state-of-the-art deep learning methods for tabular data by at least 1.0% on mean AUC, and matches the performance of tree-based ensemble models. Furthermore, we demonstrate that the contextual embeddings learned from TabTransformer are highly robust against both missing and noisy data features, and provide better interpretability. Lastly, for the semi-supervised setting we develop an unsupervised pre-training procedure to learn data-driven contextual embeddings, resulting in an average 2.1% AUC lift over the state-of-the-art methods.
arxiv:2012.06678
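The architecture described lends itself to a compact sketch (ours, with illustrative sizes; not the reference implementation): categorical features are embedded, contextualized by a Transformer encoder, and concatenated with normalized continuous features before an MLP head:

```python
import torch
import torch.nn as nn

class TabTransformer(nn.Module):
    """Minimal TabTransformer-style model; layer sizes are our assumptions."""

    def __init__(self, cardinalities, n_cont, d=32, heads=4, layers=2, n_out=2):
        super().__init__()
        self.embeds = nn.ModuleList(nn.Embedding(c, d) for c in cardinalities)
        enc = nn.TransformerEncoderLayer(d_model=d, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.norm = nn.LayerNorm(n_cont)
        width = d * len(cardinalities) + n_cont
        self.mlp = nn.Sequential(nn.Linear(width, 64), nn.ReLU(), nn.Linear(64, n_out))

    def forward(self, x_cat, x_cont):
        # one token per categorical feature, contextualized by self-attention
        tokens = torch.stack([e(x_cat[:, i]) for i, e in enumerate(self.embeds)], dim=1)
        ctx = self.encoder(tokens).flatten(1)
        return self.mlp(torch.cat([ctx, self.norm(x_cont)], dim=1))

model = TabTransformer(cardinalities=[10, 7, 4], n_cont=5)
logits = model(torch.randint(0, 4, (8, 3)), torch.randn(8, 5))  # (8, 2)
```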
We provide a sufficient geometric condition for $\mathbb{R}^n$ to be countably $(\mu, m)$-rectifiable of class $\mathscr{C}^{1,\alpha}$ (using the terminology of Federer), where $\mu$ is a Radon measure having positive lower density and finite upper density $\mu$-almost everywhere. Our condition involves integrals of certain many-point interaction functions (discrete curvatures) which measure the flatness of simplices spanned by the parameters.
arxiv:1506.00507
A procedure is developed to rigorously decompose experimental loss spectra of medium-energy electrons reflected from solid surfaces into contributions due to surface and volume electronic excitations. This can be achieved by analysis of two spectra acquired under different experimental conditions, e.g., measured at two different energies and/or geometrical configurations. The input parameters of this procedure comprise the elastic scattering cross section and the inelastic mean free path for volume scattering. The (normalized) differential inelastic mean free path as well as the differential surface excitation probability are retrieved by this procedure. Reflection electron energy loss spectroscopy (REELS) data for Si, Cu and Au are subjected to this procedure and the retrieved differential surface and volume excitation probabilities are compared with data from the literature. The results verify the commonly accepted model for medium-energy electron transport in solids with unprecedented detail.
arxiv:cond-mat/0503470
A nematic liquid crystal (NLC) layer with the anisotropy axis modulated at a fixed rate q in the transverse direction is considered. If the layer locally constitutes a half-wave plate, then the thin-screen approximation predicts 100%-efficient diffraction of a normally incident wave. The possibility of implementing such a layer via anchoring at both surfaces of a cell with thickness L is studied as a function of the parameter qL, and threshold values of this parameter are found for a variety of cases. Distortions of the director structure in comparison with the preferable ideal profile are found via numerical modeling. The Freedericksz transition is studied for this configuration. Coupled-mode theory is applied to light propagation through such a cell, allowing one to account for walk-off effects and effects of nematic distortion. In summary, this cell is suggested as a means for projection displays; high efficiency is predicted.
arxiv:cond-mat/0508555
In forensics it is a classical problem to determine, when a suspect $S$ shares a property $\Gamma$ with a criminal $C$, the probability that $S = C$. In this paper we give a detailed account of this problem in various degrees of generality. We start with the classical case where the probability of having $\Gamma$, as well as the a priori probability of being the criminal, is the same for all individuals. We then generalize the solution to deal with heterogeneous populations, biased search procedures for the suspect, $\Gamma$-correlations, uncertainty about the subpopulation of the criminal and the suspect, and uncertainty about the $\Gamma$-frequencies. We also consider the effect of the way the search for $S$ is conducted, in particular when this is done by a database search. A returning theme is that we show that conditioning is of importance when one wants to quantify the "weight" of the evidence by a likelihood ratio. Apart from these mathematical issues, we also discuss the practical problems in applying them to the legal process. The posterior probabilities of $C = S$ are typically the same for all reasonable choices of the hypotheses, but this is not the whole story. The legal process might force one to dismiss certain hypotheses, for instance when the relevant likelihood ratio depends on prior probabilities. We discuss this and related issues as well. As such, the paper is relevant both from a theoretical and from an applied point of view.
arxiv:1201.4647
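The classical homogeneous case mentioned first admits a closed form: with $N$ other equally likely potential criminals and trait frequency $p$, the posterior probability of $S = C$ is $1/(1 + Np)$. A short numerical check of this via the odds form of Bayes' rule (the numbers are illustrative):

```python
def posterior_identity(n_others, gamma_freq, prior=None):
    """Classical 'island problem': probability that the suspect S is the
    criminal C, given that S shares property Gamma with C.

    Uniform prior over the n_others + 1 possible criminals unless given;
    the likelihood ratio of the Gamma-match evidence is 1 / gamma_freq."""
    prior = prior if prior is not None else 1.0 / (n_others + 1)
    lr = 1.0 / gamma_freq
    odds = lr * prior / (1.0 - prior)      # posterior odds = LR * prior odds
    return odds / (1.0 + odds)

# 10^6 alternative suspects, trait frequency 1 in 10^7:
print(posterior_identity(n_others=10**6, gamma_freq=1e-7))  # ~0.909 = 1/(1 + N*p)
```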
The rise of self-driving cars (SDCs) presents important safety challenges to address in dynamic environments. While field testing is essential, current methods lack diversity in assessing critical SDC scenarios. Prior research introduced simulation-based testing for SDCs, with Frenetic, a test generation approach based on Frenet space encoding, achieving a relatively high percentage of valid tests (approximately 50%) characterized by naturally smooth curves. The "minimal out-of-bound distance" is often taken as a fitness function, which we argue is a sub-optimal metric. Instead, we show that the likelihood of leading to an out-of-bound condition can be learned by the deep-learning vanilla Transformer model. We combine this "inherently learned metric" with a genetic algorithm, which has been shown to produce a high diversity of tests. To validate our approach, we conducted a large-scale empirical evaluation on a dataset comprising over 1,174 simulated test cases created to challenge the SDC's behavior. Our investigation revealed that our approach demonstrates a substantial reduction in generating non-valid test cases, increased diversity, and high accuracy in identifying safety violations during SDC test execution.
arxiv:2401.14682
and organization of military units, as well as the military as a whole. In addition, this area studies other associated aspects such as mobilization/demobilization, and military government for areas recently conquered (or liberated) from enemy control.

=== Force structuring ===

Force structuring is the method by which personnel and the weapons and equipment they use are organized and trained for military operations, including combat. Development of force structure in any country is based on strategic, operational, and tactical needs of the national defense policy, the identified threats to the country, and the technological capabilities of the threats and the armed forces. Force structure development is guided by doctrinal considerations of strategic, operational and tactical deployment and employment of formations and units to territories, areas and zones where they are expected to perform their missions and tasks. Force structuring applies to all armed services, but not to their supporting organisations such as those used for defense science research activities. In the United States, force structure is guided by the table of organization and equipment (TOE or TO&E). The TOE is a document published by the U.S. Department of Defense which prescribes the organization, manning, and equipage of units from divisional size and down, but also including the headquarters of corps and armies. Force structuring also provides information on the mission and capabilities of specific units, as well as the unit's current status in terms of posture and readiness. A general TOE is applicable to a type of unit (for instance, infantry) rather than a specific unit (the 3rd Infantry Division). In this way, all units of the same branch (such as infantry) follow the same structural guidelines, which allows for more efficient financing, training, and employment of like units operationally.

=== Military education and training ===

Studies the methodology and practices involved in training soldiers, NCOs (non-commissioned officers, i.e., sergeants and corporals), and officers. It also extends this to training small and large units, both individually and in concert with one another, for both the regular and reserve organizations. Military training, especially for officers, also concerns itself with general education and political indoctrination of the armed forces.

== Military concepts and methods ==

Much of capability development depends on the concepts which guide use of the armed forces and their weapons and equipment, and the methods employed in any given theatre of war or combat environment. According to Dr. Kajal Nayan: artificial intelligence cyber war era currently, along with
https://en.wikipedia.org/wiki/Military_science
Very deep convolutional neural networks (CNNs) have greatly improved the performance on various image restoration tasks. However, this comes at the price of increasing computational burden, hence limiting their practical usages. We observe that some corrupted image regions are inherently easier to restore than others, since the distortion and content vary within an image. To leverage this, we propose Path-Restore, a multi-path CNN with a pathfinder that can dynamically select an appropriate route for each image region. We train the pathfinder using reinforcement learning with a difficulty-regulated reward. This reward is related to the performance, complexity and "the difficulty of restoring a region". A policy mask is further investigated to jointly process all the image regions. We conduct experiments on denoising and mixed restoration tasks. The results show that our method achieves comparable or superior performance to existing approaches with less computational cost. In particular, Path-Restore is effective for real-world denoising, where the noise distribution varies across different regions of a single image. Compared to the state-of-the-art RIDNet, our method achieves comparable performance and runs 2.7x faster on the realistic Darmstadt Noise Dataset.
arxiv:1904.10343
We consider a stochastic differential equation with additive fractional noise with Hurst parameter $H > 1/2$, and a non-linear drift depending on an unknown parameter. We show the local asymptotic normality (LAN) property of this parametric model with rate $\sqrt{\tau}$ as $\tau \rightarrow \infty$, when the solution is observed continuously on the time interval $[0, \tau]$. The proof uses ergodic properties of the equation and a Girsanov-type transform. We analyse the particular case of the fractional Ornstein-Uhlenbeck process and show that the maximum likelihood estimator is asymptotically efficient in the sense of the minimax theorem.
arxiv:1509.00003
We present AMPLE, a novel multiple path loss exponent (PLE) radio propagation model that can adapt to different environmental factors. The proposed model aims at accurately predicting path loss with low computational complexity while taking environmental factors into account. In the proposed model, the scenario under consideration is classified into regions from a raster map, and each type of region is assigned a PLE. The path loss is then computed based on a direct path between the transmitter (TX) and receiver (RX), which records the intersected regions and the weighted region path loss. To regress the model, the parameters, including PLEs, are extracted via measurement and the region map. We also verify the model in a suburban area. To the best of our knowledge, this is the first time that a multi-slope model precisely maps PLEs to region types. Besides, this model can be integrated into map systems by creating a new path loss attribute for digital maps.
arxiv:2303.12441
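Our reading of the construction can be sketched as follows (illustrative code: the reference-distance intercept, grid resolution and sampling scheme are our assumptions, not the paper's): sample the TX-RX line, accumulate how much of the path crosses each region type, and use the weighted PLE in a log-distance law:

```python
import numpy as np

def ample_path_loss(tx, rx, region_map, ples, cell=1.0, pl0=40.0, samples=200):
    """Path loss along the TX-RX direct path, weighting each region's PLE by
    the fraction of the path it covers.

    region_map: 2D integer raster of region types; ples: one PLE per type."""
    tx, rx = np.asarray(tx, float), np.asarray(rx, float)
    d = np.linalg.norm(rx - tx)
    # sample the direct path and look up the region type under each sample
    pts = tx + np.linspace(0.0, 1.0, samples)[:, None] * (rx - tx)
    idx = np.clip((pts / cell).astype(int), 0, np.array(region_map.shape) - 1)
    weights = np.bincount(region_map[idx[:, 0], idx[:, 1]],
                          minlength=len(ples)) / samples
    effective_ple = np.dot(weights, ples)
    return pl0 + 10.0 * effective_ple * np.log10(max(d, 1.0))

region_map = np.zeros((100, 100), dtype=int)
region_map[:, 50:] = 1                      # two region types: open vs built-up
print(ample_path_loss((5, 5), (90, 90), region_map, ples=[2.0, 3.5]))
```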
The origin of the pseudogap in many strongly correlated materials has been a longstanding puzzle. Here, we uncover which many-body interactions underlie the pseudogap in the quasi-one-dimensional (quasi-1D) material (TaSe4)2I by weak photo-excitation of the material to partially melt the ground state order and thereby reveal the underlying states in the gap. We observe the appearance of both dispersive and flat bands using time- and angle-resolved photoemission spectroscopy. We assign the dispersive band to a single-particle bare band, and the flat band to a collection of single-polaron sub-bands. Our results provide direct experimental evidence that many-body interactions among small Holstein polarons, i.e., the formation of bipolarons, are primarily responsible for the pseudogap in (TaSe4)2I. Recent theoretical studies of the Holstein model support the presence of such a bipolaron-to-polaron crossover. We also observe dramatically different relaxation times for the excited in-gap states in (TaSe4)2I (~600 fs) compared with another quasi-1D material, Rb0.3MoO3 (~60 fs), which provides a new method for distinguishing between pseudogaps induced by polaronic or Luttinger-liquid many-body interactions.
arxiv:2203.05655
Stone-type duality theorems, which relate algebraic and relational/topological models, are important tools in logic because, in addition to elegant abstraction, they strengthen soundness and completeness to a categorical equivalence, yielding a framework through which both algebraic and topological methods can be brought to bear on a logic. We give a systematic treatment of Stone-type duality for the structures that interpret bunched logics, starting with the weakest systems, recovering the familiar BI and Boolean BI (BBI), and extending to both classical and intuitionistic separation logic. We demonstrate the uniformity and modularity of this analysis by additionally capturing the bunched logics obtained by extending BI and BBI with modalities and multiplicative connectives corresponding to disjunction, negation and falsum. This includes the logic of separating modalities (LSM), De Morgan BI (DMBI), Classical BI (CBI), and the sub-classical family of logics extending Bi-intuitionistic (B)BI (Bi(B)BI). We additionally obtain as corollaries soundness and completeness theorems for the specific Kripke-style models of these logics as presented in the literature: for DMBI, the sub-classical logics extending BiBI and a new bunched logic, Concurrent Kleene BI (connecting our work to concurrent separation logic), this is the first time soundness and completeness theorems have been proved. We thus obtain a comprehensive semantic account of the multiplicative variants of all standard propositional connectives in the bunched logic setting. This approach synthesises a variety of techniques from modal, substructural and categorical logic and contextualizes the "resource semantics" interpretation underpinning separation logic amongst them.
arxiv:1710.03021
The mission of statistics is to provide adequate statistical hypotheses (models) for observed data. But what is an "adequate" model? To answer this question, one needs to use the notions of algorithmic information theory. It turns out that for every data string $x$ one can naturally define a "stochasticity profile", a curve that represents a trade-off between the complexity of a model and its adequacy. This curve has four different equivalent definitions in terms of (1) randomness deficiency, (2) minimal description length, (3) position in the lists of simple strings and (4) Kolmogorov complexity with decompression time bounded by the busy beaver function. We present a survey of the corresponding definitions and results relating them to each other.
arxiv:1504.04950
The Total Irradiance Monitor (TIM) from NASA's Solar Radiation and Climate Experiment (SORCE) can detect changes in the total solar irradiance (TSI) to a precision of 2 ppm, allowing observations of variations due to the largest X-class solar flares for the first time. Presented here is a robust algorithm for determining the radiative output in the TIM TSI measurements, in both the impulsive and gradual phases, for the four solar flares presented in Woods et al. (2006), as well as an additional flare measured on 2006 December 6. The radiative outputs for both phases of these five flares are then compared to the vacuum ultraviolet (VUV) irradiance output from the Flare Irradiance Spectral Model (FISM) in order to derive an empirical relationship between the FISM VUV model and the TIM TSI data output, to estimate the TSI radiative output for eight other X-class flares. This model provides the basis for the bolometric energy estimates for the solar flares analyzed in the Emslie et al. (2012) study.
arxiv:1509.06074
experimental data on the synthesis and study of layered organic - inorganic nanocomposites [ cu2 ( oh ) 3 + ds ], resulting from ablation of copper in aqueous solutions of the surfactant sodium dodecyl sulfate ( sds ), are presented. the formation dynamics of this composite was traced by absorption spectroscopy of the colloidal solutions, x - ray diffraction, and scanning electron ( sem ) and atomic force ( afm ) microscopy of the solid phase of the colloids, as a function of the duration of exposure of the copper target to copper vapor laser radiation as well as the aging time of the colloid. bilayered structures of the composite [ cu2 ( oh ) 3 + ds ] fabricated by laser ablation of a copper metal target in liquid are demonstrated for the first time.
arxiv:1203.3026
stellar photometric variability and instrumental effects, like cosmic ray hits, data discontinuities, data leaks, instrument aging etc. cause difficulties in the characterization of exoplanets and have an impact on the accuracy and precision of the modelling and detectability of transits, occultations and phase curves. this paper aims to improve the transit, occultation and phase - curve modelling in the presence of strong stellar variability and instrumental noise. we invoke the wavelet - formulation to reach this goal. we explore the capabilities of the software package transit and light curve modeller ( tlcm ). it is able to perform a joint radial velocity and light curve fit or light curve fit only. it models the transit, occultation, beaming, ellipsoidal and reflection effects in the light curves ( including the gravity darkening effect, too ). the red - noise, the stellar variability and instrumental effects are modelled via wavelets. the wavelet - fit is constrained by prescribing that the final white noise level must be equal to the average of the uncertainties of the photometric data points. this helps to avoid overfitting and regularizes the noise model. the approach was tested by injecting synthetic light curves into kepler ' s short cadence data and then modelling them. the method performs well over a certain signal - to - noise ( s / n ) ratio. in general an s / n ratio of 10 is needed to get good results, but some parameters require a larger s / n while others can be retrieved at lower s / n. we give limits in terms of the signal - to - noise ratio needed for accurate retrieval of every studied system parameter. the wavelet - approach is able to manage and to remove the impacts of data discontinuities, cosmic ray events, long - term stellar variability and instrument ageing, short term stellar variability and pulsation and flares among others. (... )
arxiv:2108.11822
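as a hedged illustration of the wavelet idea ( not tlcm's actual implementation ), the sketch below shrinks the wavelet coefficients of a noisy light curve until the residual scatter approaches the quoted photometric uncertainty, separating a smooth variability model from approximately white residuals. it assumes the pywavelets package, and the threshold rule is the generic universal threshold rather than tlcm's constrained fit.

import numpy as np
import pywt  # PyWavelets

def wavelet_detrend(flux, sigma, wavelet="db4", level=6):
    # soft-threshold the detail coefficients so that what remains in the
    # residuals is close to white noise at the quoted uncertainty sigma
    coeffs = pywt.wavedec(flux, wavelet, level=level)
    thr = sigma * np.sqrt(2.0 * np.log(flux.size))  # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    model = pywt.waverec(shrunk, wavelet)[: flux.size]
    return model, flux - model  # (red-noise model, whitened residuals)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2048)
flux = (1.0 + 5e-4 * np.sin(2 * np.pi * t / 3.7)      # stellar variability
        + rng.normal(0.0, 2e-4, t.size))              # photometric noise
model, resid = wavelet_detrend(flux, sigma=2e-4)
print(resid.std())  # should land near the injected 2e-4 white-noise level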
we study questions around the existence of bounds and the dependence on parameters for linear - algebraic problems in polynomial rings over rings of an arithmetic flavor. in particular, we show that the module of syzygies of polynomials $ f _ 1,..., f _ n \ in r [ x _ 1,..., x _ n ] $ with coefficients in a prüfer domain $ r $ can be generated by elements whose degrees are bounded by a number depending only on the number of polynomials, the number $ n $ of variables and the degrees of the $ f _ j $. this implies that if $ r $ is a bézout domain, then the generators can be parametrized in terms of the coefficients of $ f _ 1,..., f _ n $ using the ring operations and a certain division function, uniformly in $ r $.
arxiv:math/0306240
we propose a novel design for a lightweight, high - performance space - based solar power array combined with power beaming capability for operation in geosynchronous orbit and transmission of power to earth. we use a modular configuration of small, repeatable unit cells, called tiles, that each individually perform power collection, conversion, and transmission. sunlight is collected via lightweight parabolic concentrators and converted to dc electric power with high efficiency iii - v photovoltaics. several cmos integrated circuits within each tile generate and control the phase of multiple independently - controlled microwave sources using the dc power. these sources are coupled to multiple radiating antennas which act as elements of a large phased array to beam the rf power to earth. the power is sent to earth at a frequency chosen in the range of 1 - 10 ghz and collected with ground - based rectennas at a local intensity no larger than ambient sunlight. we achieve significantly reduced mass compared to previous designs by taking advantage of solar concentration, current cmos integrated circuit technology, and ultralight structural elements. of note, the resulting satellite has no movable parts once it is fully deployed and all beam steering is done electronically. our design is safe, scalable, and able to be deployed and tested with progressively larger configurations starting with a single unit cell that could fit on a cube satellite. the design reported on here has an areal mass density of 160 g / m2 and an end - to - end efficiency of 7 - 14 %. we believe this is a significant step toward the realization of space - based solar power, a concept once confined to science fiction.
arxiv:2206.08373
service - oriented computing ( soc ) enables the composition of loosely coupled service agents provided with varying quality of service ( qos ) levels, effectively forming a multiagent system ( mas ). selecting a ( near - ) optimal set of services for a composition in terms of qos is crucial when many functionally equivalent services are available. as the number of distributed services, especially in the cloud, is rising rapidly, the impact of the network on the qos keeps increasing. despite this, and in contrast to most mas approaches, current service approaches depend on a centralized architecture which cannot adapt to the network. thus, we propose a scalable distributed architecture composed of a flexible number of distributed control nodes. our architecture requires no changes to existing services and adapts from a centralized to a completely distributed realization by adding control nodes as needed. also, we propose an extended qos aggregation algorithm that allows accurate estimation of network qos. finally, we evaluate the benefits and optimality of our architecture in a distributed environment.
arxiv:1301.4839
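the core of any such composition approach is a qos aggregation rule. below is a minimal sketch ( attribute names and numbers are assumed, and it is not the paper's algorithm ) : in a sequential composition, latencies add, availabilities multiply, and throughput is limited by the slowest element, with the network links between services treated as first - class qos contributors.

import math
from dataclasses import dataclass

@dataclass
class QoS:
    latency: float       # ms; aggregates by summation
    availability: float  # probability; aggregates by product
    throughput: float    # req/s; limited by the slowest element

def aggregate_sequence(services, links):
    # sequential composition in which the network links between services
    # contribute to the end-to-end QoS instead of being ignored
    parts = list(services) + list(links)
    return QoS(latency=sum(p.latency for p in parts),
               availability=math.prod(p.availability for p in parts),
               throughput=min(p.throughput for p in parts))

services = [QoS(20.0, 0.999, 500.0), QoS(35.0, 0.995, 200.0)]
links = [QoS(12.0, 0.9999, 1000.0)]  # measured network QoS between them
print(aggregate_sequence(services, links))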
nine since 2014. = = campuses = = the main campus, located on marquam hill ( colloquially known as " pill hill " ) in the southwest neighborhood of homestead, is home to the university ' s medical school as well as two associated hospitals. the oregon health & science university hospital is a level i trauma center and general hospital ; doernbecher children ' s hospital is a children ' s hospital which specializes in pediatric medicine and care of children with long - term illness. the university maintains a number of outpatient primary care facilities including the physician ' s pavilion at the marquam hill campus as well as throughout the portland metropolitan area. a third hospital, the portland veterans affairs medical center is located next to the main ohsu campus ; this hospital is run by the united states department of veterans affairs and is outside the auspices of ohsu. a 660 feet ( 200 m ) pedestrian skybridge connecting ohsu hospital and the va medical center was constructed in 1992. additionally, the portland shriners hospital for children is located on the ohsu campus. the university also had a campus in hillsboro, at the site of the former ogi. this campus specialized in graduate - level science and engineering education and is located in the heart of oregon ' s silicon forest. since 1998, the university has controlled the oregon national primate research center, located adjacent to ogi in hillsboro. with the marquam hill campus running out of room for expansion, beginning in 2003 ohsu announced plans to expand into the south waterfront district, formerly known as the north macadam district. the expansion area is along the willamette river in the south portland neighborhood to the east of marquam hill and south of the city center. the center for health & healing earned leed platinum certification in february 2007, becoming the largest health care center in the u. s. to achieve that status. as part of the continued expansion of the south waterfront, on june 26, 2014, ohsu opened the collaborative life sciences building ( clsb ). the building cost $ 295 million to construct, and houses ohsu school of dentistry and dental clinics, portland state university classes and oregon state university ' s doctor of pharmacy program. in april, 2018, clsb was renamed to the joseph e. robertson, jr. collaborative life sciences building ( rlsb ). as existing surface streets were deemed insufficient to connect the south waterfront campus to the marquam hill campus, the portland aerial tram was built as the primary link between them and opened december 1, 2006.
https://en.wikipedia.org/wiki/Oregon_Health_&_Science_University
of $ [ { n \ brace k } _ { [ r ] } ] ^ { - 1 } _ { n, k \ geq 1 } $ have combinatorial interpretations, affirmatively answering a question of choi, long, ng and smith from 2006. if $ 1, 2 \ in r $ and if for all $ n \ in r $ with $ n $ odd and $ n \ geq 3 $, we have $ n \ pm 1 \ in r $, we additionally show that each entry of $ [ { n \ brace k } _ r ] ^ { - 1 } _ { n, k \ geq 1 } $, $ [ { n \ brack k } _ r ] ^ { - 1 } _ { n, k \ geq 1 } $ and $ [ l ( n, k ) _ r ] ^ { - 1 } _ { n, k \ geq 1 } $ is up to an explicit sign the cardinality of a single explicitly defined family of labeled forests. our results also provide combinatorial interpretations of the $ k $ th whitney numbers of the first and second kinds of $ \ pi _ n ^ { 1, d } $, the poset of partitions of $ [ n ] $ that have each part size congruent to $ 1 $ mod $ d $.
arxiv:1610.05803
measuring the performance of solar energy and heat transfer systems requires substantial time, economic cost and manpower. meanwhile, directly predicting their performance is challenging due to their complicated internal structures. fortunately, a knowledge - based machine learning method can provide a promising prediction and optimization strategy for the performance of energy systems. in this chapter, the authors show how they utilize machine learning models trained on a large experimental database to perform precise prediction and optimization on a solar water heater ( swh ) system. a new energy system optimization strategy based on a high - throughput screening ( hts ) process is proposed. this chapter consists of : i ) comparative studies on a variety of machine learning models ( artificial neural networks ( anns ), support vector machine ( svm ) and extreme learning machine ( elm ) ) to predict the performances of swhs ; ii ) development of an ann - based software tool to assist quick prediction ; and iii ) introduction of a computational hts method to design a high - performance swh system.
arxiv:1710.02511
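a minimal sketch of the prediction - plus - screening idea follows, with entirely synthetic data standing in for the experimental swh database and a small sklearn mlp standing in for the chapter's ann : train a surrogate on ( design, efficiency ) pairs, then screen a large pool of candidate designs with it and keep the best predicted performers.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# hypothetical features: (tank volume [l], collector area [m2], tilt [deg]);
# the target efficiency below is a made-up function, not measured data
X = rng.uniform([100.0, 1.0, 10.0], [400.0, 4.0, 60.0], size=(500, 3))
y = (0.4 + 0.05 * X[:, 1] - 1e-4 * (X[:, 2] - 35.0) ** 2
     + 0.01 * rng.standard_normal(500))

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X, y)

# high-throughput screening: score a large pool of candidate designs with
# the trained surrogate and keep the top predicted performers
candidates = rng.uniform([100.0, 1.0, 10.0], [400.0, 4.0, 60.0],
                         size=(100_000, 3))
best = candidates[np.argsort(surrogate.predict(candidates))[-5:]]
print(best)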
we exhibit a smoothly bounded domain $ \ omega $ with the property that for suitable $ k \ subset \ partial \ omega $ and $ z \ in \ omega $ the " sadullaev boundary relative extremal functions " satisfy the inequality $ \ omega _ 1 ( z, k, \ omega ) < \ omega _ 2 ( z, k, \ omega ) \ le \ omega ( z, k, \ omega ) $.
arxiv:1805.05329
we generalize one part of thurston ' s hyperbolic dehn filling theorem to arbitrary - rank semisimple lie groups by showing that certain deformations of extended geometrically finite subgroups of a semisimple lie group are still extended geometrically finite. as a special case, our theorem gives a criterion which guarantees that a deformation of a relatively anosov subgroup is ( non - relatively ) anosov, and also ensures that limit sets vary continuously. our result also applies to several higher - rank examples in convex projective geometry which are outside of the relatively anosov setting.
arxiv:2502.17592
we consider a coefficient inverse problem for the dielectric permittivity in maxwell ' s equations, with data consisting of boundary measurements of one or two backscattered or transmitted waves. the problem is treated using a lagrangian approach to the minimization of a tikhonov functional, where an adaptive finite element method forms the basis of the computations. a new a posteriori error estimate for the coefficient is derived. the method is tested successfully in numerical experiments for the reconstruction of two, three, and four small inclusions with low contrast, as well as the reconstruction of a superposition of two gaussian functions.
arxiv:1603.04870
recent work has considered corpus - based or statistical approaches to the problem of prepositional phrase attachment ambiguity. typically, ambiguous verb phrases of the form { v np1 p np2 } are resolved through a model which considers values of the four head words ( v, n1, p and n2 ). this paper shows that the problem is analogous to n - gram language models in speech recognition, and that one of the most common methods for language modeling, the backed - off estimate, is applicable. an accuracy of 84. 5 % on wall street journal data is obtained using this method. a surprising result is the importance of low - count events : ignoring events which occur fewer than 5 times in the training data reduces performance to 81. 6 %.
arxiv:cmp-lg/9506021
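the backed - off estimate is simple enough to sketch in full. the code below follows the collins - brooks recipe ( quadruple counts, then all triples containing the preposition, then pairs, then the preposition alone, with a noun - attachment default ) ; the toy training data is of course made up.

from collections import Counter

class BackedOffPP:
    # backed-off estimate for PP attachment: try the full (v, n1, p, n2)
    # tuple first, then back off through sub-tuples that contain p
    LEVELS = [
        [("v", "n1", "p", "n2")],
        [("v", "n1", "p"), ("v", "p", "n2"), ("n1", "p", "n2")],
        [("v", "p"), ("n1", "p"), ("p", "n2")],
        [("p",)],
    ]

    def __init__(self):
        self.noun = Counter()   # counts that attached to the noun
        self.total = Counter()  # counts overall

    def _keys(self, v, n1, p, n2):
        w = {"v": v, "n1": n1, "p": p, "n2": n2}
        # slot names stay in the key so e.g. (v,p) and (n1,p) counts
        # never collide
        return [[tuple((s, w[s]) for s in slots) for slots in level]
                for level in self.LEVELS]

    def train(self, v, n1, p, n2, noun_attach):
        for level in self._keys(v, n1, p, n2):
            for key in level:
                self.total[key] += 1
                if noun_attach:
                    self.noun[key] += 1

    def prob_noun_attach(self, v, n1, p, n2):
        for level in self._keys(v, n1, p, n2):
            denom = sum(self.total[k] for k in level)
            if denom > 0:  # keep backing off while a level has no counts
                return sum(self.noun[k] for k in level) / denom
        return 1.0         # default: attach to the noun

m = BackedOffPP()
m.train("saw", "man", "with", "telescope", noun_attach=False)
m.train("ate", "pizza", "with", "anchovies", noun_attach=True)
print(m.prob_noun_attach("saw", "dog", "with", "telescope"))  # 0.0 (verb)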
intra - cardiac echocardiography ( ice ) plays a crucial role in electrophysiology ( ep ) and structural heart disease ( shd ) interventions by providing high - resolution, real - time imaging of cardiac structures. however, existing navigation methods rely on electromagnetic ( em ) tracking, which is susceptible to interference and position drift, or require manual adjustments based on operator expertise. to overcome these limitations, we propose a novel anatomy - aware pose estimation system that determines the ice catheter position and orientation solely from ice images, eliminating the need for external tracking sensors. our approach leverages a vision transformer ( vit ) - based deep learning model, which captures spatial relationships between ice images and anatomical structures. the model is trained on a clinically acquired dataset of 851 subjects, including ice images paired with position and orientation labels normalized to the left atrium ( la ) mesh. ice images are patchified into 16x16 embeddings and processed through a transformer network, where a token independently predicts position and orientation via separate linear layers. the model is optimized using a mean squared error ( mse ) loss function, balancing positional and orientational accuracy. experimental results demonstrate an average positional error of 9. 48 mm and orientation errors of ( 16. 13 deg, 8. 98 deg, 10. 47 deg ) across x, y, and z axes, confirming the model accuracy. qualitative assessments further validate alignment between predicted and target views within 3d cardiac meshes. this ai - driven system enhances procedural efficiency, reduces operator workload, and enables real - time ice catheter localization for tracking - free procedures. the proposed method can function independently or complement existing mapping systems like carto, offering a transformative approach to ice - guided interventions.
arxiv:2505.07851
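a skeletal pytorch version of such an architecture is sketched below ; the dimensions, depth and the use of a single learned token are guesses rather than the paper's configuration. patches are embedded with a strided convolution, a transformer encoder mixes them, and two linear heads regress position and orientation under an mse loss.

import torch
import torch.nn as nn

class IcePoseViT(nn.Module):
    # ViT-style pose regressor: 16x16 patches -> transformer encoder ->
    # a learned token feeding separate position and orientation heads
    def __init__(self, img=224, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        n = (img // patch) ** 2
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head_pos = nn.Linear(dim, 3)  # position in the LA-mesh frame
        self.head_ori = nn.Linear(dim, 3)  # orientation angles

    def forward(self, x):
        z = self.embed(x).flatten(2).transpose(1, 2)          # (B, n, dim)
        z = torch.cat([self.token.expand(len(x), -1, -1), z], dim=1)
        z = self.encoder(z + self.pos)
        t = z[:, 0]                                           # the token
        return self.head_pos(t), self.head_ori(t)

model = IcePoseViT()
pos, ori = model(torch.randn(2, 1, 224, 224))  # fake ICE image batch
loss = (nn.functional.mse_loss(pos, torch.zeros_like(pos))
        + nn.functional.mse_loss(ori, torch.zeros_like(ori)))
print(pos.shape, ori.shape, float(loss))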
we present a new approach to describe the dynamics of an isolated, gravitationally bound astronomical $ n $ - body system in the weak field and slow - motion approximation of the general theory of relativity. celestial bodies are described using an arbitrary energy - momentum tensor and assumed to possess any number of internal multipole moments. the solution of the gravitational field equations in any reference frame is presented as a sum of three terms : i ) the inertial flat spacetime in that frame, ii ) unperturbed solutions for each body in the system that is covariantly transformed to the coordinates of this frame, and iii ) the gravitational interaction term. we use the harmonic gauge conditions that allow reconstruction of a significant part of the structure of the post - galilean coordinate transformation functions relating global coordinates of the inertial reference frame to the local coordinates of the non - inertial frame associated with a particular body. the remaining parts of these functions are determined from dynamical conditions, obtained by constructing the relativistic proper reference frame associated with a particular body. in this frame, the effect of external forces acting on the body is balanced by the fictitious frame - reaction force that is needed to keep the body at rest with respect to the frame, conserving its relativistic three - momentum. the resulting post - galilean coordinate transformations have an approximate group structure that extends the poincaré group of global transformations to the case of accelerating observers in a gravitational field of an $ n $ - body system. we present and discuss the structure of the metric tensors corresponding to the reference frames involved, the rules for transforming relativistic gravitational potentials, the coordinate transformations between frames and the resulting relativistic equations of motion.
arxiv:1304.8122
estimation of the precision matrix ( or inverse covariance matrix ) is of great importance in statistical data analysis and machine learning. however, as the number of parameters scales quadratically with the dimension $ p $, computation becomes very challenging when $ p $ is large. in this paper, we propose an adaptive sieving reduction algorithm to generate a solution path for the estimation of precision matrices under the $ \ ell _ 1 $ penalized d - trace loss, with each subproblem being solved by a second - order algorithm. in each iteration of our algorithm, we are able to greatly reduce the number of variables in the problem based on the karush - kuhn - tucker ( kkt ) conditions and the sparse structure of the estimated precision matrix in the previous iteration. as a result, our algorithm is capable of handling datasets with very high dimensions that may go beyond the capacity of the existing methods. moreover, for the sub - problem in each iteration, instead of solving the primal problem directly, we develop a semismooth newton augmented lagrangian algorithm with a global linear convergence rate on the dual problem to improve the efficiency. theoretical properties of our proposed algorithm have been established. in particular, we show that the convergence rate of our algorithm is asymptotically superlinear. the high efficiency and promising performance of our algorithm are illustrated via extensive simulation studies and real data applications, with comparison to several state - of - the - art solvers.
arxiv:2106.13508
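the sieving step can be illustrated with a kkt screening test. for the l1 - penalized d - trace loss 0. 5 tr ( theta sigma theta ) - tr ( theta ) + lambda || theta ||_1, the smooth part has gradient 0. 5 ( sigma theta + theta sigma ) - i, and any entry where this gradient stays strictly inside [ - lambda, lambda ] can be fixed at zero and dropped from the subproblem. the sketch below shows only that screening test on a random well - conditioned covariance - - not the paper's full adaptive sieving loop or its semismooth newton solver.

import numpy as np

def kkt_active_set(Theta, Sigma, lam, tol=1e-8):
    # entries can be dropped when Theta_ij = 0 and the gradient of the
    # smooth D-trace term lies strictly inside the [-lam, lam] tube
    G = 0.5 * (Sigma @ Theta + Theta @ Sigma) - np.eye(len(Sigma))
    return (np.abs(Theta) > tol) | (np.abs(G) >= lam - tol)

rng = np.random.default_rng(0)
p = 200
A = rng.standard_normal((p, p)) / np.sqrt(p)
Sigma = A @ A.T + np.eye(p)              # a well-conditioned covariance
Theta0 = np.diag(1.0 / np.diag(Sigma))   # cheap diagonal initial guess
active = kkt_active_set(Theta0, Sigma, lam=0.5)
print(f"kept {active.sum()} of {p * p} variables")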
an analytic pair of dimension n and center v is a pair ( v, m ) where m is a complex manifold of ( complex ) dimension n and v is a closed totally real analytic submanifold of dimension n. to an analytic pair ( v, m ) we associate the class of the functions u from m to a positive bounded interval which are plurisubharmonic in m and such that u ( p ) = 0 for each p in v. if the class admits a maximal function u, the triple ( v, m, u ) is said to be a maximal plurisubharmonic model. after defining a pseudo - metric e ( v, m ) on the center v of an analytic pair ( v, m ) we prove ( see theorem 4. 1, theorem 5. 1 ) that maximal plurisubharmonic models provide a natural generalization of the monge - ampere models introduced by lempert and szoke in [ 16 ].
arxiv:0806.1275
the coupled - channel theory is a natural way of treating nonelastic channels, in particular those arising from collective excitations, defined by nuclear deformations. proper treatment of such excitations is often essential to the accurate description of reaction experimental data. previous works have applied different models to specific nuclei with the purpose of determining angular - integrated cross sections. in this work, we present an extensive study of the effects of collective couplings and nuclear deformations on integrated cross sections as well as on angular distributions in a consistent manner for neutron - induced reactions on nuclei in the rare - earth region. this specific subset of the nuclide chart was chosen precisely because of a clear static deformation pattern. we analyze the convergence of the coupled - channel calculations regarding the number of states being explicitly coupled. inspired by the work of dietrich et al., a model for deforming the spherical koning - delaroche optical potential as a function of quadrupole and hexadecupole deformations is also proposed. we demonstrate that the obtained results of calculations for total, elastic and inelastic cross sections, as well as elastic and inelastic angular distributions, are in remarkably good agreement with experimental data for scattering energies above around a few mev.
arxiv:1311.1735
we consider kemp ' s q - analogue of the binomial distribution. several convergence results involving the classical binomial, the heine, the discrete normal, and the poisson distribution are established. some of them are q - analogues of classical convergence properties. besides elementary estimates, we apply mellin transform asymptotics.
arxiv:0804.0534
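the convergence statements are easy to probe numerically. the sketch below implements only the gaussian ( q - ) binomial coefficient built from q - integers and checks the classical limit q -> 1 ; the pmf of kemp's distribution itself is omitted, since its exact parametrization varies between references.

from math import comb

def q_int(n, q):
    # q-analogue of the integer n: [n]_q = 1 + q + ... + q^(n-1)
    return sum(q ** i for i in range(n))

def q_factorial(n, q):
    out = 1.0
    for i in range(1, n + 1):
        out *= q_int(i, q)
    return out

def q_binom(n, k, q):
    # Gaussian binomial coefficient [n choose k]_q
    return q_factorial(n, q) / (q_factorial(k, q) * q_factorial(n - k, q))

for q in (0.5, 0.9, 0.99, 0.999):
    print(q, q_binom(6, 2, q))
print("classical:", comb(6, 2))  # [n choose k]_q -> C(n, k) as q -> 1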
recent experiments have demonstrated that the metamaterial approach is capable of drastically increasing the critical temperature tc of epsilon near zero ( enz ) metamaterial superconductors. for example, tripling of the critical temperature has been observed in al - al2o3 enz core - shell metamaterials. here, we perform theoretical modelling of the tc increase in metamaterial superconductors based on the maxwell - garnett approximation of their dielectric response function. good agreement is demonstrated between theoretical modelling and experimental results in both aluminium and tin - based metamaterials. taking advantage of the demonstrated success of this model, the critical temperature of hypothetical niobium, mgb2 and h2s - based metamaterial superconductors is evaluated. the mgb2 - based metamaterial superconductors are projected to reach the liquid nitrogen temperature range. in the case of an h2s - based metamaterial, tc appears to reach ~ 250 k.
arxiv:1603.02879
for any $ \ delta > 0 $ unless the orthogonal vector hypothesis fails. for walks, this lower bound holds even when $ g $ is planar, unit - weight and has $ o ( 1 ) $ vertices.
arxiv:2201.02121
the introduction of light emitting diodes ( led ) in automotive exterior lighting systems provides opportunities to develop viable alternatives to conventional communication and sensing technologies. most of the advanced driver - assist and autonomous vehicle technologies are based on radio detection and ranging ( radar ) or light detection and ranging ( lidar ) systems that use radio frequency or laser signals, respectively. while reliable and real - time information on vehicle speeds is critical for traffic operations management and autonomous vehicles safety, radar or lidar systems have some deficiencies especially in curved road scenarios where the incidence angle is rapidly varying. in this paper, we propose a novel speed estimation system so - called the visible light detection and ranging ( vildar ) that builds upon sensing visible light variation of the vehicle ' s headlamp. we determine the accuracy of the proposed speed estimator in straight and curved road scenarios. we further present how the algorithm design parameters and the channel noise level affect the speed estimation accuracy. for wide incidence angles, the simulation results show that the vildar outperforms radar / lidar systems in both straight and curved road scenarios. a provisional patent ( us # 62 / 541, 913 ) has been obtained for this work.
arxiv:1807.05412
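the estimation principle can be sketched with a deliberately simplified channel : a roadside photodetector at a known lateral offset sees the headlamp intensity fall off as the inverse square of distance, and the speed follows from nonlinear least squares. the paper's lambertian led channel model is richer, and the geometry, noise level and numbers below are made up.

import numpy as np
from scipy.optimize import curve_fit

L = 5.0  # m; assumed (known) lateral offset of the photodetector

def received_power(t, v, x0, c):
    # inverse-square intensity from a headlamp at along-road position
    # x0 - v*t, observed from lateral offset L
    return c / ((x0 - v * t) ** 2 + L ** 2)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 3.0, 300)
p = received_power(t, 25.0, 80.0, 1e4)           # truth: 25 m/s = 90 km/h
p *= 1.0 + 0.02 * rng.standard_normal(t.size)    # 2% measurement noise

(v_hat, x0_hat, c_hat), _ = curve_fit(received_power, t, p,
                                      p0=(15.0, 60.0, 5e3))
print(f"estimated speed: {v_hat:.1f} m/s")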
primary decomposition of commutative monoid congruences is insensitive to certain features of primary decomposition in commutative rings. these features are captured by the more refined theory of mesoprimary decomposition of congruences, introduced here complete with witnesses and associated prime objects. the combinatorial theory of mesoprimary decomposition lifts to arbitrary binomial ideals in monoid algebras. the resulting binomial mesoprimary decomposition is a new type of intersection decomposition for binomial ideals that enjoys computational efficiency and independence from ground field hypotheses. binomial primary decompositions are easily recovered from mesoprimary decomposition.
arxiv:1107.4699
using data collected with the cleo ii detector at the cornell electron storage ring, we determine the ratio r ( chrg ) for the mean charged multiplicity observed in upsilon ( 1s ) - > gggamma events, to the mean charged multiplicity observed in e + e - - > qqbar gamma events. we find r ( chrg ) = 1. 04 + / - 0. 02 + / - 0. 05 for jet - jet masses less than 7 gev.
arxiv:hep-ex/9701006
this is the first of a series of papers on the infrared database of extragalactic observables from spitzer ( ideos ). in this work we describe the identification of optical counterparts of the infrared sources detected in spitzer infrared spectrograph ( irs ) observations, and the acquisition and validation of redshifts. the ideos sample includes all the spectra from the cornell atlas of spitzer / irs sources ( cassis ) of galaxies beyond the local group. optical counterparts were identified from correlation of the extraction coordinates with the nasa extragalactic database ( ned ). to confirm the optical association and validate ned redshifts, we measure redshifts with unprecedented accuracy on the irs spectra ( \ sigma ( dz / ( 1 + z ) ) = 0. 0011 ) by using an improved version of the maximum combined pseudo - likelihood method ( mcpl ). we perform a multi - stage verification of redshifts that considers alternate ned redshifts, the mcpl redshift, and visual inspection of the irs spectrum. the statistics are as follows : the ideos sample contains 3361 galaxies at redshift 0 < z < 6. 42 ( mean : 0. 48, median : 0. 14 ). we confirm the default ned redshift for 2429 sources and identify 124 with incorrect ned redshifts. we obtain irs - based redshifts for 568 ideos sources without optical spectroscopic redshifts, including 228 with no previous redshift measurements. we provide the entire ideos redshift catalog in machine - readable formats. the catalog condenses our compilation and verification effort, and includes our final evaluation on the most likely redshift for each source, its origin, and reliability estimates.
arxiv:1511.07451
opportunistic analysis has traditionally relied on independence assumptions that break down in many interesting and useful network topologies. this paper develops techniques that expand opportunistic analysis to a broader class of networks, proposes new opportunistic methods for several network geometries, and analyzes them in the high - snr regime. for each of the geometries studied in the paper, we analyze the opportunistic dmt of several relay protocols, including amplify - and - forward, decode - and - forward, compress - and - forward, non - orthogonal amplify - forward, and dynamic decode - forward. among the highlights of the results : in a variety of multi - user single - relay networks, simple selection strategies are developed and shown to be dmt - optimal. it is shown that compress - forward relaying achieves the dmt upper bound in the opportunistic multiple - access relay channel as well as in the opportunistic nxn user network with relay. other protocols, e. g. dynamic decode - forward, are shown to be near optimal in several cases. finite - precision feedback is analyzed for the opportunistic multiple - access relay channel, the opportunistic broadcast relay channel, and the opportunistic gateway channel, and is shown to be almost as good as full channel state information.
arxiv:1104.4491
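for intuition, the simplest of these selection strategies can be written down directly : with decode - and - forward, a relay's end - to - end quality is limited by its weaker hop, so the opportunistic choice is the max - min rule below. this is a standard scheme in the opportunistic relaying literature, not necessarily the exact selector analysed in the paper.

import numpy as np

rng = np.random.default_rng(2)

def select_relay(g_sr, g_rd):
    # max-min selection: relay i is only as good as its weaker hop, so
    # pick argmax_i min(source->relay gain, relay->destination gain)
    return int(np.argmax(np.minimum(g_sr, g_rd)))

n_relays = 8
g_sr = rng.exponential(1.0, n_relays)  # Rayleigh-fading power gains
g_rd = rng.exponential(1.0, n_relays)
i = select_relay(g_sr, g_rd)
print(i, min(g_sr[i], g_rd[i]))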
the components underpinning plms - - large weight matrices - - were shown to bear considerable redundancy. matrix factorization, a well - established technique from matrix theory, has been utilized to reduce the number of parameters in plms. however, it fails to retain satisfactory performance under moderate to high compression rates. in this paper, we identify the full - rankness of fine - tuned plms as the fundamental bottleneck for the failure of matrix factorization and explore the use of network pruning to extract the low - rank sparsity pattern desirable to matrix factorization. we find such a low - rank sparsity pattern exclusively exists in models generated by first - order pruning, which motivates us to unite the two approaches and achieve more effective model compression. we further propose two techniques : sparsity - aware svd and mixed - rank fine - tuning, which improve the initialization and training of the compression procedure, respectively. experiments on glue and question - answering tasks show that the proposed method has a superior compression - performance trade - off compared to existing approaches.
arxiv:2306.14152
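for orientation, below is the plain truncated - svd baseline that the paper diagnoses as insufficient on full - rank fine - tuned weights ( the proposed sparsity - aware svd and mixed - rank fine - tuning are not reproduced here ) : a weight matrix is replaced by two thin factors, cutting parameters whenever the retained rank is small relative to the matrix dimensions.

import numpy as np

def low_rank_factor(W, rank):
    # truncated SVD: W ~ A @ B with A (m x r) and B (r x n)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]  # fold the singular values into A
    B = Vt[:rank]
    return A, B

rng = np.random.default_rng(0)
# a matrix with low-rank structure plus a small dense perturbation
W = (rng.standard_normal((768, 64)) @ rng.standard_normal((64, 768))
     + 0.01 * rng.standard_normal((768, 768)))
A, B = low_rank_factor(W, rank=64)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(W.size, "->", A.size + B.size, f"params, rel. error {err:.3f}")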
multi - hop reasoning ( i. e., reasoning across two or more documents ) is a key ingredient for nlp models that leverage large corpora to exhibit broad knowledge. to retrieve evidence passages, multi - hop models must contend with a fast - growing search space across the hops, represent complex queries that combine multiple information needs, and resolve ambiguity about the best order in which to hop between training passages. we tackle these problems via baleen, a system that improves the accuracy of multi - hop retrieval while learning robustly from weak training signals in the many - hop setting. to tame the search space, we propose condensed retrieval, a pipeline that summarizes the retrieved passages after each hop into a single compact context. to model complex queries, we introduce a focused late interaction retriever that allows different parts of the same query representation to match disparate relevant passages. lastly, to infer the hopping dependencies among unordered training passages, we devise latent hop ordering, a weak - supervision strategy in which the trained retriever itself selects the sequence of hops. we evaluate baleen on retrieval for two - hop question answering and many - hop claim verification, establishing state - of - the - art performance.
arxiv:2101.00436
we briefly recall the main physical features of the parton distributions in the quantum statistical picture of the nucleon. some predictions from a next - to - leading order qcd analysis are successfully compared to recent unpolarized and polarized experimental results. we will discuss the extension to the transverse momentum dependence of the parton distributions and its relevance for semiinclusive deep inelastic scattering. finally, we will present some new positivity constraints for spin observables and their implications for parton distributions.
arxiv:1112.0304
we present a new calculation of the cp violation parameter $ \ epsilon ^ { \ prime } / \ epsilon $. the results reported in this paper have been obtained by using the $ \ delta s = 1 $ effective hamiltonian computed at the next - to - leading order, including qcd and qed penguins. the matrix elements of the relevant operators have been taken from lattice qcd, at a scale $ \ mu = 2 $ gev. at this relatively large scale, the perturbative matching between the relevant operators and the corresponding coefficients is quite reliable. the effect of the next - to - leading corrections is to lower the prediction obtained at the leading order, thus favouring the experimental result of e731. we analyze different contributions to the final result and compare the leading and next - to - leading cases.
arxiv:hep-ph/9212203
born - jordan operators are a class of pseudodifferential operators arising as a generalization of the quantization rule for polynomials on the phase space introduced by born and jordan in 1925. the weak definition of such operators involves the born - jordan distribution, first introduced by cohen in 1966 as a member of the cohen class. we perform a time - frequency analysis of the cohen kernel of the born - jordan distribution, using modulation and wiener amalgam spaces. we then provide sufficient and necessary conditions for born - jordan operators to be bounded on modulation spaces. we use modulation spaces as appropriate symbols classes.
arxiv:1601.05303
we propose a new model for supervised learning to rank. in our model, the relevance labels are assumed to follow a categorical distribution whose probabilities are constructed based on a scoring function. we optimize the training objective with respect to the multivariate categorical variables with an unbiased and low - variance gradient estimator. learning - to - rank methods can generally be categorized into pointwise, pairwise, and listwise approaches. although our scoring function is pointwise, the proposed framework permits flexibility over the choice of the loss function. in our new model, the loss function need not be differentiable and can either be pointwise or listwise. our proposed method achieves better or comparable results on two datasets compared with existing pairwise and listwise methods.
arxiv:1911.00465
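one way to make the training objective concrete is a score - function ( reinforce - style ) estimator with a mean - reward baseline, sketched below for a linear - softmax scoring function and an arbitrary, possibly non - differentiable loss. this illustrates the general recipe only ; the paper's estimator is engineered for lower variance than this.

import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def reinforce_step(theta, X, reward_fn, lr=0.1, n_samples=32):
    # sample categorical relevance labels from the scoring function,
    # then follow the score-function gradient of the expected reward,
    # with the mean sampled reward as a variance-reducing baseline
    probs = softmax(X @ theta)
    grads, rewards = [], []
    for _ in range(n_samples):
        y = np.array([rng.choice(probs.shape[1], p=p) for p in probs])
        onehot = np.eye(probs.shape[1])[y]
        grads.append(X.T @ (onehot - probs))  # grad of log p(y|x)
        rewards.append(reward_fn(y))
    rewards = np.array(rewards)
    g = sum((r - rewards.mean()) * dg
            for r, dg in zip(rewards, grads)) / n_samples
    return theta + lr * g

X = rng.standard_normal((5, 4))      # 5 documents, 4 features
theta = np.zeros((4, 3))             # 3 relevance levels
target = np.array([2, 0, 1, 0, 2])
reward = lambda y: -np.abs(y - target).mean()  # any listwise loss works
for _ in range(200):
    theta = reinforce_step(theta, X, reward)
print(softmax(X @ theta).argmax(axis=1), "vs", target)  # should roughly match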
this paper studies the measurement scheduling problem for a group of n mobile robots moving on a flat surface that are performing cooperative localization ( cl ). we consider a scenario in which, due to limited on - board resources such as battery life and communication bandwidth, only a given number of relative measurements per robot are allowed at the observation and update stage. optimal selection of which teammates a robot should take a relative measurement from, such that the updated joint localization uncertainty of the team is minimized, is an np - hard problem. in this paper, we propose a suboptimal greedy approach that allows each robot to choose its landmark robots locally in polynomial time. our method, unlike the known results in the literature, does not assume full observability of the cl algorithm. moreover, it does not require inter - robot communication at the scheduling stage. that is, there is no need for the robots to collaborate to carry out the landmark robot selections. we discuss the application of our method in the context of a state - of - the - art decentralized cl algorithm and demonstrate its effectiveness through numerical simulations. even though our solution does not come with rigorous performance guarantees, its low computational cost along with no communication requirement makes it an appealing solution for operations with resource - constrained robots.
arxiv:1912.04709
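a minimal version of the greedy rule is easy to state : each robot scores every candidate teammate by how much a hypothetical ekf update with that relative measurement would shrink the trace of the joint covariance, and keeps the best candidates up to its budget. the sketch below ( toy jacobians and made - up numbers ) conveys the local, communication - free flavour of the rule, not the paper's full decentralized cl pipeline.

import numpy as np

def greedy_landmarks(P, H_all, R, budget):
    # greedily pick landmark robots: at each step take the relative
    # measurement whose EKF update most reduces trace(P)
    chosen = []
    for _ in range(budget):
        best, best_P, best_gain = None, None, 0.0
        for rid, H in H_all.items():
            if rid in chosen:
                continue
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            P_new = (np.eye(P.shape[0]) - K @ H) @ P
            gain = np.trace(P) - np.trace(P_new)
            if gain > best_gain:
                best, best_P, best_gain = rid, P_new, gain
        if best is None:
            break
        chosen.append(best)
        P = best_P
    return chosen, P

# robots 0, 1, 2 with 2d positions; robot 0 may measure robot 1 or 2
P0 = np.diag([4.0, 4.0, 1.0, 1.0, 9.0, 9.0])
H_all = {1: np.hstack([-np.eye(2), np.eye(2), np.zeros((2, 2))]),
         2: np.hstack([-np.eye(2), np.zeros((2, 2)), np.eye(2)])}
print(greedy_landmarks(P0, H_all, 0.1 * np.eye(2), budget=1)[0])  # [2]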
the honeycomb - lattice cobaltate na $ _ 3 $ co $ _ 2 $ sbo $ _ 6 $ has recently been proposed to be a proximate kitaev quantum spin liquid ( qsl ) candidate. however, non - kitaev terms in the hamiltonian lead to a zigzag - type antiferromagnetic ( afm ) order at low temperatures. here, we partially substitute magnetic co $ ^ { 2 + } $ with nonmagnetic zn $ ^ { 2 + } $ and investigate the chemical doping effect in tuning the magnetic ground states of na $ _ 3 $ co $ _ { 2 - x } $ zn $ _ x $ sbo $ _ 6 $. x - ray diffraction characterizations reveal no structural transition and only tiny changes in the lattice parameters over our substitution range $ 0 \ leq x \ leq0. 4 $. magnetic susceptibility and specific heat results both show that the afm transition temperature is continuously suppressed with increasing zn content $ x $, and neither long - range magnetic order nor spin freezing is observed when $ x \ geq0. 2 $. more importantly, a linear term of the specific heat representing fermionic excitations is captured below 5 k in the magnetically disordered regime, as opposed to the $ c _ { \ rm m } \ propto t ^ 3 $ behavior expected for bosonic excitations in the afm state. based on the data above, we establish a magnetic phase diagram of na $ _ 3 $ co $ _ { 2 - x } $ zn $ _ x $ sbo $ _ 6 $. our results indicate the presence of gapless fractional excitations in the samples with no magnetic order, evidencing a potential qsl state induced by doping in a kitaev system.
arxiv:2304.13362
photo - induced phase - segregation in mixed halide perovskite mapb ( brxi1 - x ) 3 is investigated in the full compositional range by correlative x - ray diffraction and photo - luminescence experiments.
arxiv:2101.00645
we present the weak lensing mass calibration of the stellar mass based $ \ mu _ { \ star } $ mass proxy for redmapper galaxy clusters in the dark energy survey year 1. for the first time we are able to perform a calibration of $ \ mu _ { \ star } $ at high redshifts, $ z > 0. 33 $. in a blinded analysis, we use $ \ sim 6, 000 $ clusters split into 12 subsets spanning the ranges $ 0. 1 \ leqslant z < 0. 65 $ and $ \ mu _ { \ star } $ up to $ \ sim 5. 5 \ times 10 ^ { 13 } m _ { \ odot } $, and infer the average masses of these subsets through modelling of their stacked weak lensing signal. in our model we account for the following sources of systematic uncertainty : shear measurement and photometric redshift errors, miscentring, cluster - member contamination of the source sample, deviations from the nfw halo profile, halo triaxiality and projection effects. we use the inferred masses to estimate the joint mass - - $ \ mu _ { \ star } $ - - $ z $ scaling relation given by $ \ langle m _ { 200c } | \ mu _ { \ star }, z \ rangle = m _ 0 ( \ mu _ { \ star } / 5. 16 \ times 10 ^ { 12 } \ mathrm { m _ { \ odot } } ) ^ { f _ { \ mu _ { \ star } } } ( ( 1 + z ) / 1. 35 ) ^ { g _ z } $. we find $ m _ 0 = ( 1. 14 \ pm 0. 07 ) \ times 10 ^ { 14 } \ mathrm { m _ { \ odot } } $ with $ f _ { \ mu _ { \ star } } = 0. 76 \ pm 0. 06 $ and $ g _ z = - 1. 14 \ pm 0. 37 $. we discuss the use of $ \ mu _ { \ star } $ as a complementary mass proxy to the well - studied richness $ \ lambda $ for : i ) exploring the regimes of low $ z $, $ \ lambda < 20 $ and high $ \ lambda $, $ z \ sim 1 $ ; ii ) testing systematics such as projection effects for applications in cluster cosmology.
arxiv:2006.10162
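the fitted relation is directly usable as a worked example. the sketch below simply evaluates the quoted best - fit scaling at its pivot point and at a high - mass, high - redshift cluster, using central values only and ignoring the quoted uncertainties.

M0, f_mu, g_z = 1.14e14, 0.76, -1.14  # central best-fit values from above

def mean_m200c(mu_star, z):
    # <M_200c | mu_star, z> = M0 (mu_star / 5.16e12)^f ((1+z) / 1.35)^g
    return M0 * (mu_star / 5.16e12) ** f_mu * ((1.0 + z) / 1.35) ** g_z

print(f"{mean_m200c(5.16e12, 0.35):.3e} solar masses")  # pivot: recovers M0
print(f"{mean_m200c(5.5e13, 0.60):.3e} solar masses")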
we present a method for learning generalized hamiltonian decompositions of ordinary differential equations given a set of noisy time series measurements. our method simultaneously learns a continuous time model and a scalar energy function for a general dynamical system. learning predictive models in this form allows one to place strong, high - level, physics inspired priors onto the form of the learnt governing equations for general dynamical systems. moreover, having shown how our method extends and unifies some previous work in deep learning with physics inspired priors, we present a novel method for learning continuous time models from the weak form of the governing equations which is less computationally taxing than standard adjoint methods.
arxiv:2104.05096
the thermal entanglement of a two - qubit anisotropic heisenberg $ xyz $ chain under an inhomogeneous magnetic field b is studied. it is shown that when the inhomogeneity is increased to a certain value, the entanglement can exhibit a larger revival than that for smaller values of b. this property holds both at zero temperature and at finite temperature. the results also show that the entanglement and the critical temperature can be increased by increasing the inhomogeneous external magnetic field.
arxiv:quant-ph/0602051
plaquette models are short range ferromagnetic spin models that play a key role in the dynamic facilitation approach to the liquid glass transition. in this paper we study the dynamics of the square plaquette model at the smallest of the three critical length scales discovered in arxiv : 1707. 03036. our main result is the computation of the spectral gap, and mixing times, for two natural boundary conditions. we observe that these time scales depend heavily on the boundary condition in this scaling regime.
arxiv:1807.00634
we use high precision lattice simulations to calculate the plaquette expectation value in three - dimensional su ( n ) gauge theory for n = 2, 3, 4, 5, 8. using these results, we study the n - dependence of the first non - perturbative coefficient in the weak - coupling expansion of hot qcd. we demonstrate that, in the limit of large n, the functional form of the plaquette expectation value with ultraviolet divergences subtracted is 15. 9 ( 2 ) - 44 ( 2 ) / n ^ 2.
arxiv:hep-lat/0609015
we perform the stationary phase analysis of the vertex amplitude for the eprl spin foam model extended to include timelike tetrahedra. we analyse both tetrahedra of signature $ - - - $ ( standard eprl ) and tetrahedra of signature $ + - - $ ( hnybida - conrady extension ) in a unified fashion. however, we assume all faces to be of signature $ - - $. the stationary points of the extended model are described again by $ 4 $ - simplices and the phase of the amplitude is equal to the regge action. interestingly, in addition to the lorentzian and euclidean sectors there appear also split signature $ 4 $ - simplices.
arxiv:1705.02862
given a stream of items each associated with a numerical value, its edit distance to monotonicity is the minimum number of items to remove so that the remaining items are non - decreasing with respect to the numerical value. the space complexity of estimating the edit distance to monotonicity of a data stream has become well - understood over the past few years. motivated by applications in network quality monitoring, we extend the study to estimating the edit distance to monotonicity of a sliding window covering the $ w $ most recent items in the stream for any $ w \ ge 1 $. we give a deterministic algorithm which can return an estimate within a factor of $ ( 4 + \ epsilon ) $ using $ o ( \ frac { 1 } { \ epsilon ^ 2 } \ log ^ 2 ( \ epsilon w ) ) $ space. we also extend the study in two directions. first, we consider a stream where each item is associated with a value from a partially ordered set. we give a randomized $ ( 4 + \ epsilon ) $ - approximate algorithm using $ o ( \ frac { 1 } { \ epsilon ^ 2 } \ log ( \ epsilon ^ 2 w ) \ log w ) $ space. second, we consider an out - of - order stream where each item is associated with a creation time and a numerical value, and items may be out of order with respect to their creation times. the goal is to estimate the edit distance to monotonicity with respect to the numerical value of items arranged in the order of creation times. we show that any randomized constant - approximate algorithm requires linear space.
arxiv:1111.5386
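the quantity being approximated has a simple exact offline form : it equals the stream length minus the length of a longest non - decreasing subsequence, computable by patience sorting. the sketch below gives that exact baseline plus a brute - force sliding - window version - - the answer that the small - space streaming algorithms above approximate.

from bisect import bisect_right

def edit_distance_to_monotonicity(xs):
    # len(xs) minus the length of a longest non-decreasing subsequence;
    # bisect_right (rather than bisect_left) allows repeated values
    tails = []  # tails[i] = least tail of a non-decr. subseq. of length i+1
    for x in xs:
        i = bisect_right(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(xs) - len(tails)

def windowed(xs, w):
    # brute-force sliding window; streaming algorithms approximate this
    return [edit_distance_to_monotonicity(xs[max(0, i - w + 1): i + 1])
            for i in range(len(xs))]

print(edit_distance_to_monotonicity([1, 3, 2, 2, 5, 4]))  # -> 2
print(windowed([5, 4, 3, 2, 1, 2, 3], w=3))               # -> [0, 1, 2, 2, 2, 1, 0]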
in this article, we present new random walk methods to solve flow and transport problems in unsaturated / saturated porous media, including coupled flow and transport processes in soils, heterogeneous systems modeled through random hydraulic conductivity and recharge fields, processes at the field and regional scales. the numerical schemes are based on global random walk algorithms ( grw ) which approximate the solution by moving large numbers of computational particles on regular lattices according to specific random walk rules. to cope with the nonlinearity and the degeneracy of the richards equation and of the coupled system, we implemented the grw algorithms by employing linearization techniques similar to the $ l $ - scheme developed in finite element / volume approaches. the resulting grw $ l $ - schemes converge with the number of iterations and provide numerical solutions that are first - order accurate in time and second - order in space. a remarkable property of the flow and transport grw solutions is that they are practically free of numerical diffusion. the grw solutions are validated by comparisons with mixed finite element and finite volume solutions in one - and two - dimensional benchmark problems. they include richards ' equation fully coupled with the advection - diffusion - reaction equation and capture the transition from unsaturated to saturated flow regimes. for completeness, we also consider decoupled flow and transport model problems for saturated aquifers.
arxiv:2011.12889
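the grw idea itself fits in a few lines for pure diffusion : instead of tracking particles one by one, each lattice site's particle count is split at every step into staying, left - moving and right - moving groups by binomial sampling. the sketch below ( absorbing boundaries, no coupling to richards' equation and no l - scheme linearization ) checks the result against the analytic variance 2 d t of a point release.

import numpy as np

rng = np.random.default_rng(0)

def grw_diffusion(n0, D, dx, dt, steps):
    # global random walk for u_t = D u_xx: per step, a binomial share of
    # each site's particles jumps, half left and half right; the jump
    # probability r = 2*D*dt/dx^2 must satisfy r <= 1 for stability
    r = 2.0 * D * dt / dx ** 2
    assert r <= 1.0
    n = n0.copy()
    for _ in range(steps):
        jump = rng.binomial(n, r)
        left = rng.binomial(jump, 0.5)
        right = jump - left
        n = n - jump
        n[:-1] += left[1:]    # boundaries are absorbing in this sketch
        n[1:] += right[:-1]
    return n

cells, dx = 101, 0.1
n0 = np.zeros(cells, dtype=np.int64)
n0[cells // 2] = 10 ** 6                 # point release of 1e6 particles
n = grw_diffusion(n0, D=1e-2, dx=dx, dt=0.25, steps=400)
x = (np.arange(cells) - cells // 2) * dx
var = (x ** 2 * n).sum() / n.sum()
print(var)  # should be close to 2*D*t = 2 * 0.01 * 100 = 2.0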
we present a knowledge - grounded dialog system developed for the ninth dialog system technology challenge ( dstc9 ) track 1 - beyond domain apis : task - oriented conversational modeling with unstructured knowledge access. we leverage transfer learning with existing language models to accomplish the tasks in this challenge track. specifically, we divided the task into four sub - tasks and fine - tuned several transformer models on each of the sub - tasks. we made additional changes that yielded gains in both performance and efficiency, including the combination of the model with traditional entity - matching techniques, and the addition of a pointer network to the output layer of the language model.
arxiv:2106.14444
in the last decade, the study of labour dynamics has led to the introduction of labour flow networks ( lfns ) as a way to conceptualise job - to - job transitions, and to the development of mathematical models to explore the dynamics of these networked flows. to date, lfn models have relied upon an assumption of static network structure. however, as recent events ( increasing automation in the workplace, the covid - 19 pandemic, a surge in the demand for programming skills, etc. ) have shown, we are experiencing drastic shifts to the job landscape that are altering the ways individuals navigate the labour market. here we develop a novel model that emerges lfns from agent - level behaviour, removing the necessity of assuming that future job - to - job flows will be along the same paths where they have been historically observed. this model, informed by microdata for the united kingdom, generates empirical lfns with a high level of accuracy. we use the model to explore how shocks impacting the underlying distributions of jobs and wages alter the topology of the lfn. this framework represents a crucial step towards the development of models that can answer questions about the future of work in an ever - changing world.
arxiv:2301.07979
recently, the cogent collaboration presented a positive signal for an annual modulation in their data set. in light of the long - standing annual modulation signal in dama / libra, we analyze the compatibility of both of these signals within the hypothesis of dark matter ( dm ) scattering on nuclei, taking into account existing experimental constraints. we consider the cases of elastic and inelastic scattering with either spin - dependent or spin - independent coupling to nucleons. we allow for isospin - violating interactions as well as for light mediators. we find that there is some tension between the size of the modulation signal and the time - integrated event excess in cogent, making it difficult to explain both simultaneously. moreover, within the wide range of dm interaction models considered, we do not find a simultaneous explanation of cogent and dama / libra compatible with constraints from other experiments. however, in certain cases part of the data can be made consistent. for example, the modulation signal from cogent becomes consistent with the total rate and with limits from other dm searches at 90 % cl ( but not with the dama / libra signal ) if dm scattering is inelastic spin - independent with just the right couplings to protons and neutrons to reduce the scattering rate on xenon. conversely, the dama / libra signal ( but not cogent ) can be explained by spin - dependent inelastic dm scattering.
arxiv:1106.6241
we report neutron scattering measurements of cooperative spin excitations in antiferromagnetically ordered bafe2as2, the parent phase of an iron pnictide superconductor. the data extend up to ~ 100mev and show that the spin excitation spectrum is sharp and highly dispersive. by fitting the spectrum to a linear spin - wave model we estimate the magnon bandwidth to be in the region of 0. 17ev. the large characteristic spin fluctuation energy suggests that magnetism could play a role in the formation of the superconducting state.
arxiv:0808.2836
autonomous trading robots have been studied in the artificial intelligence area for quite some time. many ai techniques have been tested for building autonomous agents able to trade financial assets. these initiatives include traditional neural networks, fuzzy logic and reinforcement learning, but also more recent approaches like deep neural networks and deep reinforcement learning. many developers claim to be successful in creating robots with great performance when simulating execution with historical price series, so - called backtesting. however, when these robots are used in real markets, they frequently present poor performance in terms of risks and return. in this paper, we propose an open source framework ( mt5se ) that helps the development, backtesting, live testing and real operation of autonomous traders. we built and tested several traders using mt5se. the results indicate that it may help the development of better traders. furthermore, we discuss the simple architecture that is used in many studies and propose an alternative multiagent architecture. such an architecture separates two main concerns for a portfolio manager ( pm ) : price prediction and capital allocation. rather than simply achieving high accuracy, a pm should increase profits when it is right and reduce losses when it is wrong. furthermore, price prediction is highly dependent on the asset ' s nature and history, while capital allocation depends only on the analyst ' s prediction performance and the assets ' correlation. finally, we discuss some promising technologies in the area.
arxiv:2101.08169
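the proposed separation of concerns can be sketched independently of any trading platform : one function per asset produces a directional prediction, and a separate allocator turns predictions into weights using only each analyst's measured accuracy. everything below ( the momentum signal, the weighting rule, the numbers ) is illustrative and is not mt5se's api ; in particular, a fuller allocator would also use the assets' correlation.

import numpy as np

rng = np.random.default_rng(0)

def predict_direction(prices):
    # toy analyst: sign of the momentum over the last 5 bars
    return float(np.sign(prices[-1] - prices[-5]))

def allocate(signals, hit_rates):
    # toy portfolio manager: weight each signal by how much better than
    # a coin flip its analyst has historically been
    w = np.maximum(hit_rates - 0.5, 0.0) * signals
    s = np.abs(w).sum()
    return w / s if s > 0 else np.zeros_like(w)

prices = rng.normal(0.0, 1.0, (100, 3)).cumsum(axis=0) + 100.0  # 3 assets
signals = np.array([predict_direction(prices[:, i]) for i in range(3)])
hit_rates = np.array([0.55, 0.48, 0.60])  # measured per-analyst accuracy
print(allocate(signals, hit_rates))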
materials science is an interdisciplinary field of researching and discovering materials. materials engineering is an engineering field of finding uses for materials in other fields and industries. the intellectual origins of materials science stem from the age of enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. materials science still incorporates elements of physics, chemistry, and engineering. as such, the field was long considered by academic institutions as a sub - field of these related fields. beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study. materials scientists emphasize understanding how the history of a material ( processing ) influences its structure, and thus the material ' s properties and performance. the understanding of processing - structure - properties relationships is called the materials paradigm. this paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy. materials science is also an important part of forensic engineering and failure analysis – investigating materials, products, structures or components, which fail or do not function as intended, causing personal injury or damage to property. such investigations are key to understanding, for example, the causes of various aviation accidents and incidents. = = history = = the material of choice of a given era is often a defining point. phases such as stone age, bronze age, iron age, and steel age are historic, if arbitrary examples. originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. modern materials science evolved directly from metallurgy, which itself evolved from the use of fire. a major breakthrough in the understanding of materials occurred in the late 19th century, when the american scientist josiah willard gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. important elements of modern materials science were products of the space race ; the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building space vehicles enabling the exploration of space. materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials. before the 1960s ( and in some cases decades after ), many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting
https://en.wikipedia.org/wiki/Materials_science
the transient behaviour of highly concentrated colloidal liquids and dynamically arrested states ( glasses ) under time - dependent shear is reviewed. this includes both theoretical and experimental studies and comprises the macroscopic rheological behaviour as well as changes in the structure and dynamics on a microscopic individual - particle level. the microscopic and macroscopic levels of the systems are linked by a comprehensive theoretical framework which is exploited to quantitatively describe these systems while they are subjected to an arbitrary flow history. within this framework, theoretical predictions are compared to experimental data, which were gathered by rheology and confocal microscopy experiments, and display consistent results. particular emphasis is given to ( i ) switch - on of shear flow during which the system can liquify, ( ii ) switch - off of shear flow which might still leave residual stresses in the system, and ( iii ) large amplitude oscillatory shearing. the competition between timescales and the dependence on flow history leads to novel features in both the rheological response and the microscopic structure and dynamics.
arxiv:1309.0401
silicon - based terahertz quantum cascade lasers ( qcls ) offer potential advantages over existing iii - v devices. although coherent electron transport effects are known to be important in qcls, they have never been considered in si - based device designs. we describe a density matrix transport model that is designed to be more general than those in previous studies and to require less a priori knowledge of electronic bandstructure, allowing its use in semi - automated design procedures. the basis of the model includes all states involved in interperiod transport, and our steady - state solution extends beyond the rotating - wave approximation by including dc and counter - propagating terms. we simulate the potential performance of bound - to - continuum ge / sige qcls and find that devices with 4 - 5 - nm - thick barriers give the highest simulated optical gain. we also examine the effects of interdiffusion between ge and sige layers ; we show that if it is taken into account in the design, interdiffusion lengths of up to 1. 5 nm do not significantly affect the simulated device performance.
arxiv:1206.0280