Dataset columns:
- query: string (lengths 1 to 13.4k)
- pos: string (lengths 1 to 61k)
- neg: string (lengths 1 to 63.9k)
- query_lang: string (147 classes)
- __index_level_0__: int64 (0 to 3.11M)
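The schema above implies one record per row with five fields. Below is a minimal sketch for loading and previewing such records, assuming they are exported as JSON Lines with these field names; the file path is hypothetical and not part of the dataset card.

```python
import json

# Hypothetical filename; substitute the actual export of this dataset.
DATA_PATH = "train.jsonl"

# Each record pairs a query with one relevant (pos) and one
# non-relevant (neg) passage, plus a language tag and a row index.
with open(DATA_PATH, encoding="utf-8") as f:
    for i, line in enumerate(f):
        row = json.loads(line)
        print(row["__index_level_0__"], row["query_lang"])
        print("  query:", row["query"][:80])
        print("  pos:  ", row["pos"][:80])
        print("  neg:  ", row["neg"][:80])
        if i == 2:  # preview only the first three records
            break
```

The sample rows that follow show the kinds of pairs the columns hold: web-search queries with passage answers, paper titles with related titles, and abstracts with related abstracts.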
what are the primary germ layers
Ectoderm is one of the three primary germ layers in the very early embryo. The other two layers are the mesoderm (middle layer) and endoderm (most proximal layer), with the ectoderm as the most exterior (or distal) layer. It emerges and originates from the outer layer of germ cells.
germ layer (germ cell layer): any of the three primary layers of cells formed in the early development of the embryo (ectoderm, entoderm, and mesoderm), from which the organs and tissues develop.
eng_Latn
33,900
vacuole bio definition
Vacuole, in biology, a space within a cell that is empty of cytoplasm, lined with a membrane, and filled with fluid. Especially in protozoa, vacuoles are cytoplasmic organs (organelles), performing functions such as storage, ingestion, digestion, excretion, and expulsion of excess water.
A vacuole is a cell organelle found in a number of different cell types. Vacuoles are fluid-filled, enclosed structures that are separated from the cytoplasm by a single membrane. They are found mostly in plant cells and fungi. However, some protists, animal cells, and bacteria also contain vacuoles.
ita_Latn
33,901
what does glycogen in vagina do
The intermediate cells in the vagina contain glycogen, which serves as food for the bacteria. When the bacteria digest this glycogen, a byproduct in the form of lactic acid is created in the vagina. Due to this, the cell lyses or dies and its cytoplasm is torn apart, leaving naked nuclei and the debris of the dead cytoplasm. Cytolysis can also happen when there is an infection by viruses or bacteria, or when the immune system destroys infected cells, as when cytotoxic lymphocytes destroy cells altered by pathogens or cancer.
Glycogen is a multibranched polysaccharide of glucose that serves as a form of energy storage in animals and fungi. The polysaccharide structure represents the main storage form of glucose in the body. In humans, glycogen is made and stored primarily in the cells of the liver and the muscles, hydrated with three or four parts of water.
eng_Latn
33,902
what digestive enzymes are secreted by the brush border (microvilli)?
Brush border enzymes are digestive enzymes located in the membrane of the brush border (microvilli) on intestinal epithelial cells. The brush border greatly increases the surface area available for the absorption of digested food.
Microvilli are small projections of cell membranes that increase the surface area of cells. The main functions of microvilli are absorption, secretion, cellular adhesion, and mechanotransduction. These microvilli are organized to form a structure called the brush border. Villi and microvilli are both found in the small intestine, whereas only microvilli are found on the surface of eggs and white blood cells. • Microvilli make up the brush border, while villi do not. • The cells with microvilli are found in the outermost cell layer of villi.
eng_Latn
33,903
what is an exotoxin
a potent toxin formed and excreted by the bacterial cell and found free in the surrounding medium; exotoxins are the most poisonous substances known. They are protein in nature and heat labile, and are detoxified with retention of antigenicity by treatment with formaldehyde.
endotoxin. a heat-stable toxin associated with the outer membranes of certain gram-negative bacteria, including Brucella, Neisseria, and Vibrio species. Endotoxins are not secreted but are released only when the cells are disrupted; they are less potent and less specific than the exotoxins; and they do not form toxoids. en·do·tox·in. n. A toxin that forms an integral part of the cell wall of certain bacteria and is only released upon destruction of the bacterial cell. Endotoxins are less potent and less specific than most exotoxins and do not form toxoids. Also called intracellular toxin.
eng_Latn
33,904
which protein provides the epidermis with water
After they reach the outermost layer of the epidermis, they eventually shed. The fibrous protein keratin protects the skin and provides it with a waterproof barrier. The melanocytes: these are one of the types of cells in the epidermis, which produce pigment and give color to the skin.
Spirulina Arthrospira is a planktonic blue-green alga (Cyanobacteria) found in warm-water alkaline volcanic lakes and is rich in raw protein and seven major vitamins: A1, B1, B2, B6, B12 (one of the best natural sources for B12, although the bioavailability of its B12 is disputed by many researchers), C and E.
eng_Latn
33,905
what does the epidermis usually protects
The epidermis is the very outer layer of the skin that provides protection from bacteria (thanks to its pH level), from becoming overloaded with water (it's waterproof), and from heat loss. It has four or five distinct layers to achieve this.
The outermost portion of the epidermis, known as the stratum corneum, is relatively waterproof and, when undamaged, prevents most bacteria, viruses, and other foreign substances from entering the body. The epidermis (along with other layers of the skin) also protects the internal organs, muscles, nerves, and blood vessels against trauma.
eng_Latn
33,906
what does keratin do
Keratin is formed by keratinocytes, living cells that make up a large part of skin, hair, nails, and other parts of the body. The cells slowly push their way upwards, eventually dying and forming a protective layer.
Keratinocytes (pronounced: ker-uh-TIH-no-sites) produce keratin, a type of protein that is a basic component of hair and nails. Keratin is also found in skin cells in the skin's outer layer, where it helps create a protective barrier. Langerhans (pronounced: LAHNG-ur-hanz) cells help protect the body against infection.
eng_Latn
33,907
what is cloroplast
A chloroplast is one of three types of plastids, characterized by its high concentration of chlorophyll (the other two types, the leucoplast and the chromoplast, contain little chlorophyll and do not carry out photosynthesis). Glaucophyte algal chloroplasts have a peptidoglycan layer between the chloroplast membranes. It corresponds to the peptidoglycan cell wall of their cyanobacterial ancestors, which is located between their two cell membranes. These chloroplasts are called muroplasts (from Latin mura, meaning wall).
The Clorox Company (formerly Clorox Chemical Co.), based in Oakland, California, is an American worldwide manufacturer and marketer of consumer and professional products with approximately 7,700 employees worldwide as of June 30, 2015.
eng_Latn
33,908
where are keratinocytes located?
The anatomical relationship between keratinocytes and melanocytes is known as 'the epidermal melanin unit' and it has been estimated that each melanocyte is in contact with ∼40 keratinocytes in the basal and suprabasal layers. Translated: melanocytes produce melanin in nice little packages called melanosomes.
The anatomical relationship between keratinocytes and melanocytes is known as 'the epidermal melanin unit' and it has been estimated that each melanocyte is in contact with ∼40 keratinocytes in the basal and suprabasal layers. Translated: melanocytes produce melanin in nice little packages called melanosomes.
eng_Latn
33,909
how do guard cells work in gas exchange inside a plant
The guard cells surround each stoma in a plant. They regulate the opening and closing of stomata to facilitate gas exchange and control transpiration in plants.
Leaves. The exchange of oxygen and carbon dioxide in the leaf (as well as the loss of water vapor in transpiration) occurs through pores called stomata (singular = stoma). The immediate cause is a change in the turgor of the guard cells. The inner wall of each guard cell is thick and elastic.
eng_Latn
33,910
The parts inside a living cell move in this semifluid substance whose name in part means "cell"
Cell: Nerve cells, or neurons, are another kind of specialized cell whose form reflects function. ... molecules that are essential to the structure and functioning of all living cells. ... Cytoplasm: The semifluid substance of a cell containing organelles and .... Some eukaryotic cells move about by means of microtubules attached to the...
History & Overview of Hezbollah | Jewish Virtual Library Hezbollah, also known as 'The Party of God,' is a radical Shi'a Muslim group fighting ... Led by religious clerics, the organization wanted to adopt an Iranian doctrine as a ... In response, Hezbollah, with the help of a UN peacekeeping force, .... of the movement in Lebanon is Sheikh Muhammed Hussein Fadlallah who acts as...
eng_Latn
33,911
what is desmos
This article is about the tree. For the graphing calculator, see Desmos (graphing). Desmos is a genus of trees and shrubs in the plant family Annonaceae. The genus consists of 27 species and 5 unresolved species.
Desmosomes are molecular complexes of cell adhesion proteins and linking proteins that attach the cell surface adhesion proteins to intracellular keratin cytoskeletal filaments. The cell adhesion proteins of the desmosome, desmoglein and desmocollin, are members of the cadherin family of cell adhesion molecules.
eng_Latn
33,912
Nonlinear Estimation of Salient-Pole Synchronous Machines Parameters via Particle Filter
Novel approach to nonlinear/non-Gaussian Bayesian state estimation
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,913
A disk-based join with probabilistic guarantees
Continuous sampling for online aggregation over multiple queries
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,914
On Sarhan-Balakrishnan Bivariate Distribution
Bayesian analysis of absolute continuous Marshall-Olkin bivariate Pareto distribution with location and scale parameters
Fast And Exact Simulation Of Stationary Gaussian Processes Through Circulant Embedding Of The Covariance Matrix
eng_Latn
33,915
The rules are judgmental, not probabilistic.
There are judgemental rules in place.
The rules are more probabilistic than they are judgemental.
eng_Latn
33,916
Bayesian Inference Approaches for Particle Trajectory Analysis in Cell Biology
Multiplexed and high-throughput neuronal fluorescence imaging with diffusible probes
Approximating probabilistic inference in Bayesian belief networks is NP-hard
kor_Hang
33,917
The main two arguments for probabilism are flawed
Representation Theorems and Realism About Degrees of Belief
Probabilistic argumentation systems: a new perspective on Dempster-Shafer theory
eng_Latn
33,918
Analyzing Spatial Panel Data of Cigarette Demand: A Bayesian Hierarchical Modeling Approach
Estimating Dynamic Demand for Cigarettes Using Panel Data: The Effects of Bootlegging, Taxation and Advertising Reconsidered
Approximating probabilistic inference in Bayesian belief networks is NP-hard
kor_Hang
33,919
of Spatial Domain and Genetic Algorithm
Fusing multimodal biometrics with quality estimates via a Bayesian belief network
Scalar triplet on a domain wall: an exact solution
eng_Latn
33,920
Implementation of Bayesian Inference In Distributed Neural Networks
Exact Inferences in a Neural Implementation of a Hidden Markov Model
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,921
Inference of generic types in Java
Making the future safe for the past: adding genericity to the Java programming language
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,922
Estimation of the density of regression errors
Bayesian bandwidth selection for a nonparametric regression model with mixed types of regressors
Performance on the Cognitive Reflection Test is Stable Across Time
eng_Latn
33,923
A Bayesian Algorithm of Wireless Sensor Network Link Selection under Asymmetric Loss Function
Bayesian approach to life testing and reliability estimation using asymmetric loss function
On the drawdown of completely asymmetric Levy processes
eng_Latn
33,924
Generalized inference for Weibull distributions
Confidence limits and prediction limits for a Weibull distribution based on the generalized variable approach
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,925
Yet Another Derivation of The Principle of Maximum Entropy
What is the question that MaxEnt answers? A probabilistic interpretation
Inability of the Submaximal Treadmill Stress Test to Predict the Location of Coronary Disease
eng_Latn
33,926
The expression of the model uncertainty in measurements
Efficient Bayesian inference for multimodal problems in cosmology
Evaluating the Measurement Uncertainty: Fundamentals and practical guidance
eng_Latn
33,927
A pointwise Poisson approximation for independent binomial random variables
POISSON APPROXIMATION FOR INDEPENDENT BINOMIAL RANDOM VARIABLES
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,928
STATISTICAL EXPERIMENTS AND OPTIMAL DESIGN
Bayesian Experimental Design: A Review
Inability of the Submaximal Treadmill Stress Test to Predict the Location of Coronary Disease
yue_Hant
33,929
Ascent EM for fast and global solutions to finite mixtures: An application to curve-clustering of online auctions
Frequentist analysis of hierarchical models for population dynamics and demographic data
Surface of localized pleural plaques quantitated by computed tomography scanning: no relation with cumulative asbestos exposure and no effect on lung function
eng_Latn
33,930
Lower bounds on the error probability of block codes based on improvements on de Caen's inequality
Improved Lower Bounds for the Error Rate of Linear Block Codes
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,931
Exact probabilistic inference for inexact graph matching
Structural matching by discrete relaxation
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,932
Research on Bayesian Network Structure Learning Based on Rough Set
Virus Detection Based on Rough Set and Bayes Classifier
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,933
A sampling-based speaker clustering using utterance-oriented Dirichlet process mixture model and its evaluation on large-scale data
Clustering via the Bayesian information criterion with applications in speech recognition
An instrumental variable approach finds no associated harm or benefit from early dialysis initiation in the United States
eng_Latn
33,934
Peirce and Lonergan on Inquiry and the Pragmatics of Inference
SYNTHETIC KNOWLEDGE AS “ABDUCTION”
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,935
A continuous approach to inductive inference
LP as a Global Search Heuristic Across Different Constrainedness Regions (Extended Abstract)
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,936
Tighter estimates for the posteriors of imprecise prior and conditional probabilities
Preference Programming – Multicriteria Weighting Models under Incomplete Information
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,937
In this paper, we apply a Bayesian analysis to calibrate the parameters of a model for atomic Nitrogen ionization using experimental data from the Electric Arc Shock Tube (EAST, from NASA) wind-tunnel. We use a one-dimensional flow solver coupled with a radiation solver for the simulation of the radiative signature emitted in the shock-heated air plasma, as well as Park's two-temperature model for the thermal and chemical nonequilibrium effects. We simultaneously quantify model parameter uncertainties and physical model inadequacies when solving the statistical inverse problem. Prior to the solution of such a problem, we perform a sensitivity analysis of the radiative heat flux in order to identify important sources of uncertainty. This analysis clearly shows the importance of the direct ionization of atomic Nitrogen as it mostly influences the radiative heating. We then solve the statistical inverse problem and compare the calibrated reaction rates against values available in the literature. Our calculations estimate the reaction rate of the atomic Nitrogen ionization to be (3.7 ± 1.5) × 10^11 cm^3 mol^-1 s^-1 at 10,000 K, a range consistent with Park's estimation. Finally, in order to assess the validity of the estimated parameters, we propagate their uncertainties through a statistical forward problem defined on a prediction scenario different from the calibration scenarios and compare the model predictions against other experimental data.
Bayesian methods are growing ever more popular in chemical kinetics. The reasons for this and general challenges related to kinetic parameter estimation are shortly reviewed. Most authors content themselves with using one single (mostly uniform) prior distribution. The goal of this paper is to go into some serious issues this raises. The problems of confusing knowledge and ignorance and of reparametrisation are examined. The legitimacy of a probabilistic Ockham's razor is called into question. A synthetic example involving two reaction models was used to illustrate how merging the parameter space volume with the model accuracy into a single number might be unwise. Robust Bayesian analysis appears to be a simple and straightforward way to avoid the problems mentioned throughout this article.
We prove that groups acting geometrically on delta-quasiconvex spaces contain no essential Baumslag-Solitar quotients as subgroups. This implies that they are translation discrete, meaning that the translation numbers of their nontorsion elements are bounded away from zero.
eng_Latn
33,938
The Indian Buffet Process is a versatile statistical tool for modeling distributions over binary matrices. We provide an efficient spectral algorithm as an alternative to costly Variational Bayes and sampling-based algorithms. We derive a novel tensorial characterization of the moments of the Indian Buffet Process proper and for two of its applications. We give a computationally efficient iterative inference algorithm, concentration of measure bounds, and reconstruction guarantees. Our algorithm provides superior accuracy and cheaper computation than a comparable Variational Bayesian approach on a number of reference problems.
This work considers the problem of learning the structure of multivariate linear tree models, which include a variety of directed tree graphical models with continuous, discrete, and mixed latent variables such as linear-Gaussian models, hidden Markov models, Gaussian mixture models, and Markov evolutionary trees. The setting is one where we only have samples from certain observed variables in the tree, and our goal is to estimate the tree structure (i.e., the graph of how the underlying hidden variables are connected to each other and to the observed variables). We propose the Spectral Recursive Grouping algorithm, an efficient and simple bottom-up procedure for recovering the tree structure from independent samples of the observed variables. Our finite sample size bounds for exact recovery of the tree structure reveal certain natural dependencies on underlying statistical and structural properties of the underlying joint distribution. Furthermore, our sample complexity guarantees have no explicit dependence on the dimensionality of the observed variables, making the algorithm applicable to many high-dimensional settings. At the heart of our algorithm is a spectral quartet test for determining the relative topology of a quartet of variables from second-order statistics.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
33,939
A simple example simulating a mixture of two normal populations results in some important observations: nonnormality and nonsymmetry of the mixture conditional pdf, nonlinearity of the conditional mean as a function of the conditioning data, and heteroscedasticity of the conditional variance and its nonmonotonicity as a function of distance of the unknown to the conditioning data. A comparison of the mixture statistics with those predicted by traditional models ignoring the mixture reveals the inadequacy and inappropriateness of these traditional approaches. A mixture of two multivariate normal populations is illustrated through the analytical expressions of its conditional distribution and moments.
Variograms are used to describe the spatial variability of environmental variables. In this study, the parameters that characterize the variogram are obtained from a variogram in a different but comparably polluted area. A procedure is presented for improving the variogram modelling when data become available from the area of interest. Interpolation is carried out by means of a Bayesian form of kriging, where prior distributions of the variogram parameters are used. This procedure differs from current procedures, since commonly applied least squares estimation for the variogram is avoided. The study is illustrated with data from a cadmium pollution in the Netherlands, where this form of extrapolation was compared with ordinary kriging. When sufficient data are available (more than 140), ordinary kriging gave the most precise predictions. When the number of data was small (i.e. less than 60), predictions obtained with Bayesian kriging were more precise as compared to those obtained with ordinary kriging. This leads to a considerable reduction of costs, without loss of information.
We prove uniform local-in-time existence and uniqueness of solutions to the density-dependent magnetohydrodynamic equations.
eng_Latn
33,940
A description is given of the design and implementation of a probabilistic reasoning system based on a causal graph approach and its application in the field of forensic science. An influence diagram (or a causal graph) is a directed acyclic graph where nodes represent variables and directed links represent probabilistic dependence among variables. For topmost nodes, prior probabilities are usually given. Other nodes are associated with conditional probabilities. The causal graph and corresponding prior and conditional probabilities form the knowledge base to perform reasoning. The probabilistic reasoning system has two distinct stages; the first is to develop a knowledge elicitation procedure through which an expert may supply information about some area of knowledge in the form of a causal graph and associated conditional probabilities; the second stage involves the development of evidence propagation procedures which will deal with any individual case relating to the area of knowledge which has been dealt with by the first stage. This system would be able to conclude from the current state of the system the probability of any condition or group of conditions occurring.
Computer (digital) forensic examiners typically write a report to document the examination process, including tools used, major processing steps, summary of the findings, and a detailed listing of relevant evidence (files, artifacts) exported to external media (CD, DVD, hard copy) for the case investigator or attorney. However, proper interpretation of the significance of extracted evidence often requires additional consultation with the examiner. This paper proposes a practical methodology for transforming the findings in typical forensic reports to a graphical representation using Bayesian networks (BNs). BNs offer the following advantages: (1) Delineate the cause-effect relationship among relevant pieces of evidence described in the report; and (2) Use probability and established Bayesian inference rules to deal with uncertainty of digital evidence. A realistic forensic report is used to demonstrate this methodology.
We prove that groups acting geometrically on delta-quasiconvex spaces contain no essential Baumslag-Solitar quotients as subgroups. This implies that they are translation discrete, meaning that the translation numbers of their nontorsion elements are bounded away from zero.
eng_Latn
33,941
We consider the problem of learning the parameters of a Bayesian network from data, while taking into account prior knowledge about the signs of influences between variables. Such prior knowledge can be readily obtained from domain experts. We show that this problem of parameter learning is a special case of isotonic regression and provide a simple algorithm for computing isotonic estimates. Our experimental results for a small Bayesian network in the medical domain show that taking prior knowledge about the signs of influences into account leads to an improved fit of the true distribution, especially when only a small sample of data is available. More importantly, however, the isotonic estimator provides parameter estimates that are consistent with the specified prior knowledge, thereby resulting in a network that is more likely to be accepted by experts in its domain of application.
Among the tasks involved in building a Bayesian network, obtaining the required probabilities is generally considered the most daunting. Available data collections are often too small to allow for estimating reliable probabilities. Most domain experts, on the other hand, consider assessing the numbers to be quite demanding. Qualitative probabilistic knowledge, however, is provided more easily by experts. We propose a method for obtaining probabilities, that uses qualitative expert knowledge to constrain the probabilities learned from a small data collection. A dedicated elicitation technique is designed to support the acquisition of the qualitative knowledge required for this purpose. We demonstrate the application of our method by quantifying part of a network in the field of classical swine fever.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
33,942
We provide a characterization of symplectic Grassmannians in terms of their varieties of minimal rational tangents.
For an embedded Fano manifold $X$, we introduce a new invariant $S_X$ related to the dimension of covering linear spaces. The aim of this paper is to classify Fano manifolds $X$ which have large $S_X$.
The Bayesian approach remained rather unsuccessful in treating nonparametric problems. This is primarily due to the difficulty in finding a workable prior distribution on the parameter space, which in nonparametric problems is taken to be a set of probability distributions on a given sample space. Two desirable properties of a prior: 1. The support of the prior should be large. 2. The posterior distribution given a sample of observations should be manageable analytically. These properties are antagonistic: one may be obtained at the expense of the other.
eng_Latn
33,943
The release of synthetic data generated from a model estimated on the data helps statistical agencies disseminate respondent-level data with high utility and privacy protection. Motivated by the challenge of disseminating sensitive variables containing geographic information in the Consumer Expenditure Surveys (CE) at the U.S. Bureau of Labor Statistics, we propose two non-parametric Bayesian models as data synthesizers for the county identifier of each data record: a Bayesian latent class model and a Bayesian areal model. Both data synthesizers use Dirichlet Process priors to cluster observations of similar characteristics and allow borrowing information across observations. We develop innovative disclosure risks measures to quantify inherent risks in the original CE data and how those data risks are ameliorated by our proposed synthesizers. By creating a lower bound and an upper bound of disclosure risks under a minimum and a maximum disclosure risks scenarios respectively, our proposed inherent risks measures provide a range of acceptable disclosure risks for evaluating risks level in the synthetic datasets.
High-utility and low-risks synthetic data facilitates microdata dissemination by statistical agencies. In a previous work, we induced privacy protection into any Bayesian data synthesis model by employing a pseudo posterior likelihood that exponentiates each contribution by an observation record-indexed weight in [0, 1], defined to be inversely proportional to the marginal identification risk for that record. Relatively risky records with high marginal probabilities of identification risk tend to be isolated from other records. The downweighting of their likelihood contribution will tend to shrink the synthetic data value for those high-risk records, which in turn often tends to increase the isolation of other moderate-risk records. The result is that the identification risk actually increases for some moderate-risk records after risk-weighted pseudo posterior estimation synthesis, compared to an unweighted synthesis; a phenomenon we label "whack-a-mole". This paper constructs a weight for each record from a collection of pairwise identification risk probabilities with other records, where each pairwise probability measures the joint probability of re-identification of the pair of records. The by-record weights constructed from the pairwise identification risk probabilities tie together the identification risk probabilities across the data records and compresses the distribution of by-record risks, which mitigates the whack-a-mole and produces a more efficient set of synthetic data with lower risk and higher utility. We illustrate our method with an application to the Consumer Expenditure Surveys of the U.S. Bureau of Labor Statistics. We provide general guidelines to statistical agencies to achieve their desired utility-risk trade-off balance when disseminating public use microdata files through synthetic data.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
33,944
This paper argues that the sometimes-conflicting results of a modern revisionist literature on data mining in econometrics reflect different approaches to solving the central problem of model uncertainty in a science of non-experimental data. The literature has entered an exciting phase with theoretical development, methodological reflection, considerable technological strides on the computing front and interesting empirical applications providing momentum for this branch of econometrics. The organising principle for this discussion of data mining is a philosophical spectrum that sorts the various econometric traditions according to their epistemological assumptions (about the underlying data-generating process, DGP), starting with nihilism at one end and reaching claims of encompassing the DGP at the other end; call it the DGP-spectrum. In the course of exploring this spectrum the reader will encounter various Bayesian, specific-to-general (S-G) as well as general-to-specific (G-S) methods. To set the stage for this exploration the paper starts with a description of data mining, its potential risks and a short section on potential institutional safeguards to these problems.
Data mining is a new discipline lying at the interface of statistics, database technology, pattern recognition, machine learning, and other areas. It is concerned with the secondary analysis of large databases in order to find previously unsuspected relationships which are of interest or value to the database owners. New problems arise, partly as a consequence of the sheer size of the data sets involved, and partly because of issues of pattern matching. However, since statistics provides the intellectual glue underlying the effort, it is important for statisticians to become involved. There are very real opportunities for statisticians to make significant contributions.
We report enhancement of the mechanical stability of graphene through a one-step method to disperse gold nanoparticles on the pristine graphene without any added agent.
eng_Latn
33,945
Model free probabilistic design with uncertain Markov parameters identified in closed loop
Probabilistic and Randomized Methods for Design under Uncertainty
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,946
A Bayesian approach to retrieve soil parameters from SAR data: effect of prior information
Information Theory, Inference, and Learning Algorithms
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,947
Nonparametric Bayesian inference on multivariate exponential families
Bayesian Generalized Kernel Inference for Terrain Traversability Mapping
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,948
Bounded solutions of affine stochastic differential equations and stability
Linear Differential Equations and Control
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,949
Lower bounds on the run time of the univariate marginal distribution algorithm on OneMax
Level-Based Analysis of the Univariate Marginal Distribution Algorithm
The joy of copulas: bivariate distributions with uniform marginals
eng_Latn
33,950
A probability model for irradiated bacteria and methods to obtain estimates of the parameters
Actions of Radiations on Living Cells
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,951
An Introduction to Intuitionistic Markov Chain
Processor power considerations-An application of fuzzy Markov chains
Approximating probabilistic inference in Bayesian belief networks is NP-hard
kor_Hang
33,952
Variational Bayesian Inference in Large Vector Autoregressions with Hierarchical Shrinkage
VAR forecasting using Bayesian variable selection
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,953
Optimal importance sampling for simulation of Lévy processes
Correcting for Simulation Bias in Monte Carlo Methods to Value Exotic Options in Models Driven by Lévy Processes
Lipoprotein Electrophoresis Should Be Discontinued as a Routine Procedure
eng_Latn
33,954
Fast and efficient Bayesian semi-parametric curve-fitting and clustering in massive data
A Short Note on Almost Sure Convergence of Bayes Factors in the General Set-Up
No evidence for apparent extent between parallels as the basis of the Poggendorff effect
eng_Latn
33,955
A New Method of Testability Prediction on Model and Probability Analysis
Multi-signal flow graphs: a novel approach for system testability analysis and fault diagnosis
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,956
Sequential Bayesian Inference for Detection and Response to Seasonal Epidemics
Particle Learning for Sequential Bayesian Computation
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,957
Regularity properties of the equilibrium distribution
Removable Singularities of Continuous Harmonic Functions in R^m
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,958
Nonparametric Bayesian models for AR and ARX identification
Infinite latent feature models and the Indian buffet process
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,959
Parameter estimation for discretely-observed linear birth-and-death processes
Frequentist estimation of an epidemic’s spreading potential when observations are scarce
Fully nonparametric estimation of scalar diffusion models
eng_Latn
33,960
Weighted quantile regression with missing covariates using empirical likelihood
Weighted empirical likelihood for quantile regression with nonignorable missing covariates
Weighted empirical likelihood for quantile regression with nonignorable missing covariates
eng_Latn
33,961
On the calculation of a possibilistic equivalent value
Fuzzy sets and information granularity
Approximating probabilistic inference in Bayesian belief networks is NP-hard
eng_Latn
33,962
Most modern SAT solvers expose a range of parameters to allow some customization for improving performance on specific types of instances. Performing this customization manually can be challenging and time-consuming, and as a consequence several automated algorithm configuration methods have been developed for this purpose. Although automatic algorithm configuration has already been applied successfully to many different SAT solvers, a comprehensive analysis of the configuration process is usually not readily available to users. Here, we present SpySMAC to address this gap by providing a lightweight and easy-to-use toolbox for (i) automatic configuration of SAT solvers in different settings, (ii) a thorough performance analysis comparing the best found configuration to the default one, and (iii) an assessment of each parameter’s importance using the fANOVA framework. To showcase our tool, we apply it to Lingeling and probSAT, two state-of-the-art solvers with very different characteristics.
Stochastic local search solvers for SAT made large progress with the introduction of probability distributions like the ones used by the SAT Competition 2011 winners Sparrow2010 and EagleUp. These solvers, though, used a relatively complex decision heuristic, where probability distributions played a marginal role. In this paper we analyze a pure and simple probability-distribution-based solver, probSAT, which is probably one of the simplest SLS solvers ever presented. We analyze different functions for the probability distribution for selecting the next flip variable with respect to the performance of the solver. Further, we also analyze the role of make and break within the definition of these probability distributions and show that the general definition of the score improvement by flipping a variable, as make minus break, is questionable. By empirical evaluations we show that the performance of our new algorithm exceeds that of the SAT Competition winners by orders of magnitude.
We prove that groups acting geometrically on delta-quasiconvex spaces contain no essential Baumslag-Solitar quotients as subgroups. This implies that they are translation discrete, meaning that the translation numbers of their nontorsion elements are bounded away from zero.
eng_Latn
33,963
We consider the stochastic thin-film equation with colored Gaussian Stratonovich noise and establish the existence of nonnegative weak (martingale) solutions. The construction is based on a Trotter-Kato-type decomposition into a deterministic and a stochastic evolution, which yields an easy-to-implement numerical algorithm. Compared to previous work, no interface potential has to be included and the Trotter-Kato scheme allows for a simpler proof of existence than in the case of Ito noise.
We study pathwise entropy solutions for scalar conservation laws with inhomogeneous fluxes and quasilinear multiplicative rough path dependence. This extends the previous work of Lions, Perthame and Souganidis who considered spatially independent and inhomogeneous fluxes with multiple paths and a single driving singular path respectively. The approach is motivated by the theory of stochastic viscosity solutions which relies on special test functions constructed by inverting locally the flow of the stochastic characteristics. For conservation laws this is best implemented at the level of the kinetic formulation which we follow here.
Data mining has a wide range of applications in the real world. However, it is possible to disclose the private information of users in the process of data mining. Therefore, it is of great significance to protect the users' privacy while mining the knowledge behind the data. In this paper, we propose a Naive Bayes classification method based on differential privacy. For nominal attributes, we add Laplace noise to the count. For numerical attributes, we add Laplace noise to the mean, standard deviation, and scale parameter, and then use the noisy parameters to calculate the prior probability and conditional probability. For numerical attributes, we assume that they follow Gaussian, Laplace, or lognormal distribution, and apply our algorithms to compare utilities.
eng_Latn
33,964
In approximate design theory, necessary and sufficient conditions that a repeated measurements design be universally optimal are given as linear equations whose unknowns are the proportions of subjects on the treatment sequences. Both the number of periods and the number of treatments in the designs are arbitrary, as is the covariance matrix of the normal response model. The existence of universally optimal symmetric designs is proved; the single linear equation which the proportions satisfy is given. A formula for the information matrix of a universally optimal design is derived.
This article discusses D-optimal Bayesian crossover designs for generalized linear models. Crossover trials with t treatments and p periods, for $t \leq p$, are considered. The designs proposed in this paper minimize the log determinant of the variance of the estimated treatment effects over all possible allocations of the n subjects to the treatment sequences. It is assumed that the p observations from each subject are mutually correlated while the observations from different subjects are uncorrelated. Since main interest is in estimating the treatment effects, the subject effect is assumed to be a nuisance, and generalized estimating equations are used to estimate the marginal means. To address the issue of parameter dependence a Bayesian approach is employed. Prior distributions are assumed on the model parameters which are then incorporated into the D-optimal design criterion by integrating it over the prior distribution. Three case studies, one with binary outcomes in a 4$\times$4 crossover trial, a second one based on count data for a 2$\times$2 trial, and a third one with Gamma responses in a 3$\times$2 crossover trial, are used to illustrate the proposed method. The effect of the choice of prior distributions on the designs is also studied.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
33,965
Some applications of Inductive Logic Programming (ILP) are presented. Those applications are chosen that specifically benefit from relational descriptions generated by ILP programs, and from ILP's ability to accommodate background knowledge. Applications included are: drug design, predicting the secondary structure of proteins, and design of finite-element meshes. Some other applications are briefly described. The practical advantages and disadvantages of ILP learning are discussed.
Inductive Logic Programming (ILP) involves the construction of first-order definite clause theories from examples and background knowledge. Unlike both traditional Machine Learning and Computational Learning Theory, ILP is based on lock-step development of Theory, Implementations and Applications. ILP systems have successful applications in the learning of structure-activity rules for drug design, semantic grammars rules, finite element mesh design rules and rules for prediction of protein structure and mutagenic molecules. The strong applications in ILP can be contrasted with relatively weak PAC-learning results (even highly-restricted forms of logic programs are known to be prediction-hard). It has been recently argued that the mismatch is due to distributional assumptions made in application domains. These assumptions can be modelled as a Bayesian prior probability representing subjective degrees of belief. Other authors have argued for the use of Bayesian prior distributions for reasons different to those here, though this has not lead to a new model of polynomial-time learnability. Incorporation of Bayesian prior distributions over time-bounded hypotheses in PAC leads to a new model called U-learnability. It is argued that U-learnability is more appropriate than PAC for Universal (Turing computable) languages. Time-bounded logic programs have been shown to be polynomially U-learnable under certain distributions. The use of time-bounded hypotheses enforces decidability and allows a unified characterisation of speed-up learning and inductive learning. U-learnability has as special cases PAC and Natarajan's model of speed-up learning.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
33,966
Pilgrim's monopoly is a probabilistic process giving rise to a non-negative sequence $T_1, T_2,\ldots$ that is infinitely exchangeable, a natural model for time-to-event data. The one-dimensional marginal distributions are exponential. The rules are simple, the process is easy to generate sequentially, and a simple expression is available for both the joint density and the multivariate survivor function. There is a close connection with the Kaplan-Meier estimator of the survival distribution. Embedded within the process is an infinitely exchangeable ordered partition process connected to Markov branching processes in neutral evolutionary theory. Some aspects of the process, such as the distribution of the number of blocks, can be investigated analytically and confirmed by simulation. By ignoring the order, the embedded process can be considered as an infinitely exchangeable partition process, shown to be closely related to the Chinese restaurant process. Further connection to the Indian buffet process is also provided. Thus we establish a previously unknown link between the well-known Kaplan-Meier estimator and the important Ewens sampling formula.
The Bayesian approach remained rather unsuccessful in treating nonparametric problems. This is primarily due to the difficulty in finding a workable prior distribution on the parameter space, which in nonparametric problems is taken to be a set of probability distributions on a given sample space. Two desirable properties of a prior: 1. The support of the prior should be large. 2. The posterior distribution given a sample of observations should be manageable analytically. These properties are antagonistic: one may be obtained at the expense of the other.
In this paper, we introduce an automated Bayesian visual inspection framework for printed circuit board (PCB) assemblies, which is able to simultaneously deal with various shaped circuit elements (CEs) on multiple scales. We propose a novel hierarchical multi-marked point process model for this purpose and demonstrate its efficiency on the task of solder paste scooping detection and scoop area estimation, which are important factors regarding the strength of the joints. A global optimization process attempts to find the optimal configuration of circuit entities, considering the observed image data, prior knowledge, and interactions between the neighboring CEs. The computational requirements are kept tractable by a data-driven stochastic entity generation scheme. The proposed method is evaluated on real PCB data sets containing 125 images with more than 10 000 splice entities.
eng_Latn
33,967
Model-based design can shorten the development time of complex systems by the use of simulation techniques. However, it can be hard to simulate the system as a whole if it is developed in a concurr...
Co-simulation consists of the theory and techniques to enable global simulation of a coupled system via the composition of simulators. Despite the large number of applications and growing interest in the challenges, the field remains fragmented into multiple application domains, with limited sharing of knowledge. This tutorial aims at introducing co-simulation of continuous systems, targeted at researchers new to the field.
A Bayesian representation of the analysis of variance by A. Gelman is introduced with ecological examples. These examples demonstrate typical situations encountered in ecological studies. Compared to conventional methods, the multilevel approach is more flexible in model formulation, easier to set up, and easier to present. Because the emphasis is on estimation, multilevel models are more informative than the results from a significance test. The improved capacity is largely due to the changed computation methods. In our examples, we show that (1) the multilevel model is able to discern a treatment effect that is smaller than the conventional approach can detect, (2) the graphical presentation associated with the multilevel method is more informative, and (3) the multilevel model can incorporate all sources of uncertainty to accurately describe the true relationship between the outcome and potential predictors.
eng_Latn
33,968
Bayesian Belief Networks (BBNs) provide a suitable formalism for many medical applications. Unfortunately, learning a BBN from a database is a very complex task, since the specification of a BBN requires a large amount of information that is not always available in real-world databases. In this paper, we introduce a new class of BBNs, called Ignorant Belief Networks, able to reason with incomplete information. We will show how this new formalism can be used to forecast blood glucose concentration in insulin-dependent diabetic patients using underspecified probabilistic models directly derived from a real-world database.
Expert systems commonly employ some means of drawing inferences from domain and problem knowledge, where both the knowledge and its implications are less than certain. Methods used include subjective Bayesian reasoning, measures of belief and disbelief, and the Dempster-Shafer theory of evidence. Analysis of systems based on these methods reveals important deficiencies in areas such as the reliability of deductions and the ability to detect inconsistencies in the knowledge from which deductions were made. A new system called INFERNO addresses some of these points. Its approach is probabilistic but makes no assumptions whatsoever about the joint probability distributions of pieces of knowledge, so the correctness of inferences can be guaranteed. INFERNO informs the user of inconsistencies that may be present in the information presented to it, and can make suggestions about changing the information to make it consistent. An example from a Bayesian system is reworked, and the conclusions reached by that system and INFERNO are compared.
This paper presents a framework for gesture recognition by modeling a system based on Dynamic Bayesian Networks (DBNs) from a Marionette point of view. To incorporate human qualities like anticipation and empathy inside the perception system of a social robot remains, so far, an open issue. It is our goal to search for ways of implementation and test the feasibility. Towards this end we started the development of the guide robot 'Nicole', equipped with a monocular camera and an inertial sensor to observe its environment. The context of interaction is a person performing gestures and 'Nicole' reacting by means of audio output and motion. In this paper we present a solution to the gesture recognition task based on Dynamic Bayesian Networks (DBNs). We show that using a DBN is a human-like concept of recognizing gestures that encompasses the quality of anticipation through the concept of prediction and update. A novel approach is used by incorporating a marionette model in the DBN as a trade-off between simple constant acceleration models and complex articulated models.
eng_Latn
33,969
This paper presents a new Bayesian nonlinear structural equation modeling approach to hierarchical model assessment of dynamic systems, considering uncertainty in both predicted and measured time series data. A generalized structural equation modeling with nonlinear latent variables is presented to model two sets of relationships in multivariate hierarchical model assessment, namely, the computational model to system-level data, and low-level data to system-level data. A hierarchical Bayesian network with Markov Chain Monte Carlo simulation and Gibbs sampling is developed to represent the two relationships and estimate the influencing factors between them. A Bayesian interval hypothesis testing-based method is employed to quantify the confidence in the predictive model at various levels. The effect of low-level data on the model assessment at the system level is identified by Bayesian inference and factor analysis. The proposed methodology is implemented for hierarchical model validation of three dynamic system problems.
Structural equation models (SEMs) have been widely adopted for inference of causal interactions in complex networks. Recent examples include unveiling topologies of hidden causal networks over which processes, such as spreading diseases, or rumors propagate. The appeal of SEMs in these settings stems from their simplicity and tractability, since they typically assume linear dependencies among observable variables. Acknowledging the limitations inherent to adopting linear models, the present paper puts forth nonlinear SEMs, which account for (possible) nonlinear dependencies among network nodes. The advocated approach leverages kernels as a powerful encompassing framework for nonlinear modeling, and an efficient estimator with affordable tradeoffs is put forth. Interestingly, pursuit of the novel kernel-based approach yields a convex regularized estimator that promotes edge sparsity, a property exhibited by most real-world networks, and the resulting optimization problem is amenable to proximal-splitting optimization methods. To this end, solvers with complementary merits are developed by leveraging the alternating direction method of multipliers and proximal gradient iterations. Experiments conducted on simulated data demonstrate that the novel approach outperforms linear SEMs with respect to edge detection errors. Furthermore, tests on a real gene expression dataset unveil interesting new edges that were not revealed by linear SEMs, which could shed more light on regulatory behavior of human genes.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
33,970
We discuss a nonparametric estimation method for the mixing distributions in mixture models. The problem is formalized as a minimization of a one-parameter objective functional, which becomes the maximum likelihood estimation or the kernel vector quantization in special cases. Generalizing the theorem for the nonparametric maximum likelihood estimation, we prove the existence and discreteness of the optimal mixing distribution and provide an algorithm to calculate it. It is demonstrated that with an appropriate choice of the parameter, the proposed method is less prone to overfitting than the maximum likelihood method. We further discuss the connection between the unifying estimation framework and the rate-distortion problem.
A minimum divergence estimation method is developed for robust parameter estimation. The proposed approach uses new density-based divergences which, unlike existing methods of this type such as minimum Hellinger distance estimation, avoid the use of nonparametric density estimation and associated complications such as bandwidth selection. The proposed class of ‘density power divergences’ is indexed by a single parameter α which controls the trade-off between robustness and efficiency. The methodology affords a robust extension of maximum likelihood estimation for which α = 0. Choices of α near zero afford considerable robustness while retaining efficiency close to that of maximum likelihood.
The Bayesian approach remained rather unsuccessful in treating nonparametric problems. This is primarily due to the difficulty in finding a workable prior distribution on the parameter space, which in nonparametric problems is taken to be a set of probability distributions on a given sample space. Two desirable properties of a prior: 1. The support of the prior should be large. 2. The posterior distribution given a sample of observations should be manageable analytically. These properties are antagonistic: one may be obtained at the expense of the other.
eng_Latn
33,971
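For orientation, the "density power divergence" family named in the second abstract has a standard closed form in the literature (here g is the data density and f_theta the model); the limit as alpha goes to zero recovers the Kullback-Leibler divergence, and hence minimizing it recovers maximum likelihood:

```latex
d_\alpha(g, f_\theta) = \int \Big\{ f_\theta(x)^{1+\alpha}
  - \Big(1 + \tfrac{1}{\alpha}\Big)\, g(x)\, f_\theta(x)^{\alpha}
  + \tfrac{1}{\alpha}\, g(x)^{1+\alpha} \Big\}\, dx, \qquad \alpha > 0,
\qquad
\lim_{\alpha \to 0} d_\alpha(g, f_\theta) = \int g(x) \log\frac{g(x)}{f_\theta(x)}\, dx.
```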
The subject of this report is a methodology for the transformation of (experimental) data into predictive models. We use a concrete example, drawn from the field of combustion chemistry, and examine the data in terms of precisely defined modes of scientific collaboration. The numerical methodology that we employ is founded on a combination of response surface technique and robust control theory. The numerical results show that an essential element of scientific collaboration is collaborative processing of data, demonstrating that combining the entire collection of data into a joint analysis extracts substantially more of the information content of the data. © 2003 Wiley Periodicals, Inc. Int J Chem Kinet 36: 57–66, 2004
We apply a Bayesian parameter estimation technique to a chemical kinetic mechanism for n-propylbenzene oxidation in a shock tube to propagate errors in experimental data to errors in Arrhenius parameters and predicted species concentrations. We find that, to apply the methodology successfully, conventional optimization is required as a preliminary step. This is carried out in two stages: First, a quasi-random global search using a Sobol low-discrepancy sequence is conducted, followed by a local optimization by means of a hybrid gradient-descent/Newton iteration method. The concentrations of 37 species at a variety of temperatures, pressures, and equivalence ratios are optimized against a total of 2378 experimental observations. We then apply the Bayesian methodology to study the influence of uncertainties in the experimental measurements on some of the Arrhenius parameters in the model as well as some of the predicted species concentrations. Markov chain Monte Carlo algorithms are employed to sample from the posterior probability densities, making use of polynomial surrogates of higher order fitted to the model responses. We conclude that the methodology provides a useful tool for the analysis of distributions of model parameters and responses, in particular their uncertainties and correlations. Limitations of the method are discussed. For example, we find that using second-order response surfaces and assuming normal distributions for propagated errors is largely adequate, but not always.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
33,972
A Bayesian Belief Network (BN) is a model of a joint distribution over a set of n variables, with a DAG structure to represent the immediate dependencies between the variables, and a set of parameters (aka CPTables) to represent the local conditional probabilities of a node, given each assignment to its parents. In many situations, these parameters are themselves random variables; this may reflect the uncertainty of the domain expert, or may come from a training sample used to estimate the parameter values. The distribution over these "CPtable variables" induces a distribution over the response the BN will return to any "What is Pr(H | E)?" query. This paper investigates the variance of this response, showing first that it is asymptotically normal, then providing its mean and asymptotic variance. We then present an effective general algorithm for computing this variance, which has the same complexity as simply computing the (mean value of) the response itself, i.e., O(n 2^w), where n is the number of variables and w is the effective tree width. Finally, we provide empirical evidence that this algorithm, which incorporates assumptions and approximations, works effectively in practice, given only small samples.
In the last decade, several architectures have been proposed for exact computation of marginals using local computation. In this paper, we compare three architectures--Lauritzen-Spiegelhalter, Hugin, and Shenoy-Shafer--from the perspective of graphical structure for message propagation, message-passing scheme, computational efficiency, and storage efficiency.
State-of-the-art Bayesian Network learning algorithms do not scale to more than a few hundred variables; thus, they fall far short from addressing the challenges posed by the large datasets in biomedical informatics (e.g., gene expression, proteomics, or text-categorization data). In this paper, we present a BN learning algorithm, called the Max-Min Bayesian Network learning (MMBN) algorithm that can induce networks with tens of thousands of variables, or alternatively, can selectively reconstruct regions of interest if time does not permit full reconstruction. MMBN is based on a local algorithm that returns targeted areas of the network and on putting these pieces together. On a small dataset MMBN outperforms other state-of-the-art methods. Subsequently, its scalability is demonstrated by fully reconstructing from data a Bayesian Network with 10,000 variables using ordinary PC hardware. The novel algorithm pushes the envelope of Bayesian Network learning (an NP-complete problem) by about two orders of magnitude.
eng_Latn
33,973
We consider families of semiparametric Bayesian models based on Dirichlet process mixtures, indexed by a multidimensional hyperparameter that includes the precision parameter. We wish to select the hyperparameter by considering Bayes factors. Our approach involves distinguishing some arbitrary value of the hyperparameter, and estimating the Bayes factor for the model indexed by the hyperparameter vs. the model indexed by the distinguished point, as the hyperparameter varies. The approach requires us to select a finite number of hyperparameter values, and for each get Markov chain Monte Carlo samples from the posterior distribution corresponding to the model indexed by that hyperparameter value. Implementation of the approach relies on a likelihood ratio formula for Dirichlet process models. Because we may view parametric models as limiting cases where the precision hyperparameter is infinite, the method also enables us to decide whether or not to use a semiparametric or an entirely parametric model. We illustrate the methodology through two detailed examples involving meta-analysis.
We present a method for comparing semiparametric Bayesian models, constructed under the Dirichlet process mixture (DPM) framework, with alternative semiparameteric or parameteric Bayesian models. A distinctive feature of the method is that it can be applied to semiparametric models containing covariates and hierarchical prior structures, and is apparently the first method of its kind. Formally, the method is based on the marginal likelihood estimation approach of Chib (1995) and requires estimation of the likelihood and posterior ordinates of the DPM model at a single high-density point. An interesting computation is involved in the estimation of the likelihood ordinate, which is devised via collapsed sequential importance sampling. Extensive experiments with synthetic and real data involving semiparametric binary data regression models and hierarchical longitudinal mixed-effects models are used to illustrate the implementation, performance, and applicability of the method.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
33,974
In this contribution we review the mean field approach to Bayesian independent component analysis (ICA) recently developed by the authors [1, 2]. For the chosen setting of additive Gaussian noise on the measured signal and Maximum Likelihood II estimation of the mixing matrix and the noise, the expected sufficient statistics are obtained from the two first posterior moments of the sources. These can be effectively estimated using variational mean field theory and its linear response correction. We give an application to feature extraction in neuro-imaging using a binary (stimuli/no stimuli) source paradigm. Finally, we discuss the possibilities of extending the framework to convolutive mixtures, temporal and ‘spatial’ source prior correlations, identification of common sources in mixtures of different media and ICA for density estimation.
Source separation problems are ubiquitous in the physical sciences; any situation where signals are superimposed calls for source separation to estimate the original signals. In this tutorial I will discuss the Bayesian approach to the source separation problem. This approach has a specific advantage in that it requires the designer to explicitly describe the signal model in addition to any other information or assumptions that go into the problem description. This leads naturally to the idea of informed source separation, where the algorithm design incorporates relevant information about the specific problem. This approach promises to enable researchers to design their own high-quality algorithms that are specifically tailored to the problem at hand.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
33,975
A technique to estimate the uncertainties of the parameters of a neural network model, i.e., the synaptic weights, was described in the work of Aires [2004]. Using these weight uncertainty estimates, we compute the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of the neural network technique. The theory is applied to the same remote sensing problem as in the work of Aires [2004] concerning the retrieval of surface skin temperature, microwave surface emissivities and integrated water vapor content from a combined analysis of microwave and infrared observations over land.
A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian "evidence" automatically embodies "Occam's razor," penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.
We report enhancement of the mechanical stability of graphene through a one-step method to disperse gold nanoparticles on the pristine graphene without any added agent.
eng_Latn
33,976
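Both abstracts above lean on the same textbook identity for network error bars under the Laplace approximation; it is stated here for reference (the generic form, not a claim about either paper's exact implementation). With MAP weights w_MP, Hessian A of the regularized training objective at the mode, and output sensitivity g:

```latex
p(w \mid D) \approx \mathcal{N}\!\big(w_{MP},\, A^{-1}\big), \qquad
g = \nabla_w\, y(x; w)\big|_{w_{MP}}, \qquad
\sigma_y^2(x) \approx g^{\top} A^{-1} g + \sigma_\nu^2,
```

where the last term is the observation-noise variance added on top of the weight-uncertainty contribution.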
In this article we describe Bayesian nonparametric procedures for two-sample hypothesis testing. Namely, given two sets of samples $\mathbf{y}^{(1)} \stackrel{iid}{\sim} F^{(1)}$ and $\mathbf{y}^{(2)} \stackrel{iid}{\sim} F^{(2)}$, with $F^{(1)}, F^{(2)}$ unknown, we wish to evaluate the evidence for the null hypothesis $H_0: F^{(1)} \equiv F^{(2)}$ versus the alternative $H_1: F^{(1)} \neq F^{(2)}$. Our method is based upon a nonparametric Pólya tree prior centered either subjectively or using an empirical procedure. We show that the Pólya tree prior leads to an analytic expression for the marginal likelihood under the two hypotheses and hence an explicit measure of the probability of the null $\Pr(H_0 \mid \{\mathbf{y}^{(1)}, \mathbf{y}^{(2)}\})$.
Bayesian methods are ubiquitous in machine learning. Nevertheless, the analysis of empirical results is typically performed by frequentist tests. This implies dealing with null hypothesis significance tests and p-values, even though the shortcomings of such methods are well known. We propose a nonparametric Bayesian version of the Wilcoxon signed-rank test using a Dirichlet process (DP) based prior. We address in two different ways the problem of how to choose the infinite dimensional parameter that characterizes the DP. The proposed test has all the traditional strengths of the Bayesian approach; for instance, unlike the frequentist tests, it allows verifying the null hypothesis, not only rejecting it, and taking decisions which minimize the expected loss. Moreover, one of the solutions proposed to model the infinite-dimensional parameter of the DP allows isolating instances in which the traditional frequentist test is guessing at random. We show results dealing with the comparison of two classifiers using real and simulated data.
UNC-45A is a ubiquitously expressed protein highly conserved throughout evolution. Most of what we currently know about UNC-45A pertains to its role as a regulator of the actomyosin system...
eng_Latn
33,977
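The "explicit measure of the probability of the null" in the first abstract follows the usual Bayes-factor algebra. Writing $m(\cdot)$ for a marginal likelihood under the (Pólya tree) prior and $\pi_0$ for the prior probability of $H_0$, a generic form, under the convention that the alternative treats the two samples as draws from independent distributions, is

```latex
\Pr\!\big(H_0 \mid \mathbf{y}^{(1)}, \mathbf{y}^{(2)}\big)
= \frac{\pi_0\, m\big(\mathbf{y}^{(1)} \cup \mathbf{y}^{(2)}\big)}
       {\pi_0\, m\big(\mathbf{y}^{(1)} \cup \mathbf{y}^{(2)}\big)
        + (1 - \pi_0)\, m\big(\mathbf{y}^{(1)}\big)\, m\big(\mathbf{y}^{(2)}\big)}.
```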
Reinforcement learning is a hard problem and the majority of the existing algorithms suffer from poor convergence properties for difficult problems. In this paper we propose a new reinforcement learning method that utilizes the power of global optimization methods such as simulated annealing. Specifically, we use a particularly powerful version of simulated annealing called adaptive simulated annealing (ASA) (Ingber, 1989). Towards this end we consider a batch formulation for the reinforcement learning problem, unlike the online formulation almost always used. The advantage of the batch formulation is that it allows state-of-the-art optimization procedures to be employed, and thus can lead to further improvements in algorithmic convergence properties. The proposed algorithm is applied to a decision making test problem, and it is shown to obtain better results than the conventional Q-learning algorithm.
Ideas by Statistical Mechanics (ISM) is a generic program to model evolution and propagation of ideas/patterns throughout populations subjected to endogenous and exogenous interactions. The program is based on the author's work in Statistical Mechanics of Neocortical Interactions (SMNI), and uses the author's Adaptive Simulated Annealing (ASA) code for optimizations of training sets, as well as for importance-sampling to apply the author's copula financial risk-management codes, Trading in Risk Dimensions (TRD), for assessments of risk and uncertainty. This product can be used for decision support for projects ranging from diplomatic, information, military, and economic (DIME) factors of propagation/evolution of ideas, to commercial sales, trading indicators across sectors of financial markets, advertising and political campaigns, etc. It seems appropriate to base an approach for propagation of ideas on the only system so far demonstrated to develop and nurture ideas, i.e., the neocortical brain. A statistical mechanical model of neocortical interactions, developed by the author and tested successfully in describing short-term memory and EEG indicators, is the proposed model. ISM develops subsets of macrocolumnar activity of multivariate stochastic descriptions of defined populations, with macrocolumns defined by their local parameters within specific regions and with parameterized endogenous inter-regional and exogenous external connectivities. Parameters of subsets of macrocolumns will be fit using ASA to patterns representing ideas. Parameters of external and inter-regional interactions will be determined that promote or inhibit the spread of these ideas. Tools of financial risk management, developed by the author to process correlated multivariate systems with differing non-Gaussian distributions using modern copula analysis, importance-sampled using ASA, will enable bona fide correlations and uncertainties of success and failure to be calculated. Marginal distributions will be evolved to determine their expected duration and stability using algorithms developed by the author, i.e., PATHTREE and PATHINT codes.
Berzelius failed to make use of Faraday's electrochemical laws in his laborious determination of equivalent weights.
eng_Latn
33,978
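Both abstracts above build on simulated annealing. As a point of reference, here is a minimal generic annealer; it is plain SA, not Ingber's ASA (no per-parameter temperature schedules or reannealing), and all names are illustrative.

```python
# Generic simulated-annealing minimizer with Metropolis acceptance and a
# geometric cooling schedule.
import numpy as np

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, step=0.1,
                        n_iter=5000, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    fx, t = f(x), t0
    best_x, best_f = x.copy(), fx
    for _ in range(n_iter):
        cand = x + step * rng.standard_normal(x.shape)  # random proposal
        fc = f(cand)
        # Accept improvements always; accept uphill moves with prob exp(-d/t).
        if fc <= fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        t *= cooling  # cool down
    return best_x, best_f
```

For instance, `simulated_annealing(lambda v: float(np.sum(v ** 2)), np.ones(3))` drifts toward the origin; in the batch RL formulation above, `f` would instead score a whole parameter vector against the training batch.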
Dynamical Foundations of the Neural Circuit for Bayesian Decision Making
On the basis of accumulating behavioral and neural evidence, it has recently been proposed that the brain neural circuits of humans and animals are equipped with several specific properties, which ensure that perceptual decision making implemented by the circuits can be nearly optimal in terms of Bayesian inference. Here, I introduce the basic ideas of such a proposal and discuss its implications from the standpoint of biophysical modeling developed in the framework of dynamical systems.
Deconstructing networks and rewiring alterable modules in a rational way is critical to optimize drug discovery and develop personalized medicine.
eng_Latn
33,979
The Role of Randomization in Inference
It is argued that randomization has no role to play in the design or analysis of an experiment. If a Bayesian approach is adopted this conclusion is easily demonstrated. Outside that approach two principles, of conditionality and similarity, lead, via the likelihood principle, to the same conclusion. In the case of design, however, it is important to avoid confounding the effect of interest with an unexpected factor and this consideration leads to a principle of haphazardness that is clearly related to, but not identical with, randomization. The role of exchangeability is discussed.
The methods of false-color composite band selection, sample size, and sampling methods, which influence classification precision, are introduced. This article is an important reference for improving classification precision.
eng_Latn
33,980
On the Generative Nature of Prediction
Given an observed stochastic process, computational mechanics provides an explicit and efficient method of constructing a minimal hidden Markov model within the class of maximally predictive models. Here, the corresponding so-called e-machine encodes the mechanisms of prediction. We propose an alternative notion of predictive models in terms of a hidden Markov model capable of generating the underlying stochastic process. A comparison of these two notions of prediction reveals that our approach is less restrictive and thereby allows for predictive models that are more concise than the e-machine.
The present paper is focused on the development of generating function for partial mock theta functions of order two and order ten. Mathematics Subject Classification: 33D15, 11B65, 11D15, 41A21
eng_Latn
33,981
A Tentative Notation System for Kashaya Pomo Dances
Author(s): McMurray, Susan | Abstract: Given the inadequacies of narrative description and filming, dance notation is singularly efficient as a mode of recording movement for future analysis. It allows the maximum in clarity and detail with the minimum expenditure of time and energy. In deciding which of the movement notation systems in current use might be the most practical in recording Kashaya Pomo dances, I found both Labanotation (Hutchinson 1954) and Benesh notation (Benesh and Benesh 1969) to be too complex for my purposes, and the system devised by G. Kurath in her work with the Tewa (Kurath and Garcia 1970) to be too simplified. Therefore, the obvious alternative was to devise a system of my own which could be especially tailored to Kashaya Pomo dances.
We present a scalable approach to performing approximate fully Bayesian inference in generic state space models. The proposed method is an alternative to particle MCMC that provides fully Bayesian inference of both the dynamic latent states and the static parameters of the model. We build on recent advances in computational statistics that combine variational methods with sequential Monte Carlo sampling, and we demonstrate the advantages of performing full Bayesian inference over the static parameters rather than just performing variational EM approximations. We illustrate how our approach enables scalable inference in multivariate stochastic volatility models and self-exciting point process models that allow for flexible dynamics in the latent intensity function.
eng_Latn
33,982
Applying Jackson’s Methodological Ideal-Types: Problems of Differentiation and Classification
In The Conduct of Inquiry in International Relations, Patrick Jackson situates methodologies in International Relations in relation to their underlying philosophical assumptions. One of his aims is to map International Relations debates in a way that ‘capture[s] current controversies’ (p. 40). This ambition is overstated: whilst Jackson’s typology is useful as a clarificatory tool, (re)classifying existing scholarship in International Relations is more problematic. One problem with Jackson’s approach is that he tends to run together the philosophical assumptions which decisively differentiate his methodologies (by stipulating a distinctive warrant for knowledge claims) and the explanatory strategies that are employed to generate such knowledge claims, suggesting that the latter are entailed by the former. In fact, the explanatory strategies which Jackson associates with each methodology reflect conventional practice in International Relations just as much as they reflect philosophical assumptions. This ma...
In this paper, identification of the joint probability density function (PDF) from missing data is considered. The model of the PDF is a Gaussian mixture. It is well known that the expectation-maximization (EM) algorithm is useful for the identification of Gaussian mixtures. Here it is extended to the case of missing elements of the observations. It will be shown that, after identifying the PDF model, it is easy to estimate the missing elements as well as the system output variable.
eng_Latn
33,983
Bayesian composite quantile regression for linear mixed-effects models
Longitudinal data are commonly modeled with normal mixed-effects models. Most modeling methods are based on traditional mean regression, which results in non-robust estimation in the presence of extreme values or outliers. Median regression is also not the best choice for estimation, especially for non-normal errors. Compared to conventional modeling methods, composite quantile regression can provide robust estimation results even for non-normal errors. In this paper, based on a so-called pseudo composite asymmetric Laplace distribution (PCALD), we develop a Bayesian treatment of composite quantile regression for mixed-effects models. Furthermore, with the location-scale mixture representation of the PCALD, we establish a Bayesian hierarchical model and achieve posterior inference of all unknown parameters and latent variables using the Markov chain Monte Carlo (MCMC) method. Finally, this newly developed procedure is illustrated by some Monte Carlo simulations and a case analysis of HIV/AIDS clinical...
Our study aimed to examine psychometric properties and cross-cultural utility of the Behavior Assessment System for Children-2, Parent Rating Scale-Child (BASC-2 PRS-C) in Korean children. Two study populations were recruited: a general population sample (n
eng_Latn
33,984
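For reference, the frequentist objective that the Bayesian PCALD construction above mirrors is composite quantile regression over K levels $\tau_k = k/(K+1)$, with the check loss $\rho_\tau(u) = u\,(\tau - \mathbf{1}\{u < 0\})$; the asymmetric Laplace connection is that $\exp(-\rho_\tau)$ is the kernel of an ALD density:

```latex
\min_{\beta,\; b_{\tau_1}, \dots, b_{\tau_K}} \;
\sum_{k=1}^{K} \sum_{i=1}^{n}
\rho_{\tau_k}\!\big(y_i - b_{\tau_k} - x_i^{\top}\beta\big).
```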
Certain Generating Functions for Partial Mock Theta Functions of Order Two and Order Ten
The present paper is focused on the development of generating function for partial mock theta functions of order two and order ten. Mathematics Subject Classification: 33D15, 11B65, 11D15, 41A21
We explore the subject of uniting the control-theoretic with the factorization-based approach to recommendation, arguing that tensor factorization may be employed to vanquish combinatorial complexity impediments related to more sophisticated MDP models that take a history of previous states rather than one single state into account. Specifically, we introduce a tensor representation of transition probabilities of Markov-k-processes and devise a Tucker-based approximation architecture that relies crucially on the notion of an aggregation basis described in Chap. 6. As our method requires a partitioning of the set of state transition histories, we are left with the challenge of how to determine a suitable partitioning, for which we propose a genetic algorithm.
eng_Latn
33,985
A non-parametric Bayesian network model for predicting corrosion depth on buried pipelines
The present study develops a non-parametric Bayesian network (NPBN) model to predict the corrosion depth on buried pipelines using the pipeline age and local soil properties. The dependence structu...
Dynamic characteristics of a system have to be obtained so as to provide reliable data for system dynamic modification and dynamic optimum design. The identification of system modal parameters can be recast as a global optimization problem in the wavelet plane, because wavelet ridges carry much information about the system's characteristic parameters. First, the paper introduces the so-called PSO (Particle Swarm Optimization) technique into modal analysis and proposes a PSO wavelet-ridge extraction algorithm. Furthermore, the formulas identifying modal parameters from a wavelet ridge are derived and computation steps are given. Simulation experiments are implemented in order to evaluate the precision of this method and its robustness to noise. Finally, the proposed method is applied to the modal parameter identification of a watermelon to explore its effectiveness.
eng_Latn
33,986
Knowing the basis on which the case instances were selected, for example, is crucial in cumulation; otherwise it is not possible to know whether best case, worst case, typical, or the like instances are being aggregated.
To know the basis by which the case instances were chosen.
There were no ideas as to how to base the cases.
eng_Latn
33,987
The problem of estimating the parameters of a non-Gaussian autoregressive process is addressed. Departure of the driving noise from Gaussianity is shown to have the potential for improving the accuracy of the parameter estimates. While the standard linear prediction techniques are computationally efficient, they show a substantial loss of efficiency when applied to non-Gaussian processes. A maximum-likelihood estimator is proposed for more precise estimation of the parameters of these processes, coupled with a realistic non-Gaussian model for the driving noise. The performance is compared to that of the linear prediction estimator and, as expected, the maximum-likelihood estimator displays a marked improvement.
By weakening the larger components and strengthening the smaller ones, gaussianization can enhance the Gaussianity of samples and improve the performance of subsequent correlation tests. First, an explicit definition of the gaussianizing filter and a practical method to evaluate its filtering performance are given. Second, based on the cumulative distribution function and its inverse, a typical gaussianizing filter, the so-called G-filter, is proposed and studied. Finally, examples with lake-trial data are presented.
In this paper we prove that every nonlinear ∗-Lie derivation from a factor von Neumann algebra into itself is an additive ∗-derivation.
eng_Latn
33,988
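The G-filter abstract above says only that the filter is built from the cumulative distribution function and its inverse. A minimal empirical version of that idea (an assumption for illustration, not necessarily the paper's exact filter) is the rank transform:

```python
# Rank-based gaussianization: empirical CDF followed by the inverse
# standard-normal CDF; a monotone map of the sample toward N(0, 1).
import numpy as np
from scipy.stats import norm, rankdata

def gaussianize(x):
    n = len(x)
    u = (rankdata(x) - 0.5) / n  # empirical CDF values, kept inside (0, 1)
    return norm.ppf(u)           # inverse Gaussian CDF
```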
Critical percolation in bidimensional coarsening
I discuss a recently unveiled feature in the dynamics of two-dimensional coarsening systems on the lattice with Ising symmetry: they first approach a critical percolating state via the growth of a new length scale, and only later enter the usual dynamic scaling regime. The time needed to reach the critical percolating state diverges with the system size. These observations are common to Glauber, Kawasaki, and voter dynamics in pure and weakly disordered systems. An extended version of this account appeared in C. R. Phys. (2016). I refer to the relevant publications for details.
In this letter, we employ the sparse Bayesian multitask learning to realize joint sparsity-enforcing polarimetric inverse scattering. The prior assumption about the data model is redesigned to avoid information sharing across unrelated tasks. Based on this assumption, we provide the formulas for Bayesian inference as well as the algorithm flowchart, which still has the linear complexity. Experimental results demonstrate that polarimetric inverse scattering with the proposed method can effectively extract the characteristics of canonical scatterers.
eng_Latn
33,989
Crowdsourcing prior information to improve study design and data analysis
Though Bayesian methods are being used more frequently, many still struggle with the best method for setting priors with novel measures or task environments. We propose a method for setting priors by eliciting continuous probability distributions from naive participants. This allows us to include any relevant information participants have for a given effect. Even when prior means are near zero, this method provides a principled way to estimate dispersion and produce shrinkage, reducing the occurrence of overestimated effect sizes. We demonstrate this method with a number of published studies and compare the effect of different prior estimation and aggregation methods.
e21543 Background: Our study aims at developing a very short screening questionnaire for the preoperative evaluation of older cancer patients in the surgical clinics. Methods: Our study was based on...
eng_Latn
33,990
Methodologies for system-level remaining useful life prediction
Novel approach to nonlinear/non-Gaussian Bayesian state estimation
Evolution of the National Schistosomiasis Control Programmes in the People's Republic of China
eng_Latn
33,991
Bayesian Analysis of Factorial Designs
Time required for judgements of numerical inequality.
Guidelines for Colonoscopy Surveillance After Screening and Polypectomy: A Consensus Update by the US Multi-Society Task Force on Colorectal Cancer
eng_Latn
33,992
The Physiological Functions of Soybean Saponins and Their Application in Animal Feed
Soybean is widely planted in China. It is rich in physiologically active substances, among which soybean saponin is a notable pharmacologically active ingredient with unique biological properties. This paper reviews the physiological functions of soybean saponin and briefly describes its applications in animal feed.
In this article, an S potential energy function based on Bayes' theorem is presented and used to test loop decoys. The results show that this S potential energy function is efficient and well suited for loop structure prediction, and its prediction ability, measured by the discrimination rate of loop structures, is better than that of RAPDF.
eng_Latn
33,993
The incidence of natural Clostridium welchii alpha-antitoxin in Indian equines: its influence on the results of antigenic stimulus.
The presence of natural circulating antitoxins in horses and its influence on hyperimmunization has been studied by many observers including Bolton (1896), Glenny (1925a, b), Barr & Glenny (1945), Basu & Roy (1946) and Ottensooser (1946). Bolton (1896), working with 2 horses, failed to observe any correlation between the amount of natural circulating antitoxin and the results of immunization with diphtheria toxin. Other observers mentioned above, working with a much larger number of animals and with different exotoxins, obtained a definite relationship between natural immunity and the immunological performances of the animals on subsequent hyperimmunization, and the significance of the presence of natural antitoxin is now well recognized.
This chapter investigated a neuro-rough model – a combination of a Multi-Layered Perceptron (MLP) neural network with rough set theory – for the modeling of interstate conflict. The model was formulated using a Bayesian framework and trained using a Monte Carlo technique with the Metropolis criterion. The model was then tested on militarized interstate disputes and was found to combine the accuracy of the Bayesian MLP model with the transparency of the rough set model. The technique presented was compared to the genetic algorithm optimized rough sets. The presented Bayesian neuro-rough model performed better than the genetic algorithm optimized rough set model.
eng_Latn
33,994
Effective cubature FastSLAM: SLAM with Rao-Blackwellized particle filter and cubature rule for Gaussian weighted integral
Simultaneous localization and mapping (SLAM) is a key technology for mobile robot autonomous navigation in unknown environments. While the FastSLAM algorithm is a popular solution to the large-scale SLAM problem, it suffers from two major drawbacks: one is particle set degeneracy due to the lack of measurements in the proposal distribution of the particle filter; the other is error accumulation caused by inaccurate linearization of the nonlinear robot motion model and the environment measurement model. To overcome these problems, a new Jacobian-free cubature FastSLAM (CFastSLAM) algorithm is proposed in this paper. The main contribution of the algorithm lies in the utilization of the third-degree cubature rule, which calculates the nonlinear transition density of the Gaussian prior more accurately, to design an optimal proposal distribution of the particle filter and to estimate the Gaussian densities of the feature landmarks. On the basis of the Rao-Blackwellized particle filter, the proposed algorithm is comprised by two ma...
Two-choice response times are a common type of data, and much research has been devoted to the development of process models for such data. However, the practical application of these models is notoriously complicated, and flexible methods are largely nonexistent. We combine a popular model for choice response times—the Wiener diffusion process—with techniques from psychometrics in order to construct a hierarchical diffusion model. Chief among these techniques is the application of random effects, with which we allow for unexplained variability among participants, items, or other experimental units. These techniques lead to a modeling framework that is highly flexible and easy to work with. Among the many novel models this statistical framework provides are a multilevel diffusion model, regression diffusion models, and a large family of explanatory diffusion models. We provide examples and the necessary computer code.
eng_Latn
33,995
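The third-degree spherical-radial cubature rule that CFastSLAM leans on has a compact standard form: a Gaussian-weighted integral is approximated by 2n equally weighted points mu ± sqrt(n) S e_i with P = S Sᵀ. A minimal sketch (the function names are mine):

```python
# Third-degree cubature approximation of E[f(x)] for x ~ N(mu, P).
import numpy as np

def cubature_points(mu, P):
    n = len(mu)
    S = np.linalg.cholesky(P)
    offsets = np.sqrt(n) * S  # column i is sqrt(n) * S e_i
    return np.concatenate([mu + offsets.T, mu - offsets.T], axis=0)

def cubature_expectation(f, mu, P):
    pts = cubature_points(mu, P)       # 2n points, weight 1/(2n) each
    return np.mean([f(p) for p in pts], axis=0)
```

The rule is exact for polynomials up to degree three, which is what buys the accuracy gain over first-order linearization when building the proposal distribution.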
Research on Benefit Calculation of Joint Distribution
In accordance with the concept of joint distribution, the paper gives the calculation scope and items of joint distribution in order to make the benefit calculation of joint distribution clear.
In this article, an S potential energy function based on Bayes' theorem is presented and used to test loop decoys. The results show that this S potential energy function is efficient and well suited for loop structure prediction, and its prediction ability, measured by the discrimination rate of loop structures, is better than that of RAPDF.
eng_Latn
33,996
Neuro-Rough Sets for Modeling Interstate Conflict
This chapter investigated a neuro-rough model – a combination of a Multi-Layered Perceptron (MLP) neural network with rough set theory – for the modeling of interstate conflict. The model was formulated using a Bayesian framework and trained using a Monte Carlo technique with the Metropolis criterion. The model was then tested on militarized interstate disputes and was found to combine the accuracy of the Bayesian MLP model with the transparency of the rough set model. The technique presented was compared to the genetic algorithm optimized rough sets. The presented Bayesian neuro-rough model performed better than the genetic algorithm optimized rough set model.
The present study develops a non-parametric Bayesian network (NPBN) model to predict the corrosion depth on buried pipelines using the pipeline age and local soil properties. The dependence structu...
eng_Latn
33,997
Monte Carlo EM algorithm for generalized linear models with linear structural random effects
We propose generalized linear mixed models with linear structural random effects. In this article, a Monte Carlo EM-type algorithm is developed for maximum likelihood estimation of the proposed models. To avoid computation of the complicated multiple integrals involved, the E-step is completed by Monte Carlo techniques. The M-step is carried out efficiently by simple conditional maximization. Standard errors of the MLEs are obtained via Louis's identity.
The study of the stability of many stochastic processes, such as Markov chains, sometimes requires the eigenvalues and eigenvectors of the transition matrix. This paper investigates a methodology that computes fuzzy eigenvalues and fuzzy eigenvectors of a fuzzy Markov chain transition matrix under max-min composition.
eng_Latn
33,998
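The Monte Carlo E-step sketched above replaces the intractable expected complete-data log-likelihood with a sample average over draws of the random effects; in standard notation,

```latex
\widehat{Q}\big(\theta \mid \theta^{(t)}\big)
= \frac{1}{M} \sum_{m=1}^{M} \log f\big(y, b^{(m)}; \theta\big),
\qquad
b^{(m)} \sim p\big(b \mid y;\, \theta^{(t)}\big),
```

and the M-step maximizes this estimate over theta, here by simple conditional maximization.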
Mie light-scattering granulometer with adaptive numerical filtering. I. Theory.
A search procedure based on a least-squares method including a regularization scheme constructed from numerical filtering is presented. This method, with the addition of a nephelometer, can be used to determine the particle-size distributions of various scattering media (aerosols, fogs, rocket exhausts, motor plumes) from angular static light-scattering measurements. For retrieval of the distribution function, the experimental data are matched with theoretical patterns derived from Mie theory. The method is numerically investigated with simulated data, and the performance of the inverse procedure is evaluated. The results show that the retrieved distribution function is quite reliable, even for strong levels of noise.
In order to compute an optimal kernel probability density function estimator (KDE), the plug-in method is considered in this paper. Such an algorithm gives a way to optimize both the kernel and the bandwidth. Here, we propose a new procedure which is faster than the common plug-in one. At each iteration, a factor J(f), which is linked to the second-order derivative of the pdf, is analytically approximated. The pdf is estimated only once, at the end of the iterations. Random variables with difficult distributions are generated in order to demonstrate the efficiency of the proposed optimal estimator. These algorithms are then applied to genetics data in order to give a better characterization of the neutrality of a given population, by using the fast MISE-optimal pdf estimate of Tajima's D criterion. This criterion is a statistic allowing the estimation of the genetic neutrality of the sampled population.
eng_Latn
33,999
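The plug-in bandwidth optimized in the last abstract targets the AMISE-optimal $h = \big[R(K) / \big(\mu_2(K)^2 R(f'') n\big)\big]^{1/5}$, where $R(f'')$ involves the second derivative that the factor J(f) approximates. As a hedged illustration, the normal-reference shortcut replaces $R(f'')$ by its value under a fitted normal, giving Silverman's rule; the names below are mine, not the paper's:

```python
# Normal-reference bandwidth plus a plain Gaussian-kernel density estimate.
import numpy as np

def silverman_bandwidth(x):
    x = np.asarray(x, dtype=float)
    return 1.06 * x.std(ddof=1) * len(x) ** (-0.2)  # 1.06 * sigma * n^(-1/5)

def kde(x_grid, data, h):
    """Gaussian-kernel density estimate evaluated on x_grid."""
    z = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))
```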