51,001
Probabilities vs. Odds Ratios
In an observational study, the odds ratio can be calculated either by conditioning on exposure ($E$ and its complement $E'$) or on outcome ($C$ and its complement $C'$): $\psi = \frac{P(C|E)/P(C'|E)}{P(C|E')/P(C'|E')} = \frac{P(E|C)/P(E'|C)}{P(E|C')/P(E'|C')}$

In your setting, the binary outcome is whether subjects get into the program ($C$) or not ($C'$), and the exposure could be femininity ($E$) versus masculinity ($E'$). The problem is that your data are ambiguous. You say:

Suppose we know that the probability of a female getting into a program is $p=0.7$

If this means $P(C|E)=0.7$ then clearly $P(C'|E)=1-P(C|E)=0.3$, but we have no information about $P(C|E')$ or $P(C'|E')=1-P(C|E')$. In other words, what's the "probability of a male getting into the program"? (It's not 0.3.) Likewise, if you interpret it as $P(E|C)=0.7$ then $P(E'|C)=1-P(E|C)=0.3$, but now you're missing $P(E|C')$ and $P(E'|C')=1-P(E|C')$. So you need to ask yourself whether that $0.7$ is the probability of getting into the program for females, or the probability of being female for those who got into the program.

As for what the odds ratio is used for, as you imply:
- it measures relative odds in a single number that may be less than, equal to, or greater than 1, and so summarises the data well;
- it can be calculated simply; for example, there's a simple formula for a $2\times2$ table of exposure counts versus outcome counts;
- it is used in calculations when analysing matched pairs in observational studies;
- it may be interpreted as the relative risk when certain outcomes (such as the incidence of lung cancer) are rare, since it then approximates $\frac{P(C|E)}{P(C|E')}$;
- and there are probably some uses I haven't encountered yet.
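For concreteness, here is a minimal R sketch of the simple $2\times2$ cross-product formula mentioned above. The counts are made up purely for illustration:

# rows = exposure (E, E'), columns = outcome (C, C'); made-up counts
tab <- matrix(c(70, 30,    # E : 70 admitted, 30 not
                40, 60),   # E': 40 admitted, 60 not
              nrow = 2, byrow = TRUE,
              dimnames = list(c("E", "E'"), c("C", "C'")))
psi <- (tab["E", "C"] / tab["E", "C'"]) / (tab["E'", "C"] / tab["E'", "C'"])
psi  # cross-product ratio: (70 * 60) / (30 * 40) = 3.5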
51,002
Probabilities vs. Odds Ratios
I think your confusion here is that sex is one binary variable. It has two levels, male and female, but they are not separate variables. So there aren't two variables to calculate the odds ratio of.
51,003
How to generate confidence bands for $\hat{Y}$
By fitting an lm object you obtain all the necessary components to do this. Mathematically you have the estimate $$\hat{\beta} = \left( \mathbf{X}^T\mathbf{X} \right) ^{-1} \left( \mathbf{X}^T y \right) $$ and an estimate of its covariance, $$\mbox{vcov}\left(\hat{\beta} \right) = \hat{\sigma}^2 \left( \mathbf{X}^T\mathbf{X} \right) ^{-1}. $$ The beta-hats are obtained by calling coef on the lm object and the variance estimate by calling vcov on it. For any observation $\mathbf{X}_{pred}$ at which you wish to predict the fitted value $\hat{Y} = E \left[ Y \mid \mathbf{X} = \mathbf{X}_{pred} \right]$, since $\hat{Y} = \mathbf{X}_{pred}^T \hat{\beta}$, a simple manipulation gives $$\mbox{var} \left( \hat{Y} \right) = \mathbf{X}_{pred}^T \mbox{vcov}\left(\hat{\beta} \right) \mathbf{X}_{pred} = \hat{\sigma}^2 \mathbf{X}_{pred}^T \left( \mathbf{X}^T\mathbf{X} \right) ^{-1} \mathbf{X}_{pred}. $$ It is a simple property of quadratic forms that the farther $\mathbf{X}_{pred}$ is from the sample mean of the covariates (in a Euclidean sense), the greater this quadratic form and, hence, the variance of $\hat{Y}$ will be. Simply, the variance only differs as a function of the cross product of your predicted $X$. An illustrative example in R, since you seem to be interested in both the theoretical and computational aspects:

x <- 1:100
y <- rnorm(100, x, 100)
plot(x, y)
f <- lm(y ~ x)
X <- model.matrix(f)
# pointwise variance of the fitted values: X_pred' vcov(beta-hat) X_pred
pred.var <- apply(X, 1, function(Xrow) t(Xrow) %*% vcov(f) %*% Xrow)
lines(x, fitted(f) + 1.96 * sqrt(pred.var))
lines(x, fitted(f) - 1.96 * sqrt(pred.var))
# the confidence band reflects uncertainty in the predicted ys, so it should
# be substantially tighter than the spread of the observed values
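If it helps, base R's predict.lm can serve as a cross-check on the hand-rolled band above (it uses $t$ quantiles rather than 1.96, and reuses the fitted object f from the example):

ci <- predict(f, interval = "confidence", level = 0.95)
head(ci)  # columns: fit, lwr, upr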
51,004
How to generate confidence bands for $\hat{Y}$
We have $$ \hat{\beta}_1 \pm t_{\alpha/2,\,n-2} \sqrt{\frac{MSE}{\sum_i(x_i-\bar{x})^2}} $$ for the slope; then, in R (assuming x, y, n and alpha are already defined):

l   <- lm(y ~ x)
MSE <- sum(l$residuals^2) / (n - 2)   # residual mean square on n - 2 df
SSX <- sum((x - mean(x))^2)
U   <- coef(l)["x"] + qt(1 - alpha/2, n - 2) * sqrt(MSE / SSX)
L   <- coef(l)["x"] - qt(1 - alpha/2, n - 2) * sqrt(MSE / SSX)
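As a cross-check, base R's confint() computes the same t-based intervals for the coefficients of an lm fit (each with its own standard error); alpha is assumed defined as above:

confint(l, level = 1 - alpha)   # e.g. alpha <- 0.05 gives 95% intervals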
51,005
Choosing a method to solve a many-to-one mapping problem
One possible approach is to assume that, conditional on a device's features, each cookie appears independently. In that case, you can fit an SVM or decision tree or some other classifier (I don't recommend logistic regression for classification), with the appearance of each cookie being a binary outcome. This means you have one model for each cookie. Yes, this means you are training 100,000 separate classifiers, each on who-knows-how-many data points. But the theoretical framework is straightforward and the computational challenge is not insurmountable.
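A minimal R sketch of that idea, with simulated data and only 10 "cookies" standing in for the 100,000; the feature names are made up, and rpart decision trees are just one possible choice of classifier:

library(rpart)

set.seed(1)
n_devices <- 500
n_cookies <- 10
features  <- data.frame(f1 = rnorm(n_devices), f2 = rnorm(n_devices))
# cookie_matrix[i, j] = 1 if cookie j was seen on device i
cookie_matrix <- matrix(rbinom(n_devices * n_cookies, 1, 0.3),
                        nrow = n_devices)

# one binary classifier per cookie
models <- lapply(seq_len(n_cookies), function(j) {
  rpart(y ~ ., data = cbind(y = factor(cookie_matrix[, j]), features),
        method = "class")
})
# predicted probability that cookie 1 appears on each device
p1 <- predict(models[[1]], features)[, "1"]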
51,006
Relationship between VC dimension and degrees of freedom
Yaser Abu-Mostafa --- Learning From Data: degrees of freedom are an abstraction of the effective number of parameters. The effective number is based on how many dichotomies one can get, rather than how many real-valued parameters are used. In the case of the 2-dimensional perceptron, one can think of slope and intercept (plus a binary degree of freedom for which region goes to $+1$), or one can think of three parameters $w_0, w_1, w_2$ (though the weights can be simultaneously scaled up or down without affecting the resulting hypothesis). The degrees of freedom, however, are 3 because we have the flexibility to shatter 3 points, not because of one way or another of counting the number of parameters.
51,007
Does one need to adjust for document length (in terms of pages) in topic modeling?
I haven't used topic models much, but I can say that if you apply the usual clustering methods to un-normalized document-term matrices (even when the dimensionality of the data is reduced with LSA), you'll see that longer articles tend to cluster together, just because they have more words. So you may take a look at some of your topics and see if the documents inside make sense. Also, try to calculate the average document length per topic and see whether the phenomenon I mention takes place. Then you can repeat the same on the unit-normalized data and see if the results make more sense.
51,008
How to make a two-tailed hypergeometric test?
I ended up taking the probability mass in the shorter tail and multiplying it by 2 to account for the fact that it is a two-tailed test. I have no reference that this is the best method, but it felt quite intuitive to me!
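A minimal R sketch of that "double the smaller tail" approach, with made-up numbers (phyper's parametrization: x successes drawn, m successes and n failures in the population, k draws):

x <- 7; m <- 20; n <- 30; k <- 12
p_lower <- phyper(x, m, n, k)                          # P(X <= x)
p_upper <- phyper(x - 1, m, n, k, lower.tail = FALSE)  # P(X >= x)
p_two_sided <- min(1, 2 * min(p_lower, p_upper))
p_two_sided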
51,009
anomaly detection with Markov chain
The final properties of the score may depend on the normalization procedure, so it may be better to keep using fixed-length unix command sequences. However, something that offers "basic consistency" is the geometric average. Most likely: $\hat{P}(S)=\left(q_{S_1}\prod_{t=2}^{|S|}p_{S_{t-1}S_t}\right)^{1/|S|}$ Therefore, typing the same sequence twice leads to the following: $\hat{P}(S+S)=\left(q_{S_1}\, p_{S_{|S|}S_{1}}\prod_{t=2}^{|S|}p_{S_{t-1}S_t} \prod_{t=2}^{|S|}p_{S_{t-1}S_t}\right)^{1/(2|S|)}$ If the sequence is long enough (i.e. $|S|$ large), you obtain $\hat{P}(S+S)\approx\hat{P}(S)$.
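A minimal R sketch of the geometric-average score; the command names, initial distribution q and transition matrix P are all made up for illustration:

score <- function(S, q, P) {
  # log of q_{S_1} * prod of transition probabilities, then geometric average
  logp <- log(q[S[1]]) + sum(log(P[cbind(S[-length(S)], S[-1])]))
  exp(logp / length(S))
}

cmds <- c("ls", "cd", "ls")
q <- c(ls = 0.5, cd = 0.3, vi = 0.2)
P <- matrix(1/3, 3, 3, dimnames = list(names(q), names(q)))
score(cmds, q, P)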
51,010
Estimating AR process for Logistic Regression
I do believe you should recode your values. They are not categorical, they are time-based. Say your first month is January 1995. Then that would be 1, then 2, then 3, and so on; January 1999 would take the value 49 (i.e. $4 \times 12 + 1$). This is what you should fit the AR(1) on. It is fine to fit the GLM part of the model with the categorical equivalent if you believe that there is an effect of "January". This essentially assumes there is cyclic behaviour every 12 months and estimates that effect, something that is necessary in order to fit an AR(1) model to begin with. I am not wholly sure what the implications of using a GLM and an AR(1) model together are for your covariance structure/inference.
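A tiny R helper along these lines (the function name is mine, not from the question):

# sequential month index, with January 1995 = 1
month_index <- function(year, month) (year - 1995) * 12 + month
month_index(1995, 1)  # 1
month_index(1999, 1)  # 49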
51,011
Finding the support of transformations of random variables
$\DeclareMathOperator{\support}{support}$The general question here is a very hard problem, for the following reason. Let $f(x_1, \dots, x_n)$ be any function, and let $X_1, \dots, X_n$ be Gaussian random variables. Then the r.v. $f(X_1, \dots, X_n)$ has support at 0 if and only if the equation $f(x_1, \dots, x_n) = 0$ has real-valued solutions. So determining the support of a function of random variables is at least as hard as finding the zeros of the relevant functions. But for even fairly simple classes of functional expressions, finding their zeros is undecidable (there is no algorithm!). For instance, by Richardson's Theorem, even if $f$ is restricted to be a function of one argument using the constants $\pi$ and $\ln 2$, addition, multiplication, and the functions $\sin, \exp$ and absolute value--the question of whether $f(X)$ has support at 0 is in general undecidable.

You can make some progress on specific sub-cases. For instance, if $X$ has support on $[a, b]$ and $f$ is continuous and monotone increasing in that range, then $f(X)$ has support on $[f(a), f(b)]$. Similarly, if the random variables $X$ and $Y$ have joint support on the entire rectangle $[a_X, b_X] \times [a_Y, b_Y]$, and $f$ is a continuous function of two arguments, monotone increasing in both on that rectangle, then $f(X, Y)$ has support on $[f(a_X, a_Y), f(b_X, b_Y)]$ (and there are similar versions if $f$ is monotone decreasing in one or both arguments instead).

Finally, to tackle non-monotone functions of random variables, you can chop up their domain into rectangular chunks where they are monotone, calculate the support of the function conditional on the arguments lying within each chunk, and then take the union of those supports at the end: for instance, $\support(X^2) = \support(X^2 \mid X \ge 0) \cup \support(X^2 \mid X \le 0)$. But the process of conditioning in this way can get very thorny if there are lots of sub-expressions and dependence between the arguments. Clever application of the above three principles will probably get you fairly far, but there's no general answer for arbitrary functions.
51,012
Will normalizing training and testing data separately cause under/overfitting?
In general, this should be avoided. The basic assumption of learning algorithms is that all data come from the same distribution; applying different normalization procedures (or the same procedure with different parameters) to the training and test data violates this. There are cases, however, where something different may be appropriate. If it is known that the testing data follow a different distribution, other methods can be applied. If the testing data (which are unlabelled) are available for training, techniques such as semi-supervised learning can be applied. In case the differences are due to measurement errors (such as batch effects, which are common in biological data), batch effect removal methods can be applied (see this paper for a comparison and the references therein). If unlabelled data are not available, I don't see any way to handle it unless prior knowledge is available.
51,013
Will normalizing training and testing data separately cause under/overfitting?
I don't see any relation with underfitting or overfitting. It is more about applying the same type of transformation with different parameters. I tried ZCA whitening on MNIST data with various dataset sizes m = 10k, 1k, 100. My intuition was that the same transformation with the same parameters (mu, sigma) must be applied; otherwise, the data in the training or test set get biased. Think of it as adding 1.0 to the training set while adding 1.2 to the test set: the test set is biased by 0.2. Should we expect similar results if we apply ZCA whitening separately? I say yes, but only if the training and test sets have sufficiently many examples. (If the underlying variation is not much, a small dataset may also show similar results because of its intrinsically similar structure.) As the number of examples increases, the variation between the two sets decreases, so we collect pretty similar mu, sigma, etc. Here are the results:

m = 10k: common normalization = 94.980%, separate normalization = 94.740%
m = 1k:  common normalization = 81.500%, separate normalization = 79.600%
m = 100: common normalization = 70.000%, separate normalization = 65.000%

It is clear that as the number of samples increases, the difference between common and separate normalization decreases, as a result of diminishing sampling error.
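If it helps, here is a minimal sketch of the "common vs. separate" comparison, using plain centering and scaling rather than ZCA whitening, on simulated data (not MNIST):

set.seed(1)
train <- matrix(rnorm(100 * 5), 100, 5)
test  <- matrix(rnorm( 20 * 5),  20, 5)

# common: reuse the training parameters on the test set
mu   <- colMeans(train)
sdev <- apply(train, 2, sd)
test_common   <- scale(test, center = mu, scale = sdev)

# separate: the test set computes its own parameters
test_separate <- scale(test)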
51,014
Population version of Kendall's tau
Yes, we are measuring the difference between the probability of concordance and the probability of discordance for two observations coming from different distributions. I think $ Q(C_1,C_2) $ does not give you much information (it turns out $ Q $ depends on the copulas, hence the notation) unless one of the copulas represents a "reference" copula. For example, if you have a pair $ (X,Y) $ with copula $ C $, you may have $ Q(C,M) $ near $1$, where $ M $ is the comonotonicity copula. This means the probability that one observation of your pair $ (X,Y) $ is concordant with an observation of the pair $ (X',Y') $ with the same marginals but functionally dependent variables is much higher than the probability of discordance, hence $ Y $ tends to be higher when $ X $ is higher. Another example: $ Q(C,\Pi)>0 $, where $ \Pi $ is the product copula. Then the probability that one observation of your pair is concordant with (lies in the 1st or 3rd quadrant with respect to) a random observation of the pair $ (X',Y') $, where $ X' $ and $ Y' $ are distributed as $X$ and $Y$ but independent, is greater than the probability of discordance. So $ X $ and $ Y $ have to be somehow positively dependent. This is the idea behind Spearman's rho rank correlation coefficient. Of course, now that you have some information about $ C $ you can use that copula as a "reference" as well.
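For reference, the standard concordance-function identities from the copula literature (not stated in the question, but they are what the remarks above lean on; this is the usual Nelsen-style presentation):

$$ Q(C_1,C_2)=4\int\!\!\int_{[0,1]^2} C_2(u,v)\,dC_1(u,v)-1, \qquad \tau = Q(C,C), \qquad \rho_S = 3\,Q(C,\Pi). $$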
51,015
Extrapolation of 2d movement
This is the approach I used to impute missing values in IMU (Inertial Measurement Unit) data:

1. Build a trajectory matrix out of the time series.
2. Run the missing value imputation algorithm from the TFOCS library (MATLAB).
3. Extract the time series from the filled trajectory matrix.

The function you want to use is nuclear norm minimization for missing value imputation in matrices. The assumption is that the values are missing at random and the matrix is low-rank (the case for many real-world data). The only important parameter to tune is the window length: the bigger the better, but at greater computational expense, since a bigger window captures more information. The results were pretty impressive for our IMU data (acceleration, radial acceleration, magnetometer and quaternion data). It did not work well if the missing data came in large contiguous blocks.
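As a small illustration of step 1, here is one way to build the trajectory (Hankel) matrix; the function name and the use of R rather than MATLAB are my own choices:

# embed a series into an L x K trajectory matrix; L is the window length
trajectory_matrix <- function(x, L) {
  K <- length(x) - L + 1
  sapply(seq_len(K), function(k) x[k:(k + L - 1)])
}
trajectory_matrix(1:10, L = 4)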
51,016
Extrapolation of 2d movement
You can assume a constant acceleration:

V0 = initial velocity
Vf = final velocity
t  = time elapsed
a  = (Vf - V0) / t

Then use the same equation with that constant acceleration to solve for the velocity at each interpolated timepoint:

V(t) = a*t + V0

Perhaps instead you do not want to make that assumption, and you want to guess a "transition" from some set of observed transitions. Then you will need to give a more detailed description of the data.
51,017
Prove $X_i$ in Span of $ X_k, k \neq i$
Based on @whuber's hint, I came up with the following. Please do let me know if I made a mistake somewhere. Let $\alpha^{m}:=\left(a_{1}^{m},\dots, a_{n}^{m}\right)/\left\Vert \left(a_{1}^{m},\dots, a_{n}^{m}\right)\right\Vert$. Since every bounded sequence in $\mathbb{R}^{n}$ has a convergent sub-sequence (Bolzano-Weierstrass), we may pick $m_{l}:\alpha^{m_{l}}\to b$ for some $b\in\mathbb{R}^{n}$. Then $\sum_{k}\alpha_{k}^{m_{l}}X_{k}\to\sum_{k}b_{k}X_{k}$ pointwise. By assumption we also have $$ \mathbb{E}\left|\sum_{k}\alpha_{k}^{m_{l}}X_{k}\right|\leq1/\left\Vert a^{m_{l}}\right\Vert \to0 $$ Since $\sum_{k}\mathbb{E}\left|X_{k}\right|<\infty$ and $\left|\sum_{k} \alpha_{k}^{m_{l}}X_{k}\right|\leq\sum_{k}\left| \alpha_{k}^{m_{l}}X_{k}\right|\leq\sum_{k}\left|X_{k}\right|$, dominated convergence gives: $$0=\lim_{l\to\infty}\mathbb{E}\left|\sum_{k} \alpha_{k}^{m_{l}}X_{k}\right|=\mathbb{E}\left|\sum_{k}b_{k}X_{k}\right|,$$ which is what we wanted to prove (upon standardizing $b$ so that $b_k=1$ for at least one $k$).
51,018
is there a book on stats similar to Kallenberg's on probability?
Presumably the person who asked this question is long gone, but for future reference I will mention a book here. Note: I have not read this book. Theoretical Statistics: Topics for a Core Course, by Robert W. Keener. Amazon: link. I quote the last three lines of the review by Amazon user Der Boandlkramer: This book is in a class by itself! Its natural companion (and prerequisite, to some extent) textbook on probability is the monumental "Foundations of Modern Probability," by Olav Kallenberg. For whatever it's worth, Keener's book is used at UCLA (STAT 200B).
51,019
is there a book on stats similar to Kallenberg's on probability?
Peter Bickel and Kjell Doksum wrote Mathematical Statistics Volume 1 and Volume 2. The two books form a comprehensive guide to modern (frequentist) statistical methods at a level of mathematical sophistication similar to Kallenberg.
51,020
Practical collaborative filtering application for large database
A simple approach would be the following. Let's say you have 5 items you can train on and 3 that you are recommending.

1) Create a similarity matrix between active items and training items, so your similarity matrix is 5 x 3. Similarity can be based on just item attributes, and/or other users' activities on these items.
2) Each time a user comes to the site you grab their evaluations; say they range from 1 to 10 and look like this: [10, 4, 10, 3, 1].
3) Decompose the evaluations into binary indicator vectors, for example: eval_10 => [1, 0, 1, 0, 0].
4) item_scores = eval_10 * similarity. item_scores is now 1 x 3, where the scores mean: items similar to items that the user ranked 10. You can do the same for 9 and give a smaller weight, and for 1 and give a negative weight, then sum up the scores.

If you are able to process user actions quickly (I know it can be tricky), then your recommendations are a pretty good proxy for real time. The key is you don't have to recalculate the similarity matrix over and over; perhaps once every 24 hours is enough. This is a very simple approach, but I've used it pretty successfully. Hope this helps.
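A minimal R sketch of steps 1-4 with made-up numbers; the weights chosen for ratings 10, 9 and 1 are arbitrary illustrative choices:

similarity <- matrix(runif(5 * 3), nrow = 5, ncol = 3)   # step 1 (5 x 3)
evals <- c(10, 4, 10, 3, 1)                              # step 2
weights <- c("10" = 1, "9" = 0.5, "1" = -1)              # chosen weights
item_scores <- rep(0, 3)
for (r in names(weights)) {                              # steps 3 and 4
  indicator <- as.numeric(evals == as.numeric(r))
  item_scores <- item_scores + weights[r] * (indicator %*% similarity)
}
item_scores  # higher = more similar to items the user liked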
51,021
Interpretation P value one sample KS test
So, since P > α, I can not reject the null?

That's correct, you wouldn't reject the null with that test. The Kolmogorov-Smirnov test can detect scale shifts, but not nearly as efficiently as something specifically designed to pick them up.

and hence assume my data follow an exponential distribution with rate 1/117.5?

Failure to reject the null doesn't mean the null is actually the case. You can't distinguish your data from an exponential with mean 117.5, but that doesn't mean it's exponential or that its mean is 117.5. Many other rate parameters would be consistent with the data, and many distributions other than the exponential would be as well. If you have a strong a priori reason to think it should take the null value and be exponential, it may in some situations make sense to act as if the null were true, but generally speaking it probably isn't. It helps to keep that in mind.
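For reference, the test being discussed can be run in R along these lines; the data here are simulated rather than the asker's:

x <- rexp(50, rate = 1/117.5)        # stand-in for the observed data
ks.test(x, "pexp", rate = 1/117.5)   # one-sample KS test against Exp(1/117.5)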
51,022
Interpretation P value one sample KS test
If you want an exponential distribution, I think you could try to transform it towards normal by taking the log of your data, ln(data), and then run your code with ks.test(x, "pnorm", mean = 0, sd = 1).
51,023
Model Stacking algorithm
If I understand your pseudo-code correctly, I don't see where the stacking model is being tested in the cross-validation loop. I would expect to see something like

model4 = fitmodel4(model1, model2, model3, train)
y4 = predict(model4, test)

Similar to how the base models' hyper-parameters are being tuned using cross-validation prediction error (e.g. the neural network's number of nodes, the regression's independent variables and potential transformations), there would also be tuning of the stacking model's hyper-parameters in the cross-validation loop.

As for your questions: yes, the final base learners are fit using the entire training data; and yes, the stacking model would use the outputs from the final base learners. Overfitting is related to model complexity, not to training on more data.
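A minimal R sketch of what that missing test step could look like, with simulated data and two simple lm base learners standing in for the real models:

set.seed(1)
n <- 200
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$y <- dat$x1 + dat$x2^2 + rnorm(n)

folds <- sample(rep(1:5, length.out = n))
cv_err <- numeric(5)
for (k in 1:5) {
  train <- dat[folds != k, ]
  test  <- dat[folds == k, ]
  m1 <- lm(y ~ x1, data = train)           # base model 1
  m2 <- lm(y ~ poly(x2, 2), data = train)  # base model 2
  # stacker trained on the base models' training-set predictions
  stack_train <- data.frame(y = train$y, p1 = predict(m1), p2 = predict(m2))
  m4 <- lm(y ~ p1 + p2, data = stack_train)
  # the step the pseudo-code was missing: evaluate the stacker on the fold
  stack_test <- data.frame(p1 = predict(m1, test), p2 = predict(m2, test))
  y4 <- predict(m4, stack_test)
  cv_err[k] <- mean((test$y - y4)^2)
}
mean(cv_err)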
51,024
Correct standard errors for weighted linear regression
These two expressions disagree, as you note, in how the residuals enter the calculation: the differences of $Y$ from the predicted values are either included in or omitted from the calculation of the standard errors. They are indeed different estimators, but they converge to the same thing in the long run. They can also be combined to create a "sandwich" estimator. To revisit some basic modeling assumptions: the weighted linear regression model is estimated from a weighted estimating equation of the form $$U(\beta) = \mathbf{X}^T \mathbf{W}\left( Y - \mathbf{X}\beta\right)$$ where $\mathbf{W}$ is just the diagonal matrix of weights. This estimating equation is also the normal equations (partial log likelihood) for the MLE. Then the expected information is $$\mathbf{A}= -\frac{\partial U(\beta)}{\partial \beta} = \mathbf{X}^T\mathbf{W} \mathbf{X}$$ and $\mathbf{A}^{-1}$ is a consistent estimator of the covariance matrix for $\hat\beta$ when 1. the mean model is appropriately specified and 2. the weights are the inverse variance of the residuals. You have already stated the $\mathbf{A}$ matrix in your first display. Contrast this with the observed information: $$\mathbf{B} = E[U(\beta)U(\beta)^T] = \mathbf{X}^T \mathbf{W}\,E\!\left[(Y-\mathbf{X}\beta)(Y-\mathbf{X}\beta)^T\right] \mathbf{W}\mathbf{X}. $$ One of the weight matrices can multiply the squared errors and factor out of the expression as a constant because it is orthogonal to the $\mathbf{X}$, and you'll note that this is the expression for $\sigma_e^2= \sum_{i=1}^n w_i (y_i - a - bx_i)^2/(n-2)$. $\mathbf{B}$ also yields a consistent estimator of the information matrix, but it will disagree with $\mathbf{A}$ in finite samples. As for which one to use, why not use both? A sandwich estimator is obtained as $\mathbf{A}^{-1}\mathbf{B}\mathbf{A}^{-1}$ and depends neither on the mean model being correct nor on the weights being properly specified.
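A minimal R sketch of computing $\mathbf{A}$, $\mathbf{B}$ and the sandwich by hand for a weighted least-squares fit; the data are simulated and the meat uses the raw squared residuals (an HC0-style choice), following the $\mathbf{A}^{-1}\mathbf{B}\mathbf{A}^{-1}$ form above:

set.seed(1)
n <- 100
x <- rnorm(n)
w <- runif(n, 0.5, 2)                   # inverse-variance weights
y <- 1 + 2 * x + rnorm(n, sd = 1 / sqrt(w))

fit <- lm(y ~ x, weights = w)
X <- model.matrix(fit)
r <- residuals(fit)

A <- t(X) %*% (w * X)                   # X' W X
B <- t(X) %*% (w^2 * r^2 * X)           # X' W diag(r^2) W X
V_sandwich <- solve(A) %*% B %*% solve(A)
sqrt(diag(V_sandwich))                  # robust standard errors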
51,025
Expectation maximisation algorithm increases true likelihood at each iteration
I think your picture is misleading you. Rather than envisioning a single orange lower-bound approximation to the true blue likelihood, you should be thinking of a series of orange lower bounds that are approximations about particular points $\theta_t$. In particular, the approximation is necessarily tightest at the approximation point, so at $\theta_2$ your orange approximation must be closer to the blue curve than it is at $\theta_{2}^{\mbox{max}}$. See the figure (from Wikipedia). This follows from the EM derivation. First observe that the incomplete data likelihood can be written as a ratio of a joint and a conditional, $p(\mathbf{X}|\theta)=\frac{p(\mathbf{X},\mathbf{Z}|\theta)}{p(\mathbf{Z}|\mathbf{X},\theta)}$, and that taking the expectation over $\mathbf{Z}$ leaves the left-hand side unchanged. Restating this fact and taking logs, $$ \begin{align*} \log p(\mathbf{X}|\theta) &= E_{\mathbf{Z}} \left[ \log p(\mathbf{X},\mathbf{Z}|\theta) - \log p(\mathbf{Z}|\mathbf{X},\theta) \right] \\ &= \sum_{\mathbf{Z}} p(\mathbf{Z}|\mathbf{X},\theta_t) \log p(\mathbf{X},\mathbf{Z}|\theta) - \sum_{\mathbf{Z}} p(\mathbf{Z}|\mathbf{X},\theta_t) \log p(\mathbf{Z}|\mathbf{X},\theta) \\ & = Q(\theta|\theta_t) + H(\theta|\theta_t). \end{align*} $$ So we hope to use $Q(\theta|\theta_t)$ as a surrogate for $\log p(\mathbf{X}|\theta)$, and the error from using this approximation is $-H(\theta|\theta_t)=Q(\theta|\theta_t)-\log p(\mathbf{X}|\theta)$. Now, I contend that the gap $H(\theta|\theta_t) = \log p(\mathbf{X}|\theta) - Q(\theta|\theta_t)$ is minimized at $\theta = \theta_t$, so the approximation is tightest there. This follows directly from $-H(\theta|\theta_t) = \sum_{\mathbf{Z}} p(\mathbf{Z}|\mathbf{X},\theta_t)\log p(\mathbf{Z}|\mathbf{X},\theta)$ being an expectation of a log-likelihood, which is maximized (thus $H$ minimized) at its generating value, in this case $\theta_t$. (This is just restating that the expectation of the score function is zero there.)
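For completeness, the claim that the gap is smallest at $\theta=\theta_t$ is Gibbs' inequality in one line (my notation, following the decomposition above):

$$ H(\theta|\theta_t) - H(\theta_t|\theta_t) = \sum_{\mathbf{Z}} p(\mathbf{Z}|\mathbf{X},\theta_t)\, \log\frac{p(\mathbf{Z}|\mathbf{X},\theta_t)}{p(\mathbf{Z}|\mathbf{X},\theta)} = \mathrm{KL}\!\left(p(\cdot|\mathbf{X},\theta_t)\,\middle\|\,p(\cdot|\mathbf{X},\theta)\right) \ge 0, $$

with equality at $\theta=\theta_t$, so the surrogate $Q(\theta|\theta_t)$ touches $\log p(\mathbf{X}|\theta)$ (up to the constant $H(\theta_t|\theta_t)$) exactly at the expansion point.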
51,026
Expectation maximisation algorithm increases true likelihood at each iteration
The EM algorithm directly maximizes the expected complete-data log-likelihood, but it nonetheless guarantees that the observed-data likelihood does not decrease at each iteration. The correct version of the proof is in the following reference. Statistical Inference by Casella and Berger also addresses this problem. Wu, C. F. Jeff (1983). "On the Convergence Properties of the EM Algorithm". Annals of Statistics 11 (1): 95–103.
51,027
LASSO closed form with two regressors, JRSSB eq. (6)
If $$\max[s/2-(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2,0]=0$$ then $$s/2-(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 \leq 0.$$ But you wrote $$s/2-(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 < 0.$$ Now $$s/2-(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 \leq 0$$ implies $$s/2\leq (\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2.$$ Next we have: $$\hat\beta_1=[s/2+(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2]^+=\max[s/2+(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2,0].$$ Therefore, $$s/2+(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2\geq s/2+s/2.$$ So $$s/2+(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2\geq s \geq 0\qquad\qquad (1)$$ Hence using (1) we have: $$\hat\beta_1=\max[s/2+(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2,0]=s/2+(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 \qquad\qquad (2)$$ But you found $$\hat\beta_1>s.$$ Finally using (2) we have: $$|\hat\beta_1|+|\hat\beta_2|=|\hat\beta_1|+0=|\hat\beta_1|=|s/2+(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2| \leq s/2 +|(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2|.$$ The last inequality can be less than or equal to $s$ and nothing is violated.
51,028
Asymptotic distribution of $\chi^2_{(1)}$
If you want the limiting distribution of the minimum of a collection of non-identically distributed chi-squares (each with a different degrees-of-freedom parameter), then this thread, Order statistics (e.g., minimum) of infinite collection of chi-square variates?, I believe answers exactly the question. If you want to see what happens to the distribution of the minimum of $n$ identically distributed chi-squares, as both the sample size and their common degrees-of-freedom parameter go to infinity, then I have to offer the following remarks: Assume first that $n/k \rightarrow 0$. It is well known that as $k$ increases the standardized chi-square approaches a standard normal distribution. Since in this case $k$ "goes to infinity faster than $n$", "at" infinity its standardized version is a standard normal distribution and its minimum should follow what the minimum of a normal distribution follows: using David & Nagaraja's notation, the CDF is $$G^*_3(x) = 1-\exp{\{-e^x\}},\;\; -\infty < x < \infty$$ with $X$ being the minimum standardized by some $a_n, b_n$. Note that $G^*_3(x) = 1 - G_3(-x)$ with $G_3()$ being the CDF of the standard Gumbel distribution. Now assume that $n/k \rightarrow \infty$. Here, $n$ escapes faster than $k$ and "at" infinity the variable is, well, again approximately normal, because the normal approximation sets in much earlier than infinity. So it appears that we will again obtain the same result as in the $n/k \rightarrow 0$ case.
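As a quick sanity check (my own sketch, not from the answer), one can simulate the minimum of $n$ standardized chi-squares with a large common $k$ and compare it against the minimum of $n$ standard normals; the two should be nearly indistinguishable:

    set.seed(1)
    n <- 200; k <- 5000; reps <- 2000
    min_chisq <- replicate(reps, min((rchisq(n, df = k) - k) / sqrt(2 * k)))   # standardized chi-squares
    min_norm  <- replicate(reps, min(rnorm(n)))                                # standard normals
    qqplot(min_norm, min_chisq,
           xlab = "min of n standard normals", ylab = "min of n standardized chi-squares")
    abline(0, 1)

The points should lie close to the 45-degree line.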
51,029
Distinguish an ARMA and an ARIMA model graphically
It is not easy to distinguish an ARMA model from an ARIMA model graphically. The problem is that the stationary ARMA form can be made arbitrarily close to the ARIMA form by setting one of the roots of the auto-regressive characteristic polynomial arbitrarily close to one. Trying to distinguish between these models is equivalent to trying to determine whether there is a unit root in the auto-regressive characteristic polynomial. This usually requires formal modelling and testing, and it is not something that is easy to do on a purely graphical basis.
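As an illustration of why this is hard (a sketch of mine, not part of the answer), a stationary AR(1) with coefficient 0.99 and a random walk are visually very similar:

    set.seed(42)
    n <- 500
    ar_near_unit <- arima.sim(model = list(ar = 0.99), n = n)   # stationary ARMA, AR root close to 1
    random_walk  <- cumsum(rnorm(n))                            # ARIMA(0,1,0)
    par(mfrow = c(1, 2))
    plot.ts(ar_near_unit, main = "AR(1), phi = 0.99")
    plot.ts(random_walk,  main = "Random walk")

A formal unit-root test (for example adf.test from the tseries package) is the usual way to decide between the two.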
51,030
Distinguish an ARMA and an ARIMA model graphically
Given that there are no Pulses, Level Shifts, Seasonal Pulses, or Local Time Trends AND the ultimate ARIMA parameters and model error variance are constant over time AND that you have suitably (not over-) differenced the series to obtain stationarity THEN: compare the acf and the pacf for dominance. Examine the dominant one and, if it has a decaying structure, conclude that the model has an AR component if the dominant one was the acf, and alternatively an MA structure if the dominant one was the pacf. The order of the AR or MA structure is based upon the number of significant coefficients in the subordinate one (see the small sketch below). This might be of some help http://www.autobox.com/cms/index.php/blog/entry/build-or-make-your-own-arima-forecasting-model . The idea is that ARIMA model identification sometimes (often) requires a sequence of model identification/estimation/diagnostic checking in order to detect the underlying signal (ARIMA model). Think of the first pass as the residuals from a mean model (the first tentative model) and you can understand that there is no such thing as "an identification phase", just a sequence of model revision phases.
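A minimal sketch of that inspection in R (my own illustration; y stands for whatever series you are identifying):

    dy <- diff(y)        # difference once if needed to obtain stationarity
    par(mfrow = c(1, 2))
    acf(dy)              # decaying ACF with a cut-off PACF suggests an AR component
    pacf(dy)             # decaying PACF with a cut-off ACF suggests an MA component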
51,031
Estimate mean & standard deviation of set S if I know stats of an inner set and outer set
I'm going to solve this by calculation. I'm sure there's some clever statistical way to figure this out -- if you know, please make a post and I'll probably accept it :p. Until then, this will have to do.

Problem statement

We know $\mu_1=\mu(S_1)$, $\sigma_1=\sigma(S_1)$, and $N_1=\lvert S_1\rvert$, as well as the equivalent statistics on $S_3$: $\mu_3$, $\sigma_3$, and $N_3$. We also know $N_2$ and wish to find estimates for $\mu_2$ and $\sigma_2$ along with error bounds.

Statistical identities on disjoint unions

For $X\cap Y=\emptyset$ we'll make frequent use of the identities $$ \mu_{X\cup Y}=\frac{N_X\mu_X+N_Y\mu_Y}{N_X+N_Y} $$ and $$ (N_X+N_Y)\sigma_{X\cup Y}^2=N_X\sigma_X^2+N_Y\sigma_Y^2+\frac{N_XN_Y}{N_X+N_Y}(\mu_X-\mu_Y)^2. $$

S3 minus S1

Using these formulae, we can immediately find the exact$^{\text{I}}$ mean and variance for $S_3\setminus S_1$, namely $$ \mu_{3\setminus1}=\frac{N_3\mu_3-N_1\mu_1}{N_3-N_1} $$ and $$ (N_3-N_1)\sigma_{3\setminus1}^2=N_3\sigma_3^2-N_1\sigma_1^2-\frac{N_1N_3}{N_3-N_1}(\mu_3-\mu_1)^2. $$

S2 minus S1

If we can use this information to find good estimates for $\mu_{2\setminus1}$ and $\sigma_{2\setminus1}$, the mean and standard deviation of $S_2\setminus S_1$, then we can use the union formulae to make estimates for $\mu_2$ and $\sigma_2$. Intuitively we expect that $\mu_{2\setminus1}\approx\mu_{3\setminus1}$ and $\sigma_{2\setminus1}\approx\sigma_{3\setminus1}$ are reasonable estimates on that subset, but we would need to justify this, at least finding upper and lower bounds to determine how good (or bad) these estimates are. Now we have by identity an equation we'll repeatedly refer to as (A) $$ N_{3\setminus1}\sigma_{3\setminus1}^2=N_{3\setminus2}\sigma_{3\setminus2}^2+N_{2\setminus1}\sigma_{2\setminus1}^2+\frac{N_{3\setminus2}N_{2\setminus1}}{N_{3\setminus1}}(\mu_{3\setminus2}-\mu_{2\setminus1})^2 $$ where we've abbreviated $N_{a\setminus b}:=\lvert{S_a\setminus S_b\rvert}=N_a-N_b$. If we wish to maximize $\sigma_{2\setminus1}$ it suffices to note that we could choose a distribution of $S_3\setminus S_1$ so that all the variance$^{\text{II}}$ is in $S_2\setminus S_1$ and the means satisfy $\mu_{3\setminus2}-\mu_{2\setminus1}=0$. In this case, all the terms of (A) become zero except the LHS and the $\sigma_{2\setminus1}$ term, yielding $$ N_{3\setminus1}\sigma_{3\setminus1}^2=N_{2\setminus1}\sigma_{2\setminus1}^2, $$ implying that $\sigma_{2\setminus1}^2\leq\frac{N_{3\setminus1}}{N_{2\setminus1}}\sigma_{3\setminus1}^2$. By the same reasoning we could choose a different distribution so that all the variance is in $S_3\setminus S_2$, in which case $\sigma_{2\setminus1}^2=0$, obviously a lower bound. So we have these bounds on $\sigma_{2\setminus1}$: $$ 0\leq\sigma_{2\setminus1}\leq\sqrt{\frac{N_3-N_1}{N_2-N_1}}\sigma_{3\setminus1}. $$ To find the maximum difference of the mean $\mu_{2\setminus1}$ from our proposed estimate $\mu_{3\setminus1}$, a good first step is to note we can maximize $(\mu_{3\setminus2}-\mu_{2\setminus1})^2$ in (A) by choosing a distribution where $\sigma_{3\setminus2}=\sigma_{2\setminus1}=0$. In this case all the variance in $S_3\setminus S_1$ is due to the wide disparity between the means on $S_3\setminus S_2$ and $S_2\setminus S_1$. So from (A) we get $$ \lvert\mu_{3\setminus2}-\mu_{2\setminus1}\rvert\leq\frac{N_{3\setminus1}}{\sqrt{N_{3\setminus2}N_{2\setminus1}}}\sigma_{3\setminus1}. $$ If we rearrange this along with our mean union identity at the top, we get two equations with two unknowns: $$ \begin{align} N_{3\setminus2}(\mu_{3\setminus2}-\mu_{3\setminus1}) +N_{2\setminus1}(\mu_{2\setminus1}-\mu_{3\setminus1}) &=0 \\ (\mu_{3\setminus2}-\mu_{3\setminus1}) -(\mu_{2\setminus1}-\mu_{3\setminus1}) &={\frac{N_{3\setminus1}}{\sqrt{N_{3\setminus2}N_{2\setminus1}}}}\sigma_{3\setminus1} \end{align} $$ Solving this gives us our bounded error: $$ \left\lvert\mu_{2\setminus1}-\mu_{3\setminus1}\right\rvert\leq\sqrt{\frac{N_3-N_2}{N_2-N_1}}\sigma_{3\setminus1}. $$ So now all the pieces are in place, and we simply need to combine what we got on $S_2\setminus S_1$ with what we already know for $S_1$ and we will have our estimate and error bounds.

S2

Now the estimated mean and its bounds on $S_2$ can readily be found: $$ \begin{align} \mu_2&=\frac{N_1\mu_1+(N_2-N_1)\left(\mu_{3\setminus1}\pm\sqrt{\frac{N_3-N_2}{N_2-N_1}}\sigma_{3\setminus1}\right)}{N_2}\\ &=\frac{N_1\mu_1+(N_2-N_1)\mu_{3\setminus1}}{N_2}\pm\frac{\sqrt{(N_2-N_1)(N_3-N_2)}}{N_2}\sigma_{3\setminus1}\\ \end{align} $$ These bounds are exact, and this estimate is quite good when $N_3-N_1\ll N_2$ and $\sigma_{3\setminus1}$ is small. Solving for $\sigma_2$, we find that $$ \sigma_2^2=\frac{N_{2\setminus1}}{N_2}\sigma_{2\setminus1}^2+\frac{N_1}{N_2}\sigma_1^2+\frac{N_{2\setminus1}N_1}{N_2^2}(\mu_{2\setminus1}-\mu_1)^2 $$ We won't find the exact error here because $\mu_{2\setminus1}$ and $\sigma_{2\setminus1}$ aren't independent, but the $\sigma_{2\setminus1}^2$ term gives rise to an error of order $\frac{N_{3\setminus1}}{N_2}\sigma_{3\setminus1}^2$ and the squared difference of means gives rise to the (small if the mean error is small) error term $$ \frac{N_{2\setminus1}^2N_{3\setminus2}N_1}{N_2^4}\sigma_{3\setminus1} $$ and the potentially more concerning error term $$2\left\lvert\mu_{3\setminus1}-\mu_1\right\rvert\frac{N_{2\setminus1}N_1\sqrt{N_{2\setminus1}N_{3\setminus2}}}{N_2^3}\sigma_{3\setminus1}. $$

$^{\text{I}}$ Theoretically we can find this exactly. In practice there are precision issues with this calculation. In my particular case I actually have these values already, but if you don't, beware!

$^{\text{II}}$ There are actually special cases here where $N_{3\setminus2}=1$ or $N_{2\setminus1}=1$, since one-element sets don't have any variance. But the bounds derived still hold -- we could just have made them tighter for these cases.
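To see the formulas in action, here is a small R sketch (mine, not part of the original answer) that simulates nested sets $S_1\subset S_2\subset S_3$, recovers $\mu_{3\setminus1}$ and $\sigma_{3\setminus1}$ from the summary statistics alone, and checks that the true $\mu_2$ falls inside the stated bounds:

    set.seed(7)
    popvar <- function(v) mean((v - mean(v))^2)        # population variance, matching the identities
    S3 <- rnorm(1000, mean = 10, sd = 2)
    S2 <- S3[1:950]                                     # S1 is a subset of S2 is a subset of S3
    S1 <- S3[1:900]
    N1 <- length(S1); N2 <- length(S2); N3 <- length(S3)
    mu1 <- mean(S1); mu3 <- mean(S3)
    var1 <- popvar(S1); var3 <- popvar(S3)

    # summary statistics of S3 \ S1, recovered without seeing the raw data
    mu31  <- (N3 * mu3 - N1 * mu1) / (N3 - N1)
    var31 <- (N3 * var3 - N1 * var1 - N1 * N3 / (N3 - N1) * (mu3 - mu1)^2) / (N3 - N1)

    # estimate of mu2 and its error bound
    mu2_hat   <- (N1 * mu1 + (N2 - N1) * mu31) / N2
    mu2_bound <- sqrt((N2 - N1) * (N3 - N2)) / N2 * sqrt(var31)
    c(estimate = mu2_hat, lower = mu2_hat - mu2_bound, upper = mu2_hat + mu2_bound,
      truth = mean(S2))

The true mean of $S_2$ should land between the printed lower and upper bounds.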
51,032
Time series with multiple subjects and multiple variables in R
The Arima function in the forecast package can fit a regression model to the data with an ARIMA model for the errors. The order argument specifies the orders of the ARIMA model, while the argument xreg defines which data object contains the observations of the predictors. E.g., if xreg is a matrix of predictors:

    model = Arima(series, order = c(1,1,0), xreg = covariates)

To find the order of the ARIMA process, you can simply use the auto.arima function, also found in the forecast package. It automatically locates the best-fitting ARIMA model for the data, with "fit" defined by one of three possible information criteria via the ic argument: the AIC (given by "aic"), the AICc ("aicc"), or the BIC ("bic"). E.g.,

    model = auto.arima(series, ic = "aic")

I think you may find this page really helpful, especially the section about R.
51,033
Is $R^2$ interpretable as the proportion of *variation* explained, or the proportion of *variance* explained?
I've heard all three but I don't particularly like any of them (with variance being at the bottom of the list). Really, I prefer calling $R^{2}$ a measure of fit for our model to the data (and if you really want to use $R^{2}$, use its adjusted version instead.) The reason I don't like calling it a variance is because only one term in the expression for it is proportional to the variance. Its form is $1-\frac {SS_{E}}{SS_{T}} $, right? In that expression only $SS_{T}$ is proportional to a variance. Considering, as the comment claimed, that variance is a well-defined formal quantity, it doesn't seem accurate to call what we get from $R^{2}$ a proportion of variance explained.
51,034
Is $R^2$ interpretable as the proportion of *variation* explained, or the proportion of *variance* explained?
I like to explain R-squared to clients as follows - it's just the squared correlation between the predicted values of the Dependent Variable and the actual values of the Dependent Variable. If you plot the former against the latter as a scatter chart, and fit a regression line, then the R-squared of that regression line is the same as the R-squared of your original regression. Plotting actual vs predicted is also a rather good way of showing how bad an R-squared of (say) 70% really is - there is lots of scatter even at what might be thought, at first sight, to be a relatively high R-squared value.
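A quick R check of this equivalence on simulated data (my own sketch):

    set.seed(1)
    x <- rnorm(100)
    y <- 2 * x + rnorm(100)
    fit <- lm(y ~ x)
    summary(fit)$r.squared          # R-squared reported by the regression
    cor(fitted(fit), y)^2           # squared correlation between predicted and actual

The two printed numbers are the same.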
51,035
Is $R^2$ interpretable as the proportion of *variation* explained, or the proportion of *variance* explained?
Informally, $R^2$ is a measure of the variation or variability. In this context, variation measures the size of the residuals. Clearly, variation isn't equal to the variance. Variance is a specific measure: the mean squared deviation from the sample mean.
51,036
Is $R^2$ interpretable as the proportion of *variation* explained, or the proportion of *variance* explained?
In the context of linear regression, $R^2$ is a measure (estimate?) of the proportion of variance explained by the model. It seems to me that if we want to be precise, variance is the only word that can be used. To say that it is a measure of the proportion of response variation is at best informal; the quote from Wikipedia is misleading, as this answer to my question "In linear regression, does $R^2$ really measure the fraction of explained variation?" explains. Someone should rewrite the Wikipedia article!
51,037
Is $R^2$ interpretable as the proportion of *variation* explained, or the proportion of *variance* explained?
“Variation” and “variability” are English words that have colloquial meanings that basically everyone more-or-less understands: the extent to which the values are different. If there is minimal variability, the values are close together; if there is extensive variation, the values are far apart. “Variance” is technical nomenclature in statistics that refers to a specific way of quantifying the colloquial notion of variability or variation. In this regard, the standard definition of $R^2$ addresses the degree to which the variability/variation in the data is explained by the model, using variance as the mathematical definition of that variability. Nothing stops us from using alternative measures of variability, such as comparing the IQRs of the raw data and the residuals or the mean absolute errors, except that this particular way of measuring the degree to which the variability is explained aligns with some other nice ideas related to model fit that I discuss here.
51,038
Probability of getting $k$ answers correct in an exam with 4 and 2 questions on $N$ possible subjects?
Consider each section of the exam as containing the result of several draws (without replacement) from an urn. The urn contains the K subjects the student studied, and the N - K subjects he/she skipped. Let J = N - K, for convenience. Part A contains between 0 and 4 subjects that the student studied, while part B contains between 0 and 2 such subjects. Now, for each of these 15 possible combinations, consider the value of k that that combination gives rise to:

k = 0: 0 correct on part A, 0 correct on part B
k = 1: 0 correct on part A, 1 or 2 correct on part B, OR 1 correct on part A, 0 correct on part B
k = 2: 1 correct on part A, 1 or 2 correct on part B, OR 2 correct on part A, 0 correct on part B
k = 3: 2 correct on part A, 1 or 2 correct on part B, OR 3 correct on part A, 0 correct on part B, OR 4 correct on part A, 0 correct on part B
k = 4: 3 correct on part A, 1 or 2 correct on part B, OR 4 correct on part A, 1 or 2 correct on part B

At this point, it's just a matter of summing up the probabilities of the configurations, for each value of k. Let f(k, K, J, n) be the pmf of the hypergeometric distribution, with k white balls drawn, K white balls and J black balls in the urn, and n balls drawn overall. Then the probabilities are:

k = 0: f(0, K, J, 4) * f(0, K, J - 4, 2)
k = 1: f(0, K, J, 4) * (f(1, K, J - 4, 2) + f(2, K, J - 4, 2)) + f(1, K, J, 4) * f(0, K - 1, J - 3, 2)
k = 2: f(1, K, J, 4) * (f(1, K - 1, J - 3, 2) + f(2, K - 1, J - 3, 2)) + f(2, K, J, 4) * f(0, K - 2, J - 2, 2)
k = 3: f(2, K, J, 4) * (f(1, K - 2, J - 2, 2) + f(2, K - 2, J - 2, 2)) + f(3, K, J, 4) * f(0, K - 3, J - 1, 2) + f(4, K, J, 4) * f(0, K - 4, J, 2)
k = 4: f(3, K, J, 4) * (f(1, K - 3, J - 1, 2) + f(2, K - 3, J - 1, 2)) + f(4, K, J, 4) * (f(1, K - 4, J, 2) + f(2, K - 4, J, 2))

I'm assuming the probability is just 0 wherever the distribution isn't supported (e.g., where k > K). As Joel W. says in the comments, probability is tricky, and it's always worth checking your work with a simulation. Here's my R code to do just that (with N set to 25 and K to 17; you could of course set these to whatever you wanted):

    N <- 25
    K <- 17
    answered <- sapply(1:300000, function(i) {
      subjects <- seq(from = 1, to = N)
      studied <- sample(subjects, K)
      asked <- sample(subjects, 6)
      asked.1 <- asked[1:4]
      asked.2 <- asked[5:6]
      answerable.1 <- sum(is.element(asked.1, studied))
      answerable.2 <- sum(is.element(asked.2, studied))
      answered.1 <- min(answerable.1, 3)
      answered.2 <- min(answerable.2, 1)
      answered.1 + answered.2
    })
    table(answered) / length(answered)

Running the above, I got these observed proportions:

k = 0: 0.00016
k = 1: 0.00910
k = 2: 0.09298
k = 3: 0.34898
k = 4: 0.54879

Meanwhile, using R to evaluate the probabilities described above (with 25 and 17 substituted for N and K), I got:

k = 0: 0.00016
k = 1: 0.00896
k = 2: 0.09318
k = 3: 0.34762
k = 4: 0.55009

Good enough agreement, I think, to lend credence to my solution. (Happily, the probabilities sum to 1, ignoring a bit of rounding error.) I realize that a single overall formula would have been more satisfying than the tabulation-based approach I took above. Unfortunately, I wasn't able to come up with a clean, readable formula that encapsulated all the various sums. I think the distinction between answerable and answered questions really complicates the problem, but it could very well be that someone more skilled in probability/combinatorics could find a way to express the various sums as a single crisp formula.
51,039
Is using PCA then autoencoders for pre-processing useful?
I'm not a deep learning expert by any means, but my guess is that the PCA serves two functions: computational improvements if the input dimensionality is significantly reduced, and a kind of preconditioning for the optimization problem. Although a normal autoencoder setup certainly can learn the linear relationships itself, if that linear step is useful then initializing with it may make the learning process easier. Broadly speaking, it should be about equivalent to pretraining the first layer of the autoencoder with the principal components (if you don't drop much in the PCA). Preprocessing with an autoencoder is often used to derive features to use in some other classifier or whatever. Compared to pretraining, plugging preprocessor autoencoder features into a neural net means that the final classifier can't adapt the learned features for its particular learning problem. Depending on your problem, this might hurt the performance of the final classifier somewhat. But it means that you can reuse the same learned features for multiple final classification/regression/whatever tasks, which saves a lot of training time in trying to adapt the features, and might save a substantial amount of test time if you're running a suite of learning methods on the data and so can reuse the autoencoder outputs for all of them. In semi-supervised settings, avoiding overfitting the feature extractor to your limited labeled data might also give you better results.
51,040
Power martingales for change detection: M goes to zero?
Your implementation is correct. The power martingale tends to get very small (closer to 0) when p-values are uniformly distributed. To avoid that, you just need to restart your martingale from 1 as soon as it gets smaller than 1. So just add: Mtest[i] = max(Mtest[i], 1) This will keep your martingale small (but not less than 1) when the p-values are uniformly distributed, even for a long period.
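For concreteness, a minimal R sketch of the power-martingale update with this restart rule (my own illustration; pvals stands for your stream of p-values and epsilon for the usual tuning constant, e.g. 0.92):

    epsilon <- 0.92
    Mtest <- numeric(length(pvals))
    M <- 1
    for (i in seq_along(pvals)) {
      M <- M * epsilon * pvals[i]^(epsilon - 1)   # power-martingale betting step
      M <- max(M, 1)                              # restart from 1 whenever it drops below 1
      Mtest[i] <- M
    }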
51,041
Power martingales for change detection: M goes to zero?
I think you can see the nature of the problem if you calculate how many "strange" data points you need, given that you've observed a long run of "not-strange" data points (using the language of Ho and Wechsler here). Let's do some back-of-the-envelope calculations: for a stream of $k$ data points with constant $p_i = p$ we have $M_k \sim 10^{k(\text{log}_{10}(\epsilon) + (\epsilon-1) \text{log}_{10}(p))}$. Let's say that after a number $n$ of non-strange data points with $p_i = .5$ (the expected value of a uniform RV over $[0,1]$), we get a stream of $\eta$ strange events with $p_i \sim 10^{-3}$. We then ask "how big does $\eta$ have to be before $M_{n+\eta} \sim 10$?" (the threshold given in Ho and Wechsler). If we use $\epsilon = .92$ then after the not-strange data stream ($n$ points with $p_i=.5$) we have that $M_n \sim 10^{-.012n}$. Abusing notation a little, let's denote the "contribution" of the surprising data stream by $M_{\eta}$ (so $M_{n+\eta} = M_n M_{\eta}$); we can then calculate that $M_{\eta} = 10^{.204 \eta}$. Given the threshold of $10$ we want $M_{n+\eta} = 10^{.204\eta - .012n} = 10$ or $.204\eta - .012n = 1$, hence $\eta = \frac{1+.012n}{.204} \approx 5 + .06 n$. What this means is that $\eta$ needs to be around 6% of $n$ if we're going to pass the threshold of 10. Obviously this is only a rough calculation, but I think it presents a clear intuition of the problem: this martingale method is robust against short runs of surprising data. The more unsurprising data you have, the more surprising data you will need before it concludes that a change has occurred. Because you're feeding it 5000 not-strange observations, you'll need around 300 pretty strange ones to convince it that a change has occurred. Note that in Ho and Wechsler they seem to be feeding in surprising data every 1000 data points, which means that they only need around 60 surprising data points.
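A few lines of R to check this arithmetic (my sketch, assuming $\epsilon = .92$, $n$ unsurprising points with $p = .5$, and strange points with $p = 10^{-3}$):

    eps <- 0.92
    step <- function(p) log10(eps) + (eps - 1) * log10(p)   # log10 contribution of one point
    n <- 5000
    eta <- (1 - n * step(0.5)) / step(1e-3)                  # strange points needed to reach M = 10
    eta                                                      # roughly 300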
51,042
Power martingales for change detection: M goes to zero?
In my opinion you are not quite understanding the change-detection concept. Try this: build a dataset with at least 2 clusters of n-dimensional vectors (2 dimensions is easiest for a test). Your code should read the points from cluster #1 one by one; when it starts reading points from cluster #2, the martingale must detect a change. About the algorithm, you need to implement (all references are to the Ho & Wechsler paper): (1) a strangeness measure — I would implement the cluster model (formula 7): compute the mean of all points read so far (the cluster centroid), then compute the Euclidean distance of each point read so far from that centroid; (2) the p-value function (formula 7), which takes the strangeness measure as its input; (3) feed each p-value into the martingale. You must also define a lambda value that triggers a change alarm.
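A rough R sketch of these steps, with hypothetical two-dimensional clusters (this simplifies the cluster model: each point's strangeness is its distance to the running centroid at the time it arrives, and the p-value comes from the ranks of the strangeness values; $\epsilon = .92$ and the lambda threshold of 10 are illustrative):
set.seed(7)
cluster1 <- matrix(rnorm(200 * 2, mean = 0), ncol = 2)
cluster2 <- matrix(rnorm(200 * 2, mean = 4), ncol = 2)
x <- rbind(cluster1, cluster2)                       # the change occurs at point 201

eps <- 0.92; lambda <- 10
M <- 1; alarms <- integer(0); strangeness <- numeric(0)
for (i in seq_len(nrow(x))) {
  centroid <- colMeans(x[1:i, , drop = FALSE])       # mean of all points read so far
  s_i <- sqrt(sum((x[i, ] - centroid)^2))            # Euclidean distance from the centroid
  strangeness <- c(strangeness, s_i)
  theta <- runif(1)                                  # randomised p-value from the strangeness ranks
  p_i <- (sum(strangeness > s_i) + theta * sum(strangeness == s_i)) / i
  M <- M * eps * p_i^(eps - 1)                       # power martingale update
  if (M > lambda) { alarms <- c(alarms, i); M <- 1 } # raise an alarm and reset
}
alarms                                               # should flag a point shortly after 201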
51,043
computing the posterior of two Gaussian probability distributions
If $\epsilon$ is a scalar (which it seems to be), then your integral is a 1D convolution of $p_\epsilon(\epsilon|g)$ with a Gaussian. If $p_\epsilon$ is fairly well behaved, you could evaluate it on a 1D grid and use any old signal processing toolbox to calculate the convolution (you could also quite easily code it by hand). You could also include $p(z)$ in the same way. The neat thing about working in one or two dimensions is that brute force is often a feasible approach.
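For instance, here is a minimal R sketch of the brute-force grid approach (the particular $p_\epsilon$ used here is just a hypothetical stand-in, and the grid spacing and the Gaussian standard deviation are illustrative):
grid  <- seq(-10, 10, by = 0.01)                         # 1D grid for epsilon
h     <- 0.01                                            # grid spacing
p_eps <- dgamma(grid + 5, shape = 2, rate = 1)           # stand-in for p_eps(eps | g)
kern  <- dnorm(grid, mean = 0, sd = 0.5)                 # the Gaussian being convolved in

conv <- convolve(p_eps, rev(kern), type = "open")        # discrete 1D convolution
m    <- (length(conv) - length(grid)) / 2                # trim back to the original grid
post <- conv[(m + 1):(m + length(grid))]
post <- post / sum(post * h)                             # renormalise on the grid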
51,044
simulate GLM with square root link in R
I figured out some answers to my questions. Regarding the mathematical expressions of predict and simulate, these can be obtained by the following code (thanks to a tip from W. van der Elst): getS3method(c("predict"), class = "glm") getS3method(c("simulate"), class = "lm") Regarding the predict function, I incorrectly used the option type="response". This does not include the stochastic uncertainty, as was my aim; it is the prediction on the backtransformed linear scale. This can be tested by the following graph (run after the initial code stated in the question): plot(predict(m, type="response"), p_lin^2) Regarding the simulate function, it seemed to be correct if I look at the plot from the initial question: plot(p, d[,1], xlab="predicted values", ylab="original data", xlim=xylim, ylim=xylim, col=rgb(0,0,0,alpha=0.1)) points(simulate(m)$sim_1, d[,1], col=rgb(0,1,0,alpha=0.1)) However, if I upscale the dependent variable: d[,1] <- d[,1]*10000 And recalculate the predictions: m <- glm(formula=d[,1] ~ d[,2] + d[,3] + d[,4], family=gaussian(link="sqrt")) p_lin <- m$coef[1] + m$coef[2]*d[,2] + m$coef[3]*d[,3] + m$coef[4]*d[,4] p <- rnorm(n=n, mean=p_lin^2, sd=sd(p_lin^2 - d[,1])) And plot the results: par(mfrow=c(1,1), mar=c(4,2,2,1), pch=16, cex=0.8, pty="s") xylim <- c(min(c(d[,1], p)), max(c(d[,1], p))) plot(p, d[,1], xlab="predicted values", ylab="original data", xlim=xylim, ylim=xylim, col=rgb(0,0,0,alpha=0.1)) points(simulate(m)$sim_1, d[,1], col=rgb(0,1,0,alpha=0.1)) I see different predictions. This seems to be attributable to a difference between the value of the sd in my general-formula approach and in the simulate() function approach. Recall the calculation of the sd in the general-formula approach: sd=sd(p_lin^2 - d[,1]) In the simulate() function, R calculates the sd differently (for a Gaussian family): vars <- deviance(m)/df.residual(m) if (!is.null(m$weights)) vars <- vars/m$weights # the m$weights seems similar to the m$fitted.values multiplied by about 4 fitted(m) + rnorm(n, sd = sqrt(vars)) I don't understand why the variance is calculated and then divided by the m$weights. Why is the sd a vector and not a single value? The simulate() help text states: "The methods for linear models fitted by lm or glm(family = "gaussian") assume that any weights which have been supplied are inversely proportional to the error variance." I can't seem to grasp the meaning of this. If I run multiple simulations using the simulate() function, the simulations look very much like each other: plot(simulate(m)$sim_1, simulate(m,2)$sim_2) If I do not use a sqrt link function I get simulations that seem to better reflect stochastic uncertainty, as they are less identical when I run them twice: library(MASS) rm(list=ls()) set.seed(2) n <- 1500 d <- mvrnorm(n=n, mu=c(0,0,0,0), Sigma=matrix(.7, nrow=4, ncol=4) + diag(4)*.3) d[,1] <- qgamma(p=pnorm(q=d[,1]), shape=2, rate=2) d[,1] <- d[,1]*10000 m <- glm(formula=d[,1] ~ d[,2] + d[,3] + d[,4]) plot(simulate(m)$sim_1, simulate(m,2)$sim_2) What method is correct? What is the difference? (Should I post this as a separate question?)
51,045
simulate GLM with square root link in R
What is the difference? This model m <- glm(formula=d[,1] ~ d[,2] + d[,3] + d[,4], family=gaussian(link="sqrt")) is $$ y_i \sim N(\mu_i,\sigma^2), \quad \sqrt{\mu_i} = \vec{\beta}^\top\vec{x}_i $$ It is estimated with iteratively weighted least squares, so you will find that more iterations are used: library(MASS) set.seed(2) n <- 1500 d <- mvrnorm(n=n, mu=c(0,0,0,0), Sigma=matrix(.7, nrow=4, ncol=4) + diag(4)*.3) d[,1] <- qgamma(p=pnorm(q=d[,1]), shape=2, rate=2) m <- glm(formula=d[,1] ~ d[,2] + d[,3] + d[,4], family=gaussian(link="sqrt")) m$iter #R> [1] 5 It is a generalized linear model, and the weights element contains the working weights at convergence. On the other hand, this model m <- glm(formula=d[,1] ~ d[,2] + d[,3] + d[,4]) is $$ y_i \sim N(\mu_i,\sigma^2), \quad \mu_i = \vec{\beta}^\top\vec{x}_i $$ and is estimated in one iteration with least squares. m1 <- glm(formula=d[,1] ~ d[,2] + d[,3] + d[,4]) m1$iter # takes an extra iteration in `glm.fit` loop #R> [1] 2 m2 <- lm(formula=d[,1] ~ d[,2] + d[,3] + d[,4]) stopifnot(all.equal(coef(m1), coef(m2))) # gives the same Neither of these is the model you started with? Regarding: I don't understand why the sd is calculated and then divided by the m$weights. Why is sd a vector and not a single value? The statement The methods for linear models fitted by lm or glm(family = "gaussian") assume that any weights which have been supplied are inversely proportional to the error variance. makes sense, e.g., if you have a mean outcome for $n_i$ observations with the same covariates. The variance of the mean response will be inversely proportional to $n_i$, which you would supply as the weight. However, it is not just the supplied weights but also the working weights at convergence when you use a link function other than the identity. I am not sure if this makes sense.
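As a small check on that last point (assuming the sqrt-link fit m from the code above): for a Gaussian family with a sqrt link, $d\mu/d\eta = 2\eta = 2\sqrt{\mu}$, so the IWLS working weights at convergence should be $(2\sqrt{\mu})^2 = 4\mu$, i.e. roughly four times the fitted values, which matches the "about 4" observation in the question:
range(m$weights / fitted(m))                               # should be essentially constant at 4
all.equal(as.numeric(m$weights), as.numeric(4 * fitted(m)))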
51,046
Is the PLS-DA approach for categorical variables the same as that used for PLS regression?
Yes. PLS-DA is basically PLS regression where Y is a set of dummy (0/1) variables coding group membership. As an example, think of a Y matrix for 3 groups, each consisting of 2 samples: one column per group, with a 1 in the rows belonging to that group and 0 elsewhere (a header row would not be involved in the calculations). After applying PLS-DA you can obtain a BETA matrix (if you are using the SIMPLS algorithm, for example) whose number of columns equals the number of groups when there are 2+ groups. There are, however, a few differences between PLS-DA and logistic regression and some other classification methods. The predicted Y values may fall outside the 0 to 1 range, so you may find values such as 1.1 and -0.3, etc. Another property of PLS-DA is that the sum of the predicted Y values in each row is always equal to 1. Assigning predicted samples to a group can be done in several ways. The most common one is to assign the sample to the group having the highest predicted value. An alternative (Bayesian) approach is to fit a normal distribution to the predictions of the training set and find the threshold that minimizes the classification error; the samples can then be assigned to the group whose threshold value is exceeded.
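A small R sketch of that setup, using the pls package and hypothetical data (the interface details may differ from whichever implementation you use; the point is the dummy-coded Y and the highest-value assignment rule):
library(pls)
set.seed(8)
n   <- 60
grp <- factor(rep(c("A", "B", "C"), each = n / 3))
X   <- matrix(rnorm(n * 10), n, 10)
X[grp == "B", 1] <- X[grp == "B", 1] + 2       # make the groups separable
X[grp == "C", 2] <- X[grp == "C", 2] + 2

Y <- model.matrix(~ grp - 1)                   # 0/1 dummy matrix, one column per group
fit  <- plsr(Y ~ X, ncomp = 3)                 # PLS regression on the dummy responses
pred <- predict(fit, ncomp = 3)[, , 1]         # predicted Y values (may fall outside 0-1)
assigned <- colnames(Y)[max.col(pred)]         # assign to the group with the highest value
table(assigned, grp)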
51,047
Confidence interval for differences in total sums
Yes, you could simply go down the t-test route, because those deviations from normality don't really matter with sample sizes like that. Obviously, bootstrapping is a perfectly good alternative, and I can show how easy it is with the following commented R code: # example raw wins in A and B raw_win_A <- abs(rnorm(100000, mean=5, sd=15)) hist(raw_win_A, xlim=c(-10,100), breaks=20) #skewed raw_win_B <- abs(rnorm(2000, mean=4.9, sd=20)) hist(raw_win_B, xlim=c(-10,100), breaks=20) #skewed #compute means of n bootstrap samples of wins in A n <- 10000 wins_A <- replicate(n, mean(sample(raw_win_A, replace=TRUE))) #the same with B wins_B <- replicate(n, mean(sample(raw_win_B, replace=TRUE))) # show distribution of bootstrapped wins in A and B, # these are bound to be normally distributed with increasing n hist(wins_A) hist(wins_B) # show distribution of wins_A minus wins_B hist(wins_A - wins_B) cat("Mean of wins_A minus wins_B: ") cat(mean(wins_A - wins_B)) cat("1.96 times standard deviation of that:") cat(1.96*sd(wins_A - wins_B)) cat("Confidence interval lower bound: ") cat(mean(wins_A-wins_B)-1.96*sd(wins_A - wins_B)) cat("Confidence interval upper bound:") cat(mean(wins_A-wins_B)+1.96*sd(wins_A - wins_B)) cat("---\n Compare to t-test results:") print(t.test(raw_win_A, raw_win_B)) This takes some seconds (less than a minute) to run. With the example data simulated in the first few lines I get a bootstrapped confidence interval from -3.973402 to -2.906095, and the t.test function gives a confidence interval from -3.971132 to -2.895014, even though the data are highly skewed (see all those histograms produced by my code). So yes, the t-test is robust against violations of normality whenever n is high enough. The Central Limit Theorem holds.
51,048
Confidence interval for differences in total sums
A straightforward bootstrap approach would be to build a vector of the differences (A - B), where A and B are your logs of sales for the different versions of the website. Given this vector, you perform bootstrap resamples of it. You thus end up with a number of resamples that approximate the true distribution of the difference. Finally, to compute the confidence interval you just use the basic Gaussian approximation (or even the empirical percentiles, e.g. 5%/95%) of the mean of the bootstrap resamples.
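A short R sketch of this approach, with a hypothetical vector of paired differences d = A - B standing in for the real data:
set.seed(9)
d <- rexp(500, rate = 1) - rexp(500, rate = 1.1)   # stand-in for the A - B vector
boot_means <- replicate(10000, mean(sample(d, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))              # empirical percentile interval
c(mean(d) - 1.96 * sd(boot_means),                 # Gaussian approximation
  mean(d) + 1.96 * sd(boot_means))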
51,049
Bootstrapping the data to set up a prior
Have you considered simply applying a scaling to the covariance matrix, as suggested by Andrew Gelman? See also this paper on the scaled inverse Wishart.
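If it helps to see the idea concretely, here is a small R sketch of drawing a covariance matrix from a scaled inverse-Wishart-type prior (this only illustrates the "scaling" idea, not necessarily the exact parameterisation used in that paper; it assumes the MCMCpack package for riwish(), and the hyperparameters are arbitrary):
library(MCMCpack)
set.seed(10)
d  <- 3
Q  <- riwish(v = d + 2, S = diag(d))       # an unscaled inverse-Wishart draw
xi <- exp(rnorm(d, mean = 0, sd = 1))      # lognormal scale factors, one per dimension
Sigma <- diag(xi) %*% Q %*% diag(xi)       # scaled draw: Sigma = diag(xi) Q diag(xi)
Sigma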
51,050
Combining Posterior Distributions of Separate Models
One approach to combine results from multiple models in a manner that reflects how well each model works on the dataset under analysis is Bayesian model averaging. One simple way of approximating full Bayesian model averaging is to use weights proportional to $e^{-\text{BIC}_m/2}$ (or using the effective number of parameters à la DIC for a hierarchical model) for model $m=1,\ldots,M$. These weights approximate the posterior model probabilities, if a-priori all models are equally likely. Other weights are also possible, e.g. you might a-priori favor more complex models to get a AIC like behavior in the model averaging. You can either average estimates (on a suitable scale) using these weights (and there are straighforward formulae published for combining uncertainty assuming a reasonably normal posteriors from each model) or perhaps in a more Bayesian fashion sample from the posteriors in proportion to these weights.
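As an illustration, a minimal R sketch of the BIC-weighting step (the models and data here are just placeholders, not your actual models):
models <- list(m1 = lm(mpg ~ wt,             data = mtcars),
               m2 = lm(mpg ~ wt + hp,        data = mtcars),
               m3 = lm(mpg ~ wt + hp + qsec, data = mtcars))
bic <- sapply(models, BIC)
w   <- exp(-(bic - min(bic)) / 2)          # subtracting min(bic) avoids numerical underflow
w   <- w / sum(w)                          # approximate posterior model probabilities
w
# model-averaged predictions as a weighted average of each model's predictions
avg_pred <- as.vector(sapply(models, predict) %*% w)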
51,051
R - how to get standard error for a breakpoint/parameter in a piecewise regression
Hmm, so I think I figured out the issues I was having with segmented. It had to do with the weight statement (it doesn't work to weight both the lm and the segmented model). segmented seems like the best bet for me. It does a good job estimating breakpoints, even with my short datasets. It isn't terribly difficult to constrain the second slope to 0, and it has a built-in test for the significance of the change in slope. If anyone feels like explaining the difficulty with estimating error around a breakpoint estimate, I'm all ears! library(segmented) # With the second segment not constrained to 0: out.lm <- lm(y~x, data=dat) o <- segmented(out.lm, seg.Z= ~x, weights=wt, psi=10) with(dat, plot(x, y, ylim=c(0, max(y)), pch=16, cex=wt/(13), main="segmented")) lines(x=dat$x, y=fitted(o), col="blue") lines.segmented(o, col="blue") # Fix from Vito to constrain the second slope to 0: o2 <- lm(y~1) xx <- -x o3 <- segmented(o2, seg.Z=~xx, weights=wt, psi=list(xx=-4)) points(x, fitted(o3), col="green") ci <- confint(o3, rev.sgn=TRUE) lines.segmented(o3, col="green", rev.sgn=TRUE, lwd=2) # Test for the change in slope: davies.test(o3, ~xx)
51,052
R - how to get standard error for a breakpoint/parameter in a piecewise regression
The distribution of the breakpoint estimator is a complicated one, and you cannot use standard methods for that. Fortunately, package strucchange implements breakpoint tests and confidence intervals (see the references in strucchange::breakpoints), which you can use very simply: I rewrote a data-generating process in a slightly more concise way (also assuming a permanent shift in intercept and slope instead of time-specific random shifts as you did). Note that if you have just 12 values, I don't think you would get any relevant estimation... ## DGP set.seed(123) n <- 100 breakpoint <- 50 x <- rnorm(n) err <- rnorm(n, sd=0.1) y <- 1.2+0.9*x +ifelse(1:n >breakpoint, 0.1+0.1*x,0)+err library(strucchange) b <- breakpoints(y~x, breaks=1) #selects 47 confint(b) # great, 50 is in the confidence interval!
51,053
Statistical test for power law samples
I do not believe there is a simple answer to your question without more details about the distribution. Power law distributions can have infinite variance, in which case the large-sample guarantees from the CLT will not apply. If you know the variance of the distribution is finite (which can be guaranteed for certain parameter values) you can use something like the Wilcoxon rank test detailed in https://www.stat.auckland.ac.nz/~wild/ChanceEnc/Ch10.wilcoxon.pdf. I would not rely on these tests completely without some simple checks based on the Berry–Esseen theorem (https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem). Essentially, you need some insight into the ratio of the third to the second moments of the distribution. This, again, will depend on the particular power-law distribution under consideration.
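As a quick illustration, the Wilcoxon rank-sum test is available in base R; the two heavy-tailed samples below are hypothetical, generated by inverse-CDF sampling from Pareto-type distributions:
set.seed(1)
x <- (1 - runif(500))^(-1 / 2.5)   # Pareto-type sample, tail exponent 2.5, x_min = 1
y <- (1 - runif(500))^(-1 / 2.0)   # heavier-tailed sample
wilcox.test(x, y)                  # rank-based comparison, no reliance on the CLT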
51,054
Statistical test for power law samples
If your distribution is a Pareto distribution, you can apply $\log$ to the values and you will get an exponential distribution (shifted by the log of the scale parameter). Then you can perform a t-test on the log-transformed values.
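A quick R sketch of that, with hypothetical Pareto samples (scale parameter 1, so the shift vanishes):
set.seed(2)
x <- (1 - runif(1000))^(-1 / 3)    # Pareto(alpha = 3) via inverse-CDF sampling
y <- (1 - runif(1000))^(-1 / 2.5)  # Pareto(alpha = 2.5)
t.test(log(x), log(y))             # compare the means on the log scale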
51,055
Correct statistical test when people could appear in multiple groups
I presume that when users were asked to rate the ease of website use, it was their overall impression, and ease was asked only once per user rather than specifically for each task. Therefore, if I said I used the website to update my profile & post a blog and I rated my ease of use 1 (presumably difficult), you would not know whether updating the profile or posting the blog was difficult, or both. I typically conduct two analyses for data of this nature. First, I define an outer-product variable, which is a unique identifier corresponding to each possible combination of tasks. So if there were 3 possible tasks which users might endorse, I would have $2^3=8$ possible combination levels: those who endorse no tasks, the first task only, the second task only, the third task only, the first and second tasks, the first and third tasks, the second and third tasks, and all three tasks. With 8 levels, we can inspect average difficulty ratings and error bar charts. If this leads to many possible levels, it can be useful to sort such an error bar chart based on a meaningful metric which combines reported difficulty as well as representation. For instance, you might consider response categories that have at least 5 or 10 respondents and sort them from most problematic to least problematic to see which particular tasks have been the worst for users. Second: a linear regression model gives a way of exploring a similar issue. Treating ease as the outcome, 0/1 indicator variables for each possible task can be used as covariates in the model. In addition, product terms can be created between two or more specific tasks. This leads to a very high dimensional model, and techniques of model selection can be applied to identify heterogeneity. Doing this in two passes, first for only the main effects and second for the product terms, gives many useful hypotheses for process improvement.
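A minimal R sketch of both analyses, with hypothetical data and only three tasks:
set.seed(3)
n <- 200
d <- data.frame(task1 = rbinom(n, 1, .5),
                task2 = rbinom(n, 1, .4),
                task3 = rbinom(n, 1, .3))
d$ease <- 3 + 0.5 * d$task1 - 1.0 * d$task2 + rnorm(n)

# 1) combination identifier (one of the 2^3 = 8 patterns) and its mean ratings
d$combo <- interaction(d$task1, d$task2, d$task3, drop = TRUE)
aggregate(ease ~ combo, data = d, FUN = mean)

# 2) regression: main effects first, then pairwise product terms
fit_main <- lm(ease ~ task1 + task2 + task3, data = d)
fit_int  <- lm(ease ~ (task1 + task2 + task3)^2, data = d)
summary(fit_int)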
51,056
MCMC sampling with noisy likelihood
Generalized Poisson estimator seems to be what I'm looking for http://www.maths.lancs.ac.uk/~rowlings/Chicas/Talks/LightningNov2011/Prangle.pdf
51,057
Missing data which simply cannot exist
For sources, I'd suggest Missing Data Problems in Machine Learning, by Benjamin M. Marlin (PhD thesis), or Statistical Analysis with Missing Data, by Roderick Little and Donald Rubin. What you are describing in your answer is similar to the augmented model. There are multiple approaches to dealing with missing data; as a quick look into it, the easiest ones to implement are Multiple Mean Imputation: This works if the data are Missing At Random, i.e., the fact that the data are missing has no impact on the dependent variable, beyond the fact that you do not know the value of the variable. The idea is to assign the mean of the variable to missing values, and add a random error to avoid having a big spike in the distribution. Do this multiple times to get different training sets, and average the models. Another technique that works in this context is matrix reconstruction. Reduced Models: If the data are Not Missing At Random, you get into trouble with mean imputation. The idea of reduced models is to use multiple models for the different patterns of training data. In your case, you would train four models, $[x_1,x_2],[x_1,?],[?,x_2],[?,?]$. Note that when training $[?,x_2]$ you can use data that is valid from the $[x_1,x_2]$ pattern. The question of whether you should will come from testing the resulting models. If the dependent variable depends on the pattern, not including data from other patterns will improve the result. If the independent variables are missing based on their value, but the dependent variable does not depend on the "missingness", including more data should improve the results. Augmented Models: This is the model you finally chose to implement: set unknown values to $0$ and complement the model with 0/1 variables representing the missingness of features. This works for some models, such as linear regression, where $0$ has a special meaning. It would probably work less well using trees or SVMs, but should still work using neural nets. It provides a way to compensate the intercept for different pattern types. This should work well in your case, but when a lot of features have missing values, and there are a lot of patterns with similarities, two patterns may need compensation in different ways, and some additional work may be required to properly identify patterns that are similar.
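A tiny R sketch of the multiple-mean-imputation idea above (hypothetical data; for serious work a dedicated package such as mice would be preferable):
set.seed(4)
n  <- 100
x1 <- rnorm(n)
y  <- 2 + 0.5 * x1 + rnorm(n)
x1[sample(n, 20)] <- NA                                    # 20 values go missing (at random)

impute_once <- function(x) {
  m <- mean(x, na.rm = TRUE); s <- sd(x, na.rm = TRUE)
  x[is.na(x)] <- m + rnorm(sum(is.na(x)), 0, s)            # mean plus a random error
  x
}

fits <- lapply(1:5, function(i) lm(y ~ impute_once(x1)))   # several completed training sets
rowMeans(sapply(fits, coef))                               # average the fitted coefficients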
51,058
Missing data which simply cannot exist
So I sat down and worked this through. I thought I would share the answer - I struggled to find a source which worked this through, and the couple of upvotes suggest interest in the solution. Firstly, let's set up a scenario: 1 dependent variable: Y 2 variables: X1 & X2 X1 & X2 are correlated Both have a subset of observations where the data cannot exist (going to call this 'missing' for simplicity) This is such that there exist 2 or more observations for which X1 & X2 both cannot exist. There are four combinations: X1 & X2 are both available X1 is missing, X2 isn't X2 is missing, X1 isn't X1 and X2 are both missing We could run four separate regressions to take each combination in turn. This would give us valid estimates. Not including observations which have a 'missing' value is not bad in any capacity, as those observations simply do not hold any information. However, we want to run all four within one regression. To do this, we need to be aware of two things: We need to be able to generate four different intercepts depending on the four combinations above. Changing 'missings' to 0s will adjust the coefficient estimates of X1 and X2. Therefore, we need to be able to regulate the coefficient estimates so they are correct for situations where at least one of X1 and X2 is present. How? To achieve both of the points above, 3 dummy variables (denoted by D1, D2 and D3 respectively) need to be used to avoid incorrect coefficients: D1 takes the value 1 if X1 is missing AND X2 is not (0 otherwise) D2 takes the value 1 if X2 is missing AND X1 is not (0 otherwise) D3 takes the value 1 if X1 and X2 are both NOT missing (0 otherwise) We then need to interact D1 with X2 & D2 with X1. In total we have 5 additional independent variables, alongside the intercept, X1 and X2. D1, D2 and D3 regulate the intercept depending on whether a) X1 and X2 are not missing, b) X1 and X2 are missing, c) X1 is missing, d) X2 is missing The interactions, D1*X2 and D2*X1, regulate the coefficients for X1 and X2 such that: a) If X1 is missing AND X2 is not, D1*X2 regulates the coefficient on X2, akin to running a regression of Y on just the intercept and X2 b) If X2 is missing AND X1 is not, D2*X1 regulates the coefficient on X1, akin to running a regression of Y on just the intercept and X1. The inclusion of the 5 additional variables allows you to obtain the coefficient and intercept estimates for all four combinations within one regression. Scaling the approach up to 3 or more variables As the number of variables with missing values increases, the number of additional variables needed also increases. In the scenario of 3 variables, for example, assuming all six combinations of missing data can exist for 2 or more observations (so, for example, four observations are missing X1 and X2 but have X3 present; six observations are missing X2 and X3 but have X1 present; five observations are missing X1, X2 and X3), a total of 11 additional variables are needed to create the coefficient and intercept estimates. It is easy to see that the approach gets more unwieldy in scenarios of 4+ variables.
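A compact R sketch of this encoding for the two-variable case (the simulated numbers are only there to make the regression run; they do not mimic a real "cannot exist" mechanism):
set.seed(5)
n  <- 400
x1 <- rnorm(n); x2 <- 0.6 * x1 + rnorm(n)
y  <- 1 + 2 * x1 - 1 * x2 + rnorm(n)
x1[sample(n, 100)] <- NA                      # values that 'cannot exist'
x2[sample(n, 100)] <- NA

D1  <- as.numeric( is.na(x1) & !is.na(x2))    # X1 missing, X2 present
D2  <- as.numeric(!is.na(x1) &  is.na(x2))    # X2 missing, X1 present
D3  <- as.numeric(!is.na(x1) & !is.na(x2))    # both present
x1z <- ifelse(is.na(x1), 0, x1)               # recode 'missing' to 0
x2z <- ifelse(is.na(x2), 0, x2)

# intercept shifts via D1, D2, D3; slope adjustments via D1*X2 and D2*X1
fit <- lm(y ~ x1z + x2z + D1 + D2 + D3 + D1:x2z + D2:x1z)
summary(fit)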
51,059
Significance test for entropy?
I agree with @Nir Friedman that a Bayesian approach would be a good fit here, so I went ahead and implemented it in Python. Since the uniform prior is conjugate to the multinomial distribution, we can implement it without any fancy MCMC/HMC stuff. First things first, I imported a few libraries and defined a function to calculate entropy: import numpy as np import seaborn as sns from matplotlib import pyplot as plt def entropy(x): return np.sum( -x*np.log2(x) , axis=-1) Then, I took Monte-Carlo samples from the posterior distribution of the entropy. This was done in two steps: First, notice that the posterior distribution for the true multinomial proportions is a Dirichlet. We can sample from it in Python in a single line of code: np.random.dirichlet(counts_die+1, 1000000) Calculate the entropy for each sample from that Dirichlet distribution The code for this is as follows: counts_die_1 = np.array([6,7,3,5,2,1]) counts_die_2 = np.array([3,4,2,1,1,2]) entropy_die_1 = entropy(np.random.dirichlet(counts_die_1+1, 1000000)) entropy_die_2 = entropy(np.random.dirichlet(counts_die_2+1, 1000000)) Then, we can plot the distribution for the difference between the entropies: sns.kdeplot(entropy_die_1-entropy_die_2, fill=True) plt.axvline(0, ls="--", c="k") # changing plot aesthetics plt.gca().set(yticklabels=[], ylabel="", xlabel="Difference in entropy, in bits") plt.gca().tick_params(left=False) sns.despine(left=True) The resulting density plot (figure not reproduced here) shows the posterior of the entropy difference centred close to zero. We don't see evidence for a difference in entropies, and we can be fairly certain (>99%) that any such difference is less than half a bit. We can get the probability of die 1 being less random than die 2 like this: (entropy_die_1 < entropy_die_2).mean() This gives us 0.512942: very close to 0.50, meaning that we have little to no evidence of a die being more random than another. Hope it was helpful!
51,060
Significance test for entropy?
I think what's a little tricky is this: in a Student t-test for comparing whether two populations have the same mean, you have a null hypothesis that the two populations have the same mean. Notice that this is as far as you have to specify the null hypothesis; you don't need to specify what both means are under the null hypothesis, just their difference. This is because the actual values of the two means do not affect the distribution of the resulting t-statistic, only their difference does. Here, the situation is quite different. The null hypothesis that both entropies are 0, and the null hypothesis that both entropies are some other value, are very different. If the entropy is truly 0, then you should always measure zero entropy in every test (i.e. there's zero variance). I would take a Bayesian approach instead: assuming uniform priors (on the multinomial distribution parameters, not the entropy) and a measured entropy, you can come up with the distribution of the true entropy using the likelihood function. Note that this basically boils down to a series of questions about the multinomial distribution and frequency counts. That is, you can do the problem by getting a distribution over the multinomial parameters by adding the probabilities over all the permutations of frequency counts (since they all yield the same entropy). Once you have that distribution, you can convert it to a distribution over entropy. If you do that once for each die, then you'll have two distributions. You can then get the distribution of the difference. You can see how sharply peaked that distribution is away from 0, and that will tell you whether the difference in entropy is significant.
51,061
Significance test for entropy?
For each distribution, compute the maximum possible entropy, $\log_2 N$, and divide the observed entropy by this maximum; the result is a ratio between 0 and 1. Test this ratio using a Z-test for proportions.
51,062
What is an appropriate statistical test to identify significantly different time-points in two time-courses?
Obviously super late with an answer here, but in case you haven't yet solved the issue, or you are working in the field and will face similar issues again... I would suggest adopting your first strategy and adjusting for multiple comparisons using a threshold-free cluster-enhancement (TFCE) technique followed by a maximum-statistic permutation test to correct for the comparisons. Theoretical details with simulated EEG data can be found here. A user-friendly toolbox for Matlab can be found here. Of course, when using permutation you are not limited to conducting a t-test; you could potentially use any sort of difference measure you see fit (with proper justification). That also means that if you have multiple groups or multiple factors, using the F-values from the appropriate ANOVA as summary measures for your permutation would also work. Good luck!
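To make the correction concrete, here is a rough R sketch of the maximum-statistic permutation step alone (without the TFCE weighting; the group sizes, effect size, and number of permutations are all made up). It compares per-time-point t statistics against the permutation distribution of the maximum absolute statistic:
    # toy data: 20 vs. 15 subjects, 50 time points each (all sizes made up)
    set.seed(1)
    g1 <- matrix(rnorm(20 * 50), nrow = 20)
    g2 <- matrix(rnorm(15 * 50, mean = 0.3), nrow = 15)

    tstat <- function(a, b) {  # per-time-point Welch t statistics
      (colMeans(a) - colMeans(b)) /
        sqrt(apply(a, 2, var) / nrow(a) + apply(b, 2, var) / nrow(b))
    }
    obs <- tstat(g1, g2)

    pooled <- rbind(g1, g2)
    n1 <- nrow(g1)
    maxnull <- replicate(2000, {  # null distribution of the maximum |t| across time points
      idx <- sample(nrow(pooled))
      max(abs(tstat(pooled[idx[1:n1], ], pooled[idx[-(1:n1)], ])))
    })
    crit <- quantile(maxnull, 0.95)   # familywise 5% threshold
    which(abs(obs) > crit)            # time points that survive the correction
The same skeleton works with any summary statistic (e.g. ANOVA F values) in place of the t statistic; the TFCE variant would additionally transform each statistic map before taking the maximum.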
51,063
Confidence interval for a constrained fit to Gaussian-like data
Jeff, two comments. 1) The steps you described correspond to fitting the model
\begin{equation} G_i = A\,G_0(x_i) + \epsilon_i \end{equation}
with the $\epsilon_i$ iid, normally distributed, with mean 0 and variance $\sigma^2$, which is a standard linear model (linear in the way the observations are assumed to depend on the unknown parameter $A$), without an intercept. The least-squares estimator is
\begin{equation} \hat A = \dfrac{\Sigma_i G_i G_0(x_i)}{\Sigma_i G_0(x_i)^2 } \end{equation}
whose variance can be estimated by
\begin{equation} se^2(\hat A) = \dfrac{\frac{1}{n-1}\Sigma_i e_i^2}{\Sigma_i G_0(x_i)^2 }. \end{equation}
There would be an $\bar{x}$ in these formulas if the model had an intercept. 2) It seems, from what you say, that to reflect the uncertainty in $A$, a model of the form
\begin{equation} \ln(G_i) = a + \ln(G_0(x_i)) + \epsilon_i \end{equation}
would be better. One could fit the constant $a$, taking the $\ln(G_0(x_i))$ term as a fixed offset, and translate the uncertainty in the estimator of $a$ into uncertainty in $A$.
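To make comment 1 concrete, here is a small, hypothetical R sketch (with a made-up template $G_0$ and noise level) that computes $\hat A$, its standard error, and a confidence interval, and checks it against lm() with the intercept suppressed:
    # made-up template and data: G = A*G0(x) + noise
    set.seed(1)
    x  <- seq(-3, 3, length.out = 50)
    G0 <- function(x) exp(-x^2 / 2)
    G  <- 2.5 * G0(x) + rnorm(length(x), sd = 0.1)

    A_hat <- sum(G * G0(x)) / sum(G0(x)^2)        # least-squares estimator
    e     <- G - A_hat * G0(x)                    # residuals
    se_A  <- sqrt((sum(e^2) / (length(x) - 1)) / sum(G0(x)^2))
    A_hat + c(-1, 1) * qt(0.975, df = length(x) - 1) * se_A   # ~95% confidence interval

    # the same fit via lm() with no intercept
    fit <- lm(G ~ 0 + G0(x))
    confint(fit)
The two intervals should agree.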
51,064
Restoring original distribution from noisy observations
If the $\sigma_i$ are a known constant then this is just a variation on kernel density estimation. If they are not constant (but known at least up to a constant) then it would be a form of weighted kernel density estimation. If the $\sigma_i$ are not known, but believed to be related to $x_i$, then it becomes more complicated, but you should still be able to find a reasonable estimate using a different kernel. For example, if a larger $x_i$ means a larger $\sigma_i$, then a given $y_i$ would be more likely to have come from an $x_i$ that is larger rather than smaller, so a non-symmetric kernel would be appropriate. The kernel would be a convolution of the normal and the relationship between $x_i$ and $\sigma_i$. The ideas behind log spline density estimation may apply here as well. You could use that idea to estimate the density of $x$ given your relationship above and maximum likelihood. There are also Bayesian methods that use a mixture of Dirichlet distributions to represent an unknown density.
Edit
Here is some example R code that uses the kernel density estimate to try to reconstruct the original density:
    x <- rgamma(100, 3, 1/3)
    e <- rnorm(100)
    sig <- runif(100, 0.5, 1.5)
    y <- x + sig*e

    plot( density(y, kernel='gaussian', bw=1, weights=sig/sum(sig)), type='l' )

    densgen <- function(y, sig) {
      n <- length(y)
      function(x) {
        sum( dnorm(x, y, sig)/n )
      }
    }

    tmpdens <- Vectorize(densgen(y, sig))
    curve(tmpdens, from=0, to=25, add=FALSE)
    curve(dgamma(x, 3, 1/3), add=TRUE, col='red')
You can smooth the density estimate by multiplying the values of sig by a constant; the bigger the constant, the smoother the result. And here is some code that does a maximum likelihood fit to the parameters when assuming a particular distribution:
    library(distr)
    library(stats4)

    ll <- function(shape, rate) {
      if( shape <= 0 || rate <= 0 ) return(Inf)
      X <- Gammad(shape, 1/rate)
      -sum( sapply( seq_along(y), function(i) {
        E <- Norm(0, sig[i])
        Y <- X + E
        log( d(Y)(y[i]) )
      } ) )
    }

    fit <- mle( ll, start = list( shape=3, rate=1/3 ) )
    curve( dgamma(x, coef(fit)[1], coef(fit)[2]), add=TRUE, col='blue' )
If you need a non-parametric estimate of the density, then you could use the logspline density estimate or a mixture of Dirichlets, or other non-parametric estimates in place of the parametric density in the above.
51,065
Detecting outliers in circular data?
I am battling similar problems at the moment and found some literature that may help you. Abuzaid, Mohamed, and Hussin have designed and proposed circular boxplots; see:
Boxplot for circular variables (2012), doi 10.1007/s00180-011-0261-5, http://dl.acm.org/citation.cfm?id=2347773
Outlier labeling via circular boxplot, http://eprints.um.edu.my/10365/1/Outlier_labeling_via_circular_boxplot.pdf
There is also an R package that seems to include this:
OmicCircos: A Simple-to-Use R Package for the Circular Visualization of Multidimensional Omics Data, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3921174/
51,066
Detecting outliers in circular data?
One way to go about this would be to calculate the circular dispersion as in this answer. If you set the constant $c$ fairly high, i.e. 2 or 3, you may view all observations outside the interval $\left[\hat\mu - c \hat\delta, \hat\mu + c \hat\delta \right]$ as outliers. In that answer, you may also find some code to visualize this.
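As a rough illustration (assuming Fisher's definition of the sample circular dispersion, $\hat\delta = (1-\hat\rho_2)/(2\hat\rho_1^2)$, which appears to be what the linked answer uses), an R sketch with made-up angles might look like this:
    theta <- c(runif(50, 1.2, 1.8), 5.5)    # angles in radians; the last one is a planted outlier
    rho <- function(t, p) sqrt(mean(cos(p * t))^2 + mean(sin(p * t))^2)
    mu    <- atan2(mean(sin(theta)), mean(cos(theta)))         # sample mean direction
    delta <- (1 - rho(theta, 2)) / (2 * rho(theta, 1)^2)       # sample circular dispersion
    d <- abs(atan2(sin(theta - mu), cos(theta - mu)))          # angular distance from the mean
    which(d > 2 * delta)                                        # points outside [mu - 2*delta, mu + 2*delta]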
51,067
Bootstrap Confidence Intervals for Weir & Cockerham's Fst
The method you outlined of computing the Studentized bootstrap confidence intervals looks fine to me. As long as you have data from multiple loci and are using that to estimate one value of $F_{st}$, bootstrapping over loci using a Studentized bootstrap confidence interval should be the way to go. The first paper you cited demonstrates, through simulation studies, that the percentile-t method used on Weir and Cockerham's multiple-locus estimator $\hat{\theta}_{loci}$ will give good results. Thus, you should feel confident with using that method.
51,068
How to apply hurdle models to panel data (using Stata)?
You can estimate double-hurdle models for panel data with random effects using the module called dhreg. The Stata module is described in the following article (paywalled until 2017): http://www.stata-journal.com/article.html?article=st0359 You can find and install the software by typing findit dhreg in Stata.
51,069
Interpretation of Spearman's rank correlation coefficient - beyond its significance
Spearman's rank correlation coefficient is the Pearson correlation coefficient of the ranked variables; in turn, the Pearson coefficient is defined as the mean of the product of the paired standardized scores $z(X_i)$, $z(Y_i)$:
\begin{equation} r(X,Y) = \Sigma_i[z(X_i) z(Y_i)]/(n-1) \end{equation}
in which $n$ is the sample size and the standard scores
\begin{equation} z(X_i) = [X_i - \bar{X}]/std(X) \end{equation}
\begin{equation} z(Y_i) = [Y_i - \bar{Y}]/std(Y) \end{equation}
are computed on the ranked variables ($X_i$, $Y_i$). Squaring $r(X,Y)$ we obtain the coefficient of determination $r^2$, which we can equate to the fraction of explained variance. So if my Spearman's rank correlation is 0.6, I can deduce that 36% of the variance of the ranked variables is shared. From the first equation, and attempting a simpler way of explaining $r(X,Y)$, I would say it is the average concordance of the paired z-scores. For instance, say I repeat an experiment with a larger sample size $n$ and calculate $r(X,Y)$ for both the small sample and the larger one. If, associated with a roughly threefold increase in $n$, I get a decrease in $r(X,Y)$ of roughly 50%, this corresponds to a 50% decrease in the concordance of the standardized scores. My interpretation would then be that the larger dataset provides weaker evidence for the presence of a correlation in the data.
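A quick R check of the first claim (with arbitrary simulated data and no ties): the Pearson correlation of the ranks equals the built-in Spearman coefficient, and squaring it gives the shared variance of the ranked variables.
    set.seed(42)
    x <- rnorm(30)
    y <- x + rnorm(30)
    cor(rank(x), rank(y))            # Pearson correlation of the ranked variables
    cor(x, y, method = "spearman")   # built-in Spearman's rho -- identical (no ties here)
    cor(rank(x), rank(y))^2          # fraction of shared variance of the ranks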
51,070
Zero inflated negative binomial with selection
Not an answer, but more of a comment. I once spent time looking for this with no luck. This particular model is not even in the Winkelmann or Cameron and Trivedi's count data books. Some possible solutions: Ask this on Statalist. Jeff Wooldridge may know the answer. Do note the SL policy on cross-posting. etpoisson with robust errors if your endogenous variable is binary. E-mail Greene himself. Flesh out your problem more in case there's another solution, like fmm. If you figure it out, do post the answer(s).
51,071
Generalized linear model with lasso regularization for continuous non-negative response
Check out the machine learning application H2O (http://www.h2o.ai/), which can be run from within R. This allows you to fit penalised glms (e.g. ridge and lasso models) with a gamma error and log link (as well as several other error structures).
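As a rough sketch (assuming the h2o R package's h2o.glm interface; the data frame df and response column y are hypothetical placeholders), a lasso-penalised gamma GLM with a log link might be fit like this:
    library(h2o)
    h2o.init()
    hf <- as.h2o(df)                       # df: your data frame with a non-negative response y
    fit <- h2o.glm(x = setdiff(names(df), "y"), y = "y",
                   training_frame = hf,
                   family = "gamma", link = "log",
                   alpha = 1,              # alpha = 1 gives the lasso penalty
                   lambda_search = TRUE)   # search over a path of penalty strengths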
51,072
Are world cup predictions testable?
Yes, World Cup predictions are testable. In addition to the great comments above, here is one way to think about it: The 45% probability that Brazil wins (or won, since my answer is years late) is not drawn out of thin air. Instead, it comes out of simulations of the outcomes of individual matches, in which the model predicts wins and losses, with some confidence attached to those predictions as well. Therefore, instead of one single prediction (Brazil wins!), you have predictions of every single match in the tournament, and uncertainty in each of those predictions. In this case, you can do a fairly sophisticated check. Of the matches in which Silver's model predicts that one team wins with 90% probability, it should be correct 9/10 times. Of the matches in which it predicts that one team has only a 60% probability of winning, it should be correct 6/10 times. And so on. My point here is a simple one: instead of having just one prediction, you have a very large number of them to test, which gives you a simple, accurate way to assess how well the model performed. Of course, none of this is new for Silver. He rose to prominence not because he predicted the outcome of the 2008 US federal elections; many models, both formal and informal, did this. His work was considered impressive because it correctly predicted 49/50 state-level results in the presidential election, and every single Senate race, and he also generally made accurate predictions of the margin of victory.
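This kind of calibration check is easy to script. Below is a hedged R sketch with simulated, made-up predictions (in practice you would substitute the model's actual match probabilities and the observed outcomes): it bins the predicted win probabilities and compares each bin's average prediction with the observed win rate.
    set.seed(1)
    p   <- runif(500)              # predicted win probabilities (simulated placeholder)
    won <- rbinom(500, 1, p)       # observed outcomes; here generated to be well calibrated
    bins <- cut(p, breaks = seq(0, 1, by = 0.1))
    data.frame(predicted = tapply(p, bins, mean),     # average forecast in each bin
               observed  = tapply(won, bins, mean),   # empirical win rate in each bin
               n         = as.vector(table(bins)))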
51,073
When forecasting sequential data is it best to use auto-regressive models or build a more traditional n x p dataset with features?
I've yet to find a satisfactory line of research in the literature that handles nonparametric time-series forecasting. What follows is my duct-tape approach. The pervasive question in performing nonparametric time-series analysis is: what do I really care about? Most problems are made more complex by requiring confidence intervals, or made simpler with a classifier for "is this going up?" The generalized approach to time-series forecasting is to build a feature matrix of lagged values, possibly perform a log-transform if your values are strictly positive, then carry out temporal cross-validation on nonparametric regression models. With this approach there are multiple options for forecasting over a horizon: you can recurse your model on its own outputs, or you can use a multioutput regression model. There are also multiple options for generating confidence intervals over that horizon: you can cross-validate a standard error and then multiply it by the forecast horizon, or you can use a nonparametric multioutput model that produces confidence intervals associated with each element of a fixed-length horizon. If you are using kernel methods, it is possible to weight features by their recency. If you are using a method that relies on some form of gradient descent, you can use the previously found parameters as a warm start and then continue training after observing new data points; this rapidly speeds up convergence. Online methods can be quite successful for some problems, while offering nice complexity guarantees and never becoming stale.
Regarding "Is it even worth building a more sophisticated model?": don't fix what isn't broken.
Regarding "Would I have to continually retrain it between every hour or would a few months worth of this hourly data be enough to forecast for several days before retraining?": see above for some approaches to try.
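For the "feature matrix of lagged values" step, a minimal R sketch (using a built-in series as a stand-in for your hourly data, and an ordinary linear model as a stand-in for whatever nonparametric learner you prefer) could look like this:
    y <- log(as.numeric(AirPassengers))   # stand-in series; log since the values are positive
    p <- 12                               # number of lags used as features
    E <- embed(y, p + 1)                  # row t: y_t, y_{t-1}, ..., y_{t-p}
    target <- E[, 1]
    lags   <- E[, -1]

    train <- 1:100                        # temporal split: fit on the past ...
    test  <- 101:nrow(E)                  # ... evaluate on the future
    fit   <- lm(target[train] ~ lags[train, ])
    pred  <- cbind(1, lags[test, ]) %*% coef(fit)   # one-step-ahead predictions on the held-out block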
51,074
find the point at which the curve significantly shoots up
You could use Outlier Detection from Time Series (Zhao - R and Data Mining). The first chart is the original time series, the second the seasonality, the third shows the trend, and the last one plots the outliers on top of the remaining component after removing trend and seasonality. Reproducible example code:
    # use robust fitting
    f <- stl(AirPassengers, "periodic", robust=TRUE)
    (outliers <- which(f$weights < 1e-8))

    # set layout
    op <- par(mar=c(0, 4, 0, 3), oma=c(5, 0, 4, 0), mfcol=c(4, 1))
    plot(f, set.pars=NULL)
    sts <- f$time.series

    # plot outliers
    points(time(sts)[outliers], sts[, "remainder"][outliers], pch="x", col="red", cex=2)
    par(op)   # reset layout
51,075
find the point at which the curve significantly shoots up
This really depends on what the data look like. Without a plot, and from the description, it sounds like the mean increases during the rainy season. If it is just a case of a baseline value of rainfall outside the rainy season which switches to another (higher) baseline during the rainy season, then you are looking at a change-in-mean model. You can fit this using the cpt.mean function in the R package changepoint. Alternatively, if there aren't really two baselines and during the rainy season you see an increase in the mean that isn't really constant, with higher variability, then you might want to transform your data. The easiest way to do this is to take first differences, i.e. $x_2-x_1$ (you can use the diff function in R). Then, to find the changepoint in the differences, you would use the cpt.var function in the changepoint package. These two functions find changes in the mean and variance respectively. Without seeing the data it is hard to know which one (if either) might be appropriate for your data.
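A minimal usage sketch (the object rain below is a placeholder for your daily rainfall series; the AMOC method restricts the search to a single changepoint, which matches the one-onset-per-season setting):
    library(changepoint)
    m <- cpt.mean(rain, method = "AMOC")      # single change in mean level
    cpts(m)                                   # estimated changepoint location

    v <- cpt.var(diff(rain), method = "AMOC") # single change in variability, after differencing
    cpts(v)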
51,076
find the point at which the curve significantly shoots up
I would start by applying a kernel smoother, such as a Gaussian one. It'll give you a smooth curve $f(t)$, where you can play with the length scale (bandwidth) $b$ to get the curve as smooth as you wish. Now you can apply analytical methods, such as the first-derivative criterion $\frac{df(t)}{dt}>a$, where $a$ is some threshold, to identify the point of onset of the rainfall during the year. Start by graphing $f(t)$ to get more intuition about the smoothed curve.
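In R this can be done with the base ksmooth function; the names rain and day and the threshold value below are placeholders, and the bandwidth would need tuning on real data:
    # day: 1..365, rain: daily rainfall for one year (placeholder names)
    sm <- ksmooth(day, rain, kernel = "normal", bandwidth = 15)   # Gaussian kernel smoother
    slope <- diff(sm$y) / diff(sm$x)                              # numerical first derivative of f(t)
    a <- 0.5                                                      # threshold on the slope (made up)
    onset <- sm$x[which(slope > a)[1]]                            # first day the smoothed curve shoots up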
51,077
find the point at which the curve significantly shoots up
If you are just looking at modelling the seasonality in the data, you could use a (possibly non-linear) regression model to predict the rainfall as a function of the sine and cosine of the day of year. If you want to look for changes or trends, then you could include other variables, such as the number of days since the start of the dataset. However, rainfall data (unless averaged over a very large area) will have lots of zeros, representing days where it didn't rain at all, and this is likely to skew the analysis unless it is taken into account. The method I like best is the mixed Bernoulli-Gamma model devised by Peter Williams, which jointly models the occurrence and amount processes using a single likelihood. It really is very elegant and I have found it very useful for my work in downscaling rainfall data. I suspect that paper would be of interest to anybody performing statistical analyses of rainfall data (at least at station level). Note this paper discusses modelling of seasonality and trends.
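To give a feel for the first suggestion, here is a hypothetical R sketch; it is only a crude two-part stand-in, not the jointly estimated Bernoulli-Gamma model, and rain, doy, and days_since_start are placeholder names for daily rainfall, day of year, and days since the start of the record:
    # seasonal harmonics plus a linear trend term
    s  <- sin(2 * pi * doy / 365.25)
    co <- cos(2 * pi * doy / 365.25)
    trend <- days_since_start

    occ <- glm(as.numeric(rain > 0) ~ s + co + trend, family = binomial)        # wet/dry occurrence
    amt <- glm(rain ~ s + co + trend, family = Gamma(link = "log"),
               subset = rain > 0)                                               # amounts on wet days only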
51,078
How to test the ARIMA coefficients?
You can use the standard error of the coefficient to COMPUTE the t value. INTERPRETATION of the t value, i.e. converting the t value to a probability using the normal distribution, REQUIRES that the errors from the model are Gaussian. To test the Gaussian assumption one must verify the following:
- There are no pulses/level shifts/seasonal pulses/time trends in the error process.
- The error variance is free of structural (deterministic) change, which would suggest the possible need for Weighted Least Squares.
- The error variance is not related to the expected value, which would suggest the possible need for a power transform, e.g. logs/square roots/reciprocals etc.
- The parameters of the ARIMA model are invariant over time; otherwise time-varying parameters (coefficients) would be needed.
- The square of the errors is not describable as an ARIMA process, which would possibly suggest the need for a GARCH augmentation.
Good software not only estimates the parameters but tests the assumptions that are necessary to convert the computed t values to probabilities. Alas and alack, this is often missing!
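For the computation itself (not the full diagnostic checking described above), a small R sketch using a built-in series shows how the t values and a few of the listed checks can be obtained; the lag choice and the example series are arbitrary:
    fit  <- arima(lh, order = c(1, 0, 0))            # lh is a built-in example series
    tval <- fit$coef / sqrt(diag(fit$var.coef))      # coefficient t values
    2 * pnorm(-abs(tval))                            # p values, valid only if the errors are Gaussian

    res <- residuals(fit)
    Box.test(res,   lag = 10, type = "Ljung-Box")    # leftover autocorrelation in the errors
    Box.test(res^2, lag = 10, type = "Ljung-Box")    # structure in squared errors (GARCH-type)
    shapiro.test(res)                                # crude check of normality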
51,079
Is there a negative impact from imbalance/skew in predictor variables?
Lack of balance is not bad for a saturated model. With two categorical variables this means having $AB$ predictors ($A$ is the number of categories in the first variable, $B$ in the second). If you include the interaction term between $x_1$ and $x_2$ you should be fine. If this fit is too "noisy", then include the interaction as a random effect in a mixed model - so you get the benefits of partial pooling. The problem comes in when you are pooling relationships across categories (such as fitting main effects only). If you are pooling across two imbalanced groups, the larger group contributes more to the pooled estimate. This causes problems when the groups are actually different and the imbalance is "unrepresentative" - that is, an artifact of the sampling mechanism that your data came from (e.g. quota sampling, or "balancing" by subsampling your data).
51,080
Is there a negative impact from imbalance/skew in predictor variables?
$y = B_0 + B_1 x_1 + B_2 x_2$? Does that even make sense for categorical variables? Look at continuous variables: if, for example, most values of $x$ are clustered together and there are a couple of outliers, those outliers are going to have high leverage and a large impact on the regression coefficients. I imagine the same might happen with strongly imbalanced categorical variables (if you interpret the equation $y \sim B_0 + B_1 x_1 + B_2 x_2$ in the sense of, say, logistic regression, with $x_1$, $x_2$, etc. being dummy variables).
51,081
Best way to generate Gaussian Field
Yes this is too big for Cholesky! If you generate on a regular grid, then the spectral methods are the best. They are kind of hard to set up. Fortunately, there are several R packages, for example, RandomFields.
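As a rough usage sketch (assuming the RandomFields package's RFsimulate/RMexp interface; the covariance model, its parameters, and the grid size are all arbitrary):
    library(RandomFields)
    model <- RMexp(var = 1, scale = 10)            # exponential covariance model (parameters made up)
    grid  <- seq(1, 1000, by = 1)
    sim   <- RFsimulate(model, x = grid, y = grid) # ~10^6 grid points; no n x n Cholesky factor needed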
51,082
DNA: The number of 'AAAAA'-s in a randomly generated DNA sequence that's 1000 base pairs long
Well, there is a way to get an asymptotic approximation that gets closer as the size of the sequence gets bigger. For a 1000-length sequence I think it can give you a good approximation. Call $L_{i}$ the letter in position $i$. Consider the Markov chain $X_{i} = \max\{n \le 5 : L_{i-j} = A \text{ for all } 0\leq j < n\}$, i.e. the length of the run of A's ending at position $i$, capped at 5. The transition probabilities of that chain are simple:
$p(n,0) = 0.75$ for every $n$ between 0 and 5,
$p(n,n+1) = 0.25$ for every $n$ between 0 and 4,
$p(5,5) = 0.25$.
Then you get the equilibrium distribution of the states, $\pi_{n}$, by solving $\pi_{n} = \sum_{m} \pi_{m}\,p(m,n)$. Once you have $\pi_{5}$ you can get an approximation by considering a binomial distribution with 1000 tries and success probability $\pi_{5}$. It gets better with bigger sequences.
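Here is a base-R sketch that carries out this calculation numerically: it builds the 6-state transition matrix, extracts the stationary distribution, and applies the binomial approximation for a 1000-bp sequence.
    # states 0..5 = current run length of A's (capped at 5); rows/columns 1..6
    P <- matrix(0, 6, 6)
    P[, 1] <- 0.75                       # a non-A letter resets the run
    for (n in 1:5) P[n, n + 1] <- 0.25   # an A extends the run
    P[6, 6] <- 0.25                      # state 5 stays at 5 on another A

    ev <- eigen(t(P))                    # stationary distribution: left eigenvector for eigenvalue 1
    pi_n <- Re(ev$vectors[, which.min(abs(ev$values - 1))])
    pi_n <- pi_n / sum(pi_n)
    pi_n[6]                              # ~0.00098, long-run fraction of positions completing 'AAAAA'

    n <- 1000
    n * pi_n[6]                          # expected number of (overlapping) occurrences
    1 - pbinom(0, n, pi_n[6])            # approximate probability of at least one occurrence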
51,083
Conditional density and variance of Nadaraya-Watson model
The conditional density is well described here (with $y$ in place of your $t$): http://en.wikipedia.org/wiki/Kernel_regression#Derivation The conditional density is $f(y|x) = \frac{f(x,y)}{f(x)}$. The conditional mean is the mean of that density, $E(Y|X)$, and likewise the conditional variance is $E[(Y-E(Y|X))^2 \mid X]$. It is the integral of the conditional density $f(y|X)$ over all $y$ that must equal 1; writing that out will lead you to an integral over all the $K$ (kernel) functions.
51,084
Significance of overlap between multiple lists
I'm dealing with similar problems, and haven't found a straightforward function. So I wrote a function myself. Although it's not very concise, it does the job. Hope it also helps you.
    hyper_matrix <- function(gene.list, background){
      # generate every combination of two gene lists
      combination <- expand.grid(names(gene.list), names(gene.list))
      combination$values <- rep(NA, times=nrow(combination))

      # convert long table into wide
      combination <- reshape(combination, idvar="Var1", timevar="Var2", direction="wide")
      rownames(combination) <- combination$Var1
      combination <- combination[,-1]
      colnames(combination) <- gsub("values.", "", colnames(combination))

      # calculate the length of the overlap of each pair
      for(i in colnames(combination)){
        for(j in rownames(combination)){
          combination[j,i] <- length(intersect(gene.list[[j]], gene.list[[i]]))
        }
      }

      # calculate the significance of the overlap of each pair
      for(m in 1:length(gene.list)){
        for(n in 1:length(gene.list)){
          if(n > m){
            # note that phyper with lower.tail=F gives P[X>x], so the overlap length
            # should subtract 1 to get P[X>=x]
            combination[n,m] <- phyper(combination[m,n]-1, length(gene.list[[m]]),
                                       background-length(gene.list[[m]]),
                                       length(gene.list[[n]]), lower.tail=F)
          }
        }
      }

      # round to 2 digits
      return(round(combination, 2))
    }
With this, if you have, let's say, 4 gene lists
    gene.list <- list(listA=paste0("gene", c(1,2,3,4,5,6,7,8,9)),
                      listB=paste0("gene", c(1,3,4,6,7,9)),
                      listC=paste0("gene", c(5,6,7,8,9,11)),
                      listD=paste0("gene", c(11,12,13,14,15)))
and the background number is 14 genes (the number of all balls in the urn) in the world, the result would be:
    hyper_matrix(gene.list, 14)
          listA listB listC listD
    listA  9.00  6.00  5.00     0
    listB  0.03  6.00  3.00     0
    listC  0.24  0.53  6.00     1
    listD  1.00  1.00  0.97     5
where the upper-right triangle gives the lengths of the overlap of each pair, and the lower-left triangle gives the significance of the overlap by the hypergeometric test. In this toy example, the overlap between listA and listB is significant in a world of 14 genes if you choose 0.05 as your p-value cutoff. No other pair overlaps significantly.
51,085
Item-based collaborative filtering – Can you add demographic information to initial user×item matrix?
I had a similar question/problem and got an answer here - How to integrate users' profile information into a recommender system. The simplest solution is to use a kNN method: find the closest neighbors using your demographic data and infer recommendations from their ratings. A somewhat more sophisticated technique is to use a model-based approach (a matrix factorization method), where you can put all your data into the model. This may help - Matrix Factorization algorithms for Recommender Systems.
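To make the kNN idea concrete, here is a minimal R sketch (mine, not from the linked answer); the objects demo (users x demographic features, already scaled) and ratings (users x items, NA for unrated) are assumed to exist:

# demo:    users x demographic features (numeric, pre-scaled) -- assumed to exist
# ratings: users x items matrix with NA for unrated items     -- assumed to exist
recommend_knn <- function(user, demo, ratings, k = 10) {
  # Euclidean distance from the target user to everyone else in demographic space
  d <- sqrt(rowSums(sweep(demo, 2, demo[user, ])^2))
  neighbors <- order(d)[2:(k + 1)]           # skip the user itself (distance 0)
  # average the neighbors' ratings per item, ignoring missing values
  scores <- colMeans(ratings[neighbors, , drop = FALSE], na.rm = TRUE)
  scores[is.na(ratings[user, ])]             # only score items the user has not rated yet
}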
51,086
Meta-analysis of prevalence at the country level
I see your question is quite old, but it might still be worthwhile to attempt an answer, very humbly though. First, I think you extracted the weights from metaprop correctly. Second, my impression (though I am not the ultimate expert) is that you built both of your models correctly. Third, I would consider reporting results from both models, but stick to the first model for the primary analysis and report the second mainly as a sensitivity analysis. In any case, thinking outside the box, it will boil down to how many patients, studies, and study strata you are using for the evidence synthesis.
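For reference, a minimal sketch of the kind of metaprop call being discussed (my illustration; the data frame dat and its columns cases, total, study and country are hypothetical, and older versions of the meta package name the grouping argument byvar rather than subgroup):

library(meta)

# dat: one row per study, with event counts, sample sizes and a country label (assumed)
m <- metaprop(event = cases, n = total, studlab = study, data = dat,
              sm = "PLOGIT",        # logit-transformed proportions
              method.tau = "ML",    # between-study variance estimator
              subgroup = country)   # country-level subgroups (byvar = country in older meta versions)
summary(m)                          # pooled prevalence overall and by country, with study weights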
51,087
"Robust" normalization of features from multiple groups and unknown distributions prior to learning
I agree with gaborous that you may be seeking standardization, and a quick search for a visualization reveals lectures on why standardization is useful. The TL;DW version is that this is all to avoid a squashed (elliptical or canoe-shaped) input space, because such shapes cause linear regression to wander (credit to Blitz Kim). From what little I've grokked of the fields concerned, it seems one can get away with ranges that do not produce a nice dartboard-like shape, e.g. -1 to 1 and 0 to 3; however, ranges greater than 1 or less than -1 may cause high variability in outputs, or lead to the exploding or vanishing gradient problem. Supposedly there are activation functions and other methods that are more resilient, or less susceptible, to gradients misbehaving, but when a little data massaging can spare you a later fight with debugging a model, pre-cleaning the input space seems like the far better choice. As to what makes sense or not: personally, when reading (and re-reading) the question, my instinct would be to suggest adapting something like word2vec as the data-to-input encoding method and feeding that to a GCN for classification and perhaps generation. Word-to-vector methods supposedly better preserve relationships, like similar meanings of words, or, in this case, similar functions between proteins that may seem unrelated in structure at first glance. And graph convolutional networks, because supposedly they do really well on point-cloud inputs, which is roughly what word2vec methods output. But this is just speculation based on what I think you're attempting to do and what I've learned on these subjects. Essentially I'd treat proteins as paragraphs, because they express that level of complex meaning/intent.
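As an illustration of a more outlier-resistant alternative to plain z-scoring, here is a small R sketch (my own, not from the answer) of robust standardization with the median and IQR; x is a hypothetical numeric feature matrix and group an optional grouping factor:

# x: numeric matrix of features (rows = samples, columns = features) -- assumed
robust_scale <- function(x) {
  med <- apply(x, 2, median, na.rm = TRUE)
  iqr <- apply(x, 2, IQR,    na.rm = TRUE)
  iqr[iqr == 0] <- 1                      # guard against constant features
  scale(x, center = med, scale = iqr)     # (x - median) / IQR, column-wise
}

# e.g. standardize within each experimental group before pooling (group is assumed)
# x_scaled <- do.call(rbind, lapply(split(as.data.frame(x), group),
#                                   function(g) robust_scale(as.matrix(g))))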
"Robust" normalization of features from multiple groups and unknown distributions prior to learning
I agree with gaborous in that you maybe seeking standardization, and a quick search for a visualization reveals lectures on why standardization is useful. TLDW version is that this is all to avoid squ
"Robust" normalization of features from multiple groups and unknown distributions prior to learning I agree with gaborous in that you maybe seeking standardization, and a quick search for a visualization reveals lectures on why standardization is useful. TLDW version is that this is all to avoid squashed (elliptical or canoe) shaped input space, because such shapes will cause linear regression to wander. Credit to Blitz Kim Now from what little I've grokked of the fields concerned it seems as though one can get away with ranges that do not produce a nice dartboard like shapes, eg -1 to 1 and 0 to 3, however, ranges greater than 1 or less than -1 may cause high variability in outputs, or lead to the exploding or vanishing gradient problem. Supposedly there are activation functions and other methods that can be used that are more resilient or less likely to be susceptible to gradients misbehaving, but when a little data massaging can mitigate having a later fight with debugging a model, it seems like a far better choice to pre-clean the input space. As to what makes sense or not... personally when reading (and re-reading) the question my instinct would be to suggest adapting something like word2vec for the data to input encoding method and feeding that to a GCN for classification and perhaps generation. Because supposedly word to vector methods better preserve relationships, like similar meanings of words, or perhaps in this case similar functions between proteins that may seem unrelated at first glance in structure. And Graph Convolution Networks, because supposedly they do really well at point cloud inputs, which is kinda what word2vec methods output But this is just based off speculation on what it is that you're attempting to do, and what I've learned on these subjects. Essentially I'd treat proteins as paragraphs because they express that level of complex meaning/intent.
"Robust" normalization of features from multiple groups and unknown distributions prior to learning I agree with gaborous in that you maybe seeking standardization, and a quick search for a visualization reveals lectures on why standardization is useful. TLDW version is that this is all to avoid squ
51,088
How to perform parameter tuning for machine learning?
Given that you trust your validation setup, option 2 is the way to go. You have performed the CV to identify the most general parameter setup (or model selection, or whatever you're trying to optimize). These findings should be applied to the entire training set and tested (once) on the test set. The picture below illustrates a setup I think works well for evaluating and testing the performance of machine learning algorithms.
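A minimal R sketch of that workflow (my illustration; train_df and test_df with a numeric outcome y are hypothetical, and rpart with a small cp grid just stands in for whatever model and tuning grid you actually use):

library(rpart)

set.seed(1)
folds   <- sample(rep(1:5, length.out = nrow(train_df)))  # 5-fold CV on the training set only
cp_grid <- c(0.001, 0.01, 0.05, 0.1)                      # placeholder tuning grid

cv_mse <- sapply(cp_grid, function(cp) {
  mean(sapply(1:5, function(k) {
    fit  <- rpart(y ~ ., data = train_df[folds != k, ], cp = cp)
    pred <- predict(fit, newdata = train_df[folds == k, ])
    mean((train_df$y[folds == k] - pred)^2)
  }))
})

best_cp  <- cp_grid[which.min(cv_mse)]                    # parameter setup chosen by CV
final    <- rpart(y ~ ., data = train_df, cp = best_cp)   # refit on the whole training set
test_mse <- mean((test_df$y - predict(final, newdata = test_df))^2)  # evaluated once on the test set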
51,089
Time varying predictors at higher aggregation levels in multilevel survival analysis
I think I found a solution. I read two book chapters about multilevel event history models (Courgeau, 2007; Goldstein, 2011), which discuss similar cases and suggest using a three-level structure such as time (level 1) nested within households (level 2), which are in turn nested within municipalities (level 3). Goldstein (2011, p. 221) explicitly states for this structure that “The exploratory variables can be defined at any level. They may also vary over time, allowing so-called time varying covariates.” So here is a quick explanation of why I think such a three-level model can correctly incorporate time-varying predictors at the municipality level (level 3), such as the environmental variable “Env1”. Because Env1 varies across time, the model automatically treats it as a level-1 variable. It does not know that at each time step (e.g., year 1990), the values of Env1 are the same for all households located in a particular municipality. However, I don’t think this biases the standard errors for the Env1 variable, because I have household random effects (level 2) included in the model, which estimate a separate intercept for each household. Moreover, I also include an additional variance component at level 2 that allows the slope of Env1 to vary randomly across households. In this way the effect of Env1 is uniquely computed for each household.
References:
Courgeau, D. (2007). Multilevel synthesis: From the group to the individual. Dordrecht, The Netherlands: Springer.
Goldstein, H. (2011). Multilevel statistical models (4th ed.). Chichester, U.K.: John Wiley & Sons.
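For concreteness, one way to fit such a three-level discrete-time hazard model in R is sketched below (my illustration, not from the cited books; the person-period data frame pp and its columns event, year, household and municipality are hypothetical, while Env1 is the variable discussed above):

library(lme4)

# pp: one row per household-year; event = 1 in the year the event occurs, 0 otherwise (assumed)
fit <- glmer(event ~ factor(year) + Env1 +          # baseline hazard dummies + time-varying level-3 covariate
               (1 + Env1 | household) +             # household intercepts and random Env1 slopes (level 2)
               (1 | municipality),                  # municipality intercepts (level 3)
             family = binomial(link = "cloglog"),   # cloglog link gives a proportional-hazards interpretation
             data = pp)
summary(fit)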
51,090
Statistical comparison of two signals
Time series analysis incorporating both ARIMA structure and empirically identifiable deterministic structure (level shifts, local time trends, seasonal pulses and pulses) might be of some use to you: http://www.unc.edu/~jbhill/tsay.pdf. Good analytics/software should/could identify the 2 level shifts (3 regimes), which would be an important starting point for your analysis by providing "the region of interest". If you wish, you can post your data and I will try to demonstrate that to you and the list.
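If you want to try this yourself in R, one option (my suggestion, not the answerer's own software) is the strucchange package, which estimates level shifts as breakpoints in the mean of the series; y is a hypothetical numeric vector or ts object:

library(strucchange)

bp <- breakpoints(y ~ 1, breaks = 2)     # allow up to 2 level shifts, i.e. 3 regimes
summary(bp)                              # estimated break dates and fit statistics
confint(bp, breaks = 2)                  # confidence intervals for the break dates
regime <- breakfactor(bp, breaks = 2)    # regime label for every observation
tapply(y, regime, mean)                  # mean level within each regime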
51,091
Statistical comparison of two signals
There is no single algorithm that will yield what you want. If your desire is to compare two signals, then from the signal processing point of view you can do the following. If the signals are stationary, use multitaper magnitude-squared coherence from the multitaper package, using the harmonic F statistic against a null hypothesis of white noise for the result. This will tell you in which frequencies the two signals overlap. If the signals are non-stationary, use phase differential analysis and wavelet cross-correlation. This will tell you, across time, how strong the similarities between the two series are. In regard to your desired outputs:
1. Mean amplitude of the area of interest, relative to the start and end amplitudes: I believe you are referring here to the dominant frequencies of each series. You can calculate these by extracting the amplitudes of the signal across frequencies from multitaper.
2. Slope of fall/rise transitions and of individual peaks and troughs: use gradients to measure this. The gradient represents the slope of the tangent to the graph of the function, capturing an upward or downward trend; it points in the direction of the greatest rate of increase of the function, and its magnitude is the slope of the graph in that direction.
3. Number of peaks and troughs: use the finpeaks function from the R package MassArray.
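For points 2 and 3, here is a small R sketch of one way to count peaks/troughs and estimate slopes (my addition; the answer itself points to MassArray); y is a hypothetical numeric signal sampled at interval dt:

library(pracma)

peaks   <- findpeaks(y)       # one row per peak: height, position, start, end
troughs <- findpeaks(-y)      # troughs are peaks of the negated signal
n_peaks   <- if (is.null(peaks))   0L else nrow(peaks)
n_troughs <- if (is.null(troughs)) 0L else nrow(troughs)

slope <- diff(y) / dt         # first-difference approximation of the gradient
c(n_peaks = n_peaks, n_troughs = n_troughs,
  max_rise = max(slope), max_fall = min(slope))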
51,092
Multi-output decision tree
If I understand your problem correctly, maybe the best way to go about it is not multi-output. You are trying to predict which segmentation to use, so it seems you can do this in two ways. (1) Give each tumor a class - the class is the segmentation method that got the best accuracy score - and do class prediction. This is, I think, what you said in reply to Peter's answer. It's true that it ignores the second-best method, but you may get probability measures for the class prediction being right. (2) Frame it as a regression problem of predicting the accuracy of each method, so you'd have a predicted accuracy score per method for any new tumor, and then you'd go with the best-scoring method. Having said that, if you really want multi-output prediction: http://scikit-learn.org/stable/auto_examples/tree/plot_tree_regression_multioutput.html http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputRegressor.html
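A minimal R sketch of option (1) (my illustration; the data frame tumors with six parameter columns plus a best_method factor, and new_tumor, are hypothetical):

library(rpart)

# tumors: one row per tumor, 6 parameter columns plus best_method = the winning segmentation (assumed)
fit <- rpart(best_method ~ ., data = tumors, method = "class")

# class prediction plus per-method probabilities for a new tumor
predict(fit, newdata = new_tumor, type = "class")
predict(fit, newdata = new_tumor, type = "prob")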
51,093
Multi-output decision tree
For n-way outputs, I think you could build n decision (regression) trees. Tree i would take the m input variables (m = 6 tumor parameters) and predict the rank of the accuracy of the i-th output (i in {1..n}, n = 8 segmentation methods). The i-th tree would thus try to capture the range of parameter values in which the i-th segmentation method works well. When two methods i and j work equally well, as you allude to in your comment, the decision trees for i and j may both output a similar rank value. Therefore, any standard tree software would work.
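Sketched in R under similar assumptions (tumors, new_tumor, the parameter names p1..p6 and the per-method accuracy/rank columns acc_1..acc_8 are all hypothetical):

library(rpart)

params  <- c("p1", "p2", "p3", "p4", "p5", "p6")          # hypothetical tumor parameter names
methods <- paste0("acc_", 1:8)                            # one rank/accuracy column per segmentation method

# one regression tree per segmentation method
trees <- lapply(methods, function(m)
  rpart(reformulate(params, response = m), data = tumors))

# predicted rank of each method for a new tumor; pick the best one
pred <- sapply(trees, predict, newdata = new_tumor)
methods[which.max(pred)]    # use which.min instead if rank 1 means "best"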
51,094
Multivariate data analysis of compositional data
Make sure you understand the algorithms before using them. For example, k-means minimizes variance, and of course an attribute with a larger scale will have a much larger variance, too; therefore, standardizing the data is often beneficial there. But with e.g. hierarchical clustering, you need to supply a distance function. Euclidean distance is just one of many options, and maybe you can be much more specific about how much effect each attribute should have on the result. The key question is: what is a sensible measure of similarity for your domain? There is no universal measure. With hierarchical clustering this is just more explicit - k-means is based on the sum of squared deviations, so there you need to rescale/transform your data to give appropriate weight, which is much more limited than specifying a similarity measure for your data. So: when are two soil samples alike? As you can see, this is a domain and purpose question, not so much a statistical question.
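Since the data are compositional, one concrete option (my addition, not part of the original answer) is to build the similarity measure on a log-ratio transform; x is a hypothetical matrix of strictly positive proportions, with rows as soil samples:

# centred log-ratio (CLR) transform removes the unit-sum constraint of compositions
clr <- function(x) log(x) - rowMeans(log(x))

x_clr <- clr(x)
d     <- dist(x_clr)                 # Euclidean distance on CLR values = Aitchison distance
hc    <- hclust(d, method = "ward.D2")
plot(hc)
cutree(hc, k = 3)                    # e.g. cut the dendrogram into 3 soil groups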
51,095
How to correct for non-linearity of response in linear regression
I don't know the details of your model, but in my opinion you need to deal with the large number of "zero responses". Look into compound models with a mass point at zero, something like the "Tweedie model".
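As an illustration, a Tweedie GLM can be fitted in R roughly as follows (a sketch: the data frame dat, the predictors x1 and x2, and the variance power 1.5 are all placeholders; the tweedie package's tweedie.profile() can be used to estimate the power from the data):

library(statmod)    # provides the tweedie() family for glm()

# var.power between 1 and 2 gives a compound Poisson-gamma: a point mass at zero
# plus continuous positive responses; link.power = 0 is a log link
fit <- glm(y ~ x1 + x2, data = dat,
           family = tweedie(var.power = 1.5, link.power = 0))
summary(fit)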
51,096
Marginal distributions of off-diagonal terms in a Wishart-distributed random variable
I believe the answer to this can actually be found in the Wikipedia article about the Wishart distribution, which indicates that an off-diagonal element is variance-gamma distributed.
51,097
Interpreting plm output in R - number of observations used with very unbalanced panel
Capital N gives you the total number of rows in your data, which corresponds to the number of observations in the pooled model (option model="pooling" in the plm function). Lowercase n gives you the number of unique observational units (let's say groups or individuals). This corresponds to the number of dummies you would add if you used the least-squares dummy variable estimator rather than the (equivalent) within estimator. Capital $T$ shows how often an observation or an individual is observed (referring to the time dimension). $T=1-2$ means that there are individuals which you observe only once but also others who are observed twice. Since you have at most two periods, you can easily calculate that there are 35 individuals (211-176) who appear twice, while information for 141 individuals is only available for one period. In general, you cannot make this calculation if T>2; you would need further information.

Here is a dataset of soccer matches for three seasons in the German Bundesliga. In Germany, 18 teams play 34 matches each per season. Each game appears here twice, once from the view of each team; "Home" refers to the home team. We use the fixed-effects within model to estimate the home advantage over time (relative to the first season in our sample), controlling for possible time/season effects.

rm(list=ls(all=TRUE))
library(plm)
bundesliga<-read.csv("https://drive.google.com/uc?export=download&id=0B70aDwYo0zuGRGxVV1p2MTlqaUk")
head(bundesliga)

> head(bundesliga)
   Season Round                Team             Opponent Home Goals_Diff
1 2013/14     1      Bayern München Bor. Mönchengladbach    H          2
2 2013/14     1     1899 Hoffenheim       1. FC Nürnberg    H          0
3 2013/14     1 Bayer 04 Leverkusen          SC Freiburg    H          2
4 2013/14     1         Hannover 96        VfL Wolfsburg    H          2
5 2013/14     1         FC Augsburg    Borussia Dortmund    H         -4
6 2013/14     1          Hertha BSC  Eintracht Frankfurt    H          5

# Create time index
bundesliga$Index<-as.numeric(as.factor(bundesliga$Season))*100+bundesliga$Round
# Declare panel data
bl_panel<-pdata.frame(bundesliga,c("Team","Index"))
# Run regression
summary(plm(Goals_Diff~Home*Season,data=bl_panel,model = "within"))

Here are the results. The home advantage is strong and corresponds to a goal difference of 0.67 goals in the first season; there are no statistical differences in home advantage in the subsequent two seasons relative to the first season (and no discernible season effects):

> summary(plm(Goals_Diff~Home*Season,data=bl_panel,model = "within"))
Oneway (individual) effect Within Model

Call:
plm(formula = Goals_Diff ~ Home * Season, data = bl_panel, model = "within")

Unbalanced Panel: n=22, T=34-102, N=1836

Residuals :
   Min. 1st Qu.  Median 3rd Qu.    Max.
-7.0400 -1.1700 -0.0185  1.1300  5.8300

Coefficients :
                     Estimate Std. Error t-value  Pr(>|t|)
HomeH                0.673203   0.143947  4.6768 3.131e-06 ***
Season2014/15       -0.132377   0.147800 -0.8957    0.3706
Season2015/16       -0.057980   0.149638 -0.3875    0.6985
HomeH:Season2014/15  0.169935   0.203571  0.8348    0.4040
HomeH:Season2015/16 -0.071895   0.203571 -0.3532    0.7240
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Total Sum of Squares:    5970.7
Residual Sum of Squares: 5735
R-Squared     : 0.039484
Adj. R-Squared: 0.038904
F-statistic: 14.8726 on 5 and 1809 DF, p-value: 2.5075e-14

There are in total N=1,836 observations, 612 for each of the three seasons. We observe n=22 unique teams. T=34-102 means that there are teams which we observe for only one season (34 matches), while others appear in all three seasons (102 games).

To assess how balanced your sample is, you can look at the number of seasons per team, which shows that most teams are observed for all three seasons. You can calculate $N=1836=(15*3+2*2+5*1)*34$ and $n=15+2+5$.

> table(table(bl_panel$Team)/34)

 1  2  3
 5  2 15
51,098
How to select the best classification scheme in survival analysis (SurvivalROC, R2, Concordance, AIC)?
I figure the schemas' groups do not come with an absolute probability of an event in any given period. In that case you care about the ranking, which leads to focusing on the AUC or the concordance among the options you have given. The concordance measure is a generalization of the AUC to time-to-event outcomes. So unless you really care about a specific point in time, I would say you use the concordance measure rather than an AUC at a fixed point in time. You can get a concordance measure for either of your models with survival::survConcordance after having changed your schema outcome into a numeric variable with the given ordinal scale and putting it on the right-hand side of the ~ in the formula argument. Caveat: my assumption is that you are not doing the modelling but are given models/schemas which you have to compare, where each model/schema gives you an ordinal grouping of risk for each person. However, if you do the modelling, then I agree with @Frank Harrell's comment: "Reject the ordinal groups. Start over. Inappropriate."
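A minimal sketch of that survConcordance call (my illustration; the data frame dat with columns time and status and the recoded numeric schema scores schemaA_num and schemaB_num are hypothetical; current versions of the survival package offer the same thing via concordance(), and the direction convention is documented on the help page):

library(survival)

# schemaA_num, schemaB_num: the ordinal risk groups recoded as numbers (assumed columns of dat)
cA <- survConcordance(Surv(time, status) ~ schemaA_num, data = dat)
cB <- survConcordance(Surv(time, status) ~ schemaB_num, data = dat)
c(A = cA$concordance, B = cB$concordance)   # compare the rankings; also inspect cA$std.err, cB$std.err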
51,099
What are the 2nd derivatives of the log multivariate normal density?
Alright, the negative information matrix for $L(\mu,K)$ is $$\frac{\partial^2 L}{\partial\mu\,\partial\mu'} = -\left(\frac{1}{N}K\right)^{-1}$$ $$\frac{\partial^2 L}{\partial K\partial \mu} = 0$$ $\frac{\partial^2 L}{\partial K\, \partial K'}$, in a more general formulation, is given at http://en.wikipedia.org/wiki/Fisher_information#Multivariate_normal_distribution Thanks to Mike Hunter for finding it.
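As a sanity check on the $\mu$ block (my derivation, assuming $N$ i.i.d. observations $x_1,\dots,x_N$ and log-likelihood $L(\mu,K) = -\tfrac{N}{2}\log|2\pi K| - \tfrac{1}{2}\sum_i (x_i-\mu)'K^{-1}(x_i-\mu)$): $$\frac{\partial L}{\partial \mu} = K^{-1}\sum_{i=1}^N (x_i-\mu), \qquad \frac{\partial^2 L}{\partial \mu\,\partial \mu'} = -N K^{-1} = -\left(\tfrac{1}{N}K\right)^{-1},$$ which matches the block above; the cross term $\partial^2 L/\partial K\,\partial\mu$ involves $\sum_i(x_i-\mu)$ and therefore has expectation zero.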
51,100
Dealing with correlating fixed effects in a linear mixed-effects analysis
I'm not sure if this is what Baayen meant, but one advantage of the model comparison approach is that you get to see the effect of adding pupil size on the other parameters, the model fit statistics and so on. This isn't specific to multilevel models; people often build up models, first fitting a simple model and then a more complex one, to see these changes.
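In lme4 terms, that kind of build-up might look like the sketch below (my example; dat, rt, condition, pupil and subject are hypothetical names):

library(lme4)

m1 <- lmer(rt ~ condition + (1 | subject), data = dat, REML = FALSE)
m2 <- lmer(rt ~ condition + pupil + (1 | subject), data = dat, REML = FALSE)

anova(m1, m2)   # likelihood-ratio test / AIC comparison for adding pupil size
summary(m2)     # see how the other fixed-effect estimates change once pupil size is in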