idx | question | answer
---|---|---
48,901 |
Robust regression or ANOVA for non-normal dependent variable
|
First a comment: "robust" usually refers to approaches guarding against outliers and violations of distributional assumptions. In your case, the problem is obviously a violation of a distributional assumption, but it seems to depend on your DV (sorry for the pun).
Which method to use depends on whether 100 is truly the highest possible value of your DV, or whether your DV measures an unobserved variable that has a latent distribution with possibly unbounded values.
For illustration of the "latent variable" concept: On a cognitive test, you want to measure "intelligence", but you only observe if someone solves a question. So if some people solve all the questions, you do not know if these people all have the same intelligence or if there is still some variance in their intelligence scores.
If your DV is of the second kind, you could use tobit regression.
It's more difficult if your DV is really of the first kind, that is, if 100 is truly the highest score that could ever be measured.
And BTW, even with the "right" kind of approach you might still end up with a non-significant interaction.
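To make the tobit suggestion above concrete, here is a minimal R sketch. The data frame d, the capped DV score, and the predictors group and time are hypothetical placeholders; the tobit() function in the AER package (a wrapper around survival::survreg) is one readily available implementation, not something the answer prescribes.
library(AER)                    # provides tobit(), a wrapper around survival::survreg()
fit <- tobit(score ~ group * time,
             left  = -Inf,      # no lower censoring assumed
             right = 100,       # ceiling of the scale
             data  = d)         # 'd' is a placeholder data frame
summary(fit)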
|
48,902 |
Robust regression or ANOVA for non-normal dependent variable
|
Rank-based tests work by transforming the data to a uniform distribution and then relying on the central limit theorem to justify approximate normality (the CLT kicks in for the uniform around n = 5 or 6); this helps counter the effects of skewness or outliers. Your data have the opposite problem, and the rank transform is unlikely to help (the 100s will still all be ties in the ranks). For your sample size and the restrictions on the data, the normal-theory tests are probably fine due to the CLT. I would be more concerned about unequal variances if some combinations have only 100s or mostly 100s.
If you really want to, you could do a permutation test, but I doubt it will tell you much more than what you have already done; possibly a statistic based on medians rather than the F-statistic may help.
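For reference, a permutation test of the kind mentioned above takes only a few lines of R. This sketch assumes a hypothetical data frame d with the response score and a grouping factor group, and uses the one-way F-statistic; a median-based statistic could be substituted.
set.seed(1)
obs_F <- anova(lm(score ~ group, data = d))["group", "F value"]
perm_F <- replicate(4999, {
  d$group_perm <- sample(d$group)   # shuffle the group labels
  anova(lm(score ~ group_perm, data = d))["group_perm", "F value"]
})
mean(c(obs_F, perm_F) >= obs_F)     # permutation p-value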
|
48,903 |
Robust regression or ANOVA for non-normal dependent variable
|
Without knowing what the data are really about, it's hard to say. One potential, very general solution is to consider that 100 isn't really 100 (sometimes). What to do with that is what you need to work out. You need to come up with a model of what other values the 100s might stand for. Would some people have wanted to pick 1000? 110? 99.9? Or was it just a garbage answer? If you can work that out, then you can either throw data away or jitter it in log or linear space. You could add random noise to the 100s, do it repeatedly, and see if outcomes are still relatively consistent across your conditions.
But without much more information it's hard to help. I hope that I've given you some things to think about.
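One way to script the "jitter repeatedly and check consistency" idea in R, assuming a hypothetical data frame d with score and group and an arbitrary noise model for the capped values:
set.seed(1)
pvals <- replicate(200, {
  d2 <- d
  at_cap <- d2$score == 100
  d2$score[at_cap] <- 100 + abs(rnorm(sum(at_cap), sd = 5))  # one possible noise model
  anova(lm(score ~ group, data = d2))["group", "Pr(>F)"]
})
summary(pvals)   # are the conclusions stable across jittered datasets?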
|
48,904 |
Incorporating random effects in the logistic regression formula in R
|
The short answer is you can't - well, not without recoding a version of stepAIC() that knows how to handle S4 objects. stepAIC() knows nothing about lmer() and glmer() models, and there is no equivalent code in lme4 that will allow you to do this sort of stepping.
I also think your whole process needs careful rethinking - why should there be one best model? AIC could be used to identify several candidate models that do similar jobs and average those models, rather than trying to find the best model for your sample of data.
Selection via AIC is effectively doing multiple testing - but how should you correct the AIC to take into account the fact that you are doing all this testing? How do you interpret the precision of the coefficients for the final model you might select?
A final point: don't do all the as.factor() calls in the model formula, as it just makes the whole thing a mess, takes up a lot of space, and doesn't aid understanding of the model you fitted. Get the data in the correct format first, then fit the model, e.g.:
RShifting <- transform(RShifting,
                       Age   = as.factor(Age),
                       Educ  = as.factor(Educ),
                       Child = as.factor(Child))
then
library(lme4)   # glmer() lives in lme4
glmer(decision ~ Age + Educ + Child + (1 | town), family = binomial,
      data = RShifting)
Apart from making things far more readable, it separates the tasks of data processing from the data analysis steps.
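If you want to pursue the multimodel-averaging idea mentioned above, one possible route (an assumption on my part, not something this answer prescribes) is the MuMIn package. A rough sketch, reusing the glmer model from the code above:
library(MuMIn)
options(na.action = "na.fail")                  # required by dredge()
full  <- glmer(decision ~ Age + Educ + Child + (1 | town),
               family = binomial, data = RShifting)
cands <- dredge(full)                           # all submodels, ranked by AICc
avg   <- model.avg(cands, subset = delta < 4)   # average the plausible set
summary(avg)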
|
48,905 |
Heckman sample selection
|
The answer is yes: you do not need to use the parameters of the inverse Mills ratios. But you must include the ratios in the regression nevertheless, or your other parameters will be biased.
According to the article, yes. Although if different variables are statistically significant in the different regressions, there is no problem. Just assume that the coefficients for the non-significant regressors are zero.
Splitting is perfectly reasonable. Since you are fitting two models, one for the decision whether to go to college or not and another for log-earnings, it is perfectly reasonable to assume that different variables will be important. I would investigate this further though; high multicollinearity when using the same variables in the probit and OLS regressions is not a standard feature of the Heckman model as far as I know.
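For illustration, a manual two-step Heckman sketch in base R. All names are hypothetical: college (selection indicator), z1/z2 (selection covariates), x1/x2 (outcome covariates) and logwage (outcome observed only when college == 1), held in a data frame d.
# Step 1: probit selection equation on the full sample
sel   <- glm(college ~ z1 + z2, family = binomial(link = "probit"), data = d)
xb    <- predict(sel, type = "link")
d$imr <- dnorm(xb) / pnorm(xb)     # inverse Mills ratio
# Step 2: outcome equation on the selected sample, with the IMR included as a regressor
out <- lm(logwage ~ x1 + x2 + imr, data = subset(d, college == 1))
summary(out)   # note: these OLS standard errors ignore the first-step estimation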
|
48,906 |
One-inflated negative binomial?
|
Sure. You can write
$Pr(X=x) = p*f(x) + (1-p)1(x=1)$
where $f$ is the NB pmf, $1()$ is just an indicator function and $p$ is some probability. Of course in general the $x=1$ could be $x=k$ for any integer $k$, and $f$ could be any pmf.
In principle it shouldn't be harder to fit than a ZI model, but an off-the-shelf model-fitting solution may not exist.
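As an illustration of the "no off-the-shelf solution needed" point, the one-inflated NB likelihood can be maximised directly with optim(). A sketch, assuming a hypothetical vector of counts x:
nll <- function(par, x) {
  p    <- plogis(par[1])   # weight of the NB component
  size <- exp(par[2])      # NB dispersion
  mu   <- exp(par[3])      # NB mean
  pmf  <- p * dnbinom(x, size = size, mu = mu) + (1 - p) * (x == 1)
  -sum(log(pmf))
}
fit <- optim(c(0, 0, 0), nll, x = x, method = "BFGS")
c(p = plogis(fit$par[1]), size = exp(fit$par[2]), mu = exp(fit$par[3]))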
|
48,907 |
Given two responses for two groups, how to decide what to test on response or profit?
|
The reason why you are conducting this test is to determine which policy is more valuable, and if value is measured in profitability, then it makes no sense to do statistical testing on any other variable. A properly conducted test on profitability gives you all the information needed for your company's decision: once you have the results of this test, results about other variables (e.g. response rate) provide no decision-relevant information.
|
48,908 |
Calculation of natural cubic splines in R
|
Wikipedia has a nice explanation of spline interpolation.
I posted the code to create cubic Bezier splines on Rosettacode a while ago.
Also, you can have a look at this discussion on SO about spline extrapolation.
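For a self-contained starting point, natural cubic splines are available in base R; a small sketch with made-up data:
x <- 1:10
y <- sin(x)
# interpolation with a natural cubic spline
sf <- splinefun(x, y, method = "natural")
sf(c(2.5, 7.25))
# natural cubic spline basis for regression (the splines package ships with R)
library(splines)
fit <- lm(y ~ ns(x, df = 4))
predict(fit, newdata = data.frame(x = seq(1, 10, by = 0.5)))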
|
48,909 |
Calculation of natural cubic splines in R
|
I learnt about the use of splines in regression from the book "Regression Modeling Strategies" by Frank Harrell. Harrell's R package rms allows you to easily fit regression models in which some predictor variables are represented as splines.
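A hedged sketch of what such a fit might look like with rms, assuming a hypothetical data frame d with response y and predictor x; rcs() gives a restricted cubic spline basis:
library(rms)
dd <- datadist(d); options(datadist = "dd")   # housekeeping rms needs for Predict/plot
fit <- ols(y ~ rcs(x, 4), data = d)           # restricted cubic spline with 4 knots
fit
plot(Predict(fit, x))                         # fitted curve with confidence bands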
|
48,910 |
Pivotal quantities, test statistics and hypothesis tests
|
The first thing you should do is challenge your lecturer to explain these things clearly. If anything whatsoever seems counter-intuitive or backwards, then demand that he/she explains why it is intuitive. Statistics always makes sense if you think about it in the "right" way.
Calculating pivotal quantities is a very tricky business - I completely understand your bewilderment in "where should I start?"
For normal variance parameters, the "pivotal quantity" is the sum of squares divided by the variance parameter:
$$T_{X}=\sum_{i=1}^{N}\Big(\frac{X_{i}-\mu_{X}}{\sigma_{X}}\Big)^{2} \sim \chi^{2}(N)$$
And a similar expression for $T_{Y}$. Note that the distribution only depends on $N$, which is known (if $\mu_{X}$ is unknown, replace by $\overline{X}$ and you lose one degree of freedom in the chi-square distribution). Thus $\frac{T_{X}}{T_{Y}}$ is a pivotal quantity, which has a value of:
$$\frac{\sum_{i=1}^{N}\Big(\frac{X_{i}-\mu_{X}}{\sigma_{X}}\Big)^{2}}{\sum_{i=1}^{N}\Big(\frac{Y_{i}-\mu_{Y}}{\sigma_{Y}}\Big)^{2}}$$
Note that because it is a pivotal quantity, we can create an exact confidence interval using the pivot as a starting point, and then substituting in our statistic. Now because the degrees of freedom are the same for each chi-square, we do indeed have an F distribution. So you can write:
$$1-\alpha=Pr(L < F < U)=Pr(L < \frac{T_{X}}{T_{Y}} < U)$$
$$1-\alpha=Pr(L < \frac{\sum_{i=1}^{N}\Big(\frac{X_{i}-\mu_{X}}{\sigma_{X}}\Big)^{2}}{\sum_{i=1}^{N}\Big(\frac{Y_{i}-\mu_{Y}}{\sigma_{Y}}\Big)^{2}} < U)$$
$$1-\alpha=Pr(L < \frac{\sigma_{Y}^{2}}{\sigma_{X}^{2}}\frac{\sum_{i=1}^{N}(X_{i}-\mu_{X})^{2}}{\sum_{i=1}^{N}(Y_{i}-\mu_{Y})^{2}} < U)$$
Writing the observed ratio of the sum of squares as $R$ we get:
$$1-\alpha=Pr(L < \frac{\sigma_{Y}^{2}}{\sigma_{X}^{2}} R < U)$$
$$1-\alpha=Pr(\frac{L}{R} < \frac{\sigma_{Y}^{2}}{\sigma_{X}^{2}} < \frac{U}{R})$$
As for how this solution comes about, I have absolutely no idea. What "principles" were followed (apart from being good at rearranging statistical expressions)?
One thing that I can think of is that you need to find some way to "standardise" your sampling distribution. So, for example, for normals you subtract the mean and divide by the standard deviation; for the gamma you rescale by the scale parameter. I don't know many pivotal quantities that exist outside of the normal and gamma families.
I think this is one reason why ordinary "sampling statistics" is an art more than a science, because you have to use your intuition about what statistics to try. And then you have to try and figure out whether you can standardise your data.
I am almost certain your lecturer will bring up the subject of confidence intervals - be sure to ask him/her what you should do when you only have one sample, or when you have 2 or more nuisance parameters. :)
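Numerically, the final interval is easy to compute. A base-R sketch, assuming hypothetical samples x and y of common size N with known means mu_x and mu_y:
N     <- length(x)
R     <- sum((x - mu_x)^2) / sum((y - mu_y)^2)   # observed ratio of the sums of squares
alpha <- 0.05
L <- qf(alpha / 2, df1 = N, df2 = N)
U <- qf(1 - alpha / 2, df1 = N, df2 = N)
c(lower = L / R, upper = U / R)                  # CI for sigma_Y^2 / sigma_X^2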
|
48,911 |
Calculating predicted values from categorical predictors in logistic regression
|
My initial thought would have been to display the probability of acceptance as a function of relative GPA for each of your four schools, using some kind of trellis display. In this case, facetting should do the job well since the number of schools is not so large. This is very easy to do with lattice (y ~ gpa | school) or ggplot2 (facet_grid(. ~ school)). In fact, you can choose the conditioning variable you want: this can be school, but also situation at the undergrad institution. In the latter case, you'll have 4 curves in each plot, and three plots of Prob(admitting) ~ GPA.
Now, if you are looking for effective displays of effects in GLMs, I would recommend the effects package, from John Fox. Currently, it works with binomial and multinomial links, and ordinal logistic models. Marginalizing over other covariates is handled internally, so you don't have to bother with that. There are a lot of illustrations in the on-line help, see help(effect). But, for a more thorough overview of effect displays in GLMs, please refer to
Fox (2003). Effect Displays in R for Generalised Linear Models. JSS 8(15).
Fox and Andersen (2004). Effect displays for multinomial and proportional-odds logit models. ASA Methodology Conference -- Here is the corresponding JSS paper
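As a concrete illustration of the effect displays mentioned above, assuming a hypothetical fitted model m of the form glm(admit ~ gpa + school, family = binomial, data = d):
library(effects)
plot(allEffects(m))         # effect displays for all terms, on the probability scale
plot(effect("school", m))   # display for a single categorical predictor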
|
48,912 |
Interpreting significance of predictor vs significance of predictor coeffs in multinomial logistic regression
|
The p-value itself cannot tell you how strong the relationship is, because the p-value is so influenced by sample size, among other things. But assuming your N is something on the order of 100-150, I'd say there's a reasonably strong effect involving Size whereby as Size increases, the log of the odds of Y being 1 is notably different from the log of the odds of Y being 0. As you indicate, the same cannot be said of the comparison of Y values of -1 and 0.
You are right in viewing all of this as somewhat invalidated by the overall nonsignificance of Size (depending on your alpha, or criterion for significance). You wouldn't get too many arguments if you simply declared Size a nonfactor due to its high p-value. But then again, if your N is sufficiently small--perhaps below 80 or 100--then your design affords low power for detecting effects, and you might make a case for taking seriously the specific effect that managed to show up anyway.
A way around the problem of relying on p-values involves two steps. First, decide what range of odds ratios would constitute an effect worth bothering with, or worth calling substantial. (The trick there is in being facile enough with odds to recognize what they mean for the more intuitive metric of probability.) Then construct a confidence interval for the odds ratio associated with each coefficient and consider it in light of your hypothetical range. Regardless of statistical significance, does the effect have practical significance?
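A sketch of that second step in R, assuming a hypothetical multinomial fit m from nnet::multinom with a numeric predictor Size:
est <- summary(m)$coefficients[, "Size"]       # log-odds coefficients, one row per contrast
se  <- summary(m)$standard.errors[, "Size"]
exp(cbind(OR = est, lower = est - 1.96 * se, upper = est + 1.96 * se))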
|
48,913 |
Hellwig's method of selection of variables
|
After spending too long on web research, I'm pretty sure the source of 'Hellwig's method' is:
Hellwig, Zdzisław. On the optimal choice of predictors. Study VI in Z. Gostkowski (ed.): Toward a system of quantitative indicators of components of human resources development; Paris: UNESCO, 1968; 23 pages. [pdf]
Google Scholar finds 3 papers that have cited it. None of them appear particularly noteworthy. So I think the answer to your first question is 'No'. As for your second question, I'll leave you to study the paper as I've spent far too long on this already. But from a skim, it appears the motivation behind his method was to avoid calculations that were very tedious without an electronic computer:
"... generally speaking one has to compute $2^n-1$ times the inverse matrices, which is of course an extremely dull perspective. The method we are going to present in this paper does not require finding inverse matrices." (p3-4).
Biographical note:
A little further googling reveals Zdzisław Hellwig was born on 26 May 1925 in Dobrzyca, Poland, and was for many years professor of statistics at the Wrocław University of Economics. There was a scientific meeting to honor his 85th birthday in November 2010.
|
48,914 |
Any good reference books/material to help me build a txn level fraud detection model?
|
Fraud detection is a rare-class problem. Chapter Six of Charles Elkan's Notes for his Graduate Course in Data Mining and Predictive Analytics at UCSD walks you through the prediction of a rare class, and the pitfalls and proper ways to evaluate the success of such a model. The methods he specifically uses are isotonic and univariate logistic regression. The software he uses in the class is Rapidminer, but I prefer R. If you choose to use R, you can perform both of these functions using the isoreg and glm functions. Many people also like to use SVMs in fraud detection, but part of the model selection criterion should be the speed with which you need to validate the transactions. If, for example, it's the swipe of a credit card, SVMs are wholly infeasible because they will take far too long to process. This is why, in production environments, variants on regression models are typically used for fraud detection.
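For orientation, the two R functions mentioned can be used roughly as follows, on a hypothetical transaction data frame d with a 0/1 label fraud and a numeric feature amount:
# logistic regression
m <- glm(fraud ~ amount, family = binomial, data = d)
p <- predict(m, type = "response")   # predicted fraud probabilities
# isotonic regression of the label on a single score (e.g. for calibration)
o  <- order(d$amount)
ir <- isoreg(d$amount[o], d$fraud[o])
plot(ir)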
|
48,915 |
Splitting one variable according to bins from another variable
|
> A <- round(rnorm(100, 100, 15), 2) # generate some data
> age <- sample(18:65, 100, replace=TRUE)
> sex <- factor(sample(0:1, 100, replace=TRUE), labels=c("f", "m"))
# 1) bin age into 4 groups of similar size
> ageFac <- cut(age, breaks=quantile(age, probs=seq(from=0, to=1, by=0.25)),
+ include.lowest=TRUE)
> head(ageFac)
[1] (26,36.5] (26,36.5] (36.5,47] [18,26] [18,26] [18,26]
Levels: [18,26] (26,36.5] (36.5,47] (47,65]
> table(ageFac) # check group size
ageFac
[18,26] (26,36.5] (36.5,47] (47,65]
27 23 26 24
# 2) test continuous DV in age-groups
> anova(lm(A ~ ageFac))
Analysis of Variance Table
Response: A
Df Sum Sq Mean Sq F value Pr(>F)
ageFac 3 15.8 5.272 0.0229 0.9953
Residuals 96 22099.2 230.200
# 3) chi^2-test for equal distributions of sex in age-groups
> addmargins(table(sex, ageFac))
ageFac
sex [18,26] (26,36.5] (36.5,47] (47,65] Sum
f 11 10 12 11 44
m 16 13 14 13 56
Sum 27 23 26 24 100
> chisq.test(table(sex, ageFac))
Pearson's Chi-squared test
data: table(sex, ageFac)
X-squared = 0.2006, df = 3, p-value = 0.9775
|
48,916 |
Correct way to calibrate means
|
There is something called "small error propagation", and it says that the error of a function $f$ of variables $x_1,x_2,\cdots,x_n$ with errors $\Delta x_1,\Delta x_2,\cdots,\Delta x_n$ equals
$$\Delta f=\sqrt{\sum_i\left(\frac{\partial f}{\partial x_i}\Delta x_i\right)^2},$$
so for $f(a,b):=a-b$ the error is $\Delta f=\sqrt{\Delta a^2+\Delta b^2}$.
So, subtract the means and report this Euclidean length of the errors as the final error.
|
48,917 |
Saturation in ARIMA (et al) models?
|
If Y is customer demand, then you are observing X = min(Y, 1000) due to resource constraints. The actual Y could be larger, but you never observe it. So if you fit a time series model to X, you can set the forecasts to min(F, 1000), where F is the forecast from the time series model. I don't think there is a need to do anything more fancy than that.
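One possible implementation with the forecast package (an assumption, not something the answer specifies), for a univariate series x of capacity-capped demand:
library(forecast)
fit <- auto.arima(x)
fc  <- forecast(fit, h = 12)
pmin(as.numeric(fc$mean), 1000)   # truncate the point forecasts at the capacity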
|
48,918 |
Saturation in ARIMA (et al) models?
|
Perhaps your ARIMA model needs to be tempered with identifiable deterministic structure such as "changes in intercept" or "changes in trend". These models would then be classified as robust ARIMA models or Transfer Function models. If there is a true limiting value, then the data might suggest that as the series grows towards that limit. Good analysis might lead to a forecast that reaches its zenith at that limit. It might help the list if you actually posted one of these troublesome series.
|
48,919 |
Data mining algorithm suggestion
|
Your first task is to find a reasonable model relating an outcome $Y$ to the sequence of workouts that preceded it. One might start by supposing that the outcome depends quite generally on a linear combination of time-weighted workout efforts $X$, but such a model would be unidentifiable (from having more parameters than data points). One popular simplification is to suppose that the "influence" of a workout at time $t$ on the outcome at time $s$ is
a. proportional to the intensity of the workout,
b. reduced exponentially over time, that is, by a factor of $\exp(-\theta(s-t))$ for some unknown decay rate $\theta$, and
c. added independently to the influences of all other workouts preceding time $t$.
Of course we must be prepared to allow some deviation between the actual outcome and that predicted by the model; it is natural to model that deviation as a set of independent random variables of zero mean.
This leads to a formal model which can serve as a useful point of departure for EDA. To write it down, let the times be $t_1 \lt t_2 \lt \ldots \lt t_n$ with corresponding workout intensities $x_1, x_2, \ldots, x_n$ and let the outcomes be measured at times $s_1 \lt s_2 \lt \ldots \lt s_m$ with values $y_1, \ldots, y_m$, respectively. The model is
$$y_j =\alpha + \beta \exp(-\theta(s_j - t_{k})) \left( x_{k} + \exp(-\theta \Delta_{k,k-1}) x_{k-1} + \cdots + \exp(-\theta \Delta_{k, 1})x_1 \right) + \epsilon_j$$
where $\alpha$ and $\beta$ are coefficients in a linear relation, $k$ is the index of the most recent workout preceding time $s_j$, $\Delta_{i,j} = t_i - t_j$ is the time elapsed between the $i^\text{th}$ and $j^\text{th}$ workouts, and the $\epsilon_j$ are independent random variables with zero expectations.
This can get messy when workouts and endpoint measurements are unevenly spaced. If to a good approximation the spacing between a workout and the next measurement is constant (say, a time difference of $s$) and--as an expository simplification--if each workout is followed by a measurement (so that $m = n$), then this model suggests some useful EDA procedures. As an abbreviation, let's write (somewhat loosely)
$$f_k(x,t,\theta) = \left( x_{k} + \exp(-\theta \Delta_{k,k-1}) x_{k-1} + \cdots + \exp(-\theta \Delta_{k, 1})x_1 \right)$$
for the weighted sum of the workouts up to and including the $k^\text{th}$ one, whence
$$y_k = \alpha + \gamma f_k(x,t,\theta) + \epsilon_k$$
where $\gamma = \beta \exp(-\theta s)$. Note that this formulation accommodates any irregular time sequence of workouts, so it's not grossly oversimplified.
What you want to know is whether this makes sense: do the data at all behave like this? We're really asking about the possibility of a linear relationship between the $x$'s and the $y$'s. We need that to hold for at least one decay constant $\theta$ with a reasonable value.
One way to check is to note there is a relatively simple relationship between successive terms $f_{k+1}$ and $f_k$; you let $f_k$ decay for additional time $t_{k+1} - t_k$ and add $x_{k+1}$ to it:
$$f_{k+1}(x,t,\theta) = x_{k+1} + \exp(-\theta \Delta_{k+1,k}) f_k(x,t,\theta).$$
(This formula, by the way, provides an efficient way to compute all the $f_k$ by starting at $f_1 = x_1$ and continuing recursively for $k = 2, 3, \ldots, n$--a simple spreadsheet formula. It is a generalization of the weighted running averages used extensively in financial analysis.)
Equivalently, we can isolate $x_{k+1}$ by subtracting the right-hand term. This suggests exploring the relationship between the adjusted values $z_k = y_{k+1} - \exp(-\theta \Delta_{k+1,k})y_k$ and the workouts $x_{k+1}$, because
$$z_k = (1 - \exp(-\theta \Delta_{k+1,k}))\alpha + \gamma x_{k+1} + \left(\epsilon_{k+1} - \exp(-\theta \Delta_{k+1,k})\epsilon_k\right).$$
If the workouts are approximately regularly spaced, so that $\Delta_{k+1,k}$ is roughly constant, then for any fixed value of $\theta$ this expression is in the form
$$z = \text{ constant + constant *} x + \text{ error}.$$
The error terms will be positively correlated (in pairs) but still unbiased. It is now clear how to check linearity: Pick a trial value for $\theta$, compute the $z$'s (which depend on it), make a scatterplot of the $z$'s versus the $x$'s, and look for linearity. Vary $\theta$ interactively to search for linear-looking scatterplots. If you can produce one, you already have a reasonable estimate of $\theta$. You can then estimate the other parameters ($\alpha$ and $\beta$) if you like. If you cannot produce a linear scatterplot, use standard EDA techniques to re-express the variables (the $x$'s and $y$'s) until you can. Look for a value of $\theta$ that minimizes the typical sizes of the residuals: that is a rough estimate of the decay rate.
I don't expect this method to be highly stable: there is likely a wide range of values of $\theta$ that will induce linearity and relatively small residuals in the scatterplot. But that is something for you to find out. If you discover that only a narrow range of values accomplishes this, then you can have confidence that the decay effect is there and can be estimated. Use maximum likelihood; it will be convenient to suppose the $\epsilon$'s are normally distributed. (The profile likelihood, with $\theta$ fixed, is an ordinary least squares problem, so it will be easy to fit this model. Alternatively you could try fitting the relationship between $z$ and $x$ directly using generalized least squares, but I think that would be trickier to implement.)
This all might sound complicated but it's actually quite simple. You could set up a spreadsheet in which $\theta$ is the value in a cell, add a $\theta$-varying control to a scatterplot of $x$ and $z$ (computed in the spreadsheet from a column of $t$ values and a column of $y$ values), and simply adjust the control to straighten out the plot.
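The same exploration can be scripted in R rather than a spreadsheet. A sketch, assuming hypothetical vectors t (workout times), x (intensities) and y (one measurement per workout, so m = n):
# the recursion f_k = x_k + exp(-theta * (t_k - t_{k-1})) * f_{k-1}
cumulative_load <- function(x, t, theta) {
  f <- numeric(length(x)); f[1] <- x[1]
  for (k in 2:length(x))
    f[k] <- x[k] + exp(-theta * (t[k] - t[k - 1])) * f[k - 1]
  f
}
# residual sum of squares of the straight-line fit of z on x for a given theta
resid_size <- function(theta, x, t, y) {
  n <- length(y)
  z <- y[-1] - exp(-theta * diff(t)) * y[-n]   # adjusted responses z_k
  sum(resid(lm(z ~ x[-1]))^2)
}
thetas <- seq(0.01, 2, by = 0.01)
rss    <- sapply(thetas, resid_size, x = x, t = t, y = y)
thetas[which.min(rss)]   # rough estimate of the decay rate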
It will be harder to explore a dataset in which there are fewer (or more) measurements $y_j$ than there are workouts $x_k$ or where the temporal spacing between workouts and measurements varies a lot. You might have to settle for maximum likelihood solutions alone, without the benefit of supporting graphics to verify the reasonableness and the adequacy of this model a priori.
Even if my assumptions do not agree with your situation in all details, I hope that this discussion at least suggests effective approaches for furthering your investigation.
|
48,920 |
Coefficient / model averaging to control for exogenous circumstances in prediction
|
I am not sure you need any special tricks as long as the relevant factors are captured by the model. To keep things simple I will discuss the issue in the context of linear regression. The same intuition carries over to the time series setting.
Suppose that you want to predict the monthly sales of cell phones for brand X and to illustrate the ideas suppose that you use linear regression to do so. The issue you have is that there are some factors that impact sales every month (e.g., weather of the month to take a silly example) and some factors that are idiosyncratic to a month (e.g., launch of a new cell phone). You wish to account for both factors consistently.
Denote the factors that impact sales every month by $C_m$ (where 'C' stands for common and 'm' for month) and idiosyncratic factors by $I_m$ (where I stands for idiosyncratic and m for month). Then, your model would be:
$S_m = \beta_c C_m + \beta_i I_m + \epsilon$
In the months where there are no idiosyncratic factors, $I_m$ would be 0 and hence the impact on sales would be captured by $\beta_c$; in the months when you do have idiosyncratic factors, the impact on sales would be $\beta_c + \beta_i$, with $\beta_i$ presumably being negative if the idiosyncratic factor dampens sales for brand X. Thus, the coefficients 'automatically' adjust themselves provided all the necessary factors are present in the model.
Thus, it seems to me you do not have to worry about the issue you raise if you do account for all factors whether they are idiosyncratic to a time period or common across time periods.
Edit in response to comment
Your example of concert and increased cell phone usage by young consumers is an example of an interaction effect. To put it in different words your example is saying that the effect of the concert on cell phone use is higher for younger consumers rather than older consumers. Basically, your model is saying the following:
Minutes = beta1 * age + beta2 * concert + beta3 * age * concert + error
Thus, the beta3 parameter will capture the differential impact, i.e., how differently younger and older consumers respond to the concert.
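As a minimal illustration of fitting that interaction model in R (the data frame phone and its columns minutes, age and concert are hypothetical, not from the question):
fit <- lm(minutes ~ age * concert, data = phone)   # expands to age + concert + age:concert
summary(fit)   # the age:concert row is the beta3 term described above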
|
Coefficient / model averaging to control for exogenous circumstances in prediction
|
I am not sure you need any special tricks as long as the relevant factors are captured by the model. To keep things simple I will discuss the issue in the context of linear regression. The same intuit
|
Coefficient / model averaging to control for exogenous circumstances in prediction
I am not sure you need any special tricks as long as the relevant factors are captured by the model. To keep things simple I will discuss the issue in the context of linear regression. The same intuition carries over to the time series setting.
Suppose that you want to predict the monthly sales of cell phones for brand X and to illustrate the ideas suppose that you use linear regression to do so. The issue you have is that there are some factors that impact sales every month (e.g., weather of the month to take a silly example) and some factors that are idiosyncratic to a month (e.g., launch of a new cell phone). You wish to account for both factors consistently.
Denote the factors that impact sales every month by $C_m$ (where 'C' stands for common and 'm' for month) and idiosyncratic factors by $I_m$ (where I stands for idiosyncratic and m for month). Then, your model would be:
$S_m = \beta_c C_m + \beta_i I_m + \epsilon$
In the months where there are no idiosyncratic factors, $I_m$ would be 0 and hence the impact on sales would be captured by $\beta_c$; in the months when you do have idiosyncratic factors, the impact on sales would be $\beta_c + \beta_i$, with $\beta_i$ presumably being negative if the idiosyncratic factor dampens sales for brand X. Thus, the coefficients 'automatically' adjust themselves provided all the necessary factors are present in the model.
Thus, it seems to me you do not have to worry about the issue you raise if you do account for all factors whether they are idiosyncratic to a time period or common across time periods.
Edit in response to comment
Your example of concert and increased cell phone usage by young consumers is an example of an interaction effect. To put it in different words your example is saying that the effect of the concert on cell phone use is higher for younger consumers rather than older consumers. Basically, your model is saying the following:
Minutes = beta1 * age + beta2 * concert + beta3 * age * concert + error
Thus, the beta3 parameter will capture the differential impact, i.e., how differently younger and older consumers respond to the concert.
|
Coefficient / model averaging to control for exogenous circumstances in prediction
I am not sure you need any special tricks as long as the relevant factors are captured by the model. To keep things simple I will discuss the issue in the context of linear regression. The same intuit
|
48,921 |
Interpreting 2D correspondence analysis plots (Part II)
|
I'm an ecologist, so I apologise in advance if this sounds a bit strange :-)
I like to think of these plots in terms of weighted averages. The region points are at the weighted averages of the smoking status classes and vice versa.
The problem with the above figure is the axis scaling and the fact that you can't display all the relationships (chi-square distance between regions and chi-square distance between smoking status) on the one figure. By the looks of it, the figure is using what is known as symmetric scaling, which has been shown to be a good compromise, preserving as much of the information in the sets of scores as possible.
I'm not familiar with the ca package but I am with the vegan package and its cca function:
require(vegan)
df <- data.frame(df)
ord <- cca(df)
plot(ord, scaling = 3)
The last plot is a bit easier to read than the one you show but AFAICT they are the same (or at least similarly scaled).
So I would say that occasional smokers are lower in number than expected in QC, BC and AB, and most associated with ON, but that in all regions, occasional smokers are low in number - they differ markedly from the expected number.
However, there is a single dominant "gradient" or axis of variation in these data and as the second axis represents so little variation, I would likely not interpret this component at all.
|
Interpreting 2D correspondence analysis plots (Part II)
|
I'm an ecologist, so I apologise in advance is this sounds a bit strange :-)
I like to think of these plots in terms of weighted averages. The region points are at the weighted averages of the smoking
|
Interpreting 2D correspondence analysis plots (Part II)
I'm an ecologist, so I apologise in advance if this sounds a bit strange :-)
I like to think of these plots in terms of weighted averages. The region points are at the weighted averages of the smoking status classes and vice versa.
The problem with the above figure is the axis scaling and the fact that you can't display all the relationships (chi-square distance between regions and chi-square distance between smoking status) on the one figure. By the looks of it, the figure is using what is known as symmetric scaling, which has been shown to be a good compromise, preserving as much of the information in the sets of scores as possible.
I'm not familiar with the ca package but I am with the vegan package and its cca function:
require(vegan)
df <- data.frame(df)
ord <- cca(df)
plot(ord, scaling = 3)
The last plot is a bit easier to read than the one you show but AFAICT they are the same (or at least similarly scaled).
So I would say that occasional smokers are lower in number than expected in QC, BC and AB, and most associated with ON, but that in all regions, occasional smokers are low in number - they differ markedly from the expected number.
However, there is a single dominant "gradient" or axis of variation in these data and as the second axis represents so little variation, I would likely not interpret this component at all.
|
Interpreting 2D correspondence analysis plots (Part II)
I'm an ecologist, so I apologise in advance is this sounds a bit strange :-)
I like to think of these plots in terms of weighted averages. The region points are at the weighted averages of the smoking
|
48,922 |
How to compute the standard error of an L-estimator?
|
As you know, from
$$Var[q] = Var[\sum_i w_i x_{(i)}] = \sum_i\sum_j w_i w_j Cov[x_{(i)}, x_{(j)}]$$
it follows you need only compute the variances and covariances of the order statistics. To do this, diagonalize the covariance matrix! Although this cannot be done in general, M. A. Stephens has obtained (heuristically) an asymptotic diagonalization. (The eigenvectors are Hermite polynomials.) In the spirit of PCA, limiting your calculations to the largest few eigenvalues can greatly reduce the computational effort and might produce a reasonable approximation, depending on the structure of the $w_i$. In fact, if you adjust that weight vector to be a linear combination of a small number of the eigenvectors, that will assure a simple calculation of $Var[q]$ and perhaps not cost you much in terms of the accuracy of $q$ itself. At worst, a preliminary eigendecomposition of $\vec{w}$ will then require only $O(N)$ calculations of the variance rather than $O(N^2)$.
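As a small numerical illustration of the quadratic form above (not of Stephens' diagonalization), suppose you have a weight vector w and an estimate C of the covariance matrix of the order statistics, here obtained crudely by simulation from an assumed reference distribution:
set.seed(1)
n <- 50
sims <- t(replicate(5000, sort(rnorm(n))))   # each row holds the n order statistics of one sample
C <- cov(sims)                               # C[i, j] estimates Cov[x_(i), x_(j)]
w <- rep(1 / n, n)                           # example weights: the sample mean as an L-estimator
sqrt(drop(t(w) %*% C %*% w))                 # SE of q = sum_i w_i x_(i); close to 1/sqrt(n) here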
|
How to compute the standard error of an L-estimator?
|
As you know, from
$$Var[q] = Var[\sum_i w_i x_{(i)}] = \sum_i\sum_j w_i w_j Cov[x_{(i)}, x_{(j)}]$$
it follows you need only compute the variances and covariances of the order statistics. To do this,
|
How to compute the standard error of an L-estimator?
As you know, from
$$Var[q] = Var[\sum_i w_i x_{(i)}] = \sum_i\sum_j w_i w_j Cov[x_{(i)}, x_{(j)}]$$
it follows you need only compute the variances and covariances of the order statistics. To do this, diagonalize the covariance matrix! Although this cannot be done in general, M. A. Stephens has obtained (heuristically) an asymptotic diagonalization. (The eigenvectors are Hermite polynomials.) In the spirit of PCA, limiting your calculations to the largest few eigenvalues can greatly reduce the computational effort and might produce a reasonable approximation, depending on the structure of the $w_i$. In fact, if you adjust that weight vector to be a linear combination of a small number of the eigenvectors, that will assure a simple calculation of $Var[q]$ and perhaps not cost you much in terms of the accuracy of $q$ itself. At worst, a preliminary eigendecomposition of $\vec{w}$ will then require only $O(N)$ calculations of the variance rather than $O(N^2)$.
|
How to compute the standard error of an L-estimator?
As you know, from
$$Var[q] = Var[\sum_i w_i x_{(i)}] = \sum_i\sum_j w_i w_j Cov[x_{(i)}, x_{(j)}]$$
it follows you need only compute the variances and covariances of the order statistics. To do this,
|
48,923 |
How to compute the standard error of an L-estimator?
|
It looks like I am probably stuck with a bootstrap. One interesting possibility here is to compute the 'exact bootstrap covariance', as outlined by Hutson & Ernst. Presumably the bootstrap covariance gives a good estimate of the standard error, asymptotically. However, the approach of Hutson & Ernst requires computation of the covariance of each pair of order statistics, and so this method is quadratic in the number of samples. Maybe I should just stick with the bootstrap!
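For completeness, the plain nonparametric bootstrap of an L-estimator's standard error takes only a few lines in R; the 10% trimmed mean below is just a stand-in for whatever weighted combination of order statistics you are using:
set.seed(1)
x <- rexp(200)                                   # example data
L_est <- function(v) mean(v, trim = 0.1)         # an L-estimator: the 10% trimmed mean
boot_vals <- replicate(2000, L_est(sample(x, replace = TRUE)))
sd(boot_vals)                                    # bootstrap estimate of the standard error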
|
How to compute the standard error of an L-estimator?
|
It looks like I am probably stuck with a bootstrap. One interesting possibility here is to compute the 'exact bootstrap covariance', as outlined by Hutson & Ernst. Presumably the bootstrap covariance
|
How to compute the standard error of an L-estimator?
It looks like I am probably stuck with a bootstrap. One interesting possibility here is to compute the 'exact bootstrap covariance', as outlined by Hutson & Ernst. Presumably the bootstrap covariance gives a good estimate of the standard error, asymptotically. However, the approach of Hutson & Ernst requires computation of the covariance of each pair of order statistics, and so this method is quadratic in the number of samples. Maybe I should just stick with the bootstrap!
|
How to compute the standard error of an L-estimator?
It looks like I am probably stuck with a bootstrap. One interesting possibility here is to compute the 'exact bootstrap covariance', as outlined by Hutson & Ernst. Presumably the bootstrap covariance
|
48,924 |
Few machine learning problems
|
For (1), as ebony1 suggests, there are several incremental or on-line SVM algorithms you could try; the only thing I would mention is that the hyper-parameters (regularisation and kernel parameters) may also need tuning as you go along, and there are fewer algorithmic tricks to help with that. The regularisation parameter will almost certainly benefit from tuning, because as the amount of training data increases, less regularisation is normally required.
For (2) you could try fitting a one-class SVM to the training data, which would at least tell you if the data were consistent with the classes you do know about, and then classify as "unknown" if the output of the one-class SVM was sufficiently low. IIRC libsvm has an implementation of one-class SVM.
For (3) if you just use 1-v-1 class component SVMs, then you just need to make three new SVMs, one for each of the unknown-v-known class combinations, and there is no need to retrain the others.
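A minimal sketch of the one-class idea in (2), using the e1071 interface to libsvm; X_train and X_new are placeholder feature matrices, and the nu value is illustrative rather than a recommendation:
library(e1071)
oc <- svm(X_train, y = NULL, type = "one-classification", nu = 0.05, kernel = "radial")
inlier <- predict(oc, X_new)   # TRUE = consistent with the known classes, FALSE = flag as "unknown"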
HTH
|
Few machine learning problems
|
For (1), as ebony1 suggests, there are several incremental or on-line SVM algorithms you could try, the only thing I would mention is that the hyper-parameters (regularisation and kernel parameters) m
|
Few machine learning problems
For (1), as ebony1 suggests, there are several incremental or on-line SVM algorithms you could try; the only thing I would mention is that the hyper-parameters (regularisation and kernel parameters) may also need tuning as you go along, and there are fewer algorithmic tricks to help with that. The regularisation parameter will almost certainly benefit from tuning, because as the amount of training data increases, less regularisation is normally required.
For (2) you could try fitting a one-class SVM to the training data, which would at least tell you if the data were consistent with the classes you do know about, and then classify as "unknown" if the output of the one-class SVM was sufficiently low. IIRC libsvm has an implementation of one-class SVM.
For (3) if you just use 1-v-1 class component SVMs, then you just need to make three new SVMs, one for each of the unknown-v-known class combinations, and there is no need to retrain the others.
HTH
|
Few machine learning problems
For (1), as ebony1 suggests, there are several incremental or on-line SVM algorithms you could try, the only thing I would mention is that the hyper-parameters (regularisation and kernel parameters) m
|
48,925 |
Few machine learning problems
|
I'm not really sure if an implementation exists to address all your needs.
For (1), you can use any of the online implementations of SVM such as Pegasos or LASVM. If you want something simpler, you may use Perceptron or Kernel Perceptron. Basically, in all these algorithms, given an already learned weight vector (say w0), you can update w0 incrementally, given a fresh set of new examples.
For (2) and (3), I'm not sure if the above approaches would straightaway allow this, but you can probably borrow some ideas from the literature dealing with unknown classes. I'd suggest taking a look at this.
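To make the incremental-update point in (1) concrete, here is a bare-bones online perceptron step in R (binary labels coded as +1/-1; this is purely illustrative and is not one of the cited implementations):
perceptron_update <- function(w, x_new, y_new, eta = 1) {
  # single online step: adjust w only if the fresh example is misclassified
  if (y_new * sum(w * x_new) <= 0) w <- w + eta * y_new * x_new
  w
}
# usage: w0 <- rep(0, ncol(X)); then, for each new example i: w0 <- perceptron_update(w0, X[i, ], y[i])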
|
Few machine learning problems
|
I'm not really sure if an implementation exists to address all your needs.
For (1), you can use any of the online implementations of SVM such as Pegasos or LASVM. If you want something simpler, you ma
|
Few machine learning problems
I'm not really sure if an implementation exists to address all your needs.
For (1), you can use any of the online implementations of SVM such as Pegasos or LASVM. If you want something simpler, you may use Perceptron or Kernel Perceptron. Basically, in all these algorithms, given an already learned weight vector (say w0), you can update w0 incrementally, given a fresh set of new examples.
For (2) and (3), I'm not sure if the above approaches would straightaway allow this, but you can probably borrow some ideas from the literature dealing with unknown classes. I'd suggest taking a look at this.
|
Few machine learning problems
I'm not really sure if an implementation exists to address all your needs.
For (1), you can use any of the online implementations of SVM such as Pegasos or LASVM. If you want something simpler, you ma
|
48,926 |
Best way to show these or similar count data are not independent?
|
You could just plot the ACF and check if the first coefficient is inside the critical values. The critical values are ok for non-Gaussian time series (at least asymptotically).
Alternatively, fit a simple count time series model such as the INAR(1) and see if the coefficient is significantly different from zero.
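Both checks are quick in R; for an INAR(1) process the lag-k autocorrelation is alpha^k, so the lag-1 sample autocorrelation doubles as a simple Yule-Walker-type estimate of the coefficient (x below stands for your vector of counts):
acf(x)                              # bars outside the dashed bands suggest serial dependence
r1 <- acf(x, plot = FALSE)$acf[2]   # lag-1 sample autocorrelation
r1                                  # crude estimate of the INAR(1) coefficient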
|
Best way to show these or similar count data are not independent?
|
You could just plot the ACF and check if the first coefficient is inside the critical values. The critical values are ok for non-Gaussian time series (at least asymptotically).
Alternatively, fit a si
|
Best way to show these or similar count data are not independent?
You could just plot the ACF and check if the first coefficient is inside the critical values. The critical values are ok for non-Gaussian time series (at least asymptotically).
Alternatively, fit a simple count time series model such as the INAR(1) and see if the coefficient is significantly different from zero.
|
Best way to show these or similar count data are not independent?
You could just plot the ACF and check if the first coefficient is inside the critical values. The critical values are ok for non-Gaussian time series (at least asymptotically).
Alternatively, fit a si
|
48,927 |
What are the indications that one should be using interaction variables in their linear regression model?
|
The short, but perhaps unsatisfying answer is: when you have a prior reason to think that the effect of one variable might depend on what's going on with another variable.
For example, let's say I'm trying to model student scores on a math test as a function of math test scores in the previous year and a binary variable indicating whether the student attended a (randomly assigned) refresher course in rudimentary math topics.
Given that the course only covered rudimentary abilities there are good theoretical reasons to think that it might produce a bigger impact on test scores for students who started at a lower baseline, and little or no impact on those students who were already doing well (and thus already knew the rudimentary topics it covered). So I should include an interaction term between prior test scores and course attendance to test if this is true or not (I would predict that the interaction term coefficient would be negative and significant in this case).
Note that this decision was made purely based on my prior theoretical understanding of how the variables should (or could) work. I didn't run a model first without the interaction term and then check some diagnostic or run some post hoc tests.
In general, when you are trying to decide how to specify a model - including whether to include interaction terms - you really want to try to make these decisions based on prior theory and literature. It can be tempting to search for some sort of algorithmic approach ("if this number here is less than .05, then include an interaction") as you seem to be doing, but these approaches tend to cause big problems in practice - like unintentional p hacking. See prior discussions here about the problems with other attempts to specify models using "algorithmic" approaches.
In the case of interaction terms - there are always a large number of interaction terms that you COULD specify in any model. But if you try to check them all you will end up causing a multiple comparisons problem - you will find one of them to be significant at the .05 level just due to random chance, because you ran so many statistical tests. Plus some of these interaction terms - even if significant - will just make no substantive sense. Finally, including interaction terms in a model eats up degrees of freedom, makes the model harder to interpret, and reduces statistical power. So you only want to include an interaction term if you think that the benefit (in terms of interpretation and model fit) outweighs these costs.
In short: take a step back from diagnostics and think about what the variables you are considering for your model are actually doing, and why and how they might relate to the dependent variable. If you can think of a good substantive reason why the effect of one variable might depend on the level of another variable, then consider testing for an interaction between them.
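In R, the refresher-course example above would be tested with something like the following (scores, prior_score and course are placeholder names):
fit <- lm(score ~ prior_score * course, data = scores)
summary(fit)   # a negative, significant prior_score:course coefficient matches the prediction above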
|
What are the indications that one should be using interaction variables in their linear regression m
|
The short, but perhaps unsatisfying answer is: when you have a prior reason to think that the effect of one variable might depend on what's going on with another variable.
For example, let's say I'm t
|
What are the indications that one should be using interaction variables in their linear regression model?
The short, but perhaps unsatisfying answer is: when you have a prior reason to think that the effect of one variable might depend on what's going on with another variable.
For example, let's say I'm trying to model student scores on a math test as a function of math test scores in the previous year and a binary variable indicating whether the student attended a (randomly assigned) refresher course in rudimentary math topics.
Given that the course only covered rudimentary abilities there are good theoretical reasons to think that it might produce a bigger impact on test scores for students who started at a lower baseline, and little or no impact on those students who were already doing well (and thus already knew the rudimentary topics it covered). So I should include an interaction term between prior test scores and course attendance to test if this is true or not (I would predict that the interaction term coefficient would be negative and significant in this case).
Note that this decision was made purely based on my prior theoretical understanding of how the variables should (or could) work. I didn't run a model first without the interaction term and then check some diagnostic or run some post hoc tests.
In general, when you are trying to decide how to specify a model - including whether to include interaction terms - you really want to try to make these decisions based on prior theory and literature. It can be tempting to search for some sort of algorithmic approach ("if this number here is less than .05, then include an interaction") as you seem to be doing, but these approaches tend to cause big problems in practice - like unintentional p hacking. See prior discussions here about the problems with other attempts to specify models using "algorithmic" approaches.
In the case of interaction terms - there are always a large number of interaction terms that you COULD specify in any model. But if you try to check them all you will end up causing a multiple comparisons problem - you will find one of them to be significant at the .05 level just due to random chance, because you ran so many statistical tests. Plus some of these interaction terms - even if significant - will just make no substantive sense. Finally, including interaction terms in a model eats up degrees of freedom, makes the model harder to interpret, and reduces statistical power. So you only want to include an interaction term if you think that the benefit (in terms of interpretation and model fit) outweighs these costs.
In short: take a step back from diagnostics and think about what the variables you are considering for your model are actually doing, and why and how they might relate to the dependent variable. If you can think of a good substantive reason why the effect of one variable might depend on the level of another variable, then consider testing for an interaction between them.
|
What are the indications that one should be using interaction variables in their linear regression m
The short, but perhaps unsatisfying answer is: when you have a prior reason to think that the effect of one variable might depend on what's going on with another variable.
For example, let's say I'm t
|
48,928 |
What are the indications that one should be using interaction variables in their linear regression model?
|
The prior for any interaction is that it is likely very, perhaps vanishingly, small (see Tosh 2021). As such, you should not explore or add-in interaction terms unless you expect them, a priori, to be large or informative. Doing so creates a massive risk for false-positives. Generally, one should only add in interactions when they are well-established or when the interaction is the question of interest (one might also add in specific interaction terms as confounders when testing an interaction of interest in a regression - see Keller 2014).
|
What are the indications that one should be using interaction variables in their linear regression m
|
The prior for any interaction is that it is likely very, perhaps vanishingly, small (see Tosh 2021). As such, you should not explore or add-in interaction terms unless you expect them, a priori, to be
|
What are the indications that one should be using interaction variables in their linear regression model?
The prior for any interaction is that it is likely very, perhaps vanishingly, small (see Tosh 2021). As such, you should not explore or add-in interaction terms unless you expect them, a priori, to be large or informative. Doing so creates a massive risk for false-positives. Generally, one should only add in interactions when they are well-established or when the interaction is the question of interest (one might also add in specific interaction terms as confounders when testing an interaction of interest in a regression - see Keller 2014).
|
What are the indications that one should be using interaction variables in their linear regression m
The prior for any interaction is that it is likely very, perhaps vanishingly, small (see Tosh 2021). As such, you should not explore or add-in interaction terms unless you expect them, a priori, to be
|
48,929 |
What does "predictive discrimination" mean and how is it different from classification?
|
Frank Harrell's comment [emphasis is mine]:
Predictive discrimination is the degree to which predictive signals can separate those with good outcomes from those with worse outcomes. The most popular measures of discrimination are R2 and c-index (concordance probability; equal to AUROC when Y is binary). Rank correlations between X and Y are measures of predictive discrimination. This is more general than classification as it takes into account tendencies/gray zones as in probability models. See https://fharrell.com/post/addvalue
His comparison of predictive discrimination to classification seems like apples and oranges to me, because the former is a statistic or performance measure, while the latter is a task or a problem setting. But this will do.
|
What does "predictive discrimination" mean and how is it different from classification?
|
Frank Harrell's comment [emphasis is mine]:
Predictive discrimination is the degree to which predictive signals can separate those with good outcomes from those with worse outcomes. The most popular
|
What does "predictive discrimination" mean and how is it different from classification?
Frank Harrell's comment [emphasis is mine]:
Predictive discrimination is the degree to which predictive signals can separate those with good outcomes from those with worse outcomes. The most popular measures of discrimination are R2 and c-index (concordance probability; equal to AUROC when Y is binary). Rank correlations between X and Y are measures of predictive discrimination. This is more general than classification as it takes into account tendencies/gray zones as in probability models. See https://fharrell.com/post/addvalue
His comparison of predictive discrimination to classification seems like apples and oranges to me, because the former is a statistic or performance measure, while the latter is a task or a problem setting. But this will do.
|
What does "predictive discrimination" mean and how is it different from classification?
Frank Harrell's comment [emphasis is mine]:
Predictive discrimination is the degree to which predictive signals can separate those with good outcomes from those with worse outcomes. The most popular
|
48,930 |
What does "predictive discrimination" mean and how is it different from classification?
|
Predictive discrimination is the ability of a model to produce (distributions of) predicted values that are separated when the observed values are distinct, and to have the correct order, and I think this is totally consistent with the quote in the answer by paperskilltrees. This is not the same as making quality predictions. A model can have good, even perfect, predictive discrimination while having terrible performance, as I will demonstrate below.
set.seed(2023)
N <- 1000
y_true <- runif(N, 0, 1)
y_pred <- 10 + y_true
plot(y_true, y_pred)
The predicted values are perfectly correlated with the true values, yet the predictions are terrible.
Similarly, a model that predicts probabilities (a "classifier" in many circles) can produce probabilities that are well-separated between the classes yet are not reflective of the true probability of an event occurring. This would be reflected in the ROCAUC being high (or the correlation between the true and predicted values) yet the performance according to something like Brier score or log loss being poor.
library(rms)
set.seed(2023)
N <- 1000
p_true <- rbeta(N, 1/4, 1/4)
y_true <- rbinom(N, 1, p_true)
y_pred <- ecdf(p_true)(p_true)
rms::val.prob(y_pred, y_true)
The measures of predictive discrimination, AUC and squared correlation, are quite high (especially AUC), but the calibration is terrible, as the curved plot that deviates from the "ideal" shows.
|
What does "predictive discrimination" mean and how is it different from classification?
|
Predictive discrimination is the ability of a model to produce (distributions of) predicted values that are separated when the observed values are distinct, and to have the correct order, and I think
|
What does "predictive discrimination" mean and how is it different from classification?
Predictive discrimination is the ability of a model to produce (distributions of) predicted values that are separated when the observed values are distinct, and to have the correct order, and I think this is totally consistent with the quote in the answer by paperskilltrees. This is not the same as making quality predictions. A model can have good, even perfect, predictive discrimination while having terrible performance, as I will demonstrate below.
set.seed(2023)
N <- 1000
y_true <- runif(N, 0, 1)
y_pred <- 10 + y_true
plot(y_true, y_pred)
The predicted values are perfectly correlated with the true values, yet the predictions are terrible.
Similarly, a model that predicts probabilities (a "classifier" in many circles) can produce probabilities that are well-separated between the classes yet are not reflective of the true probability of an event occurring. This would be reflected in the ROCAUC being high (or the correlation between the true and predicted values) yet the performance according to something like Brier score or log loss being poor.
library(rms)
set.seed(2023)
N <- 1000
p_true <- rbeta(N, 1/4, 1/4)
y_true <- rbinom(N, 1, p_true)
y_pred <- ecdf(p_true)(p_true)
rms::val.prob(y_pred, y_true)
The measures of predictive discrimination, AUC and squared correlation, are quite high (especially AUC), but the calibration is terrible, as the curved plot that deviates from the "ideal" shows.
|
What does "predictive discrimination" mean and how is it different from classification?
Predictive discrimination is the ability of a model to produce (distributions of) predicted values that are separated when the observed values are distinct, and to have the correct order, and I think
|
48,931 |
When should one use a Tweedie GLM over a Zero-Inflated GLM?
|
Tweedie GLMs are true GLMs and enjoy the usual properties of GLMs.
ZI GLMs are more complex models that assume a GLM plus an extra zero-inflation process, so they are obviously more flexible but at the cost of extra parameters. If the simpler Tweedie GLM fits the data adequately then it is the preferable model. If it doesn't, then you probably need the added complexity of the ZI GLM.
Tweedie GLMs assume that the probability of an exact zero is related to the expected value of the process.
If the expected value is low, then zeros will be more common.
If the expected value is high, then zeros will be less common.
This relationship can be tuned somewhat by choosing the index of the Tweedie model but, if the relationship doesn't hold as expected, then a more complex ZI model may be required.
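For a Tweedie model with index 1 < p < 2 (the compound Poisson-gamma case), the zero probability has the closed form P(Y = 0) = exp(-mu^(2-p) / (phi*(2-p))), which makes the relationship above easy to inspect numerically; the mu, phi and p values below are arbitrary:
p_zero <- function(mu, phi, p) exp(-mu^(2 - p) / (phi * (2 - p)))
mu <- c(0.5, 1, 2, 5, 10)
round(p_zero(mu, phi = 1, p = 1.5), 3)   # zeros common when the expected value is small, rare when large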
You can judge the fit of the Tweedie GLM using randomized quantile residuals.
See for example
Interpreting GLM residual plot
or
Poisson regression residuals diagnostic.
You might for example give special attention to the residuals arising from exact zeros in the datasets.
Also see
Can a model for non-negative data with clumping at zeros (Tweedie GLM, zero-inflated GLM, etc.) predict exact zeros?
regarding how to estimate the probability of zeros predicted by a Tweedie GLM.
See also
A model for non-negative data with many zeros: pros and cons of Tweedie GLM
for a related question and answer.
References
Smyth, G. K. (1996). Regression modelling of quantity data with exact zeroes. Proceedings of the Second Australia-Japan Workshop on Stochastic Models in Engineering, Technology and Management. Technology Management Centre, University of Queensland, 572-580.
http://www.statsci.org/smyth/pubs/RegressionWithExactZerosPreprint.pdf
Dunn PK, Smyth GK (1996). Randomized quantile residuals. J Comput Graph Stat 5(3):236–44. https://www.tandfonline.com/doi/abs/10.1080/10618600.1996.10474708
Feng, C., Li, L., & Sadeghpour, A. (2020). A comparison of residual diagnosis tools for diagnosing regression models for count data. BMC Medical Research Methodology 20(1), 1-21. https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-020-01055-2
|
When should one use a Tweedie GLM over a Zero-Inflated GLM?
|
Tweedie GLMs are true GLMs and enjoy the usual properties of GLMs.
ZI GLMs are more complex models that assume a GLM plus an extra zero-inflation process, so they are obviously more flexible but at th
|
When should one use a Tweedie GLM over a Zero-Inflated GLM?
Tweedie GLMs are true GLMs and enjoy the usual properties of GLMs.
ZI GLMs are more complex models that assume a GLM plus an extra zero-inflation process, so they are obviously more flexible but at the cost of extra parameters. If the simpler Tweedie GLM fits the data adequately then it is the preferable model. If it doesn't, then you probably need the added complexity of the ZI GLM.
Tweedie GLMs assume that the probability of an exact zero is related to the expected value of the process.
If the expected value is low, then zeros will be more common.
If the expected value is high, then zeros will be less common.
This relationship can be tuned somewhat by choosing the index of the Tweedie model but, if the relationship doesn't hold as expected, then a more complex ZI model may be required.
You can judge the fit of the Tweedie GLM using randomized quantile residuals.
See for example
Interpreting GLM residual plot
or
Poisson regression residuals diagnostic.
You might for example give special attention to the residuals arising from exact zeros in the datasets.
Also see
Can a model for non-negative data with clumping at zeros (Tweedie GLM, zero-inflated GLM, etc.) predict exact zeros?
regarding how to estimate the probability of zeros predicted by a Tweedie GLM.
See also
A model for non-negative data with many zeros: pros and cons of Tweedie GLM
for a related question and answer.
References
Smyth, G. K. (1996). Regression modelling of quantity data with exact zeroes. Proceedings of the Second Australia-Japan Workshop on Stochastic Models in Engineering, Technology and Management. Technology Management Centre, University of Queensland, 572-580.
http://www.statsci.org/smyth/pubs/RegressionWithExactZerosPreprint.pdf
Dunn PK, Smyth GK (1996). Randomized quantile residuals. J Comput Graph Stat 5(3):236–44. https://www.tandfonline.com/doi/abs/10.1080/10618600.1996.10474708
Feng, C., Li, L., & Sadeghpour, A. (2020). A comparison of residual diagnosis tools for diagnosing regression models for count data. BMC Medical Research Methodology 20(1), 1-21. https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-020-01055-2
|
When should one use a Tweedie GLM over a Zero-Inflated GLM?
Tweedie GLMs are true GLMs and enjoy the usual properties of GLMs.
ZI GLMs are more complex models that assume a GLM plus an extra zero-inflation process, so they are obviously more flexible but at th
|
48,932 |
does convolution of a probability distribution with itself converge to its mean
|
$Y_0=\lambda X + (1-\lambda) X'$ so
$\text{var}(Y_0) = (\lambda^2 + (1-\lambda)^2)\text{var}(X)$
define $v = (\lambda^2 + (1-\lambda)^2)$
note $0.5\le v < 1$ for $0< \lambda<1$
$Y_{1}=\lambda Y_0 + (1-\lambda) Y_0'$ and we have the general pattern
$Y_{n+1}=\lambda Y_n + (1-\lambda) Y_n'$
since $Y_n$ and $Y_n'$ are independent copies
$\text{var}(Y_{n+1})= v\,\text{var}(Y_n)$
so, by induction, $\text{var}(Y_{n}) = v^{n+1}\text{var}(X)$: the variance goes to zero as $n$ grows, while $\text{mean}(Y_n)=\text{mean}(X)$ for every $n$.
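A quick Monte Carlo check of this variance recursion (the exponential starting distribution and lambda = 0.3 are arbitrary; resampling from the empirical distribution stands in for drawing independent copies):
set.seed(1)
lambda <- 0.3
y <- rexp(1e5)                 # Monte Carlo draws representing X, with mean 1
for (n in 1:20) {
  # two independent resamples approximate two independent copies of the current Y
  y <- lambda * sample(y, replace = TRUE) + (1 - lambda) * sample(y, replace = TRUE)
}
c(mean(y), var(y))             # the mean stays near 1 while the variance shrinks towards 0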
|
does convolution of a probability distribution with itself converge to its mean
|
$Y_0=\lambda X + (1-\lambda) X'$ so
$\text{var}(Y_0) = (\lambda^2 + (1-\lambda)^2)\text{var}(X)$
define $v = (\lambda^2 + (1-\lambda)^2)$
note $0.5< v < 1$ for $0< \lambda<1$
$Y_{1}=\lambda Y_0 + (1-\
|
does convolution of a probability distribution with itself converge to its mean
$Y_0=\lambda X + (1-\lambda) X'$ so
$\text{var}(Y_0) = (\lambda^2 + (1-\lambda)^2)\text{var}(X)$
define $v = (\lambda^2 + (1-\lambda)^2)$
note $0.5\le v < 1$ for $0< \lambda<1$
$Y_{1}=\lambda Y_0 + (1-\lambda) Y_0'$ and we have the general pattern
$Y_{n+1}=\lambda Y_n + (1-\lambda) Y_n'$
since $Y_n$ and $Y_n'$ are independent copies
$\text{var}(Y_{n+1})= v\,\text{var}(Y_n)$
so, by induction, $\text{var}(Y_{n}) = v^{n+1}\text{var}(X)$: the variance goes to zero as $n$ grows, while $\text{mean}(Y_n)=\text{mean}(X)$ for every $n$.
|
does convolution of a probability distribution with itself converge to its mean
$Y_0=\lambda X + (1-\lambda) X'$ so
$\text{var}(Y_0) = (\lambda^2 + (1-\lambda)^2)\text{var}(X)$
define $v = (\lambda^2 + (1-\lambda)^2)$
note $0.5< v < 1$ for $0< \lambda<1$
$Y_{1}=\lambda Y_0 + (1-\
|
48,933 |
does convolution of a probability distribution with itself converge to its mean
|
This is just an example that your statement seems to be wrong, at least in some scenarios.
Suppose there are two same but independent distributions:
$Y_0$: 50% we get 1, 50% we get 0.
$Y_0'$: 50% we get 1, 50% we get 0.
Let $Y_1=1/2Y_0+1/2Y_0'$
Then $Y_1$ is: 25% we get 1, 50% we get 0.5, 25% we get 0.
The 25% comes from $50\%\times50\%$.
Then we have $Y_1'$ distributed exactly the same as $Y_1$, with $Y_1'$ and $Y_1$ independent.
By repeating this construction, it seems that the probability that $Y_n$ equals exactly $0.5$ can never exceed $50\%$.
|
does convolution of a probability distribution with itself converge to its mean
|
This is just an example that your statement seems to be wrong, at least in some scenarios.
Suppose there are two same but independent distributions:
$Y_0$: 50% we get 1, 50% we get 0.
$Y_0'$: 50% we g
|
does convolution of a probability distribution with itself converge to its mean
This is just an example that your statement seems to be wrong, at least in some scenarios.
Suppose there are two same but independent distributions:
$Y_0$: 50% we get 1, 50% we get 0.
$Y_0'$: 50% we get 1, 50% we get 0.
Let $Y_1=1/2Y_0+1/2Y_0'$
Then $Y_1$ is: 25% we get 1, 50% we get 0.5, 25% we get 0.
The 25% comes from $50\%\times50\%$.
Then we have $Y_1'$ distributed exactly the same as $Y_1$, with $Y_1'$ and $Y_1$ independent.
By repeating this construction, it seems that the probability that $Y_n$ equals exactly $0.5$ can never exceed $50\%$.
|
does convolution of a probability distribution with itself converge to its mean
This is just an example that your statement seems to be wrong, at least in some scenarios.
Suppose there are two same but independent distributions:
$Y_0$: 50% we get 1, 50% we get 0.
$Y_0'$: 50% we g
|
48,934 |
Why is it that if you undersample or oversample you have to calibrate your output probabilities?
|
I'm taking a lot of my answer from Agresti's "Categorical Data Analysis". For some context, let's say we want to predict if an email is spam or ham using a single binary variable, namely if the subject line contains the word "Viagra". Let's call this predictor $x$. This is a simplification of our more general problem, but will suffice.
Under/Oversampling can be thought of as a case-control study. Viewing the spam/ham problem as a case-control design, we would fix the marginal distribution of ham to spam (via oversampling in this case) and the outcome of the study would be if the email subject line had the word "Viagra". Before oversampling, our estimate of $Pr(\mbox{spam})=\delta$ (it's just the proportion of our sample which is spam). After oversampling, our estimate of $Pr(\mbox{spam})=\delta^\star >\delta$. This will be important later.
In most other studies, we would want to know $Pr(y=\mbox{spam} \vert x)$. This is referred to as the conditional distribution of spam (conditional on $x$ in this case). However, because of the sampling design fixing the marginal distribution of ham/spam, we can't estimate the conditional distribution of spam, but we can estimate the conditional distribution of $x$, $Pr(x \vert y=\mbox{spam})$.
In order to get the conditional distribution of spam, we would need to account for the prevalence of spam. Following Lachin from chapter 5 of his book Biostatistical Methods: The Assessment of Relative Risks, 2nd Edition and an application of Bayes' Rule, the conditional distribution of spam would be calculated as
$$Pr(\mbox{spam} \vert x) = \dfrac{Pr(x\vert \mbox{spam})\cdot \delta}{Pr(x \vert \mbox{spam}) \cdot \delta + Pr(x \vert \mbox{ham}) \cdot (1-\delta)}$$
Can you spot the problem now?
Here is the problem: the sampling design fixes the prevalence in our sample to be something other than $\delta$. In essence, we have forced the prevalence to be $\delta^\star > \delta$ via oversampling. Hence, any estimate of the risk of spam from the data we have using oversampling is biased precisely because the prevalence is biased by design.
doesn't that just mean it has learnt the wrong things during training
Some of what you have learned would be wrong, but surprisingly not everything. The prevalence would certainly be wrong, hence the estimated risk would be wrong, but the relationship between $x$ and the risk of spam is unaffected. From Agresti (edited to align with our example),
We can find the odds ratio, however, because it treats the variables symmetrically, taking the same value using the conditional distribution of [$x$] given [spam] as it does using the conditional distribution of [spam] given [$x$].
So our model would learn the correct relationships between inputs and outputs, but the probabilities would be biased.
Let's make this more concrete by modelling $Pr(\mbox{spam} \vert x)$ with a logistic regression. Our model would be
$$ \operatorname{logit}(Pr(\mbox{spam} \vert x)) = \beta_0 + \beta_1 x$$
If we were to run a case-control study on the spam/ham problem, $\beta_0$ would be biased, but not $\beta_1$. It's easy to demonstrate this too via simulation. I will simulate data from the model
$$ \operatorname{logit}(p) = -2.2 + 0.46x $$
and upsample the minority class. Then, I will compute the difference between the estimated coefficients. I'll do this 1000 times and plot a histogram of the differences. We will see that $\beta_1 - 0.46$ will be centred around 0 (hence unbiased) whereas $\beta_0 - (-2.2)$ will not be centered around 0 (hence biased) due to the upsampling. I've added a red line at the point of 0 difference for reference.
Because the intercept is biased, the entire risk estimate is biased. Not performing the upsampling and fitting the model on the raw data fixes this bias (shown below, though it should be noted that the estimates are asymptotically unbiased, so we would need enough data for this to work).
Code to reproduce the plots:
library(tidyverse)
z = rerun(1000, {
# sample data to fit the model to
n = 1000
x = rbinom(n, 1, 0.5)
b0 = qlogis(0.1)
b1 = (qlogis(0.15) - qlogis(0.1))
b = c(b0, b1)
p = plogis(b0 + b1*x)
y = rbinom(n, 1, p)
d = tibble(x, y)
# upsample positive cases
nsamp = length(y[y==0]) - length(y[y==1])
yd = filter(d, y==1) %>%
sample_n(size=nsamp, replace = T)
newd = bind_rows(yd, d)
# switch data to newd if you want to get the biased estimates
model = glm(y~x, data=d, family=binomial())
estbeta = coef(model)
tibble(coef = c('Intercept','slope'), difference = estbeta - b)  # estimated minus true
})
bind_rows(z) %>%
ggplot(aes(difference))+
geom_histogram(color = 'black', fill = 'dark gray')+
facet_wrap(~coef, ncol = 1)+
geom_vline(aes(xintercept=0), color = 'red')+
theme_light()+
theme(aspect.ratio = 1/1.61)
|
Why is it that if you undersample or oversample you have to calibrate your output probabilities?
|
I'm taking a lot of my answer from Agresti's "Categorical Data Analysis". For some context, let's say we want to predict if an email is spam or ham using a single binary variable, namely if the subje
|
Why is it that if you undersample or oversample you have to calibrate your output probabilities?
I'm taking a lot of my answer from Agresti's "Categorical Data Analysis". For some context, let's say we want to predict if an email is spam or ham using a single binary variable, namely if the subject line contains the word "Viagra". Let's call this predictor $x$. This is a simplification of our more general problem, but will suffice.
Under/Oversampling can be thought of as a case-control study. Viewing the spam/ham problem as a case-control design, we would fix the marginal distribution of ham to spam (via oversampling in this case) and the outcome of the study would be if the email subject line had the word "Viagra". Before oversampling, our estimate of $Pr(\mbox{spam})=\delta$ (it's just the proportion of our sample which is spam). After oversampling, our estimate of $Pr(\mbox{spam})=\delta^\star >\delta$. This will be important later.
In most other studies, we would want to know $Pr(y=\mbox{spam} \vert x)$. This is referred to as the conditional distribution of spam (conditional on $x$ in this case). However, because of the sampling design fixing the marginal distribution of ham/spam, we can't estimate the conditional distribution of spam, but we can estimate the conditional distribution of $x$, $Pr(x \vert y=\mbox{spam})$.
In order to get the conditional distribution of spam, we would need to account for the prevalence of spam. Following Lachin from chapter 5 of his book Biostatistical Methods: The Assessment of Relative Risks, 2nd Edition and an application of Bayes' Rule, the conditional distribution of spam would be calculated as
$$Pr(\mbox{spam} \vert x) = \dfrac{Pr(x\vert \mbox{spam})\cdot \delta}{Pr(x \vert \mbox{spam}) \cdot \delta + Pr(x \vert \mbox{ham}) \cdot (1-\delta)}$$
Can you spot the problem now?
Here is the problem: the sampling design fixes the prevalence in our sample to be something other than $\delta$. In essence, we have forced the prevalence to be $\delta^\star > \delta$ via oversampling. Hence, any estimate of the risk of spam from the data we have using oversampling is biased precisely because the prevalence is biased by design.
doesn't that just mean it has learnt the wrong things during training
Some of what you have learned would be wrong, but surprisingly not everything. The prevalence would certainly be wrong, hence the estimated risk would be wrong, but the relationship between $x$ and the risk of spam is unaffected. From Agresti (edited to align with our example),
We can find the odds ratio, however, because it treats the variables symmetrically, taking the same value using the conditional distribution of [$x$] given [spam] as it does using the conditional distribution of [spam] given [$x$].
So our model would learn the correct relationships between inputs and outputs, but the probabilities would be biased.
Let's make this more concrete by modelling $Pr(\mbox{spam} \vert x)$ with a logistic regression. Our model would be
$$ \operatorname{logit}(Pr(\mbox{spam} \vert x)) = \beta_0 + \beta_1 x$$
If we were to run a case-control study on the spam/ham problem, $\beta_0$ would be biased, but not $\beta_1$. It's easy to demonstrate this too via simulation. I will simulate data from the model
$$ \operatorname{logit}(p) = -2.2 + 0.46x $$
and upsample the minority class. Then, I will compute the difference between the estimated coefficients. I'll do this 1000 times and plot a histogram of the differences. We will see that $\beta_1 - 0.46$ will be centred around 0 (hence unbiased) whereas $\beta_0 - (-2.2)$ will not be centered around 0 (hence biased) due to the upsampling. I've added a red line at the point of 0 difference for reference.
Because the intercept is biased, the entire risk estimate is biased. Not performing the upsampling and fitting the model on the raw data fixes this bias (shown below, though it should be noted that the estimates are asymptotically unbiased, so we would need enough data for this to work).
Code to reproduce the plots:
library(tidyverse)
z = rerun(1000, {
# sample data to fit the model to
n = 1000
x = rbinom(n, 1, 0.5)
b0 = qlogis(0.1)
b1 = (qlogis(0.15) - qlogis(0.1))
b = c(b0, b1)
p = plogis(b0 + b1*x)
y = rbinom(n, 1, p)
d = tibble(x, y)
# upsample positive cases
nsamp = length(y[y==0]) - length(y[y==1])
yd = filter(d, y==1) %>%
sample_n(size=nsamp, replace = T)
newd = bind_rows(yd, d)
# switch data to newd if you want to get the biased estimates
model = glm(y~x, data=d, family=binomial())
estbeta = coef(model)
tibble(coef = c('Intercept','slope'), difference = estbeta - b)  # estimated minus true
})
bind_rows(z) %>%
ggplot(aes(difference))+
geom_histogram(color = 'black', fill = 'dark gray')+
facet_wrap(~coef, ncol = 1)+
geom_vline(aes(xintercept=0), color = 'red')+
theme_light()+
theme(aspect.ratio = 1/1.61)
|
Why is it that if you undersample or oversample you have to calibrate your output probabilities?
I'm taking a lot of my answer from Agresti's "Categorical Data Analysis". For some context, let's say we want to predict if an email is spam or ham using a single binary variable, namely if the subje
|
48,935 |
Linear regression to predict both mean and SD of dependent variable
|
As you are interested in modeling percentiles, you should have a look at quantile regression methods. Instead of modeling conditional means (as in linear regression), quantile regression allows you to model (conditional) quantiles.
As mentioned in the comments, a good introduction to quantile regression is the vignette to the quantreg R package. One of the examples in the vignette illustrates your use case:
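For instance, with the quantreg package you can model several conditional quantiles of daily expenditure at once (income and the data frame name households are placeholders):
library(quantreg)
fit <- rq(expenditure ~ income, tau = c(0.1, 0.5, 0.9), data = households)
summary(fit)   # one set of coefficients per requested quantile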
|
Linear regression to predict both mean and SD of dependent variable
|
As you are interested in modeling percentiles, you should have a look at quantile regression methods. Instead of modeling conditional means (as in linear regression), quantile regression allows you to
|
Linear regression to predict both mean and SD of dependent variable
As you are interested in modeling percentiles, you should have a look at quantile regression methods. Instead of modeling conditional means (as in linear regression), quantile regression allows you to model (conditional) quantiles.
As mentioned in the comments, a good introduction to quantile regression is the vignette to the quantreg R package. One of the examples in the vignette illustrates your use case:
|
Linear regression to predict both mean and SD of dependent variable
As you are interested in modeling percentiles, you should have a look at quantile regression methods. Instead of modeling conditional means (as in linear regression), quantile regression allows you to
|
48,936 |
Linear regression to predict both mean and SD of dependent variable
|
In this case, it would make sense to try to predict the log of daily expenditure - this will probably be closer to linear and so easier to predict.
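Concretely, that just means fitting something like the following (all names hypothetical); note that back-transformed predictions exp(predict(fit)) are on the original scale only up to a retransformation bias:
fit <- lm(log(expenditure) ~ income + household_size, data = households)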
|
Linear regression to predict both mean and SD of dependent variable
|
In this case, it would make sense to try to predict the log of daily expenditure - this will probably be closer to linear and so easier to predict.
|
Linear regression to predict both mean and SD of dependent variable
In this case, it would make sense to try to predict the log of daily expenditure - this will probably be closer to linear and so easier to predict.
|
Linear regression to predict both mean and SD of dependent variable
In this case, it would make sense to try to predict the log of daily expenditure - this will probably be closer to linear and so easier to predict.
|
48,937 |
What is "natural" about the natural parameterization of an exponential family and the natural parameter space?
|
"Natural" is the qualification chosen for this particular class of exponential families, so one can accept it as a definition without seeking further reasons. (The alternative qualification "canonical" is also sometimes employed.)
If one looks at the generic production of exponential families, one starts$ ^\dagger$ with a reference measure corresponding to $h(\cdot)$ and a collection of $k$ statistics (aka functions) $t_i(\cdot)$. The ensuing exponential family is then created as
$$f(x|\mathbf{\eta}) \propto h(x)\,\exp\{\eta_1t_1(x)+\cdots+\eta_kt_k(x)\}\tag{1}$$
i.e., by considering all densities using a linear combination of the $t_i$'s (with the exponential ensuring the function is positive). Using a linear combination of the $t_i$'s may appear as the "natural" choice. The largest range of possible linear combinations is "naturally" made of all parameters $\mathbf{\eta}$ ensuring that (1) is integrable, i.e., defines a probability density.
In historical terms, the reverse happened, namely, standard distributions such as the Normal or the Poisson distributions were observed (by R.A. Fisher, I presume) to share this exponential structure, albeit possibly requiring a transform of the standard parameter $\mathbf{\theta}$ (such as the mean and variance parameters for the Normal distribution):
$$f(x|\mathbf{\theta}) \propto h(x)\,\exp\{\omega_1(\theta_1)t_1(x)+\cdots+\omega_k(\theta_k)t_k(x)\}\tag{2}$$
Since (2) can be expressed as (1) by a reparameterisation
$$\eta_1=\omega_1(\theta_1),\ldots,~\eta_k=\omega_k(\theta_k)$$
the parameterisation in $\eta_1,\ldots,\eta_k$ can be seen as "natural" in that it is sufficient (enough) to characterise the distribution (in the sense that the reparameterisation is not necessarily bijective). It is also the case that, for this parameterisation, the cumulants can be derived from the normalising constant.
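A concrete instance is the Poisson family: writing its mass function in the form of (1)-(2) shows that the natural parameter is the log of the usual mean parameter,
$$f(x|\lambda) = \frac{\lambda^x e^{-\lambda}}{x!} = \frac{1}{x!}\,\exp\{x\log\lambda - \lambda\},$$
so that $h(x)=1/x!$, $t(x)=x$ and $\eta=\log\lambda$, with natural parameter space the whole real line (every $\eta\in\mathbb{R}$ makes (1) integrable).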
$ ^\dagger$ To quote from Morris and Lock (2014), "Starting with a solitary member distribution of an NEF, all possible distributions within that NEF can be generated via five operations: using linear functions (translations and re-scalings), convolution and division (division being the inverse of convolution), and exponential generation..."
|
What is "natural" about the natural parameterization of an exponential family and the natural parame
|
"Natural" is the qualification chosen for this particular class of exponential families, so one can accept it as a definition without seeking further reasons. (The alternative qualification "canonical
|
What is "natural" about the natural parameterization of an exponential family and the natural parameter space?
"Natural" is the qualification chosen for this particular class of exponential families, so one can accept it as a definition without seeking further reasons. (The alternative qualification "canonical" is also sometimes employed.)
If one looks at the generic production of exponential families, one starts$ ^\dagger$ with a reference measure corresponding to $h(\cdot)$ and a collection of $k$ statistics (aka functions) $t_i(\cdot)$. The ensuing exponential family is then created as
$$f(x|\mathbf{\eta}) \propto h(x)\,\exp\{\eta_1t_1(x)+\cdots+\eta_kt_k(x)\}\tag{1}$$
i.e., by considering all densities using a linear combination of the $t_i$'s (with the exponential ensuring the function is positive). Using a linear combination of the $t_i$'s may appear as the "natural" choice. The largest range of possible linear combinations is "naturally" made of all parameters $\mathbf{\eta}$ ensuring that (1) is integrable, i.e., defines a probability density.
In historical terms, the reverse happened: standard distributions such as the Normal or the Poisson were observed (by R.A. Fisher, I presume) to share this exponential structure, albeit possibly requiring a transform of the standard parameter $\mathbf{\theta}$ (such as the mean-and-variance parameterisation of the Normal distribution):
$$f(x|\mathbf{\theta}) \propto h(x)\,\exp\{w_1(\theta_1)t_1(x)+\cdots+w_k(\theta_k)t_k(x)\}\tag{2}$$
Since (2) can be expressed as (1) by a reparameterisation
$$\eta_1=w_1(\theta_1),\ldots,~\eta_k=w_k(\theta_k)$$
the parameterisation in $\eta_1,\ldots,\eta_k$ can be seen as "natural" in that it is sufficient to characterise the distribution (in the sense that the reparameterisation is not necessarily bijective). It is also the case that, for this parameterisation, the cumulants can be derived from the normalising constant.
$ ^\dagger$ To quote from Morris and Lock (2014), "Starting with a solitary member distribution of an NEF, all possible distributions within that NEF can be generated via five operations: using linear functions (translations and re-scalings), convolution and division (division being the inverse of convolution), and exponential generation..."
|
What is "natural" about the natural parameterization of an exponential family and the natural parame
"Natural" is the qualification chosen for this particular class of exponential families, so one can accept it as a definition without seeking further reasons. (The alternative qualification "canonical
|
48,938 |
What is the difference between a non-zero nugget and a noise term in Kriging/GPR?
|
Random noise and nugget effect are indeed quite similar to some extent. The difference between the two appears
when there are repeated observations (i.e., several observations at the same location), and
when you compute the predicted value at an observation point.
The random noise model assumes that observations are corrupted by additive, IID Gaussian noise. Practically, this means that repeated observations at a single location produce different outcomes. The posterior mean of the GP is not equal in this case to the observed value (even if there is only one observation at a particular location). This is GP regression, with a (usually) smooth regression function.
The nugget model, on the other hand, assumes a deterministic observation model (repeated observations should lead to the same value) but a very rough underlying function. The posterior mean of the GP is equal in this case to the observed value at each observation point, but is discontinuous at these points. This is in fact a form of GP interpolation, with a discontinuous interpolant.
Remark: in the first case (random noise), the individual values of the repeated observations do not matter. The posterior distribution of the GP depends only on the number of observations at each location and on their average.
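A small numerical sketch of this distinction (my own toy example with assumed values; zero prior mean, a squared-exponential structure, and an extra variance tau2 read either as noise or as nugget):
# toy 1-D sketch: same Gram matrix, two readings of the extra variance tau2
x <- c(0, 1, 2); y <- c(1, 3, 2)
sill <- 1; ell <- 1; tau2 <- 0.5
k_struct <- function(a, b) sill * exp(-0.5 * outer(a, b, "-")^2 / ell^2)
K <- k_struct(x, x) + tau2 * diag(length(x))     # identical in both models
post_mean <- function(xstar, nugget_model) {
  k_star <- k_struct(xstar, x)
  # nugget model: the white component is part of the signal, so it also enters the
  # cross-covariance when xstar coincides with an observation location
  if (nugget_model) k_star <- k_star + tau2 * (outer(xstar, x, "-") == 0)
  drop(k_star %*% solve(K, y))
}
post_mean(1, nugget_model = FALSE)  # noise model: shrunk towards the zero prior mean, not 3
post_mean(1, nugget_model = TRUE)   # nugget model: reproduces the observation exactly (3)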
|
What is the difference between a non-zero nugget and a noise term in Kriging/GPR?
|
Random noise and nugget effect are indeed quite similar to some extent. The difference between the two appears
when there are repeated observations (i.e., several observations at the same location),
|
What is the difference between a non-zero nugget and a noise term in Kriging/GPR?
Random noise and nugget effect are indeed quite similar to some extent. The difference between the two appears
when there are repeated observations (i.e., several observations at the same location), and
when you compute the predicted value at an observation point.
The random noise model assumes that observations are corrupted by additive, IID Gaussian noise. Practically, this means that repeated observations at a single location produce different outcomes. The posterior mean of the GP is not equal in this case to the observed value (even if there is only one observation at a particular location). This is GP regression, with a (usually) smooth regression function.
The nugget model, on the other hand, assumes a deterministic observation model (repeated observations should lead to the same value) but a very rough underlying function. The posterior mean of the GP is equal in this case to the observed value at each observation point, but is discontinuous at these points. This is in fact a form of GP interpolation, with a discontinuous interpolant.
Remark: in the first case (random noise), the individual values of the repeated observations do not matter. The posterior distribution of the GP depends only on the number of observations at each location and on their average.
|
What is the difference between a non-zero nugget and a noise term in Kriging/GPR?
Random noise and nugget effect are indeed quite similar to some extent. The difference between the two appears
when there are repeated observations (i.e., several observations at the same location),
|
48,939 |
What is the frequentist interpretation of uncertainty vs. variability?
|
Short answer: Any uncertainty (confidence) we have surrounding an unknown population quantity is due to the sampling variability of the estimation or testing procedure that uses a limited sample size.
Full answer: To the frequentist, population-level quantities (typically denoted by greek characters) are fixed and unknown because we are unable to sample the entire population. If we could sample the entire population we would know the population-level quantity of interest. In practice we have a limited sample from the population, and the only thing one can objectively describe is the operating characteristics of an estimation and testing procedure. Understanding the long-run performance of the estimation and testing procedure is what gives the frequentist confidence in the conclusions drawn from a single experimental result. What the experimenter or anyone else subjectively believes before or after the experiment is irrelevant since this belief is not evidence of anything. Beliefs and opinions are not facts. If the frequentist has historical data ("prior knowledge") this can be incorporated in a meta-analysis through the likelihood and does not require the use of belief probabilities regarding parameters. If fixed population quantities are treated as random variables this can introduce bias in estimation and inference. I find confidence curves to be a particularly useful way to visualize frequentist inference, analogous to Bayesian posterior distributions.
|
What is the frequentist interpretation of uncertainty vs. variability?
|
Short answer: Any uncertainty (confidence) we have surrounding an unknown population quantity is due to the sampling variability of the estimation or testing procedure that uses a limited sample size.
|
What is the frequentist interpretation of uncertainty vs. variability?
Short answer: Any uncertainty (confidence) we have surrounding an unknown population quantity is due to the sampling variability of the estimation or testing procedure that uses a limited sample size.
Full answer: To the frequentist, population-level quantities (typically denoted by greek characters) are fixed and unknown because we are unable to sample the entire population. If we could sample the entire population we would know the population-level quantity of interest. In practice we have a limited sample from the population, and the only thing one can objectively describe is the operating characteristics of an estimation and testing procedure. Understanding the long-run performance of the estimation and testing procedure is what gives the frequentist confidence in the conclusions drawn from a single experimental result. What the experimenter or anyone else subjectively believes before or after the experiment is irrelevant since this belief is not evidence of anything. Beliefs and opinions are not facts. If the frequentist has historical data ("prior knowledge") this can be incorporated in a meta-analysis through the likelihood and does not require the use of belief probabilities regarding parameters. If fixed population quantities are treated as random variables this can introduce bias in estimation and inference. I find confidence curves to be a particularly useful way to visualize frequentist inference, analogous to Bayesian posterior distributions.
|
What is the frequentist interpretation of uncertainty vs. variability?
Short answer: Any uncertainty (confidence) we have surrounding an unknown population quantity is due to the sampling variability of the estimation or testing procedure that uses a limited sample size.
|
48,940 |
What is the frequentist interpretation of uncertainty vs. variability?
|
Uncertainty means we do not know the value (or outcome) of some quantity, eg the average porosity of a specific reservoir (or the porosity of a core-sized piece of rock at some point within the reservoir). Variability refers to the multiple values a quantity has at different locations, times or instances – eg the average porosities of a collection of different reservoirs (or the range of core-plugs porosities at different locations within a specific reservoir).
I think the uncertainty amounts to the epistemic uncertainty (or model uncertainty; caused by parameter uncertainty, and reducible by providing more data), and the variability to the aleatoric uncertainty (or data uncertainty; inherent in the data, and not reducible by providing more data). Their difference is illustrated in this answer.
Frequentist models can only capture the aleatoric uncertainty, while Bayesian models capture both; hence a frequentist interpretation of the (epistemic) uncertainty would be of no meaning.
The aleatoric uncertainty is expressed in the distribution across the classes, which is zero if one class gets a probability of one. The epistemic uncertainty is expressed in the spread of the predicted probabilities of one class, which is zero if the spread is zero. The non-Bayesian NN can’t express epistemic uncertainty (you can’t get different predictions for the same image), but the BNN can.
The above is a quote from this book: Probabilistic Deep Learning: with Python, Keras and Tensorflow Probability
And how is it quantified?
The aleatoric uncertainty can be measured using entropy, and the epistemic uncertainty can also be quantified, as illustrated in section 8.5.2 of this book: Probabilistic Deep Learning: with Python, Keras and Tensorflow Probability.
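As a small sketch of the entropy measure (my own example, not taken from the book): the entropy of a predicted class distribution is one simple summary of aleatoric uncertainty.
# entropy of a predictive class distribution (natural log)
entropy <- function(p) -sum(p[p > 0] * log(p[p > 0]))
entropy(c(1, 0, 0))        # 0: the prediction is certain
entropy(c(1/3, 1/3, 1/3))  # log(3): maximal uncertainty over three classes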
|
What is the frequentist interpretation of uncertainty vs. variability?
|
Uncertainty means we do not know the value (or outcome) of some quantity, eg the average porosity of a specific reservoir (or the porosity of a core-sized piece of rock at some point within the reserv
|
What is the frequentist interpretation of uncertainty vs. variability?
Uncertainty means we do not know the value (or outcome) of some quantity, eg the average porosity of a specific reservoir (or the porosity of a core-sized piece of rock at some point within the reservoir). Variability refers to the multiple values a quantity has at different locations, times or instances – eg the average porosities of a collection of different reservoirs (or the range of core-plugs porosities at different locations within a specific reservoir).
I think the uncertainty amounts to the epistemic uncertainty (or model uncertainty; caused by parameter uncertainty, and reducible by providing more data), and the variability to the aleatoric uncertainty (or data uncertainty; inherent in the data, and not reducible by providing more data). Their difference is illustrated in this answer.
Frequentist models can only capture the aleatoric uncertainty, while Bayesian models capture both; hence a frequentist interpretation of the (epistemic) uncertainty would be of no meaning.
The aleatoric uncertainty is expressed in the distribution across the classes, which is zero if one class gets a probability of one. The epistemic uncertainty is expressed in the spread of the predicted probabilities of one class, which is zero if the spread is zero. The non-Bayesian NN can’t express epistemic uncertainty (you can’t get different predictions for the same image), but the BNN can.
The above is a quote from this book: Probabilistic Deep Learning: with Python, Keras and Tensorflow Probability
And how is it quantified?
The aleatoric uncertainty can be measured using entropy, and the epistemic uncertainty can also be quantified, as illustrated in section 8.5.2 of this book: Probabilistic Deep Learning: with Python, Keras and Tensorflow Probability.
|
What is the frequentist interpretation of uncertainty vs. variability?
Uncertainty means we do not know the value (or outcome) of some quantity, eg the average porosity of a specific reservoir (or the porosity of a core-sized piece of rock at some point within the reserv
|
48,941 |
Is a pair of threshold-specific points on two ROC curves sufficient to rank classifiers by expected loss?
|
I think the answer is no as the expected loss depends on $P(Y = 1)$, and this information isn't given by a ROC curve.
Let's say you have a binary random variable $Y$ with $p = P(Y = 1)$, and denote by $\hat{Y}_t$ a classifier depending on a threshold (or more generally a parameter) $t$.
The expected loss of classifier $\hat{Y}_t$ is
$$
\begin{array}{ccl}
L(\hat{Y}_t) &= &a P(\hat{Y}_t = 0 \cap Y = 1) + b P(\hat{Y}_t = 1 \cap Y = 0)\\
& = & a \cdot p\cdot P(\hat{Y}_t = 0 \mid Y = 1) + b \cdot (1-p)\cdot P(\hat{Y}_t = 1 \mid Y = 0).
\end{array}$$
The ROC curve only gives you the conditional probabilities $P(\hat{Y}_t = 1 \mid Y = 0)$ and $P(\hat{Y}_t = 0 \mid Y = 1)$ as a function of $t$, but they don't give you $p$.
Consider for example two (very bad) classifiers $\hat{Y}_t$ and $\hat{Z}_t$ with $$
\begin{array}{ccl|ccl}
P(\hat{Y}_t = 1 \mid Y = 0) &=& 1/2 & P(\hat{Z}_t = 1 \mid Y = 0) & = & t\\
P(\hat{Y}_t = 0 \mid Y = 1) &=& 1 - t & P(\hat{Z}_t = 0 \mid Y = 1) & = & 1/2
\end{array}
$$
giving the following ROC curves.
The expected loss of $\hat{Y}_t$ is
$$L(\hat{Y}_t) = \frac{1}{2}(1-p)b + (1-t)a p$$
and the expected loss of $\hat{Z}_t$ is
$$L(\hat{Z}_t) = t b (1-p) + \frac{1}{2} a p.$$ There is no way to tell which is the best without knowing $p$...
However, if a ROC curve dominates another, meaning it is above it all the time, then you know that whatever the probability $p$ and whatever the losses $a$ and $b$, the dominating classifier will have lower expected loss than the other (it follows directly from the expression of the expected loss).
Indeed, if the ROC curve of $\hat{Y}_t$ is above the ROC curve of $\hat{Z}_t$, then for each $t$ the ROC point of $\hat{Y}_t$ is either to the left of or above (or both) the ROC point of $\hat{Z}_t$; this implies that $$P(\hat{Y}_t = 1 \mid Y = 0) \leq P(\hat{Z}_t = 1 \mid Y = 0)$$ and $$P(\hat{Y}_t = 1 \mid Y = 1) \geq P(\hat{Z}_t = 1 \mid Y = 1), $$ and thus
$$P(\hat{Y}_t = 0 \mid Y = 1) \leq P(\hat{Z}_t = 0 \mid Y = 1).$$
Then, for any $a \geq 0$, $b\geq 0$ and $0\leq p \leq 1$,
$$\begin{array}{ccl}
L(\hat{Y}_t) & = & P(\hat{Y}_t = 0 \mid Y = 1) \cdot p \cdot a + P(\hat{Y}_t = 1 \mid Y = 0) \cdot (1-p) \cdot b \\
& \leq &P(\hat{Z}_t = 0 \mid Y = 1) \cdot p \cdot a + P(\hat{Z}_t = 1 \mid Y = 0) \cdot (1-p) \cdot b \\
& = & L(\hat{Z}_t)
\end{array}.
$$
I hope this helps.
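A quick numerical sketch (my own illustration with assumed numbers): for two threshold-specific ROC points, neither of which dominates the other, the ranking by expected loss flips with the prevalence $p$.
# expected loss from a single ROC point (fpr, tpr), with assumed costs a, b and prevalence p
exp_loss <- function(fpr, tpr, p, a = 1, b = 1) a * p * (1 - tpr) + b * (1 - p) * fpr
exp_loss(0.2, 0.6, p = 0.1); exp_loss(0.4, 0.8, p = 0.1)  # the first point wins when positives are rare
exp_loss(0.2, 0.6, p = 0.9); exp_loss(0.4, 0.8, p = 0.9)  # the second point wins when positives are common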
|
Is a pair of threshold-specific points on two ROC curves sufficient to rank classifiers by expected
|
I think the answer is no as the expected loss depends on $P(Y = 1)$, and this information isn't given by a ROC curve.
Let's say you have a binary random variable $Y$ with $p = P(Y = 1)$, and note $\ha
|
Is a pair of threshold-specific points on two ROC curves sufficient to rank classifiers by expected loss?
I think the answer is no as the expected loss depends on $P(Y = 1)$, and this information isn't given by a ROC curve.
Let's say you have a binary random variable $Y$ with $p = P(Y = 1)$, and denote by $\hat{Y}_t$ a classifier depending on a threshold (or more generally a parameter) $t$.
The expected loss of classifier $\hat{Y}_t$ is
$$
\begin{array}{ccl}
L(\hat{Y}_t) &= &a P(\hat{Y}_t = 0 \cap Y = 1) + b P(\hat{Y}_t = 1 \cap Y = 0)\\
& = & a \cdot p\cdot P(\hat{Y}_t = 0 \mid Y = 1) + b \cdot (1-p)\cdot P(\hat{Y}_t = 1 \mid Y = 0).
\end{array}$$
The ROC curve only gives you the conditional probabilities $P(\hat{Y}_t = 1 \mid Y = 0)$ and $P(\hat{Y}_t = 0 \mid Y = 1)$ as a function of $t$, but they don't give you $p$.
Consider for example two (very bad) classifiers $\hat{Y}_t$ and $\hat{Z}_t$ with $$
\begin{array}{ccl|ccl}
P(\hat{Y}_t = 1 \mid Y = 0) &=& 1/2 & P(\hat{Z}_t = 1 \mid Y = 0) & = & t\\
P(\hat{Y}_t = 0 \mid Y = 1) &=& 1 - t & P(\hat{Z}_t = 0 \mid Y = 1) & = & 1/2
\end{array}
$$
giving the following ROC curves.
The expected loss of $\hat{Y}_t$ is
$$L(\hat{Y}_t) = \frac{1}{2}(1-p)b + (1-t)a p$$
and the expected loss of $\hat{Z}_t$ is
$$L(\hat{Z}_t) = t b (1-p) + \frac{1}{2} a p.$$ There is no way to tell which is the best without knowing $p$...
However, if a ROC curve dominates another, meaning it is above it all the time, then you know that whatever the probability $p$ and whatever the losses $a$ and $b$, the dominating classifier will have lower expected loss than the other (it follows directly from the expression of the expected loss).
Indeed, if the ROC curve of $\hat{Y}_t$ is above the ROC curve of $\hat{Z}_t$, then for each $t$ the ROC point of $\hat{Y}_t$ is either to the left of or above (or both) the ROC point of $\hat{Z}_t$; this implies that $$P(\hat{Y}_t = 1 \mid Y = 0) \leq P(\hat{Z}_t = 1 \mid Y = 0)$$ and $$P(\hat{Y}_t = 1 \mid Y = 1) \geq P(\hat{Z}_t = 1 \mid Y = 1), $$ and thus
$$P(\hat{Y}_t = 0 \mid Y = 1) \leq P(\hat{Z}_t = 0 \mid Y = 1).$$
Then, for any $a \geq 0$, $b\geq 0$ and $0\leq p \leq 1$,
$$\begin{array}{ccl}
L(\hat{Y}_t) & = & P(\hat{Y}_t = 0 \mid Y = 1) \cdot p \cdot a + P(\hat{Y}_t = 1 \mid Y = 0) \cdot (1-p) \cdot b \\
& \leq &P(\hat{Z}_t = 0 \mid Y = 1) \cdot p \cdot a + P(\hat{Z}_t = 1 \mid Y = 0) \cdot (1-p) \cdot b \\
& = & L(\hat{Z}_t)
\end{array}.
$$
I hope this helps.
|
Is a pair of threshold-specific points on two ROC curves sufficient to rank classifiers by expected
I think the answer is no as the expected loss depends on $P(Y = 1)$, and this information isn't given by a ROC curve.
Let's say you have a binary random variable $Y$ with $p = P(Y = 1)$, and note $\ha
|
48,942 |
Is a pair of threshold-specific points on two ROC curves sufficient to rank classifiers by expected loss?
|
Given a threshold* $t$, model 1 has lower estimated expected loss than model 2 if the corresponding ROC point of model 1 dominates** the ROC point of model 2. Here is why.
Let the confusion matrix corresponding to a particular threshold $t$ be
$$
\text{Conf}_t=\begin{pmatrix} j_t & k_t\\ l_t & m_t \end{pmatrix}
$$
with predicted classes in rows (row 1 ~ class 0, row 2 ~ class 1) and actual classes in columns (column 1 ~ class 0, column 2 ~ class 1). Concretely,
$$
\text{Conf}_t=\begin{pmatrix} \#\{{\hat Y=0 \cap Y=0\}}_t & \#\{{\hat Y=0\cap Y=1\}}_t \\ \#\{{\hat Y=1\cap Y=0\}}_t & \#\{{\hat Y=1\cap Y=1\}}_t \end{pmatrix}
$$
with $\#$ counting the number of elements that satisfy the condition. We will later add a subscript 1 for model 1 and 2 for model 2.
For any given sample, the number of actual zeros $j_t+l_t$ and the number of actual unities $k_t+m_t$ are fixed at $r$ and $s$, respectively:
\begin{aligned}
j_t+l_t &= r \quad \text{and} \\
k_t+m_t &= s.
\end{aligned}
We will make use of the latter equality in a subsequent step. Let us also define the sample size
$$
n:=j_t+k_t+l_t+m_t.
$$
The estimated expected loss of a model is
\begin{aligned}
\hat{\mathbb{E}}(L) &= \frac{1}{n}\big[ak_{t}+bl_{t}\big] \\
&= \frac{1}{n}\big[a(s-m_{t})+bl_{t}\big].
\end{aligned}
Explicitly, the estimated expected losses of models 1 and 2 are
\begin{aligned}
\hat{\mathbb{E}}(L_1) &= \frac{1}{n}\big[a(s-m_{1t})+bl_{1t}\big] \quad \text{and} \\
\hat{\mathbb{E}}(L_2) &= \frac{1}{n}\big[a(s-m_{2t})+bl_{2t}\big].
\end{aligned}
The ROC points (specific to the threshold $t$) of models 1 and 2 have coordinates $(l_{1t},m_{1t})$ and $(l_{2t},m_{2t})$, respectively. If the former point dominates the latter point, we have $l_{1t}\leq l_{2t}$ and $m_{1t}\geq m_{2t}$ and at least one of the two inequalities is strict.
What does this imply regarding $\hat{\mathbb{E}}(L_1)$ vs. $\hat{\mathbb{E}}(L_2)$? Since $a,b>0$ and $s-m_{1t},s-m_{2t}\geq0$, then looking at the formulas above we immediately see that $\hat{\mathbb{E}}(L_1)<\hat{\mathbb{E}}(L_2)$. Thus if the ROC point of model 1 dominates the ROC point of model 2, model 1 has lower estimated expected loss than model 2.
*The relevant threshold would be the optimal one.
**{ Is above and to the left } OR { is above and not to the right } OR { is to the left and not below }.
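A small numerical check of the dominance argument (my own example with made-up counts and assumed costs $a=2$, $b=1$): two confusion matrices with the same column totals, where model 1's ROC point dominates model 2's, and model 1 indeed has the lower estimated expected loss.
# estimated expected loss (a*k + b*l)/n from a confusion matrix laid out as in the text:
# rows = predicted (0, 1), columns = actual (0, 1)
est_loss <- function(conf, a = 2, b = 1) (a * conf[1, 2] + b * conf[2, 1]) / sum(conf)
conf1 <- matrix(c(90, 5, 10, 95), 2, 2)   # model 1: l = 5 false positives, k = 10 false negatives
conf2 <- matrix(c(85, 10, 20, 85), 2, 2)  # model 2: same column totals, dominated ROC point
est_loss(conf1); est_loss(conf2)          # 0.125 vs 0.25: model 1 has lower estimated loss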
|
Is a pair of threshold-specific points on two ROC curves sufficient to rank classifiers by expected
|
Given a threshold* $t$, model 1 has lower estimated expected loss than model 2 if the corresponding ROC point of model 1 dominates** the ROC point of model 2. Here is why.
Let the confusion matrix cor
|
Is a pair of threshold-specific points on two ROC curves sufficient to rank classifiers by expected loss?
Given a threshold* $t$, model 1 has lower estimated expected loss than model 2 if the corresponding ROC point of model 1 dominates** the ROC point of model 2. Here is why.
Let the confusion matrix corresponding to a particular threshold $t$ be
$$
\text{Conf}_t=\begin{pmatrix} j_t & k_t\\ l_t & m_t \end{pmatrix}
$$
with predicted classes in rows (row 1 ~ class 0, row 2 ~ class 1) and actual classes in columns (column 1 ~ class 0, column 2 ~ class 1). Concretely,
$$
\text{Conf}_t=\begin{pmatrix} \#\{{\hat Y=0 \cap Y=0\}}_t & \#\{{\hat Y=0\cap Y=1\}}_t \\ \#\{{\hat Y=1\cap Y=0\}}_t & \#\{{\hat Y=1\cap Y=1\}}_t \end{pmatrix}
$$
with $\#$ counting the number of elements that satisfy the condition. We will later add a subscript 1 for model 1 and 2 for model 2.
For any given sample, the number of actual zeros $j_t+l_t$ and the number of actual unities $k_t+m_t$ are fixed at $r$ and $s$, respectively:
\begin{aligned}
j_t+l_t &= r \quad \text{and} \\
k_t+m_t &= s.
\end{aligned}
We will make use of the latter equality in a subsequent step. Let us also define the sample size
$$
n:=j_t+k_t+l_t+m_t.
$$
The estimated expected loss of a model is
\begin{aligned}
\hat{\mathbb{E}}(L) &= \frac{1}{n}\big[ak_{t}+bl_{t}\big] \\
&= \frac{1}{n}\big[a(s-m_{t})+bl_{t}\big].
\end{aligned}
Explicitly, the estimated expected losses of models 1 and 2 are
\begin{aligned}
\hat{\mathbb{E}}(L_1) &= \frac{1}{n}\big[a(s-m_{1t})+bl_{1t}\big] \quad \text{and} \\
\hat{\mathbb{E}}(L_2) &= \frac{1}{n}\big[a(s-m_{2t})+bl_{2t}\big].
\end{aligned}
The ROC points (specific to the threshold $t$) of models 1 and 2 have coordinates $(l_{1t},m_{1t})$ and $(l_{2t},m_{2t})$, respectively. If the former point dominates the latter point, we have $l_{1t}\leq l_{2t}$ and $m_{1t}\geq m_{2t}$ and at least one of the two inequalities is strict.
What does this imply regarding $\hat{\mathbb{E}}(L_1)$ vs. $\hat{\mathbb{E}}(L_2)$? Since $a,b>0$ and $s-m_{1t},s-m_{2t}\geq0$, then looking at the formulas above we immediately see that $\hat{\mathbb{E}}(L_1)<\hat{\mathbb{E}}(L_2)$. Thus if the ROC point of model 1 dominates the ROC point of model 2, model 1 has lower estimated expected loss than model 2.
*The relevant threshold would be the optimal one.
**{ Is above and to the left } OR { is above and not to the right } OR { is to the left and not below }.
|
Is a pair of threshold-specific points on two ROC curves sufficient to rank classifiers by expected
Given a threshold* $t$, model 1 has lower estimated expected loss than model 2 if the corresponding ROC point of model 1 dominates** the ROC point of model 2. Here is why.
Let the confusion matrix cor
|
48,943 |
Is there an intuition to the mean of a Gumbel distribution being the Euler constant vis-à-vis the modeling of extreme events?
|
The Euler-Mascheroni constant $\gamma$ is, like many other constants ($e$, $\pi$, and $\varphi$), used in a simple pattern. That is why it shows up in many places, and also in different fields that do not seem to be connected at first glance.
Below we give two 'reasons' behind the occurrence of $\gamma$ in the expression for the mean of the Gumbel distribution. (sidenote: The word 'reason' might be a bit strong. What we do is dive a bit deeper into the Gumbel distribution and see how it relates to other constructions that relate to the constant.)
We look at two expressions of the Euler-Mascheroni constant
The derivative of the gamma function (which appears in the moment generating function of the Gumbel distribution) brings in a $\gamma$: since $\Gamma^{\prime}(1)=-\gamma$, at the point $t=0$ we have $$\frac{d}{dt}\,\Gamma(1-\beta t) \underset{t=0}{=} -\beta\,\Gamma^{\prime}(1) = \gamma \beta$$
The difference between the n-th harmonic number, $H_n = \sum_{k=1}^n \frac{1}{k}$, and the natural logarithm of $n$ approaches the Euler-Mascheroni constant. $$\lim_{n \to \infty} H_n - \log(n) = \gamma$$
1 Derivative of the gamma function
In this section we are making a connection by computing the mean of the Gumbel distribution based on the moment generating function (MGF).
Background: the derivative of the moment generating function and its relationship to the mean
If you know the MGF, the integral below (which is also considered as the bilateral Laplace transform with a negative argument, $\mathcal{B}(-t)$)
$$M(t) = \mathcal{B}(-t) = \int_{-\infty}^{\infty} e^{tx} f(x) \,dx$$
then you can compute
$$\mu = \int_{-\infty}^{\infty} x f(x) \,dx$$
by taking the derivative of the $M(t)$. You can see the equivalence by differentiation under the integral sign, and for $t=0$ this equals the integral expression for the mean
$$\begin{array}{rcl} M^\prime(t) &=& \int_{-\infty}^{\infty} x \underbrace{e^{tx}}_{\text{$e^{tx}= 1$ if $t=0$}} f(x) \,dx \\ \\ M^\prime(0) &=& \int_{-\infty}^{\infty} x f(x) \,dx = \mu \end{array}$$
The moment generating function of the Gumbel distribution
The moment generating function of the Gumbel distribution is
$$M(t) = \Gamma(1-\beta t) e^{\mu t}$$
so this $\Gamma(1-\beta t)$ whose derivative when $t=0$ is equal to $\gamma \beta$ is 'giving' you this constant $\gamma$ in the expression for the mean of a Gumbel distributed variable.
The connection
The connection made in this section might be a bit weak and could be considered circular reasoning. This is effectively just like the pure mechanistic integration that you mentioned, but done indirectly by converting to the Laplace domain (the MGF) and performing the integration there. But why is the MGF a gamma function?
We wrote in the first sentence of this post that there are simple patterns behind the well-known constants. Behind the Gumbel distribution, and the extreme value distributions in general, there is also a simple pattern. It is described by Fisher and Tippett as follows:
Since the extreme member of a sample of $mn$ may be regarded
as the extreme member of a sample of $n$ of the extreme members
of samples of $m$, and since, if a limiting form exist, both of these distributions will tend to the limiting form as $m$ is increased indefinitely, it follows that the limiting distribution must be such that the extreme member of a sample of $n$ from such a distribution has itself a similar distribution.
So the extreme value distribution under taking the maximum and translating and scaling, is again an extreme value distribution of the same distribution family.
The Gumbel distribution is the special case when there is no scaling. If you take the maximum of a sample of Gumbel distributed variables, then you have again a Gumbel distribution that is translated by some value $b_n$.
(Compare this with other types of operations, like scale location family, stable distributions, or inversion. Distributions from a family that are, after some operation, still a member of the same family.)
The Gumbel satisfies the following equation:
$$ F(x)^n = F(x+b_n)$$
Let's check this by filling in the function for the Gumbel distribution
$$\begin{array}{rcl} F(x)^n &=& \left(e^{-e^{-(x-\mu)/\beta}} \right)^n \\
&=& e^{-ne^{-(x-\mu)/\beta}}\\
& =& e^{-e^{\log(n)-(x-\mu)/\beta}}\\
& =& e^{-e^{-(x-\mu-\beta \log(n))/\beta}}\\
& =& e^{-e^{-(x-\mu^\prime)/\beta}}
\end{array} $$
Thus, the operation of taking the maximum of a sample of size $n$ effectively shifts the location of the distribution to $\mu^\prime = \mu + \log(n)\beta$.
For the gamma function, we also have a simple pattern. It has the property
$$\Gamma(z)z = \Gamma(z+1)$$
If you multiply the function by its argument, then this is like shifting the argument by 1 step.
It is in these two simple patterns that we can see why the MGF of the Gumbel distribution is a gamma function. Let's repeat the two simple patterns:
The Gumbel distribution: The cumulative distribution function (CDF) follows the relation $$F(x)^n = F(x+b_n)$$
The gamma function: The function follows the relation $$\Gamma(z)z = \Gamma(z+1)$$
My intuition tells me that we can connect these two simple patterns by means of a relationship for the moment generating function. We might do this in such a way that we do not need to explicitly compute the integral, and we could reason that the moment generating function must have the properties of the gamma function without evaluating the integral, but just by rewriting the integral. However, I do not yet get far in showing that there must be such a connection.
Moment generating function of the maximum of a sample
Work in progress
2 Difference between the n-th harmonic number and the natural logarithm of $n$
One direct connection between the Euler-Mascheroni constant and the mean of the Gumbel distribution is the following:
Let $X_k \sim \text{Exp}(1/k)$ (exponential with mean $1/k$); then, as $n \to \infty$,
$$-\log(n) + \sum_{k=1}^n X_k \to \text{Gumbel}(\mu =0,\beta = 1)$$
For an explanation for this connection as a sum see on math.stackexchange the question "Proof that the limit of a series of exponential distributed random variables follows a Gumbel distribution"
From this point of view, as the limit for a sum of exponential distributions, the mean of the Gumbel distribution can be expressed as the limit of the sum of the means of these exponential distributions, which is the limit of $H_n - \log(n) = \gamma$
$$\lim_{n \to \infty} E\left[ -\log(n) + \sum_{k=1}^n X_k \right] = \lim_{n \to \infty} -\log(n) + \sum_{k=1}^n \frac{1}{k} = \gamma$$
The coupon collector's problem illustrates intuitively this connection between the Gumbel distribution and a sum of variables. (Intuition about the coupon collector problem approaching a Gumbel distribution)
One way to view the extreme events as a sum could be to see the distribution of the extreme over a period of $k$ days and the extreme over a period of $k-1$ days, with the former being the latter plus an extra little bit. In the case of exponential distributions for the daily values, the extra little bit is also exponentially distributed and has a mean that scales as $1/k$ (the extra little bits decrease in size).
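A short simulation check of both points (my own sketch; the sample sizes are arbitrary): the mean of a standard Gumbel and the mean of the centred exponential sum both come out near $\gamma \approx 0.5772$.
set.seed(1)
g <- -log(-log(runif(1e6)))                 # standard Gumbel via the inverse CDF
mean(g)                                     # ~ 0.577, the Euler-Mascheroni constant
k <- 1000
s <- replicate(2000, -log(k) + sum(rexp(k, rate = 1:k)))  # X_j exponential with mean 1/j
mean(s)                                     # ~ 0.577 as well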
|
Is there an intuition to the mean of a Gumbel distribution being the Euler constant vis-à-vis the mo
|
The Euler-Mascheroni constant $\gamma$ is like many other constants ($e$, $\pi$, and , $\varphi$) used in a simple pattern. That is why it shows up in many places and also in different fields that do
|
Is there an intuition to the mean of a Gumbel distribution being the Euler constant vis-à-vis the modeling of extreme events?
The Euler-Mascheroni constant $\gamma$ is, like many other constants ($e$, $\pi$, and $\varphi$), used in a simple pattern. That is why it shows up in many places, and also in different fields that do not seem to be connected at first glance.
Below we give two 'reasons' behind the occurrence of $\gamma$ in the expression for the mean of the Gumbel distribution. (sidenote: The word 'reason' might be a bit strong. What we do is dive a bit deeper into the Gumbel distribution and see how it relates to other constructions that relate to the constant.)
We look at two expressions of the Euler-Mascheroni constant
The derivative of the gamma function (which appears in the moment generating function of the Gumbel distribution) brings in a $\gamma$: since $\Gamma^{\prime}(1)=-\gamma$, at the point $t=0$ we have $$\frac{d}{dt}\,\Gamma(1-\beta t) \underset{t=0}{=} -\beta\,\Gamma^{\prime}(1) = \gamma \beta$$
The difference between the n-th harmonic number, $H_n = \sum_{k=1}^n \frac{1}{k}$, and the natural logarithm of $n$ approaches the Euler-Mascheroni constant. $$\lim_{n \to \infty} H_n - \log(n) = \gamma$$
1 Derivative of the gamma function
In this section we are making a connection by computing the mean of the Gumbel distribution based on the moment generating function (MGF).
Background: the derivative of the moment generating function and its relationship to the mean
If you know the MGF, the integral below (which is also considered as the bilateral Laplace transform with a negative argument, $\mathcal{B}(-t)$)
$$M(t) = \mathcal{B}(-t) = \int_{-\infty}^{\infty} e^{tx} f(x) \,dx$$
then you can compute
$$\mu = \int_{-\infty}^{\infty} x f(x) \,dx$$
by taking the derivative of the $M(t)$. You can see the equivalence by differentiation under the integral sign, and for $t=0$ this equals the integral expression for the mean
$$\begin{array}{rcl} M^\prime(t) &=& \int_{-\infty}^{\infty} x \underbrace{e^{tx}}_{\text{$e^{tx}= 1$ if $t=0$}} f(x) \,dx \\ \\ M^\prime(0) &=& \int_{-\infty}^{\infty} x f(x) \,dx = \mu \end{array}$$
The moment generating function of the Gumbel distribution
The moment generating function of the Gumbel distribution is
$$M(t) = \Gamma(1-\beta t) e^{\mu t}$$
so this $\Gamma(1-\beta t)$ whose derivative when $t=0$ is equal to $\gamma \beta$ is 'giving' you this constant $\gamma$ in the expression for the mean of a Gumbel distributed variable.
The connection
The connection made in this section might be a bit weak and could be considered circular reasoning. This is effectively just like the pure mechanistic integration that you mentioned, but done indirectly by converting to the Laplace domain (the MGF) and performing the integration there. But why is the MGF a gamma function?
We wrote in the first sentence of this post that there are simple patterns behind the well-known constants. Behind the Gumbel distribution, and the extreme value distributions in general, there is also a simple pattern. It is described by Fisher and Tippett as follows:
Since the extreme member of a sample of $mn$ may be regarded
as the extreme member of a sample of $n$ of the extreme members
of samples of $m$, and since, if a limiting form exist, both of these distributions will tend to the limiting form as $m$ is increased indefinitely, it follows that the limiting distribution must be such that the extreme member of a sample of $n$ from such a distribution has itself a similar distribution.
So the extreme value distribution under taking the maximum and translating and scaling, is again an extreme value distribution of the same distribution family.
The Gumbel distribution is the special case when there is no scaling. If you take the maximum of a sample of Gumbel distributed variables, then you have again a Gumbel distribution that is translated by some value $b_n$.
(Compare this with other types of operations, like scale location family, stable distributions, or inversion. Distributions from a family that are, after some operation, still a member of the same family.)
The Gumbel satisfies the following equation:
$$ F(x)^n = F(x+b_n)$$
Let's check this by filling in the function for the Gumbel distribution
$$\begin{array}{rcl} F(x)^n &=& \left(e^{-e^{-(x-\mu)/\beta}} \right)^n \\
&=& e^{-ne^{-(x-\mu)/\beta}}\\
& =& e^{-e^{\log(n)-(x-\mu)/\beta}}\\
& =& e^{-e^{-(x-\mu-\beta \log(n))/\beta}}\\
& =& e^{-e^{-(x-\mu^\prime)/\beta}}
\end{array} $$
Thus, the operation of taking the maximum of a sample of size $n$ effectively shifts the location of the distribution to $\mu^\prime = \mu + \log(n)\beta$.
For the gamma function, we also have a simple pattern. It has the property
$$\Gamma(z)z = \Gamma(z+1)$$
If you multiply the function by its argument, then this is like shifting the argument by 1 step.
It is in these two simple patterns that we can see why the MGF of the Gumbel distribution is a gamma function. Let's repeat the two simple patterns:
The Gumbel distribution: The cumulative distribution function (CDF) follows the relation $$F(x)^n = F(x+b_n)$$
The gamma function: The function follows the relation $$\Gamma(z)z = \Gamma(z+1)$$
My intuition tells me that we can connect these two simple patterns by means of a relationship for the moment generating function. We might do this in such a way that we do not need to explicitly compute the integral, and we could reason that the moment generating function must have the properties of the gamma function without evaluating the integral, but just by rewriting the integral. However, I do not yet get far in showing that there must be such a connection.
Moment generating function of the maximum of a sample
Work in progress
2 Difference between the n-th harmonic number and the natural logarithm of $n$
One direct connection between the Euler-Mascheroni constant and the mean of the Gumbel distribution is the following:
Let $X_k \sim \text{Exp}(1/k)$ (exponential with mean $1/k$); then, as $n \to \infty$,
$$-\log(n) + \sum_{k=1}^n X_k \to \text{Gumbel}(\mu =0,\beta = 1)$$
For an explanation for this connection as a sum see on math.stackexchange the question "Proof that the limit of a series of exponential distributed random variables follows a Gumbel distribution"
From this point of view, as the limit for a sum of exponential distributions, the mean of the Gumbel distribution can be expressed as the limit of the sum of the means of these exponential distributions, which is the limit of $H_n - \log(n) = \gamma$
$$\lim_{n \to \infty} E\left[ -\log(n) + \sum_{k=1}^n X_k \right] = \lim_{n \to \infty} -\log(n) + \sum_{k=1}^n \frac{1}{k} = \gamma$$
The coupon collector's problem illustrates intuitively this connection between the Gumbel distribution and a sum of variables. (Intuition about the coupon collector problem approaching a Gumbel distribution)
One way to view the extreme events as a sum could be to see the distribution of the extreme over a period of $k$ days and the extreme over a period of $k-1$ days, with the former being the latter plus an extra little bit. In the case of exponential distributions for the daily values, the extra little bit is also exponentially distributed and has a mean that scales as $1/k$ (the extra little bits decrease in size).
|
Is there an intuition to the mean of a Gumbel distribution being the Euler constant vis-à-vis the mo
The Euler-Mascheroni constant $\gamma$ is like many other constants ($e$, $\pi$, and , $\varphi$) used in a simple pattern. That is why it shows up in many places and also in different fields that do
|
48,944 |
When to use dot-product as a similarity metric
|
Euclidean distance (the norm of the difference) and the dot product are closely related: they are not equal, but after normalization they carry essentially the same information.
After normalizing $a$ and $b$ such that $\|a\| = 1$ and $\|b\| = 1$,
these three measures are related as:
Euclidean distance = $\| a - b \| = \sqrt{\| a \|^2 + \|b\|^2 - 2
a^Tb} =\sqrt{2 - 2 \cos(\theta_{ab})}$
Dot product = $\|a\|\|b\| \cos(\theta_{ab}) = 1 \cdot 1 \cdot
\cos(\theta_{ab}) = \cos(\theta_{ab}) $
Cosine = $\cos(\theta_{ab})$
Thus, all three similarity measures are equivalent in the sense that they are monotone functions of $\cos(\theta_{ab})$, and so they order pairs of normalized vectors in the same way.
This is also discussed here and in the Vector space model: cosine similarity vs euclidean distance thread and Wikipedia. There was even an empirical evaluation by Qian et al (2004) concluding that
Through our theoretical analysis and experimental results, we conclude
that EUD and CAD are similar when applied to high dimensional NN
queries. For normalized data and clustered data, EUD and CAD becomes
even more similar.
Both metrics are similar and there are no strong reasons to prefer one over another in general.
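A tiny numerical check (my own sketch with arbitrary random vectors): after normalization, the dot product equals the cosine, and the Euclidean distance is the corresponding monotone transform $\sqrt{2 - 2\cos(\theta_{ab})}$.
set.seed(42)
a <- rnorm(5); a <- a / sqrt(sum(a^2))     # unit-norm vectors
b <- rnorm(5); b <- b / sqrt(sum(b^2))
dot <- sum(a * b)                          # dot product = cosine on the unit sphere
euclid <- sqrt(sum((a - b)^2))
c(dot, euclid, sqrt(2 - 2 * dot))          # euclid matches sqrt(2 - 2*dot)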
|
When to use dot-product as a similarity metric
|
Euclidean distance (norm of difference) and dot predict are proportional to each other, while they are not equal, but roughly the same.
After normalizing $a$ and $b$ such that$\|a\| = 1$ and $\|b\| =
|
When to use dot-product as a similarity metric
Euclidean distance (the norm of the difference) and the dot product are closely related: they are not equal, but after normalization they carry essentially the same information.
After normalizing $a$ and $b$ such that $\|a\| = 1$ and $\|b\| = 1$,
these three measures are related as:
Euclidean distance = $\| a - b \| = \sqrt{\| a \|^2 + \|b\|^2 - 2
a^Tb} =\sqrt{2 - 2 \cos(\theta_{ab})}$
Dot product = $\|a\|\|b\| \cos(\theta_{ab}) = 1 \cdot 1 \cdot
\cos(\theta_{ab}) = \cos(\theta_{ab}) $
Cosine = $\cos(\theta_{ab})$
Thus, all three similarity measures are equivalent in the sense that they are monotone functions of $\cos(\theta_{ab})$, and so they order pairs of normalized vectors in the same way.
This is also discussed here and in the Vector space model: cosine similarity vs euclidean distance thread and Wikipedia. There was even an empirical evaluation by Qian et al (2004) concluding that
Through our theoretical analysis and experimental results, we conclude
that EUD and CAD are similar when applied to high dimensional NN
queries. For normalized data and clustered data, EUD and CAD becomes
even more similar.
Both metrics are similar and there are no strong reasons to prefer one over another in general.
|
When to use dot-product as a similarity metric
Euclidean distance (norm of difference) and dot predict are proportional to each other, while they are not equal, but roughly the same.
After normalizing $a$ and $b$ such that$\|a\| = 1$ and $\|b\| =
|
48,945 |
Loss function in Supervised Learning vs Statistical Decision Theory
|
I would say this is more a difference in the form of the decision than the loss. The loss function in both cases is Loss(true state of nature, your decision), but it simplifies differently depending on the form of the decision
In point prediction settings (such as a lot of ML), the decision is a potential value of the label, and the state of nature effectively simplifies to the true value of the label, so the loss $L(y, \hat y)$ can be written as the loss from predicting $\hat y$ when the truth is $y$.
In parametric inference settings, the decision is a potential value of the parameter, and the state of nature effectively simplifies to the true parameter value, so the loss $L(\theta, \hat\theta)$ can be written as the loss from estimating $\hat\theta$ when the truth is $\theta$.
There are more complicated settings, too. For example, your decision might be an interval, and the state of nature might be a value, and the loss could be the length of the interval plus the distance from the value to the closest point of the interval (possibly zero)[PDF]. In that setting there isn't the nice correspondence between potential decisions and potential states of nature, and the loss doesn't simplify down to a summary of the error in the decision in the same way. And of course many other possibilities.
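A minimal sketch of the interval-decision loss just described (my own illustration, with hypothetical numbers):
# loss = interval length + distance from the true value to the nearest endpoint (zero if covered)
interval_loss <- function(lower, upper, theta) (upper - lower) + max(0, lower - theta, theta - upper)
interval_loss(1, 3, 2.5)   # value covered: loss = length = 2
interval_loss(1, 3, 4.0)   # value missed by 1: loss = 2 + 1 = 3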
|
Loss function in Supervised Learning vs Statistical Decision Theory
|
I would say this is more a difference in the form of the decision than the loss. The loss function in both cases is Loss(true state of nature, your decision), but it simplifies differently depending o
|
Loss function in Supervised Learning vs Statistical Decision Theory
I would say this is more a difference in the form of the decision than the loss. The loss function in both cases is Loss(true state of nature, your decision), but it simplifies differently depending on the form of the decision
In point prediction settings (such as a lot of ML), the decision is a potential value of the label, and the state of nature effectively simplifies to the true value of the label, so the loss $L(y, \hat y)$ can be written as the loss from predicting $\hat y$ when the truth is $y$.
In parametric inference settings, the decision is a potential value of the parameter, and the state of nature effectively simplifies to the true parameter value, so the loss $L(\theta, \hat\theta)$ can be written as the loss from estimating $\hat\theta$ when the truth is $\theta$.
There are more complicated settings, too. For example, your decision might be an interval, and the state of nature might be a value, and the loss could be the length of the interval plus the distance from the value to the closest point of the interval (possibly zero)[PDF]. In that setting there isn't the nice correspondence between potential decisions and potential states of nature, and the loss doesn't simplify down to a summary of the error in the decision in the same way. And of course many other possibilities.
|
Loss function in Supervised Learning vs Statistical Decision Theory
I would say this is more a difference in the form of the decision than the loss. The loss function in both cases is Loss(true state of nature, your decision), but it simplifies differently depending o
|
48,946 |
Estimating growth rate from noisy data?
|
It makes sense to smooth and then calculate growth rate.
I assume you are interested in the growth rate of the latent measurement (i.e. the noise free value). Smoothing is a method of estimating this latent quantity, and so calculating the growth rate on the smooth is closer to what you want.
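As a rough sketch of the smooth-then-differentiate idea (my own toy example; the loess span is an arbitrary choice):
set.seed(1)
x <- 1:200
y <- exp(0.02 * x) * (1 + rnorm(200, sd = 0.1))   # noisy exponential growth, ~2% per step
fit <- loess(y ~ x, span = 0.3)                    # smooth = estimate of the latent signal
mu <- predict(fit)
growth <- diff(mu) / head(mu, -1)                  # per-step growth rate of the smooth
mean(growth)                                       # close to exp(0.02) - 1 ~ 0.02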
|
Estimating growth rate from noisy data?
|
It makes sense to smooth and then calculate growth rate.
I assume you are interested in the growth rate of the latent measurement (i.e. the noise free value). Smoothing is a method of estimating this
|
Estimating growth rate from noisy data?
It makes sense to smooth and then calculate growth rate.
I assume you are interested in the growth rate of the latent measurement (i.e. the noise free value). Smoothing is a method of estimating this latent quantity, and so calculating the growth rate on the smooth is closer to what you want.
|
Estimating growth rate from noisy data?
It makes sense to smooth and then calculate growth rate.
I assume you are interested in the growth rate of the latent measurement (i.e. the noise free value). Smoothing is a method of estimating this
|
48,947 |
Estimating growth rate from noisy data?
|
I think that the smooth-then-derive method is OK. However, there are ways to avoid the two steps. For example, one could model the derivative of the trend directly. The idea is the following:
We observe a series of data $\{x_{i}, y_{i}\}_{i=1}^{n}$ with $y_{i} \sim \mathcal{N}(\mu_{i}, \sigma^{2})$
$\mu_{i}$ is the $i$th value of a smooth trend function observable at $x_{i}$.
We are interested in estimating $\dot{\mu} = \displaystyle \frac{d\mu}{dx}$ and the growth rate can be computed as $GR_{i} = \dot{\mu}_{i}/\mu_{i}$ (in the example $dx = 1$...see also below).
In order to solve the estimation task we need the following ingredients (please notice that I tried to keep the notation as simple as possible but might be not perfect):
A smoother: I will use here P-splines as introduced by Eilers and Marx (1996). P-splines are a class of flexible smoothing techniques combining B-splines and finite difference penalties
A model for the trend function which will be:
$
\hat{\mu} = \displaystyle \int B \alpha
$,
where $\alpha$ is a vector of unknown spline coefficients to be estimated such that $\hat{\dot{\mu}}= B \hat{\alpha}$ and $B$ is a B-spline matrix built on equally spaced knots.
Our estimation task then becomes
$$
\min_{\alpha} S_{p} = \|y - \int B\alpha\|^{2} + \lambda \|D\alpha\|^{2}
$$
where $\lambda$ is a smoothing/regularization parameter and $D$ is a matrix difference operator (I will here use differences of order 2).
Once we have obtained the $\alpha$ coefficients we can compute $\hat{\mu} = \displaystyle \int B \hat{\alpha}$ and $\hat{\dot{\mu}} = B\hat{\alpha}$.
Below you will find a small code. About the code:
the integral is approximated with summation
I use the R-package JOPS to compute the $B$-matrix
the growth rate is computed 'explicitly' as $(y_{i+1}-y_{i})/y_{i}$
the fact that the $dx$ is constant and equal to 1 simplifies computations a bit
I left some comments in the code (hope they are clear enough).
library(ggplot2)
library(data.table)
library(JOPS)
set.seed(1)
SimData = data.table(x = 1:501)
SimData$y = sin(SimData$x/30)
SimData$y = 200*SimData$y - 0.01*SimData$x^2 + 10*SimData$x + 20
SimData$sy = SimData$y + 50*sin(SimData$x/5)
SimData$y = round(abs(rnorm(nrow(SimData), SimData$sy,50)) + 1)
# Extract
x = SimData$x
y = SimData$y
sy = SimData$sy
TrueGR = c(NA, diff(sy)) / sy # [(i+1)th - ith]/ith
# Integration matrix operator
n = length(x)
u = x
Ci = matrix(0, n, n)
for(i in 1:n)
{
Ci[i, ] = (x[i] >= u)
}
# Define bases
bdeg = 3
ndx = 100
B = bbase(x, bdeg = bdeg, nseg = ndx)
nb = ncol(B)
# Penalty - I fix the lambda parameter (can be selected using e.g. GCV/AIC/BIC etc)
dd = 2
la = 1e1
D = diff(diag(nb), diff = dd)
P = crossprod(D) * la
# Smooth
CB = Ci %*% B
cof = solve(crossprod(CB) + P) %*% t(CB) %*% y
mu = CB %*% cof
dmu = B %*% cof
hatGR = dmu/mu
# Plot
par(mfrow = c(2, 1), mar = c(2, 2, 2, 2))
plot(x, y, pch = 16, cex = 0.5)
lines(x, y)
lines(x, mu, col = 2, lwd = 2)
lines(x, sy, col = 3, lty = 2, lwd = 2)
legend('topleft', c('Fit', 'Signal'), col = c(2, 3),lty = c(1, 2))
plot(x, hatGR, col = 2, lwd = 2, type='l',ylim = range(TrueGR, na.rm = TRUE))
lines(x, TrueGR, type = 'l', col = 3, lwd = 2, lty = 2)
The result should look like the plot below. I hope my answer is clear enough.
|
Estimating growth rate from noisy data?
|
I think that the smooth-then-derive method is OK. However there are ways to avoid the 2 steps, I think. For example, one could model the derivative of the trend directly. The idea is the following:
W
|
Estimating growth rate from noisy data?
I think that the smooth-then-derive method is OK. However, there are ways to avoid the two steps. For example, one could model the derivative of the trend directly. The idea is the following:
We observe a series of data $\{x_{i}, y_{i}\}_{i=1}^{n}$ with $y_{i} \sim \mathcal{N}(\mu_{i}, \sigma^{2})$
$\mu_{i}$ is the $i$th value of a smooth trend function observable at $x_{i}$.
We are interested in estimating $\dot{\mu} = \displaystyle \frac{d\mu}{dx}$ and the growth rate can be computed as $GR_{i} = \dot{\mu}_{i}/\mu_{i}$ (in the example $dx = 1$...see also below).
In order to solve the estimation task we need the following ingredients (please notice that I tried to keep the notation as simple as possible but might be not perfect):
A smoother: I will use here P-splines as introduced by Eilers and Marx (1996). P-splines are a class of flexible smoothing techniques combining B-splines and finite difference penalties
A model for the trend function which will be:
$
\hat{\mu} = \displaystyle \int B \alpha
$,
where $\alpha$ is a vector of unknown spline coefficients to be estimated such that $\hat{\dot{\mu}}= B \hat{\alpha}$ and $B$ is a B-spline matrix built on equally spaced knots.
Our estimation task then becomes
$$
\min_{\alpha} S_{p} = \|y - \int B\alpha\|^{2} + \lambda \|D\alpha\|^{2}
$$
where $\lambda$ is a smoothing/regularization parameter and $D$ is a matrix difference operator (I will here use differences of order 2).
Once we have obtained the $\alpha$ coefficients we can compute $\hat{\mu} = \displaystyle \int B \hat{\alpha}$ and $\hat{\dot{\mu}} = B\hat{\alpha}$.
Below you will find a small code. About the code:
the integral is approximated with summation
I use the R-package JOPS to compute the $B$-matrix
the growth rate is computed 'explicitly' as $(y_{i+1}-y_{i})/y_{i}$
the fact that the $dx$ is constant and equal to 1 simplifies computations a bit
I left some comments in the code (hope they are clear enough).
library(ggplot2)
library(data.table)
library(JOPS)
set.seed(1)
SimData = data.table(x = 1:501)
SimData$y = sin(SimData$x/30)
SimData$y = 200*SimData$y - 0.01*SimData$x^2 + 10*SimData$x + 20
SimData$sy = SimData$y + 50*sin(SimData$x/5)
SimData$y = round(abs(rnorm(nrow(SimData), SimData$sy,50)) + 1)
# Extract
x = SimData$x
y = SimData$y
sy = SimData$sy
TrueGR = c(NA, diff(sy)) / sy # [(i+1)th - ith]/ith
# Integration matrix operator
n = length(x)
u = x
Ci = matrix(0, n, n)
for(i in 1:n)
{
Ci[i, ] = (x[i] >= u)
}
# Define bases
bdeg = 3
ndx = 100
B = bbase(x, bdeg = bdeg, nseg = ndx)
nb = ncol(B)
# Penalty - I fix the lambda parameter (can be selected using e.g. GCV/AIC/BIC etc)
dd = 2
la = 1e1
D = diff(diag(nb), diff = dd)
P = crossprod(D) * la
# Smooth
CB = Ci %*% B
cof = solve(crossprod(CB) + P) %*% t(CB) %*% y
mu = CB %*% cof
dmu = B %*% cof
hatGR = dmu/mu
# Plot
par(mfrow = c(2, 1), mar = c(2, 2, 2, 2))
plot(x, y, pch = 16, cex = 0.5)
lines(x, y)
lines(x, mu, col = 2, lwd = 2)
lines(x, sy, col = 3, lty = 2, lwd = 2)
legend('topleft', c('Fit', 'Signal'), col = c(2, 3),lty = c(1, 2))
plot(x, hatGR, col = 2, lwd = 2, type='l',ylim = range(TrueGR, na.rm = TRUE))
lines(x, TrueGR, type = 'l', col = 3, lwd = 2, lty = 2)
The result should look like the plot below. I hope my answer is clear enough.
|
Estimating growth rate from noisy data?
I think that the smooth-then-derive method is OK. However there are ways to avoid the 2 steps, I think. For example, one could model the derivative of the trend directly. The idea is the following:
W
|
48,948 |
Does the sum of squared "dependent" Gaussian variables still exhibit a Chi-squared distribution?
|
Suppose we have $X_1 = u_1$ and for $i>1$ $X_i = \rho X_{i-1} + u_i$. I'll set $\sigma^2_u = 1$ for simplicity. Then
$$
\begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n\end{bmatrix} = \begin{bmatrix}
1 & 0 & 0 & 0 & \dots & 0 \\
\rho & 1 & 0 & 0 & \dots & 0 \\
\rho^2 & \rho & 1 & 0 & \dots & 0 \\
&&&\vdots&&\\
\rho^{n-1} & \rho^{n-2} & &\dots& & 1
\end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n\end{bmatrix}
$$
so $X \sim \mathcal N(\mathbf 0, \Omega_\rho)$ where $\Omega_\rho = L_\rho L_\rho^T$ with $L_\rho$ being the above matrix.
This means that $Y := X^TX = u^TL_\rho^TL_\rho u$ is a Gaussian quadratic form. If $L_\rho^TL_\rho$ were idempotent we'd be able to use Cochran's theorem to get a chi squared distribution, but that won't be true here. Quadratic forms with Gaussian RVs have been studied for years and there are lots of results and papers out there such as "The Distribution of Quadratic Forms of Gaussian Vectors" by Zorin and Iyubimov (1987).
It's a standard result that
$$
\text E[X^TX] = \text{tr}(\Omega_\rho)
$$
and
$$
\text{Var}[X^TX] = 2 \text{tr}(\Omega_\rho^2).
$$
If we are to have $Y$ follow a central chi-squared distribution then we'd need $2 \text E[Y] = \text {Var}[Y]$, which in this case implies $\text{tr}(\Omega_\rho) = \text{tr}(\Omega_\rho^2)$, which is not true in general. This means $Y$ does not, in general, have a central chi-squared distribution.
We can also use this to show that $Y$ does not necessarily have a noncentral chi squared distribution. If $Y \sim \chi^2_k(\delta)$ then $\text E[Y] = k+\delta$ and $\text {Var}[Y] = 2(k + 2\delta)$. Matching the first two moments like this leads to the system
$$
k + \delta = \text{tr}(\Omega_\rho) \\
k + 2\delta = \text{tr}(\Omega_\rho^2)
$$
so
$$
{k \choose \delta} = \begin{bmatrix} 2 & -1 \\ -1 & 1\end{bmatrix}{\text{tr } \Omega_\rho \choose \text{tr }\Omega_\rho^2}
$$
i.e.
$$
k = 2\cdot\text{tr }\Omega - \text{tr }\Omega^2 \\
\delta = \text{tr }\Omega^2 - \text{tr }\Omega.
$$
The problem is that this can lead to negative values of $k$, so $Y$ is not guaranteed to be a noncentral chi-squared either.
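A quick numerical illustration (my own sketch with assumed values $\rho = 0.8$, $n = 5$): the two traces differ, and the moment-matched $k$ comes out negative.
rho <- 0.8; n <- 5
L <- outer(1:n, 1:n, function(i, j) ifelse(i >= j, rho^(i - j), 0))   # the lower-triangular matrix above
Omega <- L %*% t(L)
c(tr = sum(diag(Omega)), tr2 = sum(diag(Omega %*% Omega)))            # unequal, so not central chi-squared
c(k = 2 * sum(diag(Omega)) - sum(diag(Omega %*% Omega)),              # moment-matched k is negative here
  delta = sum(diag(Omega %*% Omega)) - sum(diag(Omega)))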
|
Does the sum of squared "dependent" Gaussian variables still exhibit a Chi-squared distribution?
|
Suppose we have $X_1 = u_1$ and for $i>1$ $X_i = \rho X_{i-1} + u_i$. I'll set $\sigma^2_u = 1$ for simplicity. Then
$$
\begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n\end{bmatrix} = \begin{bmatrix}
1 & 0
|
Does the sum of squared "dependent" Gaussian variables still exhibit a Chi-squared distribution?
Suppose we have $X_1 = u_1$ and for $i>1$ $X_i = \rho X_{i-1} + u_i$. I'll set $\sigma^2_u = 1$ for simplicity. Then
$$
\begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n\end{bmatrix} = \begin{bmatrix}
1 & 0 & 0 & 0 & \dots & 0 \\
\rho & 1 & 0 & 0 & \dots & 0 \\
\rho^2 & \rho & 1 & 0 & \dots & 0 \\
&&&\vdots&&\\
\rho^{n-1} & \rho^{n-2} & &\dots& & 1
\end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n\end{bmatrix}
$$
so $X \sim \mathcal N(\mathbf 0, \Omega_\rho)$ where $\Omega_\rho = L_\rho L_\rho^T$ with $L_\rho$ being the above matrix.
This means that $Y := X^TX = u^TL_\rho^TL_\rho u$ is a Gaussian quadratic form. If $L_\rho^TL_\rho$ were idempotent we'd be able to use Cochran's theorem to get a chi squared distribution, but that won't be true here. Quadratic forms with Gaussian RVs have been studied for years and there are lots of results and papers out there such as "The Distribution of Quadratic Forms of Gaussian Vectors" by Zorin and Iyubimov (1987).
It's a standard result that
$$
\text E[X^TX] = \text{tr}(\Omega_\rho)
$$
and
$$
\text{Var}[X^TX] = 2 \text{tr}(\Omega_\rho^2).
$$
If $Y$ were to follow a central chi-squared distribution we would need $2 \text E[Y] = \text {Var}[Y]$, which in this case implies $\text{tr}(\Omega_\rho) = \text{tr}(\Omega_\rho^2)$; that is not true in general. So $Y$ does not, in general, have a central chi squared distribution.
We can also use this to show that $Y$ does not necessarily have a noncentral chi squared distribution. If $Y \sim \chi^2_k(\delta)$ then $\text E[Y] = k+\delta$ and $\text {Var}[Y] = 2(k + 2\delta)$. Matching the first two moments like this leads to the system
$$
k + \delta = \text{tr}(\Omega_\rho) \\
k + 2\delta = \text{tr}(\Omega_\rho^2)
$$
so
$$
{k \choose \delta} = \begin{bmatrix} 2 & -1 \\ -1 & 1\end{bmatrix}{\text{tr } \Omega_\rho \choose \text{tr }\Omega_\rho^2}
$$
i.e.
$$
k = 2\cdot\text{tr }\Omega - \text{tr }\Omega^2 \\
\delta = \text{tr }\Omega^2 - \text{tr }\Omega.
$$
The problem is that this can lead to negative values of $k$, so $Y$ is not guaranteed to be a noncentral chi squared either.
|
Does the sum of squared "dependent" Gaussian variables still exhibit a Chi-squared distribution?
Suppose we have $X_1 = u_1$ and for $i>1$ $X_i = \rho X_{i-1} + u_i$. I'll set $\sigma^2_u = 1$ for simplicity. Then
$$
\begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n\end{bmatrix} = \begin{bmatrix}
1 & 0
|
48,949 |
Precision and recall estimated with stratified sampling
|
I'm not sure I'd call this stratified sampling, but your calculations for precision and recall make sense. It's as if you cluster the negative examples into 1K groups, choose a representative from each, and calculate approximate statistics based on those representatives.
Of course, the effect of this procedure on the training is another issue; e.g., for Bayesian approaches, your priors will be calculated differently. So you may need to account for sample weights.
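For concreteness, here is a minimal sketch of the weight correction (my own illustration with made-up counts; it assumes each retained negative carries a weight equal to the number of negatives it stands for):
tp <- 80; fn <- 20                  # positives are not subsampled
fp_weights <- c(950, 1020, 1100)    # hypothetical weights of the 3 retained negatives flagged as positive
precision_hat <- tp / (tp + sum(fp_weights))  # scale the sampled false positives back up by their weights
recall_hat <- tp / (tp + fn)                  # recall only involves positives, so it is unchanged
c(precision = precision_hat, recall = recall_hat)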
|
Precision and recall estimated with stratified sampling
|
I'm not sure I'd call this stratified sampling, but your calculations for precision and recall make sense. It's as if you cluster negative examples into 1K groups and choose a representative among the
|
Precision and recall estimated with stratified sampling
I'm not sure I'd call this stratified sampling, but your calculations for precision and recall make sense. It's as if you cluster the negative examples into 1K groups, choose a representative from each, and calculate approximate statistics based on those representatives.
Of course, the effect of this procedure on the training is another issue; e.g., for Bayesian approaches, your priors will be calculated differently. So you may need to account for sample weights.
|
Precision and recall estimated with stratified sampling
I'm not sure I'd call this stratified sampling, but your calculations for precision and recall make sense. It's as if you cluster negative examples into 1K groups and choose a representative among the
|
48,950 |
Are These Conjectures Regarding Sufficient Statistics True?
|
Nice conjectures you got there; shame if something were to happen to them
(A) is false for all $k>1$ for the simple reason that any vector of real numbers $T_1,...,T_s$ can be reduced to a single real number without loss of information (i.e., $\mathbb{R}^s$ is cardinally equivalent to $\mathbb{R}$; see e.g., here). For example (though there are many other mappings you could use), write the digits of the statistics when represented as decimal numbers as:
$$\begin{align}
T_1 &= \cdots d_{1,3} \ d_{1,2} \ d_{1,1} \ d_{1,0} \cdot d_{1,-1} \ d_{1,-2} \ d_{1,-3} \cdots \\[6pt]
T_2 &= \cdots d_{2,3} \ d_{2,2} \ d_{2,1} \ d_{2,0} \cdot d_{2,-1} \ d_{2,-2} \ d_{2,-3} \cdots \\[6pt]
&\ \ \vdots \\[6pt]
T_s &= \cdots d_{s,3} \ d_{s,2} \ d_{s,1} \ d_{s,0} \cdot d_{s,-1} \ d_{s,-2} \ d_{s,-3} \cdots \\[6pt]
\end{align}$$
and then agglomerate these into a single real number:
$$T_* \equiv \cdots d_{1,1} \cdots d_{s,1} \ d_{1,0} \cdots d_{s,0} \cdot d_{1,-1} \cdots d_{s,-1} \ d_{1,-2} \cdots d_{s,-2} \cdots$$
It is simple to reverse this process to create a mapping $T_* \mapsto (T_1,...,T_s)$, which means that $T_* \in \mathbb{R}$ is also a sufficient statistic. Consequently, for any $k>1$ if $T_1,...,T_s$ is sufficient then we can formulate the sufficient statistic $T_*$ with length $s_* = 1 < k$.
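A toy illustration of the interleaving map (my own code, working on fixed-width digit strings to avoid floating-point issues); the only property the argument needs is that the map is invertible:
interleave <- function(t1, t2) {                      # (T1, T2) -> T*
  d <- rbind(strsplit(t1, "")[[1]], strsplit(t2, "")[[1]])
  paste0(as.vector(d), collapse = "")                 # read the 2 x m digit matrix column by column
}
deinterleave <- function(ts) {                        # T* -> (T1, T2)
  d <- strsplit(ts, "")[[1]]
  c(paste0(d[c(TRUE, FALSE)], collapse = ""), paste0(d[c(FALSE, TRUE)], collapse = ""))
}
interleave("3141", "2718")                  # "32714118"
deinterleave(interleave("3141", "2718"))    # recovers "3141" "2718"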
(B) is false for all $k>1$ as a corollary of the above. Reduce the sufficient statistic to $T_*$ and then add useless statistics $T_{*2},...,T_{*k}$. This then gives you a vector $T_*,T_{*2},...,T_{*k}$ with length $k$ that is not minimal sufficient (since the latter elements contribute nothing to sufficiency). More generally, sufficiency of a statistic does not imply minimal sufficiency, even if the statistic has the same dimension as the parameter vector.
(C) is false by virtue of the fact that you can construct a sufficient non-complete statistic for the dimension $k=1$. For $k>1$ you can also apply the above to construct a sufficient statistic that is non-complete.
|
Are These Conjectures Regarding Sufficient Statistics True?
|
Nice conjectures you got there; shame if something were to happen to them
(A) is false for all $k>1$ for the simple reason that any vector of real numbers $T_1,...,T_s$ can be reduced to a single real
|
Are These Conjectures Regarding Sufficient Statistics True?
Nice conjectures you got there; shame if something were to happen to them
(A) is false for all $k>1$ for the simple reason that any vector of real numbers $T_1,...,T_s$ can be reduced to a single real number without loss of information (i.e., $\mathbb{R}^s$ is cardinally equivalent to $\mathbb{R}$; see e.g., here). For example (though there are many other mappings you could use), write the digits of the statistics when represented as decimal numbers as:
$$\begin{align}
T_1 &= \cdots d_{1,3} \ d_{1,2} \ d_{1,1} \ d_{1,0} \cdot d_{1,-1} \ d_{1,-2} \ d_{1,-3} \cdots \\[6pt]
T_2 &= \cdots d_{2,3} \ d_{2,2} \ d_{2,1} \ d_{2,0} \cdot d_{2,-1} \ d_{2,-2} \ d_{2,-3} \cdots \\[6pt]
&\ \ \vdots \\[6pt]
T_s &= \cdots d_{s,3} \ d_{s,2} \ d_{s,1} \ d_{s,0} \cdot d_{s,-1} \ d_{s,-2} \ d_{s,-3} \cdots \\[6pt]
\end{align}$$
and then agglomerate these into a single real number:
$$T_* \equiv \cdots d_{1,1} \cdots d_{s,1} \ d_{1,0} \cdots d_{s,0} \cdot d_{1,-1} \cdots d_{s,-1} \ d_{1,-2} \cdots d_{s,-2} \cdots$$
It is simple to reverse this process to create a mapping $T_* \mapsto (T_1,...,T_s)$, which means that $T_* \in \mathbb{R}$ is also a sufficient statistic. Consequently, for any $k>1$ if $T_1,...,T_s$ is sufficient then we can formulate the sufficient statistic $T_*$ with length $s_* = 1 < k$.
(B) is false for all $k>1$ as a corollary of the above. Reduce the sufficient statistic to $T_*$ and then add useless statistics $T_{*2},...,T_{*k}$. This then gives you a vector $T_*,T_{*2},...,T_{*k}$ with length $k$ that is not minimal sufficient (since the latter elements contribute nothing to sufficiency). More generally, sufficiency of a statistic does not imply minimal sufficiency, even if the statistic has the same dimension as the parameter vector.
(C) is false by virtue of the fact that you can construct a sufficient non-complete statistic for the dimension $k=1$. For $k>1$ you can also apply the above to construct a sufficient statistic that is non-complete.
|
Are These Conjectures Regarding Sufficient Statistics True?
Nice conjectures you got there; shame if something were to happen to them
(A) is false for all $k>1$ for the simple reason that any vector of real numbers $T_1,...,T_s$ can be reduced to a single real
|
48,951 |
Are These Conjectures Regarding Sufficient Statistics True?
|
$U[0,\theta]$ is a counterexample to C. $T_1=\max_i X_i$ is sufficient for $\theta$ but it is not complete because $$E\left[\frac{n+1}{n}T_1\right]=\theta=E[2\bar X]$$
(I believe A and B are true.)
|
Are These Conjectures Regarding Sufficient Statistics True?
|
$U[0,\theta]$ is a counterexample to C. $T_1=\max_i X_i$ is sufficient for $\theta$ but it is not complete because $$E\left[\frac{n+1}{n}T_1\right]=\theta=E[2\bar X]$$
(I believe A and B are true.)
|
Are These Conjectures Regarding Sufficient Statistics True?
$U[0,\theta]$ is a counterexample to C. $T_1=\max_i X_i$ is sufficient for $\theta$ but it is not complete because $$E\left[\frac{n+1}{n}T_1\right]=\theta=E[2\bar X]$$
(I believe A and B are true.)
|
Are These Conjectures Regarding Sufficient Statistics True?
$U[0,\theta]$ is a counterexample to C. $T_1=\max_i X_i$ is sufficient for $\theta$ but it is not complete because $$E\left[\frac{n+1}{n}T_1\right]=\theta=E[2\bar X]$$
(I believe A and B are true.)
|
48,952 |
Role of regression model fit in causal analysis
|
A DAG is a non-parametric model of the causal relations among a set of variables. The DAG tells you that covariate and state should be adjusted for, but it tells you nothing about how to adjust. Covariate adjustment is one approach, and stratification is another. Here you are proposing three covariate adjustment models:
outcome ~ covariate + state
outcome ~ covariate + (1 | state)
outcome ~ covariate + (covariate | state)
The DAG does not help you choose which one to use. Each model represents different parametric assumptions, none of which the DAG can inform us about. There are many considerations for choosing among these models. A few are:
How many states are there? If too few, then the first model is the only option.
Does covariate vary within levels of state? If not, then the third model is not appropriate. If it does, then the third model may be appropriate provided that it is supported by the data (and random slopes often are not, despite being clinically/theoretically appropriate).
Also note that, for the exact same reasons, the DAG can't tell you whether you should include nonlinear terms such as quadratic, cubic, or splines; nor can it help determine any transformations that might be needed.
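For reference, here is a minimal sketch of how the three formulas above map onto R calls (my own code; it assumes a data frame dat with columns outcome, covariate and state, and uses the lme4 package for the mixed models):
library(lme4)
# dat is an assumed data frame with columns outcome, covariate, state
m1 <- lm(outcome ~ covariate + state, data = dat)                  # state as fixed effects
m2 <- lmer(outcome ~ covariate + (1 | state), data = dat)          # random intercepts by state
m3 <- lmer(outcome ~ covariate + (covariate | state), data = dat)  # random intercepts and slopes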
|
Role of regression model fit in causal analysis
|
A DAG is a non-parametric model of the causal relations among a set of variables. The DAG tells you that covariate and state should be adjusted for, but it tells you nothing about how to adjust. Covari
|
Role of regression model fit in causal analysis
A DAG is a non-parametric model of the causal relations among a set of variables. The DAG tells you that covariate and state should be adjusted for, but it tells you nothing about how to adjust. Covariate adjustment is one approach, and stratification is another. Here you are proposing three covariate adjustment models:
outcome ~ covariate + state
outcome ~ covariate + (1 | state)
outcome ~ covariate + (covariate | state)
The DAG does not help you choose which one to use. Each model represents different parametric assumptions, none of which the DAG can inform us about. There are many considerations for choosing among these models. A few are:
How many states are there? If too few, then the first model is the only option.
Does covariate vary within levels of state? If not, then the third model is not appropriate. If it does, then the third model may be appropriate provided that it is supported by the data (and random slopes often are not, despite being clinically/theoretically appropriate).
Also note that, for the exact same reasons, the DAG can't tell you whether you should include nonlinear terms such as quadratic, cubic, or splines; nor can it help determine any transformations that might be needed.
|
Role of regression model fit in causal analysis
A DAG is a non-parametric model of the causal relations among a set of variables. The DAG tells you that covariate and state should be adjusted for, but it tells you nothing about how to adjust. Covari
|
48,953 |
Is $P(|X_1|>k)\le P(|X_2|> k)$ when $X_i\sim N(\mu_i,\sigma^2)$ and $|\mu_2| \ge |\mu_1|$?
|
We can show this without having to deal with integrals, or the normal density function. It arises from a symmetry property, and the normal density of $x$ decreasing as $|x|$ increases.
First, note that this is trivial for $k \leq 0$, since then $\mathbb{P}(|X| > k) = 1$ for all $\mu$. We therefore only need to examine the case where $k > 0$.
Let $$f(\mu; k) := \mathbb{P}(|X| > k \,;\, \mu) = 1 - \Phi\left(\frac{k-\mu}{\sigma}\right) + \Phi\left(\frac{-k-\mu}{\sigma}\right).$$
This is symmetric in $\mu$, since $\Phi(x) = 1 - \Phi(-x)$:
$$f(-\mu; k) = 1 - \Phi\left(\frac{k+\mu}{\sigma}\right) + \Phi\left(\frac{-k+\mu}{\sigma}\right) = 1 + \Phi\left(\frac{-k-\mu}{\sigma}\right) - \Phi\left(\frac{k-\mu}{\sigma}\right) = f(\mu;k).$$
We can therefore restrict ourselves to considering positive values of $\mu$.
We now differentiate by $\mu$:
$$f'(\mu;k) = \frac{\textrm{d}}{\textrm{d}\mu} \mathbb{P}(|X| > k \,;\, \mu) = \phi\left(\frac{k-\mu}{\sigma}\right) - \phi\left(\frac{-k-\mu}{\sigma}\right),$$
where $\phi$ is the normal density function. For the proposed property to hold, we require $f'$ to be positive for all $\mu \geq 0$, which is equivalent to
$$\phi\left(\frac{k+\mu}{\sigma}\right) \leq \phi\left(\frac{k-\mu}{\sigma}\right) \quad \textrm{for all } \mu \geq 0.$$
This is true if and only if $|k+\mu| \geq |k-\mu|$, since $\phi(x)$ decreases as $|x|$ increases. $k + \mu > 0$, since $k > 0$ and $\mu \geq 0$, so we have two cases:
$k \geq \mu$, and we require $k+\mu \geq k-\mu$, i.e. $\mu \geq 0$, which is true.
$k < \mu$, and we require $k+\mu \geq \mu-k$, i.e. $k \geq 0$, which is true.
Therefore, the property is true for all $k$.
Some examples using R:
f <- function(mu, k) 1 - pmax(0, pnorm(k, mu, 1) - pnorm(-k, mu, 1))
curve(f(x, 0), from = -5, to = 5)
curve(f(x, 1), from = -5, to = 5)
curve(f(x, 10), from = -5, to = 5)
This also holds for any other density function that's "symmetric-decreasing". Here are some examples for the Cauchy distribution:
f <- function(mu, k) 1 - pmax(0, pcauchy(k, mu, 1) - pcauchy(-k, mu, 1))
curve(f(x, 0), from = -5, to = 5)
curve(f(x, 1), from = -5, to = 5)
curve(f(x, 10), from = -5, to = 5)
|
Is $P(|X_1|>k)\le P(|X_2|> k)$ when $X_i\sim N(\mu_i,\sigma^2)$ and $|\mu_2| \ge |\mu_1|$?
|
We can show this without having to deal with integrals, or the normal density function. It arises from a symmetry property, and the normal density of $x$ decreasing as $|x|$ increases.
First, note tha
|
Is $P(|X_1|>k)\le P(|X_2|> k)$ when $X_i\sim N(\mu_i,\sigma^2)$ and $|\mu_2| \ge |\mu_1|$?
We can show this without having to deal with integrals, or the normal density function. It arises from a symmetry property, and the normal density of $x$ decreasing as $|x|$ increases.
First, note that this is trivial for $k \leq 0$, since then $\mathbb{P}(|X| > k) = 1$ for all $\mu$. We therefore only need to examine the case where $k > 0$.
Let $$f(\mu; k) := \mathbb{P}(|X| > k \,;\, \mu) = 1 - \Phi\left(\frac{k-\mu}{\sigma}\right) + \Phi\left(\frac{-k-\mu}{\sigma}\right).$$
This is symmetric in $\mu$, since $\Phi(x) = 1 - \Phi(-x)$:
$$f(-\mu; k) = 1 - \Phi\left(\frac{k+\mu}{\sigma}\right) + \Phi\left(\frac{-k+\mu}{\sigma}\right) = 1 + \Phi\left(\frac{-k-\mu}{\sigma}\right) - \Phi\left(\frac{k-\mu}{\sigma}\right) = f(\mu;k).$$
We can therefore restrict ourselves to considering positive values of $\mu$.
We now differentiate by $\mu$:
$$f'(\mu;k) = \frac{\textrm{d}}{\textrm{d}\mu} \mathbb{P}(|X| > k \,;\, \mu) = \phi\left(\frac{k-\mu}{\sigma}\right) - \phi\left(\frac{-k-\mu}{\sigma}\right),$$
where $\phi$ is the normal density function. For the proposed property to hold, we require $f'$ to be positive for all $\mu \geq 0$, which is equivalent to
$$\phi\left(\frac{k+\mu}{\sigma}\right) \leq \phi\left(\frac{k-\mu}{\sigma}\right) \quad \textrm{for all } \mu \geq 0.$$
This is true if and only if $|k+\mu| \geq |k-\mu|$, since $\phi(x)$ decreases as $|x|$ increases. $k + \mu > 0$, since $k > 0$ and $\mu \geq 0$, so we have two cases:
$k \geq \mu$, and we require $k+\mu \geq k-\mu$, i.e. $\mu \geq 0$, which is true.
$k < \mu$, and we require $k+\mu \geq \mu-k$, i.e. $k \geq 0$, which is true.
Therefore, the property is true for all $k$.
Some examples using R:
f <- function(mu, k) 1 - pmax(0, pnorm(k, mu, 1) - pnorm(-k, mu, 1))
curve(f(x, 0), from = -5, to = 5)
curve(f(x, 1), from = -5, to = 5)
curve(f(x, 10), from = -5, to = 5)
This also holds for any other density function that's "symmetric-decreasing". Here are some examples for the Cauchy distribution:
f <- function(mu, k) 1 - pmax(0, pcauchy(k, mu, 1) - pcauchy(-k, mu, 1))
curve(f(x, 0), from = -5, to = 5)
curve(f(x, 1), from = -5, to = 5)
curve(f(x, 10), from = -5, to = 5)
|
Is $P(|X_1|>k)\le P(|X_2|> k)$ when $X_i\sim N(\mu_i,\sigma^2)$ and $|\mu_2| \ge |\mu_1|$?
We can show this without having to deal with integrals, or the normal density function. It arises from a symmetry property, and the normal density of $x$ decreasing as $|x|$ increases.
First, note tha
|
48,954 |
Is $P(|X_1|>k)\le P(|X_2|> k)$ when $X_i\sim N(\mu_i,\sigma^2)$ and $|\mu_2| \ge |\mu_1|$?
|
A possible approach: wlog we may assume $0 \le \mu_1 \le \mu_2$ and $k \ge 0$.
$$
P(|X_2|>k) - P(|X_1|>k) = \int_0^{\infty}(f_2(k+t) - f_1(k+t)) - (f_1(-k-t) - f_2(-k-t)) \,dt
$$
and it would suffice to show that the integrand is nonnegative for all $t\ge0$.
|
Is $P(|X_1|>k)\le P(|X_2|> k)$ when $X_i\sim N(\mu_i,\sigma^2)$ and $|\mu_2| \ge |\mu_1|$?
|
A possible approach: wlog we may assume $0 \le \mu_1 \le \mu_2$ and $k \ge 0$.
$$
P(|X_2|>k) - P(|X_1|>k) = \int_0^{\infty}(f_2(k+t) - f_1(k+t)) - (f_1(-k-t) - f_2(-k-t)) \,dt
$$
and it would suffice to show that
|
Is $P(|X_1|>k)\le P(|X_2|> k)$ when $X_i\sim N(\mu_i,\sigma^2)$ and $|\mu_2| \ge |\mu_1|$?
A possible approach: wlog we may assume $0 \le \mu_1 \le \mu_2$ and $k \ge 0$.
$$
P(|X_2|>k) - P(|X_1|>k) = \int_0^{\infty}(f_2(k+t) - f_1(k+t)) - (f_1(-k-t) - f_2(-k-t)) \,dt
$$
and it would suffice to show that the integrand is nonnegative for all $t\ge0$.
|
Is $P(|X_1|>k)\le P(|X_2|> k)$ when $X_i\sim N(\mu_i,\sigma^2)$ and $|\mu_2| \ge |\mu_1|$?
A possible approach: wlog we may assume $0 \le \mu_1 \le \mu_2$ and $k \ge 0$.
$$
P(|X_2|>k) - P(|X_1|>k) = \int_0^{\infty}(f_2(k+t) - f_1(k+t)) - (f_1(-k-t) - f_2(-k-t)) \,dt
$$
and it would suffice to show that
|
48,955 |
Is $P(|X_1|>k)\le P(|X_2|> k)$ when $X_i\sim N(\mu_i,\sigma^2)$ and $|\mu_2| \ge |\mu_1|$?
|
You can reduce your question to the case you understand. Let $s_i$ be the sign of $\mu_i$, that is, $s_i \in \{-1,1\}$ and $s_i \mu_i = |\mu_i|$. Since the question is about the marginals of $(X_1,X_2)$ we can use any coupling between them.
Take $X_1 = \mu_1 + Z$ and $X_2 = \mu_2 + s_1 Z$ where $Z \sim N(0,\sigma^2)$. These will have the correct marginals. Let $X'_1 = |\mu_1| + s_1 Z$ and $X'_2 = |\mu_2| + s_1 s_2 Z$.
\begin{align*}
\mathbb P(|X_1| > k ) = \mathbb P(|s_1 X_1 | > k) &= \mathbb P(|s_1 \mu_1 +s_1 Z| > k) \\
&= \mathbb P(|X'_1| > k) \\
&\le \mathbb P(|X'_2| > k) \\
&= \mathbb P(|s_2 \mu_2 + s_2 s_1 Z| > k) \\
&= \mathbb P(|\mu_2 + s_1 Z| > k) \\
&= \mathbb P(|X_2| > k).
\end{align*}
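A quick Monte Carlo spot-check of the inequality (my own sketch, with $\sigma = 1$ and arbitrarily chosen means satisfying $|\mu_2| \ge |\mu_1|$):
set.seed(1)
k <- 1.3; mu1 <- -0.5; mu2 <- 2
x1 <- rnorm(1e6, mu1, 1); x2 <- rnorm(1e6, mu2, 1)
c(mean(abs(x1) > k), mean(abs(x2) > k))   # the first tail probability should not exceed the second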
|
Is $P(|X_1|>k)\le P(|X_2|> k)$ when $X_i\sim N(\mu_i,\sigma^2)$ and $|\mu_2| \ge |\mu_1|$?
|
You can reduce your question to the case you understand. Let $s_i$ be the sign of $\mu_i$, that is, $s_i \in \{-1,1\}$ and $s_i \mu_i = |\mu_i|$. Since the question is about the marginals of $(X_1,X_2
|
Is $P(|X_1|>k)\le P(|X_2|> k)$ when $X_i\sim N(\mu_i,\sigma^2)$ and $|\mu_2| \ge |\mu_1|$?
You can reduce your question to the case you understand. Let $s_i$ be the sign of $\mu_i$, that is, $s_i \in \{-1,1\}$ and $s_i \mu_i = |\mu_i|$. Since the question is about the marginals of $(X_1,X_2)$ we can use any coupling between them.
Take $X_1 = \mu_1 + Z$ and $X_2 = \mu_2 + s_1 Z$ where $Z \sim N(0,\sigma^2)$. These will have the correct marginals. Let $X'_1 = |\mu_1| + s_1 Z$ and $X'_2 = |\mu_2| + s_1 s_2 Z$.
\begin{align*}
\mathbb P(|X_1| > k ) = \mathbb P(|s_1 X_1 | > k) &= \mathbb P(|s_1 \mu_1 +s_1 Z| > k) \\
&= \mathbb P(|X'_1| > k) \\
&\le \mathbb P(|X'_2| > k) \\
&= \mathbb P(|s_2 \mu_2 + s_2 s_1 Z| > k) \\
&= \mathbb P(|\mu_2 + s_1 Z| > k) \\
&= \mathbb P(|X_2| > k).
\end{align*}
|
Is $P(|X_1|>k)\le P(|X_2|> k)$ when $X_i\sim N(\mu_i,\sigma^2)$ and $|\mu_2| \ge |\mu_1|$?
You can reduce your question to the case you understand. Let $s_i$ be the sign of $\mu_i$, that is, $s_i \in \{-1,1\}$ and $s_i \mu_i = |\mu_i|$. Since the question is about the marginals of $(X_1,X_2
|
48,956 |
Can you use Lasso for variable selection of fixed effects, then run a mixed model using the fixed effects from Lasso?
|
There doesn't appear to be a consensus on how to perform variable selection on both fixed and random effects. There are technical papers proposing solutions to this problem, like this paper from Fan and Li.
Bondell et al. argue against separating the fixed and random when performing variable selection, as the structure of the random effects will affect which fixed effect variables are selected. I am not an expert at variable selection or mixed models, but Bondell's claims and your intuition seem correct to me. Since this problem doesn't seem to be fully solved, I don't think you'll find the clear resource you want. You will find many technical arguments like those papers or their references.
It is straightforward to produce an example where temporarily ignoring the random effects will not work. Imagine you have a longitudinal study and a random effect for each person. It wouldn't make any sense to omit the random effects because that is how you model within-person correlation.
|
Can you use Lasso for variable selection of fixed effects, then run a mixed model using the fixed ef
|
There doesn't appear to be a consensus on how to perform variable selection on both fixed and random effects. There are technical papers proposing solutions to this problem, like this paper from Fan a
|
Can you use Lasso for variable selection of fixed effects, then run a mixed model using the fixed effects from Lasso?
There doesn't appear to be a consensus on how to perform variable selection on both fixed and random effects. There are technical papers proposing solutions to this problem, like this paper from Fan and Li.
Bondell et al. argue against separating the fixed and random when performing variable selection, as the structure of the random effects will affect which fixed effect variables are selected. I am not an expert at variable selection or mixed models, but Bondell's claims and your intuition seem correct to me. Since this problem doesn't seem to be fully solved, I don't think you'll find the clear resource you want. You will find many technical arguments like those papers or their references.
It is straightforward to produce an example where temporarily ignoring the random effects will not work. Imagine you have a longitudinal study and a random effect for each person. It wouldn't make any sense to omit the random effects because that is how you model within-person correlation.
|
Can you use Lasso for variable selection of fixed effects, then run a mixed model using the fixed ef
There doesn't appear to be a consensus on how to perform variable selection on both fixed and random effects. There are technical papers proposing solutions to this problem, like this paper from Fan a
|
48,957 |
Bounding sum of quartic deviations from sample mean
|
So, I believe we have a proof - actually provided by my PhD Student Stephan Hetzenecker, who is however not here, so I'll post it in his name!
W.l.o.g., let $\mu = 0$ (otherwise define $Z_i := X_i - \mu$ and show $\sum_i(Z_i-\bar Z)^4 \leq 16 \sum_i Z_i^4 $).
Using the binomial theorem, we obtain
\begin{align} \tag{1}\label{eq:sum_binom}
\sum_i(X_i-\bar X)^4= &\sum_i \left( X_i^4- 4X_i^3\bar X
-4X_i\bar X^3
+6X_i^2\bar X^2
+{\bar X}^4\right).
\end{align}
For even $p$ (here we only need $p=4$), the map $t \mapsto t^p$ is convex on $\mathbb{R}$, so Jensen's inequality gives
\begin{align} \tag{2}\label{eq:applyJensen_p}
\bar X^p = \left(\frac{1}{n} \sum_i X_i \right)^p \leq \frac{1}{n} \sum_i X_i^p.
\end{align}
Hence, applying \eqref{eq:applyJensen_p} $n$ times for $p=4$,
\begin{align} \tag{3}\label{eq:bound_barx4}
\sum_i \bar X^4 = \sum_i \left(\frac{1}{n} \sum_j X_j \right)^4 \leq \sum_i \frac{1}{n} \sum_j X_j^4 = \sum_j X_j^4 .
\end{align}
Similarly, using Hölder's inequality with $p=q=2$ and using \eqref{eq:bound_barx4},
\begin{align} \tag{4}\label{eq:bound_barx2}
\sum_i X_i^2 \bar X^2 &\leq \left(\sum_i X_i^4 \right)^{1/2} \left(\sum_i \bar X^4 \right)^{1/2}\\& \leq \left(\sum_i X_i^4 \right)^{1/2} \left( \sum_i X_i^4 \right)^{1/2} \\&= \sum_j X_j^4 .
\end{align}
Furthermore, using \eqref{eq:applyJensen_p} again
\begin{align} \tag{5}\label{eq:bound_barx3}
- \sum_i X_i\bar X^3 &= - n \bar X^4 \leq n \bar X^4 \\&\leq n \left(\frac{1}{n} \sum_j X_j^4 \right) \\&= \sum_j X_j^4
\end{align}
Using Hölder's inequality with $p=4/3$ and $q=4$ and using \eqref{eq:bound_barx4},
\begin{align} \tag{6}\label{eq:bound_barx}
-\sum_i X_i^3\bar X &\leq \sum_i \vert X_i^3\bar X \vert \\&\leq \left( \sum_i X_i^4 \right)^{3/4} \left( \sum_i \bar X^4 \right)^{1/4} \\&\leq \sum_i X_i^4
\end{align}
Combining \eqref{eq:sum_binom}, \eqref{eq:bound_barx4}, \eqref{eq:bound_barx2}, \eqref{eq:bound_barx3}, and \eqref{eq:bound_barx}, we obtain
\begin{align*}
\sum_i(X_i-\bar X)^4 \leq 16 \sum_i X_i^4,
\end{align*}
which was to be shown.
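A brief numerical sanity check of the constant $16$ (my own sketch; the inequality is deterministic for any constant $\mu$, and heavy-tailed draws are used to stress it):
set.seed(1)
mu <- 2; n <- 7
ratios <- replicate(1e4, {x <- rcauchy(n) + mu; sum((x - mean(x))^4) / sum((x - mu)^4)})
max(ratios)   # stays below 16, consistent with the bound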
|
Bounding sum of quartic deviations from sample mean
|
So, I believe we have a proof - actually provided by my PhD Student Stephan Hetzenecker, who is however not here, so I'll post it in his name!
W.l.o.g., let $\mu = 0$ (otherwise define $Z_i := X_i - \
|
Bounding sum of quartic deviations from sample mean
So, I believe we have a proof - actually provided by my PhD Student Stephan Hetzenecker, who is however not here, so I'll post it in his name!
W.l.o.g., let $\mu = 0$ (otherwise define $Z_i := X_i - \mu$ and show $\sum_i(Z_i-\bar Z)^4 \leq 16 \sum_i Z_i^4 $).
Using the binomial theorem, we obtain
\begin{align} \tag{1}\label{eq:sum_binom}
\sum_i(X_i-\bar X)^4= &\sum_i \left( X_i^4- 4X_i^3\bar X
-4X_i\bar X^3
+6X_i^2\bar X^2
+{\bar X}^4\right).
\end{align}
For even $p$ (here we only need $p=4$), the map $t \mapsto t^p$ is convex on $\mathbb{R}$, so Jensen's inequality gives
\begin{align} \tag{2}\label{eq:applyJensen_p}
\bar X^p = \left(\frac{1}{n} \sum_i X_i \right)^p \leq \frac{1}{n} \sum_i X_i^p.
\end{align}
Hence, applying \eqref{eq:applyJensen_p} $n$ times for $p=4$,
\begin{align} \tag{3}\label{eq:bound_barx4}
\sum_i \bar X^4 = \sum_i \left(\frac{1}{n} \sum_j X_j \right)^4 \leq \sum_i \frac{1}{n} \sum_j X_j^4 = \sum_j X_j^4 .
\end{align}
Similarly, using Hölder's inequality with $p=q=2$ and using \eqref{eq:bound_barx4},
\begin{align} \tag{4}\label{eq:bound_barx2}
\sum_i X_i^2 \bar X^2 &\leq \left(\sum_i X_i^4 \right)^{1/2} \left(\sum_i \bar X^4 \right)^{1/2}\\& \leq \left(\sum_i X_i^4 \right)^{1/2} \left( \sum_i X_i^4 \right)^{1/2} \\&= \sum_j X_j^4 .
\end{align}
Furthermore, using \eqref{eq:applyJensen_p} again
\begin{align} \tag{5}\label{eq:bound_barx3}
- \sum_i X_i\bar X^3 &= - n \bar X^4 \leq n \bar X^4 \\&\leq n \left(\frac{1}{n} \sum_j X_j^4 \right) \\&= \sum_j X_j^4
\end{align}
Using Hölder's inequality with $p=4/3$ and $q=4$ and using \eqref{eq:bound_barx4},
\begin{align} \tag{6}\label{eq:bound_barx}
-\sum_i X_i^3\bar X &\leq \sum_i \vert X_i^3\bar X \vert \\&\leq \left( \sum_i X_i^4 \right)^{3/4} \left( \sum_i \bar X^4 \right)^{1/4} \\&\leq \sum_i X_i^4
\end{align}
Combining \eqref{eq:sum_binom}, \eqref{eq:bound_barx4}, \eqref{eq:bound_barx2}, \eqref{eq:bound_barx3}, and \eqref{eq:bound_barx}, we obtain
\begin{align*}
\sum_i(X_i-\bar X)^4 \leq 16 \sum_i X_i^4,
\end{align*}
which was to be shown.
|
Bounding sum of quartic deviations from sample mean
So, I believe we have a proof - actually provided by my PhD Student Stephan Hetzenecker, who is however not here, so I'll post it in his name!
W.l.o.g., let $\mu = 0$ (otherwise define $Z_i := X_i - \
|
48,958 |
Bounding sum of quartic deviations from sample mean
|
Let $A = 16\sum\limits_{i=1}^{n}(X_i-\mu)^4-\sum\limits_i(X_i-\bar X)^4$
$=\sum\limits_{i=1}^{n}\left(4(X_i-\mu)^2+(X_i-\bar X)^2\right)\left(2(X_i-\mu)+(X_i-\bar X)\right)\left(2(X_i-\mu)-(X_i-\bar X)\right)$
$=\sum\limits_{i=1}^{n}\left(4a_i^2+b_i^2\right)\left(2a_i+b_i\right)\left(2a_i-b_i\right)$, by letting $X_i-\mu=a_i$ and $X_i-\bar{X}=b_i$
Note that $\sum\limits_{i=1}^{n} b_i = \sum\limits_{i=1}^{n}X_i - n\bar{X}= n\bar{X}- n\bar{X} = 0$
Now, notice if $X_i$s are ordered in an increasing sequence, i.e., w.l.o.g. let $X_1\leq X_2\leq\ldots\leq X_n$, we shall have $a_1\leq a_2\leq\ldots \leq a_n$ and $b_1 \leq b_2 \leq \ldots \leq b_n$, since $\mu, \bar{X}$ are constants.
$\implies 4a_1^2+b_1^2\leq4a_2^2+b_2^2\leq\ldots\leq 4a_n^2+b_n^2$ and $2a_1+b_1\leq 2a_2+b_2\leq\ldots\leq 2a_n+b_n$
$\implies (4a_1^2+b_1^2)(2a_1+b_1)\leq(4a_2^2+b_2^2)(2a_2+b_2)\leq\ldots\leq (4a_n^2+b_n^2)(2a_n+b_n)$, since $4a_i^2+b_i^2 \geq 0$, being sum of squares of real numbers
Similarly, $2a_i-b_i=X_i-2\mu+\bar{X}$
$\implies 2a_1-b_1\leq 2a_2-b_2\leq\ldots\leq 2a_n-b_n$, since $X_1\leq X_2\leq\ldots\leq X_n$
Now, let's use Chebyshev's sum inequality: if $c_1\leq c_2\leq\ldots\leq c_n$ and $d_1\leq d_2\leq\ldots\leq d_n$ are real sequences, then $\frac{1}{n}\sum\limits_{i=1}^{n}c_i d_i \geq \left(\frac{1}{n}\sum\limits_{i=1}^{n}c_i\right)\left(\frac{1}{n}\sum\limits_{i=1}^{n}d_i\right)$.
Now, applying Chebyshev's inequality twice, we have,
$\frac{A}{n}=\frac{1}{n}
\sum\limits_{i=1}^{n}\left(4a_i^2+b_i^2\right)\left(2a_i+b_i\right)\left(2a_i-b_i\right)\geq \left(\frac{1}{n} \sum\limits_{i=1}^{n}(4a_i^2+b_i^2)(2a_i+b_i)\right)\left(\frac{1}{n} \sum\limits_{i=1}^{n}(2a_i-b_i)\right)$
$\geq \left(\left(\frac{1}{n}\sum\limits_{i=1}^{n}(4a_i^2+b_i^2)\right)\left(\frac{1}{n} \sum\limits_{i=1}^{n}(2a_i+b_i)\right)\right)\left(\frac{1}{n} \sum\limits_{i=1}^{n}(2a_i-b_i)\right)$
$= \left(\frac{1}{n} \sum\limits_{i=1}^{n}(4a_i^2+b_i^2)\right)\left(\frac{1}{n} \sum\limits_{i=1}^{n}(2a_i)\right)\left(\frac{1}{n} \sum\limits_{i=1}^{n}(2a_i)\right)$, since $\sum\limits_{i=1}^{n}b_i=0$
$\geq \left(\frac{1}{n} \sum\limits_{i=1}^{n}(4a_i^2+b_i^2)\right)\left(\frac{1}{n} \sum\limits_{i=1}^{n}(2a_i)\right)^2 \geq 0$, being the product of a sum of squares and a squared average of real numbers
$\implies A = 16\sum\limits_{i=1}^{n}(X_i-\mu)^4-\sum\limits_i(X_i-\bar X)^4\geq 0$
$\implies \sum\limits_i(X_i-\bar X)^4 \leq 16\sum\limits_{i=1}^{n}(X_i-\mu)^4$
|
Bounding sum of quartic deviations from sample mean
|
Let $A = 16\sum\limits_{i=1}^{n}(X_i-\mu)^4-\sum\limits_i(X_i-\bar X)^4$
$=\sum\limits_{i=1}^{n}\left(4(X_i-\mu)^2+(X_i-\bar X)^2\right)\left(2(X_i-\mu)+(X_i-\bar X)\right)\left(2(X_i-\mu)-(X_i-\bar X
|
Bounding sum of quartic deviations from sample mean
Let $A = 16\sum\limits_{i=1}^{n}(X_i-\mu)^4-\sum\limits_i(X_i-\bar X)^4$
$=\sum\limits_{i=1}^{n}\left(4(X_i-\mu)^2+(X_i-\bar X)^2\right)\left(2(X_i-\mu)+(X_i-\bar X)\right)\left(2(X_i-\mu)-(X_i-\bar X)\right)$
$=\sum\limits_{i=1}^{n}\left(4a_i^2+b_i^2\right)\left(2a_i+b_i\right)\left(2a_i-b_i\right)$, by letting $X_i-\mu=a_i$ and $X_i-\bar{X}=b_i$
Note that $\sum\limits_{i=1}^{n} b_i = \sum\limits_{i=1}^{n}X_i - n\bar{X}= n\bar{X}- n\bar{X} = 0$
Now, notice if $X_i$s are ordered in an increasing sequence, i.e., w.l.o.g. let $X_1\leq X_2\leq\ldots\leq X_n$, we shall have $a_1\leq a_2\leq\ldots \leq a_n$ and $b_1 \leq b_2 \leq \ldots \leq b_n$, since $\mu, \bar{X}$ are constants.
$\implies 4a_1^2+b_1^2\leq4a_2^2+b_2^2\leq\ldots\leq 4a_n^2+b_n^2$ and $2a_1+b_1\leq 2a_2+b_2\leq\ldots\leq 2a_n+b_n$
$\implies (4a_1^2+b_1^2)(2a_1+b_1)\leq(4a_2^2+b_2^2)(2a_2+b_2)\leq\ldots\leq (4a_n^2+b_n^2)(2a_n+b_n)$, since $4a_i^2+b_i^2 \geq 0$, being sum of squares of real numbers
Similarly, $2a_i-b_i=X_i-2\mu+\bar{X}$
$\implies 2a_1-b_1\leq 2a_2-b_2\leq\ldots\leq 2a_n-b_n$, since $X_1\leq X_2\leq\ldots\leq X_n$
Now, let's use Chebyshev's sum inequality: if $c_1\leq c_2\leq\ldots\leq c_n$ and $d_1\leq d_2\leq\ldots\leq d_n$ are real sequences, then $\frac{1}{n}\sum\limits_{i=1}^{n}c_i d_i \geq \left(\frac{1}{n}\sum\limits_{i=1}^{n}c_i\right)\left(\frac{1}{n}\sum\limits_{i=1}^{n}d_i\right)$.
Now, applying Chebyshev's inequality twice, we have,
$\frac{A}{n}=\frac{1}{n}
\sum\limits_{i=1}^{n}\left(4a_i^2+b_i^2\right)\left(2a_i+b_i\right)\left(2a_i-b_i\right)\geq \left(\frac{1}{n} \sum\limits_{i=1}^{n}(4a_i^2+b_i^2)(2a_i+b_i)\right)\left(\frac{1}{n} \sum\limits_{i=1}^{n}(2a_i-b_i)\right)$
$\geq \left(\left(\frac{1}{n}\sum\limits_{i=1}^{n}(4a_i^2+b_i^2)\right)\left(\frac{1}{n} \sum\limits_{i=1}^{n}(2a_i+b_i)\right)\right)\left(\frac{1}{n} \sum\limits_{i=1}^{n}(2a_i-b_i)\right)$
$= \left(\frac{1}{n} \sum\limits_{i=1}^{n}(4a_i^2+b_i^2)\right)\left(\frac{1}{n} \sum\limits_{i=1}^{n}(2a_i)\right)\left(\frac{1}{n} \sum\limits_{i=1}^{n}(2a_i)\right)$, since $\sum\limits_{i=1}^{n}b_i=0$
$\geq \left(\frac{1}{n} \sum\limits_{i=1}^{n}(4a_i^2+b_i^2)\right)\left(\frac{1}{n} \sum\limits_{i=1}^{n}(2a_i)\right)^2 \geq 0$, being the product of a sum of squares and a squared average of real numbers
$\implies A = 16\sum\limits_{i=1}^{n}(X_i-\mu)^4-\sum\limits_i(X_i-\bar X)^4\geq 0$
$\implies \sum\limits_i(X_i-\bar X)^4 \leq 16\sum\limits_{i=1}^{n}(X_i-\mu)^4$
|
Bounding sum of quartic deviations from sample mean
Let $A = 16\sum\limits_{i=1}^{n}(X_i-\mu)^4-\sum\limits_i(X_i-\bar X)^4$
$=\sum\limits_{i=1}^{n}\left(4(X_i-\mu)^2+(X_i-\bar X)^2\right)\left(2(X_i-\mu)+(X_i-\bar X)\right)\left(2(X_i-\mu)-(X_i-\bar X
|
48,959 |
Is it possible Fisher information matrix be indefinite?
|
From https://en.wikipedia.org/wiki/Observed_information the observed Fisher information matrix is just the negative Hessian of the log likelihood function. If your log likelihood function is not concave at the point where you evaluate it, then that negative Hessian need not be positive semi-definite, and it can be indefinite.
This is the case if you are using the "Observed Fisher Information Matrix" from https://en.wikipedia.org/wiki/Scoring_algorithm. If you are using the canonical notion of the Fisher Information Matrix (https://en.wikipedia.org/wiki/Fisher_information#Matrix_form) then it must be positive semi-definite.
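A concrete toy illustration (my own code, not taken from the linked pages): for a normal sample, the observed information, i.e. the Hessian of the negative log-likelihood, is indefinite when evaluated at a point far from the MLE.
set.seed(1)
x <- rnorm(50)
negll <- function(p) -sum(dnorm(x, mean = p[1], sd = p[2], log = TRUE))
J <- optimHess(c(mean(x), 10 * sd(x)), negll)   # observed information at (mu = xbar, sigma = 10*s)
eigen(J)$values                                 # one positive and one negative eigenvalue: indefinite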
|
Is it possible Fisher information matrix be indefinite?
|
From https://en.wikipedia.org/wiki/Observed_information the observed Fisher information matrix is just the negative Hessian of the log likelihood function. If your log likelihood function is not conve
|
Is it possible Fisher information matrix be indefinite?
From https://en.wikipedia.org/wiki/Observed_information the observed Fisher information matrix is just the negative Hessian of the log likelihood function. If your log likelihood function is not concave at the point where you evaluate it, then that negative Hessian need not be positive semi-definite, and it can be indefinite.
This is the case if you are using the "Observed Fisher Information Matrix" from https://en.wikipedia.org/wiki/Scoring_algorithm. If you are using the canonical notion of the Fisher Information Matrix (https://en.wikipedia.org/wiki/Fisher_information#Matrix_form) then it must be positive semi-definite.
|
Is it possible Fisher information matrix be indefinite?
From https://en.wikipedia.org/wiki/Observed_information the observed Fisher information matrix is just the negative Hessian of the log likelihood function. If your log likelihood function is not conve
|
48,960 |
Best way to construct a QQ-plot
|
Your normal Q-Q plots look fine. For a sample of size $n=50,$ it seems you may be striving for more precision than Q-Q plots usually provide.
Several styles of normal Q-Q plots are
used to judge normality of a possibly normal sample. Some Q-Q plots put data on the horizontal axis and some on the vertical axis. (Preferred usage
seems to vary by country.) In R, the default is to put data on the vertical axis, but the parameter datax=T will put data on the horizontal axis.
Looking at plots
is sometimes more useful than formal tests of goodness-of-fit to a normal
distribution. But looking at a plot is not the same thing as a formal test, so assessing normality from a plot requires familiarity with the kind of plot you use. The general idea is that the normal quantile function is
transformed to a straight line in making a Q-Q plot. So the Q-Q plot of
a normal sample should be 'nearly' linear (perhaps with more deviation
from a line in the tails than in the center of the sample).
Two kinds of guide lines have been used with the Q-Q plots made in R. One kind of line (blue, left below) connects the lower quartiles (data and theoretical) with the upper quartiles. The other uses $y = -\bar X/S + x/S,$ where $\bar X$ and $S$ are the sample mean and SD, respectively (when data is on the horizontal axis).
set.seed(2021)
x = rnorm(100, 50, 7) # SD is 7
par(mfrow=c(1,2))
qqnorm(x, datax=T)
qqline(x, datax=T, col="blue", lwd=2)
qqnorm(x, datax=T)
abline(-mean(x)/sd(x), 1/sd(x), col="red", lwd=2)
par(mfrow=c(1,1))
Some statistical programs put curved 'confidence bands' around the
data cloud made by the Q-Q plot, with the idea that "too many" data points
outside the bands may serve to reject in an informal test of normality.
Another approach is to superimpose 20 additional Q-Q plots in the background of the Q-Q plot of the data, to suggest an informal 95% confidence region. Data known to be sampled from a normal distribution are used. The mean and variance of these
additional samples may be known or hypothetical $\mu$ and $\sigma$ or
the mean and SD of the sample being plotted.
To make these additional Q-Q plots (and for other purposes) it is
sometimes convenient to be able to make Q-Q plots 'from scratch'
instead of using a pre-programmed procedure.
For a sample of size $n$ it is convenient to use theoretical quantiles
and sorted data as below. The figure below uses the same data x as above.
n = length(x)                                       # here n = 100
th.quant = qnorm(seq(.5/n, 1 - .5/n, length = n))   # theoretical normal quantiles
dta.quant = sort(x)                                 # sample quantiles (the sorted data)
qqnorm(x, datax=T)
for(i in 1:20) {                                    # 20 simulated normal samples in the background
  points(sort(rnorm(n, mean(x), sd(x))), th.quant, col="green")
}
points(dta.quant, th.quant, pch=19)                 # refresh the original points on top
|
Best way to construct a QQ-plot
|
Your normal Q-Q plots look fine. For a sample of size $n=50,$ it seems you may be striving for more precision than Q-Q plots usually provide.
Several styles of normal Q-Q plots are
used to judge norma
|
Best way to construct a QQ-plot
Your normal Q-Q plots look fine. For a sample of size $n=50,$ it seems you may be striving for more precision than Q-Q plots usually provide.
Several styles of normal Q-Q plots are
used to judge normality of a possibly normal sample. Some Q-Q plots put data on the horizontal axis and some on the vertical axis. (Preferred usage
seems to vary by country.) In R, the default is to put data on the vertical axis, but the parameter datax=T will put data on the horizontal axis.
Looking at plots
is sometimes more useful than formal tests of goodness-of-fit to a normal
distribution. But looking at a plot is not the same thing as a formal test, so assessing normality from a plot requires familiarity with the kind of plot you use. The general idea is that the normal quantile function is
transformed to a straight line in making a Q-Q plot. So the Q-Q plot of
a normal sample should be 'nearly' linear (perhaps with more deviation
from a line in the tails than in the center of the sample).
Two kinds of guide lines have been used with the Q-Q plots made in R. One kind of line (blue, left below) connects the lower quartiles (data and theoretical) with the upper quartiles. The other uses $y = -\bar X/S + x/S,$ where $\bar X$ and $S$ are the sample mean and SD, respectively (when data is on the horizontal axis).
set.seed(2021)
x = rnorm(100, 50, 7) # SD is 7
par(mfrow=c(1,2))
qqnorm(x, datax=T)
qqline(x, datax=T, col="blue", lwd=2)
qqnorm(x, datax=T)
abline(-mean(x)/sd(x), 1/sd(x), col="red", lwd=2)
par(mfrow=c(1,1))
Some statistical programs put curved 'confidence bands' around the
data cloud made by the Q-Q plot, with the idea that "too many" data points
outside the bands may serve to reject in an informal test of normality.
Another approach is to superimpose 20 additional Q-Q plots in the background of the Q-Q plot of the data, to suggest an informal 95% confidence region. Data known to be sampled from a normal distribution are used. The mean and variance of these
additional samples may be known or hypothetical $\mu$ and $\sigma$ or
the mean and SD of the sample being plotted.
To make these additional Q-Q plots (and for other purposes) it is
sometimes convenient to be able to make Q-Q plots 'from scratch'
instead of using a pre-programmed procedure.
For a sample of size $n$ it is convenient to use theoretical quantiles
and sorted data as below. The figure below uses the same data x as above.
n = length(x)                                       # here n = 100
th.quant = qnorm(seq(.5/n, 1 - .5/n, length = n))   # theoretical normal quantiles
dta.quant = sort(x)                                 # sample quantiles (the sorted data)
qqnorm(x, datax=T)
for(i in 1:20) {                                    # 20 simulated normal samples in the background
  points(sort(rnorm(n, mean(x), sd(x))), th.quant, col="green")
}
points(dta.quant, th.quant, pch=19)                 # refresh the original points on top
|
Best way to construct a QQ-plot
Your normal Q-Q plots look fine. For a sample of size $n=50,$ it seems you may be striving for more precision than Q-Q plots usually provide.
Several styles of normal Q-Q plots are
used to judge norma
|
48,961 |
Deriving the limiting distribution of the Hodges-Le Cam estimator in Bickel and Doksum (2015)
|
Yes, that's for $\theta>0$, but the argument for $\theta<0$ is exactly symmetric.
The key random variable is the indicator $A_n=\{|\bar X_n|\leq n^{-1/4}\}$. Since $\bar X_n\sim N(\theta,1/n)$, we know $P(A_n)$ exactly. Asymptotically, if $\theta=0$, $P(A_n)\to 1$. If $\theta\neq 0$, $P(A_n)\to 0$.
Now,
$$\tilde\theta_n= A_n\times 0 + (1-A_n)\times \bar X_n$$
and so
$$\sqrt{n}(\tilde\theta-\theta)=A_n\,\sqrt{n}\,(0-\theta)+(1-A_n)\sqrt{n}(\bar X_n-\theta).$$
Since $P(A_n=1)\to 0$ when $\theta\neq 0$, the first term converges to $0$ in probability, and Slutsky's theorem gives
$$\sqrt{n}(\tilde\theta-\theta)\stackrel{p}{\to}0$$
if $\theta=0$ and
$$\sqrt{n}(\tilde\theta-\theta)\stackrel{d}{\to}N(0,1)$$
if $\theta\neq 0$.
So, for any fixed $\theta$, the asymptotic distribution is either identical to that of $\bar X_n$ (if $\theta\neq 0$) or better (if $\theta=0$).
$\tilde\theta$ has a mixed discrete and continuous distribution: it has a point mass at zero, it has zero mass on $[-n^{-1/4},0)$ and $(0,n^{-1/4}]$, and otherwise it has a strictly positive density.
$P(\tilde\theta=0)$ is well-defined and non-zero; it's just $P(A_n)$. It is non-zero because $\tilde\theta$ has point mass at zero
$P(\tilde\theta=\bar X_n)$ is well-defined and non-zero; it's just $1-P(A_n)$. It is non-zero even though $\bar X_n$ is continuous, but that's fine because $\tilde\theta-\bar X_n$ has point mass at zero.
While you didn't ask, the other interesting part about the Hodges estimator is how it breaks down. If you take $\theta=n^{-1/4}$, then $P(A_n)\approx 1/2$. When $A_n=1$, $\tilde\theta_n=0$, so $|\sqrt{n}(\tilde\theta_n-\theta)| = n^{1/2}\,n^{-1/4}=n^{1/4}$, which is very large for large $n$.
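To see that breakdown numerically, here is a simulation sketch (my own code, not from Bickel and Doksum) comparing the scaled risk $n\,E(\hat\theta_n-\theta)^2$ of the Hodges estimator and of $\bar X_n$ at $\theta=0$, $\theta=n^{-1/4}$ and $\theta=1$:
set.seed(1)
hodges <- function(xbar, n) ifelse(abs(xbar) <= n^(-1/4), 0, xbar)
n <- 1000; thetas <- c(0, n^(-1/4), 1)
risk <- sapply(thetas, function(th) {
  xbar <- rnorm(2e4, th, 1 / sqrt(n))                       # sampling distribution of the mean
  c(hodges = n * mean((hodges(xbar, n) - th)^2), mean = n * mean((xbar - th)^2))
})
colnames(risk) <- paste0("theta=", signif(thetas, 3))
round(risk, 2)   # Hodges row: ~0 at theta=0, ~1 at theta=1, much larger than 1 near theta=n^(-1/4); mean row: ~1 throughout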
|
Deriving the limiting distribution of the Hodges-Le Cam estimator in Bickel and Doksum (2015)
|
Yes, that's for $\theta>0$, but the argument for $\theta<0$ is exactly symmetric.
The key random variable is the indicator $A_n=\{|\bar X_n|\leq n^{-1/4}\}$. Since $\bar X_n\sim N(\theta,1/n)$, we know $P
|
Deriving the limiting distribution of the Hodges-Le Cam estimator in Bickel and Doksum (2015)
Yes, that's for $\theta>0$, but the argument for $\theta<0$ is exactly symmetric.
The key random variable is the indicator $A_n=\{|\bar X_n|\leq n^{-1/4}\}$. Since $\bar X_n\sim N(\theta,1/n)$, we know $P(A_n)$ exactly. Asymptotically, if $\theta=0$, $P(A_n)\to 1$. If $\theta\neq 0$, $P(A_n)\to 0$.
Now,
$$\tilde\theta_n= A_n\times 0 + (1-A_n)\times \bar X_n$$
and so
$$\sqrt{n}(\tilde\theta-\theta)=A_n\,\sqrt{n}\,(0-\theta)+(1-A_n)\sqrt{n}(\bar X_n-\theta).$$
Since $P(A_n=1)\to 0$ when $\theta\neq 0$, the first term converges to $0$ in probability, and Slutsky's theorem gives
$$\sqrt{n}(\tilde\theta-\theta)\stackrel{p}{\to}0$$
if $\theta=0$ and
$$\sqrt{n}(\tilde\theta-\theta)\stackrel{d}{\to}N(0,1)$$
if $\theta\neq 0$.
So, for any fixed $\theta$, the asymptotic distribution is either identical to that of $\bar X_n$ (if $\theta\neq 0$) or better (if $\theta=0$).
$\tilde\theta$ has a mixed discrete and continuous distribution: it has a point mass at zero, it has zero mass on $[-n^{-1/4},0)$ and $(0,n^{-1/4}]$, and otherwise it has a strictly positive density.
$P(\tilde\theta=0)$ is well-defined and non-zero; it's just $P(A_n)$. It is non-zero because $\tilde\theta$ has point mass at zero
$P(\tilde\theta=\bar X_n)$ is well-defined and non-zero; it's just $1-P(A_n)$. It is non-zero even though $\bar X_n$ is continuous, but that's fine because $\tilde\theta-\bar X_n$ has point mass at zero.
While you didn't ask, the other interesting part about the Hodges estimator is how it breaks down. If you take $\theta=n^{-1/4}$, then $P(A_n)\approx 1/2$. When $A_n=1$, $\tilde\theta_n=0$, so $|\sqrt{n}(\tilde\theta_n-\theta)| = n^{1/2}\,n^{-1/4}=n^{1/4}$, which is very large for large $n$.
|
Deriving the limiting distribution of the Hodges-Le Cam estimator in Bickel and Doksum (2015)
Yes, that's for $\theta>0$, but the argument for $\theta<0$ is exactly symmetric.
The key random variable is the indicator $A_n=\{|\bar X_n|\leq n^{-1/4}\}$. Since $\bar X_n\sim N(\theta,1/n)$, we know $P
|
48,962 |
Simulate an example showing difference between fixed effects and mixed effects models
|
You are not doing anything wrong.
Fitting fixed effects for a grouping variable, instead of random intercepts, is a perfectly valid approach, so the results you get will be comparable.
So you might wonder: why bother fitting a mixed effects model at all? The answer is that when the number of groups becomes large you don't really want to fit fixed effects, because you then end up with a large number of estimates and consume a large number of degrees of freedom. Also, Occam's razor dictates that a more parsimonious model should be preferred, other things being equal, and with a large number of groups a fixed effects model is far from parsimonious.
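A minimal simulation sketch along these lines (my own code, using the lme4 package): with a handful of groups the slope estimate for x is essentially the same whether the grouping variable enters as fixed effects or as random intercepts.
library(lme4)
set.seed(1)
g <- factor(rep(1:8, each = 30))            # 8 groups, 30 observations each
u <- rnorm(8, 0, 2)                         # group-level effects
x <- rnorm(240)
y <- 1 + 0.5 * x + u[g] + rnorm(240)
coef(summary(lm(y ~ x + g)))["x", ]         # slope for x, group as fixed effects
fixef(lmer(y ~ x + (1 | g)))["x"]           # slope for x, group as random intercepts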
|
Simulate an example showing difference between fixed effects and mixed effects models
|
You are not doing anything wrong.
Fitting fixed effects for a grouping variable, instead of random intercepts, is a perfectly valid approach, so the results you get will be comparable.
So you might wo
|
Simulate an example showing difference between fixed effects and mixed effects models
You are not doing anything wrong.
Fitting fixed effects for a grouping variable, instead of random intercepts, is a perfectly valid approach, so the results you get will be comparable.
So you might wonder: why bother fitting a mixed effects model at all? The answer is that when the number of groups becomes large you don't really want to fit fixed effects, because you then end up with a large number of estimates and consume a large number of degrees of freedom. Also, Occam's razor dictates that a more parsimonious model should be preferred, other things being equal, and with a large number of groups a fixed effects model is far from parsimonious.
|
Simulate an example showing difference between fixed effects and mixed effects models
You are not doing anything wrong.
Fitting fixed effects for a grouping variable, instead of random intercepts, is a perfectly valid approach, so the results you get will be comparable.
So you might wo
|
48,963 |
A proof that the median is a nonlinear statistical functional
|
Here is a very simple and almost-rigorous argument.
The medians of the two components are $m_1=\mu_1$ and $m_2=\mu_2$. In particular, they are independent of the components' standard deviations $\sigma_1$ and $\sigma_2$. So any weighted linear combination of the components' medians will also be independent of $\sigma_1$ and $\sigma_2$.
Now, consider our mixture. We can assume $\mu_1<\mu_2$. The median $m$ of the mixture will lie between $m_1=\mu_1$ and $m_2=\mu_2$.
We keep all parameters fixed and reduce $\sigma_1$. This shifts probability mass for our mixture to the left, so $m$ will get smaller. (*) Thus, $m$ is a (non-trivial) function of $\sigma_1$. But above, we have seen that any weighted linear combination of $m_1$ and $m_2$ will be independent of both $\sigma_1$ and $\sigma_2$, a contradiction. Thus, $m$ cannot be a weighted linear combination of $m_1$ and $m_2$.
Of course, it's statement (*) that is intuitively obvious but not yet rigorous. We still need to prove that $m$ is an increasing function of $\sigma_1$ if all other parameters are kept fixed. It may be possible to show that the defining function of $m$ per equation (3) in this earlier answer defines an $m$ that is a differentiable function of $\sigma_1$, then do some implicit differentiation (where we still probably need to argue that we can differentiate under the integral for the improper integral $\int_{-\infty}^m$) and finally find that $\frac{dm}{d\sigma_1}>0$.
Alternatively, it would be easy to take specific normals, say $N(0,\sigma_1^2)$ and $N(2,1)$ with equal weights. For $\sigma_1=1$, we have a symmetric situation, so $m=1$. We note that the $N(2,1)$ has probability mass of $0.067$ to the left of $0.5$ (R: pnorm(0.5,2,1)), so if we reduce $\sigma_1$ far enough that the mass of the $N(0,\sigma_1^2)$ to the left of $0.5$ is large enough, we can estimate that the new median $m'$ of the mixture with $\sigma_1\ll1$ definitely satisfies $m'<0.5$. For this, we just need to trust quantiles, or tables.
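The specific example in the last paragraph is easy to check numerically (my own sketch): solve for the median of the equal-weight mixture of $N(0,\sigma_1^2)$ and $N(2,1)$ and watch it move left as $\sigma_1$ shrinks.
mix_median <- function(s1) {
  # the median m solves 0.5 * F1(m) + 0.5 * F2(m) = 1/2 for the equal-weight mixture
  uniroot(function(m) 0.5 * pnorm(m, 0, s1) + 0.5 * pnorm(m, 2, 1) - 0.5, c(-10, 10))$root
}
sapply(c(1, 0.5, 0.1, 0.01), mix_median)   # decreases from 1 as sigma_1 shrinks, consistent with (*)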
|
A proof that the median is a nonlinear statistical functional
|
Here is a very simple and almost-rigorous argument.
The medians of the two components are $m_1=\mu_1$ and $m_2=\mu_2$. In particular, they are independent of the components' standard deviations $\sigm
|
A proof that the median is a nonlinear statistical functional
Here is a very simple and almost-rigorous argument.
The medians of the two components are $m_1=\mu_1$ and $m_2=\mu_2$. In particular, they are independent of the components' standard deviations $\sigma_1$ and $\sigma_2$. So any weighted linear combination of the components' medians will also be independent of $\sigma_1$ and $\sigma_2$.
Now, consider our mixture. We can assume $\mu_1<\mu_2$. The median $m$ of the mixture will lie between $m_1=\mu_1$ and $m_2=\mu_2$.
We keep all parameters fixed and reduce $\sigma_1$. This shifts probability mass for our mixture to the left, so $m$ will get smaller. (*) Thus, $m$ is a (non-trivial) function of $\sigma_1$. But above, we have seen that any weighted linear combination of $m_1$ and $m_2$ will be independent of both $\sigma_1$ and $\sigma_2$, a contradiction. Thus, $m$ cannot be a weighted linear combination of $m_1$ and $m_2$.
Of course, it's statement (*) that is intuitively obvious but not yet rigorous. We still need to prove that $m$ is an increasing function of $\sigma_1$ if all other parameters are kept fixed. It may be possible to show that the defining function of $m$ per equation (3) in this earlier answer defines an $m$ that is a differentiable function of $\sigma_1$, then do some implicit differentiation (where we still probably need to argue that we can differentiate under the integral for the improper integral $\int_{-\infty}^m$) and finally find that $\frac{dm}{d\sigma_1}>0$.
Alternatively, it would be easy to take specific normals, say $N(0,\sigma_1^2)$ and $N(2,1)$ with equal weights. For $\sigma_1=1$, we have a symmetric situation, so $m=1$. We note that the $N(2,1)$ has probability mass of $0.067$ to the left of $0.5$ (R: pnorm(0.5,2,1)), so if we reduce $\sigma_1$ far enough that the mass of the $N(0,\sigma_1^2)$ to the left of $0.5$ is large enough, we can estimate that the new median $m'$ of the mixture with $\sigma_1\ll1$ definitely satisfies $m'<0.5$. For this, we just need to trust quantiles, or tables.
|
A proof that the median is a nonlinear statistical functional
Here is a very simple and almost-rigorous argument.
The medians of the two components are $m_1=\mu_1$ and $m_2=\mu_2$. In particular, they are independent of the components' standard deviations $\sigm
|
48,964 |
Trying to understand the logic behind the p-value in hypothesis testing [duplicate]
|
For a continuous real-valued distribution $P(X = c)=0$, where $c$ is some real-valued constant. So with continuous distributions we tend to talk about the probability of events which are ranges of values, such as $P(X \ge c)$ which covers the range from $c$ to $\infty$, or $P(X\leq c)$ which covers the range from $-\infty$ to $c$, rather than specific singular numbers.
So 'as or more extreme than $c$' is another way of saying '$\ge c$' (or '$\leq c$', depending on the direction of the one-tailed test), or, in the case of two-tailed tests, '$\leq -c$ or $\ge c$' (because extreme can be in either direction from the 0 value at the null hypothesis).
|
Trying to understand the logic behind the p-value in hypothesis testing [duplicate]
|
For a continuous real-valued distribution $P(X = c)=0$, where $c$ is some real-valued constant. So with continuous distributions we tend to talk about the probability of events which are ranges of val
|
Trying to understand the logic behind the p-value in hypothesis testing [duplicate]
For a continuous real-valued distribution $P(X = c)=0$, where $c$ is some real-valued constant. So with continuous distributions we tend to talk about the probability of events which are ranges of values, such as $P(X \ge c)$ which covers the range from $c$ to $\infty$, or $P(X\leq c)$ which covers the range from $-\infty$ to $c$, rather than specific singular numbers.
So 'as or more extreme than $c$' is another way of saying '$\ge c$' (or '$\leq c$', depending on the direction of the one-tailed test), or, in the case of two-tailed tests, '$\leq -c$ or $\ge c$' (because extreme can be in either direction from the 0 value at the null hypothesis).
|
Trying to understand the logic behind the p-value in hypothesis testing [duplicate]
For a continuous real-valued distribution $P(X = c)=0$, where $c$ is some real-valued constant. So with continuous distributions we tend to talk about the probability of events which are ranges of val
|
48,965 |
Trying to understand the logic behind the p-value in hypothesis testing [duplicate]
|
Suppose you have normal data and wonder whether they are consistent
with $H_0: \mu = 50$ or whether to reject $H_0$ in favor of
$H_a: \mu > 50.$ A sample x of size $n = 20$ has mean $\bar X = 51.25,$
and standard deviation $S = 2.954.$
So the sample mean is greater than $50.$ The question is whether it is
sufficiently greater than $50$ to say that it is 'significantly greater'
than 50 in a statistical sense so that $H_0$ should be rejected at the 5%
level.
sort(x)
[1] 47 47 48 49 49 49 50 50 50 50
[11] 51 51 52 53 53 54 54 54 56 58
mean(x); sd(x)
[1] 51.25
[1] 2.953588
In the plot below, the value of $\bar X$ is shown as a dotted vertical line.
stripchart(x, meth="stack", pch=19)
abline(v = 50, col="green2")
abline(v = mean(x), col="blue", lwd=2, lty="dotted")
In a t test, the test statistic $T = \frac{\bar X-50}{S/\sqrt{n}} = 1.89$
takes the variability of the data into account. The critical value $c = 1.729$
of the t test cuts probability 5% from the upper tail of Student's t
distribution with DF = 19 degrees of freedom. We reject $H_0$ at the 5%
level of significance if $T \ge c = 1.729.$ So we do reject $H_0.$
qt(.95, 19)
[1] 1.729133
In R, a formal t test of $H_0$ against $H_a$ gives the following output.
t.test(x, mu = 50, alt="g")
One Sample t-test
data: x
t = 1.8927, df = 19, p-value = 0.03687
alternative hypothesis: true mean is greater than 50
95 percent confidence interval:
50.10801 Inf
sample estimates:
mean of x
51.25
Notice that there is no mention of the critical value $c.$ Instead
we have the P-value $0.037.$ This is the probability, computed under $H_0,$ that the
t statistic is at least as large as the observed value $T = 1.8927.$
1 - pt(1.8927, 19)
[1] 0.03686703
In the figure below the vertical red line is at the 5% critical value $c;$ the area under the density curve to the right of this
line is $0.05.$ The dotted black line shows the value of the t statistic; the area under the density curve to the right of this line is the P-Value.
R code for figure:
curve(dt(x, 19), -3.5, 3.5, ylab="PDF", xlab="t",
main="Density of T(19)")
abline(h = 0, col="green2")
abline(v = 0, col="green2")
abline(v = 1.729, col="red")
abline(v = 1.8927, lty="dotted", lwd=2)
Here are some comments about the use of the P-value instead of the critical
value:
It makes sense for P-values to be computed in terms of values as or more extreme than the value observed. If you are willing to reject $H_0$ for
$\bar X = 51.25$ (t statistic 1.8927), then surely you would also reject for
a more extreme value such as, say $\bar X = 53.11.$
If $T \ge c,$ the 5% critical value, then the P-value is smaller than 5%.
So it is just as easy to use the P-value to decide whether to reject as to use the critical value.
If someone wants to test at the 4% level, instead of the 5% level, then the result is to reject because the P-value is also smaller than 4%. By contrast, if someone wants to test at the 1% level, then $H_0$ is not rejected because the P-value exceeds 1%. (Notice that it is not necessary
to "tell" the software what significance level you have in mind; the P-value makes it possible to use any desired significance level.)
For usual levels of significance, such as 10%, 5%, 2%, 1%, 0.1%, you
can get matching critical values from most printed tables of t distributions. However, one cannot generally get exact P-values from printed tables; P-values are 'computer age' values.
My example is for a one-sided alternative. If you are using a two-sided alternative, then you have to consider the probability of
a more extreme value in either direction. Often, the P-value
gets doubled for a two-sided test.
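For example, with the observed statistic above, a two-sided P-value can be obtained in R by doubling the one-sided tail area (a small illustration added here; the doubling is appropriate because the t distribution is symmetric about zero):
2 * (1 - pt(1.8927, 19))   # twice the one-sided tail area
[1] 0.07373406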
Note: Fake data for my example were sampled using R as shown below:
set.seed(316)
x = round(rnorm(20, 52, 3))
|
Trying to understand the logic behind the p-value in hypothesis testing [duplicate]
|
Suppose you have normal data and wonder whether they are consistent
with $H_0: \mu = 50$ or whether to reject $H_0$ in favor of
$H_a: \mu > 50.$ A sample x of size $n = 20$ has mean $\bar X = 51.25,$
|
Trying to understand the logic behind the p-value in hypothesis testing [duplicate]
Suppose you have normal data and wonder whether they are consistent
with $H_0: \mu = 50$ or whether to reject $H_0$ in favor of
$H_a: \mu > 50.$ A sample x of size $n = 20$ has mean $\bar X = 51.25,$
and standard deviation $S = 2.954.$
So the sample mean is greater than $50.$ The question is whether it is
sufficiently greater than $50$ to say that it is 'significantly greater'
than 50 in a statistical sense so that $H_0$ should be rejected at the 5%
level.
sort(x)
[1] 47 47 48 49 49 49 50 50 50 50
[11] 51 51 52 53 53 54 54 54 56 58
mean(x); sd(x)
[1] 51.25
[1] 2.953588
In the plot below, the value of $\bar X$ is shown as a dotted vertical line.
stripchart(x, meth="stack", pch=19)
abline(v = 50, col="green2")
abline(v = mean(x), col="blue", lwd=2, lty="dotted")
In a t test, the test statistic $T = \frac{\bar X-50}{S/\sqrt{n}} = 1.89$
takes the variability of the data into account. The critical value $c = 1.729$
of the t test cuts probability 5% from the upper tail of Student's t
distribution with DF = 19 degrees of freedom. We reject $H_0$ at the 5%
level of significance if $T \ge c = 1.729.$ So we do reject $H_0.$
qt(.95, 19)
[1] 1.729133
In R, a formal t test of $H_0$ against $H_a$ gives the following output.
t.test(x, mu = 50, alt="g")
One Sample t-test
data: x
t = 1.8927, df = 19, p-value = 0.03687
alternative hypothesis: true mean is greater than 50
95 percent confidence interval:
50.10801 Inf
sample estimates:
mean of x
51.25
Notice that there is no mention of the critical value $c.$ Instead
we have the P-value $0.037.$ This is the probability, computed under $H_0,$ that the
t statistic is at least as large as the observed value $T = 1.8927.$
1 - pt(1.8927, 19)
[1] 0.03686703
In the figure below the vertical red line is at the 5% critical value $c;$ the area under the density curve to the right of this
line is $0.05.$ The dotted black line shows the value of the t statistic; the area under the density curve to the right of this line is the P-Value.
R code for figure:
curve(dt(x, 19), -3.5, 3.5, ylab="PDF", xlab="t",
main="Density of T(19)")
abline(h = 0, col="green2")
abline(v = 0, col="green2")
abline(v = 1.729, col="red")
abline(v = 1.8927, lty="dotted", lwd=2)
Here are some comments about the use of the P-value instead of the critical
value:
It makes sense for P-values to be computed in terms of values as or more extreme than the value observed. If you are willing to reject $H_0$ for
$\bar X = 51.25$ (t statistic 1.8927), then surely you would also reject for
a more extreme value such as, say $\bar X = 53.11.$
If $T \ge c,$ the 5% critical value, then the P-value is smaller than 5%.
So it is just as easy to use the P-value to decide whether to reject as to use the critical value.
If someone wants to test at the 4% level, instead of the 5% level, then the result is to reject because the P-value is also smaller than 4%. By contrast, if someone wants to test at the 1% level, then $H_0$ is not rejected because the P-value exceeds 1%. (Notice that it is not necessary
to "tell" the software what significance level you have in mind; the P-value makes it possible to use any desired significance level.)
For usual levels of significance, such as 10%, 5%, 2%, 1%, 0.1%, you
can get matching critical values from most printed tables of t distributions. However, one cannot generally get exact P-values from printed tables; P-values are 'computer age' values.
My example is for a one-sided alternative. If you are using a two-sided alternative, then you have to consider the probability of
a more extreme value in either direction. Often, the P-value
gets doubled for a two-sided test.
Note: Fake data for my example were sampled using R as shown below:
set.seed(316)
x = round(rnorm(20, 52, 3))
|
Trying to understand the logic behind the p-value in hypothesis testing [duplicate]
Suppose you have normal data and wonder whether they are consistent
with $H_0: \mu = 50$ or whether to reject $H_0$ in favor of
$H_a: \mu > 50.$ A sample x of size $n = 20$ has mean $\bar X = 51.25,$
|
48,966 |
Example where the posterior from Jags and Stan are really different and have real impacts on decisions using the model
|
Whenever I want to get started with understanding a new statistical topic, I start by reading articles about it. In this case, I'd start with Carpenter et al. "Stan: A Probabilistic Programming Language," in the Journal of Statistical Software which introduces Stan. The first paragraph is enough to get us started.
The goal of the Stan project is to provide a flexible probabilistic programming language for statistical modeling along with a suite of inference tools for fitting models that are robust, scalable, and efficient.
Stan differs from BUGS (Lunn, Thomas, and Spiegelhalter 2000; Lunn, Spiegelhalter, Thomas, and Best 2009; Lunn, Jackson, Best, Thomas, and Spiegelhalter 2012) and JAGS (Plummer 2003) in two primary ways. First, Stan is based on a new imperative probabilistic programming language that is more flexible and expressive than the declarative graphical modeling languages underlying BUGS or JAGS, in ways such as declaring variables with types and supporting local variables and conditional statements. Second, Stan’s Markov chain Monte Carlo (MCMC) techniques are based on Hamiltonian Monte Carlo (HMC), a more efficient and robust sampler than Gibbs sampling or Metropolis Hastings for models with complex posteriors.1
The number at the end is a footnote. It has citations which support the claim made in that sentence. In this case, the footnote reads "Neal (2011) analyzes the scaling benefit of HMC with dimensionality. Hoffman and Gelman (2014) provide practical comparisons of Stan’s adaptive HMC algorithm with Gibbs, Metropolis, and standard HMC samplers," and the citations are
Neal R (2011). “MCMC Using Hamiltonian Dynamics.” In S Brooks, A Gelman, GL Jones, XL Meng (eds.), Handbook of Markov Chain Monte Carlo, pp. 116–162. Chapman and Hall/CRC.
Hoffman MD, Gelman A (2014). “The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo.” Journal of Machine Learning Research, 15(Apr), 1593–1623.
which will elaborate on the differences in more detail.
It's important to point out that the advantages of Stan cited in this passage are not related to the posteriors being different, but are facts about efficiency.
Indeed, one can show that, under certain conditions, Gibbs sampling, Metropolis-Hastings, and other MCMC samplers will converge to the posterior (although it might take far too long for the chains to mix compared to HMC/NUTS), so it would be surprising for HMC/NUTS to give a different posterior when these conditions are met.
Bob Carpenter, one of the developers of Stan, provides a concrete example of a case where Stan can solve a problem that Gibbs sampling cannot in this thread on the Stan forums.
[T]here’s an example of how to code exactly this model in the latent discrete parameters chapter of the users guide. You can find this example and others in my latest paper ["Comparing Bayesian Models of Annotation" by
Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, Massimo Poesio. Transactions of the Association for Computational Linguistics (2018)], all coded in Stan.
Gibbs is actually a very bad way to fit these models—it’s super slow to converge. These models used to take 24 hours to fit in WinBUGS with very poor mixing and they now fit in like 30 minutes in Stan. Just be careful to use reasonable inits because there’s a non-identifiability. Duco Veen’s visiting us at Columbia from Utrecht and working on a case study that should be out soon.
In other words, if you're trying to estimate this model and you run WinBUGS for 30 minutes, the chains that you get from the WinBUGS model will exhibit poor mixing, the model will not converge, and the samples will not be representative of the posterior density. At that point, you have a choice. You can wait another 23 hours and 30 minutes for the chains to mix, or you can code the model in Stan.
Not all parameterizations, or even all models, are going to be fast to estimate in Stan. There are problematic parameterizations, also discussed in the Stan User Guide, which have a geometry that's very hard for HMC/NUTS to navigate. The User Guide also contains suggested reparameterizations which can ameliorate these problems. This does not imply that all models can be estimated in Stan, or even that Stan will be more efficient for any particular model; some models are simply challenging, either generically or for Stan specifically.
That said, Stan is a tool that solves some specific problems more quickly compared to popular alternatives. Part of obtaining expertise is knowing how to differentiate among the various alternative tools and methods for solving problems and choosing the tool that is best for the job.
|
Example where the posterior from Jags and Stan are really different and have real impacts on decisio
|
Whenever I want to get started with understanding a new statistical topic, I start by reading articles about it. In this case, I'd start with Carpenter et al. "Stan: A Probabilistic Programming Langua
|
Example where the posterior from Jags and Stan are really different and have real impacts on decisions using the model
Whenever I want to get started with understanding a new statistical topic, I start by reading articles about it. In this case, I'd start with Carpenter et al. "Stan: A Probabilistic Programming Language," in the Journal of Statistical Software which introduces Stan. The first paragraph is enough to get us started.
The goal of the Stan project is to provide a flexible probabilistic programming language for statistical modeling along with a suite of inference tools for fitting models that are robust, scalable, and efficient.
Stan differs from BUGS (Lunn, Thomas, and Spiegelhalter 2000; Lunn, Spiegelhalter, Thomas, and Best 2009; Lunn, Jackson, Best, Thomas, and Spiegelhalter 2012) and JAGS (Plummer 2003) in two primary ways. First, Stan is based on a new imperative probabilistic programming language that is more flexible and expressive than the declarative graphical modeling languages underlying BUGS or JAGS, in ways such as declaring variables with types and supporting local variables and conditional statements. Second, Stan’s Markov chain Monte Carlo (MCMC) techniques are based on Hamiltonian Monte Carlo (HMC), a more efficient and robust sampler than Gibbs sampling or Metropolis Hastings for models with complex posteriors.1
The number at the end is a footnote. It has citations which support the claim made in that sentence. In this case, the footnote reads "Neal (2011) analyzes the scaling benefit of HMC with dimensionality. Hoffman and Gelman (2014) provide practical comparisons of Stan’s adaptive HMC algorithm with Gibbs, Metropolis, and standard HMC samplers," and the citations are
Neal R (2011). “MCMC Using Hamiltonian Dynamics.” In S Brooks, A Gelman, GL Jones, XL Meng (eds.), Handbook of Markov Chain Monte Carlo, pp. 116–162. Chapman and Hall/CRC.
Hoffman MD, Gelman A (2014). “The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo.” Journal of Machine Learning Research, 15(Apr), 1593–1623.
which will elaborate on the differences in more detail.
It's important to point out that the advantages of Stan cited in this passage are not related to the posteriors being different, but are facts about efficiency.
Indeed, one can show that, under certain conditions, Gibbs sampling, Metropolis-Hastings, and other MCMC samplers will converge to the posterior (although it might take far too long for the chains to mix compared to HMC/NUTS), so it would be surprising for HMC/NUTS to give a different posterior when these conditions are met.
Bob Carpenter, one of the developers of Stan, provides a concrete example of a case where Stan can solve a problem that Gibbs sampling cannot in this thread on the Stan forums.
[T]here’s an example of how to code exactly this model in the latent discrete parameters chapter of the users guide. You can find this example and others in my latest paper ["Comparing Bayesian Models of Annotation" by
Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, Massimo Poesio. Transactions of the Association for Computational Linguistics (2018)], all coded in Stan.
Gibbs is actually a very bad way to fit these models—it’s super slow to converge. These models used to take 24 hours to fit in WinBUGS with very poor mixing and they now fit in like 30 minutes in Stan. Just be careful to use reasonable inits because there’s a non-identifiability. Duco Veen’s visiting us at Columbia from Utrecht and working on a case study that should be out soon.
In other words, if you're trying to estimate this model and you run WinBUGS for 30 minutes, the chains that you get from the WinBUGS model will exhibit poor mixing, the model will not converge, and the samples will not be representative of the posterior density. At that point, you have a choice. You can wait another 23 hours and 30 minutes for the chains to mix, or you can code the model in Stan.
Not all parameterizations, or even all models, are going to be fast to estimate in Stan. There are problematic parameterizations, also discussed in the Stan User Guide, which have a geometry that's very hard for HMC/NUTS to navigate. The User Guide also contains suggested reparameterizations which can ameliorate these problems. This does not imply that all models can be estimated in Stan, or even that Stan will be more efficient for any particular model; some models are simply challenging, either generically or for Stan specifically.
That said, Stan is a tool that solves some specific problems more quickly compared to popular alternatives. Part of obtaining expertise is knowing how to differentiate among the various alternative tools and methods for solving problems and choosing the tool that is best for the job.
|
Example where the posterior from Jags and Stan are really different and have real impacts on decisio
Whenever I want to get started with understanding a new statistical topic, I start by reading articles about it. In this case, I'd start with Carpenter et al. "Stan: A Probabilistic Programming Langua
|
48,967 |
Example where the posterior from Jags and Stan are really different and have real impacts on decisions using the model
|
Currently there are no obvious real-world examples in which using Stan vs JAGS matters (that is, the posterior would be so different that it would induce substantively different decisions or conclusions). This is backed by personal experience, and it was further evidenced by the lack of answers to this very simple question, even after a bounty was offered.
|
Example where the posterior from Jags and Stan are really different and have real impacts on decisio
|
Currently there are no obvious real world examples in which using Stan vs JAGS matters (that is, the posterior would be so different that it would induce a substantively different decisions or conclus
|
Example where the posterior from Jags and Stan are really different and have real impacts on decisions using the model
Currently there are no obvious real-world examples in which using Stan vs JAGS matters (that is, the posterior would be so different that it would induce substantively different decisions or conclusions). This is backed by personal experience, and it was further evidenced by the lack of answers to this very simple question, even after a bounty was offered.
|
Example where the posterior from Jags and Stan are really different and have real impacts on decisio
Currently there are no obvious real world examples in which using Stan vs JAGS matters (that is, the posterior would be so different that it would induce a substantively different decisions or conclus
|
48,968 |
Origin of terms "sensitivity" and "specificity"
|
Binney, N., C. Hyde and P. M. Bossuyt. ‘On the origin of sensitivity and specificity’. Annals of Internal Medicine, 174 (2021): 401–7. 10.7326/M20-5028.
|
Origin of terms "sensitivity" and "specificity"
|
Binney, N., C. Hyde and P. M. Bossuyt. ‘On the origin of sensitivity and specificity’. Annals of Internal Medicine, 174 (2021): 401–7. 10.7326/M20-5028.
|
Origin of terms "sensitivity" and "specificity"
Binney, N., C. Hyde and P. M. Bossuyt. ‘On the origin of sensitivity and specificity’. Annals of Internal Medicine, 174 (2021): 401–7. 10.7326/M20-5028.
|
Origin of terms "sensitivity" and "specificity"
Binney, N., C. Hyde and P. M. Bossuyt. ‘On the origin of sensitivity and specificity’. Annals of Internal Medicine, 174 (2021): 401–7. 10.7326/M20-5028.
|
48,969 |
Origin of terms "sensitivity" and "specificity"
|
tl;dr (this answer is a bit of a book report): The terms sensitivity and specificity have their origin in the roots of immunology in the first years of the 20th century, and work towards antibody testing for syphilis.
As @NickCox and others in the published literature have suggested (e.g., Lilienfeld, 2007), Yerushalmy was an exponent of the biostatistical concepts of sensitivity and specificity in a 1947 article on radiological tests for tuberculosis. However, it is reasonably easy to find earlier uses of these terms in the context of medical tests (e.g., Gilman, Boerner, and Lukens, 1940) using academic search engines.
In the 2021 article by Binney, Hyde, and Bossuyt cited by @PeterMilne (and well worth the read!), the authors trace the genesis of these terms' use to immunology, and
the development of serology in the early 20th century. They are closely linked to the development of the Wassermann reaction as a test for syphilis, and to notions of species of pathogen and of specific disease entities. […Where it was] used to distinguish diseased from non-diseased patients.
The terms were already in established use in the syphilis serology literature by the late 30s and early 40s. According to these authors, the concepts of sensitivity and specificity spread to disciplines like clinical epidemiology (and presumably from there to other epidemiologies :)—including for other diseases, such as tuberculosis, decision making in medicine, and disciplines where evaluating tests is important.
According to Binney, Hyde, & Bossuyt, the term specificity has its origins in experiments around 1901 demonstrating the capacity for blood serum from a living animal previously exposed to a known pathogen to successfully destroy that specific pathogen (in contrast to other pathogens). They go on to describe Jules Jean Baptiste Vincent Bordet's advancement of the immunological concepts of antibody and complement from this specific destructive power, where antibodies were described by him as a mechanism to sensitize an organism's immune response to a specific pathogen. From these definitions, Binney, Hyde, & Bossuyt argue, come the concept that “specific entities must be determined to identify infected individuals,” and the development of the Wasserman antibody test* first embodied these concepts in technology. These authors cite work by Craig a few years later in 1910 presenting tables of numbers testing positive and negative among cases of syphilis, and percent testing positive among cases of syphilis, where the term specificity of the test was used to describe “healthy individuals in whom [syphilis] could be absolutely excluded, we tested seventy individuals, with a negative result in every instance.”
The concept of sensitivity of the test has a blurrier origin, with Binney, Hyde, and Bossuyt writing:
Through the 1920s, the term “sensitive” referred both to the fraction of true positives and, sometimes in the same article, to the delicacy of an immunologic reaction: its propensity to return positive reactions, whether these were true or false positives.
They note that this overloaded meaning extended through to the 40s in the literature, until 1942 when American immunologist Reuben Leon Kahn wrote “Strictly speaking, the term ‘sensitivity’ when used in connection with tests for syphilis should apply only to cases of syphilis.”
* The Wasserman test is not actually specific immunologically, but is diagnostically useful.
References
Craig, C.F. (1910). Observations upon the Noguchi modification of the Wassermann complement fixation test in the diagnosis of lues in the military service. Journal of Experimental Medicine. 12, 726–45.
Gilman, R. L., Boerner, F., & Lukens, M. (1940). A Simplified Complement Fixation Technic for the Diagnosis and Treatment Of Syphilis: Its Sensitivity and Specificity. Archives of Dermatology and Syphilology, 41(1), 32–37.
Kahn, R. L. (1942). Serology in Syphilis Control: Principles of Sensitivity and Specificity, With an Appendix for Health Officers and Industrial Physicians. Williams & Wilkins.
Lilienfeld, D. E. (2007). Abe and Yak: the interactions of Abraham M. Lilienfeld and Jacob Yerushalmy in the development of modern epidemiology (1945-1973). Epidemiology. 18, 507–14.
Yerushalmy, J. (1947). Statistical problems in assessing methods of medical diagnosis, with special reference to X-ray techniques. Public Health Reports. 62,1432–1449.
|
Origin of terms "sensitivity" and "specificity"
|
tl;dr (this answer is a bit of a book report): The terms sensitivity and specificity have their origin in the roots of immunology in the first years of the 20th century, and work towards antibody test
|
Origin of terms "sensitivity" and "specificity"
tl;dr (this answer is a bit of a book report): The terms sensitivity and specificity have their origin in the roots of immunology in the first years of the 20th century, and work towards antibody testing for syphilis.
As @NickCox and others in the published literature have suggested (e.g., Lilienfeld, 2007), Yerushalmy was an exponent of the biostatistical concepts of sensitivity and specificity in a 1947 article on radiological tests for tuberculosis. However, it is reasonably easy to find earlier uses of these terms in the context of medical tests (e.g., Gilman, Boerner, and Lukens, 1940) using academic search engines.
In the 2021 article by Binney, Hyde, and Bossuyt cited by @PeterMilne (and well worth the read!), the authors trace the genesis of these terms' use to immunology, and
the development of serology in the early 20th century. They are closely linked to the development of the Wassermann reaction as a test for syphilis, and to notions of species of pathogen and of specific disease entities. […Where it was] used to distinguish diseased from non-diseased patients.
The terms were already in established use in the syphilis serology literature by the late 30s and early 40s. According to these authors, the concepts of sensitivity and specificity spread to disciplines like clinical epidemiology (and presumably from there to other epidemiologies :)—including for other diseases, such as tuberculosis, decision making in medicine, and disciplines where evaluating tests is important.
According to Binney, Hyde, & Bossuyt, the term specificity has its origins in experiments around 1901 demonstrating the capacity for blood serum from a living animal previously exposed to a known pathogen to successfully destroy that specific pathogen (in contrast to other pathogens). They go on to describe Jules Jean Baptiste Vincent Bordet's advancement of the immunological concepts of antibody and complement from this specific destructive power, where antibodies were described by him as a mechanism to sensitize an organism's immune response to a specific pathogen. From these definitions, Binney, Hyde, & Bossuyt argue, come the concept that “specific entities must be determined to identify infected individuals,” and the development of the Wasserman antibody test* first embodied these concepts in technology. These authors cite work by Craig a few years later in 1910 presenting tables of numbers testing positive and negative among cases of syphilis, and percent testing positive among cases of syphilis, where the term specificity of the test was used to describe “healthy individuals in whom [syphilis] could be absolutely excluded, we tested seventy individuals, with a negative result in every instance.”
The concept of sensitivity of the test has a blurrier origin, with Binney, Hyde, and Bossuyt writing:
Through the 1920s, the term “sensitive” referred both to the fraction of true positives and, sometimes in the same article, to the delicacy of an immunologic reaction: its propensity to return positive reactions, whether these were true or false positives.
They note that this overloaded meaning extended through to the 40s in the literature, until 1942 when American immunologist Reuben Leon Kahn wrote “Strictly speaking, the term ‘sensitivity’ when used in connection with tests for syphilis should apply only to cases of syphilis.”
* The Wasserman test is not actually specific immunologically, but is diagnostically useful.
References
Craig, C.F. (1910). Observations upon the Noguchi modification of the Wassermann complement fixation test in the diagnosis of lues in the military service. Journal of Experimental Medicine. 12, 726–45.
Gilman, R. L., Boerner, F., & Lukens, M. (1940). A Simplified Complement Fixation Technic for the Diagnosis and Treatment Of Syphilis: Its Sensitivity and Specificity. Archives of Dermatology and Syphilology, 41(1), 32–37.
Kahn, R. L. (1942). Serology in Syphilis Control: Principles of Sensitivity and Specificity, With an Appendix for Health Officers and Industrial Physicians. Williams & Wilkins.
Lilienfeld, D. E. (2007). Abe and Yak: the interactions of Abraham M. Lilienfeld and Jacob Yerushalmy in the development of modern epidemiology (1945-1973). Epidemiology. 18, 507–14.
Yerushalmy, J. (1947). Statistical problems in assessing methods of medical diagnosis, with special reference to X-ray techniques. Public Health Reports. 62,1432–1449.
|
Origin of terms "sensitivity" and "specificity"
tl;dr (this answer is a bit of a book report): The terms sensitivity and specificity have their origin in the roots of immunology in the first years of the 20th century, and work towards antibody test
|
48,970 |
Trouble with finding equation for predicting y based on data provided
|
It is important not to over-interpret these plots. The first plot of residuals vs fitted values is a little misleading in my opinion if you only focus on the red line, partly due to the fairly small sample size.
Yes, the red line has a curved shape, but looking at the data points, it is not clear at all that there is nonlinearity. This type of pattern can easily occur simply through random variation. We would like the line to be perfectly horizontal, but in practice this will never happen.
Here is a plot from a very simple simulation of a linear model:
set.seed(2)
X <- rnorm(30)
Y <- 10 + X + rnorm(30)
plot(lm(Y~X))
As you can see, even when sampling from a normal distribution with a linear relationship, the plot of residuals vs fitted values can still appear nonlinear if we only focus on the red line.
If you have reason to believe that all the $x$ variables are related to $y$ and there is no interdependence among the $x$s (as it appears from the correlation plots), then I would stick with the first model.
|
Trouble with finding equation for predicting y based on data provided
|
It is important not to over-interpret these plots. The first plot of residuals vs fitted values is a little misleading in my opinion if you only focus on the red line, partly due to the fairly small s
|
Trouble with finding equation for predicting y based on data provided
It is important not to over-interpret these plots. The first plot of residuals vs fitted values is a little misleading in my opinion if you only focus on the red line, partly due to the fairly small sample size.
Yes, the red line has a curved shape, but looking at the data points, it is not clear at all that there is nonlinearity. This type of pattern can easily occur simply through random variation. We would like the line to be perfectly horizontal, but in practice this will never happen.
Here is a plot from a very simple simulation of a linear model:
set.seed(2)
X <- rnorm(30)
Y <- 10 + X + rnorm(30)
plot(lm(Y~X))
As you can see, even when sampling from a normal distribution with a linear relationship, the plot of residuals vs fitted values can still appear nonlinear if we only focus on the red line.
If you have reason to believe that all the $x$ variables are related to $y$ and there is no interdependence among the $x$s (as it appears from the correlation plots), then I would stick with the first model.
|
Trouble with finding equation for predicting y based on data provided
It is important not to over-interpret these plots. The first plot of residuals vs fitted values is a little misleading in my opinion if you only focus on the red line, partly due to the fairly small s
|
48,971 |
Viewing a neural net loss function as a Gaussian Process
|
Disclaimer: I just glimpsed at the linked articles. My answer focuses solely on Gaussian processes per se.
Your example
If you understand your example $f(x)=x^2+Y$ as a random function of $x$, then it is a Gaussian process, if and only if $Y$ is a Gaussian random variable. If it has any other distribution, it is still a stochastic process but not a Gaussian one. But no matter what kind of process, you can determine the mean and covariance function. The mean function is $m(x) = x^2 + \mathbb{E}[Y]$ and the covariance function $C$ is
$$C(x_1,x_2)=\text{Cov}(f(x_1), f(x_2))= \text{Cov}(x_1^2 + Y, x_2^2 + Y)=\text{Cov}(Y,Y)=\text{Var}[Y].$$
This is because $x_1$ and $x_2$ are non-random, which means the respective covariance terms are zero.
This particular covariance function does not depend on $x$ i.e. it is constant. This reflects the fact that your set of random functions is the parabola $x^2$ which is just randomly shifted up and down as a whole by the values of $Y.$
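As a quick numerical check of this (an illustration I have added, assuming for concreteness that $Y \sim \mathcal{N}(0,1)$), the empirical covariance between $f(x_1)$ and $f(x_2)$ is approximately $\operatorname{Var}[Y] = 1$ whatever inputs you pick:
set.seed(1)
Y <- rnorm(1e5)              # many draws of the random shift Y
f <- function(x) x^2 + Y     # each draw of Y gives one random parabola
cov(f(0.5), f(2))            # approximately Var(Y) = 1, independent of x1 and x2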
Parametric families
If you can express your parametric family as a span of basis functions, as is the case for example for polynomials, you can easily turn them into a Gaussian process. Say your family consists of functions of the form $f(x)=\sum \alpha_i \phi_i(x)$ for a set of some basis functions $\phi_i$ then you can turn those into Gaussian random functions simply by turning the coefficients $\alpha_i$ into Gaussian random variables $Y_i$ to arrive at the Gaussian process
$$ G(x) = \sum_i \phi_i(x) Y_i.$$
This works only because linear combinations of Gaussian variables are again Gaussian variables, and explains what is so special about the "Gaussian" in Gaussian processes. For other distributions, including discrete ones, this is no longer true.
The idea is straightforward for finite dimensional spaces, but with the proper technical assumptions even possible for infinite dimensional spaces.
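Here is a minimal sketch of this construction (my own example; the cubic polynomial basis and independent standard normal coefficients are arbitrary choices):
set.seed(1)
x   <- seq(-2, 2, length.out = 100)
Phi <- cbind(1, x, x^2, x^3)            # basis functions phi_i(x) evaluated on a grid
Y   <- matrix(rnorm(4 * 5), nrow = 4)   # Gaussian coefficients; 5 independent draws
G   <- Phi %*% Y                        # each column is one sample path of G(x)
matplot(x, G, type = "l", lty = 1, ylab = "G(x)")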
Final remark
If you want to discuss how functions look or what they do "on average" you need to find a way to express probabilities for functions. As demonstrated above, Gaussian processes provide an easy and flexible way to turn any space of functions into a space of random functions. The source and purpose of those functions, i.e. whether they are regression functions or loss functions or time series of beaver dams built, does not matter in the least.
|
Viewing a neural net loss function as a Gaussian Process
|
Disclaimer: I just glimpsed at the linked articles. My answer focuses solely on Gaussian processes per se.
Your example
If you understand your example $f(x)=x^2+Y$ as a random function of $x$, then it
|
Viewing a neural net loss function as a Gaussian Process
Disclaimer: I just glimpsed at the linked articles. My answer focuses solely on Gaussian processes per se.
Your example
If you understand your example $f(x)=x^2+Y$ as a random function of $x$, then it is a Gaussian process, if and only if $Y$ is a Gaussian random variable. If it has any other distribution, it is still a stochastic process but not a Gaussian one. But no matter what kind of process, you can determine the mean and covariance function. The mean function is $m(x) = x^2 + \mathbb{E}[Y]$ and the covariance function $C$ is
$$C(x_1,x_2)=\text{Cov}(f(x_1), f(x_2))= \text{Cov}(x_1^2 + Y, x_2^2 + Y)=\text{Cov}(Y,Y)=\text{Var}[Y].$$
This is because $x_1$ and $x_2$ are non-random, which means the respective covariance terms are zero.
This particular covariance function does not depend on $x$ i.e. it is constant. This reflects the fact that your set of random functions is the parabola $x^2$ which is just randomly shifted up and down as a whole by the values of $Y.$
Parametric families
If you can express your parametric family as a span of basis functions, as is the case for example for polynomials, you can easily turn them into a Gaussian process. Say your family consists of functions of the form $f(x)=\sum \alpha_i \phi_i(x)$ for a set of some basis functions $\phi_i$ then you can turn those into Gaussian random functions simply by turning the coefficients $\alpha_i$ into Gaussian random variables $Y_i$ to arrive at the Gaussian process
$$ G(x) = \sum_i \phi_i(x) Y_i.$$
This works only because linear combinations of Gaussian variables are again Gaussian variables, and explains what is so special about the "Gaussian" in Gaussian processes. For other distributions, including discrete ones, this is no longer true.
The idea is straightforward for finite dimensional spaces, but with the proper technical assumptions even possible for infinite dimensional spaces.
Final remark
If you want to discuss how functions look or what they do "on average" you need to find a way to express probabilities for functions. As demonstrated above, Gaussian processes provide an easy and flexible way to turn any space of functions into a space of random functions. The source and purpose of those functions, i.e. whether they are regression functions or loss functions or time series of beaver dams built, does not matter in the least.
|
Viewing a neural net loss function as a Gaussian Process
Disclaimer: I just glimpsed at the linked articles. My answer focuses solely on Gaussian processes per se.
Your example
If you understand your example $f(x)=x^2+Y$ as a random function of $x$, then it
|
48,972 |
Viewing a neural net loss function as a Gaussian Process
|
Neural Networks as Gaussian Processes
Consider a neural network with only one layer (i.e. no hidden layers, i.e. logistic regression): $$\operatorname{reg}: \mathbb{R}^N \to \mathbb{R}^M : \boldsymbol{x} \mapsto \boldsymbol{s} = \boldsymbol{W} \boldsymbol{x}.$$
If we replace the entries in $\boldsymbol{W} \in \mathbb{R}^{M \times N}$ by random values, such that $w_{ij} \sim \mathcal{N}(0, \sigma_w^2)$, the resulting function will be a random/stochastic process.
Now, let $\boldsymbol{w}_i$ be a row of $\boldsymbol{W}$, such that
$$s_i = \boldsymbol{w}_i \boldsymbol{x} = \sum_{j=1}^N w_{ij} x_j,$$
we can use the central limit theorem to conclude that $s_i$ follows a Gaussian distribution if $N \to \infty$.
Therefore, a large number of inputs ($N$) turns the random process into a Gaussian process (because the outputs are now Gaussian).
This is exactly the idea presented in your last piece of literature (Lee, 2018).
Although Lee et al. write about infinite width in every layer, I would argue that you only really need it in the penultimate layer (i.e. the inputs to the final layer).
Having infinite width everywhere just makes the computation of the mean and covariance functions tractable (at least for ReLU networks).
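A small simulation sketch of the CLT argument above (added here for illustration; the input dimension, the scaling $\sigma_w = 1/\sqrt{N}$ and the fixed input vector are all assumptions of the sketch, not part of the original derivation):
set.seed(1)
N <- 5000
x <- runif(N)                                            # one fixed input vector
s <- replicate(2000, sum(rnorm(N, sd = 1/sqrt(N)) * x))  # s_i over 2000 independent weight draws
hist(s, breaks = 40, main = "Pre-activations are approximately Gaussian")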
The Effect of Loss Functions
A loss function by itself will never be a Gaussian process because there is typically no randomness in a loss function.
This being said, the combination of neural network and loss function can give rise to a random process.
In order to assess whether this random process will still be Gaussian depends on the loss function itself.
I believe that there are no practical loss functions that would preserve Gaussianity.
E.g. when using the mean squared error, $(\operatorname{reg}(\boldsymbol{x} \mathbin{;} \boldsymbol{w}) - y)^2,$ it should be clear that the loss values will not be Gaussian.
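To see this concretely (a toy check I have added, assuming the network output across weight draws is standard normal and the target is fixed at $y = 0.5$):
set.seed(1)
g <- rnorm(1e4)          # Gaussian outputs across random weight draws
loss <- (g - 0.5)^2      # squared-error losses
hist(loss, breaks = 50)  # skewed and non-negative: clearly not Gaussian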
After skimming the papers that are referenced in your question, I am not entirely sure whether they really talk about loss functions as Gaussian processes:
Pascanu et al. (2014) mention that they use random loss functions, sampled from a Gaussian process.
This would be using GPs exactly as you described them: a distribution of functions.
Choromanska et al. (2015) seem to try to prove that a ReLU network with some loss function that uses randomness is related to a Gaussian process.
At least that would be my interpretation since I do not know much about spin-glass models.
|
Viewing a neural net loss function as a Gaussian Process
|
Neural Networks as Gaussian Processes
Consider a neural network with only one layer (i.e. no hidden layers, i.e. logistic regression): $$\operatorname{reg}: \mathbb{R}^N \to \mathbb{R}^M : \boldsymbol
|
Viewing a neural net loss function as a Gaussian Process
Neural Networks as Gaussian Processes
Consider a neural network with only one layer (i.e. no hidden layers, i.e. logistic regression): $$\operatorname{reg}: \mathbb{R}^N \to \mathbb{R}^M : \boldsymbol{x} \mapsto \boldsymbol{s} = \boldsymbol{W} \boldsymbol{x}.$$
If we replace the entries in $\boldsymbol{W} \in \mathbb{R}^{M \times N}$ by random values, such that $w_{ij} \sim \mathcal{N}(0, \sigma_w^2)$, the resulting function will be a random/stochastic process.
Now, let $\boldsymbol{w}_i$ be a row of $\boldsymbol{W}$, such that
$$s_i = \boldsymbol{w}_i \boldsymbol{x} = \sum_{j=1}^N w_{ij} x_j,$$
we can use the central limit theorem to conclude that $s_i$ follows a Gaussian distribution if $N \to \infty$.
Therefore, a large number of inputs ($N$) turns the random process into a Gaussian process (because the outputs are now Gaussian).
This is exactly the idea presented in your last piece of literature (Lee, 2018).
Although Lee et al. write about infinite width in every layer, I would argue that you only really need it in the penultimate layer (i.e. the inputs to the final layer).
Having infinite width everywhere just makes the computation of the mean and covariance functions tractable (at least for ReLU networks).
The Effect of Loss Functions
A loss function by itself will never be a Gaussian process because there is typically no randomness in a loss function.
This being said, the combination of neural network and loss function can give rise to a random process.
In order to assess whether this random process will still be Gaussian depends on the loss function itself.
I believe that there are no practical loss functions that would preserve Gaussianity.
E.g. when using the mean squared error, $(\operatorname{reg}(\boldsymbol{x} \mathbin{;} \boldsymbol{w}) - y)^2,$ it should be clear that the loss values will not be Gaussian.
After skimming the papers that are referenced in your question, I am not entirely sure whether they really talk about loss functions as Gaussian processes:
Pascanu et al. (2014) mention that they use random loss functions, sampled from a Gaussian process.
This would be using GPs exactly as you described them: a distribution of functions.
Choromanska et al. (2015) seem to try to prove that a ReLU network with some loss function that uses randomness is related to a Gaussian process.
At least that would be my interpretation since I do not know much about spin-glass models.
|
Viewing a neural net loss function as a Gaussian Process
Neural Networks as Gaussian Processes
Consider a neural network with only one layer (i.e. no hidden layers, i.e. logistic regression): $$\operatorname{reg}: \mathbb{R}^N \to \mathbb{R}^M : \boldsymbol
|
48,973 |
Proving the nonexistence of UMVUE for $\text{Unif}\{\theta-1, \theta, \theta+1\}$
|
I think you have made the right attempt, since the method of zero estimator is the necessary and sufficient condition for a UMVUE to exist. And here is my proof:
Let's first assume there exists a UMVUE $\hat\theta$ for $\theta$. Then, by the theorem, any zero estimator $\eta$ satisfies:
$E_{\theta}[\eta] = 0$, for any $\theta \in \Theta$.
$E_{\theta}[\eta \hat\theta] = 0$, for any $\theta \in \Theta$.
So the next step is to find a zero estimator that breaks the second condition.
Let's start with the first condition:
$$E_{\theta}[\eta] = \frac{1}{3}[\eta(\theta-1) + \eta(\theta) + \eta(\theta+1)] = 0.$$
Then any function satisfying $\eta(x-1) + \eta(x) + \eta(x+1) = 0$ for all $x$ is an unbiased estimator of zero. One can choose such a nonzero $\eta$ for which $E_{\theta}[\eta \hat\theta] = 0$ cannot hold for every $\theta$, which contradicts the second condition, so in this case there is no UMVUE.
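For a concrete instance of such a zero estimator (an example I have added, not part of the original argument): take $\eta(x) = \cos(2\pi x/3)$. For any integer $x$ the three angles $2\pi(x-1)/3$, $2\pi x/3$ and $2\pi(x+1)/3$ are equally spaced around the circle, so
$$\eta(x-1) + \eta(x) + \eta(x+1) = \sum_{j=-1}^{1} \cos\!\left(\frac{2\pi (x+j)}{3}\right) = 0.$$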
|
Proving the nonexistence of UMVUE for $\text{Unif}\{\theta-1, \theta, \theta+1\}$
|
I think you have made the right attempt, since the method of zero estimator is the necessary and sufficient condition for a UMVUE to exist. And here is my proof:
Let's first assume there exist a UMVUE
|
Proving the nonexistence of UMVUE for $\text{Unif}\{\theta-1, \theta, \theta+1\}$
I think you have made the right attempt, since the method of zero estimator is the necessary and sufficient condition for a UMVUE to exist. And here is my proof:
Let's first assume there exists a UMVUE $\hat\theta$ for $\theta$. Then, by the theorem, any zero estimator $\eta$ satisfies:
$E_{\theta}[\eta] = 0$, for any $\theta \in \Theta$.
$E_{\theta}[\eta \hat\theta] = 0$, for any $\theta \in \Theta$.
So the next step is to find a zero estimator that breaks the second condition.
Let's start with the first condition:
$$E_{\theta}[\eta] = \frac{1}{3}[\eta(\theta-1) + \eta(\theta) + \eta(\theta+1)] = 0.$$
Then any function satisfying $\eta(x-1) + \eta(x) + \eta(x+1) = 0$ for all $x$ is an unbiased estimator of zero. One can choose such a nonzero $\eta$ for which $E_{\theta}[\eta \hat\theta] = 0$ cannot hold for every $\theta$, which contradicts the second condition, so in this case there is no UMVUE.
|
Proving the nonexistence of UMVUE for $\text{Unif}\{\theta-1, \theta, \theta+1\}$
I think you have made the right attempt, since the method of zero estimator is the necessary and sufficient condition for a UMVUE to exist. And here is my proof:
Let's first assume there exist a UMVUE
|
48,974 |
Proving the nonexistence of UMVUE for $\text{Unif}\{\theta-1, \theta, \theta+1\}$
|
Let's approach this from first principles.
An estimator $t$ assigns some guess of $\theta$ to any possible outcome $X.$ Since the possible outcomes are integers, write this guess as $t_i$ when $X=i$ for any integer $i.$
We are hoping to find estimators that tend to be close to $\theta$ when $\theta$ is the parameter. Clearly, when $i$ is observed the possible values of $\theta$ are limited (for sure) to the set $\{i+1,i,i-1\}.$ Thus, a good estimator is likely to guess $\theta$ is close to the observation $i.$ Let us therefore express $t$ in terms of how far it departs from the observation; namely, let
$$t_i = i + \delta_i.$$
When $\theta$ is the parameter, the outcomes $\theta-1,$ $\theta,$ and $\theta+1$ have equal probabilities of $1/3$ and all other integers have zero probability. Consequently,
The expectation of $t$ when $\theta$ is the parameter is $$E(t\mid \theta=i) = \frac{1}{3}\left(t_{i-1} + t_i + t_{i+1}\right) = i + \frac{1}{3}\left(\delta_{i-1} + \delta_i + \delta_{i+1}\right).$$ Because $t$ must be unbiased, this quantity equals $i$ no matter what $i$ might be, showing that for all $i,$ $$\delta_{i-1} + \delta_{i} + \delta_{i+1}=0.$$ Already this is a huge restriction, because if we specify (say) $\delta_0$ and $\delta_1,$ this relation recursively requires $\delta_{-1} = \delta_2 = -(\delta_0 + \delta_1),$ etc., thereby completely determining the estimator.
The variance of $t$ is $$\operatorname{Var}(t\mid \theta=i) = \frac{1}{3}\left((t_{i-1}-i)^2 + (t_i-i)^2 + (t_{i+1}-i)^2\right) \\= \frac{1}{3}\left((\delta_{i-1}-1)^2 + \delta_i^2 + (\delta_{i+1}+1)^2\right).$$ Among all unbiased estimators, this must have the smallest variance for all $i.$
It is a straightforward exercise in algebra (or Calculus, using a Lagrange multiplier) to show that for a specific $i,$ a minimum of $(2)$ can be obtained subject to the constraint $(1)$ and implies $\delta_{i-1}=\delta_i=\delta_{i+1}.$ Since this must hold for all $i,$ clearly the $\delta_i$ are all equal, whence they must all equal $0$ (because $\delta_1 = \delta_2 = -(\delta_0+\delta_1) = -2\delta_1$ has the unique solution $\delta_1=0,$ etc.).
Consequently, if an UMVUE exists, its variance is a constant given by $(2),$ equal to $2/3.$
Unfortunately, there are unbiased estimators that achieve smaller variances for specific values of $\theta.$
For instance, suppose you had a strong belief that $\theta=0.$ You might then adjust your estimator to guess $\theta=0$ whenever an outcome consistent with that guess showed up. That is, you would set $t_0=t_1=t_{-1}=0.$ That is equivalent to $\delta_{-1}=1,$ $\delta_0=0,$ and $\delta_1=-1.$ As we have remarked earlier, these initial conditions determine $t$ completely from the recursion $(1).$ Its variance when $\theta=0$ is zero, because it always guesses the correct value of $\theta.$ You can't do any better than that! Moreover, $0 \ll 2/3$ is a huge improvement. But compensating for that is a larger variance for certain other values of $\theta.$ For instance, since $\delta_2 = \delta_{-1} = 1,$ when $\theta=1$ the possible outcomes are $0,1,2,$ for which $t$ guesses $0,$ $0,$ and $3,$ respectively, for a variance of
$$\frac{1}{3}\left((0-1)^2 + (0-1)^2 + (3-1)^2\right) = 2 \gg \frac{2}{3}.$$
This contradiction--obtaining a lower variance for certain values of $\theta$--shows no UMVUE exists.
You might enjoy re-interpreting $\delta$ as an estimator of 0 ;-).
|
Proving the nonexistence of UMVUE for $\text{Unif}\{\theta-1, \theta, \theta+1\}$
|
Let's approach this from first principles.
An estimator $t$ assigns some guess of $\theta$ to any possible outcome $X.$ Since the possible outcomes are integers, write this guess as $t_i$ when $X=i$
|
Proving the nonexistence of UMVUE for $\text{Unif}\{\theta-1, \theta, \theta+1\}$
Let's approach this from first principles.
An estimator $t$ assigns some guess of $\theta$ to any possible outcome $X.$ Since the possible outcomes are integers, write this guess as $t_i$ when $X=i$ for any integer $i.$
We are hoping to find estimators that tend to be close to $\theta$ when $\theta$ is the parameter. Clearly, when $i$ is observed the possible values of $\theta$ are limited (for sure) to the set $\{i+1,i,i-1\}.$ Thus, a good estimator is likely to guess $\theta$ is close to the observation $i.$ Let us therefore express $t$ in terms of how far it departs from the observation; namely, let
$$t_i = i + \delta_i.$$
When $\theta$ is the parameter, the outcomes $\theta-1,$ $\theta,$ and $\theta+1$ have equal probabilities of $1/3$ and all other integers have zero probability. Consequently,
The expectation of $t$ when $\theta$ is the parameter is $$E(t\mid \theta=i) = \frac{1}{3}\left(t_{i-1} + t_i + t_{i+1}\right) = i + \frac{1}{3}\left(\delta_{i-1} + \delta_i + \delta_{i+1}\right).$$ Because $t$ must be unbiased, this quantity equals $i$ no matter what $i$ might be, showing that for all $i,$ $$\delta_{i-1} + \delta_{i} + \delta_{i+1}=0.$$ Already this is a huge restriction, because if we specify (say) $\delta_0$ and $\delta_1,$ this relation recursively requires $\delta_{-1} = \delta_2 = -(\delta_0 + \delta_1),$ etc., thereby completely determining the estimator.
The variance of $t$ is $$\operatorname{Var}(t\mid \theta=i) = \frac{1}{3}\left((t_{i-1}-i)^2 + (t_i-i)^2 + (t_{i+1}-i)^2\right) \\= \frac{1}{3}\left((\delta_{i-1}-1)^2 + \delta_i^2 + (\delta_{i+1}+1)^2\right).$$ Among all unbiased estimators, this must have the smallest variance for all $i.$
It is a straightforward exercise in algebra (or Calculus, using a Lagrange multiplier) to show that for a specific $i,$ a minimum of $(2)$ can be obtained subject to the constraint $(1)$ and implies $\delta_{i-1}=\delta_i=\delta_{i+1}.$ Since this must hold for all $i,$ clearly the $\delta_i$ are all equal, whence they must all equal $0$ (because $\delta_1 = \delta_2 = -(\delta_0+\delta_1) = -2\delta_1$ has the unique solution $\delta_1=0,$ etc.).
Consequently, if an UMVUE exists, its variance is a constant given by $(2),$ equal to $2/3.$
Unfortunately, there are unbiased estimators that achieve smaller variances for specific values of $\theta.$
For instance, suppose you had a strong belief that $\theta=0.$ You might then adjust your estimator to guess $\theta=0$ whenever an outcome consistent with that guess showed up. That is, you would set $t_0=t_1=t_{-1}=0.$ That is equivalent to $\delta_{-1}=1,$ $\delta_0=0,$ and $\delta_1=-1.$ As we have remarked earlier, these initial conditions determine $t$ completely from the recursion $(1).$ Its variance when $\theta=0$ is zero, because it always guesses the correct value of $\theta.$ You can't do any better than that! Moreover, $0 \ll 2/3$ is a huge improvement. But compensating for that is a larger variance for certain other values of $\theta.$ For instance, since $\delta_2 = \delta_{-1} = 1,$ when $\theta=1$ the possible outcomes are $0,1,2,$ for which $t$ guesses $0,$ $0,$ and $3,$ respectively, for a variance of
$$\frac{1}{3}\left((0-1)^2 + (0-1)^2 + (3-1)^2\right) = 2 \gg \frac{2}{3}.$$
This contradiction--obtaining a lower variance for certain values of $\theta$--shows no UMVUE exists.
You might enjoy re-interpreting $\delta$ as an estimator of 0 ;-).
|
Proving the nonexistence of UMVUE for $\text{Unif}\{\theta-1, \theta, \theta+1\}$
Let's approach this from first principles.
An estimator $t$ assigns some guess of $\theta$ to any possible outcome $X.$ Since the possible outcomes are integers, write this guess as $t_i$ when $X=i$
|
48,975 |
Why are language modeling pre-training objectives considered unsupervised?
|
Your argument is right. From the perspective of the language model, you have well-defined target labels and use supervised learning methods to teach the model to predict the labels.
Calling it unsupervised pre-training is certainly sort of paper-publishing marketing, but it is not entirely wrong. It is unsupervised from the perspective of the downstream tasks. The MLM-pre-trained model learned something useful for a particular downstream task (e.g., sentiment analysis) without using any labeled data for the task, but using unlabeled data only.
There is also a strong analogy with clustering. The inputs to the model are very high-dimensional data: the vocabulary has tens of thousands of items, and there is very little structure (all one-hot vectors are equidistant). The MLM pre-training learns to embed such inputs into a much lower-dimensional and very structured space using nothing other than unlabeled data: the text itself.
|
Why are language modeling pre-training objectives considered unsupervised?
|
Your argument is right. From the perspective of the language model, you have well-defined target labels and use supervise learning methods to teach the model to predict the labels.
Calling it unsuperv
|
Why are language modeling pre-training objectives considered unsupervised?
Your argument is right. From the perspective of the language model, you have well-defined target labels and use supervised learning methods to teach the model to predict the labels.
Calling it unsupervised pre-training is certainly sort of paper-publishing marketing, but it is not entirely wrong. It is unsupervised from the perspective of the downstream tasks. The MLM-pre-trained model learned something useful for a particular downstream task (e.g., sentiment analysis) without using any labeled data for the task, but using unlabeled data only.
There is also a strong analogy with clustering. The inputs to the model are very high-dimensional data: the vocabulary has tens of thousands of items, and there is very little structure (all one-hot vectors are equidistant). The MLM pre-training learns to embed such inputs into a much lower-dimensional and very structured space using nothing other than unlabeled data: the text itself.
|
Why are language modeling pre-training objectives considered unsupervised?
Your argument is right. From the perspective of the language model, you have well-defined target labels and use supervise learning methods to teach the model to predict the labels.
Calling it unsuperv
|
48,976 |
2-dimensional minimal sufficient statistic for $U(-k\theta+k,k\theta+k)$
|
Your own attempted answer incorrectly treats $k$ as a fixed value, rather than an index. This gives you an incorrect likelihood function, which means that your subsequent work is also incorrect. We first observe the event equivalence:
$$-\theta k + k \leqslant x_k \leqslant \theta k + k
\quad \quad \quad \iff \quad \quad \quad
\Big| \frac{x_k}{k} - 1 \Big| \leqslant \theta,$$
which means that the correct likelihood function is:
$$\begin{align}
L_\theta(\mathbf{x}_n)
&= \prod_{k=1}^n \frac{1}{2k\theta} \cdot \mathbb{I}(-\theta k + k \leqslant x_k \leqslant \theta k + k) \\[6pt]
&= \prod_{k=1}^n \frac{1}{2k\theta} \cdot \mathbb{I} \Bigg( \theta \geqslant \Big| \frac{x_k}{k} - 1 \Big| \Bigg) \\[6pt]
&= \frac{1}{(2\theta)^n n!} \cdot \mathbb{I} \Bigg( \theta \geqslant \max_k \Big| \frac{x_k}{k} - 1 \Big| \Bigg). \\[6pt]
\end{align}$$
Consequently, a minimal sufficient statistic for the parameter $\theta$ is:
$$\max_k \Big| \frac{x_k}{k} - 1 \Big| .$$
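A quick numerical illustration of this statistic (my addition; $\theta = 2$ and $n = 10$ are arbitrary choices):
set.seed(1)
theta <- 2; n <- 10; k <- 1:n
x <- runif(n, min = -k * theta + k, max = k * theta + k)  # one draw from each U(-k*theta + k, k*theta + k)
max(abs(x / k - 1))                                       # observed value of the statistic; never exceeds theta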
|
2-dimensional minimal sufficient statistic for $U(-k\theta+k,k\theta+k)$
|
Your own attempted answer incorrectly treats $k$ as a fixed value, rather than an index. This gives you an incorrect likelihood function, which means that your subsequent work is also incorrect. We
|
2-dimensional minimal sufficient statistic for $U(-k\theta+k,k\theta+k)$
Your own attempted answer incorrectly treats $k$ as a fixed value, rather than an index. This gives you an incorrect likelihood function, which means that your subsequent work is also incorrect. We first observe the event equivalence:
$$-\theta k + k \leqslant x_k \leqslant \theta k + k
\quad \quad \quad \iff \quad \quad \quad
\Big| \frac{x_k}{k} - 1 \Big| \leqslant \theta,$$
which means that the correct likelihood function is:
$$\begin{align}
L_\theta(\mathbf{x}_n)
&= \prod_{k=1}^n \frac{1}{2k\theta} \cdot \mathbb{I}(-\theta k + k \leqslant x_k \leqslant \theta k + k) \\[6pt]
&= \prod_{k=1}^n \frac{1}{2k\theta} \cdot \mathbb{I} \Bigg( \theta \geqslant \Big| \frac{x_k}{k} - 1 \Big| \Bigg) \\[6pt]
&= \frac{1}{(2\theta)^n n!} \cdot \mathbb{I} \Bigg( \theta \geqslant \max_k \Big| \frac{x_k}{k} - 1 \Big| \Bigg). \\[6pt]
\end{align}$$
Consequently, a minimal sufficient statistic for the parameter $\theta$ is:
$$\max_k \Big| \frac{x_k}{k} - 1 \Big| .$$
|
2-dimensional minimal sufficient statistic for $U(-k\theta+k,k\theta+k)$
Your own attempted answer incorrectly treats $k$ as a fixed value, rather than an index. This gives you an incorrect likelihood function, which means that your subsequent work is also incorrect. We
|
48,977 |
Can adding features to linear regression increase model bias?
|
I wrote a paper about this. An early version of it is on arxiv here: https://arxiv.org/abs/2003.08449 (edit: The full published version is now available open access here: https://journals.sagepub.com/doi/full/10.1177/0962280221995963).
The short of it is yes, under some conditions. Namely, there has to be some bias already in the model. If your model is already biased, then the bias can be increased by adding variables.
As an example, suppose we are interested in the effect of some treatment $A$ on the outcome $Y$. For simplicity, say this effect is just the regression coefficient $\beta$. Bias will be with respect to this parameter. The easiest way to increase the bias of the estimate of $\beta$, $\hat{\beta}$ is by adding variables which explain much of the variation in the treatment ($A$) and very little variation in the outcome ($Y$), except through $A$ of course.
The reason that this happens is due to the geometry of least squares. By the FWL theorem, we can always think of estimates from multivariate regression as equivalent to the estimate from a simple regression of:
The residuals of the outcome from the regression of $Y$ on all the control variables
and
The residuals of the treatment from the regression of $A$ on all the control variables
Suppose we have a variable which only has an effect on $Y$ through $A$ (an instrumental variable) and there are no interactions. When we add this variable to the regression, whatever direction the old estimate was relative to the truth gets amplified (see figure 4 in the above paper for a graph of what this looks like).
In general, a variable can cause bias amplification even if it does explain some variance of the outcome not through the treatment. In these cases, including a variable is a trade-off. By including it you remove the bias due to excluding said variable, but you also amplify the remaining biases. Whether or not there is bias amplification in the end depends on which effect is stronger. In extreme cases, especially those where there are plenty of control variables already, the amplification effect can tend to dominate because the amplification effect is hyperbolic and always in the same direction. The effect of removing omitted variable bias, on the other hand, is linear in these settings and may go in either direction.
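To make the mechanism concrete, here is a minimal R simulation sketch (the data-generating process, variable names, and coefficient values are illustrative assumptions, not taken from the paper): an unobserved confounder U biases the estimate of $\beta$, and adding an instrument-like variable Z amplifies that bias.
set.seed(42)
n = 1e5
U = rnorm(n)                           # unobserved confounder
Z = rnorm(n)                           # affects Y only through A
A = 2*Z + U + rnorm(n)                 # treatment
Y = A + U + rnorm(n)                   # outcome; true beta = 1
coef(lm(Y ~ A))["A"]                   # omitted-variable bias: roughly 1.17
coef(lm(Y ~ A + Z))["A"]               # bias amplified: roughly 1.50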
There is a small literature of papers looking at these effects. The original two which investigated pure instruments are
i) Pearl (2011) https://ftp.cs.ucla.edu/pub/stat_ser/r386.pdf
ii) Wooldridge (2006) http://econ.msu.edu/faculty/wooldridge/docs/treat1r6.pdf
But has since been extended to a slightly broader class of models and under more flexible conditions.
To your point about bias-variance trade-off. In most cases, the variance will also increase when you have bias amplification as well unfortunately.
|
Can adding features to linear regression increase model bias?
|
I wrote a paper about this. An early version of it is on arxiv here: https://arxiv.org/abs/2003.08449 (edit: The full published version is now available open access here: https://journals.sagepub.com/
|
Can adding features to linear regression increase model bias?
I wrote a paper about this. An early version of it is on arxiv here: https://arxiv.org/abs/2003.08449 (edit: The full published version is now available open access here: https://journals.sagepub.com/doi/full/10.1177/0962280221995963).
The short of it is: yes, under some conditions. Namely, there has to be some bias already in the model. If your model is already biased, then the bias can be increased by adding variables.
As an example, suppose we are interested in the effect of some treatment $A$ on the outcome $Y$. For simplicity, say this effect is just the regression coefficient $\beta$. Bias will be with respect to this parameter. The easiest way to increase the bias of the estimate of $\beta$, $\hat{\beta}$ is by adding variables which explain much of the variation in the treatment ($A$) and very little variation in the outcome ($Y$), except through $A$ of course.
The reason that this happens is due to the geometry of least squares. By the FWL theorem, we can always think of estimates from multivariate regression as equivalent to the estimate from a simple regression of:
The residuals of the outcome from the regression of $Y$ on all the control variables
and
The residuals of the treatment from the regression of $A$ on all the control variables
Suppose we have a variable which only has an effect on $Y$ through $A$ (an instrumental variable) and there are no interactions. When we add this variable to the regression, whatever direction the old estimate was relative to the truth gets amplified (see figure 4 in the above paper for a graph of what this looks like).
In general, a variable can cause bias amplification even if it does explain some variance of the outcome not through the treatment. In these cases, including a variable is a trade-off. By including it you remove the bias due to excluding said variable, but you also amplify the remaining biases. Whether or not there is bias amplification in the end depends on which effect is stronger. In extreme cases, especially those where there are plenty of control variables already, the amplification effect can tend to dominate because the amplification effect is hyperbolic and always in the same direction. The effect of removing omitted variable bias, on the other hand, is linear in these settings and may go in either direction.
There is a small literature of papers looking at these effects. The original two which investigated pure instruments are
i) Pearl (2011) https://ftp.cs.ucla.edu/pub/stat_ser/r386.pdf
ii) Wooldridge (2006) http://econ.msu.edu/faculty/wooldridge/docs/treat1r6.pdf
But has since been extended to a slightly broader class of models and under more flexible conditions.
To your point about bias-variance trade-off. In most cases, the variance will also increase when you have bias amplification as well unfortunately.
|
Can adding features to linear regression increase model bias?
I wrote a paper about this. An early version of it is on arxiv here: https://arxiv.org/abs/2003.08449 (edit: The full published version is now available open access here: https://journals.sagepub.com/
|
48,978 |
How to deduce the function of each layer in a neural network without being explicitly told beforehand?
|
I got an answer from my supervisor, so I thought I would post it here since a few people upvoted.
Firstly, it is always good to remember that "layer" is a bit of an overloaded term. For example, there might be many low-level layers in each high-level layer of SRCNN.
At a high level, it appears that SRCNN maps the raw pixel data to an input feature vector, then maps that input feature vector to an output feature vector, and then maps that output feature vector back to an image.
You will see this pattern quite often in machine learning. That is: (1) encode unstructured high-dimensional data as a low-dimensional input feature vector, (2) map the input feature vector to an output feature vector, and then (3) decode/map the low-dimensional output feature vector to the final prediction (e.g. output image).
In practice, this is because the non-linear mapping tools of deep learning work better with low-dimensional feature vectors than they do with high-dimensional inputs (such as raw pixel data), but there is also lots of theory to back up why this is a good thing to do.
With regards to your follow-up question: it's a mix. Some of the ideas for the design of an architecture will come from theory, some from intuition built up from practical experience, and some from experimenting and trying different things.
|
How to deduce the function of each layer in a neural network without being explicitly told beforehan
|
I got an answer from my supervisor, so I thought I would post it here since a few people upvoted.
Firstly, always good to be remember that "layer" is a bit or an overload term. For example, there mig
|
How to deduce the function of each layer in a neural network without being explicitly told beforehand?
I got an answer from my supervisor, so I thought I would post it here since a few people upvoted.
Firstly, it is always good to remember that "layer" is a bit of an overloaded term. For example, there might be many low-level layers in each high-level layer of SRCNN.
At a high level, it appears that SRCNN maps the raw pixel data to an input feature vector, then maps that input feature vector to an output feature vector, and then maps that output feature vector back to an image.
You will see this pattern quite often in machine learning. That is: (1) encode unstructured high-dimensional data as a low-dimensional input feature vector, (2) map the input feature vector to an output feature vector, and then (3) decode/map the low-dimensional output feature vector to the final prediction (e.g. output image).
In practice, this is because the non-linear mapping tools of deep learning work better with low-dimensional feature vectors than they do with high-dimensional inputs (such as raw pixel data), but there is also lots of theory to back up why this is a good thing to do.
With regards to your follow-up question: it's a mix. Some of the ideas for the design of an architecture will come from theory, some from intuition built up from practical experience, and some from experimenting and trying different things.
|
How to deduce the function of each layer in a neural network without being explicitly told beforehan
I got an answer from my supervisor, so I thought I would post it here since a few people upvoted.
Firstly, always good to be remember that "layer" is a bit or an overload term. For example, there mig
|
48,979 |
Making sense of the $\phi_k$ correlation coefficient
|
I'm one of the $\phi_k$ authors, so happy to help out.
Indeed there is no closed-form formula for phi_k, but it boils down to interpreting the Pearson $\chi^2$ value between two (binned) variables as coming from a tilted bivariate normal distribution. The $\chi^2$ needs to pass a certain noise pedestal, else $\phi_k$ is zero.
For low statistics samples $\phi_k$ is indeed affected by statistical fluctuations (like any correlation constant). In case of low statistics the spread in the reported values goes up, but the median is unbiased (see publication). $\phi_k$ is affected more than Pearson's correlation constant, because, unlike Pearson, $\phi_k$ does not use exact positional information. For numeric variables it bins the data, and then only uses the number of counts per bin, essentially treating them as categorical variables.
So in case of a low statistics sample, be sure to always check the significance of the correlations found (this holds for any correlation constant!). For $\phi_k$ simply call:
df.significance_matrix()
to get the Z-values. In your example you will see that $\phi_k$ is only evaluated when (roughly) $Z > 0.5$, that the Z scores lie around zero, and also that none of the Z scores are truly significant, say $Z > 5$.
In general, $\phi_k$ is useful when you have a set of variables where some are categorical or ordinal. If you only have numeric variables, $\phi_k$ will work fine but other correlation constants will be more precise, in particular for very low statistics samples.
Hope this helps!
|
Making sense of the $\phi_k$ correlation coefficient
|
I'm one of the $\phi_k$ authors, so happy to help out.
Indeed there is no closed-form formula for phi_k, but it boils down to interpreting the Pearson $\chi^2$ value between two (binned) variables as
|
Making sense of the $\phi_k$ correlation coefficient
I'm one of the $\phi_k$ authors, so happy to help out.
Indeed there is no closed-form formula for phi_k, but it boils down to interpreting the Pearson $\chi^2$ value between two (binned) variables as coming from a tilted bivariate normal distribution. The $\chi^2$ needs to pass a certain noise pedestal, else $\phi_k$ is zero.
For low statistics samples $\phi_k$ is indeed affected by statistical fluctuations (like any correlation constant). In case of low statistics the spread in the reported values goes up, but the median is unbiased (see publication). $\phi_k$ is affected more than Pearson's correlation constant, because, unlike Pearson, $\phi_k$ does not use exact positional information. For numeric variables it bins the data, and then only uses the number of counts per bin, essentially treating them as categorical variables.
So in case of a low statistics sample, be sure to always check the significance of the correlations found (this holds for any correlation constant!). For $\phi_k$ simply call:
df.significance_matrix()
to get the Z-values. In your example you will see that $\phi_k$ is only evaluated when (roughly) $Z > 0.5$, that the Z scores lie around zero, and also that none of the Z scores are truly significant, say $Z > 5$.
In general, $\phi_k$ is useful when you have a set of variables where some are categorical or ordinal. If you only have numeric variables, $\phi_k$ will work fine but other correlation constants will be more precise, in particular for very low statistics samples.
Hope this helps!
|
Making sense of the $\phi_k$ correlation coefficient
I'm one of the $\phi_k$ authors, so happy to help out.
Indeed there is no closed-form formula for phi_k, but it boils down to interpreting the Pearson $\chi^2$ value between two (binned) variables as
|
48,980 |
Effect size calculation for comparison between medians
|
Cohen's d is probably not the best effect size statistic for a comparison of medians, since it is based on the mean and standard deviation(s), and is related to the calculations used in a t test.
I haven't seen this anywhere that I remember, but I suppose you could use an analogous statistic that uses the differences in medians and the median absolute deviation.
(median(B) - median(A)) / sqrt((mad(A)^2 + mad(B)^2)/2)
For normal samples, this will give a similar result to Cohen's d.
If I invented this, you can call it Mangiafico's d. :)
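As a quick base-R check of the formula (using the Group1/Group2 data from the addendum below; mad() uses its default consistency constant of 1.4826):
Group1 = c(0, 4, 8, 12, 16, 1, 5, 9, 13, 17, 2, 6, 10, 14, 18)
Group2 = c(8, 12, 16, 20, 24, 9, 13, 17, 21, 25, 10, 14, 18, 22, 26)
(median(Group2) - median(Group1)) / sqrt((mad(Group1)^2 + mad(Group2)^2)/2)
### About 1.08; the mangiaficoD() output below reports -1.08 because it takes the difference in the other direction.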
Of course, a simple difference in medians is also a useful effect size statistic. It isn't unit-less, but has units of the measurements. If you add confidence intervals, this is quite informative.
Finally, effect size statistics that might be used for a Wilcoxon-Mann-Whitney test would be useful: Vargha and Delaney’s A, Cliff’s delta, and the Glass rank biserial coefficient.
Addendum November 2022:
I added an implementation of the statistic that uses the differences in medians and the median absolute deviation to my rcompanion package. An example follows.
Group1 = c(0, 4, 8, 12, 16, 1, 5, 9, 13, 17, 2, 6, 10, 14, 18)
Group2 = c(8, 12, 16, 20, 24, 9, 13, 17, 21, 25, 10, 14, 18, 22, 26)
library(rcompanion)
mangiaficoD(x=Group1, y=Group2, verbose=TRUE, ci=TRUE, hist=TRUE)
### Group Statistic Value
### 1 A Median 9.00
### 2 B Median 17.00
### 3 Difference -8.00
### 4 A MAD 7.41
### 5 B MAD 7.41
### 6 Pooled MAD 7.41
### d lower.ci upper.ci
### 1 -1.08 -2.81 -0.169
Group = factor(c(rep("I", length(Group1)), rep("II", length(Group2))))
Value = c(Group1, Group2)
Data = data.frame(Group, Value)
plot(Value ~ Group, data=Data)
library(coin)
median_test(Value ~ Group, data = Data)
### Asymptotic Two-Sample Brown-Mood Median Test
### Z = -2.1589, p-value = 0.03086
Addendum 2
I mentioned Vargha and Delaney’s A above as an option for effect size. This statistic is based on the proportion of observations in Group A that are greater than observations in Group B, and so on.
For the example above, the proportion of observations in Group1 > Group2 is 0.16, and the proportion of observations in Group2 > Group1 is 0.80.
These statistics are useful to report when comparing two medians.
library(rcompanion)
vda(x=Group1, y=Group2, verbose=TRUE)
### Statistic Value
### 1 Proportion Ya > Yb 0.16
### 2 Proportion Ya < Yb 0.80
### 3 Proportion ties 0.04
### VDA
### 0.18
|
Effect size calculation for comparison between medians
|
Cohen's d is probably not the best effect size statistic for a comparison of medians, since it is based on the mean and standard deviation(s), and is related to the calculations used in a t test.
I ha
|
Effect size calculation for comparison between medians
Cohen's d is probably not the best effect size statistic for a comparison of medians, since it is based on the mean and standard deviation(s), and is related to the calculations used in a t test.
I haven't seen this anywhere that I remember, but I suppose you could use an analogous statistic that uses the differences in medians and the median absolute deviation.
(median(B) - median(A)) / sqrt((mad(A)^2 + mad(B)^2)/2)
For normal samples, this will give a similar result to Cohen's d.
If I invented this, you can call it Mangiafico's d. :)
Of course, a simple difference in medians is also a useful effect size statistic. It isn't unit-less, but has units of the measurements. If you add confidence intervals, this is quite informative.
Finally, effect size statistics that might be used for a Wilcoxon-Mann-Whitney test would be useful: Vargha and Delaney’s A, Cliff’s delta, and the Glass rank biserial coefficient.
Addendum November 2022:
I added an implementation of the statistic that uses the differences in medians and the median absolute deviation to my rcompanion package. An example follows.
Group1 = c(0, 4, 8, 12, 16, 1, 5, 9, 13, 17, 2, 6, 10, 14, 18)
Group2 = c(8, 12, 16, 20, 24, 9, 13, 17, 21, 25, 10, 14, 18, 22, 26)
library(rcompanion)
mangiaficoD(x=Group1, y=Group2, verbose=TRUE, ci=TRUE, hist=TRUE)
### Group Statistic Value
### 1 A Median 9.00
### 2 B Median 17.00
### 3 Difference -8.00
### 4 A MAD 7.41
### 5 B MAD 7.41
### 6 Pooled MAD 7.41
### d lower.ci upper.ci
### 1 -1.08 -2.81 -0.169
Group = factor(c(rep("I", length(Group1)), rep("II", length(Group2))))
Value = c(Group1, Group2)
Data = data.frame(Group, Value)
plot(Value ~ Group, data=Data)
library(coin)
median_test(Value ~ Group, data = Data)
### Asymptotic Two-Sample Brown-Mood Median Test
### Z = -2.1589, p-value = 0.03086
Addendum 2
I mentioned Vargha and Delaney’s A above as an option for effect size. This statistic is based on the proportion of observations in Group A that are greater than observations in Group B, and so on.
For the example above, the proportion of observations in Group1 > Group2 is 0.16, and the proportion of observations in Group2 > Group1 is 0.80.
These statistics are useful to report when comparing two medians.
library(rcompanion)
vda(x=Group1, y=Group2, verbose=TRUE)
### Statistic Value
### 1 Proportion Ya > Yb 0.16
### 2 Proportion Ya < Yb 0.80
### 3 Proportion ties 0.04
### VDA
### 0.18
|
Effect size calculation for comparison between medians
Cohen's d is probably not the best effect size statistic for a comparison of medians, since it is based on the mean and standard deviation(s), and is related to the calculations used in a t test.
I ha
|
48,981 |
Effect size calculation for comparison between medians
|
Mood's test is really just a chi-square test in disguise (https://en.wikipedia.org/wiki/Median_test).
Hence, I recommend using a measure of effect size for chi-square tests. Three standard measures are used: phi, the odds ratio, and Cramer's V. The first two are only defined when comparing two samples (a 2x2 table), while Cramer's V works for general r x c contingency tables.
Here are equations for phi and V:
$$\phi= \sqrt{\chi^2/n}$$
$$V=\sqrt{\frac{\chi^2}{n \min(c-1,\ r-1)}}$$
where n is the total number of observations, $\chi^2$ is the test statistic, and r/c are the dimensions of the contingency matrix (in this case it will be two x the number of groups you are comparing). If you are interested in the odds ratio, I would recommend googling it, as it is a little bit more complicated than the others.
Note I leave interpretation of the effect size to the reader. :)
https://www.real-statistics.com/chi-square-and-f-distributions/effect-size-chi-square/
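For example, a minimal R sketch (with hypothetical counts) showing how phi and Cramer's V follow from the chi-square statistic:
tab = matrix(c(30, 20, 10, 40), nrow=2)    # hypothetical 2x2 counts
out = chisq.test(tab, correct=FALSE)
n = sum(tab)
phi = sqrt(out$statistic / n)
V = sqrt(out$statistic / (n * min(dim(tab) - 1)))
c(phi = unname(phi), V = unname(V))        # identical for a 2x2 table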
|
Effect size calculation for comparison between medians
|
Mood's test is really just a chi-square test in disguise (https://en.wikipedia.org/wiki/Median_test).
Hence, I recommend using the measure of effect size for chi-square tests. There are three standard
|
Effect size calculation for comparison between medians
Mood's test is really just a chi-square test in disguise (https://en.wikipedia.org/wiki/Median_test).
Hence, I recommend using a measure of effect size for chi-square tests. Three standard measures are used: phi, the odds ratio, and Cramer's V. The first two are only defined when comparing two samples (a 2x2 table), while Cramer's V works for general r x c contingency tables.
Here are equations for phi and V:
$$\phi= \sqrt{\chi^2/n}$$
$$V=\sqrt{\frac{\chi^2}{n \min(c-1,\ r-1)}}$$
where n is the total number of observations, $\chi^2$ is the test statistic, and r/c are the dimensions of the contingency matrix (in this case it will be two x the number of groups you are comparing). If you are interested in the odds ratio, I would recommend googling it, as it is a little bit more complicated than the others.
Note I leave interpretation of the effect size to the reader. :)
https://www.real-statistics.com/chi-square-and-f-distributions/effect-size-chi-square/
|
Effect size calculation for comparison between medians
Mood's test is really just a chi-square test in disguise (https://en.wikipedia.org/wiki/Median_test).
Hence, I recommend using the measure of effect size for chi-square tests. There are three standard
|
48,982 |
Markov chain ( Absorption)
|
It would be overkill to solve this problem using Markov Chain theory: but the underlying concepts will help you frame it in a way that admits a simple solution.
Formulating the problem
The most fundamental concept is that of a state: we may model this situation in terms of 41 distinct positions, or "states," situated at one-meter height intervals from the bottom (height -40) to the top (height 0) of the hill. The current state, halfway up the hill, is a height of -20.
The second fundamental concept is that of independence from past events: the chance of what happens next depends only on the state, not on any details of how the man got there. Consequently, the chance of reaching the summit depends only on the state. Accordingly, if we write $s$ for a state, the chance of reaching the summit can be simply written $p(s).$ We are asked to find $p(-20).$
From any state $s$ between $-40$ and $0$ there is a $1/3$ chance that $s+1$ will be the next state and a $2/3$ chance that $s-1$ will be the next state. The most basic laws of conditional probability then imply
$$p(s) = (1/3)p(s+1) + (2/3)p(s-1) = \frac{p(s+1)+2p(s-1)}{3}.\tag{*}$$
The final step of formulating the problem treats the endpoints, or "absorbing states" $s=0$ and $s=-40.$ It should be clear that
$$p(0)=1;\ p(-40)=0.\tag{**}$$
Analysis
At this point the work may look formidable: who wants to solve a sequence of 40 equations? A nice solution method combines all the equations into a single mathematical object. But before we proceed, allow me to remark that you don't need to follow this analysis: it will suffice to check that the final formula (highlighted below) satisfies all the conditions established by the problem -- and this is just a matter of simple algebra.
At this juncture it is helpful to solve the general problem. Let's suppose there is a sequence of states $s=0,1,2,\ldots, n$ and that each state $s$ between $1$ and $n-1$ transitions to $s-1$ with probability $p$ and to $s+1$ with probability $1-p.$ For all $s$ let $a_s$ be the chance of arriving at state $0$ before hitting state $n.$ (I have dropped the previous "$p(-s)$" notation because it leads to too many p's and I have switched from indexing states with negative numbers to indexing them with positive numbers.)
As we have seen, $a_0=1,$ $a_n=0,$ and otherwise $a_{s} = pa_{s-1} + (1-p)a_{s+1}$ (the "recurrence relation"). This set of equations is neatly encoded by a polynomial
$$P(t) = a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n = a_0 + \sum_{s=1}^{n} a_s t^s.$$
Plugging in the recurrence relation and then collecting common powers of $t$ (writing $a_{n+1}=0$ for convenience) gives
$$\begin{aligned}
P(t) &= a_0 + \sum_{s=1}^n \left[pa_{s-1} + (1-p)a_{s+1}\right]t^s \\
&= a_0 + p\sum_{s=1}^n a_{s-1} t^s + (1-p)\sum_{s=1}^n a_{s+1}t^s\\
&= a_0 + pt\sum_{s=1}^n a_{s-1} t^{s-1} + \frac{1-p}{t}\sum_{s=1}^n a_{s+1}t^{s+1}\\
&= a_0 + pt\sum_{s=0}^{n-1} a_{s} t^{s} + \frac{1-p}{t}\sum_{s=2}^{n+1} a_{s}t^{s}\\
&= a_0 + pt(P(t) - a_nt^n) + \frac{1-p}{t}(P(t) - a_0 - a_1t)
\end{aligned}$$
This is a single equation for the polynomial $P$ (at least up to $t^n;$ I will ignore any coefficients of $t^n$ or higher powers that might be needed to make the equation work out exactly.) Simplify this equation a little using the initial condition $a_0=1$ and solve for $P$ to get
$$P(t) = \frac{(1-p) + (-1 + (1-p)a_1)t}{pt^2 - t + (1-p)}.$$
Now every coefficient of $P$ can be expressed in terms of the (still unknown) number $a_1.$ The value of $a_1$ is determined by the final condition $a_n=0.$
A closed formula is possible by expanding the right hand side as a partial fraction. It comes down to observing
$$\frac{1}{pt^2 - t + (1-p)} = \frac{1}{1-2p}\left(\frac{1}{1-t} - \frac{\lambda}{1 - \lambda t}\right)$$
and expanding the fractions as sums of geometric series, both of which are in the form
$$\frac{\rho}{1 - \rho t} = \rho + \rho^2 t + \rho^3 t^2 + \cdots$$
and multiplying that by the numerator $(1-p) + (-1 + (1-p)a_1)t$ to obtain $P(t).$ This gives a closed formula for every term in $P(t)$ as a function of $a_1.$
For $p\ne 1/2$ and writing $\lambda = p/(1-p)$ this approach gives the general result
$$a_s = \frac{\lambda^s - \lambda^n}{1 - \lambda^n}$$
for $s=1, 2, \ldots, n$ (and this happens to work for $s=0,$ too). (When $p=1/2,$ $\lambda=1$ makes this formula undefined. You can easily figure out a simple formula, though, by taking the limit of $a_s$ as $\lambda\to 1$ using a single application of L'Hopital's Rule.)
As a check, it is clear this formula gives $a_0=1$ and $a_n=0.$ It remains to verify it satisfies the recurrence relation, but that's a matter of showing
$$\frac{\lambda^s - \lambda^n}{1 - \lambda^n} = a_s = pa_{s-1} + (1-p)a_{s+1} = p\frac{\lambda^{s-1} - \lambda^n}{1 - \lambda^n} + (1-p)\frac{\lambda^{s+1} - \lambda^n}{1 - \lambda^n},$$
which is straightforward.
Application
In the given problem $n=40,$ $p=1/3,$ and we are asked to find $a_{20}.$ Consequently $\lambda = (1/3)\,/\,(1-1/3) = 1/2$ and
$$a_{20} = \frac{2^{-20} - 2^{-40}}{1 - 2^{-40}} = 2^{-20} - 2^{-40} + 2^{-60} - 2^{-80} + \cdots.$$
The expansion on the right hand side can be terminated after the first two terms when computing in double precision floating point (which has a precision of $52$ binary places), giving
$$a_{20} \approx 2^{-20} - 2^{-40} \approx 9.53673406911546\times 10^{-7};$$
a little less than one in a million.
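As a numerical check, here is a minimal R sketch (the linear system below is just the recurrence relation and the two boundary conditions written in matrix form); the closed formula agrees with a direct solve:
p = 1/3; n = 40; lambda = p/(1 - p)
a.closed = (lambda^(0:n) - lambda^n) / (1 - lambda^n)
A = diag(n + 1); b = c(1, rep(0, n))       # rows 1 and n+1 encode a_0 = 1 and a_n = 0
for (s in 2:n) {                           # interior states s = 1, ..., n-1
  A[s, s - 1] = -p
  A[s, s + 1] = -(1 - p)
}
a.solved = solve(A, b)
max(abs(a.closed - a.solved))              # essentially zero
a.closed[21]                               # a_20, about 9.5367e-07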
|
Markov chain ( Absorption)
|
It would be overkill to solve this problem using Markov Chain theory: but the underlying concepts will help you frame it an a way that admits a simple solution.
Formulating the problem
The most fundam
|
Markov chain ( Absorption)
It would be overkill to solve this problem using Markov Chain theory: but the underlying concepts will help you frame it in a way that admits a simple solution.
Formulating the problem
The most fundamental concept is that of a state: we may model this situation in terms of 41 distinct positions, or "states," situated at one-meter height intervals from the bottom (height -40) to the top (height 0) of the hill. The current state, halfway up the hill, is a height of -20.
The second fundamental concept is that of independence from past events: the chance of what happens next depends only on the state, not on any details of how the man got there. Consequently, the chance of reaching the summit depends only on the state. Accordingly, if we write $s$ for a state, the chance of reaching the summit can be simply written $p(s).$ We are asked to find $p(-20).$
From any state $s$ between $-40$ and $0$ there is a $1/3$ chance that $s+1$ will be the next state and a $2/3$ chance that $s-1$ will be the next state. The most basic laws of conditional probability then imply
$$p(s) = (1/3)p(s+1) + (2/3)p(s-1) = \frac{p(s+1)+2p(s-1)}{3}.\tag{*}$$
The final step of formulating the problem treats the endpoints, or "absorbing states" $s=0$ and $s=-40.$ It should be clear that
$$p(0)=1;\ p(-40)=0.\tag{**}$$
Analysis
At this point the work may look formidable: who wants to solve a sequence of 40 equations? A nice solution method combines all the equations into a single mathematical object. But before we proceed, allow me to remark that you don't need to follow this analysis: it will suffice to check that the final formula (highlighted below) satisfies all the conditions established by the problem -- and this is just a matter of simple algebra.
At this juncture it is helpful to solve the general problem. Let's suppose there is a sequence of states $s=0,1,2,\ldots, n$ and that each state $s$ between $1$ and $n-1$ transitions to $s-1$ with probability $p$ and to $s+1$ with probability $1-p.$ For all $s$ let $a_s$ be the chance of arriving at state $0$ before hitting state $n.$ (I have dropped the previous "$p(-s)$" notation because it leads to too many p's and I have switched from indexing states with negative numbers to indexing them with positive numbers.)
As we have seen, $a_0=1,$ $a_n=0,$ and otherwise $a_{s} = pa_{s-1} + (1-p)a_{s+1}$ (the "recurrence relation"). This set of equations is neatly encoded by a polynomial
$$P(t) = a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n = a_0 + \sum_{s=1}^{n} a_s t^s.$$
Plugging in the recurrence relation and then collecting common powers of $t$ (writing $a_{n+1}=0$ for convenience) gives
$$\begin{aligned}
P(t) &= a_0 + \sum_{s=1}^n \left[pa_{s-1} + (1-p)a_{s+1}\right]t^s \\
&= a_0 + p\sum_{s=1}^n a_{s-1} t^s + (1-p)\sum_{s=1}^n a_{s+1}t^s\\
&= a_0 + pt\sum_{s=1}^n a_{s-1} t^{s-1} + \frac{1-p}{t}\sum_{s=1}^n a_{s+1}t^{s+1}\\
&= a_0 + pt\sum_{s=0}^{n-1} a_{s} t^{s} + \frac{1-p}{t}\sum_{s=2}^{n+1} a_{s}t^{s}\\
&= a_0 + pt(P(t) - a_nt^n) + \frac{1-p}{t}(P(t) - a_0 - a_1t)
\end{aligned}$$
This is a single equation for the polynomial $P$ (at least up to $t^n;$ I will ignore any coefficients of $t^n$ or higher powers that might be needed to make the equation work out exactly.) Simplify this equation a little using the initial condition $a_0=1$ and solve for $P$ to get
$$P(t) = \frac{(1-p) + (-1 + (1-p)a_1)t}{pt^2 - t + (1-p)}.$$
Now every coefficient of $P$ can be expressed in terms of the (still unknown) number $a_1.$ The value of $a_1$ is determined by the final condition $a_n=0.$
A closed formula is possible by expanding the right hand side as a partial fraction. It comes down to observing
$$\frac{1}{pt^2 - t + (1-p)} = \frac{1}{1-2p}\left(\frac{1}{1-t} - \frac{\lambda}{1 - \lambda t}\right)$$
and expanding the fractions as sums of geometric series, both of which are in the form
$$\frac{\rho}{1 - \rho t} = \rho + \rho^2 t + \rho^3 t^2 + \cdots$$
and multiplying that by the numerator $(1-p) + (-1 + (1-p)a_1)t$ to obtain $P(t).$ This gives a closed formula for every term in $P(t)$ as a function of $a_1.$
For $p\ne 1/2$ and writing $\lambda = p/(1-p)$ this approach gives the general result
$$a_s = \frac{\lambda^s - \lambda^n}{1 - \lambda^n}$$
for $s=1, 2, \ldots, n$ (and this happens to work for $s=0,$ too). (When $p=1/2,$ $\lambda=1$ makes this formula undefined. You can easily figure out a simple formula, though, by taking the limit of $a_s$ as $\lambda\to 1$ using a single application of L'Hopital's Rule.)
As a check, it is clear this formula gives $a_0=1$ and $a_n=0.$ It remains to verify it satisfies the recurrence relation, but that's a matter of showing
$$\frac{\lambda^s - \lambda^n}{1 - \lambda^n} = a_s = pa_{s-1} + (1-p)a_{s+1} = p\frac{\lambda^{s-1} - \lambda^n}{1 - \lambda^n} + (1-p)\frac{\lambda^{s+1} - \lambda^n}{1 - \lambda^n},$$
which is straightforward.
Application
In the given problem $n=40,$ $p=1/3,$ and we are asked to find $a_{20}.$ Consequently $\lambda = (1/3)\,/\,(1-1/3) = 1/2$ and
$$a_{20} = \frac{2^{-20} - 2^{-40}}{1 - 2^{-40}} = 2^{-20} - 2^{-40} + 2^{-60} - 2^{-80} + \cdots.$$
The expansion on the right hand side can be terminated after the first two terms when computing in double precision floating point (which has a precision of $52$ binary places), giving
$$a_{20} \approx 2^{-20} - 2^{-40} \approx 9.53673406911546\times 10^{-7};$$
a little less than one in a million.
|
Markov chain ( Absorption)
It would be overkill to solve this problem using Markov Chain theory: but the underlying concepts will help you frame it an a way that admits a simple solution.
Formulating the problem
The most fundam
|
48,983 |
Markov chain ( Absorption)
|
Imagine that the hill-climbing journey consists of 41 states, one for each possible meter, so states 0, 1, 2, ...., 40, where state 0 is the bottom of the hill and state 40 is the summit (both absorbing). The transition probability matrix then becomes a 41x41 matrix, representing the different probabilities of going from one state to another. It looks like the following:
    0    1    2   --  40
0   1    0    0   --   0
1   2/3  0    1/3 --   0
2   0    2/3  0   --   0
|   |    |    |   --   |
|   |    |    |   --   |
40  0    0    0   --   1
Let's call this matrix P. If we start at 20 meters, in other words at state 20, we can represent this as a vector (41 elements long) with the probabilities of starting in each state, called u, u=[0,0, ... , 0, 1, 0 ... 0, 0], where the 1 represents a 100% probability of starting at 20 meters.
The matrix multiplication u*P then gives the probabilities of being in each state at the next timestep. If we continue this matrix multiplication over and over again, u*P^t with t going towards infinity, the powers P^t converge to a limiting matrix P* because states 0 and 40 are absorbing. This limiting matrix gives the probabilities of eventually being absorbed in each of the two end states.
So in your case, you would do this matrix multiplication in a programming language of your choice many times (until the entries stop changing), and you would simply look up the (20, 40) entry of the converged matrix, which gives the probability of starting at 20 meters and making it all the way atop the hill!
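For example, a minimal sketch in R (rows and columns 1..41 correspond to states 0..40):
P = matrix(0, 41, 41)
for (s in 2:40) {                     # interior states 1..39
  P[s, s - 1] = 2/3                   # fall back one meter
  P[s, s + 1] = 1/3                   # climb one meter
}
P[1, 1] = 1                           # bottom is absorbing
P[41, 41] = 1                         # summit is absorbing
Pt = P
for (i in 1:13) Pt = Pt %*% Pt        # P to the 8192nd power: effectively converged
Pt[21, 41]                            # start at 20 m; about 9.5367e-07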
|
Markov chain ( Absorption)
|
Imagine that the hill-climbing journey consists of 41 states, one for each meter possible, so states 0, 1, 3, ...., 40. The transition probability matrix then becomes a 41x41 matrix, representing the
|
Markov chain ( Absorption)
Imagine that the hill-climbing journey consists of 41 states, one for each possible meter, so states 0, 1, 2, ...., 40, where state 0 is the bottom of the hill and state 40 is the summit (both absorbing). The transition probability matrix then becomes a 41x41 matrix, representing the different probabilities of going from one state to another. It looks like the following:
    0    1    2   --  40
0   1    0    0   --   0
1   2/3  0    1/3 --   0
2   0    2/3  0   --   0
|   |    |    |   --   |
|   |    |    |   --   |
40  0    0    0   --   1
Let's call this matrix P. If we start at 20 meters, in other words at state 20, we can represent this as a vector (41 elements long) with the probabilities of starting in each state, called u, u=[0,0, ... , 0, 1, 0 ... 0, 0], where the 1 represents a 100% probability of starting at 20 meters.
The matrix multiplication u*P then gives the probabilities of being in each state at the next timestep. If we continue this matrix multiplication over and over again, u*P^t with t going towards infinity, the powers P^t converge to a limiting matrix P* because states 0 and 40 are absorbing. This limiting matrix gives the probabilities of eventually being absorbed in each of the two end states.
So in your case, you would do this matrix multiplication in a programming language of your choice many times (until the entries stop changing), and you would simply look up the (20, 40) entry of the converged matrix, which gives the probability of starting at 20 meters and making it all the way atop the hill!
|
Markov chain ( Absorption)
Imagine that the hill-climbing journey consists of 41 states, one for each meter possible, so states 0, 1, 3, ...., 40. The transition probability matrix then becomes a 41x41 matrix, representing the
|
48,984 |
Confidence interval from summary function
|
The p value is for a test of the null hypothesis that the parameter is equal to zero. It is the probability of observing an estimate at least this far from zero, if the parameter IS actually zero. Since this probability is extremely small (0.00000003), this is very strong evidence that the parameter is not zero, and if we constructed a 95% confidence interval, it would not span zero. If the confidence interval did span zero, then we would be able to say that zero is a plausible value for the parameter. With such a small p-value, this is not plausible.
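If you want the interval itself, here is a minimal sketch in R (the estimate and standard error are hypothetical placeholders; substitute the values from your own summary output, or call confint() on the fitted model object directly):
est = 1.84; se = 0.33                          # hypothetical values from a summary table
est + c(-1, 1) * qnorm(0.975) * se             # approximate 95% confidence interval
# For a fitted model object, confint(fit) returns the confidence intervals directly.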
|
Confidence interval from summary function
|
The p value is for a test of the null hypothesis that the estimate is equal to zero. It literally means the probability of observing these data (or data even further from zero), if the parameter for t
|
Confidence interval from summary function
The p value is for a test of the null hypothesis that the parameter is equal to zero. It is the probability of observing an estimate at least this far from zero, if the parameter IS actually zero. Since this probability is extremely small (0.00000003), this is very strong evidence that the parameter is not zero, and if we constructed a 95% confidence interval, it would not span zero. If the confidence interval did span zero, then we would be able to say that zero is a plausible value for the parameter. With such a small p-value, this is not plausible.
|
Confidence interval from summary function
The p value is for a test of the null hypothesis that the estimate is equal to zero. It literally means the probability of observing these data (or data even further from zero), if the parameter for t
|
48,985 |
Count data with largely varying sample sizes. Can I get anything from the data?
|
In principle, it seems you'd want to use a chi-squared test
to see if the two groups tend to have the same
distribution of category counts.
In practice, sparse
data as in the last few categories of your first
dataset make it impossible to do a 'standard'
chi-squared test. In particular, several expected cell
counts are smaller than five. (Some authors are OK
with a few counts as low as three as long as all the rest
are higher than five---questionable for your first dataset.)
Fortunately, the implementation of chisq.test in R
simulates reasonably accurate P-values for tests in
many such problematic situations. The simulation is OK
for the table as a whole, but if the null hypothesis of
homogeneity is rejected, any ad hoc tests
trying to identify specifically which categories differ
must be limited to categories with higher numbers
of expected counts.
Here is output from chisq.test for your first dataset:
x1 = c(45, 16, 9, 7, 5, 3, 1, 0)
x2 = c(23, 75, 145, 85, 23, 13, 9, 5)
TBL = rbind(x1, x2); TBL
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
x1 45 16 9 7 5 3 1 0
x2 23 75 145 85 23 13 9 5
chi.out = chisq.test(TBL, sim=T)
chi.out
Pearson's Chi-squared test
with simulated p-value
(based on 2000 replicates)
data: TBL
X-squared = 127.6, df = NA, p-value = 0.0004998
The simulated P-value is much smaller than 0.05, so
there are highly significant differences among categories
for the two groups.
The chi-squared statistic $Q$ is composed of 16 components
as follows:
$$Q = \sum_{i=1}^2\sum_{j=1}^8 \frac{(X_{ij} - E_{ij})^2}{E_{ij}} = 127.6,$$
where the $X_{ij}$ are observed counts from the contingency table.
chi.out$obs
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
x1 45 16 9 7 5 3 1 0
x2 23 75 145 85 23 13 9 5
Also, the expected counts, based on the null hypothesis,
are computed in terms of row and column totals from
the contingency table, approximately as follows:
round(chi.out$exp, 2)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
x1 12.6 16.87 28.54 17.05 5.19 2.97 1.85 0.93
x2 55.4 74.13 125.46 74.95 22.81 13.03 8.15 4.07
Because of the low expected counts in the last two
categories, the chi-squared statistic does not
necessarily have (even approximately) the distribution
$\mathsf{Chisq}(\nu = (r-1)(c-1) = 7).$ This is the
reason we needed to simulate the P-value of this test.
[A traditional (pre-simulation) approach might be to
combine the last three categories into one.]
The Pearson residuals are of the form
$R_{ij}=\frac{X_{ij}-E_{ij}}{\sqrt{E_{ij}}}.$
That is, $Q = \sum_{i,j}R_{ij}^2.$ By looking
among the $R_{ij}$ with largest absolute values,
one can get an idea which categories made the
most important contributions to a $Q$ large enough
to be significant:
round(chi.out$res, 2)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
x1 9.13 -0.21 -3.66 -2.43 -0.08 0.02 -0.63 -0.96
x2 -4.35 0.10 1.74 1.16 0.04 -0.01 0.30 0.46
So it seems that comparisons involving categories
A, C, and D may be most likely to show significance.
(A superficial look at the original contingency table
shows that these categories have large and discordant
counts.)
In order to avoid false discovery from multiple tests
on the same data, you should use some method of
choosing significance levels smaller than 5% for
such comparisons. (One possibility is Bonferroni's method; perhaps using 1% instead of 5% levels.)
Addendum: Comparison of Cat A with sum of C&D. Output from Minitab.
This is one possible ad hoc test. It uses
a simple $2 \times 2$ table that you should be able to
compute by hand. You can check your expected values in
the output below.
Data Display
Row Cat Gp1 Gp2
1 A 45 23
2 C&D 16 130
Chi-Square Test for Association: Cat, Group
Rows: Cat Columns: Group
Gp1 Gp2 All
A 45 23 68
19.38 48.62
C&D 16 130 146
41.62 104.38
All 61 153 214
Cell Contents: Count
Expected count
Pearson Chi-Square = 69.408, DF = 1, P-Value = 0.000
Very small P-value suggests that Gp 1 prefers Cat A
while Gp 2 prefers Cats C & D.
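The same ad hoc 2x2 comparison can be reproduced in R (counts taken from the table above; correct=FALSE gives the uncorrected Pearson statistic):
tbl = matrix(c(45, 16, 23, 130), nrow=2,
             dimnames=list(Cat=c("A","C&D"), Group=c("Gp1","Gp2")))
chisq.test(tbl, correct=FALSE)
# Pearson's Chi-squared test: X-squared = 69.408, df = 1, p-value < 2.2e-16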
|
Count data with largely varying sample sizes. Can I get anything from the data?
|
In principle, it seems you'd want to use a chi-squared test
to see if the two groups tend to have the same
distribution of category counts.
In practice, sparse
data as in the last few categories of yo
|
Count data with largely varying sample sizes. Can I get anything from the data?
In principle, it seems you'd want to use a chi-squared test
to see if the two groups tend to have the same
distribution of category counts.
In practice, sparse
data as in the last few categories of your first
dataset make it impossible to do a 'standard'
chi-squared test. In particular, several expected cell
counts are smaller than five. (Some authors are OK
with a few counts as low as three as long as all the rest
are higher than five---questionable for your first dataset.)
Fortunately, the implementation of chisq.test in R
simulates reasonably accurate P-values for tests in
many such problematic situations. The simulation is OK
for the table as a whole, but if the null hypothesis of
homogeneity is rejected, any ad hoc tests
trying to identify specifically which categories differ
must be limited to categories with higher numbers
of expected counts.
Here is output from chisq.test for your first dataset:
x1 = c(45, 16, 9, 7, 5, 3, 1, 0)
x2 = c(23, 75, 145, 85, 23, 13, 9, 5)
TBL = rbind(x1, x2); TBL
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
x1 45 16 9 7 5 3 1 0
x2 23 75 145 85 23 13 9 5
chi.out = chisq.test(TBL, sim=T)
chi.out
Pearson's Chi-squared test
with simulated p-value
(based on 2000 replicates)
data: TBL
X-squared = 127.6, df = NA, p-value = 0.0004998
The simulated P-value is much smaller than 0.05, so
there are highly significant differences among categories
for the two groups.
The chi-squared statistic $Q$ is composed of 16 components
as follows:
$$Q = \sum_{i=1}^2\sum_{j=1}^8 \frac{(X_{ij} - E_{ij})^2}{E_{ij}} = 127.6,$$
where the $X_{ij}$ are observed counts from the contingency table.
chi.out$obs
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
x1 45 16 9 7 5 3 1 0
x2 23 75 145 85 23 13 9 5
Also, the expected counts, based on the null hypothesis,
are computed in terms of row and column totals from
the contingency table, approximately as follows:
round(chi.out$exp, 2)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
x1 12.6 16.87 28.54 17.05 5.19 2.97 1.85 0.93
x2 55.4 74.13 125.46 74.95 22.81 13.03 8.15 4.07
Because of the low expected counts in the last two
categories, the chi-squared statistic does not
necessarily have (even approximately) the distribution
$\mathsf{Chisq}(\nu = (r-1)(c-1) = 7).$ This is the
reason we needed to simulate the P-value of this test.
[A traditional (pre-simulation) approach might be to
combine the last three categories into one.]
The Pearson residuals are of the form
$R_{ij}=\frac{X_{ij}-E_{ij}}{\sqrt{E_{ij}}}.$
That is, $Q = \sum_{i,j}R_{ij}^2.$ By looking
among the $R_{ij}$ with largest absolute values,
one can get an idea which categories made the
most important contributions to a $Q$ large enough
to be significant:
round(chi.out$res, 2)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
x1 9.13 -0.21 -3.66 -2.43 -0.08 0.02 -0.63 -0.96
x2 -4.35 0.10 1.74 1.16 0.04 -0.01 0.30 0.46
So it seems that comparisons involving categories
A, C, and D may be most likely to show significance.
(A superficial look at the original contingency table
shows that these categories have large and discordant
counts.)
In order to avoid false discovery from multiple tests
on the same data, you should use some method of
choosing significance levels smaller than 5% for
such comparisons. (One possibility is Bonferroni's method; perhaps using 1% instead of 5% levels.)
Addendum: Comparison of Cat A with sum of C&D. Output from Minitab.
This is one possible ad hoc test. It uses
a simple $2 \times 2$ table that you should be able to
compute by hand. You can check your expected values in
the output below.
Data Display
Row Cat Gp1 Gp2
1 A 45 23
2 C&D 16 130
Chi-Square Test for Association: Cat, Group
Rows: Cat Columns: Group
Gp1 Gp2 All
A 45 23 68
19.38 48.62
C&D 16 130 146
41.62 104.38
All 61 153 214
Cell Contents: Count
Expected count
Pearson Chi-Square = 69.408, DF = 1, P-Value = 0.000
Very small P-value suggests that Gp 1 prefers Cat A
while Gp 2 prefers Cats C & D.
|
Count data with largely varying sample sizes. Can I get anything from the data?
In principle, it seems you'd want to use a chi-squared test
to see if the two groups tend to have the same
distribution of category counts.
In practice, sparse
data as in the last few categories of yo
|
48,986 |
Can we solve overfitting by adding more parameters?
|
According to recent work on the Double Descent phenomenon, especially Belkin's, yes, you may be able to fix overfitting with more parameters.
That happens because, according to their hypothesis, if you have just enough parameters to interpolate training data, the solution space becomes constrained, precluding you from achieving a lower norm solution.
Adding more parameters (to the limit at infinity) "opens up" solution space again, allowing for, still interpolating, smaller norm solutions.
The interesting part is that in the interpolating regime smaller loss is often achieved than in the non-interpolating regime.
That helps to explain how absurdly over-parametrized deep networks work in practice, as stochastic optimization is inherently regularized and SGD will converge to minimum norm solutions in the over-parametrized regime.
|
Can we solve overfitting by adding more parameters?
|
According to recent works on the Double Descent phenomena, specially Belkin's, yes, you may be able to fix overfitting with more parameters.
That happens because, according to their hypothesis, if you
|
Can we solve overfitting by adding more parameters?
According to recent work on the Double Descent phenomenon, especially Belkin's, yes, you may be able to fix overfitting with more parameters.
That happens because, according to their hypothesis, if you have just enough parameters to interpolate training data, the solution space becomes constrained, precluding you from achieving a lower norm solution.
Adding more parameters (to the limit at infinity) "opens up" solution space again, allowing for, still interpolating, smaller norm solutions.
The interesting part is that in the interpolating regime smaller loss is often achieved than in the non-interpolating regime.
That helps to explain how absurdly over-parametrized deep networks work in practice, as stochastic optimization is inherently regularized and SGD will converge to minimum norm solutions in the over-parametrized regime.
|
Can we solve overfitting by adding more parameters?
According to recent works on the Double Descent phenomena, specially Belkin's, yes, you may be able to fix overfitting with more parameters.
That happens because, according to their hypothesis, if you
|
48,987 |
Can we solve overfitting by adding more parameters?
|
Adding parameters will lead to more overfitting. The more parameters, the more models you can represent. The more models, the more likely you'll find one that fits your training data exactly.
To avoid overfitting, choose the simplest model that does not underfit, and use cross-validation to make sure.
|
Can we solve overfitting by adding more parameters?
|
Adding parameters will lead to more overfitting. The more parameters, the more models you can represent. The more models, the more likely you'll find one that fits your training data exactly.
To avoid
|
Can we solve overfitting by adding more parameters?
Adding parameters will lead to more overfitting. The more parameters, the more models you can represent. The more models, the more likely you'll find one that fits your training data exactly.
To avoid overfitting, choose the simplest model that does not underfit, and use cross-validation to make sure.
|
Can we solve overfitting by adding more parameters?
Adding parameters will lead to more overfitting. The more parameters, the more models you can represent. The more models, the more likely you'll find one that fits your training data exactly.
To avoid
|
48,988 |
Clustering with large number of clusters
|
The solution I used, in the end, was my implementation of batched K-Means.
Usual implementations of batched K-Means do both the expectation and the maximization step on a single batch. This is not possible in this case because the data batch must be smaller than the number of clusters.
The solution is to do the expectation step in batches on the entire dataset, i.e., for each batch compute the cluster assignment given the current centroids and remember the assignment. Having the assignments, I can do the maximization step on the entire dataset.
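A minimal illustrative sketch of this scheme in R (not the actual implementation; X is assumed to be a numeric data matrix):
batched_kmeans = function(X, k, batch_size = 1000, iters = 10) {
  centroids = X[sample(nrow(X), k), , drop = FALSE]
  for (it in seq_len(iters)) {
    cluster_id = integer(nrow(X))
    cent_sq = rowSums(centroids^2)
    # Expectation step over the whole dataset, one batch at a time
    for (start in seq(1, nrow(X), by = batch_size)) {
      idx = start:min(start + batch_size - 1, nrow(X))
      Xb = X[idx, , drop = FALSE]
      d2 = outer(rowSums(Xb^2), cent_sq, "+") - 2 * Xb %*% t(centroids)
      cluster_id[idx] = max.col(-d2)          # index of the nearest centroid
    }
    # Maximization step on the entire dataset, using the stored assignments
    for (j in seq_len(k)) {
      if (any(cluster_id == j))
        centroids[j, ] = colMeans(X[cluster_id == j, , drop = FALSE])
    }
  }
  list(centroids = centroids, cluster = cluster_id)
}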
|
Clustering with large number of clusters
|
The solution I used, in the end, was my implementation of batched K-Means.
Usual implementations of batched K-Means do both the expectation and the maximization step on a single batch. This is not pos
|
Clustering with large number of clusters
The solution I used, in the end, was my implementation of batched K-Means.
Usual implementations of batched K-Means do both the expectation and the maximization step on a single batch. This is not possible in this case because the data batch must be smaller than the number of clusters.
The solution is to do the expectation step in batches on the entire dataset, i.e., for each batch compute the cluster assignment given the current centroids and remember the assignment. Having the assignments, I can do the maximization step on the entire dataset.
|
Clustering with large number of clusters
The solution I used, in the end, was my implementation of batched K-Means.
Usual implementations of batched K-Means do both the expectation and the maximization step on a single batch. This is not pos
|
48,989 |
two times square in distance calculation on one example?
|
It's a typo, the kernel is
$$K(x_1,x_2)=\exp\left(-\frac{\|x_1-x_2\|^2}{2\sigma^2}\right)$$
Note that if you do the calculation without squaring again, the result rounded to two digits will be the same.
|
two times square in distance calculation on one example?
|
It's a typo, the kernel is
$$K(x_1,x_2)=\exp\left(-||x_1-x_2||^2 \over 2\sigma^2\right)$$
Note that if you do the calculation w/o squaring again, the result rounded upto two digits will be the same.
|
two times square in distance calculation on one example?
It's a typo, the kernel is
$$K(x_1,x_2)=\exp\left(-\frac{\|x_1-x_2\|^2}{2\sigma^2}\right)$$
Note that if you do the calculation without squaring again, the result rounded to two digits will be the same.
|
two times square in distance calculation on one example?
It's a typo, the kernel is
$$K(x_1,x_2)=\exp\left(-||x_1-x_2||^2 \over 2\sigma^2\right)$$
Note that if you do the calculation w/o squaring again, the result rounded upto two digits will be the same.
|
48,990 |
Finding the Q function for the EM algorithm
|
To quote verbatim from Wikipedia
The EM algorithm is used to find (local) maximum likelihood parameters
of a statistical model in cases where the equations cannot be solved
directly. Typically these models involve latent variables in addition
to unknown parameters and known data observations. That is, either
missing values exist among the data, or the model can be formulated
more simply by assuming the existence of further unobserved data
points. For example, a mixture model can be described more simply by
assuming that each observed data point has a corresponding unobserved
data point, or latent variable, specifying the mixture component to
which each data point belongs.
When considering the Normal likelihood,
the maximum likelihood equation can be solved directly, and there is no obvious missing data structure with latent variable $Z$ such that
$$\int_\mathcal Z f(x,z;\mu)\,\text{d}z = \varphi(x;\mu) \tag{1}$$
Of course, there is an infinity of ways to create (1), e.g.,
$$f(x,z;\mu)=\mathbb I_{(0,\varphi(x;\mu))}(z)$$
or
$$f(x,z;\mu)=\varphi(x;\mu)\varphi(z;\mu)\tag{2}$$
and one can try to apply EM to such completions but there is no reason that EM will be manageable in such cases. (Note: it works with (2).)
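As a small check that EM works with completion (2), here is a minimal R sketch. For a normal with known variance, the M-step under (2) reduces to $\mu \leftarrow (\bar x + \mu)/2$ (a quick derivation, not spelled out above), which converges to the usual MLE $\bar x$:
set.seed(7)
x = rnorm(20, mean = 3)
mu = 0
for (i in 1:50) mu = (mean(x) + mu) / 2   # E-step plus M-step under completion (2)
c(EM = mu, MLE = mean(x))                 # identical up to numerical precision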
As a side remark, there is a fundamental misunderstanding in the following paragraph in Wikipedia
Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations.
since maximising in both $(\theta,z)$ returns the joint mode, which differs from the marginal mode.
|
Finding the Q function for the EM algorithm
|
To quote verbatim from Wikipedia
The EM algorithm is used to find (local) maximum likelihood parameters
of a statistical model in cases where the equations cannot be solved
directly. Typically these
|
Finding the Q function for the EM algorithm
To quote verbatim from Wikipedia
The EM algorithm is used to find (local) maximum likelihood parameters
of a statistical model in cases where the equations cannot be solved
directly. Typically these models involve latent variables in addition
to unknown parameters and known data observations. That is, either
missing values exist among the data, or the model can be formulated
more simply by assuming the existence of further unobserved data
points. For example, a mixture model can be described more simply by
assuming that each observed data point has a corresponding unobserved
data point, or latent variable, specifying the mixture component to
which each data point belongs.
When considering the Normal likelihood,
the maximum likelihood equation can be solved directly, and there is no obvious missing data structure with latent variable $Z$ such that
$$\int_\mathcal Z f(x,z;\mu)\,\text{d}z = \varphi(x;\mu) \tag{1}$$
Of course, there is an infinity of ways to create (1), e.g.,
$$f(x,z;\mu)=\mathbb I_{(0,\varphi(x;\mu))}(z)$$
or
$$f(x,z;\mu)=\varphi(x;\mu)\varphi(z;\mu)\tag{2}$$
and one can try to apply EM to such completions but there is no reason that EM will be manageable in such cases. (Note: it works with (2).)
As a side remark, there is a fundamental misunderstanding in the following paragraph in Wikipedia
Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations.
since maximising in both $(\theta,z)$ returns the joint mode, which differs from the marginal mode.
|
Finding the Q function for the EM algorithm
To quote verbatim from Wikipedia
The EM algorithm is used to find (local) maximum likelihood parameters
of a statistical model in cases where the equations cannot be solved
directly. Typically these
|
48,991 |
Better confidence intervals for weighted average
|
An estimator that is obviously better in some ways is
$$\hat\mu= \frac{\sum_{\textrm{observed }k} n_kx_k}{\sum_{\textrm{observed }k} n_k}$$
In particular, if $|J|$ is large enough that all $K$ distinct items
will be observed at least once (with probability going to 1), then the
error of $\hat\mu$ will be exactly zero, whereas your estimator (call
it $\bar x$) has error of order $|J|^{-1/2}$.
On the other hand, for smaller values of $|J|$, $\hat\mu$ is not typically unbiased, which makes confidence intervals more difficult.
On the other other hand, $\hat \mu$ looks like it should typically have smaller mean absolute error or mean squared error.
What can we say analytically?
Write $\hat m_k$ for the number of times you observe an item of type $k$ and $\hat n_k=\hat m_kM/|J|$ for the expected value of $n_k$ given $\hat m_k$. Introduce $R_k$ as the indicator
of observing item $k$ at least once (so $\hat n_k>0$).
Your estimator $\bar x$ can be written as
$$\bar x = \frac{\sum_{k=1}^K \hat m_kR_kx_k}{\sum_{k=1}^K \hat m_kR_k}$$
or equivalently as
$$\bar x = \frac{\sum_{k=1}^K \hat n_kR_kx_k}{\sum_{k=1}^K \hat n_kR_k}$$
and mine as
$$\hat\mu= \frac{\sum_{k=1}^K n_kR_kx_k}{\sum_{k=1}^K n_kR_k}$$
So we obtain $\hat\mu$ by replacing $\hat n_k$ with $n_k$.
Since $\hat n_k-n_k$ is independent of $\hat\mu$ and its distribution does not depend on the parameters $\{x_k\}$, it's pure noise and $\hat\mu$ is more accurate (but not, however, unbiased).
You can get confidence intervals for $\hat\mu$ ignoring the bias by using a bootstrap. And you could use a subsampling bootstrap to get bias-corrected intervals.
|
Better confidence intervals for weighted average
|
An estimator that is obviously better in some ways is
$$\hat\mu= \frac{\sum_{\textrm{observed }k} n_kx_k}{\sum_{\textrm{observed }k} n_k}$$
In particular, if $|J|$ is large enough that all $K$ distinc
|
Better confidence intervals for weighted average
An estimator that is obviously better in some ways is
$$\hat\mu= \frac{\sum_{\textrm{observed }k} n_kx_k}{\sum_{\textrm{observed }k} n_k}$$
In particular, if $|J|$ is large enough that all $K$ distinct items
will be observed at least once (with probability going to 1), then the
error of $\hat\mu$ will be exactly zero, whereas your estimator (call
it $\bar x$) has error of order $|J|^{-1/2}$.
On the other hand, for smaller values of $|J|$, $\hat\mu$ is not typically unbiased, which makes confidence intervals more difficult.
On the other other hand, $\hat \mu$ looks like it should typically have smaller mean absolute error or mean squared error.
What can we say analytically?
Write $\hat m_k$ for the number of times you observe an item of type $k$ and $\hat n_k=\hat m_kM/|J|$ for the expected value of $n_k$ given $\hat m_k$. Introduce $R_k$ as the indicator
of observing item $k$ at least once (so $\hat n_k>0$).
Your estimator $\bar x$ can be written as
$$\bar x = \frac{\sum_{k=1}^K \hat m_kR_kx_k}{\sum_{k=1}^K \hat m_kR_k}$$
or equivalently as
$$\bar x = \frac{\sum_{k=1}^K \hat n_kR_kx_k}{\sum_{k=1}^K \hat n_kR_k}$$
and mine as
$$\hat\mu= \frac{\sum_{k=1}^K n_kR_kx_k}{\sum_{k=1}^K n_kR_k}$$
So we obtain $\hat\mu$ by replacing $\hat n_k$ with $n_k$.
Since $\hat n_k-n_k$ is independent of $\hat\mu$ and its distribution does not depend on the parameters $\{x_k\}$, it's pure noise and $\hat\mu$ is more accurate (but not, however, unbiased).
You can get confidence intervals for $\hat\mu$ ignoring the bias by using a bootstrap. And you could use a subsampling bootstrap to get bias-corrected intervals.
|
Better confidence intervals for weighted average
An estimator that is obviously better in some ways is
$$\hat\mu= \frac{\sum_{\textrm{observed }k} n_kx_k}{\sum_{\textrm{observed }k} n_k}$$
In particular, if $|J|$ is large enough that all $K$ distinc
|
48,992 |
Better confidence intervals for weighted average
|
estimate the variance of the estimator using the usual CLT-based approach.
...
Can I use this information to produce estimates with smaller confidence intervals?
Yes, you can. (This is true in general. In many cases, you can do better than a normal approximation, especially when the distribution is not really a normal distribution but only approximately normal.)
How exactly you do this will depend on the situation.
It seems like you want to compute the average of the distribution of $x$ by taking a sample.
Classically your estimate will be based on a sample of size $n$ like $x_1, \dots x_n$, and then you compute the mean and standard error.
If the distribution of $x$ is assumed to be Gaussian (or approximately Gaussian, like most sample means are anyway), then you would use:
$$\begin{array}{}
\hat{\mu} &=& \bar{x} &=& \frac{1}{n} \sum_{i=1}^n x_i\\
\hat{\sigma}_\mu & =& \frac{1}{\sqrt{n}} s &=& \frac{1}{\sqrt{n}} \sqrt{\frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2}
\end{array}$$
But instead of the classical estimate of the error of the mean, you want to use some information about a special property of the data sampling which is that some of the items may occur multiple times.
The exact approach will depend on the type of joint distribution of $x_k,\pi_k$. But here we will show by means of two examples that indeed the estimates and the confidence interval can be treated differently.
Binomial distribution case
You might have a situation where there are only two items. Then the estimate of the mean boils down to estimation of the probability $p$ for the 1st item (and $1-p$ for the second item). And the estimate of the mean becomes
$$\hat{\mu} = x_1 \hat{p} + x_2 (1-\hat{p}) = x_2 + \hat{p} (x_1 - x_2)$$
Here the estimate $\hat{p}$ relates to estimating the parameter of a binomial distribution, whose standard error estimate is different from the usual standard error of the mean. In fact there is a large variety of approaches (https://en.m.wikipedia.org/wiki/Binomial_proportion_confidence_interval).
In this example you know all the $x_k$ because you assume that there are only two items. In reality you may have something more complex like $\pi$ being some parametric probability function/density/mass $f(x)$ telling you how probable a certain value (or range) $x$ is. And your estimate of the average of $x$ will boil down to being an estimate of the average of the distribution/function $\pi$. Depending on the type of distribution $\pi$ you will get different types of estimates and confidence intervals.
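As a toy illustration of the two-item case (all numbers below are made up), the interval for $p$ can simply be transformed into an interval for the mean because the map $p \mapsto x_2 + p(x_1-x_2)$ is monotone:
### hypothetical two-item example: known values x1, x2; item 1 drawn n1 times out of n
x1 <- 10; x2 <- 2
n  <- 500; n1 <- 310
phat   <- n1 / n
ci_p   <- prop.test(n1, n)$conf.int        ### Wilson-type interval for p
mu_hat <- x2 + phat * (x1 - x2)
mu_ci  <- sort(x2 + ci_p * (x1 - x2))      ### corresponding interval for the mean
mu_hat; mu_ci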
Independent $\pi$ and $x$
It could be that the items are distributed with $\pi$ and $x$ independent of each other. Your sample could have some item $k$ occurring multiple times, but this will be partly random/noisy behaviour that tells you little about the true weighted mean.
Because of the independence of $\pi$ and $x$ you will only be interested in the distribution of $x$ and not the $\pi$. So you can estimate the mean by only considering the $m$ unique items in the sample and not all the $n$ items (ie. you ignore multiplicity)
$$\begin{array}{}
\hat{\mu} &=& \bar{x} &=& \frac{1}{m} \sum_{i=1}^m x_i\\
\hat{\sigma}_\mu & =& \frac{1}{\sqrt{m}} s &=& \frac{1}{\sqrt{m}} \sqrt{\frac{1}{m-1}\sum_{i=1}^m (x_i-\bar{x})^2}
\end{array}$$
Example computation
Let $x_k \sim N(\mu,\sigma^2)$ and independent relative frequencies $y_k \sim Uniform(a,b)$ from which we compute the normalised frequencies $\pi_k = \frac{y_k}{\sum y_k}$. Say we have 10 000 items distributed in this way, and in order to estimate $\sum_{i=1}^{10000} x_i\pi_i$ we sample an item 5 000 times (with replacement).
With a simulation we can see that there can be a difference in the error with the classical estimate and the alternative estimate, with the latter being closer to zero (see the sharper distribution):
### number of repetitions
r <- 10000
### function to create a distribution with 10 000 items
items <- function(mu = 0, sigma = 1, a = 0, b = 1) {
x <- rnorm(10000,mu,sigma)
y <- runif(10000,a,b)
p <- y/sum(y)
return(list(x=x,p=p))
}
### vectors to store results
v_mu <- rep(0,r)
v_est1 <- rep(0,r)
v_est2 <- rep(0,r)
### repeat estimation several times
set.seed(1)
for (trial in 1:r) {
### create distribution
example <- items(a=1,b=1.5)
### true mean
mu <- sum(example$x*example$p)
### sample 5000 items
k <- sample(1:10000, 5000, replace = TRUE, prob = example$p)
uniq <- as.numeric(names(table(k)))   ### labels of the distinct items observed
### traditional estimate
est1 <- mean(example$x[k])
### alternative estimate
est2 <- mean(example$x[uniq])
### store results
v_mu[trial] <- mu
v_est1[trial] <- est1
v_est2[trial] <- est2
}
### plotting
h1 <- hist(v_est1-mu, breaks = seq(-0.2,0.2,0.005))
h2 <- hist(v_est2-mu, breaks = seq(-0.2,0.2,0.005))
plot(h2$mids,(h2$density),type="l", log = "",
xlab = "error of estimate", ylab = "density", xlim = c(-1,1)*0.15)
lines(h1$mids,(h1$density),lty = 2)
legend(-0.15,25, c("with repetitions","without repetitions"),
lty = c(2,1),cex = 0.7)
Note that this effect will depend a lot on the particular distribution of $\pi$. In this example the unnormalised weights follow $U(1,1.5)$, so there is not much variation between the different $\pi_k$, and the variation in multiplicity is mostly noise rather than a reflection of true differences in $\pi_k$. You can change this (e.g. use $U(0,1)$ or an entirely different distribution) and then the effect becomes less pronounced, or even negative. Anyway, the example in this answer shows that there will be differences between estimators and that potential improvements can be made (but how you approach the estimation will depend a lot on knowledge of the particular underlying distribution).
|
Better confidence intervals for weighted average
|
estimate the variance of the estimator using the usual CLT-based approach.
...
Can I use this information to produce estimates with smaller confidence intervals?
Yes, you can. (This is true in genera
|
Better confidence intervals for weighted average
estimate the variance of the estimator using the usual CLT-based approach.
...
Can I use this information to produce estimates with smaller confidence intervals?
Yes, you can. (This is true in general. In many cases, you can do better than a normal approximation, especially when the distribution is not really a normal distribution but only approximately normal.)
How exactly you do this will depend on the situation.
It seems like you want to compute the average of the distribution of $x$ by taking a sample.
Classically your estimate will be based on a sample of size $n$ like $x_1, \dots x_n$, and then you compute the mean and standard error.
If the distribution of $x$ is assumed to be Gaussian (or approximately Gaussian, like most sample means are anyway), then you would use:
$$\begin{array}{}
\hat{\mu} &=& \bar{x} &=& \frac{1}{n} \sum_{i=1}^n x_i\\
\hat{\sigma}_\mu & =& \frac{1}{\sqrt{n}} s &=& \frac{1}{\sqrt{n}} \sqrt{\frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2}
\end{array}$$
But instead of the classical estimate of the error of the mean, you want to use some information about a special property of the data sampling which is that some of the items may occur multiple times.
The exact approach will depend on the type of joint distribution of $x_k,\pi_k$. But here we will show by means of two examples that indeed the estimates and the confidence interval can be treated differently.
Binomial distribution case
You might have a situation where there are only two items. Then the estimate of the mean boils down to estimation of the probability $p$ for the 1st item (and $1-p$ for the second item). And the estimate of the mean becomes
$$\hat{\mu} = x_1 \hat{p} + x_2 (1-\hat{p}) = x_2 + \hat{p} (x_1 - x_2)$$
Here the estimate $\hat{p}$ relates to estimating the parameter of a binomial distribution, whose standard error estimate is different from the usual standard error of the mean. In fact there is a large variety of approaches (https://en.m.wikipedia.org/wiki/Binomial_proportion_confidence_interval).
In this example you know all the $x_k$ because you assume that there are only two items. In reality you may have something more complex like $\pi$ being some parametric probability function/density/mass $f(x)$ telling you how probable a certain value (or range) $x$ is. And your estimate of the average of $x$ will boil down to being an estimate of the average of the distribution/function $\pi$. Depending on the type of distribution $\pi$ you will get different types of estimates and confidence intervals.
Independent $\pi$ and $x$
It could be that the items are distributed with $\pi$ and $x$ independent of each other. Your sample could have some item $k$ occurring multiple times, but this will be partly random/noisy behaviour that tells you little about the true weighted mean.
Because of the independence of $\pi$ and $x$ you will only be interested in the distribution of $x$ and not the $\pi$. So you can estimate the mean by only considering the $m$ unique items in the sample and not all the $n$ items (ie. you ignore multiplicity)
$$\begin{array}{}
\hat{\mu} &=& \bar{x} &=& \frac{1}{m} \sum_{i=1}^m x_i\\
\hat{\sigma}_\mu & =& \frac{1}{\sqrt{m}} s &=& \frac{1}{\sqrt{m}} \sqrt{\frac{1}{m-1}\sum_{i=1}^m (x_i-\bar{x})^2}
\end{array}$$
Example computation
Let $x_k \sim N(\mu,\sigma^2)$ and independent relative frequencies $y_k \sim Uniform(a,b)$ from which we compute the normalised frequencies $\pi_k = \frac{y_k}{\sum y_k}$. Say we have 10 000 items distributed in this way, and in order to estimate $\sum_{i=1}^{10000} x_i\pi_i$ we sample an item 5 000 times (with replacement).
With a simulation we can see that there can be a difference in the error with the classical estimate and the alternative estimate, with the latter being closer to zero (see the sharper distribution):
### number of repetitions
r <- 10000
### function to create a distribution with 10 000 items
items <- function(mu = 0, sigma = 1, a = 0, b = 1) {
x <- rnorm(10000,mu,sigma)
y <- runif(10000,a,b)
p <- y/sum(y)
return(list(x=x,p=p))
}
### vectors to store results
v_mu <- rep(0,r)
v_est1 <- rep(0,r)
v_est2 <- rep(0,r)
### repeat estimation several times
set.seed(1)
for (trial in 1:r) {
### create distribution
example <- items(a=1,b=1.5)
### true mean
mu <- sum(example$x*example$p)
### sample 5000 items
k <- sample(1:10000, 5000, replace = TRUE, prob = example$p)
uniq <- as.numeric(names(table(k)))   ### labels of the distinct items observed
### traditional estimate
est1 <- mean(example$x[k])
### alternative estimate
est2 <- mean(example$x[uniq])
### store results
v_mu[trial] <- mu
v_est1[trial] <- est1
v_est2[trial] <- est2
}
### plotting
h1 <- hist(v_est1-mu, breaks = seq(-0.2,0.2,0.005))
h2 <- hist(v_est2-mu, breaks = seq(-0.2,0.2,0.005))
plot(h2$mids,(h2$density),type="l", log = "",
xlab = "error of estimate", ylab = "density", xlim = c(-1,1)*0.15)
lines(h1$mids,(h1$density),lty = 2)
legend(-0.15,25, c("with repetitions","without repetitions"),
lty = c(2,1),cex = 0.7)
Note that this effect will depend a lot on the particular distribution of $\pi$. In this example the unnormalised weights follow $U(1,1.5)$, so there is not much variation between the different $\pi_k$, and the variation in multiplicity is mostly noise rather than a reflection of true differences in $\pi_k$. You can change this (e.g. use $U(0,1)$ or an entirely different distribution) and then the effect becomes less pronounced, or even negative. Anyway, the example in this answer shows that there will be differences between estimators and that potential improvements can be made (but how you approach the estimation will depend a lot on knowledge of the particular underlying distribution).
|
Better confidence intervals for weighted average
estimate the variance of the estimator using the usual CLT-based approach.
...
Can I use this information to produce estimates with smaller confidence intervals?
Yes, you can. (This is true in genera
|
48,993 |
Using n linear regression models for different subsets vs using one model for the entire dataset
|
Are there any mathematical drawbacks with this idea?
Yes, you will lose statistical power as well as running into multiple testing problems. Don't split the dataset.
How do I compare the resulting n models to my other model? I am unsure on how to interpret the n resulting metrics (i.e., a cross-validated r squared score) to those of my other model.
You won't have to if you don't split the dataset.
Is there a python/sklearn implementation of my idea? Right now I manually split the input depending on the value of c when training and evaluating the models.
Probably. It's the type of mistaken thing people do in data science, and Python is used a lot in data science. However, here we try to guide people into a better way of doing things.
Fit a single model on the entire dataset. Include the variable c as a fixed effect and also fit interactions between this variable and your other variables.
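A minimal R sketch of this suggestion, with hypothetical names (outcome y, predictors x1 and x2, grouping variable c in data frame df):
df$c <- factor(df$c)
### one model for all the data: separate slopes and intercepts per level of c
fit_inter  <- lm(y ~ (x1 + x2) * c, data = df)
### common slopes, shifted intercepts only
fit_pooled <- lm(y ~ x1 + x2 + c, data = df)
### does letting the slopes differ by c improve the fit?
anova(fit_pooled, fit_inter)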
|
Using n linear regression models for different subsets vs using one model for the entire dataset
|
Are there any mathematical drawbacks with this idea?
Yes, you will lose statistical power as well as running into multiple testing problems. Don't split the dataset.
How do I compare the resulting
|
Using n linear regression models for different subsets vs using one model for the entire dataset
Are there any mathematical drawbacks with this idea?
Yes, you will lose statistical power as well as running into multiple testing problems. Don't split the dataset.
How do I compare the resulting n models to my other model? I am unsure on how to interpret the n resulting metrics (i.e., a cross-validated r squared score) to those of my other model.
You won't have to if you don't split the dataset.
Is there a python/sklearn implementation of my idea? Right now I manually split the input depending on the value of c when training and evaluating the models.
Probably. It's the type of mistaken thing people do in data science, and Python is used a lot in data science. However, here we try to guide people into a better way of doing things.
Fit a single model on the entire dataset. Include the variable c as a fixed effect and also fit interactions between this variable and your other variables.
|
Using n linear regression models for different subsets vs using one model for the entire dataset
Are there any mathematical drawbacks with this idea?
Yes, you will lose statistical power as well as running into multiple testing problems. Don't split the dataset.
How do I compare the resulting
|
48,994 |
Interpreting a zero-inflation negative binomial model
|
What does the zero-inflation model actually represent?
this is a model for the occurrence of non-zeros vs zeros. It can be interpreted in the same way as a logistic regression model where success means a non-zero count and you are modelling the probability of obtaining a non-zero count.
Are these p-values sufficient in interpreting the model or do further statistical tests of significance need to be completed in order to infer relationships between variables?
Try to interpret the coefficient estimates, not the p values, but yes, the p values can be interpreted, as the probability of observing these data, or data more extreme, if the null hypothesis is true. That is, each p value relates to a specific test of a specific null hypothesis and that is the only context in which you can interpret p values.
With this type of model how would you go about determining the significance of the random effects? With non-zero inflated models I am able to do this by using the anova() function to compare a model with and without a particular random effect however when I tried to do this only one p-value is generated. As such I am not sure if this pertains to either the conditional or the zero-inflation model.
Again, don't worry too much about p values from these tests. You have repeated measures and therefore you are accounting for this using random intercepts. It is sufficient to report the variance of these random intercepts. In your case you can note that the variance of one of these variance components in both parts of the model is small in comparison to the other. Having said that, it is good to seek a parsimonious model, so if you have reason to believe that there should not be any correlation within either of your grouping variables for either part of the model, then you can remove the corresponding random term from the model and perform a likelihood ratio test in the same way as you do with a model without zero inflation - note that you have 2 parts to the model that include random effects: the main part and the ziformula part.
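For example, assuming a glmmTMB-style model (the names y, x, g1, g2 and dat are hypothetical), you could drop one random intercept from the zero-inflation part only and compare the fits with a likelihood ratio test:
library(glmmTMB)
full <- glmmTMB(y ~ x + (1 | g1) + (1 | g2),
                ziformula = ~ x + (1 | g1) + (1 | g2),
                family = nbinom2, data = dat)
### same conditional model, but without the g2 random intercept in the zi part
red  <- glmmTMB(y ~ x + (1 | g1) + (1 | g2),
                ziformula = ~ x + (1 | g1),
                family = nbinom2, data = dat)
anova(red, full)   ### likelihood ratio test for that single variance component
Bear in mind that a likelihood ratio test of a variance component sits on the boundary of the parameter space, so the resulting p value tends to be conservative.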
|
Interpreting a zero-inflation negative binomial model
|
What does the zero-inflation model actually represent?
this is a model for the occurrence of non-zeros vs zeros. It can be interpreted in the same way as a logistic regression model where success mea
|
Interpreting a zero-inflation negative binomial model
What does the zero-inflation model actually represent?
this is a model for the occurrence of non-zeros vs zeros. It can be interpreted in the same way as a logistic regression model where success means a non-zero count and you are modelling the probability of obtaining a non-zero count.
Are these p-values sufficient in interpreting the model or do further statistical tests of significance need to be completed in order to infer relationships between variables?
Try to interpret the coefficient estimates, not the p values, but yes, the p values can be interpreted, as the probability of observing these data, or data more extreme, if the null hypothesis is true. That is, each p value relates to a specific test of a specific null hypothesis and that is the only context in which you can interpret p values.
With this type of model how would you go about determining the significance of the random effects? With non-zero inflated models I am able to do this by using the anova() function to compare a model with and without a particular random effect however when I tried to do this only one p-value is generated. As such I am not sure if this pertains to either the conditional or the zero-inflation model.
Again, don't worry too much about p values from these tests. You have repeated measures and therefore you are accounting for this using random intercepts. It is sufficient to report the variance of these random intercepts. In your case you can note that the variance of one of these variance components in both parts of the model is small in comparison to the other. Having said that, it is good to seek a parsimonious model, so if you have reason to believe that there should not be any correlation within either of your grouping variables for either part of the model, then you can remove the corresponding random term from the model and perform a likelihood ratio test in the same way as you do with a model without zero inflation - note that you have 2 parts to the model that include random effects: the main part and the ziformula part.
|
Interpreting a zero-inflation negative binomial model
What does the zero-inflation model actually represent?
this is a model for the occurrence of non-zeros vs zeros. It can be interpreted in the same way as a logistic regression model where success mea
|
48,995 |
Does Linear regression needs target variable to be normally distributed. (GLM context)?
|
It depends on what you’re doing. If you just want to predict, then it doesn’t matter, and the Gauss-Markov theorem does not say anything about a normal error term.
However, when the error term is normal, then the OLS estimator $\hat{\beta}$ is the maximum likelihood estimator. If you don’t know about MLEs, you’ll see them over and over as you dive into statistics, but maximum likelihood is a nice property for many reasons.
Among those reasons is that the inferential methods like p-values on coefficients and F-tests of nested models come into play.
So if you want to do some kind of ANOVA, for example, the normality of the error term matters because you’re doing hypothesis testing, not prediction.
The pooled distribution of the response variable (all of your $y$s) definitely does not have to be normal, even to get that maximum likelihood property and do inference, and the predictor variables definitely don’t have to be normal. Predictors often cannot be normal, such as when they are categorical variables e.g. male/female, treatment/control, etc.
EDIT
We often talk about normal residuals. This is casual language, and experienced statisticians know what is meant, but the observed residuals are a finite set of numbers (subject to linear constraints), so they cannot themselves be exactly normally distributed. What we assume is a normal error term, and we use the residuals to gauge whether that is a good assumption or not.
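A minimal sketch of how the residuals are typically used to gauge that assumption (hypothetical data frame dat with response y and predictor x):
fit <- lm(y ~ x, data = dat)
qqnorm(residuals(fit)); qqline(residuals(fit))   ### normal quantile plot of the residuals
hist(residuals(fit))                             ### rough check of symmetry and tails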
|
Does Linear regression needs target variable to be normally distributed. (GLM context)?
|
It depends on what you’re doing. If you just want to predict, then it doesn’t matter, and the Gauss-Markov theorem does not say anything about a normal error term.
However, when the error term is norm
|
Does Linear regression needs target variable to be normally distributed. (GLM context)?
It depends on what you’re doing. If you just want to predict, then it doesn’t matter, and the Gauss-Markov theorem does not say anything about a normal error term.
However, when the error term is normal, then the OLS estimator $\hat{\beta}$ is the maximum likelihood estimator. If you don’t know about MLEs, you’ll see them over and over as you dive into statistics, but maximum likelihood is a nice property for many reasons.
Among those reasons is that the inferential methods like p-values on coefficients and F-tests of nested models come into play.
So if you want to do some kind of ANOVA, for example, the normality of the error term matters because you’re doing hypothesis testing, not prediction.
The pooled distribution of the response variable (all of your $y$s) definitely does not have to be normal, even to get that maximum likelihood property and do inference, and the predictor variables definitely don’t have to be normal. Predictors often cannot be normal, such as when they are categorical variables e.g. male/female, treatment/control, etc.
EDIT
We often talk about normal residuals. This is casual language, and experienced statisticians know what is meant, but the observed residuals are a finite set of numbers (subject to linear constraints), so they cannot themselves be exactly normally distributed. What we assume is a normal error term, and we use the residuals to gauge whether that is a good assumption or not.
|
Does Linear regression needs target variable to be normally distributed. (GLM context)?
It depends on what you’re doing. If you just want to predict, then it doesn’t matter, and the Gauss-Markov theorem does not say anything about a normal error term.
However, when the error term is norm
|
48,996 |
Does Linear regression needs target variable to be normally distributed. (GLM context)?
|
The only normality assumption in linear regression, if you intend to do any testing, is that the errors be normally distributed (which you assess via the residuals). In simple linear regression with only one variable in the model, this means the response is normally distributed conditional on the predictor; it does not require the independent variable itself to be normally distributed. The same holds in multiple regression: it is the error term, not the predictors, that must be normally distributed.
|
Does Linear regression needs target variable to be normally distributed. (GLM context)?
|
The only normality assumption in linear regression if you intend to do any testing is that the residuals be normally distributed. In simple linear regression with only one variable in the model, this
|
Does Linear regression needs target variable to be normally distributed. (GLM context)?
The only normality assumption in linear regression, if you intend to do any testing, is that the errors be normally distributed (which you assess via the residuals). In simple linear regression with only one variable in the model, this means the response is normally distributed conditional on the predictor; it does not require the independent variable itself to be normally distributed. The same holds in multiple regression: it is the error term, not the predictors, that must be normally distributed.
|
Does Linear regression needs target variable to be normally distributed. (GLM context)?
The only normality assumption in linear regression if you intend to do any testing is that the residuals be normally distributed. In simple linear regression with only one variable in the model, this
|
48,997 |
Hypothesis testing for logistic regression with categorical predictor
|
In R you can use the glht() (general linear hypotheses) function in the multcomp package to define the contrasts of interest and test them, though I would advise against much reliance on p-value "significance".
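A hedged sketch of what that looks like for a logistic regression with a categorical predictor (the data frame dat, response y and factor group are hypothetical, as are the level names):
library(multcomp)
fit <- glm(y ~ group, family = binomial, data = dat)   ### group must be a factor
### all pairwise comparisons of the group levels on the log-odds scale
summary(glht(fit, linfct = mcp(group = "Tukey")))
### or a single user-defined contrast, e.g. level B versus level C
summary(glht(fit, linfct = mcp(group = c("B - C = 0"))))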
|
Hypothesis testing for logistic regression with categorical predictor
|
In R you can use the glht() (general linear hypotheses) function in the multcomp package to define the contrasts of interest and test them, though I would advise against much reliance on p-value "si
|
Hypothesis testing for logistic regression with categorical predictor
In R you can use the glht() (general linear hypotheses) function in the multcomp package to define the contrasts of interest and test them, though I would advise against much reliance on p-value "significance".
|
Hypothesis testing for logistic regression with categorical predictor
In R you can use the glht() (general linear hypotheses) function in the multcomp package to define the contrasts of interest and test them, though I would advise against much reliance on p-value "si
|
48,998 |
Hypothesis testing for logistic regression with categorical predictor
|
Use the likelihood ratio test (LRT) to compare the different β's as described here: https://courses.washington.edu/b515/l13.pdf
pp. 20-24. And if you don't follow it, start from page 1.
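A minimal R sketch of such a likelihood ratio test for a categorical predictor (hypothetical names y, group and dat):
fit0 <- glm(y ~ 1,     family = binomial, data = dat)   ### null model
fit1 <- glm(y ~ group, family = binomial, data = dat)   ### model with the categorical predictor
anova(fit0, fit1, test = "Chisq")   ### likelihood ratio (chi-squared) test of all group coefficients at once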
|
Hypothesis testing for logistic regression with categorical predictor
|
Use the likelihood ratio test (LRT) to compare the different β's as described here: https://courses.washington.edu/b515/l13.pdf
pp. 20-24. And if you don't follow it, start from page 1.
|
Hypothesis testing for logistic regression with categorical predictor
Use the likelihood ratio test (LRT) to compare the different β's as described here: https://courses.washington.edu/b515/l13.pdf
pp. 20-24. And if you don't follow it, start from page 1.
|
Hypothesis testing for logistic regression with categorical predictor
Use the likelihood ratio test (LRT) to compare the different β's as described here: https://courses.washington.edu/b515/l13.pdf
pp. 20-24. And if you don't follow it, start from page 1.
|
48,999 |
Is the regressor (sometimes called "independent" variable) actually independent of the response from a probabilistic perspective?
|
The "dependent" and "independent" terminology for the variables is unfortunate terminology, which is best avoided. Statistical dependence is always bidirectional ---i.e., if a variable is statistically dependent on another variable, then that second variable is also statistically dependent with the first variable. In a regression model the two variables are posited to have a statistical relationship. We treat the explanatory (regressor) variables $\mathbf{x}$ as fixed and we model the regression function $u(\mathbf{x}) = \mathbb{E}(Y|\mathbf{x})$, which is the conditional expected value of the response (regressand) variable $Y$. See this related question for more discussion on the unfortunate terminology.
|
Is the regressor (sometimes called "independent" variable) actually independent of the response from
|
The "dependent" and "independent" terminology for the variables is unfortunate terminology, which is best avoided. Statistical dependence is always bidirectional ---i.e., if a variable is statistical
|
Is the regressor (sometimes called "independent" variable) actually independent of the response from a probabilistic perspective?
The "dependent" and "independent" terminology for the variables is unfortunate terminology, which is best avoided. Statistical dependence is always bidirectional ---i.e., if a variable is statistically dependent on another variable, then that second variable is also statistically dependent with the first variable. In a regression model the two variables are posited to have a statistical relationship. We treat the explanatory (regressor) variables $\mathbf{x}$ as fixed and we model the regression function $u(\mathbf{x}) = \mathbb{E}(Y|\mathbf{x})$, which is the conditional expected value of the response (regressand) variable $Y$. See this related question for more discussion on the unfortunate terminology.
|
Is the regressor (sometimes called "independent" variable) actually independent of the response from
The "dependent" and "independent" terminology for the variables is unfortunate terminology, which is best avoided. Statistical dependence is always bidirectional ---i.e., if a variable is statistical
|
49,000 |
Confidence interval example in Computer Age Statistical Inference
|
A confidence interval estimate relates to an interval that has $\alpha \%$ probability of containing the parameter, conditional on the parameter.
This contrasts with a credible interval, which has $\alpha \%$ probability of containing the parameter, conditional on the observation.
See the example from this question/answer where a comparison is made between credible intervals and confidence intervals for estimating a parameter $\theta$ based on observation $X$. For a given prior and probability density of the observations we can plot the frequency of the success of the interval as a function of the observation and as a function of the true parameter.
So you see on the left image that a 95% Confidence Interval does (in some sense) imply a 95% chance of containing the mean.
But, that is when you condition on the value $\theta$. Conditional on a given observation $X$ (the right side plot) it might not be true, and the confidence interval will have different chances for different observations.
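A minimal simulation sketch of this point (my own toy prior and model, not taken from the book): draw $\theta$ from a prior, draw one observation $X\sim N(\theta,1)$, and look at the coverage of the standard interval $X\pm1.96$ overall and within bins of the observation:
set.seed(1)
nsim  <- 1e5
theta <- rnorm(nsim, 0, 3)       ### parameter drawn from a prior
x     <- rnorm(nsim, theta, 1)   ### one observation per draw
covered <- abs(theta - x) < 1.96 ### does X +/- 1.96 contain theta?
mean(covered)                    ### about 0.95 overall, and for any fixed theta
### coverage conditional on the observation drifts away from 0.95 in the tails
tapply(covered, cut(x, breaks = c(-Inf, -6, -3, 3, 6, Inf)), mean)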
|
Confidence interval example in Computer Age Statistical Inference
|
A confidence interval estimate relates to an interval that has $\alpha \%$ probability to contain the parameter, conditional on the parameter.
This contrasts with a credible interval, which has $\alph
|
Confidence interval example in Computer Age Statistical Inference
A confidence interval estimate relates to an interval that has $\alpha \%$ probability of containing the parameter, conditional on the parameter.
This contrasts with a credible interval, which has $\alpha \%$ probability of containing the parameter, conditional on the observation.
See the example from this question/answer where a comparison is made between credible intervals and confidence intervals for estimating a parameter $\theta$ based on observation $X$. For a given prior and probability density of the observations we can plot the frequency of the success of the interval as a function of the observation and as a function of the true parameter.
So you see on the left image that a 95% Confidence Interval does (in some sense) imply a 95% chance of containing the mean.
But, that is when you condition on the value $\theta$. Conditional on a given observation $X$ (the right side plot) it might not be true, and the confidence interval will have different chances for different observations.
|
Confidence interval example in Computer Age Statistical Inference
A confidence interval estimate relates to an interval that has $\alpha \%$ probability to contain the parameter, conditional on the parameter.
This contrasts with a credible interval, which has $\alph
|