idx | question | answer
---|---|---
52,101 |
How can I establish an inequality between $|\frac1n \sum_{i=1}^nX_i|$ and $\frac1n\sum^n_{i=1}|X_i|$ where $X_i \sim N(0,1)$?
|
Write $Y_1=\left|\frac1n\sum_{i=1}^nX_i\right|$ and $Y_2=\frac1n\sum_{i=1}^n|X_i|$. Assuming independent $X_i$, the mean $\frac{1}{n}\sum X_i$ is also normal, namely $N(0,1/n)$. Its absolute value is half-normal, which has mean $E[Y_1]=\frac{\sigma\sqrt{2}}{\sqrt{\pi}}=\sqrt{\frac{2}{n\pi}}$ with $\sigma = 1/\sqrt{n}$. For $Y_2$ we can find the expected value directly:
$$E[Y_2]=\frac{1}{n}\sum_{i=1}^n E[|X_i|]=E[|X_i|]=\sqrt{\frac{2}{\pi}}$$
This means $\sqrt{n}\,E[Y_1]=E[Y_2]$. I think an equality is better than an inequality.
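A quick Monte Carlo check of this equality (a minimal sketch in R; the sample size, seed, and number of replications are arbitrary choices):
# Simulate Y1 = |mean(X)| and Y2 = mean(|X|) for X_i ~ N(0,1) and compare
# sqrt(n) * E[Y1] with E[Y2]; both should be close to sqrt(2/pi) ~ 0.7979.
set.seed(1)
n    <- 50       # sample size (arbitrary)
reps <- 1e5      # number of Monte Carlo replications (arbitrary)
Y1 <- replicate(reps, abs(mean(rnorm(n))))
Y2 <- replicate(reps, mean(abs(rnorm(n))))
sqrt(n) * mean(Y1)   # approximately sqrt(2/pi)
mean(Y2)             # approximately sqrt(2/pi)
sqrt(2 / pi)         # theoretical value, about 0.7979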
|
52,102 |
How can I establish an inequality between $|\frac1n \sum_{i=1}^nX_i|$ and $\frac1n\sum^n_{i=1}|X_i|$ where $X_i \sim N(0,1)$?
|
Answer:
Whatever the distribution of $X_1,...,X_n$,
$$\mathbb{E} Y_2 \geq \mathbb{E} Y_1.$$
Details:
For any $n$ numbers $X_1,..., X_n$ it is true that
$$ \sum_i |X_i| \geq |\sum_i X_i|$$
and dividing both sides by $n$:
$$ \frac{1}{n}\sum_i |X_i| \geq \frac{1}{n}|\sum_i X_i| = |\frac{1}{n}\sum_i X_i|.$$
Now, the key word is 'any': because the inequality holds for every realisation of $X_1,\dots,X_n$, the integrand on the left-hand side is everywhere at least as large as the integrand on the right-hand side, so taking expectations preserves the inequality, i.e.
$$ \mathbb{E} Y_2 = \int \frac{1}{n}\sum_i |x_i|\, f(x_1,\dots,x_n)\, dx_1 \cdots dx_n \geq \int \Big|\frac{1}{n}\sum_i x_i\Big|\, f(x_1,\dots,x_n)\, dx_1\cdots dx_n = \mathbb{E} Y_1.$$
|
52,103 |
Probability that any two people have the same birthday?
|
Unfortunately, yes, there is a flaw. According to your purported formula, the probability of having two people with the same birthday when you only have $n=1$ person is:
$$P_1 = 1 - \Big( \frac{364}{365} \Big)^1 = 1 - \frac{364}{365} = \frac{1}{365} \neq 0.$$
So, you are ascribing a non-zero probability to an impossible event. Have a think about whether that is the correct formula, and what kind of change you might make to it.
|
52,104 |
Probability that any two people have the same birthday?
|
One way to find the probability of no birthday match in a room with $n=25$ people is shown in the Wikipedia link of my first Comment. Here is a slightly different way to write it:
$$P(\text{No Match}) = \frac{{}_{365}P_{25}}{365^{25}}
= \prod_{i=0}^{24}\left(1 - \frac{i}{365}\right) = 0.4313.$$
In R, this can be evaluated as follows. [In R, 0:24 is the vector
of the integers from 0 through 24; similarly for other uses of :.]
prod((365:(365-24))/365)
[1] 0.4313003
prod(1 - (0:24)/365)
[1] 0.4313003
prod(365:341)/365^25
[1] 0.4313003
So $P(\text{At least one match}) = 1 - 0.4313 = 0.5687.$
You can use R to make the first figure in the Wikipedia article as shown below.
The green line shows that for 23 people or more the probability of at
least one birthday match exceeds $1/2.$
n = 1:60
p = numeric(60)
for (i in n) {
  q = prod(1 - (0:(i-1))/365)
  p[i] = 1 - q
}
plot(n, p)
lines(c(0,23,23), c(.5,.5,0), col="green2")
Some people are surprised that matches occur with such high probability.
Maybe they are thinking that it would take 366 people in a room to be
sure of a match. But the graph shows that the probability does not
increase linearly with room size. So it is "nearly sure" (probability 0.9941) that there will be a
match in a room of only 60 people. And the probability of at least one
match is above 1/2 in a room of 23 people.
Here is a table of some of these 60 probabilities (truncated at 30):
cbind(n, p)
n p
[1,] 1 0.000000000
[2,] 2 0.002739726
[3,] 3 0.008204166
[4,] 4 0.016355912
[5,] 5 0.027135574
[6,] 6 0.040462484
[7,] 7 0.056235703
[8,] 8 0.074335292
[9,] 9 0.094623834
[10,] 10 0.116948178
[11,] 11 0.141141378
[12,] 12 0.167024789
[13,] 13 0.194410275
[14,] 14 0.223102512
[15,] 15 0.252901320
[16,] 16 0.283604005
[17,] 17 0.315007665
[18,] 18 0.346911418
[19,] 19 0.379118526
[20,] 20 0.411438384
[21,] 21 0.443688335
[22,] 22 0.475695308
[23,] 23 0.507297234 # first to exceed 1/2
[24,] 24 0.538344258
[25,] 25 0.568699704
[26,] 26 0.598240820
[27,] 27 0.626859282
[28,] 28 0.654461472
[29,] 29 0.680968537
[30,] 30 0.706316243
...
[60,] 60 0.994122661
Notes: I agree with @Ben (+1) that your equation doesn't work to get the probability of a match between
two randomly chosen people. However, suppose you're among the 25 people in a room; then with probability $1 -\left(\frac{364}{365}\right)^{24} = 0.0637$ at least one other person in the room will match your birthday.
Thus, another wrong 'intuitive' approach to the main birthday problem
above is to confuse the probability someone will match your birthday
with the larger probability that some two (or more) people will have matching birthdays. (Among 25 people there are ${25 \choose 2} = 300$ pairs of people who may have matching birthdays.)
Finally, this Q&A shows a method of simulating the probability of a birthday match. With a slight modification, that method can also be used
to find the expected number of matches.
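For reference, a simulation along those lines might look like this (a minimal sketch; the room size, seed, and number of replications are arbitrary choices, not taken from the linked Q&A):
# Simulate rooms of 25 people with uniformly random birthdays (ignoring Feb 29);
# estimate P(at least one match) and the expected number of matching pairs.
set.seed(2021)
n.rooms  <- 10000
n.people <- 25
matches <- replicate(n.rooms, {
  b <- sample(1:365, n.people, replace = TRUE)
  sum(choose(as.numeric(table(b)), 2))   # number of matching pairs in this room
})
mean(matches >= 1)          # estimate of P(at least one match); about 0.5687 in theory
mean(matches)               # estimated expected number of matching pairs
choose(n.people, 2) / 365   # theoretical expected number of pairs, 300/365 = 0.8219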
|
52,105 |
Why does linear regression use "vertical" distance to the best-fit-line, instead of actual distance? [duplicate]
|
Vertical distance is a "real distance". The distance from a given point to any point on the line is a "real distance". The question of how to fit the best regression line is which of the infinitely many possible distances makes the most sense for how we are thinking about our model. That is, any number of possible loss functions could be right; it depends on our situation, our data, and our goals (it may help you to read my answer to: What is the difference between linear regression on y with x and x with y?).
It is often the case that vertical distances make the most sense, though. This would be the case when we are thinking of $Y$ as a function of $X$, which would make sense in a true experiment where $X$ is randomly assigned and the values are independently manipulated, and $Y$ is measured as a response to that intervention. It can also make sense in a predictive setting, where we want to be able to predict values of $Y$ based on knowledge of $X$ and the predictive relationship that we establish. Then, when we want to make predictions about unknown $Y$ values in the future, we will know and be using $X$. In each of these cases, we are treating $X$ as fixed and known, and $Y$ is understood to be a function of $X$ in some sense. However, that mental model may not fit your situation, in which case you would need to use a different loss function. There is no absolute 'correct' distance irrespective of the situation.
|
52,106 |
Why does linear regression use "vertical" distance to the best-fit-line, instead of actual distance? [duplicate]
|
Summing up Michael Chernick's comment and gung's answer:
Both vertical and point (orthogonal) distances are "real" - it all depends on the situation.
Ordinary linear regression assumes the $X$ values are known and the only error is in the $Y$'s. That is often a reasonable assumption.
If you assume error in the $X$'s as well, you get what is called Deming regression, which fits the line by point distances.
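As an illustration, the sketch below contrasts the two fits on made-up data. It uses the equal-error-variance special case of Deming regression (orthogonal regression), computed from the first principal component with base R rather than a dedicated Deming package; all data and parameter values are invented for the example:
# Made-up data with noise in both x and y.
set.seed(123)
x     <- rnorm(100)
y     <- 1 + 2 * x + rnorm(100, sd = 0.5)
x.obs <- x + rnorm(100, sd = 0.5)       # x is also measured with error

# Ordinary least squares: minimises vertical distances, errors only in y.
ols <- coef(lm(y ~ x.obs))

# Orthogonal regression (Deming with equal error variances):
# the fitted line is the first principal component of (x.obs, y).
pc        <- prcomp(cbind(x.obs, y))
slope     <- pc$rotation[2, 1] / pc$rotation[1, 1]
intercept <- mean(y) - slope * mean(x.obs)

ols                   # OLS slope is attenuated by the measurement error in x
c(intercept, slope)   # orthogonal fit should be closer to the true slope of 2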
|
52,107 |
Maximum likelihood estimator of $n$ when $X \sim \mathrm{Bin}(n,p)$
|
In this answer to the question,
$$\dfrac{{n+1 \choose x}p^x(1-p)^{n+1-x}}{{n \choose x}p^x(1-p)^{n-x}} = \dfrac{n+1}{n+1-x}(1-p)$$
represents the likelihood ratio
$$\frac{L(n+1\mid x,p)}{L(n\mid x,p)}$$
If this ratio is larger than one (1), then $${L(n+1\mid x,p)}>{L(n\mid x,p)},$$ i.e. the likelihood increases, and if it is smaller than one (1), then $${L(n+1\mid x,p)}<{L(n\mid x,p)},$$ i.e. the likelihood decreases.
To find the maximum likelihood estimator of $n\in\Bbb N$, one need only find the integer value of $n$ at which the ratio crosses one, since
$$\frac{L(n+1\mid x,p)}{L(n\mid x,p)}=\frac{1-p}{1-\dfrac{x}{n+1}}$$
is decreasing in $n\in\Bbb N$. To wit,
$n\mapsto x/(n+1)$ is decreasing,
hence $n\mapsto 1-x/(n+1)$ is increasing,
hence $n\mapsto 1/\{1-x/(n+1)\}$ is decreasing.
Hence, if $x/(n+1)>p$, i.e., if $n+1<x/p$, then ${L(n+1\mid x,p)}>{L(n\mid x,p)}$, while if $x/(n+1)<p$, i.e., if $n+1>x/p$, then ${L(n+1\mid x,p)}<{L(n\mid x,p)}$. This means that
$${L(\lfloor x/p\rfloor-1\mid x,p)}<{L(\lfloor x/p\rfloor\mid x,p)}>{L(\lfloor x/p\rfloor+1\mid x,p)},$$
so the maximum likelihood estimator is $\hat n=\lfloor x/p\rfloor$ (when $x/p$ is an integer, $n=x/p-1$ and $n=x/p$ give the same likelihood).
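A quick numerical check of this conclusion (a minimal sketch in R; the values of $x$ and $p$ are arbitrary):
# Evaluate the binomial likelihood L(n | x, p) = dbinom(x, n, p) over a grid of n
# and confirm that the maximiser is floor(x/p).
x <- 7
p <- 0.3
n.grid <- x:200
L <- dbinom(x, size = n.grid, prob = p)
n.grid[which.max(L)]   # maximum likelihood estimate of n from the grid search
floor(x / p)           # floor(7 / 0.3) = 23, matching the grid search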
|
52,108 |
Maximum likelihood estimator of $n$ when $X \sim \mathrm{Bin}(n,p)$
|
As Xi'an correctly points out, this is a maximisation problem over integers, not real numbers. The objective function is quasi-concave, so we can obtain the maximising value by finding the point at which the (forward) likelihood ratio first drops below one. His answer shows you how to do this, and I have nothing to add to that excellent explanation. However, it is worth noting that discrete optimisation problems like this can also be solved by solving the corresponding optimisation problem in the reals, and then considering the discrete argument points around the real optima.
Alternative optimisation method: In this particular problem it is also possible to obtain the answer via consideration of the corresponding maximisation problem over the reals. To do this, suppose we generalise the binomial likelihood function to allow non-integer values of $n$, while preserving its quasi-concavity:
$$L_x(n) = \frac{\Gamma(n+1)}{\Gamma(n-x+1)} (1-p)^{n-x}
\quad \quad \quad
\text{for all real } n \geqslant x.$$
This objective function is a generalisation of the binomial likelihood function ---i.e., in the special case where $n \in \mathbb{N}$ it simplifies to the binomial likelihood function you are considering. The log-likelihood function is:
$$\ell_x(n) = \ln \Gamma (n+1) - \ln \Gamma (n-x+1) + (n-x) \ln (1-p).$$
The derivatives with respect to $n$ are:
$$\begin{equation} \begin{aligned}
\frac{d \ell_x}{dn}(n)
&= \psi (n+1) - \psi (n-x+1) + \ln (1-p) \\[10pt]
&= \sum_{i=1}^x \frac{1}{n-x+i} + \ln (1-p), \\[10pt]
\frac{d^2 \ell_x}{d n^2}(n)
&= - \sum_{i=1}^x \frac{1}{(n-x+i)^2} < 0. \\[10pt]
\end{aligned} \end{equation}$$
(The first derivative here uses the digamma function.) We can see from this result that the log-likelihood is a strictly concave function, which means the likelihood is strictly quasi-concave. The MLE for $n$ occurs at the unique critical point of the function, which gives an implicit function for the real MLE $\hat{n}$. It is possible to establish that $x/p-1 \leqslant \hat{n} \leqslant x/p$ (see below). This narrows down the discrete MLE to be the unique point in this interval if $x/p \notin \mathbb{N}$, or one of the two boundary points if $x/p \in \mathbb{N}$. This gives an alternative derivation of the maximising value in the discrete case.
Establishing the inequalities: We have already established that the score function (first derivative of the log-likelihood) is a decreasing function. The critical point occurs at the unique point where this function crosses the zero line. To establish the inequalities, it is therefore sufficient to show that the score is non-negative at the argument value $n = x/p-1$ and non-positive at the argument value $n = x/p$.
The first of these two inequalities is established as follows:
$$\begin{equation} \begin{aligned}
\frac{d \ell_x}{dn}(x/p-1)
&= \sum_{i=1}^x \frac{1}{x/p-1-x+i} + \ln (1-p) \\[10pt]
&= \sum_{i=1}^x \frac{p}{x(1-p)+(i-1)p} + \ln (1-p) \\[10pt]
&= \sum_{i=0}^{x-1} \frac{p}{x(1-p)+ip} + \ln (1-p) \\[10pt]
&\geqslant \int \limits_0^x \frac{p}{x(1-p)+ip} \ di + \ln (1-p) \\[10pt]
&= \Bigg[ \ln(x(1-p)+ip) \Bigg]_{i=0}^{i=x} + \ln (1-p) \\[10pt]
&= \ln(x) - \ln(x(1-p)) + \ln (1-p) \\[10pt]
&= \ln(x) - \ln(x) - \ln(1-p) + \ln (1-p) = 0. \\[10pt]
\end{aligned} \end{equation}$$
The second inequality is established as follows:
$$\begin{equation} \begin{aligned}
\frac{d \ell_x}{dn}(x/p)
&= \sum_{i=1}^x \frac{1}{x/p -x+i} + \ln (1-p) \\[10pt]
&= \sum_{i=1}^x \frac{p}{x(1-p)+ip} + \ln (1-p) \\[10pt]
&\leqslant \int \limits_0^x \frac{p}{x(1-p)+ip} \ di + \ln (1-p) \\[10pt]
&= \Bigg[ \ln(x(1-p)+ip) \Bigg]_{i=0}^{i=x} + \ln (1-p) \\[10pt]
&= \ln(x) - \ln(x(1-p)) + \ln (1-p) \\[10pt]
&= \ln(x) - \ln(x) - \ln(1-p) + \ln (1-p) = 0. \\[10pt]
\end{aligned} \end{equation}$$
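As a numerical illustration of this approach (a sketch in R with arbitrary values of $x$ and $p$), one can solve the score equation with the digamma function and confirm that the real-valued maximiser lies in $[x/p-1,\ x/p]$:
# Score (first derivative of the generalised log-likelihood) as a function of real n:
#   d l / d n = digamma(n + 1) - digamma(n - x + 1) + log(1 - p)
x <- 7
p <- 0.3
score <- function(n) digamma(n + 1) - digamma(n - x + 1) + log(1 - p)

# The score is decreasing, non-negative at x/p - 1 and non-positive at x/p,
# so its root is bracketed by that interval.
n.hat <- uniroot(score, interval = c(x / p - 1, x / p))$root
n.hat                 # real-valued maximiser, between x/p - 1 and x/p
c(x / p - 1, x / p)   # bracketing interval
floor(x / p)          # discrete MLE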
|
52,109 |
Maximum likelihood estimator of $n$ when $X \sim \mathrm{Bin}(n,p)$
|
The binomial PMF is a discrete function of $n$, say $f(n)$, given the other quantities $x$ and $p$. We want to maximize it in terms of $n$. Typically, we would take the derivative and equate it to zero, but in discrete cases we shouldn't do that. This PMF is known to have its peak value(s) around its mean (not exactly, but close). Its graph first increases and then decreases; sometimes it stays constant for a little while before decreasing. Therefore, the answer there considers the ratio $f(n+1)/f(n)$ and checks whether it is greater or less than $1$. When the ratio is less than $1$, it means $f(n)$ is in the decreasing region, and the boundary value of $n$ is a candidate for the ML estimate.
|
52,110 |
Fisher exact test on 4 x 3 table, with low count?
|
Having observed zeros is not an issue for a Fisher Exact test -- nor indeed is it a problem for a chi-squared test (it's not clear why you think this would be a difficulty with an exact test; if you can clarify the source of your concern, additional explanation/clarification may be possible). An entire row or column of zeros might potentially be an issue for some implementations, though it's pretty easy to deal with if it arises (e.g. by combining with an adjacent row or column -- or equivalently, by eliminating it altogether - either way I think the outcome should be the same as including it in this case).
One potential issue for you (not a problem for the test per se, but something that may cause you concern) is that entire columns or rows of low counts can make it difficult to obtain small p-values (the lowest attainable p-value may be greater than some common choices of significance level). This tends to be less of an issue with larger tables (I believe it doesn't pose a problem for the table in your question unless you require quite low significance levels).
The test extends in a fairly natural way to $r\times c$ tables, using the likelihood (under the null) as a criterion by which to order the tables (effectively, likelihood is the test statistic; lower likelihood = "more extreme" in relation to calculation of p-values). The Fisher Exact test on an $r\times c$ table is sometimes referred to as a Fisher-Freeman-Halton test [1].
Indeed, the help on the R function you mention makes it quite clear that it works on $r\times c$ tables, and gives several references that relate to the $r\times c$ case.
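For instance, a hypothetical $4\times 3$ table with low counts and zero cells can be passed directly to fisher.test (a minimal sketch; the counts are made up, not taken from the question):
# A made-up 4 x 3 table with low counts and zero cells.
tab <- matrix(c(2, 0, 5,
                1, 3, 0,
                0, 4, 2,
                6, 1, 1), nrow = 4, byrow = TRUE)
fisher.test(tab)   # Fisher-Freeman-Halton test on the r x c table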
[1] Freeman, G.H., & Halton, J.H. (1951). "Note on exact treatment of contingency, goodness-of-fit and other problems of significance." Biometrika, 38, 141-149.
|
52,111 |
Fisher exact test on 4 x 3 table, with low count?
|
In principle, Fisher's exact test can be used on tables of any size, with any entries. The only issue is whether it is computationally feasible, which is an issue when you are using large tables with large values. In this case the fisher.test function in R can comfortably handle the matrix you are using, and the system time for execution (on my PC) is so small it does not even register as a non-zero time.
#Input data and perform Fisher's exact test
x <- matrix(c(2, 15, 0, 17, 0, 27, 2, 15), byrow = TRUE, nrow = 4, ncol = 2);
fisher.test(x);
Fisher's Exact Test for Count Data
data: x
p-value = 0.1434
alternative hypothesis: two.sided
#Check system time for test
system.time(fisher.test(x));
user system elapsed
0 0 0
|
52,112 |
Specify contrasts for lme with interactions
|
If you look at the summary of your fixed effects portion of the model, you can label each row as follows:
Value Std.Error DF t-value p-value
beta0 (Intercept) 204417.8 109088.33 168 1.87387 0.0627
beta1 Exposure2 -192542.9 58653.05 168 -3.28274 0.0013
beta2 Exposure3 -332725.9 58653.05 168 -5.67278 0.0000
beta3 SexM -232599.7 104210.57 22 -2.23202 0.0361
beta4 GenotypeA -184772.3 125723.74 22 -1.46967 0.1558
beta5 GenotypeB -40073.2 128715.16 22 -0.31133 0.7585
beta6 Basal 1650.4 48.83 41 33.79680 0.0000
beta7 Exposure2:SexM 135000.3 58133.66 168 2.32224 0.0214
beta8 Exposure3:SexM 203637.4 58133.66 168 3.50292 0.0006
beta9 Exposure2:GenotypeA 106377.5 71472.21 168 1.48838 0.1385
beta10 Exposure3:GenotypeA 159548.1 71472.21 168 2.23231 0.0269
beta11 Exposure2:GenotypeB 101246.7 70397.58 168 1.43821 0.1522
beta12 Exposure3:GenotypeB 191111.2 70397.58 168 2.71474 0.0073
So the fixed effects portion of your model looks like this:
beta0 + beta1*Exposure2 + beta2*Exposure3 + beta3*SexM +
beta4*GenotypeA + beta5*GenotypeB + beta6*Basal +
beta7*Exposure2*SexM + beta8*Exposure3*SexM +
beta9*Exposure2*GenotypeA + beta10*Exposure3*GenotypeA +
beta11*Exposure2*GenotypeB + beta12*Exposure3*GenotypeB
where the symbol * denotes multiplication and all the beta's are true fixed effects (i.e., unknown but estimable from the data via the values listed in the Value column of your fixed effects summary).
If you are interested in describing the effect of Sex, for instance, all you have to do is to find all the terms in the model which include the dummy variable SexM (which is equal to 1 for Males and 0 for Females) and group them together. The coefficient of SexM obtained after this grouping denotes the effect of Sex:
(beta3 + beta7*Exp2 + beta8*Exp3)*SexM (1)
From the above, we can see that the effect of Sex depends on the value of Exp. Recall that Exp2 and Exp3 are dummy variables defined as follows: Exp2 = 1 if Exp = 2 and 0 otherwise; Exp3 = 1 if Exp = 3 and 0 otherwise. We can exploit this to spell out the effect of SexM for:
Exp = 1 (that is, for Exp2 = 0 and Exp3 = 0);
Exp = 2 (that is, for Exp2 = 1 and Exp3 = 0);
Exp = 3 (that is, for Exp2 = 0 and Exp3 = 1).
When Exp = 1, substituting Exp2 = 0 and Exp3 = 0 in expression (1) above yields that the coefficient of SexM is beta3. This is the effect of Sex on your outcome variable when Exp = 1, all else being equal.
When Exp = 2, substituting Exp2 = 1 and Exp3 = 0 in expression (1) above yields that the coefficient of SexM is beta3 + beta7. This is the effect of Sex on your outcome variable when Exp = 2, all else being equal.
When Exp = 3, substituting Exp2 = 0 and Exp3 = 1 in expression (1) above yields that the coefficient of SexM is beta3 + beta8. This is the effect of Sex on your outcome variable when Exp = 3, all else being equal.
So if you wanted to set up linear combinations of parameters which encapsulate the effect of SexM for each value of Exp, all you need to do is to specify these combinations so they reflect the parameters you are interested in: beta3, beta3 + beta7 and beta3 + beta8. (Again, the betas are true fixed effects, not estimated fixed effects.)
Since the fixed effects portion of your model includes the parameters beta0 through beta12, you are going to set up each combination via a row vector which includes 13 components. The first component of this vector corresponds to beta0, the second component corresponds to beta1, ..., the last component corresponds to beta12.
In R, beta3 is a combination of all the fixed effects model parameters with weights given by the components of the row vector c1:
c1 <- rep(0, 13)
names(c1) <- paste0("beta",0:12)
c1[names(c1)=="beta3"] <- 1
c1
In other words, beta3 = 0*beta0 + 0*beta1 + 0*beta2 + 1*beta3 + 0*beta4 + 0*beta5 + 0*beta6 + 0*beta7 + 0*beta8 + 0*beta9 + 0*beta10 + 0*beta11 + 0*beta12.
The linear combination of model parameters beta0 through beta12 that will encapsulate the parameter beta3 + beta7 is given by:
c2 <- rep(0, 13)
names(c2) <- paste0("beta",0:12)
c2[names(c2)=="beta3"] <- 1
c2[names(c2)=="beta7"] <- 1
c2
In other words, beta3 + beta7 = 0*beta0 + 0*beta1 + 0*beta2 + 1*beta3 + 0*beta4 + 0*beta5 + 0*beta6 + 1*beta7 + 0*beta8 + 0*beta9 + 0*beta10 + 0*beta11 + 0*beta12.
The linear combination of model parameters beta0 through beta12 that will encapsulate the parameter beta3 + beta8 is given by:
c3 <- rep(0, 13)
names(c3) <- paste0("beta",0:12)
c3[names(c3)=="beta3"] <- 1
c3[names(c3)=="beta8"] <- 1
c3
such that beta3 + beta8 = 0*beta0 + 0*beta1 + 0*beta2 + 1*beta3 + 0*beta4 + 0*beta5 + 0*beta6 + 0*beta7 + 1*beta8 + 0*beta9 + 0*beta10 + 0*beta11 + 0*beta12.
If you now want to simultaneously test hypotheses such as:
H0: beta3 = 0 versus Ha: beta3 != 0
H0: beta3 + beta7 = 0 versus Ha: beta3 + beta7 != 0
H0: beta3 + beta8 = 0 versus Ha: beta3 + beta8 != 0
you can achieve this by using the multcomp package in R. (Here, != 0 stands for "not equal to zero".) These hypotheses will enable you to determine whether Sex has an effect for any of the specific levels of Exp (i.e., if males differ from females with respect to the average response, all else being the same). This package can also be used to compute simultaneous confidence intervals for beta3, beta3 + beta7 and beta3 + beta8.
All you need to do to use multcomp in conjunction with your lme model is something like this:
library(multcomp)
c <- rbind(c1, c2, c3)
g <- glht(model, linfct = c)
s <- summary(g, test=adjusted("holm"))
s
ci <- confint(summary(g, test=adjusted("holm")))
ci
If you don't want to adjust p-values or confidence levels for multiplicity, just use adjusted("none") in the above.
Of course, there are R packages which will do all of the above for you with a minimal number of commands. But it helps to know how to test your own simple effects (e.g., beta3, beta3 + beta7, beta3 + beta8) when interested in probing a significant interaction.
In the approach I presented here, contrasts are specified via linear combinations of all model parameters. There are other ways to specify contrasts, which I will leave to others on this forum to address.
|
52,113 |
Specify contrasts for lme with interactions
|
You could also reparameterise your model and look at the parameter estimates.
Something like
model2 <- lme(AUC ~ Exposure - 1 + Exposure:Sex + Exposure:Genotype + Basal,
              random = ~ 1 | Date / Experiment / Cell,
              data = mydata)
then look at differences between sexes within Exposure categories, and do the same for Genotype.
You can also try Exposure nested within Sex and Genotype, BUT be cautious of inflated Type I error rates.
model3 <- lme(AUC ~ Sex/Exposure - 1 + Genotype/Exposure + Basal,
              random = ~ 1 | Date / Experiment / Cell,
              data = mydata)
|
52,114 |
Specify contrasts for lme with interactions
|
I think for the kind of questions you are asking, you want to use post-hoc comparisons, such as with emmeans, rather than trying to interpret the summary output.
I have sample code below. I created some data and used a simpler model, so obviously my results will be different than yours.
Because your model is complex, I recommend reading the documentation and vignettes for the emmeans package before relying on the results.
For example, for the "What if I wanted to compare exposure 2 to exposure 3?" question, you can do pairwise comparisons of all levels of Exposure.
For the "There is a statistically significant difference in response of M vs F both during exposure 2 and 3.", you can compare levels of Sex within each level of Exposure. Or you can do pairwise comparisons of each combination of Sex and Exposure. These results could be summarized in a compact letter display (cld).
if(!require(nlme)){install.packages("nlme")}
if(!require(car)){install.packages("car")}
if(!require(emmeans)){install.packages("emmeans")}
set.seed(1234)
Exposure = factor(rep(c("1", "2", "3"), 1, each=6))
Sex = factor(rep(c("F", "M"), 9))
Subject = factor(rep(letters[1:6],3))
AUC = as.numeric(Exposure) * 2 +
      as.numeric(Sex) +
      as.numeric(Exposure) * as.numeric(Sex) * 1.2
AUC = AUC + rnorm(length(Exposure), 0, 1)
Data = data.frame(Exposure, Sex, Subject, AUC)
str(Data)
library(nlme)
model = lme(AUC ~ Exposure * Sex, random=~1|Subject, data=Data)
library(car)
Anova(model)
### What if I wanted to compare exposure 2 to exposure 3?
library(emmeans)
marginal = emmeans(model, ~ Exposure)
pairs(marginal)
### NOTE: Results may be misleading due to involvement in interactions
### contrast estimate SE df t.ratio p.value
### 1 - 2 -3.334045 0.4925335 8 -6.769 0.0004
### 1 - 3 -7.595154 0.4925335 8 -15.421 <.0001
### 2 - 3 -4.261108 0.4925335 8 -8.651 0.0001
### There is a statistically significant difference in response of M vs F both during exposure 2 and 3.
marginal = emmeans(model, ~ Sex | Exposure)
pairs(marginal)
### Exposure = 1:
### contrast estimate SE df t.ratio p.value
### F - M -1.577096 0.7533083 4 -2.094 0.1044
###
### Exposure = 2:
### contrast estimate SE df t.ratio p.value
### F - M -3.127110 0.7533083 4 -4.151 0.0142
###
### Exposure = 3:
### contrast estimate SE df t.ratio p.value
### F - M -4.390249 0.7533083 4 -5.828 0.0043
### Or you could compare all levels of the interaction
marginal = emmeans(model, ~ Sex + Exposure)
pairs(marginal)
cld(marginal, Letters=letters)
### Sex Exposure emmean SE df lower.CL upper.CL .group
### F 1 4.302167 0.5326694 5 2.932896 5.671437 a
### M 1 5.879262 0.5326694 4 4.400335 7.358190 ab
### F 2 6.861205 0.5326694 5 5.491935 8.230475 bc
### M 2 9.988315 0.5326694 4 8.509387 11.467242 cd
### F 3 10.490744 0.5326694 5 9.121473 11.860014 d
### M 3 14.880993 0.5326694 4 13.402065 16.359920 e
###
### Confidence level used: 0.95
### P value adjustment: tukey method for comparing a family of 6 estimates
### significance level used: alpha = 0.05
|
52,115 |
how is deep learning integrated into the reinforcement learning
|
The core of Q-learning is to learn a function $Q$ which maps state-action pairs to the expected discounted future reward.
This function can be represented in a variety of ways
As a look-up table (each row containing state, action, and expected reward)
As a linear or nonlinear regression model mapping state-action to reward.
Deep Q-learning is simply an extension of representation 2 to using a much deeper and higher capacity regression model.
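To make the two representations concrete, here is a minimal, purely illustrative sketch (not from the original answer): a tabular Q-function next to a regression-based one. The state/action encoding and the use of scikit-learn's MLPRegressor are my own assumptions; a deep Q-network would simply use a much larger network in place of the small regressor.
# Minimal sketch: two ways to represent Q(state, action); illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPRegressor

n_states, n_actions = 10, 4

# Representation 1: a look-up table, one entry per (state, action) pair.
q_table = np.zeros((n_states, n_actions))
q_table[3, 2] = 1.5                      # store an expected discounted reward
print(q_table[3, 2])                     # read it back

# Representation 2: a regression model mapping (state, action) -> expected reward.
# Deep Q-learning replaces this with a much deeper, higher-capacity network.
X = np.array([[s, a] for s in range(n_states) for a in range(n_actions)], dtype=float)
y = np.random.rand(len(X))               # placeholder targets, for illustration only
q_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
q_model.fit(X, y)
print(q_model.predict([[3.0, 2.0]]))     # approximate Q(state=3, action=2)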
|
how is deep learning integrated into the reinforcement learning
|
The core of Q-learning is to learn a function $Q$ which maps state-action pairs to the expected discounted future reward.
This function can be represented in a variety of ways
As a look-up table (eac
|
how is deep learning integrated into the reinforcement learning
The core of Q-learning is to learn a function $Q$ which maps state-action pairs to the expected discounted future reward.
This function can be represented in a variety of ways
As a look-up table (each row containing state, action, and expected reward)
As a linear or nonlinear regression model mapping state-action to reward.
Deep Q-learning is simply an extension of representation 2 to using a much deeper and higher capacity regression model.
|
how is deep learning integrated into the reinforcement learning
The core of Q-learning is to learn a function $Q$ which maps state-action pairs to the expected discounted future reward.
This function can be represented in a variety of ways
As a look-up table (eac
|
52,116 |
how is deep learning integrated into the reinforcement learning
|
The input into the neural network is the current observation of the environment (for example, a screen shot of a game, or a list of values from some sensors).
The output from the neural network is a list of Q-values covering each of the choices that the agent can make (in Space Invaders for example, the list might be a vector of length 4, corresponding to "move left", "move right", "stop", "shoot").
So... how do you train the agent, starting with a randomised set of network weights? The algorithm goes something like this:
1. Input the environment observation into the network,
2. Store the observation and output Q-values in the GAME_MEMORY,
3. Make the agent act upon the highest Q-value, or take a random step depending on the value of a slowly degrading variable (epsilon),
4. Has game termination state been reached? If yes, get the game score and goto #5, otherwise goto #1.
5. Working backwards through the GAME_MEMORY (stepping from the last move to the first move) assign the game score value to the Q-value that was acted upon for each step, remembering to reduce the score value by some factor (discount_rate) for each step back through the history.
6. Add the GAME_MEMORY data to GAME_TRAINING_DATA. Clear GAME_MEMORY.
7. Have you played sufficient games to explore the state space properly? If yes, goto #8, otherwise goto #1.
8. Using the environment observations recorded in GAME_TRAINING_DATA as input and the now updated Q-values as output, train your network. For example, if you played 75 games and each one took 10 steps to reach a termination condition, you now have 750 pieces of data on which to train your network.**
9. Clear the GAME_TRAINING_DATA.
10. Do you think that your agent is as good as it is going to get? If so, goto #11, otherwise goto #1.
11. Your network is now trained and should allow an agent to input its observations and receive back a list of Q-values, the highest valued of which corresponds to the optimum decision for what it should do.*
Here is a minimal implementation of this algorithm. It needs Python 3.x, TensorFlow and Keras to run it.
*Naturally, this depends on whether you have allowed your agent to play enough games that it has properly explored the problem space, and on whether your network is designed so that it can properly learn and generalise from your observations...
**Some training regimes recommend using only a subset of the captured data for training, or randomising its order.
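The minimal implementation referred to above is not reproduced here. The following is only a rough, self-contained sketch of steps 1-10 under several assumptions of my own: a toy one-dimensional environment (ToyEnv), a small Keras network, a fixed epsilon rather than a slowly degrading one, and arbitrary values for discount_rate and the numbers of games. It is illustrative, not the answer's original code.
# Rough sketch of the algorithm above (assumptions noted in the text).
import random
import numpy as np
from tensorflow import keras

class ToyEnv:                                    # hypothetical stand-in for a real game
    def reset(self):
        self.pos, self.steps = 0, 0
        return np.array([self.pos], dtype=float)
    def step(self, action):                      # actions: 0 = left, 1 = right
        self.pos += 1 if action == 1 else -1
        self.steps += 1
        done = abs(self.pos) >= 3 or self.steps >= 10
        score = float(self.pos)                  # game score, used only at termination
        return np.array([self.pos], dtype=float), score, done

n_actions, discount_rate, epsilon = 2, 0.95, 0.3   # epsilon kept fixed here for brevity
model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(1,)),
    keras.layers.Dense(n_actions)                # one Q-value per possible action
])
model.compile(optimizer="adam", loss="mse")

env = ToyEnv()
for round_ in range(3):                          # outer loop over steps 1-9
    game_training_data = []                      # GAME_TRAINING_DATA
    for game in range(10):
        game_memory, obs, done = [], env.reset(), False
        while not done:                          # steps 1-4
            q_values = model.predict(obs[None, :], verbose=0)[0]
            action = (random.randrange(n_actions) if random.random() < epsilon
                      else int(np.argmax(q_values)))
            game_memory.append((obs, q_values, action))
            obs, score, done = env.step(action)
        target = score                           # step 5: propagate the score backwards
        for obs_t, q_t, a_t in reversed(game_memory):
            q_t = q_t.copy()
            q_t[a_t] = target                    # overwrite the acted-upon Q-value
            game_training_data.append((obs_t, q_t))
            target *= discount_rate              # discount for each step back in history
    X = np.array([o for o, _ in game_training_data])   # step 8: retrain on all data
    Y = np.array([q for _, q in game_training_data])
    model.fit(X, Y, epochs=20, verbose=0)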
|
how is deep learning integrated into the reinforcement learning
|
The input into the neural network is the current observation of the environment (for example, a screen shot of a game, or a list of values from some sensors).
The output from the neural network is a l
|
how is deep learning integrated into the reinforcement learning
The input into the neural network is the current observation of the environment (for example, a screen shot of a game, or a list of values from some sensors).
The output from the neural network is a list of Q-values covering each of the choices that the agent can make (in Space Invaders for example, the list might be a vector of length 4, corresponding to "move left", "move right", "stop", "shoot").
So... how do you train the agent, starting with a randomised set of network weights? The algorithm goes something like this:
1. Input the environment observation into the network,
2. Store the observation and output Q-values in the GAME_MEMORY,
3. Make the agent act upon the highest Q-value, or take a random step depending on the value of a slowly degrading variable (epsilon),
4. Has game termination state been reached? If yes, get the game score and goto #5, otherwise goto #1.
5. Working backwards through the GAME_MEMORY (stepping from the last move to the first move) assign the game score value to the Q-value that was acted upon for each step, remembering to reduce the score value by some factor (discount_rate) for each step back through the history.
6. Add the GAME_MEMORY data to GAME_TRAINING_DATA. Clear GAME_MEMORY.
7. Have you played sufficient games to explore the state space properly? If yes, goto #8, otherwise goto #1.
8. Using the environment observations recorded in GAME_TRAINING_DATA as input and the now updated Q-values as output, train your network. For example, if you played 75 games and each one took 10 steps to reach a termination condition, you now have 750 pieces of data on which to train your network.**
9. Clear the GAME_TRAINING_DATA.
10. Do you think that your agent is as good as it is going to get? If so, goto #11, otherwise goto #1.
11. Your network is now trained and should allow an agent to input its observations and receive back a list of Q-values, the highest valued of which corresponds to the optimum decision for what it should do.*
Here is a minimal implementation of this algorithm. It needs Python 3.x, TensorFlow and Keras to run it.
*Naturally, this depends on whether you have allowed your agent to play enough games that it has properly explored the problem space, and on whether your network is designed so that it can properly learn and generalise from your observations...
**Some training regimes recommend using only a subset of the captured data for training, or randomising its order.
|
how is deep learning integrated into the reinforcement learning
The input into the neural network is the current observation of the environment (for example, a screen shot of a game, or a list of values from some sensors).
The output from the neural network is a l
|
52,117 |
Simplest way for ANN to learn F = MA?
|
Yes, the network will probably approximate some sort of multiplication, but it is unlikely to generalize outside the range of inputs you train it on.
You may have more luck learning and generalizing the rule by using a quadratic neuron, which is capable of multiplying inputs. RNTNs introduced here do something to that effect.
I suggest a neuron which does something like $f(x) = a(x^TWx + b^Tx + c)$, where $a$ is the activation function (possibly linear).
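As a concrete illustration of the suggested quadratic neuron (the weight values below are hand-picked, not learned), with a linear activation and $W$ chosen appropriately the unit computes the product $m \cdot a$ exactly, for inputs of any magnitude:
# Quadratic neuron f(x) = a(x^T W x + b^T x + c), here with a linear activation.
import numpy as np

def quadratic_neuron(x, W, b, c, activation=lambda z: z):
    return activation(x @ W @ x + b @ x + c)

# Hand-picked weights: with x = (m, a), x^T W x = m * a when W = [[0, 0.5], [0.5, 0]].
W = np.array([[0.0, 0.5], [0.5, 0.0]])
b = np.zeros(2)
c = 0.0

for m, a in [(2.0, 3.0), (10.0, -4.0), (1e3, 1e3)]:   # values far outside any training range
    x = np.array([m, a])
    print(m * a, quadratic_neuron(x, W, b, c))        # the two numbers match exactly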
|
Simplest way for ANN to learn F = MA?
|
Yes, the network will probably approximate some sort of multiplication, but it is unlikely to generalize outside the range of inputs you train it on.
You may have more luck learning and generalizing
|
Simplest way for ANN to learn F = MA?
Yes, the network will probably approximate some sort of multiplication, but it is unlikely to generalize outside the range of inputs you train it on.
You may have more luck learning and generalizing the rule by using a quadratic neuron, which is capable of multiplying inputs. RNTNs introduced here do something to that effect.
I suggest a neuron which does something like $f(x) = a(x^TWx + b^Tx + c)$, where $a$ is the activation function (possibly linear).
|
Simplest way for ANN to learn F = MA?
Yes, the network will probably approximate some sort of multiplication, but it is unlikely to generalize outside the range of inputs you train it on.
You may have more luck learning and generalizing
|
52,118 |
Simplest way for ANN to learn F = MA?
|
As Shimao said, a NNet will be able to learn some sigmoid based approximation of multiplication, but it will likely fail for new values that fall outside of the range of the training set.
Remember that a 3 layer feedforward neural net is a universal approximator, given a sufficient number of neurons. So you might end up having to resort to having one neuron per training example, in which case your NNet isn't really "learning" anything, it is just acting as a lookup table.
Another approach to your problem is to perform a log transform:
$F = ma \rightarrow \log(F) = \log(m) + \log(a)$
A NNet can easily learn the second function, and you can then reverse-transform the output.
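A minimal sketch of the log-transform idea (the data range and the use of a plain linear least-squares fit instead of a NNet are my own illustrative choices): fit a model from $(\log m, \log a)$ to $\log F$, then exponentiate the prediction. It generalizes well outside the training range because the relationship is exactly additive on the log scale.
# Learn F = m*a via the log transform log F = log m + log a.
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(1, 10, size=200)
a = rng.uniform(1, 10, size=200)
F = m * a

X = np.column_stack([np.log(m), np.log(a), np.ones_like(m)])
coef, *_ = np.linalg.lstsq(X, np.log(F), rcond=None)   # fitted coefficients ~ [1, 1, 0]
print(coef)

# Predict well outside the training range (m, a were in [1, 10]):
m_new, a_new = 50.0, 80.0
logF_hat = coef @ np.array([np.log(m_new), np.log(a_new), 1.0])
print(np.exp(logF_hat), m_new * a_new)                 # both ~ 4000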
|
Simplest way for ANN to learn F = MA?
|
As Shimao said, a NNet will be able to learn some sigmoid based approximation of multiplication, but it will likely fail for new values that fall outside of the range of the training set.
Remember th
|
Simplest way for ANN to learn F = MA?
As Shimao said, a NNet will be able to learn some sigmoid based approximation of multiplication, but it will likely fail for new values that fall outside of the range of the training set.
Remember that a 3 layer feedforward neural net is a universal approximator, given a sufficient number of neurons. So you might end up having to resort to having one neuron per training example, in which case your NNet isn't really "learning" anything, it is just acting as a lookup table.
Another approach to your problem is to perform a log transform:
$F = ma \rightarrow log(F) = log(m) + log(a)$
A NNet can easily learn the second function, and then reverse transfrom the output.
|
Simplest way for ANN to learn F = MA?
As Shimao said, a NNet will be able to learn some sigmoid based approximation of multiplication, but it will likely fail for new values that fall outside of the range of the training set.
Remember th
|
52,119 |
Generalized Pareto distribution (GPD)
|
The replacement of $z$ with $\frac{x-\mu}{\sigma}$ allows the generalization to a "location-scale family". This is common when dealing with continuous distributions. That is, by tweaking $\mu$ and $\sigma$ you can center and spread the distribution as you please.
Check out what happens to the distribution yourself, remembering parameter bounds.
When it comes to tweaking the location parameter, you might want your distribution centered on certain values according to your data. If you are talking about yearly rain maxima, your location parameter might be in the hundreds range, if you are measuring temperatures in a combustion chamber, your distribution will inevitably be centered at higher levels. Similar considerations go for the scale parameter.
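A small numerical sketch of the effect of the two parameters, using scipy's generalized Pareto implementation (the particular shape value and the $(\mu, \sigma)$ pairs below are arbitrary choices of mine): quantiles of the location-scale version are simply $\mu + \sigma$ times the quantiles of the standardized distribution.
# Effect of location mu and scale sigma on the GPD, via scipy.
import numpy as np
from scipy.stats import genpareto

xi = 0.2                                            # shape parameter
q = np.array([0.5, 0.9, 0.99])

z = genpareto.ppf(q, c=xi)                          # standardized quantiles (mu=0, sigma=1)
for mu, sigma in [(0.0, 1.0), (100.0, 1.0), (100.0, 25.0)]:
    x = genpareto.ppf(q, c=xi, loc=mu, scale=sigma)
    print(mu, sigma, x, mu + sigma * z)             # the last two columns agree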
|
Generalized Pareto distribution (GPD)
|
The replacement of $z$ with $\frac{x-\mu}{\sigma}$ allows the generalization to a "location-scale family". This is common when dealing with continuous distributions. That is, tweaking $\mu$ and $\sigm
|
Generalized Pareto distribution (GPD)
The replacement of $z$ with $\frac{x-\mu}{\sigma}$ allows the generalization to a "location-scale family". This is common when dealing with continuous distributions. That is, tweaking $\mu$ and $\sigma$ you can center the distribution and spread the distribution as you please.
Check out what happens to the distribution yourself, remembering parameter bounds.
When it comes to tweaking the location parameter, you might want your distribution centered on certain values according to your data. If you are talking about yearly rain maxima, your location parameter might be in the hundreds range, if you are measuring temperatures in a combustion chamber, your distribution will inevitably be centered at higher levels. Similar considerations go for the scale parameter.
|
Generalized Pareto distribution (GPD)
The replacement of $z$ with $\frac{x-\mu}{\sigma}$ allows the generalization to a "location-scale family". This is common when dealing with continuous distributions. That is, tweaking $\mu$ and $\sigm
|
52,120 |
Generalized Pareto distribution (GPD)
|
The max-stability property of the GEV distribution is quite well known
in relation with the Fisher-Tippett-Gnedenko theorem. The GPD has the
following remarkable property which can be named threshold
stability and relates to the Pickands-Balkema-de Haan theorem. It
helps to understand the relation between the location $\mu$ and the
scale $\sigma$.
Assume that $X \sim \text{GPD}(0,\,\sigma,\,\xi)$, and let $\omega$ be
the upper end-point. Then for each threshold $u \in [0,\, \omega)$,
the distribution of the excess $X-u$ conditional on the exceedance
$X>u$ is the same, up to a scaling factor, as the
distribution of $X$
\begin{equation}
\tag{1}
X - u \, \vert \, X > u \quad \overset{\text{dist}}{=} \quad a(u) X
\end{equation}
where $a(u) = 1+ \xi u / \sigma> 0$. So, conditional on $X >u$, the
excess $X-u$ is GPD with location $0$, the same shape $\xi$, and scale
$\sigma_u := a(u) \times \sigma = \sigma + \xi u$.
An appealing interpretation is when $X$ is the lifetime of an item. If
the item is alive at time $u$, then the property says that it will
behave as if it were new and as if the time clock were changed to
the new unit $1 / a(u)$. See the figure, where a positive value of $\xi$ is used,
implying a rejuvenation and a thick tail.
It seems that in most applications of the GPD the parameter $\mu$ is
fixed, and is not estimated. The scale parameter $\sigma$ should then
be thought of as related to $\mu$ because the tail remains identical
when $\sigma^\star := \sigma - \xi \mu$ is constant.
The relation (1) can be written as a
functional equation for the survival function $S(x) := \text{Pr}\{X >
x\}$
\begin{equation}
\tag{2}
\frac{S(x + u)}{S(u)} = S[x/a(u)] \quad \text{for all }u, \, x
\text{ with } u \in [0,\,\omega) \text{ and } x \geq 0.
\end{equation}
Interestingly, the functional equation (2) nearly characterises the GPD
survival. Consider a continuous probability distribution on
$\mathbb{R}$ with end-points $0$ and $\omega >0$ possibly
infinite. Assume that the survival function $S(x)$ is strictly
decreasing and smooth enough on $[0, \,\omega)$. If (2) holds for a
function $a(u) > 0$ which is smooth enough on $[0,\,\omega)$, then
$S(x)$ must be the survival function of a
$\text{GPD}(0, \, \sigma,\,\xi)$.
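The threshold-stability property (1) is easy to check by simulation. The sketch below (with arbitrary values of $\sigma$, $\xi$ and $u$ chosen for illustration) compares empirical quantiles of the conditional excess $X - u \mid X > u$ with quantiles of a $\text{GPD}(0,\,\sigma + \xi u,\,\xi)$.
# Monte Carlo check of the threshold-stability property of the GPD.
import numpy as np
from scipy.stats import genpareto

sigma, xi, u = 2.0, 0.3, 1.5
rng = np.random.default_rng(1)

x = genpareto.rvs(c=xi, scale=sigma, size=2_000_000, random_state=rng)
excess = x[x > u] - u                                # X - u conditional on X > u

q = np.array([0.25, 0.5, 0.75, 0.95])
print(np.quantile(excess, q))                        # empirical quantiles of the excess
print(genpareto.ppf(q, c=xi, scale=sigma + xi * u))  # GPD(0, sigma + xi*u, xi) quantiles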
|
Generalized Pareto distribution (GPD)
|
The max-stability property of the GEV distribution is quite well known
in relation with the Fisher-Tippett-Gnedenko theorem. The GPD has the
following remarkable property which can be named threshold
|
Generalized Pareto distribution (GPD)
The max-stability property of the GEV distribution is quite well known
in relation with the Fisher-Tippett-Gnedenko theorem. The GPD has the
following remarkable property which can be named threshold
stability and relates to the Pickands-Balkema-de Haan theorem. It
helps to understand the relation between the location $\mu$ and the
scale $\sigma$.
Assume that $X \sim \text{GPD}(0,\,\sigma,\,\xi)$, and let $\omega$ be
the upper end-point. Then for each threshold $u \in [0,\, \omega)$,
the distribution of the excess $X-u$ conditional on the exceedance
$X>u$ is the same, up to a scaling factor, as the
distribution of $X$
\begin{equation}
\tag{1}
X - u \, \vert \, X > u \quad \overset{\text{dist}}{=} \quad a(u) X
\end{equation}
where $a(u) = 1+ \xi u / \sigma> 0$. So, conditional on $X >u$, the
excess $X-u$ is GPD with location $0$, the same shape $\xi$, and scale
$\sigma_u := a(u) \times \sigma = \sigma + \xi u$.
An appealing interpretation is when $X$ is the lifetime of an item. If
the item is alive at time $u$, then the property says that it will
behave as if it were new and as if the time clock were changed to
the new unit $1 / a(u)$. See the figure, where a positive value of $\xi$ is used,
implying a rejuvenation and a thick tail.
It seems that in most applications of the GPD the parameter $\mu$ is
fixed, and is not estimated. The scale parameter $\sigma$ should then
be thought of as related to $\mu$ because the tail remains identical
when $\sigma^\star := \sigma - \xi \mu$ is constant.
The relation (1) can be written as a
functional equation for the survival function $S(x) := \text{Pr}\{X >
x\}$
\begin{equation}
\tag{2}
\frac{S(x + u)}{S(u)} = S[x/a(u)] \quad \text{for all }u, \, x
\text{ with } u \in [0,\,\omega) \text{ and } x \geq 0.
\end{equation}
Interestingly, the functional equation (2) nearly characterises the GPD
survival. Consider a continuous probability distribution on
$\mathbb{R}$ with end-points $0$ and $\omega >0$ possibly
infinite. Assume that the survival function $S(x)$ is strictly
decreasing and smooth enough on $[0, \,\omega)$. If (2) holds for a
function $a(u) > 0$ which is smooth enough on $[0,\,\omega)$, then
$S(x)$ must be the survival function of a
$\text{GPD}(0, \, \sigma,\,\xi)$.
|
Generalized Pareto distribution (GPD)
The max-stability property of the GEV distribution is quite well known
in relation with the Fisher-Tippett-Gnedenko theorem. The GPD has the
following remarkable property which can be named threshold
|
52,121 |
When a prior distribution would not be overwhelmed by data, regardless of the sample size?
|
One way is to set a prior that is a constant (a point mass). For example, take a simple linear regression context where you have an intercept, one slope, and an error term. If you set the prior on beta to be $\beta \sim \text{N}(5, 0)$, then no amount of data can overwhelm that prior. You are multiplying the likelihood of an arbitrarily large number of data points by something that has no variance; you will get 5, no matter the sample size.
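A quick numerical illustration with the conjugate normal model (the known error variance, the true data mean, and the tiny prior variance standing in for zero are my own arbitrary choices): as the prior variance shrinks to zero, the posterior mean stays at 5 no matter how large $n$ is.
# Normal-normal conjugate updating with a (nearly) zero-variance prior.
import numpy as np

prior_mean, prior_var = 5.0, 1e-12       # essentially beta ~ N(5, 0)
sigma2 = 1.0                             # known error variance
rng = np.random.default_rng(0)

for n in [10, 1_000, 1_000_000]:
    data = rng.normal(20.0, np.sqrt(sigma2), size=n)   # data actually centered far from 5
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + data.sum() / sigma2)
    print(n, post_mean)                  # stays essentially at 5.0 for every n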
|
When a prior distribution would not be overwhelmed by data, regardless of the sample size?
|
One way is to set a prior that is a constant. For example, take a simple linear regression context where you have an intercept, one slope, and an error term. If you set the prior on beta to be $\beta
|
When a prior distribution would not be overwhelmed by data, regardless of the sample size?
One way is to set a prior that is a constant (a point mass). For example, take a simple linear regression context where you have an intercept, one slope, and an error term. If you set the prior on beta to be $\beta \sim \text{N}(5, 0)$, then no amount of data can overwhelm that prior. You are multiplying the likelihood of an arbitrarily large number of data points by something that has no variance; you will get 5, no matter the sample size.
|
When a prior distribution would not be overwhelmed by data, regardless of the sample size?
One way is to set a prior that is a constant. For example, take a simple linear regression context where you have an intercept, one slope, and an error term. If you set the prior on beta to be $\beta
|
52,122 |
When a prior distribution would not be overwhelmed by data, regardless of the sample size?
|
Another example would be lack of identification.
Assume a proper prior $\pi(\theta)$. We obtain that the posterior is equal to the prior, $\pi(\theta|y)=\pi(\theta)$, if $f(y|\theta)$ does not depend on $\theta$, i.e., if the likelihood is not informative about the parameter of interest:
\begin{eqnarray*}
\pi(\theta|y)&=&\frac{f(y|\theta)\pi(\theta)}{\int f(y|\theta)\pi(\theta)d\theta}\\
&=&\frac{f(y|\theta)\pi(\theta)}{f(y|\theta)\int \pi(\theta)d\theta}\\
&=&\frac{\pi(\theta)}{\int \pi(\theta)d\theta}\\
&=&\pi(\theta)
\end{eqnarray*}
Since the likelihood does not depend on $\theta$, the data does not modify our beliefs about $\theta$, so that the posterior still is equal to the prior.
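A tiny numerical illustration (the discretized prior and the constant likelihood value are my own toy choices): when the likelihood does not vary with $\theta$, the posterior returned by Bayes' rule on a grid is identical to the prior.
# Posterior equals prior when the likelihood does not depend on theta.
import numpy as np

theta = np.linspace(-3, 3, 101)
prior = np.exp(-0.5 * theta**2)
prior /= prior.sum()                          # proper (discretized) prior

likelihood = np.full_like(theta, 0.25)        # f(y | theta) constant in theta

posterior = prior * likelihood
posterior /= posterior.sum()
print(np.allclose(posterior, prior))          # True: the data did not update our beliefs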
|
When a prior distribution would not be overwhelmed by data, regardless of the sample size?
|
Another example would be lack of identification.
Assume a proper prior $\pi(\theta)$. We obtain that the posterior is equal to the prior, $\pi(\theta|y)=\pi(\theta)$, if $f(y|\theta)$ does not depend
|
When a prior distribution would not be overwhelmed by data, regardless of the sample size?
Another example would be lack of identification.
Assume a proper prior $\pi(\theta)$. We obtain that the posterior is equal to the prior, $\pi(\theta|y)=\pi(\theta)$, if $f(y|\theta)$ does not depend on $\theta$, i.e., if the likelihood is not informative about the parameter of interest:
\begin{eqnarray*}
\pi(\theta|y)&=&\frac{f(y|\theta)\pi(\theta)}{\int f(y|\theta)\pi(\theta)d\theta}\\
&=&\frac{f(y|\theta)\pi(\theta)}{f(y|\theta)\int \pi(\theta)d\theta}\\
&=&\frac{\pi(\theta)}{\int \pi(\theta)d\theta}\\
&=&\pi(\theta)
\end{eqnarray*}
Since the likelihood does not depend on $\theta$, the data does not modify our beliefs about $\theta$, so that the posterior still is equal to the prior.
|
When a prior distribution would not be overwhelmed by data, regardless of the sample size?
Another example would be lack of identification.
Assume a proper prior $\pi(\theta)$. We obtain that the posterior is equal to the prior, $\pi(\theta|y)=\pi(\theta)$, if $f(y|\theta)$ does not depend
|
52,123 |
Relationship between model over fitting and number of parameters
|
An exact answer depends on a particular statistical model and the data dimensionality. However, usually the more parameters the model has, the more functions it can represent. A common assumption is that the function generating the data is simpler than the exact random noise on the training samples, thus smaller models (in the number of parameters) will be able to model the desired function but not the noise. Fitting the noise on the training samples is overfitting.
I will give two specific examples:
1. Linear regression with polynomial model
The model is:
$y= a_0 + a_1 x + a_2x^2+\ldots +a_nx^n$
This model is able to fit exactly any consistent dataset of $n$ training samples. Consistent means there are no two samples with the same $x$ but different $y$.
This means that if the number of parameters is greater than or equal to the number of training samples, you are guaranteed to overfit. However, if the data was generated by a polynomial of degree $m$, $m<n$, a certain level of overfitting will also happen whenever the model polynomial has degree $k$ with $m<k<n$, even though it will not be able to fit the data perfectly.
2. Neural networks
Zhang et al. (2017) in their paper Understanding deep learning requires rethinking generalization show that a simple two-layer neural network with $2n+d$ parameters is capable of perfectly fitting any dataset of $n$ samples of dimension $d$.
However, note that while commonly used neural networks have much more than $2n+d$ parameters, they do not necessarily overfit: The minimum of the loss function (which corresponds to modeling the noise on the training data) cannot be found in practice, and there are many well-studied regularization methods (early stopping, to give an example) that prevent overfitting. Moreover, the mentioned paper also includes an interesting discussion about not yet understood properties of deep neural networks, which prevent overfitting.
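Returning to example 1, here is a short numerical sketch (the true degree, number of points and noise level are arbitrary choices of mine): a degree-$(n-1)$ polynomial interpolates $n$ noisy training points essentially exactly, yet its error on new points from the same quadratic truth is much larger than the noise alone would give.
# Polynomial with as many parameters as training points: perfect fit, poor generalization.
import numpy as np

rng = np.random.default_rng(2)
n = 10
x_train = np.linspace(-1, 1, n)
f = lambda x: 1.0 + 2.0 * x - 3.0 * x**2                     # true degree-2 polynomial
y_train = f(x_train) + rng.normal(0, 0.3, size=n)

coef = np.polyfit(x_train, y_train, deg=n - 1)               # n parameters for n points
print(np.max(np.abs(np.polyval(coef, x_train) - y_train)))   # ~0: training data fit exactly

x_test = np.linspace(-1, 1, 200)
y_test = f(x_test) + rng.normal(0, 0.3, size=200)
print(np.mean((np.polyval(coef, x_test) - y_test) ** 2))     # inflated test error: noise was fit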
|
Relationship between model over fitting and number of parameters
|
An exact answer depends on a particular statistical model and the data dimensionality. However, usually the more parameters the model has, the more functions it can represent. A common assumption is t
|
Relationship between model over fitting and number of parameters
An exact answer depends on a particular statistical model and the data dimensionality. However, usually the more parameters the model has, the more functions it can represent. A common assumption is that the function generating the data is simpler than the exact random noise on the training samples, thus smaller models (in the number of parameters) will be able to model the desired function but not the noise. Fitting the noise on the training samples is overfitting.
I will give two specific examples:
1. Linear regression with polynomial model
The model is:
$y= a_0 + a_1 x + a_2x^2+\ldots +a_nx^n$
This model is able to fit exactly any consistent dataset of $n$ training samples. Consistent means there are no two samples with the same $x$ but different $y$.
This means that if the number of parameters is greater than or equal to the number of training samples, you are guaranteed to overfit. However, if the data was generated by a polynomial of degree $m$, $m<n$, a certain level of overfitting will also happen whenever the model polynomial has degree $k$ with $m<k<n$, even though it will not be able to fit the data perfectly.
2. Neural networks
Zhang et al. (2017) in their paper Understanding deep learning requires rethinking generalization show that a simple two-layer neural network with $2n+d$ parameters is capable of perfectly fitting any dataset of $n$ samples of dimension $d$.
However, note that while commonly used neural networks have much more than $2n+d$ parameters, they do not necessarily overfit: The minimum of the loss function (which corresponds to modeling the noise on the training data) cannot be found in practice, and there are many well-studied regularization methods (early stopping, to give an example) that prevent overfitting. Moreover, the mentioned paper also includes an interesting discussion about not yet understood properties of deep neural networks, which prevent overfitting.
|
Relationship between model over fitting and number of parameters
An exact answer depends on a particular statistical model and the data dimensionality. However, usually the more parameters the model has, the more functions it can represent. A common assumption is t
|
52,124 |
Why linear discriminant analysis is sensitive to cross validation (LDA overfit problem)?
|
Even when I use leave-one-out (LOOCV) to calculate the LDA projection matrix, it is calculated by holding out just one observation. My question is why, even in this case, the projection matrix ($W$) is so over-fitted and sensitive to cross validation? Intuitively I've held out just one sample, but it seems the projection matrix can't map the held-out observation correctly.
Well, the cross validation is probably doing what it is supposed to do: with almost the same training data, performance is measured. What you observe is that the models are unstable (which is one symptom of overfitting). Considering your data situation, it seems totally plausible to me that the full model overfits just as badly.
Cross validation does not in itself guard against overfitting (or improve the situation) - it just tells you that you are overfitting and it is up to you to do something against that.
Keep in mind that the recommended number of training cases where you can be reasonably sure of having a stable fitting for (unregularized) linear classifiers like LDA is n > 3 to 5 p in each class. In your case that would be, say, 200 * 7 * 5 = 7000 cases, so with 500 cases you are more than an order of magnitude below that recommendation.
Suggestions:
As you look at LDA as a projection method, you can also check out PLS (partial least squares). It is related to LDA (Barker & Rayens: Partial least squares for discrimination J Chemom, 2003, 17, 166-173).
In contrast to PCA, PLS takes the dependent variable into account for its projection. But in contrast to LDA (and like PCA) it directly offers regularization.
In small sample size situations where n is barely larger than p, many problems can be solved by linear classification. I'd recommend checking whether the nonlinear 2nd stage in your classification is really necessary.
Unstable models may be improved by switching to an aggregated (ensemble) model. While bagging is the most famous variety, you can also aggregate cross validation LDA (e.g. Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations Anal Bioanal Chem, 2008, 390, 1261-1271.
DOI: 10.1007/s00216-007-1818-6
)
Because of the pooling of the covariance matrix, I'd expect your uneven distribution of cases over the different classes to be less difficult for LDA compared to many other classifiers such as SVM. Of course this comes at the cost that a common covariance matrix may not be a good description of your data. However, if your classes are very unequal (or you even have rather ill-defined negative classes such as "something went wrong with the process") you may want to look into one-class classifiers. They typically need more training cases than discriminative classifiers, but they do have the advantage that recognition of classes where you have sufficient cases will not be compromised by classes with only few training instances, and said ill-defined classes can be described as the case belongs to none of the well-defined classes.
|
Why linear discriminant analysis is sensitive to cross validation (LDA overfit problem)?
|
When I even use leave one out (LOOCV) to calculate LDA projection matrix, it is calculated by holding out just one observation. My question is why even in this case the projection matrix ($W$) is so o
|
Why linear discriminant analysis is sensitive to cross validation (LDA overfit problem)?
Even when I use leave-one-out (LOOCV) to calculate the LDA projection matrix, it is calculated by holding out just one observation. My question is why, even in this case, the projection matrix ($W$) is so over-fitted and sensitive to cross validation? Intuitively I've held out just one sample, but it seems the projection matrix can't map the held-out observation correctly.
Well, the cross validation is probably doing what it is supposed to do: with almost the same training data, performance is measured. What you observe is that the models are unstable (which is one symptom of overfitting). Considering your data situation, it seems totally plausible to me that the full model overfits just as badly.
Cross validation does not in itself guard against overfitting (or improve the situation) - it just tells you that you are overfitting and it is up to you to do something against that.
Keep in mind that the recommended number of training cases where you can be reasonably sure of having a stable fitting for (unregularized) linear classifiers like LDA is n > 3 to 5 p in each class. In your case that would be, say, 200 * 7 * 5 = 7000 cases, so with 500 cases you are more than an order of magnitude below that recommendation.
Suggestions:
As you look at LDA as a projection method, you can also check out PLS (partial least squares). It is related to LDA (Barker & Rayens: Partial least squares for discrimination J Chemom, 2003, 17, 166-173).
In contrast to PCA, PLS takes the dependent variable into account for its projection. But in contrast to LDA (and like PCA) it directly offers regularization.
In small sample size situations where n is barely larger than p, many problems can be solved by linear classification. I'd recommend checking whether the nonlinear 2nd stage in your classification is really necessary.
Unstable models may be improved by switching to an aggregated (ensemble) model. While bagging is the most famous variety, you can also aggregate cross validation LDA (e.g. Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations Anal Bioanal Chem, 2008, 390, 1261-1271.
DOI: 10.1007/s00216-007-1818-6
)
Because of the pooling of the covariance matrix, I'd expect your uneven distribution of cases over the different classes to be less difficult for LDA compared to many other classifiers such as SVM. Of course this comes at the cost that a common covariance matrix may not be a good description of your data. However, if your classes are very unequal (or you even have rather ill-defined negative classes such as "something went wrong with the process") you may want to look into one-class classifiers. They typically need more training cases than discriminative classifiers, but they do have the advantage that recognition of classes where you have sufficient cases will not be compromised by classes with only few training instances, and said ill-defined classes can be described as the case belongs to none of the well-defined classes.
|
Why linear discriminant analysis is sensitive to cross validation (LDA overfit problem)?
When I even use leave one out (LOOCV) to calculate LDA projection matrix, it is calculated by holding out just one observation. My question is why even in this case the projection matrix ($W$) is so o
|
52,125 |
Why linear discriminant analysis is sensitive to cross validation (LDA overfit problem)?
|
Looks like your sample size is not a lot bigger than the dimensionality of the data (feature set size). That can be a problem for LDA, and it can overfit, since LDA relies on computing the within-class scatter matrix, which requires the scenario of N >> p (# samples >> # features).
One quick way to check if you are overfitting with LDA is to look at the projections. As a result of LDA you have C-1 projection vectors. I would try projecting data on those vectors one by one and visualize it. If LDA indeed overfitted - you will see that the classes separate almost perfectly and are clustered around separate points on the projected axis. (In case of p > N all the samples would get projected onto C different points with classes separated perfectly).
This effect was termed "data piling" by J. S. Marron in his paper Distance Weighted Discrimination. For reference of how it might look like you can check the figures in that paper.
So assuming that is what happening I would do one of the following:
1) Use a regularized version of LDA. The simplest idea is probably just adding some constant to the diagonal of the within-class scatter matrix in order to increase the variance in all directions. But there are a lot of different ways you can regularize LDA.
2) Use another method for dimensionality reduction that is adapted to your scenario of small sample size. Distance Weighted Discrimination (DWD) might be a good choice here.
3) Get more samples (always recommended)
[1] Distance-Weighted Discrimination. J. S. Marron, Michael J. Todd and Jeongyoun Ahn. Journal of the American Statistical Association Vol. 102, No. 480 (Dec., 2007), pp. 1267-1271
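As a concrete illustration of suggestion 1), scikit-learn's LDA offers a shrinkage-type regularization of the covariance estimate, in the same spirit as adding a constant to the diagonal. The sketch below uses synthetic small-n, large-p data of my own choosing and is only illustrative of how the regularized and unregularized fits can be compared by cross-validation.
# Plain vs. shrinkage-regularized LDA on a small-n, large-p synthetic problem.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_per_class, p = 20, 100                       # few samples, many features
X0 = rng.normal(0.0, 1.0, size=(n_per_class, p))
X1 = rng.normal(0.3, 1.0, size=(n_per_class, p))
X = np.vstack([X0, X1])
y = np.array([0] * n_per_class + [1] * n_per_class)

plain = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=None)
shrunk = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")   # Ledoit-Wolf shrinkage
print(cross_val_score(plain, X, y, cv=5).mean())
print(cross_val_score(shrunk, X, y, cv=5).mean())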
|
Why linear discriminant analysis is sensitive to cross validation (LDA overfit problem)?
|
Looks like your sample size is not a lot bigger than the dimensionality of the data (feature set size). That can be a problem for LDA and it can overfit. Since it relies on computing the within-class
|
Why linear discriminant analysis is sensitive to cross validation (LDA overfit problem)?
Looks like your sample size is not a lot bigger than the dimensionality of the data (feature set size). That can be a problem for LDA, and it can overfit, since LDA relies on computing the within-class scatter matrix, which requires the scenario of N >> p (# samples >> # features).
One quick way to check if you are overfitting with LDA is to look at the projections. As a result of LDA you have C-1 projection vectors. I would try projecting data on those vectors one by one and visualize it. If LDA indeed overfitted - you will see that the classes separate almost perfectly and are clustered around separate points on the projected axis. (In case of p > N all the samples would get projected onto C different points with classes separated perfectly).
This effect was termed "data piling" by J. S. Marron in his paper Distance Weighted Discrimination. For reference of how it might look like you can check the figures in that paper.
So assuming that is what happening I would do one of the following:
1) Use a regularized version of LDA. The simplest idea is probably just adding some constant to the diagonal of the within-class scatter matrix in order to increase the variance in all directions. But there are a lot of different ways you can regularize LDA.
2) Use another method for dimensionality reduction that is adapted to your scenario of small sample size. Distance Weighted Discrimination (DWD) might be a good choice here.
3) Get more samples (always recommended)
[1] Distance-Weighted Discrimination. J. S. Marron, Michael J. Todd and Jeongyoun Ahn. Journal of the American Statistical Association Vol. 102, No. 480 (Dec., 2007), pp. 1267-1271
|
Why linear discriminant analysis is sensitive to cross validation (LDA overfit problem)?
Looks like your sample size is not a lot bigger than the dimensionality of the data (feature set size). That can be a problem for LDA and it can overfit. Since it relies on computing the within-class
|
52,126 |
Why linear discriminant analysis is sensitive to cross validation (LDA overfit problem)?
|
LDA is optimal when the distribution of features, conditional on the labels is Gaussian with equal, but unstructured covariance matrices. If conditional Gaussian model doesn't hold approximately, you may not want to use LDA. The results of your LOO-CV suggests
The conditional Gaussian model is a poor fit and/or
You don't have enough observations to precisely estimate the within-class covariance matrix.
|
Why linear discriminant analysis is sensitive to cross validation (LDA overfit problem)?
|
LDA is optimal when the distribution of features, conditional on the labels is Gaussian with equal, but unstructured covariance matrices. If conditional Gaussian model doesn't hold approximately, you
|
Why linear discriminant analysis is sensitive to cross validation (LDA overfit problem)?
LDA is optimal when the distribution of features, conditional on the labels is Gaussian with equal, but unstructured covariance matrices. If conditional Gaussian model doesn't hold approximately, you may not want to use LDA. The results of your LOO-CV suggests
The conditional Gaussian model is a poor fit and/or
You don't have enough observations to precisely estimate the within-class covariance matrix.
|
Why linear discriminant analysis is sensitive to cross validation (LDA overfit problem)?
LDA is optimal when the distribution of features, conditional on the labels is Gaussian with equal, but unstructured covariance matrices. If conditional Gaussian model doesn't hold approximately, you
|
52,127 |
Loss vs. Classification Accuracy in applied problems
|
Accuracy is essentially determined by the zero-one loss function (it equals one minus the mean zero-one loss), so to answer your question, yes, accuracy is essentially just a loss function.
More specifically, the zero-one loss function is defined as:
$L(y,y^*) =\begin{cases}
0,& \text{if } y = y^*\\
1, & \text{otherwise}
\end{cases}$
So the mean of this loss across all $y$ is the error rate, and one minus that mean is the accuracy.
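A two-line numerical check of this relationship, with made-up labels: the mean zero-one loss is the error rate, and accuracy is one minus that mean.
# Accuracy from the zero-one loss.
import numpy as np

y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 0])

zero_one_loss = (y_true != y_pred).astype(float)   # L(y, y^*) for each observation
print(1.0 - zero_one_loss.mean())                  # accuracy = 1 - mean zero-one loss = 0.6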
|
Loss vs. Classification Accuracy in applied problems
|
Accuracy is essentially the mean of the Losses under a zero-one loss function, so to answer your question, yes accuracy is just a loss function.
More specifically: For the Zero-one loss function is de
|
Loss vs. Classification Accuracy in applied problems
Accuracy is essentially determined by the zero-one loss function (it equals one minus the mean zero-one loss), so to answer your question, yes, accuracy is essentially just a loss function.
More specifically, the zero-one loss function is defined as:
$L(y,y^*) =\begin{cases}
0,& \text{if } y = y^*\\
1, & \text{otherwise}
\end{cases}$
So the mean of this loss across all $y$ is the error rate, and one minus that mean is the accuracy.
|
Loss vs. Classification Accuracy in applied problems
Accuracy is essentially the mean of the Losses under a zero-one loss function, so to answer your question, yes accuracy is just a loss function.
More specifically: For the Zero-one loss function is de
|
52,128 |
Loss vs. Classification Accuracy in applied problems
|
I want to argue that the premise of your question is flawed.
In practical problems, where we want to for instance predict if a subject has a certain disease or not, we usually take classification accuracy as a measure [...]
Maybe some people do, but I think there is a fair perspective that anyone doing this is not doing their job. It's certainly not an advisable practice.
If I'm building a system to predict whether someone has a disease or not, my primary responsibility is to think through how to operationalize these predictions. Really, I'm not actually building a system to make predictions, I'm building a system to advise a doctor on how to intervene. A prediction may be part of this system, yes, but it is not the whole system.
In constructing a decision rule for the doctor, I need to weigh the consequences of my advice: what are the consequences if I advise the doctor to intervene as if the patient has cancer when they in fact do not, what are the consequences if I advise the doctor to behave as if the patient does not have cancer when they in fact do, and so forth. The evaluation of my decision rule must take into account these costs. Accuracy is irrelevant. Not only irrelevant, but harmful in this case. A good decision rule will have poor accuracy.
So what of a model, how to evaluate that? The model should not predict whether a patient has the disease or not (again, accuracy is irrelevant), but should inform us of the probability the patient has cancer. Our intention may be to use this probability to construct the decision rule above, but the model used in constructing these probabilities should be evaluated on the basis of its job: do the probabilities faithfully reflect the true probabilities in the population.
This is a separation of concerns: models predict probabilities, decision rules tell us how to act (and should be informed by probabilities). Good separation of concerns maximizes our flexibility in taking action, and provides maximal information about the situation. Probabilities are the one true path.
|
Loss vs. Classification Accuracy in applied problems
|
I want to argue that the premise of your question is flawed.
In practical problems, where we want to for instance predict if a subject has a certain disease or not, we usually take classification acc
|
Loss vs. Classification Accuracy in applied problems
I want to argue that the premise of your question is flawed.
In practical problems, where we want to for instance predict if a subject has a certain disease or not, we usually take classification accuracy as a measure [...]
Maybe some people do, but I think there is a fair perspective that anyone doing this is not doing their job. It's certainly not an advisable practice.
If I'm building a system to predict whether someone has a disease or not, my primary responsibility is to think through how to operationalize these predictions. Really, I'm not actually building a system to make predictions, I'm building a system to advise a doctor on how to intervene. A prediction may be part of this system, yes, but it is not the whole system.
In constructing a decision rule for the doctor, I need to weigh the consequences of my advice: what are the consequences if I advise the doctor to intervene as if the patient has cancer when they in fact do not, what are the consequences if I advise the doctor to behave as if the patient does not have cancer when they in fact do, and so forth. The evaluation of my decision rule must take into account these costs. Accuracy is irrelevant. Not only irrelevant, but harmful in this case. A good decision rule will have poor accuracy.
So what of a model, how to evaluate that? The model should not predict whether a patient has the disease or not (again, accuracy is irrelevant), but should inform us of the probability the patient has cancer. Our intention may be to use this probability to construct the decision rule above, but the model used in constructing these probabilities should be evaluated on the basis of its job: do the probabilities faithfully reflect the true probabilities in the population.
This is a separation of concerns: models predict probabilities, decision rules tell us how to act (and should be informed by probabilities). Good separation of concerns maximizes our flexibility in taking action, and provides maximal information about the situation. Probabilities are the one true path.
|
Loss vs. Classification Accuracy in applied problems
I want to argue that the premise of your question is flawed.
In practical problems, where we want to for instance predict if a subject has a certain disease or not, we usually take classification acc
|
52,129 |
Loss vs. Classification Accuracy in applied problems
|
They are two different metrics to evaluate your model's performance usually being used in different phases.
Loss is often used in the training process to find the "best" parameter values for your model (e.g. weights in neural network). It is what you try to optimize in the training by updating weights.
Accuracy is more from an applied perspective. Once you find the optimized parameters above, you use this metrics to evaluate how accurate your model's prediction is compared to the true data.
Let us use a toy classification example. You want to predict gender from one's weight and height. You have 3 data points, as follows (0 stands for male, 1 stands for female):
$y_1 = 0, x_{1w}= 50kg, x_{1h} = 160cm$;
$y_2 = 0, x_{2w} = 60kg, x_{2h} = 170cm$;
$y_3 = 1, x_{3w} = 55kg, x_{3h} = 175cm$;
You use a simple logistic regression model, that is $y = \frac{1}{1+e^{-(b_1 x_w + b_2 x_h)}}$
How do you find $b_1$ and $b_2$? You define a loss first and use an optimization method to minimize the loss in an iterative way by updating $b_1$ and $b_2$.
In our example, a typical loss for this binary classification problem can be:
$$-\sum_{i=1}^{3}y_{i}log(\hat{y_i}) + (1-y_{i})log(1-\hat{y_i})$$
We don't know what $b_1$ and $b_2$ should be. Let us make a random guess say $b_1$ = 0.1 and $b_2$ = -0.03. Then what is our loss now?
$\hat{y}_1 = \frac{1}{(1+e^{-(0.1*50-0.03*160)})} = 0.549834 = 0.55$
$\hat{y}_2 = \frac{1}{(1+e^{-(0.1*60-0.03*170)})}= 0.7109495 = 0.71$
$\hat{y}_3 = \frac{1}{(1+e^{-(0.1*55-0.03*175)})}= 0.5621765 = 0.56$
so the loss is $-\log(1-0.55) - \log(1-0.71) - \log(0.56) = 2.6162$
Then your learning algorithm (e.g. gradient descent) will find a way to update $b_1$ and $b_2$ to decrease the loss.
What if $b_1=0.1$ and $b_2=-0.03$ are the final values of $b_1$ and $b_2$ (the output from gradient descent)? What is the accuracy then?
Let's assume that if $\hat{y} \ge 0.5$, we decide our prediction is female (1); otherwise it is male (0). Therefore, our algorithm predicts $y_1 = 1$, $y_2 = 1$ and $y_3 = 1$. What is our accuracy? We make wrong predictions on $y_1$ and $y_2$ and a correct one on $y_3$. So our accuracy is $\frac{1}{3}$ = 33.33%.
PS: In Amir's answer, back-propagation is said to be an optimization method in NNs. I think it is better treated as a way to compute the gradients for the weights in a NN. Common optimization methods in NNs are gradient descent and Adam.
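Returning to the worked example above, the numbers can be reproduced in a few lines (the 0.5 decision threshold is the same assumption used in the text):
# Reproduce the toy example: loss and accuracy for b1 = 0.1, b2 = -0.03.
import numpy as np

X = np.array([[50.0, 160.0], [60.0, 170.0], [55.0, 175.0]])   # weight, height
y = np.array([0.0, 0.0, 1.0])                                  # 0 = male, 1 = female
b = np.array([0.1, -0.03])

y_hat = 1.0 / (1.0 + np.exp(-(X @ b)))
loss = -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
accuracy = np.mean((y_hat >= 0.5).astype(float) == y)
print(y_hat)        # ~ [0.55, 0.71, 0.56]
print(loss)         # ~ 2.6162
print(accuracy)     # 1/3 ~ 0.3333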
|
Loss vs. Classification Accuracy in applied problems
|
They are two different metrics to evaluate your model's performance usually being used in different phases.
Loss is often used in the training process to find the "best" parameter values for your mode
|
Loss vs. Classification Accuracy in applied problems
They are two different metrics to evaluate your model's performance usually being used in different phases.
Loss is often used in the training process to find the "best" parameter values for your model (e.g. weights in neural network). It is what you try to optimize in the training by updating weights.
Accuracy is more from an applied perspective. Once you find the optimized parameters above, you use this metrics to evaluate how accurate your model's prediction is compared to the true data.
Let us use a toy classification example. You want to predict gender from one's weight and height. You have 3 data points, as follows (0 stands for male, 1 stands for female):
$y_1 = 0, x_{1w}= 50kg, x_{1h} = 160cm$;
$y_2 = 0, x_{2w} = 60kg, x_{2h} = 170cm$;
$y_3 = 1, x_{3w} = 55kg, x_{3h} = 175cm$;
You use a simple logistic regression model, that is $y = \frac{1}{1+e^{-(b_1 x_w + b_2 x_h)}}$
How do you find $b_1$ and $b_2$? You define a loss first and use an optimization method to minimize the loss in an iterative way by updating $b_1$ and $b_2$.
In our example, a typical loss for this binary classification problem can be:
$$-\sum_{i=1}^{3}y_{i}log(\hat{y_i}) + (1-y_{i})log(1-\hat{y_i})$$
We don't know what $b_1$ and $b_2$ should be. Let us make a random guess say $b_1$ = 0.1 and $b_2$ = -0.03. Then what is our loss now?
$\hat{y}_1 = \frac{1}{(1+e^{-(0.1*50-0.03*160)})} = 0.549834 = 0.55$
$\hat{y}_2 = \frac{1}{(1+e^{-(0.1*60-0.03*170)})}= 0.7109495 = 0.71$
$\hat{y}_3 = \frac{1}{(1+e^{-(0.1*55-0.03*175)})}= 0.5621765 = 0.56$
so the loss is $-\log(1-0.55) - \log(1-0.71) - \log(0.56) = 2.6162$
Then your learning algorithm (e.g. gradient descent) will find a way to update $b_1$ and $b_2$ to decrease the loss.
What if $b_1=0.1$ and $b_2=-0.03$ are the final values of $b_1$ and $b_2$ (the output from gradient descent)? What is the accuracy then?
Let's assume that if $\hat{y} \ge 0.5$, we decide our prediction is female (1); otherwise it is male (0). Therefore, our algorithm predicts $y_1 = 1$, $y_2 = 1$ and $y_3 = 1$. What is our accuracy? We make wrong predictions on $y_1$ and $y_2$ and a correct one on $y_3$. So our accuracy is $\frac{1}{3}$ = 33.33%.
PS: In Amir's answer, back-propagation is said to be an optimization method in NNs. I think it is better treated as a way to compute the gradients for the weights in a NN. Common optimization methods in NNs are gradient descent and Adam.
|
Loss vs. Classification Accuracy in applied problems
They are two different metrics to evaluate your model's performance usually being used in different phases.
Loss is often used in the training process to find the "best" parameter values for your mode
|
52,130 |
Loss vs. Classification Accuracy in applied problems
|
A related discussion can be found here: What are the impacts of choosing different loss functions in classification to approximate 0-1 loss
As mentioned by @Tilefish Poele, classification accuracy corresponds to one type of loss, the 0-1 loss. Other types of loss exist for different purposes.
I will give one example of why we need more loss functions and why accuracy is useless in some cases. Think about imbalanced classification, such as fraud detection. 99.9% of the time, a transaction is not fraudulent, so simply predicting that no transaction is fraud will have very high accuracy. But such a system is useless. This is why weighted losses are sometimes needed.
For hinge loss and logistic loss, one way of thinking about them is that they are convex surrogates for the 0-1 loss, which is non-convex and hard to optimize. Logistic loss has a probabilistic interpretation (it corresponds to maximum likelihood estimation), but I am not aware of any such interpretation for hinge loss.
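To make the convex-surrogate point concrete, here is a small plotting-free sketch of my own: the 0-1, hinge and (rescaled) logistic losses evaluated as functions of the margin $y f(x)$ with $y \in \{-1, +1\}$; the two convex losses upper-bound the non-convex 0-1 step.
# 0-1 loss and its convex surrogates as functions of the margin y*f(x).
import numpy as np

def zero_one(margin):  return (margin <= 0).astype(float)
def hinge(margin):     return np.maximum(0.0, 1.0 - margin)
def logistic(margin):  return np.log2(1.0 + np.exp(-margin))   # scaled so it passes through (0, 1)

margins = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for name, fn in [("0-1", zero_one), ("hinge", hinge), ("logistic", logistic)]:
    print(name, fn(margins))   # hinge and logistic upper-bound the 0-1 loss at every margin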
|
Loss vs. Classification Accuracy in applied problems
|
A related discussion can be found here: What are the impacts of choosing different loss functions in classification to approximate 0-1 loss
As mentioned by @Tilefish Poele Classification Accuracy is o
|
Loss vs. Classification Accuracy in applied problems
A related discussion can be found here: What are the impacts of choosing different loss functions in classification to approximate 0-1 loss
As mentioned by @Tilefish Poele, classification accuracy corresponds to one type of loss, the 0-1 loss. Other types of loss exist for different purposes.
I will give one example of why we need more loss functions and why accuracy is useless in some cases. Think about imbalanced classification, such as fraud detection. 99.9% of the time, a transaction is not fraudulent, so simply predicting that no transaction is fraud will have very high accuracy. But such a system is useless. This is why weighted losses are sometimes needed.
For hinge loss and logistic loss, one way of thinking about them is that they are convex surrogates for the 0-1 loss, which is non-convex and hard to optimize. Logistic loss has a probabilistic interpretation (it corresponds to maximum likelihood estimation), but I am not aware of any such interpretation for hinge loss.
|
Loss vs. Classification Accuracy in applied problems
A related discussion can be found here: What are the impacts of choosing different loss functions in classification to approximate 0-1 loss
As mentioned by @Tilefish Poele Classification Accuracy is o
|
52,131 |
Ellipse formula from points
|
A straightforward way, especially when you expect the points to fall exactly on an ellipse (yet which works even when they don't), is to observe that an ellipse is the set of zeros of a second order polynomial
$$0 = P(x,y) = -1 + \beta_x\,x + \beta_y\,y + \beta_{xy}\,xy + \beta_{x^2}\,x^2 + \beta_{y^2}\,y^2$$
You can therefore estimate the coefficients from five or more points $(x_i,\,y_i)$ using least squares (without a constant term). The response variable is a vector of ones while the explanatory variables are $(x_i,\,y_i, \,x_iy_i,\,x_i^2,\,y_i^2)$.
Here are 60 points drawn with considerable error.
The fit is shown as a black ellipse. The center and axes of the true underlying ellipse are plotted for reference.
When the points are known to fall on an ellipse, you may use any five of the points to estimate its parameters. (When you work out the normal equations you will obtain an explicit formula for the ellipse in terms of those ten coordinates.) It's best to choose the points situated widely around the ellipse rather than clustered in one place.
This is the R code used to do the work. The three lines in the middle after "Estimate the parameters" illustrate the use of least squares to find the coefficients.
center <- c(1,2)
axis.main <- c(3,1)
axis.lengths <- c(4/3, 1/2)
sigma <- 1/20 # Error SD in each coordinate
n <- 60 # Number of points to generate
set.seed(17)
#
# Compute the axes.
#
axis.main <- axis.main / sqrt(crossprod(axis.main))
axis.aux <- c(-axis.main[2], axis.main[1]) * axis.lengths[2]
axis.main <- axis.main * axis.lengths[1]
axes <- cbind(axis.main, axis.aux)
#
# Generate points along the ellipse.
#
s <- seq(0, 2*pi, length.out=n+1)[-1]
s.c <- cos(s)
s.s <- sin(s)
x <- axis.main[1] * s.c + axis.aux[1] * s.s + center[1] + rnorm(n, sd=sigma)
y <- axis.main[2] * s.c + axis.aux[2] * s.s + center[2] + rnorm(n, sd=sigma)
#
# Estimate the parameters.
#
X <- as.data.frame(cbind(One=1, x, y, xy=x*y, x2=x^2, y2=y^2))
fit <- lm(One ~ . - 1, X)
beta.hat <- coef(fit)
#
# Plot the estimate, the point, and the original axes.
#
evaluate <- function(x, y, beta) {
if (missing(y)) {
y <- x[, 2]; x <- x[, 1]
}
as.vector(cbind(x, y, x*y, x^2, y^2) %*% beta - 1)
}
e.x <- diff(range(x)) / 40
e.y <- diff(range(y)) / 40
n.x <- 100
n.y <- 60
u <- seq(min(x)-e.x, max(x)+e.x, length.out=n.x)
v <- seq(min(y)-e.y, max(y)+e.y, length.out=n.y)
z <- matrix(evaluate(as.matrix(expand.grid(u, v)), beta=beta.hat), n.x)
contour(u, v, z, levels=0, lwd=2, xlab="x", ylab="y", asp=1)
arrows(center[1], center[2], axis.main[1]+center[1], axis.main[2]+center[2],
length=0.15, angle=15)
arrows(center[1], center[2], axis.aux[1]+center[1], axis.aux[2]+center[2],
length=0.15, angle=15)
points(center[1], center[2])
points(x,y, pch=19, col="Red")
|
Ellipse formula from points
|
A straightforward way, especially when you expect the points to fall exactly on an ellipse (yet which works even when they don't), is to observe that an ellipse is the set of zeros of a second order p
|
Ellipse formula from points
A straightforward way, especially when you expect the points to fall exactly on an ellipse (yet which works even when they don't), is to observe that an ellipse is the set of zeros of a second order polynomial
$$0 = P(x,y) = -1 + \beta_x\,x + \beta_y\,y + \beta_{xy}\,xy + \beta_{x^2}\,x^2 + \beta_{y^2}\,y^2$$
You can therefore estimate the coefficients from five or more points $(x_i,\,y_i)$ using least squares (without a constant term). The response variable is a vector of ones while the explanatory variables are $(x_i,\,y_i, \,x_iy_i,\,x_i^2,\,y_i^2)$.
Here are 60 points drawn with considerable error.
The fit is shown as a black ellipse. The center and axes of the true underlying ellipse are plotted for reference.
When the points are known to fall on an ellipse, you may use any five of the points to estimate its parameters. (When you work out the normal equations you will obtain an explicit formula for the ellipse in terms of those ten coordinates.) It's best to choose the points situated widely around the ellipse rather than clustered in one place.
This is the R code used to do the work. The three lines in the middle after "Estimate the parameters" illustrate the use of least squares to find the coefficients.
center <- c(1,2)
axis.main <- c(3,1)
axis.lengths <- c(4/3, 1/2)
sigma <- 1/20 # Error SD in each coordinate
n <- 60 # Number of points to generate
set.seed(17)
#
# Compute the axes.
#
axis.main <- axis.main / sqrt(crossprod(axis.main))
axis.aux <- c(-axis.main[2], axis.main[1]) * axis.lengths[2]
axis.main <- axis.main * axis.lengths[1]
axes <- cbind(axis.main, axis.aux)
#
# Generate points along the ellipse.
#
s <- seq(0, 2*pi, length.out=n+1)[-1]
s.c <- cos(s)
s.s <- sin(s)
x <- axis.main[1] * s.c + axis.aux[1] * s.s + center[1] + rnorm(n, sd=sigma)
y <- axis.main[2] * s.c + axis.aux[2] * s.s + center[2] + rnorm(n, sd=sigma)
#
# Estimate the parameters.
#
X <- as.data.frame(cbind(One=1, x, y, xy=x*y, x2=x^2, y2=y^2))
fit <- lm(One ~ . - 1, X)
beta.hat <- coef(fit)
#
# Plot the estimate, the points, and the original axes.
#
evaluate <- function(x, y, beta) {
if (missing(y)) {
y <- x[, 2]; x <- x[, 1]
}
as.vector(cbind(x, y, x*y, x^2, y^2) %*% beta - 1)
}
e.x <- diff(range(x)) / 40
e.y <- diff(range(y)) / 40
n.x <- 100
n.y <- 60
u <- seq(min(x)-e.x, max(x)+e.x, length.out=n.x)
v <- seq(min(y)-e.y, max(y)+e.y, length.out=n.y)
z <- matrix(evaluate(as.matrix(expand.grid(u, v)), beta=beta.hat), n.x)
contour(u, v, z, levels=0, lwd=2, xlab="x", ylab="y", asp=1)
arrows(center[1], center[2], axis.main[1]+center[1], axis.main[2]+center[2],
length=0.15, angle=15)
arrows(center[1], center[2], axis.aux[1]+center[1], axis.aux[2]+center[2],
length=0.15, angle=15)
points(center[1], center[2])
points(x,y, pch=19, col="Red")
|
Ellipse formula from points
A straightforward way, especially when you expect the points to fall exactly on an ellipse (yet which works even when they don't), is to observe that an ellipse is the set of zeros of a second order p
|
52,132 |
What is the connection (if any) and difference between logistic regression and survival analysis?
|
They are different categories of things - survival analysis is the analysis of data where time to a given event is the dependent variable. The given event may include death, failure of a machine, a criminal's (re)offending, or becoming ill, for example. It uses a number of techniques to analyse such data, including certain generalised linear models and, sometimes, logistic regression, when the purpose is to analyse whether specific variables influence the probability of an event.
Overall, survival analysis encompasses many techniques and methods to achieve different subordinate objectives, including tools for exploratory data analysis, distribution fitting and methods for designing experiments. Hence, I don't think you can meaningfully say 'build a model using survival analysis' rather than, perhaps 'build a model using Weibull regression/ Cox regression', as examples of tools closely associated with survival analysis.
Logistic regression, on the other hand, is a regression technique for analysing binary data. Hence, it is a single tool (but still a very powerful tool, useful in many contexts) rather than an overall category of analysis. It could possibly be described as one of the central tools of categorical data analysis, and it is categorical data analysis that sits in the same category of things as survival analysis.
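To make the contrast concrete, here is a minimal sketch of the two kinds of model calls in R; the data frame d, with a follow-up time time, an event indicator status and a covariate x, is hypothetical, and coxph() requires the survival package:
library(survival)
# survival analysis: time to event is the response, with censoring handled by Surv()
cox.fit <- coxph(Surv(time, status) ~ x, data = d)
# logistic regression: a plain binary outcome is the response, and time plays no role
logit.fit <- glm(status ~ x, family = binomial, data = d)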
|
What is the connection (if any) and difference between logistic regression and survival analysis?
|
They are different categories of things - survival analysis is the analysis of data where time to a given event is the dependent variable. The given event may include death, failure of a machine, a cr
|
What is the connection (if any) and difference between logistic regression and survival analysis?
They are different categories of things - survival analysis is the analysis of data where time to a given event is the dependent variable. The given event may include death, failure of a machine, a criminal's (re)offending, or becoming ill, for example. It uses a number of techniques to analyse such data, including certain generalised linear models and, sometimes, logistic regression, when the purpose is to analyse whether specific variables influence the probability of an event.
Overall, survival analysis encompasses many techniques and methods to achieve different subordinate objectives, including tools for exploratory data analysis, distribution fitting and methods for designing experiments. Hence, I don't think you can meaningfully say 'build a model using survival analysis' rather than, perhaps 'build a model using Weibull regression/ Cox regression', as examples of tools closely associated with survival analysis.
Logistic regression, on the other hand, is a regression technique for analysing binary data. Hence, it is a single tool (but still a very powerful tool, useful in many contexts) rather than an overall category of analysis. It could possibly be described as one of the central tools of categorical data analysis, and it is categorical data analysis that sits in the same category of things as survival analysis.
|
What is the connection (if any) and difference between logistic regression and survival analysis?
They are different categories of things - survival analysis is the analysis of data where time to a given event is the dependent variable. The given event may include death, failure of a machine, a cr
|
52,133 |
What is the connection (if any) and difference between logistic regression and survival analysis?
|
They have different dependent variables (for logistic 1/0 and for survival time to event).
So the short answer is no - you cannot compare the coefficients.
|
What is the connection (if any) and difference between logistic regression and survival analysis?
|
They have different dependent variables (for logistic 1/0 and for survival time to event).
So the short answer is no - you cannot compare the coefficients.
|
What is the connection (if any) and difference between logistic regression and survival analysis?
They have different dependent variables (for logistic 1/0 and for survival time to event).
So the short answer is no - you cannot compare the coefficients.
|
What is the connection (if any) and difference between logistic regression and survival analysis?
They have different dependent variables (for logistic 1/0 and for survival time to event).
So the short answer is no - you cannot compare the coefficients.
|
52,134 |
Probability for class in xgboost
|
predict_proba yields estimated probabilities that a sample is in class 1.
Note that the speaker in the comment is the author of xgboost, so this is the definitive answer on the subject.
|
Probability for class in xgboost
|
predict_proba yields estimated probabilities that a sample is in class 1.
Note that the speaker in the comment is the author of xgboost, so this is the definitive answer on the subject.
|
Probability for class in xgboost
predict_proba yields estimated probabilities that a sample is in class 1.
Note that the speaker in the comment is the author of xgboost, so this is the definitive answer on the subject.
|
Probability for class in xgboost
predict_proba yields estimated probabilities that a sample is in class 1.
Note that the speaker in the comment is the author of xgboost, so this is the definitive answer on the subject.
|
52,135 |
How to generate a sequence of timestamps in R?
|
Computers have different ways of storing time data. For example, R uses date-time classes POSIXlt and POSIXct. From the documentation
Class "POSIXct" represents the (signed) number of seconds since the
beginning of 1970 (in the UTC time zone) as a numeric vector.
So time is stored as a number of seconds
Sys.time()
## [1] "2017-02-03 10:34:35 CET"
as.numeric(Sys.time())
## [1] 1486114478
this means that if you want to sample timestamps, then you simply need to sample values from $0$ to $k$ (maximal number of seconds from the origin of choice), and then transform them to timestamps, e.g.
u <- runif(10, 0, 60) # "noise" to add or subtract from some timepoint
as.POSIXlt(u, origin = "2017-02-03 08:00:00") # add the sampled seconds to this origin (i.e. time 0)
## [1] "2017-02-03 09:00:44 CET" "2017-02-03 09:00:30 CET" "2017-02-03 09:00:06 CET" "2017-02-03 09:00:12 CET" "2017-02-03 09:00:36 CET"
## [6] "2017-02-03 09:00:16 CET" "2017-02-03 09:00:18 CET" "2017-02-03 09:00:34 CET" "2017-02-03 09:00:22 CET" "2017-02-03 09:00:35 CET"
Outside of R you also can follow such procedure by sampling some values and adding (or subtracting) them from some time-object like =NOW() in Excel or systime in databases etc.
Notice that this procedure enables you to sample from non-uniformly distributed time if you sample from different distribution, for example, normal distribution as in the example below.
hist(as.POSIXlt("2017-02-03 08:00:00") + rnorm(1e6, 0, 60*60), 100)
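Incidentally, if what you need is a regular (non-random) sequence of timestamps, as the title suggests, seq() works directly on date-time objects; a minimal sketch:
seq(from = as.POSIXct("2017-02-03 08:00:00"), by = "1 min", length.out = 10) # ten timestamps, one minute apart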
|
How to generate a sequence of timestamps in R?
|
Computers have different ways of storing time data. For example, R uses date-time classes POSIXlt and POSIXct. From the documentation
Class "POSIXct" represents the (signed) number of seconds since t
|
How to generate a sequence of timestamps in R?
Computers have different ways of storing time data. For example, R uses date-time classes POSIXlt and POSIXct. From the documentation
Class "POSIXct" represents the (signed) number of seconds since the
beginning of 1970 (in the UTC time zone) as a numeric vector.
So time is stored as a number of seconds
Sys.time()
## [1] "2017-02-03 10:34:35 CET"
as.numeric(Sys.time())
## [1] 1486114478
this means that if you want to sample timestamps, then you simply need to sample values from $0$ to $k$ (maximal number of seconds from the origin of choice), and then transform them to timestamps, e.g.
u <- runif(10, 0, 60) # "noise" to add or subtract from some timepoint
as.POSIXlt(u, origin = "2017-02-03 08:00:00") # add the sampled seconds to this origin (i.e. time 0)
## [1] "2017-02-03 09:00:44 CET" "2017-02-03 09:00:30 CET" "2017-02-03 09:00:06 CET" "2017-02-03 09:00:12 CET" "2017-02-03 09:00:36 CET"
## [6] "2017-02-03 09:00:16 CET" "2017-02-03 09:00:18 CET" "2017-02-03 09:00:34 CET" "2017-02-03 09:00:22 CET" "2017-02-03 09:00:35 CET"
Outside of R you also can follow such procedure by sampling some values and adding (or subtracting) them from some time-object like =NOW() in Excel or systime in databases etc.
Notice that this procedure enables you to sample from non-uniformly distributed time if you sample from different distribution, for example, normal distribution as in the example below.
hist(as.POSIXlt("2017-02-03 08:00:00") + rnorm(1e6, 0, 60*60), 100)
|
How to generate a sequence of timestamps in R?
Computers have different ways of storing time data. For example, R uses date-time classes POSIXlt and POSIXct. From the documentation
Class "POSIXct" represents the (signed) number of seconds since t
|
52,136 |
Proportionality assumption in Cox Regression Model
|
The Cox proportional hazards model can be described as follows:
$$h(t|X)=h_{0}(t)e^{\beta X}$$
where $h(t)$ is the hazard rate at time $t$, $h_{0}(t)$ is the baseline hazard rate at time $t$, $\beta$ is a vector of coefficients and $X$ is a vector of covariates.
As you will know, the Cox model is a semi-parametric model in that it is only partially defined parametrically. Essentially, the covariate part assumes a functional form whereas the baseline part has no parametric functional form (its form is that of a step function).
Additionally, the survival curve of the Cox model is:
$$\begin{align}
S(t|X)&=\text{exp}\bigg(-\int_{0}^{t}h_{0}(t)e^{\beta X}\,dt\bigg)\\
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad&\overset{*}{=}\text{exp}\big(-H_{0}(t)\big)^{\text{exp}(\beta X)}\quad\quad\quad ^{*}\bigg(H_{0}(t)=\int_{0}^{t}h_{0}(t)\,dt\bigg)\\
&\overset{**}{=}S_{0}(t)^{\text{exp}(\beta X)}\quad\quad\quad\quad\quad\quad\,\,\,\, ^{**}\Big(S_{0}(t)=\text{exp}\big(-H_{0}(t)\big)\Big)\\\\
\end{align}$$
where $S(t)$ is the survival function at time $t$, $S_{0}(t)$ is the baseline survival function at time $t$ and $H_{0}(t)$ is the baseline cumulative hazard function at time $t$.
The proportionality assumption can be best illustrated as follows, let's assume there is only 1 covariate which is binary ($X=\{0,1\}$):
$$\begin{align}
\frac{h(t|X=1)}{h(t|X=0)}&=\frac{h_{0}(t)\text{exp}(\beta(1))}{h_{0}(t)\text{exp}(\beta(0))}\\
&=\text{exp}(\beta(1-0))\\
&=\text{exp}(\beta)
\end{align}$$
which is constant. Thus, the relative risk of two individuals with different covariate values is independent of time or constant at all times. This is an inherent assumption of the Cox model (and any other proportional hazards model).
Given the assumption, it is important to check the results of any fitting to ensure the underlying assumption isn't violated. If we take the functional form of the survival function defined above and apply the following transformation, we arrive at:
$$\text{log}(-\text{log}(S(t)))=\text{log}(-\text{log}(S_{0}(t)))+\beta X$$
Therefore, we know that if the proportionality assumption holds, the difference between the curves with covariate $X=\{0,1\}$ should be constant by amount $\beta$. Thus, the two curves will be parallel but one shifted up or down by $\beta$.
The following is an example of what you might consider a covariate satisfying proportional hazards.
The following is an example where it is not so evident if the proportional hazards assumption is satisfied by the covariate.
There are many other ways to assess whether the assumption is satisfied with a lot of literature available (@IWS points you in the right direction in his answer). The above example is just a nice way to illustrate the concept and conveys the point easily.
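If you want to produce such a plot yourself, here is a minimal sketch in R using the survival package (the lung data shipped with the package serve purely as an example, with sex as the binary covariate):
library(survival)
km <- survfit(Surv(time, status) ~ sex, data = lung)
# plots log(-log(S(t))) against log(t); roughly parallel curves are consistent
# with proportional hazards for this covariate
plot(km, fun = "cloglog", col = c(1, 2), xlab = "time (log scale)", ylab = "log(-log(S(t)))")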
|
Proportionality assumption in Cox Regression Model
|
The Cox proportional hazards model can be described as follows:
$$h(t|X)=h_{0}(t)e^{\beta X}$$
where $h(t)$ is the hazard rate at time $t$, $h_{0}(t)$ is the baseline hazard rate at time $t$, $\beta$
|
Proportionality assumption in Cox Regression Model
The Cox proportional hazards model can be described as follows:
$$h(t|X)=h_{0}(t)e^{\beta X}$$
where $h(t)$ is the hazard rate at time $t$, $h_{0}(t)$ is the baseline hazard rate at time $t$, $\beta$ is a vector of coefficients and $X$ is a vector of covariates.
As you will know, the Cox model is a semi-parametric model in that it is only partially defined parametrically. Essentially, the covariate part assumes a functional form whereas the baseline part has no parametric functional form (its form is that of a step function).
Additionally, the survival curve of the Cox model is:
$$\begin{align}
S(t|X)&=\text{exp}\bigg(-\int_{0}^{t}h_{0}(t)e^{\beta X}\,dt\bigg)\\
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad&\overset{*}{=}\text{exp}\big(-H_{0}(t)\big)^{\text{exp}(\beta X)}\quad\quad\quad ^{*}\bigg(H_{0}(t)=\int_{0}^{t}h_{0}(t)\,dt\bigg)\\
&\overset{**}{=}S_{0}(t)^{\text{exp}(\beta X)}\quad\quad\quad\quad\quad\quad\,\,\,\, ^{**}\Big(S_{0}(t)=\text{exp}\big(-H_{0}(t)\big)\Big)\\\\
\end{align}$$
where $S(t)$ is the survival function at time $t$, $S_{0}(t)$ is the baseline survival function at time $t$ and $H_{0}(t)$ is the baseline cumulative hazard function at time $t$.
The proportionality assumption can be best illustrated as follows, let's assume there is only 1 covariate which is binary ($X=\{0,1\}$):
$$\begin{align}
\frac{h(t|X=1)}{h(t|X=0)}&=\frac{h_{0}(t)\text{exp}(\beta(1))}{h_{0}(t)\text{exp}(\beta(0))}\\
&=\text{exp}(\beta(1-0))\\
&=\text{exp}(\beta)
\end{align}$$
which is constant. Thus, the relative risk of two individuals with different covariate values is independent of time or constant at all times. This is an inherent assumption of the Cox model (and any other proportional hazards model).
Given the assumption, it is important to check the results of any fitting to ensure the underlying assumption isn't violated. If we take the functional form of the survival function defined above and apply the following transformation, we arrive at:
$$\text{log}(-\text{log}(S(t)))=\text{log}(-\text{log}(S_{0}(t)))+\beta X$$
Therefore, we know that if the proportionality assumption holds, the difference between the curves with covariate $X=\{0,1\}$ should be constant by amount $\beta$. Thus, the two curves will be parallel but one shifted up or down by $\beta$.
The following is an example of what you might consider a covariate satisfying proportional hazards.
The following is an example where it is not so evident if the proportional hazards assumption is satisfied by the covariate.
There are many other ways to assess whether the assumption is satisfied with a lot of literature available (@IWS points you in the right direction in his answer). The above example is just a nice way to illustrate the concept and conveys the point easily.
|
Proportionality assumption in Cox Regression Model
The Cox proportional hazards model can be described as follows:
$$h(t|X)=h_{0}(t)e^{\beta X}$$
where $h(t)$ is the hazard rate at time $t$, $h_{0}(t)$ is the baseline hazard rate at time $t$, $\beta$
|
52,137 |
Proportionality assumption in Cox Regression Model
|
Basically, if the effect is proportional, this means the effect is constant over time. In other words: the hazard rate ratio is constant over time. Simple methods of checking this are graphical, through plotting Schoenfeld residuals, or by adding an interaction with time to your Cox model and checking whether it significantly improves your model. Note that this last option does require some additional settings in the analytic software you are using and is not an interaction of the coefficient with the eventual survival time.
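A minimal sketch of the Schoenfeld-residual check in R, using the survival package and its lung example data (the covariates here are purely illustrative):
library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
zp <- cox.zph(fit)   # tests of the proportional hazards assumption, per covariate and globally
zp
plot(zp)             # smoothed scaled Schoenfeld residuals against time; a flat trend supports proportionality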
|
Proportionality assumption in Cox Regression Model
|
Basically, if the effect is proportional, this means the effect is constant over time. In other words: the hazard rate ratio is constant over time. Simple methods of checking this are graphically thro
|
Proportionality assumption in Cox Regression Model
Basically, if the effect is proportional, this means the effect is constant over time. In other words: the hazard rate ratio is constant over time. Simple methods of checking this are graphical, through plotting Schoenfeld residuals, or by adding an interaction with time to your Cox model and checking whether it significantly improves your model. Note that this last option does require some additional settings in the analytic software you are using and is not an interaction of the coefficient with the eventual survival time.
|
Proportionality assumption in Cox Regression Model
Basically, if the effect is proportional, this means the effect is constant over time. In other words: the hazard rate ratio is constant over time. Simple methods of checking this are graphically thro
|
52,138 |
Order of operations in statistics
|
Everything to the left of the $\mid$ is the event whose conditional probability is being talked about; everything to the right of the $\mid$
is the conditioning event, the one that we assume has occurred. Commas
generally mean intersection. Thus, $P(A\mid B, C)$ is the same
as $P(A\mid B\cap C)$ or $P(A\mid (B\cap C))$, the conditional probability of the event $A$
conditioned on the occurrence of the event $B\cap C$ or
$(B\cap C)$ depending on how nitpicky you are about parentheses.
In fact, it is best to be nitpicky about parentheses when one is
starting to learn the subject (or when one is teaching the subject
to beginners) because many beginners think of
$P(A \mid B \cap C)$ as meaning "the probability that
both the events (i) $A$ given $B$, and (ii) $C$, have occurred."
There is no event called "$A$ given $B$" that we can intersect
with $C$; there is only the event $A$,
and we can talk of the unconditional probability that $A$ occurred,
or of the conditional probability that $A$ occurred. Here, we
are considering the conditional probability of $A$, and the
conditioning event in this instance is $B\cap C$.
That vertical bar
is a bright shining line that separates the conditioned event from
the conditioning event.
|
Order of operations in statistics
|
Everything to the left of the $\mid$ is the event whose conditional probability is being talked about; everything to the right of the $\mid$
is the conditioning event, the one that we assume has occu
|
Order of operations in statistics
Everything to the left of the $\mid$ is the event whose conditional probability is being talked about; everything to the right of the $\mid$
is the conditioning event, the one that we assume has occurred. Commas
generally mean intersection. Thus, $P(A\mid B, C)$ is the same
as $P(A\mid B\cap C)$ or $P(A\mid (B\cap C))$, the conditional probability of the event $A$
conditioned on the occurrence of the event $B\cap C$ or
$(B\cap C)$ depending on how nitpicky you are about parentheses.
In fact, it is best to be nitpicky about parentheses when one is
starting to learn the subject (or when one is teaching the subject
to beginners) because many beginners think of
$P(A \mid B \cap C)$ as meaning "the probability that
both the events (i) $A$ given $B$, and (ii) $C$, have occurred."
There is no event called "$A$ given $B$" that we can intersect
with $C$; there is only the event $A$,
and we can talk of the unconditional probability that $A$ occurred,
or of the conditional probability that $A$ occurred. Here, we
are considering the conditional probability of $A$, and the
conditioning event in this instance is $B\cap C$.
That vertical bar
is a bright shining line that separates the conditioned event from
the conditioning event.
|
Order of operations in statistics
Everything to the left of the $\mid$ is the event whose conditional probability is being talked about; everything to the right of the $\mid$
is the conditioning event, the one that we assume has occu
|
52,139 |
Order of operations in statistics
|
Order doesn't matter
Order doesn't matter in this setting, so there isn't any order of operations to worry about. Explicitly:
$$P(a \mid b, c) = P(a \mid c, b)$$
This is because the AND logical concept doesn't depend on order. Consider the statement "It is Wednesday AND I am a student". This is an equivalent logical statement to "I am a student AND it is Wednesday"
Let's say $A$ represents whether or not I am currently in the library. Then $P(A \mid \text{student AND Wednesday})$ is intuitively no different from $P(A \mid \text{Wednesday AND student})$
|
Order of operations in statistics
|
Order doesn't matter
Order doesn't matter in this setting, so there isn't any order of operations to worry about. Explicitly:
$$P(a \mid b, c) = P(a \mid c, b)$$
This is because the AND logical conce
|
Order of operations in statistics
Order doesn't matter
Order doesn't matter in this setting, so there isn't any order of operations to worry about. Explicitly:
$$P(a \mid b, c) = P(a \mid c, b)$$
This is because the AND logical concept doesn't depend on order. Consider the statement "It is Wednesday AND I am a student". This is an equivalent logical statement to "I am a student AND it is Wednesday"
Let's say $A$ represents whether or not I am currently in the library. Then $P(A \mid \text{student AND Wednesday})$ is intuitively no different from $P(A \mid \text{Wednesday AND student})$
|
Order of operations in statistics
Order doesn't matter
Order doesn't matter in this setting, so there isn't any order of operations to worry about. Explicitly:
$$P(a \mid b, c) = P(a \mid c, b)$$
This is because the AND logical conce
|
52,140 |
Why is this definition of the Central Limit Theorem not incorrect?
|
Note that in your expression
$$ \lim_{n\to\infty} \Pr\bigg[\frac{\bar{X}_n-\mu}{\sigma/\sqrt n} \le z\bigg] $$
There is nowhere a reference to $\lim_{n\to\infty}\bar{X}_n$. It doesn't matter what this last part converges to - you're working with a different expression. It seems you were trying to do something like
$$ \lim_{n\to\infty} \Pr\bigg[\frac{\bar{X}_n-\mu}{\sigma/\sqrt n} \le z\bigg] = \Pr\bigg[\frac{\lim_{n\to\infty}\bar{X}_n-\mu}{\sigma/\sqrt n} \le z\bigg] = \Pr\bigg[\frac{\mu-\mu}{\sigma/\sqrt n} \le z\bigg] = \mathrm{Pr}[0\le z]$$
But you can't do that, just like in normal calculus you can't do something like
$$\lim_{t\to\infty}\left(\frac1t\cdot t\right)=\left(\lim_{t\to\infty}\frac1t\right)t=0$$
(the limit on the left is of course $1$, not $0$).
In reality, it's true that as $n$ increases, the difference $\bar{X}_n-\mu$ becomes smaller, but you also multiply by $\sqrt{n}$ which gets larger and offsets this. The combined effect is that as $n$ increases, $\frac{\bar{X}_n-\mu}{\sigma/\sqrt n}$ becomes closer to a standard normal distribution.
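A small simulation makes this concrete; the sketch below repeatedly draws samples from a markedly non-normal population (Exponential with mean and standard deviation 1) and histograms the standardized sample means:
set.seed(1)
n <- 1000                            # sample size
z <- replicate(5000, {               # 5000 realisations of the standardized sample mean
  x <- rexp(n, rate = 1)             # population with mu = 1 and sigma = 1
  sqrt(n) * (mean(x) - 1) / 1
})
hist(z, breaks = 50, freq = FALSE, main = "", xlab = "standardized sample mean")
curve(dnorm(x), add = TRUE, lwd = 2) # standard normal density for comparison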
|
Why is this definition of the Central Limit Theorem not incorrect?
|
Note that in your expression
$$ \lim_{n\to\infty} \Pr\bigg[\frac{\bar{X}_n-\mu}{\sigma/\sqrt n} \le z\bigg] $$
There is nowhere a reference to $\lim_{n\to\infty}\bar{X}_n$. It doesn't matter what this
|
Why is this definition of the Central Limit Theorem not incorrect?
Note that in your expression
$$ \lim_{n\to\infty} \Pr\bigg[\frac{\bar{X}_n-\mu}{\sigma/\sqrt n} \le z\bigg] $$
There is nowhere a reference to $\lim_{n\to\infty}\bar{X}_n$. It doesn't matter what this last part converges to - you're working with a different expression. It seems you were trying to do something like
$$ \lim_{n\to\infty} \Pr\bigg[\frac{\bar{X}_n-\mu}{\sigma/\sqrt n} \le z\bigg] = \Pr\bigg[\frac{\lim_{n\to\infty}\bar{X}_n-\mu}{\sigma/\sqrt n} \le z\bigg] = \Pr\bigg[\frac{\mu-\mu}{\sigma/\sqrt n} \le z\bigg] = \mathrm{Pr}[0\le z]$$
But you can't do that, just like in normal calculus you can't do something like
$$\lim_{t\to\infty}\left(\frac1t\cdot t\right)=\left(\lim_{t\to\infty}\frac1t\right)t=0$$
(the limit on the left is of course $1$, not $0$).
In reality, it's true that as $n$ increases, the difference $\bar{X}_n-\mu$ becomes smaller, but you also multiply by $\sqrt{n}$ which gets larger and offsets this. The combined effect is that as $n$ increases, $\frac{\bar{X}_n-\mu}{\sigma/\sqrt n}$ becomes closer to a standard normal distribution.
|
Why is this definition of the Central Limit Theorem not incorrect?
Note that in your expression
$$ \lim_{n\to\infty} \Pr\bigg[\frac{\bar{X}_n-\mu}{\sigma/\sqrt n} \le z\bigg] $$
There is nowhere a reference to $\lim_{n\to\infty}\bar{X}_n$. It doesn't matter what this
|
52,141 |
Why is this definition of the Central Limit Theorem not incorrect?
|
$\bar{X}$ is defined as $\frac{1}{n}\sum_{i=1}^nX_i$, your definition misses an average, and summation should start at $i=1$.
$\mu$ is the population mean, not the sample mean ($\bar{X}$ is). Likewise, $\sigma$ is the population standard deviation.
The crucial difference to the LLN is that the difference between $\bar{X}$ and $\mu$ (which indeed vanishes by the LLN) is scaled by $\sqrt{n}$, which diverges. So rewrite (1) as $\sqrt{n}(\bar{X}-\mu)/\sigma$: it turns out (a proper proof would be too long here) that this product of two factors, one of which tends to 0 and the other to infinity, still has a proper, non-degenerate limiting distribution under suitable assumptions.
|
Why is this definition of the Central Limit Theorem not incorrect?
|
$\bar{X}$ is defined as $\frac{1}{n}\sum_{i=1}^nX_i$, your definition misses an average, and summation should start at $i=1$.
$\mu$ is the population mean, not the sample mean ($\bar{X}$ is). Likewise
|
Why is this definition of the Central Limit Theorem not incorrect?
$\bar{X}$ is defined as $\frac{1}{n}\sum_{i=1}^nX_i$, your definition misses an average, and summation should start at $i=1$.
$\mu$ is the population mean, not the sample mean ($\bar{X}$ is). Likewise, $\sigma$ is the population standard deviation.
The crucial difference to the LLN is that the difference between $\bar{X}$ and $\mu$ (which indeed vanishes by the LLN) is scaled by $\sqrt{n}$, which diverges. So rewrite (1) as $\sqrt{n}(\bar{X}-\mu)/\sigma$: it turns out (a proper proof would be too long here) that this product of two factors, one of which tends to 0 and the other to infinity, still has a proper, non-degenerate limiting distribution under suitable assumptions.
|
Why is this definition of the Central Limit Theorem not incorrect?
$\bar{X}$ is defined as $\frac{1}{n}\sum_{i=1}^nX_i$, your definition misses an average, and summation should start at $i=1$.
$\mu$ is the population mean, not the sample mean ($\bar{X}$ is). Likewise
|
52,142 |
Likelihood Ratio Test for the variance of a normal distribution
|
I don't think your derivation of the likelihood ratio test is correct. Let's start from the beginning. I will write everything in terms of the variance since this way we can use some known results about normal distributions. This does not change the nature of the problem either.
We wish to test
$$ H _0 : \sigma^2 = \sigma_0^2 \quad \text{vs} \quad \sigma^2 \neq \sigma_0^2 $$
for a normal model, i.e. $X\sim N\left( \mu, \sigma^2 \right)$. Let $\omega$ denote the restricted set of parameters and $\Omega$ the unrestricted one. It is then easy to see that under $H_0$,
$$\omega = \left\{ -\infty<\mu <\infty, \ \sigma^2 = \sigma^2_0 \right\}$$
while
$$\Omega = \left\{ -\infty<\mu <\infty, \sigma^2 > 0 \right\} $$
Note that the second set is much larger since we are also allowed to vary $\sigma^2$. In fact, we can maximize in two dimensions under $\Omega$. Assuming then we have a sample of iid observations we will be looking at the quotient
$$ L = \frac{ \sup_{\mu, \sigma^2 \in \omega} f \left( \mathbf{x}, \mu, \sigma^2 \right) }{\sup_{\mu, \sigma^2 \in \Omega} f \left( \mathbf{x}, \mu, \sigma^2 \right)} $$
and we will be rejecting the null hypothesis for low values. In most cases, only an asymptotic rejection rule may be obtained from this but since we are dealing with a normal distribution, the problem becomes quite tractable. So let's maximize and see what we got.
In $\omega$ you can verify that the optimization yields $\hat{\mu} = \bar{x}$ and $\hat{\sigma}^2 = \sigma^2_0$, no room to maneuver here, while in $\Omega$ we get the regular mle solution, namely $\hat{\mu} = \bar{x}$ and $\hat{\sigma}^2 = n^{-1} \sum_{i=1}^n \left(x_i - \bar{x} \right)^2$. Insert these values into the likelihood ratio to obtain the rejection rule
$$ \frac{ \left( \frac{1}{\sigma_0^2} \right) ^{n/2} \exp \left\{ - \frac{1}{2\sigma^2_0} \sum_{i=1}^n \left(x_i - \bar{x} \right)^2 \right\}} {\left( \frac{1}{ \hat {\sigma}^2} \right) ^{n/2} \exp\left\{-\frac{n}{2} \right\} } \leq c \tag{1}$$
which after merging constants and simplifying is equivalent to
$$ \left( \frac{\hat{\sigma}^2}{\sigma_0^2} \right)^{n/2} \exp\left\{ - \frac{n}{2} \frac{\hat{\sigma}^2}{\sigma_0^2} \right\} \leq k $$
Thus we are left with a function $f(x) = x^c e^{-cx}$ which is unimodal with maximum at $x=1$. Here is what it looks like
From this, we may conclude that the null hypothesis will be rejected for too small or too large values of $x$. But $x$ is just $\frac{\hat{\sigma}^2}{\sigma_0^2}$, hence the corresponding rejection regions are
$$\frac{\hat{\sigma}^2}{\sigma_0^2} \leq k_1 \quad \text{or} \quad \frac{\hat{\sigma}^2}{\sigma_0^2} \geq k_2. $$
All that remains now is to determine the distribution of this quantity, under $H_0$, i.e. for $\sigma_0^2 = \sigma^2$. Recalling that for normal models
$$\frac{ \left(n-1\right) S^2}{\sigma^2} \sim \chi^2 (n-1)$$
we can find quantiles from the $\chi^2$ distribution such that we have a 5% level test, which would then call us to reject $H_0$ if
$$ \frac{ \left(n-1\right) S^2}{\sigma_0 ^2} < \chi^2 _{0.025} (n-1) \quad \text{or} \quad \frac{ \left(n-1\right) S^2}{\sigma_0 ^2} > \chi^2 _{0.975} (n-1) $$
By the way, the result is exact so you don't need the asymptotics. It is often the case that such results hold exactly for the normal distribution but you would need the asymptotic $\chi^2$ otherwise.
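As a quick illustration, the exact test derived above is easy to carry out directly in R; the data here are simulated only for the example, and sigma0_sq plays the role of $\sigma_0^2$:
set.seed(123)
x <- rnorm(30, mean = 5, sd = 2)             # simulated sample
sigma0_sq <- 4                               # hypothesized variance under H0
n <- length(x)
stat <- (n - 1) * var(x) / sigma0_sq         # the test statistic (n-1)S^2 / sigma_0^2
crit <- qchisq(c(0.025, 0.975), df = n - 1)  # chi-square critical values
stat < crit[1] | stat > crit[2]              # TRUE means reject H0 at the 5% level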
Hope this helps.
|
Likelihood Ratio Test for the variance of a normal distribution
|
I don't think your derivation of the likelihood ratio test is correct. Let's start from the beginning. I will write everything in terms of the variance since this way we can use some known results abo
|
Likelihood Ratio Test for the variance of a normal distribution
I don't think your derivation of the likelihood ratio test is correct. Let's start from the beginning. I will write everything in terms of the variance since this way we can use some known results about normal distributions. This does not change the nature of the problem either.
We wish to test
$$ H _0 : \sigma^2 = \sigma_0^2 \quad \text{vs} \quad \sigma^2 \neq \sigma_0^2 $$
for a normal model, i.e. $X\sim N\left( \mu, \sigma^2 \right)$. Let $\omega$ denote the restricted set of parameters and $\Omega$ the unrestricted one. It is then easy to see that under $H_0$,
$$\omega = \left\{ -\infty<\mu <\infty, \ \sigma^2 = \sigma^2_0 \right\}$$
while
$$\Omega = \left\{ -\infty<\mu <\infty, \sigma^2 > 0 \right\} $$
Note that the second set is much larger since we are also allowed to vary $\sigma^2$. In fact, we can maximize in two dimensions under $\Omega$. Assuming then we have a sample of iid observations we will be looking at the quotient
$$ L = \frac{ \sup_{\mu, \sigma^2 \in \omega} f \left( \mathbf{x}, \mu, \sigma^2 \right) }{\sup_{\mu, \sigma^2 \in \Omega} f \left( \mathbf{x}, \mu, \sigma^2 \right)} $$
and we will be rejecting the null hypothesis for low values. In most cases, only an asymptotic rejection rule may be obtained from this but since we are dealing with a normal distribution, the problem becomes quite tractable. So let's maximize and see what we got.
In $\omega$ you can verify that the optimization yields $\hat{\mu} = \bar{x}$ and $\hat{\sigma}^2 = \sigma^2_0$, no room to maneuver here, while in $\Omega$ we get the regular mle solution, namely $\hat{\mu} = \bar{x}$ and $\hat{\sigma}^2 = n^{-1} \sum_{i=1}^n \left(x_i - \bar{x} \right)^2$. Insert these values into the likelihood ratio to obtain the rejection rule
$$ \frac{ \left( \frac{1}{\sigma_0^2} \right) ^{n/2} \exp \left\{ - \frac{1}{2\sigma^2_0} \sum_{i=1}^n \left(x_i - \bar{x} \right)^2 \right\}} {\left( \frac{1}{ \hat {\sigma}^2} \right) ^{n/2} \exp\left\{-\frac{n}{2} \right\} } \leq c \tag{1}$$
which after merging constants and simplifying is equivalent to
$$ \left( \frac{\hat{\sigma}^2}{\sigma_0^2} \right)^{n/2} \exp\left\{ - \frac{n}{2} \frac{\hat{\sigma}^2}{\sigma_0^2} \right\} \leq k $$
Thus we are left with a function $f(x) = x^c e^{-cx}$ which is unimodal with maximum at $x=1$. Here is what it looks like
From this, we may conclude that the null hypothesis will be rejected for too small or too large values of $x$. But $x$ is just $\frac{\hat{\sigma}^2}{\sigma_0^2}$, hence the corresponding rejection regions are
$$\frac{\hat{\sigma}^2}{\sigma_0^2} \leq k_1 \quad \text{or} \quad \frac{\hat{\sigma}^2}{\sigma_0^2} \geq k_2. $$
All that remains now is to determine the distribution of this quantity, under $H_0$, i.e. for $\sigma_0^2 = \sigma^2$. Recalling that for normal models
$$\frac{ \left(n-1\right) S^2}{\sigma^2} \sim \chi^2 (n-1)$$
we can find quantiles from the $\chi^2$ distribution such that we have a 5% level test, which would then call us to reject $H_0$ if
$$ \frac{ \left(n-1\right) S^2}{\sigma_0 ^2} < \chi^2 _{0.025} (n-1) \quad \text{or} \quad \frac{ \left(n-1\right) S^2}{\sigma_0 ^2} > \chi^2 _{0.975} (n-1) $$
By the way, the result is exact so you don't need the asymptotics. It is often the case that such results hold exactly for the normal distribution but you would need the asymptotic $\chi^2$ otherwise.
Hope this helps.
|
Likelihood Ratio Test for the variance of a normal distribution
I don't think your derivation of the likelihood ratio test is correct. Let's start from the beginning. I will write everything in terms of the variance since this way we can use some known results abo
|
52,143 |
How to obtain Tukey Table in R?
|
Here's part of the table you linked to:
The first few rows are obtained by:
> qtukey(p = 0.95, nmeans = 2:10, df = 5)
[1] 3.635351 4.601725 5.218325 5.673125 6.032903 6.329901 6.582301 6.801398
[9] 6.994698
> qtukey(p = 0.99, nmeans = 2:10, df = 5)
[1] 5.702311 6.975727 7.804156 8.421495 8.913107 9.320875 9.668681
[8] 9.971483 10.239281
> qtukey(p = 0.95, nmeans = 2:10, df = 6)
[1] 3.460456 4.339195 4.895599 5.304891 5.628353 5.895309 6.122202 6.319211
[9] 6.493085
|
How to obtain Tukey Table in R?
|
Here's part of the table you linked to:
The first few rows are obtained by:
> qtukey(p = 0.95, nmeans = 2:10, df = 5)
[1] 3.635351 4.601725 5.218325 5.673125 6.032903 6.329901 6.582301 6.801398
[9] 6
|
How to obtain Tukey Table in R?
Here's part of the table you linked to:
The first few rows are obtained by:
> qtukey(p = 0.95, nmeans = 2:10, df = 5)
[1] 3.635351 4.601725 5.218325 5.673125 6.032903 6.329901 6.582301 6.801398
[9] 6.994698
> qtukey(p = 0.99, nmeans = 2:10, df = 5)
[1] 5.702311 6.975727 7.804156 8.421495 8.913107 9.320875 9.668681
[8] 9.971483 10.239281
> qtukey(p = 0.95, nmeans = 2:10, df = 6)
[1] 3.460456 4.339195 4.895599 5.304891 5.628353 5.895309 6.122202 6.319211
[9] 6.493085
|
How to obtain Tukey Table in R?
Here's part of the table you linked to:
The first few rows are obtained by:
> qtukey(p = 0.95, nmeans = 2:10, df = 5)
[1] 3.635351 4.601725 5.218325 5.673125 6.032903 6.329901 6.582301 6.801398
[9] 6
|
52,144 |
How to obtain Tukey Table in R?
|
Here is a way to generate QTable into a data frame.
You can change the grid limits according to your needs.
QTable <- expand.grid(alpha=c(0.01,0.05),
groups=seq(2,10,1),
df=seq(2,120,1))
QTable$QVal=qtukey(1-QTable$alpha,QTable$groups,df=QTable$df)
head(QTable)
alpha groups df QVal
1 0.01 2 2 13.902105
2 0.05 2 2 6.079637
3 0.01 3 2 19.015496
4 0.05 3 2 8.330783
5 0.01 4 2 22.563706
6 0.05 4 2 9.799011
|
How to obtain Tukey Table in R?
|
Here is a way to generate QTable into a data frame.
You can change the grid limits according to your needs.
QTable <- expand.grid(alpha=c(0.01,0.05),
groups=seq(2,10,1),
|
How to obtain Tukey Table in R?
Here is a way to generate QTable into a data frame.
You can change the grid limits according to your needs.
QTable <- expand.grid(alpha=c(0.01,0.05),
groups=seq(2,10,1),
df=seq(2,120,1))
QTable$QVal=qtukey(1-QTable$alpha,QTable$groups,df=QTable$df)
head(QTable)
alpha groups df QVal
1 0.01 2 2 13.902105
2 0.05 2 2 6.079637
3 0.01 3 2 19.015496
4 0.05 3 2 8.330783
5 0.01 4 2 22.563706
6 0.05 4 2 9.799011
|
How to obtain Tukey Table in R?
Here is a way to generate QTable into a data frame.
You can change the grid limits according to your needs.
QTable <- expand.grid(alpha=c(0.01,0.05),
groups=seq(2,10,1),
|
52,145 |
How to obtain Tukey Table in R?
|
For explanatory purposes, I tried to make up a function which provides a full Tukey table; see my comments inside. I still think that the R help is misleading in this case:
QTable<-function(dfrange=10,nurange=20,alpha=0.05,digs=3){
ROWS<-dfrange
COLS<-nurange
tabl<-matrix(nrow=ROWS,ncol=COLS)
for(a in 2:COLS){
tabl[1,a]=a
}
for(b in 2:ROWS){
tabl[b,1]=b
}
for(i in 2:ROWS){
for(j in 2:COLS){
tabl[i,j]<-qtukey(alpha,j,i,nranges=1,lower.tail=FALSE)
#R has a wrong description for the parameter 'nmeans',
#To get a correct result, always set 'nranges'=1, from the description #of R:
#i=n-nu;j=nu->n=i+j;nu=j->number of groups nu=j;
#Each group has n/nu elements, which is (i+j)/j
#The above interpretation is wrong, the parameter 'nmeans'
#should be regarded as number of groups when 'nranges'=1
tabl[i,j]<-round(tabl[i,j],digs)
}
}
tmp<-as.data.frame(tabl)
colnames(tmp)<-as.character(tmp[1,])
tmp<-tmp[-1,]
rownames(tmp)<-as.character(tmp[,1])
tmp<-tmp[,-1]
message("significant level=",alpha)
message("nu=1:",nurange,", df=1:",dfrange)
message("Rows are the value sequence of df;Columns are the nu sequence.")
print(tmp)
}
QTable(dfrange=20,nurange=8,alpha=0.01)
|
How to obtain Tukey Table in R?
|
For explanatory purposes, I tried to make up a function which provides a full Tukey table; see my comments inside. I still think that the R help is misleading in this case:
QTable<-function(dfrange=10,n
|
How to obtain Tukey Table in R?
For explanatory purposes, I tried to make up a function which provides a full Tukey table; see my comments inside. I still think that the R help is misleading in this case:
QTable<-function(dfrange=10,nurange=20,alpha=0.05,digs=3){
ROWS<-dfrange
COLS<-nurange
tabl<-matrix(nrow=ROWS,ncol=COLS)
for(a in 2:COLS){
tabl[1,a]=a
}
for(b in 2:ROWS){
tabl[b,1]=b
}
for(i in 2:ROWS){
for(j in 2:COLS){
tabl[i,j]<-qtukey(alpha,j,i,nranges=1,lower.tail=FALSE)
#R has a wrong description for the parameter 'nmeans',
#To get a correct result, always set 'nranges'=1, from the description #of R:
#i=n-nu;j=nu->n=i+j;nu=j->number of groups nu=j;
#Each group has n/nu elements, which is (i+j)/j
#The above interpretation is wrong, the parameter 'nmeans'
#should be regarded as number of groups when 'nranges'=1
tabl[i,j]<-round(tabl[i,j],digs)
}
}
tmp<-as.data.frame(tabl)
colnames(tmp)<-as.character(tmp[1,])
tmp<-tmp[-1,]
rownames(tmp)<-as.character(tmp[,1])
tmp<-tmp[,-1]
message("significant level=",alpha)
message("nu=1:",nurange,", df=1:",dfrange)
message("Rows are the value sequence of df;Columns are the nu sequence.")
print(tmp)
}
QTable(dfrange=20,nurange=8,alpha=0.01)
|
How to obtain Tukey Table in R?
For explanatory purposes, I tried to make up a function which provides a full Tukey table; see my comments inside. I still think that the R help is misleading in this case:
QTable<-function(dfrange=10,n
|
52,146 |
How to interpret 95% confidence interval for Area Under Curve of ROC?
|
A confidence interval is an interval-estimate for some true value of a parameter. Let us (as an example) start with e.g. a confidence interval for the mean of a normal distribution and then move on to ROC and AUC so that one sees the analogy.
Assume that you have a random normal variable $X \sim N(\mu;\sigma)$. Where $\mu$ is the unknown population mean and, to keep it simple, let us assume that $\sigma$ is known.
We now draw a sample of size $n$ from the distribution of X, i.e. we get a sample $x_1, x_2, \dots x_n$. The goal is to have an idea about the unknown $\mu$ using the sample drawn. It is well known that the arithmetic average $\bar{x}=\frac{1}{n}\sum_i x_i$ is an unbiased (point) estimator for (the unknown) $\mu$ and that $[\bar{x}-1.96\frac{\sigma}{\sqrt{n}};\bar{x}+1.96\frac{\sigma}{\sqrt{n}}]$ is a $95\%$ confidence interval for (the unknown) $\mu$.
If we draw another sample $y_1, \dots , y_n$ from the distribution of $X$ then, in the same way, we will find another confidence interval for the (unknown) $\mu$ as $[\bar{y}-1.96\frac{\sigma}{\sqrt{n}};\bar{y}+1.96\frac{\sigma}{\sqrt{n}}]$.
So each time we draw a sample of size $n$ from the distribution of $X$, we find a confidence interval for the (unknown) $\mu$ and all these intervals will be different. The fact that it is a $95\%$ confidence interval means that, if we draw an 'infinite' number of samples of size $n$ from the distribution of $X$, and for each of these samples we compute the $95\%$ confidence interval, then $95\%$ of all these intervals (one interval for each sample) will contain the unknown $\mu$. (So sometimes, namely for $5\%$ of the intervals, the interval will not contain the unknown $\mu$: sometimes you have bad luck.)
The same holds for the AUC, when you compute the AUC, you compute it from a sample, in other words what you compute is an estimate for the true unknown AUC. Similarly you can, for the sample that you have, compute a confidence interval for the true but unknown AUC. If you were able to draw an infinite number of samples, and for each sample obtained compute the confidence interval for the true AUC, then $95\%$ of these computed intervals would contain the true but unknown AUC.
Note that the interval is random, because it is computed from a random sample. The true AUC is not random, it is some unknown property of your population.
Unfortunately you can not draw an infinite number of samples, most of the time you have only one sample, so you will have to do it with one interval, but you are rather confident ($95\%$ of the so computed intervals will contain the true unknown AUC) that this interval will contain the true AUC. And yes, if the lower border of the interval is higher than 0.5 then you can be rather confident that your model is not the random model, but, as above, you may also have had bad luck with the sample.
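In practice such an interval is usually computed for you. For instance, a minimal sketch with the pROC package, where y is a hypothetical vector of observed 0/1 outcomes and p the corresponding predicted scores:
library(pROC)
r <- roc(y, p)   # y: observed 0/1 outcomes, p: model scores or predicted probabilities
auc(r)           # point estimate of the AUC
ci.auc(r)        # 95% confidence interval for the AUC (DeLong's method by default)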
|
How to interpret 95% confidence interval for Area Under Curve of ROC?
|
A confidence interval is an interval-estimate for some true value of a parameter. Let us (as an example) start with e.g. a confidence interval for the mean of a normal distribution and then move on to
|
How to interpret 95% confidence interval for Area Under Curve of ROC?
A confidence interval is an interval-estimate for some true value of a parameter. Let us (as an example) start with e.g. a confidence interval for the mean of a normal distribution and then move on to ROC and AUC so that one sees the analogy.
Assume that you have a random normal variable $X \sim N(\mu;\sigma)$. Where $\mu$ is the unknown population mean and, to keep it simple, let us assume that $\sigma$ is known.
We now draw a sample of size $n$ from the distribution of X, i.e. we get a sample $x_1, x_2, \dots x_n$. The goal is to have an idea about the unknown $\mu$ using the sample drawn. It is well known that the arithmetic average $\bar{x}=\frac{1}{n}\sum_i x_i$ is an unbiased (point) estimator for (the unknown) $\mu$ and that $[\bar{x}-1.96\frac{\sigma}{\sqrt{n}};\bar{x}+1.96\frac{\sigma}{\sqrt{n}}]$ is a $95\%$ confidence interval for (the unknown) $\mu$.
If we draw another sample $y_1, \dots , y_n$ from the distribution of $X$ then, in the same way, we will find another confidence interval for the (unknown) $\mu$ as $[\bar{y}-1.96\frac{\sigma}{\sqrt{n}};\bar{y}+1.96\frac{\sigma}{\sqrt{n}}]$.
So each time we draw a sample of size $n$ from the distribution of $X$, we find a confidence interval for the (unknown) $\mu$ and all these intervals will be different. The fact that it is a $95\%$ confidence interval means that, if we draw an 'infinite' number of samples of size $n$ from the distribution of $X$, and for each of these samples we compute the $95\%$ confidence interval, then $95\%$ of all these intervals (one interval for each sample) will contain the unknown $\mu$. (So sometimes, namely for $5\%$ of the intervals, the interval will not contain the unknown $\mu$: sometimes you have bad luck.)
The same holds for the AUC, when you compute the AUC, you compute it from a sample, in other words what you compute is an estimate for the true unknown AUC. Similarly you can, for the sample that you have, compute a confidence interval for the true but unknown AUC. If you were able to draw an infinite number of samples, and for each sample obtained compute the confidence interval for the true AUC, then $95\%$ of these computed intervals would contain the true but unknown AUC.
Note that the interval is random, because it is computed from a random sample. The true AUC is not random, it is some unknown property of your population.
Unfortunately you can not draw an infinite number of samples, most of the time you have only one sample, so you will have to do it with one interval, but you are rather confident ($95\%$ of the so computed intervals will contain the true unknown AUC) that this interval will contain the true AUC. And yes, if the lower border of the interval is higher than 0.5 then you can be rather confident that your model is not the random model, but, as above, you may also have had bad luck with the sample.
|
How to interpret 95% confidence interval for Area Under Curve of ROC?
A confidence interval is an interval-estimate for some true value of a parameter. Let us (as an example) start with e.g. a confidence interval for the mean of a normal distribution and then move on to
|
52,147 |
How to interpret 95% confidence interval for Area Under Curve of ROC?
|
Probably the best interpretation would be in terms of the so-called $c$ statistic, which turns out to equal the area under the ROC curve. That is, if you are trying to predict some response $Y$ (which is often binary) using a score $X$, then the $c$ statistic is defined as $P(X^\prime > X \mid Y^\prime > Y)$, where $X^\prime$ and $Y^\prime$ are independent copies of $X$ and $Y$.
You would then be $95\%$ confident that the "true" value of this conditional probability lies within the specified interval. This would allow you to somewhat more formally reject the claim that your model is no better than random if the lower bound is above $1/2$.
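For a binary $Y$ this is simply the proportion of concordant case/control pairs (counting ties as one half), which can be computed directly. A minimal sketch in R, where y is a hypothetical 0/1 outcome vector and x the corresponding scores:
cases    <- x[y == 1]
controls <- x[y == 0]
# proportion of case/control pairs in which the case has the higher score
mean(outer(cases, controls, ">") + 0.5 * outer(cases, controls, "=="))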
|
How to interpret 95% confidence interval for Area Under Curve of ROC?
|
Probably the best interpretation would be in terms of the so-called $c$ statistic, which turns out to equal the area under the ROC curve. That is, if you are trying to predict some response $Y$ (whic
|
How to interpret 95% confidence interval for Area Under Curve of ROC?
Probably the best interpretation would be in terms of the so-called $c$ statistic, which turns out to equal the area under the ROC curve. That is, if you are trying to predict some response $Y$ (which is often binary) using a score $X$, then the $c$ statistic is defined as $P(X^\prime > X \mid Y^\prime > Y)$, where $X^\prime$ and $Y^\prime$ are independent copies of $X$ and $Y$.
You would then be $95\%$ confident that the "true" value of this conditional probability lies within the specified interval. This would allow you to somewhat more formally reject the claim that your model is no better than random if the lower bound is above $1/2$.
|
How to interpret 95% confidence interval for Area Under Curve of ROC?
Probably the best interpretation would be in terms of the so-called $c$ statistic, which turns out to equal the area under the ROC curve. That is, if you are trying to predict some response $Y$ (whic
|
52,148 |
Explaining Consistency of estimators to a non-statistical audience
|
This is an indirect approach that might help lead you toward considering the question in a different light.
Let me play devil's advocate for a moment.
In practice*, how much does consistency matter?
* (you might think about whether your lay audience would care about anything else)
When you have data, you have some particular sample size, $n=n_0$. Certainly you care about behavior at that sample size. If you're pondering several possible sample sizes, behavior at those several sample sizes would matter.
I'm never likely to see a sample size of a trillion. But is consistency actually relevant even at a specific sample size of much larger order, like $n=10^{120}$? It doesn't tell me anything about the behavior at my actual sample size.
Why would behavior at the limit of some sequence of sample sizes that you will never see be of any consequence? There are certainly times when it might be convenient in some sense, or nice to have, but that alone isn't much of an argument that it's actually important.
If you can answer that question, you might see a way to motivate it to a lay audience.
If you have difficulty with that question, explaining it to a lay audience is not your first problem (your first problem would be more like why is it even important to you?).
|
Explaining Consistency of estimators to a non-statistical audience
|
This is an indirect approach that might help lead you toward considering the question in a different light.
Let me play devil's advocate for a moment.
In practice*, how much does consistency matter?
|
Explaining Consistency of estimators to a non-statistical audience
This is an indirect approach that might help lead you toward considering the question in a different light.
Let me play devil's advocate for a moment.
In practice*, how much does consistency matter?
* (you might think about whether your lay audience would care about anything else)
When you have data, you have some particular sample size, $n=n_0$. Certainly you care about behavior at that sample size. If you're pondering several possible sample sizes, behavior at those several sample sizes would matter.
I'm never likely to see a sample size of a trillion. But is consistency actually relevant even at a specific sample size of much larger order, like $n=10^{120}$? It doesn't tell me anything about the behavior at my actual sample size.
Why would behavior at the limit of some sequence of sample sizes that you will never see be of any consequence? There are certainly times when it might be convenient in some sense, or nice to have, but that alone isn't much of an argument that it's actually important.
If you can answer that question, you might see a way to motivate it to a lay audience.
If you have difficulty with that question, explaining it to a lay audience is not your first problem (your first problem would be more like why is it even important to you?).
|
Explaining Consistency of estimators to a non-statistical audience
This is an indirect approach that might help lead you toward considering the question in a different light.
Let me play devil's advocate for a moment.
In practice*, how much does consistency matter?
|
52,149 |
Explaining Consistency of estimators to a non-statistical audience
|
Create a simple but realistic example, with a known ‘feature of the population’ (i.e., parameter). Simulate from this example, and create a plot, with the number of observations on the x-axis and ‘cumulative’ parameter estimates on the y-axis. Mark the population parameter with a red horizontal line. Point out that the estimates converge¹, but to a completely different value than the ‘real’ value (i.e., the parameter), even if the number of observations is in the millions.
¹ If the estimator doesn’t converge, simply point this out. It may also be useful to show several realisations, to drive home the point.
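A minimal sketch of such a plot: here the ‘feature of the population’ is the mean of an Exponential(1) population (equal to 1), and the deliberately bad estimator is the running sample median, which converges to log 2 (about 0.69) rather than to the mean:
set.seed(42)
x  <- rexp(1e5, rate = 1)                     # population mean is 1
ns <- seq(100, length(x), by = 100)
est <- sapply(ns, function(k) median(x[1:k])) # 'cumulative' estimates from the first k observations
plot(ns, est, type = "l", ylim = c(0.6, 1.1),
     xlab = "number of observations", ylab = "estimate of the mean")
abline(h = 1, col = "red")                    # the true parameter value
# the estimates settle down, but around log(2), not around 1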
|
Explaining Consistency of estimators to a non-statistical audience
|
Create a simple but realistic example, with a known ‘feature of the population’ (i.e., parameter). Simulate from this example, and create a plot, with the number of observations on the x-axis and ‘cum
|
Explaining Consistency of estimators to a non-statistical audience
Create a simple but realistic example, with a known ‘feature of the population’ (i.e., parameter). Simulate from this example, and create a plot, with the number of observations on the x-axis and ‘cumulative’ parameter estimates on the y-axis. Mark the population parameter with a red horizontal line. Point out that the estimates converge¹, but to a completely different value than the ‘real’ value (i.e., the parameter), even if the number of observations is in the millions.
¹ If the estimator doesn’t converge, simply point this out. It may also be useful to show several realisations, to drive home the point.
|
Explaining Consistency of estimators to a non-statistical audience
Create a simple but realistic example, with a known ‘feature of the population’ (i.e., parameter). Simulate from this example, and create a plot, with the number of observations on the x-axis and ‘cum
|
52,150 |
Explaining Consistency of estimators to a non-statistical audience
|
Reasons to be Consistent, Part III:
1) Defense: smiling, an estimator property is "asymptotic" when we don't have a clue as to when it will actually start to visibly affect the behavior and the results of an estimator. It may take a sample of immense size, it may take a few dozen observations. So we want to have consistency in order to safeguard against being led astray without even knowing it. Since patients' health is at stake, I guess this alone should be an argument that health professionals would listen to.
A graphical exposition could present two values: the true value (and the associated treatment of the patient), and the probability limit of the estimator (the wrong value) with its associated treatment. If the treatment in the second case is different from the one under consistency (and possibly irrelevant or detrimental), then you have shown the potential danger the users will bear (and impose on the patient) by using the inconsistent estimator.
2) In most cases, inconsistency also means the existence of bias even as large amounts of information accumulate (although, strictly speaking, the concept of (un)biasedness in a limiting situation has more than one definition). So you could fall back to something like "even if sample sizes are small and you don't think that inconsistency matters, as measurements accumulate, if you pool them, their average will also be wrong" - since the averaging operation is something that everybody feels familiar with. So inconsistency makes pooling of the obtained estimates, or of the data proper, misleading, something that sabotages any mid-term / long-term attempt to uncover the true situation.
Reasons to be Helpful, Part III:
Can you give them a positive result? Is there an alternative estimator that performs the same job, and is also consistent? And if yes, how it compares as regards finite-sample properties, like bias, variance, Mean Squared Error?
Reasons to Worry, Part III:
The real tough situation would be if a) there is no alternative or b) the alternative is consistent but it performs worse in finite sample properties. Here you enter into Risk and Decision Theory proper, in which case, @whuber should jump in and clear the fog.
|
Explaining Consistency of estimators to a non-statistical audience
|
Reasons to be Consistent, Part III:
1) Defense: smiling, an estimator property is "asymptotic" when we don't have a clue as to when it will actually start to visibly affect the behavior and the resu
|
Explaining Consistency of estimators to a non-statistical audience
Reasons to be Consistent, Part III:
1) Defense: smiling, an estimator property is "asymptotic" when we don't have a clue as to when it will actually start to visibly affect the behavior and the results of an estimator. It may take a sample of immense size, it may take a few dozen observations. So we want to have consistency in order to safeguard against being led astray without even knowing it. Since patients' health is at stake, I guess this alone should be an argument that health professionals would listen to.
A graphical exposition could possess two values: the true value (and the associated treatment of the patient), and the probability limit of the estimator (the wrong value), and its associated treatment. If the treatment in the second case is different (and possibly irrelevant/detrimental) from the one under consistency, then you have the potential danger that the users will bear (and impose on the patient) by using the inconsistent estimator.
2) In most cases, inconsistency also means the existence of bias even as large amounts of information accumulate (although, strictly speaking, the concept of (un)biasedness in a limiting situation has more than one definition). So you could fall back to something like "even if sample sizes are small and you don't think that inconsistency matters, as measurements accumulate, if you pool them, their average will also be wrong" -since the averaging operation is something that everybody feels familiar with. So inconsistency makes pooling of the obtained estimates, or of the data proper, misleading, something that sabotages any mid-term / long-term attempt to uncover the true situation.
Reasons to be Helpful, Part III:
Can you give them a positive result? Is there an alternative estimator that performs the same job, and is also consistent? And if yes, how it compares as regards finite-sample properties, like bias, variance, Mean Squared Error?
Reasons to Worry, Part III:
The real tough situation would be if a) there is no alternative or b) the alternative is consistent but it performs worse in finite sample properties. Here you enter into Risk and Decision Theory proper, in which case, @whuber should jump in and clear the fog.
|
Explaining Consistency of estimators to a non-statistical audience
Reasons to be Consistent, Part III:
1) Defense: smiling, an estimator property is "asymptotic" when we don't have a clue as to when it will actually start to visibly affect the behavior and the resu
|
52,151 |
Explaining Consistency of estimators to a non-statistical audience
|
Thank you so much for your responses.
I have had a lot of people in the discipline ask me "Who cares what happens at infinity? We are never going to get there"..similar to what Glen posted and my response has been "Why don't you consider the first entry in your sample as the mean of the sample?" though this seems a rather indirect answer to the question.
As a partial response to Glen, people in the medical field extensively use the bootstrap which critically depends on consistency. Alecos' answer hits a few other issues bang on!
My answer is in the part (b) reason to Worry domain of Alecos' answer. The answer obtained at present is something a lot of people want to believe so an alternative may not be well received.
Defending consistency turns out to be very difficult on a practical data set. I thought I could rely on the several fantastic texts on decision theory, but all of them seem to pass inconsistency off as an inconvenience rather than a problem you need to be rid of.
To be more concrete, most texts in Theoretical Stats refrain from statements of the form "Inconsistent estimators are bad..." as opposed to ML texts which pretty much flat out state that "Overfitting is bad...". Can someone please explain why this is so?
|
Explaining Consistency of estimators to a non-statistical audience
|
Thank you so much for your responses.
I have had a lot of people in the discipline ask me "Who cares what happens at infinity? We are never going to get there"..similar to what Glen posted and my resp
|
Explaining Consistency of estimators to a non-statistical audience
Thank you so much for your responses.
I have had a lot of people in the discipline ask me "Who cares what happens at infinity? We are never going to get there"..similar to what Glen posted and my response has been "Why don't you consider the first entry in your sample as the mean of the sample?" though this seems a rather indirect answer to the question.
As a partial response to Glen, people in the medical field extensively use the bootstrap which critically depends on consistency. Alecos' answer hits a few other issues bang on!
My answer is in the part (b) reason to Worry domain of Alecos' answer. The answer obtained at present is something a lot of people want to believe so an alternative may not be well received.
Defending consistency turns out to be very difficult on a practical data set. I thought I could rely on the several fantastic texts on decision theory, but all of them seem to pass inconsistency off as an inconvenience rather than a problem you need to be rid of.
To be more concrete, most texts in Theoretical Stats refrain from statements of the form "Inconsistent estimators are bad..." as opposed to ML texts which pretty much flat out state that "Overfitting is bad...". Can someone please explain why this is so?
|
Explaining Consistency of estimators to a non-statistical audience
Thank you so much for your responses.
I have had a lot of people in the discipline ask me "Who cares what happens at infinity? We are never going to get there"..similar to what Glen posted and my resp
|
52,152 |
R optim function - Setting constraints for individual parameters
|
I would recommend re-parametrizing the problem so that it is unconstrained.
Say by mapping the non-negative parameter with a log transform.
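A minimal sketch of the idea, assuming (purely for illustration) that min.RSS and theta are as in the question, that the first and fourth parameters are the ones constrained to be non-negative, and that their start values are strictly positive:
min.RSS.unconstrained <- function(par) {
  par[c(1, 4)] <- exp(par[c(1, 4)])        # map back to the non-negative scale
  min.RSS(par)
}
theta0 <- theta
theta0[c(1, 4)] <- log(theta[c(1, 4)])     # start values on the log scale
fit <- optim(par = theta0, fn = min.RSS.unconstrained)
fit$par[c(1, 4)] <- exp(fit$par[c(1, 4)])  # report on the original scale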
|
R optim function - Setting constraints for individual parameters
|
I would recommend re-parametrizing the problem so that it is unconstrained.
Say by mapping the non-negative parameter with a log transform.
|
R optim function - Setting constraints for individual parameters
I would recommend re-parametrizing the problem so that it is unconstrained.
Say by mapping the non-negative parameter with a log transform.
|
R optim function - Setting constraints for individual parameters
I would recommend re-parametrizing the problem so that it is unconstrained.
Say by mapping the non-negative parameter with a log transform.
|
52,153 |
R optim function - Setting constraints for individual parameters
|
You can set the constraints for the unconstrained parameters to $\pm \infty$ (and the ceiling for the non-negative parameters to $+\infty$).
optim(par=theta, fn=min.RSS, lower=c(0, -Inf, -Inf, 0), upper=rep(Inf, 4),
method="L-BFGS-B")
Technically the upper argument is unnecessary in this case, as its default value is Inf. However I like to be explicit when specifying bounds.
|
R optim function - Setting constraints for individual parameters
|
You can set the constraints for the unconstrained parameters to $\pm \infty$ (and the ceiling for the non-negative parameters to $+\infty$).
optim(par=theta, fn=min.RSS, lower=c(0, -Inf, -Inf, 0), up
|
R optim function - Setting constraints for individual parameters
You can set the constraints for the unconstrained parameters to $\pm \infty$ (and the ceiling for the non-negative parameters to $+\infty$).
optim(par=theta, fn=min.RSS, lower=c(0, -Inf, -Inf, 0), upper=rep(Inf, 4),
method="L-BFGS-B")
Technically the upper argument is unnecessary in this case, as its default value is Inf. However I like to be explicit when specifying bounds.
|
R optim function - Setting constraints for individual parameters
You can set the constraints for the unconstrained parameters to $\pm \infty$ (and the ceiling for the non-negative parameters to $+\infty$).
optim(par=theta, fn=min.RSS, lower=c(0, -Inf, -Inf, 0), up
|
52,154 |
Obtaining adjusted (predicted) proportions with lme4 - using the glmer-function
|
predict() in lme4 does not work well unless the grouping factor specification is "realistic". If we use samples from the observed data, we get reasonable predictions. I think this is a bug in predict.merMod()
This is lme4 1.1-7
my.fit <- glmer(smoker ~ biomarker + year + sex + age + (1|id), data = df, family = binomial(link='logit'), nAGQ = 0)
## predict with the observed data
predictions <- predict(my.fit, type = "response")
mean(predictions, na.rm = TRUE)
[1] 0.1144976
## predict for year 1996 to year 2014
new.df <- df[sample(nrow(df), replace = TRUE), ]
sapply(1996:2014, function(x) {
new.df$year <- x
predictions <- predict(my.fit, newdata = new.df, type = "response", allow.new.levels=TRUE)
mean(predictions, na.rm = TRUE)
})
[1] 0.15785350 0.15374013 0.14974742 0.14586612 0.14208791 0.13840528 0.13481145 0.13130033
[9] 0.12786645 0.12450488 0.12121121 0.11798144 0.11481198 0.11169959 0.10864132 0.10563448
[17] 0.10267662 0.09976548 0.09689897
Compare that with what you get if you set id to a new value, not present in df when my.fit was evaluated.
new.df$id <- 1
sapply(1996:2014, function(x) {
new.df$year <- x
predictions <- predict(my.fit, newdata = new.df, type = "response", allow.new.levels=TRUE)
mean(predictions, na.rm = TRUE)
})
[1] 0.07292960 0.06660550 0.06079003 0.05544891 0.05054904 0.04605868 0.04194759 0.03818706
[9] 0.03475000 0.03161093 0.02874596 0.02613281 0.02375069 0.02158030 0.01960378 0.01780458
[17] 0.01616745 0.01467832 0.01332425
If you want confidence intervals around the predicted means, I can recommend the boot package
my.bootstrap.predictions.f <- function(data, indices){
return(mean(predict(my.fit, newdata = data[indices, ], type = "response", allow.new.levels=TRUE), na.rm=TRUE))
}
## predict for year 1996 to year 2014
new.df <- df[sample(nrow(df), replace = TRUE), ]
time.period <- 1996:2014
my.results <- matrix(nrow=length(time.period), ncol = 4)
for(x in 1:length(time.period)){
my.results[x, 1] <- time.period[x]
new.df$year <- time.period[x]
## bootstrap using a realistic number of samples per year, say 20000
my.boot.obj <- boot(data = new.df[sample(nrow(new.df), 20000, replace = TRUE), ], statistic = my.bootstrap.predictions.f, R = 100)
my.results[x, 2] <- my.boot.obj[[1]]
my.results[x, 3:4] <- quantile(my.boot.obj[[2]], c(0.025, 0.975))
}
colnames(my.results) <- c("Year", "mean proportion", "lower.ci", "upper.ci")
Using R = 2 in the call to boot, I got the following results:
my.results
Year mean proportion lower.ci upper.ci
[1,] 1996 0.15546203 0.15074711 0.15913858
[2,] 1997 0.15259172 0.15187967 0.15367255
[3,] 1998 0.14850053 0.14672908 0.14758575
[4,] 1999 0.14304792 0.14076788 0.14285565
[5,] 2000 0.14340505 0.14308830 0.14354602
[6,] 2001 0.13575446 0.13580401 0.13906530
[7,] 2002 0.13378345 0.13092585 0.13734706
[8,] 2003 0.13135301 0.13367143 0.13379709
[9,] 2004 0.12884709 0.12817521 0.12889604
[10,] 2005 0.12407854 0.12164363 0.12338582
[11,] 2006 0.11703560 0.11692974 0.12129822
[12,] 2007 0.11701868 0.11660557 0.12143424
[13,] 2008 0.11427061 0.11247954 0.11578390
[14,] 2009 0.10743174 0.10681350 0.11119430
[15,] 2010 0.10769067 0.10547794 0.10833410
[16,] 2011 0.10677037 0.10734999 0.10867513
[17,] 2012 0.10064624 0.09991072 0.10205153
[18,] 2013 0.09735750 0.09704462 0.10003968
[19,] 2014 0.09448228 0.09331708 0.09346973
|
Obtaining adjusted (predicted) proportions with lme4 - using the glmer-function
|
predict() in lme4 does not work well unless the grouping factor specification is "realistic". If we use samples from the observed data, we get reasonable predictions. I think this is a bug in predict.
|
Obtaining adjusted (predicted) proportions with lme4 - using the glmer-function
predict() in lme4 does not work well unless the grouping factor specification is "realistic". If we use samples from the observed data, we get reasonable predictions. I think this is a bug in predict.merMod()
This is lme4 1.1-7
my.fit <- glmer(smoker ~ biomarker + year + sex + age + (1|id), data = df, family = binomial(link='logit'), nAGQ = 0)
## predict with the observed data
predictions <- predict(my.fit, type = "response")
mean(predictions, na.rm = TRUE)
[1] 0.1144976
## predict for year 1996 to year 2014
new.df <- df[sample(nrow(df), replace = TRUE), ]
sapply(1996:2014, function(x) {
new.df$year <- x
predictions <- predict(my.fit, newdata = new.df, type = "response", allow.new.levels=TRUE)
mean(predictions, na.rm = TRUE)
})
[1] 0.15785350 0.15374013 0.14974742 0.14586612 0.14208791 0.13840528 0.13481145 0.13130033
[9] 0.12786645 0.12450488 0.12121121 0.11798144 0.11481198 0.11169959 0.10864132 0.10563448
[17] 0.10267662 0.09976548 0.09689897
Compare that with what you get if you set id to a new value, not present in df when my.fit was evaluated.
new.df$id <- 1
sapply(1996:2014, function(x) {
new.df$year <- x
predictions <- predict(my.fit, newdata = new.df, type = "response", allow.new.levels=TRUE)
mean(predictions, na.rm = TRUE)
})
[1] 0.07292960 0.06660550 0.06079003 0.05544891 0.05054904 0.04605868 0.04194759 0.03818706
[9] 0.03475000 0.03161093 0.02874596 0.02613281 0.02375069 0.02158030 0.01960378 0.01780458
[17] 0.01616745 0.01467832 0.01332425
If you want confidence intervals around the predicted means, I can recommend the boot package
my.bootstrap.predictions.f <- function(data, indices){
return(mean(predict(my.fit, newdata = data[indices, ], type = "response", allow.new.levels=TRUE), na.rm=TRUE))
}
## predict for year 1996 to year 2014
new.df <- df[sample(nrow(df), replace = TRUE), ]
time.period <- 1996:2014
my.results <- matrix(nrow=length(time.period), ncol = 4)
for(x in 1:length(time.period)){
my.results[x, 1] <- time.period[x]
new.df$year <- time.period[x]
## bootstrap using a realistic number of samples per year, say 20000
my.boot.obj <- boot(data = new.df[sample(nrow(new.df), 20000, replace = TRUE), ], statistic = my.bootstrap.predictions.f, R = 100)
my.results[x, 2] <- my.boot.obj[[1]]
my.results[x, 3:4] <- quantile(my.boot.obj[[2]], c(0.025, 0.975))
}
colnames(my.results) <- c("Year", "mean proportion", "lower.ci", "upper.ci")
Using R = 2 in the call to boot, I got the following results:
my.results
Year mean proportion lower.ci upper.ci
[1,] 1996 0.15546203 0.15074711 0.15913858
[2,] 1997 0.15259172 0.15187967 0.15367255
[3,] 1998 0.14850053 0.14672908 0.14758575
[4,] 1999 0.14304792 0.14076788 0.14285565
[5,] 2000 0.14340505 0.14308830 0.14354602
[6,] 2001 0.13575446 0.13580401 0.13906530
[7,] 2002 0.13378345 0.13092585 0.13734706
[8,] 2003 0.13135301 0.13367143 0.13379709
[9,] 2004 0.12884709 0.12817521 0.12889604
[10,] 2005 0.12407854 0.12164363 0.12338582
[11,] 2006 0.11703560 0.11692974 0.12129822
[12,] 2007 0.11701868 0.11660557 0.12143424
[13,] 2008 0.11427061 0.11247954 0.11578390
[14,] 2009 0.10743174 0.10681350 0.11119430
[15,] 2010 0.10769067 0.10547794 0.10833410
[16,] 2011 0.10677037 0.10734999 0.10867513
[17,] 2012 0.10064624 0.09991072 0.10205153
[18,] 2013 0.09735750 0.09704462 0.10003968
[19,] 2014 0.09448228 0.09331708 0.09346973
|
Obtaining adjusted (predicted) proportions with lme4 - using the glmer-function
predict() in lme4 does not work well unless the grouping factor specification is "realistic". If we use samples from the observed data, we get reasonable predictions. I think this is a bug in predict.
|
52,155 |
Obtaining adjusted (predicted) proportions with lme4 - using the glmer-function
|
To get the confidence interval, you could also try the effects package
lmer.1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
library("effects")
# obtain a fit at different estimates of the predictor
ef.1 <- effect(c("Days"), lmer.1)
df.ef <- data.frame(ef.1)
df.ef
plot(effect(c("Days"), lmer.1), grid = TRUE)
Here are the fitted values and the CIs. The width of the CI bands increases because the model was a random-slope model.
Days fit se lower upper
1 0 251.4051 6.824557 237.9377 264.8726
2 2 272.3397 7.094226 258.3401 286.3393
3 4 293.2742 8.555537 276.3909 310.1576
4 6 314.2088 10.732292 293.0299 335.3877
5 8 335.1434 13.277148 308.9425 361.3443
If you were to use a random intercept only model like this (as in your example),
lmer.1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy)
you will get parallel CI bands.
|
Obtaining adjusted (predicted) proportions with lme4 - using the glmer-function
|
To get the confidence interval, you could also try the effects package
lmer.1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
library("effects")
# obtain a fit at different estimates of the pr
|
Obtaining adjusted (predicted) proportions with lme4 - using the glmer-function
To get the confidence interval, you could also try the effects package
lmer.1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
library("effects")
# obtain a fit at different estimates of the predictor
ef.1 <- effect(c("Days"), lmer.1)
df.ef <- data.frame(ef.1)
df.ef
plot(effect(c("Days"), lmer.1), grid = TRUE)
Here are the fitted values and the CIs. The width of the CI bands increases because the model was a random-slope model.
Days fit se lower upper
1 0 251.4051 6.824557 237.9377 264.8726
2 2 272.3397 7.094226 258.3401 286.3393
3 4 293.2742 8.555537 276.3909 310.1576
4 6 314.2088 10.732292 293.0299 335.3877
5 8 335.1434 13.277148 308.9425 361.3443
If you were to use a random intercept only model like this (as in your example),
lmer.1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy)
you will get parallel CI bands.
|
Obtaining adjusted (predicted) proportions with lme4 - using the glmer-function
To get the confidence interval, you could also try the effects package
lmer.1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
library("effects")
# obtain a fit at different estimates of the pr
|
52,156 |
Find the mode of a probability distribution function
|
Consider that there are shapes of pdf that have a mode, but at which the derivative of the pdf is not zero (the Laplace being an obvious example).
There are also cases where there's no mode in the domain of the variable (examples below).
That is, we can't say as a general statement "the mode can be obtained by taking the derivative of $g(x)$ and setting it to zero".
You must give up trying to look at the value where the derivative is zero when it's clear that the reasoning underlying its use (that the derivative is zero at the mode) fails. Here's a simple example where it works for part of the domain of a parameter, but not everywhere:
(Here I've included $x=0$ in the domain for the exponential case; if it were excluded, strictly there's no mode in the domain for that case (the supremum would be $1$, which is the value of $\frac{1}{\mu} \exp(-\frac{x}{\mu})$ at $0$, but the density would not be defined there). For the cases where the shape parameter is $<1$, you couldn't include $x=0$, and there would be no mode.)
So there's no use looking for where the derivative is zero when it isn't zero at the mode - indeed it may not be zero anywhere.
Further, for some densities, even when the derivative is 0, it doesn't imply there's a mode there. Consider this density (a beta density):
There's not a local mode where the derivative is zero; it's an antimode. You can also have a density with a horizontal point of inflexion which will be neither a mode nor an antimode:
As a result, it's not sufficient to simply calculate a formula at which the derivative is zero; even if you can calculate such values, that may not tell you where the modes are.
You must work out when that calculation corresponds to modes of the density; where that fails, quite often the location of the mode is obvious (if it exists at all).
A first exercise would be to draw the density at a few values of $\alpha$ and see how it behaves. Then you should be able to bring some reasoning to bear on the problem, which will tell you where the mode is when $\alpha <\frac{1}{2}$.
It's possible for the function to be everywhere decreasing in the domain; if it's on an open interval there may be no mode in that interval. For example, in the case of the gamma with $\alpha<1$, it's not uncommon to say "there's a mode at 0" even though the limit isn't in the interval - it's strictly incorrect to say there's a mode in that case (but usually one can understand the actual intent if someone says there's a mode at 0 even though the function is unbounded in the limit).
Here's an example of what your density looks like at $\beta=\sigma=1$ for three different values of $\alpha$ near $0.5$:
It does clearly suggest the behavior for $\alpha<\frac{1}{2}$ may be monotonically decreasing. The obvious thing to do is to try to check whether it's the case that $g(x+\epsilon)<g(x)$ with $\alpha<\frac{1}{2}$ when $\epsilon>0$ (e.g. is the derivative always negative when $\alpha<\frac{1}{2}$?)
If it's the case that it is monotonic decreasing (you should carry out such a check for yourself), it doesn't strictly have a mode; best to simply describe the behavior near 0 for $\alpha<\frac{1}{2}$.
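As a hedged illustration of the 'draw the density' advice, here is how you might eyeball the analogous behaviour for the gamma density mentioned above (not the $g(x)$ of the question) at a few shape values in R:
curve(dgamma(x, shape = 1.5), from = 0.01, to = 4, ylim = c(0, 3), ylab = "density")  # interior mode
curve(dgamma(x, shape = 1.0), from = 0.01, to = 4, add = TRUE, lty = 2)               # exponential case
curve(dgamma(x, shape = 0.7), from = 0.01, to = 4, add = TRUE, lty = 3)               # decreasing, unbounded near 0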
|
Find the mode of a probability distribution function
|
Consider that there are shapes of pdf that have a mode, but at which the derivative of the pdf is not zero (the Laplace being an obvious example).
There are also cases where there's no mode in the d
|
Find the mode of a probability distribution function
Consider that there are shapes of pdf that have a mode, but at which the derivative of the pdf is not zero (the Laplace being an obvious example).
There are also cases where there's no mode in the domain of the variable (examples below).
That is, we can't say as a general statement "the mode can be obtained by taking the derivative of $g(x)$ and setting it to zero".
You must give up trying to look at the value where the derivative is zero when it's clear that the reasoning underlying its use (that the derivative is zero at the mode) fails. Here's a simple example where it works for part of the domain of a parameter, but not everywhere:
(Here I've included $x=0$ in the domain for the exponential case; if it were excluded, strictly there's no mode in the domain for that case (the supremum would be $1$, which is the value of $\frac{1}{\mu} \exp(-\frac{x}{\mu})$ at $0$, but the density would not be defined there). For the cases where the shape parameter is $<1$, you couldn't include $x=0$, and there would be no mode.)
So there's no use looking for where the derivative is zero when it isn't zero at the mode - indeed it may not be zero anywhere.
Further, for some densities, even when the derivative is 0, it doesn't imply there's a mode there. Consider this density (a beta density):
There's not a local mode where the derivative is zero; it's an antimode. You can also have a density with a horizontal point of inflexion which will be neither a mode nor an antimode:
As a result, it's not sufficient to simply calculate a formula at which the derivative is zero; even if you can calculate such values, that may not tell you where the modes are.
You must work out when that calculation corresponds to modes of the density; where that fails, quite often the location of the mode is obvious (if it exists at all).
A first exercise would be to draw the density at a few values of $\alpha$ and see how it behaves. Then you should be able to bring some reasoning to bear on the problem, which will tell you where the mode is when $\alpha <\frac{1}{2}$.
It's possible for the function to be everywhere decreasing in the domain; if it's on an open interval there may be no mode in that interval. For example, in the case of the gamma with $\alpha<1$, it's not uncommon to say "there's a mode at 0" even though the limit isn't in the interval - it's strictly incorrect to say there's a mode in that case (but usually one can understand the actual intent if someone says there's a mode at 0 even though the function is unbounded in the limit).
Here's an example of what your density looks like at $\beta=\sigma=1$ for three different values of $\alpha$ near $0.5$:
It does clearly suggest the behavior for $\alpha<\frac{1}{2}$ may be monotonically decreasing. The obvious thing to do is to try to check whether it's the case that $g(x+\epsilon)<g(x)$ with $\alpha<\frac{1}{2}$ when $\epsilon>0$ (e.g. is the derivative always negative when $\alpha<\frac{1}{2}$?)
If it's the case that it is monotonic decreasing (you should carry out such a check for yourself), it doesn't strictly have a mode; best to simply describe the behavior near 0 for $\alpha<\frac{1}{2}$.
|
Find the mode of a probability distribution function
Consider that there are shapes of pdf that have a mode, but at which the derivative of the pdf is not zero (the Laplace being an obvious example).
There are also cases where there's no mode in the d
|
52,157 |
Why is p for 8 times heads out of 21 flips not 8/21?
|
8/21 is the proportion of heads in the result.
Instead of calculating the probability of 8 heads, you can calculate the probability that the proportion of heads in 21 coin flips will be 8/21. They both turn out to be 0.097 (assuming your calculation is correct).
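In R, both calculations reduce to the same binomial probability:
dbinom(8, size = 21, prob = 0.5)   # P(exactly 8 heads in 21 fair flips), approximately 0.097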
|
Why is p for 8 times heads out of 21 flips not 8/21?
|
8/21 is the proportion of heads in the result.
Instead of calculating the probability of 8 heads, you can calculate the probability that the proportion of heads in 21 coin flips will be 8/21. They bot
|
Why is p for 8 times heads out of 21 flips not 8/21?
8/21 is the proportion of heads in the result.
Instead of calculating the probability of 8 heads, you can calculate the probability that the proportion of heads in 21 coin flips will be 8/21. They both turn out to be 0.097 (assuming your calculation is correct).
|
Why is p for 8 times heads out of 21 flips not 8/21?
8/21 is the proportion of heads in the result.
Instead of calculating the probability of 8 heads, you can calculate the probability that the proportion of heads in 21 coin flips will be 8/21. They bot
|
52,158 |
Why is p for 8 times heads out of 21 flips not 8/21?
|
Think of one fair die with $21$ sides, of which $8$ have the letter $H$ inscribed, and the other $13$ have the letter $T$ inscribed.
Throw the die once. What is the probability that you will get an $H$? It is $8/21$. Now compact the $21$ dimensions into just $2$, taking into account how many times each letter appeared in the $21$-dimensional world: it appears that the $T$ dimension should have a higher probability of occurring than the $H$ dimension, in one throw in the $2$-dimensional world, if one wanted to keep the correspondence with the $21$-dimensional world...
...which tells us that the sample proportions of a sequence of results from independent throws of a coin, estimate the probability distribution that holds for each throw.
|
Why is p for 8 times heads out of 21 flips not 8/21?
|
Think of one fair die with $21$ sides, of which $8$ have the letter $H$ inscribed, and the other $13$ have the letter $T$ inscribed.
Throw the die once. What is the probability that you will get an
|
Why is p for 8 times heads out of 21 flips not 8/21?
Think of one fair die with $21$ sides, of which $8$ have the letter $H$ inscribed, and the other $13$ have the letter $T$ inscribed.
Throw the die once. What is the probability that you will get an $H$? It is $8/21$. Now compact the $21$ dimensions into just $2$, taking into account how many times each letter appeared in the $21$-dimensional world: it appears that the $T$ dimension should have a higher probability of occurring than the $H$ dimension, in one throw in the $2$-dimensional world, if one wanted to keep the correspondence with the $21$-dimensional world...
...which tells us that the sample proportions of a sequence of results from independent throws of a coin, estimate the probability distribution that holds for each throw.
|
Why is p for 8 times heads out of 21 flips not 8/21?
Think of one fair die with $21$ sides, of which $8$ have the letter $H$ inscribed, and the other $13$ have the letter $T$ inscribed.
Throw the die once. What is the probability that you will get an
|
52,159 |
Why is p for 8 times heads out of 21 flips not 8/21?
|
What am I then calculating by $8/21=0.38$?
You're calculating an estimate of the probability that one future coin flip will turn up heads.
Of course, by doing so you're ignoring everything else you know about the coin, including your expectation that the coin is fair, and treating it as a binomial experiment with an unknown probability of success. Which is kind of a silly thing to do when you know you're dealing with a fair coin. (If you don't have any reason to believe the coin should be fair, this may be a reasonable thing to do.)
|
Why is p for 8 times heads out of 21 flips not 8/21?
|
What am I then calculating by $8/21=0.38$?
You're calculating an estimate of the probability that one future coin flip will turn up heads.
Of course, by doing so you're ignoring everything else you k
|
Why is p for 8 times heads out of 21 flips not 8/21?
What am I then calculating by $8/21=0.38$?
You're calculating an estimate of the probability that one future coin flip will turn up heads.
Of course, by doing so you're ignoring everything else you know about the coin, including your expectation that the coin is fair, and treating it as a binomial experiment with an unknown probability of success. Which is kind of a silly thing to do when you know you're dealing with a fair coin. (If you don't have any reason to believe the coin should be fair, this may be a reasonable thing to do.)
|
Why is p for 8 times heads out of 21 flips not 8/21?
What am I then calculating by $8/21=0.38$?
You're calculating an estimate of the probability that one future coin flip will turn up heads.
Of course, by doing so you're ignoring everything else you k
|
52,160 |
How do you pronounce "LASSO"?
|
Rob Tibshirani pronounces it the first way ("LAS-so"), which seems fairly definitive to me. However, Trevor Hastie pronounces it the second way ("las-SOO") and is from South Africa, so I'd agree with @Glen_b and say that any common local pronunciation of the word would be appropriate.
|
How do you pronounce "LASSO"?
|
Rob Tibshirani pronounces it the first way ("LAS-so"), which seems fairly definitive to me. However, Trevor Hastie pronounces it the second way ("las-SOO") and is from South Africa, so I'd agree with
|
How do you pronounce "LASSO"?
Rob Tibshirani pronounces it the first way ("LAS-so"), which seems fairly definitive to me. However, Trevor Hastie pronounces it the second way ("las-SOO") and is from South Africa, so I'd agree with @Glen_b and say that any common local pronunciation of the word would be appropriate.
|
How do you pronounce "LASSO"?
Rob Tibshirani pronounces it the first way ("LAS-so"), which seems fairly definitive to me. However, Trevor Hastie pronounces it the second way ("las-SOO") and is from South Africa, so I'd agree with
|
52,161 |
K-means cluster analysis with K=2 as a binary classifier
|
It depends on what you mean by "did pretty well" and on the population. For general adult populations in the developed world I would not expect this to work very well: heights and weights alone are not great at distinguishing the genders.
The best and easiest way to assess the situation is to make a scatterplot of height and weight, distinguishing the point symbols by gender. This one is from the (US) NHANES 2011-2012 data, where I have removed data for anyone younger than 18 years. Note the logarithmic scales, which render each point cloud approximately oval in shape. (You may guess which kind of symbol--solid red or open blue--corresponds to which gender.)
The substantial overlap between the clouds for the two genders (between 160 and 170 centimeters, approximately) shows that no cluster analysis based solely on height and weight could possibly do a very good job discriminating men from women. The partial lack of overlap, revealed by the cloud of blue above 180 cm and cloud of red below 150 cm, shows that a clustering result would nevertheless have some discriminating power. Whether this would be good enough depends on your objectives and standards for predictive accuracy.
If, in your dataset, the two clouds appear to have little or no overlap, then not only can you expect a cluster analysis (like K-means) to work well, you can already see where the cluster centers should be and where a dividing line ("linear discriminator") would approximately be located.
Here are two k-means solutions for these data: one based on the logarithms and another based on separately standardized heights and weights. The two clusters are distinguished by the lightness of the symbols.
(The number of cases shown in these plots is 90 less than the number reported in the first figure due to missing values, which should originally have been excluded.)
Evidently in both cases the clusters, although associated with gender, fail to separate the two colors very well. The better-looking solution, based on the standardized data, yields these cross-tabulation statistics of cluster and gender:
Cluster
Gender 1 2
Male 1951 786
Female 586 2202
29% of all males and 21% of all females are mis-classified.
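For reference, a rough sketch of how such clusterings can be produced in R; nhanes is a placeholder for a data frame with height, weight and gender columns (this is not the exact code used for the figures above):
ok <- complete.cases(nhanes[, c("height", "weight")])
hw <- as.matrix(nhanes[ok, c("height", "weight")])
set.seed(17)
km.log <- kmeans(log(hw), centers = 2)       # clustering on the logarithms
km.std <- kmeans(scale(hw), centers = 2)     # clustering on standardized variables
table(Gender = nhanes$gender[ok], Cluster = km.std$cluster)   # compare with the table above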
|
K-means cluster analysis with K=2 as a binary classifier
|
It depends on what you mean by "did pretty well" and on the population. For general adult populations in the developed world I would not expect this to work very well: heights and weights alone are n
|
K-means cluster analysis with K=2 as a binary classifier
It depends on what you mean by "did pretty well" and on the population. For general adult populations in the developed world I would not expect this to work very well: heights and weights alone are not great at distinguishing the genders.
The best and easiest way to assess the situation is to make a scatterplot of height and weight, distinguishing the point symbols by gender. This one is from the (US) NHANES 2011-2012 data, where I have removed data for anyone younger than 18 years. Note the logarithmic scales, which render each point cloud approximately oval in shape. (You may guess which kind of symbol--solid red or open blue--corresponds to which gender.)
The substantial overlap between the clouds for the two genders (between 160 and 170 centimeters, approximately) shows that no cluster analysis based solely on height and weight could possibly do a very good job discriminating men from women. The partial lack of overlap, revealed by the cloud of blue above 180 cm and cloud of red below 150 cm, shows that a clustering result would nevertheless have some discriminating power. Whether this would be good enough depends on your objectives and standards for predictive accuracy.
If, in your dataset, the two clouds appear to have little or no overlap, then not only can you expect a cluster analysis (like K-means) to work well, you can already see where the cluster centers should be and where a dividing line ("linear discriminator") would approximately be located.
Here are two k-means solutions for these data: one based on the logarithms and another based on separately standardized heights and weights. The two clusters are distinguished by the lightness of the symbols.
(The number of cases shown in these plots is 90 less than the number reported in the first figure due to missing values, which should originally have been excluded.)
Evidently in both cases the clusters, although associated with gender, fail to separate the two colors very well. The better-looking solution, based on the standardized data, yields these cross-tabulation statistics of cluster and gender:
Cluster
Gender 1 2
Male 1951 786
Female 586 2202
29% of all males and 21% of all females are mis-classified.
|
K-means cluster analysis with K=2 as a binary classifier
It depends on what you mean by "did pretty well" and on the population. For general adult populations in the developed world I would not expect this to work very well: heights and weights alone are n
|
52,162 |
K-means cluster analysis with K=2 as a binary classifier
|
Yes, it does sound sensible. I am not sure why you would suspect it did not.
Men tend to be both taller and heavier than women. Exact numbers vary with country (some data here on weight and here on height). Combining them ought to make the classification even better.
|
K-means cluster analysis with K=2 as a binary classifier
|
Yes, it does sound sensible. I am not sure why you would suspect it did not.
Men tend to be both taller and heavier than women. Exact numbers vary with country (some data here on weight and here on he
|
K-means cluster analysis with K=2 as a binary classifier
Yes, it does sound sensible. I am not sure why you would suspect it did not.
Men tend to be both taller and heavier than women. Exact numbers vary with country (some data here on weight and here on height). Combining them ought to make the classification even better.
|
K-means cluster analysis with K=2 as a binary classifier
Yes, it does sound sensible. I am not sure why you would suspect it did not.
Men tend to be both taller and heavier than women. Exact numbers vary with country (some data here on weight and here on he
|
52,163 |
K-means cluster analysis with K=2 as a binary classifier
|
Be careful of artifacts.
K-means assumes that every attribute has the same weight.
If, say, one attribute is the height in meters and the other is the weight in grams, then the result of k-means will depend almost exclusively on the weight.
If that attribute then happens to be useful for separating your two classes, the outcome will look much more impressive than it logically is.
Visualize, visualize, visualize! Often such artifacts can be seen already in a primitive visualization. In your case, I recommend looking at histograms as well as scatterplots; both with class labels and clusters visualized.
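As a minimal sketch of the scaling point in R (df, height_m, weight_g and class are placeholders for your own data and label):
vars <- c("height_m", "weight_g")
km.raw    <- kmeans(df[, vars], centers = 2)          # distances dominated by weight_g
km.scaled <- kmeans(scale(df[, vars]), centers = 2)   # each attribute contributes comparably
plot(df[, vars], col = km.scaled$cluster, pch = as.numeric(factor(df$class)))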
|
K-means cluster analysis with K=2 as a binary classifier
|
Be careful of artifacts.
K-means assumes that every attribute has the same weight.
If, say, one attribute is the height in meters, and the other is the weight in g, then the result of k-means will dep
|
K-means cluster analysis with K=2 as a binary classifier
Be careful of artifacts.
K-means assumes that every attribute has the same weight.
If, say, one attribute is the height in meters and the other is the weight in grams, then the result of k-means will depend almost exclusively on the weight.
If that attribute then happens to be useful for separating your two classes, the outcome will look much more impressive than it logically is.
Visualize, visualize, visualize! Often such artifacts can be seen already in a primitive visualization. In your case, I recommend looking at histograms as well as scatterplots; both with class labels and clusters visualized.
|
K-means cluster analysis with K=2 as a binary classifier
Be careful of artifacts.
K-means assumes that every attribute has the same weight.
If, say, one attribute is the height in meters, and the other is the weight in g, then the result of k-means will dep
|
52,164 |
How to explain borderline p-values to non-stats people
|
Use "words", do not talk to non-technical people about p-values. They won't understand.
Use your domain knowledge. It might be on the borderline of the 5% significance level, but your domain knowledge might tell you that it is an important factor.
If your domain knowledge tells you that the coefficient must be positive (or must be negative), then you can do a one-sided test.
It is still significant at the 10% level anyway.
Edit:
To address @whuber's comment: my answer was intended to focus on domain knowledge, not on obscuring the fact that a statistical test failed. In short, see @whuber's comment.
|
How to explain borderline p-values to non-stats people
|
Use "words", do not talk to non-technical people about p-values. They won't understand.
Use your domain knowledge. It might be on the borderline of the 5% significance level, but your domain knowledge
|
How to explain borderline p-values to non-stats people
Use "words", do not talk to non-technical people about p-values. They won't understand.
Use your domain knowledge. It might be on the borderline of the 5% significance level, but your domain knowledge might tell you that it is an important factor.
If your domain knowledge tells you that the coefficient must be positive (or must be negative), then you can do a one-sided test.
It is still significant at the 10% level anyway.
Edit:
To address @whuber's comment: my answer was intended to focus on domain knowledge, not on obscuring the fact that a statistical test failed. In short, see @whuber's comment.
|
How to explain borderline p-values to non-stats people
Use "words", do not talk to non-technical people about p-values. They won't understand.
Use your domain knowledge. It might be on the borderline of the 5% significance level, but your domain knowledge
|
52,165 |
How to explain borderline p-values to non-stats people
|
Tell the story you found, rather than explaining the maths behind your findings.
Non-statistical people want to know the stories behind the data. Usually, they use your analysis to make decisions quickly.
|
How to explain borderline p-values to non-stats people
|
Tell the story you found out, not to explain the maths behind your findings.
Non statistical-people wants to know about the stories behind the data. Usually, they use your analysis to take decision in
|
How to explain borderline p-values to non-stats people
Tell the story you found, rather than explaining the maths behind your findings.
Non-statistical people want to know the stories behind the data. Usually, they use your analysis to make decisions quickly.
|
How to explain borderline p-values to non-stats people
Tell the story you found out, not to explain the maths behind your findings.
Non statistical-people wants to know about the stories behind the data. Usually, they use your analysis to take decision in
|
52,166 |
How to explain borderline p-values to non-stats people
|
Drop the hypothesis test and the p value altogether. I'm not confident that 95% of "stats people" really understand p values; with "non-stats people", I'd bet it's less than half. Don't go there unless you must.
In your example, it sounds like you're trying to reject the null that the slope parameter for the predictor in question is zero. Is that really what you want to do? Might you prefer to estimate what the slope parameter actually is, and express your degree of confidence in that estimate? My guess is that most non-stats people would prefer the latter. It doesn't just take statistical education to understand a p value; it also takes a fair amount of epistemology to appreciate the value of falsification, and why a big scientific study could somehow not produce good enough results to conclude anything trustworthy enough to deserve printing. Conversely, positively framed information appeals to basic intuition.
Consider presenting your results as effect size estimates with confidence intervals. This won't entail falsificationism, will focus on the information gained from your study, and will communicate what needs to be said about the population from whence you sampled. The slope coefficient itself can be interpreted in simple terms of size (e.g., a standardized $\beta=.15$ is probably a fairly weak relationship, but if you have 3× larger standard error, maybe your $\beta=.45$; that's a fairly strong relationship in many contexts), or in literal terms of the regression equation (e.g., for one unit increase in your predictor, your model estimates that the outcome increases by b). Either way, it's much more intuitive than the correct understanding of a p value. It's somewhat different information, but again, it's probably the information a non-academic audience would want first. They might even have a healthily skeptical attitude toward the results of your study with regard to conclusions about the population based on intuition alone, so I don't think you cause any harm necessarily and inherently by focusing attention on your study's estimates instead of on its implications for the population.
For academic, statistics, and all other kinds of audiences, confidence intervals should suffice as the vehicle of information regarding the population. In your case, while presenting your effect size estimate and suggesting its implications, it may be worthwhile to draw attention to the marginally >5% chance that the real relationship between predictor and outcome could actually operate very (probably even negligibly) weakly in the opposite direction from what your study indicates in the overall population...or it may not. Non-academic audiences may be just as content with a 90% confidence interval that leaves only a 10% doubt that your study has enumerated the correct range of possibilities for the strength of the relationship in question, and if you modeled your data correctly, you can be 90% confident that there is a relationship in the direction your model indicates. Furthermore, a confidence interval draws attention to the possibility that the relationship is larger than your model estimates. IMO, focusing on null hypotheses tends to draw attention toward the more dismissive half of the confidence interval, but that is only half of the story.
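As a minimal sketch in R, assuming fit is your fitted regression model and "predictor" is the name of the coefficient of interest:
coef(fit)["predictor"]                    # the estimated slope
confint(fit, "predictor", level = 0.90)   # a 90% confidence interval for that slope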
|
How to explain borderline p-values to non-stats people
|
Drop the hypothesis test and the p value altogether. I'm not confident that 95% of "stats people" really understand p values; with "non-stats people", I'd bet it's less than half. Don't go there unles
|
How to explain borderline p-values to non-stats people
Drop the hypothesis test and the p value altogether. I'm not confident that 95% of "stats people" really understand p values; with "non-stats people", I'd bet it's less than half. Don't go there unless you must.
In your example, it sounds like you're trying to reject the null that the slope parameter for the predictor in question is zero. Is that really what you want to do? Might you prefer to estimate what the slope parameter actually is, and express your degree of confidence in that estimate? My guess is that most non-stats people would prefer the latter. It doesn't just take statistical education to understand a p value; it also takes a fair amount of epistemology to appreciate the value of falsification, and why a big scientific study could somehow not produce good enough results to conclude anything trustworthy enough to deserve printing. Conversely, positively framed information appeals to basic intuition.
Consider presenting your results as effect size estimates with confidence intervals. This won't entail falsificationism, will focus on the information gained from your study, and will communicate what needs to be said about the population from whence you sampled. The slope coefficient itself can be interpreted in simple terms of size (e.g., a standardized $\beta=.15$ is probably a fairly weak relationship, but if you have 3× larger standard error, maybe your $\beta=.45$; that's a fairly strong relationship in many contexts), or in literal terms of the regression equation (e.g., for one unit increase in your predictor, your model estimates that the outcome increases by b). Either way, it's much more intuitive than the correct understanding of a p value. It's somewhat different information, but again, it's probably the information a non-academic audience would want first. They might even have a healthily skeptical attitude toward the results of your study with regard to conclusions about the population based on intuition alone, so I don't think you cause any harm necessarily and inherently by focusing attention on your study's estimates instead of on its implications for the population.
For academic, statistics, and all other kinds of audiences, confidence intervals should suffice as the vehicle of information regarding the population. In your case, while presenting your effect size estimate and suggesting its implications, it may be worthwhile to draw attention to the marginally >5% chance that the real relationship between predictor and outcome could actually operate very (probably even negligibly) weakly in the opposite direction from what your study indicates in the overall population...or it may not. Non-academic audiences may be just as content with a 90% confidence interval that leaves only a 10% doubt that your study has enumerated the correct range of possibilities for the strength of the relationship in question, and if you modeled your data correctly, you can be 90% confident that there is a relationship in the direction your model indicates. Furthermore, a confidence interval draws attention to the possibility that the relationship is larger than your model estimates. IMO, focusing on null hypotheses tends to draw attention toward the more dismissive half of the confidence interval, but that is only half of the story.
|
How to explain borderline p-values to non-stats people
Drop the hypothesis test and the p value altogether. I'm not confident that 95% of "stats people" really understand p values; with "non-stats people", I'd bet it's less than half. Don't go there unles
|
52,167 |
How to explain borderline p-values to non-stats people
|
Keep it simple.
I believe this correlation to be valid, but with the data I have at
the moment, it may or may not be. I need at least twice (or 3X or
10X, etc.) the data to be sure, one way or the other.
If that doesn't get a satisfactory response, consider an analogy.
We all know that home prices are related to square footage, right? If
I gathered a small sampling of home sales - say 5 in Beverly Hills, 5
in rural Montana, and 1 in New York City - we probably wouldn't be
able to prove it statistically. That's where I'm at now. My
instincts tell me this relationship is valid. My data is telling me
"maybe, maybe not". It has crossed the threshold into maybe not
territory, but just barely.
Regardless, I would stay away from discussing p-values altogether. The topic will make a classroom full of students who want to do statistical work for a living fall asleep.
|
How to explain borderline p-values to non-stats people
|
Keep it simple.
I believe this correlation to be valid, but with the data I have at
the moment, it may or may not be. I need at least twice (or 3X or
10X, etc.) the data to be sure, one way or t
|
How to explain borderline p-values to non-stats people
Keep it simple.
I believe this correlation to be valid, but with the data I have at
the moment, it may or may not be. I need at least twice (or 3X or
10X, etc.) the data to be sure, one way or the other.
If that doesn't get a satisfactory response, consider an analogy.
We all know that home prices are related to square footage, right? If
I gathered a small sampling of home sales - say 5 in Beverly Hills, 5
in rural Montana, and 1 in New York City - we probably wouldn't be
able to prove it statistically. That's where I'm at now. My
instincts tell me this relationship is valid. My data is telling me
"maybe, maybe not". It has crossed the threshold into maybe not
territory, but just barely.
Regardless, I would stay away from discussing p-values altogether. The topic will make a classroom full of students who want to do statistical work for a living fall asleep.
|
How to explain borderline p-values to non-stats people
Keep it simple.
I believe this correlation to be valid, but with the data I have at
the moment, it may or may not be. I need at least twice (or 3X or
10X, etc.) the data to be sure, one way or t
|
52,168 |
How to explain borderline p-values to non-stats people
|
If you think 300 points is limited, then that's the most important argument. Hypothesis tests are designed to keep the null unless there is strong evidence against it.
Absence of evidence is not evidence of absence.
I guess non-stats people should get this too.
|
How to explain borderline p-values to non-stats people
|
If you think 300 points is limited, than that's the most important argumentation. Hypothesis tests are designed towards keeping the null, unless there is strong evidence against it.
Absence of evide
|
How to explain borderline p-values to non-stats people
If you think 300 points is limited, then that's the most important argument. Hypothesis tests are designed to keep the null unless there is strong evidence against it.
Absence of evidence is not evidence of absence.
I guess non-stats people should get this too.
|
How to explain borderline p-values to non-stats people
If you think 300 points is limited, than that's the most important argumentation. Hypothesis tests are designed towards keeping the null, unless there is strong evidence against it.
Absence of evide
|
52,169 |
How to explain borderline p-values to non-stats people
|
I think even non-stats people can understand a distribution curve - most people have seen that in high school and/or college. So you could also show it visually in the sense that people will understand rejection regions vis-a-vis your null hypothesis, etc. You could literally shade those in - "here's where we make this decision and here's where our test came up. See how close that is?". I think that would get your desired effect across.
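A rough R sketch of such a picture (the cutoff is the two-sided 5% critical value of a standard normal; the observed statistic of 1.85 is purely illustrative):
x <- seq(-4, 4, length.out = 400)
plot(x, dnorm(x), type = "l", xlab = "test statistic", ylab = "density")
crit <- qnorm(0.975)                                                         # two-sided 5% cutoff
polygon(c(crit, x[x >= crit]), c(0, dnorm(x[x >= crit])), col = "grey")      # right rejection region
polygon(c(-crit, x[x <= -crit]), c(0, dnorm(x[x <= -crit])), col = "grey")   # left rejection region
abline(v = 1.85, col = "red", lwd = 2)                                       # where the test 'came up'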
|
How to explain borderline p-values to non-stats people
|
I think even non-stats people can understand a distribution curve - most people have seen that in high school and/or college. So you could also show it visually in the sense that people will understan
|
How to explain borderline p-values to non-stats people
I think even non-stats people can understand a distribution curve - most people have seen that in high school and/or college. So you could also show it visually in the sense that people will understand rejection regions vis-a-vis your null hypothesis, etc. You could literally shade those in - "here's where we make this decision and here's where our test came up. See how close that is?". I think that would get your desired effect across.
|
How to explain borderline p-values to non-stats people
I think even non-stats people can understand a distribution curve - most people have seen that in high school and/or college. So you could also show it visually in the sense that people will understan
|
52,170 |
Does a Neural Network actually need an activation function or is that just for Back Propagation?
|
You built a multilayer neural network with a linear hidden layer. Linear units in the hidden layer negate the purpose of having a hidden layer. The weights between your inputs and the hidden layer, and the weights between the hidden layer and the output layer, are effectively a single set of weights. A neural network with a single set of weights is a linear model performing regression.
Here's a vector of your linear hidden units
$$
H = [h_1, h_2,.. ,h_n]
$$
The equation that governs the forward propagation of $x$ through your network is then
$$
\bar{y} = W'(Hx) \Rightarrow (W'H)x
$$
Thus an n-layered feed-forward neural network with linear hidden layers is equivalent to an output layer given by
$$
W=W'\prod_i H_i
$$
If you only have linear units then the hidden layer(s) are doing nothing. Hinton et al. recommend rectified linear units, which are $\text{max}(0, x)$. It's simple and doesn't suffer from the vanishing gradient problem of sigmoidal functions. Similarly you might choose the soft-plus function, $\log(1 + e^x)$, which is a non-sparse smooth approximation.
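A quick numerical check of the collapse described above, with randomly chosen illustrative weights (W plays the role of $W'$):
set.seed(1)
x <- rnorm(5)                      # one input vector
H <- matrix(rnorm(3 * 5), 3, 5)    # input -> hidden weights (linear hidden units)
W <- matrix(rnorm(1 * 3), 1, 3)    # hidden -> output weights
all.equal(as.numeric(W %*% (H %*% x)), as.numeric((W %*% H) %*% x))   # TRUE
relu     <- function(z) pmax(0, z)        # rectified linear unit
softplus <- function(z) log(1 + exp(z))   # smooth, non-sparse approximation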
|
Does a Neural Network actually need an activation function or is that just for Back Propagation?
|
You built a multilayer neural network with a linear hidden layer. Linear units in the hidden layer negates the purpose of having a hidden layer. The weights between your inputs and the hidden layer, a
|
Does a Neural Network actually need an activation function or is that just for Back Propagation?
You built a multilayer neural network with a linear hidden layer. Linear units in the hidden layer negate the purpose of having a hidden layer. The weights between your inputs and the hidden layer, and the weights between the hidden layer and the output layer, are effectively a single set of weights. A neural network with a single set of weights is a linear model performing regression.
Here's a vector of your linear hidden units
$$
H = [h_1, h_2,.. ,h_n]
$$
The equation that governs the forward propagation of $x$ through your network is then
$$
\bar{y} = W'(Hx) \Rightarrow (W'H)x
$$
Thus an n-layered feed-forward neural network with linear hidden layers is equivalent to a single output layer given by
$$
W=W'\prod_i H_i
$$
If you only have linear units then the hidden layer(s) are doing nothing. Hinton et al. recommend rectified linear units, $\text{max}(0, x)$: they are simple and don't suffer from the vanishing-gradient problem of sigmoidal functions. Similarly, you might choose the soft-plus function, $\log(1 + e^x)$, which is a non-sparse smooth approximation.
|
Does a Neural Network actually need an activation function or is that just for Back Propagation?
You built a multilayer neural network with a linear hidden layer. Linear units in the hidden layer negates the purpose of having a hidden layer. The weights between your inputs and the hidden layer, a
|
52,171 |
Does a Neural Network actually need an activation function or is that just for Back Propagation?
|
If you don't have non-linear activation functions, then you end up with a network whose expressive power is no greater than that of a linear model. Simply view it as a linear algebra problem. Intuitively, if each layer is a linear transformation encoded by a matrix $A_i$ and you apply several of them in sequence to an initial vector $x$, you still end up with a single linear transformation:
$$ T_1( ... T_n(x) ) = A_1 \cdot ... \cdot A_n x $$
Essentially, if you only move points in ways that keep grid lines parallel and evenly spaced, you can't introduce a curve, so everything remains linear.
|
Does a Neural Network actually need an activation function or is that just for Back Propagation?
|
If you don't have non-linear activation functions, then you end up with a network as powerful in its expressive power as a linear model. Simply view it as a linear algebra problem. Intuitively if you
|
Does a Neural Network actually need an activation function or is that just for Back Propagation?
If you don't have non-linear activation functions, then you end up with a network whose expressive power is no greater than that of a linear model. Simply view it as a linear algebra problem. Intuitively, if each layer is a linear transformation encoded by a matrix $A_i$ and you apply several of them in sequence to an initial vector $x$, you still end up with a single linear transformation:
$$ T_1( ... T_n(x) ) = A_1 \cdot ... \cdot A_n x $$
Essentially, if you only move points in ways that keep grid lines parallel and evenly spaced, you can't introduce a curve, so everything remains linear.
|
Does a Neural Network actually need an activation function or is that just for Back Propagation?
If you don't have non-linear activation functions, then you end up with a network as powerful in its expressive power as a linear model. Simply view it as a linear algebra problem. Intuitively if you
|
52,172 |
median(a)/median(b) not equal median(a/b)
|
This is a property of mathematics: it is actually rare that the order of operations does not matter, e.g. the log of the square root is not the same as the square root of the log (except for a few special cases).
We often focus on some of those special cases where due to operations distributing, associating, and commuting (flashbacks to algebra, oh no!) we can do things in either order. For example to compute a mean we can either add the numbers together then divide the sum by $n$, or we can divide each number by $n$ then sum those values. This is because division (multiplication) distributes over addition. With paired data we have the fact that the mean of the differences is the difference of the means. But these are the rarer cases, not the rule.
So in general you should not expect to get the same result when you do things in a different order, it is also not true that the mean of the ratios is the ratio of the means, so why should it be true for the median?
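A two-line counterexample in R (with arbitrary numbers) makes the point concrete:
a <- c(1, 2, 3)
b <- c(3, 1, 2)
median(a)/median(b)   # 2/2 = 1
median(a/b)           # median(1/3, 2, 1.5) = 1.5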
|
median(a)/median(b) not equal median(a/b)
|
This is a property of mathematics, it is actually rare that the order of operations does not matter, e.g. the log of the square root is not the same as the square root of the log (except for a few spe
|
median(a)/median(b) not equal median(a/b)
This is a property of mathematics: it is actually rare that the order of operations does not matter, e.g. the log of the square root is not the same as the square root of the log (except for a few special cases).
We often focus on some of those special cases where due to operations distributing, associating, and commuting (flashbacks to algebra, oh no!) we can do things in either order. For example to compute a mean we can either add the numbers together then divide the sum by $n$, or we can divide each number by $n$ then sum those values. This is because division (multiplication) distributes over addition. With paired data we have the fact that the mean of the differences is the difference of the means. But these are the rarer cases, not the rule.
So in general you should not expect to get the same result when you do things in a different order, it is also not true that the mean of the ratios is the ratio of the means, so why should it be true for the median?
|
median(a)/median(b) not equal median(a/b)
This is a property of mathematics, it is actually rare that the order of operations does not matter, e.g. the log of the square root is not the same as the square root of the log (except for a few spe
|
52,173 |
median(a)/median(b) not equal median(a/b)
|
You might find it even more surprising to discover that even with a nice linear operator like expectation, you still have this issue (median is non linear, mean is linear):
$$\text{mean}(A)/\text{mean}(B) \neq \text{mean}(A/B)$$
For example:
a=1:5
b=6:10
mean(a)/mean(b)
[1] 0.375
mean(a/b)
[1] 0.3543651
But then you should already know that in general $E(XY)\neq E(X)E(Y)$, so perhaps this shouldn't surprise at all!
|
median(a)/median(b) not equal median(a/b)
|
You might find it even more surprising to discover that even with a nice linear operator like expectation, you still have this issue (median is non linear, mean is linear):
$$\text{mean}(A)/\text{mean
|
median(a)/median(b) not equal median(a/b)
You might find it even more surprising to discover that even with a nice linear operator like expectation, you still have this issue (median is non linear, mean is linear):
$$\text{mean}(A)/\text{mean}(B) \neq \text{mean}(A/B)$$
For example:
a=1:5
b=6:10
mean(a)/mean(b)
[1] 0.375
mean(a/b)
[1] 0.3543651
But then you should already know that in general $E(XY)\neq E(X)E(Y)$, so perhaps this shouldn't surprise at all!
|
median(a)/median(b) not equal median(a/b)
You might find it even more surprising to discover that even with a nice linear operator like expectation, you still have this issue (median is non linear, mean is linear):
$$\text{mean}(A)/\text{mean
|
52,174 |
median(a)/median(b) not equal median(a/b)
|
Erm, for the (obvious?) reason that the fractional representation of a number is not unique, the medians can't be the same.
EDIT: as per Glen's request
The location of the median relies on the ordering of the numbers in your set. Suppose you order your numbers from smallest to largest, so you have something like {1, 2, 3}. A transformation keeps the median in the same position only if it preserves that ordering. For example, if you add 1 to every number in your set, the location of the median doesn't change: {2, 3, 4}, i.e. it is still in the second position.
Any linear transformation maintains the order. The maintenance of order is key. That's the "property of mathematics" that's really being referred to below. (And how you define order is also key. Note that order is essentially a notion of distance. 2 is closer to 3 than to 4 because the distance between 2 and 3 is smaller than the distance between 2 and 4). That's why, for positive data, in some instances, it is allowed to apply a log transform of your data - you haven't fundamentally altered the ordering of the numbers, and thus you haven't changed the underlying relationship between your variables. You can do a log transform for say, income data, but you can't do it for inflation data.
If a transformation is not linear, the order is not necessarily maintained. Transforming every number into a fraction is not a linear transformation because the fractional representation of a number is not unique. 1/2 is the same as 2/4. That's why for "large" sets the location of the median changes if you transform your numbers into fractions in the way that you described. For large enough sets, you're eventually going to run into a situation where you have the same fraction in multiple places. If that happens, your set "shrinks" and so the median must change.
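A quick numerical illustration of the order-preservation point (with arbitrary numbers): an order-preserving shift moves the median along with the data, while a non-monotone transform need not.
x <- c(1, 2, 3)
median(x + 1)        # 3, which equals median(x) + 1
median((x - 2)^2)    # 1, but (median(x) - 2)^2 = 0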
|
median(a)/median(b) not equal median(a/b)
|
Erm, for the (obvious?) reason that the fractional representation of a number is not unique, the medians can't be the same.
EDIT: as per Glen's request
The location of the median relies on the orderi
|
median(a)/median(b) not equal median(a/b)
Erm, for the (obvious?) reason that the fractional representation of a number is not unique, the medians can't be the same.
EDIT: as per Glen's request
The location of the median relies on the ordering of the numbers in your set. Suppose you order your numbers from smallest to largest, so you have something like {1, 2, 3}. A transformation keeps the median in the same position only if it preserves that ordering. For example, if you add 1 to every number in your set, the location of the median doesn't change: {2, 3, 4}, i.e. it is still in the second position.
Any linear transformation maintains the order. The maintenance of order is key. That's the "property of mathematics" that's really being referred to below. (And how you define order is also key. Note that order is essentially a notion of distance. 2 is closer to 3 than to 4 because the distance between 2 and 3 is smaller than the distance between 2 and 4). That's why, for positive data, in some instances, it is allowed to apply a log transform of your data - you haven't fundamentally altered the ordering of the numbers, and thus you haven't changed the underlying relationship between your variables. You can do a log transform for say, income data, but you can't do it for inflation data.
If a transformation is not linear, the order is not necessarily maintained. Transforming every number into a fraction is not a linear transformation because the fractional representation of a number is not unique. 1/2 is the same as 2/4. That's why for "large" sets the location of the median changes if you transform your numbers into fractions in the way that you described. For large enough sets, you're eventually going to run into a situation where you have the same fraction in multiple places. If that happens, your set "shrinks" and so the median must change.
|
median(a)/median(b) not equal median(a/b)
Erm, for the (obvious?) reason that the fractional representation of a number is not unique, the medians can't be the same.
EDIT: as per Glen's request
The location of the median relies on the orderi
|
52,175 |
Concerns about the size of odds-ratio estimates in binary logistic regression model
|
Thanks for adding the table
Given this, I think the OR is fine. The logistic regression you posted also included another variable, but, in the table, the OR for L1 = 4 vs. L1 = 0 is $\frac{17537*1328}{1284*44} = 412.23$ which is actually a little larger than the one from the regression.
It's a very strong relationship.
|
Concerns about the size of odds-ratio estimates in binary logistic regression model
|
Thanks for adding the table
Given this, I think the OR is fine. The logistic regression you posted also included another variable, but, in the table, the OR for L1 = 4 vs. L1 = 0 is $\frac{17537*1328}
|
Concerns about the size of odds-ratio estimates in binary logistic regression model
Thanks for adding the table
Given this, I think the OR is fine. The logistic regression you posted also included another variable, but, in the table, the OR for L1 = 4 vs. L1 = 0 is $\frac{17537*1328}{1284*44} = 412.23$ which is actually a little larger than the one from the regression.
It's a very strong relationship.
|
Concerns about the size of odds-ratio estimates in binary logistic regression model
Thanks for adding the table
Given this, I think the OR is fine. The logistic regression you posted also included another variable, but, in the table, the OR for L1 = 4 vs. L1 = 0 is $\frac{17537*1328}
|
52,176 |
Concerns about the size of odds-ratio estimates in binary logistic regression model
|
An important point needs to be added to Peter's good answer:
372 times as likely
is definitely not correct, although you would be about the ten gazillionth person to make this mistake. A number close to 400 is the ratio not of two probabilities but of two odds. The ratio of the two corresponding probabilities is [1328/(1328+44)] / [1284/(1284+17537)] = .968/.068 = 14.
One might substitute for "probability" the term "risk" or even "chance" or "likelihood," but it is not correct to substitute any of the above in place of "odds."
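For anyone who wants to reproduce these numbers, here is an R sketch; the 2x2 layout below is reconstructed from the calculations quoted in this and the previous answer, so treat the row and column labels as an assumption:
tab <- matrix(c(1284, 17537, 1328, 44), nrow = 2, byrow = TRUE,
              dimnames = list(L1 = c("0", "4"), outcome = c("yes", "no")))
odds <- tab[, "yes"]/tab[, "no"]
p <- tab[, "yes"]/rowSums(tab)
odds["4"]/odds["0"]   # odds ratio, roughly 412
p["4"]/p["0"]         # ratio of probabilities (risk ratio), roughly 14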
|
Concerns about the size of odds-ratio estimates in binary logistic regression model
|
An important point needs to be added to Peter's good answer:
372 times as likely
is definitely not correct, although you would be about the ten gazillionth person to make this mistake. A number cl
|
Concerns about the size of odds-ratio estimates in binary logistic regression model
An important point needs to be added to Peter's good answer:
372 times as likely
is definitely not correct, although you would be about the ten gazillionth person to make this mistake. A number close to 400 is the ratio not of two probabilities but of two odds. The ratio of the two corresponding probabilities is [1328/(1328+44)] / [1284/(1284+17537)] = .968/.068 = 14.
One might substitute for "probability" the term "risk" or even "chance" or "likelihood," but it is not correct to substitute any of the above in place of "odds."
|
Concerns about the size of odds-ratio estimates in binary logistic regression model
An important point needs to be added to Peter's good answer:
372 times as likely
is definitely not correct, although you would be about the ten gazillionth person to make this mistake. A number cl
|
52,177 |
Can someone give a simple guide of Dirichlet process clustering?
|
What is the difference between (Dirichlet) distribution and (Dirichlet) process?
The difference between a Dirichlet distribution and a Dirichlet process is perhaps easier to understand when you understand the difference between a Gaussian distribution and a Gaussian process. A Gaussian distribution pertains to the possible realizations of a single random variable. A Gaussian process constitutes a set of random variables in which this set is ordered. Often, this ordering is in time, but it doesn't need to be. We can draw a set of observations directly from the Gaussian process, or equivalently we can draw observations from a multivariate normal distribution (every linear combination of its components is normally distributed).
The Dirichlet distribution is a multivariate generalization of the beta distribution. It can be used to define probabilities for quantities that sum to 1; the quantities that come to mind first are, of course, probabilities themselves. :-) Parallel to the Gaussian process, the Dirichlet process can be seen as a set of random variables. In this case nothing is said about ordering, but something is said about a partition of a subset of these random variables. We can draw observations directly from a Dirichlet process, or equivalently we can draw observations from a (of course multivariate) Dirichlet distribution (though a linear combination of its components is not itself Dirichlet distributed [1]).
How is Pólya's urn or stick breaking related to the Dirichlet process?
Suppose we have drawn many observations from a Dirichlet process and we subsequently study the distribution of these observations. As you probably know, the remarkable feat of the Dirichlet process is to pick the same value from a continuous range of possible values time after time (governed by the alpha parameter). The frequencies with which different values are repeatedly picked can be generated in several ways; the Chinese Restaurant Process is just one of them. The Pólya urn scheme with colored balls that you duplicate and put back, where you pick a random new color whenever you draw a black ball, is exactly the same as the CRP. The stick-breaking process is different, though: it says nothing about the "values" (colors) attached to each piece of the stick that you break off in a generative fashion. All that is left after the breaking is the number of balls of each color, i.e. the distribution over the colors.
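If it helps, here is a small stick-breaking sketch in R for a draw from a DP with a standard normal base measure; the truncation level K and alpha = 1 are illustrative choices, not part of the original question:
set.seed(42)
alpha <- 1; K <- 100
v <- rbeta(K, 1, alpha)                  # stick-breaking proportions
w <- v * cumprod(c(1, 1 - v[-K]))        # mixture weights (their sum is close to 1)
atoms <- rnorm(K)                        # atom locations drawn from the base measure
draws <- sample(atoms, 1000, replace = TRUE, prob = w)
head(sort(table(round(draws, 2)), decreasing = TRUE))   # the same values recur, as described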
How to do Gibbs sampling?
The Dirichlet process has a very interesting property. In the CRP analogy, you can take a random customer out and "act as if" this were the first customer. In Gibbs sampling you repeatedly compute the conditional of one of the random variables in a multivariate distribution given all the others, so if there is a closed-form expression for this conditional, Gibbs sampling is very natural. The exchangeability property is worth exploring: try, for example, to calculate the probability that two random customers (e.g. the first and the last) sit at the same table when alpha=1. Gibbs sampling at the level of individual customers, though, is not so smart, because - if you recall the restaurant - there are many customers at the same table, so it would be nice to update an entire table directly. For implementation details like this, see Neal's technical report, which I also recommend [2].
[1] On the distribution of linear combinations of the components of a Dirichlet random vector (2000) Provost, Cheong
[2] Markov Chain Sampling Methods for Dirichlet Process Mixture Models (1998) Neal
|
Can someone give a simple guide of Dirichlet process clustering?
|
What is the difference between (Dirichlet) distribution and (Dirichlet) process?
The difference between a Dirichlet distribution and a Dirichlet process is perhaps easier to understand when you unders
|
Can someone give a simple guide of Dirichlet process clustering?
What is the difference between (Dirichlet) distribution and (Dirichlet) process?
The difference between a Dirichlet distribution and a Dirichlet process is perhaps easier to understand when you understand the difference between a Gaussian distribution and a Gaussian process. A Gaussian distribution pertains to the possible realizations of a single random variable. A Gaussian process constitutes a set of random variables in which this set is ordered. Often, this ordering is in time, but it doesn't need to be. We can draw a set of observations directly from the Gaussian process, or equivalently we can draw observations from a multivariate normal distribution (every linear combination of its components is normally distributed).
The Dirichlet distribution is a multivariate generalization of the beta distribution. It can be used to define probabilities for quantities that sum to 1; the quantities that come to mind first are, of course, probabilities themselves. :-) Parallel to the Gaussian process, the Dirichlet process can be seen as a set of random variables. In this case nothing is said about ordering, but something is said about a partition of a subset of these random variables. We can draw observations directly from a Dirichlet process, or equivalently we can draw observations from a (of course multivariate) Dirichlet distribution (though a linear combination of its components is not itself Dirichlet distributed [1]).
How is Pólya's urn or stick breaking related to the Dirichlet process?
Suppose we have drawn many observations from a Dirichlet process and we subsequently study the distribution of these observations. As you probably know, the remarkable feat of the Dirichlet process is to pick the same value from a continuous range of possible values time after time (governed by the alpha parameter). The frequencies with which different values are repeatedly picked can be generated in several ways; the Chinese Restaurant Process is just one of them. The Pólya urn scheme with colored balls that you duplicate and put back, where you pick a random new color whenever you draw a black ball, is exactly the same as the CRP. The stick-breaking process is different, though: it says nothing about the "values" (colors) attached to each piece of the stick that you break off in a generative fashion. All that is left after the breaking is the number of balls of each color, i.e. the distribution over the colors.
How to do Gibbs sampling?
The Dirichlet process has a very interesting property. In the CRP analogy, you can take a random customer out and "act as if" this were the first customer. In Gibbs sampling you repeatedly compute the conditional of one of the random variables in a multivariate distribution given all the others, so if there is a closed-form expression for this conditional, Gibbs sampling is very natural. The exchangeability property is worth exploring: try, for example, to calculate the probability that two random customers (e.g. the first and the last) sit at the same table when alpha=1. Gibbs sampling at the level of individual customers, though, is not so smart, because - if you recall the restaurant - there are many customers at the same table, so it would be nice to update an entire table directly. For implementation details like this, see Neal's technical report, which I also recommend [2].
[1] On the distribution of linear combinations of the components of a Dirichlet random vector (2000) Provost, Cheong
[2] Markov Chain Sampling Methods for Dirichlet Process Mixture Models (1998) Neal
|
Can someone give a simple guide of Dirichlet process clustering?
What is the difference between (Dirichlet) distribution and (Dirichlet) process?
The difference between a Dirichlet distribution and a Dirichlet process is perhaps easier to understand when you unders
|
52,178 |
Can someone give a simple guide of Dirichlet process clustering?
|
These are two great tutorials,
"Introduction to the Dirichlet Distribution and Related Processes"
"A Very Gentle Note on the Construction of Dirichlet Process"
especially the first one, which includes a reference to a very succinct tutorial on measure theory. I would start with the first one, because it begins by introducing the Dirichlet distribution and sampling, and then moves on to the continuous case, the Dirichlet process. It helped me a lot to understand it.
The tutorials address the first two questions, and the sampling schemes are derived in detail. The Dirichlet process is a generalization of the Dirichlet distribution.
The Dirichlet distribution is a distribution over the distributions modelling discrete events from a given number of categories. In other words, you can model densities of probability of categorical variables with a Dirichlet distribution. The Dirichlet process allows you to model distributions over continuous variables.
|
Can someone give a simple guide of Dirichlet process clustering?
|
These are two great tutorials,
"Introduction to the Dirichlet Distribution and Related Processes"
"A Very Gentle Note on the Construction of Dirichlet Process"
specially the first one, with a referenc
|
Can someone give a simple guide of Dirichlet process clustering?
These are two great tutorials,
"Introduction to the Dirichlet Distribution and Related Processes"
"A Very Gentle Note on the Construction of Dirichlet Process"
especially the first one, which includes a reference to a very succinct tutorial on measure theory. I would start with the first one, because it begins by introducing the Dirichlet distribution and sampling, and then moves on to the continuous case, the Dirichlet process. It helped me a lot to understand it.
The tutorials address the first two questions, and the sampling schemes are derived in detail. The Dirichlet process is a generalization of the Dirichlet distribution.
The Dirichlet distribution is a distribution over the distributions modelling discrete events from a given number of categories. In other words, you can model densities of probability of categorical variables with a Dirichlet distribution. The Dirichlet process allows you to model distributions over continuous variables.
|
Can someone give a simple guide of Dirichlet process clustering?
These are two great tutorials,
"Introduction to the Dirichlet Distribution and Related Processes"
"A Very Gentle Note on the Construction of Dirichlet Process"
specially the first one, with a referenc
|
52,179 |
When do you use AIC vs. BIC [duplicate]
|
The AIC and BIC optimize different things.
AIC is basically suitable for a situation where you don't necessarily think there's 'a model' so much as a bunch of effects of different sizes, and what you want is good predictive performance. As such, as the sample size expands, the AIC choice of model expands as well, as smaller and smaller effects become relevant (in the sense that including them is on average better than excluding them).
BIC on the other hand basically assumes the model is in the candidate set and you want to find it.
BIC tends to hone in on one model as the number of observations grows, AIC really doesn't.
As a result, at large $n$, AIC tends to pick somewhat larger models than BIC. If you're trying to understand what the main drivers are, you might want something more like BIC. If that's less important than good MSPE, you might lean more toward AIC.
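A small simulated illustration (the data are entirely made up): with n = 200 and one pure-noise predictor, BIC's log(n) penalty per parameter outweighs AIC's penalty of 2.
set.seed(1)
n <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y <- 1 + 0.5*x1 + rnorm(n)        # x2 is pure noise
small <- lm(y ~ x1); big <- lm(y ~ x1 + x2)
AIC(small, big)                   # penalty of 2 per extra parameter
BIC(small, big)                   # penalty of log(200), about 5.3, per extra parameter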
|
When do you use AIC vs. BIC [duplicate]
|
The AIC and BIC optimize different things.
AIC is basically suitable for a situation where you don't necessarily think there's 'a model' so much as a bunch of effects of different sizes, and you're i
|
When do you use AIC vs. BIC [duplicate]
The AIC and BIC optimize different things.
AIC is basically suitable for a situation where you don't necessarily think there's 'a model' so much as a bunch of effects of different sizes, and what you want is good predictive performance. As such, as the sample size expands, the AIC choice of model expands as well, as smaller and smaller effects become relevant (in the sense that including them is on average better than excluding them).
BIC on the other hand basically assumes the model is in the candidate set and you want to find it.
BIC tends to hone in on one model as the number of observations grows, AIC really doesn't.
As a result, at large $n$, AIC tends to pick somewhat larger models than BIC. If you're trying to understand what the main drivers are, you might want something more like BIC. If that's less important than good MSPE, you might lean more toward AIC.
|
When do you use AIC vs. BIC [duplicate]
The AIC and BIC optimize different things.
AIC is basically suitable for a situation where you don't necessarily think there's 'a model' so much as a bunch of effects of different sizes, and you're i
|
52,180 |
When do you use AIC vs. BIC [duplicate]
|
When used for forward or backward model selection, the BIC penalizes the number of parameters in the model to a greater extent than AIC. Consequently, you'll arrive at a model with fewer parameters in it, on average.
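For reference, with $k$ parameters, maximized likelihood $\hat L$ and $n$ observations, the standard definitions are
$$\mathrm{AIC} = 2k - 2\ln\hat L, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat L,$$
so the BIC penalty per parameter, $\ln n$, exceeds the AIC penalty of $2$ once $n \geq 8$.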
|
When do you use AIC vs. BIC [duplicate]
|
When used for forward or backward model selection, the BIC penalizes the number of parameters in the model to a greater extent than AIC. Consequently, you'll arrive at a model with fewer parameters in
|
When do you use AIC vs. BIC [duplicate]
When used for forward or backward model selection, the BIC penalizes the number of parameters in the model to a greater extent than AIC. Consequently, you'll arrive at a model with fewer parameters in it, on average.
|
When do you use AIC vs. BIC [duplicate]
When used for forward or backward model selection, the BIC penalizes the number of parameters in the model to a greater extent than AIC. Consequently, you'll arrive at a model with fewer parameters in
|
52,181 |
R model.tables() incorrect means – possible bug?
|
As you point out, the individual cell means match, but where you see the problem is in the marginal means. There are multiple ways to calculate the marginal means. Suppose that the data has information on sex (male/female) and age (old/young) and we want to calculate the margin for sex. One approach is to ignore the age variable and just take the mean of all the males and the mean of all the females. Another approach is to find the mean of males by averaging the mean of old males and the mean of young males (add the 2 means and divide by 2). In a balanced design those 2 methods will give the same answer (can be shown with simple algebra), but in the unbalanced case they will usually give different answers because the weight that each data point contributes to the overall mean is different. With model based means you can get different weightings from the 2 I mentioned (I used them for examples as simple ways to understand). I expect in your case that R and SPSS are likely using different approaches.
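A tiny example shows how far apart the two approaches can be; the numbers below are invented purely for illustration.
old_males <- c(10, 12, 14)      # three old males
young_males <- c(20)            # one young male
mean(c(old_males, young_males))                 # raw mean of all males: 14
mean(c(mean(old_males), mean(young_males)))     # average of the two cell means: 16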
|
R model.tables() incorrect means – possible bug?
|
As you point out, the individual cell means match, but where you see the problem is in the marginal means. There are multiple ways to calculate the marginal means. Suppose that the data has informat
|
R model.tables() incorrect means – possible bug?
As you point out, the individual cell means match, but where you see the problem is in the marginal means. There are multiple ways to calculate the marginal means. Suppose that the data has information on sex (male/female) and age (old/young) and we want to calculate the margin for sex. One approach is to ignore the age variable and just take the mean of all the males and the mean of all the females. Another approach is to find the mean of males by averaging the mean of old males and the mean of young males (add the 2 means and divide by 2). In a balanced design those 2 methods will give the same answer (can be shown with simple algebra), but in the unbalanced case they will usually give different answers because the weight that each data point contributes to the overall mean is different. With model based means you can get different weightings from the 2 I mentioned (I used them for examples as simple ways to understand). I expect in your case that R and SPSS are likely using different approaches.
|
R model.tables() incorrect means – possible bug?
As you point out, the individual cell means match, but where you see the problem is in the marginal means. There are multiple ways to calculate the marginal means. Suppose that the data has informat
|
52,182 |
R model.tables() incorrect means – possible bug?
|
@mnel is correct in that because of the unbalanced design, the order of the terms matters in the output of model.tables.
ADDED: In the help file for aov, we read that it "is designed for balanced designs, and the results can be hard to interpret without balance." So if you want simple descriptive statistics, better to ask for them directly.
Now, it would be better if you had posted a full data set yourself, even if you had to make up an alternate one that showed the same problem. But you got lucky and a curious reader wanted to know what was going on, so I did that for you. Here's a sample data set:
library(reshape2)
set.seed(5)
d <- expand.grid(a=factor(LETTERS[1:2]), b=factor(letters[1:2]))
d <- d[rep(1:4, c(15,9,11,10)),]
d$y <- round(rnorm(nrow(d), mean=10, sd=2),1)
And we see that the order of the terms in the model matters (output truncated):
> model.tables(aov(y ~ a*b, data=d), "means")
a A B
10.43 9.921
b a b
9.843 10.64
> model.tables(aov(y ~ b*a, data=d), "means")
b a b
9.867 10.61
a A B
10.46 9.877
The first term in the model agrees with the actual mean and the other is different.
> tapply(d$y, d$a, mean)
A B
10.426923 9.921053
> tapply(d$y, d$b, mean)
a b
9.866667 10.609524
Note that I say different, not wrong. It's telling you something correct about the model. I'm not sure what, actually, but I'm curious enough that I may look into the code for model.tables to see what. (Or maybe not, it's getting late.)
|
R model.tables() incorrect means – possible bug?
|
@mnel is correct in that because of the unbalanced design, the order of the terms matter in the output of model.tables.
ADDED: In the help file for aov, we read that it "is designed for balanced des
|
R model.tables() incorrect means – possible bug?
@mnel is correct in that because of the unbalanced design, the order of the terms matters in the output of model.tables.
ADDED: In the help file for aov, we read that it "is designed for balanced designs, and the results can be hard to interpret without balance." So if you want simple descriptive statistics, better to ask for them directly.
Now, it would be better if you had posted a full data set yourself, even if you had to make up an alternate one that showed the same problem. But you got lucky and a curious reader wanted to know what was going on, so I did that for you. Here's a sample data set:
library(reshape2)
set.seed(5)
d <- expand.grid(a=factor(LETTERS[1:2]), b=factor(letters[1:2]))
d <- d[rep(1:4, c(15,9,11,10)),]
d$y <- round(rnorm(nrow(d), mean=10, sd=2),1)
And we see that the order of the terms in the model matters (output truncated):
> model.tables(aov(y ~ a*b, data=d), "means")
a A B
10.43 9.921
b a b
9.843 10.64
> model.tables(aov(y ~ b*a, data=d), "means")
b a b
9.867 10.61
a A B
10.46 9.877
The first term in the model agrees with the actual mean and the other is different.
> tapply(d$y, d$a, mean)
A B
10.426923 9.921053
> tapply(d$y, d$b, mean)
a b
9.866667 10.609524
Note that I say different, not wrong. It's telling you something correct about the model. I'm not sure what, actually, but I'm curious enough that I may look into the code for model.tables to see what. (Or maybe not, it's getting late.)
|
R model.tables() incorrect means – possible bug?
@mnel is correct in that because of the unbalanced design, the order of the terms matter in the output of model.tables.
ADDED: In the help file for aov, we read that it "is designed for balanced des
|
52,183 |
R model.tables() incorrect means – possible bug?
|
Watch out: the model.tables() function is only intended for balanced designs. If you want the marginal means for an unbalanced design you should use the popMeans() function. Imagine you have the following model:
Check.Model <- aov(dependent ~ factor1 + factor2, data=data.data)
If you want the marginal means for the levels of factor1 (i.e., averaged over the levels of factor2) in an unbalanced design, you should use the popMeans() function from the doBy package:
popMeans(Check.Model, eff=c("factor1"))
|
R model.tables() incorrect means – possible bug?
|
Watch out: the model.tables() function only works with balanced designs. If you want to have the marginal means for unbalanced design you should use the popMeans() function. Imagine you have the follo
|
R model.tables() incorrect means – possible bug?
Watch out: the model.tables() function is only intended for balanced designs. If you want the marginal means for an unbalanced design you should use the popMeans() function. Imagine you have the following model:
Check.Model <- aov(dependent ~ factor1 + factor2, data=data.data)
If you want the marginal means for the levels of factor1 (i.e., averaged over the levels of factor2) in an unbalanced design, you should use the popMeans() function from the doBy package:
popMeans(Check.Model, eff=c("factor1"))
|
R model.tables() incorrect means – possible bug?
Watch out: the model.tables() function only works with balanced designs. If you want to have the marginal means for unbalanced design you should use the popMeans() function. Imagine you have the follo
|
52,184 |
Why doesn't the Cramér-Rao lower bound apply?
|
Are you aware of the three regularity conditions that must be satisfied for the CR lower bound to apply? It looks like it violates the condition that the bounds of the distribution function must not depend upon the quantity being estimated. $\theta$ determines the bounds of the distribution. See the Wikipedia article, regularity condition 1: http://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Rao_bound
|
Why doesn't the Cramér-Rao lower bound apply?
|
Are you aware of the three regularity conditions that must be satisfied for the CR lower bound to apply? It looks like it violates the condition that the bounds of the distribution function must not d
|
Why doesn't the Cramér-Rao lower bound apply?
Are you aware of the three regularity conditions that must be satisfied for the CR lower bound to apply? It looks like it violates the condition that the bounds of the distribution function must not depend upon the quantity being estimated. $\theta$ determines the bounds of the distribution. See the Wikipedia article, regularity condition 1: http://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Rao_bound
|
Why doesn't the Cramér-Rao lower bound apply?
Are you aware of the three regularity conditions that must be satisfied for the CR lower bound to apply? It looks like it violates the condition that the bounds of the distribution function must not d
|
52,185 |
Why doesn't the Cramér-Rao lower bound apply?
|
The Cramer-Rao Lower Bound (CRLB) is valid only for densities that are sufficiently regular. In particular, the support of the density f(x; θ) cannot depend upon the parameter θ. This is because f(x; θ) must be such that the order of integration of f(x; θ) with respect to x and differentiation of f(x; θ) with respect to θ can be interchanged. For the example you provided, the support of f(x; θ) depends upon the parameter (0 < x < 3θ). Therefore, the CRLB does not apply.
The regularity conditions come from the proof of the CRLB. If they do not hold, the proof is invalid.
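To make this concrete, suppose for illustration that the density is uniform, $f(x;\theta) = \frac{1}{3\theta}$ on $0 < x < 3\theta$ (a guess at the setup, since the exact density is not shown here). The CRLB proof needs
$$\frac{\partial}{\partial\theta}\int_0^{3\theta} f(x;\theta)\,dx \;=\; \int_0^{3\theta} \frac{\partial}{\partial\theta} f(x;\theta)\,dx,$$
but the left-hand side equals $\frac{\partial}{\partial\theta}(1) = 0$, while the right-hand side equals $\int_0^{3\theta}\left(-\frac{1}{3\theta^2}\right)dx = -\frac{1}{\theta} \neq 0$, because naive differentiation under the integral sign ignores the $\theta$-dependent boundary. The score then does not have mean zero, which is exactly the step the proof relies on.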
|
Why doesn't the Cramér-Rao lower bound apply?
|
The Cramer-Rao Lower Bound (CRLB) is valid only for densities that are sufficiently regular. In particular, the support of the density f(x; θ) cannot depend upon the parameter θ. This is because f(x
|
Why doesn't the Cramér-Rao lower bound apply?
The Cramer-Rao Lower Bound (CRLB) is valid only for densities that are sufficiently regular. In particular, the support of the density f(x; θ) cannot depend upon the parameter θ. This is because f(x; θ) must be such that the order of integration of f(x; θ) with respect to x and differentiation of f(x; θ) with respect to θ can be interchanged. For the example you provided, the support of f(x; θ) depends upon the parameter (0 < x < 3θ). Therefore, the CRLB does not apply.
The regularity conditions come from the proof of the CRLB. If they do not hold, the proof is invalid.
|
Why doesn't the Cramér-Rao lower bound apply?
The Cramer-Rao Lower Bound (CRLB) is valid only for densities that are sufficiently regular. In particular, the support of the density f(x; θ) cannot depend upon the parameter θ. This is because f(x
|
52,186 |
Pie charts vs. dot plots
|
There are two different types of chart that are referred to as 'dotplots', and I think you are getting the two confused. The type of dotplot that it looks like you are thinking about is really a variation on a histogram and does not convey the same type of information that a pie chart would.
The type of dotplot from Cleveland is essentially a bar chart with a dot placed at the end of each bar, after which the bar is removed. So even with millions of data points, they would be tabulated the same way as for creating a pie chart, and then a single dot is plotted for each category. The summary prepared for the plot is the same for a pie chart and a dotplot; the difference is that in a pie chart you are trying to compare non-aligned angles or areas (and the temptation to add chartjunk or otherwise distort the perception of the values is much higher), while in the dotplot you are comparing points on an aligned scale.
If you want the viewer to be able to easily judge percentage of the whole then just make sure that the axis for the dot positions goes from 0 to the total count. You can also easily add another axis (or replace the main one) that shows the percentage rather than the counts, then the percentage can be read off that axis much more accurately than estimating angles and areas in pie charts.
Here are a couple of examples using R:
This is the type of dotplot that I think you are thinking of, and this would not replace a pie chart:
library(TeachingDemos)
dots(round( rnorm(100),0 ) )
But this is the type of dotplot being referred to in Cleveland as a replacement for pie charts:
# steal data from ?pie
pie.sales <- c(0.12, 0.3, 0.26, 0.16, 0.04, 0.12)
names(pie.sales) <- c("Blueberry", "Cherry",
"Apple", "Boston Cream", "Other", "Vanilla Cream")
par(mfrow=c(2,1))
dotchart(pie.sales*100)
# or
par(xaxs='i')
dotchart( pie.sales*100, xlim=c(0,100) )
|
Pie charts vs. dot plots
|
There are two different types of chart that that are referred to as 'dotplots' and I think that you are getting the two confused. The type of dotplot that it looks like you are thinking about is real
|
Pie charts vs. dot plots
There are two different types of chart that are referred to as 'dotplots', and I think you are getting the two confused. The type of dotplot that it looks like you are thinking about is really a variation on a histogram and does not convey the same type of information that a pie chart would.
The type of dotplot from Cleveland is essentially a bar chart with a dot placed at the end of each bar, after which the bar is removed. So even with millions of data points, they would be tabulated the same way as for creating a pie chart, and then a single dot is plotted for each category. The summary prepared for the plot is the same for a pie chart and a dotplot; the difference is that in a pie chart you are trying to compare non-aligned angles or areas (and the temptation to add chartjunk or otherwise distort the perception of the values is much higher), while in the dotplot you are comparing points on an aligned scale.
If you want the viewer to be able to easily judge percentage of the whole then just make sure that the axis for the dot positions goes from 0 to the total count. You can also easily add another axis (or replace the main one) that shows the percentage rather than the counts, then the percentage can be read off that axis much more accurately than estimating angles and areas in pie charts.
Here are a couple of examples using R:
This is the type of dotplot that I think you are thinking of, and this would not replace a pie chart:
library(TeachingDemos)
dots(round( rnorm(100),0 ) )
But this is the type of dotplot being referred to in Cleveland as a replacement for pie charts:
# steal data from ?pie
pie.sales <- c(0.12, 0.3, 0.26, 0.16, 0.04, 0.12)
names(pie.sales) <- c("Blueberry", "Cherry",
"Apple", "Boston Cream", "Other", "Vanilla Cream")
par(mfrow=c(2,1))
dotchart(pie.sales*100)
# or
par(xaxs='i')
dotchart( pie.sales*100, xlim=c(0,100) )
|
Pie charts vs. dot plots
There are two different types of chart that that are referred to as 'dotplots' and I think that you are getting the two confused. The type of dotplot that it looks like you are thinking about is real
|
52,187 |
Pie charts vs. dot plots
|
Greg Snow's response has covered much about dot plots. I'd just like to suggest an alternative with which you can compress the display one dimension further:
Sorry, the legend is missing, but the idea is pretty much here. Instead of displaying the four pieces of data on four horizontal lines, we can put them on one line with the cumulative percentage as the scale. This way, the differences between dots allow quantitative comparison just as usual dot plots do. In addition, it can overcome the difficulty of comparing multiple pie charts: if we need to show data from another entity, we can just add one more horizontal line to the illustration.
Reference code:
library(lattice)
perc <- c(100, 60, 30, 10)
setnum <- rep(1,4)
category <- c("A", "B", "C", "D")
dotplot(setnum ~ perc, groups=category, xlim=c(-5,105), ylab="", xlab="Cumulative %", pch=16)
|
Pie charts vs. dot plots
|
Greg Snow's response has covered much about dot plot. I'd just like to suggest an alternative which you can compress the dimension further:
Sorry the legend is missing but the idea is pretty much her
|
Pie charts vs. dot plots
Greg Snow's response has covered much about dot plots. I'd just like to suggest an alternative with which you can compress the display one dimension further:
Sorry, the legend is missing, but the idea is pretty much here. Instead of displaying the four pieces of data on four horizontal lines, we can put them on one line with the cumulative percentage as the scale. This way, the differences between dots allow quantitative comparison just as usual dot plots do. In addition, it can overcome the difficulty of comparing multiple pie charts: if we need to show data from another entity, we can just add one more horizontal line to the illustration.
Reference code:
library(lattice)
perc <- c(100, 60, 30, 10)
setnum <- rep(1,4)
category <- c("A", "B", "C", "D")
dotplot(setnum ~ perc, groups=category, xlim=c(-5,105), ylab="", xlab="Cumulative %", pch=16)
|
Pie charts vs. dot plots
Greg Snow's response has covered much about dot plot. I'd just like to suggest an alternative which you can compress the dimension further:
Sorry the legend is missing but the idea is pretty much her
|
52,188 |
The most common extracted features for image recognition
|
Different categories of image features come to mind:
Color features such as color histograms which could for instance be in RGB or HSV space
Other histogram approaches, e.g. histogram of oriented gradients (HOG)
Texture features such as Tamura's or Haralick's
SIFT and SURF features are popular as well
Luckily, libraries exist that provide access to many image features. Have a look at WND-CHARM; they claim to support ~3000 different image descriptors.
|
The most common extracted features for image recognition
|
Different categories of image features come to mind:
Color features such as color histograms which could for instance be in RGB or HSV space
Other histogram approaches, e.g. histogram of oriented gra
|
The most common extracted features for image recognition
Different categories of image features come to mind:
Color features such as color histograms which could for instance be in RGB or HSV space
Other histogram approaches, e.g. histogram of oriented gradients (HOG)
Texture features such as Tamura's or Haralick's
SIFT and SURF features are popular as well
Luckily, libraries exist that provide access to many image features. Have a look at WND-CHARM; they claim to support ~3000 different image descriptors.
|
The most common extracted features for image recognition
Different categories of image features come to mind:
Color features such as color histograms which could for instance be in RGB or HSV space
Other histogram approaches, e.g. histogram of oriented gra
|
52,189 |
The most common extracted features for image recognition
|
One standard approach is to use a restricted Boltzmann machine to do the feature extraction, and then reconsider the RBM as a neural network and finish the training using back-propagation. See, for example,
G. E. Hinton, "To Recognize Shapes, First Learn to Generate images," Progress in brain research, vol. 165, pp. 535-547, 2007.
This is an example of automated feature extraction. It sounds like you are also interested in human-directed feature extraction, for which I am looking forward to other people's answers…
|
The most common extracted features for image recognition
|
One standard approach is to use a restricted Boltzmann machine to do the feature extraction, and then reconsider the RBM as a neural network and finish the training using back-propagation. See, for e
|
The most common extracted features for image recognition
One standard approach is to use a restricted Boltzmann machine to do the feature extraction, and then reconsider the RBM as a neural network and finish the training using back-propagation. See, for example,
G. E. Hinton, "To Recognize Shapes, First Learn to Generate images," Progress in brain research, vol. 165, pp. 535-547, 2007.
This is an example of automated feature extraction. It sounds like you are also interested in human-directed feature extraction, for which I am looking forward to other people's answers…
|
The most common extracted features for image recognition
One standard approach is to use a restricted Boltzmann machine to do the feature extraction, and then reconsider the RBM as a neural network and finish the training using back-propagation. See, for e
|
52,190 |
How to visualize two bar charts with very different scales without looking redundant
|
In general, if you have two different measurements on each of a set of observations, and you think there may be a relationship between them, I think it's best to visualize them with a scatterplot. I don't know if you use R, but here is some simple code and a sample plot:
speed = c(2.2, 4.7, 7.3, 3.1)
weight = c(500, 222, 999, 1000)
windows()
plot(speed, weight)
This plot doesn't look very exciting, mainly because you only have 4 data points.
Another way to visualize data is to use a dotplot. This is an especially good way to look at data that represent simple magnitudes, which is what you would have if you were looking at only one of your variables. Note that this is the same thing a bar chart provides; it's just that dotplots have been shown to make it easier for people to extract the information. The question is, can you look at two variables at the same time in such a way that you could perhaps see relationships, but without redundancy?
One way to deal with this general problem is to plot two variables on the same plot (in this case, the same dotplot). This sort of thing is very commonly done with time series data in economics (here's one I found through Googling). The trick is to find a way to get two different scales on the same plot. This can be done by rescaling one of the variables in terms of the other; in addition, you must rescale the axis values of the other variable into the terms of the first. These 'rescalings' must be linear transformations so as not to change the data in a meaningful way. The following is some R code that does this in a way which is incredibly kluge-y, but that I hope will be easy to follow:
sM = mean(speed); wM = mean(weight)
sSD = sd(speed); wSD = sd(weight)
weightZ = (weight-wM) / wSD
convertedW = (weightZ*sSD) + sM
sTicks = c(0:8)
sTicksZ = (sTicks-sM) / sSD
convertedST = (sTicksZ*wSD) + wM
convertedST = round(convertedST)
sY = seq(from=1.1, to=4.1, by=1)
wY = seq(from=0.9, to=3.9, by=1)
windows()
plot(speed, sY, pch=1, col="red", axes=F, xlab="", ylab="", ylim=c(0.5, 4.5), xlim=c(0,8))
points(convertedW, wY, pch=2, col="blue")
abline(h=c(1:4), lty="dashed", col="lightgray")
box()
axis(side=2, at=c(1:4), labels=c("a","b","c","d"))
axis(side=3, at=sTicks, col="red")
axis(side=1, at=sTicks, labels=convertedST, col="blue")
mtext("Speed", side=3, line=2.5, cex=1.5, col="red")
mtext("Weight", side=1, line=2.5, cex=1.5, col="blue")
legend("bottomright", legend=c("Speed", "Weight"), pch=c(1,2), col=c("red","blue"))
With smaller amounts of data, as you have here, this may be more informative.
|
How to visualize two bar charts with very different scales without looking redundant
|
In general, if you have two different measurements on each of a set of observations, and you think there may be a relationship between them, I think it's best to visualize them with a scatterplot. I
|
How to visualize two bar charts with very different scales without looking redundant
In general, if you have two different measurements on each of a set of observations, and you think there may be a relationship between them, I think it's best to visualize them with a scatterplot. I don't know if you use R, but here is some simple code and a sample plot:
speed = c(2.2, 4.7, 7.3, 3.1)
weight = c(500, 222, 999, 1000)
windows()
plot(speed, weight)
This plot doesn't look very exciting, mainly because you only have 4 data points.
Another way to visualize data is to use a dotplot. This is an especially good way to look at data that represent simple magnitudes, which is what you would have if you were looking at only one of your variables. Note that this is the same thing a bar chart provides; it's just that dotplots have been shown to make it easier for people to extract the information. The question is, can you look at two variables at the same time in such a way that you could perhaps see relationships, but without redundancy?
One way to deal with this general problem is to plot two variables on the same plot (in this case, the same dotplot). This sort of thing is very commonly done with time series data in economics (here's one I found through Googling). The trick is to find a way to get two different scales on the same plot. This can be done by rescaling one of the variables in terms of the other; in addition, you must rescale the axis values of the other variable into the terms of the first. These 'rescalings' must be linear transformations so as not to change the data in a meaningful way. The following is some R code that does this in a way which is incredibly kluge-y, but that I hope will be easy to follow:
sM = mean(speed); wM = mean(weight)
sSD = sd(speed); wSD = sd(weight)
weightZ = (weight-wM) / wSD
convertedW = (weightZ*sSD) + sM
sTicks = c(0:8)
sTicksZ = (sTicks-sM) / sSD
convertedST = (sTicksZ*wSD) + wM
convertedST = round(convertedST)
sY = seq(from=1.1, to=4.1, by=1)
wY = seq(from=0.9, to=3.9, by=1)
windows()
plot(speed, sY, pch=1, col="red", axes=F, xlab="", ylab="", ylim=c(0.5, 4.5), xlim=c(0,8))
points(convertedW, wY, pch=2, col="blue")
abline(h=c(1:4), lty="dashed", col="lightgray")
box()
axis(side=2, at=c(1:4), labels=c("a","b","c","d"))
axis(side=3, at=sTicks, col="red")
axis(side=1, at=sTicks, labels=convertedST, col="blue")
mtext("Speed", side=3, line=2.5, cex=1.5, col="red")
mtext("Weight", side=1, line=2.5, cex=1.5, col="blue")
legend("bottomright", legend=c("Speed", "Weight"), pch=c(1,2), col=c("red","blue"))
With smaller amounts of data, as you have here, this may be more informative.
|
How to visualize two bar charts with very different scales without looking redundant
In general, if you have two different measurements on each of a set of observations, and you think there may be a relationship between them, I think it's best to visualize them with a scatterplot. I
|
52,191 |
How to visualize two bar charts with very different scales without looking redundant
|
It's OK to have two graphs share an axis.
But it's best to avoid one graph with two scales in the same dimension. There is too much potential for misreading (mainly assuming the alignment carries some significance). See the Stephen Few article Dual-Scaled Axes in Graphs: Are They Ever the Best Solution?.
|
How to visualize two bar charts with very different scales without looking redundant
|
It's OK to have two graphs share an axis.
But it's best to avoid one graph with two scales in the same dimension. There is too much potential for misreading (mainly assuming the alignment carries som
|
How to visualize two bar charts with very different scales without looking redundant
It's OK to have two graphs share an axis.
But it's best to avoid one graph with two scales in the same dimension. There is too much potential for misreading (mainly assuming the alignment carries some significance). See the Stephen Few article Dual-Scaled Axes in Graphs: Are They Ever the Best Solution?.
|
How to visualize two bar charts with very different scales without looking redundant
It's OK to have two graphs share an axis.
But it's best to avoid one graph with two scales in the same dimension. There is too much potential for misreading (mainly assuming the alignment carries som
|
52,192 |
How to visualize two bar charts with very different scales without looking redundant
|
You could also transform absolute scales into relative by using z-transformation (or any other that you think is more suitable).
speed = c(2.2, 4.7, 7.3, 3.1)
weight = c(500, 222, 999, 1000)
speed=scale(speed)
weight=scale(weight)
rng=extendrange(range(c(speed,weight)))
plot(speed, type="b", col="red", ylim=rng,ylab="z-values",xlab="",xaxt="n",bty="n")
points(weight, type="b",col="blue")
legend("topleft",legend=c("speed","weight"),col=c("red","blue"),lty=1, bty="n",pch=1)
lbls=1:length(speed)
axis(1, at=lbls, labels=lbls)
The nice feature of this approach is that it can be used on more than two scales. And although it is not as informative on the values of individual points, it makes scales more comparable.
|
How to visualize two bar charts with very different scales without looking redundant
|
You could also transform absolute scales into relative by using z-transformation (or any other that you think is more suitable).
speed = c(2.2, 4.7, 7.3, 3.1)
weight = c(500, 222, 999, 1000)
speed=s
|
How to visualize two bar charts with very different scales without looking redundant
You could also transform the absolute scales into relative ones by using a z-transformation (or any other transformation that you think is more suitable).
speed = c(2.2, 4.7, 7.3, 3.1)
weight = c(500, 222, 999, 1000)
speed=scale(speed)
weight=scale(weight)
rng=extendrange(range(c(speed,weight)))
plot(speed, type="b", col="red", ylim=rng,ylab="z-values",xlab="",xaxt="n",bty="n")
points(weight, type="b",col="blue")
legend("topleft",legend=c("speed","weight"),col=c("red","blue"),lty=1, bty="n",pch=1)
lbls = seq_along(speed)   # tick positions/labels; lbls was undefined in the original
axis(1, at=lbls, labels=lbls)
The nice feature of this approach is that it can be used with more than two scales. Although it is less informative about the values of individual points, it makes the scales more comparable.
|
How to visualize two bar charts with very different scales without looking redundant
You could also transform absolute scales into relative by using z-transformation (or any other that you think is more suitable).
speed = c(2.2, 4.7, 7.3, 3.1)
weight = c(500, 222, 999, 1000)
speed=s
|
52,193 |
How to visualize two bar charts with very different scales without looking redundant
|
Given that your data sets differ by orders of magnitude, you might want to use a logarithmic scale for your x-axis (or take the log of all values before plotting). That way you can still see the variation within the same order of magnitude relatively clearly, while the empty space between the sets is condensed.
Example in R:
speed = c(2.2, 4.7, 7.3, 3.1)
weight = c(500, 222, 999, 1000)
barplot(rbind(speed, weight), log='x', beside=T, horiz=T,
legend.text=T, args.legend=list(x='right'))
|
How to visualize two bar charts with very different scales without looking redundant
|
Given that your data sets are different by orders of magnitude you might want to use logarithmic scale for your x-axis (or take the log of all samples before plotting.) That way the you can still see
|
How to visualize two bar charts with very different scales without looking redundant
Given that your data sets differ by orders of magnitude, you might want to use a logarithmic scale for your x-axis (or take the log of all values before plotting). That way you can still see the variation within the same order of magnitude relatively clearly, while the empty space between the sets is condensed.
Example in R:
speed = c(2.2, 4.7, 7.3, 3.1)
weight = c(500, 222, 999, 1000)
barplot(rbind(speed, weight), log='x', beside=T, horiz=T,
legend.text=T, args.legend=list(x='right'))
|
How to visualize two bar charts with very different scales without looking redundant
Given that your data sets are different by orders of magnitude you might want to use logarithmic scale for your x-axis (or take the log of all samples before plotting.) That way the you can still see
|
52,194 |
Plotting a logistic GAM model in R - why is the scale not 0-1?
|
The individual plots are on the scale of the linear predictor, i.e. a scale that is -Inf to +Inf. The inverse of the link function is used to map from this scale to the 0, ..., 1 scale of the response. Further note that each smooth is subject to centring constraints and so is centred about 0.
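For instance, with a binomial GAM the inverse link is the inverse logit, so (a small sketch, with fit standing in for a hypothetical mgcv::gam object):
eta = c(-3, 0, 3)        # values on the linear-predictor (log-odds) scale
plogis(eta)              # mapped to the response scale: approx. 0.047, 0.500, 0.953
# plot.gam can apply the back-transformation for you:
# plot(fit, trans = plogis)    # smooths displayed on the 0-1 scale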
|
Plotting a logistic GAM model in R - why is the scale not 0-1?
|
The individual plots are on the scale of the linear predictor, i.e. a scale that is -Inf to +Inf. The inverse of the link function is used to map from this scale to the 0, ..., 1 scale of the response
|
Plotting a logistic GAM model in R - why is the scale not 0-1?
The individual plots are on the scale of the linear predictor, i.e. a scale that is -Inf to +Inf. The inverse of the link function is used to map from this scale to the 0, ..., 1 scale of the response. Further note that each smooth is subject to centring constraints and so is centred about 0.
|
Plotting a logistic GAM model in R - why is the scale not 0-1?
The individual plots are on the scale of the linear predictor, i.e. a scale that is -Inf to +Inf. The inverse of the link function is used to map from this scale to the 0, ..., 1 scale of the response
|
52,195 |
Plotting a logistic GAM model in R - why is the scale not 0-1?
|
What is being plotted is the log-odds, i.e. log(p/(1-p)). That's the scale on which logistic regression operates. You can convert the values using the logistic distribution via the qlogis and plogis functions.
I don't know which GAM functions you're using, but often there are options to get the p-values out.
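A quick illustration of the conversion in R:
p = 0.8
qlogis(p)            # log-odds: log(0.8/0.2), approx. 1.386
plogis(qlogis(p))    # back to the probability 0.8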
|
Plotting a logistic GAM model in R - why is the scale not 0-1?
|
What is being plotted is the log-odds. It's log(p/(1-p)). That's the space of the logistic regression. You can convert the values using the logistic distribution and the qlogis and plogis functions
|
Plotting a logistic GAM model in R - why is the scale not 0-1?
What is being plotted is the log-odds, i.e. log(p/(1-p)). That's the scale on which logistic regression operates. You can convert the values using the logistic distribution via the qlogis and plogis functions.
I don't know which GAM functions you're using, but often there are options to get the p-values out.
|
Plotting a logistic GAM model in R - why is the scale not 0-1?
What is being plotted is the log-odds. It's log(p/(1-p)). That's the space of the logistic regression. You can convert the values using the logistic distribution and the qlogis and plogis functions
|
52,196 |
Plotting a logistic GAM model in R - why is the scale not 0-1?
|
The other answers already provide a good explanation, but I wanted to add a worked example to show the differences in case somebody discovers this question down the road. I have fit a model using the biopsy data in the MASS package in R. The only adaptation I made to the data was converting the outcome variable's labels ("benign"/"malignant") to a binary 0/1 value.
#### Load Libraries ####
library(mgcv)
library(dplyr)
#### Save Data From MASS Package ####
biopsy <- MASS::biopsy %>%
as_tibble() %>%
mutate(class = ifelse(class=="benign",0,1))
biopsy
I then fit a binomial (logistic) GAM with 6 predictors, using cubic regression spline smooths and REML smoothness selection.
#### Fit Model ####
fit <- gam(
class
~ s(V1, bs = "cr")
+ s(V2, bs = "cr")
+ s(V3, bs = "cr")
+ s(V4, bs = "cr")
+ s(V5, bs = "cr")
+ s(V6, bs = "cr"),
data = biopsy,
method = "REML",
family = binomial
)
Finally, I plot the model on the predicted-probability scale. The first argument of the plot call is the fitted model, trans = plogis back-transforms the smooths from log-odds to probabilities, pages = 1 puts all the panels on one page, and seWithMean = TRUE draws confidence bands that include the uncertainty about the overall mean.
#### Plot Model by Predicted Probability ####
plot(fit,
trans=plogis,
pages=1,
seWithMean = T)
Your plot should look like this:
Here you can see the values are now bounded between 0 and 1, with each value directly interpretable as a predicted probability for the "class" outcome. As an example, the plot shows that a zero value on V1 (tumor clump thickness) corresponds to a predicted probability of a malignant tumor of around 20%, whereas the probability of a malignant tumor rises to near 100% at the maximum value of V1.
However, because each smooth is centred, these probabilities are expressed around a baseline of 0.5 (i.e. 0 on the log-odds scale). Shifting by the model intercept lets you read the curves as predicted probabilities with the other smooth terms held at their average (zero) contribution. This can be achieved with the following code:
#### Include Intercept ####
plot(fit,
trans=plogis,
pages=1,
seWithMean = T,
shift = coef(fit)[1])
Giving you this plot:
By contrast, simply removing the trans=plogis argument plots on the log-odds scale, which is likely what you saw when you plotted the model.
#### Removing Trans=Plogis Argument ####
plot(fit,
pages=1,
seWithMean = T)
Which looks like this:
Hopefully the difference between the two plotting methods is clearer now.
|
Plotting a logistic GAM model in R - why is the scale not 0-1?
|
The other answers already provide a good enough explanation, but I wanted to provide a worked example to show the differences in case somebody discovers this question down the road. I have fit a model
|
Plotting a logistic GAM model in R - why is the scale not 0-1?
The other answers already provide a good explanation, but I wanted to add a worked example to show the differences in case somebody discovers this question down the road. I have fit a model using the biopsy data in the MASS package in R. The only adaptation I made to the data was converting the outcome variable's labels ("benign"/"malignant") to a binary 0/1 value.
#### Load Libraries ####
library(mgcv)
library(dplyr)
#### Save Data From MASS Package ####
biopsy <- MASS::biopsy %>%
as_tibble() %>%
mutate(class = ifelse(class=="benign",0,1))
biopsy
I then fit a binomial (logistic) GAM with 6 predictors, using cubic regression spline smooths and REML smoothness selection.
#### Fit Model ####
fit <- gam(
class
~ s(V1, bs = "cr")
+ s(V2, bs = "cr")
+ s(V3, bs = "cr")
+ s(V4, bs = "cr")
+ s(V5, bs = "cr")
+ s(V6, bs = "cr"),
data = biopsy,
method = "REML",
family = binomial
)
Finally, I plot the model on the predicted-probability scale. The first argument of the plot call is the fitted model, trans = plogis back-transforms the smooths from log-odds to probabilities, pages = 1 puts all the panels on one page, and seWithMean = TRUE draws confidence bands that include the uncertainty about the overall mean.
#### Plot Model by Predicted Probability ####
plot(fit,
trans=plogis,
pages=1,
seWithMean = T)
Your plot should look like this:
Here you can see the values are now bounded between 0 and 1, with each value directly interpretable as a predicted probability for the "class" outcome. As an example, the plot shows that a zero value on V1 (tumor clump thickness) corresponds to a predicted probability of a malignant tumor of around 20%, whereas the probability of a malignant tumor rises to near 100% at the maximum value of V1.
However, because each smooth is centred, these probabilities are expressed around a baseline of 0.5 (i.e. 0 on the log-odds scale). Shifting by the model intercept lets you read the curves as predicted probabilities with the other smooth terms held at their average (zero) contribution. This can be achieved with the following code:
#### Include Intercept ####
plot(fit,
trans=plogis,
pages=1,
seWithMean = T,
shift = coef(fit)[1])
Giving you this plot:
By contrast, simply removing the trans=plogis argument plots on the log-odds scale, which is likely what you saw when you plotted the model.
#### Removing Trans=Plogis Argument ####
plot(fit,
pages=1,
seWithMean = T)
Which looks like this:
Hopefully the difference between the two plotting methods is clearer now.
|
Plotting a logistic GAM model in R - why is the scale not 0-1?
The other answers already provide a good enough explanation, but I wanted to provide a worked example to show the differences in case somebody discovers this question down the road. I have fit a model
|
52,197 |
If there was a certification exam for statisticians, what would be the syllabus?
|
The Royal Statistical Society offers three levels of professional exam in statistics. Their documentation includes the syllabuses (pdf) and a series of reading lists. Far too much material to summarise here!
|
If there was a certification exam for statisticians, what would be the syllabus?
|
The Royal Statistical Society offers three levels of professional exam in statistics. Their documentation includes the syllabuses (pdf) and a series of reading lists. Far too much material to summaris
|
If there was a certification exam for statisticians, what would be the syllabus?
The Royal Statistical Society offers three levels of professional exam in statistics. Their documentation includes the syllabuses (pdf) and a series of reading lists. Far too much material to summarise here!
|
If there was a certification exam for statisticians, what would be the syllabus?
The Royal Statistical Society offers three levels of professional exam in statistics. Their documentation includes the syllabuses (pdf) and a series of reading lists. Far too much material to summaris
|
52,198 |
If there was a certification exam for statisticians, what would be the syllabus?
|
The American Statistical Association, the Royal Statistical Society, and the Australian and Canadian societies all offer certification. In the case of the ASA (where I hold certification) and the RSS (where I have an application pending), there is no required exam; they look for information from your CV and references.
|
If there was a certification exam for statisticians, what would be the syllabus?
|
The American Statistical Association, The Royal Statistical Society, and the australian and Canadian societies all have certification. In the case of ASA where I have certification and RSS where I ha
|
If there was a certification exam for statisticians, what would be the syllabus?
The American Statistical Association, the Royal Statistical Society, and the Australian and Canadian societies all offer certification. In the case of the ASA (where I hold certification) and the RSS (where I have an application pending), there is no required exam; they look for information from your CV and references.
|
If there was a certification exam for statisticians, what would be the syllabus?
The American Statistical Association, The Royal Statistical Society, and the australian and Canadian societies all have certification. In the case of ASA where I have certification and RSS where I ha
|
52,199 |
Best method to visualize large interaction between two factors
|
If you are interested in visualizing an interaction effect specifically, you can subtract the main effects (i.e., the marginal factor means $\bar x_i$ and $\bar x_j$) from each treatment mean $\bar x_{ij}$ (combination of factor levels, indexed by $i$ and $j$) based on the relation
$$\gamma_{ij} = \bar x_{ij} - \bar x_i - \bar x_j + \bar x$$
This will yield $i$ (or $j$) curves where every value is expressed as a deviation from a baseline, which is simply the grand mean ($\bar x$). This idea is developed in Howell, Statistical Methods for Psychology. Below is an illustration with one of Howell's datasets (a study on the number of words recalled as a function of subjects' age and recall condition, N=100).
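A minimal R sketch of computing these deviations from a table of balanced cell means (the numbers and labels below are made up for illustration, not Howell's data):
cell = rbind(young = c(cond1 = 7.0, cond2 = 6.9, cond3 = 11.0),
             old   = c(cond1 = 6.5, cond2 = 7.6, cond3 = 17.6))   # xbar_ij
rowm  = rowMeans(cell)                                 # xbar_i
colm  = colMeans(cell)                                 # xbar_j
grand = mean(cell)                                     # xbar
gamma = sweep(sweep(cell, 1, rowm), 2, colm) + grand   # gamma_ij
matplot(t(gamma), type="b", pch=1, lty=1, xaxt="n",
        xlab="", ylab="deviation from additivity")
axis(1, at=1:ncol(cell), labels=colnames(cell))
Values of gamma near zero indicate little interaction; curves that diverge from zero show where the interaction lies.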
|
Best method to visualize large interaction between two factors
|
If you are interested in visualizing an interaction effect specifically, you can subtract main effects (i.e., average factor effect, say $x_i$ and $x_j$) from each treatment mean (combination of facto
|
Best method to visualize large interaction between two factors
If you are interested in visualizing an interaction effect specifically, you can subtract the main effects (i.e., the marginal factor means $\bar x_i$ and $\bar x_j$) from each treatment mean $\bar x_{ij}$ (combination of factor levels, indexed by $i$ and $j$) based on the relation
$$\gamma_{ij} = \bar x_{ij} - \bar x_i - \bar x_j + \bar x$$
This will yield $i$ (or $j$) curves where every value is expressed as a deviation from a baseline, which is simply the grand mean ($\bar x$). This idea is developed in Howell, Statistical Methods for Psychology. Below is an illustration with one of Howell's datasets (a study on the number of words recalled as a function of subjects' age and recall condition, N=100).
|
Best method to visualize large interaction between two factors
If you are interested in visualizing an interaction effect specifically, you can subtract main effects (i.e., average factor effect, say $x_i$ and $x_j$) from each treatment mean (combination of facto
|
52,200 |
Best method to visualize large interaction between two factors
|
In R, the function coplot plots the outcome against one predictor in a grid of panels, one panel for each level (or range) of the conditioning factor. By examining how the trend changes across the panels, you can assess the extent of interaction present in the data.
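For example, with the built-in warpbreaks data (not the poster's data), conditioning on wool type:
coplot(breaks ~ as.numeric(tension) | wool, data = warpbreaks, panel = panel.smooth)
If the trend of breaks against tension looks different in the panel for wool A than in the panel for wool B, that is visual evidence of a wool-by-tension interaction.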
|
Best method to visualize large interaction between two factors
|
In R, there is a function coplot which plots your main effect and outcome in a grid of domains for each conditioning factor in a scatter-plot matrix. By examining the change between trends in the pane
|
Best method to visualize large interaction between two factors
In R, the function coplot plots the outcome against one predictor in a grid of panels, one panel for each level (or range) of the conditioning factor. By examining how the trend changes across the panels, you can assess the extent of interaction present in the data.
|
Best method to visualize large interaction between two factors
In R, there is a function coplot which plots your main effect and outcome in a grid of domains for each conditioning factor in a scatter-plot matrix. By examining the change between trends in the pane
|